Article

Learning with Peers in Higher Education: Exploring Strengths and Weaknesses of Formative Assessment

by Davide Parmigiani 1,*, Elisabetta Nicchia 1, Myrna Pario 1, Emiliana Murgia 1, Slaviša Radović 1 and Marcea Ingersoll 2

1 Department of Education, University of Genoa, 16128 Genoa, Italy
2 School of Education, St. Thomas University, Fredericton, NB E3B 5G3, Canada
* Author to whom correspondence should be addressed.
Trends High. Educ. 2025, 4(3), 48; https://doi.org/10.3390/higheredu4030048
Submission received: 16 July 2025 / Revised: 5 August 2025 / Accepted: 2 September 2025 / Published: 4 September 2025

Abstract

Implementing formative assessment strategies represents a challenge for higher education institutions, where such strategies are frequently adopted only to support summative assessment and final grading. This study therefore aims to investigate the most effective formative assessment strategies for higher education. It emphasizes the features of peer- and group-assessment, underlining the strengths and weaknesses of both formative assessment strategies. Additionally, this study investigates the relationship between the metacognitive and evaluative aspects of formative assessment in supporting students’ learning processes and highlights the connection between formative and summative approaches. In the academic year 2023–2024, 240 higher education students were involved in a four-stage mixed-method study, alternating peer- and group-assessment strategies split into two steps focused, respectively, on metacognitive and evaluative aspects. Qualitative and quantitative data were collected after each stage. The findings revealed that students preferred group-assessment and that the metacognitive formative assessment helped them improve their learning and prepare for the final test with summative assessment. Regarding policy implications, on the basis of this study, higher education institutions should improve instructor capacity to integrate formative assessment activities into their courses.

1. Introduction

Early studies about the use and efficacy of formative assessment focused on primary and secondary schools. Black and Wiliam [1] emphasized that formative assessment should support students’ self-regulated learning, as also underlined by Perrenoud [2], Allal and Lopez [3], Andrade and Brookhart [4], and Panadero et al. [5]. Recently, the European Commission (2022) [6] reinforced how effective assessment strategies can support pupils’ learning processes. In particular, assessment procedures in primary and secondary schools should combine the strengths of both formative and summative aspects in order to help pupils develop self-regulation and manage their own learning strategies (European Commission et al., 2023) [7].
Higher education institutions began integrating formative assessment strategies in the last two decades of the twentieth century (Boud, 1986; Murray, 1984; Sadler, 1983, 1989) [8,9,10,11]. Since then, research has progressively focused on the integration of theory and practice in formative assessment [12]. More recently, several studies have focused on approaches to incorporating formative assessment in higher education courses through the application of different strategies [13,14,15,16,17,18].
In the Italian context, Doria et al. [19] stated that formative assessment continues to represent a challenge for Italian higher education institutions because summative assessment and final exams are the predominant strategies used to determine student academic achievement. They also noted that formative assessment during coursework remains underused and emphasized the need for greater awareness of diverse assessment strategies in Italian higher education.
This paper represents the continuation of an exploratory study in which the authors examined the benefits and limitations of formative assessment strategies in higher education contexts [20]. Based on previous findings, this study incorporated peer- and group-assessment strategies split into two steps: metacognitive and evaluative, respectively. The metacognitive activity focused solely on learning processes, and no grades were assigned, as this first phase was aimed at supporting students’ learning. Through peer- and group-assessment strategies, students could reflect on the strengths and weaknesses of their work on the basis of the feedback they received from their peers. During the evaluative step, both peer- and group-assessment allowed students to receive several rounds of feedback, followed by a formal grade given by the instructor, as Baker [21] and Gladovic et al. [22] suggested. Our study focuses on two main points. Firstly, we analyzed both peer- and group-assessment strategies, underlining the strengths and weaknesses of each. Secondly, we investigated whether students’ learning processes were positively affected by formative assessment strategies during the metacognitive and evaluative steps. More specifically, we explored whether peer- and group-assessment supported crucial aspects of students’ learning processes: feedback quality, stress levels, resilience skills, and metacognitive issues.

2. Literature Review

Assessment is a multidimensional concept whose dimensions serve complementary purposes: assessment for learning, assessment as learning, and assessment of learning.
Assessment for learning is a formative assessment procedure that allows teachers to identify and incorporate teaching strategies that support dialogue among students and teachers in order to enhance and inform ongoing learning [23]. The second dimension, assessment as learning, actively involves students in the metacognitive aspects of the formative assessment process. During assessment as learning activities, students engage in self- and peer-assessment methods to actively monitor their learning processes, develop self-regulation, and self-direct their learning [24]. The third dimension, assessment of learning, is generally used to indicate a student’s performance on assessments connected to outcomes and standards and to measure student achievement at the end of a formal learning activity [25]. Assessment of learning is also sometimes referred to as summative assessment.
Within theories of assessment as learning, Carney et al. [26] report that formative assessment can support students’ reflection capability [27] and their ability to modify their own learning strategies [28]. Assessment as learning represents a crucial aspect in the development of students’ metacognitive skills, as students reflect on and revise their learning processes [29,30].
Baker [21] emphasizes the connection between developmental and evaluative approaches, including their value in providing feedback about student performance. The first approach, developmental, supports and enhances the quality of participants’ learning processes, whilst the second, evaluative, focuses on outcomes and accountability. Druskat and Wolff [31] suggest combining the strengths of both approaches as they reflect the metacognitive and summative aspects of assessment procedures [32,33]. Specifically, the metacognitive aspects are mainly developed during assessment as learning strategies and the summative or evaluative facets are found in the assessment of learning strategies. The challenge for higher education institutions is represented by the implementation of assessment strategies that support both metacognitive and evaluative aspects to, on the one hand, enhance students’ learning processes and, on the other, describe their learning outcomes.
Wininger [34] and Normann et al. [35] stated that assessment strategies should integrate both assessment as learning and assessment of learning aspects to ensure meaningful relationships between evaluative and metacognitive issues. This work highlights how formative assessment strategies should be arranged in close connection with summative assessment. In this way, educators can view students’ learning broadly, including both processes and outcomes. In this study, we address how to establish a connection between the characteristics of metacognitive skills and the formal measurement of learning outcomes [33].
After defining the role of formative assessment and its intersections with the metacognitive and evaluative facets of assessment procedures, it is necessary to identify suitable formative assessment strategies for higher education. Parmigiani et al. [20] categorized three main formative strategies: self-, peer-, and group-assessment. Self-assessment is a self-reflection activity that can be seen as a learning practice within a pedagogical perspective [36]. From a technical point of view, self-assessment is a strategy in which students have the opportunity to reflect on their own learning processes through feedback arising from various sources, so that they can evaluate their own performance on the basis of selected criteria [37]. Regarding peer-assessment, this educational practice can be defined as an activity during which students or groups of students analyze and assess the tasks elaborated by other students, their peers [38]. Topping [39] had already specified that peer-assessment represents a strategy aimed at enabling individuals to assess the level and quality of the learning outcomes of peers. Fleckney et al. [40] listed a series of advantages of peer-assessment: it can provide benefits for both assessor and assessee, as stated by Double et al. [41], Huisman et al. [42], and Nicol et al. [43]; it develops students’ higher-order cognitive, problem-solving, and metacognitive thinking skills, as indicated by Armengol-Asparó et al. [44] and Lerchenfeldt et al. [45]; and it supports interpersonal and social skills, including communication and teamwork, as underlined by Donia et al. [46], Reinholz [47], and Zheng et al. [48]. Finally, group-assessment represents a development of peer-assessment. Group-assessment is aimed at reducing the limitations of peer-assessment at the individual level, emphasizing the group judgement of artefacts or performances of other groups [49]. Group-assessment is a form of peer-assisted learning during which students, working in small groups, can coach and support each other and other groups. In this way, all group members can reflect on their outcomes, artifacts, and academic performance and improve them [50].
Even if group- and peer-assessment are both based on feedback given by peers, in this study group-assessment differs from peer-assessment in the following ways. Peer-assessment was always carried out in one-to-one pairs, so each member of the pair had the opportunity to give and receive feedback. In contrast, during group-assessment, each group member presented an educational activity, and the group then gave oral feedback on the strengths and weaknesses of the presentation. The advantages and limitations of the two strategies were complementary. On the one hand, peer-assessment allowed students to build a relationship in pairs, exchanging ideas and opinions about a task, but the feedback came from only one peer. On the other hand, group-assessment allowed students to develop a professional relationship, simulating a group of colleagues, but the situation could potentially be more stressful.
Findings of the exploratory study carried out by Parmigiani et al. [20] revealed that students appreciated peer- and group-assessment practices because they “allowed students to reflect deeply on their own learning processes, create feedback and give them the opportunity to improve and modify their learning strategies” [20], p. 11. While it is important to underline that self-assessment remains an essential dimension of making internal feedback explicit [51], for the current study we decided to focus our attention on the peer- and group-assessment strategies valued by our previous participants.
In the current study, we posit that peer- and group-assessment represent an opportunity to apply formative assessment in many educational and academic fields [52,53,54]. In particular, to investigate the strengths and weaknesses of several aspects of formative assessment in higher education, we derive our categories and analysis from extant studies. We investigated how to manage and organize formative assessment practices [55]; how to face stress and anxiety levels [56]; how to emphasize feedback typologies, specifically when the feedback is specific, constructive/actionable, balanced, and respectful/sensitive [57,58,59]; how to decide if the feedback should be either open or blind [60]; when it is necessary to provide feedback by the teacher [17,61]; and how to support the students’ resilience skills [62].
Instructors often complain that formative assessment takes too much time (Duhart, 2018), yet it is essential to understand how to create and integrate balanced formative assessment strategies in higher education institutions [63]. The challenge is to design formative assessment strategies that can be easily applied in everyday higher education contexts.

3. Research Design

3.1. Context of the Study

This study was conducted at the Department of Education of the University of Genoa, Italy. The department offers courses for pre-service teachers, social workers, early childhood educators, and heads of social services. The teacher education program is a 5-year degree aimed at educating both kindergarten and primary teachers. Social workers and early childhood educators complete a 3-year bachelor’s degree that prepares professionals to be recruited in educational contexts outside schools, such as educational services for early childhood (children aged 0 to 3), communities for minors with family difficulties, educational services for migrants, and counseling centers. Finally, heads of social services complete a 2-year master’s degree. Their main employment includes a number of education-related positions: designers of educational courses in adult education; designers of national/international educational projects carried out by private and public bodies/centers; and coordinators of private and public social and educational services focused on different sectors such as early childhood, minors, and migrants. Given the educational focus of these higher education courses, the study was enacted in each of these professional degree programs.

3.2. Aims and Research Questions

The overall purpose of this study was to investigate the role of formative assessment strategies carried out in university programs for pre-service teachers, social workers, and heads of social services. In particular, we wanted to explore the strengths and weaknesses of peer- and group-assessment strategies. Additionally, we investigated the efficacy of formative assessment strategies focused on both metacognitive and evaluative aspects. Consequently, this study examined how peer- and group-assessment can affect students’ learning processes when instructors design and implement formative assessment methods with explicit metacognitive or evaluative purposes. The overall research question can be expressed as follows:
(RQ) To what extent can peer- and group-assessment help higher education students in enhancing their metacognitive and evaluative competencies as future professionals in educational fields?
Furthermore, we identified specific sub-research questions:
(RQ1) How does the formative assessment strategy (peer vs. group) affect students’ perceptions of organizational quality, feedback quality, learning impact, and satisfaction with the process?
(RQ2) How does the focus of formative assessment strategy (metacognitive vs. evaluative) affect students’ perceptions of organizational quality, feedback quality, learning impact, and satisfaction with the process?
To answer these research questions, a mixed-method design was chosen. In particular, we adopted a multistrand mixed-method design, since the study comprised several stages, as suggested by Tashakkori and Teddlie [64]. The timing of data collection was concurrent, since we gathered both the quantitative and qualitative strands during each phase of the study, as indicated by Creswell and Clark [65]. Concerning the interpretation of the findings, we triangulated the quantitative and qualitative data [66], since both kinds of data had the same priority in reaching a deep understanding of the phenomenon.

4. Participants, Procedure, and Instruments

4.1. Participants

We contacted 323 students attending courses for pre-service teachers, social workers, early childhood educators, and heads of social services offered by the University of Genoa (Italy). We explained to them the aims, activities, and procedures of the study. On a voluntary basis, 240 students agreed to participate. Specifically, they were split into four main groups: pre-service teachers (35%), social workers (35%), early childhood educators (17.08%), and heads of social services (12.92%). Table 1 shows the socio-demographic characteristics of the participants. Almost all participants were female. Two-thirds of the participants were between 19 and 22 years old, which means that the majority of students were in the 1st or 2nd year of their bachelor’s degree. Some students had work experience in schools or other educational contexts. Notably, 45% of participants had no work experience, almost one-third had few work experiences (some days/weeks), and only one-fourth had many (some months/1 year) or full-time (2 or more years) work experiences.

4.2. Procedure

The procedure was divided into two main phases carried out during specific educational courses common to the students’ academic programs. Each phase was split into two sub-phases. During the courses, all students participated in two formative assessment activities.
The first phase consisted of a peer-assessment activity with two sub-phases. The first sub-phase was a metacognitive-focused peer-assessment activity. One week before a mid-term written test, each student had to write an answer to an open-ended question, selected freely from a list of questions prepared by the instructor. Then, all students were randomly paired and asked to peer-assess their partner’s response to the open-ended question. Feedback during the peer-assessment activity was given orally, face-to-face, and students were asked to openly comment on each other’s answers by identifying potential mistakes and highlighting possible improvements. Table 2 shows the key questions, provided by the teacher, to support the feedback discussion.
The second sub-phase of the peer-assessment activity was evaluative-focused. After the mid-term written test, all students were randomly paired and asked to peer-assess their partner’s responses. In this sub-phase, the feedback was again oral and open, but students had to follow a teacher-prepared rubric that indicated specific levels of achievement. After the peer-assessment activity, the teacher gave a formal grade to each student, completely independent of the assessment given by the peers.
The second phase consisted of a group-assessment activity with two sub-phases similar to those carried out for the peer-assessment. The students, in groups of 5 members, worked on and presented an educational activity, selected from a list of situations prepared by the teacher. During the metacognitive step, the groups had to give oral feedback on the strengths and weaknesses of their fellow group members’ presentations, based on a rubric designed by the teacher (see the rubric’s dimensions listed in Table 2). After the group-assessment activity, the teacher gave a formal grade to each student, completely independent of the assessment given by the peers.

4.3. Instrument

The research procedure provided for two data collections. After each phase, the students filled in an online questionnaire composed of open- and closed-ended questions focused on the research questions. The questionnaire was composed of three sections. The first section collected the demographic characteristics of the participants. The second section included eight scales and two sub-scales. Table 3 indicates the scales/sub-scales, the number of items and topics included in each scale, and the references that inspired the questions. The items were rated by the students on a four-point Likert scale, from 1 (I strongly disagree) to 4 (I strongly agree). Additionally, in the third section, the participants were able to add open-ended qualitative comments. The items and the open-ended questions were focused on the metacognitive and evaluative sub-phases.
The principles of research ethics were strictly followed. The research procedure was approved by the ethical committee of the University of Genoa. All participants were informed about the aims, activities, and procedures for the study. Participation was optional, and those who agreed to be involved gave online informed consent.

5. Data Analysis and Findings

The data analyses covered both quantitative and qualitative data. The qualitative data were coded with NVivo 14 on the basis of the three coding stages suggested by grounded theory: open coding, axial coding, and selective coding [67,68]. The quantitative data were analyzed with SPSS 29, and we performed the following analyses: reliability, frequencies, paired-sample t-tests, ANOVA for repeated measures, and exploratory factor analysis (EFA) to highlight potential statistically significant differences.

5.1. Quantitative Analysis with Specific Findings

First of all, we checked the instrument’s reliability by calculating the following coefficients: Cronbach’s alpha (α), McDonald’s omega (ω), and the average inter-item correlation. Table 4 summarizes the results.
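For readers who wish to replicate these checks outside SPSS, the sketch below shows one plausible way to compute the three coefficients in Python. The DataFrame layout is an assumption (one column per questionnaire item, one row per respondent), and the omega function uses the standard one-factor omega-total formula rather than any SPSS-specific routine.

```python
# Illustrative reliability computations for a single scale; the DataFrame
# layout (one column per item, one row per student) is hypothetical.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def mcdonald_omega(items: pd.DataFrame) -> float:
    """Omega total from a one-factor model on standardized items:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    z = (items - items.mean()) / items.std(ddof=1)
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(z)
    lam = fa.loadings_.ravel()
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_inter_item_correlation(items: pd.DataFrame) -> float:
    """Mean of the off-diagonal entries of the item correlation matrix."""
    corr = items.corr().to_numpy()
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diagonal.mean()
```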

5.1.1. Scales and Sub-Scales Frequencies and Differences

Frequencies were calculated for all scales and sub-scales to provide an overview of the answer distribution. To calculate the frequencies, we added up the answers categorized as, on the one hand, “I strongly agree” and “I agree” and, on the other, “I disagree” and “I strongly disagree”. Consequently, we observed that more than 90% of students rated both the metacognitive and evaluative steps of the group-assessment positively, whilst around 80% of students valued the peer-assessment tasks positively. ANOVA for repeated measures underscored that group-assessment was appreciated more than peer-assessment (MD = 0.195, p < 0.001). In particular, the time dedicated to the formative assessment was highly appreciated (MD = 0.202, p < 0.001), especially during the peer-assessment (MD = 0.325, p < 0.001).
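As a minimal illustration of this collapsing rule (the item names and ratings below are hypothetical, not the study’s data), the same computation can be expressed in a few lines of Python:

```python
import pandas as pd

# Hypothetical responses on the four-point scale
# (1 = strongly disagree ... 4 = strongly agree), one column per item.
responses = pd.DataFrame({
    "group_metacognitive": [4, 3, 3, 4, 2, 4, 3, 3],
    "peer_metacognitive":  [3, 2, 4, 3, 2, 3, 4, 2],
})

# Collapse the four-point scale into positive (agree + strongly agree)
# vs. negative ratings and report the percentage of positive ratings.
positive_pct = (responses >= 3).mean() * 100
print(positive_pct.round(1))
```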
Regarding emotional issues, 95% of students rated the group-assessment positively (both metacognitive and evaluative) except for stress levels. In this case, students indicated that the group-assessment, especially the evaluative step, was more stressful than peer-assessment. ANOVA for repeated measures highlighted that the emotional issues (except stress levels) received higher values during the group-assessment (MDs were between 0.205 and 0.281, with p-values ≤ 0.001). When considering the factor “age”, the youngest students (aged 19–20, 21–22, and 23–24) felt higher levels of stress and anxiety during the evaluative steps. In contrast, the oldest students (aged 25–29 and >30) with many or full-time work experiences felt the lowest levels of stress and anxiety during the same steps.
The feedback typologies were “specific”, “constructive/actionable”, “balanced”, and “respectful/sensitive”. Almost all students (97–98%) rated the respectful/sensitive feedback positively during both metacognitive and evaluative steps. The remaining typologies were rated more positively during the group-assessment (between 85% and 89%). Concerning the sub-scales “Feedback open/blind issues” and “Feedback teacher issues”, around one-third of students would have preferred blind feedback, especially in the evaluative step. More than half of the students, instead, would have liked to hear teacher feedback, especially during the peer-assessment. ANOVA for repeated measures indicated that “respectful/sensitive” was rated higher than the other feedback aspects (MDs were between 0.617 and 0.646, with p-values always < 0.001). All feedback aspects were more appreciated during the group-assessment (MDs were between 0.168 and 0.351, with p-values ranging from <0.001 to 0.008) except for “respectful/sensitive”, which was more appreciated during the peer-assessment (MD = 0.110, p < 0.003). In general, students emphasized that feedback from the teacher was necessary compared to blind feedback (MD = 0.174, p < 0.001). Feedback from the teacher was particularly needed during the peer-assessment compared to the group-assessment (MD = 0.466, p < 0.001) and, specifically, during the metacognitive step compared to the evaluative one (MD = 0.226, p < 0.004).
Formative assessment was useful in improving students’ resilience capacity; in particular, it was useful for supporting positive reactions to difficulties identified during the peer-assessment (around 76%) and group-assessment (around 88%). Similarly, the capacity to cope with stress and anxiety was appreciated by 87% of students, but only during the metacognitive step of group-assessment, whilst only 58% of students rated this point positively during the peer-assessment. Generally, ANOVA for repeated measures indicated that “reacting positively to the difficulties” was rated higher than “coping with stress and anxiety” (MD = 0.201, p < 0.001). Within the formative assessment methods, “reacting positively to the difficulties” was rated higher than “coping with stress and anxiety” both during peer-assessment (MD = 0.298, p < 0.001) and group-assessment (MD = 0.104, p < 0.042). When considering the factor “work experience”, participants with no work experience appreciated both resilience factors equally (MD = 0.076, p = 0.349). The youngest students (aged 19–20 and 21–22) with no or few work experiences appreciated both resilience factors mainly during group-assessment.
Regarding the metacognitive aspects, the metacognitive step of group-assessment was particularly useful for “reflecting on my own learning strategies” (82%) and “modifying my own learning strategies” (79.47%). The peer-assessment was rated positively by 72% of students for the metacognitive step regarding “reflecting on my own learning strategies”, whilst only 59% of students appreciated the peer-assessment activity (both metacognitive and evaluative) for “modifying my own learning strategies”. ANOVA for repeated measures underlined a statistically significant difference between “reflecting on my own learning strategies” and “modifying my own learning strategies” (MD = 0.152, p < 0.001). Within the formative assessment methods, “reflecting on my own learning strategies” was more appreciated than “modifying my own learning strategies” both during peer-assessment (MD = 0.217, p < 0.001) and group-assessment (MD = 0.089, p < 0.015).
In general, students did not want formative assessment strategies to affect their formal grade, but 45.83% of them would have liked the evaluative group-assessment to inform the summative assessment. Almost 37% of students would have liked the metacognitive step of group-assessment to contribute to the final grade. Only one student out of six declared that the peer-assessment (both metacognitive and evaluative steps) should impact the summative grade. ANOVA for repeated measures revealed a statistically significant difference between group- and peer-assessment in both the metacognitive (MD = 0.451, p < 0.009) and evaluative steps (MD = 0.569, p < 0.001).

5.1.2. Exploratory Factor Analysis (EFA)

Exploratory factor analysis (EFA) is valuable in the context of this study because it can pinpoint latent factors and indicate how much of the variance each of them explains. The EFA was performed with varimax rotation and Kaiser normalization, using principal components extraction with eigenvalues > 1. The results indicate that the sample was adequate, since the Kaiser–Meyer–Olkin test yielded 0.877; additionally, Bartlett’s Test of Sphericity revealed a p-value of <0.001 (χ2 = 9848.049; df = 861).
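The analysis was run in SPSS 29; as a rough, non-authoritative equivalent, the reported settings map onto the open-source factor_analyzer package as sketched below (the items DataFrame is hypothetical):

```python
import pandas as pd
from factor_analyzer import (
    FactorAnalyzer,
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def run_efa(items: pd.DataFrame) -> pd.DataFrame:
    # Sampling adequacy (KMO) and Bartlett's Test of Sphericity
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"KMO = {kmo_overall:.3f}; Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}")

    # First pass without rotation to apply the Kaiser criterion (eigenvalues > 1)
    fa = FactorAnalyzer(rotation=None, method="principal")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()
    n_factors = int((eigenvalues > 1).sum())

    # Second pass: retain those factors and apply varimax rotation
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(items)
    explained = fa.get_factor_variance()[1] * 100  # % of variance per factor
    print(f"{n_factors} factors; explained variance (%): {explained.round(2)}")
    return pd.DataFrame(fa.loadings_, index=items.columns)
```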
The EFA revealed four relevant factors. The first factor explains 38.03% of the total variance and is composed of all items included in the scale “Metacognitive issues”. The second factor (7.08% of explained variance) is strictly connected with the scale “Emotional issues” but includes only the items referring to the metacognitive step. The third factor (4.89%) includes the items of the “Resilience issues” scale. The fourth factor (4.57%) contains the items related to the “Feedback issues” scale, in particular balanced and constructive/actionable feedback during both metacognitive and evaluative steps.

5.1.3. Peer- vs. Group-Assessment

A basic question was aimed at understanding whether students preferred peer- or group-assessment. The quantitative data analysis clearly indicates that students preferred the group-assessment. ANOVA for repeated measures emphasized that group-assessment was more appreciated than peer-assessment in all scales: organizational issues (MD = 0.144, p < 0.047); emotional issues (MD = 0.161, p < 0.030); feedback issues (MD = 0.163, p < 0.028); resilience issues (MD = 0.426, p < 0.001); metacognitive issues (MD = 0.271, p < 0.007).
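Since we do not have the authors’ SPSS syntax, the following minimal sketch shows how such a repeated-measures comparison could be reproduced with the open-source pingouin package; the data frame, student IDs, and scores are hypothetical:

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Hypothetical scale means in long format: each student rated both methods.
long_df = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "method":  ["peer", "group"] * 5,
    "score":   [3.0, 3.2, 2.8, 3.1, 3.4, 3.6, 2.9, 3.3, 3.1, 3.4],
})

# One-way repeated-measures ANOVA with assessment method as within factor
aov = pg.rm_anova(data=long_df, dv="score", within="method", subject="student")
print(aov[["Source", "F", "p-unc"]])
```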
Analyzing the data in detail, we found that the youngest students (aged 19–20) gave higher scores to group-assessment than to peer-assessment for emotional issues (MD = 0.319, p < 0.048) and resilience issues (MD = 0.537, p < 0.009), more so than older students did. The oldest students (aged over 30), instead, preferred group-assessment for feedback issues (MD = 0.332, p < 0.042).
Similarly, students with no or few work experiences rated group-assessment higher for organizational issues (MD = 0.253 and 0.261, p < 0.047 and 0.045), feedback issues (MD = 0.299 and 0.288, p < 0.033 and 0.039), resilience issues (MD = 0.552 and 0.543, p < 0.002 and 0.001), and metacognitive issues (MD = 0.437 and 0.425, p < 0.021 and 0.029).
In general, group-assessment was preferred by all students during both metacognitive and evaluative steps for resilience issues (MD = 0.461 and 0.392, p < 0.001 and 0.003) and for metacognitive issues, especially during the evaluative step (MD = 0.277, p < 0.048).

5.1.4. Metacognitive vs. Evaluative Steps

Another important question focused on the potential difference between the metacognitive and evaluative steps. A paired-sample t-test underlined that the metacognitive step was useful in preparing for the evaluative one, especially for group-assessment (t = 4.523, p < 0.001). Within the “Emotional issues” scale, ANOVA showed that emotional issues were rated more positively during the metacognitive step than during the evaluative one (MD = 0.088, p < 0.047). In particular, during the metacognitive step, the emotional issues were rated higher in the peer-assessment than in the group-assessment (MD = 0.150, p < 0.019). The “metacognitive issues” were particularly appreciated by the youngest students (aged 19–20) with no or few work experiences during the metacognitive step (respectively, MD = 0.255 and 0.273, p < 0.004 and 0.033).
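As an illustrative sketch (with hypothetical numbers, not the study’s data), such a paired comparison can be reproduced with SciPy:

```python
from scipy import stats

# Hypothetical per-student scale means for the two steps of group-assessment.
metacognitive = [3.4, 3.1, 3.8, 2.9, 3.6, 3.2, 3.5, 3.0]
evaluative    = [3.0, 2.8, 3.5, 2.7, 3.1, 3.0, 3.2, 2.9]

# Paired-sample t-test: the same students rated both steps.
t_stat, p_value = stats.ttest_rel(metacognitive, evaluative)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```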

5.2. Qualitative Analysis with Specific Findings

Figure 1 presents the analysis of the qualitative data. The figure is composed of three main categories: “Management issues” (at the bottom), “Metacognitive aspects” (on the left side), and “Evaluative aspects” (on the right side). Each category was developed from the reflections and thoughts of students during both peer- and group-assessment, considered as codes. Finally, each formative assessment method contains a number of subcodes that specify the particularities of peer- and group-assessment from both metacognitive and evaluative points of view. The subcodes shared between peer- and group-assessment are indicated by ellipses with dash-dotted borders; subcodes specific to either peer- or group-assessment are enclosed in ellipses with solid borders.
Concept mapping can be considered as an alternative approach and an effective tool for analyzing open-ended survey responses [72]. Figure 1 is aimed at visually representing the spatial relationships between concepts and ideas derived from qualitative data, presenting contextual information and improving understanding of participant experiences and perspectives [73].

5.2.1. Management Issues

Table 5 shows the subcodes related to the “Management issues” category, which contains the codes focused on the main questions that emerged in students’ reflections on the design and organizational features. The codes with more references are common to both formative assessment methods. Although the code named “time management” was cited for both formative assessment methods, the students underlined some organizational problems mainly with peer-assessment. In particular, students noted that they would have benefitted from extended time for more effective discussions and feedback during peer-assessment. A student declared: “I would have needed time to be able to complete the task”. Another significant aspect is represented by the code called “teacher’s feedback”. Students expressed the importance of having feedback from the teacher after both peer- and group-assessment. A student, after the peer-assessment activity, stated: “I would have preferred that all topics discussed during the metacognitive peer-assessment should be summarized and specified also by the teacher, just to be sure that everything was correct”. Also during the evaluative step of group-assessment, students needed teacher feedback: “It would be better that the teacher made some comments during the presentation so the students could have the opportunity to clarify potential unclear points”.

5.2.2. Metacognitive Aspects

The metacognitive aspects are summarized in Table 6. Most subcodes were cited in both peer- and group-assessment. In particular, the subcode called “peers’ feedback” indicates that the students had the opportunity to interact deeply with peers and develop several forms of feedback. A student stated: “The peer-assessment activity allowed us to compare different opinions and observe the various topics from different points of view; so we had clearer ideas about the issues to be studied for the exam”. In fact, the subcode named “learning improvement” also indicated that the metacognitive steps supported the development of the students’ learning processes. Many students expressed the strengths of group-assessment in this sense. A participant stated: “The students themselves were led to identify themselves in the role of an educator or a teacher, studying and imagining the context, activities and possible scenarios of their future work by comparing the comments made by their colleagues”. Further positive elements are represented by the emotional issues: the metacognitive steps supported student engagement and low levels of stress and anxiety. Critical aspects are represented by the subcodes “lower feedback’s power” and “training needs”. The first is connected to the importance of having feedback from the teacher after both peer- and group-assessment. With the second subcode, the students underlined the importance of having more training opportunities to effectively develop all metacognitive aspects.

5.2.3. Evaluative Aspects

The characteristics of the evaluative steps are shown in Table 7. Also during the evaluative steps, both formative assessment methods supported “learning improvement”. Specifically, a student noted: “Through a deep discussion during the peer-assessment, we had the opportunity to compare our ideas and how to improve my learning strategies, my answers and to argue my ideas. Definitely, I could widen my perspectives”. On the other hand, the students indicated that the evaluative steps aroused higher levels of anxiety and stress: “I was really anxious during the evaluative step of group-assessment. I had trouble during the presentation of my educational activity”.

6. Discussion

The data analysis and the findings suggest some important considerations about the application of formative assessment in higher education contexts. Regarding the first sub-research question (RQ1), participants preferred group-assessment to peer-assessment. Professionals in education (teachers, social workers, and heads of social services) are used to working in groups, so group-assessment likely represents a simulation of their future job contexts and career dispositions, as indicated by Merritt et al. [53], Otaki et al. [54], and Atasoy and Kaya [52].
Group-assessment received higher values in the scales related to feedback, resilience, and metacognitive issues. From an organizational point of view, group-assessment was appreciated mainly for the extended time dedicated to the activity. Peer-assessment, by contrast, needed more time to allow students to have deep discussions and, consequently, offer and receive effective feedback. These points reveal the importance of time when considering the sustainability of formative assessment strategies in general [63,74]. On the one hand, formative assessment strategies need enough time to be successful; on the other hand, it may be difficult to incorporate formative assessment into courses with a short amount of time. Potential solutions can be found with the help of digital devices, as suggested by Melesko and Ramanauskaite [75].
Group-assessment represented the best formative assessment practice, except for stress/anxiety levels, which were quite high mainly in the evaluative steps. The lowest levels for stress/anxiety were found in the metacognitive step of peer-assessment. In this case, it is clear that the metacognitive steps were appreciated by the students because they had the opportunity to discuss without being under the pressure of grading. In the metacognitive steps, students could be focused on their own learning processes, as underlined by Xu et al. [76].
Notably, regarding the emotional issues, the youngest students felt very high levels of stress and anxiety during the evaluative steps, whilst the oldest students with a good level of work experience did not. A plausible explanation is that the youngest students still tend to act as students and are particularly worried about grading. Conversely, the oldest students can appreciate the formative assessment methods even during evaluative steps because they already have job experiences in which they had the opportunity to discuss with colleagues and carry out informal group-assessment. In practice, the oldest students identify more as professionals than the youngest students do, an issue discussed by Bin Mubayrik [77].
Regarding feedback issues, it is interesting to discuss the techniques related to formative assessment strategies. We opted for peer- and group-assessment based on open and oral feedback, as indicated by Gedye [60], because this strategy allows students to discuss, debate, and generate a multiplicity of ideas and comments. The data showed that around one-third of students would have preferred blind feedback to preserve anonymity and promote more open comments. Blind feedback can be useful in several contexts; in particular, written comments oblige students to be very precise and clear. As a result, we conclude that our open and oral option cannot be considered the best solution in every case: teachers can select different forms of feedback depending on the contextual situation. In any case, the quality of feedback remains one of the most important questions regarding formative assessment [58].
Similarly, quantitative and qualitative data underlined that students needed feedback from the instructor in both metacognitive and evaluative steps, especially during the peer-assessment. The students declared that they needed the instructor’s comments in any case, even after a formative assessment activity [17]. This is another important issue because it means that even higher education students need a kind of formal final summary to be sure that the discussions among peers are correct. As mentioned previously, this point represents a potential difficulty: if the instructor has to address all questions raised during the formative assessment, this means additional tasks and considerable extra time for the instructor [61]. In any case, the exploratory factor analysis highlighted that feedback issues represent a crucial factor for implementing formative assessment strategies effectively.
An important issue raised by the data analysis is the potential of formative assessment strategies to support students’ resilience skills, in particular their ability to react more positively to difficulties through an effective formative assessment activity. It is particularly important that the youngest students with no or few work experiences appreciated both resilience factors, particularly during group assessment, as underlined by Sahoo et al. [62].
The second sub-research question (RQ2) focused on the potential support of the metacognitive steps for students’ learning development. In both quantitative and qualitative data, students clearly indicated that the metacognitive steps were useful in preparing them to undertake the evaluative step. Students appreciated the metacognitive steps, likely because they valued the professional discussions that characterize this format. From an emotional point of view, students felt free to engage in discussion during the metacognitive steps, which resulted in an improvement in their capacities to reflect on their own learning strategies, especially for the youngest students. From this we can conclude that formative assessment can help young students (without work experience) to develop their reflections on their own professional capacity, as discussed by Xie and Cui [30] and Atjonen et al. [29]. The qualitative data clearly underscored the importance of feedback with peers and the learning improvement that occurred during the metacognitive steps, especially during group-assessment.
Our final remarks are dedicated to grading and the relationship between formative and summative assessment. The data showed that, in general, students do not want formative assessment strategies to affect the formal grade, but a significant number of students would have liked the group-assessment to inform the final grade and the summative assessment. These students may want their deep commitment and dedication to the tasks reflected in their final assessments. The problem, and the risk, lies in trying to quantify metacognition levels. The data suggest that formative assessment should not affect the summative assessment, because formative assessment is aimed at improving students’ understanding and personal capacity to manage their learning processes, as indicated by Normann et al. [35]. The debate about grading is quite complex, and our qualitative data suggested some potential solutions: the evaluative steps of formative assessment could account for 20% of the final grade; round the final grade up; add two to three points to the final grade; or recognize critical analysis with additional points. We consider the portfolio the best solution for combining summative assessments and formal grades, incorporating content and academic achievements alongside comments made by teachers about the students’ metacognitive development and their capacity to manage their own learning processes [33].

Limitations of the Study

This study presents two main limitations. First, peer-assessment was conducted in phase 1 and group-assessment in phase 2. Due to this sequence, the second formative assessment strategy might have been perceived more positively simply because it came after the peer-assessment. To confirm the results of this study, further research with randomization or counterbalancing would be necessary. Second, all students involved in the study were enrolled in bachelor’s and master’s degrees focused on educational issues. For this reason, formative assessment methods are simultaneously an actionable strategy for making learning processes interactive and dynamic and a content topic to be studied for the final exam. This can cause a kind of over-motivation among students when engaging in and studying the structure and activities related to formative assessment methods. We recommend that the study be replicated with students enrolled in different degree programs (physics, chemistry, architecture, etc.) to reveal potential significant differences.

7. Conclusions

It is important, first, to summarize the answers to the research questions and, second, to propose suggestions and implications for policy and practice.
Concerning RQ1, students preferred group-assessment to peer-assessment. This does not mean that peer-assessment should not be used in higher education. Peer-assessment can be applied at the beginning of a course to establish a good level of feedback, but, at least for courses focused on educational issues, group-assessment represents the best simulation of a professional situation. For these reasons, we recommend organizing formative assessment activities based on this strategy. It is important to underline that group-assessment is an evolved form of peer-assessment. Since group management is more complex than working in pairs, supporting students in developing the social competences necessary to create effective interaction among members is key. Peer-assessment thus becomes a kind of training that makes the group-assessment successful.
Focusing on RQ2, the metacognitive steps represented a crucial support for student participation in the evaluative steps both during peer- and group-assessment. The metacognitive steps allowed students to focus their attention on their own learning strategies without worrying about the final grade. Instead, the summative aspects emerged clearly during the evaluative steps. Consequently, we suggest arranging formative assessment activities completely independent of summative expectations during educational activities that precede mid-term or final exams. In this way, students have the opportunity to observe and improve their emotional, metacognitive, and strategic learning issues prior to a summative assessment.
The organizational issues affected the efficacy of formative assessment activities. For this reason, it is important to carefully arrange and integrate formative assessment strategies, otherwise there is a risk of creating confusion. As mentioned previously, an effective formative assessment strategy takes time, and therefore may be unsustainable for teachers who do not have enough time. To address this question, we suggest integrating formative assessment strategies within other teaching strategies, such as problem-based learning, team-based learning, etc., in order to actively integrate content and assessment.
To answer the main research question (RQ): formative assessment strategies allow students to reflect on and modify their learning approaches in order to enhance their metacognitive and evaluative competencies as future professionals in educational fields. It is fundamental to create an educational pathway that allows students to experiment, step by step, with the competences needed to face increasingly complex learning tasks.
Considering policy implications, higher education institutions should improve instructor capacity to integrate formative assessment activities, particularly in professional programs in higher education (medical, engineering, chemistry, etc.). Additionally, it would be necessary to outline a model to create a link between formative and summative assessment. As indicated in the previous paragraph, we suggest the use of a portfolio, possibly digital, to simplify and decrease the instructor’s workload, as indicated by Marinho et al. [78].
To conclude, formative assessment, in its several forms, is always aimed at making students aware of their reflections and their learning. Thus, as stated by Nicol and McCallum [51], self-, peer-, and group-assessment should help students in making their internal feedback explicit. In this way, students will be able to create and sustain learning interactions and improve their own learning processes.

Author Contributions

D.P.: conceptualization, formal analysis, investigation, methodology, project administration, writing—original draft. E.N.: conceptualization, formal analysis, investigation, methodology, project administration, writing—original draft. M.P.: conceptualization, formal analysis, investigation, methodology, project administration, writing—original draft. E.M.: formal analysis, methodology, writing—original draft. S.R.: writing—review and editing. M.I.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the University of Genoa, Italy (2024/40 of 12 April 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Black, P.; Wiliam, D. Assessment and classroom learning. Assess. Educ. Princ. Policy Pract. 1998, 5, 7–74.
2. Perrenoud, P. From formative evaluation to a controlled regulation of learning processes. Towards a wider conceptual field. Assess. Educ. Princ. Policy Pract. 1998, 5, 85–102.
3. Allal, L.; Lopez, L.M. Formative assessment of learning: A review of publications in French. In Formative Assessment: Improving Learning in Secondary Classrooms; OECD: Paris, France, 2005; pp. 241–264.
4. Andrade, H.; Brookhart, S.M. The role of classroom assessment in supporting self-regulated learning. In Assessment for Learning: Meeting the Challenge of Implementation; Allal, L., Laveault, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 293–309.
5. Panadero, E.; Andrade, H.; Brookhart, S. Fusing self-regulated learning and formative assessment: A roadmap of where we are, how we got here, and where we are going. Aust. Educ. Res. 2018, 45, 13–31.
6. European Commission. Pathways to School Success—Commission Staff Working Document, Accompanying the Document Proposal for a Council Recommendation on Pathways to School Success; Publications Office of the European Union: Luxembourg, 2022. Available online: https://data.europa.eu/doi/10.2766/874295 (accessed on 15 July 2025).
7. European Commission; Looney, J.; Kelly, G. Assessing Learners’ Competences—Policies and Practices to Support Successful and Inclusive Education; Publications Office of the European Union: Luxembourg, 2023. Available online: https://data.europa.eu/doi/10.2766/221856 (accessed on 15 July 2025).
8. Boud, D. Implementing student self-assessment. In HERDSA Green Guide No. 5; Higher Education Research and Development Society of Australasia: Hammondville, NSW, Australia, 1986.
9. Murray, H.G. The impact of formative and summative evaluation of teaching in North American universities. Assess. Eval. High. Educ. 1984, 9, 117–132.
10. Sadler, D.R. Evaluation and the improvement of academic learning. J. High. Educ. 1983, 54, 60–79.
11. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144.
12. Black, P.; Wiliam, D. Developing the theory of formative assessment. Educ. Assess. Eval. Account. 2009, 21, 5–31.
13. Fisher, R.; Cavanagh, J.; Bowles, A. Assisting transition to university: Using assessment as a formative learning tool. Assess. Eval. High. Educ. 2011, 36, 225–237.
14. Kruiper, S.M.A.; Leenknecht, M.J.M.; Slof, B. Using scaffolding strategies to improve formative assessment practice in higher education. Assess. Eval. High. Educ. 2022, 47, 458–476.
15. Leenknecht, M.; Wijnia, L.; Köhlen, M.; Fryer, L.; Rikers, R.; Loyens, S. Formative assessment as practice: The role of students’ motivation. Assess. Eval. High. Educ. 2021, 46, 236–255.
16. López-Pastor, V.; Sicilia-Camacho, A. Formative and shared assessment in higher education. Lessons learned and challenges for the future. Assess. Eval. High. Educ. 2017, 42, 77–97.
17. Morris, R.; Perry, T.; Wardle, L. Formative assessment and feedback for learning in higher education: A systematic review. Rev. Educ. 2021, 9, e3292.
18. Pereira, D.; Flores, M.A.; Niklasson, L. Assessment revisited: A review of research in Assessment and Evaluation in Higher Education. Assess. Eval. High. Educ. 2016, 41, 1008–1032.
19. Doria, B.; Grion, V.; Paccagnella, O. Assessment approaches and practices of university lecturers: A nationwide empirical research. Ital. J. Educ. Res. 2023, 30, 129–143.
20. Parmigiani, D.; Nicchia, E.; Murgia, E.; Ingersoll, M. Formative assessment in higher education: An exploratory study within programs for professionals in education. Front. Educ. 2024, 9, 1366215.
21. Baker, D.F. Peer assessment in small groups: A comparison of methods. J. Manag. Educ. 2008, 32, 183–209.
22. Gladovic, C.; Tai, J.H.-M.; Nicola-Richmond, K.; Dawson, P. How can learners practice evaluative judgement using qualitative self-assessment? Assess. Eval. High. Educ. 2024, 49, 755–766.
23. Klenowski, V. Assessment for learning revisited: An Asia-Pacific perspective. Assess. Educ. Princ. Policy Pract. 2009, 16, 263–268.
24. Earl, L.M. Assessment as Learning: Using Classroom Assessment to Maximize Student Learning; Corwin Press: Thousand Oaks, CA, USA, 2013.
25. Crooks, T. Assessment for learning in the accountability era: New Zealand. Stud. Educ. Eval. 2011, 37, 71–77.
26. Carney, E.A.; Zhang, X.; Charsha, A.; Taylor, J.N.; Hoshaw, J.P. Formative assessment helps students learn over time: Why aren’t we paying more attention to it? Intersect. A J. Intersect. Assess. Learn. 2022, 4, n1.
27. Wenden, A.L. Metacognitive knowledge and language learning. Appl. Linguist. 1998, 19, 515–537.
28. Pintrich, P.R. The role of metacognitive knowledge in learning, teaching, and assessing. Theory Into Pract. 2002, 41, 219–225.
29. Atjonen, P.; Kontkanen, S.; Ruotsalainen, P.; Pöntinen, S. Pre-service teachers as learners of formative assessment in teaching practice. Eur. J. Teach. Educ. 2024, 47, 267–284.
30. Xie, Q.; Cui, Y. Preservice teachers’ implementation of formative assessment in English writing class: Mentoring matters. Stud. Educ. Eval. 2021, 70, 101019.
31. Druskat, V.U.; Wolff, S.B. Effects and timing of developmental peer appraisals in self-managing work groups. J. Appl. Psychol. 1999, 84, 58–74.
32. Daniar, A.V.; Herdyastuti, N.; Lutfi, A. Analysis effectiveness of implementation assessment as learning on metacognitive skills. IJORER Int. J. Recent Educ. Res. 2023, 4, 759–770.
33. Ismail, S.M.; Rahul, D.R.; Patra, I.; Rezvani, E. Formative vs. summative assessment: Impacts on academic motivation, attitude toward learning, test anxiety, and self-regulation skill. Lang. Test. Asia 2022, 12, 40.
34. Wininger, S.R. Using your tests to teach: Formative summative assessment. Teach. Psychol. 2005, 32, 164–166.
35. Normann, D.-A.; Sandvik, L.V.; Fjørtoft, H. Reduced grading in assessment: A scoping review. Teach. Teach. Educ. 2023, 135, 104336.
36. Yan, Z.; Carless, D. Self-assessment is about more than self: The enabling role of feedback literacy. Assess. Eval. High. Educ. 2022, 47, 1116–1128.
37. Panadero, E.; Brown, G.T.L.; Strijbos, J.-W. The future of student self-assessment: A review of known unknowns and potential directions. Educ. Psychol. Rev. 2016, 28, 803–830.
38. Ashenafi, M.M. Peer-assessment in higher education—twenty-first century practices, challenges and the way forward. Assess. Eval. High. Educ. 2017, 42, 226–251.
39. Topping, K. Peer assessment between students in colleges and universities. Rev. Educ. Res. 1998, 68, 249–276.
40. Fleckney, P.; Thompson, J.; Vaz-Serra, P. Designing effective peer assessment processes in higher education: A systematic review. High. Educ. Res. Dev. 2025, 44, 386–401.
41. Double, K.S.; McGrane, J.A.; Hopfenbeck, T.N. The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educ. Psychol. Rev. 2020, 32, 481–509.
42. Huisman, B.; Saab, N.; Van Den Broek, P.; Van Driel, J. The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis. Assess. Eval. High. Educ. 2019, 44, 863–880.
43. Nicol, D.; Thomson, A.; Breslin, C. Rethinking feedback practices in higher education: A peer review perspective. Assess. Eval. High. Educ. 2014, 39, 102–122.
44. Armengol-Asparó, C.; Mercader, C.; Ion, G. Making peer-feedback more efficient: What conditions of its delivery make the difference? High. Educ. Res. Dev. 2022, 41, 226–239.
45. Lerchenfeldt, S.; Mi, M.; Eng, M. The utilization of peer feedback during collaborative learning in undergraduate medical education: A systematic review. BMC Med. Educ. 2019, 19, 321.
46. Donia, M.B.L.; Mach, M.; O’Neill, T.A.; Brutus, S. Student satisfaction with use of an online peer feedback system. Assess. Eval. High. Educ. 2022, 47, 269–283.
47. Reinholz, D. The assessment cycle: A model for learning through peer assessment. Assess. Eval. High. Educ. 2016, 41, 301–315.
48. Zheng, L.; Chen, N.-S.; Cui, P.; Zhang, X. A systematic review of technology-supported peer assessment research. Int. Rev. Res. Open Distrib. Learn. 2019, 20, 168–191.
49. Zhang, S.; Li, H.; Wen, Y.; Zhang, Y.; Guo, T.; He, X. Exploration of a group assessment model to foster student teachers’ critical thinking. Think. Ski. Creat. 2023, 47, 101239.
50. Asghar, A. Reciprocal peer coaching and its use as a formative assessment strategy for first-year students. Assess. Eval. High. Educ. 2010, 35, 403–417.
51. Nicol, D.; McCallum, S. Making internal feedback explicit: Exploiting the multiple comparisons that occur during peer review. Assess. Eval. High. Educ. 2022, 47, 424–443.
52. Atasoy, V.; Kaya, G. Formative assessment practices in science education: A meta-synthesis study. Stud. Educ. Eval. 2022, 75, 101186.
53. Merritt, D.J.; Colker, R.; Deason, E.E.; Smith, M.; Shoben, A.B. Formative assessments: A law school case study. U. Det. Mercy L. Rev. 2017, 94, 387–428.
  54. Otaki, F.; Gholami, M.; Fawad, I.; Akbar, A.; Banerjee, Y. Students’ perception of formative assessment as an instructional tool in competency-based medical education: Proposal for a proof-of-concept study. JMIR Res. Protoc. 2023, 12, e41626. [Google Scholar] [CrossRef]
  55. Crossouard, B. Using formative assessment to support complex learning in conditions of social adversity. Assess. Educ. Princ. Policy Pract. 2011, 18, 59–72. [Google Scholar] [CrossRef]
  56. Pancorbo, G.; Primi, R.; John, O.P.; Santos, D.; De Fruyt, F. Formative assessment of social-emotional skills using rubrics: A review of knowns and unknowns. Front. Educ. 2021, 6, 687661. [Google Scholar] [CrossRef]
  57. Hardavella, G.; Aamli-Gaagnat, A.; Saad, N.; Rousalova, I.; Sreter, K.B. How to give and receive feedback effectively. Breathe 2017, 13, 327–333. [Google Scholar] [CrossRef]
  58. Lui, A.M.; Andrade, H.L. The next black box of formative assessment: A model of the internal mechanisms of feedback processing. Front. Educ. 2022, 7, 751548. [Google Scholar] [CrossRef]
  59. Orsini, C.; Rodrigues, V.; Tricio, J.; Rosel, M. Common models and approaches for the clinical educator to plan effective feedback encounters. J. Educ. Eval. Health Prof. 2022, 19, 35. [Google Scholar] [CrossRef]
  60. Gedye, S. Formative assessment and feedback: A review. Planet 2010, 23, 40–45. [Google Scholar] [CrossRef]
  61. Van Der Steen, J.; Van Schilt-Mol, T.; Van Der Vleuten, C.; Joosten-ten Brinke, D. Designing formative assessment that improves teaching and learning: What can be learned from the design stories of experienced teachers? J. Form. Des. Learn. 2023, 7, 182–194. [Google Scholar] [CrossRef]
  62. Sahoo, S.; Tirpude, A.P.; Tripathy, P.R.; Gaikwad, M.R.; Giri, S. The impact of periodic formative assessments on learning through the lens of the complex adaptive system and social sustainability principles. Cureus 2023, 15, e41072. [Google Scholar] [CrossRef]
  63. Duhart, O. The “F” word: The top five complaints (and solutions) about formative assessment. J. Leg. Educ. 2018, 67, 531–552. [Google Scholar]
  64. Tashakkori, A.; Teddlie, C. Foundations of Mixed Methods Research; Sage: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  65. Creswell, J.W.; Clark, V.P. Designing and Conducting Mixed Methods Research; Sage: Thousand Oaks, CA, USA, 2011. [Google Scholar]
  66. Creswell, J.W.; Clark, V.P.; Gutmann, M.; Hanson, W. Advanced mixed methods research designs. In Handbook of Mixed Methods in Social & Behavioral Research; Tashakkori, A., Teddlie, C., Eds.; Sage: Thousand Oaks, CA, USA, 2003; pp. 209–240. [Google Scholar]
  67. Charmaz, K. Constructing Grounded Theory; Sage: Thousand Oaks, CA, USA, 2014. [Google Scholar]
  68. Corbin, J.; Strauss, A. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory; Sage: Thousand Oaks, CA, USA, 2015. [Google Scholar]
  69. Henson, R.K. Understanding internal consistency reliability estimates: A conceptual primer on coefficient alpha. Meas. Eval. Couns. Dev. 2001, 34, 177–189. [Google Scholar] [CrossRef]
  70. Spiliotopoulou, G. Reliability reconsidered: Cronbach’s alpha and paediatric assessment in occupational therapy. Aust. Occup. Ther. J. 2009, 56, 150–155. [Google Scholar] [CrossRef]
  71. DeVon, H.A.; Block, M.E.; Moyle-Wright, P.; Ernst, D.M.; Hayden, S.J.; Lazzara, D.J.; Savoy, S.M.; Kostas-Polston, E. A psychometric toolbox for testing validity and reliability. J. Nurs. Scholarsh. 2007, 39, 155–164. [Google Scholar] [CrossRef]
  72. Jackson, K.M.; Trochim, W.M.K. Concept mapping as an alternative approach for the analysis of open-ended survey responses. Organ. Res. Methods 2002, 5, 307–336. [Google Scholar] [CrossRef]
  73. Burgess-Allen, J.; Owen-Smith, V. Using mind mapping techniques for rapid qualitative data analysis in public participation processes. Health Expect. 2010, 13, 406–415. [Google Scholar] [CrossRef] [PubMed]
  74. Higgins, M.; Grant, F.; Thompson, P. Formative assessment: Balancing educational effectiveness and resource efficiency. J. Educ. Built Environ. 2010, 5, 4–24. [Google Scholar] [CrossRef]
  75. Melesko, J.; Ramanauskaite, S. Time saving students’ formative assessment: Algorithm to balance number of tasks and result reliability. Appl. Sci. 2021, 11, 6048. [Google Scholar] [CrossRef]
  76. Xu, X.; Shen, W.; Islam, A.Y.M.A.; Zhou, Y. A whole learning process-oriented formative assessment framework to cultivate complex skills. Humanit. Soc. Sci. Commun. 2023, 10, 653. [Google Scholar] [CrossRef]
  77. Bin Mubayrik, H.F. New trends in formative-summative evaluations for adult education. Sage Open 2020, 10, 2158244020941006. [Google Scholar] [CrossRef]
  78. Marinho, P.; Fernandes, P.; Pimentel, F. The digital portfolio as an assessment strategy for learning in higher education. Distance Educ. 2021, 42, 253–267. [Google Scholar] [CrossRef]
Figure 1. Map of qualitative data.
Table 1. Participants’ characteristics.

Factor          | Category                                  | Occurrences | Count (%)
Gender          | F                                         | 222         | 92.50
                | M                                         | 12          | 5.00
                | Other                                     | 2           | 0.83
                | I don’t wish to say                       | 4           | 1.67
Age             | 19–20                                     | 82          | 34.17
                | 21–22                                     | 72          | 30.00
                | 23–24                                     | 36          | 15.00
                | 25–29                                     | 24          | 10.00
                | ≥30                                       | 26          | 10.83
Work experience | Never                                     | 109         | 45.42
                | Few experiences (some days/weeks)         | 72          | 30.00
                | Many experiences (some months/1 year)     | 34          | 14.17
                | Full-time experience (2 or more years)    | 25          | 10.41
Education area  | Kindergarten teacher 3–6/Primary teacher  | 84          | 35.00
                | Social worker                             | 84          | 35.00
                | Early childhood educator 0–3              | 41          | 17.08
                | Head of social services                   | 31          | 12.92
Table 2. Key questions to support peer- and group-assessment during the metacognitive and evaluative steps.

Metacognitive step

Peer-assessment:
  • How do you plan to best remember the contents you need to cope with the test?
  • How do you plan to organize and link the contents to present them effectively?
  • Are you developing strategies to build a coherent picture of the contents (e.g., maps, lists, etc.)?
  • How do you plan to argue the information?
  • Are you preparing examples and arguments to support your positions?
  • Are you preparing any counter-arguments that effectively challenge other positions?

Group-assessment — How do you plan to design educational activities so that they:
  • Are constructed correctly, comprehensively, and coherently?
  • Are usable and replicable at other times, or even by your colleagues?
  • Support developmental processes for your diverse and complex users?
  • Engage users and, perhaps, families, and get them to interact?

Evaluative step

Peer-assessment — Did your partner:
  • Present the contents completely?
  • Present the contents effectively?
  • Understand the meaning of the contents?
  • Connect the contents effectively?
  • Create a coherent picture of the contents?
  • Present examples, arguments, and counter-arguments?

Group-assessment:
  • Accuracy (Is the activity described correctly and comprehensively?)
  • Coherence (Is the activity well designed and connected?)
  • Impact (Is the activity educationally significant to support learning processes?)
  • Involvement (Does the activity involve participants in an interactive and significant way?)
Table 3. Questionnaire scales and sub-scales.

Scale (sub-scale)            | Items | Topics                                                                         | References
Organizational issues        | 8     | Leading questions; rubrics; organization; time available                       | [55]
Emotional issues             | 8     | Pleasant; comfortable; involvement; stress/anxiety                             | [56]
Feedback issues              | 8     | Specific; constructive/actionable; balanced; respectful/sensitive              | [57,59]
  Feedback open/blind issues | 2     | Open feedback; blind feedback                                                  | [60]
  Feedback teacher issues    | 2     | Without teacher feedback; with teacher feedback                                | [17]
Resilience issues            | 4     | Reacting positively to difficulties; coping with stress and anxiety            | [62]
Metacognitive issues         | 4     | Reflecting on my own learning strategies; modifying my own learning strategies | [26]
Grading issues               | 2     | Relationship between formative and summative assessment                        | [35]
Table 4. Reliability coefficients.

Scale and sub-scales       | Cronbach’s α | McDonald’s ω | Average inter-item correlation
Organizational issues      | 0.838        | 0.836        | 0.391
Emotional issues           | 0.787        | 0.783        | 0.349
Feedback issues            | 0.836        | 0.814        | 0.337
Feedback open/blind issues | 0.938        | 0.939        | 0.877
Feedback teacher issues    | 0.906        | 0.907        | 0.822
Resilience issues          | 0.866        | 0.858        | 0.604
Metacognitive issues       | 0.917        | 0.915        | 0.729
Grading issues             | 0.912        | 0.912        | 0.829

Critical values: α and ω are considered good when 0.700 < α < 0.900 and excellent when α > 0.900 [69]; recommended ranges for the average inter-item correlation are 0.400–0.500 [70] or 0.300–0.700 [71].
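As a point of reference for the coefficients in Table 4 (the article does not report how they were computed, so the following is the standard textbook formulation consistent with [69], not a description of the authors’ analysis), Cronbach’s α for a scale of k items is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total scale score. The average inter-item correlation \bar{r} relates to the standardized coefficient via \alpha_{std} = k\bar{r} / (1 + (k-1)\bar{r}); for example, for the Grading issues sub-scale (k = 2, \bar{r} = 0.829) this yields approximately 0.906, close to the reported 0.912, the small gap being expected since the table presumably reports raw rather than standardized α.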
Table 5. Management issues category.

Category          | Code             | Subcode            | Frequency
Management issues | Peer-assessment  | Teacher’s feedback | 16
                  |                  | Time management    | 22
                  | Group-assessment | Teacher’s feedback | 10
                  |                  | Time management    | 4
Table 6. Metacognitive aspects category.

Category              | Code             | Subcode              | Frequency
Metacognitive aspects | Peer-assessment  | Engagement           | 6
                      |                  | Learning improvement | 18
                      |                  | Peers’ feedback      | 44
                      |                  | Training needs       | 14
                      |                  | Lower feedback power | 5
                      | Group-assessment | Engagement           | 3
                      |                  | Learning improvement | 22
                      |                  | Peers’ feedback      | 17
                      |                  | Training needs       | 7
                      |                  | Uncomfortable        | 6
                      |                  | Not stressful        | 4
Table 7. Evaluative aspects category.

Category           | Code             | Subcode                  | Frequency
Evaluative aspects | Peer-assessment  | Anxiety                  | 2
                   |                  | Learning improvement     | 17
                   |                  | Uncomfortable            | 12
                   |                  | Training needs           | 3
                   |                  | Peers’ feedback          | 8
                   |                  | Not stressful            | 8
                   |                  | Test training            | 5
                   |                  | Different points of view | 18
                   | Group-assessment | Anxiety                  | 2
                   |                  | Learning improvement     | 17
                   |                  | Uncomfortable            | 8
                   |                  | Training needs           | 2
                   |                  | Peers’ feedback          | 16