Review

A Holistic Overview of Studies to Improve Group-Based Assessments in Higher Education: A Systematic Literature Review

1 School of Engineering and Technology, Central Queensland University, Sydney, NSW 2000, Australia
2 School of Engineering and Technology, Central Queensland University, Melbourne, VIC 3000, Australia
3 School of Education and the Arts, Central Queensland University, Cairns, QLD 4870, Australia
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(15), 9638; https://doi.org/10.3390/su14159638
Submission received: 15 July 2022 / Revised: 1 August 2022 / Accepted: 3 August 2022 / Published: 5 August 2022

Abstract

There is a soaring demand for work-ready graduates who can quickly adapt to an ever-challenging work environment. Group-based assessments have been widely recommended as a means to develop the skills required for the world of work. However, group-based assessments are perceived as challenging for both students and educators. This systematic literature review (SLR), based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), focuses on analyzing and synthesizing the existing literature on group-based assessments. A four-step approach was undertaken to conduct this research. The SLR identified 71 relevant articles, which were analyzed thematically with the aid of NVivo software. An open coding approach was adopted to generate codes. The validity of the SLR process and the reliability of the research tool were maintained through the application of trustworthiness criteria. The findings identified dominant themes such as self- and peer evaluations, training students for group work assessments, group formation, group size, and the role of academics and technology in facilitating group processes. The outcomes of this review contribute significantly to the design and administration of group-based assessments in higher education by providing academics with practical guidelines to effectively facilitate group-based assessments that are fit for purpose.

1. Introduction

Being able to work efficiently with others is considered one of the critical skills demanded by employers once graduates enter the labor market [1]. Assisting students to construct knowledge and skills through alternative viewpoints, improving communication skills, and developing generic skills are some of the many reasons for including teamwork as a graduate attribute within higher education curricula [2]. Universities around the world explicitly mention teamwork, or being able to work in a group, as one of the critical graduate attributes [3]. Despite the emphasis placed on developing teamwork skills, employers are often dissatisfied with graduates' ability to work effectively in groups [4]. One way of developing teamwork skills in graduates is through the implementation of group-based assessments in higher education [5]. Twenty-first-century skills, such as communication, team-building, problem-solving, collaboration, creativity, and technical skills [6], are developed through active engagement in group work assessments [7,8]. Therefore, higher education institutions have continuously focused on the inclusion of group work assessments in various subjects [9].
Alongside providing a wide range of benefits, group-based assessments are intertwined with guilt, anxiety, and uncertainty in the minds of both teachers and students. Dealing with free-riding, assessing individuals' knowledge and skills, and grading students fairly are some of the dominant challenges discussed in the literature [10,11]. Further complexity may arise when inexperienced academics implement group work assessments. Group-based assessments have two elements: product(s) and process [5,12]. When introducing group-based assessments, it should be made clear to students whether the product, the process, or both will be assessed. Glebhill and Smith (1996) argue that effective group work assessment should consider both the product and the process of student learning. However, educators may place more emphasis on the product(s) of the assessments rather than on the process [13].
Although a few researchers have explored various singular aspects of group-based assessments in the context of higher education [14,15] and carried out reviews on group assessments [3,13,16], no effort appears to have been made to identify the critical themes and research gaps. This study addresses this gap by identifying, analyzing, and evaluating the critical themes discussed in the existing literature on the design and institutionalization of group-based assessments. An SLR is a credible means of identifying research gaps in the existing literature and informing a future research agenda [17]. Through the identification of new insights, this systematic literature review highlights directions for future research on group-based assessments in the higher education setting. Therefore, the objectives of this review are twofold:
  • To identify critical themes in the existing literature to facilitate the design and operation of group-based assessments in higher education.
  • To identify the present research gaps and provide direction for future research avenues.
An SLR is an appropriate research method for this research as it systematically identifies, collects, reviews, and analyzes the existing literature on the topic under investigation [18]. A systematic literature review considers all the available studies within the set of inclusion and exclusion criteria, and as a result, this methodology may be considered more transparent in comparison to narrative reviews [19]. This SLR advances the knowledge of group-based assessments by highlighting academic practices, planning, and the implementation of best practices for the successful execution of group-based assessments. The review paper is structured as follows. First, an overview of group-based assessments in higher education is provided; then, an SLR methodology focused on recognizing the critical themes of group-based assessments is undertaken. A descriptive analysis of the identified articles is conducted, followed by a thematic analysis to identify critical themes along with future research directions. Finally, the limitations, the conclusion, and the practical implications for higher education academics and policymakers are presented.

2. Group-Based Assessments in Higher Education

In this review paper, group-based assessments are defined as pieces of work assessed by teachers upon submission where two or more students work together to develop knowledge, skills, and abilities in a higher educational setting [9,12]. In this SLR, the terms group-based assessment, group assessment, group work assessment, and group assignment are used interchangeably as each fits the above definition. Both students and teachers can benefit from group-based assessments in a number of ways [20]. Group-based assessments enable academics to manage large numbers of students effectively [21]. For example, Rust [22] states that group-based assessments reduce the marking load and tutorial briefings for tutors. If effectively planned and managed, group assessments have the potential to polish students' soft skill sets, including teamwork, leadership competencies, problem-solving skills [7,8], communication skills [7], and behavioral competencies [23]. Students exchange knowledge and develop learning by interacting with one another in collaborative learning [24]. Collaborative learning environments can facilitate the development of greater depths of knowledge, soft skills, and enhanced learning outcomes [25,26]. To achieve the full potential of collaborative learning, students are required to assist their group members by providing and receiving explanations, which can improve their metacognitive, social interaction, and cognitive abilities [27]. Interaction with peers, the use of social media platforms to communicate with peers, social presence, and the interaction between students and teachers all contribute to active collaborative learning [28]. Alongside this range of benefits, group-based assessments also come with challenges for both students and teachers. Students often express dissatisfaction with the unfair grading of assessments where all the students in a team achieve the same grade regardless of their contributions [29]. For staff, it is perceived as cumbersome to identify individual contributions and the knowledge gained during group work assessments [5,9].
Because of the complexity involved in implementing group-based assessments, guidelines are required on the techniques for the effective execution of group assessments [30]. The existing literature proposes a range of variables and factors which need to be taken into consideration when designing and administering group-based assessments [31]. These factors include, but are not limited to, group formation, grade structure, group size, student engagement, and peer evaluations [31]. Aggarwal and O'Brien [32] demonstrated the relationship between project set-up factors, such as group formation, size of the group, and peer evaluations, and how they contribute to a reduction in the social-loafing behavior of students. Social loafing has been defined as "a reduction in individual effort because of the presence of other people and is most likely to occur when students feel less likely to be identified" [33]. In addition, Thom [34] highlighted course preliminaries, group work scope, evaluation and accountability, and program considerations. Furthermore, Davies [7] emphasized intra-group communication among members, smaller groups, ethnic diversity, ground rules, and complex group tasks as important factors for consideration when designing group-based assessments.

3. Research Method

A systematic literature review (SLR) has the potential to process the breadth of the existing knowledge in a specific area of investigation, to help researchers to map and evaluate the diversity of knowledge, and to contribute to the existing body of knowledge by identifying research gaps [35]. By conducting an SLR, researchers attempt to collect all the empirical studies meeting the pre-specified eligibility criteria to answer a specific research question [18]. A systematic literature review was adopted for this research to gain new insights into the existing knowledge and to facilitate practical guidelines for designing group-based assessments in higher education [36]. The stages of the SLR and its trustworthiness inherent in the process are discussed below.

3.1. Stages of SLR

This SLR is guided by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, which consists of four phases: (1) identification, (2) screening, (3) eligibility, and (4) inclusion for review [37]. The PRISMA 2020 checklist used for this review is provided in Appendix A (see Table A1). Recent education research has adopted the SLR approach using the PRISMA guidelines [5,38]. Figure 1 shows the steps involved in the SLR for this review. The first phase involves the identification of articles in various databases, as suggested by de Araújo et al. [39], including A+ Education, SAGE, ScienceDirect, Teachers Reference Centre, Education Research Complete, Emerald, ERIC, and Taylor and Francis. These databases contain a wealth of publications, are easily available at academic institutions, and have been widely used in previous SLRs [40,41]. The keywords employed for this SLR were (Group OR Team) AND (Assessment OR Assignment OR Project). Other phrases used for the search included "Group project", "Group assessment", "Group assignment", "Team project", "Team assessment", and "Team assignment". The keywords were kept as general as possible to identify all the existing literature around group-based assessments. These keywords were utilized to locate the relevant literature in the selected databases. The different database searches returned a total of 7565 documents. Within the databases, the search was restricted to peer-reviewed journal articles published in English between 2010 and 2021. The application of these restrictions reduced the number of documents to 2392.
The second phase (screening) involved a two-step process. Firstly, the duplicates arising from the different databases were removed, eliminating 587 articles. Secondly, the titles and abstracts of the remaining articles were assessed for relevance. The detailed inclusion and exclusion criteria are outlined in Table 1. Only eighty-eight articles were deemed relevant, as they potentially discussed aspects of group-based assessments in the higher education setting.
In the third phase, the eligibility of the eighty-eight articles was checked by reading the full texts to ensure that the included articles addressed the aims of the review. The full texts of 54 articles met the inclusion criteria; the remaining 34 articles did not address the aims of the review and were excluded.
The final stage entailed the inclusion of articles for analysis which predominantly discussed the design features of group-based assessments in higher education. An additional search was conducted at this stage on Google Scholar to check relevant studies for inclusion. Another seventeen peer-reviewed empirical studies were found in Google Scholar using the search terms above. The 17 articles were added to the previously identified 54 articles, thus resulting in a total of 71 articles (see Table A2 in Appendix A).
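For readers who wish to verify the selection arithmetic, a minimal sketch is given below. The counts are those reported above; the function itself is purely illustrative and forms no part of the original review procedure.

```python
# Illustrative reconstruction of the PRISMA selection flow described above.
# All counts are taken from the text; the code simply chains the stages.

def prisma_flow() -> int:
    identified = 7565              # identification: total database hits
    limited = 2392                 # after limits: peer-reviewed, English, 2010-2021
    deduplicated = limited - 587   # screening, step 1: duplicates removed
    relevant = 88                  # screening, step 2: titles/abstracts deemed relevant
    eligible = relevant - 34       # eligibility: full texts meeting the criteria
    return eligible + 17           # inclusion: plus the Google Scholar additions

assert prisma_flow() == 71         # total articles analyzed in this review
```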
Quality assessment of the 71 peer-reviewed empirical articles was conducted by following the approach described in [42]. The quality appraisal of the identified articles was guided by three questions.
  • Q1: Was the research methodology clearly described in the study?
  • Q2: Was the data collection method explicitly described in the study?
  • Q3: Were the data analysis steps clearly stated in the study?
The identified articles were assessed against these questions and ranked as high, medium, or low quality based on the scores given for each question. Each question carries a maximum score of two and a minimum of zero; therefore, an article could score a maximum of 6 (3 × 2). An article was considered to be of high, medium, or low quality if it scored 5, 4, or 2, respectively. Upon quality assessment, the vast majority of the articles were considered to be of high quality. This can be attributed to their publication in journals ranked Q1 and Q2 in the SCImago Journal Rank (SJR).
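As a worked example of this scoring scheme, the sketch below totals the three question scores and assigns a band. The band cut-offs (high for 5 or more, medium for 3–4, low for 2 or less) are an assumed reading of the thresholds quoted above, not values confirmed beyond the text.

```python
# Sketch of the quality appraisal described above. Each of Q1-Q3 scores
# 0-2, for a maximum of 6. The band boundaries are an assumed reading of
# "5, 4, or 2, respectively" (high >= 5, medium 3-4, low <= 2).

def appraise(q1: int, q2: int, q3: int) -> str:
    total = q1 + q2 + q3           # each question scored 0, 1, or 2
    if total >= 5:
        return "high"
    if total >= 3:
        return "medium"
    return "low"

print(appraise(2, 2, 1))           # -> "high"
print(appraise(1, 1, 0))           # -> "low"
```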
Figure 1 depicts the selection process of the reviewed articles. Key data, including reference, journal name, publication year, research method, and key findings were extracted from the included articles and organized in a Microsoft Excel sheet. A descriptive and thematic analysis was undertaken to identify the descriptive elements (geographic spread, research methods used, annual publication outputs, distribution of publication amongst journals, and citation analysis) and subsequently to group the critical themes of the reviewed articles. NVivo software was used for open coding and generating themes.

3.2. Trustworthiness of SLR

The trustworthiness of the SLR was maintained through the use of credibility, dependability, transferability, and an audit trail, as explained in Lincoln, Guba, and Pilotta [43]. For this review, credibility was achieved by being transparent about the review process and outlining it step by step so that the process was explicit to the reader. Furthermore, credibility was maintained through the use of member checking, as suggested by Lincoln, Guba, and Pilotta [43]. Biases were controlled by involving all the authors in the article review process and by reporting the outcomes of the reviewed articles in an objective manner [44,45]. All the authors ensured that the inclusion and exclusion criteria were applied as objectively as possible [46,47]. To ensure transferability, a thick description of the process was provided, and each step was discussed in detail. To achieve dependability, a clearly documented process was provided: PRISMA, a popular systematic literature review protocol, was used, and the review protocol was validated by all the authors. All the authors collectively decided on the keywords, search string, databases used, and inclusion and exclusion criteria, and any disagreement was resolved through discussion. The confirmability of the process was established once dependability, transferability, and credibility were obtained [43]. To achieve an audit trail, the authors maintained a record of the collected data in a Microsoft Excel sheet to enhance consistency and transparency [48].

4. Results and Discussion

4.1. Descriptive Analysis of Selected Articles

4.1.1. Countries Contributing to Group-Based Assessment Research

An analysis of the geographical distribution of the included studies demonstrates the popularity of the research field on a global scale [49]. The geographical origin of the reviewed articles provides future researchers with an understanding of where studies have been conducted, thus highlighting under-researched regions and the need for future research in those countries. Figure 2 illustrates the geographical distribution of the reviewed articles on group-based assessments in higher education. As shown in Figure 2, the United States stands out notably with the highest number of publications (26 of the 71 reviewed articles), followed by Australia (22 articles), the United Kingdom (UK) (4 articles), and Canada (3 articles). All the other countries contributed one or two publications each, including the Netherlands, Indonesia, Ireland, Israel, New Zealand, Hong Kong, Spain, Denmark, Turkey, the UAE, and South Africa. The results therefore indicate that group-based assessments have been researched mostly in affluent countries. Future research may be carried out in non-affluent countries, as developing teamwork skills through group-based assessments is one way to ensure that all graduates have the best opportunity to enter the labor market successfully and meet employers' demands.

4.1.2. Dominant Research Methods Used in the Selected Articles

Analyzing the research methods applied in the reviewed articles is essential in order to understand the methodological trends in the existing research, which will help future researchers to adopt an appropriate research method for similar studies. The researchers used most of the common methodological approaches and methods for the investigation of group-based assessments in higher education (see Table 2). Quantitative research methodology was the most commonly implemented, occurring in 52 of the 71 reviewed articles. Surveys/questionnaires were the most widely used research methods for seeking students' and/or teachers' perceptions of particular aspects of group-based assessments, for example, identifying students' perceptions regarding peer assessments and proposing improvements in their use [50]. A mixed-methods approach was utilized in 11 of the 71 reviewed articles. The mixed methods included the combination of interviews, observations, and surveys [51]; pre-/post-questionnaires, artefacts of the group processes, and student focus groups [52]; and surveys, focus group interviews, and individual staff interviews [53]. Qualitative research methodology was employed in only eight of the reviewed articles. Interviews were the most popular research method applied in these eight studies, primarily to explore and assess the experiences and views of students on group marking [54], peer assessments during a group poster presentation, and the use of new media technologies, particularly wikis, for the compiling and grading of group assessment tasks [55]. Although quantitative analysis provides strong statistical evidence for the results, on its own it may lack depth and rigor. Future research may employ more qualitative approaches to increase validity and reliability. Furthermore, the application of mixed methods can offer triangulation of the findings to enhance the robustness of future research.

4.1.3. Annual Publication Outputs of the Reviewed Articles

The 71 articles were published between 2010 (inclusive) and October 2021. Figure 3 illustrates the publication trends by year. The analysis of the 71 articles indicated that the number of publications related to group-based assessments in higher education fluctuated from 2010 to 2021. However, there was a marked spike in the number of publications in 2014 (13 articles), compared with only four articles published in 2013 and nine and eight articles in 2015 and 2012, respectively. From 2014 to 2018, there was a decline in the number of publications. The average number of publications from 2010 to 2015 was 7.33 per year, while the average decreased to 3.5 per year from 2016 to 2021. The years with the lowest numbers of publications were 2021, 2017, and 2018, with one, two, and two articles, respectively, within the set inclusion and exclusion criteria.

4.1.4. Distribution of Publication amongst Journals

Scholarly journals are important vehicles for accumulating and disseminating information and for communicating scientific achievements [56]. A journal analysis helps to identify the core journals publishing scholarly work on group-based assessments. The 71 reviewed articles were published in 46 journals covering education, construction, psychology, and other research domains; numerous journals contributed only one publication. Table 3 lists the top six journals, which each published at least two articles on group-based assessments. Assessment & Evaluation in Higher Education accounted for the highest number of articles (19 of the 71 reviewed articles), followed by Journal of Education for Business (3 articles). Nurse Education Today, Nurse Education in Practice, Teaching Sociology, and Higher Education contributed two articles each. The remaining 40 journals published one article each.

4.1.5. Citation Analysis

A citation analysis is a useful metric to determine the number of times the 71 reviewed articles were cited by other scholarly studies. The citation analysis aims to gauge the impact of the articles and their contribution to research on group-based assessments in higher education in order to establish policies and practices [57]. The citation analysis revealed that eighteen articles had been cited at least 50 times by other scholarly works in various influential journals by the end of 2022, as listed in Table 4. Most of the reviewed articles had not been cited more than 50 times at the time of writing. The authors of [50,58,59] had the three most cited articles, cited 153, 160, and 152 times, respectively. These articles reported on students' and teachers' perceptions of self- and peer assessments and the ways of conducting these practices in group-based assessments.

4.2. Critical Themes That Emerged from Thematic Analysis

4.2.1. Self- and Peer Evaluations

Peer evaluations have proven to be a useful method for addressing the problem of freeloading in group-based assessments. Individuals in a group assessment are traditionally awarded a single group mark, which may be unfair to students. To gauge the breadth and depth of an individual's learning, academics need to provide students with the opportunity to demonstrate their individual state of learning [14,72]. Individual contributions need to be reflected in the assessment design to avoid free-riding [73]. Students are often in a better position to judge peers' contributions than tutors [74,75,76,77]. When students undertake peer assessments, they consider themselves part of the assessment process, which motivates and encourages them and fosters independent learning [75]. Peer evaluations also improve students' reflective and practical skills through managing free-riders [78]. The incorporation of group members' peer evaluations makes students active, ensures equal participation from each student, promotes equity [79], and helps them to develop their participation through feedback [51]. Based on the peer evaluation feedback, marks for non-contributors are adjusted to ensure a fair distribution of marks [79]. In some studies, peer assessments were found to be less favored because they created a competitive and uncomfortable environment in the groups. This situation was shown to be improved by briefing students about the peer evaluation practice and incorporating formative peer evaluations, which may help free-riders to improve their contribution before the summative peer evaluation is performed [54]. In addition, academics should not employ peer evaluations merely as a 'wake-up call' issued to non-contributors after each round of formative peer evaluations, binding them to work better next time. Academics should look beyond such contemporary views of peer evaluations and instead encourage students to work collaboratively and cohesively, which will be more fruitful in the long run [80].
Alongside peer evaluations, students should complete self-evaluations, as these serve as an additional source of information for academics about the students' contributions to the group assessment. A student may receive a wide range of ratings from their peers, which may make it challenging for academics to determine the actual contribution of the student. Students' self-evaluations assist academics in understanding the reasons for high variation and in adjusting the marks accordingly. For example, in cases where a student receives a wide range of ratings from their peers, educators can review the ratings and decide if the peer evaluations need adjustment. However, self-evaluation marks should not typically be factored into the grades, as students tend to inflate their self-ratings [74,81]. Although peer evaluations in group assessments have been investigated widely, self-assessments were barely discussed in the reviewed literature. Future studies should explore self-evaluations to understand their dynamics in student learning and performance.
To execute self- and peer evaluations in group-based assessments, the reviewed literature evaluated and assessed a number of tools and techniques. Comparing the advantages of one technique over the others is not within the scope of this review and thus warrants future investigation. However, listing a number of peer evaluation tools helps academics select and implement tools to identify individual contributions and make the assessment process fair for all members of the group. Some of the self- and peer evaluation tools included it-IWF [82], SPARKPLUS [83], the Peer Assessment System (PAS) [65], the CASNIWF method [84], WebPA [11,52], a budget-of-points approach [85], and a web-based, color-coded system [74]. The pros and cons of different approaches to rating individual contributions should be explained to students when peer assessment is introduced [86].
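To illustrate how tools of this kind commonly derive individual marks from peer ratings, the sketch below implements a simplified WebPA-style weighting: each assessor's ratings are normalized to fractions of that assessor's total, summed per member, and scaled so that an average contributor receives exactly the group mark. This is a generic sketch of the approach, not the exact algorithm of any tool listed above; real systems add safeguards such as capping, non-submission penalties, and moderation.

```python
# Simplified WebPA-style peer weighting (illustrative only).

def individual_marks(ratings: dict[str, dict[str, float]],
                     group_mark: float) -> dict[str, float]:
    """ratings[assessor][member] = score awarded by assessor to member."""
    n_members = len({m for scores in ratings.values() for m in scores})
    weights: dict[str, float] = {}
    for assessor, scores in ratings.items():
        total = sum(scores.values())
        for member, score in scores.items():
            # Normalize so each assessor distributes a total weight of 1.
            weights[member] = weights.get(member, 0.0) + score / total
    # Scale so that an average contributor ends up with weight 1.0.
    return {m: round(group_mark * w * n_members / len(ratings), 1)
            for m, w in weights.items()}

# Hypothetical example: three members rate everyone (themselves included) out of 5.
ratings = {
    "ann": {"ann": 4, "ben": 4, "cat": 3},
    "ben": {"ann": 4, "ben": 4, "cat": 3},
    "cat": {"ann": 4, "ben": 4, "cat": 4},
}
print(individual_marks(ratings, group_mark=70))
# -> {'ann': 74.2, 'ben': 74.2, 'cat': 61.5}
```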
To better execute self- and peer evaluations, the existing literature proposes some techniques, such as the introduction of formative and summative peer assessments, adjusting peer marks, anonymity and confidentiality in peer evaluations, including qualitative and quantitative peer assessment questions, and teaching students the appropriate ways and justifications of self- and peer evaluations. These elements are discussed below.

Formative and Summative Peer Assessments

The reviewed literature indicated that formative peer assessments help non-contributors calibrate their performance and give them opportunities to improve their contributions and behavior during the course of group-based assessments [87]. Formative peer assessments within the group are effective in tackling group issues and conflicts as they appear [79]. Formative peer evaluations also allow academics to tailor their feedback on student performance [74,88,89]. Furthermore, having more than two peer evaluations during the course of group assessments has the potential to reduce social loafing, thereby increasing student satisfaction [32,90]. Summative peer assessments at the end of the group work are highly unlikely to reduce students' social-loafing behavior, as they do not allow social loafers to adjust their behavior and improve their contributions to the group work [67].

Adjusting Peers’ Marks

While peer evaluations are useful for identifying non-contributors, they may not be entirely reliable and fair. Students tend to judge their peers' performances more accurately, and demonstrate more honesty in assessing their group members, in formative assessments where the marks do not contribute towards the final grade [71]. When peer assessment marks are unchallenged and unadjusted, students tend to inflate their peers' marks. Given that some peer ratings may not be reliable and authentic, academics should be vigilant when incorporating them into the grade. It is advisable that academics review peer ratings and adjust unfair ratings before including them in the final grades [75,81,86]. Reviewing the peer ratings helps academics to understand the reasons for high variation in peer ratings and to adjust the ratings to better reflect the students' contributions. This practice could be challenging and impractical for academics in large classrooms. The educators' involvement in scrutinizing self- and peer ratings is a contentious topic in the literature, as academics are not always willing to invest the substantial amount of time required to check and regulate students' self- and peer evaluations. From an academic's point of view, peer assessments are time-consuming, although this view tended to differ somewhat between part-time and full-time instructors [91]. Academics need to review self- and peer ratings and students' comments in order to make an informed decision about the students' relative contributions to group work assessments. By doing so, academics may be in a position to identify the reasons for high variation (if any) in a student's rating and determine whether scores need to be adjusted to satisfactorily reflect the students' authentic contributions towards the group work [71,81].
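One practical way to keep this review workload manageable, consistent with the advice above, is to flag only those students whose received peer ratings vary widely and reserve manual moderation for them. The sketch below is a hypothetical triage filter; the spread threshold is arbitrary and would need calibration in practice.

```python
from statistics import pstdev

# Hypothetical triage step: surface only the students whose peer ratings
# disagree strongly, so academics moderate a handful of cases, not all.

def flag_for_review(peer_ratings: dict[str, list[float]],
                    max_spread: float = 1.0) -> list[str]:
    """peer_ratings[student] = list of scores received from peers."""
    return [student for student, scores in peer_ratings.items()
            if pstdev(scores) > max_spread]

received = {"ann": [4, 4, 5], "ben": [1, 5, 3], "cat": [4, 4, 4]}
print(flag_for_review(received))   # -> ['ben']: peers disagree about ben
```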

Anonymity and Confidentiality in Peer Evaluations

One aspect that appeared in the reviewed literature was the anonymity and confidentiality of peer evaluations. Peer evaluations need to be confidential and anonymous to ensure that the process is less intimidating for students [65,86]. Transparent peer evaluations inhibit students' ability to give authentic peer ratings, and students might give inaccurate and unfair reflections of individual contributions for fear of reciprocation [86,88,92,93]. Confidential peer evaluations help students to assess their own contributions in comparison to those of their peers [74,81].

Qualitative and Quantitative Questions in Peer Evaluations

Peer assessments should include both open-ended (qualitative) and close-ended (quantitative) questions in the feedback process [65,87]. The research indicates that close-ended feedback alone may be less reliable and less informative. Conversely, open-ended questions allow students to report freely on their observations and opinions of other peers' contributions to the group task [85,94]. A combination of quantitative and qualitative comments in peer evaluations is the best way of capturing students' self- and peer evaluations. Academics are recommended to ask students to explain their quantitative ratings. The explanation is useful for educators to understand any discrepancy in ratings and to moderate the students' marks accurately, as students may judge their peers' performance inaccurately [71,81,95].

Training for Self- and Peer Evaluations

The goals and objectives of introducing self- and peer evaluations are highly unlikely to be realized if students are not provided with sufficient training and instructions. Formal training on developing students' ability to assess peers and to achieve the objectives of peer evaluations should be provided [50,81,86,88,92]. Training on peer evaluations should begin in the first year of any program [81,86]. Students need to learn how to critique peers rather than provide feedback that hurts or discriminates [51]. A prior training scheme must be embedded into the curriculum to educate students that peer evaluations are not merely for grading peers and identifying free-riders but serve to improve their critical thinking and judgment and to foster deep learning. The educational purposes of incorporating peer evaluations must be communicated to students [50]. Academics should not take students' peer evaluation skills for granted; instead, efforts should be made in interventions, such as training in giving and responding to peer feedback [96]. Sridharan and Boud [96] demonstrated that without strong pedagogical practice in place and the appropriate feedback literacies provided to students, peer evaluation interventions might not be successful. In order to stimulate and motivate students and academics, a 'big picture' approach seems to be noteworthy [51,58]. Without preparation, self- and peer evaluations can be considered a 'cart before the horse' approach, which is not only ineffective but can also have detrimental effects on the outcomes [96]. Training measures might involve developing students' capacities in giving peer comments and reducing differences in marking expectations [92]. Other training approaches include explaining the score matrix, marking rubrics, and grade descriptors to students prior to the peer evaluation process and analyzing a case together [50]. While the significance of this training is evident in the reviewed literature, what is absent is empirical study measuring the effectiveness of those training methods; thus, this aspect requires further investigation.

4.2.2. Group Formation

The first and foremost aspect which academics consider when designing group-based assessments is the method of group formation. The literature indicated that there are two types of group formation: (1) student-selected group formation and (2) teacher-selected group formation, either structured or random. Both group formation approaches have their advantages and disadvantages. While student-selected groups tended to exhibit the highest level of social-loafing attitudes [97], teacher-selected group formation demonstrated increased effectiveness, improved learning outcomes, and reduced free-riding [98]. Furthermore, students experienced better learning outcomes when groups were formed based on the teachers' selection [15,90,99]. Structured team formation considering heterogeneity enhances students' motivation and improves their capacity to handle problematic situations [99]. A wide range of structured teacher-selected group formation techniques were identified in the reviewed literature, such as the flocking method [98] and groups formed by work experience, strengths and weaknesses testing, and GPA [97]. Groups composed based on the students' GPAs and work experience demonstrated better performances from their members [97]. In teacher-selected groups, academics balance the groups depending on the students' strengths and weaknesses [90]. However, students generally prefer the self-selected group formation process, which allows them to work with their friends [53,77]. On the other side of the spectrum, random selection was the group formation method most commonly used by academics [97,100], and it lowered free-riding [101]. However, this type of group formation was found to be ineffective: students expressed a reluctance to engage in the process [97] and made continuous complaints, thus decreasing team performance [99].
In the articles, heterogeneity in terms of gender, prior academic achievement, race, and ethnicity was explored [15,61,64,99,102,103], with gender-mixed groups performing significantly better than single-gender groups. Heterogeneous groups engaged in better collaborative learning than homogeneous groups. The findings of the reviewed articles suggest that gender balance in group composition supports equal contribution from group members and reduces students' free-riding attitudes [15,61,64]. However, Skelley, Firth, and Kendrach [97] demonstrated that gender heterogeneity resulted in negative outcomes. On the other hand, one study suggested that groups should be formed based on prior academic achievement in order to develop the desired level of heterogeneity [15]. Furthermore, racial composition did not demonstrate a significant difference in project grades [15]. However, ethnic heterogeneity demonstrated detrimental impacts on group project achievements [104]. Therefore, the review indicates that academics should adopt ability heterogeneity while reducing ethnic heterogeneity in group-based assessments in higher education.

4.2.3. Group Size

The reviewed articles indicated that students in smaller groups demonstrated better performance, improved communication, and reduced free-riding [15,66,103], because as the number of members in a group increases, the interpersonal transactions among group members multiply. Group size also needs to be informed by the nature and size of the task; ideally, the group should be sized so that the workload is not overwhelming for its members [90]. Regarding specific group size, there is little consensus on the number of members a group should have. Some findings suggest that a group should have a maximum of five members to maintain higher perceived participation [66], and groups exceeding this limit can facilitate social loafing [105]. However, academics may not always have the opportunity to form groups of four or five members because of large class sizes and the absence of a teaching assistant [103]. Similarly, Monson [103] and Monson [15] empirically suggested that groups should not be formed with fewer than four members and that groups of three members should specifically be avoided.

4.2.4. Training for Working in Groups

If the intention is to develop teamwork skills through group-based assessments in a subject or course, adequate training and instructions must be provided to the students. Providing students with comprehensive guidelines on working in a group and conducting weekly meetings to manage group work smoothly is deemed critical for the success of group-based assessments [100,106]. The students involved in Melville's (2020) study expressed trepidation about the lack of guidance on how to deal with group processes; they were provided with little instruction on managing group dynamics, resulting in shallow learning in the group-based assessments [76]. It is thus evident that some training for students must be arranged before they are placed in any group work assessment. The training might include discussing high-level communication topics and at least one discourse-level topic [67,107]. Students require guidance on how to work in a team and manage team conflicts and issues [86,107,108]. Shiu, Chan, Lam, Lee, and Kwong [86] recommended debriefing students on their experiences to clarify the students' tasks and the relationship functions of group-based assessments. Furthermore, a team charter is a useful way of developing team norms, operating guidelines, and performance management processes [109]. The institutionalization of team charters augmented team quality, team functioning, and member satisfaction [106]. In addition to a team charter, a communication charter delineates communication protocols, mutual expectations, and media for communication [67]. Setting out the group expectations at the beginning of the group work assessment may reduce group conflict and improve the work processes [90]. While the retained literature underscored training arrangements for students, a unique recommendation emerged in the study of Augar, Woodley, Whitefield, and Winchester [100]. They suggest that not only should students be provided with training on how to work in groups, but academics assessing, designing, and managing group-based assessments should also be offered professional development based on best practices [100]. They went on to explain that in many universities, academics have not been adequately supported and directed to conduct this complex process of team assessment [100].

4.2.5. Academics’ Support and Guidance

The academics' assistance and involvement in group-based assessments can bring a wide range of benefits to students. Instructor involvement in group projects was positively related to improved communication, cohesion, goal orientation, and planning. The instructors' advocacy promoted positive attitudes about group assessments and improved intra-group processes [76,110]. Teachers should act as supportive facilitators by engaging in resolving group complexity and conflict, discussing group skill development with students [93,103], and explaining marking rubrics to students [14]. Postlethwait [90] outlined a range of techniques implemented by instructors to help deal with group conflict effectively, including allocating designated class time for group meetings, making sure that students are aware of virtual tools, and reminding students about time-consuming tasks. The vital interaction between students and academics helps to engage both groups and the individuals within them in an ongoing conversation to clarify assessment specifications and improve the quality of the assessment tasks [72]. Students may not be aware of how to achieve the intended learning outcomes given the complexity involved in group-based assessments. Continuous feedback and conversation between academics and groups promote the unpacking of the assessment tasks and reinforce the underlying alignment of activities with the learning outcomes [14,72]. The academics' continuous monitoring and support are critical for students to achieve quality products. Rotational responsibilities among students, initiated by academics, expose students to different tasks and skills, helping them hone their abilities and reducing the problem of freeloaders [76]. As long as academics provide groups with support and ongoing guidance, students experience a positive learning environment even when the group does not function well [103].

4.2.6. Facilitation of Group Work by Technology

The use of technology has the potential to improve the facilitation of group work. In the 21st century, technology has a substantially positive influence on the execution of group assessments, as demonstrated especially during the COVID-19 pandemic [111]. Technology helps students to collaborate, communicate, facilitate group discussion, and share resources [52,90]. Students can use an array of platforms to facilitate group discussion and share resources, including wikis [107,112,113], discussion boards, Skype, FaceTime, Zoom, Google Docs, the GroupMe application [90], and social media platforms [107]. Wikis have been found useful for collaboration and resource sharing, which improves the process of group-based assessments and enriches learning because of their ease of use. Wikis also have the potential to track individual contributions and reduce the chances of free-riding, although students have to familiarize themselves with the technology [55]. It is very likely that students will have to meet as a group to perform group tasks outside the class setting. Therefore, students should have a common digital platform for discussion beyond regular class time [113] so that they can answer each other's questions about the group task or group processes.

5. Future Research

5.1. COVID-19 and Group-Based Assessment Design

The COVID-19 pandemic has transformed the education system dramatically in both developed and developing countries [114]. Due to the extensive lockdowns in countries around the world, education providers shifted their teaching from face-to-face to online mode [115]. As a result, academics introduced e-assessments, be they individual assessments or group-based assignments [116]. The reviewed literature suggests that group-based assessments were predominantly researched in face-to-face settings, and the findings may not be generalizable to group assessments conducted on online platforms. Nonetheless, academics can introduce learning analytics, which can assist facilitators in assessing the individual and group performances of group members based on the activities stored on electronic platforms [117]. In future research, various aspects and elements of group-based assessments should be reconsidered or validated to reflect online education systems in the case of any future pandemic.

5.2. Group Tasks Mapped to Learning Outcomes

In assessment design, the connection between group-based assessment tasks and the achievement of learning outcomes needs to be made explicit to ensure that the assessments are fair and effective. Without an explicit relationship between the assessment task design and the learning outcomes, students may find the assessment task ambiguous and off-target [72]. Therefore, academics must consider how to match and align group-based assessment tasks with their learning outcomes [72,95]. This correlation between group task and learning outcomes needs to be further researched.

5.3. Task Design

There is a lack of agreement among scholars regarding the influence of task complexity on the effectiveness of group work. According to Davies [7], group work should be adequately complex and challenging so that it is difficult for free-riders to hide; however, the task should not be too difficult. Based on empirical studies, Strong and Anderson [118] made a range of suggestions about how group tasks should be designed. They found that as the complexity of the task increases, it becomes difficult to assess the contribution of individuals; hence, free-riding issues arise in the group. In contrast, some studies propose that when the task is easy and does not challenge students, the inclination for social loafing increases: because of the lack of challenge and stimulation, group members reduce their contributions, which leads to an increase in free-riding attitudes among students [119]. This disagreement regarding task complexity in group assessments deserves further investigation to establish a consensus based on empirical research.

5.4. Weightage of Group-Based Assessments

Academics need to consider the weightage of the particular tasks in their design of the product(s) of group-based assessments. However, the reviewed literature rarely focuses on this aspect of group-based assessments. According to the reviewed articles, group-based assessments should be capped at 30%, both within each subject and across a course [100]. On the other hand, Plastow, Spiliotopoulou, and Prior [95] identified the fairest weighting of individual to group marks as 80% and 20%, respectively. This is certainly a scarcely researched area and requires further exploration. Moreover, in the reviewed literature, there is a lack of consensus around group assessment weightage and whether varied weightage has any impact on group performance and students' motivation. Future researchers may also investigate whether the number of group assessments in a course or program has any impact on students' learning and performance.
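As a worked example of the 80%/20% individual-to-group weighting reported by Plastow, Spiliotopoulou, and Prior [95], the sketch below blends a student's individual mark with the shared group mark; the sample marks are invented for illustration.

```python
# Worked example of the 80% individual / 20% group weighting noted above.
# The marks used here are hypothetical.

def final_mark(individual: float, group: float,
               w_individual: float = 0.8, w_group: float = 0.2) -> float:
    return w_individual * individual + w_group * group

print(final_mark(individual=75, group=60))   # -> 72.0
```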

5.5. Group Heterogeneity

Although diversity in groups generally improves students' performance and enriches the outcomes of the tasks, reporting on the influence of racial, ethnic, and gender heterogeneity was limited in the literature. In addition, a thorough investigation is required to explore the correlation between group size and student performance, as the current research presents contradictory findings. How, and to what extent, group size exerts influence in an educational setting requires further attention.

5.6. Framework to Design Group-Based Assessments

This review indicated that there is literature on the design and execution of group-based assessments in higher education. Various aspects, such as group formation [97,98], group size [90,103], self- and peer evaluations of group members [78,84], training students for working in groups [86], academics' involvement and guidance [67,110], the use of technology in facilitating group work assessments [67,112], task alignment and design [95], and the weightage of group tasks [100], have been scrutinized in the extant literature. Bree et al. [120] developed an institute-wide framework for group-based assessments based on an Irish institution; the framework, however, did not present a distinct set of best practices. Bayne et al. [121] did propose a range of best practices for group assessments for accounting students to improve their collaborative learning and team-building skills, but these may not be applicable in other disciplines. Future research can be carried out in different disciplines, such as Project Management, Engineering, Business, Nursing, and Medicine, as employers demand teamwork and its associated skills of graduates regardless of discipline.

6. Limitations

While this review provided a holistic overview of the key aspects of group assessments, the SLR is not without limitations. The review did not include peer-reviewed articles written in languages other than English or articles published before 2010 or after 2021, and it excluded the grey literature. Cohen's kappa statistic was not used to measure agreement between the researchers when assessing articles for inclusion. In addition, only group assessments in higher education were considered, so the findings cannot be generalized to other educational contexts. This review did not provide recommendations on best practices for conducting group-based assessments, which could be addressed in future studies. Furthermore, the reviewed literature spanned different disciplines, from Nursing to Sociology, adopted a wide range of research methods, and was published in different regional contexts. Therefore, combining the findings of the research and presenting them as themes may carry some contextual limitations. Future research may compare and contrast the findings and conduct a more focused review.

7. Conclusions and Practical Implications

Group-based assessment is highly valuable for developing teamwork and its associated skills in students. If well planned and managed, group-based assessments help students develop key skills and attributes demanded by employers. While different constructs of group-based assessments have been researched in silos, a review of the existing literature was essential to present the key themes to academics for practical reference. This systematic literature review identified 71 relevant articles from 46 scholarly journals. Six high-level themes were identified: self- and peer assessments, training students for group assessments, group formation, group size, academics' guidance, and the use of technology for facilitating group processes. Along with the critical themes, a range of future directions has been presented for consideration. Administering group-based assessments is often considered challenging for academics, as their effective administration involves considerable planning and careful monitoring during execution. While group assessments are considered cumbersome, this assessment type needs to be encouraged due to its contribution to the development of employability skills in graduates.
The findings of this review are useful for academics in higher education who design and administer group-based assessments in their curriculum. The findings highlight key areas of group-based assessments which, it is hoped, will be beneficial for academics when planning and administering group-based assessments in a practical, effective, and authentic way. The comprehensive themes which emerged from the review have the potential to enhance students' learning and academics' practice. The review can credibly inform best practices, policies, and guidelines for academics in higher education. In summary, this systematic review on group assessments adds insights to the existing literature and makes a scientific contribution to the existing body of research.

Author Contributions

Conceptualization, R.J.T.; formal analysis, R.J.T.; methodology, R.J.T. and S.S.; supervision, S.S., M.H. and G.C.; writing—original draft, R.J.T.; writing—review and editing, R.J.T., S.S., M.H. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data generated or analyzed during this study are available from the corresponding author on request.

Acknowledgments

This research is part of the PhD studies of the corresponding author. The authors would like to express gratitude to Central Queensland University for the support. The editors and the reviewers of this journal deserve appreciation for providing commendable feedback for improving the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. PRISMA checklist (adapted from Page et al. [122]).
Topic: Description
Title: Report as a systematic literature review.
Abstract: Include the title as a systematic review, background, methods, results, discussion, and other information, including funding and registration.
Introduction: Provide a justification for the research within the existing literature; include the objectives of the research and the research questions.
Methods: Identify eligibility criteria, sources of data collection, search strategy, selection and data collection process, types of data collected, bias assessment, and data analysis method.
Results: Include the detailed search and selection process, including the number of records found at each stage, study characteristics, bias in studies, results of studies, synthesis process, and reporting bias.
Discussion: Provide the interpretation of the results, limitations of the included studies, limitations of the review process, and practical implications.
Other information: Mention registration information, financial and non-financial support, competing interests, and availability of data, code, and other materials.
Table A2. List of selected articles for the SLR.
Reference Number: Article with Author(s)
1. Eliot, Howard, Nouwens, Stojcevski, Mann, Prpic, Gabb, Venkatesan, and Kolmos [72]
2. Bong and Park [94]
3. Paterson and Prideaux [14]
4. Lockeman, Dow, and Randell [85]
5. Sridharan and Boud [96]
6. Ko [82]
7. Wu, Chanda, and Willison [83]
8. Friess and Goupee [88]
9. Sridharan, Tai, and Boud [71]
10. Caple and Bogle [55]
11. Fete, Haight, Clapp, and McCollum [87]
12. Handayani and Genisa [51]
13. Jin [80]
14. Melville [76]
15. Harding [98]
16. Anson and Goodman [65]
17. Cen, Ruta, Powell, Hirsch, and Ng [61]
18. Kooloos, Klaassen, Vereijken, Van Kuppeveld, Bolhuis, and Vorstenbosch [66]
19. Monson [103]
20. Moore and Hampton [53]
21. Spatar, Penna, Mills, Kutija, and Cooke [84]
22. Takeda and Homberg [64]
23. Lavy and Yadin [123]
24. Monson [15]
25. Swaray [101]
26. Strauss, U, and Young [62]
27. Plastow, Spiliotopoulou, and Prior [95]
28. Postlethwait [90]
29. Lam [67]
30. Gransberg [79]
31. Dommeyer [124]
32. Sprague, Wilson, and McKenzie [81]
33. Moraes, Michaelidou, and Canning [93]
34. Nepal [125]
35. Mi and Gould [113]
36. Maiden and Perry [10]
37. Román-Calderón, Robledo-Ardila, and Velez-Calle [89]
38. Shiu, Chan, Lam, Lee, and Kwong [86]
39. Parratt, Fahy, and Hastie [107]
40. Skelley, Firth, and Kendrach [97]
41. Sahin [99]
42. Adwan [74]
43. Lee, Ahonen, Navarette, and Frisch [77]
44. Ohaja, Dunlea, and Muldoon [54]
45. Smith and Rogers [126]
46. Biesma, Kennedy, Pawlikowska, Brugha, Conroy, and Doyle [92]
47. Augar, Woodley, Whitefield, and Winchester [100]
48. Wagar and Carroll [127]
49. Aaron, McDowell, and Herdman [106]
50. Planas Lladó, Soley, Fraguell Sansbelló, Pujolras, Planella, Roura-Pascual, Suñol Martínez, and Moreno [50]
51. Willey and Gardner [59]
52. Bailey, Barber, and Ferguson [110]
53. Ding, Bosker, Xu, Rugers, and Heugten [104]
54. Delaney, Fletcher, Cameron, and Bodle [108]
55. Dingel and Wei [128]
56. McClure, Webber, and Clark [91]
57. Adachi, Tai, and Dawson [58]
58. Demir [129]
59. Lubbers [130]
60. Orr [63]
61. Agarwal and Rathod [131]
62. Thondhlana and Belluigi [132]
63. O'Neill et al. [133]
64. Mostert and Snowball [60]
65. Vaughan et al. [134]
66. D'Eon and Trinder [135]
67. Weaver and Esposto [70]
68. Guzmán [136]
69. Warhuus et al. [137]
70. Rienties, Alcott, and Jindal-Snape [68]
71. Volkov and Volkov [69]

References

  1. AAGE. Australian Association of Graduate Employers. Available online: https://aage.com.au/ (accessed on 21 April 2022).
  2. Riebe, L.; Roepen, D.; Santarelli, B.; Marchioro, G. Teamwork: Effectively teaching an employability skill. Educ. + Train. 2010, 52, 528–539. [Google Scholar] [CrossRef]
  3. Riebe, L.; Girardi, A.; Whitsed, C. Teaching teamwork in Australian university business disciplines: Evidence from a systematic literature review. Iss. Educ. Res. 2017, 27, 134–150. [Google Scholar]
  4. Harder, C.; Jackson, G.; Lane, J. Talent Is Not Enough; Canada West Foundation: Calgary, AB, Canada, 2014. [Google Scholar]
  5. Forsell, J.; Frykedal, K.F.; Chiriac, E.H. Group Work Assessment: Assessing Social Skills at Group Level. Small Group Res. 2019, 51, 87–124. [Google Scholar] [CrossRef]
  6. Van Laar, E.; van Deursen, A.J.; van Dijk, J.A.; de Haan, J. Determinants of 21st-century skills and 21st-century digital skills for workers: A systematic literature review. Sage Open 2020, 10, 2158244019900176. [Google Scholar] [CrossRef]
  7. Davies, W.M. Groupwork as a form of assessment: Common problems and recommended solutions. High. Educ. 2009, 58, 563–584. [Google Scholar] [CrossRef]
  8. Huff, P.L. The Goal Project: A Group Assignment to Encourage Creative Thinking, Leadership Abilities and Communication Skills. Account. Educ. 2014, 23, 582–594. [Google Scholar] [CrossRef]
  9. Dijkstra, J.; Latijnhouwers, M.; Norbart, A.; Tio, R.A. Assessing the “I” in group work assessment: State of the art and recommendations for practice. Med. Teach. 2016, 38, 675–682. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Maiden, B.; Perry, B. Dealing with free-riders in assessed group work: Results from a study at a UK university. Assess. Eval. High. Educ. 2011, 36, 451–464. [Google Scholar] [CrossRef]
  11. Murray, J.-A.; Boyd, S. A Preliminary Evaluation of Using WebPA for Online Peer Assessment of Collaborative Performance by Groups of Online Distance Learners. Int. J. E-Learn. Dist. Educ. 2015, 30, n2. [Google Scholar]
  12. Forsell, J.; Frykedal, K.F.; Chiriac, E.H. Teachers’ perceived challenges in group work assessment. Cogent Educ. 2021, 8, 1886474. [Google Scholar] [CrossRef]
  13. Riebe, L.; Girardi, A.; Whitsed, C. A Systematic Literature Review of Teamwork Pedagogy in Higher Education. Small Group Res. 2016, 47, 619–664. [Google Scholar] [CrossRef]
  14. Paterson, T.; Prideaux, M. Exploring collaboration in online group based assessment contexts: Undergraduate Business Program. J. Univ. Teach. Learn. Pract. 2020, 17, 3. [Google Scholar]
  15. Monson, R. Groups That Work: Student Achievement in Group Research Projects and Effects on Individual Learning. Teach. Sociol. 2017, 45, 240–251. [Google Scholar] [CrossRef]
  16. Dearnley, C.; Rhodes, C.; Roberts, P.; Williams, P.; Prenton, S. Team based learning in nursing and midwifery higher education; a systematic review of the evidence for change. Nurse Educ. Today 2018, 60, 75–83. [Google Scholar] [CrossRef]
  17. Xiao, Y.; Watson, M. Guidance on Conducting a Systematic Literature Review. J. Plan. Educ. Res. 2017, 39, 93–112. [Google Scholar] [CrossRef]
  18. Higgins, J.P.; Green, S.; Scholten, R. Maintaining reviews: Updates, amendments and feedback. In Cochrane Handbook for Systematic Reviews of Interventions; Wiley: Hoboken, NJ, USA, 2008; p. 31. [Google Scholar]
  19. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report, Version 2.3; Keele University: Keele, UK; University of Durham: Durham, UK, 2007. [Google Scholar]
  20. Chen, C.-M.; Kuo, C.-H. An optimized group formation scheme to promote collaborative problem-based learning. Comput. Educ. 2019, 133, 94–115. [Google Scholar] [CrossRef]
  21. Cumming, J. Student-initiated group management strategies for more effective and enjoyable group work experiences. J. Hosp. Leis. Sport Tour. Educ. 2010, 9, 31–45. [Google Scholar] [CrossRef]
  22. Rust, C. A Briefing on Assessment of Large Groups; LTSN: York, UK, 2001. [Google Scholar]
  23. Alam, M.; Gale, A.; Brown, M.; Khan, A. The importance of human skills in project management professional development. Int. J. Manag. Proj. Bus. 2010, 3, 495–516. [Google Scholar] [CrossRef]
  24. Johnson, D.W.; Johnson, R.T. An Educational Psychology Success Story: Social Interdependence Theory and Cooperative Learning. Educ. Res. 2009, 38, 365–379. [Google Scholar] [CrossRef] [Green Version]
  25. Lee, S.-M. The relationships between higher order thinking skills, cognitive density, and social presence in online learning. Internet High. Educ. 2014, 21, 41–52. [Google Scholar] [CrossRef]
  26. Garrison, D.R.; Anderson, T.; Archer, W. Critical thinking, cognitive presence, and computer conferencing in distance education. Am. J. Dist. Educ. 2001, 15, 7–23. [Google Scholar] [CrossRef] [Green Version]
  27. Kumar, R. The Effect of Collaborative Learning on Enhancing Student Achievement: A Meta-Analysis. Master’s Thesis, Concordia University, Montreal, QC, Canada, 2017. [Google Scholar]
  28. Qureshi, M.A.; Khaskheli, A.; Qureshi, J.A.; Raza, S.A.; Yousufi, S.Q. Factors affecting students’ learning performance through collaborative learning and engagement. Interact. Learn. Environ. 2021, 1–21. [Google Scholar] [CrossRef]
  29. Alm, F.; Colnerud, G. Teachers’ Experiences of Unfair Grading. Educ. Assess. 2015, 20, 132–150. [Google Scholar] [CrossRef]
  30. Sutton, M.; Zamora, M.; Best, L. Practical Insights on the Pedagogy of Group Work. Res. Teach. Dev. Educ. 2005, 22, 71–81. [Google Scholar]
  31. Schreiber, L.M.; Valle, B.E. Social Constructivist Teaching Strategies in the Small Group Classroom. Small Group Res. 2013, 44, 395–411. [Google Scholar] [CrossRef]
  32. Aggarwal, P.; O’Brien, C.L. Social loafing on group projects: Structural antecedents and effect on student satisfaction. J. Mark. Educ. 2008, 30, 255–264. [Google Scholar] [CrossRef]
  33. Watkins, R. Groupwork and Assessment: The Handbook for Economics Lecturers; Economics Network: Bristol, UK, 2004; pp. 1–24. [Google Scholar]
  34. Thom, M. Are group assignments effective pedagogy or a waste of time? A review of the literature and implications for practice. Teach. Public Adm. 2020, 38, 257–269. [Google Scholar] [CrossRef]
  35. Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
  36. Imbiri, S.; Rameezdeen, R.; Chileshe, N.; Statsenko, L. A Novel Taxonomy for Risks in Agribusiness Supply Chains: A Systematic Literature Review. Sustainability 2021, 13, 9217. [Google Scholar] [CrossRef]
  37. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [Green Version]
  38. Adewuyi, M.; Morales, K.; Lindsey, A. Impact of experiential dementia care learning on knowledge, skills and attitudes of nursing students: A systematic literature review. Nurse Educ. Pract. 2022, 62, 103351. [Google Scholar] [CrossRef] [PubMed]
  39. de Araújo, M.C.B.; Alencar, L.H.; Mota, C.M.D.M. Project procurement management: A structured literature review. Int. J. Proj. Manag. 2017, 35, 353–377. [Google Scholar] [CrossRef]
  40. Wijewickrama, M.; Chileshe, N.; Rameezdeen, R.; Ochoa, J.J. Quality assurance in reverse logistics supply chain of demolition waste: A systematic literature review. Waste Manag. Res. J. Sustain. Circ. Econ. 2020, 39, 3–24. [Google Scholar] [CrossRef] [PubMed]
  41. Ahmed, R.; Philbin, S.P. Systematic literature review of project manager’s leadership competencies. Eng. Constr. Archit. Manag. 2020, 28, 1–30. [Google Scholar] [CrossRef]
  42. Kochhar, N. Social Media Marketing in the Fashion Industry: A Systematic Literature Review and Research Agenda. Master’s Thesis, The University of Manchester, Manchester, UK, 2021. [Google Scholar]
  43. Lincoln, Y.S.; Guba, E.G.; Pilotta, J. Naturalistic Inquiry; Sage: Newbury Park, CA, USA, 1985. [Google Scholar]
  44. Zachary, K.F. Caregiver Burden with Alzheimer’s and Dementia Patients: A Systematic Literature Review. Ph.D. Thesis, The Chicago School of Professional Psychology, Chicago, IL, USA, 2021. [Google Scholar]
  45. Coleman, S.A. Strategies to Combat Elder Abuse: A Systematic Literature Review. Ph.D. Thesis, University of Arizona Global Campus, Chandler, AZ, USA, 2021. [Google Scholar]
  46. Okoli, C. A guide to conducting a standalone systematic literature review. Commun. Assoc. Inform. Syst. 2015, 37, 43. [Google Scholar] [CrossRef] [Green Version]
  47. Hoeft, R. Confidence Issues during Athletic Injury Recovery: A Systematic Literature Review. Ph.D. Thesis, University of Arizona Global Campus, Chandler, AZ, USA, 2021. [Google Scholar]
  48. Rosario, M.M. Voir Dire Suitability: A Comprehensive Systematic Literature Review of the Jury Selection; University of Arizona Global Campus: Chandler, AZ, USA, 2021. [Google Scholar]
  49. Amin, T.; Khan, F.; Zuo, M.J. A bibliometric analysis of process system failure and reliability literature. Eng. Fail. Anal. 2019, 106, 104152. [Google Scholar] [CrossRef]
  50. Lladó, A.P.; Soley, L.F.; Sansbelló, R.M.F.; Pujolras, G.A.; Planella, J.P.; Roura-Pascual, N.; Martínez, J.J.S.; Moreno, L.M.; Planas, A. Student perceptions of peer assessment: An interdisciplinary study. Assess. Eval. High. Educ. 2013, 39, 592–610. [Google Scholar] [CrossRef] [Green Version]
  51. Handayani, R.D.; Genisa, M.U.; Triyanto. Empowering Physics Students’ Performance in a Group Discussion through Two Types of Peer Assessment. Int. J. Instr. 2019, 12, 655–668. [Google Scholar]
  52. Lawrie, G.A.; Gahan, L.R.; Matthews, K.E.; Weaver, G.C.; Bailey, C.; Adams, P.; Kavanagh, L.J.; Long, P.D.; Taylor, M. Technology Supported Facilitation and Assessment of Small Group Collaborative Inquiry Learning in Large First-Year Classes. J. Learn. Des. 2014, 7, 120–135. [Google Scholar]
  53. Moore, P.; Hampton, G. ‘It’s a bit of a generalisation, but…’: Participant perspectives on intercultural group assessment in higher education. Assess. Eval. High. Educ. 2015, 40, 390–406. [Google Scholar] [CrossRef]
  54. Ohaja, M.; Dunlea, M.; Muldoon, K. Group marking and peer assessment during a group poster presentation: The experiences and views of midwifery students. Nurse Educ. Pract. 2013, 13, 466–470. [Google Scholar] [CrossRef] [PubMed]
  55. Caple, H.; Bogle, M. Making group assessment transparent: What wikis can contribute to collaborative projects. Assess. Eval. High. Educ. 2011, 38, 198–210. [Google Scholar] [CrossRef]
  56. Zou, X.; Yue, W.L.; Le Vu, H. Visualization and analysis of mapping knowledge domain of road safety studies. Accid. Anal. Prev. 2018, 118, 131–145. [Google Scholar] [CrossRef] [PubMed]
  57. Hou, J.; Yang, X.; Chen, C. Emerging trends and new developments in information science: A document co-citation analysis (2009–2016). Scientometrics 2018, 115, 869–892. [Google Scholar] [CrossRef]
  58. Adachi, C.; Tai, J.H.-M.; Dawson, P. Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education. Assess. Eval. High. Educ. 2017, 43, 294–306. [Google Scholar] [CrossRef]
  59. Willey, K.; Gardner, A. Investigating the capacity of self and peer assessment activities to engage students and promote learning. Eur. J. Eng. Educ. 2010, 35, 429–443. [Google Scholar] [CrossRef]
  60. Mostert, M.; Snowball, J.D. Where angels fear to tread: Online peer-assessment in a large first-year class. Assess. Eval. High. Educ. 2013, 38, 674–686. [Google Scholar] [CrossRef]
  61. Cen, L.; Ruta, D.; Powell, L.; Hirsch, B.; Ng, J. Quantitative approach to collaborative learning: Performance prediction, individual assessment, and group composition. Int. J. Comput. Collab. Learn. 2016, 11, 187–225. [Google Scholar] [CrossRef]
  62. Strauss, P.; U, A.; Young, S. ‘I know the type of people I work well with’: Student anxiety in multicultural group projects. Stud. High. Educ. 2011, 36, 815–829. [Google Scholar] [CrossRef]
  63. Orr, S. Collaborating or fighting for the marks? Students’ experiences of group work assessment in the creative arts. Assess. Eval. High. Educ. 2010, 35, 301–313. [Google Scholar] [CrossRef]
  64. Takeda, S.; Homberg, F. The effects of gender on group work process and achievement: An analysis through self- and peer-assessment. Br. Educ. Res. J. 2013, 40, 373–396. [Google Scholar] [CrossRef] [Green Version]
  65. Anson, R.; Goodman, J.A. A Peer Assessment System to Improve Student Team Experiences. J. Educ. Bus. 2013, 89, 27–34. [Google Scholar] [CrossRef]
  66. Kooloos, J.G.; Klaassen, T.; Vereijken, M.; Van Kuppeveld, S.; Bolhuis, S.; Vorstenbosch, M. Collaborative group work: Effects of group size and assignment structure on learning gain, student satisfaction and perceived participation. Med. Teach. 2011, 33, 983–988. [Google Scholar] [CrossRef] [PubMed]
  67. Lam, C. The Role of Communication and Cohesion in Reducing Social Loafing in Group Projects. Bus. Prof. Commun. Q. 2015, 78, 454–475. [Google Scholar] [CrossRef]
  68. Rienties, B.; Alcott, P.; Jindal-Snape, D. To let students self-select or not: That is the question for teachers of culturally diverse groups. J. Stud. Int. Educ. 2014, 18, 64–83. [Google Scholar] [CrossRef] [Green Version]
  69. Volkov, A.; Volkov, M. Teamwork benefits in tertiary education: Student perceptions that lead to best practice assessment design. Educ. + Train. 2015, 57, 262–278. [Google Scholar] [CrossRef]
  70. Weaver, D.; Esposto, A. Peer assessment as a method of improving student engagement. Assess. Eval. High. Educ. 2012, 37, 805–816. [Google Scholar] [CrossRef]
  71. Sridharan, B.; Tai, J.; Boud, D. Does the use of summative peer assessment in collaborative group work inhibit good judgement? High. Educ. 2019, 77, 853–870. [Google Scholar] [CrossRef]
  72. Eliot, M.; Howard, P.; Nouwens, F.; Stojcevski, A.; Mann, L.; Prpic, J.K.; Gabb, R.; Venkatesan, S.; Kolmos, A. Developing a conceptual model for the effective assessment of individual student learning in team-based subjects. Australas. J. Eng. Educ. 2012, 18, 105–112. [Google Scholar] [CrossRef]
  73. Alden, J. Assessment of Individual Student Performance in Online Team Projects. J. Asynchron. Learn. Netw. 2011, 15, 5–20. [Google Scholar] [CrossRef]
  74. Adwan, J. Dynamic online peer evaluations to improve group assignments in nursing e-learning environment. Nurse Educ. Today 2016, 41, 67–72. [Google Scholar] [CrossRef] [PubMed]
  75. Subramanian, R.; Lejk, M. Enhancing student learning, participation and accountability in undergraduate group projects through peer assessment. S. Afr. J. High. Educ. 2013, 27, 368–382. [Google Scholar]
  76. Melville, A. “The Group Must Come First Next Time”: Students’ Self-Assessment of Groupwork in a First-Year Criminal Justice Topic. J. Crim. Justice Educ. 2020, 31, 82–99. [Google Scholar] [CrossRef]
  77. Lee, C.J.; Ahonen, K.; Navarette, E.; Frisch, K. Successful student group projects: Perspectives and strategies. Teach. Learn. Nurs. 2015, 10, 186–191. [Google Scholar] [CrossRef]
  78. Pocock, T.M.; Sanders, T.; Bundy, C. The impact of teamwork in peer assessment: A qualitative analysis of a group exercise at a UK medical school. Biosci. Educ. 2010, 15, 1–12. [Google Scholar] [CrossRef] [Green Version]
  79. Gransberg, D.D. Quantifying the Impact of Peer Evaluations on Student Team Project Grading. Int. J. Constr. Educ. Res. 2010, 6, 3–17. [Google Scholar] [CrossRef]
  80. Jin, X.-H. A comparative study of effectiveness of peer assessment of individuals’ contributions to group projects in undergraduate construction management core units. Assess. Eval. High. Educ. 2012, 37, 577–589. [Google Scholar] [CrossRef]
  81. Sprague, M.; Wilson, K.F.; McKenzie, K.S. Evaluating the quality of peer and self evaluations as measures of student contributions to group projects. High. Educ. Res. Dev. 2019, 38, 1061–1074. [Google Scholar] [CrossRef]
  82. Ko, S.-S. Peer assessment in group projects accounting for assessor reliability by an iterative method. Teach. High. Educ. 2013, 19, 301–314. [Google Scholar] [CrossRef]
  83. Wu, C.; Chanda, E.; Willison, J. Implementation and outcomes of online self and peer assessment on group based honours research projects. Assess. Eval. High. Educ. 2013, 39, 21–37. [Google Scholar] [CrossRef]
  84. Spatar, C.; Penna, N.; Mills, H.; Kutija, V.; Cooke, M. A robust approach for mapping group marks to individual marks using peer assessment. Assess. Eval. High. Educ. 2014, 40, 371–389. [Google Scholar] [CrossRef]
  85. Lockeman, K.S.; Dow, A.W.; Randell, A.L. Notes from the Field: Evaluating a Budget-Based Approach to Peer Assessment for Measuring Collaboration Among Learners on Interprofessional Teams. Eval. Health Prof. 2019, 43, 197–200. [Google Scholar] [CrossRef] [PubMed]
  86. Shiu, A.T.; Chan, C.W.; Lam, P.; Lee, J.; Kwong, A.N. Baccalaureate nursing students’ perceptions of peer assessment of individual contributions to a group project: A case study. Nurse Educ. Today 2012, 32, 214–218. [Google Scholar] [CrossRef]
  87. Fete, M.G.; Haight, R.; Clapp, P.; McCollum, M. Peer Evaluation Instrument Development, Administration, and Assessment in a Team-based Learning Curriculum. Am. J. Pharm. Educ. 2017, 81, 68. [Google Scholar] [CrossRef] [PubMed]
  88. Friess, W.A.; Goupee, A.J. Using Continuous Peer Evaluation in Team-Based Engineering Capstone Projects: A Case Study. IEEE Trans. Educ. 2020, 63, 82–87. [Google Scholar] [CrossRef]
  89. Román-Calderón, J.P.; Robledo-Ardila, C.; Velez-Calle, A. Global virtual teams in education: Do peer assessments motivate student effort? Stud. Educ. Eval. 2021, 70, 101021. [Google Scholar] [CrossRef]
  90. Postlethwait, A.E. Group projects in social work education: The influence of group characteristics and moderators on undergraduate student outcomes. J. Teach. Soc. Work 2016, 36, 256–274. [Google Scholar] [CrossRef]
  91. McClure, C.; Webber, A.; Clark, G.L. Peer Evaluations in Team Projects: What a Major Disconnect Between Students and Business Instructors. J. High. Educ. Theory Pract. 2015, 15, 27–35. [Google Scholar]
  92. Biesma, R.; Kennedy, M.-C.; Pawlikowska, T.; Brugha, R.; Conroy, R.; Doyle, F. Peer assessment to improve medical student’s contributions to team-based projects: Randomised controlled trial and qualitative follow-up. BMC Med. Educ. 2019, 19, 371. [Google Scholar] [CrossRef]
  93. Moraes, C.; Michaelidou, N.; Canning, L. Students’ Attitudes toward a Group Coursework Protocol and Peer Assessment System. Ind. High. Educ. 2016, 30, 117–128. [Google Scholar] [CrossRef] [Green Version]
  94. Bong, J.; Park, M.S. Peer assessment of contributions and learning processes in group projects: An analysis of information technology undergraduate students’ performance. Assess. Eval. High. Educ. 2020, 45, 1155–1168. [Google Scholar] [CrossRef]
  95. Plastow, N.; Spiliotopoulou, G.; Prior, S. Group assessment at first year and final degree level: A comparative evaluation. Innov. Educ. Teach. Int. 2010, 47, 393–403. [Google Scholar] [CrossRef]
  96. Sridharan, B.; Boud, D. The effects of peer judgements on teamwork and self-assessment ability in collaborative group work. Assess. Eval. High. Educ. 2018, 44, 894–909. [Google Scholar] [CrossRef]
  97. Skelley, J.W.; Firth, J.M.; Kendrach, M.G. Picking teams: Student workgroup assignment methods in U.S. schools of pharmacy. Curr. Pharm. Teach. Learn. 2015, 7, 745–752. [Google Scholar] [CrossRef]
  98. Harding, L.M. Students of a Feather “Flocked” Together: A Group Assignment Method for Reducing Free-Riding and Improving Group and Individual Learning Outcomes. J. Mark. Educ. 2017, 40, 117–127. [Google Scholar] [CrossRef]
  99. Sahin, Y.G. A team building model for software engineering courses term projects. Comput. Educ. 2011, 56, 916–922. [Google Scholar] [CrossRef]
  100. Augar, N.; Woodley, C.J.; Whitefield, D.; Winchester, M. Exploring academics’ approaches to managing team assessment. Int. J. Educ. Manag. 2016, 30, 1150–1162. [Google Scholar] [CrossRef]
  101. Swaray, R. An evaluation of a group project designed to reduce free-riding and promote active learning. Assess. Eval. High. Educ. 2011, 37, 285–292. [Google Scholar] [CrossRef]
  102. Ding, Z.; Zhu, M.; Tam, V.W.; Yi, G.; Tran, C.N. A system dynamics-based environmental benefit assessment model of construction waste reduction management at the design and construction stages. J. Clean. Prod. 2018, 176, 676–692. [Google Scholar] [CrossRef]
  103. Monson, R.A. Do they have to like it to learn from it? Students’ experiences, group dynamics, and learning outcomes in group research projects. Teach. Sociol. 2019, 47, 116–134. [Google Scholar] [CrossRef]
  104. Ding, N.; Bosker, R.; Xu, X.; Rugers, L.; Van Heugten, P.P. International Group Heterogeneity and Students’ Business Project Achievement. J. Teach. Int. Bus. 2015, 26, 197–215. [Google Scholar] [CrossRef]
  105. Colbeck, C.L.; Campbell, S.E.; Bjorklund, S.A. Grouping in the Dark: What College Students Learn from Group Projects. J. High. Educ. 2000, 71, 60. [Google Scholar] [CrossRef]
  106. Aaron, J.R.; McDowell, W.C.; Herdman, A.O. The Effects of a Team Charter on Student Team Behaviors. J. Educ. Bus. 2014, 89, 90–97. [Google Scholar] [CrossRef]
  107. Parratt, J.A.; Fahy, K.M.; Hastie, C.R. Midwifery students’ evaluation of team-based academic assignments involving peer-marking. Women Birth 2014, 27, 58–63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  108. Delaney, D.A.; Fletcher, M.; Cameron, C.; Bodle, K. Online self and peer assessment of team work in accounting education. Account. Res. J. 2013, 26, 222–238. [Google Scholar] [CrossRef]
  109. Mathieu, J.E.; Rapp, T. Laying the foundation for successful team performance trajectories: The roles of team charters and performance strategies. J. Appl. Psychol. 2009, 94, 90–103. [Google Scholar] [CrossRef] [PubMed]
  110. Bailey, S.; Barber, L.K.; Ferguson, A.J. Promoting perceived benefits of group projects: The role of instructor contributions and intragroup processes. Teach. Psychol. 2015, 42, 179–183. [Google Scholar] [CrossRef]
  111. Fuchs, K. Students’ Perceptions Concerning Emergency Remote Teaching during COVID-19: A Case Study between Higher Education Institutions in Thailand and Finland. Perspect. Glob. Dev. Technol. 2021, 20, 278–288. [Google Scholar] [CrossRef]
  112. Kear, K.; Donelan, H.; Williams, J. Using wikis for online group projects: Student and tutor perspectives. Int. Rev. Res. Open Distrib. Learn. 2014, 15, 70–90. [Google Scholar] [CrossRef] [Green Version]
  113. Mi, M.; Gould, D. Wiki Technology Enhanced Group Project to Promote Active Learning in a Neuroscience Course for First-Year Medical Students: An Exploratory Study. Med. Ref. Serv. Q. 2014, 33, 125–135. [Google Scholar] [CrossRef]
  114. Rashid, S.; Yadav, S.S. Impact of COVID-19 Pandemic on Higher Education and Research. Ind. J. Hum. Dev. 2020, 14, 340–343. [Google Scholar] [CrossRef]
  115. Flores, M.A.; Barros, A.; Simão, A.M.V.; Pereira, D.; Flores, P.; Fernandes, E.; Costa, L.; Ferreira, P.C. Portuguese higher education students’ adaptation to online teaching and learning in times of the COVID-19 pandemic: Personal and contextual factors. High. Educ. 2022, 83, 1389–1408. [Google Scholar] [CrossRef] [PubMed]
  116. St-Onge, C.; Ouellet, K.; Lakhal, S.; Dubé, T.; Marceau, M. COVID-19 as the tipping point for integrating e-assessment in higher education practices. Br. J. Educ. Technol. 2022, 53, 349–366. [Google Scholar] [CrossRef] [PubMed]
  117. Williams, P. Assessing collaborative learning: Big data, analytics and university futures. Assess. Eval. High. Educ. 2017, 42, 978–989. [Google Scholar] [CrossRef]
  118. Strong, J.T.; Anderson, R.E. Free-Riding in Group Projects: Control Mechanisms and Preliminary Data. J. Mark. Educ. 1990, 12, 61–67. [Google Scholar] [CrossRef]
  119. Jackson, J.M.; Williams, K.D. Social loafing on difficult tasks: Working collectively can improve performance. J. Personal. Soc. Psychol. 1985, 49, 937. [Google Scholar] [CrossRef]
  120. Bree, R.; Cooney, C.; Maguire, M.; Morris, P.; Mullen, P. An institute-wide framework for assessed group work: Development and initial implementation in an Irish Higher Education Institution. High. Educ. Pedagog. 2019, 4, 347–367. [Google Scholar] [CrossRef] [Green Version]
  121. Bayne, L.; Birt, J.; Hancock, P.; Schonfeldt, N.; Agrawal, P. Best practices for group assessment tasks. J. Account. Educ. 2022, 59, 100770. [Google Scholar] [CrossRef]
  122. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  123. Lavy, I.; Yadin, A. Team-based peer review as a form of formative assessment-the case of a systems analysis and design workshop. J. Inform. Syst. Educ. 2010, 22, 85. [Google Scholar]
  124. Dommeyer, C.J. A new strategy for dealing with social loafers on the group project: The segment manager method. J. Mark. Educ. 2012, 34, 113–127. [Google Scholar] [CrossRef]
  125. Nepal, K.P. An approach to assign individual marks from a team mark: The case of Australian grading system at universities. Assess. Eval. High. Educ. 2012, 37, 555–562. [Google Scholar] [CrossRef]
  126. Smith, M.; Rogers, J. Understanding nursing students’ perspectives on the grading of group work assessments. Nurse Educ. Pract. 2013, 14, 112–116. [Google Scholar] [CrossRef] [PubMed]
  127. Wagar, T.H.; Carroll, W.R. Examining Student Preferences of Group Work Evaluation Approaches: Evidence from Business Management Undergraduate Students. J. Educ. Bus. 2012, 87, 358–362. [Google Scholar] [CrossRef]
  128. Dingel, M.; Wei, W. Influences on peer evaluation in a group project: An exploration of leadership, demographics and course performance. Assess. Eval. High. Educ. 2013, 39, 729–742. [Google Scholar] [CrossRef]
  129. Demir, M. Using online peer assessment in an Instructional Technology and Material Design course through social media. High. Educ. 2017, 75, 399–414. [Google Scholar] [CrossRef]
  130. Lubbers, C.A. An assessment of predictors of student peer evaluations of team work in the capstone campaigns course. Public Relat. Rev. 2011, 37, 492–498. [Google Scholar] [CrossRef]
  131. Agarwal, N.; Rathod, U. Defining ‘success’ for software projects: An exploratory revelation. Int. J. Proj. Manag. 2006, 24, 358–370. [Google Scholar] [CrossRef]
  132. Thondhlana, G.; Belluigi, D.Z. Students’ reception of peer assessment of group-work contributions: Problematics in terms of race and gender emerging from a South African case study. Assess. Eval. High. Educ. 2016, 42, 1118–1131. [Google Scholar] [CrossRef] [Green Version]
  133. O’Neill, T.A.; Boyce, M.; McLarnon, M.J.W. Team Health and Project Quality Are Improved When Peer Evaluation Scores Affect Grades on Team Projects. Front. Educ. 2020, 5, 49. [Google Scholar] [CrossRef]
  134. Vaughan, B.; Yoxall, J.; Grace, S. Peer assessment of teamwork in group projects: Evaluation of a rubric. Iss. Educ. Res. 2019, 29, 961–978. [Google Scholar]
  135. D’Eon, M.F.; Trinder, K. Evidence for the Validity of Grouped Self-Assessments in Measuring the Outcomes of Educational Programs. Eval. Health Prof. 2013, 37, 457–469. [Google Scholar] [CrossRef] [PubMed]
  136. Guzmán, S.G. Monte Carlo evaluations of methods of grade distribution in group projects: Simpler is better. Assess. Eval. High. Educ. 2017, 43, 893–907. [Google Scholar] [CrossRef]
  137. Warhuus, J.P.; Günzel-Jensen, F.; Robinson, S.; Neergaard, H. Teaming up in entrepreneurship education: Does the team formation mode matter? Int. J. Entrep. Behav. Res. 2021, 27, 1913–1935. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow chart for the systematic literature review.
Figure 2. Number of publications per location where the studies were conducted. Multiple countries *: studies for which the data were collected from multiple geographical locations.
Figure 3. Annual publications on group-based assessments in higher education, 2000–2021.
Table 1. Inclusion and exclusion criteria for this systematic review on group-based assessments.
- Inclusion: Any scholarly article published in a peer-reviewed journal. Exclusion: Conference proceedings, reports, book chapters, and dissertations were excluded.
- Inclusion: Articles based on empirical studies. Exclusion: Non-empirical studies, such as review papers, were excluded from this review.
- Inclusion: Any study in the higher education domain. Exclusion: Studies related to primary school, secondary education, vocational education, training, and workplace sectors were excluded.
- Inclusion: Full text only. Rationale: To allow the full findings of each article to be read and understood.
- Inclusion: Published in the year range 2010–2021. Rationale: The scoping review suggested that a considerable number of publications are available from the last decade.
- Inclusion: Described in English. Rationale: The authors’ inability to interpret other languages.
Table 2. Major research methods used in the selected articles.
Quantitative (52 articles):
- Survey/questionnaire: 33
- Retrospective data (students’ grades, quiz scores, self- and peer assessment results): 11
- Statistical modelling/quasi-experiment: 8
Mixed (11 articles):
- Survey and focus group: 4
- Pre-/post-questionnaires, artefacts of the group processes, and student focus groups: 1
- Interviews, observation, and survey: 1
- Interviews, observation, and observation: 1
- Peer assessment scores and focus groups: 1
- Literature review, survey, and focus group: 1
- Surveys, focus group interviews, and individual staff interviews: 1
- Survey and course grade: 1
Qualitative (8 articles):
- Interviews: 5
- Reflection data: 2
- Focus group: 1
Table 3. The top-ranked journals, each with a minimum of two published articles.
- Assessment & Evaluation in Higher Education: 19
- Journal of Education for Business: 3
- Nurse Education in Practice: 2
- Nurse Education Today: 2
- Teaching Sociology: 2
- Higher Education: 2
Table 4. List of the highly cited articles.
Article: Citation count
1. Adachi, Tai, and Dawson [58]: 160
2. Planas Lladó, Soley, Fraguell Sansbelló, Pujolras, Planella, Roura-Pascual, Suñol Martínez, and Moreno [50]: 153
3. Willey and Gardner [59]: 152
4. Maiden and Perry [10]: 138
5. Mostert and Snowball [60]: 100
6. Cen et al. [61]: 94
7. Strauss et al. [62]: 93
8. Orr [63]: 90
9. Takeda and Homberg [64]: 88
10. Anson and Goodman [65]: 87
11. Kooloos et al. [66]: 86
12. Lam [67]: 85
13. Rienties et al. [68]: 80
14. Volkov and Volkov [69]: 80
15. Moore and Hampton [53]: 76
16. Weaver and Esposto [70]: 67
17. Caple and Bogle [55]: 67
18. Sridharan et al. [71]: 66
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
