1. Introduction
In traditional instructor-centered classrooms, instructors are regarded as the sole disciplinary experts who serve as knowledge providers, while students act as passive recipients of information [
1]. Such a one-directional mode of knowledge transmission decreases students’ participation and motivation to learn and is ill-suited to cultivating students’ critical thinking [
2]. To reverse this situation, previous research has examined how to promote students’ active learning (i.e., learning in which students engage actively rather than passively) by engaging them in small group learning activities (SGLAs) [
3,
4], whereby small groups of students work together to meet shared learning goals [
5].
Compared with traditional didactic teaching methods, SGLAs provide experiential learning through which students develop collaboration skills that benefit not only their academic performance but also their future careers. Experiential team-based work also helps students develop verbal communication skills, fosters their problem-solving abilities, and improves their interpersonal skills [
6] by promoting elaboration skills and the appropriateness of students’ responses to peers seeking assistance [
7]. When learners work together on tasks that are meaningful to them, they naturally describe, explain, listen, and interpret, thus developing language skills, collaboration skills, and self-monitoring or meta-cognitive skills through group rehearsal (i.e., learning from teaching peers) [
8]. Shared knowledge-building thus allows learners to integrate creation and reception, to negotiate meaning and purpose, to divide and manage collective work, and to come to regard themselves as persons who solve problems and develop conclusions. This outcome appeals keenly to future employers of university students [
9].
Even though this article describes a study of SGLAs carried out before the COVID-19 pandemic, the shift to online teaching and learning in higher education during the pandemic makes it all the more important to explore the potential of SGLAs to improve students’ interactions with peers, learning engagement, and motivation. In 2020, the sudden outbreak of the COVID-19 pandemic forced most higher education institutions to transition to “emergency synchronous online instruction” [
10] in the U.S. Previous studies showed that emergency remote education introduced new challenges for instructors and students compared with in-person education. For instance, students became more demotivated or disengaged in online classes [
11,
12], and more easily became bored or fatigued by Zoom [
13] after staring at screens for an extended period of time without personal interaction [
14]. In addition, virtual education made students feel isolated and disconnected from instructors and peers [
15] without the face-to-face interactions that would otherwise occur in classrooms [
11]. Even though most universities in the U.S. shifted back to in-person classes in 2022, hybrid education that combines synchronous online classes with in-person classes has become a new normal in the post-pandemic era [
16]. Because students’ learning engagement is closely related to their interactions and collaboration with peers or faculty [
17], this trend toward hybrid education makes it necessary to explore the potential of SGLAs to offer students opportunities to collaborate with one another, thereby improving their learning experience, especially in online classes.
Despite these benefits, SGLAs remain underutilized in higher education, owing to a lack of coherent instructional design, collaborative activity design, and technology support for SGLAs [
18,
19]. In addition, SGLAs place extra burdens on instructors, who must take on additional responsibilities including class administration, group facilitation, and oversight. Such burdens make adopting SGLAs in classrooms more intimidating and challenging for instructors. Therefore, if SGLAs are to be effectively integrated into instructors’ practices, we must reduce the burdens instructors face in adopting this collaborative learning technique. Although previous studies showed that some learning management systems have been used to facilitate SGLAs, such as Moodle, on which students can complete collaborative learning activities including forums, vocabulary exercises, and databases [
20]; Canvas [
21]; or other more mundane tools such as MediaWiki, Google Docs [
22], and Blogs [
23], none of these tools reduces instructors’ burdens by automating workflow activities such as grading and managing student groups. Even though the Moodle Workshop tool [
24] and Canvas [
21] can enable peer assignment reviews, they do not give students the flexibility to change groups or to appeal if they are dissatisfied with the peer assessments.
To facilitate students’ collaborative learning and reduce instructor burdens, we developed and deployed the SmartGroup system, an SGLA class management tool; the guiding idea behind the system is to use technology to foster collaborative interactions among students and to automate workflow activities (e.g., forming and managing groups, and grading) with potentially no apparent drawbacks [
25]. SmartGroup reduces instructor burdens by allowing instructors to divide a class into small teams and direct these teams to carry out a learning activity. Most importantly, the system allows students to deliver a project to peer graders, receive feedback from those graders, and optionally appeal a grade to other peer graders. SmartGroup can thus programmatically create and manage a thread of peer-accountable collaborative learning activities.
We deployed the SmartGroup system at a public research university located in the U.S. Mid-Atlantic region during the fall 2018 semester in a course (N = 42; 1 instructor, 1 TA, 40 students) titled Organization of Data. After the deployment, we interviewed both the instructor and the TA and collected 20 anonymous survey responses. During deployment, our minimalistic initial iteration of SmartGroup was continuously redesigned and redeployed to meet the instructor’s needs; a key theme across iterations was providing the instructor with greater control over SGLAs and their management (grouping, grading, etc.). Iterations are recorded and discussed chronologically. Our results reveal findings consistent with the literature regarding the benefits of peer individual work assessments and SGLAs, and we highlight interesting cases throughout our deployment. We end with a discussion of these findings and their implications for promoting wider SGLA adoption. This work is designed to offer contributions specifically to education technology. Education technology and group collaboration research in CSCW are becoming increasingly prevalent, and this work adds to these growing bodies of literature. Specifically, we offer the CSCW community an exploration of a tool-based approach aimed at reducing instructor burdens while facilitating quality SGLA learning for students in a more scalable fashion. We hope this tool-based approach can effectively lead to the management of scale-free SGLAs. This work’s main contributions are:
Descriptions of the tool ensemble as an instructional intervention, its iterative design process, and rationale for iterative changes.
An empirical study of tool-assisted, student-managed, and student-evaluated peer group work in SGLAs, including classroom experiences and peer assessment and grouping consequences.
A discussion of future opportunities for active learning, meta-cognitive benefits and effects of peer group work assessment, and how to further promote SGLA adoption in higher education.
2. Related Work
In this section, we briefly review the literature on the philosophical background of SGLAs, active learning, peer assessment, and grouping techniques and tools.
SGLAs are an implementation of the concept of experiential education: that authentic doing and reflection on doing are primary paths to learning [
26,
27]. Constructivist learning is a process of enculturation, constituted by the appropriation of the artifacts and practices of a community through collaboration, social norms, tool manipulation, domain-specific goals and heuristics, problem-solving, and reflection in action directed at authentic challenges, that is, challenges that are consequential to the learner [
28,
29,
30,
31]. In this view, collaborative learning through group activity is essential; mentoring, coaching, and cooperation are primary learning mechanisms. Learners develop their own understandings by articulating and demonstrating knowledge to other learners, as well as by benefiting from the assistance of other learners [
32,
33]. SGLAs are thus one type of collaborative learning activity that facilitates active learning.
2.1. Active Learning Technologies and Techniques
Technology-facilitated active learning has many fruitful research avenues (e.g., classroom response systems [
34], blended classrooms [
35], minute papers [
36], flipped classrooms [
37], and online synchronous peer learning frameworks [
38]). During the COVID-19 pandemic, to promote students’ active learning in online settings, higher education institutions employed various learning management tools and mobile devices [
39]. For instance, classroom response systems (clickers) have been adopted to allow students to vote or to answer simple choice questions posed by instructors [
40] in synchronous online classrooms. Although such systems increase interaction between students and faculty, they also limit student activity and engagement, effectively reducing students’ participation to a button push. Moreover, blended classrooms integrate online exercises or activities into a classroom presentation [
35]; these approaches involve students more actively, though only within an a priori interaction space. “Minute papers” and other small, lightly graded or ungraded activities allow students to quickly analyze or practice a particular point or operation [
36]; such interventions are highly engaging and involve students creatively, but students frequently get no direct feedback. Flipped classrooms [
37] have students encounter and study information outside class, and then subsequently apply and practice concepts and skills within class meetings (working either as individuals or teams). Flipped classrooms turn class meetings into workshops, which may be engaging, but the instructor (and teaching assistants) remain a bottleneck, as they can only consult with students (or groups of students) one at a time. Our work explores the use of technology to reduce this bottleneck and provide students with opportunities for active learning.
2.2. Peer Assessment
Our approach to reducing instructor burdens in the adoption of SGLAs and collaborative learning relies upon utilizing students themselves as self-guided learning and instructional resources through group work and peer evaluation. Peer assessment can have positive effects on student attitudes and can produce outcomes as good as or better than instructor assessments [
41], but the major strength of peer assessment is that it allows students to receive quality and timely feedback regarding projects in a manageable way for lecturers [
42]. Such feedback is often available in greater volume, and because peer assessors can potentially devote more time, even assessors with weaker assessment skills can produce feedback on par with that of instructors [
43]; student peer assessment feedback can more closely approximate instructor feedback if students are aware of clearly defined assessment criteria [
44]. Although researchers have thought that student involvement (e.g., devising the scheme for intervention and training in its use) is necessary for students to benefit, students feel that they benefit from peer assessment interventions even without said involvement [
45]. These benefits come from both the student’s roles as an assessor and as an assessee [
46]. We note that the benefits of peer assessment complement the benefits of SGLAs in higher education.
Technologically facilitated peer assessment is a promising avenue for supporting peer assessment in group work; peer evaluators in such technologically assisted systems still receive benefits from both their roles as an assessor and an assessee [
46]. For example, the peer assessment platform PeerStudio uses technology to facilitate rapid rubric-based feedback for individuals’ works and offers students grades based on statistical manipulations of peer reviews; this platform demonstrates that students in large classrooms (i.e., both online and in-person) can benefit from such systems in the form of improved outcomes [
47]. Prior research by Kulkarni et al. shows that peer feedback in such online classrooms that utilize rubric-based assessment correlates highly with staff-determined grading [
48]; however, such use of rubrics in peer evaluations creates significant time burdens on students, which is why hybrid grading approaches (i.e., machine grading/algorithmic scoring in conjunction with peer rubric-based evaluations) might be favorable [
49]. Such hybrid grading systems also allow instructors to offer students questions with free responses, instead of just multiple choice questions [
50]. In addition, researchers have developed tools intended for the evaluation of peers’ contributions within group projects such as CATME [
51]. CATME (i.e., Comprehensive Assessment of Team Member Effectiveness) is a prominent tool that offers both peer evaluation and group formation capabilities [
52], making it similar to our SmartGroup system.
SmartGroup is designed based upon this rich foundation of peer assessment research, but it differs from available tools in the following key ways: (1) peer evaluations are for group work as opposed to individuals’ work (e.g., as opposed to PeerStudio [
47]) or individuals’ contributions (e.g., as opposed to CATME [
51]); (2) grades are to be determined entirely by peer reviews based on instructor-provided rubrics (i.e., as opposed to machine grading [
47]); (3) if students feel that their group’s grade and the peer reviewer’s written rationale for said grade are unfair, SmartGroup offers students a novel approach for disputing a grade through an appeal process. The dearth of research specifically regarding technologically assisted peer evaluations for group outcomes is a significant gap in the literature which we hope this work can begin to fill.
2.3. Grouping Techniques and Tools
Group learning activities offer varied active learning opportunities, but they have until relatively recently remained without a formal framework for their design, implementation, or evaluation [
53]; although a full review of group learning activity processes is beyond the scope of this work, the element of group formation needs to be discussed. Grouping techniques range from random to highly structured and can be either tool-facilitated or human-derived. Non-random grouping techniques in the literature often rely upon evaluations of students’ characteristics or responses (e.g., affinity and creativity indexes [
54], Felder–Silverman personality axes [
55], and student survey responses [
56]); non-random techniques which require such student data thus often require more instructor effort than randomized grouping. Technology-assisted non-random grouping research has explored and demonstrated the efficacy of software at grouping students based upon instructor-provided criteria [
56], as well as the efficacy of tools in promoting such characteristics as student creativity and originality [
54]. For instance, during the COVID-19 pandemic, breakout rooms on the Zoom platform were a typical means of grouping students for collaborative activities or discussions [
57]. Despite the efficacy of these tools, instructor-selected groupings can facilitate greater interaction and learning for students, so long as a grouping method that appropriately meets a group’s intended goals is used; note that certain grouping techniques may be better suited for short-term in-class group work (e.g., latent jigsaw grouping), while other techniques (e.g., Felder–Silverman personality axes groupings) may be better for longer group projects [
55]. In addition, assigning students to new teams for each successive team activity (i.e., as opposed to having teams persist longer than half a semester) enables the best learning outcomes and can attenuate problematic team behavior patterns (e.g., interpersonal conflict and free riding) [
58].
Despite the benefits of non-random grouping and successive reorganization of teams, existing tools and techniques require instructors to adopt more responsibilities and burdens (conducting surveys, collecting student responses, reorganizing teams based on prior memberships, etc.); such new burdens may lead to instructors refusing or being unable to adopt these novel techniques, just as burdens prevent them from adopting active learning techniques [
59]. SmartGroup intends to offer instructors a solution for facilitating quality group work with minimal new instructor burdens, and thus, hopefully, to achieve higher adoption rates. SmartGroup differs from existing tools by offering instructors the option either to randomly reshuffle teams or to keep groups consistent, as needed, across all of a course’s SGLAs; this novel approach grants instructors who use SmartGroup the flexibility to implement both short and long group projects of varying complexity. This research thus offers the field, as well as instructors, a flexible tool for various SGLAs that eliminates extraneous grouping burdens placed upon both students and instructors.
3. SmartGroup System Design
Noting the benefits of both SGLAs and peer assessment, as well as the difficulties in utilizing these techniques to realize said benefits, we designed SmartGroup to facilitate both. SmartGroup is an SGLA class management tool that is separated into two web-based user interfaces (
Figure 1 and
Figure 2). The system takes instructor input (a learning activity description, assessment rubric, etc.), creates and manages the small group activity, and then provides the instructor a set of grades based on peer evaluations (
Figure 3). SmartGroup does not directly manage whatever software students might use during their SGLAs; it only manages team membership, collates group assignments, passes the finished group assignments on to peer assessors, delivers feedback and assignment grades to the original student teams, and finally provides instructors with grades. The goal of the SmartGroup system is to reduce instructors’ burdens and increase their ability to employ novel active learning techniques. To ensure system sustainability and promote ease of adoption, we therefore used generic components, created simple user interfaces, and integrated our system with as little other software as possible. We describe key design elements, system implementation, and both user interfaces in the following subsections.
3.1. Design Elements and Rationale
Prior to building our system, we constructed a framework of rules and goals to guide its design. For example, we decided to have instructors create student accounts (
Figure 1b) to better handle enrollment turbulence before a course, to require only one group member to upload the assignment, and to provide each team member the same grade and feedback for said assignment. We describe these key design elements in greater detail in the following subsections.
3.1.1. Reiterative Grouping
SmartGroup groups students iteratively without prior assessment of their individual skills, strengths, weaknesses, or personality traits. The point of this design is not to be random about groups, but to make reassigning groups so easy that elaborate analysis and balancing of group membership is not needed. This grouping strategy thus lowers instructor and student burdens during group formation. Successive refactoring of teams (i.e., grouping individuals based on prior group membership to avoid successive teams composed of the same members) was considered but discarded because small classrooms (i.e., our preliminary deployment course) would not be able to sustain such a strategy. In addition, to avoid the risk that students placed into teams based on their personalities or strengths “perform tasks in the same constellation throughout a longer period” [
60], SmartGroup always uses reiterative grouping as the default so that students do not miss out on potential growth opportunities.
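To illustrate how lightweight this grouping step can be, the sketch below shows one way reiterative random grouping could be implemented; the function name, signature, and data shapes are ours for illustration and do not reflect SmartGroup’s actual code.

```python
import random

def group_students(student_ids, group_size, seed=None):
    """Randomly partition a roster into teams of roughly `group_size`.

    No prior data about skills, strengths, or personalities is used;
    the roster is simply shuffled and dealt out, so regrouping for the
    next activity is as cheap as calling the function again.
    """
    rng = random.Random(seed)
    roster = list(student_ids)
    rng.shuffle(roster)
    n_groups = max(1, len(roster) // group_size)
    groups = [[] for _ in range(n_groups)]
    # Deal students round-robin so group sizes differ by at most one.
    for i, student in enumerate(roster):
        groups[i % n_groups].append(student)
    return groups

# Reshuffling between successive SGLAs (Section 3.1.2) is just another call:
# week_1_groups = group_students(roster, group_size=4)
# week_2_groups = group_students(roster, group_size=4)
```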
3.1.2. Reshuffle
Noting the benefits of not keeping teams consistent for a whole semester, SmartGroup groups students randomly and allows instructors to reshuffle groups (
Figure 1a) between successive SGLAs. Effective teamwork depends on students’ skills to negotiate authority in small peer groups and manage the possible conflicts [
61]. Group shuffling cannot prevent interpersonal conflicts in students’ collaboration, but it does limit how long such conflicts can persist within a group by allowing students to move to different teams. We note that reshuffling may make it difficult to develop peer group relationships, but the goal of our system is not to establish consistent roles for students based on strengths; rather, it is to promote exposure to diverse experiences for growth.
3.1.3. Grading Rationale
To ease grading burdens, we defaulted SmartGroup to use letter grades (i.e., with the addition of + and −,
Figure 1a) instead of numerical grades; students could more intuitively understand that excellent work deserves an A, whereas they might not be able to differentiate nuances to determine if a work deserves an 86 or an 83 [
62]. Despite this default setting, SmartGroup can also support numerical grading. The instructor must set the number of students per group and the number of peer assessors for each assignment. The grade for an assignment is the average of all peer assessors’ grades. Peer assessors are required by the system to provide explanations of their grading rationale for all assignments (
Figure 2). This promotes accountability, responsibility, and participation. These explanations also serve as invaluable feedback for the assessees, and the assessor may better internalize assignment expectations by working with a rubric and others’ work. This requirement thus offers both assessees and assessors opportunities to grow through either receiving or providing feedback.
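Averaging letter grades presupposes some mapping between letters and numeric values; the text does not specify that mapping, so the sketch below assumes a simple, evenly spaced scale purely for illustration.

```python
# Hypothetical evenly spaced letter-grade scale (lowest to highest);
# the scale actually used by SmartGroup is not specified in the text.
SCALE = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A"]

def average_letter_grade(peer_grades):
    """Average several peer assessors' letter grades for one assignment.

    Each letter is mapped to its index on the scale, the indices are
    averaged, and the mean is rounded back to the nearest letter.
    """
    indices = [SCALE.index(g) for g in peer_grades]
    mean_index = round(sum(indices) / len(indices))
    return SCALE[mean_index]

# Example: average_letter_grade(["A", "B+"]) returns "A-".
```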
3.1.4. Peer Assessment
Peer assessment included both peer grading and peer feedback provision in our original iteration. The design decision to include peer assessment was intended to promote students’ active learning and reflection [
63]; this approach can provide students with quick feedback and grades while reducing instructors’ burdens for providing said feedback and grades [
64]. This feedback is critical, as it allows students to revisit assignments they recently engaged with as learners, now through the lens of an evaluator; this approach thus promotes meta-cognitive benefits in student active learning. We adopted a double-blind review process for the SmartGroup system. Peer assessors remain anonymous to the teams they assess. All assessors are randomly assigned, and they cannot assess their own groups or the groups of those who are assessing their own group. Although we know true anonymity is unlikely in small classrooms, we believe these measures limit unwanted activities (cheating, retaliatory harsh grading, inflation of grades based on friendships, etc.) [
65]. SmartGroup provides peer assessors with a great degree of responsibility and authority because we believe that student peer assessment is an opportunity for active learning [
63].
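A simplified sketch of how peer assessors could be assigned under these constraints is shown below; this is a greedy illustration under our own assumptions (group and student identifiers, data shapes), not SmartGroup’s actual assignment algorithm, and a production version would likely need retries or backtracking for very small classes.

```python
import random

def assign_assessors(groups, assessors_per_submission, seed=None):
    """Randomly assign peer assessors to group submissions.

    Constraints described in Section 3.1.4:
      * a student never assesses their own group's submission;
      * a student never assesses a group whose members are assessing
        that student's own group (no reciprocal assessment between groups).
    `groups` maps a group id to a list of student ids.
    Returns a dict: submitting group id -> list of assessor student ids.
    """
    rng = random.Random(seed)
    group_of = {s: g for g, members in groups.items() for s in members}
    assessing = set()   # edges (assessor_group, submitting_group)
    assignments = {g: [] for g in groups}

    for submitting in groups:
        candidates = [
            s for s in group_of
            if group_of[s] != submitting                     # not own group
            and (submitting, group_of[s]) not in assessing   # no reciprocity
        ]
        rng.shuffle(candidates)
        # In very small classes the constraints may leave fewer candidates
        # than requested; this sketch simply assigns as many as it can.
        for s in candidates[:assessors_per_submission]:
            assignments[submitting].append(s)
            assessing.add((group_of[s], submitting))
    return assignments
```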
3.1.5. Appeals and Appeal Assessors
Unique to our SGLA system is the role of an appeal assessor, which essentially works as a mediator for grade disputes. To initiate an appeal, all group members must agree to make an appeal, and one student representative must provide a written rationale for the appeal claim. When this rationale is submitted in the student interface, the system sends the relevant material (rubric, assignment, peer grades and feedback, appeal argument, etc.) to an appeal assessor (
Figure 2d). Appeals can be made only once per project. Considering the significant power appeal assessors carry, we choose them carefully. For our system, we choose appeal assessors from among peer assessors who have no appeals against them; the system treats the absence of appeals against an assessor as an indication of good peer evaluation. If an appeal is requested, an appeal assessor is assigned to assess the project. The appeal assessor’s grade for the project supersedes the average grade (i.e., if there were multiple assessors) originally provided. We intend for appeal assessors to read peer assessors’ feedback first (i.e., to ensure that peer assessors’ perspectives are considered and assessed, thus limiting the potential negative effects of consolidating authority in appeal assessors), but they may choose to read the material in any order. We are aware that this decision might introduce bias, but we think it is necessary to reduce appeal assessor authority and ensure peer assessors’ opinions are heard.
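The appeal workflow described above can be summarized in a few lines; the sketch below is illustrative only, and the function names and data structures are our own assumptions rather than SmartGroup’s implementation.

```python
import random

def can_open_appeal(group_members, consents, appeal_already_used):
    """An appeal may be opened only once per project and only after
    every group member has consented (Section 3.1.5)."""
    return (not appeal_already_used
            and all(consents.get(member, False) for member in group_members))

def pick_appeal_assessor(peer_assessors, assessors_with_appeals_against, seed=None):
    """Choose an appeal assessor from peer assessors with no appeals
    filed against their own evaluations; the absence of appeals is
    treated as a proxy for having graded well."""
    rng = random.Random(seed)
    eligible = [a for a in peer_assessors if a not in assessors_with_appeals_against]
    return rng.choice(eligible) if eligible else None
```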
3.1.6. Peer Assessor Evaluation
To ensure that students provide high-quality feedback, and thus benefit from higher engagement with rubric requirements and others’ perspectives, we designed SmartGroup to provide grades for peer assessors’ evaluations. To simplify the process, the system automatically assigns peer evaluators a grade value of 100 for their evaluation unless an appeal is in process for a given evaluation. If an appeal is requested, the appeal assessor is assigned to the assignment and conducts an assessment as discussed above. Once the appeal assessor has assigned a grade, the system automatically checks the deviance of each peer assessor’s initial grades from the appeal grade. These values of deviance will be subtracted from the peer assessors’ evaluation grades. To clarify, if an appeal assessor grades an assignment as an A, but a peer assessor grades it as a C, then there is a deviance of 7 grade values (i.e., A, A−, B+, B, B−, C+, C); each grade value constitutes 10 points of a peer assessor’s evaluation grade, so the peer assessor would receive 30 points for his or her evaluation grade (i.e., 100 − (10 × 7) = 30). The minimum value for a peer evaluation grade is set to 0. These evaluation grades are intended for instructor assessment of student performance in peer grading activities and can either be used as a qualitative assessment or as a portion of the student’s course grade (in the form of required work, extra credit, etc.); the exact use of these values in a given course is left to instructor discretion, although we believe the potential threat or reward of using these grade values will keep students accountable to produce higher quality assessments and thus benefit more from active learning opportunities.
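The deviance-based evaluation grade can be expressed as a short formula. The sketch below follows the worked example in the text (A vs. C counted as a span of seven grade values, yielding 100 − 10 × 7 = 30); the grade-value list and the inclusive counting are our reading of that example rather than a verified specification.

```python
# Grade values in descending order, extended below C for illustration only.
GRADE_VALUES = ["A", "A-", "B+", "B", "B-", "C+", "C", "C-", "D+", "D", "D-", "F"]

def evaluation_grade(appeal_grade, peer_grade):
    """Peer assessor's evaluation grade after an appeal is resolved.

    Per the worked example, the deviance between the appeal grade and a
    peer assessor's grade is the inclusive count of grade values spanned
    (A vs. C spans A, A-, B+, B, B-, C+, C = 7), each value costs 10
    points of the initial 100, and the result is floored at 0.
    """
    i = GRADE_VALUES.index(appeal_grade)
    j = GRADE_VALUES.index(peer_grade)
    deviance = abs(i - j) + 1  # inclusive count, matching the text's example
    return max(0, 100 - 10 * deviance)

# evaluation_grade("A", "C") == 30, as in the example above.
```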
3.2. Implementation and User Interface
SmartGroup is built as a web application using the Django Python web framework. The frontend uses HTML/CSS and jQuery, the backend uses Django with PostgreSQL as the database, and Nginx serves as the web server. GitHub is used for version control. The system is split into two user interfaces, one for instructors and the other for students. We discuss each user interface in more detail in the following sections.
3.2.1. Instructor User Interface
Instructors are responsible for three basic requirements when creating a course with SmartGroup: (1) using the “Course Info” tab to create the course (
Figure 1a) by providing high-level grading policies (course name and section, grading system choice, assignment grade weight towards final grades, etc.) and assignment information (due dates, assessment deadlines, number of students per group, number of assessors per assignment, SGLA description and rubric, etc.); (2) using the “Enrollment” tab to create student accounts (
Figure 1b) by uploading a student enrollment .CSV file from the institution’s learning management system (LMS); (3) using the “Grouping” tab to randomly group students for assignments (
Figure 1c). The “All grades” tab allows an instructor to export all available grades for group assignments, and the “Peer Grades” tab allows the instructor to view all available peer assessor evaluation grades. These basic inputs are necessary for normal lesson planning during course design, so the only significant source of added burden from using this system is transcription. Once these components are taken care of, the system will create and manage SGLAs and send the instructor a set of grades upon completion. The instructor user interface for managing these steps and for viewing grades can be seen in
Figure 1.
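To illustrate the enrollment step, a minimal sketch of parsing an LMS enrollment export is shown below; the column name and the `create_account` callback are hypothetical, since the actual export format and account-creation code are not described in detail.

```python
import csv

def create_student_accounts(csv_path, create_account):
    """Create student accounts from an LMS enrollment .CSV export.

    The 'email' column name is an assumption; real LMS exports vary.
    Per Section 3.2.2, the school email initially serves as both the
    username and the password until the student changes it.
    """
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            email = row["email"].strip()
            create_account(username=email, initial_password=email)
```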
3.2.2. Student User Interface
The student user interface is designed with a layout similar to the instructor user interface (
Figure 2). Students do not need to sign up for an account in the system, as accounts should be created by the instructor (i.e., see
Section 3.1). To sign in, a student’s username and password are both initially the school account from the .CSV file (i.e., in this case the school-affiliated email address); the password can be changed at any time after the student’s initial login. Once logged into their accounts, students see the following series of tabs on the left side of their screens: (1) Your Progress, (2) Group, (3) Peer Evaluation, (4) Grades, (5) Appeal Grader, and (6) Review History. Note that tabs are only active when they contain material that students need to act on, so students need not constantly check every tab. The “Group” tab (
Figure 2b) allows students to upload assignments, find group members and their contact information, see deadlines, and view assignment grades and peer feedback. Once an assignment has been uploaded, the system chooses the specified number of peer assessors based upon instructor input and allows the chosen peer assessors to access the “Peer Evaluation” tab (
Figure 2a) to assess the assignment. Students can view a grade for a given assignment by clicking on the “Grades” tab; on the same page, they can also request an appeal by simply putting their arguments into the text box below their grades and comments and clicking the “Appeal” button (
Figure 2c). Once an appeal has been requested, the system automatically notifies and sends a consent request to all team members. When consent for appeal is unanimous, the system then assigns an appeal assessor to perform an assessment (
Figure 2d); at this point, the “Appeal Grader” tab becomes accessible to the chosen appeal grader. Through the “Review History” tab, students can access their previous reviews and see how their previous peer evaluations were evaluated by the system.
6. Discussion
The pre-deployment and post-deployment iterations of the SmartGroup system differ radically from one another, and in ways which provide us with real-world insights that further develop our original design rationale. Notably, our current iteration based on the lessons learned in this deployment better accounts for instructor needs and interests. We are confident that these changes have expanded the potential applications of the SmartGroup system (e.g., by supporting semi-automated grouping, offering instructors more flexibility to control grades, and supporting different grouping methods to be employed based on assignment needs). Furthermore, we identify strengths and weaknesses in the system which can be exploited or reduced, respectively, in future iterations. We will discuss key insights in the following subsections.
6.1. Leveling the Field
Our results support the argument that the SmartGroup system has indeed improved students’ active learning in a manner consistent with available literature (e.g., see [
41] for peer assessment, and [
6] for SGLAs). According to the interview, the instructor mentioned that SmartGroup
had created a positive learning environment for students, who performed well in group projects, and that she would continue using the system the next semester.
Even though the SmartGroup system was deployed before the COVID-19 pandemic, the pandemic experience and its continuing effects on education add to the importance of using SmartGroup to improve students’ engagement in learning. SmartGroup facilitates small-group activities that strengthen the interactivity between students that is otherwise lacking in online classroom settings [
57]. Moreover, students and faculty have become more accustomed to taking advantage of online tools to assist remote instruction and learning [
71] during and after the COVID-19 pandemic, making it even easier for them to adopt the online SmartGroup tool in online classrooms than in physical ones.
Furthermore, our survey participants indicated that the system was generally satisfactory. SmartGroup does appear to foster better outcomes for what the instructor called “weaker” students, as final assignment grades were relatively high and the projects themselves were all of a similar caliber. The instructor noted being impressed by the performance of even what she considered to be “weaker” students, who out-performed her expectations; this appears to demonstrate SmartGroup’s general success at helping her to achieve her goal of fostering student growth, despite potentially contributing to grades becoming uniform. Future work determining the trade-offs caused by openness of work sharing in peer group work assessments (i.e., between friendly, open environments with potentially uniform grade distributions and strict, controlled environments with less uniform grades) is warranted.
We note that SmartGroup may not have as concretely helped “stronger” students as it did “weaker” students, as the “stronger” students did not meet the instructor’s high expectations. The instructor did not believe that these “stronger” students challenged themselves to produce the best work they could; she suggested these higher-performing “outliers” may have become demotivated as a result of seeing the quality of work produced by peers. We are uncertain as to the exact causes or magnitudes for this phenomenon, if it is real (i.e., we are wary of independent work success being directly correlated to group work expectations, as team success relies upon more than just the general intelligence of team members [
72]), but the instructor’s suggestion is plausible. Furthermore, the small size of the class may also have made accessing all peer groups’ work too easy; students would be less likely to develop a strong feel for what constitutes “enough effort” if they had access to proportionately fewer groups’ work. This suspicion is based on studies suggesting that the expected average grade of a course’s students is correlated with students’ study efforts (e.g., if an A grade is expected as the class average, students will work less) [
73]. Another potential explanation is that students who are already learning successfully have much less room for improvement, and any minor improvements demonstrated by “stronger” students may not have been as easily noticed; this would be akin to a ceiling effect, as good students start out closer to the ceiling. We believe this is a design space that requires greater attention, as we want to foster the growth of all students, even the few high-performing outliers. Future research regarding how to incentivize these higher-performing students, who might otherwise be disincentivized to do their best work, may be necessary.
6.2. Fostering Meta-Cognition in Active Learning
One important current focus in fostering student success through improving learning processes is meta-cognition [
74,
75,
76,
77]. Meta-cognitive skill development occurs naturally when small groups of students work together, as they learn from each other while learning through providing peer feedback [
8]; meta-cognitive learning strategies are thus fostered by combining active learning activities with feedback that provokes self-reflection. Learners employing meta-cognition reflect on and regulate their learning as they are engaged in learning, and we leverage this through SmartGroup’s grouping and peer feedback features. SmartGroup supports meta-cognition in two respects. First, it eases the logistics and instructor burdens of forming groups and employing active learning SGLAs; this provides students with the autonomy and accountability to undertake those burdens and actively engage with the material. Because SmartGroup is designed for group work, students are further incentivized to share understandings and perspectives with peers, thus provoking reflection. Second, peer evaluation causes learners to revisit learning activities in which they were learners, but to experience them as evaluators who provide feedback and guidance to peers. This change of perspective is meta-cognitive because a task that was recently performed is then critically reflected upon. Our results show promising evidence that the SmartGroup system fosters meta-cognitive learning opportunities. Notably, survey respondents stated that our system’s features helped them to better understand their assignments and others’ expectations for those assignments, and even influenced how they themselves graded and provided feedback. Simply put, these respondents reported reflecting upon the material and making cognitive changes. Furthermore, the instructor noted that grades were generally high and that assignment submissions were homogeneous; this apparent averaging or equalization might indicate a high degree of cross-group perspective sharing, which ultimately resulted in successful final projects.
Self-reflection is key to student meta-cognition, performance, and growth. In our original design, we used written peer feedback to promote reflection and meta-cognition. Our survey participants reported that both providing and receiving peers’ feedback contributed to reflection and meta-cognitive activities (see
Section 5.4.1). We believe this indicates either: (1) reflection is promoted by receiving and producing feedback in co-located peer assessment, or (2) students subjectively believe that reading feedback contributed to reflection. Students’ subjective experiences should not be discounted. Furthermore, students still receive meta-cognitive opportunities from peer assessment when they adopt the assessor role and provide feedback on an assignment they have themselves finished; the literature from workforce contexts supports that reviewing peers’ work does improve reviewers’ subsequent work [
78]. Future work is necessary to determine how best to promote student meta-cognition through SGLA systems, as such work was beyond the scope of this deployment.
6.3. Reshaping the Instructor’s Activities
The adoption of new technological supports and SGLAs might shift group management efforts and grading burden to other tasks, notably by altering instructor interaction styles in a way that does not necessarily reduce burden [
79]. We noted similar shifts during our deployment regarding grading, feedback provision, and group formation and management. The instructor and TA were freed from certain mundane tasks and able to spend more time and effort on intellectual and collaborative activities such as assignment planning, rubric design, and coaching students in labs. In addition, we note that the adoption of SGLAs does not necessarily reduce overall instructor burdens; it only alters the types of burdens instructors experience by altering their styles of interaction (by increasing mediated-learning interactions, walking around and asking students questions, helping them reach the right conclusions, etc.) [
79]. In this regard, the implementation of SmartGroup is consistent with the implementation of SGLAs, as both alter the efforts and burdens of instructors; indeed, the goal of SmartGroup was to reduce several primary learning activity burdens to allow for time and effort to incorporate SGLAs, and we believe it was successful in that regard. However, the adoption of SmartGroup may have unknown indirect effects; we need larger studies in the future to identify the nature and extent of these effects to ultimately allow the field to build better education supporting interventions.
6.4. Scalability and Wider Adoption
The SmartGroup system was originally designed to be a minimally burdensome SGLA and peer assessment management tool aimed at reducing instructors’ burdens and facilitating active learning teaching opportunities by constructively leveraging students’ time and effort. Despite our intentions, our original system design was misaligned with the instructor’s needs in this case study. Our intention was to build a scale-free system with broader applications by designing it to free instructors from forming groups and to help provide quick feedback and grading; we thus automated these features or gave students the responsibility to perform them. However, for more “hands-on” (i.e., our instructor participant’s own words) instructors, we learned that tools that shift instructor grading and feedback burdens entirely to students may hinder instructor satisfaction with, and thus adoption of, said tools; if we want our tools to be adopted, we must allow instructors greater flexibility regarding control (semi-automated grouping, overriding grades, etc.) over even the burdens we seek to reduce for them. As a result, many of our iterations during the deployment were geared toward providing instructors with more control over students’ grouping and activities (semi-automated grouping based on prior knowledge of students’ individual work, grade overrides, etc.). We believe that these changes to allow options regarding the control of said features ultimately allow more diverse instructors to use the SmartGroup system satisfactorily; in particular, we believe that our system could greatly benefit instructors of massive open online courses (MOOCs), as MOOCs’ large class sizes might be ideal for reducing challenges regarding double-blind review anonymity and higher-performing students choosing to underperform based on knowledge of peers’ work. As of right now, SGLAs are not employed in large online courses such as MOOCs [
80,
81], despite such courses being able to provide potentially greater diversity of perspectives [
82]; therefore, our SmartGroup system could make significant contributions to this area.
6.5. Future Opportunities
Several features in the initial SmartGroup iteration were either not used or used in unintended ways during deployment. These include group reshuffling and the appeal assessor role and its processes. As discussed above, these are important features that need further field deployment and study.
Reshuffling: SmartGroup’s initial design was predicated on the idea that group reshuffling would lead to the sharing of diverse ideas and perspectives, reduce the severity of interpersonal conflicts, and prevent role stagnation; however, owing to the design of the course and the needs of the instructor, we could not test the group reshuffle feature in this deployment. Interestingly, we note that three respondents discussed having a student assignment submitter role develop; we are uncertain if these respondents belonged to the same group, but this does suggest that student roles may develop from prolonged consistent groups. Noting that student groups persisted for 10 weeks (i.e., longer than the maximum length for optimal learning [
58]), we believe reshuffling may have reduced this development and should be deployed and evaluated fully. Just as we note that peer assessment provides benefits from both the assessee and assessor roles [
46], we believe role stagnation within groups may unequally distribute learning opportunities and benefits; we intended to help promote student growth by ensuring each student experienced a diversity of roles and tasks, so even the limited evidence of role stagnation (i.e., dedicated submitters) we have may be cause for concern. However, examining student role development is outside of our current study’s scope (i.e., an instructor’s tool for facilitating SGLA adoption). Future dedicated research may be warranted regarding role development from peer group work assessment tool adoption.
Appeal Assessors and Processes: The appeal process was used once during deployment; it was initiated because a peer assessor had mistakenly judged an assignment using the wrong rubric. If not for this human error, the appeal system would not have been used, as we know that most respondents were happy with the quality of the peer assessment feedback they received. This use of the system’s appeal process was hindered by the group’s lack of in-system appeal consensus and resulted in said group going outside of the system to ask the instructor for an appeal directly. We believe the necessity of consensus for an appeal may not have been communicated clearly or often enough for students to become familiar with this part of the system and utilize it; in the future, we believe this requirement can be communicated more clearly with reminders, especially considering how rarely students may use such processes. We further believe that the scarcity of appeals, independent of survey responses, represents successful feedback delivery and student satisfaction with peer assessments. However, we still believe the appeal assessor role offers novel active learning opportunities despite the scarcity of its use and should be pursued further.
Design for the Benefits of Learners: Looking beyond the specific SmartGroup tool, future education system designers should consider that learners themselves might both contribute to and benefit from organizing and evaluating their own learning activities. We believe that this design direction is always available to some extent in developing new educational support systems.
6.6. Limitations
Although this preliminary deployment offered valuable insights, expectations should be tempered based on a few limitations. First, this study focuses on one course with a limited number of student participants and a single instructor, which may reduce the generalizability of the results to other course contexts and student or teacher groups. Future work could recruit a larger number of teachers and students from diverse class contexts to examine the effectiveness of SmartGroup in promoting collaborative learning and alleviating instructors’ burdens. More diverse course contexts would also generate interesting insights into the impact of different courses and content on the use of SmartGroup.
Second, even though the survey data show that students’ attitudes toward and experiences with SmartGroup were positive, it remains to be examined whether SGLAs create extra workload for students, who must complete not only assignments but also peer assessments. In this study, since no students came to office hours to be interviewed, we could not obtain interview data detailing students’ experiences with the SGLAs. Future work could examine whether the workload students undertake for peer assessment outweighs the gains they get from small group learning activities and, if so, how such costs of SGLAs could be reduced.
Third, survey responses were adequate, but may have been biased towards proactive participants; we may not have captured the entire range of student experiences. Although we observe high and uniform grades, we note this was by the instructor’s design (i.e., rubrics and grading choices). This was only our initial deployment, and in order to respond to the instructor’s needs, we did not test certain features from our initial design. Although we gained valuable insights into instructor needs for SGLA facilitating systems, further work to account for these limitations is necessary.