Article

Students’ Perception of the Use of a Rubric and Peer Reviews in an Online Learning Environment

by
Letebele Mphahlele
Department of Accountancy, College of Business and Economics, University of Johannesburg, Johannesburg 2092, South Africa
J. Risk Financial Manag. 2022, 15(11), 503; https://doi.org/10.3390/jrfm15110503
Submission received: 1 September 2022 / Revised: 21 October 2022 / Accepted: 25 October 2022 / Published: 31 October 2022
(This article belongs to the Special Issue The Digital Transformation of Universities: Risks and Opportunities)

Abstract

Moving towards online learning during the coronavirus pandemic presented challenges, such as identifying assessments for learning. Assessments for learning involve using assessments as part of the learning process. Alternative assessments, as opposed to traditional assessments, are favoured for promoting assessment for learning. These assessments include peer assessments and criteria-referenced tools such as a rubric. Online learning environments often favour automated grading tools such as multiple-choice questions. However, essay-type probing questions help students adopt a deep learning approach, and peer assessments and rubrics can help with grading essay-type questions. While the benefits of rubrics and peer assessments are well documented, there is limited research in South Africa on students’ perceptions of the use of rubrics and peer assessments in online environments to facilitate a deep approach to learning. A mixed-method approach, using a Likert scale and an online qualitative questionnaire, was undertaken to explore students’ perceptions of the use of peer assessments with a rubric in an undergraduate module at the University of Johannesburg. Despite a low response rate, four main themes emerged: clear performance criteria, structured writing, a deep approach to learning and critical thinking, and better analysis and application of subject knowledge. However, the study also showed limitations of the rubric and peer assessments in helping students prepare for the formal summative assessment. The results suggest that the rubric and peer assessments, with amendments, could help students adopt a deep approach in online learning environments.

1. Introduction

The closure of universities during the Coronavirus pandemic resulted in a shift to online learning (Mukhtar et al. 2020) to prevent study disruptions. However, the move towards online learning presented numerous challenges (Dumford and Miller 2018). The main challenge presented by the online learning environment was related to assessment tools. Online learning environments are where students access learning experiences through technology (Conrad 2002). Assessments are a critical component of the learning process as they are central to what students consider important in their learning (Gizem Karaoğlan-Yilmaz et al. 2020).
The main challenge with assessments in the online learning environment was assessing modules with diverse concepts and theories, requiring a deeper analysis (Vista et al. 2015). Identifying effective assessment tools appropriate for the online learning environment was essential (Vonderwell et al. 2007). Automated scoring assessments are usually used in online learning environments. However, these assessment tools work best in contexts with explicit correct responses (Rovai 2000).
Furthermore, automated grading assessment tools are ineffective in encouraging students to adopt a deep approach to learning (Andresen 2009). These automated assessment tools are often criticised for testing superficial or surface learning (Brady 2005). Brown and Liedholm (2002) found that online students did worse on questions that asked them to apply basic concepts in sophisticated ways and better on questions asking for definitions.
Assessments are also the primary driver influencing whether students adopt a surface, deep or strategic approach to learning (Duff and McKinstry 2007; Booth et al. 1999). An approach to learning is concerned with how students carry out their academic tasks, which affects the learning outcomes (Chin and Brown 2000). Learning approaches are primarily associated with the level of understanding (Tsingos et al. 2015). The learning approach a student adopts also depends on the learning environment (Marton and Säljö 1976).
A deep learning approach occurs when a student intends to understand and construct the meaning of the learned content (Entwistle and Ramsden 1983; Gijbels et al. 2005). Students who adopt a deep approach can understand the content at a deeper level and, as a result, can retain what they have learned and apply it to different contexts (Camac Bachelor 2018).
On the other hand, students who adopt a surface approach to learning want to reproduce the material they have learned and reduce their understanding to memorising facts (Newble and Clarke 1986; Dolmans et al. 2016; Lindblom-Ylänne et al. 2019). Adopting a surface approach to learning may often be influenced by the lack of feedback during the learning process (McDowell 1995). Therefore, assessment tools that encourage students’ active participation in the subject/module, clearly state academic standards and provide feedback on student progress are likely to encourage students to adopt a deep approach to learning (McDowell 1995).
As a result, finding assessment tools that encourage students to adopt a deep approach to learning in the online environment was essential.
Assessment and approaches to learning are therefore strongly related (Struyven et al. 2005). Consequently, changing student learning requires changing the methods of assessment (Gaytan and McEwen 2007). Assessments are usually perceived to play two different roles in learning: formative and summative (López-Pastor and Sicilia-Camacho 2017). Summative assessments are used to assess a student’s learning at the end of an instructional activity, or periodically against a predetermined standard or goal, to determine what a student knows at a particular time (Taras 2008; Dixson and Worrell 2016).
On the other hand, formative assessments indicate the gap between the required standard and the student’s level of learning (Dixson and Worrell 2016). Therefore, formative assessments are often seen as an opportunity to improve students’ achievement by diagnosing and providing feedback to enhance the student’s learning and the teacher’s teaching through simultaneous adjustment of learning and teaching (Nasab 2015; Garrison and Ehringhaus 2007). The most crucial consideration regarding assessments, therefore, should be the type of learning that an assessment communicates to students (Falchikov and Boud 1989; Boud 1990, 1992, 1995).
According to Nasab (2015), there are three ways of looking at assessments: assessment for learning, assessment as learning and assessment of learning. Assessment for learning usually includes students critically assessing and reflecting on their learning (Jonsson and Svingby 2007). These assessments generally employ criteria-based assessments, typically performed using rubric-articulated feedback (Nordrum et al. 2013).
Assessments for learning are sometimes referred to as alternative assessments. Alternative assessments allow learners to actively participate in the learning process (Nasab 2015). These assessments offer teachers a way to recognise a student’s weaknesses and strengths in various situations. Alternative assessments differ from traditional assessments in that they treat learning as an active process, emphasising the process of learning over the product, assume knowledge has multiple meanings, and consider an assessment’s role in enhancing student learning (Anderson 1998; Nasab 2015). These assessments ask students to apply their knowledge, and they have been known to not only engage students in real-world scenarios or problems but also curb cheating. According to Bretag et al. (2019), higher-order thinking skills like application, critical thinking, and problem-solving are all improved due to the student’s participation in alternative assessments.
The use of peer assessments as alternative assessments in universities has increased (Vickerman 2009; Wen and Tsai 2008). Peer assessments are a reliable option for “assessment for learning” (Vickerman 2009; Taras 2008). Peer assessment occurs when students provide summative or formative feedback to each other on certain tasks (Adachi et al. 2017). While peer assessments can be conducted formatively or summatively, the formative use of peer assessments has been favoured as an opportunity to promote assessment as a learning tool (Carless 2009). Formative feedback indicates to the learner the gap between the required standard and the student’s current level of learning (Dixson and Worrell 2016). Therefore, peer assessments communicate to students an understanding of what constitutes outstanding work (Culver 2022), encouraging students to improve their performance and learning (Sadler and Good 2006). Peer assessments have also been found to help promote a deep rather than a surface approach to learning (Falchikov and Goldfinch 2000; Vickerman 2009).
Peer assessments have also been found to improve student achievement and motivation through self-monitoring. Self-monitoring is achieved by observing one’s performance, comparing one’s performance to an identified standard, and reacting to the differences between the self and the expected outcome (Nieminen et al. 2021). Self-monitoring includes planning for future activities to achieve the standard outcome (Dabbagh et al. 2004). Peer assessments and feedback have also been reported to help develop students’ critical thinking skills through reflection on and judgement of their work and that of their peers (Boase-Jelinek et al. 2013; Somervell 1993). According to Hadzhikoleva et al. (2019), peer assessments can be used to develop higher-order thinking skills by defining assessment criteria according to the different levels of Bloom’s taxonomy, which classifies learning objectives by cognitive level (Hadzhikoleva et al. 2019). The lower cognitive levels of Bloom’s taxonomy are knowledge, comprehension, and application, while higher-order thinking includes analysis, synthesis, and evaluation (Ramirez 2017).
Students exposed to peer assessments also show better structure and organisation in their written work and assessments (Vickerman 2009). Peer assessments allow students to participate actively in the learning process and feedback (Gielen et al. 2010). However, van den Berg et al. (2007) argued that the effectiveness of peer assessments also depends on the combination of design characteristics implemented. As a result, several frameworks and guidelines have been formulated for implementing peer review assessments in higher education (K. J. Topping 1996; van den Berg et al. 2007; Gielen et al. 2010; Adachi et al. 2017; Liu and Carless 2007). Peer assessments need clear criteria to ensure that student feedback is accurate and valuable (Sluijsmans and Prins 2006). Explicit grade descriptors are therefore crucial to ensure that students properly judge other students’ work (Ellery and Sutherland 2004).
Another important aspect of assessment for learning is providing feedback (Nasab 2015). Giving feedback also allows students to identify the areas of strength and those areas where they can improve (Nordrum et al. 2013). The positive impact of feedback on student learning has also been well documented (Wu et al. 2022; Alqassab et al. 2018; Carless and Boud 2018; Ruegg 2015; van Ginkel et al. 2015). These benefits include facilitating the development of a deeper understanding, encouraging the active participation of students, improving critical analysis capacity and fostering a deep approach to learning (van der Pol et al. 2008; Lladó et al. 2014; Gijbels and Dochy 2006; Asikainen and Gijbels 2017; Dochy et al. 1999; Vickerman 2009; K. Topping 1998; Wen and Tsai 2006; Kearney 2013). Furthermore, students’ perception of peer assessments has also been positive, indicating that peer assessments contribute positively to their learning and improved the quality of their work (Lladó et al. 2014; Vickerman 2009; Dochy et al. 1999; Sun and Wang 2022). However, research has also found that feedback effects are not always beneficial and vary greatly (Wollenschläger et al. 2016). Camarata and Slieman (2020) indicated that feedback could be combined with a rubric to ensure it works effectively for learning.
A rubric is a grading tool that outlines the criteria for assessing a student’s piece of work, communicating to students the quality of each criterion on a scale from excellent to poor (Andrade 2001). While the formats of rubrics can vary, they have two important elements: a list of criteria and gradations of quality (Andrade 2005). Rubrics may be either holistic or analytic. Holistic rubrics provide a single score based on an overall impression of a student’s performance on a task (Mertler 2019). In contrast, analytic rubrics provide specific feedback on multiple dimensions and levels (Moskal 2002). Assessment for learning tools usually employ criteria-based assessments, performed using rubric-articulated feedback (Nordrum et al. 2013). Numerous studies have found that rubrics provide real benefits, including facilitating peer feedback, helping students identify areas of improvement in their work, improving performance and facilitating self-monitoring in learning (Brookhart and Chen 2015; Andrade et al. 2010; Panadero and Jonsson 2013).
Rubrics make the teachers’ expectations clear to students and provide more informative feedback about areas in which they are strong and areas they may need to address (Andrade 2001). Rubrics are also usually used for essay-type questions and writing (Rezaei and Lovorn 2010). Peer review assessments can be combined with a rubric and serve as an assessment for learning tool by enabling students to receive descriptive feedback (Liu and Carless 2007). While rubrics are favoured, there have been concerns about their reliability and validity. Validity refers to measuring what one intends to measure (Sundeen 2014). However, the benefits of a rubric and peer assessments have been found to outweigh the cost of using them (Andrade 2001; Liu and Carless 2007; Anderson 1998; Vickerman 2009; Moskal 2002). Furthermore, while the benefits of rubrics and peer assessments have been well documented, there is limited research on the perception of students on the effectiveness of rubrics and peer assessments used together, especially in online environments, to facilitate the adoption of a deep approach to learning.
Therefore, this study aims to explore students’ perceptions of using a rubric and peer assessment as alternative assessments in the online learning environment. By addressing this objective, the study seeks to demonstrate the usefulness of peer assessments with rubrics in online learning environments.
The study is organised as follows: Section 1 presents the introduction and background to the study. Section 2 describes the method, explaining how the peer assessments were carried out. Section 3 presents the results, and Section 4 discusses them. Finally, Section 5 states the conclusions, limitations, and future considerations.

2. Materials and Methods

2.1. Participants

The study population comprised second-year full-time Bachelor of Accounting students at the University of Johannesburg in South Africa, enrolled in a module titled Management Accounting Strategy. The module had 654 enrolled students and focused on theory and constructs.

2.2. Instruments

The rubric used for scoring student compositions was adapted from Yen (2018).
The researcher developed suggested solutions and rubrics for assessing all the peer assessments, identifying the important categories using research by Andrade (2001), Moskal and Leydens (2000), and Keith Topping (2009). The rubric criteria and their weightings were (a) mechanics and presentation (10%), (b) structure and organisation (flow of ideas) (10%), (c) use of evidence (45%), (d) technical knowledge usage (30%) and (e) conclusion (5%). Each criterion had four proficiency levels, from 0–25% (below standard), 26–50% (developing) and 51–74% (accomplished) to 75–100% (exceptional). The total points possible were 50 for each peer assessment. While the questions for the five peer assessments differed, similar rubrics were used for all the peer reviews. The “use of evidence” criterion was modified for each peer assessment to reflect the concepts required of the students for the different questions.
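To make the weighting scheme concrete, the short Python sketch below shows how such a weighted rubric could convert a reviewer’s proficiency judgements into a mark out of 50. This is a minimal illustration, not the grading tool actually used in the module; the criterion keys mirror the criteria listed above, and the proficiency-band midpoints used as the percentage awarded per level are illustrative assumptions.

```python
# Minimal sketch (assumed, not the study's actual tool) of a weighted rubric score.

RUBRIC_WEIGHTS = {                    # criterion weightings, summing to 100%
    "mechanics_and_presentation": 0.10,
    "structure_and_organisation": 0.10,
    "use_of_evidence": 0.45,
    "technical_knowledge_usage": 0.30,
    "conclusion": 0.05,
}

TOTAL_MARKS = 50                      # each peer assessment was graded out of 50

PROFICIENCY_MIDPOINTS = {             # assumed percentage awarded per proficiency level
    "below_standard": 0.125,          # 0-25% band
    "developing": 0.38,               # 26-50% band
    "accomplished": 0.625,            # 51-74% band
    "exceptional": 0.875,             # 75-100% band
}


def rubric_score(levels):
    """Return a mark out of TOTAL_MARKS, given the proficiency level the
    reviewer chose for each criterion."""
    weighted = sum(
        RUBRIC_WEIGHTS[criterion] * PROFICIENCY_MIDPOINTS[level]
        for criterion, level in levels.items()
    )
    return round(weighted * TOTAL_MARKS, 1)


# Example: "accomplished" on use of evidence, "developing" on all other criteria.
print(rubric_score({
    "mechanics_and_presentation": "developing",
    "structure_and_organisation": "developing",
    "use_of_evidence": "accomplished",
    "technical_knowledge_usage": "developing",
    "conclusion": "developing",
}))  # -> 24.5
```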
The researcher developed the closed-ended and open-ended questions based on the literature (Malan and Stegmann 2018; Vickerman 2009; Lunney et al. 2008). The open-ended and closed-ended questions were shared with two other lecturers for feedback on their validity and amended accordingly. No reliability tests were performed on either instrument.

2.3. Procedure

Students completed five essay-type question submissions and peer reviews. The essay-type questions covered various levels of Bloom’s taxonomy, including comprehension (explain/discuss), application (distinguish), analysis (justification of arguments), and synthesis (criticise, conclude) (Krathwohl 2002). The following conditions were implemented for the peer assessments.
Firstly, the students had to complete the first peer assessment submission and review without a rubric.
After that, the researcher provided the students with the instructional rubric for the first peer assessment, explicitly explaining the rubric criteria to the students in an online session. The researcher used the rubric to grade the first peer assessments, training the students to conduct a review and give feedback using a rubric. The online session was recorded and uploaded for the students to refer to later. The researcher explicitly explained the objective of the peer assessments and how they linked to all the semester assessments and the module’s learning outcomes.
Each round of peer assessment was three weeks long. The students had one week to complete and submit their responses to the questions and were given the second week to review their peers’ submissions, providing detailed feedback for their reviewees under each criterion. The feedback was made available to the students in the third week. After every grading week, the researcher conducted random checks on 15 peer reviews, selecting the five highest-, five average- and five lowest-graded reviews. During the third week, the researcher held an online session discussing the rubric and the question, providing general feedback for all the students based on the randomly selected 15 reviews. The students were encouraged to go through the feedback in the third week and use it for the next peer submission. These steps were repeated for the remaining four peer assessments.
The rubric criteria summed to a total grade of 50 marks for each review. These grades were released to the students after each round of reviews. All students who completed reviews for all the peer assessments were awarded an overall 2% towards their semester mark. These data were obtained from the learning management system. The peer assessments were implemented using the elements of peer assessment in the framework of Adachi et al. (2017).

2.4. The Survey

After all five peer assessments were concluded at the end of the semester, the survey was made available online to all 654 students enrolled in the module. Participation was voluntary, and a consent form was made available to the students. The students completed the survey anonymously on Google Forms; the link was sent via the learning management system to all students enrolled in the module. The online questionnaire contained ten closed-ended and five open-ended questions. The open-ended questions are depicted in Table 1. The closed-ended questions comprised statements that students rated on a five-point Likert scale ranging from strongly disagree to strongly agree (Barua 2013).

2.5. Data Analysis

Qualitative and quantitative methods were employed in the study. The responses were transferred from the Google Form to an Excel spreadsheet and analysed per question, per student. To determine students’ subjective perceptions of peer assessment and rubric use, the percentage of total respondents at each scale point was determined for the Likert-scale questions, as depicted in Table 2.
The open-ended questions were analysed for common themes. The primary analysis consisted of themes extracted directly from the raw data using inductive data analysis procedures (Thomas 2006). Classifying themes from the data contrasts with traditional data analysis methods, which group the data into categories in advance (Patton 1990). The inductive data analysis by Thomas (2006) utilised in this study comprises five main steps: (1) the initial reading of the text data; (2) the identification of specific text segments and related objectives; (3) the labelling of the segments of text to create categories; (4) reducing overlap and redundancy among the categories; and (5) creating a model incorporating the most important categories, which should comprise three to eight categories in total (Thomas 2006).
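As a minimal illustration of how the Likert-scale tabulation reported in Table 2 could be reproduced, the sketch below assumes the closed-ended responses have been exported from the Google Form to a CSV file with one column per question (Q1–Q10) holding the scale points 1–5 (1 = strongly agree, as labelled in Table 2). The file name and column names are hypothetical; this is not the study’s actual analysis script.

```python
import pandas as pd

# Scale points as labelled in Table 2 (1 = Strongly Agree ... 5 = Strongly Disagree).
LIKERT_LABELS = {
    1: "Strongly Agree",
    2: "Agree",
    3: "Neutral",
    4: "Disagree",
    5: "Strongly Disagree",
}

# Hypothetical export: one row per respondent, columns Q1..Q10 holding values 1-5.
responses = pd.read_csv("peer_assessment_survey.csv")

percentages = (
    responses[[f"Q{i}" for i in range(1, 11)]]
    .apply(lambda col: col.value_counts(normalize=True))  # share of respondents at each point
    .reindex(list(LIKERT_LABELS))                         # keep the 1-5 order, even for unused points
    .fillna(0)
    .mul(100)
    .round()
)
percentages.index = [LIKERT_LABELS[point] for point in percentages.index]
print(percentages)  # rows: Likert points, columns: Q1-Q10, values: % of respondents
```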

3. Results

While the module had 654 enrolled students, only 48 (7%) completed both the closed-ended and open-ended questions of the online survey. For the closed-ended questions, percentages were calculated for each item in an Excel spreadsheet, as indicated in Table 2.
Table 2. Closed-ended questions (percentage of respondents at each scale point; 1 = Strongly Agree, 5 = Strongly Disagree).

| Question | Strongly Agree (1) | Agree (2) | Neutral (3) | Disagree (4) | Strongly Disagree (5) |
| --- | --- | --- | --- | --- | --- |
| Category A: Peer assessment performance | | | | | |
| Q1. My performance improved over the five peer assessment opportunities. | 14% | 40% | 26% | 17% | 2% |
| Category B: Clearer learning outcomes and summative assessment requirements | | | | | |
| Q2. The opportunity to be involved in peer assessment helped develop my content knowledge of what was expected in the final assessments. | 19% | 45% | 26% | 5% | 5% |
| Q3. The rubric has helped me understand the learning outcomes and final assessment requirements. | 19% | 52% | 17% | 7% | 5% |
| Q4. I clearly understand the learning outcomes and final assessment requirements following the opportunity to engage in peer assessment. | 19% | 26% | 29% | 21% | 5% |
| Category C: Better structure and quality writing | | | | | |
| Q5. The rubric has helped structure my writing and improve my writing style. | 29% | 43% | 19% | 5% | 5% |
| Q6. I have developed the ability to write analytically, using evidence to support my answers. | 31% | 43% | 19% | 5% | 2% |
| Category D: Deep approach to learning and critical thinking | | | | | |
| Q7. I have developed the ability to analyse content and think critically about the subject knowledge. | 24% | 45% | 21% | 7% | 2% |
| Q8. The peer assessment helped me with a deeper understanding of the subject knowledge of the module. | 17% | 48% | 26% | 5% | 5% |
| Category E: Better analysis, application and synthesis of theory and concepts | | | | | |
| Q9. The peer assessment and rubric have helped apply the subject knowledge to real-life situations. | 19% | 36% | 36% | 5% | 5% |
| Q10. I feel confident in answering essay-type questions for this module following the peer-assessment experience. | 10% | 36% | 31% | 14% | 10% |

4. Discussion

The study’s main objective was to investigate students’ perceptions of using a rubric and peer assessment as alternative assessments in the online learning environment. Furthermore, the study sought to investigate whether the rubric and peer assessment helped the students adopt a deep learning approach. The themes that emerged from the open-ended questions, ranked in order, are represented in Table 3.
The questions broadly covered the students’ perceptions of the benefits of the peer assessment, the benefits of using the rubric, and the usefulness of the rubric and peer assessment together. There was consensus among the students that the peer review assessments and rubric were helpful in the online learning environment. The students found them useful for understanding the subject knowledge better, thinking critically, and writing analytically and in a structured manner. However, the study also revealed that the students’ performance did not improve over the five peer assessments, despite the benefits of the peer assessment and rubric use. In addition, the students felt they were not prepared for the final summative assessment. The overall benefits of the peer assessment and rubric from the students’ perspective are discussed below per the four major themes that emerged, as represented in Table 3. The problems encountered and suggestions are also briefly discussed afterwards. The closed-ended questions are referred to by question number (Table 2), and responses from the open-ended questions are quoted to support the themes identified.
Clear performance criteria
Clear performance criteria were the theme that emerged most frequently. Most of the students (73%) indicated that the rubric helped them clarify the performance criteria and learning outcomes by understanding the rubric’s different levels. The understanding of the performance criteria was also supported by the results for Category B of the closed-ended questions (Table 2). In total, 71% of the students agreed that the introduction of the rubric helped them understand what was required of them (Table 2, Q3) and hence enabled them to produce quality work. The use of rubrics is vital in clarifying learning goals, designing instruction to address these goals, communicating goals to students, giving feedback on student progress towards the goals and, finally, judging the final product in terms of the degree to which the goals were met (Andrade and Du 2005). Students indicated that the rubric helped them understand their weaknesses and, as a result, perform better. Students stated that applying the rubric and all its elements allowed them to better understand what was required of them.
  • “I followed the rubric and used all the elements, which improved my work” (Participant #41).
Similarly, Reynolds-Keefer (2010) found that all the students involved in their study indicated that they better understood the teacher’s expectations when the assignment involved a rubric. A rubric describes to the students what constitutes quality work (K. Topping 1998; Reynolds-Keefer 2010). A rubric can serve as a blueprint for the students by defining the required learning outcomes (Greenberg 2015; Su 2021). Su (2021) also found rubrics helpful in a translation module, stating that the students could better understand errors, had better knowledge of the module and analysed problems faster. According to Andrade (2010), a good rubric should identify and characterise outstanding work and point out potential weaknesses in students’ work and how students can overcome them. It therefore serves as a guide for students to review and improve their work (Reddy and Andrade 2010; Andrade et al. 2010). The way students attempted to answer the essay questions before the rubric was introduced illustrates the difficulties that a rubric can help students overcome.
The students also displayed the use of self-evaluation by comparing their performance to the rubric. According to Dabbagh et al. (2004), rubrics and peer assessments can help students self-evaluate in online learning environments. Students can use the rubric and peer assessment feedback to self-evaluate, and this type of feedback promotes learning in online learning environments.
  • “Upon self-reflection, I could reanalyse my submitted work and dedicate more time to investigating my errors and, as such, try to improve on the identified weak areas …” (Participant #3).
  • “Through self-reflection, I could reanalyse my own work that was submitted. I dedicated time to understanding my own errors and tried my best to improve” (Participant #33)
Structured writing
The second most frequent theme related to student writing. Most students indicated that they had no structure to their writing before the rubric was introduced and suggested that they had simply regurgitated the textbook’s content.
  • “I followed the rubric, used all the elements given, and this improved my work and the structure of my writing” (Participant #41).
Saddler and Andrade (2004) stressed the importance of a rubric and its effect on student writing. A rubric can help students by clearly articulating what constitutes high-quality work. Keith Topping (2009) also indicated that peer assessments could lead to an improvement in writing, and there is substantial evidence of the effectiveness of peer assessment in the context of writing (O’Donnell and Topping 1998; Saddler and Andrade 2004). Similarly, in this study, the students indicated that introducing the rubric helped them better structure their writing and markedly improved its quality.
Better structure and quality writing were also reflected in the closed-ended questions under Category C: 72% of the students (Table 2, Q5) agreed that the rubric helped structure their writing, and 74% (Table 2, Q6) agreed that they had developed the ability to write analytically. The improvement in writing is helpful, as Riley and Simons (2016) found problems with accounting students’ written communication skills and suggested that, by implementing rubrics, educators can help improve students’ writing (Riley and Simons 2016).
  • “I started using the structure of introduction, linking theory to the scenario, and concluding. The rubric helped me structure my writing” (Participant #35).
  • “I followed the rubric and ensured I had an introduction, body and conclusion, checked my spellings and punctuation, and applied the concepts to the scenario instead of regurgitating the theory” (Participant #5).
The improvement in writing was also acknowledged by Greenberg (2015), who found that even without any training on using a rubric, students who used it produced higher-quality writing than those who did not. In a study aiming to improve the essay-writing performance of first-year students in Geography, Mowl and Pain (2006) also found that a rubric helped the students, indicating that it helped them understand the essay criteria and requirements and thereby improved their essay-writing skills (Mowl and Pain 2006; Camarata and Slieman 2020). Camarata and Slieman (2020) and van den Berg et al. (2007) also found that peer feedback combined with a rubric improved academic writing.
Deep approach to learning and critical thinking
The students also responded that the peer assessment and rubric helped them to think critically about the subject content and develop a deeper understanding of the knowledge.
  • “Yes, I understood all the content from the textbook, but applying the content to the scenarios helped me develop a deeper understanding of the content” (Participant #41).
Furthermore, 69% (Table 2, Q7) and 65% (Table 2, Q8) agreed that they developed their ability to think critically about the subject. The students agreed that the peer assessment and rubric helped them develop a deeper understanding of the subject.
Essay-type questions promote higher-order thinking skills, such as critical thinking, and are associated with a deep approach to learning (Tsingos et al. 2015). This view is in line with McDowell’s (1995) argument that giving students detailed guidelines, such as rubrics showing how they are assessed, encourages students to adopt a deeper learning approach instead of simply reproducing what they learned. Students indicated that the rubric helped them think about the subject more critically.
  • “Yes, they (rubric and peer assessment) required deep analysis of the module, and the rubric helped with steering me in the correct direction in terms of answering and analysing” (Participant #24). Another student stated that
  • “It (the rubric) has developed my critical thinking ability and taught me how to apply such a skill in the real-life application as well as in different scenarios like essay-type questions” (Participant #9).
This view aligns with Jhangiani (2016), who suggested that peer review assessments could help students deepen their learning, reflect on a subject matter and adopt a deep approach to learning.
  • “Yes … I used just to paraphrase content from the textbook, forgetting that evidence is crucial in the proper application. Now I can say it assisted me with thinking deeper about the content and giving analytical responses relating to the question at hand” (Participant #27).
Boud et al. (2006) also indicated that the improvement is a result of the active participation of the students in the learning process, which helps improve students’ basic understanding (Boud et al. 2006; Jhangiani 2016).
Previous research by Landry et al. (2015) also found that students felt that peer assessment in graduate-level writing improved their ability to think critically. The improvement was primarily associated with the opportunity to critique their peers’ work. Similarly, Orsmond et al. (2002) and Morris (2001) described an enhancement in critical thinking as an outcome of peer assessments (Morris 2001; Orsmond et al. 2002; Landry et al. 2015).
The students also indicated that the rubric and peer assessment led them to adopt a deep approach by actively engaging with the new information and comparing or relating it to what they already know and to the real world.
  • “Yes, I could use the textbook knowledge on real-life situations” (Participant #25).
According to Struyven et al. (2005), incorporating peer assessment in a module helped create a learning environment that promoted deeper approaches to learning. Carbonaro and Ravaioli (2017) also suggested peer review assessments could help engage students in deep learning rather than the surface approach. Indeed, one student said,
  • “It helped me because I realised I had to think a little bit beyond just what the textbook says; I need to apply my thoughts on whatever it is about and base my answers on that” (Participant #5).
Overall, it seems that the peer review assessments helped most students gain a deeper understanding of the work and, as such, better retention of the content. A participant said,
  • “Yes. I had a better understanding of the work without forgetting” (Participant #6).
These views support those of Schamber and Mahoney (2006), who suggested that the use of rubrics as an instructional method and evaluation could facilitate the thinking process, providing explicit cues to students on how to think and better content retention (Schamber and Mahoney 2006).
Better analysis and application of subject knowledge
There were various opinions about the usefulness of the peer assessments and rubric in analysing the subject content and applying it. Most students indicated that the peer review assessments helped them know that they could not just state content from the textbook but that they also needed to justify it by applying it to the given scenario. Applying content to a scenario is also indicative of a deep approach to learning (Tsingos et al. 2015).
  • “Yes, because I knew that after stating a statement, I had to justify it using the given passage” (Participant #29). Another stated,
  • “Yes, I was able to link the theory to the scenario without losing the context of the answer” (Participant #22).
In total, 55% of the students indicated that the peer assessment and the rubric helped them apply technical knowledge to real-life situations (Table 2, Q9). Therefore, the peer assessment and rubric could be used as assessment for learning tools. According to Na et al. (2021), learning happens when a student combines new knowledge with existing knowledge. In this manner, the students can think about what they knew, what the new knowledge is, and how to internalise it by organising it with existing knowledge.
  • “I would answer or explain what is being asked without relating it to the scenario…” (Participant #48), whilst another said,
  • “I would use anything but without evidence and linking theory to the case study/scenario given …” (Participant # 17).
According to Hadzhikoleva et al. (2019), asking students application, analysis and synthesis questions also leads to students developing higher-order thinking skills. Questions asking students to apply knowledge and analyse situations involve higher-order thinking skills such as critical thinking (Yonker 2010). Answering essay-type questions involves analytical and critical thinking and requires students to present their understanding. Warburton (2003) also stated that questions that emphasised applying principles and concepts rather than accumulating facts encouraged critical thinking (Warburton 2003; Yonker 2010). Furthermore, students also indicated that the rubric and peer assessments helped them apply the learned knowledge to new contexts.
  • “It forced me to think critically about what I learned and how I could apply it to a given situation in a way that was both efficient and beneficial” (Participant #17).
    Another student (Participant #27) indicated that,
  • “Applying the rubric forced him to think creatively in formulating his answers for the long questions”.
Similarly, Chin and Brown (2000) discovered that science students could be encouraged to use a deep approach to learning by prompting them for explanations. As a result, this could indicate the usefulness of formative assessments as an assessment for learning tool (Gipps and Stobart 2003; Nasab 2015; Reeves 2000).
Problems encountered by students
However, the students’ perceptions also indicated potential problems with the peer assessments. Only 45% agreed or strongly agreed that they were clear about the requirements of the learning outcomes and of the assessments after conducting peer assessments (Table 2, Q4), despite indicating that the rubric helped them understand what was required of them. This reflects the usefulness of the rubric when combined with the peer assessment. However, the students not being clear about the final summative assessment requirements could also indicate that peer assessments do not necessarily develop students’ skills and competencies. Greenberg (2015) found that better student performance could result from learning to use the rubric and not necessarily from developing the core skills and competencies of the module.
Moreover, in their study, Green and Bowser (2008) found no significant difference between the performance of two cohorts of students, one that used the rubric and one that did not (Green and Bowser 2008). However, the authors cited the lack of rubric training as a possible reason for the lack of difference in performance. Simply providing a rubric to students is not sufficient to enhance performance; rather, students have to engage with the rubric (Andrade 2001). This could be a reason for the lack of improvement in the students’ peer assessment grades, although the students in this study were given training. Furthermore, the students’ final summative assessment grades were outside the scope of this study; students may have felt that they did not understand the assessment requirements, yet their grades might reflect otherwise.
Moreover, only 46% (Table 2, Q10) agreed that the peer assessment helped improve their confidence in answering essay-type questions, and only 54% agreed or strongly agreed that their performance improved over the five peer assessments (Table 2, Q1). However, Gielen et al. (2010) indicated that performance improvement is not always related to the quality of the feedback but rather to the attitude of the assessee towards the peer feedback. Additionally, the efficacy of rubrics in improving student performance is not clear-cut (Francis 2018). However, Crowe et al. (2015) indicated that the benefits of peer review may not always be measurable through graded assessments and could include enriching a student’s learning experience (Crowe et al. 2015). Andrade (2001) also argued that rubrics do not always lead to better writing or knowledge transfer, but they are a good tool for providing students with feedback, which helps students with learning.
Suggestions for improvement in the peer review process
Most students suggested that the educator and not their peers review the peer assessments.
  • “The reviews should be conducted by lectures and not students” (Participant #15).
  • “Lecturers should mark the long questions” (Participant #43).
  • “It has to be evaluated by the lecturers before the marks are given to students because some students were lazy when they carried out the reviews” (Participant #12).
This is in line with the various studies that indicate that students are uncomfortable criticising each other’s work and find it difficult to rate their peers (Topping et al. 2000). Students perceive grading each other as risky and unfair (Dochy et al. 1999; Sluijsmans et al. 2002; Kwan and Leung 1996). Students also found their peers to be less competent in providing feedback than their instructors (Panadero and Alqassab 2019). As a result, students lacked confidence in their own ability and that of their peers as assessors to provide constructive feedback (McDowell 1995). According to van Gennip et al. (2009), trust is essential in implementing peer assessments. Sluijsmans et al. (2002) suggested peer assessment training should help with performance and refining reviewing skills.
However, the students also suggested that no grade should be awarded for the peer assessments and that students should only receive commentary feedback. Interestingly, this supports the study by Lipnevich et al. (2014). The authors found that receiving grades as feedback, compared with written comments, predicted students’ experience of negative emotions. The researchers concluded that feedback in the form of written comments may be more beneficial to students than receiving a grade (Lipnevich and Smith 2009; Lipnevich et al. 2021; Shute 2008; Black and Wiliam 1998; van Gennip et al. 2009). Black and Wiliam (1998) also concluded from their review that descriptive feedback leads to the greatest improvement in performance, as opposed to grades or scores.

5. Conclusions

The main objective of this study was to explore students’ perceptions of using a rubric and peer assessment as alternative assessments in an online learning environment. The study aimed to demonstrate the usefulness of peer assessments with rubrics in online learning environments. Through an online survey, four main themes emerged, all of which were reflected in the literature: clear performance criteria, structured writing, a deep approach to learning, and better analysis and application of subject content knowledge. The findings of this study can serve as a basis for using peer assessment and a rubric to help students adopt a deep approach to learning in online learning environments. This study adds to the knowledge that rubrics combined with meta-cognitive activities, such as peer assessments, can play an influential role in assessment for learning. The study also showed that a framework helps in carrying out peer assessments effectively.
The main limitation of this study is the response rate: only 48 of the 654 enrolled students completed the questionnaire. A reasonable response rate is usually between 13% and 36% (Baruch 1999), and online questionnaires are considered to have better response rates (Denscombe 2009). As a result, it may be hard to generalise the findings of this study. Furthermore, an instrument such as the Study Process Questionnaire (SPQ) by Biggs et al. (2001) could be used to identify the actual approach to learning that students adopted. The SPQ results could illuminate other essential elements of student learning (Biggs et al. 2001).
This study did not analyse the actual peer assessments of the students and their performance in the final summative assessments. Therefore, future research could analyse the peer submissions of the students and their performance in the final summative assessments. These data could be linked to the perception of the students. Such an analysis could reveal whether the improvement identified by the students extended to their summative assessments and the actual quality of their responses.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the School of Accounting Research Ethics Committee of the University of Johannesburg (SAREC20200728/09, 28 July 2020).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the institutional ethical clearance permission.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Adachi, Chie, Joanna Tai, and Phillip Dawson. 2017. A Framework for Designing, Implementing, Communicating and Researching Peer Assessment. Higher Education Research & Development 37: 453–67. [Google Scholar] [CrossRef]
  2. Alqassab, Maryam, Jan Willem Strijbos, and Stefan Ufer. 2018. Training Peer-Feedback Skills on Geometric Construction Tasks: Role of Domain Knowledge and Peer-Feedback Levels. European Journal of Psychology of Education 33: 11–30. [Google Scholar] [CrossRef]
  3. Anderson, Rebecca S. 1998. Why Talk about Different Ways to Grade? The Shift from Traditional Assessment to Alternative Assessment. New Directions for Teaching and Learning 74: 5–16. Available online: https://eric.ed.gov/?id=EJ570381 (accessed on 10 October 2022).
  4. Andrade, Heidi. 2001. The Effects of Instructional Rubrics on Learning to Write. Current Issues in Education 4: 1–21. Available online: http://cie.asu.edu/ojs/index.php/cieatasu/article/download/1630/665 (accessed on 10 October 2022).
  5. Andrade, Heidi. 2005. Teaching With Rubrics: The Good, the Bad, and the Ugly. College Teaching 53: 27–31. [Google Scholar] [CrossRef]
  6. Andrade, Heidi, and Ying Du. 2005. Student Perspectives on Rubric-Referenced Assessment. Practical Assessment, Research, and Evaluation 10: 1–11. [Google Scholar] [CrossRef]
  7. Andrade, Heidi L., Ying Du, and Kristina Mycek. 2010. Rubric-referenced Self-assessment and Middle School Students’ Writing. Assessment in Education: Principles, Policy & Practice 17: 199–214. [Google Scholar] [CrossRef]
  8. Andresen, Martin A. 2009. Asynchronous Discussion Forums: Success Factors, Outcomes, Assessments, and Limitations. Journal of Educational Technology & Society 12: 249–57. [Google Scholar]
  9. Asikainen, Henna, and David Gijbels. 2017. Do Students Develop Towards More Deep Approaches to Learning During Studies? A Systematic Review on the Development of Students’ Deep and Surface Approaches to Learning in Higher Education. Educational Psychology Review 29: 205–34. [Google Scholar] [CrossRef]
  10. Barua, Ankur. 2013. Methods for Decision-Making in Survey Questionnaires Based on Likert Scale. Journal of Asian Scientific Research 3: 35–38. Available online: https://archive.aessweb.com/index.php/5003/article/view/3446 (accessed on 10 October 2022).
  11. Baruch, Yehuda. 1999. Response Rate in Academic Studies-A Comparative Analysis. Human Relations 52: 421–38. [Google Scholar] [CrossRef]
  12. Biggs, John, David Kember, and Doris Y. P. Leung. 2001. The Revised Two-Factor Study Process Questionnaire: R-SPQ-2F. The British Journal of Educational Psychology 71: 133–49. [Google Scholar] [CrossRef]
  13. Black, Paul, and Dylan Wiliam. 1998. Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice 5: 7–74. [Google Scholar] [CrossRef]
  14. Boase-Jelinek, Daniel, Jenni Parker, and Jan Herrington. 2013. Student Reflection and Learning through Peer Reviews. Issues in Educational Research 23: 119. [Google Scholar]
  15. Booth, Peter, Peter Luckett, and Rosina Mladenovic. 1999. The Quality of Learning in Accounting Education: The Impact of Approaches to Learning on Academic Performance. Accounting Education 8: 277–300. [Google Scholar] [CrossRef]
  16. Boud, David. 1990. Assessment and the Promotion of Academic Values. Studies in Higher Education 15: 101–11. [Google Scholar] [CrossRef]
  17. Boud, David. 1992. The Use of Self-Assessment Schedules in Negotiated Learning. Studies in Higher Education 17: 185–200. [Google Scholar] [CrossRef]
  18. Boud, David. 1995. Enhancing Learning Through Self-Assessment—David Boud—Google Books, 1st ed. New York: RoutledgeFalmer. [Google Scholar]
  19. Boud, David, Ruth Cohen, and Jane Sampson. 2006. Peer Learning and Assessment. Assessment & Evaluation in Higher Education 24: 413–26. [Google Scholar] [CrossRef]
  20. Brady, Anne Marie. 2005. Assessment of Learning with Multiple-Choice Questions. Nurse Education in Practice 5: 238–42. [Google Scholar] [CrossRef]
  21. Bretag, Tracey, Rowena Harper, Michael Burton, Cath Ellis, Philip Newton, Karen van Haeringen, Sonia Saddiqui, and Pearl Rozenberg. 2019. Contract Cheating and Assessment Design: Exploring the Relationship. Assessment and Evaluation in Higher Education 44: 676–91. [Google Scholar] [CrossRef]
  22. Brookhart, Susan M., and Fei Chen. 2015. The Quality and Effectiveness of Descriptive Rubrics. Educational Review 67: 343–68. [Google Scholar] [CrossRef]
  23. Brown, Byron W., and Carl E. Liedholm. 2002. Can Web Courses Replace the Classroom in Principles of Microeconomics? American Economic Review 92: 444–48. [Google Scholar] [CrossRef]
  24. Camac Bachelor, Kyle. 2018. Student-Centered Approach vs. Teacher-Centered Approach: Which Is More Effective for a Graduate Data Analytics Course in an e-Learning Environment? Columbia: University of South Carolina. [Google Scholar]
  25. Camarata, Troy, and Tony A. Slieman. 2020. Improving Student Feedback Quality: A Simple Model Using Peer Review and Feedback Rubrics. Journal of Medical Education & Curricular Development 7: 2382120520936604. [Google Scholar] [CrossRef]
  26. Carbonaro, Antonella, and Mirko Ravaioli. 2017. Peer Assessment to Promote Deep Learning and to Reduce a Gender Gap in the Traditional Introductory Programming Course. Journal of E-Learning and Knowledge Society 13: 121–29. [Google Scholar]
  27. Carless, David. 2009. Trust, Distrust and Their Impact on Assessment Reform. Assessment & Evaluation in Higher Education 34: 79–89. [Google Scholar] [CrossRef]
  28. Carless, David, and David Boud. 2018. The Development of Student Feedback Literacy: Enabling Uptake of Feedback. Assessment & Evaluation in Higher Education 43: 1315–25. [Google Scholar] [CrossRef]
  29. Chin, Christine, and David E. Brown. 2000. Learning in Science: A Comparison of Deep and Surface Approaches. Journal of Research in Science Teaching 37: 109–38. [Google Scholar] [CrossRef]
  30. Conrad, Dianne L. 2002. Engagement, Excitement, Anxiety, and Fear: Learners’ Experiences of Starting an Online Course. International Journal of Phytoremediation 21: 205–26. [Google Scholar] [CrossRef]
  31. Crowe, Jessica A., Tony Silva, and Ryan Ceresola. 2015. The Effect of Peer Review on Student Learning Outcomes in a Research Methods Course. Teaching Sociology 43: 201–13. [Google Scholar] [CrossRef] [Green Version]
  32. Culver, Christopher. 2022. Learning as a Peer Assessor: Evaluating Peer-Assessment Strategies. Assessment & Evaluation in Higher Education, 1–17. [Google Scholar] [CrossRef]
  33. Dabbagh, Nada, and Anastasia Kitsantas. 2004. Supporting Self-Regulation in Student-Centered Web-Based Learning Environments. International Journal on E-Learning 3: 40–47. [Google Scholar]
  34. Denscombe, Martyn. 2009. Item Non-response Rates: A Comparison of Online and Paper Questionnaires. International Journal of Social Research Methodology 12: 281–91. [Google Scholar] [CrossRef]
  35. Dixson, Dante D., and Frank C. Worrell. 2016. Formative and Summative Assessment in the Classroom. Theory Into Practice 55: 153–59. [Google Scholar] [CrossRef]
  36. Dochy, Filip, Mien Segers, and Dominique Sluijsmans. 1999. The Use of Self-, Peer and Co-Assessment in Higher Education: A Review. Studies in Higher Education 24: 331–50. [Google Scholar] [CrossRef] [Green Version]
  37. Dolmans, Diana H. J. M., Sofie M. M. Loyens, Hélène Marcq, and David Gijbels. 2016. Deep and Surface Learning in Problem-Based Learning: A Review of the Literature. Advances in Health Sciences Education: Theory and Practice 21: 1087–112. [Google Scholar] [CrossRef] [Green Version]
  38. Duff, Angus, and Sam McKinstry. 2007. Students’ Approaches to Learning. Issues in Accounting Education 22: 183–214. [Google Scholar] [CrossRef]
  39. Dumford, Amber D., and Angie L. Miller. 2018. Online Learning in Higher Education: Exploring Advantages and Disadvantages for Engagement. Journal of Computing in Higher Education 30: 452–65. [Google Scholar] [CrossRef]
  40. Ellery, Karen, and Lee Sutherland. 2004. Involving Students in the Assessment Process. Perspectives in Education 22: 99–110. Available online: https://journals.co.za/doi/abs/10.10520/EJC87239 (accessed on 10 October 2022).
  41. Entwistle, Noel, and Paul Ramsden, eds. 1983. Understanding Student Learning, 1st ed. London: Routledge. [Google Scholar] [CrossRef]
  42. Falchikov, Nancy, and David Boud. 1989. Student Self-Assessment in Higher Education: A Meta-Analysis. Review of Educational Research 59: 395–430. [Google Scholar] [CrossRef]
  43. Falchikov, Nancy, and Judy Goldfinch. 2000. Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks. Review of Educational Research 70: 287–322. [Google Scholar] [CrossRef]
  44. Francis, Julie Elizabeth. 2018. Linking Rubrics and Academic Performance: An Engagement Theory Perspective. Journal of University Teaching & Learning 15: 3. [Google Scholar] [CrossRef]
  45. Garrison, Catherine, and Michael Ehringhaus. 2007. Formative and Summative Assessments in the Classroom. Available online: http://ccti.colfinder.org/sites/default/files/formative_and_summative_assessment_in_the_classroom.pdf (accessed on 10 October 2022).
  46. Gaytan, Jorge, and Beryl C. McEwen. 2007. Effective Online Instructional and Assessment Strategies. American Journal of Distance Education 21: 117–32. [Google Scholar] [CrossRef]
  47. Gielen, Sarah, Filip Dochy, and Patrick Onghena. 2010. An Inventory of Peer Assessment Diversity. Assessment & Evaluation in Higher Education 36: 137–55. [Google Scholar] [CrossRef]
  48. Gijbels, David, and Filip Dochy. 2006. Students’ Assessment Preferences and Approaches to Learning: Can Formative Assessment Make a Difference? Educational Studies 32: 399–409. [Google Scholar] [CrossRef]
  49. Gijbels, David, Gerard van de Watering, Filip Dochy, and Piet van den Bossche. 2005. The Relationship between Students’ Approaches to Learning and the Assessment of Learning Outcomes. European Journal of Psychology of Education 20: 327–41. [Google Scholar] [CrossRef]
  50. Gipps, Caroline, and Gordon Stobart. 2003. Alternative Assessment. In International Handbook of Educational Evaluation. Dordrecht: Springer, pp. 549–75. [Google Scholar] [CrossRef]
  51. Gizem Karaoğlan-Yilmaz, Fatma, Ahmet Berk Üstün, and Ramazan Yilmaz. 2020. Investigation of Pre-Service Teachers’ Opinions on Advantages and Disadvantages of Online Formative Assessment: An Example of Online Multiple-Choice Exam. Journal of Teacher Education & Lifelong Learning (TELL) 2: 10–19. Available online: https://dergipark.org.tr/en/pub/tell/issue/52517/718396 (accessed on 10 October 2022).
  52. Green, Rosemary, and Mary Bowser. 2008. Observations from the Field Sharing a Literature Review Rubric. Journal of Library Administration 45: 185–201. [Google Scholar] [CrossRef]
  53. Greenberg, Kathleen P. 2015. Rubric Use in Formative Assessment: A Detailed Behavioral Rubric Helps Students Improve Their Scientific Writing Skills. Teaching of Psychology 42: 211–17. [Google Scholar] [CrossRef]
  54. Hadzhikoleva, Stanka, Emil Hadzhikolev, and Nikolay Kasakliev. 2019. Using Peer Assessment to Enhance Higher Order Thinking Skills. TEM Journal 8: 242–47. [Google Scholar] [CrossRef]
  55. Jhangiani, Rajiv S. 2016. The Impact of Participating in a Peer Assessment Activity on Subsequent Academic Performance. Teaching of Psychology 43: 180–86. [Google Scholar] [CrossRef]
  56. Jonsson, Anders, and Gunilla Svingby. 2007. The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences. Educational Research Review 2: 130–44. [Google Scholar] [CrossRef]
  57. Kearney, Sean. 2013. Improving Engagement: The Use of ‘Authentic Self-and Peer-Assessment for Learning’ to Enhance the Student Learning Experience. Assessment and Evaluation in Higher Education 38: 875–91. [Google Scholar] [CrossRef] [Green Version]
  58. Krathwohl, David R. 2002. A Revision of Bloom’s Taxonomy: An Overview. Theory Into Practice 41: 212–18. [Google Scholar] [CrossRef]
  59. Kwan, Kam Por, and Roberta Leung. 1996. Tutor Versus Peer Group Assessment of Student Performance in a Simulation Training Exercise. Assessment & Evaluation in Higher Education 21: 205–14. [Google Scholar] [CrossRef]
  60. Landry, Ashley, Shoshanah Jacobs, and Genevieve Newton. 2015. Effective Use of Peer Assessment in a Graduate Level Writing Assignment: A Case Study. International Journal of Higher Education 4: 38–51. [Google Scholar] [CrossRef] [Green Version]
  61. Lindblom-Ylänne, Sari, Anna Parpala, and Liisa Postareff. 2019. What Constitutes the Surface Approach to Learning in the Light of New Empirical Evidence? Studies in Higher Education 44: 2183–95. [Google Scholar] [CrossRef] [Green Version]
  62. Lipnevich, Anastasiya A., and Jeffrey K. Smith. 2009. Effects of Differential Feedback on Students’ Examination Performance. Journal of Experimental Psychology: Applied 15: 319–33. [Google Scholar] [CrossRef]
  63. Lipnevich, Anastasiya A., Dana Murano, Maike Krannich, and Thomas Goetz. 2021. Should I Grade or Should I Comment: Links among Feedback, Emotions, and Performance. Learning and Individual Differences 89: 102020. [Google Scholar] [CrossRef]
  64. Lipnevich, Anastasiya A., Leigh N. McCallen, Katharine Pace Miles, and Jeffrey K. Smith. 2014. Mind the Gap! Students’ Use of Exemplars and Detailed Rubrics as Formative Assessment. Instructional Science 42: 539–59. [Google Scholar] [CrossRef]
  65. Liu, Ngar-Fun, and David Carless. 2007. Peer Feedback: The Learning Element of Peer Assessment. Teaching in Higher Education 11: 279–90. [Google Scholar] [CrossRef] [Green Version]
  66. Lladó, Anna Planas, Lídia Feliu Soley, Rosa Maria Fraguell Sansbelló, Gerard Arbat Pujolras, Joan Pujol Planella, Núria Roura-Pascual, Joan Josep Suñol Martínez, and Lino Montoro Moreno. 2014. Student Perceptions of Peer Assessment: An Interdisciplinary Study. Assessment & Evaluation in Higher Education 39: 592–610. [Google Scholar] [CrossRef] [Green Version]
  67. López-Pastor, Victor, and Alvaro Sicilia-Camacho. 2017. Formative and Shared Assessment in Higher Education. Lessons Learned and Challenges for the Future. Assessment and Evaluation in Higher Education 42: 77–97. [Google Scholar] [CrossRef]
  68. Lunney, Margaret, Keville Frederickson, Arlene Spark, and Georgia McDuffie. 2008. Facilitating Critical Thinking through Online Courses. Journal of Asynchronous Learning Networks 12: 85–97. [Google Scholar] [CrossRef]
  69. Malan, Marelize, and Nerine Stegmann. 2018. Accounting Students’ Experiences of Peer Assessment: A Tool to Develop Lifelong Learning. South African Journal of Accounting Research 32: 205–24. [Google Scholar] [CrossRef]
  70. Marton, Ference, and Roger Säljö. 1976. On Qualitative Differences in Learning II: Outcome as a Function of the Learner’s Conception of the Task. British Journal of Educational Psychology 46: 115–27. [Google Scholar] [CrossRef]
  71. McDowell, Liz. 1995. The Impact of Innovative Assessment on Student Learning. Innovations in Education and Training International 32: 302–13. [Google Scholar] [CrossRef]
  72. Mertler, Craig A. 2019. Designing Scoring Rubrics for Your Classroom. Practical Assessment, Research, and Evaluation 7: 25. [Google Scholar] [CrossRef]
  73. Morris, Jenny. 2001. Peer Assessment: A Missing Link between Teaching and Learning? A Review of the Literature. Nurse Education Today 21: 507–15. [Google Scholar] [CrossRef]
  74. Moskal, Barbara M. 2002. Recommendations for Developing Classroom Performance Assessments and Scoring Rubrics. Practical Assessment, Research, and Evaluation 8: 14. [Google Scholar] [CrossRef]
  75. Moskal, Barbara M., and Jon A. Leydens. 2000. Scoring Rubric Development: Validity and Reliability. Practical Assessment, Research, and Evaluation 7: 10. [Google Scholar] [CrossRef]
  76. Mowl, Graham, and Rachel Pain. 2006. Using Self and Peer Assessment to Improve Students’ Essay Writing: A Case Study from Geography. Innovations in Education and Training International 32: 324–35. [Google Scholar] [CrossRef]
  77. Mukhtar, Khadijah, Kainat Javed, Mahwish Arooj, and Ahsan Sethi. 2020. Advantages, Limitations and Recommendations for Online Learning during COVID-19 Pandemic Era. Pakistan Journal of Medical Sciences 36: S27. [Google Scholar] [CrossRef]
  78. Na, Seung Joo, Young Geon Ji, and Dong Hyeon Lee. 2021. Application of Bloom’s Taxonomy to Formative Assessment in Real-Time Online Classes in Korea. Korean Journal of Medical Education 33: 191–201. [Google Scholar] [CrossRef]
  79. Nasab, Fatemeh Ghanavati. 2015. Alternative versus Traditional Assessment. Journal of Applied Linguistics and Language Research 2: 165–78. Available online: http://www.jallr.com/index.php/JALLR/article/view/136 (accessed on 10 October 2022).
  80. Newble, D., and R. Clarke. 1986. The Approaches to Learning of Students in a Traditional and in an Innovative Problem-based Medical School. Medical Education 20: 267–73. [Google Scholar] [CrossRef]
  81. Nieminen, Juuso Henrik, Henna Asikainen, and Johanna Rämö. 2021. Promoting Deep Approach to Learning and Self-Efficacy by Changing the Purpose of Self-Assessment: A Comparison of Summative and Formative Models. Studies in Higher Education 46: 1296–311. [Google Scholar] [CrossRef]
  82. Nordrum, Lene, Katherine Evans, and Magnus Gustafsson. 2013. Comparing Student Learning Experiences of In-Text Commentary and Rubric-Articulated Feedback: Strategies for Formative Assessment. Assessment and Evaluation in Higher Education 38: 919–40. [Google Scholar] [CrossRef]
  83. O’Donnell, Angela M., and K. J. Topping. 1998. Peers Assessing Peers: Possibilities and Problems. In Peer Assisted Learning. Edited by Keith Topping and Stewart Ehly. New York: Routledge, pp. 255–78. Available online: https://books.google.com/books?hl=en&lr=&id=pDSRAgAAQBAJ&oi=fnd&pg=PA255&dq=Peers+assessing+peers:+Possibilities+and+problems.&ots=sKCre62nHe&sig=nXi8yGASWJ5tGPZakOlxA3QTW18 (accessed on 10 October 2022).
  84. Orsmond, Paul, Stephen Merry, and Kevin Reiling. 2002. The Use of Exemplars and Formative Feedback When Using Student Derived Marking Criteria in Peer and Self-Assessment. Assessment and Evaluation in Higher Education 27: 309–23. [Google Scholar] [CrossRef]
  85. Panadero, Ernesto, and Anders Jonsson. 2013. The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review. Educational Research Review 9: 129–44. [Google Scholar] [CrossRef]
  86. Panadero, Ernesto, and Maryam Alqassab. 2019. An Empirical Review of Anonymity Effects in Peer Assessment, Peer Feedback, Peer Review, Peer Evaluation and Peer Grading. Assessment & Evaluation in Higher Education 44: 1253–78. [Google Scholar] [CrossRef]
  87. Patton, Michael Quinn. 1990. Qualitative Evaluation and Research Methods. Available online: https://psycnet.apa.org/record/1990-97369-000 (accessed on 10 October 2022).
  88. Ramirez, Tatyana V. 2017. On Pedagogy of Personality Assessment: Application of Bloom’s Taxonomy of Educational Objectives. Journal of Personality Assessment 99: 146–52. [Google Scholar] [CrossRef] [PubMed]
  89. Reddy, Y. Malini, and Heidi Andrade. 2010. A Review of Rubric Use in Higher Education. Assessment & Evaluation in Higher Education 35: 435–48. [Google Scholar] [CrossRef]
  90. Reeves, Thomas C. 2000. Alternative Assessment Approaches for Online Learning Environments in Higher Education. Journal of Educational Computing Research 23. Available online: http://www.aahe.org (accessed on 10 October 2022). [CrossRef]
  91. Reynolds-Keefer, Laura. 2010. Rubric-Referenced Assessment in Teacher Preparation: An Opportunity to Learn by Using. Practical Assessment, Research, and Evaluation 15. [Google Scholar] [CrossRef]
  92. Rezaei, Ali Reza, and Michael Lovorn. 2010. Reliability and Validity of Rubrics for Assessment through Writing. Assessing Writing 15: 18–39. [Google Scholar] [CrossRef]
  93. Riley, Tracey J., and Kathleen A. Simons. 2016. The Written Communication Skills That Matter Most for Accountants. Accounting Education 25: 239–55. [Google Scholar] [CrossRef]
  94. Rovai, Alfred P. 2000. Online and Traditional Assessments: What Is the Difference? Internet and Higher Education 3: 141–51. [Google Scholar] [CrossRef]
  95. Ruegg, Rachael. 2015. The Relative Effects of Peer and Teacher Feedback on Improvement in EFL Students’ Writing Ability. Linguistics and Education 29: 73–82. [Google Scholar] [CrossRef]
  96. Saddler, Bruce, and Heidi Andrade. 2004. The Writing Rubric. Educational Leadership 62: 48–52. [Google Scholar]
  97. Sadler, Philip M., and Eddie Good. 2006. The Impact of Self-and Peer-Grading on Student Learning. Educational Assessment 11: 1–31. [Google Scholar] [CrossRef]
  98. Schamber, Jon F., and Sandra L. Mahoney. 2006. Assessing and Improving the Quality of Group Critical Thinking Exhibited in the Final Projects of Collaborative Learning Groups. The Journal of General Education 55: 103–37. [Google Scholar] [CrossRef]
  99. Shute, Valerie J. 2008. Focus on Formative Feedback. Review of Educational Research 78: 153–89. [Google Scholar] [CrossRef]
  100. Sluijsmans, Dominique M. A., Saskia Brand-Gruwel, and Jeroen J. G. van Merriënboer. 2002. Peer Assessment Training in Teacher Education: Effects on Performance and Perceptions. Assessment & Evaluation in Higher Education 27: 443–54. [Google Scholar] [CrossRef]
  101. Sluijsmans, Dominique, and Frans Prins. 2006. A Conceptual Framework for Integrating Peer Assessment in Teacher Education. Studies in Educational Evaluation 32: 6–22. [Google Scholar] [CrossRef]
  102. Somervell, Hugh. 1993. Issues in Assessment, Enterprise and Higher Education: The Case for Self-, Peer and Collaborative Assessment. Assessment & Evaluation in Higher Education 18: 221–33. [Google Scholar] [CrossRef]
  103. Struyven, Katrien, Filip Dochy, and Steven Janssens. 2005. Students’ Perceptions about Evaluation and Assessment in Higher Education: A Review. Assessment and Evaluation in Higher Education 30: 325–41. [Google Scholar] [CrossRef]
  104. Su, Wei. 2021. Understanding Rubric Use in Peer Assessment of Translation. Perspectives: Studies in Translation Theory and Practice 30: 71–85. [Google Scholar] [CrossRef]
  105. Sun, Haiyang, and Mingchao Wang. 2022. Effects of Teacher Intervention and Type of Peer Feedback on Student Writing Revision. Language Teaching Research. [Google Scholar] [CrossRef]
  106. Sundeen, Todd H. 2014. Instructional Rubrics: Effects of Presentation Options on Writing Quality. Assessing Writing 21: 74–88. [Google Scholar] [CrossRef]
  107. Taras, Maddalena. 2008. Summative and Formative Assessment: Perceptions and Realities. Active Learning in Higher Education 9: 172–92. [Google Scholar] [CrossRef] [Green Version]
  108. Thomas, David R. 2006. A General Inductive Approach for Analyzing Qualitative Evaluation Data. American Journal of Evaluation 27: 237–46. [Google Scholar] [CrossRef]
  109. Topping, Keith. 1996. The Effectiveness of Peer Tutoring in Further and Higher Education—A Typology and Review of the Literature. Higher Education 32: 321–45. [Google Scholar] [CrossRef]
  110. Topping, Keith. 1998. Peer Assessment between Students in Colleges and Universities. Review of Educational Research 68: 249–76. [Google Scholar] [CrossRef]
  111. Topping, Keith. 2009. Peer Assessment. Theory into Practice 48: 20–27. [Google Scholar] [CrossRef]
  112. Topping, Keith J., Elaine F. Smith, Ian Swanson, and Audrey Elliot. 2000. Formative Peer Assessment of Academic Writing Between Postgraduate Students. Assessment & Evaluation in Higher Education 25: 149–69. [Google Scholar] [CrossRef]
  113. Tsingos, Cherie, Sinthia Bosnic-Anticevich, and Lorraine Smith. 2015. Learning Styles and Approaches: Can Reflective Strategies Encourage Deep Learning? In Currents in Pharmacy Teaching and Learning. Amsterdam: Elsevier Inc. [Google Scholar] [CrossRef]
  114. van den Berg, Ineke, Wilfried Admiraal, and Albert Pilot. 2007. Design Principles and Outcomes of Peer Assessment in Higher Education. Studies in Higher Education 31: 341–56. [Google Scholar] [CrossRef]
  115. van der Pol, Jakko, B. A. M. van den Berg, Wilfried F. Admiraal, and P. Robert Jan Simons. 2008. The Nature, Reception, and Use of Online Peer Feedback in Higher Education. Computers and Education 51: 1804–17. [Google Scholar] [CrossRef] [Green Version]
  116. van Gennip, Nanine A. E., Mien S. R. Segers, and Harm H. Tillema. 2009. Peer Assessment for Learning from a Social Perspective: The Influence of Interpersonal Variables and Structural Features. Educational Research Review 4: 41–54. [Google Scholar] [CrossRef]
  117. van Ginkel, Stan, Judith Gulikers, Harm Biemans, and Martin Mulder. 2015. The Impact of the Feedback Source on Developing Oral Presentation Competence. Studies in Higher Education 42: 1671–85. [Google Scholar] [CrossRef]
  118. Vickerman, Philip. 2009. Student Perspectives on Formative Peer Assessment: An Attempt to Deepen Learning? Assessment & Evaluation in Higher Education 34: 221–30. [Google Scholar] [CrossRef]
  119. Vista, Alvin, Esther Care, and Patrick Griffin. 2015. A New Approach towards Marking Large-Scale Complex Assessments: Developing a Distributed Marking System That Uses an Automatically Scaffolding and Rubric-Targeted Interface for Guided Peer-Review. Assessing Writing 24: 1–15. [Google Scholar] [CrossRef]
  120. Vonderwell, Selma, Xin Liang, and Kay Alderman. 2007. Asynchronous Discussions and Assessment in Online Learning. Journal of Research on Technology in Education 39: 309–28. [Google Scholar] [CrossRef] [Green Version]
  121. Warburton, Kevin. 2003. Deep Learning and Education for Sustainability. International Journal of Sustainability in Higher Education 4: 44–56. [Google Scholar] [CrossRef]
  122. Wen, Meichun Lydia, and Chin Chung Tsai. 2006. University Students’ Perceptions of and Attitudes toward (Online) Peer Assessment. Higher Education 51: 27–44. [Google Scholar] [CrossRef] [Green Version]
  123. Wen, Meichun Lydia, and Chin Chung Tsai. 2008. Online Peer Assessment in an Inservice Science and Mathematics Teacher Education Course. Teaching in Higher Education 13: 55–67. [Google Scholar] [CrossRef]
  124. Wollenschläger, Mareike, John Hattie, Nils Machts, Jens Möller, and Ute Harms. 2016. What Makes Rubrics Effective in Teacher-Feedback? Transparency of Learning Goals Is Not Enough. Contemporary Educational Psychology 44–45: 1–11. [Google Scholar] [CrossRef]
  125. Wu, Wenyan, Jinyan Huang, Chunwei Han, and Jin Zhang. 2022. Evaluating Peer Feedback as a Reliable and Valid Complementary Aid to Teacher Feedback in EFL Writing Classrooms: A Feedback Giver Perspective. Studies in Educational Evaluation 73: 101140. [Google Scholar] [CrossRef]
  126. Yen, Ai Chun. 2018. Effectiveness of Using Rubrics for Academic Writing in an EFL Literature Classroom. The Asian Journal of Applied Linguistics 5: 70–80. Available online: http://caes.hku.hk/ajal (accessed on 1 September 2022).
  127. Yonker, Julie. 2010. The Relationship of Deep and Surface Study Approaches on Factual and Applied Test-bank Multiple-choice Question Performance. Assessment & Evaluation in Higher Education 36: 673–86. [Google Scholar] [CrossRef]
Table 1. Open-ended questions.
Question Identifier | Open-Ended Questions
Qa | How did you answer the essay-type questions before the introduction of the rubric?
Qb | How did you go about answering the essay-type questions after the introduction of the rubric?
Qc | Have the peer assessment and rubric helped you analyse and apply the subject content? If yes, how did they help you?
Qd | Have the peer assessment and rubric helped you think critically about the subject content and develop a deeper understanding? If yes, how did they help you?
Qe | Any suggestions for improvement of the peer review assessment process?
Table 3. Themes of open-ended questions.
Themes
Clear performance criterion
Structured writing
Deep approach to learning and critical thinking
Better analysis and application of theory
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
