Article

Implications of Assessing Student-Driven Projects: A Case Study of Possible Challenges and an Argument for Reflexivity

1 Social Psychology of Everyday Life, Department of People & Technology, Roskilde University, 4000 Roskilde, Denmark
2 Computer Science, Department of People & Technology, Roskilde University, 4000 Roskilde, Denmark
* Author to whom correspondence should be addressed.
Educ. Sci. 2020, 10(1), 19; https://doi.org/10.3390/educsci10010019
Submission received: 8 November 2019 / Revised: 19 December 2019 / Accepted: 6 January 2020 / Published: 8 January 2020
(This article belongs to the Special Issue Problem-based Pedagogies in Higher Education)

Abstract

Employing student-driven project work in a higher education setting challenges not only the way in which we understand students' learning and how we define the expected learning outcomes, but also our ways of assessing students' learning. This paper addresses the latter challenge specifically and illustrates, with a case, some of the challenges that may arise in practice when assessing student-driven, problem-based projects. The case involved an assessment situation in which a discrepancy arose between the internal and external examiner in relation to what was valued. The discrepancy had consequences not only for the concrete assessment of the students' work, but also for the validity of the problem-based university pedagogy in general, and it raised the question of how to assess students' work adequately. The research focus of this study was to explore the implications of assessing student-driven projects within a progressive approach to higher education teaching, along with the potential underlying issues. We found a need for clear assessment criteria while insisting on a space for students' creativity and reflexivity as essential parts of a learning process. The paper thus makes a case for the notion of reflexivity as an assessment criterion to be integrated into learning objectives.

1. Introduction

Problem-based approaches to university teaching and learning enable an array of ways in which to define and work with respective subject matters, as well as how to position students and conceptualize student–teacher collaborations. Such approaches stand opposed to more traditional university pedagogies in the shape of curriculum- and test-based academic practices, which means that the limits of what is generally accepted as academic work are often questioned, stretched, and negotiated. As a consequence, less clarity seems to be present when it comes to the question of the assessment of these problem-based and student-driven projects. Here we (meaning the university in general) seem to employ similar ways of thinking regarding assessment, and often the same assessment criteria, as are found in the more traditional approaches to university teaching and learning. Ideally, there should be a constructive alignment between the expected learning outcomes, the teaching activities, and the assessment criteria in relation to how students' projects are evaluated. Furthermore, (we would argue) the intended learning outcomes in particular are of the utmost importance when it comes to ensuring this constructive alignment and as a means of minimizing complications in the assessment process. We therefore see the need to investigate some of the complications and challenges that may arise in practice when assessing student-driven, problem-based project work.
As such, our intention was primarily to illuminate the multifaceted implications at play in a seemingly superficial and common discrepancy between the examiners in an assessment situation. In the analytical section, we trace back some of the strands of unclarity that served as the basis for the discrepancy in an attempt to place the structural responsibility for the exam situation primarily with the university and the internal examiners, rather than with the external examiner or the students. Furthermore, in the discussion section, we revisit these strands to discuss what changes could and should be made to better accommodate assessment in student-driven project work. However, to frame the analysis and discussion, we first situate the paper in a problem-oriented, interdisciplinary, and participant-directed project work mode (PPL) approach to teaching and learning, along with a consideration of students as producers. Subsequently, we briefly situate our understanding of assessment.

1.1. Students as Producers: The Visions of Progressive Teaching

The specific context for this endeavor was the pedagogical model of PPL as it is practiced at Roskilde University (hereafter RUC). The PPL approach [1] to teaching and learning at RUC entails that 50% of the students’ curriculum is anchored in their own projects and experiments, namely the semester projects. Among other things, this model positions students as producers in their own right as the projects are student-driven. Students are expected to not only acquire knowledge, but to contribute to knowledge production. This aspiration for creating novelty in academia is essentially the epitome of the RUC identity; not only was it a main pillar in the foundation of RUC in 1972, but it remains to date a strong signifier of what is valued.
With this aspiration, the PPL approach connects to a part of the larger academic discussion regarding the relationship between the university and its students, and the question of how to best facilitate and support learning (see, e.g., Biggs and Tang [2]; Johnson and Johnson [3]; Neary and Winn [4]; Bovill, Cook-Sather, and Felten [5]; Walkington [6]). As McCulloch [7] points out, students are commonly regarded as consumers of knowledge and the teachers (within university settings) as providers. He questions whether this distinction adequately accounts for the student–teacher relationship, as a teaching situation is essentially co-dependent on the engagement of the teacher and the student. The consumer dynamic creates an "undue distance between the student and the educational process, thereby de-emphasising the student's role in learning" [7] (p. 177). The focus thus becomes fixated on the product, instead of acknowledging the cooperative process of teaching and learning as an interchange and development of knowledge. Much in line with PPL, McCulloch suggests thinking in terms of co-producers of knowledge. "[C]o-production recognises that both student and university bring resources to the educational process, and that both make demands and levy expectations on each other during that process" [7] (p. 178). Similarly, Bovill, Cook-Sather, and Felten [5] argue that there is a pedagogical point in ensuring students' ownership in the learning process. Ownership makes students more aware of their own learning: "Active learning implies not only a shift from passivity to agency but also from merely doing to developing a meta-cognitive awareness about what is being done" [5] (p. 1).
Gibbs and Simpson [8] and Neary [9] address this concept of student as producer, and according to Gibbs and Simpson, this implies “learning as a change in personal reality” [8] (p. 22). The focus is thus on transformation and not just the exchange of knowledge [9]. What Neary points to, with the notion of students as producers, is a critical stance toward the narrowing focus on “marketisation and commercialisation with the student cast in the role of consumer” [9] (p. 5). Neary [9] thus initiates a discussion of how we can redesign the university to support a vision of revolutionary science; this implies a need for the university to contribute to novelty, or, one could say, an emancipatory approach with Marxist undertones [4,5].
Similarly, from within a Danish context, Brinkmann and Tanggaard [10] argued for replacing the dominant metaphor of an epistemology of the eye with a pragmatist approach, namely an epistemology of the hand. Building on William James and John Dewey, among others, they state:
“Education is not—or ought not to be—simple transmission of stable ideas across generations, but should be a way of reconstructing social relationships in ways that enable human beings to respond to the changing world in which they find themselves. In other words: Education is society’s way of making sure that fruitful new ideas will be devised in the future, and this is achieved only through communication”.
[10] (p. 244)
The epistemology of the eye draws on Plato’s philosophy stating that ideas are “out there,” waiting to be discovered, or properly seen. Hence, the mind has been conceptualized as a “mirror of nature” (Rorty 1980, cited in Brinkmann and Tanggaard [10]), which means that “learning happens through visual confrontation with something” [10] (p. 244). This understanding has been rather persistent in Western thought and has thus had an impact on the educational system and dominant conceptualizations of learning. It implied that schools were to re-present certain elements of the world that were preselected for the students by someone who knew more (teachers). Regarding knowledge as representation has direct consequences for how to assess students’ learning in terms of their ability to represent the world accurately [10].
The question of what an educational system should provide was not only a topic for the pragmatists, but also a concern for pedagogical thinkers situated in dialectical materialism. The challenge of overcoming the reproductive aspect of education was, for instance, addressed by Ilyenkov [11], who poses the question of how to help students know the objects in their world rather than reproducing the descriptions that others have made. What Ilyenkov is proposing is that mastering knowledge cannot be a reproductive venture. Rather, education is about constructing knowledge in a way that allows for actual thinking, which for Ilyenkov means "functioning knowledge" [11] (pp. 75, 76). This notion challenges the idea that one can teach abstract decontextualized knowledge to be applied later; knowledge is always embedded in a social practice tradition. Consequently, thinking is an action relating to a concrete worldly matter.
Regarding students as producers thus draws on a theoretical foundation situated in pragmatism and the notion of learning by doing through active engagement with the world.

1.2. Assessment: The Challenges of Alignment in Progressive Teaching

As the previous section suggests, research is available on the vision of engaging students as producers, but less so on what this entails in relation to assessment. According to Gibbs and Simpson [8], "it is assumed that assessment has an overwhelming influence on what, how and how much students study" [8] (p. 3). At the same time, they point out that "assessment sometimes appears to be, at one and the same time, enormously expensive, disliked by both students and teachers, and largely ineffective in supporting learning" [8] (p. 11). What they argue for is not to disregard assessment, but to redesign it to support learning, ensuring a constructive alignment (cf. Biggs [12]) between expected learning outcomes, teaching, and assessment. This may prove more difficult to ensure when working outside the confines of traditional curriculum- and test-based approaches to university teaching and learning, because the standard against which students are assessed might be harder to define clearly. The question of how to formulate and frame intended learning outcomes that recognize the student-driven aspect of the project work while still allowing clear and unequivocal assessment criteria remains open; as we wish to discuss with the vignette, this creates a challenge for the assessment situation, and ultimately for the alignment and coherence of the problem-oriented project work pedagogy.
Assessment as a term may be defined or elaborated upon in multiple ways (see also Bjælde et al. [13] for an elaboration), ranging from a more overall definition as provided by Bearman et al. [14] (p. 547): "graded and non-graded tasks, undertaken by an enrolled student as part of their formal study, where the learner's performance is judged by others (teachers or peers)," to a more exam-specific definition as provided by Bridges et al. [15] (p. 36): "Assessment undertaken in strict formal and invigilated time-constrained conditions." What we wanted to investigate in this study is the end-of-semester assessment that occurs in the exam situation: an exam situation that is to assess students as producers.

1.3. Research Focus: Exploring the Implications of Assessing Student-Driven Projects

Our starting point for exploration is a specific case (a vignette) from our own practice within the context of the problem-oriented teaching at Roskilde University (employing a PPL framework). The case illustrates an assessment situation in which a rather large discrepancy in relation to the grading of a semester project occurred between the internal and external examiners. (Henceforth in the paper, we will employ the term "internal examiner" when referring to ourselves, as this is the role we took up in the concrete situation described in the vignette. However, to fully understand the position of the internal examiner, we need to acknowledge how we, up until the oral exam, were engaged with the students' project work as supervisors.) The disagreement did not relate to the scientific framework of the given field, nor to the chosen approach to the problem as such, but rather to how to assess the students' performance in light of the PPL framework. When situations like these occur, they may spark insecurities about the validity of our problem-based teaching methods, and of course, about our own abilities to assess appropriately. At the same time, we tend to point the arrow outward, at the external examiners, who (to us) may appear to be off-track or out of tune with how problem-based approaches, such as the PPL framework, differ from traditional approaches when it comes to assessment. However, by attributing the problem to the external examiner, we not only limit our understanding of the situation as a whole, but also hinder important learning that may contribute to clearer and better-aligned future assessment practices. The situation thus raises fundamental questions in relation to potential (mis)alignment and (un)clarity when working with problem-based project work. The vignette serves as the analytical foundation for exploring our research questions: what are the implications of assessing student-driven projects within a progressive approach to higher education teaching and learning, and what may the underlying issues be?
Our discussion eventually points toward the importance of regarding students as researchers. This implies a processual take on learning, which means that assessment must take on an aspect other than merely grading in accordance with test- or curriculum-based approaches to teaching and learning. It comprises more than the evaluation of knowledge acquisition in relation to predefined and preselected (measurable) knowledge; it is an integral part of the learning process and a means to scaffold and support students' learning. This adds complexity, as it forces us to consider what we are assessing when assessing student-driven projects. We will argue that reflexivity becomes an essential component in the assessment of student-driven project work.

2. Methodology

The study took a qualitative research approach by making use of a single-case study, namely the vignette. The described exam situation stood out to us as something worth exploring further, as it actualized questions regarding assessment practices when regarding students as producers. It made us wonder, and we felt we had stumbled upon unanswered questions regarding how to improve our practice. The case is thus an authentic case, and we have attempted to describe it as neutrally as possible, recognizing, however, that it is a case description representing only one first-person perspective, namely that of one of the authors. For the purpose of discussing our main research question regarding how to assess student-driven projects, we found the internal examiner perspective a valid one for opening up and enabling this discussion; the responsibility for establishing and ensuring constructive alignment between teaching and assessment lies with neither the students nor the external examiners, and therefore this is a case of primarily confronting our own perspective and responsibility. This, clearly, does not exclude the potential relevance of including multiple perspectives on the same situation. However, that falls outside the scope of this paper.

2.1. The Use of Single-Case Studies

The use of single-case studies provides an opportunity to explore problems in situated manners (see Bromley [16]; Yin [17]). Even though a case has been selected and "cuts" have been made—by someone, with certain intentions—a good case study presents complexities as they appear in practice. Case studies are thus not designed to prove a point or insist on unambiguity. The described exam situation was in many ways typical, just as the process of reaching an agreement on grading is typical. The case was chosen because it challenged us in relation to the arguments used for grading and thus opened up for reflections on the assessment practice in general. In terms of relevance, it is thus not limited to assessing projects within the subject matter of psychology. The case raises questions and allows for a discussion that is relevant to the field of assessment in higher education in general, and more specifically, to the question of how to assess student-driven projects. We would thus argue that this makes it a typical case, in Gerring's terms [18], as it is representative of how exam situations typically unfold, and of how compromises on grading are made in cases of dispute. It is a standard exam situation that will, we expect, resonate with examiners across universities and disciplines. With reference to Flyvbjerg [19], one might also consider it a paradigmatic case in the sense that it highlights a general characteristic of how assessment is practiced. The case in point is that assessing student-driven project work may require something other than a mainstream take on assessment. The case thus both informs us regarding the general practice of assessment and points to the problems inherent in this practice when assessing student-driven projects.
A general criticism extended toward the use of single-case studies relates to the problem of generalization. Rather than dismissing the potential for generalization of a single case, we would side with Flyvbjerg [19], who claims that “one can often generalize on the basis of a single case” [19] (p. 77). Generalizability depends largely on the case in question and how it was chosen, and for what purpose. Flyvbjerg argues for the importance of phronesis, namely concrete, practical, and context-dependent knowledge (for an extended argumentation, see Flyvbjerg [19]). Similar arguments are presented by Beckstead et al. [20], and more recently by Schraube and Højholt [21], Dreier [22], and Valsiner [23], who all argue for the primacy of situated generalization in qualitative research.
Employing a single-case study serves for us as a way to enable reflections on the nuances of practice to which we perhaps do not otherwise give (enough) time. As the analytical work will illustrate, the casework proved generative in the sense that many strands evolved, pointing beyond the case into more theoretically grounded discussions.

2.2. Analytical Strategy

To identify and analytically work with the problems and challenges presented in the vignette, we drew on the principles of the PPL framework, specifically the framework document for RUC's Quality Policy [1], supplemented by Andersen and Heilesen [24]. The reason for employing the principles of the PPL framework as our analytical framework is its foundational character in relation to understanding the teaching and learning approaches at Roskilde University, and the difference these make in relation to assessment. On the basis of three of the principles—project work, problem-orientation, and participant-directed learning—we critically explored the challenges and contradictions that arose in practice, as presented in the vignette. Subsequently, we briefly looked into the main legal pillars of assessment (in relation to the vignette), the specific study regulation (for the subject module of semester projects in Psychology at the bachelor's level), and the Qualifications Framework for Danish Higher Education [25] in terms of how these documents frame the task of assessment. We then reflected upon how to understand these issues in order to point to possible ways of overcoming them in the future.
Our selected focus, and hence the discrepancies we chose to highlight, automatically rendered other aspects obscure, which is what Law [26] termed a process of "othering." Here, two aspects in particular are worth mentioning (especially if one were to read the vignette from a feminist perspective), namely the age and gender discrepancies between the internal and external examiner. These are surely important issues and could well have served as analytical foci in their own right; however, they will not be included in our analysis. Instead, we direct attention to, e.g., Andersen and Salomonsen [27], who explicitly examine and discuss gender differences in relation to assessment.

3. Student-Driven Projects as Problem-Oriented Learning

The pedagogical profile at Roskilde University is defined as a problem-oriented, interdisciplinary, and participant-directed project work mode (PPL). (Roskilde University employs a specific version of project work that is problem-oriented, interdisciplinary, and participant-directed. It distinguishes itself from problem-based learning (PBL), but also shares similarities in terms of pedagogical ideas [24].) As mentioned in the introduction, it offers an alternative to more traditional approaches to higher education pedagogies that are often dominated by curriculum- and test-based approaches to teaching and learning. Working with problem-orientation and student-driven project work, we would argue, makes a difference in relation to assessment, and in order to unfold this argument and further discuss it, we need to take a closer look at some of the key aspects of the PPL approach. The first aspect to emphasize here is the significance of the student-driven project work. These projects account for half of the students' ECTS points (European Credit Transfer System) each semester and invite students to engage with the world through their own research interests. Topics or themes for the projects are thus not preselected by teachers, but organized around problems that students identify within the field of a particular subject framework, e.g., psychology, pedagogy, social science, etc. This problem-orientation, the second aspect that we want to highlight, is given primacy over the rigid demarcation lines between disciplines, as well as over the internal logic of a given discipline or field. The aim is to not only identify, understand, and explain problems, but also to engage in how to potentially solve them [1], which again underlines the value of engaging with the world and ways in which to further develop society. An important aspect of the learning process and potential learning outcome is the personal relevance of the problem [28]; if students are not engaged in personally meaningful ways with the problem they are working on, the learning potential risks being diminished, and vice versa, when personal meaningfulness and relevance are present, the problem-oriented project work can contribute to deeper learning and holds an emancipatory potential. As Illeris [28] (pp. 82–83) writes: "Accommodative learning is a demanding process that requires commitment. You accommodate only in situations that are relevant to yourself and what you are doing." This commitment that Illeris points to is intimately connected to students' ownership of their projects; the fact that projects are student-driven, a third aspect, supports students' engagement and commitment to the project and to creating knowledge.
Through the project work, students are expected to acquire in-depth knowledge as they are taking ownership of the project work from initial problem development to the final critical reflections on their own knowledge production. It is a process of continuous evaluation and reflection.
This problem-oriented, project work pedagogy emphasizes the role of the student as being active and engaged in constructing the content of their education, and it implies that students are, from the very beginning of their time at university, positioned as producers of knowledge rather than mere re-producers of knowledge. The organization of project work in group formations is meant to support and stimulate the students' abilities to constantly reflect upon the problems they engage with, their subject matter, and (potentially challenge) the established truths in the field [1]. This marks a clear distinction from approaches to learning where students are assessed on their ability to recount or make use of a preselected curriculum.
In many ways, we could therefore say that the PPL pedagogy positions students as producers, which implies that students are in a learning process of becoming researchers, and at RUC, this is structured as learning by doing (or action-learning, sensu Dewey [29,30,31], for example).

4. Vignette: The Assessment of a Student-Driven Semester Project

The setting is an oral exam for a semester-long project in Psychology at Roskilde University (bachelor's level); the project group consisted of five students who submitted a 100-page project report. The report itself was fairly well written, although it contained some weaknesses and unclarities. During the oral exam, the students addressed most of these weaknesses and unclarities by themselves (in their initial presentations); they brought up various aspects and argued for how they could expand the written part, or reflect further and critically on the content.
The internal examiner started the following dialogue by asking clarifying questions as an invitation for further discussion of, and reflection on, the subject matter and the chosen theories and methods.
The external examiner joined the dialogue; all of his questions took as their starting point specific sentences from the written report. He enquired into details that students were not expected to be able to answer (at this level of their degree); for instance, he wanted to know why they had not read Hegel as primary literature when they, in their report, note that Honneth (whose theory they make use of) draws on Hegel. His questions confused the students, and they clearly felt interrogated rather than invited to take part in a scientifically grounded discussion.
Both the internal and external examiners invited reflections on elements in the report that could have been dealt with differently, and the students readily met those invitations, reflecting upon their choices in the process in relation to the concrete outcome that they had produced. Here, the students showed a remarkable ability to think along and displayed humility in relation to what they had learned from their project work.
In the assessment discussion that followed, the external examiner assessed the performance of the students as a 4 (equivalent to a grade D in the international grading system); the internal examiner argued for a 10 (B), and the final grade of 7 (C) was awarded as a compromise. The external examiner based his assessment on the number of weaknesses that were pointed out during the oral examination. The internal examiner strongly disagreed; her assessment was a 10 (B), pointing toward a potential 12 (A), given that the students had demonstrated convincing skills in critically reflecting on their own work, had been able to discuss various aspects of the problem field and the potentials and limitations of the chosen theory and method, and had participated actively in the discussion points that came up (building on a fairly well-written project report).
The external examiner did not accept the internal examiner's argument; to him, the most important issue was the number of weaknesses in the written report, regardless of what was said in the oral exam. The internal examiner acknowledged the issues concerning the written report, but countered that the students had managed to address these issues in a convincing manner, that they had demonstrated good skills in critically reflecting on and discussing the various issues that arose, and that, ultimately, the students' performance should be assessed as a whole rather than mainly on the basis of the written report. The internal examiner referred to the study regulations and how the discussion and dialogue are valued as part of the overall assessment. The external examiner could not see why this should make a difference; he refused to give a grade higher than 7 (C) based on the number of weaknesses he had pointed out in the report, regardless of how the students had responded to them.
He again underlined his emphasis on the number of weaknesses rather than the content or quality of the reflections in relation to working with the chosen subject matter, or the dialogue that had taken place in the exam situation. He mentioned his long experience as an examiner (he was considerably older than the internal examiner), and again refused to go beyond 7 in terms of grading. Finally, a grade of 7 (C) was awarded.

5. Analysis: What Are We Actually Assessing When Assessing Student-Driven Projects?

The vignette presents a case in which discrepancies arose between the internal and external examiner in relation to which criteria were relevant for determining the grade of a student-driven project; this points to assessment criteria not being as unequivocal as could be expected.
Within the case of the vignette, the external examiner insisted on primarily seeing the students' performance as a product of factual knowledge, and stressed the significance of the written report over the reflections and discussions that the students engaged in during the oral exam situation. Following this line of thought, it is clear that the written report formed the basis for understanding and assessing the students' process of knowledge acquisition and production, and the academic level at which they had managed to do so. Here, one can read the report directly in relation to the taxonomy employed, which indicates the scientific level that students are expected to perform at. This was also reflected in the expected learning outcomes for the subject module in question (according to the specific wording of the study regulation). Therefore, one can argue, it was a fair assessment to look at the students' ability to, e.g., display knowledge of the philosophy of science related to the chosen theories, display the ability to identify a psychological problem pertaining to people's everyday life, make use of scientific literature (within the field) to analyze this problem, etc. (see the overview of expected learning outcomes in Table 1).
Overall, the external examiner’s standpoint was coherent with traditional assessment practices; however, the question is whether students’ performance as researchers (in the context of PPL) was being adequately assessed?
According to Schmidt [32], the legal foundation for exams rests on four sets of rules: the first is the (university) law regulating the education in question; the second is the exam regulations; the third is the concrete study regulations for the given subject module, clearly indicating the formal requirements and learning objectives; and the fourth is the study curriculum (for the specific subject) [32]. What becomes evident here is the absence of the PPL framework, and of any indication of what value should be ascribed to it in relation to assessment.
The learning objectives (or intended learning outcomes) for the subject module in question were formulated in accordance with the national Qualifications Framework for Danish Higher Education [25] (hereafter referred to as the Qualifications Framework). The Qualifications Framework defines the desired or expected level of competency at different educational levels; e.g., an assessment at the master's level emphasizes competencies at higher taxonomic levels than one at the bachelor's level. Having clearly defined learning objectives or intended learning outcomes is an essential part of ensuring constructive alignment (cf. Biggs [12]).
In the specific study regulations for the subject module, assessment criteria were divided into three different taxonomic levels in assessing learning: knowledge in relation to knowing, understanding, describing, and explaining; skills regarding how to frame, analyze, calculate, measure, etc.; and competencies regarding making use of knowledge, handling complex situations, independent learning, etc. The concrete formulation of the assessment criteria can be found in Table 1 below.
Reading the assessment criteria for the specific subject module, we can see how these (very directly) align with the formulations in the Qualifications Framework. However, a significant ambiguity appears when we take into consideration the following presentation of what matters when conducting examinations within the PPL framework, as presented at a meeting at Roskilde University for external examiner chairmen.
“The grading is based on an overall assessment of the individual student’s oral presentation and the ability to engage in a shared conversation in which responsibility for insight and reflection competence in relation to the subject matter of the project report in both depth and breadth is demonstrated (note that more accurate assessment criteria are stated in the study regulations).” (Assessment of projects and theses at RUC, memo of 26.10.2017, authors’ translation) (The memo, authored by RUC’s executive university management, is a response to questions that arose in a meeting in the fall of 2017 between the external examiner chairman and the Executive University Management of RUC addressing the assessment of semester projects.)
As stated, grading was based on an overall assessment of the shared discussion of the subject matter. However, what was conveyed at the meeting in relation to reflection competence and subject matter was not explicitly stated in the formal assessment criteria. It can be argued that it may be stated there indirectly, but this creates room for unclarity, as well as for personal interpretation and preference, for both the internal and external examiners, and for students.
It is our impression that a tension lingers between the need for clear and well-aligned regulations and the ideals embedded in the PPL framework. This tension is highlighted by Andersen [34], who argues that constructive alignment comes with the risk of simplifying the teaching practice into a teaching-to-the-test approach to learning, which can "easily lose the personal, creative or unpredictable elements […] the independent, individually or collectively added" [34] (pp. 31–32, authors' translation); these are all elements that are considered central within the PPL framework. The tension becomes even more apparent when considering that, in the very same memo to external examiners that we referred to in the paragraph above, the PPL framework was not mentioned explicitly.
The discrepancy especially appears in relation to how to assess the qualifications displayed by the students, here primarily in the written report. It can be summarized as a discrepancy between numerically summing up and assessing weaknesses in the written report versus assessing how, and how far, students got in handling, grasping, and potentially manipulating the subject matter at hand. To understand the difference this makes in relation to assessment, it is relevant to dwell on each perspective, the assumptions behind them, and the formalities that come into play.
If we look at the wording of the grading scale used in Denmark, it heavily emphasizes a concept of weaknesses (see Table 2). To achieve the highest grade possible (12), the academic performance of the student may comprise "no or only a few minor weaknesses." At the lower end of the grading scale, grade 4 is awarded for a performance demonstrating "some major weaknesses." At the lowest end of the scale, where it becomes a question of passing or failing, the term "weaknesses" is replaced by the notion of "acceptance." Thus, to a relatively large degree, grading becomes a numerical summation of weaknesses using the seven-point grading scale.
The numerical summation of weaknesses as the ground for assessment is a common interpretation. (Although we have not been able to find a formal definition that defines the seven-point grading scale as a summation of weaknesses, this interpretation is supported by the rector of Roskilde University, Hanne Leth Andersen, who has problematized this issue multiple times in the Danish press; see References [36,37,38].) It is, for example, explicitly formulated in the guide for the assessment of master's theses at another Danish university:
“Grades below the grade 12 are awarded for performances that, to an increasing extent, comprise of weaknesses in relation to the excellent performance. This implies that every assessment begins with 12, from which will be deducted according to weaknesses”.
[39] (p. 3; authors’ translation)
In the vignette, the external examiner seemed to align himself with an idea of an expected ideal for performance at this level (or perhaps exceeding this level, with reference to his expectation of students reading primary works of Hegel when only secondarily referencing Hegel’s work). This implies an (implicit) understanding of what kinds of theoretical knowledge can be expected of the students within the field of psychology, albeit primarily on what appears to be a general basis rather than in relation to the context of project work, where students do not refer to a preselected curriculum. Assessing the semester project thus appears to become (for the external examiner) an exercise regarding the degree to which the students meet the expected ideals (of the excellent performance), and a deduction from the initial grade 12 in relation to a numerical summation of weaknesses displayed by the students (primarily in their written report).
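To make the contrast concrete, the deduction logic can be caricatured as a simple procedure. The sketch below is only our illustration: the Danish seven-point scale is defined through verbal descriptors, not as an algorithm, and the tier size (how many weaknesses cost a grade step) is an assumption we invent here, since the sources quoted above only state that deductions follow weaknesses.

```python
# A caricature of the "begin with 12 and deduct according to weaknesses"
# logic quoted above. Illustrative only; weaknesses_per_step is our
# invented assumption, not part of any official grading rule.

DANISH_SEVEN_POINT_SCALE = [12, 10, 7, 4, 2, 0, -3]  # from best to worst

def deduction_grade(weakness_count: int, weaknesses_per_step: int = 2) -> int:
    """Step down the scale one grade per tier of counted weaknesses."""
    steps = min(weakness_count // weaknesses_per_step,
                len(DANISH_SEVEN_POINT_SCALE) - 1)
    return DANISH_SEVEN_POINT_SCALE[steps]

# Under this logic, the oral exam cannot move the grade: only the count of
# weaknesses in the report matters.
assert deduction_grade(0) == 12  # "no or only a few minor weaknesses"
assert deduction_grade(6) == 4   # lands at 4 regardless of the oral defense
```

What the sketch makes visible is precisely what the internal examiner objected to: the procedure has no input for how the students handled the weaknesses in the oral exam.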
However, one could argue that if approaching assessment from within the PPL framework, the assessment criteria in the study regulation ought to be read in a different light. Although students' work and academic performance should still be assessed in relation to the specific criteria, the PPL concept of problem-orientation becomes relevant to consider, as in the case of the vignette. Working with problem-orientation is an invitation to respectfully engage with the world, to get involved, and perhaps ultimately to contribute to making the world a better place. Evidently, this relates to much more than university studies alone, as a key point here is to prepare students to meet the world and the problems they encounter or identify with respect and humility. This brings us back to the aforementioned tension between constructive alignment as a teaching-to-the-test approach and the personally relevant, engaged, and unexpected (cf. Andersen [34]). If we take Illeris' [28] wording, we can see that the notion of a problem opens up a variety of foci, all of which make a difference for assessing students' performance:
“They must be perceived as immediately relevant for the individual participants in a group of learners and of common interest to all learners in the group, they must be of such a nature that they can elicit broader social structures and the basis for these structures, they must cover the curricula of the relevant study programme in conjunction with other educational activities”.
(Illeris [28] (p. 187, authors’ translation), in Andersen and Kjeldsen [40] (p. 8))
As Andersen [34] points out, the emphasis on meeting clearly defined expectations potentially comes at the cost of students’ creativity. Here, she positions herself with the ideals of students as producers or creators of novelty and marks a divide or a tension with the dominant discourses of constructive alignment (cf. Gibbs [41]). This points to an inbuilt tension between ensuring clarity in constructive alignment and creating a space for students’ creative processes in conducting their project work.
In the official PPL descriptions (see the framework document [1]), students' curiosity and concrete engagements are highlighted as the epitome of this higher education pedagogy; students' questions and engagements come before choices of theory and method, and are thus the starting point for developing knowledge. Such a process is hardly adequately represented by a numerical summation of weaknesses alone, as the mere notion of weaknesses presupposes a clear idea of an ideal standard, a standard that works well for orienting oneself within a field (here, the field of psychology), but that loses its function when one wants to assess students' process of engaging with a problem area or subject matter.
Summing up weaknesses in student-driven projects does not account for the reflexivity displayed by students in relation to their own learning process, namely in relation to the strengths and weaknesses of their choices (of theory and method), and to the way the problem focus shifted along the way and the difference this made. This calls for a discussion of what is needed to ensure a better, and perhaps more constructive, alignment between the formal documents that frame assessment and the ideals of the problem-oriented, project work approach.

6. Discussion: Assessing Reflexivity

The problem-oriented, project work approach to teaching and learning, which is embedded in PPL, clearly makes a strong connection between students' engagement in their own research projects and their learning outcome. The assessment criteria that we listed earlier (Table 1) contain highly relevant criteria in relation to establishing some sort of (ideal) standard against which students' performance can be evaluated and assessed. However, if we take the notion of the student-driven project work seriously, as engagement in a research process, then a key factor not found in the assessment criteria is reflexivity. Even when a semester project does not provide the expected outcomes or answers, there is still learning to be gained by reflecting on the process, the choices made along the way, the analysis, and the subject matter or problem in general. This paves the way for new, and perhaps more relevant or precise, research questions. Research processes are iterative processes, and reflexivity is not just an "academic life vest" to wear as a shield from critical questions, but rather a cardinal skill or competence that needs to be learned, trained, and refined; it is an ongoing process of letting oneself be moved by the world and ultimately contributing to moving the world for the better.
As Bruno et al. [42] point out, various descriptions are available of what reflection or reflexivity means. When we deliberately go with the notion of reflexivity, we align ourselves with the understanding initiated by Schön [43], with its emphasis on integrating values with one's working activities, and with the further developments of this line of thought that highlight the dialogical aspect of reflexivity: reflexivity is a relational activity and a "form of knowledge that is built up through dialogue and negotiation" (Cunliffe [44], in Bruno et al. [42]). Therefore, reflexivity may be considered the epitome of learning; it is what bridges verbally articulable goals and that which is, in Dohn's [45] terms, "the silent resonance base and meaning frame that practical know-how and personal experience represent" [45] (p. 44, authors' translation). Creating a reflexive space for learning, in Boud's terms, fosters "confidence and an image of oneself as an active learner, not one solely directed by others" [46] (p. 32). This process thus moves toward more formative learning ideals.
As we saw in the memo on assessment from 2017, assessment is meant to comprise the student's ability to demonstrate responsibility, insight, and reflexivity in relation to the subject matter at hand. Here, reflexivity is thus explicitly valued as part of the learning process, and therefore something to assess. In our reading of the PPL framework, it is the coupling between the expected learning outcomes and the personal reflection on the process that in the end contributes to the student's learning, and this is what assessment is supposed to support. One could argue that reflexivity is an implicit undertone that need not be formally explicated, but as exemplified in the vignette, the lack of explication may contribute to challenges in practice when assessing student-driven projects. However, how does one explicate and assess reflexivity?
To integrate reflexivity more clearly and explicitly, we see three areas in which alterations or expansions could be productive: (1) clearly integrating reflexivity in the formal assessment criteria, (2) considering the oral examination a space for reflexivity, and (3) inviting students to co-formulate (additional) relevant assessment criteria for their project as part of their project work.

6.1. Reflexivity as an Assessment Criterion

Evidently, the discrepancy between the internal and the external examiner in the vignette can largely be explained as the application of different lenses for reading the requirements in the study regulations. On the one hand, it is hard to contest the reading done by the external examiner, as he built his argument directly on the instructions embedded in the seven-point grading scale, emphasizing a numerical summation of weaknesses in relation to the formal requirements or learning objectives in the study regulations. On the other hand, the arguments presented by the internal examiner were also valid in the sense that they drew forth central aspects or values of the PPL framework, insisting on assessing the weaknesses in the students' performance in relation to the students' abilities to display reflexivity and discuss their project critically in relation to the problem field in question. This unclarity about what is actually assessed constituted a problem not only between the two examiners, but certainly also in relation to the students' potential learning outcome. As clearly articulated by Gibbs and Simpson [8]:
“Students described all aspects of their study—what they attended to, how much work they did and how they went about their studying—as being completely dominated by the way they perceived the demands of the assessment system. Derek Rowntree stated that ‘if we wish to discover the truth about an educational system, we must first look to its assessment procedures’”.
[47] (p. 1) in [8] (p. 4)
As is also voiced by Rienecker and Troelsen [48], regardless of what is being assessed, be it more traditional notions of learning outcomes or other competences, such as reflexivity or competences relating to collaboration, it is of primary concern that assessment criteria are explicit and communicable “as students have the right to know on what grounds they are assessed” [48] (p. 4).
When analyzing the wording of the study regulations for the respective subject module, we found them to be ambiguous in such a way that they can be read differently depending on what is valued by the examiner. This leaves a relatively large space in which examiners can employ their own judgment, based on criteria that may not be explicit to students, or to themselves for that matter (for instance, criteria reflecting a different kind of higher education pedagogy than the one valued at Roskilde University). In cases where examiners agree on the grade, the potential unclarity may be passed on for students to figure out, whereas in cases of dispute about the grade (as in the vignette), examiners may have a hard time verbalizing the different perspectives on which they base their assessment.
We found that there appears to be a lack of clarity regarding factual knowledge versus processes of reflexivity in relation to what should be assessed. On the one hand, it can be argued that students' knowledge production can and should be assessed as an accumulation of relevant knowledge, whereas on the other hand, it can be argued that students' knowledge production as presented in the project report offers a starting point from which to engage in discussions and reflections about what can be learned from what was produced, what could have been changed or ameliorated, etc., and what new questions it gives rise to. We would argue that if one is to take the notions of the PPL framework seriously, pertaining to engaging with problems in practice and allowing oneself to be moved by the process of learning (in line with Klafki's notion of self-formation (2001, referenced in Andersen and Kjeldsen [49])), then assessment needs to somehow take reflexivity into account. Otherwise, one could ask whether the assessment is more in line with a teaching-to-the-test approach, and if so, then this could be better supported by other examination forms, such as answering predetermined exam questions or even multiple-choice exams. We are thus calling into question the alignment of positioning students as researchers (or producers of research) through student-driven project work without fully recognizing the consequences of this in the final assessment of their work.
This points to the need for ensuring an integration of the PPL framework into the requirements in the study regulations, and more specifically, into the intended learning outcomes. In other words, we argue that it is a matter of making the absent present (cf. Law [26]). When reflexivity is not made explicit in the formal exam documents constituting the legal foundation for the examination, it becomes difficult to ensure continuity between what the project work asks for (up until the exam) and what takes place in the exam situation, i.e., what is being assessed. Thus, in a worst-case scenario, continuity depends on the external examiner's understanding of how the student-driven project work pedagogy comes into play in the exam situation, in the form of including, for instance, reflexivity. Furthermore, we argue, this was exactly the cause of the dispute between the internal and external examiner in the vignette, namely whether or not to consider (and include) the difference that this specific higher education pedagogy makes in relation to assessing the students' performance.

6.2. The Role of the Oral Exam as a Space for Reflexivity

In student-driven project work, students' performance in both the written project report and the oral exam situation is assessed as a whole, or at least this is the ideal. However, the vague definition of "assessed as a whole" leaves room for interpretation in relation to how the different elements—the project report and the oral exam—are weighted. Again, this becomes evident in the vignette, where the internal and external examiner represented two different ideas of the role of the oral exam. First, one could consider the examination an opportunity for students to defend their written report. In this case, the assessment would pertain to the degree to which the students managed to convince the examiners of the validity of their knowledge production and the strength of their arguments. Second, however, one could consider the examination part of the process to be assessed, in relation to students' ability to reflect upon their own learning process and the specific knowledge contribution that they have produced. In this case, the exam situation is turned into a space for shared knowledge development and reflection. From this perspective, one could argue that the internal and external examiner take on the role of colleagues assessing other colleagues' work, which would be in line with regarding students as researchers. It therefore ultimately becomes a matter of regarding the oral exam as either a test situation or a learning space.
In the vignette, the external examiner seemed preoccupied with pinpointing weaknesses or mistakes for the students to correct or simply accept. This meant the project report was given superior status (vis-à-vis the oral performance) with an emphasis on the craftsmanship of writing an academic report as measured against an (implicit) ideal standard for such reports. His questions and comments mostly invited students to defend themselves, which may of course be fruitful as part of a learning process in some cases. However, at the same time, it may be counterproductive regarding students’ ability to critically reflect upon their own process and the consequences of the theoretical and methodological choices they have made throughout the project. Here, the discrepancy between the focus of the external and internal examiner became evident in that the internal examiner tried to ask questions that primarily invited such reflections, as well as pointed out contradictions in the project report to serve as a starting point for students’ reflexivity in relation to their knowledge production and their epistemic interests.
Building on Gibbs [37], we can say that one of the functions of assessment is to help students internalize the standards and notions of quality in the respective discipline, as well as to provide an appropriate learning activity [37] (p. 47). Considering the exam situation a space for reflexivity is an opportunity to foreground self-analysis of the students' own performance, which makes them more likely to improve on their learning (see Ryan [50]). Following this line of thought, one could argue that in this case, this translates to grasping the central theoretical concepts and methodological considerations within the field of psychology, as well as engaging oneself in exemplary learning, meaning that the problem or subject matter engaged with should be relevant to oneself (sensu Dewey and Illeris), to larger societal issues or concerns (sensu Negt), or ought to illustrate core issues within the discipline (sensu Wagenschein). Such considerations cannot be preselected or considered static, but are far more likely to be thought of as part of a process of engagement with matters of concern or personal interest (see also Andersen and Kjeldsen [40] for an elaboration of the notion of exemplarity).

6.3. The Co-Creation of Relevant Assessment Criteria

So far, we have addressed some of the challenges that one may encounter, or that are perhaps inherent to a student-driven project work approach in higher education teaching and learning. They do not appear insurmountable, but evidently require some attention if they are to be avoided in the future. Alongside these challenges, new potentials arise, or could become visible in a future practice, and are therefore worth considering; one such potential, we recognize, relates to the co-creation of relevant assessment criteria.
Considering the student-driven part of the project work, it is striking that students have no influence on the criteria on which they will be assessed. As Boud [51] points out, summative evaluation formats have been dominant (relative to formative ones) in the minds of both students and teachers, which relates well to the approach of the external examiner in the vignette and the invitation, found in the official grading scale, to conduct a numerical summation of weaknesses, pointing to the notion of an ideal (unequivocal and measurable) standard to be reached. However, if we look to the theory of project planning and evaluation, setting evaluation criteria is an integral part of project planning in the initial project phases (see Mørch [52]; Weiss [53]). Current university practice dictates that students are to relate to the predetermined assessment criteria; however, they have no chance of exerting ownership of these, as the criteria are non-negotiable. As Boud phrases it: "[i]f students are to become autonomous and interdependent learners as argued in statements of aims of higher education (for example, see Boud, 1988), then the relationship between student and assessor must be critically examined and the limiting influences of such an exercise of power explored" [51] (p. 5).
Thus, one could argue that an integral part of conducting student-driven project work is the possibility of exerting ownership of how the quality of the project work is evaluated or assessed. Granting students (partial) ownership of the assessment criteria could therefore not only benefit the coherence of the project process but also support students' engagement in the project work and, ultimately, their learning outcome.
In future practice with student-driven project work, we imagine adding two or three assessment criteria, developed specifically in collaboration between students and teachers as part of the initial project development, to the overall intended learning outcomes. This would be a pragmatic way to relate directly to the project that students are working on, and thus be more specific and concrete than the abstract intended learning outcomes for the subject module. Our prediction is that this could help bridge the abstract formal requirements and the concrete subject matter of the project work, and support students' reflexivity regarding the research process, taking the notion of students-as-researchers seriously.

7. Concluding Thoughts

After exam situations such as the one illustrated in the vignette, a common interpretation among internal examiners and students is that the external examiner is not properly in tune with the PPL framework, which is often a cause of frustration. However, as our analysis suggests, merely blaming the external examiner does not account for the underlying issue at stake, namely the lack of clarity when it comes to assessing student-driven project work. Therefore, the research focus of this paper was to explore the implications of assessing student-driven projects within a progressive approach to higher education teaching, as well as the potential underlying issues at stake. What difference does, and perhaps should, this higher education pedagogy make in relation to assessment?
As we have tried to argue, it is of the utmost importance that reflexivity be made more explicit as an assessment criterion. This could readily be achieved by writing it into the study regulations and the intended learning outcomes. Although one could argue for reading the study regulations in their current form through a problem-oriented, project-work perspective, this is, as the vignette clearly illustrates, not a feasible strategy in the long run. Clearly stating reflexivity in the study regulations would not only diminish potential grading disputes between internal and external examiners, but would also ensure continuity throughout the project work process, including the exam situation, thus underlining the status of the exam as a learning space.
This brings us back to the question of constructive alignment. Ensuring proper (constructive) alignment is not just a question of (direct) coherence between intended learning outcomes and the teaching and examination formats in a narrow sense, as such a reading carries a risk of restricting students' ownership of their project work and the space for creativity that is intimately intertwined with being a researcher. Ensuring constructive alignment when adopting a problem-oriented, project work pedagogy must necessarily consider, recognize, and integrate the difference that this pedagogy makes and the values it carries, such that sustainable assessment criteria can be constructed and employed without limiting students' learning spaces. The exam situation must be carried out in accordance with the overall higher education pedagogy; in our current practice, as illustrated in the vignette, further development in this area is clearly called for.
We have only briefly touched upon the role of reflexivity in assessment in relation to ensuring reflexivity as a lifelong practice, e.g., how does reflexivity in assessment prepare students for a reflexive practice after obtaining their university diploma? This question is part of a more extensive discussion of how well knowledge (obtained in a university setting) and reflexive practices transfer to other settings (see Boud [54]) and of whether universities are well equipped to prepare students for lifelong learning (see Ryan [50]). Although we certainly sympathize with this discussion, we found limited grounding in the chosen case to substantiate it beyond what has already been written.

Author Contributions

Conceptualization, S.P. and M.H.; data curation, S.P.; methodology, S.P. and M.H.; writing—original draft, S.P. and M.H.; writing—review and editing, S.P. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors appreciate all the help provided by the Centre for Research on Problem-Oriented Project Learning, RUC.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. The Pedagogical Profile at Roskilde University; Framework Document for RUC’s Quality Policy; Roskilde University: Roskilde, Denmark, 2017.
  2. Biggs, J.; Tang, C. Teaching for Quality Learning at University; McGraw-Hill Education: Maidenhead, UK, 2007.
  3. Johnson, D.W.; Johnson, R.T. Making cooperative learning work. Theory Pract. 1999, 38, 67–73.
  4. Neary, M.; Winn, J. The student as producer: Reinventing the student experience in higher education. In The Future of Higher Education: Policy, Pedagogy and the Student Experience; Continuum: London, UK, 2009; pp. 192–210.
  5. Bovill, C.; Cook-Sather, A.; Felten, P. Students as co-creators of teaching approaches, course design, and curricula: Implications for academic developers. Int. J. Acad. Dev. 2011, 16, 133–145.
  6. Walkington, H. Students as Researchers: Supporting Undergraduate Research in the Disciplines in Higher Education; The Higher Education Academy: York, UK, 2015.
  7. McCulloch, A. The student as co-producer: Learning from public administration about the student–university relationship. Stud. High. Educ. 2009, 34, 171–183.
  8. Gibbs, G.; Simpson, C. Conditions under which assessment supports students’ learning. Learn. Teach. High. Educ. 2005, 1, 3–31.
  9. Neary, M. Student as producer: An institution of the common? [or how to recover communist/revolutionary science]. Enhanc. Learn. Soc. Sci. 2012, 4, 1–16.
  10. Brinkmann, S.; Tanggaard, L. Toward an epistemology of the hand. Stud. Philos. Educ. 2010, 29, 243–257.
  11. Ilyenkov, E.V. Knowledge and thinking. J. Russ. East Eur. Psychol. 2007, 45, 75–80.
  12. Biggs, J. Constructive alignment in university teaching. HERDSA Rev. High. Educ. 2014, 1, 5–22.
  13. Bjælde, O.E.; Jørgensen, T.H.; Lindberg, A.B. Continuous assessment in higher education in Denmark. Dansk Universitetspædagogisk Tidsskrift 2017, 12, 1–19.
  14. Bearman, M.; Dawson, P.; Boud, D.; Bennett, S.; Hall, M.; Molloy, E. Support for assessment practice: Developing the Assessment Design Decisions Framework. Teach. High. Educ. 2016, 21, 545–556.
  15. Bridges, P.; Cooper, A.; Evanson, P.; Haines, C.; Jenkins, D.; Scurry, D.; Woolf, H.; Yorke, M. Coursework marks high, examination marks low: Discuss. Assess. Eval. High. Educ. 2002, 27, 35–48.
  16. Bromley, D.B. The Case-Study Method in Psychology and Related Disciplines; John Wiley & Sons: New York, NY, USA, 1986.
  17. Yin, R.K. Case Study Research: Design and Methods; Sage Publications: London, UK, 1986.
  18. Gerring, J. Case Study Research: Principles and Practices; Cambridge University Press: New York, NY, USA, 2006.
  19. Flyvbjerg, B. Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again; Cambridge University Press: Cambridge, UK, 2001.
  20. Beckstead, Z.; Cabell, K.R.; Valsiner, J. Generalizing through conditional analysis: Systemic causality in the world of eternal becoming. Humana Mente 2009, 11, 65–80.
  21. Schraube, E.; Højholt, C. Introduction: Subjectivity and knowledge—The formation of situated generalization in psychological research. In Subjectivity and Knowledge—Generalization in the Psychological Study of Everyday Life; Højholt, C., Schraube, E., Eds.; Springer: New York, NY, USA, 2019.
  22. Dreier, O. Generalizations in situated practices. In Subjectivity and Knowledge—Generalization in the Psychological Study of Everyday Life; Højholt, C., Schraube, E., Eds.; Springer: New York, NY, USA, 2019.
  23. Valsiner, J. Generalization is possible only from a single case (and from a single instance): The value of a personal diary. In Integrating Experiences: Body and Mind Moving between Contexts; Wagoner, B., Chaudhary, N., Hviid, P., Eds.; Information Age Publishing: Charlotte, NC, USA, 2015; pp. 233–243.
  24. Andersen, A.S.; Heilesen, S.B. (Eds.) The Roskilde Model: Problem-Oriented Learning and Project Work; Springer International Publishing: Cham, Switzerland, 2015.
  25. Qualifications Framework for Danish Higher Education. 2008. Available online: https://ufm.dk/en/education/recognition-and-transparency/transparency-tools/qualifications-frameworks/other-qualifications-frameworks/danish-qf-for-higher-education?set_language=en&cl=en (accessed on 2 December 2018).
  26. Law, J. Making a Mess with Method. In The SAGE Handbook of Social Science Methodology; Outhwaite, W., Turner, S.P., Eds.; Sage Publications: London, UK, 2007; pp. 595–606.
  27. Andersen, L.B.; Salomonsen, H.H. Giver kvinder og mænd forskellige karakterer? Køn og karaktergivning på universitetet [Do women and men give different grades? Gender and grading at the university]. Dansk Universitetspædagogisk Tidsskrift 2014, 9, 71–89. (In Danish)
  28. Illeris, K. Problemorientering og Deltagerstyring: Oplæg til en Alternativ Didaktik [Problem Orientation and Participant-Directed Learning]; Munksgaard: Copenhagen, Denmark, 1974. (In Danish)
  29. Dewey, J. Democracy and Education: An Introduction to the Philosophy of Education; Macmillan: New York, NY, USA, 1923.
  30. Dewey, J. Experience and Nature; Open Court: Chicago, IL, USA, 1925.
  31. Dewey, J. Experience and Education; Touchstone/Simon & Schuster: New York, NY, USA, 1938.
  32. Schmidt, E. Bedømmelsens kompleksitet [The complexity of assessment]. Dansk Universitetspædagogisk Tidsskrift 2006, 1, 6–12. (In Danish)
  33. Subject Module Project in Psychology. Available online: https://study.ruc.dk/class/view/14795 (accessed on 2 December 2018).
  34. Andersen, H.L. »Constructive alignment« – risikoen for en forsimplende universitetspædagogik [“Constructive alignment”: the risk of an oversimplified university pedagogy]. Dansk Universitetspædagogisk Tidsskrift 2010, 5, 30–35. (In Danish)
  35. Bekendtgørelse om Karakterskala og Anden Bedømmelse [Ministerial Order on the Grading Scale and Other Forms of Assessment]. 2015. Available online: https://www.retsinformation.dk/forms/r0710.aspx?id=167998#Kap1 (accessed on 2 December 2018). (In Danish)
  36. Ejsing, J. Rektor: Vi Skal Have Ny Karakterskala Med +12 Som Topkarakter [Rector: We need a new grading scale with +12 as the top grade]; Berlingske: Copenhagen, Denmark, 2016. Available online: https://www.berlingske.dk/samfund/rektor-vi-skal-have-ny-karakterskala-med-12-som-topkarakter (accessed on 2 December 2018). (In Danish)
  37. Andersen, L.H. Rektor til ny Minister: Vi Udvikler Ikke Nytænkende Unge Gennem Nulfejl og Kontrol [Rector to new minister: We do not develop innovative young people through zero errors and control]; Altinget: Copenhagen, Denmark, 2018. Available online: https://www.altinget.dk/forskning/artikel/rektor-til-ny-minister-vi-udvikler-ikke-nytaenkende-unge-gennem-nulfejl-og-kontrol (accessed on 2 December 2018). (In Danish)
  38. Romme-Mølby, M. Karakterskala er Blevet for Tjeklisteorienteret [The grading scale has become too checklist-oriented]; Gymnasieskolen: Copenhagen, Denmark, 2017. Available online: https://gymnasieskolen.dk/karakterskala-er-blevet-tjeklisteorienteret (accessed on 2 December 2018). (In Danish)
  39. Institut for Psykologi, Københavns Universitet [Department of Psychology, University of Copenhagen]. Specialepjece til Bedømmerne [Thesis pamphlet for the assessors]. 2017. Available online: https://www.psy.ku.dk/uddannelser/censor/faglig-information/Specialepjece_til_bed_mmerne_September_2017.pdf (accessed on 2 December 2018). (In Danish)
  40. Andersen, A.S.; Kjeldsen, T.H. Theoretical foundations of PPL at Roskilde University. In The Roskilde Model: Problem-Oriented Learning and Project Work; Andersen, A.S., Heilesen, S.B., Eds.; Springer: New York, NY, USA, 2015.
  41. Gibbs, G. Using assessment strategically to change the way students learn. In Assessment Matters in Higher Education; Brown, S., Glasner, A., Eds.; SRHE and Open University Press: Maidenhead, UK; Philadelphia, PA, USA, 1999.
  42. Bruno, A.; Galuppo, L.; Gilardi, S. Evaluating the reflexive practices in a learning experience. Eur. J. Psychol. Educ. 2011, 26, 527–543.
  43. Schön, D.A. The Reflective Practitioner; Basic Books: New York, NY, USA, 1983.
  44. Cunliffe, A.L. On becoming a critically reflexive practitioner. J. Manag. Educ. 2004, 28, 407–426.
  45. Dohn, N.B. Karaktergivning—intuitiv ekspertise eller ‘viden i praksis’? [Grading—intuitive expertise or ‘knowledge in practice’?]. Dansk Universitetspædagogisk Tidsskrift 2006, 1, 38–46. (In Danish)
  46. Boud, D. Reframing assessment as if learning were important. In Rethinking Assessment in Higher Education; Routledge: New York, NY, USA, 2007; pp. 24–36.
  47. Rowntree, D. Assessing Students: How Shall We Know Them? Taylor & Francis: Oxfordshire, UK, 1987.
  48. Rienecker, L.; Troelsen, R.N.P. Bedømmelse og censur [Assessment and external examination]. Dansk Universitetspædagogisk Tidsskrift 2006, 1, 4–5. (In Danish)
  49. Andersen, A.S.; Kjeldsen, T.H. A critical review of the key concepts in PPL. In The Roskilde Model: Problem-Oriented Learning and Project Work; Andersen, A.S., Heilesen, S.B., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 17–35.
  50. Ryan, M. Introduction: Reflective and reflexive approaches in higher education: A warrant for lifelong learning? In Teaching Reflective Learning in Higher Education; Springer: Cham, Switzerland, 2015; pp. 3–14.
  51. Boud, D. Assessment and learning: Contradictory or complementary? In Assessment for Learning in Higher Education; Knight, P., Ed.; Kogan Page: London, UK, 1995; pp. 35–48.
  52. Mørch, S. Projektbogen: Teori og Metode i Projektplanlægning [The Project Book: Theory and Method in Project Planning]; Rubikon: Copenhagen, Denmark, 1993. (In Danish)
  53. Weiss, C.H. Evaluation: Methods for Studying Programs and Policies, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1998.
  54. Boud, D. Knowledge at work: Issues of learning. In Work-Based Learning: A New Higher Education? Boud, D., Solomon, N., Eds.; SRHE and Open University Press: Buckingham, UK, 2001.
Table 1. Formal assessment criteria for “Subject Module Project in Psychology” [33] (authors’ translation).

Intended learning outcomes/assessment criteria. The goal of the project is for the student to achieve:

Knowledge:
  • Knowledge of psychological issues relating to human cognition, development, personality, and social life.
  • Knowledge of theoretical scientific discussions associated with the issue and the chosen approaches.
  • Insight into key theories and issues in psychology relating to cognition, development, personality, and sociality, seen in a global social and cultural context.

Skills:
  • Skills to independently identify and formulate a psychological issue relating to human cognition, development, personality, or social life.
  • Skills in analyzing and addressing the issue in a methodical manner with the use of scientific literature.
  • Skills to carry out an independent and critical discussion of the theoretical approaches used.
  • Skills in communicating knowledge and insight of key theories and issues in psychology, both in written, scientifically structured reports and orally.

Competencies:
  • Competency to analyze and work independently with scientific literature that deals with central theories and issues in psychology.
  • Competency to utilize key psychological theories, and other theories of relevance to psychology, relating to human cognition, development, personality, or social life seen in a social and cultural context.
  • Competency to form coherent conclusions based on the analysis conducted and the treatment of the issue.
  • Competency to contextualize the conclusion in relation to its psychological and/or social implications.
Table 2. Formal definition of the seven-point grading scale; emphasis added [35].

12: For an excellent performance displaying a high level of command of all aspects of the relevant material, with no or only a few minor weaknesses.
10: For a very good performance displaying a high level of command of most aspects of the relevant material, with only minor weaknesses.
7: For a good performance displaying good command of the relevant material but also some weaknesses.
4: For a fair performance displaying some command of the relevant material but also some major weaknesses.
02: For a performance meeting only the minimum requirements for acceptance.
00: For a performance that does not meet the minimum requirements for acceptance.
−3: For a performance that is unacceptable in all respects.
