1. Introduction
The increasing internationalisation and diversification of higher education, particularly in the UK, has intensified the imperative to ensure that institutional practices, including assessment, are inclusive and equitable [1,2]. As students from a wide range of national, cultural, and socioeconomic backgrounds enter universities, longstanding assumptions underpinning assessment design are increasingly called into question. Alongside widening participation initiatives targeting underrepresented groups—such as students with disabilities, first-generation learners, and those from low-income or minority ethnic backgrounds—there is a growing recognition that traditional assessment practices may inadvertently reinforce systemic inequities [3,4].
Inclusion in higher education encompasses a broad array of dimensions, including disability, social inclusion, gender, and cultural diversity [5]. While institutional strategies often rely on categorical equity labels (e.g., “disabled,” “international,” “low SES”), research suggests that such labels can obscure the diverse experiences and intersectional identities within these groups [6,7]. Consequently, inclusive assessment must move beyond standard accommodations to address the structural and epistemic assumptions embedded in conventional evaluation approaches [8,9]. For example, providing additional time for a test may mitigate certain access barriers, but it does not necessarily ensure that the assessment format itself is valid for all learners [10].
Emerging scholarship highlights the role of the institution—not just the individual—in shaping educational outcomes. The authors of Ref. [11] argue that institutional culture, teaching practices, and assessment design often have greater predictive power for student success than demographic characteristics such as nationality or socioeconomic status. These findings underline the need for universities to critically assess their own practices, particularly in relation to assessment, which remains a key mechanism through which power, knowledge, and success are defined and distributed.
At the same time, there is increasing emphasis on equipping students with critical 21st-century skills such as problem-solving, creativity, and collaboration [12,13]. Traditional assessment methods—often rooted in individualised, summative, and decontextualised formats—are frequently misaligned with these pedagogical goals. In response, scholars and institutions have called for alternative forms of assessment, including project-based, collaborative, and formative approaches that better reflect real-world complexity and foster inclusive learning environments [14,15]. However, such reforms are often constrained by institutional inertia, marketisation pressures, and rigid policy frameworks.
In this study, we use the term critical skills to refer to a broad set of competencies associated with 21st-century learning, including problem-solving, collaboration, creativity, and communication [13,15]. While critical thinking is often regarded as one element of this broader category, our focus is specifically on how academic faculty understand and embed these wider competencies into assessment practices. This distinction is important because it reflects the emphasis in higher education policy on preparing students with transferable skills that extend beyond cognitive reasoning to include interpersonal and creative capacities.
Specifically, this study has three aims:
(1) To explore how academic faculty define and understand the concept of critical skills in relation to assessment.
(2) To examine how inclusive and alternative assessment practices are currently embedded in their teaching.
(3) To investigate the institutional and cultural factors that enable or constrain such practices.
The study is guided by the overarching research question:
How do academic faculty conceptualise and implement inclusive and alternative assessment practices designed to foster critical skills in diverse higher education contexts?
This study responds to these tensions by exploring how academic faculty in a UK university perceive and navigate inclusive and alternative assessment practices in their day-to-day teaching. Through qualitative interviews, it investigates staff awareness of critical skills, the degree to which inclusive practices are embedded in ongoing assessments, and the barriers that hinder or enable such integration. By centring on staff perspectives, this research contributes to a deeper understanding of how the ideals of inclusive assessment intersect with the practical realities of teaching in increasingly diverse and marketised higher education environments.
3. Methodology
Ethical approval for this study was obtained from the researcher’s Institutional Research Ethics Committee. Following approval, informed consent forms were distributed to all participants prior to data collection. The study was conducted at a British university and employed a qualitative research design using semi-structured interviews. Of the nine academic faculty members who initially consented to participate, six ultimately completed the interviews; the remaining three were unable to participate due to time constraints. Participants represented a range of teaching experience: one had 1–5 years of experience, three had 11–15 years, and two had over 20 years of experience. To protect participant anonymity, detailed demographic data are not reported; direct quotations are attributed using pseudonymous labels (e.g., P1 = Participant 1, see Table 1).
Semi-structured interviews were selected as the primary data collection method to enable in-depth exploration of participants’ experiences, perceptions, and practices related to inclusive and alternative assessment [34]. An interview protocol was developed to ensure alignment with the study’s central research question and was reviewed in consultation with an academic advisor prior to fieldwork. Interviews lasted between 45 and 60 min and were audio-recorded with participants’ consent. All interviews were subsequently transcribed verbatim to facilitate rigorous analysis.
Data were analysed using thematic analysis, which enables researchers to identify patterns and themes across qualitative datasets [35]. This approach was chosen for its flexibility and suitability for exploring complex, under-researched phenomena [36]. Thematic coding was conducted inductively, with each interview analysed separately before patterns were examined across the dataset. Through this process, several key themes were identified: awareness of critical skills; institutional barriers to selecting assessment types; opportunities for independent learning; assessment approaches used in the classroom; fostering active and transformative learning; perceptions of assessment importance; peer and staff–student interaction; authenticity in learning and assessment; feedback practices; and the influence of students’ social backgrounds on assessment. These themes form the foundation for the subsequent presentation and discussion of findings.
Participants represented a range of disciplinary backgrounds, including education, humanities, business, social sciences, and media/communication, and taught across undergraduate and postgraduate levels. To protect anonymity, these details are reported in broad categories (see Table 1). The institution operates within national quality assurance frameworks but grants staff considerable autonomy in designing assessments within programme structures. A central Teaching and Learning Centre provides professional development and resources for assessment innovation, though staff reported varying engagement with this support. The student body is internationally diverse, with a mix of home and international students from varied educational backgrounds, which shaped staff perceptions of assessment challenges and opportunities.
The study was conducted in a research-intensive UK university with a strong emphasis on both academic and professional preparation. The institution offers a wide range of undergraduate and postgraduate programmes across humanities, social sciences, business, and STEM disciplines. While many students pursue professional careers following graduation, a significant proportion continue into further postgraduate study. The university has an internationally diverse student body, reflecting the broader internationalisation of UK higher education. These institutional characteristics shape how academic faculty conceptualise assessments and critical skills, balancing preparation for employment with the development of academic competencies.
Instrument
A semi-structured interview protocol was designed to explore academic faculty members’ perceptions of classroom assessment practices and the development of critical skills such as problem-solving, collaboration, and communication (see Appendix A). The questions addressed staff definitions of critical skills, the alignment between assessment and these skills, the use of non-traditional and alternative assessment approaches, and experiences with institutional barriers and student diversity. The protocol also explored how staff promote active learning through assessment, their feedback practices, and their views on formative versus summative assessment. The interview guide was reviewed by an academic advisor to ensure alignment with the study’s aims and clarity of content. The semi-structured format enabled consistency across interviews while allowing for the exploration of emergent themes during the conversations.
4. Findings
The findings suggest that academic faculty hold only a general understanding of critical skills and often lack comprehensive knowledge of what these skills entail or how to apply them effectively in their assessment practices.
4.1. Awareness of Critical Skills
Most academic faculty members had a general understanding of critical skills. Two academic members of staff defined critical skills as the interpretation and evaluation of information and making connections between ideas.
I understand by critical skills the ability to stand back from presented information and think about the validity in relation to its obvious truthfulness and how it fits with other information.
(P5)
Two other participants described critical skills as the ability to use learned information in daily life. In other words, what matters is that information can be used and applied to real life, rather than simply memorised and recalled.
The ability of students to be able to process information and then use that information …can be applied to the context, whichever context the students are working in, and importantly cannot just take in information as stated on your paper.
(P2)
One participant described critical skills in terms of individual reasoning and evaluation, without reference to communication. This suggests a narrower interpretation compared to others who highlighted collaboration and dialogue.
When it comes to strictly within the classroom environment, one of my assessments does involve both a discussion as well as questions and answers sections which is facilitated by students.
(P4)
Although all of the participants could define critical skills in general terms, when asked to name specific critical skills one by one, most were unable to do so. Most described critical skills broadly (e.g., applying knowledge to real-life situations or engaging in classroom communication), and only one participant explicitly listed examples such as problem-solving, teamwork, and time management. This variation reflects the absence of a universally agreed definition and the influence of disciplinary perspectives on how such skills are understood.
I can see critical skills as problem solving, working in a team, managing time, organising work, people management, sometimes presentation skills.
(P1)
At the same time, this participant described problem-solving as a critical skill differently from the others.
I give mathematical problem to students, these are a bit unusual problems. I ask them to solve it. They can use any resources, e.g., Internet and textbooks, and then, reflect on their solution.
(P1)
It was also evident that participants sometimes used the terms critical skills and critical thinking interchangeably. While some associated critical skills primarily with evaluative reasoning, others highlighted broader competencies such as teamwork, communication, and creativity.
This variation also appeared to be shaped by disciplinary context. For example, participants from STEM fields emphasised problem-solving and reasoning tasks, while those from humanities and education highlighted communication, creativity, and collaborative learning. This suggests that staff conceptualisations of critical skills are partly informed by the epistemologies and assessment traditions of their disciplines.
4.2. Institutional Barriers to Choosing Assessment Type
Competitiveness and the capitalist logic of the education system were considered the most significant barriers. In other words, because education is understood as generating income and profit from students, changes cannot be made easily. On this understanding of education, students apply to programmes or courses under certain conditions and assumptions, including about the types and methods of assessment that will be used, so academic faculty cannot immediately change these assessment methods even when they learn about an alternative and innovative assessment type.
Nearly half of the participants reported that they had to discuss the assessment types and methods with the other lecturers as well as sometimes with the students before making decisions about evaluation methods. For example, one participant explained:
Basically through discussion with lecturers; but nowadays the discussion has to be with the students as well. To make changes to the design of courses and modules we have to discuss these with students as well as with lecturers.
(P5)
The strict application of anonymous marking has also led to problems in evaluating students. Institutions generally want information about students’ progress because they need to report it as numerical and qualitative data. In most cases, more time must be given to support the unfamiliar nature of these assessments and to build students’ self-confidence in such learning and assessment. Because these approaches measure students’ higher-order skills, students need additional and flexible time to think about and interpret information.
4.3. Independent Learning Opportunities
More than half of the participants attached great importance to students interacting and communicating with each other when designing classroom assessments. Most described how interaction and communication with classmates helped students to use higher-order skills more effectively.
I often ask them to discuss the topic between themselves and come up with questions that they have about the topic; maybe to discuss between themselves in small groups, any questions that cannot be answered within the group they can put to the lecturer.
(P5)
Similarly, another participant pointed out that sharing and generating new ideas in classroom settings is quite common:
In the sense that we are encouraging working in a team and we are telling them what approach might work best. In terms of individual learning or free learning as I said I think it is absolutely essential. We wish to approach the task in an innovative way, stressing new ideas, creating knowledge based on your own reflection on matters, I just give you a structure for discussion, I am not telling you always where to look for this information or how to interpret it.
(P4)
As the participants noted above, it is important to guide students rather than give them all the information.
Guidance is very helpful. It helps my students to know how to teach critical thinking because they have never done this before.
(P6)
In this context, two participants linked guidance to receiving feedback. One of them explained:
All learners sometimes need feedback whether that is peer feedback or whether that is conversation with an academic expert in a particular period to clarify their thoughts.
(P2)
4.4. Assessment Approaches Utilised in the Class
Almost all of the participants stressed that it is important to know how an assessment will be conducted and to understand its nature, structure, and relevance before the actual assessment, so as not to create any confusion in students’ minds.
Nearly half of the participants reported that new alternative assessment approaches have emerged alongside developments in technology, and that keeping up with these developments when evaluating students is an inevitable necessity.
Things are changing in the last 20 years and universities are slowly coming around to the fact that alternative assessment can take place and they can be just as critical and just as high-quality as in previous formats.
(P5)
On the other hand, two participants cautioned that care should be taken before implementing these new assessment approaches because students are not familiar with them. They added that more care needed to be taken with learning outcomes.
It is a non-traditional approach that might create apprehension among students because they have not familiarised it they don’t know exactly what they should do.
(P4)
Specifically, most of the participants reported that in their assessment approaches they allowed students to communicate with each other and with other people. These assessment approaches mostly took the form of posters and presentations. One participant explained:
Students give presentation of their research topic but on top of that we have breakout networking sections of students get network with each other and understanding each others’ topic. One student is presenting the other student act as discussed which means by first person to evaluate questions, critically analyse the other person presentation.
(P2)
4.5. Moving Students from Passive to Active Learning (Transformative Practice)
One participant reported that in order to make the students more active and enthusiastic, it is important to give students responsibility and to increase their self-confidence.
I give them a little task to work on, they share their thoughts, usually I work in the class and I speak to them individually or in groups. So, I start to make them feel more confident.
(P1)
Two of the participants emphasised that the given theoretical information and knowledge should be applicable to daily life. To put it another way, students must transfer their learning in the classroom to real-life situations. One participant reported:
I would love all my students to go away and not just the one giving them is something that is for their final assessment for the exam and then they forget it, I try and engage them in theory, I try and get them to think about in particular my subjects the media they are watching in their reading and in their consuming so that they can take the skills and unpack stuff in their everyday lives, I am trying to think through different lenses about their everyday media, in their everyday practices.
(P3)
Half of the participants emphasised the importance of asking questions as part of learning a topic. Students who ask questions are curious and interested in learning, and are more likely to remain active learners throughout their lives. One participant reported that:
If students ask the question then I think they are half way to answering it themselves. Formulating questions is particularly important to doctoral level but I think any level. Even for young children: if they ask questions, it is a sign that they want to know why something happens in a particular way. So, I think you also question to move from being a passive to an active learner.
(P5)
4.6. The Importance of Assessment
Students must motivate themselves intrinsically throughout the learning process and in fulfilling course requirements. As a result, they tend to view classroom assessments as opportunities to demonstrate what they have learned and achieved, rather than as high-stakes outcomes. Consequently, they often feel less pressure during classroom-based assessments, understanding these as part of an ongoing learning experience rather than solely as final measures of achievement.
Two participants reported that group-based evaluations and presentations are not well received by students because their grades rely on other students’ efforts. One participant reported:
Some of them really hate their grade relying on other students because they feel like other students will not pull their weight through will bring the grade down.
(P3)
4.7. Interaction with People
All of the participants reported implementing assessment approaches that allow students to communicate and interact with each other, such as presentations, teamwork, and group discussions. However, these methods are used as formative rather than summative assessments, because as summative assessments they would not produce reliable, objective results. To prevent this, the assessment criteria must be made clear beforehand. One participant reported:
Students come together and discuss their studies to make it work because it is quite big work and then they come together to agree the marks and there is a lot of objectivity in marking but we try to be transparent every time and assessment machines, i.e., give them a copy of the marking scale.
(P3)
4.8. Authenticity and Learning Process
An important issue arising from the interviews was that the complexities and contradictions students encounter in the real world cannot easily be captured by traditional assessment strategies. In other words, it is very difficult to expect students to apply what traditional assessments measure to real-life situations. There was a general awareness that the traditional techniques and skills students learn in academic disciplines can be very restrictive and do not equip them to think critically. On this point, most of the participants stated that evaluating students throughout the process was more important than evaluating them only at its end. One participant noted that evaluation throughout the process was also important for giving feedback.
I prefer throughout the process to be honest. Because it helps to access development as well. My assessments are always at the end of term. But, I prefer throughout the term. It is feedback.
(P1)
4.9. The Importance of Feedback in Classroom Assessment
Almost all of the participants agreed that feedback related to classroom assessment should be two-way. To be more specific, students should not just receive feedback from teachers; they should also give teachers feedback on the assessment types used in the classroom. Students were able to express whether the assessment types applied in the classroom were suitable for them and, if not, which evaluation approaches should be applied instead. Moreover, since students make up the majority of a class, it is not surprising that they were also able to assess their own learning processes.
One participant reported that the discussion section in the classroom was important as it facilitated feedback both from the teacher and from the students’ peers.
One feedback is the feedback I give them during the class. When we have discussion or, when they offer presentation or when they share ideas, give them verbal feedback.
(P1)
Another participant stated that after class had finished, students were able to reach him via email and ask any questions they had.
I pretty much reply that every student can communicate with us 24 h you know they can send email, that’s the easiest thing to check how things they are doing so that’s one thing.
(P4)
Two participants mentioned using a formative assessment linked to a summative assessment. In this way, the teachers had the opportunity to give detailed feedback to students about their work.
I think that the kind of feedback that is valued is the deeper feedback about ideas, structure, logic and coherence. It is really important feedback and it is more intellectual than surface features.
(P5)
One participant reported that the students gave teachers anonymous feedback and that they made some changes in the evaluation methods in accordance with this feedback.
At the end of the module, we get the students’ feedback, done anonymously through a questionnaire they fill out online, but I try to make changes based on the feedback.
(P3)
4.10. The Effect of Social Background on Assessment
Most of the participants agreed that at the beginning of the term, the students did not want to communicate with each other for group assessments and peer assessments as they came from different countries and different cultures. They stated that individual differences (different social and cultural backgrounds) can present difficulties in assessing critical skills. One of the participants mentioned that:
We accept very diverse student population who are coming from different academic experience and backgrounds. For this reason, some of their skills are related to what they have done in the past.
(P4)
On the other hand, this situation changed over time. Nearly half of the participants reported that coming from different cultural and social backgrounds was not a disadvantage but an asset. One participant reported:
I think, if they come from different countries, they bring richness to the course because they can relate different aspects of their own education system, so, for example, if you are from Turkey, your education system is different but you can talk about its benefits.
(P6)
5. Discussion
This study explored how academic faculty at a UK university conceptualise and implement alternative assessment practices aimed at fostering critical 21st-century skills in increasingly diverse higher education contexts. The findings reveal several important tensions and opportunities in this space.
First, while academic faculty expressed a general understanding of the importance of critical skills such as problem-solving, teamwork, and communication, their knowledge of these skills remained somewhat superficial and fragmented. Few participants articulated a comprehensive or pedagogically grounded framework for critical skills development, and many struggled to explicitly name or systematically integrate these skills into their assessment practices. This echoes the existing literature suggesting that institutional rhetoric around 21st-century skills often outpaces staff capacity and conceptual clarity [15,16]. Without clear institutional guidance and professional development, staff may be left to interpret these demands individually, leading to inconsistent practice [24]. This conceptual ambiguity was compounded by the interchangeable use of “critical skills” and “critical thinking skills” among staff. Without clearer institutional definitions or guidance, participants interpreted these concepts individually, leading to inconsistent understandings and practices in assessment. This diversity of definitions is not surprising, as there is no universally agreed set of “critical skills” in the literature; rather, they are often shaped by disciplinary traditions and institutional contexts.
These differences also point to the importance of disciplinary norms and epistemologies. As Bourdieu’s framework highlights, academic fields are structured by distinct forms of cultural capital and habitus, which shape what kinds of knowledge and skills are recognised as valuable within a given field [17,37]. For instance, what counts as “critical skills” in mathematics may prioritise problem-solving and independent reasoning, whereas in the humanities or social sciences, communication and collaborative inquiry may be foregrounded. This resonates with research on Pedagogical Content Knowledge (PCK), which underscores that effective pedagogy and assessment are inherently tied to the disciplinary nature of knowledge [38,39]. In this sense, both Bourdieu’s theory of cultural capital and the PCK framework suggest that assessments cannot be meaningfully designed or interpreted without recognising the epistemic traditions of particular disciplines.
Second, institutional constraints emerged as a pervasive barrier to assessment innovation. Participants reported that accountability regimes, marketised institutional logics, and prescriptive quality assurance frameworks limited their ability to experiment with more inclusive and developmental forms of assessment. The influence of the Teaching Excellence Framework and similar metrics was particularly salient, aligning with broader critiques of how neoliberal policy environments privilege standardisation over pedagogical responsiveness [25,27,28]. Staff also described the difficulty of negotiating assessment changes within institutional structures that prioritise consistency, reliability, and efficiency—often at the expense of contextual responsiveness or inclusive design.
At the same time, participants demonstrated a strong commitment to fostering independent learning, peer interaction, and real-world relevance in their teaching. Many actively designed activities that encouraged collaboration, discussion, and authentic application of knowledge, aligning with inclusive assessment principles [8,23]. However, these practices were typically situated within formative assessment contexts rather than summative assessments, where institutional constraints around objectivity, reliability, and marking practices were more difficult to navigate. This points to an important structural tension: while formative spaces may support more dialogic and inclusive learning, the high-stakes nature of summative assessment continues to reproduce more traditional and exclusionary practices [30].
Importantly, the relationship between critical skills and assessment methods is not uniform. While alternative assessments such as collaborative projects or presentations may be particularly well-suited to fostering teamwork, communication, and creativity, other skills such as independent problem-solving or critical thinking can also be effectively assessed through more traditional individual tasks when designed with inclusivity in mind. The findings suggest that rather than assuming all alternative assessments are inherently superior, it is crucial to align assessment formats with the specific critical skills intended for development, as well as the disciplinary context in which they are applied.
These findings also highlight the tension between academic freedom and institutional structures. While faculty must retain autonomy to choose pedagogical and assessment approaches that align with their disciplinary expertise and strengths, autonomy alone is not sufficient to ensure inclusive practice. Without institutional support—such as professional development opportunities, recognition of diverse pedagogies, and flexible policy frameworks—faculty may lack the resources or confidence to innovate. Our recommendations therefore emphasise enabling institutional cultures rather than prescriptive control, seeking to balance academic freedom with systemic support for inclusive assessment.
The findings also underscore the importance of feedback processes. Participants recognised the value of two-way feedback—both giving students constructive feedback and responding to student evaluations of assessment practices. This aligns with calls for dialogic feedback approaches that foster student agency and promote deeper learning [10,33]. Yet, as with assessment practices more broadly, feedback approaches were often shaped by individual staff initiative rather than systematic institutional support.
A further key finding relates to how cultural and social diversity shaped assessment dynamics. While initial differences in student backgrounds sometimes led to reluctance around group assessment and peer interaction, participants reported that diversity ultimately enriched classroom learning and assessment when appropriately scaffolded. This reflects wider evidence that culturally responsive assessment requires careful design but can promote more equitable and inclusive outcomes [5,21]. However, the lack of consistent institutional support for culturally responsive assessment design again highlights the need for more systemic change.
Overall, this study reinforces arguments that meaningful assessment reform cannot rely on individual staff efforts alone. Structural constraints—rooted in marketised policy frameworks and institutional governance—continue to shape and often inhibit inclusive and skills-oriented assessment practices [19,25]. Staff in this study demonstrated a significant commitment to fostering critical skills and inclusive learning, but their efforts were frequently constrained by systemic factors beyond their immediate control. While these findings suggest important directions for institutional and policy reform, they should be interpreted as indicative rather than conclusive, given the small-scale and exploratory nature of this study.
To advance inclusive and transformative assessment, institutions must move beyond rhetorical commitments and address the underlying structures that shape assessment design. This includes providing sustained professional development, fostering cultures of pedagogical innovation, and revising quality assurance processes to better support diverse and developmental forms of assessment. As Ref. [26] argues, assessment reform is not a technical exercise but a social justice imperative—one that requires alignment across policy, leadership, and everyday pedagogical practice.
A further consideration relates to the rapid rise of artificial intelligence (AI) in higher education, which poses both risks and opportunities for assessment practices. On the one hand, AI-powered tools such as generative language models raise concerns about academic integrity, authorship, and the authenticity of student work. On the other hand, these same technologies can be harnessed to support inclusive and non-traditional assessments by enabling personalised feedback, scaffolding diverse learners, and creating opportunities for authentic, process-oriented demonstrations of skills. While participants in this study did not explicitly foreground AI, the broader implications for inclusive and alternative assessment are significant. Future research should therefore examine the extent to which non-traditional assessment approaches can mitigate risks and leverage opportunities presented by AI in diverse higher education contexts.
6. Conclusions
This study has illuminated how academic faculty in a UK university perceive and implement alternative assessment practices aimed at fostering critical skills within an increasingly diverse and marketised higher education environment. While there is broad recognition of the importance of critical skills such as problem-solving, collaboration, and creativity, many staff members lack a deep conceptual understanding of these competencies or a systematic approach to embedding them in assessment practices. Institutional barriers—including rigid assessment policies, performance-driven cultures, and limited opportunities for professional development—further constrain efforts to implement inclusive and transformative assessment approaches.
Future research should further examine how disciplinary contexts shape the meaning and assessment of critical skills, as what counts as appropriate assessment practice may vary significantly across fields.
To advance this agenda, universities must move beyond surface-level commitments to inclusion and skills development and reconfigure institutional structures to support sustained innovation in assessment. This entails fostering cultures of pedagogical reflexivity, investing in staff development, and ensuring that accountability frameworks do not stifle inclusive practices. Future research should further explore how these dynamics vary across disciplines and institutional contexts and should foreground student perspectives on assessment equity. Ultimately, creating assessment systems that genuinely support diverse learners and cultivate critical skills is not only a pedagogical priority but a matter of educational justice. As such, the recommendations presented here are offered as preliminary insights into the challenges of fostering critical skills and inclusive assessment, and they highlight the need for further research using larger and more diverse samples to substantiate these implications.
While this study provides valuable insights into academic faculty perspectives, it is limited by the relatively small sample size and the absence of student data. Future research should therefore incorporate students’ perspectives to examine how they perceive, experience, and make sense of inclusive and alternative assessment practices. Such a multi-voiced approach would provide a more comprehensive picture of how assessment reforms are negotiated within higher education.