Article

Bridging Teacher Knowledge and Practice: Exploring Authentic Assessment across Educational Levels

by Rachael Hains-Wesson 1,* and Sanri le Roux 2

1 School of Global, Urban, and Social Studies, The Royal Melbourne Institute of Technology (RMIT), Melbourne 3000, Australia
2 Client Experience Team, MTP Health, Sydney 2000, Australia
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(8), 894; https://doi.org/10.3390/educsci14080894
Submission received: 3 July 2024 / Revised: 9 August 2024 / Accepted: 14 August 2024 / Published: 16 August 2024

Abstract:
As teachers, we are living and working in times of abundant challenge and change. These challenges transpire across different education levels and sectors, including K–12, vocational, tertiary, and adult learning. Within this vast education ecosystem, a major challenge for all teachers is to allocate time, effort, and resources to ensure that their students receive a quality education with real-world implications, influencing soft-skill attainment, such as teamwork, communication, and critical thinking skills. In this article, the authors discuss, through a theoretical lens, the value of considering a national and universal approach to self- and peer-evaluations of authentic assessment tasks to improve teacher practice in Australia. Currently, there is modest opportunity for K–12 and tertiary teachers to learn together comprehensively, limiting cross-fertilisation of practice, interconnectedness, and the formation of a national community of practice. The authors argue that offering an avenue to share knowledge and practice in authentic assessment design could help address this challenge. Therefore, the article is dedicated to exploring the barriers and opportunities to advance a national and universal approach to transferable professional development in authentic assessment practice within the Australian education ecosystem.

1. The Australian Education System: A Brief Overview of Strengths and Challenges

Despite facing challenges and vast, rapid changes related to teacher shortages, the selective distribution of funding, and the influence of Generative Artificial Intelligence (Gen AI), Australia remains a quality education ecosystem [1,2,3]. Australian children typically enter the school system in preprimary (the year they turn five or six). The legal age for leaving school differs slightly between states [4], but it is generally once a learner has completed Year 10, although at least 80% of learners go on to complete Year 12 [5,6]. Therefore, the term “K–12” is generally used to refer to the Australian school system, where students typically begin their studies at preprimary and conclude with Year 12. The term “tertiary”, on the other hand, refers to vocational and technical colleges and/or university study.
With the above in mind, this article briefly explores, through a theoretical lens, the utility of an Authentic Assessment Rubric Tool and how it might best support a national and/or universal approach to teacher practice in the domain of (summative) authentic assessment for all teachers operating within the Australian education ecosystem.
Thus, in this study, authentic assessment refers to a form of evaluation that requires students to apply skills and knowledge to real-world tasks and problems [7,8], reflecting the kinds of challenges they are likely to encounter in their professional and personal lives [9,10]. While the advocacy for authentic assessment in this paper emphasises its role in enhancing employment readiness, the authors recognise and strongly support the broader educational goals of fostering critical thinking, creativity, and ethical reasoning [11]. Authentic assessment should be contextually diverse, promoting transferable skills that benefit not only professional life but also personal and civic engagement, thereby contributing to a well-rounded and holistic education [12]. This can also be achieved through other alternative assessment measures besides authentic assessment, such as copartnering with students. However, in this paper, the focus is on authentic assessment.
In terms of standardised testing, Australian students in Years 3, 5, 7, and 9 sit (amongst other tests) the National Assessment Program Literacy and Numeracy (NAPLAN) test, in addition to school leavers’ exams, which vary from state to state for Year 12 students [13]. Shifting the focus to Australia’s tertiary learning environment, according to the Australian Bureau of Statistics, 61% of people aged 15–24 years were at school or enrolled in further study in 2023, compared with 63% in 2022 and 65% in 2021. This rate is higher than the tertiary attainment percentage in Finland, which (like Australia) belongs to the Organisation for Economic Co-operation and Development (OECD) and shares a reputation for education quality [14].
To help understand this decline in participation, it is important to consider that students who enter university from Year 12 in Australia may choose a specialisation upon entry. This contrasts with countries such as Canada, Germany, the United Kingdom, and the United States, where students can enter more flexible and broad-based programs before deciding to specialise in a particular field. Further, the complexity around the transition from K–12 to tertiary education has impacted Australia’s job-ready strategic initiatives. This could be due to high school students needing to decide on Higher School Certificate (HSC) subjects as early as Year 10 [15,16], which may limit students in exploring a wider range of diverse fields of study, such as entrepreneurship.
It is worth keeping in mind that, in some Australian states, students in Years 11 and 12 can choose from a variety of subjects, with multiple pathways offered to vocational and/or apprenticeship programs upon completion of Year 12, resulting in inconsistencies across pathway options at a national level. Although the Australian secondary and tertiary systems help to develop strong technical skills, such as disciplinary knowledge in a particular field (e.g., business and medicine), the authors posit that early specialisation may still have distinct disadvantages. Thus, students may find it challenging to navigate from K–12 to tertiary education or to transition into employment for numerous reasons, for example, the lack of a national and universal career-development learning framework and the wide range of unshared and/or unacknowledged authentic assessment and learning via summative practices.
First, early specialisation can limit exposure to diverse subjects, potentially hindering the acquisition of dynamic problem-solving abilities and the critical thinking and reasoning skills required in a constantly evolving job market. For example, industry has noted that graduates are not job-ready, especially in the domains of self-management, communication, teamwork, cognitive skills, systems thinking, and innovation and creativity [17]. This can mean that students who specialise in a comparatively narrow field might find it more challenging to change their career path in the future and/or be equipped to do so. Furthermore, organisations and industries are increasingly looking to employ graduates with strong soft skills, such as teamwork and communication skills that are transferable across fields [17,18]. A focus on technical skills at the cost of developing these transversal competencies may impact students’ personal and professional development, including education and career pathway choices.
In the Australian context, it is inaccurate to state that different education systems operate in silos without collaboration. For example, state education authorities, like the NSW Education Standards Authority (NESA) and the Centre for Educational Measurement and Assessment (CEMA) in New South Wales, actively collaborate with universities to prepare students for the future. However, the current level of collaboration, which primarily involves a few organisations at the state level, may still be seen as insufficient. Comprehensive and regular sharing of practice, knowledge, and experiences among all teachers, not just through preservice teacher practice and programs, remains a crucial area for improvement. When considering the high percentage of students who transition from the K–12 school system into the tertiary sector in Australia, it is even more paramount that we investigate ways to ensure all school and graduate leavers are better prepared for the social and intellectual demands of their particular education and/or career choices, keeping in mind that not all students will journey along the same education or career pathway nor experience authentic assessment learning in the same way. For example, a student may opt to exit secondary education to enter directly into the workforce or undertake a vocational apprenticeship.

2. Authentic Assessment: A Learning Tool across All Education Systems

In this paper, the authors present an example of how the education system in Australia (and more widely) might use an Authentic Assessment Rubric Tool (AART) to help create opportunities for meaningful and sustained collaboration between K–12 and tertiary teachers, supporting positive education pathway experiences for students. The tool was developed in response to a request from the executive leadership group of the authors’ university, where this study took place. The artefact was to be used to support the teaching and learning strategy to transform the curriculum and improve student learning outcomes via summative authentic assessment design. The university expected all teachers to adapt their assessment tasks to include authentic assessment principles. Subsequently, the teaching community required guidance via an artefact that included agreed-upon key elements, components, and signposts to assist teachers to effectively self- and/or peer-evaluate their assessment types while providing clear directions for improvement towards more authentic assessments.
In the following sections, the authors outline the theory supporting the use of authentic assessments. Next, they provide an overview of the process of developing the AART, detailing the methodology used and how it led to the creation of the final AART (see Table A1), which has yet to be formally evaluated and will form the basis of a future research project. Finally, the authors offer a set of recommendations for the effective application of the tool and suggest directions for future research into the tool’s efficacy.

Research Question

The Authentic Assessment Rubric Tool (AART) (see Table A1) was peer-created, reviewed, and tested by a teaching community at the university where this study took place, including K–12 and university instructors. The tool was aimed at helping all teachers to self- and peer-evaluate their tasks to review their authenticity standards according to a set of key design principles, which were (1) working towards a shared understanding of authentic assessments for summative requirements, (2) providing a practical toolkit for all teachers to adapt, implement, or improve upon, and (3) offering inclusive access to the tool to improve authentic assessment at the macro- and microlevels. To help guide the project, the following research question was posed: what are the guiding principles of summative authentic assessment (as a tool) for teachers to self-evaluate and improve assessment design and delivery?

3. Authentic Assessment: The Theoretical Background

In the field of education practice, a commonly known strategy that teachers employ to teach and assess students’ critical thinking and nontechnical skills (i.e., soft skills), such as teamwork, is the use of authentic assessments. However, the term “authentic assessment” can mean different things to different people. This is particularly true when referring to the literature to assist with providing a common understanding of this contentious term. The label authentic assessment originated with Archbald and Newman, who described an assessment as “authentic” when it emphasised application in real-life settings or beyond the classroom [17,18].
Subsequently, the literature on authentic assessment provides numerous definitions that are often dependent on the context, and which are widely contested [19,20,21,22]. This has led to confusion regarding the level of teacher involvement, how to measure it, and how to improve practice to accommodate different contexts. In response, several authors have conducted meta-analyses of existing literature to develop common characteristics of authentic assessments [23]. These characteristics can be used as guidelines in determining the type of assessments used and the degree of authenticity desired. For instance, based on the characteristics provided by Ghosh et al. [24], one can define authentic assessments as encompassing tasks resulting in outcomes in a real-world context that require an integration of competencies to address complex questions or ill-structured problems. Authenticity from this perspective is seen as a crucial element for assessing relevant skills for successful performance in the workplace. Thus, it can be argued that authentic assessment aims to integrate what happens in the classroom with employment, replicating the tasks and performance standards typically faced by professionals in the world of work.
Amid this notion, it is suggested that authentic assessment is designed to evaluate learners’ understanding and capabilities in authentic, meaningful contexts mirroring real-life situations [25]. Such assessments focus on the process and outcome of learning, necessitating higher-order thinking skills, including complex problem-solving, decision-making, synthesis, and interpersonal skills, such as collaboration, negotiation, and peer-assessment. This approach makes learning more transferable, empowering students to reflect on and cocreate their educational journey [26].
In this paper, the term authentic assessment refers to learning tasks designed and delivered by teachers and/or initiated by students, involving “ill-structured challenges and roles that help students rehearse for the complex ambiguities of adult and professional life” [26]. This definition suggests that authentic assessment is multifaceted and can be perceived differently across various educational contexts. While authentic assessments are often associated with student-directed activities like self-assessment and peer assessment, their effectiveness varies, being utilised differently in postsecondary education and pre- and primary school settings, for instance. Consequently, the term cannot be uniformly applied across the K–12 spectrum and can mean different things to different people. Nevertheless, research has demonstrated that authentic assessment not only enhances student engagement and motivation but also improves their ability to transfer knowledge and skills to new and complex situations [26,27,28,29].
Additionally, although teachers do collaborate, such collaborations do not typically include all teachers at a local, state, and national level. There remains a lack of comprehensive support in terms of time, resources, and finances for K–12 and tertiary teachers to collectively establish a universal definition and approach to authentic assessment. This disparity complicates the establishment of shared criteria and standards. Teachers and students often view education from different perspectives: as an opportunity to experience diverse situations and for learning’s sake or as an investment in time and resources for future employment [27]. These differing support agendas and philosophies across the education ecosystem in Australia create challenges in developing a unified understanding of summative authentic assessment, its criteria, and the appropriate ways to measure and evaluate standards.
Authentic assessments are frequently incorporated into experiential learning, integrating work experience and work-based learning pedagogies [18,28,29]. This type of formative and/or summative assessment practice occurs across diverse education ecosystems, including K–12, tertiary, vocational, and adult learning. It is perhaps this link to employability and transversal competencies (also known as soft skills) that explains why most of the research into authentic assessment focuses on the tertiary sector, as this education sector is most concerned with the need for the economic stability of its graduates. Further, metrics of success in tertiary education are often viewed through the lens of graduate employability. For instance, universities are evaluated by industries on the calibre of their graduates, and, simultaneously, prospective students want to enrol at a university that has a reputation for producing work-ready graduates.
In addition, the literature is peppered with arguments about the importance of authentic assessment in increasing the employability of graduates. For example, a systematic literature review by Sokhanvar and colleagues confirms that authentic assessment in the tertiary sector not only improves students’ engagement and satisfaction but also consistently enhances communication, collaboration, problem-solving, and critical thinking skills [30]. Thus, it could be argued that increased engagement with students’ own learning journeys in turn increases students’ ability to self-reflect, which is an indispensable skill for personal and professional development in the realm of authentic assessment [30,31,32].
Several studies posit that another advantage of authentic assessment is that it is a tool for preserving academic integrity in tertiary education. Researchers in this area have found that authentic assessment functions on two levels to prevent academic misconduct. First, when students have a greater sense of ownership of their work and see the applicability and transferability of what they are learning to other areas of their lives, they are more motivated to complete the assessments due to the observable learning gains [31,33]. Second, as authentic assessment tasks are typically more complex and require students to integrate multiple skills, areas of knowledge, and their own experiences, it is much more challenging to employ a ghost writer or use generative AI to produce a response [34,35].
As pointed out before, studies on authentic assessment in the primary and secondary years are much scarcer compared to tertiary education. Despite this, on a closer review of the literature, the authors discovered a book on authentic assessment (2021), which dedicates 5 out of 17 chapters to authentic assessment in primary and secondary school classrooms [36]. Out of the five chapters, three refer to the Australian context, which is very encouraging in the context of this paper. By proposing enhanced collaboration and synergies across different stages of education in terms of authentic assessment practices [36,37,38,39], it is hoped that innovative practice, governance, policy, and teacher support at a national level (and more widely) will result.
However, in terms of the book noted above (2021), it is observed that two of these chapters do not provide a working definition of authentic assessment that could help frame the discussion presented in this article. Thus, the brief analysis presented here underscores the need for a more integrated discussion amongst K–12 and tertiary education teachers on how to effectively prepare students for the next level of education and/or the workforce and society.
In the tertiary sector, effective authentic assessments are often linked to students’ positive employability learning outcomes, as seen through annual reporting such as the Graduate Outcomes Survey (GOS) [40] and the Employer Satisfaction Survey (ESS) [41]. Although employers generally regard the skills of university graduates positively, the 2022 ESS report also notes areas for improvement, such as teamwork and critical thinking skills [41]. To address these gaps, Australia’s education community, spanning the K–12 and tertiary sectors, could be better supported to collaborate on a national level.

4. Authentic Assessment Rubric Tool (AART): An Example

4.1. Research Design and Methodology

A robust design methodology (RDM) was chosen [42], influenced by design-based research, for the creation of the AART. Both of these methodologies rely on iterative cycles of product development, and, therefore, the process of improving or adapting the tool for a new set of users (in this case, K–12 teachers) becomes a natural extension of the tool.
The methodology of choice included four main phases. The first phase was based on “planning and clarifying” the needs of the user. For instance, this phase of the RDM is where artefact creators (i.e., participants) agree to consider various artefact ideas based on the end users’ requirements, which in this case were teachers and students in the domain of summative authentic assessment. The chosen idea was explored during each community of practice meeting and via a set of agenda items that included (1) design specifications, (2) functions, (3) properties and authentic assessment components, and (4) relevancy to the user. These key points have also been suggested as an appropriate method in artefact creation in the product design literature [43].
The second phase was termed “concept design and improvement”. In this phase, the key interest was describing the form, function, and features that the artefact would take and why, including how users would ensure that they could continually and iteratively improve upon practice. For example, community of practice discussions focused on the platforms users might use to self-assess authentic assessment design work, providing continuous feedback and recommendations as the artefact began to take shape. According to Thornton [44], the concept design phase should include rough design layouts and simple prototypes with key components and technical choices. It is also important during this phase that many ideas and concepts are considered, solutions generated, and all concepts evaluated by the artefact creators. This is achieved by implementing regular opportunities for community of practice participants to discuss, debate, and self- and peer-reflect, resulting in less useful concepts being eliminated through consensus making. It was during this phase that the community of practice participants decided that the artefact would take the form of a rubric.
The third phase was “embodiment design”, and this is where the artefact creators defined the arrangements of each component, ensuring that each descriptor articulated specific levels of competency with examples. This phase generally involves many iterations before artefact creators feel confident that an agreed-to design emerges [45]. This phase is also cyclic in nature, encouraging analysis, synthesis, evaluation, and improvement, which occurs simultaneously until collective agreement through consensus is achieved.
The fourth phase was “detail design, test, improvement, and evaluation”. This phase included all the details, concepts, parts, and components to be included in the chosen format for the artefact, which in this case was a rubric. It is a design phase that consists of decisions being made on the materials needed, production effort, and dissemination possibilities [46]. This enabled the artefact creators to locate areas of product strength and weakness, for example.
Finally, “continuous improvement” was undertaken. This was completed by ensuring that the community of practice tested and retested the rubric artefact using a range of assessments from diverse disciplines, such as health, education, social sciences, and business. Participants who volunteered their assessment tasks for peer review, using the rubric, provided a diverse range of assessment types. Considering diverse types of assessments, such as essays and practicals, including those that teachers deemed nonauthentic, such as quizzes, assisted the core members to further critique the artefact’s weaknesses, refine key components within the rubric, and adjust the descriptor levels to enhance robustness.

4.2. Participant Selection and Engagement

As part of the design process, the authors invited a group of eight university teachers in education-focused roles across different faculties and disciplines, who were identified via the university’s database of Higher Education Academy (HEA) and Higher Education Research and Development Society of Australasia (HERDSA) fellows. The HEA and HERDSA fellows were invited to become involved in the project as key artefact creators. The invitation resulted in the first-named author facilitating several critical friends’ meetings, round-table discussions, and debates. It was during these activities that the authors explored (with the HEA and HERDSA fellows) the term and concept of authentic assessment. This was an important phase of the discovery process because one of the main aims of the project was to create an artefact that could be utilised at the university, influencing context-specific content while being appropriate for diverse types of components and principles across a variety of disciplines and student year levels, including K–12. The task was, therefore, highly complex and challenging.
To avoid personal bias in decision making, a group of approximately fifty interested members was recruited over a twelve-month period by the authors and the HEA and HERDSA fellows, creating communities of practice. The members consisted of academics, professional staff, and Higher Degree by Research students, focusing on K–12 curriculum studies (ethics approval project number: 2021/902; participant members’ details omitted to meet ethics requirements). Participants opted in once per month to join facilitated meetings that were conducted via Zoom and in person. Key feedback was provided to the authors and the HEA and HERDSA fellows through an online chat platform. On average, a core group of five to seven members participated in workshops via the online chat area, with other participants joining ad hoc, online or in person, at different times over a three-year period.
Throughout the discussions and meetings, participants openly debated, listened, and documented pertinent ideas, thoughts, and opinions around authentic assessment. This was done in conjunction with examining the relevant literature, highlighting pertinent definitions, authentic assessment characteristics, measurement frameworks, models, and typologies.

4.3. Applications of the AART

The AART was created to provide a tool to help meet the Australian government’s desire to improve teaching quality [41]. This includes, for instance, the emphasis on sharing and encouraging the uptake of best-practice teaching methodologies and curricula, as well as teaching quality, including more systematic use of peer review of teaching [41] (p. 27).
Although the tool was developed by and for university students and teachers, the consultative and collaborative process used to create it helped it become transferable and useful for K–12 settings. The authors of the tool also invited teachers to adjust the descriptors to suit the year level of students’ context-specific situations, including preservice teacher training.
When considering the AART’s design criteria, the components in the tool could apply to all sectors of education, such as K–12, vocational, and tertiary, keeping in mind that each user will view and design authentic assessments to suit their context of operation. Therefore, the tool is not to be used as a “one-size-fits-all” solution but rather as a discussion point and a helpful tool to support self-reflection and/or peer review to improve authentic assessment practice. The five criteria are (1) the applicability or relevance of the task to a wider context; (2) the level of connection between the task and other tasks in the curriculum; (3) how well the task is aligned to the learning outcomes of the curriculum; (4) the quality of supporting materials and feedback presented to support students in completing the task; and (5) whether the rubric is characterised by consistent and clear performance criteria showing growth in relation to the components of the constructs being measured.
Thus, the tool is useful beyond a tertiary setting. Where amendments or adjustments are necessary for the K–12 settings, teachers could return to Phases 3 and 4 of the design process to ensure applicability to the context. However, for there to be a productive and successful collaboration amongst K–12 and tertiary teachers, it would require buy-in from a large community who are currently modestly supported to come together to undertake professional development as a national community of practice.
With this in mind, the development of the AART is a timely response to the evolving demands of the workforce and the educational landscape in Australia. The Australian government’s final Accord Report [41] highlights a pressing need for a significant portion of the workforce to attain higher education qualifications. The report emphasises the importance of providing accessible and relevant tertiary education to diverse groups, including those from low socio-economic backgrounds, First Nations communities, individuals with disabilities, and students in regional and remote areas [41]. To achieve these ambitious targets, it is essential to foster a national community of practice focused on teacher and teaching quality. Quality teaching and learning should be inclusive, transparent, relevant, and timely, encompassing authentic assessment design, delivery, standards, as well as robust evaluation and measurement mechanisms [47,48,49]. Thus, the AART can help guide teachers as a catalyst for innovative learning approaches.
Presenting the AART here is only a starting point to help establish an evidence-based framework to support teaching quality at a national level, which is crucial for aligning to the Australian government’s Accord Report published in 2023 [41] and its recommendations. By teachers considering the benefits of using the AART, the education community can help enhance transparency across educational ecosystems and improve teacher performance, leading to better student outcomes. Developing new metrics like the AART to evaluate learning and teaching quality across diverse educational sectors, including K–12 and tertiary education, can provide additional support for enacting quality teaching standards and encourage peer review of teaching practices [47,48,49].
Further, the implementation of the AART framework and its focus on quality teaching standards align with contemporary theories of education, such as constructivism, which emphasises the active role of learners in constructing their knowledge, and connectivism, which highlights the importance of networks and connections in the digital age [11,34,50]. Additionally, the emphasis on authentic assessment resonates with situated learning theory, which argues that learning is most effective when it is contextualised and relevant to real-world situations [12,51]. For instance, the AART includes a set of authentic assessment design criteria that have been collectively established and peer reviewed. The criteria include an assessment task’s applicability, meaning the relevance and applicability of the task to a work, professional experience, or life situation. Moving forward, a peer-review-approved AART assessment could include specific interconnectedness, such that the task interconnects with previous and subsequent learning tasks at the year, program, and/or degree levels. Such an approach would encourage cross-fertilisation of ideas and learning that corresponds to prior and future learning attainment that is collectively inspired.
The alignment of the task with the learning outcomes, including higher-order thinking levels and skill-based competencies, is also highlighted in the AART, along with appropriate and relevant support materials. In the context of the AART, the support materials should explain the task’s purpose and exactly what students need to achieve and why. In addition, feedback is at the core of the AART, ensuring that students receive regular, consistent, and meaningful feedback, including peer-reviewed rubrics that show students how to improve their learning over time.
In summary, the AART represents a starting point to help address the challenges outlined in the Australian government’s final Accord Report, which highlights the need to better support professional development for all teachers. This can be achieved by fostering a national community of practice focused on summative authentic assessment criteria and by embracing innovative teaching and assessment methodologies.

5. Recommendations and Future Research

A key recommendation is to advocate for a stronger focus on authentic assessment practices across both the K–12 and tertiary education sectors. This includes promoting more strategic and consistent sharing of summative authentic assessment design practices. Such a position could help highlight the positive consequences of creating a national and shared understanding around evaluating authentic assessment practices, leading to increased quality standards and the sharing of knowledge among teachers across diverse educational ecosystems in Australia. This, in turn, could benefit students’ learning outcomes as well as their career and education pathway decision making.
This could be achieved by providing an Authentic Assessment Rubric Tool (AART), such as the one presented here (see Table A1), as an important starting point. The tool offers a set of descriptions and criteria that can play a vital role in supporting and improving curriculum development, practice, and standards, enhancing students’ lifelong learning journeys and career pathways regardless of context and education year level. At a minimum, the tool can help create a shared vocabulary among teachers across the different levels of education, establishing a more uniform language around the term and practice of authentic assessment. Given that students spend considerable time in different education environments from K–12 and potentially move on to tertiary education experiences, it is timely to open a nationwide conversation around this important topic and provide fit-for-purpose resources, inspiring K–12 through to tertiary teachers to work together more often.
Thus, the AART offered here is a useful bridge amongst diverse teachers at local, state, and national levels, as well as a crucial starting point for encouraging debate and discussion around a shared understanding of authentic assessment design more widely. Gathering the different views of diverse groups of teachers, students, and policymakers to create a shared understanding of authentic assessment practice and delivery should be seen as innovative and inspirational, not as a hurdle. A key area for future research is the financial and resource support needed to include K–12 teachers at major tertiary education conferences, and vice versa, expanding on the discourse presented here and the work of communities already undertaking such activity.
Finally, it is highly recommended to provide a detailed breakdown and discussion of each step in the design process for authentic assessment, viewed through the lens of evaluation and various contexts, including different teacher and student year levels. This approach will enable educators across all sectors to share and enhance their knowledge, practice, and theory of authentic assessment.

6. Conclusions

The focus of this article was an exploration of a potential approach to integrating summative authentic assessment design across all stages of education in Australia through a theoretical discussion. At this stage, the article raises more questions than it answers. It is important to note that educational institutions, government organisations, education conferences, governing bodies, and teachers are all striving towards similar goals. Consequently, the rubric tool presented here may not meet every teacher’s needs. However, asking the right questions is a crucial step forward in addressing problems of practice.
Therefore, the main aim of this paper was to initiate a conversation on how the AART might be utilised as a starting point to help influence authentic assessment practice, delivery, and policy reform at a national level. However, this can only be achieved when a teaching community, which includes all sectors of education, is supported to regularly come together to share knowledge, debate, discuss, and work shoulder to shoulder to improve teacher practice.
If such a strategy were to be pursued and advocated by government, industry, and the diverse education sectors (i.e., K–12 to tertiary), in addition to those organisations already undertaking such initiatives, it could result in additional transitional and transferable employability skill learning mapping. This would create a scaffolded approach to skill development across K–12 and a comprehensive approach to education pathways from K–12 through tertiary education and into employment.
The potential impacts of this idea are significant. Firstly, it could further help develop and prepare the future workforce. Secondly, it could promote the idea of a national education framework in authentic assessment design and delivery from K–12 through to tertiary education. Moreover, it could enhance education and career pathway directions for students, ensuring they are better equipped for the future.
Finally, while authentic assessments are highly valuable, it is important to recognise that education’s purpose extends beyond academic and/or employability achievement to encompass personal growth on both individual and societal levels. Although this discussion touches briefly on the broader impact of authentic assessment, it is essential to remember that authentic assessments are just one part of a diverse array of assessment strategies. Embracing a holistic approach, which includes various assessment methods, can also lead to meaningful and effective teaching practices that benefit all teachers and students.

Author Contributions

Conceptualization, R.H.-W.; writing—original draft preparation, S.l.R. and R.H.-W.; writing—review and editing, S.l.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study.

Acknowledgments

We would like to express our sincere gratitude to James (Jim) Tognolini for his invaluable assistance with the early drafts of this paper. His insightful feedback, expert guidance, and unwavering support have significantly contributed to the development and refinement of our work. We deeply appreciate his time, as well as the staff at the Centre for Educational Measurement and Assessment (CEMA) for their help with shaping this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Authentic Assessment Rubric Tool (AART).
Authentic Assessment Design Criteria 1 | Authentic Assessment Design Standards: (1) Narrow Design 2; (2) Reasonable Design; (3) Comprehensive Design

Applicability
Relevance and applicability of the task to a work, professional experience, or life situation.
(1) Narrow Design: The task has narrow relevance and minimal applicability to a work, professional experience, or life situation.
(2) Reasonable Design: The task has broad relevance and reasonable applicability to a work, professional experience, or life situation. There is some similarity to comparable tasks that people tackle in a work, professional experience, or life situation.
(3) Comprehensive Design: The task is authentic, relevant, and applicable to a work, professional experience, or life situation. It is almost the same as, or is the same as, a task that people tackle in a work, professional experience, or life situation.

Interconnectedness
Connection of the task to tasks in other units of study at a program level. * Might not apply to all disciplines, programs that are accredited by a professional body, or unit of study contexts.
(1) Narrow Design: The task is an isolated undertaking that has minimal connection to other tasks that come before or after.
(2) Reasonable Design: The task is broadly related to and partially connected to at least one task that comes before and/or after. Tasks in related units of study have been considered.
(3) Comprehensive Design: The task is related to and interconnected with tasks that come before and after, and colleagues from other units of study (i.e., peer review) have been involved in a collaborative approach to the task design, as have students and, where relevant, industry. * The task may be an intra-/inter-/multidisciplinary and/or interprofessional learning task designed by a cross-disciplinary team.

Alignment
Alignment of the task with relevant unit of study and course learning outcomes at higher-order thinking levels, and graduate qualities.
(1) Narrow Design: The task has minimal alignment with relevant unit of study and course outcomes involving higher-order thinking (e.g., problem solving). There is minimal alignment with at least one graduate quality beyond depth of disciplinary expertise.
(2) Reasonable Design: The task is aligned in several ways with relevant unit of study and course outcomes involving higher-order thinking (e.g., problem solving). There is reasonable alignment with at least one graduate quality beyond depth of disciplinary expertise.
(3) Comprehensive Design: The task is aligned with all relevant unit of study and course outcomes involving higher-order thinking (e.g., problem solving). There is comprehensive alignment with one or more graduate qualities beyond depth of disciplinary expertise, allowing students to demonstrate clearly that they have achieved the quality.

Supporting Materials and Feedback
Supporting materials explain the purpose of, and what students are required to do in, the task (e.g., what problem to solve, what product to create, what performance to enact). Opportunities for supportive feedback are incorporated.
(1) Narrow Design: There are no supporting materials, or supporting materials are limited and/or confusing. It is unclear what the purpose of the task is and/or what students are required to do in the task. Feedback loops (during and postlearning) are not clear and/or are not built into the task.
(2) Reasonable Design: Supporting materials explaining the task are mostly clear, and the purpose of the task is stated. What students are required to do in the task is outlined. Feedback loops (during and postlearning) associated with the task are mostly clear.
(3) Comprehensive Design: Supporting materials explaining the task and the processes of undertaking it are clear and well structured, with a well-integrated and clearly explained purpose. Feedback loops (during and postlearning) are clear and fully integrated with the task, and the feedback provided is equitable and fair for each student.

Rubric
Assessment rubric for the task with separate and consistent performance criteria and fair standards that show students how to improve learning gains.
(1) Narrow Design: There is no rubric, or the rubric mixes criteria and standards together in statements. Knowledge and/or skills to be assessed are inconsistent with unit and course learning outcomes and/or supporting materials. Major knowledge and/or skills that are part of the task are missing.
(2) Reasonable Design: Criteria and standards in the task rubric are mostly clearly separated and generally show growth in the criteria. Most knowledge and/or skills to be assessed are consistent with unit and course learning outcomes and supporting materials. Some minor knowledge and/or skills that are part of the task are missing.
(3) Comprehensive Design: Criteria and standards in the task rubric are well separated and clearly show growth in the criteria. The knowledge and/or skills to be assessed are well defined and completely consistent with unit and course learning outcomes and supporting materials. There are no missing knowledge and/or skills that are part of the task.
1 The term Authentic Assessment is defined as “assessment tasks that relate the application of knowledge to problems, skills and performances that are found in general or disciplinary practices or professional contexts. It includes but is not limited to projects, investigations and report writing” (University of Sydney, Coursework Policy, 2021, p. 7). 2 The AA rubric template is a resource to help teachers to enhance and/or evaluate summative assessment design and can be adapted to suit a variety of contexts, teacher experience and student year levels.

References

  1. OECD. Education GPS. Organisation for Economic Co-Operation and Development (OECD). 2023. Available online: https://gpseducation.oecd.org/CountryProfile?primaryCountry=AUS&treshold=10&topic=PI (accessed on 15 August 2024).
  2. Employer Satisfaction Survey. ESS National Report; Social Research Centre: Melbourne, VIC, Australia, 2022; Available online: https://www.qilt.edu.au/surveys/employer-satisfaction-survey-(ess) (accessed on 15 August 2024).
  3. Darling-Hammond, L. Teacher education around the world: What can we learn from international practice? Eur. J. Teach. Educ. 2017, 40, 291–309. [Google Scholar] [CrossRef]
  4. Gurr, D. Australia: The Australian Education System. In Educational Authorities and the Schools; Ärlestig, H., Johansson, O., Eds.; Springer: Cham, Switzerland, 2020; Volume 13, pp. 311–331. [Google Scholar] [CrossRef]
  5. NSW Education Standards Authority (NESA). The Higher School Certificate. NSW Government. 2023. Available online: https://ace.nesa.nsw.edu.au/higher-school-certificate#:~:text=The%20Higher%20School%20Certificate%20(HSC,for%20the%20statewide%20HSC%20examinations (accessed on 15 August 2024).
  6. National Assessment Program. NAPLAN. National Assessment Program. 2023. Available online: https://www.nap.edu.au/naplan (accessed on 15 August 2024).
  7. Law, H.-Y. Humanising mathematics education through authentic assessment: The story of Sarah. In Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Barkatsas, T., McLaughlin, T., Eds.; Brill: Boston, MA, USA, 2021; pp. 262–284. [Google Scholar] [CrossRef]
  8. Confrey, J.; Shah, M.; Belcher, M. Using learning trajectory-based ratio curriculum and diagnostic assessments for promoting learner-centred instruction. In Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Barkatsas, T., McLaughlin, T., Eds.; Brill: Boston, MA, USA, 2021; pp. 187–216. [Google Scholar] [CrossRef]
  9. Gulikers, J.T.M.; Bastiaens, T.J.; Kirschner, P.A. A five-dimensional framework for authentic assessment. Educ. Technol. Res. Dev. 2004, 52, 67–86. [Google Scholar] [CrossRef]
  10. Lombardi, M.M. Making the Grade: The Role of Assessment in Authentic Learning. EDUCAUSE Learn. Initiat. 2008, 1–16. Available online: https://library.educause.edu/resources/2008/1/making-the-grade-the-role-of-assessment-in-authentic-learning (accessed on 15 August 2024).
  11. Siemens, G. Connectivism: A learning theory for the digital age. Int. J. Instr. Technol. Distance Learn. 2005, 2, 3–10. [Google Scholar] [CrossRef]
  12. Lave, J.; Wenger, E. Situated Learning: Legitimate Peripheral Participation; Cambridge University Press: Cambridge, UK, 1991. [Google Scholar]
  13. Australian Government. Study Australia. Australian Trade and Investment Commission. 2023. Available online: https://www.studyaustralia.gov.au/Dictionary.aspx?FirstLetter=y#:~:text=Each%20State%20and%20Territory%20has,Year%2012%20Certificate%20(ACT) (accessed on 15 August 2024).
  14. Australian Bureau of Statistics. Education and Work, Australia. Australian Bureau of Statistics. 2023. Available online: https://www.abs.gov.au/statistics/people/education/education-and-work-australia/latest-release (accessed on 15 August 2024).
  15. Australian Universities Accord. Final Report, Canberra, Australia. 2024. Available online: https://www.education.gov.au/australian-universities-accord/resources/final-report (accessed on 15 August 2024).
  16. Bahr, N. Choosing Your Senior School Subjects Doesn’t Have to Be Scary. Here Are 6 Things to Keep in Mind. The Conversation. 2021. Available online: https://theconversation.com/choosing-your-senior-school-subjects-doesnt-have-to-be-scary-here-are-6-things-to-keep-in-mind-160257 (accessed on 15 August 2024).
  17. Prikshat, V.; Montague, A.; Connell, J.; Burgess, J. Australian graduates’ work readiness–deficiencies, causes and potential solutions. High. Educ. Ski. Work. Based Learn. 2020, 10, 369–386. [Google Scholar] [CrossRef]
  18. Santos Rego, M.A.; Sáez-Gambin, D.; González-Geraldo, J.L.; García-Romero, D. Transversal competences and employability of university students: Converging towards service-learning. Educ. Sci. 2022, 12, 265. [Google Scholar] [CrossRef]
  19. Archbald, D.A.; Newmann, F.M. Beyond Standardized Testing: Assessing Authentic Academic Achievement in the Secondary School; National Association of Secondary School Principals: Reston, VA, USA, 1988. [Google Scholar]
  20. Savery, J.R.; Duffy, T.M. Problem based learning: An instructional model and its constructivist framework. Educ. Technol. 1995, 35, 31–38. [Google Scholar]
  21. Gore, J.M.; Griffiths, T.; Ladwig, J.G. Towards better teaching: Productive pedagogy as a framework for teacher education. Teach. Teach. Educ. 2004, 20, 375–387. [Google Scholar] [CrossRef]
  22. Hart, D. Authentic Assessment: A Handbook for Educators; Dale Seymour Publications: Parsippany, NJ, USA, 1994. [Google Scholar]
  23. Torrance, E.P. Insights about creativity: Questioned, rejected, ridiculed, ignored. Educ. Psychol. Rev. 1995, 7, 313–322. [Google Scholar] [CrossRef]
  24. Ghosh, S.; Brooks, B.; Ranmathugala, D.; Bowles, M. Authentic vs traditional assessment: An empirical study investigating the difference in seafarer students’ academic achievement. J. Navig. 2020, 797–812. [Google Scholar] [CrossRef]
  25. Bosco, A.M.; Ferns, S. Embedding of authentic assessment in work-integrated learning curriculum. Asia-Pac. J. Coop. Educ. 2014, 15, 281–290. [Google Scholar]
  26. Colthorpe, K.; Gray, H.; Ainscough, L.; Ernst, H. Drivers for authenticity: Student approaches and responses to an authentic assessment task. Assess. Eval. High. Educ. 2021, 46, 995–1007. [Google Scholar] [CrossRef]
  27. Wiggins, G. The case for authentic assessment. Practical Assessment. Res. Eval. 1990, 2. [Google Scholar] [CrossRef]
  28. Fergusson, L.; van der Laan, L.; Imran, S.; Danaher, P.A. Authentic assessment and work-based learning: The case of professional studies in a post-COVID Australia. Authentic Assess. Prof. Stud. 2022, 12, 1189–1210. [Google Scholar] [CrossRef]
  29. Kaider, F.; Hains-Wesson, R.; Young, K. Practical typology of authentic WIL learning activities and assessments. Asia-Pac. J. Coop. Educ. Spec. Issue 2017, 18, 153–165. [Google Scholar]
  30. Sokhanvar, Z.; Salehi, K.; Sokhanvar, F. Advantages of authentic assessment for improving the learning experience and employability skills of higher education students: A systematic literature review. Stud. Educ. Eval. 2021, 70. [Google Scholar] [CrossRef]
  31. James, L.T.; Casidy, R. Authentic assessment in business education: Its effects on student satisfaction and promoting behaviour. Stud. High. Educ. 2018, 43, 401–415. [Google Scholar] [CrossRef]
  32. Sotiriadou, P.; Logan, D.; Daly, A.; Guest, R. The role of authentic assessment to preserve academic integrity and promote skill development and employability. Stud. High. Educ. 2019, 45, 2132–2148. [Google Scholar] [CrossRef]
  33. Wiewiora, A.; Kowalkiewicz, A. The role of authentic assessment in developing authentic leadership identity and competencies. Assess. Eval. High. Educ. 2018, 44, 415–430. [Google Scholar] [CrossRef]
  34. Jopp, R. A case study of a technology enhanced learning initiative that supports authentic assessment. Teach. High. Educ. 2020, 25, 942–958. [Google Scholar] [CrossRef]
  35. Lawrie, G. Establishing a delicate balance in the relationship between artificial intelligence and authentic assessment in student learning. Chem. Educ. Res. Pract. 2023, 24, 392–393. [Google Scholar] [CrossRef]
  36. Barkatsas, T.; McLaughlin, T. (Eds.) Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Brill: Boston, MA, USA, 2021. [Google Scholar]
  37. Rogers, A. Removing the teacher ‘blind spot’: Developing a comprehensive online place value assessment tool for Year 3–6 teachers. In Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Barkatsas, T., McLaughlin, T., Eds.; Brill: Boston, MA, USA, 2021; pp. 217–239. [Google Scholar] [CrossRef]
  38. Seah, R.; Horne, M. How much do they know about 3D objects: Using authentic assessment to inform teacher practice. In Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Barkatsas, T., McLaughlin, T., Eds.; Brill: Boston, MA, USA, 2021; pp. 240–261. [Google Scholar] [CrossRef]
  39. Wilks-Smith, N. Translanguaging pedagogies for multilingual learner assessment. In Authentic Assessment and Evaluation Approaches and Practices in a Digital Era: A Kaleidoscope of Perspectives; Barkatsas, T., McLaughlin, T., Eds.; Brill: Boston, MA, USA, 2021; pp. 285–311. [Google Scholar] [CrossRef]
  40. Graduate Outcome Survey. National Report for Domestic Students; Social Research Centre: Melbourne, VIC, Australia, 2022; Available online: https://www.qilt.edu.au/surveys/graduate-outcomes-survey-(gos)#anchor-2 (accessed on 15 August 2024).
  41. Australian Government. The Final Accord Report. Department of Education, Skills and Employment. 2024. Available online: https://www.education.gov.au/accord-final-report (accessed on 15 August 2024).
  42. Hasenkamp, T.; Adler, T.; Carlsson, A.; Arvidsson, M. Robust Design Methodology in a generic product design process. Total Qual. Manag. Bus. Excell. 2007, 18, 351–362. [Google Scholar] [CrossRef]
  43. Thornton, A. Variation Risk Management, Focusing Quality Improvements in Product Development and Production; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  44. Reimann, P. Design-based research. In Methodological Choice and Design: Scholarship, Policy and Practice in Social and Educational Research; Markauskaite, L., Freebody, P., Irwin, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 37–50. [Google Scholar] [CrossRef]
  45. Reeves, T. Design research from a technology perspective. In Educational Design Research; Van den Akker, J., Gravemeijer, K., McKenney, S., Nieveen, N., Eds.; Routledge: London, UK, 2006; pp. 86–109. Available online: http://www.fisme.science.uu.nl/publicaties/literatuur/EducationalDesignResearch.pdf#page=102 (accessed on 15 August 2024).
  46. Ulrich, K.T.; Eppinger, S.D. Product Design and Development; McGraw-Hill: New York, NY, USA, 1995. [Google Scholar]
  47. Ajjawi, R.; Tai, J.; Dollinger, M.; Dawson, P.; Boud, D.; Bearman, M. From authentic assessment to authenticity in assessment: Broadening perspectives. Assess. Eval. High. Educ. 2023, 499–510. [Google Scholar] [CrossRef]
  48. Plomp, T. Educational design research: An introduction. In An Introduction to Educational Design Research, 2nd ed.; Plomp, T., Nieveen, N., Eds.; SLO Netherlands Institute for Curriculum Development: Enschede, The Netherlands, 2009; pp. 9–35. [Google Scholar]
  49. Danielson, C. Enhancing Professional Practice: A Framework for Teaching; ASCD: Alexandria, VA, USA, 2011. [Google Scholar]
  50. Herrington, J. Authentic e-learning in higher education: Design principles for authentic learning environments and tasks. In E-Learn: World Conference on e-Learning in Corporate, Government, Healthcare, and Higher Education; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2006; pp. 3164–3173. [Google Scholar]
  51. Wiggins, G. Educative Assessment: Designing Assessments to Inform and Improve Student Performance; Jossey-Bass: San Francisco, CA, USA, 1998. [Google Scholar]
