Abstract
The rapid uptake of generative AI (e.g., ChatGPT, DALL·E and GitHub Copilot) is disrupting conventional notions of authenticity in assessment across higher education. The dominant response, surveillance and AI detection, misdiagnoses the problem. In an AI-mediated world, authenticity cannot be policed into existence; it must be redesigned. Situating AI within contemporary knowledge work shaped by digitisation, collaboration and evolving ethical expectations, we reconceptualise authenticity as something constructed in contexts where AI is expected, declared and scrutinised. The emphasis shifts from what students know to how they apply knowledge, exercise judgement, and justify choices with AI in the loop. We offer practical design-for-learning moves, i.e., discipline-agnostic learning design patterns that position AI as a collaborator rather than an instrument of cheating: tasks that require students to critique, adapt and verify AI outputs, make their process explicit (prompts, iterations, rationale), and demonstrate assessable digital discernment and ethical judgement. Examples include asking business students to interrogate a chatbot-generated market analysis and inviting pre-service teachers to evaluate AI-produced lesson plans for inclusivity and pedagogical soundness. Reflective artefacts such as metacognitive commentary, process logs, and oral defences make students’ thinking visible, substantiate authorship, and reduce reliance on punitive “gotcha” approaches. Our contribution is twofold: i. a conceptual account of authenticity fit for an AI-mediated world, and ii. a set of actionable, discipline-agnostic patterns that can be tailored to local contexts. The result is an integrity stance anchored in design rather than detection, enabling assessment that remains meaningful, ethical and intellectually demanding in the presence of AI, while advancing a broader shift toward assessment paradigms that reflect real-world professionalism.
1. Introduction
Authenticity in assessment has long been recognised as a central concern in higher education, reflecting the desire to connect learning with the complexity of professional and civic life. Early frameworks such as Gulikers, Bastiaens, and Kirschner’s (2004) five-dimensional model established authenticity as multidimensional, encompassing the task, context, social setting, assessment criteria, and learner interpretation. Building on this, Ashford-Rowe, Herrington, and Brown (2014) argued that authenticity should not be treated as an optional enhancement, but as a principle that underpins effective assessment design. Their articulation of eight critical elements, ranging from cognitive challenge to metacognition, continues to shape assessment scholarship. More recent work has reinforced this trajectory, positioning authentic assessment as part of a broader paradigm shift in higher education, where assessment is increasingly expected to function as both an evaluation and a learning process in its own right (). Authenticity matters because it aligns student learning with the expectations of professional practice, supports the development of transferable skills, and enables deeper engagement with knowledge as something to be used, critiqued, and applied rather than simply reproduced (; ; ).
Generative artificial intelligence (GenAI) has brought this question of authenticity into sharper focus, but it is only one of several forces reshaping professional work. We outline this landscape concisely below, emphasising that students are preparing for practice in environments that are global, digital, interdisciplinary, and ethically complex.
In this context, GenAI has quickly emerged as a particularly visible and disruptive force. Tools such as ChatGPT, DALL·E, and GitHub Copilot now produce text, images and code across fields and are increasingly integrated into automated, analytic, and collaborative workflows (; ). For education, GenAI simultaneously expands possibilities for creativity and scaffolding while introducing risks of bias, misinformation, and over-reliance; we develop these tensions below. What makes GenAI significant for assessment is not simply the technology itself, but the way it reveals deeper tensions, and opportunities, around what counts as authentic learning when powerful tools are part of standard practice ().
A significant response in higher education has been to focus on detection—using AI-detection software or designing tasks aimed at minimising AI use, such as in-person proctored exams (). While understandable, these approaches risk narrowing the conversation to surveillance and control, rather than meaningful assessment reform (; ; ). Detection strategies are limited in accuracy, can generate mistrust between educators and students, and do not address the broader reality: graduates will enter workplaces where such tools are ubiquitous ().
The argument in this paper is that assessment design must move away from “catching” GenAI use towards rethinking authenticity in light of contemporary professional practice. Rather than treating GenAI as a threat to academic integrity, educators should see it as one example of a broader transformation, and as an enabler of new ways of defining, designing, and evidencing authentic learning.
Throughout this paper, we pursue a single proposition: in an AI-mediated world, authenticity is achieved through design rather than detection, operationalised via process transparency, critical/ethical AI use, and assessable judgement.
This paper is conceptual and design-led. The framework and learning-design patterns were derived through the following: i. an integrative review of scholarship on authentic assessment and integrity in AI-present contexts, ii. an analysis of cross-disciplinary assessment exemplars and contemporary sector guidance (e.g., ; , ), and iii. iterative practitioner critique with teaching teams. We treat patterns as design propositions—named solutions to recurrent problems under AI-present conditions (; ). Inclusion criteria required that a pattern i. makes student judgement and process evident and assessable, ii. is adaptable across disciplines, and iii. is generally feasible under typical HE constraints. The set is intentionally compact and non-exhaustive, scaffolds local adaptation, and invites future empirical evaluation.
2. Background
2.1. Authenticity in Assessment: Concepts and Traditions
Authenticity in assessment has become a central concern in higher education, reflecting the growing need for tasks that extend beyond rote reproduction of knowledge and instead mirror the complex demands of professional and civic life (). Ashford-Rowe, Herrington, and Brown (2014) argue that authenticity should not be considered a desirable yet optional feature of assessment, but rather a set of design principles that, when applied systematically, strengthen the alignment between learning outcomes, workplace expectations, and meaningful student engagement. Their framework of eight critical elements—ranging from challenge and collaboration to fidelity and metacognition—offers a structured approach to embedding authenticity within assessment design, ensuring that students are prepared for the complexity of professional practice.
Recent research indicates a paradigm shift in higher education assessment, moving beyond standardised tests toward authentic, complex tasks that reflect the judgement, agility and problem-solving required in professional practice. Such tasks can be designed not only to exemplify the nature of professional work, but also to enable learning rather than merely to judge it, pointing toward a reconceptualisation of assessment itself (; ). Authenticity requires tasks that engage learners in producing knowledge or artefacts of value beyond the classroom (; ), but it also emphasises the process of learning—critical reflection, iterative feedback, and collaborative knowledge building—as much as the final product (; ). This positions authenticity as epistemological (what counts as knowledge), methodological (how we assess learning), and productive (what learning generates of value beyond the classroom).
The literature highlights the ways authentic assessment fosters deeper learning by aligning with students’ future professional identities (). Gulikers et al. (2004) describe authenticity as multidimensional, requiring attention to task, context, social setting, assessment criteria, and learner interpretation. Building on this, contemporary studies emphasise that authenticity is context-specific: what is “realistic” in engineering may look quite different from what is “authentic” in the creative arts or health sciences (). Thus, authenticity should be seen as a principle that underpins assessment across disciplines, while being adapted to disciplinary epistemologies and practices. Authenticity also challenges narrow definitions of assessment as verification, positioning assessment as integral to the learning process itself (; ). In this way, authenticity functions as a design principle that ensures assessment remains responsive to both disciplinary traditions and the broader demands of a changing world.
2.2. Shifts in Professional Practice
Professional practice in most fields is undergoing rapid transformation, shaped by globalisation, digitalisation, and evolving social expectations (; ). These developments have profound implications for higher education, as graduates must be prepared for professional contexts that are increasingly dynamic, interconnected, and ethically complex.
Globalisation continues to reshape the landscape. Professionals are now expected to collaborate across national borders, languages, and cultural contexts (). Distributed teams have become commonplace, requiring new forms of intercultural competence and collaboration across time zones. Authentic assessments, in turn, must prepare students for tasks that mirror these global conditions, rather than assuming a narrow local frame of reference.
Digitalisation is one of the most visible drivers of change and adds a further layer of complexity. Cloud-based collaboration platforms, large-scale data analytics, and immersive technologies such as augmented and virtual reality have become standard features of many professions (; ). These tools not only alter the processes of work but also reshape professional identities, as the boundaries between physical and digital practices become blurred. In such contexts, authenticity in assessment cannot be defined by static or traditional tasks alone but must take account of the hybrid environments in which graduates will operate.
Another major shift is the prominence of interdisciplinary work. Contemporary challenges—climate change, sustainability, global health, digital transformation—present wicked problems that cannot be addressed within the boundaries of a single discipline. Effective professional practice now requires collaboration across knowledge domains and the capacity to integrate multiple perspectives (; ). Authentic assessment must therefore move beyond discrete disciplinary tasks and reflect the interdisciplinary problem-solving that characterises much professional work.
Finally, there is growing recognition of the ethical dimensions of professional life. Commitments to equity, diversity, and inclusion are no longer considered peripheral but are central to responsible practice (; ). At the same time, new technologies raise urgent ethical issues around privacy, sustainability, and algorithmic bias (). Authentic assessments must provide opportunities for students to engage with these ethical dimensions, recognising that professional practice is as much about values and judgement as it is about technical proficiency.
Taken together, these shifts highlight the inadequacy of assessment practices that rely on narrow, decontextualised tasks. Authentic assessment must reflect the realities of professional life as dynamic, global, digital, interdisciplinary, and ethically charged. Within this broader context, generative AI can be understood as one highly visible example of wider systemic change.
2.3. Generative AI in Education and Work
Generative artificial intelligence has emerged as a particularly visible and disruptive example of technological change in both professional practice and education (). Tools such as ChatGPT, DALL·E, and GitHub Copilot are being rapidly adopted across industries, where they are used to generate text, images, code, and other artefacts once produced exclusively by human effort (). These tools rarely operate in isolation but are typically embedded within broader digital workflows that include automation, data analytics, and collaborative platforms ().
The opportunities associated with generative AI are significant. In professional contexts, such tools can increase efficiency by automating routine tasks, stimulate creativity by offering new possibilities, and support decision-making by producing multiple options or scenarios (). Within education, they can provide scaffolding by helping students to draft, test, and revise ideas, exposing them to alternative perspectives and enabling more iterative forms of learning ().
At the same time, generative AI carries substantial risks. Outputs may perpetuate bias, reproduce misinformation, or create “hallucinated” content (). There are also concerns about over-reliance on these tools, with the potential erosion of key human skills such as writing fluency, critical reasoning, originality (), and reflective metacognitive engagement (; ). In this sense, generative AI challenges educators not simply to incorporate new tools, but to ensure that students develop the discernment needed to use them responsibly.
Importantly, generative AI is already being integrated into professional workflows. Journalists may use AI to draft preliminary content, designers to prototype concepts, and software engineers to accelerate coding (; ). In each case, the tools are not substitutes for expertise but collaborators within the professional process. This reality suggests that graduates will need both technical familiarity with generative AI and the ethical judgement to decide when and how to rely upon it ().
For assessment, this creates a fundamental challenge. If AI is part of standard practice, then tasks that ban its use risk constructing artificial forms of assessment divorced from professional reality (). The question, therefore, is not whether AI should be permitted but how authenticity in assessment should be reconceptualised in contexts where AI is an expected tool. This leads directly to the discussion in Section 3, which explores the reconceptualisation of authenticity, and Section 4, which examines practical strategies for designing AI-inclusive assessments.
3. Reconceptualising Authentic Assessment in Changing Contexts
This section provides the conceptual basis for design by reframing authenticity to align with AI-present professional practice () and motivating the patterns in Section 4.
Authenticity in assessment has always been a contested concept, reflecting shifting understandings of what it means for learning tasks to be “real” and meaningful. In the current context, pressures from technological, cultural, and professional change demand a redefinition of authenticity that goes beyond replication of workplace tasks to foster critical engagement with the conditions of modern professional practice. Generative AI is one highly visible catalyst for this rethinking, but it is not the only one. Authentic assessment in today’s higher education context must prepare students to work ethically, reflectively, and adaptively with a wide range of tools, environments, and challenges. This section develops that argument across four dimensions: i. reframing AI and similar technologies as tools for learning rather than threats (practice-proximal use), ii. embedding contemporary tools within tasks that make process visible and assessable (process-transparent design), iii. fostering ethical judgement, digital discernment, and responsible participation (ethically literate engagement), and iv. redefining evidence of learning to emphasise workflow and justification as much as product (consequential assessment).
3.1. From Threat to Tool
Much of the discourse surrounding generative AI in higher education has been framed through the lens of academic integrity, with emphasis on the risks of plagiarism, unauthorised assistance, and ambiguity about authorship and origin (). This framing has tended to cast AI as an external threat to be policed, reinforcing assessment practices that are more concerned with detection of misconduct than with meaningful learning. While such concerns are legitimate, focusing solely on threat undermines the wider pedagogical opportunity: developing graduates who are able to engage with emerging technologies as capable, ethical professionals.
While their intent is understandable, detection-led responses face well-documented validity and fairness limits (e.g., bias against non-native writers) and notable error rates; they can also corrode trust and distract from assessment design. We therefore treat detection as a limited, situational tool rather than a strategy of first resort, and re-centre design for learning (; ).
A redefinition of authenticity requires a shift in mindset. Rather than asking how we can prevent students from using AI or other tools, the question becomes how we can enable students to use them thoughtfully, responsibly, and effectively in contexts that mirror their future work. This perspective is consistent with accounts of authentic assessment as preparing students for tasks that simulate the activities and performance standards they will encounter in the world of work (). If AI and similar technologies are becoming ubiquitous across sectors, then excluding them from assessment creates an inauthentic scenario, one that fails to prepare students for the realities of their disciplines.
From this standpoint, authenticity is not only about mirroring workplace practice but also about cultivating dispositions of judgement, ethics, and discernment. The authentic professional is not simply someone who can use tools, but someone who can justify when, how, and why they use them, and can critically evaluate their outputs. Framing AI and other technologies as tools rather than threats therefore shifts the purpose of assessment from policing compliance and assessing outputs in isolation from the learning process to enabling students to demonstrate the higher-order capacities that define professional capability in digital and complex environments (; ).
3.2. Embedding Contemporary Tools in Authentic Tasks
Authentic assessments must be grounded in the tools, environments, and processes that define modern professional practice. While generative AI is currently a prominent example, it is only one of many technologies shaping how work is done. Disciplines such as engineering, health, design, and education all rely on specialised platforms and equipment, and these need to be reflected in assessment contexts if students are to experience tasks that approximate their future roles (; ).
The principle here is not simply “include the tools students might use” but rather “design tasks where tool use is purposeful and aligned with intended learning outcomes.” For instance, a journalism assessment might require students to critique and edit an AI-generated news brief, not to test their ability to generate text but to assess their ability to identify bias, refine clarity, and uphold ethical standards of reporting. In engineering or architecture, digital fabrication tools might be integrated into design projects, where the focus is not on the tool’s mechanics but on the decisions students make in balancing efficiency, feasibility, and sustainability. In health sciences, students might evaluate AI-assisted diagnostic recommendations, with assessment criteria emphasising clinical judgement, patient-centred reasoning, and ethical considerations ().
Embedding contemporary tools in this way can shift the focus of assessment toward how students curate their use of technologies, rather than whether they can replicate outputs that GenAI tools already provide. It reflects the insight that professional capability involves adaptation and decision-making in environments where technologies are ever-present, evolving, and often imperfect (). In this sense, authenticity demands more than simply updating task contexts; it requires assessment design that positions digital tools as enablers of critical engagement, judgement, and adaptability.
3.3. Critical Engagement and Digital Discernment
If digital tools are embedded in authentic assessment tasks, then the focus of evaluation must shift towards how students engage critically with those tools and their outputs. This involves developing and assessing digital discernment—the capacity to question assumptions, identify biases, and make informed choices about when and how to integrate tool outputs into one’s work ().
Critical engagement requires students to look beyond surface-level outputs and interrogate the processes and logic that underpin them. For example, an AI-generated market analysis may appear coherent and comprehensive but may rest on biased training data, omit key contextual factors, or overstate causal claims. Similarly, algorithmic tools in design may optimise for efficiency while neglecting aesthetic or ethical dimensions. Authentic assessment can encourage students to identify these limitations, refine outputs, and justify their choices in adapting or rejecting tool-generated material ().
This approach repositions authenticity as less about reproducing professional products and more about enacting the kinds of reflective, critical, and ethical practices that define responsible professionals. As scholars have emphasised (), authentic assessment requires higher-order thinking and metacognitive awareness. In the age of AI, this translates into the ability to critically evaluate digital artefacts, recognise their partiality, and situate them within broader professional and ethical frameworks.
By foregrounding digital discernment, assessments also address concerns about over-reliance on technology. Students are not rewarded for uncritical use but for demonstrating awareness of risks, limitations, and biases. This emphasis helps to ensure that assessments develop transferable skills that remain valuable even as specific tools evolve. It also reflects the wider educational goal of fostering graduates who can contribute responsibly to societies where technology is increasingly pervasive and contested ().
3.4. Redefining Evidence of Learning
Authenticity is not only about the design of tasks but also about what counts as evidence of learning. Traditional assessment has often privileged final products—essays, reports, designs, or solutions—as the primary or sole evidence of achievement. In contexts shaped by powerful digital tools, this is no longer sufficient. The process by which students reach outcomes, and their ability to reflect on, justify, and adapt their approaches, are equally, if not more, important (; ).
Redefining evidence of learning therefore means broadening the artefacts and activities considered legitimate for assessment. Drafts, prompt logs, design iterations, and reflective commentaries can provide insight into how students engaged with tools, where they exercised judgement, and how they responded to challenges (). Yet the emphasis should not be on mandating particular formats, but on valuing transparency of process, clarity of justification, and depth of metacognition (; ).
Assessment criteria therefore need to move beyond the “accuracy” or “quality” of final products to encompass adaptability, ethical reasoning, and awareness of transferable learning strategies. AI tools expose the sector’s over-reliance on outputs (e.g., essays or reports) as evidence of learning. Because AI can generate polished products without the learner engaging in the underlying process, policy and pedagogy should refocus on the learning journey, not just final products (; ; ). The evaluative focus should shift from performance on a single test to the cumulative demonstration of learning across a program of study and, where appropriate, to evidence of capability-in-praxis. A more process-focused and authenticity-oriented approach to design for learning would improve assessment quality, enhance resilience to AI-mediated disruption, and support more meaningful learning for 21st-century students. For instance, a student who can explain why they chose not to use an AI-generated output because it conflicted with professional standards demonstrates learning that is arguably more authentic than one who simply submits a polished but unreflective product.
This shift aligns with broader calls in higher education to assess learning as an iterative, dialogical process rather than a one-off performance (). It highlights the value of reflective commentary, design iterations, and documentation of decision-making as indicators of authentic learning. It also addresses the paradox that as tools become more powerful at generating products, what becomes distinctive and valuable is the student’s ability to demonstrate process awareness, critical engagement, and ethical judgement (). In this sense, evidence of learning is redefined not by discarding products, but by situating them within a richer tapestry of process and reflection.
Taken together, these four dimensions outline a reconceptualisation of authenticity in assessment that reflects evolving professional norms. Authenticity is no longer simply about replicating tasks from the workplace but about preparing students to navigate complex, technology-rich, and ethically charged environments with judgement and discernment.
4. Designing Authentic Assessments for Modern Professional Practice
We translate this reconceptualisation into design through discipline-agnostic patterns that make students’ AI-mediated judgement and process visible and assessable ().
Reconceptualising authenticity in assessment requires more than abstract principles; it demands practical approaches that can be enacted within the realities of higher education. This section proposes design strategies that align assessment with contemporary professional practice, emphasising core principles, discipline-specific applications, and methods to foster transparency and reflection. Together, these approaches illustrate how assessment can move beyond detection-focused responses to technology, preparing students for meaningful engagement in complex professional environments (see Section 3.1).
4.1. Core Principles for Modern Authenticity
Authentic assessment has always been concerned with alignment between learning tasks and the practices of real-world contexts (; ). In a rapidly shifting landscape of work, this alignment now requires attention not only to disciplinary content but also to the values, tools, and processes that characterise professional activity. Several design principles are therefore critical.
First, authenticity is strengthened when assessment tasks mirror actual professional practice. This does not mean replicating workplace activities in a superficial sense, but creating tasks that engage students in forms of judgement, decision-making, and problem-solving that resemble those they will encounter in practice (). For example, professionals often operate under conditions of uncertainty, requiring them to weigh multiple sources of evidence, anticipate risks, and justify their decisions to diverse stakeholders. Assessments that ask students to simulate these processes are more authentic than those that simply test recall or reproduction of information.
Second, authentic assessments should emphasise judgement, ethics, adaptability, and collaboration. Boud and Soler (2016) advocate sustainable assessment practices, designed not only for immediate course outcomes but to equip students for ongoing professional learning. This position is reinforced in recent frameworks that highlight judgement, ethics, adaptability, and collaboration as essential professional capabilities in technology-rich environments (). Tasks should therefore invite students to apply disciplinary knowledge in ways that foreground ethical reasoning, sensitivity to context, and the ability to adapt strategies as conditions change. Collaborative assessments play a critical role in aligning assessment practices with the realities of professional environments, where collective problem-solving is the norm rather than the exception (; ).
Third, authentic assessment must evaluate the process of learning as well as the product (). As Section 3 emphasised, powerful digital tools are capable of producing polished outputs that can mask superficial understanding. What distinguishes capable graduates is not the final artefact alone but the process of engagement: the rationale for decisions, the ability to critique and refine, and the capacity to justify choices in light of professional and ethical standards (). () argue that this kind of process orientation reflects the “messiness” of authentic professional tasks, where outcomes are negotiated and refined through iteration.
Finally, assessments need to balance tool use with human creativity and decision-making. Generative AI and other digital technologies are increasingly embedded in professional practice, but authentic assessment should not reward uncritical reliance on these tools. Instead, the focus should be on how students use technologies as part of a broader repertoire of strategies, integrating human creativity, critical reasoning, and ethical judgement (; ). The challenge for assessment design is to create tasks where digital tool use supports, rather than substitutes, higher-order capabilities ().
An emergent and broader challenge for institutions and educators is to de-escalate anxieties around AI use while doubling down on critical thinking—akin, with obvious limits to the analogy, to the shift from manual long division to calculators. Arguments that students must first learn foundational concepts before engaging with contemporary tools are increasingly insufficient on their own; conceptual understanding and tool competence can be developed in tandem. Instead of striving for the unrealistic goal of eliminating AI use, we should assess how effectively, appropriately, and transparently students mobilise such tools within ethically grounded professional judgement (; ; ; ; ).
In this paper, we conceptualise authenticity as a multidimensional continuum rather than a binary. We locate authenticity across four intersecting dimensions: i. task–context alignment with contemporary professional practice; ii. foregrounding of professional judgement and ethics; iii. visibility of process (iteration, critique, rationale); and iv. appropriate use of tools (including AI) within human decision-making. Different strategies emphasise different dimensions; combining strategies strengthens authenticity when it deepens performance on these dimensions, not merely by adding features. The appropriate level of authenticity is stage-appropriate: early units warrant constrained, well-scaffolded tasks; later units can open complexity, uncertainty, and stakeholder engagement.
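To illustrate, not prescribe, how these four dimensions might be operationalised, the sketch below scores a task design on an assumed 0–3 scale per dimension; the scale, the field names, and the unweighted treatment are our assumptions rather than a validated instrument.

```python
# Illustrative sketch only: scoring a task design against the four
# authenticity dimensions on an assumed 0-3 scale (not a validated rubric).
DIMENSIONS = (
    "task_context_alignment",  # i. alignment with contemporary practice
    "judgement_and_ethics",    # ii. professional judgement and ethics
    "process_visibility",      # iii. iteration, critique, rationale
    "tool_use_in_judgement",   # iv. tools (incl. AI) within human decisions
)

def authenticity_profile(scores: dict[str, int]) -> str:
    """Summarise per-dimension scores and flag the weakest dimension,
    since combining strategies should deepen dimensions, not add features."""
    lines = [f"{dim}: {scores.get(dim, 0)}/3" for dim in DIMENSIONS]
    weakest = min(DIMENSIONS, key=lambda dim: scores.get(dim, 0))
    lines.append(f"design focus: strengthen '{weakest}' next")
    return "\n".join(lines)

# Example: an early-stage, well-scaffolded task profile.
print(authenticity_profile({
    "task_context_alignment": 2,
    "judgement_and_ethics": 1,
    "process_visibility": 3,
    "tool_use_in_judgement": 2,
}))
```

A profile of this kind supports the stage-appropriate point above: an early unit might legitimately score high on process visibility while remaining constrained on task–context alignment.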
Together, these principles provide a foundation for designing assessments that prepare students for the complexities of contemporary work. The following subsections illustrate how these ideas can be enacted across disciplines.
4.2. Examples by Discipline
Although the principles of authenticity are shared, their application varies across disciplines depending on the tools, practices, and professional expectations that define each field. The following examples highlight possible approaches. The examples are intentionally combinable; educators should select and blend them to fit learning outcomes, cohort readiness, and resource constraints.
4.2.1. Engineering and Design
Students might be asked to apply digital fabrication tools, simulation platforms, or CAD software in developing prototypes. Assessment would require not only submission of the prototype but also reflection on design choices, trade-offs, and ethical considerations (e.g., sustainability, accessibility, safety). This integrates technical competence with broader professional responsibility.
Use when you want students to weigh trade-offs under constraints and evidence risk-aware decision-making.
4.2.2. Education
In teacher preparation programs, students might be asked to evaluate AI-generated lesson plans. The focus would not be on the plan itself but on the student’s capacity to assess inclusivity, alignment with curriculum standards, and pedagogical soundness. Reflective commentary could require students to propose modifications, justify their decisions in terms of learning outcomes, and consider ethical questions around reliance on AI in classrooms.
Use when evaluation of pedagogy and policy alignment matters more than artefact production, and for rehearsing classroom-management decisions in low-stakes AI simulations ahead of work placements.
4.2.3. Creative Industries
In fields such as design, media, or the arts, assessments might involve integrating AI-generated images or text into human-led workflows. Students could be tasked with using generative tools as part of a design project, but required to document their creative decision-making, explain their rationale for modifications, and reflect on the interplay between human and machine creativity. The outcome is less about the artefact itself than the demonstration of critical, reflective, and creative practice.
Use when the learning goal is critical integration of AI within a human creative workflow.
4.2.4. Healthcare
Authenticity in healthcare education involves balancing clinical judgement with emerging technologies. An assessment might present students with patient care suggestions generated by an AI decision-support system. Students would be expected to appraise these suggestions for safety, clinical validity, and ethical implications, explaining how they would incorporate or challenge AI-generated recommendations in practice. Here, assessment highlights the integration of tool outputs with professional reasoning and patient-centred care.
Use when students must demonstrate clinical reasoning, safety and ethics against AI outputs, and rehearse difficult conversations or produce cultural-sensitivity reports.
4.2.5. Business
Rather than asking students to generate a market analysis from scratch, a more authentic task might be for students to use AI tools to simulate market segments, personas and brand archetypes, or to produce preliminary market data for critique and selective inclusion. Students could then be required to identify strengths and weaknesses, assess the accuracy and relevance of data, highlight potential biases, and suggest how the analysis might inform decision-making. Assessment criteria would emphasise critical evaluation, ethical awareness, and the ability to translate analysis into actionable recommendations.
Use when students must evaluate AI-generated segments or data, surface biases, and turn analysis into actionable recommendations.
4.2.6. Finance
In finance courses, algorithmic trading simulators can provide authentic contexts. Students might be tasked with developing, testing, and justifying investment strategies under varying conditions. Assessment could include analysis of risk, consideration of ethical implications (e.g., insider trading, social responsibility), and reflection on how their strategy would need to adapt in response to unpredictable market shifts.
Use when judgement under uncertainty and justification of risk posture are primary.
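To make the simulator idea concrete, the following toy sketch backtests a hypothetical moving-average strategy on synthetic prices. Everything here is an assumption for illustration (the strategy, the 20-day window, the random-walk data); the assessable work would be the student’s justification of risk posture and adaptation, not the code or its returns.

```python
# Toy illustration only: a hypothetical moving-average strategy backtested
# on synthetic prices. The point is the justification students attach to
# these figures, not the figures themselves.
import random

def synthetic_prices(n_days: int = 250, start: float = 100.0) -> list[float]:
    """Random-walk price series standing in for market data."""
    prices = [start]
    for _ in range(n_days - 1):
        prices.append(max(1.0, prices[-1] * (1 + random.gauss(0, 0.01))))
    return prices

def backtest_moving_average(prices: list[float], window: int = 20) -> dict:
    """Hold the asset only when yesterday's price sat above its moving average."""
    daily_returns = []
    for t in range(window, len(prices)):
        moving_avg = sum(prices[t - window:t]) / window
        if prices[t - 1] > moving_avg:  # invested through day t
            daily_returns.append(prices[t] / prices[t - 1] - 1)
        else:
            daily_returns.append(0.0)  # out of the market, flat return
    total = 1.0
    for r in daily_returns:
        total *= 1 + r
    return {"total_return": total - 1, "worst_day": min(daily_returns)}

random.seed(42)  # reproducible, so students can defend identical figures
result = backtest_moving_average(synthetic_prices())
print(f"Total return: {result['total_return']:.1%}; worst day: {result['worst_day']:.1%}")
```

Assessment prompts might then ask why the window length was chosen, how the strategy behaves on the worst simulated days, and what ethical constraints (e.g., social responsibility screens) would alter it.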
4.3. Designing for Transparency and Reflection
Authentic assessments in technology-rich contexts require deliberate design features that foreground transparency and reflection. Without these, tasks risk rewarding polished outputs without revealing the quality of students’ engagement. Several strategies are particularly effective.
First, students should be required to submit process logs, AI prompt records, or other “behind the scenes” artefacts alongside the final output. These materials provide evidence of how students engaged with tools, where they exercised judgement, and how their ideas developed over time. For example, a design student might submit drafts showing iterations of a prototype, while a writing student might include logs of AI prompts and revisions. These artefacts make visible the decision-making process that underpins the final product ().
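As one possible convention (a minimal sketch, assuming a course adopts a JSON-lines log; the field names and decision labels are illustrative, not an established standard), a prompt record could capture each exchange with a tool together with the student’s decision and rationale:

```python
# Minimal sketch of a prompt-log convention a course might adopt.
# Field names and decision labels are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class PromptLogEntry:
    tool: str            # e.g., "ChatGPT"
    prompt: str          # what the student asked
    output_summary: str  # brief account of what the tool returned
    decision: str        # "accepted", "adapted", or "rejected"
    rationale: str       # why the output was kept, changed, or discarded
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_entry(log_path: str, entry: PromptLogEntry) -> None:
    """Append one entry to the JSON-lines log submitted with the artefact."""
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(entry)) + "\n")

# Example: recording why an AI-drafted passage was rejected.
append_entry("prompt_log.jsonl", PromptLogEntry(
    tool="ChatGPT",
    prompt="Draft an introduction on flood-risk communication.",
    output_summary="Fluent draft, but cited two sources that do not exist.",
    decision="rejected",
    rationale="Fabricated references conflict with professional standards.",
))
```

Markers could then sample entries during an oral defence, which keeps the log lightweight for students while making judgement visible; no personal data need be recorded.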
Second, assessments can incorporate short reflective commentaries in which students explain key decisions, justify their use of tools, and account for changes made during the project. Reflection should not be an afterthought but integrated into the assessment design, with explicit criteria for evaluating depth, criticality, and ethical awareness ().
Third, oral defences, annotated portfolios, or recorded walkthroughs can be used to assess reasoning in real time. These approaches allow educators to probe student thinking, clarify uncertainties, and ensure that the submitted work genuinely represents the student’s understanding. They also reflect professional practices such as pitching, consultation, or peer review, where professionals are expected to explain and defend their work.
Finally, designing opportunities for self-critique and peer feedback strengthens both transparency and collaborative skills. For instance, students might be required to review each other’s projects, provide constructive feedback, and reflect on how feedback influenced their own revisions. This mirrors professional environments where collaboration and iterative improvement are central to effective practice (). () likewise emphasise the importance of feedback literacy as a core dimension of assessment design, suggesting that opportunities for self- and peer critique are essential for developing reflective, independent learners.
Across a program, authenticity should build via a progressive release of complexity: early-stage units/courses emphasise transparency artefacts (process logs, prompt records) and short oral defences; mid-stage units/courses add collaboration, negotiated briefs and peer review; capstones introduce external stakeholders, open-ended briefs, and negotiated criteria with explicit ethical framing and critical use of digital tools.
These strategies ensure that assessment captures not only what students produce but how they engage in the process of producing it. By making reasoning, decision-making, and reflection visible, they shift the focus of authenticity from outputs alone to the fuller range of practices that define capable professionals.
The design approaches outlined here demonstrate how authenticity can be strengthened when assessments reflect contemporary professional practice. Rather than relying on detection-focused responses to emerging technologies, educators can prepare students for the realities of work by creating tasks that foreground judgement, ethics, adaptability, and reflection. Section 5 turns to the challenges and considerations that must be addressed when implementing these approaches in higher education contexts.
5. Challenges and Considerations
While the arguments for rethinking authenticity in assessment are compelling, translating these ideas into practice is not without difficulty. Universities and educators face significant challenges in ensuring equity, addressing ethical risks, managing workload, and supporting staff capability. This section outlines four key considerations that must be addressed if authenticity is to be strengthened in the age of generative AI and other emerging technologies.
5.1. Equity and Access
One of the most pressing challenges in embedding contemporary tools into assessment design is the question of equity. Not all students have equal access to advanced technologies, high-speed internet, or conducive learning environments (). Introducing generative AI or discipline-specific platforms into assessment tasks may inadvertently privilege students who already have access to these resources, deepening existing divides along socio-economic lines. Actively supporting institutionally provided access to AI (e.g., access-controlled, “fenced” deployments within institutional ICT environments) can reduce reliance on unequal or back-channel access; however, free tiers and institutional offerings vary in availability, capability and accessibility. Equity is not only about economic access to technologies but also about inclusivity for students with disabilities, diverse linguistic and/or cultural backgrounds, or complex personal circumstances. Authentic assessments must be designed with inclusivity in mind, ensuring that all students have meaningful opportunities to engage, regardless of their starting point ().
Institutions therefore need to consider issues of licensing, infrastructure, and training to avoid disadvantaging particular cohorts. In some cases, this may mean providing institutional access to generative AI platforms or alternative pathways for students who cannot use certain tools. Without such measures, the promise of authenticity risks becoming another mechanism of exclusion rather than empowerment. However, authentic assessment does not automatically guarantee inclusivity. While shifting from rigid examinations to work-relevant or multimodal tasks can remove barriers for some students, more authentic formats can also create new ones—through unfamiliar workload demands, pressures on students balancing employment or carer responsibilities, and constraints inherent in real-world simulations (for example, time limits, safety and accessibility). The equity effects therefore turn on design decisions and supports (). These risks can be mitigated by transparent workload modelling, staged scaffolding, equivalent-standards choice of modality, and accessible alternatives for risk-laden tasks.
5.2. Ethics, Bias and Responsibilities
Generative AI and related technologies raise critical questions of ethics and bias. These tools are trained on large, culturally specific datasets that often reproduce existing social inequalities, cultural stereotypes, or political biases; their outputs may also be fluent yet unfaithful to the input or to facts (). While this situation is improving, when students work with AI-generated outputs they may inadvertently reproduce or legitimise problematic assumptions unless assessment tasks explicitly foster critical awareness.
While classroom practice does the immediate work of making process visible and assessable, AI-assisted assessment ultimately relies on institutional settings that enable safe and transparent use. In line with our stance that authenticity is designed rather than detected, providers should create conditions in which students and staff can be explicit about AI-mediated work without privacy or equity penalties.
At minimum, institutional responsibilities include: i. completing a privacy/data-protection impact assessment (PIA/DPIA) for any AI tool used in assessment, with clear data-flow maps and student-facing summaries; ii. vetting and approving tools against privacy, bias, accessibility and auditability criteria; and iii. standardising prompt/process transparency (e.g., simple prompt-log conventions and version control) so students can evidence how they worked without disclosing personal data (; ; ; ).
There are also concerns around misinformation and reliability. AI-generated texts and analyses can appear fluent and authoritative while omitting or distorting salient detail, or containing factual inaccuracies, fabricated references, or flawed reasoning (). If assessments simply reward surface polish, students may be incentivised to present unverified outputs rather than engage in verification, critique, and justification.
Finally, the use of commercial AI platforms raises questions about data privacy, ownership, and power asymmetries between universities, students, and technology companies (). Without explicit attention to ethics, assessments risk reinforcing the very inequities and misinformation they should be preparing students to challenge. Authentic assessment must therefore include explicit attention to ethical engagement, requiring students to identify, critique, and address biases and risks in the tools they use. In this way, the focus shifts from avoiding technology altogether to preparing students for responsible participation in digital professional environments.
5.3. Assessment Load and Feasibility
Authentic assessments that emphasise process, reflection, and digital tool use can be demanding to design, implement, and evaluate. For educators already facing significant workloads, the addition of process logs, reflective commentaries, or oral defences may appear impractical. This challenge is particularly acute in large cohorts where efficiency often dictates reliance on standardised testing or essay-based assessments ().
Balancing authenticity with feasibility requires careful design. Authenticity does not require greater volume of assessment but a rethinking of task design so that existing assessments capture process as well as product. One strategy is to embed transparency and reflection requirements into existing assessment formats rather than creating wholly new tasks. For example, students could append a short, structured, reflective statement to an essay, or submit AI prompt logs alongside a report. Peer review and collaborative assessments can also distribute evaluative responsibility, while digital tools may assist in capturing and organising process data ().
Institutional recognition of assessment workload is also crucial. Without resourcing, time, and support, authentic assessment risks being perceived as another burden rather than an opportunity for innovation. Authenticity must therefore be understood as a systemic commitment, not simply an individual choice by motivated educators.
5.4. Staff Development
Finally, the successful implementation of authentic assessment in technology-rich environments depends on staff capability. Many academics are unfamiliar with generative AI, algorithmic platforms, or other digital tools now shaping professional practice. If staff are not supported to engage with these technologies, there is a risk of superficial use, inconsistent application, or complete avoidance ().
Developing capability requires a balance. Some level of tool-specific literacy is necessary, but this should not be treated as the starting point or the central aim. Given the pace of technological change, especially in AI, training staff on the features of individual platforms is quickly rendered obsolete. What endures are skills in critical engagement, adaptability, and design, which enable staff to apply their expertise across new tools as they emerge. More importantly, academics need to be encouraged to experiment in safe, supported contexts, so they remain attuned to industry practices that are evolving beyond the university at extraordinary speed. Without this, assessments risk becoming detached from the professional realities they are meant to prepare students for.
Learning designers have a particularly important role in this process, but not as providers of technical training. Their expertise lies in helping academic staff recognise the pedagogical value of generative AI and other cutting-edge technologies, and in co-designing courses, units, and assessments that embed these tools in ways that strengthen authenticity (; ). Professional development should therefore prioritise design-led collaboration, complemented but not overshadowed by basic digital tool awareness, to ensure that assessment innovation keeps pace with the changing conditions of knowledge work.
Collaboration across disciplines can also strengthen capability, as academics share examples of authentic assessment that incorporate contemporary digital tools in diverse fields. Institutional policies and professional learning programs must reinforce that the goal is not simply to add new tools to old tasks, but to reconceptualise assessment in ways that genuinely reflect evolving professional contexts.
Taken together, the challenges related to equity, ethics, load, and staff development are not peripheral concerns but central to the viability of authentic assessment in higher education. Addressing these challenges is essential if the shift from detection to design is to be realised in practice and if assessment is to keep pace with the changing conditions of knowledge work.
6. Conclusions
This paper has argued that higher education must move decisively away from detection-focused responses to generative AI and toward the design of assessments that authentically reflect contemporary professional practice (). Detection technologies have not yet provided sufficient reassurance, and they cannot address the broader changes reshaping knowledge work. Instead, authenticity in assessment should be reconceptualised as an approach that prepares students to critically and ethically navigate environments where generative AI and other emerging tools are integral.
The implications for curriculum are clear: assessment must shift from testing recall or reproduction to evaluating how students apply, adapt, and justify knowledge in dynamic contexts. For policy, this requires additional institutional commitment to resourcing and supporting authentic assessment, ensuring equity of access to digital tools, and embedding ethical frameworks that anticipate bias, misinformation, and issues of data governance. For educators, it demands confidence in rethinking long-established practices, supported by staff development that prioritises design-led collaboration over superficial digital tool training.
There are also important implications for learning designers, whose expertise in pedagogical innovation positions them as partners in embedding emerging technologies into meaningful assessment design (). Cross-disciplinary dialogue will be crucial, both within institutions and across sectors, to ensure that assessment practices reflect the diversity of professional contexts graduates will encounter.
Further research is needed to examine discipline-specific implementations of authentic assessment in AI-mediated environments, to explore student experiences of such designs, and to evaluate their impact on learning outcomes and employability. Collaborative inquiry across fields—education, technology, ethics, and industry—will strengthen both the conceptual grounding and practical application of this work.
In sum, the challenge is not to defend old forms of assessment against technological disruption, but to harness this moment as an opportunity to reimagine assessment in ways that are valid, rigorous, equitable, and aligned with the realities of modern professional life. Moving from detection to design is not a defensive manoeuvre; it is a constructive agenda for preparing students to engage critically and creatively with the tools that will shape their futures.
Author Contributions
Conceptualization, S.K., K.A.-R. and H.H.; Writing—original draft, S.K.; Writing—review & editing, S.K., K.A.-R., A.K., J.B. and H.H.; Project administration, S.K. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analysed in this study.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Alexander, C. (1977). A pattern language: Towns, buildings, construction. Oxford University Press. [Google Scholar]
- Arthars, N., Thompson, K., Huijser, H., Kickbusch, S., Cunningham, S., Winter, G., Cook, R., & Lockyer, L. (2024). Formative assessment of group work skills: An analytics enabled conceptual framework. Australasian Journal of Educational Technology, 40(4), 139–154. [Google Scholar] [CrossRef]
- Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205–222. [Google Scholar] [CrossRef]
- Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 49(6), 893–905. [Google Scholar] [CrossRef]
- Beetham, H., & MacNeill, S. (2022). Approaches to curriculum and learning design across UK higher education. Jisc. Available online: https://beta.jisc.ac.uk/reports/approaches-to-curriculum-and-learning-design-across-uk-higher-education (accessed on 26 September 2025).
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March 3–10). On the dangers of stochastic parrots: Can Language models be too big? 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623), Virtual. [Google Scholar] [CrossRef]
- Boud, D., & Bearman, M. (2024). The assessment challenge of social and collaborative learning in higher education. Educational Philosophy and Theory, 56(5), 459–468. [Google Scholar] [CrossRef]
- Boud, D., & Falchikov, N. (2007). Rethinking assessment in higher education. Routledge.
- Boud, D., & Molloy, E. (Eds.). (2012). Feedback in higher and professional education: Understanding it and doing it well. Routledge.
- Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413.
- Connolly, D., Dickinson, L., & Hellewell, L. (2023). The development of undergraduate employability skills through authentic assessment in college-based higher education. Journal of Learning Development in Higher Education, 27.
- Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: Why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education, 50, 1087–1097.
- Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239.
- Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22.
- Deardorff, D. K., & Arasaratnam-Smith, L. A. (2017). Intercultural competence in higher education. Routledge.
- Fawns, T., Bearman, M., Dawson, P., Nieminen, J. H., Ashford-Rowe, K., Willey, K., Jensen, L. X., Damşa, C., & Press, N. (2024). Authentic assessment: From panacea to criticality. Assessment & Evaluation in Higher Education, 50(3), 396–408.
- Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1), 82–101.
- Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67–86.
- Herrington, J., Reeves, T. C., & Oliver, R. (2009). A guide to authentic e-learning. Routledge.
- Herrington, T., & Herrington, J. (2006). Authentic learning environments in higher education. IGI Global Scientific Publishing.
- Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
- Kickbusch, S., Dawes, L., Kelly, N., & Nickels, K. (2022). Developing mathematics and science teachers’ ability to design for active learning: A design-based research study. Australian Journal of Teacher Education, 47(9).
- Kickbusch, S., Kelly, N., & Huijser, H. (2025). Framing the core expertise of learning designers through strong concepts. Higher Education.
- Kickbusch, S., Wright, N., Sternberg, J., & Dawes, L. (2020). Rethinking learning design: Reconceptualizing the role of the learning designer in pre-service teacher preparation through a design-led approach. International Journal of Design Education, 14(4), 29–45.
- Li, N., Tay, T., Wang, Q., & Zhang, X. (2024). Sustainable educational research through interdisciplinary lens: A guideline framework for effective collaboration. In A. ElSayary & R. Olowoselu (Eds.), Interdisciplinary approaches for educators’ and learners’ well-being: Transforming education for sustainable development (pp. 69–80). Springer Nature.
- Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779.
- Lodge, J. M., & Ashford-Rowe, K. (2024). Intensive modes of study and the need to focus on the process of learning in higher education. Journal of University Teaching and Learning Practice, 21(2), 1–11.
- Lury, C., Fensham, R., Heller-Nicholas, A., Lammes, S., Last, A., Michael, M., & Uprichard, E. (2018). Routledge handbook of interdisciplinary research methods (1st ed.). Routledge.
- Mao, J., Chen, B., & Liu, J. C. (2024). Generative artificial intelligence in education and its implications for assessment. TechTrends, 68(1), 58–66.
- Matthews, A., McLinden, M., & Greenway, C. (2021). Rising to the pedagogical challenges of the Fourth Industrial Age in the university of the future: An integrated model of scholarship. Higher Education Pedagogies, 6(1), 1–21.
- National Institute of Standards and Technology (NIST). (2024). Artificial intelligence risk management framework: Generative artificial intelligence profile. U.S. Department of Commerce.
- Office of the Australian Information Commissioner (OAIC). (2020). Guide to undertaking privacy impact assessments. Australian Government. Available online: https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/privacy-impact-assessments/guide-to-undertaking-privacy-impact-assessments (accessed on 17 September 2025).
- Organisation for Economic Co-operation and Development. (2018). The future of education and skills 2030: Conceptual learning framework (No. EDU/EDPC(2018)45/ANN2). OECD. Available online: https://one.oecd.org/document/EDU/EDPC(2018)45/ANN2/en/pdf (accessed on 2 September 2025).
- Press, N., Ashford-Rowe, K., & Huijser, H. (2024). Authentic assessment: Opportunities and challenges. In Research handbook on innovations in assessment and feedback in higher education (pp. 425–442). Edward Elgar Publishing.
- Quality Assurance Agency (QAA). (2023). Reconsidering assessment for the ChatGPT era: QAA advice on developing sustainable assessment strategies. Available online: https://www.qaa.ac.uk/docs/qaa/members/reconsidering-assessment-for-the-chat-gpt-era.pdf (accessed on 19 September 2025).
- Selwyn, N. (2021). Education and technology: Key issues and debates (3rd ed.). Bloomsbury Publishing.
- Shah, D., Behravan, N., Al-Jabouri, N., & Sibbald, M. (2024). Incorporating equity, diversity and inclusion (EDI) into the education and assessment of professionalism for healthcare professionals and trainees: A scoping review. BMC Medical Education, 24(1), 991.
- Silvola, A., Kajamaa, A., Merikko, J., & Muukkonen, H. (2025). AI-mediated sensemaking in higher education students’ learning processes: Tensions, sensemaking practices, and AI-assigned purposes. British Journal of Educational Technology, 56(5), 2001–2018.
- Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2020). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132–2148.
- Tertiary Education Quality and Standards Agency (TEQSA). (2024). Gen AI strategies for Australian higher education: Emerging practice [Toolkit]. Australian Government. Available online: https://www.teqsa.gov.au/sites/default/files/2024-11/Gen-AI-strategies-emerging-practice-toolkit.pdf (accessed on 16 September 2025).
- Tertiary Education Quality and Standards Agency (TEQSA). (2025). Enacting assessment reform in a time of artificial intelligence [Toolkit]. Australian Government. Available online: https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/enacting-assessment-reform-time-artificial-intelligence (accessed on 24 September 2025).
- UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 2 September 2025).
- UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 2 September 2025).
- Villarroel, V., Bloxham, S., Bruna, D., Bruna, C., & Herrera-Seda, C. (2018). Authentic assessment: Creating a blueprint for course design. Assessment & Evaluation in Higher Education, 43(5), 840–854.
- Vlachopoulos, D., & Makri, A. (2024). A systematic literature review on authentic assessment in higher education: Best practices for the development of 21st century skills, and policy considerations. Studies in Educational Evaluation, 83, 101425.
- Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26.
- World Economic Forum. (2023). The future of jobs report 2023. World Economic Forum. Available online: https://www.weforum.org/publications/the-future-of-jobs-report-2023/ (accessed on 1 September 2025).
- Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., Li, X., Jin, Y., & Gašević, D. (2024). Practical and ethical challenges of large language models in education: A systematic scoping review. British Journal of Educational Technology, 55(1), 90–112.
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39.
- Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).