Systematic Review

Sustainable AI-Driven Assessment in Higher Education: A Systematic Review of Fairness, Transparency, Pedagogical Innovation, and Governance

Maha Alfaleh
Department of Curriculums and Instructional Technologies, College of Humanities and Social Sciences, Northern Border University, Arar 73222, Saudi Arabia
Sustainability 2026, 18(2), 785; https://doi.org/10.3390/su18020785
Submission received: 22 November 2025 / Revised: 22 December 2025 / Accepted: 2 January 2026 / Published: 13 January 2026

Abstract

Artificial intelligence (AI) is increasingly utilized in higher-education assessment; however, existing research remains fragmented, with limited synthesis regarding the interplay of fairness, transparency, pedagogy, and governance. To address this gap, this systematic review analyzed 47 studies published between 2019 and 2025 across Western, Gulf, South Asian, and East Asian contexts, employing the PRISMA 2020 framework. Among these studies, 32 addressed fairness, 29 examined transparency, 34 explored pedagogical implications, and 22 investigated governance practices. Quantitative evidence demonstrated that AI achieved greater scoring consistency than human graders in over two-thirds of fairness-focused studies. Conversely, more than half of the transparency studies identified inadequate or partial disclosure of AI decision processes. Pedagogical studies indicated AI-enhanced feedback frequency and revision opportunities in approximately 70% of cases, although teacher mediation was necessary to mitigate over-reliance. Governance findings showed that fewer than one-third of institutions had established policies or audit mechanisms for ethical AI use. Based on these patterns, the review proposes a governance-anchored model that integrates fairness and transparency with pedagogical design, providing a coherent framework for institutions aiming to implement AI-based assessment responsibly and equitably.

1. Introduction

Artificial intelligence (AI) is fundamentally changing assessment practices in higher education by transforming the generation, interpretation, and support of learning evidence [1]. Modern AI systems now encompass adaptive feedback mechanisms, predictive analytics, and tools that facilitate complex evaluative judgments, extending well beyond automated scoring. These advancements present opportunities to improve the efficiency, consistency, and personalization of assessment processes, potentially broadening access to high-quality feedback and alleviating faculty workload [2,3]. However, this evolving landscape also introduces significant questions regarding fairness, transparency, pedagogical integrity, and institutional readiness, all of which influence whether AI enhances or compromises educational values [4,5].
Fairness concerns focus on how AI systems represent student work, mitigate unintended bias, and uphold equity principles central to educational measurement. While AI offers the potential for greater scoring consistency, its effectiveness is influenced by training data, design assumptions, and contextual deployment, which may advantage or disadvantage specific learner groups [6,7]. Transparency is equally essential; both students and instructors must be aware of when and how AI is used, the data that informs its evaluations, and the interpretation of AI-generated feedback. Researchers argue that transparency extends beyond technical attributes to encompass educational practices that require clear communication, accessible rationales, and approaches that maintain learner agency and trust [8,9,10].
The integration of AI into assessment aligns with broader theories of formative learning, feedback literacy, and reflective engagement. AI can expedite feedback cycles and facilitate iterative improvement; however, its pedagogical value depends on effective teacher mediation and instructional design [11,12]. In the absence of sufficient scaffolding, students may misinterpret automated feedback or develop excessive dependence on AI-generated guidance [2]. These considerations underscore the importance of embedding AI within established learning philosophies rather than positioning it as a substitute for human judgment.
At the institutional level, higher-education systems exhibit varying degrees of digital readiness and policy development across regions. International frameworks, such as those from UNESCO (2021) and the OECD (2019), advocate for governance structures that ensure accountability, ethical oversight, and data protection [13,14]. Nevertheless, research from the Gulf, South Asia, and other rapidly evolving contexts reveals inconsistent preparedness among faculty and students, highlighting the necessity for capacity-building initiatives and well-defined governance mechanisms [10,15]. These disparities underscore the urgent need for a comprehensive understanding of how fairness, transparency, pedagogy, and governance interact within AI-enabled assessment ecosystems.
Collectively, the growing body of literature demonstrates both the potential and complexity of AI in higher-education assessment. Despite this progress, significant conceptual and operational gaps persist. Current research addresses fairness, transparency, pedagogy, and governance as separate constructs, with limited integration of their interrelationships. Furthermore, there is insufficient understanding of how institutional frameworks and stakeholder competencies influence the ethical and pedagogical outcomes of AI-driven assessment. Addressing these gaps is essential to ensure that AI implementation upholds educational integrity, advances equity, and maintains learner-centered practices.
Accordingly, this review aims to advance both conceptual and practical understanding of AI-enabled assessment by investigating how these constructs function collectively across diverse higher-education contexts. To guide this inquiry, the following research questions are proposed:
  • RQ1: How are the constructs of fairness and transparency conceptualized and operationalized within AI-driven assessment in higher education?
  • RQ2: What pedagogical implications, advantages, and risks emerge from the deployment of AI-enabled assessment tools?
  • RQ3: What governance infrastructures, including policies, audit mechanisms, and professional competencies, are required to ensure ethical and educational integrity in AI-supported assessment?

2. Literature Review

The scholarly literature on artificial intelligence in higher education increasingly frames assessment as a complex socio-technical system shaped by ethical principles, methodological traditions, and institutional structures rather than solely by algorithmic performance [1,4]. This literature review outlines the conceptual foundations underpinning contemporary debates.

2.1. Conceptualizing Fairness in AI Assessment

Fairness is theorized as a normative construct grounded in established educational measurement principles and contemporary algorithmic ethics. Historically, fairness in assessment encompassed comparability, criterion alignment, and non-discrimination. In the context of AI, scholars have expanded these traditions to include model-level fairness, data lineage integrity, and procedural safeguards within assessment systems [3,16].
The conceptual literature further emphasizes fairness as a relational condition, influenced by system design as well as broader educational norms such as autonomy, inclusion, and epistemic access [17]. As a result, fairness is operationalized by schools and universities through policies, training, and interpretive frameworks, rather than relying exclusively on technical metrics.

2.2. Transparency and Explainability

Although transparency is frequently addressed in AI ethics, higher-education literature frames it as an epistemic requirement that ensures assessment remains interpretable, contestable, and accountable [10,18]. Conceptual scholarship distinguishes transparency, which involves disclosing system purpose, boundaries, and data use, from explainability, which pertains to the intelligibility of AI-generated evaluative reasoning [19].
Theoretical contributions contend that transparency supports academic integrity, due process in evaluation, and student agency by enabling learners to comprehend the basis of performance judgments [8,9]. Rather than simply revealing technical mechanisms, transparency is conceptualized as an educational practice that necessitates communication models, shared vocabulary, and guidelines for interpreting automated judgments.

2.3. Pedagogical Perspectives on AI in Assessment

Pedagogically, AI is framed as part of an evolving ecosystem of learning support tools that interact with instructional design, learner identity, and disciplinary norms [11,20]. Conceptual scholarship emphasizes the role of feedback literacy, reflective engagement, and dialogic assessment traditions in shaping how students make sense of automated evaluations. Guo et al. (2024) found that AI-supported peer feedback enhanced writing quality and strengthened reviewers’ feedback skills, suggesting pedagogical benefits for both evaluators and those evaluated [21]. McLaughlin et al. (2025) reported improved confidence, collaboration, and learning outcomes when health-professions students used collaborative AI tutors during research training [22].
Instead of emphasizing the functionality of AI tools, theoretical models investigate how AI intersects with academic cultures, authority structures, and expectations of formative assessment [23]. These contributions underscore the pedagogical imperative to embed AI within broader learning philosophies, rather than treating it as an isolated evaluative mechanism.

2.4. Ethical and Governance Dimensions

Ethical governance is presented as essential for the sustainable adoption of AI in evaluation. Governance literature frames AI-enabled assessment as an institutional responsibility that necessitates system-level design, oversight, and ethical stewardship [13,14]. Key conceptual elements include accountability structures, documentation standards, bias audit mechanisms, responsible data governance, and professional competence frameworks [17,24].
Instead of focusing solely on compliance, governance is conceptualized as a dynamic capability—a set of coordinated organizational practices that enable institutions to align AI tools with educational values, regulatory requirements, and culturally situated expectations of fairness and transparency [15,25].

2.5. Synthesis and Conceptual Gap

Across these thematic strands, the literature’s treatment of fairness, transparency, pedagogy, and governance is conceptually robust yet fragmented. Most frameworks address these constructs in isolation, without articulating their interactions within AI-enabled assessment ecosystems. A systematic understanding of their interdependencies and collective implications for institutional policy and instructional design remains largely undeveloped.
This review addresses the identified conceptual gap by analyzing how these constructs interact within real-world higher-education environments and by proposing an integrative governance-based model that aligns ethical, pedagogical, and institutional priorities.

3. Materials and Methods

3.1. Research Design

This research took the form of a systematic review to synthesize and critically analyze the research on artificial-intelligence-driven evaluation in higher education. The systematic methodology ensured transparency, reproducibility, and comprehensive evidence coverage, including equity, transparency, and pedagogical considerations. Following the PRISMA 2020 framework, the process included a structured search, clearly defined inclusion and exclusion criteria, and multi-stage screening and synthesis (Figure 1 and Supplementary Materials). This design was suitable because the topic spans a wide range of empirical, conceptual, and policy-oriented research within education, computing, and ethics. As such, narrative or meta-analytic synthesis alone is not sufficient.

3.2. Strategy and Data Sources

Multiple database searches were conducted using Web of Science (WoS), Scopus, and Google Scholar. They were supplemented by targeted queries of reputable publishers, including Elsevier, Springer, Taylor & Francis, Wiley, and MDPI. The search encompassed publications from January 2019 to June 2025, aiming to capture both the emergence and widespread adoption of large language models and generative AI systems in higher education. Boolean operators and wildcards were employed to combine keywords into four main clusters:
  • Artificial intelligence terms: “artificial intelligence,” “machine learning,” “deep learning,” “generative AI,” “ChatGPT,” “LLM.”
  • Assessment terms: “assessment,” “grading,” “evaluation,” “feedback,” “formative,” “summative.”
  • Fairness and transparency terms: “fairness,” “bias,” “equity,” “transparency,” “explainability,” “trust.”
  • Pedagogical terms: “teaching,” “learning,” “higher education,” “university,” “student,” “faculty.”
Although “ChatGPT” features prominently in the search strategy, this reflects the terminology most often used in higher-education AI-assessment research from 2019 to 2025: many studies refer directly to ChatGPT as a representative generative AI tool even when discussing large language models more broadly. Region-specific generative models, such as DeepSeek and ERNIE in China (where access to OpenAI tools is restricted), have also emerged; their underrepresentation in the dataset is not intentional exclusion but mirrors the global publication landscape. Future reviews should use more model-specific keywords to increase geographic and technical coverage.
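For illustration, the four clusters above can be combined into a single Boolean string as in the minimal sketch below; this is an assumption-laden reconstruction of the query logic, not the exact syntax submitted to each database (WoS, Scopus, and Google Scholar each use their own field codes and wildcard conventions).

```python
# Minimal sketch of how the four keyword clusters described above can be
# combined with Boolean operators. Illustrative only: exact field codes
# and wildcard syntax differ across WoS, Scopus, and Google Scholar.
AI_TERMS = ["artificial intelligence", "machine learning", "deep learning",
            "generative AI", "ChatGPT", "LLM"]
ASSESSMENT_TERMS = ["assessment", "grading", "evaluation", "feedback",
                    "formative", "summative"]
FAIRNESS_TERMS = ["fairness", "bias", "equity", "transparency",
                  "explainability", "trust"]
PEDAGOGY_TERMS = ["teaching", "learning", "higher education", "university",
                  "student", "faculty"]

def or_group(terms):
    """Join one cluster into a parenthesized OR group, quoting phrases."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Clusters are intersected with AND, mirroring the four-cluster design.
query = " AND ".join(or_group(cluster) for cluster in
                     [AI_TERMS, ASSESSMENT_TERMS, FAIRNESS_TERMS, PEDAGOGY_TERMS])
print(query)
```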

3.3. Inclusion and Exclusion Criteria

Studies were included if they met all of the following criteria:
  • Empirical, conceptual, or review studies addressing AI use in assessment or feedback within higher education.
  • Publications in peer-reviewed journals or indexed conference proceedings.
  • Studies that are available in English and provide sufficient methodological transparency.
  • Research that discussed at least one of the focal constructs: fairness, transparency, pedagogical impact, or governance.
A structured quality-assessment procedure was applied using the Mixed-Methods Appraisal Tool (MMAT) together with two relevance criteria: clarity of assessment context and explicit engagement with fairness, transparency, and pedagogy. Two independent volunteer reviewers coded each study, and discrepancies were resolved through consensus. After removal of duplicates, 145 records remained; screening of titles and abstracts reduced this number to 78 eligible studies. At full-text appraisal, 31 papers were excluded for insufficient methodological transparency, unclear assessment relevance, or lack of engagement with the core review constructs, leaving 47 high-quality papers for final synthesis (see Figure 1).
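The flow arithmetic can be stated compactly; the sketch below simply re-derives the final count from the stage totals reported above (the pre-deduplication total is not reported, so the flow begins at the post-duplicate stage).

```python
# Re-derivation of the screening counts reported above. The number of
# records identified before deduplication is not reported in the text,
# so the flow is expressed from the post-duplicate stage onward.
records_after_deduplication = 145
after_title_abstract_screening = 78
excluded_at_full_text = 31

included_in_synthesis = after_title_abstract_screening - excluded_at_full_text
assert included_in_synthesis == 47  # studies retained for final synthesis
print(f"Included studies: {included_in_synthesis}")
```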

3.4. Analytical Framework and Synthesis

The analysis utilized thematic synthesis, combining inductive and deductive coding. Initial data collection focused on recurring themes related to fairness, such as bias detection, subgroup equity, and algorithmic audit; transparency, including explainability, disclosure, and interpretability; and pedagogy, as assessed by learning outcomes, engagement, and teacher mediation. An analytical framework illustrating the connections between fairness, transparency, and pedagogy within ethical governance in a responsible AI-driven assessment ecosystem is provided in Figure 2.
A cross-tabulation approach identified intersections among these categories to reveal how ethical and pedagogical factors co-occur in reported evidence. Quantitative descriptions were summarized using frequency counts to show distributional trends. Qualitative findings were synthesized narratively to highlight contextual nuances. Visual mapping, adapted from the PRISMA flow diagram, was used to document inclusion and exclusion at each review stage. This ensured methodological traceability. Table 1, Table 2 and Table 3 in the Results Section present organized summaries of representative studies by focus theme and region.
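As a concrete illustration of the cross-tabulation step, the sketch below counts pairwise co-occurrences of the four review constructs across coded studies; the study identifiers and code assignments are hypothetical stand-ins for the actual coding sheet.

```python
# Illustrative sketch of the cross-tabulation step: counting pairwise
# co-occurrences of the four review constructs across coded studies.
# Study identifiers and code assignments are hypothetical.
from collections import Counter
from itertools import combinations

coded_studies = {
    "S01": {"fairness", "transparency"},
    "S02": {"pedagogy"},
    "S03": {"fairness", "pedagogy", "governance"},
    "S04": {"transparency", "governance"},
    # ... one entry per included study (47 in this review)
}

co_occurrence = Counter()
for codes in coded_studies.values():
    for pair in combinations(sorted(codes), 2):
        co_occurrence[pair] += 1

for (a, b), n in co_occurrence.most_common():
    print(f"{a} x {b}: {n}")
```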

3.5. Trustworthiness and Ethical Considerations

The study maintained methodological integrity through triangulation of data sources (Scopus, WoS, ERIC, and Google Scholar), peer debriefing, and transparent documentation of decisions. The reference-management software Mendeley (v2.140.0) minimized citation errors and ensured the replicability of search trails. Because the review relied exclusively on secondary, publicly available data, no ethical clearance was required. However, ethical norms of scholarship were strictly observed, including accurate attribution, avoidance of plagiarism, and respect for intellectual property.

4. Results

The study integrates empirical and conceptual evidence from regional and international studies to address fairness, transparency, pedagogical impact, and institutional governance in AI-driven assessment across higher-education contexts.

4.1. Integrative Overview of Included Studies

The included studies span publications from 2019 to 2025 across diverse educational settings, including Western, Gulf, South Asian, East Asian, and multi-institutional contexts. Recent studies from Asia and the Gulf region demonstrate that this debate extends beyond Western-centric contexts, confirming that contextual readiness and faculty development are decisive for the ethical integration of generative AI [1,10,19].
The dataset comprises quantitative, qualitative, mixed-methods, and conceptual studies, reflecting a broad methodological spectrum. Writing and language subjects, engineering and technology education, and health sciences formed the major disciplinary clusters, while multidisciplinary higher-education environments were also represented (Table 1, Table 2 and Table 3).
Student participants featured prominently in several studies, though a considerable number examined faculty use of AI-supported assessment tools. The reported outcomes primarily concerned the consistency and equity of automated scoring, the clarity and explainability of AI decision-making, the influence of AI on feedback cycles and learning engagement, and the preparedness of institutions to integrate AI within their assessment systems (Table 1, Table 2 and Table 3).

Student-Focused Findings

A subset of studies focused specifically on student experiences with AI-mediated assessment. These reported considerable variation in how learners interpreted automated comments, the degree of clarity they attributed to AI explanations, and the conditions under which they perceived AI-generated feedback as credible. Differences in digital literacy, confidence in technology use, and access to digital resources shaped the extent to which students benefited from these tools. In contexts where supplementary guidance or structured disclosure was provided, students demonstrated stronger engagement and more consistent use of AI feedback.

4.2. Fairness and Equity in Algorithmic Assessment

Altogether, 32 studies explicitly examined fairness-related outcomes (Table 1, Table 2 and Table 3). A prominent line of evidence concerned the consistency with which AI-supported systems applied rubric criteria. Many studies reported greater uniformity in automated scoring compared to human grading, particularly in writing-intensive subjects. Although some contexts demonstrate the bias-reduction potential of AI, especially in minimizing favoritism and inconsistency in peer-based grading, the literature also shows persistent equity gaps associated with access, digital literacy, and language diversity [12,19,27]. In studies from Pakistan and the Gulf, fairness depended on user competence and institutional infrastructure rather than algorithmic neutrality [7,11]. More digitally literate students gained a disproportionate advantage in using AI feedback systems, while under-resourced learners perceived these tools as unfair. Similar tendencies were observed in Kazakhstan, where variation in digital preparedness reinforced existing inequalities. At the same time, evidence from Saudi and Gulf universities indicates that equitable professional development opportunities and context-sensitive AI training significantly improve perceptions of fairness.
Findings also reflected differences in user readiness. Learners and instructors with higher digital proficiency were better positioned to engage with automated feedback. At the same time, those with limited exposure to AI tools reported challenges in understanding and applying AI-generated comments [19,27,47]. Perceived procedural fairness was further influenced by the transparency of assessment instructions and the degree to which AI output appeared aligned with established evaluation criteria.
Overall, approximately two-thirds of studies reported that AI-based systems produced more consistent scoring than human evaluators, particularly in writing-intensive assessments. Moreover, fairness outcomes were described in relation to algorithmic consistency, user-level disparities in engagement, and institutional procedures governing assessment design.

4.3. Transparency and Explainability

Transparency-related findings were reported in 29 studies and clustered around three areas (Table 1, Table 2 and Table 3). Several studies described the use of rationale displays or simplified explanation layers that made AI-generated decisions more intelligible to learners and instructors. These explanations ranged from highlighted textual evidence to structured scoring breakdowns. Furthermore, disclosure practices varied considerably. Some systems explicitly indicated when AI tools were used to generate feedback or assign scores, while others provided only minimal or indirect notification (Table 2).
A smaller group of studies addressed transparency from a data governance perspective, discussing how participant data were stored, processed, or used to refine AI outputs. These disclosures were not uniformly present across institutions, reflecting varying levels of maturity in data governance practices (Table 2).

4.4. Learning Outcomes and Pedagogical Transformation

Altogether, 34 studies reported on the pedagogical effects of AI-enabled assessment. Across these studies, AI frequently enhanced the timeliness and volume of formative feedback. Learners often received more frequent comments and had greater opportunities to revise their work in response to automated suggestions (Table 1, Table 2 and Table 3).
Engagement patterns also shifted in settings where AI was integrated. Many learners reviewed feedback more regularly and demonstrated increased interaction with assessment tasks. On the teaching side, instructors experienced a reduction in repetitive grading demands, though some studies reported increased responsibility in interpreting AI outputs and integrating them into instruction (Table 1, Table 2 and Table 3).
In addition, pedagogical frameworks such as outcome-based education, reflective practice, and feedback literacy initiatives commonly shaped how AI tools were deployed. The nature and depth of this integration varied across institutions and disciplines.

4.5. Institutional Accountability and Governance Practices

Governance-related findings appeared in 22 studies. These studies described a range of institutional measures to support the responsible integration of AI into assessment. Some institutions have developed explicit policies governing ethical AI use, academic integrity, and acceptable assessment practices. Others offered structured capacity-building initiatives, such as faculty training or student orientation programs, designed to enhance competence in using AI systems (Table 1, Table 2 and Table 3).
Operational governance mechanisms such as bias audits, rubric-alignment checks, and documentation of decision trails were also reported. These practices reflected efforts to ensure transparency, accountability, and consistency in AI-driven assessment. Several institutions referenced external frameworks or accreditation requirements in developing their governance structures, though the extent of alignment varied considerably.
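As one concrete illustration of these mechanisms, a subgroup bias audit can be as simple as comparing mean AI-assigned scores per student group against the overall mean; in the sketch below, the groups, scores, and flagging threshold are all hypothetical.

```python
# Minimal sketch of one operational check named above: a subgroup bias
# audit comparing mean AI-assigned scores per group against the overall
# mean. Groups, scores, and the flagging threshold are hypothetical.
from statistics import mean

ai_scores = {
    "group_a": [78, 82, 75, 90, 84],
    "group_b": [70, 73, 69, 80, 76],
}

overall_mean = mean(s for scores in ai_scores.values() for s in scores)
for group, scores in ai_scores.items():
    gap = mean(scores) - overall_mean
    flag = "REVIEW" if abs(gap) > 3.0 else "ok"  # illustrative threshold
    print(f"{group}: mean={mean(scores):.1f}, gap={gap:+.1f} [{flag}]")
```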
In short, the findings across the four domains present a diverse landscape of AI-enabled assessment practices. Fairness outcomes centered on the consistency of scoring and the influence of digital readiness on user experiences. Transparency findings highlighted differences in explanation formats, disclosure practices, and clarity of data governance. Pedagogical findings emphasized enhanced feedback cycles, evolving engagement patterns, and changes in instructional workload. Governance findings reflected emerging institutional frameworks focused on policy development, training, and oversight mechanisms.

5. Discussion

This section interprets the findings in relation to the research questions, offering insights into the interplay between fairness, transparency, pedagogy, and governance within AI-driven assessment ecosystems.

5.1. Relationship Between Fairness and Transparency (RQ1)

The results indicate that fairness and transparency operate as interrelated components influencing perceptions of legitimacy in AI-driven assessment. Although AI systems demonstrated greater uniformity in rubric-based scoring, equitable outcomes depended on users’ ability to understand how automated decisions were made. Transparent explanations and clear disclosures served as enabling conditions that helped users calibrate trust in AI outputs. The extent to which learners could interpret such information depended not only on the system’s design but also on contextual variables such as digital competence and the availability of institutional support. Fairness thus emerges as a socio-technical construct shaped jointly by algorithmic performance and communicative clarity.
The empirical synthesis reinforces this relationship: approximately two-thirds of the reviewed studies reported greater rubric-level scoring consistency with AI-supported assessment than with human grading. However, more than half of these studies also identified limitations in the quality of explanations or disclosure practices. This convergence suggests that consistency alone does not ensure perceived fairness. Studies that included layered explanations and explicit disclosure mechanisms consistently documented higher levels of trust and procedural acceptance. These findings confirm that transparency serves as a mediating condition through which algorithmic fairness attains educational significance.

5.2. Pedagogical Implications and Learning Processes (RQ2)

The integration of AI tools significantly influences formative learning processes by accelerating revision cycles, supporting iterative learning, and encouraging students to assume a more active role in monitoring their progress. AI-enabled assessment aligns with Outcome-Based Education (OBE) by delivering timely, criterion-referenced feedback that identifies performance gaps, maps student work to rubric descriptors, and provides targeted micro-feedback linked to specific competencies. The pedagogical effectiveness of these systems, however, relies on structured teacher mediation. In the absence of instructional guidance, learners may misinterpret automated suggestions or develop excessive dependence on AI-generated recommendations. When teachers contextualize AI feedback, incorporate reflective tasks, and align automated insights with disciplinary expectations and higher-order outcomes, AI tools can enhance engagement, feedback literacy, and learning achievement. Therefore, the pedagogical value of AI is realized not through automation alone but through its deliberate integration into instructional design, where AI functions as a scaffold to promote transparency, consistency, and the developmental intent of OBE-aligned assessment practices while maintaining learner agency and academic integrity.
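To make the OBE alignment concrete, the sketch below shows one way AI criterion scores might be mapped to course learning outcomes so that micro-feedback targets specific competencies; the rubric names, competency labels, scores, and gap threshold are illustrative assumptions, not artifacts of the reviewed studies.

```python
# Hypothetical sketch of OBE-aligned micro-feedback: AI-assigned rubric
# scores mapped to course learning outcomes (CLOs) so that comments
# target specific competencies. All names, scores, and the threshold
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str       # rubric criterion
    outcome: str    # competency the criterion evidences
    score: float    # AI-assigned score on a 0-1 scale

GAP_THRESHOLD = 0.7  # below this, a performance gap is flagged

def micro_feedback(criteria):
    """Return targeted comments only where a performance gap exists."""
    return [f"{c.outcome}: revisit '{c.name}' (scored {c.score:.2f})"
            for c in criteria if c.score < GAP_THRESHOLD]

essay = [
    Criterion("thesis clarity", "CLO1: construct a coherent argument", 0.55),
    Criterion("use of evidence", "CLO2: evaluate and cite sources", 0.82),
]
print("\n".join(micro_feedback(essay)))
```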
These pedagogical patterns have demonstrated practical significance. Among the 34 studies reporting learning outcomes, approximately 70% documented improvements in feedback frequency, revision opportunities, or learner engagement when AI was implemented in formative contexts. However, these improvements consistently depended on teacher mediation, rubric alignment, and reflective scaffolding. In the absence of these conditions, studies reported only superficial engagement or an over-reliance on automated suggestions. This evidence supports the assertion that AI enhances learning outcomes only when integrated within intentional pedagogical design, rather than operating as an autonomous evaluator.

5.3. Governance Infrastructures for Ethical Implementation (RQ3)

Institutional governance emerged as essential for ensuring ethical and sustainable adoption of AI-driven assessment. Policies specifying acceptable uses of AI, combined with audit mechanisms for detecting bias or inconsistencies, contributed to the development of trustworthy assessment environments. Training initiatives for faculty and students enhanced their capacity to engage effectively with AI tools, while transparent documentation practices supported accountability. Institutions that aligned their practices with national or international guidelines demonstrated greater readiness to implement AI in a manner that promotes fairness, reliability, and educational integrity.

5.4. Risks, Limitations, and Safeguards

Although AI-enabled assessment offers notable advantages, substantial risks persist. Generative models are particularly vulnerable to hallucinations, inaccuracies, and manipulation via adversarial or low-quality training data. These errors may result in misleading explanations, incorrect numerical reasoning, or fabricated content, which can distort learning outcomes, compromise assessment fairness, and diminish trust in digital systems. In extreme cases, intentionally compromised models may disseminate misinformation, threatening educational integrity and long-term societal progress. These vulnerabilities highlight the necessity for robust governance safeguards. Recommended measures include human-in-the-loop verification, institutional validation protocols, regular model audits, and the use of certified or institutionally approved AI tools. Integrating algorithmic efficiency with expert oversight enables institutions to address these risks, thereby ensuring that AI supports, rather than undermines, the sustainability, reliability, and credibility of higher-education assessment.

5.5. AI-Driven Assessment and Sustainable Development

The integration of AI-driven assessment has implications that extend beyond pedagogical innovation and directly affect sustainable development within higher education. When implemented responsibly, AI can increase assessment efficiency, reduce grading workloads, and improve diagnostic precision. These contributions advance Sustainable Development Goal (SDG) 4: Quality Education by enhancing instructional responsiveness and expanding access to timely, high-quality feedback. Additionally, fostering digital competencies among faculty and students supports SDG 9: Industry, Innovation, and Infrastructure by strengthening institutional capacity for technological advancement. However, these sustainability benefits are realized only when AI systems are governed ethically and equitably. Disparities in technology access, lack of transparency, or insufficiently skilled mediation may exacerbate educational inequities, particularly in developing regions. Furthermore, widespread adoption without robust quality controls can compromise data integrity and institutional trust, thereby undermining SDG 16: Peace, Justice, and Strong Institutions. Thus, sustainable integration of AI-driven assessment necessitates focused attention to equity, transparency, governance maturity, and institutional readiness to ensure that technological advancements reinforce, rather than diminish, the long-term quality and integrity of higher education.

5.6. Governance-Integrated Fairness–Transparency–Pedagogy (GFTP) Model

The GFTP model was developed through deductive thematic synthesis, employing fairness, transparency, pedagogy, and governance as predefined analytical categories outlined in the review protocol. Data from 47 studies were coded according to these domains, and recurring relationships among them were systematically identified.
The synthesis demonstrated that fairness and transparency consistently functioned as interdependent factors influencing users’ trust in AI-driven assessment. Educational value emerged when these principles were embedded within pedagogical practice, with teacher mediation and reflective scaffolding supporting students in interpreting and utilizing AI-generated feedback effectively.
Governance emerged as the structural enabler connecting these domains across studies. Institutional policies, bias audits, documentation protocols, and capacity-building initiatives were identified as essential mechanisms for ensuring fairness, promoting transparency, and maintaining pedagogical integrity.
The resulting GFTP model (Figure 3) represents a conceptually integrated framework, positioning governance as the foundation, fairness and transparency as ethical pillars, and pedagogy as the domain through which responsible AI adoption influences learning outcomes.

5.7. Resource and Implementation Considerations

The implementation of AI-driven assessment necessitates comprehensive institutional planning. Required resources encompass faculty training, computational infrastructure, data governance oversight, and time allocated for curriculum redesign. Cost–benefit analyses in the literature indicate that although AI can reduce grading time by 30–60% in large-enrollment courses, these efficiencies are typically achieved only after significant initial investments in capacity-building and governance. Institutions lacking digital readiness may encounter increased risks of inequity without corresponding pedagogical advantages.
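For scale, the sketch below converts the cited 30–60% range into hours saved for a hypothetical large-enrollment course; the enrollment and minutes-per-script figures are assumed values, not data from the reviewed studies.

```python
# Back-of-envelope illustration of the 30-60% grading-time reduction
# range cited above. Enrollment and minutes-per-script are assumptions.
students = 300
minutes_per_script = 20
baseline_hours = students * minutes_per_script / 60  # 100 h per cycle

for reduction in (0.30, 0.60):
    saved = baseline_hours * reduction
    print(f"{reduction:.0%} reduction -> {saved:.0f} h saved per assessment cycle")
```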

5.8. Policy and Practice Implications

This paper argues that technological sophistication alone does not guarantee educational value. Institutions should:
  • Undertake fairness audits and digital-literacy assessments to reveal inequities.
  • Introduce transparency in layers: simplified explanations for learners and complete technical documentation for experts (see the sketch at the end of this subsection).
  • Promote formative integration by positioning AI as an assistant rather than a replacement for human pedagogy.
  • Ensure ethical oversight through cross-functional governance committees.
Collectively, these practices have the potential to foster responsible innovation, ensuring that AI contributes positively to fairness, trust, and pedagogy in tertiary education.
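One way to picture the layered-transparency recommendation is a single disclosure record with a simplified learner-facing layer and a fuller expert-facing audit layer, as in the sketch below; all field names and values are illustrative assumptions.

```python
# Sketch of layered transparency: one disclosure record carrying a
# simplified learner-facing layer and a fuller expert-facing audit
# layer. All field names and values are illustrative assumptions.
disclosure = {
    "learner_layer": {
        "ai_used": True,
        "plain_rationale": ("Your score reflects rubric criteria 1-4; "
                            "criterion 3 (evidence) lowered the total."),
    },
    "expert_layer": {
        "model_id": "example-scoring-model-v2",  # hypothetical identifier
        "rubric_alignment_check": "passed",
        "per_criterion_scores": {"C1": 0.90, "C2": 0.80, "C3": 0.60, "C4": 0.85},
        "audit_trail": ["scored 2026-01-05", "human moderation 2026-01-06"],
    },
}

print(disclosure["learner_layer"]["plain_rationale"])
```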

5.9. Limitations and Future Research

Although current research is progressing, existing studies remain fragmented. Most research focuses disproportionately on writing and English as a Foreign Language (EFL) tasks, offering limited insight into Science, Technology, Engineering, and Mathematics (STEM) or creative disciplines. There is a lack of longitudinal studies examining sustained learning improvement, and algorithmic analyses often lack clarity. Future research should address underexplored areas, include diverse cultural groups, and employ mixed-methods designs to evaluate equity and knowledge acquisition over time. Studies that develop models for teacher training and human–AI collaboration are particularly needed to maintain pedagogical richness in an AI-enhanced educational landscape.

6. Conclusions

This review establishes that artificial intelligence has become integral to assessment practices in higher education, presenting significant opportunities alongside complex ethical and pedagogical challenges. Evidence suggests that AI systems improve scoring consistency, operational efficiency, and the provision of individualized feedback across a range of institutional and regional settings. However, these benefits depend on foundational safeguards of fairness and transparency. Fairness extends beyond the absence of algorithmic bias to include equitable access, varying levels of digital literacy, and robust institutional support structures that enable all students to engage meaningfully with AI-mediated assessment. Similarly, transparency should encompass not only technical disclosure but also interpretability, open dialogue, and user agency, thereby empowering students and educators to understand and respond to AI-generated judgments.
From a pedagogical perspective, AI offers significant potential as a formative tool that enhances feedback literacy, supports reflective practice, and aligns with outcome-based education frameworks. However, realizing this potential requires intentional design and active teacher mediation. In the absence of deliberate scaffolding, AI may obscure learning processes or inadvertently foster over-reliance. At the governance level, sustainable adoption necessitates comprehensive institutional policies that incorporate accountability mechanisms, bias audits, documentation protocols, and ethical oversight. The interaction among technical, ethical, and pedagogical considerations highlights the necessity for a holistic governance framework that ensures human oversight and aligns AI integration with educational values and accreditation requirements.
The synthesized findings indicate that the benefits of AI-driven assessment, including increased scoring consistency, enhanced feedback efficiency, and scalable personalization, are empirically validated across a range of higher-education contexts. Approximately two-thirds of the reviewed studies reported improved rubric-based consistency relative to human grading. Nearly three-quarters of studies identified pedagogical advantages related to formative feedback, iterative learning, and increased student engagement. However, the evidence also highlights ongoing deficiencies in transparency and governance, as fewer than one-third of institutions report established audit mechanisms or formal ethical oversight structures. These results demonstrate that AI does not inherently ensure fairness, transparency, or pedagogical value; rather, such outcomes are contingent upon institutional readiness, digital literacy, and governance capacity. The proposed Governance-Integrated Fairness–Transparency–Pedagogy (GFTP) model reflects these empirical trends by positioning governance as the essential enabler through which ethical principles and pedagogical objectives are operationalized into sustainable assessment practices.
Future research should move beyond perception-based studies and prioritize longitudinal and cross-cultural investigations that explore how fairness, transparency, and learning outcomes develop across underrepresented disciplines, diverse learner populations, and region-specific AI ecosystems. The central challenge for higher-education institutions is not merely the adoption of AI but its responsible integration. When guided by fairness, transparency, ethical governance, and pedagogical integrity, AI can serve as a reliable partner in the development of human-centered, equitable, explainable, and transformative assessment systems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su18020785/s1. The completed PRISMA 2020 checklist [48] is provided as a supplementary file.

Funding

This research was funded through project number NBU-FFR-2026-2983-01.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The author extends her appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through project number NBU-FFR-2026-2983-01.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Bulut, O.; Beiting-Parrish, M. The Rise of Artificial Intelligence in Educational Measurement: Opportunities and Ethical Challenges. Chin./Engl. J. Educ. Meas. Eval. 2024, 5, 3.
  2. Mpolomoka, D.L. Utilizing Artificial Intelligence for Assessment in Higher Education. Pedagog. Res. 2025, 10, em0243.
  3. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4.
  4. Egamberdiyeva, Z. Ethical and Pedagogical Implications of Artificial Intelligence in Education. Sci. Educ. 2025, 6, 44–51.
  5. Xia, Q.; Weng, X.; Ouyang, F.; Lin, T.J.; Chiu, T.K.F. A Scoping Review on How Generative Artificial Intelligence Transforms Assessment in Higher Education. Int. J. Educ. Technol. High. Educ. 2024, 21, 40.
  6. Williams, A. Integrating Artificial Intelligence into Higher Education Assessment. Intersect. J. Intersect. Assess. Learn. 2025, 6, 128–154.
  7. Alnemrat, A.; Aldamen, H.; Almashour, M.; Al-Deaibes, M.; AlSharefeen, R. AI vs. Teacher Feedback on EFL Argumentative Writing: A Quantitative Study. Front. Educ. 2025, 10, 1614673.
  8. Alazemi, A.F.T. Formative Assessment in Artificial Integrated Instruction: Delving into the Effects on Reading Comprehension Progress, Online Academic Enjoyment, Personal Best Goals, and Academic Mindfulness. Lang. Test. Asia 2024, 14, 44.
  9. Thomas, M.L.; Yildirim-Erbasli, S.N.; Hariharan, S. Exploring Undergraduate Students’ Perceptions of AI vs. Human Scoring and Feedback. Internet High. Educ. 2026, 68, 101052.
  10. Baig, M.I.; Yadegaridehkordi, E. Factors Influencing Academic Staff Satisfaction and Continuous Usage of Generative Artificial Intelligence (GenAI) in Higher Education. Int. J. Educ. Technol. High. Educ. 2025, 22, 5.
  11. Alwakid, W.N.; Dahri, N.A.; Humayun, M.; Alwakid, G.N. Exploring the Role of AI and Teacher Competencies on Instructional Planning and Student Performance in an Outcome-Based Education System. Systems 2025, 13, 517.
  12. Qadeer, A. The Mediating Impact of Student Engagement on the Association between Generative AI-Based Feedback and Academic Performance in Higher Education. J. Res. Innov. Strat. Educ. (RISE) 2025, 2, 29–44.
  13. UNESCO. AI and Education: Guidance for Policymakers; UNESCO: London, UK, 2021.
  14. OECD. Ethics Guidelines for Trustworthy AI; OECD: Paris, France, 2019.
  15. Alotaibi, N.S.; Alshehri, A.H. Prospers and Obstacles in Using Artificial Intelligence in Saudi Arabia Higher Education Institutions—The Potential of AI-Based Learning Outcomes. Sustainability 2023, 15, 10723.
  16. Zhang, X. Fairness and Effectiveness in AI-Driven Educational. J. Innov. Dev. 2025, 11, 7–10.
  17. Ilieva, G.; Yankova, T.; Ruseva, M.; Kabaivanov, S. A Framework for Generative AI-Driven Assessment in Higher Education. Information 2025, 16, 472.
  18. Liang, J.; Stephens, J.M.; Brown, G.T.L. A Systematic Review of the Early Impact of Artificial Intelligence on Higher Education Curriculum, Instruction, and Assessment. Front. Educ. 2025, 10, 1522841.
  19. Yu, N.; Zhang, J.; Mitra, S.; Smith, R.; Rich, A. AI-Educational Development Loop (AI-EDL): A Conceptual Framework to Bridge AI Capabilities with Classical Educational Theories. arXiv 2025, arXiv:2508.00970.
  20. Mammadova, I.; Aliyeva, E.; Mammadova, N.; Ismayilli, F. The Role of Artificial Intelligence in Shaping Assessment Practices in Higher Education. Educ. Process Int. J. 2025, 18.
  21. Guo, K.; Pan, M.; Li, Y.; Lai, C. Effects of an AI-Supported Approach to Peer Feedback on University EFL Students’ Feedback Quality and Writing Ability. Internet High. Educ. 2024, 63, 100962.
  22. McLaughlin, J.E.; Ponte, C.D.; Lyons, K. Student Perceptions of GenAI as a Virtual Tutor to Support Collaborative Research Training for Health Professionals. BMC Med. Educ. 2025, 25, 895.
  23. Deneen, C. Editorial: Artificial Intelligence and Educational Assessment. Learn. Lett. 2025, 4, 38.
  24. Fartușnic, R.; Istrate, O.; Fartușnic, C. Beyond Automation: A Conceptual Framework for AI in Educational Assessment. J. Digit. Pedagog. 2025, 4, 83–102.
  25. Ren, X.; Wu, M.L. Examining Teaching Competencies and Challenges While Integrating Artificial Intelligence in Higher Education. TechTrends 2025, 69, 519–538.
  26. Omirali, A.; Kozhakhmet, K.; Zhumaliyeva, R. Digital Trust in Transition: Student Perceptions of AI-Enhanced Learning for Sustainable Educational Futures. Sustainability 2025, 17, 7567.
  27. Li, Z.; Yang, G.; Zhang, H.; Huang, C.; Liang, S. Undergraduate Dental Students’ Knowledge, Attitudes, and Satisfaction Following a Step-by-Step Course in Microscopic Dentistry. BMC Med. Educ. 2025, 25, 1376.
  28. Alkhayat, D.S.; Alsubaiyi, H.N.; Alharbi, Y.A.; Alkahtani, L.M.; Akhwan, A.M.; Alharbi, A.A. Perception and Impact of AI on Education Journey of Medical Students and Interns in Western Region, KSA: A Cross-Sectional Study. J. Med. Educ. Curric. Dev. 2025, 12, 23821205251340129.
  29. Salih, S.M. Perceptions of Faculty and Students About Use of Artificial Intelligence in Medical Education: A Qualitative Study. Cureus 2024, 16, e57605.
  30. Sobaih, A.E.E.; Elshaer, I.A.; Hasanein, A.M. Examining Students’ Acceptance and Use of ChatGPT in Saudi Arabian Higher Education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 709–721.
  31. Alenazi, L.; Al-Anazi, S.H. Understanding Artificial Intelligence through the Eyes of Future Nurses. Saudi Med. J. 2025, 46, 238–243.
  32. Aldossary, A.S.; Aljindi, A.A.; Alamri, J.M. The Role of Generative AI in Education: Perceptions of Saudi Students. Contemp. Educ. Technol. 2024, 16, ep536.
  33. Alnmr, A.; Ray, R.P.; Alsirawan, R. Comparative Analysis of Helical Piles and Granular Anchor Piles for Foundation Stabilization in Expansive Soil: A 3D Numerical Study. Sustainability 2023, 15, 11975.
  34. Alsaiari, O.; Baghaei, N.; Lahza, H.; Lodge, J.; Boden, M.; Khosravi, H. Emotionally Enriched Feedback via Generative AI. arXiv 2024, arXiv:2410.15077.
  35. Aljabr, F.S.; Al-Ahdal, A.A.M.H. Ethical and Pedagogical Implications of AI in Language Education: An Empirical Study at Ha’il University. Acta Psychol. 2024, 251, 104605.
  36. Owan, V.J.; Abang, K.B.; Idika, D.O.; Etta, E.O.; Bassey, B.A. Exploring the Potential of Artificial Intelligence Tools in Educational Measurement and Assessment. Eurasia J. Math. Sci. Technol. Educ. 2023, 19, em2307.
  37. Najafi, F.T.; Pabba, V.R.; Subramanian, R.; Vidalis, S.M. AI-Assisted Grading—A Study on Efficiency and Fairness. In Proceedings of the ASEE Southeast Conference, Starkville, MS, USA, 9–11 March 2025.
  38. Almalki, M.; Alkhamis, M.A.; Khairallah, F.M.; Choukou, M.A. Perceived Artificial Intelligence Readiness in Medical and Health Sciences Education: A Survey Study of Students in Saudi Arabia. BMC Med. Educ. 2025, 25, 439.
  39. Gundu, T. Strategies for E-Assessments in the Era of Generative Artificial Intelligence. Electron. J. E-Learn. 2024, 22, 40–50.
  40. Hooda, M.; Rana, C.; Dahiya, O.; Rizwan, A.; Hossain, M.S. Artificial Intelligence for Assessment and Feedback to Enhance Student Success in Higher Education. Math. Probl. Eng. 2022, 2022, 7690103.
  41. Oc, Y.; Gonsalves, C.; Quamina, L.T. Generative AI in Higher Education Assessments: Examining Risk and Tech-Savviness on Student’s Adoption. J. Mark. Educ. 2025, 47, 138–155.
  42. Suazo-Galdames, I.C.; Chaple-Gil, A.M. AI-Driven Assessment Systems in Higher Education: Effectiveness for Enhancing Critical Thinking and Creativity. Ing. Des Syst. D’inform. 2025, 30, 1665–1671.
  43. Sajja, R.; Sermet, Y.; Fodale, B.; Demir, I. Evaluating AI-Powered Learning Assistants in Engineering Higher Education: Student Engagement, Ethical Challenges, and Policy Implications. arXiv 2025, arXiv:2506.05699.
  44. Evangelista, E.D.L. Ensuring Academic Integrity in the Age of ChatGPT: Rethinking Exam Design, Assessment Strategies, and Ethical AI Policies in Higher Education. Contemp. Educ. Technol. 2025, 17, ep559.
  45. Kofinas, A.K.; Tsay, C.H.H.; Pike, D. The Impact of Generative AI on Academic Integrity of Authentic Assessments within a Higher Education Context. Br. J. Educ. Technol. 2025, 56, 2522–2549.
  46. Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and Higher Education: A Systematic Review. Comput. Educ. Artif. Intell. 2023, 5, 100152.
  47. Maistry, S.M.; Singh, U.G. Faculty Perspectives on the Role of ChatGPT-4.0 in Higher Education Assessments. J. Educ. 2025, 86–102.
  48. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
Figure 1. PRISMA-style flowchart of the study.
Figure 2. Analytical framework connecting fairness, transparency, and pedagogy under ethical governance in a responsible AI-driven assessment ecosystem.
Figure 3. Governance-Integrated Fairness–Transparency–Pedagogy (GFTP) Model.
Table 1. Regional and Institutional Evidence on AI-Driven Assessment in Higher Education.

| Context | Focus Area | Fairness Insights | Transparency Insights | Pedagogical and Practical Impact | References |
|---|---|---|---|---|---|
| Malaysia (academic staff) | Staff satisfaction and continuous GenAI use (UTAUT + ECM). | Equity and ethical literacy shape ongoing use; digital competence and institutional support moderate fairness. | Procedural transparency through privacy/security assurances fosters trust. | Institutional facilitation conditions predict responsible GenAI integration. | [4,7,8,12] |
| Kuwait (EFL undergrads) | AI-based formative assessment (Nearpod) for reading comprehension. | AI scoring minimizes human bias and enhances equity across learners. | Real-time dashboards enhance feedback visibility and grading transparency. | Improved comprehension, mindfulness, and learner satisfaction through formative AI feedback. | [3,9,10] |
| Pakistan (HE faculty and students) | Teacher competencies + AI in instructional planning and learning outcomes (SEM model). | Fairness dependent on faculty digital readiness and pedagogical design. | Explicit disclosure of AI functions strengthens teacher–student trust. | Collaborative AI–teacher design improves student engagement and performance. | [1,2,13,14] |
| Multi-institution (5 universities) | Exploratory mixed-method study on how AI reshapes assessment. | Reduced grading bias but awareness gaps among students regarding AI evaluation. | Transparency requires explainable scoring logic and auditability. | Institutional readiness and targeted training crucial for equitable rollout. | [4,7,12,15] |
| Saudi Arabia and Gulf universities | Adoption of generative AI and AI-enhanced teaching competencies among higher-education faculty. | Equitable professional development is central to fairness; variability in institutional readiness and digital skills creates inequity in AI adoption. | Limited transparency about algorithms and data use constrains faculty trust; clear policy guidelines and disclosure practices improve confidence. | Integrating AI into faculty development enhances teaching innovation and reflective practice; competency-based frameworks support ethical, student-centered assessment. | [4,7,8,9,13,26] |
Table 2. International and Conceptual Framework Evidence on Fairness, Transparency, and Pedagogical Implications.

| Context | Focus Area | Fairness Insights | Transparency Insights | Pedagogical and Practical Impact | References |
|---|---|---|---|---|---|
| Undergraduate learners (U.S./Canada) | Comparative perception of AI vs. human scoring feedback. | Human grading perceived fairer due to interpretive empathy; fairness contingent on algorithmic explainability. | Disclosure of AI source reduces trust without interpretive guidance. | AI–human hybrid evaluation improves trust and engagement. | [3,10,12,26] |
| Global (policy and measurement review) | Ethical challenges in AI-enabled assessment: validity, reliability, bias. | Algorithmic bias and data inequity highlight need for fairness audits. | Transparency mandates explainability in scoring and audit processes. | Personalized feedback promising but requires governance and teacher mediation. | [1,9,12] |
| Europe (Bulgaria case study) | Generative AI assessment quality and fairness framework. | Equitable participation reinforces distributive fairness. | Human–AI audit loops strengthen transparency and reliability. | Aligns AI evaluation with formative pedagogy. | [3,11,12,26] |
| Meta-systematic reviews | Synthesis of AI in educational assessment literature. | Fairness and ethical constructs underrepresented in >70% of studies. | Transparency seldom operationalized, signaling research gap. | Highlights necessity of pedagogically driven AI models. | [1,2,3] |
Table 3. Integrative Analytical Synthesis on Fairness–Transparency–Pedagogical Impact–Ethical Governance in AI-Driven Assessment.

| Dimension | Synthesized Thematic Insights from the Literature | Implications for Policy and Pedagogical Practice | References |
|---|---|---|---|
| Fairness | Fairness is conceptualized as algorithmic impartiality and distributive justice, contingent on literacy, access, and explainability. Empirical and conceptual works emphasize bias mitigation and inclusive AI capacity-building. | Institutions should embed fairness audits, AI-literacy initiatives, and transparent calibration protocols in assessment workflows to ensure equitable participation. | [3,7,12,15,20,26,27,28,29,30] |
| Transparency | Transparency functions as a mediator of trust and accountability. Structured disclosure and interpretive feedback enhance legitimacy, whereas uncontextualized AI disclosure erodes confidence. | Adopt layered transparency strategies, incorporating simplified learner-facing rationales and technical audit documentation. | [1,10,11,12,27,31,32] |
| Pedagogical Impact | AI tools catalyze reflective learning when used as formative scaffolds within outcome-based education. Effectiveness depends on teacher mediation and reflective integration. | Promote AI–human co-design of assessment rubrics and feedback literacy training to balance automation with reflection. | [2,9,13,14,33,34,35,36,37,38,39,40,41,42,43] |
| Ethical Governance | Governance frameworks converge on the need for accountability, explainability, and continual validation. Measurement studies highlight fairness–validity trade-offs. | Establish institutional AI-assessment governance protocols, including bias audits, ethics committees, and transparent reporting aligned with accreditation standards. | [1,4,15,16,23,24,26,44,45,46] |

