Sustainable AI-Driven Assessment in Higher Education: A Systematic Review of Fairness, Transparency, Pedagogical Innovation, and Governance
Abstract
1. Introduction
- RQ1: How are the constructs of fairness and transparency conceptualized and operationalized within AI-driven assessment in higher education?
- RQ2: What pedagogical implications, advantages, and risks emerge from the deployment of AI-enabled assessment tools?
- RQ3: What governance infrastructures, including policies, audit mechanisms, and professional competencies, are required to ensure ethical and educational integrity in AI-supported assessment?
2. Literature Review
2.1. Conceptualizing Fairness in AI Assessment
2.2. Transparency and Explainability
2.3. Pedagogical Perspectives on AI in Assessment
2.4. Ethical and Governance Dimensions
2.5. Synthesis and Conceptual Gap
3. Materials and Methods
3.1. Research Design
3.2. Search Strategy and Data Sources
- Artificial intelligence terms: “artificial intelligence,” “machine learning,” “deep learning,” “generative AI,” “ChatGPT,” “LLM.”
- Assessment terms: “assessment,” “grading,” “evaluation,” “feedback,” “formative,” “summative.”
- Fairness and transparency terms: “fairness,” “bias,” “equity,” “transparency,” “explainability,” “trust.”
- Pedagogical terms: “teaching,” “learning,” “higher education,” “university,” “student,” “faculty.”
3.3. Inclusion and Exclusion Criteria
- Empirical, conceptual, or review studies addressing AI use in assessment or feedback within higher education.
- Publications in peer-reviewed journals or indexed conference proceedings.
- Studies that are available in English and provide sufficient methodological transparency.
- Research that discusses at least one of the focal constructs: fairness, transparency, pedagogical impact, or governance.
3.4. Analytical Framework and Synthesis
3.5. Trustworthiness and Ethical Considerations
4. Results
4.1. Integrative Overview of Study- and Student-Focused Findings
4.2. Fairness and Equity in Algorithmic Assessment
4.3. Transparency and Explainability
4.4. Learning Outcomes and Pedagogical Transformation
4.5. Institutional Accountability and Governance Practices
5. Discussion
5.1. Relationship Between Fairness and Transparency (RQ1)
5.2. Pedagogical Implications and Learning Processes (RQ2)
5.3. Governance Infrastructures for Ethical Implementation (RQ3)
5.4. Risks, Limitations, and Safeguards
5.5. AI-Driven Assessment and Sustainable Development
5.6. Governance-Integrated Fairness–Transparency–Pedagogy (GFTP) Model
5.7. Resource and Implementation Considerations
5.8. Policy and Practice Implications
- Conduct fairness audits and digital-literacy assessments to surface inequities.
- Introduce transparency in layers: simplified explanations for learners and full technical documentation for experts.
- Promote pedagogically sound integration by positioning AI as an assistant to, rather than a replacement for, human pedagogy.
- Ensure ethical oversight through cross-functional governance committees.
5.9. Limitations and Future Research
6. Conclusions
Supplementary Materials
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bulut, O.; Beiting-Parrish, M. The Rise of Artificial Intelligence in Educational Measurement: Opportunities and Ethical Challenges. Chin./Engl. J. Educ. Meas. Eval. 2024, 5, 3.
- Mpolomoka, D.L. Utilizing Artificial Intelligence for Assessment in Higher Education. Pedagog. Res. 2025, 10, em0243.
- Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4.
- Egamberdiyeva, Z. Ethical and Pedagogical Implications of Artificial Intelligence in Education. Sci. Educ. 2025, 6, 44–51.
- Xia, Q.; Weng, X.; Ouyang, F.; Lin, T.J.; Chiu, T.K.F. A Scoping Review on How Generative Artificial Intelligence Transforms Assessment in Higher Education. Int. J. Educ. Technol. High. Educ. 2024, 21, 40.
- Williams, A. Integrating Artificial Intelligence into Higher Education Assessment. Intersect. J. Intersect. Assess. Learn. 2025, 6, 128–154.
- Alnemrat, A.; Aldamen, H.; Almashour, M.; Al-Deaibes, M.; AlSharefeen, R. AI vs. Teacher Feedback on EFL Argumentative Writing: A Quantitative Study. Front. Educ. 2025, 10, 1614673.
- Alazemi, A.F.T. Formative Assessment in Artificial Intelligence-Integrated Instruction: Delving into the Effects on Reading Comprehension Progress, Online Academic Enjoyment, Personal Best Goals, and Academic Mindfulness. Lang. Test. Asia 2024, 14, 44.
- Thomas, M.L.; Yildirim-Erbasli, S.N.; Hariharan, S. Exploring Undergraduate Students’ Perceptions of AI vs. Human Scoring and Feedback. Internet High. Educ. 2026, 68, 101052.
- Baig, M.I.; Yadegaridehkordi, E. Factors Influencing Academic Staff Satisfaction and Continuous Usage of Generative Artificial Intelligence (GenAI) in Higher Education. Int. J. Educ. Technol. High. Educ. 2025, 22, 5.
- Alwakid, W.N.; Dahri, N.A.; Humayun, M.; Alwakid, G.N. Exploring the Role of AI and Teacher Competencies on Instructional Planning and Student Performance in an Outcome-Based Education System. Systems 2025, 13, 517.
- Qadeer, A. The Mediating Impact of Student Engagement on the Association between Generative AI-Based Feedback and Academic Performance in Higher Education. J. Res. Innov. Strat. Educ. (RISE) 2025, 2, 29–44.
- UNESCO. AI and Education: Guidance for Policymakers; UNESCO: Paris, France, 2021.
- OECD. Ethics Guidelines for Trustworthy AI; OECD: Paris, France, 2019.
- Alotaibi, N.S.; Alshehri, A.H. Prospers and Obstacles in Using Artificial Intelligence in Saudi Arabia Higher Education Institutions—The Potential of AI-Based Learning Outcomes. Sustainability 2023, 15, 10723.
- Zhang, X. Fairness and Effectiveness in AI-Driven Educational. J. Innov. Dev. 2025, 11, 7–10.
- Ilieva, G.; Yankova, T.; Ruseva, M.; Kabaivanov, S. A Framework for Generative AI-Driven Assessment in Higher Education. Information 2025, 16, 472.
- Liang, J.; Stephens, J.M.; Brown, G.T.L. A Systematic Review of the Early Impact of Artificial Intelligence on Higher Education Curriculum, Instruction, and Assessment. Front. Educ. 2025, 10, 1522841.
- Yu, N.; Zhang, J.; Mitra, S.; Smith, R.; Rich, A. AI-Educational Development Loop (AI-EDL): A Conceptual Framework to Bridge AI Capabilities with Classical Educational Theories. arXiv 2025, arXiv:2508.00970.
- Mammadova, I.; Aliyeva, E.; Mammadova, N.; Ismayilli, F. The Role of Artificial Intelligence in Shaping Assessment Practices in Higher Education. Educ. Process Int. J. 2025, 18.
- Guo, K.; Pan, M.; Li, Y.; Lai, C. Effects of an AI-Supported Approach to Peer Feedback on University EFL Students’ Feedback Quality and Writing Ability. Internet High. Educ. 2024, 63, 100962.
- McLaughlin, J.E.; Ponte, C.D.; Lyons, K. Student Perceptions of GenAI as a Virtual Tutor to Support Collaborative Research Training for Health Professionals. BMC Med. Educ. 2025, 25, 895.
- Deneen, C. Editorial: Artificial Intelligence and Educational Assessment. Learn. Lett. 2025, 4, 38.
- Fartușnic, R.; Istrate, O.; Fartușnic, C. Beyond Automation: A Conceptual Framework for AI in Educational Assessment. J. Digit. Pedagog. 2025, 4, 83–102.
- Ren, X.; Wu, M.L. Examining Teaching Competencies and Challenges While Integrating Artificial Intelligence in Higher Education. TechTrends 2025, 69, 519–538.
- Omirali, A.; Kozhakhmet, K.; Zhumaliyeva, R. Digital Trust in Transition: Student Perceptions of AI-Enhanced Learning for Sustainable Educational Futures. Sustainability 2025, 17, 7567.
- Li, Z.; Yang, G.; Zhang, H.; Huang, C.; Liang, S. Undergraduate Dental Students’ Knowledge, Attitudes, and Satisfaction Following a Step-by-Step Course in Microscopic Dentistry. BMC Med. Educ. 2025, 25, 1376.
- Alkhayat, D.S.; Alsubaiyi, H.N.; Alharbi, Y.A.; Alkahtani, L.M.; Akhwan, A.M.; Alharbi, A.A. Perception and Impact of AI on Education Journey of Medical Students and Interns in Western Region, KSA: A Cross-Sectional Study. J. Med. Educ. Curric. Dev. 2025, 12, 23821205251340129.
- Salih, S.M. Perceptions of Faculty and Students About Use of Artificial Intelligence in Medical Education: A Qualitative Study. Cureus 2024, 16, e57605.
- Sobaih, A.E.E.; Elshaer, I.A.; Hasanein, A.M. Examining Students’ Acceptance and Use of ChatGPT in Saudi Arabian Higher Education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 709–721.
- Alenazi, L.; Al-Anazi, S.H. Understanding Artificial Intelligence through the Eyes of Future Nurses. Saudi Med. J. 2025, 46, 238–243.
- Aldossary, A.S.; Aljindi, A.A.; Alamri, J.M. The Role of Generative AI in Education: Perceptions of Saudi Students. Contemp. Educ. Technol. 2024, 16, ep536.
- Alnmr, A.; Ray, R.P.; Alsirawan, R. Comparative Analysis of Helical Piles and Granular Anchor Piles for Foundation Stabilization in Expansive Soil: A 3D Numerical Study. Sustainability 2023, 15, 11975.
- Alsaiari, O.; Baghaei, N.; Lahza, H.; Lodge, J.; Boden, M.; Khosravi, H. Emotionally Enriched Feedback via Generative AI. arXiv 2024, arXiv:2410.15077.
- Aljabr, F.S.; Al-Ahdal, A.A.M.H. Ethical and Pedagogical Implications of AI in Language Education: An Empirical Study at Ha’il University. Acta Psychol. 2024, 251, 104605.
- Owan, V.J.; Abang, K.B.; Idika, D.O.; Etta, E.O.; Bassey, B.A. Exploring the Potential of Artificial Intelligence Tools in Educational Measurement and Assessment. Eurasia J. Math. Sci. Technol. Educ. 2023, 19, em2307.
- Najafi, F.T.; Pabba, V.R.; Subramanian, R.; Vidalis, S.M. AI-Assisted Grading—A Study on Efficiency and Fairness. In Proceedings of the ASEE Southeast Conference, Starkville, MS, USA, 9–11 March 2025.
- Almalki, M.; Alkhamis, M.A.; Khairallah, F.M.; Choukou, M.A. Perceived Artificial Intelligence Readiness in Medical and Health Sciences Education: A Survey Study of Students in Saudi Arabia. BMC Med. Educ. 2025, 25, 439.
- Gundu, T. Strategies for E-Assessments in the Era of Generative Artificial Intelligence. Electron. J. E-Learn. 2024, 22, 40–50.
- Hooda, M.; Rana, C.; Dahiya, O.; Rizwan, A.; Hossain, M.S. Artificial Intelligence for Assessment and Feedback to Enhance Student Success in Higher Education. Math. Probl. Eng. 2022, 2022, 7690103.
- Oc, Y.; Gonsalves, C.; Quamina, L.T. Generative AI in Higher Education Assessments: Examining Risk and Tech-Savviness on Student’s Adoption. J. Mark. Educ. 2025, 47, 138–155.
- Suazo-Galdames, I.C.; Chaple-Gil, A.M. AI-Driven Assessment Systems in Higher Education: Effectiveness for Enhancing Critical Thinking and Creativity. Ing. Syst. d’Inf. 2025, 30, 1665–1671.
- Sajja, R.; Sermet, Y.; Fodale, B.; Demir, I. Evaluating AI-Powered Learning Assistants in Engineering Higher Education: Student Engagement, Ethical Challenges, and Policy Implications. arXiv 2025, arXiv:2506.05699.
- Evangelista, E.D.L. Ensuring Academic Integrity in the Age of ChatGPT: Rethinking Exam Design, Assessment Strategies, and Ethical AI Policies in Higher Education. Contemp. Educ. Technol. 2025, 17, ep559.
- Kofinas, A.K.; Tsay, C.H.H.; Pike, D. The Impact of Generative AI on Academic Integrity of Authentic Assessments within a Higher Education Context. Br. J. Educ. Technol. 2025, 56, 2522–2549.
- Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and Higher Education: A Systematic Review. Comput. Educ. Artif. Intell. 2023, 5, 100152.
- Maistry, S.M.; Singh, U.G. Faculty Perspectives on the Role of ChatGPT-4.0 in Higher Education Assessments. J. Educ. 2025, 86–102.
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.



| Context | Focus Area | Fairness Insights | Transparency Insights | Pedagogical and Practical Impact | References |
|---|---|---|---|---|---|
| Malaysia (academic staff) | Staff satisfaction and continuous GenAI use (UTAUT + ECM). | Equity and ethical literacy shape ongoing use; digital competence and institutional support moderate fairness | Procedural transparency through privacy/security assurances fosters trust | Institutional facilitation conditions predict responsible GenAI integration | [4,7,8,12] |
| Kuwait (EFL undergrads) | AI-based formative assessment (Nearpod) for reading comprehension. | AI scoring minimizes human bias and enhances equity across learners | Real-time dashboards enhance feedback visibility and grading transparency | Improved comprehension, mindfulness, and learner satisfaction through formative AI feedback | [3,9,10] |
| Pakistan (HE faculty and students) | Teacher competencies + AI in instructional planning and learning outcomes (SEM model). | Fairness dependent on faculty digital readiness and pedagogical design | Explicit disclosure of AI functions strengthens teacher–student trust | Collaborative AI–teacher design improves student engagement and performance | [1,2,13,14] |
| Multi-institution (5 universities) | Exploratory mixed-method study on how AI reshapes assessment. | Reduced grading bias but awareness gaps among students regarding AI evaluation | Transparency requires explainable scoring logic and auditability | Institutional readiness and targeted training crucial for equitable rollout | [4,7,12,15] |
| Saudi Arabia and Gulf Universities | Adoption of Generative AI and AI-enhanced teaching competencies among higher-education faculty. | Equitable professional development is central to fairness; variability in institutional readiness and digital skills creates inequity in AI adoption | Limited transparency about algorithms and data use constrains faculty trust; clear policy guidelines and disclosure practices improve confidence | Integrating AI into faculty development enhances teaching innovation and reflective practice; competency-based frameworks support ethical, student-centered assessment | [4,7,8,9,13,26] |

| Context | Focus Area | Fairness Insights | Transparency Insights | Pedagogical and Practical Impact | References |
|---|---|---|---|---|---|
| Undergraduate learners (U.S./Canada) | Comparative perception of AI vs. human scoring feedback. | Human grading perceived fairer due to interpretive empathy; fairness contingent on algorithmic explainability | Disclosure of AI source reduces trust without interpretive guidance | AI–human hybrid evaluation improves trust and engagement | [3,10,12,26] |
| Global (policy and measurement review) | Ethical challenges in AI-enabled assessment: validity, reliability, bias. | Algorithmic bias and data inequity highlight need for fairness audits | Transparency mandates explainability in scoring and audit processes | Personalized feedback promising but requires governance and teacher mediation | [1,9,12] |
| Europe (Bulgaria case study) | Generative AI assessment quality and fairness framework. | Equitable participation reinforces distributive fairness | Human–AI audit loops strengthen transparency and reliability | Aligns AI evaluation with formative pedagogy | [3,11,12,26] |
| Meta-systematic reviews | Synthesis of AI in educational assessment literature. | Fairness and ethical constructs underrepresented in >70% of studies | Transparency seldom operationalized, signaling research gap | Highlights necessity of pedagogically driven AI models | [1,2,3] |

| Dimension | Synthesized Thematic Insights from the Literature | Implications for Policy and Pedagogical Practice | References |
|---|---|---|---|
| Fairness | Fairness is conceptualized as algorithmic impartiality and distributive justice, contingent on literacy, access, and explainability. Empirical and conceptual works emphasize bias mitigation and inclusive AI capacity-building | Institutions should embed fairness audits, AI-literacy initiatives, and transparent calibration protocols in assessment workflows to ensure equitable participation. | [3,7,12,15,20,26,27,28,29,30] |
| Transparency | Transparency functions as a mediator of trust and accountability. Structured disclosure and interpretive feedback enhance legitimacy, whereas uncontextualized AI disclosure erodes confidence | Adopt layered transparency strategies, incorporating simplified learner-facing rationales and technical audit documentation. | [1,10,11,12,27,31,32] |
| Pedagogical Impact | AI tools catalyze reflective learning when used as formative scaffolds within outcome-based education. Effectiveness depends on teacher mediation and reflective integration | Promote AI–human co-design of assessment rubrics and feedback literacy training to balance automation with reflection. | [2,9,13,14,33,34,35,36,37,38,39,40,41,42,43] |
| Ethical Governance | Governance frameworks converge on the need for accountability, explainability, and continual validation. Measurement studies highlight fairness–validity trade-offs | Establish institutional AI-assessment governance protocols, including bias audits, ethics committees, and transparent reporting aligned with accreditation standards. | [1,4,15,16,23,24,26,44,45,46] |
Share and Cite
Alfaleh, M. Sustainable AI-Driven Assessment in Higher Education: A Systematic Review of Fairness, Transparency, Pedagogical Innovation, and Governance. Sustainability 2026, 18, 785. https://doi.org/10.3390/su18020785

