Transparency Mechanisms for Generative AI Use in Higher Education Assessment: A Systematic Scoping Review (2022–2026)
Abstract
1. Introduction
1.1. Assessment, Authorship, and Academic Integrity in the Age of Generative AI
1.2. Disclosure, Attribution, and Academic Responsibility
1.3. From Principles to Practice: Implementation, Assessment, and Feasibility of Transparency Mechanisms
2. Materials and Methods
2.1. Study Design
2.2. Search Strategy and Information Sources
- (a) Generative AI (e.g., ChatGPT, generative AI);
- (b) Higher education;
- (c) Academic assessment (assessment, assignment, coursework);
- (d) Transparency mechanisms, including disclosure, citation, attribution, and prompt logs.
2.3. Search Queries and Results by Database
2.4. Study Selection Process
2.5. Inclusion and Exclusion Criteria
2.6. Data Extraction and Analytical Framework
3. Results
3.1. General Characteristics of the Included Studies
3.2. Approaches to Evaluating Compliance with Transparency Requirements (Evaluation Approach)
3.3. Level of Detail Required in Transparency Practices (Requirements Specified)
3.4. Implementation Patterns of Transparency Mechanisms
3.5. Reported Evidence on Compliance, Workload, and Acceptability
4. Discussion
5. Limitations
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Database-Specific Search Strategies
| Database | Searched Fields | Complete Search String | Filters/Limits Applied | Date(s) of Execution |
|---|---|---|---|---|
| Scopus | TITLE-ABS-KEY | (“AI disclosure” OR “disclosure statement” OR “AI citation” OR “prompt log” OR “prompt logs”) AND (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) | Publication years: 2022–2026; Document type: Article, Conference Paper; Language: No restrictions | 20 December 2025; updated 10 January 2026 |
| Web of Science (WoS Core Collection) | TOPIC (TS) | (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | Timespan: 2022–2026; Indexes: SCI-EXPANDED, SSCI, ESCI; Document type: Article, Proceedings Paper; Language: No restrictions | 20 December 2025; updated 10 January 2026 |
| ERIC | All Fields | (ChatGPT OR “generative AI”) AND (“higher education”) AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | Publication years: 2022–2026; Source type: Journals; Language: No restrictions | 20 December 2025; updated 10 January 2026 |
| IEEE Xplore | Metadata only (title, abstract, keywords) | (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | Publication years: 2022–2026; Content type: Journals, Conference Proceedings; Language: No restrictions | 20 December 2025; updated 10 January 2026 |
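The database-specific strings above all combine the same four concept blocks from Section 2.2. As a minimal sketch of that construction (the variable names are ours, introduced only for illustration, not part of the review protocol), the Web of Science topic search can be assembled as:

```python
# Illustrative sketch: the four concept blocks behind the Appendix A strings.
# Block names are assumptions made for this example.
AI_TOOLS  = '(ChatGPT OR "generative AI")'
CONTEXT   = '"higher education"'
TASK      = '(assessment OR assignment OR coursework)'
MECHANISM = '(disclosure OR transparency OR attribution OR citation)'

def wos_query() -> str:
    """Join all four concept blocks with AND, as in the TS= topic search."""
    return " AND ".join([AI_TOOLS, CONTEXT, TASK, MECHANISM])

print(wos_query())
```

The Scopus string differs only in its first block, which swaps in the disclosure-specific phrases ("AI disclosure", "prompt log", etc.) searched via TITLE-ABS-KEY.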
Appendix B. Operational Definitions and Coding Traceability
| Dimension | Category | Operational Definition (Applied Coding) | Study Coded |
|---|---|---|---|
| Evaluation approach | No explicit evaluation | Transparency is mentioned as a principle or recommendation without defined criteria or verification procedures. | Maguire et al. [19] |
| Evaluation approach | Unverified self-disclosure | Mandatory or requested declaration of AI use with no verification or evaluative consequences. | Gonsalves [9] |
| Evaluation approach | Instructor judgment | AI use is implicitly considered in grading decisions without standardized instruments. | Spirgi [8] |
| Evaluation approach | Light rubric/checklist | Disclosure accompanied by a simple rubric or checklist used for guidance or partial evaluation. | Overono & Ditta [16] |
| Evaluation approach | Criteria + spot verification | Explicit evaluative criteria combined with selective verification of process evidence. | Cotelli Kureth et al. [17] |
| Evaluation approach | Reactive management | AI use is addressed only when suspected, without predefined procedures. | Adnin et al. [10] |
| Evaluation approach | Hidden use | Absence of formal mechanisms; undeclared AI use is reported or inferred. | Kirsanov et al. [25] |
| Requirements specified | Not specified | Normative or ethical references to transparency without operational requirements. | Al-Hajaya [1] |
| Requirements specified | Minimal (use/no use) | Binary declaration indicating whether AI was used or not. | Spirgi [8] |
| Requirements specified | Descriptive | Narrative description of how AI was used and for what purpose. | Maguire et al. [19] |
| Requirements specified | With evidence | Submission of process evidence (e.g., marked outputs, appendices). | Cotelli Kureth et al. [17] |
| Requirements specified | Reflection + verification | Declaration combined with ethical reflection and explicit responsibility. | García Ramos [18] |
| Implementation pattern | Mandatory disclosure | Disclosure required but not integrated into assessment or evaluation. | Gonsalves [9] |
| Implementation pattern | Narrative, non-evaluable | Disclosure encouraged or requested without grading impact. | Maguire et al. [19] |
| Implementation pattern | Disclosure + rubric | Disclosure explicitly integrated with a checklist or rubric. | Overono & Ditta [16] |
| Implementation pattern | Integrated evidence | Process evidence embedded within the assessment design. | Cotelli Kureth et al. [17] |
| Implementation pattern | Implementation gap | Reported mismatch between formal expectations and actual student practices. | Adnin et al. [10] |
| Implementation pattern | Policy only | Institutional guidelines without concrete assessment procedures. | Dabis & Csáki [11] |
| Reported evidence | Compliance | Explicit reporting of adherence or non-adherence to transparency requirements. | Cotelli Kureth et al. [17] |
| Reported evidence | Acceptance | Empirical or descriptive evidence of student or faculty perceptions. | Overono & Ditta [16] |
| Reported evidence | Workload | Explicit discussion of workload implications for students or instructors. | Maguire et al. [19] |
| Reported evidence | No evidence | No empirical or descriptive feasibility evidence reported. | Dabis & Csáki [11] |
References
- Al-Hajaya, K. Academic integrity is under fire in the Generative AI age: Insights from accounting educators to overcome challenges, threats and ethical concerns. High. Educ. Ski. Work-Based Learn. 2025, early access, 1–19.
- Gallent-Torres, C.; Zapata-González, A.; Ortego-Hernando, J. The impact of generative artificial intelligence in higher education: A focus on ethics and academic integrity. RELIEVE Rev. Electrón. Investig. Eval. Educ. 2023, 29, 2.
- Revell, T.; Yeadon, W.; Cahilly-Bretzin, G.; Clarke, I.; Manning, G.; Jones, J.; Mulley, C.; Pascual, R.J.; Bradley, N.; Thomas, D.; et al. ChatGPT versus human essayists: An exploration of the impact of artificial intelligence for authorship and academic integrity in the humanities. Int. J. Educ. Integr. 2024, 20, 18.
- Benuyenah, V.; Dewnarain, S. Students’ Intention to Engage with ChatGPT and Artificial Intelligence in Higher Education Business Studies Programmes. Int. J. Distance Educ. Technol. 2024, 22, 1–21.
- Luo, J.; Dawson, P. Exploring value judgements in grading: Will teachers mark down student work assisted by GenAI, and should they? Stud. High. Educ. 2025, 1–15.
- Kickbusch, S.; Ashford-Rowe, K.; Kemp, A.; Boreland, J.; Huijser, H. Beyond detection: Redesigning authentic assessment in an AI-mediated world. Educ. Sci. 2025, 15, 1537.
- Deep, P.; Edgington, W.D.; Ghosh, N.; Rahaman, M.S. Evaluating the effectiveness and ethical implications of AI detection tools in higher education. Information 2025, 16, 905.
- Spirgi, L. The Role of AI Disclosure in Academic Grading: Lecturer Perceptions, Challenges, and Implications. In Artificial Intelligence in Education, AIED 2025; Cristea, A.I., Walker, E., Lu, Y., Santos, O.C., Isotani, S., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15882.
- Gonsalves, C. Addressing student non-compliance in AI use declarations: Implications for academic integrity and assessment in higher education. Assess. Eval. High. Educ. 2025, 50, 592–606.
- Adnin, R.; Pandkar, A.; Yao, B.; Wang, D.; Das, M. Examining Student and Teacher Perspectives on Undisclosed Use of Generative AI in Academic Work. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25); Association for Computing Machinery: New York, NY, USA, 2025; pp. 1–17.
- Dabis, A.; Csáki, C. AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI. Humanit. Soc. Sci. Commun. 2024, 11, 1006.
- Ogunleye, B.; Zakariyyah, K.I.; Ajao, O.; Olayinka, O.; Sharma, H. A Systematic Review of Generative AI for Teaching and Learning Practice. Educ. Sci. 2024, 14, 636.
- Pérez-Jorge, D.; Olmos-Raya, E.; Alonso-Rodríguez, I.; Hernández-Dionis, P.; Pérez-Pérez, I. Harnessing AI for sustainable education: Pathways and implications. In Generative Artificial Intelligence in Education: Innovations, Challenges, and Future Prospects; Durak, G., Çankaya, S., Eds.; Springer Nature: Singapore, 2026; pp. 17–29.
- Alshamy, A.S.A.; Al-Harthi, A.S.A.; Abdullah, S. Challenges of using generative AI tools in Omani higher education institutions: Perceptions of students and academics. In Proceedings of the 2025 International Conference on Smart Applications, Communications and Networking (SmartNets); IEEE: New York, NY, USA, 2025.
- Hernández Aguirre, A. Microbe Detectives VS ChatGPT: Who Solves Better. In World Engineering Education Forum—Global Engineering Deans Council (WEEF-GEDC); IEEE: New York, NY, USA, 2024.
- Overono, A.L.; Ditta, A.S. The use of AI disclosure statements in teaching: Developing skills for psychologists of the future. Teach. Psychol. 2025, 52, 273–278.
- Cotelli Kureth, S.; Paliot, E.; Zink, S. Fostering transparency: A critical introduction of generative AI in students’ assignments. Lang. Lang. Learn. High. Educ. 2025, 15, 63–85.
- García Ramos, J. Development and introduction of a document disclosing AI-use: Exploring self-reported student rationales for artificial intelligence use in coursework: A brief research report. Front. Educ. 2025, 10, 1654805.
- Maguire, J.; English, R.; Cao, Q.; Seow, C.K. Themes in the declared use of generative artificial intelligence in assessment. In Proceedings of the 9th Conference on Computing Education Practice (CEP 2025); Association for Computing Machinery: New York, NY, USA, 2025; pp. 17–20.
- Ahn, S.H.; Choi, M.-J. Research on self-assessment items for teaching writing ethics in the era of generative AI. Glob. Educ. Citiz. 2024, 10, 7–35.
- Pérez-Jorge, D.; González-Afonso, M.C.; Santos-Álvarez, A.G.; Plasencia-Carballo, Z.; Perdomo-López, C.d.l.Á. The Impact of AI-Driven Application Programming Interfaces (APIs) on Educational Information Management. Information 2025, 16, 540.
- Pérez-Jorge, D.; González-Herrera, A.I.; Olmos-Raya, E.; Martínez-Murciano, M.C. Nurturing creative learning through generative AI: A systematic review. In Generative Artificial Intelligence in Education: Innovations, Challenges, and Future Prospects; Durak, G., Çankaya, S., Eds.; Springer Nature: Singapore, 2026; pp. 371–396.
- Pérez-Jorge, D.; González-Afonso, M.C. Transparency Mechanisms for Generative AI Use in Higher Education Assessment. PROSPERO 2026; CRD420261287226. Available online: https://www.crd.york.ac.uk/PROSPERO/view/CRD420261287226 (accessed on 16 January 2026).
- Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32.
- Kirsanov, O.; Kushwah, L.; Selvaretnam, G. Beyond detection: How students use—And hide—AI in online assessments and what authentic tasks can do about it. J. Acad. Ethics 2026, 24, 14.



| Database | Search Query | Results | Included |
|---|---|---|---|
| Scopus | (“AI disclosure” OR “disclosure statement” OR “AI citation” OR “prompt log” OR “prompt logs”) AND (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) | 34 | 5 |
| Web of Science | (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | 38 | 5 |
| ERIC | (ChatGPT OR “generative AI”) AND (“higher education”) AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | 14 | 1 |
| IEEE Xplore | (ChatGPT OR “generative AI”) AND “higher education” AND (assessment OR assignment OR coursework) AND (disclosure OR transparency OR attribution OR citation) | 6 | 0 |
| Total | | 92 | 11 |
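Records retrieved from several databases overlap, so the 92 retrieved records are deduplicated before screening. A hypothetical sketch of that step (the record dicts and field names below are invented for illustration; they are not the review's actual export format):

```python
# Hypothetical deduplication sketch: keep the first occurrence of each
# record, keyed on a normalized DOI, falling back to the title when the
# export row carries no DOI. Sample records are invented for illustration.

def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec["title"]).strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

exports = [
    {"db": "Scopus", "doi": "10.1234/a", "title": "Study A"},
    {"db": "WoS",    "doi": "10.1234/a", "title": "Study A"},  # duplicate hit
    {"db": "ERIC",   "doi": "",          "title": "Study B"},  # no DOI
]
print(len(dedupe(exports)))  # -> 2
```

Keying on the DOI first avoids false merges between distinct papers with similar titles; the lowercase/strip normalization catches trivially different exports of the same record.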
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Studies focused exclusively on: | |
| Reference | Document Type | Country/Institution | Discipline/Context | Assessment Type | Main Transparency Mechanism | Specified Requirements | Compliance Evaluation | Reported Evidence | Risks/Safeguards |
|---|---|---|---|---|---|---|---|---|---|
| Gonsalves [9] | Empirical Study | UK (King’s Business School) | Business/Higher Education | Coursework | Mandatory Disclosure | Explicit AI use on the coversheet | Unverified self-disclosure | Low compliance; fear of sanctions; mixed acceptance | Normative ambiguity; concealment incentives |
| Overono & Ditta [16] | Case Study | USA | Psychology | Written Essays | Guided Disclosure with Attribution | AI disclosure with attribution | Light rubric/checklist | High acceptability; metacognitive improvement | Minimized privacy; pedagogical approach |
| Maguire et al. [19] | Empirical Study | Ireland/Australia | Computing Education | Varied Assessment | Narrative Disclosure | Thematic description of AI use | No explicit evaluation | Usage patterns; moderate workload | Low surveillance |
| Spirgi [8] | Empirical Study | Europe | Higher Education | Written Assessment | Evaluable Declaration | Explicit use/no use declaration | Implicit instructor judgment | Instructor concern; grading impact | Risk of evaluative bias |
| García Ramos [18] | Case Study | Spain | Higher Education | Written Work | Structured Declaration with Reflection | Declaration + ethical reflection | Unverified self-disclosure | High acceptance; student ethical reflection | Protected privacy; low burden |
| Adnin et al. [10] | Empirical Study | International | General Higher Education | Academic Work | No Formal Mechanism | Not specified. Undeclared use | Reactive management | Frequent hidden use | Distrust; reactive surveillance |
| Cotelli Kureth et al. [17] | Action Research | Switzerland | Languages/L2 | Written Essay | Marked Outputs + Reflection | Explicit output ID + process reflection | Evaluable criteria + spot checks | Improved transparency; moderate workload | Data minimization; selective traceability |
| Dabis & Csáki [11] | Policy Analysis | Europe | Institutional | — | Recommended Declaration | Institutional disclosure guidelines | Not evaluated | Uneven adoption | Governance; equity |
| Kirsanov et al. [25] | Empirical Study | International | Online Higher Education | Online Assessment | No Transparency Mechanism | — | Implicit penalty/judgment | Strategic concealment | Risk of excessive surveillance |
| Luo & Dawson [5] | Empirical Study | Australia | General Higher Education | Essays | Implicit AI Use Reference | Indirect AI use mentioned | Instructor judgment | Grading and fairness impact | Bias; equity |
| Al-Hajaya [1] | Empirical Study | International | Accounting | Varied Assessment | Suggested Declaration | General policies and expectations | Implicit instructor judgment | Moderate resistance; normative ambiguity | Privacy; regulatory clarity |
| Dimension | Synthetic Category | N | % |
|---|---|---|---|
| Evaluation approach | No explicit evaluation | 3 | 27.3 |
| | Unverified self-disclosure | 2 | 18.2 |
| | Instructor judgment/implicit penalty | 2 | 18.2 |
| | Light rubric/checklist | 1 | 9.1 |
| | Assessment criteria + spot verification | 1 | 9.1 |
| | Reactive management | 1 | 9.1 |
| | Absence of mechanisms (hidden use) | 1 | 9.1 |
| Requirements specified | Not specified/variable | 5 | 45.5 |
| | Minimal (use/no use) | 2 | 18.2 |
| | Descriptive | 2 | 18.2 |
| | With evidence | 1 | 9.1 |
| | With reflection and verification | 1 | 9.1 |
| Implementation pattern | Mandatory disclosure without verification | 2 | 18.2 |
| | Narrative disclosure without evaluative impact | 2 | 18.2 |
| | Disclosure + rubric/checklist | 1 | 9.1 |
| | Integrated process evidence | 1 | 9.1 |
| | Implementation gap (hidden use) | 2 | 18.2 |
| | Guideline/policy without operationalization | 1 | 9.1 |
| Reported evidence | Explicit evidence of compliance | 3 | 27.3 |
| | Perceptions of acceptance/resistance | 3 | 27.3 |
| | Reported workload | 4 | 36.4 |
| | No evidence reported | 7 | 63.6 |
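Because a single study can carry more than one code within a dimension, the percentages of a dimension can sum to more than 100% (as in the reported-evidence rows). An illustrative recomputation of the N and % columns for that dimension, mirroring the table rather than the underlying extraction sheet:

```python
from collections import Counter

# Illustrative sketch: recompute N and % (over the 11 included studies)
# for the reported-evidence dimension. The code list mirrors the table
# counts and is not the review's actual extraction data.
TOTAL_STUDIES = 11

reported_evidence = (["Compliance"] * 3 + ["Acceptance/resistance"] * 3
                     + ["Workload"] * 4 + ["No evidence"] * 7)

for category, n in Counter(reported_evidence).items():
    print(f"{category}: N={n}, {100 * n / TOTAL_STUDIES:.1f}%")
```

Dividing each count by the 11 included studies, not by the number of codes assigned, reproduces the table's percentages (e.g., 7/11 = 63.6%).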
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Pérez-Pérez, I.; González-Afonso, M.C.; Plasencia-Carballo, Z.; Pérez-Jorge, D. Transparency Mechanisms for Generative AI Use in Higher Education Assessment: A Systematic Scoping Review (2022–2026). Computers 2026, 15, 111. https://doi.org/10.3390/computers15020111

