Review

Six Institutional Intervention Areas to Support Ethical and Effective Student Use of Generative AI in Higher Education: A Narrative Review

1 Department of Business and Law, Southampton Solent University, Southampton SO14 0YN, UK
2 James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
3 School of Chemistry and Materials, Jiangsu Provincial Key Laboratory of Green & Functional Materials and Environmental Chemistry, Yangzhou University, Yangzhou 225002, China
4 Department of Network and Security, Faculty of Computing, NSBM Green University Town, Homagama 10200, Sri Lanka
5 School of Business, Law, and Society, Southampton Solent University, Southampton SO14 0YN, UK
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(1), 137; https://doi.org/10.3390/educsci16010137
Submission received: 24 November 2025 / Revised: 4 January 2026 / Accepted: 15 January 2026 / Published: 16 January 2026

Abstract

The integration of generative AI tools, such as ChatGPT, Gemini, and DeepSeek, into higher education offers transformative opportunities for personalised learning and academic productivity. However, their unregulated use raises concerns about academic integrity, critical thinking, and educational equity. This systematic review synthesises insights from 96 peer-reviewed articles, identifying six key intervention themes, namely, curriculum integration, policy and governance, faculty development, student-centred strategies, assessment adaptation, and technological infrastructure. Together, these themes form a comprehensive intervention framework designed to guide students’ ethical and effective engagement with AI. This review highlights the need for institutions to move beyond fragmented policies, fostering systemic cultural and pedagogical change to align AI use with authentic learning outcomes. By bridging theoretical gaps and providing actionable strategies, this framework equips educators and policymakers to scaffold responsible AI integration across diverse higher education contexts.

1. Introduction

The rapid evolution and widespread availability of generative artificial intelligence (Generative AI) tools have significantly reshaped the landscape of higher education (Samala et al., 2025; S. Wang et al., 2024). Tools such as ChatGPT, Gemini, DeepSeek, and other generative AI platforms have become increasingly popular among university students worldwide for a range of academic tasks, including writing assistance, research support, coding, and creative ideation (Al-Sofi, 2024). These tools offer students innovative ways to engage with learning materials, enhance productivity, and personalise their academic experiences (Chan & Hu, 2023). However, their increasing use has sparked ongoing debate among educators, researchers, and policymakers. While some welcome generative AI as a valuable educational aid, others express concerns about academic integrity, overreliance, and the risk of diminishing students’ independent thinking and learning (Lim et al., 2023).
Regardless of these differing perspectives, it is evident that students will continue to incorporate generative AI into their educational routines (Vieriu & Petrea, 2025). This new reality demands that educators move beyond debate and instead focus on developing novel, evidence-based interventions that guide and support students in using these tools ethically and effectively (Jin et al., 2025). Such interventions are crucial to ensure that AI use aligns with learning outcomes, promotes responsible engagement with technology (Tan et al., 2025), and fosters critical skills (Jin et al., 2025). Without structured interventions, students risk overreliance on AI (Vieriu & Petrea, 2025), which may undermine their independent thinking and professional development (Gerlich, 2025). Moreover, developing tailored interventions empowers educators to create learning environments that embrace AI’s potential while safeguarding academic integrity and preparing students for the evolving demands of their future careers (Luckin et al., 2022). Although some studies have proposed interventions to support ethical and effective generative AI integration in education (e.g., An et al., 2025; Barus et al., 2025; Corbin et al., 2025), the current body of knowledge is fragmented. These insights are scattered across various disciplines and contexts, lacking a cohesive structure or framework that educators can readily apply. This fragmentation presents a major barrier for institutions seeking to implement comprehensive, pedagogically sound interventions that incorporate generative AI into learning environments.
The aim of this research is to conduct a systematic literature review to collect, analyse, and consolidate existing studies on generative AI use in higher education, with a particular focus on identifying interventions that support ethical and effective student engagement. Through this process, the review will generate an intervention framework that can be used by educators to design meaningful interventions tailored to their specific learning contexts. Based on this background, the objectives of this research are as follows.
  • To systematically review and synthesise relevant literature, identify and code key concepts related to AI interventions, organise these codes into sub-themes and overarching themes, and develop a comprehensive intervention framework grounded in these themes.
  • To provide actionable recommendations for educators and academic institutions aiming to support student learning through ethical and effective AI use.
The significance of this review is twofold: theoretical and practical. The theoretical significance lies in its development of an intervention framework through the systematic synthesis and integration of existing fragmented knowledge. By identifying, coding, and combining key concepts from diverse studies into coherent sub-themes and overarching main themes, this research will create a unified theoretical foundation that enriches the current understanding of ethical and effective use of generative AI in higher education. Practically, the significance emerges through the application of this consolidated intervention framework, which provides educators and academic institutions with actionable insights and evidence-based interventions. This practical guidance enables the creation of structured educational environments that foster responsible AI use, enhance student learning experiences, and effectively prepare students for the future demands of their professional and academic careers.
The remainder of this paper is organised as follows. Section 2 presents a detailed review of existing literature on generative AI in higher education, critically discussing ethical and effective use, and situating the current review within the broader academic discourse. Section 3 outlines the methodological design of the systematic literature review, including the six-phase protocol adopted to ensure rigour, transparency, and thematic coherence. Section 4 provides a comprehensive synthesis of findings, categorised into six overarching intervention themes supported by sub-themes and coded evidence from 96 peer-reviewed articles. This section forms the foundation of the proposed intervention framework. Section 5 discusses the theoretical and practical implications of the findings, critically analysing how they extend current understandings of pedagogy, policy, and institutional practice in the AI era. Finally, Section 6 presents the concluding remarks, summarising contributions, limitations, and avenues for future research.

2. Current State of Research on Generative AI in Higher Education

Over the past two years, generative AI has rapidly evolved from a niche technological curiosity into a mainstream educational tool, prompting a surge in academic inquiry. Research across disciplines has begun to map the multifaceted implications of AI in higher education, exploring its capacity to enhance learning personalisation, automate feedback, support multilingual access, and democratise knowledge production (Chan & Hu, 2023; Luckin et al., 2022). This growing body of literature affirms that generative AI is no longer peripheral but is becoming integral to how students learn, how educators teach, and how institutions operate.
Yet, despite its widespread adoption, the literature reveals significant conceptual and practical fragmentation. Studies vary widely in how they define, evaluate, and contextualise AI use, with some focused on institutional governance (Jin et al., 2025), others on faculty perspectives (Khlaif et al., 2024), and others on ethical concerns (An et al., 2025; Hanna et al., 2025). Moreover, while policy development and technical adaptation have received substantial attention, there remains a critical absence of consolidated frameworks to guide the pedagogically meaningful and ethically responsible use of generative AI.
To address this challenge, the main objective of this review is to propose a consolidated framework that can be used by educators and policymakers to promote the ethical and effective use of generative AI tools among students. This section establishes the conceptual foundation for the proposed intervention framework by first positioning the unique contribution of the current review within the broader research landscape. It then clarifies what is meant by ethical and effective use of generative AI in the context of higher education. These definitions are essential for grounding the framework in a shared understanding, ensuring that the strategies proposed later in the paper are interpreted in alignment with the pedagogical values and learning goals this review seeks to advance.

2.1. Uniqueness of the Current Review

While a growing body of research has begun to explore the institutional implications of generative AI in higher education, the focus of most current studies remains largely on policy development and governance. For instance, Jin et al. (2025) and McDonald et al. (2025) provide comprehensive analyses of institutional policies and guidelines across global and US universities, respectively, documenting institutional responses in terms of regulation, communication, and ethical oversight. Similarly, H. Wang et al. (2024) and Dai et al. (2024) offer region-specific insights into how universities in the US and Asia are adapting their governance frameworks and infrastructure to accommodate the rise of generative AI. Khlaif et al. (2024), although pedagogically oriented, are limited in scope to faculty perceptions regarding the use of generative AI in assessments, employing a UTAUT2 model to predict adoption behaviour rather than proposing structured pedagogical models.
In contrast, the present review fills a critical and underdeveloped gap by offering a review and synthesis of pedagogical interventions aimed at guiding the ethical and effective use of generative AI by students. Unlike previous research (Barus et al., 2025; Filgueiras, 2024; Oncioiu & Bularca, 2025) that focuses on what institutions are doing or planning in terms of governance, this review explores how educators, curriculum designers, and institutions can practically support student learning through research-informed interventions. Moreover, it moves beyond siloed or discipline-specific suggestions to develop an integrated, multi-dimensional framework that addresses curriculum, assessment, infrastructure, faculty development, and student engagement. As such, while complementary to these policy-oriented works, this review extends the knowledge base by translating institutional aspirations into actionable educational strategies. Central to this translation is a clear understanding of what constitutes the ethical use of generative AI in educational contexts, which forms the conceptual foundation for the framework proposed here.

2.2. What Is Ethical Use of Generative AI in Education?

Building on the study’s emphasis on turning institutional vision into practice, it is essential first to define the principles that underpin responsible AI integration in learning environments. The ethical use of generative AI in education refers to its integration in ways that uphold academic integrity (Balalle & Pannilage, 2025), promote transparency (Barus et al., 2025), respect student autonomy (Szabó & Szoke, 2024), mitigate bias (Michel-Villarreal et al., 2023), and foster critical engagement (Matsiola et al., 2024). Scholars emphasise that ethical AI use must align with the foundational values of academic work, ensuring that students use AI tools responsibly and not as a substitute for genuine learning (Chan, 2023). Transparency and accountability are central, requiring that students and educators clearly understand AI’s capabilities, limitations, and implications for authorship and evaluation (Memarian & Doleck, 2023). Ethical use also involves addressing algorithmic bias to avoid reinforcing inequality or misrepresentation in educational outcomes (Michel-Villarreal et al., 2023). Respecting learner autonomy means AI should not replace students’ intellectual effort but rather support and enhance their learning processes (Haroud & Saqri, 2025). Moreover, AI literacy is a critical dimension of ethical use as it equips learners and educators with the understanding needed to use AI judiciously, interpret its outputs critically, and maintain academic honesty (Kajiwara & Kawabata, 2024). Thus, ethical use is not just about compliance with rules but about cultivating a reflective and informed culture of engagement with AI that supports meaningful, inclusive, and equitable learning experiences. Having established the ethical principles that should guide AI integration, the next step is to consider how these principles translate into tangible pedagogical practices that ensure AI is not only used responsibly but also effectively in advancing learning goals.

2.3. What Is Effective Use of Generative AI in Education?

Anchored in the ethical framework outlined above, the effective use of generative AI in education refers to the purposeful, pedagogically aligned integration of AI tools that enhances learning outcomes, supports skill development, and complements existing instructional strategies. It involves leveraging AI’s capabilities, such as personalised feedback (Luckin et al., 2022), adaptive content generation (Chen et al., 2020), and real-time scaffolding, to enrich student engagement, deepen understanding (Yan et al., 2025), and promote higher-order thinking (Kasneci et al., 2023). Effectiveness is defined not by frequency of use, but by alignment with curricular goals (H. Wang et al., 2024) and the facilitation of authentic (Salinas-Navarro et al., 2024), transferable learning experiences. Studies have shown that effective AI use helps students develop critical and metacognitive skills when implemented through guided, reflective practices rather than passive consumption (Essel et al., 2024; Neshaei et al., 2025). Furthermore, effective use ensures that AI complements rather than substitutes cognitive effort, supporting learners in constructing knowledge through active inquiry (Jayasinghe, 2024). It also includes equitability and inclusivity, ensuring access to AI tools across diverse student populations and adapting use cases to specific disciplinary needs (Jayasinghe et al., 2025). Therefore, effectiveness in AI use is pedagogically intentional, ethically grounded, and contextually responsive, contributing to meaningful learning without undermining student agency or academic standards.

3. Review Methodology

This study adopts a systematic literature review approach to identify key themes in interventions designed to support the ethical and effective integration of generative AI in higher education. While review procedures have been widely operationalised in evidence-based fields such as medicine, their application in educational (and other social science) research often requires methodological adaptation because terminology and indexing practices vary across databases, and abstracts may be insufficiently standardised to support reliable eligibility decisions at the abstract-screening stage (Curran et al., 2007). To address these challenges and ensure transparency and rigour, this review follows a structured six-step methodology informed by established review frameworks in education (Cong-Lem et al., 2025; Urquhart et al., 2025). This process includes an initial scoping phase, the application of clear inclusion and exclusion criteria (Table 1), quality appraisal of selected studies, and a thematic synthesis of findings to develop a comprehensive framework of pedagogical interventions.

3.1. Phase 1—Mapping the Field Through a Scoping Review

The first phase involved conducting a scoping review using a traditional narrative approach to gain an initial understanding of the research landscape on generative AI use in higher education (J. Davis et al., 2014). This phase aimed to map the breadth of existing literature on educational interventions that support ethical and effective AI integration, while identifying key publications and recurring conceptual patterns. By examining highly cited articles and tracing their reference networks, the authors established a foundational understanding of prevailing perspectives, emerging concerns, and gaps in knowledge. This preliminary mapping informed the development of targeted search strings, relevant keywords, and inclusion/exclusion criteria for the subsequent review process.

3.2. Phase 2—Comprehensive Search

In the second phase, a comprehensive and systematic search was conducted to identify relevant academic literature on interventions supporting ethical and effective use of generative AI in higher education. When reviewing previous systematic literature reviews in the field of education, the authors observed a lack of consistency in the choice of databases used for literature identification. However, it was evident that SCOPUS and Web of Science featured prominently across a majority of these studies. Supporting this trend, Pranckutė (2021) highlights that SCOPUS and Web of Science continue to be the two most comprehensive databases for accessing publication metadata and impact indicators. Similarly, Mongeon and Paul-Hus (2016) argue that other databases have yet to demonstrate their viability as robust alternatives for conducting international bibliographic analysis.
The search term “ethical use of AI in education” was initially applied in SCOPUS, yielding 3662 results. From this, only journal articles were retained (1571), which were further refined using keywords such as “education,” “higher education,” “university,” and “university students” (447 results). Articles were then filtered to include only those published in journals focused on education, learning, and teaching (182), and further narrowed to Q1-ranked journals (96). Following this, abstracts were carefully reviewed to ensure the studies specifically addressed university students, resulting in a final shortlist of 57 articles suitable for data synthesis.
A similar procedure was followed in the Web of Science database. The initial search using the same term produced 1959 results, which were narrowed down to journal articles (1346). The scope was then limited to studies classified under the “Educational Research” category (480) and further refined to those explicitly referencing “higher education” (208). From these, Q1 journal articles were selected (114). After removing 24 duplicates already identified in SCOPUS, 90 articles remained. As with SCOPUS, abstracts were scrutinised for relevance, leading to a final set of 39 articles that focused specifically on university students and were deemed most appropriate for extracting codes relevant to the review’s data synthesis.
Altogether, 96 peer-reviewed journal articles, 57 from SCOPUS and 39 from Web of Science, were identified for inclusion in the final dataset.
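For transparency, the two screening funnels can also be expressed programmatically. The following Python sketch is purely illustrative: the stage labels and counts simply restate the figures reported above, and the small reporting helper is our own construction, not part of the review protocol.

```python
# Illustrative reconstruction of the two-database screening funnel.
# Counts restate Section 3.2; stage labels are descriptive only.

scopus_funnel = [
    ("initial search: 'ethical use of AI in education'", 3662),
    ("journal articles only", 1571),
    ("education/higher-education keyword refinement", 447),
    ("journals focused on education, learning, and teaching", 182),
    ("Q1-ranked journals (SJR)", 96),
    ("abstract screening for university-student focus", 57),
]

wos_funnel = [
    ("initial search: 'ethical use of AI in education'", 1959),
    ("journal articles only", 1346),
    ("'Educational Research' category", 480),
    ("explicit reference to 'higher education'", 208),
    ("Q1-ranked journals (SJR)", 114),
    ("24 SCOPUS duplicates removed", 90),
    ("abstract screening for university-student focus", 39),
]

def report(name: str, funnel: list) -> None:
    """Print each screening stage with the number of articles remaining."""
    print(name)
    for stage, remaining in funnel:
        print(f"  {remaining:>5}  {stage}")

report("SCOPUS", scopus_funnel)
report("Web of Science", wos_funnel)

final_total = scopus_funnel[-1][1] + wos_funnel[-1][1]
assert final_total == 96  # final dataset size reported in the review
print(f"Final dataset: {final_total} articles")
```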

3.3. Phase 3—Quality Assessment

To ensure the quality and relevance of the literature included in this review, the inclusion criteria were refined to include only articles indexed in SCOPUS and Web of Science. This decision follows the recommendations of Mongeon and Paul-Hus (2016), who argue that these databases remain the most reliable for conducting international bibliometric analyses, whereas some alternative sources can provide less standardised and less transparent coverage, with more variable metadata quality and reproducibility for bibliometric retrieval and comparison. Additionally, only Q1-ranked journal articles, as identified by the SCImago Journal Rank (SJR), were selected to ensure the academic rigour of the dataset (Maita et al., 2024). This approach struck a balance between selectivity and breadth, capturing high-impact contributions without overlooking relevant developments in the field.
Moreover, as part of the quality assessment process, a detailed manual screening of titles and abstracts was conducted to evaluate each article’s relevance to the study’s focus: interventions that support the ethical and effective use of generative AI by university students. Articles were shortlisted only if they explicitly addressed AI use in higher education and included student-focused or pedagogical components. Studies that did not engage with university learners, or that involved university learners but did not describe any AI-related intervention, were excluded. This screening process was essential to ensure that the selected literature directly informed the subsequent stages of concept identification, coding, and thematic synthesis.
The process used to identify studies for data extraction (phase 2) and ensure the quality of sources (phase 3) is shown in Figure 1 below.

3.4. Phase 4—Data Extraction

In Phase 4, the authors extracted and coded data using an iterative thematic analysis process guided by Braun and Clarke (2006) and Cong-Lem et al. (2025). Data extraction focused on text segments describing interventions, strategies, practices, or pedagogical approaches intended to support the ethical and effective use of generative AI in higher education. The unit of coding was a semantic “meaning unit” (typically a sentence or short paragraph) within each article that contained a discrete intervention-related idea.
Coding proceeded in multiple rounds. First, three authors independently coded an initial subset of studies to generate candidate codes and provisional definitions. The team then met to consolidate overlapping codes, refine definitions, and produce a shared coding guide. Second, the authors independently coded the remaining studies using this guide while allowing new codes to be added when warranted by the data; all additions and definition changes were documented. Third, the coded dataset was cross-checked through structured comparison meetings. Where discrepancies occurred, they were resolved through discussion and reference to the article text and inclusion criteria, resulting in a reconciled set of codes and a final code list.
This iterative coding process created a transparent foundation for subsequent theme development by ensuring that codes consistently captured intervention-related content across the dataset and were sufficiently defined to support synthesis into sub-themes and overarching themes in Phase 5.
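To make the unit of analysis concrete, a single coded meaning unit can be represented as a simple record. The sketch below is a hypothetical illustration: the field names are ours rather than those of the review’s coding instrument, and the example values are placeholders (the code label is one reported later, in Section 4.2).

```python
from dataclasses import dataclass

@dataclass
class CodedExtract:
    """One semantic 'meaning unit' coded during Phase 4."""
    article_id: str       # citation key of the source article
    extract: str          # sentence or short paragraph from the article
    code: str             # label from the shared coding guide
    code_definition: str  # agreed definition, refined across coding rounds

# Placeholder instance; the extract text is schematic, not a real quotation.
example = CodedExtract(
    article_id="Author et al., 2025",
    extract="(verbatim text segment describing a discrete intervention idea)",
    code="responsible AI use modules",
    code_definition="Teaching units that encourage reflective rather than "
                    "passive engagement with generative AI.",
)
```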

3.5. Phase 5 and 6—Data Synthesis and Write-Up

Following data extraction, the authors engaged in a rigorous synthesis process to organise the extracted codes into higher-level categories. As relationships and patterns among the codes were explored, distinct sub-themes began to emerge. These sub-themes reflected the collective understanding developed during the systematic literature review, capturing recurring conceptual and practical patterns in how generative AI is being integrated into higher education. The sub-themes were then further organised into overarching themes, each representing a key dimension of pedagogical or institutional interventions. To support this development, codes were grouped into candidate sub-themes and themes using thematic mapping. Themes were then reviewed and refined across multiple iterations by checking (i) coherence of meaning within each theme, (ii) clear distinctions between themes, and (iii) coverage across the dataset (i.e., support from multiple studies and multiple coded extracts). The thematic map and theme definitions were revised until the thematic structure was stable and could be consistently explained with representative extracts.
Table 1. A summary of the article inclusion criteria.

Characteristic | Inclusion Criteria
Publication medium | Peer-reviewed journal articles, Q1-ranked (SCImago Journal Rank)
Languages | English
Period | From 2023 to 2025
Research design | Conceptual, empirical (quantitative, qualitative, and mixed-methods), and systematic reviews
Content | Studies focusing on ethical and effective use of generative AI in higher education, specifically addressing university students and including student-focused or pedagogical interventions
Source | Scopus and Web of Science databases
To ensure robustness, the proposed sub-themes and overarching themes were reviewed and verified by two senior academic members with extensive expertise in teaching and learning. Their input provided an additional layer of validation, ensuring that the themes were both conceptually sound and pedagogically relevant. This iterative and collaborative approach enhanced the analytical rigour of the synthesis process. Ultimately, the qualitative synthesis produced a structured framework comprising 38 codes, 18 sub-themes, and 6 overarching themes. This framework, which integrates insights from across the reviewed literature, serves as the key outcome of this review: a consolidated intervention framework to guide ethical and effective student engagement with generative AI in higher education. The framework is presented in Table 2 below. The outcomes of the final phase, including the write-up and dissemination of findings, are discussed in the remainder of the paper.
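To illustrate the shape of this framework, one of the six themes can be sketched as a nested mapping of theme to sub-themes to codes, using the labels that Section 4.2 reports for curriculum integration. The data structure itself is ours and purely expository; Table 2 remains the authoritative presentation of the full framework.

```python
# Illustrative slice of the intervention framework (one theme of six),
# using sub-theme and code labels reported in Section 4.2.
curriculum_integration = {
    "AI literacy development": [
        "AI ethics education",
        "responsible AI use modules",
        "critical thinking development",
    ],
    "alignment with learning outcomes": [
        "ULO/CLO-aligned AI tasks",
        "discipline-specific AI applications",
    ],
    "pedagogical redesign": [
        "AI-informed teaching strategies",
        "collaborative learning with AI",
    ],
}

code_count = sum(len(codes) for codes in curriculum_integration.values())
print(f"{len(curriculum_integration)} sub-themes, {code_count} codes in this theme")
```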

4. Findings

This section begins with a descriptive overview of the 96 included articles, outlining publication trends, study types, methods, and geographic contexts. It then presents the core contribution of this review: six key intervention themes developed through thematic synthesis to guide ethical and effective student use of generative AI in higher education.

4.1. Overview of the Included Articles

This systematic literature review included 96 peer-reviewed journal articles published between 2023 and 2025. A steady increase in scholarly interest was observed over this three-year span: 8 articles were published in 2023, 41 in 2024, and 47 in 2025, indicating a growing academic focus on the ethical and effective use of generative AI in higher education.
Regarding the nature of the studies reviewed, the majority (n = 78) were empirical in design, while 12 were conceptual, and 6 were systematic literature reviews. Among the 78 empirical studies, mixed-methods research was the most prevalent approach (n = 34), followed by quantitative methods (n = 28), and qualitative studies (n = 16). This suggests a strong emphasis on triangulating data sources and methods to capture the multifaceted implications of generative AI use in university contexts.
The reviewed articles originated from institutions across 50 countries, reflecting a broad global engagement with the topic. The United States (n = 16), Australia (n = 9), China (n = 9), and Saudi Arabia (n = 8) were the most frequently represented nations, followed by the United Kingdom (n = 7), the United Arab Emirates (n = 5), and several others including India, Spain, and the Philippines. The geographical spread demonstrates that ethical and effective integration of generative AI in higher education is a global concern, with diverse cultural, pedagogical, and infrastructural perspectives contributing to the discourse.
Empirical data collection primarily targeted higher education populations, specifically university students and teachers, with notable representation from China, Saudi Arabia, and the USA (each with n = 8 studies), followed by the UK (n = 6), Hong Kong (n = 5), and Australia, India, and the Philippines (n = 4 each). This distribution underscores the international scope and relevance of AI integration across educational systems.
In sum, the reviewed literature reflects a growing, methodologically diverse, and globally distributed body of research, offering a robust foundation for constructing a comprehensive intervention framework. The prevalence of empirical and mixed-methods studies highlights the field’s commitment to grounding AI-related interventions in evidence-based pedagogical practice, while the international range of contributions provides a valuable cross-cultural lens on institutional readiness, ethical concerns, and pedagogical innovation.

4.2. Curriculum Integration Interventions

The first and most foundational intervention theme to emerge from data synthesis is curriculum integration interventions, which refers to deliberate pedagogical strategies aimed at embedding generative AI tools and competencies directly within the formal structures of teaching and learning. Unlike superficial or incidental uses of AI, this approach emphasises the intentional alignment of AI with curricular goals, ensuring that its adoption supports rather than undermines educational integrity, cognitive development, and disciplinary learning.
A critical aspect of curriculum integration is the embedding of AI literacy into higher education. The literature highlights increasing concern that students often engage with AI tools without adequate understanding of their ethical and social implications. In response, educators are introducing AI ethics education to address issues such as bias, transparency, and accountability in algorithmic decision-making (Bikanga Ada, 2024; Haroud & Saqri, 2025; Valdivieso & González, 2025; Z. Wang et al., 2025). Alongside this, responsible AI use modules are being developed to encourage reflective rather than passive engagement, often incorporating real-world tasks to guide appropriate usage (Acosta-Enriquez et al., 2024a; Alharbi, 2024; Corbin et al., 2025). Furthermore, the development of critical thinking is gaining traction as a complementary skill, helping students to evaluate AI outputs and question their applicability (Cao et al., 2025; Cowling et al., 2023; Fitzek & Bârgăoanu, 2025; Malik et al., 2023). These efforts collectively position AI literacy as more than technical competence, framing it as a foundation for ethical reasoning and informed professional practice.
Another significant aspect of curriculum integration is the alignment of AI use with learning outcomes. The literature revealed growing concern that unstructured or ad hoc use of AI can dilute or misalign with the intended learning objectives of a course. As a response, some educators are integrating Unit Learning Outcomes (ULOs) and Course Learning Outcomes (CLOs) directly with AI-based tasks to ensure constructive alignment (Hossain et al., 2025; Jin et al., 2025). In addition, discipline-specific AI applications are being employed to tailor AI integration in ways that respect the epistemological and methodological norms of different fields (Bearman & Ajjawi, 2023; Fleischmann, 2025; Huang et al., 2024; Usher & Barak, 2024). For example, AI might be used in business courses to simulate market analysis, in STEM courses to assist with data modelling, or in the humanities to prompt debates on authorship and originality. These applications reinforce learning outcomes rather than circumvent them, positioning AI as an enabler of deeper disciplinary engagement.
The final element within this theme is pedagogical redesign, an intervention strategy that reflects a shift in instructional practices to accommodate AI’s presence in the learning process. This includes adopting AI-informed teaching strategies such as using AI-generated content for critique, simulation, or peer evaluation exercises (Azcárate, 2024; Kutty et al., 2024; Nofal et al., 2025) and promoting collaborative learning with AI, where students work in teams to interpret, challenge, or build upon AI-generated work (Alzubi et al., 2025; Chu et al., 2025; Usher & Barak, 2024). These approaches do not merely accept AI as inevitable but use it to reimagine how students learn, co-construct knowledge, and engage in academic inquiry. Such interventions promote both metacognitive growth (Wass et al., 2023) and ethical discernment, encouraging students to act as active evaluators rather than passive consumers of AI outputs.
Collectively, the practices of AI literacy development (Southworth et al., 2023), outcome alignment (Pallant et al., 2025), and pedagogical redesign (Qian, 2025) form a coherent and comprehensive approach to curriculum integration. AI literacy development equips students with the ethical and critical competencies necessary to engage with AI tools responsibly. Aligning AI use with learning outcomes ensures that these tools support, rather than undermine, the achievement of educational goals. Pedagogical redesign, in turn, provides the instructional structures through which this literacy and alignment are enacted in practice. Together, these sub-themes capture the multidimensional nature of curriculum integration, moving beyond surface-level engagement with AI to encompass educational integrity, cognitive development, and disciplinary learning that uphold academic standards and promote effective student learning.

4.3. Policy and Governance Interventions

Another key intervention theme identified through data synthesis is policy and governance interventions, which refers to institutional strategies and regulatory frameworks designed to guide, monitor, and support the responsible use of generative AI within higher education settings. As AI tools increasingly penetrate teaching and learning environments, universities and colleges face growing pressure to articulate clear positions on their appropriate use (Chan, 2023). This intervention theme addresses the structural and procedural mechanisms that shape AI engagement at the organisational level, balancing innovation with accountability.
A primary focus within this intervention is the formulation of institutional AI policies. This sub-theme includes codes such as AI use regulations, academic integrity enforcement, and plagiarism prevention. AI use regulations are discussed in relation to data privacy, ethical chatbot usage, and responsible implementation, highlighting the need for clear, enforceable institutional frameworks (Ichikawa et al., 2025; Saihi et al., 2024). While some universities have issued formal guidelines, students expressed uncertainty about acceptable AI practices, revealing policy gaps (Zou & Huang, 2024). Academic integrity enforcement emerged as a central concern, particularly regarding unauthorised AI use in assessments and the difficulty of detecting AI-generated content. This raises questions about enforcement efficacy and student perceptions of misconduct (Cotton et al., 2024; Dolenc & Brumen, 2024; Qu & Wang, 2025). Plagiarism prevention further complicates these concerns, as generative AI blurs authorship boundaries and challenges traditional notions of originality (Acosta-Enriquez et al., 2024b; English et al., 2025; Sekli et al., 2024). Collectively, these codes highlight the complexity of governing AI in education and the necessity for comprehensive, adaptive policy responses.
Another critical sub-theme is transparent communication, which includes the development and dissemination of clear guidelines for both students and faculty regarding AI use. The extant literature identifies a significant gap in institutional direction, with both groups expressing uncertainty about acceptable practices involving tools like ChatGPT (Blahopoulou & Ortiz-Bonnin, 2025). Students highlighted the need for formal guidance on AI use in assignments, while faculty were shown to lack unified standards to inform their instruction and assessment strategies (Aldulaijan & Almalky, 2025). Moreover, there are recommendations for the creation of comprehensive, role-specific guidelines, as exemplified by the University of Hertfordshire’s call for “clear guidelines” (Mahrishi et al., 2024). Equally important is policy dissemination. Espartinez (2024) warns of ChatGPT’s misuse to evade plagiarism detection and stresses the need for proactive, accessible communication of evolving AI policies. Zembylas (2023) and Zhou et al. (2024) advocate for a decolonial approach to ethics, ensuring transparency supports equity, clarity, and shared responsibility across academic communities.
A third and increasingly vital sub-theme emerging in the literature is stakeholder involvement, characterised by two primary codes, namely, faculty–student collaboration and the creation of cross-departmental AI governance committees. Faculty–student collaboration reflects a shift towards co-constructive educational practices, where students are not merely recipients but active participants in shaping how AI tools are integrated into their learning environments (Cacho, 2024; Gupta & Jaiswal, 2024; Jayasinghe et al., 2025; Mariyam & Karthika, 2025). This engagement supports pedagogical responsiveness and fosters a shared sense of responsibility in navigating AI’s educational implications (Jayasinghe et al., 2025). In parallel, institutions are establishing cross-departmental AI governance committees to ensure that policymaking reflects the diverse needs of various academic disciplines (Gonsalves, 2025; Kim et al., 2025; Kutty et al., 2024). These governance structures encourage participatory decision-making and interdisciplinary dialogue, grounding AI-related policies in the lived realities of teaching and research. Thus, these two forms of stakeholder involvement enhance legitimacy, promote ethical engagement, and help embed AI practices that are contextually relevant and widely supported.
Taken together, these components, namely, policy development, transparent communication, and stakeholder involvement, form a holistic approach to institutional AI governance. Policy and governance interventions can therefore be defined as structured, institution-wide strategies that establish, communicate, and enforce guidelines for ethical AI use while fostering inclusive dialogue and collective responsibility. Each sub-theme contributes a distinct yet interrelated dimension: institutional policies define the rules, transparent communication ensures their accessibility and clarity, and stakeholder involvement enhances their relevance and acceptance. This layered structure captures the complexity of AI governance, moving beyond disciplinary enforcement to embrace a model that is educative, participatory, and adaptive to disciplinary contexts.

4.4. Faculty Development and Support

The third intervention theme identified through the literature review is faculty development and support, which encompasses institutional strategies designed to prepare, equip, and sustain educators in their efforts to integrate generative AI into teaching and learning. As the capabilities and presence of AI tools grow rapidly, faculty members are under increasing pressure to adapt their pedagogical approaches (Lyu et al., 2025), develop new competencies, and navigate evolving ethical, technical, and instructional challenges. This intervention theme acknowledges that without robust faculty development, efforts to meaningfully integrate AI into higher education are unlikely to succeed or scale effectively. In other words, faculty development and support are essential for promoting the ethical and effective use of generative AI by students.
A key dimension of this intervention involves professional training, reflected in the codes AI pedagogical skills workshops and ethics training for educators. The literature frequently notes a gap between institutional expectations for AI integration and the actual preparedness of academic staff to meet those expectations. Faculty often report feeling uncertain about how to use AI effectively in course design, assessment, and student engagement. In response, institutions have begun offering targeted professional development programmes aimed at building educators’ capacity to understand, evaluate, and apply AI tools in pedagogically meaningful ways (Alharbi, 2024; Cai et al., 2025; Okamoto et al., 2025). Ethics training is particularly important, as it enables faculty to model and teach responsible AI use while aligning their practices with institutional values and academic integrity standards (Genovese et al., 2025; Merzifonluoglu & Gunes, 2025; Rawas, 2024). These training opportunities help move faculty from passive observers of technological change to active agents of pedagogical innovation.
Another essential sub-theme is resource provision, which includes the codes, access to AI tools and instructional materials for AI integration. While training offers conceptual grounding, resource provision ensures that faculty have the practical tools necessary to implement what they learn. Several studies highlight the importance of equitable access to licensed AI platforms, curated digital content, case studies, and lesson plan templates tailored for AI-enhanced teaching (Abduljawad, 2024; Strzelecki & ElArabawy, 2024; Summers et al., 2024). These resources reduce the time and cognitive load required for faculty to experiment with and adopt AI in their courses. Moreover, access to such materials signals institutional support and commitment to AI integration, encouraging wider faculty participation and innovation across disciplines (Chaaban et al., 2024; Li et al., 2025; Mzwri & Turcsányi-Szabo, 2025; Rasul et al., 2023).
The third sub-theme, ongoing support, reflects a recognition that faculty development is not a one-time event but a sustained process. The codes AI integration coaching and communities of practice illustrate efforts to provide continuous, collaborative, and context-sensitive guidance to educators. Coaching models pair faculty with instructional designers or AI-literate mentors who can offer personalised support in redesigning activities or troubleshooting implementation challenges (Chaaban et al., 2024; Sarıkahya et al., 2025). Simultaneously, communities of practice create spaces for peer-to-peer learning, interdisciplinary dialogue, and the co-construction of best practices (Barus et al., 2025; Bearman & Ajjawi, 2023; Cacho, 2024). These initiatives foster a culture of shared learning and collective growth, enabling institutions to evolve alongside the rapidly changing AI landscape.
Taken together, professional training, resource provision, and ongoing support form an integrated framework for empowering educators in the age of generative AI. Faculty development and support can thus be defined as a set of institutional initiatives aimed at enhancing educators’ pedagogical, ethical, and technical capacity to integrate AI into their teaching in ways that are contextually relevant and educationally sound. Each sub-theme contributes to a different stage of the faculty development journey: professional training builds foundational knowledge and confidence, resource provision enables practical application, and ongoing support sustains long-term innovation and adaptation. Without these interventions, the broader aims of ethical, effective, and scalable AI use in higher education risk remaining aspirational rather than actionable.

4.5. Student-Centred Interventions

The fourth major theme emerging from the literature is student-centred interventions, which refers to pedagogical and institutional strategies specifically designed to shape how students engage with Generative AI tools. Unlike broader structural or faculty-oriented approaches, these interventions focus directly on cultivating students’ ethical awareness, technical competencies, and reflective capabilities in AI-supported learning environments, while remaining attentive to the diversity of student backgrounds and their access to AI tools. As students are often the primary users of AI technologies in academic contexts, interventions that address their understanding, behaviour, and decision-making are essential for ensuring that AI is used constructively, responsibly, and in alignment with educational values.
A central sub-theme within this intervention is ethical AI use education, reflected in the codes workshops on responsible AI and promoting academic honesty. Many studies emphasise the need to explicitly teach students what constitutes ethical and unethical use of AI tools. While institutional policies provide the formal rules, these interventions aim to internalise those principles through active learning. Workshops and seminars offer opportunities to explore case studies, simulate ethical dilemmas, and clarify grey areas such as acceptable collaboration or AI-assisted editing (Behrens et al., 2024; Duan et al., 2025; Mumtaz et al., 2025). These activities not only reinforce institutional expectations but also promote a deeper understanding of academic integrity in the context of emerging technologies (Bills, 2025; Jabar et al., 2024; Maheshwari, 2024; Z. Wang et al., 2025). Encouraging students to consider the ethical dimensions of their AI use fosters a sense of accountability and professional responsibility that extends beyond the classroom.
The second sub-theme, skill development, includes the codes AI tool proficiency training and critical evaluation of AI outputs. The literature increasingly recognises that effective AI use requires more than access; it demands skill. Students must learn how to use AI tools confidently and competently, which includes understanding prompt engineering, interpreting AI-generated results, and recognising the limitations or biases inherent in these systems (Dolenc & Brumen, 2024; Haq et al., 2025; Mahrishi et al., 2024). Equally important is the ability to critically evaluate AI outputs rather than accepting them at face value. Interventions in this category focus on training students to assess the credibility, relevance, and appropriateness of AI-generated content (Alghazo et al., 2025; Huang et al., 2024; Silvola et al., 2025). By doing so, they move students from passive consumers to active, discerning users of AI, a key distinction for fostering meaningful learning and reducing overreliance.
The third sub-theme is reflective practices, which includes guided reflection on AI use and AI impact discussions. These interventions are grounded in pedagogical theories of metacognition and transformative learning, which suggest that students learn more deeply when they are encouraged to reflect on their experiences. Guided reflection exercises might ask students to journal about how and why they used AI in an assignment, what they learned from the process, and how their thinking evolved (Caccavale et al., 2024; Mohamed et al., 2025; Xu et al., 2025). Classroom discussions about the broader impacts of AI on education, society, and future careers invite students to engage critically with the technology’s role in their lives (Alshamy et al., 2025; Sekli et al., 2024; Urban et al., 2024). These reflective practices not only support personal growth but also foster the kind of ethical and civic thinking that higher education aims to cultivate.
Together, these sub-themes, namely, ethical education, skill development, and reflective practice, form a cohesive framework of student-centred interventions. This intervention type can be defined as a set of pedagogical strategies that directly engage students in developing the knowledge, skills, and ethical dispositions needed to use generative AI effectively and responsibly in their academic work. The structure of this theme reflects a holistic understanding of student learning: ethical education provides the why, skill development provides the how, and reflective practice helps students make sense of what their AI use means. This layered approach recognises students not just as users of AI, but as critical participants in shaping its educational impact. As such, these sub-themes logically and pedagogically belong under a single main theme, offering a student-focused pathway toward ethical and empowered AI engagement in higher education.

4.6. Assessment Adaptation Interventions

The fifth intervention theme identified through the review is assessment adaptation interventions, which encompasses strategic modifications to assessment design, evaluation practices, and feedback mechanisms in response to the growing presence of generative AI in higher education. As AI tools become increasingly capable of generating essays, solving problems, and simulating human expression, traditional assessment methods face challenges to validity, originality, and fairness. This theme captures the range of pedagogical and institutional responses aimed at preserving academic integrity while leveraging AI to enhance learning and assessment outcomes.
A key sub-theme within this category is AI-resilient assessment design, which includes the codes authentic assessments and AI detection mechanisms. The literature strongly supports the shift toward authentic assessments: tasks that are complex, context-rich, and reflective of real-world scenarios, which make AI-generated responses less relevant or useful (Chan & Zhou, 2023; Haroud & Saqri, 2025; Ozfidan et al., 2024). Authentic tasks often require personal reflection, iterative problem-solving, or direct application to specific contexts, which AI tools may struggle to replicate convincingly. At the same time, institutions are also exploring the use of AI detection software and plagiarism-checking tools specifically designed to identify AI-generated content (Alzubi et al., 2025; Chan & Lee, 2023; Farinosi & Melchior, 2025). While detection technologies are still evolving and not always reliable, they play a role in signalling the institution’s commitment to academic honesty and in supporting educators’ ability to uphold standards.
The second sub-theme, feedback and monitoring, comprises two key codes, student-centred AI feedback mechanisms and formative feedback on AI use. Student-centred AI feedback mechanisms involve tasks like reflection prompts, AI-use reports, or annotated submissions, encouraging students to critically evaluate how and why they used generative AI tools (Rawas, 2024; Tossell et al., 2024). These strategies foster transparency, ethical awareness, and responsible decision-making. In parallel, formative feedback on AI use refers to the guidance educators provide on the appropriateness and effectiveness of AI integration in student work (Chaaban, 2025; Medina-Gual & Parejo, 2025; Rienties et al., 2025). Rather than penalising use, this feedback supports ongoing learning by helping students refine their academic judgement. Together, these interventions create a feedback culture where AI is addressed openly and constructively, supporting student growth while upholding academic standards. This approach promotes ethical engagement and deeper learning in AI-enhanced educational contexts.
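As one hypothetical illustration of such a mechanism, an AI-use report attached to a submission might capture fields like the following. This template is ours, offered only to make the idea tangible; it is not a form prescribed by the cited studies.

```python
# Hypothetical AI-use report a student might attach to a submission.
ai_use_report = {
    "tool_used": "ChatGPT",
    "purpose": "brainstorming counter-arguments for the essay",
    "prompts_summary": "asked for objections to my central claim",
    "output_handling": "rewrote all AI suggestions in my own words",
    "reflection": "the tool surfaced a perspective I had not considered",
}
```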
The third sub-theme, alternative evaluation methods, is reflected in the codes portfolio assessments and oral examinations. These formats are increasingly proposed as solutions to the challenges posed by AI-generated work. Portfolio-based assessments encourage students to document their learning journey across multiple stages, often requiring metacognitive reflection and evidence of process rather than just final products (Aladsani, 2025; An et al., 2025; Jin et al., 2025). Oral examinations, meanwhile, create opportunities for real-time interaction and spontaneous reasoning, making them resistant to AI impersonation (Aladsani, 2025; Niloy et al., 2024; Ou et al., 2024). While these methods are more resource-intensive, they offer greater authenticity, allow educators to verify individual understanding, and reduce the incentive or opportunity to misuse AI tools.
Together, these three sub-themes, AI-resilient design, feedback and monitoring, and alternative evaluation methods, form a comprehensive approach to rethinking assessment in an AI-enhanced academic environment. Assessment adaptation interventions can therefore be defined as intentional modifications to assessment design and practices aimed at maintaining academic integrity, promoting authentic learning, and supporting responsible student engagement with AI tools. Each sub-theme addresses a distinct component of the assessment process: design (what students are asked to do), evaluation (how their work is interpreted), and structure (how their understanding is demonstrated). Their combined focus reflects a shift from merely controlling AI use to pedagogically adapting to it, transforming assessment from a site of risk into an opportunity for innovation and deeper learning. This interconnectedness justifies their classification under a single intervention theme, offering educators a flexible yet principled framework for assessment in the age of generative AI.

4.7. Technology and Infrastructure Interventions

The sixth and final intervention theme identified through data synthesis is technology and infrastructure interventions, which encompasses the technical, logistical, and institutional foundations necessary to support the responsible and effective integration of generative AI in higher education. While pedagogical and policy-focused strategies often dominate the dialogue, the success of AI-enhanced learning also hinges on the availability of supportive infrastructure that ensures equitable access, seamless integration, and ethical data practices. This theme addresses the often-overlooked but critical backend systems and institutional commitments that enable meaningful AI adoption.
The first sub-theme contributing to this main theme is tool accessibility, captured through the codes providing AI platforms and ensuring equitable access. First, it requires providing access to quality generative AI platforms, such as Canva, Grammarly, and ChatGPT, that are reliable, educationally aligned, and institutionally supported (Rasul et al., 2023; Strzelecki & ElArabawy, 2024; Swidan et al., 2025). Institutions that offer campus-wide licences reduce barriers to experimentation and foster meaningful engagement. Second, it involves ensuring equitable access by addressing deeper structural issues (Al-Zahrani, 2024; Nofal et al., 2025; Park & Milner, 2025). These include socioeconomic inequalities, varied digital literacy levels, and infrastructural limitations that disproportionately impact certain student groups. True accessibility means that all students and staff not only have the tools but also the necessary training and support to use them effectively. Promoting equity in access is not just a functional concern; it is a matter of educational justice, inclusion, and fair participation in the evolving digital landscape.
The second sub-theme, integration with Learning Management Systems (LMS), includes the codes embedding AI tools into LMS and technical support. The reviewed literature points to growing institutional efforts to connect AI applications directly with existing digital learning ecosystems (Jin et al., 2025; Rasul et al., 2023). By embedding AI tools within LMS platforms like Moodle, Blackboard, or Canvas, institutions can streamline student engagement, reduce the cognitive load of managing multiple platforms, and foster a more cohesive user experience. Equally important is the provision of dedicated technical support to assist educators and learners in navigating these tools (Li et al., 2025; Malik et al., 2023; Sekli et al., 2024). Without reliable help desks, tutorials, or on-demand troubleshooting services, even the most advanced tools risk going underused or misapplied. This sub-theme thus reflects the importance of usability and institutional investment in operational infrastructure.
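In practice, such embedding is commonly achieved through interoperability standards such as LTI 1.3, under which a tool and an LMS exchange registration details before launches are permitted. The sketch below outlines, with placeholder values, the kind of registration record involved; exact field names and flows vary by platform, so this should be read as an assumption-laden illustration rather than a recipe.

```python
# Hypothetical LTI 1.3-style registration for embedding an AI tool into an
# LMS such as Moodle, Blackboard, or Canvas. All values are placeholders.
ai_tool_registration = {
    "issuer": "https://lms.example.edu",           # the LMS platform
    "client_id": "ai-assistant-001",               # assigned by the platform
    "deployment_id": "faculty-of-education-2026",  # assigned by the platform
    "oidc_login_url": "https://aitool.example.com/login",
    "target_link_uri": "https://aitool.example.com/launch",
    "jwks_url": "https://aitool.example.com/.well-known/jwks.json",
}
```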
The final sub-theme is data privacy and security, represented by two distinct codes, secure data handling and compliance with ethical standards. The code secure data handling reflects the growing institutional responsibility to protect user data as generative AI tools collect, process, and store information from students and staff (An et al., 2025; Barus et al., 2025; Genovese et al., 2025). The synthesised findings highlight the importance of implementing encrypted storage systems, anonymisation procedures, and routine audits to minimise the risk of data exposure. These technical safeguards form the backbone of any responsible AI infrastructure, ensuring that the benefits of AI adoption are not undermined by preventable security breaches. The code compliance with ethical standards emphasises the alignment of AI use with legal frameworks such as the General Data Protection Regulation (GDPR), and with institutional ethics policies (Cai et al., 2025; Essien et al., 2024; Yusuf et al., 2024). The literature consistently points to the necessity of transparent communication about how AI tools collect, process, and utilise user data. Such transparency is crucial for ensuring informed consent and maintaining user trust. Together, these codes establish the foundational conditions required for ethically sound and risk-aware implementation of generative AI tools in higher education settings.
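As a minimal sketch of one such safeguard, the snippet below pseudonymises student identifiers with a keyed hash before AI interaction logs are stored. It is a hypothetical illustration of the anonymisation procedures described above, assuming an institution-held secret key, and is not a complete compliance solution.

```python
import hashlib
import hmac

# Institution-held secret; in practice, loaded from a secrets manager.
PSEUDONYM_KEY = b"replace-with-institutional-secret"

def pseudonymise(student_id: str) -> str:
    """Replace a student identifier with a keyed (HMAC-SHA256) hash.

    Keyed hashing prevents re-identification by anyone who lacks the
    institutional key, unlike a plain, unsalted hash of the ID.
    """
    return hmac.new(PSEUDONYM_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:16]

# No raw identifier leaves the institution's systems.
log_entry = {
    "user": pseudonymise("s1234567"),
    "tool": "generative-ai-assistant",
    "event": "prompt_submitted",
}
print(log_entry)
```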
Together, these sub-themes, namely, tool accessibility, LMS integration, and data privacy, form a coherent and critical foundation for ethical and effective AI use in academic environments. Technology and infrastructure interventions can therefore be defined as institutional actions and systems that ensure the availability, usability, and ethical deployment of AI technologies in higher education. Each sub-theme contributes to a different dimension of infrastructure: access (who can use the tools), integration (how tools are embedded into learning systems), and protection (how data are handled). Their combined implementation ensures that AI is not only pedagogically sound and ethically grounded, but also practically feasible and equitably available. This layered structure justifies their classification under a single intervention theme, highlighting the idea that robust infrastructure is not merely a technical necessity, but a foundational enabler of responsible and inclusive AI-enhanced education.

5. Discussion

Based on the findings of this review, several novel theoretical implications can be proposed to advance research on interventions aimed at improving the ethical and effective adoption and integration of generative AI by students in higher education. Moreover, the findings have several key implications for practice. This section discusses those implications.

5.1. Theoretical Implications

This review contributes substantively to the theoretical discussion surrounding the integration of generative AI in higher education by offering a structured, research-informed framework of interventions that directly engages with the pedagogical, institutional, and infrastructural challenges presented by AI technologies. It addresses a clear gap in the literature: to the best of the authors' knowledge, no prior research has proposed a cohesive intervention framework designed to support ethical and effective student engagement with generative AI tools while preserving educational outcomes in terms of knowledge, skills, academic integrity, and cognitive development. This review therefore delineates a new boundary within the theoretical landscape, one that provides a launch point for future empirical exploration and theoretical refinement. It enables subsequent scholarship to move beyond scattered, discipline-specific explorations toward a more coherent understanding of how interventions can be systematically deployed across educational contexts.
Furthermore, the proposed framework calls for cross-disciplinary collaborative research to deepen understanding of generative AI integration in higher education. While technology adoption theories such as diffusion of innovation (Rogers, 1995) can inform the uptake of AI tools, implementation also intersects with a range of other disciplinary concerns. For behavioural dimensions, theories such as the technology acceptance model (TAM) (F. D. Davis, 1989) and the theory of planned behaviour (Ajzen, 1991) can help explore student and staff intentions, motivations, and usage patterns. Governance-related issues, including policy and ethical oversight, can draw from theories such as institutional theory (North, 1990) to examine how organisational norms and structures shape AI integration. Another key element that affects the success of the proposed framework is leadership and organisational readiness, which may benefit from frameworks such as transformational leadership theory (Hechanova & Cementina-Olpoc, 2013), which addresses strategic vision and the facilitation of change. In the domain of professional development, adult learning theory offers insights into effective training approaches for academic staff. For curriculum and assessment development (as components of instructional design), TPACK (Koehler & Mishra, 2009) and constructive alignment theory (Biggs, 1996) can guide educators in aligning AI tools with disciplinary content, pedagogical goals, and intended learning outcomes. These theoretical touchpoints reveal the complexity of implementing the framework in practice and highlight the need for interdisciplinary research across education, psychology, leadership studies, information systems, and ethics. The framework thus provides not only a structural foundation, but also a research agenda spanning multiple domains.
Importantly, the framework also provides a theoretical counterpoint to dominant narratives that frame generative AI as a threat to cognitive depth, critical thinking, and independent learning. Critics frequently argue that AI tools promote superficial engagement by automating core cognitive tasks (Gerlich, 2025). However, the findings of this review, highlighting interventions such as guided reflection on AI use (Ou et al., 2024; Park & Milner, 2025), critical evaluation of AI outputs (Kramm & McKenna, 2023), and AI-informed teaching strategies (Bearman & Ajjawi, 2023), suggest that when AI is embedded within pedagogically sound structures, it can act as a scaffold rather than a crutch. Theoretically, this challenges the view of AI as a passive tool or an agent of academic erosion and instead positions it as a potential co-participant in the learning process, capable of enhancing metacognitive awareness, ethical reasoning, and disciplinary inquiry when used under structured guidance. Thus, this perspective invites a rethinking of cognitive theories of learning transfer in technologically augmented contexts.
The review also engages with the broader institutional tension in the theoretical discussion, where some universities have embraced AI, while others have resorted to restrictive policies and outright bans (Lin et al., 2024). The findings demonstrate that sufficient scholarly insight already exists to support a more proactive and educative stance on AI integration. By synthesising interventions across areas such as curriculum alignment, faculty development, assessment redesign, and data governance, this review challenges the theoretical basis for reactionary institutional responses, arguing instead for a shift toward anticipatory governance and capacity building. This positions the review within a strand of theory that calls for educational institutions to evolve not merely as gatekeepers of knowledge, but as facilitators of ethical and adaptive engagement with emerging technologies (Nguyen, 2025).
Perhaps the most significant theoretical implication of this review is the challenge it poses to established teaching and learning theories. The nature of the proposed interventions, especially those that promote student-led inquiry, peer interaction, and collaboration with AI tools, raises critical questions about whether current pedagogical models remain adequate. For instance, constructivist learning theory (Narayan et al., 2013) assumes learners build knowledge through direct experience and social interaction, but AI-generated content introduces new forms of knowledge construction that are partially mediated by non-human agents. Similarly, social learning theory (Bandura & Walters, 1971), which focuses on learning through observation and modelling, may need to be re-examined when AI becomes a learning partner or model. The cognitive load theory (Sweller, 2011), which guides instructional design by managing mental effort, must also be reconsidered in light of AI’s ability to automate or simplify cognitive tasks. Moreover, didactic teaching models that rely on content delivery are increasingly misaligned with a context that encourages learners to engage critically and independently with AI. If education is to remain relevant and effective, these foundational theories must be revisited and adapted to account for human-AI co-agency, shifting teacher roles, and the evolving nature of knowledge, learning processes, and assessment within AI-augmented environments.
Finally, the framework carries significant implications for theories of educational equity and justice. While prior literature rightly points out that generative AI has the potential to exacerbate existing inequalities, particularly in terms of access and digital literacy, the interventions identified in this review point toward a theoretical reframing of equity that includes algorithmic literacy, platform accessibility, and ethical data practices as essential components. For example, sub-themes such as tool accessibility and transparent communication demonstrate that equity in AI-enhanced education must extend beyond the material provision of devices or bandwidth. Instead, it must be conceptualised as the ability of all learners to meaningfully, ethically, and confidently engage with AI tools in ways that support their academic and professional growth. This insight contributes to a broader re-theorisation of inclusion, demanding that equity frameworks in education incorporate digital agency, informed consent, and participatory governance as fundamental pillars.
In summary, this review does not merely identify what interventions exist in the literature; it invites a rethinking of how educational theory must evolve in the face of generative AI. It challenges existing pedagogical assumptions, reframes institutional priorities, and offers a cohesive framework that can serve as both a practical guide and a theoretical platform for redefining learning in the AI age. These contributions make clear that theories of learning, teaching, and institutional governance must now contend with the presence of generative AI not as an anomaly, but as a central and enduring element of contemporary higher education.

5.2. Practical Implications

The findings of this review not only identify existing intervention practices but also challenge the implicit assumptions underpinning current pedagogical and institutional responses to generative AI in higher education. Several critical, practical implications emerge that demand a reorientation of how universities conceive of teaching, assessment, governance, and infrastructure in a post-generative AI landscape.
A key practical implication of this review is that while the six intervention themes provide a universal framework, their successful implementation requires discipline-specific adaptation to ensure relevance and impact. STEM disciplines can embed AI in data analysis, simulation, and modelling exercises; the humanities and social sciences can integrate it into critical discourse, historiographical interpretation, and ethical debate; creative arts can use AI for ideation and prototyping while fostering reflection on originality and authorship; and professional programmes such as law, business, or medicine can apply AI in case-based learning, regulatory analysis, and ethical decision-making simulations. This tailored approach ensures that interventions reinforce core disciplinary learning outcomes and professional standards, rather than imposing generic practices. It also highlights the need for cross-departmental collaboration so that operationalisation respects diverse epistemologies while remaining consistent with institutional policy, thereby maximising both pedagogical value and academic integrity.
Second, the review exposes the risk of operational inertia in educational institutions. Although the framework outlines specific intervention types, its practical implication is not merely additive (e.g., add workshops or rewrite policies). Rather, it calls for a reengineering of institutional workflows. For example, the segmentation of curriculum design, faculty development, student support, and policy creation, typically managed by different units, proves inadequate in the face of AI’s cross-cutting nature. Institutions must adopt cross-functional implementation models, bringing together learning designers, academic integrity officers, IT teams, and student representatives not as parallel contributors but as co-owners of AI integration. Without this structural realignment, even well-designed interventions risk fragmentation or contradictory messaging.
Third, the findings imply that institutions should invert their current logic of AI management. Most universities focus on detecting misuse and enforcing compliance. However, the framework demonstrates that the conditions enabling ethical, productive AI use are best cultivated upstream, at the design and culture level, not downstream in disciplinary or punitive measures. This reframing requires institutions to shift from surveillance to design-thinking: how can assessment be structured to encourage transparency? How can AI use be normalised in ways that reduce incentive for misconduct? This challenges institutional leadership to pivot from AI as a threat to be managed toward AI as a pedagogical design challenge requiring creative foresight.
Fourth, the assessment-related findings suggest that universities need to rethink what counts as valid evidence of student learning. If AI tools can produce high-quality written work, then assessments should focus more on how students arrived at their answers, not just on the final product. This marks a major shift in how learning is measured. It is not just about adding new formats such as portfolios or oral exams; it is about recognising that we can no longer assume a student wrote something simply because their name is on it. Instead, educators will need to look for other signs of learning, such as reflective writing, records of how students used AI, and the step-by-step development of ideas. Making this change will require training teachers, redesigning grading rubrics, changing how work is submitted, and rethinking how feedback is given. It is a broad shift in assessment culture that responds to the realities of learning alongside AI.
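As a purely illustrative aid, the sketch below shows one hypothetical shape such a record of AI use might take if captured alongside a submission; the field names are assumptions rather than any established standard.

```python
# Illustrative sketch only: one possible data structure for the "records of
# how students used AI" mentioned above, attached to a submission so markers
# can see the process behind the product. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseDeclaration:
    tool: str              # e.g., "ChatGPT", "Gemini"
    purpose: str           # e.g., "brainstorming", "grammar check"
    prompt_summary: str    # student's own account of what was asked
    output_handling: str   # how the output was adapted or verified
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

declaration = AIUseDeclaration(
    tool="ChatGPT",
    purpose="brainstorming essay structure",
    prompt_summary="Asked for three ways to organise an argument on AI ethics.",
    output_handling="Kept one suggested structure; wrote all prose myself.",
)
print(declaration)
```

Even a lightweight structure of this kind shifts the assessed artefact from a single finished text towards a documented process, which markers can triangulate with reflective writing and drafts.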
Fifth, the findings imply that many interventions will fail unless institutions confront the invisible labour and cognitive overload placed on faculty. Expecting educators to redesign curricula, upskill technically, adopt new assessment strategies, and navigate complex policy terrain without dedicated time or incentives reflects a profound practical disconnect. Hence, institutional leaders must embed AI adaptation into workload models, promotion criteria, and funding streams. Otherwise, the most innovative practices will remain confined to a few enthusiastic outliers. This insight shifts the conversation from tool training to systemic capacity-building, where meaningful adoption depends on institutional reward structures, not individual voluntarism.
Finally, while the framework addresses equitable access, its deeper practical implication is that AI integration cannot be equitable if it is neutral. Equity requires intentional asymmetry: targeted efforts to uplift digitally marginalised students, whether through culturally responsive AI tools, localised support structures, or proactive outreach. Simply "making tools available" does not close the digital divide. This means institutions must treat AI equity as a matter of social justice, embedding targeted interventions in enrolment strategies, student support services, and faculty expectations. Without such intentional asymmetry, AI integration risks replicating and amplifying existing educational inequalities.
In summary, the practical implications of this review do not lie in simply applying the six intervention themes in isolation. Instead, the review compels institutions to reconceptualise how AI-related change is governed, implemented, and experienced. It signals a shift away from ad hoc strategies toward system-level rethinking of academic work, institutional ethics, and the pedagogical assumptions that underlie university teaching. Those who fail to engage with this complexity risk not only pedagogical irrelevance, but also ethical erosion and strategic stagnation in an era increasingly shaped by generative AI.

5.3. Limitations of the Study and Suggestions for Future Research

Despite these contributions, several limitations must be acknowledged. The review relies solely on studies published in Q1-ranked journals indexed in Scopus and Web of Science, which, while ensuring quality, may exclude valuable insights from grey literature, emerging contexts, or alternative epistemologies. Nevertheless, the findings remain credible, supported by the theoretical and practical implications developed through this research. Furthermore, although the thematic synthesis offers a robust analytical structure, it does not test the effectiveness of the proposed interventions in real educational settings. The review also stops short of providing discipline-specific application models, despite acknowledging that generative AI use varies significantly across academic fields. Moreover, the proposed thematic framework is primarily descriptive: it organises intervention dimensions but does not empirically test or specify causal or directional relationships among the themes. Additionally, the rapid evolution of generative AI technology may render certain recommendations temporally limited unless they are continuously updated and empirically revalidated. Finally, a critical limitation arises from the geographic and economic distribution of the included studies. High-income contexts dominate both researcher affiliations and data collection sites, accounting for approximately 45–50% of empirical contributions. While some representation exists from lower-middle-income countries such as India, the Philippines, and Bangladesh, no studies involved data collection in low-income countries. This reveals a significant blind spot in the literature and raises concerns about the global applicability and inclusiveness of the proposed framework.
Future research should therefore focus on empirically piloting the proposed interventions across diverse institutional contexts to evaluate their impact on learning outcomes, academic integrity, and student engagement. Moreover, as noted in the practical implications section, the proposed framework is a universal model; future research could therefore examine how discipline-specific adaptations enhance its relevance, effectiveness, and alignment with professional standards across varied academic contexts. In addition, future research could test and refine the relationships among the framework's themes (e.g., how policy, assessment design, and pedagogical supports interact) through theory-building and empirical validation, moving from a descriptive synthesis toward a more explicitly relational model. In all contexts, longitudinal studies are especially needed to assess how AI use evolves over time and whether interventions sustain ethical engagement or become routinised workarounds. Moreover, interdisciplinary inquiry is crucial, blending insights from computer science, education, ethics, and sociology to fully capture the multidimensionality of AI's impact on higher education. Future research could also explore the role of external bodies such as the UK's Quality Assurance Agency (QAA) and AdvanceHE in shaping, endorsing, or harmonising institutional AI integration strategies. Examining how such organisations' guidelines influence policy adoption and pedagogical practice would provide valuable insights into sector-wide coherence and quality assurance. Finally, researchers must address the current equity gap by including voices, experiences, and challenges from low-income and marginalised regions. Culturally responsive and context-specific frameworks are urgently needed to ensure that AI integration does not reinforce existing educational inequities but instead becomes a tool for inclusion, accessibility, and empowerment on a global scale.

6. Conclusions

This review set out to achieve two primary objectives: (1) to systematically review and synthesise the relevant literature in order to identify, code, and organise key concepts into a framework of pedagogical and institutional interventions for ethical and effective student use of generative AI that is both comprehensive (capturing the institutional levers most consistently identified in the literature for shaping student AI use) and cohesive (organising those levers into a single, non-overlapping structure in which each intervention type addresses a distinct level of action); and (2) to offer actionable recommendations for educators and institutions navigating this emerging technological terrain. Through a rigorous six-phase review of 96 peer-reviewed journal articles, both objectives were met. The review identified six overarching intervention categories, namely, curriculum integration, policy and governance, faculty development, student-centred strategies, assessment adaptation, and technological infrastructure, each comprising multiple sub-themes and operational codes. These findings were synthesised into a coherent intervention framework that directly responds to the current fragmentation in the literature and offers a structured platform for practical and theoretical advancement.
Critically, this review contributes to the progressive discussion on generative AI in higher education by moving beyond institutional policy descriptions and normative debates. It constructs a bridge between abstract principles and classroom realities, offering educators a research-informed roadmap for ethical and impactful AI integration. In doing so, the review invites a shift from reactive, compliance-based models to proactive, pedagogy-led approaches. It underscores the need for cross-functional collaboration, intentional curriculum design, and systemic support to foster not only responsible AI use but also deeper, more reflective student learning. By addressing ethical risks and practical complexities simultaneously, the review helps reposition generative AI not as a threat to higher education but as a pedagogical resource that, if correctly channelled, can support its core values.
In conclusion, this review provides a foundational intervention framework that addresses an urgent gap in both practice and theory. It does not claim to offer final answers but rather inaugurates a structured, evidence-based conversation about how higher education can evolve ethically and pedagogically in the age of generative AI. As institutions grapple with this paradigm shift, such frameworks are essential for guiding thoughtful, inclusive, and effective change.

Author Contributions

Conceptualization, S.J. and K.A.A.G.; Methodology, S.J. and K.A.A.G.; Formal analysis, S.J., K.A.A.G., C.D. and U.D.A.; Investigation, S.J., K.A.A.G., C.D. and U.D.A.; Writing—original draft, S.J. and K.A.A.G.; Writing—review & editing, K.A.A.G., D.Y. and C.C.; Supervision, K.A.A.G.; Project administration, K.A.A.G., D.Y. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request due to restrictions; they are not publicly available because the respondent requested that the data not be made public.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abduljawad, S. A. (2024). Investigating the impact of ChatGPT as an AI tool on ESL writing: Prospects and challenges in Saudi Arabian higher education. International Journal of Computer-Assisted Language Learning and Teaching, 14(1), 1–19. [Google Scholar] [CrossRef]
  2. Acosta-Enriquez, B. G., Ballesteros, M. A. A., Vargas, C. G. A. P., Ulloa, M. N. O., Ulloa, C. R. G., Romero, J. M. P., Jaramillo, N. D. G., Orellana, H. U. C., Anzoátegui, D. X. A., & Roca, C. L. (2024a). Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among generation Z university students. International Journal for Educational Integrity, 20(1), 10. [Google Scholar] [CrossRef]
  3. Acosta-Enriquez, B. G., Vargas, C. G. A. P., Jordan, O. H., Ballesteros, M. A. A., & Morales, A. E. P. (2024b). Exploring attitudes toward ChatGPT among college students: An empirical analysis of cognitive, affective, and behavioral components using path analysis. Computers and Education: Artificial Intelligence, 7, 100320. [Google Scholar] [CrossRef]
  4. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. [Google Scholar] [CrossRef]
  5. Aladsani, H. K. (2025). Developing postgraduate students’ competencies in generative artificial intelligence for ethical integration into academic practices: A participatory action research. Interactive Learning Environments, 33, 5747–5765. [Google Scholar] [CrossRef]
  6. Aldulaijan, A. T., & Almalky, S. M. (2025). The impact of generative AI tools on postgraduate students’ learning experiences: New insights into usage patterns. Journal of Information Technology Education: Research, 24, 003. [Google Scholar] [CrossRef]
  7. Alghazo, R., Fatima, G., Malik, M., Abdelhamid, S. E., Jahanzaib, M., Nayab, D. E., & Raza, A. (2025). Exploring ChatGPT’s role in higher education: Perspectives from Pakistani university students on academic integrity and ethical challenges. Education Sciences, 15(2), 158. [Google Scholar] [CrossRef]
  8. Alharbi, W. (2024). Mind the gap, please!: Addressing the mismatch between teacher awareness and student AI adoption in higher education. International Journal of Computer-Assisted Language Learning and Teaching, 14(1), 1–28. [Google Scholar] [CrossRef]
  9. Alshamy, A., Al-Harthi, A. S. A., & Abdullah, S. (2025). Perceptions of generative AI tools in higher education: Insights from students and academics at Sultan Qaboos University. Education Sciences, 15(4), 501. [Google Scholar] [CrossRef]
  10. Al-Sofi, B. B. M. A. (2024). Artificial intelligence-powered tools and academic writing: To use or not to use ChatGPT. Saudi Journal of Language Studies, 4(3), 145–161. [Google Scholar] [CrossRef]
  11. Al-Zahrani, A. M. (2024). The impact of generative AI tools on researchers and research: Implications for academia in higher education. Innovations in Education and Teaching International, 61(5), 1029–1043. [Google Scholar] [CrossRef]
  12. Alzubi, A. A. F., Nazim, M., & Alyami, N. (2025). Do AI-generative tools kill or nurture creativity in EFL teaching and learning? Education and Information Technologies, 30, 15147–15184. [Google Scholar] [CrossRef]
  13. An, Y., Yu, J. H., & James, S. (2025). Investigating the higher education institutions’ guidelines and policies regarding the use of generative AI in teaching, learning, research, and administration. International Journal of Educational Technology in Higher Education, 22(1), 10. [Google Scholar] [CrossRef]
  14. Azcárate, A. L.-V. (2024). Foresight methodologies in responsible GenAI education: Insights from the intermedia-lab at Complutense University Madrid. Education Sciences, 14(8), 834. [Google Scholar] [CrossRef]
  15. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299. [Google Scholar] [CrossRef]
  16. Bandura, A., & Walters, R. H. (1971). Social learning theory. Prentice Hall. [Google Scholar]
  17. Barus, O. P., Hidayanto, A. N., Handri, E. Y., Sensuse, D. I., & Yaiprasert, C. (2025). Shaping generative AI governance in higher education: Insights from student perception. International Journal of Educational Research Open, 8, 100452. [Google Scholar] [CrossRef]
  18. Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173. [Google Scholar] [CrossRef]
  19. Behrens, K. A., Marbach-Ad, G., & Kocher, T. D. (2024). AI in the genetics classroom: A useful tool but not a replacement for creative writing. Journal of Science Education and Technology, 34, 621–635. [Google Scholar] [CrossRef]
  20. Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. [Google Scholar] [CrossRef]
  21. Bikanga Ada, M. (2024). It helps with crap lecturers and their low effort: Investigating computer science students’ perceptions of using ChatGPT for learning. Education Sciences, 14(10), 1106. [Google Scholar] [CrossRef]
  22. Bills, K. (2025). A learning tool or hazard? Concerns related to AI misuse in social work courses. Social Work Education, 1–11. [Google Scholar] [CrossRef]
  23. Blahopoulou, J., & Ortiz-Bonnin, S. (2025). Student perceptions of ChatGPT: Benefits, costs, and attitudinal differences between users and non-users toward AI integration in higher education. Education and Information Technologies, 30, 19741–19764. [Google Scholar] [CrossRef]
  24. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
  25. Caccavale, F., Gargalo, C. L., Gernaey, K. V., & Krühne, U. (2024). Towards education 4.0: The role of large language models as virtual tutors in chemical engineering. Education for Chemical Engineers, 49, 1–11. [Google Scholar] [CrossRef]
  26. Cacho, R. M. (2024). Integrating generative AI in university teaching and learning: A model for balanced guidelines. Online Learning, 28(3), 55–81. [Google Scholar] [CrossRef]
  27. Cai, L., Msafiri, M. M., & Kangwa, D. (2025). Exploring the impact of integrating AI tools in higher education using the zone of proximal development. Education and Information Technologies, 30(6), 7191–7264. [Google Scholar] [CrossRef]
  28. Cao, X., Lin, Y.-J., Zhang, J.-H., Tang, Y.-P., Zhang, M.-P., & Gao, H.-Y. (2025). Students’ perceptions about the opportunities and challenges of ChatGPT in higher education: A cross-sectional survey based in China. Education and Information Technologies, 30, 12345–12364. [Google Scholar] [CrossRef]
  29. Chaaban, Y. (2025). Exploring research ethics through the lens of critical posthumanism in the age of Artificial Intelligence. Teaching in Higher Education, 30(7), 1740–1755. [Google Scholar] [CrossRef]
  30. Chaaban, Y., Qadhi, S., Chen, J., & Du, X. (2024). Understanding researchers’ AI readiness in a higher education context: Q methodology research. Education Sciences, 14(7), 709. [Google Scholar] [CrossRef]
  31. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. [Google Scholar] [CrossRef]
  32. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  33. Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60. [Google Scholar] [CrossRef]
  34. Chan, C. K. Y., & Zhou, W. (2023). An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learning Environments, 10(1), 64. [Google Scholar] [CrossRef]
  35. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. [Google Scholar] [CrossRef]
  36. Chu, H.-C., Lu, Y.-C., & Tu, Y.-F. (2025). How GenAI-supported multi-modal presentations benefit students with different motivation levels: Evidence from digital storytelling performance, critical thinking awareness, and learning attitude. Educational Technology & Society, 28(1), 250–269. [Google Scholar] [CrossRef]
  37. Cong-Lem, N., Ali, S., & Tsering, D. (2025). A systematic review of the limitations and associated opportunities of ChatGPT. International Journal of Human–Computer Interaction, 41(7), 3851–3866. [Google Scholar] [CrossRef]
  38. Corbin, T., Dawson, P., Nicola-Richmond, K., & Partridge, H. (2025). ‘Where’s the line? It’s an absurd line’: Towards a framework for acceptable uses of AI in assessment. Assessment & Evaluation in Higher Education, 50, 705–717. [Google Scholar] [CrossRef]
  39. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  40. Cowling, M., Crawford, J., Allen, K.-A., & Wehmeyer, M. (2023). Using leadership to leverage ChatGPT and artificial intelligence for undergraduate and postgraduate research supervision. Australasian Journal of Educational Technology, 39(4), 89–103. [Google Scholar] [CrossRef]
  41. Curran, C., Burchardt, T., Knapp, M., McDaid, D., & Li, B. (2007). Challenges in multidisciplinary systematic reviewing: A study on social exclusion and mental health policy. Social Policy & Administration, 41(3), 289–312. [Google Scholar] [CrossRef]
  42. Dai, Y., Lai, S., Lim, C. P., & Liu, A. (2024). University policies on generative AI in Asia: Promising practices, gaps, and future directions. Journal of Asian Public Policy, 18, 260–281. [Google Scholar] [CrossRef]
  43. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  44. Davis, J., Mengersen, K., Bennett, S., & Mazerolle, L. (2014). Viewing systematic reviews and meta-analysis in social research through different lenses. SpringerPlus, 3(1), 511. [Google Scholar] [CrossRef]
  45. Dolenc, K., & Brumen, M. (2024). Exploring social and computer science students’ perceptions of AI integration in (foreign) language instruction. Computers and Education: Artificial Intelligence, 7, 100285. [Google Scholar] [CrossRef]
  46. Duan, S., Liu, C., Rong, T., Zhao, Y., & Liu, B. (2025). Integrating AI in medical education: A comprehensive study of medical students’ attitudes, concerns, and behavioral intentions. BMC Medical Education, 25(1), 599. [Google Scholar] [CrossRef]
  47. English, R., Rebecca, N., & Mackenzie, H. (2025). ‘A rather stupid but always available brainstorming partner’: Use and understanding of generative AI by UK postgraduate researchers. Innovations in Education and Teaching International, 63, 193–207. [Google Scholar] [CrossRef]
  48. Espartinez, A. S. (2024). Exploring student and teacher perceptions of ChatGPT use in higher education: A Q-Methodology study. Computers and Education: Artificial Intelligence, 7, 100264. [Google Scholar] [CrossRef]
  49. Essel, H. B., Vlachopoulos, D., Essuman, A. B., & Amankwa, J. O. (2024). ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers and Education: Artificial Intelligence, 6, 100198. [Google Scholar] [CrossRef]
  50. Essien, A., Teslim, B. O., Xianghan, O. D., & Kremantzis, M. (2024). The influence of AI text generators on critical thinking skills in UK business schools. Studies in Higher Education, 49(5), 865–882. [Google Scholar] [CrossRef]
  51. Farinosi, M., & Melchior, C. (2025). ‘I use ChatGPT, but should I?’ A multi-method analysis of students’ practices and attitudes towards AI in higher education. European Journal of Education, 60(2), e70094. [Google Scholar] [CrossRef]
  52. Filgueiras, F. (2024). Artificial intelligence and education governance. Education, Citizenship and Social Justice, 19(3), 349–361. [Google Scholar] [CrossRef]
  53. Fitzek, S., & Bârgăoanu, A. (2025). Introducing large language models in communication and public relations education: A mixed-methods pilot study. International Journal of Artificial Intelligence in Education, 35, 2478–2494. [Google Scholar] [CrossRef]
  54. Fleischmann, K. (2025). The commodification of creativity: Integrating generative artificial intelligence in higher education design curriculum. Innovations in Education and Teaching International, 62, 1843–1857. [Google Scholar] [CrossRef]
  55. Genovese, A., Borna, S., Gomez-Cabello, C. A., Haider, S. A., Prabha, S., Trabilsy, M., & Forte, A. J. (2025). The current landscape of artificial intelligence in plastic surgery education and training: A systematic review. Journal of Surgical Education, 82(8), 103519. [Google Scholar] [CrossRef]
  56. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. [Google Scholar] [CrossRef]
  57. Gonsalves, C. (2025). Addressing student non-compliance in AI use declarations: Implications for academic integrity and assessment in higher education. Assessment & Evaluation in Higher Education, 50(4), 592–606. [Google Scholar] [CrossRef]
  58. Gupta, S., & Jaiswal, R. (2024). How can we improve AI competencies for tomorrow’s leaders: Insights from multi-stakeholders’ interaction. The International Journal of Management Education, 22(3), 101070. [Google Scholar] [CrossRef]
  59. Hanna, M. G., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. H. (2025). Ethical and bias considerations in artificial intelligence/machine learning. Modern Pathology, 38(3), 100686. [Google Scholar] [CrossRef]
  60. Haq, M. Z. U., Cao, G., & Abukhait, R. M. Y. (2025). Examining students’ attitudes and intentions towards using ChatGPT in higher education. British Journal of Educational Technology, 56, 2428–2452. [Google Scholar] [CrossRef]
  61. Haroud, S., & Saqri, N. (2025). Generative AI in higher education: Teachers’ and students’ perspectives on support, replacement, and digital literacy. Education Sciences, 15(4), 396. [Google Scholar] [CrossRef]
  62. Hechanova, R. M., & Cementina-Olpoc, R. (2013). Transformational leadership, change management, and commitment to change: A comparison of academic and business organizations. The Asia-Pacific Education Researcher, 22(1), 11–19. [Google Scholar] [CrossRef]
  63. Hossain, Z., Biswas, M. S., & Khan, G. (2025). AI literacy of library and information science students: A study of Bangladesh, India and Pakistan. Journal of Librarianship and Information Science. [Google Scholar] [CrossRef]
  64. Huang, W., Taoli, W., & Tong, Y. (2024). The effect of gamified project-based learning with AIGC in information literacy education. Innovations in Education and Teaching International, 63, 130–144. [Google Scholar] [CrossRef]
  65. Ichikawa, T., Olsen, E., Vinod, A., Glenn, N., Hanna, K., Lund, G. C., & Pierce-Talsma, S. (2025). Generative artificial intelligence in medical education—Policies and training at US osteopathic medical schools: Descriptive cross-sectional survey. JMIR Medical Education, 11, e58766. [Google Scholar] [CrossRef]
  66. Jabar, M., Elena, C.-J., & Pradubmook Sherer, P. (2024). Qualitative ethical technology assessment of artificial intelligence (AI) and the internet of things (IoT) among filipino Gen Z members: Implications for ethics education in higher learning institutions. Asia Pacific Journal of Education, 45(4), 1344–1358. [Google Scholar] [CrossRef]
  67. Jayasinghe, S. (2024). Promoting active learning with ChatGPT: A constructivist approach in Sri Lankan higher education. Journal of Applied Learning and Teaching, 7(2), 141–154. [Google Scholar] [CrossRef]
  68. Jayasinghe, S., Arm, K., & Gamage, K. A. A. (2025). Designing culturally inclusive Case studies with generative AI: Strategies and considerations. Education Sciences, 15(6), 645. [Google Scholar] [CrossRef]
  69. Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. [Google Scholar] [CrossRef]
  70. Kajiwara, Y., & Kawabata, K. (2024). AI literacy for ethical use of chatbot: Will students accept AI ethics? Computers and Education: Artificial Intelligence, 6, 100251. [Google Scholar] [CrossRef]
  71. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [Google Scholar] [CrossRef]
  72. Khlaif, Z. N., Ayyoub, A., Hamamra, B., Bensalem, E., Mitwally, M. A. A., Ayyoub, A., Hattab, M. K., & Shadid, F. (2024). University teachers’ views on the adoption and integration of generative AI tools for student assessment in higher education. Education Sciences, 14(10), 1090. [Google Scholar] [CrossRef]
  73. Kim, J., Klopfer, M., Grohs, J. R., Eldardiry, H., Weichert, J., Cox, L. A., & Pike, D. (2025). Examining faculty and student perceptions of generative AI in university courses. Innovative Higher Education, 50, 1281–1313. [Google Scholar] [CrossRef]
  74. Koehler, M., & Mishra, P. (2009). What is technological pedagogical content knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. [Google Scholar] [CrossRef]
  75. Kramm, N., & McKenna, S. (2023). AI amplifies the tough question: What is higher education really for? Teaching in Higher Education, 28(8), 2173–2178. [Google Scholar] [CrossRef]
  76. Kutty, S., Chugh, R., Perera, P., Neupane, A., Jha, M., Li, D., Gunathilake, W., & Perera, N. C. (2024). Generative AI in higher education: Perspectives of students, educators and administrators. Journal of Applied Learning & Teaching, 7(2), 47–60. [Google Scholar] [CrossRef]
  77. Li, J., King, R. B., Chai, C. S., Zhai, X., & Lee, V. W. Y. (2025). The AI Motivation Scale (AIMS): A self-determination theory perspective. Journal of Research on Technology in Education, 1–22. [Google Scholar] [CrossRef]
  78. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. [Google Scholar] [CrossRef]
  79. Lin, X., Chan, Y., Sharma, S., & Bista, K. (2024). ChatGPT and global higher education. STAR Scholars Press. [Google Scholar]
  80. Luckin, R., Cukurova, M., Kent, C., & du Boulay, B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100076. [Google Scholar] [CrossRef]
  81. Lyu, W., Zhang, S., Chung, T., Sun, Y., & Zhang, Y. (2025). Understanding the practices, perceptions, and (dis)trust of generative AI among instructors: A mixed-methods study in the U.S. higher education. Computers and Education: Artificial Intelligence, 8, 100383. [Google Scholar] [CrossRef]
  82. Maheshwari, G. (2024). Factors influencing students’ intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context. Education and Information Technologies, 29(10), 12167–12195. [Google Scholar] [CrossRef]
  83. Mahrishi, M., Abbas, A., Radovanović, D., & Hosseini, S. (2024). Emerging dynamics of ChatGPT in academia: A scoping review. Journal of University Teaching and Learning Practice, 21(1). [Google Scholar] [CrossRef]
  84. Maita, I., Saide, S., Putri, A. M., & Muwardi, D. (2024). Pros and cons of artificial intelligence–ChatGPT adoption in education settings: A literature review and future research agendas. IEEE Engineering Management Review, 52(3), 27–42. [Google Scholar] [CrossRef]
  85. Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., & Marzuki. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5, 100296. [Google Scholar] [CrossRef]
  86. Mariyam, H., & Karthika, V. K. (2025). AI-enabled networked learning: A posthuman connectivist approach in an English for specific purposes classroom. Education and Information Technologies, 30, 18181–18211. [Google Scholar] [CrossRef]
  87. Matsiola, M., Lappas, G., & Yannacopoulou, A. (2024). Generative AI in education: Assessing usability, ethical implications, and communication effectiveness. Societies, 14(12), 267. [Google Scholar] [CrossRef]
  88. McDonald, N., Johri, A., Ali, A., & Collier, A. H. (2025). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans, 3, 100121. [Google Scholar] [CrossRef]
  89. Medina-Gual, L., & Parejo, J.-L. (2025). Perceptions and use of AI in higher education students: Impact on teaching, learning, and ethical considerations. European Journal of Education, 60(1), e12919. [Google Scholar] [CrossRef]
  90. Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100152. [Google Scholar] [CrossRef]
  91. Merzifonluoglu, A., & Gunes, H. (2025). Shifting dynamics: Who holds the reins in decision-making with artificial intelligence tools? Perspectives of gen Z pre-service teachers. European Journal of Education, 60(1), e70053. [Google Scholar] [CrossRef]
  92. Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856. [Google Scholar] [CrossRef]
  93. Mohamed, A. M., Shaaban, T. S., Bakry, S. H., Guillén-Gámez, F. D., & Strzelecki, A. (2025). Empowering the faculty of education students: Applying AI’s potential for motivating and enhancing learning. Innovative Higher Education, 50(2), 587–609. [Google Scholar] [CrossRef]
  94. Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106(1), 213–228. [Google Scholar] [CrossRef]
  95. Mumtaz, S., Carmichael, J., Weiss, M., & Nimon-Peters, A. (2025). Ethical use of artificial intelligence based tools in higher education: Are future business leaders ready? Education and Information Technologies, 30(6), 7293–7319. [Google Scholar] [CrossRef]
  96. Mzwri, K., & Turcsányi-Szabo, M. (2025). The impact of prompt engineering and a generative AI-driven tool on autonomous learning: A case study. Education Sciences, 15(2), 199. [Google Scholar] [CrossRef]
  97. Narayan, R., Rodriguez, C., Araujo, J., Shaqlaih, A., & Moss, G. (2013). Constructivism—Constructivist learning theory. In B. J. Irby, G. Brown, R. Lara-Alecio, & S. Jackson (Eds.), The handbook of educational theories (pp. 169–183). IAP Information Age Publishing. [Google Scholar]
  98. Neshaei, S. P., Mejia-Domenzain, P., Davis, R. L., & Käser, T. (2025). Metacognition meets AI: Empowering reflective writing with large language models. British Journal of Educational Technology, 56, 1864–1896. [Google Scholar] [CrossRef]
  99. Nguyen, K. V. (2025). The use of generative AI tools in higher education: Ethical and pedagogical principles. Journal of Academic Ethics, 23, 1435–1455. [Google Scholar] [CrossRef]
  100. Niloy, A. C., Hafiz, R., Hossain, B. M. T., Gulmeher, F., Sultana, N., Islam, K. F., Bushra, F., Islam, S., Hoque, S. I., Rahman, M. A., & Kabir, S. (2024). AI chatbots: A disguised enemy for academic integrity? International Journal of Educational Research Open, 7, 100396. [Google Scholar] [CrossRef]
  101. Nofal, A. B., Ali, H., Hadi, M., Ahmad, A., Qayyum, A., Johri, A., Al-Fuqaha, A., & Qadir, J. (2025). AI-enhanced interview simulation in the metaverse: Transforming professional skills training through VR and generative conversational AI. Computers and Education: Artificial Intelligence, 8, 100347. [Google Scholar] [CrossRef]
  102. North, D. C. (1990). Institutions, institutional change and economic performance. Cambridge University Press. [Google Scholar]
  103. Okamoto, S., Kataoka, M., Itano, M., & Sawai, T. (2025). AI-based medical ethics education: Examining the potential of large language models as a tool for virtue cultivation. BMC Medical Education, 25(1), 185. [Google Scholar] [CrossRef]
  104. Oncioiu, I., & Bularca, A. R. (2025). Artificial intelligence governance in higher education: The role of knowledge-based strategies in fostering legal awareness and ethical artificial intelligence literacy. Societies, 15(6), 144. [Google Scholar] [CrossRef]
  105. Ou, A. W., Stöhr, C., & Malmström, H. (2024). Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System, 121, 103225. [Google Scholar] [CrossRef]
  106. Ozfidan, B., El-Dakhs, D. A. S., & Alsalim, L. A. (2024). The use of AI tools in English academic writing by Saudi undergraduates. Contemporary Educational Technology, 16(4), ep527. [Google Scholar] [CrossRef]
  107. Pallant, J. L., Blijlevens, J., Campbell, A., & Jopp, R. (2025). Mastering knowledge: The impact of generative AI on student learning outcomes. Studies in Higher Education, 1–22. [Google Scholar] [CrossRef]
  108. Park, J. J., & Milner, P. (2025). Enhancing academic writing integrity: Ethical implementation of generative artificial intelligence for non-traditional online students. TechTrends, 69(1), 176–188. [Google Scholar] [CrossRef]
  109. Pranckutė, R. (2021). Web of Science (WoS) and Scopus: The titans of bibliographic information in today’s academic world. Publications, 9(1), 12. [Google Scholar] [CrossRef]
  110. Qian, Y. (2025). Pedagogical applications of generative AI in higher education: A systematic review of the field. TechTrends, 69, 1105–1120. [Google Scholar] [CrossRef]
  111. Qu, Y., & Wang, J. (2025). The impact of AI guilt on students’ use of ChatGPT for academic tasks: Examining disciplinary differences. Journal of Academic Ethics, 23, 2087–2110. [Google Scholar] [CrossRef]
  112. Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., Sun, M., Day, I., Rather, R. A., & Heathcote, L. (2023). The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal of Applied Learning and Teaching, 6(1), 41–56. [Google Scholar] [CrossRef]
  113. Rawas, S. (2024). ChatGPT: Empowering lifelong learning in the digital age of higher education. Education and Information Technologies, 29(6), 6895–6908. [Google Scholar] [CrossRef]
  114. Rienties, B., John, D., Subby, D., Christothea, H., Felipe, T., & Whitelock, D. (2025). What distance learning students want from an AI digital assistant. Distance Education, 46(2), 173–189. [Google Scholar] [CrossRef]
  115. Rogers, E. M. (1995). Diffusion of innovations. Free Press. [Google Scholar]
  116. Saihi, A., Ben-Daya, M., Hariga, M., & As’ad, R. (2024). A Structural equation modeling analysis of generative AI chatbots adoption among students and educators in higher education. Computers and Education: Artificial Intelligence, 7, 100274. [Google Scholar] [CrossRef]
  117. Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Education Sciences, 14(1), 83. [Google Scholar] [CrossRef]
  118. Samala, A. D., Rawas, S., Wang, T., Reed, J. M., Kim, J., Howard, N.-J., & Ertz, M. (2025). Unveiling the landscape of generative artificial intelligence in education: A comprehensive taxonomy of applications, challenges, and future prospects. Education and Information Technologies, 30(3), 3239–3278. [Google Scholar] [CrossRef]
  119. Sarıkahya, S. D., Özbay, Ö., Torpuş, K., Usta, G., & Özbay, S. Ç. (2025). The impact of ChatGPT on nursing education: A qualitative study based on the experiences of faculty members. Nurse Education Today, 152, 106755. [Google Scholar] [CrossRef]
  120. Sekli, G. M., Godo, A., & Véliz, J. C. (2024). Generative AI solutions for faculty and students: A review of literature and roadmap for future research. Journal of Information Technology Education: Research, 23, 014. [Google Scholar] [CrossRef]
  121. Silvola, A., Kajamaa, A., Merikko, J., & Muukkonen, H. (2025). AI-mediated sensemaking in higher education students’ learning processes: Tensions, sensemaking practices, and AI-assigned purposes. British Journal of Educational Technology, 56, 2001–2018. [Google Scholar] [CrossRef]
  122. Southworth, J., Migliaccio, K., Glover, J., Glover, J. N., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127. [Google Scholar] [CrossRef]
  123. Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. [Google Scholar] [CrossRef]
  124. Summers, A., Haddad, M. E., Prichard, R., Clarke, K.-A., Lee, J., & Oprescu, F. (2024). Navigating challenges and opportunities: Nursing student’s views on generative AI in higher education. Nurse Education in Practice, 79, 104062. [Google Scholar] [CrossRef]
  125. Sweller, J. (2011). Cognitive load theory. In J. P. Mestre, & B. H. Ross (Eds.), Psychology of learning and motivation (Vol. 55, pp. 37–76). Academic Press.
  126. Swidan, A., Lee, S. Y., & Romdhane, S. B. (2025). College students’ use and perceptions of AI tools in the UAE: Motivations, ethical concerns and institutional guidelines. Education Sciences, 15(4), 461.
  127. Szabó, F., & Szoke, J. (2024). How does generative AI promote autonomy and inclusivity in language teaching? ELT Journal, 78(4), 478–488.
  128. Tan, X., Cheng, G., & Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence, 8, 100355.
  129. Tossell, C. C., Tenhundfeld, N. L., Momen, A., Cooley, K., & de Visser, E. J. (2024). Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Transactions on Learning Technologies, 17, 1069–1081.
  130. Urban, M., Děchtěrenko, F., Lukavský, J., Hrabalová, V., Svacha, F., Brom, C., & Urban, K. (2024). ChatGPT improves creative problem-solving performance in university students: An experimental study. Computers & Education, 215, 105031.
  131. Urquhart, C., Cheuk, B., Lam, L., & Snowden, D. (2025). Sense-making, sensemaking and sense making—A systematic review and meta-synthesis of literature in information science and education: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology, 76(1), 3–97.
  132. Usher, M., & Barak, M. (2024). Unpacking the role of AI ethics online education for science and engineering students. International Journal of STEM Education, 11(1), 35.
  133. Valdivieso, T., & González, O. (2025). Generative AI tools in Salvadoran higher education: Balancing equity, ethics, and knowledge management in the global south. Education Sciences, 15(2), 214.
  134. Vieriu, A. M., & Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Education Sciences, 15(3), 343.
  135. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326.
  136. Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 124167.
  137. Wang, Z., Chai, C.-S., Li, J., & Lee, V. W. Y. (2025). Assessment of AI ethical reflection: The development and validation of the AI ethical reflection scale (AIERS) for university students. International Journal of Educational Technology in Higher Education, 22(1), 19.
  138. Wass, R., Tracy, R., Kim, B., Kelby, S.-H., Jacqueline, T., David, B., & Gallagher, S. (2023). Pedagogical training for developing students’ metacognition: Implications for educators. International Journal for Academic Development, 1–14.
  139. Xu, X., Qiao, L., Cheng, N., Liu, H., & Zhao, W. (2025). Enhancing self-regulated learning and learning experience in generative AI environments: The critical role of metacognitive support. British Journal of Educational Technology, 56, 1842–1863.
  140. Yan, L., Martinez-Maldonado, R., Jin, Y., Echeverria, V., Milesi, M., Fan, J., Zhao, L., Alfredo, R., Li, X., & Gašević, D. (2025). The effects of generative AI agents and scaffolding on enhancing students’ comprehension of visual learning analytics. Computers & Education, 234, 105322.
  141. Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21.
  142. Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: Strategies for undoing the ethics of digital neocolonialism. Learning, Media and Technology, 48(1), 25–37.
  143. Zhou, X., Zhang, J., & Chan, C. (2024). Unveiling students’ experiences and perceptions of artificial intelligence usage in higher education. Journal of University Teaching and Learning Practice, 21(6).
  144. Zou, M., & Huang, L. (2024). The impact of ChatGPT on L2 writing and expected responses: Voice from doctoral students. Education and Information Technologies, 29(11), 13201–13219.
Figure 1. Article Selection Process (Phase 2 and Phase 3).
Table 2. Proposed generative AI intervention framework for higher education.
| Main Themes (Intervention Types) | Sub-Themes | Codes |
|---|---|---|
| Curriculum Integration Interventions | Embedding AI Literacy | AI ethics education; Responsible AI use modules; Critical thinking development |
| | Alignment with Learning Outcomes | ULO/CLO integration; Discipline-specific AI applications |
| | Pedagogical Redesign | AI-informed teaching strategies; Collaborative learning with AI |
| Policy and Governance Interventions | Institutional AI Policies | AI use regulations; Academic integrity enforcement |
| | Transparent Communication | Plagiarism prevention; Student and faculty guidelines; Policy dissemination |
| | Stakeholder Involvement | Faculty-student collaboration; Cross-departmental AI governance committees |
| Faculty Development and Support | Professional Training | AI pedagogical skills workshops; Ethics training for educators |
| | Resource Provision | Access to AI tools; Instructional materials for AI integration |
| | Ongoing Support | AI integration coaching; Communities of practice |
| Student-Centred Interventions | Ethical AI Use Education | Workshops on responsible AI; Promoting academic honesty |
| | Skill Development | AI tool proficiency training; Critical evaluation of AI outputs |
| | Reflective Practices | Guided reflection on AI use; AI impact discussions |
| Assessment Adaptation Interventions | AI-Resilient Assessment Design | Authentic assessments; AI detection mechanisms |
| | Feedback and Monitoring | Student-centred AI feedback mechanisms; Formative feedback on AI use |
| | Alternative Evaluation Methods | Portfolio assessments; Oral examinations |
| Technology and Infrastructure Interventions | Tool Accessibility | Providing AI platforms; Ensuring equitable access |
| | Integration with Learning Management Systems | Embedding AI tools into LMS; Technical support |
| | Data Privacy and Security | Secure data handling; Compliance with ethical standards |