1. Introduction
As paradoxical as it may sound, ‘Technological innovation brings disruption’ [
1] (p. 11), and the education sector has witnessed that. The year 1986, for instance, saw a street demonstration in which math educators protested against student use of calculators in classrooms [
2]. Similar sentiments accompanied the use of the Internet by students in the preparation of their assignments [
3], as well as the use of laptops by students in classrooms [
1]. We all know that the negative responses in these three examples eventually changed, with technology becoming an indispensable asset in classrooms at every level of education and far beyond, permeating all aspects of the education sector’s work [
1]. AI is our next technological hurdle. The cyclical development of history [
5] may suggest that AI is on course to become an essential aspect of the functioning of the education sector, revolutionising the way education works and, moreover, transforming what education ultimately means [
6]. This transformation must also be considered in relation to global education priorities. The United Nations’ Sustainable Development Goal 4 (SDG4) emphasises inclusive, equitable, high-quality education and lifelong learning opportunities for all [
7]. As AI increasingly reshapes higher education, policy responses in the sector become part of the broader global effort to ensure that technological change supports, rather than undermines, SDG4 ambitions.
AI refers to the capability of machines to perform tasks that typically require human intelligence—with minimal human intervention [
8]. AI has evolved significantly over the past couple of decades, with advancements in deep learning and neural networks driving its popularity and application across different domains, including higher education (HE) [
9]. The advancements of AI to date are termed ‘Artificial Narrow Intelligence’ (e.g., ChatGPT), with the more potent ‘Artificial General Intelligence’ and ‘Artificial Superintelligence’ remaining theoretical concepts for now [
9] (p. 4), which suggests the exponential scope of AI development ahead of us. The field of HE has seen an increasing integration of AI into various areas, such as teaching, learning, research, and administration [
10,
11].
Proliferating literature on AI in HE suggests that AI has started revolutionising the HE sector by offering personalised learning support [
12], aiding with predictive analytics of, for instance, student performance [
10], assessment and feedback provision [
13], administrative work [
14], streamlining research and writing for publication [
15]. These topic areas have received a lot of scholarly attention, as evidenced by numerous systematic literature reviews published on AI in HE [
10,
16,
17,
18,
19,
20].
The rapidly expanding impact of AI across all areas of higher education calls for robust governance responses, including formal policy, institutional procedures, pedagogical redesign, and assessment reform. The need to advance AI policy has been mentioned in the wider field of education [
21,
22], as well as specifically in HE [
23,
24]. AI policies in higher education represent an emerging area of study, with limited existing research: only 11 key articles were available as of mid-January 2025, as demonstrated below. The significance of these studies, however, highlights the need to systematise the knowledge they contain.
Existing scholarship on AI in higher education has focused mainly on academic integrity, pedagogy, assessment, and ethics. However, there has been much less systematic attention to how these concerns are being translated into policy and governance responses across higher education institutions and systems. As a result, there is still limited clarity on what issues dominate the emerging literature on AI policy in higher education, how policy responses are being framed, and what patterns of policy learning can be identified across the field.
This paper relies on a systematic literature review of scholarship, published between 2015 and 2025, that analyses what is known about AI policy in higher education. The study is important because it synthesises a small but rapidly emerging body of evidence on AI policy in higher education, clarifies the main policy concerns shaping the field, and identifies gaps that matter for governance, equity, and the pursuit of SDG4. The analysis also serves to promote the need for AI policy in HE, take stock of achievements in AI policy development approaches in HE, and identify relevant areas for development. The paper proceeds by presenting the theoretical ideas around policy learning that guide our analysis, then explaining the systematic literature review as our research method, followed by an analysis of the key thematic areas in the literature and a discussion of findings.
2. Policy Learning
Policy learning provides a useful framework for interpreting the emergence of AI policy in higher education because it draws attention to how institutions respond to new challenges through adaptation, revision, and selective incorporation of new rules and practices. In this review, the concept is used not in a general sense but specifically to examine how AI governance in higher education is being built through incremental and layered policy responses.
Scholars who examine similar policy processes use a variety of terms, such as learning, transfer, translation, and diffusion. Policy learning has recently become a common and productive lens for policy analysis [
25].
Policy learning is viewed in three main ways in relation to other policy processes in relevant scholarship. First, it is seen as a component of broader policy change, often paired with borrowing as part of policy transfer [
24,
26]. Second, it is considered an equally important process alongside diffusion, transfer, and translation, with interdependencies among them [
27,
28]. Finally, policy learning is regarded as an umbrella term encompassing all other policy processes, acting as a source of transfer and diffusion, and integral to all policy-making activities [
29,
30].
Some sceptics do not agree with the all-encompassing view of policy learning, which suggests that every aspect of policy represents learning in some form. Consequently, identifying the boundaries of learning has become a priority for some scholars. For example, some argue that policy copying attempts do not constitute learning [
29], while others contend that pure copying attempts are unproductive in general [
31]. The lack of clear evidence defining what cannot be considered policy learning supports the perspective that learning is intertwined with many other policy processes and plays a central role in any policy context.
The available definitions of policy learning, e.g., [
32,
33], emphasise that the core concept of policy learning, policy layering, involves an incremental updating of policy ideas by integrating new information with existing knowledge, which happens in a collective, messy, and creative way [
33].
The concept of
layering suggests a gradual, incremental renegotiation of some elements of a policy system while leaving others unchanged, so that changes accumulate on the basis of the conventions that were initially left in place [
34]. Layering occurs when establishments are unwilling, or, more likely, unable, to undertake radical transformations, or when they simply do not attempt complete change [
35].
Policy learning represents a combination of contributions from different policy actors [
36,
37]—thus, one could gather that policy learning has a collective nature. Policy learning is a productive process. More specifically, Freeman (2006) states that ‘Policy does not exist somewhere else in the finished form, ready to be looked at and learned from, but it is finished or produced in the act of looking and learning. Learning is the output of a series of communications, not its input; in this sense it is generated rather than disseminated’ [
38] (p. 379). The author further argues that policy learning ‘is not simply an interpretation act, a process of registering and taking account of the world; it is, in a fundamental way, about creating the world’ (p. 382). This means that the meaning of policy may emerge through layering. Policy learning is very often a messy process too. There are claims that explicitly acknowledge the ‘messiness of policy making’ [
38] (p. 130) and the ‘bedlam’ nature of policy learning [
39] (p. 12).
In the AI era, policy learning arguably necessitates the gradual creation of entirely new ways of functioning across different aspects of the HE sector, which is no easy task given its shared and messy nature. In this review, policy learning is used as an interpretive analytical lens through which patterns in published scholarship on AI governance in higher education are examined. The reviewed studies do not all investigate policy learning directly as an empirical process [
40]; rather, they provide material from which patterns of adaptation, layering, and institutional response can be interpreted.
3. Materials and Methods
A PRISMA-informed systematic literature review (see
Supplementary Materials) with qualitative thematic synthesis was conducted to examine the available research on higher education and AI policy published between 2015 and 2025 [
41,
42]. The aim was to synthesise and critique all research that provided answers to the following important research question:
How has policy learning been occurring in the HE sector in response to the advent of AI?
The 2015–2025 timeframe was chosen to cover the recent period in which AI has gained wide-scale popularity in HE, with the post-2015 period marking a notable increase in publications on AI in HE in general [
10]. The year 2015 is significant because it marks the period when deep learning, which is a subset of AI, began to achieve major breakthroughs, such as the development of more advanced neural networks and the success of AI in various applications; OpenAI was also founded in 2015. These advancements spurred increased interest and research in AI across multiple domains, including higher education [
7]. In sum, starting our search from 2015 has allowed us to capture the early momentum and foundational developments in the publications on AI policies within higher education, providing a thorough understanding of how the field has evolved first gradually and then explosively after the launch of ChatGPT in November 2022. The 13th of January 2025 was the literature search date and, thus, a cut-off point for data collection for this paper.
The systematic review follows the guidelines for systematic reviews as specified in the PRISMA 2020 statement [
42]. The search was conducted through Nottingham Trent University’s institutional library database as the search interface and gateway to multiple search databases, namely DOAJ Directory of Open Access Journals, ROAD: Directory of Open Access Scholarly Resources, EZB-FREE-00999 freely available EZB journals, ProQuest Central, Hellenic Academic Libraries Link, Scopus, Springer Nature OA Free Journals, Elsevier ScienceDirect Journals, Research Library, ERIC, SAGE Premier 2018, SAGE Journals Premier 2022 (PREM2022), SAGE:Jisc Collections:SAGE Journals Read and Publish 2023–2024: Reading List, Business Source Complete, EBSCOhost Academic Search Complete, Sage Premier Journal Collection, Social Sciences Citation Index (Web of Science), IngentaConnect Journals, Education Database, Political Science Database. The use of the institutional database as a gateway helped consolidate search returns across multiple databases and reduced the handling of duplicate records, although a final manual check for duplicates was still undertaken during screening because indexing and metadata can vary across databases. The full range of these databases was used in order to take advantage of the complementary search benefits that each database offers, given the relatively limited number of publications on the topic of HE and AI policies. This also helped us avoid being limited by the shortcomings of any single database [
43]. Platform-level filters were applied to restrict results to peer-reviewed journal articles, English-language publications, and open-access full text available through the interface.
The full search string used for the institutional database search was: (‘Artificial Intelligence’ OR ‘AI’) AND (‘Higher Education’ OR ‘University’) AND ‘Policy’. The use of the term ‘policy’ was intended to maintain a specific focus on scholarship explicitly addressing AI policy in higher education rather than the broader field of AI governance or educational technology responses more generally. However, this choice may also have narrowed retrieval by excluding conceptually relevant studies using adjacent terms such as governance, regulation, guidance, ethics frameworks, or institutional rules without explicitly using the word ‘policy’. This potential retrieval bias should be borne in mind when interpreting the size and scope of the final corpus.
Figure 1 below demonstrates the search procedure that was applied, and
Table 1 that follows explains the inclusion and exclusion criteria used. The open-access criterion was used as a pragmatic boundary of the review design to ensure accessibility and transparency of the analysed corpus; however, this decision may also have excluded relevant peer-reviewed studies published behind paywalls and therefore constitutes a limitation of the review.
Data were extracted into a structured spreadsheet by one reviewer and checked for completeness before synthesis. No outcome measures were sought, as this review synthesised qualitative/conceptual evidence rather than intervention effects. We extracted the following data items: author/year; country/setting; document/study type; policy level (international/national/institutional); stated aims/scope; AI policy themes/topics (e.g., integrity, privacy, equity, accuracy); and reported governance approaches/recommendations. All records were screened manually by the lead author. Titles, abstracts, and full texts were assessed against the eligibility criteria. No independent second screening, adjudication, or intercoder verification was undertaken. This single-reviewer design increases the possibility of subjective inclusion decisions and selective extraction and should therefore be considered a methodological limitation of the review.
Given the conceptual/thematic nature of the included literature, we did not undertake a formal risk-of-bias/quality appraisal, nor did we calculate effect measures or assess reporting bias or certainty of evidence. All included studies contributed to a single qualitative thematic synthesis; extracted information was standardised in a spreadsheet and synthesised through iterative coding and theme refinement. As a result, the synthesis does not differentiate between studies according to a formal hierarchy of evidentiary strength. The findings should therefore be read as a thematic mapping of the peer-reviewed literature rather than as a ranked assessment of source robustness.
In this review, ‘original research’ was understood broadly to include peer-reviewed studies that made an explicit analytical contribution through stated methods, whether by primary empirical investigation or by structured review-based synthesis. Systematic literature reviews were retained where they met the eligibility criteria because, in an emergent field with a limited evidence base, they form part of the research conversation on AI policy in higher education. These review articles were treated as individual studies within the corpus and analysed at the level of their arguments, themes, and recommendations, rather than as repositories of claims to be counted alongside the studies they reviewed.
The titles and abstracts from the 14 results returned in the second stage of the literature search procedure (
Figure 1) were read during the third stage with inclusion/exclusion criterion 5 (
Table 1) in mind and to identify and exclude duplicates. Three duplicates were deleted. Eleven articles were ultimately included in the subsequent thematic analysis.
During analysis, each study was examined in relation to its context, stated focus, key policy concerns, principal findings, and policy recommendations, and these elements were then compared across the corpus to generate the final themes. The analysis followed Kushnir’s (2025) [
44] guide with the following four stages: (1) identifying key words/phrases and producing codes, (2) searching for tentative themes, (3) reviewing and establishing (and if needed—renaming, restructuring) themes, and (4) establishing the order (and hierarchy) of the themes. The three key areas of findings that were generated through the critical analysis and synthesis of the literature are outlined in the next section.
The unit of analysis was the individual included study and the extracted information relating to its context, focus, policy concerns, findings, and recommendations. Coding was iterative and primarily inductive, while being informed by the review question and the policy learning framework used to interpret the findings. Tentative themes were generated through repeated comparison across the extracted material and then refined into the three major findings areas presented in the Results section.
4. Results: Policy Learning in the HE Sector in Response to AI
Following de-duplication, 11 studies were included in the synthesis (
Figure 1). Records were excluded at screening/full text where they did not meet the eligibility criteria (most commonly because they were commentaries rather than original research). The included studies (see
Table 2) covered single-country, cross-national and institutional analyses, and all contributed to a single qualitative thematic synthesis; no formal risk-of-bias assessment, reporting-bias assessment, or certainty-of-evidence appraisal was undertaken due to the conceptual/thematic focus of the review.
The results below synthesise the included studies in terms of their main policy concerns, findings, and recommendations, rather than treating the literature as a purely descriptive or theoretical body of work.
The geography of the 11 studies is important for contextualising the findings presented below. Six articles present single-country studies of AI policies in HE in three settings, namely the USA [
45,
46], the UK [
47,
48], and Hong Kong [
23,
49]. One study analyses policies from the top 10 universities in each of the following six global regions: Africa, Asia, Europe, Latin America, North America, and Oceania [
50]. Another two studies focus on global AI policies applicable to HE, such as those from OECD [
51], UNESCO, and the European Commission, as well as a range of top universities from world rankings [
51,
52]. Two studies worked with university policies internationally: one drew on Eaton’s crowdsourced Google Doc resource [
1], and the other selected 20 universities recognised in the 2024 QS World University Rankings, with 10 located in North America, five in Europe, three in Australia, and two in Asia [
53]. While the corpus includes cross-national and global studies, it remains weighted toward Anglophone and globally prominent institutional contexts, which should be borne in mind when interpreting the findings.
The findings below are structured and discussed around the following key themes that have been emphasised in the literature: (1) why AI policy in HE is needed, (2) policy approaches to date, and (3) policy recommendations. While overlaps amongst these themes are acknowledged, the separations amongst them are preserved for analytical purposes.
4.1. Why AI Policy Is Needed in HE
Across the included studies, the main policy concerns clustered around academic integrity, ethical risk, access and equality, data governance, and the changing relationship between teaching, learning, and assessment.
The rapid expansion of AI has prompted significant responses across the international higher education community. AI’s growing presence requires higher education institutions to adapt established practices with careful attention to ethics, particularly in relation to academic integrity, equality, data privacy, and the accuracy and reliability of AI-generated information. Although efforts to develop relevant policies are underway, substantial gaps remain [
49,
50]. These challenges directly intersect with SDG4, which promotes equitable access to quality education and the development of relevant skills for the future. Without appropriate AI governance, gaps in access, biased algorithms, or compromised academic integrity may exacerbate existing inequalities rather than advance SDG4 targets. The reactions across the HE sector to AI’s presence have ranged from wishing for a ban on the use of AI [
45] to admitting that a ban is impossible and that adaptations have to be made instead [
48,
50], even though it triggers ‘anxiety and hesitation’ [
45].
4.1.1. Academic Integrity
For instance, in the spring term of 2023–2024, 80% of students across all areas in one UK university acknowledged using AI for learning [
48]. This high level of reported use should not be read as implying a similarly mature body of peer-reviewed research on AI policy in higher education; rather, it underlines the gap between rapid institutional uptake and the still limited scholarly literature on policy responses. The need to prepare ‘a future generation of GAI [Generative AI]-literate students’ [
50] (p. 1) requires HEIs to rethink their curriculum and assessment design practices as ‘some students use GenAI platforms as a substitute for learning rather than as a tool to enhance learning’ [
48] (p. 1) by cheating and plagiarising in the preparation of their assignments [
23,
45]. Current weak assessment design in the face of AI is a key problem [
1].
4.1.2. Equality
Equitable access to AI and to the benefits of its use poses important challenges for higher education. Geographical disparities in AI uptake, with institutions in the Global North generally better positioned than those in many Global South contexts [
50], intersect with more local inequalities linked to class and access [
1,
50]. Moreover, some AI algorithms have also been found to be biased, reinforcing some inequalities. For instance, ‘Biased algorithms pose a significant threat, especially if used in admission or grading processes, as they could have devastating effects on students’ [
52]. Such patterns of unequal access contradict the SDG4 commitment to reducing disparities in education and highlight the need for AI policies attentive to digital equity.
4.1.3. Accuracy and Reliability
This point is tightly interlinked with the above: AI-generated factual content, summaries, feedback, and references raise concerns about accuracy and, therefore, reliability in higher education settings [
45,
49].
4.1.4. Data Privacy and Security
The privacy and security of sensitive or confidential data come to the forefront when we think of AI use in research as well as teaching, as instructors’ and students’ data need to be protected too [
45]. Avoiding the input of personal or confidential data into AI chat tools in any other area of work in the HE sector is a priority, and it is a significant concern when such input does happen [
50].
4.2. Policy Approaches to Date
The studies also point to several recurring modes of policy response, including restrictive measures, guidance-based approaches, pedagogical adaptation, and more collaborative or heterarchical forms of governance. Given the concerns outlined above, it is no surprise that work towards developing different forms of advisory and regulatory approaches at various levels has already taken place. Inspired by Dolowitz and Marsh’s (2000) question-based approach to analysing policy transfer [
54], we aim to unpack the when, how, who, and what of this policy development, as well as its consequences to date.
4.2.1. When
The development of AI policies has been slow. In the current technological context, AI policy development in higher education appears to have been largely reactive, often driven by attempts to catch up with the misuse of AI in student assignments and related academic integrity concerns. For instance, by May 2023, fewer than one-third of the world’s top 500 universities had a policy on AI usage, and one-third of those chose to ban ChatGPT [
49]. First of all, ChatGPT is only one AI tool, and its ban disregards the availability of other ‘similar systems from rival tech companies such as Google’s Gemini, GitHub’s Copilot, Microsoft’s Copilot, and Anthropic’s Claude’ [
47] (p. 2). Second, banning AI usage is arguably counterproductive, as the absence of mechanisms to trace all instances of AI use renders any ban ineffective. Finally, although the number of universities with at least some AI policies has increased, many of these policies have focused on bans, which appear to be of limited effectiveness.
4.2.2. How
There has been a shift towards accepting the need to adapt to AI’s existence rather than unsuccessfully attempting to ban it. An ‘open but cautious approach’ has been taken to developing regulatory mechanisms for adjusting the HE sector to the existence of AI [45] (p. 1). The two main ways of developing AI policy in HE have involved, first, a ‘heterarchy’, which is a system with multiple governing principles and interconnected units [
47] (p. 568), and second, hard and soft tools [
51].
The development of AI policy in HE has been happening in ‘a policy network showing signs of a heterarchy permeated by neoliberal rationales and populated by policy actors actively promoting artificial intelligence technologies to be used in education’ [
47] (p. 568). The hybrid top-down/centralised and bottom-up approaches, as the names suggest, relate to the source of policy ideas and the direction of their transfer in this policy network. These are, arguably, hybrid approaches due to multiple overlaps amongst the voices of policy actors that ultimately represent a so-called level of policy-making. For instance, the UNESCO (2023) Guidance, analysed in Sallai et al. (2024) [
48] and presented below, was composed by multiple actors from different levels of policy-making, such as UNESCO officials and academics from universities.
Some international organisations have put forward guidelines for AI policy development, which they have been reworking and updating. Here are a few documents that concerned the use of AI broadly, not specifically in the area of education/higher education, as follows:
OECD (2019): Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449);
High-Level Expert Group on Artificial Intelligence (EUHLEX) (2019): Ethics Guidelines for Trustworthy AI (set up by the European Commission);
European Commission (2021): Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021/0106 (COD));
UNESCO (2022): Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/PI/2021/1) [
51].
While not always explicitly framed through the SDG agenda, these international guidelines align with SDG4 principles, particularly the safeguarding of educational equity, learner protection, and high-quality learning outcomes in the face of technological change. Subsequent documents from international organisations about the use of AI specifically in the area of education and research have appeared more recently.
While some of these documents serve as guidance for ethical use of AI in education and research, such as UNESCO’s 2023 Guidance for Generative AI in Education and Research, others are legally binding. For example, the Council of Europe’s 2024 Framework is based on the document from UNESCO and aims at ensuring a rights-based approach to AI use in education through legally binding instruments [
48].
The national policy-making level internationally has adopted similar documents drawing on the international guidelines and/or requirements. In the UK, for instance, the Department for Education (DfE) issued a policy paper in 2023,
Generative artificial intelligence in education, which emphasises both AI’s potential in education to reduce workloads and the risks that the use of AI poses [
48].
In addition to the above examples of the top-down/centralised approach to the development of AI policy in HE, the bottom-up approach of voices from universities has played a tremendously important role too [
47,
51]. It has allowed a great degree of flexibility for university actors in integrating AI into their practice.
In HE, AI policy development involves a ‘heterarchy’ where top-down ethical imperatives ensure student assignments reflect individual knowledge, while bottom-up approaches give instructors flexibility in using generative AI. For instance, Alqahtani and Wafula (2025) [
46] mention that in almost all universities analysed, teaching staff can determine how AI is used in their courses or modules. A combination of these different approaches in the AI policy development heterarchy includes hard preventive measures and soft, dialogue-based sanctioning procedures [
47] (p. 568).
The effectiveness of these hard and soft measures [
49] in dealing with unethical uses of AI in higher education cannot yet be fully assessed from the reviewed literature, as the studies provide little evidence on enforcement, implementation, compliance, or behavioural outcomes.
4.2.3. Who
The above discussion has already revealed the key international players in policy generation on AI in HE: UNESCO, the OECD, the European Commission, and the Council of Europe. Aside from these, national governments have also made their voices heard, such as the DfE in the UK, as illustrated earlier. Moreover, universities have, arguably, been the most active actors in the development of relevant policies. These bodies also serve as stewards of SDG4, situating AI-related educational policy within a wider global framework committed to equitable, ethical, and sustainable educational development.
It is essential to emphasise the role of academics at universities in developing AI policies in the higher education sector, where they hold a central position as they also act as ‘entrepreneurs and business people’ [
47] (p. 570). Their connections with, and various roles within, the technology business sector underpin their key position in propelling the development of AI policies for the HE sector. The policy network analysis for AI policy development in HE that Gellai (2023) [
47] (p. 570) conducted reveals that ‘Out of the 67 nodes identified, 22 (32.83%) were academics, universities, or university-affiliated organizations (such as research centres or tech incubators in universities). Moreover, when the nodes are ordered by the number of connections, of the top three of the network, approximately two-thirds are universities, academics, or university-affiliated organizations’. Aside from this, the role of academics should also be acknowledged with regard to developing the policy recommendations outlined in
Section 4.3.
Student voices have also been acknowledged in AI policy generation [
1].
4.2.4. What
There are three main interconnected products of all this work. First, it has resulted in the development of written guidelines for the ethical use of AI at universities [
45,
50] and many universities have displayed their AI policies on their websites [
1].
The second related product of AI policy generation is the provision of relevant training for staff and students ‘to foster GAI [Generative AI] literacy’ [
50] (p. 1) in the form of ‘diverse types of resources, such as syllabus templates, workshops, shared articles, and one-on-one consultations; focusing on a range of topics, namely general technical introduction, ethical concerns, pedagogical applications, preventive strategies, data privacy, limitations, and detective tools’ [
45] (p. 1) such as GPTZero.
The third key product is evolving curricula and designing ‘multifaceted evaluation strategies’ [
45] (p. 1) to ‘mitigate [AI] misuse’ [
50] (p. 1). Sallai et al. [
48] suggest that, for instance, essays may be on the way out, particularly given that scholars had warned of the death of the essay even before AI.
4.2.5. Consequences
While advancements in AI policy in HE should be acknowledged, as evidenced by the above, significant gaps remain [
49,
As explained above, many universities have taken a long time to develop any guidelines, many still do not have any, and those that do must still address open questions or abandon the unproductive aspiration of banning AI in HE.
All in all, the heterarchy and the differing paces of policy generation ‘could lead to inconsistency in students’ learning experiences in different courses due to a lack of standardization’, among other drawbacks [
46].
4.3. Policy Recommendations
When the recommendations across the included studies are compared, four recurring priorities emerge: clearer institutional guidance, curriculum and assessment reform, collaborative governance, and stronger attention to equity and inclusion. Across the reviewed studies, policy recommendations converge around the need to move beyond ad hoc or prohibition-based responses towards more structured and educationally grounded governance of AI in higher education. A first area of convergence concerns the need for clearer institutional guidance that is both discipline-specific and audience-focused. Rather than relying on generic statements, the literature suggests that staff and students require tailored guidance reflecting the different ways AI is used across disciplines, roles, and forms of assessment [
1,
45].
Closely related to this is a second recurring recommendation concerning curriculum and assessment reform. Several studies argue that AI can no longer be treated simply as an external threat to academic integrity, but must be addressed through the redesign of learning outcomes, teaching practices, and evaluation strategies so that students develop the capacity to use AI critically and ethically [
23,
45,
48,
49].
A third theme concerns governance processes. The literature broadly converges on the importance of collaboration across stakeholders, including university leaders, academics, students, and actors beyond institutions [
46,
50,
52]. However, the reviewed studies also point to an unresolved tension between consistency and flexibility. On the one hand, more coordinated institutional approaches are needed to avoid fragmented student experiences; on the other hand, teaching staff require some discretion to adapt AI-related expectations to disciplinary and pedagogical contexts.
A fourth theme relates to equity and inclusion. Some studies emphasise that AI policy should explicitly address vulnerable groups and potential inequalities, including gender bias, differential confidence in AI use, and the position of non-native English speakers [
49,
52,
55,
56]. Yet compared with the stronger attention given in the literature to academic integrity and assessment, these equity concerns remain less fully developed.
5. Discussion
Within the reviewed corpus, the development of AI policies in higher education appears slow and reactive, often driven by attempts to catch up with the use and misuse of AI in academic work [
48,
49]. Reacting to events as they unfold, gradually learning how to respond, illustrates the sector’s inability to revolutionise itself in response to the explosive development of AI. This is to be expected, given that layering is a common policy-learning phenomenon [
34], also evident in the case of AI policy learning in the HE sector.
The findings of this review suggest that the literature on AI policy in higher education is still emerging and is concentrated around a relatively narrow set of concerns, particularly academic integrity, assessment, ethics, and institutional guidance. This indicates that current scholarship is stronger in identifying immediate institutional challenges than in evaluating longer-term policy effectiveness, implementation, or equity across contexts. The review, therefore, contributes not only by summarising existing themes but also by showing where the literature remains conceptually and geographically uneven. More recent work also suggests that these gaps are not only institutional or geographic but epistemic, with emerging calls for AI governance frameworks attentive to Indigenous data sovereignty, decolonial ethics, and the right of communities to refuse extractive digital systems [
57]. The synthesis of recommendations also points to an unresolved governance tension. While institutions need more coordinated approaches in order to avoid fragmented student experiences, teaching staff also require discretion to adapt AI-related expectations to disciplinary and pedagogical contexts. This suggests that effective AI governance in higher education is unlikely to rest either on purely centralised control or on wholly decentralised practice, but instead requires negotiated coordination across levels. The review also shows that, although AI governance is frequently linked to ethical and SDG4-oriented aspirations, the practical operationalisation of equity within policy remains comparatively underdeveloped.
The key characteristics of layering, such as its collective, creative and messy nature [
30,
33], also surface clearly in the analysis of AI policy learning in HE. The messiness of policy learning theorised [
30,
33,
38,
39] is evident in the oscillation between extremes of decision-making: whether to ban AI in HE [
43] or instead to adjust to its existence [
48,
50]. Aside from this, the collective nature of the policy-making process around the use of AI in HE is traceable in the arguments about a ‘heterarchy’ in this process, which is a system with multiple governing principles and interconnected units [
47] (p. 568). With academics leading the way [
47], voices cutting across the domains of universities, national governments, and international organisations have been present [
44,
46,
49]. These inconsistencies also risk undermining SDG4 objectives, as unequal institutional responses to AI may deepen educational disparities both within and across national systems.
This pattern of layering can be seen particularly clearly in studies showing that universities have responded to generative AI not through wholesale policy replacement, but by adding new guidance, workshops, syllabus statements, and assessment rules onto existing governance arrangements [
45,
53]. It is also evident in the UK case of heterarchical policy development, where universities, academics, governments, and international actors contribute overlapping and incremental policy adjustments rather than a single coherent redesign of the system [
47].
The surface-level nature of most of the policy recommendations presented in the analysed articles, and summarised above, calls for carrying on with the layering process in AI policy development in HE. The fast pace of AI development requires collective recognition of the need for more timely policy responses in order to maximise AI’s potential to enhance productivity across higher education while also preventing it from compromising academic integrity.
The evidence base remains limited and uneven, comprising a small number of studies (n = 11) concentrated in particular national and institutional contexts, and often relying on analyses of policy documents and guidance that may lag behind rapidly evolving practice. Our review also has methodological limitations: screening and extraction were conducted by a single reviewer. As independent duplicate screening and data extraction are generally recommended in systematic review guidance to reduce selection bias and extraction errors, our single-reviewer approach may have led to the omission of some relevant studies or to greater subjectivity in how evidence was selected and interpreted. The search was also restricted to English-language open-access peer-reviewed articles, and we did not undertake formal risk-of-bias, reporting-bias, or certainty assessment due to the conceptual/thematic focus. Nevertheless, the synthesis highlights practical and policy implications, including the need for clearer, audience-focused institutional guidance, investment in AI literacy and assessment redesign, and stronger cross-stakeholder coordination; future research should examine how these policies are implemented and with what consequences across different geopolitical contexts. The conclusions of this review should therefore be understood as applying to the characteristics of this specific peer-reviewed corpus rather than as definitive claims about higher education globally.
6. Conclusions
Overall, this review shows that AI policy development in higher education remains uneven, reactive, and layered, with institutions and policy actors incrementally adapting existing governance arrangements rather than replacing them. The literature converges on the need for clearer guidance, stronger pedagogic adaptation, and better coordination across stakeholders, but it also reveals unresolved tensions between consistency and flexibility and a comparatively underdeveloped treatment of equity. These findings should, therefore, be read as indicative rather than globally generalisable while still pointing to the importance of AI governance that supports quality, inclusion, and ethical practice in line with SDG4.
This review should be read in light of several limitations. The evidence base remains small and geographically uneven, and the review was restricted to peer-reviewed English-language open-access journal articles, with screening and extraction undertaken by one reviewer. Future research should expand the geographical coverage of the field, include stronger evidence on policy implementation and effectiveness, and examine how AI governance in higher education develops across different institutional and national contexts over time.