Systematic Review

Navigating the Complexity of Generative Artificial Intelligence in Higher Education: A Systematic Literature Review

by Birago Amofa 1, Xebiso Blessing Kamudyariwa 1, Fatima Araujo Pereira Fernandes 2, Oluyomi Abayomi Osobajo 1, Faith Jeremiah 3 and Adekunle Oke 4,*

1 Aberdeen Business School, Robert Gordon University, Aberdeen AB10 7QE, UK
2 School of Business Management and Creativity, Arden University, London EC1N 2LX, UK
3 Faculty of Agribusiness and Commerce, Lincoln University, Christchurch 7647, New Zealand
4 Leeds Business School, Leeds Beckett University, Leeds LS1 F3HL, UK
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 826; https://doi.org/10.3390/educsci15070826
Submission received: 18 April 2025 / Revised: 17 June 2025 / Accepted: 22 June 2025 / Published: 29 June 2025
(This article belongs to the Topic Generative Artificial Intelligence in Higher Education)

Abstract

Technological innovation has transformed educational settings, enabling artificial intelligence (AI)-driven teaching and learning processes. While AI is still in its embryonic stage in education, generative artificial intelligence has evolved rapidly, significantly shifting the teaching and learning context. With no clarity about the impacts of generative artificial intelligence on education, there is a need to synthesise research findings to demystify generative artificial intelligence and address concerns regarding its application in the teaching and learning process. This paper systematically synthesises studies on generative artificial intelligence in teaching and learning to understand key arguments and stakeholders’ perceptions of generative artificial intelligence in teaching and learning. The systematic review reveals five main domains of research within the field: (i) current awareness (understanding) of generative artificial intelligence, (ii) stakeholder perceptions, (iii) mechanisms for adopting generative artificial intelligence, (iv) issues and challenges of implementing generative artificial intelligence, and (v) contributions of generative artificial intelligence to student performance. This review examines the practical and policy implications of generative artificial intelligence, providing recommendations to address the concerns and challenges associated with generative artificial intelligence-driven teaching and learning processes.

1. Introduction

The Fourth Industrial Revolution (4IR) has transformed the way learners engage in Teaching and Learning (T&L) (Oke & Fernandes, 2020). Since the COVID-19 pandemic, the education sector has witnessed an unprecedented diffusion and adoption of technological innovations, particularly artificial intelligence (AI) (Barrett & Pack, 2023; Osobajo & Oke, 2022). The emergence of Generative Artificial Intelligence (GAI) in 2022 has further revolutionised the educational landscape, offering new opportunities and challenges in T&L (Cox et al., 2024; Kelly et al., 2023).
The receptiveness of the education sector to technological advances, particularly 4IR (Oke & Fernandes, 2020), has contributed to the rapid diffusion of GAI in higher education, supporting diverse tasks and allowing users to generate new content using prompts. This has reshaped how instructional materials are developed, how students are engaged, and how assessment methods are designed, sparking stakeholder interest and concern (Almasre, 2024). On the one hand, GAI is a powerful tool promoting deep learning (Barrett & Pack, 2023) and enhancing student engagement (Cox et al., 2024; van den Berg & du Plessis, 2023) through conversational AI agents and interactive platforms (Lee et al., 2023; Liang et al., 2023).
On the other hand, using GAI has raised concerns, such as ethical issues, that must be addressed to legitimise its application in T&L (Singh, 2023). It is imperative to address whether digital technology in T&L contributes to teaching quality and/or enhances learners’ performance and experience (Oke & Fernandes, 2020). As stakeholders navigate the complex landscape of GAI, the growing debates and concerns about GAI may impact its application in T&L (Chan & Hu, 2023). While the utility of digital technology in facilitating T&L has been recognised, its scale, scope, speed, complexity, and disruptive abilities contribute to the lack of a holistic understanding of how it affects learning (Oke & Fernandes, 2020).
While there is a growing body of practical experimentation with GAI in educational contexts (Yang et al., 2024), a coherent theoretical discourse that bridges its technical capabilities with pedagogical principles remains underdeveloped (Mustafa et al., 2024). This gap highlights the pressing need for interdisciplinary research that integrates GAI’s affordances with robust educational theories to guide effective and ethical implementation in T&L (Mustafa et al., 2024; Salinas-Navarro et al., 2024). This paper addresses this gap by systematically analysing peer-reviewed empirical articles on GAI in T&L to identify and categorise prevalent themes, providing a robust understanding of GAI in education. This systematic synthesis of empirical peer-reviewed articles on GAI answers the following research questions:
RQ1. 
What are the key trends and the research landscape in GAI in teaching and learning?
RQ2. 
What are the key themes from the literature on GAI in the education sector?
RQ3. 
What is the future research direction on GAI in teaching and learning?
This review offers a comprehensive understanding and provides valuable insights into GAI in T&L, highlighting stakeholder perceptions and concerns associated with its application in T&L and its impact on learning performance. It specifically responds to Oke and Fernandes’ (2020) concern regarding technological innovation’s ethical, pedagogical, and epistemological implications in enhancing learners’ experience and employability through T&L. In doing so, this review contributes to the ongoing discussion on academic integrity (Singh, 2023), student agency (Yang et al., 2024), and cross-disciplinary application (Kelly et al., 2023) of GAI. The findings inform stakeholders of the need to harness GAI’s potential and address its complexities in higher education settings for enhanced student performance.

2. Research Methods

2.1. Search Procedure

This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology (Page et al., 2021). The review protocol (Table 1) outlines the criteria for selecting relevant studies.
This review utilised Scopus, EBSCOhost’s ERIC (Education Resources Information Center), and Web of Science (WoS) to ensure a comprehensive search of the relevant literature regardless of indexing location (Amofa et al., 2023), given the emergent nature of GAI in T&L. These databases are widely recognised as appropriate and robust for conducting systematic reviews (Gusenbauer & Haddaway, 2020). ERIC was specifically included due to its focus on education research and its function as a dedicated repository of the peer-reviewed and grey literature in the field.
Inclusion and exclusion criteria (see Table 1) were systematically applied to ensure the relevance and scholarly rigour of the reviewed studies. Conference proceedings, books, book chapters, opinion pieces (including blogs), and secondary reviews (e.g., systematic reviews, bibliometric analyses, and meta-analyses) were excluded to maintain methodological consistency and enhance the credibility of findings. To ensure breadth and contextual diversity, the review incorporated studies across all languages, time periods, and geographical regions, facilitating a comprehensive understanding of the application of GAI in diverse settings.

2.2. Search Steps and Article Selection

The initial literature search was conducted on 6 April 2024, and subsequently updated on 16 December 2024, to enhance the robustness and completeness of the review process. Predefined search criteria and keywords (see Table 1) were used to ensure methodological rigour, particularly the reliability, verifiability, and reproducibility of the procedure (Denyer & Tranfield, 2009).
  • Steps 1 and 2: Identification Phase
During the identification phase, the search was conducted using the exact phrases “generative artificial intelligence” OR “generative AI” within the titles, abstracts, and keyword fields of articles indexed in Scopus, ERIC (via EBSCOhost), and Web of Science (WoS). This targeted strategy was adopted to maximise the retrieval of articles with a substantive focus on GAI in educational contexts. The initial search yielded a total of 18,295 records, comprising 8328 from Scopus, 334 from ERIC, and 9633 from WoS (see Figure 1).
To further refine the search and ensure relevance to the T&L context, the second keyword phrase “teaching and learning” was added to the original query. This adjustment substantially reduced the dataset by excluding articles unrelated to educational settings. Specifically, 8100 records were excluded from Scopus, 90 from ERIC, and 9447 from Web of Science. The refined search yielded a total of 658 records across the three databases (see Figure 1).
  • Step 3: Screening Phase
In the screening phase, records that were not peer-reviewed articles were systematically excluded. This step removed 118 records from Scopus, 188 from ERIC, and 69 from WoS. A total of 283 eligible publications were retained following this filtering. These records were downloaded and imported into Mendeley, a reference management software, for duplicate detection and further screening.
  • Step 4: Final Inclusion
After merging records across the three databases, 87 duplicate entries were identified and removed. A subsequent manual relevance check was conducted to assess alignment with the inclusion criteria, resulting in the exclusion of an additional 56 articles. The final sample consisted of 140 peer-reviewed articles, which served as the basis for the subsequent full-text analysis in this systematic literature review.
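The search and screening steps above lend themselves to a compact illustration. The Python sketch below restates the record counts reported in this section (and in Figure 1) and verifies that they add up to the final sample of 140 articles. The Scopus-style query string is an assumed rendering of the keyword strategy (the exact field codes used by the authors are not reported), and the script is illustrative rather than part of the review tooling.

```python
# Assumed Scopus-style rendering of the two-stage keyword strategy
# (ERIC and WoS use equivalent field tags); illustrative only.
refined_query = (
    'TITLE-ABS-KEY("generative artificial intelligence" OR "generative AI") '
    'AND TITLE-ABS-KEY("teaching and learning")'
)

# Record counts as reported in Steps 1-4 and Figure 1.
identified = {"Scopus": 8328, "ERIC": 334, "WoS": 9633}      # Steps 1-2: initial keyword search
excluded_tl = {"Scopus": 8100, "ERIC": 90, "WoS": 9447}      # removed after adding "teaching and learning"
not_peer_reviewed = {"Scopus": 118, "ERIC": 188, "WoS": 69}  # Step 3: non-article records removed
duplicates, manual_exclusions = 87, 56                       # Step 4

refined = {db: identified[db] - excluded_tl[db] for db in identified}
screened = {db: refined[db] - not_peer_reviewed[db] for db in refined}

assert sum(identified.values()) == 18295  # total initial records
assert sum(refined.values()) == 658       # records relevant to T&L
assert sum(screened.values()) == 283      # peer-reviewed articles retained

final_sample = sum(screened.values()) - duplicates - manual_exclusions
assert final_sample == 140                # articles included in the SLR
print(refined_query)
print(f"Final sample: {final_sample} articles")
```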
The next sections present the analytical procedure and findings of the review. First, a descriptive analysis of current research development and direction on GAI in T&L is presented, discussing the publication years, journals, and subject areas of the 140 peer-reviewed articles deemed relevant for inclusion in this systematic literature review (SLR), as illustrated in Figure 1. Using thematic analysis, the second part identifies and synthesises relevant themes from these articles, highlighting perceptions and key arguments about the utility of GAI in T&L.

3. Descriptive Analysis

3.1. Number of Publications per Year

Figure 2 presents the annual distribution of peer-reviewed articles addressing GAI in T&L. The first identifiable publications on this topic emerged in 2023, aligning with the broader public and academic recognition of GAI’s transformative potential. While foundational work in GAI dates back to the 1960s, its application within educational contexts only gained significant traction following the release of ChatGPT (GPT-3.5 model) in late November 2022. The number of relevant publications increased by 267% between 2023 and 2024, reflecting a sharp escalation in scholarly interest and the urgency of understanding GAI’s pedagogical implications.
This sharp rise demonstrates the growing academic interest in the subject, likely driven by advancements in AI technologies and their increasing adoption in educational settings (Oke & Fernandes, 2020). The exponential increase indicates the increasing awareness of GAI’s relevance and impact, suggesting its potential to attract future research interest.
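The 267% figure is a simple year-on-year growth calculation. The exact yearly counts appear only in Figure 2, so the split below (roughly 30 articles in 2023 and 110 in 2024) is an assumed illustration chosen to be consistent with the 140-article total and the reported increase.

```python
# Assumed yearly split (illustrative only; the exact counts are shown in Figure 2).
# 30 + 110 = 140 included articles, consistent with the reported totals.
articles_2023, articles_2024 = 30, 110

growth_pct = (articles_2024 - articles_2023) / articles_2023 * 100
print(f"Increase between 2023 and 2024: {growth_pct:.0f}%")  # -> 267%
```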

3.2. Influential Journals

The analysis revealed that the 140 retrieved articles were published in more than 90 journals, with most journals publishing only one article, indicating the embryonic stage of GAI in T&L. Figure 3 shows the most influential journals in the field: the top 11 journals, each of which published three or more articles.
The Online Learning Journal published the highest number of articles (n = 7), representing approximately 5% of the reviewed literature. This was followed by Computers and Education: Artificial Intelligence, which contributed six articles. Several other journals, including the Journal of University Teaching and Learning Practice, Asian Journal of Distance Education, and Smart Learning Environments, each accounted for between 2% and 3% of the total publications. This distribution highlights the emerging and cross-disciplinary nature of scholarly engagement with GAI in T&L, with both education-specific and technologically oriented journals contributing to the discourse.

3.3. Subject Areas

According to the SCIMAGO subject classification, the reviewed studies were distributed across 36 distinct subject areas (see Figure 4), demonstrating the diffusion of GAI across diverse academic disciplines and highlighting growing interdisciplinary interest and academic engagement with GAI. The review defines education as encompassing studies in the field of teacher education, including in-service and pre-service teacher education. This category focuses on the preparation, training, and ongoing professional development of educators by examining various aspects of T&L, including the effectiveness of different teaching methods, the impact of teacher knowledge on student learning, and the influence of the social context on educational outcomes. For example, Thararattanasuwan and Prachagool (2024) explored the GAI perceptions of 45 undergraduate students enrolled in a teacher education qualification.
Interestingly, education dominates the subject areas, with most journals identifying it as their primary focus, reflecting its central role in T&L research. Computer Science Applications is the next most represented subject area, demonstrating the technological foundation of AI in education. Other notable subject areas include Computer Science, Linguistics, Language, Engineering, Medicine, and Psychology.

3.4. Country of Research Focus

The reviewed studies reflect a globally distributed but highly uneven research landscape in terms of country of research focus (Figure 5). While the United States (17), Hong Kong (9), South Africa (7), Australia (6), and China (6) account for the largest numbers of studies, other countries, including the United Kingdom (3), Thailand (3), Turkey (2), Canada (2), the Philippines (2), UAE (2), Singapore (2), and Vietnam (2), contributed to the existing studies on GAI in the education sector.
A further 21 countries were represented by a single study each, including India, Argentina, Poland, Saudi Arabia, and France, reflecting growing international interest but limited depth in most locations. Additionally, five studies were classified as “Global”, representing studies that focus on multicountry contexts. For example, Chan and Lee (2023) sampled participants from multiple locations, including Hong Kong, Mainland China, the United Kingdom/Ireland, North America, East Asia, and Australia.
According to our analysis, 50 of the 140 studies (36%) did not specify a geographic context; this lack of geographic focus typically occurs when human participants were not involved in the research. These were mainly studies that evaluated the performance of AI tools, such as ChatGPT, Bard, or Khanmigo, or that conducted prompt-based testing, content analysis of AI outputs, or document/policy reviews. Other studies were conceptual, theorising or critiquing AI use broadly without anchoring the analysis to a specific geographical region or country. This suggests that AI tools, rather than their implementation by educators or students, are the focus of those studies, making geographical and/or study context less relevant. It further reinforces the dual nature of the emerging field of AI use in T&L, with human-centred and contextually grounded approaches representing one strand, and tool-centred, globally abstract approaches that transcend national boundaries representing another.

3.5. Frequently Used Research Methods

Authors investigating GAI in the education sector have employed various research methods to understand GAI in T&L. The research methods used in the reviewed articles fall into three categories: qualitative (65.7%), quantitative (25.7%), and mixed-methods (8.6%). As presented in Figure 6, conceptual studies feature prominently, indicating the embryonic state of research on GAI in T&L; however, experimental studies emerge as the dominant research method across the three categories. It is worth noting that not all the experimental studies involved human participants; some were based on the authors' interactions with GAI models/platforms to understand the models' behaviour in T&L contexts. For example, Jho and Ha (2024) compared human and machine assessment feedback by developing a web-based framework that provides automated assessment and feedback using ChatGPT. While some qualitative studies employed the experimental method, others adopted the diary method, allowing participants to reflect on their experiences and perceptions of GAI.
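For reference, the reported shares translate into approximate article counts as shown below; this is a back-calculation from the percentages above, not data taken from the authors' coding sheet.

```python
# Back-calculate approximate article counts from the reported method shares.
TOTAL_ARTICLES = 140
shares = {"qualitative": 65.7, "quantitative": 25.7, "mixed-methods": 8.6}

counts = {method: round(TOTAL_ARTICLES * pct / 100) for method, pct in shares.items()}
print(counts)  # {'qualitative': 92, 'quantitative': 36, 'mixed-methods': 12}

# The rounded counts recover the full sample, confirming the shares are internally consistent.
assert sum(counts.values()) == TOTAL_ARTICLES
```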

4. Thematic Analysis

NVivo software was employed to facilitate a centralised and systematic data management system, enabling the structured coding and analysis of the corpus. All 140 full-text articles were imported into NVivo, where they were organised and reviewed in alignment with the overarching research questions.
An inductive thematic analysis was undertaken to identify and synthesise emergent concepts from the data. Relevant segments of text were tagged with thematic labels, forming initial codes that captured key ideas, patterns, and discursive elements across the literature (Braun & Clarke, 2006). These codes were iteratively refined and grouped into higher-order themes representing stakeholders’ perceptions, practices, and concerns regarding the integration of GAI in T&L.
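To illustrate the coding workflow described above, the Python sketch below shows how tagged text segments can be grouped from initial codes into higher-order themes. The codes, themes, and segments are invented examples for illustration only; they do not reproduce the authors' NVivo codebook.

```python
# Illustrative sketch of an inductive coding workflow: text segments receive
# initial codes, which are then grouped into higher-order themes.
# Codes and themes here are simplified, hypothetical examples.
from collections import defaultdict

coded_segments = [
    ("Tutors value GAI for drafting lesson activities", "instructional efficiency"),
    ("Students fear GAI may weaken critical thinking", "over-reliance concerns"),
    ("No institutional policy guides acceptable GAI use", "policy gaps"),
]

# Hypothetical mapping of initial codes to higher-order themes.
theme_of = {
    "instructional efficiency": "Perceptions of GAI",
    "over-reliance concerns": "Issues and challenges",
    "policy gaps": "Issues and challenges",
}

themes = defaultdict(list)
for segment, code in coded_segments:
    themes[theme_of[code]].append((code, segment))

for theme, items in themes.items():
    print(theme, "->", [code for code, _ in items])
```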

4.1. Current Awareness of GAI

GAI is widely regarded as a versatile tool that presents both opportunities and complexities, necessitating thoughtful integration into T&L. The review reveals that knowledge of GAI remains limited, and its use is often broadly conceptualised in the literature. For instance, studies have explored GAI through diverse lenses, including tutor perspectives (Kaplan-Rakowski et al., 2023), student interactions (Kelly et al., 2023), perceived benefits and limitations (Klimova et al., 2024), critical content evaluation (Lee et al., 2023), content generation and student engagement (Liang et al., 2023), and assessment practices (Salinas-Navarro et al., 2024). The findings suggest that GAI can enhance student engagement and support content creation, thereby facilitating learning and development. Additionally, GAI is viewed as a consistent and objective tool, particularly valued for its role in supporting assignment development and student assessment (Almasre, 2024).
Compared to social sciences, GAI appears to offer more immediate applications in STEM disciplines (Kelly et al., 2023), although scholars emphasise the necessity of human oversight to ensure accuracy (Parra et al., 2024). GAI assists with writing and conceptual explanations; however, challenges persist in multilingual settings (Klimova et al., 2024; Li et al., 2023), highlighting the need for greater scrutiny of GAI outputs with respect to accuracy and contextual relevance.
The literature also documents variability in both the application and output quality of tools, such as ChatGPT, ChatGPT 3.5, Bing Chat, and Bard. Among these, ChatGPT 3.5 was frequently identified as the most consistent and effective, particularly in tasks involving assessment, content generation, language editing, and subject-specific problem-solving, such as geometry (Kaplan-Rakowski et al., 2023; Šedlbauer et al., 2024). Although ChatGPT 3.5 demonstrated objectivity and consistency in grading, studies noted that it tended to assign higher scores than human assessors (Jackaria et al., 2024; Pedersen, 2023). These discrepancies underscore the importance of human moderation in ensuring fairness, accuracy, and quality in GAI-assisted evaluation.

4.2. Perceptions of GAI in Higher Education

Stakeholders’ perceptions of GAI in higher education, particularly in T&L, are notably mixed. While its potential to enhance educational processes is widely acknowledged, stakeholders remain cautious and, at times, sceptical of its adoption. The literature reveals that students, tutors, and institutions hold varied perspectives on the efficacy of GAI and the complexities of its integration into educational settings. This section unpacks these perspectives, beginning with tutors.

4.2.1. Tutors’ Perceptions

Tutors’ perceptions are broadly categorised into positive/supportive and negative/critical (Figure 7).
Predominantly, negative perceptions stem from concerns around academic integrity and the unintended consequences of AI in educational contexts. Tutors also express apprehension regarding the ethical, pedagogical, and epistemological implications of GAI, citing limitations in reliability, credibility, and its potential to erode independent thinking and creativity (Barrett & Pack, 2023; Chan, 2023). They are especially wary of the unethical use of GAI, including plagiarism, misinformation, and bias in AI-generated content (Barrett & Pack, 2023; Singh, 2023). There is growing unease that students may exploit GAI to gain unfair academic advantage, potentially undermining the development of critical thinking and independent learning (Chan & Lee, 2023). Furthermore, implementation challenges, such as the absence of institutional policies, formal training, and pedagogical guidance, compound tutors’ reservations.
Despite such concerns, the consensus across the literature reflects a cautiously optimistic stance. Tutors appreciate GAI’s potential to enhance instructional efficiency, function as a supplemental resource, promote inclusive and interactive learning, and offer objectivity in assessment and feedback processes (Cox et al., 2024; Eager & Brunton, 2023). The ability of GAI to streamline educational content creation and support lesson planning is particularly valued, enabling educators to shift their focus towards more interactive, high-impact pedagogical activities (van den Berg & du Plessis, 2023; Shimizu et al., 2023). One commonly cited advantage is the speed, consistency, and objectivity with which GAI can support the design of lesson activities and assessment tasks (Almasre, 2024; Jackaria et al., 2024). Educators report that GAI can alleviate the burden of repetitive and time-intensive responsibilities, such as administrative tasks and resource generation, thus freeing tutors’ time for deeper student engagement.
These mixed perceptions may be due to a lack of knowledge about GAI and insufficient formal guidance or structure for implementing GAI. While many educators acknowledge GAI’s capacity to revolutionise pedagogical practice (Eager & Brunton, 2023; Singh, 2023), the literature strongly suggests that institutional investment in ongoing professional development is essential to equip tutors with the competencies needed to adapt to evolving AI-driven educational landscapes.

4.2.2. Students’ Perceptions

Student perceptions of GAI range from positive and enthusiastic to critical and sceptical, while others adopt a pragmatic stance, navigating the GAI terrain to enhance their learning experience (Figure 8).
Student Optimistic Perceptions
The reviewed studies reveal a notable generational divide in how students perceive and engage with GAI in educational settings. Compared to older cohorts, Generation Z students tend to hold more positive views and exhibit greater optimism regarding the potential benefits of GAI (Chan & Lee, 2023). As digital natives, this cohort is particularly receptive to emerging technologies and is more likely to embrace GAI as a tool for enhancing productivity, efficiency, and personalised learning (Chan & Hu, 2023; Lee et al., 2023).
A distinctive feature shaping Gen Z's favourable perception is GAI's conversational interface, which some students claimed offers them a form of social support. Its ability to make solitary academic tasks, such as reading and writing, feel more interactive contributes to its appeal (Yang et al., 2024). In addition to appreciating its technical affordances (Chan & Hu, 2023), students also note GAI's practical utility across multiple disciplines (Kelly et al., 2023). Despite these advantages, Gen Z students generally do not view GAI as a substitute for human instructors, expressing confidence that AI will complement, rather than replace, the pedagogical role of tutors (Chan, 2023).
Student Cautious Perceptions
In contrast, mature and older students tend to approach GAI with greater caution. Their scepticism is frequently informed by ethical considerations, including concerns about plagiarism and breaches of academic integrity associated with the misuse of AI in assessments (Barrett & Pack, 2023). For these students, the adoption of GAI in T&L raises broader ethical questions related to privacy, transparency, and social equity (Chan, 2023). These anxieties are compounded by the recognised limitations of GAI, such as its tendency to produce inaccurate or biased content or so-called “hallucinations”, and its lack of emotional intelligence and contextual sensitivity (Chan & Lee, 2023).
Perceptions of superficiality in AI-generated outputs further reinforce this scepticism. Some students view GAI’s responses as overly generic, difficult to verify, and lacking the intellectual depth required for advanced academic tasks (Chan & Hu, 2023; Šedlbauer et al., 2024). Moreover, there is growing awareness among students that an over-reliance on GAI may diminish critical thinking, reduce opportunities for creativity, and ultimately impair long-term employability (Chan & Hu, 2023; Yang et al., 2024).
Student Pragmatic Perceptions
In addition to students’ optimistic and sceptical views, a third category of student perception adopts a pragmatic stance toward the use of GAI in higher education. This viewpoint occupies a conceptual intersection between enthusiasm and cautiousness: students in this group acknowledge GAI’s practical benefits while remaining aware of its limitations. They view GAI as a supportive tool that necessitates guidance, critical reflection, and collaborative engagement to be effectively leveraged for enriched learning experiences (Chan & Hu, 2023; Yang et al., 2024).
This pragmatic orientation is often accompanied by a willingness to explore new technologies and develop the technical competencies necessary to use GAI constructively (Yang et al., 2024). Students in this category tend to demonstrate greater accountability and a proactive approach to their learning. Notably, even in the absence of prior subject-specific knowledge, such students can apply GAI to scaffold their understanding and contribute meaningfully to the learning process (Cox et al., 2024). This emergent disposition highlights GAI’s potential to support experiential and participatory learning, offering students hands-on opportunities to co-construct knowledge and engage more actively with academic content.

4.2.3. Institutional Perceptions

Institutional perspectives emphasise the critical need for strategic frameworks to guide the integration of GAI into T&L. The literature suggests that higher education institutions should assume proactive leadership roles by developing comprehensive policies that address both the opportunities and challenges associated with the adoption of GAI. For example, establishing explicit guidelines, training initiatives, and ethical protocols for both students and educators is regarded as essential to alleviate anxiety and uncertainty surrounding GAI in academic environments (Barrett & Pack, 2023).
Given the evolving nature of GAI, concerns raised by students and tutors regarding its potential misuse in T&L, particularly about academic integrity, underscore the need for institutional interventions. These include adaptable formal policies and the implementation of AI literacy programs designed to equip stakeholders with the skills necessary to navigate ethical complexities and mitigate the risks of misconduct (Chan, 2023). Such interventions would reaffirm institutional commitment to fostering responsible, transparent, and pedagogically sound use of GAI across educational contexts.

4.3. Students’ Attitudinal Profiles and Academic Impact of GAI Use

While the above findings capture the broad optimism, caution, and perceptions that characterise student sentiment, recent work suggests a more nuanced split between receptive and resistive learners, whose distinct engagement patterns shape academic outcomes. Thus, building on Yang et al.’s (2024) classification, the literature distinguishes two broad student profiles that correspond to patterns of GAI use, engagement, and learning outcomes: receptive and resistive. Table 2 synthesises these profiles across eight learning-relevant attributes.
As summarised in Table 2, receptive students are typically confident, exploratory, and strategically engaged with GAI. They treat GAI as a learning partner and respond particularly well to its affordances for writing, language learning, and problem-solving (Lee et al., 2023; Šedlbauer et al., 2024). This profile is prevalent in STEM-related programmes, where students report greater academic benefits mediated by self-efficacy and sustained cognitive engagement (Kelly et al., 2023; Liang et al., 2023). By contrast, resistive students approach GAI with scepticism and minimal engagement, often citing concerns about ethical use, trustworthiness, and disciplinary relevance (Chan & Hu, 2023; Yang et al., 2024). Learners in the health and social care fields are disproportionately represented in this group, exhibiting lower confidence and greater ambivalence toward GAI.
Importantly, several studies report a positive association between GAI use and academic performance, although this relationship is not deterministic. Task design, learning environment, and students’ metacognitive dispositions substantially moderate learning outcomes and/or performance (Ironsi, 2023; Šedlbauer et al., 2024); frequent use alone does not guarantee meaningful gains.

4.4. Mechanisms for GAI Adoption: Drivers and Initial Barriers

The mechanism for adopting GAI in T&L involves strategies that both enhance motivation and mitigate barriers to its effective implementation in educational contexts. As synthesised in Table 3, the review identifies a diverse range of motivational factors and obstacles that shape GAI adoption.
Interestingly, media coverage has emerged as a significant driver of GAI adoption, shaping perceptions among both students and tutors. Extensive coverage has sparked enthusiasm and pragmatic engagement among students, encouraging them to incorporate GAI tools into their academic toolkit (Yang et al., 2024). In parallel, GAI’s ease of use and broad web accessibility have facilitated widespread diffusion, even in the absence of formal institutional initiatives (Kaplan-Rakowski et al., 2023).
Pedagogically, the review indicates that student curiosity and a willingness to experiment are also key drivers of GAI use, particularly in support of academic performance. As previously highlighted, tools such as intelligent tutoring systems and conversational agents provide quick, personalised feedback, which aligns with student preferences for responsive and adaptive learning environments. In addition, the ability of GAI to deliver tailored learning experiences is well-documented, particularly its potential to transform traditional pedagogy by promoting efficiency, personalisation, and self-directed learning (Chan & Hu, 2023). This has been shown to reduce instructor bias in assessment and feedback (Lee et al., 2023) and may facilitate co-constructed learning experiences between students and educators, fostering increased agency, engagement, higher-order thinking skills, and depth in inquiry-based learning.
The review further highlighted GAI as a mechanism for enhancing cognitive engagement and productivity. Liang et al. (2023) found that students’ interactions with GAI positively influenced their academic outcomes through increased self-efficacy and cognitive engagement. Similarly, Šedlbauer et al. (2024) highlighted GAI’s potential to stimulate systems thinking and strategic problem-solving abilities, suggesting its broader impact on students’ cognitive and metacognitive development.
GAI also holds promise for enhancing language acquisition and developing communication skills. In language learning contexts, students found GAI beneficial for retrieving information, providing grammar explanations, and correcting essays (Klimova et al., 2024). Additionally, GAI has been shown to enhance communicative competence among learners, fostering collaboration and peer-to-peer engagement through content generation and idea brainstorming (Lee et al., 2023; Mateos-Blanco et al., 2024). Studies further suggest that integrating virtual reality (VR) with GAI, supported by natural language processing (NLP) algorithms, holds long-term potential for enhancing speaking and interpersonal communication skills in immersive learning environments (Ironsi, 2023; Mateos-Blanco et al., 2024).
Despite these drivers, the review also identified critical barriers to the adoption of GAI in educational settings. A prominent concern is the lack of formal training and institutional support (Barrett & Pack, 2023; Kaplan-Rakowski et al., 2023). For instance, 95.6% of educators reported that they have received no formal training on AI tools, while 94.1% indicated an absence of institutional policies to guide AI use (Barrett & Pack, 2023). Many tutors remain unaware of the low threshold required to engage with GAI tools, such as ChatGPT, further compounding resistance to adoption (Kaplan-Rakowski et al., 2023). Moreover, tutors' fears about GAI's impact on critical thinking and independent learning remain a key concern. Shimizu et al. (2023) noted that tutors fear that students' over-reliance on GAI may hinder their ability to think analytically and solve problems independently. These concerns are echoed by Chan and Hu (2023) and Liang et al. (2023), who argue that the habitual use of GAI could attenuate students' creativity and reflective reasoning.
The reliability and contextual appropriateness of GAI outputs also present challenges to its pedagogical value. Inaccurate, biased, or misleading content can undermine student understanding and derail learning processes (Chan & Lee, 2023; Parra et al., 2024). Furthermore, the inconsistent performance of GAI tools across disciplines and task types raises questions about their scalability for general T&L applications. Tutors are, therefore, encouraged to redesign assessments to maintain academic integrity and learning quality.
Finally, technical limitations constrain the broader application of GAI in T&L. For example, Klimova et al. (2024) identified issues such as false referencing, a lack of visual support, and the absence of critical thinking logic in the outputs. Language and cultural biases also present systemic limitations. GAI systems are predominantly trained in English, which risks marginalising students from non-English speaking backgrounds (Li et al., 2023). These linguistic and cultural biases, ingrained in the data used to train GAI models, must be addressed to create equitable and diverse educational contexts (Pedersen, 2023).

4.5. Issues and Challenges of Implementing GAI

Stakeholders' concerns about the use of GAI in higher education contexts underscore the urgent need for clear guidelines, targeted training, and continued technological refinement. Central to these concerns are challenges related to academic integrity, equity and inclusion, algorithmic transparency, and the adequacy of institutional support structures. These concerns are explored in detail below.

4.5.1. Accessibility and the Digital Divide

GAI integration raises significant concerns about digital inequality, especially for students from under-resourced contexts. Several studies highlight the importance of inclusive educational resources and equitable access to AI technologies. For example, Cox et al. (2024) stress the importance of GAI tools that reflect diverse cultural knowledge and help mitigate racial and gender biases. Disparities in GAI awareness are particularly pronounced between international and domestic students, as well as across generational lines. Older students may have had limited exposure to advanced technologies during their formative education, which can contribute to lower levels of confidence and adoption (Kelly et al., 2023).
Institutional inequities also shape adoption. Kaplan-Rakowski et al. (2023) note that institutional awareness of GAI is hindered by insufficient policy clarity, disproportionately affecting institutions in less affluent regions. Van Wyk (2024) warns that the commercialisation of GAI tools, often developed for profit by large technology companies, risks placing these technologies out of reach for students in developing economies. Addressing these disparities requires the removal of financial and infrastructural barriers, as well as the intentional design of inclusive AI policies and practices.

4.5.2. Institutional Support and Professional Development

The review reveals a consistent lack of institutional support and strategic policy as a significant barrier to the effective implementation of GAI. Many educators operate in environments where policies on AI use are either absent or ambiguous, which fosters mistrust and uncertainty (Barrett & Pack, 2023; Chan, 2023). Institutional guidance is critical for shaping ethical norms, aligning pedagogical practices, and integrating AI across teaching, learning, and assessment.
Effective integration requires more than policy; it demands ongoing professional development. Current research reveals a significant gap: most educators have received no formal training in AI, leaving them ill-equipped to guide students in the appropriate and ethical use of GAI (Kaplan-Rakowski et al., 2023). Institutions should prioritise investment in training programmes that enhance digital literacy, update pedagogical practices, and encourage reflective engagement with emerging technologies (Ironsi, 2023; Singh, 2023). To support this effort, Lee et al. (2023) propose the Know–Build–Critique framework, which emphasises a structured, critical, and collaborative approach to GAI integration. This model offers a conceptual foundation for curriculum design, resource allocation, and assessment reform.

4.5.3. Inclusivity and Cultural Representation

Beyond access, content inclusivity and linguistic diversity are critical concerns. The literature identifies persistent biases in GAI systems, many of which rely on English-language training data and Western-centric cultural assumptions (Li et al., 2023). These limit linguistic neutrality and marginalise students from diverse cultural and linguistic backgrounds.
Efforts to address this include the development of Open Educational Resources (OER) grounded in knowledge from the Global South, which could enrich GAI content and broaden its relevance (Cox et al., 2024). Multiple-language support, especially for underrepresented or minority languages, has also been identified as a pathway to greater inclusivity, allowing students to engage with content through translation, clarification, and culturally responsive pedagogy (Klimova et al., 2024).

4.5.4. Academic Dishonesty: Integrity and Ethical Issues

Concerns around plagiarism and ethical misconduct are a recurring theme in the literature on GAI in education. Students and educators alike express unease over the use of GAI tools to complete assignments without proper attribution. Barrett and Pack (2023) argue that institutions should treat the undisclosed use of GAI in writing tasks as a breach of academic integrity. The rise in such incidents, combined with the lack of clear institutional policies, exacerbates the problem (Kaplan-Rakowski et al., 2023).
To mitigate these risks, Chan (2023) advocates for robust risk management frameworks that encompass data privacy, algorithmic transparency, and security. While students’ confidence in using GAI responsibly may grow with experience, there remains a pressing need for institutional guidance that clarifies acceptable use and reinforces the importance of attribution and critical engagement (Kelly et al., 2023).

4.5.5. Societal and Personal Impacts of Using GAI in Education

Beyond institutional and technical concerns, the adoption of GAI raises deeper questions about its long-term impact on learners’ personal and professional development. Recently, scholars (e.g., Chan & Hu, 2023; Šedlbauer et al., 2024) have cautioned that over-reliance on GAI may erode critical thinking, creativity, and originality, which are core competencies essential for future employability. There is also growing anxiety among students about AI’s potential to displace human jobs, contributing to broader societal unease regarding automation and economic security. These findings suggest that educational institutions must not only address practical concerns but also cultivate AI literacy that is ethically grounded, critically reflective, and socially responsive.

5. Discussion and Conclusions

Given the documented perceptions and challenges in the literature, this review shows that addressing the integration of GAI in higher education requires clear institutional guidelines, robust pedagogical frameworks, and pragmatic assessment strategies. Van Wyk (2024) emphasises the urgency of embedding ethical considerations into curricula and advocates for zero-tolerance policies on academic dishonesty. This position aligns with Singh’s (2023) argument that universities must explicitly cultivate students’ ethical competencies to navigate global affairs intelligently and responsibly. To that end, institutions must prioritise critical AI literacy programs that foster informed, reflective, and accountable engagement with AI tools (Cox et al., 2024).
Institutional commitment is central to mitigating ethical concerns, particularly through investments in tutor development and structured student exposure to GAI technologies (Shimizu et al., 2023). Enhanced, hands-on engagement may increase students’ familiarity and foster more strategic, intentional use. The observed variability in student awareness further underscores the need for inclusive educational models that cater to diverse learner profiles and foster deeper cognitive engagement, critical thinking, and self-directed application of GAI (Chan & Hu, 2023; Yang et al., 2024).
Moreover, the design of assessments plays a pivotal role in fostering a reflective and responsible use of GAI. Assignments should include components, such as reflective writing, oral presentations, and technical demonstrations, to effectively assess student comprehension and the quality of AI-assisted outputs (Pham et al., 2023; Salinas-Navarro et al., 2024; Šedlbauer et al., 2024). Peer evaluation mechanisms can further promote transparency, filter misinformation, and enhance accountability in student work (Lee et al., 2023). Ultimately, the pedagogical emphasis must remain on creativity, deep reasoning, and authentic problem-solving to navigate the ethical and practical challenges of integrating GAI into T&L (Liang et al., 2023).

5.1. Practical and Policy Implications of GAI in Education

This review reveals a pressing need for robust, institution-wide policies to guide the integration of GAI in T&L, particularly in response to growing concerns about ethics, academic dishonesty, and data privacy (Chan & Lee, 2023; Kelly et al., 2023). Ongoing dialogue among stakeholders should prioritise not only the mitigation of risk but the proactive harnessing of GAI’s pedagogical benefits (Klimova et al., 2024). Clear and consistent institutional communication regarding the appropriate use of GAI is crucial for upholding academic integrity and fostering trust in its application (Kelly et al., 2023).
Therefore, adopting practical, forward-looking strategies for implementing GAI in educational settings is crucial. In line with Oke and Fernandes (2020), the education sector should adopt rather than restrict AI technologies, embracing enabling policies that facilitate the ethical, inclusive, and equitable adoption of these technologies (Barrett & Pack, 2023; Eager & Brunton, 2023). Such policies should encompass guidance on ethical use, address issues of inequality and accessibility, and provide clear expectations for both students and tutors.
In addition, institutions must implement comprehensive evaluation systems that explicitly position GAI as a support tool, complementary to, rather than a replacement for, student learning and academic effort (Liang et al., 2023). These systems should foster a culture of accountability, transparency, and reflective engagement, particularly in higher education contexts where independent thinking and academic integrity are paramount.

5.2. Implications for Future Research: Understanding Unknown Unknowns

While GAI offers significant potential, considerable uncertainty remains about its long-term effects on educational practices, particularly in terms of skill development, pedagogical transformation, and employment outcomes (Barrett & Pack, 2023; Kelly et al., 2023). Table 4 below outlines key domains where further research is urgently needed.
Significant uncertainties remain about the long-term implications of GAI for student learning, knowledge development, and academic skills. Thus, future research should investigate how GAI influences the development of critical competencies, such as collaboration, communication, creativity, critical thinking, and problem-solving, which are essential to lifelong learning and employability (Barrett & Pack, 2023; Kelly et al., 2023).
With growing concerns about AI-driven automation and the potential marginalisation of human educators, future inquiries should examine how GAI can augment, rather than replace, educators’ roles, particularly those involving emotional intelligence, ethical judgement, and interpersonal nuance (Chan & Lee, 2023; Kaplan-Rakowski et al., 2023). Research should also prioritise strategies that improve tutors’ capacity to detect and respond to the unauthorised or uncritical use of GAI in student submissions, thus safeguarding academic integrity. Moreover, technical investigations are needed to address algorithmic consistency, accuracy, and personalisation, particularly in diverse cultural and linguistic contexts. Addressing persistent issues related to bias, hallucination, and uneven performance is vital to the trustworthiness of AI tools in education.
Importantly, longitudinal research is required to examine the relationship between students’ engagement with GAI and their future employability and professional competencies. This includes exploring how GAI adoption may shape the development of disciplinary knowledge, self-regulated learning, and career readiness. Finally, the development of rigorous monitoring and evaluation systems is crucial for guiding the responsible integration of GAI across educational settings. Without such mechanisms, sustainable and meaningful adoption remains elusive. These open questions highlight the need for ongoing interdisciplinary research, informed policy, and evidence-based practice to unlock GAI’s transformative potential in education.

Author Contributions

Conceptualisation, A.O., B.A., O.A.O. and X.B.K.; methodology, A.O., B.A., O.A.O. and X.B.K.; software, A.O., B.A. and O.A.O.; validation, A.O., B.A., F.J., O.A.O. and X.B.K.; formal analysis, A.O., B.A. and O.A.O.; investigation, A.O., B.A., O.A.O. and X.B.K.; resources, B.A., F.A.P.F., F.J. and X.B.K.; data curation, A.O. and B.A.; writing—original draft, A.O., B.A., O.A.O. and X.B.K.; writing—review and editing, A.O., B.A., F.A.P.F., F.J., O.A.O. and X.B.K.; visualisation, A.O., B.A. and O.A.O.; supervision, B.A., O.A.O. and X.B.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no funding for this study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable; this study did not involve humans.

Data Availability Statement

All data generated or analysed during this study are included in this published article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Almasre, M. (2024). Development and evaluation of a custom GPT for the assessment of students’ designs in a typography course. Education Sciences, 14(2), 148.
  2. Amofa, B., Oke, A., & Morrison, Z. (2023). Mapping the trends of sustainable supply chain management research: A bibliometric analysis of peer-reviewed articles. Frontiers in Sustainability, 4, 1129046.
  3. Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1), 59.
  4. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
  5. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38.
  6. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
  7. Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learning Environments, 10(1), 60.
  8. Cox, G., Brown, R., Willmers, M., & Held, M. (2024). Learning along the way: A case study on a pedagogically innovative approach to engage medical students in the creation of open educational resources using ChatGPT. Mousaion, 42(1), 21.
  9. Denyer, D., & Tranfield, D. (2009). Producing a systematic review. In D. A. Buchanan, & A. Bryman (Eds.), The Sage handbook of organizational research methods (pp. 671–689). Sage Publications Ltd.
  10. Eager, B., & Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. Journal of University Teaching and Learning Practice, 20(5), 1–19.
  11. Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181–217.
  12. Ironsi, C. S. (2023). Investigating the use of virtual reality to improve speaking skills: Insights from students and teachers. Smart Learning Environments, 10(1), 53.
  13. Jackaria, P. M., Hajan, B. H., & Mastul, A. R. H. (2024). A comparative analysis of the rating of college students’ essays by ChatGPT versus human raters. International Journal of Learning, Teaching and Educational Research, 23(2), 478–492.
  14. Jho, H., & Ha, M. (2024). Towards effective argumentation: Design and implementation of a generative AI-based evaluation and feedback system. Journal of Baltic Science Education, 23(2), 280–291.
  15. Kaplan-Rakowski, R., Grotewold, K., Hartwick, P., & Papin, K. (2023). Generative AI and teachers’ perspectives on its implementation in education. Journal of Interactive Learning Research, 34(2), 313–338. Available online: https://www.learntechlib.org/primary/p/222363/ (accessed on 10 April 2024).
  16. Kelly, A., Sullivan, M., & Strampel, K. (2023). Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. Journal of University Teaching and Learning Practice, 20(6), 1–16.
  17. Klimova, B., Pikhart, M., & Al-Obaydi, L. H. (2024). Exploring the potential of ChatGPT for foreign language education at the university level. Frontiers in Psychology, 15, 1269319.
  18. Lee, A. V. Y., Tan, S. C., & Teo, C. L. (2023). Designs and practices using generative AI for sustainable student discourse and knowledge creation. Smart Learning Environments, 10(1), 59.
  19. Li, B., Kou, X., & Bonk, C. J. (2023). Embracing the disrupted language teaching and learning field: Analyzing YouTube content creation related to ChatGPT. Languages, 8(3), 197.
  20. Liang, J., Wang, L., Luo, J., Yan, Y., & Fan, C. (2023). The relationship between student interaction with generative artificial intelligence and learning achievement: Serial mediating roles of self-efficacy and cognitive engagement. Frontiers in Psychology, 14, 1285392.
  21. Mateos-Blanco, B., Álvarez-Ramos, E., Alejaldre-Biel, L., & Parrado-Collantes, M. (2024). Vademecum of artificial intelligence tools applied to the teaching of languages. Journal of Technology and Science Education, 14(1), 77–94.
  22. Mustafa, M. Y., Tlili, A., Lampropoulos, G., Huang, R., Jandrić, P., Zhao, J., Salha, S., Xu, L., Panda, S., López-Pernas, S., & Saqr, M. (2024). A systematic review of literature reviews on artificial intelligence in education (AIED): A roadmap to a future research agenda. Smart Learning Environments, 11(1), 1–33.
  23. Oke, A., & Fernandes, F. A. P. (2020). Innovations in teaching and learning: Exploring the perceptions of the education sector on the 4th industrial revolution (4IR). Journal of Open Innovation: Technology, Market, and Complexity, 6(2), 31.
  24. Osobajo, O. A., & Oke, A. (2022). Exploring learning for on-campus students transitioning to online learning during the COVID-19 pandemic: Perceptions of students in the higher education. Education Sciences, 12(11), 807.
  25. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. The BMJ, 372, n71.
  26. Parra, V., Sureda, P., Corica, A., Schiaffino, S., & Godoy, D. (2024). Can generative AI solve geometry problems? Strengths and weaknesses of LLMs for geometric reasoning in Spanish. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5), 65–74.
  27. Pedersen, I. (2023). The rise of generative AI and enculturating AI writing in postsecondary education. Frontiers in Artificial Intelligence, 6, 1259407.
  28. Pham, T., Nguyen, B., Ha, S., & Ngoc, T. N. (2023). Digital transformation in engineering education: Exploring the potential of AI-assisted learning. Australasian Journal of Educational Technology, 39(5), 1–19.
  29. Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342–363.
  30. Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Education Sciences, 14(1), 83.
  31. Shimizu, I., Kasai, H., Shikino, K., Araki, N., Takahashi, Z., Onodera, M., Kimura, Y., Tsukamoto, T., Yamauchi, K., Asahina, M., Ito, S., & Kawakami, E. (2023). Developing medical education curriculum reform strategies to address the impact of generative AI: Qualitative study. JMIR Medical Education, 9, e53466.
  32. Singh, M. (2023). Maintaining the integrity of the South African university: The impact of ChatGPT on plagiarism and scholarly writing. South African Journal of Higher Education, 37(5), 203–220.
  33. Šedlbauer, J., Činčera, J., Slavík, M., & Hartlová, A. (2024). Students’ reflections on their experience with ChatGPT. Journal of Computer Assisted Learning, 40(4), 1347–1986.
  34. Thararattanasuwan, K., & Prachagool, V. (2024). Exploring perspectives of teacher students toward generative AI technologies. International Education Studies, 17(5), 22–28.
  35. van den Berg, G., & du Plessis, E. (2023). ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences, 13(10), 998.
  36. Van Wyk, M. M. (2024). Is ChatGPT an opportunity or a threat? Preventive strategies employed by academics related to a GenAI-based LLM at a faculty of education. Journal of Applied Learning and Teaching, 7(1), 35–45.
  37. Wang, X., Li, L., Tan, S. C., Yang, L., & Lei, J. (2023). Preparing for AI-enhanced education: Conceptualizing and empirically examining teachers’ AI readiness. Computers in Human Behavior, 146, 107798.
  38. Yang, Y., Luo, J., Yang, M., Yang, R., & Chen, J. (2024). From surface to deep learning approaches with Generative AI in higher education: An analytical framework of student agency. Studies in Higher Education, 49, 817–830.
Figure 1. PRISMA diagram of the SLR.
Figure 2. Growth trajectory.
Figure 3. Influential journals.
Figure 4. Journal subject areas.
Figure 5. Country of research focus.
Figure 6. Research methods used by authors.
Figure 7. Tutors’ perceptions of GAI.
Figure 8. Student perceptions of GAI use in education.
Table 1. Review protocol.
Consideration | Details
Search Sources | Databases: Scopus, EBSCOhost’s ERIC database, Web of Science
Inclusion Criteria | Publication focus: generative AI in educational settings; article type: peer-reviewed articles (to ensure quality and credibility); language and publication year: no restrictions; geographical scope: studies from all geographical regions
Exclusions | Reviews, conference papers, books, book chapters, opinion pieces, editorials, and letters
Keywords Used | Phrases: “generative artificial intelligence” OR “generative AI” AND “teaching and learning”; search areas: article titles, abstracts, and keywords; purpose: ensure relevance to the intersection of generative AI and educational practices
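For illustration only, the Boolean phrase in Table 1 can be rendered in database-specific syntax; the following sketch is an assumption about how such a query might look rather than the authors’ exact search string. In Scopus advanced search it could be written as TITLE-ABS-KEY ( “generative artificial intelligence” OR “generative AI” ) AND TITLE-ABS-KEY ( “teaching and learning” ), and in Web of Science as TS = ( ( “generative artificial intelligence” OR “generative AI” ) AND “teaching and learning” ).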
Table 2. Comparison between receptive and resistive student attitudes towards GAI.
Attributes | Receptive Students | Resistive Students
Overall Attitude | Positive and engaged; view GAI as beneficial for academic performance (Chan & Hu, 2023; Klimova et al., 2024). | Sceptical and cautious about the utility of GAI (Chan & Hu, 2023; Yang et al., 2024).
Willingness to Use | Willing to integrate GAI into their studies and future work, with high expectations of its capabilities (Chan & Hu, 2023). | Limited, superficial use driven by dissatisfaction with the quality and relevance of outputs (Yang et al., 2024).
Engagement with Activities | Find AI-generated activities engaging and motivating, particularly in language learning contexts (Lee et al., 2023). | Sceptical and avoid further exploration of the tool owing to concerns about its practical utility (Rudolph et al., 2023; Wang et al., 2023).
Interaction Experience | Describe interactions with GAI as fun, rewarding, and fast; view the AI as a collaborator (Šedlbauer et al., 2024). | Interact with GAI in a limited, superficial way and express dissatisfaction (Yang et al., 2024).
Confidence in Use | Confidence in using GAI increases with experience (Kelly et al., 2023). | Concerned about accuracy and transparency, leading to lower confidence (Chan & Hu, 2023).
Learning Outcomes | Show a significant positive relationship between interaction with GAI and learning achievement, mediated by self-efficacy and cognitive engagement (Liang et al., 2023). | Doubt the educational value of GAI and worry that it may undermine university education (Chan & Hu, 2023).
Ethical and Accuracy Concerns | Less concerned about ethical issues and more confident in ethical use as experience grows (Kelly et al., 2023), although unthinking integration of GAI content into tasks raises ethical concerns (Yang et al., 2024). | Significant concerns about plagiarism, accuracy, transparency, and ethical implications (Chan & Hu, 2023).
Disciplinary Variations | Higher awareness and confidence in using GAI, especially in science and engineering disciplines (Kelly et al., 2023). | Lower awareness and confidence, particularly in healthcare disciplines (Kelly et al., 2023).
Table 3. Mechanisms for adopting GAI in education.
Motivations | Reference Authors
Personalised Learning and Tailored Assistance | (Chan & Hu, 2023; Chan & Lee, 2023)
Enhancing Language Learning and Communication Skills | (Ironsi, 2023; Klimova et al., 2024; Lee et al., 2023; Mateos-Blanco et al., 2024)
Enhancing Critical Thinking and Cognitive Engagement | (Lee et al., 2023; Liang et al., 2023; Šedlbauer et al., 2024)
Practical Applications | (Pham et al., 2023)
Meeting Modern Student Expectations | (Chan & Lee, 2023)
Extensive Media Coverage | (Kaplan-Rakowski et al., 2023; Yang et al., 2024)
Barriers | Reference Authors
Lack of Awareness and Training | (Barrett & Pack, 2023; Kaplan-Rakowski et al., 2023)
Accuracy and Reliability | (Chan & Lee, 2023; Klimova et al., 2024; Parra et al., 2024; van den Berg & du Plessis, 2023)
Socio-Cultural Shock, Ethical Issues, Biases in Outputs, and Privacy Issues | (Chan & Hu, 2023; Pedersen, 2023; Van Wyk, 2024)
Impact on Learning and Critical Thinking | (Chan & Hu, 2023; Liang et al., 2023; Shimizu et al., 2023)
Table 4. Suggested research questions.
Domain | Illustrative Research Questions
Skill formation | How does sustained GAI use shape teamwork, communication, and creative thinking?
Tutor roles | In what ways can AI augment rather than displace the affective and ethical dimensions of teaching?
Detection and integrity | Which algorithmic and pedagogical methods best identify undisclosed AI assistance?
Model bias and reliability | How can training data diversity reduce hallucinations and linguistic bias?
Labour-market outcomes | What is the longitudinal relationship between student GAI proficiency and employability?
