Review

The Impact of AI on Inclusivity in Higher Education: A Rapid Review

by José Manuel Cotilla Conceição 1,* and Esther van der Stappen 2,*
1 Avans School of International Studies, Avans University of Applied Sciences, 4818 CR Breda, The Netherlands
2 Centre of Applied Research Future-Proof Education, Avans University of Applied Sciences, 4818 CR Breda, The Netherlands
* Authors to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1255; https://doi.org/10.3390/educsci15091255
Submission received: 4 July 2025 / Revised: 4 September 2025 / Accepted: 17 September 2025 / Published: 19 September 2025
(This article belongs to the Section Higher Education)

Abstract

This paper examines the current implementation of Artificial Intelligence (AI) in higher education and its implications for inclusivity, particularly for minority groups. Using a rapid review methodology, it synthesises academic literature, policy reports, and case studies to explore how AI is reshaping educational environments. The analysis reveals that although AI technologies—such as adaptive learning systems, intelligent tutoring, and predictive analytics—are increasingly adopted, their primary aim remains institutional efficiency rather than fostering equity. Initiatives explicitly designed to support underrepresented students are rare, exposing a gap between technological innovation and inclusive practice. The study identifies key barriers, including socioeconomic inequality, cultural and linguistic bias, and limited institutional capacity, which are often compounded by AI systems trained on non-representative data. While isolated case studies demonstrate that (e.g., culturally) responsive AI can enhance educational access for marginalised learners, these remain exceptions rather than norms. The findings suggest that without deliberate efforts to embed inclusivity in AI design and deployment, existing inequalities may be perpetuated or worsened. The paper concludes that realising AI’s inclusive potential requires ethical frameworks, diverse development teams, and equitable access strategies. It calls for future empirical research focused on practical interventions that reduce disparities, contributing to a more just and inclusive higher education landscape.

1. Introduction

Artificial Intelligence (AI) has confronted society with a technology whose potential impact rivals that of the industrial revolution. The world is experiencing rapid changes in how its complex web of systems operates, generating a vivid public debate on whether AI will support society as an assisting force or destroy jobs by replacing human talent. The educational sector is one of the key domains where AI is acting as a disruptive element, transforming the ways in which students learn, teachers educate, and institutions function. These institutions are still in the process of understanding the role of AI, its power and reach, and how it can be effectively and ethically leveraged to support education (Latif et al., 2023). Students’ sudden access to chatbots like ChatGPT has caught educators unprepared to understand these resources as tools to support learning, rather than as threats to quality and integrity in learning (VE, 2023). While the potential of AI to personalise learning is widely acknowledged, there remains a critical research gap in understanding how, or whether, such technologies are inclusively serving minority1 student populations.
Educational institutions are initiating conversations on how they can face this new era of AI, make education more inclusive, accessible, and effective (Druga et al., 2022), and prepare themselves to address the challenges that may arise (Aliabadi et al., 2023; Kamalov et al., 2023). Educators, policymakers and researchers are examining the current developments and exploring ways to ensure that AI is applied in a structured manner with multi-layered considerations (Saputra et al., 2023). Despite the efforts of educational institutions to ensure responsible implementation of AI in education, there is a general lack of consensus (UNESCO survey, 2023), resulting in each institution developing its own interpretation of general guidelines and its own implementation (Artificial intelligence: Shaping Europe’s digital future, 2024).
Misinformation about the potential use of AI is present in different outlets (Farrow, 2023). There is an extensive list of publications highlighting the capabilities of AI in education, from tailoring the student learning path (Magomadov, 2020), through minimising administrative work (Ge & Hu, 2020), to analysing student performance and predicting outcomes (Neji et al., 2023). However, there is a lack of concrete evidence on how these technologies have actually impacted different populations, or on whether these findings can be generalised, raising concerns about our perception of them (Felix & Webb, 2024; Selwyn, 2024).
Education plays a determining role in society. It is largely responsible for the development of civilisation as we know it (Fend, 2001), shaping both how society functions and how it develops (Spiel et al., 2018). Cultural, societal and economic factors influence education. While the objective of education should be universal and is considered a human right (Office of the High Commissioner for Human Rights, 2023), access and experience have not been equitable, with marginalised communities being the most affected (Gillani et al., 2023). From students with different learning needs, minorities, and first-generation students to those from low-income backgrounds, the lack of inclusive practices and access to resources has been a persistent problem (2020 GEM report, 2022; Inclusive and connected higher education, 2022; Working class heroes, 2019).
In a fast-changing educational field, combined with the rapid development of AI, we risk perpetuating those inequalities and continuing to exclude those in need (Ferrara, 2023). This paper explores the current state of the art at the intersection of AI and inclusivity in higher education, to understand how AI is supporting—or failing to support—inclusive practices and access in the field. For the purpose of this research, we formulate the following research question: “How is Artificial Intelligence currently being implemented in higher education, and what are its implications for inclusivity, equity, and ethical practice for minorities?”

2. Situating the Research: Background and Relevance

Education is one of the main drivers of society (Scott, 2010). It has a direct influence on how individuals develop, knowledge is disseminated, and communities evolve. It has the potential to drive progress and create opportunities for everyone (Rios et al., 2013). As a society, we hold the responsibility of ensuring education is made accessible to all, which means we need to acknowledge the needs of minorities (Chemulwo & Ali, 2019).
In the context of this paper, minorities refer to groups of students who face systemic challenges in accessing and succeeding within education systems due to a lack of inclusive practices and equitable resources. This includes, but is not limited to, students with different learning needs who require additional support or accommodations, first-generation students without familial experience in higher education, and students from low-income backgrounds who encounter financial and institutional barriers. It also encompasses individuals from ethnically diverse backgrounds, gender minorities, and LGBTQ+ students, who may experience exclusion or discrimination based on their identity. Addressing the needs of these diverse groups is essential in fostering an inclusive and equitable educational environment, particularly in the context of Artificial Intelligence in higher education (Fosch-Villaronga & Poulsen, 2022; Shams et al., 2023; Fenu et al., 2022).
Over the past decades, new Information and Communication Technologies (ICT) have significantly transformed the field of education, reshaping how knowledge is accessed, delivered and managed (Pavlovna & Stanislavovna, 2015). The introduction of digital tools, such as interactive whiteboards, e-learning platforms and other online resources, has led to a shift in traditional teaching and learning practices, offering new opportunities but also introducing new challenges in terms of accessibility and inclusion (Crompton & Burke, 2023). ICT in education has helped break down geographical and temporal barriers, making knowledge accessible to wider audiences and having a significant impact on minorities (Martiniello et al., 2020). ICT has also enhanced teaching methodologies and didactics by enabling personalised learning experiences, data-driven education and collaborative projects across borders (Hoque & Alam, 2010). The administrative side of education has also experienced a substantial transformation: processes have been streamlined, with technologies simplifying tasks such as enrolment, grading, study path monitoring and recommendation, and communication.
However, the rise in new technologies has also brought concerns. The integration of ICT has revealed challenges like digital inequality, data privacy, need for education systems (institutions, educators and policymakers) to adapt rapidly to changing tools, or about perpetuating existing inequalities, as access and digital skills can vary significantly across different socioeconomic and demographic groups (Martiniello et al., 2020). Within this context, the potential of Artificial Intelligence to enhance and transform educational practices has gained significant attention.
The research community, media outlets and public discourse echo a common excitement about the arrival of AI in education, highlighting its potential to democratise access to knowledge, adapt to individual needs, and ultimately improve learning outcomes for all (Bulathwela et al., 2021).
AI is one of the most influential developments within ICT, claimed to have the potential to revolutionise education with applications in areas like personalised learning, assessment, plagiarism detection, classroom management, and study/career counselling (Ojha et al., 2023). Its potential to enhance education also includes promises of adapting content to individual needs, optimising learning paths and predicting student performance (Alur et al., 2020). AI-powered adaptive learning promises to tailor lesson plans, assignments and assessments in real time to each student’s unique needs and skills, enhancing engagement and learning outcomes. Another promised advantage is freeing educators from administrative tasks so they can focus on teaching and mentoring, maximising resources and enhancing learning experiences (Gligorea et al., 2023). These advantages come with a number of significant risks: ethical concerns around privacy, bias in data and algorithms, and the potential for AI to reinforce existing inequalities if not developed and deployed with inclusion in mind (Smuha, 2020).
Despite the widespread optimism about AI’s potential to democratise education, there is a lack of concrete evidence of tangible impact substantiating these claims (Renz et al., 2020; O’Dea & O’Dea, 2023). This study aims to give an overview of the current body of knowledge from research on the intersection of AI in education and inclusivity.

3. Methods

In this section, we describe the research objective, approach and methodology of this rapid literature review.

3.1. Objective

The purpose of this paper is to examine how Artificial Intelligence (AI) is being used in education, focusing on its potential to make education more inclusive and fair. It also explores the ethical challenges that come with using AI. This paper aims to add to the discussion by reviewing existing research and offering ideas on how AI can be used in education in a responsible way. The scope is focused on how AI can improve education systems by helping minorities, while also addressing concerns about fairness and equal access. This paper contributes to the discourse by offering a structured synthesis of existing literature, policy reports and case studies and identifying practical and ethical tensions at the intersection of AI and inclusivity in higher education. It also provides a thematic model that could inform future empirical studies or institutional audits.

3.2. Research Approach

This paper uses a rapid review methodology (Grant & Booth, 2009). Given the speed of AI innovation and its rapid adoption in higher education, there is a pressing need for a timely synthesis of the current landscape. A comprehensive systematic review generally takes years to conduct and publish, making it ill-suited for a rapidly evolving topic. A rapid review, a form of knowledge synthesis in which components of the systematic review process are simplified or omitted, allows researchers to produce timely information (Tricco et al., 2017). This approach enables a structured synthesis of the existing literature, policy reports, and case studies, offering valuable insights for researchers and practitioners needing to respond promptly to AI’s influence on inclusivity in education.
This review relies on secondary data to provide a clear and well-supported view of AI in education with a focus on inclusivity. It looks at academic publications, policy reports, and case studies to find patterns, challenges, and opportunities. The analysis is focused on understanding the existing evidence and building a position based on the findings.

3.3. Data Sources

For the analysis in this research, a range of data sources was consulted. The sources we retrieved during the search are as recent as January 2025, while we also found sources dating back as far as 1962, providing a comprehensive view of the evolution of AI in Education and its impact on inclusivity in higher education. A special focus is given to the past decade, a period that saw significant advancements in AI technologies and their integration and use in education. Academic databases were used to access peer-reviewed journal articles, conference contributions, and other academic publications. Furthermore, reports from education technology think tanks, policy briefs, and case studies are included to complement the academic literature and provide a holistic understanding of the topic. The objective of this selection is to obtain both a theoretical foundation and practical insights, ensuring a balanced understanding of AI implementation in higher education and its implications for inclusivity.
To find relevant literature, policy reports, and case studies, we searched platforms including Google Scholar, Jenni AI search engine, and Kaluga. Our search strings were combinations of keywords for the following concepts:
  • AI, Artificial Intelligence.
  • Higher education, university.
  • Teaching, learning, pedagogy.
  • Inclusivity, inclusion, accessibility, equity, diversity, etc.
For each search, we evaluated the first ten results to efficiently identify a foundational set of relevant reports.
For all three source types (i.e., academic literature, policy reports and case studies), we used the same set of inclusion criteria. For a record to be included, it had to be written in English and focus on the higher education context. Furthermore, records had to describe the opportunities and challenges of using AI in education or be related to inclusivity and/or equity in relation to the implementation of AI in educational settings.
To add further rigour and mitigate the limitations of relying solely on keyword search rankings, we used the tool Connected Papers. This complementary tool allowed us to start with the reports found through searching the above-mentioned databases and explore their scholarly context, identifying highly influential papers (represented as larger “bubbles” in the tool’s visualisation) that were central to the academic discourse.

3.3.1. Academic Literature

In our review, we included peer-reviewed academic papers published in scientific journals or conference proceedings. Scientific contributions show what the current body of knowledge is on the theme of this review and help us evaluate what empirical evidence is available on whether AI is meeting the needs of marginalised groups in educational settings.

3.3.2. Policy Reports and Governmental Sources

Official reports and guidelines from governments and policymakers play a significant role in this research. These sources include publications, surveys, and websites from international organisations like UNESCO and the European Commission, as well as national-level frameworks. By consulting these sources, the paper examines the strategies and policies currently guiding AI implementation in higher education and evaluates how these align with inclusivity and equity goals. Our search string was extended with keywords like ‘policy’, ‘guidelines’ and ‘government*’ to find this specific type of document.

3.3.3. Case Studies

Real-life examples illustrate how AI is being used in higher education, revealing both successes and challenges in its application. These case studies look at how institutions use AI for things like personalising learning, reducing administrative tasks, and improving teaching practices, always keeping inclusivity in mind. They also show the difficulties faced by institutions with fewer resources and diverse student groups, giving a practical view that complements the theoretical and policy-based findings. Our search string was extended with keywords like ‘case study’ and ‘best practice’ to find this specific type of information.

3.4. Data Analysis

The research in this paper used a thematic approach to organise and understand the information collected from academic studies, policy reports, governmental guidelines, and case studies (Lochmiller, 2021). This approach was chosen because it helps to identify patterns and key ideas in the data, making it easier to examine how AI is being used in higher education and how it impacts inclusivity for minority groups.
To derive the thematic structure guiding the results, we followed an inductive analytical approach grounded in rapid review methodology. Rather than applying a predefined coding framework, we engaged in open coding to extract key concepts and recurring patterns related to inclusivity, ethical implementation, and institutional practices. These codes were then grouped and refined through continuous comparison and categorisation into four broader conceptual themes.

3.5. Declaration of the Usage of Generative AI and AI-Assisted Technologies

During the preparation of this paper, the first author used ChatGPT 4o, CoPilot and JenniAI to list and contrast key themes of the research with the input of previously curated documents (as described in detail in Section 3.3). The goal was to detect blind spots in the reviewing process and verify assumptions when creating the different categories. These tools were also used in the development of the summary tables used to exemplify the findings (Table 1, Table 2, Table 3 and Table 4). After using these tools/services, both authors reviewed the content for correctness and completeness, edited it as needed and hence take full responsibility for the content of the publication.

4. Results

To provide a nuanced analysis, the findings in this section are structured to move from AI’s theoretical potential to its practical impact on inclusivity. We begin by outlining the widely cited capabilities of AI in education (Section 4.1) to establish the current technological landscape and its purported benefits. This optimistic view is then deliberately juxtaposed with a review of the persistent, real-world barriers to inclusivity that AI tools must navigate as found in the selected records (Section 4.2). By first establishing what AI can do and what challenges exist, we can more critically analyse AI’s actual impact on educational inequalities (Section 4.3). This structure allows us to distinguish between the general, often efficiency-focused, potential of AI and its specific, and frequently underdeveloped, application to fostering inclusivity, with ethical challenges (Section 4.4) serving as a cross-cutting analytical lens.
The above-mentioned structure is filled in based on the three key thematic areas that emerged during analysis: (1) Capabilities of AI in Education, (2) Barriers to Inclusivity, (3) AI’s Impact on Educational Inequalities; in addition, we identified (4) Ethical Challenges in AI as a theme that serves as a cross-cutting analytical lens for the previous three thematic areas. A cross-theme analysis hence yields insights into AI’s role in promoting or hindering inclusivity in higher education.
First, we discuss the current capabilities of AI in education (Section 4.1). Second, the barriers currently hindering inclusivity in education are outlined (Section 4.2). Third, we discuss AI’s impact on inequalities (in general, and in education specifically; Section 4.3). Fourth, we identify ethical challenges of AI in education (Section 4.4). We conclude with a discussion of the impact of AI on inclusivity in education and the identification of the existing gaps in research and practice (Section 4.5).

4.1. Capabilities of AI in Education

The first theme that emerged during our analysis is Capabilities of AI in Education, describing which technologies are being used and for what purposes. AI is becoming a key part of education, impacting areas from primary and secondary education to higher education and online learning. Its applications are wide-ranging, from handling administrative processes to creating personalised learning experiences. These advancements can be grouped into six areas: adaptive learning and personalisation, intelligent tutoring systems, automated assessment and feedback, behavioural prediction and profiling, administrative efficiency, and online learning and distance education.
The following table summarises the key capabilities of AI in education, drawn from the records selected in the review. Each capability is further explored in the subsequent sections, examining the potential benefits (for inclusive education) and accompanying challenges.

4.1.1. Adaptive Learning and Personalization

AI technologies enable adaptive learning systems that tailor educational content to individual student needs, enhancing learning experiences and outcomes (Raj & Renumol, 2022). These systems use data analytics to adjust the difficulty and type of content based on student performance and learning pace. Adaptive learning can also detect knowledge gaps and provide targeted interventions to address them (Latif et al., 2023; Gligorea et al., 2023). By personalising the learning experience, AI can increase student engagement and motivation (Lin et al., 2023).
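As a purely illustrative sketch (not drawn from any system in the reviewed literature), the core adjustment loop of such an adaptive system can be reduced to a simple rule: raise difficulty after sustained success, lower it after repeated struggle. All function names and thresholds below are hypothetical; production platforms use far richer learner models.

```python
# Hypothetical sketch of an adaptive-difficulty rule. The system tracks a
# rolling success rate over recent tasks and moves the learner between
# difficulty levels 1 (easiest) and 5 (hardest).

def next_difficulty(level: int, recent_scores: list[float],
                    low: float = 0.5, high: float = 0.85) -> int:
    """Return the next difficulty level given recent scores in [0, 1]."""
    if not recent_scores:
        return level
    success_rate = sum(recent_scores) / len(recent_scores)
    if success_rate >= high:      # mastering the material: step up
        return min(level + 1, 5)
    if success_rate <= low:       # struggling: step down (a knowledge gap)
        return max(level - 1, 1)
    return level                  # in the productive zone: hold steady

print(next_difficulty(3, [0.9, 1.0, 0.8]))  # → 4
print(next_difficulty(3, [0.2, 0.4, 0.3]))  # → 2
```

The thresholds (`low`, `high`) stand in for what real adaptive platforms infer statistically from large interaction datasets.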

4.1.2. Intelligent Tutoring Systems (ITS)

ITS integrate machine learning (ML) models to personalise learning. ITS adapt to individual learners by analysing their performance, preferences, and interactions to tailor content and feedback, simulating one-on-one tutoring. These systems are particularly used in scientific subjects, offering explanations, hints, and assessments tailored to each student’s learning style and progress (How effective are intelligent tutoring systems?, 2023; Intelligent tutoring systems: Enhancing learning through AI, 2024; Kamalov et al., 2023). ITS use ML algorithms to predict student behaviours, such as the likelihood of mastering a concept or the risk of dropping out. Natural language processing (NLP), an ML subfield, is often used to give personalised, context-aware feedback to students. ITS are trained models that combine domain expertise, student understanding and teaching strategies.

4.1.3. Automated Assessment and Feedback

AI-driven automated assessment refers to the use of algorithms to evaluate students’ work, such as assignments, essays and tests, and provide instant feedback without requiring manual grading by educators. This can range from simple tasks like multiple-choice grading to more complex ones, like assessing written responses or handling large volumes of submissions. The two main advantages revolve around saving time for educators and helping students understand their mistakes and learn more effectively (Hooda et al., 2022; Hahn et al., 2021; Calatayud et al., 2021). Automated assessment frees teachers from repetitive marking tasks, allowing them to focus on higher-value activities like lesson planning and student mentorship. Some AI models can also detect plagiarism, by comparing a student’s submission to peer submissions, submissions across educational institutions, and online content.
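The similarity check underlying such plagiarism detection can be illustrated with a minimal, hypothetical sketch: compare a submission against a corpus of prior texts using word-set (Jaccard) overlap. All names, thresholds and example texts below are invented for illustration; production tools add n-gram matching, stemming, and web-scale indexes.

```python
# Hypothetical sketch of the core of a plagiarism check: flag prior texts
# whose word-set overlap with a new submission exceeds a threshold.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def flag_similar(submission: str, corpus: dict[str, str],
                 threshold: float = 0.8) -> list[str]:
    """Return ids of corpus texts whose similarity exceeds the threshold."""
    return [doc_id for doc_id, text in corpus.items()
            if jaccard(submission, text) >= threshold]

corpus = {"s1": "ai can personalise learning paths for students",
          "s2": "the industrial revolution transformed labour markets"}
print(flag_similar("AI can personalise learning paths for students", corpus))
# → ['s1']
```

A word-set measure is deliberately crude: it ignores word order, so it illustrates the matching principle rather than a deployable detector.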

4.1.4. Behavioural Prediction and Profiling

AI algorithms analyse student data to predict behaviours, such as academic performance, engagement or dropout risks, allowing educators to intervene early and provide necessary support (Rastrollo-Guerrero et al., 2020). It also supports student advising and career planning, helping connect students with the right programmes, learning materials, activities and guide them through a more suitable study path or student counselling (Zawacki-Richter et al., 2019). AI systems use large datasets from students’ interactions, assessments, attendance records, and other digital traces to identify patterns and make predictions about students’ behaviours.
Berens et al. (2019) researched early detection of students at risk using ML methods with administrative student data from German universities. This is one of the few empirical studies that addresses these practices, reporting accuracies in predicting early dropout ranging from 79% to 95%, without any specific criteria to consider students with different learning needs or from marginalised groups.
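To make the mechanism of such early-warning systems concrete, the sketch below shows a hand-weighted logistic risk score of the general kind these systems compute. The feature names, weights and threshold are entirely invented for illustration; studies such as Berens et al. (2019) learn the weights from administrative data with ML methods rather than setting them by hand.

```python
# Hypothetical sketch of a dropout-risk score: a logistic function over a
# few administrative features. Weights and bias are illustrative only.
import math

# (feature, weight): positive weights increase the predicted risk
WEIGHTS = {"failed_exams": 0.9, "low_attendance": 0.7, "late_enrolment": 0.3}
BIAS = -2.0

def dropout_risk(student: dict[str, float]) -> float:
    """Probability-like dropout risk in (0, 1) for one student record."""
    z = BIAS + sum(w * student.get(f, 0.0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def at_risk(student: dict[str, float], threshold: float = 0.5) -> bool:
    """Flag a student for early intervention if risk crosses the threshold."""
    return dropout_risk(student) >= threshold

print(at_risk({"failed_exams": 3, "low_attendance": 1}))  # → True
```

Note that nothing in this scoring logic accounts for different learning needs or marginalised backgrounds, which is precisely the gap the reviewed literature highlights.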

4.1.5. Administrative Efficiency

AI assists in administrative tasks such as scheduling, resource allocation, managing student records, streamlining communication and answering frequently asked questions, reducing the administrative workload on educators and allowing them to focus on more meaningful interactions with students—teaching, mentoring and supervising, among others (L. Chen et al., 2020; Ge & Hu, 2020; Khosravi et al., 2022; Çelik et al., 2022). AI enables educational institutions to enhance the student experience while being cost-efficient.

4.1.6. Online Learning and Distance Education

AI enhances online learning platforms by providing personalised learning paths, real-time feedback, and more interactive learning experiences. The primary mechanism for eliminating barriers is the use of AI-driven adaptive learning algorithms that deliver asynchronous content, allowing students to learn remotely and at their own pace. Additionally, automated feedback systems and AI-powered chatbots provide instant, 24/7 support, removing the need for students and educators to be available at the same time. AI also helps monitor student engagement and participation in online courses through data analytics, which can track activity to identify disengaged students and prompt early interventions (St-Hilaire et al., 2022; Tonbuloğlu, 2023; Luckin, 2019). By technically decoupling learning activities from a fixed time and place, these functionalities enable educational institutions to offer high-quality education to a wider audience, eliminating geographical and temporal barriers (AI in online learning platforms: Top 7 benefits, 2023; The impact of AI and EI on traditional and online education, 2023).

4.2. Barriers to Inclusivity in Education

The second theme that emerged during our analysis is Barriers to Inclusivity in Education, describing the challenges faced by marginalised and minority groups.
There is a consensus that societal, cultural and economic factors affect access to education for minority groups (Ethnic and racial disparities in education, 2024). The following table provides a concise overview of these barriers, highlighting the multilayered challenges that must be addressed to achieve truly inclusive education.
A more in-depth exploration of the factors hindering inclusivity in education is presented in the sections below.

4.2.1. Societal Factors

Educational systems often reinforce structural inequities that are deeply rooted in historical and societal contexts. These inequities manifest in various ways, including disparities in resource allocation, access to quality education and representation of marginalised communities in educational content and leadership positions (Acedo et al., 2009; Ainscow & Sandill, 2010; Thomas, 2012). These systemic issues create significant barriers to inclusivity, strongly affecting how students access opportunities and resources (Equity and quality in education, 2023, OECD; Marey et al., 2021). Class divisions and societal hierarchies influence who can access quality education, with wealthier families often securing better opportunities (23 billion, 2023; Ethnic and racial disparities in education, 2024). Consequences of colonialism or systemic oppression also affect certain communities’ access to education (Madikizela-Madiya, 2021; Joshi, 2010; Wai et al., 2024). Gender inequality also plays an important role in access to education, with some societies discouraging female students from pursuing higher education or STEM-related fields and some cultures expecting early marriage or domestic responsibilities of women. The urban–rural divide is also a key factor, with quality education opportunities usually concentrated in urban centres and rural communities often facing limited access to higher education, or limited encouragement to pursue it (Gurrutxaga, 2012; Qiang et al., 2008; Tieken, 2017).

4.2.2. Cultural Factors

Cultural and linguistic differences can also contribute to systemic inequities in education. Students from underrepresented backgrounds or with different cultural or linguistic heritages may face challenges in educational environments favouring dominant cultural norms or particular ways of communicating and interacting (Bulut et al., 2024; Gillani et al., 2023). As a consequence, marginalised students may experience feelings of isolation, lack of belonging and decreased engagement in their learning process (Roldán et al., 2018; Ramis-Salas, 2015).
Isolation can be followed by discrimination in educational settings, further exacerbating the lack of belonging and the consequences for the academic performance and well-being of these students (Khosravi et al., 2022; Martiniello et al., 2020). Belonging and perceived inclusion are critical factors influencing educational outcomes and retention of underrepresented students (Dieterle et al., 2022), and can be further undermined by inadequate support structures, a lack of role models, and inherent biases within educational systems and institutions, which compound the challenges faced by diverse students. The cumulative effect of such discrimination can have a negative impact on their academic and professional success (Raabe, 2018; Tarabini et al., 2017; The hard reality of school, 2022).

4.2.3. Economic Factors

The direct and indirect costs of education prevent access for low-income families. Direct costs are associated with tuition, books, and materials, among others, while indirect costs refer to expenses like transportation, technology, connectivity or housing (Rao et al., 2009). Families in poverty may depend on their youngsters to contribute to household income, reducing educational attendance, engagement or completion (The world’s families, 2017; Perna, 2005). Institutional funding inequities further exacerbate disparities, as underfunded schools in disadvantaged areas often have fewer resources, facilities, qualified teachers and extracurricular activities for their students (Shores & Ejdemyr, 2017; Charter school deserts: High poverty neighborhoods with limited educational options, 2018). Financial instability during economic downturns or crises (e.g., the COVID-19 pandemic, the war in Ukraine) has a deeper impact on low-income families and their ability to access education (UNICEF, 2021).

4.3. AI’s Impact on Bridging or Worsening Inequalities

The third theme we found in our analysis is the paradoxical Impact of AI on Inclusivity in Education, since AI has the potential both to bridge gaps and to worsen inequalities.
The rapid development of AI has found educational institutions unprepared for its adoption, prompting some to react to its impact at every organisational level, mainly in higher education. While universities are still working out how to regulate, implement and provide guidance, lecturers and students are not waiting. The 2024 EDUCAUSE AI landscape study (2024) states that the rise in student use of AI in their courses (around 70%) and the risks of inappropriate use of AI are perceived as primary motivators for AI-related strategic planning. Considering that less than 10% of schools and universities were reported to have formal guidance on AI by 2024, many stakeholders remain exposed to its risks, and institutions fail to leverage it responsibly to support inclusive practices (UNESCO survey, 2023). Integrating AI inclusively in education requires a multidimensional strategy tailored to the specific educational context.
The search for concrete, verifiable practices of AI in terms of inclusivity returns extremely scarce results. This signals that educational research is not keeping pace with the rapid advancement of AI, leading to a critical imbalance between technological progress and socio-ethical development and highlighting the urgent need to close this gap. The research reviewed highlights that when AI systems are trained appropriately, they can offer benefits in terms of improving conditions for minority groups. The desk research uncovered a limited number of verifiable examples of concrete applications of AI in support of inclusivity in education. One good example is a study that examined how marginalised female high school students engaged with machine learning (ML) tasks, focusing on how AI-based activities could help address the challenges these students face in STEM fields. The students were tasked with analysing diverse cultural texts to train ML models, requiring them to identify language patterns, writer intentions, and cultural nuances. Historically excluded from IT and AI, these students were able to use their lived experiences to excel in holistic language analysis, showcasing unique strengths often overlooked in traditional STEM education. By centring culturally responsive AI tasks, the study demonstrated how AI, when designed with inclusivity in mind, can empower underrepresented groups and create more equitable opportunities in education (Jiang et al., 2024, pp. 2557–2573). The work suggests that more research and innovation are needed to explore the potential of AI to support inclusivity and to guard against perpetuating existing inequities.

4.4. Ethical Challenges in AI

The fourth theme that emerged during our analysis is Ethical Challenges in AI, describing challenges and dilemmas introduced by AI-driven technologies. The ethical implications of AI do not represent a separate outcome as such but instead serve as an analytical framework for interpreting the preceding themes. Issues such as algorithmic bias, lack of transparency, and unequal access are not isolated concerns but rather shape how we evaluate AI’s capabilities, the barriers it addresses or intensifies, and the inequalities it may reinforce.
The academic study of AI began in the 1950s (McCarthy, 2005), with discussions on its ethical implications starting not long after (Samuel, 1962). Only recently has the field experienced exponential growth, accelerating not only its development but also the dialogue about its ethical use and implications for our society. AI development and application have the power to reshape society, either reducing or enlarging existing inequalities and addressing ongoing challenges or creating new ones. Steering this impact in a socially desirable direction requires not only thought-through regulation and shared consensus and standards, but also a strong foundation of ethical principles to guide concrete actions. Consequently, many relevant stakeholders have taken the initiative to start discussions and work on ethical frameworks for adopting socially beneficial AI (Floridi & Cowls, 2019). Floridi and Cowls (2019) present a unified framework of five principles for ethical AI to address the current needs: beneficence (promoting well-being, preserving dignity and sustaining the planet); non-maleficence (privacy, security and ‘capability caution’); autonomy (the power to decide); justice (promoting prosperity, preserving solidarity and avoiding unfairness); and explicability (enabling the other principles through intelligibility and accountability).
When it comes to the application of AI in education, various ethical challenges need to be carefully considered. Educational AI technologies may reproduce historical legacies of structural injustice and inequity, regardless of the parity of their models’ performance or the perceived neutrality of their underlying algorithms (Dieterle et al., 2022). These legacies influence the design, implementation and impact of AI in education, often with the potential to reinforce existing gaps instead of bridging them (Holstein & Doroudi, 2021). Integrating AI into education, given its multi-layered, complex and rapid growth, presents several ethical challenges (Floridi & Cowls, 2019).
A number of key issues have been identified, as listed in Table 3.
Having outlined the key ethical challenges of AI integration in higher education, the following subsections elaborate on each point with a more detailed analysis.

4.4.1. Algorithmic Bias

AI-powered software can perpetuate existing biases present in the datasets used to train the models, leading to discriminatory outcomes. For AI to work, it must be trained with sets of data. If this data is limited to certain societal profiles and not representative of the broader population, the resulting AI systems may favour certain groups over others, exacerbating existing inequities (Holstein & Doroudi, 2021). These biases can lead to decisions that unfairly influence educational opportunities, resources, and experiences, ultimately replicating existing structural inequities (Z. Chen, 2023). The lack of diversity among the teams designing and developing AI education tools may also result in overlooking the needs and perspectives of minority groups.
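As a hedged illustration of how such bias might be surfaced in practice, the sketch below computes a simple demographic parity gap: the difference in the rate of positive decisions an AI system produces for different student groups. The group names, decisions and interpretation threshold are invented for the example; a real audit would need representative data and a range of fairness metrics, not this single statistic.

```python
# Illustrative sketch (assumed data): auditing a classifier's outputs
# for demographic parity across two hypothetical student groups.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'admit' or 'support offered') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates between any two groups."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Invented model outputs: 1 = positive decision for that student.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected
}

gap, rates = demographic_parity_gap(outcomes)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(gap)    # 0.5 -> a large gap would flag the system for review
```

A large gap does not by itself prove unfairness, but it gives institutions a concrete, inspectable signal to investigate before such a system influences educational opportunities.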

4.4.2. Data Privacy and Security

AI requires large amounts of data to operate, raising significant privacy concerns about its collection, storage and use (Huang, 2023). Sensitive student data is collected from different sources, from administrative software, LMS or smart classrooms, which could be exploited if servers’ security is compromised (How should we assess security and data minimisation in AI?, 2022; Ma & Jiang, 2023). Currently, there is no open dialogue between educational institutions and students and families regarding the use of these technologies and the data involved (Off task: EdTech threats to student privacy and equity in the age of AI, 2023). Students are often unaware of the full extent of data collection and the ways in which their personal information is being used by educational institutions and AI-powered technologies. There is a general lack of transparency around the types of student data being gathered, how it is stored, and the specific purposes for which it is being analysed and applied. This lack of awareness and oversight can contribute to concerns about privacy, security, and the fair and ethical use of sensitive student data (THE DATAFIED STUDENT: Why students’ data privacy matters and the responsibility to protect it, 2022).

4.4.3. Lack of Transparency

AI-powered feedback might feel personal, but it is not. The decision-making process of AI is opaque by nature (Chesterman, 2021). Gillani et al. (2023) discuss how, while neural network-based machine learning methods can be very effective, their processes are not transparent, making them hard to interpret. Consequently, it can be challenging to determine which inputs influenced specific decisions, which risks replicating existing bias.
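One minimal way to probe such opacity from the outside, sketched below under invented assumptions, is a perturbation test: nudge each input in turn and observe how the score changes. The scoring function here is a hypothetical stand-in for a black-box model, and genuine interpretability work requires far more rigorous methods; this only illustrates why knowing which inputs drive a decision matters.

```python
# Illustrative sketch (assumed model): black-box probing via
# one-at-a-time input perturbation.

def opaque_score(features):
    # Hypothetical stand-in for a model whose internals are hidden.
    return (0.6 * features["grades"]
            + 0.3 * features["essay"]
            + 0.1 * features["extracurricular"])

def sensitivity(features, delta=0.1):
    """Change in score when each input is nudged upward by `delta`."""
    base = opaque_score(features)
    impact = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        impact[name] = round(opaque_score(perturbed) - base, 3)
    return impact

student = {"grades": 0.8, "essay": 0.7, "extracurricular": 0.9}
print(sensitivity(student))
# {'grades': 0.06, 'essay': 0.03, 'extracurricular': 0.01}
```

Even this crude probe reveals which features dominate the outcome; for a genuinely opaque system, such external checks are often the only visibility stakeholders have.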

4.4.4. Unequal Access and Exacerbation of Inequities

Disparities in access to AI technologies can widen existing educational gaps for minority groups. AI-powered tools require access to technology, connectivity and digital skills to be used effectively. Without equitable distribution and implementation of AI-powered educational tools, these technologies risk exacerbating inequalities by providing greater educational benefits to students from more privileged backgrounds while leaving behind those from marginalised or low-income communities (Warschauer & Xu, 2018; Phillips & Shipps, 2022).

4.4.5. Autonomy and Over-Reliance

Dependence on AI for educational tasks may have a negative impact on the roles of humans in education (S. Ahmad et al., 2023). Floridi (2021) argues that AI can support humans with a number of tasks under the following premises: AI adoption requires balancing human decision-making power with what is delegated to AI, ensuring human autonomy remains protected; AI development should prioritise enhancing human autonomy, allowing people to uphold their own norms and retain control over any decision, trivial or critical; machine autonomy must be limited and reversible to safeguard human decision-making authority; and humans must retain the ability to decide whether and how much decision-making to delegate to AI, with the option to reclaim control when needed. With AI intertwining administrative and educational tasks in education, these points must be addressed from the perspectives of both educational workers and students. AI should always be an extension for educators and a tool for students, but it should never strip either of the autonomy to perform their tasks. Administrators must retain the capacity for choice, educators the mastery of and knowledge for educational activities, and students the ability for autonomous learning and critical thinking.

4.4.6. Emotional and Social Impact

Empathy is a human trait that machines currently lack the full capacity to replicate, although sufficiently trained systems can imitate certain aspects of it. Humans learn empathy from birth; it is a complex social and emotional skill that is challenging to instil in robots and intelligent machines. While recent developments in artificial empathy using deep learning techniques have shown promise, the field is still evolving, and there are limits to AI’s ability to truly emulate human empathy (Tahir et al., 2023).
Considering how much AI intertwines with education at different levels, there is a risk that human interactions and social connections could be reduced or even eliminated as AI solutions replace certain human roles. This could have a significant impact on students’ social and emotional development, as well as potentially perpetuating or even exacerbating biases towards minority groups, if the AI systems replicate previously learned biases. Conversely, some research suggests that certain students, particularly those with social anxiety or who fear judgement, may find interacting with non-judgmental AI systems more comfortable than with human instructors or peers, potentially opening new avenues for engagement (Almaiah et al., 2022). However, this potential benefit must be carefully balanced against the risk of reducing the development of essential human social skills and the nuanced mentorship that human educators provide. It is crucial that the integration of AI in education is carefully considered and balanced, ensuring that human interaction, critical thinking, and social-emotional learning remain at the core of the educational experience, rather than being replaced by AI alone (Mello et al., 2023).

4.4.7. Accountability

Determining responsibility for AI-driven decisions in education is complex. Across its different applications, AI ranges from giving recommendations to making potentially long-lasting decisions about students (Bulut et al., 2024). Accountability is a key ethical principle in the development and implementation of AI, ensuring that when decisions are made by an AI there are clear mechanisms to identify who is responsible for the outcomes (Gürsoy & Kakadiaris, 2022). It is vital that humans can trace the reasoning behind an AI’s decision-making process in a way that is understandable to its stakeholders (educators, students and policymakers).

4.4.8. Ethical Use of AI in Decision-Making

Deploying AI in human-related contexts such as education requires rigorous ethical scrutiny. Ethical standards must be carefully considered and integrated throughout the entire lifecycle of AI development and deployment, from the training data and algorithms used to the specific purposes for which the AI systems are designed and applied. Establishing ethical frameworks is crucial to identifying and mitigating risks, ensuring fairness, transparency, and accountability, and aligning the use of AI with social values and human rights principles (Bulut et al., 2024). Ethical principles and safeguards must be tailored to address the unique challenges and sensitivities within the educational domain (Holmes et al., 2021).

4.4.9. Intellectual Property and Ownership

The use of generative AI (GAI) in creating educational content raises questions about authorship and originality, even as educational institutions try to embrace it responsibly (Wu et al., 2024). The concerns revolve around intellectual property rights, as well as the potential misuse of AI-generated material. For example, students could plagiarise using GAI, undermining academic integrity and failing to achieve the learning objectives involved. Educational institutions must develop clear policies on the sustainable, ethical and responsible use of GAI in student work, assessments, and educational content creation (Wu et al., 2024).

4.4.10. Long-Term Impact and Misinformation

AI does not guarantee accurate and valid information. Zhou et al. (2023) argue that the long-term integration of AI brings critical challenges regarding misinformation, which can undermine trust and prevent learners from achieving learning outcomes. AI systems, particularly large language models (LLMs), can rapidly generate large volumes of information that appear credible yet are often inaccurate or misleading. Without well-developed scientific and critical thinking skills, even experts can struggle to discern their validity. AI-generated content can imitate human writing so effectively that it is easily mistaken for human-made text. This has a critical impact on the education sector, as all stakeholders involved may rely on flawed information, perpetuating misconceptions while hampering the development of critical thinking skills. Moving forward, we must ensure that these risks are addressed so that AI supports, rather than undermines, the integrity of educational systems, societal trust and human autonomy (S. F. Ahmad et al., 2023).

4.5. Gaps in Research and Practice

Barocas et al. (2021) affirm that AI systems can perform differently for different groups of people, frequently showing especially poor performance for already disadvantaged groups. The current scarcity of research on how AI is applied to education with inclusivity as a factor raises major concerns about how the field is addressing equity and potentially amplifying existing disparities. The training data and models used to develop AI systems often lack diversity and reflect societal biases, which can lead to the further marginalisation and exclusion of vulnerable student populations in educational contexts.
There is a noticeable gap in both academic research and the practical implementation of AI systems explicitly designed to promote inclusive education. While researchers frequently highlight the risks associated with biased and inequitable applications of AI in education, the number of systematic and empirical studies that investigate how AI can be intentionally employed to improve the inclusion of minority student groups remains extremely limited. These findings form the basis for a critical reflection on the broader implications of AI’s role in educational equity, as discussed in the following section.

5. Discussion

This section builds on the findings presented in Section 4 by offering a critical synthesis of the three thematic areas, AI capabilities in higher education (theme 1), barriers to inclusivity (theme 2), and AI’s impact bridging or worsening inequalities (theme 3), through the ethical challenges (theme 4) outlined previously. Rather than reiterating the individual findings, we examine how these dimensions intersect, either by amplifying one another, or by revealing contradictions.
In doing so, we present Table 4 and aim to interpret what the current state of AI implementation in higher education implies for inclusive practice, institutional policy, and ethical governance. This discussion highlights key tensions, gaps in application, and opportunities for more equitable AI-driven education systems.
These ethical considerations are key to understanding the complex relationship between AI and inclusivity in higher education. The following points encompass the results of this research through the predetermined themes.

5.1. Current AI Initiatives in Higher Education

The discourse surrounding AI in higher education is often characterised by a narrative of transformative potential. We are presented with a future of personalised learning pathways, intelligent tutoring systems, and streamlined administrative processes. While these initiatives promise significant gains in operational efficiency, a closer analysis reveals a landscape facing significant ethical vulnerabilities. The widespread implementation of these technologies risks building a future on a fragile foundation, where the tools designed to advance education may inadvertently undermine its core values.
Given this landscape, a crucial practical heuristic for institutional leaders and educators is to operate under the default assumption that most commercial AI tools were not designed with inclusivity as a primary objective. These systems are typically optimised for efficiency, scalability, or general performance. Therefore, before adoption, institutions should proactively assess these tools for potential negative downstream effects on minority groups. This shifts the burden of proof from students having to demonstrate harm to the institution having to demonstrate a commitment to equity by thoroughly vetting its technology.
This ethical complexity begins with the interconnected challenges of algorithmic bias and data privacy. The historical inequities embedded within the vast datasets required for personalisation become the foundation upon which biased AI-driven outcomes are perpetuated and amplified. At the same time, the collection of such granular data raises fundamental questions about student consent, data security, and the potential for misuse. This issue is made more complex by a systemic lack of transparency. When AI algorithms operate as a “black box,” it becomes exceedingly difficult to scrutinise their decision-making processes, rendering the task of identifying and rectifying bias problematic. This opacity, in turn, makes genuine accountability a challenging concept, for an institution cannot be held fully responsible for a decision it cannot fully explain.
Moreover, these initiatives are deployed into an environment of pre-existing disparities, introducing the challenge of unequal access and the exacerbation of inequities. Advanced AI tools, often requiring significant digital infrastructure and literacy, will likely benefit more privileged students, threatening to widen, rather than close, the achievement gap. This leads to a necessary examination of autonomy and over-reliance, where the delegation of pedagogical tasks to machines without critical oversight risks the deskilling of educators and the erosion of students’ own critical thinking. This automation has a direct emotional and social impact, potentially diminishing the essential mentorship, empathy, and nuanced human interaction vital for holistic development. Finally, the uncurated proliferation of these tools introduces long-term systemic risks. Concerns over intellectual property and ownership blur the lines of authorship, while the potential for AI-generated long-term impact and misinformation threatens the integrity of the academic information ecosystem. Realising the inclusive potential of AI therefore requires a fundamental reorientation from a paradigm of rapid technological adoption to one of proactive ethical governance.

5.2. Barriers to Inclusive Education

The mission of inclusive education is to dismantle systemic barriers, yet the integration of AI without sufficient critical examination threatens to erect new ones while reinforcing those that already exist. Rather than acting as a universal solution for long-standing issues, AI can function as an accelerant, magnifying the impact of societal, cultural, and economic disparities. The ethical challenges inherent in AI do not merely add to these barriers; they intertwine with them, creating multifaceted challenges to inclusion.
The most immediate interaction is between unequal access and exacerbation of inequities and the pre-existing digital divide. This is not simply a matter of technology access but of equitable usability; an AI tool is of little use without the digital literacy and consistent support required to use it effectively. This barrier is amplified by algorithmic bias, placing marginalised students at a dual disadvantage: they are not only less likely to have access but are also more likely to be misjudged by the system if they do. This injustice is often obscured by a lack of transparency, which prevents students and their advocates from understanding or challenging a system that may be systematically working against them. This opacity makes genuine accountability nearly impossible, allowing institutional responsibility to be deflected onto an algorithm that is not easily understood.
The emotional and social impact on students navigating these biased and opaque systems cannot be overlooked. It can amplify feelings of alienation and reinforce stereotype threat, directly undermining their sense of belonging and academic self-worth. This dynamic calls into question the very feasibility of the ethical use of AI in decision-making. If the tools themselves are predisposed to inequitable outcomes, their use in high-stakes contexts, from admissions to academic support, becomes ethically questionable without profound safeguards. Finally, the long-term impact of biased systems can perpetuate damaging narratives about the capabilities of certain student groups, negatively affecting the educational environment for future generations. Addressing these barriers therefore requires more than technical fixes; it demands a commitment to dismantling foundational inequities in parallel with the deployment of new technologies.

5.3. AI’s Impact on Bridging or Worsening Inequalities

The dual potential of AI to either bridge or worsen societal inequalities does not appear to be a balanced equation. A critical assessment of its current trajectory, viewed through the lens of its attendant ethical issues, suggests that without a significant course correction, AI is on a path to become a powerful factor in the exacerbation of inequality in higher education. The promise of democratisation remains largely speculative, while the mechanisms for deepening divides are already operational.
This trajectory is driven by a cascade of interconnected ethical issues. It begins with systems built on a foundation of algorithmic bias and complicated by concerns over data privacy and security. These systems are then often deployed with a lack of transparency that makes their inner workings difficult to interpret, rendering true accountability a significant challenge. When these opaque, unaccountable, and potentially biased tools are introduced into an environment already defined by unequal access, the outcome is predictable: those with existing advantages are positioned to benefit most, while those already marginalised risk falling further behind.
The consequences of this trajectory are considerable. The negative emotional and social impact on disadvantaged students can erode trust and participation. The failure to establish clear frameworks for the ethical use of AI in decision-making means that high-stakes choices affecting students’ futures are increasingly influenced by potentially flawed systems. This culminates in risks related to long-term impact and misinformation, where educational systems not only fail to correct societal inequalities but may actively amplify them. The critical gap identified in this research is therefore not merely a lack of positive case studies, but an apparent absence of a collective, institutional will to prioritise equity in AI implementation. This establishes a clear mandate for the research community and institutional leaders: to pivot from a posture of passive observation to one of active, intentional, and equity-centred design.

6. Conclusions

This research highlights the paradoxical potential of AI in higher education: both a transformative tool capable of enhancing learning experiences and a system that risks perpetuating existing inequalities. The current state of AI implementation in higher education reveals a strong emphasis on operational efficiency and personalised learning, with technologies such as adaptive learning systems, intelligent tutoring platforms, and performance analytics driving these advancements. However, the findings indicate that these initiatives often neglect the specific needs of minority groups, failing to integrate inclusivity as a core objective.
Barriers such as socioeconomic disparities, cultural biases, and institutional limitations continue to impede the potential of AI to foster inclusivity for marginalised groups. AI systems frequently rely on datasets that lack diversity, perpetuating algorithmic biases that disadvantage underrepresented populations. Furthermore, unequal access to technology and digital infrastructure exacerbates these challenges, leaving students from low-income backgrounds unable to fully benefit from AI-driven tools.
The impact of AI on bridging or worsening inequalities remains inconclusive due to a lack of empirical evidence. While isolated examples, such as the use of culturally responsive AI tasks to engage marginalised female students in STEM, demonstrate the potential for AI to support inclusivity, these cases are exceptions rather than the norm. This highlights a significant gap between technological advancements and their practical, inclusive application in higher education.
In practice, this involves several steps. Institutions should establish multidisciplinary AI ethics committees, including diverse student and faculty representatives from both minority and majority groups, to examine new technologies before procurement. Furthermore, they could mandate “inclusivity impact assessments” for any new AI tool to proactively identify potential biases. For educators, institutions must provide professional development opportunities focused on critically evaluating AI tools and adapting them for inclusive pedagogical practices, rather than simply adopting them for efficiency.
To address these challenges, higher education institutions must prioritise inclusivity in AI implementation by developing clear ethical frameworks, ensuring diverse representation in AI design, and providing equitable access to the necessary infrastructure. Future research could focus on experimental studies that evaluate the effectiveness of AI in reducing educational disparities for marginalised groups.
This study directly responds to the main research question of how AI contributes to inclusivity in higher education, finding limited but promising evidence. Moving forward, institutions and researchers must ensure AI systems are co-designed with equity in mind, particularly for those who are most often left behind. By aligning AI’s capabilities with inclusivity goals, higher education can move closer to a model that not only leverages technological innovation but also ensures equitable opportunities for all students, particularly those from historically underserved communities.

6.1. Limitations

The conclusions of this paper should be considered within the context of the limitations inherent in its scope and methodology. These are organised into three key areas: the constraints of the rapid review approach, the nature of the data sources, and the potential for interpretive bias.
This study employed a rapid review methodology to provide a timely synthesis of a fast-evolving field. While this approach allows for an efficient overview, it involves deliberate simplifications compared to a full systematic review (Tricco et al., 2017). Specifically, our search strategy focused on the most relevant results from selected databases and did not include a formal risk-of-bias or methodological quality assessment for each included study. This trade-off, prioritising timeliness and breadth over exhaustive depth and quality appraisal, means our findings represent a high-level synthesis rather than a granular meta-analysis of effects.
The research relies entirely on secondary data, meaning the findings are limited to what has been previously published. The study does not include primary data from key stakeholders like students, educators, or administrators, whose firsthand accounts could provide nuance on the real-world impacts of AI. The absence of these perspectives restricts the depth of our analysis regarding how AI tools are experienced in practice. Furthermore, the accessible literature is predominantly from Western contexts, a significant geographic bias compounded by our exclusive focus on English-language publications, a choice that inevitably omits valuable research and perspectives from non-English scholarly publications. This bias limits the generalisability of our findings. Challenges and initiatives in non-Western higher education systems, particularly from the Global South, are likely underrepresented, yet are critical for a truly global understanding of AI and inclusivity.
In any qualitative synthesis, the process of identifying themes and interpreting findings is influenced by the researcher’s own background and perspective. The first author’s background in inclusive education and technology policy informed the coding and analysis process. We sought to mitigate this potential bias through reflective practices and the use of AI-assisted tools to triangulate and challenge emerging themes, thereby enhancing the credibility of the analysis. However, we acknowledge that our interpretation is one of several possible readings of the existing literature.

6.2. Future Work

Future research should aim to address these gaps. There is a critical need for empirical, comparative research that evaluates specific AI interventions across diverse institutional and geopolitical contexts. Future work should focus on non-Western contexts and the Global South to understand how local cultural and socioeconomic factors shape AI’s role in educational equity. Such studies should employ primary data collection, incorporating the voices of students and educators from marginalised communities to provide a more comprehensive understanding of AI’s real-world impact on fostering inclusivity in higher education.

Funding

This research was partially funded by the ASML Foundation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors wish to thank the anonymous reviewers for their time and insightful comments. Their constructive feedback was invaluable and significantly contributed to improving the clarity and quality of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
DEI: Diversity, Equity, and Inclusion
HE: Higher Education
LGBTQ+: Lesbian, Gay, Bisexual, Transgender, Queer and Others
OECD: Organisation for Economic Co-operation and Development
UNESCO: United Nations Educational, Scientific and Cultural Organisation
UNSDG: United Nations Sustainable Development Goals

Note

1. Minorities, minority groups, minority students, marginalised students and marginalised groups are used interchangeably in this paper.

References

  1. 2020 GEM report. (2022). Available online: https://gem-report-2020.unesco.org/ (accessed on 18 November 2024).
  2. 2024 EDUCAUSE AI landscape study. (2024). Available online: https://www.educause.edu/ecar/research-publications/2024/2024-educause-ai-landscape-study/introduction-and-key-findings (accessed on 11 December 2024).
  3. 23 billion. (2023). Available online: https://edbuild.org/content/23-billion (accessed on 10 November 2024).
  4. Acedo, C., Ferrer, F., & Rovira, J. (2009). Inclusive education: Open debates and the road ahead. Prospects, 39(3), 227. [Google Scholar] [CrossRef]
  5. Ahmad, S., Umirzakova, S., Mujtaba, G., Amin, M., & Whangbo, T. K. (2023). Education 5.0. Available online: https://export.arxiv.org/pdf/2307.15846v1.pdf (accessed on 20 October 2024).
  6. Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, 311. [Google Scholar] [CrossRef] [PubMed]
  7. AI in online learning platforms: Top 7 benefits. (2023). Available online: https://elearningindustry.com/top-benefits-of-leveraging-ai-in-online-learning-platforms (accessed on 10 January 2025).
  8. Ainscow, M., & Sandill, A. (2010). Developing inclusive education systems: The role of organisational cultures and leadership. International Journal of Inclusive Education, 14(4), 401. [Google Scholar] [CrossRef]
  9. Aliabadi, R., Singh, A., & Wilson, J. E. (2023). Transdisciplinary AI education: The confluence of curricular and community needs in the instruction of artificial intelligence. In Lecture notes on data engineering and communications technologies (p. 137). Springer International Publishing. [Google Scholar] [CrossRef]
  10. Almaiah, M. A., Alfaisal, R., Salloum, S. A., Hajjej, F., Thabit, S., El-Qirem, F. A., Lutfi, A., Alrawad, M., Al Mulhem, A., Alkhdour, T., Awad, A. B., & Al-Maroof, R. S. (2022). Examining the impact of artificial intelligence and social and computer anxiety in e-learning settings: Students’ perceptions at the university level. Electronics, 11(22), 3662. [Google Scholar] [CrossRef]
  11. Alur, R., Baraniuk, R. G., Bodík, R., Drobnis, A. W., Gulwani, S., Hartmann, B., Kafai, Y. B., Karpicke, J., Libeskind-Hadas, R., Richardson, D. J., Solar-Lezama, A., Thille, C., & Vardi, M. Y. (2020). Computer-aided personalized education. arXiv, arXiv:2007.03704. [Google Scholar] [CrossRef]
  12. Artificial intelligence: Shaping Europe’s digital future. (2024). Available online: https://digital-strategy.ec.europa.eu/en/policies/artificial-intelligence (accessed on 19 January 2025).
  13. Barocas, S., Guo, A., Kamar, E., Krones, J., Morris, M. R., Vaughan, J. W., Wadsworth, D., & Wallach, H. (2021). Designing disaggregated evaluations of AI systems: Choices, considerations, and tradeoffs. arXiv. [Google Scholar] [CrossRef]
  14. Berens, J., Schneider, K., Gortz, S., Oster, S., & Burghoff, J. (2019). Early detection of students at risk—Predicting student dropouts using administrative student data and machine learning methods. Journal of Educational Data Mining, 11(3), 1–41. [Google Scholar] [CrossRef]
  15. Bulathwela, S., Pérez-Ortiz, M., Holloway, C., & Shawe-Taylor, J. (2021). Could AI democratise education? Socio-technical imaginaries of an EdTech revolution. arXiv. [Google Scholar] [CrossRef]
  16. Bulut, O., Beiting-Parrish, M., Casabianca, J. M., Slater, S. C., Jiao, H., Song, D., Ormerod, C. M., Fabiyi, D. G., Ivan, R., Walsh, C., Rios, O., Wilson, J. M., Yildirim-Erbasli, S. N., Wongvorachan, T., Liu, J. X., Tan, B., & Morilova, P. (2024). The rise of artificial intelligence in educational measurement: Opportunities and ethical challenges. arXiv. [Google Scholar] [CrossRef]
  17. Calatayud, V. G., Espinosa, M. P. P., & Vila, R. R. (2021). Artificial intelligence for student assessment: A systematic review [Review of artificial intelligence for student assessment: A systematic review]. Applied Sciences, 11(12), 5467. [Google Scholar] [CrossRef]
  18. Charter school deserts: High poverty neighborhoods with limited educational options. (2018). Available online: https://files.eric.ed.gov/fulltext/ED592388.pdf (accessed on 7 January 2025).
  19. Chemulwo, M. J., & Ali, M. F. (2019). Equitable access to education and development in a knowledgeable society as advocated by UNESCO. Educational Research and Reviews, 14(6), 200. [Google Scholar] [CrossRef]
  20. Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review [Review of artificial intelligence in education: A review]. IEEE Access, 8, 75264. [Google Scholar] [CrossRef]
  21. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 567. [Google Scholar] [CrossRef]
  22. Chesterman, S. (2021). Through a glass, darkly: Artificial intelligence and the problem of opacity. The American Journal of Comparative Law, 69(2), 271. [Google Scholar] [CrossRef]
  23. Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. [Google Scholar] [CrossRef]
  24. Çelik, İ., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research [Review of the promises and challenges of artificial intelligence for teachers: A systematic review of research]. TechTrends, 66(4), 616. [Google Scholar] [CrossRef]
  25. Dieterle, E., Dede, C., & Walker, M. E. (2022). The cyclical ethical effects of using artificial intelligence in education. AI & Society, 39, 633–643. [Google Scholar] [CrossRef]
  26. Druga, S., Otero, N., & Ko, A. J. (2022, July 8–13). The landscape of teaching resources for AI education. ITiCSE 2022: Innovation and Technology in Computer Science Education, Dublin, Ireland. [Google Scholar] [CrossRef]
  27. Equity and quality in education—Supporting disadvantaged students and schools—OECD. (2023). Available online: https://www.oecd.org/education/school/equityandqualityineducation-supportingdisadvantagedstudentsandschools.htm (accessed on 15 November 2024).
  28. Ethnic and racial disparities in education. (2024). Available online: https://www.apa.org/ed/resources/racial-disparities (accessed on 15 November 2024).
  29. Farrow, R. (2023). The possibilities and limits of XAI in education: A socio-technical perspective. Learning Media and Technology, 48(2), 266. [Google Scholar] [CrossRef]
  30. Felix, J. R. B., & Webb, L. (2024). Use of artificial intelligence in education delivery and assessment. UK Parliament. [Google Scholar] [CrossRef]
  31. Fend, H. (2001). Educational institutions and society (p. 4262). Elsevier BV. [Google Scholar] [CrossRef]
  32. Fenu, G., Galici, R., & Marras, M. (2022). Experts’ view on challenges and needs for fairness in artificial intelligence for education. Available online: https://export.arxiv.org/pdf/2207.01490v1.pdf (accessed on 23 November 2024).
  33. Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. [Google Scholar] [CrossRef]
  34. Floridi, L. (2021). Ethics, governance, and policies in artificial intelligence [Philosophical studies series]. Springer International Publishing. [Google Scholar] [CrossRef]
  35. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society [Harvard data science review]. Harvard University. [Google Scholar] [CrossRef]
  36. Fosch-Villaronga, E., & Poulsen, A. (2022). Diversity and inclusion in artificial intelligence. In Information technology and law series/Information technology & law series (p. 109). T.M.C. Asser Press. [Google Scholar] [CrossRef]
  37. Ge, Z., & Hu, Y. (2020). Innovative application of artificial intelligence (AI) in the management of higher education and teaching. Journal of Physics Conference Series, 1533(3), 32089. [Google Scholar] [CrossRef]
  38. Gillani, N., Eynon, R., Chiabaut, C., & Finkel, K. A. (2023). Unpacking the “Black Box” of AI in education. arXiv. [Google Scholar] [CrossRef]
  39. Gligorea, I., Cioca, M., Oancea, R., Gorski, A.-T., Gorski, H., & Tudorache, P. (2023). Adaptive learning using artificial intelligence in e-learning: A literature review [Review of adaptive learning using artificial intelligence in e-learning: A literature review]. Education Sciences, 13(12), 1216. [Google Scholar] [CrossRef]
  40. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies [Review of A typology of reviews: An analysis of 14 review types and associated methodologies]. Health Information & Libraries Journal, 26(2), 91. [Google Scholar] [CrossRef]
  41. Gurrutxaga, M. (2012). Changes in rural–urban sex ratio differences in the young professional age group as an indicator of social sustainability in rural areas: A case study of continental Spain, 2000–2010. Available online: https://rgs-ibg.onlinelibrary.wiley.com/doi/10.1111/area.12024 (accessed on 10 October 2024).
  42. Gürsoy, F., & Kakadiaris, I. A. (2022). System cards for AI-based decision-making for public policy. arXiv. [Google Scholar] [CrossRef]
  43. Hahn, M. G., Baldiris, S., de-la-Fuente-Valentín, L., & Burgos, D. (2021). A systematic review of the effects of automatic scoring and automatic feedback in educational settings [Review of a systematic review of the effects of automatic scoring and automatic feedback in educational settings]. IEEE Access, 9, 108190. [Google Scholar] [CrossRef]
  44. Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T. T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2021). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504. [Google Scholar] [CrossRef]
  45. Holstein, K., & Doroudi, S. (2021). Equity and artificial intelligence in education: Will “AIEd” amplify or alleviate inequities in education? arXiv. [Google Scholar] [CrossRef]
  46. Hooda, M., Rana, C., Dahiya, O., Rizwan, A., & Hossain, M. S. (2022). Artificial intelligence for assessment and feedback to enhance student success in higher education. Mathematical Problems in Engineering, 2022, 1. [Google Scholar] [CrossRef]
  47. Hoque, S. M. S., & Alam, S. (2010). The role of information and communication technologies (ICTs) in delivering higher education—A case of Bangladesh. International Education Studies, 3(2). [Google Scholar] [CrossRef]
  48. How effective are intelligent tutoring systems? (2023). Available online: https://www.apa.org/pubs/highlights/spotlight/issue-37 (accessed on 22 November 2024).
  49. How should we assess security and data minimisation in AI? (2022). Available online: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-should-we-assess-security-and-data-minimisation-in-ai/ (accessed on 7 January 2025).
  50. Huang, L. (2023). Ethics of artificial intelligence in education: Student privacy and data protection. Science Insights Education Frontiers, 16(2), 2577. [Google Scholar] [CrossRef]
  51. Inclusive and connected higher education. (2022, June 18). Available online: https://education.ec.europa.eu/education-levels/higher-education/inclusive-and-connected-higher-education (accessed on 13 January 2025).
  52. Intelligent tutoring systems: Enhancing learning through AI. (2024). Available online: https://www.princetonreview.com/ai-education/intelligent-tutoring-systems (accessed on 13 January 2025).
  53. Jiang, S., McClure, J., Tatar, C., Bickel, F., Rosé, C. P., & Chao, J. (2024). Towards inclusivity in AI: A comparative study of cognitive engagement between marginalized female students and peers. British Journal of Educational Technology, 55(6), 2557–2573. [Google Scholar] [CrossRef]
  54. Joshi, K. M. (2010). Indigenous children of India: Enrolment, gender parity and drop-out in school education. International Journal of Sociology and Social Policy, 30(9), 545. [Google Scholar] [CrossRef]
  55. Kamalov, F., Calonge, D. S., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451. [Google Scholar] [CrossRef]
  56. Khosravi, H., Shum, S. B., Chen, G., Conati, C., Tsai, Y., Kay, J., Knight, S., Martínez-Maldonado, R., Sadiq, S., & Gašević, D. (2022). Explainable Artificial Intelligence in education. Computers and Education Artificial Intelligence, 3, 100074. [Google Scholar] [CrossRef]
  57. Latif, E., Mai, G., Nyaaba, M., Wu, X., Liu, N., Lu, G., Li, S., Liu, T., & Zhai, X. (2023). Artificial general intelligence (AGI) for education. Available online: https://export.arxiv.org/pdf/2304.12479v2.pdf (accessed on 20 October 2024).
  58. Lin, Y., Luo, Q., & Qian, Y. (2023). Investigation of Artificial Intelligence algorithms in education. Applied and Computational Engineering, 16(1), 180. [Google Scholar] [CrossRef]
  59. Lochmiller, C. R. (2021). Conducting thematic analysis with qualitative data (The Qualitative Report). Nova Southeastern University. [CrossRef]
  60. Luckin, R. (2019). AI and education: An intelligence infrastructure to empower self-efficacy. Available online: http://oasis.col.org/handle/11599/3464 (accessed on 7 January 2025).
  61. Ma, X., & Jiang, C. (2023). On the ethical risks of artificial intelligence applications in education and its avoidance strategies. Journal of Education Humanities and Social Sciences, 14, 354. [Google Scholar] [CrossRef]
  62. Madikizela-Madiya, N. (2021). The question of access and spatial justice in universities in sub-Saharan Africa: A capabilities approach. Transformation in Higher Education, 6, a124. [Google Scholar] [CrossRef]
  63. Magomadov, V. S. (2020). The application of artificial intelligence and Big Data analytics in personalized learning. Journal of Physics Conference Series, 1691(1), 12169. [Google Scholar] [CrossRef]
  64. Marey, T., Baker, S., Williams, L. A., & Tzelios, K. (2021). Equity and STEM in elite contexts: Challenging institutional assumptions and critiquing student support. International Journal of Inclusive Education, 27(14), 1576. [Google Scholar] [CrossRef]
  65. Martiniello, N., Asuncion, J. V., Fichten, C. S., Jorgensen, M., Havel, A., Harvison, M., Legault, A., Lussier, A., & Vo, C. (2020). Artificial intelligence for students in postsecondary education. AI Matters, 6(3), 17. [Google Scholar] [CrossRef]
  66. McCarthy, J. (2005). The future of AI—A manifesto. AI Magazine, 26(4), 39. [Google Scholar] [CrossRef]
  67. Mello, R. F., Freitas, E., Pereira, F. D., Cabral, L., Tedesco, P., & Ramalho, G. (2023). Education in the age of generative AI: Context and recent developments. arXiv. [Google Scholar] [CrossRef]
  68. Neji, W., Boughattas, N., & Ziadi, F. (2023). Exploring new AI-based technologies to enhance students’ motivation. Issues in Informing Science and Information Technology, 20, 95. [Google Scholar] [CrossRef] [PubMed]
  69. O’Dea, X., & O’Dea, M. (2023). Is artificial intelligence really the next big thing in learning and teaching in higher education? A conceptual paper. Journal of University Teaching and Learning Practice, 20(5). [Google Scholar] [CrossRef]
  70. Office of the High Commissioner for Human Rights. (2023). About the right to education and human rights. United Nations. Available online: https://www.ohchr.org/en/special-procedures/sr-education/about-right-education-and-human-rights (accessed on 27 June 2025).
  71. Off task: EdTech threats to student privacy and equity in the age of AI. (2023). Available online: https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-age-of-ai/ (accessed on 8 October 2024).
  72. Ojha, S., Narendra, A., Mohapatra, S., & Misra, I. (2023). From robots to books: An introduction to smart applications of AI in education. Available online: https://export.arxiv.org/pdf/2301.10026v1.pdf (accessed on 17 November 2024).
  73. Pavlovna, K. T., & Stanislavovna, K. N. (2015). Historiography of developing the issue of the information skills in the social and cultural space of the further education. International Education Studies, 8(6), 217–224. [Google Scholar] [CrossRef]
  74. Perna, L. W. (2005). The benefits of higher education: Sex, racial/ethnic, and socioeconomic group differences. Review of Higher Education/The Review of Higher Education, 29(1), 23. [Google Scholar] [CrossRef]
  75. Phillips, B., & Shipps, B. (2022). The persistent digital divide: The case study of a minority serving institution. Journal of the Southern Association for Information Systems, 9(2), 33. [Google Scholar] [CrossRef]
  76. Qiang, D., Li, X., Hongping, Y., & Ke-yun, Z. (2008). Gender inequality in rural education and poverty. Chinese Sociology & Anthropology, 40(4), 64. [Google Scholar] [CrossRef]
  77. Raabe, I. J. (2018). Social exclusion and school achievement: Children of immigrants and children of natives in three european countries. Child Indicators Research, 12(3), 1003. [Google Scholar] [CrossRef]
  78. Raj, N. S., & Renumol, V. G. (2022). An improved adaptive learning path recommendation model driven by real-time learning analytics. Journal of Computers in Education, 11(1), 121. [Google Scholar] [CrossRef]
  79. Ramis-Salas, M. (2015). Plurality and equality in the learning communities. Intangible Capital, 11(3), 293–315. [Google Scholar] [CrossRef]
  80. Rao, R. R., Naidu, R. S., & Jani, R. (2009). A critical review of the methods used to estimate the cost of an adequate education [Review of a critical review of the methods used to estimate the cost of an adequate education]. Journal of Sustainable Development, 1(3), 98–102. [Google Scholar] [CrossRef]
  81. Rastrollo-Guerrero, J. L., Gómez-Pulido, J. A., & Durán-Domínguez, A. (2020). Analyzing and Predicting Students’ Performance by Means of Machine Learning: A Review. Applied Sciences, 10(3), 1042. [Google Scholar] [CrossRef]
  82. Renz, A., Krishnaraja, S., & Gronau, E. (2020). Demystification of artificial intelligence in education—How much AI is really in the educational technology? International Journal of Learning Analytics and Artificial Intelligence for Education (iJAI), 2(1), 14. [Google Scholar] [CrossRef]
  83. Rios, W., Lumayno, V., Barola, M. T. T., & Estorosos, J. (2013). The impact of education on the socioeconomic and political development of a nation. Recoletos Multidisciplinary Research Journal, 1(1), 11. [Google Scholar] [CrossRef]
  84. Roldán, A. Á., Parra, I., & Gamella, J. F. (2018). Reasons for the underachievement and school drop out of Spanish Romani adolescents. A mixed methods participatory study. International Journal of Intercultural Relations, 63, 113. [Google Scholar] [CrossRef]
  85. Samuel, A. L. (1962). Artificial intelligence: A frontier of automation. The Annals of the American Academy of Political and Social Science, 340(1), 10. [Google Scholar] [CrossRef]
  86. Saputra, I., Astuti, M., Sayuti, M., & Kusumastuti, D. (2023). Integration of artificial intelligence in education: Opportunities, challenges, threats and obstacles—A literature review. Indonesian Journal of Computer Science, 12(4). [Google Scholar] [CrossRef]
  87. Scott, P. (2010). Higher education. Available online: https://www.sciencedirect.com/science/article/pii/B9780080448947008204 (accessed on 21 November 2024).
  88. Selwyn, N. (2024). On the limits of artificial intelligence (AI) in education. In Nordisk tidsskrift for pedagogikk og kritikk (Vol. 10, Issue 1). Cappelen Damm Akademisk. [Google Scholar] [CrossRef]
  89. Shams, R. A., Zowghi, D., & Bano, M. (2023). Challenges and solutions in AI for all. arXiv. [Google Scholar] [CrossRef]
  90. Shores, K., & Ejdemyr, S. (2017). Pulling back the curtain: Intra-district school spending inequality and its correlates. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  91. Smuha, N. A. (2020). Trustworthy artificial intelligence in education: Pitfalls and pathways. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  92. Spiel, C., Schwartzman, S., Busemeyer, M. R., Cloete, N., Drori, G. S., Lassnigg, L., Schober, B., Schweisfurth, M., Verma, S., Bakarat, B., Maassen, P., & Reich, R. (2018). The contribution of education to social progress (p. 753). Cambridge University Press. [Google Scholar] [CrossRef]
  93. St-Hilaire, F., Vu, D., Frau, A., Burns, N., Faraji, F., Potochny, J., Robert, S., Roussel, A., Zheng, S., Glazier, T., Romano, J. V., Belfer, R., Shayan, M., Smofsky, A., Delarosbil, T., Ahn, S., Eden-Walker, S., Sony, K., Ching, A. O., … Kochmar, E. (2022). A new era: Intelligent tutoring systems will transform online learning for millions. arXiv. [Google Scholar] [CrossRef]
  94. Tahir, S. A., Shah, S. A. A., & Abu-Khalaf, J. (2023). Artificial empathy classification: A survey of deep learning techniques, datasets, and evaluation scales. arXiv. [Google Scholar] [CrossRef]
  95. Tarabini, A., Jacovkis, J., & Montes, A. (2017). Factors in educational exclusion: Including the voice of the youth. Journal of Youth Studies, 21(6), 836. [Google Scholar] [CrossRef]
  96. THE DATAFIED STUDENT: Why students’ data privacy matters and the responsibility to protect it. (2022). Available online: https://studentprivacycompass.org/resource/the-datafied-student-why-students-data-privacy-matters-and-the-responsibility-to-protect-it/ (accessed on 20 January 2025).
  97. The hard reality of school for LGBTQI+ students. (2022). [Data set]. OECD Podcasts. [CrossRef]
  98. The impact of AI and EI on traditional and online education. (2023). Available online: https://elearningindustry.com/the-impact-ai-and-ei-have-on-traditional-and-online-education (accessed on 21 January 2025).
  99. The world’s families: Hidden funders of education. (2017). Available online: https://uis.unesco.org/en/blog/worlds-families-hidden-funders-education (accessed on 5 December 2024).
  100. Thomas, G. (2012). A review of thinking and research about inclusive education policy, with suggestions for a new kind of inclusive thinking [Review of a review of thinking and research about inclusive education policy, with suggestions for a new kind of inclusive thinking]. British Educational Research Journal, 39(3), 473. [Google Scholar] [CrossRef]
  101. Tieken, M. C. (2017). The spatialization of racial inequity and educational opportunity: Rethinking the rural/urban divide. Peabody Journal of Education, 92(3), 385. [Google Scholar] [CrossRef]
  102. Tonbuloğlu, B. (2023). An evaluation of the use of artificial intelligence applications in online education. Journal of Educational Technology and Online Learning, 6(4), 866. [Google Scholar] [CrossRef]
  103. Tricco, A. C., Langlois, E. V., & Straus, S. E. (2017). A guide to rapid reviews. Lippincott Williams & Wilkins. [Google Scholar]
  104. UNESCO survey: Less than 10% of schools and universities have formal guidance on AI. (2023). Available online: https://www.unesco.org/en/articles/unesco-survey-less-10-schools-and-universities-have-formal-guidance-ai (accessed on 11 November 2024).
  105. UNICEF. (2021, December). The state of the global education crisis: A path to recovery. Available online: https://www.unicef.org/reports/state-global-education-crisis (accessed on 17 October 2024).
  106. VE, C. (2023). Why teachers should explore ChatGPT’s potential—Despite the risks. Nature, 623(7987), 457. [Google Scholar] [CrossRef]
  107. Wai, J., Anderson, S. M., Perina, K., Worrell, F. C., & Chabris, C. F. (2024). The most successful and influential Americans come from a surprisingly narrow range of ‘elite’ educational backgrounds. Humanities and Social Sciences Communications, 11(1), 1129. [Google Scholar] [CrossRef]
  108. Warschauer, M., & Xu, Y. (2018). Technology and equity in education (p. 1063). Springer international handbooks of education. Springer Nature. [Google Scholar] [CrossRef]
  109. Working class heroes: Understanding access to higher education for white students from lower socio-economic backgrounds. (2019, February 1). Available online: https://www.educationopportunities.co.uk/resource_items/working-class-heroes-understanding-access-to-higher-education-for-white-students-from-lower-socio-economic-backgrounds-2019/ (accessed on 5 December 2024).
  110. Wu, C., Zhang, H., & Carroll, J. M. (2024). AI governance in higher education: Case studies of guidance at big ten universities. Future Internet, 16(10), 354. [Google Scholar] [CrossRef]
  111. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Available online: https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-019-0171-0 (accessed on 17 October 2024).
  112. Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & Choudhury, M. D. (2023, April 23–28). Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. CHI ’23: CHI Conference on Human Factors in Computing Systems (p. 1), Hamburg, Germany. [Google Scholar] [CrossRef]
Table 1. Capabilities of AI in Education.

Capability: Adaptive Learning and Personalisation
Description: AI tailors educational content to individual student needs, adjusting difficulty and type of content based on performance and learning pace. It can also detect and address knowledge gaps.
Potential general benefits for education: Enhanced learning experiences and outcomes, increased student engagement and motivation.
Potential benefits for inclusive education: Addresses diverse learning needs and paces by providing equitable scaffolding and individualised support pathways.

Capability: Intelligent Tutoring Systems
Description: ITS use ML models to personalise learning by analysing student performance, preferences, and interactions. They simulate one-on-one tutoring, offering tailored explanations, hints, and assessments. They can also predict student behaviours such as concept mastery or dropout risk.
Potential general benefits for education: Personalised learning experience, improved learning outcomes in scientific subjects, early identification of at-risk students.
Potential benefits for inclusive education: Creates non-judgmental and private learning environments that can build confidence and support learners who may feel alienated in traditional classroom settings.

Capability: Automated Assessment and Feedback
Description: AI algorithms evaluate student work (assignments, essays, tests, etc.) and provide instant feedback, ranging from simple multiple-choice grading to complex assessments of written responses. Some AI models can also detect plagiarism.
Potential general benefits for education: Saves educators time, helps students understand mistakes and learn more effectively, and facilitates large-scale assessments.
Potential benefits for inclusive education: Mitigates the risk of unconscious human bias in grading and provides consistent, immediate feedback to all students, regardless of their background.

Capability: Behavioural Prediction and Profiling
Description: AI analyses student data to predict behaviours such as academic performance, engagement, or dropout risk. This allows for early interventions and targeted support. It can also support student advising and career planning.
Potential general benefits for education: Early identification of at-risk students, personalised support and interventions, improved student retention, and data-driven student advising.
Potential benefits for inclusive education: Enables proactive and targeted support for historically underserved student populations who are at a higher risk of dropping out.

Capability: Administrative Efficiency
Description: AI assists with administrative tasks such as scheduling, resource allocation, managing student records, streamlining communication, and answering FAQs.
Potential general benefits for education: Reduced administrative workload for educators, allowing them to focus on more meaningful interactions with students; enhanced student experience; cost efficiency for institutions.
Potential benefits for inclusive education: Frees up educator capacity from routine tasks, enabling more time for proactive and equitable mentorship of a diverse student body.

Capability: Online Learning and Distance Education
Description: AI enhances online learning platforms by providing personalised learning paths, real-time feedback, and interactive learning experiences. It also helps monitor student engagement and participation, enabling early interventions to improve completion rates and academic success.
Potential general benefits for education: Increased accessibility to education, personalised learning in online environments, and improved student engagement and completion rates in online courses. AI can also make education more accessible by removing geographical and temporal barriers.
Potential benefits for inclusive education: Overcomes geographical, physical, and situational barriers to participation, expanding access for a wider range of learners.
Table 2. Barriers to inclusivity in education.

Barrier category: Societal Factors
Sub-category: Systemic Inequities
Description: Unequal power structures and discriminatory practices embedded within societal systems.
Examples: Historical oppression, racial discrimination, legal barriers.

Sub-category: Socioeconomic Disparities
Description: Disparities in wealth and income create unequal access to resources and opportunities.
Examples: Differential access to quality healthcare, housing, and educational resources.

Sub-category: Gender Inequality
Description: Societal norms and expectations limit opportunities based on gender.
Examples: Gender pay gap, limited leadership roles for women, societal expectations around caregiving responsibilities.

Sub-category: Urban–Rural Divide
Description: Geographic location creates disparities in access to resources and opportunities.
Examples: Limited access to healthcare, education, and employment in rural areas.

Barrier category: Cultural Factors
Sub-category: Cultural and Linguistic Differences
Description: Educational systems often favour dominant cultural norms and language.
Examples: Curriculum not reflecting diverse perspectives, lack of language support, and cultural bias in assessments.

Sub-category: Discrimination and Lack of Belonging
Description: Marginalised groups face prejudice, stereotypes, and exclusion, impacting their sense of belonging and well-being.
Examples: Bullying, microaggressions, lack of representation in leadership and curriculum.

Barrier category: Economic Factors
Sub-category: Direct and Indirect Costs
Description: Financial burdens associated with education create barriers to access.
Examples: Tuition fees, textbooks, transportation, technology, and childcare.

Sub-category: Poverty and Financial Instability
Description: Low-income families face greater challenges in affording education and may prioritise immediate needs over long-term investments.
Examples: Students working to support families, difficulty affording basic needs, and increased vulnerability during economic downturns.

Sub-category: Funding Inequities
Description: Unequal distribution of resources across educational institutions creates disparities in educational quality.
Examples: Underfunded schools in low-income neighbourhoods, disparities in teacher salaries and resources.
Table 3. AI and ethical challenges in education.
| Ethical Challenge | Description | Potential Impact on Higher Education |
|---|---|---|
| Algorithmic Bias | AI systems can inherit and amplify biases present in training data, leading to discriminatory outcomes. A lack of diversity in design teams can worsen this. | Unfair decisions regarding admissions, grading, and resource allocation, potentially replicating existing inequities. |
| Data Privacy and Security | AI’s reliance on vast amounts of data raises concerns about the collection, storage, and use of sensitive student information. A lack of transparency and open communication with students and families exacerbates these concerns. Security breaches pose a significant risk. | Decrease in student and family trust, potential legal challenges, and reputational damage to institutions. |
| Lack of Transparency | The “black box” nature of some AI algorithms makes it difficult to understand how decisions are made. This opacity makes it hard to identify and address biases, undermining trust and limiting feedback capabilities. | Difficulty in understanding AI’s decision-making process, hindering the ability to identify and correct biases or provide meaningful feedback to students. This can lead to distrust in AI-driven assessments and limited opportunities for improvement. |
| Unequal Access and Exacerbation of Inequities | Disparities in access to technology, connectivity, and digital literacy, coupled with potential biases in AI systems, can worsen existing inequalities. AI tools may inadvertently benefit privileged students more, leaving behind those from marginalised communities. | Widening the achievement gap, reinforcing societal biases, and further marginalising underprivileged students. |
| Autonomy and Over-Reliance | Over-dependence on AI for educational tasks can diminish human agency and critical thinking for both students and educators. It is crucial to balance AI assistance with human oversight and ensure that educators retain control over decision-making. | Reduced student engagement and critical thinking skills, deskilling of educators, oversimplification of learning, and a potential negative impact on educator autonomy. |
| Emotional and Social Impact | AI’s limitations in replicating human empathy and the potential for reduced human interaction raise concerns about the social and emotional development of students. Over-reliance on AI could negatively impact students’ ability to develop crucial social skills and emotional intelligence. | Difficulty fostering empathy, social skills, and a sense of community in learning environments. Potential for exacerbating biases if AI systems replicate learned prejudices. |
| Accountability | Determining responsibility for AI-driven decisions in education can be complex, especially as AI takes on more significant roles. Clear mechanisms are needed to ensure accountability for outcomes and to make the AI’s decision-making process transparent to stakeholders (educators, students, and policymakers). | Challenges in ensuring fairness and accountability for AI-driven decisions, particularly in high-stakes situations. Difficulty in tracing the decision-making process of AI systems can create confusion and distrust among stakeholders. |
| Ethical Use of AI in Decision-Making | Deploying AI in human-related contexts like education requires careful ethical consideration throughout its lifecycle. Ethical frameworks are crucial for identifying and mitigating risks; ensuring fairness, transparency, and accountability; and aligning AI use with social values and human rights. These frameworks must be tailored to the specific challenges of the educational domain. | Ensuring that AI systems are used responsibly and ethically in educational decision-making, considering the potential impact on all stakeholders. |
| Intellectual Property and Ownership | The use of generative AI in education raises concerns about authorship, originality, and intellectual property rights. The potential for misuse of AI-generated material, including plagiarism by students, necessitates clear institutional policies on responsible AI use in student work, assessments, and content creation. | Disputes over copyright, data ownership, and appropriate use of AI-generated content in educational settings. Potential for academic dishonesty and undermining of learning objectives if AI is misused. |
| Long-Term Impact and Misinformation | AI systems, especially LLMs, can generate large amounts of seemingly credible but inaccurate or misleading information. This poses challenges to critical thinking and trust, potentially hindering learning outcomes and perpetuating misconceptions among stakeholders. The long-term societal impact of AI in education remains uncertain, raising concerns about the spread of misinformation and its effects on knowledge acquisition and informed decision-making. | Decrease in trust in information sources, difficulty discerning valid information, hindered development of critical thinking skills, perpetuation of misconceptions, and potential negative impact on societal knowledge and informed decision-making. |
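The mechanism behind algorithmic bias described above (models inheriting discriminatory patterns from historical training data) can be made concrete with a minimal, purely illustrative sketch. Nothing below is drawn from the reviewed studies: the synthetic admissions data, the groups "A" and "B", the historical penalty, and the simple cutoff-learning rule are all hypothetical, chosen only to show how a model fitted separately to biased past decisions reproduces those decisions.

```python
import random

random.seed(0)

def make_history(n=1000):
    """Generate hypothetical historical admissions records.

    Group "B" is under-represented, and historical decisions applied an
    (unfair) 8-point penalty to group "B" applicants with identical scores.
    """
    data = []
    for _ in range(n):
        group = random.choice(["A", "A", "A", "B"])   # "B" is a minority group
        score = random.gauss(60, 10)
        admitted = score + (0 if group == "A" else -8) > 60
        data.append((group, score, admitted))
    return data

def learn_threshold(history, group):
    """Infer a per-group admission cutoff from past decisions: the midpoint
    between the lowest admitted score and the highest rejected score."""
    admitted = [s for g, s, a in history if g == group and a]
    rejected = [s for g, s, a in history if g == group and not a]
    return (min(admitted) + max(rejected)) / 2

history = make_history()
cut_a = learn_threshold(history, "A")
cut_b = learn_threshold(history, "B")
# The model "learns" a markedly stricter cutoff for group B, silently
# encoding the historical penalty as if it were a neutral decision rule.
print(f"learned cutoff for A: {cut_a:.1f}, for B: {cut_b:.1f}")
```

The point of the sketch is that no variable named "group" needs to enter the learned rule for the disparity to persist: fitting faithfully to biased outcomes is enough, which is why the table stresses diverse design teams and auditing of training data rather than merely removing protected attributes.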
Table 4. Conclusions through the four defined themes.
| Ethical Challenges | Current AI Initiatives in Higher Education | Barriers to Inclusive Education | AI’s Impact on Bridging or Worsening Inequalities |
|---|---|---|---|
| Algorithmic Bias | | | |
| Data Privacy and Security | | | |
| Lack of Transparency | | | |
| Unequal Access and Exacerbation of Inequities | | | |
| Autonomy and Over-Reliance | | | |
| Emotional and Social Impact | | | |
| Accountability | | | |
| Ethical Use of AI in Decision-Making | | | |
| Intellectual Property and Ownership | | | |
| Long-Term Impact and Misinformation | | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cotilla Conceição, J.M.; van der Stappen, E. The Impact of AI on Inclusivity in Higher Education: A Rapid Review. Educ. Sci. 2025, 15, 1255. https://doi.org/10.3390/educsci15091255
