Article

Artificial Intelligence in Higher Education: Bridging or Widening the Gap for Diverse Student Populations?

by
Dorit Hadar Shoval
Department of Psychology, The Max Stern Yezreel Valley College, Mizra 1930600, Israel
Educ. Sci. 2025, 15(5), 637; https://doi.org/10.3390/educsci15050637
Submission received: 30 March 2025 / Revised: 19 May 2025 / Accepted: 19 May 2025 / Published: 21 May 2025

Abstract
This study addresses a critical gap in understanding the differential effects of AI-based tools in higher education on diverse student populations, focusing on first-generation and minority students. Conducted as a case study in an introductory psychology course at a peripheral college, this research employed a mixed-methods approach, combining surveys (n = 110), in-depth semi-structured interviews (n = 20 selected to reflect class diversity), and the lecturer’s reflective journal. Data were analyzed using descriptive and inferential statistics (t-tests, Chi-square) and thematic analysis, with triangulation across data sources to examine how AI-based simulations influenced learning experiences and outcomes. The findings reveal that while AI enhanced content understanding and engagement across groups, it also highlighted and potentially widened educational gaps through an emerging “AI literacy divide.” This divide manifested in varying AI engagement patterns and differences in applying AI knowledge beyond the course, which was significantly more pronounced among majority and non-first-generation students compared to minority and first-generation peers. Qualitative data linked these disparities to prior technological exposure, cultural background, and academic self-efficacy. This study proposes an integrative framework highlighting AI literacy, AI engagement, and AI-enhanced cognitive flexibility as mediators between cultural/technological capital and AI adoption. The conclusions underscore the need for inclusive pedagogical strategies and institutional support to foster equitable AI adoption.

1. Introduction

Integrating generative artificial intelligence (GenAI) in higher education offers opportunities and challenges for learning and teaching (O’Dea, 2024). Preliminary research has demonstrated the benefits of AI across various applications and explored faculty and student perceptions (Billy & Anush, 2023; Said et al., 2023), but a critical gap remains in understanding its differential effect on diverse student populations, particularly first-generation college students and minority groups. The present study addressed this gap by investigating whether AI-based tools in higher education bridge or widen the educational divide for these populations, focusing on the potential exacerbation or mitigation of existing disparities. This question is particularly salient in diverse educational contexts, where existing inequalities rooted in socioeconomic and cultural factors may intersect with the emerging challenges of AI integration.

1.1. Literature Review and Theoretical Framework

According to Bourdieu’s (1986) theory of cultural capital, success in the education system is not dependent exclusively on intellectual abilities but also on “cultural capital”—the knowledge, skills, and habits acquired primarily through family socialization. Students from disadvantaged backgrounds often enter academia with less of the cultural capital required for success in this environment, placing them at a structural disadvantage (Dumais & Ward, 2010; Shams et al., 2024).
Complementing Bourdieu’s work, the digital divide theory (Van Dijk, 2005; Warschauer, 2004) elucidates how access to digital technologies and their use can create or exacerbate social and educational inequalities. This theory has evolved from focusing strictly on physical access to technology to examining more nuanced aspects of digital inequality, including differences in digital skills and usage patterns (Livingstone & Helsper, 2007; Van Dijk, 2017).
Bridging these perspectives, the concept of “technological capital” emerges as a useful extension of Bourdieu’s theory in the digital age. Lee and Chen (2017) defined technological capital as a particular form of cultural capital that enables the meaningful use of information and communication technologies. They proposed two main components of technological capital: techno-competency—the technical ability to use digital technologies—and digital cultural production—the ability to create digital content. Their research shows that technological capital is related to socioeconomic background and cultural capital, and it influences students’ ability to leverage educational and professional opportunities in the digital era.
Hamilton and Hamilton (2022) expanded the discussion of technological capital to online learning in higher education. They showed that students with higher levels of technological capital were better able to use online resources and support, which bolstered their academic success. Their research suggests that technological capital is not only about access to technology but also about the ability to use it effectively and derive educational value from it.
Integrating cultural capital, digital divide, and technological capital theories provides a broad framework for understanding how AI technologies may affect educational disparities in higher education. Cultural capital influences students’ starting point when they begin their higher education, including understanding the academic “rules of the game” and familiarity with the knowledge and skills required for success (Haney, 2022). Simultaneously, the digital divide affects their access and ability to effectively use digital technologies, including AI, in learning (Matsieli & Mutula, 2024). Technological capital combines these concepts, focusing on students’ ability to use technology effectively and derive meaningful value from it in their academic work (Kurban & Şahin, 2024). This integrated perspective allows for an in-depth understanding of how existing inequalities may be perpetuated or, alternatively, mitigated through the introduction of AI in educational settings. Regarding AI integration in higher education, this theoretical framework reveals several critical issues: the potential of AI to compensate for gaps in cultural capital by providing access to personalized resources and support; the consequences of the divide in terms of students’ ability to leverage AI tools effectively; and the increasing significance of technological capital and a nascent form of AI literacy in AI-enhanced learning environments. Examining these issues through the lens of established sociological and technological frameworks provides a critical perspective on the potential for AI to reshape educational equity.

1.2. AI in Higher Education: Recent Developments and Challenges

Recent advances in AI have opened new frontiers in educational technology (Dwivedi et al., 2023). Wu (2023) explored the application of these technologies in creating highly interactive and personalized learning experiences. AI-based simulations, in particular, offer immersive learning environments that allow students to apply theoretical knowledge to practical applications (Ramdurai & Adhithya, 2023).
Zhang and Aslan (2021) outlined several key applications of AI in educational settings, including the personalization of learning, task automation, intelligent content creation, and adaptive tutoring systems. These innovations offer the option to tailor educational materials and approaches to individual students’ needs, potentially enhancing both the efficiency and effectiveness of the learning process.
AI technologies offer potential solutions to some of the challenges faced by first-generation and minority students. Adaptive learning systems powered by AI can provide personalized instruction tailored to individual students’ needs, helping bridge gaps in existing knowledge and learning styles (Zhang & Aslan, 2021). AI-powered tools can also support students in orienting themselves within academic systems and accessing resources, areas where first-generation students often struggle (Schwartz et al., 2018). GenAI can also help bridge language gaps, which is particularly beneficial for minority students whose first language is not the language of instruction at their institution. With advanced translation capabilities, AI can make academic content more accessible, reducing language-related barriers and supporting more inclusive learning environments (Hadar Shoval et al., 2024).
The implementation of AI in higher education is not without challenges, however. Nasim et al. (2022) pointed out the need to address potential algorithmic biases and maintain transparency in AI-driven decision-making processes. UNESCO (2023a, 2023b) warns that the use of these technologies may exacerbate existing digital divides, particularly for students from developing regions or disadvantaged backgrounds.
First-generation college students, often from lower socioeconomic backgrounds, face unique challenges in higher education. They frequently lack the cultural capital required to navigate academic institutions: the knowledge typically passed down through families with higher education experience (Lohfink & Paulsen, 2005; Verdín et al., 2021). The integration of AI technologies in education may present additional hurdles for these students, who may have lower digital proficiency and less exposure to advanced technologies than their peers (Deng & Yang, 2021; Zhan et al., 2024).
As educational institutions increasingly adopt AI technologies, there is a growing concern that these advancements may inadvertently exacerbate existing inequalities. Specifically, there is a risk that AI integration will widen educational gaps, if not carefully managed. This risk stems from an emerging “AI literacy divide,” which encompasses not only technical skills but also the cognitive and strategic abilities needed to effectively utilize AI tools for academic purposes. Students who are already technologically adept or have greater access to AI tools outside of school may benefit fully from AI-enhanced learning environments, developing higher levels of AI literacy and gaining further advantages, while those with less technological proficiency or access may under-benefit, potentially widening the gap (Wu, 2023). This digital divide, rooted in differences in technological capital, can exacerbate inequalities based on socioeconomic status, race, and ethnicity. Understanding this dynamic is crucial for informing inclusive AI implementation strategies in higher education.
Research on student satisfaction with the integration of AI in higher education reveals that various factors influence attitudes toward AI-assisted learning. Cai et al. (2023) found that acquired familiarity with AI, perceived usefulness, and ease of use were significant determinants of positive attitudes toward AI-assisted learning. Deng and Yang (2021) observed that students with higher digital proficiency reported greater satisfaction with online learning experiences.
Despite the potential of AI to transform higher education, there remains a critical need for empirical research on its effect, particularly on under-represented and marginalized student populations. The literature lacks in-depth, qualitative studies examining the experiences of certain groups, such as first-generation college students, with AI-based learning tools. To address this gap, the present study built upon existing theoretical frameworks of cultural and technological capital and the digital divide to investigate the effects of integrating GenAI into higher education, with a focus on its potential to narrow or widen educational disparities. Through a case study of an AI-based simulation used in an introductory psychology course at a peripheral college in Israel, this research explored how AI tools influenced learning experiences and outcomes for diverse student populations, including first-generation and minority students. This research positions itself within the critical discussion on whether technological advancements in education serve to promote equity or reinforce existing social stratifications.
The main research questions guiding this study were the following:
RQ1: How does the use of AI-based simulation influence the learning experiences of students from different backgrounds, particularly first-generation college students and minority groups?
RQ2: How does the use of this tool affect learning outcomes and the satisfaction of students in these groups compared to their peers?
RQ3: What factors shape the way students use AI-based tools and benefit from them?

2. Materials and Methods

This study follows an observational design and adheres to the STROBE guidelines for reporting observational studies.

2.1. Research Approach

This study takes an innovative approach by examining the differential impact of AI-based tools on diverse student populations within a real-world educational setting. It used a descriptive case study approach, based on Yin’s (2009) methodology. Yin defined a case study as “an empirical inquiry that investigates a contemporary phenomenon in depth and within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident” (p. 13). This approach was chosen for its capacity to provide a comprehensive, contextualized understanding of complex phenomena through multiple data sources. The descriptive nature of this study allowed for a rich, detailed account of the integration of AI-based simulation in a given educational setting, capturing the varied experiences of diverse student groups.

2.2. Research Setting and Context

The college, located in northern Israel, is a microcosm of Israel’s diverse society. It was defined by the Council for Higher Education as a peripheral institution, and it faces unique social and economic challenges because of its remote location and the relatively low socioeconomic status of the region (Raz Rotem et al., 2021). The mission of the college is to provide access to higher education for a population largely consisting of first-generation college students (Rispler & Yakov, 2023).
The student population is diverse, and it includes Jewish and Arab citizens of Israel from various religious and cultural backgrounds. Many students come from low socioeconomic backgrounds and work part-time to finance their studies. For a significant percentage of students, Hebrew, the official language of instruction, is not their mother tongue (Raz Rotem et al., 2021). Notable disparities exist among students in their readiness for academic studies, stemming from differences in the quality of high school education across communities (Rispler & Yakov, 2023).
One hundred and ten first-year undergraduates were enrolled in the Introduction to Psychology course, conducted during the first semester of the 2023–2024 academic year. The course structure includes frontal lectures, tutorials, and the extensive use of reading materials. The lecturer incorporated diverse pedagogical techniques despite class size limitations.
A key challenge in this course is balancing the delivery of extensive theoretical information with creating an engaging and participatory learning environment, especially for students from diverse backgrounds with varying needs. This is particularly evident regarding non-native Hebrew speakers, who often struggle to participate in class discussions. This case study reflects the challenges faced in higher education when dealing with diverse student populations, revealing the need for innovative teaching methods, such as the integration of AI, to enhance the learning experience for all students.
To address the unique challenges presented by the diverse student population in the Introduction to Psychology course, an innovative approach using GenAI tools was developed by the course instructor. This initiative aimed to assist students in understanding and applying psychological theories learned in the course while adapting the learning materials and instructional methods to individual student needs.
AI-based simulations were implemented as an integral part of the learning process in the course and were used after each of the main topics. Figure 1 presents an example of one simulation prompt used in the course.

2.3. Technical Implementation Details

2.3.1. Development of Simulation Prompt

The prompt building process included several stages:
  • Defining the theoretical approaches and the topic on which the simulation focused, compiling all relevant theoretical material for this content unit.
  • Developing scenarios that simulate situations from students’ daily lives, with cultural sensitivity to the diverse populations in the course.
  • Creating a system of questions and answers that allowed students to analyze scenarios from different theoretical perspectives.
  • Implementing a personalized feedback mechanism for student responses, based on the pedagogical principles of positive reinforcement and learning from mistakes.
  • Selecting the appropriate tone for the conversation between the LLM and the student.
  • Identifying safety and ethical issues to be addressed in prompt development.
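As a rough illustration of these stages, the sketch below assembles hypothetical prompt components into a single system prompt. All section names, wording, and the example scenario are invented for illustration; the actual prompt used in the course is shown in Figure 1, and the simulations themselves were run by students directly in the LLM interface, not through code.

```python
# Illustrative sketch only: hypothetical structure, not the course's actual prompt.
def build_simulation_prompt(topic, theories, scenario, tone="supportive"):
    """Combine the design stages described above into one prompt string."""
    sections = [
        f"You are running a learning simulation on: {topic}.",
        "Relevant theoretical material: " + "; ".join(theories),
        f"Present this everyday-life scenario to the student: {scenario}",
        "Ask the student to analyze the scenario from each theoretical "
        "perspective, one question at a time.",
        "After each answer, give personalized feedback: reinforce what is "
        "correct and treat mistakes as learning opportunities.",
        f"Keep a {tone} tone throughout, in whichever language the student uses.",
        "Safety: do not give clinical advice; redirect personal distress "
        "to campus support services.",
    ]
    return "\n".join(sections)

prompt = build_simulation_prompt(
    topic="classical and operant conditioning",
    theories=["Pavlov's classical conditioning", "Skinner's operant conditioning"],
    scenario="A student checks their phone every time it vibrates, even in class.",
)
print(prompt)
```

The point of the sketch is the separation of concerns: theory, scenario, questioning, feedback, tone, and safety are drafted as independent components and only then combined, mirroring the staged process above.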
The LLM Claude (Anthropic, Ltd., San Francisco, CA, USA) was used to run the simulation. We used Claude because at the time the course was conducted, Claude’s performance in Hebrew and Arabic was considered superior to that of other LLMs.

2.3.2. Classroom Implementation

The implementation process included the following steps:
  • Introduction of the tool: At the start of the semester, the AI-based simulation method was introduced to students as part of the learning tools of the course.
  • Technical preparation: Students received the prompt through the messaging system of the college. They were instructed to register for the free version of Claude and upload the prompt file sent by the lecturer.
  • Classroom use: Students practiced using the simulation during class for 20 min, each one working independently on a laptop or smartphone.
  • Language flexibility: Students were instructed to use the simulation in any language comfortable for them.

2.4. Participants

This study included all 110 undergraduate psychology students enrolled in the Introduction to Psychology course during the 2023–2024 academic year at the college. All participants completed a structured questionnaire at the end of the course. The demographic characteristics of the class reflected the diverse student body of the college (Table 1).
From this cohort, in-depth interviews were conducted with 20 students. All students in the course were invited to participate in the interviews, and those who agreed and could allocate an hour of their time were interviewed.
The demographic characteristics of the 20 interviewees were as follows: 60% (12) women; 50% (10) Jewish, 40% (8) Muslim, and 10% (2) Christian; 55% (11) reported a native language other than Hebrew; and 45% (9) were first-generation college students. Additionally, 45% (9) defined their socioeconomic status as low or very low. Interviewees were selected to reflect the demographic diversity of the course participants, aiming for representation across gender, ethnic background, first-generation status, and socioeconomic status as reported in the survey.

2.5. Research Instruments

This study used three primary data collection tools:
  • Structured questionnaires were administered to all 110 students who participated in the course. The questionnaires (Table 2 and Table 3) focused on several key areas: pre-existing and acquired knowledge of AI, the frequency of AI tool usage, perceptions regarding the contribution of the simulations to the understanding of the course material, and the application of acquired knowledge to other areas of study. Answers were provided on a 5-point Likert scale for questions requiring the assessment of student perceptions; dichotomous (yes/no) questions were used to examine certain patterns of AI technology use.
  • In-depth, semi-structured interviews were conducted with 20 students. The interview protocol (see Appendix A) was designed to explore several key areas: students’ overall learning experiences with the simulation, their perceptions of its contribution to their understanding of the course content, challenges or difficulties encountered during usage, and how the simulation compared to other learning methods they had used. The interviews also explored how students’ backgrounds, including their status as first-generation college students and their ethnic backgrounds, influenced their interaction with the simulation and perception of it. The interviews also investigated the students’ initial thoughts about using AI-based simulations in their coursework, their comfort level with the technology, and how this comfort level may have changed over the semester. Furthermore, the protocol included questions about the effects of the simulation on students’ academic performance, their interest in psychology, and their perceptions of how such technology may affect educational opportunities for students from diverse backgrounds.
  • The lecturer maintained a journal throughout the course and included a systematic documentation of observations on the use of the simulation in the classroom, student reactions, and classroom interactions.

2.6. Data Analysis

This study used a mixed-methods approach to data analysis, integrating quantitative and qualitative methodologies:
  • Quantitative analysis: Questionnaire data were analyzed using SPSS version 27. Descriptive statistics were calculated, and group differences were examined using independent samples t-tests and Chi-square tests, with p < 0.05 considered significant. These tests were chosen as appropriate methods for comparing means and proportions between independent groups based on the nature of the data and research questions. The assumptions underlying parametric tests were examined where applicable. Effect sizes were calculated using Cohen’s d for t-tests and the phi (φ) coefficient for Chi-square tests.
  • Qualitative analysis: Data from in-depth interviews and the lecturer’s journal were analyzed using thematic analysis, following Braun and Clarke’s (2012) method. The journal provided a systematic documentation of observations and reflections throughout the course, which were coded and analyzed to identify recurring patterns and themes related to students’ engagement with the AI simulation and the lecturer’s pedagogical responses. This process involved familiarizing oneself with the data, generating initial codes, searching for themes, reviewing themes, defining and naming themes, and producing a report. Specifically, familiarization included multiple readings of the transcripts and journal entries; initial codes were generated systematically across the dataset; codes were then collated into potential themes, which were subsequently reviewed against the coded data and the entire dataset; themes were then refined, clearly defined, and named; and finally, a narrative report was produced, illustrated by representative quotes. To enhance the trustworthiness of the qualitative findings, a subset of the interview transcripts (25%) was independently coded by two researchers. Intercoder reliability was assessed using Cohen’s Kappa coefficient, which yielded a value of 0.85, indicating a high level of agreement. Following the establishment of reliability, the remaining transcripts and the lecturer’s journal were coded by the primary researcher.
  • Triangulation: Following a separate analysis of quantitative and qualitative data, triangulation was performed across different data sources to identify central themes and patterns. This involved comparing and contrasting findings from the quantitative surveys, qualitative interviews, and the lecturer’s journal to corroborate emergent themes, explore contradictory findings, and develop a more comprehensive understanding of the phenomena under investigation. The findings from the quantitative analysis on group differences in usage and perception informed the interpretation of qualitative data on student experiences, while qualitative themes provided rich context and explanation for the observed quantitative patterns. This integrated approach allowed for a robust validation of the findings and a deeper, multifaceted interpretation.
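For readers who want to see the shape of these procedures, the sketch below runs an independent-samples t-test with Cohen’s d and a chi-square test with the phi coefficient on invented data. The analyses in this study were conducted in SPSS version 27; the Python/SciPy code and all numbers here are hypothetical illustrations of the same computations, not the study’s data (which appear in Tables 2 and 3).

```python
# Illustrative sketch with invented data; the study itself used SPSS v27.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert ratings for two independent groups
group_a = np.array([4, 5, 4, 3, 5, 4, 4, 5])
group_b = np.array([3, 3, 4, 2, 3, 4, 3, 2])

t, p = stats.ttest_ind(group_a, group_b)  # independent-samples t-test

# Cohen's d from the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Hypothetical 2x2 table: rows = group, columns = yes/no on a dichotomous item
table = np.array([[12, 28],
                  [5, 35]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # phi coefficient for a 2x2 table

print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d:.2f}")
print(f"chi2 = {chi2:.2f}, p = {p_chi:.3f}, phi = {phi:.2f}")
```

Reporting the effect sizes (d, phi) alongside the p-values, as done in this study, indicates the magnitude of group differences rather than only their statistical significance.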

3. Results

The study findings are organized around the three research questions, integrating quantitative and qualitative data and foregrounding the central themes that emerged from the analysis.
Table 2 and Table 3 summarize the key quantitative data, including comparisons between groups and statistical significance.

3.1. Effects of AI-Based Simulation on Learning Experiences

3.1.1. Quantitative Findings

When asked to retrospectively assess their initial AI knowledge at the end of the course, students reported relatively low levels across all groups, with statistically significant differences between the majority and minority groups but not between first-generation and other students. When evaluating their post-course knowledge, students indicated a substantial increase in their understanding of artificial intelligence. Although there was no statistically significant difference between the majority and minority groups in post-course AI knowledge, a significant difference emerged between first-generation and other students, the latter reporting higher levels. This suggests that although all students reported that they benefited from the AI-based simulations, first-generation students reported gaining less from the experience.
When asked about the contribution of simulations to understanding course material, minority students rated their benefit significantly higher than majority students did. Similarly, first-generation students rated the contribution of simulations more highly than their peers. These findings suggest that although first-generation students may have gained less knowledge overall, they and minority students nonetheless found the simulations particularly helpful in understanding course material.

3.1.2. Qualitative Findings

Initial Encounters and Technological Barriers

The analysis uncovered significant disparities in the initial experiences of students from different backgrounds, reflecting gaps in technological capital and digital access. Students from minority groups and first-generation college students described their considerable apprehension:
When the lecturer introduced the simulation, I felt really scared. I thought to myself, “How am I going to manage this? I barely know how to turn on a computer”.
(first-generation student, minority group)
It was like a foreign language to me. Everyone around seemed so confident, and I felt like I was trying to crack a secret code.
(minority group student)
By contrast, non-first-generation students expressed enthusiasm:
This was really cool! I saw it and immediately thought of all sorts of ways I could use this in other courses, too.
(non-first-generation student, majority group)
I felt like I’d gotten a new toy. I couldn’t wait to play with it and see what I could do.
(non-first-generation student, majority group)
This analysis confirms the digital divide between groups. While non-first-generation students viewed technology as an opportunity, for first-generation and minority students, it presented an additional challenge in their academic journey.

Role of Academic Support

The analysis revealed that support from academic staff was a critical factor in bridging initial gaps:
I was really stressed out at first. The lecturer said we needed to register for the free version of the AI with our Google account. I couldn’t remember how to do it or what my password was, so I pretended to be practicing. But the lecturer didn’t give up; she went around checking that everyone was logged in. When she saw I wasn’t, she quietly said, “It’s okay, I’ll do it with you after class.” She sat with me after class, showed me how to register, and waited while I practiced the whole simulation. Since then, I haven’t had any problems with the subsequent simulations.
(first-generation student, minority group)
Nevertheless, some students indicated that they needed more support:
I felt like everyone understood except me. I needed more help, but I was too embarrassed to ask.
(first-generation student, minority group)
This analysis attests to the importance of differentiated support tailored to diverse student needs. It points to the need for structured and accessible support systems, particularly for first-generation and minority students.

Development over Time

As the course progressed, a change in students’ experiences became evident:
At first, I was scared to even open the AI platform. But after a few weeks, I felt much more relaxed. By the end of the semester, I wasn’t afraid anymore. It was like a weight had been lifted off my shoulders.
(minority group student)
It took me some time to feel confident enough to start the simulation, but once I did, I realized it wasn’t as hard as I thought. The instructions were clear, and I could just follow along. Now I feel much more at ease with it.
(first-generation student)
For non-first-generation students, who were typically more comfortable with the technology from the start, the development often extended beyond basic use to more advanced applications:
After the first simulation, I got curious about how to create my own prompts. I started experimenting with different instructions, trying to see what the AI could do. It’s been really exciting to explore its potential beyond just our coursework.
(non-first-generation student, majority group)
This analysis indicates that most students overcame initial apprehensions over time, but the nature of progress differed between groups. Whereas minority and first-generation students primarily gained confidence in using the tool as intended, others often moved beyond basic use to explore more advanced applications.

3.1.3. Insights from the Lecturer’s Journal

Week 1: I noticed a clear divide in students’ initial reactions to the AI simulation. Some were excited, others terrified. I realize I need to be prepared not just to explain the technology but to provide emotional support.
Week 3: As we progress, I’m noticing differences in the types of support different students need. While some students need basic technical help, others are seeking more advanced guidance. It’s a constant challenge to provide the right level of support to each student.
These observations attest to the multifaceted role of educators in implementing new technologies, pointing to the need for technical expertise alongside emotional support and a flexible pedagogical approach.
In conclusion, the qualitative findings paint a complex picture of the effects of AI-based simulations on learning experiences. Although this technology has significant potential for improving learning, it also exposes and amplifies gaps in technological and cultural capital. Academic support and development over time emerged as critical factors in bridging these gaps.

3.2. Effects on Learning Outcomes and Satisfaction

3.2.1. Quantitative Findings

The frequency of current AI tool use showed significant differences between groups. Majority group students reported higher use than minority group students. Similarly, non-first-generation students reported more frequent use of AI tools than first-generation students. These findings suggest that exposure to AI-based simulations in the course may have led to the differential adoption of these tools by student groups.
A stark contrast emerged in the application of acquired AI knowledge to other areas of study. Majority group students reported a significantly higher application of AI knowledge than minority group students. A similar pattern was observed between non-first-generation and first-generation students. The large effect sizes indicate substantial differences in how students from different backgrounds integrated AI knowledge into their studies.
A statistically significant difference was observed between majority and minority group students in engagement with AI platforms beyond the course. Specifically, 29.7% of majority group students reported purchasing a paid subscription to AI platforms vs. only 11.1% of minority group students. This disparity was also present between non-first-generation and first-generation students, with 27.6% and 14.7% purchasing subscriptions, respectively, although this difference did not reach statistical significance.
These quantitative results reveal significant differences in learning outcomes and engagement with AI technologies between student groups, suggesting that although AI-based simulations may benefit all students, they may also amplify educational disparities.
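The subscription-purchase comparison reported above (29.7% vs. 11.1%) can be sketched as a Pearson chi-square test of independence on a 2×2 contingency table. The sketch below is illustrative only: the raw counts are reconstructed from the reported percentages under the assumption of group sizes of 37 majority-group and 36 minority-group students (not stated in this excerpt), and `chi2_independence_2x2` is a hypothetical helper, not the study's actual analysis code.

```python
import math

def chi2_independence_2x2(table):
    """Pearson chi-square test of independence for a 2x2 contingency
    table (no continuity correction); returns (chi2, p) with df = 1."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # For df = 1, the chi-square survival function reduces to erfc
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Counts reconstructed from the reported percentages:
# 29.7% of an assumed 37 majority students (11/37) purchased a
# subscription vs. 11.1% of an assumed 36 minority students (4/36).
subscribed = [[11, 26],   # majority: purchased / did not purchase
              [4, 32]]    # minority: purchased / did not purchase
chi2, p = chi2_independence_2x2(subscribed)
```

Under these assumed counts the uncorrected statistic falls just past the conventional 0.05 threshold, which is consistent with the text's report of a statistically significant group difference on this item.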

3.2.2. Qualitative Findings

Enhanced Understanding and Application of Course Material

Students in all groups reported an improved comprehension of course content, although the nature of this improvement varied:
The simulations made the theories come alive. Instead of just memorizing definitions, I could see how they applied in real situations. It clicked in a way that just reading the textbook never did.
(first-generation student, minority group)
I used to struggle with connecting different concepts, but the AI helped me see the bigger picture. It’s like it filled in the gaps I didn’t even know I had.
(first-generation student, minority group)
Non-first-generation students often reported using the simulation for more advanced applications:
The simulations gave me a framework to think about psychology in my everyday life. I found myself analyzing situations using the theories we learned, even outside of class.
(non-first-generation student, majority group)
This theme indicates that across student groups, participants reported experiencing deeper engagement with course material through their interaction with the AI-based simulations.

Increased Engagement and Motivation

Many students reported higher levels of engagement with the course material:
Before, I’d often zone out during lectures. But with the simulations, I was actively involved. I wanted to see what would happen if I gave different answers.
(minority group student, non-first-generation student)
It made learning feel more like a game. I actually looked forward to doing the simulations, which is not something I usually say about coursework.
(first-generation student, majority group)
Yet this engagement manifested differently for some non-first-generation students:
The simulations were a good starting point, but I often found myself going beyond them, researching more on my own. It sparked a real interest in AI and psychology.
(non-first-generation student, minority group)
This theme indicates that the AI-based simulations increased student engagement, potentially leading to improved learning outcomes through increased time and effort invested in the course.

Personalized Learning Experience

Students appreciated the ability to learn at their own pace and in their preferred style:
I could take my time with the simulations, replay parts I didn’t understand. In a regular class, I’d be too embarrassed to ask the lecturer to repeat something multiple times.
(first-generation student, minority group)
Being able to use the AI in my native language was a game-changer. I could focus on understanding the concepts without struggling with language barriers.
(first-generation student, minority group)
Non-first-generation students often used this flexibility for more advanced exploration:
I liked that I could dive deeper into topics that interested me. The AI allowed me to ask follow-up questions and explore scenarios beyond what was covered in class.
(non-first-generation student, majority group)
This theme suggests that the personalized nature of AI-based learning contributed to improved satisfaction and potentially better learning outcomes by adapting to individual student needs.

3.2.3. Insights from the Lecturer’s Journal

Week 7: I’m noticing a marked improvement in the quality of class discussions. Students are referencing the simulations, making connections I hadn’t anticipated. It’s as if the AI is providing a common ground for more in-depth exploration of the material.
Week 10: The divide I noticed at the beginning of the semester seems to be evolving. While all students are showing improvement, some are using the AI in increasingly sophisticated ways.
These reflections demonstrate the positive effects of AI-based simulations on learning outcomes and at the same time reveal the ongoing challenge of ensuring equitable benefits across diverse student groups.
In conclusion, the qualitative findings suggest that AI-based simulations generally benefited learning outcomes and satisfaction across student groups. They facilitated deeper understanding, increased engagement, and allowed for personalized learning experiences. The findings also reveal disparities in how different student groups used and benefited from the technology.

3.3. Factors Influencing Usage and Benefit

3.3.1. Quantitative Findings

The use of AI in academic tasks revealed statistically significant disparities between student groups. There were significant differences in AI usage for writing semester assignments between the majority and minority groups, as well as between first-generation students and others. Among majority group students, 83.8% reported using AI for assignments, compared to only 38.9% of minority group students. This disparity was even more pronounced between non-first-generation and first-generation students, with 93.4% of non-first-generation students using AI for assignments, compared to only 14.7% of first-generation students. These statistically significant differences, accompanied by large effect sizes, indicate substantial variations in how students from different backgrounds are integrating AI into their academic work.
Similar patterns were observed in the use of AI for studying for semester exams. Whereas 81.1% of majority group students reported using AI for exam preparation, only 69.4% of minority group students did so, although this difference was not statistically significant. Yet, a significant difference was found between non-first-generation (85.5%) and first-generation (58.8%) students in this regard.
The use of AI for composing emails or drafting written requests to official bodies or authorities also showed significant differences. A higher percentage of majority group students (91.9%) used AI for this purpose than did minority group students (75.0%). The gap was even wider between non-first-generation (97.4%) and first-generation (61.8%) students.
The data indicate that non-first-generation students and those from majority groups were more likely to leverage AI across a range of academic tasks, potentially gaining additional benefits from the technology beyond the course-specific simulations.
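The gaps in usage proportions reported above can be quantified with Cohen's h, a standard effect size for the difference between two proportions (conventionally, |h| ≥ 0.8 is "large"). This is a minimal sketch using only the percentages reported in the text; the study reports its own effect-size measures, so this is an illustration, not a reproduction of its analysis.

```python
import math

def cohens_h(p1, p2):
    """Cohen's h effect size for two proportions, via the arcsine
    (variance-stabilizing) transformation."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Proportions reported in the text for AI use on semester assignments:
h_group = cohens_h(0.838, 0.389)  # majority (83.8%) vs minority (38.9%)
h_gen = cohens_h(0.934, 0.147)    # non-first-gen (93.4%) vs first-gen (14.7%)
```

Both comparisons exceed the conventional "large" threshold, consistent with the substantial disparities described in the text.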

3.3.2. Qualitative Findings

Previous Technological Exposure

Students’ previous experiences with technology significantly influenced their approach to the AI simulations:
I grew up without a computer at home. When we started using AI, it felt like everyone else was speaking a language I didn’t understand. It took me weeks just to feel comfortable opening the program.
(first-generation student, majority group)
I’ve been coding since high school, so the AI interface felt intuitive. I was able to focus on the content rather than figuring out how to use the tool.
(non-first-generation student, majority group)
This theme indicates how disparities in technological capital can create initial barriers or advantages in engaging with AI-based learning tools.

Cultural and Linguistic Factors

Cultural background and language proficiency emerged as significant factors:
The AI let me practice first in Arabic, which helped me understand the concepts. Then I could try explaining in Hebrew without feeling so nervous about making mistakes.
(first-generation student, minority group)
Students from the majority group often found it easier to relate to the content:
The situations in the simulations were like things I’ve experienced. It made it easy to apply the theories we were studying.
(non-first-generation student, majority group)
This theme shows the importance of cultural and linguistic inclusivity in AI-based learning tools to ensure equitable benefits across diverse student populations.

Academic Self-Efficacy

Students’ beliefs about their academic abilities affected their engagement with the AI simulations:
At first, I was scared to use the AI because I thought I’d mess it up. But after a few successes, I started to feel more confident. By the end, I was even helping my classmates.
(first-generation student, majority group)
I’ve always done well in school, so I saw the AI as another challenge to master. I pushed myself to explore all its features.
(non-first-generation student, majority group)
This theme illustrates how academic self-efficacy can increase students’ willingness to engage with new learning technologies and benefit from them.

3.3.3. Insights from the Lecturer’s Journal

Week 10: The differences in how students engage with AI seem to reflect broader patterns of privilege and disadvantage. Those with more resources—time, technology, support networks—can extract more value from the tool. I’m grappling with how to level the playing field.
These reflections reveal the complex interaction between factors influencing students’ use and benefit from AI-based learning tools and the challenges educators face in ensuring equitable outcomes.
In conclusion, the qualitative findings suggest that a complex web of factors, including previous exposure to technology, cultural and linguistic background, and academic self-efficacy, significantly affect how students engage with AI-based simulations and benefit from them. These factors often align with the patterns of educational advantages and disadvantages, suggesting that although AI tools have the potential to enhance learning for all students, careful consideration must be given to implementation to avoid exacerbating inequalities.

4. Discussion

This study explored the consequences of integrating AI-based simulations in higher education, focusing on how these tools affected diverse student populations, including first-generation college students and minority groups. By examining the use of AI simulations in an introductory psychology course at a peripheral college, this study sought to understand whether these technologies bridge or widen educational gaps. The systematic integration of quantitative and qualitative findings led to the development of a conceptual framework that captures the complex dynamics of AI adoption in higher education (Figure 2). The model emerged from synthesizing three key patterns identified in the findings: the differential development of AI literacy across student groups, variations in cognitive flexibility when engaging with AI interfaces, and the distinct patterns of AI engagement that evolved throughout the semester. These patterns, which were consistently observed across the quantitative analyses and qualitative investigations, formed the foundation for understanding how cultural and technological capital shape students’ experiences with AI-enhanced learning environments.
Figure 2 illustrates these primary factors influencing AI adoption in learning environments, representing them as interconnected and mutually influential components. The framework situates these factors within the broader context of cultural and technological capital, demonstrating how pre-existing disparities can shape students’ experiences with and benefits from AI-enhanced learning environments. The findings suggest that the rapid integration of AI in higher education constitutes a socio-technical process that not only highlights existing inequalities but also challenges and reshapes our theoretical understanding of digital and academic disparities. This necessitates a critical examination of how established concepts like cultural and technological capital are manifested and potentially altered in the AI era, as well as acknowledging the context-dependent nature of emerging concepts such as the AI literacy divide.
The following sections present an integrative discussion of the research questions through the lens of this conceptual framework, highlighting the theoretical constructs that emerged from the model and their implications for understanding AI integration in higher education.

4.1. Effects of AI-Based Simulations on Learning Experiences: AI Literacy Divide

The integration of AI-based simulations in higher education reveals a complex interaction between technological innovation and social structures in education. The study findings extend Bourdieu’s (1986) theory of cultural capital and Lee and Chen’s (2017) concept of technological capital to the AI domain, demonstrating how these forms of capital are being redefined in AI-enhanced learning environments.
The disparities observed in initial comfort levels, usage patterns, and the ability to leverage AI tools for broader academic benefits point to an emerging AI literacy divide. This divide, as conceptualized and observed in this study, refers to differential access not only to the tools themselves but also, crucially, to the skills, knowledge, and confidence required to engage effectively with AI for learning and to apply AI capabilities beyond the specific course context, as reflected in students’ reported usage patterns and self-perceptions. This divide encompasses a new set of competencies: the ability to adapt to novel AI interfaces, conceptualize AI applications in various academic tasks, and understand the implications of AI use in education. Building directly on the frameworks of the digital divide and technological capital, the concept reveals how rapidly evolving technologies can create new dimensions of educational inequality, even as they offer potential solutions to existing disparities.
The finding that minority and first-generation students reported higher perceived benefits from the simulations in understanding course material adds complexity to Zhang and Aslan’s (2021) research on AI’s potential for creating personalized learning experiences. This suggests that AI-enhanced learning environments offer unique affordances for students traditionally underserved by conventional educational methods. At the same time, the discrepancy between perceived benefits and the actual broader application of AI skills reveals a critical gap: although AI tools effectively convey course content, the ability to translate this into generalizable skills appears mediated by forms of cultural and technological capital.
A quantitative finding warrants specific attention in this context: a remarkably large effect size (Cohen’s d = 2.63; see Table 2) was observed for the single item assessing the application of acquired AI knowledge to other studies, particularly between the majority and minority groups. Given the case study design, the non-representative nature of the sample, and the fact that this is a single-item measure, this finding should be interpreted with considerable caution. However, it tentatively suggests a potential point of significant disparity in AI skill transfer that merits further investigation in larger, more generalizable studies.
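For reference, Cohen's d with a pooled standard deviation is computed as follows. The means, SDs, and group sizes below are hypothetical placeholders — the article reports only d = 2.63, not the underlying descriptives — chosen solely so the illustration lands near that reported value.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical descriptives (NOT reported in the article), for illustration:
d = cohens_d(4.2, 0.8, 37, 2.1, 0.8, 36)
```

A d above 2 means the group means differ by more than two pooled standard deviations — an unusually large effect for educational research, which is why the text urges caution in interpreting a single-item measure from a non-representative sample.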
The contrasting initial reactions to AI tools between student groups offer insight into how cultural capital operates in AI-enhanced learning environments. The enthusiasm and immediate recognition of broader applications displayed by non-first-generation and majority students suggest that certain forms of cultural capital may predispose students to view AI as an opportunity rather than a challenge. This extends Dumais and Ward’s (2010) findings on cultural capital and first-generation college success to the AI domain, suggesting that the rules of the game in higher education are rapidly evolving with AI integration.
These observations demonstrate the varied nature of the AI literacy divide. It encompasses not only technical skills but also the cognitive frameworks and cultural dispositions that allow students to fully leverage AI tools in their academic pursuits. Addressing this divide requires a comprehensive approach that considers both the technical and socio-cultural aspects of AI integration in higher education.

4.2. Effect on Learning Outcomes and Satisfaction: AI-Enhanced Cognitive Flexibility

The influence of AI-based simulations on learning outcomes and satisfaction reveals a complex interaction between technological innovation and educational equity. The improved comprehension across all student groups suggests the potential of AI to democratize access to complex academic concepts, aligning with the vision of van Heerden et al. (2023) of AI-powered language models enhancing global mental health services and extending Cai et al.’s (2023) work on factors influencing learner attitudes toward AI-assisted learning.
This potential is complicated by AI-enhanced cognitive flexibility, defined as the ability to engage with AI interfaces and manipulate ideas through them. In the context of our findings, this form of cognitive flexibility was evidenced by students’ capacity to utilize the AI simulation’s interactive features to explore psychological concepts from multiple theoretical perspectives and, critically, to apply this acquired knowledge in varied academic tasks and contexts beyond the course, as indicated by the statistically significant group differences. The findings suggest that this form of cognitive flexibility is developing unevenly across student populations, with significant implications for educational equity.
The disparities in applying AI knowledge beyond the course context attest to a critical gap between comprehension and application. This gap, which the findings suggest is significantly mediated by differences in technological and cultural capital, extends Hamilton and Hamilton’s (2022) work on technological capital, suggesting that in AI-enhanced environments, the ability to transfer knowledge becomes a key differentiator of student success. This ability seems heavily influenced by pre-existing advantages, indicating that AI tools may amplify disparities in how information is used and applied.
The increased engagement reported across groups aligns with Wu’s (2023) research but suggests that AI-enhanced cognitive flexibility has significant implications for educational equity. Students demonstrating higher levels of AI-enhanced cognitive flexibility may develop skills that extend far beyond immediate course content, possibly widening achievement gaps.
The disparity between groups in purchasing paid AI subscriptions suggests the risk of creating a two-tiered system in AI-enhanced education. This extends the digital divide theory, indicating that in the AI era, the divide involves access to not only technology but also to more advanced forms of AI assistance, which may further enhance cognitive flexibility for some students while limiting opportunities for others.

4.3. Factors Influencing Usage and Benefits: AI Engagement Patterns

The findings reveal an interaction between AI integration and educational inequalities that extends beyond mere technological access. This study noted the emergence of distinct AI engagement patterns, where the ability to strategically apply AI across diverse academic domains becomes a critical differentiator. These patterns, varying significantly across student groups as demonstrated by the quantitative findings on AI usage across different academic tasks (Section 3.3.1), represent different ways students leverage AI tools, shaped by their cultural and technological capital, echoing Van Dijk’s (2017) understanding of gradations in digital inequality.
AI engagement patterns resonate with Lee and Chen’s (2017) concept of technological capital, but the findings suggest that in AI-enhanced environments, this capital takes on new dimensions. The differential use of AI for various academic tasks indicates that effective AI engagement includes not only technical proficiency but also the ability to leverage AI for higher-order academic skills. This aligns with Hamilton and Hamilton’s (2022) observations about the role of technological capital in online learning, extending it to AI-enhanced education.
The emergence of diverse AI engagement patterns in this study echoes Livingstone and Helsper’s (2007) work on gradations in digital inclusion. The findings suggest that in AI-enhanced learning environments, these gradations concern not only the frequency of use but also qualitative differences in how AI is integrated into academic practices. Students from minority groups and first-generation college students tended to exhibit more limited engagement patterns, potentially exacerbating existing disparities in higher education.
Factors shaping these AI engagement patterns include previous technological exposure, cultural and linguistic backgrounds, and academic self-efficacy. These factors influence how students interact with AI tools, from basic usage to more advanced applications across various academic tasks.
Drawing on Elyoseph et al. (2024) and their ethical perspective on AI in mental health, we argue that the integration of AI in education requires a carefully balanced approach between potential benefits and challenges. Although AI offers opportunities for personalized learning and access to information, the findings indicate the need for targeted interventions to ensure that these benefits are equitably distributed across diverse student populations, addressing the varied AI engagement patterns observed.
Taken together, the findings discussed through the lens of the proposed conceptual framework highlight the multifaceted ways in which cultural and technological capital shape AI adoption in higher education. The differential development of AI literacy, variations in AI-enhanced cognitive flexibility, and distinct AI engagement patterns demonstrate how existing disparities are not merely replicated but potentially amplified or transformed in AI-enhanced learning environments. This underscores the complex interplay between technological innovation and social structures, contributing to ongoing debates in educational sociology and digital equity regarding the potential for new technologies to either democratize learning or exacerbate inequality.

4.4. Limitations of This Study

This study has several limitations that should be considered when interpreting the results. First, this research was conducted at a single peripheral college in Israel, which limits the generalizability of the findings to other institutional and cultural settings. Although the demographic makeup of the college provided rich diversity, it is not representative of all higher education institutions. Consequently, the specific patterns of the AI literacy divide, AI engagement, and AI-enhanced cognitive flexibility observed here may differ in other contexts. Second, the one-semester duration of this study restricted our ability to observe the long-term effects of AI integration on student learning and skill development. This limitation is particularly significant given the rapidly evolving nature of AI technologies in education. As AI tools continue to develop and become more integrated into learning processes, the patterns of adoption and impact observed in this study may change. Specifically, a longer-term study could have revealed whether the initial AI literacy divide persists, narrows, or widens over time with continued exposure or how AI-enhanced cognitive flexibility develops through prolonged practice. Longitudinal studies are needed to track these trends over time and examine how changes in technology and educational practices affect adoption patterns across diverse student populations. Third, much of the data, particularly regarding AI usage and perceived benefits, were self-reported, which could introduce biases related to social desirability or inaccurate self-assessment. The in-depth interviews were conducted with a relatively small sample, which may not fully represent the experiences of all student groups. These limitations in data collection methods may affect the precision of reported usage frequencies and the subjective nature of perceptions regarding AI’s contribution to understanding and overall satisfaction. 
Fourth, this study was limited to the particular AI tool used in the course. Different AI platforms or applications may yield different results. As the field of AI in education is rapidly evolving, with new tools offering diverse capabilities and applications, the findings of this study should be considered within the context of the particular technology used. This limits our ability to generalize the specific findings about engagement patterns or perceived benefits to different types of AI tools or applications. Finally, although this study attempted to account for various background factors, it is possible that prior differences between student groups, beyond those measured, influenced the outcomes. The complex interaction of factors affecting AI adoption and usage in higher education may not be fully captured in a single-semester study.
These limitations indicate the need for further research, including longitudinal studies across various types of institutions, with different AI tools, to gain a more comprehensive understanding of the long-term effects of AI integration in higher education on diverse student populations.

5. Conclusions

This study provides new insights into the adoption of AI technologies in higher education by proposing and utilizing a novel conceptual framework that integrates AI literacy, AI-enhanced cognitive flexibility, and AI engagement patterns. This framework, developed based on observed empirical data, elucidates how cultural and technological capital influence AI adoption in learning environments and contributes to understanding the complex dynamics at play. Specifically, the findings highlight an emerging AI literacy divide and differential engagement patterns that, despite AI tools offering potential benefits across groups, may also exacerbate educational disparities. The conceptualization serves as a foundation for designing and evaluating more equitable AI implementation processes in educational settings.
The observed disparities in AI literacy, AI-enhanced cognitive flexibility, and AI engagement patterns carry significant ethical implications. They underscore the urgent ethical responsibility of higher education institutions to proactively address the risk that AI integration might exacerbate existing social and educational inequalities, potentially leading to a more stratified system based on technological access and digital capital. Ensuring equitable access to and the effective utilization of AI tools for all students is not merely a pedagogical challenge but a moral imperative.
For educational institutions, these insights indicate the need for targeted strategies to support diverse student populations in engaging with AI tools. This includes developing comprehensive digital literacy programs, designing culturally responsive AI-integrated curricula, and providing focused support for underserved students to narrow technological capital gaps. Specific initiatives may include peer mentoring programs, AI workshops tailored to different skill levels, and the integration of AI literacy components in various courses.
Building on the findings regarding the AI literacy divide and differential engagement patterns, we recommend that educators and administrators implement specific initiatives aimed at bridging these gaps. Specific initiatives may include the following: offering tiered AI training workshops tailored to varying levels of prior technological exposure and AI literacy; integrating AI literacy components into foundational courses across disciplines, not just technology-focused ones; developing culturally and linguistically responsive AI learning materials and prompts; establishing peer mentoring programs where students with higher AI literacy can support their peers; and providing accessible one-on-one technical and pedagogical support that proactively addresses the unique challenges faced by first-generation and minority students.
These practical recommendations are grounded in the ethical imperative to ensure educational equity in the age of AI. They aim to proactively counteract the observed tendencies towards increased stratification by empowering all students, particularly those from historically underserved backgrounds, with the necessary AI literacy and skills to thrive in AI-enhanced learning environments and beyond.
On a broader scale, we stand at the cusp of a significant technological revolution. Higher education institutions, and indeed educational systems at large, have the opportunity to teach not only subject-specific content but also skills that will be crucial in an AI-driven world. The introduction of AI technologies in education should be approached not merely as an addition to existing tools but as an opportunity to enhance cognitive capabilities. It is our responsibility to ensure that students who do not initially possess these skills are not left behind.
Policy initiatives should prioritize equitable access to advanced AI tools and increased funding for research on the long-term educational consequences of AI, particularly regarding equity and inclusion. This could involve partnerships between educational institutions, technology companies, and government agencies to develop and implement AI tools that are accessible and beneficial to all student populations.
Future research directions should include longitudinal studies to track the long-term effects of AI integration on diverse student populations. Additionally, investigating the effectiveness of targeted interventions designed to bridge the AI literacy gap would be valuable. Exploring how AI tools can be leveraged to enhance cognitive flexibility across different disciplines and student groups could provide further insights into optimizing AI use in education. Research should also focus on developing and evaluating culturally responsive AI tools that can adapt to diverse learning styles and backgrounds.
In conclusion, this study contributes to our understanding of AI adoption in higher education and its potential to either bridge or widen educational gaps. By recognizing the complex interaction between factors influencing AI engagement, educators and policymakers can work toward creating more inclusive and effective AI-enhanced learning environments. In this technological revolution, our goal should be to harness the potential of AI to create educational opportunities that empower all students, regardless of their background or initial technological capital. This approach not only addresses immediate educational needs but also prepares students for a future where AI literacy will be a critical skill across various professional and personal domains.

Funding

This research received no external funding.

Institutional Review Board Statement

The study protocol was approved by the Ethical Committee of the institution (YVC, approval number 2024-122).

Informed Consent Statement

Informed consent was obtained from all participants, and their privacy rights were strictly observed. Participants were protected by withholding their personal information and by restricting any details that could identify them. They were informed that participation was voluntary and that they could withdraw from the study at any time.

Data Availability Statement

The complete dataset used in the current study is available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Interview Guide: AI-Based Simulation in Introduction to Psychology Course

Introduction:
Thank you for participating in this interview. We are interested in understanding your experience with the AI-based simulation used in the Introduction to Psychology course. Your candid feedback is highly valuable and will help us understand the impact of this technology on student learning.
Background Information:
1. Can you tell me a bit about your background? (Probe: first-generation college student, ethnic background, prior experience with technology)
2. What were your initial thoughts when you learned that this course would use an AI-based simulation?
Topic 1: Learning Experiences
3. Can you describe your overall experience using the AI-based simulation method in this course?
4. How did this experience compare to other learning methods you’ve encountered in college?
5. Were there any aspects of the simulations that you found particularly interesting or challenging?
6. How did the AI-based simulation affect your interaction with the course material? With your classmates? With the instructor?
Topic 2: Learning Outcomes and Satisfaction
7. In what ways, if any, do you think the AI-based simulation method affected your understanding of the course content?
8. How satisfied are you with the learning experience provided by the AI-based simulation method?
9. Do you feel the AI-based simulation method impacted your academic performance in this course? If so, how?
10. Has this experience influenced your interest in psychology or your academic goals?
Topic 3: Factors Influencing Use and Benefit
11. How comfortable did you feel using the simulation? Did this change over the semester?
12. Were there any technical or personal challenges you encountered in using the simulation? How did you overcome them?
13. Did your prior experience with technology affect how you engaged with the simulation? If so, how?
14. How did your background or previous educational experiences influence your interaction with the simulation?
Topic 4: Equity and Accessibility
15. Do you think the simulation provided equal learning opportunities for all students in the class? Why or why not?
16. Were there any aspects of the simulation that you felt were particularly helpful or challenging for you, given your background?
17. How do you think this type of technology might affect educational opportunities for students from diverse backgrounds?
Concluding Questions:
18. If you could change something about the simulation or how it was used in the course, what would it be?
19. Is there anything else you’d like to share about your experience with the simulation that we haven’t discussed?
Thank you for your time and insights. Your responses will help us better understand the impact of AI-based simulations in higher education.
Note to interviewer: Remember to probe for additional details when necessary and allow the interviewee to elaborate on their answers. Be attentive to unexpected themes that may arise during the interview.

Figure 1. The complete prompt for the first simulation in the course, designed to summarize the initial topic: understanding the main theories in the field of psychology. The logo displays the name of the research institute, ‘HaShlishi HaMelachuti’ (The Third Artificial).
Figure 2. Factors influencing AI adoption in higher education learning environments.
Table 1. Demographic characteristics of students in Introduction to Psychology course.

Demographic Characteristic | Number | Percentage
Total students | 110 | 100%
Female | 99 | 90%
Male | 11 | 10%
Majority group (Jewish) | 74 | 67.3%
Minority group (other ethnicities) | 36 | 32.7%
First-generation | 34 | 30.9%
Non-first-generation | 76 | 69.1%
Students born outside of Israel | 11 | 10%
Non-native Hebrew speakers | 47 | 42.7%
Table 2. Comparison of knowledge, usage, and perceptions of artificial intelligence across different student groups.

Item | Majority Group M (SD) | Minority Group M (SD) | Majority vs. Minority | First-Generation M (SD) | Non-First-Generation M (SD) | First-Gen vs. Non-First-Gen
Before the course, how would you rate your knowledge of artificial intelligence? | 1.29 (0.46) | 1.11 (0.31) | t(108) = 2.18, p = 0.031, d = 0.44 | 1.11 (0.32) | 1.28 (0.45) | t(108) = −1.97, p = 0.051, d = −0.40
After the course, how would you rate your level of knowledge in artificial intelligence? | 3.25 (0.74) | 3.02 (0.81) | t(108) = 1.47, p = 0.143, d = 0.30 | 2.94 (0.81) | 3.28 (0.72) | t(108) = −2.2, p = 0.027, d = −0.46
How frequently do you currently use artificial intelligence tools? | 3.25 (0.74) | 2.63 (0.76) | t(108) = 4.06, p < 0.001, d = 0.82 | 2.64 (0.77) | 3.23 (0.74) | t(108) = −3.78, p < 0.001, d = −0.78
To what degree did the simulations contribute to your understanding of the course material? | 4.43 (0.57) | 4.77 (0.42) | t(108) = −3.20, p = 0.002, d = −0.65 | 4.85 (0.35) | 4.40 (0.56) | t(108) = 4.19, p < 0.001, d = 0.86
Are you applying the knowledge you acquired in artificial intelligence to other areas related to your studies? | 3.09 (0.83) | 1.19 (0.40) | t(108) = 12.98, p < 0.001, d = 2.63 | 1.41 (0.65) | 2.94 (0.99) | t(108) = −8.24, p < 0.001, d = −1.70

Note: Responses were measured on a 5-point Likert scale (1 = very low; 5 = very high). d represents Cohen’s d effect size. Majority group n = 74; minority group n = 36; first-generation n = 34; non-first-generation n = 76.
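The inferential statistics in Table 2 can be checked directly from the reported group means, standard deviations, and sample sizes. The sketch below (plain Python, standard library only; the helper name `pooled_t_and_d` is illustrative, not from the study) recomputes the pooled two-sample t statistic and Cohen’s d for the majority-vs.-minority comparison on current AI-tool usage. Small discrepancies from the table are expected, since the published summary statistics are themselves rounded.

```python
import math

def pooled_t_and_d(m1, sd1, n1, m2, sd2, n2):
    """Two-sample t statistic (pooled variance) and Cohen's d,
    computed from summary statistics of two independent groups."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    sp = math.sqrt(pooled_var)  # pooled standard deviation
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp
    return t, d

# "How frequently do you currently use artificial intelligence tools?"
# Majority: M = 3.25, SD = 0.74, n = 74; minority: M = 2.63, SD = 0.76, n = 36.
t, d = pooled_t_and_d(3.25, 0.74, 74, 2.63, 0.76, 36)
print(f"t(108) = {t:.2f}, d = {d:.2f}")  # close to the reported t = 4.06, d = 0.82
```

The same helper reproduces the other rows of Table 2 within rounding when fed the corresponding means and standard deviations.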
Table 3. Comparison of artificial intelligence utilization patterns for academic purposes across different student groups.

Question | Majority Yes / No | Minority Yes / No | Majority vs. Minority | First-Gen Yes / No | Non-First-Gen Yes / No | First-Gen vs. Non-First-Gen
Did you purchase a paid subscription to any artificial intelligence platforms? | 22 (29.7%) / 52 (70.3%) | 4 (11.1%) / 32 (88.9%) | χ2(1) = 4.65, p = 0.031, φ = 0.20 | 5 (14.7%) / 29 (85.3%) | 21 (27.6%) / 55 (72.4%) | χ2(1) = 2.17, p = 0.140, φ = 0.14
Did you use artificial intelligence in writing your semester assignments? | 62 (83.8%) / 12 (16.2%) | 14 (38.9%) / 22 (61.1%) | χ2(1) = 22.85, p < 0.001, φ = 0.45 | 5 (14.7%) / 29 (85.3%) | 71 (93.4%) / 5 (6.6%) | χ2(1) = 68.15, p < 0.001, φ = 0.78
Did you use artificial intelligence to study for your semester exams? | 60 (81.1%) / 14 (18.9%) | 25 (69.4%) / 11 (30.6%) | χ2(1) = 1.86, p = 0.172, φ = 0.13 | 20 (58.8%) / 14 (41.2%) | 65 (85.5%) / 11 (14.5%) | χ2(1) = 9.53, p = 0.002, φ = 0.29
Did you use artificial intelligence to compose emails or draft written requests to official bodies or authorities? | 68 (91.9%) / 6 (8.1%) | 27 (75.0%) / 9 (25.0%) | χ2(1) = 5.868, p = 0.015, φ = 0.231 | 21 (61.8%) / 13 (38.2%) | 74 (97.4%) / 2 (2.6%) | χ2(1) = 25.285, p < 0.001, φ = 0.479

Note: Values represent number of respondents and percentages within each group. φ represents phi coefficient for effect size. Majority group n = 74; minority group n = 36; first-generation n = 34; non-first-generation n = 76.
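The chi-square statistics and phi coefficients in Table 3 can likewise be recovered from the raw 2 × 2 counts. The sketch below (plain Python; the helper name `chi_square_2x2` is illustrative, not from the study) uses the closed-form Pearson chi-square for a 2 × 2 table without continuity correction, which matches the reported values within rounding for the assignment-writing comparison.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) and phi
    coefficient for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    phi = math.sqrt(chi2 / n)
    return chi2, phi

# "Did you use artificial intelligence in writing your semester assignments?"
# Majority: 62 yes / 12 no; minority: 14 yes / 22 no.
chi2, phi = chi_square_2x2(62, 12, 14, 22)
print(f"chi2(1) = {chi2:.2f}, phi = {phi:.2f}")  # close to the reported 22.85, 0.45
```

Note that φ = √(χ²/n), so the effect size follows directly from the test statistic and the total sample size.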
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Hadar Shoval, D. Artificial Intelligence in Higher Education: Bridging or Widening the Gap for Diverse Student Populations? Educ. Sci. 2025, 15, 637. https://doi.org/10.3390/educsci15050637


