Article

Generative AI Tools in Salvadoran Higher Education: Balancing Equity, Ethics, and Knowledge Management in the Global South

by Tizziana Valdivieso 1,* and Oscar González 2

1 Department of Education Sciences, School of Social Sciences and Humanities, Universidad Centroamericana José Simeón Cañas, Antiguo Cuscatlán, La Libertad 05001, El Salvador
2 The Learning and Research Resource Center (CRAI), Universidad Centroamericana José Simeón Cañas, Antiguo Cuscatlán, La Libertad 05001, El Salvador
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(2), 214; https://doi.org/10.3390/educsci15020214
Submission received: 28 November 2024 / Revised: 14 January 2025 / Accepted: 30 January 2025 / Published: 10 February 2025
(This article belongs to the Special Issue Teaching and Learning with Generative AI)

Abstract:
The integration of generative artificial intelligence (GAI) tools in higher education offers new opportunities for personalized learning, critical thinking, and digital literacy. However, socio-economic disparities and ethical concerns present significant challenges to equitable and responsible GAI use, particularly in under-resourced educational settings. This mixed-methods study explored how undergraduate students at Universidad Centroamericana José Simeón Cañas (UCA) in El Salvador engage with GAI tools, focusing on patterns of access, usage, and the socio-economic and ethical factors shaping these interactions. A quantitative survey of 365 students and qualitative focus groups with 25 participants were conducted to examine device preferences, tool usage, and concerns related to academic integrity, data privacy, and responsible AI use. Results revealed significant socio-economic disparities in access to GAI tools, with students from lower-income backgrounds primarily using smartphones and free AI tools, while higher-income students reported greater access to laptops and premium features. Ethical concerns were more prominent among students with limited institutional support, highlighting the need for structured guidance on the responsible use of GAI tools. These findings underscore the importance of institutional policies that promote equitable access to educational technologies and provide ethical frameworks for their use. By integrating socio-constructivist and connectivist learning theories, this study emphasizes that equitable access and guided support are critical for maximizing the educational potential of GAI tools. The study contributes to ongoing discussions about how higher education institutions, particularly in the Global South, can responsibly and effectively integrate AI technologies to support diverse student populations.

1. Introduction

The rapid evolution of generative artificial intelligence (GAI) tools marked a pivotal shift in global higher education, creating powerful new avenues for personalized learning, critical thinking, and expanded access to knowledge (Kohnke et al., 2023). This trajectory traces back to foundational contributions by Ada Lovelace, who conceptualized the first computational algorithm, and Alan Turing, whose theory of machine intelligence laid the groundwork for AI’s transformative potential across diverse fields, including education (Abeliuk & Gutiérrez, 2021; Ríos, 2023). In recent years, advanced tools like ChatGPT and DALL-E represented significant advancements, enabling learners to engage in interactive, human-like exchanges that fostered self-directed inquiry, creativity, and digital literacy (Ahmad et al., 2022; Carrasco Rodríguez, 2023; McIntosh et al., 2023; Wallace & Zamudio-Suarez, 2023). These developments aligned closely with UNESCO’s 2030 Agenda, which emphasized ethical, context-sensitive AI applications as pathways toward equity and quality in education under Sustainable Development Goal 4 (UNESCO, 2021; Flores-Vivar & García-Peñalvo, 2023).
However, as promising as GAI was, its integration in educational systems introduced complex socio-economic and ethical challenges, particularly in the Global South (Borja Velezmoro & Carcausto, 2020; Kerres & Buchner, 2022; McIntosh et al., 2023). Latin America, where digital infrastructure and resources vary considerably, faced specific barriers that impacted equitable GAI usage in higher education (Salas-Pilco & Yang, 2022). For example, in El Salvador, empirical research on GAI in educational contexts was limited, underscoring the need for studies that addressed local conditions and educational goals. Scholars like Sánchez-Santamaría and Olmedo Moreno (2023) highlighted that over-reliance on research from wealthier countries could often promote a one-size-fits-all approach, which failed to capture essential cultural and socioeconomic nuances. Addressing such gaps, particularly in higher education, is critical for understanding how Salvadoran students engage with GAI to support their academic progress (Azcúnaga López, 2024; Valdez Perla, 2024).
To explore this, Universidad Centroamericana José Simeón Cañas (UCA) was selected as an ideal case study due to its diverse student body and commitment to social justice. As a leading Jesuit institution in El Salvador, UCA has a long-standing focus on addressing educational inequities and fostering critical engagement with emerging technologies. The university’s student population reflects a broad spectrum of socio-economic backgrounds, providing a representative snapshot of the challenges and opportunities that exist in low-resource educational settings. Moreover, UCA’s strategic location in San Salvador—a region marked by both technological advancement and persistent infrastructural disparities—makes it a critical site for exploring how generative AI tools are accessed, perceived, and utilized in contexts where digital divides remain a significant barrier. Despite global interest in AI integration in higher education, empirical studies focusing on institutions like UCA remain scarce, highlighting a critical gap in the literature that this study seeks to address (Briñis-Zambrano, 2024).
This study aimed to investigate how UCA students in El Salvador engaged with GAI tools in an environment where challenges to equitable technology access posed significant hurdles. In well-resourced contexts, research had shown that students often used GAI tools for various academic tasks, including clarifying complex concepts, conducting research, and translating materials (von Garrel & Mayer, 2023). However, understanding how Salvadoran students accessed and applied these tools required exploring specific motivations and applications within their educational environment. Studies on digital literacy emphasized that without equitable access, GAI could unintentionally widen educational divides rather than bridge them, highlighting the critical need for informed policies in underrepresented regions (Ruiz & Velásquez, 2023; Hurst et al., 2013).
In response to these challenges, this research incorporated a mixed-methods approach, combining quantitative surveys with qualitative focus groups to achieve a comprehensive view of both the prevalence and perceptions of GAI use (NIH Office of Behavioral and Social Sciences, 2018). This methodological approach allowed for a nuanced exploration of both students’ habits and their critical reflections on AI’s role in their education (Zhou et al., 2024a). By situating this research within the Latin American experience, the study sought to illuminate how students’ engagement with GAI was shaped by unique regional factors, including disparities in digital literacy, limited infrastructure, and socio-economic barriers (Orellana-Rodriguez, 2024). This perspective provided a foundation for discussing the theoretical implications of GAI in education, particularly through a constructivist lens, where AI tools could act as agents of self-directed learning, assuming students were supported with adequate resources (García, 2024).
Recent studies examining AI in higher education across Latin America reveal a complex and dynamic integration of technology, characterized by innovative yet context-specific practices. Systematic reviews illustrate a range of strategies in AI integration within education, such as Chile’s use of AI-enhanced mentoring to streamline teacher training and Uruguay’s implementation of automated feedback systems to support educators and improve teaching quality. Additionally, a UNESCO assessment underscores significant disparities in AI and data readiness among Small Island Developing States (SIDS), highlighting the uneven landscape of technological adoption in educational settings (Salas-Pilco & Yang, 2022; UNESCO, 2024). Moreover, comprehensive reports from global organizations like the World Bank illustrate the transformative potential of AI through personalized learning tools and automated administrative systems, which can alleviate the workload for educators and improve learning experiences (Robert, 2024; Molina et al., 2024). These examples emphasize the importance of region-specific research that acknowledges both the potential and the challenges inherent in implementing AI within educational frameworks. This comparative perspective underscores the need for research that remains sensitive to the distinct challenges faced by students in the Global South, ensuring that technological advancements are equitable and effective across diverse educational landscapes (UNESCO, 2023, 2024).
Broader examples of how other countries in Latin America approached AI in higher education offered additional context, reinforcing the need for research sensitive to the challenges facing students across the Global South (Salas-Pilco & Yang, 2022). Salas-Pilco and Yang (2022), for example, identified gradual AI adoption in Latin American universities, hampered by infrastructure gaps and resource limitations, while studies in countries like Germany revealed robust engagement with AI tools when access and support were abundant (von Garrel & Mayer, 2023). In this context, the ethical considerations around AI, such as data privacy, misinformation, and academic integrity, became essential, as regions with limited resources often lack the institutional support necessary to address these risks effectively (Nguyen et al., 2023; UNESCO, 2021).
Guided by the central question—How did UCA students utilize and integrate GAI tools within their learning environments?—this study pursued three primary objectives: (1) to identify the main GAI tools used by students, (2) to assess the frequency and purposes of these tools in educational contexts, and (3) to explore how socio-economic and ethical factors influenced the access, engagement, and responsible use of GAI. We hypothesized that students’ socio-economic backgrounds significantly shape their interactions with GAI tools, with Salvadoran students uniquely experiencing context-specific usage patterns and barriers, such as limited access to technology and differing cultural perceptions of its utility. Ultimately, findings from this study may serve as a basis for future research exploring how AI can be adapted to meet the diverse educational needs across the Global South, supporting ethical and effective GAI integration in underrepresented regions (Penprase, 2024).

2. Materials and Methods

This study employed a mixed-methods sequential design to investigate the use of GAI tools among undergraduate students at Universidad Centroamericana José Simeón Cañas (UCA), El Salvador (Zhou et al., 2024b). The project was conducted in two phases: a quantitative survey followed by qualitative focus groups. Data collection took place in April 2024, with the survey phase lasting four weeks, followed by two weeks of focus groups.

2.1. Student Community Survey

The survey was designed to explore several key areas, including students’ demographics, familiarity with AI, usage of GAI tools, ethical concerns, and access to technology. A structured 21-item questionnaire was distributed electronically via Google Forms (Appendix A). Participation was voluntary, and anonymity and confidentiality were ensured. Recruitment was facilitated through flyers with QR codes posted around campus, and student assistants distributed the survey in person.
To minimize potential biases associated with electronic survey distribution, multiple recruitment strategies were implemented. Although the survey was administered via Google Forms, QR codes linking to the survey were displayed on posters placed in high-traffic campus locations, including the main cafeteria, library, administration building, campus fitness center, and the main entrance of UCA. Additionally, 10 student workers actively recruited participants by circulating campus with laminated QR code posters, dedicating 6 h each (totaling 60 h of recruitment). The survey remained open for four weeks to maximize accessibility and participation across diverse student groups.
The survey sample was drawn from the total UCA student population of 6949 students across three colleges: Faculty of Social Sciences and Humanities, Faculty of Economic and Business Sciences, and Faculty of Engineering and Architecture. The questionnaire collected demographic information such as age, gender, academic year, and socio-economic background. It included binary (yes/no), multiple-choice, Likert scale, and open-ended questions to assess students’ familiarity with GAI tools, frequency of use, the type of GAI tool used, reason for use in academic settings and reasons for non-use.
To ensure the validity and reliability of the survey instrument, a comprehensive three-phase validation process was conducted. First, the survey was pilot-tested with 36 undergraduate students to assess logical validity, following established guidelines recommending pilot samples of 30–50 participants for survey validation (Casas Anguita et al., 2003). Feedback from this pilot led to minor adjustments in question wording and structure to improve clarity and comprehension. Second, the survey was reviewed by Christian Marroquín, Director of the Centro de Enseñanza, Aprendizaje y Tecnología Educativa at Universidad Rafael Landívar, an expert in artificial intelligence in education. His feedback resulted in grammatical corrections and the rewording of response options to enhance cultural relevance and clarity (Hernández Sampieri et al., 2014). Third, a focus group of seven undergraduate students from diverse majors provided structured feedback, which informed additional content refinements (Schmidt et al., 2009; Walker & Fraser, 2005).
Quantitative data from the survey were analyzed using a one-factor chi-square test to evaluate differences between observed response distributions and expected distributions. The expected distribution was modeled as a uniform distribution, calculated as the homogeneous distribution of student respondents across all given categories. This analysis helped identify patterns in the demographics and usage of GAI tools. The final sample of 365 survey participants represented approximately 5.3% of the total UCA undergraduate student population (6949 students), ensuring diverse representation across colleges. This sample size was deemed appropriate for identifying significant patterns in GAI tool usage and aligned with recommended standards for survey research in higher education (Creswell & Plano Clark, 2017).
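For readers unfamiliar with the one-factor chi-square test against a uniform expected distribution, the computation can be sketched in a few lines of Python. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

```python
def chi_square_uniform(observed):
    """One-factor chi-square statistic comparing observed category counts
    against a uniform expected distribution (each category expected to
    hold an equal share of the total)."""
    n = sum(observed)
    expected = n / len(observed)  # uniform share, e.g., 20.0% of n for 5 categories
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical counts for five categories (NOT the study's data):
observed = [150, 120, 50, 30, 15]
chi2 = chi_square_uniform(observed)
# With df = k - 1 = 4, the 0.05 critical value is 9.488; a statistic far
# above it indicates the observed distribution departs from uniformity.
print(f"chi2 = {chi2:.2f}")  # chi2 = 190.14
```

In practice a library routine such as `scipy.stats.chisquare` would also return the p-value; the sketch above only computes the test statistic.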

2.2. Student Community Focus Groups

The selection of 25 focus group participants was based on achieving thematic saturation while capturing a range of academic disciplines and academic cohorts, aligning with qualitative research guidelines for data saturation (Guest et al., 2006). Focus group prompts were developed to complement the survey findings and explore students’ academic experiences with GAI tools. These prompts were adapted from established studies on educational technology adoption and refined through pilot testing for clarity and relevance (Lai et al., 2022; Guo et al., 2024). Feedback from this pilot testing informed minor adjustments to the wording and structure of the prompts to ensure they effectively encouraged meaningful discussion.
Data collection for both the survey and focus groups occurred in April 2024, aligning with the academic calendar to maximize participation. Volunteer participants were randomly selected to capture a broad range of perspectives on GAI tools, encompassing both technical and non-technical disciplines. The focus groups involved 25 university students with diverse academic backgrounds from three colleges: nine students from the Faculty of Economic and Business Sciences, nine from the Faculty of Social Sciences and Humanities, and seven from the Faculty of Engineering and Architecture.
Data collection was conducted through focus groups designed to explore in greater depth how students used GAI tools in their academic activities. Discussions focused on the specific tools used, contexts of use, and perceived benefits. Each session was moderated by a trained facilitator who clearly informed students of their voluntary participation, emphasizing that they were free to leave the group at any time without explanation and without fear of negative consequences or repercussions. The facilitator also guided the discussions using open-ended questions to explore students’ attitudes toward AI, their motivations for utilizing AI tools, ethical considerations, and perceived limitations. All sessions were audio-recorded and transcribed with informed consent to ensure confidentiality and anonymity.
The qualitative data, including audio recordings, were transcribed and then coded using WeftQDA software (version 1.0.1.), selected for its capabilities in handling complex qualitative data and ensuring accurate thematic analysis. A content analysis approach was applied to the transcripts, allowing for the systematic identification of recurring themes, which were subsequently organized into three main categories: attitudes toward GAI tools, contexts and modes of GAI tool usage, and ethical concerns and perceived limitations. Within these overarching categories, subcategories were developed to capture nuanced perspectives, such as personal, peer, and faculty attitudes, specific motivations for tool use, and ethical reflections. Two researchers independently reviewed each transcript to ensure reliability in theme identification, with emerging themes cross-referenced and refined through consensus to enhance analytical rigor.
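The study describes two researchers coding independently and refining themes by consensus but does not report a numeric agreement statistic. Inter-coder agreement of this kind is commonly quantified with Cohen's kappa; a minimal sketch follows, with category labels and code assignments invented purely for illustration.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one category per item:
    observed agreement corrected for the agreement expected by chance."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: probability both coders pick the same category at random.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for six transcript segments (labels invented):
coder_a = ["attitude", "usage", "ethics", "usage", "attitude", "ethics"]
coder_b = ["attitude", "usage", "ethics", "attitude", "attitude", "ethics"]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.75
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is the kind of reliability the consensus procedure described above aims to establish.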
To complement the qualitative findings, descriptive statistics summarized the frequency of AI-related needs among participants, segmented by faculty and presented alongside totals. These descriptive statistics provided quantitative context to the qualitative themes, illustrating the varying extent of tool usage across disciplines. Direct participant quotes were included in the results to contextualize each thematic finding, allowing for a deeper insight into students’ attitudes and experiences with GAI tools.

2.3. Rationale for Mixed-Methods Design

This study employed a mixed-methods sequential design to provide a comprehensive understanding of undergraduate students’ engagement with GAI tools at the UCA (Zhou et al., 2024b). The integration of quantitative (survey) and qualitative (focus group) methods was strategically chosen to capture both the breadth and depth of student experiences, concerns, and interests related to GAI tools (Creswell & Plano Clark, 2017; Creswell & Creswell, 2018). The survey component offered broad insights into general usage patterns, accessibility, and familiarity with GAI tools across a large student population, while the focus groups facilitated an in-depth exploration of personal experiences, motivations, and ethical considerations. This complementary approach aligns with best practices in mixed-methods research, which advocate for combining quantitative and qualitative data to enrich understanding and provide a more complete picture than either method alone (Fetters et al., 2013; Zhou et al., 2024b). Notably, the objective of this research was not to compare or contrast findings between the survey and focus group data but to integrate both methods to provide a holistic view of student engagement with GAI tools. Such an integrative strategy is recommended to address complex research questions effectively (Caruth, 2013).

3. Results

3.1. Survey Data

A total of 365 students completed the survey, providing responses to all 21 questions without omissions.

3.1.1. Demographic Overview of UCA Student Community

The observed distribution of student age groups significantly differed from the expected distribution of 20.0% across all age groups (χ2 = 161.13, p < 0.0001, Table 1). Age groups 18–21 and 22–25 were above the expected values, while age groups 26–29, 30–32, and 34 or older were below the expected values.
Similarly, the observed distribution of Salvadoran Departments where students reported residing significantly differed from the expected distribution of 7.1% across all Departments (χ2 = 324.23, p < 0.0001, Table 2). The Departments of San Salvador, La Libertad, and students reporting a foreign residence were above the expected values, while the Departments of Ahuachapán, Cabañas, Chalatenango, Cuscatlán, La Paz, La Unión, Morazán, San Miguel, San Vicente, Santa Ana, and Sonsonate were below the expected values.
The observed distribution of student self-reported gender significantly differed from the expected distribution of 25.0% across self-reported gender categories (χ2 = 94.83, p < 0.0001, Table 3). Students identifying as [cisgender] male and [cisgender] female were above the expected values, while those identifying as gender fluid and transgender male were below the expected values.
A reanalysis of cisgender students identifying as male and female revealed that the observed distribution did not differ significantly from the expected distribution of 50% (χ2 = 0.04, p = 0.8503, Table 4).

3.1.2. Academic Profile of UCA Student Community

The observed distribution of student academic cohorts significantly differed from the expected distribution of 14.3% across all cohorts (χ2 = 27.13, p < 0.0001, Table 5). Academic cohorts in their first year, second year, third year, and fourth year were above the expected values, while cohorts in their fifth year and licentiate candidates working on theses (termed egressed) were below the expected values.
The observed distribution of student academic majors significantly differed from the expected distribution of 4.0% across all majors (χ2 = 82.49, p < 0.0001, Table 6). The academic majors of Architecture, Civil Engineering, Industrial Engineering, Computer Engineering, Chemical Engineering, Business Administration, Law, Mass Communication, English, Marketing, and Psychology were above the expected values. In contrast, the academic majors of Food Engineering, Electrical Engineering, Energy Engineering, Mechanical Engineering, Social Sciences, Public Accounting, Economics, Philosophy, Finance, Theology, Teaching Degree in English Language, Associate Degree in Accounting, Associate Degree in Digital Marketing, and Associate Degree in Multimedia Production were below the expected values.
The observed distribution of student employment status significantly differed from the expected distribution of 50% across all reported statuses (χ2 = 26.08, p < 0.0001, Table 7). Students reporting no employment were above the expected values, while students reporting employment were below the expected values.

3.1.3. Parental Education Background

The observed distribution of paternal highest degree earned significantly differed from the expected distribution of 12.5% across all reported educational levels (χ2 = 74.25, p < 0.0001, Table 8). Students reporting their father’s highest degree as a high school diploma, some university studies but no degree earned, and a Licentiate degree were above the expected values, while those reporting educational levels between 7th and 9th grade, none, 6th grade or lower, a Doctoral degree, and a Master’s degree were below the expected values.
The observed distribution of maternal highest degree earned significantly differed from the expected distribution of 12.5% across all reported educational levels (χ2 = 74.40, p < 0.0001, Table 9). Students reporting their mother’s highest degree as a High School diploma, some University studies but no degree earned, and a Licentiate degree were above the expected values, while those reporting educational levels between 7th and 9th grade, none, 6th grade or lower, a Doctoral degree, and a Master’s degree were below the expected values.

3.1.4. Usage of GAI Tools

The observed distribution of preferred devices for academic activities significantly differed from the expected uniform distribution of 14.3% across all reported devices (χ2 = 79.13, p < 0.0001, Table 10). Preferences for iPhone smartphones (15.1%), Android smartphones, and either PC or iOS laptops exceeded the expected values, whereas preferences for PC or iOS desktops, iPads, and Android tablets were below the expected values.
The observed distribution of the familiarity level with the concept of GAI significantly differed from the expected uniform distribution of 20.0% across all reported levels (χ2 = 45.46, p < 0.0001, Table 11). Reported levels of Somewhat Familiar, Moderately Familiar, and Very Familiar exceeded the expected values, whereas levels of Extremely Familiar and Not Familiar At All were below the expected values.
The observed distribution of familiarity with diverse GAI tools significantly differed from the expected uniform distribution of 50.0% across all reported levels of familiarity (χ2 = 19.02, p < 0.0001, Table 12). Affirmative responses exceeded the expected values, while negative responses were below the expected values.
Additionally, the distribution of how students reported learning about GAI significantly differed from the expected uniform distribution of 14.3% across all reported discovery methods (χ2 = 51.80, p < 0.0001, Table 13). Discovery through Friends, Content Creators, and Curiosity exceeded the expected values, while Advertisements, Website Integration, Necessity, and News Media were below the expected values.

3.1.5. Frequency of GAI Use in Academic Settings

The observed distribution of GAI usage in academic settings significantly differed from the expected uniform distribution of 50.0% across all reported usage levels (χ2 = 25.00, p < 0.0001, Table 14). Affirmative responses exceeded the expected values, while negative responses were below the expected values.
Additionally, the distribution of reasons students reported for not yet using GAI tools in academic settings significantly differed from the expected uniform distribution of 16.7% across all reported reasons (χ2 = 73.70, p < 0.0001, Table 15). Reasons for disengagement, such as Ethical Concerns, including avoidance due to doubts about reliability and bias, Prohibited Use, where AI tools are prohibited or inaccessible, and Skill Limitations, related to language or technical barriers, all exceeded the expected values. In contrast, Lack of Need, referring to students who have not yet found a need to use AI tools, Lack of Interest, and No Desire to Use, reflecting a lack of motivation or preference for other methods, were below the expected values.
The observed distribution of how often students use ChatGPT significantly differed from the expected uniform distribution of 16.7% across all reported usage levels (χ2 = 47.65, p < 0.0001, Table 16). Usage frequencies of once per week, two–three times per week, and four–five times per week exceeded the expected values, while usage of six times per week, never, and every day were below the expected values.
Similarly, the distribution of how often students use Gemini significantly differed from the expected uniform distribution of 16.7% across all reported usage levels (χ2 = 234.34, p < 0.0001, Table 17). Usage of ‘never’ exceeded the expected values, while frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
The distribution of how often students use Deepl significantly differed from the expected uniform distribution of 16.7% across all reported usage levels (χ2 = 279.82, p < 0.0001, Table 18). Usage of ‘never’ exceeded the expected values, while frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
Similarly, the distribution of how often students use CoPilot significantly differed from the expected uniform distribution of 16.7% across all reported usage levels (χ2 = 186.00, p < 0.0001, Table 19). Usage of ‘never’ exceeded the expected values, while frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
The distribution of how often students use SlidesAI significantly differed from the expected uniform distribution of 16.7% across all reported usage levels (χ2 = 368.48, p < 0.0001, Table 20). Usage of ‘never’ exceeded the expected values, while usage frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
Similarly, the distribution of how often students use Character AI significantly differed from the expected uniform distribution of 16.7% across all usage levels (χ2 = 343.19, p < 0.0001, Table 21). Usage of ‘never’ exceeded the expected values, while frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
Additionally, the distribution of how often students use Perplexity significantly differed from the expected uniform distribution of 16.7% (χ2 = 377.95, p < 0.0001, Table 22). Usage of ‘never’ exceeded the expected values, while frequencies of once per week, two–three times per week, four–five times per week, six times per week, and every day were below the expected values.
The distribution of how often students use additional GAI tools significantly differed from the expected uniform distribution of 50.0% among reported usage levels (χ2 = 68.12, p < 0.0001, Table 23). Reported non-usage of GAI tools beyond those appearing in the survey exceeded the expected values, while reported usage of additional AI tools beyond those identified in the survey (e.g., Bing) was below the expected values.
Among students reporting the use of other GAI tools, the distribution of usage frequency significantly differed from the expected uniform distribution of 20.0% across all reported levels (χ2 = 97.17, p < 0.0001, Table 24). Usage frequencies of once per week and two–three times per week exceeded the expected values, while four–five times per week, six times per week, and every day were below the expected values.
The distribution of the year students reported starting to use GAI tools significantly differed from the expected uniform distribution of 25.0% across all reported start dates (χ2 = 39.02, p < 0.0001, Table 25). The percentage of students reporting that they started in 2023 exceeded the expected values, while those starting in 2022, 2024, and before 2022 were below the expected values.
Additionally, the distribution of time spent using GAI tools significantly differed from the expected uniform distribution of 20.0% across all reported levels (χ2 = 153.32, p < 0.0001, Table 26). Usage times of ‘less than 1 h’ and ‘1–2 h’ exceeded the expected values, while ‘rarely (less than once a week)’, ‘a few minutes for a specific action’, and ‘more than 3 h’ were below the expected values.
Finally, the distribution of students reporting whether they paid for GAI tools significantly differed from the expected uniform distribution of 50.0% (χ2 = 69.94, p < 0.0001, Table 27). The number of students answering ‘no’ exceeded the expected values, while those answering ‘yes’ were below the expected values.
The distribution of tasks for which students are using GAI tools significantly differed from the expected uniform distribution of 5.9% across all reported tasks (χ2 = 37.20, p < 0.0001, Table 28). Reported use of GAI for research and academic tasks (e.g., analyzing data, supporting research, exam preparation, rewriting/reworking texts, summarizing texts, periodic reviews) as well as for clarification and inspiration (e.g., clarifying/organizing ideas and questions, finding inspiration for homework) exceeded the expected values. In contrast, tasks such as content creation (e.g., creating songs, programming codes, quizzes, images, presentations, videos), technical and problem-solving tasks (e.g., programming, detecting errors/corrections such as grammar checking, problem-solving, decision-making), and language and translation (e.g., translating texts) were below the expected values.
The distribution of aspects students reported as most important when choosing a GAI tool significantly differed from the expected uniform distribution of 11.1% across all reported aspects (χ2 = 48.30, p < 0.0001, Table 29). Student preferences for tools that provide logical arguments, include relevant scientific data, produce understandable and complete responses, and provide precise citations exceeded the expected values. In contrast, preferences for tools that ensure accurate detection and correction of errors, human-like explanations or illustrations, error-free and non-duplicative responses, no grammatical errors, and affordable prices were below expected values.
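The goodness-of-fit procedure underlying Tables 12–29 can be illustrated with a short sketch: observed response counts in each category are compared against a uniform expected distribution (e.g., 1/6, or 16.7%, per category for six usage-frequency levels). The counts below are hypothetical, not the study’s data; in practice, a library routine such as scipy.stats.chisquare would typically be used, but a pure-Python version makes the computation explicit.

```python
def chi_square_uniform(observed):
    """Chi-square statistic against a uniform expected distribution."""
    n = sum(observed)
    expected = n / len(observed)  # e.g., ~16.7% of n for six categories
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical counts (n = 365) for six usage-frequency levels:
# never, once/week, 2-3x/week, 4-5x/week, 6x/week, every day.
counts = [240, 50, 40, 15, 10, 10]
stat = chi_square_uniform(counts)

# With 6 - 1 = 5 degrees of freedom, the critical value at
# alpha = 0.0001 is about 25.74, so a statistic this large would
# lead to rejecting the uniform-distribution hypothesis.
print(round(stat, 2))
```

A pattern like this one, where the ‘never’ count far exceeds its expected value, is what drives the large statistics reported for the less-used tools (e.g., Character AI and Perplexity).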

3.2. Focus Groups

The analysis of focus group data revealed three primary categories: (1) attitudes toward GAI tools, (2) contexts and modes of GAI tool usage, and (3) ethical concerns and perceived limitations. Each category included subcategories that provided further nuance, capturing students’ personal views, peer influences, faculty stances, and the specific circumstances shaping their interactions with these tools.

3.2.1. Attitudes Toward GAI Tools

Students’ attitudes toward GAI tools generally reflected positive perceptions, viewing these tools as helpful complements to their studies, but not as replacements for human thought and critical engagement. Many participants described GAI tools as valuable for simplifying or expanding tasks, recognizing their role as supportive aids. As one participant from the Faculty of Economic and Business Sciences noted, “AI tools are here to assist, not to substitute human beings. They’re useful, but they don’t replace our thinking” (Participante03_CEE). Students from the Faculty of Engineering and Architecture similarly valued GAI for its efficiency in handling repetitive tasks, although some participants expressed concerns that excessive reliance on these tools could hinder the development of critical thinking skills. A Social Sciences and Humanities student reflected on this, saying, “Using AI for academic work could diminish critical thinking. I want my work to reflect my own efforts, not just automated responses” (Participante02_CSH).
Adaptation to GAI tools also emerged as a prominent theme, with students emphasizing the need to learn how to use these tools effectively. Participants recognized that developing skills for effective GAI usage could enhance academic outcomes, particularly when prompts were carefully crafted to elicit quality responses. One participant explained, “It’s about learning to use them well—being specific improves the response quality” (Participante01_CEE). This adaptability was viewed as essential for professional preparedness, with students acknowledging the importance of becoming proficient with these tools.
Faculty stances on GAI tools varied widely, influencing students’ own approaches to GAI use in academic work. Some students noted that faculty members in fields such as philosophy expressed concerns that GAI tools could foster superficial learning. A participant shared, “Professors don’t seem to like it; they worry about students just copying and pasting answers without thinking” (Participante01_CEE). However, students from applied fields, such as engineering, reported faculty who encouraged using AI to enhance writing or locate resources, as long as it was properly attributed. As one engineering student recalled, “Our professor advised us to use ChatGPT for finding bibliography but to ensure we reference it correctly” (Participante06_CEE).
In addition to individual attitudes and faculty perspectives, peer influences shaped students’ opinions on GAI tool usage. Many students found that their classmates held generally favorable views, often seeing AI tools as “productive and beneficial” (Personal Communication). However, some participants observed that heavy reliance on GAI tools by peers could lead to over-dependence and impact their own learning development. As one student reflected, “Some of my classmates rely too heavily on AI tools, not realizing that they’re not always accurate or reliable” (Participante03_IyA).

3.2.2. Context and Motivations for GAI Tool Usage

Students identified a variety of reasons for turning to GAI tools, often seeking assistance with difficult concepts, managing time constraints, or handling repetitive tasks. One Social Sciences and Humanities student shared, “There are theories I struggle to understand, and AI can break them down into simpler terms” (Participante01_CSH). Efficiency was a recurring theme, with students frequently using AI tools to complete assignments under tight deadlines or to address monotonous tasks.
Most students reported that they first learned about GAI tools through friends, family, or social media, with the pandemic increasing their exposure to tools like ChatPDF and Grammarly. Social media platforms such as TikTok were particularly influential; one participant recounted, “I first learned about ChatPDF through TikTok during the pandemic; it became widely advertised, and eventually, I tried it” (Participante06_CEE).
ChatGPT emerged as the preferred GAI tool among participants, primarily for its accessibility and ease of use. Students highlighted the platform’s popularity and simplicity, which contributed to its widespread adoption. A participant noted, “ChatGPT stands out because it’s so widely discussed, and it’s easier to navigate compared to other AI tools” (Participante04_CEE). While a few students invested in paid versions of GAI tools, most found the free options sufficient, citing that paid upgrades offered only marginal benefits for their academic needs.

3.2.3. Ethical Concerns and Perceived Limitations

Students also discussed ethical considerations related to GAI tools, ranging from concerns about academic integrity to the development of independent skills. Although some students had not previously considered the ethical implications, others expressed unease over the potential for GAI to detract from personal skill-building. For example, a participant reflected, “It’s not unethical if it’s just another tool to help us. But copying responses without our input—that’s different” (Participante09_CEE). Comparisons to other tools, such as computers or search engines, emerged frequently, with students emphasizing that ethical concerns depend largely on how GAI is applied in their academic work.
Another ethical reflection centered on the potential for over-reliance on GAI tools to undermine students’ abilities to think critically and independently. As one student observed, “It’s important to maintain our ability to develop ideas independently; too much AI use might weaken our critical thinking” (Participante05_CSH). Additionally, some participants underscored the limitations of AI in areas requiring empathy or nuanced understanding, such as human rights and cultural contexts, noting that AI cannot replicate human experience or emotional depth.
Finally, students highlighted the technical limitations of GAI tools, especially in terms of providing accurate and current information. A participant remarked on the challenge of using AI for recent events, explaining, “For recent events, AI doesn’t always have the latest data, so it’s not as reliable” (Participante05_IyA). Similarly, students observed that while GAI tools could offer general insights, they often lacked precision and failed to capture culturally specific details, limiting their effectiveness in certain fields.

3.3. Socio-Economic and Ethical Influences on GAI Access and Engagement

The analysis revealed notable disparities in students’ access to and engagement with GAI tools based on socio-economic factors. These disparities were reflected in device preferences, access to paid versus free tools, and patterns of tool usage across academic disciplines.

3.3.1. Device Preferences and Socio-Economic Access

Students from higher-income backgrounds demonstrated a higher likelihood of using laptops (37.8%) and iPhones (15.1%), devices that generally provide broader functionality for academic tasks. In contrast, students from lower-income backgrounds predominantly relied on Android smartphones (27.9%), reflecting a dependence on more affordable and accessible technology. This finding suggests that students’ financial circumstances directly impact the devices they use to access GAI tools, potentially influencing their ability to engage fully with the technology.

3.3.2. Access to Paid vs. Free GAI Tools

A significant majority of students (91.8%) reported using only free versions of GAI tools, with only 8.2% accessing paid versions. This divide indicates that most students face financial barriers that limit their ability to explore premium features of GAI tools, potentially affecting the depth and breadth of their engagement. Students in fields like Business Administration and Computer Engineering—majors often associated with higher-income students—reported slightly higher rates of paid tool usage compared to students in the Social Sciences and Humanities, further suggesting that socio-economic status influences access.

3.3.3. Academic Discipline and GAI Usage

Differences in GAI tool use across academic disciplines may also reflect socio-economic disparities. Students in STEM fields, particularly Computer Engineering (14.4%) and Industrial Engineering (9.0%), reported higher usage rates of GAI tools compared to those in the Social Sciences (0.5%) and Philosophy (0.5%). This discrepancy may be attributed to both the technological demands of certain disciplines and differential access to digital literacy resources, which often align with students’ socio-economic backgrounds.

3.3.4. Ethical Concerns and Socio-Economic Context

Ethical concerns regarding GAI use also varied with socio-economic context. Students from under-resourced backgrounds expressed greater apprehension about the ethical use of GAI tools, particularly around issues of academic integrity, data privacy, and bias. Notably, 42.6% of students cited ethical concerns—such as mistrust in tool reliability and bias—as a reason for not engaging with GAI tools. This hesitation suggests that beyond access, there are deeper concerns about responsible use, which may stem from limited institutional support in navigating ethical challenges.

3.3.5. Familiarity with GAI Tools and Access to Institutional Support

Students who reported higher levels of familiarity with GAI tools often had greater access to institutional resources or peer networks that facilitated their learning. However, students from less-connected socio-economic backgrounds were more likely to learn about GAI tools through informal channels (e.g., friends, 35.1%; content creators, 19.6%) rather than structured academic support, potentially limiting their understanding of responsible and effective GAI use.

4. Discussion

4.1. Student Insights

The surveyed academic community reflected a diverse range of age, geography, academic progression, and areas of study. The majority of students were between 18 and 25 years old, indicating a predominantly younger student body, with 65.4% falling in the 18–21 age group and 30.6% in the 22–25 age range. Most students resided in urban centers like San Salvador (46%) and La Libertad (28.5%), while fewer participants hailed from rural or less populated regions. This urban concentration may influence access to resources, such as academic technology and support services. In terms of academic progression, early cohorts were more represented, with most students in their first, second, or third year of study, which suggests a younger student population actively engaged in foundational coursework. Popular majors included Computer Engineering, Psychology, and various fields in Engineering and Business, reflecting a strong presence in both STEM and business-related disciplines.
Parental education levels suggest that many students come from relatively well-educated households: a notable portion of students reported that their parents had completed either a Licentiate degree (34.3% of fathers, 33.0% of mothers) or a high school diploma (24.7% of fathers, 24.5% of mothers). This may contribute to a higher likelihood of engaging with advanced academic technologies, such as GAI tools, which are becoming integral to students’ academic experiences. These tools offer ways to enhance learning efficiency and organization, aligning with students’ academic backgrounds.
GAI tools were widely used for organizing ideas, supporting research, and clarifying prompts, with Android smartphones (27.9%) and laptops (37.8%) being the preferred devices for these tasks. This preference highlights the importance of portability and flexibility in students’ academic environments, as many students likely need tools they can access on the go. Familiarity with GAI was also high, with the majority of students reporting moderate to strong familiarity. Specifically, 40.4% of students described themselves as “moderately familiar” with GAI, suggesting that these tools are becoming a regular feature of students’ academic workflows, especially among those who engage with them frequently.
Interestingly, students primarily learned about GAI tools through informal channels such as friends (35.1%), content creators (19.6%), and personal curiosity (18.8%), rather than through formal institutional outreach. This reliance on informal networks underscores the organic and grassroots nature of AI tool adoption in academic settings. Despite widespread usage, concerns about the ethical implications of AI, such as biases and reliability (42.6%), as well as technical expertise barriers, were common reasons for some students’ hesitation. These concerns reflect a need for better guidance and education on the appropriate use of AI in academia, particularly to address students’ uncertainties around the ethical dimensions of these tools.
ChatGPT was the most commonly used GAI tool, with a significant portion of students using it regularly; 34% reported using it once per week and 29.1% used it two–three times per week. Other tools, such as Gemini, DeepL, and Copilot, saw significantly lower engagement, indicating that while students are open to AI tools, they tend to favor platforms like ChatGPT, which may be more versatile and accessible for academic tasks. This suggests that ChatGPT’s broad functionality and ease of use make it a go-to resource for students, while other tools may lack the same level of awareness or perceived utility.
In terms of usage patterns, most students used AI tools for task-oriented activities, with 63.8% spending less than one hour per week using them. Common academic applications included organizing ideas, analyzing data, and summarizing texts, while more creative or technical uses—such as content creation or problem-solving—were relatively rare. This indicates that students primarily view AI as a practical tool for enhancing academic efficiency, helping with routine tasks rather than engaging in more complex, creative, or technical processes.
When selecting an AI tool, students prioritized functionality that directly supports their academic performance. They valued tools that present logical arguments (12.2%), provide relevant scientific information (13.5%), and ensure clear, understandable responses (30.5%). Affordability and human-like responses were less important, highlighting students’ focus on academic utility over esthetics or cost. This preference for tools that deliver reliable, academically relevant information suggests that students are highly motivated by performance and academic rigor when integrating AI into their studies.
The focus group findings illustrate a nuanced perspective on GAI tools among students, where efficiency and academic convenience are balanced against potential impacts on learning development and academic integrity. While students overwhelmingly recognized the benefits of GAI for tasks that required simplification or time management, they also demonstrated a keen awareness of the potential risks associated with over-reliance on these tools. This duality in perception suggests that students are not merely passive users of technology but are actively engaging in critical reflection on the role of AI in their academic journeys.
A major finding was that students frequently turned to GAI tools to streamline repetitive or time-consuming tasks, allowing them to manage their academic workload more effectively. This practical approach aligns with existing literature that identifies AI as a valuable support tool in educational settings (Abdelwahab et al., 2023; Albayati, 2024; Anthology Report, 2023; Aparicio-Gómez et al., 2024; Bahroun et al., 2023; Department for Education and The Open Innovation Team, 2024; Diliberti et al., 2024; Molina et al., 2024; Shibani et al., 2024; UNESCO, 2023), particularly when students are navigating heavy course loads. The emphasis on convenience and efficiency highlights a pragmatic approach, with students using AI tools as supplementary aids rather than primary sources of knowledge.
However, students’ concerns regarding academic integrity and skill development suggest an emerging sense of ethical responsibility and self-regulation. Participants expressed hesitation about allowing GAI tools to replace critical thinking or original analysis, mirroring broader educational concerns about AI’s potential to weaken students’ independent cognitive skills (Essien et al., 2024; Shoja et al., 2023; Zhang et al., 2024). This tension indicates that while GAI tools are perceived as valuable, their integration into academic routines may benefit from guidance and structure to support ethical usage. Educators might consider integrating discussions on the responsible use of AI into their curricula, encouraging students to approach GAI as a means to complement, rather than substitute, personal learning efforts.
This study contributes to addressing the significant gap in empirical research on the use of GAI tools in low-resource educational settings (Briñis-Zambrano, 2024; Castillo et al., 2024; Lemus, 2023). Existing literature on AI integration in higher education predominantly focuses on well-resourced institutions in developed countries, often overlooking the unique challenges faced by students in the Global South (Lisowska Navarro et al., 2023; Mpofu & Ndlovu-Gatsheni, 2020). At institutions like UCA, socio-economic disparities, inconsistent internet access, and limited institutional support shape how students engage with GAI tools. These structural barriers contrast with the seamless integration observed in wealthier educational contexts, highlighting the need for localized strategies that address digital equity and technological literacy (Mpofu & Ndlovu-Gatsheni, 2020). By situating this study within UCA, the findings offer critical insights into how educational institutions in developing regions can design interventions that bridge technological divides and support responsible GAI adoption (Capraro et al., 2024).

4.2. Digital Access and Technology Disparities

The observed disparities in device preferences and access to paid versus free GAI tools reflect broader patterns of digital inequality in higher education. These findings align with existing research showing that socio-economic factors significantly influence students’ access to educational technologies, with students from lower-income backgrounds more likely to rely on mobile devices rather than more powerful computing tools, limiting their ability to fully engage with advanced academic technologies (Capraro et al., 2024; Moore et al., 2018; Midwestern Higher Education Compact, 2020). For example, our data revealed that while laptops were the most commonly used device, a substantial portion of students relied on smartphones for academic tasks—a trend consistent with studies highlighting mobile device dependency among economically disadvantaged students (American University School of Education, 2019).
Additionally, the limited use of paid GAI tools compared to free versions underscores financial barriers that restrict access to premium educational resources. This mirrors findings in prior studies that students from lower socio-economic backgrounds face systemic obstacles in accessing paid digital learning tools, exacerbating existing educational disparities (The Education Trust–West, 2019; Tomaszewski et al., 2024). These gaps not only affect access to more advanced features but also contribute to unequal learning outcomes (Wang et al., 2024).
The integration of both survey and focus group data further highlights how these access disparities impact students’ academic engagement. Focus group participants frequently expressed frustration over the limitations of free GAI tools and the lack of institutional support in providing equitable access to educational technologies. These concerns reflect broader trends in the literature, where under-resourced institutions struggle to bridge the digital divide for their students (Reddy et al., 2021; Bustillos, 2017).
Addressing these disparities requires targeted institutional policies and investments in digital infrastructure to support equitable technology adoption. This includes providing affordable or subsidized access to advanced GAI tools and fostering digital literacy programs tailored to underrepresented student populations (Office of Educational Technology, n.d.; IEEE CTU, 2023; Wang et al., 2024).

4.3. Socio-Economic and Ethical Implications for GAI Access and Engagement

The results of this study reveal that socio-economic factors significantly influence students’ access to and engagement with Generative AI (GAI) tools. The reliance on different devices—such as smartphones versus laptops—and the overwhelming preference for free versions of GAI tools underscore how financial constraints shape students’ ability to fully engage with educational technologies. This pattern aligns with prior research emphasizing how digital divides disproportionately impact students in under-resourced contexts (van Dijk, 2020; Salas-Pilco & Yang, 2022).
Students from lower socio-economic backgrounds primarily accessed GAI tools through Android smartphones, reflecting the accessibility and affordability of mobile devices over more versatile laptops. This reliance may limit their ability to engage in more complex academic tasks that are better supported on larger, more powerful devices. Moreover, the stark contrast in the use of paid versus free GAI tools highlights how economic barriers restrict access to premium features that could further enhance learning outcomes.
These disparities underscore the importance of institutional policies aimed at bridging digital divides and ensuring equitable access to advanced educational technologies. Universities can play a pivotal role by offering greater access to shared digital resources, expanding technology lending programs, and providing training on how to maximize the utility of free GAI tools. This is particularly crucial for institutions in the Global South, where infrastructural and socio-economic barriers are more pronounced.
Ethical concerns related to GAI use also varied along socio-economic lines. Students with limited institutional support expressed greater apprehension regarding issues of academic integrity, data privacy, and bias. This suggests that beyond providing access, institutions must offer ethical guidance and digital literacy training to ensure responsible engagement with GAI tools. These findings support socio-constructivist perspectives, which emphasize the need for structured frameworks to help learners navigate complex tools effectively (Vygotsky, 1978; Palincsar, 1998; Luckin et al., 2016).
By highlighting how socio-economic and ethical factors interact with GAI use, this study contributes to a deeper understanding of the challenges and opportunities in integrating educational technologies. Addressing these disparities is essential for fostering an inclusive academic environment where all students can benefit from emerging digital tools.

4.4. Theoretical Implications

The findings of this study align closely with socio-constructivist learning theories, which emphasize that knowledge is actively constructed through social interaction and engagement with learning tools (Vygotsky, 1978) and that “connectivism is, as an applied theory of constructivism, therefore based on the conviction that every learning process is deeply individual, and that knowledge is formed (constructed) by one’s own experiences and interpretations of the world” (Klement, 2017, p. 98). Students’ use of Generative AI (GAI) tools to clarify ideas, organize thoughts, and complete academic tasks reflects self-directed learning practices consistent with constructivist principles. Focus group discussions revealed that students not only used GAI tools individually but also collaboratively, sharing strategies and insights with peers. This collaborative dynamic mirrors Vygotsky’s concept of the Zone of Proximal Development (ZPD), where learning is optimized through guided support (Wood et al., 1976). However, the absence of structured guidance in using GAI tools introduces challenges related to ethical use and critical engagement. This underscores the need for intentional support by educators to help students develop ethical reasoning and critical thinking skills when engaging with digital and AI technologies (Palincsar, 1998; Luckin et al., 2016; Siemens, 2005).
In addition, the results strongly resonate with connectivist learning theory (Siemens, 2005), which views learning as the process of forming and navigating networks of information. GAI tools, particularly adaptive platforms like ChatGPT, serve as critical nodes within students’ personal learning networks, enabling them to dynamically organize and process information. Connectivism emphasizes that the value of learning lies in forming connections between diverse information sources—including digital tools, peers, and academic content (Downes, 2012). The preference for versatile and responsive platforms reflects this principle, as students prioritize tools that facilitate immediate access to relevant information and support decision-making as part of the learning process. This behavior aligns with the connectivist notion that “decision-making itself is a learning process”, especially in rapidly evolving digital environments (Klement, 2017).
Moreover, the geographic and demographic diversity of the student population highlights the potential of GAI tools to bridge educational gaps in resource-constrained environments. Students from rural and under-resourced areas reported using smartphones and laptops to access GAI tools, demonstrating how these technologies compensate for disparities in physical infrastructure (van Dijk, 2020). This finding directly supports connectivism’s focus on leveraging digital networks to overcome traditional barriers to learning, highlighting GAI’s role in promoting educational equity (Klement, 2017).
However, while GAI tools offer opportunities to expand access and support learning, they also present challenges. Without structured support, students may not fully develop the ethical reasoning or critical thinking skills necessary for responsible engagement with these technologies (Islas Torres, 2021). This observation reinforces the socio-constructivist principle that learners benefit from guided support provided by educators, especially when interacting with complex tools (Islas Torres, 2021; Wood et al., 1976; Redecker, 2017). Integrating GAI tools into structured learning environments can help students develop not only technical proficiency but also the ethical and reflective thinking skills essential for meaningful learning.
By framing these findings through socio-constructivist and connectivist perspectives, this study contributes to a deeper understanding of how GAI tools facilitate learner autonomy, collaboration, and equitable access to knowledge (Islas Torres, 2021; Dabbagh & Kitsantas, 2012). This theoretical integration underscores the need for intentional educational design that supports responsible and ethical GAI integration, particularly in under-resourced educational settings where digital tools can play a pivotal role in leveling the academic playing field.

4.5. Institutional Recommendations

Interestingly, the influences of faculty and peers emerged as key factors in shaping students’ perspectives on GAI. Faculty attitudes, especially within specific disciplines, appeared to influence whether students felt comfortable or discouraged in using AI tools, suggesting that faculty guidance plays a crucial role in students’ adoption and responsible use of GAI. Positive faculty attitudes toward AI, particularly when they endorse AI for supporting writing, research, and idea generation, may normalize its responsible use and provide students with clear guidelines on ethical engagement with these tools.
These findings suggest that while GAI is becoming a key component of academic workflows, there is significant room for institutions to expand AI literacy programs and ensure students are equipped to use these tools effectively, ethically, and across a wider range of tasks. The heavy reliance on informal networks for learning about AI highlights a gap in formal institutional outreach, indicating that universities should take a more proactive role in guiding students toward responsible AI use and helping them explore its broader potential in academic settings.
In addition to improving AI literacy, institutions should encourage students to expand their use of AI beyond routine tasks like organizing ideas and summarizing texts. There is clear potential for deeper integration in areas such as content creation, technical applications, and complex problem-solving, which are currently underutilized. Curriculum enhancements that promote creative and technical uses of AI, especially in fields like engineering, design, and business, could help students realize the full range of capabilities these tools offer.
Concerns regarding accuracy, ethics, and technical expertise further underscore the importance of providing students with clear guidelines on responsible AI use (Bond et al., 2024). Addressing these concerns will require institutions to implement formal frameworks that emphasize the ethical implications of AI, such as bias and reliability, while also building students’ technical confidence in navigating these tools. By equipping students with the necessary skills and knowledge, universities can empower them to leverage AI for both practical and innovative academic applications.
Moreover, the preference for AI tools that prioritize functionality and academic rigor suggests that developers should focus on creating reliable, data-driven tools that align with students’ academic needs. While features aimed at creativity or entertainment may appeal to some, the primary focus should be on supporting academic performance, particularly through logical arguments, clear responses, and access to relevant scientific data. As AI tools continue to evolve, developers must strike a balance between innovation and academic utility to meet the demands of a student body that increasingly relies on AI for academic success.
The widespread use of mobile devices like smartphones and laptops points to the importance of ensuring that AI tools are accessible across a variety of platforms. Institutions should also be mindful of equity issues related to device access, as students without the necessary technology may find themselves at a disadvantage. Supporting access to AI tools across diverse devices and promoting equitable access to technology are essential for ensuring that all students can benefit from these advancements.
The findings underscore the need for a balanced and intentional approach to integrating GAI tools into educational settings. While students value the efficiency and convenience of GAI, their concerns around ethical usage and skill development point to a critical awareness of AI’s limitations. This suggests an opportunity for educational institutions to develop frameworks that support responsible AI usage, emphasizing AI as a supplement to, rather than a replacement for, critical thinking and independent learning.
To address students’ concerns, faculty and administrators might consider implementing explicit guidelines on GAI use in coursework, potentially incorporating discussions on the ethical, cognitive, and practical dimensions of AI into curricula. For instance, faculty could provide case studies or practical exercises that demonstrate both the benefits and limitations of GAI tools, encouraging students to reflect on AI’s role within their own academic and professional development. In doing so, educators could reinforce the message that while GAI can enhance productivity, it should be used mindfully to avoid compromising academic integrity or personal skill growth.
Additionally, the role of faculty attitudes, as highlighted in this study, suggests that instructor perspectives can significantly shape students’ approaches to GAI. Professional development initiatives focused on equipping faculty with strategies to guide responsible AI usage could create a more unified, supportive environment. Faculty workshops on AI could help educators across disciplines better understand GAI’s capabilities and limitations, enabling them to provide consistent, nuanced guidance that aligns with institutional values around academic integrity and personal development.
The integration of GAI into higher education presents transformative opportunities for knowledge management, instructional innovation, and collaborative learning (Walczak & Cellary, 2023). AI-empowered tools are being leveraged to support at-risk students, enhancing personalized learning experiences and potentially increasing retention and graduation rates (Eliyahu, 2023). In line with sustainable development goals for higher education, knowledge codification, storage, and generation via AI have shown positive impacts on institutional sustainability (Budur et al., 2024). However, to fully realize these benefits, institutions must address the digital skills gap, particularly in regions like Latin America, by investing in infrastructure, fostering digital literacy, and promoting partnerships to bridge accessibility challenges (Orellana-Rodriguez, 2024). A comprehensive framework for responsible AI use is critical to ensuring ethical integration into academic curricula. The AI-enabled Collaborative Learning Ecosystems (AI-CLE) model proposed by Baskara (2024) offers one such approach, combining socio-constructivist and connectivist theories to foster a dynamic, collaborative educational ecosystem. Moreover, organizational ecology highlights that the emergence and sustainability of AI-driven educational practices depend on supportive social, economic, and political conditions, as well as institutional commitment to fostering a community of inquiry among educators, policymakers, and technical developers (Yixin et al., 2024; Eke, 2023). This multi-stakeholder effort toward responsible AI integration can redefine the landscape of higher education, helping institutions to balance technological advancements with ethical, collaborative, and sustainable educational practices.

4.6. Evolution of GAI Explorations in Higher Education

Future research on GAI in higher education should address the complexities of organizational culture and structure, examining how these internal factors influence AI integration across diverse institutional environments (Batista et al., 2024; Batta, 2024). As GAI reshapes educational frameworks, understanding the impact on assessment practices, institutional policies, and academic integrity will be essential for navigating the risks and potential of this technology (Batista et al., 2024). Additionally, studies could investigate the ways higher education institutions are adapting outdated educational models to encompass AI-driven learning, potentially recalibrating traditional tools and definitions to better align with modern technological demands (Bastani, 2024).
Exploring AI’s implications for equity is critical, especially in regions like the Global South, where educational technology infrastructure varies significantly. Investigating how AI-based tools can address or exacerbate disparities in accessibility, fairness, and representation could offer pathways toward a more equitable global education system. Ethical considerations and the development of AI policies will play a central role in ensuring that AI applications uphold standards of fairness, particularly in mitigating biases in AI-generated assessments that disproportionately impact vulnerable student populations, including international students (Albright et al., 2023; Farrelly & Baker, 2023; Kurtz et al., 2024).
There is also a need for longitudinal studies that assess GAI’s influence on core teaching and learning processes, from developing workforce-ready skills to fostering critical thinking (Bowen & Watson, 2024). Scenario analysis frameworks, as suggested by Krause et al. (2024), could offer valuable insights into diverse outcomes based on institutional and student use of AI tools, balancing technological innovation with the preservation of essential cognitive skills such as critical thinking and problem-solving. Future studies should also address curriculum reform, educator upskilling, and the development of comprehensive AI literacy programs. These initiatives would prepare both students and faculty to adapt to AI’s pervasive influence, enabling higher education to leverage GAI’s advantages—such as accessibility and tailored learning—while avoiding pitfalls such as over-reliance and misinformation (Kurtz et al., 2024).
Advancing the responsible integration of GAI will require a multi-stakeholder approach, including policymakers, educators, and AI developers, to create resilient and ethical educational ecosystems (Gottschalk & Weise, 2023; Krause et al., 2024). This will be particularly vital as institutions seek to harness the transformative potential of GAI in ways that are inclusive, sustainable, and aligned with the evolving needs of a global educational community (Eden et al., 2024; Kshetri & Voas, 2024).

5. Conclusions

This study explored how undergraduate students at Universidad Centroamericana José Simeón Cañas (UCA) engage with Generative AI (GAI) tools, focusing on their access, usage patterns, and the socio-economic and ethical factors influencing their engagement. The findings revealed that while GAI tools offer significant opportunities for enhancing learning, socio-economic disparities and ethical concerns shape how students access and utilize these technologies.
Students from lower-income backgrounds were more reliant on affordable devices such as Android smartphones, whereas students from higher-income groups had greater access to laptops and paid versions of GAI tools. These disparities highlight the persistent digital divide in higher education and emphasize the need for institutional policies that promote equitable access to educational technologies. Furthermore, ethical concerns—particularly regarding academic integrity, data privacy, and bias—were more pronounced among students with limited access to institutional support, underscoring the importance of structured guidance and digital literacy education.
By integrating socio-constructivist and connectivist learning theories, this study demonstrates that while GAI tools can foster learner autonomy and collaboration, their educational value is maximized when paired with intentional support and equitable access. Institutions must address both material access and the ethical dimensions of GAI use to support responsible and inclusive technology integration.
Future research should further explore targeted strategies for bridging digital divides and developing ethical frameworks for GAI use in under-resourced educational settings. This study contributes to the growing body of knowledge on AI in education by highlighting the intersection of technology, equity, and ethics, offering insights that can inform more inclusive educational practices in the Global South and beyond.

Author Contributions

Conceptualization, T.V. and O.G.; methodology, T.V. and O.G.; software, O.G.; validation, O.G.; formal analysis, T.V. and O.G.; investigation, T.V. and O.G.; data curation, T.V. and O.G.; writing—original draft preparation, T.V.; writing—review and editing, T.V.; visualization, T.V.; supervision, T.V. and O.G.; project administration, T.V.; funding acquisition, T.V. and O.G. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this project, titled ‘Usos de las herramientas de inteligencia artificial generativa de los estudiantes de la Universidad Centroamericana José Simeón Cañas en sus procesos educativos’, was awarded by the Fondo de Investigación UCA from the Vicerrectoría de Investigación e Innovación.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the UCA Institutional Review Board (IRB00009046; 20 June 2024).

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author [T.V.] upon reasonable request.

Acknowledgments

The authors would like to thank Mario Zetino and the Fondo de Investigación UCA for their valuable support and insightful contributions to this study. Their guidance and feedback were instrumental in refining our approach. We also extend our gratitude to the UCA students for their help in the project, which was essential to the success of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

List of all survey questions presented to UCA students.
  • Do you agree to participate in the study on the use of generative AI tools by UCA students in their educational processes? By agreeing, you confirm that you understand the study’s purpose, that the information is confidential, and that there will be no compensation for participation. You also understand that you can withdraw at any time. [Binary: Yes/No]
  • How old are you? [Multiple choice: 18–21, 22–25, 26–29, 30–33, 34 or more]
  • Which department are you from? [Multiple choice: Ahuachapán, Cabañas, Chalatenango, Cuscatlán, La Libertad, La Paz, La Unión, Morazán, San Miguel, San Salvador, San Vicente, Santa Ana, Sonsonate, Usulután, I am not Salvadoran]
  • If you answered “I am not Salvadoran,” please indicate your country of origin. [Open Response]
  • What gender do you identify with? [Open Response]
  • What academic year are you currently in? [Multiple choice: First-year, Second-Year, Third-Year, Fourth-Year, Fifth-Year, Egressed]
  • What is your major? [Multiple choice: Architecture, Food Engineering, Civil Engineering, Electrical Engineering, Energy Engineering, Industrial Engineering, Computer Engineering, Mechanical Engineering, Chemical Engineering, Bachelor’s in Law, Bachelor’s in Social Sciences, Bachelor’s in Philosophy, Bachelor’s in English Language, Bachelor’s in Psychology, Bachelor’s in Theology, Digital Marketing Technician, Multimedia Production Technician, Bachelor’s in Marketing, Bachelor’s in Social Communication, Accounting Technician, Bachelor’s in Business Administration, Bachelor’s in Public Accounting, Bachelor’s in Economics, Bachelor’s in Finance, Teaching Degree in English Language, Teaching Degree in Theology]
  • Which of the following devices do you use for your academic activities? (You may select more than one) [Multiple choice: iPhone (Cellphone), Smartphone (Cellphone), Desktop Mac (Desktop Computer), Desktop PC (Desktop Computer), iPad, Laptop, Tablet]
  • Are you currently working? [Binary: Yes/No]
  • What is the highest level of education completed by your father or guardian? [Multiple choice: High School Diploma, Seventh to Ninth Grade, Some University Studies (but no degree), None, Sixth Grade or Lower, Doctoral Degree, Master’s Degree, Undergraduate Degree]
  • What is the highest level of education completed by your mother or guardian? [Multiple choice: High School Diploma, Seventh to Ninth Grade, Some University Studies (but no degree), None, Sixth Grade or Lower, Doctoral Degree, Master’s Degree, Undergraduate Degree]
  • How familiar are you with the concept of Artificial Intelligence? [Multiple choice: Somewhat familiar, Extremely familiar, Moderately familiar, Very familiar, Not familiar at all]
  • Generative AI focuses on creating or producing new content, such as images, music, or text, through algorithms and machine learning models. Are you familiar with generative AI tools? [Multiple choice: Somewhat familiar, Extremely familiar, Moderately familiar, Very familiar, Not familiar at all]
  • How did you learn about generative AI tools? [Multiple choice: Friends, Advertisements, Content Creators, Curiosity, Integrated in a Website, Out of Necessity, News Media]
  • Have you used generative AI tools for your academic studies? [Binary: Yes/No]
  • If your answer is “no,” why haven’t you used GAI tools in your studies? (Indicate the main reason) [Open Response]
  • For the following AI tools, how many times did you use each one per week in the last month? (ChatGPT, Gemini, DeepL, Copilot, SlidesAI, Character.ai, Perplexity.ai) [Multiple choice: Once per week, 2–3 times per week, 4–5 times per week, 6 times per week, Never, Every day]
  • If there is another GAI tool you use that is not listed, please name it. (If none, leave the response blank) [Open Response]
  • If you added a GAI tool, how many times a week do you use it? (If none, skip to the next question) [Multiple choice: Once per week, 2–3 times per week, 4–5 times per week, 6 times per week, Never, Every day]
  • How long have you been using GAI tools for your studies? [Multiple choice: Since 2022, Since 2023, Since 2024, Since before 2022]
  • How many hours per day, on average, do you use these tools? [Open Response]
  • Have you paid to use a GAI tool or for a premium version? [Binary: Yes/No]
  • As part of my studies, I use generative AI tools for… [Open Response]
  • What factors are most important when choosing a GAI tool? [Open Response]

References

  1. Abdelwahab, H. R., Rauf, A., & Chen, D. (2023). Business students’ perceptions of Dutch higher educational institutions in preparing them for artificial intelligence work environments. Industry & Higher Education, 37(1), 22–34. [Google Scholar] [CrossRef]
  2. Abeliuk, A., & Gutiérrez, C. (2021). Historia y evolución de la inteligencia artificial. Revista Bits de Ciencia, 21, 14–21. [Google Scholar] [CrossRef]
  3. Ahmad, S. F., Alam, M. M., Rahmat, M. K., Mubarik, M. S., & Hyder, S. I. (2022). Academic and administrative role of artificial intelligence in education. Sustainability, 14(3), 1101. [Google Scholar] [CrossRef]
  4. Albayati, H. (2024). Investigating undergraduate students’ perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 6, 100203. [Google Scholar] [CrossRef]
  5. Albright, K., Krymskaya, A., & Cervone, F. (2023). Introduction: International perspectives on knowledge management education. Library Trends, 72(2), 177–186. [Google Scholar] [CrossRef]
  6. American University School of Education. (2019). Understanding the digital divide in education. Available online: https://soeonline.american.edu/blog/digital-divide-in-education/ (accessed on 11 November 2024).
  7. Anthology Report. (2023). AI in higher ed: Hype, harm, or help. Anthology. Available online: https://www.anthology.com/paper/ai-in-higher-ed-hype-harm-or-help (accessed on 11 November 2024).
  8. Aparicio-Gómez, O. Y., Ostos-Ortiz, O. L., & Abadía-García, C. (2024). Convergence between emerging technologies and active methodologies in the university. Journal of Technology and Science Education, 14(1), 31–44. [Google Scholar] [CrossRef]
  9. Azcúnaga López, R. E. (2024, October 15). Inteligencia artificial y educación superior en el salvador. Diario el salvador. Available online: https://diarioelsalvador.com/inteligencia-artificial-y-educacion-superior-en-el-salvador/530613/ (accessed on 11 November 2024).
  10. Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability, 15(17), 12983. [Google Scholar] [CrossRef]
  11. Baskara, R. (2024). From AI to we: Harnessing generative AI tools to cultivate collaborative learning ecosystems in universities. Proc. Int. Conf. Learn. Community (ICLC), 1(1), 676–690. [Google Scholar]
  12. Bastani, H. (2024, August 27). Without guardrails, generative AI can harm education. Knowledge at Wharton. Available online: https://knowledge.wharton.upenn.edu/article/without-guardrails-generative-ai-can-harm-education/ (accessed on 11 November 2024).
  13. Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information, 15(11), 676. [Google Scholar] [CrossRef]
  14. Batta, A. (2024). Transforming higher education through generative AI: Opportunity and challenges. Paradigm, 28(2), 241–243. [Google Scholar] [CrossRef]
  15. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21, 4. [Google Scholar] [CrossRef]
  16. Borja Velezmoro, G., & Carcausto, W. (2020). Herramientas digitales en la educación universitaria latinoamericana: Una revisión bibliográfica. Revista Educación Las Américas, 10(2), 254–264. [Google Scholar] [CrossRef]
  17. Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press. [Google Scholar]
  18. Briñis-Zambrano, A. (2024). Beneficios y limitaciones en docentes y estudiantes universitarios salvadoreños sobre el uso de IA en procesos de enseñanza-aprendizaje [Benefits and limitations for Salvadoran university teachers and students on the use of AI in teaching-learning processes]. European Public & Social Innovation Review, 9(1), 1–19. [Google Scholar] [CrossRef]
  19. Budur, T., Abdullah, H., Rashid, C. A., & Demirer, H. (2024). The connection between knowledge management processes and sustainability at higher education institutions. Journal of Knowledge Economy, 1–34. [Google Scholar] [CrossRef]
  20. Bustillos, J. (2017). The digital divide: Neoliberal imperatives and education. In S. Isaacs (Ed.), European Social Problems (pp. 149–165). Routledge. [Google Scholar]
  21. Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., Bonnefon, J.-F., Brañas-Garza, P., Butera, L., Douglas, K. M., Everett, J. A. C., Gigerenzer, G., Greenhow, C., Hashimoto, D. A., Holt-Lunstad, J., Jetten, J., Johnson, S., Kunz, W. H., Longoni, C., . . . Viale, R. (2024). The impact of generative artificial intelligence on socio-economic inequalities and policy making. PNAS Nexus, 3(6), 191. [Google Scholar] [CrossRef]
  22. Carrasco Rodríguez, A. (2023). Reinventando la enseñanza de la Historia Moderna en Secundaria: La utilización de ChatGPT para potenciar el aprendizaje y la innovación docente. Studia Historica: Historia Moderna, 45(1), 101–145. [Google Scholar] [CrossRef]
  23. Caruth, G. D. (2013). Demystifying mixed methods research design: A review of the literature. Mevlana International Journal of Education (MIJE), 3(2), 112–122. [Google Scholar] [CrossRef]
  24. Casas Anguita, J., Repullo Labrador, J. R., & Donado Campos, J. (2003). La encuesta como técnica de investigación: Elaboración de cuestionarios y tratamiento estadístico de los datos (II). Atención Primaria, 31(9), 592–600. [Google Scholar] [CrossRef]
  25. Castillo, J., Juárez Saavedra, L., Moreno, M., Ponce-Benítez, L., Ramos, K., Lemus, R., Maida, H., García, P., Contreras, E., & Escobar, K. (2024). Un libro sobre IA: Inteligencia artificial, industrias creativas y educación en comunicación: Una mirada desde El Salvador. Mónica Herrera Ediciones. ISBN 978-99961-941-8-4. [Google Scholar]
  26. Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications. [Google Scholar]
  27. Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE Publications. [Google Scholar]
  28. Dabbagh, N., & Kitsantas, A. (2012). Personal learning environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), 3–8. [Google Scholar] [CrossRef]
  29. Department for Education and The Open Innovation Team. (2024). Generative AI in education: Educator and expert views report. United Kingdom. Available online: https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf (accessed on 11 November 2024).
  30. Diliberti, M. K., Schwartz, H. L., Doan, S., Shapiro, A., & Rainey, L. R. (2024). Using artificial intelligence tools in K–12 classrooms. RAND Corporation. Available online: https://www.rand.org/pubs/research_reports/RRA956-21.html (accessed on 11 November 2024).
  31. Downes, S. (2012). Connectivism and connective knowledge: Essays on meaning and learning networks. National Research Council Canada. Available online: http://www.downes.ca/files/books/Connective_Knowledge-19May2012.pdf (accessed on 11 November 2024).
  32. Eden, C. A., Chisom, O. N., & Adeniyi, I. S. (2024). Integrating AI in education: Opportunities, challenges, and ethical considerations. Magna Scientia Advanced Research and Reviews, 10(2), 6–13. [Google Scholar] [CrossRef]
  33. Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060. [Google Scholar] [CrossRef]
  34. Eliyahu, S. (2023, August 23). How generative AI is revolutionizing knowledge management. Forbes. Available online: https://www.forbes.com/councils/forbestechcouncil/2023/08/23/how-generative-ai-is-revolutionizing-knowledge-management/ (accessed on 11 November 2024).
  35. Essien, A., Bukoye, O. T., O’Dea, X., & Kremantzis, M. (2024). The influence of AI text generators on critical thinking skills in UK business schools. Studies in Higher Education, 49(5), 865–882. [Google Scholar] [CrossRef]
  36. Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. [Google Scholar] [CrossRef]
  37. Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs: Principles and practices. Health Services Research, 48(6), 2134–2156. [Google Scholar] [CrossRef]
  38. Flores-Vivar, J. M., & García-Peñalvo, F. J. (2023). Reflexiones sobre la ética, potencialidades y retos de la inteligencia artificial en el marco de la educación de calidad (ODS4). Comunicar: Revista Científica de Educomunicación, 74(XXXI), 37–47. [Google Scholar] [CrossRef]
  39. García, J. (2024, June 11). Nayib Bukele, asesores políticos, y el uso de inteligencia artificial en el gobierno. El Salvador. Available online: https://www.elsalvador.com/noticias/nacional/nayib-bukele-asesores-politicos-gobierno-inteligencia-artificial/1148521/2024/ (accessed on 11 November 2024).
  40. Gottschalk, F., & Weise, C. (2023). Digital equity and inclusion in education: An overview of practice and policy in OECD countries. OECD Education Working Papers, No. 299. OECD. [Google Scholar] [CrossRef]
  41. Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. [Google Scholar] [CrossRef]
  42. Guo, S., Shi, L., & Zhai, X. (2024). Validating an instrument for teachers’ acceptance of artificial intelligence in education. arXiv. [Google Scholar] [CrossRef]
  43. Hernández Sampieri, R., Fernández Collado, C., & Baptista Lucio, P. (2014). Metodología de la investigación (6th ed.). McGraw-Hill Education. [Google Scholar]
  44. Hurst, B., Wallace, R., & Nixon, S. (2013). The impact of social interaction on student learning. Reading Horizons, 52(4), 375–398. [Google Scholar]
  45. IEEE CTU. (2023). Consequences of the digital divide in education. Available online: https://ctu.ieee.org/blog/2023/01/30/consequences-of-the-digital-divide-in-education/ (accessed on 11 November 2024).
  46. Islas Torres, C. (2021). Conectivismo y neuroeducación: Transdisciplinas para la formación en la era digital. CIENCIA Ergo-Sum, 28(1), e117. [Google Scholar] [CrossRef]
  47. Kerres, M., & Buchner, J. (2022). Education after the pandemic: What we have (not) learned about learning. Education Sciences, 12(5), 315. [Google Scholar] [CrossRef]
  48. Klement, M. (2017, August 22–31). Connectivism and ICT tools: The opinions and attitudes of teachers toward their use in education. 4th SGEM International Multidisciplinary Scientific Conferences on Social Sciences and Arts, Vienna, Austria. [Google Scholar] [CrossRef]
  49. Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550. [Google Scholar] [CrossRef]
  50. Krause, S., Panchal, B. H., & Ubhe, N. (2024). The evolution of learning: Assessing the transformative impact of generative AI on higher education. arXiv. [Google Scholar] [CrossRef]
  51. Kshetri, N., & Voas, J. (2024). Adapting to generative artificial intelligence: Approaches in higher education institutions. Computer, 57(9), 128–133. [Google Scholar] [CrossRef]
  52. Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503. [Google Scholar] [CrossRef]
  53. Lai, K. W., De Nobile, J., Bower, M., & Breyer, Y. (2022). Comprehensive evaluation of the use of technology in education: Confirmatory factor analysis of an instrument. Education and Information Technologies, 27(1), 123–145. [Google Scholar] [CrossRef]
  54. Lemus, R. M. (2023). ¡Hey, IA, Evaluemos Experiencias en el Aula! Evaluación del impacto de las inteligencias artificiales generativas (IAG) en el desarrollo de competencias transversales: Un estudio de seguimiento en el aula universitaria. Abierta Anuario de Investigación, 17, 10–21. Available online: https://revistaabierta.monicaherrera.edu.sv/index.php/abierta/article/view/77 (accessed on 11 November 2024).
  55. Lisowska Navarro, M., García Amézquita, J. A., Espitia Castellanos, J., Blanco Castillo, H., & Pérez Hernández, J. P. A. (2023). La vanguardia de las tendencias internacionales en bibliotecas académicas. Universidad de Rosario. Available online: https://acortar.link/W55D9k (accessed on 11 November 2024).
  56. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education. [Google Scholar]
  57. McIntosh, T. R., Susnjak, T., Liu, T., Watters, P., & Halgamuge, M. N. (2023). From Google Gemini to OpenAI Q*(Q-star): A survey of reshaping the generative artificial intelligence (AI) research landscape. arXiv. [Google Scholar] [CrossRef]
  58. Midwestern Higher Education Compact. (2020). The digital divide among college students: Lessons learned from the COVID-19 emergency transition. Available online: https://www.mhec.org/sites/default/files/resources/2021The_Digital_Divide_among_College_Students_1.pdf (accessed on 11 November 2024).
  59. Molina, E., Cobo, C., Pineda, J., & Rovner, H. (2024). AI revolution in education: What you need to know. In Digital innovations in education. World Bank. Available online: http://documents.worldbank.org/curated/en/099734306182493324/IDU152823b13109c514ebd19c241a289470b6902 (accessed on 11 November 2024).
Table 1. Distributions of student age groups at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Age | Observed (%) | Expected (%) |
| --- | --- | --- |
| 18–21 | 65.4 | 20.0 |
| 22–25 | 30.6 | 20.0 |
| 26–29 | 2.1 | 20.0 |
| 30–33 | 1.3 | 20.0 |
| 34 or greater | 0.5 | 20.0 |
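The observed-versus-expected columns in this and the following tables compare each reported distribution against a uniform baseline, which is the setup of a chi-square goodness-of-fit test. As an illustration only (the article does not publish its analysis code, and converting the rounded percentages back to counts via n = 376 is an assumption), the statistic for Table 1 can be sketched in a few lines of Python:

```python
# Observed percentages from Table 1 (n = 376); expected is uniform across 5 bins.
observed_pct = [65.4, 30.6, 2.1, 1.3, 0.5]
expected_pct = [20.0] * 5
n = 376

# Convert percentages to (approximate) counts.
obs = [p / 100 * n for p in observed_pct]
exp = [p / 100 * n for p in expected_pct]

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E.
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
df = len(obs) - 1  # degrees of freedom = categories - 1

print(f"chi2 = {chi2:.1f}, df = {df}")  # chi2 ≈ 606.1, df = 4
```

A statistic this large against df = 4 would indicate that the age distribution departs strongly from uniform, consistent with the heavy concentration of respondents in the 18–21 bracket.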
Table 2. Distributions of students’ reported Departments of residence at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Salvadoran Department | Observed (%) | Expected (%) |
| --- | --- | --- |
| Ahuachapán | 0.5 | 7.1 |
| Cabañas | 1.1 | 7.1 |
| Chalatenango | 1.9 | 7.1 |
| Cuscatlán | 0.5 | 7.1 |
| La Libertad | 28.5 | 7.1 |
| La Paz | 4.5 | 7.1 |
| La Unión | 0.5 | 7.1 |
| Morazán | 0.3 | 7.1 |
| Not Salvadoran | 7.2 | 7.1 |
| San Miguel | 1.9 | 7.1 |
| San Salvador | 46.0 | 7.1 |
| San Vicente | 0.5 | 7.1 |
| Santa Ana | 3.2 | 7.1 |
| Sonsonate | 3.5 | 7.1 |
Table 3. Distributions of students’ self-reported gender at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Gender | Observed (%) | Expected (%) |
| --- | --- | --- |
| Gender Fluid | 1.1 | 25 |
| [Cisgender] Male | 50.3 | 25 |
| [Cisgender] Female | 48.4 | 25 |
| Transgender Male | 0.3 | 25 |

Table 4. Distributions of cisgender students by gender at Universidad Centroamericana José Simeón Cañas (n = 371 responses).

| Gender | Observed (%) | Expected (%) |
| --- | --- | --- |
| [Cisgender] Male | 50.9 | 50 |
| [Cisgender] Female | 49.1 | 50 |
Table 5. Distributions of student academic cohorts at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Academic Cohort | Observed (%) | Expected (%) |
| --- | --- | --- |
| First-Year Student | 23.9 | 14.3 |
| Second-Year Student | 23.1 | 14.3 |
| Third-Year Student | 20.5 | 14.3 |
| Fourth-Year Student | 18.1 | 14.3 |
| Fifth-Year Student | 12.8 | 14.3 |
| Egressed | 1.6 | 14.3 |
Table 6. Distributions of student academic majors at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Academic Major | Observed (%) | Expected (%) |
| --- | --- | --- |
| Architecture | 5.9 | 4.0 |
| Civil Engineering | 8.0 | 4.0 |
| Food Engineering | 1.6 | 4.0 |
| Electrical Engineering | 2.7 | 4.0 |
| Energy Engineering | 0.8 | 4.0 |
| Industrial Engineering | 9.0 | 4.0 |
| Computer Engineering | 14.4 | 4.0 |
| Mechanical Engineering | 1.9 | 4.0 |
| Chemical Engineering | 4.5 | 4.0 |
| Business Administration | 7.2 | 4.0 |
| Law | 4.0 | 4.0 |
| Social Sciences | 0.5 | 4.0 |
| Mass Communication | 5.6 | 4.0 |
| Public Accounting | 3.2 | 4.0 |
| Economics | 3.7 | 4.0 |
| Philosophy | 0.5 | 4.0 |
| Finance | 0.5 | 4.0 |
| English | 4.8 | 4.0 |
| Marketing | 4.5 | 4.0 |
| Psychology | 11.7 | 4.0 |
| Theology | 0.3 | 4.0 |
| Teaching Degree in English Language | 1.3 | 4.0 |
| Associate Degree in Accounting | 1.3 | 4.0 |
| Associate Degree in Digital Marketing | 1.3 | 4.0 |
| Associate Degree in Multimedia Production | 0.8 | 4.0 |
Table 7. Distributions of student employment status at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Employment Status | Observed (%) | Expected (%) |
| --- | --- | --- |
| Not Employed | 75.5 | 50 |
| Employed | 24.5 | 50 |

Table 8. Distributions of paternal highest degree earned at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Education Level | Observed (%) | Expected (%) |
| --- | --- | --- |
| None | 3.5 | 12.5 |
| 6th Grade or Lower | 4.0 | 12.5 |
| 7th–9th Grade | 7.7 | 12.5 |
| High School Diploma | 24.7 | 12.5 |
| Some University Studies (No Degree) | 15.4 | 12.5 |
| Licentiate Degree | 34.3 | 12.5 |
| Master’s Degree | 7.4 | 12.5 |
| Doctoral Degree | 2.9 | 12.5 |
Table 9. Distributions of maternal highest degree earned at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Education Level | Observed (%) | Expected (%) |
| --- | --- | --- |
| None | 1.9 | 12.5 |
| 6th Grade or Lower | 6.4 | 12.5 |
| 7th–9th Grade | 6.6 | 12.5 |
| High School Diploma | 24.5 | 12.5 |
| Some University Studies (No Degree) | 18.9 | 12.5 |
| Licentiate Degree | 33.0 | 12.5 |
| Master’s Degree | 6.6 | 12.5 |
| Doctoral Degree | 2.1 | 12.5 |
Table 10. Distributions of preferred devices for academic activities at Universidad Centroamericana José Simeón Cañas (n = 842 responses).

| Device | Observed (%) | Expected (%) |
| --- | --- | --- |
| iPhone Smartphones | 15.1 | 14.3 |
| Android Smartphones | 27.9 | 14.3 |
| PC Desktop | 8.6 | 14.3 |
| iOS Desktop | 0.5 | 14.3 |
| iPad Tablets | 4.5 | 14.3 |
| Android Tablets | 5.7 | 14.3 |
| Laptop (PC or iOS) | 37.8 | 14.3 |
Table 11. Distributions of familiarity with Generative AI at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Familiarity Level | Observed (%) | Expected (%) |
| --- | --- | --- |
| Not Familiar At All | 2.7 | 20 |
| Somewhat Familiar | 21.5 | 20 |
| Moderately Familiar | 40.4 | 20 |
| Very Familiar | 27.1 | 20 |
| Extremely Familiar | 8.2 | 20 |

Table 12. Distributions of familiarity with Generative AI tools at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Response | Observed (%) | Expected (%) |
| --- | --- | --- |
| Negative | 25.0 | 50 |
| Affirmative | 75.0 | 50 |
Table 13. Distributions of how students learned about Generative AI at Universidad Centroamericana José Simeón Cañas (n = 368 responses).

| Discovery Method | Observed (%) | Expected (%) |
| --- | --- | --- |
| Friends | 35.1 | 14.3 |
| Advertisements | 9.0 | 14.3 |
| Content Creators | 19.6 | 14.3 |
| Curiosity | 18.8 | 14.3 |
| Website Integration | 2.2 | 14.3 |
| Necessity | 7.3 | 14.3 |
| News Media | 8.2 | 14.3 |
Table 14. Distributions of Generative AI tool usage in academic settings at Universidad Centroamericana José Simeón Cañas (n = 376 responses).

| Response | Observed (%) | Expected (%) |
| --- | --- | --- |
| Negative | 25.0 | 50 |
| Affirmative | 75.0 | 50 |

Table 15. Distributions of reasons students are not using Generative AI tools in academic settings at Universidad Centroamericana José Simeón Cañas (n = 94 responses).

| Reason for Not Using GAI | Observed (%) | Expected (%) |
| --- | --- | --- |
| Lack of Need | 10.6 | 16.7 |
| Lack of Interest | 1.1 | 16.7 |
| Ethical Concerns | 42.6 | 16.7 |
| Prohibited Use | 19.1 | 16.7 |
| Skill Limitations | 24.5 | 16.7 |
| No Desire to Use | 2.1 | 16.7 |
Table 16. Distributions of ChatGPT usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 3.9 | 16.7 |
| Once per week | 34.0 | 16.7 |
| 2–3 times per week | 29.1 | 16.7 |
| 4–5 times per week | 17.7 | 16.7 |
| 6 times per week | 5.3 | 16.7 |
| Every day | 9.9 | 16.7 |

Table 17. Distributions of Gemini usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 73.0 | 16.7 |
| Once per week | 13.1 | 16.7 |
| 2–3 times per week | 6.4 | 16.7 |
| 4–5 times per week | 4.3 | 16.7 |
| 6 times per week | 1.8 | 16.7 |
| Every day | 1.4 | 16.7 |
Table 18. Distributions of DeepL usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 78.7 | 16.7 |
| Once per week | 8.2 | 16.7 |
| 2–3 times per week | 7.4 | 16.7 |
| 4–5 times per week | 2.1 | 16.7 |
| 6 times per week | 1.4 | 16.7 |
| Every day | 2.1 | 16.7 |
Table 19. Distributions of CoPilot usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 66.7 | 16.7 |
| Once per week | 11.3 | 16.7 |
| 2–3 times per week | 12.1 | 16.7 |
| 4–5 times per week | 6.4 | 16.7 |
| 6 times per week | 1.1 | 16.7 |
| Every day | 2.5 | 16.7 |

Table 20. Distributions of SlidesAI usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 87.9 | 16.7 |
| Once per week | 8.2 | 16.7 |
| 2–3 times per week | 2.5 | 16.7 |
| 4–5 times per week | 0.4 | 16.7 |
| 6 times per week | 1.1 | 16.7 |
| Every day | 0.0 | 16.7 |

Table 21. Distributions of Character AI usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 85.5 | 16.7 |
| Once per week | 8.2 | 16.7 |
| 2–3 times per week | 3.5 | 16.7 |
| 4–5 times per week | 0.4 | 16.7 |
| 6 times per week | 1.8 | 16.7 |
| Every day | 0.7 | 16.7 |

Table 22. Distributions of Perplexity usage frequency among students at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Never | 89.0 | 16.7 |
| Once per week | 6.0 | 16.7 |
| 2–3 times per week | 2.1 | 16.7 |
| 4–5 times per week | 1.1 | 16.7 |
| 6 times per week | 1.1 | 16.7 |
| Every day | 0.7 | 16.7 |
Table 23. Distributions of student usage of additional Generative AI tools beyond those listed in the survey at Universidad Centroamericana José Simeón Cañas (n = 388 responses).

| Usage of Additional GAI Tools | Observed (%) | Expected (%) |
| --- | --- | --- |
| Yes (Other Tools Used) | 7.2 | 50.0 |
| No (No Other Tools Used) | 89.7 | 50.0 |

Table 24. Distributions of usage frequency of additional Generative AI tools among students at Universidad Centroamericana José Simeón Cañas (n = 99 responses).

| Usage Frequency | Observed (%) | Expected (%) |
| --- | --- | --- |
| Once per week | 47.5 | 20.0 |
| 2–3 times per week | 40.4 | 20.0 |
| 4–5 times per week | 4.0 | 20.0 |
| 6 times per week | 2.0 | 20.0 |
| Every day | 6.1 | 20.0 |
Table 25. Distributions of the year students began using Generative AI tools at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Start Year | Observed (%) | Expected (%) |
| --- | --- | --- |
| Before 2022 | 9.6 | 25.0 |
| Since 2022 | 18.8 | 25.0 |
| Since 2023 | 51.1 | 25.0 |
| Since 2024 | 20.6 | 25.0 |

Table 26. Distributions of time spent using Generative AI tools at Universidad Centroamericana José Simeón Cañas (n = 282 responses).

| Time Spent Using GAI Tools | Observed (%) | Expected (%) |
| --- | --- | --- |
| Less than 1 h | 63.8 | 20.0 |
| 1–2 h | 31.2 | 20.0 |
| Rarely less than once a week | 0.4 | 20.0 |
| A few minutes for a specific action | 0.4 | 20.0 |
| More than 3 h | 4.3 | 20.0 |

Table 27. Distributions of students paying for Generative AI tools at Universidad Centroamericana José Simeón Cañas (n = 281 responses).

| Payment Status | Observed (%) | Expected (%) |
| --- | --- | --- |
| No | 91.8 | 50 |
| Yes | 8.2 | 50 |
Table 28. Distributions of tasks for which students use Generative AI tools at Universidad Centroamericana José Simeón Cañas (n = 1555 responses).

| Task Category | Observed (%) | Expected (%) |
| --- | --- | --- |
| Content Creation | 11.1 | 20.0 |
| Research and Academic Tasks | 29.3 | 20.0 |
| Technical and Problem-Solving | 16.2 | 20.0 |
| Clarification and Inspiration | 38.4 | 20.0 |
| Language and Translation | 5.0 | 20.0 |

Table 29. Distributions of important aspects when choosing a Generative AI tool among students at Universidad Centroamericana José Simeón Cañas (n = 1285 responses).

| Selection Criteria | Observed (%) | Expected (%) |
| --- | --- | --- |
| Provide logical arguments | 12.2 | 11.1 |
| Include relevant scientific data | 13.5 | 11.1 |
| Produce understandable and complete responses | 30.5 | 11.1 |
| Accurate detection and correction of errors | 8.4 | 11.1 |
| Human-like explanations or illustrations | 4.3 | 11.1 |
| Error-free and non-duplicative responses | 5.8 | 11.1 |
| No grammatical errors | 8.4 | 11.1 |
| Provide precise citations | 13.5 | 11.1 |
| Affordable price | 3.4 | 11.1 |