4.1. Student Insights
The surveyed academic community reflected a diverse range of ages, geographic origins, stages of academic progression, and areas of study. The majority of students were between 18 and 25 years old, indicating a predominantly younger student body, with 65.4% falling in the 18–21 age group and 30.6% in the 22–25 age range. Most students resided in urban centers such as San Salvador (46%) and La Libertad (28.5%), while fewer participants came from rural or less populated regions. This urban concentration may influence access to resources such as academic technology and support services. In terms of academic progression, early cohorts were more heavily represented, with most students in their first, second, or third year of study and actively engaged in foundational coursework. Popular majors included Computer Engineering, Psychology, and various fields in Engineering and Business, reflecting a strong presence in both STEM and business-related disciplines.
Parental education levels suggest that many students come from relatively well-educated households: a notable portion of students reported that their parents had completed either a Licentiate degree (34.3% of fathers, 33.0% of mothers) or a High School diploma (24.7% of fathers, 24.5% of mothers). This background may contribute to a higher likelihood of engaging with advanced academic technologies, such as GAI tools, which are becoming integral to students’ academic experiences. These tools offer ways to enhance learning efficiency and organization, aligning with students’ academic backgrounds.
GAI tools were widely used for organizing ideas, supporting research, and clarifying prompts, with Android smartphones (27.9%) and laptops (37.8%) being the preferred devices for these tasks. This preference highlights the importance of portability and flexibility in students’ academic environments, as many students likely need tools they can access on the go. Familiarity with GAI was also high, with the majority of students reporting moderate to strong familiarity. Specifically, 40.4% of students described themselves as “moderately familiar” with GAI, suggesting that these tools are becoming a regular feature of students’ academic workflows, especially among those who engage with them frequently.
Interestingly, students primarily learned about GAI tools through informal channels such as friends (35.1%), content creators (19.6%), and personal curiosity (18.8%), rather than through formal institutional outreach. This reliance on informal networks underscores the organic, grassroots nature of AI tool adoption in academic settings. Despite widespread usage, concerns about the ethical implications of AI, such as bias and reliability (42.6%), along with barriers related to technical expertise, were common reasons for some students’ hesitation. These concerns point to a need for better guidance and education on the appropriate use of AI in academia, particularly to address students’ uncertainties around the ethical dimensions of these tools.
ChatGPT was the most commonly used GAI tool, with a significant portion of students using it regularly; 34% reported using it once per week and 29.1% used it two to three times per week. Other tools, such as Gemini, DeepL, and Copilot, saw significantly lower engagement, indicating that while students are open to AI tools, they tend to favor platforms like ChatGPT, which may be more versatile and accessible for academic tasks. This suggests that ChatGPT’s broad functionality and ease of use make it a go-to resource for students, while other tools may lack the same level of awareness or perceived utility.
In terms of usage patterns, most students used AI tools for task-oriented activities, with 63.8% spending less than one hour per week using them. Common academic applications included organizing ideas, analyzing data, and summarizing texts, while more creative or technical uses—such as content creation or problem-solving—were relatively rare. This indicates that students primarily view AI as a practical tool for enhancing academic efficiency, helping with routine tasks rather than engaging in more complex, creative, or technical processes.
When selecting an AI tool, students prioritized functionality that directly supports their academic performance. They valued tools that present logical arguments (12.2%), provide relevant scientific information (13.5%), and ensure clear, understandable responses (30.5%). Affordability and human-like responses were less important, highlighting students’ focus on academic utility over esthetics or cost. This preference for tools that deliver reliable, academically relevant information suggests that students are highly motivated by performance and academic rigor when integrating AI into their studies.
The focus group findings illustrate a nuanced perspective on GAI tools among students, where efficiency and academic convenience are balanced against potential impacts on learning development and academic integrity. While students overwhelmingly recognized the benefits of GAI for tasks that required simplification or time management, they also demonstrated a keen awareness of the potential risks associated with over-reliance on these tools. This duality in perception suggests that students are not merely passive users of technology but are actively engaging in critical reflection on the role of AI in their academic journeys.
A major finding was that students frequently turned to GAI tools to streamline repetitive or time-consuming tasks, allowing them to manage their academic workload more effectively. This practical approach aligns with existing literature that identifies AI as a valuable support tool in educational settings (Abdelwahab et al., 2023; Albayati, 2024; Anthology Report, 2023; Aparicio-Gómez et al., 2024; Bahroun et al., 2023; Department for Education and The Open Innovation Team, 2024; Diliberti et al., 2024; Molina et al., 2024; Shibani et al., 2024; UNESCO, 2023), particularly when students are navigating heavy course loads. The emphasis on convenience and efficiency highlights a pragmatic approach, with students using AI tools as supplementary aids rather than primary sources of knowledge.
However, students’ concerns regarding academic integrity and skill development suggest an emerging sense of ethical responsibility and self-regulation. Participants expressed hesitation about allowing GAI tools to replace critical thinking or original analysis, mirroring broader educational concerns about AI’s potential to weaken students’ independent cognitive skills (Essien et al., 2024; Shoja et al., 2023; Zhang et al., 2024). This tension indicates that while GAI tools are perceived as valuable, their integration into academic routines may benefit from guidance and structure to support ethical usage. Educators might consider integrating discussions on the responsible use of AI into their curricula, encouraging students to approach GAI as a means to complement, rather than substitute for, personal learning efforts.
This study contributes to addressing the significant gap in empirical research on the use of GAI tools in low-resource educational settings (Briñis-Zambrano, 2024; Castillo et al., 2024; Lemus, 2023). Existing literature on AI integration in higher education predominantly focuses on well-resourced institutions in developed countries, often overlooking the unique challenges faced by students in the Global South (Lisowska Navarro et al., 2023; Mpofu & Ndlovu-Gatsheni, 2020). At institutions like UCA, socio-economic disparities, inconsistent internet access, and limited institutional support shape how students engage with GAI tools. These structural barriers contrast with the seamless integration observed in wealthier educational contexts, highlighting the need for localized strategies that address digital equity and technological literacy (Mpofu & Ndlovu-Gatsheni, 2020). By situating this study within UCA, the findings offer critical insights into how educational institutions in developing regions can design interventions that bridge technological divides and support responsible GAI adoption (Capraro et al., 2024).
4.3. Socio-Economic and Ethical Implications for GAI Access and Engagement
The results of this study reveal that socio-economic factors significantly influence students’ access to and engagement with Generative AI (GAI) tools. The reliance on different devices—such as smartphones versus laptops—and the overwhelming preference for free versions of GAI tools underscore how financial constraints shape students’ ability to fully engage with educational technologies. This pattern aligns with prior research emphasizing how digital divides disproportionately impact students in under-resourced contexts (van Dijk, 2020; Salas-Pilco & Yang, 2022).
Students from lower socio-economic backgrounds primarily accessed GAI tools through Android smartphones, reflecting the accessibility and affordability of mobile devices over more versatile laptops. This reliance may limit their ability to engage in more complex academic tasks that are better supported on larger, more powerful devices. Moreover, the stark contrast in the use of paid versus free GAI tools highlights how economic barriers restrict access to premium features that could further enhance learning outcomes.
These disparities underscore the importance of institutional policies aimed at bridging digital divides and ensuring equitable access to advanced educational technologies. Universities can play a pivotal role by offering greater access to shared digital resources, expanding technology lending programs, and providing training on how to maximize the utility of free GAI tools. This is particularly crucial for institutions in the Global South, where infrastructural and socio-economic barriers are more pronounced.
Ethical concerns related to GAI use also varied along socio-economic lines. Students with limited institutional support expressed greater apprehension regarding issues of academic integrity, data privacy, and bias. This suggests that beyond providing access, institutions must offer ethical guidance and digital literacy training to ensure responsible engagement with GAI tools. These findings support socio-constructivist perspectives, which emphasize the need for structured frameworks to help learners navigate complex tools effectively (Vygotsky, 1978; Palincsar, 1998; Luckin et al., 2016).
By highlighting how socio-economic and ethical factors interact with GAI use, this study contributes to a deeper understanding of the challenges and opportunities in integrating educational technologies. Addressing these disparities is essential for fostering an inclusive academic environment where all students can benefit from emerging digital tools.
4.4. Theoretical Implications
The findings of this study align closely with socio-constructivist learning theories, which emphasize that knowledge is actively constructed through social interaction and engagement with learning tools (Vygotsky, 1978), and with the view that “connectivism is, as an applied theory of constructivism, therefore based on the conviction that every learning process is deeply individual, and that knowledge is formed (constructed) by one’s own experiences and interpretations of the world” (Klement, 2017, p. 98). Students’ use of Generative AI (GAI) tools to clarify ideas, organize thoughts, and complete academic tasks reflects self-directed learning practices consistent with constructivist principles. Focus group discussions revealed that students not only used GAI tools individually but also collaboratively, sharing strategies and insights with peers. This collaborative dynamic mirrors Vygotsky’s concept of the Zone of Proximal Development (ZPD), where learning is optimized through guided support (Wood et al., 1976). However, the absence of structured guidance in using GAI tools introduces challenges related to ethical use and critical engagement. This underscores the need for intentional support by educators to help students develop ethical reasoning and critical thinking skills when engaging with digital and AI technologies (Palincsar, 1998; Luckin et al., 2016; Siemens, 2005).
In addition, the results strongly resonate with connectivist learning theory (Siemens, 2005), which views learning as the process of forming and navigating networks of information. GAI tools, particularly adaptive platforms like ChatGPT, serve as critical nodes within students’ personal learning networks, enabling them to dynamically organize and process information. Connectivism emphasizes that the value of learning lies in forming connections between diverse information sources—including digital tools, peers, and academic content (Downes, 2012). The preference for versatile and responsive platforms reflects this principle, as students prioritize tools that facilitate immediate access to relevant information and support decision-making as part of the learning process. This behavior aligns with the connectivist notion that “decision-making itself is a learning process”, especially in rapidly evolving digital environments (Klement, 2017).
Moreover, the geographic and demographic diversity of the student population highlights the potential of GAI tools to bridge educational gaps in resource-constrained environments. Students from rural and under-resourced areas reported using smartphones and laptops to access GAI tools, demonstrating how these technologies compensate for disparities in physical infrastructure (van Dijk, 2020). This finding directly supports connectivism’s focus on leveraging digital networks to overcome traditional barriers to learning, highlighting GAI’s role in promoting educational equity (Klement, 2017).
However, while GAI tools offer opportunities to expand access and support learning, they also present challenges. Without structured support, students may not fully develop the ethical reasoning or critical thinking skills necessary for responsible engagement with these technologies (Islas Torres, 2021). This observation reinforces the socio-constructivist principle that learners benefit from guided support provided by educators, especially when interacting with complex tools (Islas Torres, 2021; Wood et al., 1976; Redecker, 2017). Integrating GAI tools into structured learning environments can help students develop not only technical proficiency but also the ethical and reflective thinking skills essential for meaningful learning.
By framing these findings through socio-constructivist and connectivist perspectives, this study contributes to a deeper understanding of how GAI tools facilitate learner autonomy, collaboration, and equitable access to knowledge (Islas Torres, 2021; Dabbagh & Kitsantas, 2012). This theoretical integration underscores the need for intentional educational design that supports responsible and ethical GAI integration, particularly in under-resourced educational settings where digital tools can play a pivotal role in leveling the academic playing field.
4.5. Institutional Recommendations
Interestingly, the influence of faculty and peers emerged as a key factor in shaping students’ perspectives on GAI. Faculty attitudes, especially within specific disciplines, appeared to influence whether students felt comfortable using AI tools or discouraged from doing so, suggesting that faculty guidance plays a crucial role in students’ adoption and responsible use of GAI. Positive faculty attitudes toward AI, particularly endorsement of AI for supporting writing, research, and idea generation, may normalize its responsible use and provide students with clear guidelines on ethical engagement with these tools.
These findings suggest that while GAI is becoming a key component of academic workflows, there is significant room for institutions to expand AI literacy programs and ensure students are equipped to use these tools effectively, ethically, and across a wider range of tasks. The heavy reliance on informal networks for learning about AI highlights a gap in formal institutional outreach, indicating that universities should take a more proactive role in guiding students toward responsible AI use and helping them explore its broader potential in academic settings.
In addition to improving AI literacy, institutions should encourage students to expand their use of AI beyond routine tasks like organizing ideas and summarizing texts. There is clear potential for deeper integration in areas such as content creation, technical applications, and complex problem-solving, which are currently underutilized. Curriculum enhancements that promote creative and technical uses of AI, especially in fields like engineering, design, and business, could help students realize the full range of capabilities these tools offer.
Concerns regarding accuracy, ethics, and technical expertise further underscore the importance of providing students with clear guidelines on responsible AI use (Bond et al., 2024). Addressing these concerns will require institutions to implement formal frameworks that emphasize the ethical implications of AI, such as bias and reliability, while also building students’ technical confidence in navigating these tools. By equipping students with the necessary skills and knowledge, universities can empower them to leverage AI for both practical and innovative academic applications.
Moreover, the preference for AI tools that prioritize functionality and academic rigor suggests that developers should focus on creating reliable, data-driven tools that align with students’ academic needs. While features aimed at creativity or entertainment may appeal to some, the primary focus should be on supporting academic performance, particularly through logical arguments, clear responses, and access to relevant scientific data. As AI tools continue to evolve, developers must strike a balance between innovation and academic utility to meet the demands of a student body that increasingly relies on AI for its academic success.
The widespread use of portable devices such as smartphones and laptops points to the importance of ensuring that AI tools are accessible across a variety of platforms. Institutions should also be mindful of equity issues related to device access, as students without the necessary technology may find themselves at a disadvantage. Supporting access to AI tools across diverse devices and promoting equitable access to technology are essential for ensuring that all students can benefit from these advancements.
The findings underscore the need for a balanced and intentional approach to integrating GAI tools into educational settings. While students value the efficiency and convenience of GAI, their concerns around ethical usage and skill development point to a critical awareness of AI’s limitations. This suggests an opportunity for educational institutions to develop frameworks that support responsible AI usage, emphasizing AI as a supplement to, rather than a replacement for, critical thinking and independent learning.
To address students’ concerns, faculty and administrators might consider implementing explicit guidelines on GAI use in coursework, potentially incorporating discussions on the ethical, cognitive, and practical dimensions of AI into curricula. For instance, faculty could provide case studies or practical exercises that demonstrate both the benefits and limitations of GAI tools, encouraging students to reflect on AI’s role within their own academic and professional development. In doing so, educators could reinforce the message that while GAI can enhance productivity, it should be used mindfully to avoid compromising academic integrity or personal skill growth.
Additionally, the role of faculty attitudes, as highlighted in this study, suggests that instructor perspectives can significantly shape students’ approaches to GAI. Professional development initiatives focused on equipping faculty with strategies to guide responsible AI usage could create a more unified, supportive environment. Faculty workshops on AI could help educators across disciplines better understand GAI’s capabilities and limitations, enabling them to provide consistent, nuanced guidance that aligns with institutional values around academic integrity and personal development.
The integration of GAI into higher education presents transformative opportunities for knowledge management, instructional innovation, and collaborative learning (Walczak & Cellary, 2023). AI-empowered tools are being leveraged to support at-risk students, enhancing personalized learning experiences and potentially increasing retention and graduation rates (Eliyahu, 2023). In line with sustainable development goals for higher education, knowledge codification, storage, and generation via AI have shown positive impacts on institutional sustainability (Budur et al., 2024). However, to fully realize these benefits, institutions must address the digital skills gap, particularly in regions like Latin America, by investing in infrastructure, fostering digital literacy, and promoting partnerships to bridge accessibility challenges (Orellana-Rodriguez, 2024). A comprehensive framework for responsible AI use is critical to ensuring ethical integration into academic curricula. The AI-enabled Collaborative Learning Ecosystems (AI-CLE) model proposed by Baskara (2024) offers one such approach, combining socio-constructivist and connectivist theories to foster a dynamic, collaborative educational ecosystem. Moreover, organizational ecology highlights that the emergence and sustainability of AI-driven educational practices depend on supportive social, economic, and political conditions, as well as institutional commitment to fostering a community of inquiry among educators, policymakers, and technical developers (Yixin et al., 2024; Eke, 2023). This multi-stakeholder effort toward responsible AI integration can redefine the landscape of higher education, helping institutions to balance technological advancements with ethical, collaborative, and sustainable educational practices.
4.6. Evolution of GAI Explorations in Higher Education
Future research on GAI in higher education should address the complexities of organizational culture and structure, examining how these internal factors influence AI integration across diverse institutional environments (Batista et al., 2024; Batta, 2024). As GAI reshapes educational frameworks, understanding the impact on assessment practices, institutional policies, and academic integrity will be essential for navigating the risks and potential of this technology (Batista et al., 2024). Additionally, studies could investigate the ways higher education institutions are adapting outdated educational models to encompass AI-driven learning, potentially recalibrating traditional tools and definitions to better align with modern technological demands (Bastani, 2024).
Exploring AI’s implications for equity is critical, especially in regions like the Global South, where educational technology infrastructure varies significantly. Investigating how AI-based tools can address or exacerbate disparities in accessibility, fairness, and representation could offer pathways toward a more equitable global education system. Ethical considerations and the development of AI policies will play a central role in ensuring that AI applications uphold standards of fairness, particularly in mitigating biases in AI-generated assessments that disproportionately impact vulnerable student populations, including international students (Albright et al., 2023; Farrelly & Baker, 2023; Kurtz et al., 2024).
There is also a need for longitudinal studies that assess GAI’s influence on core teaching and learning processes, from developing workforce-ready skills to fostering critical thinking (Bowen & Watson, 2024). Scenario analysis frameworks, as suggested by Krause et al. (2024), could offer valuable insights into diverse outcomes based on institutional and student use of AI tools, balancing technological innovation with the preservation of essential cognitive skills such as critical thinking and problem-solving. Future studies should also address curriculum reform, educator upskilling, and the development of comprehensive AI literacy programs. These initiatives would prepare both students and faculty to adapt to AI’s pervasive influence, enabling higher education to leverage GAI’s advantages—such as accessibility and tailored learning—while avoiding pitfalls like over-reliance and misinformation (Kurtz et al., 2024).
Advancing the responsible integration of GAI will require a multi-stakeholder approach, including policymakers, educators, and AI developers, to create resilient and ethical educational ecosystems (Gottschalk & Weise, 2023; Krause et al., 2024). This will be particularly vital as institutions seek to harness the transformative potential of GAI in ways that are inclusive, sustainable, and aligned with the evolving needs of a global educational community (Eden et al., 2024; Kshetri & Voas, 2024).