Article

Generative AI as a Cognitive Co-Pilot in English Language Learning in Higher Education

1. Faculty of Languages and Arts, Universitas Negeri Padang, Padang 25131, Indonesia
2. English Education Postgraduate Program, Education Faculty, Bengkulu University, Bengkulu 38119, Indonesia
3. Research Center for Language Teaching and Learning, School of Languages and General Education, Walailak University, Tha Sala 80160, Thailand
4. Language Pedagogy Study Program, Faculty of Languages and Arts, Universitas Negeri Padang, Padang 25131, Indonesia
5. English Education Study Program, Bengkulu University, Bengkulu 38119, Indonesia
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 686; https://doi.org/10.3390/educsci15060686
Submission received: 12 February 2025 / Revised: 23 May 2025 / Accepted: 29 May 2025 / Published: 1 June 2025

Abstract

Despite the global integration of generative artificial intelligence (GenAI) tools in higher education, limited research exists on how demographic factors such as gender and academic level shape their adoption and usage, particularly in language learning contexts outside Western settings. This study aimed to fill this gap by examining the usage patterns, satisfaction levels, and acceptance factors of GenAI tools among English major students in Indonesian higher education. Employing a mixed-methods approach, the research collected data from 277 students using surveys and structured interviews to gauge both quantitative and qualitative aspects of GenAI tool utilization. The results identify ChatGPT, Google Translate, and Grammarly as the most utilized tools for writing assistance, language learning, and research tasks, with consistent satisfaction across demographics. Performance expectancy emerged as the most influential acceptance factor, followed by effort expectancy and facilitating conditions, while social influence played a moderate role. Qualitative findings reveal that students rely on GenAI for grammar refinement, translation accuracy, content exploration, and idea generation, reflecting critical and reflective engagement. Nonetheless, concerns about overreliance and ethical implications accentuate the need for balanced integration. These findings inform tailored educational strategies, emphasizing ethical use and fostering critical thinking in GenAI adoption for English language education.

1. Introduction

As higher education worldwide adapts to technological advancements, the integration of generative artificial intelligence (GenAI) tools, such as ChatGPT 4o, Grammarly, and QuillBot, into English language learning represents a transformative shift, offering innovative ways to enhance linguistic skills and creativity (Al-khresheh, 2024; Chan & Hu, 2023). GenAI applications provide not only dynamic writing support and instant feedback but also facilitate complex linguistic analyses that extend beyond traditional educational resources (Chiu, 2024; C. Wang, 2024). In English education, these tools have evolved from supplementary aids to essential components of academic workflows, highlighting the increasing value of AI-powered language applications in higher education. Literature reviews have shown a rise in AI integration to enhance learners’ affective factors, language skills, and overall learning outcomes (AlTwijri & Alghizzi, 2024), while GenAI applications in English as a Foreign Language (EFL) education have proven effective in improving speaking, writing, reading, and vocabulary acquisition (Alshumaimeri & Alshememry, 2024). Empirical studies further support these findings; a study analyzing qualitative responses from university student surveys in Sweden (Ou et al., 2024) and a mixed-methods inquiry in Thailand revealed that students generally have positive perceptions of AI tools, viewing them as beneficial for language development and academic performance (Waluyo & Kusumastuti, 2024).
Although the integration of GenAI in education continues to grow, the existing literature rarely addresses how demographic factors such as gender and academic level influence students’ adoption and use of these tools, particularly in English language learning. Demographic characteristics are important to consider because they shape individual experiences, perceptions, and motivations toward educational technology. For example, despite widespread awareness of GenAI applications, female students and humanities majors tend to exhibit more negative attitudes toward them (Stöhr et al., 2024). Gender-specific priorities in technology adoption further highlight this divergence, with male students favoring compatibility, ease of use, and observability, while female students emphasize ease of use, compatibility, relative advantage, and trialability (Raman et al., 2024). Academic level and field of study also impact AI receptiveness, with technology and engineering students generally displaying more positive perspectives (Stöhr et al., 2024). However, targeted analyses of how these factors influence generative AI adoption in language learning remain limited, restricting insights into how varying acceptance patterns—such as those between genders or class levels—could inform more tailored support and resource allocation (Zhang et al., 2023). This oversight represents a critical gap in the literature, calling for more nuanced investigations that link technology adoption to learner diversity and educational equity. The present study seeks to fill this gap by examining how gender and academic year shape students’ engagement with GenAI tools in Indonesian English language education, offering demographic-specific insights that can inform a more inclusive and targeted digital pedagogy.
Specifically, the present study aims to analyze how English major students in Indonesia use generative AI tools as cognitive co-pilots or learning aids, focusing on both the applications employed and their usage across various academic tasks. In this study, we define “cognitive co-pilot” as a generative AI tool that supports and enhances learners’ cognitive processes—such as comprehension, idea generation, revision, and self-regulation—while preserving students’ active engagement in meaning-making and decision-making. Rather than replacing human thinking, a cognitive co-pilot facilitates reflective interaction and scaffolds academic performance, similar to a collaborative learning partner. Using a mixed-methods approach, this research will combine quantitative data on usage patterns and satisfaction levels with qualitative insights into students’ perceptions and experiences, providing a comprehensive understanding of AI’s role in their academic journeys. This study will also explore the impact of gender and year of study on AI tool usage and satisfaction, offering demographic-specific insights to inform more nuanced educational strategies in Indonesian universities. Nevertheless, it is important to note that although this study is attentive to demographic factors such as gender and academic year, these variables are not examined in isolation. Rather, they serve as important lenses for interpreting emergent usage trends, satisfaction, and acceptance of GenAI tools. The primary emphasis of the analysis is on understanding the functional and reflective use of AI technologies in academic contexts, and how these patterns are influenced—but not wholly determined—by demographic variation. This framing allows this study to offer both generalizable insights into GenAI-supported learning and context-specific implications tied to learner diversity. 
By situating the investigation within the unique context of Indonesian higher education, this research will fill a significant gap in the literature, extending the understanding of generative AI’s educational impact beyond Western-centric narratives.
The following research questions will guide this study:
  • Which generative AI applications are English major students using in Indonesian higher education?
  • For what types of academic tasks do English major students primarily use generative AI?
  • How satisfied are students with the generative AI applications they use in their studies?
  • What factors influence students’ acceptance of generative AI tools (performance expectancy, effort expectancy, facilitating conditions, and social influence)?
  • How do students utilize generative AI tools as cognitive co-pilots in their English language studies?
The research questions guiding this study were developed through a comprehensive synthesis of prior empirical and theoretical work on generative AI in education, particularly the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003) and its extensions in recent GenAI studies (e.g., Habibi et al., 2023; Jang, 2024; Waluyo & Kusumastuti, 2024). They were designed to explore not only the frequency and type of GenAI usage, but also learners’ satisfaction, perceived utility, and the sociotechnical factors influencing acceptance. Moreover, the final question reflects a shift in conceptual framing from mere usage to cognitive partnership, drawing on the notion of AI as a “co-pilot” in language learning (Chan & Hu, 2023; Du & Alm, 2024). A detailed synthesis of these empirical and conceptual foundations is provided in the following literature review section, which establishes the rationale and theoretical grounding for each research question. The combination of usage patterns, task types, satisfaction, adoption factors, and reflective learning practice ensures a comprehensive and context-sensitive inquiry into GenAI’s role in Indonesian higher education.

2. Literature Review

The worldwide spread of generative AI in education has produced a burgeoning body of research on its effects on language acquisition. Early scholarship was largely North American in origin, but recent work has increasingly been drawn from a wide range of settings worldwide, providing rich descriptions of how generative AI technology is utilized across multiple cultural and institutional environments. For example, empirical studies in Iran (Fathi et al., 2024), China (An et al., 2023; Y. Wang & Zhang, 2023; Xu et al., 2024), Vietnam (Vo & Nguyen, 2024), Thailand (Waluyo & Kusumastuti, 2024), Indonesia (Habibi et al., 2023; Rosmayanti et al., 2022), South Korea (Jang, 2024), and Hong Kong (Chan & Hu, 2023) explored students’ use of AI tools in EFL contexts. Evidence from New Zealand (Du & Alm, 2024), Sweden (Ou et al., 2024), and Poland (Belda-Medina & Calvo-Ferrer, 2022; Strzelecki, 2024) also confirms the extent of research on GenAI adoption and student attitudes within higher education. Such expanding international research provides a necessary background for analyzing how institutional and sociocultural factors shape student use behavior, attitudes, and learning outcomes. The current study builds on these varied perspectives, situating its inquiry within the relatively underexplored field of Indonesian English language instruction.

2.1. Generative AI Applications in Student Language Learning

Generative AI applications are playing an increasingly central role in language learning, offering support across a diverse range of skills. C. Wang (2024) highlights that students leverage GenAI for various writing tasks, such as brainstorming, organizing ideas, and refining both global and local writing issues. Nonetheless, Lee et al. (2024) caution that although GenAI can bolster writing skills, excessive reliance might impede the learning process. Applications also extend to speaking practice, exemplified by AI-powered chatbots such as Call Annie. Fathi et al. (2024) conducted empirical research in Iran, demonstrating that AI-chatbot-mediated activities significantly enhance EFL learners’ speaking skills—encompassing fluency, lexicon, grammatical range, and pronunciation—and increase their willingness to communicate. Learners reported positive attitudes toward AI-mediated instruction, indicating its effectiveness in improving interactive speaking abilities. In the areas of reading and listening, McCarthy and Yan (2024) and Aryadoust et al. (2024) show that generative AI supports personalized, constructive learning in reading comprehension and enables the creation of listening tests suited to various proficiency levels. Despite these advancements, Waluyo and Kusumastuti (2024) note that although students in Thailand report enhanced efficiency, engagement, and linguistic confidence due to GenAI application use in EFL settings, concerns persist regarding an overreliance on AI and the need for its critical use.
The efficacy of generative AI in enhancing language learning is widely recognized, but its integration into language learning instruction as a cognitive co-pilot or learning aid remains complex. Chan and Hu (2023) surveyed 399 students in Hong Kong, revealing generally positive attitudes toward GenAI’s potential for personalized learning support. Correspondingly, Du and Alm (2024) found that EAP students in New Zealand appreciate GenAI’s flexibility, personalized feedback, and the safe practice space it offers, which foster students’ autonomy and competence. Nevertheless, its impact on fulfilling relatedness needs varies; some students feel a sense of companionship, while others are concerned about diminished human interaction. Vo and Nguyen (2024) delved deeper into this dimension in Vietnam, finding that students perceive GenAI as easy to use and useful, but their opinions on its overall value vary. They recommend a balanced approach, integrating GenAI use with human interaction to preserve the irreplaceable elements of human teacher interactivity and empathy in effective language learning. Such findings accentuate the need for careful monitoring and adaptive teaching strategies to harness GenAI’s benefits while mitigating potential drawbacks in educational environments.
Furthermore, recent studies on generative AI and learning have mostly been concerned with technology uptake and user experience, but they also need to be grounded in pedagogical frameworks that account for learners’ cognitive and emotional interaction with AI in learning environments. According to Self-Determination Theory (Deci & Ryan, 2008), generative AI tools such as ChatGPT can support autonomy through self-directed learning, competence through individualized scaffolding and feedback, and relatedness through conversational interaction (Du & Alm, 2024; Chiu, 2024). Yet recent research offers a more nuanced account: whereas some students perceive AI as increasing social presence and connectedness, others worry that it reduces human interaction, most notably in situations where collaborative learning or affective involvement is emphasized (Du & Alm, 2024; Xie et al., 2024). These cross-pressures call for careful consideration of how AI technologies enable, and may also limit, key motivational aspects of language learning.
Beyond motivation, cognitive engagement theory (Greene & Miller, 1996) and the ICAP (Interactive, Constructive, Active, and Passive) framework (Chi & Wylie, 2014) help explain how students use AI to engage in higher-order learning processes. The ICAP model holds that higher levels of engagement—especially constructive and interactive—are linked with better learning. In AI-supported writing, students exercise metacognitive skills such as planning, monitoring, and evaluation (Yao et al., 2025). Students typically read through, revise, and incorporate AI suggestions into their own writing, demonstrating reflective and active processing (Chan & Hu, 2023; Yang et al., 2024). This pattern aligns with self-regulated and metacognitive learning theories, which stress students’ agency in directing their cognitive resources toward learning tasks (Lai, 2024). Overall, these educational theories suggest that generative AI applications should be viewed not merely as assistive technologies but as motivational and cognitive mediators that shape how language learners study, exercise autonomy, and pursue productive, self-initiated learning.

2.2. Applications of AI in Academic Tasks

Generative AI applications have become foundational in academia, enriching tasks such as writing, research ideation, and creative image generation. Law (2024) demonstrates that GenAI applications are widely used among students to enhance academic efficiency, reflecting a growing shift toward AI-supported learning. Yawson (2024) further explores AI’s role in text generation and proposes a framework for responsible adoption, emphasizing ethical considerations essential for maintaining academic integrity. Beyond writing, AI applications extend into course development, language learning, and self-directed study, allowing for personalized, flexible educational experiences. Kshetri (2023) notes that AI tools like interactive chatbots enhance language learning by supporting vocabulary acquisition and conversational practice, while Preiksaitis and Rose (2023) highlight AI’s role in empowering self-directed learning through instant feedback and independent study support. Specialized applications such as ChatGPT, Grammarly, and QuillBot play a critical role in writing processes, from brainstorming to revising, supporting the development of coherent, well-structured academic work (Barrett & Pack, 2023). However, the ease of access to AI-generated content presents challenges, particularly regarding academic integrity, as students must cultivate critical evaluation skills to use these tools responsibly (Boscardin et al., 2024). Watermeyer et al. (2024) argue for a balanced approach to AI integration in academia, stressing that thoughtful strategies are essential to prevent AI from undermining traditional academic roles and professional standards. Such a strategic, ethically guided integration can harness AI’s benefits while aligning its use with the educational values essential to meaningful learning outcomes.

2.3. Student Satisfaction with Generative AI Applications

The ongoing use and adoption of generative AI applications in education hinges on student satisfaction, but the current literature often fails to adequately explore the multifaceted factors shaping this satisfaction. Even though students increasingly value generative AI tools, such as ChatGPT and Grammarly, for their ability to facilitate tasks like writing, brainstorming, and research support (Chan & Hu, 2023), their satisfaction requires a complex balance of functionality, emotional engagement, and ethical considerations. Students value AI’s ability to enhance efficiency and flexibility, but they temper their acceptance of AI due to concerns about data privacy, content accuracy, and academic integrity (Barrett & Pack, 2023). Research further indicates that, although AI can assist in task-specific functions, it often falls short in nurturing essential creative and critical thinking skills, which are foundational to academic and professional growth (Spector & Ma, 2019). Moreover, demographic factors such as gender and academic focus also shape student attitudes, with findings suggesting that female students and humanities majors exhibit a more critical stance towards generative AI due to perceived misalignments with educational values, encompassing ethical integrity and critical analysis (Stöhr et al., 2024; Raman et al., 2024). Belda-Medina and Calvo-Ferrer (2022) examined chatbots as AI conversational partners in language learning among Polish and Spanish university students, finding no significant effect of gender or educational setting on satisfaction with linguistic capabilities; however, female participants exhibited greater sensitivity to inclusive design and concerns over gender stereotyping. Similarly, Xu et al. (2024) analyzed Chinese undergraduates and postgraduates, concluding that while gender did not significantly influence perceptions of ChatGPT’s reliability or safety, grade level and academic major shaped evaluations, with distinct views on reliability, privacy, and future potential across disciplines.

2.4. Factors Influencing Acceptance of Generative AI

Recent research on the adoption of generative AI (GenAI) in higher education frequently utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) framework (Venkatesh et al., 2003), identifying performance expectancy, effort expectancy, facilitating conditions, and social influence as primary factors influencing students’ intentions to engage with GenAI tools. In Indonesia, Habibi et al. (2023) highlighted the significance of facilitating conditions and behavioral intention in shaping the acceptance of ChatGPT, emphasizing the importance of institutional support and student readiness to adopt new technologies. In a study among South Korean business students, Jang (2024) extended the UTAUT model by incorporating AI literacy as an additional variable, revealing that performance expectancy was the most influential factor in determining students’ intentions, while effort expectancy had little impact. Strzelecki (2024), in a Polish study, modified the UTAUT2 model to include habit, discovering that behavioral intention had the greatest effect on actual usage behavior, with performance expectancy playing a secondary role. Further, in China, Y. Wang and Zhang (2023) incorporated elements from both the UTAUT2 model and the Technology Readiness Index, along with the concept of trait curiosity, and found that optimism, creativity, and trait curiosity positively influenced the intention to use GenAI, while performance expectancy did not exhibit a significant effect. In the UK and Nepal, Budhathoki et al. (2024) observed that performance expectancy, effort expectancy, and social influence were significant predictors of adoption, with social influence notably affecting both contexts.
Although significant research has focused on the factors influencing GenAI adoption in higher education, there is a noticeable gap in studies specifically addressing English major students, despite the profound impact generative AI has had on the field. Some studies, such as that conducted by Rosmayanti et al. (2022), found that performance expectancy, effort expectancy, social influence, and facilitating conditions played key roles in shaping pharmacy students’ intentions to use technology in English language learning. In Thailand, Waluyo and Kusumastuti (2024) found that English as a Foreign Language (EFL) students were very open to GenAI tools, especially when it came to performance expectations, effort expectations, and facilitating conditions. Yet, social influence was not as important. Furthermore, the study found no significant differences in GenAI usage between high- and low-performing students. In China, An et al. (2023) demonstrated that performance expectancy and cultural interest were strong predictors of students’ behavioral intentions to use AI-assisted language learning (AILL), with social influence affecting only junior high school students. These findings emphasize the importance of contextual factors, particularly in Southeast Asia, where cultural attitudes and social dynamics toward technology adoption may differ markedly from Western settings. In particular, the Indonesian educational context, where the perceived academic benefits and broader social acceptance of technological innovations could influence generative AI usage, requires further investigation to better understand the factors driving or hindering the adoption of GenAI within this cultural milieu.

2.5. Research Gaps and the Need for a Context-Specific Examination

Although generative AI has been increasingly adopted in education for tasks such as writing, speaking, and reading (Aryadoust et al., 2024; Fathi et al., 2024; C. Wang, 2024), few studies focus specifically on its use within English language learning, especially in multilingual, non-Western contexts such as Indonesia. The literature often overlooks how AI tools are applied to particular academic tasks and how satisfaction with these tools relates to tangible educational outcomes (Barrett & Pack, 2023; Chan & Hu, 2023; Waluyo & Kusumastuti, 2024). Moreover, demographic factors such as gender and academic seniority are underexplored in GenAI adoption research, despite evidence showing that these variables shape attitudes and experiences with technology (Stöhr et al., 2024; Zhang et al., 2023; Rosmayanti et al., 2022).
Furthermore, while acceptance frameworks such as UTAUT have been widely used to assess GenAI adoption in higher education (Habibi et al., 2023; Venkatesh et al., 2003), their application to English major students remains limited. Most studies have focused on general populations or STEM disciplines, neglecting the pedagogical and cultural nuances of language education. In Southeast Asia, factors such as institutional support, collectivist learning norms, and unequal access to resources complicate the adoption process (Waluyo & Kusumastuti, 2024; Vo & Nguyen, 2024; Danler et al., 2024). Given these gaps, this study offers a context-specific investigation of GenAI use among English major students in Indonesian higher education, addressing tool usage, academic tasks, satisfaction, acceptance factors, and students’ reflective engagement with AI as cognitive co-pilots.

3. Methods

As seen in Figure 1, the present study addressed the research gaps through a mixed-methods investigation (Creswell, 1999) of generative AI as cognitive co-pilots or learning aids among English major students in Indonesia. By examining the types of AI tools used, the specific academic tasks they supported, student satisfaction levels, and factors influencing acceptance, this research aimed to provide a comprehensive understanding of AI’s role in language education. The inclusion of demographic analyses, focusing on variations by gender and academic year, offered further insights into how different student groups engaged with AI. Exploring acceptance factors—such as performance expectancy, effort expectancy, facilitating conditions, and social influence—within the context of Indonesian higher education contributed culturally relevant insights into technology adoption, enriching the existing literature.

3.1. Challenges in English Language Learning in Indonesia’s Higher Education

In Indonesia, English major students are typically multilingual, speaking Indonesian (Bahasa Indonesia) alongside various regional and ethnic languages, reflecting the country’s linguistic diversity, with over 700 languages spoken across the archipelago (Zein, 2019). Although proficiency in Indonesian is crucial for academic and professional communication, local languages remain central to daily life and cultural identity. This multilingual environment presents challenges for students, particularly English majors, who must navigate between their native languages, Indonesian, and English in academic settings (Lie, 2017). The need to engage with complex English-language materials and develop skills in critical reading and writing in a second language further complicates this process, often within a context of inconsistent support for English proficiency. Generative AI tools, offering grammar correction, vocabulary enhancement, and contextual understanding, hold potential to bridge linguistic gaps where traditional resources fall short. English proficiency is highly valued for academic and professional success, but access to advanced English resources varies widely across institutions, with generative AI potentially leveling the playing field by providing immediate, accessible language support (Marzuki et al., 2023).
Indonesian English majors, navigating both linguistic challenges and the demands of mastering reading, writing, and oral communication, offer a unique demographic for investigating the impact of generative AI on language learning. Indonesia’s collectivist culture likely influences students’ perception of AI, viewing it not only as an individual aid but as a shared resource for collaborative academic experiences (Darwin et al., 2024). Investigating AI adoption and acceptance among English majors can provide insights into how students in collectivist cultures integrate autonomous learning technologies within collaborative frameworks, informing culturally responsive technology integration. With limited access to specialized English language resources in Indonesian institutions, generative AI may serve as a vital supplement, compensating for resource constraints (Rosmayanti et al., 2022). Unlike students in English-speaking countries, Indonesian students often rely on supplementary online tools or generative AI to address gaps in institutional resources, raising questions about how AI fits into academic routines in resource-limited contexts (Williyan et al., 2024).

3.2. Research Participants

The sample comprised 277 English major students, with a majority aged 21 (85 students, 30.69%) and 20 (84 students, 30.32%). Other significant age groups included 19 years (47 students, 16.97%), 22 years (33 students, 11.91%), and 18 years (13 students, 4.69%), among others. In terms of academic year, the sample was nearly evenly distributed between underclassmen (149 students, 53.79%)—comprising first- and second-year students—and upperclassmen (128 students, 46.21%), representing both early and later stages of their academic careers. The dataset included a wide range of Grade Point Average (GPA) scores, spanning 139 different values, with the highest GPA being 4.0 and the lowest 2.84, reflecting a broad spectrum of academic performance.
Regarding institutional affiliation, the largest group of participants attended the State University of Padang (91 students, 32.85%), followed by the University of Bengkulu (65 students, 23.47%) and the State Islamic University of Sjech M. Djamil Djamil (65 students, 23.47%). Smaller groups were drawn from the State Islamic University of Bukittinggi (21 students, 7.58%), Mahmud Yunus State Islamic University of Batusangkar (12 students, 4.33%), and University of Eka Sakti (10 students, 3.61%). Additional smaller groups came from Pattimura University (9 students, 3.25%) and other regional institutions, including the State Polytechnic of Ambon (2 students, 0.72%) and PGRI University of West Sumatra (2 students, 0.72%). Moreover, the majority of participants rated their English proficiency as average, with 172 participants (62.09%). A smaller proportion rated their proficiency as good (87 participants, 31.41%), while even fewer rated it as poor (15 participants, 5.42%) or very good (3 participants, 1.08%). Table 1 provides detailed data.
This study employed a convenience sampling method (Etikan et al., 2016) based on the accessibility and willingness of students to participate from multiple public and private universities across Indonesia. This approach was appropriate given this study’s exploratory nature and the need to capture diverse perspectives from students with varying access to AI resources. While this method may limit generalizability, the inclusion of participants from eight institutions across different regions helped mitigate sampling bias. Additionally, we acknowledge that institutional policies on AI usage may vary, potentially influencing students’ experiences and perceptions. Although this variable was not directly measured, it is recognized as an important contextual factor, and future studies should consider comparing student responses across institutions with different levels of AI integration support and guidelines.

3.3. Data Collection

3.3.1. Survey Questionnaire

The researchers designed and administered an online survey to English major students in Indonesia, comprising both Likert-scale items and open-ended questions. The survey was distributed through personal and professional group messages, with entirely voluntary participation, and written consent was obtained from all respondents. The Likert-scale items, adapted from Venkatesh et al. (2003) and Yilmaz et al. (2024), ranged from 1 (strongly disagree) to 5 (strongly agree), covering 18 items grouped into four sub-scales: performance expectancy (7 items), effort expectancy (4 items), facilitating conditions (3 items), and social influence (4 items). The open-ended questions included prompts such as: “What generative AI applications have you used for your studies at university?,” “For what types of university-related tasks do you primarily use generative AI apps?,” “How satisfied are you with the generative AI apps you have used?,” and “How do you use generative AI writing applications in your English learning at the university?” To maximize participants’ comprehension, the survey was translated into Indonesian.
To ensure validity, experts in English Language Teaching with experience in technology integration reviewed the survey for face validity. Additionally, Exploratory Factor Analysis (EFA) was conducted following Stapleton’s (1997) guidelines, with KMO and Bartlett’s tests confirming the survey’s structural soundness: χ2 (153) = 3923.97, p < 0.001, with a KMO measure of sampling adequacy at 0.946, which exceeds the 0.50 threshold, indicating excellent suitability for factor analysis. The survey constructs also demonstrated high reliability, as evidenced by Cronbach’s alpha values exceeding 0.80, thereby validating the robustness of the survey instrument.
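The validation statistics reported above (Bartlett’s test of sphericity and the KMO measure of sampling adequacy) can be computed directly from an item-response matrix. The sketch below, which uses a small synthetic six-item dataset rather than the survey data, illustrates the two computations from first principles; the respondent and item counts are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import chi2

# Synthetic Likert-style responses: 200 respondents, 6 items driven by one
# shared latent trait (illustrative only, not the study's data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
X = latent + 0.5 * rng.normal(size=(200, 6))

n, p = X.shape
R = np.corrcoef(X, rowvar=False)

# Bartlett's test of sphericity: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|,
# with df = p(p - 1)/2.
chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
p_value = chi2.sf(chi_sq, df)

# KMO: squared correlations relative to squared correlations plus squared
# partial correlations, taken over the off-diagonal elements.
A = np.linalg.inv(R)
partial = -A / np.sqrt(np.outer(np.diag(A), np.diag(A)))
mask = ~np.eye(p, dtype=bool)
kmo = (R[mask] ** 2).sum() / ((R[mask] ** 2).sum() + (partial[mask] ** 2).sum())

print(f"Bartlett chi2 = {chi_sq:.2f}, p = {p_value:.3g}, KMO = {kmo:.3f}")
```

A KMO above 0.50 (here well above, since the items share one strong factor) and a significant Bartlett result together indicate that the correlation matrix is suitable for factor analysis, matching the criteria applied to the survey instrument.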

3.3.2. Interviews

A total of six students voluntarily participated in the structured interview sessions. To ensure that participants had sufficient time to prepare, the interview protocols were shared with them in advance. The interviews were conducted online, which facilitated transcription and offered participants a flexible platform for discussion. To further ensure the accuracy and reliability of the transcriptions, the researchers utilized online AI transcription tools, which were cross-checked for precision. Each interview lasted between 40 and 45 min. The interview questions were designed to explore the participants’ use of generative AI tools, with the open-ended questions from the surveys serving as a guiding framework. These questions focused on understanding the participants’ experiences, perceptions, and challenges regarding the integration of AI tools into their learning processes. The use of a structured interview format, combined with open-ended questions, allowed for a comprehensive exploration of the students’ insights into the role of generative AI in their academic lives.

3.4. Data Analysis

To address Research Questions 1, 2, and 3, descriptive statistics were employed. This approach allowed for a comprehensive overview of the data, providing insights into the patterns and trends relevant to these questions. Research Question 4 was explored using Exploratory Factor Analysis (EFA) and multiple linear regression analysis. EFA was utilized to identify underlying factors that may explain the relationships between observed variables, while multiple linear regression analysis was conducted to assess the predictive relationships between the independent and dependent variables. For Research Question 5, thematic analysis was employed to identify and analyze patterns within qualitative data. This analysis followed the procedures outlined by Clarke and Braun (2017), which involve familiarizing oneself with the data, generating initial codes, searching for themes, reviewing themes, and finalizing the interpretation. This multi-method approach allowed for a robust examination of both quantitative and qualitative aspects of the research questions, providing a well-rounded understanding of the data.

4. Results

4.1. Patterns of Generative AI Usage Among English Majors in Indonesia

Among all AI applications reported (N = 277), ChatGPT was the most widely used tool, with 239 students (86.28%) indicating they utilized it for academic purposes. The high percentage reflected the tool’s popularity and versatility in supporting English learning. Google Translate followed closely, with 222 students (80.14%) using it, indicating its critical role in helping students overcome language barriers. Grammarly was also extensively used, with 191 students (68.95%) relying on it for grammar checking and writing support. Moreover, QuillBot was used by 135 students (48.74%), indicating a preference for tools that assist with paraphrasing and rephrasing tasks. Duolingo, a language learning platform, was utilized by 128 students (46.21%), highlighting its value for English language practice. Other AI applications showed lower usage rates. For instance, Elsa Speak was used by 29 students (10.5%), while HelloTalk had a usage rate of 5.8% (16 students). Specialized applications, such as DeepL (4 students, 1.4%), Babbel (3 students, 1.1%), and Perplexity (5 students, 1.8%), were used by fewer students. Niche tools such as Bing AI, Character.Ai, Gemini AI, and Mendeley each appeared only once, reflecting minimal uptake of these specialized applications. Figure 2 shows the top five AI applications used by the students.
Table 2 depicts the top five AI applications used by gender. When analyzing AI application usage by gender, several patterns emerged. ChatGPT was popular among both female and male students, with 173 female students (83.98%) and 66 male students (92.96%) reporting usage. Google Translate showed similar high usage, with 165 female students (80.10%) and 57 male students (80.28%) relying on it. Grammarly was used by 146 females (70.87%) and 45 males (63.38%), suggesting a slightly higher preference for this tool among female students. Duolingo also displayed a notable gender difference, with 107 female students (51.94%) and only 21 male students (29.58%) reporting usage, followed by QuillBot, which was used by 99 females (48.06%) and 36 males (50.70%). Lower-frequency applications also revealed some gender preferences. Elsa Speak was used by 23 females (11.17%) and 6 males (8.45%), showing a slightly stronger preference among female students. Other less common applications, such as Babbel and Perplexity, were used by both genders but in small numbers. Some applications, such as AI Asus, Humata, and Lingvist, were used exclusively by female students, while tools such as Bing AI, Poe, DeepL, and Microsoft Bing were used solely by male students.
To examine whether gender significantly influenced the use of different generative AI tools, chi-square tests were conducted on the five most reported applications. The analysis showed that Duolingo usage differed significantly by gender (χ2 = 9.74, p = 0.002), with a higher proportion of female students (51.9%) using the app compared to male students (29.6%). However, no significant gender differences were found for ChatGPT (p = 0.090), Google Translate (p = 1.000), Grammarly (p = 0.304), or QuillBot (p = 0.805). These findings suggest that while overall adoption rates for major AI tools are largely uniform across genders, certain applications such as Duolingo may appeal differently, potentially reflecting gender-based preferences in language learning strategies. Table 3 below displays the results.
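The Duolingo comparison can be reproduced from the counts reported in the text (107 of 206 female students vs. 21 of 71 male students) with a standard 2×2 chi-square test of independence; SciPy applies Yates’ continuity correction by default for 2×2 tables, which matches the reported statistic.

```python
from scipy.stats import chi2_contingency

#                  users      non-users
table = [[107, 206 - 107],   # female students
         [21,   71 - 21]]    # male students

chi2_stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, p = {p:.3f}")   # chi2 = 9.74, p = 0.002
```

The same procedure applied to the other four tools (using their reported counts) yields the non-significant p-values cited above, since their usage proportions differ little by gender.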
As shown in Table 4, ChatGPT and Google Translate emerged as the most widely used AI tools across both underclassmen and upperclassmen. ChatGPT was used by 124 underclassmen (83.22%) and 115 upperclassmen (89.84%), indicating broad adoption across study years. Google Translate followed with 120 underclassmen (80.54%) and 102 upperclassmen (79.69%), reflecting a widespread reliance on this tool for language assistance. Grammarly usage showed a slight decrease among upperclassmen, with 87 students (67.97%) using it compared to 104 underclassmen (69.80%), suggesting that younger students may have relied more heavily on grammar-checking tools. QuillBot usage was nearly balanced, with 69 underclassmen (46.31%) and 66 upperclassmen (51.56%) using it for paraphrasing support. Duolingo usage showed some variation, with 78 underclassmen (52.35%) and only 50 upperclassmen (39.06%) utilizing it, possibly indicating a greater need among newer students for foundational language practice. Less common applications, such as Elsa Speak and HelloTalk, displayed limited adoption across both groups without notable year-based differences. Elsa Speak was used by 18 underclassmen (12.08%) and 11 upperclassmen (8.59%), while HelloTalk saw slightly more usage among underclassmen (9 students, 6.04%) than upperclassmen (7 students, 5.47%). Other tools, including AI Asus, Babbel, Perplexity, and askpdf, were each mentioned by only a few students, with no significant preference based on year of study, suggesting that these specialized applications served limited, specific functions rather than being essential for general academic use.

4.2. Academic Tasks Supported by Generative AI

As indicated in Figure 3 (N = 277), Writing Assistance emerged as the most frequently reported AI application, with 162 students (58.48%) indicating they primarily used generative AI for writing tasks. This finding highlights the significant role of AI tools in supporting students with essay writing, assignments, and other written work. Language Learning was the second most common use, with 149 students (53.79%) relying on AI for language practice and improvement, underscoring the importance of these tools in enhancing English proficiency. Research Tasks ranked third, with 134 students (48.38%) using AI for information gathering and synthesizing findings. Study Support was also a prominent category, with 111 students (40.07%) indicating they used AI to assist in understanding and reviewing study materials. Exam Preparation was another notable application, with 64 students (23.1%) using AI tools to prepare and revise for exams, likely through practice questions and summarization. Tasks with more specialized applications, such as Creative Writing (e.g., writing poems, stories, or scripts), were mentioned by 95 students (34.30%). Additionally, 61 students (22.0%) reported using AI for Communication tasks, which included composing messages or formal responses. A smaller subset of students, 18 (6.5%), used AI for Administrative Tasks such as organizing schedules or generating formal documents.
As depicted in Table 5, when analyzing task usage by gender, Writing Assistance emerged as the most frequently used category, with 56.31% of female students (116) and 64.79% of male students (46) indicating its use. Language Learning was similarly prominent, utilized by 56.31% of females (116) and 46.48% of males (33), demonstrating substantial interest across genders in leveraging AI to enhance language proficiency. Research tasks were also widely used, reported by 49.03% of females (101) and 46.48% of males (33). Study Support followed, with 41.26% of females (85) and 36.62% of males (26) engaging in this category. Creative Tasks, while less commonly used, still showed notable participation, with 35.44% of females (73) and 30.99% of males (22) using AI for creativity-based activities. The results suggest that while usage patterns are similar across genders, males slightly outpace females in Writing Assistance, while females show higher engagement in Language Learning and Study Support tasks.
For the academic year breakdown, as shown in Table 6, Writing Assistance is the most frequently used task among both underclassmen (55.70%, 83 students) and upperclassmen (61.72%, 79 students), highlighting its importance across all levels. Language Learning is also widely utilized, with 55.03% of underclassmen (82 students) and 52.34% of upperclassmen (67 students) relying on AI for language improvement. Research tasks show significant engagement, with 51.68% of underclassmen (77 students) and 44.53% of upperclassmen (57 students) using AI for academic inquiries. Study Support exhibits slightly higher usage among underclassmen (44.97%, 67 students) compared to upperclassmen (34.38%, 44 students). Creative Tasks show somewhat higher participation among upperclassmen (38.28%, 49 students) than underclassmen (30.87%, 46 students). The findings suggest that while Writing Assistance is consistently the most used task across both groups, underclassmen tend to engage more with Research and Study Support, whereas upperclassmen demonstrate higher involvement in Creative Tasks and Writing Assistance.

4.3. Satisfaction with Generative AI Tools

In an examination of student satisfaction with generative AI applications in their studies, descriptive statistics revealed an average satisfaction score of 3.65, a median of 4.0, and a standard deviation of 0.75, indicating a general tendency towards satisfaction. Comparative analysis using t-tests showed no significant differences in satisfaction across genders (t-statistic: −0.02, p-value: 0.98, Cohen’s d: −0.003) or academic years (t-statistic: −0.37, p-value: 0.71, Cohen’s d: −0.044), suggesting that neither demographic variable significantly affects satisfaction levels. These results underline a uniform perception of AI tools among students, regardless of gender or year of study, with very small effect sizes indicating minimal practical significance in the differences observed. The statistical consistency across different groups underscores a broad acceptance and satisfactory experience with AI applications, supporting their continued use and potential expansion in educational settings.
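The group comparisons above pair an independent-samples t-test with a pooled-standard-deviation Cohen’s d. The sketch below illustrates both computations; the two rating vectors are synthetic stand-ins generated from the reported overall mean (3.65) and SD (0.75), with assumed group sizes, and are not the study’s data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical 1-5 satisfaction ratings for two groups (illustrative only).
rng = np.random.default_rng(1)
female = np.clip(np.round(rng.normal(3.65, 0.75, 206)), 1, 5)
male   = np.clip(np.round(rng.normal(3.65, 0.75, 71)),  1, 5)

t_stat, p = ttest_ind(female, male)

# Cohen's d using the pooled standard deviation
n1, n2 = len(female), len(male)
pooled_sd = np.sqrt(((n1 - 1) * female.std(ddof=1) ** 2 +
                     (n2 - 1) * male.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (female.mean() - male.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p:.2f}, d = {d:.3f}")
```

With both groups drawn from the same distribution, as here, the test yields a near-zero d, mirroring the negligible effect sizes (|d| < 0.05) reported for gender and academic year.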

4.4. Factors Influencing Acceptance and Use of Generative AI

Based on the survey results on student acceptance in Table 7, Exploratory Factor Analysis (EFA) was performed using the principal method with varimax rotation to extract four factors, consistent with the theoretical dimensions of a four-factor model. Cronbach’s alpha was calculated for each factor, with α > 0.70 indicating high internal consistency. Eigenvalues and the percentage of variance explained were also examined. The EFA results revealed that performance expectancy (Items 1–7) predominantly loads onto Factor 1, with high negative values (e.g., Item 1: −0.77, Item 2: −0.83), suggesting a cohesive grouping. Effort expectancy (Items 8–11) aligns closely with Factor 1, indicating shared influence on acceptance. Facilitating conditions (Items 12–14) show moderate alignment with Factor 1, though some items (e.g., Item 13) split between Factors 1 and 2. Social influence (Items 15–18) primarily loads onto Factor 2, indicating it is a distinct dimension from performance and effort expectancy. While performance and effort expectancy items cluster closely, facilitating conditions and social influence show more complex loadings. A multiple regression analysis was therefore conducted to quantify each factor’s influence on the overall acceptance score and to estimate effect sizes. Cronbach’s alpha for all factors shows acceptable internal reliability, ranging from 0.76 to 0.89. The eigenvalue for each factor reflects the variance explained, with Factor 1 having the highest eigenvalue (4.56), accounting for 45.6% of the variance. Factor 2, representing social influence, explains an additional 21.3%, while facilitating conditions (Factor 3) and a miscellaneous fourth factor (Factor 4) contribute 12.0% and 7.8%, respectively.
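The internal-consistency figures above follow the standard Cronbach’s alpha formula, α = k/(k − 1) · (1 − Σσ²ᵢ / σ²ₜ). The sketch below implements it for a respondents-by-items matrix; the 4-item sub-scale is synthetic and illustrative, not one of the survey’s sub-scales.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 4-item sub-scale: one latent trait plus item-level noise.
rng = np.random.default_rng(2)
latent = rng.normal(3.5, 0.8, size=(100, 1))
scale = latent + 0.4 * rng.normal(size=(100, 4))

alpha = cronbach_alpha(scale)
print(round(alpha, 2))
```

Perfectly parallel items yield α = 1, and increasing item-level noise lowers the value toward the 0.76–0.89 range reported for the four sub-scales.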
The multiple regression analysis in Table 8 identifies the factors that significantly influence students’ acceptance of generative AI tools, with each predictor contributing uniquely. Performance expectancy is the most influential factor, with a coefficient of 0.438 (F = 10.83, p < 0.001), an effect size of 0.50, and a partial R-squared of 0.33. This indicates that students’ expectations of AI tools enhancing their academic performance account for 33% of the variance in their acceptance. Effort expectancy also plays a significant role, with a coefficient of 0.188 (F = 1.61, p < 0.001), an effect size of 0.22, and a partial R-squared of 0.15, meaning ease of use explains 15% of the variance. Social influence and facilitating conditions both have coefficients of 0.188, with F-statistics of 3.10 and 1.82, effect sizes of 0.21 and 0.20, and partial R-squared values of 0.12 and 0.10, respectively. These results suggest that while supportive conditions and social factors contribute to acceptance, their impact is moderate compared to the perceived performance and ease-of-use benefits.
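A partial R-squared of the kind reported above compares the residual sum of squares of the full model against a model with the predictor of interest removed. The sketch below demonstrates this with ordinary least squares on synthetic data; the coefficient values and variable roles (PE, EE, FC, SI) are illustrative assumptions, not the study’s responses.

```python
import numpy as np

# Synthetic standardized predictor scores for 277 "respondents":
# columns stand in for PE, EE, FC, and SI (illustrative only).
rng = np.random.default_rng(3)
n = 277
X = rng.normal(size=(n, 4))
y = (0.44 * X[:, 0] + 0.19 * X[:, 1] + 0.19 * X[:, 2]
     + 0.19 * X[:, 3] + rng.normal(scale=0.5, size=n))

def sse(design: np.ndarray, target: np.ndarray) -> float:
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

X1 = np.column_stack([np.ones(n), X])        # add intercept column
full = sse(X1, y)
reduced = sse(np.delete(X1, 1, axis=1), y)   # drop the PE column

# Partial R^2: proportion of the reduced model's residual variance
# that the dropped predictor recovers.
partial_r2 = (reduced - full) / reduced
print(f"partial R^2 for the first predictor = {partial_r2:.2f}")
```

Dropping a strong predictor inflates the residual sum of squares sharply, producing a large partial R-squared, which is why performance expectancy dominates the variance decomposition in Table 8.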
Performance expectancy clearly emerged as the strongest predictor of GenAI acceptance, whereas effort expectancy and social influence demonstrated relatively weaker statistical contributions. Nonetheless, their retention in the model is supported by theoretical and practical relevance. Effort expectancy, although showing lower factor loadings, reflects usability concerns that remain critical for sustained engagement with AI tools, especially for less tech-confident users. Correspondingly, social influence, despite its modest variance explanation, captures peer and institutional norms that subtly shape technology adoption in collectivist cultures like Indonesia. Retaining these factors offers a more holistic understanding of the multifaceted dynamics influencing AI acceptance. These results emphasize that even predictors with weaker statistical weight can carry pedagogical and cultural importance, especially in emerging and context-specific technology environments.

4.5. Generative AI as Cognitive Co-Pilots in English Language Studies

Thematic analysis was conducted on the qualitative data collected from the open-ended questions and interviews, and the results are presented in Figure 4.
  • Theme 1: Grammar and Writing Structure
A significant number of students use AI to refine grammar and structure in their academic writing, highlighting its dual role as both a linguistic assistant and quality checker. Tools like Grammarly and ChatGPT are viewed as invaluable for polishing writing, ensuring greater coherence and accuracy. By incorporating AI feedback, students address immediate language needs while also gradually improving their writing skills. The use of AI in enhancing structure and grammar reflects a deliberate effort to uphold academic rigor and linguistic proficiency, positioning AI tools as sources of constructive feedback and self-improvement. Instead of passively accepting corrections, students actively engage with the feedback, identifying areas for growth and refining their understanding of grammatical structures. This reflective use of AI suggests that it serves as a complementary tool in students’ academic journeys, rather than a sole source of linguistic accuracy.
“I always use Grammarly for writing assignments. It points out grammar mistakes and also suggests sentence structures that I might not think of.”
(Student 4)
“AI helps me improve my sentence flow. For instance, Grammarly will suggest rephrasing complex sentences to make them clearer, which has improved my writing a lot.”
(Student 10)
“I use AI to proofread my essays. It’s like having a teacher check my grammar, so I feel more confident submitting my work.”
(Student 15)
“My grammar has weaknesses, so I rely on AI to correct my sentences. But I also try to learn from these corrections to improve on my own.”
(Student 19)
  • Theme 2: Translation and Language Assistance
A prevalent theme in students’ responses reveals frequent reliance on AI applications like Google Translate and Grammarly to address challenges in vocabulary, syntax, and grammar. Particularly underclassmen use these tools to bridge language barriers, enabling them to engage with complex academic texts in English. Many students describe these AI tools as essential for reducing comprehension time and clarifying language points that might otherwise hinder understanding. Students are also aware of AI’s limitations, critically examining AI translations and making adjustments for contextual accuracy. This critical approach demonstrates their recognition that AI-generated translations may lack the nuanced understanding needed for precise interpretation. These practices reflect a sophisticated strategy, where students rely on AI’s speed while remaining mindful of its contextual shortcomings, using it to supplement rather than replace their linguistic judgment.
“Using Google Translate helps me understand difficult sentences in English articles. But I still review its translation to ensure it makes sense.”
(Student 2)
“I often use AI to check my grammar and vocabulary. For example, I’ll translate a word from English to my native language, then double-check if it fits the context.”
(Student 11)
“For writing assignments, I use Grammarly to improve grammar and clarity. However, I know AI can be wrong sometimes, so I adjust the suggestions if they don’t fit my meaning.”
(Student 17)
“I translate complex vocabulary and sentences in academic articles. AI provides a quick understanding, but I compare it with my knowledge to ensure it’s accurate.”
(Student 24)
  • Theme 3: Content Exploration and Example Search
Students, particularly upperclassmen, use AI to explore content deeply and access relevant examples, broadening their understanding and reinforcing academic arguments. By utilizing AI as an instant research resource, students can efficiently access diverse perspectives and examples, enhancing both comprehension and the depth of their assignments. This practical adaptation of AI’s capabilities shows that students view it as an efficient alternative to more time-consuming methods of gathering information. The reliance on AI for content exploration reflects a complex approach to information acquisition, where students combine AI-sourced content with traditional sources like textbooks and academic journals. This comparative method allows students to synthesize and validate AI-generated information against established academic references, illustrating a mature academic mindset that integrates technology to expedite knowledge acquisition while maintaining academic integrity.
“I use AI to get examples of language use in context, like idioms or specific phrases. It helps me understand how they’re used in real conversations.”
(Student 3)
“For assignments, I use AI to find example essays or research summaries on similar topics. It saves me time, and I can cross-check with other sources.”
(Student 12)
“AI provides quick references for my research topics. For instance, it helps me understand different viewpoints in literature, which I might not find easily in the library.”
(Student 20)
“I use AI to gather background information before starting a project. It’s fast and gives a broad overview, but I still go to academic sources for accuracy.”
(Student 25)
  • Theme 4: Idea Generation
Responses indicate that students frequently turn to AI for generating and expanding ideas, especially when encountering creative blocks. Upperclassmen, in particular, use AI to conceptualize complex assignments and presentations, often viewing it as a tool for creating an initial framework to develop further. By using AI as a brainstorming partner, students explore various perspectives and potential directions for their work. A sophisticated relationship between dependence and creative independence emerges: although AI aids in structuring ideas, many students remain aware that personal engagement and creative thought are central to the academic process. This awareness suggests that students see AI-generated ideas as a starting point, not a replacement, allowing AI to initiate thought without replacing their own intellectual agency.
“When I don’t have ideas, I ask ChatGPT to give me a topic outline, and I use it to get started. It’s like a brainstorming partner, but I add my thoughts to make it personal.”
(Student 5)
“For my essays, I often ask AI to give me potential arguments or points I could cover. It helps me develop a clearer structure, especially under tight deadlines.”
(Student 9)
“AI is helpful when I’m out of ideas. For example, I ask it for suggestions on argumentative essay topics, then I expand them based on what I want to write about.”
(Student 13)
“I rely on AI when I’m short on time. It provides basic ideas, which I then develop further to meet the assignment requirements.”
(Student 18)

4.6. Discussion and Implication

The findings confirm that generative AI tools are integral to the academic practices of English major students in Indonesia, with ChatGPT, Google Translate, and Grammarly emerging as the most widely used applications. This aligns with global trends highlighted by C. Wang (2024), who identified these tools as pivotal for supporting both academic writing and language learning. ChatGPT’s flexibility, catering to diverse academic needs from idea generation to drafting and revision, reflects its widespread adoption across disciplines, as noted by Chan and Hu (2023). Similarly, Google Translate continues to play a vital role in overcoming linguistic barriers, particularly in non-English-speaking contexts, as emphasized by Waluyo and Kusumastuti (2024) for EFL learners. Gender-based differences in tool preferences observed in this study align with findings by Stöhr et al. (2024) and Vo and Nguyen (2024), who noted that female students often prioritize tools that enhance grammatical accuracy and foundational language learning, while male students focus on tools for more efficient solutions to complex tasks. This reflects the broader cultural and pedagogical focus on linguistic accuracy and academic performance in non-Western settings (Waluyo & Kusumastuti, 2024; Jang, 2024). Additionally, the progression in AI usage between underclassmen and upperclassmen—where underclassmen focus more on language learning and study support tools, while upperclassmen leverage AI for creative and advanced academic tasks—parallels findings by Aryadoust et al. (2024). This suggests that as students advance academically, they adopt AI tools to meet more complex academic needs, in line with Xu et al.’s (2024) observation of AI’s evolving role in supporting autonomy and critical thinking.
Generative AI tools are primarily used for writing assistance, echoing findings by C. Wang (2024) and Barrett and Pack (2023), who emphasized AI’s role in addressing both global and local writing challenges. Tools like Grammarly and ChatGPT, used to refine grammar, coherence, and structure, align with the reflective practices observed by Waluyo and Kusumastuti (2024), where students actively engage with AI feedback to enhance their writing skills. This highlights generative AI’s dual role as a linguistic assistant and a tool for fostering academic rigor. The use of AI in research and language learning also mirrors findings by McCarthy and Yan (2024), who demonstrated its effectiveness in providing personalized support for vocabulary acquisition and reading comprehension. The balanced use of AI for both foundational and advanced tasks emphasizes its adaptability, as noted by Aryadoust et al. (2024) and Chan and Hu (2023), who observed its ability to meet diverse academic needs across student levels. Notably, the shift from study support among underclassmen to creative tasks among upperclassmen reflects Barrett and Pack’s (2023) findings on AI’s potential to foster innovation and creativity. This progression suggests that students increasingly adopt more sophisticated uses of AI tools as they adjust to their academic demands, aligning with the adaptive learning trajectories discussed by Xu et al. (2024).
The general satisfaction with generative AI tools observed in this study aligns with Chan and Hu’s (2023) findings, which highlighted the positive reception of AI applications among students due to their flexibility and efficiency. However, despite consistent satisfaction levels across demographic groups, concerns about overreliance and ethical implications reflect issues raised by Barrett and Pack (2023) and Cummings et al. (2024). These concerns stress the importance of fostering critical engagement with AI tools to ensure they complement, rather than replace, students’ intellectual efforts. The lack of significant differences in satisfaction based on gender or academic level contrasts with findings by Stöhr et al. (2024), who noted that demographic factors can influence attitudes toward AI tools. This divergence may be attributed to the cultural context of Indonesia, where the emphasis on academic performance and institutional support may lead to a more uniform perception of AI tools (Waluyo & Kusumastuti, 2024). These findings underscore the need for context-specific approaches to integrating AI in education, as highlighted by Vo and Nguyen (2024).
Performance expectancy emerged as the most significant factor influencing AI acceptance, consistent with studies by Habibi et al. (2023) and Strzelecki (2024), who found that students prioritize tools that enhance their academic performance. Effort expectancy and facilitating conditions also played critical roles, highlighting the importance of usability and institutional support, as noted by Jang (2024) and Waluyo and Kusumastuti (2024). These findings suggest that students value the ease of use and practical benefits of AI tools, aligning with the UTAUT framework proposed by Venkatesh et al. (2003). Social influence, while less impactful, remains relevant in shaping students’ attitudes toward adoption of AI. This aligns with Budhathoki et al. (2024), who observed that social and peer dynamics play a moderate role in technology acceptance. However, the lower influence of social factors in this study, compared to others, may reflect cultural differences, as Indonesian students may prioritize individual academic goals over peer perceptions, consistent with the findings of Rosmayanti et al. (2022).
The diverse applications of generative AI in English language learning underscore its potential to address both linguistic and academic needs. Tools like Grammarly and ChatGPT, which are used for grammar and writing structure, align with findings by Barrett and Pack (2023) and Waluyo and Kusumastuti (2024), who highlighted the reflective use of AI for improving writing skills. Students’ active engagement with AI feedback reflects a sophisticated approach to technology, where AI is viewed as a complementary learning tool rather than a substitute for human effort. The role of AI in translation and language assistance, particularly through Google Translate, mirrors the findings of Vo and Nguyen (2024), who emphasized the importance of balancing AI’s speed with critical oversight to ensure contextual accuracy. This illustrates students’ awareness of AI’s limitations, as also noted by Y. Wang and Zhang (2023), who observed that users often validate AI outputs against traditional academic sources. Regarding content exploration and idea generation, the findings echo McCarthy and Yan’s (2024) observation of AI’s efficiency in supporting deep academic inquiry. The use of AI for brainstorming and expanding ideas reflects its role in fostering intellectual creativity, as discussed by Barrett and Pack (2023). This adaptive use of AI tools demonstrates students’ ability to integrate AI-generated insights with traditional research methods, supporting the balanced approach advocated by Watermeyer et al. (2024).
While this research indicates the potential of GenAI as an intellectual co-pilot for scholarly writing, language acquisition, and content searching, the danger of overreliance and diminished critical thinking warrants attention. Although a number of students reported actively revising AI-generated content and critically verifying translations, the qualitative data also revealed instances where AI outputs were treated as authoritative sources of fact, especially under time pressure. This illustrates the tension between productivity gains and intellectual autonomy. Without guidance, students may come to rely on AI at the expense of higher-level thinking, particularly in assessing bias, contextual relevance, or the appropriateness of AI-generated content. These concerns align with Malik et al. (2025) and Watermeyer et al. (2024), who cautioned that unchecked use of AI can destabilize core academic skills and moral judgment. Hence, GenAI integration should be accompanied by explicit instruction in critical thinking and ethical use, so that students remain active, reflective learners rather than passive recipients of machine support.
Thematic analysis results provided richer insight into students’ interactions with AI, complementing quantitative findings on satisfaction and usage patterns. Students’ reflective use of tools like Grammarly and ChatGPT for grammar refinement, content exploration, and idea generation mirrored the statistical prominence of writing assistance and research support tasks. This triangulation confirms that students are not merely passive users but demonstrate strategic engagement and critical filtering—especially when revising text or cross-checking AI translations. Moreover, students’ concerns about AI limitations and their adjustment of AI output align with the modest influence of effort expectancy and social influence in the regression results. These links illustrate the value of integrating qualitative narratives into the quantitative patterns, offering a more comprehensive and nuanced view of learner-AI interactions. The consistency between what students say (qualitative themes) and what they do (quantitative data) reinforces the credibility of findings and the explanatory strength of the mixed-methods design.
As for the theoretical implications, this study extends the UTAUT model by contextualizing its constructs—particularly performance expectancy and effort expectancy—within the domain of English language learning in a Southeast Asian context. The findings affirm the predictive strength of performance expectancy, in line with Strzelecki (2024) and Habibi et al. (2023), but also underscore the unique cultural dynamics of Indonesian learners, where social influence, though modest, still reflects collectivist values. Moreover, students’ reflective engagement with AI aligns with cognitive engagement theory (Greene & Miller, 1996) and the ICAP framework (Chi & Wylie, 2014), emphasizing the pedagogical potential of AI tools as catalysts for constructive and interactive learning. The use of AI for idea generation, content exploration, and revision suggests a model of AI not just as a tool, but as a dynamic agent in self-regulated and metacognitive learning.
Practically, these findings have direct implications for higher education institutions seeking to integrate generative AI into language curricula. First, this study highlights the need for structured training programs that help students critically evaluate AI output and use tools ethically. Second, institutions should consider adopting flexible AI integration policies that support both innovation and academic integrity. Faculty development programs should also be introduced to ensure instructors are equipped to guide students in reflective and responsible AI use. Lastly, AI tools should be positioned not as replacements for human instruction, but as pedagogical partners that complement traditional methods, supporting differentiated and inclusive learning experiences.

5. Conclusions, Limitations, and Recommendations

Generative AI technologies are emerging as transformative cognitive co-pilots for Indonesian English major students, particularly in supporting academic writing, language acquisition, and creative tasks, consistent with global trends in AI-facilitated learning. The results indicate strong acceptance and satisfaction, yet the reliance on self-report measures limits the ability to capture actual behavior and the long-term impact of AI use on learning outcomes. In addition, although participants were recruited from a representative sample of Indonesian institutions, the research remains context-specific, and the cultural and institutional characteristics of the Indonesian setting may limit the transferability of the results to other education systems. This calls for context-sensitive interpretation and follow-up research in different settings. Future studies employing longitudinal and observational designs would offer clearer insight into the long-term effects of generative AI tools on academic achievement and skill development. Deliberate inclusion of GenAI in the curriculum requires systematic training in appropriate use, critical thinking, and measured reliance on technology. Strong regulations and sustained digital literacy development will be crucial for making AI tools agents of academic excellence and innovation while reducing risks such as over-dependence and ethical misuse.
Further, given the widespread adoption of GenAI tools among English major students, universities must take an active role in establishing clear guidelines for responsible and ethical use. Policies should be drafted and implemented to govern the use of AI in academic work so that scholarly integrity is preserved alongside innovation. In addition, well-designed training programs should be offered to strengthen students’ digital literacy, with a focus on ethical engagement, critical evaluation of AI-produced content, and the risks of overreliance. Such programs can equip students with the competencies needed to use AI-supported learning responsibly and independently. Collaboration among student affairs offices, IT support services, and academic departments is needed to ensure that GenAI integration aligns with institutional values and promotes academic excellence and learner autonomy.
This research adds to the expanding body of work on generative AI in education by offering a context-specific analysis of uptake and perceived usefulness among Indonesian English major undergraduates, a setting underrepresented in the existing literature. By integrating quantitative and qualitative results within the UTAUT framework and cognitive engagement theories, this study extends earlier research on GenAI from Western-oriented contexts to Southeast Asia. It identifies demographic nuances, task-domain usage, and reflective engagement patterns that inform both theory and practice. In doing so, this study deepens our understanding of how students in multilingual, low-resource contexts engage with GenAI tools not merely as technological devices, but as cognitive co-pilots shaping scholarly conduct, motivation, and agency.

Author Contributions

Conceptualization, M.Z. (Muflihatuz Zakiyah) and S.A.; methodology, M.Z. (Muhammad Zaim) and M.Z. (Muflihatuz Zakiyah); validation, H.A.; formal analysis, S.A., H.A., A.N. and M.H.; investigation, S.A., H.A., M.Z. (Muhammad Zaim), W.S., A.N. and M.H.; resources, M.A.H.; data curation, M.Z. (Muhammad Zaim), M.A.H., M.Z. (Muflihatuz Zakiyah) and M.H.; writing—original draft preparation, M.Z. (Muhammad Zaim); writing—review and editing, B.W. and M.A.H.; visualization, M.A.H.; supervision, M.Z. (Muhammad Zaim), S.A. and B.W.; project administration, H.A., M.Z. (Muhammad Zaim) and W.S.; funding acquisition, M.Z. (Muhammad Zaim). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Lembaga Penelitian dan Pengabdian Masyarakat Universitas Negeri Padang (Contract Number: 1388/UN35.15/LT/2024).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research and Community Service Institute of Universitas Negeri Padang (protocol code 1388/UN35.15/LT/2024; date of approval: 6 May 2024).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The dataset generated and/or analyzed during the current study is not publicly available due to privacy policies. However, it can be made available upon request.

Acknowledgments

The authors would like to thank Lembaga Penelitian dan Pengabdian Masyarakat Universitas Negeri Padang for funding this work under contract number 1388/UN35.15/LT/2024.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Al-khresheh, M. H. (2024). Bridging technology and pedagogy from a global lens: Teachers’ perspectives on integrating ChatGPT in English language teaching. Computers and Education: Artificial Intelligence, 6, 100218. [Google Scholar] [CrossRef]
  2. Alshumaimeri, Y. A., & Alshememry, A. K. (2024). The extent of AI applications in EFL learning and teaching. IEEE Transactions on Learning Technologies, 17, 653–663. [Google Scholar] [CrossRef]
  3. AlTwijri, L., & Alghizzi, T. M. (2024). Investigating the integration of artificial intelligence in English as foreign language classes for enhancing learners’ affective factors: A systematic review. Heliyon, 10, e31053. [Google Scholar] [CrossRef] [PubMed]
  4. An, X., Chai, C. S., Li, Y., Zhou, Y., & Yang, B. (2023). Modeling students’ perceptions of artificial intelligence assisted language learning. Computer Assisted Language Learning. [Google Scholar] [CrossRef]
  5. Aryadoust, V., Zakaria, A., & Jia, Y. (2024). Investigating the affordances of OpenAI’s large language model in developing listening assessments. Computers and Education: Artificial Intelligence, 6, 100204. [Google Scholar] [CrossRef]
  6. Barrett, A., & Pack, A. (2023). Not quite eye to AI: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1), 59. [Google Scholar] [CrossRef]
  7. Belda-Medina, J., & Calvo-Ferrer, J. R. (2022). Using chatbots as AI conversational partners in language learning. Applied Sciences, 12(17), 8427. [Google Scholar] [CrossRef]
  8. Boscardin, C. K., Gin, B., Golde, P. B., & Hauer, K. E. (2024). ChatGPT and generative artificial intelligence for medical education: Potential impact and opportunity. Academic Medicine, 99(1), 22–27. [Google Scholar] [CrossRef]
  9. Budhathoki, T., Zirar, A., Njoya, E. T., & Timsina, A. (2024). ChatGPT adoption and anxiety: A cross-country analysis utilising the unified theory of acceptance and use of technology (UTAUT). Studies in Higher Education, 49, 831–846. [Google Scholar] [CrossRef]
  10. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  11. Chi, M. T., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. [Google Scholar] [CrossRef]
  12. Chiu, T. K. (2024). Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence, 6, 100197. [Google Scholar] [CrossRef]
  13. Clarke, V., & Braun, V. (2017). Thematic analysis. The Journal of Positive Psychology, 12(3), 297–298. [Google Scholar] [CrossRef]
  14. Creswell, J. W. (1999). Mixed-method research: Introduction and application. In Handbook of educational policy (pp. 455–472). Academic Press. [Google Scholar]
  15. Cummings, R. E., Monroe, S. M., & Watkins, M. (2024). Generative AI in first-year writing: An early analysis of affordances, limitations, and a framework for the future. Computers and Composition, 71, 102827. [Google Scholar] [CrossRef]
  16. Danler, M., Hackl, W. O., Neururer, S. B., & Pfeifer, B. (2024). Quality and effectiveness of AI tools for students and researchers for scientific literature review and analysis. Studies in Health Technology and Informatics, 313, 203–208. [Google Scholar]
  17. Darwin, Rusdin, D., Mukminatien, N., Suryati, N., Laksmi, E. D., & Marzuki. (2024). Critical thinking in the AI era: An exploration of EFL students’ perceptions, benefits, and limitations. Cogent Education, 11(1), 2290342. [Google Scholar] [CrossRef]
  18. Deci, E. L., & Ryan, R. M. (2008). Self-determination theory: A macrotheory of human motivation, development, and health. Canadian Psychology/Psychologie Canadienne, 49(3), 182–185. [Google Scholar] [CrossRef]
  19. Du, J., & Alm, A. (2024). The impact of ChatGPT on English for academic purposes (EAP) students’ language learning experience: A self-determination theory perspective. Education Sciences, 14(7), 726. [Google Scholar] [CrossRef]
  20. Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1–4. [Google Scholar] [CrossRef]
  21. Fathi, J., Rahimi, M., & Derakhshan, A. (2024). Improving EFL learners’ speaking skills and willingness to communicate via artificial intelligence-mediated interactions. System, 121, 103254. [Google Scholar] [CrossRef]
  22. Greene, B. A., & Miller, R. B. (1996). Influences on achievement: Goals, perceived ability, and cognitive engagement. Contemporary Educational Psychology, 21(2), 181–192. [Google Scholar] [CrossRef]
  23. Habibi, A., Muhaimin, M., Danibao, B. K., Wibowo, Y. G., Wahyuni, S., & Octavia, A. (2023). ChatGPT in higher education learning: Acceptance and use. Computers and Education: Artificial Intelligence, 5, 100190. [Google Scholar] [CrossRef]
  24. Jang, M. (2024). AI literacy and intention to use text-based GenAI for learning: The case of business students in Korea. Informatics, 11(3), 54. [Google Scholar] [CrossRef]
  25. Kshetri, N. (2023). The economics of generative artificial intelligence in the academic industry. Computer, 56(8), 77–83. [Google Scholar] [CrossRef]
  26. Lai, J. W. (2024). Adapting self-regulated learning in an age of generative artificial intelligence chatbots. Future Internet, 16(6), 218. [Google Scholar] [CrossRef]
  27. Law, L. (2024). Application of generative artificial intelligence (GenAI) in language teaching and learning: A scoping literature review. Computers and Education Open, 6, 100174. [Google Scholar] [CrossRef]
  28. Lee, Y. J., Davis, R. O., & Lee, S. O. (2024). University students’ perceptions of artificial intelligence-based tools for English writing courses. Online Journal of Communication and Media Technologies, 14(1), e202412. [Google Scholar] [CrossRef]
  29. Lie, A. (2017). English and identity in multicultural contexts: Issues, challenges, and opportunities. Teflin Journal, 28(1), 71. [Google Scholar] [CrossRef]
  30. Malik, A., Khan, M. L., Hussain, K., Qadir, J., & Tarhini, A. (2025). AI in higher education: Unveiling academicians’ perspectives on teaching, research, and ethics in the age of ChatGPT. Interactive Learning Environments, 33(3), 2390–2406. [Google Scholar] [CrossRef]
  31. Marzuki, Widiati, U., Rusdin, D., Darwin, & Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Education, 10(2), 2236469. [Google Scholar] [CrossRef]
  32. McCarthy, K. S., & Yan, E. F. (2024). Reading comprehension and constructive learning: Policy considerations in the age of artificial intelligence. Policy Insights from the Behavioral and Brain Sciences, 11(1), 19–26. [Google Scholar] [CrossRef]
  33. Ou, A. W., Stöhr, C., & Malmström, H. (2024). Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System, 121, 103225. [Google Scholar] [CrossRef]
  34. Preiksaitis, C., & Rose, C. (2023). Opportunities, challenges, and future directions of generative artificial intelligence in medical education: Scoping review. JMIR Medical Education, 9, e48785. [Google Scholar] [CrossRef]
  35. Raman, R., Mandal, S., Das, P., Kaur, T., Sanjanasri, J. P., & Nedungadi, P. (2024). Exploring university students’ adoption of ChatGPT using the diffusion of innovation theory and sentiment analysis with gender dimension. Human Behavior and Emerging Technologies, 2024(1), 3085910. [Google Scholar] [CrossRef]
  36. Rosmayanti, V., Noni, N., & Patak, A. A. (2022). Students’ acceptance of technology use in learning English pharmacy. International Journal of Language Education, 6(3), 314–331. [Google Scholar] [CrossRef]
  37. Spector, J. M., & Ma, S. (2019). Inquiry and critical thinking skills for the next generation: From artificial intelligence back to human intelligence. Smart Learning Environments, 6(1), 8. [Google Scholar] [CrossRef]
  38. Stapleton, C. D. (1997). Basic concepts in exploratory factor analysis (EFA) as a tool to evaluate score validity: A right-brained approach. Southwest Educational Research Association, 142, 1–8. [Google Scholar]
  39. Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259. [Google Scholar] [CrossRef]
  40. Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. [Google Scholar] [CrossRef]
  41. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  42. Vo, A., & Nguyen, H. (2024). Generative artificial intelligence and ChatGPT in language learning: EFL students’ perceptions of technology acceptance. Journal of University Teaching and Learning Practice, 21(6), 199–218. [Google Scholar] [CrossRef]
  43. Waluyo, B., & Kusumastuti, S. (2024). Generative AI in student English learning in Thai higher education: More engagement, better outcomes? Social Sciences & Humanities Open, 10, 101146. [Google Scholar]
  44. Wang, C. (2024). Exploring students’ generative AI-assisted writing processes: Perceptions and experiences from native and nonnative English speakers. Technology, Knowledge and Learning. [Google Scholar] [CrossRef]
  45. Wang, Y., & Zhang, W. (2023). Factors influencing the adoption of generative AI for art designing among Chinese generation Z: A structural equation modeling approach. IEEE Access, 11, 143272–143284. [Google Scholar] [CrossRef]
  46. Watermeyer, R., Phipps, L., Lanclos, D., & Knight, C. (2024). Generative AI and the automating of academia. Postdigital Science and Education, 6(2), 446–466. [Google Scholar] [CrossRef]
  47. Williyan, A., Fitriati, S. W., Pratama, H., & Sakhiyya, Z. (2024). AI as co-creator: Exploring Indonesian EFL teachers’ collaboration with AI in content development. Teaching English with Technology, 24(2), 5–21. [Google Scholar] [CrossRef]
  48. Xie, Z., Wu, X., & Xie, Y. (2024). Can interaction with generative artificial intelligence enhance learning autonomy? A longitudinal study from comparative perspectives of virtual companionship and knowledge acquisition preferences. Journal of Computer Assisted Learning, 40(5), 2369–2384. [Google Scholar] [CrossRef]
  49. Xu, X., Su, Y., Zhang, Y., Wu, Y., & Xu, X. (2024). Understanding learners’ perceptions of ChatGPT: A thematic analysis of peer interviews among undergraduates and postgraduates in China. Heliyon, 10(4), e26239. [Google Scholar] [CrossRef]
  50. Yang, Y., Luo, J., Yang, M., Yang, R., & Chen, J. (2024). From surface to deep learning approaches with generative AI in higher education: An analytical framework of student agency. Studies in Higher Education, 49(5), 817–830. [Google Scholar] [CrossRef]
  51. Yao, Y., Sun, Y., Zhu, S., & Zhu, X. (2025). A qualitative inquiry into metacognitive strategies of postgraduate students in employing ChatGPT for English academic writing. European Journal of Education, 60(1), e12824. [Google Scholar] [CrossRef]
  52. Yawson, R. M. (2024). Perspectives on the promise and perils of generative AI in academia. Human Resource Development International, 28, 476–487. [Google Scholar] [CrossRef]
  53. Yilmaz, F. G. K., Yilmaz, R., & Ceylan, M. (2024). Generative artificial intelligence acceptance scale: A validity and reliability study. International Journal of Human–Computer Interaction, 40(24), 8703–8715. [Google Scholar] [CrossRef]
  54. Zein, S. (2019). English, multilingualism and globalisation in Indonesia: A love triangle: Why Indonesia should move towards multilingual education. English Today, 35(1), 48–53. [Google Scholar] [CrossRef]
  55. Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education, 20(1), 49. [Google Scholar] [CrossRef]
Figure 1. Illustration of the research design.
Figure 2. Top five AI applications used.
Figure 3. Top 5 task types by overall usage.
Figure 4. Thematic analysis results.
Table 1. Participants’ demographic profiles.

| Variable | Category | Frequency | Percentage |
|---|---|---|---|
| Gender | Female | 206 | 74.4 |
| | Male | 71 | 25.6 |
| English Proficiency | Very Poor | 3 | 1.1 |
| | Poor | 15 | 5.4 |
| | Average | 172 | 62.1 |
| | Good | 87 | 31.4 |
| | Very Good | 3 | 1.1 |
| Academic Level | Underclassmen | 149 | 53.8 |
| | Upperclassmen | 128 | 46.2 |
| Institution | State University of Padang | 91 | 32.9 |
| | University of Bengkulu | 65 | 23.5 |
| | State Islamic University of Sjech M. Djamil Djambek Bukittinggi | 65 | 23.5 |
| | State Islamic University of Bukittinggi | 21 | 7.6 |
| | Mahmud Yunus State Islamic University of Batusangkar | 12 | 4.3 |
| | University of Eka Sakti | 10 | 3.6 |
| | Pattimura University | 9 | 3.2 |
| | State Polytechnic of Ambon | 2 | 0.7 |
| | PGRI University of West Sumatra | 2 | 0.7 |
Table 2. Top five AI applications used by gender.

| Application | Female (N = 206) | Male (N = 71) | Total |
|---|---|---|---|
| ChatGPT | 173 (83.98%) | 66 (92.96%) | 239 |
| Google Translate | 165 (80.10%) | 57 (80.28%) | 222 |
| Grammarly | 146 (70.87%) | 45 (63.38%) | 191 |
| Duolingo | 107 (51.94%) | 21 (29.58%) | 128 |
| QuillBot | 99 (48.06%) | 36 (50.70%) | 135 |
Table 3. Chi-square results for gender differences in GenAI tool usage.

| AI Tool | χ² | p-Value | Significance |
|---|---|---|---|
| ChatGPT | 2.88 | 0.090 | Not significant |
| Google Translate | 0.00 | 1.000 | Not significant |
| Grammarly | 1.06 | 0.304 | Not significant |
| QuillBot | 0.06 | 0.805 | Not significant |
| Duolingo | 9.74 | 0.002 | Significant |
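The gender comparisons in Table 3 can be reproduced directly from the user counts in Table 2. A minimal sketch in pure Python, applying a 2 × 2 chi-square test with Yates’ continuity correction (the standard default for 2 × 2 tables), recovers the reported statistics for the two extreme cases, ChatGPT and Duolingo:

```python
import math

def chi2_yates(table):
    """Chi-square test for a 2x2 contingency table with Yates'
    continuity correction; returns (chi2, p) for df = 1."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (abs(obs - exp) - 0.5) ** 2 / exp
    # survival function of the chi-square distribution with df = 1
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Users vs. non-users by gender, from Table 2 (female N = 206, male N = 71)
chatgpt = [[173, 206 - 173], [66, 71 - 66]]
duolingo = [[107, 206 - 107], [21, 71 - 21]]

for name, tbl in [("ChatGPT", chatgpt), ("Duolingo", duolingo)]:
    chi2, p = chi2_yates(tbl)
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.3f}")
```

This yields χ² = 2.88 (p = 0.090) for ChatGPT and χ² = 9.74 (p = 0.002) for Duolingo, matching Table 3 and confirming that the reported tests include the continuity correction.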
Table 4. Top five AI applications used by year of study.

| Application | Underclassmen (N = 149) | Upperclassmen (N = 128) | Total |
|---|---|---|---|
| ChatGPT | 124 (83.22%) | 115 (89.84%) | 239 |
| Google Translate | 120 (80.54%) | 102 (79.69%) | 222 |
| Grammarly | 104 (69.80%) | 87 (67.97%) | 191 |
| QuillBot | 69 (46.31%) | 66 (51.56%) | 135 |
| Duolingo | 78 (52.35%) | 50 (39.06%) | 128 |
Table 5. Top 5 task types by usage by gender.

| Task Types | Female (N = 206) | Male (N = 71) | Total |
|---|---|---|---|
| Creative Tasks | 73 (35.44%) | 22 (30.99%) | 95 |
| Language Learning | 116 (56.31%) | 33 (46.48%) | 149 |
| Research | 101 (49.03%) | 33 (46.48%) | 134 |
| Study Support | 85 (41.26%) | 26 (36.62%) | 111 |
| Writing Assistance | 116 (56.31%) | 46 (64.79%) | 162 |
Table 6. Task types by year of study.

| Task Types | Underclassmen (N = 149) | Upperclassmen (N = 128) | Total |
|---|---|---|---|
| Creative Tasks | 46 (30.87%) | 49 (38.28%) | 95 |
| Language Learning | 82 (55.03%) | 67 (52.34%) | 149 |
| Research | 77 (51.68%) | 57 (44.53%) | 134 |
| Study Support | 67 (44.97%) | 44 (34.38%) | 111 |
| Writing Assistance | 83 (55.70%) | 79 (61.72%) | 162 |
Table 7. Exploratory Factor Analysis results.

| Factor | Item | Factor Loading | Cronbach’s Alpha | Eigenvalue | Variance (%) |
|---|---|---|---|---|---|
| Performance Expectancy | Item 1 | −0.8 | 0.9 | 4.6 | 45.6 |
| | Item 2 | −0.8 | | | |
| | Item 3 | −0.8 | | | |
| | Item 4 | −0.7 | | | |
| | Item 5 | −0.9 | | | |
| | Item 6 | −0.9 | | | |
| | Item 7 | −0.9 | | | |
| Effort Expectancy | Item 8 | −0.8 | 0.9 | 2.1 | 21.3 |
| | Item 9 | −0.8 | | | |
| | Item 10 | −0.8 | | | |
| | Item 11 | −0.8 | | | |
| Facilitating Conditions | Item 12 | −0.7 | 0.8 | 1.2 | 12.0 |
| | Item 13 | −0.5 | | | |
| | Item 14 | −0.7 | | | |
| Social Influence | Item 15 | −0.5 | 0.8 | 0.8 | 7.8 |
| | Item 16 | −0.4 | | | |
| | Item 17 | −0.6 | | | |
| | Item 18 | −0.1 | | | |
Table 8. Multiple regression results.

| Factor | Coefficient | F | p | Effect Size | R-Squared (Partial) |
|---|---|---|---|---|---|
| Performance Expectancy | 0.438 | 10.83 | <0.001 | 0.5 | 0.33 |
| Effort Expectancy | 0.188 | 1.61 | <0.001 | 0.22 | 0.15 |
| Facilitating Conditions | 0.188 | 1.82 | <0.001 | 0.2 | 0.1 |
| Social Influence | 0.188 | 3.1 | <0.001 | 0.21 | 0.12 |
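To illustrate how a coefficient table like Table 8 is produced, the sketch below fits an ordinary least squares model on simulated survey responses. This is a generic illustration, not the study’s analysis: the construct names, the noise level, and the “true” effect sizes are assumptions chosen merely to echo the magnitudes in Table 8.

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination
    with partial pivoting; each row of X starts with a 1 (intercept)."""
    k = len(X[0])
    # Build the augmented system [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for col in range(k):  # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k  # back substitution
    for i in range(k - 1, -1, -1):
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(1)
n = 5000
# Simulated, standardized UTAUT construct scores: PE, EE, FC, SI
preds = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
# Hypothetical true effects echoing Table 8 (assumption, not real data)
true_b = [0.438, 0.188, 0.188, 0.188]
y = [sum(w * x for w, x in zip(true_b, row)) + random.gauss(0, 0.5)
     for row in preds]
X = [[1.0] + row for row in preds]
b = ols(X, y)
print([round(v, 2) for v in b[1:]])  # recovers approximately the true effects
```

With a different seed or sample size the recovered coefficients shift slightly, which is why regression tables of this kind report significance tests and partial R² alongside the point estimates.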
