4.1. Patterns of Generative AI Usage Among English Majors in Indonesia
Among all AI applications reported (N = 277), ChatGPT was the most widely used tool, with 239 students (86.28%) indicating they used it for academic purposes; this high percentage reflects the tool's popularity and versatility in supporting English learning. Google Translate followed closely, with 222 students (80.14%) using it, underscoring its critical role in helping students overcome language barriers. Grammarly was also extensively used, with 191 students (68.95%) relying on it for grammar checking and writing support. QuillBot was used by 135 students (48.74%), indicating a preference for tools that assist with paraphrasing and rephrasing tasks, and Duolingo, a language-learning platform, was used by 128 students (46.21%), highlighting its value for English language practice. Other AI applications showed lower usage rates: Elsa Speak was used by 29 students (10.5%), while HelloTalk had a usage rate of 5.8% (16 students). More specialized applications, such as DeepL (4 students, 1.4%), Babbel (3 students, 1.1%), and Perplexity (5 students, 1.8%), were used by fewer students, and tools such as Bing AI, Character.Ai, Gemini AI, and Mendeley each appeared only once, reflecting minimal uptake of these niche AI applications.
Figure 2 shows the top five AI applications used by the students.
Table 2 depicts the top five AI applications used by gender. When AI application usage was analyzed by gender, several patterns emerged. ChatGPT was popular among both female and male students, with 173 female students (83.98%) and 66 male students (92.96%) reporting usage. Google Translate showed similarly high usage, relied on by 165 female students (80.10%) and 57 male students (80.28%). Grammarly was used by 146 females (70.87%) and 45 males (63.38%), suggesting a slightly higher preference for this tool among female students. Duolingo displayed a notable gender difference, with 107 female students (51.94%) but only 21 male students (29.58%) reporting usage, followed by QuillBot, used by 99 females (48.06%) and 36 males (50.70%). Lower-frequency applications also revealed some gender preferences: Elsa Speak was used by 23 females (11.17%) and 6 males (8.45%), showing a slightly stronger preference among female students, and less common applications such as Babbel and Perplexity were used by both genders in small numbers. Some applications, such as AI Asus, Humata, and Lingvist, were used exclusively by female students, while tools such as Bing AI, Poe, DeepL, and Microsoft Bing were used solely by male students.
To examine whether gender significantly influenced the use of different generative AI tools, chi-square tests were conducted on the five most reported applications. The analysis showed that Duolingo usage differed significantly by gender (χ² = 9.74, p = 0.002), with a higher proportion of female students (51.9%) using the app than male students (29.6%). However, no significant gender differences were found for ChatGPT (p = 0.090), Google Translate (p = 1.000), Grammarly (p = 0.304), or QuillBot (p = 0.805). These findings suggest that while overall adoption rates for the major AI tools are largely uniform across genders, certain applications such as Duolingo may appeal differently, potentially reflecting gender-based preferences in language learning strategies. Table 3 displays the results.
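The Duolingo result can be reproduced from the reported counts, assuming 206 female and 71 male respondents (consistent with the reported percentages) and Yates' continuity correction, which is standard for 2 × 2 tables. A minimal sketch in Python:

```python
from math import erfc, sqrt

def chi2_yates_2x2(a, b, c, d):
    """Chi-square test of independence for the 2x2 table [[a, b], [c, d]],
    with Yates' continuity correction; returns (statistic, p) for df = 1."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        exp = rows[i] * cols[j] / n  # expected count under independence
        stat += (abs(obs - exp) - 0.5) ** 2 / exp
    p = erfc(sqrt(stat / 2))  # survival function of chi-square with 1 df
    return stat, p

# Duolingo users vs. non-users by gender: 107 of 206 females, 21 of 71 males
stat, p = chi2_yates_2x2(107, 21, 99, 50)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # chi2 = 9.74, p = 0.002
```

The same table passed to `scipy.stats.chi2_contingency` (with its default correction) yields the same statistic.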
As shown in Table 4, ChatGPT and Google Translate emerged as the most widely used AI tools among both underclassmen and upperclassmen. ChatGPT was used by 124 underclassmen (83.22%) and 115 upperclassmen (89.84%), indicating broad adoption across study years. Google Translate followed, with 120 underclassmen (80.54%) and 102 upperclassmen (79.69%), reflecting widespread reliance on this tool for language assistance. Grammarly usage was slightly lower among upperclassmen, with 87 students (67.97%) using it compared to 104 underclassmen (69.80%), suggesting that younger students may rely more heavily on grammar-checking tools. QuillBot usage was nearly balanced, with 69 underclassmen (46.31%) and 66 upperclassmen (51.56%) using it for paraphrasing support. Duolingo usage showed more variation, with 78 underclassmen (52.35%) but only 50 upperclassmen (39.06%) using it, possibly indicating a greater need among newer students for foundational language practice. Less common applications, such as Elsa Speak and HelloTalk, displayed limited adoption in both groups without notable year-based differences: Elsa Speak was used by 18 underclassmen (12.08%) and 11 upperclassmen (8.59%), while HelloTalk saw slightly more usage among underclassmen (9 students, 6.04%) than upperclassmen (7 students, 5.47%). Other tools, including AI Asus, Babbel, Perplexity, and askpdf, were each mentioned by only a few students, with no clear preference based on year of study, suggesting that these specialized applications served limited, specific functions rather than being essential for general academic use.
4.2. Academic Tasks Supported by Generative AI
As indicated in Figure 3 (N = 277), Writing Assistance emerged as the most frequently reported use of AI, with 162 students (58.48%) indicating they primarily used generative AI for writing tasks. This finding highlights the significant role of AI tools in supporting students with essays, assignments, and other written work. Language Learning was the second most common use, with 149 students (53.79%) relying on AI for language practice and improvement, underscoring the importance of these tools in enhancing English proficiency. Research Tasks ranked third, with 134 students (48.38%) using AI for gathering information and synthesizing findings. Study Support was also prominent, with 111 students (40.07%) indicating they used AI to help understand and review study materials. Exam Preparation was another notable application, with 64 students (23.1%) using AI tools to prepare and revise for exams, likely through practice questions and summarization. Among the more specialized uses, Creative Writing (e.g., writing poems, stories, or scripts) was mentioned by 95 students (34.30%); 61 students (22.0%) reported using AI for Communication tasks, such as composing messages or formal responses; and a smaller subset of 18 students (6.5%) used AI for Administrative Tasks, such as organizing schedules or generating formal documents.
As depicted in Table 5, when analyzing task usage by gender, Writing Assistance emerged as the most frequently used category, with 56.31% of female students (116) and 64.79% of male students (46) indicating its use. Language Learning was similarly prominent, utilized by 56.31% of females (116) and 46.48% of males (33), demonstrating substantial interest across genders in leveraging AI to enhance language proficiency. Research tasks were also widely used, reported by 49.03% of females (101) and 46.48% of males (33). Study Support followed, with 41.26% of females (85) and 36.62% of males (26) engaging in this category. Creative Tasks, while less commonly used, still showed notable participation, with 35.44% of females (73) and 30.99% of males (22) using AI for creativity-based activities. The results suggest that while usage patterns are similar across genders, males slightly outpace females in Writing Assistance, while females show higher engagement in Language Learning and Study Support tasks.
For the academic year breakdown, as shown in Table 6, Writing Assistance is the most frequently used task among both underclassmen (55.70%, 83 students) and upperclassmen (61.72%, 79 students), highlighting its importance across all levels. Language Learning is also widely utilized, with 55.03% of underclassmen (82 students) and 52.34% of upperclassmen (67 students) relying on AI for language improvement. Research tasks show significant engagement, with 51.68% of underclassmen (77 students) and 44.53% of upperclassmen (57 students) using AI for academic inquiries. Study Support exhibits slightly higher usage among underclassmen (44.97%, 67 students) than upperclassmen (34.38%, 44 students). Creative Tasks show the reverse pattern, with 30.87% of underclassmen (46 students) and 38.37% of upperclassmen (49 students) using AI for creativity-focused activities. The findings suggest that while Writing Assistance is consistently the most used task in both groups, underclassmen tend to engage more with Research and Study Support, whereas upperclassmen demonstrate higher involvement in Creative Tasks and Writing Assistance.
4.4. Factors Influencing Acceptance and Use of Generative AI
Based on the survey results on student acceptance in Table 7, exploratory factor analysis (EFA) was performed using the principal method with varimax rotation to extract four factors, consistent with the theoretical dimensions of a four-factor model. Cronbach's alpha was calculated for each factor, with α > 0.70 indicating high internal consistency, and eigenvalues and the percentage of variance explained were also examined. The EFA results revealed that performance expectancy (Items 1–7) predominantly loads onto Factor 1, with high negative values (e.g., Item 1: −0.77; Item 2: −0.83), suggesting a cohesive grouping. Effort expectancy (Items 8–11) aligns closely with Factor 1, indicating a shared influence on acceptance. Facilitating conditions (Items 12–14) show moderate alignment with Factor 1, though some items (e.g., Item 13) split between Factors 1 and 2. Social influence (Items 15–18) primarily loads onto Factor 2, indicating that it is a dimension distinct from performance and effort expectancy. While performance and effort expectancy items cluster closely, facilitating conditions and social influence show more complex loadings; a multiple regression analysis was therefore conducted to quantify each factor's influence on the overall acceptance score and to calculate effect sizes. Cronbach's alpha for all factors indicates acceptable internal reliability, ranging from 0.76 to 0.89. The eigenvalue of each factor reflects the variance explained, with Factor 1 having the highest eigenvalue (4.56), accounting for 45.6% of the variance. Factor 2, representing social influence, explains an additional 21.3%, while facilitating conditions (Factor 3) and a miscellaneous fourth factor contribute 12.0% and 7.8%, respectively.
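The reliability criterion can be illustrated with a small sketch. Cronbach's alpha compares the sum of the item variances to the variance of respondents' total scores; the Likert responses below are hypothetical, not the study's data:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list of
    respondent scores per item): k/(k-1) * (1 - sum(item var) / var(totals))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# hypothetical 5-point Likert responses: 3 items, 5 respondents
items = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [5, 4, 2, 4, 3],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.86 — above the 0.70 threshold
```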
The multiple regression analysis in Table 8 discloses the factors that significantly influence students' acceptance of generative AI tools, with each predictor contributing uniquely. Performance expectancy is the most influential factor, with a coefficient of 0.438 (F = 10.83, p < 0.001), an effect size of 0.50, and a partial R-squared of 0.33, indicating that students' expectations of AI tools enhancing their academic performance account for 33% of the variance in their acceptance. Effort expectancy also plays a significant role, with a coefficient of 0.188 (F = 1.61, p < 0.001), an effect size of 0.22, and a partial R-squared of 0.15, meaning that ease of use explains 15% of the variance. Social influence and facilitating conditions both have coefficients of 0.188, with F-statistics of 3.10 and 1.82, effect sizes of 0.21 and 0.20, and partial R-squared values of 0.12 and 0.10, respectively. These results suggest that while supportive conditions and social factors contribute to acceptance, their impact is moderate compared with the perceived performance and ease-of-use benefits.
Performance expectancy clearly emerged as the strongest predictor of GenAI acceptance, whereas effort expectancy and social influence made relatively weaker statistical contributions. Nonetheless, their retention in the model is supported by theoretical and practical relevance. Effort expectancy, although showing lower factor loadings, reflects usability concerns that remain critical for sustained engagement with AI tools, especially for less tech-confident users. Similarly, social influence, despite its modest explained variance, captures the peer and institutional norms that subtly shape technology adoption in collectivist cultures such as Indonesia. Retaining these factors offers a more holistic understanding of the multifaceted dynamics influencing AI acceptance. These results emphasize that even predictors with weaker statistical weight can carry pedagogical and cultural importance, especially in emerging and context-specific technology environments.
4.5. Generative AI as Cognitive Co-Pilots in English Language Studies
Thematic analysis was conducted on the qualitative data collected from open-ended questions and interviews, and the results are presented in Figure 4.
A significant number of students use AI to refine grammar and structure in their academic writing, highlighting its dual role as both a linguistic assistant and quality checker. Tools like Grammarly and ChatGPT are viewed as invaluable for polishing writing, ensuring greater coherence and accuracy. By incorporating AI feedback, students address immediate language needs while also gradually improving their writing skills. The use of AI in enhancing structure and grammar reflects a deliberate effort to uphold academic rigor and linguistic proficiency, positioning AI tools as sources of constructive feedback and self-improvement. Instead of passively accepting corrections, students actively engage with the feedback, identifying areas for growth and refining their understanding of grammatical structures. This reflective use of AI suggests that it serves as a complementary tool in students’ academic journeys, rather than a sole source of linguistic accuracy.
“I always use Grammarly for writing assignments. It points out grammar mistakes and also suggests sentence structures that I might not think of.”
(Student 4)
“AI helps me improve my sentence flow. For instance, Grammarly will suggest rephrasing complex sentences to make them clearer, which has improved my writing a lot.”
(Student 10)
“I use AI to proofread my essays. It’s like having a teacher check my grammar, so I feel more confident submitting my work.”
(Student 15)
“My grammar has weaknesses, so I rely on AI to correct my sentences. But I also try to learn from these corrections to improve on my own.”
(Student 19)
A prevalent theme in students' responses reveals frequent reliance on AI applications like Google Translate and Grammarly to address challenges in vocabulary, syntax, and grammar. Underclassmen in particular use these tools to bridge language barriers, enabling them to engage with complex academic texts in English. Many students describe these AI tools as essential for reducing comprehension time and clarifying language points that might otherwise hinder understanding. Students are also aware of AI's limitations, critically examining AI translations and making adjustments for contextual accuracy. This critical approach demonstrates their recognition that AI-generated translations may lack the nuanced understanding needed for precise interpretation. These practices reflect a sophisticated strategy, whereby students rely on AI's speed while remaining mindful of its contextual shortcomings, using it to supplement rather than replace their linguistic judgment.
“Using Google Translate helps me understand difficult sentences in English articles. But I still review its translation to ensure it makes sense.”
(Student 2)
“I often use AI to check my grammar and vocabulary. For example, I’ll translate a word from English to my native language, then double-check if it fits the context.”
(Student 11)
“For writing assignments, I use Grammarly to improve grammar and clarity. However, I know AI can be wrong sometimes, so I adjust the suggestions if they don’t fit my meaning.”
(Student 17)
“I translate complex vocabulary and sentences in academic articles. AI provides a quick understanding, but I compare it with my knowledge to ensure it’s accurate.”
(Student 24)
Students, particularly upperclassmen, use AI to explore content deeply and access relevant examples, broadening their understanding and reinforcing academic arguments. By utilizing AI as an instant research resource, students can efficiently access diverse perspectives and examples, enhancing both comprehension and the depth of their assignments. This practical adaptation of AI’s capabilities shows that students view it as an efficient alternative to more time-consuming methods of gathering information. The reliance on AI for content exploration reflects a complex approach to information acquisition, where students combine AI-sourced content with traditional sources like textbooks and academic journals. This comparative method allows students to synthesize and validate AI-generated information against established academic references, illustrating a mature academic mindset that integrates technology to expedite knowledge acquisition while maintaining academic integrity.
“I use AI to get examples of language use in context, like idioms or specific phrases. It helps me understand how they’re used in real conversations.”
(Student 3)
“For assignments, I use AI to find example essays or research summaries on similar topics. It saves me time, and I can cross-check with other sources.”
(Student 12)
“AI provides quick references for my research topics. For instance, it helps me understand different viewpoints in literature, which I might not find easily in the library.”
(Student 20)
“I use AI to gather background information before starting a project. It’s fast and gives a broad overview, but I still go to academic sources for accuracy.”
(Student 25)
Responses indicate that students frequently turn to AI for generating and expanding ideas, especially when encountering creative blocks. Upperclassmen, in particular, use AI to conceptualize complex assignments and presentations, often viewing it as a tool for creating an initial framework to develop further. By using AI as a brainstorming partner, students explore various perspectives and potential directions for their work. A sophisticated relationship between dependence and creative independence emerges: although AI aids in structuring ideas, many students remain aware that personal engagement and creative thought are central to the academic process. This awareness suggests that students see AI-generated ideas as a starting point, not a replacement, allowing AI to initiate thought without replacing their own intellectual agency.
“When I don’t have ideas, I ask ChatGPT to give me a topic outline, and I use it to get started. It’s like a brainstorming partner, but I add my thoughts to make it personal.”
(Student 5)
“For my essays, I often ask AI to give me potential arguments or points I could cover. It helps me develop a clearer structure, especially under tight deadlines.”
(Student 9)
“AI is helpful when I’m out of ideas. For example, I ask it for suggestions on argumentative essay topics, then I expand them based on what I want to write about.”
(Student 13)
“I rely on AI when I’m short on time. It provides basic ideas, which I then develop further to meet the assignment requirements.”
(Student 18)
4.6. Discussion and Implications
The findings confirm that generative AI tools are integral to the academic practices of English major students in Indonesia, with ChatGPT, Google Translate, and Grammarly emerging as the most widely used applications. This aligns with global trends highlighted by C. Wang (2024), who identified these tools as pivotal for supporting both academic writing and language learning. ChatGPT's flexibility, catering to diverse academic needs from idea generation to drafting and revision, reflects its widespread adoption across disciplines, as noted by Chan and Hu (2023). Similarly, Google Translate continues to play a vital role in overcoming linguistic barriers, particularly in non-English-speaking contexts, as emphasized by Waluyo and Kusumastuti (2024) for EFL learners. Gender-based differences in tool preferences observed in this study align with findings by Stöhr et al. (2024) and Vo and Nguyen (2024), who noted that female students often prioritize tools that enhance grammatical accuracy and foundational language learning, while male students focus on tools offering more efficient solutions to complex tasks. This reflects the broader cultural and pedagogical focus on linguistic accuracy and academic performance in non-Western settings (Waluyo & Kusumastuti, 2024; Jang, 2024). Additionally, the progression in AI usage between underclassmen and upperclassmen, where underclassmen focus more on language learning and study support tools while upperclassmen leverage AI for creative and advanced academic tasks, parallels findings by Aryadoust et al. (2024). This suggests that as students advance academically, they adopt AI tools to meet more complex academic needs, in line with Xu et al.'s (2024) observation of AI's evolving role in supporting autonomy and critical thinking.
Generative AI tools are primarily used for writing assistance, echoing findings by C. Wang (2024) and Barrett and Pack (2023), who emphasized AI's role in addressing both global and local writing challenges. Tools like Grammarly and ChatGPT, used to refine grammar, coherence, and structure, align with the reflective practices observed by Waluyo and Kusumastuti (2024), where students actively engage with AI feedback to enhance their writing skills. This highlights generative AI's dual role as a linguistic assistant and a tool for fostering academic rigor. The use of AI in research and language learning also mirrors findings by McCarthy and Yan (2024), who demonstrated its effectiveness in providing personalized support for vocabulary acquisition and reading comprehension. The balanced use of AI for both foundational and advanced tasks emphasizes its adaptability, as noted by Aryadoust et al. (2024) and Chan and Hu (2023), who observed its ability to meet diverse academic needs across student levels. Notably, the shift from study support among underclassmen to creative tasks among upperclassmen reflects Barrett and Pack's (2023) findings on AI's potential to foster innovation and creativity. This progression suggests that students adopt increasingly sophisticated uses of AI tools as they adjust to their academic demands, aligning with the adaptive learning trajectories discussed by Xu et al. (2024).
The general satisfaction with generative AI tools observed in this study aligns with Chan and Hu's (2023) findings, which highlighted the positive reception of AI applications among students due to their flexibility and efficiency. However, despite consistent satisfaction levels across demographic groups, concerns about overreliance and ethical implications reflect issues raised by Barrett and Pack (2023) and Cummings et al. (2024). These concerns stress the importance of fostering critical engagement with AI tools to ensure they complement, rather than replace, students' intellectual efforts. The lack of significant differences in satisfaction based on gender or academic level contrasts with findings by Stöhr et al. (2024), who noted that demographic factors can influence attitudes toward AI tools. This divergence may be attributed to the cultural context of Indonesia, where the emphasis on academic performance and institutional support may lead to a more uniform perception of AI tools (Waluyo & Kusumastuti, 2024). These findings underscore the need for context-specific approaches to integrating AI in education, as highlighted by Vo and Nguyen (2024).
Performance expectancy emerged as the most significant factor influencing AI acceptance, consistent with studies by Habibi et al. (2023) and Strzelecki (2024), who found that students prioritize tools that enhance their academic performance. Effort expectancy and facilitating conditions also played critical roles, highlighting the importance of usability and institutional support, as noted by Jang (2024) and Waluyo and Kusumastuti (2024). These findings suggest that students value the ease of use and practical benefits of AI tools, aligning with the UTAUT framework proposed by Venkatesh et al. (2003). Social influence, while less impactful, remains relevant in shaping students' attitudes toward AI adoption. This aligns with Budhathoki et al. (2024), who observed that social and peer dynamics play a moderate role in technology acceptance. However, the lower influence of social factors in this study, compared with others, may reflect cultural differences, as Indonesian students may prioritize individual academic goals over peer perceptions, consistent with the findings of Rosmayanti et al. (2022).
The diverse applications of generative AI in English language learning underscore its potential to address both linguistic and academic needs. Tools like Grammarly and ChatGPT, used for grammar and writing structure, align with findings by Barrett and Pack (2023) and Waluyo and Kusumastuti (2024), who highlighted the reflective use of AI for improving writing skills. Students' active engagement with AI feedback reflects a sophisticated approach to technology, in which AI is viewed as a complementary learning tool rather than a substitute for human effort. The role of AI in translation and language assistance, particularly through Google Translate, mirrors the findings of Vo and Nguyen (2024), who emphasized the importance of balancing AI's speed with critical oversight to ensure contextual accuracy. This illustrates students' awareness of AI's limitations, as also noted by Y. Wang and Zhang (2023), who observed that users often validate AI outputs against traditional academic sources. Regarding content exploration and idea generation, the findings echo McCarthy and Yan's (2024) observation of AI's efficiency in supporting deep academic inquiry. The use of AI for brainstorming and expanding ideas reflects its role in fostering intellectual creativity, as discussed by Barrett and Pack (2023). This adaptive use of AI tools demonstrates students' ability to integrate AI-generated insights with traditional research methods, supporting the balanced approach advocated by Watermeyer et al. (2024).
While this research indicates the potential role of GenAI as an intellectual co-pilot for scholarly writing, language acquisition, and content searching, consideration must be given to the danger of overreliance and diminished critical thinking. Although a number of students reported actively revising AI-generated content and critically verifying translations, the qualitative data also revealed instances where AI tools were treated as authoritative sources of fact, especially in moments of urgency. This points to the tension between intellectual autonomy and productivity: without guidance, students may come to rely on AI at the expense of their capacity for higher-level thinking, particularly in assessing bias, contextual relevance, or the appropriateness of AI-driven content. These concerns align with Malik et al. (2025) and Watermeyer et al. (2024), who cautioned that the unchecked application of AI can destabilize core academic skills and moral judgment. Hence, GenAI integration should be accompanied by direct teaching of critical thinking approaches and ethical use, so that students remain active, reflective learners rather than passive recipients of machine support.
Thematic analysis results provided richer insight into students' interactions with AI, complementing quantitative findings on satisfaction and usage patterns. Students' reflective use of tools like Grammarly and ChatGPT for grammar refinement, content exploration, and idea generation mirrored the statistical prominence of writing assistance and research support tasks. This triangulation confirms that students are not merely passive users but demonstrate strategic engagement and critical filtering, especially when revising text or cross-checking AI translations. Moreover, students' concerns about AI limitations and their adjustment of AI output align with the modest influence of effort expectancy and social influence in the regression results. These links illustrate the value of integrating qualitative narratives with the quantitative patterns, offering a more comprehensive and nuanced view of learner-AI interactions. The consistency between what students say (qualitative themes) and what they do (quantitative data) reinforces the credibility of the findings and the explanatory strength of the mixed-methods design.
As for the theoretical implications, this study extends the UTAUT model by contextualizing its constructs, particularly performance expectancy and effort expectancy, within the domain of English language learning in a Southeast Asian context. The findings affirm the predictive strength of performance expectancy, in line with Strzelecki (2024) and Habibi et al. (2023), but also underscore the unique cultural dynamics of Indonesian learners, where social influence, though modest, still reflects collectivist values. Moreover, students' reflective engagement with AI aligns with cognitive engagement theory (Greene & Miller, 1996) and the ICAP framework (Chi & Wylie, 2014), emphasizing the pedagogical potential of AI tools as catalysts for constructive and interactive learning. The use of AI for idea generation, content exploration, and revision suggests a model of AI not just as a tool, but as a dynamic agent in self-regulated and metacognitive learning.
Practically, these findings have direct implications for higher education institutions seeking to integrate generative AI into language curricula. First, this study highlights the need for structured training programs that help students critically evaluate AI output and use tools ethically. Second, institutions should consider adopting flexible AI integration policies that support both innovation and academic integrity. Faculty development programs should also be introduced to ensure instructors are equipped to guide students in reflective and responsible AI use. Lastly, AI tools should be positioned not as replacements for human instruction, but as pedagogical partners that complement traditional methods, supporting differentiated and inclusive learning experiences.