Article

Students’ Perceptions of Generative Artificial Intelligence (GenAI) Use in Academic Writing in English as a Foreign Language †

by Andrew S. Nelson 1, Paola V. Santamaría 1, Josephine S. Javens 1 and Marvin Ricaurte 2,*

1 Yachay Tech Language Center, Yachay Tech University, Hacienda San José s/n y Proyecto Yachay, Urcuquí 100119, Ecuador
2 Grupo de Investigación Aplicada en Materiales y Procesos (GIAMP), School of Chemical Sciences and Engineering, Yachay Tech University, Hacienda San José s/n y Proyecto Yachay, Urcuquí 100119, Ecuador
* Author to whom correspondence should be addressed.
† This paper is an extended version of the paper entitled “Students’ Perceptions of Generative AI Use in Academic Writing”, presented at the 17th International Conference “Innovation in Language Learning”, Florence, Italy, 7–8 November 2024.
Educ. Sci. 2025, 15(5), 611; https://doi.org/10.3390/educsci15050611
Submission received: 28 February 2025 / Revised: 18 April 2025 / Accepted: 15 May 2025 / Published: 16 May 2025
(This article belongs to the Special Issue Emerging Pedagogies for Integrating AI in Education)

Abstract:
While research articles on students’ perceptions of large language models such as ChatGPT in language learning have proliferated since ChatGPT’s release, few studies have focused on these perceptions among English as a foreign language (EFL) university students in South America or their application to academic writing in a second language (L2) for STEM classes. ChatGPT can generate human-like text, a capability that worries teachers and researchers. Academic cheating, especially in the language classroom, is not new; however, the concept of AI-giarism is novel. This study evaluated how 56 undergraduate university students in Ecuador viewed GenAI use in academic writing in English as a foreign language. The research findings indicate that students worried more about hindering the development of their own writing skills than about the risk of being caught and facing academic penalties. Students believed that ChatGPT-written work is easily detectable and that institutions should incorporate plagiarism detectors. Submitting chatbot-generated text in the classroom was perceived as academic dishonesty, whereas fewer participants considered it dishonest to submit an assignment machine-translated from Spanish to English. The results of this study will inform academic staff and educational institutions about how Ecuadorian university students perceive the overall influence of GenAI on academic integrity within the scope of academic writing, including reasons why students might rely on AI tools for dishonest purposes and how they view the detection of AI-based work. Ideally, policies, procedures, and instruction should prioritize using AI as an emerging educational tool and not as a shortcut to bypass intellectual effort. Pedagogical practices should minimize the factors that have been shown to lead to the unethical use of AI, which, in our survey, were academic pressure and lack of confidence. By and large, these factors can be mitigated with approaches that prioritize the process of learning rather than the finished product.

1. Introduction

It is generally accepted that technology is a great facilitator of the language learning and teaching process, and two emerging technological trends are the use of digital tools and artificial intelligence (AI) (Jomaa et al., 2024; F. Xiao et al., 2025). Many digital tools are available, and language learners and educators use them in a variety of ways; they include digital flashcards, individual and group games, presentation software, learning management systems, various language learning platforms, social media, and many others. Digital tools used by students may include quiz applications to support vocabulary acquisition (Quizlet, Quizizz, Kahoot), puzzle applications to support vocabulary learning (web-based minigames, interactive jigsaw puzzles, and card design games), and platform applications (learning management systems such as DynEd, Edmodo, Moodle, and Virtual Room) to support grammar learning and reading and writing development (Lim & Toh, 2024). One popular digital tool in language learning is Google Translate. It has been shown that EFL (English as a foreign language) learners use Google Translate for autonomous language learning through a combination of translation, text-to-speech synthesis, and automatic speech recognition (van Lieshout & Cardoso, 2022).
While digital tools have been used for several decades, generative AI (GenAI) tools, including ChatGPT, are relatively new. ChatGPT is “a powerful text-generating dialogue system” that “generates human-like responses to inputs from users” (Mehta, 2023). Since ChatGPT was made public in November 2022, it “quickly went viral on social media as users shared examples of what it could do[:]…everything from travel planning to writing fables to cod[ing] computer programs” (Marr, 2023). The ways that teachers and students use GenAI tools are still developing.
ChatGPT offers many potential benefits for student learning. It can provide a personalized educational experience that is often impossible to achieve in a traditional classroom setting and that would otherwise be too expensive for many (Y. Xiao & Zhi, 2023). The chatbot can act as a personal tutor or mentor that can answer questions, give easy-to-understand explanations of complex concepts, and engage students in conversations on various educational topics (Baidoo-Anu & Owusu Ansah, 2023; George et al., 2023). AI chatbots can be used as conversational partners to correct students’ mistakes in a target language. The independent learning opportunities that ChatGPT and other chatbots provide may be superior to the kind of independent learning that students have already been practicing with the internet for years. AlAfnan et al. (2023) mentioned that answers to questions put to ChatGPT are obtained more quickly and succinctly than from search engines. This efficiency allows students to save time and energy (Zhao et al., 2024). Kohnke et al. (2023) demonstrated the usefulness of ChatGPT in assisting reading comprehension; their research showed that ChatGPT can explain terms to students, and students can ask for explanations in their first language (L1), as well as definitions, examples, and sample sentences. Li et al. (2023), in their investigation of ChatGPT use among language learning communities on YouTube, found a positive reception to incorporating ChatGPT into language learning. The YouTube communities reported that ChatGPT can provide concise responses, which can help language learners save time. Furthermore, ChatGPT can offer relevant examples and practice questions and explain vocabulary, grammar, and cultural nuances. It was also reported that ChatGPT can benefit not only EFL learners but also students who want to learn languages that are well resourced on the internet.
The potential benefits of digital tools and AI use in education are exciting for many and promise to revolutionize education in the near future. However, as many experienced language teachers have undoubtedly noticed, technology can complicate the learning process. A familiar example for many language educators is machine translation (MT) technologies such as Google Translate (Shadiev et al., 2024). While MT provides many benefits in aiding understanding and expression in a foreign language at an efficient speed, students can use MT technologies to feign language skills that they do not possess, particularly in writing. For example, a student in an English for Academic Purposes (EAP) course could write an assigned essay in their first language (L1), use MT to translate it, and submit the translated product as their work (Almusharraf & Bailey, 2023). Ducar and Schocket (2018) go as far as to suggest that teachers may view foreign language “students’ reliance on [MT] [as] an absolute evil”, though they rightly advise teachers to learn to work within the reality of MT’s existence as it is unlikely to disappear. Similarly, ChatGPT and other GenAIs can be used to create text with minimal effort on the part of the student, which many teachers would consider academically dishonest. Indeed, Egloff (2024) stated that the ability of “AI tools [to] generate full essays at the click of a button is a major cause of alarm and source of anxiety for educators”.
Educators are concerned about ChatGPT’s impact on academic honesty, since its misuse may interfere with learning and assessment (Alghannam, 2024; Liu & Wang, 2024; Tram et al., 2024). Farrokhnia et al. (2024) claimed that ChatGPT could threaten academic integrity by perpetuating discrimination in education, democratizing plagiarism, and decreasing higher-order cognitive skills. Sullivan et al. (2023) conducted a research review on ChatGPT and academic integrity. Their systematic search indicated that the most common themes in the data collected were general concerns about academic integrity and cheating. ChatGPT is designed to generate responses based on prompts, and students could submit generated responses as their own work and ideas (Cotton et al., 2024). As chatbots become more advanced and sophisticated, there is a possibility that AI tools will be used more frequently, and students may use chatbots to cheat on assessments, thereby harming their own education, development, and growth (Qadir, 2022). AlAfnan et al. (2023) stated that students may have chatbots write assignments for them to avoid a late submission grade or a zero grade.
Clearly, the stakes for uncontrolled AI use in education are high. Yet before educators rush to create policies to control its use or begin to have conversations with students to raise their awareness about appropriate AI use, it would be advisable to find out what the students themselves think about GenAI use in education. Understanding how rural, STEM-oriented students—especially those in an EMI setting—perceive the use of AI tools in academic writing and how these perceptions challenge prevailing narratives of AI use in an EFL context will be helpful. Given the current popularity of GenAI tools, this study is crucial. It aims to explore Ecuadorian EFL students’ perceptions of GenAI tools, specifically ChatGPT, within the context of academic writing. Through a structured survey, this study investigates students’ definitions of AI-related academic misconduct, their motivations for using AI in their writing, and their perceptions of its impact on their learning process. Additionally, it examines students’ views on AI detection tools, institutional responses to AI-based dishonesty, and the potential role of AI as a support mechanism for academic writing. The findings seek to provide information to educators and policymakers to assist the development of policies and practices to effectively integrate AI literacy into the curriculum while fostering academic integrity and critical thinking in an increasingly AI-driven educational landscape. This paper is a revised and expanded version of the paper entitled “Students’ Perceptions of Generative AI Use in Academic Writing”, which was presented at the 17th International Conference “Innovation in Language Learning”, Florence, Italy, 7–8 November 2024 (Nelson et al., 2024).

2. Literature Review, Theoretical Framework, and Guiding Perspectives

2.1. Academic Dishonesty and Plagiarism

Kibler (1993) defined academic dishonesty (AD) as “forms of cheating and plagiarism that involve students giving or receiving unauthorized assistance in an academic exercise or receiving credit for work that is not their own.” AD is any act that tarnishes the integrity of educational systems. Examples of AD include “lying, cheating on exams, copying or using other people’s work without permission, altering or forging documents, buying papers, plagiarism, purposely not following the rules, altering research results, providing false excuses for missed tests and assignments, making up sources, and so on” (Lambert et al., 2003). Cheating is more likely to happen when the means of opportunity, incentives, and rationalization are present (Humbert et al., 2022). Academic pressure also increases academic cheating behavior, and there is a correlation between greater opportunity and greater desire to cheat or take shortcuts, especially when there is low supervision (Puspitosari, 2022). Heavy academic workload, combined with time constraints, may lead to cheating (Abbas et al., 2024). Sources of academic pressure include fear of academic failure, academic procrastination, the demand for good grades, academic competition, and academic dissatisfaction (Lhutfi et al., 2021). Additionally, the desire to save time, combined with easy access to the internet, may tempt students to find materials and copy, especially if students believe there will be little to no punishment for being caught (Park, 2017).
Pecorari (2013) defines plagiarism as “the reproduction or paraphrasing, without acknowledgement, from public or private (i.e., unpublished) material (including material downloaded from the internet) attributable to, or which is the intellectual property of, another including the work of students” (p. 9). Plagiarism involves the appropriation of existing material to create new text, which constitutes a violation of academic integrity. A special case of plagiarism that is relevant in EFL and ESL settings is cross-language plagiarism. In cross-language plagiarism, a text is found in one language, copied, machine-translated into a target language, and presented as one’s own work (Dinneen, 2021). For instance, a researcher could have a paper translated from Indonesian into English, edit the English version several times, and submit the plagiarized paper to local publishers (Ratna et al., 2017). With the number of available texts on the internet combined with free MT services, it is easy to make an “original” study (Zubarev et al., 2022). Alotaibi and Joy (2021) stated that cross-language plagiarism is difficult to detect as each language has its own structure.

2.2. Academic Dishonesty, Technology, and AI-Related Plagiarism

The prevalence of plagiarism has increased with the rise of the internet (Carter et al., 2019). Urlaub and Dessein (2022) stated that mobile technologies have led to the popularity of Google Translate, which has transformed language communication. When Google Translate first became available, its use raised concerns in educational settings about whether it could hinder students’ ability to produce written work without relying on it. ChatGPT today is drawing comparisons to Google Translate when it first premiered. ChatGPT is changing how plagiarism can be carried out (Azoulay et al., 2023). When a student uses ChatGPT to write an essay and submits the result as their own work, it constitutes “contract cheating”, akin to submitting a custom paper written by someone else (Anson, 2022). Some researchers argue that academic dishonesty is neither new nor inherently caused by technological advancements. Instead, academic dishonesty (driven by new technologies) is like “old wine in new bottles”, and digital tools simply facilitate previously existing behaviors (Selwyn, 2008). Regardless, the misappropriation or claiming of someone else’s intellectual property, ideas, or writing as original constitutes plagiarism in higher education (Carroll, 2013). Academic writing requires citing the original author’s ideas, concepts, and theories; however, ChatGPT is not transparent about where its sources come from, making it difficult to credit authors, and this lack of transparency may lead to challenges with academic integrity (Koos & Wachsmann, 2023). Many teachers believe that students frequently use AI for assignments (Delello et al., 2025).
The vast amount of information available online, combined with the ability to share and manipulate information digitally, has significantly increased opportunities for plagiarism (Chiang et al., 2022). There are new concerns about ChatGPT and AI-assisted cheating (Lo, 2023). Teachers are now facing “AI-related plagiarism”, or AI-giarism (Ghounane et al., 2024). The term AI-giarism was coined by Chan (2023), and it involves “the unethical practice of using artificial intelligence technology, particularly generative language models, to generate content that is plagiarised either from original human-authored work or directly from AI generated content, without appropriate acknowledgement of the original sources or AI’s contribution” (p. 4).
Henry (2023) suggested that ChatGPT can “even generate authentic-sounding personal reflections on a given scenario”, and it is difficult to determine whether the author of a document is a human or a chatbot (Chatterjee & Dethlefs, 2023; Salvagno et al., 2023). Brainard (2023) stated that ChatGPT can create well-informed reports, essays, and even scientific manuscripts. Mijwil et al. (2023) prompted ChatGPT to write a science article, and they concluded that the technology behind ChatGPT will only continue to grow and will become more responsive and human-like. Even ChatGPT users have reported that the tool can be misused for educational tasks (Haque et al., 2022). Although ChatGPT’s writing style lacks nuance, style, and originality (Salvagno et al., 2023), using it is still like letting a student have a friend take an exam on their behalf (Stokel-Walker, 2022). Thorp (2023) warned that many AI-generated texts will soon appear in the scientific literature, which is dangerous because it may erode trust in science. Dergaa et al. (2023) suggested that generative AI will impact all stages of the scientific publishing process, given that chatbots can generate all the parts of a manuscript, such as the research questions, hypotheses, and methodology. Chen (2023) stated that ChatGPT can be used to help overcome language barriers, such as by translating Chinese text into English. Since English is the primary language used in the sciences, and many scientists do not speak English as their first language, these authors may be less proficient in writing journal articles.
Students may have ChatGPT generate a paper for them instead of writing it themselves. In doing so, students misrepresent their work, and they can damage the credibility and reputation of both themselves and their educational institution (Leong & Bing, 2025). Academic dishonesty is such a grave issue because it ultimately “defrauds those who may eventually depend upon [graduates’ supposed] knowledge and integrity” (Pavela, 1997). While students may have a clear understanding of traditional plagiarism, they may have less understanding of the implications of AI-generated content in academic work (Chan, 2025). Teachers need to prevent cheating in all its forms to ensure that students develop into ethical professionals (DiVall & Schlesselman, 2016). Unfortunately, teachers may struggle to determine whether a text was created by a student or by ChatGPT, making it difficult to evaluate students’ comprehension accurately (Perkins, 2023). This raises the question: can we detect AI-generated content?
Teachers and students may be familiar with tools, such as Turnitin, that detect copy-and-paste-type plagiarism from texts available on the web. Unfortunately, detecting generative artificial intelligence content is not that simple, and there is disagreement about the reliability of existing AI detectors. Alexander et al. (2023) stated that neither humans nor AI detectors are reliable when it comes to determining whether AI wrote a text. Gao et al. (2023) stated that when a person copies and pastes a generated abstract and then edits it, thereby making it a human–AI-generated product, human reviewers will correctly identify abstracts as being generated by ChatGPT only 68% of the time. Gorichanaz (2023) stated that using AI to rewrite itself can deceive AI detectors, because the person can ask ChatGPT to change sentence types and lengths and insert typographical errors on purpose. Clark et al. (2021) found that untrained evaluators could not distinguish between human- and GPT-3-generated texts (stories, news articles, and recipes). In their study, Khalil and Er (2023) had ChatGPT generate 50 essays, which they then submitted to plagiarism detection software. Of the essays inspected, the software considered 40 to contain a high level of originality—a similarity score of 20% or less. Foster (2023) submitted GPT-4 writings to Turnitin’s AI detection model. By repeatedly changing the prompts, the author created a prompt that resulted in a ChatGPT output that Turnitin rated as ‘original’. With time and effort, it is possible for a 100% AI-generated essay to score 0% plagiarism on Turnitin’s AI detection. AlAfnan and MohdZuki (2023) stated that Turnitin has announced that it has been working on AI detection software. However, as Turnitin develops its software, OpenAI develops newer and more advanced versions of ChatGPT. Rudolph et al. (2023) stated that teachers worry that students will not complete their own written assignments independently because ChatGPT can generate text without alarming plagiarism detection, while other researchers, such as Ladha et al. (2023), stated that AI detectors can accurately identify AI-generated text. There is disagreement about AI detection software accuracy (Uzun, 2023). Research on AI-text detectors and classifiers is still growing, which leaves L2 educators with few resources to face the challenge of AI-assisted plagiarism (Ibrahim, 2023).

2.3. Students’ Perceptions of Plagiarism and AI in Academic Dishonesty

Studies on students’ perceptions of plagiarism have revealed their understanding of plagiarism and authorship, particularly concerning the internet. In the UK, undergraduate students self-reported moderate internet-based plagiarism over the past year (Selwyn, 2008). In contrast, students in Israel perceived online information as part of the public domain (Baruchson-Arbib, 2004), a view that differs significantly from Hungarian students, who acknowledged the importance of responsible use and proper citation to avoid academic misconduct (Fajt & Schiller, 2025). Both Ghanaian and Rwandan students appreciated ChatGPT’s ability to facilitate faster and easier work, although both groups expressed uncertainty about what constitutes academic misconduct (Anani et al., 2025; Clarke et al., 2023). Scholars such as Pecorari (2013) stated that “students who incorporate source material without citation are not necessarily attempting to deceive their readers, but are often doing what they have been taught to do: use expert texts as models for their own writing” (p. 320). Therefore, students borrow texts as a developmental learning strategy. This perspective clashes with traditional views of plagiarism and academic misconduct, which assume that students are intentionally claiming someone else’s work as their own. This echoes the theme of “writing passages that are not copied exactly but that have been borrowed from another source” (Howard, 1995, p. 799). If students, like those in Ghana and Rwanda, are using AI-generated text to develop their writing skills, they may inadvertently plagiarize. Although tools like ChatGPT can support writing development, using AI-generated text without proper attribution may still constitute academic misconduct under institutional policies and standards. Carroll (2013) contended that when second-language students plagiarize, it is often due to unfamiliarity with academic norms rather than intentional dishonesty. 
In some cultures, for instance, practices such as memorization and textual reuse are considered respected forms of learning—practices that may conflict with Western expectations of originality and citation. Therefore, educators should approach such cases with cultural sensitivity and treat them as opportunities for instruction rather than punishment.
Several studies highlight the perceived advantages of ChatGPT, with students appreciating its ability to save time, provide personalized feedback, and offer a wide range of information, making it a helpful tool for learning (Valova et al., 2024). For instance, Ngo (2023) found that university students in Vietnam valued ChatGPT for improving learning efficiency and providing individualized tutoring. Similarly, Farhi et al. (2023) reported that students in the United Arab Emirates felt they benefited from ChatGPT as a virtual tutor, language practice partner, and writing support system. The results of a pilot study conducted by Bitzenbauer (2023) on the application of ChatGPT in a physics classroom showed that its incorporation was viewed favorably and influenced students’ perceptions of physics and science. Students who were able to use ChatGPT firsthand rated the experience highly. In Latin America, Cherrez-Ojeda et al. (2024) found that 70% of medical students said that they used ChatGPT for homework support; they also used the chatbot for support with research paper writing, medical/healthcare education and training, and mental health support. Acosta-Enriquez et al. (2024), in their survey of undergraduate students from universities in Peru, found that students who consider ChatGPT a valuable tool for their learning process are more likely to use it frequently than those who do not perceive its importance, and that students who have a greater intention to verify information and use ChatGPT responsibly tend to perform better in their academic activities. Román-Acosta et al. (2024) found that the postgraduate students they surveyed occasionally used ChatGPT to generate content as well as to structure ideas. Notably, 31.6% of the participants (nearly one-third) never included an acknowledgement or attribution statement when using text generated by ChatGPT.
Y. Xiao and Zhi (2023) found that students have positive feelings towards ChatGPT, but one concern was that students might use chatbots to take shortcuts—leading to plagiarism in writing assignments. Four of the five students interviewed in that study believed that universities should embrace ChatGPT rather than ban it, despite their concerns regarding plagiarism. Elkhodr et al. (2023) found that students view chatbots as valuable and enjoyable; however, they have concerns about whether the text generated in response to their questions is irrelevant or incorrect. Additionally, students worried that they could become overly reliant on chatbots for text generation and that this dependence on AI could negatively affect their critical thinking and problem-solving skills. Singh et al. (2023) found that students expressed that improper use of ChatGPT could negatively impact critical thinking skills as well as other skills used in the research process, such as writing conclusions. Studies by Das and Madhusudan (2024) and Valova et al. (2024) indicated that students might rely too much on chatbots, which could reduce their ability to think critically and be creative. Farhi et al. (2023) found that students believed that ChatGPT generated high-quality written content; however, this could undermine academic integrity, and students expressed concerns that they might become overly dependent on ChatGPT for academic assignments. Bikanga Ada (2024) also found that students believe universities should allow the use of ChatGPT and that it is fair for students to use ChatGPT for academic purposes; in total, 87.8% of the students reported that they used ChatGPT weekly to compensate for missed class lectures and to fill knowledge gaps. Albayati (2024) found that students deem ChatGPT beneficial in terms of ease of use, but students also acknowledged drawbacks such as privacy and security concerns.
Students expressed that ChatGPT’s inability to replicate human interaction and mentorship poses challenges, particularly in fostering academic growth and emotional support (Rahman et al., 2023; Sila et al., 2023). Moreover, specific technical limitations of ChatGPT have been noted, such as difficulties in generating accurate or detailed information on complex topics and a limited capacity to handle certain academic tasks effectively, such as complex mathematical expressions (Ngo, 2023). These drawbacks have raised concerns about the use of ChatGPT, with students and educators alike debating its role in promoting or hindering genuine academic development (Bonsu & Baffour-Koduah, 2023; Farhi et al., 2023). Like students in other countries, Ecuadorian students face heavy workloads and time constraints, which may lead them to seek self-guided learning support, making tools like ChatGPT particularly appealing. As AI tools gain popularity across South America, their usage among students is likely to increase. It is therefore essential to examine whether Ecuadorian students use AI tools in ways similar to their peers abroad. Equally important is understanding how Ecuadorian students specifically engage with ChatGPT. This includes identifying perceived risks, exploring how and why these tools are used, and situating these insights within the local Ecuadorian context. By comparing Ecuadorian students’ responses with those of students from other countries, we may identify common trends and uncover opportunities for educational improvement.

2.4. Theoretical Framework and Guiding Perspectives

The Transformative Learning Theory, developed by Mezirow (2003), serves as the main theoretical framework for this study. For an in-depth exploration of how Mezirow’s Transformative Learning Theory has evolved, see Kitchenham (2008) and Christie et al. (2015). The Transformative Learning Theory was selected because it provides a valuable framework that can be applied to understanding students’ perceptions of AI within academic contexts.
The term transformative learning is defined by Mezirow (2003):
Transformative learning is learning that transforms problematic frames of reference—sets of fixed assumptions and expectations (habits of mind, meaning perspectives, mindsets)—to make them more inclusive, discriminating, open, reflective, and emotionally able to change. Such frames of reference are better than others because they are more likely to generate beliefs and opinions that will prove more true or justified to guide action.
(pp. 58–59)
The Transformative Learning Theory emphasizes the role of critical reflection in enabling individuals to reassess their assumptions and beliefs in response to new experiences (Mendoza, 2020). Critical reflection fosters a more inclusive worldview, openness to diverse perspectives, and the ability to connect personal experiences in meaningful ways (Enkhtur & Yamamoto, 2017). Within this framework, reflective processes can be used to encourage students to confront the implications of using AI in academic writing with a particular relevance in discussions of authorship, originality, and academic integrity—areas of concern when AI tools are employed in educational settings. As Castañeda and Selwyn (2018) argued, “…digital technologies do not simply support the transmission or exchange of information between staff and students. Instead, these technologies mould peoples’ values, beliefs and behaviours” (p. 4).
Therefore, Mezirow’s emphasis on self-reflection and the critical examination of assumptions is especially relevant as students determine when and how AI-assisted writing is appropriate and reflect on the motivations behind using such tools. Can et al. (2023) found that student engagement with ChatGPT often raised concerns about potential misuse and academic dishonesty, thereby reinforcing the relevance of the Transformative Learning Theory in guiding critical reflection on students’ use of AI tools.
To complement the Transformative Learning Theory and provide a more nuanced understanding of students’ engagement with AI in academic writing, this study also examines academic dishonesty, the detection of AI-generated content, and students’ perceptions of plagiarism through five lenses or guiding perspectives—process-oriented learning, constructivism, social constructivism, computational constructivism, and Wenger’s Communities of Practice (CoP)—each of which offers a specific perspective on how students construct knowledge and interact with AI tools.
Process-oriented learning emphasizes cognitive and metacognitive skill development rather than focusing solely on final written products (Gibson et al., 2023). This iterative approach—planning, drafting, revising, and reflecting—promotes deeper engagement and critical thinking. However, LLMs like ChatGPT threaten this iterative learning by allowing students to bypass critical stages, thereby reducing writing to generating a finished text (Rodrigues et al., 2025).
Constructivist and social constructivist theories reflect specific epistemological beliefs about how individuals acquire knowledge. Epistemological beliefs encompass individuals’ viewpoints about the nature of knowledge and how it can be acquired, verified, and justified, and these beliefs shape how individuals behave, make decisions, and acquire knowledge (Wang & Kim, 2023). The educational theory of constructivism proposes that learners construct new knowledge by building from their current knowledge through an active process in which they negotiate their understanding of new information based on experiences and interactions with the environment (Amineh & Asl, 2015; Rasul et al., 2023). Some authors state that constructivism sees learning as both an active process and a personal representation of the world (Suneetha, 2014), while others say that it is an educational philosophy that focuses on exploring context-specific truths and that education is most successful when students take an active role in making sense of what they learn (Mohammed & Kinyo, 2020). It has been stated that students should seek “the pleasure that is inherent in solving a problem seen and chosen as one’s own” (Von, 1995, p. 7). Also, students need to reflect, monitor, and direct their own learning (Teo & Zhou, 2017).
Social constructivism is a branch of constructivism that highlights the role of collaboration and social interaction in the learning process. Individuals are seen to participate actively in the learning process through social interactions (Jha, 2017). It emphasizes the importance of social exchanges for learners’ cognitive growth and the role of culture and history in student learning (Applefield et al., 2001).
Educators who use constructivist epistemological frameworks to guide their teaching practices should not take the role of the teacher-centered lecturer who presents facts for students to memorize. Instead, the teacher should shift from the role of the transmitter of knowledge to the role of facilitator or guide who uses scaffolding, tutoring, cooperative learning, and learning communities in the classroom, and the most important goal is to help the learner to become an effective thinker (Amineh & Asl, 2015). Writing teachers (under the constructivist theory) can enhance student learning “by emphasizing the writing process and require[ing] students to document stages of their drafting process” (McGuire et al., 2024, p. 338). AI-generated content provides polished results with minimal cognitive engagement, aligning with computational constructivism’s concern that AI may encourage passive consumption rather than active knowledge construction (Ateeq et al., 2024; Chan, 2023).
AI’s educational use should align with Freire’s Critical Pedagogy, where education empowers students to engage critically with knowledge rather than passively consuming it (Freire, 2000). Educators should guide students in reflective, critical engagement with AI-generated content, facilitating co-construction of knowledge through dialog (Alghannam, 2024). Institutions must educate students about ethical AI use, shifting the focus from punitive measures to promoting responsible, reflective practices (Ateeq et al., 2024; Gibson et al., 2023; Wenger, 1998).
While academic dishonesty is longstanding, AI exacerbates the issue, making plagiarism easier and detection harder (Pudasaini et al., 2024). Students may view AI-generated content as “less dishonest”, particularly when used to refine rather than create original work (Ateeq et al., 2024). Yet reliance on AI diminishes cognitive effort, negatively impacting deeper learning and intrinsic motivation (Gerlich, 2025; Rodrigues et al., 2025).
Wenger’s Communities of Practice (Wenger, 1998) provides insight into how AI shapes academic behaviors (Wenger-Trayner et al., 2023). Communities form through shared engagement and collective learning, establishing norms around AI use. If students normalize AI-assisted writing, institutions risk weakening academic integrity and critical thinking. Overreliance on AI may devalue originality, blur ethical boundaries, and encourage superficial learning (Chan, 2023; Rodrigues et al., 2025). Wenger-Trayner et al. (2023) stressed that authentic community learning emerges from shared problem-solving and deep engagement—something potentially undermined by superficial AI use. Clear institutional guidelines and ethical frameworks are essential to maintaining academic standards and fostering responsible AI use (Ateeq et al., 2024).
Researchers encourage more surveys to better understand how students engage with ChatGPT and other GenAI tools (Sullivan et al., 2023). A systematic review of academic dishonesty in online learning environments (Chiang et al., 2022) examined 59 articles from 18 countries; none originated from Ecuador, and the closest country represented was Colombia (one paper). In addition, another systematic review examined ChatGPT in ESL and EFL education during the 1.5 years following ChatGPT’s release in 2022 (Lo et al., 2024); nearly half of the 70 empirical studies came from East Asia, and none came from South America. In fact, South America was not listed at all. Given this lack of research on how students in Ecuador engage with ChatGPT, it is vital to conduct studies in this region to fill a significant gap in the literature.
A gap exists in the literature related to undergraduate STEM students’ perceptions of artificial intelligence use in EFL classes in South America. By investigating how EFL students in a rural, STEM-focused higher education institution in Ecuador perceive and interact with AI for academic writing, this study contributes to an enhanced understanding of the role of technology in English-mediated education in geographically remote settings. It also sheds light on how AI influences academic integrity, learning behaviors, and writing development in non-traditional EFL populations. The results of this paper can inform conversations that teachers may have with students as they seek to understand their perspectives, raise awareness, and influence classroom behavior; it can also inform conversations among policymakers as they work to establish clear guidelines and approaches to the appropriate use of GenAI in education. Finally, the authors will provide recommendations to EFL practitioners, especially those who teach in South America.

3. Method

3.1. Aim of Research

The present study employs a multiple-choice survey to explore Ecuadorian EFL students’ perceptions of AI chatbots such as ChatGPT in creating and improving their writing in English, with a specific focus on academic dishonesty. The survey explores how students define cheating, why they believe students use ChatGPT to cheat, the impact of these technologies on cheating rates, and their opinions on the detection, consequences, and prevention of cheating with AI. Additionally, the survey examines how students think ChatGPT can support student writing and whether chatbots should be used for this purpose at all. The study contains five research questions (RQs).
  • RQ1: How do students perceive the dishonest use of generative AI in L2 writing, including how they define it, their views on its negative implications, and the motivations they believe drive students to use it? This research question and its sub-questions address general aspects of students’ perceptions of the dishonest use of generative AI in L2 writing:
    • RQ1.1: What definition and examples do students give for academic dishonesty in L2 writing with ChatGPT?
    • RQ1.2: What negative consequences of using AI dishonestly in their L2 writing can students identify?
    • RQ1.3: What do students believe are their motivations for using AI dishonestly in their L2 writing?
  • RQ2: What do students believe about how easy it is to detect AI-generated textual content?
  • RQ3: What do students think teachers and institutions should do about AI-based academic dishonesty in writing in terms of response to and prevention of dishonesty?
  • RQ4: Do students think it is acceptable to use ChatGPT and similar technologies in their academic writing, and what reasons do they give for using it?
  • RQ5: How do students think tools like ChatGPT have already affected academic integrity in writing, and what predictions do they make about future generative AI use in writing?

3.2. Study Design

3.2.1. Context and Participants

The present study was conducted at a rural university in northern Ecuador, situated high in the Andes, far from major urban centers and other higher education institutions, where the student population consists mainly of Ecuadorian students. The university’s undergraduate offerings are exclusively STEM programs (Ricaurte et al., 2022; Ricaurte & Viloria, 2020). As a partial English as a Medium of Instruction (EMI) institution, English is necessary for academic study, research, and professional development in at least half of the students’ academic programs. The geographical isolation of this university presents unique educational challenges and opportunities. Unlike students in urban institutions, these learners do not have immediate access to extensive academic networks, international conferences, or English-speaking environments. Consequently, their English proficiency development is mainly classroom-dependent, and their exposure to academic writing in English is primarily mediated through formal instruction rather than organic immersion. This context makes integrating AI tools, such as ChatGPT, particularly relevant, as these technologies can serve as supplementary learning resources in an environment with limited access to native English speakers or specialized academic support.
The 56 participants who responded to the survey were intermediate (B1-level) EFL students enrolled as undergraduates at the university described above. The participants had undeclared majors at the time of this study, but all intended to declare STEM majors; none were being trained as English teachers. The students belonged to two sections of a required B1-level communicative EFL course taught by one of the authors. Demographic information was collected from students and is illustrated in Figure 1. Overall, 31 self-reported as male, 24 as female, and 1 as other. Age information revealed that 29 students were 18 years old, 19 students were 19 years old, and 4 students were 23 or older.
The survey instrument, described in Section 3.1, contains 11 questions related to perceptions and the use of AI in writing (see Appendix A).
The study adhered to ethical guidelines for educational research, ensuring voluntary participation, anonymity, and minimal influence from the instructor (Columbia University, 2024; University of Waterloo, 2024). Participation was explicitly voluntary, clearly communicated in the consent form, and students could opt out without academic consequences. The survey was administered at the end of class to avoid coercion, allowing students to leave freely if they chose not to participate. Anonymity was ensured by assigning numerical codes to responses rather than using identifying information. The survey was conducted through Microsoft Forms, allowing private and independent participation without researcher oversight, aligning with best practices to mitigate social desirability bias (University of Waterloo, 2024).
A clear distinction between instructional and research roles was maintained. Although present to address technical issues, the researcher avoided interaction regarding students’ responses. The researcher’s dual role was managed according to Institutional Review Board recommendations, clarifying the role as researcher, not evaluator, and ensuring confidentiality throughout data collection (Columbia University, 2024). The study strictly followed ethical strategies emphasized in educational research, including voluntary participation, separation of teaching and research roles, anonymous data collection, and explicit opt-out procedures (University of Waterloo, 2024). These practices ensured the reliability and authenticity of student responses.

3.2.2. Data Collection

An anonymous, multiple-choice Microsoft Forms questionnaire was created and delivered to students. It was written in English and included 11 questions, plus an additional item about age and gender. The questions (Appendix A) assessed students’ opinions of what might constitute academic dishonesty with generative AI applied to writing and what constitutes appropriate use. All questions included the option “other”, and students who chose this option were asked to provide an explanation. Although the survey included open-ended “other (please specify)” response options, many participants did not respond to these prompts; across the 11 questions, only one student provided an in-depth response. Consequently, qualitative insights were not central to the analysis. Open-ended responses were reviewed informally but were not subjected to systematic coding. All responses were recorded with the participants’ consent.
The questionnaire was administered in the two classes that one of the researchers was assigned to teach. These students were given information about the topic of the study three days before the survey, and at that time they were also notified that their answers would be anonymous. The researcher noted that the students seemed enthusiastic about being part of the study. The survey was administered the following Monday during the first hour of each class, after students had signed a consent form, and was accessed via a link shared through a chat application. The researcher monitored students during the survey and helped them with questions. The average completion time was 35 min 7 s, including outliers who completed the survey in as little as 2–3 min and one who took 108 min; the estimated completion time was 11 min. The questionnaire return rate was 100%, likely because attendance is a graded component of the course, meaning that students must attend regularly and consistently as part of their academic performance. Since students were accustomed to being present every class session, participation in the survey was naturally high, contributing to the perfect response rate. Although the study achieved a 100% response rate, this figure must be interpreted cautiously: participation occurred in a required course where class attendance was graded, and the instructor-researcher was physically present during survey administration.
The student–teacher power dynamic may have compromised the neutrality of responses, as these contextual factors likely influenced students’ decisions to participate, thereby introducing a power imbalance. A 100% completion rate is unusual and warrants scrutiny. While explicit steps were taken to ensure voluntary participation, such as emphasizing that students could leave freely, the instructor’s presence may have inadvertently created pressure to comply. This dynamic constitutes a methodological limitation, as it may have affected the authenticity of student responses or introduced social desirability bias. Future research should consider using a third-party facilitator or administering surveys outside class hours to minimize these concerns. In addition to addressing concerns about the 100% response rate, it is also essential to ensure conceptual clarity. Therefore, to avoid definitional ambiguity in the Results and Discussion section, several key terms are operationalized as follows:
Pseudo-success: Pseudo-success refers to situations in which students appear to achieve academic success (for example, a strong grade on an essay), but they do not meaningfully engage with the materials because they used AI tools to think for them. Said differently, students achieve good grades without truly understanding the material.
Dishonest use: Dishonest use is when students plagiarize. They use GenAI to complete assignments and do not report that they used AI to help them (a lack of transparency). Dishonest use is when students misrepresent who authored a written assignment, such as an essay.
Ethical AI Practices: Ethical AI practices are the transparent and responsible use of AI in academic contexts. These practices refer to adhering to academic policies and crediting GenAI text (transparency).
Raising Awareness: Raising awareness refers to informing students, teachers, and administrators about the importance of creating original works.
Implementing Policies: Implementing policies refers to developing, communicating, and enforcing institutional rules or guidelines related to AI use and academic writing.
The purpose of these definitions is to ensure consistent conceptual anchoring of key concepts throughout the Results and Discussion section.

4. Results and Discussion

4.1. RQ1: Students’ General Perceptions of Generative AI

The survey questions related to this research question dealt with students’ general perceptions of academic dishonesty related to using ChatGPT and similar technologies in producing academic writing. The questions were as follows:
  • Q1: How do you define academic dishonesty involving AI technologies like ChatGPT in the context of your writing production?
  • Q2: What specific examples can you provide for using AI technologies dishonestly in your writing?
  • Q3: What are the main reasons someone might use AI technologies dishonestly in your writing production?
  • Q4: What do you believe are the consequences of using AI dishonestly in your writing?
Figure 2 shows the following results:

4.1.1. Q1: Main Findings

Only 34% of participants answered that copying, AI use, and translating all count as academic dishonesty. The results from question Q1 indicate that 64% of students have an incomplete understanding of academic dishonesty and its close relationship with copying texts, submitting AI-generated essays, and translation. The finding that only 34% of participants identified copying, the use of AI, and translation as forms of academic dishonesty contrasts with other studies in which students showed higher levels of awareness. For example, Cotton et al. (2024) reported that over 60% of students in their study considered AI-generated content without proper citation to be a form of misconduct. Similarly, Ateeq et al. (2024) found that most students recognized AI misuse (paraphrasing without attribution) as violating academic integrity norms. The relatively low percentage in the present study suggests that this group of EFL students may lack sufficient exposure to institutional policies and discussions about ethical writing practices, especially concerning emerging technologies like ChatGPT. One teaching practice educators could implement is raising awareness by having students discuss academic dishonesty practices in class, as Freire’s Critical Pedagogy suggests that students need to engage critically with knowledge (Freire, 2000).
In addition, direct instruction could be used to define what constitutes academic dishonesty and explicitly state why it matters, as this approach will serve as a cognitive scaffold aligned with process-oriented learning theory, which emphasizes that awareness of ethical practices is foundational for the development of academic writing skills (Gibson et al., 2023). The purpose of academic integrity is to uphold the value of students’ future degrees. A degree signifies that a student knows things and can do things and has fairly met certain standards and achieved learning outcomes. Cheating can help students obtain a credential (a diploma), but without the underlying knowledge and skills, they will not be prepared for the workforce. This finding may reflect a deeper cultural and institutional reality specific to the Ecuadorian EFL context (Castañeda & Selwyn, 2018). Writing is not only a classroom activity but a requirement that plays a key role in students’ academic progress. In this setting, students must often submit essays as part of essential evaluations, such as graduation projects or final term assignments, that determine whether they move forward in their studies. When students do not view AI-generated, translated, or copied texts as dishonest, it may not be an intentional act of misconduct, but rather a reflection of how writing is understood within their academic environment. Instead of being seen as a process of learning and development, writing is often viewed as a final product that must be completed to meet institutional expectations. This practical view, along with limited exposure to international standards of academic honesty, may help explain why many students do not recognize these practices as dishonest. For this reason, educators should focus not only on defining academic dishonesty, but also on helping students see writing as a meaningful activity that supports critical thinking and personal growth.

4.1.2. Q2: Main Findings

The results of Q2 indicate some disagreement between student and teacher perceptions of what constitutes academic dishonesty. Only 29% of participants answered that using AI, copying AI output, and not citing sources are all examples of using AI dishonestly in EFL writing. The results obtained on question Q2 indicated that 66% of students have an incomplete idea of specific examples of using AI technologies dishonestly in their writing, including using and copying AI without acknowledgment. Chan (2025) mentioned that students clearly understand traditional plagiarism but understand GenAI plagiarism less clearly; this contrasts with our results, in which only a minority of students understood that all three options constituted plagiarism. To overcome this disconnect, educators could implement guided discussions on academic integrity; Wenger’s Communities of Practice (Wenger, 1998) support this strategy, in which ethical norms are developed through dialog and shared participation in a learning community. Since most students did not identify key forms of dishonesty, this suggests a gap in their understanding of academic integrity within AI-assisted writing. Teachers can also raise awareness by facilitating classroom discussions on why academic integrity matters, linking it to the credibility of their degrees and future professional competence (Mezirow, 2003). They can engage students in problem-based learning scenarios where students must determine whether specific AI-related writing practices are ethical. Rather than simply defining what constitutes AI-related dishonesty, teachers should adopt a social constructivist approach, explaining how academic dishonesty with AI mirrors workplace dishonesty, contributing to professional incompetence, corruption, and societal underdevelopment, and connecting this information with real-world consequences (Chan, 2023; Mendoza, 2020).
Finally, teachers must work with students to co-create classroom policies on responsible AI use, ensuring they internalize and apply ethical writing principles in their academic work.

4.1.3. Q3: Main Findings

The results from question Q3 indicate that 84% of students gave valid reasons for the dishonest use of AI technologies in their writing, such as academic pressure, confidence issues, and over-reliance. Students discussed the need for high academic results, which mirrors the idea that students may use chatbots to avoid penalties for late work or receiving a score of zero on assignments. Students can co-create deadlines with teachers when academic stress becomes a barrier to academic performance. In line with Wenger’s Communities of Practice (Wenger, 1998), the findings suggest that students should actively shape the norms that govern their educational experiences. If students are facing academic pressure, it may be due to poor time management and procrastination, and educators can help alleviate some of these pressures by having conversations to change how students approach schoolwork (Choo & Tan, 2023; Heriyati & Ekasari, 2020); this suggestion strongly aligns with Freire’s ideas of empowerment, reflection, and dialog (Freire, 2000). Related to dialog, students and teachers can co-create their own knowledge through discussion, real-world analysis, and ethical reasoning tasks; such strategies are consistent with the principles of computational constructivism, which promotes active, technology-supported problem-solving (Ateeq et al., 2024).

4.1.4. Q4: Main Findings

The results from question Q4 indicate that 87% of students have an incomplete idea of the consequences of using AI technologies dishonestly in their writing, which include academic penalties, hindered development, and pseudo-success, among others. The data suggest that many students perceive negative consequences associated with using generative AI in academic writing. This finding is consistent with studies by Elkhodr et al. (2023) and Singh et al. (2023), whose students believed that AI negatively affected critical thinking, problem-solving skills, and other skills used in the writing process. The concept of hindering writing skills also relates to the impact of academic dishonesty on students’ own academic growth (Rahman et al., 2023; Sila et al., 2023). When students use AI tools to create a well-written essay (rather than engaging in the mental processes that lead to a deeper understanding of the material), they bypass the steps that constructivist theory deems necessary for learning to take place, as inappropriate use of AI undermines the epistemological process of knowledge construction (Von, 1995; Wang & Kim, 2023). Educators can facilitate classroom discussions on the long-term effects of AI reliance (weakened writing skills and pseudo-success); this recommendation aligns with process-oriented learning theory because authentic writing involves not only the creation of a final product but also the cognitive effort required to produce it (Rodrigues et al., 2025). Moreover, educators should design activities that contrast AI-generated output with student-created content, allowing learners to see the differences in depth, personalization, and skill development. Institutions might also consider incorporating formative assessments and reflective tasks where students evaluate their own learning processes, helping them understand that academic success is not just about completing tasks, but about meaningful growth.
By leveraging students’ existing concerns, especially about being penalized or unprepared, teachers can motivate ethical behavior and promote process-oriented learning practices that emphasize integrity, effort, and authentic achievement.

4.2. RQ2: What Do Students Believe About AI Text Detection?

The survey related to this research question dealt with students’ beliefs about detecting AI-generated content. The question was as follows:
  • Q5: How do you perceive the detection of AI-based academic dishonesty in your writing?
Figure 3 shows the following results:

Q5: Main Findings

According to the results obtained, 73% of the students perceived that AI use is easily detectable or detectable if reviewed. Survey results show that students overestimate teachers’ abilities to detect works that an AI has written, as 46% of students believed that AI is easily detectable. In reality, AI-generated text is difficult to detect, and AI-detecting programs are unreliable for determining whether a text was written by AI (Alexander et al., 2023; Clark et al., 2021; Foster, 2023; Gao et al., 2023; Gorichanaz, 2023; Khalil & Er, 2023), yet the students surveyed did not seem to know this. Reasons for this discrepancy are open to speculation. A likely explanation is that students have a naïve understanding of how AI detectors function. Another possibility is that their previous teachers oversold the effectiveness of AI detection, either due to their own ignorance on the subject or in an attempt to deter plagiarism. Students may assume that AI detectors resemble the plagiarism detectors they are already familiar with and believe to be effective based on past experience. Or perhaps students, especially EFL and ESL learners, assume it would be easy for a teacher to detect AI-generated content from grammatical and mechanical perfection and sophistication that they themselves would be unlikely to produce.
A significant consequence of this mismatch is the debate as to whether teachers should disabuse students of this notion. It is reasonable for teachers to assume that students’ belief that AI is easy to detect would dissuade them from using AI for plagiarism; thus, teachers may not want to inform students of the truth about the difficulties of AI detection. However, this assumption is not necessarily supported by the research. Some research has shown that the likelihood of unethical behavior being detected does not reduce the probability that one will engage in an unethical act (Gamliel & Peer, 2013), while other researchers have found that it does (Dawson & Hanoch, 2024). Regardless of whether the ease of detectability affects a tendency towards unethical behavior, it is known that a host of factors affect the choice to engage in unethical acts (Belle & Cantarelli, 2017), so the argument that telling students that AI is difficult to detect will increase academic dishonesty has poor research support and is likely reductive, considering all the factors that can influence ethical decision making.
Furthermore, there are ethical considerations about letting students persist in their ignorance on this point. It has been shown that students who are aware of the failings of AI detectors have “rais[ed] concerns…regarding…the ethical implications and negative consequences of misclassification of genuinely original works as machine-generated outputs” (Angeles et al., 2024, p. 56). This is especially relevant in the EFL and ESL setting, as AI detectors have been shown to be especially prone to marking ESL and EFL writers’ work as AI-generated when it is not (Liang et al., 2023). Without the knowledge that AI detectors are highly fallible, students may not be able to adequately defend themselves against teachers and administrators who implement AI detectors without awareness of their unreliability. In terms of educational policies and procedures, administrators should provide training to their teachers on human and machine AI detection and establish clear guidelines on how teachers should address AI-giarism; such policies align with social constructivist theory, which holds that learning should be grounded in dialog (Jha, 2017). Future research might determine why some students do not understand that detecting AI-generated text is difficult and how that misunderstanding affects their tendency to cheat with AI. Finally, AI detectors themselves should be improved where possible.

4.3. RQ3: How Should Authorities Respond to AI-Based Academic Dishonesty?

The survey questions related to this research question dealt with students’ opinions on what teachers and institutional authorities should do when they encounter AI-based academic dishonesty in student writing and what they should do to prevent it from occurring. The questions were as follows:
  • Q6: How do you believe teachers or institutions should respond to AI-based academic dishonesty in writing?
  • Q7: What measures do you believe could effectively prevent AI-based academic dishonesty in writing?
Figure 4 shows the following results:

4.3.1. Q6: Main Findings

In total, 37% of the students reported that teachers or institutions should respond to AI-based academic dishonesty in writing by educating students, while 55% believed that stricter penalties should be implemented or AI-based plagiarism detectors used. Research underscores the importance of instruction and awareness-building in the classroom but also indicates that punitive actions are ineffective in the long term (Ateeq et al., 2024; Gorichanaz, 2023). Even so, the students reported that safeguards like AI detectors are necessary to maintain academic integrity. Similarly, researchers note that a lack of clear institutional guidance contributes to students’ uncertainty around responsible AI use (Rodrigues et al., 2025). While some institutions may focus on detection tools or strict regulations, the literature suggests that proactive education, framing AI as a literacy issue rather than a threat, is more impactful. Updated writing pedagogies that incorporate AI ethically must first be situated within clear conversations, instruction, and policies on appropriate AI use (Mezirow, 2003). To help students understand what constitutes unauthorized assistance, teachers could add a paragraph such as the following to their syllabi regarding AI use in the writing classroom:
Unauthorized assistance:
In a language class, evaluation is based on your ability to show that you are working towards mastery of the language, including showing skill with grammar, sentence structure, punctuation, and word choice that is appropriate to your level. You are not expected to produce perfect writing that is completely error-free and sounds like a native speaker wrote it. Rather, you should show that you have mastered the grammatical structures and vocabulary that have been covered in this course and in previous courses in the program. Because the teacher must have an accurate picture of your language skills at the time of evaluation, it is considered academically dishonest to use unauthorized assistance to complete your assignments. Unauthorized assistance may include, but is not limited to:
Having your paper revised, edited, and corrected by another person;
Having your paper edited and corrected by artificial intelligence or another computer program, such as Grammarly, Linguix, or Ginger, among others;
Using an online translator to help you understand written or spoken texts.
Your teacher will guide you on the appropriate use of outside resources and help you understand what constitutes unauthorized assistance.

4.3.2. Q7: Main Findings

Overall, 78% of students considered that raising awareness, providing support, and encouraging honesty are effective measures to prevent AI-based academic dishonesty, while 28% considered that strict monitoring and detection tools should be implemented. AI-based technologies are currently seen as tools that make life easier and reduce the time spent on schoolwork. However, universities and other educational institutions must show students the implications of academic dishonesty and provide conditions that encourage honesty. In the study by Y. Xiao and Zhi (2023), students believed that universities should embrace ChatGPT; in contrast, the students in our study believe universities need to provide support and raise awareness of AI-based academic dishonesty. The finding that 46% of students emphasized the need for support suggests that learners are not rejecting AI. Instead, they are asking for structured guidance and institutional resources to help them use it ethically and effectively. Teachers can reduce academic pressure by incorporating more low-stakes, ungraded writing practice into their courses. Students in this study seem more concerned with understanding how to navigate AI responsibly. This finding highlights an important role for educators: they must serve as instructors and mentors in digital literacy to help students develop the skills needed to evaluate and use AI tools appropriately. Educators should also validate students’ desire for support and create open, judgment-free spaces where students feel comfortable asking questions about what is allowed and why. This proactive, student-centered approach can foster responsible engagement with AI tools, helping students balance innovation with academic integrity.

4.4. RQ4: Reasons for Typical and Acceptable Use of ChatGPT

The survey questions related to this research question sought to understand the effect of generative AI technologies on students’ writing habits and their motivations for using them:
  • Q8: How do you perceive the use of AI tools like ChatGPT as a support for your writing tasks?
  • Q9: In your opinion, is it correct to use AI tools like ChatGPT in your writing?
Figure 5 shows the following results:

4.4.1. Q8: Main Findings

In this study, 71% of students regarded ChatGPT as a valuable tool that saves time and helps overcome writer’s block, while also recognizing a risk of dependency. This favorable view is reflected in the study by Bitzenbauer (2023), whose students likewise viewed the incorporation of ChatGPT into the classroom positively. Our students felt that GenAI was beneficial for brainstorming, echoing the views of postgraduate students in Román-Acosta et al. (2024), who used the tool to generate ideas. In an academic setting, AI-based technologies are viewed as a source of ideas and inspiration, although only 34% of our students selected this option. This finding may stem from the epistemological beliefs of students’ previous teachers about AI use and language learning. If teachers provide strategies for using AI in brainstorming activities, students may deepen their understanding of topics, because this instructional scaffolding reflects computational constructivism, where technology is used not as a cognitive shortcut but to provoke inquiry, hypothesis-testing, and critical analysis (Chan, 2023). For example, teachers can model how to develop a Socratic line of questioning (using open-ended questions to encourage critical thinking and test assumptions). When applied to AI tools like ChatGPT, this technique encourages students to move from surface-level engagement (“What is X?”) to more complex inquiry (“Why does X happen?”, “What are the implications of X?”, “How does X compare to Y?”). This type of interaction promotes higher-order thinking and reinforces the idea that AI is a learning partner, not an answering machine. In addition, educators can design classroom activities where students compare AI-generated ideas with peer-generated responses.
These activities allow students to evaluate the differences between types of responses in terms of quality, depth, and relevance. They not only develop critical evaluation skills but also clarify how AI works, helping students better understand its limitations and advantages. By cultivating these habits, educators foster students’ metacognitive awareness and empower them to use AI as a tool that enhances, rather than replaces, cognitive and academic growth.

4.4.2. Q9: Main Findings

Overall, 66% of the students responded, “it depends”. This finding implies that students are aware that the acceptability of AI-based technologies depends on the context and extent of use. Notably, a small group of students (13%) considered the use of ChatGPT unacceptable. This finding could be associated with these students’ sense of academic integrity (they would not copy under any circumstances, in any subject) or with their unfamiliarity with AI tools. While the students in this study belong to Generation Z, or “centennials”, who consider themselves digital natives, the non-use of ChatGPT could still reflect unfamiliarity with this AI-based tool. This nuanced view aligns with recent studies suggesting that students are increasingly aware of the complexities surrounding AI use in academic writing. For example, George et al. (2023) found that many students saw ChatGPT as a valuable support tool when used for idea generation, vocabulary enhancement, or error correction but also acknowledged its potential for misuse. In contrast, Cotton et al. (2024) highlighted a tendency among some students to see AI use as either fully acceptable or entirely unethical, depending on institutional communications on appropriate AI use. The current study’s findings suggest that students are moving toward a more reflective, context-dependent understanding of AI, which opens opportunities for educators to shape ethical usage. Practitioners should use the results of Q9 as a foundation for teaching students how to use AI effectively. For example, students may turn to ChatGPT to find synonyms, clarify meaning, or ask for more formal or academic phrasing of ideas they already intend to express. In such cases, the AI functions more like a language assistant, helping students recall vocabulary or make more precise word choices, especially when dealing with fatigue or cognitive overload.
By embracing the “it depends” mindset reflected in the data, educators can equip students with the skills and judgment necessary to navigate AI use responsibly, aligning their practices with both academic integrity and language development goals. This support aligns with process-oriented learning, as students’ reflection on their writing practices may lead to more informed decisions regarding AI use (Gibson et al., 2023). It is important to encourage transparency and to urge students to self-reflect on their AI use and, when possible, disclose how AI supported their writing process. Teachers should reinforce the idea that AI should not substitute for human cognition but rather serve as a scaffold to support it, particularly in the context of second-language writing and academic discourse development.

4.5. RQ5: Generative AI Effects on Academic Integrity and Writing Now and in the Future

The survey questions related to this research question were intended to determine how students perceived AI’s impact on academic integrity in writing in the present and how it might impact writing in the future. The questions were as follows:
  • Q10: In your opinion, how has the arrival of AI technologies like ChatGPT impacted academic integrity in your writing production?
  • Q11: How do you predict the use of AI tools like ChatGPT for writing will change in the near future?
Figure 6 shows the following results:

4.5.1. Q10: Main Findings

Overall, 73% of the students perceived a significant or moderate increase in instances of academic dishonesty; that is, students in this study believe that AI technologies will substantially affect academic integrity within academic writing. The literature reflects this finding (Dergaa et al., 2023; Thorp, 2023). Practitioners should interpret these data as an opportunity to encourage deeper reflection among students on their own writing behaviors. One possibility is that students who see no significant impact may not engage in dishonest practices themselves and, therefore, do not perceive AI as a threat to integrity. Alternatively, it may indicate a lack of awareness or an underestimation of how AI tools can compromise academic honesty. Teachers can facilitate reflective writing or class discussions where students analyze their own use of AI and its potential impact on their development as writers. Teachers can encourage students to think critically about academic integrity not just as a set of rules but as a foundation for genuine learning and professional readiness. Classroom activities could include analyzing AI use and discussing scenarios where students must assess the integrity of different writing practices. Academic integrity is not merely about avoiding punishment; it is about ensuring that students’ future degrees reflect true competency and knowledge. For students who do not perceive the inappropriate use of AI in academic writing as a problem, it is important to create safe classroom spaces where differing viewpoints can be discussed, especially about accountability. By engaging students in dialog and reflection, educators can move beyond simple enforcement and toward transformative learning experiences that prepare students for responsible academic and professional conduct.

4.5.2. Q11: Main Findings

The response for the option “other” was lengthy. One student wrote, “My opinion about AI is that it helps you do long jobs and that you are too lazy to do since it is not useful for the career you want to study, but I also use it to learn new things that make me curious, such as genetic mutations in humans. or about how to find good forums on the deep web and apart from this, the AI helps you to understand with simple words about the topic you want, for example, if you are going to have an osmosis test and you don’t know anything, the AI gives you the best summary of the world in a few seconds and you can learn it, based on the question if it will increase the use of AI in the majority of people since Intel is manufacturing the new range of its CPUs incorporating AI in the CPU for which it helps in many things to computers and sooner or later this will reach phones”.
In total, 21% of the students believe that using AI-based technologies for writing will become more common in the near future. Heidt (2025) proposed that “Unlike the ‘early days’ of two years ago, when using AI meant summarizing a paper or outlining an essay, students are now reading into the tools’ ability to emulate human connection, turning chatbots into podcast hosts, language tutors, professors, and even personal trainers.” This individualized response mirrors the findings of Baidoo-Anu and Owusu Ansah (2023), who observed that students use AI not only for academic tasks but also for curiosity-driven learning and real-world applications. Similarly, George et al. (2023) highlighted that students value AI for its ability to explain difficult concepts in accessible language and to support autonomous learning outside traditional classroom settings. However, unlike some studies focusing primarily on AI’s misuse, this response reflects a balanced view, acknowledging its practical and intellectual benefits. The participant’s viewpoint aligns with a growing body of research that sees students as critical users of AI rather than passive or purely instrumental users. Educators should consider incorporating open-ended questions in classroom discussions or assessments related to AI to uncover diverse student perspectives, especially those beyond standard academic uses. Open-ended responses like those found in Q11 suggest that some students actively think about AI’s role in society, learning, and future technologies. By creating space for students to articulate how and why they use AI, educators can better tailor instruction to support both language development and digital literacy in a fast-evolving technological landscape.

5. Conclusions

This study contributes a novel perspective to the growing body of literature on generative AI in education by focusing on the experiences of rural, STEM-oriented university students in a South American context where English is used as a medium of instruction. Unlike previous studies that have mostly examined urban environments, students from humanities disciplines, or native English speakers, this research highlights how EFL learners in geographically isolated areas engage with GenAI tools like ChatGPT. These learners depend heavily on classroom instruction for language development, and their academic success is often closely tied to their ability to complete writing tasks in English. In this context, the use of AI-generated, translated, or copied texts may not reflect intentional misconduct. Instead, it may be a response to academic pressure, limited institutional guidance, and a practical view of writing as a product required to meet institutional expectations. GenAI thus becomes both a linguistic aid and a shortcut that reduces cognitive effort. By focusing on this underrepresented population, the study creates space to reconsider how academic integrity policies, teaching practices, and AI literacy can be better adapted to support learners who face multiple layers of disadvantage, including linguistic, geographic, and disciplinary challenges.
Building on this perspective, the study used a multiple-choice survey to explore the impact of generative AI technologies like ChatGPT on students’ English writing skills and their perceptions of academic integrity. The research examined how EFL students at a rural STEM university define AI-related dishonesty, what motivates them to use these tools, and how they assess the risks and benefits associated with GenAI in academic writing. The findings reveal that, while students generally understand what academic dishonesty means, they disagree on whether AI-generated content can be accurately detected. Students also hold varied opinions on how teachers should handle dishonesty involving AI and how tools like ChatGPT influence their writing. For instance, students acknowledged that the pressure to achieve high grades might tempt them to use AI-generated texts dishonestly. About 43% recognized potential negative consequences, such as facing academic penalties, weakening their writing skills, and experiencing a false sense of accomplishment. These results support previous research highlighting that relying heavily on AI can damage critical thinking, problem-solving skills, and overall academic development.
Additionally, students were split regarding the detectability of AI-generated texts. Nearly half (46%) of the participants believed AI content could easily be spotted, contrasting with existing research showing significant limitations in AI detection technologies. The findings further emphasize that proactive educational measures and precise guidance are more effective than strict punishments. In contrast to Y. Xiao and Zhi’s (2023) study, where students favored fully integrating ChatGPT into their studies, participants in this study showed greater interest in responsible and guided AI use. Another noteworthy finding was that students did not consider translating text from Spanish to English academically dishonest. This aligns with prior findings that while translation tools like Google Translate can enrich vocabulary, they may discourage students from directly engaging with the target language.
Future research could explore student perceptions more broadly by investigating cultural and institutional differences across various contexts. Additionally, studies could examine how student attitudes toward AI evolve over time. Moreover, studies could test the effectiveness of educational interventions aimed at encouraging ethical AI practices. Based on the findings of this study, educators can take several practical steps. First, teachers should actively raise students’ awareness of why academic integrity matters and link the idea of honesty directly to professional and personal credibility. Open class discussions about the long-term impacts of excessive AI reliance, such as weaker critical thinking, can effectively promote genuine learning. Second, educators should address factors contributing to academic dishonesty, such as intense pressure from deadlines. Allowing flexibility in assignment deadlines and helping students shift from a fixed to a growth mindset may reduce stress and the temptation to use AI on assignments. Moreover, including more low-stakes, ungraded writing activities can build students’ confidence and foster open dialog about ethical AI usage.
Educators should also move beyond strict punishment and emphasize the intrinsic value of writing as a process that helps students grow intellectually. Classroom discussions around originality, authorship, and the ethics of AI can encourage students to engage meaningfully with their work. Teachers should provide clear guidance about responsible AI use so that students can distinguish between legitimate support and unethical practices, and they should encourage students to openly discuss why and how they use AI. This open discussion can help teachers personalize their instruction by blending language development and digital literacy skills. Finally, teachers can thoughtfully integrate AI into their writing instruction once expectations are clearly set and students are sufficiently supported. Integrating AI into the classroom may help by providing clear, structured feedback, teaching practical revision and paraphrasing strategies, and creating more opportunities for formative practice. Ultimately, educators should ensure that AI serves as an aid rather than a shortcut, preparing students for the evolving demands of future academic and professional contexts.

Author Contributions

Conceptualization, A.S.N. and P.V.S.; methodology, P.V.S.; validation, A.S.N., P.V.S. and J.S.J.; formal analysis, A.S.N.; investigation, P.V.S.; data curation, M.R.; writing—original draft preparation, A.S.N. and J.S.J.; writing—review and editing, A.S.N., P.V.S., J.S.J. and M.R.; visualization, A.S.N. and P.V.S.; supervision, A.S.N.; project administration, A.S.N., P.V.S. and J.S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

All participants were informed about the confidentiality of their responses, the research purpose, its application, and the study design, as well as the voluntary nature of their participation. Informed consent was obtained from all participants for the use of their data.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting this study’s findings are available from the corresponding author (M.R.), upon reasonable request.

Acknowledgments

The authors acknowledge the support of the Yachay Tech Language Center (project number: DPI24-04) and the School of Chemical Sciences and Engineering (project number: CHEM23-02) at Yachay Tech University.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Survey Questions

Q1. How do you define academic dishonesty involving AI technologies like ChatGPT in the context of your writing production?
(a) Copying and using exact texts without proper citations (copying texts)
(b) Submitting AI-generated essays as my own work (submitting AI essays)
(c) Using a translator to put a Spanish version into English (translating)
(d) All of the above
(e) Other:______________________
Q2. What specific examples can you provide for using AI technologies dishonestly in your writing?
(a) Using AI to write entire essays or assignments (using AI)
(b) Copying AI-generated text without paraphrasing (copying AI)
(c) Using AI to improve my work without acknowledging the help (no acknowledgement)
(d) All of the above
(e) Other:______________________
Q3. What are the main reasons someone might use AI technologies dishonestly in your writing production?
(a) Pressure to achieve high academic results (academic pressure)
(b) Lack of confidence in my writing skills (confidence issues)
(c) Belief that AI use is not easily detectable (AI detection)
(d) Over-reliance on technology for convenience (over-reliance)
(e) Other:______________________
Q4. What do you believe are the consequences of using AI dishonestly in your writing?
(a) Risk of being caught and facing academic penalties (academic penalties)
(b) Hindering the development of my own writing skills (hindering development)
(c) Creating a false sense of achievement (pseudo success)
(d) All of the above
(e) Other:______________________
Q5. How do you perceive the detection of AI-based academic dishonesty in your writing?
(a) Easily detectable with current technology (easily detectable)
(b) Difficult to detect unless closely reviewed (detectable if reviewed)
(c) Only detectable if the work is inconsistent with my previous submissions (detectable)
(d) Not detectable at all (not detectable)
(e) Other:______________________
Q6. How do you believe teachers or institutions should respond to AI-based academic dishonesty in writing?
(a) Educating students on academic integrity and AI use (educating students)
(b) Implementing stricter penalties for dishonesty (implementing policies)
(c) Using AI-based plagiarism detectors (using detectors)
(d) Ignoring or overlooking minor instances (ignore instances)
(e) Other:______________________
Q7. What measures do you believe could effectively prevent AI-based academic dishonesty in writing?
(a) Raising awareness about the importance of original work (raising awareness)
(b) Providing better support and resources for writing skills (providing support)
(c) Implementing strict monitoring and detection tools (implementing tools)
(d) Encouraging a culture of academic honesty (encouraging honesty)
(e) Other:______________________
Q8. How do you perceive the use of AI tools like ChatGPT as a support for your writing tasks?
(a) A valuable learning tool (valuable tool)
(b) Saves time on writing assignments (saves time)
(c) Source of ideas and inspiration (brainstorming)
(d) Bypasses difficult parts of writing (writer’s block)
(e) Risk of becoming dependent on AI (dependence risk)
(f) Other:______________________
Q9. In your opinion, is it correct to use AI tools like ChatGPT in your writing?
(a) Yes, it is completely acceptable (yes, completely acceptable)
(b) Yes, but only if properly cited (yes, if cited)
(c) It depends on the context or extent of use (depends)
(d) No, it is not acceptable (no, not acceptable)
(e) Other:______________________
Q10. In your opinion, how has the arrival of AI technologies like ChatGPT impacted academic integrity in your writing production?
(a) Significantly increased instances of academic dishonesty (significant increase)
(b) Moderately increased instances of academic dishonesty (moderate increase)
(c) No significant impact (not significant)
(d) Decreased instances of academic dishonesty due to better detection tools (decreased instances)
(e) Other:______________________
Q11. How do you predict the use of AI tools like ChatGPT for writing will change in the near future?
(a) It will become more common and widely accepted (more common)
(b) It will be more strictly regulated by educational institutions (more regulated)
(c) It will remain similar to current usage patterns (remain similar)
(d) It will decline due to ethical concerns and detection technologies (decline)
(e) Other:______________________
NOTE: The correct responses according to the researchers’ perspectives are bolded.

References

  1. Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10. [Google Scholar] [CrossRef]
  2. Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Huamaní Jordan, O., López Roca, C., & Saavedra Tirado, K. (2024). Analysis of college students’ attitudes toward the use of ChatGPT in their academic activities: Effect of intent to use, verification of information and responsible use. BMC Psychology, 12(1), 255. [Google Scholar] [CrossRef]
  3. AlAfnan, M. A., Dishari, S., Jovic, M., & Lomidze, K. (2023). ChatGPT as an educational tool: Opportunities, challenges, and recommendations for communication, business writing, and composition courses. Journal of Artificial Intelligence and Technology, 3(2), 60–68. [Google Scholar] [CrossRef]
  4. AlAfnan, M. A., & MohdZuki, S. F. (2023). Do artificial intelligence chatbots have a writing style? An investigation into the stylistic features of ChatGPT-4. Journal of Artificial Intelligence and Technology, 3(3), 85–94. [Google Scholar] [CrossRef]
  5. Albayati, H. (2024). Investigating undergraduate students’ perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 6, 100203. [Google Scholar] [CrossRef]
  6. Alexander, K., Savvidou, C., & Alexander, C. (2023). Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teaching English with Technology, 23(2), 25–43. [Google Scholar] [CrossRef]
  7. Alghannam, M. S. M. (2024). Artificial intelligence as a provider of feedback on EFL student compositions. World Journal of English Language, 15(2), 161. [Google Scholar] [CrossRef]
  8. Almusharraf, A., & Bailey, D. (2023). Machine translation in language acquisition: A study on EFL students’ perceptions and practices in Saudi Arabia and South Korea. Journal of Computer Assisted Learning, 39(6), 1988–2003. [Google Scholar] [CrossRef]
  9. Alotaibi, N., & Joy, M. (2021, September 1–3). English-Arabic cross-language plagiarism detection. International Conference on Recent Advances in Natural Language Processing (pp. 44–52), Online. [Google Scholar]
  10. Amineh, R. J., & Asl, H. D. (2015). Review of constructivism and social constructivism. Journal of Social Sciences, Literature and Languages, 1(1), 9–16. [Google Scholar]
  11. Anani, G. E., Nyamekye, E., & Bafour-Koduah, D. (2025). Using artificial intelligence for academic writing in higher education: The perspectives of university students in Ghana. Discover Education, 4(1), 46. [Google Scholar] [CrossRef]
  12. Angeles, C. N., Samson, B. D., Mama, B. R. Z. I., Luriaga, R. L., Delizo, J. P. D., & Ching, M. R. D. (2024, May 28–30). Students’ perception of the use of AI detector system by faculty members in determining the originality of submitted academic requirements. 2024 8th International Conference on E-Commerce, E-Business, and E-Government (pp. 56–61), Ajman, United Arab Emirates. [Google Scholar] [CrossRef]
  13. Anson, C. M. (2022). AI-based text generation and the social construction of “fraudulent authorship”: A revisitation. Composition Studies, 51(1), 37–46. [Google Scholar]
  14. Applefield, J. M., Huber, R., & Moallem, M. (2001). Constructivism in theory and practice: Toward a better understanding. High School Journal, 84(2), 35–53. [Google Scholar]
  15. Ateeq, A., Alzoraiki, M., Milhem, M., & Ateeq, R. A. (2024). Artificial intelligence in education: Implications for academic integrity and the shift toward holistic assessment. Frontiers in Education, 9, 1470979. [Google Scholar] [CrossRef]
  16. Azoulay, R., Hirst, T., & Reches, S. (2023). Let’s do it ourselves: Ensuring academic integrity in the age of ChatGPT and beyond. TechRxiv. [Google Scholar] [CrossRef]
  17. Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electronic Journal, 7(1), 52–62. [Google Scholar] [CrossRef]
  18. Baruchson-Arbib, S. (2004). A study of students’ perception. The International Review of Information Ethics, 1, 1–7. [Google Scholar] [CrossRef]
  19. Belle, N., & Cantarelli, P. (2017). What causes unethical behavior? A meta-analysis to set an agenda for public administration research. Public Administration Review, 77(3), 327–339. [Google Scholar] [CrossRef]
  20. Bikanga Ada, M. (2024). It helps with crap lecturers and their low effort: Investigating computer science students’ perceptions of using ChatGPT for learning. Education Sciences, 14(10), 1106. [Google Scholar] [CrossRef]
  21. Bitzenbauer, P. (2023). ChatGPT in physics education: A pilot study on easy-to-implement activities. Contemporary Educational Technology, 15(3), ep430. [Google Scholar] [CrossRef]
  22. Bonsu, E. M., & Baffour-Koduah, D. (2023). From the consumers’ side: Determining students’ perception and intention to use ChatGPT in Ghanaian higher education. Journal of Education, Society & Multiculturalism, 4(1), 1–29. [Google Scholar] [CrossRef]
  23. Brainard, J. (2023). Journals take up arms against AI-written text. Science, 379(6634), 740–741. [Google Scholar] [CrossRef]
  24. Can, Z. B., Duman, H., Buluş, B., & Erişen, Y. (2023). How did ChatGPT transform us in terms of transformative learning? Journal of Social and Educational Research, 2(2), 41–51. [Google Scholar]
  25. Carroll, J. (2013). A handbook for deterring plagiarism in higher education (2nd ed.). Oxford Centre for Staff and Learning Development. [Google Scholar]
  26. Carter, H., Hussey, J., & Forehand, J. W. (2019). Plagiarism in nursing education and the ethical implications in practice. Heliyon, 5(3), e01350. [Google Scholar] [CrossRef]
  27. Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education, 15(1), 22. [Google Scholar] [CrossRef]
  28. Chan, C. K. Y. (2023). Is AI changing the rules of academic misconduct? An in-depth look at students’ perceptions of “AI-giarism”. arXiv, arXiv:2306.03358. Available online: http://arxiv.org/abs/2306.03358 (accessed on 15 April 2025).
  29. Chan, C. K. Y. (2025). Students’ perceptions of ‘AI-giarism’: Investigating changes in understandings of academic misconduct. Education and Information Technologies, 30, 8087–8108. [Google Scholar] [CrossRef]
  30. Chatterjee, J., & Dethlefs, N. (2023). This new conversational AI model can be your friend, philosopher, and guide … and even your worst enemy. Patterns, 4(1), 100676. [Google Scholar] [CrossRef]
  31. Chen, T.-J. (2023). ChatGPT and other artificial intelligence applications speed up scientific writing. Journal of the Chinese Medical Association, 86(4), 351–353. [Google Scholar] [CrossRef]
  32. Cherrez-Ojeda, I., Gallardo-Bastidas, J. C., Robles-Velasco, K., Osorio, M. F., Velez Leon, E. M., Leon Velastegui, M., Pauletto, P., Aguilar-Díaz, F. C., Squassi, A., González Eras, S. P., Cordero Carrasco, E., Chavez Gonzalez, K. L., Calderon, J. C., Bousquet, J., Bedbrook, A., & Faytong-Haro, M. (2024). Understanding health care students’ perceptions, beliefs, and attitudes toward AI-powered language models: Cross-sectional study. JMIR Medical Education, 10, e51757. [Google Scholar] [CrossRef]
  33. Chiang, F., Zhu, D., & Yu, W. (2022). A systematic review of academic dishonesty in online learning environments. Journal of Computer Assisted Learning, 38(4), 907–928. [Google Scholar] [CrossRef]
  34. Choo, F., & Tan, K. (2023). Abrupt academic dishonesty: Pressure, opportunity, and deterrence. The International Journal of Management Education, 21(2), 100815. [Google Scholar] [CrossRef]
  35. Christie, M., Carey, M., Robertson, A., & Grainger, P. (2015). Putting transformative learning theory into practice. Australian Journal of Adult Learning, 55(1), 9–30. [Google Scholar]
  36. Clark, E., August, T., Serrano, S., Haduong, N., Gururangan, S., & Smith, N. A. (2021, August 1–6). All that’s ‘Human’ is not gold: Evaluating human evaluation of generated text. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol. 1: Long Papers, pp. 7282–7296), Bangkok, Thailand. [Google Scholar] [CrossRef]
  37. Clarke, O., Chan, W. Y. D., Bukuru, S., Logan, J., & Wong, R. (2023). Assessing knowledge of and attitudes towards plagiarism and ability to recognize plagiaristic writing among university students in Rwanda. Higher Education, 85(2), 247–263. [Google Scholar] [CrossRef]
  38. Columbia University. (2024). Teachers college Institutional Review Board (IRB). Available online: https://www.tc.columbia.edu/institutional-review-board/ (accessed on 15 April 2025).
  39. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the Era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  40. Das, S. R., & Madhusudan, J. V. (2024). Perceptions of higher education students towards ChatGPT usage. International Journal of Technology in Education, 7(1), 86–106. [Google Scholar] [CrossRef]
  41. Dawson, I. G. J., & Hanoch, Y. M. (2024). The role of perceived risk on dishonest decision making during a pandemic. Risk Analysis, 44(12), 2762–2779. [Google Scholar] [CrossRef]
  42. Delello, J. A., Sung, W., Mokhtari, K., Hebert, J., Bronson, A., & De Giuseppe, T. (2025). AI in the classroom: Insights from educators on usage, challenges, and mental health. Education Sciences, 15(2), 113. [Google Scholar] [CrossRef]
  43. Dergaa, I., Chamari, K., Zmijewski, P., & Ben Saad, H. (2023). From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biology of Sport, 40(2), 615–622. [Google Scholar] [CrossRef]
  44. Dinneen, C. (2021). Students’ use of digital translation and paraphrasing tools in written assignments on direct entry English programs. English Australia Journal, 37(1), 40–53. [Google Scholar]
  45. DiVall, M. V., & Schlesselman, L. S. (2016). Academic dishonesty: Whose fault is it anyway? American Journal of Pharmaceutical Education, 80(3), 35. [Google Scholar] [CrossRef]
  46. Ducar, C., & Schocket, D. H. (2018). Machine translation and the L2 classroom: Pedagogical solutions for making peace with Google translate. Foreign Language Annals, 51(4), 779–795. [Google Scholar] [CrossRef]
  47. Egloff, J. (2024). The college essay is not dead. Proceedings of the H-Net Teaching Conference, 2, 72–100. [Google Scholar] [CrossRef]
  48. Elkhodr, M., Gide, E., Wu, R., & Darwish, O. (2023). ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis. STEM Education, 3(2), 70–88. [Google Scholar] [CrossRef]
  49. Enkhtur, A., & Yamamoto, B. A. (2017). Transformative learning theory and its application in higher education settings: A review paper. Bulletin of the Graduate School of Human Sciences, Osaka University, 43, 193–214. [Google Scholar] [CrossRef]
  50. Fajt, B., & Schiller, E. (2025). ChatGPT in academia: University students’ attitudes towards the use of ChatGPT and plagiarism. Journal of Academic Ethics. [Google Scholar] [CrossRef]
  51. Farhi, F., Jeljeli, R., Aburezeq, I., Dweikat, F. F., Al-shami, S. A., & Slamene, R. (2023). Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Computers and Education: Artificial Intelligence, 5, 100180. [Google Scholar] [CrossRef]
  52. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460–474. [Google Scholar] [CrossRef]
  53. Foster, A. (2023). Can GPT-4 fool Turnitin? Testing the limits of AI detection with prompt engineering. In IPHS 300: Artificial intelligence for the humanities: Text, image, and sound (p. 39). Digital Kenyon. Available online: https://digital.kenyon.edu/dh_iphs_ai/39 (accessed on 15 April 2025).
  54. Freire, P. (2000). Pedagogy of the oppressed. Continuum International Publishing Group Inc. [Google Scholar]
  55. Gamliel, E., & Peer, E. (2013). Explicit risk of getting caught does not affect unethical behavior. Journal of Applied Social Psychology, 43(6), 1281–1288. [Google Scholar] [CrossRef]
  56. Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. Npj Digital Medicine, 6(1), 75. [Google Scholar] [CrossRef]
  57. George, A. S., George, A. S. H., & Martin, A. S. G. (2023). A review of ChatGPT AI’s impact on several business sectors. Partners Universal International Innovation Journal, 1(1), 9–23. [Google Scholar]
  58. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. [Google Scholar] [CrossRef]
  59. Ghounane, N., Al-Zubaidi, K., & Rahmani, A. (2024). Exploring Algerian EFL master’s students’ attitudes toward AI-giarism. Indonesian Journal of Social Science Research, 5(2), 444–459. [Google Scholar] [CrossRef]
  60. Gibson, D., Kovanovic, V., Ifenthaler, D., Dexter, S., & Feng, S. (2023). Learning theories for artificial intelligence promoting learning processes. British Journal of Educational Technology, 54(5), 1125–1146. [Google Scholar] [CrossRef]
  61. Gorichanaz, T. (2023). Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice, 9(2), 183–196. [Google Scholar] [CrossRef]
  62. Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. (2022). “I think this is the most disruptive technology”: Exploring sentiments of ChatGPT early adopters using Twitter data. arXiv, arXiv:2212.05856. Available online: http://arxiv.org/abs/2212.05856 (accessed on 15 April 2025).
  63. Heidt, A. (2025). ChatGPT for students: Learners find creative new uses for chatbots. Nature, 639(8053), 265–266. [Google Scholar] [CrossRef]
  64. Henry, E. S. (2023). Hey ChatGPT! Write me an article about your effects on academic writing. Anthropology Now, 15(1), 79–83. [Google Scholar] [CrossRef]
  65. Heriyati, D., & Ekasari, W. F. (2020). A study on academic dishonesty and moral reasoning. International Journal of Education, 12(2), 56–62. [Google Scholar] [CrossRef]
  66. Howard, R. M. (1995). Plagiarisms, authorships, and the academic death penalty. College English, 57(7), 788. [Google Scholar] [CrossRef]
  67. Humbert, M., Lambin, X., & Villard, E. (2022). The role of prior warnings when cheating is easy and punishment is credible. Information Economics and Policy, 58, 100959. [Google Scholar] [CrossRef]
  68. Ibrahim, K. (2023). Using AI-based detectors to control AI-assisted plagiarism in ESL writing: “The Terminator Versus the Machines”. Language Testing in Asia, 13(1), 46. [Google Scholar] [CrossRef]
  69. Jha, A. (2017). ICT pedagogy in higher education: A constructivist approach. Journal of Training and Development, 3, 64–70. [Google Scholar] [CrossRef]
  70. Jomaa, N., Attamimi, R., & Al Mahri, M. (2024). The use of Artificial Intelligence (AI) in teaching English vocabulary in Oman: Perspectives, teaching practices, and challenges. World Journal of English Language, 15(3), 1. [Google Scholar] [CrossRef]
  71. Khalil, M., & Er, E. (2023). Will ChatGPT get you caught? Rethinking of plagiarism detection. EdArXiv Preprints. [Google Scholar] [CrossRef]
  72. Kibler, W. L. (1993). Academic dishonesty: A student development dilemma. NASPA Journal, 30(4), 252–267. Available online: https://eric.ed.gov/?id=EJ468340 (accessed on 15 April 2025). [CrossRef]
  73. Kitchenham, A. (2008). The evolution of John Mezirow’s transformative learning theory. Journal of Transformative Education, 6(2), 104–123. [Google Scholar] [CrossRef]
  74. Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550. [Google Scholar] [CrossRef]
  75. Koos, S., & Wachsmann, S. (2023). Navigating the impact of ChatGPT/GPT4 on legal academic examinations: Challenges, opportunities and recommendations. Media Iuris, 6(2), 255–270. [Google Scholar] [CrossRef]
  76. Ladha, N., Yadav, K., & Rathore, P. (2023). AI-generated content detectors: Boon or bane for scientific writing. Indian Journal of Science And Technology, 16(39), 3435–3439. [Google Scholar] [CrossRef]
  77. Lambert, E. G., Hogan, N. L., & Barton, S. M. (2003). Collegiate academic dishonesty revisited: What have they done, how often have they done it, who does it, and why did they do it? Electronic Journal of Sociology, 7, 1–27. [Google Scholar]
  78. Leong, W. Y., & Bing, Z. J. (2025). AI on academic integrity and plagiarism detection. ASM Science Journal, 20(1), 1–9. [Google Scholar] [CrossRef]
  79. Lhutfi, I., Hardiana, R. D., & Mardiani, R. (2021). Fraud pentagon model: Predicting student’s cheating academic behavior. Jurnal ASET (Akuntansi Riset), 13(2), 234–248. [Google Scholar] [CrossRef]
  80. Li, B., Bonk, C. J., & Kou, X. (2023). Exploring the multilingual applications of ChatGPT. International Journal of Computer-Assisted Language Learning and Teaching, 13(1), 1–22. [Google Scholar] [CrossRef]
  81. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. [Google Scholar] [CrossRef] [PubMed]
  82. Lim, F. V., & Toh, W. (2024). Apps for English language learning: A systematic review. Teaching English With Technology, 2024(1). [Google Scholar] [CrossRef]
  83. Liu, W., & Wang, Y. (2024). The effects of using AI tools on critical thinking in English literature classes among EFL learners: An intervention study. European Journal of Education, 59(4), e12804. [Google Scholar] [CrossRef]
  84. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  85. Lo, C. K., Yu, P. L. H., Xu, S., Ng, D. T. K., & Jong, M. S. (2024). Exploring the application of ChatGPT in ESL/EFL education and related research issues: A systematic review of empirical studies. Smart Learning Environments, 11(1), 50. [Google Scholar] [CrossRef]
  86. Marr, B. (2023, May). A short history of ChatGPT: How we got to where we are today. Forbes. Available online: https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/#open-web-0 (accessed on 15 April 2025).
  87. McGuire, A., Qureshi, W., & Saad, M. (2024). A constructivist model for leveraging GenAI tools for individualized, peer-simulated feedback on student writing. International Journal of Technology in Education, 7(2), 326–352. [Google Scholar] [CrossRef]
  88. Mehta, V. (2023). ChatGPT: An AI NLP model. Available online: https://www.ltimindtree.com/wp-content/uploads/2023/02/ChatGPT-An-AI-NLP-Model-POV.pdf (accessed on 15 April 2025).
  89. Mendoza, J. J. N. (2020). Pre-service teachers’ reflection logs: Pieces of evidence of transformative teaching and emancipation. International Journal of Higher Education, 9(6), 200. [Google Scholar] [CrossRef]
  90. Mezirow, J. (2003). Transformative learning as discourse. Journal of Transformative Education, 1(1), 58–63. [Google Scholar] [CrossRef]
  91. Mijwil, M. M., Hiran, K. K., Doshi, R., Dadhich, M., Al-Mistarehi, A.-H., & Bala, I. (2023). ChatGPT and the future of academic integrity in the artificial intelligence era: A new frontier. Al-Salam Journal for Engineering and Technology, 2(2), 116–127. [Google Scholar] [CrossRef]
  92. Mohammed, S., & Kinyo, L. (2020). Constructivist theory as a foundation for the utilization of digital technology in the lifelong learning process. Turkish Online Journal of Distance Education, 21(4), 90–109. [Google Scholar] [CrossRef]
  93. Nelson, A., Santamaría, P., & Javens, J. (2024). Students’ perceptions of generative AI use in academic writing. In Pixel (Ed.), 17th international conference “innovation in language learning” (pp. 237–245). Filodiritto—inFOROmatica S.r.l. [Google Scholar]
  94. Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (IJET), 18(17), 4–19. [Google Scholar] [CrossRef]
  95. Park, C. (2017). In Other (People’s) Words: Plagiarism by university students—Literature and lessons. In R. Barrow, & P. Keeney (Eds.), Academic ethics (1st ed., pp. 525–542). Routledge. [Google Scholar]
  96. Pavela, G. (1997). Applying the power of association on campus: A model code of academic integrity. Journal of College and University Law, 24(1), 97–118. [Google Scholar]
  97. Pecorari, D. (2013). Teaching to avoid plagiarism: How to promote good source use (1st ed.). Open University Press. [Google Scholar]
  98. Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2). [Google Scholar] [CrossRef]
  99. Pudasaini, S., Miralles-Pechuán, L., Lillis, D., & Salvador, M. L. (2024). Survey on plagiarism detection in large language models: The impact of ChatGPT and Gemini on academic integrity. arXiv, arXiv:2407.13105. Available online: http://arxiv.org/abs/2407.13105 (accessed on 15 April 2025).
  100. Puspitosari, I. (2022). Fraud triangle theory on accounting students online academic cheating. Accounting and Finance Studies, 2(4), 229–240. [Google Scholar] [CrossRef]
  101. Qadir, J. (2022). Engineering education in the Era of ChatGPT: Promise and pitfalls of generative AI for education. TechRxiv. [Google Scholar] [CrossRef]
  102. Rahman, M. S., Sabbir, M. M., Zhang, D. J., Moral, I. H., & Hossain, G. M. S. (2023). Examining students’ intention to use ChatGPT: Does trust matter? Australasian Journal of Educational Technology, 39, 51–71. [Google Scholar] [CrossRef]
  103. Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., Sun, M., Day, I., Rather, R. A., & Heathcote, L. (2023). The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal of Applied Learning & Teaching, 6(1), 41–56. [Google Scholar] [CrossRef]
  104. Ratna, A. A. P., Purnamasari, P. D., Adhi, B. A., Ekadiyanto, F. A., Salman, M., Mardiyah, M., & Winata, D. J. (2017). Cross-language plagiarism detection system using latent semantic analysis and learning vector quantization. Algorithms, 10(2), 69. [Google Scholar] [CrossRef]
  105. Ricaurte, M., Ordóñez, P. E., Navas-Cárdenas, C., Meneses, M. A., Tafur, J. P., & Viloria, A. (2022). Industrial processes online teaching: A good practice for undergraduate engineering students in times of COVID-19. Sustainability, 14(8), 4776. [Google Scholar] [CrossRef]
  106. Ricaurte, M., & Viloria, A. (2020). Project-based learning as a strategy for multi-level training applied to undergraduate engineering students. Education for Chemical Engineers, 33, 102–111. [Google Scholar] [CrossRef]
  107. Rodrigues, M., Silva, R., Borges, A. P., Franco, M., & Oliveira, C. (2025). Artificial intelligence: Threat or asset to academic integrity? A bibliometric analysis. Kybernetes, 54(5), 2939–2970. [Google Scholar] [CrossRef]
  108. Román-Acosta, D., Rodríguez Torres, M., Baquedano Montoya, M., López Zabala, L., & Pérez Gamboa, A. (2024). ChatGPT and its use to improve academic writing in postgraduate students. PRA, 24(36), 53–73. [Google Scholar]
  109. Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 342–363. [Google Scholar] [CrossRef]
  110. Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Correction to: Can artificial intelligence help for scientific writing? Critical Care, 27(1), 99. [Google Scholar] [CrossRef]
  111. Selwyn, N. (2008). ‘Not necessarily a bad thing …’: A study of online plagiarism amongst undergraduate students. Assessment & Evaluation in Higher Education, 33(5), 465–479. [Google Scholar] [CrossRef]
  112. Shadiev, R., Chen, X., & Altinay, F. (2024). A review of research on computer-aided translation technologies and their applications to assist learning and instruction. Journal of Computer Assisted Learning, 40(6), 3290–3323. [Google Scholar] [CrossRef]
  113. Sila, C. A., William, C., Yunus, M. M., & Rafiq, K. R. M. (2023). Exploring students’ perception of using ChatGPT in higher education. International Journal of Academic Research in Business and Social Sciences, 13(12), 4044–4054. [Google Scholar] [CrossRef]
  114. Singh, H., Tayarani-Najaran, M.-H., & Yaqoob, M. (2023). Exploring computer science students’ perception of ChatGPT in higher education: A descriptive and correlation study. Education Sciences, 13(9), 924. [Google Scholar] [CrossRef]
  115. Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays—Should professors worry? Nature. [Google Scholar] [CrossRef]
  116. Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1), 31–40. [Google Scholar] [CrossRef]
  117. Suneetha, Y. (2014). Constructive classroom: A cognitive instructional strategy in ELT. I-Manager’s Journal on English Language Teaching, 4(1), 1–3. [Google Scholar] [CrossRef]
  118. Teo, T., & Zhou, M. (2017). The influence of teachers’ conceptions of teaching and learning on their technology acceptance. Interactive Learning Environments, 25(4), 513–527. [Google Scholar] [CrossRef]
  119. Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. [Google Scholar] [CrossRef]
  120. Tram, N. H. M., Nguyen, T. T., & Tran, C. D. (2024). ChatGPT as a tool for self-learning English among EFL learners: A multi-methods study. System, 127, 103528. [Google Scholar] [CrossRef]
  121. University of Waterloo. (2024). Research with human participants. Available online: https://uwaterloo.ca/research/office-research-ethics/research-human-participants (accessed on 15 April 2025).
  122. Urlaub, P., & Dessein, E. (2022). From disrupted classrooms to human-machine collaboration? The pocket calculator, Google Translate, and the future of language education. L2 Journal, 14(1), 45–59. [Google Scholar] [CrossRef]
  123. Uzun, L. (2023). ChatGPT and academic integrity concerns: Detecting artificial intelligence-generated content. Language Education & Technology, 3(1), 45–54. [Google Scholar]
  124. Valova, I., Mladenova, T., & Kanev, G. (2024). Students’ perception of ChatGPT usage in education. International Journal of Advanced Computer Science and Applications, 15(1). [Google Scholar] [CrossRef]
  125. van Lieshout, C., & Cardoso, W. (2022). Google Translate as a tool for self-directed language learning. Language Learning & Technology, 26(1), 1–19. [Google Scholar]
  126. von Glasersfeld, E. (1995). A constructivist approach to teaching. In Constructivism in education (pp. 3–15). Erlbaum. [Google Scholar]
  127. Wang, J., & Kim, E. (2023). Exploring changes in epistemological beliefs and beliefs about teaching and learning: A mix-method study among Chinese teachers in transnational higher education institutions. Sustainability, 15(16), 12501. [Google Scholar] [CrossRef]
  128. Wenger, E. (1998). Communities of practice. Cambridge University Press. [Google Scholar] [CrossRef]
  129. Wenger-Trayner, E., Wenger-Trayner, B., Reid, P., & Bruderlein, C. (2023). Communities of practice within and across organization: A guidebook (2nd ed.). Social Learning Lab. [Google Scholar]
  130. Xiao, F., Zhu, S., & Xin, W. (2025). Exploring the landscape of generative AI (ChatGPT)-powered writing instruction in English as a foreign language education: A scoping review. ECNU Review of Education, 1–19. [Google Scholar] [CrossRef]
  131. Xiao, Y., & Zhi, Y. (2023). An exploratory study of EFL learners’ use of ChatGPT for language learning tasks: Experience and perceptions. Languages, 8(3), 212. [Google Scholar] [CrossRef]
  132. Zhao, X., Cox, A., & Cai, L. (2024). ChatGPT and the digitisation of writing. Humanities and Social Sciences Communications, 11(1), 482. [Google Scholar] [CrossRef]
  133. Zubarev, D., Tikhomirov, I., & Sochenkov, I. (2022). Cross-lingual plagiarism detection method (pp. 207–222). Springer International Publishing. [Google Scholar] [CrossRef]
Figure 1. Participants’ profile: age distribution and gender.
Figure 2. RQ1: Students’ general perceptions of generative AI—survey results.
Figure 3. RQ2: What do students believe about AI text detection?—survey results.
Figure 4. RQ3: How should authorities respond to AI-based academic dishonesty?—survey results.
Figure 5. RQ4: Reasons for typical and acceptable use of ChatGPT—survey results.
Figure 6. RQ5: Generative AI effects on academic integrity and writing now and in the future—survey results.