Article

Writing Accuracy: How AI-Assisted Writing Instruction Can Support EFL Undergraduate Students

Department of English, Faculty of Science and Arts, King Abdulaziz University, Rabigh 25732, Saudi Arabia
* Author to whom correspondence should be addressed.
Information 2026, 17(2), 157; https://doi.org/10.3390/info17020157
Submission received: 29 November 2025 / Revised: 3 January 2026 / Accepted: 8 January 2026 / Published: 5 February 2026

Abstract

Recent research suggests that artificial intelligence (AI) tools allow EFL (English as a Foreign Language) learners to exert greater control over their language-learning process. Furthermore, these tools enhance their language skills by providing them with elements often absent in traditional classroom settings, such as autonomy and an individual pace of learning. Specifically, AI-based tools, such as AI chatbots, have the potential to facilitate learning and streamline tasks for both students and instructors in language-learning contexts. These digital companions (i.e., chatbots) can be methodically crafted and equipped with the essential materials to support students in practising language skills independently, regardless of time or location. The current study presents an experiment conducted with undergraduate students at a university in Saudi Arabia to assess the effectiveness of a customised AI chatbot, WritePro (GPT-4), in improving their writing skills. Learners in the experimental group were instructed to use WritePro to navigate their writing stages, focusing their queries on four key components: content and organisation, grammar and mechanics, vocabulary usage, and sentence structure. The findings showed that WritePro serves as an effective tool for EFL learners to overcome several challenges in developing writing competencies. The significance of these findings lies in the potential of AI tools to enable EFL instructors to integrate chatbots effectively into classroom instruction, supporting the development of students’ writing skills. Furthermore, these findings can serve as a basis for advocacy with university policymakers regarding the use of AI chatbots in language classrooms.


1. Introduction

The use of artificial intelligence (AI) in daily activities is no longer a novelty, given its widespread implementation across various automated services, including office automation, customer service points, recruitment processes (CV screening), and remote working environments. Notably, this trend is similarly evident within the educational sector. AI tools are increasingly being adopted in universities, colleges, and schools as essential supplements to traditional modes of education [1,2]. Specifically, AI-based tools, such as chatbots, are recognised as the “next big thing” in learning and teaching, since they can incorporate highly useful feedback features, providing learners with greater autonomy while saving educators time and effort [3,4]. For instance, research has indicated that AI-based tools are extremely useful for adult EFL learners seeking to improve their writing skills [3,4], whether as students or professionals [5,6,7,8,9].
Adult EFL learners usually require special attention to improve their written English independently. This often occurs because many instructors in higher education either assume that their students have already acquired proficient writing skills or are constrained by time, as they are mandated to deliver extensive curricula within limited timeframes. Consequently, these circumstances often leave learners struggling with underdeveloped writing skills. In such a scenario, any form of support for these learners is likely to attract significant interest; hence, handy AI-based tools, such as ChatGPT, Copy.ai, and ProWritingAid, are becoming increasingly popular among adult EFL learners [10,11]. However, despite these benefits, the trend also raises concerns. Not all learners use AI tools for self-development, and the menace of plagiarism is increasing globally, mainly because these writing tools can produce highly sophisticated essays with minimal effort, which are difficult to identify as academic misconduct [12,13,14].
Advancements in AI technology have empowered users either to create their own AI chatbots or to use pre-configured models tailored to specific areas, such as language acquisition, constructive feedback, real-time correction, suggestions for improvement, detection of syntactical errors, vocabulary enhancement, and the resolution of stylistic issues [15]. Furthermore, these chatbots can manage complex ideas and handle extensive information. Consequently, they support learners in various facets of the writing process, including idea generation, content creation, structural organisation, and iterative editing. Moreover, these customised AI chatbots can integrate built-in ethical boundaries to promote academic integrity and ensure compliance with institutional standards, provided that rigorous ethical protocols and usage boundaries are established during the customisation phase [16]. The overall impact of EFL instructors’ use of AI chatbots to enhance their students’ writing skills can result in increased writing accuracy. Therefore, these tools can positively transform EFL academic writing contexts by helping instructors foster creativity in their classrooms through multiple facets, such as collaborating on lesson design and generating varied instructional methodologies [17].

1.1. Research Background

Historically, language instruction in Saudi Arabian schools and universities has mainly relied on teacher-centred, curriculum-bound, and form-focused methodologies, where rote memorisation, grammar translation techniques, and the explicit teaching of linguistic rules often govern classroom practice. Despite various curricular reforms, studies note that Communicative Language Teaching (CLT) remains challenging to implement due to large class sizes, inadequate teacher training, and assessment systems that prioritise linguistic accuracy over communicative proficiency [18]. Furthermore, academic specialists evaluating current instructional practices similarly report that language classrooms continue to emphasise content delivery, controlled practice, and examination preparation, with limited opportunities for authentic interaction or learner autonomy [19]. Although recent national initiatives under Vision 2030 aim to modernise pedagogy, conventional methods remain prevalent across many Saudi educational institutions.
In the Saudi Arabian tertiary education sector, the primary medium of instruction is English. Therefore, all university students are required to attain proficiency across the four core linguistic domains: reading, writing, speaking, and listening. Furthermore, the Saudi government provides substantial financial support to eligible students who seek to pursue higher education abroad. As a result, many students strive to enhance their foreign language skills, especially reading and writing, as they must achieve specific scores in English proficiency tests for admission to foreign universities. However, despite English being included in the curriculum as a discrete subject of study, students joining undergraduate courses often originate from rural areas where the vernacular is the predominant language of instruction. These students often arrive with underdeveloped English language skills. Moreover, some of their linguistic habits may have become fossilised by the time they commence their university education [20,21,22].
First, university students in Saudi Arabia typically study English for one year (i.e., two semesters). However, this duration is generally insufficient to fully develop their language skills [21,22]. Second, although the teaching of English aims to develop all four skills, writing skills are often neglected due to a lack of time and individual attention. Therefore, many students lack the knowledge needed to write long, cohesive essays, as they are rarely taught to support their writing with research on given topics. Within this context, AI tools emerge as a significant pedagogical resource that can serve as a supportive educational aid for the benefit of both educators and learners. Educational AI tools, such as AI chatbots, can be equipped with sufficient materials for developing essential writing skills, such as improving sentence structure, suggesting relevant vocabulary, and bringing coherence to the writing, to help students enhance their writing skills at their own pace. Moreover, these tools can help teachers save time when preparing for extra classes while providing learners with ample opportunities for independent practice to hone their writing proficiency [23]. Nevertheless, there is a substantial risk of academic dishonesty or overreliance on such tools. Therefore, it is important that these chatbots are designed with boundaries that allow them to function as writing tutors rather than generating full assignments.

1.2. Research Problem

AI chatbots can be a valuable resource for adult EFL learners, as well as for teachers who often lack sufficient time to pay individual attention to learners’ errors in English language skills, especially in writing. Moreover, large class sizes further exacerbate these issues, as teachers are unable to provide adequate feedback to every student to address their difficulties. Consequently, the need to identify AI resources that can supplement teachers’ classroom instruction, particularly an AI-based self-learning tool that is accessible to all students, led to the concept of building an AI writing bot. Notably, most, if not all, adult learners are already exposed to AI tools in various forms, such as Apple’s voice assistant Siri, Microsoft Copilot, and other AI-powered, chat-based services, even if they may not actively use these tools to enhance their language skills.
The present study focused on examining the effectiveness of the use of an AI chatbot, WritePro, in supporting the development of writing skills among adult EFL learners in Saudi Arabia. This study was inspired by the success stories reported in previous research on the use of AI chatbots to enhance writing accuracy.
The primary objective of creating the writing chatbot WritePro, the AI tool used in this study, was to guide learners in improving their writing across different areas, such as content development, the accurate use of grammatical rules, selecting appropriate and diverse vocabulary, and refining sentence structures. WritePro is a customised ChatGPT-based tool that was specifically designed to address the challenges commonly faced by EFL learners in the early stages of writing, such as grammar, sentence structure, style consistency, and writing mechanics.
However, since the Saudi EFL students in this study had not previously used AI-based tools specifically designed to improve their writing skills, it was imperative to evaluate the effectiveness of WritePro through an empirical, experimental study. A preliminary review of research investigating the potential effectiveness of AI tools in improving adult EFL learners’ writing showed that such tools are promising [5,6,7,24]. These encouraging findings inspired the researchers to conduct the present study. Moreover, there has been a lack of focused research on this topic in the Saudi Arabian context. Therefore, the present study aimed to achieve two objectives: (i) to understand the potential benefits of an AI chatbot for EFL learners, and (ii) to contribute to the growing body of literature on the use of AI tools in adult EFL instruction.

2. Literature Review

Extensive research has been conducted globally to examine the potential benefits of AI tools in educational settings across various aspects of academic activity (e.g., teaching, learning, curriculum development, assessment, and classroom management). Previous studies have therefore generated a substantial body of literature on the subject, including many studies investigating the efficacy of AI tools in supporting EFL learners of all ages. Nevertheless, the present review focuses specifically on studies that explore the effectiveness of AI tools in supporting adult EFL learners, particularly in the development of their writing skills.

2.1. EFL Learners and Their Challenges in Writing

Writing is widely recognised as a challenging skill to master, as it requires comprehensive phonological, syntactic, and lexical knowledge that may not be part of a learner’s L1 [25]. In addition, writing is an iterative and complex process that is characterised by the sequential development of different stages, including brainstorming, planning and outlining, drafting, and final composition. EFL learners encounter several challenges due to various factors, such as limited vocabulary knowledge [26], mother-tongue influence [21,27], low levels of English literacy [28], challenges in structuring information and idea generation, and writing anxiety [29]. Alshammari [22] highlighted the persistent problem of low English proficiency among Saudi learners, with the findings revealing significant gaps between the theory and practice of teaching English in non-native environments such as Saudi Arabia.
While EFL learners face numerous challenges, viable solutions are limited. One of the major reasons behind the limited progress in EFL learning is learners’ lack of practice outside the classroom, as they often lack a supportive environment. However, AI tools offer the potential to address this gap by promoting autonomous learning [30]. Learners can use AI chatbot tools to practise all four English language skills—reading, writing, listening, and speaking—outside of regular classroom hours, as AI tools can adapt to individual learning needs in a manner comparable to a teacher supervising each student individually in class [31]. The affordability of AI tools should not be an issue for most EFL learners, since the tools are freely available online. In addition, most learners are equipped with smartphones to access these services. Teachers can help their students by designing customised AI bots to guide them in specific subjects or areas of study [5,8,9,10]. Therefore, integrating AI into academic settings holds the potential to create a transformative shift in education [24].

2.2. AI in Language Education

Presently, advancements in the daily applications of AI are unparalleled, affecting nearly every aspect of human life, including teaching and learning. AI-driven machines, such as computer systems, use software that enables them to perceive, learn from, and respond to their surroundings. These machines can produce a variety of outputs in response to a relevant input [32,33]. Regarding language learning, contemporary AI chatbots can not only produce text, images, charts, graphs, tables, figures, designs, audio, and video but also convert text to images or videos, model realistic conversations and pronunciation patterns, and provide immediate corrective feedback [1]. AI chatbots and agents are now embedded in many websites to simulate real-life dialogue, making them effective tools for practising spoken English [9]. For instance, Vesselinov and Grego examined the effectiveness of ‘Duolingo’ (an educational tool for teaching foreign languages) and found that the AI-powered system is highly effective, as it reproduces the nuances of conversations in a foreign language in a manner similar to interacting with native speakers [34].
Popular AI language-learning models are designed to respond to learners’ prompts and questions, providing suggestions for improvement in the desired skill. Therefore, this revolutionises the ways in which learners acquire a particular skill, such as writing [35]. Researchers have tested AI language-learning models and found that the integration of the technology with traditional pedagogical methods boosts learners’ linguistic skills, since such an approach aligns with the principles of cognitive development [15,31,36,37]. Systematic review studies on AI in language education consistently report a favourable impact of AI on learners [9,10,36,38]. Specifically, regarding the development of EFL learners’ writing skills, AI writing tools provide users with a plethora of writing support mechanisms and instant feedback that boost learners’ writing proficiency [5,6,7,8,39].

2.3. AI Chatbots and Writing Development

AI has proven itself to be a valuable resource for both EFL teachers and learners, especially in supporting the development of writing skills. A few studies have explored the integration of AI chatbots into the teaching of writing and reported promising results regarding learners’ overall writing performance in different educational settings [40,41,42,43]. These results extend to several key components of EFL writing skills, such as task achievement, coherence and cohesion, grammatical range and accuracy, and lexical range and accuracy [44]. The main hurdles in EFL learners’ writing skill development are often cognitive and vary according to their proficiency levels. These barriers might lead to unwillingness to communicate in writing, difficulty generating ideas ‘for’ or ‘against’ any topic, lack of adequate reading and research, limited vocabulary, and lack of planning to make drafts and revise them [8]. Previous research indicates that these challenges can be effectively addressed through the implementation of appropriate technological tools. Findings from several studies consistently affirm the potential of AI chatbots in helping language learners to overcome these hurdles [41,45,46,47]. For instance, Guo et al. [8] reported that integrating AI tools into EFL argumentative writing as ‘writer companions’ (mirroring peer interactions) helped learners to develop their argumentation skills, resulting in notable improvements in their argumentative writing. Similarly, Ayedoun et al. [48] showed that a conversational chatbot trained to facilitate the exchange of ideas can effectively address EFL learners’ reluctance to communicate in English. Brandtzaeg and Følstad [49] identified several motivational factors behind individuals’ use of chatbots, including the desire to communicate both orally and in writing. Fryer et al. [50] (p. 8), observing the exemplary success of chatbots in language learning, queried why language-teaching bots have not yet achieved a lasting impact on language learning that would see them readily incorporated into academic institutions’ language curricula. The researchers argue that ‘chatbots will eventually be the perfect language learning partners, potentially enabling us to learn multiple languages anywhere, anytime and at our own pace’.
The use of AI chatbots by learners in writing depends on the type of assistance they need or their proficiency levels. Usher and Amzalag [43] reported that learners consult AI chatbots for a variety of purposes, such as content development (34.66%), source integration and verification (28.22%), concept clarification and definitions (10.62%), writing consultation (10.23%), text refinement and formatting (9.94%), sentence refinement (3.50%), and translation assistance (2.83%). In addition, they investigated what communication styles learners used and reported that the most predominant style was requesting (58%), followed by questioning (27%), and declarative statements (15%), with a neutral tone most often employed (66%), followed by praising (26%) and reprimanding (8%). Furthermore, the use of AI chatbots in writing varies according to learners’ proficiency levels. According to Duong and Chen [40], EFL learners actively interacted with their chatbot, Writing Assistant Bot (WAB), about vocabulary and content development. WAB served as a planning tool for lower-proficiency learners, while higher-proficiency learners used it as a language refinement tool. Similarly, Lin and Chang [41] reported that their AI chatbot, DD, when used by university psychology students, showed significant improvements in developing students’ essay outlines, thesis statements, and peer feedback quality.
A key advantage of AI technology for users is that, with the support of open-source AI platforms, such as Custom ChatGPT, teachers can design their own ChatGPT-style chatbots powered by data specific to their classroom needs [51], with only a little technical know-how. This is possible because the underlying code of open-source AI programmes is accessible to users, who can modify it for use in scenarios other than those for which it was primarily intended. The large and active community of open-source AI programme users comprises coders, developers, and end-users, allowing individuals to make contributions and seek help from others when needed [52,53,54,55,56,57]. Generally, research in this area shows that adult EFL learners can receive adequate support, such as tailored writing prompts, vocabulary aid, idea generation, exercises, syntax error correction, and feedback, through AI chatbots integrated into their study materials [58,59,60,61,62].

2.4. Apprehensions About AI in Education

As discussed earlier, the literature on the impact of AI chatbots on adult writing development presents an optimistic view of the developing scenario pertaining to the integration of AI into language teaching and learning, especially in the area of writing skill development. However, there are concerns about the potential negative side of the use of AI in education [63]. Teachers, researchers, and institutional stakeholders have expressed concerns that excessive reliance on AI may inhibit the development of learners’ independent creativity, which is required across all stages of life. Total reliance on AI tools could result in the erosion of learners’ own skills. Moreover, the renowned linguist Noam Chomsky has criticised the term ‘artificial intelligence’, referring to it as a misnomer. In his opinion, what is termed artificial intelligence is a collection of vast amounts of data with predictable questions and answers that are made available to users at a rapid speed. In an interview with Dr. Roberts and Dr. Watumull (Chomsky and Dr. Roberts are professors of linguistics, while Dr. Watumull is a director of artificial intelligence at a science and technology company), Chomsky remarked that ‘OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Sydney are marvels of machine learning’. However, he cautioned that AI’s lack of morality and rational thinking makes it an example of the ‘banality of evil’, as such systems are detached from reality and truth, operating solely within the constraints of their programming [64]. These limitations in morality and rational thought make it insurmountably difficult for AI to replicate human cognitive processes exactly. Chomsky further added, ‘We know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how human beings reason and use language. These differences place significant limitations on what these programs can do’ [64].
However, the major issues regarding the academic use of AI relate to cheating and plagiarism in higher education. Globally, there is a growing concern over students’ reliance on custom academic writing services and the consequent lack of academic integrity [65]. This risk of academic misconduct is further heightened by the capability of AI chatbots to produce written tasks, thus tempting students to complete assignments without investing the necessary effort [5]. These risks could be mitigated by integrating a balanced use of AI chatbots alongside conventional instructional methods, thus ensuring that students cultivate their own skills while leveraging these tools [66,67]. Furthermore, incorporating an AI literacy component early in writing courses is essential to equip learners with comprehensive knowledge about the benefits and potential risks associated with AI usage, promoting a culture of responsible use of these tools [68].
The reviewed literature collectively illustrates a convergence of several theoretical perspectives—sociocultural theory, cognitive development theory, autonomous learning theory, and technology-enhanced language learning (TELL)—to explain both the challenges faced by EFL learners and the pedagogical potential of AI-mediated instruction. Alshammari’s observation of Saudi learners’ low proficiency reflects a classic sociocultural perspective [22]. Moreover, the integration of AI with traditional pedagogical practices is repeatedly justified through the principles of cognitive development. Studies showing improved linguistic performance with AI support are based on the assumption that learning occurs through scaffolded cognitive engagement. Furthermore, the argument that AI promotes autonomous learning draws on self-regulated learning (SRL) and learner autonomy theories, which emphasise learners’ active role in managing their own learning processes. Across the studies, AI is positioned as part of a broader TELL paradigm, which posits that technology can transform language learning. The evidence from several studies demonstrates that AI chatbots act as peer-like collaborators, writing companions, and feedback providers, aligning with TELL’s emphasis on interactive, learner-centred digital environments [8,41,48]. The apprehensions about AI—particularly those raised by Chomsky—introduce a humanistic and rationalist theoretical critique. However, the scarcity of empirical, classroom-based studies in the Arab region—especially on adult EFL writing—reveals a contextual and theoretical gap. The review further indicates that there is almost negligible research on the academic use of AI chatbots, such as for teaching writing to adult EFL learners, in the Arab region, particularly in Saudi Arabia.
This significant academic area needs scholarly attention, as students do make use of AI tools and are aware of their functions, albeit not to learn English or enhance their academic performance. While global research supports AI’s potential, the absence of localised evidence highlights the need for ecological validity and context-sensitive pedagogical models. Accordingly, this gap justifies the present study, positioning it as a timely and meaningful contribution to both theoretical understanding and pedagogical practice.

3. The Present Study

The current study was designed to examine the effects of an AI chatbot (integrated into the study materials) on the writing accuracy of adult Saudi university EFL learners. An associated objective was to instruct students in the effective use of AI chatbots so that they can practise targeted language learning at their own pace. A further objective was to fill the existing gap in the research literature on the effects of AI in EFL education in Saudi Arabia, thereby contributing to the growth of the literature on the subject (i.e., the development of Saudi EFL learners’ writing skills).

3.1. Conceptual Framework

The present research was designed with reference to two core learning concepts. First, feedback and suggestions for improvement are important for learning to occur [69]; cognitivism and connectivism place high emphasis on the importance of feedback in learning. Second, total learner engagement is crucial for learning to take place [70]; engagement theory and humanistic learning theory emphasise the role of total learner engagement in effective learning. Moreover, the concept of the Zone of Proximal Development highlights the need for support from a more knowledgeable person to facilitate learning [71]. Therefore, an AI chatbot could serve as an on-demand resource for EFL learners to improve their writing in this context, provided that they are aware of its benefits and pitfalls and use it in a thoughtful and responsible manner. A schematic representation of the conceptual framework is shown in Figure 1 below.

3.2. Research Hypothesis

Based on a comprehensive review of the literature on the positive impact of the integration of AI in adult EFL classrooms, particularly the use of AI chatbots to enhance learners’ writing accuracy, it was hypothesised that the integration of WritePro with the conventional mode of teaching writing in English helps learners to achieve accuracy in writing. Specifically, this integration was expected to enhance learners’ control over repetitiveness, choice of verbs, sentence length, and arguments relevant to the topic in their writings.

3.3. Research Questions

The current study was designed to address the following research questions:
RQ 1: Does AI-assisted writing instruction help adult EFL learners achieve greater writing accuracy than conventional instruction, both overall and across key components of writing accuracy?
RQ 2: Which key component of writing accuracy best predicts students’ overall improvement in writing?

4. Research Methodology

4.1. Participants

The participants in this study were undergraduate science majors enrolled in the second semester of their mandatory English language training course (ELIS 120). The students were enrolled in a one-year English language programme and had already completed pre-university education that included English as a subject. Their level of proficiency in English was intermediate (B1). A total of 65 students participated in the study. All participants were male, ranging in age between 20 and 22 years. None of the participants had prior experience using AI chatbots to learn English language skills. However, they had basic knowledge of AI chatbots in other daily applications, such as smartphone apps and website-based conversational bots. At the time of the study, the participants could write short essays, though with accuracy issues such as repetitiveness, the use of weak verbs, short sentences, and occasionally arguments irrelevant to the given topic. It was observed that these participants had little opportunity to improve their written English outside the classroom.
The demographic data of the participants are presented in Table 1 below.

4.2. Data Collection Instruments

4.2.1. Writing Test

Participants completed two tests: a pre-test and a post-test. The pre-test was given to all participants before they were divided into the experimental and control groups, while the post-test was administered after the experimental teaching involving the use of the AI chatbot. The two writing prompts were different; however, they were comparable in terms of topic and difficulty to ensure consistency and to allow for a meaningful comparison of student performance before and after the intervention. Pilot tests were conducted to ascertain the validity and reliability of the tests, with the required modifications being made based on the results. In addition, suggestions from experts in the field were sought to improve the final drafts of the tests.
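Pre-/post-test designs of this kind are commonly analysed by comparing each learner’s two scores with a paired t-test on the per-learner gains. The sketch below is purely illustrative: the function name, score values, and sample size are hypothetical and do not reflect the study’s actual data or analysis.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-test on pre/post writing scores.

    Computes each learner's gain (post - pre), then returns the
    t statistic (mean gain / standard error) and degrees of freedom.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = mean(diffs)
    sd = stdev(diffs)                  # sample standard deviation of gains
    t = d_bar / (sd / math.sqrt(n))    # standard error = sd / sqrt(n)
    return t, n - 1

# Hypothetical scores (out of 20) for five learners
pre_scores  = [11, 12, 10, 13, 11]
post_scores = [14, 15, 13, 16, 13]
t_stat, df = paired_t(pre_scores, post_scores)
print(round(t_stat, 2), df)  # prints: 14.0 4
```

In practice, a statistics package would also report the p-value; this sketch only shows the paired structure of the comparison.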

4.2.2. Scoring Rubrics

Students’ writing was assessed using an analytic rubric consisting of four components: content and organisation (clarity of main idea, paragraph structure, and use of cohesive devices), grammar and mechanics (subject–verb agreement, verb tense consistency, word order, and punctuation), vocabulary use (spelling accuracy and repetition vs. variety), and sentence structure (simple vs. compound or complex sentences). Each component was evaluated on a five-point scale, yielding a total writing score of 20 marks.
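As a concrete illustration, the scoring scheme described above can be sketched as follows. This is a hypothetical helper, not the authors’ actual scoring code; only the component names and the 5-point scale are taken from the rubric:

```python
# Illustrative sketch of the analytic rubric: four components, each on a
# 1-5 scale, summed to a total out of 20. The helper name and structure
# are assumptions for illustration only.

RUBRIC_COMPONENTS = (
    "content_organisation",
    "grammar_mechanics",
    "vocabulary_use",
    "sentence_structure",
)

def total_writing_score(component_scores: dict) -> int:
    """Sum four 1-5 component scores into a total out of 20."""
    for name in RUBRIC_COMPONENTS:
        score = component_scores[name]
        if not 1 <= score <= 5:
            raise ValueError(f"{name} must be on the 1-5 scale, got {score}")
    return sum(component_scores[name] for name in RUBRIC_COMPONENTS)

sample = {
    "content_organisation": 4,
    "grammar_mechanics": 3,
    "vocabulary_use": 4,
    "sentence_structure": 5,
}
print(total_writing_score(sample))  # → 16
```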

4.2.3. WritePro

WritePro was created using the customisation feature in ChatGPT. During the configuration stage, a brief description of this GPT was given, stating that it is solely a tool to help users resolve form-focused writing issues. Some conversation starters were provided, such as ‘Can you check this paragraph for grammar and structure?’, ‘Highlight and correct any grammar issues in the following text. Explain each correction briefly’, and ‘Revise the sentence structure in this paragraph’. Furthermore, several EFL writing books were incorporated into the knowledge section, including the Longman Academic Writing Series (1–5), Writing Skills Practice Book for EFL, The Oxford Essential Guide to Writing, and Improve Your Writing Skills (guidance notes for students). This customisation helps provide more context-aware guidance across a range of writing tasks. The GPT tool is designed to encourage users to improve their self-editing skills over time.
WritePro differs from, and advances beyond, previously reported AI-assisted writing tools in the following ways:
  • It was purpose-built for form-focused instruction rather than general writing assistance. Most AI chatbots in the literature (e.g., WAB, DD, writer-companion bots) provide broad writing support, including idea generation, content development, or argument construction. In contrast, WritePro is explicitly constrained to addressing form-focused issues.
  • A custom knowledge base was built from authentic EFL writing texts. Existing AI tools typically rely on general training data or limited domain-specific corpora. This gives WritePro a pedagogically aligned, EFL-specific knowledge foundation. To date, no reported tool in the literature incorporates such a structured, textbook-based knowledge layer.
  • It was designed to scaffold self-editing skills rather than replace them. Many AI tools risk encouraging passivity by generating corrected text directly.
  • Conversation starters modelled effective revision behaviours. Unlike generic chatbots, WritePro includes pre-designed conversation starters that teach learners how to ask for help. The prompts function as scaffolds for metacognitive awareness, helping learners internalise revision strategies. Previous tools rarely embed such explicit modelling of learner–AI interaction.
  • It has a controlled, pedagogically safe environment that reduces the risk of plagiarism.
  • It has been tailored to EFL learners’ common error patterns. By drawing on EFL-specific textbooks and providing form-focused feedback, WritePro is specifically configured to address L1-influenced grammar errors.

4.3. Data Collection Procedure

4.3.1. Introductory Phase

Participants were first given a pre-test and then divided into experimental and control groups using a simple random sampling procedure. Random assignment was chosen to (a) avoid systematic bias, (b) balance mixed abilities, (c) ensure that the only systematic difference between the groups was the intervention, and (d) support the use of inferential statistics. All writing samples were assessed using the analytic rubric described earlier. In Week 1, an introductory session was held in which students in the experimental group received an information sheet explaining how to use the AI chatbot WritePro. During the same session, both groups were given a consent form to ensure voluntary participation.

4.3.2. Intervention (Use of AI in Experimental Group)

The experimental group was taught writing using materials incorporating the AI chatbot, which was shared with students through a direct link. While practising essay writing, students typed their work on their personal devices and then sought feedback and corrections from WritePro. In contrast, the control group was taught the same content through conventional lesson delivery and completed their writing tasks on paper.

4.3.3. Final Assessment Phase

The experiment lasted one month, after which the two groups were given a post-test writing task under conditions similar to those of the pre-test. All writing samples were assessed using the same analytic rubric applied in the pre-test.

4.4. Data Analysis

Data were analysed quantitatively to examine the overall and component-level (rubric component) effects of AI-assisted writing instruction, using students’ pre- and post-test scores in both groups. Statistical analyses were carried out in R (version 4.5.1; R Core Team, 2025) using the lme4, lmerTest, dplyr, and emmeans packages. Descriptive statistics (mean, standard deviation, and standard error) were calculated for the overall pre- and post-test scores of both the experimental and control groups, as well as for the four rubric components: content and organisation, grammar and mechanics, vocabulary use, and sentence structure. This provided a general overview of students’ writing performance before and after the intervention.
The dataset was examined for missing responses prior to analysis, and none were found. It was then restructured in long format, with one row per student, per component, per test occasion (pre- or post-test). Three categorical variables were converted to factors to support modelling: group (control vs. experimental), time (pre-test vs. post-test), and component (content and organisation, grammar and mechanics, vocabulary use, and sentence structure). To examine the effect of the AI intervention on students’ writing, linear mixed-effects models (LMMs) were fitted using the lmer function in R. The model included group, time, and component as fixed effects, along with their interactions, and a random intercept for Student ID to account for individual variation. Pairwise comparisons based on estimated marginal means were then conducted to examine differences across the rubric components.
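The analysis was run in R with lme4; as an illustrative sketch only, an equivalent model specification can be expressed in Python with statsmodels on simulated data. The data-generating values below are invented for illustration and are not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a long-format dataset: one row per student x time x component.
rng = np.random.default_rng(0)
rows = []
for sid in range(40):
    group = "experimental" if sid < 20 else "control"
    u = rng.normal(0, 0.3)  # student-level random intercept
    for time in ("pre", "post"):
        for comp in ("content", "grammar", "vocabulary", "structure"):
            # Hypothetical gain for the experimental group at post-test.
            gain = 1.5 if (group == "experimental" and time == "post") else 0.0
            rows.append({"student": sid, "group": group, "time": time,
                         "component": comp,
                         "score": 2.5 + gain + u + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

# Fixed effects: group x time x component with all interactions; random
# intercept per student, mirroring the lmer specification
# score ~ group * time * component + (1 | student).
model = smf.mixedlm("score ~ group * time * component", df,
                    groups=df["student"])
fit = model.fit()
print(fit.fe_params)  # 16 fixed-effect estimates (2 x 2 x 4 design)
```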
All writing scripts were scored using the study’s analytic rubric, and procedures were implemented to ensure scoring consistency. Two trained raters, both experienced in EFL writing assessment, participated in a calibration session prior to scoring, during which they jointly evaluated a sample set of scripts to establish a shared understanding of the rubric criteria. Following calibration, each rater independently scored the assigned scripts. To verify scoring consistency, 20% of the scripts were double-rated, and inter-rater reliability was calculated using an intraclass correlation coefficient (ICC). Any discrepancies exceeding one scale point on a rubric component were resolved through discussion. This process ensured that the scoring was both reliable and aligned with the rubric’s intended constructs.
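The ICC for the double-rated subset can be computed, for example, as a two-way random-effects, absolute-agreement, single-rater ICC(2,1) (Shrout–Fleiss conventions). The sketch below uses invented ratings for illustration; the function and sample data are assumptions, not the study’s actual scores:

```python
from statistics import mean

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is a list of per-script rating rows, one column per rater."""
    n = len(scores)      # number of scripts
    k = len(scores[0])   # number of raters
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    col_means = [mean(col) for col in zip(*scores)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean square for scripts
    msc = ss_cols / (k - 1)                 # mean square for raters
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented double-rated scores on one 1-5 rubric component.
ratings = [[4, 4], [3, 3], [5, 4], [2, 2], [4, 5], [3, 2], [5, 5], [4, 4]]
print(round(icc2_1(ratings), 2))  # → 0.85
```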

5. Results

The descriptive statistics (mean [M], standard deviation [SD], and standard error [SE]) of students’ pre- and post-test scores by component—content and organisation, grammar and mechanics, vocabulary use, and sentence structure—are presented in Table 2. The data are split between the control group (Ctrl) and the experimental group (Exp). Standard deviations are generally lower in the experimental group’s post-test, suggesting more consistent performance gains, while standard errors remain relatively stable across groups and components, indicating reliable mean estimates. Sample sizes are comparable (Ctrl = 35, Exp = 30), supporting fair group comparisons. A component-wise analysis reveals notable differences. In content and organisation, the control group shows minimal change from the pre-test (M = 2.36) to the post-test (M = 2.40), suggesting little improvement, whereas the experimental group shows a substantial increase from the pre-test (M = 2.82) to the post-test (M = 4.45), indicating strong gains. This difference can be interpreted as evidence that the intervention significantly enhanced students’ ability to structure and organise their writing. Regarding grammar and mechanics, the control group shows a slight decline from the pre-test (M = 2.22) to the post-test (M = 2.17), while the experimental group shows a marked improvement from the pre-test (M = 2.67) to the post-test (M = 4.11). The experimental group thus benefited notably in grammatical accuracy and mechanical correctness, while the control group stagnated. In vocabulary use, the control group shows a modest increase from the pre-test (M = 2.08) to the post-test (M = 2.26), while the experimental group shows a significant rise from the pre-test (M = 2.53) to the post-test (M = 4.27), indicating that vocabulary development was strongly influenced by the intervention.
For sentence structure, the control group shows some improvement from the pre-test (M = 2.47) to the post-test (M = 3.20), while the experimental group shows a strong improvement from the pre-test (M = 2.75) to the post-test (M = 4.33). Sentence complexity and syntactic variety thus improved more in the experimental group, reinforcing the intervention’s effectiveness.
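The component-level changes discussed above can be recomputed directly from the group means reported in Table 2:

```python
# Pre- and post-test means per rubric component, as reported in Table 2.
means = {
    "control": {
        "content_organisation": (2.36, 2.40),
        "grammar_mechanics":    (2.22, 2.17),
        "vocabulary_use":       (2.08, 2.26),
        "sentence_structure":   (2.47, 3.20),
    },
    "experimental": {
        "content_organisation": (2.82, 4.45),
        "grammar_mechanics":    (2.67, 4.11),
        "vocabulary_use":       (2.53, 4.27),
        "sentence_structure":   (2.75, 4.33),
    },
}

# Raw mean gain (post minus pre) per group and component.
gains = {g: {c: round(post - pre, 2) for c, (pre, post) in comps.items()}
         for g, comps in means.items()}
for group, comps in gains.items():
    print(group, comps)
```

The computed gains make the contrast explicit: every experimental-group component gains well over one scale point, while the control group’s largest gain (sentence structure, +0.73) is smaller than the experimental group’s smallest (grammar and mechanics, +1.44).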
To investigate whether these differences are statistically significant, linear mixed-effects models were used to examine the effects of group, time, and writing component on students’ scores across the pre- and post-tests. The results, presented in Table 3, showed a significant time × group interaction (β = 1.37, SE = 0.21, t = 6.64, p < 0.001) with a medium effect size (partial η2 = 0.09), indicating that the experimental group improved more than the control group from pre- to post-test. The model was reparametrised with sentence structure as the reference component to investigate how this improvement varied by writing component. Sentence structure scores were significantly higher than both grammar (β = −0.42, p = 0.004) and vocabulary scores (β = −0.42, p = 0.004), while content and organisation did not differ significantly from sentence structure (β = −0.20, p = 0.17).
Figure 2 below presents the estimated marginal post-test means across components by group. Sentence structure scored highest overall in both groups, especially in the experimental group, suggesting that it was the component most responsive to the AI intervention and the strongest contributor to the overall improvement observed in the learners’ writing scores.

6. Discussion

This study aimed to evaluate the effect of a customised chatbot, namely WritePro, on overall student performance and across different writing components: content and organisation, grammar and mechanics, vocabulary use, and sentence structure. The discussion below addresses the main findings in relation to our research questions.

6.1. Effectiveness of AI-Assisted Writing Instruction on EFL Writing Performance

In answer to RQ1, which explores the effect of AI-assisted writing instruction on EFL writing performance, the findings revealed a significant group × time interaction, demonstrating that the experimental group outperformed the control group in overall writing performance. This interaction effect shows that AI chatbot use can substantially enhance EFL writing performance. These improvements are statistically robust and educationally meaningful, providing strong support for incorporating AI into conventional writing instruction. At the component level, the results showed that sentence structure scores improved significantly more than those of the other components; content and organisation, vocabulary use, and grammar and mechanics also improved significantly but remained below the sentence structure scores. The experimental group’s post-test scores were higher than both its own pre-test scores and the control group’s scores across all components. In terms of raw mean gains, the most pronounced improvements were observed in content and organisation and vocabulary use, suggesting that the intervention also had a particularly strong impact on higher-order writing skills. The control group showed minimal or no improvement, underscoring the effectiveness of the experimental treatment. These findings highlight the effectiveness of our chatbot, WritePro, in enhancing learners’ overall writing skills across the rubric components.
These findings substantiate previous reports in the literature on the potential of AI chatbots to transform writing instruction in EFL contexts [40,42,44]. This is evident not only in overall improvement but also at the component level, corroborating the findings of Boudouaia et al. [44], albeit with different components. The findings indicate that a more advanced, well-designed chatbot can substantially boost EFL writing skills, surpassing the gains typically attained in conventional classroom contexts.
These findings can be attributed to several factors. Firstly, WritePro acted mainly as a personalised tutor, providing students with instant formative feedback drawn from the EFL knowledge base incorporated into the bot. This is in line with Vygotsky’s notion of the Zone of Proximal Development [71], in which learners perform best in tasks such as writing when aided by a more knowledgeable other. Under this paradigm, WritePro served as the more knowledgeable other for our students, leading to a marked improvement in their writing tasks. In addition, WritePro was specifically configured to align with the rubric and task requirements, such as using at least three transition signals in the essay, including at least one conditional sentence in each paragraph, and using ‘unless’ in at least one sentence. Accordingly, the post-test writing samples of the control group did not exhibit these features as prominently as those of the experimental group, reflecting the latter’s greater improvement in writing performance.
Another important aspect relates to the nature of the feedback in each interaction, which centres on sentence-level elements such as word order, clause construction, and syntactic complexity, thereby individualising responses to learners’ distinct needs. This approach allows for more precise, context-driven feedback and thus helps learners internalise the nuances of sentence structure. Additionally, personalised chatbots can adapt their feedback to a learner’s proficiency level, progressively increasing the complexity of structural suggestions as the learner advances. AI chatbots can offer immediate, tailored feedback, potentially enhancing the drafting skills of EFL learners and boosting their writing proficiency at an individual level [72]. In particular, progress at the sentence level is often more transparent and easily auditable, as seen in explicit before-and-after contrasts, which supports the learning process. AI chatbots can assist with non-linguistic facets of writing (such as argument building, style, tone, and thematic structure) as well as linguistic facets (such as grammar checking, translation, and vocabulary selection) [73]. This personalised, adaptive approach to structural feedback can therefore produce better learning outcomes and improved writing skills compared with conventional language-learning tools.
The control group participants also received feedback, but of a conventional, teacher-driven nature: feedback on linguistic accuracy (grammar corrections, vocabulary issues, and mechanics such as spelling, punctuation, and capitalisation), on content (clarity of thought, relevance to the prompt, and accuracy of information), on organisation and coherence, general global comments on the writing, and feedback on revision, especially in process writing. Writing was not taught as a separate subject, and both groups received similar instruction in terms of time allocation. The distinction lay in post-instruction support: the experimental group could obtain instant support and feedback from the chatbot, whereas the control group could access such help only in the classroom.

6.2. Predictors of Overall Writing Improvement with AI-Assisted Instruction

In answer to RQ2, which aimed to identify which key component of writing skills contributes most to the overall writing performance of EFL learners, the linear mixed-effects models (LMMs) showed that sentence structure was a statistically significant predictor of overall improvement, whereas content and organisation, grammar and mechanics, and vocabulary use were weaker predictors. These findings further underscore the effectiveness of integrating an AI chatbot into EFL writing instruction. Although vocabulary use was significant in the component-level comparison, it was a weaker predictor of overall writing performance. Content and organisation and grammar and mechanics not only varied significantly across groups but also appeared less important in driving actual improvements in writing. These model-based results thus add explanatory power to the component analysis beyond group-level comparisons alone.

7. Conclusions

The objective of the present research was to investigate whether AI writing tools can positively affect adult EFL learners’ writing accuracy, and the findings point to a positive impact. The results indicate that the AI tool helped learners organise their ideas, write coherently, check for suitable vocabulary and grammatical correctness, use transitional phrases, use conditionals, and use certain conjunctions, such as unless, until, and while. These findings align with those of previous studies on the topic, particularly studies highlighting that EFL learners can receive the necessary help from AI tools to achieve accuracy in writing, such as tailored writing prompts, vocabulary aid, writing ideas, exercises, syntax error correction, and instant feedback [8,9,41,45,46,47,48,58,59,60,62].
The findings are significant in many ways, and future research on the topic may corroborate them. Based on the results obtained, the findings may be generalised as theoretical input that AI tool interventions positively affect EFL learners’ writing accuracy. Embracing such technologies in the conventional classroom is not only beneficial but increasingly essential in the AI era. However, the use of AI tools is not without challenges. Learners may come to over-rely on these tools, which could affect their creativity and critical thinking, and some may disregard originality and ethical writing practices [73]. Hence, a balanced incorporation of AI into early classes, accompanied by instruction on the ethical use of AI and on treating these tools as an aid rather than a ‘doer’, would help preserve learners’ engagement with their own writing and thereby their creativity and critical thinking.
To address the risks associated with overreliance on AI and to cultivate responsible, informed use, instructors can embed structured AI literacy practices throughout the writing curriculum, using strategies such as the following:
Establishing explicit policies for AI use and learning outcomes;
Teaching students how AI works (at an accessible level);
Scaffolding the use of AI through structured, multi-stage writing tasks and integrating it into specific, controlled phases, instead of allowing AI at the final drafting stage;
Ensuring transparency through “AI Use Logs”;
Asking students to submit a brief reflection with each assignment;
Incorporating critical evaluation activities;
Promoting the use of AI as a peer, not as a ghostwriter;
Embedding AI literacy modules early in the course.

7.1. Limitations of the Present Study

Whatever efforts researchers make, certain limitations typically affect their studies, and the present study is no exception. The first limitation is that, owing to the lack of coeducation in Saudi Arabia, the research was confined to male participants. Gender differences in access to, and use of, AI tools in learning may exist, but the present study could not take this aspect into account. The findings are also not broadly generalisable, since generalisability requires a larger population; owing to financial and time constraints, the number of participants was rather low. In this sense, the present study is only a beginning.

7.2. Further Recommendations

The limitations cited above may become stepping stones for future researchers in this area. Specifically, future researchers may conduct a similar study that includes both male and female participants to see whether the results differ. Second, research conducted with a much larger population of EFL students may generate generalisable findings.

Author Contributions

Conceptualisation, H.A. and A.A.A.; methodology, H.A.; software, H.A.; validation, A.A.A. and M.S.; formal analysis, H.A.; investigation, H.A.; resources, M.S.; data curation, M.S.; writing—original draft preparation, H.A.; writing—review and editing, A.A.A. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under Grant No. G: 135-662-1442. The authors acknowledge the technical and financial support provided by DSR at KAU.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Scientific Research Ethics Committee, King Abdulaziz University, (REC-FAH-KAU-2026-001) on 4 January 2026.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data may be made available with the written permission of the Deanship of Scientific Research, King Abdulaziz University.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
Ctrl: Control Group
EFL: English as a Foreign Language
Exp: Experimental Group
L1: First Language

References

  1. Huong, L.P.H.; Hung, B.P. Mediation of digital tools in English learning. LEARN J. Lang. Educ. Acquis. Res. Net. 2021, 14, 512–528. [Google Scholar]
  2. Raja, R.; Nagasubramani, P.C. Impact of modern technology in education. J. Appl. Adv. Res. 2018, 3, S33–S35. [Google Scholar] [CrossRef]
  3. Hung, P.D.; Hung, N.D.; Diep, V.T. URL classification using convolutional neural network for a new large dataset. In Cooperative Design, Visualization, and Engineering; Luo, Y., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 103–114. [Google Scholar] [CrossRef]
  4. Liu, G.-Z.; Rahimi, M.; Fathi, J. Flipping writing metacognitive strategies and writing skills in an English as a foreign language collaborative writing context: A mixed-methods study. J. Comput. Assist. Learn. 2022, 38, 1730–1751. [Google Scholar] [CrossRef]
  5. Apriani, E.; Cardoso, L.; Obaid, A.J.; Muthmainnah, M.; Wijayanti, E.; Esmianti, F.; Supardan, D. Impact of AI-powered chatbots on EFL students’ writing skills, self-efficacy, and self-regulation: A mixed-methods study. Glob. Educ. Res. Rev. 2024, 1, 57–72. [Google Scholar] [CrossRef]
  6. Boykin, A.; Evmenova, A.S.; Regan, K.; Mastropieri, M. The impact of a computer-based graphic organizer with embedded self-regulated learning strategies on the argumentative writing of students in inclusive cross-curricula settings. Comput. Educ. 2019, 137, 78–90. [Google Scholar] [CrossRef]
  7. Dai, J.; Wang, L.; He, Y. Exploring the effect of wiki-based writing instruction on writing skills and writing self-efficacy of Chinese English-as-a-foreign language learners. Front. Psychol. 2023, 13, 1069832. [Google Scholar] [CrossRef]
  8. Guo, K.; Wang, J.; Chu, S.K.W. Using chatbots to scaffold EFL students’ argumentative writing. Assess. Writ. 2022, 54, 100666. [Google Scholar] [CrossRef]
  9. Huang, W.; Hew, K.F.; Fryer, L.K. Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. J. Comput. Assist. Learn. 2022, 38, 237–257. [Google Scholar] [CrossRef]
  10. Mohebbi, A. Enabling learner independence and self-regulation in language education using AI tools: A systematic review. Cogent Educ. 2024, 12, 2433814. [Google Scholar] [CrossRef]
  11. Roll, I.; Wylie, R. Evolution and revolution in artificial intelligence in education. Int. J. Artif. Intell. Educ. 2016, 26, 582–599. [Google Scholar] [CrossRef]
  12. Bissessar, C. An exploration of students’ perceptions of artificial intelligence and plagiarism at a higher education institution. Equity Educ. Soc. 2025. [Google Scholar] [CrossRef]
  13. Elali, F.R.; Rachid, L.N. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns 2023, 4, 100706. [Google Scholar] [CrossRef]
  14. Kabir, A.I.; Ahmed, M.K.; Begum, A.; Gomes, G.A. Impact of AI technologies on academic integrity: Challenges, opportunities, and the plagiarism dilemma—Enhancing educational proficiency and credibility. Soc. Sci. Hum. Open 2025. under review. [Google Scholar] [CrossRef]
  15. Jingxin, G.; Razali, A.B. Tapping the potential of Pigai automated writing evaluation (AWE) program to give feedback on EFL writing. Univers. J. Educ. Res. 2020, 8, 8334–8343. [Google Scholar] [CrossRef]
  16. Rodriguez, D.; Seymour, W.; Del Alamo, J.M.; Such, J. Towards safer chatbots: A framework for policy compliance evaluation of custom GPTs. arXiv 2025. [Google Scholar] [CrossRef]
  17. Korucu-Kış, S. Zone of proximal creativity: An empirical study on EFL teachers’ use of ChatGPT for enhanced practice. Think. Ski. Creat. 2024, 54, 101639. [Google Scholar] [CrossRef]
  18. Alharbi, A.S. Communicative language teaching approach in a Saudi context: A critical appraisal. Eur. J. App. Ling. 2024, 10, 60–71. [Google Scholar] [CrossRef]
  19. Ahmed, M.A. Instruction in the language classroom and the Saudi Vision 2030: A study using Delphi technique with academics. J. Lang. Teach. Res. 2024, 15, 1711–1718. [Google Scholar] [CrossRef]
  20. Alfaifi, A.A.; Saleem, M. Negative transfer and delay in proficiency development: L1 influenced syntax issues faced by Arab EFL learners. Forum Linguist. Stud. 2024, 6, 42–57. [Google Scholar] [CrossRef]
  21. Al-Mohanna, A.D. Difficulties and challenges encountered by Saudi EFL learners: A diagnostic study. Sch. Int. J. Linguist. Lit. 2024, 7, 288–299. [Google Scholar] [CrossRef]
  22. Alshammari, H.A. Investigating the low English proficiency of Saudi EFL learners. Arab. World Engl. J. 2022, 13, 129–144. [Google Scholar] [CrossRef]
  23. Baek, E.O.; Wilson, R.V. An inquiry into the use of generative AI and its implications in education: Boon or bane. Int. J. Adult Educ. Technol. 2024, 15, 1–14. [Google Scholar] [CrossRef]
  24. Zhang, S.; Shan, C.; Lee, J.S.Y.; Che, S.; Kim, J.H. Effect of chatbot-assisted language learning: A meta-analysis. Educ. Inf. Technol. 2023, 28, 15223–15243. [Google Scholar] [CrossRef]
  25. Akhter, T. Problems and challenges faced by EFL students of Saudi Arabia during COVID-19 pandemic. Rupkatha J. Interdiscip. Stud. Humanit. 2020, 12, 1–7. [Google Scholar] [CrossRef]
  26. Khan, R.M.I.; Radzuan, N.R.M.; Shahbaz, M.; Ibrahim, A.H.; Mustafa, G. The role of vocabulary knowledge in speaking development of Saudi EFL learners. Arab. World Engl. J. 2018, 9, 406–418. [Google Scholar] [CrossRef]
  27. Paquot, M. L1 frequency in foreign language acquisition: Recurrent word combinations in French and Spanish EFL learner writing. Second Lang. Res. 2017, 33, 13–32. [Google Scholar] [CrossRef]
  28. Nguyen, L.A.P.; Nguyen, T.H.B. A study on adult learners of English as a foreign language in Vietnam: Motivations, advantages, and challenges. Int. J. Lang. Instr. 2024, 3, 31–42. [Google Scholar] [CrossRef]
  29. Sun, J.; Motevalli, S.; Chan, N.N. Exploring writing anxiety during writing process: An analysis of perceptions in Chinese English as a foreign language (EFL) learners. Qual. Res. Educ. 2024, 13, 149–164. [Google Scholar] [CrossRef]
  30. Yekollu, R.K.; Ghuge, T.B.; Biradar, S.S.; Haldikar, S.V.; Abdul Kader, O.F.M. AI-driven personalized learning paths: Enhancing education through adaptive systems. In Smart Data Intelligence; Asokan, R., Ruiz, D.P., Piramuthu, S., Eds.; Springer: Singapore, 2024; pp. 507–517.
  31. Maity, S.; Deroy, A. Generative AI and its impact on personalized intelligent tutoring systems. arXiv 2024.
  32. Farrelly, T.; Baker, N. Generative artificial intelligence: Implications and considerations for higher education practice. Educ. Sci. 2023, 13, 1109.
  33. Godwin-Jones, R. Distributed agency in language learning and teaching through generative AI. Lang. Learn. Technol. 2024, 28, 5–31.
  34. Vesselinov, R.; Grego, J. Duolingo Effectiveness Study: Final Report. 2012; Available online: http://comparelanguageapps.com/documentation/DuolingoReport_Final.pdf (accessed on 7 January 2026).
  35. Khalifa, M.; Albadawy, M. Using artificial intelligence in academic writing and research: An essential productivity tool. Comput. Methods Programs Biomed. Update 2024, 5, 100145.
  36. Lin, C.-C.; Huang, A.Y.Q.; Lu, O.H.T. Artificial intelligence in intelligent tutoring systems toward sustainable education: A systematic review. Smart Learn. Environ. 2023, 10, 41.
  37. Liu, C.; Hou, J.; Tu, Y.-F.; Wang, Y.; Hwang, G.-J. Incorporating a reflective thinking promoting mechanism into artificial intelligence-supported English writing environments. Interact. Learn. Environ. 2021, 31, 5614–5632.
  38. Wang, S.; Wang, F.; Zhu, Z.; Wang, J.; Tran, T.; Du, Z. Artificial intelligence in education: A systematic literature review. Expert Syst. Appl. 2024, 252, 124167.
  39. Han, J.; Hiver, P. Genre-based L2 writing instruction and writing-specific psychological factors: The dynamics of change. J. Second. Lang. Writ. 2018, 40, 44–59.
  40. Duong, T.-N.-A.; Chen, H.-L. An AI chatbot for EFL writing: Students’ usage tendencies, writing performance, and perceptions. J. Educ. Comput. Res. 2025, 63, 406–430.
  41. Lin, M.P.-C.; Chang, D. Enhancing post-secondary writers’ writing skills with a chatbot: A mixed-method classroom study. Educ. Technol. Soc. 2020, 23, 78–92.
  42. Kwon, S.K.; Shin, D.; Lee, Y. The application of chatbot as an L2 writing practice tool. Lang. Learn. Technol. 2023, 27, 1–19.
  43. Usher, M.; Amzalag, M. From prompt to polished: Exploring student-chatbot interactions for academic writing assistance. Educ. Sci. 2025, 15, 329.
  44. Boudouaia, A.; Mouas, S.; Kouider, B. A study on ChatGPT-4 as an innovative approach to enhancing English as a foreign language writing learning. J. Educ. Comput. Res. 2024, 62, 1289–1317.
  45. Cai, W.L.; Grossman, J.; Lin, Z.J.; Sheng, H.; Wei, J.T.-Z.; Williams, J.J.; Goel, S. Bandit algorithms to personalize educational chatbots. Mach. Learn. 2021, 110, 2389–2418.
  46. Kim, N.-Y.; Cha, Y.; Kim, H.-S. Future English learning: Chatbots and artificial intelligence. Multimed.-Assist. Lang. Learn. 2019, 22, 32–53.
  47. Yin, Q.; Satar, M. English as a foreign language learner interactions with chatbots: Negotiation for meaning. Int. Online J. Educ. Teach. 2020, 7, 390–410; Available online: https://iojet.org/index.php/IOJET/article/view/707 (accessed on 7 January 2026).
  48. Ayedoun, E.; Hayashi, Y.; Seta, K. Adding communicative and affective strategies to an embodied conversational agent to enhance second language learners’ willingness to communicate. Int. J. Artif. Intell. Educ. 2019, 29, 29–57.
  49. Brandtzaeg, P.B.; Følstad, A. Why people use chatbots. In Internet Science—INSCI 2017; Lecture Notes in Computer Science; Kompatsiaris, I., Cave, J., Satsiou, A., Carle, G., Passani, A., Kontopoulos, E., Diplaris, S., McMillan, D., Eds.; Springer: Cham, Switzerland, 2017; Volume 10673, pp. 377–392.
  50. Fryer, L.K.; Coniam, D.; Carpenter, R.; Lăpuşneanu, D. Bots for language learning now: Current and future directions. Lang. Learn. Technol. 2020, 24, 8–22.
  51. Clay, G. AutomatED: Teaching Better with Tech. 2024; Available online: https://automatedteach.com (accessed on 7 January 2026).
  52. Gayed, J.M.; Carlon, M.K.J.; Oriola, A.M.; Cross, J.S. Exploring an AI-based writing assistant’s impact on English language learners. Comput. Educ. Artif. Intell. 2022, 3, 100055.
  53. Giglio, A.D.; Costa, M. The use of artificial intelligence to improve the scientific writing of non-native English speakers. Rev. Assoc. Médica Bras. 2023, 69, e20230560.
  54. Golan, R.; Reddy, R.; Muthigi, A.; Ramasamy, R. Artificial intelligence in academic writing: A paradigm-shifting technological advance. Nat. Rev. Urol. 2023, 20, 327–328.
  55. Guan, L.; Li, S.; Gu, M.M. AI in informal digital English learning: A meta-analysis of its effectiveness on proficiency, motivation, and self-regulation. Comput. Educ. Artif. Intell. 2024, 7, 100323.
  56. Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J.; et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit. Health 2023, 2, e0000198.
  57. Xu, T.; Wang, H. The effectiveness of artificial intelligence on English language learning achievement. System 2024, 125, 103428.
  58. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7, 52–62.
  59. Dergaa, I.; Chamari, K.; Zmijewski, P.; Saad, H.B. From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biol. Sport 2023, 40, 615–622.
  60. Marzuki; Widiati, U.; Rusdin, D.; Darwin; Indrawati, I. The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Educ. 2023, 10, 2236469.
  61. Xiao, F.; Zhu, S.; Xin, W. Exploring the landscape of generative AI (ChatGPT)-powered writing instruction in English as a foreign language education: A scoping review. ECNU Rev. Educ. 2025.
  62. Yang, W.; Lin, C. Translanguaging with generative AI in EFL writing: Students’ practices and perceptions. J. Second. Lang. Writ. 2025, 67, 101181.
  63. Williams, C. Hype, or the Future of Learning and Teaching? 3 Limits to AI’s Ability to Write Student Essays; London School of Economics: London, UK, 2023; Available online: https://kar.kent.ac.uk/99505/ (accessed on 7 January 2026).
  64. Chomsky, N.; Roberts, I.; Watumull, J. Noam Chomsky: The false promise of ChatGPT. The New York Times, 8 March 2023; Available online: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html (accessed on 7 January 2026).
  65. Elsen-Rooney, M. NYC Education Department Blocks ChatGPT on School Devices, Networks; Chalkbeat: New York, NY, USA, 2023; Available online: https://ny.chalkbeat.org/2023/1/3/23537987/nyc-schools-ban-chatgpt-writing-artificial-intelligence (accessed on 7 January 2026).
  66. Amin, M.Y.M. AI and ChatGPT in language teaching: Enhancing EFL classroom support and transforming assessment techniques. Int. J. High. Educ. Pedagog. 2023, 4, 1–15.
  67. Yang, H.; Kim, H.; Lee, J.H.; Shin, D. Implementation of an AI chatbot as an English conversation partner in EFL speaking classes. ReCALL 2022, 34, 327–343.
  68. Kassorla, M.; Georgieva, M.; Papini, A. AI Literacy in Teaching and Learning: A Durable Framework for Higher Education; Educause: Louisville, CO, USA, 2024; Available online: https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/introduction (accessed on 7 January 2026).
  69. Thurlings, M.C.G.; Vermeulen, M.; Bastiaens, T.J.; Stijnen, P.J.J. Understanding feedback: A learning theory perspective. Educ. Res. Rev. 2013, 9, 1–15.
  70. Ebralidze, P. Note on engagement theory: Fostering a meaningful learning experience. Anthropology 2023, 11, 297.
  71. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Cole, M., John-Steiner, V., Scribner, S., Souberman, E., Eds. and Translators; Harvard University Press: Cambridge, MA, USA, 1978.
  72. Pan, J. AI-driven English language learning program and academic writing integrity in the era of intelligent interface. Engl. Lang. Teach. Linguist. Stud. 2024, 6, 120–135.
  73. Malik, A.R.; Pratiwi, Y.; Andajani, K.; Numertayasa, I.W.; Suharti, S.; Darwis, A.; Marzuki. Exploring artificial intelligence in academic essay: Higher education student’s perspective. Int. J. Educ. Res. Open 2023, 5, 100296.
Figure 1. Schematic representation of conceptual framework.
Figure 2. Estimated marginal means of post-test scores across components by group.
Table 1. Participants’ demographic data.

| Data Collection Instrument | Participants | Gender | N | Average Age | English Proficiency Level | Knowledge of AI Tools |
|---|---|---|---|---|---|---|
| Pre-Test | Exp | M | 30 | 20–22 | B1 | Basic |
| Pre-Test | Ctrl | M | 35 | 20–22 | B1 | Basic |
| Post-Test | Exp | M | 30 | 20–22 | B1 | Basic |
| Post-Test | Ctrl | M | 35 | 20–22 | B1 | Basic |
Table 2. Descriptive statistics of writing scores by component.

| Component | Group | N | Pre-Test Mean | Pre-Test SD | Pre-Test SE | Post-Test Mean | Post-Test SD | Post-Test SE |
|---|---|---|---|---|---|---|---|---|
| Content and organisation | Ctrl | 35 | 2.36 | 0.757 | 0.128 | 2.4 | 0.847 | 0.143 |
| Content and organisation | Exp | 30 | 2.82 | 0.680 | 0.124 | 4.45 | 0.674 | 0.123 |
| Grammar and mechanics | Ctrl | 35 | 2.22 | 0.736 | 0.124 | 2.17 | 0.618 | 0.104 |
| Grammar and mechanics | Exp | 30 | 2.67 | 0.686 | 0.125 | 4.11 | 0.618 | 0.113 |
| Vocabulary use | Ctrl | 35 | 2.08 | 0.655 | 0.111 | 2.26 | 0.886 | 0.150 |
| Vocabulary use | Exp | 30 | 2.53 | 0.928 | 0.169 | 4.27 | 0.666 | 0.122 |
| Sentence structure | Ctrl | 35 | 2.47 | 0.925 | 0.156 | 3.2 | 3.78 | 0.639 |
| Sentence structure | Exp | 30 | 2.75 | 0.679 | 0.124 | 4.33 | 0.686 | 0.125 |
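As a quick consistency check on Table 2, each reported standard error should equal the standard deviation divided by the square root of the group size (SE = SD/√N, with N = 35 for the control group and N = 30 for the experimental group). A minimal sketch, using a few post-test values taken from the table:

```python
from math import sqrt

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean: SD divided by the square root of N."""
    return sd / sqrt(n)

# Post-test values from Table 2: (SD, N, reported SE)
reported = [
    (0.847, 35, 0.143),  # Content and organisation, control group
    (0.674, 30, 0.123),  # Content and organisation, experimental group
    (3.78,  35, 0.639),  # Sentence structure, control group
]

# Each reported SE matches SD / sqrt(N) to three decimal places.
for sd, n, se in reported:
    assert round(standard_error(sd, n), 3) == se
```

The same check passes for the remaining rows, including the sentence-structure control row, whose unusually large post-test SD (3.78) is nonetheless internally consistent with its SE of 0.639.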
Table 3. Linear mixed-effects model results for writing scores.

| Predictor | β Estimate | SE | t-Value | p-Value | Interpretation |
|---|---|---|---|---|---|
| Time × Group (Interaction) | 1.37 | 0.21 | 6.64 | <0.001 | Significant improvement in writing scores for the experimental group over time |
| Grammar vs. Sentence Structure | −0.42 | – | – | 0.004 | Grammar scores significantly lower than sentence structure scores |
| Vocabulary vs. Sentence Structure | −0.42 | – | – | 0.004 | Vocabulary scores significantly lower than sentence structure scores |
| Content and Organisation vs. Sentence Structure | −0.20 | – | – | 0.17 | No significant difference between content and organisation scores and sentence structure scores |
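The Time × Group interaction estimate in Table 3 can be sanity-checked against the cell means in Table 2: averaging the difference-in-differences (experimental-group gain minus control-group gain) across the four writing components closely reproduces the reported β of 1.37. A minimal sketch of that check, using only the means from Table 2 (this is an approximation of the mixed-model estimate, not a refit of the model itself):

```python
# Pre- and post-test means from Table 2:
# component -> (ctrl_pre, ctrl_post, exp_pre, exp_post)
means = {
    "content_and_organisation": (2.36, 2.40, 2.82, 4.45),
    "grammar_and_mechanics":    (2.22, 2.17, 2.67, 4.11),
    "vocabulary_use":           (2.08, 2.26, 2.53, 4.27),
    "sentence_structure":       (2.47, 3.20, 2.75, 4.33),
}

# Difference-in-differences per component:
# experimental gain minus control gain over the intervention.
did = {
    name: (exp_post - exp_pre) - (ctrl_post - ctrl_pre)
    for name, (ctrl_pre, ctrl_post, exp_pre, exp_post) in means.items()
}

# Averaging across components approximates the Time x Group estimate.
interaction = sum(did.values()) / len(did)
assert abs(interaction - 1.37) < 0.01  # close to the reported beta of 1.37
```

The component-level gaps also echo the contrasts in Table 3: sentence structure shows the smallest experimental advantage because the control group improved most on that component.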
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Almutairi, H.; Alfaifi, A.A.; Saleem, M. Writing Accuracy: How AI-Assisted Writing Instruction Can Support EFL Undergraduate Students. Information 2026, 17, 157. https://doi.org/10.3390/info17020157
