Article

Learner Engagement and Writing Performance in Assessment as Learning L2 Writing

1
Center for General Education, Tokyo Keizai University, Tokyo 185-8502, Japan
2
Department of Curriculum and Instruction, Faculty of Education, The Chinese University of Hong Kong, Hong Kong, China
Languages 2026, 11(4), 62; https://doi.org/10.3390/languages11040062
Submission received: 28 October 2025 / Revised: 11 March 2026 / Accepted: 13 March 2026 / Published: 31 March 2026

Abstract

While previous studies on assessment as learning (AaL) in second language (L2) writing have mainly focused on writing teachers’ practices and perceptions of AaL, scant research has examined the relation between students’ engagement and writing performance in an AaL context. To fill the void, this study examined how students’ engagement related to their writing performance. Drawing on writing drafts, interviews, verbal reports, observation field notes, and documents, cross-case analyses of two focal students demonstrated that learner engagement in an AaL context was positively associated with improvements in writing performance. The student who demonstrated greater reciprocity in collaborating with teachers and peers in the AaL context, as well as proactivity in taking charge of her learning in L2 writing, showed greater improvements in content, organization, and language of argumentative writing.

1. Introduction

The past two decades have seen growing interest in shifting classroom writing assessment in English as a second language (L2) contexts from assessment of learning (AoL), where teachers use scores to measure students’ learning outcomes, to assessment for learning (AfL), where teachers and learners utilize evidence about learning as feedback to inform subsequent teaching and learning (Black & Wiliam, 2009; Lee, 2017; Leung et al., 2018; Shepard, 2000). This paradigm shift has been motivated by several concerns: a focus on scores alone does not benefit students’ learning owing to a dearth of formative feedback (Lee & Coniam, 2013); meanwhile, the dominant role of teachers in assessment can hinder students from acquiring the skills of independent learning and critical thinking, which are key educational goals in the twenty-first century (Wong & Mak, 2019). However, setbacks have been observed during the AfL movement: AfL cannot be fully implemented when teachers follow only the letter rather than the spirit of AfL (Marshall & Jane Drummond, 2006) and do not empower students to take charge of their learning. Thus, assessment as learning (AaL) has been introduced to highlight the student-centered dimension of AfL (Earl, 2013). AaL emphasizes students’ agentive role in evaluating their learning and in using feedback to set goals, monitor, and adjust their learning (Earl, 2013). Since AaL can potentially promote L2 writers’ self-regulatory abilities (Lee et al., 2019) and writing quality (e.g., X. Xiang et al., 2022), further research is worthwhile.
In L2 writing, AaL remains under-researched. Existing literature on AaL in L2 writing has mainly focused on writing teachers’ AaL practices (e.g., Lee et al., 2019; Wang et al., 2020), as well as teachers’ and students’ perceptions of the strengths and challenges of AaL (e.g., X. Xiang et al., 2022). Since students’ involvement with AaL is deemed to improve their self-regulation and help them grow into independent writers and key assessors (Lam, 2016), recent studies (e.g., Wang & Lee, 2021; Wang & Shen, 2025) have begun to examine AaL in L2 writing from the perspective of learner engagement, defined as the extent of a student’s active involvement in a learning activity (Reeve, 2013). Learner engagement was observed to have a positive association with learning outcomes, including language performance in writing and reading (Christenson et al., 2012). Given the crucial role of engagement in facilitating language learning (Mercer & Dörnyei, 2020), investigating engagement can offer insights into how AaL fosters students’ writing performance, i.e., learners’ ability to perform writing tasks (Hyland, 2003). Although limited prior research has investigated how learners engaged in L2 writing assessment (e.g., Wang & Lee, 2021; Wang & Shen, 2025; Zhang, 2020), little research has explored learner engagement as a process through which an AaL context relates to writing performance. Theoretically, understanding how learners engage with AaL and how their engagement may relate to their writing performance can provide insights into the complex process of L2 learning in classroom writing assessment.
The present study explores how Chinese undergraduates’ engagement may relate to their writing performance in an AaL-oriented writing classroom. Pedagogically, this study can inform instructional practice and assessment design to promote learner engagement and improve students’ writing performance, ultimately leading to more effective assessment, teaching, and learning experiences.

2. Literature Review

2.1. AaL in L2 Writing

AaL is grounded in socio-constructivism, which holds that learning is socially constructed when students take charge of their learning and teachers create learning experiences that enable students to collaborate with others (Powell & Kalina, 2009). While AaL shares key features with AfL, such as clarifying learning goals and conducting peer-assessment, AaL underscores students’ agentive roles in connecting assessment and learning (X. Xiang et al., 2022). In AaL, classroom assessment serves as a vehicle for developing students’ metacognition, e.g., what strategies they can use to evaluate, adjust, and advance their learning (Earl, 2013).
AaL implementation in L2 writing requires teachers to offer instructional scaffolding to support students throughout the writing process and to engage them in co-regulation of learning, in which students’ learning is influenced by self-regulation and other-regulation from teachers/peers/assessment instruments (Allal, 2011; Lee, 2016). Key strategies for adopting AaL in L2 writing were outlined by Lee (2017). First, teachers clarify learning goals and success criteria. To enable students to be co-learners in writing classrooms, students can co-construct and negotiate success criteria with teachers and peers (Wang & Lee, 2021), e.g., by identifying key elements of a genre (e.g., content, structure, and language) after analyzing samples. Second, students set personal learning goals based on the shared goals provided by teachers. Third, descriptive and focused teacher feedback is provided so that students understand their strengths/weaknesses/workable next steps. Fourth, teachers engage students in peer-assessment so that they become learning resources for each other. Peer-feedback protocols are useful for guiding students to proactively seek feedback and initiate discussion (Lee, 2016). Fifth, to promote students’ ownership of their learning, teachers can guide students to ask metacognitive questions, use learning logs for self-monitoring, and conduct self-assessment (B. Bai, 2015; Lee & Mak, 2018). These AaL strategies should be viewed holistically as they collectively support students’ learning (Hawe & Parr, 2014).
Prior research has mainly addressed how specific aspects of AaL, such as goal-setting (e.g., Huang, 2015), peer-assessment (e.g., Lam, 2010), self-monitoring (e.g., W. Xiang, 2004), and self-assessment (e.g., Hale, 2015; Lam, 2013) affect students’ learning of L2 writing. Among the limited number of empirical studies on AaL as a unitary concept, most focus on teachers’ AaL practices and beliefs in L2 writing classrooms (e.g., Lee et al., 2019; Wang et al., 2020; He & Wang, 2025), and on students’ perceived benefits and challenges of AaL (e.g., X. Xiang et al., 2022). While learner involvement in the AaL process is deemed part of learning (Dann, 2002), research on the process by which student writers engage in AaL activities and its relationship to the development of writing performance is scarce. These gaps provide the impetus for this study on learner engagement and writing performance in AaL.

2.2. AaL and Learner Engagement

Learner engagement refers to the feelings, thoughts, and actions learners exhibit to enhance their learning; it results from the interplay of contextual (e.g., teaching and assessment contexts) and individual factors (e.g., learners’ beliefs and prior learning experience), functioning as a mediator between learning contexts and learning outcomes (Oga-Baldwin, 2019). Engagement has been conceptualized as a multifaceted construct comprising three to four interrelated dimensions of emotion, cognition, behaviour, and agency (Oga-Baldwin, 2019; Reeve, 2013). Behavioural engagement concerns effort, persistence, and concentration (Fredricks et al., 2004). Emotional engagement refers to students’ affective reactions during the task including positive and negative emotions. Cognitive engagement concerns students’ application of sophisticated learning and self-regulatory strategies (Cleary & Zimmerman, 2012). Agentic engagement involves the “proactive, constructive, and reciprocal action students initiate to catalyze their academic progress and to create a more supportive learning environment for themselves” (Reeve et al., 2020, p. 2). Derived from Bandura’s reciprocal triad of the environment, the person, and the behaviour, agentic engagement represents the reversed arrow in the reciprocal triad leading from behaviour to the environment, suggesting that learners’ willful actions can impact their surroundings (Oga-Baldwin, 2019).
Regarding learner engagement in L2 writing assessment, Wang and Lee (2021) and Wang and Shen (2025) further developed a conceptual framework to explore how L2 writers engaged agentively in an AaL context based on empirical data from students’ verbal reports, interviews, and writing drafts. Rather than treating agentic engagement as a fourth dimension, we argued that agency underpinned the notion of engagement. From social cognitive or socio-cultural perspectives, agency refers to learners’ ability to influence sociocultural contexts (Van Lier, 2008) and regulate their own learning processes (Bandura, 2006). Students’ operations pertaining to metacognition (e.g., how to plan/monitor/evaluate/adjust their own learning) and emotional regulation (R. Bai et al., 2014) relate to agentic engagement as well. Therefore, we placed agency at the core of the three dimensions of engagement. Learner engagement in L2 writing assessment was defined as the proactive and reciprocal involvement students initiate to self-regulate their learning and to co-construct an assessment context with peers and teachers (Wang & Lee, 2021):
  • Proactive involvement in learning self-regulation pertained to (1) affective strategies used to maintain a positive attitude toward L2 writing, (2) metacognitive operations such as planning, self-monitoring, self-assessment, reflection, and goal adjustment in cognition, and (3) goal-oriented actions including finding external resources and learning additional knowledge, as well as self-initiated revisions (e.g., form-focused and meaning-focused) in behaviour.
  • Reciprocal involvement concerned how learners interacted with peers and teachers to collaboratively establish an assessment context through their actions. This included providing constructive contributions (e.g., summarizing the success criteria for a genre with peers), expressing preferences (e.g., requesting specific feedback), and seeking assistance (e.g., asking for clarification on feedback, Wang & Lee, 2021; Wang & Shen, 2025).
While these studies examined how students engaged in AaL contexts, the relation between learner engagement and writing performance has not been explored. To unpack the process of empowering students as critical assessors of their L2 writing and its influence on their learning, it is important to investigate students’ engagement and writing performance in AaL. The present study, therefore, seeks to explore how learner engagement operates as a process linking an AaL context with students’ writing performance (see Figure 1). This study sets out to answer the following question: How may students’ engagement (or lack of it) relate to their writing performance in an AaL-oriented writing classroom?

3. Methods

3.1. Context and Participants

3.1.1. Teacher Participant and Her AaL Implementation in L2 Writing

The study was conducted in English Writing, a compulsory course for second-year English majors at a private university in China. The course was offered in a 90 min session each week in a 16-week term. Students were asked to compose four drafts of two argumentative essays, each about 200–400 words.
The teacher, Helen (pseudonym), earned her bachelor’s and master’s degrees in Translation in Hong Kong and had six years of experience teaching English at universities in Mainland China. At the time of the study, she had been teaching English writing for two years. One year before the study, the author engaged Helen and her colleagues in a 12 h workshop on theories and strategies for AaL in L2 writing (see Lee, 2017). Teachers designed their assessment materials with the author’s assistance, including assessment criteria, feedback forms, and learning logs. During the study, although the author supported the teacher by responding to her questions via emails and video conferences, Helen took an agentive role in deciding her writing assessment practices.
When integrating AaL into L2 writing, Helen used a process-genre approach to empower students to take a major role in assessment. At the pre-writing stage, students were given a sample argumentative essay and summarized the traits of good argumentation in the aspects of purpose, structure, and language on their own. The teacher guided students to co-construct success criteria for argumentation with peers and provided instructional scaffolding on genre knowledge of argumentative writing (e.g., the writer is able to support the thesis with a list of major arguments in supporting paragraphs) and process writing strategies (e.g., planning, monitoring, evaluating, and reflection). Then, students were guided to set individual learning goals so that they had a clear vision of where they were heading.
At the while-writing stage, students were trained to self-assess Draft 1 using feedback forms (adapted from Lee, 2017). Helen usually provided instructional scaffolding on other language points, such as introducing approaches to enhancing vocabulary richness and sentence variety, and guiding students to proofread for common grammatical errors. Online tools such as Pigai (an AWE system available institutionally) and Thesaurus were also introduced to encourage students to independently check for language accuracy and diversify lexical choices. Based on self-assessment results and newly learned language points, students were asked to revise for Draft 2.
Next, students were trained to peer-assess Draft 2. Peer assessment training included a teacher demonstration and student practice: for written peer feedback, Helen demonstrated how to comment on a previous student essay by going through the criteria on the feedback form, explaining how to give grades and write commentary on the strengths and weaknesses. For oral peer feedback, Helen played videos on the feedback sandwich, guiding peer reviewers to comment on three strengths and two weaknesses of the writing in the areas of content, structure, and language, and to provide constructive suggestions for solving these problems. For student writers, a feedback protocol was also shown to encourage them to proactively seek specific feedback, ask peers for clarification of comments, and discuss with peers how to revise (Lee, 2016). Learner-driven feedback was also implemented by providing areas in feedback forms for students to write questions and doubts.
Helen then offered descriptive feedback on students’ Draft 3 according to the assessment criteria and students’ individual questions in the feedback forms, providing commentaries on the structure, content, and language. As for the scope of written corrective feedback (WCF), Helen gave focused WCF on five pre-selected errors. Five frequently made grammatical errors (verb tense and form, preposition, word choice, word form, and sentence structure) were identified in the pre-study timed argumentative writing test and served as language foci in the assessment criteria for two argumentation tasks (verb tense and form, preposition, and word choice in Essay 1; word form and sentence structure in Essay 2). Based on teacher feedback, students wrote final drafts. During multiple drafting, students were guided to plan/monitor/regulate their actions towards their learning goals with the assistance of metacognitive checklists, learning logs (adapted from Lee & Mak, 2018), and peer/teacher feedback.
At the post-writing stage, students reflected on their writing and set new goals. All in all, Helen’s AaL practices reflected its unitary nature. Samples of a learning log and assessment forms are attached in Appendix A.

3.1.2. Student Participants

Student participants were second-year English majors in the same English Writing class (45 students in total). Students were between 19 and 21 years old and had learned English for around 12 years. Since prior studies on engagement in L2 writing indicated that language proficiency may influence learner engagement in writing assessment (e.g., Zhang, 2020), two participants with different writing proficiency levels, Sarah and Anne (pseudonyms), were selected for case studies based on argumentative writing pre-test scores (15 points in total), teacher recommendations, and their English scores in the university entrance examination (UEE). Sarah was an average student in this class, earning 8 points in the pre-test, around the class average, and 110 points in the UEE, while Anne was a relatively high-performing student, scoring 11 points on the pre-test and 123 in the UEE.

3.2. Data Collection

For the two student participants, semi-structured interviews, verbal reports of the writing process, writing drafts, writing test scores, classroom observation field notes, and documents were collected over a term by the author. Pre-study interviews examined participants’ initial self-perceived writing performance by asking them to comment on their English writing in the areas of organization, content, vocabulary, and grammar. These data were later compared with participants’ perceived changes in writing performance at the end of the term. Post-study interviews investigated (1) how and the extent to which they engaged agentively in AaL in terms of their collaboration with teachers and peers, and proactive involvement in taking ownership of learning, and (2) their perceived changes in writing performance (organization, content, and language). All interviews were conducted in Chinese (their L1) for around 90 min and were audio-recorded.
Verbal reports were collected to examine participants’ engagement in cognitive and affective dimensions. Eight think-alouds were gathered from each participant to explore their engagement while writing their essays. Participants recorded their think-alouds in Chinese (L1), English (L2), or both languages while composing four drafts of two essays and submitted the recordings to the author. Another eight stimulated recalls occurred within two days after they wrote each draft. Participants (1) reviewed their previous drafts along with the provided feedback and learning logs, and (2) reflected on their feelings, thoughts, and actions during multiple drafts.
Writing drafts were collected to study participants’ engagement in terms of self-initiated revisions and their changes in writing performance. Eight argumentative drafts from each participant were gathered, including four drafts for both Essay 1 (two-sided argumentation) and Essay 2 (one-sided argumentation). The writing topic for Essay 1 was whether college graduates should live in first-tier or second-tier cities. In Essay 2, participants stated their views on the famous saying: Although the world is full of suffering, it is also full of the overcoming of it (see the writing task prompts in the Supplementary Materials).
Pre-/post-writing tests were administered in weeks 1 and 16. These quantitative tests were used to (a) inform participant selection and (b) provide descriptive indicators of performance change, while qualitative text analysis of multiple drafts constituted the primary source for examining the development of writing performance. Two parallel writing tasks from past College English Test Band 4 (CET-4) exam papers were used (see Supplementary Materials). The original wording (“…write a short essay on how to develop positive relationships between parents and children/teachers and students”) was modified to “…write a short essay on important principles to develop positive relationships between parents and children/teachers and students”. This modification was intended to encourage an argumentative rather than expository response: “how-to” prompts often invite descriptive listing, whereas prompts emphasizing “important principles” encourage students to generalize, evaluate, and justify ideas, which are core features of argumentative writing. It is acknowledged, however, that the revised wording does not preclude expository responses. To support genre alignment, the task was administered following explicit instruction in argumentative writing, and students were informed that an argumentative essay was expected. Students wrote argumentative essays within 30 min. CET-4 was chosen because it has been widely used in Mainland China for three decades and validated as a reliable indicator of undergraduates’ writing proficiency and performance (Gao, 2011).
Documents, including written teacher/peer feedback forms and learning logs, were obtained to triangulate participants’ engagement in behaviour (e.g., self-initiated revision) and cognition (e.g., goal adjustment). Classroom observation field notes (16 lessons) and two audio recordings of oral peer-feedback conferences from each participant were collected to triangulate their engagement in terms of collaborative behaviour. The timeline of data collection is summarized in Table 1.

3.3. Data Analysis

To study how students’ engagement may relate to their writing performance in an AaL-oriented context, content analyses of qualitative data and text analyses of the two participants’ writing drafts were performed by the author. Audio-recorded data, including interviews/verbal reports/peer-feedback conferences, were transcribed verbatim and translated into English by the author, who is bilingual in Chinese and English. To safeguard meaning accuracy, selected excerpts used in the findings were reviewed by a second bilingual researcher, and discrepancies were discussed until a consensus was reached. Content analyses of qualitative data were performed in NVivo 12 according to the cyclic coding approach (Miles et al., 2014). Data regarding learner engagement in the AaL classroom were identified in the first-cycle coding. Reeve’s (2013) agentic engagement items were used to capture participants’ proactive and reciprocal behaviour, and R. Bai et al.’s (2014) metacognitive and affective strategies were utilized to capture participants’ proactive involvement in cognitive and emotional dimensions. In the second-cycle coding, codes were categorized into (a) their proactive involvement in self-regulating their learning, which included affect regulation operations (e.g., rewarding oneself for completing a draft), metacognitive operations (e.g., setting and adjusting goals), and goal-directed actions (e.g., looking up a dictionary), as well as (b) their collaboration with teachers and peers to establish an AaL context (e.g., offering constructive input when synthesizing assessment criteria, seeking feedback and expressing feedback preference, Wang & Lee, 2021; Wang & Shen, 2025). The coding scheme is attached in the Supplementary Materials.
Participants’ interview data on changes in argumentative writing performance were identified in the first-cycle coding, and were categorized into (1) content and structure, and (2) language in the second-cycle coding according to the assessment criteria in the two argumentative writing assessment forms (see Table 2). One-fourth of the interview transcripts were double-coded by a Ph.D. candidate in L2 writing. The two coders compared their coding output and engaged in iterative discussions to resolve disagreements and refine the coding scheme; a third researcher was consulted to help reach a consensus when disagreement persisted.
Writing performance was operationalized as changes in participants’ pre-/post-writing test scores and as their writing development across multiple drafts during the term. The quantitative and qualitative data were interpreted complementarily, with test scores providing outcome-level reference points and draft analyses offering process-level explanations. Writing tests were scored on a 15-point holistic scale from the CET-4 assessment rubric, which covered content, organization, and language (see Supplementary Materials). To ensure scoring reliability, the author and Helen, both experienced CET raters, scored the writing tests. The author rated 45 pre-tests and 45 post-tests, while one fourth of the essays (12 pre-tests and 12 post-tests) were double-rated. The Pearson correlation coefficient was 0.813, suggesting acceptable inter-rater reliability (Mackey & Gass, 2005). Final scores were calculated by averaging the two independent whole-number ratings. Because the CET-4 holistic rubric requires raters to assign only integer scores, the averaged scores sometimes included half-point values; this slight departure from the CET scoring rule described in the Supplementary Materials was adopted to more precisely reflect rater agreement.
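For readers interested in the mechanics, the scoring-reliability procedure described above (averaging two independent integer ratings and checking inter-rater agreement with a Pearson correlation) can be sketched as follows. The ratings below are invented for illustration only; they are not the study's data.

```python
# Minimal sketch of the inter-rater procedure described above,
# using hypothetical integer ratings on the 15-point CET-4 scale.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def final_score(rating1, rating2):
    """Average two whole-number ratings; the result may be a half-point value."""
    return (rating1 + rating2) / 2

# Hypothetical double-rated essays: (rater 1, rater 2).
double_rated = [(8, 9), (11, 11), (7, 8), (12, 13), (9, 9), (10, 11)]
r1 = [a for a, _ in double_rated]
r2 = [b for _, b in double_rated]

print(round(pearson_r(r1, r2), 3))
print(final_score(8, 9))  # 8.5, a half-point value
```

A correlation near 1 on the double-rated subset would, as in the study, justify averaging the two ratings into a final score.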
Regarding text analyses, since Helen used a process-genre approach to integrate AaL into writing, key features of the target genre, i.e., argumentation, were specified, taught, and employed to describe students’ writing performance (Hyland, 2007). Therefore, genre-specific assessment criteria in the two argumentative writing assessment forms were used as codes (see Table 2) to analyze eight writing drafts from each participant. The primary outcome of writing performance focused on overall essay quality, reflected in genre-move and rhetorical features (CS1–CS8), including organizational structure, argument development, and logical persuasiveness. Supporting indicators focused on language use, which included language range (LA2–4) and accuracy (LA1). This approach enabled the examination of how learner engagement related to writing performance both in terms of global writing quality and specific linguistic features. The range and appropriateness of vocabulary, sentence patterns, and connectives were analyzed through close manual textual analysis. Lexical range was assessed via variation in word choice, sentence pattern range through identification of varied grammatical constructions (e.g., simple, compound, complex sentences), and connective range by coding types of logical connectors (e.g., additive, adversative, sequential). Appropriateness was judged based on the semantic accuracy and contextual suitability of these features within the argumentative texts. Coding decisions were reviewed by a second coder, with disagreements resolved through discussion. Qualitative narratives were generated to present how participants’ engagement may relate to their writing performance in content and structure, as well as language concerning the range and appropriateness of vocabulary, sentence patterns, and connectives. In addition, quantitative error analyses were conducted to provide complementary evidence of participants’ development in grammatical accuracy.
The number of errors for pre-selected error types in focused WCF was counted in each draft for both participants. Errors of verb tense and form, preposition, and word choice were counted in Essay 1; errors of word form and sentence structure, along with the former three error types, were counted in Essay 2. Error ratios for each draft were then calculated by dividing the total number of target errors by the total word count, rounded to the nearest 0.1%. During error analysis, another postdoctoral fellow in English education coded one-third of participants’ essay drafts; inter-coder reliability was 86.3%.
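As a rough illustration of the arithmetic just described, the error ratio for a draft and a simple percent-agreement measure between two coders could be computed as follows. The figures and code labels are hypothetical, not drawn from the study.

```python
# Minimal sketch of the error-ratio and inter-coder agreement arithmetic
# described above; all inputs are hypothetical.

def error_ratio(num_target_errors, word_count):
    """Error ratio as a percentage of total words, rounded to the nearest 0.1%."""
    return round(num_target_errors / word_count * 100, 1)

def percent_agreement(coder_a, coder_b):
    """Proportion of items coded identically by two coders, as a percentage."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return round(matches / len(coder_a) * 100, 1)

# Hypothetical draft: 7 target errors in a 285-word essay.
print(error_ratio(7, 285))  # -> 2.5

# Hypothetical codings of three errors (VT = verb tense, WC = word choice,
# PR = preposition, WF = word form) by two coders.
print(percent_agreement(["VT", "WC", "PR"], ["VT", "WC", "WF"]))  # -> 66.7
```

Percent agreement is used here only as the simplest illustration of inter-coder reliability; the study reports an overall agreement figure rather than a specific formula.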

4. Findings

Case analyses were employed to examine how learners’ engagement may relate to changes in their writing performance in an AaL context.

4.1. Sarah

Writing tests and text analyses indicated that Sarah’s writing performance improved in content, structure, and language. Her test score rose from 8 to 11 out of 15. According to the CET rubric, the content of her essay in the pre-test was of basic relevance to the task; ideas were extended but were not adequately substantiated. Information was presented with some organization, but there was a lack of overall progression. As for language, there were many linguistic errors, including some serious ones. In comparison, the content of her essay in the post-test was relevant and adequate to the task. The ideas presented in the essay were extended and supported. In addition, the ideas were arranged coherently, and there were only a few language errors.
Consistent with the writing tests, text analyses during the term indicated improved writing performance in content, structure, and language in both essays. In week 1, Sarah perceived that the content and structure of her essays were acceptable; although she sometimes made grammatical mistakes, her vocabulary and range of sentence patterns were her biggest weaknesses (pre-study interview). In week 16, Sarah perceived that she had made the largest improvement in genre knowledge regarding the purpose, structure, and content of argumentative writing:
The purpose of argumentative writing is to put forward my opinion, and then analyze and support my argument in paragraphs… For structure, I begin an argumentation with an introduction of the topic, summarize the materials, and then present a thesis statement. I use two to three arguments to support my thesis statement. According to these arguments, I come up with topic sentences…and support them. In the conclusion, I restate my thesis statement and propose a piece of advice, etc.
(Post-study interview)
Sarah considered that her improvements in content and structure were related to her reciprocal involvement in synthesizing assessment criteria at the pre-writing stage:
Since I need to think on my own when analyzing sample texts and discussing with peers, instead of just listening to my teacher’s explanation, I have a stronger impression of the key features of argumentation in the aspects of purpose, content, and structure… With these criteria, I know what to pay attention to during writing and what to improve during revision.
(Post-study interview)
In addition, writing drafts and observation field notes revealed that Sarah’s commitment to dialogic feedback and her proactivity in adjusting writing goals aligned with higher writing performance in content and structure. Taking the location of the thesis statement and the organization of main arguments in the two-sided argumentation as an example (see Figure 2), Sarah elicited relevant responses from peers and explained her thinking during the peer conference; when opinions among peers diverged, she solicited help from her teacher and the whole class to revise the structure and content of the essay.
As Sarah agentively engaged in the feedback process, she came to realize that the purpose of two-sided argumentation is to “argue for my point of view through a comparison; if I wanted to argue that college students should live in first-tier cities, then the comparison of two kinds of cities should be aligned with my thesis statement” (post-study interview). With this understanding, Sarah further adjusted her goals for Draft 3 in her learning log (Figure 3) and thought aloud a concrete next step: “I decided to revise the overall structure. I will present the thesis statement first and then write a few arguments” (Think aloud protocol, Essay 1 Draft 3).
As a result, her third draft saw a clearer thesis statement at the beginning: “living in first-tier cities is more suitable for graduates”. In addition, three topic sentences were revised from factual comparisons of two kinds of cities (e.g., “first-tier cities and second-tier cities have dissimilar resources’ advantages”) to claims that big cities rather than small cities are better suited to graduates (e.g., “first-tier cities have more advantages in resources than second-tier cities”). As such, her Draft 3 (see Figure 4) saw an enhancement in composing main arguments that supported the thesis statement, which strengthened the relevance of the essay content: “I used to place topic sentences in different places, but now I begin each paragraph with a topic sentence. With topic sentences, I plan to write corresponding content in each paragraph, which increases the relevance of the essay content” (post-study interview).
In addition to content and structure, Sarah progressed in vocabulary and sentence patterns: “I used to repeat simple connectives such as ‘first’ and ‘second’, but now I can use a wider range to meet different purposes” (post-study interview). Analyses of drafts and learning logs revealed that the increased richness of vocabulary and sentence patterns was associated with Sarah’s proactive involvement in self-assessment, goal-adjusting, and self-initiated revisions of lexical choices, connectives, and sentence patterns. For instance, Sarah self-assessed Draft 1 of Essay 1, identified her weakness (“the words are not abundant”), and planned to “enhance the variety of words” (self-assessment form). When revising for Draft 2, Sarah expanded her vocabulary range by using synonyms, e.g., replacing “more and more” with “an increasing number of”; she also diversified her sentence patterns by applying newly learned comparison and contrast structure words such as “nevertheless,” “on the other hand,” “while,” and “unlike” to discuss similarities and differences between big and small cities.
Think-aloud protocols and text analyses of drafts (see Figure 5) also indicated that Sarah’s self-monitoring of the variety and accuracy of vocabulary and sentence patterns positively influenced her metalanguage for different error types and sentence patterns. For instance, to achieve her goal of increasing sentence pattern variety in Essay 2 Draft 2 (think-aloud protocol), Sarah analyzed the syntax of each sentence, seeking to use new sentence patterns (e.g., emphatic sentences) and to self-edit for grammatical errors (e.g., noun endings and run-on sentences).
As a result of her self-monitoring, a comparison of eight drafts of two argumentations found that her grammatical accuracy increased. As seen in Table 3, the error ratio (the number of errors/total word count) for Essay 1 decreased from 2.5% in Draft 1 to 0.4% in Draft 4; the error ratio for Essay 2 reduced from 2.4% in Draft 1 to 1.1% in Draft 4.
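The error ratio reported in Tables 3 and 4 is simple arithmetic: the number of errors divided by the total word count, expressed as a percentage. As a minimal sketch (the function name and the illustrative word counts below are the editor’s own, not figures from the study):

```python
def error_ratio(num_errors: int, word_count: int) -> float:
    """Error ratio (errors per total words) as a percentage, one decimal place."""
    return round(100 * num_errors / word_count, 1)

# Illustrative only: a 200-word draft containing 5 errors yields a 2.5% ratio,
# the same value reported for Sarah's Essay 1 Draft 1.
print(error_ratio(5, 200))  # 2.5
```

A draft of a different length with the same number of errors yields a different ratio, which is why the measure is comparable across drafts of varying lengths.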
In sum, Sarah’s engagement in AaL was positively associated with her improvements in writing performance. Her reciprocal involvement in co-constructing assessment criteria and in driving dialogic feedback strengthened her genre knowledge regarding the purpose, content, and structure of arguments. Benefiting from this increased genre knowledge and from her goal adjustment across multiple drafts, her performance in writing well-formed thesis statements and main arguments developed. In addition, Sarah’s proactivity in self-assessment, goal-adjusting, self-monitoring, and self-initiated revisions was associated with enhanced writing performance in language, e.g., diversified vocabulary and sentence patterns and greater grammatical accuracy.

4.2. Anne

Writing tests and text analyses demonstrated that Anne made greater gains in language than in content and organization. Her writing test score improved slightly, from 11 to 12.5 out of 15 points. According to the CET-4 band descriptors, the pre-test content was relevant and adequate to the task, with ideas extended, supported, and arranged coherently; however, there were a few language errors. Her post-test argumentation improved in language regarding grammatical accuracy and lexical range. The content was relevant to the task, and the ideas were substantiated and coherently organized, but the thesis statement was not well-written. Therefore, Anne’s post-test essay failed to reach the next band of 14 points.
Text analysis of writing drafts during the term revealed improvements in content, organization, and language in Essay 1, but there were mainly enhancements in language in Essay 2. At the start of the term, Anne ranked the structure of her essay as the best, followed by the relevance of content, vocabulary range, and grammar, with the diversity of sentence patterns as the worst. At the end of the term, Anne thought her arguments improved in content, organization, and grammar.
In the areas of content and structure, Anne developed a clear idea of how to organize an argument: “I used to write whatever came up in my mind, but now I plan and think about whether the content is suitable for the writing topic…I will come up with a thesis statement and main arguments, and then add evidence to enhance the persuasiveness of my arguments” (post-study interview). Like Sarah, Anne attributed her improved organization to their engagement in analyzing sample texts and establishing success criteria with the teacher:
When summarizing key features of good argumentative writing from the sample, I wrote grammar accuracy and sentence coherence. However, my groupmates and teacher attached importance to the content…Then I realized content is important.
(Post-study interview)
Analyses of writing drafts and verbal reports for Essay 1 demonstrated that Anne’s performance in content and organization was also boosted by her proactivity in goal-adjusting and self-initiated meaning-focused revisions. Based on self-assessment results showing that Draft 1 lacked a thesis statement at the beginning, a comparison of both sides in the body, and a summary of arguments at the end (Figure 6), Anne adjusted her goals for Draft 2: “I read the topic ‘Should graduates live in first-tier or second-tier cities’ again and thought there should be a clear thesis. I chose ‘living in the second-tier cities’ as my thesis statement… Therefore, I need to emphasize the advantages of second-tier cities and deemphasize the pros of first-tier cities to support my thesis” (stimulated recall). As a result of her goal adjustment and self-initiated revisions throughout the essay, Draft 2 showed improvement in three respects (see Figure 7): it opened with a clear thesis statement (“From where I stand, I think graduates should live in the second-tier cities”); it used comparison and contrast to support her argument that “living in the second-tier cities enjoys more advantages than [first-tier] cities”; and it replaced the ambiguous closing statement “it depends on personal choice” with a restatement of her thesis, “I would suggest that graduates stay in the second-tier cities for development”.
On the other hand, Anne’s lack of reciprocal involvement in communicating her needs and doubts about feedback with peers and the teacher coincided with weaker improvements in using evidence to substantiate her arguments. When she did not understand or disagreed with feedback, she rarely sought clarification from others; nor did she set relevant goals or make any changes in subsequent drafts (post-study interview). For example, Anne neglected her peer’s feedback that the second argument in Essay 1 Draft 2 was not strong enough to support her thesis statement; instead, she set “checking and modifying grammatical mistakes” as her goal for Draft 3 (learning log). In Essay 2, despite receiving two peers’ feedback on the content and structure of Draft 2 (“there are too many facts, but the essay lacks your own ideas”; “the opinion isn’t strong enough”), Anne established an unfocused goal, “modify the structure”, for Draft 3 (learning log), without discussing with her peers what the feedback meant or how to tackle these problems. While writing, she failed to self-monitor and did not revise the structure; instead, she worked on her language goals of replacing words with synonyms and adding connectives. Consequently, writing performance in supporting evidence and the persuasiveness of ideas did not improve after peer feedback in either Essay 1 or Essay 2.
Compared with her performance in content and structure, Anne’s writing drafts showed greater gains in language, including grammatical accuracy, vocabulary richness, and diversified sentence patterns. Such development could be attributed to her metacognitive operations in planning and her self-initiated revisions with the automated writing evaluation (AWE) system. For example, before writing Essay 2 Draft 2, Anne planned to check for grammatical errors and increase the range of vocabulary and sentence patterns (think-aloud protocol). During revision, she first uploaded her Draft 1 to Pigai (an AWE system) and edited typos and grammatical errors based on the assessment results: “Paragraph 3, line 2, ‘We were badly need of’ lacks a preposition (AWE feedback). Add ‘in’ before badly? No error now… ‘in need of’ is correct then” (think-aloud protocol). Anne also corrected word choice errors based on AWE feedback: “Paragraph 2, there is no ‘defeat hardship’ in the corpus; it seems to be Chinglish (AWE feedback). Change it to ‘overcome difficulties’” (think-aloud protocol). Anne’s self-editing of grammatical errors reduced her error ratio from 3.5% in Draft 1 to 2.2% in Draft 2. An analysis of eight drafts from the two argumentative essays also indicated higher grammatical accuracy. As seen in Table 4, the error ratio of Essay 1 declined from 1.0% in Draft 1 to 0.4% in Draft 4; the error ratio of Essay 2 dropped from 3.5% in Draft 1 to 1.8% in Draft 4.
Besides editing grammatical errors, based on corpus tips about synonyms in the AWE system (see an example in Figure 8), Anne made six self-initiated revisions to lexical choices (e.g., changing “made” to “compelled”, “people” to “citizens”, “donate” to “contribute”, “but” to “however”, “many” to “quite a few”, and “besides” to “additionally”). As a result, she used synonyms more effectively to demonstrate vocabulary richness in Draft 3 (see Figure 8).
Furthermore, the variety of sentence patterns in Anne’s essays increased alongside her self-initiated revisions: “Now, before writing, I list various sentence patterns in my outline… e.g., attributive clause, inverted sentences… to remind myself to use them while drafting” (post-study interview). A comparison of Essay 1 Drafts 1 and 2 showed a wider variety of sentence patterns, including subject clauses (e.g., “as we all know” revised to “it is universally acknowledged that”), inverted sentences (e.g., “we never give up” to “in no circumstance will we give up”), and emphatic sentences (e.g., “it is Chinese people who adopted”).
To summarize, Anne’s reciprocity in co-establishing assessment criteria, as well as her proactivity in goal-adjusting and self-initiated revisions in Essay 1, was related to better writing performance in content and organization concerning thesis statements and main arguments. However, Anne lacked reciprocity in communicating her needs and doubts during the feedback process in both essays and showed low proactivity in self-regulating her writing in content and structure in Essay 2. These patterns were associated with weaker performance in effectively using evidence to substantiate main arguments. On the other hand, her proactivity in making form-focused revisions based on AWE feedback aligned with better grammatical accuracy, vocabulary richness, and sentence pattern variety in both essays.

5. Discussion

This study examined how students’ engagement in an AaL context may relate to their writing performance. Case studies of two undergraduate students demonstrated that learner engagement was closely associated with improvements in writing performance. While differences in initial proficiency and potential ceiling effects must be considered, cross-case analysis showed that Sarah (middle proficiency in the class), who displayed a higher degree of engagement, made greater improvements in writing performance, as reflected in both pre/post-tests and text analyses of multiple drafts. Her commitment to collaboration in the AaL context and proactive involvement in self-regulating her cognition and behaviour coincided with enhanced essay content and organization, as well as improved language use and accuracy. In comparison, Anne (high proficiency in the class), who demonstrated lower proactivity and minimal reciprocity, made relatively fewer improvements in her writing performance. Admittedly, the lower gain Anne experienced during one term of AaL might be attributed to the ceiling effect of English instruction on the development of writing performance among high-proficiency EFL learners (Rifkin, 2005). However, within-case analysis indicated that engagement served as an explanatory mechanism beyond proficiency level. Anne’s engagement varied across multiple drafts, which coincided with changes in her writing performance, as reflected in text analyses. For example, Anne’s constant self-initiated form-focused revisions based on AWE feedback appeared to relate to improvements in language use and accuracy. However, a dearth of reciprocity in communicating her needs and doubts during the feedback process and low proactivity in self-regulation coincided with weaker improvements in content and organization (e.g., supporting evidence and the persuasiveness of arguments in Essay 2). 
The modest gain in Anne’s post-test score resulted from an unclear thesis statement, which aligned with her limited engagement with global issues, as observed in text analyses. The following section discusses how similarities and differences in students’ engagement may be related to varying progress in writing performance.
Firstly, students who were collaborative in co-constructing the AaL context appeared to show improvements in writing performance. Before writing, both students reported that their reciprocal involvement in co-establishing assessment criteria with peers and the teacher (e.g., analyzing sample texts by themselves and working with others to compile success criteria) could strengthen their understanding of key features of argumentation. For example, both students were able to articulate important criteria for content and structure, such as a thesis statement, topic sentences, and supporting evidence. As Anne mentioned, comparing her self-synthesized criteria with those of her peers and the teacher prompted her to reflect on and reconstruct her standards of quality writing. Consistent with previous studies which found that the dialogic use of exemplars facilitated students’ understanding of assessment criteria (e.g., Carless & Chan, 2017), students’ engagement in extensive discussion of exemplars with others in this study also assisted them in co-constructing evaluative judgments by assimilating or accommodating new knowledge into their extant knowledge networks (Earl, 2013).
However, differences in students’ reciprocity at the while-writing stage seemed to be associated with variation in writing performance development. Cross-case comparisons revealed that the student who reciprocally engaged with feedback by soliciting clarification to better understand comments and seeking help to act upon feedback tended to show greater improvements. For example, Sarah’s commitment to dialogic feedback strengthened her genre knowledge regarding the purpose of two-sided argumentation, which was linked to her improved performance in writing well-formed thesis statements and main arguments. In comparison, Anne’s minimal interaction with peers during feedback conferences seemed to limit her opportunities to obtain additional prompt feedback to resolve misunderstandings, potentially contributing to the limited strengthening of supporting evidence, a weakness noted by both peers. These results supported Storch’s (2008) finding that “elaborate engagement where learners deliberated over alternatives, questioned and explained their suggestions, led to consolidation/learning, more so than limited engagement” (p. 110) where only one student made suggestions and the other repeated or did not respond. From a sociocultural perspective, social interaction is a site of knowledge construction, and collaborative dialogue is a source of language learning (Lantolf, 2006). Students who enacted engagement by expressing preferences and soliciting clarification, help, and advice could make greater progress in writing performance, probably because they were able to pool resources and gain prompt, tailored external feedback, which could then be internalized as individual resources to guide their revisions (Nicol & Macfarlane-Dick, 2006). In other words, students could become better aware of how they were doing and where to go next during collaboration.
Nonetheless, the above findings on the relation between students’ writing performance and their reciprocal engagement during the feedback process contradicted Teng and Zhang’s (2018) survey which did not find a significant impact of social behaviour strategies (e.g., peer learning and feedback handling) on writing performance. They argued that learners might not have opportunities to seek help or collaborate with peers when writing essays in a test-taking context. In comparison, the process-genre approach adopted in an AaL context provided students with opportunities to transcend knowledge gaps and address learning-related issues through interaction with the learning context (Oxford et al., 2014). This difference in learning potential inherent in learning contexts (e.g., AoL versus AaL) may influence whether students’ reciprocal involvement in the feedback process is positively related to writing performance.
Secondly, students who demonstrated higher proactivity in conducting goal-setting/adjusting/self-monitoring/self-initiated revisions appeared to make greater progress in writing performance. While previous studies emphasized the beneficial effect of goal-setting (e.g., Huang, 2015), the present study further revealed that goal-adjusting during process writing benefited writing performance. For example, Sarah gained better performance in writing thesis statements, main arguments, and supporting evidence after she adjusted writing goals based on feedback from self/peers/teacher.
While W. Xiang (2004) found that instruction on self-monitoring only significantly improved high achievers’ performance in organization, students who conducted self-monitoring in the present study also demonstrated improvement in language features, e.g., Sarah’s variety and accuracy of sentence patterns and Anne’s grammatical accuracy. Since the teacher in this study provided instructional scaffolding for both global and local issues in this AaL classroom, students were equipped with the metalanguage to analyze linguistic aspects. When students were meta-cognitively aware of what they needed to do and what options were available (Lee, 2016), their writing performance was more likely to improve across various aspects.
In addition, students’ self-initiated revisions in the behavioural dimension appeared to be positively associated with writing performance. Similar to Zhang’s (2020) study on AWE feedback, Anne used AWE feedback to make self-initiated form-focused revisions in grammar and vocabulary, which mainly coincided with improvements in language aspects, e.g., grammatical accuracy, range, and appropriateness of vocabulary. On the other hand, Sarah made greater progress in writing performance in content and organization than Anne did, which may relate to her proactivity in seeking additional feedback to make meaning-focused revisions according to assessment criteria.
Overall, the findings aligned with Wiliam’s (2011) view that learner engagement in actions in classroom assessment played an important role in students’ learning development. Sarah’s greater improvements in writing performance were associated with her higher levels of reciprocity in collaborating with teachers and peers within the AaL context, as well as her stronger proactivity in taking charge of her learning in L2 writing. During students’ collaboration in an AaL context, individuals’ evaluative judgments could be established; timely and targeted resources and feedback could be solicited. Knowledge and feedback gained through reciprocal involvement in the AaL context could assist students in planning, monitoring, evaluating, and adjusting their learning in L2 writing (Clark, 2012). As students proactively engaged in self-regulating their learning to narrow the gaps between their current performance and learning goals, their writing performance was likely to improve.

6. Conclusions

The present study examined how students’ engagement (or lack of it) related to their writing performance. Case analyses demonstrated that students’ engagement in an AaL context was closely related to their development of writing performance. Compared with Anne, Sarah, who demonstrated greater reciprocity in collaborating with teachers and peers in the AaL context as well as proactivity in taking charge of her learning in L2 writing, showed greater improvements in the content, structure, and language of her writing, as reflected in both pre/post-tests and text analyses of multiple drafts throughout the term. When students enacted minimal engagement in AaL (as Anne did in Essay 2), such as failing to use learning logs to regulate their efforts toward learning goals or not making full use of interactions and dialogues as learning resources, studying in an AaL-oriented writing classroom was unlikely to contribute to a substantial increase in writing performance. The findings help to clarify the relation between AaL, engagement, and writing performance: teachers’ implementation of AaL itself may not directly improve students’ writing performance; rather, students’ engagement in AaL can unlock the benefits of formative assessment.
One major limitation is that qualitative narratives cannot establish a causal relationship between enhanced writing performance and students’ engagement. Admittedly, students’ improved writing performance cannot be attributed solely to their engagement; it also resulted from a combination of factors, including their teacher’s instructional scaffolding, feedback, and the language input students received from other English lessons. Future classroom-based research could collect quantitative data, including engagement surveys and writing tests, from a larger sample to conduct statistical analyses of the relationship between engagement and writing performance. Despite these limitations, compared with most previous studies that investigated the manifestation of learner engagement, the present study offers a qualitative account of how learner engagement operates as a process linking assessment contexts and learning, contributing to an understanding of how learner engagement in AaL contexts relates to the development of writing performance.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/languages11040062/s1.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by Ethics Committee of the Chinese University of Hong Kong (protocol code No. SBRE-18-371 and date of approval 28 March 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to ethical reasons.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Samples of a Learning Log and Assessment Forms


References

  1. Allal, L. (2011). Pedagogy, didactics and the co-regulation of learning: A perspective from the French-language world of educational research. Research Papers in Education, 26(3), 329–336.
  2. Bai, B. (2015). The effects of strategy-based writing instruction in Singapore primary schools. System, 53, 96–106.
  3. Bai, R., Hu, G., & Gu, P. (2014). The relationship between use of writing strategies and English proficiency in Singapore primary schools. The Asia-Pacific Education Researcher, 23(3), 355–365.
  4. Bandura, A. (2006). Toward a psychology of human agency. Perspectives on Psychological Science, 1(2), 164–180.
  5. Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
  6. Carless, D., & Chan, K. K. H. (2017). Managing dialogic use of exemplars. Assessment & Evaluation in Higher Education, 42(6), 930–941.
  7. Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.). (2012). Handbook of research on student engagement. Springer Science.
  8. Clark, I. (2012). Formative assessment: Assessment is for self-regulated learning. Educational Psychology Review, 24(2), 205–249.
  9. Cleary, T. J., & Zimmerman, B. J. (2012). A cyclical self-regulatory account of student engagement: Theoretical foundations and applications. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 237–257). Springer.
  10. Dann, R. (2002). Promoting assessment as learning: Improving the learning process. Routledge Falmer.
  11. Earl, L. M. (2013). Assessment as learning: Using classroom assessment to maximize student learning (2nd ed.). Corwin Press.
  12. Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109.
  13. Gao, H. (2011). A construct validation study of CET band 4. Foreign Language Testing and Teaching, 4, 33–41.
  14. Hale, C. C. (2015). Self-assessment as academic community building: A study from a Japanese liberal arts university. Language Testing in Asia, 5(1), 1.
  15. Hawe, E., & Parr, J. (2014). Assessment for learning in the writing classroom: An incomplete realisation. Curriculum Journal, 25(2), 210–237.
  16. He, M., & Wang, L. (2025). Implementing assessment as learning in online EFL writing classes. RELC Journal, 56(2), 468–482.
  17. Huang, S. (2015). Setting writing revision goals after assessment for learning. Language Assessment Quarterly, 12(4), 363–385.
  18. Hyland, K. (2003). Second language writing. Cambridge University Press.
  19. Hyland, K. (2007). Genre pedagogy: Language, literacy and L2 writing instruction. Journal of Second Language Writing, 16(3), 148–164.
  20. Lam, R. (2010). A peer review training workshop: Coaching students to give and evaluate peer feedback. TESL Canada Journal, 27(2), 114.
  21. Lam, R. (2013). The relationship between assessment types and text revision. ELT Journal, 67(4), 446–458.
  22. Lam, R. (2016). Assessment as learning: Examining a cycle of teaching, learning, and assessment of writing in the portfolio-based classroom. Studies in Higher Education, 41(11), 1900–1917.
  23. Lantolf, J. P. (2006). Sociocultural theory and L2 development: State-of-the-art. Studies in Second Language Acquisition, 28, 67–109.
  24. Lee, I. (2016). Putting students at the centre of classroom L2 writing assessment. Canadian Modern Language Review, 72(2), 258–280.
  25. Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer.
  26. Lee, I., & Coniam, D. (2013). Introducing assessment for learning for EFL writing in an assessment of learning examination-driven system in Hong Kong. Journal of Second Language Writing, 22(1), 34–50.
  27. Lee, I., & Mak, P. (2018). Metacognition and metacognitive instruction in second language writing classrooms. TESOL Quarterly, 52(4), 1085–1097.
  28. Lee, I., Mak, P., & Yuan, R. (2019). Assessment as learning in primary writing classrooms: An exploratory study. Studies in Educational Evaluation, 62, 72–81.
  29. Leung, C., Davison, C., East, M., Evans, M., Green, A., Hamp-Lyons, L., Liu, L., & Purpura, J. E. (2018). Using assessment to promote learning: Clarifying constructs, theories, and practices. In J. M. Davis, J. M. Norris, M. E. Malone, T. McKay, & Y. A. Son (Eds.), Useful assessment and evaluation in language education (pp. 75–91). Georgetown University Press.
  30. Mackey, A., & Gass, S. M. (2005). Second language research: Methodology and design. Lawrence Erlbaum.
  31. Marshall, B., & Jane Drummond, M. (2006). How teachers engage with assessment for learning: Lessons from the classroom. Research Papers in Education, 21(2), 133–149.
  32. Mercer, S., & Dörnyei, Z. (2020). Engaging language learners in contemporary classrooms. Cambridge University Press.
  33. Miles, M., Huberman, A., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). Sage.
  34. Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
  35. Oga-Baldwin, W. Q. (2019). Acting, thinking, feeling, making, collaborating: The engagement process in foreign language learning. System, 86, 102128.
  36. Oxford, R. L., Rubin, J., Chamot, A. U., Schramm, K., Lavine, R., Gunning, P., & Nel, C. (2014). The learning strategy prism: Perspectives of learning strategy experts. System, 43, 30–49.
  37. Powell, K. C., & Kalina, C. J. (2009). Cognitive and social constructivism: Developing tools for an effective classroom. Education, 130(2), 241–250.
  38. Reeve, J. (2013). How students create motivationally supportive learning environments for themselves: The concept of agentic engagement. Journal of Educational Psychology, 105(3), 579–595.
  39. Reeve, J., Cheon, S. H., & Jang, H. (2020). How and why students make academic progress: Reconceptualizing the student engagement construct to increase its explanatory power. Contemporary Educational Psychology, 62, 101899.
  40. Rifkin, B. (2005). A ceiling effect in traditional classroom foreign language instruction: Data from Russian. The Modern Language Journal, 89(1), 3–18.
  41. Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.
  42. Storch, N. (2008). Metatalk in a pair work activity: Level of engagement and implications for language development. Language Awareness, 17, 95–114.
  43. Teng, L. S., & Zhang, L. J. (2018). Effects of motivational regulation strategies on writing performance: A mediation model of self-regulated learning of writing in English as a second/foreign language. Metacognition and Learning, 13(2), 213–240.
  44. Van Lier, L. (2008). Agency in the classroom. In J. P. Lantolf, & M. E. Poehner (Eds.), Sociocultural theory and the teaching of second languages (pp. 163–186). Equinox.
  45. Wang, L., & Lee, I. (2021). L2 learners’ agentic engagement in an assessment as learning-focused writing classroom. Assessing Writing, 50, 100571.
  46. Wang, L., Lee, I., & Park, M. (2020). Chinese university EFL teachers’ beliefs and practices of classroom writing assessment. Studies in Educational Evaluation, 66, 100890.
  47. Wang, L., & Shen, B. (2025). Influencing factors of L2 writers’ engagement in an assessment as learning-focused context. International Review of Applied Linguistics in Language Teaching, 63(4), 2591–2632.
  48. Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14.
  49. Wong, K. M., & Mak, P. (2019). Self-assessment in the primary L2 writing classroom. Canadian Modern Language Review, 75(2), 183–196.
  50. Xiang, W. (2004). Encouraging self-monitoring in writing by Chinese students. ELT Journal, 58(3), 238–246.
  51. Xiang, X., Yuan, R., & Yu, B. (2022). Implementing assessment as learning in the L2 writing classroom: A Chinese case. Assessment & Evaluation in Higher Education, 47(5), 727–741. [Google Scholar] [CrossRef]
  52. Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, 100439. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework of learner engagement in an AaL context (Adapted from Wang & Lee, 2021).
Figure 2. Excerpt taken from Sarah’s peer conference of Essay 1.
Figure 3. Sarah’s learning log of Essay 1.
Figure 4. Excerpt from Sarah’s Drafts 2 and 3 of Essay 1 (revised sections in bold).
Figure 5. Excerpt from Sarah’s Drafts 1 and 2 of Essay 2 (revised sections in bold).
Figure 6. Anne’s self-assessment form of Essay 1.
Figure 7. Excerpt from Anne’s Drafts 1 and 2 of Essay 1 (revised sections in bold).
Figure 8. Excerpt from AWE feedback and Anne’s revision of lexical choices in Draft 3 of Essay 2 (deleted words shown in red; words Anne added highlighted in yellow).
Table 1. Data collection timeline.
| Timeline | Data Collected |
| --- | --- |
| Week 1 | Pre-test |
| Week 2 | Pre-study interviews |
| Week 4 | Essay 1 Draft 1 (E1D1) with think-alouds, learning logs, and stimulated recalls (within 2 days after completing each draft) |
| Week 5 | Self-assessment forms of E1D1; E1D2 with think-alouds, learning logs, and stimulated recalls |
| Week 6 | Peer feedback forms and conference recordings of E1D2 |
| Week 7 | E1D3 with think-alouds, learning logs, and stimulated recalls |
| Week 8 | Teacher feedback forms of E1D3 |
| Week 9 | E1D4 with think-alouds, learning logs, and stimulated recalls |
| Week 11 | E2D1 with think-alouds, learning logs, and stimulated recalls |
| Week 12 | Self-assessment forms of E2D1; E2D2 with think-alouds, learning logs, and stimulated recalls |
| Week 13 | Peer feedback forms and conference recordings of E2D2 |
| Week 14 | E2D3 with think-alouds, learning logs, and stimulated recalls |
| Week 15 | Teacher feedback forms of E2D3 |
| Week 16 | E2D4 with think-alouds, learning logs, and stimulated recalls |
| Week 16 | Post-test and post-study interviews |
| During the term | Classroom observation |
Table 2. Coding scheme of writing performance.
| Categories | Codes |
| --- | --- |
| Content and Structure | CS1. Introduction: lead-in |
| | CS2. Introduction: thesis statement |
| | CS3. Body: presenting main arguments in topic sentences |
| | CS4. Body: organizing main arguments in proper paragraphs |
| | CS5. Body: supporting details |
| | CS6. Conclusion: restating thesis and summarizing main arguments |
| | CS7. Conclusion: concluding the arguments |
| | CS8. Logic and persuasiveness of ideas |
| Language | LA1. Grammatical accuracy (verb tenses and forms, prepositions (prep.), word choices, word forms) |
| | LA2. Sentence patterns (range and appropriateness) |
| | LA3. Vocabulary (range and appropriateness) |
| | LA4. Connectives (range and appropriateness) |
Table 3. The number of grammatical errors in Sarah’s drafts.
| Draft Number (Total Word Count) | Verb Form and Tense | Prep. | Word Choice | Word Form | Sentence Structure | Ratio (Number of Errors/Total Word Count) |
| --- | --- | --- | --- | --- | --- | --- |
| Essay 1 Draft 1 (241) | 2 | 1 | 3 | | | 2.5% |
| Essay 1 Draft 2 (257) | 1 | 1 | 3 | | | 1.9% |
| Essay 1 Draft 3 (314) | 1 | 2 | 3 | | | 1.9% |
| Essay 1 Draft 4 (275) | 0 | 0 | 1 | | | 0.4% |
| Essay 2 Draft 1 (415) | 1 | 1 | 5 | 0 | 3 | 2.4% |
| Essay 2 Draft 2 (454) | 1 | 1 | 5 | 0 | 3 | 2.2% |
| Essay 2 Draft 3 (444) | 1 | 1 | 5 | 0 | 3 | 2.3% |
| Essay 2 Draft 4 (454) | 1 | 1 | 2 | 0 | 1 | 1.1% |
Table 4. The number of grammatical errors in Anne’s writing drafts.
| Draft Number (Total Word Count) | Verb Form and Tense | Prep. | Word Choice | Word Form | Sentence Structure | Ratio (Number of Errors/Total Word Count) |
| --- | --- | --- | --- | --- | --- | --- |
| Essay 1 Draft 1 (294) | 1 | 2 | 0 | | | 1.0% |
| Essay 1 Draft 2 (281) | 2 | 0 | 0 | | | 0.7% |
| Essay 1 Draft 3 (245) | 2 | 0 | 0 | | | 0.8% |
| Essay 1 Draft 4 (263) | 1 | 0 | 0 | | | 0.4% |
| Essay 2 Draft 1 (396) | 5 | 3 | 3 | 1 | 2 | 3.5% |
| Essay 2 Draft 2 (410) | 3 | 2 | 1 | 1 | 2 | 2.2% |
| Essay 2 Draft 3 (432) | 4 | 2 | 1 | 1 | 2 | 2.3% |
| Essay 2 Draft 4 (441) | 4 | 2 | 0 | 1 | 1 | 1.8% |
Wang, L. Learner Engagement and Writing Performance in Assessment as Learning L2 Writing. Languages 2026, 11, 62. https://doi.org/10.3390/languages11040062