1. Introduction
In an era where lifelong learning and self-directed competence are increasingly emphasized, self-regulated writing ability has emerged as a pivotal educational goal for higher education. This ability encompasses cognitive and metacognitive strategies, as well as emotional and motivational components, that empower learners to independently plan, monitor, and revise their writing (
Harris, 2024;
Rodriguez-Gomez et al., 2024). Writing is cognitively demanding and emotionally charged, and self-regulation in writing significantly influences learners’ academic success, persistence, and adaptability (
Bai & Wang, 2023;
Hwang, 2025;
Karlen et al., 2021;
Sun & Wang, 2020;
Zimmerman & Bandura, 1994).
Recent studies have highlighted the crucial role of individual learner characteristics—such as beliefs, attitudes, and emotional competencies—in shaping writing behaviors and outcomes (
Hwang, 2025;
Lin et al., 2024;
Mastrokoukou et al., 2024;
Sun & Wang, 2020;
L. S. Teng, 2024;
L. S. Teng & Huang, 2019;
Yang & Zhang, 2023). One particularly influential factor is mindset (
Dweck, 2006). A growth mindset—the belief that writing ability improves through effort—enhances learners’ persistence and openness to feedback (
L. S. Teng, 2024). Learners with a growth mindset believe that their abilities can be improved through effort and learning, whereas those with a fixed mindset view their abilities as innate and unchangeable. Growth-minded learners are more likely to embrace challenges, persist through hurdles, and interpret feedback constructively. In contrast, fixed-minded learners commonly avoid challenges and perceive feedback as a threat to their self-worth (
Haimovitz & Dweck, 2016;
L. S. Teng, 2024;
Waller & Papi, 2017). In the context of writing, learners’ perceptions of feedback play a critical role in influencing their engagement, revision strategies, and overall writing performance. Feedback perception refers to how learners interpret and emotionally respond to feedback from instructors or peers, directly influencing their willingness to revise and refine their writing (
Ajjawi et al., 2022;
Ekholm et al., 2015;
He et al., 2023;
Hwang, 2025;
McGrath et al., 2011;
Pearson, 2022;
Rowe et al., 2014;
Xu, 2022;
Yang & Zhang, 2023;
Zhu & Carless, 2018;
Zumbrunn et al., 2016). Writing feedback perception, particularly when feedback is interpreted constructively, mediates learners’ engagement with revision strategies (
He et al., 2023). Additionally, academic emotion regulation—the ability of learners to monitor and manage emotions such as anxiety, frustration, or boredom during academic tasks—is essential for sustaining cognitive engagement (
Camacho-Morles et al., 2021;
Harley et al., 2019;
Karlen et al., 2021;
Mastrokoukou et al., 2024;
Pekrun et al., 2002;
Winstone et al., 2017). Likewise, the relations among mindset, motivation, emotion, self-regulated learning, and academic performance have been studied, suggesting that the influence of mindset on academic outcomes is often indirect, working through adaptive learning behaviors (
Burnette et al., 2013;
Karlen et al., 2019). Building on this body of work, this study investigates the direct effect of mindset on self-regulated writing ability and the indirect pathways through which mindset could shape writing outcomes.
While numerous studies have explored the individual effects of mindset, feedback, and emotion regulation on writing-related learning and performance, few have examined how these factors interact in influencing self-regulated writing. For example,
L. S. Teng and Zhang (
2020) highlighted the role played by emotional and motivational dimensions in writing, and
Yang and Zhang (
2023) stressed the importance of understanding feedback engagement. Some more recent studies in broader learning contexts have suggested that self-regulated learning involves a dynamic interplay of cognitive, emotional, and motivational components (
Esnaashari et al., 2023;
Heikkinen et al., 2025); however, empirical research that applies such integrative models specifically to writing—and examines the direct and mediating roles of mindset, feedback perception, and emotion regulation—remains limited.
Thus, this research investigates the interconnected roles of mindset, writing feedback perception, and academic emotion regulation in predicting self-regulated writing ability among university students. Specifically, it aims to identify both the direct and indirect effects of mindset on self-regulated writing ability. To address this objective, the following research questions (RQs) and corresponding conceptual model (
Figure 1) were developed:
RQ1: What are the relations among mindset, writing feedback perception, academic emotion regulation, and self-regulated writing ability among university students?
RQ2: Does writing feedback perception mediate the relationship between mindset and self-regulated writing ability?
RQ3: Does academic emotion regulation mediate the relationship between mindset and self-regulated writing ability?
RQ4: Do writing feedback perception and academic emotion regulation sequentially mediate the relationship between mindset and self-regulated writing ability?
Based on these RQs, the following hypotheses were proposed:
H1: Mindset is significantly correlated with writing feedback perception, academic emotion regulation, and self-regulated writing ability.
H2: Writing feedback perception mediates the relationship between mindset and self-regulated writing ability.
H3: Academic emotion regulation mediates the relationship between mindset and self-regulated writing ability.
H4: Writing feedback perception and academic emotion regulation sequentially mediate the relationship between mindset and self-regulated writing ability.
3. Methods
3.1. Participants and Sampling Procedures
This study investigated the relationships among a growth mindset, writing feedback perception, academic emotion regulation, and self-regulated writing ability. Participants were 313 first-year undergraduate students enrolled in the mandatory “Logical Thinking and Writing” course at H University during the second semester of the 2024 academic year. The course included students from various majors, thus ensuring disciplinary diversity. Before survey administration, the instructor (also the researcher) explained the purpose of the study, emphasizing its use for course improvement and academic research. Participation was voluntary, and informed consent was obtained from all students. The study received ethical approval from the university’s Institutional Review Board (IRB number: 7002340-202505-HR-011, approval date: 13 May 2025), and all procedures adhered to the Declaration of Helsinki and the university’s IRB guidelines.
Data were collected through a pre-course online survey administered via a secure platform between September 15 and 30, 2024. The survey included demographic items (e.g., gender and major) and measures of the study’s main variables. To ensure data quality, time limits and required-response items were implemented. Students were eligible for inclusion if they (1) were enrolled in the course, (2) completed the full survey, and (3) provided informed consent. Students with incomplete responses or unofficial course registration were excluded. The final dataset comprised 313 valid cases. The survey was completed during regular class hours and took approximately 20 min. The average age of the participants was 20.84 years (SD = 1.77). Of the participants, 168 (53.7%) identified as male and 145 (46.3%) as female; no participants selected the “prefer not to disclose/other” option. In terms of academic background, 33.24% (n = 104) of the students were enrolled in engineering-related majors—including mechanical information (7.67%), software convergence (8.63%), game software (7.99%), and electronics and electrical convergence engineering (8.95%). The majority (56.87%, n = 178) majored in arts-related disciplines such as design convergence (20.45%), media and animation (15.65%), and game graphics (20.77%). Additionally, 9.9% (n = 31) of the students were pursuing interdisciplinary studies through free major programs on campus.
3.2. Measures
In this work, a self-reported online survey was administered, and each item was measured using a 5-point Likert scale ranging from “Strongly Agree” to “Strongly Disagree.” The survey comprised four sections: mindsets, writing feedback perception, academic emotion regulation, and self-regulated writing ability.
Firstly, mindsets were measured using the Mindset Scale developed by Dweck (2006) and translated into Korean by Choi (2022). This scale includes two subdomains: growth mindset and fixed mindset, each consisting of four items, for a total of eight items (e.g., “No matter who you are, you can always change your abilities significantly” for the growth mindset; “
Your abilities are something very basic about you that you can’t change” for the fixed mindset). A growth mindset refers to the belief that intelligence and abilities can improve through learning and effort, whereas a fixed mindset reflects the belief that intelligence and abilities are unchangeable despite effort. The reliability (Cronbach’s α) of the original scale was 0.775 for the growth mindset and 0.815 for the fixed mindset (
Choi, 2022). In this research, the reliability coefficients (Cronbach’s α) were 0.74 for the growth mindset and 0.78 for the fixed mindset. All items for this scale are available in
Appendix A.
Secondly, writing feedback perception was assessed using five items selected from Ekholm et al. (2015) and Zumbrunn et al. (2016), which were adapted to align with the course objectives. The scale comprised three subfactors—perceptions of instructor feedback, peer feedback, and feedback from acquaintances—with a total of five items (e.g., “I like it when teachers comment on my writing”). The original scale’s reliability coefficients (Cronbach’s α) were 0.81 (
Ekholm et al., 2015) and 0.83 (
Zumbrunn et al., 2016). In the current research, the scale demonstrated acceptable reliability, with a Cronbach’s α of 0.73. All items for this scale are available in
Appendix A.
Thirdly, academic emotion regulation was measured using a scale developed and validated by J. Yu (2012), based on the theoretical framework of Pekrun and Stephens (2009). This scale includes four subdimensions—emotional awareness, goal-congruent behavior, positive reappraisal of emotions, and access to emotion regulation strategies—each consisting of six items, for a total of 24 items (e.g., “Even if I feel frustrated after a test, I quickly forget it and focus on preparing for the next one”). The original scale’s reliability coefficients were 0.81, 0.85, 0.84, and 0.88, respectively (
J. Yu, 2012). In this research, the overall reliability coefficient (Cronbach’s α) for academic emotion regulation strategies was 0.881, with subdimension coefficients of 0.820 (emotional awareness), 0.823 (goal-congruent behavior), 0.763 (positive reappraisal), and 0.886 (access to strategies). All items for this scale are available in
Appendix A.
Fourthly, self-regulated writing ability was assessed using 12 items (e.g., “Before I write, I set goals for my writing”) from Zumbrunn et al. (2016), covering seven subdimensions: goal setting, planning, self-monitoring, attention control, emotion regulation, self-instruction, and help-seeking in writing. The original scale reported a Cronbach’s α of 0.79 (
Zumbrunn et al., 2016), whereas this study obtained a reliability coefficient of 0.777. All items for this scale are available in
Appendix A.
The internal consistency reliability of each construct was assessed using Cronbach’s alpha. All subscales demonstrated sufficient unidimensionality; thus, this method was considered appropriate. Although McDonald’s omega may provide certain advantages, Cronbach’s alpha remains a valid and widely accepted estimate of reliability, particularly in unidimensional scales, such as those used in this study.
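For reference, the reliability estimate used throughout this section can be computed directly from an items-by-respondents matrix. The sketch below is illustrative only: it assumes a simulated response matrix rather than the study data and is not the authors’ code.

```python
# Minimal sketch of Cronbach's alpha for one unidimensional subscale
# (e.g., four growth-mindset items). Simulated data, not the study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the scale score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(313, 1))
demo = true_score + rng.normal(size=(313, 4))       # four correlated hypothetical items
print(round(cronbach_alpha(demo), 3))
```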
3.3. A Priori Power Analysis and Sample Size Justification
A priori power analysis was conducted using G*Power 3.1. Assuming a medium effect size (f² = 0.15, equivalent to Cohen’s f = 0.387), a significance level of α = 0.05, and a desired power of 0.80, the minimum required sample size for detecting an overall effect in a multiple regression model with five predictors was approximately 91. The final sample of 313 exceeded this threshold, ensuring sufficient statistical power not only for multiple regression but also for the serial mediation analysis conducted using PROCESS Model 6, which involved two mediators arranged in a pre-specified causal sequence.
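The G*Power result can be approximated outside the software. The sketch below assumes G*Power’s noncentrality convention (λ = f²·N) for the fixed-effects multiple regression F test; it is a rough illustration, not the authors’ procedure.

```python
# Approximate a priori power analysis for the overall F test in multiple
# regression with five predictors (f^2 = 0.15, alpha = .05, power = .80).
from scipy.stats import f as f_dist, ncf

def power_for_n(n, f2=0.15, n_predictors=5, alpha=0.05):
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n                                  # noncentrality, G*Power convention
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

n = 7                                             # smallest n with df2 >= 1
while power_for_n(n) < 0.80:
    n += 1
print(n, round(power_for_n(n), 3))                # roughly 91-92 participants
```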
3.4. Data Analysis
To answer RQ1, correlation analysis and hierarchical multiple regression analysis were conducted to explore the relationships among key variables. To address RQs 2–4, SPSS Statistics 29 and the PROCESS macro (Model 6) were employed to test the hypothesized serial mediation model. Firstly, prior to analysis, missing values were identified and replaced using the Expectation–Maximization (EM) method to ensure data quality. Secondly, Cronbach’s alpha coefficients were calculated to assess the internal consistency reliability of the measurement instruments. Thirdly, descriptive statistics, including means, standard deviations, skewness, and kurtosis, were examined to evaluate data distribution and response patterns. Pearson correlation coefficients were then computed to assess linear relationships among continuous variables, as assumptions of normality were met. Finally, data were analyzed using the PROCESS macro for SPSS (Model 6;
Hayes, 2017), which is specifically designed for serial mediation analysis. This model was selected because it enables the examination of multiple mediators in a predetermined causal sequence, thus fitting the theoretical assumptions of this study. In particular, it allows for testing both simple and sequential indirect effects, reflecting how mindsets may influence writing outcomes through intermediate cognitive and emotional mechanisms. Regression-based mediation analysis was conducted using 5000 bootstrap samples, and 95% bias-corrected confidence intervals were used to assess the significance of indirect effects. This analytical framework enabled the evaluation of both direct and indirect effects, including simple and sequential dual mediation, thereby providing a comprehensive understanding of how learners’ beliefs, feedback perception, and emotion regulation together contribute to their self-regulated writing ability.
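For readers without SPSS, the logic of PROCESS Model 6 can be approximated with ordinary least squares and case-resampling bootstrapping. The sketch below is a simplified, hypothetical illustration: it uses percentile rather than bias-corrected intervals, simulated data, and made-up column names (GM, FBP, AER, SRW); it is not the authors’ analysis.

```python
# Serial mediation (X -> M1 -> M2 -> Y) with bootstrap CIs, approximating
# PROCESS Model 6. Illustrative only; percentile (not bias-corrected) CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def serial_mediation(df, x="GM", m1="FBP", m2="AER", y="SRW", n_boot=5000, seed=42):
    rng = np.random.default_rng(seed)

    def paths(d):
        a1 = smf.ols(f"{m1} ~ {x}", d).fit().params[x]            # X -> M1
        m2_fit = smf.ols(f"{m2} ~ {x} + {m1}", d).fit()
        a2, d21 = m2_fit.params[x], m2_fit.params[m1]              # X -> M2, M1 -> M2
        y_fit = smf.ols(f"{y} ~ {x} + {m1} + {m2}", d).fit()
        b1, b2, c_p = y_fit.params[m1], y_fit.params[m2], y_fit.params[x]
        return np.array([a1 * b1,        # indirect effect via M1 only
                         a2 * b2,        # indirect effect via M2 only
                         a1 * d21 * b2,  # serial indirect effect via M1 then M2
                         c_p])           # direct effect of X on Y

    point = paths(df)
    boot = np.empty((n_boot, 4))
    for i in range(n_boot):
        idx = rng.integers(0, len(df), len(df))    # resample cases with replacement
        boot[i] = paths(df.iloc[idx])
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    for label, est, l, h in zip(["via M1", "via M2", "serial", "direct"], point, lo, hi):
        print(f"{label:>7}: {est: .4f}  95% CI [{l: .4f}, {h: .4f}]")

# Demo with weakly related simulated scores (bootstrap count reduced for speed).
rng = np.random.default_rng(0)
n = 313
GM = rng.normal(size=n)
FBP = 0.4 * GM + rng.normal(size=n)
AER = 0.2 * GM + 0.2 * FBP + rng.normal(size=n)
SRW = 0.2 * GM + 0.25 * FBP + 0.35 * AER + rng.normal(size=n)
serial_mediation(pd.DataFrame({"GM": GM, "FBP": FBP, "AER": AER, "SRW": SRW}), n_boot=1000)
```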
4. Results
4.1. Scale Validation
Confirmatory factor analysis (CFA) was conducted to evaluate the construct validity of the four-factor measurement model, which included growth mindset, feedback perception, academic emotion regulation, and self-regulated writing ability.
Table 1 exhibits the goodness-of-fit indices for the four-factor measurement model, indicating a good fit to the data: χ²(205) = 205.00, p = 0.487, χ²/df = 1.85, RMSEA = 0.045, SRMR = 0.035, CFI = 0.962, and TLI = 0.958. The subscales—mindsets, writing feedback perception, academic emotion regulation, and self-regulated writing ability—also exhibited adequate convergent validity, with average variance extracted (AVE) values of 0.58, 0.61, 0.55, and 0.59, respectively. Additionally, multicollinearity was not detected, as indicated by the variance inflation factor (VIF) values ranging from 1.33 to 1.55.
All observed variables loaded significantly onto their respective latent constructs. The standardized factor loadings ranged from 0.683 to 0.821, and the R² values ranged from 0.467 to 0.674, showing that each item accounted for a substantial proportion of the variance in its corresponding latent factor. These findings support the factorial validity of the four-factor measurement model, demonstrating satisfactory levels of indicator reliability and convergent validity.
Table 2 presents the standardized factor loadings and squared multiple correlations (R²) for each latent construct.
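To make the convergent-validity indices concrete: under a congeneric measurement model, each item’s R² equals its squared standardized loading, and AVE is the mean of those squared loadings. The snippet below uses hypothetical loadings, not the values reported in Table 2.

```python
# Indicator reliability and AVE from standardized loadings (hypothetical values).
import numpy as np

loadings = np.array([0.70, 0.74, 0.78, 0.81])   # standardized loadings, one construct
r_squared = loadings ** 2                        # item R^2 (indicator reliability)
ave = r_squared.mean()                           # average variance extracted
print(np.round(r_squared, 3), round(ave, 3))     # AVE > 0.50 suggests convergent validity
```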
4.2. Descriptive Statistics and Correlation Analysis Between Variables
Table 3 presents the descriptive statistics and Pearson’s correlation coefficients for the key variables. The following abbreviations are used throughout the results and tables: growth mindset (GM), fixed mindset (FM), feedback perception (FBP), academic emotion regulation (AER), and self-regulated writing (SRW).
Because all variables satisfied the assumption of normality, as indicated by skewness values ranging from 0.015 to 0.535 and kurtosis values ranging from 0.036 to 1.273, Pearson’s r was deemed appropriate for the examination of the linear relationships among continuous variables. A GM was positively correlated with SRW ability (r = 0.453) and AER (total) (r = 0.326), whereas an FM showed negative correlations with both SRW ability (r = −0.233) and AER (total) (r = −0.057). In addition, AER (total) and SRW ability were positively correlated (r = 0.478). These correlation results confirm H1, demonstrating that a growth mindset is positively associated with feedback perception, academic emotion regulation, and self-regulated writing ability.
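The screening and correlation steps above can be reproduced with standard tools. The sketch below uses simulated scores under the study’s variable abbreviations and is purely illustrative.

```python
# Normality screening (skewness/kurtosis) and a Pearson correlation, on simulated data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
cols = ["GM", "FM", "FBP", "AER", "SRW"]
df = pd.DataFrame(rng.normal(3.5, 0.6, size=(313, 5)), columns=cols)  # hypothetical composites

screen = pd.DataFrame({"mean": df.mean(), "sd": df.std(),
                       "skew": df.skew(), "kurtosis": df.kurt()})
print(screen.round(3))                         # distributional screening
r, p = stats.pearsonr(df["GM"], df["SRW"])     # e.g., GM-SRW correlation
print(f"r = {r:.3f}, p = {p:.4f}")
```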
4.3. Verification of the Predictive Power of Mindsets, Writing Feedback Perception, and Academic Emotion Regulation on Self-Regulated Writing Ability
To examine the effects of mindsets, writing feedback perception, and AER on SRW ability, a hierarchical multiple regression analysis was conducted. Multicollinearity was tested to ensure the independence of the predictors. Tolerance values ranged from 0.71 to 0.76, well above the conventional 0.10 threshold, and all VIF values were below 10, indicating no multicollinearity issues. The detailed results showing how first-year university students’ mindsets, writing feedback perceptions, and AER influence SRW ability are presented in
Table 4.
Model 1 examined the effect of the GM on SRW ability. The GM accounted for 17.3% of the variance (F = 43.254, p < 0.001), and a stronger GM was associated with higher SRW ability (β = 0.416, p < 0.001). In Model 2, the FM was added while controlling for the GM. The explanatory power remained at 17.3%, and the FM did not significantly contribute to the model (β = −0.006, p = 0.930). Although the FM was not a significant predictor in Model 2, it was retained in Models 3 and 4 to maintain theoretical consistency and control for potential suppressor effects. This approach ensures that the subsequent predictive contributions of writing feedback perception and AER are evaluated while accounting for variance potentially explained by an FM. The hierarchical inclusion of variables was guided by theoretical rationale: mindset variables were entered first, as they represent foundational learner beliefs, followed by writing feedback perception and AER, which are conceptually and proximally linked to SRW behavior. In Model 3, writing feedback perception was added while controlling for the growth and fixed mindsets. The explanatory power increased to 25.1% (F = 21.533, p < 0.001), and writing feedback perception had a significantly positive effect on SRW ability (β = 0.311, p < 0.001), indicating that learners with a more positive perception of writing feedback exhibit higher SRW ability. In Model 4, AER was added while controlling for the GM, FM, and writing feedback perception. The model’s explanatory power increased to 35.4% (F = 32.202, p < 0.001), and AER was the strongest predictor of SRW ability (β = 0.344, p < 0.001), suggesting that a learner’s ability to regulate emotions plays a crucial role in fostering effective SRW.
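The block-wise entry and R² change logic behind Table 4 can be expressed compactly. The sketch below uses simulated composite scores (not the study data) to illustrate the four-model sequence; an F test of the R² change between adjacent models could be added analogously.

```python
# Hierarchical (block-wise) regression with R^2 change, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 313
GM = rng.normal(size=n)
FM = -0.3 * GM + rng.normal(size=n)
FBP = 0.4 * GM + rng.normal(size=n)
AER = 0.3 * GM + 0.2 * FBP + rng.normal(size=n)
SRW = 0.2 * GM + 0.3 * FBP + 0.35 * AER + rng.normal(size=n)
df = pd.DataFrame(dict(GM=GM, FM=FM, FBP=FBP, AER=AER, SRW=SRW))

blocks = ["SRW ~ GM",                   # Model 1: growth mindset
          "SRW ~ GM + FM",              # Model 2: + fixed mindset
          "SRW ~ GM + FM + FBP",        # Model 3: + feedback perception
          "SRW ~ GM + FM + FBP + AER"]  # Model 4: + emotion regulation
prev_r2 = 0.0
for i, formula in enumerate(blocks, start=1):
    fit = smf.ols(formula, data=df).fit()
    print(f"Model {i}: R2 = {fit.rsquared:.3f}, "
          f"dR2 = {fit.rsquared - prev_r2:.3f}, F = {fit.fvalue:.2f}")
    prev_r2 = fit.rsquared
```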
4.4. Inferential Statistics
In this research, the fixed mindset—initially included in the research model—was found to have no significant effect on writing feedback perception, AER, or SRW ability. Accordingly, the final model excluded the FM and analyzed only the pathways centered on the GM. This finding is consistent with previous research indicating that a GM, rather than an FM, is more closely associated with psychological and behavioral factors related to learners’ self-regulation (
Dweck, 2006).
To verify the mediation effects of writing feedback perception and AER in the relationship between the GM and SRW ability, the PROCESS macro Model 6 proposed by
Hayes (
2017) was applied. In the analysis, GM was designated as the independent variable (X), SRW ability as the dependent variable (Y), and writing feedback perception (M1) and AER (M2) as the mediators. The results of the mediation analysis, encompassing all direct and indirect paths, are presented in
Table 5 and visually illustrated in
Figure 2 as a conceptual path model.
The path analysis revealed that all specified relationships were statistically significant, supporting the hypothesized mediation model. Firstly, a GM positively predicted writing feedback perception (B = 0.3734, p < 0.001), AER (B = 0.2153, p < 0.01), and SRW ability (B = 0.1663, p < 0.05). Secondly, writing feedback perception positively predicted AER (B = 0.1715, p < 0.05) and SRW ability (B = 0.2364, p < 0.001). Thirdly, AER was identified as the strongest predictor of SRW ability, exhibiting a significantly positive effect (B = 0.3361, p < 0.001).
According to
Hayes (
2017), a mediation effect is established when the following conditions are met: the relationship between the independent variable (IV) and the dependent variable (DV) is significant; the relationship between the IV and the mediator is significant; and, when the mediator is added, the direct effect of the IV on the DV becomes smaller than the total effect. In this model, the direct effect of a GM on SRW ability with the mediators included (B = 0.1663, p < 0.05) was lower than the total effect (B = 0.3485, p < 0.001). This indicates that writing feedback perception and AER serve as mediators in the relationship between a GM and SRW ability.
A more detailed analysis confirmed that the indirect effect through writing feedback perception, the indirect effect through AER, and the sequential indirect effect through both writing feedback perception and AER were all statistically significant. To test the serial dual mediation model, 5000 bootstrapped samples were generated with a 95% confidence interval. The unstandardized indirect effects are reported in
Table 6, and the completely standardized indirect effects are presented in
Table 7. The indirect effects through both writing feedback perception and AER were statistically significant, supporting H2 and H3, respectively.
The specific indirect effect of a GM on SRW ability through writing feedback perception (X → M1 → Y) was 0.0883 (unstandardized B, 95% CI [0.0414, 0.1489]) and 0.1053 (standardized B, 95% CI [0.0507, 0.1696]), both indicating a statistically significant mediation effect. Additionally, the indirect effect through AER (X → M2 → Y) was 0.0724 (unstandardized B, 95% CI [0.0256, 0.1316]) and 0.0863 (standardized B, 95% CI [0.0308, 0.1523]), which were also significant. The serial dual mediation effect through both mediators (X → M1 → M2 → Y) was 0.0215 (unstandardized B, 95% CI [0.0044, 0.0435]) and 0.0257 (standardized B, 95% CI [0.0053, 0.0516]). As such, none of the confidence intervals included zero; thus, all indirect paths were statistically significant. Finally, the total indirect effect of a GM on SRW ability was 0.1822 (unstandardized B, 95% CI [0.1069, 0.2686]) and 0.2173 (standardized B, 95% CI [0.1343, 0.3099]), confirming a robust overall mediation effect. These findings validate H4, as the serial indirect effect through both feedback perception and AER was statistically significant.
In summary, both the unstandardized and standardized analyses confirmed that writing feedback perception and AER significantly mediated the effect of a GM on SRW ability, with all individual and combined indirect paths reaching statistical significance.
Figure 2 illustrates the path model, including the total, direct, and indirect paths from a GM to SRW ability, mediated by writing feedback perception and AER. The coefficients presented are unstandardized direct effects, and all paths reach statistical significance, thereby supporting the proposed serial mediation mechanism.
5. Discussion
This research investigated how a GM, writing feedback perception, and AER individually and collectively affect SRW ability. The main findings are as follows: Firstly, a GM had a significantly positive effect on SRW ability, whereas an FM was not a significant predictor. This finding supports previous research showing that a GM enhances sustained effort, persistence, and SRL behaviors, and is more strongly associated with emotional regulation than an FM (
Bai & Wang, 2023;
Burnette et al., 2013;
Dweck, 2006;
Karlen et al., 2021;
M. F. Teng et al., 2024;
Yeager & Dweck, 2020). In writing contexts, a GM promotes goal setting, self-monitoring, and strategy use, even in complex tasks, and fosters perseverance and openness to challenges (
Dweck & Leggett, 1988;
M. F. Teng et al., 2024). In addition, writing feedback perception significantly predicted SRW ability, even after controlling for mindset. This finding aligns with studies that have shown that learners who perceive feedback as constructive are more likely to engage in self-regulated revision and strategic learning (
He et al., 2023;
Panadero et al., 2017;
Winstone et al., 2017;
Zhu & Carless, 2018). Effective feedback supports performance monitoring and strategic adjustment and is linked to increased self-efficacy, motivation, and academic success (
Ekholm et al., 2015;
Nicol & Macfarlane-Dick, 2006). Finally, AER was the strongest predictor of SRW ability. It helps alleviate negative emotions (e.g., anxiety, frustration, and boredom), supports persistence and higher-order strategy use, and enhances perceived control and task value (
Camacho-Morles et al., 2021;
Efklides, 2011;
Harley et al., 2019;
Pekrun, 2006;
Pekrun et al., 2017). Learners with better emotion regulation maintain their self-efficacy and motivation, contributing to more active SRL (
L. S. Teng & Zhang, 2016).
These findings identified a GM, positive writing feedback perception, and AER as key contributors to SRW ability. These factors interact to enhance learners’ writing performance and self-regulation. The findings highlight the importance of fostering a GM, promoting constructive feedback perception, and supporting emotion regulation in college writing education. Accordingly, educational programs should aim to strengthen a GM (
Yeager & Dweck, 2020), feedback literacy (
Carless & Young, 2024), and emotion regulation strategies (
Pekrun et al., 2007;
Tzohar-Rozen & Kramarski, 2014) to support learners’ SRW.
Secondly, the mediating role of writing feedback perception in the relationship between a GM and SRW ability was confirmed. Learners with a GM tended to perceive feedback as positive and useful, which encouraged the use of SRW strategies and enhanced writing ability (
Waller & Papi, 2017). This aligns with prior research, which showed that learners who view feedback as an opportunity for improvement are more likely to engage in self-monitoring and strategic revision (
Nicol & Macfarlane-Dick, 2006;
Panadero et al., 2017;
Zhu & Carless, 2018). Feedback perception also supports writing improvement by boosting self-efficacy and motivation (
Ekholm et al., 2015;
Zumbrunn et al., 2016), and plays a central role in promoting feedback engagement and SRL (
Hwang, 2025;
Winstone et al., 2017;
Zhu & Carless, 2018). These findings underscore the importance of developing students’ feedback literacy—the ability to interpret, evaluate, and apply feedback effectively (
Carless & Boud, 2018;
Carless & Young, 2024). Feedback literacy involves critical reflection and emotional regulation in response to feedback (
Pearson, 2022;
Zhu & Carless, 2018). The present study empirically supports the structural linkage between a GM, feedback perception, and SRW, highlighting the need for pedagogical practices that enhance feedback interpretation, encourage dialogic feedback, and integrate strategy-based writing instruction (
Carless & Young, 2024;
Winstone et al., 2017).
Thirdly, the mediating effect of AER in the relationship between a GM and SRW ability was also found to be significant. This finding supports previous research indicating that emotion regulation has a substantial impact on learning motivation, strategy use, and task persistence (
Pekrun, 2006). In academic contexts, emotion regulation facilitates the use of higher-order strategies and sustained engagement (
Camacho-Morles et al., 2021;
Harley et al., 2019;
Karlen et al., 2021) by alleviating negative emotions and enhancing positive ones (
Efklides, 2011;
Tzohar-Rozen & Kramarski, 2014). Emotion regulation is particularly critical in writing, which is inherently cognitively and emotionally demanding (
Harley et al., 2019;
Rowe et al., 2014). Learners with stronger AER experience less writing anxiety, exhibit greater persistence, and engage more in strategic revisions (
Li et al., 2023;
L. S. Teng & Zhang, 2016). In this context, emotion regulation serves to mitigate emotions such as frustration and boredom while supporting self-efficacy and sustained task engagement (
Harley et al., 2019;
Li et al., 2023;
Yildirim & Atay, 2024). Thus, integrating emotion regulation training into writing instruction is essential. Strategies such as emotion awareness, cognitive reappraisal, and goal maintenance (
Pekrun et al., 2017) can be effectively embedded in writing courses and integrated with metacognitive approaches to further optimize learners’ SRW (
Efklides, 2011;
Gross & John, 2003;
Pekrun & Perry, 2014).
Finally, the serial dual mediation model revealed that writing feedback perception and AER significantly mediated the relationship between a GM and SRW ability. This provides empirical support for the process by which learners who view feedback positively and regulate their emotional responses tend to engage in strategic and autonomous writing behaviors (
Lipnevich et al., 2021;
Panadero et al., 2017). This finding aligns with theoretical perspectives that conceptualize SRL as an integrated process encompassing cognitive, motivational, and emotional regulation (
Pekrun, 2006;
Zimmerman, 2000). Specifically, a GM encourages learners to interpret feedback as an opportunity for development (
Dweck, 2006), which fosters positive feedback perception (
Winstone et al., 2017). In turn, this perception activates emotion regulation approaches that optimize self-regulatory capacity in writing (
Carless & Young, 2024;
Hwang, 2025). These findings highlight that learners’ responses to feedback are not merely cognitive but involve complex emotional interpretations that are reconstructed into practical learning behaviors (
Pearson, 2022;
Ryan & Henderson, 2018;
Zhu & Carless, 2018).
Educational implications point to the need for instructional interventions that go beyond the mere provision of feedback to foster learners’ interpretation, emotional regulation, and internalization of feedback into self-regulated behaviors (
Ajjawi et al., 2022;
Winstone et al., 2017). To this end, cognitive and emotional scaffolding should be utilized (
Nash & Winstone, 2017). For instance, writing instruction may incorporate activities such as exploring emotional responses after receiving feedback, feedback interpretation workshops, and dialogic feedback sessions utilizing emotion regulation approaches such as reappraisal and attentional shift. These practices can help learners reframe feedback as a resource for personal growth and support sustained SRW in emotionally secure learning environments.
These findings may appear generalizable but should be interpreted in light of the Korean cultural context of the sample. Cultural norms concerning authority, emotional expression, and academic feedback likely influenced how the learners in this study perceived and responded to feedback. In East Asian educational settings, students often display heightened sensitivity to instructor evaluations and tend to internalize feedback emotionally. These cultural dimensions should be considered when applying the findings to different populations and educational systems. The study provides evidence for a dynamic and interrelated model of writing development that integrates cognitive, emotional, and motivational processes. A GM enhances FBP, which in turn fosters emotion regulation, ultimately supporting SRW. These findings indicate the need for a shift in writing instruction from a purely strategy-based approach toward a holistic pedagogical design embracing cognitive, emotional, and cultural dimensions in support of learners’ SRW.
6. Limitations and Future Directions of This Study
Despite the insights offered by this study, future research should address several limitations. Firstly, the data were collected through self-reported questionnaires, which are inherently subject to biases such as social desirability and recall inaccuracies. Consequently, the findings should be interpreted with caution, particularly with respect to the accuracy of the participants’ cognitive and emotional processes in writing. Future research should consider incorporating behavioral measures or observational methods to capture more authentic writing-related behaviors.
Secondly, the cross-sectional design of the study limits the ability to draw definitive causal inferences among the variables. Constructs such as mindset, feedback perception, and AER are dynamic and may evolve or exert reciprocal influences over time. Longitudinal or experimental research designs are required to validate the directionality and stability of the observed relationships.
Thirdly, the sample consisted of first-year university students from a single institution in South Korea, which may restrict the generalizability of the findings to other populations or educational contexts. Cultural factors and institutional practices can significantly influence learners’ beliefs, emotional responses, and feedback orientations. The sample was obtained via convenience sampling; however, its disciplinary diversity partially mitigates concerns regarding generalizability. To enhance their representativeness, future studies should adopt stratified sampling methods that systematically account for variations in academic performance, major, and language background.
Fourthly, while this study focused on a GM, writing feedback perception, and AER, other important factors—such as writing self-efficacy, intrinsic motivation, metacognitive strategies, and external support systems—were not included in the analysis. Expanding the model to include these variables could offer a more comprehensive account of the mechanisms that contribute to SRW ability.
Fifthly, although the instruments used were validated in previous studies, they may not fully capture the nuanced and context-dependent nature of constructs like AER and feedback perception. Future research could benefit from using real-time or context-sensitive assessment tools, such as learning analytics or AI-based tracking systems, to gain more granular insights into learners’ behaviors.
Finally, this study relied solely on quantitative methods. While statistical analysis helped clarify variable relationships, it did not explore the qualitative dimensions of learners’ writing experiences, such as how they interpret feedback or manage emotional challenges during actual writing tasks. Future research should consider mixed-methods approaches, including interviews, reflective journals, or think-aloud protocols, to provide richer, more contextualized understandings of SRW processes.
7. Conclusions
The current research investigated the effects of a GM, writing feedback perception, and AER on SRW ability among university students, analyzing both direct and indirect relationships. Unlike previous studies that explored these variables in isolation, this work empirically investigated their interactions and structural relationships, thereby offering a multi-layered understanding of the formation of SRW ability. Firstly, the findings underscore the integrated role of psychological and emotional factors as key determinants of SRW ability. This emphasizes the importance of moving beyond strategy-based writing instruction to address learners’ internal mindsets, emotional responses, and feedback interpretation skills as essential components of SRW performance. Secondly, the results verified that a GM not only directly influences SRW ability but also exerts an indirect effect through writing feedback perception and AER. Specifically, the sequential mechanism between feedback perception and AER demonstrates that feedback functions not merely as an external stimulus but as a cognitive–emotional catalyst that fosters self-regulation. Thirdly, this work redefines SRW ability as a learner-centered competence integrating cognition, motivation, and emotion, rather than a mere set of strategic skills. This perspective proposes that writing instruction should move beyond skill training to a more comprehensive pedagogical design encompassing learners’ cognitive processes, emotional engagement, and self-belief. Fourthly, from an educational standpoint, the study recommends an integrated instructional framework in university writing courses that encompasses fostering a GM, refining feedback literacy, and developing emotional regulation strategies. Such a holistic approach underscores the need for long-term, systemic approaches to enhance learners’ self-directed writing abilities. Finally, by presenting an integrative model of SRW ability, this research offers a theoretical and practical foundation for future longitudinal and context-specific studies. Notably, the emphasis on the mediating role of AER contributes to scholarly attention to the emotional aspects of writing education, which have been relatively underexplored. Overall, this work offers empirical evidence that enhancing SRW ability necessitates instructional designs that simultaneously enhance learners’ growth-oriented mindset, feedback interpretation skills, and emotional regulation capacity. These results provide valuable insights for reimagining the future of university-level writing education.