Article
Peer-Review Record

The Effect of Growth Mindset Interventions on Students’ Self-Regulated Use of Retrieval Practice

Educ. Sci. 2025, 15(10), 1267; https://doi.org/10.3390/educsci15101267
by Jingshu Xiao 1,*, Martine Baars 2, Kate Man Xu 3 and Fred Paas 1,4
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 25 June 2025 / Revised: 4 September 2025 / Accepted: 18 September 2025 / Published: 23 September 2025
(This article belongs to the Section Education and Psychology)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Summary and Evaluation

The present study examined whether the likelihood of learners engaging in retrieval practice was sensitive to a growth mindset intervention. First-year psychology students received either a general growth mindset (GGM) intervention regarding the malleability of intelligence or a specific growth mindset (SGM) intervention about self-regulated learning (SRL), or read an article about the function of the brain (control condition). All participants then received instructions extolling the benefits of retrieval practice. Next, in the learning phase, participants studied 32 anatomical image-name pairs in blocks of 4. After two exposures to each pair in each block, participants chose whether they wanted to restudy or be tested on the pairs. Following a rating of mental effort and a brief filler task, participants took a cued recall test on half of the pairs. One week later, all participants were tested on the remaining half of the pairs and were also queried on their beliefs about retrieval practice, growth mindsets, and SRL. Results showed that participants in the GGM condition showed greater gains in growth mindset than those in the SGM or control condition, although differences in gains in SRL mindset were not as substantial. However, there were no differences among conditions in beliefs about retrieval practice or in the likelihood of choosing retrieval practice. Most importantly, there was no omnibus difference in immediate recall across conditions, although there was a trend for the SGM intervention to show superior recall performance.

Overall, this was an interesting approach to considering whether the likelihood of engaging in retrieval practice may be modified by a growth mindset intervention. The most striking result was that although there was some movement in general or specific mindset beliefs based on the intervention, there was little impact on the probability of engaging in retrieval practice or, most importantly, on learning outcomes on the immediate or delayed test (although see point 8 below on the SGM group relative to the GGM group). Perhaps this is to be expected. For example, the two most substantial meta-analyses of growth mindset interventions on academic achievement (Sisk et al., 2018; Macnamara & Burgoyne, 2023) have suggested that such interventions have minimal impacts on academic achievement. A subsequent meta-analysis (Burnette et al., 2023) using multi-level modeling was highly critical of Macnamara and Burgoyne but still returned a relatively modest effect size (d = .14) on academic achievement. The current study is not concerned with broad academic achievement, focusing primarily on the effect of an intervention on a paired-associate learning task, but it may help to contextualize the results by heeding the lessons of this literature (see also Yan & Schuetze, 2023). In that sense, the current work provides a small-scale exploration of an important topic and warrants publication after revision. In addition to incorporating this broader work on the mindset literature in a revision, I provide several other minor suggestions as follows.

Other Points to Consider    

  1. Please provide additional information regarding the sampling decisions with respect to power. For example, given the small effect sizes yielded by the prior literature, why was a medium effect size (d = .50) assumed?
  2. Building on point 1, what specific test was the basis for the power analysis? (I assume a one-way ANOVA with 3 groups, consistent with my own replication of the power analysis in G*Power, but this is not indicated; see the sketch after this list.)
  3. Please consider citing Kornell and Son (2009) as one of the earliest papers examining choices regarding self-testing vs. restudy.
  4. Does Table 1 show poorer pre-existing anatomy knowledge among the GGM group? If so, do comparisons on immediate recall performance change when this is held as a covariate?
  5. I want to thank the authors for their careful design in testing unique items for the immediate and delayed test. Was assignment to immediate vs. delayed test randomized or counterbalanced?
  6. Were the default priors used for Bayesian analyses in JASP?
  7. Please provide effect sizes for all statistical tests and interpret results in light of those effect sizes.
  8. The text indicates that, “The Bonferroni Post Hoc analysis indicated that participants in the SGM condition achieved higher accuracy compared to those in the GGM condition (p = .05)” but then suggests caution in interpreting this finding. There are two issues here. First, given the inclusion of Bayesian statistics, I did not fully understand the rigid adherence to an alpha level in NHST. Second, a calculation shows that the difference between these two conditions yielded a medium effect size (d = .6075). Does this suggest a benefit of the SGM intervention?
  9. Please indicate which analyses were planned and which were exploratory. In addition, were analyses pre-registered?
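
For reference, the sketch below reproduces the kind of power analysis raised in points 1 and 2: a one-way ANOVA with three groups, alpha = .05, power = .80, and the assumed medium effect d = .50 converted to Cohen's f via the conventional f = d/2. The tool choice (Python with statsmodels) and these input values are my own assumptions for illustration, not the authors' procedure.

```python
from statsmodels.stats.power import FTestAnovaPower

# Assumed inputs (not taken from the manuscript): alpha = .05, power = .80,
# and the assumed medium effect d = .50 converted to Cohen's f via f = d / 2.
d = 0.50
f = d / 2

n_total = FTestAnovaPower().solve_power(
    effect_size=f,   # Cohen's f
    k_groups=3,      # GGM, SGM, control
    alpha=0.05,
    power=0.80,
)
print(f"Required total N for a three-group one-way ANOVA: {n_total:.0f}")
```

Whether inputs like these recover the planned sample size is exactly what point 2 asks the authors to spell out.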

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

I would like to thank the editor of Education Sciences for this opportunity to review the manuscript, The Effect of Growth Mindset Interventions on Students’ Self-Regulated Use of Retrieval Practice. Overall, this manuscript is well written, its sections are well organized, and the content of each section is adequate. Below are some questions and suggestions that I hope the authors can address.

  • In section 3.2.4, Retrieval practice instruction, the authors noted that all participants, regardless of their assignment to experimental conditions, were introduced to retrieval practice and the benefit of using this strategy. This, I think, contributes to the nonsignificant comparisons regarding retrieval practice. Later, in the discussion section, the authors address this as well, which I appreciate. However, I do not quite grasp why an introduction to retrieval practice for participants across all research groups needed to be incorporated in the experiment. Relatedly, given that retrieval practice is a major outcome of this study, why not collect a baseline measure of it?
  • For the measures, are those used to assess mental effort (section 3.3.3.) and retrieval practice beliefs (section 3.3.5.) based on a single item? If they are not single-item measures, elaborating on the length of each measure, the scoring of responses, and the psychometric evidence for each is critical. If they are single-item measures, then the analyses for both outcome variables seem problematic because the data are ordered categorical, whereas ANOVA is more appropriate for continuous dependent variables.
  • Data analysis (section 5): as the authors stated, both a frequentist one-way ANOVA and a Bayesian one-way ANOVA were used to analyze the data. What are the reasons for analyzing the data twice with the same model (i.e., a one-way ANOVA) but different approaches (i.e., frequentist and Bayesian)? If the authors believe that one approach can complement the other in any fashion, then they should justify this in this section.
  • Concerning the results (section 6), the authors conducted a series of ANOVAs, and I think including a table to summarize the findings would make them more intuitive to follow.
  • Results, section 6.2., the authors stated that “On average, participants in the GGM condition choose to use retrieval practice 6 times out of 8.” Please clarify in the manuscript what the numbers refer to. Also, the averages for SGM and Control conditions are not given. Is there a specific reason for that?
  • Limitation (section 8), lines 553-557. The authors stated the following: “Additionally, the revised Implicit Theories of SRL Scale (cf. Hertel & Karlen, 2021) used in our study had a relatively low stability (the Cronbach's alpha of the pre-intervention data was 0.63; the Cronbach's alpha of the post-intervention data was 0.65), which may lead to unstable results on the change of participants’ SRL growth mindset after intervention.” This statement is not quite accurate because Cronbach’s alpha is a reliability index of internal consistency (i.e., associations of the items given the latent trait they measure). One may compute a correlation between different time points to examine stability, although a more rigorous approach is to assess longitudinal measurement invariance; see the sketch after this list.
  • For some reason, I cannot find the appendices that the authors referenced in the manuscript; as a result, I could not see the details on the materials and measures, and my feedback may be limited to some extent.
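
To illustrate the distinction raised in the point on Cronbach's alpha above: alpha summarizes how consistently the items of a scale hang together on one occasion, whereas stability concerns how scores relate across occasions. The sketch below uses simulated Likert-type data (all names, sample sizes, and values are hypothetical; Python is used purely for illustration).

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a multi-item scale (rows = respondents, columns = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=80)  # hypothetical latent mindset level per respondent

def simulate_scale(trait, noise_sd=0.8):
    """Simulate four Likert-type items (1-6) driven by the same latent trait."""
    raw = 3.5 + trait[:, None] + rng.normal(scale=noise_sd, size=(trait.size, 4))
    return np.clip(np.round(raw), 1, 6)

pre, post = simulate_scale(trait), simulate_scale(trait)

print("alpha (pre):", cronbach_alpha(pre))    # internal consistency at time 1
print("alpha (post):", cronbach_alpha(post))  # internal consistency at time 2
# Stability is a different property: how strongly scores relate across occasions
print("pre-post r:", np.corrcoef(pre.sum(axis=1), post.sum(axis=1))[0, 1])
```

The point is simply that the two quantities answer different questions, so alpha alone cannot be read as evidence of low (or high) stability.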

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

I thank the editor again for this opportunity to review this manuscript. Compared to the initial submission, the authors have enhanced the manuscript’s clarity in this version. Overall, the content reads well, but I suggest that the authors consider the following points before publication.

When reporting statistics such as Bayes factors and Cronbach’s alpha (e.g., Page 7, line 295), ensure that the number of digits after the decimal point is consistent across the manuscript.

Acronyms (e.g., Page 10, line 379: BOA) should be written out at their first appearance in the manuscript.

Page 12. It appears that the ANCOVA was newly added to the resubmission (I compared this version with the initial submission, in which this analysis was not included). However, this analytical plan (i.e., the ANCOVA) was not described in the Data Analysis section in relation to the research question or hypothesis it aims to address. Please update.

Author Response

We sincerely thank the reviewer for the careful reading of our manuscript and for the constructive comments that helped us improve the clarity and rigor of the paper. We have addressed all points as follows:

  1. Statistical reporting
    We revised the reporting of Bayes factors, Cronbach’s alpha, and other statistics to ensure consistency across the manuscript. Specifically, values greater than 0.01 are reported to two decimal places, while values smaller than 0.01 are reported to three decimal places.
  2. Acronyms
    We revised the manuscript so that all acronyms are spelled out upon first appearance. For example, “Bayesian one-way ANOVA (BOA)” is now introduced in the Data Analysis section before the abbreviation “BOA” is used. (Page 9, Line 346)
  3. ANCOVA analytical plan
    We updated the Data Analysis section to include the ANCOVA. Specifically, we added the following sentence:

“In addition, to account for the potential influence of pre-existing anatomy knowledge on immediate learning performance, an analysis of covariance (ANCOVA) was conducted with prior knowledge of anatomy as a covariate.” (Page 9, Line 363-365)

This addition ensures that the rationale and procedure for the ANCOVA are clearly described and aligned with the results section.
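
For reference, the added analysis corresponds to a model of the following general form (a minimal sketch only; the file name and the columns recall, condition, and prior_knowledge are hypothetical placeholders, and Python with statsmodels is used purely for illustration rather than being the software used in the study).

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per participant; column names are hypothetical placeholders:
#   recall          - immediate cued-recall accuracy
#   condition       - "GGM", "SGM", or "Control"
#   prior_knowledge - pre-existing anatomy knowledge score
df = pd.read_csv("immediate_test.csv")  # hypothetical file name

# ANCOVA: test the condition effect while adjusting for prior anatomy knowledge
model = smf.ols("recall ~ C(condition) + prior_knowledge", data=df).fit()
print(anova_lm(model, typ=2))
```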

 

We hope these revisions satisfactorily address the reviewer’s concerns, and we thank you again for the valuable feedback.
