Hybrid Schooling and Reading Acquisition: Motivational, Well-Being, and Achievement Profiles in Second Grade
Melike Yumuş
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper made for an informative read, and I thank the authors for considering children's reading motivation given its obvious importance. With this in mind, I have some thoughts that I think can improve the quality of the paper. Please see my suggestions below.
Abstract
I think your total sample should be presented as N = 287 students instead of ‘overall, 287 students’.
Literature review
The section on EVT is well explained and helps to explain to the readership how it is associated with reading motivation and beliefs. Essentially, whilst EVT asks children to consider: Can I succeed in this reading task? Is it worth it? I wondered whether including a discussion on self-determination theory – e.g., instilling autonomy, building competence and relatedness, may extend this discussion area of the literature review. In my view, this section should take a critical multi-perspective of reading motivation drawing on wider literature and models.
I did wonder about the challenges of reading acquisition and its relevance to the ‘basic skills’ you suggest children ‘consolidate’ in the second grade. Whilst I think this is a fair observation, I think it could be improved to suggest that reading acquisition is developmentally a challenge for all children, and these skills may develop from the second grade. I think this adds to your argument in terms of understanding school settings ‘from the second grade’ will help to strengthen the understanding of reading motivation.
Present study
What was the sampling method you used to select the schools? And was it a deliberate choice to select low SES schools? I think the rationale for selecting these schools needs further explanation.
Methods
The low alpha scores for self-concept (α = .58) and literacy out loud (α = .44) need further investigation. I think it would be worth detailing how these interpretations were rationalised considering the results and potential limitations.
Author Response
Respond to Reviewers
Manuscript ID: education-3904980
Type of manuscript: Article
Title: Hybrid Schooling and Reading Acquisition: Motivational, Well-Being, and Achievement Profiles in Second Grade
Dear Editor,
We have attached the final version of the manuscript.
We would like to thank the reviewer once again for their valuable comments, which have helped us improve the clarity and focus of the paper.
Sincerely,
Authors
Reviewer 1
Remark 1: I think your total sample should be presented as N = 287 students instead of ‘overall, 287 students’.
Response 1: Thank you for noting this. The abstract now reports the sample as N = 287 students.
Remark 2: The section on EVT is well explained and helps to explain to the readership how it is associated with reading motivation and beliefs. Essentially, whilst EVT asks children to consider: Can I succeed in this reading task? Is it worth it? I wondered whether including a discussion on self-determination theory – e.g., instilling autonomy, building competence and relatedness, may extend this discussion area of the literature review. In my view, this section should take a critical multi-perspective of reading motivation drawing on wider literature and models.
Response 2: Thank you for this helpful suggestion. We agreed that expanding the motivational framework would strengthen the literature review. In response, we added a theoretically grounded link to self-determination theory (Ryan & Deci, 2000) to complement the expectancy–value perspective. This revision clarifies how children’s competence beliefs and motivational orientations are shaped not only by expectancy and value components but also by the extent to which learning environments support autonomy, competence, and relatedness. We also integrated SDT into the Discussion, linking motivational findings to frameworks that distinguish cognitive foundations from motivational engagement.
Remark 3: I did wonder about the challenges of reading acquisition and its relevance to the ‘basic skills’ you suggest children ‘consolidate’ in the second grade. Whilst I think this is a fair observation, I think it could be improved to suggest that reading acquisition is developmentally a challenge for all children, and these skills may develop from the second grade. I think this adds to your argument in terms of understanding school settings ‘from the second grade’ will help to strengthen the understanding of reading motivation.
Response 3: We agree that reading acquisition is developmentally demanding for most children and continues to unfold across the early elementary years. To reflect this broader developmental context while maintaining accuracy with respect to our study design, we revised the introduction to clarify that second grade represents a key stage at which many learners are still consolidating foundational reading skills (see introduction, last paragraph).
Remark 4: What was the sampling method you used to select the schools? And was it a deliberate choice to select low SES schools? I think the rationale for selecting these schools needs further explanation.
Response 4: We appreciate this comment and have clarified the sampling method and rationale in the Participants section. Schools were recruited in collaboration with the local education authority using purposive sampling. We intentionally focused on Hebrew-speaking state primary schools in the same low-SES municipality because prior work indicates that children from socioeconomically disadvantaged backgrounds are at elevated risk for language–literacy delays and may be disproportionately affected by disruptions to in-person instruction and by limited at-home support for distance learning (Arnold & Doctoroff, 2003; Betthäuser et al., 2023; Kogan & Lavertu, 2021; Wolf, 2008). Within the participating schools, all second-grade classes were included in both cohorts. These details and the rationale for focusing on low-SES schools are now stated explicitly in the Participants section.
Remark 5: The low alpha scores for self-concept (α = .58) and literacy out loud (α = .44) need further investigation. I think it would be worth detailing how these interpretations were rationalized considering the results and potential limitations.
Response 5: Thank you for raising this important point. We have now clarified in the manuscript that both the MMRP (Nevo & Vaknin-Nusbaum, 2018, 2020; Nevo et al., 2020; Vaknin-Nusbaum, 2025; Vaknin‐Nusbaum et al., 2018) and SEHS-P (Vaknin-Nusbaum & Tuckwiller, 2022) instruments were previously adapted and used in Hebrew-speaking samples in studies conducted among novice readers, demonstrating acceptable reliability in similar contexts. In those studies, Cronbach’s alpha for MMRP subscales ranged from .60 to .78, and from .67 to .82 for SEHS-P subscales. These values support the cultural appropriateness and internal consistency of the measures for Hebrew-speaking children. Nevertheless, we have also noted the somewhat lower reliability observed in the current sample within the study limitations, acknowledging this as a potential constraint on the interpretation of subscale-level findings.
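For transparency, the reliability coefficient discussed above can be computed directly from an item-score matrix. The sketch below is illustrative only: it implements the standard Cronbach's alpha formula on simulated item responses, not on the study's actual data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative only: a simulated 4-item subscale driven by one latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
scores = np.column_stack([latent + rng.normal(scale=1.0, size=300) for _ in range(4)])
print(round(cronbach_alpha(scores), 2))
```

Lower alphas, such as those observed for the two subscales above, indicate that the summed score captures proportionally less shared variance across items, which is why subscale-level findings are interpreted with caution in the limitations.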
Reviewer 2 Report
Comments and Suggestions for Authors
Dear author(s),
While the study I reviewed is valuable in demonstrating the unique effect of learning conditions during the pandemic on the reading achievement and school well-being of second-grade primary school students, it has several scientific limitations.
The lack of a statement of the research purpose/problem and the data analysis techniques used in the study in the abstract section is a significant shortcoming.
The literature review for the study lacks sufficient scope and depth. The theoretical framework is quite superficial. It does not clearly explain how the Expectancy-Value Theory and the Covitality approach, expressed in the theoretical framework, relate to reading achievement and the research problem. Furthermore, the connection established with the research's theoretical framework is also quite superficial in the discussion section. The study's unique value and contribution to the field are not clearly stated.
The research design and rationale are not clearly stated in the method section. The outdated nature of the surveys used in the study poses a significant validity and reliability problem. The very low internal consistency values in some sub-dimensions of the measurement tools in the study weaken the reliability of the analyses. Furthermore, the lack of cultural adaptations (e.g., vocabulary tests) and the lack of presentation of culturally relevant reliability calculations in most surveys constitute a significant shortcoming.
The outdated data on school indexes is a significant problem. This data needs to be supported by more up-to-date sources.
The research findings are not sufficiently linked to empirical research findings in the Discussion section.
The study's implications for practice and future research (implications and recommendations) are unclear.
In addition to these comments, there are critiques/comments on the text of the article.
The article should be re-evaluated after the necessary corrections have been made; otherwise, it is unsuitable for publication.
Author Response
Reviewer 2
Remark 1: The lack of a statement of the research purpose/problem and the data analysis techniques used in the study in the abstract section is a significant shortcoming.
Response 1: We have revised the abstract to more explicitly state both the research purpose and the main data-analytic approach. The abstract now specifies that the study compares two independent cohorts of second graders (pre- vs. during COVID-19) on reading motivation, school-related well-being, and reading achievement, and it identifies multivariate analyses of variance (MANOVAs) as the primary analytical technique used to compare cohorts and reader groups while controlling for gender, class, and school.
Remark 2: Abbreviations must be clearly stated on the first use.
Response 2: We reviewed the manuscript to ensure that all abbreviations are introduced clearly at first use.
Remark 3: The literature review for the study lacks sufficient scope and depth. The theoretical framework is quite superficial. It does not clearly explain how the Expectancy-Value Theory and the Covitality approach, expressed in the theoretical framework, relate to reading achievement and the research problem. A more comprehensive literature review and conceptual framework regarding reading motivation should be considered. The relationship between Expectancy-Value Theory (EVT) and Covitality theoretical frameworks and reading acquisition should be explained in more detail.
Response 3: Following this remark, we expanded the theoretical framework substantially. The literature review now includes: A clearer integration of expectancy–value theory with self-determination theory, a strengthened explanation of how these frameworks relate to reading motivation, covitality, and early reading acquisition, clearer links between these theories and the study’s research questions.
Remark 4: Furthermore, the connection established with the research's theoretical framework is also quite superficial in the discussion section.
Response 4: This has been fully addressed. Large portions of the Discussion were revised to deepen theoretical integration. In the revised version, motivational findings are now explicitly interpreted through EVT and SDT, well-being findings are tied more clearly to covitality theory and research on classroom climate, and reading-achievement findings are linked to models of decoding consolidation and the instructional needs of early readers. In addition, a new integrative paragraph was added that aligns the discussion with frameworks distinguishing cognitive from motivational/socio-emotional components. All changes appear in blue in the revised discussion.
Remark 5: The study's unique value, originality, and contribution to the field are not clearly stated. The theoretical or applied gaps the study fills and the new perspective it offers should be more clearly defined.
Response 5: Thank you for this important comment. In response, we revised the “Current Study” section to more clearly articulate the study’s theoretical and applied contribution. The revised text now emphasizes that most pandemic-period research has focused on achievement slowdowns, whereas far fewer studies have examined reading motivation and school-related well-being, despite their documented relevance for engagement and reading development. We also clarify that within contemporary reading frameworks, including the Active View of Reading (Duke & Cartwright, 2021), motivational and socio-emotional experiences are recognized as important components of the reading process, yet their sensitivity to instructional disruptions has rarely been investigated.
To address this gap, the revised section explains that the present study offers novel evidence from low-SES second graders and provides an integrated examination of motivation, school-related well-being, and foundational reading skills within the same instructional contexts. We also highlight the unique contribution of distinguishing typical and poor readers, given evidence that these groups may differ in motivational and emotional experiences. These additions more explicitly define the conceptual gap and clarify the study’s unique contribution.
Remark 6: The research design and rationale are not clearly stated in the method section.
Response 6: To clarify the design and its rationale, we have added a short “study design” subsection at the beginning of the Method section. In this subsection, we explicitly describe the study as a cross-sectional cohort-comparison design involving two independent cohorts of second-grade students from the same four low-SES schools. We specify that one cohort completed first grade under continuous face-to-face instruction (before COVID-19), whereas the other learned to read under a hybrid instructional setting combining distance online learning with intermittent, restricted in-person schooling (during COVID-19). We also note that cohorts were not randomly assigned but were determined by school year, reflecting naturally occurring variation in instructional modality, and that this design allowed us to compare motivation, school-related well-being, and reading achievement while holding schools, grade level, language of instruction, and assessment tools constant.
Remark 7: The outdated nature of the surveys used in the study poses a significant validity and reliability problem. The very low internal consistency values in some sub-dimensions of the measurement tools in the study weaken the reliability of the analyses.
Response 7: Thank you for raising this important point. We have now clarified in the manuscript that both the MMRP (Nevo & Vaknin-Nusbaum, 2018, 2020; Nevo et al., 2020; Vaknin-Nusbaum, 2025; Vaknin‐Nusbaum et al., 2018) and SEHS-P (Vaknin-Nusbaum & Tuckwiller, 2022) instruments were previously adapted and used in Hebrew-speaking samples in studies conducted among novice readers, demonstrating acceptable reliability in similar contexts. In those studies, Cronbach’s alpha for MMRP subscales ranged from .60 to .78, and from .67 to .82 for SEHS-P subscales. These values support the cultural appropriateness and internal consistency of the measures for Hebrew-speaking children. Nevertheless, we have also noted the somewhat lower reliability observed in the current sample within the study limitations, acknowledging this as a potential constraint on the interpretation of subscale-level findings.
Remark 8: The lack of cultural adaptations (e.g., vocabulary tests) and the lack of presentation of culturally relevant reliability calculations in most surveys constitute a significant shortcoming.
Response 8: The vocabulary and reading tests were drawn from the ELUL battery (Shatil et al., 2007), a standardized diagnostic tool developed and normed for Hebrew-speaking children and used to identify students performing below expected levels. This battery has been validated in second-grade samples and widely used in studies of early reading development among Hebrew-speaking primary-grade students, so it reflects the basic vocabulary knowledge expected of L1 speakers. Both questionnaires have also been used previously in studies conducted in Hebrew. This information has been added to the revised manuscript.
Remark 9: The outdated data on school indexes is a significant problem. This data needs to be supported by more up-to-date sources.
Response 9: We thank the reviewer for this comment and have updated the description of the schools’ socioeconomic context in the Participants section. In addition to the socioeconomic information drawn from the Central Bureau of Statistics (2021) and the Nurture Decile Index reported by Meida La’am (2017), we have now incorporated updated and more current data from the National Authority for Measurement and Evaluation in Education (RAMA, 2024). This addition provides a more precise and up-to-date classification of the participating schools’ socioeconomic context.
Remark 10: The research findings are not sufficiently linked to empirical research findings in the Discussion section.
Response 10: We addressed this extensively throughout the revised discussion. We added specific empirical anchors for motivation, covitality, hybrid learning, decoding development, and learning ecology. All additions and changes are marked in blue in the discussion.
Remark 11: The study's implications for practice and future research (implications and recommendations) are unclear.
Response 11: The implications section was revised. It now connects directly to covitality theory, offers specific, research-grounded instructional suggestions, distinguishes where hybrid learning can and cannot substitute for face-to-face decoding instruction, identifies implications specifically for low-SES schools and clarifies directions for future research.
See second paragraph from the end.
The limitations section has now been expanded to include all requested issues: group size imbalance, cross-sectional design and lack of longitudinal data, and teacher-related instructional variation.
The following remarks appeared in the manuscript:
Remark 12: The reviewer commented that overly long sentences (School-Related Well-Being as Covitality section, end of the first paragraph) make the text difficult to understand.
Response 12: Thank you for this helpful observation. We revised the relevant sentences to improve clarity and readability while preserving the theoretical meaning.
Remark 13: The reviewer asked us to clarify this sentence: “Early studies often focused on single attributes such as mindfulness, gratitude, or life satisfaction (Renshaw et al., 2015).” Which studies?
Response 13: Thank you for pointing this out. The original phrasing implied a broader body of evidence than the single citation supported. We revised the sentence to accurately reflect the scope of the cited work.
Remark 14: The reviewer mentioned that a more comprehensive description of covitality is needed, and the relationship between covitality and reading acquisition should be detailed.
Response 14: Thank you for this helpful observation. We expanded the covitality section to provide a clearer and more comprehensive description of its components and theoretical foundations. We also clarified its relevance for early reading development. Specifically, we added a description of how covitality-related assets (e.g., optimism, persistence) are linked with engagement and adaptive learning behaviors, and how these socio-emotional strengths can support children’s participation in literacy tasks. These additions appear in the manuscript in bold within the covitality section of the literature review.
Reviewer 3 Report
Comments and Suggestions for Authors
The study's topic is highly interesting and relevant. Thank you for the opportunity to review this manuscript.
- Regarding the abstract, the use of assessment tool abbreviations in parentheses (e.g., MMRP, SEHS-P) may cause confusion for readers. It may be clearer to integrate these terms into the text without parentheses or to explain them elsewhere in the manuscript.
- Some expressions are not clear, please check and correct them “COVID-19 altered early literacy instruction, yet evidence on socioemotional and motivational correlates remains mixed”
Method
- It is not fully clear which groups the four schools represent. For example, does the sample consist of three schools assessed before COVID-19 and one school assessed during COVID-19? A clearer explanation of how the schools were grouped would improve the readers’ understanding.
- In addition, providing the mean age (and SD) for each group would make the comparison more transparent.
- Please specify the academic years in which the assessments took place (i.e., the cohort timing). The statement “All assessments were delivered at the beginning of second grade (October–November)” would be clearer if the exact years were provided.
- Regarding measures - In the section describing student covitality, it is not necessary to present all subscales of the original instrument if they were not included in the current study. Please reconsider whether the reference to the unused “prosocial behavior” subscale is needed, as it interrupts the flow. If you choose to justify why only four subscales were used, please provide an appropriate citation to support the decision.
- The abbreviations do not need to be repeated every time they appear in the text. Introducing each abbreviation once, followed by consistent use of the abbreviated form, would be sufficient and would improve readability.
- It may be clearer to group these assessments under a broader label such as “Literacy Measures” rather than “Standardized Diagnostic Reading Measures.” This would allow the subtests (word identification, decoding, vocabulary, comprehension) to be presented more coherently as literacy components.
- Key MANOVA assumptions are not fully addressed. In particular, there is no information on (a) Box’s M test for homogeneity of covariance matrices and (b) multicollinearity among the dependent variables. Reporting these would improve the transparency and robustness of the analytical approach.
- K-means clustering is a reasonable approach, but please clarify why k = 2 was chosen, whether the clustering solution was validated, and whether alternative classification methods (e.g., percentile cutoff or latent profile analysis) were considered.
- Why was profile analysis (e.g., latent profile analysis) not considered? Given the sample size (N = 287), a 2- or 3-class model would be statistically supported and provide a more robust categorization.
- Regarding your main analysis, I'm a little skeptical of these analyses. Please repeat them. The reported results do not reflect a MANOVA but rather a series of separate ANOVAs?
- The introduction and conclusion are well structured, but both sections contain unnecessary repetition. The analytical section requires revision. Additionally, the limitations section omits several major constraints, including unbalanced group sizes, the absence of longitudinal data, and teacher effects, all of which need to be explicitly acknowledged. There is also no multicollinearity information.
Author Response
Reviewer 3
Remark 1: The reviewer mentioned that in the abstract, the use of assessment tool abbreviations in parentheses (e.g., MMRP, SEHS-P) may cause confusion for readers.
Response 1: Following this suggestion, we revised the abstract accordingly.
Remark 2: The reviewer asked to correct the unclear sentence that appeared in the abstract: “COVID-19 altered early literacy instruction, yet evidence on socioemotional and motivational correlates remains mixed”.
Response 2: Following this request, the sentence was revised.
Remark 3: The reviewer mentioned that it is not fully clear which groups the four schools represent. For example, does the sample consist of three schools assessed before COVID-19 and one school assessed during COVID-19?
Response 3: Thank you for pointing out the need for clarity in how schools and classes were grouped. We revised the participant description to make explicit that the same four schools contributed second-grade classes to both cohorts. Each cohort consisted of all second-grade classes in those schools during the year of testing (seven classes per cohort). This clarification now appears in the revised method section in blue font.
Remark 4: In addition, providing the mean age (and SD) for each group would make the comparison more transparent.
Response 4: The mean age and standard deviation for each group have now been added to the manuscript (before the COVID-19 pandemic: M = 7.47, SD = .36; and during the pandemic: M = 7.50, SD = .32).
Remark 5: Please specify the academic years in which the assessments took place. Cohort times please. The statement “All assessments were delivered at the beginning of second grade (October–November)” would be clearer if the exact years were provided.
Response 5: Thank you for this helpful suggestion. We clarified the cohort timing by explicitly adding the academic years for each group.
Remark 6: Regarding measures - In the section describing student covitality, it is not necessary to present all subscales of the original instrument if they were not included in the current study. Please reconsider whether the reference to the unused “prosocial behavior” subscale is needed, as it interrupts the flow. If you choose to justify why only four subscales were used, please provide an appropriate citation to support the decision.
Response 6: Thank you for this helpful comment. In response, we revised the paragraph to remove unnecessary detail about the unused “prosocial behavior” subscale and to improve the flow of the description. We now specify that only four subscales were administered and clarify that this choice aligns with prior research identifying these four traits as reliably clustering into a higher-order covitality factor in young children. Relevant citations have been added to support this decision, and the revised text appears in the covitality subsection of the measures section.
Remark 7: The abbreviations do not need to be repeated every time they appear in the text. Introducing each abbreviation once, followed by consistent use of the abbreviated form, would be sufficient and would improve readability.
Response 7: Thank you. This was corrected throughout the manuscript.
Remark 8: It may be clearer to group these assessments under a broader label such as “Literacy Measures” rather than “Standardized Diagnostic Reading Measures.” This would allow the subtests (word identification, decoding, vocabulary, comprehension) to be presented more coherently as literacy components.
Response 8: Thank you for this suggestion. We agree that grouping the subtests under a broader, conceptually unified label improves clarity. Accordingly, we revised the section heading to “Literacy Measures” and streamlined the presentation of each subtest to emphasize their role as components of early literacy.
Remark 9: Key MANOVA assumptions are not fully addressed. In particular, there is no information on (a) Box’s M test for homogeneity of covariance matrices and (b) multicollinearity among the dependent variables. Reporting these would improve the transparency and robustness of the analytical approach.
Response 9: We appreciate this important comment and have revised the Statistical Analyses, Results, and Limitations sections to more clearly document our checks of MANOVA assumptions. First, to evaluate multicollinearity among the dependent variables, we computed Pearson correlation matrices within each MANOVA domain (motivation, school-related well-being, and reading achievement). All intercorrelations were below .90, indicating no multicollinearity concerns and supporting the use of MANOVA. Second, we examined homogeneity of covariance matrices using Box’s M test for each MANOVA. Because Box’s M is known to be sensitive to minor deviations from homogeneity, particularly with unequal group sizes, we based inference on Pillai’s trace, which is robust under such conditions. We now explicitly state that Pillai’s trace served as the multivariate omnibus criterion for the effects of time, reader group, and their interaction, and that adjusted univariate tests were examined only following significant multivariate effects. In addition, the Results section now reports Pillai’s trace, F, p, and partial η² for each MANOVA, followed by the adjusted univariate follow-up tests summarized in Table 1.
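For transparency, the multivariate omnibus criterion described above can be computed from the between-group (H) and within-group (E) sums-of-squares-and-cross-products matrices as V = tr[H(H + E)⁻¹]. The sketch below is illustrative only, using simulated two-cohort data rather than the study's.

```python
import numpy as np

def pillai_trace(X, groups):
    """Pillai's trace V = tr[H (H + E)^-1] for a one-way MANOVA.

    X is an (n x p) matrix of dependent variables; groups holds n labels.
    """
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    H = np.zeros((p, p))  # between-group (hypothesis) SSCP matrix
    E = np.zeros((p, p))  # within-group (error) SSCP matrix
    for g in np.unique(groups):
        Xg = X[groups == g]
        d = Xg.mean(axis=0) - grand_mean
        H += len(Xg) * np.outer(d, d)
        resid = Xg - Xg.mean(axis=0)
        E += resid.T @ resid
    return float(np.trace(H @ np.linalg.inv(H + E)))

# Illustrative only: two cohorts (n = 150 and n = 137) on two simulated DVs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(150, 2)),
               rng.normal(0.4, 1.0, size=(137, 2))])
labels = np.array([0] * 150 + [1] * 137)
V = pillai_trace(X, labels)
print(round(V, 3))
```

Because V aggregates the discriminant-variate eigenvalues rather than relying on a single dominant root, it degrades gracefully when covariance matrices differ somewhat across groups, which is the standard justification for preferring it under unequal group sizes.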
Remark 10: K-means clustering is a reasonable approach, but please clarify why k = 2 was chosen, whether the clustering solution was validated, and whether alternative classification methods (e.g., percentile cutoff or latent profile analysis) were considered.
Why was profile analysis (e.g., latent profile analysis) not considered? Given the sample size (N = 287), a 2- or 3-class model would be statistically supported and provide a more robust categorization.
Response 10: Thank you for your careful attention to the clustering procedure. We have clarified the rationale for selecting k = 2 in the manuscript. This choice reflects both theoretical expectations about early Hebrew reading development and the observed distribution of students’ reading scores. K-means clustering was used because it iteratively minimizes within-cluster variance and yields clearly separated groups without relying on arbitrary cutoffs. Word-level indicators (orthographic word identification and phonological decoding) were chosen based on their foundational role in early reading acquisition.
The resulting clusters differentiated students performing below the 30th percentile from those in the typical range, aligning with developmental norms and Hebrew literacy screening practices. These clarifications are now marked in bold in the revised Methods section.
Remark 11: Regarding your main analysis, I'm a little skeptical of these analyses. Please repeat them. The reported results do not reflect a MANOVA but rather a series of separate ANOVAs?
Response 11: We appreciate the opportunity to clarify our analytical approach. As now described more explicitly in the Statistical Analyses and Results sections, we conducted three between-subjects MANOVAs, one for each construct domain (motivation, school-related well-being, and reading achievement), with time (before vs. during COVID-19) and reader group (typical vs. poor readers) as fixed factors and gender, class, and school as controls. Pillai’s trace was used as the multivariate omnibus statistic, and the Results section now reports the corresponding Pillai’s trace, F, p, and partial η² values for each MANOVA. The F values reported in Table 1 are adjusted univariate follow-up tests from these MANOVA models rather than separate stand-alone ANOVAs, and we have revised the table title and note to make this explicit.
Remark 12: The introduction and conclusion are well structured, but both sections contain unnecessary repetition.
Response 12: We have reviewed both the introduction and the concluding section of the discussion for redundancy. We removed overlapping sentences and consolidated repeated statements. In the concluding paragraphs, we reduced repetition of the main findings and streamlined the implications to avoid restating points made earlier in the discussion.
Remark 13: The analytical section requires revision.
Response 13: We have revised the Statistical Analyses and Results sections to clarify the analytic strategy. The revised text now specifies that three two-way between-subjects MANOVAs were conducted (one for each domain), reports the multivariate Pillai’s trace statistics (including F, p, and partial η²), and clearly distinguishes these multivariate tests from the adjusted univariate follow-up F tests summarized in Table 1.
Remark 14: Additionally, the limitations section omits several major constraints, including unbalanced group sizes, the absence of longitudinal data, and teacher effects, all of which need to be explicitly acknowledged.
Response 14: These items have now been integrated into the limitations section in the revised manuscript (see second paragraph from the end - appear in blue font).
Remark 15: And also, there is no multicollinearity information.
Response 15: Thank you for raising this point. To assess multicollinearity among the dependent variables, we computed Pearson correlation matrices for each MANOVA domain. Across all analyses (motivation, well-being, achievement), the correlations between dependent variables were below .90, indicating acceptable levels of interdependence and no issues of multicollinearity. These results support the use of MANOVA as an appropriate analytic strategy for our data.
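For transparency, the multicollinearity screen described above amounts to inspecting the pairwise Pearson correlation matrix of the dependent variables within each domain and flagging any pair at or above the .90 criterion. The sketch below is illustrative, with simulated subscale scores and invented names.

```python
import numpy as np

def flag_collinear(dvs, threshold=0.90):
    """Return DV pairs whose |Pearson r| meets the collinearity threshold."""
    names = list(dvs)
    R = np.corrcoef(np.vstack([dvs[n] for n in names]))  # rows = variables
    return [(names[i], names[j], round(float(R[i, j]), 2))
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if abs(R[i, j]) >= threshold]

# Illustrative only: two nearly redundant subscales plus one distinct subscale.
rng = np.random.default_rng(3)
base = rng.normal(size=200)
dvs = {"subscale_a": base + rng.normal(scale=0.1, size=200),
       "subscale_b": base + rng.normal(scale=0.1, size=200),
       "subscale_c": rng.normal(size=200)}
print(flag_collinear(dvs))
```

An empty list from such a screen, as in our data, indicates that the dependent variables are distinct enough for a multivariate model to be informative.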
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
Many thanks to the authors for their review performance.
