Psychometric Properties of the Emotional Exhaustion Scale for Children and Adolescents (EES-CA)
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This manuscript aims to validate the Chilean Emotional Exhaustion Scale for Children and Adolescents (EES-CA), providing a psychometrically sound instrument for assessing emotional exhaustion in school-age populations. It is a generally well-structured study, with clear hypotheses and a systematic presentation of the results. In addition, the results offer insights into how emotional exhaustion can affect academic outcomes and the mental health of children and adolescents.
Although the manuscript is generally well structured, it could be improved with these recommendations:
Methodology: The authors do not mention which statistical package was used to analyze the data, nor the level of significance. It seems important to add this information.
It seems important to highlight the differences in the sample, particularly in terms of development: a 10-year-old child and an 18-year-old adolescent are at different stages of maturity and of physical, cognitive, and social development, and even school demands vary significantly with age. Consequently, emotional fatigue and symptoms of depression, anxiety, and stress manifest differently across ages, and different ages are associated with different life events and stressors, such as the academic stress experienced by children versus adolescents.
I believe it would be pertinent to include this reflection in the recommendations section for future research, or to mention it as a possible risk of bias.
Author Response
Dear Reviewer 1,
We sincerely thank you for your positive evaluation of our manuscript and your thoughtful and constructive suggestions. Your feedback has helped us to further improve the rigor and clarity of our study. Below, we respond point-by-point to each of your comments and describe the modifications made to the manuscript, which have been highlighted in yellow for ease of review.
Comment 1: The authors do not mention which statistical package was used to analyze the data, nor the level of significance. It seems important to add this information.
Response: Thank you for this important observation.
In response, we have now included explicit information in the Data Analysis section (Section 2.5) of the manuscript regarding the statistical software and significance level used. Specifically: Data analyses were performed using SPSS 28.0 for descriptive statistics, EFA, reliability analysis, and correlation calculations; and SPSS AMOS 28.0 for the CFA and multi-group analyses. The level of statistical significance was set at p < .05 for all hypothesis testing.
This addition enhances methodological transparency and aligns with best practices in psychometric reporting.
Comment 2: It seems important to highlight the differences in the sample, particularly in terms of development: a 10-year-old child and an 18-year-old adolescent are at different stages of maturity, physical, cognitive and social development, and even school demands vary significantly with age.
Response: Thank you very much for this thoughtful and important observation.
We fully agree that developmental differences across ages 10–18 could potentially impact the interpretation of emotional exhaustion scores.
To address this concern empirically, we conducted a multi-group confirmatory factor analysis (MG-CFA) to assess measurement invariance across two distinct age groups: children (10–12 years) and adolescents (13–18 years).
The results, presented in Section 3.4 of the revised manuscript, confirmed configural, metric, scalar, and strict invariance between the groups.
This indicates that the EES-CA measures the same constructs equivalently across developmental stages, and that differences in cognitive, emotional, and academic maturity do not substantially affect the psychometric properties of the scale.
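The invariance sequence described above (configural, metric, scalar, strict) is typically judged by whether model fit deteriorates as constraints are added. The analyses in the manuscript were run in SPSS AMOS, but the decision rule itself can be sketched in a few lines; the CFI values below are hypothetical placeholders, not the study's results, and the 0.01 cutoff is the commonly cited Cheung and Rensvold (2002) criterion:

```python
# Illustrative check of a measurement-invariance decision rule across
# nested MG-CFA models. All fit values here are hypothetical.

def invariance_holds(cfi_less_constrained: float,
                     cfi_more_constrained: float,
                     threshold: float = 0.01) -> bool:
    """Invariance at a step is supported when adding constraints
    reduces CFI by less than `threshold` (a common rule of thumb)."""
    return (cfi_less_constrained - cfi_more_constrained) < threshold

# Hypothetical fit indices for the four nested models
cfi = {"configural": 0.975, "metric": 0.973, "scalar": 0.970, "strict": 0.968}
steps = ["configural", "metric", "scalar", "strict"]
for prev, curr in zip(steps, steps[1:]):
    print(f"{curr}: delta-CFI ok = {invariance_holds(cfi[prev], cfi[curr])}")
```

Each step is compared only against the immediately less constrained model, since the models are nested in that order.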
Additionally, following your valuable suggestion, we have explicitly reflected on the developmental differences in the Limitations (Section 4.1) and Future Research Directions (Section 4.2) sections, highlighting the need for future studies to examine emotional exhaustion trajectories longitudinally and to tailor interventions based on developmental stages.
We sincerely appreciate this observation, which has helped us to further strengthen both the methodological rigor and the interpretative depth of our study.
Comment 3: I believe it would be pertinent to include this reflection in the recommendations section for future research, or to mention it as a possible risk of bias.
Response: Thank you for reinforcing this important point. Following your suggestion, we have incorporated a specific paragraph in the Future Research section (Section 4.2) highlighting:
The need to analyze emotional exhaustion separately for children (10–12 years) and adolescents (13–18 years) in future studies.
The importance of considering developmental stage differences (cognitive, emotional, social) when designing interventions based on EES-CA scores.
The acknowledgment that the heterogeneous age range may act as a potential confounding factor influencing the scale’s interpretation and psychometric performance.
We believe these additions enhance the depth and applicability of our study’s conclusions and demonstrate a proactive stance towards addressing sources of potential bias.
We are truly grateful for your insightful comments and valuable suggestions, which have contributed significantly to the improvement of our manuscript. We trust that the revisions made address your concerns satisfactorily and further enhance the scientific rigor and relevance of our work.
Thank you again for your time, expertise, and thoughtful evaluation.
Sincerely,
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
The EES-CA demonstrates promising psychometric properties. Addressing the concerns below—particularly regarding generalizability, methodological transparency, and theoretical grounding—would significantly enhance its validity and impact. The authors are encouraged to revise the manuscript accordingly and consider cross-cultural collaborations for future validation studies.
1. Sample Representativeness and Generalizability. The study exclusively focuses on Chilean students, limiting the cross-cultural validity of the EES-CA. Future research should validate the scale in diverse populations (e.g., different countries, socioeconomic backgrounds) to ensure its applicability beyond Chile. Cultural nuances in emotional exhaustion manifestations may affect factor structures.
2. Age-Specific Validity. The sample spans ages 10–18, but no subgroup analyses (e.g., children vs. adolescents) were conducted. Age-related differences in cognitive and emotional development could influence item interpretation. Stratified analyses by age groups are necessary to confirm the scale’s validity across developmental stages.
3. EFA and CFA Reporting Gaps. The rotated component matrix (Table 3) lacks clarity in factor loadings (e.g., thresholds for item retention) and justification for factor naming (e.g., "Scholar Stress" vs. "Emotional Fatigue"). Detailed factor loadings and theoretical alignment with existing burnout models should be explicitly discussed.
4. Model Comparison Methodology. While the bifactorial model showed better fit, the authors omitted critical statistical comparisons (e.g., Δχ² test, BIC differences). Including these would strengthen the argument for rejecting the unifactorial model. Additionally, the rationale for forcing a two-factor EFA after initially identifying one factor needs elaboration.
5. Convergent Validity Concerns. High correlations between EES-CA and DASS-21 subscales (e.g., r = 0.693 for Stress) suggest potential item overlap or common method bias. A discriminant validity analysis (e.g., comparing EES-CA with unrelated constructs) is essential to confirm the scale’s uniqueness.
6. Ethical Procedure Transparency. The ethics section lacks details on guardian consent rates, participant withdrawal processes, and how assent was obtained from minors. Clarifying these procedures would enhance reproducibility and ethical rigor, especially for vulnerable populations.
7. EFA Statistical Completeness. While KMO = 0.912 is reported, Bartlett’s test of sphericity results (χ², p-value) are missing. Providing these values is critical to justify the suitability of the data for factor analysis and strengthen methodological rigor.
8. Practical Implications Specificity. The discussion vaguely mentions "targeted interventions" without concrete examples. Linking EES-CA scores to evidence-based strategies (e.g., mindfulness programs for high Emotional Fatigue) would enhance the scale’s utility for educators and clinicians.
9. Cross-Sectional Design Limitations. The study’s cross-sectional nature precludes causal inferences about emotional exhaustion trajectories. The limitations section should explicitly address this and recommend longitudinal designs to explore temporal relationships between academic stress and mental health outcomes.
10. Language and Grammar Issues. Grammatical errors (e.g., "This emotional state compliments lessen academic productivity") detract from clarity. Professional editing is advised to ensure precise scientific communication and avoid misinterpretation.
Author Response
Dear Reviewer 2,
We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Your insightful comments and constructive suggestions have been invaluable in helping us improve the quality and clarity of our work. Below, we provide a detailed, point-by-point response to each of your comments. For each point, we describe the changes made to the manuscript and indicate where these revisions can be found. We trust that these modifications address your concerns and contribute to strengthening the manuscript.
Comment 1: The EES-CA demonstrates promising psychometric properties. Addressing these concerns—particularly regarding generalizability, methodological transparency, and theoretical grounding—would significantly enhance its validity and impact. The authors are encouraged to revise the manuscript accordingly and consider cross-cultural collaborations for future validation studies.
In response to your suggestions regarding generalizability, methodological transparency, and theoretical grounding, we have made substantial revisions to the manuscript:
- Generalizability: We have now explicitly acknowledged the limitation of the study sample, emphasizing that the findings are specific to Chilean school-aged populations. A new subsection on Future Projections (Section 4.2) has been added, proposing the need for cross-cultural validation studies to ensure broader applicability of the EES-CA in diverse sociocultural contexts.
- Methodological Transparency: We have expanded the Procedure section (Section 2.4), detailing the process of school recruitment, consent/assent procedures, exclusion criteria, and data collection methods. We have clearly distinguished the random split-sample approach employed for the EFA and CFA, specifying the sample sizes for each analysis.
- Theoretical Grounding: We have enhanced the Introduction and Discussion sections by integrating additional theoretical frameworks and empirical studies related to emotional exhaustion and burnout in school-aged populations, especially in post-pandemic contexts. In the Discussion (Section 4), we clarified how the two identified factors—Scholar Stress and Emotional Fatigue—are consistent with and extend existing models of burnout (e.g., Maslach Burnout Framework and school burnout theories). Finally, following your valuable recommendation, we have explicitly encouraged future international collaborations for the cross-validation of the EES-CA, aiming to confirm its factorial structure and psychometric robustness across different cultural settings.
Comment 2: Sample Representativeness and Generalizability. The study exclusively focuses on Chilean students, limiting the cross-cultural validity of the EES-CA. Future research should validate the scale in diverse populations (e.g., different countries, socioeconomic backgrounds) to ensure its applicability beyond Chile. Cultural nuances in emotional exhaustion manifestations may affect factor structures.
Response: Thank you for this valuable observation. We fully acknowledge that the present study is limited by its focus on a single national context (Chile), which may restrict the cross-cultural generalizability of the EES-CA. In response, we have explicitly addressed this limitation in the Limitations and Future Projections section (Section 4.1 and 4.2) of the revised manuscript.
Specifically, we now state that emotional exhaustion manifestations may vary across cultural settings, potentially impacting the factor structure and psychometric properties of the scale. Accordingly, we highlight the urgent need for cross-cultural validations of the EES-CA, involving students from different countries, regions, and socioeconomic backgrounds, to ensure the instrument’s broader applicability and robustness.
Furthermore, we propose that future studies examine the invariance of the scale across cultural groups using multi-group confirmatory factor analyses (MG-CFA) to assess whether the factorial structure is stable across diverse populations.
These modifications have been highlighted in yellow in the updated version of the manuscript for your review.
Thank you once again for your insightful suggestion, which has helped strengthen the international relevance of our work.
Comment 3: Age-Specific Validity. The sample spans ages 10–18, but no subgroup analyses (e.g., children vs. adolescents) were conducted. Age-related differences in cognitive and emotional development could influence item interpretation. Stratified analyses by age groups are necessary to confirm the scale’s validity across developmental stages.
Response: Thank you for highlighting this important point regarding the need to evaluate age-specific validity. In response, we have conducted additional analyses to address this concern. Specifically, we performed a multi-group confirmatory factor analysis (MG-CFA) to assess measurement invariance across age groups (children: 10–12 years; adolescents: 13–18 years).
The results, presented in Section 3.4 of the revised manuscript, demonstrated configural, metric, scalar, and strict invariance, indicating that the EES-CA measures the same constructs equivalently across developmental stages.
This suggests that differences in cognitive and emotional development between children and adolescents do not substantially affect the interpretation of the scale’s items or its underlying factor structure.
We agree that ensuring measurement equivalence across age groups is essential for supporting the scale’s validity, and we believe that this addition significantly strengthens the robustness and applicability of the EES-CA.
The new analyses and corresponding interpretations have been incorporated into the manuscript and highlighted in yellow for your review.
Thank you again for your insightful suggestion, which has led to an important enhancement of the study.
Comment 4: EFA and CFA Reporting Gaps. The rotated component matrix (Table 3) lacks clarity in factor loadings (e.g., thresholds for item retention) and justification for factor naming (e.g., "Scholar Stress" vs. "Emotional Fatigue"). Detailed factor loadings and theoretical alignment with existing burnout models should be explicitly discussed.
Response: Thank you for your careful review and valuable recommendations regarding the exploratory and confirmatory factor analysis (EFA and CFA) reporting. In response, we have made several improvements to enhance clarity and theoretical grounding:
- We have now explicitly stated that a standardized factor loading threshold of ≥ 0.40 was used for item retention, based on established psychometric guidelines (Hair et al., 2010; Hair Jr et al., 2020). Items slightly below this threshold were exceptionally retained when strong theoretical justification and satisfactory performance in the CFA were observed, and this rationale is now clearly described in Section 3.2.
- We expanded the description of how the two factors were named: "Scholar Stress" was chosen because items loading on this factor reflect academic-related stressors (e.g., test anxiety, workload pressure), while "Emotional Fatigue" encompasses somatic and affective symptoms of exhaustion. We have now explicitly linked these factors to the theoretical models of burnout, especially Maslach’s burnout framework (Maslach et al., 1996) and recent models of school burnout (e.g., Salmela-Aro et al., 2009; Vansoeterstede et al., 2024), reinforcing the theoretical validity of the proposed structure.
- We revised Table 3 to ensure that all factor loadings are fully reported, including cross-loadings where applicable. We also discuss the factor loadings in the text, providing a clearer interpretation of how the item content aligns with the defined latent constructs.
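The item-retention rule described above is mechanical enough to express directly. The sketch below is illustrative only: the item labels and loadings are hypothetical, and the ≥ 0.40 cutoff is the threshold the response cites from Hair et al. (2010), not the study's actual Table 3 values:

```python
# Hypothetical two-factor loading matrix; only the retention rule
# (max absolute loading >= .40) mirrors the manuscript's criterion.
LOADING_THRESHOLD = 0.40

loadings = {  # item: (loading on Factor 1, loading on Factor 2)
    "item1": (0.72, 0.10),
    "item2": (0.38, 0.05),   # falls below .40 on both factors
    "item3": (0.12, 0.65),
}

def retained_items(loadings: dict, threshold: float = LOADING_THRESHOLD) -> list:
    """Keep items whose maximum absolute loading meets the threshold."""
    return [item for item, ls in loadings.items()
            if max(abs(l) for l in ls) >= threshold]

print(retained_items(loadings))
```

In practice such a rule is applied alongside theoretical judgment, which is why the response notes that borderline items could be retained with explicit justification.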
These revisions can be found in Section 3.2 and have been highlighted in yellow in the updated manuscript.
We are confident that these enhancements provide a clearer and more theoretically grounded presentation of our EFA and CFA results, strengthening the psychometric rigor of the study.
Thank you again for your excellent suggestions, which have greatly improved the quality and precision of our reporting.
Comment 5: Model Comparison Methodology. While the bifactorial model showed better fit, the authors omitted critical statistical comparisons (e.g., Δχ² test, BIC differences). Including these would strengthen the argument for rejecting the unifactorial model. Additionally, the rationale for forcing a two-factor EFA after initially identifying one factor needs elaboration.
Response: Thank you for your insightful feedback regarding the model comparison methodology. To address your concerns, we have made the following improvements:
- We have now explicitly reported the Δχ² statistic between the unifactorial and bifactorial models, including the associated degrees of freedom and p-values (Section 3.3). Furthermore, we provided a direct comparison of BIC and AIC values between models, quantitatively supporting the superior fit and parsimony of the bifactorial solution. These comparative indices (Δχ², ΔCFI, BIC difference) are now clearly presented in Table 4 and discussed in the corresponding text, strengthening the argument for rejecting the unifactorial model.
- We have elaborated on the decision to conduct a second EFA imposing a two-factor solution. Although the initial EFA revealed a dominant first factor (Eigenvalue > 1), the explained variance (48.67%) was below the recommended threshold of 50%–60% for an adequate factor solution in psychological constructs (Lopez-Pina & Veas, 2024). Additionally, the scree plot showed a clear inflection after the second component, suggesting the presence of a meaningful secondary factor. The two extracted factors aligned better with theoretical expectations from the burnout literature, distinguishing between academic stressors and emotional manifestations of exhaustion. We have integrated this explanation into Section 3.2 to clarify the methodological and theoretical justification behind this decision.
All these modifications have been made in the revised manuscript and highlighted in yellow for your review.
Thank you again for your detailed and helpful comments, which have significantly strengthened the methodological rigor and interpretive clarity of our model comparisons.
Comment 6: Convergent Validity Concerns. High correlations between EES-CA and DASS-21 subscales (e.g., r = 0.693 for Stress) suggest potential item overlap or common method bias. A discriminant validity analysis (e.g., comparing EES-CA with unrelated constructs) is essential to confirm the scale’s uniqueness.
Response: Thank you very much for this important observation regarding the convergent and discriminant validity of the EES-CA.
In response, we took additional steps to address this concern:
We conducted a new discriminant validity analysis by correlating the EES-CA with theoretically distinct constructs: subjective wellbeing (measured by the Personal Wellbeing Index, PWI) and emotional intelligence (measured by the TMMS-24). As reported in Section 3.5 and Table 6, the EES-CA dimensions showed low and negative correlations with subjective wellbeing (r = -0.192 and r = -0.279, p < .01) and non-significant or very low correlations with emotional intelligence dimensions (r = 0.014 and r = -0.061), supporting the scale’s discriminant validity.
These new analyses and discussions have been incorporated into the manuscript (Section 3.5 and Discussion) and are highlighted in yellow for your review.
We thank you again for your insightful recommendation, which has substantially strengthened the psychometric validation of the EES-CA.
Comment 7: Ethical Procedure Transparency. The ethics section lacks details on guardian consent rates, participant withdrawal processes, and how assent was obtained from minors. Clarifying these procedures would enhance reproducibility and ethical rigor, especially for vulnerable populations.
Response: Thank you very much for this important comment concerning ethical procedures and transparency. In response, we have revised and expanded the Ethical Criteria and Procedure sections (Sections 2.4 and 2.6) to provide greater detail on the ethical protocols followed during the study:
- We clarified that a total of 621 parental consent forms were obtained from an initial pool of invited participants. Of these, 78 students were excluded because, despite having parental consent, they did not provide personal assent. Therefore, the final sample consisted of 543 students who provided both guardian consent and personal assent.
- We emphasized that students were informed verbally and in writing about their right to withdraw from the study at any time without any negative consequences. No students who initially provided assent chose to withdraw after the survey started.
- We expanded the description of how informed assent was obtained directly from the minors. Before answering the survey, researchers provided a simple and age-appropriate explanation of the study objectives, procedures, risks, and voluntary nature of participation. Only those students who actively provided verbal and written assent were included in the final sample.
- We reaffirmed that the study was approved by the Central Bioethics Committee of Universidad Andrés Bello (Approval Act 024/2022), and that all procedures conformed to the Declaration of Helsinki and to international ethical standards for research involving minors.
These improvements enhance the reproducibility, transparency, and ethical rigor of the study and have been incorporated into the manuscript with all relevant changes highlighted in yellow.
We sincerely appreciate this recommendation, as it allowed us to strengthen one of the most critical aspects of research involving vulnerable populations.
Comment 8: EFA Statistical Completeness. While KMO = 0.912 is reported, Bartlett’s test of sphericity results (χ², p-value) are missing. Providing these values is critical to justify the suitability of the data for factor analysis and strengthen methodological rigor.
Response: Thank you very much for your valuable observation regarding the completeness of the exploratory factor analysis (EFA) reporting. In response, we have added the missing information regarding the Bartlett’s test of sphericity to the revised manuscript. Specifically:
- The results now clearly indicate that Bartlett’s test was significant (χ² = 842.115, df = 45, p < .001), confirming that the correlation matrix is sufficiently different from an identity matrix and thus appropriate for factor analysis.
- We have included these values alongside the Kaiser-Meyer-Olkin (KMO) statistic in the Factor Structure section (Section 3.2) to fully justify the adequacy of the data for conducting EFA.
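Bartlett's statistic has a closed form, so the reported values are easy to sanity-check: χ² = -(n - 1 - (2p + 5)/6) · ln det(R) with df = p(p - 1)/2, and the reported df = 45 is consistent with a 10-item correlation matrix. The correlation matrix and n below are toy values, not the study's data:

```python
import numpy as np

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's test of sphericity on a p x p correlation matrix R
    from n observations. Returns (chi2 statistic, degrees of freedom)."""
    p = R.shape[0]
    chi2_stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2_stat, df

# Toy 3-variable correlation matrix (illustrative only)
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
chi2_stat, df = bartlett_sphericity(R, n=200)
print(f"chi2 = {chi2_stat:.2f}, df = {df}")
```

A near-singular det(R) (i.e., strong intercorrelations) drives the statistic up, which is exactly the condition that makes the data suitable for factor analysis.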
Including both the KMO index and the Bartlett’s test results enhances the methodological transparency and rigor of the study.
These updates have been incorporated into the manuscript and highlighted in yellow for ease of review.
Thank you again for your careful attention to methodological detail, which has contributed to strengthening the psychometric robustness of the research.
Comment 9: Practical Implications Specificity. The discussion vaguely mentions "targeted interventions" without concrete examples. Linking EES-CA scores to evidence-based strategies (e.g., mindfulness programs for high Emotional Fatigue) would enhance the scale’s utility for educators and clinicians.
Response: Thank you very much for this valuable suggestion regarding the practical implications of the EES-CA. In response, we have significantly expanded the Future Projections and Practical Implications sections of the revised manuscript (Sections 4.2 and 4.3) to provide concrete, evidence-based intervention strategies linked to EES-CA scores.
Specifically, we now state:
- For students with high Scholar Stress (SS) scores: we recommend academic skills workshops, time management training, and psychoeducational programs focused on coping with academic demands, referencing intervention studies in educational psychology.
- For students with high Emotional Fatigue (EF) scores: we recommend mindfulness-based stress reduction (MBSR) programs, relaxation techniques, and social-emotional learning (SEL) interventions, citing studies showing these approaches' effectiveness in reducing emotional exhaustion and improving resilience among adolescents (e.g., Gustafsson & Öster, 2023).
We have also emphasized that the differentiated bifactorial structure of the EES-CA allows for tailored interventions targeting the specific dimension (academic vs. emotional-somatic exhaustion), thus enhancing the scale’s practical value for school psychologists, educators, and mental health professionals.
All additions and revisions have been highlighted in yellow in the manuscript for your review.
We believe that these clarifications significantly strengthen the applied relevance and impact of the study.
Thank you once again for your thoughtful and constructive feedback.
Comment 10: Cross-Sectional Design Limitations. The study’s cross-sectional nature precludes causal inferences about emotional exhaustion trajectories. The limitations section should explicitly address this and recommend longitudinal designs to explore temporal relationships between academic stress and mental health outcomes.
Response: Thank you very much for this important and relevant observation regarding the study design.
In response, we have expanded the Limitations and Future Projections sections (Sections 4.1 and 4.2) to explicitly acknowledge that:
- The cross-sectional design of the study limits our ability to draw causal inferences regarding the development or progression of emotional exhaustion over time.
- Therefore, the results should be interpreted as indicative of associations rather than causal pathways between academic stress and emotional fatigue.
Additionally, following your suggestion, we have now recommended that future research employ longitudinal study designs to:
- Track temporal trajectories of emotional exhaustion across different developmental stages (e.g., transitions between school levels).
- Evaluate how changes in academic demands and environmental stressors affect emotional well-being and mental health outcomes over time.
- Better understand predictive relationships between Scholar Stress, Emotional Fatigue, and mental health issues such as depression, anxiety, and school disengagement.
These clarifications have been integrated into the revised manuscript and highlighted in yellow for easy review.
Thank you again for your thoughtful suggestion, which has helped to enhance the scientific rigor and clarity regarding the study’s methodological limitations.
Comment 11: Language and Grammar Issues. Grammatical errors (e.g., "This emotional state compliments lessen academic productivity") detract from clarity. Professional editing is advised to ensure precise scientific communication and avoid misinterpretation.
Response: Thank you very much for your careful review and your attention to language precision. In response to your observation, we undertook a comprehensive professional language revision of the manuscript. Specifically:
- We corrected grammatical errors, including the example you kindly pointed out ("This emotional state compliments lessen academic productivity"), which was revised to accurately state: "This emotional state reduces academic productivity."
- Additionally, minor issues related to verb agreement, article use, phrasing, and transitions were corrected to improve clarity, precision, and scientific tone throughout the manuscript.
- The entire text has been carefully edited to align with the conventions of formal academic English suitable for publication in high-impact journals.
We are confident that the quality of the writing has now been substantially improved, minimizing the risk of misinterpretation and ensuring clearer scientific communication.
The language revisions are reflected throughout the manuscript and have been marked with yellow highlights for ease of review.
Thank you again for pointing this out and contributing to the overall quality and professionalism of the work.
We would like to sincerely thank you once again for your invaluable feedback and thoughtful suggestions.
Your detailed and constructive review has significantly contributed to enhancing the methodological rigor, theoretical grounding, and practical relevance of our study.
We deeply appreciate the time, expertise, and dedication you have invested in helping us improve the quality of our manuscript.
We hope that the revisions made satisfactorily address your comments and that the updated version of our work meets the high standards expected by the journal.
Thank you very much for your time and consideration.
Author Response File: Author Response.docx
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have answered all the questions I raised in the last round. I have no new questions.