Evaluative Judgment: A Validation Process to Measure Teachers’ Professional Competencies in Learning Assessments
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper is of interest and uses appropriate research methodology. The following recommendations are offered for improvement:
1. Make the paragraphs shorter, as they are sometimes too dense, making it difficult to read and follow the argument.
2. Include a table of achievements for the proposed research objectives in the conclusions section.
After reviewing the paper, the text is considered suitable for publication, although it would be of higher quality if the aforementioned recommendations were incorporated. Thank you very much and have a nice afternoon.
Specific Comments:
- Main Question Addressed by the Research
The research addresses the validation of an instrument called "Classroom Evaluative Judgment" designed to measure teachers' professional competencies in assessing their students' learning. The main question is: How can an instrument that measures teachers' evaluative competencies in the context of their professional practice be validated?
- Is the Topic Original or Relevant to the Field? Does it Address a Specific Gap in the Field? Explain Why.
The study question is considered relevant and addresses a specific gap in the field of educational assessment.
The research responds to the need for valid and reliable instruments to measure teachers' evaluative competencies, an area identified as lacking adequate tools. The originality lies in the proposal of an instrument that not only measures technical competencies but also socio-emotional and contextual aspects of teachers' evaluative practice.
- What Does It Contribute to the Thematic Area Compared to Other Published Material?
The article contributes a validated instrument that integrates theoretical and practical dimensions of learning assessment, which can be considered a significant advancement compared to other studies that have presented instruments with weak psychometric evidence. Additionally, the focus on teachers' evaluative identity and the inclusion of socio-emotional and contextual dimensions represent a novel and valuable contribution to the field.
- What Specific Improvements Should the Authors Consider Regarding the Methodology?
Suggested improvements to the research methodology include expanding the sample. A more diverse sample that includes teachers from different educational levels and geographical contexts is recommended to increase the generalizability of the results. Furthermore, the study could be deepened by incorporating qualitative analyses that complement the quantitative data and provide a fuller understanding of teachers' perceptions and experiences.
Additionally, cross-validation studies could be conducted in different educational contexts to confirm the validity of the instrument.
- Are the Conclusions Consistent with the Evidence and Arguments Presented and Do They Address the Main Question Raised? Explain Why.
The conclusions are consistent with the evidence and arguments presented. The article demonstrates, through statistical analyses, that the instrument has high validity and reliability for measuring teachers' evaluative competencies. Moreover, the conclusions address the main question by confirming the usefulness of the instrument in real contexts of professional practice.
- Are the References Appropriate?
The references are appropriate and cover a wide range of relevant studies in the field of educational assessment and teacher professional development. The authors have cited fundamental and recent works that support the theory and methodology used in the research.
- Any Additional Comments on Tables and Figures?
The tables and figures are clear and well-organized, providing detailed information on the results of the statistical analyses. However, the visual presentation of some tables could be improved to facilitate data interpretation, especially in the results section.
Conclusion
The article presents solid and well-structured research on the validation of an instrument to measure teachers' evaluative competencies. The relevance of the topic, the originality of the approach, and the consistency of the results make this study a valuable contribution to the field of educational assessment. The proposed methodological recommendations can help further strengthen the research in future applications of the instrument.
Author Response
Dear reviewer, after reviewing your comments, I would like to respond to the points you have raised.
1. Shorten the paragraphs, as they are sometimes too dense, making it difficult to read and follow the argument.
Response: The suggestion has been adopted; the paragraphs have been shortened to improve the clarity and readability of the theoretical discussion.
2. Include a table of achievements for the proposed research objectives in the conclusions section.
Response: The suggestion has been adopted; in the conclusions we return to the objective of the article to provide greater clarity and consistency.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Interest. Assessment is a current topic in teaching and in teaching-learning processes. This article focuses on the Evaluative Judgment in the Classroom instrument from a quantitative perspective. However, a review of all sections would be necessary to enrich them with theories and ideas that help deepen the treatment of the competency of assessing students.
Significance. In the introduction, methodology, results, discussion, and conclusions sections, the narrative content of the paragraphs is repeated, creating a repetitive effect and impeding progress in the theoretical development (introduction) and in the description of the research findings (discussion and conclusions).
It is necessary to describe in text the six dimensions and corresponding indicators that comprise the Evaluative Judgment in the Classroom instrument, establishing connections between the theoretical foundation and the statistical data obtained in the research.
Merit. The Evaluative Judgment in the Classroom instrument is validated as an assessment construct; however, it is necessary to include the theoretical foundation based on Wyatt-Smith, Adie, and Harris's (2024) three dimensions, the six dimensions, and the indicators.
Methodology. Is the question original and well-defined? Do the results advance current knowledge? The research question or hypothesis should be clearly reflected in the methodology section; this is a quantitative study.
Quality. For the article to be published, the assessment vocabulary should be reviewed, expanding on concepts and theories, and helping to delve deeper, from the general to the specific. The same ideas and generalizations about evaluative judgment and identity are repeated in all sections.
Congratulations on the effort it takes to write in English.
Author Response
Dear reviewer, after reviewing your comments, I would like to respond to the points you have raised.
Interest. Evaluation is a topical issue in teaching and in teaching-learning processes. This article focuses on the Evaluative Judgment in the Classroom instrument from a quantitative perspective. However, a review of all sections would be necessary to enrich them with theories and ideas that help deepen the evaluative competence of assessing students.
Response: A reference has been added to the introduction to situate the formative assessment approach adopted in the article. In particular, the study focuses on competencies for assessing students from a formative assessment perspective.
Significance. In the introduction, methodology, results, discussion, and conclusions sections, the narrative content of the paragraphs is repeated, creating a repetitive effect and preventing progress in the theoretical development (introduction) and in the description of the research findings (discussion and conclusions).
Response: Changes have been made to the introduction to create more compact paragraphs that move from the theoretical approach to the instruments used in practice to measure teachers' evaluative competencies. In particular, it is now emphasized that the construction of teachers' evaluative judgments forms part of their evaluative competencies.
It is necessary to describe in the text the six dimensions and corresponding indicators that make up the Evaluative Judgment in the Classroom instrument, establishing connections between the theoretical foundation and the statistical data obtained in the research.
Response: An explanation has been added in the paragraph preceding Table 3 (lines 200-209).
Merit. The Evaluative Judgment in the Classroom instrument is validated as an evaluation construct; however, it is necessary to include the theoretical foundation based on the three dimensions, the six dimensions, and the indicators of Wyatt-Smith, Adie, and Harris (2024).
Response: An explanation has been added in the paragraph preceding Table 3, incorporating advances on the model by Wyatt-Smith, Adie, and Harris (2024); see lines 200-209.
Methodology. Is the question original and well defined, and do the results advance current knowledge? The research question or hypothesis should be clearly reflected in the methodology section; this is a quantitative study.
Response: The question guiding the study has been incorporated into the methodology section as suggested by the reviewer (lines 134-135).
Quality. For the article to be published, the assessment vocabulary should be reviewed, expanding on concepts and theories and helping to go deeper, from the general to the specific. The same ideas and generalizations about evaluative judgment and identity are repeated in all sections.
Response: Theoretical concepts on assessment have been incorporated into the introduction section following the reviewer's suggestion (lines 49-70).
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Dear authors,
Please see the feedback below.
Abstract
- The abstract clearly identifies the research focus, validating an instrument for teacher assessment literacy, but could be more concise. Key findings and implications are well summarized.
Introduction
- The introduction is thorough and well-situated within relevant literature, especially regarding assessment literacy and evaluative identity. However, it is quite dense and could benefit from greater clarity and brevity.
Method
- Methodological design is appropriate, using a sequential mixed approach with expert validation and CFA. However, clarity could be improved by better structuring the descriptions of the pilot and final phases. Lines 200–204 mention that 3 items were eliminated but do not state which or why; a brief rationale (e.g., poor factor loadings, overlap with other items) would improve transparency.
Results
- Detailed and statistically well-supported. Factor analysis and internal consistency (Cronbach's alpha) are robust.
- See Tables 2 and 5: on the bottom horizontal line there are numerous vertical dashes that need to be removed.
- For the level of significance in Table 4, include a zero before the decimal point.
Discussion
- The discussion is strong, linking findings back to theory and prior studies. It effectively highlights the contribution to assessment literacy and evaluative identity. More critical reflection on limitations would strengthen it.
- The discussion could be improved by more explicitly addressing the study’s limitations, such as the non-random sample and context-specific focus on Chilean public schools, and by expanding on how the instrument can be practically applied in teacher development, appraisal, or training. A more critical comparison with existing instruments like TFALS or ACAI would help highlight the unique contributions of this tool, particularly its emphasis on in-context evaluative decision-making. Finally, the discussion should propose a clearer future research agenda, including testing the tool’s predictive validity, adapting it to other educational contexts, and exploring longitudinal impacts on teaching practice and student learning.
- Just a point to consider: only in the final paragraph (lines 327–330) is it briefly noted that the study included only public school teachers. A more critical reflection on generalisability to private or early childhood sectors would be valuable.
Kindest regards
Author Response
Dear reviewer, after reviewing your comments, I would like to respond to the points you have raised.
Comment 1: The clarity of the design could be improved by better structuring the descriptions of the pilot and final phases.
Response: The design description is reorganized following the evaluator's suggestions.
Comment 2: Lines 200-204 mention that 3 items were eliminated, but do not specify which ones or why.
Response: The rationale for their elimination has been included (lines 181-183).
Comment 3: See Tables 2 and 5. In the bottom horizontal line there are numerous vertical dashes that should be removed.
Response: These errors have been corrected.
Comment 4: The discussion could be improved by more explicitly addressing the limitations of the study.
Response: Limitations and possibilities of the study are explicitly included in the conclusions section.
Comment 5: A more critical comparison with existing instruments such as TFALS or ACAI would help to highlight the unique contributions of this tool, in particular its emphasis on evaluative decision-making in context.
Response: This comparison has been included in the discussion section (lines 261-270).
Comment 6: Finally, the discussion should propose a clearer future research agenda, including the evaluation of the predictive validity of the tool, its adaptation to other educational contexts, and the exploration of its longitudinal impacts on teaching practice and student learning.
Response: This has been included in the conclusions.
One point to consider: only in the final paragraph (lines 327-330) is it briefly mentioned that the study included only public school teachers; a more critical reflection on generalizability to the private or early childhood sectors would be valuable.
Response: This reflection has been included in the possibilities and limitations of the study (lines 308-314).
Author Response File: Author Response.pdf