Peer-Review Record

Impact of Data-Driven Feedback and Coaching on Preservice Teachers’ Questioning Skills for Higher-Order Thinking within a Mixed-Reality Simulation Environment

Educ. Sci. 2023, 13(6), 596; https://doi.org/10.3390/educsci13060596
by Wes J. DeSantis 1, Marcia A. B. Delcourt 1,*, Bruce M. Shore 2 and Jacob C. Greenwood 3
Submission received: 12 April 2023 / Revised: 25 May 2023 / Accepted: 6 June 2023 / Published: 11 June 2023
(This article belongs to the Special Issue The Use of Mixed Reality Simulations in Teacher Education)

Round 1

Reviewer 1 Report


Comments for author File: Comments.pdf

Author Response

Responses to Reviewer 1: Data-Driven Feedback and Coaching

Review of “Impact of Data-Driven Feedback and Coaching on Preservice Teachers’ Questioning Skills for Higher-Order Thinking within a Mixed-Reality Simulation Environment”

Response to Reviewer 1 Comments

Point 1: A central finding states that the preservice students’ ability to ask HOT questions at different levels evolved during the project semester. However, it would strengthen the transparency of the article if the HOT questions asked were distinguished by taxonomic level. As mentioned in the introduction of the article under review, Bloom’s taxonomy of cognition specifies different levels of cognition (application, analysis, synthesis, and evaluation). The article does not convey which of these were used, or to what extent, in the preservice teachers’ live sessions. The findings further show that there was hardly any increase in the number of HOT questions the preservice students asked in sessions 2 and 3. Could one of the reasons for this be found in the inherent taxonomy of the questions? That is, would a HOT question at the highest taxonomic level (evaluation) be more time-consuming (but no less interesting) than a question at a lower level (e.g., application)? The quantitative nature of the first two research questions would thereby profit from a more qualitative approach.

Response 1: The reviewer poses interesting questions about the relationship between the level of a question within the HOT category and the number of HOT questions asked in each session. Thank you for bringing to our attention that the description of the observation tool lacked an explanation of its dichotomous coding scheme. We have added this information to the instrumentation section, pp. 7–8.

To further explain the changes in question use over the three sessions, we have pointed out that there was a slight increase in the use of HOT questions: “This incremental increase for members of the treatment group is also observed in the number of HOT questions asked and the number of candidates engaged in posing these questions from sessions one to three. Ten candidates in session one asked 17 HOT questions; however, five participants asked only K/C questions. In session two, 18 HOT questions were asked by 12 candidates in the treatment group, with no HOT questions asked by three of the candidates. In session three, 19 HOT questions were asked by 15 preservice teachers, each asking between one and three HOT questions” (p. 10).

Although the CPR observation tool was validated using a dichotomous coding schema that labels each question as either K/C or HOT, we identified the specific level of Bloom’s taxonomy for each HOT question asked. Most questions for the treatment group were at the analysis level (34 of 54 questions); three were categorized as application, nine as synthesis, and eight as evaluation. For the comparison group, one question was identified as application, two as analysis, and one as evaluation. Further analysis revealed that the HOT questions categorized at the evaluation level, the top tier of the taxonomy, included four Level 1 and four Level 2 HOT questions, as well as one HOT question at Level 3. The only evaluation question for the comparison group was at Level 2 (p. 11).
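For readers who wish to retrace this secondary tally, the sketch below shows one way the coded questions could be represented and counted. It is a minimal illustration only: the `CodedQuestion` record, its field names, and the sample entries are hypothetical and do not reflect the CPR observation tool’s actual data format.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record for one coded question. The CPR observation tool
# itself codes each question dichotomously (K/C vs. HOT); the Bloom level
# was identified in a secondary pass, as described above.
@dataclass
class CodedQuestion:
    group: str                         # "treatment" or "comparison"
    session: int                       # 1, 2, or 3
    category: str                      # "K/C" or "HOT"
    bloom_level: Optional[str] = None  # e.g., "analysis"; None for K/C questions

def hot_tally_by_bloom(questions: List[CodedQuestion], group: str) -> Counter:
    """Count a group's HOT questions per Bloom level."""
    return Counter(q.bloom_level for q in questions
                   if q.group == group and q.category == "HOT")

def hot_tally_by_session(questions: List[CodedQuestion], group: str) -> Counter:
    """Count a group's HOT questions per simulation session."""
    return Counter(q.session for q in questions
                   if q.group == group and q.category == "HOT")

# Made-up entries to show usage; real tallies would come from the full
# set of coded sessions.
coded = [
    CodedQuestion("treatment", 1, "HOT", "analysis"),
    CodedQuestion("treatment", 1, "K/C"),
    CodedQuestion("treatment", 2, "HOT", "evaluation"),
]
print(hot_tally_by_bloom(coded, "treatment"))    # Counter({'analysis': 1, 'evaluation': 1})
print(hot_tally_by_session(coded, "treatment"))  # Counter({1: 1, 2: 1})
```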

Point 2: Another question is whether the three to five seconds of wait time that distinguishes the second level of HOT questions actually allows for higher-level thinking among the students. Or should another HOT-question level be added, in which the avatars’ responses are analyzed more closely in connection with the questions asked, focusing on different levels of higher-order thinking based on Bloom’s taxonomy?

Response 2: You raise an interesting question regarding responses from students in the classroom. The CPR observation tool can be used to record student responses in order to observe the dialogue and synergy that occur when students are engaged. Given the short time frame of 10 minutes for a single lesson, we chose to limit the expectations for the candidates in the classes. We have included this information in the instrument section (p. 8). We did note in the text that, in response to their HOT questions, two candidates recognized that the avatars created their own HOT questions as an extension of the classroom dialogue (pp. 14–15). Recording responses and extending wait time can certainly be an area of future research; refer to p. 19.

Reviewer 2 Report

The article is clear and easy to understand. The conclusions are sound and reflect good judgement.

The area lacking was a clear account of what the coaching used with the preservice teachers consisted of in terms of type, theoretical underpinning, approach, etc. As the hope would be to replicate the study, another researcher would not know how to replicate the coaching experience, or adjust it to examine whether different results would ensue.

Developing the explanation of the coaching used is the only area I would recommend strengthening prior to publication.

Author Response

Responses to Reviewer 2: Data-Driven Feedback and Coaching

Review of “Impact of Data-Driven Feedback and Coaching on Preservice Teachers’ Questioning Skills for Higher-Order Thinking within a Mixed-Reality Simulation Environment”

Response to Reviewer 2 Comments

Point 1: The area lacking was a clear account of what the coaching used with the preservice teachers consisted of in terms of type, theoretical underpinning, approach, etc. As the hope would be to replicate the study, another researcher would not know how to replicate the coaching experience, or adjust it to examine whether different results would ensue.

Response 1: To address your concern about the coaching protocol, information has been added to p. 8 describing the script that was followed in each conversation with a candidate.

Each coaching session consisted of the following steps:

  1. (Initial coaching session, referring to the demographic form) I see that you are in the elementary or secondary education program. What topics do you look forward to teaching when you complete the program?
  2. How do you think the MRS experience is preparing you for student teaching?
  3. In your most recent MRS session, how do you think you did?
  4. How do you think you did with respect to the questions you asked?
  5. What seemed easy to do? What seemed difficult?
  6. At this time the researcher explained the number of K/C and HOT questions the participant generated during the lesson. After the results were given, the researcher answered any questions to make sure the participant understood the results.
  7. What do you think about these results? How were you thinking about the types of questions you were asking during the session?
  8. What do you think you could do to improve your questioning skills? What could you do to include more HOT questions in your next session?
  9. Do you need any additional resources to help you achieve your goal?
  10. Recommendations for next session: At this time the researcher gave some strategies for improving their questioning skills and explained how HOT questions can be used to promote student engagement. Generic examples of levels of questions were used to help candidates differentiate between K/C and HOT questions. The researcher did not review questions for the next lesson. Students initiated their own questions when they individually planned their lesson activities.
  11. Finally, the call ended with closing rapport.

 
