Mentorship in the Age of Generative AI: ChatGPT to Support Self-Regulated Learning of Pre-Service Teachers Before and During Placements
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Dear Author(s),
I have thoroughly reviewed your manuscript. The idea of examining mentorship in the age of generative AI is compelling and of clear current interest. However, the quality of the writing requires significant improvement. Both the results and discussion sections need extensive revisions, and the methods section would benefit from more detail. I have provided comments within the manuscript.
Best,
Reviewer
Comments for author File: Comments.pdf
Author Response
Please see the attached document.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
I recommend presenting the data in tables or charts to enhance readability and facilitate a clearer understanding of the findings. Additionally, incorporating the demographics of the mentor teachers would provide a valuable layer of analysis, as these demographics can significantly influence the study's outcomes.
Author Response
Please see the attached document.
Author Response File: Author Response.docx
Reviewer 3 Report
Comments and Suggestions for Authors
Thank you for this opportunity to review work on such a practically and theoretically significant topic. The strengths of the paper are its strong theoretical backdrop, rich types of data, and a data size that allows highly interesting questions to be posed. However, it seemed as if the dataset, results, and discussions had two separate sets of research questions -- one on mentorship across low and high ICSEA schools (from the lens of the pyramid model proposed by the authors), and the other on AI use in low and high ICSEA schools (from the lens of SRL and AI literacy) -- without these inquiries being consistently integrated. Please see my detailed feedback below about how this rift manifested in different parts of the paper, as well as other topics and suggestions.
1. In the methods section, as it stands, I found it difficult to clearly grasp the connections between the research questions and the study design. From the introduction, I had expected AI to be used to support mentoring, but it seems here that the focus of the in-course activity was on AI literacy, and that pre-service teachers used AI on their own rather than with their mentors. I would appreciate clarification on how the current setting and dataset allow the authors to answer the second question, "In what ways does the integration of gen-AI tools enhance or challenge the development of SRL and autonomy among PSTs during WIL placements?"
2. I would appreciate seeing the data size for different data sources, as well as the timeframe in which different data types were collected, to get a better sense of the research design as an integrated whole.
3. In the results, I was again unsure of how the results speak to "gen-AI's ability to navigate the complex interpersonal dynamics essential for effective mentorship" mentioned in the abstract. I was able to see that AI was used to help students self-regulate, or support regular teaching practice, but it did not seem that it was deployed/used as a conscious part of mentoring practice.
4. In line with the comment above, I would suggest that authors reorganize the results section so that it is easier to see how results answer the three research questions, rather than being organized by data source.
5. "…suggesting that a low AI literacy may be correlated to more struggles in WIL performance"
The statement above from p. 12 seems to implicitly suggest that low AI use leads to low WIL performance, which seems questionable both intuitively and given the anecdotal source of this inference. I would personally imagine it is more likely that there is a set of factors (e.g., SRL skills, metacognition, motivation, effort, willingness to engage with feedback) that correlates with both use of AI and WIL performance. While the discussion of outliers addresses this to a point, I think a more nuanced, qualified statement on the relationship between AI use and WIL performance could come out of considering alternative explanations that incorporate the two cases instead of singling them out as outliers.
6. On the other hand, I found the results focused on ICSEA, on how schools' mentoring practices are influenced by their sociocultural context and resources, highly interesting and coherent.
Overall, there are interesting findings about pre-service teachers' use of AI and about mentoring practice across different schools, but the two areas of inquiry are not clearly integrated in the methodology or results, as the research questions imply they should be. As a result, after reading the paper, it is difficult to identify what the key takeaways are and how they relate to the research questions. I would suggest that the authors re-identify a smaller set of cohesive, justifiable findings, and rework the RQs, methods, and organization and selection of results in a way that makes these key findings clear.
Minor comments:
a. The authors may consider whether some of the acronyms can be replaced by full phrases to improve readability. Also, I suggest that the full term for ITE be reintroduced in the body of the writing, as it is currently only introduced in the abstract.
b. After reading the paper, I was confused about why OLMs were introduced as a central topic in the introduction; the study's RQs seem wholly unrelated to them. Please consider expanding the focus of the paragraph currently on OLMs so that it is relevant to the RQs and to the barriers discussed by the authors earlier.
c. If (and only if) there is justification for discussing OLMs at length, I would like to see a clear definition of open learner models (p. 2). In particular, what seems missing is a distinction between learner models and open learner models; currently, many of the descriptions apply to the more general category of learner models. I was also not sure why a lack of consideration of gen-AI is listed as an inherent limitation of open learner models (p. 3), as they are different tools.
d. I have typically seen survey items introduced in the materials section rather than in the results section. Some questions seemed to be a mix of two different questions, which may have affected how students interpreted them, e.g., "To what degree did AI tools assist in filling gaps in mentor feedback when availability was limited?" I would have liked to see more information on how new items were designed.
Author Response
Please see the attached document.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Dear Author(s),
I can see that you have done a lot of work in revising the manuscript. The quality has improved significantly. However, the discussion section still needs major revisions. I have left comments in the manuscript for your guidance.
Best,
Reviewer
Comments for author File: Comments.pdf
Author Response
Thank you for your thorough review and guidance. Please see our response and the actions taken, outlined in the attached document.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
I appreciate the authors' thorough revision and the much improved logical clarity and quality of argument in the revised manuscript. I agree with the authors' decision to rescope the RQs to focus on AI use in SRL development, and I particularly appreciated the added details and examples on the training participants received, as well as on the general study setting. I have organized my remaining and new feedback in order of the manuscript sections.
Introduction
- The revised writing and added references in the introduction successfully strengthen the argument that this research is timely and beneficial.
- Minor, but I found the following sentence somewhat less convincing:
"More critically, OLMs fail to directly facilitate the training in and use of 90 gen-AI, a skill increasingly essential in the contemporary educational landscape and a 91 competitive edge in the workforce, for PSTs"
The criterion or goal discussed previously is to facilitate self-regulated learning and feedback practices, so the sudden comparison of these two tools (OLMs vs. gen-AI) on the extent to which they facilitate a new skill (AI literacy) seems a little abrupt. The authors may consider rephrasing this so that it is introduced as an extra (albeit important) benefit rather than a critical one, or as a wholly new advantage (e.g., "An added benefit of utilizing gen-AI for PST feedback is…").
- The paragraphs on pages 2 to 3 are extremely long and touch on several topics. Please consider breaking them down into smaller paragraphs.
- I appreciate the major updates to the research questions. The new questions immediately seemed much more relevant to the introduction, and answerable given the setting and data. I did have a question about the focus of RQ3; please see my comment on the results section.
- I am still unsure of whether OLMs are a relevant topic to be named and discussed. The topic seems narrow and specific, while not directly related to gen-AI. In fact, two of the four papers cited in this section are not about OLMs but about general AI feedback systems. OLMs are a special case within AI feedback systems, in which the models are transparent and there is an emphasis on promoting student-model interactions and agency. Please consider a more general term, such as "personalized feedback systems" or "specialized AI feedback systems".
Materials and Methods
- The added examples and details for how PSTs were trained to use gen-AI for feedback (p. 6) were immensely helpful in understanding the study design and intellectual efforts of the paper. In fact, I would not mind if the section were expanded with additional training details, examples of materials, etc., akin to explaining an intervention in an intervention study or the instructional design of a design-based study. See my continued suggestions on this below (results).
- I was not sure what "An interview protocol of deidentification and balanced moderation" meant.
- How were the authors able to revise survey questions after they had collected the data? (See the authors' response to comments.)
Results
- The results for the first RQ are convincing, interesting, and coherent. They demonstrate a deep, purposeful integration between the data and the authors' novel framework (MSPM).
- I appreciated how mentor involvement in PSTs' AI usage is now explained as a natural outgrowth of co-regulated learning, rather than being alluded to as a planned form of mentorship.
- I understand that the focus or theoretical construct differs between RQs 1 and 2, but the actual findings and observations seemed to overlap significantly between the two RQs (e.g., risks to autonomy, need for structured integration...). Some of this overlap may come from the fact that self-regulation is already a component of the MSPM used to present findings for RQ1; the authors may wish to consider a different focus for RQ2 that reduces the overlap, or to reorganize both RQs.
- For RQ3, the contents of this section were interesting, but did not seem to clearly and concisely answer the stated research question.
- For instance, I was not sure how placement scores are relevant to the topic of "Optimising course design for AI integration in WIL placements"
- I think a meaningful (and reasonable) set of findings may concern how the intervention aided AI use in WIL placements, how it did not, and suggested improvements and caveats for implementing similar interventions. The authors may consider reorganizing, removing, and expanding parts of the RQ3 section with reference to how design-based studies present findings in a similar way (albeit across iterations). Examples:
- Killen, H., Coenraad, M., Byrne, V., Cabrera, L., Mills, K., Ketelhut, D. J., & Plane, J. D. (2023). Teacher education to integrate computational thinking into elementary science: A design-based research study. ACM Transactions on Computing Education, 23(4), 1-36.
- Wang, S. K., Hsu, H. Y., Reeves, T. C., & Coster, D. C. (2014). Professional development to enhance teachers' practices in using information and communication technologies (ICTs) as cognitive tools: Lessons learned from a design-based research study. Computers & Education, 79, 101-115.
- Minor grammatical error: "A key benefit of AI integration within the preparatory university THE COURSE was"…
Discussions
- Given that RQ1 finds that AI lacks "the interpersonal depth required for psychosocial mentoring", the sentence below seems contradictory:
"Gen-AI is demonstrated to provide psychosocial support to PSTs when the tone or delivery mode of mentors’ feedback may otherwise erode PSTs’ confidence."
- While I agree with the authors that removing socio-educational advantage (ICSEA) as a topic helped increase coherence, I did find these results and their implications for PST AI literacy interesting, and I ask cautiously whether the authors may be able to find ways to weave this into the RQs or discussions.
- As this is not an experimental study, authors may consider attaching caveats or rewording the claim for effect made in the sentence:
"The study reveals the extent of usefulness in integrating gen-AI in university preparatory courses before WIL placement. After AI literacy was built through carefully designed tasks that combined gen-AI and simulations of placement experiences and processes, PSTs continued to use gen-AI while on placement." - The topic of the paragraph starting with the sentence above was unclear, and contents seemed to overlap with the previous paragraph.
- It seems that the sentence mentioning "disparities observed in AI integration across different educational settings" (and similar sentences near it) is a remnant of the old manuscript, which had an RQ relating to ICSEA; consider removing these or clarifying what the "different educational settings" are.
Overall, I would like to emphasize that my comments in this round are significantly more focused on local and stylistic issues, acknowledging the substantial improvements to the paper's central argument and contribution. I look forward to seeing the revisions.
Author Response
Thank you for your thorough review and guidance. Please see our response and the actions taken, outlined in the attached document.
Author Response File: Author Response.pdf