Review
Peer-Review Record

Advancing Artificial Intelligence Literacy in Teacher Education Through Professional Partnership Inquiry

Educ. Sci. 2025, 15(6), 659; https://doi.org/10.3390/educsci15060659
by Michelle Kelley * and Taylar Wenzel
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 16 April 2025 / Revised: 20 May 2025 / Accepted: 22 May 2025 / Published: 27 May 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper presents a thoughtful, well-constructed, convincing narrative of a practical exploration of generative AI in education. The paper is highly readable and demonstrates a strong grasp of relevant and very up-to-date literature, which is important in such a fast-moving field. The paper also provides a well-rounded overview of the potential positives and challenges of this technology in the field of teacher education. Critically, the paper explores the application of AI in the authors' own domain, which is where all disciplines now need to be focusing their energy, as generalised guidance has taken us about as far as it can.

The authors should be commended for their commitment to exploring gen AI themselves, rather than relying only on the emerging literature or popular sources of narrative, which can be quite polarised. They should also be commended for their commitment to sharing what they learned and helping to facilitate their colleagues’ exploration through various activities.

I’ve provided comments throughout the paper in the attached PDF, but these are mostly relatively minor. There are some minor errors in the reference list: some numbers are missing, and a couple of references are not cited in the text. In terms of more substantial suggestions, first I would suggest acknowledging ethics approval for the surveys mentioned, if that was the case. It would be helpful to provide a fuller description of the results of any surveys undertaken if possible; they are mentioned in the text, but a summary table or appendix would give a better picture of the data collected. More detail about what the surveys covered would also be helpful. I also wondered whether there was any feedback on, or evaluation of, the professional development activities the authors organised: did they lead to changes in behaviour, approaches, ongoing collaborations, research opportunities, etc.?

Additionally, as noted in the comments, there has been some criticism of the stoplight metaphor for managing AI use in the classroom. It is a relatable metaphor that should be easily understood by students and faculty alike, but it has practical limitations now and into the very near future: as AI becomes increasingly ubiquitous, it is becoming technically impossible to avoid interacting with it when doing academic work, rendering the 'no AI' end of the spectrum virtually impossible to achieve. At the other end of the spectrum, it seems unlikely that fully unlimited use of AI would be a desirable or acceptable option for most universities at this point, though that may change down the track; it is certainly different in industry already, where full leverage of AI for productivity and efficiency gains is expected.

Overall, the paper is practice-focused and provides insights into a process that should be transferable to most disciplines for faculty who want to critically explore AI in their field.

Comments for author File: Comments.pdf

Author Response

 

For review article #3618857

Response to Reviewer 1 Comments

1. Summary

Thank you very much for taking the time to review this manuscript. Please find our responses below, with the corresponding revisions/corrections highlighted in track changes in the re-submitted files.

2. Point-by-point response to Comments and Suggestions for Authors

Comments 1: Ref style incorrect

Response 1: Fixed (L-54).

 

Comments 2: These don't need capitals L-102 (Science of Reading and Active View of Reading)

Response 2: We disagree; these are reading research models, and they should be capitalized.

 

Comments 3: What was the outcome of this review? What AI model/tool did you use, and when? (The date gives an indication of the era of agents you were using and, therefore, likely capabilities.)

Response 3: We have added to this section to clarify (L-101-110).

 

Comments 4: Is that legal under the ToS of either the publisher or Claude? L-109

Response 4: Yes.

Comments 5: “it” L-117

Response 5: Fixed.

 

Comments 6: Application categories are likely more useful than the most-known tools. L-119

Response 6: We removed the word “applications” to avoid confusion.

 

Comments 7: Table 1.

Response 7: We added the following, “such as generating research questions, role-playing with parents, and synthesizing research articles,” to clarify and be explicit.

 

Comments 8: What were the key concerns you recognised? (I ask as these are culturally constructed and different for everyone, so can provide useful insights for others.) L-135

Response 8: We added “such as hallucinations and plagiarism” to the first sentence to clarify.

 

Comments 9: L-151: “may be better to consider this academic misconduct - see for example Sarah Eaton's work on post-plagiarism” instead of the word plagiarism.

Response 9: Thank you for your input, and we agree; however, we did not design the instrument. Hamilton (2024) did for the Forbes study, and that was the terminology that was used and reported.

Comments 10: L-153: Perhaps just “But how can we address these issues?”

Response 10: Fixed.

 

Comments 11: L-163: Practically speaking, what does that look like in your context? Is it about reading the ToS and knowing what the company is allowed to do with you/your data, or do you look for other indicators?

Response 11: We revised this to clarify: “of data privacy and knowledge of how companies are using shared data”.

 

Comments 12: L-170: I am not aware of any, but are there any national (or other level) stats on the proportion of schools that provide access for staff and students? That would be a useful metric for the size of the challenge we face in access equity.

Response 12: We located two statistics in support. We added: “In fact, 30% of educators reported concerns that students do not have equal access to AI resources (Hamilton, 2024), and 15% of high school students reported not having access to AI (Schiel, Bobek, & Schnieders, 2024)”.

 

Comments 13 and 14: Table 2: It's more realistic now to have a 'minimal AI use' category, as it is becoming increasingly difficult not to interact with AI at all when doing academic work. Another criticism of the stoplight approach is often whether there is any situation where unlimited use of AI would be appropriate, e.g., would it be OK to completely generate an assignment and hand it in? Curtis (2025) talks about the middle road (where some critical and informed AI use is expected and encouraged as part of the academic/creative process) as being the place where pretty much everything lands now and suggests a focus on appropriate use in that context. That said, the same constraints will not apply in most workplaces once students graduate, so perhaps HE needs to change its thinking on this?

Response 13 and 14: We appreciate the reviewer's input and ideas related to the stoplight. We can see that in the future we may need to adjust the stoplight for the issues the reviewer pointed out; however, this is how we have used the stoplight in our courses this past year. This article is less about the stoplight and more about our professional journey.

 

Comments 15: L-255: This seems very intensive for the faculty members... any indication of the time commitment, and how many students were in your courses?

Response 15: We added to the preceding paragraph to contextualize the number of students who engage in the ARCS project, regardless of whether they are in our section(s) or not.

 

Comments 16: Table 3: Is this meant to be 'with or without'?

Response 16: Fixed.

 

Comments 17: Table 3: Can they use it to generate the plan as well? If they did, how would that be different from handing in something from Teachers Pay Teachers? Would brainstorming/ideation be OK?

Response 17: We added the word “brainstorm”.

 

Comments 18: But students are provided with access to a tool, right? Was it Copilot for the web?

Response 18: We added “(although they have free access to Microsoft Copilot)”.

 

Comments 19: Did you have ethics clearance for the survey? A summary of the data, perhaps in a table or appendix, would be helpful and add to the credibility of this study.

Response 19: Descriptive statistics are provided rather than empirical data, as this is a review article, not a research article.

 

Comments 20: L-288: Response rate?

Response 20: We report the number of respondents/students.

Comments 21: L-296: Could it be that more use it, but less than half admitted to it and followed the tracking process, or perhaps, as we've found, didn't recognise when they were using AI tools?

Response 21: We added “reported using” to clarify.

 

Comments 22: L-386: Is this something allowed/encouraged at the institution? If so, is it done with specialised tools, or is everyone allowed to do what they want in this respect?

Response 22: This item was part of a survey and not the focus of this article. Our institution does have automatic grading in our Canvas platform.

 

Comments 23: L-426: Table 5?

Response 23: Yes, we fixed this.

 

 

Reviewer 2 Report

Comments and Suggestions for Authors

This paper presents a very interesting and detailed narrative of an approach a faculty has taken to the professional learning of teacher education staff about AI in and across their disciplines. As a reflective review, the writing focuses mainly on the participants' experience and journey. There are some general comments I would make about the structure of the paper and improvements that could be made to support the intended reader:

  1. More in-depth review of background literature: there is a wealth of empirical research on AI in tertiary education and it would be of benefit to provide a more nuanced review of the current field in the Introduction section.
  2. The DEC model is a conceptual model published by an EU body, but empirical evidence of its effectiveness in practice would strengthen its validity as a model to guide this professional learning process. I note that there are a few references at various stages of the process guided by this model, but these are minimal, and it would be beneficial to include more to support the decisions you made as you progressed through the process.
  3. Methodology/introduction to the process: It would be useful for the reader to understand your design thinking going into this professional learning process, e.g., how many teacher educators would be involved? How long would each stage of the process take? How did you intend to collect feedback? (Surveys are mentioned later, but it would be helpful to understand the initial intention and plan.)

Overall, this is an interesting article that presents good detail of the professional learning process and experiences of participants; however, more background information and referencing is needed to support the reader prior to the narrative of the professional learning process.

Author Response

 

For review article #3618857

Response to Reviewer 2 Comments

1. Summary

Thank you very much for taking the time to review this manuscript. Please find our responses below, with the corresponding revisions/corrections highlighted in track changes in the re-submitted files.

2. Point-by-point response to Comments and Suggestions for Authors

  1. More in-depth review of background literature: there is a wealth of empirical research on AI in tertiary education and it would be of benefit to provide a more nuanced review of the current field in the Introduction section.
  2. The DEC model is a conceptual model published by an EU body, but empirical evidence of its effectiveness in practice would strengthen its validity as a model to guide this professional learning process. I note that there are a few references at various stages of the process guided by this model, but these are minimal, and it would be beneficial to include more to support the decisions you made as you progressed through the process.
  3. Methodology/introduction to the process: It would be useful for the reader to understand your design thinking going into this professional learning process, e.g., how many teacher educators would be involved? How long would each stage of the process take? How did you intend to collect feedback? (Surveys are mentioned later, but it would be helpful to understand the initial intention and plan.)

 

Response to Reviewer #2: We added the following paragraph (and citations) after the Conceptual Framework to address some of the reviewer's concerns. We did not add a more in-depth literature review, as the existing coverage is already substantial (see the Phase 2 headings, 4.1-4.8) and is woven throughout the review article. We do not include empirical data in this review article.

In this review article, we employ a retrospective methodological approach to analyze and reflect on our professional partnership journey. We systematically align our efforts with the DEC AI Literacy Framework as our conceptual framework. Rather than presenting new empirical data, this review critically examines past practices, decisions, and implementations through the lens of this theoretical model. As such, we aim to identify lessons learned and inform future directions. This type of methodology is particularly valuable in review contexts, as it enables researchers to synthesize experiences, surface implicit knowledge, and enhance transparency and rigor in program or initiative evaluation (Patton, 2015). By anchoring our reflection in the DEC AI Literacy Framework as our conceptual framework, we provide a structured lens for interpretation, allowing for deeper insights into the process and potential future actions or research for other higher education faculty (Creswell & Poth, 2018).

 
