Review

Advancing Artificial Intelligence Literacy in Teacher Education Through Professional Partnership Inquiry

School of Teacher Education, University of Central Florida, Orlando, FL 32816, USA
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 659; https://doi.org/10.3390/educsci15060659
Submission received: 16 April 2025 / Revised: 20 May 2025 / Accepted: 22 May 2025 / Published: 27 May 2025

Abstract:
Artificial Intelligence (AI) has increasingly been integrated into daily life, yet many individuals, including teacher candidates, remain unaware of its presence, despite the rise of Generative AI and its influence on personal and professional spaces. AI offers promising advancements in education by enhancing efficiency, streamlining administrative tasks, and personalizing learning experiences. Recognizing the need to address AI’s role within their instructional practice and teacher preparation program, the authors describe their AI journey, detailing a multi-phase approach to integrating AI into higher education through individual exploration, faculty partnerships, pilot study implementation, and expanded partnerships and professional development. This article includes concrete examples of how a semester-long action research project was intentionally reimagined to align with AI-driven learning. Using the Digital Education Council’s AI Literacy Framework as a guiding structure, the authors examine the competencies necessary for AI literacy and leadership in education. This retrospective review highlights lessons learned, challenges faced, and emerging strategies for fostering responsible AI integration in K-12 settings and teacher preparation programs.

1. Introduction

Artificial Intelligence (AI) permeates our everyday interactions—from voice-assisted devices to chatbots and facial recognition—yet only 55% of residents in the United States report regularly using AI (Marr, 2025). In reality, most of us engage with AI on a daily basis, including our teacher candidates, even if they do not realize it. However, the recent Generative AI explosion has made AI more visible, reshaping personal and professional spaces. AI is already redefining teaching and learning (Marino et al., 2023) in K-12 settings and teacher preparation programs. It can enhance educators’ efficiency by streamlining administrative tasks such as grading and feedback (Zawacki-Richter et al., 2019) and can support instructional planning, design, and adaptation (Akgun & Greenhow, 2022). AI can also further personalize learning (Crompton & Burke, 2023), offering tutoring support, research assistance, and specialized help for students with disabilities (Marino et al., 2023). As educators of pre-service and in-service teachers, we recognized AI’s potential and felt a professional responsibility to address it in our courses. This required deliberate action—learning to use AI firsthand, critically assessing tools and outputs, examining ethical considerations, and determining effective strategies for AI integration, including providing teacher candidates with guided practice and independent practice with AI.
In this article, we document our AI journey, highlighting our key considerations for integrating AI and detailing how professional learning and partnerships accelerated our understanding and applications of AI in our courses and research. Through a process of individual inquiry and exploration, we delved into critical ethical issues surrounding AI, while also seeking professional learning opportunities to deepen our knowledge of AI applications. A pivotal point in our journey was the launch of our faculty partnership and participation in a professional development institute offered at our university, where we refined strategies to mitigate AI-related challenges. During this professional development, we also developed tools and a plan for AI use in a reading education course as part of a pilot study for the fall 2024 semester. Over the 2024–2025 academic year, we expanded our professional development roles from participants to presenters at local, national, and international conferences (Kelley & Wenzel, 2025b). We also spearheaded expanded partnerships by forming a Special Interest Group and hosting an AI institute for K-12 practitioners. In the following sections, we highlight the expansion of our professional learning efforts and emphasize the role of collaborative partnerships in knowledge building. Throughout this retrospective review (Ikram et al., 2024), we draw on the existing literature in the field to frame the outcomes and events from each phase of our journey (see Figure 1). When we use the term AI in this article, we are specifically referring to Generative AI.

2. Conceptual Framework and Methodology

Throughout this review article, we employ a retrospective methodological approach to analyze and reflect on our professional partnership journey. As we reflected on our AI journey, we realized it was best described using the Digital Education Council (DEC, 2025) AI Literacy Framework. This framework offers structured guidance for higher education institutions to develop AI literacy approaches that equip individuals with foundational AI competencies and discipline-specific applications. We systematically align our efforts with the DEC AI Literacy Framework as our conceptual framework. The Digital Education Council (2025) defines AI literacy as the ability to use AI tools effectively and ethically, evaluate their output, ensure humans are at the core of AI, and adapt to the changing AI landscape in personal and professional settings. This framework identifies five key dimensions of AI literacy, encompassing the knowledge and skills needed to understand, interact with, and critically assess AI. For each dimension, the Digital Education Council (2025) offers three levels of competency, providing a brief description, specific examples of each, and actions for progressing from Level 1 to Level 3. The five dimensions include the following:
  • Dimension 1—Understanding AI and Data;
  • Dimension 2—Critical Thinking and Judgment;
  • Dimension 3—Ethical and Responsible Use;
  • Dimension 4—Human-Centricity, Emotional Intelligence, and Creativity;
  • Dimension 5—Domain Expertise.
Dimension 5, domain expertise, encompasses Dimensions 1–4 and focuses on a faculty member’s ability to evaluate AI applications within a given discipline, modify AI tools to enhance professional practices, and navigate domain-specific ethical and operational challenges. Level 1 of Dimension 5 is foundational applied AI awareness, and Level 2 is AI application in teaching and learning. Levels 1 and 2 involve faculty engagement with students in the classroom and were the focus of the first three phases of our professional learning. Level 3, strategic AI leadership in higher education, involves faculty action beyond the classroom. Throughout this article, we note the application of the DEC dimensions and specifically how we addressed the three levels of Dimension 5, domain expertise, through our professional learning and partnerships.
Rather than presenting new empirical data, this review critically examines past practices, decisions, and implementations through the lens of this theoretical model. As such, we aim to identify lessons learned and inform future directions. This type of methodology is particularly valuable in review contexts, as it enables researchers to synthesize experiences, bring implicit knowledge to the surface, and enhance transparency and rigor in program or initiative evaluation (Patton, 2015). By anchoring our reflection in the DEC AI Literacy Framework, we provide a structured lens for interpretation, allowing for deeper insights into the process and potential future actions or research for other higher education faculty (Creswell & Poth, 2018).

3. Phase 1: Individual Inquiry and Exploration

AI became more apparent to us after the COVID-19 pandemic. At that time, we individually sought out professional development sessions on AI, taking a largely haphazard approach: if a session had AI and education in the title, we signed up. Most of these training sessions focused on Large Language Models and understanding how AI works, which is Dimension 1 of the DEC AI Literacy Framework (2025). Over two years, we collectively attended over 40 AI training sessions. These learning opportunities piqued our interest and led us to explore AI for personal use. We used AI to create travel itineraries, high-protein menus, checklists for children’s routines and chores, and party plans.
As we each learned and developed more confidence in using AI, we shifted to using it professionally. Concerned with AI hallucinations and skeptical of AI output, we often began by critically evaluating the output through questioning and fact-checking, which is Dimension 2 of the DEC AI Literacy Framework (2025). For example, in May of 2024, we asked ChatGPT-4 to “compare and contrast the Science of Reading and the Active View of Reading, identifying how these models are similar and how they are different with sources”. This was an existing assignment in one of our reading courses, and we were curious about what AI would produce and its accuracy. We reviewed the output to see if the similarities and differences were the distinguishing elements that we anticipated and whether the sources it cited would be appropriate for the assignment. After reviewing the output, we found the content to be accurate, and this increased our confidence and interest in using AI in our courses.
Once critically evaluating AI’s output became a habit, we began independently experimenting with AI to assist our teaching and research. We uploaded research articles, prompting Claude AI to summarize them or complete a specific task with them. Using Elicit AI, we had the platform analyze multiple papers by providing summaries, extracting data, and synthesizing findings.

Reflecting on Level 1: Applied AI Awareness

As we explored using AI, we recognized that there were promising ways we could integrate it into our courses and support student learning. In this individual inquiry and exploration phase, we gained a basic understanding of how AI could be used in education, and we identified relevant AI tools we might use with students. Table 1 describes the actions we took toward Level 1 of Dimension 5 (Digital Education Council, 2025).

4. Phase 2: Partnership Launch: Faculty Pair

While in this experimental phase with AI, we would text and email each other about our discoveries, questions, and what we were learning about AI. We also began brainstorming ways we could use AI to enhance student learning in our courses. We knew that our training and explorations represented general AI literacy and that we had only scratched the surface of AI’s potential. Looking to move into the Digital Education Council (2025) Dimension 5, domain expertise, we sought to collaborate and look for ways to integrate AI into our teaching. We applied and were accepted to the Writing Across the Curriculum and AI Track of the University of Central Florida Faculty Center for Teaching and Learning (FCTL) 2024 Summer Conference to investigate domain-specific applications of AI.
Despite recognizing the potential benefits of AI, we knew there were issues related to its use, such as hallucinations and plagiarism. We also realized that these concerns were preventing some of our peers and students from using it. To better understand these issues, during the FCTL conference, we attended sessions on AI ethics and dove into the literature related to AI literacy. Our track leader provided us with articles, websites, and suggestions, and we used this opportunity to illuminate AI’s challenges and problem-solve how to address them (Akgun & Greenhow, 2022). In the following sections, we explore some of these concerns and offer ideas to mitigate them.

4.1. Ethical Considerations

The Rome Call for AI Ethics (2024), a document signed by governments, institutions, and corporations, emphasized the importance of transparency, inclusion, reliability, impartiality, responsibility, and security in the development of AI systems, as well as in research, education, and workforce development. Additional ethical dilemmas include biases in algorithms, surveillance concerns, unequal access, misuse, and intellectual property (Murugesan, 2023), concepts addressed in DEC Dimension 3 (Digital Education Council, 2025). Not surprisingly, 98% of teachers surveyed by Forbes felt that students needed some degree of education concerning the ethical uses of AI (Hamilton, 2025). A total of 65% of educators were concerned about plagiarism in essays/work, 42% were concerned with data privacy and security, and 30% were concerned with unequal access to AI resources (Hamilton, 2025). But how can we address these issues?

4.2. Privacy, Data Security, and Bias

How AI systems collect and use student data raises concerns about privacy, security, and bias. While most AI systems ask for users’ consent to access their personal information, many users may not realize the extent to which that information is being shared. Akgun and Greenhow (2022) suggest that AI algorithms that make predictions based on personal information raise questions about autonomy and fairness. Furthermore, it is widely known that AI systems have demonstrated gender and racial bias (Miller et al., 2018; Murphy, 2019), partly explained by the underrepresentation of people of color and women in technology and in the training data that shape AI (Buolamwini, 2019). Awareness of data privacy and knowledge of how companies use shared data are some ways to deal with these issues. Our university has established policies and guidelines for AI and has given data-protected access to Microsoft Copilot to all students, faculty, and staff. K-12 school districts that have not already done so should establish safety protocols and policies to protect students’ privacy and safety when using educational technology, reducing this burden on classroom teachers.

4.3. Equity and Access

Not all schools and students have equal access to AI-powered tools, which could widen educational disparities. In fact, 30% of educators reported concerns that students did not have equal access to AI resources (Hamilton, 2025), and 15% of high school students reported not having access to AI (Schiel et al., 2024). The proliferation of AI has led to an AI divide, which Gonzales (2024) described as the unequal access, opportunities, and benefits in AI technology across socioeconomic groups, communities, and countries. Equitable access should be a priority in K-12 education and teacher education programs, which includes providing advanced technology hardware (beyond smartphones), utilities, and reliable internet connectivity (Colorado Education Initiative, 2024). Providing these basic resources gives everyone the opportunity to explore and engage with AI.

4.4. Plagiarism/AIgiarism

While cheating is not new to academia, evolving technology has exacerbated these concerns (Perry & Steck, 2015). From calculators in math exams to spell-check programs, technology has repeatedly tested the boundaries of ethical academic behavior. Plagiarism with AI, or AIgiarism, is difficult to detect. There are several AI-powered tools designed to detect cheating (Hartshorne, 2024); however, they can be expensive and inaccurate. Xie et al. (2023) identified three ways AI cheating is detrimental in higher education: it degrades the quality of education, creates an unfair advantage for AI users, and damages the integrity of educational institutions. This has led to anti-AI policies, which Gillard and Rorabaugh (2023) suggest are counterproductive. A better approach is to focus on education, awareness, and responsible and ethical AI usage rather than blaming AI itself (Gillard & Rorabaugh, 2023).

4.5. Impact on Critical Thinking

Beyond cheating, some educators worry that AI tools might reduce students’ ability to think critically and independently. Oravec (2023) suggests that educators promote AI literacy by teaching students to critically evaluate AI-generated content for deficiencies and inaccuracies. This can include reviewing AI-sourced materials for reliability, cross-referencing claims with authoritative sources, and understanding how to properly cite AI-generated content. As part of this effort, educators should emphasize AI’s limitations, including hallucinations—cases where AI fabricates information—and biases in training data that can impact responses. Educators should also rethink traditional assignment structures to better align with the realities of AI-assisted learning (Tlili et al., 2023). Rather than discouraging AI use outright, a more effective strategy is to intentionally integrate AI into coursework.

4.6. Proactively Addressing AI Ethical Dilemmas

Several states have established policies related to AI use in K-12 schools (Colorado Education Initiative, 2024). Developing and disseminating AI policies invites discussion and awareness, contributing to transparency and clear expectations. Not only has our university provided guidelines for AI use and access to students, it has also suggested syllabus language and offered several professional learning opportunities for faculty. Most recently, the university created a web course for faculty highlighting how AI works, ethical issues and suggestions for confronting them, and ideas for enhancing teaching and student learning using Generative AI. In the fall of 2025, it will launch a similar AI web course for students. The university has also established a Special Assistant to the Provost for Artificial Intelligence, who is coordinating efforts across our campus.

4.7. Making AI Transparent

Winkelmes et al. (2019) argued that when instructors make learning processes more transparent, it benefits students and fosters student success in college. These benefits include a sense of belonging, academic confidence, persistence, and metacognitive awareness. Transparent instruction involves faculty discussing the purpose of the assignment, what students will gain from it, the tasks involved, examples, and real-world applications before students undertake the work (Winkelmes et al., 2019). Transparency in AI not only builds trust and ethical use but also enhances learning outcomes by making educational processes clearer and more understandable for students. This approach ensures that AI is used responsibly and effectively in educational settings.

4.8. Using a Stoplight to Promote AI Transparency

One potential approach to promote AI transparency is a stoplight that visually alerts students to acceptable AI use for assignments (Mormando, 2023). This metaphor categorizes AI usage into three levels: green, yellow, and red lights, each representing a different level of permission and restriction. This framework clarifies when and how AI can be used and promotes ethical use and academic integrity. It encourages active dialog between teachers and students, helping them understand the implications of AI in their work. Since some assignments may involve more than one stoplight level, instructors should clearly articulate their expectations for AI usage based on the task and desired learning goal. Including examples and explaining how each fits into a category helps students better understand expectations and fosters responsible AI use. This model appealed to us, and we felt students would easily grasp the stoplight’s intent; therefore, we modified it for our courses. Table 2 describes the three levels, disclosure expectations, and potential teacher language related to AI use.

5. Phase 3: AI Pilot Study: Assignment Reconfiguration in a Reading Course

Once we decided to employ the stoplight framework for AI transparency with students, we moved to determining where AI use would fit best in our reading practicum course based on our learning objectives. We chose to reconfigure a semester-long reading action research case study project (ARCSP) that approximately 200 teacher candidates would complete with a K-12 student during a concurrent field experience or placement. For this project, students maintained a digital researcher log, which we created to scaffold them through the ARCSP process. The log contained six sections, one for each step of the ARCSP. At each step, teacher candidates received feedback and were evaluated. The project culminated with candidates presenting their ARCSP to peers.

5.1. Assignment Reconfiguration

The six steps of the ARCSP include the following: identifying a data collection plan, completing data collection and analysis, crafting a research question and conducting a mini literature review, creating an intervention/instructional plan, determining results and sharing findings, and reflecting on limitations and the action research process. Thinking about our experiences using AI, we reflected on what AI is good at and how it might be used to support students with the ARCSP steps. Historically, our students self-reported having the most challenges with identifying a research question based on their data analysis and writing a literature review based on their research question. We would often have to suggest a research question based on their data collected and guide them to peer-reviewed articles and resources for their literature review. Since we were already doing some of this work for them, we thought AI could be a teaching assistant for these steps.
Next, we experimented with using AI in the ARCSP steps, exploring what this could look like and the potential changes we would need to make to the assignment, including developing AI use guidance and embedding the stoplight into each section of the researcher log, alerting our students to acceptable uses. During this process, we refined prompts to optimize output results and determined when AI was the best fit. We decided to break the literature review step into two parts: source evaluation and the literature review. This led us to create an additional scaffold for source evaluation and a new section in the researcher log. Table 3 identifies the ARCSP steps, whether and how intentional AI use was infused, and AI stoplight guidance.

5.2. Pilot Study of Reconfigured ARCSP

In the fall of 2024, we implemented the reconfigured ARCSP into two sections of our reading practicum course. One section was online and consisted of graduate students; the other was an undergraduate hybrid course that met in person almost weekly. It is important to note that although we thoughtfully looked for ways to use AI to meet our learning objectives and encouraged AI use, we did not require our students to use AI (although they had free access to Microsoft Copilot provided by UCF), even if we included a green stoplight. At the beginning of the semester, we had students complete a survey to gauge their readiness for AI use. We anticipated that they would be more comfortable using AI for personal reasons rather than for academic use and that they would not feel they had been adequately prepared to use AI for teaching and learning with students.
Interestingly, of the students who responded to the survey across both course sections (n = 49), 29 shared that they did not use AI at all for academic purposes, while 12 shared that they used it occasionally. Only eight students reported that they used AI on a weekly to monthly basis for academic use. As we expected, more students, 22, reported using AI for personal use, and 13 students reported weekly to monthly use. We found it interesting that 14 students reported not ever using AI for personal use, which led us to wonder if students were fully aware of the applications and technology from their daily lives that employ AI, similar to Marr’s (2025) data regarding US residents’ perceived use. We also confirmed that, in general, students did not feel adequately prepared to use AI for teaching and learning, with 31 students reporting that they felt slightly prepared or not prepared at all. Only 14 students indicated that they were somewhat prepared, and 5 reported being prepared to very prepared.
While the data suggested that our teacher candidates were not using AI readily for academic purposes, we expected most of our students to use AI for support in completing the ARCSP. However, this was not the case, especially in the online section. Less than half of the students reported using AI for various parts of the ARCSP, and only a few used it throughout the project as allowed. In retrospect, this makes sense, since most of the students had not used AI previously for academic purposes, and there was only one synchronous opportunity to show students how to use AI. There was an increase in AI use in the hybrid undergraduate section, with all students using AI in at least one part of the ARCSP and the majority of students using AI in every section in which it was allowed. We attribute this difference to the course modality, given that the hybrid course included in-person instructor modeling of AI use for tasks such as generating potential research questions and summarizing peer-reviewed journal articles. Additionally, students used their electronic devices to start content generation in class immediately after each modeled example, using it in the same way the instructor had modeled for them.
Both instructors made several anecdotal observations from students who chose to use AI. Overall, their logs looked more professional, especially the visual representation of data in steps two and five. The sentence frame and the AI prompt supplied to them in step two simplified the drafting of research questions. Since the instructor did not have to create the research question, the instructor–student time was more collaborative and focused on why one question would be more appropriate than another and on tweaking the AI-generated research questions to the student’s data. Student feedback on the post-AI use survey suggested that the students who used AI for the source evaluation and literature review felt more confident to do something similar in the future.
At the end of the pilot semester, we observed interesting trends in post-survey outcomes from the undergraduate students (n = 23), as compared to the pre-survey data. Of the students in the undergraduate course, 16 reported feeling prepared or very prepared to use Generative AI tools for their teaching, 5 felt somewhat prepared, and only 2 students felt slightly prepared or not prepared. These results represented a noticeable increase from the pre-survey, on which 21 students selected that they were slightly prepared or not prepared. While we remain curious whether our students’ use of AI for academic purposes in our course had an actual impact on their preparedness to use AI in a teaching context with future students, we view their perceived increase in preparation as a positive outcome of the pilot study. In future semesters, we intend to study additional metrics to gauge how their knowledge and action research outcomes are impacted by their AI use in the action research process.

5.3. Reflecting on Level 2: AI Application in Teaching and Learning

As we have noted, in retrospect, we were using Levels 1 and 2 of the DEC AI Literacy Framework’s (2025) Dimension 5, domain expertise, as we reconfigured the ARCSP. Table 4 describes the actions we took specifically toward Level 2 of Dimension 5 (Digital Education Council, 2025) as we reconfigured the ARCSP assignment with an AI lens.

6. Phase 4: Expanding Partnerships and Professional Learning Leadership

As a result of our faculty partnership efforts and the implementation of our pilot study, we began to receive opportunities to share our early outcomes and research with others. By actively contributing to professional learning communities, we supported faculty at all levels of AI expertise. Our initiatives aimed to expand partnerships and professional learning in AI literacy by focusing on faculty development in evaluating and applying AI tools within their specific contexts. In the sections that follow, we share the evolution of this phase of our partnership and professional learning journey, including actions aligned with Level 3 of DEC Dimension 5, Strategic AI Leadership in Higher Education (2025).

6.1. University-Based Professional Learning

As we launched our pilot study, we began sharing our work with colleagues in our elementary education program area. During a summer retreat before the start of the fall 2024 semester, we highlighted what we had learned during the FCTL conference and made the AI stoplight and an AI online module shareable for faculty interested in using them in their courses. We also connected our faculty with other AI leaders in our university and shared relevant AI resources. We added a standing agenda item at our monthly program area meetings in the fall, where we shared AI updates from our work, such as data from our pilot study and anecdotal observations from our teaching experiences with AI.

6.2. Professional Learning at the International Level

During 2024–2025, the role of professional learning in our AI literacy journey evolved significantly. Our initial focus was on seeking out learning opportunities from others to build our understanding of AI in education. However, as our expertise grew, we shifted toward contributing to the professional learning community by sharing our insights and experiences. We took an active role in presenting a featured session for our university’s Writing Across the Curriculum department, engaging colleagues across disciplines in discussions about integrating AI in pedagogy and promoting transparent use. Additionally, we presented our work at two international conferences, expanding our impact and knowledge exchange. We also began publishing work related to our pilot study, contributing to the growing body of research on AI in education and helping to foster a collaborative, reflective learning environment within our academic community.

6.3. Self-Study and Collaborative Inquiry: Forming a Special Interest Group (SIG)

We were interested in building our faculty network within our School of Teacher Education and felt a Special Interest Group (SIG) would offer a structure for organized professional learning and partnership among colleagues. The launch of the SIG was also linked to a voluntary faculty self-reflection survey that we designed to assess the personal and professional uses of AI, as well as faculty members’ willingness to integrate AI into their teaching practices. The survey results revealed several key trends. Thirteen faculty members participated; while the group was moderately familiar with AI, they self-reported a notable lack of preparedness to use AI effectively in teaching. Most faculty reported using AI primarily for assessment creation and developing instructional materials, with many expressing a desire for more training and support in incorporating these tools into their courses. Interestingly, there was limited familiarity with AI’s potential applications for tutoring or automatic grading, indicating a gap in knowledge and readiness for these advanced uses. Faculty also expressed concerns about the ethical implications of AI, including issues related to data privacy, plagiarism, and the potential for AI tools to limit students’ writing development. These concerns were identified as significant barriers to adoption. Of the 13 faculty members who responded to the survey, 10 expressed interest in forming a SIG that could address these issues while exploring AI topics in a structured, collaborative manner. As such, the SIG for AI in K-12 Teacher Education was formed, representing the following program areas: special education, math, science, social studies, reading, and language arts. The SIG meets monthly for professional learning, and we have created a shared drive for AI resources.
Through the SIG, we aim to provide ongoing support, facilitate dialog, engage in research, and share the best practices for integrating AI into education, ensuring that all faculty feel empowered to engage with these emerging technologies.

6.4. Practitioner Partnerships: Hosting a Professional Learning Institute

Looking to expand our work, we secured internal funding to develop a 3-day professional learning institute for K-12 educators focused on AI literacy and applications, offered in the summer of 2025. The institute was designed to provide educators with interactive workshops that combined hands-on exploration, collaborative discussions, and practical guidance on the effective use of AI in their classrooms. The content of the institute was organized around the big ideas of assessment, curriculum development, differentiation, and communication in K-12 settings. Additionally, it emphasized ensuring equitable access to AI-enhanced learning for all learners. Presenters included university faculty, district leaders, and classroom teachers. The overarching goal of the institute was to empower educators to use AI ethically and responsibly, equipping them with the tools and knowledge needed to foster meaningful, student-centered learning experiences. This initiative aimed not only to build AI literacy among educators but also to expand our partnerships to include practitioners currently in the field. We hope these new stakeholders can be a part of a sustainable network of professional learning to support research, teaching, and learning efforts around AI integration in diverse classrooms.

6.5. Reflecting on Level 3: Strategic AI Leadership in Higher Education

The partnership and professional learning efforts described above align with the Digital Education Council’s (2025) framework, specifically Dimension 5, Level 3, by demonstrating strategic AI leadership in higher education through institutional collaboration, pedagogical innovation, and the development of professional learning ecosystems. The university-based professional learning efforts, particularly the integration of the AI stoplight and module sharing, reflect our leadership in faculty training and in embedding critical engagement with AI into coursework. The creation of a SIG and the use of faculty self-reflection surveys exemplify structured, data-informed professional development, contributing to the design of institution-wide AI literacy frameworks. By presenting at international conferences, contributing to research publications, and hosting a professional learning institute for K–12 educators, the team has actively influenced both national and global AI literacy conversations. These actions show a clear commitment to promoting ethical, equitable, and transformative uses of AI in education, fulfilling the DEC framework’s call to lead institutional change and contribute to discourse on responsible AI adoption. Table 5 details how our actions align with the tenets of Dimension 5, Level 3.

7. Discussion and Discoveries

As we further developed our AI literacy and applied AI in our teaching practices and professional partnerships, we began to experience a “zoom in/zoom out” phenomenon (Busch-Jensen & Schraube, 2019) across the Digital Education Council (2025) AI Literacy Framework. While we could trace our progress through each dimension, we quickly realized that these dimensions were not linear or isolated. Rather, they flowed and intersected in dynamic, ongoing ways. As we experimented with new AI tools and uses in our courses, our focus would shift between critical thinking and ethical considerations, sometimes revisiting foundational knowledge as we encountered new ideas or challenges. Our learning moved fluidly between these dimensions, almost like a continuum where each new application of AI would deepen our understanding in one area while prompting us to reconsider others. This fluidity reflected the complex, interconnected nature of AI literacy, requiring us to constantly adapt our thinking as we applied AI in evolving contexts.
Similarly, the zoom in/zoom out phenomenon (Busch-Jensen & Schraube, 2019) also applied to the relationship between domain-specific and domain-general knowledge and applications of AI. While Dimension 5 emphasizes domain expertise, we found that learning from AI applications in other fields provided valuable insights that could be adjusted and applied to our own context. For example, methods for teaching critical thinking in STEM fields can be adapted for social science education, and ethical considerations regarding HIPAA in healthcare AI can inform data privacy practices related to FERPA in our field. This cross-disciplinary learning highlights the importance of avoiding a siloed approach where AI uses and applications are viewed as isolated to specific fields. Instead, we found that sharing knowledge across disciplines not only enriched our understanding but also created opportunities for innovative teaching practices that bridged gaps between disciplines. By adopting a more interconnected approach, we realized that domain-specific examples could enhance domain-general knowledge, offering valuable perspectives and the potential for unique partnerships moving forward.

8. Conclusions

As scholars in teacher education, we recognize that preparing future generations of diverse citizens to engage ethically with AI requires an investment in professional development and partnership building for pre-service and practicing educators. To foster AI literacy and responsible integration, teachers, teacher leaders, and teacher education faculty must be equipped with the knowledge and strategies to navigate an evolving technological landscape. Darling-Hammond et al. (2017) emphasized that continuous, high-quality professional development allows educators to stay abreast of emerging technologies and best practices, ultimately enhancing teaching and learning outcomes. One effective approach is sustained professional learning, whereby teachers actively engage with curated curriculum resources, pedagogical strategies, and discussions on best practices. As the phases of our journey indicated, collaborative efforts among university faculty significantly enhanced the acquisition and application of new ideas. Specific to AI adoption and use in education, we have experienced firsthand how engaging in partnerships has exponentially increased our knowledge, applications, and creative ideas for AI integration. By building and contributing to communities of practice, educators can share insights, co-develop innovative teaching strategies, and critically reflect on their experiences with AI in the classroom. These collaborative efforts have the potential to lead to a profound and practical understanding of AI in teaching and learning.

9. Future Directions

With intentional engagement and a commitment to collaborative inquiry and research, faculty have a unique opportunity to actively shape the future of teaching and learning using AI. As the academic community embraces AI to enhance learning experiences, optimize assessments, and better meet the needs of diverse learners, there is an exciting potential to be part of this transformative shift. Utilizing the Digital Education Council (2025) AI Literacy Framework, institutions can assess their level of competency in each dimension, developing and fostering strategic AI leadership in higher education. Through collaborative partnerships and professional learning, we can remain at the forefront of advancements while also shaping thoughtful, ethical, and human-centered approaches to AI in education.

Author Contributions

All authors contributed to the writing and editing of this review article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
DEC: Digital Education Council
ARCSP: Action Research Case Study Project

References

  1. Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics, 2, 431–440.
  2. Buolamwini, J. (2019, February 7). Artificial intelligence has a problem with gender and racial bias. TIME. Available online: https://time.com/5520558/artificial-intelligence-racial-gender-bias/ (accessed on 13 April 2025).
  3. Busch-Jensen, P., & Schraube, E. (2019). Zooming in zooming out: Analytical strategies of situated generalization in psychological research. In C. Højholt, & E. Schraube (Eds.), Subjectivity and knowledge: Theory and history in the human and social sciences. Springer.
  4. Colorado Education Initiative. (2024). Colorado roadmap for AI in K-12 education. Available online: https://www.coloradoedinitiative.org/wp-content/uploads/2024/08/Colorado-Roadmap-for-AI-in-K-12-Education_August-2024.pdf (accessed on 13 April 2025).
  5. Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). SAGE Publications.
  6. Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 1–22.
  7. Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute.
  8. Digital Education Council. (2025). DEC AI literacy framework: AI literacy for all. Digital Education Council.
  9. Gillard, C., & Rorabaugh, P. (2023, February). You’re not going to like how colleges respond to ChatGPT. Slate. Available online: https://slate.com/technology/2023/02/chat-gpt-cheating-college-ai-detection.html (accessed on 14 April 2023).
  10. Gonzales, S. (2024, August 6). AI literacy and the new digital divide—A global call for action. UNESCO. Available online: https://www.unesco.org/en/articles/ai-literacy-and-new-Digital-divide-global-call-action (accessed on 13 April 2025).
  11. Hamilton, I. (2025, March 10). Artificial intelligence in school. Forbes. Available online: https://www.forbes.com/advisor/education/it-and-tech/artificial-intelligence-in-school/ (accessed on 13 April 2025).
  12. Hartshorne, D. (2024, April 30). The best AI content detectors in 2024. Zapier.
  13. Ikram, M. K., Labrecque, J. A., & Ikram, M. A. (2024). Perspective versus retrospective: A simple matter of timing? [version 1; peer review: 1 approved with reservations]. Open Research Europe, 4, 225.
  14. Kelley, M., & Wenzel, T. (2025a). From red to green: Guiding transparent AI use in higher education. Faculty Focus, 24(1), 5.
  15. Kelley, M., & Wenzel, T. (2025b). Green light for AI: Navigating transparency in teacher preparation. In R. J. Cohen (Ed.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 3284–3288). Association for the Advancement of Computing in Education (AACE). Available online: https://www.learntechlib.org/primary/p/225940/ (accessed on 8 April 2025).
  16. Marino, M. T., Vasquez, E., Dieker, L., Basham, J., & Blackorby, J. (2023). The future of artificial intelligence in special education technology. Journal of Special Education Technology, 38(3), 404–416.
  17. Marr, B. (2025, March 10). 15 mind-blowing AI statistics everyone must know about now. Forbes. Available online: https://www.forbes.com/sites/bernardmarr/2025/03/10/15-mind-blowing-ai-statistics-everyone-must-know-about-now/ (accessed on 13 April 2025).
  18. Miller, F. A., Katz, J. H., & Gans, R. (2018). The OD imperative to add inclusion to the algorithms of artificial intelligence. OD Practitioner, 5(1), 6–12.
  19. Mormando, S. (2023, November 9). A stoplight model for guiding student AI usage. Edutopia.
  20. Murphy, R. F. (2019). Artificial intelligence applications to support K–12 teachers and teaching: A review of promising applications, challenges, and risks. Perspective, 10, 1–20.
  21. Murugesan, S. (2023, April 24). The rise of ethical concerns about AI content creation: A call to action. IEEE Computer Society.
  22. Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT and Bard. Journal of Interactive Learning Research, 34(2), 213–237.
  23. Patton, M. Q. (2015). Qualitative research & evaluation methods (4th ed.). SAGE Publications.
  24. Perry, D. R., & Steck, A. K. (2015). Increasing student engagement, self-efficacy, and meta-cognitive self-regulation in the high school geometry classroom: Do iPads help? Computers in the Schools, 32(2), 122–143.
  25. Rome Call for AI Ethics. (2024). Rome call for AI ethics report. Available online: https://www.romecall.org/wp-content/uploads/2024/02/RomeCall_report-web.pdf (accessed on 13 April 2025).
  26. Schiel, J., Bobek, B. L., & Schnieders, J. Z. (2024). High school students’ use and impressions of AI tools. ACT Research. Available online: https://www.act.org/content/act/en/research/pdfs/High-School-Students-Use-and-Impressions-of-AI-Tools-Accessible.html (accessed on 13 April 2025).
  27. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 1–24. Available online: https://link.springer.com/article/10.1186/s40561-023-00237-x#citeas (accessed on 13 April 2025).
  28. Winkelmes, M., Boye, A., & Tapp, S. (2019). Transparent design in higher education teaching and leadership: A guide to implementing the transparency framework institution-wide to improve learning and retention. Routledge.
  29. Xie, Y., Wu, S., & Chakravarty, S. (2023, October 11–14). AI meets AI: Artificial intelligence and academic integrity—A survey on mitigating AI-assisted cheating in computing education. 24th Annual Conference on Information Technology Education (SIGITE ’23), Marietta, GA, USA.
  30. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39.
Figure 1. Phases of AI professional partnership inquiry. This figure illustrates the progression from individual inquiry to professional learning leadership.
Table 1. Individual inquiry and exploration through the DEC’s Dimension 5, domain expertise, Level 1 (Digital Education Council, 2025).
| Level 1: Applied AI Awareness Examples for Education | Examples of Our Actions Towards Level 1 |
| Identify key AI applications relevant to education. | Attended over 40 AI professional development sessions. Read articles about AI applications. Watched YouTube videos on AI applications in education. Experimented with AI applications to determine which applications worked best for certain tasks, such as generating research questions, role-playing with parents, and synthesizing research articles. |
| Recognize how AI is transforming professional roles in education. | Attended over 40 AI professional development sessions. Read articles about AI use in education. Curated AI resources. |
| Understand the basic limitations of AI when applied in education. | Attended over 40 AI professional development sessions. Gained personal experience using AI platforms for professional purposes. |
Note: Adapted from Digital Education Council (2025). DEC AI literacy framework: AI literacy for all.
Table 2. Framework for AI transparency: stoplight model.
| Stoplight | AI Use | AI Disclosure Expectations | Potential Teacher Language |
| Red | No AI Use | May require an academic integrity pledge. | “AI cannot be used for this assignment. All work must be original, and use of AI will be considered plagiarism.” |
| Yellow | AI-Assisted Idea Generation and Editing | An AI disclosure statement must specify AI use. | “AI can be used for __________________ but not for ________________.” |
| Green | AI Use for Assignment Content Generation | AI use must be cited using APA (or another citation style). | “AI use is permissible to complete this assignment (with citation).” |
Note: Adapted from Kelley and Wenzel (2025a). From red to green: Guiding transparent AI use in higher education. Faculty Focus, 24(1), 5.
Table 3. Reconfiguration of the ARCSP.
| Original ARCSP Steps | Reconfigurations With/Without AI | AI Use Stoplight Guidance |
| Step One: Data Collection Plan | No change | Yellow—Use for grammar and mechanics ONLY. |
| Step Two: Data Collection and Analysis | Teacher educators embed student assessments. Teacher educators summarize data and create a research question using a provided If/Then sentence frame with or without AI. If using AI, sources are requested. | Yellow—Use for grammar and mechanics ONLY. Green—Can use to create data tables (with student names de-identified) and/or draft research questions. Must provide APA citation for AI use. |
| Step Three: Research Question and Mini Literature Review | Revised to be two steps with a new section in the researcher log. Step 1—Source evaluation with or without AI. Teacher educators cite in APA, summarize, and identify key takeaways and instructional implications for at least three sources. Step 2—Mini literature review with or without AI. Teacher educators synthesize across sources and draw conclusions in APA style with in-text citations and references. | Yellow—Use for grammar and mechanics ONLY. Green—Can use to identify sources and/or summarize sources. Must provide APA citation for AI use. |
| Step Four: Instructional/Intervention Plan | Teacher educators develop an instructional or intervention plan with artifacts based on their research question and literature review with or without AI. | Yellow—Use for grammar and mechanics ONLY. Green—Can use to brainstorm or identify lesson plans based on literature review. Must provide APA citation for AI use. |
| Step Five: Results and Findings | Teacher educators compare students’ pre- and post-intervention data and create a visual representation with or without AI. | Yellow—Use for grammar and mechanics ONLY. Green—Can use to create data tables (with student names de-identified). Must provide APA citation for AI use. |
| Step Six: Reflection on Limitations and Action Research Process | No change | Yellow—Use for grammar and mechanics ONLY. |
Table 4. AI pilot study: assignment reconfiguration in a reading course through the DEC’s Dimension 5, domain expertise, Level 2 (Digital Education Council, 2025).
| Level 2: AI Application in Teaching and Learning Examples for Education | Examples of Our Actions Towards Level 2 |
| Select and apply AI tools that enhance efficiency and accuracy in a professional or academic setting. | Attended over 40 AI professional development sessions. Read articles about AI applications. Watched YouTube videos on AI applications in education. Experimented with AI applications to determine which applications worked best for certain tasks. |
| Assess the strengths and weaknesses of AI applications within specific processes or parts of the value chain. | Explored AI applications and uses for the ARCSP, determining how AI could support students. Modified the ARCSP to include the use of AI for specific steps. |
| Integrate AI insights into professional decision-making while understanding AI’s role as a complement to human expertise. | Created a generic AI prompt that students could use to assist them with identifying research questions: “Create a list of research questions written in an if/then format based on instruction in ___________ (foundational reading skill) and the Active View of Reading Construct ___________________ (related to foundational reading skill identified from data) with sources cited”. |
Note: Adapted from Digital Education Council (2025). DEC AI literacy framework: AI literacy for all.
Table 5. Expanding partnerships and professional learning leadership through the DEC’s Dimension 5, domain expertise, Level 3 (Digital Education Council, 2025).
| Level 3: Strategic AI Leadership in Higher Education (Digital Education Council, 2025) | Examples of Our Actions Towards Level 3 |
| Evaluate and refine AI adoption strategies within the field, considering regulatory, ethical, and operational constraints. | Development of a stoplight for AI use. Sharing syllabi statements for AI use. |
| Lead the implementation of AI-driven innovations in a professional or academic context. | Introduction of AI as a “thought partner” in action research. Assignment reconfiguration to promote academic source evaluation and synthesis of research in course assignments. Collaboration with university departments for professional learning: Faculty Center for Teaching and Learning, Writing Across the Curriculum. Faculty self-reflection surveys. |
| Develop training materials or guidelines to enhance AI literacy among peers and colleagues in the field. | Graduate course development: AI for Teacher Educators. Presentations at international professional conferences. Publications in international conference proceedings, book chapters, and peer-reviewed journals. |
Note: Adapted from Digital Education Council (2025). DEC AI literacy framework: AI literacy for all.
