1. Introduction
Artificial Intelligence (AI) permeates our everyday interactions—from voice-assisted devices to chatbots and facial recognition—yet only 55% of residents in the United States report regularly using AI (
Marr, 2025). In reality, most of us engage with AI on a daily basis, including our teacher candidates, even if they do not realize it. However, the recent Generative AI explosion has made AI more visible, reshaping personal and professional spaces. AI is already redefining teaching and learning (
Marino et al., 2023) in K-12 settings and teacher preparation programs. It can enhance educators’ efficiency by streamlining administrative tasks such as grading and feedback (
Zawacki-Richter et al., 2019) and can support instructional planning, design, and adaptation (
Akgun & Greenhow, 2022). AI can also further personalize learning (
Crompton & Burke, 2023), offering tutoring support, research assistance, and specialized help for students with disabilities (
Marino et al., 2023). As educators of pre-service and in-service teachers, we recognized AI’s potential and felt a professional responsibility to address it in our courses. This required deliberate action—learning to use AI firsthand, critically assessing tools and outputs, examining ethical considerations, and determining effective strategies for AI integration, including providing teacher candidates with guided and independent practice with AI.
In this article, we document our AI journey, highlighting our key considerations for integrating AI and detailing how professional learning and partnerships accelerated our understanding and applications of AI in our courses and research. Through a process of individual inquiry and exploration, we delved into critical ethical issues surrounding AI, while also seeking professional learning opportunities to deepen our knowledge of AI applications. A pivotal point in our journey was the launch of our faculty partnership and participation in a professional development institute offered at our university, where we refined strategies to mitigate AI-related challenges. During this professional development, we also developed tools and a plan for AI use in a reading education course as part of a pilot study for the fall 2024 semester. Over the 2024–2025 academic year, we expanded our professional development roles from participants to presenters at local, national, and international conferences (
Kelley & Wenzel, 2025b). We also spearheaded expanded partnerships by forming a Special Interest Group and hosting an AI institute for K-12 practitioners. In the following sections, we highlight the expansion of our professional learning efforts and emphasize the role of collaborative partnerships in knowledge building. Throughout this retrospective review (
Ikram et al., 2024), we embed the existing literature in the field to frame the outcomes and events from each phase of our journey (see
Figure 1). When we use the term AI in this article, we are specifically referring to Generative AI.
2. Conceptual Framework and Methodology
Throughout this review article, we employ a retrospective methodological approach to analyze and reflect on our professional partnership journey. As we reflected on our AI journey, we realized it was best described using the
Digital Education Council (
2025) AI Literacy Framework. This framework offers structured guidance for higher education institutions to develop AI literacy approaches that equip individuals with foundational AI competencies and discipline-specific applications. We systematically align our efforts with the Digital Education Council (DEC) AI Literacy Framework as our conceptual framework. The
Digital Education Council (
2025) defines AI literacy as the ability to use AI tools effectively and ethically, evaluate their output, ensure humans are at the core of AI, and adapt to the changing AI landscape in personal and professional settings. This framework identifies five key dimensions of AI literacy. The dimensions include the knowledge and skills needed to understand, interact with, and critically assess AI. For each dimension, the
Digital Education Council (
2025) offers three levels of competency, providing a brief description, specific examples of each, and actions for progressing from Level 1 to Level 3. The five dimensions include the following:
Dimension 1—Understanding AI and Data;
Dimension 2—Critical Thinking and Judgment;
Dimension 3—Ethical and Responsible Use;
Dimension 4—Human-Centricity, Emotional Intelligence, and Creativity;
Dimension 5—Domain Expertise.
Dimension 5, domain expertise, encompasses Dimensions 1–4 and focuses on a faculty member’s ability to evaluate AI applications within a given discipline, modify AI tools to enhance professional practices, and navigate domain-specific ethical and operational challenges. Level 1 of Dimension 5 is foundational applied AI awareness, and Level 2 is AI application in teaching and learning. Levels 1 and 2 involve faculty engagement with students in the classroom and were the focus of the first three phases of our professional learning. Level 3, strategic AI leadership in higher education, involves faculty action beyond the classroom. Throughout this article, we note the application of the DEC dimensions and, specifically, how we addressed the three levels of Dimension 5, domain expertise, through our professional learning and partnerships.
Rather than presenting new empirical data, this review critically examines past practices, decisions, and implementations through the lens of this theoretical model. As such, we aim to identify lessons learned and inform future directions. This type of methodology is particularly valuable in review contexts, as it enables researchers to synthesize experiences, bring implicit knowledge to the surface, and enhance transparency and rigor in program or initiative evaluation (
Patton, 2015). By anchoring our reflection in the DEC AI Literacy Framework, we provide a structured lens for interpretation, allowing for deeper insights into the process and potential future actions or research for other higher education faculty (
Creswell & Poth, 2018).
3. Phase 1: Individual Inquiry and Exploration
AI became more apparent to us post-COVID-19. At that time, we individually sought out professional development sessions on AI, taking a largely haphazard approach. If a session had AI and education in the title, we signed up. Most of these training sessions focused on Large Language Models and understanding how AI works, which is Dimension 1 of the DEC AI Literacy Framework (2025). Over two years, we collectively attended over 40 AI training sessions. These learning opportunities piqued our interest and led us to explore AI for personal use. We used AI to create travel itineraries, high-protein menus, checklists for children’s routines and chores, and party plans.
As we each learned and developed more confidence in using AI, we shifted to using it professionally. Concerned about AI hallucinations and skeptical of AI output, we often began these initial uses by critically evaluating the output through questioning and fact-checking, which is Dimension 2 of the DEC AI Literacy Framework (2025). For example, in May of 2024, we asked ChatGPT-4 to “compare and contrast the Science of Reading and the Active View of Reading, identifying how these models are similar and how they are different with sources”. This was an existing assignment in one of our reading courses, and we were curious about what AI would produce and its accuracy. We reviewed the output to see if the similarities and differences were the distinguishing elements that we anticipated and whether the sources it used would be appropriate for the assignment. After reviewing the output, we found the content to be accurate, which increased our confidence and interest in using AI in our courses.
Once critically thinking about AI’s output became a habit, we began to independently dabble with using AI to assist our teaching and research. We uploaded research articles, prompting Claude AI to summarize or complete a specific task with the articles. Using Elicit AI, we had the platform analyze multiple papers by providing summaries, extracting data, and synthesizing findings.
Reflecting on Level 1: Applied AI Awareness
As we explored using AI, we recognized that there were promising ways we could integrate it into our courses and support student learning. In this individual inquiry and exploration phase, we gained a basic understanding of how AI could be used in education, and we identified relevant AI tools we might use with students.
Table 1 describes the actions we took toward Level 1 of Dimension 5 (
Digital Education Council, 2025).
4. Phase 2: Partnership Launch: Faculty Pair
While in this experimental phase with AI, we texted and emailed each other about our discoveries, questions, and what we were learning about AI. We also began brainstorming ways we could use AI to enhance student learning in our courses. We knew that our training and explorations represented general AI literacy and that we had only scratched the surface of AI’s potential. Looking to move into the
Digital Education Council (
2025) Dimension 5, domain expertise, we sought to collaborate and look for ways to integrate AI into our teaching. We applied and were accepted to the Writing Across the Curriculum and AI Track of the University of Central Florida Faculty Center for Teaching and Learning (FCTL) 2024 Summer Conference, where we investigated domain-specific applications of AI.
Despite recognizing the potential benefits of AI, we knew there were issues related to its use, such as hallucinations and plagiarism. We also realized that these concerns were preventing some of our peers and students from using it. To better understand these issues, during the FCTL conference, we attended sessions on AI ethics and dove into the literature on AI literacy. Our track leader supplied us with articles, websites, and suggestions, and we used this opportunity to illuminate AI’s challenges and problem-solve how to address them (
Akgun & Greenhow, 2022). In the following sections, we explore some of these concerns and offer ideas to mitigate them.
4.1. Ethical Considerations
The
Rome Call for AI Ethics (
2024), a document signed by governments, institutions, and corporations, emphasized the importance of transparency, inclusion, reliability, impartiality, responsibility, and security in the development of AI systems, as well as in research, education, and workforce development. Additional ethical dilemmas include biases in algorithms, surveillance concerns, unequal access, misuse, and intellectual property (
Murugesan, 2023), concepts addressed in the DEC Dimension 3 (
Digital Education Council, 2025). Not surprisingly, 98% of teachers surveyed by Forbes felt that students needed some degree of education concerning the ethical uses of AI (
Hamilton, 2025). A total of 65% of educators were concerned about plagiarism in essays/work, 42% were concerned with data privacy and security, and 30% were concerned with unequal access to AI resources (
Hamilton, 2025). But how can we address these issues?
4.2. Privacy, Data Security, and Bias
How AI systems collect and use student data raises concerns about privacy, security, and bias. While most AI systems ask for users’ consent to access their personal information, many users may not realize the extent to which their information is being shared.
Akgun and Greenhow (
2022) suggest that AI algorithms that make predictions based on personal information lead to questions about autonomy and fairness. Furthermore, it is widely known that AI systems have demonstrated gender and racial bias (
Miller et al., 2018;
Murphy, 2019), partly explained by the underrepresentation of people of color and women in technology and in the data training that shapes AI (
Buolamwini, 2019). Awareness of data privacy and knowledge of how companies use shared data are two ways to address these issues. Our university has established policies and guidelines for AI and has provided data-protected access to Microsoft Copilot for all students, faculty, and staff. K-12 school districts that have not already done so should establish safety protocols and policies to protect students’ privacy and safety when using educational technology, reducing this burden on classroom teachers.
4.3. Equity and Access
Not all schools and students have equal access to AI-powered tools, which could widen educational disparities. In fact, 30% of educators reported concerns that students did not have equal access to AI resources (
Hamilton, 2025), and 15% of high school students reported not having access to AI (
Schiel et al., 2024). The proliferation of AI has led to an AI divide, which
Gonzales (
2024) described as the unequal access to, opportunities in, and benefits of AI technology across socioeconomic groups, communities, and countries. Equitable access, which includes advanced technology hardware (beyond smartphones), utilities, and reliable internet connectivity, should be a priority in K-12 education and teacher preparation programs (
Colorado Education Initiative, 2024). Providing these basic resources gives everyone the opportunity to explore and engage with AI.
4.4. Plagiarism/AIgiarism
While cheating is not new to academia, evolving technology has exacerbated these concerns (
Perry & Steck, 2015). From calculators in math exams to spell-check devices and programs, new technologies have repeatedly tested the boundaries of ethical behavior. Plagiarism with AI, termed AIgiarism, is difficult to detect. There are several AI-powered tools designed to detect cheating (
Hartshorne, 2024); however, they can be expensive and inaccurate.
Xie et al. (
2023) have identified three ways AI cheating is detrimental in higher education: it degrades the quality of education, creates an unfair advantage for AI users, and damages the integrity of educational institutions. This has led to anti-AI policies, which
Gillard and Rorabaugh (
2023) suggest are counterproductive. A better approach is to focus on education, awareness, and responsible and ethical AI usage rather than blaming AI itself (
Gillard & Rorabaugh, 2023).
4.5. Impact on Critical Thinking
Beyond cheating, some educators worry that AI tools might reduce students’ ability to think critically and independently.
Oravec (
2023) suggests that educators promote AI literacy by teaching students to critically evaluate AI-generated content for deficiencies and inaccuracies. This can include reviewing AI-sourced materials for reliability, cross-referencing claims with authoritative sources, and understanding how to properly cite AI-generated content. As part of this effort, educators should emphasize AI’s limitations, including hallucinations—cases where AI fabricates information—and biases in training data that can impact responses. Educators should also rethink traditional assignment structures to better align with the realities of AI-assisted learning (
Tlili et al., 2023). Rather than discouraging AI use outright, a more effective strategy is to intentionally integrate AI into coursework.
4.6. Proactively Addressing AI Ethical Dilemmas
Several states have established policies related to AI use in K-12 schools (
Colorado Education Initiative, 2024). Developing and disseminating AI policies invites discussion and awareness, contributing to transparency and clear expectations. Not only has our university provided guidelines for AI use and student access, it has also offered suggested syllabus language and several professional learning opportunities for faculty. Most recently, it created a web course for faculty highlighting how AI works, ethical issues and suggestions for confronting them, and ideas for enhancing teaching and student learning using Generative AI. In the fall of 2025, it will launch a similar AI web course for students. The university has also established a Special Assistant to the Provost for Artificial Intelligence, who is coordinating efforts across our campus.
4.7. Making AI Transparent
Winkelmes et al. (
2019) argued that when instructors make learning processes more transparent, it benefits students and fosters student success in college. These benefits include a sense of belonging, academic confidence, persistence, and metacognitive awareness. Transparent instruction involves faculty discussing the purpose of the assignment, what students will gain from it, the tasks involved, examples, and real-world applications before students undertake the work (
Winkelmes et al., 2019). Transparency about AI not only builds trust and supports ethical use but also enhances learning outcomes by making educational processes clearer and more understandable for students, helping ensure that AI is used responsibly and effectively in educational settings.
4.8. Using a Stoplight to Promote AI Transparency
One potential approach to promote AI transparency is a stoplight that visually alerts students to acceptable AI use for assignments (
Mormando, 2023). This metaphor categorizes AI usage into three levels—green, yellow, and red lights—each representing a different level of permission and restriction. This framework clarifies when and how AI can be used and promotes ethical use and academic integrity. It encourages active dialog between teachers and students, helping them understand the implications of AI in their work. Since some assignments may involve more than one stoplight level, it is important to clearly articulate expectations for AI usage based on the task and desired learning goal. Including examples and explaining how each fits into a category helps students better understand expectations and fosters responsible AI use. This model appealed to us, and we felt students would easily grasp the stoplight’s intent; therefore, we modified it for our courses.
Table 2 describes the three levels, disclosure expectations, and potential teacher language related to AI use.
5. Phase 3: AI Pilot Study: Assignment Reconfiguration in a Reading Course
Once we decided to employ the stoplight framework for AI transparency with students, we determined where AI use would best fit in our reading practicum course based on our learning objectives. We chose to reconfigure a semester-long reading action research case study project (ARCSP) that approximately 200 teacher candidates would complete with a K-12 student during a concurrent field experience or placement. For this project, students maintained a digital researcher log, which we created to scaffold them through the ARCSP process. The log contained six sections, one for each step of the ARCSP. At each step, teacher candidates received feedback and were evaluated. The project culminated with them presenting their ARCSP to peers.
5.1. Assignment Reconfiguration
The six steps of the ARCSP include the following: identifying a data collection plan, completing data collection and analysis, crafting a research question and conducting a mini literature review, creating an intervention/instructional plan, determining results and sharing findings, and reflecting on limitations and the action research process. Thinking about our experiences using AI, we reflected on what AI is good at and how it might support students with the ARCSP steps. Historically, our students self-reported having the most difficulty identifying a research question based on their data analysis and writing a literature review based on their research question. We would often have to suggest a research question based on their collected data and guide them to peer-reviewed articles and resources for their literature review. Since we were already doing some of this work for them, we thought AI could serve as a teaching assistant for these steps.
Next, we experimented with using AI in the ARCSP steps, exploring what this could look like and the potential changes we would need to make to the assignment, including developing AI use guidance and embedding the stoplight into each section of the researcher log, alerting our students to acceptable uses. During this process, we refined prompts to optimize output results and determined when AI was the best fit. We decided to break the literature review step into two parts: source evaluation and the literature review. This led us to create an additional scaffold for source evaluation and a new section in the researcher log.
Table 3 identifies the ARCSP steps, whether and how intentional AI use was infused, and AI stoplight guidance.
5.2. Pilot Study of Reconfigured ARCSP
In the fall of 2024, we implemented the reconfigured ARCSP in two sections of our reading practicum course. One section was online and consisted of graduate students; the other was an undergraduate hybrid course that met in person almost weekly. It is important to note that although we thoughtfully looked for ways to use AI to meet our learning objectives and encouraged AI use, we did not require our students to use AI, even when a green stoplight was included (although they had free access to Microsoft Copilot provided by UCF). At the beginning of the semester, we had students complete a survey to gauge their readiness for AI use. We anticipated that they would be more comfortable using AI for personal reasons than for academic use and that they would not feel adequately prepared to use AI for teaching and learning with students.
Interestingly, of the students who responded to the survey across both course sections (n = 49), 29 shared that they did not use AI at all for academic purposes, while 12 shared that they used it occasionally. Only eight students reported that they used AI on a weekly to monthly basis for academic use. As we expected, more students, 22, reported using AI for personal use, and 13 students reported weekly to monthly use. We found it interesting that 14 students reported not ever using AI for personal use, which led us to wonder if students were fully aware of the applications and technology from their daily lives that employ AI, similar to
Marr’s (
2025) data regarding US residents’ perceived use. We also confirmed that, in general, students did not feel adequately prepared to use AI for teaching and learning, with 31 students reporting that they felt slightly prepared or not prepared at all. Only 14 students indicated that they were somewhat prepared, and 5 reported being prepared to very prepared.
While the data suggested that our teacher candidates were not readily using AI for academic purposes, we expected most of our students to use AI for support in completing the ARCSP. However, this was not the case, especially in the online section. Fewer than half of the students reported using AI for various parts of the ARCSP, and only a few used it throughout the project as allowed. In retrospect, this makes sense, since most of the students had not previously used AI for academic purposes, and there was only one synchronous opportunity to show students how to use AI. AI use was higher in the hybrid undergraduate section, with all students using AI in at least one part of the ARCSP and the majority using it in every section in which it was allowed. We attribute this difference to the course modality, given that the hybrid course included in-person instructor modeling of AI use for tasks such as generating potential research questions and summarizing peer-reviewed journal articles. Additionally, students used their own devices to begin generating content in class immediately after each example, mirroring the instructor’s modeled use.
Both instructors made several anecdotal observations about students who chose to use AI. Overall, their logs looked more professional, especially the visual representations of data in steps two and five. The sentence frame and the AI prompt supplied in step two simplified the drafting of research questions. Since the instructor did not have to create the research question, instructor–student time was more collaborative, focused on why one question would be more appropriate than another and on tailoring the AI-generated research questions to the student’s data. Student feedback on the post-AI use survey suggested that students who used AI for the source evaluation and literature review felt more confident about doing something similar in the future.
At the end of the pilot semester, we observed interesting trends in the post-survey outcomes from the undergraduate students (n = 23) compared to the pre-survey data. Of the students in the undergraduate course, 16 reported feeling prepared or very prepared to use Generative AI tools in their teaching, 5 felt somewhat prepared, and only 2 felt slightly prepared or not prepared. This was a noticeable shift from the pre-survey, on which 21 students selected slightly prepared or not prepared. While we remain curious whether our students’ use of AI for academic purposes in our course had an actual impact on their preparedness to use AI in a teaching context with future students, we felt that their perceived increase in preparation was a positive outcome of the pilot study. In future semesters, we intend to study additional metrics to gauge how their knowledge and action research outcomes are affected by their AI use in the action research process.
5.3. Reflecting on Level 2: AI Application in Teaching and Learning
As we have noted, in retrospect, we were using Levels 1 and 2 of the DEC AI Literacy Framework’s (2025) Dimension 5, domain expertise, as we reconfigured the ARCSP.
Table 4 describes the actions we took specifically toward Level 2 of Dimension 5 (
Digital Education Council, 2025) as we reconfigured the ARCSP assignment with an AI lens.
6. Phase 4: Expanding Partnerships and Professional Learning Leadership
As a result of our faculty partnership efforts and the implementation of our pilot study, we began to receive opportunities to share our early outcomes and research with others. By actively contributing to professional learning communities, our efforts supported faculty at all levels of AI expertise. Our initiatives aimed to expand partnerships and professional learning in AI literacy by focusing on faculty development in evaluating and applying AI tools within their specific contexts. In the sections that follow, we share the evolution of this phase of our partnership and professional learning journey, including actions aligned to Level 3 of the DEC Dimension 5, Strategic AI Leadership in Higher Education (2025).
6.1. University-Based Professional Learning
As we launched our pilot study, we began sharing our work with colleagues in our elementary education program area. During a summer retreat before the start of the fall 2024 semester, we highlighted what we had learned during the FCTL conference and made the AI stoplight and an AI online module shareable for faculty interested in using them in their courses. We also connected our faculty with other AI leaders in our university and shared relevant AI resources. We added a standing agenda item at our monthly program area meetings in the fall, where we shared AI updates from our work, such as data from our pilot study and anecdotal observations from our teaching experiences with AI.
6.2. Professional Learning at the International Level
During 2024–2025, the role of professional learning in our AI literacy journey evolved significantly. Our initial focus was on seeking out learning opportunities from others to build our understanding of AI in education. However, as our expertise grew, we shifted toward contributing to the professional learning community by sharing our insights and experiences. We took an active role in presenting a featured session for our university’s Writing Across the Curriculum department, engaging colleagues across disciplines in discussions about integrating AI in pedagogy and promoting transparent use. Additionally, we had the opportunity to present our work at two international conferences, expanding our impact and knowledge exchange, and we began publishing work related to our pilot study, contributing to the growing body of research on AI in education and helping to foster a collaborative, reflective learning environment within our academic community.
6.3. Self-Study and Collaborative Inquiry: Forming a Special Interest Group (SIG)
We were interested in building our faculty network within our School of Teacher Education and felt a Special Interest Group (SIG) would offer a structure for organized professional learning and partnership among colleagues. The launch of the SIG was also linked to a voluntary faculty self-reflection survey that we designed to assess personal and professional uses of AI, as well as faculty members’ willingness to integrate AI into their teaching practices. Thirteen faculty members participated, and the results revealed several key trends. While the group was moderately familiar with AI, there was a notable self-reported lack of preparedness to use AI effectively in teaching. Most faculty reported using AI primarily for assessment creation and developing instructional materials, with many expressing a desire for more training and support in incorporating these tools into their courses. Interestingly, there was limited familiarity with AI’s potential applications for tutoring or automatic grading, indicating a gap in knowledge and readiness for these advanced uses. Faculty also expressed concerns about the ethical implications of AI, including issues related to data privacy, plagiarism, and the potential for AI tools to limit students’ writing development. These concerns were identified as significant barriers to adoption. Of the 13 respondents, 10 expressed interest in forming a SIG that could address these issues while exploring AI topics in a structured, collaborative manner. As such, the SIG for AI in K-12 Teacher Education was formed, representing the following program areas: special education, math, science, social studies, reading, and language arts. The SIG meets monthly for professional learning, and we have created a shared drive for AI resources.
Through the SIG, we aim to provide ongoing support, facilitate dialog, engage in research, and share the best practices for integrating AI into education, ensuring that all faculty feel empowered to engage with these emerging technologies.
6.4. Practitioner Partnerships: Hosting a Professional Learning Institute
Looking to expand our work, we secured internal funding to develop a 3-day professional learning institute for K-12 educators focused on AI literacy and applications, which was offered in the summer of 2025. The institute was designed to provide educators with interactive workshops combining hands-on exploration, collaborative discussions, and practical guidance on the effective use of AI in their classrooms. The content was organized around the big ideas of assessment, curriculum development, differentiation, and communication in K-12 settings, with an additional emphasis on ensuring equitable access to AI-enhanced learning for all learners. Presenters included university faculty, district leaders, and classroom teachers. The overarching goal of the institute was to empower educators to use AI ethically and responsibly, equipping them with the tools and knowledge needed to foster meaningful, student-centered learning experiences. This initiative aimed not only to build AI literacy among educators but also to expand our partnerships to include practitioners currently in the field. We hope these new stakeholders can become part of a sustainable network of professional learning to support research, teaching, and learning efforts around AI integration in diverse classrooms.
6.5. Reflecting on Level 3: Strategic AI Leadership in Higher Education
The partnership and professional learning efforts described above align with the
Digital Education Council’s (
2025) framework, specifically Dimension 5, Level 3, by demonstrating strategic AI leadership in higher education through institutional collaboration, pedagogical innovation, and the development of professional learning ecosystems. The university-based professional learning efforts, particularly the integration of the AI stoplight and module sharing, reflect our leadership in faculty training and in embedding critical engagement with AI into coursework. The creation of a SIG and the use of faculty self-reflection surveys exemplify structured, data-informed professional development, contributing to the design of institution-wide AI literacy frameworks. By presenting at international conferences, contributing to research publications, and hosting a professional learning institute for K–12 educators, the team has actively influenced both national and global AI literacy conversations. These actions show a clear commitment to promoting ethical, equitable, and transformative uses of AI in education, fulfilling the DEC framework’s call to lead institutional change and contribute to discourse on responsible AI adoption.
Table 5 details how our actions align with the tenets of Dimension 5, Level 3.
7. Discussion and Discoveries
As we further developed our AI literacy and applied AI in our teaching practices and professional partnerships, we began to experience a “zoom in/zoom out” phenomenon (
Busch-Jensen & Schraube, 2019) across the
Digital Education Council (
2025) AI Literacy Framework. While we could trace our progress through each dimension, we quickly realized that these dimensions were not linear or isolated. Rather, they flowed and intersected in dynamic, ongoing ways. As we experimented with new AI tools and uses in our courses, our focus would shift between critical thinking and ethical considerations, sometimes revisiting foundational knowledge as we encountered new ideas or challenges. Our learning moved fluidly between these dimensions, almost like a continuum where each new application of AI would deepen our understanding in one area while prompting us to reconsider others. This fluidity reflected the complex, interconnected nature of AI literacy, requiring us to constantly adapt our thinking as we applied AI in evolving contexts.
Similarly, the zoom in/zoom out phenomenon (
Busch-Jensen & Schraube, 2019) also applied to the relationship between domain-specific and domain-general knowledge and applications of AI. While Dimension 5 emphasizes domain expertise, we found that learning from AI applications in other fields provided valuable insights that could be adjusted and applied to our own context. For example, methods for teaching critical thinking in STEM fields can be adapted to social science education applications, and ethical considerations regarding HIPAA in healthcare AI can be applied to data privacy related to FERPA in our field. This cross-disciplinary learning highlights the importance of avoiding a siloed approach where AI uses and applications are viewed as isolated to specific fields. Instead, we found that sharing knowledge across disciplines not only enriched our understanding but also created opportunities for innovative teaching practices that bridged gaps between disciplines. By adopting a more interconnected approach, we realized that domain-specific examples could enhance domain-general knowledge, offering valuable perspectives and the potential for unique partnerships moving forward.
8. Conclusions
As scholars in teacher education, we recognize that preparing future generations of diverse citizens to engage ethically with AI requires an investment in professional development and partnership building for pre-service and practicing educators. To foster AI literacy and responsible integration, teachers, teacher leaders, and teacher education faculty must be equipped with the knowledge and strategies to navigate an evolving technological landscape.
Darling-Hammond et al. (
2017) emphasized that continuous, high-quality professional development allows educators to stay abreast of emerging technologies and best practices, ultimately enhancing teaching and learning outcomes. One effective approach is sustained professional learning, whereby teachers actively engage with curated curriculum resources, pedagogical strategies, and discussions on best practices. As the phases of our journey indicated, collaborative efforts among university faculty significantly enhanced the acquisition and application of new ideas. Specific to AI adoption and use in education, we have experienced firsthand how engaging in partnerships has exponentially increased our knowledge, applications, and creative ideas for AI integration. By building and contributing to communities of practice, educators can share insights, co-develop innovative teaching strategies, and critically reflect on their experiences with AI in the classroom. These collaborative efforts have the potential to lead to a profound and practical understanding of AI in teaching and learning.
9. Future Directions
With intentional engagement and a commitment to collaborative inquiry and research, faculty have a unique opportunity to actively shape the future of teaching and learning using AI. As the academic community embraces AI to enhance learning experiences, optimize assessments, and better meet the needs of diverse learners, there is an exciting potential to be part of this transformative shift. Utilizing the
Digital Education Council (
2025) AI Literacy Framework, institutions can assess their level of competency in each dimension and foster strategic AI leadership in higher education. Through collaborative partnerships and professional learning, we can remain at the forefront of advancements while also shaping thoughtful, ethical, and human-centered approaches to AI in education.