Article

AI Literacy: Elementary and Secondary Teachers’ Use of AI-Tools, Reported Confidence, and Professional Development Needs

by Zoi A. Traga Philippakos 1,* and Louis Rocconi 2
1 Theory and Practice in Teacher Education, University of Tennessee, Knoxville, TN 37996, USA
2 Educational Leadership and Policy Studies, University of Tennessee, Knoxville, TN 37996, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1186; https://doi.org/10.3390/educsci15091186
Submission received: 9 August 2025 / Revised: 29 August 2025 / Accepted: 3 September 2025 / Published: 9 September 2025

Abstract

The development of artificial intelligence (AI) has had a significant impact on various sectors, including education. The successful integration of AI tools in classrooms depends largely on teachers’ understanding and knowledge, confidence, and training. This study explores how educators in grades 3 to 12 utilize AI tools, their level of confidence in employing these tools, and their professional development (PD) needs. It also investigates the challenges educators face in integrating AI into their practice. Participants were 242 teachers (79% female). Findings revealed that while most teachers were aware of AI writing tools like ChatGPT, their overall familiarity with and understanding of AI concepts were mixed, with slightly less than half currently using AI tools in their instruction. Secondary teachers consistently showed significantly higher levels of AI knowledge, familiarity, usage, and confidence in AI-related teaching practices compared to their elementary counterparts. The primary PD need identified to boost teacher confidence was “Professional development workshops”; challenges were reflected in lower confidence in areas such as AI-based grading and troubleshooting, alongside desires for clear policies and best-practice guidance.

1. Introduction

The continuous development of artificial intelligence (AI) technologies over the past several years has had profound implications across multiple sectors, including education (Jin et al., 2024; Kalantzis & Cope, 2025b). Defined as “the simulation of human intelligence by a system or a machine” (Xu et al., 2021, p. 1), AI encompasses a range of capabilities such as learning, reasoning, planning, and natural language processing. A particularly transformative subfield, generative AI (GenAI), has garnered substantial attention due to its ability to generate human-like text in response to natural language prompts, made possible through large-scale computing power and extensive training datasets (Buttrick, 2024; Orwig et al., 2024).
While the educational potential of AI is widely acknowledged, its meaningful integration into classroom practice is neither automatic nor uniform. The efficacy of AI in education is contingent upon several key variables: teachers’ familiarity with AI tools, their confidence in using such tools effectively, and the extent and quality of professional development (PD) they receive (Williamson & Eynon, 2020). Without a grounded understanding of how educators are currently interacting with AI, school systems risk implementing policies or technologies that are misaligned with the practical needs and readiness of teachers. To address this gap, the present study seeks to investigate how teachers in grades 3 through 12 are using AI tools, what tools are most commonly adopted, how confident educators feel in their use, the PD they have accessed or require, and the barriers they encounter in implementing AI-enhanced instruction.

1.1. AI Literacy and Educators’ AI Use

AI literacy has emerged as a critical competency for the 21st century. Given the ubiquity of AI in both public and private sectors, there is a growing consensus that K–12 students must develop a foundational understanding of AI before entering the workforce or higher education (see Fosso Wamba et al., 2021; Kong et al., 2023). However, it is also imperative that educators have the knowledge and skills necessary to address AI literacy in their instruction and that they themselves be AI-literate, responsible users.
AI literacy is an essential competency for educators, encompassing the ability to understand, critically evaluate, and effectively interact with artificial intelligence technologies both inside and outside the classroom. This foundational knowledge includes comprehending the fundamental nature of AI systems, their operational mechanisms, inherent strengths and limitations, and associated ethical considerations (Bulut et al., 2024). For teaching professionals, developing AI literacy enables informed pedagogical decision-making regarding the integration of AI tools such as ChatGPT and adaptive learning platforms into educational practice. Furthermore, AI-literate educators are better positioned to identify and address potential issues including algorithmic bias, misinformation propagation, and excessive dependence on automated systems. Perhaps most importantly, teachers with robust AI literacy can effectively guide students in understanding AI’s expanding role in society, preparing them for a future where human-AI interaction will be increasingly prevalent across all domains of life and work (Kalantzis & Cope, 2025a).
However, achieving AI literacy for educators and instructors may be far from straightforward. Yi (2021) conceptualizes AI literacy not simply as the ability to use AI tools, but as the critical capacity to understand their socio-cultural implications, empowering individuals to make informed choices and actively shape their futures in an AI-saturated world. This definition recognizes AI literacy as both a technical and ethical imperative. As generative AI (GenAI) technologies, particularly large language models (LLMs) such as ChatGPT, Claude, and Gemini, rapidly enter educational settings, research must investigate not only how teachers are using these tools, but also their confidence in doing so and the professional development they need to integrate them effectively and responsibly. These dimensions of use, confidence, and professional capacity building are critical to ensuring that AI supports human instruction rather than introducing pedagogical risk or exacerbating inequity (Anson, 2024; Hicks et al., 2024; Kalantzis & Cope, 2025b).

1.2. Teachers’ Use of AI Tools in Practice

A growing body of research has begun to explore how educators engage with GenAI tools in practice. Su and Yang (2023), in interviews with ten kindergarten teachers in Hong Kong, found a range of perceptions about ChatGPT’s usefulness. While some teachers saw value in using it for lesson planning and content generation, others expressed concerns about accuracy and technological constraints. Similarly, Luo et al. (2023), in a study involving early childhood education professors in China and the U.S., found that participants saw AI as a helpful supplemental resource, but not a replacement for human teaching. These findings highlight that teacher engagement with AI is uneven, shaped by factors such as access, perceived value, and knowledge of the tool’s limitations. Other studies have examined broader perceptions of ChatGPT’s potential in education. In a literature synthesis, Grassini (2023) concluded that educators see promise in AI’s ability to assist with instructional tasks. However, this promise is conditional on whether teachers understand the tools well enough to apply them responsibly. Without this understanding, GenAI may be misapplied. At the same time, cutting-edge implementations of AI-enhanced learning environments (e.g., CyberScholar) (Kalantzis & Cope, 2025b; Tzirides et al., 2025) demonstrate that GenAI can support personalized learning. Within such spaces, the teacher’s role shifts from content provider to orchestrator and moderator of the interaction between learners and AI. However, to ensure teachers are well-prepared for these emerging roles, it is essential that they develop knowledge, skills, and understanding of AI tools and their applications, as well as confidence in using them. These competencies cannot be assumed to arise solely through exposure or informal practice.

1.3. Teacher Confidence and AI Literacy

Confidence in instruction, or self-efficacy in the implementation of instructional practices, is essential to instructional delivery (e.g., see Traga Philippakos et al., 2023, 2025). Confidence in the use of AI tools plays a crucial role in whether teachers adopt them, how they integrate them into instruction, and whether they use them critically (Wang & Fan, 2025). Lo (2023) emphasizes that teacher training is essential to responsible AI integration, especially in preventing misuse or superficial adoption. AI literacy and appropriate use of AI tools (e.g., ChatGPT) must become a competency for educators, including the ability to evaluate AI-generated content, maintain ethical standards, and guide students to use AI critically and thoughtfully (Kalantzis & Cope, 2025a; Wang & Fan, 2025).
Confidence is also necessary to mitigate known limitations of GenAI tools. For instance, Wu et al. (2023), in their exploration of ChatGPT’s historical development and technical underpinnings, emphasize the model’s tendency to produce plausible but factually incorrect responses—a phenomenon known as “hallucination”—and its limitations in modeling domain-specific or explicit knowledge. This unreliability poses a particular challenge in educational contexts where accuracy is essential for conceptual understanding and assessment integrity. Glaser (2023) reinforces this point, identifying variability in accuracy, equity of access, and erosion of critical thinking skills as key concerns associated with AI use in education. Teachers who lack confidence or clarity about AI tools may either over-rely on them or avoid them altogether, both of which can undermine instructional quality (also see Li & Wilson, 2025; Wilson et al., 2021).
Moreover, ethical concerns amplify the need for confident and competent teacher oversight (Zhang et al., 2023; Li & Wilson, 2025). Lindebaum and Fleming (2023) caution that ChatGPT can generate “irresponsible knowledge” by disregarding personal and cultural context, while Hill-Yardin et al. (2023) stress that AI tools lack accountability and critical thinking. In this landscape, the teacher must act as a safeguard, ensuring that technology use aligns with pedagogical and ethical standards. Yet this role is only possible if teachers feel adequately equipped to evaluate and manage AI outputs and use these tools effectively.

1.4. The Imperative of Professional Development (PD)

Given the novelty and complexity of GenAI, understanding teachers’ professional development (PD) needs is essential (see Tan et al., 2025). Effective use of AI requires more than access (see Wang & Fan, 2025); it depends on knowledge, confidence, and pedagogical understanding, all of which must be supported through ongoing, targeted professional learning. The potential of GenAI is evident in studies like De Simone et al. (2025), which reported significant gains in Nigerian schools using AI-supported after-school programs. Kalantzis and Cope (2025b) and Tzirides et al. (2025) argue that the role of the teacher is transforming from delivering content to moderating AI. This evolving role, however, requires PD that prepares educators to understand, critically assess, and apply AI in diverse classroom contexts. Further, widespread concern about academic integrity places new demands on teachers to monitor student use of AI (Adeshola & Adepoju, 2023; Hwang et al., 2023; Sison et al., 2023).
Teachers need to feel prepared to develop assignments, assessments, and classroom policies that address AI use in appropriate and ethical ways. Yet, they can only do so if they themselves understand the capabilities and limitations of AI tools. As such, actual use and self-perceived competence are essential areas for empirical investigation. Additionally, PD should address the broader social and ethical responsibilities associated with AI integration. Thus, AI might be used not only to generate content but also to support differentiated instruction, address diverse learner needs, and ensure that students engage with AI in reflective and meaningful ways. However, it remains unclear what specific PD needs teachers have and whether they are receiving adequate support within their school contexts.

1.5. Present Study

The purpose of the present study was to examine teachers’ knowledge of AI tools and their use of these tools in their classrooms. We sought to examine whether teachers used AI tools, which ones they used, if they had attended relevant workshops, and what additional PD they needed. Accurate understanding of AI’s role in education must include not just whether teachers are using AI tools like ChatGPT, but how they are using them, how confident they feel in using these tools, and what PD they need to grow as professionals in the transforming world of AI in education. Teachers’ behavior is shaped by their beliefs, knowledge, and training; if confidence is lacking, AI tools may be underutilized or used inappropriately. If confidence is high but unsupported by accurate knowledge, the risk of misuse increases (Li & Wilson, 2025). Hence, research into teacher use and confidence is foundational to the development of effective policies, training programs, and ethical guidelines that can support the responsible integration of AI in education.
In this study we were also interested in examining differences by elementary and secondary teachers. Based on our previous research showing differences in elementary and secondary teachers’ practices in reading and writing (see Traga Philippakos et al., 2022), we anticipated that secondary teachers would be more likely than elementary teachers to use GenAI tools. Our expectations were also shaped by prior research on teacher AI usage. For example, a national RAND study found that secondary-level language arts and science teachers were nearly twice as likely to use AI tools as elementary teachers, particularly for tasks such as feedback (Kaufman et al., 2024). At the same time, although secondary teachers may be more familiar with AI for writing support, they often report lower confidence in teaching AI concepts without targeted professional development (Tan et al., 2025). Elementary teachers, on the other hand, might be hesitant due to developmental appropriateness or tool complexity (Wilson et al., 2021), particularly when AI lacks customization for younger learners. Because K to 2 students are still developing foundational skills, we excluded teachers of those grades. Instruction at this level focuses heavily on competences related to learning to read, and policy requirements often mandate strict adherence to specific programs. The research questions that guided this investigation were the following:
  • What AI tools are teachers aware of, how familiar do teachers feel with AI in education, and how do teachers rate their understanding of AI in education? Do awareness, familiarity, and understanding differ between elementary and secondary teachers?
  • How do teachers use AI tools in their instruction (including types of use and frequency), which primary tools do they use, and what are their feelings, satisfaction, and confidence regarding these tools? Additionally, what benefits and challenges do they perceive from using AI in the classroom, and do these patterns differ between elementary and secondary teachers?
  • How confident are teachers in using specific AI tools and in performing various AI-related teaching practices? Are there differences between elementary and secondary teachers?
  • What training have teachers received on AI tools in general, how helpful was that training, and do these differ between elementary and secondary teachers? What supports or resources do teachers need to increase their confidence in using AI?

2. Methods

2.1. Participants

A total of 242 teachers in grades 3–12 participated in the study. The majority of respondents came from one school district (93%, n = 225), and 7% (n = 17) came from other school districts in the southeastern United States. The sample was predominantly female (78.7%, n = 185), with 20.9% (n = 49) identifying as male, and 0.4% (n = 1) preferring not to answer. Participants reported a range of teaching experience: the majority (59.2%, n = 129) had more than 10 years of experience, while 22.0% (n = 48) had 1–5 years, 15.1% (n = 33) had 6–10 years, and 3.7% (n = 8) had less than 1 year of experience. Further, participants taught a range of grade levels, with teachers from elementary grades 3–5 (28.9%, n = 70), middle school grades 6–8 (21.5%, n = 52), and high school grades 9–12 (38.4%, n = 93); 27 teachers (11.2%) did not report the primary grade they teach (see Table 1 for detailed demographic information).

2.2. Instrument Development and Procedures

The survey instrument was developed to collect data on a range of variables, including demographic information (e.g., grade level taught, subject(s) taught, years of teaching experience, academic degrees, and gender), teachers’ knowledge and use of various AI educational tools, their familiarity with and understanding of AI in education, their confidence in using AI for various teaching-related practices, and their professional development experiences and needs.
The survey comprised four major sections. The first section focused on teachers’ awareness, familiarity, and understanding of AI tools and technologies. Participants were asked to indicate which categories of AI tools they were aware of, including writing tools (e.g., ChatGPT, Jasper), tutoring systems (e.g., Duolingo, Socratic), and grading and assessment platforms (e.g., Gradescope, Turnitin). They were also asked to identify specific tools (e.g., ChatGPT, Grammarly) they recognized from a provided list and to rate their familiarity with the concept of AI in education on a 5-point scale ranging from “Not at all familiar” to “Extremely familiar.” Similarly, understanding was assessed using a 5-point scale from “Very low” to “Very high.”
The second section examined how teachers use AI tools in their instructional practice. Teachers indicated whether they currently used AI tools in their teaching and, if so, how frequently they did so (e.g., daily, weekly, monthly). They were asked to identify their primary AI tool and to provide an example of how they used this AI tool in their teaching both through open-ended responses. Teachers also reported their feelings about using AI tools, selecting from a range of responses that included positive, neutral, stressed, and opposed sentiments. Satisfaction and confidence with specific tools (e.g., ChatGPT, Grammarly) and with their primary tool were rated on 5-point scales (very low to very high). Additionally, teachers identified perceived benefits from a list (e.g., increased engagement, increased student learning, time savings in grading/planning, personalized learning) and identified challenges through an open-ended response.
The third section assessed teachers’ confidence in performing various AI-related instructional tasks. Participants rated their confidence on six items: using AI tools in the classroom, integrating AI into lesson plans, integrating AI into grading, communicating with families about AI integration, communicating with their district about AI integration, and troubleshooting technical issues. Each item was rated on a 5-point scale from “Very unconfident” to “Very confident”.
The fourth section addressed teachers’ professional development experiences and support needs related to AI. Participants indicated whether they had received general training on AI tools (yes/no), who provided that training (in an open-ended response), and rated the helpfulness of that training using a 5-point scale from “Not at all helpful” to “Extremely helpful”. Teachers also identified the types of support they believed would increase their confidence in using AI, choosing from a list that included items such as professional development workshops, access to tools, clear permission policies, and other (with a write-in response).
After the initial survey was designed, it was shared with two advanced degree researchers who had worked in elementary and secondary settings as English Language Arts teachers. Reviewers were asked to examine the clarity of each of the questions, its phrasing, and whether its classification under a specific section was appropriate and clearly addressed the intended construct (e.g., PD needs). Feedback was received on the phrasing of items, and on the classification of items under specific categories (e.g., knowledge, use, confidence, PD) and revisions were made. Next, the survey was sent to a group of three elementary and four secondary teachers who were asked to review the survey using the same criteria and then complete it. Teachers’ reviews were shared with the first author, who also held a focus-group meeting for teachers to share recommendations for revisions and to explain any areas of confusion. The survey items were then finalized, and the survey was created in Qualtrics. Following approval from the Institutional Review Board, study participants were invited to participate in the online survey. Participation was voluntary, and informed consent was obtained before starting the survey. Four reminders were provided every two weeks, and the survey was closed after eight weeks.

2.3. Data Analyses

Descriptive statistics and percentages were calculated to summarize participants’ understanding, usage, helpfulness, and confidence in using AI. Inferential statistics were employed to examine differences between elementary and secondary teachers. Prior to these main comparisons, preliminary analyses were conducted to explore differences between middle and high school teachers, but no statistically significant differences were observed on our key variables. Therefore, to simplify the presentation of results, we combined middle and high school teachers for all analyses. Chi-square tests of independence (χ2) were used to compare elementary and secondary teachers on categorical variables. The Phi coefficient, ϕ, or Cramer’s V were reported as measures of effect sizes for chi-square tests. Independent samples t-tests were conducted to compare mean scores between elementary and secondary teachers on Likert-scale data. Levene’s test for equality of variances was examined for each t-test to determine whether equal variances could be assumed, and results were reported accordingly. Cohen’s d was calculated and reported as a measure of effect size for t-tests.
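For transparency about the analytic approach, the sketch below illustrates how comparisons of this kind can be computed in Python; it is not the authors’ analysis script, and the data frame and column names (df, level, uses_ai, familiarity) are hypothetical placeholders used only for illustration.

import numpy as np
import pandas as pd
from scipy import stats

def chi_square_with_effect(df, group_col, outcome_col):
    # Chi-square test of independence; the effect size equals |phi| for a
    # 2x2 table and Cramer's V for larger tables. correction=False reports
    # the uncorrected statistic (no Yates continuity correction).
    table = pd.crosstab(df[group_col], df[outcome_col])
    chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    effect = np.sqrt(chi2 / (n * k))
    return chi2, p, dof, effect

def t_test_with_cohens_d(df, group_col, score_col, groups=("elementary", "secondary")):
    # Independent-samples t-test; Levene's test determines whether equal
    # variances are assumed, and Cohen's d uses the pooled standard deviation.
    a = df.loc[df[group_col] == groups[0], score_col].dropna()
    b = df.loc[df[group_col] == groups[1], score_col].dropna()
    _, levene_p = stats.levene(a, b)
    t, p = stats.ttest_ind(a, b, equal_var=levene_p > 0.05)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    return t, p, d

# Example calls with the hypothetical columns:
# chi_square_with_effect(df, "level", "uses_ai")
# t_test_with_cohens_d(df, "level", "familiarity")

When Levene’s test indicates unequal variances, the t-test call above falls back to Welch’s correction, paralleling the “results were reported accordingly” procedure described in this section.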
Open-ended responses from the survey (e.g., details on PD providers) were reviewed and summarized thematically to provide additional context and detail to the quantitative findings (Strauss & Corbin, 1990).

3. Results

Findings are reported by research question. Frequency and mean comparisons between elementary and secondary teachers can be found in Table 2 and Table 3, respectively.
RQ 1: What AI tools are teachers aware of, how familiar do teachers feel with AI in education, and how do teachers rate their understanding of AI in education? Do awareness, familiarity, and understanding differ between elementary and secondary teachers?
Awareness of AI Tools. Teachers were asked to select all categories of AI tools or technologies they were aware of by responding to the question: “Which AI tools or technologies are you aware of?” Overall, the most widely recognized category was AI writing tools (e.g., ChatGPT, Jasper), with 90.0% of teachers indicating awareness. This was followed by AI tutoring systems (e.g., Duolingo, Socratic) (66.5%) and grading and assessment tools (e.g., Gradescope, Turnitin) (40.6%). Fewer teachers reported awareness of other tools or technologies. Comparisons between elementary (n = 76) and secondary (n = 152) teachers (Figure 1) revealed statistically significant differences on the following: AI Writing Tools (χ2(1, N = 215) = 9.40, p = 0.002, ϕ = 0.209), AI Tutoring Systems (χ2(1, N = 215) = 4.04, p = 0.044, ϕ = 0.137), Grading and Assessment Tools (χ2(1, N = 215) = 16.57, p < 0.001, ϕ = 0.278), and Not Aware of Any (χ2(1, N = 215) = 12.21, p < 0.001, ϕ = −0.238).
We next asked teachers about their awareness of specific AI educational tools (i.e., “Which AI educational tools or AI educational technologies are you aware of?”). The list included widely used tools such as ChatGPT, Grammarly, and others (e.g., Jasper, Quillbot, copy.ai, DreamBox), as well as an “Other” option for write-ins. ChatGPT was the most recognized tool, with 92.3% of teachers indicating they were aware of it. Grammarly also had high awareness at 84.1%. Awareness of other listed tools such as Knewton (2.6%), DreamBox (6.9%), Jasper (11.6%), copy.ai (10.3%), Writesonic (2.6%), Quillbot (20.6%), Wordtune (2.1%), Scribbr (20.6%), and Hemingway Editor (3.4%) was considerably lower. A small percentage (3.0%) reported being unaware of any of the specific tools listed. Comparisons between elementary and secondary teachers (Figure 2) regarding their awareness of specific AI educational tools revealed several statistically significant differences in awareness of ChatGPT (χ2(1, N = 215) = 14.181, p < 0.001, ϕ = 0.257), Grammarly (χ2(1, N = 215) = 6.380, p = 0.012, ϕ = 0.172), Quillbot (χ2(1, N = 215) = 8.473, p = 0.004, ϕ = 0.199), Scribbr (χ2(1, N = 215) = 12.537, p < 0.001, ϕ = 0.241), and copy.ai (χ2(1, N = 215) = 8.758, p = 0.003, ϕ = 0.202). Additionally, a significantly higher percentage of elementary teachers selected “I am not aware of any” of the listed specific educational tools compared to secondary teachers, χ2(1, N = 215) = 9.311, p = 0.002, ϕ = −0.208.
Approximately a quarter (23.6%) of teachers specified “Other” tools (beyond the pre-listed options), and a variety of tools were mentioned. The most frequently cited tool was Magic School, which was mentioned by over 30 teachers. Other tools that were mentioned multiple times or by several individual teachers included Brisk (mentioned 10 times), Khanmigo (mentioned 7 times), and Diffit (mentioned 4 times). Additionally, AI features within platforms like Canva and Quizizz/Quizlet were noted, alongside tools such as Perplexity, Photomath, and Turnitin. Several teachers also listed combinations of these and other tools.

3.1. Familiarity with the Concept of AI in Education

Teachers indicated their familiarity with the concept of AI in education on a 5-point scale (1 = Not at all familiar to 5 = Extremely familiar) by responding to the following question: How familiar are you with the concept of AI in education? Overall, approximately a quarter of teachers (26.4%) indicated they were “extremely” or “very familiar” with the concept of AI in education and a quarter of teachers (25.6%) indicated they were “not very” or “not at all familiar” with the concept of AI in education with around half (47.9%) reporting they were “somewhat familiar”. An independent samples t-test revealed that secondary teachers (M = 3.21, SD = 0.85) reported a significantly higher level of familiarity compared to elementary teachers (M = 2.52, SD = 0.92), t(212) = 5.42, p < 0.001, Cohen’s d = −0.79.

3.2. Understanding of AI in Education

Teachers rated their understanding of AI in education on a 5-point scale (1 = Very low to 5 = Very high) by answering: How would you rate your understanding of AI in education? Overall, 16.6% rated their understanding as “very high” or “high”, 46.4% rated their understanding as “moderate,” and 37.1% rated it as “low” or “very low”. Secondary teachers (M = 2.92, SD = 0.91) reported a significantly higher level of understanding compared to elementary teachers (M = 2.33, SD = 0.96), t(129.88) = 4.29, p < 0.001, Cohen’s d = −0.64.
RQ 2: How do teachers use AI tools in their instruction (including types of use and frequency), which primary tools do they use, and what are their feelings, satisfaction, and confidence regarding these tools? Additionally, what benefits and challenges do they perceive from using AI in the classroom, and do these patterns differ between elementary and secondary teachers?

3.3. Current Use of AI Tools

Overall, out of 232 teachers who responded to the question “Do you currently use any AI tools in your teaching?” (yes, no), approximately half reported using AI tools in their teaching (n = 113, 46.7%). A chi-square test indicated a statistically significant association between teacher level and AI tool use, χ2(1, N = 215) = 5.285, p = 0.022, ϕ = 0.157, with a higher percentage of secondary teachers (52.4%) using AI tools compared to elementary teachers (35.7%). For the 113 teachers who reported using AI tools, we asked “How often do you incorporate AI tools into your teaching?” (response options included daily, weekly, monthly, rarely, and never). Around half of teachers reported integrating AI tools weekly (n = 58, 51.3%), followed by monthly (n = 23, 20.4%), rarely (n = 16, 14.2%), and daily (n = 13, 11.5%). A small number of teachers (n = 3, 2.5%) who initially indicated they use AI tools then selected “never” for frequency. A chi-square test indicated no statistically significant difference in the distribution of frequency of AI use between elementary and secondary teachers, χ2(4, N = 101) = 1.892, p = 0.756, Cramer’s V = 0.137.
We then asked teachers to specify how they use AI tools in their teaching through an open-ended response. The most common use, mentioned by approximately 65 respondents, was lesson planning and content creation, where tools like ChatGPT and MagicSchool.AI were frequently used to generate outlines, presentations, exemplars, and guided notes. Also, 50 teachers said they used AI for creating assessments and questions, including quizzes, multiple-choice items, and state test-style questions. AI was also used by 35 teachers to provide feedback on student writing or generate exemplar essays, often through tools like Brisk and MagicSchool. Further, 30 teachers indicated that AI tools such as Diffit were used for text differentiation, including lowering Lexile levels for English Language Learners and special education students. In addition, 25 teachers used AI for rubric creation and grading support, while 20 mentioned using it for professional tasks like drafting emails and newsletters. Additionally, 15 teachers used AI for creating visual aids and presentations using tools like Canva and Gemini. A smaller group (12 teachers) involved students in using AI tools under their guidance or with restrictions, while 8 teachers noted using AI to detect academic dishonesty. Finally, around 10 teachers shared specialized uses of AI, including support for music selection, coding, grant writing, and generating simulated data for research activities. These patterns highlight that teachers are integrating AI primarily as a time-saving tool that enhances planning, differentiation, assessment, and communication.

3.4. Teachers’ Feelings About Using AI Tools in Instruction

Teachers who reported using AI tools in their instruction (n = 113) were asked about their feelings regarding this practice by answering the question: What is the way you feel about using AI in your instruction? A majority (58.4%) expressed positive feelings regarding AI use. The most common response, selected by 43.4% (n = 49), was “I feel great! They save me time.” Another 15.0% (n = 17) reported, “I feel great! Students learn how to work with those tools, too!” A smaller group (8.8%, n = 10) felt neutral (“I do not have any specific feelings about it. I just use it.”). Collectively, feelings of stress due to uncertainty about using AI tools, their correct application, or their effects on student learning were expressed by 8.9% (n = 10) of these AI-using teachers. A very small percentage (1.8%, n = 2) felt it was “not appropriate for students to have any exposure to AI tools.” A chi-square test did not find any differences in feelings about using AI tools in instruction between elementary (n = 25) and secondary (n = 76) teachers, χ2(8, N = 101) = 2.625, p = 0.956, Cramer’s V = 0.161.
Across this group, about one-fifth (22.1%, n = 25) selected “Other”. When asked to elaborate, the most common feelings were conflicted or mixed emotions (n = 10). Teachers appreciated AI’s time-saving and productivity benefits but were concerned about student misuse, ethical issues, and academic dishonesty. Five teachers voiced apprehension about students’ overreliance on AI, erosion of critical thinking and creativity, and uncertainties about accuracy, privacy, and plagiarism. Six teachers expressed excitement or strong support, citing AI’s effectiveness in enhancing instruction, providing examples, and streamlining tasks. Additionally, four teachers felt overwhelmed by the steep learning curve and additional cognitive load AI introduces to an already demanding profession.

3.5. Types of AI Tools Used in Instruction

Teachers who reported using AI were asked to select from a list of AI tools or technologies they use in their teaching by responding to the question: Which of the AI educational tools or AI educational technologies do you use in your instruction? The provided list included tools such as ChatGPT, Grammarly, Jasper, Quillbot, and others, along with an “Other” option for write-in responses. Teachers reported using a variety of specific AI tools (Figure 3). ChatGPT was the most frequently reported tool used (73.1%, n = 76), followed by Grammarly (32.7%, n = 34). Other AI tools listed were used by a smaller percentage of these respondents. Sixty-six teachers selected the “Other” option. Among the tools mentioned, Magic School was the most prominent, mentioned by 39.4% (n = 26). Brisk was the next most common, cited by 13.6% (n = 9). Other tools mentioned included Quizlet (6.1%, n = 4), Perplexity (4.5%, n = 3), Gemini/Notebook LM (4.5%, n = 3), Canva (3.0%, n = 2), Diffit (3.0%, n = 2), and Khanmigo (3.0%, n = 2). Comparisons between elementary and secondary teachers revealed no differences in their use of ChatGPT or Grammarly.
Teacher satisfaction with use. We then asked teachers to rate their satisfaction with each of these AI tools on a 5-point scale from very low to very high (i.e., How would you rate your satisfaction about the use of this AI tool in your instruction?). However, due to the limited number of teachers reporting use of tools other than Grammarly and ChatGPT, only satisfaction scores for these two are detailed. Among the 76 teachers who reported using ChatGPT, satisfaction was overwhelmingly positive, with nearly two-thirds (61.8%) rating it as “very high” or “high”, 36.8% indicating “moderate” satisfaction, and only 1.3% indicating “low” satisfaction. An independent samples t-test comparing elementary (n = 16, M = 4.00, SD = 0.894) and secondary (n = 52, M = 3.85, SD = 0.826) teachers revealed no statistically significant difference in satisfaction scores; t(66) = 0.639, p = 0.525, Cohen’s d = 0.18.
For Grammarly, teachers reported high satisfaction as well, with 57.6% indicating “very high” or “high” satisfaction, 39.4% indicating “moderate” satisfaction, and only 3.0% reporting “low” satisfaction. A comparison between elementary (M = 4.00, SD = 1.054) and secondary teachers (M = 4.05, SD = 0.999) revealed no statistically significant differences; t(28) = −0.127, p = 0.900, Cohen’s d = −0.049.
Teacher confidence. Next, we asked teachers to rate their confidence about using these specific AI tools in their instruction using a 5-point scale ranging from very low to very high (i.e., How would you rate your confidence about the use of this AI tool in your instruction?). As with satisfaction, confidence results are only reported for ChatGPT and Grammarly. For ChatGPT, the majority (56.5%) expressed “high” or “very high” confidence, 39.5% expressed “moderate” confidence, and 3.9% indicated “low” or “very low” confidence. An independent samples t-test compared confidence in using ChatGPT between elementary (n = 16, M = 4.00, SD = 1.10) and secondary (n = 52, M = 3.75, SD = 0.97) teachers and no statistically significant difference in confidence scores was found; t(66) = 0.876, p = 0.384, Cohen’s d = 0.25. For Grammarly, the majority (57.6%) reported “high” or “very high” confidence, 39.4% reported “moderate” confidence, and 3.0% reported “low” confidence (none indicated “very low” confidence). No statistically significant differences between elementary (n = 10, M = 4.20, SD = 0.92) and secondary (n = 20, M = 3.90, SD = 1.02) teachers were found; t(28) = 0.783, p = 0.440, Cohen’s d = 0.30.

3.6. Primary AI Tool Use

Teachers who use AI were asked to specify their primary tool in an open-response format (“What is the AI tool you primarily use in your teaching?”). ChatGPT was the most frequently cited primary tool overall (49.5%, n = 51), followed by Magic School (12.6%, n = 13) and teachers reporting the use of multiple tools (12.6%, n = 13). While a chi-square test indicated a statistically significant difference in the distribution of primary AI tools used between elementary and secondary teachers (Figure 4), χ2(12, N = 95) = 22.578, p = 0.032, Cramer’s V = 0.488, ChatGPT and MagicSchool were the most frequently used by both. For elementary teachers (n = 24), the most common primary tools were ChatGPT (33.3%, n = 8), MagicSchool (20.8%, n = 5), and multiple tools (16.7%, n = 4). For secondary teachers (n = 71), ChatGPT was also the most common primary tool (52.1%, n = 37), followed by multiple tools (12.7%, n = 9), and MagicSchool (11.8%, n = 8).
We then asked teachers to provide an example of how they use this AI tool in their teaching. The most frequent application, based on responses, is lesson planning (38 responses), where AI is used to brainstorm ideas, align content to standards, and generate instructional materials. Assessment development follows closely (n = 28), with tools used to create quizzes, rubrics, and exemplar answers. Writing support (n = 25 mentions) is a major theme, both for students (prompting, editing, feedback) and for teachers (exemplar generation, rubric-aligned grading). Many educators (n = 22) also rely on AI to design worksheets and learning activities, from comprehension questions to graphic organizers. Teachers highlighted the value of AI in differentiating instruction (n = 17), using tools like Diffit to adjust Lexile levels or tailor activities for English Language Learners and students in special education. Other strong themes included teachers’ own productivity and time-saving tasks (emails, slide development).
Further, we asked teachers to rate their satisfaction and confidence with this specific tool on a 5-point scale from very low to very high. Overall, teachers reported high levels of satisfaction with their primary AI tool, with 29.2% indicating “very high”, 41.5% indicating “high”, and 29.5% reporting “moderate” satisfaction (none reported “low” or “very low” satisfaction). An independent samples t-test compared satisfaction scores between elementary (n = 25, M = 4.00, SD = 0.764) and secondary (n = 73, M = 3.97, SD = 0.781) teachers, and no statistically significant difference was noted; t(96) = 0.152, p = 0.879, Cohen’s d = 0.04. Similarly, teachers generally reported high levels of confidence, with 61.5% indicating “very high” or “high” confidence, 31.3% indicating “moderate” confidence, and 7.3% reporting “low” or “very low” confidence. No statistically significant differences in confidence scores were found between elementary (n = 24, M = 3.83, SD = 0.917) and secondary (n = 68, M = 3.74, SD = 0.940) teachers; t(90) = 0.442, p = 0.660, Cohen’s d = 0.10.
When teachers explained their satisfaction with AI tools, the most common reason cited was time savings, mentioned by 41 teachers. Many praised how AI streamlines tasks like planning, grading, and content creation. One teacher noted, “This tool saves me so much time,” while another shared, “It allows me to go on about my day.” Twenty-five teachers emphasized that AI provides a useful starting point, even if they need to personalize the output—“I will use what ChatGPT gives me, but will always reword it to be more personal and specific.” Eighteen teachers highlighted how AI tools improved their instructional quality, such as generating higher-level questions or supporting differentiation. However, 17 teachers expressed concern over inaccuracies or the need for revision, acknowledging AI’s value but noting that “there is always a margin of error.” Additionally, 14 teachers appreciated that the tools are easy to use, while 13 praised the quality of the output, especially for grammar and feedback. Some teachers (n = 10) raised philosophical concerns about AI’s impact on student thinking—“AI is chiseling away at the time and effort students put into thinking.” Others (n = 6) expressed skepticism over student misuse or overreliance, while 6 cautioned against using AI in unfamiliar content areas due to potential inaccuracies. Still, most teachers conveyed overall satisfaction, especially when using AI in tandem with their own expertise and judgment.
Teachers were also asked to explain their confidence in using AI for instruction and responses varied widely. Thirteen respondents expressed high confidence, citing frequent use and familiarity with the tools, including training others. Twelve described moderate confidence, using AI as a starting point but refining its outputs. Ten were still learning or becoming familiar with AI. Meanwhile, nine participants expressed cautious confidence, acknowledging AI’s usefulness but highlighting concerns about inaccuracies and the need to carefully craft prompts. Six respondents reported limited use or low familiarity, leading to lower confidence levels. Four participants linked their confidence to specific tools (e.g., Brisk, Quizizz, Diffit), suggesting tool-specific training may influence perceived competence.

3.7. Observed Benefits of Using AI in the Classroom

Teachers who reported using AI in their instruction were asked, “What benefits have you observed from using AI in your classroom?” The most frequently cited benefit was “Time savings in grading/planning”, selected by 84.9% (n = 90) of teachers. This was followed by “Personalized learning” (50.9%, n = 54), “Improved student understanding” (42.5%, n = 45), and “Increased student engagement” (31.1%, n = 33). Additionally, 8.5% (n = 9) of teachers selected “other” benefits and provided an open-ended response. The benefits included creating instructional materials for multilingual students, assisting with accommodations, improving writing, increasing teacher confidence, providing timely feedback, and offering information without emotional bias. One respondent was uncertain about benefits, and another indicated a desire for more training. Chi-square tests showed no significant differences between elementary and secondary teachers in the observed benefits (p > 0.100 for all analyses).
We asked teachers to express some of the challenges they have faced with using AI in the classroom in an open-ended response. The most frequently reported challenge among research participants was student misuse of AI tools, noted by 41 teachers. This includes concerns about cheating, copying and pasting full responses, and bypassing the learning process. Other significant challenges include the difficulty of crafting effective prompts (n = 10), AI inaccuracies (n = 9), and a lack of training or clear guidance for teachers (n = 9). Additionally, time constraints (n = 8) and the overwhelming variety of tools (n = 6) make integration difficult. Some educators also struggle with AI output lacking nuance (n = 6), ethical uncertainties (n = 5), and student readiness (n = 5). A few teachers noted their own limited familiarity with AI (n = 4), while access issues and technical limitations were each mentioned by two respondents. Notably, 5 participants reported no major challenges, often because they limit student access or only use AI for planning.
RQ 3: How confident are teachers in using specific AI tools and in performing various AI-related teaching practices? Are there differences between elementary and secondary teachers?
We asked teachers to rate their confidence on several AI-related items using a 5-point scale (1 = Very Unconfident to 5 = Very Confident) (i.e., “How confident do you feel about the following items?”). Percentages for each item are reported in Figure 5 and Figure 6. Independent samples t-tests were conducted to compare confidence levels between elementary and secondary teachers. Regarding confidence in using AI tools in the classroom, secondary teachers (M = 3.09, SD = 1.338, n = 145) reported significantly higher confidence in this area compared to elementary teachers (M = 2.36, SD = 1.286, n = 70), t(213) = −3.809, p < 0.001, Cohen’s d = −0.554. When considering integrating AI into lesson plans, secondary teachers (M = 2.94, SD = 1.370, n = 144) again showed significantly higher confidence than elementary teachers (M = 2.43, SD = 1.388, n = 69), t(211) = −2.496, p = 0.013, Cohen’s d = −0.365. Confidence in integrating AI into grading was lower overall, but again secondary teachers (M = 2.68, SD = 1.388, n = 145) were significantly more confident in this aspect than elementary teachers (M = 2.22, SD = 1.327, n = 69), t(212) = −2.325, p = 0.021, Cohen’s d = −0.340. For communicating with families about AI integration, secondary teachers (M = 2.81, SD = 1.323, n = 145) had significantly higher confidence compared to elementary teachers (M = 2.30, SD = 1.344, n = 70), t(213) = −2.655, p = 0.009, Cohen’s d = −0.386. A similar pattern was observed for communicating with the district about AI integration, secondary teachers (M = 2.80, SD = 1.315, n = 144) were significantly more confident than elementary teachers (M = 2.20, SD = 1.258, n = 70), t(212) = −3.169, p = 0.002, Cohen’s d = −0.462. Finally, troubleshooting technical issues related to AI tools garnered the lowest confidence overall, and secondary teachers (M = 2.53, SD = 1.240, n = 144) were significantly more confident in troubleshooting than elementary teachers (M = 2.06, SD = 1.250, n = 70), t(212) = −2.637, p = 0.009, Cohen’s d = −0.384.
RQ 4: What training have teachers received on AI tools in general, how helpful was that training, and do these differ between elementary and secondary teachers? What supports or resources do teachers need to increase their confidence in using AI?
Overall, 24.0% (n = 56) of teachers reported having received training on AI tools in general. A chi-square test of independence indicated a significant difference between elementary and secondary teachers in receiving general AI training, with a higher percentage of secondary teachers (28.3%) reporting general AI training compared to elementary teachers (12.9%): χ2(1, N = 215) = 6.289, p = 0.012, ϕ = −0.171.
We asked the teachers who received general AI training (n = 56) who provided the training. Their open-ended responses indicated a range of sources. The most frequently mentioned providers included colleagues or peer teachers conducting sessions within their schools or districts; district-led PD days and initiatives; and external trainers, conferences, or organizations (e.g., “Ditch That Textbook,” TETA conference). Additionally, some teachers reported self-teaching efforts, utilizing online resources such as Khan Academy, or undertaking university coursework as their source of training.
We asked teachers who received general AI training (n = 56) to rate how helpful that training was on a 5-point scale (where 1 = Not at all helpful to 5 = Extremely helpful). Over half (57.1%) rated it as “extremely helpful” or “very helpful”, 28.6% found it “somewhat helpful,” and 14.3% found it “not very helpful” (no respondents selected “not at all helpful”). No statistically significant differences were found between elementary and secondary teachers, t(48) = −0.046, p = 0.964, Cohen’s d = −0.02. Of the 56 teachers who had received general AI training, 55.4% (n = 31) indicated this training was provided by their organization or state (no differences were found between elementary and secondary teachers, p = 0.363, ϕ = −0.129). Regarding the helpfulness of training provided by their organization or state (n = 30, on a 5-point scale where 1 = Not at all helpful to 5 = Extremely helpful), 46.7% found it “extremely helpful” or “very helpful”, 40.0% found it “somewhat helpful,” and 13.3% found it “not very helpful” (no respondents selected “not at all helpful”); no differences were found between elementary and secondary teachers (p = 0.763, Cohen’s d = −0.16).
Teachers were also asked about the types of support or resources that would help increase their confidence in using AI. The most frequently selected support was “Professional development workshops” (80.3%, n = 175), followed by “More information on best practices” (74.3%, n = 162), “Collaboration with other teachers” (72.9%, n = 159), “Clear permission policies” (67.4%, n = 147), “Access to AI tools” (62.8%, n = 137), “Professional Learning Communities” (42.2%, n = 92), and “Other” (3.7%, n = 8). Chi-square tests of independence compared the selections of elementary (n = 69) and secondary (n = 145) teachers. A significantly higher percentage of elementary teachers (88.4%) than secondary teachers (75.9%) indicated a need for “Professional development workshops,” χ2(1, N = 214) = 4.582, p = 0.032, ϕ = −0.146. No other statistically significant differences were found between elementary and secondary teachers (p’s > 0.100). Open-ended responses for “other” needs primarily centered on teachers expressing disinterest or ethical concerns regarding AI use in education, while others desired dedicated training opportunities, practice with low stakes, and clear school-wide policies on acceptable AI use.

4. Discussion

The present study aimed to investigate teachers’ knowledge and use of artificial intelligence (AI) tools within their educational settings. Specifically, this study sought to explore the extent to which educators independently utilized AI tools, identify the specific tools they employed and their confidence in their use, assess whether they had participated in any relevant training, and determine the types of PD they perceived as necessary. The findings of this study offer insights into grade 3 to 12 educators’ engagement with artificial intelligence (AI) tools, highlighting both the potential and the challenges associated with their integration into educational settings.
The study reveals a notable disparity between educators’ general awareness of AI tools, such as ChatGPT, and their deeper familiarity with and understanding of AI concepts. While 47.9% reported being “somewhat familiar” with AI and 46.4% reported a “moderate” understanding of it, these figures suggest that awareness and familiarity do not necessarily translate into comprehensive knowledge or effective application (Wang & Fan, 2025). This aligns with findings from the RAND Corporation, which reported that only 25% of teachers utilized AI tools in their instructional planning or teaching during the 2023–2024 school year, with variations based on subject area and school characteristics (Kaufman et al., 2024).
ChatGPT emerged as the most recognized and frequently used AI tool among educators, followed by Magic School AI. This trend mirrors broader patterns observed in the field, where tools like ChatGPT have gained prominence due to their versatility and user-friendly interfaces (Wang & Fan, 2025). However, the study also indicates that the actual adoption of AI tools in teaching practices remains at a modest level, with less than half (46.7%) of teachers currently using AI tools. This suggests that while awareness and interest are high, the transition from knowledge and familiarity to consistent application in the classroom is still developing. Concentrated efforts are needed to make this transition systematic and methodical (Kalantzis & Cope, 2025a; Tan et al., 2025).
Moreover, the study highlights that secondary educators generally report higher levels of awareness, familiarity, understanding, and confidence in using AI tools compared to their elementary counterparts. This disparity may be attributed to several factors, including the complexity of content areas, greater exposure to technology, and possibly more flexible curricula at the secondary level. Secondary teachers are more likely to use AI tools for instructional planning, with variations also observed based on subject area and school characteristics (Kaufman et al., 2024). These differences underscore the need for differentiated PD strategies tailored to the specific needs and contexts of educators at various educational levels.
In addition, findings show that educators who use specific AI tools like ChatGPT and Grammarly report positive feelings, satisfaction, and confidence in those tools. However, overall confidence in broader AI teaching practices, such as integrating AI into grading or troubleshooting technical issues, remains lower. This finding is consistent with research that suggests educators’ trust in AI is influenced by their self-efficacy and understanding of AI technologies (Li & Wilson, 2025; Wilson et al., 2021; Viberg et al., 2024). Wilson et al. (2021) observed that although elementary teachers appreciated how Automated Writing Evaluation (AWE) systems saved time and allowed them to focus on higher-level writing concerns, they also felt automated feedback lacked personalization and did not account for students’ developmental stage and effort trajectory, rendering it rather superficial. Further, concerns have been raised about the accuracy of AI grading relative to human grading, with human-provided feedback found to be more accurate than AI-generated feedback (Atasoy & Moslemi Nezhad Arani, 2025). Moreover, ethical issues surrounding fairness pose additional barriers to teacher trust. AWE systems might embed algorithmic biases rooted in unrepresentative training data or skewed human rater norms, potentially disadvantaging certain demographic groups (Bulut et al., 2024).
Further, the study reveals a significant gap in AI-related PD, with only 24.0% of teachers having received any general AI training. This is corroborated by findings from the EdWeek Research Center, which reported that 58% of teachers had received no training on AI tools. Despite this, there is a strong desire for PD, with 80.3% of teachers identifying “Professional development workshops” as a key need to increase their confidence in using AI tools. This indicates a clear demand for structured PD opportunities that equip educators with the knowledge and skills to effectively integrate AI into their teaching practices.
The findings underscore the need for deliberate strategies to support teachers’ use of AI, potentially involving educational institutions and teacher-preparation programs. This includes providing targeted PD, developing clear policies on AI usage, and ensuring equitable access to AI tools across diverse educational contexts (Bulut et al., 2024). As highlighted by the RAND Corporation, disparities in AI adoption are evident, particularly between higher- and lower-poverty schools, suggesting that systemic support is crucial to ensure all educators can leverage AI to enhance teaching and learning.

4.1. Limitations

Several limitations should be considered when interpreting the findings of this study. First, the sample was composed primarily of teachers from a single school district, which may limit the generalizability of the results. While the district provided valuable insight into educator perceptions of AI, the specific policies, demographics, and culture of that district may not reflect those of other regions of the United States.
In addition, this study relied on self-reported data to assess teacher awareness, use, confidence, and perceived institutional support regarding AI. Self-reported measures are inherently susceptible to various biases, including social desirability and recall bias. As a result, the findings may not fully capture actual behaviors or competencies related to AI use in the classroom. Future research could incorporate classroom observations to better examine teachers’ use of AI tools. Furthermore, the cross-sectional design of the study provides only a snapshot in time. This is particularly important given the rapid evolution of artificial intelligence technologies and their applications in education. Teacher perceptions, institutional support, and the availability of tools are likely to change significantly over a short period as new tools and applications are developed; thus, even though this information is valuable, it is necessarily time-bound. Finally, while the study examined specific AI tools, the broader landscape of AI in education is vast and diverse, and findings from a subset of tools may not necessarily apply to emerging technologies.

4.2. Policy Implications

This study highlights several important considerations for education policy at both the district and state levels, especially in light of recent national attention to AI integration in education. With the U.S. Department of Education placing increased emphasis on the ethical and effective use of AI in schools, it is imperative that educational institutions respond proactively. Indeed, the strong desire among educators for PD opportunities focused on AI should translate into policy initiatives. States and districts should invest in accessible, high-quality, and sustained PD programs that are updated regularly to reflect the fast-changing AI landscape. These should not be one-off workshops but part of a comprehensive PD strategy.
We found meaningful differences between elementary and secondary educators in their experiences and needs related to AI. This suggests that PD and support mechanisms should not follow a one-size-fits-all model. Policymakers should support differentiated approaches that address the unique pedagogical contexts and technological readiness of teachers across grade levels. It should also be noted that teachers’ calls for “clear permission policies” signal a critical gap in existing policy frameworks. As AI tools raise questions around data privacy, academic integrity, and equity, districts must develop and disseminate robust ethical guidelines and usage protocols. Clear policies can empower teachers to innovate while ensuring student protections are in place. Policies should support leadership development, infrastructure investments, and a clear, district-wide vision for AI use. When institutions demonstrate strategic confidence, it may, in turn, increase teacher confidence and willingness to engage with AI.

4.3. Research Implications

The findings of this study also point to several important directions for future research. Most notably, there is an urgent need for longitudinal research that tracks how teacher awareness, confidence, and AI use evolve over time, especially as new tools are introduced and as PD initiatives scale. Future studies should also investigate the impact of AI integration on student outcomes, a dimension not explored here. Understanding how AI tools affect student learning, engagement, and equity will be critical for making informed decisions about their classroom use.
Additionally, there is a need to examine which types of PD are most effective in enhancing AI literacy and confidence among educators (also see Tan et al., 2025; Wang & Fan, 2025). Research should explore variations across teacher experience levels, subject areas, and school settings to inform more targeted PD design. Ethical considerations also warrant closer examination (e.g., Adeshola & Adepoju, 2023; Lindebaum & Fleming, 2023). Teachers are increasingly confronted with issues related to data privacy, algorithmic bias, and academic integrity (Kalantzis & Cope, 2025b), but little is known about how they currently navigate these challenges. Research should identify the specific supports teachers need to address these dilemmas responsibly. It is worth noting that the pace of research on AI in education may be constrained by limited funding. As AI becomes a more central component of teaching and learning, investment in educational research must keep pace. Increased funding is essential to support the sustained, rigorous study of AI’s pedagogical, ethical, and institutional implications. Thus, collaborations between institutions of higher education and school settings may be crucial.

5. Conclusions

This study provides valuable insights into the current state of AI adoption among grade 3–12 educators, highlighting both the potential and the challenges associated with integrating AI into educational practices. The findings emphasize the importance of bridging the gap between awareness and deep understanding, fostering confidence in AI tools, and providing targeted PD to support educators in navigating the evolving landscape of AI in education. By addressing these areas, educational institutions can better equip teachers to harness the benefits of AI, ultimately enhancing educational outcomes for students. Research on the impacts of AI and GenAI models in educational settings is still in its infancy, with very few studies available. The existing literature highlights both the potential benefits of AI and the strong need for AI literacy, and concerns about ethics and social implications are widespread (Glaser, 2023; Hwang et al., 2023; Sison et al., 2023; Wu et al., 2023; Zhang et al., 2023). Addressing these ethical and social concerns, along with supporting the competent use of AI tools, is crucial for effective AI literacy.

Author Contributions

Conceptualization, Z.A.T.P.; methodology, Z.A.T.P. and L.R.; software and formal analysis, L.R.; investigation and resources, Z.A.T.P.; writing-original draft, data curation, and writing-review and editing, Z.A.T.P. and L.R.; supervision and project administration, Z.A.T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the University of Tennessee, Knoxville (UTK IRB-24-08494-XM; 21 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this article are not readily available because the data are still being analyzed.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adeshola, I., & Adepoju, A. P. (2023). The opportunities and challenges of ChatGPT in education. Interactive Learning Environments, 32(10), 6159–6172. [Google Scholar] [CrossRef]
  2. Anson, D. W. (2024). The impact of large language models on university students’ literacy development: A dialogue with Lea and Street’s academic literacies framework. Higher Education Research & Development, 43(7), 1465–1478. [Google Scholar] [CrossRef]
  3. Atasoy, A., & Moslemi Nezhad Arani, S. (2025). ChatGPT: A reliable assistant for the evaluation of students’ written texts? Education and Information Technologies. [Google Scholar] [CrossRef]
  4. Bulut, O., Beiting-Parrish, M., Casabianca, J. M., Slater, S. C., Jiao, H., Song, D., Ormerod, C. M., Fabiyi, D. G., Ivan, R., Walsh, C., Rios, O., Wilson, J., Yildirim-Erbasli, S. N., Wongvorachan, T., Liu, J. X., Tan, B., & Morilova, P. (2024). The rise of artificial intelligence in educational measurement: Opportunities and ethical challenges. arXiv, arXiv:2406.18900. [Google Scholar] [CrossRef]
  5. Buttrick, N. (2024). Studying large language models as compression algorithms for human culture. Trends in Cognitive Sciences, 28(1), 187–189. [Google Scholar] [CrossRef]
  6. De Simone, M., Tiberti, F., Barron Rodriguez, M., Manolio, F., Mosuro, W., & Dikoru, E. J. (2025). From chalkboards to chatbots: Evaluating the impact of generative AI on learning outcomes in Nigeria (Policy Research Working Paper No. 11125). World Bank. Available online: http://hdl.handle.net/10986/43212 (accessed on 2 September 2025).
  7. Fosso Wamba, S., Bawack, R. E., Guthrie, C., Queiroz, M. M., & Carillo, K. D. A. (2021). Are we preparing for a good AI society? A bibliometric review and research agenda. Technological Forecasting and Social Change, 164, 120482. [Google Scholar] [CrossRef]
  8. Glaser, N. (2023). Exploring the potential of ChatGPT as an educational technology: An emerging technology report. Technology, Knowledge and Learning: Learning Mathematics, Science and the Arts in the Context of Digital Technologies, 28(4), 1945–1952. [Google Scholar] [CrossRef]
  9. Grassini, S. (2023). Shaping the future of education: Exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences, 13(7), 692. [Google Scholar] [CrossRef]
  10. Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. [Google Scholar] [CrossRef]
  11. Hill-Yardin, E., Hutchinson, M. R., Laycock, R., & Spencer, S. J. (2023). A Chat(GPT) about the future of scientific publishing. Brain, Behavior, and Immunity, 110, 152–154. [Google Scholar] [CrossRef]
  12. Hwang, S. I., Lim, J. S., Lee, R. W., Matsui, Y., Iguchi, T., Hiraki, T., & Ahn, H. (2023). Is ChatGPT a “Fire of Prometheus” for non-native English-speaking researchers in academic writing? Korean Journal of Radiology, 24(10), 952–959. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  13. Jin, F., Song, Z., Cheung, W. M., Lin, C., & Liu, T. (2024). Technological affordances in teachers’ online professional learning communities: A systematic review. Journal of Computer Assisted Learning, 40(3), 1019–1039. [Google Scholar] [CrossRef]
  14. Kalantzis, M., & Cope, B. (2025a). Literacy in the time of artificial intelligence. Reading Research Quarterly, 60(1/e591), 1–34. [Google Scholar] [CrossRef]
  15. Kalantzis, M., & Cope, B. (2025b). Multiliteracies since social media and artificial intelligence. Harvard Educational Review, 95(1), 135–151. [Google Scholar] [CrossRef]
  16. Kaufman, J. H., Woo, A., Eagan, J., Lee, S., & Kassan, E. (2024). Uneven adoption of artificial intelligence tools among U.S. teachers and principals in the 2023–2024 school year. RAND Corporation. Available online: https://www.rand.org/pubs/research_reports/RRA134-25.html (accessed on 1 June 2025).
  17. Kong, S., Cheung, W. M., & Zhang, G. (2023). Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. Journal of Educational Technology & Society, 26(1), 16–30. [Google Scholar]
  18. Li, M., & Wilson, J. (2025). AI-integrated scaffolding to enhance agency and creativity in K-12 English language learners: A systematic review. Information, 16(7), 519. [Google Scholar] [CrossRef]
  19. Lindebaum, D., & Fleming, P. (2023). ChatGPT undermines human reflexivity, scientific responsibility and responsible management research. British Journal of Management, 35, 566–575. [Google Scholar] [CrossRef]
  20. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  21. Luo, W., He, H., Liu, J., Berson, I. R., Berson, M. J., Zhou, Y., & Li, H. (2023). Aladdin’s genie or Pandora’s box for early childhood education? Experts chat on the roles, challenges, and developments of ChatGPT. Early Education and Development, 35(1), 96–113. [Google Scholar] [CrossRef]
  22. Orwig, W., Edenbaum, E. R., Greene, J. D., & Schacter, D. L. (2024). The language of creativity: Evidence from humans and large language models. The Journal of Creative Behavior, 58, 128–136. [Google Scholar] [CrossRef] [PubMed]
  23. Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a “weapon of mass deception” ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human-Computer Interaction, 40(17), 4853–4872. [Google Scholar] [CrossRef]
  24. Strauss, A., & Corbin, J. M. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Sage Publications, Inc. [Google Scholar]
  25. Su, J., & Yang, W. (2023). Powerful or mediocre? Kindergarten teachers’ perspectives on using ChatGPT in early childhood education. Interactive Learning Environments, 32(10), 6496–6508. [Google Scholar] [CrossRef]
  26. Tan, X., Cheng, G., & Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education. Artificial Intelligence, 8, 100355. [Google Scholar] [CrossRef]
  27. Traga Philippakos, Z. A., MacArthur, C. A., & Rocconi, L. M. (2023). Effects of genre-based writing professional development on K to 2 teachers’ confidence and students’ writing quality. Journal of Teaching and Teacher Education, 135(1), 104–145. [Google Scholar] [CrossRef]
  28. Traga Philippakos, Z. A., Rocconi, L., & Voggt, A. (2025). Effects of online professional development on first-grade writing instruction: Coaching plus manual improves teachers’ implementation, confidence, and students’ writing quality. Written Communication, 42(2), 264–300. [Google Scholar] [CrossRef]
  29. Traga Philippakos, Z. A., Voggt, A., Bell, S. M., Munsell, S., & Maples, A. (2022). Teachers’ online instructional practices and challenges during COVID-19: Teacher preparation and professional development. Reading & Writing Quarterly, 39(6), 470–490. [Google Scholar] [CrossRef]
  30. Tzirides, A. O., Galla, M., Cope, B., & Kalantzis, M. (2025). Thinking through AI: Advancing cognitive and collaborative research for AI in education. EdArXiv. [Google Scholar] [CrossRef]
  31. Viberg, O., Cukurova, M., Feldman-Maggor, Y., Alexandron, G., Shirai, S., Kanemune, S., Wasson, B., Tømte, C., Spikol, D., Milrad, M., Coelho, R., & Kizilcec, R. F. (2024). What explains teachers’ trust in AI in education across six countries? International Journal of Artificial Intelligence in Education. [Google Scholar] [CrossRef]
  32. Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities & Social Sciences Communications, 12(1), 621. [Google Scholar] [CrossRef]
  33. Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Journal of Educational Media: The Journal of the Educational Television Association, 45(3), 223–235. [Google Scholar] [CrossRef]
  34. Wilson, J., Ahrendt, C., Fudge, E. A., Raiche, A., Beard, G., & MacArthur, C. (2021). Elementary teachers’ perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers and Education, 168, 104208. [Google Scholar] [CrossRef]
  35. Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q.-L., & Tang, Y. (2023). A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica, 10(5), 1122–1136. [Google Scholar] [CrossRef]
  36. Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X., Wu, Y., Dong, F., Qiu, C. W., Qiu, J., Hua, K., Su, W., Wu, J., Xu, H., Han, Y., Fu, C., Yin, Z., Liu, M., … Zhang, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. [Google Scholar] [CrossRef]
  37. Yi, Y. (2021). Establishing the concept of AI literacy: Focusing on competence and purpose. JAHR, 12(2), 353–368. [Google Scholar] [CrossRef]
  38. Zhang, H., Lee, I., Ali, S., DiPaola, D., Cheng, Y., & Breazeal, C. (2023). Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. International Journal of Artificial Intelligence in Education, 33(2), 290–324. [Google Scholar] [CrossRef]
Figure 1. Teacher Awareness of AI Tool Categories. Percentage of elementary and secondary teachers who reported awareness of general categories of AI educational tools.
Figure 2. Teacher Awareness of Specific AI Educational Tools by Teacher Level. Percentage of elementary and secondary teachers who reported awareness of specific AI educational tools.
Figure 3. Usage of Specific AI Tools. Tool use percentage among teachers who reported incorporating AI into their instruction.
Figure 4. Primary AI Tool Used in Instruction by Teacher Level. Distribution of the primary AI tool reported by elementary and secondary teachers who use AI in their instruction.
Figure 5. Overall Teacher Confidence in Performing AI-Related Teaching Practices. Distribution of confidence levels across all teachers for six different AI-related teaching tasks. Responses were rated on a 5-point scale from “Very Unconfident” to “Very Confident”.
Figure 6. Teacher Confidence in Performing AI-Related Teaching Practices by Teacher Level. Distribution of confidence levels for six AI-related teaching practices between elementary and secondary teachers. Responses were rated on a 5-point scale from “Very Unconfident” to “Very Confident”.
Table 1. Participant Demographics.

Characteristic | Frequency (n) | Percent (%)
Gender
 Female | 185 | 78.7
 Male | 49 | 20.9
 Prefer not to answer | 1 | 0.4
Teaching Experience
 Less than 1 year | 8 | 3.7
 1–5 years | 48 | 22.0
 6–10 years | 33 | 15.1
 More than 10 years | 129 | 59.2
Grade Level
 Elementary (Grades 3–5) | 70 | 28.9
 Middle School (Grades 6–8) | 52 | 21.5
 High School (Grades 9–12) | 93 | 38.4
 Not reported | 27 | 11.2
Subject Taught
 Language Arts | 90 | 41.3
 Math | 57 | 26.1
 Science | 47 | 21.6
 Social Studies | 46 | 21.1
 Art | 8 | 3.7
 Other | 69 | 31.7
Academic Degree
 BA in elementary education | 43 | 19.7
 BA in secondary education | 26 | 11.9
 MA in secondary education | 41 | 18.8
 MA in elementary education | 35 | 16.1
 MA in curriculum and instruction | 32 | 14.7
 EdS | 16 | 7.3
 PhD | 13 | 6.0
 Other | 109 | 50.0
Table 2. Summary of chi-square test results for comparisons between elementary and secondary teachers.

Variable | N | Elementary % | Secondary % | p
Awareness of General AI Tool Categories
 AI Writing Tools | 215 | 80.0 | 93.8 | 0.002
 AI Tutoring Systems | 215 | 55.7 | 69.7 | 0.044
 Grading and Assessment Tools | 215 | 20.0 | 49.0 | <0.001
 Not Aware of Any (General Categories) | 215 | 15.7 | 2.8 | <0.001
 Adaptive Learning Platforms | 215 | 12.9 | 13.1 | 0.96
 Chatbots for Student Inquiries | 215 | 22.9 | 28.3 | 0.399
 Other (General Categories) | 215 | 17.1 | 20.7 | 0.539
Awareness of Specific AI Educational Tools
 ChatGPT | 215 | 82.9 | 97.2 | <0.001
 Grammarly | 215 | 75.7 | 89.0 | 0.012
 Quillbot | 215 | 8.6 | 25.5 | 0.004
 Scribbr | 215 | 7.1 | 28.3 | <0.001
 copy.ai | 215 | 1.4 | 14.5 | 0.003
 I am not aware of any | 215 | 8.6 | 0.7 | 0.002
Currently use AI tools?
 Yes | 215 | 35.7 | 52.4 | 0.022
Types of AI Tools Used
 ChatGPT | 92 | 69.6 | 75.4 | 0.583
 Grammarly | 92 | 47.8 | 29.0 | 0.098
Observed Benefits of Using AI
 Increased student engagement | 101 | 44.0 | 26.3 | 0.096
 Personalized learning | 101 | 60.0 | 50.0 | 0.385
 Time savings in grading/planning | 101 | 80.0 | 86.8 | 0.404
 Improved student understanding | 101 | 36.0 | 43.4 | 0.514
 Other | 101 | 12.0 | 7.9 | 0.532
Supports to help increase confidence in using AI
 Professional Development Workshops | 214 | 88.4 | 75.9 | 0.032
 Access to AI tools | 214 | 68.1 | 60.0 | 0.251
 Collaboration with other teachers | 214 | 73.9 | 71.7 | 0.738
 More information on best practices | 214 | 75.4 | 73.8 | 0.806
 Professional Learning Communities | 214 | 42.0 | 41.4 | 0.928
 Clear permission policies | 214 | 66.7 | 68.3 | 0.814
 Other | 214 | 0.0 | 5.5 | 0.047
Supports to help increase the district’s confidence in your use of AI
 Professional Development Workshops | 212 | 80.0 | 67.6 | 0.060
 Access to AI tools | 212 | 55.7 | 54.9 | 0.914
 Collaboration with other districts | 212 | 58.6 | 54.9 | 0.615
 Professional Learning Communities | 212 | 50.0 | 43.7 | 0.384
 More information on best practices | 212 | 62.9 | 72.5 | 0.151
 Clear permission policies | 212 | 52.9 | 66.2 | 0.060
 Other | 212 | 5.7 | 4.9 | 0.809
PD Experiences
 Received General AI Training | 215 | 12.9 | 28.3 | 0.012
 Received Org/State Provided Training | 50 | 44.4 | 61.0 | 0.363
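For readers who wish to verify the contrasts reported in Table 2, the chi-square comparisons can be approximated from the published percentages. The exact contingency counts are not reported, so the sketch below reconstructs them from the Table 2 percentages and the group sizes in Table 3 (70 elementary and 145 secondary teachers); it is an illustrative re-analysis under those assumptions, not the authors’ analysis script.

```python
# A minimal sketch (not the authors' analysis code): approximating the
# chi-square test for awareness of ChatGPT reported in Table 2
# (elementary 82.9%, secondary 97.2%, N = 215, p < 0.001).
# Contingency counts are reconstructed from the reported percentages and
# the group sizes in Table 3 (70 elementary, 145 secondary teachers).
import numpy as np
from scipy.stats import chi2_contingency

n_elem, n_sec = 70, 145
aware_elem = round(0.829 * n_elem)   # ~58 elementary teachers aware of ChatGPT
aware_sec = round(0.972 * n_sec)     # ~141 secondary teachers aware of ChatGPT

# Rows: elementary, secondary; columns: aware, not aware
observed = np.array([
    [aware_elem, n_elem - aware_elem],
    [aware_sec, n_sec - aware_sec],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# p falls below 0.001, consistent with the value reported in Table 2.
```

The same approach can be applied to the other rows of Table 2, keeping in mind that rounding percentages back to counts can shift borderline p-values slightly.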
Table 3. Summary of independent samples t-test results for comparisons between elementary and secondary teachers.

Variable | Group | n | Mean | SD | p | Cohen’s d
How familiar are you with the concept of AI in education? (1 = Not at all to 5 = Extremely) | Elementary | 70 | 2.52 | 0.92 | <0.001 | −0.79
 | Secondary | 145 | 3.21 | 0.85 | |
How would you rate your understanding of AI in education? (1 = Very low to 5 = Very high) | Elementary | 70 | 2.33 | 0.96 | <0.001 | −0.64
 | Secondary | 144 | 2.92 | 0.91 | |
Satisfaction with ChatGPT (1 = Very low to 5 = Very high) | Elementary | 16 | 4.00 | 0.89 | 0.525 | 0.18
 | Secondary | 52 | 3.85 | 0.83 | |
Satisfaction with Grammarly (1 = Very low to 5 = Very high) | Elementary | 11 | 4.00 | 1.05 | 0.900 | −0.05
 | Secondary | 20 | 4.05 | 1.00 | |
Confidence in using ChatGPT (1 = Very low to 5 = Very high) | Elementary | 16 | 4.00 | 1.10 | 0.384 | 0.25
 | Secondary | 52 | 3.75 | 0.97 | |
Confidence in using Grammarly (1 = Very low to 5 = Very high) | Elementary | 10 | 4.20 | 0.92 | 0.440 | 0.30
 | Secondary | 20 | 3.90 | 1.02 | |
Satisfaction with Primary AI Tool (1 = Very low to 5 = Very high) | Elementary | 25 | 4.00 | 0.76 | 0.879 | 0.04
 | Secondary | 73 | 3.97 | 0.78 | |
Confidence with Primary AI Tool (1 = Very low to 5 = Very high) | Elementary | 24 | 3.83 | 0.92 | 0.660 | 0.10
 | Secondary | 68 | 3.74 | 0.94 | |
How confident do you feel about: Using AI tools in the classroom (1 = Very Unconfident to 5 = Very Confident) | Elementary | 70 | 2.36 | 1.29 | <0.001 | −0.55
 | Secondary | 145 | 3.09 | 1.34 | |
How confident do you feel about: Integrating AI into lesson plans (1 = Very Unconfident to 5 = Very Confident) | Elementary | 69 | 2.43 | 1.39 | 0.013 | −0.37
 | Secondary | 144 | 2.94 | 1.37 | |
How confident do you feel about: Integrating AI into grading (1 = Very Unconfident to 5 = Very Confident) | Elementary | 69 | 2.22 | 1.33 | 0.021 | −0.34
 | Secondary | 145 | 2.68 | 1.39 | |
How confident do you feel about: Communicating with families about AI (1 = Very Unconfident to 5 = Very Confident) | Elementary | 70 | 2.30 | 1.34 | 0.009 | −0.39
 | Secondary | 145 | 2.81 | 1.32 | |
How confident do you feel about: Communicating with the district about AI (1 = Very Unconfident to 5 = Very Confident) | Elementary | 70 | 2.20 | 1.26 | 0.002 | −0.46
 | Secondary | 144 | 2.80 | 1.32 | |
How confident do you feel about: Troubleshooting technical issues with AI (1 = Very Unconfident to 5 = Very Confident) | Elementary | 70 | 2.06 | 1.25 | 0.009 | −0.38
 | Secondary | 144 | 2.53 | 1.24 | |
From your perspective, how confident is your district in you integrating AI into your instruction? (1 = Very Unconfident to 5 = Very Confident) | Elementary | 68 | 2.65 | 0.93 | 0.200 | 0.19
 | Secondary | 145 | 2.83 | 1.02 | |
Helpfulness of General AI Training (1 = Not at all helpful to 5 = Extremely helpful) | Elementary | 9 | 3.67 | 0.87 | 0.964 | −0.02
 | Secondary | 41 | 3.68 | 0.99 | |
Helpfulness of Organization or State Training (1 = Not at all helpful to 5 = Extremely helpful) | Elementary | 4 | 3.50 | 1.29 | 0.763 | −0.16
 | Secondary | 25 | 3.68 | 1.07 | |
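As a worked check on the effect sizes in Table 3, and assuming Cohen’s d was computed with a pooled standard deviation (our assumption, since the formula is not stated in this section), the reported d of −0.79 for familiarity with AI in education can be recovered from the group statistics in the first row:

```latex
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
    = \sqrt{\frac{69(0.92)^2 + 144(0.85)^2}{213}} \approx 0.87,
\qquad
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p} = \frac{2.52 - 3.21}{0.87} \approx -0.79
```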
