Entry

Artificial Intelligence Literacy and Competency in Pre-Service Teacher Education

School of STEM Education, Innovation & Global Studies, Institute of Education, Dublin City University, D09 YT18 Dublin, Ireland
Encyclopedia 2026, 6(4), 76; https://doi.org/10.3390/encyclopedia6040076
Submission received: 16 February 2026 / Revised: 21 March 2026 / Accepted: 26 March 2026 / Published: 27 March 2026
(This article belongs to the Collection Encyclopedia of Social Sciences)

Definition

Artificial Intelligence (AI) literacy and competency in pre-service teacher education refer to programme-level preparation that enables future teachers to work with AI systems effectively, critically, and ethically across university coursework, school placements, and early-career practice. This includes not only capability, but also professional enactment, where teachers apply AI-related knowledge in context-sensitive and pedagogically grounded ways. AI literacy refers to a shared knowledge base for understanding how AI systems generate outputs, how to evaluate and verify AI-supported information, and how to reason about task–tool fit in relation to fairness, privacy, transparency, accountability, academic integrity, equity, and environmental sustainability. AI competency refers to the application of this literacy in routine professional tasks, such as designing and justifying AI-informed teaching, learning, and assessment, protecting students’ and school data, documenting decisions, and revising AI-supported materials after checking for reliability, transparency, accountability, and equity. Together, literacy and competency extend beyond personal use of AI by preparing future teachers to support students’ creative, critical, and ethical engagement with AI, while keeping classroom practice aligned with educational goals, objectives, and values.

1. AI Literacy and AI Competency

Artificial Intelligence (AI) technologies are increasingly present in education, and teacher education programmes are under growing pressure to prepare future teachers to work with AI in responsible and educationally meaningful ways [1,2,3]. In pre-service teacher education, a common question is what teachers need to know about AI and what they need to do with that knowledge in real teaching, learning, and assessment contexts, including situations involving generative AI [4,5,6,7,8]. In this area, two key terms are widely used: AI literacy and AI competency [9,10]. Although these terms are often used interchangeably in international frameworks and the literature [9,10,11], recent scholarship and policy-oriented work increasingly treat them as related but distinct constructs that differ in scope and specificity [9,12,13].

1.1. Distinguishing Literacy from Competency: Implications for Teacher Preparation

In pre-service teacher education, AI literacy is commonly described as knowledge that integrates three areas. First, it includes a basic understanding of how AI systems work [14]. Second, it includes the ability to evaluate AI outputs critically [15]. Third, it includes ethical and socio-technical awareness of AI-related issues such as bias, privacy, transparency, accountability, integrity, equity, and environmental sustainability [9,11]. In short, AI literacy helps pre-service teachers understand what AI is, how it generates outputs, and what risks and impacts may follow when it is used in educational settings [12,16,17]. This understanding of literacy aligns with broader perspectives that view literacy as socially situated and context-dependent [18,19,20].
In comparison, AI competency refers to a teacher’s context-specific capacity to apply AI-related knowledge, skills, and dispositions within their professional role [7,9,21]. This also resonates with established models of teacher knowledge that emphasise the integration of technology, pedagogy, and content [22,23]. In simple terms, AI competency is about using AI in ways that support teaching, learning, and assessment effectively, critically, and ethically under real constraints [6,9,24]. For example, a pre-service teacher may recognise that a generative AI tool can produce fluent but potentially inaccurate explanations of a topic, but AI competency involves deciding whether such output is appropriate for a specific lesson under time and curricular constraints, checking it against reliable sources, and adapting it in ways that support pupils’ learning outcomes while maintaining accuracy, fairness, and transparency. Research on teacher AI competency also highlights motivational and psychological factors, such as confidence, self-efficacy, and a reflective orientation [1,12]. These factors can shape whether pre-service teachers can adopt, critique, and adapt AI tools in beneficial ways in the classroom [6,9,11].
At the same time, major policy-oriented frameworks do not draw a sharp boundary between literacy and competency [1,25,26]. For example, the European Commission and the Organisation for Economic Co-operation and Development (OECD) [27] define AI literacy as the technical knowledge, durable skills, and future-ready attitudes required to thrive in a world influenced by AI. This definition already includes attitudes, and the framework presents learning expectations that resemble competency statements. For pre-service teacher education, the practical value of the distinction is, therefore, not to enforce rigid terminology. Instead, it helps clarify curriculum functions. AI literacy can be treated as the knowledge base and shared language across the programme, while AI competency can be treated as the application of that knowledge in professional tasks [9,12,13]. This framing also supports later sections in this entry, because ethical and socio-technical concerns require both understanding and enacted professional judgement [13,28].

1.2. UNESCO AI Competency for Teachers

UNESCO’s AI Competency Framework for Teachers (AI CFT) [24] describes how teachers can develop AI competency over time. It is organised as a matrix that crosses five aspects with three progression levels (Acquire, Deepen, and Create). The five aspects are human-centred mindset, ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional development. The framework notes that competency development is complex and context-dependent, so the progression levels serve as a reference pathway rather than fixed steps. At the Acquire level, teachers develop the essential competencies needed to evaluate, select, and use AI tools appropriately. The Deepen level focuses on designing meaningful pedagogical strategies that integrate AI. The Create level emphasises innovative use, including more creative configuration of AI systems in education. By crossing the three levels with the five aspects, the AI CFT specifies fifteen competency blocks.
For pre-service teacher education, the AI CFT can be used as a planning reference for staged learning objectives across coursework and school placement. Early learning activities can focus on safe and critical use in low-stakes tasks, including checking the trustworthiness of AI outputs. Later activities can focus on designing, justifying, and evaluating AI-informed teaching and assessment practices in authentic classroom contexts, with clear evidence of professional judgement. The framework also cautions that over-reliance on AI may lead to the atrophy of teachers’ essential competencies, and it, therefore, highlights teacher agency and human accountability.

1.3. EU and OECD AI Literacy Framework

The draft AI literacy framework, developed collaboratively by the EU and the OECD [27], builds on the European Commission’s Digital Competence Framework for Citizens [29], UNESCO’s AI competency frameworks for students [30] and for teachers [24], the Digital Promise AI Literacy Framework [31], and the AI4K12 Five Big Ideas in AI [32]. It defines AI literacy through “technical knowledge, durable skills, and future-ready attitudes” and structures it into four interaction domains: engaging, creating, managing, and designing. Each domain comprises several competencies, each expressed as a learning expectation that reflects technical knowledge, durable skills, and future-ready attitudes.
Within Engaging with AI, the framework expects learners to recognise where AI is used, evaluate whether AI outputs should be accepted, revised, or rejected, and consider how recommendations, bias, and environmental costs may shape decisions. Creating with AI focuses on co-producing ideas and artefacts with generative systems, using AI for ideation, prototyping, and feedback while attending to authenticity, attribution, and clear language that avoids anthropomorphism. Managing AI emphasises human agency in delegation: learners decide whether to use AI for a given task, decompose a problem according to the capabilities and limitations of both AI systems and humans, and then monitor and adjust AI use to keep it aligned with goals and values. Designing AI develops a deeper understanding of how AI works and why it matters, including comparing rule-based and data-driven systems, considering data quality and representation, and evaluating AI systems against criteria such as purpose, users, limitations, and potential impacts. For pre-service teacher education, these four domains provide a practical organiser for mapping learning outcomes to routine tasks in planning, teaching, assessment, and professional judgement.

2. Ethical and Socio-Technical Considerations

When introducing AI literacy and AI competency into pre-service teacher education, ethical considerations address what counts as responsible use of AI for teaching, learning, and assessment, including fairness, privacy, transparency, accountability, academic integrity, student wellbeing, and environmental sustainability [5,24,28,33,34]. Socio-technical considerations emphasise that these risks and benefits do not arise from model behaviour alone [16,26]. Rather, they emerge through the interaction of data and design choices, platform governance and terms of use, and the institutional and relational conditions of schooling, such as assessment regimes, resourcing constraints, and teacher–student power relations [31,33,35,36].
These considerations cut across both AI literacy and AI competency in teacher preparation [9,12,37]. At the AI literacy level, pre-service teachers need the knowledge to understand how and why issues such as bias, hallucination, and data risks can arise, and how these issues may shape what becomes credible, fair, and appropriate in classroom practice [9,17,27,31]. At the AI competency level, they need the capacity to apply this knowledge through context-sensitive professional judgement about task–tool fit, and to enact practical safeguards in routine professional tasks (for example, lesson planning, assessment design, documentation, and communication with students and parents) [24,28,31]. To clarify these issues for pre-service teacher education, this section organises ethical and socio-technical considerations into seven interrelated dimensions: (1) bias and reliability; (2) privacy and data protection; (3) transparency and disclosure; (4) accountability and professional judgement; (5) academic integrity and assessment validity; (6) equity, access, and agency; and (7) environmental sustainability and resource use.

2.1. Bias and Reliability

AI systems can reproduce and amplify bias because outputs reflect patterns in training data and assumptions embedded in model design, including which language varieties, cultural references, and perspectives are treated as “normal” [27,33]. In classroom use, biased outputs can shape examples, explanations, and feedback in ways that advantage some learners while marginalising others, particularly when AI-generated content is treated as neutral or authoritative [16,31]. In addition, generative AI systems can produce plausible but incorrect information, so-called hallucinations, creating risks of inadvertent misinformation when verification is weak [27,38]. This implies that pre-service teachers need the ability to anticipate and detect biased or unreliable outputs, triangulate with trusted sources, and adapt or counterbalance AI-generated materials to protect fairness, inclusion, and content accuracy in everyday classroom practice [28,31,33].

2.2. Privacy and Data Protection

Using AI tools in educational settings involves decisions about what information is entered, where it is stored, and who may access or reuse it, shaped by platform terms of use, institutional policies, and data protection requirements [24,35,39]. Innocent prompts can disclose personal data or sensitive contextual information about children, families, teachers or schools, creating risks that are not always visible at the point of use [29,31,33]. This implies that pre-service teachers need habits of data minimisation, an ability to interpret institutional and platform rules, and practical workflows that safeguard privacy while still supporting legitimate teaching, learning, and assessment needs [29,31,33].

2.3. Transparency and Disclosure

In education, transparency is not only technical but also relational and professional: students, parents, mentors, and schools may reasonably ask how AI contributed to lesson planning, classroom explanations, feedback, or assessment decisions [33,39,40]. Without clear disclosure and accessible explanations, AI use can weaken trust and make it harder for learners to understand how knowledge claims and decisions are formed [7,16]. This implies that pre-service teachers need the capacity to communicate AI use clearly, document what was used and why, and translate relevant risks and limitations into plain language appropriate for school communities, while recognising the practical challenge of keeping pace with changing platforms and policies [28,41,42].

2.4. Accountability and Professional Judgement

AI tools can shift responsibility in subtle ways, particularly when time pressure encourages rapid acceptance of outputs or delegation of judgement about planning for teaching, learning, and assessment [43,44]. However, accountability in schools remains human and institutional: teachers remain responsible for educational decisions, their consequences, and their alignment with curricular and ethical expectations [33,40]. This implies that pre-service teachers need the capacity to make context-sensitive task–tool decisions [31], recognise when AI support is inappropriate, and maintain a clear chain of professional reasoning rather than outsourcing judgement to automated outputs [9].

2.5. Academic Integrity and Assessment Validity

Generative AI challenges conventional signals of authorship and effort, which can undermine assessment validity if tasks no longer capture authentic evidence of learning processes and outcomes [6,45]. At the same time, overly simplistic responses (such as blanket bans) may be difficult to sustain and can create inequities across learners [6]. This implies that pre-service teachers need the capability to design and justify assessments that remain valid in AI-rich environments, including clear expectations for AI use, process evidence where appropriate, and feedback practices that strengthen learning rather than reward undetected automation [2,28,40].

2.6. Equity, Access, and Agency

Access to AI is uneven: differences in devices, paid subscriptions, connectivity, and language proficiency can create new forms of advantage, while local norms about “acceptable use” can reshape who feels entitled to participate [7,34,46]. AI can also redistribute agency by lowering barriers to starting tasks, yet it may encourage over-reliance if learners do not develop strategies for critique, revision, and self-regulation [2,38,47]. This implies that pre-service teachers need the capacity to plan for equitable access and alternatives, teach critical and metacognitive use explicitly, and orchestrate human–AI interaction in ways that strengthen learner agency rather than replacing it [2,7].

2.7. Environmental Sustainability and Resource Use

AI systems rely on energy- and resource-intensive infrastructures, including data centres, specialised hardware, and repeated large-scale computation, meaning that educational use can have environmental implications that are largely invisible at the point of use [16,31,33]. Sustainability concerns are socio-technical because impact is shaped not only by individual choices but also by platform design defaults, institutional procurement, and the frequency and scale of use across cohorts [33]. This implies that pre-service teachers need the capacity to make sustainability-informed task–tool decisions, prioritise lower-impact alternatives where pedagogically appropriate, and embed responsible norms of use into everyday practice (for example, avoiding unnecessary high-compute generation for low-stakes tasks) [1,28].
In summary, these seven considerations describe the ethical and socio-technical conditions that shape responsible AI use in pre-service teacher education. They highlight that AI literacy and AI competency extend beyond technical proficiency. What matters is whether pre-service teachers can recognise risks and trade-offs, then apply context-sensitive professional judgement about task–tool fit when planning for teaching, learning, and assessment. Across concerns such as bias and reliability, privacy and disclosure, academic integrity, equity, and environmental sustainability, the emphasis is therefore on safeguarding learners and maintaining educational values rather than treating AI as a shortcut that replaces professional and authentic learning. The next section turns to approaches to teaching AI literacy and competency, outlining how lecture-based teaching, critical inquiry and reflection, and co-design tasks can help pre-service teachers operationalise these considerations in routine practice.

3. Approaches to Teaching AI Literacy and Competency

In pre-service teacher education, AI-related preparation has been discussed in the literature using a range of approaches rather than a single format [7,26]. This diversity is linked to how AI literacy and AI competency are defined in Section 1, and to the ethical and socio-technical concerns outlined in Section 2. In broad terms, three approaches are frequently described: a lecture-based approach [7,25], a critical inquiry and reflection approach [2,16], and a co-design approach [7]. Across studies and policy-oriented discussions, these approaches are often presented as complementary because different approaches foreground different aspects of knowledge development, professional enactment, and risk awareness in school contexts.

3.1. Lecture-Based Approach

Lecture-based teaching is often described as an entry point for introducing AI literacy in teacher education. It typically appears as standalone lessons, short workshops, or as content integrated into educational technology or digital learning modules [7,14,38]. In this approach, emphasis is usually placed on establishing a basic conceptual model of AI for educational reasoning [7,14]. Common topics include how generative AI produces outputs, why outputs can be biased or incorrect, and what limitations are relevant for classroom use [39].
Lecture-based teaching is also used to introduce practical awareness related to the concerns in Section 2. For example, programmes may cover why bias and hallucination occur, what privacy and data protection risks look like in school settings, and how over-reliance can affect professional judgement [2,38,39]. Environmental sustainability is sometimes included at an awareness level, such as noting that model choice and frequency of use may involve different resource costs [30,33]. In the literature, lecture-based formats are often discussed as providing a shared foundation, while later learning activities are used to support contextual application [2,7].

3.2. Critical Inquiry and Reflection Approach

Critical inquiry and reflection approaches are commonly described in teacher education discussions as a response to the ethical and socio-technical complexity of AI use in schools [2,16]. Compared with lectures that present organised content, inquiry-oriented activities typically involve guided examination of AI outputs, classroom scenarios, and decision-making under institutional constraints [7]. This approach can help pre-service teachers examine, compare, and adapt AI-generated outputs [6,48,49], thereby enabling the development of critical evaluation, reasoning, and context-sensitive decision-making.
In practice, this approach often includes structured evaluation routines, such as comparing AI outputs across prompts, checking outputs against reliable sources, and identifying potential bias or omissions [27,39,42]. For example, pre-service teachers may be asked to compare two AI-generated explanations of the same topic, identify potential bias or factual weaknesses, and justify which version they would adapt for pupils in a specific classroom context. These tasks are frequently linked to concerns such as bias and hallucination, academic integrity, and assessment validity. Reflection activities are also used to examine transparency and disclosure, for example, by asking how teachers might explain AI involvement in planning or feedback to students, mentors, or parents [38,39]. Scenario-based discussion is often used for privacy and data protection, especially in relation to what counts as sensitive information and how school guidance may shape acceptable use [27,30,33]. Within this approach, teacher responsibility and accountability are usually framed as continuing to rest with the teacher, even when AI tools are used.

3.3. Co-Design Approach

Co-design approaches are often described as a way to connect AI-related understanding, skills, and attitudes to professional tasks such as lesson planning, resource development, and assessment design in social learning settings [7,16,50,51,52,53,54]. In these settings, pre-service teachers typically work with peers and sometimes mentor teachers or teacher educators to design, trial, and revise AI-informed teaching, learning, and assessment materials [2,7,53,54]. In this sense, co-design can be understood as a form of professional learning in which pre-service teachers develop, test, and refine AI-related teaching practices through collaborative and iterative processes [7,53,54,55,56,57,58]. Co-design may operate at different levels of practice [7]. It can involve the design of a specific classroom task [54], the planning of a full lesson [55], or the development of a sequence of lessons (e.g., a scheme of work) [55,59]. While much of the literature focuses on instructional materials and lesson planning, co-design may also extend to assessment-related design [7], particularly as recent work highlights assessment as a core dimension of AI competency and notes its relative absence in co-design processes. This may include decisions about how AI is incorporated into student tasks, how learning evidence is generated, and how assessment expectations are communicated across task, lesson, and sequence levels [2].
A recurring feature of co-design is that it makes pedagogical decision-making explicit [2,7,50]. For example, pre-service teacher groups may specify where AI is used (or not used), what role AI plays in a lesson-planning activity, and what role learners play (such as critique, revision, or evidence checking) [28]. In the literature, co-design tasks are frequently linked to the concerns in Section 2 because design work can surface issues of task–tool fit, bias checking, privacy-safe workflows, and disclosure practices [16,28]. Co-design is also discussed in relation to equity, access, and agency, particularly when lesson plans require differentiated learning and assessment activities for learners with different needs, resources, or language backgrounds [7,28]. In some cases, programmes also include discussion of sustainability as part of routine decision-making about whether AI use is necessary for a given task and to what extent AI is engaged [2,27,33].

4. Conclusions

This entry has described AI literacy and AI competency in pre-service teacher education as two related levels of capability for working with AI in educational settings. AI literacy refers to the shared knowledge base that helps pre-service teachers understand how AI systems generate outputs, evaluate and verify AI-supported information, and consider task–tool fit in relation to fairness, privacy, transparency, accountability, academic integrity, equity, and environmental sustainability. AI competency refers to how this literacy is applied in routine professional tasks, such as lesson planning, assessment design, documentation, and the revision of AI-supported materials in ways that are defensible and educationally appropriate.
A consistent message across the frameworks and considerations reviewed is that responsible engagement with AI cannot be reduced to tool operation. Instead, pre-service teachers need clear reasoning routines for professional judgement, including when to use AI, when not to use it, how to verify outputs, and how to manage trade-offs. The ethical and socio-technical dimensions outlined in this entry, including bias and reliability, privacy and data protection, transparency and disclosure, accountability, academic integrity, equity and agency, and environmental sustainability, function as common criteria for both what pre-service teachers should understand and what they should be able to do in practice.
In programme design terms, AI-related preparation is often discussed as an integrated set of learning experiences rather than a single format. Lecture-based teaching can establish shared concepts and language. Critical inquiry and reflection can strengthen evaluation, disclosure, and accountability in context. Co-design can connect knowledge to planning and assessment work, where task–tool fit becomes visible through collective decisions. When these approaches are sequenced across coursework and placements with assessable evidence such as lesson plan artefacts, short rationales, and placement-linked reflection, pre-service teacher education programmes can make progression in AI literacy and AI competency more explicit while keeping attention on learner wellbeing and educational objectives, goals, and values.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author acknowledges the support of the Staff Research Publications Award, Institute of Education, Dublin City University.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Chee, H.; Ahn, S.; Lee, J. A Competency Framework for AI Literacy: Variations by Different Learner Groups and an Implied Learning Pathway. Br. J. Educ. Technol. 2025, 56, 2146–2182. [Google Scholar] [CrossRef]
  2. Le, T.H.; Huynh, L.; Dang, B.; Pham, H.-H.; Nguyen, N.T.V.; Nguyen, A. Development and evaluation of artificial intelligence literacy training for teacher education students. Br. J. Educ. Technol. 2026, in press. [Google Scholar] [CrossRef]
  3. Kohnke, L.; Zou, D.; Ou, A.W.; Gu, M.M. Preparing future educators for AI-enhanced classrooms: Insights into AI literacy and integration. Comput. Educ. Artif. Intell. 2025, 8, 100398. [Google Scholar] [CrossRef]
  4. Tran, P.T.H.; Huynh, L.; Bien, T.-A.; Dang, B.; Nguyen, A. Preparing preservice teachers for generative AI in lesson planning: A process mining study of AI mindset and tool-only training. J. Digit. Learn. Teach. Educ. 2025, 42, 16–32. [Google Scholar] [CrossRef]
  5. Mikeladze, T.; Meijer, P.C.; Verhoeff, R.P. A comprehensive exploration of artificial intelligence competence frameworks for educators: A critical review. Eur. J. Educ. 2024, 59, e12663. [Google Scholar] [CrossRef]
  6. Heine, S.; König, J. Applying artificial intelligence in teacher education: Preservice teachers’ attitudes and reflections in using ChatGPT for teaching and learning. Eur. J. Teach. Educ. 2025, 48, 934–963. [Google Scholar] [CrossRef]
  7. Zhou, X.; Lavicza, Z.; Chiu, T.K.F. Developing teacher AI competence: A systematic review of beliefs, professional learning, and cultural factors. Teach. Teach. Educ. 2026, 172, 105384. [Google Scholar] [CrossRef]
  8. Hsu, H.-P.; Mak, J.; Werner, J.; White-Taylor, J.; Geiselhofer, M.; Gorman, A.; Torrejon Capurro, C. Preliminary Study on Pre-Service Teachers’ Applications and Perceptions of Generative Artificial Intelligence for Lesson Planning. J. Technol. Teach. Educ. 2024, 32, 409–437. [Google Scholar] [CrossRef]
  9. Chiu, T.K.F. AI literacy and competency: Definitions, frameworks, development and future research directions. Interact. Learn. Environ. 2025, 33, 3225–3229. [Google Scholar] [CrossRef]
  10. Karaduman, C. Pre-Service EFL Teachers’ Perceived AI Literacy and Competency: The Integration of ChatGPT into English Language Teacher Education. Sage Open 2025, 15, 21582440251379712. [Google Scholar] [CrossRef]

