Communication

Brief Prompt-Engineering Clinic Substantially Improves AI Literacy and Reduces Technology Anxiety in First-Year Teacher-Education Students: A Pre–Post Pilot Study

by Roberto Carlos Davila-Moran 1,*, Juan Manuel Sanchez Soto 2, Henri Emmanuel Lopez Gomez 3, Manuel Silva Infantes 4, Andres Arias Lizares 5, Lupe Marilu Huanca Rojas 6 and Simon Jose Cama Flores 7

1 Industrial Engineering School, Faculty of Engineering, Continental University, Huancayo 12001, Peru
2 School of Administration and Systems, Faculty of Administrative and Accounting Sciences, Universidad Peruana Los Andes, Huancayo 12002, Peru
3 Technological University of Peru, Huancayo 12002, Peru
4 School of Dentistry, Faculty of Health Sciences, Universidad Peruana Los Andes, Huancayo 12002, Peru
5 Professional School of Secondary Education, Faculty of Education Sciences, National University of Altiplano de Puno, Puno 21001, Peru
6 Professional School of Intercultural Bilingual Education: Early Childhood and Primary Education, Faculty of Education, Juan Santos Atahualpa National Intercultural University of the Central Jungle, Chanchamayo 12615, Peru
7 Professional School of Business Administration, Faculty of Business Sciences, José María Arguedas National University, Andahuaylas 03701, Peru
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(8), 1010; https://doi.org/10.3390/educsci15081010
Submission received: 5 July 2025 / Revised: 1 August 2025 / Accepted: 2 August 2025 / Published: 6 August 2025
(This article belongs to the Special Issue ChatGPT as Educative and Pedagogical Tool: Perspectives and Prospects)

Abstract

Generative AI tools such as ChatGPT are reshaping educational practice, yet first-year teacher-education students often lack the prompt-engineering skills and confidence required to use them responsibly. This pilot study examined whether a concise three-session clinic on prompt engineering could simultaneously boost AI literacy and reduce technology anxiety in prospective teachers. Forty-five freshmen in a Peruvian teacher-education program completed validated Spanish versions of a 12-item AI-literacy scale and a 12-item technology-anxiety scale one week before and after the intervention; normality-checked pre–post differences were analysed with paired-samples t-tests, Cohen’s d, and Pearson correlations. AI literacy rose by 0.70 ± 0.46 points (t(44) = 6.10, p < 0.001, d = 0.91), while technology anxiety fell by 0.58 ± 0.52 points (t(44) = −3.82, p = 0.001, d = 0.56); individual gains were inversely correlated (r = −0.46, p = 0.002). These findings suggest that integrating micro-level prompt-engineering clinics in the first semester can help future teachers engage critically and comfortably with generative AI and guide curriculum designers in updating teacher-training programs.

1. Introduction

The emergence of generative models such as ChatGPT, whose outputs depend critically on prompt quality and on an understanding of their algorithmic limitations, has made artificial intelligence literacy (AI literacy) a basic requirement for teachers in training. However, diagnostic studies reveal substantial gaps between the declarative knowledge of future teachers and their practical know-how (Pei et al., 2025). A scoping review of teacher-education programs confirms this gap across multiple contexts and recommends incorporating systematic AI training into curricula as soon as possible (Sperling et al., 2024).
Within this set of AI-literacy competencies (conceptual knowledge, prompt-writing technique, and ethical awareness), prompt engineering emerges as a critical skill for optimizing interaction with generative AI models (e.g., ChatGPT-4o) and, according to recent evidence, for potentially improving the quality of AI-mediated learning (Lee & Palmer, 2025). Recent professional reports support this claim, showing that mastering templates and iterative prompt-refinement processes increases accuracy, reduces bias, and supports accessible, responsible pedagogical application (Seung, 2025).
The literature also warns that low levels of AI literacy are associated with technology anxiety, reflected in avoidant attitudes toward the practical integration of AI in educational environments (Schiavo et al., 2024). This link suggests that any training proposal should simultaneously address the cognitive and socio-emotional dimensions of AI use.
To measure progress, validated instruments are available, such as the SAIL4ALL AI-literacy scale (Soto-Sanfiel et al., 2024), the Abbreviated Technology Anxiety Scale (ATAS) (Wilson et al., 2022) and, more recently, a 10-item test validated in three countries (Germany, the United Kingdom, and the USA) and two languages (German and English) that assesses university students’ AI literacy in less than five minutes (Hornberger et al., 2025). Reviews of generative AI applications in education published in 2025 underscore the scarcity of controlled experiments that evaluate the effectiveness of teacher-training programs in prompt engineering, leaving open the question of how such training could influence emotional variables such as perceived anxiety (Marzano, 2025).
The participants in this study were first-year students enrolled in a Bachelor’s in Education program with the specialization Innovation and Digital Learning. The program prepares future teachers to design and manage learning experiences in digital and hybrid environments at diverse educational levels. This pilot was embedded in the course Educational Technologies I. Positioning the micro-clinic at this initial stage of their pedagogical formation was intentional: it is when students begin to design lessons, craft assessments and manage heterogeneous classrooms. Accordingly, gains in AI literacy and reductions in technology anxiety are interpreted as proximal indicators that can later underpin core instructional competencies, such as differentiating materials, co-designing rubrics and critically vetting AI outputs for accuracy and bias, tracing a trajectory from foundational literacy to pedagogical enactment.
This pilot study examines the effect of a three-session prompt engineering clinic on (a) AI literacy and (b) technology anxiety in first-year Education students, anticipating an increase of ≥0.5 SD in literacy and a decrease of ≥0.3 SD in anxiety after the intervention.

2. Materials and Methods

2.1. Design and Context

A single-group pre–post quasi-experimental study was conducted over four weeks in the “Educational Technologies I” course at a Faculty of Education in Lima, Peru. This pilot design was chosen because it is not yet feasible to randomize entire first-year sections, and it allows feasibility to be assessed before scaling the intervention. The workshop was integrated as a co-curricular activity, in line with international initiatives that promote brief prompt-engineering workshops for practicing and pre-service teachers (Tong, 2024).

2.2. Participants

The entire cohort (N = 48) of first-year undergraduate students in the teacher-education program was invited; three declined because of timetable clashes, leaving 45 participants (93.7%) who completed every measurement and were included in the analysis. The required sample size had been calculated a priori with G*Power/PASS for a two-tailed paired t-test, d = 0.50, α = 0.05, and power = 0.80, yielding a minimum of 34 cases; the final sample therefore represents 133% of that requirement (Cohen, 2013; NCSS, 2023). The average age was 18.9 ± 1.2 years; 71% were women, a proportion consistent with regional data showing that female participation in Latin American–Caribbean teaching ranges from 94% in early-childhood education to 59% in secondary education (UNESCO, 2024), while in Peru, women represent 63% of the national teaching staff (INEI, 2023). All analyses were performed on these 45 complete cases (full-protocol approach). Participation was voluntary and unpaid.
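For transparency, the a priori sample-size calculation can be reproduced with the pwr package in R; the call below is a minimal sketch under the parameters reported above and is not the original G*Power/PASS session.

```r
# A priori power analysis for a two-tailed paired t-test
# (d = 0.50, alpha = 0.05, power = 0.80); minimal sketch, not the original G*Power/PASS output.
library(pwr)

pwr.t.test(d = 0.50, sig.level = 0.05, power = 0.80,
           type = "paired", alternative = "two.sided")
# returns n ≈ 33.4 pairs, rounded up to 34 complete cases
```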

2.3. Intervention

The three-session workshop (120 min each) was designed according to contemporary prompt engineering frameworks applied to higher education (Federiakin et al., 2024; OpenAI, 2023). The program structure addressed three progressive axes: fundamentals and ethics of generative AI; prompt templates and iterative refinement; and critical evaluation with practical application in micro-lessons.
Session 1 introduced core concepts and ethical guidelines of generative AI (LO1–LO2). Session 2 practised prompt-template families and iterative refinement techniques (LO3–LO4). Session 3 applied critical evaluation criteria to AI outputs and incorporated them into micro-lesson plans (LO5–LO6). By the end of the clinic, participants were expected to: (LO1) explain basic LLM architecture and limitations, (LO2) outline three ethical principles for classroom use, (LO3) compose prompts with role-task-context framing, (LO4) iteratively refine prompts to improve relevance and bias mitigation, (LO5) evaluate AI outputs against curricular standards, and (LO6) draft a lesson snippet that embeds AI-generated materials responsibly. A concise syllabus with activities and rubrics is provided in Appendix A, Table A1 to facilitate replication.
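For illustration, a role–task–context prompt of the kind practised under LO3 (a hypothetical example, not a verbatim prompt from the sessions) might read: “Role: You are an experienced primary-school reading teacher. Task: Generate five comprehension questions of increasing difficulty for the attached text. Context: The students are 9–10 years old and two of them read below grade level; keep the vocabulary simple and avoid cultural references they may not know.”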
Sessions were led by Carlos Álvarez Solís, PhD (Information and Knowledge Engineering) and one graduate teaching assistant, giving a facilitator-to-student ratio of 1:22. The syllabus drew on evidence-based programs that have enhanced academic performance and instructional design after intensive prompt-engineering training in engineering degrees (Perozzo et al., 2025) and teacher-education contexts (Kang et al., 2025).
All activities were conducted in the university’s main computer laboratory, where each participant used a networked desktop PC (Windows 11) or a personal laptop; access to ChatGPT-4o was provided through the institutional licence. Each session integrated guided demonstrations, practical exercises with ChatGPT-4o, and discussions oriented towards the identification of biases and limitations in the results generated by AI.

2.4. Instruments

Two standardized scales were used to measure key study variables. The first was the SAIL4ALL-12, a 12-item abbreviated version with a five-point Likert format, which assesses dimensions related to the knowledge, use, evaluation, and ethics of artificial intelligence (Lintner, 2024; Soto-Sanfiel et al., 2024). The second was the Abbreviated Technology Anxiety Scale (ATAS-12), composed of 12 items also in Likert format (1 to 5), designed to measure anxiety towards the use of digital technologies; its original validation reported high internal consistency (α = 0.91; ω = 0.95) in a university population (Wilson et al., 2022). Both scales were translated into Spanish using a double back-translation process and review by a panel of six experts, following the guidelines proposed by Beaton et al. (2000) for the cross-cultural adaptation of instruments. Content validity was satisfactory: item-level CVI values ranged from 0.83 to 1.00 and the scale-level average CVI (S-CVI/Ave) was 0.94. For example, the English item “I can formulate follow-up prompts to refine ChatGPT’s answers” was rendered in Spanish as “Puedo formular indicaciones de seguimiento para mejorar las respuestas de ChatGPT”. Internal reliability (Cronbach’s α) was estimated for pretest and posttest measurements.
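As an indication of how the content-validity figures are derived, the item-level CVI is the proportion of experts rating an item as relevant (typically 3 or 4 on a 4-point relevance scale), and S-CVI/Ave is the mean of the item-level values. The R sketch below uses the six experts and 12 items described above but simulates the individual ratings, so it is illustrative only.

```r
# Content validity index: illustrative sketch with simulated expert ratings.
# ratings: 6 experts (rows) x 12 items (columns), relevance rated 1-4 (values here are simulated).
set.seed(1)
ratings <- matrix(sample(3:4, 6 * 12, replace = TRUE), nrow = 6)

i_cvi <- colMeans(ratings >= 3)   # item-level CVI: proportion of experts rating 3 or 4
s_cvi <- mean(i_cvi)              # scale-level average CVI (S-CVI/Ave)
round(i_cvi, 2); round(s_cvi, 2)
```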

2.5. Procedure

Participants completed the pretest (T0) one week before the workshop, attended the three consecutive clinics (weeks 1–3), and answered the posttest (T1) one week later. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Universidad Continental (Acta N.º 045-2025-CEUC, 14 February 2025). Procedures complied with the Peruvian Personal Data Protection Law N.º 29733, guaranteeing anonymity and exclusive academic use of the information (DLA Piper, 2023). Written informed consent was obtained from all participants.

2.6. Statistical Analysis

The normality of differences was inspected using the Shapiro–Wilk test. Because each outcome is the mean of 12 Likert-type items—an aggregation that yields approximately continuous, interval-level data—parametric tests are considered appropriate (Carifio & Perla, 2008; Norman, 2010). For normally distributed variables, a paired Student’s t-test was applied; otherwise, the Wilcoxon test was used. Effect sizes were estimated with Cohen’s d for related samples and their 95% CIs (Lakens, 2013). Cohen’s d was computed as mean Δ/SD Δ. In addition, Pearson’s correlation was calculated between changes (Δ) in AI literacy and anxiety. The significance level was set at α = 0.05 (two-tailed). All analyses were performed in R 4.4.0 “Puppy Cup”.
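The analysis pipeline can be summarized in a few lines of R. The sketch below assumes a data frame `df` with columns `lit_pre`, `lit_post`, `anx_pre`, and `anx_post` (hypothetical names) and mirrors the steps described above without reproducing the study data.

```r
# Pre-post analysis sketch; column names are hypothetical placeholders.
d_lit <- df$lit_post - df$lit_pre            # change scores, AI literacy
d_anx <- df$anx_post - df$anx_pre            # change scores, technology anxiety

shapiro.test(d_lit); shapiro.test(d_anx)     # normality of the differences

t.test(df$lit_post, df$lit_pre, paired = TRUE)   # paired t-test, AI literacy
t.test(df$anx_post, df$anx_pre, paired = TRUE)   # paired t-test, technology anxiety

mean(d_lit) / sd(d_lit)                      # Cohen's d for related samples (mean delta / SD delta)
mean(d_anx) / sd(d_anx)

cor.test(d_lit, d_anx, method = "pearson")   # Pearson correlation between change scores
```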

3. Results

3.1. Descriptives and Reliability

As shown in Table 1, AI literacy increased by 0.9 SD (t(44) = 6.10, p < 0.001, d = 0.91), while technology anxiety decreased by 0.6 SD (t(44) = −3.82, p = 0.001, d = 0.56). Both instruments showed high internal consistency (α > 0.88), and normality assumptions were met (Shapiro–Wilk, p > 0.10).

3.2. Visualization of Changes

Figure 1 shows a marked increase in AI literacy from 2.85 ± 0.54 to 3.55 ± 0.50 points out of 5 (p < 0.001), a relative improvement of ~25%. In parallel, technology anxiety decreased from 3.30 ± 0.65 to 2.72 ± 0.61 points (p = 0.001), a reduction of ~18%. The error bars show the standard deviation, indicating that dispersion remained stable after the intervention. These changes are consistent with the large (AI literacy, d ≈ 0.9) and medium (anxiety, d ≈ 0.6) effect sizes reported in Table 1 and point to a substantial impact of the clinic on both the technical competence and the digital well-being of the cohort.

3.3. Correlation Between Variations

Table 2 summarizes the association between individual variations in AI literacy and technology anxiety after the intervention. Pearson’s coefficient was r = −0.46 with n = 45, indicating a statistically significant inverse correlation of moderate magnitude (p = 0.002). In other words, participants who experienced the greatest increase in AI literacy tended to show the steepest declines in anxiety; this pattern reinforces the hypothesis that strengthening technical competence can contribute to digital well-being.
Figure 2 confirms a clear inverse trend: the greater the increase in literacy, the greater the decrease in anxiety. The negative slope, consistent with r = −0.46, runs through the point cloud with a narrow confidence interval, reinforcing the robustness of the relationship. Despite some dispersion, especially at the extreme values, most participants cluster in the lower right quadrant (high literacy gain, strong reduction in anxiety) or near the center, indicating that the effects were not limited to a few outliers but were distributed relatively consistently across the cohort.
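A figure of this kind can be reproduced with ggplot2. The sketch below assumes the change scores `d_lit` and `d_anx` from the analysis sketch above and is illustrative rather than the original plotting code.

```r
# Illustrative reconstruction of Figure 2 (not the original plotting code).
library(ggplot2)

plot_df <- data.frame(d_lit = as.numeric(scale(d_lit)),   # standardized change scores
                      d_anx = as.numeric(scale(d_anx)))

ggplot(plot_df, aes(x = d_lit, y = d_anx)) +
  geom_point() +
  geom_smooth(method = "lm", se = TRUE) +                  # linear fit with 95% confidence band
  labs(x = "Δ AI literacy (z)", y = "Δ technology anxiety (z)")
```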

3.4. Qualitative Observations

At the end of the third session, open-ended comments were collected from 42 participants. The qualitative analysis revealed three central themes: first, immediate applicability, exemplified by the statement “I now know how to guide ChatGPT to plan lessons in minutes”; second, reduction of uncertainty, expressed in phrases such as “I lost the fear of ‘getting it wrong’ with AI; I know how to check for bias”; and finally, suggestions for improvement, mostly requests for more discipline-specific examples and more practice in creating AI-generated rubrics.

4. Discussion

The intensive prompt engineering clinic applied to 45 first-year student teachers raised AI literacy by 0.70 ± 0.46 points (d = 0.91) and reduced technology anxiety by 0.58 ± 0.52 points (d = 0.56). The changes were inversely and moderately correlated (r = −0.46, p = 0.002); it is possible that greater AI proficiency attenuates distress, although controlled studies are required to confirm the causal direction. These findings support the hypothesis and provide initial evidence that ultra-brief interventions can close cognitive and emotional gaps in teacher education.
The results extend the scarce empirical evidence on targeted AI-literacy interventions in education programs. The 0.9 SD gain obtained in our pilot falls within the range of large effects reported for comparable intensive interventions. For example, a five-week AI-literacy module for prospective physics teachers recorded even larger improvements (Δ ≈ +2.8 points; d ≈ 3.0) (Abdulayeva et al., 2025), whereas our three-session clinic still produced a large effect on AI literacy (d = 0.91). The accompanying drop in anxiety corroborates the negative relationship between perceived competence and technological discomfort described in studies of AI acceptance among undergraduates (Schiavo et al., 2024).
As for the prompt-engineering focus, massive open online courses from Vanderbilt and IBM highlight the need to master prompting patterns to harness the creative potential of ChatGPT (IBM, 2024; Vanderbilt University, 2023). Likewise, a recent professional report stresses that prompt-engineering skills will be critical for 21st-century teachers by facilitating instructional differentiation, reducing administrative burden, and strengthening student digital literacy (Montalvo, 2025), and such training is already being incorporated by nearly half of U.S. school districts (Diliberti et al., 2025). The significant gains in our cohort support these predictions and complement teacher-perception data collected in Latin America, where AI adoption still faces ethical and authorship misgivings (Bernilla Rodriguez, 2024; Díaz Vera, 2025; Ramírez Chávez & Litardo Caicedo, 2025; Ramírez Martinell & Casillas Alvarado, 2024).
From a methodological perspective, Cronbach’s alpha values ≥ 0.88 support the internal consistency of the SAIL4ALL-12 and ATAS-12 scales, in line with their original validations (Wilson et al., 2022) and with the reliability synthesis published in the systematic review of AI-literacy instruments (Lintner, 2024).
Embedding early prompt-engineering clinics may accelerate the adoption of generative AI in teacher-education practice and reduce technology anxiety, goals aligned with the recommendations of the RAND Education & Labor report, which shows that nearly half of U.S. districts already train their teachers to address fears and take advantage of AI tools (Diliberti et al., 2025). The format of three 120-min sessions appears replicable and scalable: a complete syllabus is available in Appendix A, Table A1 to facilitate adoption by other programs, the clinic fits without overloading the Educational Technologies I course, and it is consistent with the ethical recommendations on autonomy and responsible use of AI outlined by EDUCAUSE (Strunk & Willis, 2025). Finally, the observed correlation between greater literacy gains and larger anxiety reductions suggests the value of addressing technical skills and digital well-being in an integrated manner: surveys of university students show that AI literacy is associated with lower technology anxiety (Schiavo et al., 2024), and early critical co-discovery studies with teachers indicate that a combined approach enhances competence as well as ethical reflection and self-care.
For illustration, a future language arts teacher can prompt a model to generate leveled reading questions and culturally responsive summaries, iteratively refining constraints (length, vocabulary level, local references) until they align with curricular standards. A mathematics preservice teacher might elicit step-by-step solutions that target common misconceptions, then transform the output into formative quizzes and error-analysis activities. In science, students can co-create lab-report rubrics or safety checklists and verify each criterion against national guidelines. In social studies, they may request multiple historical viewpoints or primary-source excerpts and critically inspect ideological bias before classroom use. Even in cross-curricular projects, candidates can script prompts that require the model to label sources, justify claims, or provide alternative modalities (audio, infographic) for diverse learners. These scenarios exemplify how improved prompt formulation (roles, constraints, iterations) and critical vetting translate literacy gains into tangible lesson-planning, differentiation, and assessment practices.
The main limitations of this study include, first, the absence of a control group, which prevents ruling out alternative explanations based on maturation or external events. Second, the sample was self-selected; the high participation rate may reflect above-average interest in technology, restricting generalizability. Third, the modest sample size from a single university reduces power for subgroup analyses and limits extrapolation. Fourth, the outcomes were measured with self-reported scales rather than observed performance in authentic AI-teaching tasks. Finally, the brief follow-up period precludes conclusions about long-term skill retention and transfer to classroom practice. Accordingly, all curricular implications advanced herein should be read as exploratory. Stronger causal claims and institutional recommendations will require randomized comparisons, authentic performance metrics, and longitudinal evidence of sustained pedagogical use.
For future studies, it is recommended, firstly, to conduct randomized controlled trials comparing face-to-face, hybrid and online formats, incorporating both placebo groups and passive controls. In addition, longitudinal follow-ups of at least six months should be carried out to assess the stability of the literacy achieved and its impact on genuine teaching practices. It is equally crucial to include authentic performance indicators, such as the quality of AI-generated lesson plans, along with behavioral metrics of ChatGPT use. These investigations should be based on multicenter samples stratified by gender, socioeconomic status, and prior digital experience in order to explore potential moderators. Finally, cost-benefit analyses are needed to quantify the feasibility of scaling this format in the framework of national teacher training policies.

5. Conclusions

The three-session prompt engineering clinic proved to be a brief, high-impact intervention: it increased AI literacy by almost one standard deviation and reduced technology anxiety with a medium-sized effect, with a moderate inverse relationship between the two changes. Taken together, these results suggest that embedding practical prompt-design activities early in the curriculum may strengthen both technical competencies and the digital well-being of future teachers.
In light of these findings, micro-clinics in prompt engineering may constitute a viable component of broader teacher-education strategies; however, our single-site, self-report, pre–post design warrants caution. Rather than advocating immediate institutionalization, we present these outcomes as preliminary evidence to inform subsequent pilots and controlled, multi-site studies. Future iterations should incorporate classroom performance indicators (e.g., quality of AI-assisted lesson plans), authentic assessment artifacts, and longer follow-ups to examine durability and transfer. Only after such evidence accumulates should programs consider embedding mandatory modules across the curriculum, ideally coupled with ethical reflection and ongoing monitoring.

Author Contributions

Conceptualization, R.C.D.-M. and J.M.S.S.; methodology, R.C.D.-M. and H.E.L.G.; software, H.E.L.G.; validation, R.C.D.-M., J.M.S.S. and H.E.L.G.; formal analysis, R.C.D.-M.; investigation, J.M.S.S. and M.S.I.; resources, M.S.I. and A.A.L.; data curation, R.C.D.-M. and L.M.H.R.; writing—original draft preparation, R.C.D.-M.; writing—review and editing, J.M.S.S. and S.J.C.F.; visualization, L.M.H.R.; supervision, J.M.S.S.; project administration, R.C.D.-M.; funding acquisition, S.J.C.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request. The dataset contains information that could compromise the privacy of study participants and is therefore not publicly shared. Access will be granted to qualified researchers who present an ethically approved protocol and agree to a data-use agreement.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Session structure for critical literacy in generative AI: objectives, activities, and evidence of learning.

Session 1. Foundations and Ethics of Generative AI (120 min)
Learning objectives: LO1 Describes the basic architecture of an LLM and its limitations. LO2 State three ethical principles for teaching use (bias, attribution, privacy).
Key activities and resources:
  • Mini lecture + interactive demo: “How does an LLM predict tokens?”.
  • Guided debate with ethical cases (hallucination, copyright, data privacy).
  • Short reading: OpenAI (2023) Teaching with AI, section “Mitigating risks”.
Evidence/assessment: Written reflection (150 words) on an ethical risk + mitigating action.

Session 2. Prompt Templates and Iterative Refinement (120 min)
Learning objectives: LO3 Write structured prompts using the role–task–context outline. LO4 Apply the refinement loop (Goal → Prompt → Evaluate → Iterate).
Key activities and resources:
  • Practical workshop with ChatGPT-4o: apply the Person–Goal–Constraints prompt template and analyse/refine outputs using the PARTS matrix (Persona, Aim, Recipient, Theme, Structure) (Monash University Library, 2024; Park & Choo, 2025).
  • Pairs review prompts and add clarity/bias criteria.
Evidence/assessment: Ten-item checklist on prompt quality (peers + instructor).

Session 3. Critical Evaluation and Micro-Lesson Design (120 min)
Learning objectives: LO5 Evaluate AI outputs with a rubric of relevance, accuracy, and bias. LO6 Integrate AI-generated material into a micro-lesson plan aligned with standards.
Key activities and resources:
  • Assessment rubric (3 criteria × 4 levels).
  • Activity: generate and refine a set of differentiated questions for a curriculum topic.
  • Design a micro-lesson (15 min) using the UDL template.
Evidence/assessment:
  - Rubric applied to the final version of prompts and responses.
  - Delivery: micro-lesson plan (1 page) with indication of risks and mitigations.

References

  1. Abdulayeva, A., Zhanatbekova, N., Andasbayev, Y., & Boribekova, F. (2025). Fostering AI literacy in pre-service physics teachers: Inputs from training and co-variables. Frontiers in Education, 10, 1505420. [Google Scholar] [CrossRef]
  2. Beaton, D. E., Bombardier, C., Guillemin, F., & Ferraz, M. B. (2000). Guidelines for the process of cross-cultural adaptation of self-report measures. Spine, 25(24), 3186–3191. [Google Scholar] [CrossRef] [PubMed]
  3. Bernilla Rodriguez, E. B. (2024). Docentes ante la inteligencia artificial en una universidad pública del norte del Perú. Educación, 33(64), 8–28. [Google Scholar] [CrossRef]
  4. Carifio, J., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150–1152. [Google Scholar] [CrossRef] [PubMed]
  5. Cohen, J. (2013). Statistical power analysis for the behavioral sciences. Routledge. [Google Scholar] [CrossRef]
  6. Diliberti, M. K., Lake, R. J., & Weiner, S. R. (2025). More districts are training teachers on artificial intelligence: Findings from the American School District Panel. Available online: https://www.rand.org/pubs/research_reports/RRA956-31.html (accessed on 28 June 2025).
  7. Díaz Vera, J. P. (2025). Más allá de los algoritmos: Desafíos y percepciones docentes sobre la inteligencia artificial generativa en la enseñanza virtual. Revista de Investigación en Tecnologías de la Información, 13(29), 141–153. [Google Scholar] [CrossRef]
  8. DLA Piper. (2023). Data protection laws in Peru—Data protection laws of the world. Available online: https://www.dlapiperdataprotection.com/index.html?c=PE&t=law (accessed on 26 June 2025).
  9. Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., & Maur, A. (2024). Prompt engineering as a new 21st century skill. Frontiers in Education, 9, 1366434. [Google Scholar] [CrossRef]
  10. Hornberger, M., Bewersdorff, A., Schiff, D. S., & Nerdel, C. (2025). Development and validation of a short AI literacy test (AILIT-S) for university students. Computers in Human Behavior: Artificial Humans, 5, 100176. [Google Scholar] [CrossRef]
  11. IBM. (2024). IA Generativa: Conceptos básicos de ingeniería de instrucciones. Coursera. Available online: https://www.coursera.org/learn/generative-ai-prompt-engineering-for-everyone (accessed on 28 June 2025).
  12. INEI. (2023). Instituto Nacional de Estadística e Informática. Available online: https://m.inei.gob.pe/prensa/noticias/mas-de-medio-millon-de-maestros-en-el-peru-celebran-su-dia-9833/ (accessed on 25 June 2025).
  13. Kang, L., Shi, X., & Zhu, K. (2025). Uncovering the mediation of disciplinary literacy in the effect of GAI prompt engineering on pre-service teachers’ instructional design. Education and Information Technologies. [Google Scholar] [CrossRef]
  14. Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1–12. [Google Scholar] [CrossRef] [PubMed]
  15. Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(1), 7. [Google Scholar] [CrossRef]
  16. Lintner, T. (2024). A systematic review of AI literacy scales. Npj Science of Learning, 9(1), 50. [Google Scholar] [CrossRef] [PubMed]
  17. Marzano, D. (2025). Generative artificial intelligence (GAI) in teaching and learning processes at the K-12 Level: A systematic review. Technology, Knowledge and Learning. [Google Scholar] [CrossRef]
  18. Monash University Library. (2024). How to write a prompt [Teaching resource]. Monash University Library. Available online: https://www.monash.edu/__data/assets/pdf_file/0005/3727085/How-to-write-a-prompt.pdf (accessed on 1 August 2025).
  19. Montalvo, T. (2025, June 5). AI prompt engineering: A critical new skillset for 21st-century teachers. eCampus News. Available online: https://www.ecampusnews.com/ai-in-education/2025/06/05/ai-prompt-engineering-a-critical-new-skillset-for-21st-century-teachers/ (accessed on 28 June 2025).
  20. NCSS. (2023). Paired T-tests using effect size. Available online: https://www.ncss.com/wp-content/themes/ncss/pdf/Procedures/PASS/Paired_T-Tests_using_Effect_Size.pdf (accessed on 25 June 2025).
  21. Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Advances in Health Sciences Education, 15(5), 625–632. [Google Scholar] [CrossRef] [PubMed]
  22. OpenAI. (2023, August 31). Teaching with AI. Available online: https://openai.com/index/teaching-with-ai/ (accessed on 30 July 2025).
  23. Park, J., & Choo, S. (2025). Generative AI prompt engineering for educators: Practical strategies. Journal of Special Education Technology, 40(3), 411–417. [Google Scholar] [CrossRef]
  24. Pei, B., Lu, J., & Jing, X. (2025). Empowering preservice teachers’ AI literacy: Current understanding, influential factors, and strategies for improvement. Computers and Education: Artificial Intelligence, 8, 100406. [Google Scholar] [CrossRef]
  25. Perozzo, H., Sengewald, J., & Ravarini, A. (2025). Mastering the machine: How prompt engineering transforms generative AI learning in education. AMCIS 2025 Proceedings, 30. Available online: https://aisel.aisnet.org/amcis2025/is_education/is_education/30 (accessed on 25 June 2025).
  26. Ramírez Chávez, M. A., & Litardo Caicedo, L. G. (2025). Ética y responsabilidad en el uso de inteligencia artificial en la educación superior. Estudios y Perspectivas Revista Científica y Académica, 5(2), 66–84. [Google Scholar] [CrossRef]
  27. Ramírez Martinell, A., & Casillas Alvarado, M. A. (2024). Percepciones docentes sobre la Inteligencia Artificial Generativa: El caso mexicano. Revista Paraguaya De Educación A Distancia (REPED), 5(2), 44–55. [Google Scholar] [CrossRef]
  28. Schiavo, G., Businaro, S., & Zancanaro, M. (2024). Comprehension, apprehension, and acceptance: Understanding the influence of literacy and anxiety on acceptance of artificial Intelligence. Technology in Society, 77, 102537. [Google Scholar] [CrossRef]
  29. Seung, Y. (2025, February 12). CIDDL Research and practice brief: Generative AI prompt engineering for educators—CIDDL. Available online: https://ciddl.org/ciddl-research-and-practice-brief-generative-ai-prompt-engineering-for-educators/ (accessed on 24 June 2025).
  30. Soto-Sanfiel, M. T., Angulo-Brunet, A., & Lutz, C. (2024). The scale of artificial intelligence literacy for all (SAIL4ALL): A tool for assessing knowledge on artificial intelligence in all adult populations and settings. Preprint. [Google Scholar] [CrossRef]
  31. Sperling, K., Stenberg, C.-J., McGrath, C., Åkerfeldt, A., Heintz, F., & Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Computers and Education Open, 6, 100169. [Google Scholar] [CrossRef]
  32. Strunk, V., & Willis, J. (2025). Generative artificial intelligence and education: A brief ethical reflection on autonomy. EDUCAUSE Review. Available online: https://er.educause.edu/articles/2025/1/generative-artificial-intelligence-and-education-a-brief-ethical-reflection-on-autonomy (accessed on 28 June 2025).
  33. Tong, A. (2024, November 20). OpenAI launches free AI training course for teachers. Reuters. Available online: https://www.reuters.com/technology/artificial-intelligence/openai-launches-free-ai-training-course-teachers-2024-11-20/ (accessed on 25 June 2025).
  34. UNESCO. (2024). Global report on teachers: Addressing teacher shortages and transforming the profession. United Nations. [Google Scholar]
  35. Vanderbilt University. (2023). Prompt engineering for ChatGPT [Online course]. Coursera. Available online: https://www.coursera.org/learn/prompt-engineering/paidmedia?specialization=prompt-engineering (accessed on 28 June 2025).
  36. Wilson, M. L., Huggins-Manley, A. C., Ritzhaupt, A. D., & Ruggles, K. (2022). Development of the abbreviated technology anxiety scale (ATAS). Behavior Research Methods, 55(1), 185–199. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Pre- and post-intervention changes in AI literacy and technology anxiety.
Figure 2. Inverse association between Δ AI literacy and Δ technology anxiety (standardized scores).
Table 1. Pre- and post-intervention results in AI literacy and technology anxiety.
Variable | α Pre | Mean ± SD Pre | α Post | Mean ± SD Post | Δ Mean ± SD | t(44) | p | d | 95% CI for d
AI literacy (SAIL4ALL-12) | 0.91 | 2.85 ± 0.54 | 0.93 | 3.55 ± 0.50 | +0.70 ± 0.46 | 6.10 | <0.001 | 0.91 | 0.55–1.26
Technology anxiety (ATAS-12) | 0.88 | 3.30 ± 0.65 | 0.90 | 2.72 ± 0.61 | −0.58 ± 0.52 | −3.82 | 0.001 | 0.56 | 0.23–0.88
Table 2. Correlation between score changes (Δ) in AI literacy and technology anxiety.
Variable X | Variable Y | n | r | p
Δ AI literacy | Δ Technology anxiety | 45 | −0.46 | 0.002
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
