Article

Transforming Physics Teacher Training Through ChatGPT: A Study on Usability and Impact

by Marcos Guerrero-Zambrano 1,*, Leonor Sanchez-Alvarado 1, Bryan Valarezo-Chamba 1 and Erick Lamilla-Rubio 1,2
1 UNEMI Facultad de Educación, Universidad Estatal de Milagro, Cdla. Universitaria Km. 1.5 vía Km. 26, 091050 Milagro, Ecuador
2 ESPOL Departamento de Física, Escuela Superior Politécnica del Litoral, Campus Gustavo Galindo, Km 30.5, 090150 Guayaquil, Ecuador
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 887; https://doi.org/10.3390/educsci15070887
Submission received: 7 March 2025 / Revised: 2 May 2025 / Accepted: 5 June 2025 / Published: 11 July 2025
(This article belongs to the Topic Artificial Intelligence in Early Childhood Education)

Abstract

Teacher training in Physics often faces challenges related to engaging students and conveying abstract concepts effectively. Generative AI tools, such as ChatGPT, present transformative opportunities for designing innovative and tailored educational activities. This study investigates the impact of ChatGPT on pre-service Physics teacher training, focusing on its usability, effectiveness, and influence on participant satisfaction. Utilizing a quantitative research approach, two Likert-scale surveys were administered to 24 prospective Physics teachers in Ecuador, both before and after an intervention workshop. The workshop introduced participants to ChatGPT’s features and its applications in designing playful, Physics-focused learning activities. Results indicated a significant increase in familiarity with AI tools, enhanced activity design quality, and high satisfaction rates. Notably, 79 % of participants highlighted ChatGPT’s utility in adapting activities to diverse learning levels, and 83 % acknowledged its efficiency in reducing preparation time. These findings underscore ChatGPT’s potential to revolutionize Physics education by facilitating the creation of personalized and engaging learning resources. Future research should explore larger sample sizes and longitudinal impacts to fully realize the implications of AI-driven tools in educational contexts.

1. Introduction

The rapid advancements in artificial intelligence (AI) have led to significant transformations in various fields including healthcare, where AI has enabled breakthroughs in diagnostics and personalized treatments (Parekh et al., 2023). In finance, AI has powered advanced risk analysis, fraud detection, and algorithmic trading (Cao, 2022; Challoumis, 2024). In the transportation and logistics industries, AI has facilitated the development of autonomous vehicles and intelligent traffic management systems (Garikapati & Shetiya, 2024). In education, adaptive learning platforms and AI-enhanced pedagogical models are reshaping teaching and learning processes (Tahiru, 2021; Yim & Su, 2025; Zhai et al., 2021). These examples highlight the broad and transformative impact of AI on modern society.
ChatGPT, developed by OpenAI, was released in November 2022, building on the GPT-3 family of models first introduced in June 2020. This innovation marked a significant milestone in artificial intelligence (AI), excelling in tasks involving natural language understanding and generation. Over time, ChatGPT has evolved through various iterations, culminating in the release of GPT-4 in March 2023, which improved precision, logical reasoning, and the ability to process multimodal inputs such as text and images (Wu et al., 2023).
This language model is built on the Transformer architecture and uses supervised learning and reinforcement learning techniques to produce coherent, human-like responses in conversational contexts. Trained on millions of dialogues, ChatGPT ensures high accuracy and adaptability in its output. Its educational applications are extensive, ranging from assisting in the generation of teaching materials and assessment tools to personalizing learning experiences (Prieto et al., 2023). Despite its versatility, ChatGPT is not without limitations. Challenges include occasional generation of inaccurate content, ethical concerns related to transparency and plagiarism, and biases stemming from its training data (Deng & Lin, 2022).
On the other hand, playful activities have proven to be effective tools for teaching Physics, a field often perceived as challenging and abstract by students across educational levels. Incorporating interactive elements and games into the learning process facilitates the understanding of complex concepts, increases motivation, and promotes meaningful learning. According to da Silva et al. (2020), playful activities provide an alternative approach to traditional classes, which can often become monotonous. For example, a study on using playful and practical activities to teach concepts of heat and temperature showed that students demonstrated greater interest and comprehension of the topics when games and group competitions were included compared to more conventional methods. Owen et al. (2008) highlighted that a lack of interest in learning Physics at advanced educational levels remains a concern. However, research suggests that playful activities, such as hands-on experiments and interactive projects, can significantly improve students’ attitudes toward the discipline, making it more accessible and engaging. Recently, ChatGPT has facilitated the creation of playful activities; for instance, Parekh et al. (2023) analyzed how ChatGPT and other AI tools fostered constructivist learning by promoting curiosity and creativity in the design of playful activities by educators for understanding physical phenomena.
Among the cutting-edge tools emerging in this domain, ChatGPT has garnered attention for its potential to support and personalize learning experiences (Holmes & Tuomi, 2022; Schiff, 2022). This generative AI tool, developed by OpenAI (Zahid et al., 2024), utilizes natural language processing (NLP) techniques to generate human-like text and adapt to diverse educational contexts. It offers educators innovative strategies to create tailored content, guide student inquiry, and foster interactive and reflective learning environments (Borenstein & Howard, 2021; Dai et al., 2024).
Recent educational research has emphasized the potential of ChatGPT to support personalized and engaging instructional design (Mai et al., 2024). However, several studies have reported that pre-service teachers often possess limited prior knowledge about how to use this tool pedagogically. In this sense, research in primary education (Cooper, 2023; Liang et al., 2023) has revealed that, despite the widespread media coverage of ChatGPT, many educators remain unaware of its functionality and potential as an educational tool. These studies show that initial perceptions overestimated ChatGPT’s ability to generate lesson plans autonomously (Kalenda et al., 2025). After guided, structured use, the participating teachers were able to explore the advantages and disadvantages of ChatGPT, finding it useful for organizing ideas, provided its output is reviewed and adjusted by an expert teacher (Orellana et al., 2025).
Given these capabilities, a particularly relevant application of ChatGPT and similar tools lies in physics education, a field where abstract reasoning and conceptual clarity are essential, yet often difficult to achieve through traditional methods (Liang et al., 2023). Exploring how AI can assist in overcoming persistent pedagogical challenges in physics provides a compelling avenue for educational innovation. Physics education is often characterized by its abstract and complex nature, requiring students to grasp theoretical concepts and apply them to real-world scenarios (Niss, 2012; Sung et al., 2019). However, traditional teaching methods frequently fall short in capturing students’ interest, leading to disengagement and poor academic performance (Sperling & Lincoln, 2024). Studies have shown that incorporating playful learning activities can significantly enhance students’ motivation and conceptual understanding (Candela Borja & Benavides Bailón, 2020). Despite these benefits, many educators struggle to design effective activities that balance educational rigor with engagement. ChatGPT addresses this gap by offering creative solutions for activity development. It enables educators to design simulations, role-playing scenarios, and interactive experiments with minimal effort (Adeshola & Adepoju, 2024; Lo, 2023).
The adoption of AI tools like ChatGPT in education is not without challenges. Pre-service teachers often exhibit limited familiarity with AI technologies, hindering their ability to leverage these tools effectively (West, 2023). Additionally, concerns regarding the accuracy and ethical implications of AI-generated content, such as bias and transparency, pose significant barriers to widespread implementation (Deng & Lin, 2022). Another critical issue is the accessibility of these tools in resource-constrained educational settings, where technological infrastructure and training opportunities may be limited. Overcoming these challenges requires targeted interventions, including comprehensive training programs and the development of user-friendly interfaces that cater to educators’ needs.
Generative AI tools like ChatGPT offer a transformative solution to the challenges outlined above. By simplifying the process of designing educational activities, ChatGPT enables educators to focus on delivering high-quality learning experiences (Du & Alm, 2024; Viorennita et al., 2023). The tool’s ability to generate context-specific and adaptive resources aligns with contemporary pedagogical frameworks, which emphasize personalization and active learning (Ha, 2023). For instance, ChatGPT can assist in creating activities tailored to students’ learning levels, such as quizzes, case studies, and project-based tasks. This adaptability not only enhances the relevance of educational content but also promotes deeper engagement among students (Barrot, 2024; Sudirman & Rahmatillah, 2023).
Furthermore, ChatGPT’s real-time feedback capabilities allow educators to refine their instructional strategies dynamically. For example, teachers can use the tool to generate multiple iterations of an activity, incorporating student feedback to improve its effectiveness (ElSayary, 2024). This iterative approach aligns with evidence-based teaching practices and fosters a culture of continuous improvement in educational settings (Borisova et al., 2023). Moreover, the integration of ChatGPT into Physics teacher training programs has the potential to democratize access to innovative pedagogical tools, bridging the gap between theory and practice (Kiryakova & Angelova, 2023).
The significance of this study lies in its contribution to the growing body of research on AI in education. While previous studies have explored the applications of ChatGPT in various educational contexts, limited research has focused on its role in Physics teacher training. By examining the impact of ChatGPT on pre-service teachers’ familiarity, usability perceptions, and satisfaction levels, this study provides valuable insight into the potential of generative AI tools to improve teaching and learning in STEM disciplines (Prieto et al., 2023). The findings also underscore the importance of integrating technology-driven solutions into teacher training curricula, equipping educators with the skills and resources needed to navigate the evolving educational landscape (Bloch, 2024; Prieto et al., 2023). To further enhance the understanding of this integration, this paper presents the following main contributions: (1) the exploration of ChatGPT as a tool for designing playful learning activities tailored to physics teacher education, (2) the identification of pre-service teachers’ initial knowledge and perceptions regarding ludic pedagogical design prior to engaging with generative AI, and (3) the empirical assessment of ChatGPT’s perceived usefulness, difficulty, satisfaction, impact, and future potential, offering practical insights for the integration of AI-driven tools into teacher training curricula. The rest of the paper is organized as follows: Section 2 details the research methodology, including the workshop structure and survey design. Section 3 presents the results of the study, focusing on key metrics such as familiarity, usability, and satisfaction. Section 4 reviews the dimensional analysis of both questionnaires. Section 5 discusses the implications of the findings, highlighting both the potential and limitations of integrating ChatGPT into teacher training. Finally, Section 6 provides conclusions and recommendations for future research and practice.

2. Method

2.1. Research Design and Data Collection Instruments

This study follows a quantitative approach, as two Likert-scale surveys were applied. The quantitative method was chosen because it aims to achieve precise measurement of attributes or phenomena through structured methods such as surveys, questionnaires, and experiments. It focuses on research questions that involve “how much” or “how often” a phenomenon occurs (Clarke & Collier, 2015). According to Mohajan (Mohajan, 2020), this approach is ideal for studying patterns, attitudes, and behaviors that can be quantified, allowing the generalization of results from representative samples. The study employed a descriptive-comparative design, as it compared the results obtained from two different surveys conducted before and after the use of ChatGPT for designing playful activities in Physics education. Descriptive-comparative studies allow for the analysis of phenomena by providing a detailed description of their attributes and comparing them across two or more groups, contexts, or situations (Grimes & Schulz, 2002).
The study population consisted of 378 prospective Physics teachers enrolled in the Pedagogy of Experimental Sciences program at a public university in Milagro, Ecuador, which comprises eight academic levels. A survey was sent to all students in the program, requesting their full name, email address, age, academic level, and the Physics course they were currently enrolled in. However, only 24 students responded to the survey and participated in the workshop.
Participants were 17 years of age or older and were distributed across different academic levels, ranging from the second to the eighth level. Their coursework included subjects such as Mechanics and its Laboratory, Thermodynamics and its Laboratory, Oscillations and Waves and its Laboratory, Electromagnetism and its Laboratory, Quantum Physics, Atomic, Nuclear and Particle Physics, Energy Production and Global Warming, and Relativity. Additionally, a faculty member pursuing a master’s degree in Artificial Intelligence and Educational Resources at a Mexican university also took part in the study.
The sampling method employed in this research was self-selection sampling, which is a subtype of non-probabilistic sampling. According to Rutherford (Lebedeva et al., 2023; Rutherford, 2004), self-selection sampling involves the inclusion of participants who voluntarily choose to take part in the study, which can introduce significant biases in sample representativeness. As noted by Wolf et al. (Bile, 2022; Vehovar et al., 2016), this method is commonly used in open surveys, courses, and studies where participants are recruited without restrictions imposed by the researcher.
For this study, two five-point Likert scale surveys were administered at two different stages. The first survey assessed participants’ prior knowledge regarding the use of ChatGPT in designing playful activities for Physics education. It covered multiple dimensions, including familiarity, attitudes, knowledge of ChatGPT’s application, its impact on activity design, and potential future applications in the classroom. The second survey, conducted after the intervention, evaluated participants’ experiences using ChatGPT for activity design, focusing on usefulness, difficulty, satisfaction, impact, and future perspectives (Hyun et al., 2025; Jiang et al., 2024). The following section presents the dimensions and corresponding questions of the survey administered to the research sample, as shown in Table 1.
To ensure the quality and validity of both instruments, an expert review was conducted by two university faculty members specializing in statistics, ensuring that all survey questions were appropriate for each dimension. Furthermore, the reliability of the survey items was assessed using McDonald’s Omega coefficient, which evaluates both overall reliability and dimension-specific consistency. According to Campo-Arias and Oviedo (2008), an acceptable McDonald’s Omega coefficient ranges between 0.70 and 0.90, ensuring the reliability of each dimension measured in this study. In the context of the present study, two surveys (pre- and post-intervention) were administered to 24 students using instruments based on a five-point Likert scale, comprising five dimensions with four items each. To assess internal reliability, McDonald’s Omega coefficient was selected. This choice was grounded in the substantial advantages Omega offers over traditional coefficients such as Cronbach’s Alpha, particularly when applied to ordinal data and multidimensional structures. As noted by Ventura-León and Caycho-Rodríguez (2017), Omega is better suited for Likert-type scales because it relies on polychoric correlations, thereby preserving the ordinal nature of the data. Additionally, Viladrich et al. (2017) emphasize that Omega does not require the stringent assumption of tau-equivalence among items, thus enabling a more accurate estimation of reliability, an aspect of particular importance when instruments encompass multiple constructs or dimensions. Moreover, as highlighted by Frías-Navarro (2022) and Roco-Videla et al. (2024), Omega facilitates the computation of reliability both globally and for each individual dimension, providing a more nuanced and robust evaluation of the psychometric properties of questionnaires administered in educational contexts. Accordingly, the adoption of McDonald’s Omega coefficient in this research ensures a more appropriate, sensitive, and rigorous assessment of the internal consistency of the instruments designed to measure students’ perceptions at the initial and final stages of the formative process.
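To illustrate the reliability procedure described above, the following minimal sketch (in Python) estimates McDonald’s Omega for one four-item dimension from a single-factor model. The data and item names are hypothetical, and the sketch uses Pearson-based factor loadings via the factor_analyzer package for simplicity, whereas the rationale above favors polychoric correlations; it is an illustrative approximation, not the exact procedure applied to the survey data.

# McDonald's Omega for one survey dimension (hypothetical 5-point Likert data).
# Simplification: loadings come from a one-factor model on Pearson correlations;
# the study argues for polychoric correlations, which would need a dedicated tool.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(0)
# 24 participants x 4 items of a single dimension, responses coded 1-5
dimension = pd.DataFrame(rng.integers(1, 6, size=(24, 4)),
                         columns=["Q1", "Q2", "Q3", "Q4"])

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(dimension)
loadings = fa.loadings_.flatten()   # standardized loadings of the single factor
unique_var = 1 - loadings ** 2      # item uniquenesses under the one-factor model

# Omega_total = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + unique_var.sum())
print(f"McDonald's Omega: {omega:.2f}")

Applied to each of the five dimensions of the pre- and post-intervention instruments, this kind of computation yields estimates comparable to the dimension-level coefficients reported in Sections 3 and 6.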

2.2. Research Procedures

The procedural phase of the research began with the design of two surveys (see Appendix A), administered at the beginning and end of the intervention involving the use of ChatGPT for designing playful activities in Physics education. The surveys were structured in two stages: the first survey assessed prior knowledge of ChatGPT’s application in playful activity design, evaluating dimensions such as familiarity, attitudes, knowledge of usage, impact on activity design, and future applications in the classroom. Each dimension comprised four questions. The second survey measured participants’ experiences after using ChatGPT, focusing on dimensions such as usefulness, difficulty, satisfaction, impact, and future perspectives, with each dimension also consisting of four questions.
Table 2 presents a summary of the key aspects evaluated in each survey, along with the corresponding Likert scale implementation. This structured approach ensures consistency in assessing participants’ prior knowledge and post-intervention experiences.
Table 3 presents a structured summary of the key instructional topics covered during the intervention, outlining the fundamental concepts of ChatGPT in Physics education and its role in designing playful activities.
Following survey administration, participants engaged in a computer lab session with internet access. The instructor shared the first survey via institutional email with the 24 pre-service Physics teachers to assess their prior knowledge of ChatGPT’s role in playful activity design. Participants had 30 min to complete the survey. Subsequently, over a two-hour period, an instructional session covered the topics outlined in Table 3 (see Figure 1).
Figure 1. Computer laboratory with internet access.

3. Results

3.1. Analysis of the Entrance Questionnaire

Regarding the perception of students in training for Physics education, the following dimensions were analyzed: Familiarity with ChatGPT in the design of recreational activities (I-D1), Knowledge of the use of ChatGPT in the design of recreational activities (I-D2), Attitudes about the use of ChatGPT in the design of recreational activities (I-D3), Knowledge of the impact of ChatGPT on the design of recreational activities (I-D4), and Future applications of ChatGPT in the classroom (I-D5). The Likert-type scale allowed for an examination of the students’ perspectives regarding ChatGPT as a tool within the Physics teaching-learning process. Although the instruments used differed in format, their underlying structure remained the same. For the purposes of this article, a scale was used that captures a wide range of students’ perceptions.
In Figure 2, it is evident that the entry survey results reveal a varied perception among students. In dimension I-D1, focused on familiarity with ChatGPT, there is a high concentration of negative and very negative responses, particularly regarding knowledge of specific capabilities (I-Q2), exploration of the tool (I-Q3), and ChatGPT’s ability to adapt recreational activities to specific Physics topics (I-Q4). In the latter, 50 % of students responded very negatively, indicating that perceptions of ChatGPT usage were unfavorable, likely due to a lack of prior knowledge about the tool. This trend suggests that most students began with a very low initial familiarity with ChatGPT in educational playful contexts.
Nevertheless, in dimension I-D2, which addresses knowledge of ChatGPT’s application in the design of recreational activities, a slight shift in perception is observed, with a greater presence of neutral and some positive responses. This is particularly evident regarding ChatGPT’s assistance in facilitating classroom experimentation activities (I-Q6) and its role in promoting critical thinking in Physics (I-Q8), where a significant change toward favorable perceptions is recorded, reflecting an emerging recognition of the educational possibilities of the tool. Moving to dimension I-D3, focused on students’ attitudes toward the use of ChatGPT, a noticeable positive shift is seen, with the majority of responses reflecting positive and very positive attitudes, exceeding 50 % . This is particularly evident in the use of ChatGPT to structure playful dynamics (I-Q10), which contributes to the enrichment and characterization of Physics classes (I-Q12). This indicates a growing willingness not only to accept but also to actively integrate the tool into their future teaching practices. Similarly, in dimension I-D4, which focuses on ChatGPT’s impact on the design of recreational activities, the trend is predominantly positive and very positive. Students recognize ChatGPT’s benefits in fostering active classroom participation (I-Q14). Moreover, students demonstrate awareness of the potential challenges (I-Q15) and their impact on motivation to learn Physics (I-Q16), revealing a mature and critical perception of the tool. Finally, in dimension I-D5, which explores future applications of ChatGPT in the classroom, 80 % of students show positive and very positive perceptions. A strong attitude of interest (I-Q17) is evident, along with a vision of transforming Physics education through a game-based learning approach (I-Q20).
For the dimension evaluating students’ knowledge of ChatGPT’s impact on designing playful activities, Question 13, which examined the benefits of ChatGPT in creating educational activities, showed 25 % of students reported “Nothing at all,” and 33 % indicated “Low,” while 29 % demonstrated a “Moderate” level, and only 14 % indicated advanced knowledge (“High” or “Very high”). Regarding Question 14, about the perception that ChatGPT could encourage greater student engagement, 50 % reported a “Moderate” level, followed by 17 % at a “High” level, while 13 % rated it as “Low,” and 17 % indicated “Nothing at all.” For Question 15, focused on the challenges of using ChatGPT in educational contexts, 54 % agreed, and 8 % strongly agreed, while 33 % were neutral, and only 4 % disagreed. Finally, Question 16, with a McDonald’s Omega coefficient of 0.83 indicating high internal consistency, reflected a moderate understanding of ChatGPT’s positive impact on motivation to learn Physics. 58 % of students agreed, and 21 % strongly agreed, while 17 % were neutral, and only 4 % opposed. These results highlight limited but growing recognition of ChatGPT’s benefits and a positive attitude toward its motivational potential, though efforts are needed to increase understanding and address perceived challenges.
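The item-level percentages reported throughout this section can be reproduced by tabulating the response frequencies for each question. A minimal sketch, assuming a hypothetical data frame entry_survey with one column per item (I-Q1 to I-Q20) and integer responses coded 1-5:

import pandas as pd

# Mapping from coded responses to the labels used in the figures
LEVELS = {1: "Very negative", 2: "Negative", 3: "Neutral",
          4: "Positive", 5: "Very positive"}

def item_percentages(responses: pd.DataFrame) -> pd.DataFrame:
    """Share of each Likert level (in percent) for every item (column)."""
    pct = responses.apply(lambda col: col.value_counts(normalize=True) * 100)
    return pct.reindex(list(LEVELS)).fillna(0).rename(index=LEVELS).round(0)

# Hypothetical usage for the items discussed above:
# print(item_percentages(entry_survey)[["I-Q13", "I-Q14", "I-Q15", "I-Q16"]])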

3.2. Analysis of the Exit Survey

Given the relevance and importance of using ChatGPT, the study proceeded to measure students’ perceptions regarding the Perceived usefulness of ChatGPT in the design of leisure activities (O-D1), Difficulty of using ChatGPT in the design of recreational activities (O-D2), Satisfaction with the use of ChatGPT in the design of recreational activities (O-D3), Impact of ChatGPT on the design of recreational activities (O-D4), and ChatGPT future usage outlook (O-D5). Similarly, the same analysis approach was applied, using a Likert scale for the questions in each dimension.
In Figure 3, at a macro level, it is evident that more than 90 % of the responses to the exit survey reflect a very positive attitude across the dimensions assessed. Specifically, for dimension O-D1, focused on the perceived usefulness of ChatGPT in designing recreational activities, a high concentration of positive and very positive responses is observed, especially in the improvement of the quality of playful activities designed for teaching Physics (O-Q1), with 67 % very positive responses, and in the adaptation of activities to students’ learning levels (O-Q2), with 79 % very positive responses. Regarding the personalization of activities (O-Q3) and the improvement in the teaching of complex concepts (O-Q4), favorable perception levels remain high, with 63 % very positive responses. This suggests that, although the usefulness of ChatGPT is widely recognized, there is a small group that still expresses some reservations, particularly concerning ChatGPT’s usefulness in enhancing the quality and adaptation of activities. On the other hand, in dimension O-D2, which addresses the difficulty of using ChatGPT, a slight dispersion of responses is observed, particularly in the technical perception of using ChatGPT (O-Q7). Here, a diversity of views among students is evident, which could be related to the age distribution of the sample. About 21 % of students provided negative responses and 4 % very negative responses, representing groups that perceive the methodical use of ChatGPT in the teaching-learning process as complex. However, 33 % positive responses and 25 % very positive responses correspond to the majority who did not find handling this tool difficult. Moreover, regarding the ease of use of the application (O-Q8), 71 % of students indicated a very positive experience, confirming that, overall, the tool was accessible. From dimension O-D3, centered on satisfaction with the use of ChatGPT, a significant shift in students’ perception is noted, where most responses are concentrated at very positive levels. This is particularly reflected in the general satisfaction with the outcomes obtained (O-Q9) and the support to the teaching process (O-Q10), both reaching 71 % very positive responses. Concerning time savings in preparing activities (O-Q11) and the overall satisfaction with the ChatGPT usage experience (O-Q12), very positive responses reach 83 % and 75 % , respectively, reflecting not only acceptance but also a high degree of satisfaction with ChatGPT’s role in their teaching practice within the educational environment. Similarly, in dimension O-D4, focused on ChatGPT’s impact on the design of recreational activities, the trend remains markedly positive, with all questions exceeding 50 % very positive responses. Students acknowledge the simplification of activity preparation and better time management (O-Q13), as well as the ability to offer clearer and more dynamic explanations of Physics concepts (O-Q14) and to personalize their classes (O-Q15). Although a 4 % neutrality rate is identified in the evaluation of the teaching process through playful activities (O-Q16), the overall favorable perception predominates, demonstrating that students highly value ChatGPT’s impact on their educational practices. 
Finally, in dimension O-D5, focusing on the future outlook of ChatGPT usage, 80 % of students demonstrate positive and very positive perceptions across all questions, particularly highlighting the interest in continuing to use ChatGPT for new activities (O-Q17) and in further training in its use (O-Q19), as well as viewing ChatGPT as a key resource for future education (O-Q20), where 67 % responded very positively. Thus, a strong attitude of interest and commitment to permanently integrating ChatGPT into Physics teaching is evident, projecting it as an essential resource for teacher training.

4. Dimensional Analysis of the Questionnaire

Considering the participants’ age as a key characteristic of the sample, perceptions regarding the use of ChatGPT were further examined across age groups of students in training for Physics education. Regarding the age distribution of the participants, it was recorded that 2 students fall within the 17–19-year-old range, 9 students are between 20 and 22 years old, 3 students fall within the 23–25-year-old range, 2 students are between 26 and 28 years old, 4 students are between 29 and 31 years old, and, finally, 4 students belong to the group aged 32 years or older. Accordingly, an analysis was conducted considering the pre- and post-questionnaires, the participants’ ages, and their level of perception.
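As a minimal sketch of the age grouping used in this analysis, the intervals above can be constructed with a standard binning function. The ages below are illustrative placeholders chosen to be consistent with the distribution reported above, not the actual participant records.

import pandas as pd

# Illustrative ages for 24 participants (not the real data)
ages = pd.Series([18, 19, 20, 20, 21, 21, 21, 22, 22, 22, 22,
                  23, 24, 25, 26, 28, 29, 30, 30, 31, 33, 35, 40, 44])

bins = [17, 19, 22, 25, 28, 31, 120]
labels = ["17-19", "20-22", "23-25", "26-28", "29-31", "32+"]
age_group = pd.cut(ages, bins=bins, labels=labels, include_lowest=True)

print(age_group.value_counts().sort_index())  # participants per age interval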

4.1. About the Entrance Questionnaire

The dimensional analysis revealed significant differences among the topics addressed. Dimension I-D1 (Familiarity with ChatGPT) exhibited predominantly negative and neutral perceptions (Very Negative 4.51 % , Negative 5.73 % , Neutral 4.86 % ), reflecting limited technical familiarity, particularly among older students. This trend is further illustrated in the graphical representation in Figure 4, where the bands corresponding to I-D1 are predominantly shaded in purple and yellow tones. A similar pattern was observed in Dimension I-D2 (Knowledge of ChatGPT Usage), where the increase in “Very Negative” responses ( 5.21 % ) and “Neutral” responses (5.56%), coupled with the absence of “Very Positive” ratings, suggests a limited mastery of the tool’s functionalities. These operational knowledge deficiencies are visually represented as fragmented and dispersed blocks, confirming a specific formative need.
In contrast to these weaknesses, Dimension I-D3 (Attitudes Toward ChatGPT Usage) emerged as the area of greatest strength, recording 8.33 % “Positive” and 3.65 % “Very Positive” perceptions. This finding indicates a generalized openness toward the integration of ChatGPT in the classroom, transcending current technical proficiency. In the graph, this favorable attitude is manifested through expansive, predominantly green blocks. Conversely, Dimension I-D4 (Knowledge of Impacts and Challenges) presented a balance between positive (5.73 %) and neutral (5.38 %) perceptions, indicating critical awareness of both the benefits and risks associated with ChatGPT usage. Finally, Dimension I-D5 (Future Applications) emerged as the most optimistic, with 8.51 % “Positive” and 4.34 % “Very Positive” responses, evidencing enthusiasm regarding the potential of artificial intelligence in future educational contexts. This positive outlook is visually reflected in the sunburst chart through wide and consistent green bands.
Additionally, specific item-level analysis allowed for a more refined interpretation of these general trends. Notably, questions I-D5 I-Q19 (“Do I wish to experiment with ChatGPT to create educational games?”) and I-D3 I-Q9 (“Am I willing to use ChatGPT?”) exhibited the highest levels of positive perception (Positive above 9 % and Very Positive above 4 %). In the graphical representation, these questions are depicted as large, compact blocks within the 20–22-year-old cohort, indicating a consolidated and proactive enthusiasm. In contrast, questions I-D1 I-Q4 (“Do I know how to adapt ChatGPT to Physics?”) and I-D2 I-Q7 (“Do I know how to create educational activities with ChatGPT?”) reflected the highest levels of “Very Negative” responses (8.33 % and 6.25 %, respectively). These problematic areas are visualized as smaller, bluish blocks, particularly among older age groups, evidencing a gap between positive disposition and technical competence that necessitates targeted educational interventions.
Segmented analysis by age range revealed that generational variables play a crucial role in the acceptance and adoption of emerging technologies within educational environments. Specifically, the cohort aged 20 to 22 demonstrated notable openness toward ChatGPT, registering 13.75 % “Positive” and 3.33 % “Very Positive” responses. This favorable attitude positions this group as the primary driver for integrating artificial intelligence tools in higher education. In the graphical representation, this predisposition is visualized through vibrant green sectors, indicating both the quantity and quality of positive perceptions. This enthusiasm among younger students may be attributed to their status as “digital natives,” that is, individuals who have grown up immersed in a technological environment and consequently possess greater familiarity and confidence in using digital tools. Their constant exposure to information and communication technologies (ICT) from an early age has facilitated a quicker adaptation and a proactive attitude toward the incorporation of technological innovations in their learning processes. In contrast, the cohort aged over 32 exhibited the highest resistance rates to ChatGPT usage, with 4.17 % “Very Negative” and 4.38 % “Neutral” responses. This resistance is manifested in the graph through narrower, fragmented bluish blocks, reflecting lower familiarity and confidence in using emerging technologies.
Various studies have identified that older adults encounter significant barriers to ICT adoption, including lack of experience, confidence, and motivation, as well as challenges related to the design and security of digital tools. Moreover, sociodemographic factors such as educational level, income, and cultural background influence older adults’ attitudes and motivations toward ICT. Resistance to change and age-related physical and cognitive limitations further contribute to the reluctance of this group to adopt emerging technologies. These barriers not only limit their participation in digital environments but may also impact their inclusion and well-being within an increasingly digitalized society. Intermediate groups, specifically those aged 23 to 25 and 26 to 28, exhibited more neutral perceptions, indicating latent potential that could be enhanced through targeted training initiatives aimed at reducing technological uncertainty. The implementation of training programs tailored to the specific needs and characteristics of these groups could facilitate a smoother transition toward the adoption of tools such as ChatGPT in their educational processes. Graphical representation served as a powerful validation tool for the quantitative results. The chromatic distribution across age groups and individual questions allows for intuitive identification of the concentration of positive, neutral, or negative perceptions. The predominance of green tones among younger students and cooler tones among older students visually confirms the trends identified through tabular analysis. Furthermore, the clustering of green blocks in questions related to positive attitudes and future projections (I-D3 and I-D5) evidences that the enthusiasm toward ChatGPT is not solely based on current technical familiarity but is fundamentally rooted in the expectation of pedagogical transformation in the medium and long term. Conversely, the chromatic fragmentation observed in dimensions such as Familiarity and Knowledge of Usage (I-D1, I-D2) highlights critical areas for educational intervention.
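The sunburst representation discussed above (Figure 4) can be generated from a long-format table of responses. A minimal sketch, assuming hypothetical column names and the plotly library; the few rows shown stand in for the full set of participant-item responses.

import pandas as pd
import plotly.express as px  # pip install plotly

# Hypothetical long-format data: one row per participant-item response
long_df = pd.DataFrame({
    "age_group": ["20-22", "20-22", "20-22", "32+", "32+", "23-25"],
    "dimension": ["I-D3", "I-D5", "I-D1", "I-D1", "I-D2", "I-D4"],
    "response":  ["Very positive", "Positive", "Negative",
                  "Very negative", "Neutral", "Positive"],
})

# Rings from inner to outer: age group -> dimension -> Likert level
fig = px.sunburst(long_df, path=["age_group", "dimension", "response"], color="response")
fig.show()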

4.2. About the Exit Questionnaire

Dimension-based analysis revealed significant differences across the topics addressed. Dimension O-D1 (Perceived usefulness of ChatGPT in the design of leisure activities) exhibited an average positive perception of 5.03 % and a “Very Positive” perception of 11.28 % , reflecting a moderate perceived usefulness, particularly among younger students aged 20–22, who recorded the highest rates of “Very Positive” responses ( 25 % in specific items such as O-Q2). However, among students aged 29–31 and those over 32, responses remained predominantly neutral and, to a lesser extent, negative (approximately 4 % “Very Negative”), indicating the persistence of skepticism within these age segments. This trend is visually reinforced in Figure 5, where the O-D1 sections are predominantly green among younger students and shift toward yellow and purple tones among older age groups.
A similar, yet more concerning, pattern was identified in Dimension O-D2 (Difficulty of using ChatGPT in the design of recreational activities). Although the positive perception reached 6.6 % and “Very Positive” 7.8 %, the increase in “Negative” responses (0.87 %) and “Neutral” responses (1.2 %) evidenced perceived usage difficulties, particularly among students over 29 years old. In the sunburst chart, the O-D2 segments display fragmented blocks of purple and neutral tones among older groups, suggesting an urgent need for targeted technical support and differentiated training programs. In contrast to these challenges, Dimension O-D3 (Satisfaction with the use of ChatGPT) emerged as the most robust area, recording 4.17 % “Positive” and 12.5 % “Very Positive” perceptions, notably concentrated among students aged 20–22. This finding reveals an outstanding level of satisfaction among younger participants, as evidenced by the wider green bands in the corresponding section of the previous chart. Dimension O-D4 (Impact of ChatGPT on the design of recreational activities) also yielded encouraging results, with positive perceptions (6.9 %) and “Very Positive” perceptions (9.5 %), highlighting the recognition of improvements in preparation and time management facilitated by ChatGPT, particularly among students aged 20 to 28 years. Finally, Dimension O-D5 (ChatGPT future usage outlook) reaffirmed optimism toward the tool, with 5.9 % “Positive” and 10.7 % “Very Positive” responses. Once again, younger students led the enthusiasm for future integration of ChatGPT into educational practices, whereas older students maintained a positive but more moderate attitude.
The item-level analysis revealed relevant trends. Items O-D5 O-Q19 (“Am I willing to continue training in the use of ChatGPT?”) and O-D3 O-Q11 (“Did ChatGPT save me time preparing activities?”) recorded the highest levels of “Very Positive” perceptions (over 13 % in both cases), particularly within the 20–22 age group. This data reflects not only satisfaction but also an active willingness to continue developing competencies in the use of artificial intelligence. Conversely, items such as O-D2 O-Q7 (“Do I think using ChatGPT requires too much technical preparation?”) exhibited higher levels of “Neutral” responses (2.77 %) and some “Negative” responses, especially among older participants, highlighting technical barriers that must still be overcome to achieve universal adoption.
Segmented analysis by age intervals confirmed that generational factors play a fundamental role. Students aged 20–22 consolidated their role as the most receptive group, recording 16.6 % “Positive” and 19.3 % “Very Positive” perceptions, clearly surpassing other age groups. In contrast, students over 32 years old exhibited lower rates of “Positive” (5 %) and “Very Positive” (11.4 %) responses, although without displaying overt rejection (almost no “Very Negative” responses). This youthful enthusiasm can be explained by their status as digital natives, who possess a natural familiarity with and confidence in integrating emerging technologies into educational processes, in contrast with older students, who face barriers associated with a lack of prior experience, resistance to change, and cognitive or physical limitations inherent to their age group.
Intermediate groups (aged 23–28) exhibited intermediate positive perceptions, indicating a latent potential that could be strengthened through training strategies aimed at simplifying usage and reinforcing technological confidence. Graphical representations provided powerful visual validation of these findings: vibrant green sectors predominated among the 20–22 age group, while neutral or bluish tones were concentrated among older students, thereby corroborating the tabular perception trends. In conclusion, the findings confirm that, although the overall disposition toward ChatGPT in the design of leisure activities is positive, acceptance remains heterogeneous and strongly conditioned by age and technological familiarity levels. Dimensions related to satisfaction and future projections (O-D3 and O-D5) emerge as strengths to be consolidated, while the perceived technical challenges (O-D2) and immediate utility (O-D1) must be strategically addressed. Consequently, there is a pressing need to design differentiated training programs based on age groups, promoting effective technological inclusion that maximizes the pedagogical impact of tools such as ChatGPT in Physics education and other domains.
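As a complement to the dimension-by-dimension description above, the entrance and exit perceptions can also be contrasted by averaging the four items of each dimension. A minimal sketch, assuming hypothetical data frames entry_survey and exit_survey (24 rows; columns I-Q1 to I-Q20 and O-Q1 to O-Q20; responses coded 1-5) and the sequential item-to-dimension mapping described in Section 2:

import pandas as pd

def dimension_means(df: pd.DataFrame, prefix: str) -> pd.Series:
    """Mean 1-5 response of the four items belonging to each dimension."""
    means = {}
    for d in range(1, 6):
        # e.g. prefix "I", d=1 -> items I-Q1..I-Q4 (assumed sequential grouping)
        items = [f"{prefix}-Q{4 * (d - 1) + q}" for q in range(1, 5)]
        means[f"D{d}"] = df[items].to_numpy().mean()
    return pd.Series(means)

# Hypothetical usage:
# comparison = pd.DataFrame({"entrance": dimension_means(entry_survey, "I"),
#                            "exit": dimension_means(exit_survey, "O")})
# print(comparison.round(2))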

5. Discussion

The findings of this study highlight the positive impact of ChatGPT in training future physics educators, particularly in the design of gamified learning activities. These results align with prior research and provide significant implications for educational practice.

5.1. Key Findings and Their Relationship with the Literature

Prior to the intervention, participants exhibited low familiarity with ChatGPT, with 63 % considering themselves “unfamiliar” with the tool and 75 % having never used it for adapting gamified activities to physics education. However, post-intervention, perceptions shifted significantly: 79 % rated ChatGPT as “very useful” for tailoring activities to different learning levels, and 83 % emphasized its ability to optimize preparation time. These outcomes corroborate previous studies that identify ChatGPT as an effective tool for enhancing conceptual understanding and personalizing learning across various educational contexts (Farrokhnia et al., 2024; Silva et al., 2024; Uddin et al., 2024).
Recent studies have explored ChatGPT’s application in higher education, demonstrating improvements in students’ response quality following interactions with the platform. This is attributed to the AI’s ability to structure ideas and provide immediate feedback (Uddin et al., 2024). Similarly, in programming education, students using ChatGPT as an assistant exhibit increased confidence and improved conceptual comprehension (Silva et al., 2024). These findings reinforce the notion that AI-driven tools can serve as valuable assets for disciplines requiring analytical processes and practical applications.

5.2. Comparison with Other AI-Based Educational Tools

While ChatGPT’s benefits are well-documented, some studies have noted its limitations in comparison to AI tools specifically designed for educational purposes. For instance, a comparative study between ChatGPT and an anatomy-specific AI chatbot (Anatbuddy) found that the latter provided more precise information, suggesting that customized AI applications may be more effective in specialized educational settings (Arun et al., 2024). Similarly, in medical education, ChatGPT has been found useful for theoretical learning but presents limitations in generating complex clinical case studies when compared to other AI systems (Guo & Li, 2023).
In other domains, research in acupuncture education has shown that ChatGPT-4 generates more comprehensive and accurate responses than ChatGPT-3.5, highlighting the continuous evolution of these tools and their potential for educational enhancement (Lee, 2023). However, concerns have also been raised about the overreliance on ChatGPT, which could lead to a decrease in critical thinking skills among students (Farrokhnia et al., 2024).

5.3. Implications for Physics Education and Future Opportunities

The participants highly valued the ability of ChatGPT to generate gamified learning activities customized to different educational levels. This suggests that integrating ChatGPT into teacher training programs can promote a more interactive and meaningful pedagogical approach, allowing future educators to develop diverse teaching strategies suited to their students’ needs. Previous research has shown that incorporating AI tools in education improves student motivation and participation while fostering self-directed learning (Castro et al., 2024).
These findings emphasize the need for continued research on optimizing AI integration in teacher education. Future work should explore hybrid models that combine ChatGPT with domain-specific AI tools to maximize pedagogical effectiveness. Furthermore, addressing potential drawbacks, such as AI dependency and the ethical implications of its use in educational settings, remains crucial to ensure its responsible implementation in teaching practices.

6. Conclusions

The entry survey analysis reveals that students exhibited low levels of familiarity and knowledge regarding the use of ChatGPT for designing playful activities. In the familiarity dimension, 63 % of students reported being “Somewhat familiar” with ChatGPT for creating activities, while 75 % had never or rarely used it. Furthermore, 71 % indicated no or rare use of ChatGPT in adapting playful activities to specific Physics topics, underscoring its limited integration in educational contexts. A McDonald’s Omega coefficient of 0.85 confirmed the reliability of these findings, highlighting a clear need for additional training to enhance the adoption and practical application of this technology. In the knowledge dimension, 42 % of students reported “Nothing at all” and 33 % “Low” regarding ChatGPT’s capacity to assist in planning playful activities. Only 21 % agreed that ChatGPT could foster critical thinking in Physics. These results demonstrate limited understanding of ChatGPT’s capabilities and a lack of specific knowledge about its pedagogical implementation. The McDonald’s Omega coefficient of 0.83 further validated the reliability of these findings, emphasizing the need for targeted training to strengthen comprehension and use of ChatGPT in educational settings.
In terms of attitudes toward ChatGPT, 54 % of students agreed, and 46 % strongly agreed on their willingness to use it in Physics classes, reflecting an overall positive attitude. However, 50 % reported only “Moderate confidence” in ChatGPT’s ability to generate educational activities, indicating the need to reinforce perceptions of its utility. The McDonald’s Omega coefficient of 0.84 confirmed internal consistency, highlighting a moderately positive disposition toward the tool. Regarding perceived impact, 33 % reported “Low” knowledge of ChatGPT’s benefits for creating educational activities, and 29 % assessed it at a “Moderate” level. While 50 % believed the tool could foster student engagement, 17 % expressed negative responses. These findings point to an emerging but incomplete recognition of ChatGPT’s potential impact, which must be strengthened through training and practice. The McDonald’s Omega coefficient of 0.83 validated the reliability of this dimension. Finally, in the future applications dimension, 46 % of students were “Moderately interested,” and 25 % were “Very interested” in exploring ChatGPT for designing playful activities. Additionally, 54 % agreed that it could personalize activities based on student interests. These findings suggest a favorable outlook on ChatGPT’s future use, with strong interest in its practical integration. The McDonald’s Omega coefficient of 0.85 supports the high consistency of responses within this dimension.
Initially, 63 % of students reported being either slightly or entirely unfamiliar with ChatGPT, and 75 % had never used it to design activities in Physics. Following the workshop, 79 % considered ChatGPT to be highly useful for adapting activities to different educational levels, and 83 % highlighted its efficiency in reducing preparation time. The responses indicated a high level of satisfaction and a significant improvement in the perceived usefulness, ease of use, and overall impact on the teaching process. The 20–22 age group exhibited the most favorable attitude toward the future integration of ChatGPT.
From an age-based analysis, students aged 20 to 22 emerged as the most enthusiastic and receptive cohort, leading the proportion of “very positive” responses—particularly in items related to their willingness to continue training in the use of ChatGPT (O-Q19) and their perception of time savings (O-Q11), with over 13 % of responses marked as “very positive.” In contrast, students over the age of 32 displayed more moderate and, in some cases, skeptical attitudes, suggesting the need for age-differentiated training programs.
The exit survey analysis reflects significant positive changes in students’ perceptions and usage of ChatGPT following the intervention. In the perceived usefulness dimension, 67 % of students indicated that ChatGPT improved the quality of playful activities, and 79 % rated it as “Very useful” for adapting activities to students’ learning levels. Additionally, 63 % recognized its utility for personalizing activities to individual needs. These results demonstrate high appreciation for ChatGPT’s impact on the quality and adaptation of teaching strategies, supported by a McDonald’s Omega coefficient of 0.71. In the ease of use dimension, 46 % rated ChatGPT as “Easy” and 54 % as “Very easy” to understand and use. Instructor explanations were deemed “Completely sufficient” by 67 % of students, facilitating comprehension. However, 25 % considered that ChatGPT requires moderate technical preparation, highlighting the need for additional support. The McDonald’s Omega coefficient of 0.74 confirmed adequate consistency, reflecting that ChatGPT is perceived as accessible and user-friendly.
For the satisfaction dimension, 71 % of students were “Very satisfied” with the outcomes achieved using ChatGPT, while 83 % acknowledged its utility in optimizing preparation time. These results indicate high satisfaction levels, validated by a McDonald’s Omega coefficient of 0.77, which supports the reliability of this dimension. Regarding perceived impact, 67 % of students highlighted ChatGPT’s ability to personalize classes and provide clear, dynamic explanations of complex Physics concepts. Furthermore, 58 % acknowledged its usefulness in evaluating teaching processes through playful activities. The McDonald’s Omega coefficient of 0.89 confirmed high reliability, reflecting a significantly positive impact of ChatGPT in educational contexts. Lastly, in the future use dimension, 71 % strongly agreed on their intention to continue using ChatGPT for designing playful activities, and 63 % saw it as a key tool for long-term teaching practices. These findings demonstrate optimistic and proactive attitudes toward ChatGPT’s continued use, with a high valuation of its long-term educational potential. The McDonald’s Omega coefficient of 0.83 validated the internal consistency of this dimension, reflecting a strong consensus on ChatGPT’s future role in education.
The methodology applied was appropriate, employing a quantitative approach with pre- and post-intervention surveys. The descriptive-comparative design allowed for the measurement of changes in participants’ perceptions and knowledge after using ChatGPT. However, the self-selection sampling method, involving 24 participants, may limit the representativeness of the findings, potentially introducing bias. Expert reviews ensured the validity and clarity of the instruments, while the practical laboratory intervention enabled students to interact with the tool directly.

Author Contributions

Conceptualization, M.G.-Z. and L.S.-A.; methodology, M.G.-Z. and B.V.-C.; validation and formal analysis, M.G.-Z. and E.L.-R.; investigation, M.G.-Z. and E.L.-R.; writing—original draft preparation, M.G.-Z.; writing—review and editing, E.L.-R.; visualization, B.V.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Universidad Estatal de Milagro (protocol code FACE-001 and date of approval).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data supporting the findings of this study are fully available within the article. No additional datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Entrance and Exit Questionnaires

Table A1. Entrance Questionnaire.
Dimension 1 (I-D1): Familiarity with ChatGPT in the Design of Recreational Activities
Am I familiar with the use of ChatGPT in creating recreational activities?
(A) Not at all familiar
(B) A little familiar
(C) Moderately familiar
(D) Very familiar
(E) Extremely familiar
Do I know the capabilities of ChatGPT to generate ideas and educational games?
(A) Not at all
(B) Little
(C) Moderate
(D) Quite a bit
(E) Completely
Have I explored ChatGPT as a tool to design interactive and fun activities?
(A) Never
(B) Rarely
(C) Sometimes
(D) Frequently
(E) Always
Do I know how to use ChatGPT to adapt recreational activities to specific Physics topics?
(A) Never
(B) Rarely
(C) Sometimes
(D) Frequently
(E) Always
Dimension 2 (I-D2): Knowledge of the use of ChatGPT in the design of recreational activities
Do I have knowledge of how ChatGPT can assist in planning games and recreational activities in Physics?
(A) Not at all
(B) Little
(C) Moderate
(D) A lot
(E) Extensive
Do I understand how ChatGPT can help design activities that encourage experimentation in Physics?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Do I know that ChatGPT can make it easy to create quizzes, role-plays, and other fun activities?
(A) Not at all
(B) Little
(C) Moderate
(D) A lot
(E) Completely
Do I understand how ChatGPT can be used to create activities that develop critical thinking in Physics?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 3 (I-D3): Attitudes about the use of ChatGPT in the design of recreational activities
Am I willing to use ChatGPT to design recreational activities in my Physics classes?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Do I feel comfortable with the idea that ChatGPT helps structure playful dynamics in Physics?
(A) Very uncomfortable
(B) Uncomfortable
(C) Neutral
(D) Comfortable
(E) Very comfortable
Do I trust ChatGPT to generate fun and educational activity ideas for students?
(A) No trust at all
(B) Little trust
(C) Moderate trust
(D) Quite a bit of trust
(E) Total trust
Do I think that using ChatGPT in the design of recreational activities will enrich Physics classes?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 4 (I-D4): Knowledge of the impact of ChatGPT on the design of recreational activities
Do I know the benefits that ChatGPT can bring to creating educational and recreational activities?
(A) Not at all
(B) Little
(C) Moderate
(D) Quite a bit
(E) Completely
Do I know that using ChatGPT in designing game activities can encourage greater student engagement?
(A) Not at all
(B) Little
(C) Moderate
(D) Quite a bit
(E) Completely
Do I understand the potential challenges of using ChatGPT in educational and recreational contexts?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Am I aware of the positive impact ChatGPT could have on students’ motivation to learn Physics?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 5 (I-D5): Future applications of ChatGPT in the classroom
Am I interested in exploring the use of ChatGPT to design gaming activities in the future?
(A) Not at all interested
(B) A little interested
(C) Moderately interested
(D) Very interested
(E) Extremely interested
Do I think ChatGPT could help customize gaming activities based on students’ interests?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Would I like to experiment with ChatGPT to create educational games and challenges in Physics?
(A) Not at all interested
(B) A little interested
(C) Moderately interested
(D) Very interested
(E) Extremely interested
Do I think ChatGPT has the potential to transform the game-based approach to teaching Physics in the future?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Table A2. Exit Questionnaire.
Dimension 1 (O-D1): Perceived usefulness of ChatGPT in the Design of Leisure Activities
Did using ChatGPT help me improve the quality of the fun activities I designed for teaching Physics?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Did I find ChatGPT helpful in adapting gaming activities to the students’ learning level?
(A) Totally useless
(B) Useless
(C) Neither useful nor useless
(D) Useful
(E) Very useful
Did using ChatGPT allow me to customize the play activities to better suit each student’s needs?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Do I think ChatGPT has improved my ability to teach complex Physics concepts through games?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 2 (O-D2): Difficulty of using ChatGPT in the design of recreational activities
Was it easy for me to learn how to use ChatGPT to design fun activities in Physics?
(A) Very difficult
(B) Difficult
(C) Neither easy nor difficult
(D) Easy
(E) Very easy
Were the teacher’s explanations sufficient to understand how to use ChatGPT in the design of recreational activities in the educational context?
(A) Totally insufficient
(B) Insufficient
(C) Neither sufficient nor insufficient
(D) Sufficient
(E) Totally sufficient
Do I think that using ChatGPT requires too much technical preparation to be implemented effectively in the design of recreational activities?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Was the ChatGPT app I used easy to understand and use?
(A) Very difficult to understand and use
(B) Difficult to understand and use
(C) Neither easy nor difficult
(D) Easy to understand and use
(E) Very easy to understand and use
Dimension 3 (O-D3): Satisfaction with the use of ChatGPT in the design of recreational activities
Am I satisfied with the results obtained after using ChatGPT in the design of recreational Physics activities?
(A) Very dissatisfied
(B) Dissatisfied
(C) Neither satisfied nor dissatisfied
(D) Satisfied
(E) Very satisfied
Am I satisfied with the way ChatGPT facilitated the teaching process in Physics by designing fun activities?
(A) Very dissatisfied
(B) Dissatisfied
(C) Neither satisfied nor dissatisfied
(D) Satisfied
(E) Very satisfied
Did ChatGPT save me time preparing and implementing recreational activities?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Overall, am I satisfied with the experience of using ChatGPT in my teaching practice?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 4 (O-D4): Impact of ChatGPT on the design of recreational activities
Has using ChatGPT allowed me to simplify the preparation of recreational activities and better manage my time during teaching?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Thanks to the playful activities designed with ChatGPT, have I managed to offer clearer and more dynamic explanations of Physics concepts?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Has ChatGPT made it easier to personalize my classes, adapting fun activities to meet the different needs of students?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Has ChatGPT improved my ability to evaluate the teaching process with the help of playful activities?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Dimension 5 (O-D5): ChatGPT future usage outlook
Would I like to continue using ChatGPT in the design of other types of activities for teaching Physics?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Do I consider ChatGPT to be a key tool in my long-term teaching practice?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree
Am I willing to continue training in the use of ChatGPT to make better use of its educational benefits?
(A) Not at all interested
(B) A little interested
(C) Moderately interested
(D) Very interested
(E) Extremely interested
Do I think ChatGPT will play an important role in future education and should be integrated into teacher training?
(A) Strongly disagree
(B) Disagree
(C) Neither agree nor disagree
(D) Agree
(E) Strongly agree

References

  1. Adeshola, I., & Adepoju, A. P. (2024). The opportunities and challenges of ChatGPT in education. Interactive Learning Environments, 32(10), 6159–6172. [Google Scholar] [CrossRef]
  2. Arun, G., Perumal, V., Urias, F. P. J. B., Ler, Y. E., Tan, B. W. T., Vallabhajosyula, R., Tan, E., Ng, O., Ng, K. B., & Mogali, S. R. (2024). ChatGPT versus a customized AI chatbot (Anatbuddy) for anatomy education: A comparative pilot study. Anatomical Sciences Education, 17(7), 1396–1405. [Google Scholar] [CrossRef]
  3. Barrot, J. S. (2024). ChatGPT as a language learning tool: An emerging technology report. Technology, Knowledge and Learning, 29(2), 1151–1156. [Google Scholar] [CrossRef]
  4. Bile, A. (2022). Development of intellectual and scientific abilities through game-programming in Minecraft. Education and Information Technologies, 27(5), 7241–7256. [Google Scholar] [CrossRef]
  5. Bloch, J. (2024). How to internationalize teacher training: Overview of barriers and approaches to solutions. In Internationalization of teacher education in higher education (67p). wbv Media GmbH & Co. [Google Scholar]
  6. Borenstein, J., & Howard, A. (2021). Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1, 61–65. [Google Scholar] [CrossRef]
  7. Borisova, M., Hadzhikoleva, S., & Hadzhikolev, E. (2023, September 26–27). Use of Artificial Intelligence technologies in studying the phenomenon of electric current in physics education. International Conference on Virtual Learning, Virtual. [Google Scholar]
  8. Campo-Arias, A., & Oviedo, H. C. (2008). Propiedades psicométricas de una escala: La consistencia interna. Revista de salud pública, 10, 831–839. [Google Scholar] [CrossRef]
  9. Candela Borja, Y. M., & Benavides Bailón, J. (2020). Actividades lúdicas en el proceso de enseñanza-aprendizaje de los estudiantes de básica superior. Revista de Ciencias Humanísticas y Sociales (ReHuSo), 5(3), 90–98. [Google Scholar] [CrossRef]
  10. Cao, L. (2022). AI in finance: Challenges, techniques, and opportunities. ACM Computing Surveys (CSUR), 55(3), 1–38. [Google Scholar]
  11. Castro, R. A. G., Cachicatari, N. A. M., Aste, W. M. B., & Medina, M. P. L. (2024). Exploration of ChatGPT in basic education: Advantages, disadvantages, and its impact on school tasks. Contemporary Educational Technology, 16(3), ep511. [Google Scholar] [CrossRef]
  12. Challoumis, C. (2024, November 7–8). The landscape of AI in Finance. XVII International Scientific Conference (pp. 109–144), Dortmund, Germany. [Google Scholar]
  13. Clarke, S., & Collier, S. (2015). Research essentials. Nursing Children & Young People, 27(9), 12. [Google Scholar]
  14. Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. Journal of Science Education and Technology, 32(3), 444–452. [Google Scholar] [CrossRef]
  15. Dai, Y., Lin, Z., Liu, A., & Wang, W. (2024). An embodied, analogical and disruptive approach of AI pedagogy in upper elementary education: An experimental study. British Journal of Educational Technology, 55(1), 417–434. [Google Scholar] [CrossRef]
  16. da Silva, D. R., da Silva Santos, S., Carbo, L., da Silva, J. L., Berton, A., & Mello, G. J. (2020). Atividades práticas e lúdica no ensino de Ciências: Sequência didática sobre calor e temperatura. Research, Society and Development, 9(5), e186953368. [Google Scholar] [CrossRef]
  17. Deng, J., & Lin, Y. (2022). The benefits and challenges of ChatGPT: An overview. Frontiers in Computing and Intelligent Systems, 2(2), 81–83. [Google Scholar] [CrossRef]
  18. Du, J., & Alm, A. (2024). The impact of ChatGPT on English for academic purposes (EAP) students’ language learning experience: A self-determination theory perspective. Education Sciences, 14(7), 726. [Google Scholar] [CrossRef]
  19. ElSayary, A. (2024). An investigation of teachers’ perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. Journal of Computer Assisted Learning, 40(3), 931–945. [Google Scholar] [CrossRef]
  20. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460–474. [Google Scholar] [CrossRef]
  21. Frías-Navarro, D. (2022). Apuntes de estimación de la fiabilidad de consistencia interna de los ítems de un instrumento de medida. Universidad de Valencia, 23, 1–31. [Google Scholar]
  22. Garikapati, D., & Shetiya, S. S. (2024). Autonomous vehicles: Evolution of artificial intelligence and the current industry landscape. Big Data and Cognitive Computing, 8(4), 42. [Google Scholar] [CrossRef]
  23. Grimes, D. A., & Schulz, K. F. (2002). Descriptive studies: What they can and cannot do. The Lancet, 359(9301), 145–149. [Google Scholar] [CrossRef]
  24. Guo, A. A., & Li, J. (2023). Harnessing the power of ChatGPT in medical education. Medical Teacher, 45(9), 1063. [Google Scholar] [CrossRef]
  25. Ha, S. (2023). Exploring the practical applications of Chat GPT for simulation teaching by preservice physics teachers. New Physics: Sae Mulli, 73, 734–749. [Google Scholar] [CrossRef]
  26. Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570. [Google Scholar] [CrossRef]
  27. Hyun, H., Yoo, W. S., & Chen, Y. (2025). Retailing education as panaceas: Exploring the effects of knowledge transfer on organizational and employee outcomes. Journal of Retailing and Consumer Services, 84, 104259. [Google Scholar] [CrossRef]
  28. Jiang, X., Xu, J., & Xu, X. (2024). An overview of domestic and international applications of digital technology in teaching in vocational education: Systematic literature mapping. Education and Information Technologies, 29(13), 16867–16899. [Google Scholar] [CrossRef]
  29. Kalenda, P. J., Rath, L., Abugasea Heidt, M., & Wright, A. (2025). Pre-service teacher perceptions of ChatGPT for lesson plan generation. Journal of Educational Technology Systems, 53(3), 219–241. [Google Scholar] [CrossRef]
  30. Kiryakova, G., & Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Education Sciences, 13(10), 1056. [Google Scholar] [CrossRef]
  31. Lebedeva, M., Taranova, M., & Beketov, V. (2023). Assessment of academic achievements in m-learning. Education and Information Technologies, 28(5), 5945–5965. [Google Scholar] [CrossRef]
  32. Lee, H. (2023). Using ChatGPT as a learning tool in acupuncture education: Comparative study. JMIR Medical Education, 9, e47427. [Google Scholar] [CrossRef]
  33. Liang, Y., Zou, D., Xie, H., & Wang, F. L. (2023). Exploring the potential of using ChatGPT in physics education. Smart Learning Environments, 10(1), 52. [Google Scholar] [CrossRef]
  34. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  35. Mai, D. T. T., Da, C. V., & Hanh, N. V. (2024). The use of ChatGPT in teaching and learning: A systematic review through SWOT analysis approach. Frontiers in Education, 9, 1328769. [Google Scholar] [CrossRef]
  36. Mohajan, H. K. (2020). Quantitative research: A successful investigation in natural and social sciences. Journal of Economic Development, Environment and People, 9(4), 50–79. [Google Scholar] [CrossRef]
  37. Niss, M. (2012). Towards a conceptual framework for identifying student difficulties with solving Real-World Problems in Physics. Latin-American Journal of Physics Education, 6(1), 3–13. [Google Scholar]
  38. Orellana, J. S., Cordero, C. A., & Espinoza, J. C. (2025). Validación del cuestionario para docentes: Percepción sobre el uso de ChatGPT en la educación superior. Revista Andina de Educación, 8(1), 000816. [Google Scholar] [CrossRef]
  39. Owen, S., Dickson, D., Stanisstreet, M., & Boyes, E. (2008). Teaching physics: Students’ attitudes towards different learning activities. Research in Science & Technological Education, 26(2), 113–128. [Google Scholar]
  40. Parekh, A.-D. E., Shaikh, O. A., Simran, Manan, S., & Al Hasibuzzaman, M. (2023). Artificial intelligence (AI) in personalized medicine: AI-generated personalized therapy regimens based on genetic and medical history. Annals of Medicine and Surgery, 85(11), 5831–5833. [Google Scholar] [CrossRef]
  41. Prieto, S. A., Mengiste, E. T., & García de Soto, B. (2023). Investigating the use of ChatGPT for the scheduling of construction projects. Buildings, 13(4), 857. [Google Scholar] [CrossRef]
  42. Roco-Videla, Á., Aguilera-Eguía, R., & Olguin-Barraza, M. (2024). Ventajas del uso del coeficiente de omega de McDonald frente al alfa de Cronbach. Nutrición Hospitalaria, 41(1), 262–263. [Google Scholar]
  43. Rutherford, A. (2004). Self-selected samples. In Encyclopedia of statistical sciences. Wiley. [Google Scholar]
  44. Schiff, D. (2022). Education for AI, not AI for education: The role of education and ethics in national AI policy strategies. International Journal of Artificial Intelligence in Education, 32(3), 527–563. [Google Scholar] [CrossRef]
  45. Silva, C. A. G. d., Ramos, F. N., De Moraes, R. V., & Santos, E. L. d. (2024). ChatGPT: Challenges and benefits in software programming for higher education. Sustainability, 16(3), 1245. [Google Scholar] [CrossRef]
  46. Sperling, A., & Lincoln, J. (2024). Artificial intelligence and high school physics. The Physics Teacher, 62(4), 314–315. [Google Scholar] [CrossRef]
  47. Sudirman, I. D., & Rahmatillah, I. (2023, June 7–10). Artificial intelligence-assisted discovery learning: An educational experience for entrepreneurship students using ChatGPT. 2023 IEEE World AI IoT Congress (AIIoT) (pp. 0786–0791), Virtual. [Google Scholar]
  48. Sung, N.-J., Ma, J., Choi, Y.-J., & Hong, M. (2019). Real-time augmented reality physics simulator for education. Applied Sciences, 9(19), 4019. [Google Scholar] [CrossRef]
  49. Tahiru, F. (2021). AI in education: A systematic literature review. Journal of Cases on Information Technology (JCIT), 23(1), 1–20. [Google Scholar] [CrossRef]
  50. Uddin, S. J., Albert, A., Tamanna, M., Ovid, A., & Alsharef, A. (2024). ChatGPT as an educational resource for civil engineering students. Computer Applications in Engineering Education, 32(4), e22747. [Google Scholar] [CrossRef]
  51. Vehovar, V., Toepoel, V., Steinmetz, S., Wolf, C., Joye, D., Smith, T., & Fu, Y. (2016). The Sage handbook of survey methodology. Sage. [Google Scholar] [CrossRef]
  52. Ventura-León, J. L., & Caycho-Rodríguez, T. (2017). El coeficiente Omega: Un método alternativo para la estimación de la confiabilidad. Revista Latinoamericana de Ciencias Sociales, niñez y juventud, 15(1), 625–627. [Google Scholar]
  53. Viladrich, C., Angulo-Brunet, A., & Doval, E. (2017). Un viaje alrededor de alfa y omega para estimar la fiabilidad de consistencia interna. Anales de Psicología/Annals of Psychology, 33(3), 755–782. [Google Scholar] [CrossRef]
  54. Viorennita, A., Dewi, L., & Riyana, C. (2023). The role of ChatGPT AI in student learning experience. Indonesian Journal of Multidisciplinary Research, 3(2), 445–452. [Google Scholar] [CrossRef]
  55. West, C. G. (2023). AI and the FCI: Can ChatGPT project an understanding of introductory physics? arXiv, arXiv:2303.01067. [Google Scholar]
  56. Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q.-L., & Tang, Y. (2023). A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica, 10(5), 1122–1136. [Google Scholar] [CrossRef]
  57. Yim, I. H. Y., & Su, J. (2025). Artificial intelligence (AI) learning tools in K-12 education: A scoping review. Journal of Computers in Education, 12(1), 93–131. [Google Scholar] [CrossRef]
  58. Zahid, I. A., Joudar, S. S., Albahri, A., Albahri, O., Alamoodi, A., Santamaría, J., & Alzubaidi, L. (2024). Unmasking large language models by means of OpenAI GPT-4 and Google AI: A deep instruction-based analysis. Intelligent Systems with Applications, 23, 200431. [Google Scholar] [CrossRef]
  59. Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J.-B., Yuan, J., & Li, Y. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity, 2021(1), 8812542. [Google Scholar] [CrossRef]
Figure 2. Entrance Questionnaire results.
Figure 3. Exit Questionnaire results.
Figure 4. Entrance Questionnaire analysis.
Figure 5. Exit Questionnaire analysis.
Table 1. Dimensions and questions applied in research.
Entrance
Dimension 1 (I-D1): Familiarity with ChatGPT in the design of recreational activities.
I-Q1. Am I familiar with the use of ChatGPT in creating recreational activities?
I-Q2. Do I know the capabilities of ChatGPT to generate ideas and educational games?
I-Q3. Have I explored ChatGPT as a tool to design interactive and fun activities?
I-Q4. Do I know how to use ChatGPT to adapt recreational activities to specific Physics topics?
Dimension 2 (I-D2): Knowledge of the use of ChatGPT in the design of recreational activities.
I-Q5. Do I have knowledge of how ChatGPT can assist in planning games and recreational activities in Physics?
I-Q6. Do I understand how ChatGPT can help design activities that encourage experimentation in Physics?
I-Q7. Do I know that ChatGPT can make it easy to create quizzes, role-plays, and other fun activities?
I-Q8. Do I understand how ChatGPT can be used to create activities that develop critical thinking in Physics?
Dimension 3 (I-D3): Attitudes about the use of ChatGPT in the design of recreational activities.
I-Q9. Am I willing to use ChatGPT to design recreational activities in my Physics classes?
I-Q10. Do I feel comfortable with the idea that ChatGPT helps structure playful dynamics in Physics?
I-Q11. Do I trust ChatGPT to generate fun and educational activity ideas for students?
I-Q12. Do I think that using ChatGPT in the design of recreational activities will enrich Physics classes?
Dimension 4 (I-D4): Knowledge of the impact of ChatGPT on the design of recreational activities.
I-Q13. Do I know the benefits that ChatGPT can bring to creating educational and recreational activities?
I-Q14. Do I know that using ChatGPT in designing game activities can encourage greater student engagement?
I-Q15. Do I understand the potential challenges of using ChatGPT in educational and recreational contexts?
I-Q16. Am I aware of the positive impact ChatGPT could have on students’ motivation to learn Physics?
Dimension 5 (I-D5): Future applications of ChatGPT in the classroom.
I-Q17. Am I interested in exploring the use of ChatGPT to design gaming activities in the future?
I-Q18. Do I think ChatGPT could help customize gaming activities based on students’ interests?
I-Q19. Would I like to experiment with ChatGPT to create educational games and challenges in Physics?
I-Q20. Do I think ChatGPT has the potential to transform the game-based approach to teaching Physics in the future?
Exit
Dimension 1 (O-D1): Perceived usefulness of ChatGPT in the design of leisure activities.
O-Q1. Did using ChatGPT help me improve the quality of the fun activities I designed for teaching Physics?
O-Q2. Did I find ChatGPT helpful in adapting gaming activities to the students’ learning level?
O-Q3. Did using ChatGPT allow me to customize the play activities to better suit each student’s needs?
O-Q4. Do I think ChatGPT has improved my ability to teach complex Physics concepts through games?
Dimension 2 (O-D2): Difficulty of using ChatGPT in the design of recreational activities.
O-Q5. Was it easy for me to learn how to use ChatGPT to design fun activities in Physics?
O-Q6. Were the teacher’s explanations sufficient to understand how to use ChatGPT in the design of recreational activities in the educational context?
O-Q7. Do I think that using ChatGPT requires too much technical preparation to be implemented effectively in the design of recreational activities?
O-Q8. Was the ChatGPT app I used easy to understand and use?
Dimension 3 (O-D3): Satisfaction with the use of ChatGPT in the design of recreational activities.
O-Q9. Am I satisfied with the results obtained after using ChatGPT in the design of recreational Physics activities?
O-Q10. Am I satisfied with the way ChatGPT facilitated the teaching process in Physics by designing fun activities?
O-Q11. Did ChatGPT save me time preparing and implementing recreational activities?
O-Q12. Overall, am I satisfied with the experience of using ChatGPT in my teaching practice?
Dimension 4 (O-D4): Impact of ChatGPT on the design of recreational activities.
O-Q13. Has using ChatGPT allowed me to simplify the preparation of recreational activities and better manage my time during teaching?
O-Q14. Thanks to the playful activities designed with ChatGPT, have I managed to offer clearer and more dynamic explanations of Physics concepts?
O-Q15. Has ChatGPT made it easier to personalize my classes, adapting fun activities to meet the different needs of students?
O-Q16. Has ChatGPT improved my ability to evaluate the teaching process with the help of playful activities?
Dimension 5 (O-D5): ChatGPT future usage outlook.
O-Q17. Would I like to continue using ChatGPT in the design of other types of activities for teaching Physics?
O-Q18. Do I consider ChatGPT to be a key tool in my long-term teaching practice?
O-Q19. Am I willing to continue training in the use of ChatGPT to make better use of its educational benefits?
O-Q20. Do I think ChatGPT will play an important role in future education and should be integrated into teacher training?
Table 2. Summary of Survey Evaluations.
First Survey. Dimensions evaluated: clarity, thematic coverage, familiarity, technical knowledge, attitudes, and potential applications. Likert scales: from “Strongly Disagree” to “Strongly Agree” and from “Not Familiar” to “Extremely Familiar”.
Second Survey. Dimensions evaluated: usefulness, difficulty, satisfaction, impact, and future perspectives. Likert scales: from “Very Difficult” to “Very Easy” and from “Very Dissatisfied” to “Very Satisfied”.
Table 3. Key Instructional Topics Covered During the Intervention.
ChatGPT in Physics Education: Overview of ChatGPT and its applications in education.
Introduction to ChatGPT: Explanation of its functions and benefits in teaching.
Concept of Prompts: Key elements such as context, instructions, tone, and constraints.
Prompt Example: Practical demonstration of an effective prompt (an illustrative sketch follows this table).
Limitations: Discussion of challenges in scientific content generation.
Designing Activities Using ChatGPT: Exploration of AI-assisted playful activity creation.
Playful Activities: Definition and pedagogical advantages.
Types of Playful Activities: Various approaches for interactive learning.
Simulator-Based: Example of a projectile motion simulation.
Role-Playing Games: Example of displacement and distance scenarios.
Experiment-Based: Example of free fall demonstrations.
Adapted Games: Example of Monopoly adapted for translational dynamics.
Quiz-Based: Example of translational kinematics questions.
Advantages of Playful Activities: Motivation, teamwork, and meaningful learning benefits.
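For concreteness, the “Concept of Prompts” and “Prompt Example” topics above can be illustrated with a prompt structured around the four elements named in the table. The prompt below is an invented example for a simulator-based projectile motion activity, not the prompt actually used in the workshop, and the class details are assumptions made only for illustration.

Context: You are helping a pre-service Physics teacher plan a 40-minute secondary-school class on projectile motion.
Instructions: Propose one playful, simulator-based activity, including step-by-step rules, the Physics concepts practiced, and three reflection questions for the students.
Tone: Clear, motivating, and appropriate for students aged 15–16.
Constraints: Use only a free online projectile motion simulator and standard classroom materials, keep the activity under 30 minutes, and avoid mathematics beyond uniformly accelerated motion.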
