Article

Integrating ChatGPT into the Design of 5E-Based Earth Science Lessons

by
Yoonsung Choi
Department of Earth Science Education, College of Education, Pusan National University, Busan 46241, Republic of Korea
Educ. Sci. 2025, 15(7), 815; https://doi.org/10.3390/educsci15070815
Submission received: 31 May 2025 / Revised: 20 June 2025 / Accepted: 23 June 2025 / Published: 26 June 2025

Abstract

This study investigates how pre-service Earth science teachers used ChatGPT in designing lessons based on the 5E instructional model and what educational opportunities and challenges emerged. As generative AI tools gain traction in education, understanding their integration into science lesson planning is increasingly important. Eight pre-service teachers from a South Korean university participated in a four-week instructional design project. They developed 5E-aligned Earth science lessons while interacting with ChatGPT for idea generation, explanation, activity development, and assessment. Data sources included lesson plans, ChatGPT interaction logs, reflective journals, and interviews. Thematic analysis was used to examine instructional uses of AI and the adaptations required during the process. Findings showed that ChatGPT supported different phases of the 5E model—providing metaphors and analogies in Engage, activity ideas in Explore, draft explanations in Explain, task prompts in Elaborate, and assessment questions in Evaluate. However, participants frequently revised or rejected AI-generated content to match inquiry goals, student readiness, and curriculum standards. The study highlights the importance of pedagogical reasoning in AI-supported lesson design. It contributes to the growing literature on teacher education and AI by offering a phase-specific view of GenAI use and underscoring the instructional mediation needed for effective application.

1. Introduction

The emergence of GenAI technologies is reshaping how instructional design is conceptualized and practiced in education. Among these, ChatGPT, a large language model capable of generating context-sensitive responses, has gained traction as a tool that can assist teachers in generating explanations, suggesting instructional activities, and refining lesson content in real time (Gurl et al., 2025; Şimşek, 2025). In science education, where conceptual depth and inquiry-based pedagogy are essential, GenAI tools offer potential as collaborative design partners, particularly for pre-service and novice teachers, who often face challenges in planning inquiry-oriented lessons (Lee & Zhai, 2024).
The 5E instructional model—comprising Engagement, Exploration, Explanation, Elaboration, and Evaluation—has become a widely accepted framework for lesson design in science classrooms due to its capacity to promote conceptual change and support inquiry-based learning (Duran & Duran, 2004). In principle, GenAI tools like ChatGPT can support instructional planning across all five phases. However, existing research has rarely examined how pre-service teachers engage with these tools across the 5E framework. Prior studies have largely focused on evaluating the scientific accuracy of AI-generated content or surveying user perceptions without investigating the processual and reflective aspects of teacher–AI interaction during lesson design (Ishmuradova et al., 2025; Powell & Courchesne, 2024).
Recent studies have begun to document how pre-service teachers use ChatGPT to support the development of science lessons (Choi, 2024; Goodman et al., 2024). These studies underscore the importance of teacher agency, iterative prompting, and contextual filtering when working with AI-generated content (Lan & Chen, 2024). Yet, more practice-centered research is needed to examine how pre-service teachers modify, adapt, and sometimes reject AI suggestions as they design lessons in authentic settings. How GenAI influences instructional decision-making and pedagogical judgment across the 5E phases remains underexplored.
This study addresses that gap by analyzing how pre-service teachers specializing in Earth science engaged with ChatGPT to co-develop 5E-based science lessons. Drawing on lesson plans, ChatGPT interaction logs, reflective journals, and post-design interviews, this study investigates the instructional uses, limitations, and adaptations of AI-generated content in a structured lesson design process. The following research questions guide the inquiry:
  • How do Earth science pre-service teachers utilize ChatGPT in each phase of the 5E instructional model when designing science lessons?
  • What opportunities and challenges emerge during the process of AI-assisted lesson design using ChatGPT?

2. Literature Review

2.1. GenAI in Education

The rise of GenAI technologies has introduced new possibilities for content generation, feedback delivery, and interactive learning in education. Tools such as ChatGPT, Claude, and Gemini, based on large language models (LLMs), can generate fluent responses, simulate human-like interactions, and assist in instructional design tasks (Berg & Plessis, 2025). These tools have drawn attention for their potential to support both teaching and learning processes (Kalenda et al., 2025; Zhang et al., 2023).
A common application of GenAI is in writing support, where it offers real-time feedback, rephrasing suggestions, and content expansion (Jaboob et al., 2025). GenAI tools are also used to generate assessment questions, create worked examples, and produce instructional explanations aligned with learner needs (Chiu, 2024; Salinas-Navarro et al., 2024a, 2024b). This body of work emphasizes the scalability, immediacy, and engagement advantages of AI chatbots in educational settings (Chen et al., 2025).
For teachers, GenAI can reduce planning time by automating routine tasks and supporting differentiated instruction. It can generate diverse lesson ideas, simulate dialogues, and provide analogies or case-based prompts (Kohnke et al., 2023; Mondal et al., 2023). Such affordances are especially useful for pre-service or novice teachers who may struggle with lesson structure and pedagogical coherence (Choi & Chung, 2025; Verma et al., 2023).
Despite its advantages, GenAI presents notable challenges. Concerns about factual inaccuracy, shallow explanations, and cultural or content bias persist (Michel-Villarreal et al., 2023; Wang et al., 2024). Ethical risks related to academic dishonesty, authorship ambiguity, and misuse are also increasingly discussed (Alasadi & Baiz, 2023; Murshidi et al., 2024). These concerns highlight the critical role of teacher educators in guiding the ethical and pedagogically appropriate use of these tools (Mouta et al., 2025).
While GenAI offers promising support for instructional tasks, its effective use depends on critical evaluation and contextual adaptation by teachers. Continued research is needed to establish responsible and meaningful frameworks for integrating GenAI in education.

2.2. GenAI in Teaching and Learning

The integration of GenAI technologies is reshaping how teaching and learning are conceptualized in formal education. Tools such as ChatGPT, Claude, and Bing AI are increasingly used to assist with instructional planning, provide learner support, and enhance engagement through real-time, responsive interaction (Baidoo-Anu & Ansah, 2023; Kayalı et al., 2023). In classrooms, both teachers and students are beginning to explore new instructional practices enabled by GenAI tools (Hamerman et al., 2025).
For teachers, GenAI offers support in developing lesson materials, suggesting examples, drafting explanations, and even organizing instructional flow (ElSayary, 2024). Studies have shown that educators use ChatGPT to generate analogies, simplify complex science concepts, and personalize instruction for diverse learners (Choi, 2025a; Quan & Lee, 2025). Pre-service teachers in particular benefit from the ideational scaffolding that GenAI provides, especially when they face time constraints or uncertainty in lesson planning (Dimas et al., 2025). The tool’s ability to simulate dialogic interactions or provide structural outlines makes it valuable for novice educators seeking guidance during instructional design.
From the learner’s perspective, GenAI functions as an accessible, interactive study tool. Students report using ChatGPT to summarize dense readings, generate test questions, or clarify unfamiliar terms (Ajlouni et al., 2023; Monib et al., 2025). Such affordances support learner autonomy and confidence, especially in content-heavy disciplines such as science (Crompton et al., 2024). Research also notes that students appreciate the nonjudgmental nature of AI tools, which allows them to engage in low-stakes exploration of academic content.
Despite its promise, GenAI integration is accompanied by several concerns. Educators and researchers have raised issues regarding the factual reliability of AI-generated responses, overreliance by students, and insufficient development of critical thinking skills (Adel et al., 2024; Choi, 2025b; Zhu et al., 2025). Teachers also express uncertainty about how to evaluate or modify AI outputs effectively, highlighting the need for pedagogical support and professional development (Airaj, 2024; Nazaretsky et al., 2022).

2.3. Science Lesson Design with GenAI

Designing effective science lessons requires not only content knowledge but also pedagogical skill, curriculum understanding, and the competence to scaffold inquiry-based learning (Choi & Chung, 2025). For pre-service teachers, the process of translating abstract scientific concepts into engaging classroom activities can be particularly challenging (Davis, 2022). In response, GenAI tools such as ChatGPT have emerged as a potential support in the instructional design process, offering immediate suggestions, conceptual explanations, and structural guidance (Okulu & Muslu, 2024).
Recent studies have explored how both pre-service and in-service science teachers interact with GenAI tools during lesson planning. For example, Choi and Chung (2025) found that elementary pre-service teachers used ChatGPT to brainstorm lesson ideas, generate explanations for complex concepts, and refine their scientific language. Similarly, Lee and Zhai (2024) observed that pre-service teachers leveraged GenAI as a thinking partner, asking it to suggest hands-on activities, rephrase content, or adapt lessons for different grade levels. These uses reflect GenAI’s role as a generative scaffold rather than a complete solution.
The integration of GenAI within established instructional frameworks such as the 5E model has also gained attention (Bybee, 2014). The 5E model—Engagement, Exploration, Explanation, Elaboration, Evaluation—encourages conceptual development through inquiry-based sequencing. Previous studies examined how pre-service teachers evaluated and revised ChatGPT-generated 5E lesson plans (Abualrob, 2025; Goodman et al., 2024). While the AI-generated outputs provided useful starting points, participants identified the need for significant revision to align with curriculum goals, clarify concepts, and ensure developmental appropriateness. Similarly, although ChatGPT could generate complete lesson outlines, achieving pedagogical coherence and conceptual rigor required teacher intervention (Powell & Courchesne, 2024).
Although challenges remain, GenAI-supported lesson design offers valuable opportunities (Kalenda et al., 2025). It enables pre-service teachers to experiment with instructional structures, visualize alternative teaching sequences, and develop their pedagogical judgment in a low-stakes environment. When guided appropriately, AI tools can foster reflective practice and strengthen lesson design skills. Still, the quality of AI integration depends heavily on teacher agency: the ability to critically assess, adapt, and mediate content within the context of student learning.
This study builds on these insights by investigating how pre-service Earth science teachers used ChatGPT to design lessons based on the 5E model. By examining both their design decisions and interactions with AI, it contributes to a more practice-centered understanding of GenAI’s role in science teacher education.

3. Materials and Methods

3.1. Research Design

This study adopted a qualitative case study design to investigate how pre-service Earth science teachers used ChatGPT to support 5E-based lesson planning. The case was defined as the instructional design activity carried out by eight pre-service teachers during a four-week workshop. This study is grounded in an interpretive paradigm, which emphasizes that knowledge is constructed through an individual’s lived experiences and the meanings they assign within specific social and cultural contexts. As Schwandt (2007) notes, interpretivism values the co-construction of meaning between researchers and participants, acknowledging that understanding is shaped by both personal perspectives and situational contexts. Drawing on an interpretive approach, the study sought to understand how participants made sense of their experience with AI in context, emphasizing their own language, reasoning, and reflective insights. Rather than focusing solely on lesson outcomes, the study examined how participants structured instructional components and revised AI-generated suggestions. The unit of analysis was each teacher’s design experience, documented through lesson plans, AI interaction logs, journals, and interviews. Data triangulation enabled a deeper understanding of how GenAI shaped instructional decisions and pedagogical reasoning. This interpretive stance allowed the researcher to explore meaning-making processes situated in authentic educational practice.

3.2. Research Procedures

This study followed a process that included literature exploration, qualitative design, data collection, analysis, and interpretation. The research began with a review of prior studies on GenAI in science education and instructional design models, particularly the 5E framework. Insights from this review guided the formulation of research questions and methodological direction.
A case study approach was adopted to investigate how pre-service Earth science teachers interacted with ChatGPT during lesson planning. The research was situated in a teacher education context, where participants engaged in a four-week instructional design task using ChatGPT. Prior to data collection, ethical approval was obtained from the Institutional Review Board (IRB), and all participants provided informed consent.
Multiple forms of data were collected throughout the design process, including 5E lesson plans, ChatGPT interaction logs, weekly reflective journals, and semi-structured interviews. These sources captured both the lesson design outcomes and participants’ evolving reflections on their instructional thinking.
Thematic analysis was conducted based on the 5E instructional model and inductively derived themes. Findings were interpreted through an interpretive lens, emphasizing how participants constructed meaning while working with AI tools. The analysis informed the development of conclusions and practical implications for the integration of GenAI in science teacher education.

3.3. Data Sources and Collection

Data were collected from four sources to capture both the process and outcomes of instructional design: (1) 5E lesson plans, (2) ChatGPT interaction logs, (3) reflective journals, and (4) semi-structured interviews. These data sources were selected to enable triangulation and to provide insight into participants’ instructional thinking and their engagement with GenAI tools.
The lesson plans were the primary design artifacts, created by each participant as they developed Earth science lessons using the 5E instructional model. These documents reflected how participants structured their lessons and whether AI-generated content was integrated or modified. ChatGPT interaction logs were gathered to document how participants used the tool during the design process, including the prompts they entered, responses received, and any iterative refinement. These logs served as a record of the dialogic interaction between the designer and the AI.
Weekly reflective journals were collected to capture participants’ evolving thoughts, challenges, and design choices. Participants were encouraged to describe how they interpreted ChatGPT’s suggestions, what they accepted or rejected, and why. After the final lesson plans were submitted, semi-structured interviews were conducted to explore participants’ experiences in greater depth and to clarify emerging themes.
This multi-source data collection strategy was designed to align with an interpretive approach, allowing the researcher to understand not only what participants designed, but also how and why those decisions were made in the context of working with a GenAI tool.

3.4. Data Analysis

A framework-guided thematic analysis was conducted to examine how participants engaged with ChatGPT during the process of designing 5E-based science lessons. This approach aligned with the study’s interpretive stance, focusing on how participants constructed meaning through their interactions with GenAI while making instructional decisions.
The analysis process followed thematic analysis procedures, which included data familiarization, initial coding, theme development, and refinement (Braun & Clarke, 2006). All qualitative data (lesson plans, ChatGPT logs, journals, and interview transcripts) were read multiple times to identify recurrent patterns and significant expressions related to instructional design. Coding was guided by two analytic lenses: (1) the phases of the 5E instructional model and (2) emergent themes surrounding how ChatGPT was used, interpreted, modified, or resisted in participants’ planning.
To illustrate how the coding framework was applied, Table 1 presents two representative excerpts aligned with the two research questions. These examples demonstrate how the original data were interpreted and categorized into thematic codes.
Lesson plans and interaction logs were analyzed to determine how AI-supported content was positioned within each 5E phase. Reflective journals and interviews were used to explore participants’ rationales, struggles, and evolving views of AI integration. Themes such as trust in AI-generated suggestions, revision practices, and pedagogical alignment were tracked across data types.
Triangulation across multiple sources allowed for validation of findings and identification of converging and diverging patterns. The analysis aimed to provide a holistic understanding of how GenAI supported or challenged participants’ instructional design processes in a science education context. Peer debriefing was conducted with a science education professor and a secondary school teacher holding a doctoral degree in Earth science education to evaluate the clarity and credibility of thematic interpretations. Member checking was also employed during the semi-structured interviews, allowing participants to confirm or elaborate on the researcher’s interpretations. These strategies enhanced the trustworthiness and interpretive rigor of the analysis.

4. Results

4.1. ChatGPT Use Across the Phases of the 5E Instructional Model

4.1.1. Engage Phase: Using AI to Frame Geological Concepts in Familiar Contexts

In the Engage phase, participants used ChatGPT to generate ideas for introducing core geological concepts such as the law of superposition, sedimentary layers, and geological time in ways that would capture students’ interest and relate to familiar experiences. Their interactions with ChatGPT revealed two prominent patterns in how AI-supported suggestions were interpreted and integrated.
Code 1: Metaphor Generation for Abstract Concepts:
Participant T1 often asked ChatGPT to suggest metaphors or analogies to help students visualize stratification or geological time. ChatGPT responded with metaphors like sandwich layers, stacked laundry, or birthday cake slices. T1 reflected in their journal as follows:
“The sandwich idea was simple and helpful, but I thought using stacked books might feel more real in a classroom setting.”
Similarly, T2 shared a related experience in their reflective journal: “ChatGPT suggested a timeline made of colored beads to represent geological periods. I liked the concept but adapted it into a sediment layer activity using transparent containers and different soil types. It helped students see how layers build up over time.”
This theme illustrates how ChatGPT provided conceptual scaffolds that participants customized for pedagogical fit. While these metaphors supported initial understanding, they were not adopted wholesale. Participants used them as idea triggers, exercising discretion to ensure clarity and relevance.
Code 2: Contextual Adaptation of AI-Suggested Examples:
One participant (T2) modified AI-generated examples by grounding them in local or classroom-specific contexts. For instance, instead of the generic road cut images suggested by ChatGPT, T2 included photos of cliffs or sedimentary outcrops near their own school.
T2 wrote: “ChatGPT suggested showing a Grand Canyon photo, but I used a cliff near our school where layers are visible. It helps students see that geology is not just in textbooks.”
T2 adapted ChatGPT’s suggestion to better fit their students’ daily experiences: “When I asked for an example of erosion, ChatGPT mentioned a desert canyon. I changed it to the drainage ditch behind our school that shows visible sediment lines. The students immediately recognized it.”
This demonstrated an emerging instructional agency where participants evaluated AI responses and adapted them to increase accessibility and engagement. ChatGPT acted as a creative prompt, but its suggestions gained meaning only when recontextualized by the teacher. Overall, during the Engage phase, ChatGPT was used as a tool for conceptual framing and idea stimulation, especially in relation to abstract geological topics. Yet participants played an active role in filtering, adapting, and localizing these suggestions, reinforcing their pedagogical authority and responsiveness to students’ learning contexts.

4.1.2. Explore Phase: Supporting Inquiry-Based Geological Thinking with AI

In the Explore phase, participants focused on designing hands-on and inquiry-based activities that would allow students to observe, question, and infer geological processes such as stratification, fossil formation, and relative dating. ChatGPT was used to suggest experiments, simulations, and observation tasks. Analysis of the data revealed three recurring themes in how participants engaged with AI-generated ideas during this phase.
Code 3: Sourcing Activity Ideas from AI:
T2 used ChatGPT to gather ideas for inquiry tasks involving sedimentation or fossil layer interpretation. For example, T2 asked, “What activity can help students understand how sediment layers form over time?” ChatGPT responded with a layered jar experiment using sand and pebbles. T2 reflected: “I didn’t think of the layered jar before. It seems simple, but effective to show slow deposition.”
T4 also found ChatGPT helpful in suggesting hands-on tasks: “I was looking for an activity about fossil formation. ChatGPT suggested pressing leaves into clay to simulate fossil impressions. I modified it slightly using shells, and students loved making their own ‘fossils’.”
This theme highlights how ChatGPT served as a repository of activity templates, helping participants conceptualize feasible experiments they could adapt for the classroom.
Code 4: Adapting AI Suggestions to Align with Inquiry Goals:
T5 did not implement AI-generated activities directly. Instead, they modified them to promote deeper inquiry. For instance, rather than using ChatGPT’s suggestion of observing a fossil chart, T5 transformed the activity into a simulation where students “excavate” layers using paper layers with embedded fossil cards. “ChatGPT suggested a fossil timeline chart, but I wanted students to discover the order themselves by digging.”
T6 also restructured an AI-suggested task to encourage student reasoning: “ChatGPT recommended labeling a diagram of rock layers. I turned it into a problem-solving task where students had to infer the correct order of events using incomplete layer information. They had to justify their conclusions.”
This reflects an intentional shift from static presentation to student-centered investigation, demonstrating a pedagogical move from “showing” to “discovering.”
Code 5: Evaluating Practical Constraints and Feasibility:
While ChatGPT offered imaginative suggestions, participants evaluated them critically in terms of time, material availability, and classroom logistics.
“Some suggestions were too elaborate. I had to cut them down because we only have 45 min, and not every school has lab materials.”
T6 provided a similar account, highlighting material accessibility: “ChatGPT gave me an idea for a sedimentation tank using glass and aquarium gravel. It sounded great, but I replaced it with colored water and plastic bottles since those were easier to prepare and safer for kids.”
This reflects how participants actively evaluated the practicality of ChatGPT’s suggestions, considering time limitations, material availability, and classroom logistics. Rather than implementing AI-generated activities as they were, teachers made decisions based on the constraints of real-world educational settings. This process required thoughtful adjustment of idealized or generic tasks into formats that were feasible within a typical school lesson period and aligned with available resources.
Overall, in the Explore phase, ChatGPT functioned as a creative ideation partner, particularly in brainstorming activity formats. Yet participants exercised professional judgment in aligning these ideas with inquiry goals and feasibility, reinforcing the theme of AI as a flexible but non-directive co-designer in science lesson planning.

4.1.3. Explain Phase: Refining and Structuring Scientific Explanations with AI Support

In the Explain phase, participants used ChatGPT to help them present key geological concepts such as stratification, fossil succession, unconformities, and geologic time scales. The tool was primarily employed to clarify terminology, generate teacher talk scripts, and offer accessible explanations of abstract scientific ideas. Three prominent patterns emerged in how participants used and adapted ChatGPT during this phase:
Code 6: Editing AI-Generated Content for Conceptual Accuracy:
Participants used ChatGPT to obtain scientific definitions or explanations of geological principles. However, they often found the responses incomplete or too general, prompting them to revise or expand the content.
“ChatGPT explained relative dating, but didn’t include index fossils. I added that part so students could apply it better in the next activity.”
T7 encountered a similar issue with terminology clarity: “When ChatGPT defined ‘unconformity’, it used technical terms like ‘disconformity’ without context. I rewrote it to focus on the basic idea—missing time in the rock record—and gave a local example.”
These examples highlight how participants maintained conceptual control over the lesson content. Rather than treating AI-generated text as authoritative, they critically evaluated its scientific accuracy and appropriateness for their students. This editing process ensured that explanations aligned with curricular goals and student understanding.
This code reflects how participants maintained control over content accuracy, demonstrating pedagogical filtering for completeness and correctness.
Code 7: Structuring Explanations for Student Understanding:
Beyond correctness, participants paid attention to how concepts were introduced and sequenced. They often reorganized ChatGPT-generated text or used it as a draft for teacher explanations that better fit student cognitive levels. T1 noted: “ChatGPT answer was okay, but too long. I broke it down into three parts, each with a question to check understanding.”
T2 described a similar approach with geological timescales: “The AI gave me a big paragraph about the Precambrian era. I turned it into a short timeline slide, then added questions like ‘Which came first?’ and ‘What life forms existed then?’ to help students follow the flow.”
This illustrates how participants actively restructured AI-generated explanations to support student comprehension. Rather than using lengthy or overly dense responses as-is, they broke down the content into smaller, sequential components aligned with how students process and construct scientific understanding. These adjustments reflect intentional scaffolding strategies aimed at promoting conceptual clarity and facilitating step-by-step learning.
Code 8: Using AI to Prompt Analogies and Visual Thinking:
Participants asked ChatGPT for analogies or real-world comparisons to support conceptual clarity. Though the AI could not create images, it generated prompts that led participants to develop visual aids or classroom analogies (e.g., layered cake, books, tree rings).
T4 reflected in their journal: “I asked ChatGPT for a way to explain fossil succession, and it mentioned tree rings. That helped me create a diagram comparing rock layers to tree rings, with age increasing as you go deeper.”
T8 described a different case involving structural geology: “ChatGPT mentioned that folded rock layers can be compared to crumpled paper. I brought in old newspapers, had students fold and compress them to model synclines and anticlines. It really helped them understand how rocks can bend.”
These examples show how ChatGPT inspired teachers to create their own instructional visuals, transforming abstract geological concepts into concrete, student-friendly representations. In this Explain phase, ChatGPT acted as a content companion, offering scientifically framed ideas, while participants refined, structured, and illustrated these explanations to fit student needs and cognitive levels.

4.1.4. Elaborate Phase: Extending Geological Understanding Through Applied Contexts

In the Elaborate phase, participants used ChatGPT to design activities that helped students apply previously learned concepts—such as stratigraphy, fossil succession, and geologic time—to new contexts. ChatGPT was employed to suggest application problems, case-based scenarios, and cross-topic links (e.g., connecting local landforms to geological history). Three thematic patterns emerged from participants’ use of AI during this phase:
Code 9: Generating Extension Tasks and Application Scenarios:
Participants used ChatGPT to brainstorm activities that required students to apply geological concepts in new contexts. These often included simplified geologic columns, fossil-based interpretation tasks, or timeline reconstructions. The AI served as an initial idea generator, but participants still structured the flow of the tasks to match student readiness and curriculum goals.
T2 described how they used ChatGPT to develop an application task involving the Grand Canyon: “ChatGPT suggested using the Grand Canyon to talk about rock layers. I turned it into a scenario where students pretended to be geologists interpreting a cross-section of the canyon. They had to use clues like fossil types and rock texture to reconstruct the geologic history.”
T6 adapted an AI-generated suggestion by connecting geological history to the school’s local environment: “ChatGPT mentioned using satellite images of landforms. I found an aerial photo of a nearby valley and had students infer how it formed over time—erosion, uplift, and deposition. It helped them apply what they learned to where they actually live.”
These examples show how participants utilized AI to support the design of formative application tasks while retaining control over instructional framing. Even when activities originated from ChatGPT suggestions, they were customized to assess students’ ability to transfer and apply conceptual understanding, with AI-generated text serving only as a draft that participants reshaped to fit their students.
Code 10: Contextual Transfer of Scientific Knowledge:
Some participants adapted ChatGPT’s suggestions to frame geology not as isolated knowledge but as part of real-world phenomena. They restructured AI-proposed tasks to relate stratigraphy and fossil evidence to students’ immediate environments, societal concerns, or field applications. ChatGPT served as a source of generic ideas, but participants emphasized geographical familiarity and personal relevance.
T3 recontextualized stratigraphy by connecting it to a local landslide incident: “ChatGPT gave a basic description of how layers form, but I added a news article about a landslide that happened near our school. Students discussed what rock layers might have contributed to the collapse.”
T7 used ChatGPT’s ideas to link geology to construction and safety planning: “I asked ChatGPT how rock layers relate to real life, and it mentioned building foundations. I created a task where students had to choose safe building sites based on a simplified geologic map of our town.”
This approach reflects teachers’ pedagogical decision-making to transfer abstract geological concepts into locally grounded, socially meaningful contexts. Participants used AI-generated content as a springboard to engage students in thinking about the practical relevance of Earth science.
Code 11: Transforming AI Prompts into Inquiry-Based Extensions:
While ChatGPT offered mostly structured tasks or closed-ended worksheets, some participants reimagined these into open-ended, student-driven activities. Their goal was to push students beyond recall or description, toward reasoning, justification, and scientific argumentation. Participants demonstrated an ability to reshape fixed prompts into formats that promoted higher-order thinking.
T4 described modifying a quiz-like prompt into a group investigation: “ChatGPT gave me a list of multiple-choice questions on rock types. I turned it into a group task where students had to classify real rock samples and explain their reasoning using a shared rubric.”
T8 went a step further by transforming a worksheet into a student debate: “ChatGPT suggested matching fossils to periods. I changed it into a group discussion where students had to defend which fossil appeared first and why. They used evidence from diagrams and had to respond to peer challenges.”
This theme shows how participants moved beyond consumption of AI suggestions, enacting a more critical, generative stance as instructional designers. ChatGPT provided the seed of a task, but teachers made it cognitively demanding and epistemically open.

4.1.5. Evaluate Phase: Two Stages of Assessment Design with ChatGPT

In the Evaluate phase, participants used ChatGPT to support the design of activities that assessed students’ understanding of core geological concepts, including stratigraphy, fossil interpretation, and relative dating. Their approach to assessment planning involved two distinct but interconnected stages: designing evaluation items and anticipating student responses to inform instructional feedback.
Code 12: Designing Concept-Focused Assessment Items:
The first stage of assessment planning centered on constructing questions that could reveal students’ conceptual understanding. Participants used ChatGPT to generate sample quiz items, short-answer questions, and visual interpretation tasks. While ChatGPT’s suggestions often served as starting points, participants modified them to increase cognitive demand, improve clarity, or align with curriculum goals.
T2 shared how they adapted an AI-generated quiz item about fossil layers: “ChatGPT gave me a basic question on which fossil was oldest. I added a diagram with conflicting clues, so students had to think more carefully, not just memorize.”
T6 restructured a timeline-based item to focus on scientific reasoning: “The AI suggested a matching task for geologic eras and life forms. I turned it into an open-ended prompt where students had to place major events on a blank timeline and explain the evidence they used.”
These practices illustrate how participants went beyond surface-level questioning to design assessments that required students to reason through geological relationships. By adapting AI-generated prompts, they created tasks that targeted deeper conceptual thinking rather than simple recall, aligning with formative goals and inquiry-based instructional approaches.
Code 13: Anticipating Student Responses and Planning Feedback:
In the second stage, participants considered how students might interpret or misinterpret their assessment items. Some used ChatGPT to simulate likely answers or misconceptions, asking prompts such as “What mistake might students make here?” or “What feedback should I give if they select the wrong answer?”
T1 reflected on how ChatGPT helped them anticipate a common error: “I asked ChatGPT what students might misunderstand about rock layers, and it pointed out they often think the top layer is the oldest. That reminded me to revise the diagram and include a prompt asking, ‘Are you sure about the direction of time here?’”
T5 also described refining their assessment based on anticipated misconceptions: “I used ChatGPT to generate a sample student answer for a relative dating question. It missed the principle of superposition, so I created a follow-up item where students had to explain their reasoning—this helped me prepare feedback in advance.”
These assessment design practices demonstrate how participants used pedagogical judgment to shift AI-generated questions from basic recall to concept-focused reasoning tasks. Rather than accepting AI outputs at face value, they deliberately revised item structure and content to probe students’ understanding of key geological relationships and promote higher-order thinking.
These two codes illustrate how participants used ChatGPT not merely to generate evaluation items but to support a two-stage design process—first by constructing conceptually targeted tasks, and then by refining them based on anticipated student thinking. The findings point to a model of AI-supported assessment planning in which teacher agency remains central in shaping both what to ask and how to respond.

4.2. Opportunities and Challenges in AI-Supported Lesson Design

4.2.1. Opportunities in AI-Supported Instructional Design

Participants reported several advantages of using ChatGPT throughout the 5E lesson planning process. These opportunities centered on its support for creative ideation, explanation structuring, and lesson expansion.
Expanding Instructional Ideas (Codes 1, 3, 9):
Participants frequently described ChatGPT as a useful tool for generating lesson ideas they would not have come up with on their own. In the Engage and Explore phases, it offered analogies, activity types, and hands-on suggestions that stimulated their creativity. In the Elaborate phase, it helped extend learning into applied contexts.
“I would never have thought of using a food metaphor for rock layers. It gave me a fresh idea.”
“The layered jar activity was simple, but effective—I built on that to let students model sedimentation.”
These practices reveal how participants relied on ChatGPT to support different stages of lesson design—for instance, by generating relatable metaphors for abstract geological concepts, sourcing ideas for student-centered activities, and constructing tasks that encouraged the application of scientific knowledge in new contexts.
Structuring and Scaffolding Content (Codes 7, 8, 10, 12):
ChatGPT also supported participants in organizing content explanations and constructing evaluation items. It provided conceptual language and visual metaphors that helped participants clarify and sequence their ideas. In the Explain and Evaluate phases, this scaffolding role supported both instruction and assessment.
“ChatGPT explanation helped me see how to sequence my own.”
“It gave me a simple multiple-choice item that I adapted to check if students really understood.”
These moments illustrate how participants used ChatGPT as a springboard for instructional planning, transforming its preliminary suggestions into structured explanations and assessment tools that aligned with student learning goals.
Prompting Reflection and Instructional Reconsideration (Code 10):
Some participants reported that ChatGPT helped them recognize weaknesses in their own lesson clarity. Although not always intended, AI-generated outputs prompted them to reflect on gaps in explanation, concept coverage, or the logic of their instructional sequence.
“I realized I didn’t explain relative dating well—ChatGPT’s version reminded me what I skipped.”
This example demonstrates how ChatGPT’s suggestions, even when unintended, prompted participants to revisit and refine their instructional plans. By highlighting overlooked content or gaps in reasoning, the AI tool encouraged reflective thinking and helped participants become more aware of the coherence and completeness of their lesson structure.

4.2.2. Challenges in AI-Supported Instructional Design

Despite the opportunities, participants also encountered significant challenges. These revolved around the limitations of AI in terms of contextual fit, pedagogical alignment, and student appropriateness.
Adapting AI Output to Instructional Contexts (Codes 2, 4, 5):
Participants frequently revised AI-generated content to better match curricular goals, learning time constraints, or inquiry-based pedagogical structures. Activity ideas provided by ChatGPT were often procedurally detailed but lacked open-ended inquiry structure. Some examples included content or concepts that were not emphasized in the national curriculum.
“ChatGPT described a cross-section with trilobite fossils, but we don’t teach that directly in our textbook. I changed the example to ammonite fossils, which are covered in our class.”
“It gave me steps to follow, but no room for students to think for themselves.”
These cases show how participants critically filtered and restructured AI-generated suggestions to fit national curriculum standards, uphold inquiry-based pedagogy, and address practical classroom constraints. Their revisions reflect an effort to transform content that was technically accurate but pedagogically misaligned into meaningful, context-sensitive instructional materials.
Ensuring Scientific and Developmental Appropriateness (Codes 6, 11, 13):
While ChatGPT was often helpful in generating scientific explanations and assessment items, participants noted that many of these outputs were either overly simplistic or conceptually dense. Some content missed key ideas required for student reasoning, while other examples introduced advanced terminology beyond the target level.
“It used terms like ‘angular unconformity’ without explaining what it meant. I had to rewrite that whole section.”
“The quiz question looked fine, but didn’t really test their reasoning—it was too obvious.”
These observations illustrate how participants actively evaluated AI-generated content for both conceptual soundness and developmental appropriateness. They frequently revised explanations to ensure scientific accuracy, restructured tasks to promote inquiry rather than recall, and anticipated possible student misconceptions when designing assessments. Such instructional decisions underscore the teacher’s role in adapting AI outputs to align with student understanding, curricular level, and cognitive demands.

5. Discussion

The findings of this study suggest that the instructional value of ChatGPT is not inherent to the tool itself but shaped by how pre-service teachers interpret, adapt, and integrate its output within structured pedagogical models. Across the phases of the 5E instructional model, ChatGPT served different instructional functions: a prompt generator in the Engage phase, a source of activity ideas in Explore, a provider of draft explanations in Explain, a stimulator of extension tasks in Elaborate, and a supplier of preliminary assessment items in Evaluate. In each phase, however, participants carefully evaluated and modified AI-generated suggestions to better align them with student readiness, inquiry-based goals, and national curricular standards. These findings highlight the importance of thoughtful instructional decisions in AI-supported lesson design, emphasizing that GenAI must be used with pedagogical care, contextual awareness, and subject-matter understanding.
These results support previous findings that pre-service teachers play an active role in shaping how GenAI is used during lesson planning. While ChatGPT provided a wide range of suggestions, participants consistently made decisions about what to adopt, modify, or discard. This aligns with prior research emphasizing the role of teacher judgment and agency in AI-supported instruction (Lan & Chen, 2024; Verma et al., 2023). Participants revised explanations, adapted assessments, and localized abstract metaphors—demonstrating a consistent effort to refine content to better support learning (Clark et al., 2025).
This study also offers insights into how GenAI can be incorporated within structured lesson design models such as the 5E framework. Participants interacted with ChatGPT differently depending on the instructional phase, drawing more inspiration in the Engage phase but applying more scrutiny during phases requiring conceptual clarity such as Explain and Evaluate. This pattern is consistent with previous studies suggesting that GenAI tools are more useful in early-stage brainstorming and less reliable for supporting inquiry-based content development (Powell & Courchesne, 2024; Şimşek, 2025).
Despite its usefulness, ChatGPT showed notable limitations, especially in supporting open-ended inquiry and age-appropriate conceptual explanations. Many AI-generated suggestions lacked sufficient depth or inquiry orientation, requiring participants to revise or restructure them significantly. These findings align with critiques noting that AI tools often simplify complex topics and require pedagogical refinement to meet instructional goals (Michel-Villarreal et al., 2023; Goodman et al., 2024).
Finally, these results suggest the need to help pre-service teachers develop the skills to evaluate and revise GenAI outputs effectively. Prior research has emphasized the importance of preparing future educators to use these tools critically and thoughtfully and highlighted the benefits of incorporating AI into teacher education to foster reflective instructional decision-making (Ishmuradova et al., 2025; Wang et al., 2024). Courses that combine structured lesson design with GenAI-supported activities may help pre-service teachers develop greater confidence and competence in using these tools productively.

6. Conclusions

This study explored how pre-service Earth science teachers engaged with ChatGPT during the process of designing science lessons using the 5E instructional model. By analyzing how the AI tool was utilized across different instructional phases, the study aimed to identify the specific educational opportunities and challenges that emerged. Adopting a qualitative case study design, the research focused on the practical interactions between AI-generated content and the instructional judgments made by participants during lesson planning.
The findings indicate that the usefulness of ChatGPT varied across the 5E phases. It was especially helpful in the Engage and Elaborate phases, where generating analogies and extension tasks supported idea development. However, in phases such as Explore and Explain, participants identified limitations in depth, inquiry orientation, and developmental appropriateness of AI-generated suggestions. Across all phases, participants made thoughtful revisions to adapt the content to their curricular goals and student needs.
Rather than replacing instructional expertise, ChatGPT served as a supportive resource that encouraged pre-service teachers to reflect on and refine their lesson ideas. The tool functioned most effectively when used selectively and critically, within the structure of a pedagogical model that guided lesson coherence and conceptual rigor. This highlights the importance of instructional design competence when working with AI tools in educational contexts.
Future research should investigate how both pre-service and in-service teachers continue to build their capacity to use GenAI tools effectively, especially in real-time teaching contexts where decisions must be made on the spot. There is also a need to embed opportunities for AI-critical thinking, curriculum alignment, and ethical reflection in teacher education programs. Preparing educators to thoughtfully evaluate and adapt AI-generated suggestions will be essential to integrating such tools in ways that genuinely support student learning.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Pusan National University (PNU_IRB_00024, 2025-02-20).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting this article will be made available on request from the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Abualrob, M. (2025). Innovative teaching: How pre-service teachers use artificial intelligence to teach science to fourth graders. Contemporary Educational Technology, 17(1), ep547.
  2. Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences, 14(8), 814.
  3. Airaj, M. (2024). Ethical artificial intelligence for teaching-learning in higher education. Education and Information Technologies, 29(13), 17145–17167.
  4. Ajlouni, A., Wahba, F., & Almahaireh, A. (2023). Students’ attitudes towards using ChatGPT as a learning tool: The case of the University of Jordan. International Journal of Interactive Mobile Technologies, 17(8), 99–117.
  5. Alasadi, E., & Baiz, C. (2023). Generative AI in education and research: Opportunities, concerns, and solutions. Journal of Chemical Education, 100(8), 2965–2971.
  6. Baidoo-Anu, D., & Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62.
  7. Berg, G., & Plessis, E. (2025). ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences, 13(10), 998.
  8. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
  9. Bybee, R. (2014). Guest editorial: The BSCS 5E instructional model: Personal reflections and contemporary implications. Science and Children, 51(8), 10–13.
  10. Chen, D., Aaltonen, K., Lampela, H., & Kujala, J. (2025). The design and implementation of an educational chatbot with personalized adaptive learning features for project management training. Technology, Knowledge and Learning, 30(6), 1047–1072.
  11. Chiu, T. (2024). The impact of generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187–6203.
  12. Choi, Y. (2024). A phenomenological approach to the experiences of pre-service earth science teachers utilizing ChatGPT in science instruction. Journal of the Korean Earth Science Society, 45(6), 586–599.
  13. Choi, Y. (2025a). Earth science simulations with generative artificial intelligence (GenAI). Journal of University Teaching and Learning Practice, 22(1), 1–24.
  14. Choi, Y. (2025b). Exploring the scientific validity of ChatGPT’s responses in elementary science for sustainable education. Sustainability, 17(7), 2962.
  15. Choi, Y., & Chung, J. (2025). Analysis of practical characteristics in simulated elementary science lessons earth and space using ChatGPT. Journal of the Korean Earth Science Society, 46(1), 78–95.
  16. Clark, T., Fhaner, M., Stoltzfus, M., & Queen, M. (2025). Using ChatGPT to support lesson planning for the historical experiments of Thomson, Millikan, and Rutherford. Journal of Chemical Education, 101(5), 1992–1999.
  17. Crompton, H., Jones, M., & Burke, D. (2024). Affordances and challenges of artificial intelligence in K-12 education: A systematic review. Journal of Research on Technology in Education, 56(3), 248–268.
  18. Davis, E. (2022). Supporting preservice elementary teachers in teaching science for equity and justice: A practical framework. Innovations in Science Teacher Education, 7(4), 1–27.
  19. Dimas, D., Alvindi, A., Khoirunnisa, T., Yani, R., Pardamean, P., & Simanjuntak, K. (2025). Exploring pre-service teachers’ difficulties of ChatGPT as a tool for planning the learning process. Innovative: Journal of Social Science Research, 5(1), 5906–5922.
  20. Duran, L., & Duran, E. (2004). The 5E instructional model: A learning cycle approach for inquiry-based science teaching. Science Education Review, 3(2), 49–58.
  21. ElSayary, A. (2024). An investigation of teachers’ perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. Journal of Computer Assisted Learning, 40(3), 931–945.
  22. Goodman, J., Handa, V., Wilson, R., & Bradbury, L. (2024). Promises and pitfalls: Using an AI chatbot as a tool in 5E lesson planning. Innovations in Science Teacher Education, 9(1), 1–13.
  23. Gurl, T., Markinson, M., & Artzt, A. (2025). Using ChatGPT as a lesson planning assistant with preservice secondary mathematics teachers. Digital Experiences in Mathematics Education, 11(1), 114–139.
  24. Hamerman, E., Aggarwal, A., & Martins, C. (2025). An investigation of generative AI in the classroom and its implications for university policy. Quality Assurance in Education, 33(2), 253–266.
  25. Ishmuradova, I., Zhdanov, S., Kondrashev, S., Erokhova, N., Grishnova, E., & Volosova, N. (2025). Pre-service science teachers’ perception on using generative artificial intelligence in science education. Contemporary Educational Technology, 17(3), ep579.
  26. Jaboob, M., Hazaimeh, M., & Al-Ansi, A. (2025). Integration of generative AI techniques and applications in student behavior and cognitive achievement in Arab higher education. International Journal of Human–Computer Interaction, 41(1), 353–366.
  27. Kalenda, P., Rath, L., Abugasea Heidt, M., & Wright, A. (2025). Pre-service teacher perceptions of ChatGPT for lesson plan generation. Journal of Educational Technology Systems, 53(3), 219–241.
  28. Kayalı, B., Yavuz, M., Balat, Ş., & Çalışan, M. (2023). Investigation of student experiences with ChatGPT-supported online learning applications in higher education. Australasian Journal of Educational Technology, 39(5), 20–39.
  29. Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 54(2), 537–550.
  30. Lan, Y., & Chen, N. (2024). Teachers’ agency in the era of LLM and generative AI. Educational Technology & Society, 27(1), 1–18.
  31. Lee, G., & Zhai, X. (2024). Using ChatGPT for science learning: A study on pre-service teachers’ lesson planning. IEEE Transactions on Learning Technologies, 17(1), 1643–1660.
  32. Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D., Thierry-Aguilera, R., & Gerardou, F. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856.
  33. Mondal, H., Marndi, G., Behera, J., & Mondal, S. (2023). ChatGPT for teachers: Practical examples for utilizing artificial intelligence for educational purposes. Indian Journal of Vascular and Endovascular Surgery, 10(3), 200–205.
  34. Monib, W., Qazi, A., & Mahmud, M. (2025). Exploring learners’ experiences and perceptions of ChatGPT as a learning tool in higher education. Education and Information Technologies, 30(1), 917–939.
  35. Mouta, A., Torrecilla-Sánchez, E., & Pinto-Llorente, A. (2025). Comprehensive professional learning for teacher agency in addressing ethical challenges of AIED: Insights from educational design research. Education and Information Technologies, 30(3), 3343–3387.
  36. Murshidi, G., Shulgina, G., Kapuza, A., & Costley, J. (2024). How understanding the limitations and risks of using ChatGPT can contribute to willingness to use. Smart Learning Environments, 11(1), 36.
  37. Nazaretsky, T., Ariely, M., Cukurova, M., & Alexandron, G. (2022). Teachers’ trust in AI-powered educational technology and a professional development program to improve it. British Journal of Educational Technology, 53(4), 914–931.
  38. Okulu, H., & Muslu, N. (2024). Designing a course for pre-service science teachers using ChatGPT: What ChatGPT brings to the table. Interactive Learning Environments, 32(10), 7450–7467.
  39. Powell, W., & Courchesne, S. (2024). Opportunities and risks involved in using ChatGPT to create first grade science lesson plans. PLoS ONE, 19(6), e0305337.
  40. Quan, S., & Lee, S. (2025). Enhancing participatory planning with ChatGPT-assisted planning support systems: A hypothetical case study in Seoul. International Journal of Urban Sciences, 29(1), 1–34.
  41. Salinas-Navarro, D., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024a). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Education Sciences, 14(1), 83.
  42. Salinas-Navarro, D., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024b). Designing experiential learning activities with generative artificial intelligence tools for authentic assessment. Interactive Technology and Smart Education, 21(4), 708–734.
  43. Schwandt, T. (2007). Judging interpretations: But is it rigorous? In E. Guba & Y. Lincoln (Eds.), Naturalistic evaluation. New Directions for Evaluation (pp. 11–24). Jossey-Bass.
  44. Şimşek, N. (2025). Integration of ChatGPT in mathematical story-focused 5E lesson planning: Teachers and pre-service teachers’ interactions with ChatGPT. Education and Information Technologies, 30(21), 11391–11462.
  45. Verma, G., Campbell, T., Melville, W., & Park, B. (2023). Navigating opportunities and challenges of artificial intelligence: ChatGPT and generative models in science teacher education. Journal of Science Teacher Education, 34(8), 793–798.
  46. Wang, D., Zheng, Y., & Chen, G. (2024). ChatGPT or BERT? Exploring the potential of ChatGPT to facilitate preservice teachers’ learning of dialogic pedagogy. Educational Technology & Society, 27(3), 390–406.
  47. Zhang, C., Schießl, J., Plößl, L., Hofmann, F., & Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: A multigroup analysis. International Journal of Educational Technology in Higher Education, 20(1), 49.
  48. Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., & Wang, C. (2025). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. International Journal of Human–Computer Interaction, 41(1), 742–764.
Table 1. Example of data excerpts and thematic codes by research question.

Research Question | 5E Phase | Data Source | Excerpt | Thematic Code | Interpretation
RQ1: How do participants utilize ChatGPT in each phase of the 5E model? | Engage | Reflective journal | “The sandwich idea was simple and helpful, but I thought using stacked books might feel more real in a classroom setting.” | Metaphor Generation for Abstract Concepts | ChatGPT provided analogies that participants recontextualized to suit students’ lived experiences and classroom realities.
RQ2: What opportunities and challenges emerge in AI-assisted lesson design? | Explore | Reflective journal | “ChatGPT suggested a fossil timeline chart, but I wanted students to discover the order themselves by digging.” | Adapting AI Suggestions to Align with Inquiry Goals | AI-generated tasks lacked inquiry orientation; participants redesigned them to promote discovery and student reasoning.