Abstract
This self-study presents Persona, Break Glass, Name Plan, Jam (PBNJ), a human-centered workflow for using generative AI to support differentiated lesson planning and problem solving. Although differentiated instruction (DI) is widely endorsed, early-career teachers often lack the time and capacity to implement it consistently. Through four iterative cycles of collaborative self-study, seven educator-researchers examined how they used AI for lesson planning, identified key challenges, and refined their approach. When enacted, the PBNJ sequence—set a persona, use a "break glass" starter prompt, name a preliminary plan, and iteratively "jam" with the AI—improved teacher confidence, yielded more feasible lesson plans, and supported professional learning. We discuss implications for problem solving beyond educational contexts and the potential for use with young learners.
1. Introduction
As of fall 2022, three quarters of students with disabilities in the United States and Canada spent more than 80% of their instructional time in general education classrooms [1,2]. In these contexts, inclusive approaches such as universal design and differentiated instruction (DI) are used to make learning accessible to all students [3,4,5]. Using DI, teachers adapt the content, processes, and products of their lessons to better suit learner readiness, interests, and needs [6,7]. Meta-analyses report small to moderate positive effects of DI on student outcomes [8,9,10,11].
Despite broad endorsement, DI implementation is challenging [12,13,14]. Many teachers feel underprepared and default to ability-based grouping, which can produce segregation and inequity [11,15,16,17,18,19]. Planning for DI requires teachers to anticipate diverse student needs related to materials, timing, logistics, environment, caregivers, and colleagues, as well as the teacher's own limits; this workload impedes regular implementation and increases stress, especially among early-career teachers [20,21]. Not surprisingly, early-career teachers report using DI much less than their more experienced counterparts [22,23].
As in many domains, the administrative overload related to planning is a persistent problem in teaching. Bryant et al. [24] found that 51% of teachers’ responsibilities are administrative in nature, Jotkoff [25] reported that 90% of educators experiencing burnout cited paperwork as a key contributor, and Hashem and colleagues [26] reported that lesson planning and content development are the most time-consuming, frustrating, and discouraging non-teaching tasks. Economically, the U.S. spends over $1 billion annually on public school teachers’ performance of administrative tasks [27]. These demands reduce the time and energy teachers can devote to planning, building relationships, and engaging with students [15].
Generative AI offers potential relief for administrative workload and for supporting inclusive practices. Since ChatGPT’s release in late 2022, public adoption of generative AI has grown rapidly [28] and its potential to enhance teaching practices, streamline administrative tasks, and personalize learning has generated significant interest across the education sector. Some believe AI can automate routine functions such as grading and lesson design, freeing teachers to focus on relational and creative aspects of teaching [29]. As of 2024, ~60% of educators report AI use, with higher adoption among younger teachers [28].
Despite this promise, questions persist about the ethical, appropriate, and efficient use of AI tools in education. Studies conducted from 2021–2024 found that AI platforms such as ChatGPT and MagicSchool AI could be used to create content, conduct grading, and answer simple factual questions [30,31,32], but often required time-consuming refinement to ensure pedagogical appropriateness [33,34,35]. Legitimate concerns about data privacy, output accuracy, algorithmic bias, and equitable access have tempered enthusiasm for AI in education, reducing opportunities for practice and professional learning [5,36]. These limitations highlight the importance of ongoing support for teachers to build comfort and skill with the use of AI [37,38].
Responding to this need, our team conducted a collaborative self-study to develop a practical AI workflow to support DI lesson planning for early-career teachers. Though extant research provides a quantitative picture of how often and for what reason AI is used by educators, our goal was to refine a specific AI workflow that would ease ethical concerns, allow early-career teachers to overcome planning and problem-solving obstacles, and improve the quantity and quality of DI in their daily lesson planning.
2. Methods
2.1. Definition and Rationale for Self-Study Methodology
Self-study is a systematic, collaborative form of professional inquiry in which participants investigate multiple forms of data emerging from their own practice to deepen understanding and improve professional action. It assumes that progress and improvement in privately conducted activity like classroom teaching rely on a rigorous confrontation between tacit, implicit, and private practices and theory, research, explicit rationale, and external expectation [39,40,41]. While reliability in qualitative methodology is often produced through the discussion of a subset of data, the subsequent co-creation of a codebook, and the consistent application of these codes to a data set [42], a self-study methodology demands continuous discussion of emerging themes that arise during iterative rounds of data gathering. It privileges reflection, dialogue, and co-construction of meaning over experimental control, emphasizing iterative cycles of practice, interpretation, and theory building within authentic professional contexts [43,44]. Rather than positioning participants as research subjects, self-study situates all contributors as co-researchers who analyze their lived experience through collaborative reflection, documentation, and critical examination of practice [43,45].
Self-study methodology was selected for this project because it enabled a concurrent, immersed, and cumulative process of surfacing, synthesizing, and analyzing data gathered from ongoing teaching experiences. Across 14 h of discussion, shared understanding, and trust-building, a self-study approach allowed the relational and temporal depth needed to examine two particularly private aspects of professional practice: how educators use AI and how they ensure accessibility in their teaching. Unlike research designs that gather and then analyze one set of static data, a self-study methodology allowed us to interrogate a process of learning. Whereas case study or phenomenological approaches might describe what teachers believe or do, self-study examines how knowing evolves through action—a living methodology that acknowledges the researcher's ongoing transformation [43]. This epistemic stance aligns with the project's focus on how educators learn to use AI for lesson planning in real time, rather than what they already know. In a practical sense, our use of self-study allowed us to formulate outputs that were more responsive to the subtle and personal constraints experienced by preservice teacher-researchers, and thus more feasible, acceptable, and useful [46].
2.2. Procedure
Recruitment began in the winter of 2025 during a three-hour Special and Adaptive Education class in a two-year graduate teacher preparation program at the University of Toronto. All 63 pre-service teachers were invited to explore and discuss ChatGPT as well as EL (Education and Learning Chatbot), an evidence-trained chatbot created by author T. Zaugg. EL is an experimental, proprietary, education-focused, closed AI chatbot created, owned, and maintained by Dr. Tiffanie Zaugg. EL leverages a curated corpus of evidence-based repositories to generate and adapt instructional supports for inclusive classrooms, including the U.S. Department of Education, CAST UDL Guidelines, OSEP Ideas That Work, DaSy, CEEDAR, MIND Research, Inclusive365, NCTM, Carol Gray Social Stories, Promoting Progress, IRIS Center, the National Center on Intensive Intervention, Zones of Regulation, SPED Helper, ReadWriteThink, PBIS, High-Leverage Practices, and 17,222 items from the What Works Clearinghouse. EL is free for users to access, receives no financial backing, and operates under a Custom GPT website license costing $5000 (paid by Tiffanie Zaugg); it is available at https://innoved.consulting/EL (accessed on 11 November 2025). Reflections from this routine professional discussion, as well as from another discussion with a group of in-service teachers, stimulated the hypothesis of this research—that teachers' use of AI for lesson planning support was constrained and would benefit from further self-study and development. These professional discussions were not collected or analyzed as research data; rather, our research team's reflections on them informed the design and focus of the subsequent self-study.
Five diverse preservice teachers volunteered to join two faculty (L. Faith, T. Zaugg) in a 14-week collaborative self-study. The team included both experienced and early-career educators, with teaching and learning experiences from Canada, Asia, and the United States, providing insight from a wide range of classroom experiences, pedagogical beliefs, and cultural perspectives. The project began as a graded (40%) course option; participation involved attending meetings, using AI for DI lesson planning, and collecting data (see assignment criteria in Appendix A). The preservice teachers were informed that they could opt out of full participation in the self-study at any time (to write a traditional term paper about their experience), but all chose to remain fully engaged from the first meeting to the final manuscript revisions.
All collaborators in the study had previously attempted the use of AI to support lesson planning. We employed an iterative self-study methodology to intentionally and systematically improve these skills within a concurrent K-6 teaching placement, with the intention of generating knowledge that could be shared with others [45,47]. Across 14 weekly meetings (recorded online, using Zoom; 60–120 min), all seven members of the research team collaborated to complete four cycles of self-study. The first meeting was used to build trust and respect and to offset the hierarchical dynamic that existed between established researchers/professors and preservice teachers. The goal for our rapport was explicitly shared: all participants should feel at liberty to share ideas that weren't quite finished or perfected and to surface and explore their tensions, contradictions, and uncertainties in discussion [45]. A collegial conversation about each collaborator's interest in the study and relevant background provided a natural continuation of the values of learner voice and participation already established within the Special and Adaptive Education course (from which the preservice teachers were drawn). While natural variations in communication style were evident, prompts such as "How might one disagree with that idea?" or "What are we missing here?" or "X, we haven't heard from you on this issue yet. What are you thinking?" served to encourage honest reflection, critique, critical thinking, and disclosure. The snapshot of each collaborator in Table 1 includes representative quotes that reveal a trusting, open, and curious culture among the partners, in which risk-taking and critical thinking were encouraged despite the power imbalance inherent to a teacher-student relationship.
Table 1.
Participant Descriptions.
Because the preservice teachers were equal partners in the conceptualization, data collection, analysis, writing, and editing of the work, and because the work was conducted primarily for self-improvement and reflective learning, the University of Toronto ethics board determined that the work did not constitute research involving human participants under Article 2.5 of the Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans [48]. This interpretation aligns with international standards that define self-study and collaborative inquiry as forms of professional reflection and co-production rather than human-subject research [43]. Nevertheless, ethical principles of voluntary participation, transparency, and non-hierarchical collaboration were upheld throughout the project [39,43].
While self-study methodology privileges reflexivity and narrative over experimental control, it nevertheless demands a structured and transparent approach [43,49]. In accordance with the tradition of self-study, our analysis prioritized dialogue and interactive analysis and theory building among the whole team [43,44] and did not include a process of qualitative coding. Findings were triangulated through three complementary data sources: (1) meeting transcripts capturing collaborative reasoning, (2) individual reflective journals documenting internal meaning-making, and (3) artifacts such as lesson plans, AI prompts, and research articles. Patterns that appeared across at least two of these sources were considered verified within the self-study framework [39].
To produce reliability and validity in each of our 14 meetings, we first presented the data we had gathered; then surfaced patterns and descriptive themes (what happened or what was learned); then collaboratively analyzed the emerging themes to develop theoretical interpretations, new knowledge, and conclusions according to the group’s personal experience of the affordances and constraints of the complex environment of a classroom. The final collaborative writing process allowed for further refinement of ideas and understanding. This approach ensured an analysis that was grounded in concrete evidence, and that would help explicate how educators think, feel, and make judgments when operating privately to engage AI for assistance with planning. Table 2 illustrates the structure of this analytical approach.
Table 2.
Analytic Approach.
All 14 meetings ended with a plan for next steps to be followed independently by all collaborators. Our plans typically included an experiment to be carried out using ChatGPT (https://chatgpt.com/, accessed on 11 November 2025) to support DI lesson planning (e.g., "Let's try inputting a more specific prompt"), often a question to take to the literature (e.g., "Let's see what is already known about prompt engineering"), and twice a plan to create a journalled response (e.g., "What are the three most important issues, from your perspective?"). While Faith and Zaugg participated in all research activities, they were not in an elementary teaching placement, so they could not implement the lessons that had been derived from their interactions with AI.
After engaging in these research activities, the team would regroup, all collaborators would present their data from experimental practices (AI input-output, subsequent teaching experiences, and overall impressions), literature searches (point form notes, key ideas), and journal entries, and the team would engage in a process of analyzing this data to draw out themes and interrogate assumptions. Research literature helped address questions that went beyond the knowledge of the group but was held in check as just one among a team of equal interactive partners in the inquiry [43,45]. Throughout this process, all data (journals, AI input and output, notes from the team’s literature searches, analysis meeting transcripts) were saved and posted to a shared Google doc, ensuring that all members of the team could engage freely with the data outside of formal meetings.
2.3. The Inclusion of a Survey
Early in the study, our team identified a need for updated data about the prevalence of teachers’ AI use for lesson planning. To address this, approval was received from the University of Toronto Research Ethics Board for a 14-item Qualtrics survey (Protocol #56896) in accordance with the ethical principles outlined in the Declaration of Helsinki. Recruitment was conducted through the authors’ professional and academic networks on social media platforms (Facebook, LinkedIn, X, Instagram, Discord) and through Amazon Mechanical Turk (MTurk) to broaden the participant pool and enhance efficiency. Eligibility was restricted to elementary educators from North America and Asia to align with the demographic backgrounds of the research team. Participants provided informed consent prior to participation and were compensated at a fair hourly rate. Duplicate and incomplete responses were removed.
The survey was designed using the Technology Acceptance Model (TAM) [50] as a guiding framework to examine teachers' attitudes, practices, and acceptance behaviors related to AI use for lesson planning. Items included demographic and contextual questions (e.g., country, school type, grades taught, years of experience), questions about lesson planning practices and sources, collaborative practices, teaching satisfaction, and weekly time spent planning lessons. Basic descriptive analyses (frequencies, percentages, and cross-tabulations) were conducted in Excel.
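As an illustrative aside, the same descriptive summaries could be reproduced in a few lines of code. The sketch below assumes a CSV export of the responses; the file name and column names are hypothetical placeholders, not the actual survey variables.

```python
# Hypothetical sketch of the descriptive analyses described above
# (frequencies, percentages, and cross-tabulations).
import pandas as pd

# Load an exported table of survey responses (e.g., a Qualtrics CSV export)
df = pd.read_csv("survey_responses.csv")

# Frequencies and percentages for a single item
counts = df["planning_source"].value_counts()
percents = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percents}))

# Cross-tabulation: years of experience vs. reported AI use
xtab = pd.crosstab(df["years_experience"], df["uses_ai_tools"], margins=True)
print(xtab)
```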
3. Results
Across the 14-week self-study process, four cycles of inquiry emerged relating to (1) ethics, trust, and beliefs about AI use; (2) the failure of our use of one big, comprehensive prompt; (3) effective prompt engineering; and (4) the use of AI as a professional development tool. The results below present raw data from our experiments, scholarly reading, and one survey, as well as interpretations of this data drawn from our journalling and interactive conversations.
3.1. Cycle 1: Ethics, Trust, and Reframing AI Use
Our self-study began after a routine professional discussion with 63 preservice teachers in the teacher training program as well as 15 practising teachers from a Laboratory School at the University of Toronto about how to use AI to support lesson planning. Our research team was surprised by a wide range of negative reactions; while some responded with curiosity and enthusiasm, others expressed skepticism or remained cautiously silent. Our research team decided to conduct a separate survey to gather broader insight into teachers’ uptake of AI tools for lesson planning. Recognizing that the tendency to provide responses that appear positive or socially acceptable (social desirability bias) is often heightened in educational contexts [51,52], we were particularly interested in whether teachers’ private use of AI might diverge from their public stance.
The survey was completed by 147 educators from Canada, the United States, and Asia. Survey results indicated that nearly all participants (n = 143, 97%) reported using the internet to support lesson planning; only a small minority (n = 4, 3%) relied exclusively on independently created plans or scripted curricula. Among those who used internet-based resources, 46 participants (31%) reported drawing from general search engines such as Google, 76 (52%) accessed Teachers Pay Teachers, and 24 (16%) reported using AI-based tools, including ChatGPT, Gemini, Magic School AI, and Microsoft Copilot. Taken together, these findings suggest that while internet-based lesson planning resources are nearly universal among educators, AI-specific tools are less commonly adopted. Although educators voice concerns about AI’s integration into teaching, many already rely heavily on digital platforms for lesson design, suggesting an opportunity for future growth in AI-supported planning.
To better understand these dynamics, the research team consulted the broader literature on teacher perceptions and ethical concerns related to AI in education. A 2024 Pew Research Center survey [53] found that 25% of U.S. public K–12 teachers believe AI tools do more harm than good in education (only 6% believe they do more good than harm). Fears related to academic dishonesty, diminished critical thinking, and overreliance on automation reduce teacher and school adoption of AI tools [54,55,56]. This uncertainty may be fuelled by a professional culture that prizes emotional authenticity, trust, care, courage, honesty, practical wisdom, and fairness, virtues held to underpin teachers' rapid daily decision-making and moral engagement and to enable patterns of "wise" pedagogical action [57,58,59,60,61,62].
In discussion, the research team agreed that the use of AI to help achieve DI may put two different ethical and professional callings into conflict. When we invite teachers to input a prompt to AI to more quickly produce a differentiated lesson plan, we call upon their duty ethics to consistently deliver inclusive practices, but we also challenge the virtue ethics that demand authenticity, care, fairness, and honesty. Tiffanie recalled a special education consultant discovering a colleague's AI use. Her colleague had "…met the criteria" and satisfied her duty ethics, but "the consultant wasn't real happy when she had to give AI credit" (a challenge to her virtue ethics). Narratives about technological overreach and the subsequent recession of academic rigor and authenticity [37,63] may well tip this balance towards a critical level of ethical threat and produce technological paralysis in teachers. We agreed that a culture of meaningful, public, discussable, and improvable adoption of AI by educators must include full acknowledgement of their ethical concerns.
In addition to these ethical concerns, we discussed our fears of professional devaluation; AI may function not as a support but as a competitor that could reduce the perceived value of teachers' expertise, particularly in areas like lesson planning or instructional design. Simon acknowledged the argument that AI can "augment teachers and not replace them," but feared that by making things easier "you are kind of replacing the tasks that they're going to be doing." More clearly delineating the unique and irreplicable functions that human teachers perform in lesson planning would seem to enable more confident use of AI for assistance.
As we discussed these findings and explored the use of AI for lesson planning, we surfaced several other issues that challenged our trust. Considering the output we received, we often felt misled by confidently fabricated curriculum expectations or manipulated by lesson plans that appeared authoritative but proved impractical or generic. The bot’s undeterred projection of confidence and skill was disturbing. For example, Nicole asked ChatGPT to give her three Ontario curriculum expectations for Grade 5 science ecosystems. It replied:
Certainly! Here are three Ontario curriculum expectations:
- 2.3: Analyze the impact of human activities on ecosystems
- 3.1: Investigate energy transfer in food chains
- 4.2: Explain adaptation in local species
She expressed frustration at the inaccuracy of this response—“It completely made-up curriculum expectations”—and explained a disconcerting sense of being misled: “[The expectations] exist, but they were the wrong strand and the wording of them … was different.” Unlike a human colleague, AI did not package its inferior product with a telltale stamp of confusion, frustration, or humility. “It presented this [inaccuracy] with total confidence” she noted. “No hedging, no ‘approximately.’ That’s what felt manipulative.”
Secondly, we worried that if a typical teacher was convinced by AI's confidence and polish and attempted (unsuccessfully) to implement its output, they might simply assume that even well-planned DI is an impossible task. This gave us pause and reminded us to be cautious when advocating the use of AI solutions that may replace higher-quality human-driven planning, degrade teachers' perceptions of what good planning is, cause confusion, and contribute to teachers' feelings of burnout and resignation. Simon provided a good example in our third meeting (see below). Using AI for DI, he was left feeling as though a differentiated lesson plan was too overwhelming and chaotic to implement.
It was like we [were] having three different lessons all at the same time, which was not helpful. We have, like, the visuals on, like, on a projector for the ELL learners. Yeah, fine. But then you have students with ADHD, right? So how are you expecting them to pay attention to their lesson when you have like all this flashing lights and whatever, like the visuals going on right next to you? Right? It doesn’t seem to make much sense in that regard. Not to mention, I had to basically beg my Associate Teacher … to kind of help me implement the plan. Because, you know, it was a very strange plan.
Key Takeaways from Cycle 1 (Ethics, Trust, and Reframing AI)
Although ease of use and perceived effectiveness are emphasized in accepted models of technology acceptance [50], our findings suggest that these factors alone are insufficient to explain teachers’ reluctance to use AI for differentiation. Instead, we learned:
- Ethical comfort, professional identity, and fears about job security influence AI uptake more than ease-of-use alone.
- Teachers reported feeling misled by confident but inaccurate AI outputs.
- Teachers reported that using AI to support differentiation made differentiation itself seem impossible.
These insights deepened our appreciation of the legitimate concerns teachers hold about using AI for lesson planning. We believe that sharing these findings—particularly with teachers considering AI for planning and problem solving—can provide the concepts and language needed to move beyond vague or implicit worries about AI use toward explicit, discussable, and adaptable positions. By making concerns clearer and more open to dialogue, users of AI may be better equipped to weigh risks and possibilities.
With this strengthened foundation, our self-study team entered the next cycle of inquiry. We asked: if high-quality prompt engineering could produce AI outputs that genuinely improve DI by enhancing inclusion, fairness, and student success, might educators with well-defined ethical reservations be willing to give it a try?
3.2. Cycle 2: The Failure of One Big Prompt
Knowing that other research teams had observed disappointingly few inclusive practices in their AI-generated lesson plans [64], we crafted a template for one big, coherent prompt that explicitly requested inclusive approaches. Our first One Big Prompt template (Supplementary File S1) proposes a framework for the lesson plan that the AI would create (hook, teaching approach, activities, assessment) and includes highlighted fields that an educator must fill in with information specific to their lesson: background information about the intended lesson (learning objectives, students' previous learning, timing) and important information about the learners (anonymized notes about student learning needs, accommodations, interests). We have included Nicole's entire AI output in response to this prompt in Supplementary File S2.
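For orientation, a condensed sketch of the template's shape follows. The wording and bracketed fill-in fields are illustrative paraphrases of the structure described above, not a verbatim excerpt from Supplementary File S1.

```text
You are an expert elementary teacher. Create a differentiated lesson plan
with a hook, teaching approach, activities, and assessment.

Lesson background:
  [learning objectives] [students' previous learning] [timing]

Learner information (anonymized):
  [student learning needs] [accommodations] [interests]

Differentiate the content, processes, and products of the lesson to suit
learner readiness, interests, and needs.
```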
We discovered that even when our prompts were organized, detailed, and thorough, the output often lacked feasibility. For example, Charles' AI-supported fractions lesson appeared structured but was ultimately unworkable, with an activity sequence that would be impossible for a teacher to adequately supervise. In another case, the AI recommended a standing desk for a kindergarten student who was struggling to focus (a good idea), though the recommendation was given for a lesson that was to occur outside (desks aren't typically found outside).
The AI-generated lessons also often lacked flow and coherence. For example, we agreed that the hook suggested for Nicole's science lesson was unrealistic for the context of the intended class, given the nature of the split grade and lack of connection to earlier lessons. Nicole commented that "some of the timeframe suggestions [were] really unrealistic." She challenged: "10 min for everyone in the class to present their ideas, and then also discuss it, and then also summarize? There's just absolutely no way that would happen." Despite the considerable time it took to include detailed contextual information, such as individual students' learning needs, the AI's ideas for differentiation often felt disjointed and inappropriate. While there were parts of the output that could be adapted and used, we were struck by a profound sense of disconnection. Charles characterized the output we received when using One Big Prompt as "alien, odd," and "unnatural." This seemed fitting to all of us. "Lessons don't happen in vacuums," wrote Nicole, in a journal reflection. "A lot of the lesson plans that AI tools create feel like they're the first lesson of a topic, even if you've given the AI [adequate] background." After trying and failing to delegate the whole task of lesson planning to AI, we realized that lesson planning is not a straightforward administrative task but a complex teaching task that requires a rich understanding of the classroom context. In Table 3, we provide two examples of our AI inputs, the AI's corresponding ideas for differentiation, and the problems we identified in the output.
Table 3.
AI Inputs, Outputs, and Problems.
We agreed that AI, with its insensitivity, shallow reasoning, limited adaptability, and profound lack of contextual comprehension [54], is an unreliable stand-alone lesson planner. We realized that a vast, almost infinite amount of pedagogical and context knowledge is required to make good lesson planning decisions—a level of detail that far exceeded even our longest prompt inputs. “A good lesson,” Laurie suggested, “happens when a teacher knows that one child just got a dog, that another lost a tooth last week, that two of them are in a fight, and that a holiday break is coming up.” The group agreed that an infinite range of small context cues produce subtle, nuanced, sensitive, and effective shifts in a lesson plan. We came to agree that human educators perform a kind of alchemy when lesson planning, drawing upon and applying an extremely nuanced contextual awareness of the many nested environments in which children learn.
Using Bronfenbrenner’s [65] model of influential ecological systems as a guide, we suspect this teacher alchemy integrates many big and small signals related to the immediate environment (the physical space, equipment, peers, task demands), the interconnections and relationships within it, the broader culture of the school (schedules, rules, community), and the passage of time (what happened yesterday, this morning, an hour ago, or a minute ago, or what is coming up). When planning a fractions lesson, teachers intuitively incorporate the soft carpet into a plan for cross-legged groupwork, the missing box of blocks into a plan for using magnets instead, an inside joke about goldfish crackers in the lesson’s hook, and knowledge of a student’s worry about drawing, expressed last week, to expand the options for written output. AI tools do not have this rich bank of wisdom about the ecosystems in which lessons take place, but they can offer rapid access to information and creative suggestions.
This finding aligns with current theories of “organismic agency” [66], which highlight the human ability to plan, re-plan, prioritize, and re-prioritize when problem solving in ever-changing environments that are overflowing with ambiguous and ill-defined signals. Human agents, they suggest, use their embodiment to interact dynamically (perceive, act, experiment) within problem settings that are uncertain, ambivalent, or even misleading, while algorithms are restricted to the use of well-defined precedents and straightforward logical rules.
Key Takeaways from Cycle 2 (Failure of One Big Prompt)
Our experiments with One Big Prompt, considered alongside our reading on organismic agency, highlighted the sharp contrast between AI’s static mimicry and the dynamic, context-sensitive agency of human teachers, reinforcing our confidence in the indispensable role of educators in lesson planning. We learned that:
- One Big Prompt produced occasional useful ideas but delivered impractical, depersonalized lessons that required heavy teacher revision.
- Human teachers apply contextual “alchemy” that AI lacks; lesson planning is a dynamic and situated task that relies upon a teacher’s judgement and sensitivity.
Next, we wondered exactly how AI could be prompted to yield helpful results.
3.3. Cycle 3: Engineering a Dialogic Prompt with a “Break Glass” Kickstart
As we engaged more deeply with AI platforms over successive meetings, our team quickly realized that no single prompt could serve as a universally reliable template. In his journal, Simon wrote that AI "didn't save much time at all. [It does] a fair job of creating an introductory jumping-off point but fails to deliver a product that teachers could use to reduce workload." We suspected that we could improve our prompting approach and turned to the literature on prompt engineering to expand our skills.
We discovered that Cain's [67] model of effective prompt engineering includes three pillars: content knowledge, critical thinking, and iterative design. We were not surprised to learn that a good basic knowledge of the topic of one's inquiry enables a prompt that is sufficiently detailed, specific, and more likely to yield a detailed and specific response. We believe that our One Big Prompt achieved this "pillar" of effective engineering. We had not, however, been employing adequate critical thinking to assess, refine, iterate upon, and improve our AI output to circumvent misunderstanding, bias, or inaccuracy. In fact, as Madeline said, "arming [ourselves] with a perfect prompt" seemed to "take away the thinking." We reviewed several recent (2023 or 2024) frameworks for prompting AI and found broad agreement on the value of an iterative and reciprocal approach (see Table 4, below); CLEAR [68] encourages adaptability when prompting AI, GPEL [69] mentions the use of an iterative approach, and IDEA [70] refers to the use of verification and refinement.
Table 4.
Emphasis on Iteration and Reciprocity in Prompt Engineering Frameworks.
Our research team began to approach lesson planning with AI as a dynamic, co-constructive process rather than a one-shot, input–output exchange. We shifted from long, static prompts to conversational, back-and-forth interactions, and noticed that our outputs became more relevant and easier to adapt. This dialogic approach allowed us to add detail gradually, test ideas, and exercise some creative and critical thinking. We agreed it resembled "jamming" with another musician to slowly integrate ideas and build something original and satisfying. Rather than relying on AI as a source of answers, we began to treat it as a flexible partner in the design process.
This approach reduced the frustration, disappointment, and lack of trust we had experienced while utilizing one big prompt. Tiffanie noted her satisfaction at receiving output “more tailored to [her] needs” after “adding details incrementally… as they arose.” We entered these jamming interactions with no expectation that a perfect result would be generated immediately. Nicole described how she was able to both input and receive information “gradually” and “as needed,” which seemed similar to “co-planning with a colleague” wherein information does not have to be front-loaded but is rather discussed more slowly, as it is asked for and “becomes relevant.” She noted that this “back and forth conversation can take off some of the teachers’ cognitive load” because it broke the process of lesson planning into smaller steps. In this way, an iterative jam session with the AI supported her sense that DI lesson planning was achievable.
We also noticed that it was important to start these dialogic interactions simply and with explicit purpose. At Simon and Tiffanie's urging, and following the advice of most prompt engineering models to first specify a "persona" for the AI, we started each interaction by specifying the role the AI should play. In his journal, Simon suggested that "…maybe the idea is to sort of start with the prompt saying, 'okay, I want to have a back and forth to eventually create a lesson plan,' rather than just [saying] 'okay, I'm creating a lesson plan.'" Tiffanie remarked that, by doing this, we could "start a base" and then "add to it instead of trying to throw a bunch of information at [the AI] at once."
We were particularly interested in the work of Hashem and colleagues [26], who published their prompts for lesson planning as part of a report on using AI as a teaching assistant. We noticed that they began their iterative prompting with very simple initial prompts, such as, “Can you design a lesson plan for an 8th-grade class focusing on square roots and cube roots within a 45-min class period?” These were subsequently modified with prompts to improve the original output (e.g., “modify the lesson for lower achievers, middle achievers and high achievers”), but our team agreed that there was something important about this simple initial prompt. It seemed more closely aligned with what an educator would naturally want to do.
Upon further reflection, Madeline suggested that when we give AI a short initial prompt, it acts as a "break glass" tool—a minimal input that provides a fast, low-pressure, discreet, "better than nothing" boost when feeling stuck, rushed, or uncertain. "It is something you use in an emergency," she said, "when you're burned out" and "on the edge of a cliff." Fatemeh agreed that the quick, easily accessible advice available from AI to a struggling early-career teacher feels like "a ray of hope in the darkness."
This advantage would seem especially appropriate for preservice teachers, who often lack a bank of experience, materials, and instructional strategies to draw from, and may feel uncomfortable turning to more experienced colleagues for support. In combination, a break glass start and a subsequent jam session with AI seemed to be a practical support for early-career educators who were developing both capacity and confidence for differentiated lesson planning.
Key Takeaways from Cycle 3 (Dialogic Prompt with “Break Glass” Kickstart)
We began this cycle of inquiry wondering if, when expertly prompted, AI could transform the quality of DI in classrooms enough to convince reluctant teachers to give it a try. Through our literature review and experimentation, we did not discover a prompting approach that would yield a one-shot, complete, and ready to use differentiated lesson. Rather, we discovered that:
- Iterative, dialogic prompting (“jamming”) produced more usable outputs than single long prompts.
- Short, initial, “break glass” prompts help overwhelmed teachers get started quickly while allowing them to retain control over the planning process.
These findings led us to approach AI not as a one-shot lesson generator but as a flexible partner in iterative design, offering both practical support for planning and opportunities for our own creativity, thought, and engagement. This led us to a rich inquiry about the potential for AI as a source of teacher development in Cycle 4.
3.4. Cycle 4: Using AI as a Source of Teacher Development; Creating a Framework
Research on teachers' use of AI for lesson planning often urges careful filtering, adapting, and critical evaluation of AI-generated materials to ensure quality [54,55]. This process was comfortably and feasibly enabled by our shift to an iterative prompt strategy. As we jammed with AI about lesson goals, contextual factors, and instructional decisions, we noticed that it steered us towards critical thinking, as well as reflection and learning about teaching, learning, accessibility, and DI. For example, when the AI suggested that Nicole start her lesson with a "video clip on big questions," Nicole was reminded that videos can be helpful as a form of visual support and decided to add one that was instead connected to students' prior knowledge. This made her lesson more engaging for all learners—even those with less capacity for paying attention. The focus on timing and clear instruction that Charles integrated into his lesson emerged from AI advice to use a timer for a game he was planning. He exercised his understanding of the context of the classroom to avoid rushing his students as they played a game and instead integrated better cues for timing throughout the lesson. In this way, Charles' jam session with AI nudged him towards a focus on timing and structure, stimulated his agency, and helped him to employ a more accessible approach. Fatemeh chose to adapt the AI's advice to make five differentiated worksheets, instead creating two worksheets and providing a third option for an oral presentation. Having rapid access to multiple versions of the worksheet saved her some time and effort and gave her the confidence to explore further differentiation. Table 5 presents several examples of AI suggestions that required adaptation, but that were useful for broadening teachers' thinking and supported their learning about DI techniques.
Table 5.
Examples of AI Output that Supported Teacher Learning.
As our understanding of how to engage productively with AI evolved, so too did our understanding of its impact on our teaching practice. We reflected on Schön’s [71] notion of “reflection-in-action,” the way that professionals learn through the act of describing and examining their work in real time. We began to explore models for conversational teacher development that were aligned with what we were experiencing in dialogue with AI. Frambaugh-Kritzer and Petroelje Stolle’s notion of AI as a critical friend [72] seemed like a good fit. Critical friendship is defined by an ongoing dialogue one has with a trusted peer who supports growth by providing honest and constructive feedback, helping to reveal blind spots, and fostering deep reflection [73]. A critical friend is not merely supportive, but challenges thinking, provokes inquiry, interrupts bias, and calls attention to unchecked habits [74]. When reviewing a differentiated lesson plan, a critical friend might ask, “What could go wrong?” or “Can you think of one student whose needs may not be met?” Madeline, Nicole, and Fatemeh all referred to the experience of being challenged while jamming with AI in a way that was consistent with critical friendship. Nicole said, “it gave me guiding questions. It was like having a colleague point out what I might be forgetting.” Madeline said, “it was always helpful in pointing out blind spots—[such as] questions we might not have thought to include or [an awareness of] where our own understanding ends.” Fatemeh felt that AI helped her to “find gaps” in her lesson plan.
Considering this insight, we began explicitly prompting the AI to take the persona of a critical friend, and the team's capacity for high-quality lesson planning increased markedly. For a lesson on communication, Madeline had asked students to draw an image based on a classmate's description. When the AI suggested the use of larger crayons to enable fine motor control, Madeline commented, "Like a new teacher, I kind of thought my activity was about …describ[ing]… and draw[ing]. I never would have considered [the fine motor] aspects." Others talked about using AI's suggestions for how to accommodate learners working above grade level, how to manage time, and what to do when a child did not respond as expected.
A trusting critical friendship takes time, patience, and effort to build [74]. Baskerville and Goldblatt suggest it often takes up to a year and a half for educators to move from professional indifference, to tentative trust, to a level of trust that permits unguarded conversations [75]. Within our critical friendships with AI, we agreed that we were building critical friendship in reverse. Initially overwhelmed by the power of AI, we moved from total acceptance, to a sense of betrayal, to an understanding of the AI’s limitations and an appreciation of our wisdom, to cautious engagement, to being able to fully “hold our own” in a more reciprocal relationship with AI. Rather than deferring to AI outputs, we learned to examine them curiously, critically, and adaptively. As Charles observed, “You have to inject your own alchemy as a teacher to make it fit within the context of your students… we are the ones responsible for selecting the best parts of the output.” Laurie agreed, noting that this dynamic parallels consultation with a human colleague: “This is typical. When consulting with others, we always run their suggestions through a filter and apply our own judgement.” Fatemeh added dimension to this notion by explaining that as an English language learner herself, she felt more confident in critical friendship with AI when discussing a math lesson and less confident when discussing issues related to the English language: “I’m not qualified to assess [output related to English language] because I don’t know myself. What is ‘a simile?’” Accordingly, Fatemeh approached the teacher supervising her Grade 4/5 placement for additional guidance. From this, we decided that AI should not be a teacher’s only critical friend. To use AI as a critical friend, teachers need to exercise their own judgement on the outputs, remember the limits of their ability to properly weigh and evaluate the outputs, and engage other supports when necessary.
Even as a temporary source of critical friendship, while trust was developing with human colleagues, we suspected that AI could be very useful for an early-career educator, especially if it could be trained to shift away from its confirmatory style and towards one that was more sober, curious, and critical. As the team explored this method, Tiffanie noticed: "…when you… are dialoging with AI … you're in that reflective process." Laurie agreed that dialoging with AI enables "reflective engagement" in pre-service teachers, and that this is just as valuable as eliciting a good DI lesson plan.
At this point in our self-study, we began to assemble a novel framework to combine all of the new knowledge and skill we had gained during four cycles of collaborative inquiry: the Persona, Break Glass, Name Plan, Jam (PBNJ) Framework. To use PBNJ for high-quality DI planning, an AI user inputs: (1) an initial instruction establishing the desired Persona and role for the AI (critical friend); (2) a short, simple starter prompt that includes the basic topic of the lesson and a few key factors the educator has in mind (the Break Glass prompt); (3) a tentative formulation of the lesson, whereby the educator Names the outline they have in mind; and (4) several dialogic back-and-forth prompts, wherein the user "Jams" with the AI as though it were a critical friend to refine and improve the plan.
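To make the sequence concrete, the sketch below scripts the four PBNJ steps against a chat completion API. It is a minimal illustration under stated assumptions: the model name, helper function, and condensed prompt wording (paraphrased from Nicole's interaction in Appendix B) are ours, and the framework itself prescribes no particular tool or code.

```python
# Minimal sketch of a PBNJ session as four scripted chat turns.
# Assumes the openai Python package and an API key in the environment;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = []

def turn(user_text: str) -> str:
    """Send one user turn, keeping the running conversation history."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply[:400])  # preview the AI's turn
    return reply

# (1) Persona: cast the AI as a critical friend, not an answer machine.
turn("Act as my critical friend while I plan a differentiated lesson. "
     "Challenge my assumptions and point out what I might be missing.")

# (2) Break Glass: a short, low-pressure starter prompt.
turn("I want my 5/6 graders to learn the inputs and outputs of "
     "photosynthesis. Where should I start?")

# (3) Name Plan: the educator states their own tentative outline.
turn("Here is my draft: a visual diagram, small-group plant investigation, "
     "and sentence frames for ELL students. What am I missing?")

# (4) Jam: iterate conversationally, accepting, adapting, or discarding ideas.
turn("I like the real-world hook. Can you draft a short story about a "
     "world without plants?")
```

A teacher working in the ChatGPT web interface would simply type the same four turns; the point of the workflow is the order and character of the turns, not automation.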
Nicole’s complete PBNJ interaction appears in Supplementary File S3. With only 10 inputs, she managed to co-create a differentiated lesson plan that she was satisfied with and engage in meaningful professional learning and development. Appendix B provides a complete list of her prompts, as well as an analysis of the benefits to Nicole at each step. Appendix C presents the final lesson that she co-created using the PBNJ framework.
Throughout, she used her in-depth understanding of the classroom context to parse the AI’s lengthy output for appropriate differentiation ideas. For example, when Nicole proposed starting with a diagram and small group investigation, ChatGPT responded with scaffolds for ELLs, movement options for ADHD, reflection prompts, and advanced learner challenges. Nicole then picked just one idea that seemed relevant for her context (“I like the idea of starting with real world link”) and invited the AI to help her produce a resource (“I’d love a little story or video about a world without plants.”) Later, when ChatGPT suggested complex small group investigations, Nicole exercised judgement based on her sense of the timing and logistics of the lesson, and pared it back to, “I’m just going to pass around three plants.” Nicole’s discernment in deciding what to use, simplify, or discard was central to the success of her AI-assisted lesson planning.
For Nicole, critical friendship with AI reminded her to include inclusive teaching practices in her lessons that may have otherwise been forgotten in her rush to adhere to the school schedule. "It serves a good reminder of what lessons can and should include that we often miss," she noted. The team agreed that using AI nudged them towards "anatomically correct" (Laurie, Charles, Nicole) lesson planning that "includes all the things that learners need in order to engage and have their needs met" (Nicole). In Nicole's PBNJ interaction, the AI reminded her to include a strong hook, or "connection to real world." In response, she added a short, dramatic story as well as live plants. The AI later reinforced this learning with feedback: "Leading with a real-world scenario like 'What if plants disappeared?' taps into curiosity, emotion, and survival thinking, which hooks all learners, regardless of reading level or language proficiency." Notably, the scenario AI suggested was a little scary, and Nicole again used her nuanced understanding of the fears of several of her students to transform worrisome content ("Days later, people start to panic. There's not enough oxygen. Without plants, we can't eat. We can't breathe. We can't live.") to content that was more optimistic and positive ("Days later, people get busy planting. We need plants to live. Without plants, we can't eat. We can't breathe. We can't live.").
Key Takeaways from Cycle 4 (AI for Teacher Development; Creating a Framework)
In Cycle 4, we explored how AI could serve not just as a tool for producing lesson plans, but as a critical friend—a dialogic partner that supports growth by asking questions, surfacing blind spots, and provoking reflection. Through this lens, we discovered that:
- AI can function as a productive critical friend, surfacing blind spots and prompting reflection and learning.
- The PBNJ workflow helped teachers co-create feasible, differentiated lessons while preserving agency.
- AI is a supplement to, but not a substitute for, human collegial support.
These findings suggest that AI may hold promise as a temporary or supplementary critical friend for teachers, particularly for early-career educators who benefit from immediate feedback and reflection prompts. As a support for DI planning, AI’s value lies less in producing final products and more in supporting reflective engagement and professional growth when used alongside teacher judgement and human collegial support.
4. Final Discussion and Conclusions
Acknowledging the persistent gap in differentiated lesson planning and the hesitancy among educators to use AI in this domain, this self-study set out to explore whether acceptable and pedagogically productive uses of AI for lesson planning could be identified. Across four cycles, we explored users' ethical concerns, experimented with a single comprehensive prompt, refined dialogic prompting, and finally formulated the Persona, Break Glass, Name Plan, Jam (PBNJ) framework. PBNJ reframes AI as a scaffolded, dialogic partner that supports reflection, decision rehearsal, and rapid idea generation while keeping teachers in control. In Supplementary File S4, we present a practitioner-friendly overview of the PBNJ workflow with a rationale for each step.
Our self-study revealed three recurring themes that underpin the conceptual contribution of the PBNJ framework: (1) the emotional positioning of the teacher at the outset of AI use, (2) the professional agency and judgement that must be preserved throughout an interaction with AI, and (3) the reframing of AI not as an expert tool but as a dialogic thinking partner. These themes distinguish PBNJ from other prompt engineering models by attending not only to the iterative structure of AI interaction, but to the user’s need for identity, authorship, and ethical responsibility throughout the process.
Unlike technical frameworks such as CLEAR [68], GPEL [69], or IDEA [70], which offer structured cycles of refinement focused on output quality, PBNJ centers the relational and affective dynamics between user and AI. By foregrounding the user’s emotional, ethical, and developmental needs, PBNJ frames a human interface for prompt engineering. In doing so, it positions dialogic AI interactions as fertile ground for reflection, learning, the exercise of professional judgement, and principled decision-making. This contribution offers a transferable model for human-centered, values-conscious AI use in professional domains where ethics, agency, care, and contextual judgment are paramount.
4.1. Limitations
- Small self-study sample and convenience survey limit generalizability.
- The study was descriptive; future controlled studies should assess learning outcomes, fidelity, and long-term practice change.
4.2. Conclusions
- The PBNJ framework positions AI as a reflective scaffold that supports user judgment and professional learning rather than replacing core professional work. The framework is promising for DI lesson planning, broad professional applications, and curricula that teach ethical, human-centered, and critical AI use.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ai6120310/s1, Supplementary File S1: One Big Prompt (Input)—Nicole’s Lesson; Supplementary File S2: One Big Prompt (Output)—Nicole’s Lesson; Supplementary File S3: ChatGPT Input/Output Using PBNJ to Co-Create a Differentiated Lesson Plan (Grade 5); Supplementary File S4: Overview for Educators: the PBNJ AI Prompting Workflow (First and Second Pages).
Author Contributions
Conceptualization, L.F. and T.Z.; methodology, L.F.; validation, L.F. and T.Z.; formal analysis, L.F. and T.Z.; investigation, N.S., M.S., C.B., F.H. and S.L.O.; resources, L.F., T.Z., N.S., M.S., C.B., F.H. and S.L.O.; data curation, T.Z.; writing—original draft preparation, L.F.; writing—review and editing, L.F., T.Z., N.S., M.S., C.B., F.H. and S.L.O.; visualization, L.F. and T.Z.; supervision, L.F.; project administration, L.F. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki. The external survey was approved by the Institutional Review Board (or Ethics Committee) of the University of Toronto (protocol code 00048441, approved on 2 May 2025).
Informed Consent Statement
Informed consent was obtained from all participants who completed the survey.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors on request.
Acknowledgments
During the preparation of this manuscript/study, the authors used Qualtrics to collect survey responses and perform basic statistical analysis. The authors have reviewed and edited the output and take full responsibility for the content of this publication.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| DI | Differentiated Instruction |
| EL | Education and Learning Chatbot |
Appendix A
Graduate Course Assignment Criteria
- Active Participation in Five Theory Building Discussions (Literature Review + Data Collection)
  ○ Engages meaningfully in group discussions, contributing thoughtful ideas and constructive feedback.
  ○ Demonstrates attentiveness to others' contributions and builds on them to advance collective understanding.
  ○ Takes initiative in guiding discussions toward deeper analysis and synthesis of ideas.
- Engagement in Research Activities (Literature Review + Data Collection)
  ○ Actively participates in all research-related tasks, including data collection, analysis, and collaborative inquiry.
  ○ Identifies, summarizes, and critically evaluates relevant scholarly articles.
  ○ Effectively integrates research findings into group discussions and written work.
- Quality of Insight and Originality (Theory Building)
  ○ Demonstrates depth of thinking by making connections between theories, research, and practice.
  ○ Offers novel perspectives or interpretations that contribute to the group's intellectual progress.
  ○ Uses evidence and reasoning to support claims and challenges assumptions where appropriate.
- Final Reflection Paper (Written "Discussion")
  ○ Provides a comprehensive four-page reflection on key insights and theories developed by the group.
  ○ Analyzes the implications of these insights for educational or professional practice.
  ○ Demonstrates clarity, coherence, and depth in writing, with well-supported arguments and conclusions.
Appendix B
All Prompts in a Successful PBNJ Lesson Planning Interaction
Inputs and Beneficial Results of a PBNJ Interaction with AI

Step 1: Set the Persona
INPUT: In a moment, I’ll give you some info about a lesson I’m planning. I would like you to act as my critical friend while I plan a differentiated lesson. Please challenge my assumptions, suggest alternatives, and help me identify what I might be missing, especially for students with diverse learning needs. Can you do that?
BENEFICIAL RESULT: The AI confirmed understanding of the role of critical friend.

Step 2: “Break Glass” with an Initial Prompt
INPUT: I want my 5/6 graders to learn the inputs and outputs of photosynthesis. I have several ELL students, two with ADHD, and a few who are already reading at high school level. I am worried about keeping everyone engaged. Where should I start?
BENEFICIAL RESULT: The AI suggested a visual entry point, scaffolds for ELLs, and choice-based tasks.

Step 3: Name a Plan for the Lesson
INPUT: Based on your suggestions, here is my draft: Start with a visual diagram, use hands-on plant investigation in small groups, provide sentence frames for ELL students, and offer choice in either redoing a mild or spicy diagram of input/output. What am I missing for differentiation?
BENEFICIAL RESULT: The AI gave a balanced critique, noting four strengths and offering five practical improvement areas focused on accessibility.

Step 4: “Jam” with the AI as a Critical Friend
INPUT: I like the idea of starting with real world link. That what might happen if plants disappeared idea is a good one. I’d love a little story or video about a world without plants. Hmm..
BENEFICIAL RESULT: The AI praised the idea and produced an original script but failed by providing three broken or irrelevant video links.

INPUT: None of the links work, or are right but the scripted story is perfect. Big improvement. Easy. Okay. I do want to incorporate some movement to reinforce the parts of the diagram. Can I let them opt into either handing in a hand drawn diagram, putting diagram elements in order, or putting the diagram in order AND figuring out an action for each step?
BENEFICIAL RESULT: The AI praised the plan and suggested focusing on evaluation criteria, while also prompting consideration of student perception regarding option difficulty (i.e., too “babyish”) and the inclusion of a more advanced task.

INPUT: Good. I’ll put the criteria on the board. That’s good. I think I’m ready to go.
BENEFICIAL RESULT: The AI offered encouragement, provided a brief checklist of the lesson’s contents, and highlighted four achievements.

INPUT: I’m not going to do small group plant investigation. I’m just going to pass around three plants.
BENEFICIAL RESULT: The AI agreed with the new plan, offering three suggestions for maximizing it and again highlighting the lesson’s four achievements.

INPUT: OH! I can intro three plants and keep them around for the whole unit.
BENEFICIAL RESULT: The AI echoed the excitement, providing four steps to integrate the three plants into the unit and four reasons the idea was “powerful.”

INPUT: Make that plan for me again?
BENEFICIAL RESULT: The AI output the lesson plan clearly using a helpful structure.

INPUT: Can you put the story right into the lesson plan? I already have the diagram, so I can make the “put it in order” thing really easily. I just need the story. Rewrite the lesson plan.
BENEFICIAL RESULT: The AI rewrote the lesson plan with the requested addition.
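For readers who wish to experiment with the same four-move sequence programmatically, the sketch below is a minimal illustration of how the PBNJ turns above could be sent through a chat API. It is not part of the study: it assumes the OpenAI Python client (the openai package), a model name such as gpt-4o, and abbreviated stand-ins for the fuller prompts shown above.

```python
# Minimal sketch of the PBNJ sequence as chat-API calls.
# Assumptions (not from the study): the `openai` Python package,
# an OPENAI_API_KEY in the environment, and the "gpt-4o" model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # running transcript so the AI can "jam" with context


def send(prompt: str) -> str:
    """Append the teacher's turn, get the AI's reply, keep both in history."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text


# Step 1: Set the Persona (critical friend, not answer machine).
send("Act as my critical friend while I plan a differentiated lesson. "
     "Challenge my assumptions and flag what I might be missing.")

# Step 2: "Break glass" with an initial prompt describing the real class.
send("My 5/6 graders need the inputs and outputs of photosynthesis. "
     "I have ELL students, two with ADHD, and some advanced readers. "
     "Where should I start?")

# Step 3: Name a preliminary plan and invite critique.
send("Here is my draft: visual diagram, small-group plant investigation, "
     "sentence frames for ELLs, and a choice of tasks. What am I missing?")

# Step 4: "Jam" turn by turn as the plan evolves.
print(send("I like the real-world hook. Can you draft a short story "
           "about a world without plants?"))
```

The one design point worth noting is the shared history list: carrying the full transcript forward is what lets Step 4 function as a genuine back-and-forth jam rather than a series of disconnected one-off requests.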
Appendix C
AI-Supported Lesson Created Using 10 PBNJ Prompts
Grade 5 Lesson Plan: Photosynthesis—What If Plants Disappeared?

Big Idea: Plants use sunlight, water, and air to make food—a process called photosynthesis—which gives living things oxygen and energy to survive.

Learning Goals: By the end of this lesson, students will (1) identify the inputs (sunlight, water, carbon dioxide) and outputs (oxygen, glucose) of photosynthesis; (2) represent the process in one of several formats; and (3) explain why photosynthesis is important to life on Earth.

Materials: Three real classroom plants; blank paper; out-of-order diagram cut-out pieces.

Lesson Sequence

1. Hook—Story: “A World Without Green” (7 min). Imagine waking up one morning and the trees are gone. The grass is gone. Flowers, bushes, vines—vanished. No trees to climb. The world looks... different. At first, you think: “Cool—no more yard work!” But… there’s no fruit in the fridge. No cereal. No bread. Even the animals are acting weird—because their food is gone, too. Days later, people get busy planting. We need plants to live. Without plants, we can’t eat. We can’t breathe. We can’t live. Discussion prompts: What do you think would disappear first if there were no plants? How do plants help us breathe and eat? What questions do you have about how plants work?
2. Meet the Classroom Plants (5 min). Pass around the three real plants; give each plant a name (e.g., Sunny, Leafy, Captain CO2); prompt students to look closely and make observations: What do you notice? What might this plant need to survive?
3. Mini-Lesson (10 min). Show the anchor diagram on the board; teach the inputs (sunlight, water, carbon dioxide) and the outputs (oxygen and glucose). Sentence frames for ELL support: “Plants take in ___, ___, and ___.” “They make ___ and ___.”
4. Differentiated Student Tasks (25 min). Students choose one of three ways to demonstrate understanding: Draw It (create their own labeled diagram); Sequence It (put diagram pieces in correct order on a mat or page); or Move It (put the pieces in order and create a movement or gesture for each step). Success criteria (post on board): know the three inputs and two outputs in order; understand why photosynthesis matters to humans.
5. Wrap-Up & Reflection (5–10 min). Volunteers share diagrams, sequences, or movements; reflect as a class: “Which part of the plant’s job surprised you?” or “How does knowing this help us care for plants?”
References
- National Center for Education Statistics. Students with Disabilities: Condition of Education. 2020. Available online: https://nces.ed.gov/programs/coe/indicator/cgg/students-with-disabilities (accessed on 5 November 2025).
- Ontario Ministry of Education. Ontario School Information System (OnSIS) Data. 2023. Available online: https://www.ontario.ca/page/education-ontario (accessed on 5 November 2025).
- UNESCO. The Salamanca Statement and Framework for Action on Special Needs Education. In Proceedings of the World Conference on Special Needs Education: Access and Quality, Salamanca, Spain, 7–10 June 1994.
- United Nations. Convention on the Rights of Persons with Disabilities. 2006. Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities/convention-on-the-rights-of-persons-with-disabilities-2.html (accessed on 5 November 2025).
- UNESCO. AI and Education: Guidance for Policy Makers. 2023. Available online: https://teachertaskforce.org/sites/default/files/2023-07/2021_UNESCO_AI-and-education-Guidande-for-policy-makers_EN.pdf (accessed on 5 November 2025).
- Tomlinson, C.A. The Differentiated Classroom: Responding to the Needs of All Learners, 2nd ed.; ASCD: Alexandria, VA, USA, 2014.
- Tomlinson, C.A.; Imbeau, M.B. Leading and Managing a Differentiated Classroom; ASCD: Alexandria, VA, USA, 2023.
- Deunk, M.I.; Smale-Jacobse, A.E.; de Boer, H.; Doolaard, S.; Bosker, R.J. Effective differentiation practices: A systematic review and meta-analysis of studies on the cognitive effects of differentiation practices in primary education. Educ. Res. Rev. 2018, 24, 31–54.
- Hattie, J. Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement; Routledge: Oxfordshire, UK, 2009.
- Kulik, C.L.C.; Kulik, J.A.; Bangert-Drowns, R.L. Effectiveness of mastery learning programs: A meta-analysis. Rev. Educ. Res. 1990, 60, 265–299.
- Steenbergen-Hu, S.; Makel, M.C.; Olszewski-Kubilius, P. What one hundred years of research says about the effects of ability grouping and acceleration on K–12 students’ academic achievement. Rev. Educ. Res. 2016, 86, 849–899.
- Pozas, M.; Letzel, V.; Lindner, K.-T.; Schwab, S. DI (Differentiated Instruction) does matter! The effects of DI on secondary school students’ well-being, social inclusion, and academic self-concept. Front. Educ. 2021, 6, 729027.
- Guay, F.; Roy, A.; Valois, P. Teacher structure as a predictor of students’ perceived competence and autonomous motivation: The moderating role of differentiated instruction. Br. J. Educ. Psychol. 2017, 87, 224–240.
- Letzel-Alt, V.; Pozas, M. (Eds.) Differentiated Instruction Around the World: A Global Inclusive Insight; Waxmann: Münster, Germany, 2023.
- Chen, B.; Zhao, C. More is less: Homeroom teachers’ administrative duties and students’ achievements in China. Teach. Teach. Educ. 2022, 119, 103857.
- Liem, G.A.D.; Marsh, H.W.; Martin, A.J.; McInerney, D.M.; Yeung, A.S. The big-fish-little-pond effect and a national policy of within-school ability streaming: Alternative frames of reference. Am. Educ. Res. J. 2013, 50, 326–370.
- Bondie, R.S.; Dahnke, C.; Zusho, A. How does changing “one-size-fits-all” to differentiated instruction affect teaching? Rev. Res. Educ. 2019, 43, 336–362.
- Parekh, G. Ableism in Education: Rethinking School Practices and Policies; Routledge: Oxfordshire, UK, 2023.
- Hu, L. Utilization of differentiated instruction in K–12 classrooms: A systematic literature review (2000–2022). Asia Pac. Educ. Rev. 2024, 25, 507–525.
- Pozas, M.; Letzel-Alt, V.; Schwab, S. The effects of differentiated instruction on teachers’ stress and job satisfaction. Teach. Teach. Educ. 2023, 122, 103962.
- Gibbs, K. Voices in practice: Challenges to implementing differentiated instruction by teachers and school leaders in an Australian mainstream secondary school. Aust. Educ. Res. 2023, 50, 1217–1232.
- Van de Grift, W.J.C.M.; Van der Wal, M.; Torenbeek, M. Development of didactical skills of primary school teachers. Pedagog. Stud. 2011, 88, 416–432.
- Van Geel, M.; Keuning, T.; Safar, I. How teachers develop skills for implementing differentiated instruction: Helpful and hindering factors. Teach. Teach. Educ. Leadersh. Prof. Dev. 2022, 1, 100007.
- Bryant, J.; Heitz, C.; Sanghvi, S.; Wagle, D. How Artificial Intelligence Will Impact K-12 Teachers. 2020. Available online: https://www.mckinsey.com/industries/education/ourinsights/how-artificial-intelligence-will-impact-k-12-teachers (accessed on 1 November 2025).
- Jotkoff, E. NEA survey: Massive staff shortages in schools leading to educator burnout; an alarming number of educators indicating they plan to leave the profession. Natl. Educ. Assoc. 2022.
- Hashem, R.; Smith, J.; Lee, K. AI to the rescue: Exploring the potential of ChatGPT as a teacher ally for workload relief and burnout prevention. Res. Pract. Technol. Enhanc. Learn. 2024, 19, 1–15.
- National Center for Education Statistics. Condition of Education. 2022. Available online: https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2022144 (accessed on 1 November 2025).
- Hamilton, I.; Swanston, B. Artificial Intelligence in education: Teachers’ opinions on AI in the classroom. Forbes, 6 June 2024. Available online: https://www.forbes.com/advisor/education/it-and-tech/artificial-intelligence-in-school/ (accessed on 1 November 2025).
- Luckin, R.; Holmes, W. Intelligence Unleashed: An Argument for AI in Education; UCL Knowledge Lab: London, UK, 2016.
- McKinsey & Company. What Is Generative AI? 2024. Available online: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai (accessed on 1 November 2025).
- Chassignol, M.; Khoroshavin, A.; Klimova, A.; Bilyatdinova, A. Artificial Intelligence trends in education: A narrative overview. Procedia Comput. Sci. 2018, 136, 16–24.
- Chen, L.; Chen, P.; Lin, Z. Artificial Intelligence in education: A review. IEEE Access 2020, 8, 75264–75278.
- Setyaningsih, E.; Asrori, M.; Ngadiso; Sumardi; Zainnuri, H.; Hariyanti, Y. Exploring high school EFL teachers’ experiences with Magic School AI in lesson planning: Benefits and insights. Voices Engl. Lang. Educ. Soc. 2024, 8, 685–699.
- Walter, A. Utilizing language-generating artificial intelligence (LGAI) in educational planning: A case study. J. Interdiscip. Teach. Leadersh. 2024, 8, 29–59.
- Bateman, T. Teacher perspectives of ChatGPT as a pedagogical tool in the K-12 setting: A case study. Qual. Assur. Educ. 2024, 33, 203–217.
- Sebastian, G. Privacy and Data Protection in ChatGPT and Other AI Chatbots: Strategies for Securing User Information. 2023. Available online: https://www.researchgate.net/profile/Glorin-Sebastian/publication/370935454_Privacy_and_Data_Protection_in_ChatGPT_and_Other_AI_Chatbots_Strategies_for_Securing_User_Information/links/646a9cd066b4cb4f73c647ef/Privacy-and-Data-Protection-in-ChatGPT-and-Other-AI-Chatbots-Strategies-for-Securing-User-Information.pdf (accessed on 1 November 2025).
- Gantalao, L.C.; Calzada, J.G.D.; Capuyan, D.L.; Lumantas, B.C.; Acut, D.P.; Garcia, M.B. Equipping the next generation of technicians. In Pitfalls of AI Integration in Education: Skill Obsolescence, Misuse, and Bias; Garcia, M.B., Rosak-Szyrocka, J., Bozkurt, A., Eds.; IGI Global: Hershey, PA, USA, 2025; pp. 201–224.
- Phalaguna, I.B.; Kaewsaeng, K.; Worabuttara, T. Exploring teachers’ perceptions of AI-generated English lesson plans for students with intellectual disabilities. Int. J. Instr. Lang. Stud. 2024, 2, 19–28.
- Bullough, R.V.; Pinnegar, S. Guidelines for quality in autobiographical forms of self-study research. Educ. Res. 2001, 30, 13–21.
- Feldman, A. Validity and quality in self-study. Educ. Res. 2003, 32, 26–28.
- Loughran, J. Researching teacher education practices: Responding to the challenges, demands, and expectations of self-study. J. Teach. Educ. 2007, 58, 12–20.
- Patton, M.Q. Qualitative Research & Evaluation Methods: Integrating Theory and Practice, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2014.
- LaBoskey, V.K. The methodology of self-study and its theoretical underpinnings. In International Handbook of Self-Study of Teaching and Teacher Education Practices; Loughran, J.J., Hamilton, M.L., LaBoskey, V.K., Russell, T., Eds.; Kluwer Academic: Dordrecht, The Netherlands, 2004; pp. 817–869.
- Whitehead, J. Creating a living educational theory from questions of the kind, ‘How do I improve my practice?’. Camb. J. Educ. 1989, 19, 41–52.
- Hauge, K. Self-study research: Challenges and opportunities in teacher education. In Teacher Education in the 21st Century—Emerging Skills for a Changing World; IntechOpen: London, UK, 2021; pp. 139–156.
- Pinnegar, S.E.; Hamilton, M.L. (Eds.) Knowing, Becoming, Doing as Teacher Educators: Identity, Intimate Scholarship, Inquiry; Emerald Group Publishing: Leeds, UK, 2015.
- Hamilton, M.L.; Pinnegar, S. Conclusion: The value and promise of self-study. In Reconceptualizing Teaching Practice: Self-Study in Teacher Education; Hamilton, M.L., Ed.; Falmer Press: London, UK, 1998; pp. 235–246.
- Canadian Institutes of Health Research; Natural Sciences and Engineering Research Council of Canada; Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2, 2022); Article 2.5; Government of Canada: Ottawa, ON, Canada, 2022.
- Pinnegar, S.; Hamilton, M.L. Self-Study of Practice as a Genre of Qualitative Research: Theory, Methodology, and Practice; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009; Volume 8.
- Holden, R.J.; Karsh, B.T. The technology acceptance model: Its past and its future in health care. J. Biomed. Inform. 2010, 43, 159–172.
- Bowman, N.A.; Hill, P.L. Measuring how college affects students: Social desirability and other potential biases in college student self-reported gains. New Dir. Institutional Res. 2011, 2011, 73–85.
- Brewer, G.; Urwin, E.; Witham, B. Disabled student experiences of Higher Education. Disabil. Soc. 2025, 40, 108–127.
- Lin, L. A Quarter of U.S. Teachers Say AI Tools Do More Harm Than Good in K-12 Education. Pew Research Center. 2024. Available online: https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/ (accessed on 11 November 2025).
- Mogavi, R.H.; Deng, C.; Kim, J.J.; Zhou, P.; Kwon, Y.D.; Metwally, A.H.S.; Tlili, A.; Bassanelli, S.; Bucchiarone, A.; Gujar, S.; et al. ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100027.
- Zhang, P.; Tur, G. A systematic review of ChatGPT use in K–12 education. Eur. J. Educ. 2024, 59, e12599.
- Cukurova, M.; Miao, X.; Brooker, R. Adoption of artificial intelligence in schools: Unveiling factors influencing teachers’ engagement. In International Conference on Artificial Intelligence in Education; Springer Nature: Cham, Switzerland, 2023; pp. 151–163.
- Carr, D. Character in teaching. Br. J. Educ. Stud. 2007, 55, 369–389.
- Van Hooft, S. Understanding Virtue Ethics; Routledge: New York, NY, USA, 2014.
- Back, S.; Clarke, M.; Phelan, A.M. Teacher education as the practice of virtue ethics. Res. Educ. 2018, 100, 3–9.
- Murugan, A.; Pandiamani, B.K.; Murugesan, S.K. Honesty as a moral and professional virtue in teaching: A conceptual framework integrating virtue ethics, teacher identity, and ethical practice. J. Educ. Teach. Train. 2025, 16, 21–37.
- Melville, W.; Yaxley, B.; Wallace, J. Virtues, teacher professional expertise, and socioscientific issues. Can. J. Environ. Educ. 2007, 12, 95–109.
- Kunz, B. Patterns of acting wisely: A virtue ethical approach to the professional formation of Christian teachers. Religions 2025, 16, 231.
- MacIntyre, A. After Virtue: A Study in Moral Theory; Bloomsbury Academic: London, UK, 2013.
- Akgun, S.; Greenhow, C. Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics 2022, 2, 431–440.
- Bronfenbrenner, U. The Ecology of Human Development: Experiments by Nature and Design; Harvard University Press: Cambridge, MA, USA, 1979.
- Jaeger, J.; Riedl, A.; Djedovic, A.; Vervaeke, J.; Walsh, D. Naturalizing relevance realization: Why agency and cognition are fundamentally not computational. Front. Psychol. 2024, 15, 1362658.
- Cain, W. Prompting change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends 2024, 68, 47–57.
- Lo, L.S. The CLEAR path: A framework for enhancing information literacy through prompt engineering. J. Acad. Librariansh. 2023, 49, 102720.
- Velásquez-Henao, J.D.; Franco-Cardona, C.J.; Cadavid-Higuita, L. Prompt engineering: A methodology for optimizing interactions with AI-language models in the field of engineering. Dyna 2023, 90, 9–17.
- Park, J.; Choo, S. Generative AI prompt engineering for educators: Practical strategies. J. Spec. Educ. Technol. 2024, 40, 411–417.
- Schön, D.A. The Reflective Practitioner: How Professionals Think in Action; Routledge: Abingdon, UK, 2017.
- Frambaugh-Kritzer, C.; Petroelje Stolle, E. Leveraging Artificial Intelligence (AI) as a critical friend: The affordances and limitations. Stud. Teach. Educ. 2024, 21, 188–211.
- Costa, A.L.; Kallick, B. Through the lens of a critical friend. Educ. Leadersh. 1993, 51, 49–51.
- Katz, S.; Dack, L.A. Intentional Interruption: Breaking Down Learning Barriers to Transform Professional Practice; Corwin Press: Thousand Oaks, CA, USA, 2013.
- Baskerville, D.; Goldblatt, H. Learning to be a critical friend: From professional indifference through challenge to unguarded conversations. Camb. J. Educ. 2009, 39, 205–221.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).