Article

Applying Natural Language Processing Adaptive Dialogs to Promote Knowledge Integration During Instruction

Berkeley School of Education, University of California Berkeley, Berkeley, CA 94720, USA
Educ. Sci. 2025, 15(2), 207; https://doi.org/10.3390/educsci15020207
Submission received: 12 November 2024 / Revised: 28 January 2025 / Accepted: 5 February 2025 / Published: 9 February 2025

Abstract

We explored the value of adding NLP adaptive dialogs to a web-based inquiry unit on photosynthesis and cellular respiration designed following the Knowledge Integration (KI) framework. The unit was taught by one science teacher in seventh-grade middle school classrooms with 162 students. We measured students’ integrated understanding at three time points across instruction using KI scores. Students received significantly higher KI scores after the dialog and with instruction. Students who engaged fully with the dialogs at all three time points received higher KI scores than those who engaged inconsistently across instruction. Investigating idea progression among students with full dialog engagement, we found significant improvements in KI scores in revised explanations after the dialog at all three instruction time points, with a significant dialog-by-instruction interaction facilitating a shift toward more KI links. Two rounds of guidance in the dialog elicited more ideas. Students were more likely to add mechanistic ideas about photosynthesis reactants and cellular respiration after the dialog, especially during and after instruction. Case analyses highlight how the adaptive dialogs helped one student refine and integrate scientific mechanisms at three time points. These findings demonstrate the potential of combining NLP adaptive dialogs with instruction to foster deeper scientific reasoning.

1. Introduction

This paper explores the development of student ideas using Natural Language Processing (NLP) adaptive dialogs during a lesson on photosynthesis and cellular respiration. Students were asked to revise their explanations to a Knowledge Integration assessment at three time points over a week-long web-based inquiry unit. We designed the NLP adaptive dialog, informed by Knowledge Integration pedagogy, to detect the diverse ideas students expressed and to provide guidance that elicits additional ideas and prompts reflection on them. We explore the combined effect of dialog and instruction. Students could choose to participate in the adaptive dialog for help with their explanations each time before writing their revision.
Photosynthesis is a vital yet abstract topic, linking energy flow between organisms and supporting ecosystems globally (Brown & Schwartz, 2009; NGSS Lead States, 2013). Its complexity arises from the abstract transformation of light energy into chemical energy and the unobservable energy transfer from plants to animals (Barker & Carr, 1989; Eisen & Stavy, 1993; Roseman et al., 2009). In addition, students rarely have opportunities to connect photosynthesis to their everyday lives. Students’ everyday experiences are grounded in their family culture and community interactions. These ideas are often referred to as funds of knowledge (Gonzalez et al., 2006). They are a mix of intuitive, non-normative, and accurate interpretations of natural phenomena (diSessa & Sherin, 1998; Inhelder & Piaget, 1958). Common everyday ideas that students express about photosynthesis include the following: (1) sunlight is only used to keep plants shiny/warm/green; (2) plants absorb food directly from soil/water; (3) photosynthesis is simply a gas-exchange process (Amir & Tamir, 1990; Haslam & Treagust, 1987; Simpson & Arnold, 1982). Common everyday ideas about energy transfer in the food chain include the following: (1) animals eat plants just for food without energy transfer; (2) animals directly use and transform light energy in their body (Özay & Öztaş, 2003).
Research highlights the importance of incorporating students’ everyday experiences into science instruction to enhance the relevance of science in students’ lives and to promote integrated understanding (Basu & Barton, 2007; Linn & Eylon, 2011; Rivera Maulucci et al., 2014). This personalized guidance is aligned with the Knowledge Integration pedagogy (Linn & Eylon, 2011) because it builds on each student’s fragmented prior knowledge from multiple experiences (diSessa, 1993) and encourages students to use evidence to distinguish their ideas (Gerard & Linn, 2022). It is also important for effective science teaching because it recognizes students’ unique perspectives, encourages their expression of science understanding, and fosters their sense of agency and identity (Rodriguez, 2013). Recent advances in Natural Language Processing (NLP) have made student ideas visible (Riordan et al., 2020a) and have extended this kind of personalized guidance to more students and encouraged further reflection and revision (Zhai et al., 2020).
In this study, we explored the value of adding NLP adaptive dialogs in a web-based unit at three time points during instruction to enhance science learning by responding to students’ funds of knowledge and supporting their sense-making processes. Our NLP dialog (Figure 1) first identified the specific idea in a student’s explanation to an open-ended question and positioned the student’s idea as a lens for deepening their understanding. Then, the dialog provided one round of adaptive guidance followed by one round of generic reflection guidance. This approach aimed to help students recognize, discover, distinguish, and integrate their ideas to build a coherent understanding of photosynthesis and cellular respiration.
We investigated four main research questions:
(1) How does consistent engagement in the NLP dialog compare to inconsistent engagement in influencing students’ KI scores across instruction?
(2) How does the NLP adaptive dialog help students strengthen their integrated understanding of photosynthesis across instruction?
(3) How does the NLP adaptive dialog support students to integrate ideas about photosynthesis across instruction?
(4) How do the two rounds of guidance within the NLP dialog help students integrate their ideas?

2. Literature Review

2.1. Knowledge Integration Framework

We applied the Knowledge Integration (KI) framework to build the web-based unit curriculum and the NLP dialog. Students come to science class with a rich repertoire of ideas about various phenomena and benefit from exploring alternative views (Linn et al., 2023; Rosebery et al., 2016). Built on constructivist learning theory, which suggests that learners are actively involved in knowledge construction based upon the interaction of prior knowledge and new events (Inhelder & Piaget, 1958), KI emphasizes that students develop multiple, varied ideas about scientific phenomena across multiple contexts, including school, home, and nature (Linn & Eylon, 2011). To help students integrate these ideas, teachers elicit their students’ existing ideas, provide opportunities for students to discover more ideas using models or relevant examples, encourage students to conduct investigations to distinguish among these ideas, and ask students to reflect on their ideas to promote robust understanding of science (Linn & Eylon, 2011). Making science accessible, making thinking visible, helping students learn from others, and promoting autonomy are the four core principles of KI instruction (Linn & Hsi, 2000). These four principles connect constructivist educational research to instructional practices. KI pedagogy has been applied in various contexts, including the design of web-based curricula for teaching photosynthesis (e.g., Ryoo & Linn, 2012; Wiley et al., 2019). For instance, to make complex photosynthesis chemical reactions more accessible, Ryoo and Linn (2012) developed dynamic visualizations to track the breakdown and recombination of atoms. These visualizations were revised and embedded in the unit in this study to help students gather new evidence through interactions with the models and to distinguish between these new ideas and their initial views.
KI pedagogy also prompts reflection and encourages students to rethink their ideas and revise their responses, an important phase of self-regulated learning (Pintrich, 2000). Promoting productive revision during instruction has been shown to help students develop more coherent understanding of science concepts (Davis, 2003; Liu et al., 2015; Ryoo & Linn, 2015). To measure integrated understanding of science topics, researchers have developed formative assessment items using the KI framework (Liu et al., 2016). The five-point KI scoring rubric assesses the degree to which students link normative, relevant ideas in their explanations. The higher the KI score, the more fully the student integrates and links their science knowledge. The rubric rewards students for connecting ideas to evidence and for linking one idea to another, rather than penalizing the expression of intuitive ideas or rewarding the accumulation of isolated ideas. The KI scoring rubric has been used as a reliable and valid measurement tool in previous studies (Lee et al., 2011; Liu et al., 2016) across topics like global climate change (Bradford et al., 2023), thermodynamics (Li et al., 2023), and photosynthesis (Li et al., 2024a). For instance, in a longitudinal study, Liu et al. (2015) used KI assessments to track middle school students’ progress in understanding energy concepts within a web-based science curriculum. Specifically, Ryoo and Linn (2015) worked with science educators and experts to develop KI items that assess complex thinking about photosynthesis. One KI item, Energy Story (“How does energy from the sun help animals to survive?”), invites students to reflect on their learning and create narratives that apply energy concepts in real-life situations. This prompts students to make connections between photosynthesis, energy transfer, and everyday observations, such as how animals obtain energy from sunlight or food. This Energy Story KI item has been further applied for automated scoring in middle school classrooms (Li et al., 2024b). Therefore, we applied this item at pre-test, midpoint assessment, and post-test to measure students’ coherent understanding of photosynthesis in our study.

2.2. NLP and Knowledge Integration Framework

As KI pedagogy emphasizes helping students connect and distinguish among ideas, assessing these cognitive processes often relies on analyzing students’ written text responses (Zhai et al., 2020). Natural Language Processing (NLP) techniques, with their ability to process and interpret large volumes of text data, present a powerful complement for scaling up KI assessments (Li et al., 2024b). NLP has emerged as a valuable tool in science education, offering a powerful means to help teachers automatically score student responses (Zhai et al., 2022), as well as provide tutoring and timely adaptive guidance (e.g., Aleven & Koedinger, 2002; Gerard & Linn, 2022; Walker et al., 2011). Most science education research using NLP focuses on scoring assessments (Kubsch et al., 2023; Zhai et al., 2020, 2021). Even when these assessments are used formatively, researchers report the limitations of the holistic scores they generate (Lee et al., 2021; Puntambekar et al., 2023; Zhu et al., 2020). For instance, holistic scoring can raise validity concerns and may limit the use of specific information about students’ thinking to inform guidance (Myers & Wilson, 2023). More work is needed on how to detect the wide variety of student ideas and how learning theories can be integrated to help students integrate their ideas.
KI informs the creation of NLP idea detection models that automatically identify distinct ideas expressed by students in their responses according to predefined rubrics (Riordan et al., 2020a). Combined with a dialog interface, KI-based NLP dialogs deliver tailored prompts to encourage deeper reflection, elaboration, and the integration of new ideas, rather than the accumulation of settled ideas (Schwartz & Lederman, 2008). These dialogs promote the core KI processes of eliciting, discovering, distinguishing, and sorting ideas (Linn et al., 2023). In a series of KI-based NLP adaptive dialog studies (Bradford et al., 2023; Holtmann et al., 2023; Li et al., 2023, 2024a), students often started with multiple intuitive ideas drawn from their everyday observations of a science phenomenon. When they submitted their explanations to the dialog, each idea was detected, and adaptive guidance was then given based on researcher-designed rules. Results showed that these dialogs can support teachers in affirming students’ efforts to make sense of science, responding to what students think in order to have them elaborate and synthesize their ideas. For example, Li et al. (2024a) found that the KI-based NLP dialog helped students add more microscopic ideas and use more scientific language when they revised explanations of the Energy Story KI item.

2.3. Designing Automated Guidance in KI Framework

Teachers often value incorporating students’ individual ideas and lived experiences (Luna, 2018). Responding to students’ heterogeneous ideas, and the paths that lead to those ideas, reinforces the effort students make to develop their ideas, acknowledges the risk they take to express their ideas, and helps build student agency and identity (Rodriguez, 2013). Guiding reflection in science classrooms requires dynamic feedback that responds to multiple ideas (Kang et al., 2016), including the normative, broad, and intuitive ideas that students hold simultaneously (Li et al., 2024b). Without opportunities to distinguish among these ideas, students may retain vague and fragmented knowledge pieces. However, given the challenges of large class sizes, heavy course loads, and frequent reassignment to new grades or schools, teachers welcome the use of AI to amplify their effectiveness (Atteberry et al., 2017).
NLP technologies provide a tool to scale up effective guidance to every student. NLP-based tools like intelligent tutoring systems (ITSs) have shown the value of automated guidance (Graesser, 2016; Graesser et al., 2004; Nye et al., 2014). Studies have shown that students benefit more from immediate, task-specific feedback, which helps them recognize and revise conceptual and procedural errors (Kulik & Fletcher, 2016; Shute, 2008). ITSs, using predefined curriculum scripts and anticipated answers, provide dialogs that ask questions, evaluate responses, and offer hints (Graesser et al., 2004). However, for science concepts that are more complex and lack clear-cut answers, ITSs that rely on techniques like Latent Semantic Analysis struggle to identify missing relations or varied misconceptions in student responses (Paladines & Ramirez, 2020). At the same time, these systems are still limited in comparison to expert teachers, particularly when it comes to scaffolding, offering personalized constructive feedback, and supporting metacognitive and self-regulatory skills (Jurenka et al., 2024; Wollny et al., 2021).
The synergy between NLP and KI has proven particularly effective in promoting revisions and enhancing science learning. KI, rooted in constructivist learning theory, posits that students begin science units with fragmented knowledge pieces (diSessa & Sherin, 1998) and intuitive ideas, which can be refined through instruction and well-designed guidance (Linn et al., 2023). This process aligns with Vygotsky’s (1978) concept of the zone of proximal development, where students, aided by more knowledgeable others, are able to solve tasks slightly beyond their current capabilities. Previous studies have demonstrated that KI-based adaptive guidance, particularly when combined with NLP, improves student learning outcomes (Gerard et al., 2016; Vitale et al., 2016). Gerard et al. (2015) showed that KI-based adaptive guidance was more effective than generic or direct disciplinary guidance in helping students refine their scientific explanations. Similarly, Gerard and Linn (2016) found that combining automated guidance with teacher guidance led to improved student understanding of energy concepts in photosynthesis.
In more recent studies, with the development of NLP idea detection models, researchers have been able to design automated guidance in fine granularity (Riordan et al., 2020a, 2021). When designed following KI pedagogy, adaptive guidance in NLP dialogs has shown its potential to support students to sort out their ideas (Bradford et al., 2023), engage in refining their explanations (Li et al., 2023), and analyze their own reasoning and the evidence underlying their perspective (Gerard et al., 2024a). Adaptive guidance models personalized pathways for students with different prior knowledge to generate new, accurate scientific ideas and combine their ideas during instruction (Gerard et al., 2024a; Li et al., 2024a). Adaptive guidance, inspired by expert teachers, is found to be more valuable than general guidance in eliciting more ideas and refining ideas in science classrooms (Gerard et al., 2024b).
While studies demonstrated that providing students with NLP-based adaptive guidance is a promising approach to strengthening learning, prior research also highlights the value of simply asking reflection questions. In science class, reflection guidance that promoted self-monitoring was more likely to improve learning outcomes than guidance that only addressed content knowledge (Gerard et al., 2015). Ryoo and Linn (2016) found that reflective automated guidance helps students engage in evidence-gathering practices and enhances their understanding of scientific concepts. Therefore, we designed our NLP dialog to include one round of adaptive guidance followed by one round of generic guidance to balance personalized support with opportunities for independent reflection. Adaptive guidance leverages the KI framework to elicit and distinguish students’ ideas through researcher-designed prompts, fostering deeper cognitive engagement. The subsequent round of generic guidance shifts the focus to broader reflection and provides opportunities for the classroom teacher to monitor and adjust instruction. This design allows us to examine how these strategies complement each other, offering a nuanced understanding of their interplay in supporting student learning and making the findings more applicable to real-world educational contexts.
In this study, we address the research gap concerning how an NLP adaptive dialog informed by the KI framework supports middle school students in refining their understanding of photosynthesis while working through a web-based inquiry unit. While prior research demonstrates the effectiveness of adaptive dialogs in fostering coherent science understanding, little is known about their effectiveness at different instructional stages. Additionally, while both adaptive guidance and generic reflection guidance have been shown to improve science explanations, their combined use in classroom settings, which better reflects practical classroom constraints where tailored feedback must be balanced with opportunities for independent thinking, has yet to be fully explored. To address these gaps, we embedded one round of adaptive guidance followed by one round of generic guidance in our adaptive dialog, and the dialog was embedded at three time points across instruction.

3. Curriculum Design

This study took place in seventh-grade middle school science classrooms in a public school in the western United States, taught by one science teacher. Students used this Web-based Inquiry Science Environment (WISE) unit as part of their regular science instruction. Before engaging with the unit, the teacher introduced foundational concepts of photosynthesis and used this unit to elaborate on students’ understanding of complex topics.
The primary learning objective of the WISE unit (see Figure 2) was to deepen students’ knowledge of photosynthesis and cellular respiration through dynamic visualizations (Ryoo & Linn, 2012) and inquiry-based activities. The web-based unit curriculum design was informed by the KI framework. In Lesson 1, assessments were designed to elicit students’ initial ideas. Lessons 2 and 3 featured animations and models that introduced new ideas about photosynthesis and cellular respiration. During Lesson 4, students constructed concept maps to depict energy and matter flow in ecosystems, enabling them to distinguish among their ideas. Lesson 5 involved a capstone project where students applied their knowledge to analyze the ecosystem of a national park as part of a reflection process. Finally, in Lesson 6, students were given opportunities to integrate evidence across the unit to revise their responses to the KI assessments. Students studied the WISE unit, led by their regular classroom teacher, for 5–7 class periods.
We used a KI assessment, Energy Story (“How does energy from the sun help animals to survive?”), to measure how students integrate multiple pieces of evidence from throughout the unit to build a coherent explanation. We used a within-subjects design with repeated measures at three time points: before instruction (in Lesson 1, pre-test), during instruction (at the end of Lesson 3, midpoint test), and after instruction (in Lesson 6, post-test). At each time point, students were directed to choose a virtual “thought buddy” in the NLP dialog and discuss their ideas with it. After they selected an avatar, their thought buddy asked them to explain their ideas (see Figure 1). When students submitted their initial explanation, the idea detection NLP model automatically detected the ideas within the explanation. The system assigned a prompt designed by researchers to elicit further reasoning about the student’s idea or to add a link with evidence. Students wrote a response to the adaptive guidance. Then, the system provided generic guidance (“What’s an idea you feel unsure about and chose not to include in your explanation?”) to prompt students to reflect on their own reasoning process and recognize any gaps or uncertainties. After students responded, the system ended the dialog by asking the student to revise their explanation. Students revised their explanations and submitted them in a separate text box.
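To make the turn structure concrete, the sketch below renders this three-turn flow in Python. It is a minimal illustration under stated assumptions: the function names, the keyword-based stand-in detector, and the `ask` callback are all hypothetical, not the system’s actual implementation (the real detector is the NLP model described in Section 4.2).

```python
GENERIC_PROMPT = ("What's an idea you feel unsure about and chose "
                  "not to include in your explanation?")

def detect_ideas(explanation: str) -> list[str]:
    """Stand-in for the NLP idea detection model (Section 4.2)."""
    keyword_map = {"photosynthesis": "3-Photo",
                   "cellular respiration": "9-AnimCellResp"}
    return [idea for kw, idea in keyword_map.items() if kw in explanation.lower()]

def run_dialog(initial_explanation: str, ask) -> dict:
    """One dialog episode: adaptive guidance, generic reflection, revision.

    `ask` shows a prompt to the student and returns their reply (a text
    box in the unit; a lambda in the demo below).
    """
    ideas = detect_ideas(initial_explanation)
    adaptive_prompt = ("Can you tell me more about how animals use the energy?"
                       if ideas else
                       "What do you think happens to the energy from the sun?")
    turn_1 = ask(adaptive_prompt)                   # round 1: adaptive guidance
    turn_2 = ask(GENERIC_PROMPT)                    # round 2: generic reflection
    revision = ask("Now revise your explanation.")  # closing request
    return {"ideas": ideas, "turns": [turn_1, turn_2], "revision": revision}

# Demo run that echoes each prompt instead of collecting real student input.
result = run_dialog("Plants use photosynthesis to make food.",
                    ask=lambda prompt: f"[student reply to: {prompt}]")
print(result["ideas"])  # ['3-Photo']
```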

4. Methods

4.1. Participants

Participants were 162 seventh-grade students, typically between 11 and 14 years old, in a public school in the western United States. Among the participants, 44% were multilingual students, 54% were male-identifying, and 90% had access to networked computers for homework. Following the within-subjects design, each participant was asked to complete the dialog at three different time points while engaging with the WISE unit.

4.2. NLP Models

In this study, we developed and applied two types of NLP models. The first model was designed for idea detection within the adaptive dialog and student explanations. The second model was used for scoring KI levels.
To develop the idea rubric and label datasets for the NLP idea detection model, we collected over 1000 student responses to the Energy Story item from five schools with demographics similar to those of the participating school. Two researchers leading the research-practice partnership identified the range of distinct ideas expressed by students in their explanations (13 ideas; see Table 1). These 13 ideas include normative and intuitive ideas students have about photosynthesis and ecosystems. The two researchers first scored and labeled ideas on 15% of the data together, discussing and refining the KI score and idea rubrics until they achieved satisfactory inter-rater reliability on both tasks (Cohen’s Kappa > 0.85). Next, each researcher individually coded half of the remaining student data, assigning a KI score to each explanation and labeling each distinct idea in it. The resulting labeled dataset was used to train an NLP idea detection model and a KI scoring NLP model.
We applied a multilabel token classification approach (Riordan et al., 2020b; Schulz et al., 2018) to develop the idea detection model. The model consists of a pretrained transformer backbone followed by a single-layer bidirectional GRU-based RNN and a final linear projection. The backbone operates at the wordpiece level, but idea tagging is performed at the token level: before the RNN component, the wordpieces that make up a token are averaged. Since ideas can overlap, a given word token can receive more than one idea category prediction. The final linear layer projects the concatenated hidden states of both RNN directions into a vector for each word, and sigmoid activation is applied to this word-level representation. During inference, all values greater than a 0.5 threshold indicate that the corresponding label was predicted for that word. The model was trained using binary cross-entropy loss. For the idea detection model, we used the SciBERT (Beltagy et al., 2019) backbone, which was pretrained, with its own vocabulary, on 1.1 million scientific papers. We fine-tuned the model by testing a variety of learning rates during a 10-fold cross-validation grid search. For hyperparameter tuning, during each cross-validation iteration, 8 folds were used for training, 1 fold was used for evaluation, and the remaining fold was ignored. For the final cross-validation evaluation (i.e., after the hyperparameters and number of epochs were selected), 9 folds were used for training and the 10th fold was used for final evaluation.
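For readers who want a concrete picture of this architecture, the following PyTorch sketch assembles the pieces named above (SciBERT backbone, single-layer BiGRU, linear projection, sigmoid). The hidden size, the pooling-matrix representation of wordpiece averaging, and all variable names are our assumptions, not the authors’ exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class IdeaDetector(nn.Module):
    def __init__(self, n_ideas: int = 13, hidden: int = 256):
        super().__init__()
        self.backbone = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
        self.rnn = nn.GRU(self.backbone.config.hidden_size, hidden,
                          num_layers=1, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_ideas)  # concat of both GRU directions

    def forward(self, input_ids, attention_mask, wp_to_token):
        # wp_to_token: (batch, n_tokens, n_wordpieces) averaging matrix that
        # pools wordpiece vectors into one vector per word token.
        wp_states = self.backbone(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        token_states = torch.bmm(wp_to_token, wp_states)  # wordpieces -> tokens
        rnn_out, _ = self.rnn(token_states)
        return torch.sigmoid(self.proj(rnn_out))  # per-token idea probabilities

# Training uses binary cross-entropy over the multilabel targets, e.g.:
#   loss = nn.BCELoss()(model(ids, mask, pool), gold_multilabels)
# At inference, probabilities > 0.5 mark a token as expressing that idea;
# because labels are independent sigmoids, one token can carry several ideas.
```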
The second model, the KI scoring model, is an updated version of the Energy Story KI scoring model validated in prior research (Holtmann et al., 2023; Riordan et al., 2020c; Ryoo & Linn, 2015). This model was applied to score students’ initial and revised explanations at the three time points across instruction. The KI scoring model was evaluated using quadratic weighted kappa (QWK); QWK for the model was 0.809, demonstrating a high level of agreement with human scores.
The idea detection model was evaluated on word-level micro-averaged precision, recall, and F1-score. The model had a word-level micro-averaged precision of 0.68 and a recall of 0.65, yielding an F1-score of 0.6783 for the eight targeted ideas, which was acceptable for deployment (cf. Schulz et al., 2018, 2019). The overall F1-score, along with per-idea-category human-machine agreement results, indicates that while the model performed well for frequent idea types, it is challenging to achieve high accuracy across all idea classes from our relatively small dataset because some idea types are rare in our annotations. The model had low accuracy in detecting the following ideas: 4-EngCreate, 5-Eng2Mat, 6-PltCellResp, and 6a-PltStore. Each of these ideas was infrequently observed (either unobserved or appearing fewer than 5 times). Therefore, in this round of dialog deployment, we did not provide guidance on these ideas.
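Both reported metrics are standard and can be reproduced with scikit-learn. The snippet below is a toy illustration with invented arrays, not the study’s data:

```python
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support

# KI scoring model: quadratic weighted kappa (QWK) between human-assigned
# and model-assigned KI scores on the five-point scale.
human_ki = [1, 2, 3, 4, 5, 3, 2, 4]
model_ki = [1, 2, 3, 4, 4, 3, 2, 5]
qwk = cohen_kappa_score(human_ki, model_ki, weights="quadratic")

# Idea detection model: word-level micro-averaged precision/recall/F1 on
# multilabel indicators (rows = word tokens, columns = idea categories).
gold = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 0]]
pred = [[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 0]]
p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="micro")
print(f"QWK = {qwk:.3f}; P = {p:.2f}, R = {r:.2f}, F1 = {f1:.4f}")
```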

4.3. Guidance Design in the NLP Dialog

The dialog contains two rounds of guidance. In the first round, adaptive guidance is tailored to the specific ideas expressed by the student; in the second round, generic guidance encourages students to reflect on their knowledge, regardless of their response. To assign the adaptive guidance (see Table 1) in the first round, we respond to one idea at a time in the student’s explanation. The adaptive guidance encourages the student to elaborate on their reasoning. If multiple ideas are detected, the dialog responds to intuitive ideas first. A typical explanation describes energy and matter transfer from the sun to plants and then from plants to animals. Therefore, if multiple normative ideas are present, we first respond to ideas about animal cellular respiration (assuming the student has already explained ideas about photosynthesis and energy transfer), then to ideas about energy transfer from plants to animals, and then to ideas about photosynthesis. For example, if a student expresses idea 3 (photosynthesis), the adaptive prompt will elicit their understanding of how the energy from photosynthesis gets to animals and prompt them to think about animal cellular respiration.
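This priority rule amounts to a short ordered lookup, sketched in Python below. The intuitive-idea codes come from labels mentioned elsewhere in the paper, while "8-EngTransfer" is an assumed placeholder for the energy transfer idea; treat the table as illustrative rather than the deployed rule set.

```python
INTUITIVE_PRIORITY = ["11-AnimDirUse", "12-AnimFood"]  # intuitive ideas first
NORMATIVE_PRIORITY = ["9-AnimCellResp",  # animal cellular respiration first
                      "8-EngTransfer",   # then energy transfer, plant -> animal
                      "3-Photo"]         # then photosynthesis

def select_target_idea(detected: set[str]) -> str | None:
    """Pick the one detected idea the round-1 adaptive prompt responds to."""
    for idea in INTUITIVE_PRIORITY + NORMATIVE_PRIORITY:
        if idea in detected:
            return idea
    return None  # nothing guided was detected: fall back to a generic prompt

# A student who mentioned photosynthesis and energy transfer (but not animal
# cellular respiration) receives the energy-transfer prompt, which nudges
# them toward explaining how animals use the energy.
print(select_target_idea({"3-Photo", "8-EngTransfer"}))  # -> "8-EngTransfer"
```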

5. Data Preprocessing and Analysis

5.1. Data Preprocessing

We first matched student IDs with their responses across the three time points (see Table 2 for details). Since students could submit their responses and revise them multiple times in class, some went back to the pre-test and resubmitted a final revision after learning the whole unit with their teacher. Therefore, to measure progression along with instruction, we exported only the first complete response that each student submitted and discarded later submissions. At the pre-test, before they learned the WISE unit, 90.1% of students started the dialog and 82.7% completed it. At the midpoint test, while they were learning the WISE unit, 80.9% of students used the dialog and 79.6% completed it. The percentage of dialog completion dropped to 62% at the post-test, after they had learned the unit. Seventy-nine students (49% of all students) completed the dialog at all three time points. The lower participation and completion rates may reflect students running out of time during class or being unavailable when the dialog took place. In terms of the revision after the dialog, students were told in class that their revision would be scored out of 5 points. Almost all students (over 98.1%) completed the revision, whether before, during, or after instruction.

5.2. Data Analysis

To track students’ conceptual change along with the instruction, we first investigated the impact of dialog engagement on student KI scores. Then, to provide a detailed examination of how consistent engagement with the NLP adaptive dialog supported knowledge integration, we analyzed the performance of the 79 students who had full dialog participation at all three time points. We first analyzed the impact of the NLP dialog on initial and revised KI scores across the three instruction time points. We chose cumulative link mixed models (CLMMs) to analyze KI scores because student explanations were measured repeatedly before and after the dialog and our KI scores were not normally distributed. We modeled two factors: dialog (before dialog as initial, after dialog as revised) and instruction (before, during, and after instruction). Then, we analyzed the impact of the NLP dialog on eliciting student ideas, including overall ideas and each specific idea. We used Generalized Estimating Equations (GEEs) with Poisson regression and repeated measures for the idea counts, and GEEs with binomial regression and repeated measures for single idea changes. We used a case study to illustrate how the dialog helped one student improve at different instruction stages. Finally, we described how the two rounds of guidance (adaptive guidance and generic reflection guidance) each helped students distinguish intuitive ideas and add new ideas.
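As a concrete illustration of the GEE setup: the paper does not report its statistical software, so the Python/statsmodels sketch below, with an invented toy dataset and column names, is only an assumption about how such a model could be specified. The CLMMs require a mixed-effects ordinal model (commonly fit with R’s ordinal::clmm), which statsmodels does not provide.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long format: one row per student x dialog phase x instruction time point.
df = pd.DataFrame({
    "student_id": [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    "phase": ["initial", "revised"] * 6,                  # before/after dialog
    "timepoint": ["pre", "pre", "mid", "mid", "post", "post"] * 2,
    "n_ideas": [1, 2, 2, 3, 3, 4, 0, 1, 1, 2, 2, 3],      # ideas per response
})

# Poisson GEE for idea counts, exchangeable within-student correlation.
gee = smf.gee("n_ideas ~ phase + timepoint", groups="student_id", data=df,
              family=sm.families.Poisson(),
              cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())  # Wald tests for the dialog and instruction effects

# Presence/absence of a single idea would swap in a binomial family:
# smf.gee("has_idea ~ phase + timepoint", groups="student_id", data=df,
#         family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
```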

6. Results

6.1. How Did the NLP Dialog Engagement Affect Student Learning?

To evaluate how dialog participation might influence revisions, we categorized students into two groups based on their engagement with the dialog at the three time points: partial dialog (N = 83) and full dialog (N = 79). The partial dialog group consisted of students who engaged with the dialog inconsistently across the three time points during instruction, while the full dialog group included students who completed all three dialogs. Since we had unequal dosing and missing data between the two groups, as well as repeated measures across three time points, we chose a cumulative link mixed model (CLMM) to analyze differences between the full and partial groups while accounting for variability and individual differences.
The cumulative link mixed model revealed significant main effects of group (full vs. partial), dialog (initial vs. revised), and time point (before vs. during vs. after instruction) on students’ knowledge integration (KI) scores. After controlling for other variables, students who had complete dialog interactions at all three time points demonstrated significantly higher KI scores than students who had inconsistent dialog engagement (β = 1.113, z = 2.54, p = 0.0112). Similarly, students received higher KI scores for their revised explanations after the dialog than for their initial explanations before the dialog (β = 1.185, z = 3.27, p = 0.0011). Additionally, students achieved higher KI scores after the instruction than before the instruction (β = 1.251, z = 2.73, p = 0.0064). A significant interaction between dialog and time point suggests that students are more likely to achieve higher KI scores when using the dialog after instruction, compared to other time points (β = 1.112, z = 1.96, p = 0.0496). A marginally significant interaction between group and time point suggests that students are more likely to receive higher KI scores after instruction when they have full engagement with the dialog (β = 1.019, z = 1.95, p = 0.051). No significant three-way interaction among group, dialog, and time point was observed. These findings highlight the value of dialog and instruction in helping students revise their explanations, and the value of full dialog engagement across instruction in fostering students’ knowledge integration.
The difference between the full and partial dialog experience groups aligns with previous research showing that students who participated fully in tutorial dialogs, whether with human tutors (M. Chi et al., 2001) or intelligent tutoring systems (VanLehn et al., 2007), outperformed peers with less or no engagement, highlighting the importance of consistent and complete dialog engagement. Building on these findings, we narrow our focus in the following analyses to the smaller sample of 79 students who had full dialog participation. This approach provides a more detailed examination of how consistent engagement with the NLP adaptive dialog supported their knowledge integration and learning outcomes.

6.2. How Did the NLP Dialog Strengthen KI Scores Along with Instruction?

Since the full dialog experience helped students make integrated revisions, we are interested in understanding how they made progress at each time point. To examine the role of the NLP adaptive dialog in strengthening KI scores in conjunction with instruction, we analyzed data from 79 students who completed the full dialog experience.
Using a cumulative link mixed model (CLMM), we modeled KI scores as the dependent variable, with dialog phase (initial, revised) and instructional time point (before, during, after) as fixed effects, plus their interaction. Compared with three alternative models, the full model, which included fixed effects for dialog and time point, their interaction, and random effects for students, demonstrated superior fit (lowest AIC = 1004.66; highest LRT χ² = 169.60). This model captured significant variance (pseudo R² = 0.33), outperforming both fixed-effect-only and random-effect-only models.
Controlling for other variables, students demonstrated significant gains in their revised explanations after the dialog compared to their initial ones (β = 0.625, z = 1.99, p = 0.046). Instruction further enhanced KI scores, shown by higher KI during (β = 0.895, z = 1.97, p = 0.048) and after (β = 2.047, z = 4.41, p < 0.001) the instructional phases. Because the interaction term in the model was significant (χ²(1) = 20.1, p < 0.001), we conducted pairwise comparisons of the estimated marginal means of the probabilities of each KI score class (Figure 3). Pairwise comparisons revealed that dialog and instruction synergistically increased the likelihood of achieving higher KI levels, particularly KI 4 and KI 5, while reducing the probabilities of KI 1 and KI 2. This underscores the efficacy of combining adaptive dialog guidance with structured instruction to facilitate deeper scientific reasoning.

6.3. How Did the NLP Dialog Elicit Ideas Along with Instruction?

To investigate the role of NLP adaptive dialog in eliciting and distinguishing ideas in conjunction with instruction, we continued analyzing data from 79 students who completed the full dialog experience.
Using Generalized Estimating Equations (GEEs), we found that students were 55% more likely to express additional ideas in their revised explanations compared to their initial explanations (Wald = 14.28, p < 0.001), after controlling for other variables. Changes in intuitive and normative ideas were not significant. Furthermore, after controlling for other variables, students were 34% more likely to express additional ideas during instruction compared to before instruction (Wald = 4.39, p = 0.036) and 40% more likely to do so after instruction compared to before instruction (Wald = 5.85, p = 0.016). The interaction between time point and dialog was not significant. Across all time points, the newly expressed ideas were mostly normative, both in initial and revised explanations (Figure 4). While initial explanations showed greater increases in ideas during instruction compared to before, the increases from during to after instruction were limited. In contrast, revised explanations, incorporating dialog, demonstrated steady increases in both total ideas and normative ideas throughout the instructional period.
Analysis of idea changes using GEEs with a binomial distribution revealed that students were significantly more likely to express the following normative ideas in their revised explanations: animal cellular respiration (5.9 times, Wald = 6.1, p = 0.0135), animals using glucose for their energy and growth (3.06 times, Wald = 6.8, p = 0.0091), reactants of photosynthesis (i.e., carbon dioxide, water, etc.) (1.85 times, Wald = 4.18, p = 0.04), energy transfer (1.69 times, Wald = 6.94, p = 0.0084), photosynthesis reaction (1.5 times, Wald = 7.48, p = 0.0062), products of photosynthesis (i.e., glucose, oxygen, etc.) (1.04 times, Wald = 4.15, p = 0.04) (see Figure 5). These significant increases, all normative, suggest that the dialog effectively elicited pre-existing knowledge pieces (diSessa, 1993) that students had not initially articulated. The adaptive guidance helped students retrieve and apply these ideas as scientific evidence.
The dialog had no significant impact on other ideas. The remaining normative ideas, although they increased after the dialog, were infrequently expressed because they represent detailed mechanisms not central to the question (“How does energy from the sun help animals survive?”), such as light energy transforming into chemical energy during photosynthesis, plants storing energy in glucose, and plant cellular respiration. Among intuitive ideas, the idea that animals eat plants just for food and not for energy decreased, suggesting a shift toward focusing on photosynthesis and energy transfer. The idea that energy becomes matter increased; this idea often occurs when students describe the details of photosynthesis (e.g., “Plants take in energy from the sun and convert it into energy molecules called glucose”). Students also talked more about animals directly using the sun’s energy, a common idea students hold about animals and the sun’s energy. Further guidance design is needed to help students distinguish this idea while they are learning photosynthesis.
Instruction played a key role in helping students add mechanisms to their responses. Students were significantly more likely to mention photosynthesis reactants (Wald = 9.83, p = 0.0017) and animal cellular respiration (Wald = 7.46, p = 0.0063) during instruction, with these effects persisting after instruction (Wald = 7.29, p = 0.0069; Wald = 8.79, p = 0.003) compared to before instruction (Figure 6). These results underscore the synergistic impact of instruction and the NLP adaptive dialog in promoting scientific mechanisms. For example, more students mentioned “CO2”, “carbon atoms”, and “H2O” as they learned more about the photosynthesis equation. Students mentioned more specific scientific terms like “cellular respiration” compared to their explanations before instruction.
In Figure 7, the case of Jian (pseudonym) illustrates a clear development in the conceptual understanding of energy and matter transformations while learning the web-based unit with the help of NLP dialogs. Before the teacher taught the unit, Jian provided a simple description of energy transfer from the sun to plants and, subsequently, to animals. Adaptive guidance elicited new normative ideas, such as the mechanism of cellular respiration and the role of glucose/food as energy for animals, although his explanation lacked detailed evidence (e.g., reactants of cellular respiration). After he interacted with the photosynthesis and cellular respiration models in the WISE unit, Jian expressed additional normative ideas learned in class, including specifics about photosynthesis reactions. Adaptive guidance further supported Jian in elaborating on the mechanisms of cellular respiration, including the transfer of energy and matter in chemical reactions. Despite this progress, Jian expressed confusion about how animals and plants obtain matter, specifically glucose, indicating a lack of a valid link between photosynthesis energy and matter transformation. After learning the unit with the teacher, Jian’s final response demonstrated improved integration of scientific evidence. He emphasized animal cellular respiration and removed vague references to photosynthesis products. While no new normative ideas were added in the final revision, adaptive guidance helped Jian refine his explanation by incorporating evidence and focusing on mechanisms.
Jian held the ideas of energy transfer and animal cellular respiration across all three time points. Through a combination of instruction and dialog, he was able to deepen his reasoning by connecting these ideas with more detailed mechanisms, such as identifying the reactants and products of photosynthesis. This progression highlights his growing ability to articulate the flow of energy and matter within the food chain.

6.4. How Did the Two Rounds of Guidance Work in the NLP Dialog?

6.4.1. Round 1: Adaptive Guidance

As we noted before, there are often multiple ideas detected in a student explanation. The NLP model detects all of the ideas and prioritizes responding to intuitive ideas first. If there are only normative ideas, these are prioritized in terms of assumptions about student understanding, addressing ideas in the following order: animal cellular respiration (assuming knowledge of photosynthesis and energy transfer), energy transfer from plants to animals, and photosynthesis. However, due to the low frequency and limited detection accuracy of certain ideas (e.g., Eng2Mat, EngCreate, PltStore, and PltCellResp), guidance for these ideas was not included. Table 3 illustrates the priority of idea detection and corresponding prompts.
To further evaluate the impact of adaptive guidance, we selected cases from the two most frequent prompts: Prompt 8 and Prompt 12 (Figure 8). Prompt 8 (“Nice thinking! You talked about energy transfer. Can you tell me more about how animals use the energy?”), intended to elicit the animal cellular respiration idea (9-AnimCellResp), instead prompted intuitive ideas not previously expressed by the student. A common misconception elicited was that animals directly use the sun’s energy for warmth and vitamins. This outcome likely stemmed from the prompt’s vagueness about the type of energy being referenced, leading students to broaden their focus to all forms of energy animals can use, instead of the energy from plants. Prompt 12 (“Interesting idea! How do plants and animals use energy from the sun differently?”), designed to elicit the idea of photosynthesis and energy transfer, effectively generated these normative ideas, encouraging students to distinguish how plants and animals utilize solar energy.

6.4.2. Round 2: Generic Reflection Guidance

The generic reflection guidance (“What’s an idea you feel unsure about and chose not to include in your answer?”) was less effective than the adaptive guidance. Before instruction, the generic guidance elicited many ideas that were non-scorable by the model (45.07% of all responses before instruction), a share that increased to 63.41% during instruction and 68.35% after instruction (Figure 9); this upward trend was statistically significant (χ²(2, 138) = 8.97, p = 0.011). There were three types of non-scorable ideas: (1) 65.01% of the non-scorable ideas were irrelevant to the learning objectives (e.g., “idk”, “nothing”, “bye”); (2) 27.7% were specific questions about photosynthesis-related terms (e.g., lack of confidence in concepts like “chloroplast”, “mitochondria”, and “ATP”), which mostly appeared during instruction; (3) 7.29% expressed confidence in their prior responses (e.g., “I feel confident about my current explanation and included everything that I know in it”), a type that mostly appeared after instruction. This trend highlights the limitations of generic guidance in fostering scorable, content-aligned responses and the need for more adaptive or targeted prompts to better scaffold student understanding.
The generic reflection guidance also elicited certain mechanistic ideas, such as reactants of photosynthesis (1-PhotoRec), animal cellular respiration (9-AnimCellResp), and energy transformation during photosynthesis (3a-PhotoChem). These surfaced only during instruction, showing that students grappled with these complex concepts while learning them. The teacher in our study noticed those confusions and addressed them in class. Previous research has shown that students engage with their ideas in meaningful ways, working through uncertainties productively (Bransford et al., 1999; Inhelder & Piaget, 1958; Smith et al., 1994). Additionally, when teachers tailor science instruction to the ideas of students in the classroom, they also encourage the incorporation of students’ personal experiences into their learning (Barton & Tan, 2009; Rosebery et al., 2016). With the assistance of our NLP adaptive dialog system, teachers can better monitor student learning and provide timely guidance as needed.
There were also ideas consistently elicited by the generic reflection guidance across three time points, such as the intuitive ideas of animals directly absorbing sunlight for warmth and vitamins (11-AnimDirUse) and animals eating plants only for nutrition (12-AnimFood). These ideas, deeply rooted in students’ personal experiences, proved to be persistent even with instruction and technology assistance. This suggests the need for further learning activities to connect them with science evidence and distinguish their prior ideas from the ideas in the curriculum (Larkin, 2012; Ryoo & Linn, 2014; Vitale et al., 2016).

7. Discussion

Effective classroom science teaching involves helping students expand on their existing ideas and distinguishing them from concepts learned in class (Larkin, 2012; Luna, 2018). Yet, it is challenging for teachers to draw out each student’s ideas and use tailored prompts to help them differentiate and make connections, particularly when there are more than 30 students (Hammer, 1995; Luna, 2018). This study highlights the impact of adaptive NLP dialog in fostering students’ scientific understanding when combined with structured instruction. First, students demonstrated significant KI gains after the dialog in their revised explanation and after instruction. Our analysis revealed the value of consistent interaction with adaptive dialogs during instruction. Results showed that students achieved higher KI scores when they had full interaction compared to when they had inconsistent interaction. The finding aligns with prior research emphasizing the importance of consistent and interactive feedback and revision in learning science. Dialogs at different instruction stages provided students with multiple opportunities for self-assessment and refinement, consistent with research on formative feedback enhancing self-regulated learning (Nicol & Macfarlane-Dick, 2006). Further research is needed to examine students’ cognitive processes during these interactions, particularly how they synthesize new evidence from instruction with their pre-existing ideas.
Moreover, there were significant improvements in KI scores in student explanations across instructional time points, with the combined effects of dialog and instruction facilitating a shift toward higher-order reasoning. This was particularly evident in the interaction between dialog phases and instructional periods, which significantly increased the likelihood of students achieving advanced KI levels (KI 4 and KI 5). These results align with prior studies emphasizing the importance of integrating structured guidance with active learning (M. T. H. Chi et al., 2018). Previous studies argued for the importance of combining direct instruction with scaffolding (Kirschner et al., 2006). The scaffolding enables learners to engage in guided reasoning, providing support until they can perform tasks independently. Gerard and Linn (2016) also found that low-scoring students who received the combination of automated and teacher guidance performed better than those who received only automated guidance. The steady progression of revised explanations throughout instructional periods underscores the importance of continuous scaffolding to deepen conceptual understanding (Hmelo-Silver et al., 2007). The instruction in this study was delivered through a highly structured, KI-informed web-based unit, which already incorporates NLP dialogs. Future research is needed to explore how teachers can effectively integrate NLP dialogs into their own teaching routines.
The dialog was instrumental in eliciting additional ideas, with students being 55% more likely to express new ideas in their revised explanations. The added ideas echoed prior research on student sense-making in photosynthesis and cellular respiration (e.g., Özay & Öztaş, 2003). Notably, instruction helped students integrate evidence, with students adding mechanistic ideas like photosynthesis reactants and cellular respiration at higher rates during and after instruction. These findings suggest that NLP-driven prompts scaffolded students’ ability to articulate pre-existing knowledge pieces (diSessa, 1993). Aligned with Vygotsky’s (1978) theory of the zone of proximal development, students can tackle challenges that lie beyond their current independent abilities when provided with appropriate guidance by a “more knowledgeable other,” such as teachers (Mestad & Kolstø, 2014) or automated feedback tools (Ferguson et al., 2022; Lee et al., 2019).
Using knowledge integration pedagogy, the adaptive guidance elicited students’ ideas, affirmed their responses, and prompted further reasoning instead of correcting them. While adaptive guidance proved effective in prompting normative ideas and distinguishing certain intuitive ideas, some intuitive ideas about energy use persisted. For example, the idea that animals directly use the sun’s energy was frequently elicited, indicating the need for refined prompts that target nuanced conceptual gaps. Generic guidance, while less effective overall, illuminated areas where students expressed uncertainty or disengagement, offering insights for future dialog designs. While this study focused on photosynthesis and cellular respiration, the methods of designing idea detection models and adaptive guidance using the KI framework could be adapted to various science topics (Bradford et al., 2023; Holtmann et al., 2023; Li et al., 2023; Linn et al., 2023). Extending the approach would require tailoring the adaptive prompts and idea rubrics to the linguistic and conceptual nuances of these topics, in collaboration with domain experts and educators.
The synergy between adaptive dialog and teacher scaffolding is essential for maximizing students’ sense-making (Gerard & Linn, 2016). To maximize the impact of dialog systems, teachers play a critical role in helping students synthesize and distinguish the ideas that emerge from interacting with these tools. For example, structured post-dialog activities, such as teacher-led whole-class discussions or small-group reflection sessions, can help students reconcile the feedback they receive with their prior knowledge (Linn & Hsi, 2000). Teachers could also use strategies like asking students to compare their responses to peers’ ideas or to evaluate how their thinking evolved during the dialog. For example, Gerard and Linn (2022) designed the Annotator tool to help students analyze a fictional peer’s work, identify gaps in its explanations, and select relevant ideas to fill those gaps during teachers’ instruction. Such strategies reinforce the value of students’ reasoning processes while providing opportunities for clarification and deeper understanding. This fosters a classroom environment where students feel their ideas are valued and where teachers are equipped to engage with every unique perspective (Barton & Tan, 2009; Rosebery et al., 2016). Further work is needed in continued partnership with teachers to design activities in which students use evidence to sort out the ideas they raised during the dialog.

8. Limitations

Despite these strengths, the idea detection model had a moderate F1-score and struggled to detect and address more nuanced scientific concepts. The intuitive ideas Eng2Mat (plants transform light energy into glucose) and EngCreate (energy is created, not transformed) were rare in our dataset because these two ideas are often combined with normative ideas of photosynthesis and energy transfer when students describe how light energy helps plants survive (e.g., “Energy comes from the sun which plants absorb and turn into food for themselves”). These misconceptions are infrequent in our dataset and are difficult for both human raters and models to identify. The mechanistic ideas PltStore (plants store energy in glucose) and PltCellResp (plants release energy from glucose for growth) are very rare because they are detailed mechanisms not central to the Energy Story item (“How does energy from the sun help animals survive?”). These ideas were underrepresented in the training data, leading to reduced accuracy. In contrast, more frequent ideas achieved higher F1-scores due to their prevalence in the dataset. To address these limitations, future work is needed to collect more student data from similar demographic groups to enhance the training data for infrequent ideas.
Additionally, while the system effectively scaffolded students’ articulation of normative ideas, misconceptions about energy use persisted. For example, the idea that animals directly use the sun’s energy was frequently elicited, suggesting a need for more teacher guidance and prompts that are more precisely tailored to target nuanced gaps in students’ understanding. Further research should explore how adaptive dialogs can be better integrated with classroom practices, particularly through iterative refinement of the idea detection model (Li et al., 2024b) and more contextualized adaptive guidance.
Moreover, this study was conducted with students from a specific demographic background using a structured, KI-informed web-based unit into which curriculum designers had already incorporated NLP dialogs. Further research is needed to evaluate how NLP adaptive dialogs perform with students from diverse cultural, linguistic, and educational settings (Holtmann et al., 2023). It would also be valuable to examine how these tools can be scaled or adapted for use in classrooms with varying levels of access to technology.

9. Conclusions

In summary, by repeatedly assessing student ideas and KI scores at three time points across instruction, this study shows the significant role that NLP adaptive dialog can play in helping students express more ideas and integrate them, particularly when combined with instruction. Students who fully engaged with the dialog at all three time points received significantly higher KI scores than those with inconsistent engagement, underscoring the benefits of consistent interaction with the dialog during instruction. Students demonstrated significant improvements in KI scores after each dialog, and the interaction between adaptive dialog and instruction made students more likely to form additional links. Two rounds of guidance in the dialog elicited additional ideas. Instruction helped students integrate evidence: students were more likely to add mechanistic ideas of photosynthesis reactants and cellular respiration during and after instruction. Moving forward, further refinement of the idea detection model and more contextually tailored guidance will be key to optimizing the impact of adaptive dialog in science education.

Funding

This research was funded by the National Science Foundation (grant number 2101669).

Institutional Review Board Statement

All procedures involving human participants were approved by the Institutional Review Board (IRB) of the University of California, Berkeley for human subjects research (IRB No. 2021-06-14389; approval date: 21 September 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study, in alignment with the IRB approval.

Data Availability Statement

The dialog design, KI rubric, and idea rubrics are available at https://wise.berkeley.edu. We cannot make our data publicly available due to our approved Institutional Review Board Human Subjects protocol for conducting research with a vulnerable population (children under the age of 18). Upon emailed request, the author will provide an explanation of the analytic syntax and discuss how to adapt it for others’ logged data.

Acknowledgments

The author gratefully acknowledges the partnership of the science teacher in this study for classroom teaching and facilitation. The author also sincerely appreciates the pre-service teachers and researchers from the TELS (Technology Enhanced Learning in Science) research team at the Berkeley School of Education who contributed to the NLP model development and guidance design iterations.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Aleven, V. A., & Koedinger, K. R. (2002). An effective metacognitive strategy: Learning by doing and explaining with a computer-based cognitive tutor. Cognitive Science, 26, 147–179. [Google Scholar]
  2. Amir, R., & Tamir, P. (1990). Detailed analysis of misconceptions as a basis for developing remedial instruction: The case of photosynthesis. Available online: https://eric.ed.gov/?id=ED319635 (accessed on 4 November 2022).
  3. Atteberry, A., Loeb, S., & Wyckoff, J. (2017). Teacher churning: Reassignment rates and implications for student achievement. Educational Evaluation and Policy Analysis, 39(1), 3–30. [Google Scholar] [CrossRef]
  4. Barker, M., & Carr, M. (1989). Teaching and learning about photosynthesis. Part 1: An assessment in terms of students’ prior knowledge. International Journal of Science Education, 11(1), 49–56. [Google Scholar] [CrossRef]
  5. Barton, A. C., & Tan, E. (2009). Funds of knowledge and discourses and hybrid space. Journal of Research in Science Teaching, 46(1), 50–73. [Google Scholar] [CrossRef]
  6. Basu, S. J., & Barton, A. C. (2007). Developing a sustained interest in science among urban minority youth. Journal of Research in Science Teaching, 44(3), 466–489. [Google Scholar] [CrossRef]
  7. Beltagy, I., Lo, K., & Cohan, A. (2019). SciBERT: A pretrained language model for scientific text. arXiv, arXiv:1903.10676. [Google Scholar] [CrossRef]
  8. Bradford, A., Li, W., Steimel, K., Riordan, B., & Linn, M. C. (2023). Adaptive dialog to support student understanding of climate change mechanism and who is most impacted. In P. Blikstein, J. Van Aalst, R. Kizito, & K. Brennan (Eds.), Proceedings of the 17th international conference of the learning sciences—ICLS 2023 (pp. 816–823). International Society of the Learning Sciences. [Google Scholar] [CrossRef]
  9. Bransford, J. D., Brown, A. L., & Cocking, R. (1999). How people learn: Brain, mind, experience, and school. National Academy Press. [Google Scholar]
  10. Brown, M. H., & Schwartz, R. S. (2009). Connecting photosynthesis and cellular respiration: Preservice teachers’ conceptions. Journal of Research in Science Teaching, 46(7), 791–812. [Google Scholar] [CrossRef]
  11. Chi, M. T. H., Adams, J., Bogusch, E. B., Bruchok, C., Kang, S., Lancaster, M., Levy, R., Li, N., McEldoon, K. L., Stump, G. S., Wylie, R., Xu, D., & Yaghmourian, D. L. (2018). Translating the ICAP theory of cognitive engagement into practice. Cognitive Science, 42(6), 1777–1832. [Google Scholar] [CrossRef]
  12. Chi, M., Siler, S., Jeong, H., Yamauchi, T., & Hausmann, R. (2001). Learning from human tutoring. Cognitive Science, 25, 471–533. [Google Scholar]
  13. Davis, E. A. (2003). Prompting middle school science students for productive reflection: Generic and directed prompts. The Journal of the Learning Sciences, 12(1), 91–142. [Google Scholar]
  14. diSessa, A. A. (1993). Toward an epistemology of physics. Cognition and Instruction, 10(2–3), 105–225. [Google Scholar] [CrossRef]
  15. diSessa, A. A., & Sherin, B. L. (1998). What changes in conceptual change? International Journal of Science Education, 20(10), 1155–1191. [Google Scholar] [CrossRef]
  16. Eisen, Y., & Stavy, R. (1993). How to make the learning of photosynthesis more relevant. International Journal of Science Education, 15(2), 117–125. [Google Scholar] [CrossRef]
  17. Ferguson, C., van den Broek, E. L., & van Oostendorp, H. (2022). AI-induced guidance: Preserving the optimal zone of proximal development. Computers and Education: Artificial Intelligence, 3, 100089. [Google Scholar] [CrossRef]
  18. Gerard, L. F., & Linn, M. C. (2016). Using automated scores of student essays to support teacher guidance in classroom inquiry. Journal of Science Teacher Education, 27(1), 111–129. [Google Scholar] [CrossRef]
  19. Gerard, L. F., Ryoo, K., McElhaney, K. W., Liu, O. L., Rafferty, A. N., & Linn, M. C. (2016). Automated guidance for student inquiry. Journal of Educational Psychology, 108(1), 60–81. [Google Scholar] [CrossRef]
  20. Gerard, L., & Linn, M. C. (2022). Computer-based guidance to support students’ revision of their science explanations. Computers & Education, 176, 104351. [Google Scholar] [CrossRef]
  21. Gerard, L., Holtman, M., Riordan, B., & Linn, M. C. (2024a). Impact of an adaptive dialog that uses natural language processing to detect students’ ideas and guide knowledge integration. Journal of Educational Psychology, 117(1), 63–87. [Google Scholar] [CrossRef]
  22. Gerard, L., Linn, M. C., & Holtmann, M. (2024b). A comparison of responsive and general guidance to promote learning in an online science dialog. Education Sciences, 14(12), 1383. [Google Scholar] [CrossRef]
  23. Gerard, L., Matuk, C., McElhaney, K., & Linn, M. C. (2015). Automated, adaptive guidance for K-12 education. Educational Research Review, 15, 41–58. [Google Scholar] [CrossRef]
  24. Gonzalez, N., Moll, L. C., & Amanti, C. (2006). Funds of knowledge: Theorizing practices in households, communities, and classrooms. Routledge. [Google Scholar]
  25. Graesser, A. C. (2016). Conversations with autotutor help students learn. International Journal of Artificial Intelligence in Education, 26(1), 124–132. [Google Scholar] [CrossRef]
  26. Graesser, A. C., Lu, S., Jackson, G. T., Mitchell, H. H., Ventura, M., Olney, A., & Louwerse, M. M. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36(2), 180–192. [Google Scholar] [CrossRef]
  27. Hammer, D. (1995). Student inquiry in a physics class discussion. Cognition and Instruction, 13(3), 401–430. [Google Scholar] [CrossRef]
  28. Haslam, F., & Treagust, D. F. (1987). Diagnosing secondary students’ misconceptions of photosynthesis and respiration in plants using a two-tier multiple choice instrument. Journal of Biological Education, 21(3), 203–211. [Google Scholar] [CrossRef]
  29. Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107. [Google Scholar] [CrossRef]
  30. Holtmann, M., Gerard, L., Li, W., Linn, M. C., & Riordan, B. (2023). How does an adaptive dialog based on natural language processing impact students from distinct language backgrounds? In P. Blikstein, J. Van Aalst, R. Kizito, & K. Brennan (Eds.), Proceedings of the 17th international conference of the learning sciences—ICLS 2023 (pp. 1350–1353). International Society of the Learning Sciences. [Google Scholar] [CrossRef]
  31. Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence; An essay on the construction of formal operational structures. Basic Books. [Google Scholar]
  32. Jurenka, I., Kunesch, M., McKee, K. R., Gillick, D., Zhu, S., Wiltberger, S., Phal, S. M., Hermann, K., Kasenberg, D., Bhoopchand, A., Anand, A., Pîslar, M., Chan, S., Wang, L., She, J., Mahmoudieh, P., Rysbek, A., Ko, W.-J., Huber, A., . . . Ibrahim, L. (2024). Towards responsible development of generative AI for education: An evaluation-driven approach. arXiv, arXiv:2407.12687. Available online: http://arxiv.org/abs/2407.12687 (accessed on 28 January 2025).
  33. Kang, H., Windschitl, M., Stroupe, D., & Thompson, J. (2016). Designing, launching, and implementing high quality learning opportunities for students that advance scientific thinking. Journal of Research in Science Teaching, 53(9), 1316–1340. [Google Scholar] [CrossRef]
  34. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75–86. [Google Scholar] [CrossRef]
  35. Kubsch, M., Krist, C., & Rosenberg, J. M. (2023). Distributing epistemic functions and tasks—A framework for augmenting human analytic power with machine learning in science education research. Journal of Research in Science Teaching, 60(2), 423–447. [Google Scholar] [CrossRef]
  36. Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42–78. [Google Scholar] [CrossRef]
  37. Larkin, D. (2012). Misconceptions about “misconceptions”: Preservice secondary science teachers’ views on the value and role of student ideas. Science Education, 96(5), 927–959. [Google Scholar] [CrossRef]
  38. Lee, H.-S., Gweon, G.-H., Lord, T., Paessel, N., Pallant, A., & Pryputniewicz, S. (2021). Machine learning-enabled automated feedback: Supporting students’ revision of scientific arguments based on data drawn from simulation. Journal of Science Education and Technology, 30(2), 168–192. [Google Scholar] [CrossRef]
  39. Lee, H.-S., Liu, O. L., & Linn, M. C. (2011). Validating measurement of knowledge integration in science using multiple-choice and explanation items. Applied Measurement in Education, 24(2), 115–136. [Google Scholar] [CrossRef]
  40. Lee, H.-S., Pallant, A., Pryputniewicz, S., Lord, T., Mulholland, M., & Liu, O. L. (2019). Automated text scoring and real-time adjustable feedback: Supporting revision of scientific arguments involving uncertainty. Science Education, 103(3), 590–622. [Google Scholar] [CrossRef]
  41. Li, W., Chang, H.-Y., Bradford, A., Gerard, L., & Linn, M. C. (2024a). Combining natural language processing with epistemic network analysis to investigate student knowledge integration within an AI Dialog. Journal of Science Education and Technology, 1–14. [Google Scholar] [CrossRef]
  42. Li, W., Liao, Y., Steimel, K., Bradford, A., Gerard, L., & Linn, M. (2024b). Teacher-informed expansion of an idea detection model for a knowledge integration assessment. In Proceedings of the eleventh ACM conference on learning @ scale (pp. 447–450). Association for Computing Machinery. [Google Scholar] [CrossRef]
  43. Li, W., Lim-Breitbart, J., Bradford, A., Linn, M. C., Riordan, B., & Steimel, K. (2023). Explaining thermodynamics: Impact of an adaptive dialog based on a natural language processing idea detection model. In P. Blikstein, J. Van Aalst, R. Kizito, & K. Brennan (Eds.), Proceedings of the 17th international conference of the learning sciences—ICLS 2023 (pp. 1306–1309). International Society of the Learning Sciences. [Google Scholar] [CrossRef]
  44. Linn, M. C., & Eylon, B.-S. (2011). Science learning and instruction: Taking advantage of technology to promote knowledge integration. Routledge. [Google Scholar]
  45. Linn, M. C., & Hsi, S. (2000). Computers, teachers, peers: Science learning partners. Lawrence Erlbaum Associates. [Google Scholar]
  46. Linn, M. C., Donnelly-Hermosillo, D., & Gerard, L. (2023). Synergies between learning technologies and learning sciences: Promoting equitable secondary school teaching. In Handbook of research on science education. Routledge. [Google Scholar]
  47. Liu, O. L., Rios, J. A., Heilman, M., Gerard, L., & Linn, M. C. (2016). Validation of automated scoring of science assessments. Journal of Research in Science Teaching, 53(2), 215–233. [Google Scholar] [CrossRef]
  48. Liu, O. L., Ryoo, K., Linn, M. C., Sato, E., & Svihla, V. (2015). Measuring knowledge integration learning of energy topics: A two-year longitudinal study. International Journal of Science Education, 37(7), 1044–1066. [Google Scholar] [CrossRef]
  49. Luna, M. J. (2018). What does it mean to notice my students’ ideas in science today?: An investigation of elementary teachers’ practice of noticing their students’ thinking in science. Cognition and Instruction, 36(4), 297–329. [Google Scholar] [CrossRef]
  50. Mestad, I., & Kolstø, S. D. (2014). Using the concept of zone of proximal development to explore the challenges of and opportunities in designing discourse activities based on practical work. Science Education, 98(6), 1054–1076. [Google Scholar] [CrossRef]
  51. Myers, M. C., & Wilson, J. (2023). Evaluating the construct validity of an automated writing evaluation system with a randomization algorithm. International Journal of Artificial Intelligence in Education, 33(3), 609–634. [Google Scholar] [CrossRef]
  52. NGSS Lead States. (2013). Next generation science standards: For states, by states. National Academies Press. Available online: http://www.nextgenscience.org/next-generation-science-standards (accessed on 10 October 2022).
  53. Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. [Google Scholar] [CrossRef]
  54. Nye, B. D., Graesser, A. C., & Hu, X. (2014). AutoTutor and family: A review of 17 years of natural language tutoring. International Journal of Artificial Intelligence in Education, 24(4), 427–469. [Google Scholar] [CrossRef]
  55. Özay, E., & Öztaş, H. (2003). Secondary students’ interpretations of photosynthesis and plant nutrition. Journal of Biological Education, 37(2), 68–70. [Google Scholar] [CrossRef]
  56. Paladines, J., & Ramirez, J. (2020). A systematic literature review of intelligent tutoring systems with dialogue in natural language. IEEE Access, 8, 164246–164267. [Google Scholar] [CrossRef]
  57. Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In Handbook of self-regulation (pp. 451–502). Academic Press. [Google Scholar] [CrossRef]
  58. Puntambekar, S., Dey, I., Gnesdilow, D., Passonneau, R. J., & Kim, C. (2023). Examining the effect of automated assessments and feedback on students’ written science explanations. International Society of the Learning Sciences. Available online: https://repository.isls.org//handle/1/10060 (accessed on 28 January 2025).
  59. Riordan, B., Bichler, S., Bradford, A., King Chen, J., Wiley, K., Gerard, L., & Linn, M. C. (2020a). An empirical investigation of neural methods for content scoring of science explanations. In Proceedings of the fifteenth workshop on innovative use of NLP for building educational applications (pp. 135–144). Association for Computational Linguistics. [Google Scholar] [CrossRef]
  60. Riordan, B., Bichler, S., Steimel, K., & Bradford, A. (2021, June 15–17). Detecting students’ emerging ideas in science explanations [Poster presentation]. 2021 National Science Foundation DRK-12 PI Meeting, Virtual. [Google Scholar] [CrossRef]
  61. Riordan, B., Cahill, A., Chen, J. K., Wiley, K., Bradford, A., Gerard, L., & Linn, M. C. (2020b, February 8). Identifying NGSS-aligned ideas in student science explanations. Workshop on Artificial Intelligence for Education, New York, NY, USA. [Google Scholar]
  62. Riordan, B., Wiley, K., King Chen, J., Bradford, A., Gerard, L., & Linn, M. C. (2020c, April 17–21). Automated scoring of science explanations for multiple NGSS dimensions and knowledge integration. 2020 American Educational Research Association (AERA) Annual Meeting, San Francisco, CA, USA. [Google Scholar]
  63. Rivera Maulucci, M. S., Brown, B. A., Grey, S. T., & Sullivan, S. (2014). Urban middle school students’ reflections on authentic science inquiry. Journal of Research in Science Teaching, 51(9), 1119–1149. [Google Scholar] [CrossRef]
  64. Rodriguez, G. M. (2013). Power and agency in education: Exploring the pedagogical dimensions of funds of knowledge. Review of Research in Education, 37, 87–120. [Google Scholar] [CrossRef]
  65. Rosebery, A. S., Warren, B., & Tucker-Raymond, E. (2016). Developing interpretive power in science teaching. Journal of Research in Science Teaching, 53(10), 1571–1600. [Google Scholar] [CrossRef]
  66. Roseman, J., Stern, L., & Koppal, M. (2009). A method for analyzing the coherence of high school biology textbooks. Journal of Research in Science Teaching, 47(1), 47–70. [Google Scholar] [CrossRef]
  67. Ryoo, K., & Linn, M. C. (2012). Can dynamic visualizations improve middle school students’ understanding of energy in photosynthesis? Journal of Research in Science Teaching, 49(2), 218–243. [Google Scholar] [CrossRef]
  68. Ryoo, K., & Linn, M. C. (2014). Designing guidance for interpreting dynamic visualizations: Generating versus reading explanations. Journal of Research in Science Teaching, 51(2), 147–174. [Google Scholar] [CrossRef]
  69. Ryoo, K., & Linn, M. C. (2015). Designing and validating assessments of complex thinking in science. Theory Into Practice, 54(3), 238–254. [Google Scholar] [CrossRef]
  70. Ryoo, K., & Linn, M. C. (2016). Designing automated guidance for concept diagrams in inquiry instruction. Journal of Research in Science Teaching, 53(7), 1003–1035. [Google Scholar] [CrossRef]
  71. Schulz, C., Eger, S., Daxenberger, J., Kahse, T., & Gurevych, I. (2018). Multi-task learning for argumentation mining in low-resource settings. arXiv, arXiv:1804.04083. [Google Scholar] [CrossRef]
  72. Schulz, C., Meyer, C. M., & Gurevych, I. (2019). Challenges in the automatic analysis of students’ diagnostic reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 01. [Google Scholar] [CrossRef]
  73. Schwartz, R., & Lederman, N. (2008). What scientists say: Scientists’ views of nature of science and relation to science context. International Journal of Science Education, 30(6), 727–771. [Google Scholar] [CrossRef]
  74. Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. [Google Scholar] [CrossRef]
  75. Simpson, M., & Arnold, B. (1982). The inappropriate use of subsumers in biology learning. European Journal of Science Education, 4(2), 173–182. [Google Scholar] [CrossRef]
  76. Smith, J. P., III, diSessa, A. A., & Roschelle, J. (1994). Misconceptions reconceived: A constructivist analysis of knowledge in transition. Journal of the Learning Sciences, 3(2), 115–163. [Google Scholar] [CrossRef]
  77. VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31(1), 3–62. [Google Scholar] [CrossRef]
  78. Vitale, J. M., McBride, E., & Linn, M. C. (2016). Distinguishing complex ideas about climate change: Knowledge integration vs. specific guidance. International Journal of Science Education, 38(9), 1548–1569. [Google Scholar] [CrossRef]
  79. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. [Google Scholar]
  80. Walker, E., Rummel, N., & Koedinger, K. R. (2011). Using automated dialog analysis to assess peer tutoring and trigger effective support. In S. B. G. Biswas, & A. M. J. Kay (Eds.), Proceedings of the 10th international conference on artificial intelligence in education (pp. 385–393). Springer. [Google Scholar]
  81. Wiley, K., Bradford, A., & Linn, M. C. (2019). Supporting collaborative curriculum customizations using the knowledge integration framework. Computer-Supported Collaborative Learning, 1, 480–487. Available online: https://par.nsf.gov/biblio/10106811-supporting-collaborative-curriculum-customizations-using-knowledge-integration-framework (accessed on 3 January 2024).
  82. Wollny, S., Schneider, J., Di Mitri, D., Weidlich, J., Rittberger, M., & Drachsler, H. (2021). Are we there yet?—A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4, 654924. [Google Scholar] [CrossRef] [PubMed]
  83. Zhai, X., He, P., & Krajcik, J. (2022). Applying machine learning to automatically assess scientific models. Journal of Research in Science Teaching, 59(10), 1765–1794. [Google Scholar] [CrossRef]
  84. Zhai, X., Krajcik, J., & Pellegrino, J. W. (2021). On the validity of machine learning-based next generation science assessments: A validity inferential network. Journal of Science Education and Technology, 30(2), 298–312. [Google Scholar] [CrossRef]
  85. Zhai, X., Yin, Y., Pellegrino, J. W., Haudek, K. C., & Shi, L. (2020). Applying machine learning in science assessment: A systematic review. Studies in Science Education, 56(1), 111–151. [Google Scholar] [CrossRef]
  86. Zhu, M., Liu, O. L., & Lee, H.-S. (2020). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Computers & Education, 143, 103668. [Google Scholar] [CrossRef]
Figure 1. NLP adaptive dialog between Ada (the thought buddy) and K (pseudonym of a 7th-grade student).
Figure 2. The curriculum and research design.
Figure 3. The predicted probability of each KI score by dialog (initial explanation before dialog, revised explanation after dialog) and time point (before, during, and after instruction).
Figure 4. A bar graph of normative and intuitive ideas across three time points.
Figure 5. Mean idea change from initial to revised explanation at three time points. (Note: * indicates that students expressed significantly more of the marked idea (p < 0.05) in their revised explanation than in their initial one, after controlling for other variables; for example, PhotoRec* indicates that students were 1.85 times more likely to include the idea of photosynthesis reactants (Wald = 4.18, p = 0.04) in their revised explanation. ** indicates significance at p < 0.01.)
Figure 6. Idea presence probability at three time points.
Figure 7. Science explanation revision progress with the combined scaffolding of adaptive NLP dialog and instruction.
Figure 8. Ideas elicited by Prompt 8 and Prompt 12.
Figure 9. Ideas elicited by generic guidance.
Table 1. KI scoring rubric, idea rubric, and adaptive guidance for the NLP dialog.

KI | Description | Ideas and Descriptions | Adaptive Guidance
1 | Irrelevant/off-topic | Off-topic ideas (e.g., “I don’t know.”) | Can you tell me more about this idea or another one in your explanation? I am still learning about student ideas to become a better thought partner.
2 | No link: incomplete, vague, or inaccurate ideas | 4-EngCreate: Energy is created, not transformed/transferred (e.g., chemical energy is created by plants) | Cannot be accurately detected.
2 | | 5-Eng2Mat: Plants transform (convert/change/turn) light energy into glucose/sugar/food OR turn/transform glucose into energy | Cannot be accurately detected.
2 | | 11-AnimDirUse: Animals directly use the Sun’s energy for vitamins and keeping warm | Interesting idea! Can animals live without sunlight? How does the animal use energy from the sun?
2 | | 12-AnimFood: Animals eat plants [focused on animals eating plants as food, NOT on energy, e.g., the sun grows plants, which are an energy source for animals] | Interesting idea! How do plants and animals use energy from the sun differently?
3 | Partial link: accurate idea(s), but isolated (conclusion only, or explanation/evidence only) | 1-PhotoRec: CO2, H2O, or both as reactants of photosynthesis | Nice thinking. You mentioned the inputs of photosynthesis. What are the outputs of this process?
3 | | 2-PhotoProd: glucose [or sugar or food] or oxygen as a product | Interesting idea about sugar as a product in this process. How are the products of photosynthesis useful for animals?
3 | | 3-Photo: plant uses energy from the sun to do photosynthesis | Nice thinking of photosynthesis. How are the products of photosynthesis useful for animals?
3 | | 3a-PhotoChem: Energy from the sun transforms into another type of energy [kinetic/chemical/usable] during photosynthesis | Interesting idea about how plants transform light energy to usable energy. How does the energy get to animals?
3 | | 6a-PltStore: Plants store energy in glucose | Cannot be accurately detected.
3 | | 6-PltCellResp: Plant releases energy from glucose/food for growth, energy, repair, or seed production | Cannot be accurately detected.
3 | | 8-EngTrans: Energy from the sun gets to animals when they eat plants | Nice thinking! You talked about energy transfer. Can you tell me more about how animals use the energy?
3 | | 9-AnimCellResp: Animal uses cellular respiration to release energy | Nice thinking! Can you tell me more about how the animal releases the energy they get from the plant?
3 | | 10-AnimGrw: Animal uses glucose/food for energy, repair, growth, or movement | Cannot be accurately detected.
4 | Single link: one scientifically complete and valid connection between ideas at KI level 3 | |
5 | Multiple links: two or more scientifically complete and valid connections between ideas at KI level 3 | |
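Viewed as a data structure, Table 1 amounts to a mapping from detected idea labels to KI levels and guidance prompts. Below is a minimal sketch of that mapping in Python, abridged to a few rows of the table; the class and field names are hypothetical, and this is not the study’s production code.

```python
# Sketch of the idea-to-guidance mapping implied by Table 1 (abridged).
# Prompt texts come from the table; detectable=False marks ideas the
# NLP model could not reliably identify. Names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class IdeaGuidance:
    ki_level: int      # KI level of the idea (2 = no link, 3 = partial link)
    detectable: bool   # whether the NLP model can reliably detect the idea
    prompt: str        # adaptive guidance shown when the idea is detected

GUIDANCE = {
    "11-AnimDirUse": IdeaGuidance(2, True,
        "Interesting idea! Can animals live without sunlight? "
        "How does the animal use energy from the sun?"),
    "1-PhotoRec": IdeaGuidance(3, True,
        "Nice thinking. You mentioned the inputs of photosynthesis. "
        "What are the outputs of this process?"),
    "8-EngTrans": IdeaGuidance(3, True,
        "Nice thinking! You talked about energy transfer. "
        "Can you tell me more about how animals use the energy?"),
    "6a-PltStore": IdeaGuidance(3, False, ""),  # cannot be accurately detected
}

# Example lookup for a detected idea label:
print(GUIDANCE["1-PhotoRec"].prompt)
```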
Table 2. Data preprocessing and cleaning.

Activity | Total N | Participated N 1 | Completed N 2 | Completed at All Three Time Points
Pre-test dialog | 162 | 146 | 134 | 79
Pre-test revision | 162 | 162 | 162 |
Midpoint test dialog | 162 | 131 | 129 |
Midpoint test revision | 162 | 162 | 160 |
Post-test dialog | 162 | 116 | 116 |
Post-test revision | 162 | 162 | 159 |
Note: 1 Students who participated had at least one word of input. 2 Students who completed the dialog had at least one word of input in both rounds of dialog and had idea labels and KI scores assigned by the NLP models. Students who did not complete the dialog either left the dialog in the middle or experienced technical issues that prevented their responses from being detected.
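As a rough illustration, the participation and completion criteria in this note can be expressed as filters over logged dialog data. The pandas sketch below assumes hypothetical column names (round1_text, round2_text, ki_score); it mirrors the stated criteria rather than the study’s actual preprocessing script.

```python
# Sketch of the Table 2 cleaning criteria; column names are hypothetical.
import pandas as pd

log = pd.DataFrame({
    "student_id": [1, 2, 3],
    "round1_text": ["Plants absorb sunlight", "", "energy moves to animals"],
    "round2_text": ["and make glucose", "", ""],
    "ki_score": [3.0, None, None],  # None: NLP models assigned no score
})

def has_input(column: pd.Series) -> pd.Series:
    # "Participated" requires at least one word of input.
    return column.fillna("").str.strip().str.len() > 0

participated = has_input(log["round1_text"]) | has_input(log["round2_text"])
# "Completed" requires input in both rounds plus an NLP-assigned KI score.
completed = (has_input(log["round1_text"])
             & has_input(log["round2_text"])
             & log["ki_score"].notna())

print("participated:", log.loc[participated, "student_id"].tolist())  # [1, 3]
print("completed:", log.loc[completed, "student_id"].tolist())        # [1]
```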
Table 3. Adaptive guidance frequency across three time points.

Guidance Is Assigned for | Before Instruction | During Instruction | After Instruction
Priority 1: Interesting idea! Can animals live without sunlight? How does the animal use energy from the sun? (Prompt 11, for idea 11-AnimDirUse) | 7 | 9 | 8
Priority 2: Interesting idea! How do plants and animals use energy from the sun differently? (Prompt 12, for idea 12-AnimFood) 1 | 16 | 9 | 14
Priority 3: Nice thinking! Can you tell me more about how animals release the energy they get from the plant? (Prompt 9, for idea 9-AnimCellResp) | 2 | 10 | 13
Priority 4: Nice thinking! You talked about energy transfer. Can you tell me more about how animals use the energy? (Prompt 8, for idea 8-EngTrans) | 35 | 25 | 20
Priority 5: Interesting idea about how plants transform the light energy to the energy they can use. How does the energy get to animals? (Prompt 3a, for idea 3a-PhotoChem) | 1 | 3 | 3
Priority 6: Nice thinking of photosynthesis. Why are the products of photosynthesis useful for animals? (Prompt 3, for idea 3-Photo) | 0 | 3 | 2
Priority 7: Can you tell me more about this idea or another one in your explanation? I am still learning about student ideas to become a better thought partner. (Prompt Non, for Non-scorable ideas) | 3 | 4 | 3
Note: 1 Prompts in bold were the most frequently assigned prompts at the three time points.
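Table 3 implies a selection rule: when a response contains several detected ideas, the dialog assigns the guidance for the highest-priority idea, falling back to the generic prompt for non-scorable responses. The following sketch captures that rule under stated assumptions; the tie-breaking and fallback details are inferred from the table, not taken from the study’s production system.

```python
# Sketch of priority-based guidance assignment implied by Table 3.
# The ordering follows the table's Priority 1-6 rows; Prompt Non is the
# fallback. Function and variable names are hypothetical.
PRIORITY_ORDER = ["11-AnimDirUse", "12-AnimFood", "9-AnimCellResp",
                  "8-EngTrans", "3a-PhotoChem", "3-Photo"]

GENERIC_PROMPT = ("Can you tell me more about this idea or another one in "
                  "your explanation? I am still learning about student "
                  "ideas to become a better thought partner.")

def assign_guidance(detected: set, prompts: dict) -> str:
    """Return the prompt for the highest-priority detected idea."""
    for idea in PRIORITY_ORDER:
        if idea in detected and idea in prompts:
            return prompts[idea]
    return GENERIC_PROMPT  # non-scorable or undetectable ideas

prompts = {
    "12-AnimFood": "Interesting idea! How do plants and animals use "
                   "energy from the sun differently?",
    "8-EngTrans": "Nice thinking! You talked about energy transfer. "
                  "Can you tell me more about how animals use the energy?",
}

# A response expressing both EngTrans and AnimFood receives the AnimFood
# prompt, because the intuitive animal-focused idea ranks higher.
print(assign_guidance({"8-EngTrans", "12-AnimFood"}, prompts))
```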
