Article

Fostering Student Engagement and Learning Perception Through Socratic Dialogue with ChatGPT: A Case Study in Physics Education

by Ayax Santos-Guevara, Osvaldo Aquines-Gutiérrez *, Humberto Martínez-Huerta, Wendy Xiomara Chavarría-Garza and José Antonio Azuela
Department of Physics and Mathematics, Universidad de Monterrey, Avenida Morones Prieto 4500, San Pedro Garza García 66238, NL, Mexico
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 184; https://doi.org/10.3390/educsci16020184
Submission received: 26 November 2025 / Revised: 15 January 2026 / Accepted: 20 January 2026 / Published: 24 January 2026

Abstract

This classroom-based case study examines how an AI-mediated Socratic dialogue, implemented through ChatGPT, can support students’ engagement and perceived learning in undergraduate thermodynamics. Conducted in a first-year engineering physics course at a private university in northern Mexico, the activity invited small student groups to interact with structured prompts designed to promote inquiry, collaboration, and reflective reasoning about the adiabatic process. Rather than functioning as a source of answers, ChatGPT was intentionally positioned as a mediating scaffold for Socratic questioning, prompting students to articulate, examine, and refine their reasoning. A mixed-methods approach was employed, combining a 10-item Likert-scale survey with construct-level statistical analysis of two focal dimensions: perception of learning and engagement, including an exploratory comparison by gender. Results indicated consistently high levels of perceived learning and engagement across the cohort, with average scores above 4.5 out of 5. At the construct level, no statistically significant gender differences were observed, although a single item revealed higher perceived learning among female students. Overall, the findings suggest that the educational value of ChatGPT in this context emerged from its integration within a Socratic, inquiry-oriented pedagogical design, rather than from the technology alone. These results contribute to ongoing discussions on the responsible and pedagogically grounded integration of generative AI in physics education and align with Sustainable Development Goal 4 (Quality Education).

1. Introduction

The use of artificial intelligence (AI) in education is expanding the horizons of instructional design and student engagement. Particularly in STEM areas, AI technologies help identify obstacles in learning and deliver timely, customized feedback (Kamalov et al., 2023; Qusef et al., 2025; Wollny et al., 2021). These innovations are particularly relevant for promoting inclusive and high-quality learning experiences, in line with Sustainable Development Goal 4 (SDG 4) (UNESCO, 2022).
Among recent developments, generative AI models such as ChatGPT offer new possibilities for dynamic and conversational interactions that support inquiry, reflection, and critical thinking (Adıgüzel et al., 2023; Gervacio, 2024; Liu et al., 2025; Pozdniakov et al., 2024; Tang & Zhang, 2023; Wan & Chen, 2024). However, most existing applications of ChatGPT in education focus on content delivery or individual question-answering, with limited attention to collaborative, conceptually driven dialogue embedded in classroom practice (Kamalov et al., 2023; Lo, 2023; Manprisio, 2024; Pozdniakov et al., 2024).
Physics education faces persistent challenges in fostering deep understanding, especially in domains involving abstract processes such as energy transformations (Neumann et al., 2013). The adiabatic process is one such topic: students often hold strong misconceptions, confusing it with isothermal transformations or failing to link work to changes in internal energy (Loverude et al., 2002).
Predominantly lecture-based instruction and procedurally focused problem sets are often insufficient to promote conceptual coherence, particularly when phenomena are difficult to visualize or experiment with in class (McDermott, 1999; Meltzer, 2012; Redish, 1999). Inquiry-oriented pedagogical approaches, such as Socratic dialogue, have been shown to support deeper reasoning by encouraging students to articulate, question, and refine their ideas.
To address these issues, this study explores the use of ChatGPT as a Socratic tutor: not as a content-delivery tool, but as an AI-mediated facilitator that operationalizes Socratic dialogue through guided questioning and dialogic inquiry. The activity focused on helping students collaboratively make sense of adiabatic processes through structured prompts and small-group interaction. In this design, the pedagogical logic is provided by the Socratic method, while ChatGPT functions as a mediating scaffold that supports consistent and responsive questioning within the classroom context.
Our aim was to investigate students’ perceptions of learning and engagement during this dialogic learning experience, with particular attention to potential gender-related differences. As a classroom-based case study, this work does not seek to measure objective learning gains or conceptual mastery, but rather to document how students experience and interpret the integration of generative AI within an inquiry-based physics activity. By shedding light on students’ experiences, this study offers replicable design insights to inform institutional strategies for the thoughtful and pedagogically grounded integration of AI tools in higher education.

2. Materials and Methods

2.1. Research Design

This study used a classroom-based qualitative case study methodology with complementary quantitative analysis to explore how generative AI, specifically ChatGPT (OpenAI, 2025), can support students’ perceived learning and engagement during collaborative inquiry into adiabatic processes. The intervention was implemented in a first-year engineering physics course at a private university in Mexico, with 53 students participating in a guided, small-group activity mediated by ChatGPT. The pedagogical design was grounded in the Socratic method as an inquiry-oriented instructional strategy. Students worked in small teams of two to three, interacting with ChatGPT via a series of structured prompts designed to encourage guided questioning, articulation of reasoning, and metacognitive reflection, rather than direct answer provision. The activity was designed to stimulate reflection on the relationships among work, heat, pressure, volume, and internal energy under adiabatic conditions. In this design, the Socratic method provides the pedagogical framework, while ChatGPT functions as a mediating technological scaffold that operationalizes dialogic inquiry in a consistent and responsive manner within the classroom context. The study does not seek to isolate the effects of AI or pedagogy independently, but rather to examine their intentional integration in a real instructional setting.
The study was guided by the following research questions, which explore students’ perceptions and experiences during an AI-mediated, Socratic-style learning activity. Consistent with the exploratory nature of a case study, the questions focus on perceived learning and engagement rather than objective learning gains. Rather than assuming positive outcomes, they are designed to investigate how students interacted with ChatGPT, how they engaged with the activity, and whether any gender-related patterns emerged:
  • (RQ. 1) How do students perceive the use of ChatGPT as a Socratic tutor in a physics course?
  • (RQ. 2) What impact does the use of ChatGPT have on student engagement during class activities?
  • (RQ. 3) Can the use of ChatGPT as a Socratic tutor support more equitable learning experiences across genders in physics education?

2.2. Participants

The study involved 53 undergraduate students enrolled in a physics course at a private university in northern Mexico. Participants came from seven different engineering programs and all shared a common foundational physics curriculum, taking part in the activity during a regular in-class session. The distribution of participants by academic program is summarized in Table 1.

2.3. Procedure

Students were organized into small groups of two or three to engage collaboratively with ChatGPT through a structured prompt designed to elicit conceptual reasoning about adiabatic processes. The scenario introduced gas compression and required students to consider whether heat is transferred and which factors influence changes in internal energy and temperature.
The prompt instructed the AI to adopt a Socratic role: “Act like a Socratic instructor in thermodynamics. I am trying to understand the adiabatic process. Don’t give me direct answers—guide me with questions that help me discover the concepts for myself.” Prior to the activity, students received a brief orientation on effective interaction strategies with the AI. The objective was not to obtain correct answers, but to engage in dialogic reasoning, hypothesis generation, and self-reflection.
Students were asked to deliberate within their groups and reach consensus before submitting responses to ChatGPT. This collaborative dimension ensured that the interaction transcended individual dialogue, fostering negotiation of meaning and collective knowledge construction. ChatGPT, framed as a Socratic tutor, provided cognitive scaffolding by posing guiding questions, challenging assumptions, and prompting elaboration. Rather than delivering content, the AI supported students in identifying inconsistencies in their reasoning and refining their conceptual explanations, thereby supporting students’ regulation of their own learning.
The instructor acted as a facilitator, ensuring productive engagement with both peers and the AI by monitoring group progress, encouraging equitable participation, and supporting students in interpreting and critically evaluating AI-generated prompts. The instructor did not provide direct conceptual answers during the group–ChatGPT interactions; instead, facilitation focused on maintaining productive dialogue, for example by prompting students to justify their claims, encouraging quieter group members to contribute, and reminding groups to critically assess AI outputs. When AI responses were unclear or off-target, students were encouraged to rephrase prompts and cross-check their reasoning with course principles. The session was structured into brief phases (orientation, small-group interaction with ChatGPT, and whole-class consolidation), and concluded with a whole-class discussion aimed at situating individual and group discoveries within the broader conceptual framework of thermodynamics, thereby supporting replication in similar course settings.
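To support the replication in similar course settings that the authors mention, the classroom procedure can also be scripted. Below is a minimal sketch, not the authors’ implementation (the in-class activity used the ChatGPT web interface): it wraps the exact Socratic prompt from this section in a chat-message structure for the OpenAI Python SDK. The model name `gpt-4o` and the helper functions `build_messages`/`ask` are assumptions for illustration only.

```python
# Sketch: scripting the Socratic-tutor prompt against the OpenAI chat API.
# The prompt string is quoted verbatim from the Procedure; the model name
# "gpt-4o" and both helpers are hypothetical conveniences for this sketch.

SOCRATIC_PROMPT = (
    "Act like a Socratic instructor in thermodynamics. I am trying to "
    "understand the adiabatic process. Don't give me direct answers—guide "
    "me with questions that help me discover the concepts for myself."
)

def build_messages(student_turn, history=()):
    """Prepend the Socratic instruction, replay prior turns, add the new one."""
    return [{"role": "user", "content": SOCRATIC_PROMPT},
            *history,
            {"role": "user", "content": student_turn}]

def ask(student_turn, history=(), model="gpt-4o"):
    """One dialogue turn; requires OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model, messages=build_messages(student_turn, history))
    return resp.choices[0].message.content

# Offline demo of the message structure a group's first turn would produce:
demo = build_messages("If I compress the gas quickly, does its temperature change?")
```

Calling `ask(...)` performs the actual network request; `build_messages` can be inspected offline, which is how a course team might unit-test the prompt scaffold before class.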

2.4. Instruments and Data Collection

A mixed-methods approach was used to collect data, combining quantitative and qualitative strategies to provide a comprehensive understanding of the learning experience. Specifically, after completing the Socratic tutor activity, participants responded to a 10-item Likert-scale survey designed to measure perception of learning and engagement. Although items refer to learning, confidence, and participation, all responses are interpreted as self-reported perceptions rather than objective measures of conceptual understanding. The quantitative analysis focused on identifying overall trends and potential gender-related differences at the construct level, while open-ended reflections provided complementary qualitative insights into how students experienced the dialogic interaction. This combination of approaches ensured that the study captured not only measurable patterns but also students’ subjective perspectives, thereby strengthening the interpretive depth of the findings.

Surveys and Qualitative Data

The instruments used to explore the students’ experiences and perceptions were:
  • Learning Perception Survey: A 10-item, 5-point Likert-type questionnaire adapted from Diemer et al. (2012). The survey items were organized into two dimensions: perception of learning (Q1–Q6) and engagement (Q7–Q10).
    Q1. The ChatGPT activity helped me apply course content to solve problems.
    Q2. The ChatGPT activity helped me learn the course content.
    Q3. The ChatGPT activity helped me connect ideas in new ways.
    Q4. The ChatGPT activity helped me participate in the course activity in ways that enhanced my learning.
    Q5. The ChatGPT activity helped me develop confidence in the subject area.
    Q6. The ChatGPT activity helped me develop skills that apply to my academic career and/or professional life.
    Q7. The ChatGPT activities motivated me to learn the course material more than class activities that did not use the ChatGPT.
    Q8. I participated more in class during the ChatGPT activities than during activities that did not use the ChatGPT.
    Q9. My attention to the task(s) was greater using the ChatGPT activities.
    Q10. It was easier to work in a group using the ChatGPT than in other group activities.
    The two dimensions were analyzed separately in order to distinguish how students perceived the learning outcomes from how they experienced engagement during the activities.
  • Qualitative Google Form: This was a digital form with four open-ended questions designed to explore students’ perceptions of the experience. This form was completed voluntarily and anonymously at the end of the activity. Qualitative responses were analyzed thematically and interpreted in relation to the two focal constructs: perception of learning and engagement. The questions were as follows:
    • How would you describe your overall experience of interacting with ChatGPT during the adiabatic process activity?
    • What differences did you notice between this activity and others you have done in previous physics classes?
    • Did you find that you were more attentive or focused during this activity, and what helped or hindered that?
    • Would you recommend this strategy to other courses or subjects? What would you change?
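The construct-level analysis described above (composite Perception-of-Learning and Engagement scores, compared by gender) can be sketched in code. This is a minimal illustration, not the authors’ analysis script: the item keys, the toy responses, and the composite score lists are invented, and the p-value step is omitted to keep the sketch dependency-free. The gender comparison uses Welch’s t statistic, a common choice for unequal group sizes and variances; the paper does not specify which test was used.

```python
from statistics import mean, stdev

# Hypothetical item keys for the two survey dimensions (Q1-Q6, Q7-Q10).
LEARNING_ITEMS = ("q1", "q2", "q3", "q4", "q5", "q6")
ENGAGEMENT_ITEMS = ("q7", "q8", "q9", "q10")

def composite(response, items):
    """Average one respondent's Likert ratings over a construct's items."""
    return sum(response[i] for i in items) / len(items)

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented composite Perception-of-Learning scores for two groups:
female = [4.8, 5.0, 4.7, 4.9, 5.0]
male = [4.5, 4.8, 4.3, 4.7, 4.6, 4.4]
t_stat, df = welch_t(female, male)
```

The corresponding p-value would come from the t distribution with `df` degrees of freedom (e.g., `scipy.stats.t.sf`), omitted here so the sketch runs on the standard library alone.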

3. Results

Table 2 summarizes descriptive statistics for the full sample (N = 53). Overall, mean ratings were high across all items, particularly for Perception of Learning (Q1–Q6). By contrast, Engagement (Q7–Q10) showed comparatively lower mean values and greater variability, suggesting that engagement was more heterogeneous and context-sensitive than perceived learning in this activity.

3.1. Perception of Learning

Regarding the perception of learning, as shown in Table 3, both female and male students reported consistently high scores. Female students’ means ranged from 4.79 to 5.00, with minimal variability in some items (e.g., Q2 = 5.00, SD = 0.00). Male students showed means between 4.49 and 4.74, with moderate variability (SD between 0.44 and 0.68). These results suggest that, overall, students perceived the ChatGPT activity as highly supportive of their learning, contributing to content understanding, skill development, and confidence in the subject area. Gender differences were minimal, although female students tended to report slightly higher ratings across most items.
When considered as a composite construct (Q1–Q6), Perception of Learning showed high mean values for both female and male students, reflecting a strong and consistent perception that the ChatGPT-mediated Socratic dialogue supported meaningful learning. Item-level variation was minimal, and gender differences were small, reinforcing the interpretation that perceived learning benefits were broadly shared across the sample.

3.2. Engagement

With respect to engagement, as shown in Table 3, scores were moderately lower and more variable than in the learning dimension. Among female students, means ranged from 4.07 to 4.50 (SD between 0.76 and 1.21), whereas male students’ means ranged from 3.59 to 4.31 (SD between 0.83 and 1.16). Items related to sustained attention (Q9) and group work (Q10) yielded lower ratings, particularly among male students, indicating a more heterogeneous perception of engagement. Nevertheless, average engagement scores for both groups remained above the scale midpoint, reflecting an overall positive evaluation of the activity in terms of participation and motivation.
When analyzed at the construct level using the composite Engagement score (Q7–Q10), both female and male students reported average values above the scale midpoint (Table 3), and no statistically significant gender differences were observed, suggesting that the AI-mediated Socratic activity supported broadly comparable levels of engagement across genders despite item-level variability. Taken together, these findings indicate that while engagement was perceived as slightly more variable and context-sensitive than learning, the overall pattern supports the conclusion that ChatGPT-mediated Socratic dialogue can foster sustained student involvement and motivation in physics learning activities across diverse student groups.

3.3. Item-Level Gender Comparisons

Survey results revealed generally high ratings across all items, with mean values close to or above 4.5 for both female and male students. As illustrated in Figure 1, female students consistently rated the activity slightly higher than male students across most items. At the item level, the most pronounced difference was observed in Q2 (perceived learning), where the mean score for female students reached 5.00, compared to 4.67 for male students.
Figure 1 displays the mean scores and standard errors (Mean ± SE) for each survey item (Q1–Q10), disaggregated by gender. The overlap of error bars across most items suggests that differences were generally not statistically significant. This visual pattern supports the interpretation that perceptions of learning and engagement were broadly similar across genders, with the exception of a single item.
Statistical comparisons presented in Table 4 confirmed this observation. With the exception of Q2, no statistically significant gender differences were detected. Female students reported significantly higher perceived learning in Q2 (“The ChatGPT activity helped me learn the course content”), with p = 0.0287, while all remaining item-level comparisons were not statistically significant. Minor trends favoring female students were observed in items related to collaboration and conceptual understanding (Q1, Q4, Q7–Q9), whereas male students reported marginally higher scores in Q10; however, these differences did not reach statistical significance.
Given the exploratory nature of this classroom-based case study, item-level results are interpreted as descriptive trends rather than confirmatory evidence. Overall, the findings indicate that ChatGPT-mediated Socratic dialogue was perceived positively by both female and male students, supporting generally equitable perceptions of learning and engagement, with only isolated item-level differences observed.
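One reason item-level p-values are best read as descriptive can be made concrete with a Bonferroni adjustment, a standard (if conservative) correction for multiple comparisons. In this sketch only the Q2 p-value (0.0287) comes from the paper; the function and the ten-test count applied to it are illustrative, not the authors’ analysis.

```python
# Bonferroni correction: multiply each raw p-value by the number of tests,
# capping at 1.0. With 10 item-level comparisons (Q1-Q10), an isolated raw
# p-value of 0.0287 adjusts to 0.287, above the conventional 0.05 threshold.

def bonferroni(p_values):
    """Bonferroni-adjust a list of raw p-values."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

N_ITEMS = 10                                # Q1-Q10 -> 10 comparisons
q2_adjusted = min(1.0, 0.0287 * N_ITEMS)    # 0.287 after adjustment
```

This is consistent with the paper’s choice to interpret construct-level composites inferentially and item-level results only as trends.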

3.4. Qualitative Google Form

The qualitative survey provided complementary insights into students’ experiences. Across responses, students consistently described the activity as novel, interactive, and cognitively demanding, particularly in comparison with traditional lecture-based instruction. Many students emphasized the importance of being required to think and articulate reasoning, rather than receiving direct answers. Representative comments included:
“I realized I had misunderstood how work affects internal energy in adiabatic processes. ChatGPT didn’t tell me the answer, but kept asking until I figured it out.”
“It was different because I had to think. ChatGPT asked questions that made me reconsider what I thought I knew. I was more alert.”
“I learned that for the process to be close to adiabatic, there must be no leakage to increase the pressure, which would raise the temperature. The piston should move fast enough so that there is no temperature exchange, and that the difference in internal energy for an adiabatic process is directly related to the work done on the system.”
Students also reported that the activity helped them clarify conceptual relationships, particularly between work, internal energy, and temperature in adiabatic processes. These qualitative responses align with the high quantitative ratings for perceived learning. Regarding engagement, several students noted that the conversational and dialogic nature of the interaction helped sustain attention, although a minority mentioned occasional distractions related to prompt formulation or interpretation of AI responses. Most participants indicated that they would recommend this strategy for other courses or subjects, particularly if complemented by additional instructor-guided discussion.

3.5. Facilitators and Obstacles of the Socratic Style

Students identified several key aspects of the ChatGPT-mediated Socratic method that enhanced their learning:
  • Progressive questioning: The chained structure of the questions helped guide reasoning in incremental steps, which students found helpful for unpacking complex concepts.
  • Cognitive pause: The written format allowed students to take time before responding, fostering more deliberate thinking.
  • Immediate feedback: The interaction often led students to revise their ideas based on new insights triggered by the next question or comment.
These facilitators align with established principles of Socratic dialogue and inquiry-based learning, particularly scaffolding, clarification, and self-correction. Challenges were also identified. Approximately 18% of students reported moments where the AI misinterpreted their responses, disrupting conversational flow. Others found some questions too vague or repetitive, especially when prior knowledge was stronger. Despite these limitations, the overall perception of the Socratic style remained strongly positive, and a majority of students recommended the approach for use in other courses and disciplines.

4. Discussion

This study explored the potential of using ChatGPT as a Socratic tutor to support students in reasoning about the adiabatic process. The findings indicate that AI-mediated Socratic dialogue was perceived by students as supportive of their learning and engagement, and that it fostered agency and metacognitive involvement during collaborative inquiry. By encouraging students to move beyond procedural problem-solving, the AI prompted them to recognize and correct misconceptions—such as confusing isothermal and adiabatic transformations—and to develop more coherent mental models of the adiabatic process. This iterative, reflective interaction was reported by students as promoting deeper reasoning, aligning with expert-like problem-solving practices and contributing to a more meaningful and self-regulated learning experience.
These outcomes align with educational goals under SDG 4, emphasizing critical thinking and deeper understanding through innovative methods (AlAli et al., 2023; Kioupi & Voulvoulis, 2019). While the use of ChatGPT revealed some limitations, such as occasional misinterpretations by the AI, the overall findings position generative AI not merely as a content-delivery tool but as a mediating scaffold for pedagogical innovation when embedded within a clear instructional design.
The findings of this study highlight how AI-mediated dialogue can foster a more personalized and cognitively demanding learning environment than predominantly lecture-based, procedurally focused instruction. Students reported feeling both challenged and supported, and the qualitative evidence suggests that the activity helped them recognize not only what they misunderstood but also how their reasoning had gone astray—an outcome aligned with the goals of metacognitive development (Schraw & Dennison, 1994; Sharma et al., 2024). Importantly, the contribution of this case study lies in documenting students’ perceptions of learning and engagement, showing that structured, dialogic interactions can promote epistemic agency and encourage students to take ownership of their reasoning while strengthening critical thinking skills in science education. In this sense, the present study contributes classroom-based empirical insights to a growing body of scholarship suggesting that generative AI, when used as a facilitator of Socratic inquiry, may extend beyond content delivery to cultivate higher-order learning outcomes.
More broadly, these results reflect ongoing discussions about the transformative potential of artificial intelligence in education. AI systems have been shown to facilitate student-centered learning and adaptive assessments, as well as to provide real-time feedback that improves engagement and motivation (Banik & Gullapelly, 2025; Cha & Daud, 2025; Jumah et al., 2024). At the same time, they may also expand accessibility, particularly for learners who require individualized support (Mustafa, 2024). However, the growing reliance on AI-based platforms also raises concerns about issues such as data privacy, unequal levels of digital literacy, and the potential reduction of human interaction in the classroom. These tensions underscore the need for a balanced approach in integrating AI into pedagogical practice—one that leverages its capacity to enrich learning while maintaining the social and relational dimensions that are central to education (Mustafa, 2024).
The results revealed a varied pattern of student responses to different conceptual aspects of adiabatic processes. The post-activity Likert-type survey revealed that students perceived the interaction with ChatGPT as beneficial for clarifying core concepts of the adiabatic process. Many students specifically highlighted perceived improvement in understanding of the relationship between work and internal energy, a known conceptual hurdle in thermodynamics (Loverude et al., 2002). Several responses mentioned that while students had previously memorized equations, they had lacked intuitive understanding—something the Socratic dialogue helped them articulate and refine.
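The work–internal energy relationship that students found difficult follows directly from the first law of thermodynamics. The derivation below is a standard textbook result added for the reader’s convenience, using the convention that $W$ denotes work done by the gas:

```latex
% First law of thermodynamics (W = work done BY the gas)
\Delta U = Q - W
% Adiabatic process: no heat exchange with the surroundings
Q = 0 \quad \Longrightarrow \quad \Delta U = -W
% Compression: the surroundings do work on the gas, so W < 0 and
\Delta U > 0, \qquad \Delta U = n C_V \, \Delta T > 0
\quad \Longrightarrow \quad T \ \text{rises}
```

This is precisely the chain of reasoning the Socratic prompts aimed to elicit: with no heat flow, any work done on the gas must appear as internal energy, and hence as a temperature increase.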
However, the use of ChatGPT also revealed some limitations that merit further examination. First, the model occasionally failed to interpret students’ inputs correctly, which disrupted the dialogic flow and led to frustration. Second, the level of questioning did not always align with the students’ prior knowledge—some found the questions too simplistic or redundant, while others felt they needed more scaffolding (Fakour & Imani, 2025; Wang & Fan, 2025). This issue reflects a broader challenge in the deployment of large language models in education: their current inability to fully diagnose learners’ understanding in real time and adjust their responses accordingly (Fan et al., 2025; Tam, 2025; Zhan & Yan, 2025).
Table 5 provides a comparative overview of recent studies involving ChatGPT in physics education and their reported outcomes. Across studies, a consistent insight is that ChatGPT’s effectiveness depends strongly on the pedagogical framework in which it is embedded. In our case, positioning ChatGPT within a Socratic, inquiry-oriented design emphasized metacognitive engagement and conceptual exploration.
To align inference with the study’s theoretical constructs, quantitative comparisons were interpreted primarily at the construct level (composite scores for perception of learning and engagement), while item-level patterns were treated as descriptive trends. This analytic choice supports more robust interpretation and mitigates the risk associated with multiple comparisons.
Student perception consistently emerges as a central metric of success. Studies by Kasmaee and Mahyar (2024) and Pavlenko and Syzenko (2024) have shown that students find ChatGPT useful and motivating, even in contexts beyond direct instruction. Similarly, our participants appreciated the opportunity to develop their understanding through dialogue rather than passively receiving responses.
Overall, this case study adds to the growing body of research on AI in education by demonstrating that, under the right conditions, a general-purpose language model like ChatGPT can support perceived learning and engagement through dialogic inquiry and conceptual conflict.

4.1. RQ1: How Do Students Perceive the Use of ChatGPT as a Socratic Tutor in a Physics Course?

Students generally perceived the learning experience with ChatGPT as positive, engaging, and thought-provoking. Working in small groups encouraged discussion and mutual explanation of the prompts, which helped clarify misunderstandings and consolidate knowledge. The Socratic dialogue, mediated by ChatGPT, prompted students to reflect on their reasoning, make predictions, and revisit thermodynamic concepts related to the adiabatic process. Many participants reported that the AI’s persistent questioning felt different from predominantly lecture-based, procedurally focused instruction, fostering a more active engagement with the content. The collaborative setting was also positively valued by students, as it enabled them to verbalize their reasoning, challenge peers’ ideas, and build confidence in their responses before engaging with the AI tutor. This social dimension of the activity reinforces the view that learning gains emerged not solely from interaction with ChatGPT, but from the combination of dialogic AI support and peer collaboration, consistent with findings reported by Melo-López et al. (2025).
In terms of engagement, the survey results (Table 2) show comparatively lower scores on Items Q7–Q9 (≈3.6–4.1). At the construct level, engagement remained above the midpoint for both genders, and no statistically significant gender differences were observed, indicating broadly comparable engagement patterns.

4.2. RQ2: What Impact Does the Use of ChatGPT Have on Student Engagement During Class Activities?

Among the advantages, students highlighted that ChatGPT provided immediate, non-judgmental feedback, which encouraged exploration without fear of making mistakes. The AI’s ability to ask follow-up questions in a Socratic manner pushed students to articulate their understanding rather than memorize formulas, fostering sustained cognitive engagement during the activity.
Some challenges were also reported, particularly related to moments of uncertainty or cognitive overload when the AI did not provide direct answers. Overall, these findings reinforce that AI-mediated engagement is both empowering and cognitively demanding, echoing prior research that documents increased engagement alongside occasional confusion in AI-based tutoring environments (Wang & Fan, 2025).
Importantly, when the results were interpreted at the construct level—aggregating items into Perception of Learning and Engagement—no statistically significant gender differences were observed. This analytic approach provides a more robust basis for interpretation, as it reduces the influence of item-specific variability and aligns the findings with the study’s theoretical framework. In this sense, the isolated difference observed in Q2 should be interpreted cautiously and does not outweigh the overall pattern of broadly comparable learning and engagement perceptions across genders.

4.3. RQ3: Can the Use of ChatGPT as a Socratic Tutor Support More Equitable Learning Experiences Across Genders in Physics Education?

Overall, the results suggest that ChatGPT-mediated Socratic dialogue supported broadly equitable learning experiences across genders. Although a statistically significant difference was observed in a single item (Q2), where female students reported higher perceived learning, this result warrants cautious interpretation. Because item-level comparisons are exploratory, this isolated difference should not be overgeneralized.
At the construct level, composite measures of perceived learning and engagement did not reveal large gender disparities, indicating that both female and male students experienced comparable levels of participation and perceived benefit from the activity. This pattern suggests that the AI-mediated Socratic approach may support balanced learning experiences, even in contexts where gender differences have been traditionally reported in physics education.
The tendency for female students to provide slightly higher ratings across several items, as illustrated in Figure 1, aligns with prior research indicating that dialogic and collaborative learning environments can foster inclusion and engagement among groups that are often underrepresented or less confident in STEM settings. However, the present findings do not allow firm conclusions regarding gender-specific advantages, and further research with larger samples and validated measures is required.
Prior studies have documented nuanced gender differences in attitudes toward AI, including higher levels of AI-related anxiety among women (Russo et al., 2025). Such factors may shape how students perceive and interact with AI-supported learning environments (Julien, 2024). Future work should therefore examine how AI attitudes, confidence, and anxiety interact with pedagogical design to influence equity-related outcomes.
Overall, this case study contributes preliminary evidence that embedding ChatGPT within a Socratic, inquiry-oriented framework can support generally equitable perceptions of learning and engagement across genders, while underscoring the need for continued investigation into the conditions under which AI-mediated instruction promotes inclusivity.

5. Limitations and Future Work

This study presents promising insights into the integration of ChatGPT as a Socratic tutor for facilitating students’ perceived learning and engagement in thermodynamics. However, several limitations should be acknowledged.
First, the study involved a relatively small sample (n = 53) drawn from a single institution, which limits the generalizability of the findings beyond the specific classroom context examined.
Second, there was no control group or pre/post conceptual test to allow for a quantitative comparison of learning gains beyond self-perception. Although a formative quiz was administered during the intervention, it was used exclusively for pedagogical feedback and was not designed or validated as a research instrument; therefore, it was not analyzed as a primary outcome measure.
Another limitation concerns the variability in how student groups interacted with ChatGPT. AI–student chat transcripts were not systematically captured or archived across groups, which precluded rigorous qualitative content analysis of dialogic interactions and reasoning trajectories. Interactions occurred on students’ own devices during in-class group work, and consistent logging at the group level was not feasible in this implementation.
Additionally, some participants reported limitations in ChatGPT’s responses, such as vagueness or occasional misinterpretations, highlighting the importance of careful prompt design and critical engagement with AI-generated content.
Future research could address these limitations by including control groups, employing validated conceptual inventories, and systematically collecting AI–student interaction data to enable fine-grained analysis of dialogic and metacognitive processes. It would also be valuable to explore the role of instructor facilitation in conjunction with AI-mediated dialogue, as well as to examine the impact of similar interventions in larger and more diverse classroom settings.
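If future studies adopt validated conceptual inventories with pre/post administration, as suggested above, learning gains in physics education research are conventionally summarized with Hake's normalized gain, g = (post − pre) / (100 − pre), computed on percentage scores. A minimal helper, using hypothetical scores purely for illustration, is:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain <g> = (post - pre) / (100 - pre), scores in percent."""
    if not 0 <= pre_pct < 100:
        raise ValueError("pre-test percentage must be in [0, 100)")
    return (post_pct - pre_pct) / (100.0 - pre_pct)


# Hypothetical class-average scores, for illustration only:
g = normalized_gain(pre_pct=40.0, post_pct=70.0)  # 0.5, a "medium" gain (0.3 <= g < 0.7)
```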
While these limitations constrain the scope of inference, they do not undermine the value of the present case study. Rather, they delineate clear directions for future work and contribute to a more transparent and responsible integration of generative AI into physics education research.

6. Conclusions

This study provides evidence from a classroom-based case study that ChatGPT, when employed as a Socratic tutor, can support students’ perceived learning and engagement in challenging areas of thermodynamics. By emphasizing dialogue, metacognitive reflection, and cognitive conflict, the intervention was perceived by students as fostering self-regulated learning and epistemic agency.
The success of this approach was not due to a single factor, but rather to a synergistic combination of pedagogical design and technological mediation: ChatGPT functioned less as a content dispenser and more as a mediating scaffold for metacognitive dialogue, helping students detect inconsistencies in their reasoning and articulate more coherent models of thermodynamic behavior.
Compared to other AI-mediated interventions, such as the gamified model of Beltozar-Clemente and Díaz-Vega (2024) or the more structured scaffolding approaches of El Fathi et al. (2025) and Mustofa et al. (2024), this study highlights the potential of open-ended, AI-mediated Socratic dialogue to foster epistemic agency and deep conceptual exploration, as reported by students. Participants valued the opportunity to think through problems rather than being presented with direct answers, an experience that strengthened their sense of ownership over the learning process.
At the same time, important limitations became evident. Occasional misinterpretations by the AI, mismatches in the level of questioning, and the need for better adaptive support point to the importance of refining prompt design, instructor facilitation, and interaction protocols. These challenges underscore that generative AI should not be viewed as an autonomous tutor, but as a tool whose educational value depends on thoughtful pedagogical orchestration, echoing broader concerns in the literature regarding the current limitations of large language models as real-time pedagogical agents.
Future work should explore hybrid learning environments in which AI tools like ChatGPT are integrated with peer dialogue, instructor feedback, and validated assessment strategies, allowing researchers to examine not only student perceptions but also learning outcomes more systematically. Such combinations may help mitigate the shortcomings of AI while amplifying its strengths as a dialogic support system for physics reasoning.
In conclusion, this study contributes to an emerging research agenda that positions generative AI not merely as a novel technological tool, but as a mediating catalyst for pedagogical innovation. When embedded within thoughtfully designed learning experiences and guided by instructors, models like ChatGPT can play a meaningful role in physics education by supporting students’ engagement, reflective practice, and capacity for autonomous and critical thinking. In doing so, such interventions contribute to inclusive, high-quality learning environments aligned with Sustainable Development Goal 4 (Quality Education) and open new avenues for research on equitable and responsible AI integration in STEM classrooms.

Author Contributions

Conceptualization, A.S.-G., O.A.-G., W.X.C.-G., J.A.A., and H.M.-H.; methodology, A.S.-G., W.X.C.-G., J.A.A., H.M.-H., and O.A.-G.; software, A.S.-G., W.X.C.-G., and O.A.-G.; validation, A.S.-G., W.X.C.-G., J.A.A., H.M.-H., and O.A.-G.; formal analysis, A.S.-G., W.X.C.-G., and O.A.-G.; investigation, A.S.-G., W.X.C.-G., J.A.A., H.M.-H., and O.A.-G.; resources, A.S.-G., W.X.C.-G., and O.A.-G.; data curation, A.S.-G., W.X.C.-G., and O.A.-G.; writing—original draft preparation, A.S.-G., W.X.C.-G., J.A.A., H.M.-H., and O.A.-G.; writing—review and editing, A.S.-G., W.X.C.-G., J.A.A., H.M.-H., and O.A.-G.; visualization, A.S.-G., W.X.C.-G., and O.A.-G.; supervision, A.S.-G., H.M.-H., and O.A.-G.; project administration, A.S.-G., O.A.-G., and W.X.C.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it involved only voluntary participation in an anonymous questionnaire, with no sensitive personal data collected. Under Mexican national regulations, specifically Norma Oficial Mexicana NOM-012-SSA3-2012 (which establishes the criteria for conducting health research projects), ethical review is required only for research involving risks to participants, clinical interventions, biological samples, or identifiable sensitive data. Our study involved only the voluntary completion of an anonymous questionnaire, collected exclusively for educational improvement purposes, with no personal, sensitive, or health-related data.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. Students participated voluntarily, and it was explicitly communicated that by completing the questionnaire, they were consenting to participate.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Adıgüzel, T. K., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(4), ep429.
2. AlAli, R., Alsoud, K., & Athamneh, F. (2023). Towards a sustainable future: Evaluating the ability of STEM-based teaching in achieving sustainable development goals in learning. Sustainability, 15(16), 12542.
3. Banik, B. G., & Gullapelly, A. (2025). AI-powered gamification and interactive learning tools for enhancing student engagement. In Driving quality education through AI and data science. IGI Global Scientific Publishing.
4. Beltozar-Clemente, S., & Díaz-Vega, E. (2024). Physics XP: Integration of ChatGPT and gamification to improve academic performance and motivation in a Physics I course. International Journal of Engineering Pedagogy, 14(6), 6.
5. Cha, W. K., & Daud, P. (2025). Enhancing early education with artificial intelligence: A comparative study of AI-powered learning versus traditional methods. International Journal of Academic Research in Business and Social Sciences, 15(2), 983–1002.
6. Diemer, T. T., Fernandez, E., & Streepey, J. W. (2012). Student perceptions of classroom engagement and learning using iPads. Journal of Teaching and Learning with Technology, 1(2), 13–25.
7. El Fathi, T., Saad, A., Larhzil, H., Lamri, D., & Al Ibrahmi, E. M. (2025). Integrating generative AI into STEM education: Enhancing conceptual understanding, addressing misconceptions, and assessing student acceptance. Disciplinary and Interdisciplinary Science Education Research, 7(1), 6.
8. Fadillah, M. A., Usmeldi, U., Lufri, L., Mawardi, M., & Festiyed, F. (2024). Exploring user perceptions: The impact of ChatGPT on high school students’ physics understanding and learning. Advances in Mobile Learning Educational Research, 4(2), 1197–1207.
9. Fakour, H., & Imani, M. (2025). Socratic wisdom in the age of AI: A comparative study of ChatGPT and human tutors in enhancing critical thinking skills. Frontiers in Education, 10, 1528603.
10. Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., & Gašević, D. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530.
11. Gervacio, A. P. (2024). Exploring how generative AI contributes to the motivated engagement and learning production of science-oriented students. Environment and Social Psychology, 9(11), e3194.
12. Julien, G. (2024). How artificial intelligence (AI) impacts inclusive education. Educational Research and Reviews, 19(6), 95–103.
13. Jumah, J., Jumah, O., & Hassan, E. (2024, November 13–15). The impact of artificial intelligence on flipped learning and online learning. Proceedings of the International Conference on Technology and Innovation in Learning, Teaching and Education (pp. 93–105), Abu Dhabi, United Arab Emirates.
14. Kamalov, F., Santandreu Calonge, D., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451.
15. Kasmaee, A. S., & Mahyar, H. (2024). AI chatbots as virtual teaching assistants. In Futureproofing Engineering Education for Global Responsibility (pp. 653–664). Springer Nature.
16. Kioupi, V., & Voulvoulis, N. (2019). Education for sustainable development: A systemic framework for connecting the SDGs to educational outcomes. Sustainability, 11(21), 6104.
17. Liu, Y. L. E., Lee, T. P., & Huang, Y. M. (2025). Enhancing student engagement and higher-order thinking in human-centred design projects: The impact of generative AI-enhanced collaborative whiteboards. Interactive Learning Environments. Advance online publication.
18. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410.
19. Loverude, M. E., Kautz, C. H., & Heron, P. R. L. (2002). Student understanding of the first law of thermodynamics: Relating work to the adiabatic compression of an ideal gas. American Journal of Physics, 70(2), 137–148.
20. Manprisio, R. S. (2024). Redefining learning paradigms: Integrating artificial intelligence into modern classrooms. Ubiquitous Learning: An International Journal, 17(2), 157–177.
21. McDermott, L. C. (1999). Resource letter: PER-1: Physics education research. American Journal of Physics, 67(9), 755–767.
22. Melo-López, V. A., Basantes-Andrade, A., Gudiño-Mejía, C. B., & Hernández-Martínez, E. (2025). The impact of artificial intelligence on inclusive education: A systematic review. Education Sciences, 15(5), 539.
23. Meltzer, D. E. (2012). Resource letter ALIP–1: Active-learning instruction in physics. American Journal of Physics, 80(6), 478–496.
24. Mustafa, B. (2024). From traditional classrooms to AI-integrated learning spaces: A future perspective. AI EDIFY Journal, 1(3), 20–27.
25. Mustofa, H. A., Bilad, M. R., & Grendis, N. W. B. (2024). Utilizing AI for physics problem solving: A literature review and ChatGPT experience. Lensa: Jurnal Kependidikan Fisika, 12(1), 78–97.
26. Neumann, K., Viering, T., Boone, W. J., & Fischer, H. E. (2013). Towards a learning progression of energy. Journal of Research in Science Teaching, 50(2), 162–188.
27. OpenAI. (2025). ChatGPT (April 2025 version) [Large language model]. Available online: https://chat.openai.com/ (accessed on 20 April 2025).
28. Pavlenko, O., & Syzenko, A. (2024). Using ChatGPT as a learning tool: A study of Ukrainian students’ perceptions. Arab World English Journal (AWEJ), Special Issue on ChatGPT, 252–264.
29. Polverini, G., & Gregorcic, B. (2024). Evaluating vision-capable chatbots in interpreting kinematics graphs: A comparative study of free and subscription-based models. Frontiers in Education, 9, 1452414.
30. Pozdniakov, S., Brazil, J., Abdi, S., Bakharia, A., Sadiq, S., Gašević, D., Denny, P., & Khosravi, H. (2024). Large language models meet user interfaces: The case of provisioning feedback. Computers and Education: Artificial Intelligence, 7, 100289.
31. Qusef, A., Murad, S., Alsalhi, N. R., & Al Gharaibeh, F. (2025). Leveraging artificial intelligence to identify students with learning challenges. International Journal of Learning, Teaching and Educational Research, 24(5), 623–643.
32. Redish, E. F. (1999). Teaching physics: Figuring out what works. Physics Today, 52(1), 24–30.
33. Russo, C., Romano, L., Clemente, D., Iacovone, L., Gladwin, T. E., & Panno, A. (2025). Gender differences in artificial intelligence: The role of artificial intelligence anxiety. Frontiers in Psychology, 16, 1559457.
34. Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475.
35. Sharma, K., Nguyen, A., & Hong, Y. (2024). Self-regulation and shared regulation in collaborative learning in adaptive digital learning environments: A systematic review of empirical studies. British Journal of Educational Technology, 55(4), 1398–1436.
36. Tam, A. C. F. (2025). Interacting with ChatGPT for internal feedback and factors affecting feedback quality. Assessment & Evaluation in Higher Education, 50(2), 219–235.
37. Tang, Z., & Zhang, Y. (2023, September 15–17). Application of generative artificial intelligence in English education: Taking ChatGPT system as an example. 2023 3rd International Conference on Educational Technology (ICET) (pp. 42–46), Xi’an, China.
38. UNESCO. (2022). UNESCO moving forward the 2030 agenda for sustainable development. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000247785.locale=es (accessed on 20 August 2025).
39. Wan, T., & Chen, Z. (2024). Exploring generative AI-assisted feedback writing for students’ written responses to a physics conceptual question with prompt engineering and few-shot learning. Physical Review Physics Education Research, 20(1), 010152.
40. Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12(1), 1–21.
41. Wollny, S., Schneider, J., Di Mitri, D., Weidlich, J., Rittberger, M., & Drachsler, H. (2021). Are we there yet? A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4, 654924.
42. Zhan, Y., & Yan, Z. (2025). Students’ engagement with ChatGPT feedback: Implications for student feedback literacy in the context of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–14.
Figure 1. Mean ± SE of survey responses by gender for each item (Q1–Q10).
Table 1. Participant distribution by academic program.

| Academic Program | Number of Participants |
| --- | --- |
| Biomedical Engineering | 4 |
| Civil Engineering | 5 |
| Industrial Engineering | 14 |
| Business Management Engineering | 14 |
| Administrative Mechanical Engineering | 5 |
| Automotive Engineering | 3 |
| Robotics and Mechatronics Engineering | 8 |
| Total | 53 |
Table 2. Descriptive statistics of survey responses (Q1–Q10).

| Item | N | Mean | SD | SE | CI (±) |
| --- | --- | --- | --- | --- | --- |
| Q1 | 53 | 4.70 | 0.54 | 0.07 | 0.15 |
| Q2 | 53 | 4.75 | 0.52 | 0.07 | 0.14 |
| Q3 | 53 | 4.75 | 0.48 | 0.07 | 0.13 |
| Q4 | 53 | 4.77 | 0.42 | 0.06 | 0.12 |
| Q5 | 53 | 4.57 | 0.64 | 0.09 | 0.18 |
| Q6 | 53 | 4.58 | 0.63 | 0.09 | 0.18 |
| Q7 | 53 | 4.21 | 0.93 | 0.13 | 0.26 |
| Q8 | 53 | 3.75 | 1.18 | 0.16 | 0.32 |
| Q9 | 53 | 3.72 | 1.10 | 0.15 | 0.30 |
| Q10 | 53 | 4.26 | 0.84 | 0.11 | 0.23 |
| Perception of Learning | 53 | 4.69 | 0.39 | 0.05 | 0.11 |
| Engagement | 53 | 3.98 | 0.76 | 0.10 | 0.21 |
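The SE and CI columns in Table 2 follow directly from each item's SD and sample size: SE = SD/√n, and the CI values match a t-based 95% half-width (t ≈ 2.007 for df = 52). Assuming this is how the columns were derived, a minimal sketch reproducing the Q1 row is:

```python
import math
from statistics import mean, stdev


def summary_row(scores, t_crit=2.007):
    """Mean, SD, SE, and CI half-width for one survey item.

    t_crit defaults to the two-tailed 95% critical value of Student's t
    with df = 52 (n = 53, as in Table 2); adjust for other sample sizes.
    """
    n = len(scores)
    sd = stdev(scores)
    se = sd / math.sqrt(n)
    return mean(scores), sd, se, t_crit * se


# Checking the reported Q1 summary values (SD 0.54, n 53):
se_q1 = 0.54 / math.sqrt(53)   # ~0.074 -> rounds to the reported SE of 0.07
ci_q1 = 2.007 * se_q1          # ~0.149 -> rounds to the reported CI of 0.15
```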
Table 3. Descriptive statistics of survey responses (Q1–Q10) by gender.

| Item | Female Mean | SD | SE | CI (±) | Male Mean | SD | SE | CI (±) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Q1 | 4.86 | 0.36 | 0.10 | 0.21 | 4.64 | 0.58 | 0.09 | 0.19 |
| Q2 | 5.00 | 0.00 | 0.00 | 0.00 | 4.67 | 0.58 | 0.09 | 0.19 |
| Q3 | 4.79 | 0.43 | 0.11 | 0.25 | 4.74 | 0.50 | 0.08 | 0.16 |
| Q4 | 4.86 | 0.36 | 0.10 | 0.21 | 4.74 | 0.44 | 0.07 | 0.14 |
| Q5 | 4.79 | 0.43 | 0.11 | 0.25 | 4.49 | 0.68 | 0.11 | 0.22 |
| Q6 | 4.79 | 0.43 | 0.11 | 0.25 | 4.51 | 0.68 | 0.11 | 0.22 |
| Q7 | 4.50 | 0.76 | 0.20 | 0.44 | 4.10 | 0.97 | 0.15 | 0.31 |
| Q8 | 4.07 | 1.21 | 0.32 | 0.70 | 3.64 | 1.16 | 0.19 | 0.38 |
| Q9 | 4.07 | 1.07 | 0.29 | 0.62 | 3.59 | 1.09 | 0.18 | 0.35 |
| Q10 | 4.14 | 0.86 | 0.23 | 0.50 | 4.31 | 0.83 | 0.13 | 0.27 |
| Perception of Learning | 4.85 | 0.25 | 0.07 | 0.14 | 4.63 | 0.42 | 0.07 | 0.14 |
| Engagement | 4.20 | 0.72 | 0.19 | 0.41 | 3.91 | 0.77 | 0.12 | 0.25 |
Table 4. Statistical comparison of survey items by gender (female n = 14, male n = 39). Significant differences are marked with *.

| Item | Female (n) | Male (n) | Statistic | p Value | Sig. |
| --- | --- | --- | --- | --- | --- |
| Q1 | 14 | 39 | 320 | 0.222 | ns |
| Q2 | 14 | 39 | 350 | 0.0287 | * |
| Q3 | 14 | 39 | 279 | 0.879 | ns |
| Q4 | 14 | 39 | 304 | 0.396 | ns |
| Q5 | 14 | 39 | 332 | 0.159 | ns |
| Q6 | 14 | 39 | 326 | 0.208 | ns |
| Q7 | 14 | 39 | 334 | 0.188 | ns |
| Q8 | 14 | 39 | 336 | 0.190 | ns |
| Q9 | 14 | 39 | 340 | 0.165 | ns |
| Q10 | 14 | 39 | 241 | 0.490 | ns |
| Perception of Learning | 14 | 39 | 197 | 0.140 | ns |
| Engagement | 14 | 39 | 210 | 0.208 | ns |
Table 5. Comparative Table of Studies on the Use of ChatGPT in Physics Education.

| Reference | Pedagogical Approach | Use of ChatGPT | Main Objective | Evaluation Methodology | Key Findings | Student Perception |
| --- | --- | --- | --- | --- | --- | --- |
| El Fathi et al. (2025) | Inquiry-based learning using the CILP framework | Guided assistant providing structured prompts and conceptual tasks | Enhance conceptual understanding and address misconceptions in thermodynamics | Pre- and post-tests, Likert surveys, and cluster analysis | Significant reduction of misconceptions and high student acceptance of ChatGPT | High acceptance and positive perception |
| This work | Socratic dialogue promoting metacognitive and inquiry-oriented reasoning through open-ended questions | ChatGPT acts as a mediating scaffold for reflective dialogue, rather than a content provider | Support students’ perceived learning and engagement in undergraduate thermodynamics | Perception survey (learning and engagement constructs) and qualitative reflections (case study) | High levels of perceived learning and engagement; minimal gender differences at the construct level | Perceived as supportive for reflection, dialogue, and conceptual clarification, and as useful for deeper understanding |
| Beltozar-Clemente and Díaz-Vega (2024) | Combination of generative AI and gamification to increase motivation and engagement | ChatGPT provides real-time feedback and assistance during game-based activities | Improve academic performance and student motivation in Physics 1 | Comparison of exam scores and Likert-type surveys on motivation | Higher performance in the experimental group and increased interest in the subject | Highly motivated and positive perception of ChatGPT in a gamified context |
| Polverini and Gregorcic (2024) | Evaluation of multimodal chatbots for graph interpretation | Chatbots (free and paid versions) analyzed for interpreting kinematics graphs | Assess AI performance in interpreting visual and linguistic aspects of graphs | Kinematics Graph Test (TUG-K), comparison of language vs. vision tasks | GPT-4o performed best; language-based tasks were more accurate than vision-based ones | Student perception not reported |
| Fadillah et al. (2024) | Qualitative study of high school students’ experiences using ChatGPT for physics | Used as a virtual physics tutor for explaining concepts and supporting assignments | Explore how students perceive ChatGPT’s influence on learning and understanding | Open-ended surveys and qualitative content analysis | Students found ChatGPT helpful in improving physics understanding | Positive perception of ChatGPT’s educational value |
| Kasmaee and Mahyar (2024) | Use of chatbots as virtual teaching assistants in higher education | Chatbot responded to students’ academic questions | Evaluate effectiveness of AI teaching assistants on student learning | Pre/post-tests, academic performance analysis, and focus groups | Better academic performance and trust in the chatbot assistant among students | Trust and satisfaction in AI as a supportive teaching tool |
| Pavlenko and Syzenko (2024) | Perception study of Ukrainian university students using ChatGPT as a learning tool | ChatGPT used for homework help, writing tasks, and academic research | Investigate satisfaction and user experiences | Survey of 247 students across disciplines | Students expressed high satisfaction and regular use of ChatGPT | Positive attitudes and appreciation of ChatGPT’s usefulness |
| Mustofa et al. (2024) | Literature review and experimental use of ChatGPT in solving physics problems | ChatGPT used to solve and generate physics problems | Review benefits and challenges of AI in physics education | Literature synthesis and testing ChatGPT on physics problems | ChatGPT could solve certain physics problems and generate new ones with human-level clarity | Student perception not reported |
