Brief Report

ChatGPT Told Me to Say It: AI Chatbots and Class Participation Apprehension in University Students

Queens College and The Graduate Center, The City University of New York, New York, NY 10036, USA
Educ. Sci. 2025, 15(7), 897; https://doi.org/10.3390/educsci15070897
Submission received: 15 June 2025 / Revised: 2 July 2025 / Accepted: 4 July 2025 / Published: 14 July 2025

Abstract

The growing prevalence of AI chatbots in everyday life has prompted educators to explore their potential applications in promoting student success, including support for classroom engagement and communication. This exploratory study emerged from semester-long observations of class participation apprehensions in an introductory educational psychology course, examining how chatbots might scaffold students toward active and independent classroom contribution. Four students experiencing situational participation anxiety voluntarily participated in a pilot intervention using AI chatbots as virtual peer partners. Following comprehensive training in AI use and prompt design given to the entire class, participants employed systematic consultation frameworks for managing classroom discourse trepidations. Data collection involved regular instructor meetings documenting student experiences, challenges, and developmental trajectories through qualitative analysis emphasizing contextual interpretation. While students reported general satisfaction with chatbot integration, implementation revealed three critical complexities: temporal misalignment between AI consultation and real-time discussion dynamics; feedback inflation creating disconnects between AI reassurance and classroom reception; and unintended progression from supportive scaffolding toward technological dependency. Individual outcomes varied, with some students developing independence while others increased reliance on external validation. AI-assisted participation interventions demonstrate both promise and limitations, requiring careful consideration of classroom dynamics. Effective implementation necessitates rehearsal-based rather than validation-focused applications, emphasizing human mentorship and community-centered approaches that preserve educational autonomy while leveraging technological scaffolding strategically.

1. Introduction

The rapid expansion of artificial intelligence (AI) tools in our daily lives coincides with an ongoing mental health crisis on college campuses, particularly around social anxiety and participation-related distress (Rasouli et al., 2025; Tan et al., 2023). This convergence creates both unprecedented opportunities and complex challenges for educators seeking to support student learning and well-being. Generative AI-powered chatbots, such as ChatGPT and Google Gemini, have moved from niche applications to widespread adoption, increasingly permeating core academic functions including tutoring, advising, and peer collaboration (Salih et al., 2025). Recent research demonstrates growing institutional interest in leveraging AI across multiple dimensions of student support. These applications include AI’s potential to serve as an accessible academic advisor, particularly valuable for its 24/7 availability (Akiba & Fraboni, 2023), and to enhance academic writing instruction when strategically integrated into coursework (Akiba & Garte, 2024). Beyond these academic applications, AI’s capacity to provide broader navigational support for underrepresented students, including first-generation college students who may lack family guidance when navigating university culture or communicating with faculty, represents an emerging area of exploration (Lytridis & Tsinakos, 2025). It is thus not surprising that numerous studies have documented how intelligent technologies are being piloted to address instructional inefficiencies, cognitive scaffolding needs, and student well-being (Favero, 2024; Herrmann & Weigert, 2024; Salih et al., 2025). Building on this broader trend, more recent work has begun to explore AI’s potential role in supporting emotional well-being, including randomized trials investigating chatbot-based interventions for anxiety-related conditions (Heinz et al., 2025).
Positioned as a pilot, this exploratory study emerged from direct observations made during the fall 2024 semester in an introductory educational psychology course at a mid-sized public university in the United States. The course, with enrollment capped at 25 students, emphasized active class participation as a core component of the learning experience. Recognizing that this requirement might generate anxiety among some students (Blair et al., 2024), the course syllabus clearly stated that students anticipating difficulty with discussions or collaborative activities were encouraged to reach out during the first two weeks if they wished to improve their engagement in class. This proactive support framework represented standard pedagogical practice, with the message reiterated multiple times early in the semester.
The intervention focused on promoting classroom-based, participation-supportive behaviors through pedagogical strategies rather than clinical treatment. Following student-initiated communication about participation concerns, in addition to the individualized ongoing support afforded to all students, the instructor invited participants to engage in an exploratory approach involving strategic chatbot use, designed to facilitate virtual thought exercises and rehearsals, to help them build skills and comfort to become independent, self-sufficient class discussion contributors. This semester-long documentation provides snapshots of how students initially hesitant to participate in class discussions navigated AI-assisted anxiety management within established classroom dynamics, offering insights into both the potential and limitations of technological support for academic engagement among students experiencing non-clinical participation apprehensions. Accordingly, this exploratory analysis asks: (1) how do students experiencing non-clinical participation anxiety navigate AI-assisted support within established classroom dynamics; and (2) what do their patterns of use and outcomes suggest about the role of AI in scaffolding anxiety-sensitive pedagogy?

2. Materials and Methods

2.1. Context and Institutional Framework

This exploratory analysis examines an innovative pedagogical integration of AI chatbots designed to address non-clinical participation apprehensions within established educational frameworks. The study emerged from routine instructional practice in a Computing and Digital Literacies (CDL)-integrated introductory educational psychology course at a mid-sized U.S. university during Fall 2024. As part of standard CDL curriculum, students received comprehensive training in ethical AI use, prompt design, and transparency protocols from the first day, with AI chatbot engagement for academic writing, research assistance, and course logistics representing routine pedagogical practice.
This work operates under a standing Institutional Review Board (IRB) exemption granted to the author in 2022 and renewed annually for ongoing pedagogical improvement and assessment for the course sections taught by the author. The exemption falls under Category 1, covering normal educational practices and the use of educational tests, surveys, and observations in established classroom settings. In lieu of signed informed consent, in compliance with the institution’s IRB protocol under this exemption, the course syllabus included a disclosure clause stating that anonymized student work and instructional activities may be used for accreditation, instructional improvement, or scholarly dissemination. Students were also informed of their right to opt out by enrolling in other available sections of the same course, ensuring voluntary participation under this blanket waiver. This procedural design reflects institutional norms and safeguards consistent with ethical pedagogical inquiry.

2.2. Participants

The course in question enrolled 25 students, with active class participation designated as a core requirement. Following standard practice, the syllabus encouraged students anticipating participation difficulties to contact the instructor within the first two weeks, a message reiterated several times early in the semester. Six students (24%) reached out to express concerns about participation. Of these, four students (16% of the class; 67% of those who expressed concerns; 3 females and 1 nonbinary) cited non-clinical anxiety as the primary barrier and voluntarily agreed to participate in the AI-assisted intervention. The remaining two students, who were not included in the current exploration, cited anticipated absences as the reason for their participation-related concerns. During the first month, the four participating students rarely initiated verbal contributions but were encouraged by the instructor to use indirect cues, such as subtle nodding, to signal a desire to be called on when they had difficulty initiating vocalization. They were also encouraged to email reflections before class, outlining their ideas and seeking instructor validation before contributing aloud. All four students, albeit to varying degrees, took advantage of these suggestions, indicating that their reluctance to participate stemmed not from disengagement or disinterest but from a need for encouragement or reassurance. Notably, none of these students had clinical diagnoses or ADA accommodations for mental health conditions on file, indicating that they were experiencing situational academic anxiety. This pattern aligns with prior research identifying social anxiety as a leading factor in classroom non-participation among university students (Cooper et al., 2018; Tan et al., 2023), suggesting that non-clinical participation apprehension may be a widespread but underexplored barrier to academic engagement.

2.3. Procedure

The intervention involved strategic AI chatbot consultation as a support mechanism for managing participation anxiety, building upon the existing CDL framework familiar to all participants. Individual meetings with each student addressed specific participation concerns and introduced customized strategies, including traditional pedagogical supports to make engagement more accessible (e.g., using subtle gestures to signal a desire to be called on) and structured AI consultation frameworks. In other words, the AI-powered intervention was provided in addition to, not in lieu of, the regular support offered by the instructor. It was further emphasized that these tools were intended as “steppingstones” to help students build the skills, knowledge, and comfort needed to gradually become independent contributors to class discussions without external assistance. Students were encouraged to select whichever AI-based chatbot they felt most comfortable with to support their participation efforts. While most students used ChatGPT, one opted for Google Gemini. Throughout this manuscript, the term “chatbot” refers to these student-selected AI-powered large language model (LLM) tools in general, unless otherwise noted.
Students received a prompt design summary sheet (Table 1) developed primarily to facilitate procedural scaffolding and metacognitive skill development rather than content validation, emphasizing student agency in navigating classroom discourse dynamics while maintaining intellectual ownership of contributions. As such, students were also encouraged to emulate these prompts, adapt them, or develop their own.
This approach targeted situational, non-clinical participation anxiety rather than clinical mental health conditions, distinguishing it from extensive research on chatbot effectiveness in addressing clinical mental health challenges (Ni & Jia, 2025). The intervention focused on promoting classroom-based, learning-positive behaviors through pedagogical strategies, emphasizing academic engagement support rather than therapeutic intervention. All one-on-one meetings were documented through detailed notetaking, either during or immediately following each session. When quotes appear in this manuscript, they represent either verbatim transcripts or paraphrased statements designed to preserve the student’s intended meaning and affective tone without revealing sensitive information.

2.4. Data Collection and Analysis

Analysis involved iterative review of meeting notes, cross-case comparison of student experiences, and identification of recurring patterns across the semester-long intervention. These analytic insights emerged through reflective synthesis of documented interactions and observation notes, rather than formal coding protocols. All participants chose to maintain consistent communication with the instructor through regular optional meetings to discuss progress, challenges, and adaptive strategies. These conversations, conducted as part of routine instructional duties, form the empirical foundation for the following analysis. Detailed notes captured student insights, concerns, and developmental trajectories during most meetings. When conversations were not recorded verbatim, however, student perspectives were paraphrased to preserve original meaning and affective tone. This exploratory analysis drew on established qualitative sensibilities, emphasizing thick description and contextual interpretation rather than formal coding or thematic analysis. The goal was to illuminate the interplay between individual participation experiences and broader classroom dynamics, offering grounded insight to guide future research and practice.
Semester-long notetaking yielded valuable insights into how students with participation apprehensions navigated AI-assisted anxiety management within established classroom dynamics, highlighting both the potential and limitations of technological support for academic engagement in higher education. Because this project was designed as a classroom-based pedagogical intervention, the analysis relied on systematic notetaking and reflective synthesis rather than formal qualitative research methods such as thematic coding or interrater reliability.1

3. Findings

3.1. Findings on Student Experiences: Emerging Benefits and Challenges

The four students who participated in this AI-assisted participation intervention brought a wide array of academic backgrounds, learning profiles, and cultural perspectives to the classroom. All experienced situational, non-clinical participation anxiety rather than diagnosed mental health conditions, providing valuable insights into how LLMs might support students experiencing common but understudied forms of academic apprehension. As they attempted to integrate AI consultation into their participation strategies, their semester-long experiences revealed both promising outcomes and unexpected complications that varied significantly across individual contexts. While all four reported general satisfaction with using AI tools to manage classroom anxiety, practical implementation proved more complex than anticipated. Each case highlighted different aspects of technology-mediated anxiety support, revealing fundamental tensions between current AI capabilities and the dynamic demands of in-class academic discourse. Although all participants were familiar with AI tools from personal and academic use, none had previously considered applying them specifically as supports for enhancing classroom participation, making this a genuinely exploratory pedagogical intervention involving participation-skeptical students.
From a sociocultural and constructivist perspective, these experiences can be interpreted through Vygotsky’s concept of private speech, in which external tools and dialogue serve as cognitive scaffolds (Vygotsky & Kozulin, 2012). As students internalized their interactions with chatbots, AI tools became mediators of self-regulated thought and engagement. This framing positions digital tools not as replacements for human reasoning, but as temporary supports for developing comfort and independence in cognitively demanding situations.

3.1.1. Student A: Temporal Mismatches and Real-Time Demands

Student A, a psychology major accustomed to thorough preparation, initially found ChatGPT helpful for refining ideas before class. The chatbot’s responses validated her thinking about assigned readings and expanded on her original points, which she described as deepening her understanding. However, a critical challenge emerged during live discussion: “By the time I typed out my idea and read what ChatGPT said, the conversation had already moved on.” The dynamic pace of classroom dialogue did not accommodate the pause required for real-time AI consultation. Rather than alleviating anxiety, this temporal misalignment ostensibly increased stress, prompting frantic typing attempts to keep pace even as she lost the thread of the discussion.
The difficulty intensified when the chatbot’s enthusiastic responses created unintended pressure. “It always said my ideas were great and added more details. Then I felt like I had to say all of that, not just my original point. I got overwhelmed and didn’t say anything.” This experience illustrates how AI-based support designed to facilitate participation can complicate it when timing and feedback volume misalign with live classroom interactions. While emerging voice-command options and interface improvements may mitigate some of these temporal limitations, this student’s experience suggests that current tools remain constrained in supporting real-time participation effectively.
By mid-semester, however, Student A reported that she no longer needed to consult the chatbot frequently during class. Instead, she had begun to internalize the chatbot’s patterns of affirmations and logical structure, developing an intuitive sense of how it might respond. This shift suggests a cognitive transition from external tool use to internalized support, which seemed to echo Vygotsky’s idea of private speech, where external dialogue with a tool like a chatbot supports the gradual development of internal thought and self-regulation (Yu et al., 2024). Rather than relying on real-time AI feedback, she drew on prior interactions as a form of mental rehearsal, which appeared to enhance her sense of preparedness and academic self-efficacy for classroom participation. This rehearsal-like use of chatbots may thus represent a promising strategy for supporting students with participation anxiety.

3.1.2. Student B: Cultural Navigation and Belonging

The complexity of AI-mediated support became particularly apparent when students navigated both academic anxiety and cultural difference. Student B, a physical education major who had immigrated from a Caribbean country as a teenager, used chatbots as both an academic tool and cultural interpreter. When she wanted to share her experience with Vicks VapoRub (called Vivaporú in her family) as a cure-all remedy in her culture for everything from coughs and headaches to cuts and open wounds, she first consulted ChatGPT to determine whether the example would be understood and, also, how to frame it appropriately for an American classroom, where such usage of Vicks VapoRub is uncommon. The chatbot helped her shape the example in relation to parental ethnotheories (Harkness & Super, 1996) and provided reassurance about its relevance. Her example was enthusiastically received by peers and sparked meaningful, engaged discussion about cultural variation in parenting, which represents a core theme of the course.
This interaction demonstrates how AI chatbots could empower students as they navigate cultural and emotional complexities, providing real-time feedback, validation, and a platform to rehearse classroom contributions. Student B, in particular, routinely consulted Google Gemini for advice or reassurance, especially when preparing for class discussions or reflecting on them afterwards. These interactions were reportedly meaningful and “comfort-building,” helping her feel better prepared and more comfortable contributing. Ultimately, the technology complemented the classroom experience by enhancing her participation comfort and affirming the value of her cultural knowledge, even as human connection remained essential to her sense of belonging.
Unlike Student A, however, Student B continued relying on Google Gemini throughout the semester, using it frequently during class sessions. The instructor occasionally reminded her privately that continuous use was not aligned with the intervention’s intended goals, which emphasized selective and reflective AI engagement toward independence rather than real-time dependence. Nonetheless, this case highlights AI’s potential value in specific contexts, providing an accessible and affirming resource that helped Student B participate with greater cultural voice, particularly in articulating examples that might otherwise have remained unspoken. Her experience suggests AI support may be especially useful for students managing multiple layers of academic challenge, even as it remains insufficient for addressing the full emotional and interpersonal dimensions of classroom engagement. Still, her sustained reliance on AI underscored the need for human mentorship to foster deeper and lasting growth.

3.1.3. Student C: Differentiated Outcomes in Neurodiverse Learners

The variation in student outcomes became most pronounced when considering neurodiversity. Student C, an elementary education major who voluntarily identified as neurodivergent but did not specify the diagnosis, and who used they/them pronouns, experienced persistent difficulties with peer engagement and spontaneous participation. They described inadvertently upsetting others, including professors, and often feeling unsure about how, when, what, or even whether to speak during discussions. The instructor recommended using a chatbot as a real-time peer partner and sounding board to manage these participation ambiguities and anxieties, with the ultimate goal of gradually scaffolding them toward becoming independent, confident contributors to classroom discussions.
Over the semester, this approach proved especially effective for their specific needs, though not in the way originally anticipated. Rather than relying on the chatbot in real time during class, as Student A initially did and as Student B continuously displayed, Student C quickly developed a consistent pre-class routine. They would input their ideas into the chatbot, ask it to anticipate peer reactions and counterarguments, and refine their points accordingly. For example, before a class debate on screen time with digital devices and its impact on child development, Student C used the chatbot to test how their argument about “co-viewing with friends and family as a bonding tool” might be received by their classmates and fit into the larger topic. ChatGPT anticipated multiple possible objections they might encounter from their peers, which allowed them to clarify their language and build more compelling arguments. This private rehearsal helped Student C anticipate discussion dynamics and gave them comfort to take more intellectual risks. Weekly check-ins with the instructor further supported this progress. By the end of the semester, Student C had transformed into the most frequent and intellectually engaged contributor in class, offering comments that reflected not only preparation but nuanced understanding and adaptability in live dialogue.
This success story illustrates how AI tools may be particularly valuable for students whose learning profiles benefit from structured preparation and social norm clarification. The chatbot served as a powerful scaffold for both pragmatic communication and self-regulatory decision-making, aligned with the student’s specific learning needs. However, this positive outcome also reminds us of the highly individualized nature of effective intervention: what worked for Student C would likely be less effective for others.

3.1.4. Student D: The Reassurance Gap

The limitations of AI-generated feedback became especially apparent with Student D, a music major, who relied heavily on ChatGPT to evaluate potential contributions both before and during class. The chatbot consistently praised her ideas as “strong” and “thoughtful,” providing reassurance during preparation. However, when she shared these AI-validated points in class, she felt that reception was often muted and her delivery less successful than she had anticipated. She reflected that while the instructor always responded affirmatively with phrases like “That’s SUPER interesting! Let’s dissect that,” her comments were then reframed in ways that sounded more polished than her original phrasing. She also noted that peers rarely followed up on her remarks, contributing to a sense of disconnection.
Although Student D’s participation somewhat increased over time, she continued observing this pattern of perceived misalignment between AI validation and classroom reception. Despite reminders that undergraduate students are not expected to speak with academic fluency, she remained determined to improve her communication skills and express herself more effectively. The gap between the chatbot’s consistently enthusiastic feedback and her perceived experience of classroom dynamics ostensibly generated more frustration than comfort. Like Student B, she used the chatbot continuously during class in real time, sometimes consulting it frantically to refine or validate thinking before speaking. This reactive use appeared to hinder her ability to focus on the unfolding discussion and develop natural conversational timing, a point shared with her during our regular private conversations.
Rather than cultivating strategies for interpreting classroom dynamics and adapting in the moment, Student D increasingly questioned her own delivery and instincts. While her intentions aligned with self-improvement, the process seemed to undermine her sense of agency and distract from her contributions’ substance. Her experience illustrates how well-intentioned technological support, when not carefully scaffolded and mentored, can inadvertently erode the very autonomy and comfort it aims to build.

4. Discussion

4.1. Three Critical Challenges with AI-Assisted Participation

The experiences of Students A through D, while unique in their individual contexts and outcomes, collectively point to recurring tensions between current AI capabilities and the complex demands of classroom participation among students experiencing non-clinical anxieties. Despite each student achieving increased engagement through AI integration, their distinct coping strategies and ongoing concerns reveal deeper challenges that extend beyond simple implementation issues. These patterns demonstrate how carefully designed, well-intentioned technological interventions can yield consequences that partially complicate rather than resolve the educational challenges they aim to address. Three critical challenges emerged consistently across these diverse student experiences, each highlighting fundamental misalignments between AI tool design and the nuanced realities of live academic discourse.
First, the most immediate challenge concerned temporal misalignment between AI consultation processes and the dynamic flow of classroom discourse. Classroom participation is inherently fluid, shaped by collective engagement and spontaneous responses (Blair et al., 2024); thus, academic discussions often unfold unpredictably, with ideas evolving rapidly and discourse direction shifting. In contrast, interaction with AI tools usually follows a more linear, deliberative process, whereby users formulate prompts, wait for responses, interpret feedback, and then decide how to proceed (Oruche et al., 2025). This temporal lag proved a critical obstacle across multiple student experiences. Student A’s reflection captures this challenge most directly: “By the time I typed out my idea and read what ChatGPT said, the conversation had already moved on.” When students pause mid-discussion to consult AI tools, they become prone to losing track of the conversational flow.
Rather than easing participation anxiety, this temporal misalignment may very well have heightened students’ sense of disorientation and feelings of falling behind the class discussion. The fragmentation of attentional focus during class sessions meant that, rather than engaging directly with peers and instructor contributions, students found themselves absorbed in their devices, reading, typing, or waiting for AI feedback, only to rejoin discussions that had evolved during their inattention. It would be feasible to argue that this dynamic not only diluted the experiential quality of classroom participation but also systematically limited opportunities for the spontaneous intellectual risk-taking that university educators hope to cultivate through collaborative dialogue.
Second, a less visible but equally consequential challenge involved the misalignment between AI-generated evaluations of student thoughts and the actual or perceived dynamics of classroom reception. Current LLMs are typically designed to be broadly supportive and encouraging, often defaulting to positive feedback patterns that can create unrealistic expectations about academic discourse (Rrv et al., 2024). While this tendency provides emotional affirmation, it may have produced what could be characterized as feedback inflation: a pattern of consistently generous praise that may not be replicated in the actual classroom.
Students typically received enthusiastic AI assessments of their contributions, such as “excellent insight,” “great example,” or “very thoughtful,” which often were not reflected proportionally in classroom dynamics, as reported explicitly by one student. Student D’s experience exemplifies this disconnect most clearly. Despite receiving consistent AI validation that her ideas were uniquely strong, she perceived classroom reception to be generally less enthusiastic than the AI feedback had suggested. She noted that, while the instructor responded positively to her comments, he would then reframe them in ways that sounded more polished than her original phrasing, and peers rarely followed up on her contributions. This perceived disconnect between AI enthusiasm and measured classroom responses led Student D to question both the quality of her ideas and her ability to accurately assess social dynamics. In other words, the inflated praise Student D believed ChatGPT gave her, while emotionally supportive on the surface, potentially undermined her development of adaptive communication strategies that rely on reading contextual cues, peer responses, and instructor feedback. What appeared to be supportive encouragement may have functioned more like a distorting mirror, reflecting approval rather than offering meaningful calibration against actual classroom dynamics.
Third, perhaps the most pedagogically disconcerting challenge was the unintended shift from supportive scaffolding to increasing dependency on AI tools. The intervention was originally conceptualized as temporary scaffolding, with AI tools serving as virtual peer partners and non-judgmental sounding boards designed to help students rehearse participation protocols, gain comfort with classroom discourse norms, and ultimately develop independent participation skills. The ultimate goal, as such, was to foster autonomy, not reliance. Student A and Student C successfully reduced their AI tool use over time, with Student A synthesizing and internalizing supportive dialogue patterns and Student C steering chatbot use exclusively toward class preparation designed to anticipate and emulate in-class dialogues. However, Students B and D increasingly began to treat chatbots as essential prerequisites for participation. For these students, AI gradually evolved from a supplementary resource into a cognitive filter through which their thoughts had to pass before being voiced in classroom settings. This transformation runs counter to core educational principles that emphasize self-sufficiency in engaging in learning-positive behaviors (Stead & Poskitt, 2010).
Rather than developing internal strategies for navigating uncertainty and engaging spontaneously, these students grew accustomed to “outsourcing” critical judgment and social interpretation to AI systems that, while perhaps sophisticated, lack contextual awareness of classroom dynamics, nuances, and interpersonal relationships. Although the instructor privately discouraged these students from continuous chatbot reliance during class sessions, such guidance appeared insufficient to deter sustained use. As a result, AI tool use may have inadvertently inhibited, rather than promoted, their growth toward independent academic engagement.
These three challenges collectively reveal fundamental limitations in current approaches to AI tool implementation for supporting real-time, context-sensitive participation among students experiencing non-clinical anxiety in higher education. The patterns observed across these semester-long experiences suggest that participation challenges often reflect intricate intersections of individual characteristics, classroom dynamics, and broader educational expectations rather than simple preparation deficits that can be addressed through technological enhancement alone.

4.2. Implications and Recommendations for Educational Practice

The findings from this pilot intervention carry significant implications for university instructors and institutions considering AI integration in student support services, extending beyond the specific context of participation anxiety to encompass broader questions of technology-mediated pedagogy and educational equity. Most fundamentally, this pilot exploration demonstrates that sophisticated technological solutions, at this point, cannot adequately substitute for the human mentorship, community building, and relational approaches that remain central to effective student development and inspiring educational practice. The differential outcomes observed across student experiences highlight critical distinctions between content-focused and process-focused AI applications in educational contexts. Students who ostensibly derived the greatest benefit from chatbot integration into active class participation were those who gradually transitioned toward using these tools for rehearsal, emulation, and procedural guidance rather than content validation and intellectual affirmation. Student C’s development of effective pre-class preparation routines and Student A’s eventual internalization of supportive frameworks exemplify this productive evolution, while Student B’s need for constant reassurance and Student D’s continued reliance on real-time validation, unfortunately, signal the limitations of validation-focused approaches. This pattern suggests that implementation strategies should prioritize metacognitive skill development and social navigation competencies over habitual academic content evaluation, recognizing that students’ intellectual contributions often hinge more on enhanced social and temporal coordination than on substantive content improvement.
Rather than uncritically adopting AI tools to promote student engagement or rejecting them outright, the current findings suggest the need for thoughtful implementation strategies. Chatbot consultation should perhaps be explicitly framed as a rehearsal and simulation environment rather than a validation system. Students can thus use AI tools effectively to practice articulating ideas, anticipate peer and instructor reactions, and become familiar with classroom discourse dynamics before engaging in live discussion. Emphasizing procedural competencies, such as conversational timing, respectful disagreement, and effective idea transitions, positions AI as an effective but temporary training ground. This encourages students to internalize academic discourse protocols and become independent participants, reducing the risk of permanent reliance on AI for evaluating content quality. Such an approach fosters students’ social and temporal awareness, essential for effective participation, while minimizing feedback inflation and unrealistic expectations of classroom reception, thereby promoting the knowledge, skills, and comfort necessary for active class participation.
Critical to successful implementation is explicit training in evaluating AI feedback, particularly regarding social and contextual appropriateness. Students should learn to treat chatbot feedback as general rather than definitive guidance, while human feedback from instructors and peers should be positioned as the primary source for contextual assessment and nuanced evaluation. Furthermore, AI tools should be integrated as supplements to, rather than replacements for, human mentorship and community support systems. Students struggling with participation anxiety should be systematically connected with faculty or peer mentors, mentoring programs, campus counseling services, writing centers, and other resources that can address the complex social and emotional dimensions of classroom engagement.
Finally, rather than promising complete clarity or discomfort elimination, effective interventions should focus on helping students develop greater tolerance for the ambiguity, uncertainty, and unpredictability that characterize genuine intellectual exchange. This includes recognizing that effective participation often involves improvisation and spontaneous response rather than prescribed contributions, skills that require human connection and community support to develop (Apriceno et al., 2020). At a broader level, institutions exploring AI integration must consider whether AI coaching should be formalized, how faculty can be trained to scaffold its use, and what safeguards ensure ethical, equitable implementation that supports, not supplants, human mentorship.

5. Future Directions and Conclusions

This pilot intervention was limited to a single classroom context with one instructor and four self-selected participants, highlighting important questions about its generalizability. Future research should specifically investigate the efficacy of rehearsal-based versus validation-based AI applications across different student populations and learning contexts. Comparative studies examining outcomes when students use AI tools for procedural preparation versus content evaluation could provide crucial insights for institutional policy development. Additionally, research should explore how different prompt frameworks and interaction patterns might optimize the rehearsal function while minimizing dependency formation, with particular attention to how these dynamics vary across intersectional identity categories and learning profiles. The current analysis briefly acknowledged gender, cultural background, and neurodivergence, but future studies should more systematically explore how intersectional identities shape experiences with AI-mediated classroom participation. Such work should also examine longer-term effects on student development, academic comfort, and independent participation skills, particularly how rehearsal-based AI use affects students’ development of improvisation skills and tolerance for uncertainty in academic discourse.
The intersection of AI technology and learning-related behaviors among university students represents both significant opportunity and profound complexity for institutions committed to promoting success among their students. This exploratory study reveals that well-intentioned technological solutions can, at times, generate unintended consequences that may partially complicate rather than resolve educational challenges. The critical distinction between rehearsal-based and validation-based AI applications emerges as a fundamental factor in determining whether technological integration enhances or undermines student autonomy and intellectual development. When students strategically utilize AI tools to practice participation dynamics and develop procedural competencies while anticipating a range of reactions from others, they maintain meaningful ownership of their intellectual contributions while building academic self-efficacy. Conversely, when AI tools serve primarily to validate content quality and provide emotional reassurance, they may inadvertently undermine students’ trust in their own judgment and create unrealistic expectations about classroom reception.
As noted earlier, this exploratory pilot study’s limitations include a small sample (n = 4) of self-selected participants, lack of control or comparison group, and implementation in a single class by one instructor. These constraints clearly preclude generalization. However, the value of this work may lie in its potential transferability (i.e., the extent to which readers recognize parallels in their own classrooms and adapt the practices accordingly). Beyond offering transferable insights, this study also aims to serve as a catalyst for educators and researchers to explore creative, ethically grounded strategies for fostering classroom engagement and addressing participation-related anxiety, particularly among students experiencing non-clinical but persistent apprehension.
The four students whose experiences informed this exploratory study perhaps achieved their most meaningful growth not through AI alone but through sustained faculty mentorship, regular consultation, and the gradual development of comfort in voicing their perspectives within a supportive learning environment. Their stories collectively signal that authentic classroom participation is a developmental process, one that involves navigating social dynamics, building academic identity, and learning to contribute thoughtfully to shared intellectual work. As higher education continues to explore the role of AI in supporting student engagement and success, the central challenge is to leverage these tools while maintaining focus on the human relationships and interactive environments that make learning truly transformative. AI can function as a useful rehearsal partner and procedural aid, but it cannot replace the collaborative, evolving, and trust-based contexts in which student voice is genuinely cultivated. While technology continues to evolve, it is still, at least for now, the human-centered settings grounded in mentorship, dialogue, and trust that empower students not only to overcome hesitation but to find meaning and agency in their contributions.

Funding

While this paper did not receive direct funding, the pedagogical experimentation and instructional framework on which it is based were supported by the Robin Hood Foundation (Grant Number A1N-5A000000IYl2QAA) and by the City University of New York’s Computing Integrated Teacher Education (CITE) grant to the author.

Institutional Review Board Statement

This exploratory analysis was conducted under an IRB exemption (Project 2022-0562, The City University of New York Central Office) for pedagogical assessment and instructional innovation. The intervention emerged naturally from standard teaching practices and reflects routine instructional support rather than a formal research design. Data were drawn from faculty–student interactions conducted as part of official instructional duties, with no collection of identifiable student information. As such, this work qualifies as exempt under U.S. federal regulations (45 CFR 46.104, Category 1). All participation was voluntary, and anonymized reporting ensured confidentiality while contributing to scholarly insights on AI-supported instructional strategies in higher education.

Informed Consent Statement

Informed consent was waived due to the study’s classification as exempt under U.S. federal regulations (45 CFR 46.104, Category 1), as it involved routine educational practices without collection of identifiable information. Instead, the course syllabus included a disclosure stating that anonymized student work and instructional activities may be used for accreditation, instructional improvement, or scholarly dissemination. All participation was voluntary, students were given the option to opt out without penalty, and anonymized reporting procedures were used to ensure confidentiality.

Data Availability Statement

No new datasets were generated or analyzed toward the completion of this manuscript. The analysis was based on the instructor’s anonymized observational notes and reflections collected as part of routine teaching practice. These materials are not publicly available due to privacy protections and ethical constraints related to the IRB exemption.

Acknowledgments

The author wishes to thank the four participating students for their willingness to share their experiences and for their thoughtful engagement throughout the semester. Their contributions were central to shaping the insights in this paper. Google Gemini 2.5 Flash was utilized for assistance with grammatical refinement and clarity during manuscript preparation; however, no conceptual content or original ideas were generated or evaluated by AI. The author assumes full responsibility for all AI-supported editing and the final manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

Note

1. Due to the IRB exemption for routine classroom and educational practice and the lack of signed informed consent for data publication, complete raw data are not made publicly available. This manuscript presents anonymized vignettes that reflect the data collected.

References

  1. Akiba, D., & Fraboni, M. C. (2023). AI-supported academic advising: Exploring ChatGPT’s current state and future potential toward student empowerment. Education Sciences, 13(9), 885. [Google Scholar] [CrossRef]
  2. Akiba, D., & Garte, R. (2024). Leveraging AI tools in university writing instruction: Enhancing student success while upholding academic integrity. Journal of Interactive Learning Research, 35(4), 467–480. [Google Scholar] [CrossRef]
  3. Apriceno, M., Levy, S. R., & London, B. (2020). Mentorship during college transition predicts academic self-efficacy and sense of belonging among STEM students. Journal of College Student Development, 61(5), 643–648. [Google Scholar] [CrossRef]
  4. Blair, K. P., Banes, L. C., & Martin, L. (2024). Fostering noticing of classroom discussion features through analysis of contrasting cases. Instructional Science, 52(3), 417–452. [Google Scholar] [CrossRef]
  5. Cooper, K. M., Downing, V. R., & Brownell, S. E. (2018). The influence of active learning practices on student anxiety in large-enrollment college science classrooms. International Journal of STEM Education, 5(1), 23. [Google Scholar] [CrossRef] [PubMed]
  6. Favero, T. G. (2024). Using artificial intelligence platforms to support student learning in physiology. Advances in Physiology Education, 48(2), 193–199. [Google Scholar] [CrossRef] [PubMed]
  7. Harkness, S., & Super, C. M. (1996). Parents’ cultural belief systems: Their origins, expressions, and consequences. Guilford Publications. [Google Scholar]
  8. Heinz, M. V., Mackin, D. M., Trudeau, B. M., Bhattacharya, S., Wang, Y., Banta, H. A., Jewett, A. D., Salzhauer, A. J., Griffin, T. Z., & Jacobson, N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4), AIoa2400802. [Google Scholar] [CrossRef]
  9. Herrmann, L., & Weigert, J. (2024). AI-based prediction of academic success: Support for many, disadvantage for some? Computers and Education: Artificial Intelligence, 7, 100303. [Google Scholar]
  10. Lytridis, C., & Tsinakos, A. (2025). AIMT agent: An artificial intelligence-based academic support system. Information, 16(4), 275. [Google Scholar] [CrossRef]
  11. Ni, Y., & Jia, F. (2025). A scoping review of AI-driven digital interventions in mental health care: Mapping applications across screening, support, monitoring, prevention, and clinical education. Healthcare, 13(10), 1205. [Google Scholar] [CrossRef] [PubMed]
  12. Oruche, R., Cheng, X., Zeng, Z., Vazzana, A., Goni, M. A., Shibo, B. W., Goruganthu, S. K., Kee, K., & Calyam, P. (2025). Chatbot dialog design for improved human performance in domain knowledge discovery. IEEE Transactions on Human-Machine Systems, 55(2), 207–222. [Google Scholar] [CrossRef]
  13. Rasouli, S., Ghafurian, M., & Dautenhahn, K. (2025). Co-design and user evaluation of a robotic mental well-being coach to support university students’ public speaking anxiety. ACM Transactions on Computer-Human Interaction, 32(3), 24. [Google Scholar] [CrossRef]
  14. Rrv, A., Tyagi, N., Uddin, M. N., Varshney, N., & Baral, C. (2024). Chaos with keywords: Exposing large language models sycophancy to misleading keywords and evaluating defense strategies. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Findings of the association for computational linguistics: ACL 2024 (pp. 12717–12733). Association for Computational Linguistics. [Google Scholar] [CrossRef]
  15. Salih, S., Husain, O., Hamdan, M., Abdelsalam, S., Elshafie, H., & Motwakel, A. (2025). Transforming education with AI: A systematic review of ChatGPT’s role in learning, academic practices, and institutional adoption. Results in Engineering, 25, 103837. [Google Scholar] [CrossRef]
  16. Stead, C., & Poskitt, J. (2010). Adaptive help seeking: A strategy of self-regulated learners and an opportunity for interactive formative assessment. Assessment Matters, 2, 42. [Google Scholar] [CrossRef]
  17. Tan, G. X. D., Soh, X. C., Hartanto, A., Goh, A. Y. H., & Majeed, N. M. (2023). Prevalence of anxiety in college and university students: An umbrella review. Journal of Affective Disorders Reports, 14, 100658. [Google Scholar] [CrossRef]
  18. Vygotsky, L. S., & Kozulin, A. (2012). Thought and language (Revised and Expanded Edition). MIT Press. [Google Scholar]
  19. Yu, J., Kim, H., Zheng, X., Li, Z., & Zhu, X. (2024). Effects of scaffolding and inner speech on learning motivation, flexible thinking and academic achievement in the technology-enhanced learning environment. Learning and Motivation, 86, 101982. [Google Scholar] [CrossRef]
Table 1. Sample LLM prompts used to support classroom participation.
Prompt Type | Sample Student Prompt to Chatbot | Purpose
Pre-Class Blanket Setup | “Hi [insert the name of chatbot you are using], I’m about to start my Educational Psych class. I might ask for advice on participation because I am nervous about speaking up in class.” | Initiates a context-aware chat for real-time support during class without having to re-establish the context with each prompt
Content Clarification | “We’re discussing Harkness and Super’s parental ethnotheory. My Scandinavian cousins leave their babies outside in the cold during winter for sometimes half an hour. I know they think it’s good parenting practice. Would that be a good example? How so?” | Helps clarify whether a personal example fits academic content before sharing aloud
Social Navigation | “Another student is discussing Latino parenting styles, but I see things differently. What’s a respectful way to join the conversation without sounding rude?” | Supports polite disagreement and confident classroom engagement
Affective Navigation | “I am so nervous about speaking up. My voice is going to quiver. Tell me something so I can gather up the courage!” | Offers encouragement to take the initial step of speaking up
Timing Participation | “How do I know if it’s a good time to speak up when about 4 people have their hands up? Should I just let it go?” | Aids in assessing classroom dynamics and identifying low-stress moments to contribute
Self-Expression Coaching | “I know what I want to say about today’s topic, but I’m worried it won’t come out right. Can you help me phrase it clearly and concisely? Here is the gist of my idea: _____.” | Provides support in preparing effective phrasing in advance
Debriefing after Participation | “I finally spoke up today, but I’m worried it didn’t make sense. Here is the gist of what I said on the topic of _____ and how people reacted. Can you help me reflect on what went well and what I could improve in the future?” | Encourages post-participation self-reflection and builds confidence for future engagement
Note: This framework represents an intentional pedagogical design primarily emphasizing procedural scaffolding and metacognitive skill development rather than simple content validation. The prompt categories were developed to support student agency in navigating classroom discourse dynamics while maintaining intellectual ownership of contributions. This approach acknowledges that effective participation often extends beyond content knowledge, particularly for students managing participation apprehensions in class.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Akiba, D. ChatGPT Told Me to Say It: AI Chatbots and Class Participation Apprehension in University Students. Educ. Sci. 2025, 15, 897. https://doi.org/10.3390/educsci15070897
