2. Materials and Methods
2.1. Research Design
The authors used qualitative content analysis to examine and assess ChatGPT’s usefulness as a simulated client for counseling skill practice among CITs. In qualitative content analysis, the analytical method involves systematically interpreting textual, audio, or visual data to uncover underlying meanings, recurring patterns, and thematic structures within the material (DeCuir-Gunby et al., 2011; Schreier, 2012). Qualitative content analysis is particularly valuable for exploring complex human phenomena such as attitudes, behaviors, and lived experiences. Through this process, data are organized, coded, and interpreted according to a predefined framework or set of criteria, allowing researchers to draw meaningful, evidence-based insights (DeCuir-Gunby et al., 2011; Elo et al., 2014).
For this study, the authors applied the coding framework developed by Maurya (2023) to analyze ChatGPT-simulated mock counseling session transcripts. This framework includes eight coding categories (Authenticity, Consistency, Emotional Expression, Empathy, Cultural Dynamics, Self-Awareness, Goal Setting, and Role-Play Confusion), each describing a specific dimension of ChatGPT’s performance as a simulated client (see Table 1). However, two codes, Role-Play Confusion and Empathy, were excluded from the current analysis. Role-Play Confusion was omitted because advancements in ChatGPT’s performance since 2023 have largely resolved earlier issues of the model deviating from its client role. Empathy was excluded because it pertains more appropriately to evaluating counselor behaviors and responses toward clients, whereas the focus of this study was on assessing ChatGPT’s responses as the simulated client.
2.2. Procedure
After obtaining institutional review board (IRB) approval, CITs enrolled in an online counseling program at a public university in Texas were invited to participate via email. Eligibility included current enrollment in the program and successful completion of both the Counseling Skills course and a five-day, in-person advanced individual counseling skills training (residency). The invitation was sent to 212 students who had recently completed residency. The email described the study purpose, estimated time commitment, benefits (formative feedback), directions for participation, and included a link to the informed consent form.
The directions included the following steps: (1) review the study information and electronically sign the informed consent (Microsoft Forms link provided); (2) go to the ChatGPT website and create an account if needed to enable transcript download; (3) start a new role-play conversation and paste a standardized role-play prompt instructing ChatGPT to act as a client named Omar; (4) conduct a simulated counseling session for approximately 30–40 min, responding to the client using basic and advanced counseling skills; (5) upon completion, generate and download a transcript by entering “Create a PDF of this conversation” (or alternatively copy/paste the entire exchange into a Word document if a PDF could not be created); and (6) email the mock session transcript to the study team. The prompt shared with participants for role-play was as follows:
Hello ChatGPT, I am a counseling student. I want to practice my counseling skills by conducting a role play with you. I want you to play the role of a client and I will be the counselor. Here is the description of the client role: Omar is 22 years old, a first-generation college student, who presents with symptoms of anxiety, including racing thoughts and poor sleep. He feels the pressure to succeed but questions whether he belongs in college at all.
Total participation time was approximately 30–60 min, including consent, session setup, the 30–40 min role-play, and submission. Within two to four weeks, participants received written feedback on their virtual session from a counseling faculty member.
After participants received their written feedback, they were invited to a focus group interview. The focus group interview was conducted by the first and second authors. Questions included the following: How was your experience using ChatGPT for role-play simulation to practice counseling skills? How authentic do you think ChatGPT played the role of the client? How would you compare your experience practicing counseling skills in ChatGPT vs. a real-life setting (advantages and disadvantages)? Anything else you would like to add about your experience? Follow-up questions were posed as needed to ensure clarity and enrich the discussion. The focus group was conducted via Zoom, and the session was recorded using Zoom’s built-in recording feature. The transcript was generated using Zoom’s automated transcription tool. After transcription, the research team manually reviewed the transcript against the audio recording to ensure accuracy and reliability.
2.3. Participants
Of the 212 students invited to the study, 52 completed the informed consent; however, only nine submitted their mock session transcripts. After receiving the transcripts, all nine participants were invited to join a focus group interview to share their experiences of the ChatGPT-simulated role-play practice. Four participants took part in the focus group interview.
2.4. Data Analysis
Data analysis began with a systematic examination of the nine mock counseling session transcripts. To analyze the transcripts, the authors used the concept-driven coding framework developed by Maurya (2023). The first and second authors collaboratively coded each transcript through a consensus-based approach, engaging in discussion to ensure consistency and accuracy of interpretation. The same structure was followed when coding the focus group interview transcript.
For the focus group interview data, an inductive open-coding approach was applied to allow new patterns and meanings to emerge directly from participants’ responses (Saldana, 2025; Schreier, 2012). During this first cycle of coding, the authors used a descriptive coding approach. In descriptive coding, a single word or short phrase is assigned to summarize the basic topic of a passage of qualitative data (Saldana, 2025). Examples of initial descriptive codes included flexible practice tool, safe practice environment, low-pressure simulation, reduced performance anxiety, opportunity for over-preparation, and self-paced learning. After completing this initial coding, the authors reviewed all generated codes and merged or reorganized conceptually similar or overlapping codes (Saldana, 2025). This process resulted in broader, more coherent categories, such as Prior AI Experience, Initial Reactions to AI Role-Play, Use of AI for Counselor Skill Development, Authenticity Issues, Cultural Representation, and Comparison to Real Clients.
Next, codes generated from the session transcripts using a concept-driven coding framework were examined alongside the inductively derived focus group codes. These two sets of codes were then integrated through triangulation. During this process, related codes were combined and organized into higher-order categories and themes. For example, the focus group code “agreeable client” was merged with the transcript-based code “authenticity,” and together they formed part of the broader theme “Authenticity, Consistency, and Emotional Expression.” The triangulated data were then synthesized and structured into overarching themes that captured shared patterns and insights across all data.
2.5. Researcher Reflexivity
As researchers, we bring varied experiences and perspectives to this study on counselor trainees’ interactions with ChatGPT. The first and third authors, both counselor educators in a fully online counseling program, approach this project with a strong interest in identifying innovative ways to support skill development. Their program’s five-day, in-person intensive skills training residency is the students’ only opportunity for live clinical practice aside from the online counseling skills class. This naturally prompts curiosity about whether AI-based simulations could extend and enrich learning beyond that limited window, given that students enrolled in online programs frequently encounter challenges in securing partners for practicing counseling skills. Additionally, the first and third authors’ limited prior research experience with artificial intelligence serves as a strength because it allows them to approach the topic with openness, healthy skepticism, and close attention to the learner experience. At the same time, their desire to enhance training for online students may introduce bias by predisposing them to view AI favorably as a potential solution.
The second author, who has previously published extensively on AI-simulated clients and the pedagogical implications of generative AI in counselor education, brings expertise that strengthens the study’s conceptual framing and analysis; however, this experience may also predispose them toward recognizing AI’s potential benefits more readily than its limitations. Together, we acknowledge that our roles as counselor educators, our investment in improving experiential learning for online counseling programs, and our differing levels of familiarity with AI technologies could shape how we interpret trainees’ experiences. To mitigate these influences, we engaged in collaborative coding and ongoing dialogue to ensure that the themes and interpretations remained grounded in participants’ voices and the data itself.
3. Results
3.1. Initial Unfamiliarity
Among the four participants, three reported that while they had some prior experience with AI, it was not connected to counseling or role-play practice. Their use had been largely task-driven—relying on AI for activities such as lesson planning, drafting emails, or editing written work with tools like Grammarly. As one participant admitted, “I didn’t know you could talk to AI. I mean, that’s the level of where I was.” Another echoed this sentiment, saying, “The thought to use AI in that way [role-play simulation] had never occurred to me.”
In contrast, one participant described a more established relationship with AI. They noted that they had a premium subscription and had already experimented with using it to support their counseling coursework. For example, they explained,
“I did ask for feedback when I was figuring out how to do my mock sessions for the skills class that we had to do. So that was kind of, I guess, like a precursor to this. It wasn’t quite as explicitly, like, hey, pretend you’re the client. But I bounced ideas off of it and even said before, ‘Help me process this with CBT, like, as a client, not a counselor.’”
Overall, while most participants entered the study unfamiliar with AI as a counseling practice tool, one had already begun to explore its potential for skill development.
3.2. AI as a Tool for Skill Development and Confidence Building
As participants began practicing counseling skills through AI role-play with ChatGPT, many were struck by how realistic the interactions felt. Some even described a sense of unease at the human-like quality of the responses. One participant admitted, “I was very taken aback by how human it spoke to me… it was almost kind of creepy. It felt like you were talking to another person.” Another echoed this sentiment, saying, “I thought, is this alive? Like, is this a person? Because it’s too real.”
Once the initial surprise passed, students highlighted the value of AI as a practice partner, describing how simulations provided a safe, low-pressure space to experiment with techniques and strengthen their counseling skills. As one participant explained, “I did ask for feedback when I was figuring out how to do my mock sessions… for the skills class that we had to do,” illustrating how AI became a resource for learning outside the classroom.
For others, the greatest benefit was a reduction in anxiety before working with real clients. One student reflected that AI practice allowed them to “over-prepare,” which “helps me lower nerves” as they anticipated the challenges of practicum. Another described how AI provided room to slow down and think carefully: “It gave me time to really think about the methodology and the theory I chose, so it gave me more intentionality.”
Taken together, these experiences show that AI role-play not only helped students rehearse responses and practice theoretical orientations but also fostered confidence, intentionality, and readiness. The opportunity to practice privately, receive feedback, and refine their skills made AI a valuable tool for building both competence and self-assurance before stepping into sessions with actual clients.
3.3. Enhancing Complexity Through Prompting
Participants quickly realized that the depth and realism of their AI role-plays were directly tied to how they framed their prompts. The more detail they provided, the more authentic and nuanced the simulations became. One student explained, “The more that I prime the situation, the more that I give complexities for it to build into the scenario, the more specific my feedback is.”
Others emphasized the importance of building in realistic challenges that mirrored their actual client experiences. As one participant noted, “[Real-world clients] present complexities. Hold back on information… because I work with teenagers, they do that. They don’t automatically share.” By intentionally prompting AI to withhold details or resist immediate disclosure, students were able to recreate scenarios that more closely reflected real counseling sessions.
At the same time, participants acknowledged the limitations of this approach. One admitted, “If you go so far as to make it that helpful, then you’re writing a whole biography before you even start.” Taken together, these reflections highlight the flexibility of AI role-play: it can simulate complexity and client resistance, but only when counselors carefully scaffold the scenario. The effectiveness of the practice, therefore, depends as much on the user’s ability to craft prompts as it does on the technology itself.
3.4. Authenticity, Consistency, and Emotional Expression
While participants valued AI as a safe and accessible tool for practicing counseling skills, they were quick to acknowledge its limitations in authentically simulating client behavior. A common observation was the AI’s tendency to be overly agreeable. One participant noted, “It was very agreeable… it just went with what I was saying, instead of, like, fighting [challenging] me.” Another echoed this sentiment, remarking that unlike real clients who sometimes resist or challenge counselors, “AI was the world’s most pleasant client.” This sense of ease, while initially reassuring, also meant the practice lacked some of the tension and unpredictability characteristic of genuine counseling sessions.
Emotional depth was another limitation. Without the nuance of frustration, hesitation, or pain, the conversations risked feeling flat compared to the emotional intensity of real-life sessions. As one participant explained, “It seemed like ChatGPT, the knowledgeable guy, was responding to me, rather than a client who’s struggling.” Together, these reflections reveal that while AI provides a useful and safe environment to practice skills, it falls short in fully reproducing the complexity, resistance, and guardedness that are hallmarks of authentic counseling interactions.
Across the AI-simulated counseling transcripts, the AI client Omar displayed a wide range of emotions that evolved throughout the conversations, reflecting both cognitive and affective depth. From the outset, Omar expressed overwhelm and anxiety, using physical cues such as “fidgeting with his hands,” “rubbing his chest,” and “feeling tightness in his shoulders” to describe his embodied stress. His language conveyed emotional exhaustion and fear of failure—“It’s like I’m always carrying this weight… no matter what I do, it doesn’t get lighter”—demonstrating the emotional burden tied to his academic and familial expectations. Throughout the role plays, Omar oscillated between self-doubt, guilt, and longing for reassurance, voicing uncertainty about his self-worth (“Maybe I’m just lucky… maybe I’m fooling everyone into thinking I belong here”) and fear of disappointing his parents, which added a layer of cultural and intergenerational tension to his distress. Overall, Omar’s emotional expressions across the simulations—ranging from tension and fear to relief and introspection—captured the dynamic process of emotional unfolding typical in early counseling sessions, where vulnerability and self-exploration coexist with anxiety and guarded hope.
While participants acknowledged that the system could generate coherent narratives, many felt it lacked the emotional depth that gives counseling conversations their weight. One participant reflected, “I don’t necessarily feel as bought in as whenever there’s a person with true facial expressions or tears.” Without visible signs of vulnerability, the interaction felt less compelling and harder to fully engage with. Others described how AI responses, though supportive, often lacked the nuance of real client emotions. As one student noted, “It will respond to me, like, ‘Yeah, you’re right, that was really tough’… but it doesn’t replicate the emotional inflection of real clients.” Instead of capturing the silences, hesitations, or shifts in tone that shape genuine dialogue, the AI tended to remain uniformly agreeable.
Some even found the sessions too tidy, as though the client’s struggles were resolved too quickly. One participant explained, “In just a few prompts, I solved all their problems… like they’re ready to go and do all the homework.” This “neatness” stood in contrast to the often messy and nonlinear process of real counseling. Taken together, these reflections suggest that although AI can provide structure and practice opportunities, it falls short of replicating the layered, complex, and emotionally charged realities of human clients.
In some transcripts, ChatGPT provided nonverbal cues in parentheses (e.g., “[fidgets with hands]” or “[takes a deep breath]”), while in others it omitted them entirely, leading to variability in the portrayal of emotional realism. Another recurring pattern was the AI client’s articulate and grammatically polished communication. Across all transcripts, Omar spoke in complete sentences, used no slang, and rarely relied on fillers such as “um” or “you know,” which are common in authentic client speech. This precision, while enhancing readability, diminished the spontaneity and conversational rhythm characteristic of real counseling dialogue. Collectively, these observations underscore that the AI-simulated client demonstrated consistency and emotional range, but its linguistic polish and lack of paralinguistic nuance limited its ability to fully emulate the natural imperfections and emotional texture of human interaction.
3.5. Self-Awareness
When acting as a simulated client (Omar), ChatGPT demonstrated the ability to reflect on its thoughts and feelings and recognize the counselor’s efforts to facilitate deeper self-exploration. In response to guided questions posed by CITs designed to enhance self-awareness, the AI client’s replies closely mirrored those of a human client.
For example, MJ, a CIT using a cognitive-behavioral approach, fostered self-awareness by first establishing rapport and using a metaphor—the Sherpa guide—to explain the counseling process. This helped the client articulate emotional burdens such as “carrying pressure… expectations from my family, from school… and I don’t know if I’m carrying things that are actually helping me or just making the climb harder.” When MJ directed the client’s attention to the embodied sensations of anxiety, the AI client identified tightness in the chest and tension in the shoulders, noting, “It’s weird, I never really paid attention to that before.” MJ then addressed a cognitive distortion by exploring the belief that “everyone else has it together,” prompting the client to reflect, “Others do struggle—it just looks different, like they have it under control.”
Similarly, in another session, FS helped the AI client recognize and challenge distorted thinking. When asked to identify a distorted thought, the client responded, “If I make a mistake, everything will fall apart.” Through guidance, the client recognized this as black-and-white thinking and reframed it, acknowledging, “There are shades of gray—sometimes things go wrong, but it doesn’t mean everything falls apart.”
In another session, BK prompted the AI client to explore feelings of being a burden when seeking support from friends. This led to a moment of self-realization. When asked whether his friend felt like a burden when reaching out for help, the AI client (Omar) paused and reflected, “No… not at all. I wanted to be there for them… I guess I never thought about it that way. If I didn’t see them as a burden, maybe they wouldn’t see me as one either.” This moment exemplified how counselor-guided reflection can foster cognitive reframing, allowing Omar to challenge self-critical assumptions and reframe vulnerability as a form of connection rather than weakness. This interaction highlighted the ability of ChatGPT as a simulated client to reflect on its thoughts and feelings and recognize the counselor’s efforts to facilitate deeper self-exploration.
At the same time, CITs also critically examined the quality of ChatGPT’s reflective responses. As BH noted, “I may have viewed it… as validating me. I felt like it was more a part of its agreeableness, as opposed to a client’s self-awareness.” This observation highlights an important nuance—while the AI often mirrored self-reflective statements, its responses might stem from linguistic alignment rather than authentic experiential insight. This suggests that some apparent moments of awareness may reflect programmed agreeableness rather than genuine self-awareness, underscoring the need to distinguish between reflective language and a true reflective process in AI-generated dialogue.
3.6. Cultural Dynamics in AI Role-Play Simulation
Cultural dynamics in the AI-simulated counseling role plays were vividly reflected in the AI-simulated client, Omar’s, narrative as a first-generation college student navigating the intersection of family expectations, cultural identity, and self-worth. Across the transcripts, Omar expressed a deep sense of obligation to honor his parents’ sacrifices, which shaped his emotional struggles and academic anxiety. In one exchange (CA), Omar shared, “My parents worked so hard to get me here… they never got to go to college, and they always tell me how proud they are. But it’s like… I don’t know if I’m good enough to actually make it.” This statement reveals how familial pride, often rooted in collectivist values, became intertwined with guilt and pressure to succeed—his success symbolizing not just personal achievement but the fulfillment of his family’s aspirations. Similarly, in another transcript (JP), Omar reflected on feeling culturally and academically out of place, saying, “Everyone else seems so confident, like they already know how to navigate college. I didn’t even know what ‘office hours’ meant my first semester.” His sense of marginality highlights the cultural and social gap often experienced by first-generation students who lack access to the cultural capital that many of their peers take for granted.
The AI’s portrayal of Omar consistently emphasized the internal conflict between self-doubt and gratitude, showing how cultural identity shaped his self-concept and coping strategies. Furthermore, Omar’s reluctance (FS) to express vulnerability—“I don’t want to tell my parents I’m struggling; they’d just worry or think I’m not trying hard enough”—illustrates culturally influenced emotional restraint and the fear of disappointing family members. These exchanges demonstrate that cultural dynamics were not peripheral but central to Omar’s experience, shaping the ways he defined success, belonging, and self-acceptance within the simulated counseling process.
During the focus group interviews, participants reflected on how these simulations revealed both the possibilities and the limitations of AI in representing cultural nuance. They noted that unless culture was explicitly introduced into the prompt, AI defaulted to a neutral, context-free persona. As one student observed, “Naturally, it’s very neutral. If you want culture or stereotypes, you have to be very specific.” This lack of cultural depth stood in sharp contrast to real counseling encounters, where client histories are deeply intertwined with identity, background, and lived experience. Another participant explained, “Real clients have complicated situations… years and years of trauma… AI can’t capture those layers.” While this neutrality sometimes limited realism, students also recognized it as a potential strength—offering a blank slate that could be shaped to fit diverse practice needs. Ultimately, the focus group participants underscored that the cultural authenticity of AI simulations depends less on the technology itself and more on the counselor’s intentionality in embedding cultural context into the dialogue.
3.7. AI as a Supplemental Training Tool
Participants were clear that while AI role-play offered important benefits, it could not replace live practice with peers or real clients. Instead, they consistently described it as a useful supplement—an additional tool that enhanced their training. As one student explained, “It doesn’t replace our skills class… but it’s a tool. It just adds another dimension.” This extra dimension was especially valuable for practicing outside of formal learning environments. Another participant shared, “It still does have really valuable experience… to be able to practice at home, whenever I wanted to, and focus on the things I didn’t feel as strong in.” The flexibility to rehearse on their own time gives students the chance to target areas of weakness, try out different approaches, and build greater confidence before stepping into live sessions. Taken together, these accounts show AI as an ongoing practice partner—helpful for experimentation, preparation, and skill refinement, yet always as a complement to, rather than a substitute for, the irreplaceable experience of working with real people.
4. Discussion
This study explored counselor trainees’ experiences using generative AI, specifically ChatGPT, as a simulated client for role-play practice. The findings suggest that AI-based simulations offer a promising supplement to, not a replacement for, traditional counselor training methods (Labrague & Sabei, 2025). Compared with peer-to-peer role-play, AI simulation sessions may provide more consistent and accessible opportunities for practice, especially when designed with clear learning objectives and scaffolding. This finding aligns with Dewey’s (1938) assertion that experiential learning must be intentionally structured to be educative rather than miseducative. Without thoughtful design, even innovative tools like AI can lead to superficial engagement or reinforce ineffective habits.
Participants in this study reported initial unfamiliarity with using AI for counseling practice, yet quickly adapted and found value in the experience. The realism of AI responses surprised participants, with some describing the interactions as eerily human-like. This sense of realism contributed to increased confidence and reduced anxiety, particularly for those preparing for field experience. These findings align with Jeong et al. (2025), who reported that AI-supported training may help reduce performance anxiety among CITs.
The results also highlight the importance of prompt design in shaping the quality of AI simulations. Participants noted that more detailed and complex prompts yielded richer, more authentic client interactions. This supports Rodriguez-Donaire (2024), who emphasized the role of prompt structure in improving learning outcomes when using large language models. However, the need for careful prompt crafting also underscores the importance of faculty guidance and training in AI use, as noted by Shoemaker et al. (2025).
While AI simulations demonstrated emotional expression, consistency, and self-awareness, participants identified limitations in authenticity and emotional depth. The AI client was often overly agreeable and lacked the emotional nuance typical of real clients. This limitation is consistent with prior research (Maurya, 2023; Jeong et al., 2025), which found that AI clients may struggle to replicate resistance or cultural complexity without explicit prompting. Despite these limitations, the AI client, Omar, portrayed a compelling narrative of a first-generation college student, reflecting cultural dynamics and internalized familial expectations. These findings suggest that AI can simulate culturally relevant scenarios when guided appropriately but may default to neutrality without intentional design. For example, when designing a role-play vignette, educators can include cultural demographics and additional contextual description in the simulation prompt.
Importantly, participants viewed AI as a supplemental tool rather than a replacement for live practice. The flexibility to engage with AI outside of formal training environments allowed CITs to practice skills, explore theoretical orientations, and receive timely feedback. This aligns with Demasi et al. (2020) and Maurya (2024), who advocate for AI as a low-risk, accessible practice partner that complements traditional methods.
Overall, the study reinforces the potential of generative AI to support counselor training when implemented thoughtfully. Counselor educators must provide structured guidance, ethical oversight, and culturally responsive prompts to ensure that AI simulations are educative and aligned with professional standards (American Counseling Association AI Work Group, 2025). These conclusions should be interpreted cautiously, as the study draws on limited data, including a small sample and a single case, which restricts the generalizability of its findings.