Article

Mentorship in the Age of Generative AI: ChatGPT to Support Self-Regulated Learning of Pre-Service Teachers Before and During Placements

by Ngoc Nhu Nguyen (Ruby) 1,2,* and Walter Barbieri 3

1 Office of the Deputy Vice-Chancellor (Education), The University of Sydney, Sydney, NSW 2006, Australia
2 Learning Enhancement and Innovation, The University of Adelaide, Adelaide, SA 5005, Australia
3 School of Education, Faculty of Arts, Business, Law and Economics, The University of Adelaide, Adelaide, SA 5005, Australia
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 642; https://doi.org/10.3390/educsci15060642
Submission received: 11 January 2025 / Revised: 10 May 2025 / Accepted: 14 May 2025 / Published: 23 May 2025
(This article belongs to the Special Issue Teaching and Learning with Generative AI)

Abstract:
This study investigates the integration of mentorship, self-regulated learning (SRL), and generative artificial intelligence (gen-AI) to support pre-service teachers (PSTs) before and during work-integrated learning (WIL) placements. Utilising the Mentoring and SRL Pyramid Model (MSPM), it examines how mentors’ dual roles as coaches and assessors influence PSTs’ SRL and explores to what extent gen-AI can assist PSTs in meeting the demands of WIL placements. Quantitative and qualitative data from 151 PSTs, including surveys, interviews, placement scores, and mentor feedback, were analysed using statistical correlation analysis and thematic analysis to reveal varied mentorship approaches. Gen-AI tools are highlighted as valuable in enhancing PSTs’ SRL, providing tactical and emotional guidance where traditional mentorship is limited. However, challenges remain in gen-AI’s ability to navigate complex interpersonal dynamics. The study advocates for balanced mentorship training that integrates technical and emotional support, and equitable access to gen-AI tools. These insights are critical for educational institutions aiming to optimise PST experiences and outcomes in WIL through strategic integration of gen-AI and mentorship.

1. Introduction

Work-integrated learning (WIL) is a well-established educational approach that blends theoretical knowledge with practical experience to develop professional expertise (Aprile & Knight, 2020; Smith & Worsfold, 2015). This learning mode offers a structured yet flexible environment where students apply learned skills to real-world challenges under expert mentorship, fostering autonomous confidence (Trede & Jackson, 2021). Critical to WIL’s success is the mentor–mentee relationship (Nigate et al., 2023), which goes beyond technical information exchange to foster a deep understanding of professional norms and ethics, authentically contextualising academic knowledge with industry standards, encouraging reflective learning, and aiding in informed decision-making (Bipath, 2022; Lai, 2010; Smith-Ruig, 2014). Effective mentorship in WIL is therefore crucial, as it not only enhances technical skills but also develops essential soft skills such as professionalism, communication, and modus operandi relevant to the field (Barbieri & Nguyen, 2025).
However, pre-service teachers (PSTs) undertaking WIL through school placements have repeatedly reported a range of emotional, cognitive, and professional challenges (Hordvik et al., 2019) that negatively impact their performance and well-being (Corcoran & O’Flaherty, 2022; Matthewman et al., 2018). High workloads, time pressures, institutional restrictions, and navigating complex professional relationships in new environments are cited as the top causes of significant stress, anxiety, and burnout (Böke et al., 2024; Hordvik et al., 2019). Mixed methods research into PST perspectives and experiences has shown that WIL placements can exacerbate feelings of impostor syndrome and praxis shock (Ballantyne & Retell, 2020) when transitioning rapidly from a theoretical understanding of teaching gained at university to its real-world application in school classrooms (Edwards & Nutall, 2015; Trent, 2023), further worsened by feelings of failure against the expectations of their mentors and students (Ambrosetti, 2014). A range of causes and mitigations for these unresolved difficulties have been and continue to be explored by research, including this present study.
Studies investigating the realities of schoolteachers in mentoring roles use quantitative and qualitative approaches to reveal that their dual functions as both coaches and assessors carry inherent challenges (Kang, 2021). When tasked with guiding PSTs’ development while simultaneously evaluating their competence, mentor teachers are expected to provide supportive feedback and constructive advice that foster development, while maintaining the rigorous professional standards expected of teachers (Kuhn et al., 2024). The dual responsibility of being a supporter and evaluator has led to tension and conflict between mentor teachers and PSTs (Loughland et al., 2023; Hudson, 2016; Ambrosetti, 2010). Such an arrangement also carries power imbalances, as mentor teachers typically determine PSTs’ placement outcomes. Where there are discrepancies between contemporary teaching strategies introduced in initial teacher education (ITE) programmes and entrenched classroom practices, as documented in Australian case studies, mentors often favour pedagogies and tools that are familiar to them (Le Cornu, 2012; Hobson et al., 2009). These preferences discourage innovation and add pressure on PSTs to conform and adopt practices that may feel inauthentic to their emerging professional identities (Ambrosetti & Davis, 2016; Jackson, 2015). Ultimately, this dynamic can limit PSTs’ potential for discovery and risks disenfranchisement from the teaching profession (Onen & Ulusoy, 2015; Hasson, 2018).
Mentor teachers’ roles are further complicated by a lack of dedicated time, insufficient training and institutional support, and the duty of their own classroom responsibilities (Moosa, 2018). Teacher workload is consistently challenging and limits mentor teachers’ capacity to mentor effectively (Gabay et al., 2019; Le Cornu, 2012). When stretched thin, mentors reveal in interviews that they view mentoring as an added burden rather than integral to teaching, leading to feedback that is either overly critical, undermining PSTs’ confidence, or vague and inconsistent, offering little actionable guidance (Pantachai et al., 2024; Ambrosetti, 2014; Ndebele & Legg-Jack, 2024). Overemphasis on gatekeeping can yield critical rather than constructive guidance, fostering impostor syndrome among PSTs and undermining their development (LaPalme et al., 2022). These challenges continue to vex WIL placements, to limit the pathways to success available to PSTs intending to join the teaching profession, and to evade solutions in the literature.
Amid these persistent challenges, generative AI (gen-AI) has demonstrated significant potential in addressing workload intensity (Hashem et al., 2024). For example, gen-AI can assist with lesson planning, resource creation, managing professional relationships, reducing workload, and cognitive demands while enhancing teaching efficiency (Mishra et al., 2023; Sanusi et al., 2024). Directly relevant to mentorship needs, AI applications, as demonstrated by researchers like Zhang et al. (2023) and Doroudi (2023), have proven effective in providing personalised feedback, facilitating reflective practice, and offering adaptive learning resources. These capabilities are particularly useful in addressing the complex challenges faced by PSTs, such as transitioning from theoretical knowledge to practical application in diverse teaching environments (Trent, 2023; Mishra et al., 2023).
AI tools have shown positive results across various educational settings, including STEM and creative disciplines (Ensher & Murphy, 2011; Klamma et al., 2020; Hensen et al., 2022; Barbieri & Nguyen, 2025). Hybrid models, blending AI’s efficiency with human mentors’ nuanced guidance, offer a scalable and inclusive approach to mentoring while addressing concerns about data privacy, bias, and transparency (Köbis & Mehner, 2021; Nam et al., 2022). Other studies have also suggested that gen-AI can help boost self-efficacy for PSTs by providing them with tools that allow for a more tailored and responsive learning experience (Ayanwale et al., 2024; A.-C. E. Ding et al., 2024; Samarescu et al., 2024; Sanusi et al., 2024). This is particularly important in the context of mentorship, where the traditional challenges include time constraints and the dual role of mentors as both supporters and assessors. AI-enhanced tools can help bridge the gap by automating administrative tasks such as grading and progress tracking, thus allowing mentors more time to focus on more nuanced aspects of teaching and mentorship (Zheng et al., 2021). Meaningful applications of gen-AI by PSTs are also contingent on users developing AI literacy, meaning a functional grasp of its workings, competency in its application in relevant contexts and ability to critically evaluate its outputs and processes (Kong et al., 2024). Indeed, AI literacy extends beyond technical competencies into a nuanced understanding of ethical considerations such as privacy, data security, and authenticity (Lee & Park, 2024; Barbieri & Nguyen, 2025). PSTs with higher AI literacy tend to demonstrate a higher level of autonomy in decision-making regarding how to utilise AI tools effectively (Karahan, 2023; Samarescu et al., 2024). Effects are not uniformly positive, however. 
Even studies that highlight the utility of AI literacy in fostering problem-solving capabilities towards self-regulated learning found no correlation between AI use and emotional regulation (Ayanwale et al., 2024).
The adoption of gen-AI in educational settings is not widespread, nor without its controversies (Al-Zahrani, 2024; Ma & Lei, 2024). Many authors have expressed concerns over barriers and risks such as the automatic perpetuation of biases, over-reliance on technology, and a reduction in social interaction, which is salient in the context of mentorship where personal connections and interactions are crucial (Kaufmann, 2021). Others have pointed out a clear need for training in AI literacy in ITE, aiming to empower PSTs to use AI not just as a tool for efficiency but as a support mechanism that enhances their capacity to thrive in complex and dynamic classroom settings (Chang et al., 2019; J. Ding et al., 2022; Sperling et al., 2024; Uzumcu & Acilmis, 2024). It is also important to note that institutional policies and practices often play an influential role in the integration of AI tools by staff, and therefore also PSTs on placements. Without positive reinforcement and adequate mentorship from mentor teachers, PSTs may struggle to effectively utilise AI technologies, however high their individual AI literacy and autonomy may be (Pokrivcakova, 2023).
The reliability of AI-generated responses in addressing complex human and social dynamics remains a contentious issue, and there is anxiety about the prospect of AI fully replacing functions of WIL placement mentorship (Karan & Angadi, 2024). Considering the contestability of gen-AI’s role in educational debates, this paper investigates to what extent gen-AI can support the mentorship that PSTs need to thrive, by asking:
  • How can the Mentoring and Self-Regulated Learning Pyramid Model help conceptualise the role of gen-AI in supporting mentoring and SRL development during WIL placements?
  • In what ways does the integration of gen-AI tools influence the development of SRL among PSTs during WIL placements?
  • How can course design and implementation structures be optimised to use gen-AI effectively in preparing PSTs for SRL and autonomy in WIL?
This paper applies an integrated framework combining Schunk and Mullen’s (2013) mentoring model with Nicol and Macfarlane-Dick’s (2006) principles of SRL, synthesised into the newly devised Mentoring and SRL Pyramid Model (MSPM) (Figure 1). Both Schunk and Mullen’s model and Nicol and Macfarlane-Dick’s model are established in the literature and have since been relied upon and developed by others (Bembenutty et al., 2024; Gao & Brown, 2023; Tise et al., 2023; Anderson et al., 2015), including within the field of educational technologies (Koong et al., 2024; Kong & Liu, 2023). The current study extends this line of work through the MSPM model, which visualises the layered interaction between effective mentoring and SRL to support PSTs during WIL placements. The MSPM’s pyramid structure encapsulates the progression from foundational mentoring to co-regulated learning to autonomy, while also recognising that real-world nuances, such as workload pressures and institutional constraints, may disrupt linearity, visualised by the gaps between the layers.
At the foundation of the MSPM lie the two fundamental mentoring functions: career-related support and psychosocial support (Schunk & Mullen, 2013), which are described as taking place during four phases of mentorship: initiation, cultivation, separation, and redefinition. Career-related support provides PSTs with practical guidance in professional topics such as classroom management, lesson planning, and networking, particularly during the cultivation phase. Psychosocial support, more critical in the initiation phase, focuses on trust-building, emotional reassurance, and fostering self-efficacy to build resilience (Schunk & Mullen, 2013; Ragins & Kram, 2007). These functions are interdependent and form the base of the pyramid, representing the mentor’s responsibility to establish a solid foundation of trust and competence for PSTs (Hudson, 2016).
The second layer of the MSPM transitions into the development of the mentee’s self-regulated processes stemming from the mentor’s guidance and support. Three core processes are captured here: (1) planning realistic teaching goals, (2) monitoring the effectiveness of teaching strategies, and (3) reflecting on different sources of feedback for continuous improvement (Zimmerman, 2002). These core processes are grounded in Nicol and Macfarlane-Dick’s (2006) seven SRL principles, which act as a bridge between mentor guidance and the PST’s emerging autonomy: (1) clarifying performance expectations, (2) facilitating self-assessment, (3) delivering high-quality feedback, (4) promoting peer dialogue, (5) fostering motivation and self-esteem, (6) addressing performance gaps, and (7) leveraging feedback for continuous improvement. SRL principles scaffold PSTs’ development, encouraging them to take an active role in their learning by setting goals, evaluating their performance, and refining strategies (Ng, 2016; Zimmerman, 2002). Developing SRL skills has been found to be instrumental in facilitating successful and sustainable placement performance (Rodriguez-Gomez et al., 2024).
The third layer of the MSPM pyramid represents co-regulated learning, a collaborative stage where both mentor and mentee engage in mutual growth. Here, the mentor shifts from direct guidance to facilitation, enabling the PST to take increasing ownership of their professional development (Mullen & Tuten, 2010). This layer is closely tied to the apex of the pyramid that reflects a dynamic where the PST’s emerging autonomy complements the mentor’s reflective practices, fostering shared responsibility for learning outcomes. Importantly, co-regulation transforms the mentoring relationship into a partnership, characterised by shared dialogue and professional exchange. Both the mentor and the PST benefit professionally from this phase.
While the MSPM suggests a progressive hierarchy, its tiers are not rigid. Its conceptual flexibility allows it to accommodate not only real-world nuances, but also emerging tools and contexts, including the evolving role of AI in mentoring functions. AI tools have the potential to influence all layers of the pyramid. This paper considers how AI could influence traditional mentoring processes, introducing flexibility into the application of the MSPM. Building on the conceptual foundation of the MSPM, this study addresses the research questions by examining the impact of certain mentoring practices and the integration of gen-AI on PSTs’ performance, self-regulation, and relationships with mentor teachers during WIL placements. The following methods were designed to explore these dynamics in depth and provide actionable insights.

2. Materials and Methods

2.1. Course Context

The sample for this study comprised 151 PSTs enrolled in ITE programmes at the University of Adelaide, recruited through convenience sampling, all of whom undertook a five-week WIL placement in secondary schools. Before their placements, PSTs participated in a six-week university course that combined face-to-face seminars with digital resources. The course aimed to prepare PSTs for WIL by focusing on learning outcomes related to placement goals, expectations, and assessment criteria, as well as developing essential classroom skills such as lesson planning, teaching strategies, behaviour management, and managing professional relationships. The only assessment associated with this course was a summative placement report completed by mentor teachers in schools.
A distinctive aspect of the course was its emphasis on developing AI literacy. From the start of the course, PSTs were trained to use ChatGPT (3.5, 4o or 4o mini) with consistently signed-in accounts to enhance their prompt engineering and iterative prompting skills. Through weekly seminar activities, PSTs created and tested prompts to seek AI-generated advice for specific WIL scenarios, including generating lesson plans for classes with diverse student needs, creating worksheets for students, responding to classroom behavioural incidents, and navigating mentor relationships. For each of these activities, PSTs critically evaluated AI responses for alignment with course learning outcomes and the Australian Professional Standards for Teachers (APSTs). Peer-review exercises also allowed PSTs to refine their prompts, assess the impact of linguistic nuances on AI outputs, and observe to what extent ChatGPT modified the nature and quality of its outputs in response to prompts. To support this work, PSTs were given access to prompt templates, scenario briefs, and a feedback rubric aligned to Nicol and Macfarlane-Dick’s (2006) SRL principles, which were used to guide reflective discussions in class. Each week focused on a different APST-linked teaching challenge (e.g., managing disengaged students, differentiating curriculum, or writing reflective statements), with AI used as a simulated ‘thinking partner’ to test and model possible strategies.
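As a minimal sketch of the prompt-template approach described above, the following Python fragment fills a scenario brief into a reusable template. The template wording, field names, and example scenario are hypothetical illustrations, not the course’s actual materials.

```python
# Hypothetical prompt template of the kind PSTs might iterate on between
# attempts; all wording and fields here are illustrative assumptions.
PROMPT_TEMPLATE = (
    "You are mentoring a pre-service teacher on a secondary school placement.\n"
    "Scenario: {scenario}\n"
    "Year level: {year_level}\n"
    "Constraint: {constraint}\n"
    "Give step-by-step advice aligned with the Australian Professional "
    "Standards for Teachers, and finish with one reflective question."
)

def build_prompt(scenario: str, year_level: str, constraint: str) -> str:
    """Fill the template so wording can be varied systematically per attempt."""
    return PROMPT_TEMPLATE.format(
        scenario=scenario, year_level=year_level, constraint=constraint
    )

prompt = build_prompt(
    scenario="Two students repeatedly talk over the teacher during group work.",
    year_level="Year 9 Science",
    constraint="The school discourages removing students from class.",
)
print(prompt)
```

Keeping the scenario, year level, and constraint as separate fields makes it easy to change one variable at a time and compare how the AI’s output shifts, which mirrors the peer-review exercises on linguistic nuance.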

2.2. Data Collection Methods

This study employed a mixed-methods triangulated approach, combining survey data from PSTs, placement outcomes, and qualitative insights from both enrolled PSTs and their mentor teachers’ feedback. With approval from the institutional human research ethics committee, data were collected over a six-month period at distinct stages of the WIL placement cycle. Pre-placement activities, including AI literacy training, took place over six weeks, followed by a five-week placement period. Survey data and placement report data were collected at the conclusion of placements, while semi-structured interviews were conducted within three weeks post-placement to capture reflective insights while experiences remained recent. This approach was designed to explore the interplay of mentor support, gen-AI usage effectiveness, PSTs’ self-reflections, and SRL during WIL placements.

2.3. Survey

The survey design was supported by pilot testing and expert content-validity review (Cobern et al., 2020) prior to data collection. Both processes produced feedback that led to modifications to question wording, positioning, and overall number. Pilot testing results also underwent inter-rater reliability checks to evaluate item validity, leading to further modification of survey items. The survey also passed a standardised validity test process mandated by the institution prior to its use.
PSTs were invited to complete a digital survey at the end of their WIL placements. The survey collected deidentified demographic data and responses to 20 questions, aligned with elements of the MSPM to ensure data reliability and comparability (see Appendix A). Questions explored three main areas:
  • Mentor support (questions 1–5);
  • SRL strategies (questions 6–9);
  • Gen-AI usage (questions 10–20).
Participants responded using a 5-point Likert scale, providing standardised, measurable insights into their experiences.
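The per-item means and standard deviations reported in the Results (e.g., Q14: mean = 4.25, SD = 0.73) can be sketched in a few lines of Python. The response values below are invented for illustration only and are not the study’s data.

```python
from statistics import mean, stdev

# Invented 5-point Likert responses; keys are survey question numbers,
# values are individual PST ratings. Not the study's actual data.
responses = {
    "Q14": [5, 4, 4, 5, 3, 4, 5, 4],
    "Q16": [4, 4, 5, 3, 4, 5, 4, 4],
}

for item, scores in responses.items():
    # stdev() gives the sample standard deviation, as typically reported
    # alongside the mean for Likert items.
    print(f"{item}: mean = {mean(scores):.2f}, SD = {stdev(scores):.2f}")
```

Reporting the sample (rather than population) standard deviation is the usual convention when, as here, the respondents are treated as a sample of a wider PST population.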

2.4. 1:1 Semi-Structured Interview

A sample of 12 PSTs was selected for 1:1 interviews from an open call for volunteers. Purposive sampling was applied (Palinkas et al., 2015), using survey responses to ensure representation across different levels of engagement with gen-AI tools, different degrees of reflective practice, and different placement outcomes. This approach allowed for a meaningful and balanced exploration of how gen-AI tools intersected with SRL. The interview protocol maintained deidentification and neutrality, with a focus on eliciting in-depth insights into the role of AI in WIL placements. Interviews were led by a researcher unknown to the participants and allowed for topic flexibility. Questions were semi-structured to elicit detailed, individual accounts of PSTs’ experiences with gen-AI, SRL, and autonomy within the context of their WIL placements (Katz-Buonincontro, 2022). Interviews were transcribed, deidentified, and analysed through reflexive thematic analysis (Braun & Clarke, 2022).

2.5. Placement Report Data

In assessing PST performance during their WIL placements, mentor teachers employed a detailed scoring system aligned with the 37 Australian Professional Standards for Teachers (APST). Each PST was evaluated on a 5-point scale across these standards, which range from lesson planning to the integration of technology in the classroom. The grading process was overseen by mentor teachers, experienced educators designated by the participating schools. These mentors were responsible for both observing and evaluating PSTs’ performance, providing scores that were then analysed to identify patterns of competency and areas needing improvement. The placement report graded PSTs on an overall pass-or-fail basis, based on their performance against the APSTs. Placement reports also invited mentors to provide qualitative feedback regarding PSTs’ performance against the APSTs. Qualitative feedback was deidentified and analysed through reflexive thematic analysis.

3. Results

3.1. Research Question 1: Conceptualising the Role of Gen-AI in Mentoring and SRL Through the Mentoring and SRL Pyramid Model (MSPM)

The Mentoring and Self-Regulated Learning Pyramid Model (MSPM) provides a structured framework for understanding how mentorship, self-regulation, and co-regulation interact to support the professional development of PSTs during WIL placements. It visualises mentorship as a layered process, beginning with career-related and psychosocial support at its foundation, transitioning into mentee self-regulated learning (SRL) processes, progressing into co-regulation, and culminating in mentee autonomy. This study’s findings suggest that AI’s role within this structure is multi-faceted: it does not replace strata in the pyramid but serves as a scaffolding tool across several layers, influencing how PSTs engage with their mentors, develop self-regulated learning strategies, and, in limited cases, co-regulate learning with their mentors.
At the foundation of the MSPM, mentorship is defined by career-related and psychosocial support, elements (derived from Schunk & Mullen, 2013) that mentors provide to help PSTs transition into the profession. In this study, career-related support focuses on classroom management, lesson planning, and professional networking, while psychosocial support is critical for trust-building, emotional reassurance, and fostering resilience. Findings indicate that gen-AI played a supplementary role in career-related support, particularly by assisting with lesson planning, instructional design, and resource development, areas where mentor feedback was sometimes inconsistent or limited due to time constraints. Survey responses reinforce this trend: Question 14 (To what degree did AI tools assist in filling gaps in mentor feedback when availability was limited?) received the highest mean score (4.25, SD = 0.73), suggesting that PSTs perceived AI as a useful bridge when mentor availability was limited. AI was not consistently perceived as effective in addressing the psychosocial dimension of mentorship, with Question 12 (How useful were AI tools in clarifying mentor expectations?) receiving a lower mean score (3.21, SD = 0.88). This discrepancy suggests that while gen-AI provided structured instructional guidance, it could not reliably replicate the nuanced, contextualised feedback and emotional reassurance that mentors provide. This limitation was echoed in interview data:
“AI helps fill in the blanks when my mentor is too busy to explain something, but I still need my mentor to actually tell me what they expect from me in this school”.
(PST4)
Thus, AI’s role within the foundation layer of the MSPM was largely supportive, enhancing the efficiency of career-related mentorship functions but lacking the interpersonal depth required for psychosocial mentoring.
At the second layer of the MSPM, which focuses on mentees’ self-regulated processes, AI’s impact was more pronounced. Self-regulation in WIL placements involves three key processes: planning teaching goals, monitoring the effectiveness of instructional strategies, and reflecting on feedback for continuous improvement (Zimmerman, 2002). Findings indicate that PSTs actively leveraged AI for SRL-related tasks, particularly in self-assessment, reflection, and instructional adjustments. Survey responses reinforce this trend, with Question 16 (How effectively did AI tools encourage reflection on teaching performance?) receiving a high mean score (4.11, SD = 0.79), and Question 17 (How useful were AI tools in suggesting adjustments to teaching strategies based on feedback?) scoring even higher (4.27, SD = 0.76). Moreover, AI usage significantly correlated with self-assessment confidence (r = 0.41, p < 0.01), indicating that PSTs who engaged with AI were more likely to demonstrate reflective teaching practices. Interview data further supported this link:
“AI sort of nudges you to see what you might’ve missed. Like, it’ll suggest, ‘Have you considered XYZ?’ It gets you to really reflect on things”.
(PST2)
However, while AI facilitated self-monitoring and adaptive teaching adjustments, the findings also point to the potential risks of over-reliance, which could undermine genuine SRL. Several PSTs noted that frequent AI use could lead to passivity in decision-making, rather than actively engaging with professional judgement. One participant reflected:
“If the technology is there, why not use it, right? […] I feel more confident about my choices when I can double-check with ChatGPT, but sometimes I feel stupid to have to rely on a machine to do something…”
(PST1)
This highlights a critical limitation in AI’s role at this level of the MSPM: while AI effectively supports structured reflection and instructional adjustments, its uncritical use may inhibit the development of independent professional reasoning. To maximise its benefits within SRL processes, AI must be integrated within a pedagogical structure that promotes deliberate, critical engagement rather than passive acceptance of AI-generated feedback.
The third layer of the MSPM, co-regulated learning, represents a transitional phase where mentors and PSTs engage in mutual development, shifting from directive mentorship to a collaborative professional exchange. While gen-AI use during placements was largely individualised, findings suggest that instances of co-regulated learning did emerge where some mentors and PSTs engaged in joint AI exploration for lesson planning and resource generation. One mentor reflected on how working alongside a PST introduced them to new ways of integrating gen-AI into instructional design:
“Honestly, I didn’t even think of using AI for lesson structuring until my PST showed me what they were doing. Now I see it could actually save time—so we started using it together”.
(MT4)
This example illustrates AI’s potential to facilitate co-regulation, where mentors and PSTs jointly explore AI applications, leading to reciprocal learning opportunities. However, such instances were limited and context-dependent, suggesting that AI-facilitated co-regulation requires intentional structuring within mentorship models.
At the apex of the MSPM, mentee’s professional autonomy, AI’s contribution was more mixed. Placement performance data suggests that PSTs who frequently engaged with gen-AI demonstrated stronger outcomes in structured competencies such as lesson planning (APST 3.1, Mean = 4.1) and ICT integration (APST 2.6, Mean = 4.3), reinforcing that AI-supported instructional planning contributes to professional preparedness. However, PSTs struggled significantly with unstructured, socially complex aspects of teaching, particularly managing classroom behaviour (APST 4.3, Mean = 2.6), and engaging with professional networks (APST 7.4, Mean = 2.6). These findings suggest that gen-AI, while useful in structured teaching preparation, was far less effective in fostering the adaptive, interpersonal decision-making skills required for true professional autonomy. One PST articulated this limitation:
“At times, I can be too eager to implement AI suggestions, ‘cause you know, they sound so reasonable, and plausible […] then I realised AI can’t know the limitations of the situation”.
(PST8)
Thus, while gen-AI played a crucial role in scaffolding SRL and enhancing instructional strategies, it did not fully support the development of independent professional judgement. This reinforces the importance of framing gen-AI as a tool for augmentation rather than substitution; it can facilitate aspects of professional autonomy, but true teaching independence requires a broader spectrum of experiences, including interpersonal challenges, professional networking, and mentor engagement.
These findings confirm that gen-AI maps onto multiple layers of the MSPM, but its effectiveness depends on how it is integrated into mentorship structures. At the foundation level, gen-AI enhanced career-related guidance but did not substitute psychosocial mentoring. At the self-regulation level, gen-AI strengthened PSTs’ reflective and adaptive teaching skills but carried the risk of over-reliance. At the co-regulation stage, mentor-PST gen-AI collaborations demonstrated promising potential for mutual professional learning opportunities, though these were not widespread. Finally, at the autonomy level, gen-AI supported structured instructional competencies but had limited impact on fostering the adaptive, interpersonal skills needed for independent teaching. These insights reinforce that gen-AI should be integrated as a mentorship scaffold, supporting structured reflection, co-regulated learning, and instructional adaptability—without replacing critical human mentorship interactions.

3.2. Research Question 2: The Influence of Gen-AI on SRL in WIL Placements

The integration of gen-AI into WIL placements played a dual role in influencing the development of SRL among PSTs. While gen-AI functioned as an effective scaffold for SRL by supporting self-assessment, reflection, and adaptability, concerns emerged regarding its potential to foster over-reliance and passive engagement. Findings indicate that gen-AI enhanced SRL when used critically, but when engaged with uncritically, it posed challenges to autonomy, reinforcing the need for structured gen-AI integration into teacher training.
Survey data highlight that PSTs perceived gen-AI as particularly beneficial for self-assessment, reflective learning, and goal-setting—key components of SRL. Responses to questions measuring AI’s role in these areas yielded consistently high scores. PSTs rated gen-AI’s ability to encourage self-assessment of and reflection on teaching performance highly (Q16, Mean = 4.11, SD = 0.79), pinpointing the mechanisms through which the technology enhances the second of the seven elements of SRL. Gen-AI was also judged as effective in suggesting instructional adjustments (Q17, Mean = 4.27, SD = 0.76), therefore acting on element six of the SRL framework. Gen-AI was also perceived as valuable for goal setting (Q18, Mean = 4.19, SD = 0.81), which is a function and consequence of leveraging feedback: element seven of the SRL framework. This reinforces the idea that the structured feedback mechanisms provided by gen-AI facilitated independent learning and professional development. Interview data further support this finding, with PSTs describing gen-AI as a continuous prompt for reflection and a structured learning tool. PST8 noted, “The feedback and suggestions from AI are like a reality check for me. Also, ChatGPT always sounds very supportive and reasonable, like it wants to make sure I don’t miss anything important”. A moderate correlation between gen-AI engagement and SRL confidence (r = 0.41, p < 0.01) further indicates that PSTs who demonstrated stronger SRL strategies were more likely to use gen-AI actively for reflection and instructional adjustments.
Placement scores also reveal a positive relationship between gen-AI engagement and teaching performance in structured competencies. PSTs who frequently used gen-AI demonstrated strong results in lesson planning (APST 3.1, Mean = 4.1) and ICT integration (APST 2.6, Mean = 4.3), suggesting that gen-AI played a constructive role in refining instructional approaches. However, PSTs across the board scored significantly lower in managing classroom behaviour (APST 4.3, Mean = 2.6), reinforcing gen-AI’s limitations in addressing real-time teaching complexities. This discrepancy highlights that while gen-AI effectively supports structured teaching preparation, it is less effective in fostering the adaptive decision-making required for in-the-moment instructional challenges.
Qualitative feedback given by mentors in placement reports further illuminates the dual role of gen-AI in shaping SRL, revealing both its benefits and risks. Several mentors acknowledged gen-AI’s capacity to scaffold independent learning, with one remarking that a PST “showed remarkable initiative in using AI to preemptively refine lesson plans, which made their instruction clearer and more engaging”. However, concerns were also raised about PSTs becoming overly dependent on AI-generated suggestions, with another mentor questioning, “AI-supported lesson plans were well-structured, but I wonder how well they can plan without it”. This tension between AI as a tool for independent learning and its potential to diminish critical decision-making underscores that while AI can facilitate SRL, its impact on autonomy is highly dependent on how it is integrated into the learning process.
Despite gen-AI’s effectiveness in facilitating SRL, interview responses reveal concerns about over-reliance, where some PSTs defaulted to gen-AI recommendations without critically evaluating their appropriateness. Survey results indicate that while gen-AI was perceived as useful for reflection and adaptability, it was less effective in fostering true autonomy in socially interdependent teaching environments. Lower ratings for gen-AI’s ability to clarify mentor expectations (Q12, Mean = 3.21, SD = 0.88) and to enhance behaviour management (Q15, Mean = 3.00, SD = 0.85) suggest that this technology could not replace the expertise and contextual understanding provided by human mentors’ observations over time. Interview data further illustrate this issue, with PST5 reflecting: “Yes, sometimes I catch myself just doing whatever the AI proposes without really thinking about whether it’s the best choice […] if time permits, I’d like to come up with my own ideas more”. These perspectives suggest that PSTs who engaged with gen-AI critically maintained autonomy, whereas those who used it passively risked becoming dependent on it. Without intentional guidance, gen-AI use risks shifting PSTs towards passive decision-making rather than fostering independent professional judgement.
Beyond its impact on SRL and autonomy, gen-AI was found to have limited effects on stress reduction during placements. Survey responses indicate that AI was rated lowest for its role in reducing placement-related stress (Q20, Mean = 2.41, SD = 0.92), suggesting that PSTs did not view gen-AI as a replacement for human mentorship and emotional support. Interview data reinforce this conclusion, with PST7 stating, “AI is helpful for structuring my thoughts, but it’s not the same as talking to a person who actually understands my situation”. This important finding highlights that while gen-AI may facilitate cognitive support and structured reflection, it cannot replicate the social and emotional scaffolding provided by mentor teachers and peers.

3.3. Research Question 3: Optimising Course Design for AI Integration in WIL Placements

The effectiveness of gen-AI in WIL placements is contingent on how it is embedded within course structures to support SRL. While survey, interview, and placement report data indicate that gen-AI can bridge gaps in mentorship and facilitate reflection, findings also reveal the risk of passive reliance and the absence of clear institutional guidelines for its use. A key benefit of gen-AI integration within the preparatory university course was its ability to build skills that PSTs would thereafter use during WIL placements. To optimise course design, AI integration must balance its potential to support SRL with pedagogical scaffolding that ensures critical engagement and independent professional development.
While at university, PSTs were guided to engage with branched scenarios which enabled them to choose pathways in simulations of placement events. PSTs then used gen-AI to seek feedback on the relative benefits and shortcomings of their decision-making through the simulations. It was evident that PSTs continued to engage with these technologically-mediated feedback processes while on WIL placement. Survey responses indicated that PSTs rated AI highly for filling feedback gaps while on placement (Q14, Mean = 4.25, SD = 0.73; Q16, Mean = 4.22, SD = 0.71), reinforcing the function of this technology as an accessible, on-demand reflective tool. Gen-AI was also valued for its ability to provide emotional support (Q13, Mean = 4.09, SD = 0.79) through its propensity to provide feedback framed in constructive and affirmative ways. Interview data support this finding, with PST5 remarking the following:
Mentors often didn’t have the time to provide feedback until long after the lesson […] I’d be left wondering if the lesson was any good […] so I would enter observations of how students reacted to my teaching and then AI would sort of explain it, rationalise it all.
Another PST highlighted the tone of AI-generated feedback:
My mentor typically listed all the things I did wrong in a lesson. They’re trying to be helpful but it can hit you hard. So I would pump their feedback through Chat for advice on how to improve and Chat would be so positive and encouraging, it made me feel better.
(PST9)
These findings underscore the benefits of practising feedback-seeking with gen-AI before placements. In the often-disorienting WIL environment, PSTs could rely on the continuity of a resource that acted as a sounding board for their teaching performance. PSTs also received feedback from mentors during WIL, of course, but gen-AI proved helpful in mitigating some of the emotionally detrimental impacts of that feedback. In this way, learning to use gen-AI for feedback while at university helped sustain SRL on WIL placement.
The AI literacy skills built through the preparatory course encouraged PSTs to use the technology throughout placement for a range of purposes, including lesson planning, unit planning, resource creation, and assessment design. For instance, PSTs in university classes were encouraged to generate curriculum-aligned lesson plans; they were then given examples of student diversity variables that they might encounter in a classroom, and engaged gen-AI to redraft their lesson plans to be more inclusive of those variables. Many PSTs continued this workflow during WIL placements, using gen-AI to redraft materials that had been submitted to mentors for review. The technology proved both effective and efficient in adjusting lesson plans, resources, and unit plans in light of mentor recommendations. The positive responses to Q17 (Mean = 4.27, SD = 0.77) demonstrate how time-saving and accurate gen-AI proved to be for many PSTs seeking to redraft their teaching materials. PST12 commented on this specific process:
Often my mentor would give me feedback on a lesson plan first thing in the morning of the same day I was due to teach it. That short time was only enough for me to get AI to help me adjust my notes before class, so I was able to act on the feedback in time.
These observations reflect the fact that PSTs had built sufficient AI literacy before placement to use gen-AI purposefully. In turn, this preparatory process during university classes sustained SRL during WIL placement and assisted PSTs in satisfying mentors’ requirements. This parallels design-based research studies, highlighting how course-based AI literacy training can be treated not just as content integration, but as a strategically designed intervention that builds transferable cognitive strategies and pedagogical resilience in real-world contexts.

3.4. Placement Scores

In the cohort of 151 PSTs relevant to the study, 139 (92%) successfully passed their placement, reflecting a high overall competency level. Detailed analysis showed that PSTs excelled in APST 2.6 (Information and Communication Technology), achieving an average score of 4.3, indicative of their strong ability to integrate technology effectively. This was closely followed by scores of 4.1 in APST 3.1 (Planning and Structuring Lessons) and 4.05 in APST 3.5 (Use of Effective Classroom Communication), suggesting proficient lesson preparation and communication skills. Conversely, the lowest scores were recorded in APST 4.3 (Managing Challenging Behaviour) and APST 7.4 (Engaging with Professional Networks), each averaging 2.6. These areas highlighted significant challenges, with mentors noting that many PSTs struggled to manage disruptive behaviours effectively and to engage with broader professional educational networks.
Although university preparatory classes encouraged PSTs to use gen-AI to seek feedback on behaviour management decisions taken during classroom simulations, this particular skill seems not to have transferred to improved SRL practices on WIL placement, as reinforced by the comparatively low mean score of 3.0 for Question 15. This finding reveals that gen-AI was less useful in fostering the in-the-moment decision-making that underpins responding to behavioural incidents in the classroom. This detailed scoring not only provides a quantitative measure of PST performance across various teaching competencies but is also supported by qualitative insights from mentors on areas of PST strength and weakness. Placement report feedback by MT7 connected behaviour management and gen-AI, stating the following:
I saw [PST] using AI to create all sorts of documents, which is fine, but then also for how to manage behaviour in class. This didn’t work and some of the advice given would have failed in practice. I encouraged [PST] to rely on me for this and over time there was improvement.
This observation highlights the centrality of the mentor’s role in shaping PSTs’ practices for those aspects of the profession that are most unpredictable, dynamic and complex. It also reveals how mentors were able to engage with gen-AI to develop reasoned, nuanced perspectives on the extent of usefulness of these tools during WIL placement.
Placement scores also provide an important lens through which the effectiveness of the preparatory course design can be interpreted. PSTs achieved notably high scores in APST 2.6 (ICT integration, Mean = 4.3) and APST 3.1 (Lesson Planning, Mean = 4.1), which align with the specific learning outcomes and AI literacy tasks embedded in the course prior to placement. These tasks involved iterative lesson planning, AI-assisted resource development and simulation-based decision-making, all of which were designed to train PSTs in applying gen-AI to classroom preparation. While placement scores alone cannot isolate the effect of gen-AI, they serve as indirect indicators of how course-integrated AI literacy training helped PSTs meet mentor expectations in structured domains of teaching.
The absence of clear institutional guidelines on gen-AI use in schools emerged as a frequent issue in mentors’ qualitative comments, raising concerns about fairness in assessment and transparency in AI-supported work. Several mentors expressed uncertainty about how to evaluate AI-enhanced teaching materials, with one stating, “It’s difficult to judge whether a PST’s work truly reflects their ability or if they are just good at using AI” (MT3). Another mentor acknowledged AI’s potential but highlighted the lack of institutional readiness: “[PST] showed us new ways to use AI, but as a school, we aren’t ready to integrate it yet” (MT5). These concerns suggest that AI tools were not uniformly perceived as reliable for assessing long-term pedagogical development.

4. Discussion

Charting the findings across the Mentoring and Self-Regulated Learning Pyramid Model (MSPM), it becomes evident that each layer of the model is variably emphasised depending on a range of factors. The integration of gen-AI tools in WIL placements tangentially supports the development of SRL by enabling PSTs to independently navigate teaching challenges while also drawing on technological enhancements, in part confirming the findings of J. Ding et al. (2022) and A.-C. E. Ding et al. (2024). Unlike the findings of the similar study by Ayanwale et al. (2024), in this study, gen-AI is demonstrated to provide an extent of psychosocial support to PSTs when the tone or delivery mode of mentors’ feedback may otherwise erode PSTs’ confidence. When mentors are unable to provide feedback in a timely way, gen-AI is shown to be useful in filling the gap, enabling PSTs to continue growing in their practice and thereby to self-regulate their development. Timing challenges also emerged in our findings, particularly when mentors provide feedback on teaching materials and teaching performance but leave PSTs little time to act upon it before the next lesson takes place, confirming the results of Ambrosetti and Dekkers (2010). In this case, gen-AI is demonstrated to assist PSTs in enacting recommended changes efficiently, in turn facilitating self-regulated learning.
New findings from this study reveal that a deeper synergy between mentoring and AI emerges when mentor-PST interactions explicitly address gen-AI tools. Several PSTs (such as PST2, PST6) emphasised the essential role of mentorship in bridging the gap between AI-generated feedback and practical teaching applications. They reported on how discussions with their mentor teachers about gen-AI helped them effectively translate AI suggestions into actionable strategies, thereby reinforcing the integration of SRL principles in real-world teaching scenarios. Mentor teachers who engaged with PSTs on gen-AI demonstrated an effective combination of the MSPM’s layers of mutual development and co-regulation. Here, mentor teachers serve not only as guides but also as collaborators, fostering a learning environment where both mentor and mentee evolve their practices in tandem. The findings therefore make the case for enhancing AI literacy among mentors themselves, perhaps through mandatory pre-placement induction and preparatory resources, so that they might be better able to address the technology’s benefits and limitations explicitly in their conversations with PSTs.
A key contribution of this study is the recognition that AI functioned as an extension of mentoring practice, rather than merely as an independent learning aid. This aligns with, but also extends, the findings of Klamma et al. (2020), who highlighted the promise of detective AI in mentoring but did not examine generative AI in this capacity. The MSPM conceptualises mentorship as a progression from direct mentor guidance to self-regulation and ultimately professional autonomy, and the results demonstrate that AI played a structured role within this trajectory. The preparatory course tasks developed PSTs’ AI literacy while simultaneously embedding core SRL strategies such as goal-setting, self-assessment, and responsive decision-making. On placement, AI further functioned as a supplemental support system, particularly in contexts where mentors lacked availability or expertise in certain areas, such as AI-enhanced lesson planning. This aligns with the co-regulation phase of the MSPM, where PSTs transition from relying on structured mentor input to strategically utilising external scaffolding—in this case, AI—to refine their professional judgement. These results imply that AI’s effectiveness in mentoring relationships is highly context-dependent and highlight the need for institutional frameworks that integrate AI as a structured component of mentoring, rather than an ad hoc individual tool.
Findings from the survey, interviews, and placement reports demonstrate that PSTs continued to use gen-AI during placement in ways that supported SRL and compensated for gaps in mentor availability. This was particularly evident in scenarios where mentor feedback was delayed or absent. PSTs reported using AI to reflect on lesson effectiveness, generate new instructional ideas, and evaluate alternative teaching strategies. PST2’s statement that “AI sort of nudges you to see what you might’ve missed…” suggests that gen-AI can act as a catalyst for deeper engagement with SRL processes by prompting reflection and offering new perspectives. Placement report data further confirmed that PSTs who engaged with AI for feedback and planning performed well in structured domains, suggesting that the preparatory course succeeded in fostering transferable SRL behaviours. PSTs who frequently used gen-AI for feedback-driven adjustments reported greater confidence in their SRL capabilities, indicating a stronger tendency towards professional autonomy. The effectiveness of gen-AI in augmenting mentorship is evident in the high scores for emotional support (Q13) and feedback supplementation (Q14). These findings align coherently with the MSPM, which underscores the importance of both psychosocial and career-related support in mentorship processes. AI’s role in fostering reflective practices and enhancing adaptability (Q16 and Q17) further exemplifies its utility in reinforcing SRL—principles central to the MSPM.
Gen-AI, however, seems to play a limited role in helping PSTs enhance the quality of mentoring in relation to behaviour management (Q15), an area requiring particularly nuanced human interaction. Addressing behavioural issues during WIL placement is an unpredictable and emotionally charged task, embedded within the complexities of evolving human relationships, as suggested by Porta and Hudson (2025). The limited impact that gen-AI had in mitigating this difficulty may also explain the finding that gen-AI had limited success in reducing stress induced by WIL placement (Q20). These combined observations highlight the importance of the quality of human interaction between mentors and PSTs.
In fostering PSTs’ SRL through behaviour management challenges, mentors often take on roles that extend beyond professional guidance to become emotional anchors, helping PSTs navigate the compounded challenges of emotional labour, the relevance of which was highlighted by Wenham et al. (2020). This empathetic approach seems to form a critical basis for fostering trust and resilience, essential for PSTs to advance to higher levels of autonomy and self-regulation. The noted incident in which a PST’s use of gen-AI did not enhance behaviour management practices, but traditional mentoring did (as reflected in the feedback from mentor teacher MT7), is a testament to how foundational emotional support can catalyse professional learning, reflecting a deep integration of the MSPM’s initial phases. Survey responses revealing a lack of emotional support in the work of many mentors suggest a crucial gap: while gen-AI can provide some of the scaffolding needed for professional growth in WIL placement, PSTs remain less equipped to manage the most complex psychological demands of the teaching profession. This imbalance points to the need for a more integrated mentorship approach that combines the MSPM’s layers of career support with emotional and psychosocial care, reinforcing previous findings by Moulding et al. (2014). In this way, mentoring would ensure that PSTs develop not only as educators but also as resilient professionals capable of navigating complex emotional landscapes, thereby mitigating the enduring risk of WIL-induced stress (Q20). These findings highlight the current inability of gen-AI to fully mediate interpersonal dynamics, a crucial component of the MSPM’s apex, which focuses on co-regulation and the development of mentee autonomy.
This limitation therefore underpins the essential role of human mentors, also highlighted by Ghiţulescu (2021), in providing the depth of relational support that gen-AI cannot replicate, reiterating the necessity of using gen-AI as a supplement to, rather than a replacement for, traditional mentorship.
Findings indicate that access to AI tools in WIL environments is uneven, which could potentially hinder the consistent application of SRL and mentorship principles across diverse educational environments. Addressing these inequities is paramount to fully realising AI’s potential as a supportive resource in WIL settings. Access to AI tools and AI literacy learning support is also paramount in university preparatory courses that take place before WIL placements. The benefits of integrating gen-AI into these courses revealed by this study can only be realised if tertiary institutions are able and willing to treat gen-AI as a valuable supplementary mentoring tool for PSTs. Looking forward, the MSPM serves as a robust framework for understanding the evolution of mentorship from basic support to more complex processes like co-regulation and autonomy. This model provides a structured pathway for integrating technical and psychosocial support, essential for developing both the technical competencies and emotional resilience of PSTs. Future training programmes for mentor teachers should be designed to enhance their ability to navigate these different aspects of the MSPM effectively, ensuring that PSTs receive well-rounded support.

5. Conclusions

This study showcases the transformative potential of gen-AI within WIL contexts, particularly its capacity to enrich the mentorship model and advance SRL and autonomy among PSTs on placement. By integrating gen-AI, mentors can amplify the quality and reach of their mentorship and enhance the capacities of PSTs. The dual roles of mentor teachers as both coaches and assessors critically shape the development of SRL and autonomy among PSTs. In environments where mentor teachers effectively balance these roles, gen-AI tools have facilitated a richer, more dynamic mentorship by providing PSTs with immediate, data-driven feedback and personalised learning insights. This allows mentors to focus more on coaching the more nuanced and complex aspects of the profession, such as behaviour management, in turn fostering a more nurturing and supportive relationship that directly benefits PSTs’ professional growth. The integration of gen-AI tools has also shown considerable promise in enhancing the development of SRL and autonomy among PSTs. These tools not only assist in lesson planning and delivery but also encourage PSTs to engage in critical self-reflection and adaptation, essential components of autonomous professional practice. However, the effectiveness of these tools varies, with some PSTs expressing concerns about over-reliance on gen-AI, which might impede their development of independent decision-making skills. Thus, while gen-AI can act as a powerful enabler of SRL, it must be carefully balanced with human mentorship.
The disparities observed across different WIL settings in the access to and effectiveness of gen-AI tools consistently point to the need for structured training and support systems for mentors, aimed at achieving greater equity. Such systems are essential to fully leverage gen-AI’s potential for both PSTs and mentor teachers. Future training programmes for mentor teachers should focus on the strategic use of AI to enhance their coaching abilities, ensuring that PSTs receive well-rounded support that extends beyond technological assistance. This includes educating mentors to critically integrate gen-AI tools into their teaching and mentoring practices in a way that maintains the essential human elements of empathy and understanding within the mentorship relationship. By shifting the higher education narrative towards a more balanced understanding of gen-AI’s capabilities and risks, its potential can be harnessed to close educational gaps and promote a more supportive form of work-integrated learning. The findings of this study advocate for a comprehensive approach that includes equitable access to AI tools, balanced integration within existing educational frameworks, and ongoing support and training for PSTs and educators. This holistic strategy can extend the benefits of gen-AI across the entire spectrum of WIL.

Author Contributions

Conceptualization, N.N.N. and W.B.; methodology, N.N.N. and W.B.; validation, W.B. and N.N.N.; formal analysis, W.B. and N.N.N.; investigation, W.B. and N.N.N.; resources, N.N.N.; data curation, N.N.N.; writing—original draft preparation, W.B. and N.N.N.; writing—review and editing, W.B. and N.N.N.; visualization, N.N.N.; project administration, N.N.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Low-Risk Human Research Ethics Committee of The University of Adelaide (H-2025-029).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

We are grateful to Alexi Grigoriadis for his technological expertise and support in developing several EdTech interactive elements to accompany our AI-integrated learning activities. We also thank Petra Galbraith, Matt Giacomini, and Denice Daou from the Placement Team for their dedicated support of pre-service teachers.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Gen-AI: Generative Artificial Intelligence
WIL: Work-Integrated Learning
PST: Pre-Service Teacher
SRL: Self-Regulated Learning
MSPM: Mentoring and Self-Regulated Learning Pyramid Model
APST: Australian Professional Standards for Teachers
ITE: Initial Teacher Education

Appendix A

Table A1. Survey questions.
Survey Question | MSPM Theme | Average Response
1. How often did your mentor teacher provide constructive feedback during your WIL placement? | Mentoring | 3.26
2. How clear was your mentor teacher about the criteria for evaluating your performance? | Mentoring | 3.19
3. To what extent did you feel supported by your mentor teacher in experimenting with new teaching methods? | Mentoring | 2.91
4. How would you rate the emotional support provided by your mentor teacher during challenging moments in the classroom? | Mentoring | 2.91
5. To what extent did your mentor teacher allow for autonomy in your teaching decisions? | Mentoring | 3.10
6. How often did you set specific teaching goals before each lesson during WIL placement? | SRL | 3.18
7. How often did you monitor your progress towards your teaching goals during your placement? | SRL | 3.58
8. To what extent did you adjust your teaching strategies based on feedback received during WIL? | SRL | 3.56
9. How confident are you in your ability to self-assess your teaching effectiveness after completing your placement? | SRL | 3.07
10. How supportive was your school site in your use of AI during your WIL processes? | AI | 3.23
11. How effective were AI tools in providing feedback about your WIL processes? | AI | 3.72
12. To what extent did AI tools help clarify mentor expectations for your performance during WIL placements? | AI | 3.21
13. How useful were AI tools in providing emotional support or reassurance during challenging moments of your WIL placement? | AI | 4.09
14. To what degree did AI tools assist in filling gaps in mentor feedback when mentor availability was limited? | AI | 4.25
15. To what extent did AI tools help you improve your behaviour management throughout the WIL placement? | AI | 3.00
16. How effectively did AI tools encourage you to reflect on your teaching performance after each lesson? | AI | 4.22
17. How useful were AI tools in suggesting adjustments to your teaching strategies based on feedback received during the placement? | AI | 4.27
18. How useful were AI tools in helping you set specific teaching goals during WIL placements? | AI | 4.19
19. To what degree did AI tools support your ability to self-assess and adjust your teaching strategies autonomously? | AI | 4.19
20. To what extent did AI tools reduce your stress during WIL placements? | AI | 3.41

Figure 1. Mentoring and SRL Pyramid Model (MSPM).

Share and Cite

Nguyen, N.N.; Barbieri, W. Mentorship in the Age of Generative AI: ChatGPT to Support Self-Regulated Learning of Pre-Service Teachers Before and During Placements. Educ. Sci. 2025, 15, 642. https://doi.org/10.3390/educsci15060642

