Educational Design Principles of Using AI Chatbot That Supports Self-Regulated Learning in Education: Goal Setting, Feedback, and Personalization

Abstract: The invention of ChatGPT and generative AI technologies presents educators with significant challenges, as concerns arise regarding students potentially exploiting these tools unethically, misrepresenting their work, or gaining academic merit without active participation in the learning process. To navigate this shift effectively, it is crucial to embrace AI as a contemporary educational trend and establish pedagogical principles for properly utilizing emerging technologies like ChatGPT to promote self-regulation. Rather than suppressing AI-driven tools, educators should foster collaboration among stakeholders, including educators, instructional designers, AI researchers, and developers. This paper proposes three key pedagogical principles for integrating AI chatbots in classrooms, informed by Zimmerman's Self-Regulated Learning (SRL) framework and Judgement of Learning (JOL). We argue that the current conceptualization of AI chatbots in education is inadequate, so we advocate for the incorporation of goal setting (prompting), self-assessment and feedback, and personalization as three essential educational principles. First, we propose that teaching prompting is important for developing students' SRL. Second, configuring reverse prompting in the AI chatbot's capability will help guide students' SRL and monitoring for understanding. Third, developing a data-driven mechanism that enables an AI chatbot to provide learning analytics helps learners reflect on their learning and develop SRL strategies. By combining Zimmerman's SRL framework with JOL, we aim to provide educators with guidelines for implementing AI in teaching and learning contexts, with a focus on promoting students' self-regulation in higher education through AI-assisted pedagogy and instructional design.


Introduction
Educational chatbots, also called conversational agents, hold immense potential for delivering personalized and interactive learning experiences to students [1,2]. However, the advent of ChatGPT and generative AI poses a substantial challenge to the role of educators, as it gives rise to concerns that students may exploit generative AI tools to obtain academic recognition without actively engaging in the learning process. In light of this transformative development, it is observable that AI represents a contemporary trend in education, and learners will inevitably use it. Rather than attempting to suppress the use of AI in education, educators should proactively explore ways to adapt to its presence. This adaptation can be effectively achieved by establishing fruitful collaborations between educators, instructional designers, and researchers in the AI field. Such partnerships should strive to explore the integration of pedagogical principles within AI platforms, ensuring that students not only derive benefits from AI but also acquire the essential skills mandated by the educational curriculum. Consequently, it becomes crucial for chatbot designers and

Review of Zimmerman's Multi-Level Self-Regulated Learning Framework
Zimmerman's multi-level SRL framework [7,8] encompasses four distinct levels: observation, emulation, self-control, and self-regulation (see Figure 1). Each level represents a progressive stage in the development of SRL skills. This framework guides us to explore how a chatbot can facilitate SRL at each stage of Zimmerman's framework. For example, when students use AI chatbots for their learning, they treat the chatbots as a resource. They enter questions or commands into the AI chatbots, hoping to seek clarifications or information from the chatbots for the task at hand. We assume that this type of utilization of AI chatbots elicits students' self-regulation. We propose that Zimmerman's multi-level SRL framework helps to interpret the SRL processes undertaken by students.

Figure 1. Zimmerman's multi-level SRL Framework (adapted from Panadero [7]).

Specifically, the observation level denotes a stage where students possess prior knowledge of how conversations occur in a real-life context and their general goal for the learning task. During this phase, students may set their goals, or primarily observe and learn from others who prompt the chatbot, gaining insights into the expected outcomes and interactions. Moving on to the emulation level, students demonstrate their comprehension of the task requirements by independently prompting the chatbot using their own words or similar phrases they have observed or that others have recommended. At this stage, students strive to replicate successful interactions they have witnessed, applying their understanding of the task to engage with the chatbot. They may also use their goals as the prompts fed into a chatbot, or they can use the prompts they observe from others. The self-control level, on the other hand, represents a critical juncture where students face decisions about their learning. Such decisions can be ethical conduct and academic integrity decisions, or further re-engagement (re-prompting the chatbot).
Specifically, once the chatbot generates a response, students must choose between potentially taking the chatbot's responses verbatim for their assignments (an academic integrity and ethical conduct decision) and modifying their approach, such as re-prompting or working out other strategies. This phase provides an opportunity for the chatbot to contribute by offering evaluations and feedback on students' work, guiding them to determine whether their output meets the required standards or whether further revisions are necessary. In sum, this self-control stage can be considered a two-way interaction between the chatbot and students. As students advance to the self-regulation level, they begin to recognize the potential benefits of the chatbot as a useful, efficient, and valuable learning tool to assist their learning. At the self-regulation level, students may seek an evaluation of a revised paragraph generated with the chatbot. Moreover, they might request that the chatbot provide a learning analytics report: fine-grained student data can be visualized as learning analytics in the chatbot, and students can receive recommendations for further learning improvement. This stage exemplifies the students' growing understanding of how the chatbot can facilitate their learning process, guiding them toward achieving specific objectives and refining their SRL skills. Zimmerman's multi-level SRL framework provides a comprehensive perspective on the gradual development of SRL abilities. It illustrates how students proceed from observing and emulating others, through exercising self-control, to ultimately achieving self-regulation by harnessing the chatbot's capabilities as a supportive learning resource.
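To make the idea of a chatbot-generated learning analytics report concrete, the sketch below aggregates a hypothetical log of student-chatbot exchanges into a simple summary for learner reflection. The event fields, prompt-type labels, and report format are our own illustrative assumptions, not features of any existing chatbot.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PromptEvent:
    """One student-chatbot exchange logged by a hypothetical system."""
    prompt_type: str   # e.g., "cognitive" or "metacognitive"
    revised: bool      # did the student re-prompt after this response?

def learning_analytics_report(events: list[PromptEvent]) -> dict:
    """Summarize fine-grained interaction data for learner reflection."""
    type_counts = Counter(e.prompt_type for e in events)
    revision_rate = sum(e.revised for e in events) / len(events) if events else 0.0
    return {
        "total_prompts": len(events),
        "prompts_by_type": dict(type_counts),
        "revision_rate": round(revision_rate, 2),
    }

# A toy interaction log: two cognitive prompts, one metacognitive prompt.
log = [
    PromptEvent("cognitive", revised=True),
    PromptEvent("cognitive", revised=False),
    PromptEvent("metacognitive", revised=True),
]
report = learning_analytics_report(log)
print(report)
```

A real system would of course log richer data (timestamps, goals, self-ratings), but even this minimal summary gives the learner something concrete to reflect on.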

Definition and Background of JOL
In Zimmerman's self-control and self-regulation phases of SRL, students must engage in some level of judgement about the chatbot's output so that they can decide on their next actions. Such judgement is known as self-assessment, and self-assessment is grounded in Judgement of Learning (JOL), a prominent concept in educational psychology.
JOL is a psychological and educational concept that refers to an individual's evaluation of their learning [6]. It reflects the extent to which an individual believes they have learned or retained new information, which can impact their motivation and behavior during the learning process [5]. Several studies have indicated that various factors could impact an individual's JOL, including the difficulty of the material, the individual's pre-existing knowledge and skills, and the effectiveness of the learning strategy used [5,6]. There is empirical evidence showing that people with a higher JOL tend to be more motivated to learn and more likely to engage in SRL activities, while those with a lower JOL may be less motivated and avoid difficult learning tasks [9,10]. JOL can also serve as a feedback mechanism for learners by allowing them to identify areas where they need to focus more effort and adjust their learning strategies accordingly [11,12]. Additionally, JOL can influence an individual's confidence, which in turn can affect their overall approach to learning [11].
One of the most influential theories of JOL is the cue-utilization approach, which proposes that individuals use various cues, or indicators, to assess their learning [5]. These cues can include things like how difficult the material was to learn, how much time was spent studying, and how well the material was understood. According to Koriat [5], individuals are more likely to have a higher JOL if they encounter more favorable cues while learning (e.g., domain-specific knowledge), and more likely to have a low JOL if they encounter less favorable cues (e.g., feelings of unfamiliarity or difficulty). Another important outcome of JOL is metacognitive awareness, which emphasizes the role of metacognitive processes, or higher-order thinking skills, in the learning process. Research [13,14] shows that individuals use metacognitive strategies, such as planning, monitoring, and evaluating, to guide their learning and assess their progress. As a result, individuals with higher JOL are more likely to use effective metacognitive strategies and be more successful learners. In certain conditions, students recognize their lack of understanding of specific concepts, a phenomenon referred to as "negative JOL" [15], which may result in the improvement of previously adopted learning skills and strategies. If the student does not change their strategy use following such judgement, their metacognitive behavior is called "static", implying that they are aware of their knowledge deficit but are resistant to change [16]. Various models of JOL have been proposed. For example, the social cognitive model [17] emphasizes the influence of social and environmental factors on learning, and the self-perception model suggests that individuals' JOL is influenced by their perceptions of their abilities and self-worth [18].
Taken together, incorporating Zimmerman's SRL theoretical framework and JOL into the existing capacity of AI in education has significant potential for improving students' SRL. Currently, AI technology operates in a unidirectional manner, where users (or students) prompt the generative AI tool to fulfill its intended function and purposes (in the following section, we also call this "goal setting"), as we have shown above with respect to the emulation and self-control stages. However, in education, it is crucial to emphasize the importance of bidirectional interaction (from user to AI and from AI to user). Enabling AI to initiate personalized learning feedback (i.e., learning analytics, which we elaborate on in Section 3.4) can create meaningful and educational interactions. In the sections below, we propose several educational principles that can guide the integration of chatbots into various aspects of educational practices.

Define Chatbots and Describe Their Potential Use in Educational Settings
The term "chatbot" refers to computer programs that communicate with users using natural language [19]. The history of chatbots can be traced back to the early 1950s [20]. In particular, ELIZA [21] and A.L.I.C.E. [22] were well-known early chatbot systems simulating real human communication. Chatbots are technological innovations that may efficiently supplement services delivered to humans. In addition to educational chatbots [23,24] and applying deep learning algorithms in learning management systems [25], chatbots have been used as a tool for many purposes and have a wide range of industrial applications, such as medical education [26,27], counseling consultations [28], marketing education [29], telecommunications support, and the financial industry [30,31].
In particular, research has been conducted to investigate the methods and impacts of chatbot implementation in education in recent years [25,32,33]. Chatbots' interactive learning feature and their flexibility in terms of time and location have made their usage more appealing and gained popularity in the field of education [23]. Several studies have shown that utilizing chatbots in educational settings may provide students with a positive learning experience, as human-to-chatbot interaction allows real-time engagement [34], improves students' communication skills [35], and improves students' efficiency of learning [36].
The growing need for AI technology has opened a new avenue for constructing chatbots when combined with natural language processing capabilities and machine learning techniques [37]. Smutny and Schreiberova's study [2] showed that chatbots have the potential to become smart teaching assistants in the future, as they might be capable of supplementing in-class instruction alongside instructors. In the case of ChatGPT, some students might have used it as a personal assistant, regardless of the underlying ethical conduct in academia. However, we would argue that utilizing generative AI chatbots like ChatGPT can be a platform for students to become self-regulated, under the condition that they are taught the context of appropriate use, such as when, where, and how they should use the AI chatbot system for learning. In addition, according to a meta-analysis conducted by Deng and Yu [38], chatbots can potentially have a medium-to-high effect on achievement or learning outcomes. Therefore, integrating AI chatbots into classrooms is now a question of how educators should do it appropriately to foster learning, rather than how educators should suppress it so that students observe the boundaries of ethical conduct.
Conventional teaching approaches, such as giving students feedback, encouraging students, or customizing course material to student groups, are still dominant pedagogical practices. If we take these conventional approaches into account while integrating AI into pedagogy, we believe that computers and other digital devices can open up far-reaching possibilities that have yet to be completely realized. For example, incorporating process data in student learning may offer students opportunities to monitor their understanding of materials as well as additional opportunities for formative feedback, self-reflection, and competence development [39]. Hattie [40] has argued that feedback has a median effect size of d = 0.75 on achievement, and Wisniewski et al. [41] have shown that highly informative feedback can produce an effect size of d = 0.99 on student achievement. Such feedback may foster an SRL process and strong metacognitive monitoring and control [8,15,42]. Given this evidence, we propose that AI tools that model teachers' scaffolding and feedback mechanisms after students prompt the AI will support SRL activities.
As stated earlier, under the unidirectional condition (student-to-AI), it has been unclear what instructional and pedagogical functions chatbots can serve to produce learning effects. In particular, it is unclear what the teaching and learning implications are when students use a chatbot to learn. We therefore propose an educational framework for integrating an AI educational chatbot based on learning science: Zimmerman's SRL framework along with JOL.
To the best of our knowledge, the design of chatbots has focused largely on backend design [43], user interface [44], and improving learning [36,45,46]. For example, Winkler and Söllner [46] reviewed the application of chatbots in improving student learning outcomes and suggested that chatbots could support individuals' development of procedural knowledge and competency skills such as information searching, data collection, decision making, and analytical thinking.
Specifically for learning improvement, since the rise of OpenAI's ChatGPT, there have been several emerging calls for examining how ChatGPT can be integrated pedagogically to support the SRL process. As Dwivedi et al. [47] write, "Applications like ChatGPT can be used either as a companion or tutor, [or] to support . . . self-regulated learning" [47] (p. 9). A recent case study also found that the feedback ChatGPT gave on student assignments is comparable to that of a human instructor [48]. Lin and Chang's study [49] and Lin's doctoral dissertation have also provided a clear blueprint for designing and implementing chatbots for educational purposes and documented several interaction pathways leading to effective peer reviewing activities and writing achievement [49]. Similarly, Zhu et al. [50] argued that "self-regulated learning has been widely promoted in educational settings, the provision of personalized support to sustain self-regulated learning is crucial but inadequately accomplished" (p. 146). Therefore, we address the emerging need to integrate chatbots in education and to show how chatbots can be developed or used to support learners' SRL activities. This is why fundamental educational principles for pedagogical AI chatbots need to be established. To do so, we have identified several instructional dimensions that we argue should be featured in the design of educational chatbots to facilitate effective learning for students, or at least to supplement classroom instruction. These instructional dimensions are (1) goal setting, (2) feedback and self-assessment, and (3) personalization and adaptation.

Goal Setting and Prompting
Goals and motivation are two highly correlated constructs in education, and these two instructional dimensions can guide the design of educational chatbots. In the field of education, the three terms learning goals, learning objectives, and learning outcomes have been used interchangeably, though with some conceptual differences [51]. Prøitz [51] noted: "the two terms [learning outcomes and learning objectives] are often intertwined and interconnected in the literature makes it difficult to distinguish between them" (p. 122). In the context of SRL and AI chatbots, we argue that the three are inherently similar to some extent. This is because, according to Burke [52] and Prøitz [51], these teacher-written statements contain a learning orientation and a purpose orientation that manifest teachers' expectations of students. Therefore, these orientations can serve as process-oriented or result-oriented goals that guide learners' strategies and SRL activities.
In goal-setting theory, learning goals (objectives or outcomes) that are process-oriented, specific, challenging, and achievable can motivate students and serve SRL functions. For instance, Locke and Latham [53] explained that goals may help shape students' strategies for tackling a learning task, help them monitor their progress in a studying session, and increase engagement and motivation. Consider a scenario in which a student needs to write a report. This result-oriented goal can give rise to two process-based sub-goals: first, to synthesize information A, B, and C during a writing session; second, to generate an argument. In order to synthesize information, the student may need to apply some strategies: the synthesis goal can drive the student to use process-oriented writing strategies, such as classifying, listing, or comparing and contrasting. To generate an argument, the student may need to find out what is missing in the synthesized information or what is common among the syntheses. Thus, this example demonstrates that goals articulate two dimensions of learning: the focus of attention and the resources needed to achieve the result. As Leake and Ram [54] argued, "a goal-driven learner determines what to learn by reasoning about the information it needs, and determines how to learn by reasoning about the relative merits of alternative learning strategies in the current circumstances" (p. 389).
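The decomposition in this scenario can be written down as a small data structure. The sketch below is our own illustration; the goal text and strategy labels are taken from the example above, and the structure itself is not drawn from any particular system.

```python
# A toy representation of a result-oriented goal decomposed into
# process-oriented sub-goals and candidate strategies.
goal = {
    "result": "Write a report",
    "sub_goals": [
        {"process": "Synthesize information A, B, and C",
         "strategies": ["classify", "list", "compare and contrast"]},
        {"process": "Generate an argument",
         "strategies": ["find gaps in the synthesis", "find commonalities"]},
    ],
}

def focus_of_attention(goal: dict) -> list[str]:
    """Goals articulate what to attend to: the process-level sub-goals."""
    return [sub_goal["process"] for sub_goal in goal["sub_goals"]]

print(focus_of_attention(goal))
```

Each sub-goal pairs a focus of attention (the process) with the resources it calls for (the strategies), mirroring the two dimensions of learning the example describes.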
SRL also involves learners exercising metacognitive control and metacognitive monitoring. These two processes are guided by pre-determined result-oriented outcomes: objectives or goals [8,42,55–57]. SRL researchers generally agree that goals can trigger several SRL events and metacognitive activities that should be investigated as they occur during learning and problem-solving activities [55,58,59]. Moreover, Paans et al.'s study [60] argues that learner-initiated SRL activities occurring at the micro-level and macro-level, including goal setting and knowledge acquisition, can develop and occur simultaneously. This implies that, in certain pedagogical tasks or problem-solving environments, such as working with chatbots, students need to identify goals by prompting the AI chatbot in a learning session corresponding to the tasks.
Additionally, goals can function as benchmarks by which learners assess the efficacy of their learning endeavors. When students possess the capacity to monitor their progress toward these goals, they are more likely to sustain their motivation and active involvement in the learning process [61]. Within the context of AI chatbot interaction, consider a scenario where a student instructs a chatbot to execute certain actions, such as synthesizing a given set of information. Subsequently, the chatbot provides the requested synthesis, allowing the student to evaluate its conformity with their expectations and the learning context. Within Zimmerman's framework of Self-Regulated Learning, this process aligns with the stages of emulation and self-control. Once a student prompts the chatbot for a response, they continuously monitor and self-assess its quality, subsequently re-prompting the chatbot for further actions. This interaction transpires within the stages of emulation and self-control, as students actively participate in a cycle of prompts, monitoring and adjustments, and subsequent re-prompts, persisting until they attain a satisfactory outcome. Yet we have to acknowledge that this interaction assumes student autonomy, in which students keep prompting the chatbot and relying on the chatbot's output. A more sophisticated form of student-chatbot interaction is bidirectional, where a chatbot is capable of reverse prompting, a concept into which we dive deeper in the next section.
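The prompt-monitor-re-prompt cycle described above can be sketched as a simple loop. In this sketch, both the chatbot and the self-assessment check are toy stand-ins of our own design (no real generative model is called), so the code illustrates the control flow of the emulation and self-control stages rather than any particular system.

```python
def simulated_chatbot(prompt: str, round_num: int) -> str:
    """Toy stand-in for a generative chatbot: its first answer omits
    source C, forcing the student to re-prompt (hypothetical)."""
    if round_num == 1:
        return "Synthesis covering A and B."
    return "Synthesis covering A, B, and C."

def meets_goal(response: str, goal_keywords) -> bool:
    """Crude self-assessment check: does the answer mention every source?"""
    return all(keyword in response for keyword in goal_keywords)

def prompt_until_satisfied(goal_keywords, max_rounds=3):
    """Cycle of prompting, monitoring the output, and re-prompting."""
    prompt = "Synthesize sources " + ", ".join(goal_keywords)
    for round_num in range(1, max_rounds + 1):
        response = simulated_chatbot(prompt, round_num)
        if meets_goal(response, goal_keywords):
            return response, round_num
        prompt += " (please address every source)"  # the student re-prompts
    return response, max_rounds

response, rounds = prompt_until_satisfied(["A", "B", "C"])
print(rounds)
```

Here the student's goal doubles as both the prompt and the benchmark for self-assessment, which is exactly the dual role of goals the paragraph above describes.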
We believe it is crucial to teach students how to effectively prompt a generative AI chatbot. As we mentioned earlier, prompts are the goals that students set for the AI chatbot, but often students just want the tool's output without engaging in the actual process. To better understand this, we can break prompts down into two types, cognitive prompts and metacognitive prompts, by drawing on Bloom's Taxonomy [62]. Cognitive prompts are goal-oriented, strategic inquiries that learners feed into a generative AI chatbot. Metacognitive prompts, on the other hand, are intended to foster learners' learning judgement and metacognitive growth. For example, in a writing class, a cognitive prompt could be, "Help me grasp the concept of a thesis statement". An outcome-based prompt might be, "Revise the following sentence for clarity". In the case of metacognitive prompts, a teacher could encourage students to reflect on their essays by asking the AI chatbot, "Evaluate my essay and suggest improvements". The AI chatbot may then function as a writing consultant that provides feedback. Undeniably, students might take a quicker route by framing the process in a more "outcome-oriented" way, such as asking the AI, "Refine and improve this essay". This is where the educator's role comes in: to explain the ethics of conduct and its associated consequences. Self-regulated learners stand as ethical AI users who care about the learning journey, valuing more than just the end product. In summary, goals, outcomes, or objectives can be utilized as defined learning pathways (also known as prompts) when students interact with chatbots. Students defining goals while working with a chatbot can be seen as setting parameters for their learning. This goal defining (or prompting) helps students clearly understand what they are expected to achieve during a learning session and facilitates their self-assessment of work while working with a chatbot.
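The distinction between cognitive and metacognitive prompts can be illustrated with a toy keyword heuristic. The cue list and the classifier below are our own simplification for demonstration, not a validated instrument; the example prompts are taken from the writing-class scenario above.

```python
# Cues that suggest the student is asking for judgement of learning
# rather than content help (illustrative, not exhaustive).
METACOGNITIVE_CUES = ("evaluate", "reflect", "suggest improvements", "assess")

def classify_prompt(prompt: str) -> str:
    """Label a prompt as metacognitive if it asks for judgement of
    learning; otherwise treat it as a cognitive, goal-oriented inquiry."""
    lowered = prompt.lower()
    if any(cue in lowered for cue in METACOGNITIVE_CUES):
        return "metacognitive"
    return "cognitive"

print(classify_prompt("Help me grasp the concept of a thesis statement"))
print(classify_prompt("Evaluate my essay and suggest improvements"))
```

A classifier like this could let a chatbot log the balance of prompt types a student uses, feeding the learning analytics discussed elsewhere in this paper.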

Feedback and Self-Assessment Mechanism
Self-assessment is a process in which individuals evaluate their learning, performance, and understanding of a particular subject or skill. Research has shown that self-assessment can positively impact learning outcomes, motivation, and metacognitive skills [63–65]. Specifically, self-assessment can help learners identify their strengths and weaknesses, re-set goals, and monitor their progress toward achieving those goals. Self-assessment, grounded in JOL, involves learners reflecting on their learning and making judgements about their level of understanding and progress [66]. Self-assessment is also a component of SRL, as it allows learners to monitor their progress and adjust their learning strategies or learning goals as needed [67]. Self-assessment can therefore be a feature of a chatbot, whether learners employ it to self-assess their learning on their own initiative or the chatbot system automatically prompts them to self-assess.
However, so far, we have found that current AI-powered chatbots, like ChatGPT, have limited capabilities in reverse prompting when used for educational purposes. Reverse prompting functions as guiding questions posed after students prompt the chatbot. As suggested in the last section, after learners identify their prompts and goals, chatbots can ask learners to reflect on their learning and provide "reverse prompts" for self-assessment. The concept of reverse prompts is similar to reciprocal questioning, a group-based process in which two students pose their own questions for each other to answer [68]. This method has been used mainly to facilitate the reading process for emergent readers [69–71]. For instance, a chatbot could ask a learner an explanatory question like "Now, I give you the two thesis statements you requested. Can you provide more examples of the relationship between the two statements X and Y?" or "Can you provide more details on the requested speech or action?", a reflective question like "How do you generalize this principle to similar cases?", or ask the learner to rate their understanding of a particular concept on a scale from 1 to 5 or to identify areas where they need more practice. We mock up an example of such a conversation in Figure 2.
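A minimal sketch of how reverse prompting could be configured is shown below, assuming the system simply appends a guiding question from a teacher-authored question bank to each answer. The bank contents and the wrapper function are our own illustration, not a feature of any deployed chatbot.

```python
# A hypothetical teacher-authored bank of reverse prompts, cycled per turn.
REVERSE_PROMPTS = [
    "Can you provide more examples of the relationship between X and Y?",
    "How do you generalize this principle to similar cases?",
    "On a scale of 1 to 5, how well do you understand this concept?",
]

def reply_with_reverse_prompt(answer: str, turn: int) -> str:
    """Append a guiding question to the chatbot's answer so the student
    is prompted to self-assess rather than simply consume the output."""
    follow_up = REVERSE_PROMPTS[turn % len(REVERSE_PROMPTS)]
    return f"{answer}\n\n[Reverse prompt] {follow_up}"

message = reply_with_reverse_prompt(
    "Here are the two thesis statements you requested.", turn=0)
print(message)
```

Cycling through explanatory, reflective, and rating questions in this way turns each chatbot answer into an occasion for judgement of learning rather than a finished product.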
The chatbot could then provide feedback and resources to help the learner improve in areas with potential knowledge gaps and low confidence levels. In this way, chatbots can be an effective tool for encouraging student self-assessment and SRL.
A great body of evidence shows that the integrative effect of self-assessment and just-in-time feedback goes beyond understanding and learning new concepts and skills [72]. Goal-oriented and criteria-based self-assessment (e.g., self-explanation and reflection prompts) allows the learner to identify the knowledge gaps and misconceptions that often lead to incorrect conceptions or cognitive conflicts. Just-in-time feedback (i.e., the information provided by an agent/tutor in response to the diagnosed gap) can then act as a knowledge repair mechanism if the provided information is perceived as clear, logical, coherent, and applicable by the learner [73].
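To make this self-assessment-plus-feedback loop concrete, the following minimal sketch pairs a criteria-based self-rating with just-in-time feedback messages. The rubric criteria, gap labels, rating threshold, and feedback text below are all illustrative assumptions for a hypothetical essay-writing task, not a prescribed implementation.

```python
# Illustrative sketch: criteria-based self-assessment (1-5 self-ratings)
# followed by just-in-time feedback on each diagnosed gap.
# All criteria and messages are hypothetical examples.

RUBRIC = {
    "thesis_clarity": "My thesis statement makes one clear, arguable claim.",
    "evidence_use": "I support each claim with a cited piece of evidence.",
    "counterargument": "I address at least one plausible counterargument.",
}

JUST_IN_TIME_FEEDBACK = {
    "thesis_clarity": "Try restating your thesis in one sentence; revisit the notes on arguable claims.",
    "evidence_use": "Pick one unsupported claim and add a source before continuing.",
    "counterargument": "List one objection a skeptical reader might raise, then respond to it.",
}

def diagnose_gaps(self_ratings: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the criteria the learner rated below the threshold (1-5 scale)."""
    return [c for c, rating in self_ratings.items() if rating < threshold]

def just_in_time_feedback(self_ratings: dict[str, int]) -> list[str]:
    """Map each diagnosed gap to a targeted feedback message."""
    return [JUST_IN_TIME_FEEDBACK[c] for c in diagnose_gaps(self_ratings)]

# A learner who feels unsure about evidence use receives one targeted message.
ratings = {"thesis_clarity": 4, "evidence_use": 2, "counterargument": 5}
for message in just_in_time_feedback(ratings):
    print(message)
```

The design choice here mirrors the passage above: the learner, not the system, supplies the judgment, and the feedback acts only as a repair mechanism for the gaps the learner has diagnosed.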
Based on Table 1 and the previous section on prompting and reverse prompting, teachers can also focus on facilitating learning judgment while teaching students to work with an AI chatbot. However, we propose that reverse prompting from an AI chatbot is also important so that educational values and SRL can be achieved. According to Zimmerman [8], a chatbot is a form of social assistance that students can obtain. If the chatbot can provide reverse prompts that guide thinking, reflection, and self-assessment, students can then execute strategies that fit their goals and knowledge level. When learners engage in self-assessment activities, they are engaging in the process of making judgments about their learning. Throughout self-assessment, learners develop an awareness of their strengths and weaknesses, which can help them modify or set new goals. If they are satisfied with their goals, they can use them to monitor their progress and adjust their strategies as needed. This process also aligns with the self-control phase of Zimmerman's SRL model. At this phase, students can decide whether to go with what the chatbot suggests or to take what they have and implement the suggestions that the chatbot provides. For example, a chatbot could, in reverse, ask learners to describe their strategies for solving a particular problem or to reflect on what they have learned from a particular activity. This type of reflection can help learners become more aware of their learning processes and develop more effective strategies for learning [74,75]. Thus, the reverse interaction from chatbot to students provides an opportunity for developing self-awareness: learners become more self-directed, self-regulated, and independent in their learning while working with the chatbot, which can lead to improved academic performance and overall success.
Furthermore, by incorporating self-assessment prompts into educational chatbots, learners can receive immediate feedback and support as they engage in the self-assessment process, which can help to develop their metacognitive skills further and promote deeper learning.
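As a minimal sketch of the reverse-prompting pattern discussed above, the wrapper below appends a reflective question and a 1-to-5 JOL confidence check to every chatbot answer, and routes low self-reported confidence to remedial support. The `generate_answer` function is a stand-in for any chatbot backend, and the question templates and follow-up messages are illustrative assumptions.

```python
import random

# Sketch of reverse prompting: after answering, the chatbot appends a
# reflective question and a 1-5 JOL confidence check. `generate_answer`
# is a placeholder for a real model call; the templates are assumptions.

REVERSE_PROMPTS = [
    "Can you give one more example of the relationship between X and Y?",
    "How would you generalize this principle to a similar case?",
    "In your own words, what was the key idea of my answer?",
]

def generate_answer(prompt: str) -> str:
    # Placeholder for a real chatbot/LLM backend.
    return f"[model answer to: {prompt}]"

def respond_with_reverse_prompt(prompt: str) -> dict:
    """Return the answer plus a reverse prompt and a JOL confidence question."""
    return {
        "answer": generate_answer(prompt),
        "reverse_prompt": random.choice(REVERSE_PROMPTS),
        "jol_check": "On a scale from 1 to 5, how confident are you that you understood this?",
    }

def follow_up(jol_rating: int) -> str:
    """Low self-reported confidence triggers review resources instead of moving on."""
    if jol_rating <= 2:
        return "Let's review this together: here are two practice problems and a summary."
    return "Great, try applying the idea yourself before asking for the next answer."
```

In this sketch the learner's JOL rating, rather than the model's own confidence, drives what happens next, which is the inversion of roles the section argues for.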

Facilitating Self-Regulation: Personalization and Adaptation
Personalization and adaptation are unique characteristics of learning technology. When students engage with an LMS, the LMS platform inherently captures and records their behaviors and interactions. This can encompass actions such as page views, time allocation per page, link traversal, and page-specific operations. Even the act of composing content within a discussion forum can offer comprehensive trace data, such as temporal markers indicating the writing and conclusion of a discussion forum post, syntactic structures employed, discernible genre attributes, and lexical choices. This collection of traceable data forms the foundation for the subsequent generation of comprehensive learning analytics for learners, manifested as either textual reports or information visualizations, both encapsulating a synthesis of pertinent insights regarding the students' learning trajectories [76]. These fine-grained analytical outputs can fulfill a key role in furnishing students with a holistic overview of how they learn and what they learn, fostering opportunities for reflection, evaluation, and informed refinement of their learning tactics. Therefore, by using data-driven insights and algorithms described above, chatbots can be tailored to the individual needs of learners, providing personalized feedback and guidance that supports their unique learning goals and preferences. However, we believe that current AI-powered chatbots are inadequate for education; in particular, chatbots thus far lack capabilities for learning personalization and adaptation. A chatbot, like ChatGPT, often acts as a knowledge giver unless a learner knows how to feed the prompts. Our framework repositions the role of educational AI chatbots from knowledge providers to facilitators in the learning process. By encouraging students to initiate interactions through prompts, the chatbot assumes the role of a learning partner that progressively understands the students' requirements.
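The trace-data aggregation described above can be sketched as follows: a handful of hypothetical LMS page-view events are rolled up into a short textual analytics summary that a chatbot could show a learner for reflection. The event schema and page names are assumptions for illustration and do not correspond to any particular LMS.

```python
from collections import defaultdict

# Illustrative sketch: aggregating hypothetical LMS trace events
# (page views with durations) into a learner-facing textual report.

def summarize_trace(events: list[dict]) -> str:
    """Each event is {"page": str, "seconds": int}. Returns a per-page summary."""
    time_per_page: dict[str, int] = defaultdict(int)
    views_per_page: dict[str, int] = defaultdict(int)
    for e in events:
        time_per_page[e["page"]] += e["seconds"]
        views_per_page[e["page"]] += 1
    lines = [
        f"- {page}: {views_per_page[page]} visit(s), {secs // 60} min {secs % 60} s total"
        for page, secs in sorted(time_per_page.items())
    ]
    return "Your activity this week:\n" + "\n".join(lines)

events = [
    {"page": "Module 3 readings", "seconds": 540},
    {"page": "Assignment 2", "seconds": 30},
    {"page": "Module 3 readings", "seconds": 300},
]
print(summarize_trace(events))
```

A report like this gives the learner the holistic overview of "how they learn and what they learn" that the paragraph above describes, as a starting point for reflection rather than a grade.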
As outlined in the preceding section, the chatbot possesses the capability to tactfully prompt learners when necessary, offering guidance and directions instead of outright solutions based on the given prompts.
Learner adaptation can be effectively facilitated through the utilization of learning analytics, which serves as a valuable method for collecting learner data and enhancing overall learning outcomes [75]. Chatbots have become more practical and intelligent through improvements in natural language processing, data mining, and machine-learning techniques. The chatbot could use the trace data collected on the LMS to provide students with the best course of action. Data that the chatbot can collect from the LMS can include analysis of students' time spent on a page, students' clicking behaviors, deadlines set by the instructors, or prompts (goals) initiated by the students. For example, suppose a student has not viewed their module assignment pages on a learning management system for a long time, but they request the chatbot to generate a sample essay for their assignments. Instead of giving the direct output of a sample essay, the chatbot can direct the student to view the assignment pages more closely (e.g., "It looks like you haven't spent enough time on this page; I suggest you review this page before attempting to ask me to give you an essay"), as shown in Figure 3. In this way, learning analytics can also help learners take ownership of their learning by providing real-time feedback on their progress and performance. By giving learners access to their learning analytics, educators can empower students to actively learn and make informed decisions about improving their performance [75,77]. An example is shown in Figure 4. Therefore, through personalized and adaptive chatbot interactions, learners can receive feedback and resources that are tailored to their specific needs and performance, helping to improve their metacognitive skills and ultimately enhancing their overall learning outcomes.
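The sample-essay scenario from Figure 3 can be sketched as a simple gate: before honoring the request, the chatbot checks hypothetical LMS trace data for time spent on the relevant assignment page and redirects the learner if it falls below a threshold. The threshold value and the shape of the trace data are illustrative assumptions.

```python
# Sketch of the Figure 3 scenario: gate a sample-essay request on
# (hypothetical) LMS trace data. Threshold and data shape are assumed.

MIN_SECONDS_ON_PAGE = 120  # assumed cutoff; an instructor could tune this

def handle_essay_request(student_trace: dict[str, int], assignment_page: str) -> str:
    """student_trace maps page names to total seconds the student spent viewing them."""
    if student_trace.get(assignment_page, 0) < MIN_SECONDS_ON_PAGE:
        return (
            "It looks like you haven't spent enough time on this page; "
            "I suggest you review it before asking me for a sample essay."
        )
    return "[sample essay generated here]"

print(handle_essay_request({"Assignment 2": 15}, "Assignment 2"))   # redirected to review
print(handle_essay_request({"Assignment 2": 600}, "Assignment 2"))  # request honored
```

The point of the gate is pedagogical rather than technical: the chatbot withholds an outcome-based output until the trace data suggest the learner has engaged with the prerequisite material.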

Limitations
Lo's [78] comprehensive rapid review indicates three primary limitations inherent in generative AI tools: (1) biased information, (2) constrained access to current knowledge, and (3) a propensity for disseminating false information [78]. Baidoo-Anu and Ansah [79] underscore that the efficacy of generative AI tools is intricately linked to the training data that were fed into the tool, wherein the composition of training data can inadvertently contain biases that subsequently manifest in the AI-generated content, potentially compromising the neutrality, objectivity, and reliability of information imparted to student users. Moreover, the precision and accuracy of the information generated by generative AI tools further emerge as a key concern. Scholarly investigations have discovered several instances where content produced by ChatGPT has demonstrated inaccuracy and spuriousness, particularly when tasked with generating citations for academic papers [79,80].
Amidst these acknowledged limitations, our position leans toward an emphasis on students' educational use of these tools, transcending the preoccupation with the tools' inherent characteristics of bias, inaccuracy, or falsity. Based on our proposal, we want to develop students' capacity for self-regulation and discernment when evaluating received information. Furthermore, educators bear an important role in guiding students to harness the potential of generative AI tools to enhance the learning process, rather than treating generative AI tools simply as sources of textbook-like information. This justifies the reason why we integrate Zimmerman's SRL model, illustrating how the judicious incorporation of generative AI tools can foster students' self-regulation, synergizing with the guidance of educators and the efficacy of instructional technology design.

Concluding Remarks
This paper explores how educational chatbots, or so-called conversational agents, can support student self-regulatory processes and self-evaluation in the learning process. As shown in Figure 5 below, drawing on Zimmerman's SRL framework, we postulate that chatbot designers should consider pedagogical principles, such as goal setting and planning, self-assessment, and personalization, to ensure that the chatbot effectively supports student learning and improves academic performance. We suggest that such a chatbot could provide personalized feedback to students on their understanding of course material and promote self-assessment by prompting them to reflect on their learning process. We also emphasize the importance of establishing the pedagogical functions of chatbots to fit the actual purposes of education and supplement teacher instruction. The paper provides examples of successful implementations of educational chatbots that can inform the SRL process as well as self-assessment and reflection based on JOL principles. Overall, this paper highlights the potential benefits of educational chatbots for personalized and interactive learning experiences while emphasizing the importance of considering pedagogical principles in their design. Educational chatbots may supplement classroom instruction by providing personalized feedback and prompting reflection on student learning progress. However, chatbot designers must carefully consider how these tools fit into existing pedagogical practices to ensure their effectiveness in supporting student learning.
Through the application of our framework, future researchers are encouraged to delve into three important topics of inquiry that can empirically validate our conceptual model. The first dimension entails scrutiny of educational principles. For instance, how can AI chatbots be designed to support learners in setting and pursuing personalized learning goals, fostering a sense of ownership over the learning process? Addressing this question involves exploring how learners can form a sense of ownership over their interactions with the AI chatbots, while working towards the learning objectives.
The second dimension involves a closer examination of the actual Self-Regulated Learning (SRL) process. This necessitates an empirical exploration of the ways AI chatbots can effectively facilitate learners' self-regulated reflections and the honing of self-regulation skills. For example, how effective is an AI's feedback on a student's essay, and how do students develop subsequent SRL strategies to address the AI's feedback and evaluation? Additionally, inquiries might also revolve around educators' instructional methods in leveraging AI chatbots to not only nurture learners' skills in interacting with the technology but also foster their self-regulatory processes. Investigating the extent to which AI chatbots can provide learning analytics as feedback that harmonizes with individual learners' self-regulation strategies is also of significance. Moreover, ethical considerations must be taken into account when integrating AI chatbots into educational settings, ensuring the preservation of learners' autonomy and self-regulation.
The third dimension is related to user interface research. A research endeavor could revolve around identifying which conversational interface proves the most intuitive for learners as they engage with an AI chatbot. Additionally, an inquiry might probe the extent to which the AI chatbot should engage in dialogue within educational contexts. Furthermore, delineating the circumstances under which AI chatbots should abstain from delivering outcome-based outputs to learners constitutes a worthwhile avenue of investigation. Numerous additional inquiries can be derived from our conceptual model, yet the central message that we want to deliver remains clear: Our objective is to engage educators, instructional designers, and students in the learning process while navigating in this AI world. It is important to educate students on the potential of AI chatbots to enhance their self-regulation skills while also emphasizing the importance of avoiding actions that contravene the principles of academic integrity.