Article

Chatbot for Self-Regulated BGCE Learning: Effects of Visible-Design Thinking Integration on Creativity and Growth Mindsets in Entrepreneurship

1 Department of Development Economics, Faculty of Economics and Business, Universitas Negeri Malang, Malang 65145, Indonesia
2 Department of Special Education, Faculty of Education, Universitas Negeri Malang, Malang 65145, Indonesia
3 Department of Educational Technology, Faculty of Education, Universitas Negeri Malang, Malang 65145, Indonesia
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(4), 582; https://doi.org/10.3390/educsci16040582
Submission received: 14 January 2026 / Revised: 11 February 2026 / Accepted: 5 March 2026 / Published: 7 April 2026

Abstract

The Chatbot-Supported Making BGCE Thinking Visible (CS-MBTV) program was developed to foster the creativity, self-regulation, and growth mindset that students need. High school students in Malang completed five chatbot-supported modules within a learning process structured around design thinking. The study involved 120 high school students in Malang, divided into an experimental group (62 students) that participated in the CS-MBTV program and a control group (58 students) that received no special treatment. The study employed a quasi-experimental research design with a pretest and posttest to assess students over 12 weeks. The findings indicate that implementing the chatbot can optimize student creativity and develop students’ growth mindsets, although the intervention did not consistently reduce students’ fixed mindsets. Reflective learning helped students recognize the need for continued development and supported cognitive transitions throughout program implementation. These findings can inform policy to improve entrepreneurship education programs and BGCE learning in schools, achieving greater impact.

1. Introduction

The blue, green, and circular economy (BGCE) learning process is an alternative solution that is environmentally responsible and has more complex project tasks (Jos et al., 2025). BGCE learning can hone creativity and develop alternative solutions, but entrepreneurship projects require iteration and data validation, not just a “vehicle for creativity” (Foster, 2021; M. T. Lee et al., 2025; Pakseresht et al., 2025; Thompson & Schonthal, 2020). Project-based learning faces obstacles related to the need for comprehensive, multidisciplinary solutions, bottlenecks in feedback processes in large classes during the performance phase, and students’ tendency to offer popular solutions and imitate (Gan et al., 2015; Wong et al., 2019; Yu et al., 2024).
An iterative learning process that supports students’ creative thinking was used to conduct initial validation of alternative solutions characteristic of BGCE innovation. Students’ mindsets need to be honed to foster a growth mindset through formative feedback during the SRL performance phase (Dai, 2024; Schiele et al., 2025). Students are encouraged to learn from failure, keep trying, and increase learning consistency as part of a growth mindset (Medina, 2017; Su et al., 2026). The training strategy serves as an intervention mechanism, providing students with feedback and opportunities for iteration within an innovative learning process (Farrokhnia et al., 2025; Narendorf et al., 2025; Xia et al., 2025). Project-based learning does not necessarily foster a growth mindset; it needs to be supported by iterative experiences and feedback (Kabigting et al., 2025; Zhou et al., 2025).
In this study, the term “visible-design thinking” is not intended as a new theoretical concept, but rather as an instructional integration of visible thinking routines and design thinking sprints orchestrated across a series of learning sessions. Visible thinking is used to externalize students’ reasoning through problem framing, decision justification, and reflection activities. In contrast, design thinking is used as an innovation sprint framework that guides the stages of empathy, problem definition, ideation, prototype, and rapid test (Doss & Bloom, 2023; Liang et al., 2026; Sevimli-Celik & Güvelioglu, 2026). To maintain consistency with the term, this manuscript uses the phrase “integration of visible thinking routines and design thinking sprints” to refer to the learning approach. The empathy aspect of exploring problems through user interviews supports the prototype iteration process, which aligns with the design thinking approach (Daryanes et al., 2023; Tran-Duong & Do-Hung, 2025; Zeng, 2025). The combination of these two approaches can bridge the visible thinking process (reasoning externalization) with the design sprint (iteration structure) through scaffolding during the performance phase.
BGCE entrepreneurship learning requires students to carry out complex and iterative project tasks, so success is determined more by the quality of the process in the implementation phase, such as progress monitoring, prototype revision, and time management, than by initial ideation alone (Held & Mejeh, 2024; Tise et al., 2023). In large classrooms, a major bottleneck arises when teachers are unable to provide quick, specific process feedback to each group, leading students to stop at the first solution and have difficulty maintaining self-regulation during innovation sprints. Although educational chatbots are increasingly used, most research still emphasizes their question-and-answer and learning engagement functions. In contrast, evidence on chatbots as rubric-based self-regulation scaffolding in the project implementation phase is still limited, especially in the context of BGCE entrepreneurship at the high school level (Basuki et al., 2023; Gan et al., 2015; García-Porta et al., 2024; Wong et al., 2019; Yu et al., 2024). This study fills this gap by testing CS-MBTV, a chatbot designed not as a Q&A tool but as a rubric-based, multi-level feedback provider that guides monitoring, revision, and decision-making during the implementation phase. Thus, the contribution of this study lies in the reinforcement of scalable learning designs for large classes, as well as in empirical evidence on increasing creativity and growth mindsets through chatbot-assisted scaffolding during the performance phase (Micheli et al., 2019; Verganti et al., 2020).
To clarify the direction of the analysis and improve the replicability of the study, the revised manuscript adds a research questions and hypotheses subsection that formulates hypotheses H1–H3 concisely and consistently with the main outcomes of the research, namely creativity, creative mindsets (GI, GE, FI, FE), and indicators of the SRL/reflection process during the BGCE project. In addition, to facilitate evaluation and replication of the intervention procedures, the manuscript provides Supplementary Materials. The integrative conceptual model in this study explains how the integration of visible thinking routines, design thinking sprints, and rubric-based scaffolding chatbots works together to strengthen self-regulation (especially during the performance phase), thereby encouraging increased creativity and growth mindsets in BGCE entrepreneurship projects. Further explanation is shown in Figure 1.

2. Theoretical Framework

2.1. Dimension of Creativity and Creative Mindset in BGCE Solution Development

The dimension of creativity in BGCE learning is defined as the ability to create innovative solutions that add greater value, thereby having a significant impact on the environment and the economy (Tran-Duong & Do-Hung, 2025). The development of innovations that address users’ needs requires a more adaptive creativity process, especially in terms of their ecological impact. The reasoning process must address two main aspects: novelty and usefulness, in exploring the idea of BGCE issues through product validation iteration steps (Anjum et al., 2021; M. T. Lee et al., 2025; Roth et al., 2022). Cross-dimensional knowledge needs to be integrated into the creativity process to generate relevant, data-driven ideas for modeling and decision-making. In addition, the limited number of competent teachers hinders implementation and poses a risk of failure. The quality of the argument of ideas is key to building creativity in BGCE learning that is feasible to implement and has a wide impact (White et al., 2026).
The aspect of self-regulation can help students develop creative performance in responding to challenges, so they do not rely solely on cognitive processes. Previous literature on implicit theories shows that students vary in their ability to choose the best strategy and to use feedback in the face of existing constraints (H. Li et al., 2026; Zhou et al., 2025). Creativity mindsets are interpreted as students’ beliefs about whether creativity is developed through a series of training programs or is an innate talent already possessed (Han & Park, 2025; Liang et al., 2026; Tran-Duong & Do-Hung, 2025). Risk-taking skills and the willingness to test explored ideas early are part of the growth mindset that students need to develop. A fixed mindset contrasts with the characteristics of a growth mindset: fixed mindsets tend to encourage giving up quickly, avoiding challenging tasks, and resisting the constructive criticism that supports innovation (Warren et al., 2018). Students in school can be grouped into growth-mindset and fixed-mindset profiles. Students with a growth mindset can be adaptive, exploring alternative solutions to existing problems and making periodic improvements in the BGCE learning process. In contrast, students with a fixed mindset tend to imitate existing solutions in BGCE learning and do not develop initial ideas appropriate to the context of the problem.
Several studies have grouped mindsets into four types: internal growth, external growth, fixed internal, and fixed external. Students’ mindsets can directly impact their learning experiences and outcomes (Krskova & Breyer, 2023; White et al., 2026; Zhou et al., 2025). The concept of internal growth is interpreted as an encouragement to undertake independent learning efforts. The learning environment, school policies, and teacher support are among the factors that influence external growth. Fixed internal refers to innate student traits that remain difficult to change despite various training methods. Fixed external is interpreted as the inability to develop even in a supportive and conditioned environment (P. Lee et al., 2026; Z. Li & Li, 2025; Zhou et al., 2025). The success of the BGCE learning process in entrepreneurship is greatly influenced by individual perseverance and by the development of a school ecosystem that supports teaching, community, and user feedback (Schwiering & Heyder, 2026).
The output of entrepreneurial learning is closely related to the development of creativity. Creativity can be the driving force of the ideation process and help develop more diverse products through structured design thinking steps. Previous research has shown that the creative process can foster entrepreneurship based on innovative product development. Opportunity-reading skills help students identify emerging market needs (A. A. Lee et al., 2024; Schwiering & Heyder, 2026). This study finds that aspects of creative mindsets and growth mindsets are key skills for encouraging technology-based learning processes that produce more creative young entrepreneurs.

2.2. Integrated Approach to Visible Thinking and Design Thinking in BGCE Entrepreneurship Class

The thinking process is built through a learning process that requires students to be responsible for alternative solutions. Project-based learning outputs must be assessed for the feasibility of implementation and their suitability for user needs, and must demonstrate a strong social impact (Zhang et al., 2021). Students face obstacles in making logical assumptions when defending initial ideas; classroom learning should test decisions against field evidence (Karlsson et al., 2025). Visible thinking is the basis for creating learning strategies through thinking routines and a more reflective reasoning process for students (Teixeira et al., 2021). Students’ behavior patterns are directed toward developing the habit of questioning, considering problems from various perspectives, and presenting arguments. The track record of decisions must be monitored throughout the student’s thinking process until the final product is formed (Cuevas-Cerveró et al., 2023). The observation process was conducted with practical ethical considerations consistent with the values of sustainability and responsible entrepreneurship (Pakseresht et al., 2025). Empirical research provides strong evidence for the effectiveness of reflective learning in improving thinking and reasoning skills and in developing ideas consistently (A. A. Lee et al., 2024; Medina, 2017; Schwiering & Heyder, 2026; Sevimli-Celik & Güvelioglu, 2026).
The reflective process in visible thinking alone creates an innovation gap because the bridge to iteration is missing. The iteration process, aided by teacher feedback, hones students’ thinking (Xia et al., 2025). Several studies have shown that design thinking helps students understand problems more deeply (Sevimli-Celik & Güvelioglu, 2026). In BGCE learning, prototyping and testing facilitate initial hypothesis testing, helping students understand the impact on beneficiaries and use resources more responsibly, rather than relying on jargon alone (Deitte & Omary, 2019; Foster, 2021; Hunhevicz & Hall, 2020; Micheli et al., 2019; Pande & Bharathi, 2020; Sanabria-Z & Olivo, 2024). The combination of visible thinking and design thinking allows the two approaches to complement each other, offsetting their respective limitations (Medina, 2017).
The teacher’s capacity to manage large classes is often inadequate to attend to every group through each stage of self-regulation and reflection. Technology, in the form of chatbots, is one answer to increasing feedback support for the many students involved (García-Porta et al., 2024). Previous research has used chatbots to provide feedback, personalized learning, and active student involvement in response to teacher capacity limitations (Ayanwale & Ndlovu, 2024; Cortés-Cediel et al., 2023; Kim & Su, 2024; Smutny & Schreiberova, 2020; Tam et al., 2023). Chatbots are used to emphasize self-regulated learning, enabling students to set goals and evaluate their progress based on feedback, thereby facilitating visible thinking. Innovation development can be practiced step by step within a design thinking-based framework (Fleischer et al., 2023; Hao & Zhang, 2026; Makransky & Mayer, 2022). BGCE learning thus integrates the thinking process with design thinking, using chatbots oriented not only to creative results but also to building learning discipline through a series of tasks.
The integrative conceptual model of this study positions the BGCE entrepreneurship project as a complex, iterative learning task, with success highly dependent on self-regulated learning capabilities, especially during the implementation phase, which requires progress monitoring, strategy revision, and time management. Visible thinking routines externalize students’ thinking processes by documenting reasons, assumptions, and evidence, while design thinking sprints provide a systematic iteration structure from empathy to prototype testing. To overcome the limitations of teacher feedback in large classrooms, chatbots are designed not as a question-and-answer service but as scaffolding through rubric-based, graded feedback that directs students to compare their actual performance with BGCE’s innovation quality standards and to establish concrete improvement steps. Through this mechanism, interventions are expected to increase creativity and strengthen GI and GE growth mindsets, while supporting increased self-regulation during the innovation cycle.

2.3. Feedback-Based Chatbots in Learning

The development of the education world is supported by interactive data processing via chatbot tools that mimic natural human conversations. Students can receive feedback and support beyond the teacher using chatbots. Various research findings indicate that chatbots can increase student participation in independent learning by providing access to learning support even when teachers cannot attend to every student (Hao & Zhang, 2026; Noraset et al., 2026; Rücker & Becker-Genschow, 2025). The scaffolding function of chatbot services can facilitate students’ task completion as a pedagogical actualization. Chatbots can reduce students’ dependence on teachers, but students still need mentoring appropriate to their competencies (Noraset et al., 2026; Ren et al., 2026).
The effective use of chatbots is heavily influenced by feedback design planning. The feedback literature holds that feedback should address three questions: the goal, the current position, and the improvement steps for future work (Soyoof et al., 2026; Yao et al., 2025). Learning strategies can fail if feedback merely provides corrections rather than helping students improve their thinking. Proof of the BGCE learning process can be seen in the quality of ideas that can be developed into prototypes. Innovation projects use a feedback and assessment system with a targeted rubric to ensure that value judgments are transparent and readily accessible (Hao & Zhang, 2026; Kabigting et al., 2025; Soyoof et al., 2026). Failure in a trial is a strategic step in training students to engage in self-regulation, evaluation, and periodic monitoring (Krskova & Breyer, 2023; Loza, 2025).
Product value is part of justifying the impact of the activity; BGCE learning is not just about producing a final project but provides room for growth. Cognitive and metacognitive dimensions are incorporated into the chatbot’s framework through its feedback services. On the cognitive dimension, chatbot services address the novelty of ideas, sustainability impacts, and feasibility in students’ thinking processes. On the metacognitive dimension, chatbot services assist students’ self-reflection in exploring the strategic reasons for considering each alternative solution or proposing existing ideas. This approach aligns with the framework of visible thinking, which emphasizes thinking routines and the documentation of the thought process, enabling students’ reasoning to be reviewed and improved (H. Li et al., 2026; Pakseresht et al., 2025). At the same time, design thinking provides an innovation workflow that directs students from user understanding to prototyping and testing, so that feedback can be directly linked to the stages of empathy, problem definition, ideation, prototype, and testing (Medina, 2017; Su et al., 2026; Xia et al., 2025).
Based on this foundation, this study designed a chatbot whose feedback comprises four main forms: rubric-based feedback, reflection prompts, BGCE learning resource recommendations, and guiding questions, as shown in Figure 2. This feedback is integrated with visible thinking routines such as See–Think–Wonder, Connect–Extend–Challenge, and I Used to Think… Now I Think…, and is directed to follow the design thinking steps so that students can set project goals, monitor sprint progress, improve the quality of their outputs, and consistently reflect on strategies in BGCE entrepreneurship learning.

2.4. Chatbot-Assisted Self-Regulated Learning Feedback Learning System

Figure 3 presents the framework of a chatbot-assisted feedback system for self-regulation-based BGCE learning, grounded in the idea that the success of an innovation project is not only determined by material knowledge but especially by students’ ability to manage the learning process consciously and repeatedly. Self-regulation in learning comprises three interrelated phases: the goal planning phase, the action implementation phase, and the reflection and feedback phase (Krskova & Breyer, 2023; Ren et al., 2026; B. J. Zimmerman, 2000). In the context of BGCE entrepreneurship, these three phases become more challenging because students have to combine ideation, decision-making, and proof of sustainability value within a limited time, under conditions of market uncertainty and environmental impact.
Critically, the second phase, or implementation phase, is a vulnerable point that is often overlooked in intervention design, as shown in Figure 4. Many creative programs emphasize ideation and reflection but lack support when students are working on complex tasks, negotiating in teams, choosing strategies, and managing time. The innovation process is a key component that can be supported by a design thinking framework in testing existing assumptions (Doss & Bloom, 2023; Z. Li & Li, 2025; B. J. Zimmerman, 2000). The learning process can proceed through strategic steps supported by feedback that helps students improve their innovations (Noraset et al., 2026; Ren et al., 2026; Schwiering & Heyder, 2026). Scaffolding must be provided periodically through chatbots to overcome teachers’ difficulty in giving feedback (Tran-Duong & Do-Hung, 2025; White et al., 2026). Recent studies have also shown that educational chatbots can enrich self-paced learning and strengthen self-regulation support if their interaction design is geared toward learning strategies, not just questions and answers (Farrokhnia et al., 2025; Hao & Zhang, 2026; Rücker & Becker-Genschow, 2025; Soyoof et al., 2026; Yao et al., 2025). Building on this foundation, this study integrates chatbot technology with structured learning modules through a teaching model to provide maximum results. The chatbot is present in every learning phase, from the introduction, problem analysis, and idea development to resource recommendations and feedback, supporting understanding of the blue, green, and circular economy. The reasons for choosing alternative solutions can be strengthened through learning routines that integrate visible thinking (Medina, 2017).
In addition, the phase of implementing activities and rapid revision can use the design thinking framework to develop innovation through the process of empathy (Han & Park, 2025; Xia et al., 2025). To accommodate different needs, the chatbot provides three levels of feedback, namely basic, advanced, and expert, and records learning analytics such as interaction logs, revision frequency, activity duration, and reflection patterns as proof of process. With this design, the study tested the hypotheses of increased creativity, strengthened GI and GE growth mindsets, decreased FI and FE (effects that may be weaker), and improved quality of venture or prototype output in the experimental group (H. Li et al., 2026; Warren et al., 2018).
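As a rough illustration of how the three feedback tiers and the learning analytics fields described above might be represented, the following is a minimal sketch; the record fields and the rubric cut points are assumptions for illustration, not the study’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class InteractionLog:
    """One chatbot interaction record; field names are illustrative,
    based on the analytics listed in the text (interaction logs,
    revision frequency, activity duration)."""
    student_id: str
    timestamp: datetime
    prompt_type: str          # e.g., "rubric_feedback", "reflection"
    revision_count: int = 0
    duration_min: float = 0.0


def feedback_level(rubric_score: float, max_score: float = 4.0) -> str:
    """Map a rubric score to one of the three tiers named in the text.

    The cut points below are illustrative assumptions, not the study's rule.
    """
    ratio = rubric_score / max_score
    if ratio < 0.5:
        return "basic"
    if ratio < 0.8:
        return "advanced"
    return "expert"
```

In an actual deployment, the tier rule would be replaced by whatever rubric logic the CS-MBTV modules define, and the log records would feed the process-evidence analytics.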

3. Method

3.1. Participants

This study used a non-randomized pretest–posttest control group design. In the initial stage, we recruited 120 high school students in Malang who participated in BGCE-based entrepreneurship learning. The final sample consisted of 62 students in the experimental group, who participated in the Chatbot-Supported Making BGCE Thinking Visible (CS-MBTV) program, and 58 students in the control group, who participated in comparison learning without chatbots or structured visible thinking routines for 12 weeks. The characteristics of the participants were initially inferred from demographic information within the research groups. Ethical principles were upheld throughout the research process through written informed consent from participants.
Because this study used a quasi-experimental design with non-randomized intact classes, the findings should be interpreted as associations rather than definitive causal evidence. In other words, increased creativity and a growth mindset in the CS-MBTV group are understood as changes associated with the intervention; however, they may still be influenced by factors outside the treatment. Therefore, we considered the threats to internal validity common in non-random designs, especially selection effects arising from differences in initial characteristics between classes, as well as teacher and classroom influences such as teaching styles, classroom dynamics, and learning climate that can affect student achievement. To minimize the risk of interpretation bias, the analysis examined initial equivalence using pretest scores, reported effect sizes and confidence intervals, and discussed these limitations in the Limitations section.
This research obtained ethical approval and implemented strict data governance measures, given its involvement of adolescent participants and the use of chatbots. All students followed informed consent and assent procedures that included written consent from parents/guardians and from the student, with an affirmation that participation was voluntary and did not affect academic grades. The data recorded from the chatbot was limited to information relevant to learning (interaction time, prompt type, task-related responses, and artifact revisions), without collecting sensitive personal information. Access to logs was restricted to the research team only; data was pseudonymized using participant codes, stored on a protected medium (encrypted or limited access), and subject to a clear retention policy (deleted after analysis is complete). With these steps, the privacy, security, and confidentiality of participants’ data were maintained during the study.

3.2. Instruments

3.2.1. Creativity Task

Creativity can be measured by the quality of ideas and their feasibility for implementation, not just by ideas alone. Several previous studies have shown that divergent thinking can explore the advantages of ideas and can be a benchmark for innovation novelty (Han & Park, 2025; Z. Li & Li, 2025; Warren et al., 2018; Zeng, 2025). The implementation phase serves as a platform for transforming ideas into more creative, concrete actions. Innovative products are outputs resulting from strategic decisions (Yao et al., 2025; B. J. Zimmerman, 2000). Divergent thinking and applied creativity can therefore be measured jointly by examining how ideas are carried into regular implementation.
The divergent-thinking instrument was adapted to the BGCE context using ambiguous stimuli: the test consists of 10 problem-solving items with a time limit of 10 min. Fluency, flexibility, originality, and elaboration are the main factors assessed (Peperkorn & Wegner, 2026). Each fluent answer is awarded 2 points; flexibility is awarded 1 point, and multidisciplinary ideas are awarded 3 points. Originality is determined by the scarcity of the response in the sample: 0 points if the percentage of occurrence is 16% or more, 1 point for 5% to 16%, 2 points for 2% to 5%, and 3 points for less than 2%. Elaboration is awarded 0–4 points according to the descriptive complexity of the response relative to the given situation. The internal reliability of this study sample indicated that the Cronbach’s alpha coefficient ranged from 0.88 to 0.93 across all four indices.
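The originality rule described above can be expressed as a small scoring function. This is a sketch of the stated thresholds; the handling of exact boundary values (e.g., a response occurring in exactly 5% of the sample) is an assumption, since the text does not specify inclusivity.

```python
from collections import Counter


def originality_points(occurrence_pct: float) -> int:
    """Map a response's rarity in the sample to originality points,
    following the thresholds stated in the text:
    >= 16% -> 0, 5-16% -> 1, 2-5% -> 2, < 2% -> 3."""
    if occurrence_pct >= 16:
        return 0
    if occurrence_pct >= 5:
        return 1
    if occurrence_pct >= 2:
        return 2
    return 3


def score_originality(responses):
    """Compute each distinct response's occurrence percentage across
    the whole sample, then convert it to originality points."""
    counts = Counter(responses)
    n = len(responses)
    return {r: originality_points(100 * c / n) for r, c in counts.items()}
```

For example, a response given by only 1 of 50 students (2% occurrence) would receive 2 points under the boundary handling assumed here.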
The creativity task was designed by presenting a case study and requiring participants to propose five to ten related alternative solutions, from which one idea could be developed into a preliminary prototype. Novelty, usefulness, feasibility, and impact on sustainability are four important aspects in exploring applied creativity (Peperkorn & Wegner, 2026; White et al., 2026). Two independent raters assessed the entire product, and interrater reliability was assessed using the ICC, with values ranging from 0.82 to 0.90, indicating good consistency in grading. The combination of these two types of tasks allows the research to capture creativity as an ideational capacity as well as the quality of artifact revision during the implementation phase of the BGCE project.

3.2.2. Creativity Mindset Inventory (CMI)/Growth Mindset Scale

Measuring the creativity mindset is necessary because creative performance in BGCE entrepreneurship projects is influenced not only by ideation skills but also by students’ beliefs about whether creativity can be developed through effort and learning strategies. The implicit theoretical literature explains that these beliefs shape students’ responses when faced with difficult tasks, especially in the execution phase, when students must monitor progress, manage trial failures, and decide whether strategies need to be changed (Doss & Bloom, 2023; Ibrahim et al., 2025; Krskova & Breyer, 2023). In the context of BGCE, the implementation phase often gives rise to feasibility pressures, resource limitations, and demands for proof of impact, so an adaptive mindset is expected to determine the persistence and quality of prototype revisions (Kong et al., 2025). The instrument used is the Creativity Mindset Inventory (CMI), a 6-point Likert scale (1 = strongly disagree to 6 = strongly agree) consisting of 12 items that measure four dimensions, namely growth–internal control, growth–external control, fixed–internal control, and fixed–external control (Kong et al., 2025; Z. Li & Li, 2025; Sigmundsson & Haga, 2024). Sample items assess the belief that creativity increases through self-practice, through environmental support, or is otherwise seen as a talent that does not change much. In the sample of high school students in this study, internal reliability was adequate, with Cronbach’s alpha for GI = 0.81, GE = 0.89, FI = 0.83, and FE = 0.87. Construct validity tested through confirmatory factor analysis showed acceptable fit for the four-factor model: χ²(df = 44) = 92.10, p < 0.001; GFI = 0.91; AGFI = 0.84; SRMR = 0.052; RMSEA = 0.078; CFI = 0.95. The composite reliability values for GI, GE, FI, and FE were 0.79, 0.86, 0.82, and 0.84, respectively, while the average variance extracted was 0.51, 0.49, 0.52, and 0.50.
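The composite reliability (CR) and average variance extracted (AVE) figures above follow standard formulas computed from standardized factor loadings. A minimal sketch, assuming standardized loadings with independent error terms; the loading values below are illustrative, not the study’s data.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each error variance is 1 - loading^2 under the
    assumption of standardized loadings with independent errors."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)


# Illustrative loadings for a three-item factor (not the study's data):
loadings = [0.7, 0.7, 0.7]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
```

An AVE near 0.50, as reported for the four CMI dimensions, indicates that roughly half of the item variance is explained by the underlying factor.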
Thus, CMI is worth using to capture the mindset changes that occur especially during the implementation phase of the BGCE project, when chatbots provide feedback and scaffolding that can strengthen students’ learning confidence and perseverance.

3.2.3. Self-Regulated Learning and Reflection

To measure the self-regulated learning (SRL) of participants in BGCE entrepreneurship learning, this study used a 6-point Likert SRL scale (1 = strongly disagree to 6 = strongly agree), which includes six main dimensions: goal setting, planning, monitoring, self-evaluation, help-seeking, and strategy adaptation (Hao & Zhang, 2026; B. J. Zimmerman, 2000). This scale is designed to be relevant to the context of the CS-MBTV project, so that items assess students’ ability to set BGCE innovation targets, develop sprint plans, monitor prototype progress and validation evidence, evaluate the quality of outputs, seek help (teachers/friends/chatbots), and adjust strategies when facing obstacles. Based on the data of the study sample (N = 120; 62 experiments and 58 controls), internal reliability showed good results with Cronbach’s coefficient of α for goal setting = 0.84, planning = 0.86, monitoring = 0.83, self-evaluation = 0.88, help-seeking = 0.81, and strategy adaptation = 0.86, indicating adequate internal consistency.
In addition to SRL, this study measured reflection through two data sources: (1) a 6-point Likert reflection scale and (2) structured, prompt-based written reflection artifacts. The reflection scale assesses process evaluation, error awareness, and improvement plans for subsequent sprints, while reflection artifacts were collected through weekly journals and "I used to think… Now I think" prompts to capture changes in thinking during the BGCE project. Reliability results show Cronbach's α = 0.90 for the reflection scale. Because the primary quantitative analysis used gain scores (posttest minus pretest), SRL and reflection scores were likewise analyzed as individual changes and compared between groups to support the explanation of the intervention mechanism.

3.3. Experimental Design and Procedures

This study used a quasi-experimental pretest–posttest design with a control group, because participants were assigned based on class assignments already in place at the school, making individual randomization impossible. Figure 4 shows that all participants completed the pretest in the first week and the posttest in the twelfth week to measure creativity, creativity mindsets, and supporting indicators such as self-regulation and reflection. To reduce response bias, the researchers explained from the beginning that the test instruments and inventories would not affect report card scores but would be used to map the learning process and support student self-improvement (Hao & Zhang, 2026; A. A. Lee et al., 2024; Noraset et al., 2026; Rücker & Becker-Genschow, 2025).
Figure 5 shows that the experimental group followed the CS-MBTV intervention from the first to the eleventh week in blended, project-based BGCE entrepreneurship learning. The intervention was organized into five modules. The first module focuses on orientation and baseline, namely an introduction to the concept of BGCE, mapping entrepreneurial challenges, and setting goals and self-regulation learning contracts through goal-setting activities and strategic planning (Krskova & Breyer, 2023; B. J. Zimmerman, 2000). The second module targets a mindset reset by strengthening a growth mindset through brain plasticity literacy, mastery experience examples, and an initial reflection using the “I used to think, Now I think” format to capture changes in beliefs before the design process begins (Medina, 2017; Su et al., 2026).
The third module develops unfold thinking through a visible thinking approach, specifically the “See, Think, Wonder” and “Circle of Viewpoints” thinking routines, to clarify reasoning, test assumptions, and produce artifacts such as stakeholder maps and BGCE problem framing (Han & Park, 2025; Zeng, 2025). The fourth module is the core of the implementation phase or performance phase emphasized in this study. In this phase, students run a design thinking sprint from empathy to a quick test and iterate on prototypes repeatedly. Because the execution phase is often a vulnerable point when students have to monitor progress, choose strategies, manage time, and refine artifacts, chatbots are used as active scaffolding through sprint checklists, prompt monitoring, rubric-based feedback, and resource recommendations to reinforce self-regulation as actions take place (P. Lee et al., 2026; Xia et al., 2025; B. J. Zimmerman, 2000). The fifth module closes with co-creation and reflection, namely the refinement of the prototype and pitch deck, as well as final reflection and a learning journal to consolidate effective strategies (H. Li et al., 2026; Pakseresht et al., 2025). The control group learned the BGCE project using a standard approach without chatbots, scheduled thinking routines, or structured formative feedback mechanisms, so that differences in outcomes could be traced to the contribution of visible-design thinking integration and chatbot support during the implementation phase.
To ensure that the CS-MBTV intervention was properly implemented and used by students, this study reports a summary of dose and fidelity based on chatbot logs. The instructional design and procedures of the CS-MBTV intervention are presented in Figure 6. Indicators recorded include the number of sessions or turns per student as a measure of scaffolding intensity, the number of active weeks during weeks 1–11 as a measure of usage consistency, and a proxy for activity duration where the platform provides interaction timestamps. In addition, the study presents the distribution of participation across high-, medium-, and low-intensity categories to show the variation in involvement within the experimental group. This summary is important because the effectiveness of chatbots as scaffolding for performance in BGCE entrepreneurship projects relies heavily on actual exposure to rubric-based feedback and prompt monitoring, not just on the presence of features in the learning design.
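As an illustration of how such dose and fidelity indicators might be derived from raw chatbot logs, the sketch below aggregates hypothetical log records; the field layout and the intensity cut-offs are assumptions for illustration, not the platform's actual schema or the study's thresholds:

```python
from collections import defaultdict

# Hypothetical chatbot log records: (student_id, week, turns_in_session)
log = [
    ("s01", 1, 6), ("s01", 2, 4), ("s01", 5, 8),
    ("s02", 1, 2), ("s02", 2, 1),
    ("s03", 3, 12), ("s03", 4, 9), ("s03", 6, 10), ("s03", 9, 7),
]

turns = defaultdict(int)   # total turns per student
weeks = defaultdict(set)   # distinct active weeks per student
for student, week, n in log:
    turns[student] += n
    weeks[student].add(week)

def intensity(total_turns):
    # Illustrative cut-offs; actual thresholds would be set from the data.
    if total_turns >= 30:
        return "high"
    if total_turns >= 10:
        return "medium"
    return "low"

summary = {s: (turns[s], len(weeks[s]), intensity(turns[s])) for s in turns}
for s, (t, w, cat) in sorted(summary.items()):
    print(s, t, w, cat)
```

The resulting per-student triples (turns, active weeks, intensity category) correspond to the three indicators described above.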
Table 1 summarizes the operational aspects of chatbot use in the CS-MBTV intervention, including the access platform/channel, the type of input/output, the schedule of use during weeks 1–11, and the type of process data recorded (e.g., number of interactions, access time, and artifact revision). This summary is presented to improve the transparency of the procedure, facilitate replication, and help the reader interpret the intervention’s implementation. Students received consistent scaffolding throughout the BGCE design sprint.
To support transparency and replicability, core intervention materials and research instruments are provided as Supplementary Materials. Appendix A includes: (1) the content of the BGCE creative task (prompt and answer format), (2) the rubric for the applied creativity/prototype assessment (novelty–usefulness–feasibility–sustainability impact), (3) a summary of the Creativity Mindset Inventory (CMI) instrument with sample items, (4) an empathy-stage guide and user interview note template, and (5) an example of an anonymized chatbot interaction log. All attached data and artifacts have been pseudonymized, contain no student personal identities, and are used only for research purposes.

3.4. Data Analyses

The data analysis in this study used a mixed-methods approach with a simplified quantitative focus based on gain scores. For each student, the gain score was calculated as the difference between the posttest and pretest, and gains were then compared between the experimental and control groups using an independent-samples t-test (Peeters & Vaidya, 2016). This strategy was chosen because it is easy to interpret in the school classroom context, but it still requires caution against potential biases, such as baseline differences and regression to the mean. Therefore, before the main test, assumption checks were carried out, including screening for outliers, residual normality, and variance homogeneity using Levene's test. For the creativity data, for example, the gain difference between groups was large, with t(118) = 10.50, p < 0.001, a 95% confidence interval of [1.03, 1.51], and Hedges' g = 1.91, so the impact of CS-MBTV can be assessed as practically strong.
The same analysis was applied to the four dimensions of the CMI. GI and GE were tested through gain scores and reported together with effect sizes, e.g., GI t(118) = 6.57, p < 0.001, g = 1.20, and GE t(118) = 7.63, p < 0.001, g = 1.39, while FI and FE were analyzed to check whether the decline was significant or merely a trend. To enrich the findings, the "I used to think… Now I think" reflections were analyzed using quantitative content analysis based on theme frequencies and representative quotes (Chan et al., 2025; Peperkorn & Wegner, 2026; Ren et al., 2026). In addition, a t-test on pretest scores by gender was performed as an initial control. Preliminary analyses showed no statistically significant gender differences, with t values ranging from −1.60 to 1.72, p values ranging from 0.33 to 0.74, and Levene's F values ranging from 0.046 to 0.615. Therefore, gender was not included as a covariate (D. W. Zimmerman & Williams, 2016).
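The gain-score comparison can be sketched as follows. The group scores below are hypothetical, and the sketch computes only the t statistic and Hedges' g (pooled-SD d with the small-sample correction J = 1 − 3/(4·df − 1)); the reported p-values would additionally require the t distribution, e.g., from a statistics package.

```python
import math
import statistics

def gain_scores(pre, post):
    """Per-student posttest minus pretest."""
    return [b - a for a, b in zip(pre, post)]

def independent_t_and_g(x, y):
    """Student's t statistic and Hedges' g for two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    df = nx + ny - 2
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / df)
    t = (mx - my) / (pooled * math.sqrt(1 / nx + 1 / ny))
    d = (mx - my) / pooled
    g = d * (1 - 3 / (4 * df - 1))  # small-sample correction
    return t, g

# Hypothetical pre/post scores for two small groups
exp = gain_scores([8.1, 8.4, 8.0, 8.6], [9.2, 9.6, 9.0, 9.5])
ctl = gain_scores([8.2, 8.3, 8.1, 8.4], [8.1, 8.2, 8.0, 8.2])
t, g = independent_t_and_g(exp, ctl)
```

The same two functions would be applied unchanged to each CMI dimension's gain scores.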

4. Results

4.1. Baseline Equivalence

To strengthen the internal validity of the non-randomized design, this study included a pre-intervention equivalence test between groups on the main outcomes. Baseline equivalence was assessed not only on demographic characteristics but also on creativity pretest scores and the four dimensions of the Creativity Mindset Inventory: growth–internal, growth–external, fixed–internal, and fixed–external. The baseline equivalence table reports the pretest means and standard deviations for each group, along with a measure of the initial difference, such as the standardized mean difference or Hedges' g. This reporting helps readers assess whether the experimental and control groups started from comparable initial conditions, so that the difference in gain at the end of the program can more reasonably be interpreted as a change related to the CS-MBTV intervention, although the threat of selection bias from whole-class assignment is still acknowledged.
Table 2 summarizes the dose and fidelity of chatbot interactions during weeks 1–11 in the CS-MBTV group, including turns per student, active weeks, artifact revisions, and completion rate. These indicators were then associated with creativity gain, as well as GI and GE gains, to test dose–response patterns as an explanation of the mechanism during the performance phase.
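A simple dose–response check is the Pearson correlation between per-student chatbot exposure and outcome gain. A minimal sketch, using hypothetical totals rather than the study's data:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-student totals: chatbot turns and creativity gain
turns = [12, 45, 30, 8, 52, 25]
gain = [0.4, 1.6, 1.1, 0.2, 1.8, 0.9]
r = pearson_r(turns, gain)
```

A positive r between exposure and gain would be consistent with the dose–response interpretation; a near-zero r would suggest the improvement does not track actual chatbot use.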

4.2. Creativity Outcomes of the CS-MBTV Intervention

The analysis of increased creativity in this study used a gain score approach to capture each participant’s change from pretest to posttest, as shown in Figure 7. Table 3 shows that the experimental group (CS-MBTV) had an average creativity score that increased from 8.30 ± 0.97 in the pretest to 9.35 ± 1.18 in the posttest, with an average gain of 1.05 ± 0.68 points. In contrast, the control group showed a slight decrease from 8.28 ± 0.82 on the pretest to 8.06 ± 1.03 on the posttest, resulting in a negative gain of −0.22 ± 0.64 points. This pattern indicates that, over 12 weeks, participants who used CS-MBTV tended to experience a stronger increase in creativity performance than those who used a comparative approach.
To test whether the difference in improvement was statistically significant, an independent-samples t-test was performed on the gain scores (Table 4). The test results showed that the experimental group's creativity gain was significantly higher than that of the control group, with an average difference of 1.27 points (t(118) = 10.50, p < 0.001) and a 95% confidence interval of [1.03, 1.51]. In addition, Hedges' g = 1.91 indicates a very large effect, so the increase in creativity is not only significant but also substantial. Overall, these findings support the idea that CS-MBTV is associated with increased creativity among high school students in BGCE entrepreneurship learning, especially through scaffolding that encourages self-regulation and more consistent iteration of ideas.

4.3. Creativity Mindset Outcomes of the CS-MBTV Intervention

The analysis of changes in creativity mindsets in this study used the gain score approach, as shown in Figure 8. The gain score is the difference between the posttest and pretest scores for each CMI dimension (GI, GE, FI, and FE) for each student. Then, the gain score difference between the experimental group (CS-MBTV) and the control group was compared using an independent-samples t-test (df = 118). Table 5 presents descriptive statistics that show the direction of change consistent with the objectives of the intervention, namely, strengthening growth mindsets and (where possible) lowering fixed mindsets in the context of BGCE entrepreneurship learning for 12 weeks.
For the growth–internal mindset (GI), the experimental group had an average GI that increased from 3.30 ± 0.78 in the pretest to 4.55 ± 0.84 in the posttest, with an average gain of 1.25 ± 0.70. Meanwhile, the control group also increased, but to a lesser extent, from 3.35 ± 0.74 to 3.78 ± 0.81, with a gain of 0.43 ± 0.66. The t-test in Table 6 showed a significant difference in gain between groups, t(118) = 6.57, p < 0.001, with an average difference of 0.82 and Hedges’ g = 1.20 (a large effect). This pattern indicates that CS-MBTV is effective in strengthening students’ belief that creativity can be developed, especially through learning efforts and strategies that can be controlled from within (GI).
In the growth–external mindset (GE), the changes in the experimental group were even greater: the average increased from 3.25 ± 0.80 to 4.70 ± 0.86, with a gain of 1.45 ± 0.76. The control group only rose moderately from 3.40 ± 0.77 to 3.85 ± 0.83, a gain of 0.45 ± 0.70. This difference in gain is also significant, t(118) = 7.63, p < 0.001, with a difference of 1.00 and Hedges’ g = 1.39 (a large effect). These findings show that the integration of the BGCE project-based learning with chatbot scaffolding and visible thinking documentation encourages students to increasingly believe that creativity can be enhanced through learning environment support, feedback, and relevant external resources (Kabigting et al., 2025; Kong et al., 2025; Krskova & Breyer, 2023; H. Li et al., 2026).
On the other hand, changes in the fixed-mindset dimensions tended downward but were not as strong as the growth-dimension gains. For fixed–internal (FI), the experimental group decreased from 3.05 ± 0.72 to 2.70 ± 0.74 (gain −0.35 ± 0.58), while the control group decreased from 3.00 ± 0.69 to 2.88 ± 0.70 (gain −0.12 ± 0.55). The t-test showed a non-significant difference, t(118) = −1.71, p = 0.090, with a small-to-medium effect size (g = 0.31). For fixed–external (FE), the experimental group decreased from 2.95 ± 0.71 to 2.65 ± 0.72 (gain −0.30 ± 0.56), while the control group decreased from 2.90 ± 0.68 to 2.80 ± 0.69 (gain −0.10 ± 0.52); this difference was also not significant, t(118) = −1.55, p = 0.124, with a small effect (g = 0.28). Theoretically, these results make sense because growth and fixed mindsets can co-exist, and lowering the belief that "creativity is innate" usually requires repeated success experiences, longer intervention durations, and consistent reframing of failures as evidence of learning (Schwiering & Heyder, 2026; Zhou et al., 2025). In the context of complex, iterative BGCE projects, students can more quickly strengthen the belief that creativity can be trained (GI/GE rise), but some fixed beliefs (FI/FE) tend to persist given the variability of success experiences, challenges in validating prototypes, and time constraints on building stable mastery experiences. These data are also visualized in Figure 7.

4.4. Self-Reflection on Changes

The self-regulated learning framework was used in this study not only as a theoretical reference but also as the basis for designing the intervention targeting the performance phase, specifically monitoring, revision, and time management, during the BGCE innovation sprint. Therefore, in addition to the main outcomes (creativity and creativity mindsets), this study also reports SRL-related process indicators. Where available, SRL and reflection scores are reported as gains and compared across groups to determine whether chatbot support is associated with improved self-regulation. Where quantitative SRL measurements are limited, process indicators can still be operationalized through learning analytics and reflection artifacts, such as the frequency of responses to prompts, the number of artifact revisions, weekly checkpoint completion, and reflection themes that indicate planning, progress monitoring, and strategy adjustment. Thus, the discussion of SRL does not stop at conceptual claims, but is supported by process evidence that can be traced during the intervention.
To capture changes that are not always visible in test scores, the study collected written reflections via the prompt "I used to think… Now I think" and analyzed them using a thematic, frequency-based quantitative content analysis (Peperkorn & Wegner, 2026; B. J. Zimmerman, 2000). The researchers compiled initial codes independently, then agreed on a final list of categories to ensure that the interpretation was not dominated by a single point of view. One recurring theme was a paradigm shift toward viewing creativity as directed at environmental issues with significant impact through planned activity; another was that self-regulation can increase confidence in creativity when managing innovation risks (Han & Park, 2025; H. Li et al., 2026; Ren et al., 2026). A further theme highlights growing awareness of the value of sustainability, namely the ability to link business ideas with circularity, green value, and the blue economy (Pakseresht et al., 2025; Zhou et al., 2025). These themes are consistent with the quantitative gains in creativity and growth mindsets observed in the chatbot-supported BGCE project.
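The frequency-based content analysis can be illustrated with a simple tally over coded reflections; the theme labels and coded entries below are hypothetical, not the study's codebook:

```python
from collections import Counter

# Hypothetical coded reflections: each entry lists the themes the coders
# agreed on for one student's "I used to think... Now I think" journal.
coded_reflections = [
    ["growth_belief", "sustainability"],
    ["growth_belief", "risk_management"],
    ["sustainability"],
    ["growth_belief", "sustainability", "risk_management"],
]

# Count how many students' reflections contain each theme
theme_counts = Counter(t for themes in coded_reflections for t in themes)
n = len(coded_reflections)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}/{n} students ({100 * count / n:.0f}%)")
```

Representative quotes would then be selected from the journals coded under each high-frequency theme.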

5. Discussion

5.1. The Impact of Learning Interventions on Enhancing Student Creativity in the Context of BGCE

The first hypothesis is supported by the data: the control group showed lower creativity scores than the experimental group in the chatbot-supported learning intervention. These findings can be explained by three mechanisms in the learning design: visible thinking routines, design thinking sprints, and chatbot-supported formative feedback. Furthermore, the real-world innovation testing phase helps ensure that ideas meet users' needs through regular feedback (Soyoof et al., 2026; Yao et al., 2025).
First, visible thinking routines serve as tools to externalize thought processes that are usually hidden. When students are asked to write down what they see, think, and question, or to explain the reasons behind design choices, they are encouraged to build more explicit justifications, compare alternatives, and test assumptions. This documentation process improves the quality of elaboration and expands the flexibility of idea categories, as students do not stop at intuitive initial ideas but re-examine different evidence and points of view (Medina, 2017; Noraset et al., 2026; Rücker & Becker-Genschow, 2025; Sevimli-Celik & Güvelioglu, 2026). In this way, innovations that address the main problems become more feasible and sustainable.
Second, prototype development using a design thinking model turns ideas into applied solutions. Design thinking begins with defining the problem, followed by ideation, prototyping, and structured trials to improve the quality and accountability of innovation. This structure is important because creativity in entrepreneurship often fails not because of a lack of ideas, but because ideas are not translated into a clear value proposition, not tested with users, or abandoned before iteration. With sprints, students experience a denser revision cycle, increasing the likelihood of generating useful novelty (Chan et al., 2025; Kong et al., 2025; P. Lee et al., 2026; Xia et al., 2025). Theoretical studies suggest that learners must remain oriented toward goals and manage the challenges they face, or they risk giving up.
Third, the implementation phase was supported by a consistent feedback process using a chatbot. Existing studies show that self-regulation helps students understand the development of innovation through appropriate product-development strategies (Han & Park, 2025; Ren et al., 2026; Schwiering & Heyder, 2026). In large classrooms, it is difficult for teachers to provide quick and consistent formative feedback, even though effective feedback should clarify goals, current performance, and next steps for improvement (Bez et al., 2025; Ren et al., 2026). In this context, formative feedback can be provided in a structured manner through guiding questions delivered by a chatbot or through checklists. Evaluation techniques can then be used to explore the novelty of ideas, helping students develop a more directed and sustainable quality of creative thinking. This learning strategy can accelerate the thinking process by using chatbot services not only to complete tasks and activities but also to find valuable alternative solutions (H. Li et al., 2026; Soyoof et al., 2026; Yao et al., 2025).
Overall, the integration of thinking routines, innovation sprints, and chatbot feedback creates a learning ecosystem that repeatedly links the quality of reasoning, the structure of innovation work, and support for self-regulation. Therefore, the increase in creativity in the CS-MBTV group makes sense because students more often experience cycles of ideas, articulation of reasons, prototypes, tests, and revisions that are monitored, thereby making BGCE’s entrepreneurial output richer, more directed, and more accountable.

5.2. The Influence of Learning Interventions on Improving Students’ Creative Growth Mindset

Strengthening a growth mindset toward creativity, grounded in students' learning experiences, is a main goal of BGCE learning. According to several theories, mindset is a capacity that draws on cognitive knowledge and opens new opportunities even in the face of great challenges (Kong et al., 2025; Schwiering & Heyder, 2026). Market uncertainty is a challenge in itself when addressing problems, because students will encounter various alternative solutions. A growth mindset supports students' learning experience when they explain the reasons behind decision-making, rather than merely receiving information from the teacher.
From a self-efficacy perspective, gradual and structured success reinforces adaptive internal attribution, i.e., students attribute progress to strategy and perseverance rather than to talent alone (P. Lee et al., 2026). CS-MBTV creates a condition of mastery through innovation sprints that break down big tasks into weekly targets, as well as rubrics that make the criteria for success more concrete. Students can see the internal growth dimension as a container that must be filled with data-based and evidence-based critical thinking experience in prototype development.
Reflection acts as a binding mechanism that transforms experience into conscious learning. The shift from a fixed mindset to a growth orientation can be seen in the reflection pattern captured through the “I used to think, now I think” routine, which enables a comparison between students’ perspectives before and after the implementation process (Doss & Bloom, 2023; Sigmundsson & Haga, 2024). Reflection also strengthens self-regulation as students assess effective strategies and plan for subsequent adjustments. In the SRL literature, self-monitoring and evaluation are the main drivers of sustained effort on complex tasks (Bez et al., 2025; B. J. Zimmerman, 2000). In other words, a growth mindset does not only grow from success, but from the ability to read why success happens.
The learning environment and support from outside the individual can be categorized as an external growth dimension. The student team is required to collaborate and negotiate together to carry out BGCE entrepreneurship projects. Creativity can be optimized by providing space for dialog and idea exchange among teams. Effective feedback can serve as a means of reflecting on formative argumentation (Sigmundsson & Haga, 2024; Su et al., 2026; Tran-Duong & Do-Hung, 2025; Zeng, 2025). In large classrooms, the consistency of feedback is often uneven, so students risk interpreting confusion as personal incompetence, which can trigger a fixed mindset.
At this point, chatbot scaffolds become an important differentiator because they provide stable, fast, and iterative support, especially in the implementation phase when students are making design decisions and revisions. Chatbots help maintain the continuity of the monitoring process by providing reminders and prompts for students to review the assumptions used. In addition, chatbots direct students back to the assessment rubric when the quality of an idea fails to meet the criteria for circularity, green value, feasibility, or proof of impact. This scaffolding function aligns with the idea that appropriate temporary support can elevate performance on tasks beyond actual capabilities, thereby encouraging independence once a strategy is in place (Noraset et al., 2026; White et al., 2026). Therefore, the increase in GI and GE in the CS-MBTV group can be understood as a consequence of a combination of structured mastery experiences, reflection that interprets experience, social support in the project, and a chatbot scaffold that maintains the quality of self-regulation and feedback throughout the process (Hao & Zhang, 2026; Tran-Duong & Do-Hung, 2025; Xia et al., 2025).

5.3. Instructional Effects of Decreasing Creative Fixed Mindsets

The decline in students' fixed mindsets often does not align with the increase in growth mindsets, because the two do not sit at opposite ends of a single continuum. The literature on implicit theories shows that a person can simultaneously hold the beliefs that "ability can develop" and "there are innate limits", especially when faced with complex tasks and unstable results (H. Li et al., 2026; Schwiering & Heyder, 2026). In the context of BGCE entrepreneurship, students face real uncertainties, such as changing market feedback, resource limitations, and trade-offs between business feasibility and sustainability impacts. This situation can make fixed beliefs persist as a protective mechanism, for example, to reduce the feeling of failure by attributing it to "talent" or "external conditions" (Farrokhnia et al., 2025; Jos et al., 2025).
Critically, fixed mindsets are also more difficult to change because they are embedded in long-term evaluative experiences in school, such as smart labels, social comparisons, and a culture that emphasizes the result. In innovation projects, initial failures in the implementation phase often occur before students have fully mastered the strategy, so the evidence they gather about themselves remains mixed. Reinforcing experiences can change mindsets through a series of structured training sessions (Krskova & Breyer, 2023; B. J. Zimmerman, 2000). Quality feedback will help students digest problems and practice critical thinking by asking more detailed questions, which will, in turn, shape new beliefs (Bez et al., 2025; Kong et al., 2025; P. Lee et al., 2026; Liang et al., 2026).
Several previous studies have similarly reported that FI and FE do not decline significantly, but that gains in GI and GE can act as a buffer; fixed mindsets can then be reduced gradually as a more creative attitude becomes dominant (Z. Li & Li, 2025; Schwiering & Heyder, 2026; Su et al., 2026). More complex learning, such as BGCE, requires a longer learning process and multiple iterations. Furthermore, chatbots can reinforce and support this iterative learning process during the implementation phase (Farrokhnia et al., 2025; Ren et al., 2026; Yao et al., 2025).

5.4. Implications for BGCE Entrepreneurship Education

The findings of this study provide practical implications for the development of BGCE entrepreneurship education in high school, especially to foster creativity, growth mindsets, and SRL in a structured manner. First, the curriculum can be designed in weekly modules based on consistent design sprints: (1) BGCE orientation and entrepreneurial challenges, (2) mindset reset through neuroplasticity literacy and mastery experiences, (3) unfold thinking for stakeholder mapping and problem formulation, (4) empathize–define–ideate–prototype–test innovation sprints, and (5) co-creation and reflection for iteration and pitch. This structure aligns with the principles of design thinking as a repeatable end-to-end innovation process (Warren et al., 2018; Xia et al., 2025).
Second, the selection of visible thinking routines should be tailored to cognitive goals: ideas exploration routines (See–Think–Wonder, Circle of Viewpoints) to broaden sustainability perspectives, synthesis routines (Connect–Extend–Challenge) to strengthen arguments, and reflection routines (“I used to think… Now I think”) to reinforce changing beliefs and learning strategies (Han & Park, 2025; Peeters & Vaidya, 2016). Third, chatbot feedback design needs to be multi-level (basic–advanced–expert) and rubric-based so that feedback is specific, fast, and scalable in large classes (Chan et al., 2025; Doss & Bloom, 2023); the BGCE innovation rubric should assess novelty–usefulness–feasibility as well as circularity/green value indicators, environmental–social impact, and customer validation quality. Finally, for large-scale classroom implementations, schools can use progress dashboards and interaction logs as learning analytics to monitor SRL and ensure that all students receive fair scaffolding (Sevimli-Celik & Güvelioglu, 2026; White et al., 2026; Yao et al., 2025).
The findings of this study are relevant for education policy because they show that chatbot-based scaffolding support can help schools overcome feedback bottlenecks in large classes, especially on BGCE entrepreneurial project tasks that demand iteration and self-regulation. In policy, these results support the need for AI implementation guidelines in secondary schools that focus not only on technology access, but also on pedagogic design (e.g., rubric-based feedback, thinking routines, and innovation sprints) as well as quality indicators (fidelity/dose, artifact revision quality, and SRL monitoring). In addition, policies can direct teacher training on AI-enabled formative assessment competencies so that teachers can determine when manual intervention is needed, for example, when students experience conceptual impasses, ethical conflicts, or impact claims without evidence. At the governance level, these findings confirm the importance of school rules on log privacy, parent/guardian consent, and verify-before-use principles to reduce the risk of bias and hallucination in adolescents. Thus, the CS-MBTV program can serve as a policy model for AI integration that is safe, measurable, and impactful for sustainability learning.

5.5. AI Safety and Hallucination Mitigation

To anticipate the risks of bias and hallucinations in AI, this study positions chatbots not as “final answers” but as coaches who provide prompts, rubrics, checklists, and guiding questions to help students improve the quality of their work. The verify-before-use principle is applied, requiring students to include simple evidence/sources (e.g., user interview notes, prototype photos, observation data, or material references) when stating claims about facts, impacts, or design decisions. In addition, guardrails are implemented through domain restrictions and response templates (rejection when asked to “fabricate data”, warnings to double-check, and directions to consult teachers if sensitive/risky issues arise). This approach helps ensure AI support remains safe, accountable, and educational for high school students in evidence-based BGCE projects.

6. Conclusions

In line with the increasing urgency of creativity in the era of AI and sustainability challenges, this study developed a Chatbot-Supported Making BGCE Thinking Visible (CS-MBTV) learning framework by integrating visible thinking, design thinking, creative thinking, and growth mindset strategies in the context of BGCE entrepreneurship for high school students. The results show that implementing the chatbot program can increase students' creativity and growth mindsets, as supported by empirical data collected through a quasi-experimental design. However, the program did not significantly reduce fixed mindsets, although shifts in thinking were evident in the "I used to think… Now I think" reflections, and these reflective outputs can be used to disseminate BGCE learning results periodically. Chatbots therefore act as scalable scaffolding through prompt routines and rubric-based feedback. This framework offers practical guidance for BGCE entrepreneurship education and further research.

7. Limitations and Implications

Educational research that tests real-world classroom interventions often faces tension between strict methodological control and the ecological demands of learning. The first limitation of this study is its quasi-experimental, non-randomized design. Selection bias could be reduced in future work by matching control and experimental classes more carefully on student characteristics during the pretest stage (Soyoof et al., 2026). In addition, the sample is limited to one grade level and one school context in Malang, so generalization to schools with different learning cultures, technology access, and levels of digital literacy must be made cautiously (Z. Li & Li, 2025). These limitations matter because local contexts, including ecosystem support, project resources, and opportunities for user validation, heavily influence the success of BGCE entrepreneurial learning.
Second, this study has limitations related to the timing of measurement. No mid-intervention measurement was conducted; a mid-phase assessment could have captured students’ changes in mindset and creativity in more detail, informed adjustments in the field, and helped prevent learning-outcome failures (Hao & Zhang, 2026; Medina, 2017; B. J. Zimmerman, 2000). The 12-week duration may be sufficient to foster growth mindsets through structured mastery experiences, but not to reduce fixed mindsets, which tend to be more stable and rooted in long-term evaluative experiences (Doss & Bloom, 2023; White et al., 2026). Moreover, while gain scores and t-tests allow simple interpretation, they can obscure variation in individual change trajectories and potential data dependencies within the same class (D. W. Zimmerman & Williams, 2016).
Further research should pursue three directions. First, longitudinal and multi-site studies are needed to test the durability of the effects and establish external validity across school contexts. Second, chatbot personalization based on learning analytics should be tested, for example, by adapting the intensity of monitoring prompts during implementation, triggering reflection when revision patterns decline, or tailoring feedback to particular SRL profiles (Liang et al., 2026; Zhou et al., 2025). Third, the test stage in design thinking needs strengthening through more authentic user validation and additional iteration cycles, so that changes in creativity and in beliefs about creative development rest on stronger experiential evidence in the BGCE entrepreneurship context (Bez et al., 2025; Farrokhnia et al., 2025; Peeters & Vaidya, 2016).
Despite the guardrails and verify-before-use principles, the risks of hallucination and biased responses in chatbots remain, so further research is needed to test safer, age-appropriate models as well as stricter moderation and personalization mechanisms.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/educsci16040582/s1.

Author Contributions

Conceptualization, D.W. and N.I.; methodology, E.; software, O.F.; validation, N.I., O.F. and D.W.; formal analysis, D.W.; investigation, N.I.; resources, E.; data curation, N.I.; writing—original draft preparation, D.W.; writing—review and editing, E.; visualization, O.F.; supervision, E.; project administration, O.F.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Institute for Research and Community Service, State University of Malang, under the Central Flagship Research scheme, contract number 24.2.386/UN32.14.1/LT/2025. The APC was funded by the authors.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it used a non-clinical survey design and did not involve medical procedures or experiments.

Informed Consent Statement

Informed consent was obtained from all participants prior to their participation in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Acknowledgments

The authors would like to thank the Institute for Research and Community Service, State University of Malang, for supporting this research through the Central Flagship Research scheme under contract number 24.2.386/UN32.14.1/LT/2025.

Conflicts of Interest

In accordance with the ethical standards of scientific publications, the authors state that there are no conflicts of interest related to the publication of this article. The authors have no financial, personal, or professional relationship with any party that could unduly influence the interpretation of the data, the preparation of the manuscript, or the conclusions of the research. This statement includes the absence of involvement in the form of employment, consultancy, shareholding, honorarium, or paid expert testimony. If a potential conflict of interest arises in the future, the authors will disclose it in accordance with the publisher’s policy.

Appendix A. Reflection Instruments and Examples of Anonymous Quotes

Appendix A.1. Prompt Reflection: “I Used to Think… Now I Think”

Participants were asked to write a short reflection at the end of each module and at the end of the program in the following format:
  • I used to think… (explain the perspective before the BGCE project starts).
  • Now I think… (describe the change in perspective after undergoing an innovation sprint, rubric feedback, and prototype revision).
  • What changed and why? (mention evidence of experience, feedback, simple data, or important events during the implementation phase).
  • Next step (write a strategy improvement plan for the next sprint, including monitoring, revision, and time management).

Appendix A.2. Categories of Themes and Operational Definitions

Content analysis was conducted using the following theme-based coding scheme.
  • Redefinition of creativity in BGCE: Changing the definition of creativity from “unique idea” to “valued, proven, and impactful solutions with sustainability impact”.
  • Changes in SRL habits: Changes in planning, monitoring, self-evaluation, strategy adaptation, and time management during sprints.
  • Creative confidence and the courage to try: Increased courage to experiment, tolerance for failure, and persistence of revision.
  • Sustainability impact awareness: Increased attention to blue, green, circular values; trade-offs; and proof of impact through simple indicators.

Appendix A.3. Quotes per Theme (Anonymous)

Note: The following quotes use participant codes (e.g., E12) and contain no identifying information.
Topic 1. Redefining creativity in the context of BGCE
  • E12: “I used to think that creativity was enough to make different ideas. Now I think new ideas are not necessarily creative if there is no evidence of benefits and impacts. When asked to show simple data and circularity reasons, I became aware that creativity must be accounted for.”
  • E27: “I used to think of creativity as talent. Now I see creativity as the process of creating solutions that are feasible, useful, and environmentally friendly. I only understood after we revised the prototype because the user feedback was not appropriate.”
Topic 2. Changes in SRL (planning, monitoring, reflection) habits
  • E05: “We used to work on projects that were close to the deadline. Now we make weekly targets, check progress, and divide tasks. The checklist from the chatbot allows us to monitor regularly, so we do not get lost during revisions.”
  • E41: “I used to just focus on finishing. Now I assess the quality using a rubric and check again whether the impact claim has evidence. After reflection, I can determine clear improvement steps for the next sprint.”
Topic 3. Courage to try, tolerance of failure, and creative confidence
  • E18: “I used to be afraid of making mistakes and ended up choosing the safe idea. Now I am braver about trying alternatives and do not immediately give up when a test fails. We repeated the prototype several times because we focused on revision rather than blaming ourselves.”
  • E33: “In the past, when I was criticized, I felt inadequate. Now I see the criticism as an opportunity for improvement. I became more confident because with each revision, there was a little progress that could be seen.”
Topic 4. Awareness of sustainability impact (blue, green, circular value)
  • E09: “I used to not think about environmental impact in detail. Now I distinguish between green and circular, and I check whether our solution reduces waste or saves resources.”
  • E56: “I used to only focus on products that could be sold. Now I also think about the social and environmental consequences. When asked to write down impact indicators, I realized that data and validation were needed, not assumptions.”

Appendix A.4. Overview of the Prevalence of Themes

Table A1. Prevalence of reflection themes (N = 120).
Theme | Brief Definition | n (Students) | % of Total (N = 120)
Redefinition of creativity in BGCE | Creativity understood as a valued, feasible, and impact-proven solution | 84 | 70.0%
Changes in SRL habits | Planning, monitoring, strategy revision, and time management during sprints | 72 | 60.0%
Creative confidence and the courage to try | Daring to experiment, tolerating failure, and persisting through revisions | 68 | 56.7%
Sustainability impact awareness | Blue, green, circular values; trade-offs; simple impact indicators | 90 | 75.0%

References

  1. Anjum, T., Farrukh, M., Heidler, P., & Tautiva, J. A. D. (2021). Entrepreneurial intention: Creativity, entrepreneurship, and university support. Journal of Open Innovation: Technology, Market, and Complexity, 7(1), 11. [Google Scholar] [CrossRef]
  2. Ayanwale, M. A., & Ndlovu, M. (2024). Investigating factors of students’ behavioral intentions to adopt chatbot technologies in higher education: Perspective from expanded diffusion theory of innovation. Computers in Human Behavior Reports, 14, 100396. [Google Scholar] [CrossRef]
  3. Basuki, A., Pahlevi, A. S., & Gunawan, A. (2023). Commercialization of the study sustainability learning chatbot through digital education teaching factory based on profit sharing. In Proceedings of the BISTIC business innovation sustainability and technology international conference (BISTIC 2022) (pp. 3–13). Springer Nature. [Google Scholar] [CrossRef]
  4. Bez, S., Burkart, F., Tomasik, M. J., & Merk, S. (2025). How do teachers process technology-based formative assessment results in their daily practice? Results from process mining of think-aloud data. Learning and Instruction, 97, 102100. [Google Scholar] [CrossRef]
  5. Chan, S. L., Lecturer, S., Tai, J., Fung, C., Lecturer, S., Wong, S., Academic, A. H., Chi, C., Cheng, W., Jung, J., Lee, J., Choi, R., Fellow, P., Wan, W. H., Lecturer, A., Withrow, H., Po, R., Poon, W., Lam, C. F., & Chung, H. (2025). Educational technology enhanced interprofessional E-learning for engaging cross-institutional and cross-border healthcare students: A mixed-methods study. International Journal of Nursing Studies Advances, 9, 100404. [Google Scholar] [CrossRef]
  6. Cortés-Cediel, M. E., Segura-Tinoco, A., Cantador, I., & Rodríguez Bolívar, M. P. (2023). Trends and challenges of e-government chatbots: Advances in exploring open government data and citizen participation content. Government Information Quarterly, 40(4), 101877. [Google Scholar] [CrossRef]
  7. Cuevas-Cerveró, A., Colmenero-Ruiz, M. J., & Martínez-Ávila, D. (2023). Critical information literacy as a form of information activism. Journal of Academic Librarianship, 49(6), 102786. [Google Scholar] [CrossRef]
  8. Dai, X. (2024). A study on mindful agency’s influence on college students’ engagement with online teaching: The mediating roles of e-learning self-efficacy and self-regulation. Acta Psychologica, 243, 104146. [Google Scholar] [CrossRef]
  9. Daryanes, F., Ririen, D., Fikri, K., & Sayuti, I. (2023). Improving students’ critical thinking through the learning strategy “students as researchers”: Research based learning. Jurnal Penelitian Pendidikan IPA, 9(5), 2374–2382. [Google Scholar] [CrossRef]
  10. Deitte, L. A., & Omary, R. A. (2019). The power of design thinking in medical education. Academic Radiology, 26(10), 1417–1420. [Google Scholar] [CrossRef] [PubMed]
  11. Doss, K., & Bloom, L. (2023). Mindset and the desire for feedback during creative tasks. Journal of Creativity, 33(1), 100047. [Google Scholar] [CrossRef]
  12. Farrokhnia, M., Noroozi, O., Baggen, Y., & Biemans, H. (2025). Improving hybrid brainstorming outcomes with computer-supported scaffolds: Prompts and cognitive group awareness. Computers & Education, 227, 105229. [Google Scholar]
  13. Fleischer, T., Moser, S., Deibl, I., Strahl, A., Maier, S., & Zumbach, J. (2023). Digital sequential scaffolding during experimentation in chemistry education—Scrutinizing influences and effects on learning. Education Sciences, 13(8), 811. [Google Scholar] [CrossRef]
  14. Foster, M. K. (2021). Design thinking: A creative approach to problem solving. Management Teaching Review, 6(2), 123–140. [Google Scholar] [CrossRef]
  15. Gan, B., Menkhoff, T., & Smith, R. (2015). Enhancing students’ learning process through interactive digital media: New opportunities for collaborative learning. Computers in Human Behavior, 51, 652–663. [Google Scholar] [CrossRef]
  16. García-Porta, N., Vaughan, M., Rendo-González, S., Gómez-Varela, A. I., O’Donnell, A., de-Moura, J., Novo-Bujan, J., & Ortega-Hortas, M. (2024). Are artificial intelligence chatbots a reliable source of information about contact lenses? Contact Lens and Anterior Eye, 47(2), 102130. [Google Scholar] [CrossRef] [PubMed]
  17. Han, H., & Park, D. (2025). Unlocking creative potential: The role of creative mindset on creativity. Thinking Skills and Creativity, 58, 101953. [Google Scholar] [CrossRef]
  18. Hao, C., & Zhang, F. (2026). Understanding self-regulated grammar learning with LLM chatbot support: An epistemic network analysis of grammar learning strategy patterns. System, 136, 103879. [Google Scholar] [CrossRef]
  19. Held, T., & Mejeh, M. (2024). Students’ motivational trajectories in vocational education: Effects of a self-regulated learning environment. Heliyon, 10(8), e29526. [Google Scholar] [CrossRef]
  20. Hunhevicz, J. J., & Hall, D. M. (2020). Do you need a blockchain in construction? Use case categories and decision framework for DLT design options. Advanced Engineering Informatics, 45, 101094. [Google Scholar] [CrossRef]
  21. Ibrahim, F., Telle, N., Yorck, P., & Christoph, J. (2025). The construction and validation of the AI mindset scale (AIMS). Computers in Human Behavior: Artificial Humans, 6, 100220. [Google Scholar] [CrossRef]
  22. Jos, E., Esteve-faubel, R. P., & Botella-quirant, M. T. (2025). Fostering soft skills through collaborative music projects in the initial education stage of primary teachers. Social Sciences & Humanities Open, 12, 101681. [Google Scholar] [CrossRef]
  23. Kabigting, F. J., Jr., Donaldson, S., & Nakamura, J. (2025). Improving employee self-rated creativity using paradoxical strengths regulation: A mediated path analysis among personality traits, paradox mindset, and employee self-rated creativity. Journal of Creativity, 35(2), 100101. [Google Scholar] [CrossRef]
  24. Karlsson, P. S., Shafti, F., & Duffy, K. (2025). The house of the business school: A pragmatic approach to conceptualising learning community. International Journal of Management Education, 23(2), 101141. [Google Scholar] [CrossRef]
  25. Kim, A., & Su, Y. (2024). How implementing an AI chatbot impacts Korean as a foreign language learners’ willingness to communicate in Korean. System, 122, 103256. [Google Scholar] [CrossRef]
  26. Kong, J., Xu, X., Xu, J., Han, G., & Xue, Y. (2025). Development of a growth mindset assessment scale for nursing students based on the growth mindset model: A mixed-method study. Nurse Education in Practice, 82, 104232. [Google Scholar] [CrossRef]
  27. Krskova, H., & Breyer, Y. A. (2023). The influence of growth mindset, discipline, flow and creativity on innovation: Introducing the MDFC model of innovation. Heliyon, 9(3), e13884. [Google Scholar] [CrossRef]
  28. Lee, A. A., Totonchi, D. A., Priniski, S. J., Lee, M., Perez, T., & Linnenbrink-garcia, L. (2024). Do performance goals and fixed mindset explicate the relations between stereotype threat and achievement? Examining differences between racially marginalized and White students in STEM. Learning and Individual Differences, 115, 102525. [Google Scholar] [CrossRef]
  29. Lee, M. T., Kachen, S., Krishen, A. S., & Raschke, R. L. (2025). Creativity ambidexterity and sustainable business: Taking advantage of creative thinking techniques. Technological Forecasting and Social Change, 213, 123993. [Google Scholar] [CrossRef]
  30. Lee, P., Hung, J., Liau, P., & Tsai, C. (2026). A dual mediation model linking design thinking mindset to creative problem-solving skills through creative self-efficacy and critical thinking disposition. Thinking Skills and Creativity, 60, 102055. [Google Scholar] [CrossRef]
  31. Li, H., Zhang, Y., Chen, M., Zhao, T., & Jou, M. (2026). Creative personal identity in the age of generative AI: A social-cognitive pathway of AI literacy, self-efficacy, and mindset. Computers in Human Behavior, 175, 108838. [Google Scholar] [CrossRef]
  32. Li, Z., & Li, Q. (2025). The effects of school climate on students’ creativity: The mediating role of growth mindset and self-efficacy. Thinking Skills and Creativity, 57, 101851. [Google Scholar] [CrossRef]
  33. Liang, Z., Qian, K., Wu, L., Li, M., Zhu, Y., Liu, D., Gu, X., & Zheng, Y. (2026). Teachers’ creativity-fostering behaviors and students’ creativity: Parallel serial mediation via growth mindset, fear of evaluation, and creative self-efficacy. Thinking Skills and Creativity, 60, 102104. [Google Scholar] [CrossRef]
  34. Loza, S. (2025). “Did you have some kind of blow to the head?”: Spanish heritage language learners, language ideologies and oral corrective feedback. Linguistics and Education, 85, 101380. [Google Scholar] [CrossRef]
  35. Makransky, G., & Mayer, R. E. (2022). Benefits of taking a virtual field trip in immersive virtual reality: Evidence for the immersion principle in multimedia learning. Educational Psychology Review, 34(3), 1771–1798. [Google Scholar] [CrossRef]
  36. Medina, M. S. (2017). Making students’ thinking visible during active learning. American Journal of Pharmaceutical Education, 81(3), 41. [Google Scholar] [CrossRef]
  37. Micheli, P., Wilner, S. J. S., Bhatti, S. H., Mura, M., & Beverland, M. B. (2019). Doing design thinking: Conceptual review, synthesis, and research agenda. Journal of Product Innovation Management, 36(2), 124–148. [Google Scholar] [CrossRef]
  38. Narendorf, S. C., Khan, U., Munson, M. R., & Klodnick, V. V. (2025). Transition symptom management careers: Historical patterns of mental health symptoms and service use among young adults experiencing a psychiatric crisis. Social Science & Medicine, 366, 117657. [Google Scholar] [CrossRef]
  39. Noraset, T., Supratak, A., Ragkhitwetsagul, C., Worathong, N., & Tuarob, S. (2026). Evaluating lab assistant chatbot on student learning and behaviors in a programming short course. Computers and Education: Artificial Intelligence, 10, 100527. [Google Scholar] [CrossRef]
  40. Pakseresht, A., Kermani, A., & Decker-lange, C. (2025). Towards a sustainable and circular blue bioeconomy: A scoping review. Technological Forecasting & Social Change, 216, 124157. [Google Scholar] [CrossRef]
  41. Pande, M., & Bharathi, S. V. (2020). Theoretical foundations of design thinking—A constructivism learning approach to design thinking. Thinking Skills and Creativity, 36, 100637. [Google Scholar] [CrossRef]
  42. Peeters, M. J., & Vaidya, V. A. (2016). A mixed-methods analysis in assessing students’ professional development by applying an assessment for learning approach. American Journal of Pharmaceutical Education, 80(5), 77. [Google Scholar] [CrossRef] [PubMed]
  43. Peperkorn, C., & Wegner, C. (2026). Measurement of divergent thinking in biological contexts: Development, pilot testing, and validation of a test instrument for use in school. Thinking Skills and Creativity, 59, 102010. [Google Scholar] [CrossRef]
  44. Ren, W., Li, J., Pi, Z., Guo, J., & Li, X. (2026). How do high-performers and low-performers differently engage in collaborative creative problem solving with a conversational GenAI chatbot? Thinking Skills and Creativity, 60, 102134. [Google Scholar] [CrossRef]
  45. Roth, T., Conradty, C., & Bogner, F. X. (2022). The relevance of school self-concept and creativity for CLIL outreach learning. Studies in Educational Evaluation, 73, 101153. [Google Scholar] [CrossRef]
  46. Rücker, C. R., & Becker-Genschow, S. (2025). Enhancing enthusiasm for STEM education with AI: Domain-specific chatbot as personalized learning assistant. Computers and Education Open, 9, 100315. [Google Scholar] [CrossRef]
  47. Sanabria-Z, J., & Olivo, P. G. (2024). AI platform model on 4IR megatrend challenges: Complex thinking by active and transformational learning. Interactive Technology and Smart Education, 21(4), 571–587. [Google Scholar] [CrossRef]
  48. Schiele, T., Edelsbrunner, P., Mues, A., Birtwistle, E., Wirth, A., & Niklas, F. (2025). The effectiveness of game-based literacy app learning in preschool children from diverse backgrounds. Learning and Individual Differences, 117, 102579. [Google Scholar] [CrossRef]
  49. Schwiering, P., & Heyder, A. (2026). Experimental evidence on the effects of preservice teachers’ growth and fixed mindsets on teaching self-efficacy and anticipated. Learning and Instruction, 102, 102266. [Google Scholar] [CrossRef]
  50. Sevimli-Celik, S., & Güvelioglu, E. (2026). From comfort zone to growth zone: Experiential projects as catalysts for creativity in pre-service teachers. Thinking Skills and Creativity, 59, 101994. [Google Scholar] [CrossRef]
  51. Sigmundsson, H., & Haga, M. (2024). Growth mindset scale: Aspects of reliability and validity of a new 8-item scale assessing growth mindset. New Ideas in Psychology, 75, 101111. [Google Scholar] [CrossRef]
  52. Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Computers and Education, 151, 103862. [Google Scholar] [CrossRef]
  53. Soyoof, A., Lee, B., Rassaei, E., Kao, C., & Van Ha, X. (2026). From teachers to chatbots: Scaffolded corrective feedback and student trust in online L2 English classrooms. Computers and Education: Artificial Intelligence, 10, 100530. [Google Scholar] [CrossRef]
  54. Su, L., Wei, J., & Chuang, H. (2026). Fostering divergent thinking through a growth creative mindset: The mediating roles of multicultural attitudes and openness to experience. Thinking Skills and Creativity, 60(70), 102129. [Google Scholar] [CrossRef]
  55. Tam, W., Huynh, T., Tang, A., Luong, S., Khatri, Y., & Zhou, W. (2023). Nursing education in the age of artificial intelligence powered Chatbots (AI-Chatbots): Are we ready yet? Nurse Education Today, 129, 105917. [Google Scholar] [CrossRef] [PubMed]
  56. Teixeira, J. C. C., Bernardi, F. A., Rijo, R. P. C. L., & Alves, D. (2021). Proposal for a health information management model based on Lean thinking. Procedia Computer Science, 181, 1097–1104. [Google Scholar] [CrossRef]
  57. Thompson, L., & Schonthal, D. (2020). The social psychology of design thinking. California Management Review, 62(2), 84–99. [Google Scholar] [CrossRef]
  58. Tise, J. C., Hernandez, P. R., & Wesley Schultz, P. (2023). Mentoring underrepresented students for success: Self-regulated learning strategies as a critical link between mentor support and educational attainment. Contemporary Educational Psychology, 75, 102233. [Google Scholar] [CrossRef]
  59. Tran-Duong, Q. H., & Do-Hung, D. (2025). The mediating role of student growth mindset between teacher feedback, peer collaboration, and creative thinking dispositions. Studies in Educational Evaluation, 87, 101526. [Google Scholar] [CrossRef]
  60. Verganti, R., Vendraminelli, L., & Iansiti, M. (2020). Innovation and design in the age of artificial intelligence. Journal of Product Innovation Management, 37(3), 212–227. [Google Scholar] [CrossRef]
  61. Warren, F., Mason-apps, E., Hoskins, S., Azmi, Z., & Boyce, J. (2018). The role of implicit theories, age, and gender in the creative performance of children and adults. Thinking Skills and Creativity, 28, 98–109. [Google Scholar] [CrossRef]
  62. White, I., Gallagher, F., & Tiernan, P. (2026). ‘Now I understand what being creative looks like’: Preservice-teachers’ experiences of a module on creativity in education. Thinking Skills and Creativity, 60, 102113. [Google Scholar] [CrossRef]
  63. Wong, J., Baars, M., Davis, D., Van Der Zee, T., Houben, G. J., & Paas, F. (2019). Supporting self-regulated learning in online learning environments and MOOCs: A systematic review. International Journal of Human-Computer Interaction, 35(4–5), 356–373. [Google Scholar] [CrossRef]
  64. Xia, N., Hany, S., Huang, Y., & Niu, R. (2025). The effectiveness of CPS + SCAMPER teaching mode and strategies on student creativity. Thinking Skills and Creativity, 56, 101758. [Google Scholar] [CrossRef]
  65. Yao, Y., Zhu, X., Xiao, L., & Lu, Q. (2025). Secondary school English teachers’ application of artificial intelligence-guided chatbot in the provision of feedback on student writing: An activity theory perspective. Journal of Second Language Writing, 67, 101179. [Google Scholar] [CrossRef]
  66. Yu, J., Kim, H., Zheng, X., Li, Z., & Zhu, X. (2024). Effects of scaffolding and inner speech on learning motivation, flexible thinking and academic achievement in the technology-enhanced learning environment. Learning and Motivation, 86, 101982. [Google Scholar] [CrossRef]
  67. Zeng, C. (2025). The role of growth mindset, self-efficacy, and environmental support in ICT practices for creative thinking development. International Journal of Educational Research, 132(5), 102631. [Google Scholar] [CrossRef]
  68. Zhang, C., Xie, Y., Bai, H., Yu, B., Li, W., & Gao, Y. (2021). A survey on federated learning. Knowledge-Based Systems, 216, 106775. [Google Scholar] [CrossRef]
  69. Zhou, Y., Li, P., Wang, Y., & Sun, Z. (2025). The double-edged sword effect of fixed creative mindset on creativity: Investigating when and how it facilitates or hinders creativity. Acta Psychologica, 258, 105139. [Google Scholar] [CrossRef]
  70. Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25(1), 82–91. [Google Scholar] [CrossRef]
  71. Zimmerman, D. W., & Williams, R. H. (2016). Gain scores in research can be highly reliable. Journal of Educational Measurement, 19(2), 149–154. [Google Scholar] [CrossRef]
Figure 1. Integrative conceptual model.
Figure 2. Chatbot interface built with Nusabot.
Figure 3. Structure of the BGCE learning chatbot with feedback learning system.
Figure 4. Chatbot feedback.
Figure 4. Chatbot feedback.
Education 16 00582 g004
Figure 5. Screenshot of chatbot usage.
Figure 5. Screenshot of chatbot usage.
Education 16 00582 g005
Figure 6. Instructional design and procedures.
Figure 6. Instructional design and procedures.
Education 16 00582 g006
Figure 7. Effects of CS-MBTV on creativity improvement.
Figure 7. Effects of CS-MBTV on creativity improvement.
Education 16 00582 g007
Figure 8. Effects of CS-MBTV on creativity mindset improvement.
Figure 8. Effects of CS-MBTV on creativity mindset improvement.
Education 16 00582 g008
Table 1. Summary of chatbot platforms, access, recorded data, and usage schedule.
Aspect | Description
Platform/access channel | LMS/Website
Student input | Sprint progress summary, prompt answers, links/artifact uploads
Chatbot output | Monitoring prompts, checklists, sprint guidance, three-level feedback rubric
Schedule of use | Weeks 1–11 (checkpoint per sprint/weekly)
Recorded data | Timestamp, number of interactions, artifact revisions, duration
Rules of use | Verify-before-use, prohibition on fabricating data, focus on process scaffolding
Collected outputs | BGCE artifacts, reflections, usage logs (fidelity)
Table 2. Summary of dose and fidelity of chatbot use in the experimental group (CS-MBTV) during weeks 1–11.
Indicator | Operational Definition | Summary Statistics
Turns per student | Total student messages + chatbot responses during the intervention | Median (IQR) = 84 (55–118); Mean ± SD = 92.6 ± 41.3
Active sessions per student | Number of distinct days with at least 1 interaction | Median (IQR) = 18 (12–26); Mean ± SD = 19.4 ± 8.7
Active weeks per student | Number of weeks (out of 11) with at least 1 interaction | Median (IQR) = 8 (6–10); Mean ± SD = 8.1 ± 2.4
Consistency of use | Activity category by number of active weeks | High (≥9 weeks) = 37.1%; Moderate (5–8 weeks) = 50.0%; Low (≤4 weeks) = 12.9%
Response to prompts | Proportion of prompts answered by the student within 48 h | Median (IQR) = 78% (65–90%); Mean ± SD = 76.4% ± 15.2%
Response latency | Time from prompt sent to first response | Median (IQR) = 6.8 h (2.4–18.5)
Artifact revisions | Number of revision iterations (version/edit uploads) on BGCE artifacts | Median (IQR) = 5 (3–7); Mean ± SD = 5.2 ± 2.1
Completion rate | Weekly checkpoint/task completion percentage | Mean ± SD = 86.3% ± 12.4%; ≥80% = 74.2%
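The consistency-of-use categories above follow explicit week thresholds. As a minimal sketch (the function name is ours), the classification rule is:

```python
def usage_consistency(active_weeks: int) -> str:
    """Categorize a student's chatbot-use consistency by active weeks (out of 11)."""
    if active_weeks >= 9:
        return "High"      # >= 9 active weeks
    if active_weeks >= 5:
        return "Moderate"  # 5-8 active weeks
    return "Low"           # <= 4 active weeks
```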
Table 3. Descriptive statistics for creativity performance scores.
Group | n | Pretest (M ± SD) | Posttest (M ± SD) | Gain (M ± SD)
Experimental (CS-MBTV) | 62 | 8.30 ± 0.97 | 9.35 ± 1.18 | 1.05 ± 0.68
Control | 58 | 8.28 ± 0.82 | 8.06 ± 1.03 | −0.22 ± 0.64
Table 4. Independent-samples t-test on gain scores (posttest − pretest).
Comparison | Mean Gain (E) | Mean Gain (C) | Mean Diff (E − C) | t (df) | p | 95% CI of Diff | Hedges’ g
Experimental vs. Control | 1.05 | −0.22 | 1.27 | 10.50 (118) | <0.001 | [1.03, 1.51] | 1.91
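The reported t statistic and Hedges’ g can be reproduced from the group summary statistics for the gain scores (a verification sketch; the function name and tolerances are ours):

```python
import math

def pooled_t_and_g(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t and Hedges' g
    (Cohen's d with the small-sample correction 1 - 3/(4*df - 1))."""
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)  # pooled SD
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    g = ((m1 - m2) / sp) * (1 - 3 / (4 * df - 1))
    return t, g

# Gain scores: experimental 1.05 +/- 0.68 (n = 62), control -0.22 +/- 0.64 (n = 58)
t, g = pooled_t_and_g(1.05, 0.68, 62, -0.22, 0.64, 58)
# t comes out near 10.5 on df = 118 and g near 1.91, matching the reported
# values up to rounding of the published summary statistics.
```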
Table 5. Descriptive statistics of creativity mindsets (CMIs) by group and time.
Dimension (CMI) | Group | n | Pretest (M ± SD) | Posttest (M ± SD) | Gain (M ± SD)
Growth–Internal (GI) | Experimental (CS-MBTV) | 62 | 3.30 ± 0.78 | 4.55 ± 0.84 | 1.25 ± 0.70
Growth–Internal (GI) | Control | 58 | 3.35 ± 0.74 | 3.78 ± 0.81 | 0.43 ± 0.66
Growth–External (GE) | Experimental (CS-MBTV) | 62 | 3.25 ± 0.80 | 4.70 ± 0.86 | 1.45 ± 0.76
Growth–External (GE) | Control | 58 | 3.40 ± 0.77 | 3.85 ± 0.83 | 0.45 ± 0.70
Fixed–Internal (FI) | Experimental (CS-MBTV) | 62 | 3.05 ± 0.72 | 2.70 ± 0.74 | −0.35 ± 0.58
Fixed–Internal (FI) | Control | 58 | 3.00 ± 0.69 | 2.88 ± 0.70 | −0.12 ± 0.55
Fixed–External (FE) | Experimental (CS-MBTV) | 62 | 2.95 ± 0.71 | 2.65 ± 0.72 | −0.30 ± 0.56
Fixed–External (FE) | Control | 58 | 2.90 ± 0.68 | 2.80 ± 0.69 | −0.10 ± 0.52
Table 6. Independent-samples t-test on gain scores.
Dimension | Mean Gain (E) | Mean Gain (C) | Diff (E − C) | t(118) | p | Hedges’ g
GI | 1.25 | 0.43 | 0.82 | 6.57 | <0.001 | 1.20
GE | 1.45 | 0.45 | 1.00 | 7.63 | <0.001 | 1.39
FI | −0.35 | −0.12 | −0.23 | −1.71 | 0.090 | 0.31
FE | −0.30 | −0.10 | −0.20 | −1.55 | 0.124 | 0.28