Abstract
The integration of large language models (LLMs) into project-based learning (PBL) holds significant potential for addressing enduring pedagogical challenges in engineering education, such as providing scalable, personalized support during complex problem-solving. Grounded in Self-Determination Theory (SDT), this study investigates how different LLM usage strategies impact student learning within a blended engineering geology PBL context. A one-semester quasi-experiment (N = 120) employed a 2 (usage mode: individual/shared) × 2 (interaction restriction: restricted/unrestricted) factorial design. Mixed-methods data, including surveys, interaction logs, and reflective reports, were analyzed to assess learning engagement, psychological needs satisfaction, cognitive interaction levels, and project outcomes. Results demonstrate that the individual use strategy significantly outperformed shared use in enhancing engagement, needs satisfaction, higher-order cognitive interactions, and final project scores. The restricted interaction strategy effectively served as a metacognitive scaffold, optimizing the learning process by promoting deliberate planning. Notably, individual autonomy did not undermine collaboration but enhanced it by improving the quality of individual contributions to group work. Students also developed robust critical verification habits to navigate LLM “hallucinations.” This research identifies “individual autonomy” as the core mechanism and “moderate constraint” as a crucial design principle for LLM integration, providing an empirically supported framework for harnessing generative AI to foster both motivational and cognitive outcomes in engineering PBL.
1. Introduction
We are currently witnessing an era of academic digital transformation, driven by advancements in artificial intelligence (AI) that are profoundly reshaping traditional teaching and learning paradigms. This shift is accelerating the transition from an instruction-based model focused on knowledge transmission to one centered on competency development [1]. Within this transformative landscape, generative AI (GenAI), particularly powerful large language models (LLMs), has emerged as a disruptive force. Its powerful natural language processing and content generation capabilities offer unprecedented potential for creating highly personalized and adaptive learning environments [2]. By acting as a virtual tutor, LLMs can provide students with real-time, tailored guidance in reading, writing, coding, and complex problem solving, thereby significantly expanding the depth and breadth of learning experiences [3]. Higher education, particularly engineering education, which serves to cultivate future engineers and innovators, faces the urgent challenge of effectively integrating such cutting-edge technologies into teaching practices to equip students to overcome increasingly complex global engineering challenges.
The core objective of engineering education is to cultivate students’ ability to solve complex, ill-structured real-world problems. Project-based learning (PBL), a student-centered pedagogy rooted in constructivism, has been recognized as an effective pathway for addressing this goal [4,5]. By engaging students in an extended inquiry process centered on an authentic, complex, and meaningful driving question, PBL enables students to construct core knowledge and develop transferable skills such as critical thinking, creativity, and collaboration, which closely align with the core requirements of engineering education accreditation [6,7].
However, the successful implementation of PBL in high-order-thinking-intensive courses such as engineering geology presents several persistent challenges [3,8]. First, instructors struggle to provide timely, in-depth, and personalized feedback to each student group. Second, the quality of student collaboration is often uneven, with issues such as “free-riding” and superficial discussions. Third, students often feel overwhelmed when confronted with the incompleteness and contradictions inherent in real-world engineering data, leading to a theory-practice gap and potentially undermining their sense of competence. These challenges highlight a critical need for scalable, intelligent support systems that can adapt to individual cognitive and psychological needs during complex problem-solving.
The recent proliferation of blended learning provides an ideal context for integrating online and offline resources [9]. Concurrently, the advent of LLMs offers a new technological lever with unique potential to address the aforementioned challenges in engineering PBL [10]. Emerging research suggests that LLMs can serve as personalized learning companions, idea stimulators, and collaboration catalysts within PBL workflows [11,12,13]. For instance, recent empirical studies have demonstrated the utility of ChatGPT in enhancing specific skills within PBL settings, such as news writing [14] and supporting collaborative design processes [15]. Despite this promising potential, most current research remains at the level of theoretical discussion or model conceptualization, or focuses on generic educational outcomes [16,17]. There is a scarcity of rigorous, long-term empirical studies that systematically embed LLMs within the blended PBL context of specific, high-complexity disciplines like engineering geology, and that deliberately design LLM usage strategies based on motivational theory to examine their impact on both learning processes (e.g., psychological needs, engagement) and outcomes (e.g., project quality, collaboration).
The effective application of any technology requires guidance from sound educational theory. Self-determination theory (SDT), a macro-theory of human motivation, provides a powerful theoretical lens through which to understand and optimize the use of LLMs in PBL [18]. SDT posits that individuals’ intrinsic motivation and optimal functioning stem from the satisfaction of three innate basic psychological needs: autonomy, competence, and relatedness [19]. Recent studies have begun to explore GenAI applications through an SDT lens, suggesting its potential to support these needs in digital learning environments [14,15]. However, empirical research that proactively designs LLM interaction strategies based on SDT principles and causally tests their efficacy within the authentic, ill-structured problem-solving context of discipline-specific PBL remains extremely rare.
Against this backdrop, this study uses an engineering geology course as its setting to investigate how LLM intervention strategies, designed based on SDT principles, influence student project-based learning. Specifically, this research seeks to address the following core questions: (RQ1) What patterns and cognitive characteristics emerge from student interactions with an LLM? (RQ2) How do students perceive and experience the role of an LLM as a learning partner? (RQ3) Do SDT-based LLM interventions significantly enhance students’ learning engagement, basic psychological needs satisfaction, and the quality of final project outcomes? (RQ4) Between ‘individual use’ and ‘shared use’ modes, which mode better optimizes learning effectiveness? (RQ5) Between ‘restricted interaction’ and ‘unrestricted interaction’ strategies, which strategy more effectively promotes deep learning? By answering these questions, this study aims to provide an empirically tested, theory-informed design framework for the effective and responsible integration of LLMs in engineering education.
2. Literature Review
2.1. PBL in Engineering Education: Efficacy and Challenges
Rooted in constructivist pedagogy, PBL emphasizes deep understanding and application of knowledge by solving authentic, complex problems. Its value is particularly pronounced in engineering education, as it closely mirrors the authentic workflow of engineers [20]. Compared with traditional instruction, PBL effectively promotes the integration of interdisciplinary knowledge, critical thinking, innovation capability, and collaborative skills [21,22].
In highly practical disciplines such as engineering geology, PBL typically involves students undertaking comprehensive engineering design and analysis projects (e.g., slope stability assessment), enabling them to experience the complete cycle of engineering practice and bridge the gap between “knowing” and “doing” [23]. Despite its recognized advantages, the deep implementation of PBL faces significant challenges, including high instructor guidance burdens, variable collaboration quality among students, and student frustration when dealing with the inherent uncertainty and complexity of real-world engineering problems [6,7]. These persistent challenges create a clear imperative for exploring innovative, intelligent support systems.
2.2. The Educational Application of GenAI: From General to Disciplinary
The emergence of GenAI, particularly LLMs, heralds a potential paradigm shift, moving beyond instrumental tools toward intelligent cognitive partnership [2]. In general education, LLMs have demonstrated significant potential as writing assistants, programming tutors, and research aides [13,24]. Recently, research has begun to explore the specific application of LLMs in PBL. For example, Mukhlis [11] found that a ChatGPT-based PBL model enhanced students’ news writing skills. Other studies suggest GenAI can empower various stages of PBL, from brainstorming to design optimization [12,25]. However, these studies often focus on general skills or are conceptual in nature. Rigorous, long-term empirical validation within the complex, ill-structured problem-solving contexts of specific engineering disciplines is scarce. Key questions remain unanswered: How can LLMs be deeply integrated into the PBL workflow of a highly practical discipline? What are the measurable effects on engineering thinking and project outcomes? Our study directly addresses this gap.
2.3. SDT: Understanding the Intrinsic Motivation of Technology-Empowered Learning
To fully realize the potential of LLMs in PBL, it is necessary to examine how these AI partners influence learners’ intrinsic psychological motivation. SDT provides an excellent theoretical lens for this purpose [18]. SDT posits that the activation of intrinsic motivation depends on the satisfaction of three basic psychological needs: autonomy (feeling volitional), competence (feeling effective), and relatedness (feeling connected) [19].
SDT offers clear criteria for designing educational technology: success hinges on whether it is “need supportive” [18]. In the PBL context, LLMs hold great potential as a need-supportive tool by allowing autonomous exploration (supporting autonomy), providing immediate personalized feedback to overcome obstacles (enhancing competence), and serving as a neutral reference point to deepen group discussions (fostering relatedness) [14,16]. Emerging research has begun to apply SDT to understand engagement in blended learning [15] and to develop frameworks for self-regulated learning with GenAI [17]. Nevertheless, empirical research that deeply integrates SDT, LLMs, and discipline-specific PBL to causally explain the mechanisms through which different usage strategies affect students’ intrinsic motivation and learning outcomes remains critically lacking. This study aims to fill this void by proactively designing LLM strategies based on SDT and empirically testing their efficacy.
3. Methodology
3.1. Research Design and Paradigm
This study employed an explanatory sequential mixed-methods design, prioritizing quantitative methods while supplementing them with qualitative approaches. This design was chosen to provide a comprehensive and in-depth investigation into the effects of LLM application within a PBL context in an engineering geology course. Specifically, a quasi-experimental design featuring a pretest–posttest, multiple-group structure was implemented. Quantitative data were primarily used to examine the causal effects of different LLM intervention strategies on learning outcomes. Subsequently, qualitative data were collected and analyzed to explain and elaborate upon the underlying reasons and contextual factors behind the quantitative findings. This approach is particularly suited to exploring the complex, mechanism-oriented “how” and “why” questions surrounding LLM integration, providing rich insights despite the single-institution context.
3.2. Research Context and Participants
The study was conducted in an undergraduate engineering geology course at a leading Chinese university of science and engineering. This course is a core requirement for senior-year students and is characterized by its comprehensive content involving substantial engineering calculation and design practice, making it an ideal setting for PBL implementation. The participants consisted of 128 students from four naturally constituted classes (each exceeding 30 students), with a mean age of 21.3 years. No significant differences were found in their prior academic performance in prerequisite courses. From each class, 30 students were randomly selected and assigned to one of four experimental groups corresponding to four distinct LLM usage strategies, yielding a final sample of 120. The course was delivered in a blended format: the online component, facilitated via the Rain Classroom platform (https://www.yuketang.cn/, accessed on 8 October 2025), was used for resource distribution, preliminary discussions, and assignment submission; the face-to-face component was reserved for in-depth project inquiry, instructor consultations, and collaborative work. The course ran for 10 weeks, with weeks 2–9 dedicated to the core PBL project cycle.
3.3. Interventions and Experimental Procedure
The core independent variable was the LLM usage strategy, operationalized along two dimensions: usage mode and interaction restriction. This factorial design allows for a nuanced investigation of key pedagogical choices when deploying LLMs: should access be personalized or shared, and should interaction be free or structured?
Usage Mode: In the shared use condition, group members shared a single LLM account and were required to discuss and formulate questions collectively before querying. In the individual use condition, each student within a group possessed an independent LLM account and could pose questions autonomously.
Interaction Restriction: In the restricted condition, each account was limited to a maximum of 5 interactions with the LLM per key project phase (e.g., geological analysis, stability calculation). This limit was designed as a “moderate constraint” strategy, informed by pedagogical theory and pilot observations. Drawing on cognitive load theory [26,27], unrestricted access risks overwhelming students and promoting superficial use. A defined limit acts as a structured scaffold, encouraging metacognitive planning and more deliberate, high-quality inquiry [28]. Pilot testing indicated that “5 interactions per phase” effectively balanced sufficient support with the need to promote deeper engagement, optimizing the process without undue restriction. The unrestricted condition imposed no limits on interaction frequency.
This 2 (usage mode: individual/shared) × 2 (interaction restriction: restricted/unrestricted) factorial design resulted in four experimental groups: G1 (shared + restricted), G2 (shared + unrestricted), G3 (individual + restricted), and G4 (individual + unrestricted). All groups used Kimi, a high-performance general-purpose large language model (https://www.kimi.com/, accessed on 8 October 2025). The selection of this specific model was made during the study design phase, informed by preliminary pilot testing and alignment with the project’s contextual requirements. Key considerations for its selection included its robust performance within Chinese engineering contexts, providing reliable support for the long-context dialogues and queries involving charts, formulas, and multi-step calculations essential to the PBL project. Furthermore, the platform’s API enabled centralized account management and comprehensive activity logging, which was critical for experimental control. Its practical feasibility within the university’s data compliance framework, coupled with a favorable balance of response speed and cost for classroom-scale application, were also determining factors. The research team provided unified account access and basic prompt engineering training, covering how to formulate clear, specific engineering questions and outlining ethical usage guidelines, including the critical verification of outputs to mitigate risks associated with LLM “hallucinations.”
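For concreteness, the gating and logging logic described above can be summarized in a short sketch. The wrapper below is a hypothetical illustration, not the actual Kimi API or the research team’s production code; `query_fn`, the account identifiers, and the log format are all assumptions.

```python
# Hypothetical sketch of per-phase quota enforcement and dialogue logging;
# not the actual Kimi API or the research team's code.
import json
import time
from typing import Callable, Dict

PHASE_QUOTA = 5  # cap per key project phase in the restricted condition

class QuotaExceededError(Exception):
    """Raised when an account has spent its allotment for the current phase."""

class LoggedLLMClient:
    def __init__(self, account_id: str, query_fn: Callable[[str], str],
                 restricted: bool, log_path: str):
        self.account_id = account_id
        self.query_fn = query_fn        # assumed callable wrapping the LLM API
        self.restricted = restricted
        self.log_path = log_path
        self.used: Dict[str, int] = {}  # interactions consumed per phase

    def ask(self, phase: str, prompt: str) -> str:
        if self.restricted and self.used.get(phase, 0) >= PHASE_QUOTA:
            raise QuotaExceededError(f"{self.account_id}: quota spent for '{phase}'")
        response = self.query_fn(prompt)
        self.used[phase] = self.used.get(phase, 0) + 1
        # One JSON line per turn; such logs later feed the Bloom-level coding.
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"time": time.time(), "account": self.account_id,
                                "phase": phase, "prompt": prompt,
                                "response": response}, ensure_ascii=False) + "\n")
        return response
```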
The PBL task was a comprehensive, authentic engineering case: “stability evaluation and treatment design for a highway slope.” The central driving question was as follows: “As a geological engineer, your team is commissioned to conduct a detailed investigation and analysis of this potentially unstable slope. Integrate geological and geotechnical knowledge to assess its current stability, predict its safety under extreme conditions, and design a technically feasible, economically sound, and environmentally friendly comprehensive treatment solution. Present and justify your design using digital tools.” The procedure unfolded as follows:
Week 1 (Pretest and Preparation): Basic course knowledge was provided. A pretest (including scales for learning engagement, psychological needs, and perceptions of GenAI/LLMs) was administered to all students. The PBL project brief was introduced, students formed project groups of 5–6 members, and LLM usage training was provided.
Weeks 2–9 (Project Execution and Intervention): Groups progressed through the project according to the brief, encompassing three main phases: analysis of engineering geological conditions, stability calculation and evaluation, and treatment design. Throughout this process, the groups strictly adhered to their assigned LLM usage strategy. The instructor acted as a facilitator, providing necessary academic support without interfering in the students’ LLM usage decisions.
Week 10 (Posttest and Conclusion): The groups submitted final project reports and delivered presentations. All students completed the posttest questionnaire (identical to the pretest scales, with the addition of a collaborative experience scale) and submitted their complete LLM interaction logs along with an individual reflective report on their LLM use.
3.4. Measures and Data Collection
The study’s dependent variables included both learning processes and outcomes. To measure the learning process, we utilized questionnaires adapted from well-established scales. These instruments included the Learning Engagement Scale (covering behavioral, emotional, cognitive, and agentic dimensions) by Jang et al. [29]; the Basic Psychological Needs Satisfaction Scale (measuring autonomy, competence, and relatedness) by Sheldon and Hilpert [30]; the GenAI/LLM Acceptance Scale (comprising perceived usefulness and ease of use) by Shahzad et al. [31]; and the Project-based Collaboration Experience Scale (including structural and interpersonal dimensions) by Liu and Zhang [32]. The first three scales employed a 5-point Likert scale, whereas the collaboration experience scale used a 6-point Likert scale.
Prior to formal data collection, all scales underwent a pilot test. More than ten students from the same major were invited to complete the draft questionnaires and provide feedback on item clarity, potential ambiguity, and contextual appropriateness. Based on this feedback, minor wording adjustments to several items were made, resulting in the final versions used in this study (see Supplementary Materials).
Data collection was meticulously aligned with the project timeline. At the beginning of the course (Week 1), the pretest was administered, which included the learning engagement, psychological needs satisfaction, and GenAI/LLM Acceptance scales. Basic student information, including grades in prerequisite courses, was also collected. Upon course completion (Week 10), the posttest questionnaire, which repeated the pretest scales and added the project-based collaboration experience scale, was distributed.
With respect to learning outcomes, the final project score served as a key metric. This score was determined at the group level by two subject-matter instructors who were blinded to the group allocations. They independently graded the final project reports using a detailed, pre-established rubric that assessed dimensions such as design innovation, technical rationality, calculation accuracy, and report standardization. The final score for each group was the average of the two instructors’ ratings. Since the project was a collaborative product, the group was the appropriate unit of analysis for project outcomes.
In addition to the quantitative data, rich qualitative data were collected. The complete log of all dialogues between students and the LLM was captured for subsequent analysis of cognitive interaction patterns, providing a direct record of how students leveraged the model’s capabilities. Furthermore, upon project completion, each student was required to submit an individual reflective report, detailing his or her personal experiences, challenges encountered, and key takeaways from using the LLM throughout the PBL process.
To control for potential confounding effects of differing LLM usage intensity across groups, the total number of LLM interactions per group (for shared groups) or the average number of interactions per student within a group (for individual groups) was recorded and included as a covariate in subsequent analyses of project outcomes where appropriate.
3.5. Data Analysis
To comprehensively address the research questions and ensure the robustness of the findings, a mixed-methods analytical approach was employed. All quantitative analyses were conducted using SPSS 26.0, while the qualitative data were subjected to standard content analysis procedures.
The quantitative analysis began with an assessment of the reliability of the measurement tools by calculating Cronbach’s alpha coefficients for all scales. Descriptive statistics (means and standard deviations) were computed, and a correlation matrix was generated. For all inferential statistical tests, effect sizes are reported (partial eta-squared, η2, for analyses of variance (ANOVAs) and analyses of covariance (ANCOVAs); Cohen’s d for pairwise comparisons) to facilitate the interpretation of practical significance.
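As a transparency aid, Cronbach’s alpha can be computed directly from an item-response matrix. The function below is a minimal sketch of the standard formula, not the SPSS procedure the study used; it assumes a complete (no-missing) matrix with respondents as rows and items as columns, and the demo data are fabricated.

```python
# Minimal Cronbach's alpha from the standard formula:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                          # number of items
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy usage with a fabricated 5-respondent, 4-item Likert matrix:
demo = np.array([[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4],
                 [2, 3, 2, 3], [4, 4, 5, 4]])
print(round(cronbach_alpha(demo), 2))
```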
To test the effects of the different LLM intervention strategies (RQ3, RQ4, RQ5), specific analyses were conducted. For the core process variables (learning engagement, psychological needs satisfaction), a two-way ANCOVA was performed with usage mode and interaction restriction as fixed factors, and the respective pretest scores as covariates. To ensure the validity of the ANCOVA, the key assumption of homogeneity of regression slopes was examined by testing interactions between the covariates and fixed factors; the absence of significant interactions (reported in Section 4.2) justified the model.
For the outcome variables, the analysis accounted for the nested structure of the data. The final project score was analyzed at the group level (N = 24 groups) using a two-way ANCOVA with usage mode and interaction restriction as fixed factors. The group-level measure of LLM interaction frequency (see Section 4.1) was included as a covariate to control for potential confounding effects of usage quantity, thereby addressing both the unit-of-analysis issue and variation in interaction volume.
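The analyses were run in SPSS 26.0; for readers who prefer an open-source workflow, an equivalent model specification in Python’s statsmodels is sketched below. The file names and column names (pre, post, mode, restrict, freq, score) are hypothetical placeholders, not the study’s data files.

```python
# Equivalent-model sketch of the reported ANCOVAs (the study itself used
# SPSS 26.0). File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

students = pd.read_csv("student_level.csv")   # pre, post, mode, restrict

# Homogeneity-of-regression-slopes check: covariate x factor interactions
slopes = smf.ols("post ~ pre * C(mode) + pre * C(restrict)", students).fit()
print(sm.stats.anova_lm(slopes, typ=2))       # pre:C(...) terms should be n.s.

# Two-way ANCOVA: pretest covariate, usage mode and restriction as factors
ancova = smf.ols("post ~ pre + C(mode) * C(restrict)", students).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Group-level model for project scores (N = 24), controlling usage volume
groups = pd.read_csv("group_level.csv")       # score, freq, mode, restrict
outcome = smf.ols("score ~ freq + C(mode) * C(restrict)", groups).fit()
print(sm.stats.anova_lm(outcome, typ=2))
```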
The qualitative data underwent structured analysis. The LLM interaction logs were analyzed via content analysis. Each query was coded using a six-level taxonomy derived from the revised Bloom’s taxonomy [33], a framework validated for analyzing cognitive engagement with educational technology and conversational AI [34,35]. This allows distinction between surface-level retrieval (e.g., remembering) and deeper constructive engagement (e.g., evaluating, creating). Two coders performed the coding independently; intercoder reliability was assessed using Cohen’s kappa. The individual reflective reports were analyzed via thematic analysis following Braun and Clarke’s [36] six-phase approach to understand students’ experiences and perceptions (RQ2).
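Intercoder agreement of the kind reported in Section 4.3 can be checked in a few lines. The sketch below uses scikit-learn’s cohen_kappa_score on a fabricated pair of label sequences; the labels merely mimic the six Bloom levels.

```python
# Intercoder reliability sketch: Cohen's kappa over two coders' Bloom-level
# labels for the same queries. These label sequences are fabricated.
from sklearn.metrics import cohen_kappa_score

coder_a = ["remember", "analyze", "create", "understand", "analyze", "evaluate"]
coder_b = ["remember", "analyze", "create", "remember",   "analyze", "evaluate"]
print(round(cohen_kappa_score(coder_a, coder_b), 2))
```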
Finally, the quantitative and qualitative results were integrated and triangulated to provide convergent, complementary evidence for a more nuanced interpretation.
4. Results
4.1. Descriptive Statistics and Reliability Check
Prior to the hypothesis tests, reliability analyses of all scales were conducted. The results indicated good to excellent internal consistency within the context of this study. The Cronbach’s alpha coefficients were as follows: Learning Engagement Scale (0.92 pretest/0.93 posttest), Basic Psychological Needs Satisfaction Scale (0.88 pretest/0.89 posttest), GenAI/LLM Perception Scale (0.90 pretest/0.91 posttest), and Project-Based Collaboration Experience Scale (0.94 posttest). These results confirmed the reliability and stability of the measurement tools across different time points.
To ensure the validity of the quasi-experimental design, a one-way analysis of variance was first performed on all pretest variables across the four experimental groups (G1, G2, G3, and G4) to check for baseline equivalence. As shown in Table 1, no significant differences were found among the groups in pretest learning engagement (F(3, 116) = 0.42, p = 0.741), pretest needs satisfaction (F(3, 116) = 0.51, p = 0.677), pretest GenAI/LLM perception (F(3, 116) = 0.38, p = 0.769), or prerequisite course scores (F(3, 116) = 0.29, p = 0.835). These results confirm that the four groups were equivalent in terms of initial learning motivation, psychological needs, technology acceptance, and academic foundation before the implementation of the different LLM intervention strategies, satisfying the prerequisite for quasi-experimental comparison.
Table 1.
Homogeneity Test of Pretest Variables.
To verify that the experimental manipulation was implemented as intended and to assess the actual usage patterns, the mean number of LLM interactions per student (for individual groups) or per group (for shared groups) across the entire project was calculated and is presented in Table 2. As expected, groups under the restricted condition (G1 and G3) exhibited significantly fewer interactions than those under the unrestricted condition (G2 and G4). An independent samples t-test confirmed a significant difference in interaction counts between the restricted (M = 4.4, SD = 1.2) and unrestricted (M = 19.8, SD = 6.7) conditions, t(118) = 22.15, p < 0.001. This confirms high fidelity in implementing the interaction restriction manipulation and indicates that the four groups differed meaningfully in their actual LLM usage behavior.
Table 2.
Actual Number of LLM Interactions Across Experimental Groups.
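The manipulation check reduces to an independent-samples t-test on interaction counts. The snippet below illustrates the computation on simulated counts drawn to match the reported means and standard deviations; it is a sketch, not the study’s log data.

```python
# Manipulation-check sketch on simulated interaction counts matching the
# reported moments (M = 4.4, SD = 1.2 vs. M = 19.8, SD = 6.7); not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
restricted = rng.normal(4.4, 1.2, 60)
unrestricted = rng.normal(19.8, 6.7, 60)
t, p = stats.ttest_ind(restricted, unrestricted)
print(f"t({restricted.size + unrestricted.size - 2}) = {t:.2f}, p = {p:.2e}")
```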
Table 3 presents the descriptive statistics (means and standard deviations) for all posttest variables and the Pearson correlation coefficients among them. The results revealed significant positive correlations between posttest learning engagement, needs satisfaction, GenAI/LLM perception, and collaboration experience (r ranging from 0.52 to 0.78, p < 0.01), suggesting close interrelationships among these variables and indicating that effective LLM use may be associated with positive learning processes and outcomes. Furthermore, the project score was significantly positively correlated with learning engagement (r = 0.45, p < 0.01) and needs satisfaction (r = 0.41, p < 0.01).
Table 3.
Descriptive Statistics and Correlation Matrix for Key Variables (Post-test, N = 120).
4.2. Main Effects of GenAI Usage Strategies on Learning Outcomes
Prior to examining the main effects, the assumption of homogeneity of regression slopes for the ANCOVA was tested. For the model with posttest learning engagement as the dependent variable, the interactions between the pretest engagement score and the two fixed factors (usage mode, interaction restriction) were not significant (both p > 0.10). Similarly, for the model with posttest needs satisfaction as the dependent variable, the interactions involving the pretest needs satisfaction score were also non-significant (all p > 0.10). These results indicate that the relationship between the pretest covariate and the posttest outcome was consistent across all experimental groups, satisfying the homogeneity assumption and validating the subsequent interpretation of the adjusted means.
To examine the effects of different LLM usage strategies (individual/shared, restricted/unrestricted) on learning outcomes, two two-way analyses of covariance were conducted. The respective pretest scores served as covariates, usage mode and interaction restriction were fixed factors, and the posttest scores for learning engagement and needs satisfaction were the dependent variables. This design controlled for baseline differences to more accurately assess the net effect of the intervention strategies. The results are visualized in Figure 1.
Figure 1.
Comparison of adjusted means of learning engagement and needs satisfaction across different LLM usage strategies.
With respect to learning engagement, the main effect of usage mode was significant (F(1, 115) = 15.32, p < 0.001, η2 = 0.12). The individual usage groups (G3 + G4, Madj = 4.31) demonstrated significantly greater learning engagement than the shared usage groups did (G1 + G2, Madj = 3.93). This finding suggests that granting students independent control over the LLM tool and allowing for personalized interaction based on their own pace and needs effectively supported their autonomy and significantly enhanced their overall engagement across behavioral, emotional, cognitive, and agentic dimensions. The main effect of interaction restriction was also significant (F(1, 115) = 8.91, p = 0.004, η2 = 0.07). The restricted interaction groups (G1 + G3, Madj = 4.28) showed significantly higher engagement than the unrestricted groups did (G2 + G4, Madj = 3.96). This finding indicates that moderate external constraints prompted students to think more deliberately and to plan their questioning strategies before interacting with the LLM, avoiding aimless probing and potential cognitive overload, thereby guiding them to focus their limited cognitive resources on core issues and deepening the learning process. The interaction effect between usage mode and interaction restriction was not significant (F(1, 115) = 0.24, p = 0.623, η2 < 0.01), suggesting that their effects on learning engagement are relatively independent and potentially additive.
With regard to psychological needs satisfaction, the main effect of usage mode was significant (F(1, 115) = 12.47, p = 0.001, η2 = 0.10). The individual usage groups (G3 + G4, Madj = 4.24) reported significantly higher needs satisfaction than the shared usage groups did (G1 + G2, Madj = 3.86). From an SDT perspective, the individual use strategy provided students with a sense of choice and control, directly supporting autonomy needs. The ability to obtain personalized support to solve problems also enhanced their sense of competence. Although shared use aimed to promote collaboration and relatedness, the quantitative results of this research suggest that potential inefficiencies and issues such as free-riding may have undermined some group members’ experiences of autonomy and competence. The main effect of interaction restriction was also significant (F(1, 115) = 6.54, p = 0.012, η2 = 0.05). The restricted interaction groups (G1 + G3, Madj = 4.20) reported higher needs satisfaction than the unrestricted groups did (G2 + G4, Madj = 3.90). This key finding indicates that ‘restriction’ was not perceived negatively. Instead, by creating a structured “challenge-success” environment, it allowed students to gain a stronger sense of competence and accomplishment after they successfully solved problems with limited resources. While offering freedom, unlimited access might lead to inefficient exploration or frustration because of a lack of guidance, potentially hindering competence satisfaction. Again, the interaction effect was not significant (F(1, 115) = 0.18, p = 0.674, η2 < 0.01), further supporting the independent influence of the two strategic dimensions on students’ psychological experiences.
As noted in Section 3.5, the homogeneity of regression slopes assumption for these ANCOVAs was verified (see the beginning of this section), supporting the validity of the models and the interpretation of adjusted means.
These results clearly demonstrate that the LLM strategy combining individual use with restricted interactions (i.e., the G3 condition) was most effective in enhancing students’ learning engagement and basic psychological needs satisfaction in PBL. The educational value of LLMs appears to emerge not from unlimited access but rather from pedagogically sound strategies that grant autonomy while providing structured guidance, striking a balance between technological support and pedagogy.
4.3. Cognitive Levels and Patterns in GenAI Interactions
Two researchers, familiar with both the taxonomy and the engineering geology domain, conducted the coding. After initial training and calibration using a random sample of 50 dialogues, they independently coded all instances. The coding manual was grounded in the project context, with representative examples for each level (the standard geotechnical relations behind several of these prompts are sketched after the list):
- Remembering (Factual Recall): “What is the Mohr-Coulomb failure criterion?” or “List the common types of slope reinforcement.”
- Understanding (Explanation): “Explain how groundwater pressure affects slope stability.”
- Applying (Execution in a Given Scenario): “Using the provided soil parameters, calculate the factor of safety.”
- Analyzing (Differentiation & Relationship): “Compare the advantages of soil nailing versus shotcrete for this specific geological section.”
- Evaluating (Judgment & Critique): “Based on the design code, assess the potential risk of the proposed anchor spacing.” or “Critique the assumptions made in this stability calculation output.”
- Creating (Synthesis & Generation): “Generate two alternative treatment plans for this slope, considering both cost and construction time.”
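For readers outside the discipline, the geotechnical relations invoked in several of these example prompts take standard textbook forms; the sketch below is for context only and is not the course’s specific design equation set.

```latex
% Mohr-Coulomb failure criterion: shear strength from cohesion and friction
\tau_f = c + \sigma_n \tan\varphi
% Factor of safety: available shear resistance over mobilized shear stress
F_s = \frac{\tau_f}{\tau_{\mathrm{mob}}}, \qquad
% Special case: dry, cohesionless infinite slope inclined at angle \beta
F_s = \frac{\tan\varphi}{\tan\beta}
```

Here c is cohesion, σn the normal stress on the failure plane, φ the internal friction angle, and β the slope angle.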
Intercoder reliability, assessed via Cohen’s kappa, was 0.76 (p < 0.001), indicating good agreement. Discrepancies were resolved through discussion.
As shown in Figure 2, the distribution of questions was not even. Questions at the remembering (28.7%) and analyzing (26.5%) levels constituted the majority, followed by understanding (19.0%) and creating (15.2%). Questions requiring applying (7.2%) and evaluating (3.4%) were less frequent. This pattern suggests students primarily used the LLM as a powerful information repository and analytical assistant for retrieving facts and decomposing complex problems.
Figure 2.
Distribution of cognitive levels in LLM interactions (N = 415).
A cross-tabulation and chi-square test revealed a significant difference in the distribution of cognitive question levels across the experimental groups (χ2(15, N = 415) = 48.33, p < 0.001). The specific distribution is shown in Table 4. The individual usage groups (G3 + G4) had a significantly greater proportion of questions at higher-order cognitive levels, particularly creating (G3: 21.6%, G4: 18.1%) and evaluating (G3: 4.8%, G4: 4.0%), than did the shared usage groups (creating: G1 = 10.9%, G2 = 9.8%; evaluating: G1 = 2.2%, G2 = 2.5%). Conversely, the shared usage groups had greater proportions of questions at lower cognitive levels such as remembering and understanding. This result aligns with the quantitative findings reported in Section 4.2, indicating that the individual use strategy, by granting greater exploratory freedom, significantly promoted higher-order thinking activities involving creative ideation and critical judgment with the LLM.
Table 4.
Distribution of Cognitive Levels in LLM Interactions Across Experimental Groups (%, N = 415).
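The omnibus test on the 4 × 6 contingency table is a standard chi-square computation. The sketch below uses fabricated counts, chosen only to roughly echo the reported marginal percentages and sum to 415; it is not the study’s actual table.

```python
# Chi-square sketch over a fabricated group x Bloom-level table (rows G1-G4;
# columns: remember, understand, apply, analyze, evaluate, create). Counts
# are illustrative placeholders summing to 415, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [36, 24, 8, 25, 2, 10],   # G1 (shared + restricted)
    [35, 26, 9, 22, 3, 10],   # G2 (shared + unrestricted)
    [24, 17, 6, 28, 5, 22],   # G3 (individual + restricted)
    [24, 18, 7, 31, 4, 19],   # G4 (individual + unrestricted)
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # paper reports chi2(15) = 48.33
```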
Qualitative analysis of the interaction texts further revealed four typical interaction patterns, reflecting a progression in the depth of student–LLM collaboration:
Informational Querying (~55%): Direct, factual questions (e.g., “What is the Mohr-Coulomb failure criterion?”).
Interpretation & Solving (~20%): Requests for explanations, examples, or solutions to specific calculations (e.g., “Please demonstrate how to calculate anchor length using Lizheng software.”).
Coding Assistance (~15%): Requests for the generation or debugging of code for data analysis and visualization (e.g., “Write Python code to plot a histogram of RQD values from borehole logs.”); a minimal example of such a script follows this list.
Design Collaboration (~10%): Treating LLM as a collaborative partner for solution design and optimization (e.g., “Based on the following geological conditions, generate two different slope reinforcement solutions and compare their cost and construction difficulty in a table.”).
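The “Coding Assistance” pattern typically yielded short plotting scripts of the following kind; the RQD values here are fabricated for illustration, not data from the study.

```python
# Minimal example of the requested RQD histogram; values are fabricated.
import matplotlib.pyplot as plt
import numpy as np

rqd = np.array([55, 62, 71, 48, 83, 90, 67, 74, 58, 88, 41, 79])  # RQD in %
plt.hist(rqd, bins=range(0, 101, 10), edgecolor="black")
plt.xlabel("RQD (%)")
plt.ylabel("Number of core runs")
plt.title("RQD values from borehole logs")
plt.show()
```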
The findings underscore the significant regulatory role of instructional strategy (individual/shared use) on the cognitive level of interactions with the LLM. Individualized use of the LLM is a key instructional design factor for stimulating higher-order thinking in engineering geology problem solving.
4.4. Students’ Learning Experiences and Perceptions
To gain deeper insight into the underlying reasons for the quantitative data, a thematic analysis of the reflective reports submitted by all 120 students upon project completion was conducted. The analysis strictly followed the six-phase approach outlined by Braun and Clarke [36], aiming to identify recurring, meaningful patterns from students’ subjective narratives to comprehensively capture their experiences, challenges, and perceptions of using the LLM in PBL. Four core themes were identified:
Theme 1: LLM as an Omniscient Engineering Assistant and Creative Catalyst
The vast majority of students indicated that they had strong impressions of the LLM’s capabilities and that they perceived it as a readily accessible, knowledgeable “engineering expert.” One student from G3 remarked, “It’s like having a partner who has all the engineering knowledge in the universe. No matter how obscure a code clause or calculation method was, it could provide an answer instantly, which saved me an enormous amount of time searching through literature.” This immediate support not only increased efficiency but also played a key role in the creative ideation phase. Many students mentioned that when projects reached an impasse, the LLM provided crucial inspiration. A student from G4 shared, “When we were stuck on the support scheme, we asked Kimi ‘what are some innovative slope retaining structures?’ It listed options such as geogrid reinforcement and micro-pile groups that we hadn’t considered, which really opened up our thinking.”
Theme 2: Strategic Evolution from Instrumental Use to Cognitive Collaboration
Students’ interaction patterns significantly evolved throughout the project. Initial interactions were often instrumental information retrieval (e.g., “What does this formula mean?”). As learning progressed, interactions deepened into more complex cognitive collaboration. For instance, a G3 student described his or her strategy as follows: “I would feed it my preliminary calculation results and ask it to check for logical errors.” Another student from G1 had the LLM assume a critical role: “I asked it to act as a strict review expert to challenge the flaws in our design. This helped us identify many unforeseen risks.” This shift from “Q&A” to “collaboration” indicates the development of metacognitive and deep learning capabilities and was more common and profound in the individual usage groups (G3 + G4).
Theme 3: Critical Awareness of LLM ‘Hallucinations’ and Verification Strategies
Nearly all students reported instances where they realized that GenAI could provide inaccurate or completely false information (“hallucinations”), particularly regarding complex engineering calculations and professional judgments. This experience did not lead to abandonment; rather, it fostered the development of mature critical verification strategies, turning a known risk of LLMs into a learning opportunity. A student from G2 detailed his or her experience: “The formula it gave for an earth pressure coefficient was incorrect. Fortunately, we cross-verified it using our textbook.” This prevalent “questioning–verification” behavior suggests that the use of LLMs in engineering education did not weaken critical thinking; rather, it actively strengthened it in practice. Students generally recognized that the LLM’s output represented a “hypothesis to be verified” rather than a “definitive answer,” thus allowing them to cultivate professional habits of consulting authoritative sources and cross-referencing.
Theme 4: Differential Impact of Interaction Strategies on Learning Processes and Group Dynamics
The qualitative data clearly illustrate how different experimental interventions shaped distinct learning experiences and group dynamics. Students in the individual usage groups (G3 and G4) frequently emphasized the positive experience stemming from autonomy and efficiency. A G3 student stated, “I could ask questions anytime about what I cared about without waiting for a group meeting. I controlled my own learning pace, which was very efficient.” In contrast, reports from the shared usage groups (G1 and G2) often mentioned social coordination costs, e.g., “We had to reach a consensus every time we asked a question, and sometimes we argued over ‘how to ask it,’ which was a bit inefficient.” (G1 student). However, some shared group students also noted the LLM’s unique value: “The process of figuring out how to ask the AI together was a great learning experience in itself, forcing us to think through the problem very clearly first.” (G2 student). Additionally, students in the restricted groups widely perceived the constraint as prompting deeper metacognitive thinking. A G3 student explained, “Each question was precious, forcing me to think deeply and structure my problem clearly before asking. This actually trained my problem-defining ability.”
In summary, the qualitative findings of this research facilitate a strong triangulation of the quantitative results. They not only explain why the “individual use” and “restricted interaction” strategies were statistically more effective—by better supporting autonomy and competence and promoting higher-order thinking—but also vividly reveal how these strategies operated within students’ authentic learning experiences, providing a rich contextual explanation for the conclusions of this study.
4.5. Project Outcomes and Collaboration Quality
Given that the project was a group product, the final project score was analyzed at the group level (N = 24 groups). A two-way ANCOVA was conducted with the project score as the dependent variable, usage mode and interaction restriction as fixed factors, and the group-level LLM interaction frequency (see Table 2) as a covariate to control for the sheer volume of usage. This approach addresses the unit-of-analysis concern and isolates the effect of usage strategy from usage quantity.
After controlling for interaction frequency, the main effect of usage mode remained significant (F(1, 19) = 10.28, p = 0.005, η2 = 0.35). The adjusted mean project score for groups using the individual strategy (Madj = 88.9) was significantly higher than for groups using the shared strategy (Madj = 84.2). The main effect of interaction restriction was not significant (F(1, 19) = 1.95, p = 0.179, η2 = 0.09). The covariate, interaction frequency, was also non-significant (F(1, 19) = 0.62, p = 0.440, η2 = 0.03), indicating that the amount of LLM use did not directly predict project quality once the usage strategy was accounted for. The interaction between usage mode and restriction was not significant (F(1, 19) = 0.11, p = 0.745). These results, illustrated in Figure 3, confirm that the individual usage strategy was the primary driver of superior project outcomes, independent of how much the LLM was used.
Figure 3.
Adjusted mean project scores under different LLM usage strategies (controlling for interaction frequency; ** p < 0.01).
Collaboration experience was measured at the individual student level. A one-way ANOVA revealed significant differences among the four groups (F(3, 116) = 4.67, p = 0.004, η2 = 0.11). Post hoc LSD tests (Table 5) revealed that the collaboration experience score for Group G3 (individual + restricted, M = 5.32, SD = 0.59) was significantly greater than that for Group G1 (M = 4.65, SD = 0.87; p = 0.001, d = 0.91) and Group G2 (M = 4.71, SD = 0.81; p = 0.002, d = 0.85). Group G4 (individual + unrestricted, M = 5.05, SD = 0.70) did not differ significantly from G3; it scored significantly higher than G1 (p = 0.046, d = 0.55) and marginally higher than G2 (p = 0.072, d = 0.48). These results are consistent with the qualitative findings that individual use enhanced collaboration quality by improving individual preparedness for group discussion.
Table 5.
Descriptive Statistics and Post-hoc Comparisons for Collaboration Experience Scores.
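The pairwise effect sizes reported above follow the pooled-standard-deviation form of Cohen’s d; a minimal helper is sketched below, with `x` and `y` standing for the two groups’ individual-level scores.

```python
# Cohen's d with pooled standard deviation, as conventionally used for
# pairwise comparisons of two independent groups.
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    nx, ny = x.size, y.size
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd
```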
4.6. Integrative Analysis of Quantitative and Qualitative Findings
To address the explanatory sequential mixed-methods design and strengthen the logic of integration, this section explicitly synthesizes the quantitative and qualitative results. A joint display (Table 6) is used to map key quantitative findings against the thematic insights derived from students’ reflective reports and interaction logs. This integration elucidates the underlying mechanisms and contextual factors behind the statistical patterns, providing a richer understanding of how and why the different LLM strategies influenced the learning process and outcomes [37,38].
Table 6.
Joint Display of Integrated Quantitative and Qualitative Findings.
This integrative analysis demonstrates that the quantitative outcomes are not merely statistical artifacts but are grounded in students’ lived experiences and strategic adaptations. The synergy between the data sources confirms that the core mechanism for effective LLM integration in PBL is the support of autonomy via individual access, optimized by structured guidance via moderate constraint, which together enhance both cognitive and motivational outcomes.
5. Discussion
Through a rigorous mixed-methods quasi-experimental design, this study systematically investigated the impact of different LLM usage strategies, grounded in SDT, on student learning processes and outcomes within a blended PBL environment for an engineering geology course. By integrating quantitative and qualitative findings, this study confirms that the designed integration of LLMs into PBL can significantly enhance students’ learning engagement, basic psychological needs satisfaction, higher-order cognitive thinking, collaborative quality, and final academic performance. The following sections discuss these findings in depth, explaining their theoretical implications and practical significance in the context of harnessing large language models for teaching and learning.
5.1. Overview of the Key Findings
This study, through a mixed-methods quasi-experimental design, provides evidence that different LLM usage strategies, when grounded in SDT, can influence student learning in a blended engineering geology PBL environment. The integrated results suggest that a strategy combining “individual use” with “restricted interaction” was associated with significantly higher levels of learning engagement, basic psychological needs satisfaction, higher-order cognitive interactions, positive collaborative experiences, and superior final project scores compared to alternative strategies. The qualitative data offer plausible explanations for these patterns, revealing an evolution in student behavior from instrumental use to cognitive collaboration and the development of critical verification habits. Taken together, these findings point to the conclusion that within the studied context, “individual autonomy with moderate constraint” appears to be a key principle for optimizing LLM-enabled PBL. The following sections discuss the theoretical and practical implications of these findings, while also considering their boundaries and alternative interpretations.
5.2. Theoretical Implications
The findings of this study extend and refine our theoretical understanding of technology-enhanced learning by integrating SDT with complementary frameworks. This multi-theoretical approach offers a nuanced and robust explanation for the complex effects observed, moving beyond singular interpretations.
The clear benefit of individual over shared use robustly aligns with SDT’s core premise that autonomy support is fundamental to intrinsic motivation [18,19]. Granting students personal access to the LLM provided them with direct control over the pace, timing, and focus of their inquiry. This sense of ownership and volition appears to have underpinned the deeper behavioral, emotional, and cognitive engagement measured in the study [39]. The finding confirms the unique potential of conversational, on-demand AI as a powerful tool for fostering autonomy in complex, student-centered learning environments.
The counterintuitive advantage of restricted interaction for supporting perceived competence is effectively illuminated by integrating the lens of cognitive load theory [26,27]. While unrestricted access might seem optimally supportive, it risked overwhelming students with choices or promoting superficial, “trial-and-error” querying, which could ultimately undermine their sense of efficacy. In contrast, a moderate, well-defined constraint functioned as a metacognitive scaffold. It encouraged students to plan their inquiries more deliberately and to refine their questions before using a limited resource. The sense of accomplishment derived from successfully solving problems within these structured bounds appears to have significantly bolstered their feelings of competence. This suggests that in AI-rich learning contexts, a designed and structured challenge can be more conducive to mastery development than unbounded freedom.
The finding that individual use enhanced, rather than harmed, relatedness satisfaction challenges the simple assumption that shared tools directly foster collaboration. Our qualitative data suggest a mechanism whereby individual use, by reducing coordination conflicts and enabling members to contribute well-formed ideas derived from preliminary LLM exploration, elevated the quality of team interactions. This implies that relatedness in technology-enhanced collaboration may be fulfilled more through high-quality, content-rich interactions among members than through co-use of a tool itself.
The most theoretically significant finding—that individual use enhanced collaborative outcomes—challenges simplistic assumptions about shared tools and necessitates a synthesized explanatory view. Our data suggest a mechanism best understood by weaving together insights from multiple theories. From an SDT standpoint, satisfying autonomy and competence needs first may establish the essential psychological foundation for high-quality relatedness to emerge [20]. Social interdependence theory provides a crucial layer of explanation: individual preparation with the LLM transformed group dynamics. By entering discussions with substantive, pre-developed ideas, members shifted the team’s goal structure from a state prone to “free-riding” (negative interdependence) to one geared toward “integrative contribution” (positive interdependence). This process aligns with the cognitive readiness hypothesis [40,41], which posits that elevating each member’s knowledge base prior to collaboration reduces the group’s initial “grounding” effort. This efficiency gain accelerates the collaborative work toward higher-order tasks like critique, synthesis, and innovation. Furthermore, social cognitive theory suggests that the self-efficacy gained through successful individual problem-solving with the LLM likely increased members’ confidence to contribute actively and constructively, thereby enriching the diversity and depth of team discussions.
In essence, the LLM in this context primarily empowered individuals. It did not isolate them but enabled them to become more competent, prepared, and confident contributors. The subsequent high-quality, content-rich interactions that occurred within the team then fulfilled the need for relatedness more effectively than the often inefficient and conflict-prone co-use of a shared tool. This synthesis redefines the pathway to collaborative satisfaction in AI-augmented learning, positioning individual autonomy as a critical engine for driving high-quality teamwork.
5.3. Practical Implications
The findings offer concrete, actionable instructional design recommendations for the effective integration of LLMs in engineering geology and related disciplines, directly addressing the opportunities and challenges associated with their use.
First, “individual use” should be prioritized as the default mode. Educators and instructional designers should provide independent LLM access to each student, encouraging personalized exploration. This does not negate the value of collaboration; rather, it aims to optimize the quality of collaborative input. A workflow of “individual exploration first, followed by group integration” is recommended to ensure that discussions are built upon substantial preliminary research. This strategy leverages the LLM’s strength as a personal tutor and idea generator.
Second, “guided restriction” should be implemented over “unconstrained freedom.” Unrestricted use may foster dependency and cognitive passivity. Instructors should dynamically adjust interaction policies based on the project phase and task complexity. For instance, more interactions might be allowed during the initial exploration phase, subsequently be restricted during the deep analysis phase to foster focus, and finally be moderately relaxed during the solution optimization phase. Crucially, restrictions should be framed pedagogically as a scaffold for deep thinking, not merely as a rule. This approach directly mitigates the challenge of cognitive overload and promotes metacognitive planning.
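One lightweight way to operationalize such phase-sensitive policies is a declarative quota table consulted by a gating wrapper like the one sketched in Section 3.3; the phase names and limits below are illustrative examples, not prescriptions from the study.

```python
# Illustrative phase-dependent quota policy; names and limits are examples.
PHASE_POLICY = {
    "initial_exploration":   8,  # looser: encourage broad ideation
    "deep_analysis":         4,  # tighter: force planned, focused prompts
    "solution_optimization": 6,  # moderate: iterate on candidate designs
}

def quota_for(phase: str) -> int:
    """Return the interaction cap for a project phase (default: effectively unrestricted)."""
    return PHASE_POLICY.get(phase, 10**9)
```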
Third, prompt engineering and critical thinking training should be integrated into the curriculum. Doing so will cultivate students’ critical discernment of AI-generated information—a skill directly beneficial to their future engineering practice. Our qualitative data suggest that, within a pedagogically structured environment, encountering and managing LLM “hallucinations” can become a catalyst for developing professional skepticism and verification habits. However, it is crucial to note that this positive outcome is not automatic; it relies heavily on the instructor’s explicit framing of the LLM as a fallible tool and the integration of verification as a mandatory step in the workflow. Without such design, hallucinations could equally lead to misinformation and frustration.
5.4. Limitations and Directions for Future Research
While the proposed theoretical integration is strongly supported by our convergent data, several limitations and alternative interpretations warrant consideration, qualifying the generalizability of the findings.
The benefits of individual use could be partially attributable to reduced coordination costs and mitigation of social loafing, mechanisms not exclusive to SDT. Similarly, the positive effects of restriction might involve a generic focusing effect due to scarcity, beyond targeted metacognitive scaffolding. The study was also conducted within the specific context of Chinese engineering higher education. Cultural norms regarding autonomy, authority, and collectivism may influence how these strategies operate, suggesting our findings represent robust pathways within a particular pedagogical, technological, and cultural constellation.
Future research should therefore: (1) conduct cross-cultural and cross-disciplinary replications to test the boundary conditions of these effects; (2) employ longitudinal designs to examine the long-term impact on skill retention and professional identity; (3) utilize more objective measures (e.g., collaboration analytics, eye-tracking) to triangulate self-reported data; and (4) develop adaptive scaffolding models where LLM interaction policies dynamically respond to individual learner states, moving towards truly responsive intelligent support.
6. Conclusions and Implications
This study, grounded in Self-Determination Theory, employed a mixed-methods quasi-experimental design to systematically investigate the impact of different large language model usage strategies on student learning within a blended project-based learning environment for engineering geology. The findings demonstrate that the efficacy of LLMs is highly contingent upon instructional design. Integrated quantitative and qualitative evidence indicates that a strategy combining individual use with moderately restricted interaction was most effective in enhancing students’ learning engagement, basic psychological need satisfaction, higher-order thinking, collaborative experience, and final project outcomes. This core finding establishes “individual autonomy under moderate constraint” as a key design principle for the effective integration of LLMs in complex PBL.
The theoretical contribution of this research is twofold. First, it extends Self-Determination Theory into LLM-empowered intelligent learning environments, providing empirical evidence that personalized AI access can function as an effective support for autonomy and competence. Second, by integrating perspectives from social interdependence theory and the cognitive readiness hypothesis, the study explains the counterintuitive mechanism whereby “individual use fosters collaboration”: the LLM empowered individuals, transforming them into more prepared and confident contributors, thereby laying a solid foundation for high-quality team interaction and redefining the pathway to fulfilling the need for relatedness in technology-enhanced learning.
Based on these findings, the study provides clear instructional design guidance for engineering educators. In practice, personalized LLM access should be provided to each student, coupled with pedagogically meaningful interaction limits framed as scaffolds for metacognitive planning rather than arbitrary rules. The curriculum must integrate critical verification training targeting AI outputs, thereby transforming the risk of “hallucinations” into an opportunity to cultivate professional skepticism. Furthermore, a collaborative workflow of “individual exploration first, followed by group integration” is recommended to maximize the dual support of LLMs for both individual learning and team performance.
It is important to note that the conclusions of this study are rooted in a specific disciplinary and cultural context (Chinese engineering education). The benefits of the individual-use strategy may be partially attributable to reduced coordination costs, while the effects of the restriction strategy might also involve a scarcity-induced focusing effect. Future research should therefore validate these findings across different engineering disciplines and cultural settings, employing longitudinal designs and more objective analyses of collaborative processes to test their generalizability and to probe more granular mechanisms. Ultimately, harnessing LLMs responsibly and effectively hinges on achieving a balance, through scientific, learner-centered instructional design, between technological empowerment and cognitive guidance.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/systems13121112/s1, Table S1: Student Engagement Scale; Table S2: Basic Psychological Needs Satisfaction Scale; Table S3: GenAI/LLM Perception Scale; Table S4: PBL Collaboration Experience Scale.
Author Contributions
Conceptualization: X.Y. and W.F.; Methodology: X.Y.; Formal analysis and investigation: X.Y. and Y.H.; Writing—original draft preparation: X.Y.; Writing—review and editing: W.F. and F.W.; Funding acquisition: X.Y. and W.F. All authors have read and agreed to the published version of the manuscript.
Funding
This study was supported by the Undergraduate Education and Teaching Reform Project of Sichuan Province (New System for Cultivating Top Talents in Geosciences: Innovative Exploration Based on Technological Frontiers and National Strategies), the Graduate Education and Teaching Reform Project of Sichuan Province (YJGXM24-A007), and the Scientific Research Startup Fund of the Everest Talent Program of Chengdu University of Technology (Grant No. 10912-KYQD2023-10125).
Institutional Review Board Statement
This study was reviewed and approved by the Ethics Committee of the College of Environment and Civil Engineering at Chengdu University of Technology (approval number: IRB-HGY-2024-0901), and all research procedures complied with the ethical guidelines of the Declaration of Helsinki. All participating students were informed of the study and consented to the use of their anonymized data in this research and in subsequent academic publications.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Acknowledgments
During the preparation of this manuscript, the authors used DeepSeek (DeepSeek V3.2, accessed on 3 November 2025) for the purposes of language refinement and expression optimization. The authors have reviewed and edited the output and take full responsibility for the content of this publication.
Conflicts of Interest
The authors declare no conflicts of interest.