Article

Impact Pathways of AI-Supported Instruction on Learning Behaviors, Competence Development, and Academic Achievement in Engineering Education

1 The College of River and Ocean Engineering, Chongqing Jiaotong University, Chongqing 400074, China
2 Key Laboratory of Ministry of Education for Hydraulic and Water Transport Engineering, Chongqing Jiaotong University, Chongqing 400074, China
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(17), 8059; https://doi.org/10.3390/su17178059
Submission received: 15 July 2025 / Revised: 25 August 2025 / Accepted: 3 September 2025 / Published: 7 September 2025
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

With the increasing integration of artificial intelligence into education, traditional instructional models in Hydraulic Engineering are shifting toward competence- and performance-oriented pedagogy under the New Engineering framework. Rooted in constructivist and learner-centered theories, this study examines how AI-assisted versus traditional instruction influences learning behaviors, competence development, and academic achievement in engineering education through a quasi-experimental study involving 102 undergraduate students. Results indicate that while the AI-assisted group achieved significantly higher Midterm Report Scores and PPT Presentation Scores, no significant difference was observed in Final Exam Scores between the two groups. Multivariate regression and latent profile analysis reveal that AI-assisted instruction enhances Classroom Participation, Data Processing Ability, and Comprehensive Analytical Ability, yet falls short in fostering Practical Problem-solving Ability compared to traditional instruction. Path analysis further indicates that AI-assisted instruction improves Academic Achievement indirectly by promoting Learning Behaviors, which in turn foster Competence Development, ultimately contributing to improved Academic Achievement. By addressing a critical gap in the literature on the mechanisms of AI integration in engineering education, this study underscores the importance of optimizing learning processes rather than merely pursuing outcome enhancement, offering theoretical and practical insights for AI-integrated instructional reform in the context of New Engineering education.

1. Introduction

With the rapid advancement of artificial intelligence (AI) technologies, AI has become deeply integrated into a wide range of industries. Education, as a cornerstone for knowledge dissemination and competence cultivation, is undergoing a profound integration with AI [1]. In China, various AI-powered instructional platforms have been widely adopted in higher education, including iFLYTEK’s Zhixue (Zhixue.com), Tsinghua University’s Rain Classroom, and the XuetangX MOOC platform. Zhixue is a smart-education platform that provides AI-assisted homework marking, learning-analytics dashboards, and personalized learning recommendations, enabling real-time formative assessment and precision teaching. Rain Classroom, jointly developed by Tsinghua University and XuetangX, is a plug-in toolkit that integrates Microsoft PowerPoint with the WeChat mobile app to connect pre-class, in-class, and after-class activities (e.g., pushing materials, in-slide quizzes and randomized roll-call, interaction, and analytics). The integration of AI promises to address several longstanding challenges in higher education, for instance, enabling real-time, intelligent feedback that overcomes the delay inherent in traditional instructional evaluation systems [2], and supporting personalized learning trajectories that illuminate and enhance the mechanisms underlying Competence Development [3]. This educational philosophy emphasizes not only the mastery of skills and knowledge but also the multidimensional development of learning behaviors, affective engagement, and academic achievement, thereby fostering a more holistic learning orientation. As such, AI-based instruction is reshaping not only how educators teach but also how students learn, providing a robust technological foundation for more effective learning interventions and competence- and performance-oriented educational reform [4].
As a representative discipline in engineering education, the Hydraulic Engineering major is characterized by strong theoretical foundations, high demands for practical application, interdisciplinary knowledge integration, and complex data analysis [5]. Students are expected not only to acquire foundational theories from courses such as hydraulics, hydrology, and aquatic ecology, but also to develop the ability to conduct Comprehensive Analysis and solve multifaceted, real-world problems in water engineering contexts [6]. The New Engineering Education Initiative is an engineering education reform movement centered on competence development. It emphasizes interdisciplinary integration, innovative practice, and the application of modern technologies, with the aim of cultivating versatile engineering professionals who can adapt to the future needs of society and industry. In light of the ongoing “New Engineering” reform in China, the focus of undergraduate training in this field is shifting from knowledge transmission to competence enhancement, with particular emphasis on Self-directed Learning Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability [7]. However, many courses still center on theoretical lectures and case analyses, focusing on the transmission of professional knowledge by instructors and the development of practical skills through teacher-led engineering case training. The limitations of this approach lie in students’ reliance on instructors’ expertise and insufficient opportunities for independent exploration. Additionally, assessments tend to emphasize knowledge recall and standardized problem-solving while placing less emphasis on cultivating innovation and adaptability to complex engineering problems, which makes it difficult to meet the New Engineering Education Initiative’s goal of developing versatile talent [8]. 
Furthermore, assessment systems often suffer from an overemphasis on knowledge acquisition rather than competence development, rote memorization rather than Applied Practice, and outcomes over learning processes [5]. These limitations hinder the development of well-rounded graduates with strong innovation capacity and problem-solving skills. There is an urgent need for innovative educational technologies that can support a pedagogical shift from teacher-centered to student-centered instruction and from knowledge-driven to competence-driven education [9]. AI-assisted teaching, with its data-driven foundation and capacity for personalized support, offers a promising approach to addressing the challenges of insufficient process guidance and the implicit nature of competence development in engineering education [10,11].
In recent years, a growing body of domestic and international research has explored the role of AI in education, yielding encouraging findings. Studies have demonstrated that AI can enhance students’ learning motivation and engagement through intelligent feedback systems and personalized pathway recommendations [12]. For example, Huang et al. (2023) found that AI-driven conversational systems improved the quality of classroom interactions and heightened student focus [13]. Similarly, Brusilovsky et al. (2007) emphasized the potential of learner modeling to enable precise content delivery and strategic interventions that optimize Learning Behaviors [14]. Moreover, Chen et al. (2020) highlighted the positive impact of AI instruction on the development of higher-order competencies, such as critical thinking and the ability to navigate complex problems [15]. Nonetheless, most existing studies have focused on the direct relationship between Instructional Approach and learning outcomes, while paying insufficient attention to the mediating role of Learning Behaviors in fostering Competence Development [16].
To address this gap, the present study employs a longitudinal path analysis framework that integrates Instructional Approach, Learning Behaviors, Competence Development, and Academic Achievement. This approach allows us to identify not just whether AI-assisted instruction is effective, but how it works—by revealing the intermediary role of learning behaviors in shaping competence outcomes, particularly within engineering education where such process-focused empirical evidence is still limited. The goal is to uncover the deeper mechanisms through which AI-assisted instruction shapes educational outcomes—not merely by altering results, but by transforming learning processes. The findings aim to provide theoretical support for pedagogical innovation in applied disciplines such as Hydraulic Engineering.
To guide the investigation, this study addresses the following research questions:
RQ1: How do AI-assisted and traditional instruction differentially influence students’ learning behaviors, competence development, and academic achievement?
RQ2: How do learning behaviors and competence development jointly mediate the effect of AI-assisted versus traditional instruction on students’ academic achievement?
The study is guided by the following hypotheses:
(1)
The two Instructional Approaches exhibit distinct strengths in promoting different dimensions of Competence Development.
(2)
AI enhances Academic Achievement indirectly by shaping students’ Learning Behaviors, which in turn foster Competence Development.

2. Literature Review

2.1. Educational Psychology Frameworks for Competence Development

Constructivist learning theory posits that knowledge is not passively received but actively constructed by learners through meaningful engagement [17]. Research has shown that learners can construct new knowledge within their “zone of proximal development” through scaffolded guidance embedded in social interaction [18]. Consequently, learner-centered instructional approaches that emphasize autonomous inquiry have been found to significantly enhance both deep understanding and Academic Achievement. For example, Freeman et al. (2014) reported that such approaches improved average examination scores by approximately 6% and reduced failure rates by nearly 55%, providing compelling empirical support for the constructivist principle that actively engaging students in situated problem-solving tasks fosters higher-order thinking and Practical Problem-solving Ability [19].
Self-directed Learning Ability is increasingly recognized as one of the core competencies required for 21st-century learners [20]. According to self-determination theory, when learners experience autonomy, their intrinsic motivation and learning engagement are markedly enhanced [21]. This capacity is not only foundational to lifelong learning but also a critical condition for cultivating high-quality talent within higher education, particularly in engineering disciplines [22]. Empirical studies have demonstrated that in online and blended learning environments, where instructor oversight is often diminished, students’ success heavily depends on their self-regulated learning capabilities [20]. As Jin et al. (2023) observe, Self-directed Learning nurtures cognitive flexibility and creative thinking, competencies that align closely with the strengths of AI-assisted instruction [17]. AI tools for tasks such as knowledge graph construction and data analysis provide powerful platforms that support learner autonomy by enabling students to engage in exploratory learning within authentic problem contexts. This functionality is especially relevant for technical disciplines like Hydraulic Engineering, where domain-specific problem-solving is paramount. Nevertheless, despite the promising potential of AI in promoting Learning Autonomy, most empirical studies to date have focused on foundational disciplines [23,24]. There remains a critical lack of systematic validation concerning the application of AI-supported instructional approaches within engineering and other technology-oriented domains.

2.2. Current Applications and Controversies of AI in Education

Generative AI has demonstrated significant potential in higher education. Tools such as ChatGPT enable students not only to retrieve information efficiently but also to receive support in generating solution-oriented designs when tackling real-world problems [25]. Moreover, AI plays a critical role in assisting students with the construction of knowledge graphs, facilitating the systematization of complex disciplinary content into structured conceptual frameworks [26]. In the realm of Data Processing, AI can offer real-time analytical feedback and recommendations based on experimental data or engineering problems [27], thereby enhancing students’ ability to engage with empirical inquiry and applied challenges.
Recent studies also indicate that AI-supported personalized instruction can significantly improve students’ learning engagement and academic performance, especially when implemented through interactive feedback and scaffolding mechanisms [28]. In other disciplines such as medical education, large-scale empirical findings have shown that learner-centered digital interventions supported by AI technologies contribute to improved academic outcomes and better classroom dynamics [29]. These findings further affirm the cross-disciplinary relevance of AI-assisted instruction and its broader pedagogical potential.
While AI offers advantages in enhancing Learning Efficiency and Practical Problem-solving Ability, its potential risks in educational contexts warrant careful consideration. Scholars have raised concerns that excessive reliance on AI may lead to cognitive dependency, thereby undermining students’ capacity for independent thinking and deep learning [30]. Zhai et al. (2024) found that although AI tools can significantly expedite task completion and enhance short-term performance, they may simultaneously discourage students from engaging in deeper cognitive processing, resulting in superficial learning outcomes [31]. Similarly, Gerlich (2025) emphasized that the offloading of cognitive tasks to AI systems can reduce learners’ engagement in critical reasoning, especially when tasks are complex and open-ended, potentially eroding problem-solving resilience [30]. Khatri et al. (2023) also observed that students who frequently depend on AI-generated solutions in academic contexts tend to demonstrate lower self-regulation and reduced metacognitive monitoring, which may hinder the long-term development of autonomous learning skills [32]. These findings underscore a key challenge in the educational application of AI: how to use AI in a pedagogically sound manner that prevents students from becoming overly dependent on it, while preserving their Self-directed Learning Ability and critical thinking skills.

2.3. Comparative Studies of Instructional Design

The traditional instructional approach is primarily centered on lecturing, a method that demonstrates significant advantages in knowledge transmission. This is particularly evident in the delivery of Fundamental Theoretical Knowledge, where instructors can systematically convey content to students with efficiency. However, research has indicated that while lecturing is effective for the transmission of lower-order knowledge, it often lacks efficacy in fostering higher-order thinking and creativity [19,33]. This limitation becomes especially pronounced in cultivating students’ ability to solve complex problems and engage in innovative design [33]. In Hydraulic Engineering courses, case-based teaching, as an important component of traditional lecture-based methods, provides students with learning contexts that more closely resemble real-world scenarios. Nonetheless, it also presents several limitations [34]. The case method frequently depends on the instructor’s expertise and knowledge base, which introduces a degree of subjectivity and restricts the diversity of instructional content. Furthermore, during the process of case analysis, students may be constrained by teacher-led guidance and may not have the opportunity to fully explore or solve problems independently. As a result, when facing complex hydraulic engineering challenges, students often lack sufficient innovative thinking and autonomous analytical capacity. With the application of AI technology in education, Technology-Enhanced Learning has gradually emerged as a central direction in modern pedagogy [35]. Flipped classrooms and blended learning, as two principal forms of Technology-Enhanced Learning, have been widely adopted in educational programs across disciplines and have demonstrated measurable success [36]. Research by Zain et al. (2022) suggests that flipped classrooms and blended learning can effectively stimulate students’ Self-directed Learning, deep thinking, and Learning Efficiency [37]. In engineering disciplines in particular, combining self-directed learning outside the classroom with interactive in-class discussions has led to significant improvements in students’ practical problem-solving abilities and creative thinking skills.
In summary, recent empirical work on AI-assisted instruction is dominated by short-duration interventions in foundational courses, such as introductory programming and mathematics, where studies typically assess immediate performance or perception changes through one-off classroom activities or brief pre–post designs [38,39,40], whereas systematic investigations in professional, discipline-specific engineering courses remain scarce. Furthermore, the majority of prior research has emphasized the direct link between Instructional Approach and Academic Achievement, often neglecting the mediating role of Learning Behaviors and the longitudinal mechanisms underpinning Competence Development. To address these gaps, drawing upon constructivist learning theory [41] and the competence development framework [42], this study conceptualizes learning behaviors, competence development, and academic achievement as interrelated variables that reflect both the process and outcomes of instructional interventions in engineering education. Learning behaviors capture students’ engagement and self-regulation strategies, competence development encompasses the acquisition and integration of disciplinary and transversal skills, and academic achievement reflects measurable learning outcomes. To measure these variables, this study utilized data from AI-assisted learning platforms, midterm and final exam scores, and classroom participation records. These measures have been widely adopted in engineering education research and demonstrated sound reliability and validity [3,8,43]. For data analysis, Latent Profile Analysis (LPA) was employed to identify subgroups of students with distinct learning characteristics, and multivariate regression was used to examine the predictive relationships among variables. Both methods are well-established in educational psychology and statistical methodology [44,45], making them suitable for addressing the complexity and heterogeneity of learning behaviors in the context of AI-assisted instruction.
Therefore, the present study proposes a conceptual framework that integrates Instructional Approach, Learning Behaviors, Competence Development, and Academic Achievement into a unified path analysis model. This model enables a more comprehensive understanding of how AI-assisted instruction exerts indirect effects by reshaping the learning process. By longitudinally tracing the full instructional trajectory, this study aims to uncover how AI transforms students’ learning behaviors, thereby fostering Competence Development and ultimately influencing Academic Achievement. The findings not only fill a critical empirical void in AI-supported instruction for engineering and technological disciplines but also provide a valuable reference for the application of AI teaching strategies in other academic fields.

3. Methodology

3.1. Participants

The participants in this study comprised 102 third-year undergraduate students majoring in Hydraulic Engineering. Among them, 49 students were assigned to the AI-assisted instruction group, while 53 were placed in the traditional instruction group. To minimize selection bias and ensure baseline equivalence, students were randomly allocated to the two groups. All participants belonged to the same academic cohort and shared comparable disciplinary backgrounds and levels of foundational knowledge, thereby controlling for potential confounding effects related to individual differences.
The instructional intervention spanned a period of 16 weeks, with two class sessions per week. Throughout the experiment, all students received identical course content; the only variation lay in the Instructional Approach employed.

3.2. Research Design

This study adopted a posttest-only control group design, a common variant of randomized controlled trial methodology. Outcome measures and group comparisons were conducted upon completion of the instructional intervention. To avoid influencing students’ learning behaviors or instructional activities, the study was designed without a pretest. Although participants were restricted to the same academic program and randomly assigned to groups to enhance comparability, the absence of a pretest introduces a potential risk of initial group differences.
The AI-assisted instruction group integrated various AI-based tools into the learning process. In both classroom and extracurricular activities, students in the AI-assisted group were encouraged to use ERNIE Bot (ERNIE 4.0) and DeepSeek (General Model, V3) for information retrieval, problem-solving, and solution design. These tools were accessed through their official web portals with default platform settings. Usage policies required students to disclose AI assistance in their submissions, paraphrase and integrate outputs into their own reasoning, and avoid uploading any personally identifiable information. AI use was permitted for formative tasks such as the midterm report and PPT presentation under these controls, while closed-book final exams were completed without AI support. Prompt templates for literature review, data analysis, problem diagnosis, and engineering solution design are compiled in Supplementary Table S1. In contrast, the traditional instruction group adhered to traditional pedagogical methods and did not engage with any AI tools. Assignments were completed independently using standard resources, such as textbooks, lecture notes, and library databases. Instruction was delivered through lectures, instructor-led questioning, and in-class discussions.
This study compares two distinct instructional approaches, AI-assisted and traditional instruction, within the course River Regulation and Ecological Restoration, conducted at Chongqing Jiaotong University during the Spring semester of 2025. The course aims to enable students to thoroughly master the theories and methods of river hydrodynamic characteristics, river regulation, and ecological restoration, and to develop core professional competencies such as hydrological data analysis, river design scheme evaluation, and engineering feasibility analysis. Multiple assessment methods, including the Midterm Report, PPT Presentation, Post-class Assignments, Number of Classroom Interactions, and the Final Exam, are employed to examine changes in students’ Learning Behaviors, such as Learning Efficiency and Classroom Participation. The study further investigates how these behavioral changes influence the development of students’ Self-directed Learning Ability, Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability. Ultimately, the impact of different Instructional Approaches on students’ Final Exam Scores is evaluated across three dimensions: Fundamental Theoretical Knowledge, Comprehensive Thinking, and Practical Application. The overall design of this study follows the logical pathway illustrated in Figure 1. The Instructional Approach (AI-assisted and traditional teaching) first acts upon students’ Learning Behaviors, which in turn promote Competence Development, ultimately influencing Academic Achievement. This pathway relationship provides the theoretical basis and methodological foundation for the subsequent statistical analyses.

3.3. Instructional Implementation

The instructional implementation was designed in accordance with the distinct Instructional Approaches employed in the AI-assisted and traditional teaching groups. While the two groups received equivalent coverage of core content, the AI-assisted group emphasized the integration and application of AI tools throughout the learning process. In this study, AI-assisted instruction refers to the systematic integration of AI tools into specific stages of the course (data analysis, instant feedback, personalized resource recommendations) to support students’ learning behaviors, competence development, and academic achievement. The detailed instructional arrangements for each phase are presented in Table 1 and Table 2.

3.4. Data Collection

The “Superstar Learning Platform” (Chaoxing Learning Platform; https://apps.chaoxing.com accessed on 2 September 2025) is a widely adopted commercial cloud-based learning management system (LMS) and mobile learning platform in China, developed and operated by Beijing Superstar Co., Ltd. It is not a locally developed tool but a third-party, subscription-based service commonly licensed by universities. The platform integrates features for course management, live streaming, video playback, assignment distribution and collection, online quizzes, gradebooks, and detailed learning analytics dashboards. Its ability to automatically record granular student behavioral data (e.g., login frequency, video watching time, assignment completion time, interaction counts) makes it a valuable tool for educational research. The validity and reliability of data exported from the Superstar platform for research purposes have been established in numerous previous studies in the Chinese context [46,47].
Data were collected using the export functions of the “Superstar Learning Platform”, covering Midterm Report Scores, PPT Presentation Scores, Final Exam Scores, Learning Efficiency, and Classroom Participation. Participation metrics were double-checked by trained teaching assistants. All collected data were reviewed and compiled by the instructor to ensure accuracy and consistency. These outcomes were categorized by competence type, as detailed in Table 3. All competence indicators in this study were derived from the course syllabus, intended learning outcomes, and widely adopted assessment practices in engineering education, aligned with national engineering education accreditation standards [48]. All assessment criteria were communicated to students at the start of the semester.
Midterm Report Score: In Week 9, students submitted an individual research report, scored out of 100 points. The evaluation criteria included: whether students reviewed recent domestic and international literature (30%), their understanding of the strengths and limitations of current research progress (30%), and their grasp of various technological application scenarios (40%). These components were used to assess Self-directed Learning Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability, respectively.
PPT Presentation Score: In Week 15, students delivered a PPT presentation evaluated on a 100-point scale. The evaluation focused on three aspects: whether students organized and analyzed aquatic habitat data (30%), whether they identified ecological issues in river systems (30%), and whether they proposed feasible technical solutions (40%). These were used to evaluate students’ Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability, respectively.
Final Exam Score: The final exam was a closed-book assessment worth 100 points. The Fundamental Theoretical Knowledge Section (40 points) included multiple-choice, true/false, and fill-in-the-blank questions, assessing students’ mastery of basic concepts and theories. The Comprehensive Thinking Section (30 points) involved multiple-choice and short-answer questions, evaluating students’ capacity for comparing, filtering, and integrating information. The Practical Application Section (30 points) consisted of comprehensive problem-solving tasks, measuring students’ ability to analyze real-world issues and develop solutions.
Learning Efficiency: Learning Efficiency was measured using statistics from the “Superstar Learning Platform”, which recorded students’ Post-class Assignment Scores and Completion Time. A total of eight assignments were administered throughout the semester, each worth 50 points.
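The platform exports raw assignment scores and completion times rather than a single efficiency value. One way the two could be combined is as points earned per unit of time; the paper does not publish its exact combination rule, so the function below is a hypothetical operationalization offered only for illustration:

```python
def learning_efficiency(score, completion_minutes, max_score=50.0):
    """Hypothetical efficiency index: fraction of available points
    earned per minute of completion time.

    The study records Post-class Assignment Scores (each out of 50)
    and Completion Time; the exact formula used is not published,
    so this combination rule is an assumption.
    """
    if completion_minutes <= 0:
        raise ValueError("completion time must be positive")
    return (score / max_score) / completion_minutes
```

Under this assumed index, a student earning full marks in less time would score higher than one earning the same marks more slowly, which matches the intuition that efficiency couples accuracy with speed.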
Classroom Participation: Classroom Participation was quantified based on the Number of Classroom Interactions, recorded weekly by teaching assistants. Interactions included asking questions, responding to prompts, and participating in discussions.
All submitted assignments and deliverables were subjected to three layers of originality checking: (1) Cross-Checking among Submissions, using Turnitin Feedback Studio v3.5 (Advance Publications Inc.). (2) External Source Plagiarism Detection, using Academic Misconduct Literature Detection System (AMLC) V6.3 (Tongfang Knowledge Network Technology Co., Ltd.). (3) AI-Generated Text Identification, using GPTZero v2.8 (GPTZero Technologies Inc.).
Prior to data collection, assistants completed a training session led by the course instructor, which covered the operational definitions, examples and non-examples of qualifying behaviors, procedures for tallying interactions in real time, and the use of the Superstar platform to cross-check student identification.

3.5. Analysis Methods

To comprehensively address the two research questions (RQ1 and RQ2) and test the corresponding hypotheses, a complementary set of data analysis strategies was employed. First, descriptive statistics and independent-samples t-tests were conducted to compare the differences between the AI-assisted and traditional teaching groups across all indicators, providing an answer to RQ1. Subsequently, multivariate regression analysis was used to delve into the predictive effects of learning behaviors and competence development on academic achievement under different instructional modes. To further unveil the causal pathways and mediating mechanisms among variables (RQ2), a partial least squares path model (PLS-PM) was constructed. Finally, latent profile analysis (LPA) was utilized to identify distinct subgroups of students, supplementing our understanding of how instructional approaches differentially impact student development. This multi-method approach aimed to provide a comprehensive evaluation of the effects of AI-assisted instruction from multiple perspectives: mean comparison, predictive relationships, path mechanisms, and individual differences.
(1)
Regression Analysis
To explore the extent to which students’ Learning Behaviors and Competence Development contribute to Final Exam Scores under different Instructional Approaches, six variables were selected as predictors: Learning Efficiency (X4), Number of Classroom Interactions (X5), Self-directed Learning Ability from the Midterm Report (X11), Data Processing Ability from the PPT Presentation (X21), Comprehensive Analytical Ability (X12, X22), and Practical Problem-solving Ability (X13, X23) drawn from both the Midterm Report and PPT Presentation. Each indicator was first regressed independently against the Final Exam Score using linear regression models. A multivariate regression model was then constructed to simultaneously include all six predictors, allowing for an assessment of their combined explanatory power and individual regression characteristics in relation to Academic Achievement. To assess potential multicollinearity among the predictors, Variance Inflation Factor (VIF) values were calculated using the vif function from the car package in R (version 4.3.2).
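To make the collinearity check concrete: the VIF for predictor j is 1/(1 − R²_j), where R²_j comes from regressing that predictor on all the others. The study computed this with the `vif` function from R's `car` package; the sketch below is an illustrative re-implementation in Python with numpy, with placeholder variable names:

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of an n x p predictor matrix.

    VIF_j = 1 / (1 - R^2_j), where R^2_j is obtained by regressing
    column j on the remaining columns plus an intercept.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    values = []
    for j in range(p):
        y = X[:, j]
        # Design matrix: intercept + all predictors except column j.
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        values.append(1.0 / (1.0 - r2))
    return values
```

A common rule of thumb flags VIF values above 5 (or, more leniently, 10) as signaling problematic multicollinearity; uncorrelated predictors yield VIF values near 1.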
(2)
Path Analysis
To investigate the structural pathways through which Instructional Approach influences Learning Behaviors, Competence Development, and ultimately Academic Achievement, a Partial Least Squares Path Modeling (PLS-PM) framework was employed. Instructional Approach was treated as the exogenous variable and operationalized as a binary dummy variable, with the AI-assisted group coded as 1 and the traditional instruction group coded as 0. This variable captured the overall effect of the instructional intervention.
Learning Behaviors were represented by two observed indicators: Post-class Assignment Scores and Completion Time (X4) and Number of Classroom Interactions (X5). Competence Development was assessed using six indicators: Self-directed Learning Ability (X11), Data Processing Ability (X21), Comprehensive Analytical Ability (X12, X22), and Practical Problem-solving Ability (X13, X23). Academic Achievement served as the ultimate outcome variable, measured by students’ Final Exam Scores (total score out of 100).
Model estimation was conducted using R version 4.4.1. A positive path coefficient indicated that the AI-assisted instructional approach yielded superior outcomes in that specific domain relative to the traditional approach. Negative or non-significant coefficients suggested no discernible advantage or a possible reverse effect. The coefficient of determination (R2) was used to evaluate the proportion of variance explained for each endogenous variable. Statistical significance of the path coefficients was determined based on associated p-values.
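Once the path coefficients are estimated, direct, indirect, and total effects decompose as products of coefficients along each directed path, summed over paths. A standard-library sketch of this decomposition, using purely hypothetical coefficients (chosen for illustration and not the fitted values):

```python
# Hypothetical path coefficients (illustrative only, not the fitted model):
# IA = Instructional Approach, LB = Learning Behaviors,
# CD = Competence Development, AA = Academic Achievement.
paths = {
    ("IA", "LB"): 0.50,
    ("IA", "CD"): 0.20,
    ("IA", "AA"): -0.15,  # direct effect of the instructional dummy
    ("LB", "CD"): 0.45,
    ("CD", "AA"): 0.40,
}

def all_effects(paths, src, dst):
    """Enumerate every directed path src -> dst; a path's effect is the
    product of its coefficients, and the total effect is their sum."""
    succ = {}
    for (a, b), w in paths.items():
        succ.setdefault(a, []).append((b, w))
    results = []

    def walk(node, product, trail):
        if node == dst:
            results.append((trail, product))
            return
        for nxt, w in succ.get(node, []):
            walk(nxt, product * w, trail + [nxt])

    walk(src, 1.0, [src])
    return results

effects = all_effects(paths, "IA", "AA")
for trail, eff in effects:
    print(" -> ".join(trail), round(eff, 3))
direct = paths[("IA", "AA")]
total = sum(eff for _, eff in effects)
print("direct:", direct, "indirect:", round(total - direct, 3), "total:", round(total, 3))
```

With these illustrative numbers the summed indirect effect (0.17) exceeds the magnitude of the negative direct effect (−0.15), the pattern that defines a compensatory mediation structure.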
(3)
Latent Profile Analysis
To further examine student typologies across instructional conditions, a latent profile model was constructed using the 11 evaluation indicators listed in Table 3 as manifest variables. Model fit was evaluated across one to five latent classes using the Bayesian Information Criterion (BIC), Integrated Completed Likelihood (ICL), and Bootstrap Likelihood Ratio Test (BLRT), as reported in Supplementary Table S2. Based on model parameter diagnostics, the four-class model demonstrated the best overall fit, with Entropy values of 0.82 and 0.85, indicating high classification accuracy.
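The fit statistics used here have simple closed forms: BIC penalizes model log-likelihood by parameter count, and the entropy index summarizes how cleanly posterior class probabilities separate students. A minimal standard-library sketch with hypothetical posterior probabilities (not the study's data):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """BIC = k*ln(n) - 2*ln(L); lower values indicate better fit."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def normalized_entropy(posteriors):
    """Entropy index in [0, 1] for mixture-model class assignments;
    values near 1 mean observations are classified almost unambiguously."""
    n, k = len(posteriors), len(posteriors[0])
    e = -sum(p * math.log(p) for row in posteriors for p in row if p > 0)
    return 1.0 - e / (n * math.log(k))

# Hypothetical posterior class probabilities: five students, four classes
post = [
    [0.90, 0.05, 0.03, 0.02],
    [0.02, 0.93, 0.03, 0.02],
    [0.10, 0.05, 0.80, 0.05],
    [0.03, 0.02, 0.05, 0.90],
    [0.85, 0.05, 0.05, 0.05],
]
print(round(normalized_entropy(post), 2))
```

Entropy values of 0.80 or higher are conventionally read as acceptable classification accuracy, which is the threshold the four-class solution here clears.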

4. Results

4.1. Score Statistics Under Different Instructional Approaches

To answer RQ1 (How do AI-assisted and traditional instruction differentially influence students’ learning behaviors, competence development, and academic achievement?), descriptive statistics and independent-samples t-tests were first used to compare the two groups’ scores on the various assessment indicators. The statistical outcomes for each student’s Learning Behaviors, Competence Development scores, and Final Exam Scores across the AI-assisted and traditional instruction groups are presented in Figure 2 and Supplementary Table S3.
In terms of the Midterm Report Score, students in the AI-assisted group outperformed their counterparts in the traditional group, with a significantly higher mean score (75.9 vs. 73.2, p < 0.01, Figure 2a). This suggests that stage-based instructional design supported by AI tools enhanced the quality of students’ intermediate deliverables. Among the specific evaluation dimensions, the AI-assisted group demonstrated significantly higher performance in Comprehensive Analytical Ability (21.3 vs. 19.5, p < 0.01, Figure 2c). However, students in the traditional group performed slightly better in Practical Problem-solving Ability (32.8 vs. 32.1, p < 0.05, Figure 2d). There was no significant difference between the two groups in terms of Self-directed Learning Ability scores (Figure 2b).
Regarding the PPT Presentation Score, although a few students in the traditional group achieved exceptionally high scores, the AI-assisted group still showed a significantly higher average score overall (77.7 vs. 74.2, p < 0.05, Figure 2e). In terms of specific skill dimensions, the AI-assisted group exhibited significantly stronger performance in Data Processing Ability (23.9 vs. 20.9, p < 0.001, Figure 2f). However, no statistically significant differences were found between the two groups in Comprehensive Analytical Ability (Figure 2g) or Practical Problem-solving Ability (Figure 2h) in the context of the PPT presentation.
As for the Final Exam Score, no significant difference was observed between the AI-assisted and traditional groups in terms of total score (80.3 vs. 79.5, Figure 2i). However, the AI-assisted group exhibited a higher maximum score and a better minimum score, indicating greater overall score stability. In terms of sub-dimensions, there were no significant differences in Fundamental Theoretical Knowledge (Figure 2j) or Practical Application (Figure 2l), but the AI-assisted group significantly outperformed the traditional group in Comprehensive Thinking (23.2 vs. 21.6, p < 0.001, Figure 2k).
No significant differences were observed between the two groups in total Post-class Assignment Scores across the eight assignments (320.4 vs. 309.7), total Completion Time (8.2 h vs. 8.6 h), or overall Learning Efficiency (43.4 vs. 40.2 points/hour) (Figure 2n–p). Nonetheless, the AI-assisted group achieved both higher maximum and higher mean Learning Efficiency; some of its students exceeded 80 points per hour, suggesting that AI-supported instruction more effectively stimulated students’ learning potential.
Finally, the AI-assisted group exhibited a significantly higher Number of Classroom Interactions than the traditional group (p < 0.001, Figure 2m), indicating that AI-enhanced instruction substantially increased student engagement and interactive participation in class discussions.

4.2. The Impact of Learning Behaviors and Competence Development on Final Exam Scores Under Different Instructional Approaches

To examine how Learning Behaviors and Competence Development contribute to students’ Final Exam Scores under different Instructional Approaches, six key indicators (Classroom Participation, Learning Efficiency, Self-directed Learning Ability, Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability) were individually subjected to correlation and regression analyses. The results are presented in Figure 3. This section aims to dissect the independent contribution of each variable, laying the groundwork for the subsequent comprehensive multivariate model.
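Each of these bivariate relationships reduces to a one-predictor OLS fit, for which the slope and R² have closed forms. A standard-library sketch with hypothetical data (interaction counts and exam scores invented for illustration):

```python
def simple_regression(x, y):
    """Slope, intercept, and R-squared of a one-predictor OLS fit (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical interaction counts vs. final exam scores (not the study's data)
counts = [2, 5, 3, 8, 6, 4, 7, 1, 9, 5]
scores = [70, 76, 72, 82, 78, 74, 80, 68, 85, 75]
slope, intercept, r2 = simple_regression(counts, scores)
print(round(slope, 2), round(r2, 2))
```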
In terms of Learning Behaviors, Classroom Participation showed a significant positive relationship with Final Exam Scores in both the traditional instruction group (R2 = 0.16, p = 0.003) and the AI-assisted group (R2 = 0.15, p = 0.005), as shown in Figure 3a. Notably, the regression coefficient was higher in the traditional group (0.79 vs. 0.46), suggesting that in traditional instruction, in-class engagement and verbal participation play a more prominent role as mechanisms for promoting academic performance. By contrast, Learning Efficiency showed no significant correlation with Final Exam Scores in either group (Figure 3b).
Regarding Competence Development, Self-directed Learning Ability was not significantly correlated with Final Exam Scores in the AI-assisted group. However, in the traditional instruction group, this ability exhibited a significant positive impact on academic performance (p = 0.04), with a slightly higher regression coefficient than in the AI-assisted group (0.56 vs. 0.54, Figure 3c). This finding indicates that traditional instruction relies more heavily on students’ autonomous learning initiatives to drive Academic Achievement.
Both groups demonstrated significant positive correlations between Data Processing Ability and Final Exam Scores (p < 0.05). The AI-assisted group showed a stronger regression coefficient (0.83 vs. 0.73, Figure 3d), suggesting that the integration of AI tools, such as automated data analysis and visual analytics, enhanced students’ capacity to apply data processing skills in practice, thereby increasing the marginal contribution of this ability to Academic Achievement. This finding is consistent with the theoretical assumptions of technology-enhanced learning.
Comprehensive Analytical Ability was a significant predictor of Final Exam Scores in both groups (p < 0.05), with a stronger effect observed in the AI-assisted group (0.80 vs. 0.68, Figure 3e). These results suggest that AI-supported instruction may reinforce students’ ability to filter, synthesize, and structure information, thereby amplifying the academic impact of analytical competence.
In contrast, Practical Problem-solving Ability was not significantly associated with Final Exam Scores in the AI-assisted group. However, in the traditional instruction group, this ability exhibited a strong and significant positive correlation (p < 0.001), with a substantially higher regression coefficient (1.66 vs. 0.64, Figure 3f). This finding suggests that while AI tools may streamline problem-solving through standardized workflows, they may also constrain students’ flexibility in addressing complex and ill-defined problems, thereby limiting the translation of this ability into measurable academic outcomes.

4.3. Multivariate Effects of Learning Behaviors and Competence Development on Final Exam Scores Under Different Instructional Approaches

To examine the combined effects of the predictors while controlling for their intercorrelations, a multivariate regression including all six predictors of students’ Final Exam Scores was estimated; the results are presented in Figure 4. These results reveal both the overall explanatory power and the distinctive patterns of influence that Learning Behaviors and Competence Development exert under the two Instructional Approaches. This analysis helps identify the core drivers of academic achievement within different pedagogical environments.
In the traditional instruction group, the six predictors jointly explained 35% of the variance in Final Exam Scores (R2 = 0.35, p = 0.002; Figure 4a), and the overall model was statistically significant. Among the predictors, Practical Problem-solving Ability exhibited a significant positive effect on Final Exam Scores (p = 0.015), suggesting that teacher-centered case instruction and analysis effectively strengthened students’ ability to apply theoretical knowledge to practical scenarios. By contrast, this association was not significant in the AI-assisted group (p = 0.475), implying that the use of AI tools may have diverted students’ attention from the underlying reasoning required for complex problem-solving. In the traditional group, Learning Efficiency showed a negative trend in relation to Final Exam Scores, though it did not reach statistical significance (p = 0.07), and the remaining variables were not significantly associated with performance (p > 0.1).
In the AI-assisted instruction group, the six predictors together accounted for 32% of the variance in Final Exam Scores (R2 = 0.32, p = 0.005; Figure 4b), with the model again reaching statistical significance. Data Processing Ability (p = 0.052) and Comprehensive Analytical Ability (p = 0.061) exhibited positive trends toward influencing Final Exam Scores, although these effects did not reach conventional levels of significance. These findings align with the strengths of AI-supported instruction, which emphasizes data-driven reasoning and structured knowledge integration, and suggest that technology-enhanced environments may be more conducive to cultivating latent cognitive skills. In contrast, such effects were absent in the traditional group (Figure 4a, all p > 0.3), further highlighting the limitations of traditional instruction in fostering data-centric and integrative competencies. The remaining variables in the AI-assisted group did not show significant associations with Final Exam Scores (p > 0.1).
Taken together, the results indicate that the two Instructional Approaches exert fundamentally different effects on student performance. Traditional instruction appears to foster the rapid development of explicit, performance-oriented skills such as Practical Problem-solving Ability through structured, instructor-led training. In contrast, AI-assisted instruction may play a more subtle yet formative role by nurturing implicit competencies such as Data Processing and Comprehensive Analytical Ability through technology-mediated scaffolding and student-led exploration.

4.4. Effects of Instructional Approach on Learning Behaviors, Competence Development, and Academic Achievement

To directly test the proposed theoretical path model (Hypothesis 2) and quantify the indirect mechanism of “Instructional Approach → Learning Behaviors → Competence Development → Academic Achievement,” partial least squares path modeling (PLS-PM) was employed. Figure 5a illustrates the complete path model with its coefficients, and Figure 5b decomposes the direct, indirect, and total effects. Path coefficients reflect the direction and magnitude of change associated with AI-assisted instruction relative to traditional teaching. This model provides a holistic view of how AI-assisted instruction exerts its influence.
The Instructional Approach exerted a significant positive influence on Learning Behaviors (path coefficient = 0.46, p < 0.001), indicating that AI-assisted instruction is more effective than traditional teaching in promoting desirable learning practices, such as improvements in Post-class Assignment Scores and Completion Time, as well as increased Classroom Participation. The path from Instructional Approach to Competence Development was also positive (0.16) but did not reach statistical significance, suggesting that the direct impact of AI-assisted instruction on competence outcomes remains unstable and may be highly contingent on individual learner differences.
Interestingly, the direct effect of Instructional Approach on Academic Achievement was negative (path coefficient = −0.20, p < 0.05), implying that the adoption of AI-based instruction, when considered in isolation, may not directly enhance student performance, and may even pose challenges for learners who are not yet fully adapted to AI-mediated learning environments. However, the pathway analysis revealed a robust indirect mechanism: AI-assisted instruction significantly improved Learning Behaviors (0.46, p < 0.001), which in turn promoted Competence Development (0.44, p < 0.001), and ultimately yielded a strong, positive effect on Academic Achievement (0.42, p < 0.001). These results suggest that the effect of AI instruction on student performance operates predominantly through a dual mediation chain—“Learning Behaviors → Competence Development”—rather than through direct instructional effects.
As shown in Figure 5b, the indirect effect of AI-assisted instruction on Academic Achievement was positive and, numerically, larger than its negative direct effect. This reflects a “compensatory mediation effect”, wherein the indirect learning pathway offsets the immediate limitations of unfamiliar or nontraditional instructional delivery.
In summary, the findings highlight distinct developmental pathways under different Instructional Approaches. Traditional instruction is characterized by a teacher-centered, directly transmissive model, whereas AI-assisted instruction functions through the activation of Learning Behaviors and the subsequent cultivation of Competence Development, which represents an indirect yet effective route to improving Academic Achievement. These findings underscore the importance of aligning instructional innovations with strategies that support student engagement and the progressive development of higher-order capabilities to maximize educational outcomes.

4.5. Classification of Student Profiles Based on Latent Profile Analysis

While the previous analyses focused on variable-level group comparisons, latent profile analysis (LPA) offers a person-centered perspective by identifying distinct typologies of students based on their response patterns across the 11 indicators. Figure 6 depicts the characteristics of the four identified student profiles and their distribution across the two instructional groups. This analysis helps explain the heterogeneity underlying the aggregate results and reveals how instructional approaches differentially shape student learning characteristics.
Based on the results of the latent profile analysis, four distinct student profiles were identified, as illustrated in Figure 6. Students in the red profile demonstrated relatively stronger performance in indicators related to Learning Autonomy, including Self-directed Learning Ability (X11), Post-class Assignment Scores and Completion Time (X4), and Number of Classroom Interactions (X5), compared to other competence dimensions. This group was therefore labeled Efficient Self-directed Learners. Students in the blue profile exhibited relative strengths in Practical Problem-solving Ability (X13, X23, X33), and were thus designated as Applied Practice Learners. Those in the green profile showed notably high scores in Comprehensive Analytical Ability (X12, X22) and Comprehensive Thinking (X32), and were identified as Logically Analytical Learners. Finally, students in the purple profile performed relatively poorly across all indicators and were labeled Low-Efficiency Passive Learners.
As shown in Figure 6a, the AI-assisted instruction group had a higher proportion of Efficient Self-directed Learners (38%) and Logically Analytical Learners (24%) compared to the traditional instruction group (28% and 18%, respectively), and a lower proportion of Low-Efficiency Passive Learners. These results suggest that AI-assisted instruction, by offering personalized learning pathways and real-time feedback, effectively enhanced students’ motivation and efficiency, mitigated learning obstacles, and reinforced systematic thinking. In contrast, the traditional model of “teacher speaks, students listen” often fostered a reliance on instructor guidance, limiting opportunities for autonomous learning. Moreover, the one-size-fits-all nature of traditional instructional design failed to accommodate individual learner needs, potentially leaving some students unsupported and disengaged.
Figure 6b shows that the Applied Practice Learner profile was more prevalent in the traditional instruction group. This aligns with the emphasis on in-class lecturing, case studies, and practice-oriented strategies that are typical of traditional pedagogy. Such structured approaches appear effective in helping students integrate learned knowledge and improve their Practical Application skills during problem-solving tasks.
In sum, the two Instructional Approaches emphasize different dimensions of student development. AI-assisted instruction is more adept at cultivating Learning Autonomy and Logical Analysis, while traditional instruction demonstrates greater strength in fostering Applied Practice capabilities.

5. Discussion

5.1. Exploring the Differential Effectiveness of AI-Assisted and Traditional Instruction

This study found no significant difference in Final Exam Scores between students in the AI-assisted and traditional instruction groups (Figure 2i), suggesting that the integration of AI into instructional delivery did not yield immediate gains in Academic Achievement. This finding is consistent with meta-analytic evidence showing that AI-assisted interventions tend to yield larger short-term gains that attenuate as intervention duration increases [49]. Specifically, a BJET meta-analysis of 24 randomized studies reported stronger effects for short interventions and attributed the decline over time partly to novelty effects [49]; a Computers and Education meta-analysis focusing on ChatGPT likewise identified intervention length as a significant moderator, with shorter designs producing larger effects [50]; and a JRTE meta-analysis found the highest effects in single-session trials [51]. Taken together, these patterns help reconcile seemingly contradictory claims about AI’s uniformly positive impact by highlighting time-sensitive boundary conditions for sustained learning benefits. One plausible explanation discussed in educational technology is the novelty effect, wherein initial performance boosts associated with new tools diminish over time if not accompanied by sustained, active learning structures [52].
In contrast, with respect to intermediate learning outcomes, students in the AI-assisted group demonstrated significantly higher Midterm Report Scores (Figure 2a) and PPT Presentation Scores (Figure 2e). Notably, the AI group outperformed the traditional group in dimensions such as Comprehensive Analytical Ability (Figure 2c) and Data Processing Ability (Figure 2f). These findings are consistent with prior research suggesting that AI-supported instruction facilitates analytical thinking and information processing through intelligent data analysis platforms and knowledge-construction tools [53]. For example, Chen et al. (2020) reported that the integration of AI into instructional practices supports the development of higher-order skills, including critical thinking and the ability to engage with complex problems [15]. Accordingly, the superior performance of the AI group in analytical dimensions observed in this study may be largely attributed to the affordances of AI technologies, particularly the access to diverse resources and real-time feedback, which scaffold students’ efforts to synthesize and interpret multifaceted information.
It is noteworthy, however, that students in the traditional instruction group outperformed their AI-assisted peers in Practical Problem-solving Ability (Figure 2d). This finding suggests that traditional, instructor-led approaches, particularly those grounded in expert demonstration and case-based analysis, may offer more direct support for applying conceptual knowledge to real-world scenarios. The structured guidance and modeling provided by instructors in traditional classrooms appear to equip students with clear procedural strategies, thereby strengthening their immediate capacity to tackle prototypical problems [54]. This observation is consistent with findings from Zain et al. (2022), who demonstrated that structured instructional strategies, such as lectures and case-based exercises, effectively promote problem-solving capabilities in engineering contexts [37]. The relatively weaker performance of the AI-assisted group in this dimension may stem from an overreliance on AI tools. While AI offers standardized solution frameworks that enhance efficiency, such tools may inadvertently reduce students’ opportunities for engaging deeply with the logical structure of problems. Prior reviews have cautioned that excessive dependence on AI-generated outputs can undermine students’ capacity for independent reasoning and adaptive problem-solving [31].

5.2. Mechanisms Linking Instructional Approach to Academic Achievement

As artificial intelligence continues to integrate into educational settings, increasing attention has been paid to the divergent mechanisms through which AI-assisted and traditional Instructional Approaches shape student development. Rather than being inherently superior or inferior, these two approaches appear to cultivate distinct types of competencies, each with unique implications for Academic Achievement. This section explores how such differences in competence cultivation contribute to divergent academic outcomes.
The regression analysis (Figure 3a) showed that although AI-assisted instruction significantly increases the frequency of Classroom Participation, its marginal contribution (regression coefficient) to Academic Achievement was smaller than that observed in traditional classrooms. This discrepancy may be attributed, in part, to the nature of AI-facilitated interactions, which often involve technical prompts, input of keywords, or navigation of AI tool interfaces. Such interactions, while frequent, may lack the depth and cognitive demand of more intentional engagement. In contrast, Classroom Participation in traditional settings, such as posing questions and engaging in structured discussions, is typically more focused and cognitively substantive, resulting in greater academic benefit [55]. Prior research has consistently demonstrated a strong positive relationship between active classroom participation and academic performance, particularly in traditional environments where active engagement is a salient and self-initiated learning behavior [56].
The relationship between Self-directed Learning Ability and Academic Achievement also diverges markedly across the two Instructional Approaches (Figure 3c), as evidenced by their differing regression coefficients in the multivariate models (Figure 4). In the traditional instruction group, students with strong Self-directed Learning Ability tend to achieve higher Final Exam Scores, whereas in the AI-assisted group, this relationship is less pronounced. This divergence suggests that the instructional demands on learner autonomy differ substantially between the two contexts. Khalid et al. (2020) found that students in online or remote learning environments often exhibit stronger self-directed capabilities than those in traditional classrooms, and that the correlation between learner autonomy and academic performance is especially strong in digitally mediated contexts [57]. In the AI-assisted setting, however, the personalization and real-time scaffolding offered by intelligent systems may partially compensate for deficiencies in self-regulation, enabling students with weaker autonomous learning tendencies to maintain progress. As a result, the predictive strength of Self-directed Learning Ability on Academic Achievement may be diluted. In contrast, traditional instruction relies more heavily on learners’ initiative; teachers are less able to address every student’s needs in real time, which places a greater burden on students to preview content, review material independently, and resolve conceptual difficulties outside of class. Consequently, students with higher levels of Self-directed Learning Ability are more likely to transform classroom inputs into academic success under traditional pedagogical conditions.
AI-assisted instruction offers students abundant data resources and real-time feedback loops, enabling them to rapidly acquire and synthesize multidimensional information, especially in tasks such as Midterm Reports and PPT Presentations (Figure 3d,e). These affordances significantly enhance students’ Data Processing Ability and Comprehensive Analytical Ability. Empirical research on generative AI in engineering education has shown that students trained with AI-generated prompts outperform control groups in both data analysis and programming proficiency [58]. Within AI-enhanced learning environments, the iterative feedback mechanisms of intelligent systems help students consolidate domain knowledge while efficiently integrating interdisciplinary information. This process supports the emergence of data literacy and critical thinking as internalized, latent cognitive capacities [15]. Previous studies have also demonstrated that the combination of rich data resources and instant feedback in intelligent learning environments accelerates learners’ development of advanced information processing habits, thereby promoting active meaning-making and deeper learning engagement [59]. By contrast, traditional instruction follows a different logic in cultivating Data Processing and Comprehensive Analytical skills. In traditional classrooms, opportunities for data analysis and integrative processing are typically confined to textbook exercises and instructor-directed problem sets. These environments often lack diverse data inputs and real-time formative feedback, resulting in a slower development of the skills required to handle data-intensive or analytically demanding tasks [60]. Consequently, although students in traditional classrooms also show a positive correlation between these abilities and Final Exam Scores, their regression coefficients are generally lower compared to those observed under AI-assisted instruction. 
This indicates that, under traditional models, such competencies are often shaped by explicit, externally guided training rather than long-term, internalized cognitive routines [20]. It is worth noting, however, that some scholars remain cautious about the effectiveness of AI in fostering analytical thinking, especially within the humanities. Castillo (2024a) argues that traditional instruction should remain central to cultivating complex analysis and problem-solving skills in disciplines where interpretive reasoning and context sensitivity are paramount [61]. This suggests that efforts to develop Comprehensive Analytical Ability should be context-dependent, with instructional approaches tailored to both disciplinary characteristics and pedagogical design.
Findings from the present study also underscore the distinct value of traditional instruction in fostering students’ capacity to solve real-world, complex problems (Figure 3f). Human instructors are uniquely positioned to contextualize theoretical content and guide learners in understanding how engineering principles apply to authentic projects. This form of situated learning helps students develop cognitive schemas for Practical Problem-solving Ability. In contrast, AI-assisted instruction often emphasizes structured, standardized tasks. Intelligent systems excel in delivering personalized drills and feedback within well-defined problem spaces, but remain limited in supporting creative and context-sensitive reasoning. As a result, students in AI-assisted classrooms, despite mastering theoretical and data analytic skills, may lack the experiential scaffolding needed to approach novel, open-ended problems with confidence (Figure 3f and Figure 4b). Several studies reinforce this concern. Castillo et al. (2024b) caution that excessive reliance on AI may hinder the development of students’ communication, critical thinking, and applied practice skills, particularly in settings where direct engagement with real-world ambiguity is essential [62]. However, other research offers a more nuanced view. For instance, Chiang et al. (2024) compared graduate students’ problem-solving performance in online and face-to-face instruction, finding that online learners actually outperformed their peers [63]. The authors attributed this to the heightened self-directed exploration and confidence-building fostered by virtual environments. These results differ from those observed in the current undergraduate AI-assisted context, highlighting how learner maturity and instructional design can modulate the effectiveness of different pedagogical approaches. 
Taken together, these findings suggest that AI-assisted instruction is not inherently deficient in cultivating Practical Problem-solving Ability. Rather, its current limitations may stem from insufficient integration of authentic, contextualized experiences. If AI instruction can incorporate real-world case studies, simulated projects, or immersive problem scenarios, it may substantially narrow the observed gap in practical competence development compared to traditional methods.

5.3. Divergent Student Development Pathways in AI-Assisted and Traditional Instruction

The findings of this study indicate that the impact of AI-assisted instruction on Academic Achievement operates primarily through indirect pathways (Figure 5), highlighting a critical gap in current educational research: while many studies have focused on whether AI is effective, few have examined how it exerts its effects. As Baker et al. (2016) emphasize, the true educational value of technology should not be measured solely by terminal outcomes such as test scores, but rather by the mechanisms through which it shapes the learning process [16]. Within this process, improvements in Learning Behaviors emerge as a key mediating factor in AI-supported instruction. This study observed that students in the AI-assisted group exhibited significantly higher Classroom Participation, including more frequent in-class interactions and spontaneous questioning, compared to those in the traditional group. These behaviors align closely with the core principles of constructivism and self-determination theory, both of which stress the importance of active learner engagement. According to constructivist theory [18], knowledge is not passively received but actively constructed through learner-environment interactions. In this context, AI tools serve as effective scaffolds that facilitate such knowledge construction. Self-determination theory [21] further suggests that when learners experience greater autonomy, their intrinsic motivation and depth of engagement are significantly enhanced. In the present study, the personalized learning pathways, real-time feedback, and task adaptation mechanisms provided by AI instruction enhanced students’ sense of agency and control, thereby promoting the development of self-regulated learning behaviors. As noted by Jin et al. (2023), well-designed AI tools can effectively enhance learners’ autonomy and engagement, which is further supported by the present findings [17]. 
These elevated learning behaviors, in turn, contributed to greater gains in multiple dimensions of Competence Development, particularly in Data Processing Ability, Comprehensive Analytical Ability, and systematic thinking. The development of these higher-order capabilities subsequently served as mediating variables that indirectly boosted students’ Academic Achievement.
Compared to AI-supported instruction, traditional teaching exerts its influence on learning outcomes primarily through direct pathways (Figure 5), meaning that instructional effectiveness is largely contingent upon what the instructor delivers in real time and how much the student immediately retains. In this study, variables such as Classroom Participation and Learning Autonomy in the traditional instruction group exhibited limited associations with Academic Achievement, while Practical Problem-solving Ability emerged as the only significant predictor of improved performance. This finding aligns with existing evaluations of lecture-based instruction, which suggest that while this approach is efficient in conveying concrete knowledge, it falls short in cultivating students’ intrinsic learning habits and higher-order competencies [33]. As a result, in traditional instructional settings, improvements in Learning Behaviors and Competence Development do not serve as primary mediators of Academic Achievement; instead, outcomes rely more heavily on the instructor’s direct transmission of knowledge. Although this pathway yields quick results, it penetrates the learning process less deeply and offers limited long-term impact. In a meta-analysis of 225 studies of STEM classrooms, Freeman et al. (2014) found that students in lecture-only classes performed worse than those in active learning environments [19]: active learning raised exam scores by an average of about 6 percent, and failure rates under traditional lecturing were roughly 55 percent higher than under active learning, suggesting that instructor-centered methods alone are insufficient to fully engage student potential. The present study supports this conclusion, showing that in traditional classrooms, where broad-based student engagement and inquiry are lacking, only a subset of students (typically those with strong Practical Problem-solving Ability) benefit directly from instruction, which may constrain overall instructional effectiveness.
The divergent developmental pathways observed between AI-assisted and traditional instruction reflect a broader paradigm shift occurring in higher education. The transition from pedagogy driven by knowledge transmission to instruction centered on competence and guided by process is no longer optional but essential. The findings of this study align with constructivist learning theory, which emphasizes that active engagement in learning behaviors fosters deeper competence development, ultimately leading to improved academic achievement. The observed pathway of “Learning Behaviors → Competence Development → Academic Achievement” supports this theoretical chain, consistent with previous findings in engineering education [3,8]. Notably, while the positive influence of AI-assisted instruction on competence development aligns with prior studies, the relatively modest effect size suggests that contextual factors, such as prior knowledge and course design, may moderate the extent of these benefits.
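The product-of-coefficients logic behind this kind of serial mediation analysis can be illustrated with a short numerical sketch. The Python code below simulates a hypothetical cohort and estimates the chain Instructional Approach → Learning Behaviors → Competence Development → Academic Achievement with ordinary least squares; all variable names, effect sizes, and the simulated data are illustrative assumptions, not the study’s actual PLS-PM estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102  # same sample size as the study; the data themselves are simulated

# Hypothetical standardized variables following the assumed causal chain
instruction = rng.integers(0, 2, n).astype(float)          # 0 = traditional, 1 = AI-assisted
behaviors = 0.8 * instruction + rng.normal(0.0, 1.0, n)    # Learning Behaviors
competence = 0.6 * behaviors + rng.normal(0.0, 1.0, n)     # Competence Development
achievement = -0.2 * instruction + 0.5 * competence + rng.normal(0.0, 1.0, n)

def slopes(y, X):
    """OLS slope estimates (an intercept is added internally, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

a = slopes(behaviors, instruction)[0]                                   # instruction -> behaviors
b = slopes(competence, np.column_stack([behaviors, instruction]))[0]    # behaviors -> competence
cd = slopes(achievement, np.column_stack([competence, instruction]))
c, direct = cd[0], cd[1]                                                # competence -> achievement; direct path

indirect = a * b * c          # effect transmitted through the behavior-competence chain
total = direct + indirect     # decomposition of the total effect

print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, direct={direct:.2f}, "
      f"indirect={indirect:.2f}, total={total:.2f}")
```

In a PLS path model, the indirect effect along a chain is likewise the product of the path coefficients, and the total effect is the sum of the direct and indirect components, which is the decomposition reported in Figure 5b.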

5.4. Implications for Instructional Reform

The differences identified in student development pathways offer important insights for educational reform. First, when evaluating and designing Instructional Approaches, educators must move beyond superficial comparisons of Academic Achievement and instead examine how different instructional models reshape the learning process. As demonstrated in this study, meaningful and sustained improvements in Academic Achievement can only occur when the Instructional Approach effectively activates a positive chain of “Learning Behaviors → Competence Development”. Therefore, the integration of technologies such as AI should not be framed merely as tools for boosting test scores. Rather, educators should focus on how such technologies can transform pedagogical processes. Teachers, in particular, must purposefully incorporate AI tools in ways that foster student agency, stimulate critical thinking, and support competence development, rather than reducing AI to a mechanistic substitute for traditional instruction. It is only through intentional instructional design, which deeply embeds technology into the learning environment, that “technology empowerment” can be transformed into “learning empowerment”, ultimately achieving dual gains in Academic Achievement and Competence Development.
Second, the structural model developed in this study aligns closely with the current reform paradigms emphasizing learner-centered and holistic learning-oriented education. In the context of higher engineering education, the “New Engineering” movement advocates for a shift from content transmission to the cultivation of practical competencies and innovation capacity. This study provides empirical support for such reforms by showing that AI-assisted instruction enhances Academic Achievement indirectly, primarily by increasing student engagement and fostering Competence Development, thereby confirming both the necessity and feasibility of competence-based instructional transformation.
Finally, this study reinforces the view that AI-assisted and traditional Instructional Approaches should not be viewed as dichotomous in quality or effectiveness. Rather, each offers distinct advantages depending on the desired learning outcomes. Traditional instruction excels at delivering content and procedural knowledge efficiently in the short term, while AI-assisted instruction demonstrates greater promise for supporting the long-term development of higher-order competencies. Educators should adopt a strategic, goal-oriented approach that leverages the strengths of both models. For example, in implementing AI technologies, instructors may retain effective elements of traditional pedagogy, such as targeted explanation and expert modeling, while using AI to broaden student participation and offer personalized learning support. Only through such thoughtful integration can the complementary advantages of both Instructional Approaches be fully realized, ultimately enabling the design of instructional pathways that meet students’ immediate learning needs while also fostering their long-term growth and development.

6. Conclusions

Promoting instructional reform within the framework of New Engineering Education has become an urgent priority in the transformation of engineering pedagogy. This study investigated the differential effects of AI-assisted instruction and traditional instruction on undergraduate students majoring in Hydraulic Engineering by systematically analyzing their impact across four dimensions: Learning Behaviors, Competence Development, Academic Achievement, and the underlying instructional pathways. The key findings are summarized as follows:
(1)
AI-assisted instruction significantly enhanced students’ Classroom Participation, Data Processing Ability, and Comprehensive Analytical Ability, particularly in tasks requiring information integration and systems thinking. However, in terms of Practical Problem-solving Ability, traditional instruction yielded more immediate benefits due to its structured training and instructor-led scaffolding. While AI tools are effective in cultivating latent cognitive skills, they may also inadvertently inhibit students’ flexibility when responding to complex, real-world problems.
(2)
The path analysis revealed that AI-assisted instruction did not directly improve students’ Final Exam Scores and even exerted a small but significant negative direct effect (path coefficient = –0.20). Nonetheless, its positive impact was realized indirectly through the significant promotion of Learning Behaviors, which in turn facilitated Competence Development and ultimately enhanced Academic Achievement. This “compensatory mediation effect” suggests that technology-enabled instruction should not be narrowly evaluated by short-term performance gains, but rather by its capacity to optimize the learning process itself.
(3)
The latent profile analysis further identified distinct student types cultivated under each instructional model. AI-assisted instruction was more likely to foster Efficient Self-directed Learners and Logically Analytical Learners, reflecting the strengths of personalized pathway recommendations and real-time feedback in promoting autonomy and systems-level reasoning. Conversely, traditional instruction was more conducive to developing Applied Practice Learners, highlighting the instructional value of guided demonstrations and case-based analysis in enhancing real-world problem-solving skills. These findings suggest that future instructional reform should seek deep integration of both approaches, leveraging the long-term developmental advantages of AI instruction while retaining the structured strengths of traditional methods, thus achieving a balanced cultivation of theoretical learning and practical application.
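As a methodological illustration, a latent profile analysis of this kind can be approximated with a Gaussian mixture model fitted for an increasing number of profiles and compared by BIC, mirroring the 1~5 class comparison reported in Table S2. The sketch below uses synthetic data (a 102 × 11 standardized indicator matrix drawn from two artificial profiles); the cluster means, variances, and class count are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic stand-in for the 102 x 11 standardized indicator matrix:
# two well-separated artificial profiles, purely for illustration.
X = np.vstack([
    rng.normal(0.8, 0.5, size=(51, 11)),
    rng.normal(-0.8, 0.5, size=(51, 11)),
])

# Fit 1-5 profile solutions and choose the one with the lowest BIC,
# analogous to the model comparison reported in Table S2.
bic = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=3, random_state=0).fit(X)
    bic[k] = gm.bic(X)

best_k = min(bic, key=bic.get)
profiles = GaussianMixture(n_components=best_k, covariance_type="diag",
                           random_state=0).fit_predict(X)
print(f"best number of profiles by BIC: {best_k}")
```

In practice, LPA model selection also weighs entropy and interpretability alongside information criteria, so the lowest-BIC solution is a starting point rather than a final answer.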

7. Future Research

Future studies should implement iterative teaching cycles in the same course across different semesters and cohorts to examine the stability of instructional effects and adopt longitudinal designs to track the same students’ learning behaviors, competence development, and academic performance over time. Additionally, refining the measurement of complex cognitive abilities, particularly in practical problem-solving and innovation, requires more sensitive, multi-dimensional assessment tools for greater precision.
Methodologically, combining structural equation modeling with causal inference techniques would enable a more nuanced identification and verification of the relational pathways within instructional models, thus advancing our understanding of the internal mechanisms through which AI instruction shapes student learning. Additionally, future research should explore the interactions between AI-enhanced instruction, instructor roles, and course characteristics, examining how technological applications can be optimally aligned with disciplinary demands and professional expectations.
At the sampling level, future work should expand the sample size, include students from diverse disciplinary backgrounds, and extend the research to multiple courses (e.g., Hydraulics, River Dynamics, Engineering Surveying) to compare the effects of AI-assisted instruction across different knowledge domains, thereby enhancing the generalizability of findings. Systematic logging of AI usage frequency and duration should be incorporated in future studies to enable precise measurement of exposure and examination of potential dose–response relationships. This would provide a stronger empirical foundation for the broader adoption and refinement of AI-assisted instruction in engineering education and beyond.

8. Limitations

While this study employed a posttest-only control group design to compare AI-assisted and traditional instruction in the River Regulation and Ecological Restoration course, the absence of a pretest limits the ability to fully control for pre-existing group differences. Future research in similar hydraulic engineering contexts should incorporate pre-intervention measurements or statistical controls to enhance internal validity and contextual rigor.
One limitation of this study is the assumption of linear relationships between instructional approaches, learning behaviors, competence development, and academic achievement. In practice, some of these relationships may be non-linear. For instance, excessive class participation or data-processing effort might reach a point of diminishing returns. Future research may explore more flexible models, such as threshold models or polynomial SEM, to better capture these dynamics.
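A minimal way to probe such diminishing returns is to compare a linear fit with a quadratic fit of achievement on a single behavior. The sketch below uses simulated data with a deliberately concave relationship; the variables and coefficients are hypothetical, not estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: participation helps up to a point, then returns diminish (concave).
participation = rng.uniform(0.0, 10.0, 102)
score = 60 + 6 * participation - 0.4 * participation**2 + rng.normal(0.0, 2.0, 102)

# Compare a linear fit with a quadratic fit of score on participation.
lin = np.polyfit(participation, score, deg=1)
quad = np.polyfit(participation, score, deg=2)

rss_lin = np.sum((score - np.polyval(lin, participation)) ** 2)
rss_quad = np.sum((score - np.polyval(quad, participation)) ** 2)

# A negative leading coefficient indicates the concave, diminishing-returns shape.
print(f"quadratic term: {quad[0]:.2f}, RSS linear: {rss_lin:.1f}, RSS quadratic: {rss_quad:.1f}")
```

A significantly negative quadratic term in such a model would signal a threshold beyond which additional engagement no longer translates into higher scores.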
These course-embedded performance measures serve as proxies for competence development and may not fully capture students’ actual competencies. Future studies should incorporate standardized, validated assessment tools, such as independent rubrics, situational tasks, or competence questionnaires, to further validate these indicators.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/su17178059/s1, Table S1: Representative Prompt Templates; Table S2: Parameter Values of Models with 1~5 Categories (Latent Profile Analysis); Table S3: Mapping from Results to Evidence and Instructional Implications; Table S4: Variance Inflation Factor (VIF) values for competence indicators in the regression models.

Author Contributions

Conceptualization, Y.W. and H.D.; methodology, Y.W. and R.L.; formal analysis, Y.W. and W.L.; data curation, R.L. and W.L.; writing—original draft preparation, Y.W.; writing—review and editing, H.D.; supervision, H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, as it involved non-invasive educational interventions with anonymized data and posed minimal risk to participants.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Al-Zahrani, A.M.; Alasmari, T.M. Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanit. Soc. Sci. Commun. 2024, 11, 912. [Google Scholar] [CrossRef]
  2. Dai, C.P.; Ke, F. Educational applications of artificial intelligence in simulation-based learning: A systematic mapping review. Comput. Educ. Artif. Intell. 2022, 3, 100087. [Google Scholar] [CrossRef]
  3. Chen, L.; Chen, P.; Lin, Z. Artificial intelligence in education: A review. IEEE Access 2020, 8, 75264–75278. [Google Scholar] [CrossRef]
  4. Li, F.Y.; Hwang, G.J.; Chen, P.Y.; Lin, Y.-J. Effects of a concept mapping-based two-tier test strategy on students’ digital game-based learning performances and behavioral patterns. Comput. Educ. 2021, 173, 104293. [Google Scholar] [CrossRef]
  5. Lei, H.J.; Wang, H.J.; Fu, J.F. Reformations on Talent Cultivation System of Water Conservancy Engineering Based on the Emerging Engineering Concept. Adv. Educ. 2020, 10, 740–744. (In Chinese) [Google Scholar] [CrossRef]
  6. Liu, F.; Chen, X.; Fu, S.; Wang, Y.; Li, G.; Wu, J. Analysis of Subject Knowledge Structure and Practical Teaching Design for Graduate Degree Training in Civil and Hydraulic Engineering Based on Competency Objectives. In Proceedings of the 2024 4th International Conference on Education, Language and Art (ICELA 2024), Beijing, China, 22–24 November 2024; Springer Nature: Berlin/Heidelberg, Germany, 2025; Volume 907, pp. 94–106. [Google Scholar]
  7. Yang, Y.Y.; Xiong, H.X.; Huang, Q.; Wei, F. Study on Collaborative-Innovative Training Mode toward New Engineering for Professional Master’s Degree in Water Conservancy Engineering. Adv. Educ. 2022, 12, 3420. (In Chinese) [Google Scholar] [CrossRef]
  8. Prince, M. Does active learning work? A review of the research. J. Eng. Educ. 2004, 93, 223–231. [Google Scholar] [CrossRef]
  9. Kerimbayev, N.; Umirzakova, Z.; Shadiev, R.; Jotsov, V. A student-centered approach using modern technologies in distance learning: A systematic review of the literature. Smart Learn. Environ. 2023, 10, 61. [Google Scholar] [CrossRef]
  10. Fan, L.; Deng, K.; Liu, F. Educational impacts of generative artificial intelligence on learning and performance of engineering students in China. Sci. Rep. 2025, 15, 26521. [Google Scholar] [CrossRef]
  11. Merino-Campos, C. The impact of artificial intelligence on personalized learning in higher education: A systematic review. Trends High. Educ. 2025, 4, 17. [Google Scholar] [CrossRef]
  12. Zhai, X.; Chu, X.; Chai, C.S.; Jong, M.S.Y.; Istenic, A.; Spector, M.; Liu, J.-B.; Yuan, J.; Li, Y.; Cai, N. A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity 2021, 1, 8812542. [Google Scholar] [CrossRef]
  13. Huang, A.Y.Q.; Lu, O.H.T.; Yang, S.J.H. Effects of artificial Intelligence–Enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Comput. Educ. 2023, 194, 104684. [Google Scholar] [CrossRef]
  14. Brusilovsky, P.; Millan, E. User models for adaptive hypermedia and adaptive educational systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 3–53. [Google Scholar]
  15. Chen, X.; Xie, H.; Zou, D.; Hwang, G.-J. Application and theory gaps during the rise of artificial intelligence in education. Comput. Educ. Artif. Intell. 2020, 1, 100002. [Google Scholar] [CrossRef]
  16. Baker, R.S.; Martin, T.; Rossi, L.M. Educational data mining and learning analytics. In The Wiley Handbook of Cognition and Assessment: Frameworks, Methodologies, and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2016; pp. 379–396. [Google Scholar]
  17. Jin, S.H.; Im, K.; Yoo, M.; Roll, I.; Seo, K. Supporting students’ self-regulated learning in online learning using artificial intelligence applications. Int. J. Educ. Technol. High. Educ. 2023, 20, 37. [Google Scholar] [CrossRef]
  18. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press: Cambridge, MA, USA, 1978. [Google Scholar]
  19. Freeman, S.; Eddy, S.L.; McDonough, M.; Smith, M.K.; Okoroafor, N.; Jordt, H.; Wenderoth, M.P. Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. USA 2014, 111, 8410–8415. [Google Scholar] [CrossRef] [PubMed]
  20. Broadbent, J.; Poon, W.L. Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. Internet High. Educ. 2015, 27, 1–13. [Google Scholar] [CrossRef]
  21. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  22. Ma, X.R. Influence study of learners’ independent learning ability on learning performance in online learning. Int. J. Emerg. Technol. Learn. 2022, 17, 201–213. [Google Scholar] [CrossRef]
  23. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  24. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
  25. Wang, H.; Dang, A.; Wu, Z.; Mac, S. Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Comput. Educ. Artif. Intell. 2024, 7, 100326. [Google Scholar] [CrossRef]
  26. Hwang, G.J.; Chang, C.Y. A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. 2023, 31, 4099–4112. [Google Scholar] [CrossRef]
  27. Arya, R.; Verma, A. Role of artificial intelligence in education. Int. J. Adv. Res. Sci. Commun. Technol. 2024, 4, 589–594. [Google Scholar] [CrossRef]
  28. Zhang, R.; Chen, Y. What can multi-factors contribute to Chinese EFL learners’ implicit L2 knowledge? Int. Rev. Appl. Linguist. Lang. Teach. 2024. [Google Scholar] [CrossRef]
  29. Li, Y.K.; Xiao, C.L.; Ren, H.; Li, W.-R.; Guo, Z.; Luo, J.-Q. Unraveling the effectiveness of new media teaching strategies in pharmacology education under different educational backgrounds: Insights from 6447 students. Eur. J. Pharmacol. 2025, 989, 177255. [Google Scholar] [CrossRef] [PubMed]
  30. Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
  31. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  32. Khatri, B.B.; Karki, P.D. Artificial intelligence (AI) in higher education: Growing academic integrity and ethical concerns. Nepal. J. Dev. Rural. Stud. 2023, 20, 1–7. [Google Scholar] [CrossRef]
  33. Tshering, G. Interrogating Pedagogical Modalities: An In-depth Examination of the Lecture Method in Higher Education. Int. J. High. Educ. Pedagog. 2024, 5, 65–85. [Google Scholar] [CrossRef]
  34. Martin, D.A.; Conlon, E.; Bowe, B. Using case studies in engineering ethics education: The case for immersive scenarios through stakeholder engagement and real life data. Australas. J. Eng. Educ. 2021, 26, 47–63. [Google Scholar] [CrossRef]
  35. Chen, X.; Zou, D.; Xie, H.; Cheng, G. Twenty years of personalized language learning. Educ. Technol. Soc. 2021, 24, 205–222. [Google Scholar]
  36. Bond, M.; Buntins, K.; Bedenlier, S.; Zawacki-Richter, O.; Kerres, M. Mapping research in student engagement and educational technology in higher education: A systematic evidence map. Int. J. Educ. Technol. High. Educ. 2020, 17, 2. [Google Scholar] [CrossRef]
  37. Zain, F.M.; Sailin, S.N.; Mahmor, N.A. Promoting higher order thinking skills among pre-service teachers through group-based flipped learning. Int. J. Instr. 2022, 15, 519–542. [Google Scholar] [CrossRef]
  38. Sun, D.; Boudouaia, A.; Zhu, C.; Li, Y. Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. Int. J. Educ. Technol. High. Educ. 2024, 21, 14. [Google Scholar] [CrossRef]
  39. Sánchez-Ruiz, L.M.; Moll-López, S.; Nuñez-Pérez, A.; Moraño-Fernández, J.A.; Vega-Fleitas, E. ChatGPT challenges blended learning methodologies in engineering education: A case study in mathematics. Appl. Sci. 2023, 13, 6039. [Google Scholar] [CrossRef]
  40. Hu, M.; Assadi, T.; Mahroeian, H. Explicitly introducing chatgpt into first-year programming practice: Challenges and impact. In Proceedings of the 2023 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Auckland, New Zealand, 27 November–1 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  41. Biggs, J. Enhancing teaching through constructive alignment. High. Educ. 1996, 32, 347–364. [Google Scholar] [CrossRef]
  42. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Into Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  43. Seo, K.; Tang, J.; Roll, I.; Fels, S.; Yoon, D. The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 2021, 18, 54. [Google Scholar] [CrossRef]
  44. Muthén, B.; Muthén, L.K. Integrating person-centered and variable-centered analyses: Growth mixture modeling with latent trajectory classes. Alcohol. Clin. Exp. Res. 2000, 24, 882–891. [Google Scholar] [CrossRef]
  45. Hair, J.F. Multivariate data analysis: An overview. In International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 904–907. [Google Scholar]
  46. Yan, J.; Yang, H.; Niu, J.; Chen, Y. Smart Teaching Reform and Practice of Flipped Classroom in Culture Geography Course Based on Chaoxing Learning Platform. J. Educ. Learn. 2022, 11, 103–110. [Google Scholar] [CrossRef]
  47. Chen, Y.; Zhao, X.; Tserenchimed, A. The Impact of Chaoxing Online Learning Platform on the Engagement of English Majors’ Course Learning. In Proceedings of the 3rd International Conference on Educational Innovation and Multimedia Technology, EIMT 2024, Wuhan, China, 29–31 March 2024. [Google Scholar]
  48. Li, Y.G.; Tang, S.P.; Ma, K.; Liu, G. Exploration on the Training Mode under the Background of Engineering Education Certification. Creat. Educ. Stud. 2025, 13, 496. [Google Scholar] [CrossRef]
  49. Wu, R.; Yu, Z. Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Br. J. Educ. Technol. 2024, 55, 10–33. [Google Scholar] [CrossRef]
  50. Deng, R.; Jiang, M.; Yu, X.; Lu, Y.; Liu, S. Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Comput. Educ. 2025, 227, 105224. [Google Scholar] [CrossRef]
  51. Alemdag, E. The effect of chatbots on learning: A meta-analysis of empirical research. J. Res. Technol. Educ. 2025, 57, 459–481. [Google Scholar] [CrossRef]
  52. Rodrigues, L.; Pereira, F.D.; Toda, A.M.; Palomino, P.T.; Pessoa, M.; Carvalho, L.S.G.; Fernandes, D.; Oliveira, E.H.T.; Cristea, A.I.; Isotani, S. Gamification suffers from the novelty effect but benefits from the familiarization effect: Findings from a longitudinal study. Int. J. Educ. Technol. High. Educ. 2022, 19, 13. [Google Scholar] [CrossRef]
  53. Hwang, G.J.; Lai, C.L.; Wang, S.Y. Seamless flipped learning: A mobile technology-enhanced flipped classroom with effective learning strategies. J. Comput. Educ. 2015, 2, 449–473. [Google Scholar] [CrossRef]
  54. Hattie, J. Visible learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement; Routledge: Oxfordshire, UK, 2008. [Google Scholar]
  55. Bekkering, E.; Ward, T. Class Participation and Student Performance: A Follow-Up Study. Inf. Syst. Educ. J. 2021, 19, 77–91. [Google Scholar]
  56. Cohn, E.; Johnson, E. Class attendance and performance in principles of economics. Educ. Econ. 2006, 14, 211–233. [Google Scholar] [CrossRef]
  57. Khalid, M.; Bashir, S.; Amin, H. Relationship between Self-Directed Learning (SDL) and Academic Achievement of University Students: A Case of Online Distance Learning and Traditional Universities. Bull. Educ. Res. 2020, 42, 131–148. [Google Scholar]
  58. Garg, A.; Soodhani, K.N.; Rajendran, R. Enhancing data analysis and programming skills through structured prompt training: The impact of generative AI in engineering education. Comput. Educ. Artif. Intell. 2025, 8, 100380. [Google Scholar] [CrossRef]
  59. Hwang, G.J.; Fu, Q.K. Trends in the research design and application of mobile language learning: A review of 2007–2016 publications in selected SSCI journals. Interact. Learn. Environ. 2019, 27, 567–581. [Google Scholar] [CrossRef]
  60. Selwyn, N. Is Technology Good for Education? John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  61. Castillo, L. Examination of AI and Conventional Teaching Approaches in Cultivating Critical Thinking Skills in High School Students. J. Syst. Cybern. Inform. 2024, 22, 109–112. [Google Scholar] [CrossRef]
  62. Castillo, L.; Saavedra, R. A Comparative Analysis between Artificial Intelligence and Traditional Learning Methods in Developing Critical Thinking Skills among Secondary Education Students in Peru. In Proceedings of the 28th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2024), Virtual, 10–13 September 2024; International Institute of Informatics and Systemics: Winter Garden, FL, USA, 2024; pp. 200–202. [Google Scholar]
  63. Chiang, M.T.; Chang, Y.C.; Yu, H.C. Comparison of Problem-solving Skills in the Traditional Face-to-Face Classroom and Online Learning in Postgraduate Courses of Education Management. Open Psychol. J. 2024, 17, e18743501339518. [Google Scholar] [CrossRef]
Figure 1. The overall research framework.
Figure 2. Effects of Different Instructional Approaches on Students’ Multidimensional Performance. (a) Midterm Report Score (combined assessment of (b–d)), (b) Self-directed Learning Ability, (c) Comprehensive Analytical Ability, (d) Practical Problem-solving Ability, (e) PPT Presentation Score (combined assessment of (f–h)), (f) Data Processing Ability, (g) Comprehensive Analytical Ability (PPT), (h) Practical Problem-solving Ability (PPT), (i) Final Exam Score (combined assessment of (j–l)), (j) Fundamental Theoretical Knowledge, (k) Comprehensive Thinking, (l) Practical Application, (m) Number of Classroom Interactions, (n) Learning Efficiency (combined assessment of (o,p)), (o) Post-class Assignment Scores, (p) Post-class Assignment Completion Time. Statistical significance: * 0.01 < p < 0.05, ** 0.001 < p < 0.01, *** p < 0.001.
Figure 3. Correlation Analysis Between Learning Behaviors, Competence Development, and Final Exam Scores Under Two Instructional Approaches. (a–f) show correlations between Final Exam Score and (a) Classroom Participation, (b) Learning Efficiency, (c) Self-directed Learning Ability, (d) Data Processing Ability, (e) Comprehensive Analytical Ability, and (f) Practical Problem-solving Ability.
Figure 4. Multivariate Regression of Learning Behaviors and Competence Development on Final Exam Scores Under Different Instructional Approaches. (a) Traditional Instruction, (b) AI-assisted Instruction. The variance inflation factor (VIF) values for the competence indicators ranged from 1.212 to 2.290 in the traditional instruction group, and from 1.091 to 1.278 in the AI-assisted instruction group (Supplementary Table S4).
Figure 5. (a) Partial Least Squares Path Model (PLS-PM) depicting the influence of Instructional Approach on Learning Behaviors, Competence Development, and Academic Achievement. Black arrows represent positive effects; red arrows represent negative effects. Numbers denote path coefficients, and arrow width is proportional to effect strength. R2 values indicate explained variance. Significance levels: *** p < 0.001; ** p < 0.01. (b) Decomposition of direct, indirect, and total effects of Instructional Approach on Competence Development and Academic Achievement. Model Goodness of Fit (GOF) = 0.40.
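The effect decomposition in Figure 5b follows the standard path-analysis rule: the total effect of one construct on another equals the direct path coefficient plus the sum, over every indirect route, of the products of coefficients along that route. A sketch with hypothetical coefficients for the chain described in the abstract (Instruction → Learning Behaviors → Competence Development → Academic Achievement); the numbers are illustrative, not the model's estimates:

```python
# Hypothetical path coefficients (illustrative values only).
coeffs = {
    ("Instruction", "Behaviors"): 0.50,
    ("Behaviors", "Competence"): 0.60,
    ("Competence", "Achievement"): 0.70,
    ("Instruction", "Achievement"): 0.10,  # direct path
}

def total_effect(src, dst, edges):
    """Sum of coefficient products over all directed paths src -> dst
    in an acyclic path diagram."""
    if src == dst:
        return 1.0
    return sum(w * total_effect(b, dst, edges)
               for (a, b), w in edges.items() if a == src)

direct = coeffs.get(("Instruction", "Achievement"), 0.0)
total = total_effect("Instruction", "Achievement", coeffs)
indirect = total - direct
print(f"direct={direct:.2f}, indirect={indirect:.2f}, total={total:.2f}")
```

With these toy values the indirect effect (0.50 × 0.60 × 0.70 = 0.21) dominates the direct effect (0.10), mirroring the paper's finding that AI-assisted instruction improves achievement mainly through learning behaviors and competence development.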
Figure 6. Latent profile classification of students based on 11 standardized indicators. The x-axis corresponds to the 11 indicators derived from Midterm Report Scores, PPT Presentation Scores, Final Exam Scores, Learning Efficiency, Classroom Participation, etc., as defined in Table 3. The y-axis represents standardized scores for each indicator across the four latent profiles.
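The standardized scores on the y-axis of Figure 6 are ordinary z-scores: each of the 11 indicators is centered on its sample mean and scaled by its sample standard deviation before profiles are estimated (the profile estimation itself, typically a Gaussian-mixture model, is not reproduced here). A minimal sketch with a random indicator matrix sized like the study's sample:

```python
import numpy as np

def standardize(X):
    """Column-wise z-scores: (x - mean) / std for each indicator."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(1)
scores = rng.uniform(50, 100, size=(102, 11))  # 102 students x 11 indicators
Z = standardize(scores)
print(Z.mean(axis=0).round(6), Z.std(axis=0).round(6))
```

After standardization every indicator has mean 0 and standard deviation 1, so profile means above or below zero in Figure 6 can be read directly as "above average" or "below average" on that indicator.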
Table 1. Instructional Design of the AI-assisted Group.
Phase | Weeks | Instructional Content and Teaching Methods
Fundamental Theoretical Knowledge Phase | Weeks 1–2 | Instructor lectures on the fundamentals of AI, including basic principles, tool typologies, and application scenarios.
Fundamental Theoretical Knowledge Phase | Weeks 3–7 | Instructor teaches core course theories; students utilize AI tools to construct knowledge graphs.
Literature Review Phase | Weeks 8–9 | Students employ AI tools for literature search and analysis; they identify key terms and engage in in-depth discussions based on domestic and international research.
Data Analysis and Problem Diagnosis Phase | Weeks 10–12 | Instructor introduces AI-based methods for data analysis and problem diagnosis; students conduct independent analyses using new datasets and refine diagnostic results through inquiry.
Engineering Solution Design Phase | Weeks 13–15 | Instructor guides students in leveraging AI tools to design river regulation and ecological restoration plans and assess their feasibility.
Synthesis and Review Phase | Week 16 | Students independently construct course knowledge systems using AI tools, with instructor support in refining and optimizing their knowledge networks.
Table 2. Instructional Design of the Traditional Group.
Phase | Weeks | Instructional Content and Teaching Methods
Fundamental Theoretical Knowledge Phase | Weeks 1–7 | Instructor delivers theoretical knowledge through lectures, supplemented by in-class questioning and group discussion.
Literature Review Phase | Weeks 8–9 | Students search and review literature using Web of Science and CNKI; instructor introduces current research progress.
Data Analysis and Problem Diagnosis Phase | Weeks 10–12 | Based on real-world river project data, instructor teaches methods for analyzing different types of data and diagnosing associated problems.
Engineering Plan Development Phase | Weeks 13–15 | Instructor presents current engineering solutions for river regulation and ecological restoration.
Synthesis and Review Phase | Week 16 | Students engage in course summary and review activities, culminating in the formation of a structured knowledge framework.
Table 3. Course Evaluation Indicators.
Evaluation Category | Evaluation Indicators (Code) | Competence Type
Midterm Report Score | Self-directed Learning Ability (X11) | Learning Autonomy
Midterm Report Score | Comprehensive Analytical Ability (X12) | Logical Analysis
Midterm Report Score | Practical Problem-solving Ability (X13) | Applied Practice
PPT Presentation Score | Data Processing Ability (X21) | Data Processing
PPT Presentation Score | Comprehensive Analytical Ability (X22) | Logical Analysis
PPT Presentation Score | Practical Problem-solving Ability (X23) | Applied Practice
Final Exam Score | Fundamental Theoretical Knowledge (X31) | Theoretical Mastery
Final Exam Score | Comprehensive Thinking (X32) | Logical Analysis
Final Exam Score | Practical Application (X33) | Applied Practice
Learning Efficiency | Post-class Assignment Scores and Completion Time (X4) | Learning Autonomy
Classroom Participation | Number of Classroom Interactions (X5) | Learning Autonomy
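For analysis scripts, the coding scheme in Table 3 maps naturally onto a lookup table; a sketch (the data structure is illustrative, not taken from the authors' code):

```python
from collections import defaultdict

# Indicator code -> (evaluation category, competence type), per Table 3.
INDICATORS = {
    "X11": ("Midterm Report Score", "Learning Autonomy"),
    "X12": ("Midterm Report Score", "Logical Analysis"),
    "X13": ("Midterm Report Score", "Applied Practice"),
    "X21": ("PPT Presentation Score", "Data Processing"),
    "X22": ("PPT Presentation Score", "Logical Analysis"),
    "X23": ("PPT Presentation Score", "Applied Practice"),
    "X31": ("Final Exam Score", "Theoretical Mastery"),
    "X32": ("Final Exam Score", "Logical Analysis"),
    "X33": ("Final Exam Score", "Applied Practice"),
    "X4":  ("Learning Efficiency", "Learning Autonomy"),
    "X5":  ("Classroom Participation", "Learning Autonomy"),
}

# Group the 11 indicator codes by competence type.
by_type = defaultdict(list)
for code, (_, ctype) in INDICATORS.items():
    by_type[ctype].append(code)
print(dict(by_type))
```

Grouping by competence type makes it easy to aggregate the raw scores into the latent constructs (e.g. Learning Autonomy spans X11, X4, and X5) used in the regression and path analyses.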
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wan, Y.; Li, R.; Li, W.; Du, H. Impact Pathways of AI-Supported Instruction on Learning Behaviors, Competence Development, and Academic Achievement in Engineering Education. Sustainability 2025, 17, 8059. https://doi.org/10.3390/su17178059
