1. Introduction
With their rapid advancement, artificial intelligence (AI) technologies have become deeply integrated into a wide range of industries. Education, as a cornerstone of knowledge dissemination and competence cultivation, is undergoing a profound integration with AI [
1]. In China, various AI-powered instructional platforms have been widely adopted in higher education, including iFLYTEK’s Zhixue (Zhixue.com), Tsinghua University’s Rain Classroom, and the XuetangX MOOC platform. Zhixue is a smart-education platform that provides AI-assisted homework marking, learning-analytics dashboards, and personalized learning recommendations, enabling real-time formative assessment and precision teaching. Rain Classroom, jointly developed by Tsinghua University and XuetangX, is a plug-in toolkit that integrates Microsoft PowerPoint with the WeChat mobile app to connect pre-class, in-class, and after-class activities (e.g., pushing materials, in-slide quizzes and randomized roll-call, interaction, and analytics). The integration of AI promises to address several longstanding challenges in higher education, for instance, enabling real-time, intelligent feedback that overcomes the delay inherent in traditional instructional evaluation systems [
2], and supporting personalized learning trajectories that illuminate and enhance the mechanisms underlying Competence Development [
3]. This competence-oriented philosophy emphasizes not only the mastery of skills and knowledge but also the multidimensional development of learning behaviors, affective engagement, and academic achievement, thereby fostering a more holistic learning orientation. As such, AI-based instruction is reshaping not only how educators teach but also how students learn, providing a robust technological foundation for more effective learning interventions and for competence- and performance-oriented educational reform [
4].
As a representative discipline in engineering education, the Hydraulic Engineering major is characterized by strong theoretical foundations, high demands for practical application, interdisciplinary knowledge integration, and complex data analysis [
5]. Students are expected not only to acquire foundational theories from courses such as hydraulics, hydrology, and aquatic ecology, but also to develop the ability to conduct Comprehensive Analysis and solve multifaceted, real-world problems in water engineering contexts [
6]. The New Engineering Education Initiative is an engineering education reform movement centered on competence development. It emphasizes interdisciplinary integration, innovative practice, and the application of modern technologies, with the aim of cultivating versatile engineering professionals who can adapt to the future needs of society and industry. In light of the ongoing “New Engineering” reform in China, the focus of undergraduate training in this field is shifting from knowledge transmission to competence enhancement, with particular emphasis on Self-directed Learning Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability [
7]. However, many courses still center on theoretical lectures and case analyses, in which instructors transmit professional knowledge and develop students’ practical skills through teacher-led engineering case training. This approach leaves students dependent on instructors’ expertise and offers insufficient opportunities for independent exploration. Additionally, assessments tend to emphasize knowledge recall and standardized problem-solving while placing less emphasis on cultivating innovation and adaptability to complex engineering problems, making it difficult to meet the New Engineering Education Initiative’s goal of developing versatile talent [
8]. Furthermore, assessment systems often overemphasize knowledge acquisition rather than competence development, rote memorization rather than Applied Practice, and outcomes rather than learning processes [
5]. These limitations hinder the development of well-rounded graduates with strong innovation capacity and problem-solving skills. There is an urgent need for innovative educational technologies that can support a pedagogical shift from teacher-centered to student-centered instruction and from knowledge-driven to competence-driven education [
9]. AI-assisted teaching, with its data-driven foundation and capacity for personalized support, offers a promising approach to addressing the challenges of insufficient process guidance and the implicit nature of competence development in engineering education [
10,
11].
In recent years, a growing body of domestic and international research has explored the role of AI in education, yielding encouraging findings. Studies have demonstrated that AI can enhance students’ learning motivation and engagement through intelligent feedback systems and personalized pathway recommendations [
12]. For example, Huang et al. (2023) found that AI-driven conversational systems improved the quality of classroom interactions and heightened student focus [
13]. Similarly, Brusilovsky et al. (2007) emphasized the potential of learner modeling to enable precise content delivery and strategic interventions that optimize Learning Behaviors [
14]. Moreover, Chen et al. (2020) highlighted the positive impact of AI instruction on the development of higher-order competencies, such as critical thinking and the ability to navigate complex problems [
15]. Nonetheless, most existing studies have focused on the direct relationship between Instructional Approach and learning outcomes, while paying insufficient attention to the mediating role of Learning Behaviors in fostering Competence Development [
16].
To address this gap, the present study employs a longitudinal path analysis framework that integrates Instructional Approach, Learning Behaviors, Competence Development, and Academic Achievement. This approach allows us to identify not only whether AI-assisted instruction is effective but also how it works, by revealing the intermediary role of learning behaviors in shaping competence outcomes, particularly within engineering education, where such process-focused empirical evidence remains limited. The goal is to uncover the mechanisms through which AI-assisted instruction shapes educational outcomes, transforming learning processes rather than merely altering results. The findings aim to provide theoretical support for pedagogical innovation in applied disciplines such as Hydraulic Engineering.
To guide the investigation, this study addresses the following research questions:
RQ1: How do AI-assisted and traditional instruction differentially influence students’ learning behaviors, competence development, and academic achievement?
RQ2: How do learning behaviors and competence development jointly mediate the effect of AI-assisted versus traditional instruction on students’ academic achievement?
The study is guided by the following hypotheses:
- (1)
The two Instructional Approaches exhibit distinct strengths in promoting different dimensions of Competence Development.
- (2)
AI enhances Academic Achievement indirectly by shaping students’ Learning Behaviors, which in turn foster Competence Development.
3. Methodology
3.1. Participants
The participants in this study comprised 102 third-year undergraduate students majoring in Hydraulic Engineering. Among them, 49 students were assigned to the AI-assisted instruction group, while 53 were placed in the traditional instruction group. To minimize selection bias and ensure baseline equivalence, students were randomly allocated to the two groups. All participants belonged to the same academic cohort and shared comparable disciplinary backgrounds and levels of foundational knowledge, thereby reducing potential confounding from individual differences.
The instructional intervention spanned a period of 16 weeks, with two class sessions per week. Throughout the experiment, all students received identical course content; the only variation lay in the Instructional Approach employed.
3.2. Research Design
This study adopted a posttest-only control group design, a common variant of randomized controlled trial methodology. Outcome measures and group comparisons were conducted upon completion of the instructional intervention. To avoid influencing students’ learning behaviors or instructional activities, the study was designed without a pretest. Although participants were restricted to the same academic program and randomly assigned to groups to enhance comparability, the absence of a pretest introduces a potential risk of initial group differences.
The AI-assisted instruction group integrated various AI-based tools into the learning process. In both classroom and extracurricular activities, students in the AI-assisted group were encouraged to use ERNIE Bot (ERNIE 4.0) and DeepSeek (General Model, V3) for information retrieval, problem-solving, and solution design. These tools were accessed through their official web portals with default platform settings. Usage policies required students to disclose AI assistance in their submissions, paraphrase and integrate outputs into their own reasoning, and avoid uploading any personally identifiable information. AI use was permitted for formative tasks such as the midterm report and PPT presentation under these controls, while closed-book final exams were completed without AI support. Prompt templates for literature review, data analysis, problem diagnosis, and engineering solution design are compiled in
Supplementary Table S1. In contrast, the traditional instruction group adhered to traditional pedagogical methods and did not engage with any AI tools. Assignments were completed independently using standard resources, such as textbooks, lecture notes, and library databases. Instruction was delivered through lectures, instructor-led questioning, and in-class discussions.
This study compared two instructional approaches, AI-assisted and traditional instruction, within the course River Regulation and Ecological Restoration, conducted at Chongqing Jiaotong University during the Spring semester of 2025. The course aims to enable students to master the theories and methods of river hydrodynamic characteristics, river regulation, and ecological restoration, and to develop core professional competencies such as hydrological data analysis, river design scheme evaluation, and engineering feasibility analysis. Multiple assessment methods, including the Midterm Report, PPT Presentation, Post-class Assignments, Number of Classroom Interactions, and the Final Exam, were employed to examine changes in students’ Learning Behaviors, such as Learning Efficiency and Classroom Participation. The study further investigated how these behavioral changes influenced the development of students’ Self-directed Learning Ability, Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability. Finally, the impact of the two Instructional Approaches on students’ Final Exam Scores was evaluated across three dimensions: Fundamental Theoretical Knowledge, Comprehensive Thinking, and Practical Application. The overall design of this study follows the logical pathway illustrated in
Figure 1. The Instructional Approach (AI-assisted and traditional teaching) first acts upon students’ Learning Behaviors, which in turn promotes Competence Development, ultimately influencing Academic Achievement. This pathway relationship provides the theoretical basis and methodological foundation for the subsequent statistical analyses.
3.3. Instructional Implementation
The instructional implementation was designed in accordance with the distinct Instructional Approaches employed in the AI-assisted and traditional teaching groups. While the two groups received equivalent coverage of core content, the AI-assisted group emphasized the integration and application of AI tools throughout the learning process. In this study, AI-assisted instruction refers to the systematic integration of AI tools into specific stages of the course (data analysis, instant feedback, personalized resource recommendations) to support students’ learning behaviors, competence development, and academic achievement. The detailed instructional arrangements for each phase are presented in
Table 1 and
Table 2.
3.4. Data Collection
The “Superstar Learning Platform” (Chaoxing Learning Platform;
https://apps.chaoxing.com accessed on 2 September 2025) is a widely adopted commercial cloud-based learning management system (LMS) and mobile learning platform in China, developed and operated by Beijing Superstar Co., Ltd. It is not a locally developed tool but a third-party, subscription-based service commonly licensed by universities. The platform integrates features for course management, live streaming, video playback, assignment distribution and collection, online quizzes, gradebooks, and detailed learning analytics dashboards. Its ability to automatically record granular student behavioral data (e.g., login frequency, video watching time, assignment completion time, interaction counts) makes it a valuable tool for educational research. The validity and reliability of data exported from the Superstar platform for research purposes have been established in numerous previous studies in the Chinese context [
46,
47].
Data were collected using the export functions of the “Superstar Learning Platform”, covering Midterm Report Scores, PPT Presentation Scores, Final Exam Scores, Learning Efficiency, and Classroom Participation. Participation metrics were double-checked by trained teaching assistants. All collected data were reviewed and compiled by the instructor to ensure accuracy and consistency. These outcomes were categorized by competence type, as detailed in
Table 3. All competence indicators in this study were derived from the course syllabus, intended learning outcomes, and widely adopted assessment practices in engineering education, aligned with national engineering education accreditation standards [
48]. All assessment criteria were communicated to students at the start of the semester.
Midterm Report Score: In Week 9, students submitted an individual research report, scored out of 100 points. The evaluation criteria included: whether students reviewed recent domestic and international literature (30%), their understanding of the strengths and limitations of current research progress (30%), and their grasp of various technological application scenarios (40%). These components were used to assess Self-directed Learning Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability, respectively.
PPT Presentation Score: In Week 15, students delivered a PPT presentation evaluated on a 100-point scale. The evaluation focused on three aspects: whether students organized and analyzed aquatic habitat data (30%), whether they identified ecological issues in river systems (30%), and whether they proposed feasible technical solutions (40%). These were used to evaluate students’ Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability, respectively.
Final Exam Score: The final exam was a closed-book assessment worth 100 points. The Fundamental Theoretical Knowledge Section (40 points) included multiple-choice, true/false, and fill-in-the-blank questions, assessing students’ mastery of basic concepts and theories. The Comprehensive Thinking Section (30 points) involved multiple-choice and short-answer questions, evaluating students’ capacity for comparing, filtering, and integrating information. The Practical Application Section (30 points) consisted of comprehensive problem-solving tasks, measuring students’ ability to analyze real-world issues and develop solutions.
Learning Efficiency: Learning Efficiency was measured using statistics from the “Superstar Learning Platform”, which recorded students’ Post-class Assignment Scores and Completion Time. A total of eight assignments were administered throughout the semester, each worth 50 points.
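Although the platform exports both quantities, the metric itself is best read as a per-student ratio (a plausible formalization; the exact aggregation rule is not documented):

Learning Efficiency_i = S_i / T_i (points per hour),

where S_i is student i’s total Post-class Assignment Score (out of a possible 400 across the eight assignments) and T_i is the corresponding total Completion Time in hours. The group-level efficiency figures reported in Section 4.1 are then presumably averages of these per-student ratios.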
Classroom Participation: Classroom Participation was quantified based on the Number of Classroom Interactions, recorded weekly by teaching assistants. Interactions included asking questions, responding to prompts, and participating in discussions.
All submitted assignments and deliverables were subjected to three layers of originality checking: (1) Cross-Checking among Submissions, using Turnitin Feedback Studio v3.5 (Advance Publications Inc.). (2) External Source Plagiarism Detection, using Academic Misconduct Literature Detection System (AMLC) V6.3 (Tongfang Knowledge Network Technology Co., Ltd.). (3) AI-Generated Text Identification, using GPTZero v2.8 (GPTZero Technologies Inc.).
Prior to data collection, assistants completed a training session led by the course instructor, which covered the operational definitions, examples and non-examples of qualifying behaviors, procedures for tallying interactions in real time, and the use of the Superstar platform to cross-check student identification.
3.5. Analysis Methods
To comprehensively address the two research questions (RQ1 and RQ2) and test the corresponding hypotheses, a complementary set of data analysis strategies was employed. First, descriptive statistics and independent-samples t-tests were conducted to compare the differences between the AI-assisted and traditional teaching groups across all indicators, providing an answer to RQ1. Subsequently, multivariate regression analysis was used to delve into the predictive effects of learning behaviors and competence development on academic achievement under different instructional modes. To further unveil the causal pathways and mediating mechanisms among variables (RQ2), a partial least squares path model (PLS-PM) was constructed. Finally, latent profile analysis (LPA) was utilized to identify distinct subgroups of students, supplementing our understanding of how instructional approaches differentially impact student development. This multi-method approach aimed to provide a comprehensive evaluation of the effects of AI-assisted instruction from multiple perspectives: mean comparison, predictive relationships, path mechanisms, and individual differences.
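As a minimal illustration of the first step, the group comparisons can be run as two-sample t-tests in R (a sketch only; the data frame df and the columns group and final_exam are assumed names, not the study’s actual variables):

```r
# Welch two-sample t-test comparing the AI-assisted and traditional groups
# on one indicator; R's t.test() applies the Welch correction by default
# (set var.equal = TRUE for the classical Student's t-test).
t.test(final_exam ~ group, data = df)

# Group-wise descriptive statistics for the same indicator
aggregate(final_exam ~ group, data = df,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```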
- (1)
Regression Analysis
To explore the extent to which students’ Learning Behaviors and Competence Development contribute to Final Exam Scores under different Instructional Approaches, six variables were selected as predictors: Learning Efficiency (X4), Number of Classroom Interactions (X5), Self-directed Learning Ability from the Midterm Report (X11), Data Processing Ability from the PPT Presentation (X21), Comprehensive Analytical Ability (X12, X22), and Practical Problem-solving Ability (X13, X23), the latter two drawn from both the Midterm Report and PPT Presentation. Final Exam Scores were first regressed on each indicator separately in simple linear regression models. A multivariate regression model was then constructed that simultaneously included all six predictors, allowing an assessment of their combined explanatory power and individual regression coefficients in relation to Academic Achievement. To assess potential multicollinearity among the predictors, Variance Inflation Factor (VIF) values were calculated using the vif function from the car package in R (version 4.3.2), as sketched below.
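The two-step procedure can be sketched in R as follows (column names are illustrative assumptions; the paper does not specify how the paired indicators X12/X22 and X13/X23 were combined, so simple averaging is assumed here):

```r
library(car)  # provides vif()

# Assumed construct scores; averaging the paired Midterm/PPT indicators
# is an assumption, not the authors' documented procedure.
df$analytical      <- (df$X12 + df$X22) / 2
df$problem_solving <- (df$X13 + df$X23) / 2

predictors <- c("X4", "X5", "X11", "X21", "analytical", "problem_solving")

# Step 1: simple regressions of Final Exam Score on each predictor
simple_fits <- lapply(predictors, function(p)
  summary(lm(reformulate(p, response = "final_exam"), data = df)))

# Step 2: multivariate model with all six predictors
full <- lm(reformulate(predictors, response = "final_exam"), data = df)
summary(full)  # combined explanatory power (R^2) and coefficients
vif(full)      # variance inflation factors for multicollinearity
```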
- (2)
Path Analysis
To investigate the structural pathways through which Instructional Approach influences Learning Behaviors, Competence Development, and ultimately Academic Achievement, a Partial Least Squares Path Modeling (PLS-PM) framework was employed. Instructional Approach was treated as the exogenous variable and operationalized as a binary dummy variable, with the AI-assisted group coded as 1 and the traditional instruction group coded as 0. This variable captured the overall effect of the instructional intervention.
Learning Behaviors were represented by two observed indicators: Learning Efficiency, derived from Post-class Assignment Scores and Completion Time (X4), and Number of Classroom Interactions (X5). Competence Development was assessed using six indicators: Self-directed Learning Ability (X11), Data Processing Ability (X21), Comprehensive Analytical Ability (X12, X22), and Practical Problem-solving Ability (X13, X23). Academic Achievement served as the ultimate outcome variable, measured by students’ Final Exam Scores (total score out of 100).
Model estimation was conducted using R version 4.4.1. A positive path coefficient indicated that the AI-assisted instructional approach yielded superior outcomes in that specific domain relative to the traditional approach. Negative or non-significant coefficients suggested no discernible advantage or a possible reverse effect. The coefficient of determination (R2) was used to evaluate the proportion of variance explained for each endogenous variable. Statistical significance of the path coefficients was determined based on associated p-values.
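A sketch of this specification using the plspm package (one common R implementation; the paper does not name the package, and the variable names below are assumptions) is:

```r
library(plspm)

# Inner model: each row lists which latent variables feed into it.
# This sketch includes all forward paths; the exact inner matrix
# should follow Figure 5a.
path_mat <- rbind(
  Instruction = c(0, 0, 0, 0),
  Behaviors   = c(1, 0, 0, 0),
  Competence  = c(1, 1, 0, 0),
  Achievement = c(1, 1, 1, 0)
)
colnames(path_mat) <- rownames(path_mat)

# Outer model: observed indicators loading on each latent block.
blocks <- list(
  Instruction = "group_dummy",   # 1 = AI-assisted, 0 = traditional
  Behaviors   = c("X4", "X5"),
  Competence  = c("X11", "X21", "X12", "X22", "X13", "X23"),
  Achievement = "final_exam"
)

fit <- plspm(df, path_mat, blocks, modes = rep("A", 4), boot.val = TRUE)
fit$path_coefs   # direct path coefficients
fit$effects      # direct, indirect, and total effects (cf. Figure 5b)
```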
- (3)
Latent Profile Analysis
To further examine student typologies across instructional conditions, a latent profile model was constructed using the 11 evaluation indicators listed in
Table 3 as manifest variables. Model fit was evaluated across one to five latent classes using the Bayesian Information Criterion (BIC), Integrated Completed Likelihood (ICL), and Bootstrap Likelihood Ratio Test (BLRT), as reported in
Supplementary Table S2. Based on model parameter diagnostics, the four-class model demonstrated the best overall fit, with Entropy values of 0.82 and 0.85, indicating high classification accuracy.
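The model-selection workflow can be sketched with the mclust package (one common R implementation of Gaussian mixture/latent profile models; the paper does not state which package was used, and the indicator column names are assumptions based on Table 3):

```r
library(mclust)

# Assumed names for the 11 manifest indicators from Table 3.
ind <- df[, c("X11", "X12", "X13", "X21", "X22", "X23",
              "X31", "X32", "X33", "X4", "X5")]

# Fit diagnostics across 1-5 profiles; diagonal-covariance models ("VVI")
# approximate the classic latent profile specification.
bic  <- mclustBIC(ind, G = 1:5)
icl  <- mclustICL(ind, G = 1:5)
blrt <- mclustBootstrapLRT(ind, modelName = "VVI", maxG = 5)

# Retained four-profile solution
fit4 <- Mclust(ind, G = 4, modelNames = "VVI")

# Relative entropy (1 minus normalized posterior entropy), as commonly
# reported for LPA classification accuracy
z <- pmax(fit4$z, 1e-12)   # posterior class probabilities
entropy <- 1 + sum(z * log(z)) / (nrow(z) * log(ncol(z)))
```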
4. Results
4.1. Score Statistics Under Different Instructional Approaches
To answer research question RQ1 (How do AI-assisted and traditional instruction differentially influence students’ learning behaviors, competence development, and academic achievement?), descriptive statistics and comparisons (independent-samples
t-tests) of the scores on various assessment indicators between the two groups were first conducted. The statistical outcomes for each student’s Learning Behaviors, Competence Development scores, and Final Exam Scores across the AI-assisted and traditional instruction groups are presented in
Figure 2 and
Supplementary Table S3.
In terms of the Midterm Report Score, students in the AI-assisted group outperformed their counterparts in the traditional group, with a significantly higher mean score (75.9 vs. 73.2,
p < 0.01,
Figure 2a). This suggests that stage-based instructional design supported by AI tools enhanced the quality of students’ intermediate deliverables. Among the specific evaluation dimensions, the AI-assisted group demonstrated significantly higher performance in Comprehensive Analytical Ability (21.3 vs. 19.5,
p < 0.01,
Figure 2c). However, students in the traditional group performed slightly better in Practical Problem-solving Ability (32.8 vs. 32.1,
p < 0.05,
Figure 2d). There was no significant difference between the two groups in terms of Self-directed Learning Ability scores (
Figure 2b).
Regarding the PPT Presentation Score, although a few students in the traditional group achieved exceptionally high scores, the AI-assisted group still showed a significantly higher average score overall (77.7 vs. 74.2,
p < 0.05,
Figure 2e). In terms of specific skill dimensions, the AI-assisted group exhibited significantly stronger performance in Data Processing Ability (23.9 vs. 20.9,
p < 0.001,
Figure 2f). However, no statistically significant differences were found between the two groups in Comprehensive Analytical Ability (
Figure 2g) or Practical Problem-solving Ability (
Figure 2h) in the context of the PPT presentation.
As for the Final Exam Score, no significant difference was observed between the AI-assisted and traditional groups in terms of total score (80.3 vs. 79.5,
Figure 2i). However, the AI-assisted group exhibited both a higher maximum and a higher minimum score, indicating fewer low-scoring outliers and a generally more favorable score distribution. In terms of sub-dimensions, there were no significant differences in Fundamental Theoretical Knowledge (
Figure 2j) or Practical Application (
Figure 2l), but the AI-assisted group significantly outperformed the traditional group in Comprehensive Thinking (23.2 vs. 21.6,
p < 0.001,
Figure 2k).
No significant differences were observed between the two groups in the total Post-class Assignment Scores across the eight assignments (320.4 vs. 309.7), total Completion Time (8.2 h vs. 8.6 h), or overall Learning Efficiency (43.4 vs. 40.2 points/hour) (
Figure 2n–p). Nonetheless, the AI-assisted group showed numerically higher maximum and mean Learning Efficiency; some students exceeded 80 points per hour, suggesting that AI-supported instruction may have been more effective at stimulating the learning potential of some students.
Finally, the AI-assisted group exhibited a significantly higher Number of Classroom Interactions than the traditional group (
p < 0.001,
Figure 2m), indicating that AI-enhanced instruction substantially increased student engagement and interactive participation in class discussions.
4.2. The Impact of Learning Behaviors and Competence Development on Final Exam Scores Under Different Instructional Approaches
To examine how Learning Behaviors and Competence Development contribute to students’ Final Exam Scores under different Instructional Approaches, six key indicators (Classroom Participation, Learning Efficiency, Self-directed Learning Ability, Data Processing Ability, Comprehensive Analytical Ability, and Practical Problem-solving Ability) were individually subjected to correlation and regression analyses. The results are presented in
Figure 3. This section aims to dissect the independent contribution of each variable, laying the groundwork for the subsequent comprehensive multivariate model.
In terms of Learning Behaviors, Classroom Participation showed a significant positive relationship with Final Exam Scores in both the traditional instruction group (
R2 = 0.16,
p = 0.003) and the AI-assisted group (
R2 = 0.15,
p = 0.005), as shown in
Figure 3a. Notably, the regression coefficient was higher in the traditional group (0.79 vs. 0.46), suggesting that in traditional instruction, in-class engagement and verbal participation play a more prominent role as mechanisms for promoting academic performance. By contrast, Learning Efficiency showed no significant correlation with Final Exam Scores in either group (
Figure 3b).
Regarding Competence Development, Self-directed Learning Ability was not significantly correlated with Final Exam Scores in the AI-assisted group. However, in the traditional instruction group, this ability exhibited a significant positive association with academic performance (
p = 0.04), with a slightly higher regression coefficient (0.56 vs. 0.54,
Figure 3c). This finding indicates that traditional instruction more heavily relies on students’ autonomous learning initiatives to drive Academic Achievement.
Both groups demonstrated significant positive correlations between Data Processing Ability and Final Exam Scores (
p < 0.05). The AI-assisted group showed a stronger regression coefficient (0.83 vs. 0.73,
Figure 3d), suggesting that the integration of AI tools, such as automated data analysis and visual analytics, enhanced students’ capacity to apply data processing skills in practice, thereby increasing the marginal contribution of this ability to Academic Achievement. This finding is consistent with the theoretical assumptions of technology-enhanced learning.
Comprehensive Analytical Ability was a significant predictor of Final Exam Scores in both groups (
p < 0.05), with a stronger effect observed in the AI-assisted group (0.80 vs. 0.68,
Figure 3e). These results suggest that AI-supported instruction may reinforce students’ ability to filter, synthesize, and structure information, thereby amplifying the academic impact of analytical competence.
In contrast, Practical Problem-solving Ability was not significantly associated with Final Exam Scores in the AI-assisted group. However, in the traditional instruction group, this ability exhibited a strong and significant positive correlation (
p < 0.001), with a substantially higher regression coefficient (1.66 vs. 0.64,
Figure 3f). This finding suggests that while AI tools may streamline problem-solving through standardized workflows, they may also constrain students’ flexibility in addressing complex and ill-defined problems, thereby limiting the translation of this ability into measurable academic outcomes.
4.3. Multivariate Effects of Learning Behaviors and Competence Development on Final Exam Scores Under Different Instructional Approaches
To examine the combined effects of the predictors and control for their potential intercorrelations, the multivariate regression results analyzing the combined influence of six predictors on students’ Final Exam Scores are presented in
Figure 4. These results reveal both the overall explanatory power and the distinctive patterns of influence that Learning Behaviors and Competence Development exert under the two Instructional Approaches. This analysis helps identify the core drivers of academic achievement within different pedagogical environments.
In the traditional instruction group, the six predictors jointly explained 35% of the variance in Final Exam Scores (
R2 = 0.35,
p = 0.002;
Figure 4a), and the overall model was statistically significant. Among the predictors, Practical Problem-solving Ability exhibited a significant positive effect on Final Exam Scores (
p = 0.015), suggesting that teacher-centered case instruction and analysis effectively strengthened students’ ability to apply theoretical knowledge to practical scenarios. By contrast, this association was not significant in the AI-assisted group (
p = 0.475), implying that the use of AI tools may have diverted students’ attention from the underlying reasoning required for complex problem-solving. In the traditional group, Learning Efficiency showed a negative trend in relation to Final Exam Scores, though it did not reach statistical significance (
p = 0.07), and the remaining variables were not significantly associated with performance (
p > 0.1).
In the AI-assisted instruction group, the six predictors together accounted for 32% of the variance in Final Exam Scores (
R2 = 0.32,
p = 0.005;
Figure 4b), with the model again reaching statistical significance. Data Processing Ability (
p = 0.052) and Comprehensive Analytical Ability (
p = 0.061) exhibited positive trends toward influencing Final Exam Scores, although these effects did not reach conventional levels of significance. These findings align with the strengths of AI-supported instruction, which emphasizes data-driven reasoning and structured knowledge integration, and suggest that technology-enhanced environments may be more conducive to cultivating latent cognitive skills. In contrast, such effects were absent in the traditional group (
Figure 4a, all
p > 0.3), further highlighting the limitations of traditional instruction in fostering data-centric and integrative competencies. The remaining variables in the AI-assisted group did not show significant associations with Final Exam Scores (
p > 0.1).
Taken together, the results indicate that the two Instructional Approaches exert fundamentally different effects on student performance. Traditional instruction appears to foster the rapid development of explicit, performance-oriented skills such as Practical Problem-solving Ability through structured, instructor-led training. In contrast, AI-assisted instruction may play a more subtle yet formative role by nurturing implicit competencies such as Data Processing and Comprehensive Analytical Ability through technology-mediated scaffolding and student-led exploration.
4.4. Effects of Instructional Approach on Learning Behaviors, Competence Development, and Academic Achievement
To directly test the proposed theoretical path model (Hypothesis 2) and quantify the indirect mechanism of “Instructional Approach → Learning Behaviors → Competence Development → Academic Achievement,” partial least squares path modeling (PLS-PM) was employed.
Figure 5a illustrates the complete path model with its coefficients, and
Figure 5b decomposes the direct, indirect, and total effects. Path coefficients reflect the direction and magnitude of change associated with AI-assisted instruction relative to traditional teaching. This model provides a holistic view of how AI-assisted instruction exerts its influence.
The Instructional Approach exerted a significant positive influence on Learning Behaviors (path coefficient = 0.46, p < 0.001), indicating that AI-assisted instruction is more effective than traditional teaching in promoting desirable learning practices, such as improved Post-class Assignment Scores and Completion Time, as well as increased Classroom Participation. The path from Instructional Approach to Competence Development was also positive (0.16), but did not reach statistical significance, suggesting that the direct impact of AI-assisted instruction on competence outcomes remains unstable and may be highly contingent on individual learner differences.
Interestingly, the direct effect of Instructional Approach on Academic Achievement was negative (path coefficient = −0.20, p < 0.05), implying that the adoption of AI-based instruction, when considered in isolation, may not directly enhance student performance, and may even pose challenges for learners who are not yet fully adapted to AI-mediated learning environments. However, the pathway analysis revealed a robust indirect mechanism: AI-assisted instruction significantly improved Learning Behaviors (0.46, p < 0.001), which in turn promoted Competence Development (0.44, p < 0.001), and ultimately yielded a strong, positive effect on Academic Achievement (0.42, p < 0.001). These results suggest that the effect of AI instruction on student performance operates predominantly through a dual mediation chain—“Learning Behaviors → Competence Development”—rather than through direct instructional effects.
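For readers tracing Figure 5b, the contribution of this chain follows the standard product-of-coefficients rule: multiplying the three reported paths gives 0.46 × 0.44 × 0.42 ≈ 0.09 for the dual mediation chain alone. Note that the total indirect effect in Figure 5b additionally sums any other mediated paths in the model (for example, Instructional Approach → Competence Development → Academic Achievement), and the total effect equals the direct effect plus all indirect effects.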
As shown in
Figure 5b, the indirect effect of AI-assisted instruction on Academic Achievement was positive and, numerically, larger than its negative direct effect. This reflects a “compensatory mediation effect”, wherein the indirect learning pathway offsets the immediate limitations of unfamiliar or nontraditional instructional delivery.
In summary, the findings highlight distinct developmental pathways under different Instructional Approaches. Traditional instruction is characterized by a teacher-centered, directly transmissive model, whereas AI-assisted instruction functions through the activation of Learning Behaviors and the subsequent cultivation of Competence Development, which represents an indirect yet effective route to improving Academic Achievement. These findings underscore the importance of aligning instructional innovations with strategies that support student engagement and the progressive development of higher-order capabilities to maximize educational outcomes.
4.5. Classification of Student Profiles Based on Latent Profile Analysis
While the previous analyses focused on variable-level group comparisons, latent profile analysis (LPA) offers a person-centered perspective by identifying distinct typologies of students based on their response patterns across the 11 indicators.
Figure 6 depicts the characteristics of the four identified student profiles and their distribution across the two instructional groups. This analysis helps explain the heterogeneity underlying the aggregate results and reveals how instructional approaches differentially shape student learning characteristics.
Based on the results of the latent profile analysis, four distinct student profiles were identified, as illustrated in
Figure 6. Students in the red profile demonstrated relatively stronger performance in indicators related to Learning Autonomy, including Self-directed Learning Ability (X11), Post-class Assignment Scores and Completion Time (X4), and Number of Classroom Interactions (X5), compared to other competence dimensions. This group was therefore labeled Efficient Self-directed Learners. Students in the blue profile exhibited relative strengths in Practical Problem-solving Ability (X13, X23, X33), and were thus designated as Applied Practice Learners. Those in the green profile showed notably high scores in Comprehensive Analytical Ability (X12, X22) and Comprehensive Thinking (X32), and were identified as Logically Analytical Learners. Finally, students in the purple profile performed relatively poorly across all indicators and were labeled Low-Efficiency Passive Learners.
As shown in
Figure 6a, the AI-assisted instruction group had a higher proportion of Efficient Self-directed Learners (38%) and Logically Analytical Learners (24%) compared to the traditional instruction group (28% and 18%, respectively), and a lower proportion of Low-Efficiency Passive Learners. These results suggest that AI-assisted instruction, by offering personalized learning pathways and real-time feedback, effectively enhanced students’ motivation and efficiency, mitigated learning obstacles, and reinforced systematic thinking. In contrast, the traditional model of “teacher speaks, students listen” often fostered a reliance on instructor guidance, limiting opportunities for autonomous learning. Moreover, the one-size-fits-all nature of traditional instructional design failed to accommodate individual learner needs, potentially leaving some students unsupported and disengaged.
Figure 6b shows that the Applied Practice Learner profile was more prevalent in the traditional instruction group. This aligns with the emphasis on in-class lecturing, case studies, and practice-oriented strategies that are typical of traditional pedagogy. Such structured approaches appear effective in helping students integrate learned knowledge and improve their Practical Application skills during problem-solving tasks.
In sum, the two Instructional Approaches emphasize different dimensions of student development. AI-assisted instruction is more adept at cultivating Learning Autonomy and Logical Analysis, while traditional instruction demonstrates greater strength in fostering Applied Practice capabilities.
5. Discussion
5.1. Exploring the Differential Effectiveness of AI-Assisted and Traditional Instruction
This study found no significant difference in Final Exam Scores between students in the AI-assisted and traditional instruction groups (
Figure 2i), suggesting that the integration of AI into instructional delivery did not yield immediate gains in Academic Achievement. This finding is consistent with meta-analytic evidence showing that AI-assisted interventions tend to yield larger short-term gains that attenuate as intervention duration increases [
49]. Specifically, a BJET meta-analysis of 24 randomized studies reported stronger effects for short interventions and attributed the decline over time partly to novelty effects [
49]; a Computers and Education meta-analysis focusing on ChatGPT likewise identified intervention length as a significant moderator, with shorter designs producing larger effects [
50]; and a JRTE meta-analysis found the highest effects in single-session trials [
51]. Taken together, these patterns help reconcile seemingly contradictory claims about AI’s uniformly positive impact by highlighting time-sensitive boundary conditions for sustained learning benefits. One plausible explanation discussed in the educational technology literature is the novelty effect, wherein initial performance boosts associated with new tools diminish over time if not accompanied by sustained, active learning structures [
52].
In contrast, with respect to intermediate learning outcomes, students in the AI-assisted group demonstrated significantly higher Midterm Report Scores (
Figure 2a) and PPT Presentation Scores (
Figure 2e). Notably, the AI group outperformed the traditional group in dimensions such as Comprehensive Analytical Ability (
Figure 2c) and Data Processing Ability (
Figure 2f). These findings are consistent with prior research suggesting that AI-supported instruction facilitates analytical thinking and information processing through intelligent data analysis platforms and knowledge-construction tools [
53]. For example, Chen et al. (2020) reported that the integration of AI into instructional practices supports the development of higher-order skills, including critical thinking and the ability to engage with complex problems [
15]. Accordingly, the superior performance of the AI group in analytical dimensions observed in this study may be largely attributed to the affordances of AI technologies, particularly the access to diverse resources and real-time feedback, which scaffold students’ efforts to synthesize and interpret multifaceted information.
It is noteworthy, however, that students in the traditional instruction group outperformed their AI-assisted peers in Practical Problem-solving Ability (
Figure 2d). This finding suggests that traditional, instructor-led approaches, particularly those grounded in expert demonstration and case-based analysis, may offer more direct support for applying conceptual knowledge to real-world scenarios. The structured guidance and modeling provided by instructors in traditional classrooms appear to equip students with clear procedural strategies, thereby strengthening their immediate capacity to tackle prototypical problems [
54]. This observation is consistent with findings from Zain et al. (2022), who demonstrated that structured instructional strategies, such as lectures and case-based exercises, effectively promote problem-solving capabilities in engineering contexts [
37]. The relatively weaker performance of the AI-assisted group in this dimension may stem from an overreliance on AI tools. While AI offers standardized solution frameworks that enhance efficiency, such tools may inadvertently reduce students’ opportunities for engaging deeply with the logical structure of problems. Prior reviews have cautioned that excessive dependence on AI-generated outputs can undermine students’ capacity for independent reasoning and adaptive problem-solving [
31].
5.2. Mechanisms Linking Instructional Approach to Academic Achievement
As artificial intelligence continues to integrate into educational settings, increasing attention has been paid to the divergent mechanisms through which AI-assisted and traditional Instructional Approaches shape student development. Rather than being inherently superior or inferior, these two approaches appear to cultivate distinct types of competencies, each with unique implications for Academic Achievement. This section explores how such differences in competence cultivation contribute to divergent academic outcomes.
The regression analysis (
Figure 3a) showed that although AI-assisted instruction significantly increases the frequency of Classroom Participation, its marginal contribution (regression coefficient) to Academic Achievement was smaller than that observed in traditional classrooms. This discrepancy may be attributed, in part, to the nature of AI-facilitated interactions, which often involve technical prompts, input of keywords, or navigation of AI tool interfaces. Such interactions, while frequent, may lack the depth and cognitive demand of more intentional engagement. In contrast, Classroom Participation in traditional settings, such as posing questions and engaging in structured discussions, is typically more focused and cognitively substantive, resulting in greater academic benefit [
55]. Prior research has consistently demonstrated a strong positive relationship between active classroom participation and academic performance, particularly in traditional environments where active engagement is a salient and self-initiated learning behavior [
56].
The relationship between Self-directed Learning Ability and Academic Achievement also diverges markedly across the two Instructional Approaches (
Figure 3c), as evidenced by their differing regression coefficients in the multivariate models (
Figure 4). In the traditional instruction group, students with strong Self-directed Learning Ability tend to achieve higher Final Exam Scores, whereas in the AI-assisted group, this relationship is less pronounced. This divergence suggests that the instructional demands on learner autonomy differ substantially between the two contexts. Khalid et al. (2020) found that students in online or remote learning environments often exhibit stronger self-directed capabilities than those in traditional classrooms, and that the correlation between learner autonomy and academic performance is especially strong in digitally mediated contexts [
57]. In the AI-assisted setting, however, the personalization and real-time scaffolding offered by intelligent systems may partially compensate for deficiencies in self-regulation, enabling students with weaker autonomous learning tendencies to maintain progress. As a result, the predictive strength of Self-directed Learning Ability on Academic Achievement may be diluted. In contrast, traditional instruction relies more heavily on learners’ initiative; teachers are less able to address every student’s needs in real time, which places a greater burden on students to preview content, review material independently, and resolve conceptual difficulties outside of class. Consequently, students with higher levels of Self-directed Learning Ability are more likely to transform classroom inputs into academic success under traditional pedagogical conditions.
AI-assisted instruction offers students abundant data resources and real-time feedback loops, enabling them to rapidly acquire and synthesize multidimensional information, especially in tasks such as Midterm Reports and PPT Presentations (
Figure 3d,e). These affordances significantly enhance students’ Data Processing Ability and Comprehensive Analytical Ability. Empirical research on generative AI in engineering education has shown that students trained with AI-generated prompts outperform control groups in both data analysis and programming proficiency [
58]. Within AI-enhanced learning environments, the iterative feedback mechanisms of intelligent systems help students consolidate domain knowledge while efficiently integrating interdisciplinary information. This process supports the emergence of data literacy and critical thinking as internalized, latent cognitive capacities [
15]. Previous studies have also demonstrated that the combination of rich data resources and instant feedback in intelligent learning environments accelerates learners’ development of advanced information processing habits, thereby promoting active meaning-making and deeper learning engagement [
59]. By contrast, traditional instruction follows a different logic in cultivating Data Processing and Comprehensive Analytical skills. In traditional classrooms, opportunities for data analysis and integrative processing are typically confined to textbook exercises and instructor-directed problem sets. These environments often lack diverse data inputs and real-time formative feedback, resulting in a slower development of the skills required to handle data-intensive or analytically demanding tasks [
60]. Consequently, although students in traditional classrooms also show a positive correlation between these abilities and Final Exam Scores, their regression coefficients are generally lower compared to those observed under AI-assisted instruction. This indicates that, under traditional models, such competencies are often shaped by explicit, externally guided training rather than long-term, internalized cognitive routines [
20]. It is worth noting, however, that some scholars remain cautious about the effectiveness of AI in fostering analytical thinking, especially within the humanities. Castillo (2024a) argues that traditional instruction should remain central to cultivating complex analysis and problem-solving skills in disciplines where interpretive reasoning and context sensitivity are paramount [
61]. This suggests that efforts to develop Comprehensive Analytical Ability should be context-dependent, with instructional approaches tailored to both disciplinary characteristics and pedagogical design.
Findings from the present study also underscore the distinct value of traditional instruction in fostering students’ capacity to solve real-world, complex problems (
Figure 3f). Human instructors are uniquely positioned to contextualize theoretical content and guide learners in understanding how engineering principles apply to authentic projects. This form of situated learning helps students develop cognitive schemas for Practical Problem-solving Ability. In contrast, AI-assisted instruction often emphasizes structured, standardized tasks. Intelligent systems excel in delivering personalized drills and feedback within well-defined problem spaces, but remain limited in supporting creative and context-sensitive reasoning. As a result, students in AI-assisted classrooms, despite mastering theoretical and data analytic skills, may lack the experiential scaffolding needed to approach novel, open-ended problems with confidence (
Figure 3f and
Figure 4b). Several studies reinforce this concern. Castillo et al. (2024b) caution that excessive reliance on AI may hinder the development of students’ communication, critical thinking, and applied practice skills, particularly in settings where direct engagement with real-world ambiguity is essential [
62]. However, other research offers a more nuanced view. For instance, Chiang et al. (2024) compared graduate students’ problem-solving performance in online and face-to-face instruction, finding that online learners actually outperformed their face-to-face peers [
63]. The authors attributed this to the heightened self-directed exploration and confidence-building fostered by virtual environments. These results differ from those observed in the current undergraduate AI-assisted context, highlighting how learner maturity and instructional design can modulate the effectiveness of different pedagogical approaches. Taken together, these findings suggest that AI-assisted instruction is not inherently deficient in cultivating Practical Problem-solving Ability. Rather, its current limitations may stem from insufficient integration of authentic, contextualized experiences. If AI instruction can incorporate real-world case studies, simulated projects, or immersive problem scenarios, it may substantially narrow the observed gap in practical competence development compared to traditional methods.
5.3. Divergent Student Development Pathways in AI-Assisted and Traditional Instruction
The findings of this study indicate that the impact of AI-assisted instruction on Academic Achievement operates primarily through indirect pathways (
Figure 5), highlighting a critical gap in current educational research: while many studies have focused on whether AI is effective, few have examined how it exerts its effects. As Baker et al. (2016) emphasize, the true educational value of technology should not be measured solely by terminal outcomes such as test scores, but rather by the mechanisms through which it shapes the learning process [
16]. Within this process, improvements in Learning Behaviors emerge as a key mediating factor in AI-supported instruction. This study observed that students in the AI-assisted group exhibited significantly higher Classroom Participation, including more frequent in-class interactions and spontaneous questioning, compared to those in the traditional group. These behaviors align closely with the core principles of constructivism and self-determination theory, both of which stress the importance of active learner engagement. According to constructivist theory [
18], knowledge is not passively received but actively constructed through learner-environment interactions. In this context, AI tools serve as effective scaffolds that facilitate such knowledge construction. Self-determination theory [
21] further suggests that when learners experience greater autonomy, their intrinsic motivation and depth of engagement are significantly enhanced. In the present study, the personalized learning pathways, real-time feedback, and task adaptation mechanisms provided by AI instruction enhanced students’ sense of agency and control, thereby promoting the development of self-regulated learning behaviors. As noted by Jin et al. (2023), well-designed AI tools can effectively enhance learners’ autonomy and engagement, which is further supported by the present findings [
17]. These elevated learning behaviors, in turn, contributed to greater gains in multiple dimensions of Competence Development, particularly in Data Processing Ability, Comprehensive Analytical Ability, and systematic thinking. The development of these higher-order capabilities subsequently served as mediating variables that indirectly boosted students’ Academic Achievement.
Compared to AI-supported instruction, traditional teaching exerts its influence on learning outcomes primarily through direct pathways (
Figure 5), meaning that the effectiveness of instruction is largely contingent upon what the instructor delivers in real time and how much the student immediately retains. In this study, variables such as Classroom Participation and Learning Autonomy in the traditional instruction group exhibited limited associations with Academic Achievement, while Practical Problem-solving Ability emerged as the only significant predictor of improved performance. This finding aligns with existing evaluations of lecture-based instruction, which suggest that while this approach is efficient in conveying concrete knowledge, it falls short in cultivating students’ intrinsic learning habits and higher-order competencies [
33]. As a result, in traditional instructional settings, improvements in Learning Behaviors and Competence Development do not serve as primary mediators of Academic Achievement. Instead, outcomes rely more heavily on the instructor’s direct transmission of knowledge. Although this pathway yields quick results, it lacks deeper penetration into the learning process and offers limited long-term impact. Freeman et al. (2014) analyzed 225 studies on STEM classrooms and found that students in lecture-only classes performed worse than those in active learning environments [
19]. Specifically, active learning raised average examination scores by about 6 percent, and students in lecture-only classes were roughly 1.5 times more likely to fail than those in active learning classes, suggesting that instructor-centered methods alone are insufficient to fully engage student potential. The present study supports this conclusion, showing that in traditional classrooms, where broad-based student engagement and inquiry are lacking, only a subset of students (typically those with strong Practical Problem-solving Ability) benefit directly from instruction, which may constrain overall instructional effectiveness.
The divergent developmental pathways observed between AI-assisted and traditional instruction reflect a broader paradigm shift occurring in higher education. The transition from pedagogy driven by knowledge transmission to instruction centered on competence and guided by process is no longer optional but essential. The findings of this study align with constructivist learning theory, which emphasizes that active engagement in learning behaviors fosters deeper competence development, ultimately leading to improved academic achievement. The observed pathway from “learning behaviors—competence development—academic achievement” supports this theoretical chain, consistent with previous findings in engineering education [
3,
8]. Notably, while the positive influence of AI-assisted instruction on competence development aligns with prior studies, the relatively modest effect size suggests that contextual factors, such as prior knowledge and course design, may moderate the extent of these benefits.
5.4. Implications for Instructional Reform
The differences identified in student development pathways offer important insights for educational reform. First, when evaluating and designing Instructional Approaches, educators must move beyond superficial comparisons of Academic Achievement and instead examine how different instructional models reshape the learning process. As demonstrated in this study, meaningful and sustained improvements in Academic Achievement can only occur when the Instructional Approach effectively activates a positive chain of “Learning Behaviors → Competence Development”. Therefore, the integration of technologies such as AI should not be framed merely as tools for boosting test scores. Rather, educators should focus on how such technologies can transform pedagogical processes. Teachers, in particular, must purposefully incorporate AI tools in ways that foster student agency, stimulate critical thinking, and support competence development, rather than reducing AI to a mechanistic substitute for traditional instruction. It is only through intentional instructional design, which deeply embeds technology into the learning environment, that “technology empowerment” can be transformed into “learning empowerment”, ultimately achieving dual gains in Academic Achievement and Competence Development.
Second, the structural model developed in this study aligns closely with the current reform paradigms emphasizing learner-centered and holistic learning-oriented education. In the context of higher engineering education, the “New Engineering” movement advocates for a shift from content transmission to the cultivation of practical competencies and innovation capacity. This study provides empirical support for such reforms by showing that AI-assisted instruction enhances Academic Achievement indirectly, primarily by increasing student engagement and fostering Competence Development, thereby confirming both the necessity and feasibility of competence-based instructional transformation.
Finally, this study reinforces the view that AI-assisted and traditional Instructional Approaches should not be viewed as dichotomous in quality or effectiveness. Rather, each offers distinct advantages depending on the desired learning outcomes. Traditional instruction excels at delivering content and procedural knowledge efficiently in the short term, while AI-assisted instruction demonstrates greater promise for supporting the long-term development of higher-order competencies. Educators should adopt a strategic, goal-oriented approach that leverages the strengths of both models. For example, in implementing AI technologies, instructors may retain effective elements of traditional pedagogy, such as targeted explanation and expert modeling, while using AI to broaden student participation and offer personalized learning support. Only through such thoughtful integration can the complementary advantages of both Instructional Approaches be fully realized, ultimately enabling the design of instructional pathways that meet students’ immediate learning needs while also fostering their long-term growth and development.
7. Future Research
Future studies should implement iterative teaching cycles in the same course across different semesters and cohorts to examine the stability of instructional effects and adopt longitudinal designs to track the same students’ learning behaviors, competence development, and academic performance over time. Additionally, refining the measurement of complex cognitive abilities, particularly in practical problem-solving and innovation, requires more sensitive, multi-dimensional assessment tools for greater precision.
Methodologically, combining structural equation modeling with causal inference techniques would enable a more nuanced identification and verification of the relational pathways within instructional models, thus advancing our understanding of the internal mechanisms through which AI instruction shapes student learning. Additionally, future research should explore the interactions between AI-enhanced instruction, instructor roles, and course characteristics, examining how technological applications can be optimally aligned with disciplinary demands and professional expectations.
At the sampling level, future work should expand the sample size, include students from diverse disciplinary backgrounds, and extend the research to multiple courses (e.g., Hydraulics, River Dynamics, Engineering Surveying) to compare the effects of AI-assisted instruction across different knowledge domains, thereby enhancing the generalizability of findings. Systematic logging of AI usage frequency and duration should be incorporated in future studies to enable precise measurement of exposure and examination of potential dose–response relationships. This would provide a stronger empirical foundation for the broader adoption and refinement of AI-assisted instruction in engineering education and beyond.
8. Limitations
While this study employed a posttest-only control group design to compare AI-assisted and traditional instruction in the River Regulation and Ecological Restoration course, the absence of a pretest limits the ability to fully control for pre-existing group differences. Future research in similar hydraulic engineering contexts should incorporate pre-intervention measurements or statistical controls to enhance internal validity and contextual rigor.
One limitation of this study is the assumption of linear relationships between instructional approaches, learning behaviors, competence development, and academic achievement. In practice, some of these relationships may be non-linear. For instance, excessive class participation or data-processing effort might reach a point of diminishing returns. Future research may explore more flexible models, such as threshold models or polynomial SEM, to better capture these dynamics.
The course-embedded performance measures used in this study (the Midterm Report, PPT Presentation, and Final Exam) serve as proxies for competence development and may not fully capture students’ actual competencies. Future studies should incorporate standardized, validated assessment tools, such as independent rubrics, situational tasks, or competence questionnaires, to further validate these indicators.