Article

Exploring the Cognitive Reconstruction Mechanism of Generative AI in Outcome-Based Design Education: A Study on Load Optimization and Performance Impact Based on Dual-Path Teaching

1 School of Art and Design, Xihua University, Chengdu 610039, China
2 School of Industrial and Information Engineering, Politecnico di Milano, 20156 Milan, Italy
3 Sichuan Academy of Forestry, Chengdu 610036, China
4 Deyang City Territorial Spatial Planning Compilation and Research Center, Deyang 618099, China
* Authors to whom correspondence should be addressed.
Buildings 2025, 15(16), 2864; https://doi.org/10.3390/buildings15162864
Submission received: 15 July 2025 / Revised: 11 August 2025 / Accepted: 12 August 2025 / Published: 13 August 2025
(This article belongs to the Topic Architectural Education)

Abstract

Undergraduate design education faces a structural contradiction characterized by high cognitive load (CL) and relatively low innovation output. Meanwhile, existing generative AI tools predominantly emphasize the generation of visual outcomes, often overlooking the logical guidance mechanisms inherent in design thinking. This study proposes a Dual-Path teaching model integrating critical reconstruction behaviors to examine how AI enhances design thinking. It adopts structured interactions with the DeepSeek large language model, CL theory, and Structural Equation Modeling for analysis. Quantitative results indicate that AI-assisted paths significantly enhance design quality (72.43 vs. 65.60 in traditional paths). This improvement is attributed to a “direct effect + multiple mediators” model: specifically, AI reduced the mediating role of Extraneous Cognitive Load from 0.907 to 0.017, while simultaneously enhancing students’ investment in Germane Cognitive Load to support deep, innovative thinking. Theoretically, this study is among the first to integrate AI-driven critical reconstruction behaviors (e.g., iteration count, cross-domain terms) into CL theory, validating the “logical chain externalization → load optimization” mechanism in design education contexts. Practically, it provides actionable strategies for the digital transformation of design education, fostering interdisciplinary thinking and advancing a teaching paradigm where low-order cognition is outsourced to reinforce high-order creative thinking.

1. Introduction

Undergraduate design education is undergoing a paradigm shift from “knowledge imparting” to “thinking capability cultivation”. However, foundational design courses, as core carriers for fostering creative awareness among lower-grade students, have long faced a salient contradiction: high cognitive load (CL) coupled with low innovation output [1,2]. Cognitive Load Theory, which categorizes cognitive load into Intrinsic Cognitive Load (InCL), Extraneous Cognitive Load (ExCL), and Germane Cognitive Load (GeCL), provides a critical framework for dissecting design thinking processes. Design tasks inherently involve nonlinear cognitive characteristics (e.g., spatial scale imagination, ecological logic deduction), which easily induce excessive InCL. Meanwhile, inefficient information presentation in traditional teaching further exacerbates ExCL, thereby overoccupying students’ cognitive resources and limiting their ability to invest in GeCL for in-depth innovative thinking. This mechanism is particularly pronounced in lower-grade students, who are in a “low-experience, high-plasticity” cognitive development stage [3].
Form generation, a core competency in design, traditionally required extensive training and prolonged experience accumulation. However, generative AI has significantly reduced this burden by leveraging large-scale labeled visual data (e.g., photographs, graphics) [4,5], thereby opening new avenues for stimulating and expanding design thinking. Despite this progress, existing AI tools and research predominantly focus on generating visual outcomes, overlooking the critical guidance of design thinking’s logical chain [6,7]; consequently, students tend to prioritize drawing beautification over logical reasoning, hindering the externalization and reorganization of cognitive processes. Current research has two key limitations. First, applications of Cognitive Load Theory (CLT) in design education have overlooked the integration of “critical reconstruction behaviors” variables (e.g., iteration frequency, cross-domain associations) [8,9]. As a result, the mechanism by which information optimization translates into innovative output in AI contexts remains unexplained. Second, studies exploring AI’s cognitive intervention mechanisms in design remain descriptive of tool effects, lacking quantitative analysis of the transmission pathway connecting “AI interaction behavior → load distribution → innovative output”. Thus, the key challenge remains: how to effectively leverage generative AI to support the design thinking stimulation process.
In response, this study addresses a core research question: how can structured interaction with generative AI optimize cognitive load (CL) distribution to stimulate design thinking? Using the DeepSeek large language model, we constructed a Dual-Path teaching model (traditional teaching vs. AI-assisted teaching) to explore three sub-questions: (1) How does teaching intervention affect students’ CL dimensions (Intrinsic [InCL], Extraneous [ExCL], and Germane [GeCL])? (2) How do these CL dimensions influence design quality? (3) Through what mechanisms (behavioral reconstruction or load regulation) does AI enhance design innovation? The findings are expected to provide replicable cognitive intervention pathways for the digital transformation of design education, facilitating the evolution of teaching paradigms toward “low-order cognition outsourcing–high-order thinking reinforcement”.

2. Literature Review

2.1. The Application of CL Theory in Design Education

CL theory provides a fundamental framework for analyzing thinking processes in design education, categorizing CL into three dimensions (InCL, ExCL, and GeCL). Among them, InCL is determined by task inherent complexity (e.g., spatial imagination in landscape design), ExCL arises from inefficient information presentation, and GeCL refers to an individual’s investment in deep knowledge processing—all of which are critical for design innovation [8,9,10].
However, existing applications of CL theory in design education are constrained by two key limitations. First, the majority of studies prioritize descriptive analyses of load characteristics over explorations of regulatory mechanisms, leaving a gap in understanding how load can be actively modulated. Key limitations in CL theory application include the following: Merriënboer and Sweller identified ExCL as a bottleneck but overlooked resource redistribution [11]; Gkintoni et al. noted AI’s ExCL-reduction potential but disconnected it from cognitive behaviors, such as iteration [12]. This gap is particularly evident in design education, where nonlinear tasks (e.g., ecological logic deduction) amplify InCL, yet few studies explain how AI tools can mediate this through behavioral interventions.
Notably, novice–expert comparisons reveal that novices (e.g., lower-grade design students) allocate more resources to ExCL than experts [11,13]. This finding suggests that AI-driven “structured thought expression” could optimize load distribution. However, current research has not integrated AI interaction modes with dynamic CL changes, leaving the “AI intervention → load reorganization → innovation output” pathway unelucidated.

2.2. Cognitive Intervention Mechanisms for Generative AI

Generative AI’s cognitive intervention operates through “tool mediation,” as framed by Vygotsky’s socio-cultural theory and cognitive psychology [14,15]. From a socio-cultural perspective, AI prompt text functions as a “digital psychological tool,” facilitating externalization of implicit design logic via structured interaction (e.g., “problem decomposition → strategy generation”) [15]. This aligns with Vygotsky’s assertion that tools shape cognitive development; however, existing studies often overemphasize the functionality of tools over the quality of interaction. For instance, Abdallah and Estévez demonstrated AI’s role in visualizing design concepts but ignored how prompt structure influences logical chain reconstruction [5].
From a cognitive psychology perspective, AI stimulates divergent thinking (fluency and flexibility) by breaking associative boundaries [16]. Empirically, Kamnerddee et al. confirmed that prompt iteration frequency correlates with scheme novelty, but their analysis remained descriptive, lacking quantification of how iterations affect CL dimensions [17]. This reflects a broader limitation: studies (e.g., López-Galisteo and Borrás-Gené [4]) fail to connect “tool mediation” with “CL regulation,” leaving unmodeled how cross-domain text stimulation (e.g., “parametric design → weaving process”) reduces ExCL or enhances GeCL.
Critical reconstruction behaviors, such as iteration count and cross-domain terms, are notably absent in existing frameworks. Although Li et al. [18] highlighted the significance of critical thinking in human–AI interaction, they failed to establish an empirical link between behavioral indicators and CL redistribution. This gap hinders understanding of why some AI-assisted designs yield higher quality: is it due to reduced ExCL, enhanced GeCL, or behavioral adjustments?

2.3. Research on Transformation of Design Thinking

Design thinking transformation hinges on bridging the “concept-form gap” through logical chain reconstruction [19,20]. Traditional teaching struggles to capture this dynamic process, but AI prompt text offers a “digital scaffold” via dual pathways: structured causal derivation (e.g., “site waterlogging → ecological strategy”) and cross-domain stimulation [21,22]. However, existing research is limited by three shortcomings. First is mechanistic ambiguity: studies like Tan and Luhrs describe AI’s role in enhancing divergent thinking but do not explain how this translates to reduced CL [20]. Second, there are methodological gaps: no quantitative index system exists for reconstruction behaviors (e.g., iteration thresholds or cross-domain term frequency), making it impossible to measure their impact on CL. Third, there are tool limitations: mainstream AI platforms lack feedback loops for thinking reconstruction, as noted by Liu et al., leading to passive acceptance of AI suggestions rather than active critique [23].
Guilford’s divergent thinking theory emphasizes fluency and flexibility as drivers of innovation, but its integration with CL theory is underdeveloped [16]. Practically, while Hubert et al. showed that AI improves creativity quality by 40–65%, they did not clarify whether this improvement stems from reduced ExCL or enhanced GeCL, nor the role of reconstruction behaviors in this process [24].
Guided by the theoretical framework and the assumptions of the SEM model outlined above, we constructed a Dual-Path research framework to explore the influence mechanisms of design quality (Figure 1). This framework was anchored in four research hypotheses:
H1. 
Teaching intervention has a significant direct impact on design quality.
H2. 
The three dimensions of CL have differentiated mediating roles in the effect of teaching intervention on design quality.
H3. 
Critical reconstruction behavior has a significant mediating role in the effect of teaching intervention on design quality.
H4. 
Teaching intervention indirectly affects design quality through the mediating path of “critical reconstruction behaviors → three CL dimensions (InCL, ExCL, and GeCL)”.

3. Materials and Methods

3.1. Experimental Design

3.1.1. Sample Selection and Site Characteristics

Following the sample size determination in similar studies [25,26], 52 sophomore students in the landscape design program were recruited as research participants. All participants had previously received training in hand-drawing for landscape design, with 75% demonstrating proficiency in this skill. Meanwhile, 84.3% had not systematically learned AI tools, ensuring homogeneity in baseline skills.
All participants had only received theoretical instruction in the course’s preliminary stage (covering basic concepts, constituent elements, and core techniques). They were in the cognitive development stage of “low experience–high plasticity”.
The design task was operationalized as a typical spatial design proposition, with the empirical setting being a campus site bounded by roads and characterized by minimal internal terrain elevation differences, ensuring task consistency across participants (Figure 2). The core design task focused on addressing “space inefficiency,” with no prescribed functional requirements or technical indicators.

3.1.2. Dual-Path Implementation Process and Variable Control

This experiment adopted a one-group pretest–posttest design, in which all students first took part in the traditional group and then the AI-assisted group (Table 1). In adherence to research ethics, the experiment received pre-experimental ethical approval from the Local Ethics Committee of Xihua University (approval code: XHU20250272; approval date: 10 April 2025), prior to the initiation of data collection and participant recruitment. Additionally, all participating students provided written informed consent, and all collected data were anonymized and used exclusively for the purposes of this study.
To minimize order effects from non-isolated Dual-Path interactions, three control measures were implemented in sequence. First, we used tool and process separation: AI intervention was strictly prohibited in the traditional group, while in the AI-assisted group, AI tools were used exclusively for idea generation (not direct design execution) to avoid technological interference with core design thinking processes. Second, time and sequence balancing were considered: the two tasks were scheduled with a 7-day interval (traditional group: 28–30 April 2025; AI-assisted group: 8–9 May 2025) to eliminate potential learning biases from task sequence. Third, we ensured outcome labeling standardization: both groups were required to submit A3-sized hand-drawn floor plans and renderings that complied with unified design specifications; the AI-assisted group additionally provided a complete prompt text modification log to ensure data traceability and academic integrity.

3.2. Variable Definition and Measurement

This study constructed a multidimensional variable system encompassing teaching intervention, CL, and design quality, thereby providing the data foundation for experimental analysis and subsequent Structural Equation Modeling (SEM).

3.2.1. Independent Variable—Teaching Intervention (Latent Variable)

Centered on the two teaching paths (Group T and Group A), a system of five independent variables was constructed based on the principle of “conceptual consistency with contextual adaptation”: mentoring support, time investment, skill level, knowledge base, and case resources [27,28]. This framework ensured parallelism in variable definition across groups, while highlighting the differences in intervention methods (Table 2). We integrated data collection with the CL scale to maintain operational feasibility.

3.2.2. Mediating Variables (Latent Variables)

The mediating variables included critical reconstruction behavior and CL, reflecting the behavioral characteristics of tool use and the dynamics of cognitive processes. The data acquisition methods and operational definitions were as follows:
1. Critical Reconstruction Behaviors (Group A Only)
As a quantitative indicator of “human–machine collaboration depth”, this construct reflects the quality of students’ logical chain reconstruction during AI prompt interaction. Drawing from Guilford’s theory and related studies [18,29,30], critical reconstruction behavior was measured across four dimensions: number of iterations (reflecting thought iteration frequency), word frequency (reflecting thinking refinement and interaction investment), number of professional terms (reflecting professional knowledge application), and number of cross-domain words (reflecting divergent thinking potential) (Table 3). Data were derived from Group A’s prompt text modification logs and processed through natural language processing as follows (a minimal processing sketch is given at the end of this subsection):
  • Word segmentation statistics were obtained using NLPAR-Parser 2021 software (Ling-join Software, Beijing, China).
  • Professional term counts were calculated through pre-set lexicon matching.
  • Cross-domain terms were identified using the Latent Dirichlet Allocation (LDA) topic model for non-landscape theme detection (e.g., “technology”, “art”).
2. CL Variable (Latent Variable)
A multi-dimensional scale was employed to measure the three dimensions of CL after task completion (Table 4), with a focus on Group A’s ExCL reduction and GeCL promotion effects through critical reconstruction behavior (for details, see Appendix A Table A1). The observed variables were measured using a 1–7 scoring scale: decreasing InCL and ExCL scores indicated an increased negative load, while increasing GeCL scores indicated an enhanced investment in deep knowledge processing and a positive load increment.
The scale was administered within 10 min after the Group T task to preserve memory freshness. For Group A, the scale was completed the day after task completion to reduce the fatigue bias associated with complex tasks [10] and to allow reflection on AI-assisted design. A five-minute prompt interaction review was permitted before Group A’s scaling to ensure memory accuracy.
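The prompt logs themselves were processed with NLPAR-Parser and an LDA topic model; purely as a hedged illustration of that kind of pipeline, the Python sketch below derives the four behavioral indicators from a list of prompt revisions using scikit-learn. The lexicon, topic count, and “landscape” topic IDs are placeholder assumptions, and the actual Chinese prompts would first require word segmentation.

```python
# A minimal sketch, not the study's actual scripts: the lexicon, the number of
# topics, and the set of "landscape" topics below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

PROFESSIONAL_LEXICON = {"rain garden", "sunken seat", "planting ditch"}  # hypothetical
LANDSCAPE_TOPIC_IDS = {0, 1}                                             # hypothetical

def count_lexicon(text: str, lexicon: set) -> int:
    """Count pre-set professional terms appearing in one prompt revision."""
    lowered = text.lower()
    return sum(lowered.count(term) for term in lexicon)

def reconstruction_metrics(revisions: list) -> dict:
    """Behavioral indicators for one student's prompt modification log."""
    return {
        "iteration_count": len(revisions),
        "word_frequency": sum(len(r.split()) for r in revisions),
        "professional_terms": sum(count_lexicon(r, PROFESSIONAL_LEXICON) for r in revisions),
    }

def dominant_topics(prompts: list, n_topics: int = 5):
    """Dominant LDA topic per prompt; topics outside LANDSCAPE_TOPIC_IDS flag
    candidate cross-domain content for term counting."""
    doc_term = CountVectorizer().fit_transform(prompts)
    doc_topic = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit_transform(doc_term)
    return doc_topic.argmax(axis=1)
```

Cross-domain term counts would then be tallied only from revisions whose dominant topic falls outside the landscape set, mirroring the non-landscape theme detection described above.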

3.2.3. Dependent Variable—Design Quality (Observed Variable)

Drawing on related research in spatial design [20,33,34] and the auxiliary characteristics of AI, a four-dimensional evaluation system was constructed, comprising functionality, technical standardization, aesthetic expression, and innovativeness. Using the Analytic Hierarchy Process (AHP), five experts ranked the importance of nine indicators, and the weights were determined after a consistency test (CR < 0.1) (Table 5).
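As a hedged illustration of the AHP step, the sketch below computes principal-eigenvector weights and Saaty’s consistency ratio for a pairwise comparison matrix; the matrix values are invented for demonstration and are not the experts’ actual judgments.

```python
import numpy as np

# Saaty's random index values for matrix sizes 1-9, used in the consistency ratio.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise: np.ndarray):
    """Return (weights, consistency ratio) for a positive reciprocal comparison matrix."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)          # consistency index
    return weights, ci / RI[n]               # consistency ratio

# Illustrative 4x4 comparison of the first-level criteria (functionality,
# technical standardization, aesthetic expression, innovativeness).
A = np.array([[1.0, 2.0, 3.0, 2.0],
              [0.5, 1.0, 2.0, 1.0],
              [1/3, 0.5, 1.0, 0.5],
              [0.5, 1.0, 2.0, 1.0]])
w, cr = ahp_weights(A)
print(w.round(3), f"CR = {cr:.3f}")          # weights are accepted only if CR < 0.1
```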
To balance scoring efficiency (104 assignments requiring over 900 evaluation entries per expert) with sample representativeness, a stratified random sampling strategy was employed: using the 75th percentile (7 points) and the 25th percentile (3 points) of pre-scores from the Group T task as stratification thresholds, the 104 designs were categorized into “high”, “medium”, and “low” tiers. Samples were proportionally drawn using SPSS 28’s complex sampling function (IBM Corporation, Armonk, NY, USA), yielding 15 designs per group (five high, six medium, and four low). An independent sample t-test confirmed that there was no significant difference in the pre-score means between the sample and the full dataset (p > 0.05), validating the adequacy of the sampling method.
Thirteen environmental design and landscape architecture professionals independently scored the sampled designs (percentage system) using a randomized path evaluation system. After outliers were removed via box-plot analysis and invalid scores were eliminated, eight valid scores were retained; these were standardized and weighted to derive the observed variable of design quality.
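One way to operationalize the cleaning and aggregation described above is sketched below: expert ratings are filtered with the 1.5 × IQR box-plot rule, z-standardized per indicator, and combined with the AHP weights. The T-score-style rescaling and the column/weight names are assumptions for illustration, not the study’s exact procedure.

```python
import pandas as pd

def drop_outliers(col: pd.Series) -> pd.Series:
    """Box-plot rule: ratings outside 1.5 * IQR of the quartiles become NaN."""
    q1, q3 = col.quantile(0.25), col.quantile(0.75)
    iqr = q3 - q1
    return col.where((col >= q1 - 1.5 * iqr) & (col <= q3 + 1.5 * iqr))

def design_quality(scores: pd.DataFrame, weights: dict) -> float:
    """scores: one design's ratings (rows = experts, columns = indicators);
    weights: AHP weight per indicator name. Both layouts are hypothetical."""
    cleaned = scores.apply(drop_outliers)              # per-indicator outlier removal
    z = (cleaned - cleaned.mean()) / cleaned.std()     # standardize each indicator
    rescaled = 50 + 10 * z                             # assumed T-score style rescaling
    per_indicator = rescaled.mean()                    # average the retained ratings
    return float(sum(per_indicator[name] * w for name, w in weights.items()))
```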

3.3. Data Analysis Method

3.3.1. SEM Construction and Verification

Using IBM AMOS 29, an SEM was constructed to verify the causal transmission mechanism of “teaching intervention → critical reconstruction behavior → CL → design quality.” Given the potential for missing values in the design quality data due to stratified sampling, multiple imputation was first conducted in SPSS 28. A multiple regression model was constructed using variables such as teaching intervention and design quality pre-scores. Five imputed datasets were generated and combined for analysis, ensuring the integrity of the data structure (Cronbach’s α = 0.973, intra-group correlation coefficient = 0.911).
Subsequently, confirmatory factor analysis (CFA) was conducted to examine the validity of the measurement model for latent variables, ensuring that the scale items were appropriately mapped. The structural model paths were specified to investigate the direct effect of teaching intervention on design quality, as well as indirect effects via CL. Key analyses included testing GeCL’s positive effect on innovativeness in Group A, the negative influence of ExCL on scheme quality, and the moderating effect of critical reconstruction behavior on the “CL → design quality” path. Model fitting followed established criteria (CFI > 0.9, RMSEA < 0.08, SRMR < 0.08) [35], and confidence intervals for mediating effects were estimated using the Bootstrap method to enhance the reliability of the results.
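The bootstrap itself ran inside AMOS; as a transparent stand-in for how such an interval is formed, the sketch below computes a percentile bootstrap confidence interval for a simple indirect effect a·b, estimated from two OLS regressions (intervention → mediator, then quality on intervention and mediator). Variable roles and the resampling count are illustrative assumptions, not the study’s full SEM.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b, where a is the slope of m ~ x and b is the slope of m in y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect; mediation is supported
    when the interval excludes zero."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    return tuple(np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)]))
```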
Finally, the teaching path was included as a grouping variable in the SEM, and its moderating effect on path coefficients was examined via multi-group analysis [36] (Group T was coded as 0 for traditional design tasks, and Group A was coded as 1 for AI-assisted tasks).

3.3.2. Qualitative Data Analysis and Verification

Through the natural language processing of Group A’s prompt texts (detailed in Section 3.2.2), we analyzed diverse student strategies in the utilization of prompts. Triangulation validation was performed by cross-referencing qualitative analysis results with quantitative data (e.g., comparing behavioral characteristics between high-transformation and low-transformation groups) to further elucidate individual differences in the effects of AI prompt guidance and provide in-depth mechanistic support for the quantitative findings.

4. Results

4.1. Descriptive Statistics and Cross-Over Analysis of Dual-Path Variables

4.1.1. Differences in Design Quality and CL Scores Between Teaching Paths

As shown in Figure 3a, significant differences in design quality emerged between the traditional path (Group T) and the AI-assisted path (Group A). Specifically, Group A exhibited significantly higher design quality than Group T, with an overall score increase of 10.4% (72.43 vs. 65.60, p < 0.05). This difference was evident in both score distribution and innovation metrics: the percentage of high-score designs (>85 points) in Group A increased approximately 6-fold, while the proportion of low-score designs (<60 points) decreased from 40% in Group T to 7.6% in Group A. Notably, 30.77% of Group A designs achieved high innovation scores (>80 in B4), three times the percentage in Group T (9.61%), confirming AI’s role in enhancing cross-domain creative integration. This finding directly validates Hypothesis H1, confirming that teaching intervention exerts a significant direct impact on design quality, with the direct effect model of Group A more effectively promoting innovative output than that of Group T.
Intrinsic task complexity (InCL) remained stable across groups, confirming that the teaching path did not alter task difficulty (Figure 3b). However, CL distribution differed significantly: Group A showed lower overall ExCL than Group T, suggesting that AI reduced inefficient information processing. A notable exception was ExCL2 (information presentation problem), which increased significantly (t = 6.796, p < 0.001)—likely due to the structured design steps introduced by AI prompts, which temporarily elevated information input but laid the groundwork for logical organization (see Section 5.1.1).
Subjective feedback from Group A aligned with quantitative findings: 88.5% reported AI-enhanced inspiration generation, and 80.8% noted its role in scheme optimization (“AI liberates thinking from experiential constraints”). Critically, 59.4% felt greater creative freedom with AI, while only 18% perceived limitations, validating that AI’s cross-domain text stimulation effectively breaks conventional associative boundaries—a key driver of the observed improvement in innovation.

4.1.2. Critical Reconstruction of Behavioral Characteristics in Group A

Prompt text analysis revealed that students engaged in frequent iterations (mean 4.85 ± 2.33), with 82.69% completing five or more revisions. The number of professional terms (mean 19.17 ± 8.80) was far higher than that of cross-domain terms (4.87 ± 2.77) (Figure 4), reflecting a pattern of “high-frequency iteration combined with professional knowledge integration”—a behavioral signature of active logical reconstruction.
Pearson correlation analysis (Figure 4) confirmed significant positive correlations between critical reconstruction behaviors and design quality, with word frequency and professional term count showing the strongest associations (r = 0.394 and 0.385, p < 0.05). This suggests that refined, profession-specific prompt interaction directly contributes to quality improvement.
Behavioral threshold analysis revealed a clear link between interaction quality and design quality: students with ≥5 iterations scored 1.3 points higher than those with ≤3 iterations. Specifically, students with a word frequency of 100 or more and 20 or more professional terms showed improvements of 4.3–5.2 points. These findings underscore the importance of deep engagement with prompt optimization (e.g., frequent revision and professional term integration) in maximizing AI’s effectiveness.

4.2. Verification of CL’s Mediating Effect

4.2.1. SEM Model Fitting Results

SEM was employed to explore variations in the design quality influence mechanism across teaching pathways. The measurement model demonstrated adequate reliability and validity for the latent variables, as all composite reliability (CR) values exceeded 0.7 and average variance extracted (AVE) values exceeded 0.5, with most observed variables showing factor loadings greater than 0.7. This confirms the scale’s robustness, laying a solid foundation for subsequent structural model analysis.
Equivalence testing confirmed no significant differences between the two paths (Group T and Group A), indicating that the scale’s measurement properties remain stable across intervention modes. The structural model achieved acceptable fit indices (Group T: χ2/df = 4.423, CFI = 0.952, RMSEA = 0.073, SRMR = 0.046; Group A: χ2/df = 1.423, CFI = 0.903, RMSEA = 0.081, SRMR = 0.050).
Paths with non-significant coefficients (p > 0.05) and non-overlapping 95% confidence intervals were denoted by gray dashed lines and texts (Figure 5).

4.2.2. Path Coefficient Analysis and Mechanism Explanation

The model identified ten significant structural paths (three in Group T and seven in Group A). Path coefficient analysis (Table 6) revealed that the significant improvement in Group A’s design quality (as reported in Section 4.1.1), coincided with the retention of the direct effect (0.329) in its SEM model. This finding further validates Hypothesis H1, confirming that Group A overcame the mediating dependency of traditional teaching via the dual mechanism of “load optimization + direct guidance.”
A striking difference emerged in the “teaching intervention → ExCL” path: the coefficient decreased significantly from 0.907 in Group T to 0.017 in Group A. This indicates that AI effectively reduced extraneous load as a bottleneck, thereby freeing cognitive resources. This liberation enabled direct enhancement of design quality through teaching interventions. This pattern reflects a key mechanism: “logical chain externalization reduces inefficient cognitive burden.” Critical path comparisons across groups showed that the path coefficient of “teaching intervention → InCL” in Group T was twice that in Group A, reflecting traditional teaching’s heavy reliance on information processing efficiency to enhance design quality. Although Group A’s “teaching intervention → GeCL” path coefficient (0.598) was lower than Group T’s (0.852), integrated analysis with Section 4.1 revealed a more substantial actual effect in the exploratory group, confirming AI’s selective stimulation of positive CL.
  • Group T exhibited “dual complete mediation”: teaching intervention affected design quality solely through InCL and ExCL, with no significant direct effect.
    • ExCL complete mediation (Path 1): ExCL acted as a critical bottleneck (total indirect effect = −0.899), reflecting how inefficient information organization in traditional teaching constrains quality.
    • InCL complete mediation (Path 3): InCL mediation (total indirect effect = 0.927) further confirmed that task complexity alone dominates cognitive resource allocation without AI support.
  • Group A exhibited a significant direct effect (p < 0.05) with non-zero indirect effects, categorized as partial mediation.
    • ExCL and InCL partial mediation (Paths 1 and 3): Total indirect effects of −0.0001 and 0.310 alongside retained direct effects (0.329) form a dual mechanism of “load optimization + direct guidance”.
These mechanistic differences fully supported Hypothesis H2, demonstrating differentiated mediating effects across the CL dimensions: the adverse mediating effects of ExCL and InCL were attenuated in Group A, whereas the positive mediating effect of GeCL initially emerged.
Notably, critical reconstruction behaviors exhibited small but meaningful chain mediation (total indirect effects = −0.0001 and 0.017), providing preliminary support for H4. This suggests teaching intervention operates through a “critical reconstruction behavior → CL optimization” pathway—a mechanism absent in traditional teaching.
By contrast, Hypothesis H3’s assertion of a significant mediating effect of critical reconstruction behaviors was not supported by the data (total indirect effect = 0.017, p > 0.05), possibly due to the low proportion of high-frequency iteration groups in the sample, which may have prevented the mediating path from being independently observed (see Section 5.1.2).

4.3. Quantitative Analysis of Differences in Dual-Path Mechanisms

Notable differences in effect structure and mediation patterns emerged between the two groups (Group T and Group A) (Table 6). Group T exhibited a “single linear mediation” pattern, with a total effect of 0.596, but no significant direct effect. This means that design quality was entirely dependent on InCL and ExCL mediation. This highlights the vulnerability of traditional teaching to CL bottlenecks, as quality improvement is constrained by load-related factors alone. In contrast, Group A exhibited a higher total effect (0.684) than Group T, and this was accompanied by a fundamental shift in the effect structure. Specifically, whilst Group T relied entirely on complete mediation via InCL and ExCL, approximately 48.09% of the effect in Group A operated directly on design quality, bypassing the mediation of CL. Furthermore, the mediation effect of ExCL decreased substantially from 0.927 in Group T to 0.310 in Group A, suggesting that some cognitive resources were reallocated to GeCL processing, although this mediating effect was not statistically significant (0.008). This model supports the value of AI in minimizing information loss through “logical externalization”.
Group A overcame traditional limitations through dual mechanisms: behavior guidance (stimulating critical reconstruction) and technological empowerment (reducing InCL dependency). The 48.09% direct effect share confirms that AI weakens the mediating role of task complexity, enabling teaching interventions to drive quality directly. This marks a qualitative shift from traditional “single linear mediation” to a “composite collaborative mechanism” that integrates direct guidance and load optimization.
In terms of load contribution, Group T’s indirect effect relied heavily on ExCL (47.2%) and InCL (52.8%), while Group A reduced InCL’s share to 38.5%. Most notably, critical reconstruction behavior contributed 4.2% to the effect, representing an entirely new pathway absent in Group T. This quantifies the shift in mechanism from “pure load regulation” to “behavior-guided load optimization,” highlighting the pivotal role of active human–AI interaction.

5. Discussion

5.1. Interpretation of Core Results and Mechanism Explanation

5.1.1. Structural Differences in Dual-Path Effect

Quantitative differences in Dual-Path effects provide direct empirical support for the core hypothesis that teaching pathways have a direct influence on design quality. Group T was characterized by complete mediation, aligning with the negative load dominance theory in design education [36], where intrinsic task complexity (InCL) and inefficient information processing (ExCL) acted as primary bottlenecks to design quality. In contrast to this load-dependent model, Group A formed a composite model of load optimization + direct guidance (total effect = 0.684) through partial mediation of ExCL (0.310) and retention of direct effects (0.329), thereby revising the traditional CL theory’s oversimplified assumption that load reduction necessarily correlates with quality improvement.
The quantitative disparity between the Dual-Path effects directly confirms the significant influence of the instructional approach on design quality. Group T relied entirely on the mediating effects of InCL and ExCL, indicating that creative output is constrained by both the task’s inherent complexity and the inefficiency of information processing [34]. In contrast, Group A achieved a breakthrough in total effect through a composite mechanism that combined “load optimization” with “direct guidance.” Specifically, a direct effect of 0.329 bypassed the CL mediators and exerted an immediate influence on design quality, while the mediation effect of ExCL decreased from 0.927 to 0.310. Freed cognitive resources substantially facilitated investment in deep processing load (GeCL). This structural transformation provides preliminary evidence on how AI may reshape cognitive resource allocation in basic design contexts through the “externalization of logical chains”: when prompt iterations reach or exceed five, critical reconstruction behavior contributes 4.2% indirectly to design quality, a mechanism not previously quantified in traditional approaches, although further validation with larger samples is needed.
Furthermore, Vygotsky’s theory of tool-mediated activity suggests AI, as a “cognitive substitute,” could enhance effectiveness by 10–15%. The observed 14.8% increase in this study aligns with this expectation, though this finding is based on a small sample (52 students) and requires replication. It should be noted, however, that Group A’s effect enhancement is constrained by current AI’s “professional semantic gap.” For instance, DeepSeek struggles with domain-specific logical chains (e.g., “cultural narrative → spatial form”) [17,38]. This limitation highlights that our findings may be context-dependent, with preliminary applicability to basic design tasks that involve structured prompts. For instance, inputting “aging-friendly” tended to produce conventional fitness equipment solutions, lacking intergenerational interactive innovation. Student feedback highlighted challenges such as “time-intensive AI idea refinement” and “precision-demanding language description to avoid scheme deviation,” reflecting students’ insufficient prompt writing skills and the current limitations of AI tools in natural language understanding and design logic transformation.
From the perspective of mediation mode transformation, AI’s “explicit logical chain” feature reduced the contribution of ExCL from 52.8% to 38.5% by fostering “prompt–feedback–correction” interaction behaviors, with part of the reallocated cognitive resources invested in GeCL (although the mediating effects were not statistically significant). This finding preliminarily supports the innovation pathway of “negative load → positive load” and aligns with Vygotsky’s mediation tool theory: the effectiveness of prompt text as a “psychological tool” depends more on the quality of students’ “problem decomposition–logical verification” interaction than on the technology itself.
The observed structural differences in effects offer preliminary insights into the transformation of design education paradigms, with implications for integrating AI-driven cognitive optimization into foundational design curricula. Specifically, traditional teaching’s sole reliance on “ExCL reduction” has evolved into Group A’s composite mechanism of “direct effect retention + multiple mediations”—a potential step toward more adaptive teaching models, though not yet a definitive paradigm shift. Despite AI limitations in higher-order integration, quantitative results suggest that optimizing prompt interaction (e.g., high-frequency iteration, cross-domain fusion) may expand the utility of the tool in basic design contexts. However, its effectiveness in complex design tasks remains to be tested.

5.1.2. Interaction Mechanism Between CL and Critical Reconstruction Behavior

1. Quantitative Evidence and Typical Cases of Behavioral Heterogeneity
Cluster analysis of critical reconstruction behaviors classified students into three behavioral types (Table 7; a minimal clustering sketch is given at the end of this subsection). Among them, the exploratory type was characterized by high-frequency iterations (≥6) and a high number of cross-domain terms (≥6). For example, Student No. 18 introduced the concepts of “digital art” and “outdoor teaching” through 10 prompt modifications, analogically transforming an AI-generated “honeycomb structure” into an interactive light and shadow installation, achieving a design quality score of 92. This group exhibited 10.2% lower ExCL (4.44 ± 0.38) and 9.4% higher GeCL (5.11 ± 0.69) than the conservative type, providing preliminary evidence for a potential positive cycle of “behavior reconstruction → load optimization” in this exploratory sample. In contrast, the conservative type (e.g., Student No. 1) performed only two iterations with ≤3 cross-domain terms, retaining traditional flower bed layouts, with an innovation score of 67. Their mean ExCL score (4.0 ± 0.62) did not differ significantly from that of Group T (t = 1.23, p = 0.22), confirming that inefficient interaction reduces AI to an “information presentation tool”.
2. Theoretical Explanation and Contradiction Analysis of Behavior–Load Interaction
Drawing on Guilford’s divergent thinking theory, the positive correlation observed in the exploratory group between cross-domain term count (mean = 12.33 ± 5.03) and design quality (r = 0.219) validates the “thinking flexibility → creative integration” pathway. The conservative type exhibited lower professional term counts and cross-domain associations (mean professional term count = 5.4 ± 1.82; mean cross-domain terms = 2.2 ± 0.45), resulting in insufficient GeCL investment. Notably, two high-score designs (≥90) in the low-iteration group may reflect individual design schema reserves—a factor that aligns with CL theory’s emphasis on prior knowledge as a moderator of CL, highlighting the need to control this variable in future studies to refine the load-behavior model.
In line with Vygotsky’s mediation tool theory (AI prompt text as a “thinking scaffold” with a clear threshold effect), only when iterations reach at least five and cross-domain terms exceed four (e.g., the efficient type) can the dominance of ExCL mediation be disrupted (decreasing from 0.927 in Group T to 0.310 in Group A), thereby initiating a virtuous cycle of “logic externalization–load redistribution” (Path 7 in Table 6). This finding extends CL theory: it demonstrates that structured human–AI interaction behaviors can actively regulate load distribution, beyond the passive optimization of information presentation in traditional CL theory.
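The behavioral typology discussed above rests on a cluster analysis of the four reconstruction indicators; a minimal sketch of such a clustering, assuming scikit-learn and toy values in place of the students’ actual logs, is given below.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: iterations, word frequency, professional terms, cross-domain terms.
# The rows are toy values, not the study's per-student data.
behaviour = np.array([
    [10, 180, 25, 8],
    [ 2,  45,  6, 2],
    [ 6, 120, 20, 5],
    [ 3,  60, 10, 2],
    [ 8, 150, 22, 7],
])

X = StandardScaler().fit_transform(behaviour)                      # put indicators on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster IDs to be interpreted against the exploratory/efficient/conservative profiles
```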

5.2. Theoretical Contribution

This study develops a novel technology–behavior–cognition synergistic framework by integrating multidisciplinary theories, offering three theoretical insights at the intersection of design education and generative AI—insights that require future validation.
First, traditional CL theory, which has rarely integrated behavioral variables, offers limited explanatory power for optimizing information presentation and the causal pathway of creative output in AI scenarios. This study proposes a cognitive intervention model encompassing “prompt text interaction → critical reconstruction → CL optimization” and, through SEM, provides one of the first systematic quantifications of how AI-driven critical reconstruction behaviors—such as iteration count and cross-domain terms—operate as mediating variables, thereby extending CL theory to incorporate behavioral mechanisms within digital design contexts. The findings reveal that AI prompt text, functioning as a “psychological tool”, fosters thinking externalization and logical reconstruction through structured interaction patterns (i.e., problem identification–strategy generation–formal verification) [18]. This transforms Vygotsky’s abstract proposition of “tool-mediated cognitive development” into a quantifiable “behavior–load” transmission chain, thereby bridging the application gap of traditional CL theory in digital education contexts.
Second, this study identifies a potential mechanism of AI-driven “CL redistribution”, which revises the traditional view in CL theory that “ExCL reduction inherently improves quality” [39]. Specifically, AI reduces ExCL contribution while facilitating GeCL investment, offering a nuanced understanding of load–quality relationships in AI-integrated settings. Consistent with quantitative findings in Section 4.3 (where ExCL’s mediation effect dropped from 0.927 to 0.310), SEM results show that Group A reduces ExCL contribution from 52.8% (Group T) to 38.5% via technological empowerment, with partially reallocated cognitive resources invested in GeCL—the first quantification of the “negative load → positive load” transformation process. This discovery aligns with Bola’s dynamic cognitive resource reorganization hypothesis [6], revealing the asymmetric relationship between load optimization and thinking reinforcement in AI-assisted design: ExCL reduction must synergize with GeCL enhancement to truly promote innovation, with critical reconstruction behavior (accounting for 4.2% of the total effect size) serving as the core synergistic variable.
Finally, the research proposes a preliminary “Dual-Path effect transformation” framework, shifting focus from absolute effect sizes to mechanism innovation. In the context of CL theory, this framework clarifies how AI resolves the “load reduction–thinking reinforcement” asymmetry in traditional teaching through “direct effect retention + multiple mediations,” providing a CL-based explanation for the mechanism differences between AI-assisted and traditional paths. This mechanism transformation essentially converts “one-way teacher knowledge transmission” into “human–AI collaborative logical construction”, whose theoretical significance surpasses superficial effect size differences.

5.3. Teaching Practice Strategies

Based on the differences in the Dual-Path mechanism and the interaction rules between behavior and load, this study proposes a three-level teaching practice strategy intended to be broadly applicable across design studio teaching. Through a hierarchical design that combines a general framework and professional adaptation, these strategies may be tentatively extended to cross-disciplinary design fields (e.g., product design, visual communication), and their effectiveness is pending validation in larger-scale studies.
1. Traditional teaching enhancement: three-tier AI prompt imitation writing training
Introduce “three-tier AI prompt imitation writing training” to transform the AI path’s “problem–strategy–form” logical chain into causal derivation tasks for paper sketches:
  • Describe site issues (e.g., “inefficient street green space”, “bulky walking aids for the elderly”);
  • Transform into strategic keywords (“aging-friendly + Rain Garden”, “lightweight materials + folding structures”, etc.);
  • Form generative descriptions (“Sunken Seat + Ecological Grass Planting Ditch”, “streamlined shells”, etc.).
This complements externalized design logic [6]; through the adaptation of professional logic chains, it may help mitigate InCL-induced “thinking stagnation” in traditional teaching and better address the “thinking process externalization” requirement.
2. AI-assisted teaching enhancement: iterative-threshold prompt system
Develop a preliminary “iterative-threshold prompt system” that triggers reminders when fewer than three prompt modifications are made, supplemented by cross-domain term recommendations (e.g., “art”, “aging”). This strategy aims to enhance divergent thinking activation and mitigate inefficient interaction in conservative groups, with effectiveness to be further tested in practice (a minimal sketch of this reminder logic is given at the end of this subsection).
3. Evaluation system innovation: three-dimensional “behavior–load–quality” framework
Establish a three-dimensional evaluation framework integrating
  • Behavioral indicators (prompt iteration count, cross-domain term frequency) into design quality criteria;
  • Dynamic CL scale monitoring of ExCL and GeCL balance.
This forms a closed-loop teaching framework: “interaction behavior assessment → CL regulation → creative quality improvement”—a model that requires iterative refinement in educational practice.
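As a hedged prototype of the iterative-threshold idea in strategy 2, the sketch below triggers reminders when a session shows too few prompt revisions or cross-domain terms; the thresholds, wording, and suggestion list are illustrative assumptions rather than a description of an existing system.

```python
# Illustrative rule check for one student's AI prompt session; thresholds and
# suggestions are hypothetical teaching parameters.
ITERATION_THRESHOLD = 3
CROSS_DOMAIN_THRESHOLD = 2
CROSS_DOMAIN_SUGGESTIONS = ["digital art", "aging-friendly", "weaving process"]

def review_session(iteration_count: int, cross_domain_terms: int) -> list:
    """Return reminders nudging the student toward deeper prompt reconstruction."""
    reminders = []
    if iteration_count < ITERATION_THRESHOLD:
        reminders.append(
            f"Only {iteration_count} prompt revision(s) so far; restate the problem, "
            "strategy, and form in at least one more round."
        )
    if cross_domain_terms < CROSS_DOMAIN_THRESHOLD:
        reminders.append(
            "Try borrowing a concept from another field, e.g. "
            + ", ".join(CROSS_DOMAIN_SUGGESTIONS) + "."
        )
    return reminders

print(review_session(iteration_count=2, cross_domain_terms=1))
```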

5.4. Research Limitations and Future Directions

This study has two key limitations that constrain the generalizability of findings. First, the sample size for critical reconstruction behavior typology analysis was small (n = 52), limiting the robustness of verification for the differentiated mechanism linking the critical reconstruction behavior type → CL mediation → design quality pathway. Second, the one-group pretest–posttest design introduces potential sequence effects, as prior design experience from Group T may have transferred to Group A, potentially biasing CL response measurements.
As an exploratory study, future research should address these limitations through three key avenues to strengthen validity and generalizability:
  • Large-sample validation to refine theoretical mechanisms: Expand critical reconstruction behavior typology with n ≥ 50 per group to better quantify how behavioral types moderate CL mediation; adopt between-group controls (independent Group T and Group A) or cross-over designs (half of the students complete Group T then Group A, and vice versa) to eliminate sequence effects, thereby strengthening causal inferences about AI’s impact on CL redistribution; and employ cross-lagged modeling to track temporal causality and further mitigate sequence effects [40].
  • Multimodal tools to deepen CL theory integration: Develop BERT-based algorithms to assess prompt logical chain integrity in real time, linking semantic similarity to GeCL and ExCL dynamics; integrate eye-tracking to quantify visual attention allocation, enhancing understanding of how AI interaction reshapes cognitive resource allocation patterns in CL theory.
  • Intervention tool refinement for educational practice: Develop intelligent prompt assistance systems with real-time feedback on critical reconstruction behaviors (e.g., iteration frequency alerts, cross-domain term suggestions), tailored to reduce ExCL and enhance GeCL in line with CL theory principles.

6. Conclusions

The Dual-Path teaching experiments in landscape design have confirmed that generative AI reshapes the stimulation of design thinking through the “externalization of logical chains”. In the traditional teaching path, InCL and ExCL exerted complete mediation (total effect = 0.569), resulting in a “high load–low innovation” dilemma. By contrast, the AI path overcame this limitation through a composite model combining a direct effect (0.329) with multiple mediators (total effect = 0.684)—reducing the mediating role of extraneous load (path coefficient from 0.907 to 0.017) and reallocating the freed cognitive resources to GeCL investment. Critical reconstruction behaviors formed a dual mechanism of “technology empowerment–behavior guidance”: groups with ≥5 iterations showed a 4.3-point improvement in design quality.
Typology analysis of reconstruction behaviors revealed that the exploratory group (5.77%) achieved dual optimization—10.2% ExCL reduction and 9.4% GeCL increase—through high-frequency iteration and cross-domain integration. In contrast, the conservative group reduced the AI path to traditional teaching modes due to inefficient interaction, confirming a “reconstruction threshold” effect in interaction behaviors.
By proposing context-specific strategies such as “three-tier AI prompt imitation writing” and “iterative-threshold prompting”, this study provides a preliminarily replicable pathway for design education, “outsourcing low-order cognition to reinforce higher-order thinking”, which supports the implementation of Sustainable Development Goal (SDG) 4.7 in design disciplines.
As a pioneering exploration of generative AI-assisted design thinking, this study’s contributions include (1) establishing a quantitative index system for “critical reconstruction behaviors” (e.g., iteration frequency, cross-domain term count) and constructing a novel mechanism of “prompt interaction → logical chain externalization → cognitive resource redistribution”, which offers measurement tools for large-sample research, and (2) identifying the “reconstruction threshold” effect in interaction behaviors, which—though requiring validation with larger samples—provides key parameters for developing AI teaching tools (e.g., iteration reminder systems).

Author Contributions

Conceptualization, Q.D. and H.L.; methodology, Q.D. and N.L.; software, Q.D. and N.L.; validation, N.L.; formal analysis, Y.Y. and N.L.; investigation, Q.D. and J.H.; data curation, Y.Y.; writing—original draft preparation, Q.D., Y.Y., J.H. and B.W.; writing—review and editing, Q.D., Y.Y. and H.L.; visualization, Q.D., J.H. and B.W.; project administration, H.L.; funding acquisition, Q.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Sichuan Province Social Science Key Research Base—Sichuan Center for Rural Development Research (NO. CR2426), the talent introduction program of Xihua University (NO. 2320048) and the 2025 University-level Educational and Teaching Reform Project of Xihua University—Innovative Exploration of the “Fundamentals of Landscape Design” course in the Environmental Design Major Assisted by Generative Artificial Intelligence.

Institutional Review Board Statement

Since this study does not involve human physiological experiments and is based solely on a questionnaire survey, the ethics committee of our institution has determined that an ethical review is not required and has issued a formal certificate to confirm the exemption.

Informed Consent Statement

Written informed consent has been obtained from the participants to publish this paper.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CL: Cognitive Load
AI: Artificial Intelligence
InCL: Intrinsic Cognitive Load
ExCL: Extraneous Cognitive Load
GeCL: Germane Cognitive Load
SEM: Structural Equation Model

Appendix A

1. Basic information
(1) How well do you understand landscape design theories? (Knowledge Base)
□ (1–7 rating scale, where 1 = novice, 7 = expert)
(2) What is your proficiency in hand-drawing/AI tool operation? (Skill Level)
□ (1–7 rating scale, where 1 = completely unfamiliar, 7 = highly proficient)
(3) Teaching intervention statistics
Table A1. Cognitive questionnaire of landscape design basic course (traditional method/AI-aided design).
Mentoring Support | Time Investment (Class Hours) | Case Resources (Number of Adopted Cases)
2. Cognitive load core scale
Please circle the number that best reflects your experience (1 = Strongly Disagree, 7 = Strongly Agree).
Example of the item: Understanding the core requirements of this design task makes me feel: 7 (Extremely difficult).
Table A2. Cognitive load questionnaire (traditional method/AI-aided design).
Intrinsic cognitive load
  • Task complexity: It is tough for me to understand the core requirements of this design task
  • Concept understanding difficulty: It is tough to understand the site and carry out the design (Group T) / It is tough to understand the logic of the AI generation scheme (Group A)
  • Difficulty of creative generation: The novel design plan makes me feel exhausted
Extraneous cognitive load
  • Tool operation burden: The technique of hand-drawing consumes a lot of my energy (Group T) / It is tough to operate the AI platform (Group A)
  • Information presentation problem: The clarity of design information (Group T) / The clarity of the AI feedback information (Group A)
  • Time pressure: Completing the scheme design within the prescribed time makes me anxious
Germane cognitive load
  • Sense of knowledge integration: I can apply the classroom theories to the actual design (Group T) / Understand the principles of AI generation schemes (Group A)
  • Sense of engagement in learning: When optimizing the plan, multiple possibilities were actively tried
  • Migration application confidence: The design experience gained this time will be of great help for future projects (Group T) / AI-assisted methods are of great help to future projects (Group A)
3. Path Comparison (AI-assisted group only)
(1) In which stages did AI assistance reduce cognitive load? (Multiple choices)
□ Idea generation □ Technical drawing □ Scheme optimization □ Standards and norms checking
(2) How did AI assistance affect your creative freedom compared to traditional methods?
□ Significantly decreased □ Slightly decreased □ No change □ Slightly increased □ Significantly increased
4. Open-Ended Questions (AI-assisted group only)
(1) Which step in the AI workflow was most mentally demanding? Why?
(2) How did your thinking mode change between the two design phases?

References

1. Eisner, E.W. Current Issues in Art and Design Education: Art Education Today: A Look at its Past and an Agenda for the Future. Int. J. Art Des. Educ. 2010, 8, 153–166.
2. Sazonova, M.V.; Mikhailova, L.V. Transformation of personalized educational trajectories of students of higher educational institutions in the context of the development of digital technologies. AIP Conf. Proc. 2024, 2969, 7.
3. Jing, Y. Present Situation and Trend of Diversified Development of Art Design Education System in China. In Proceedings of the International Conference on Arts, Moscow, Russia, 22–24 April 2015.
4. López-Galisteo, A.J.; Borrás-Gené, O. The Creation and Evaluation of an AI Assistant (GPT) for Educational Experience Design. Information 2025, 16, 117.
5. Meron, Y.; Araci, Y.T. Artificial intelligence in design education: Evaluating ChatGPT as a virtual colleague for post-graduate course development. Des. Sci. 2023, 9, e30.
6. Abdallah, Y.K.; Estévez, A.T. Biomaterials Research-Driven Design Visualized by AI Text-Prompt-Generated Images. Designs 2023, 7, 48.
7. Chen, Y.; Qin, Z.; Sun, L.; Wu, J.; Ai, W.; Chao, J.; Li, H.; Li, J. GDT Framework: Integrating Generative Design and Design Thinking for Sustainable Development in the AI Era. Sustainability 2025, 17, 372.
8. Chandler, P.; Sweller, J. Cognitive Load Theory and the Format of Instruction. Cogn. Instr. 1991, 8, 293–332.
9. Kalyuga, S. Cognitive Load Theory: How Many Types of Load Does It Really Need? Educ. Psychol. Rev. 2011, 23, 1–19.
10. Paas, F.; Renkl, A.; Sweller, J. Cognitive Load Theory and Instructional Design: Recent Developments. Educ. Psychol. 2003, 38, 1–4.
11. van Merriënboer, J.J.G.; Sweller, J. Cognitive load theory in health professional education: Design principles and strategies. Med. Educ. 2010, 44, 85–93.
12. Gkintoni, E.; Antonopoulou, H.; Sortwell, A.; Halkiopoulos, C. Challenging Cognitive Load Theory: The Role of Educational Neuroscience and Artificial Intelligence in Redefining Learning Efficacy. Brain Sci. 2025, 15, 203.
13. Yang, Y.; Leung, H.; Yue, L.; Deng, L. Generating a two-phase lesson for guiding beginners to learn basic dance movements. Comput. Educ. 2013, 61, 1–20.
14. Yavuz, S. Can Generative AI and ChatGPT Break Human Supremacy in Mathematics and Reshape Competence in Cognitive-Demanding Problem-Solving Tasks? J. Intell. 2025, 13, 43.
15. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press: Cambridge, MA, USA; London, UK, 1978; pp. 19–30, 40–51.
16. Guilford, J.P. The Nature of Human Intelligence; McGraw-Hill: Columbus, OH, USA, 1967.
17. Kamnerddee, C.; Putjorn, P.; Intarasirisawat, J. AI-Driven Design Thinking: A Comparative Study of Human-Created and AI-Generated UI Prototypes for Mobile Applications. In Proceedings of the 2024 8th International Conference on Information Technology (InCIT), Chonburi, Thailand, 14–15 November 2024.
18. Li, F.; Yan, X.; Su, H.; Shen, R.; Mao, G. An Assessment of Human–AI Interaction Capability in the Generative AI Era: The Influence of Critical Thinking. J. Intell. 2025, 13, 62.
19. Goldschmidt, G. The dialectics of sketching. Creat. Res. J. 1991, 4, 123–143.
20. Tang, C.; Lou, Y. Exploration of the Application of Experiential Teaching in the Enlightenment Phase of Architectural Education. Design 2025, 38, 75–79.
21. Wong, M.; Rios, T.; Menzel, S.; Ong, Y.S. Prompt Evolutionary Design Optimization with Generative Shape and Vision-Language Models. In Proceedings of the 2024 IEEE Congress on Evolutionary Computation (CEC), Yokohama, Japan, 30 June 2024.
22. Tan, L.; Luhrs, M. Using Generative AI Midjourney to enhance divergent and convergent thinking in an architect's creative design process. Des. J. 2024, 27, 677–699.
23. Liu, B.; Wang, W.; Liu, X.; Wu, J. Research on Innovative Design of Interactive Learning Environment for Architectural Education Integrating Artificial Intelligence and Virtual Reality Technology. Appl. Math. Nonlinear Sci. 2024, 9, 1.
24. Hubert, K.F.; Awa, K.N.; Zabelina, D.L. The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks. Sci. Rep. 2024, 14, 3440.
25. Merriam, S.B. Qualitative Research and Case Study Applications in Education. Br. Educ. Res. J. 1998, 41, 287–302.
26. Chen, Y.; Zhang, L.; Yin, H. A Longitudinal Study on Students' Foreign Language Anxiety and Cognitive Load in Gamified Classes of Higher Education. Sustainability 2022, 14, 10905.
27. He, S.; Wu, J.; Qin, J. How Do Designers Collaborate with AI: Teaching Practice and Reflection on AIGC Empowered Design Workshop. Art Des. Res. 2025, 1, 123–129.
28. Elsom-Cook, M.T. Environment Design and Teaching Intervention; Springer: Berlin/Heidelberg, Germany, 1993.
29. Lu, D.; Xie, Y. The Intervention Model and Strategies of Blended Learning Environment with Orientation of Critical Thinking: A Design-Based Research. J. Northeast. Norm. Univ. (Philos. Soc. Sci.) 2020, 3, 143–151.
30. Landa-Blanco, M.; Cortés-Ramos, A. Psychology students' attitudes towards research: The role of critical thinking, epistemic orientation, and satisfaction with research courses. Heliyon 2021, 7, e08504.
31. Hart, S.G. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908.
32. Lox, C.L.; Jackson, S.; Tuholski, S.W.; Wasley, D.; Treasure, D.C. Revisiting the Measurement of Exercise-Induced Feeling States: The Physical Activity Affect Scale (PAAS). Meas. Phys. Educ. Exerc. Sci. 2000, 4, 79–95.
33. Xiao, G.; Zhang, J. On Comprehensive Assessment System of Landscape Architecture Planning. J. Southwest Agric. Univ. (Soc. Sci. Ed.) 2006, 1, 150–155.
34. Duan, X.; Lin, B.; Meng, L.; Zhao, F. A Method for Selecting and Optimizing Pocket Park Design Proposals Based on Multi-Attribute Decision Making. Buildings 2025, 15, 1026.
35. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55.
36. Byrne, B.M. Testing for Multigroup Invariance Using AMOS Graphics: A Road Less Traveled. Struct. Equ. Model. A Multidiscip. J. 2004, 11, 272–300.
37. Bie, F.; Yang, Y.; Zhou, Z.; Ghanem, A.; Zhang, M.; Yao, Z.; Wu, X.; Holmes, C.; Golnari, P.; Clifton, D.A. RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 47, 2212–2231.
38. Leahy, W.; Sweller, J. The imagination effect increases with an increased intrinsic cognitive load. Appl. Cogn. Psychol. 2010, 22, 273–283.
39. Bola, M.; Borchardt, V. Cognitive Processing Involves Dynamic Reorganization of the Whole-Brain Network's Functional Community Structure. J. Neurosci. 2016, 36, 3633.
40. Yuan, S.; Cao, W.; Zhang, M.; Wu, S.; Wei, X. The Road to a More Accurate Causal Analysis: New Advances in Cross-lagged Panel Models. Hum. Resour. Dev. China 2021, 38, 23–41.
Figure 1. Research framework.
Figure 2. Current site conditions (based on campus WeChat official account imagery).
Figure 3. (a) Design quality scores of criterion layers (B1–B4) and total score; (b) CL questionnaire results.
Figure 4. Correlations between critical reconstruction behavior indicators and design quality scores.
Figure 5. SEM standardized path coefficients.
Table 1. Dual-Path teaching implementation process.
Stage | Traditional Group (Group T) | AI-Assisted Group (Group A) | Key Points of Control
Tool limitations | Use case collections | DeepSeek generates design mind maps only | Standardize the degree of technological intervention
Prompt text construction | - | Three-stage method: basic layer (site + function); expansion layer (style + parameters); challenge layer (cross-domain integration) | Prompts must include professional terms (e.g., viewing platforms, flower beds), and cross-domain integration (e.g., aging, narratology) is recommended
Concept generation | Linear association produces 2–3 directions in 8 class hours; one-way teacher comments (e.g., "Pedestrian road width should be 2 m") | AI generates three logical chains; one direction is screened for deepening; the teacher provides prompt optimization guidance (e.g., "Add 'flea market' to strengthen functionality") | AI intervention is prohibited in Group T; the AI path enforces idea guidance
Scheme deepening | Manual optimization emphasizes proportion and hand-drawn expression | Complete drawing expression combined with AI-generated concepts | Group A is prohibited from using generated graphics directly
Outcome form | A3 hand-drawn floor plan + renderings | A3 hand-drawn floor plan + renderings + prompt text log | Designs must mark AI contributions
Data collection | Student CL scale, expert scoring; Group A's prompt logs | CL measurements were administered at regular intervals to minimize memory interference, and expert evaluations were conducted anonymously
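To make the Group A workflow concrete, the three-stage prompt structure in Table 1 can be assembled programmatically. The sketch below is illustrative only: the function name, parameters, and example wording are hypothetical, and the assembled text is simply what a student would paste into the DeepSeek chat interface.

```python
# Minimal sketch of the three-stage prompt construction used by Group A (Table 1).
# The layer structure (basic / expansion / challenge) follows the table; the
# helper name and example wording are illustrative, not the study's exact prompts.

def build_prompt(site: str, functions: list[str], style: str,
                 parameters: dict[str, str], cross_domain: list[str]) -> str:
    basic = f"Site: {site}. Required functions: {', '.join(functions)}."
    expansion = (f"Style: {style}. Parameters: "
                 + "; ".join(f"{k} = {v}" for k, v in parameters.items()) + ".")
    challenge = "Cross-domain integration: " + ", ".join(cross_domain) + "."
    return "\n".join([basic, expansion, challenge])

prompt = build_prompt(
    site="campus pocket park",
    functions=["viewing platform", "flower beds", "flea market"],
    style="naturalistic planting",
    parameters={"pedestrian road width": "2 m"},
    cross_domain=["aging-friendly design", "narratology"],
)
print(prompt)  # text sent to the DeepSeek chat interface as the design brief
```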
Table 2. Intervention variables in the teaching process (Cronbach's α = 0.709).
Variable Name | Teaching Path | Specific Definition | Quantification Method
Mentoring support | Group T | Number of times teachers provided guidance on traditional design methods (e.g., hand-drawing, sketching) | Count of guidance sessions
Mentoring support | Group A | Number of times teachers provided guidance on the prompt text for AI tools | Count of guidance sessions
Time investment | Group T | Total class hours spent on the design (from concept to initial scheme) | Recorded in units of class hours, to one decimal place
Time investment | Group A | Total class hours spent on the design (from prompt text writing to initial scheme) | Recorded in units of class hours, to one decimal place
Skill level | Group T | Proficiency in using hand tools (e.g., pencils, markers) for design tasks | 7-point scale (1 = completely unfamiliar, 7 = very proficient)
Skill level | Group A | Proficiency in AI tool (Midjourney, Stable Diffusion) operation and parameter setting | 7-point scale (1 = completely unfamiliar, 7 = very proficient)
Base of knowledge | General | Depth of mastery of landscape design theories and norms
Case resources | Group T | Number of traditional landscape design cases (hand-drawn plans, physical projects) consulted | Count based on the actual number of cases consulted
Case resources | Group A | Number of reference cases generated or retrieved via AI tools during design | Count based on the actual number of cases consulted
Table 3. Critical reconstruction behavior indicators (Cronbach's α = 0.659).
Observed Variable | Operational Definition | Measuring Tools and Methods | Data Source
Number of iterations | Number of prompt revisions per task | Prompt modification history log (Group A)
Word frequency | Total word count in prompts (excluding pronouns, adverbs, etc.) | Text word segmentation analysis | Prompt text corpus (statistics based on the entire prompt text)
Number of professional terms | Number of landscape design-specific terms | Design professional vocabulary matching (e.g., "terrain undulation", "habitat creation") | Manual coding combined with Python (Python Software Foundation, version 3.11.4, Wilmington, DE, USA) regular expressions
Number of cross-domain words | Number of non-landscape terms | LDA topic modeling to identify non-landscape topics (e.g., "technology", "art") | Prompt text semantic analysis report
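The vocabulary-matching step in Table 3 can be approximated with a few lines of Python regular expressions. The term lists and function below are hypothetical examples; the study's full coding scheme and the LDA topic-modeling step are not reproduced here.

```python
import re

# Illustrative sketch of the regular-expression vocabulary matching described in
# Table 3. The term lists below are examples, not the study's full coding scheme;
# the LDA step for identifying cross-domain topics is omitted here.

PROFESSIONAL_TERMS = ["terrain undulation", "habitat creation", "viewing platform", "flower bed"]
CROSS_DOMAIN_TERMS = ["digital art", "narratology", "aging-friendly"]

def count_terms(prompt_text: str, vocabulary: list[str]) -> int:
    """Count how many vocabulary terms occur at least once in a prompt."""
    text = prompt_text.lower()
    return sum(1 for term in vocabulary if re.search(re.escape(term.lower()), text))

log_entry = "Add terrain undulation and a viewing platform inspired by digital art."
print(count_terms(log_entry, PROFESSIONAL_TERMS))   # -> 2
print(count_terms(log_entry, CROSS_DOMAIN_TERMS))   # -> 1
```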
Table 4. Explanation of the CL variables.
Latent Variable | Definition | Dimensions | Measuring Tools | Example Items (1–9 Points) | Cronbach's α
InCL | Load determined by the task's inherent complexity | Task complexity (InCL1); concept understanding difficulty (InCL2); difficulty of creative generation (InCL3) | Adapted from the NASA-TLX and Paas scales [31,32] | "Understanding the core requirements of this design task was tough for me." | 0.742
ExCL | Load resulting from inefficient information presentation or chaotic task organization | Tool operation burden (ExCL1); information presentation problem (ExCL2); time pressure (ExCL3) | Adapted from the NASA-TLX and Paas scales [31,32] | "The hand-drawn expression techniques have consumed a lot of my energy."; "It is tough to operate the AI platform."; "The clarity of the AI feedback information." | 0.706
GeCL | Load invested in deep knowledge processing and creative integration | Sense of knowledge integration (GeCL1); sense of engagement in learning (GeCL2); transfer application confidence (GeCL3) | Adapted from the NASA-TLX and Paas scales [31,32] | "When optimizing the plan, I actively tried multiple possibilities."; "AI-assisted methods will be of great help for future projects." | 0.787
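The subscale reliabilities reported in Tables 2–4 are standard Cronbach's α coefficients. A minimal sketch of the computation is shown below; it uses numpy and synthetic Likert responses rather than the study's questionnaire data.

```python
import numpy as np

# Standard Cronbach's alpha, as reported for each CL subscale in Table 4.
# The response matrix below is synthetic; the study's raw questionnaire data
# are not reproduced here.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of 1-9 Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 10, size=(52, 1))       # shared trait component (hypothetical sample)
noise = rng.integers(-1, 2, size=(52, 3))      # item-level noise for three items
responses = np.clip(base + noise, 1, 9).astype(float)
print(round(cronbach_alpha(responses), 3))
```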
Table 5. Landscape design quality evaluation system (AHP-validated weights).
Target layer: Design quality evaluation.
Criterion Layer (No.) | Indicator (No.) | Weight | Scoring Instructions
Functionality (B1) | Functional rationality (C1) | 0.1959 | Clarity of the main functional zoning, organization of pedestrian flow, etc.
Functionality (B1) | Site adaptability (C2) | 0.1438 | Rationality of terrain elevation treatment, extent of existing vegetation retention and utilization, etc.
Technical specification (B2) | Drawing standard (C3) | 0.1025 | Accuracy of scale and dimensioning, standardized use of legend symbols, etc.
Technical specification (B2) | Engineering feasibility (C4) | 0.1367 | Road turning radii comply with standards, planting spacing is reasonable, etc.
Aesthetic expression (B3) | Aesthetics of plane composition (C5) | 0.0716 | Coordination of the figure-ground relationship, sense of rhythm between solid and void spaces, etc.
Aesthetic expression (B3) | Visual effect (C6) | 0.1570 | Clear drawing hierarchy and a harmonious, unified color system.
Innovativeness (B4) | Conceptual originality (C7) | 0.0353 | Uniqueness of the design theme, degree of innovation of the spatial prototype, etc.
Innovativeness (B4) | Formal language innovation (C8) | 0.0623 | Novelty of geometric form composition, originality of landscape feature design, etc.
Innovativeness (B4) | Drawing depth (C9) | 0.0949 | Completeness of enlarged drawings of key nodes, matching degree between design descriptions and drawings, etc.
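With the AHP weights in Table 5 (which sum to 1), a scheme's total design-quality score is a weighted sum of its indicator scores. The sketch below illustrates the aggregation using the published C1–C9 weights and hypothetical expert scores; in the study, each indicator score came from anonymous expert evaluation.

```python
# Aggregating the design-quality score with the AHP weights from Table 5.
# The indicator scores below are hypothetical placeholders.

WEIGHTS = {
    "C1": 0.1959, "C2": 0.1438,                 # Functionality (B1)
    "C3": 0.1025, "C4": 0.1367,                 # Technical specification (B2)
    "C5": 0.0716, "C6": 0.1570,                 # Aesthetic expression (B3)
    "C7": 0.0353, "C8": 0.0623, "C9": 0.0949,   # Innovativeness (B4)
}

def design_quality(scores: dict[str, float]) -> float:
    """Weighted total on a 0-100 scale, given expert scores per indicator (0-100)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

example_scores = {c: 70.0 for c in WEIGHTS}       # a uniform hypothetical scheme
print(round(design_quality(example_scores), 2))   # weights sum to 1.0, so -> 70.0
```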
Table 6. Path analysis of SEM.
Grouping | Independent Variable (X) | Influence Path | Mediating Variable | Path Coefficient β (X → M1) | Path Coefficient β (M1 → Y/M2) | Path Coefficient β (M2 → Y) | Total Indirect Effect | Effect Type
Group T | Teaching intervention | Path 1 | ExCL | 0.907 *** | −0.991 ** | - | −0.899 | Complete mediation
Group T | Teaching intervention | Path 2 | GeCL | 0.852 *** | 0.021 | - | 0.018 | Not significant
Group T | Teaching intervention | Path 3 | InCL | 0.944 *** | 0.982 * | - | 0.927 | Complete mediation
Group A | Teaching intervention | Path 1 | ExCL | 0.017 * | −0.003 *** | - | −0.0001 | Partial mediation
Group A | Teaching intervention | Path 2 | GeCL | 0.598 *** | 0.013 | - | 0.008 | Not significant
Group A | Teaching intervention | Path 3 | InCL | 0.45 ** | 0.689 * | - | 0.31 | Partial mediation
Group A | Teaching intervention | Path 4 | Critical reconstruction behavior | 0.314 * | 0.054 | - | 0.017 | Not significant
Group A | Teaching intervention | Path 5 | ExCL | 0.332 * | −0.003 *** | - | −0.0001 | Partial mediation
Group A | Teaching intervention | Path 6 | GeCL | 0.046 | 0.013 | - | 0.0014 | Not significant
Group A | Teaching intervention | Path 7 | InCL | 0.353 * | 0.689 * | - | 0.01 | Partial mediation
Direct effect on design quality (Y): Group T = 0.55; Group A = 0.329 *. Total effect: Group T = 0.596; Group A = 0.684. Total indirect effect = β(X → M1) × β(M1 → M2) × β(M2 → Y).
* p < 0.05, ** p < 0.01, *** p < 0.001; total effect = direct effect + indirect effect [37].
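As a consistency check, the Group T indirect and total effects in Table 6 can be re-derived from the reported coefficients using the product-of-coefficients rule stated in the footnote. The snippet below only reproduces the tabulated values; the SEM itself was estimated separately and is not refit here.

```python
# Product-of-coefficients check for Table 6. Each indirect effect is the product
# of the path coefficients along its mediation path; the total effect is the
# direct effect plus the summed indirect effects (see footnote). Values are the
# reported Group T coefficients.

direct = 0.55
paths = {
    "ExCL": (0.907, -0.991),   # (beta X -> M1, beta M1 -> Y)
    "GeCL": (0.852, 0.021),
    "InCL": (0.944, 0.982),
}

indirect = {m: a * b for m, (a, b) in paths.items()}
total = direct + sum(indirect.values())

print({m: round(v, 3) for m, v in indirect.items()})  # {'ExCL': -0.899, 'GeCL': 0.018, 'InCL': 0.927}
print(round(total, 3))                                # -> 0.596, matching Table 6
```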
Table 7. Types of critical reconstruction behaviors and corresponding characteristics.
Critical Reconstruction Behavior Type | Quantity (Proportion) | Screening Criteria (All Must Be Met Simultaneously) | ExCL Mean | GeCL Mean | Design Quality Mean | Typical Case Characteristics
Exploratory type | 3 (5.77%) | Iterations ≥ 6; professional terms ≥ 20; cross-domain terms ≥ 6 | 4.44 ± 0.38 | 5.11 ± 0.69 | 82.33 ± 14.22 | Introduced "digital art" to generate interactive light installations
Efficient type | 11 (21.15%) | Iterations 4–5; professional terms 11–19; cross-domain terms 4–5 | 4.12 ± 0.43 | 4.73 ± 0.99 | 68.18 ± 8.75 | Focused on professional terms to complete terrain design efficiently
Conservative type | 5 (9.62%) | Iterations ≤ 3; professional terms ≤ 10; cross-domain terms ≤ 3 | 4.0 ± 0.62 | 4.67 ± 0.91 | 66.4 ± 3.65 | Relied on traditional terms (e.g., conventional flower bed design)
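The screening criteria in Table 7 amount to simple threshold rules over three prompt-log metrics. A hypothetical classifier implementing these rules is sketched below; prompt logs that satisfy none of the three rule sets fall outside the typology reported in the table.

```python
# Minimal sketch of the screening rules in Table 7 for classifying Group A
# students' prompt logs. Thresholds are taken directly from the table; the
# function name and input format are illustrative.

def classify_behavior(iterations: int, professional_terms: int, cross_domain_terms: int) -> str:
    if iterations >= 6 and professional_terms >= 20 and cross_domain_terms >= 6:
        return "Exploratory"
    if 4 <= iterations <= 5 and 11 <= professional_terms <= 19 and 4 <= cross_domain_terms <= 5:
        return "Efficient"
    if iterations <= 3 and professional_terms <= 10 and cross_domain_terms <= 3:
        return "Conservative"
    return "Unclassified"  # prompt logs meeting none of the three rule sets in full

print(classify_behavior(iterations=7, professional_terms=22, cross_domain_terms=6))  # Exploratory
print(classify_behavior(iterations=3, professional_terms=8, cross_domain_terms=2))   # Conservative
```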