Article

Bridging AI Education and Sustainable Development: Design-Based Research on First-Year Undergraduates’ Systems Analysis for Habitat Conservation

1 School of Computer and Information Engineering, Xiamen University of Technology, Xiamen 361024, China
2 Xiamen Key Laboratory of Data Mining and Recommendation, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Sustainability 2026, 18(4), 1812; https://doi.org/10.3390/su18041812
Submission received: 26 November 2025 / Revised: 13 January 2026 / Accepted: 5 February 2026 / Published: 10 February 2026

Abstract

Against the backdrop of global challenges such as climate change and biodiversity loss, sustainable development has become the core orientation of engineering education transformation. Cultivating talent with interdisciplinary perspectives, systems thinking, and AI literacy is crucial for implementing the UN 2030 Agenda for Sustainable Development. However, AI education has focused predominantly on senior undergraduates or graduate students, and how first-year students can use AI as a “cognitive partner” for knowledge construction and complex problem-solving remains understudied, constraining AI’s potential to foster systems thinking early. We present a novel teaching practice that integrates generative AI into an “AI-Environmental Systems Analysis” module, with Sousa chinensis habitat conservation as the case. Using a design-based research paradigm, we evaluated 24 student groups via systems analysis briefs, AI usage reflections, and course assessment data. The results show that the module effectively guided students to establish preliminary systems analysis frameworks, with over 70% of groups identifying complex interactions among environmental factors. Students’ AI applications ranged from information retrieval to scenario simulation, forming initial systems thinking and responsible AI literacy for sustainable development. This study provides a replicable paradigm for integrating AI and sustainable development education, clarifies the key role of structured instructional scaffolding, and enriches sustainability-oriented engineering education pathways.

1. Introduction

1.1. General Introduction

Human society currently faces a series of severe global challenges, including climate change and rapid biodiversity loss [1]. Addressing these challenges and achieving the United Nations 2030 Sustainable Development Goals (SDGs) requires future talent equipped with interdisciplinary knowledge and systemic problem-solving capabilities [2]. This demand is driving profound transformations in the international paradigm underlying higher engineering education [3]. Accordingly, authoritative international engineering accreditation systems, such as the Washington Accord, have incorporated the use of “modern tools to solve complex engineering problems” as a core competency requirement for graduates [4]. Generative artificial intelligence (AI) technology has rapidly developed in recent years [5]. Leading global universities are actively integrating AI tools into their curricula, aiming far beyond basic skill training. Instead, they seek to stimulate students’ critical thinking, innovative awareness, and ability to navigate uncertainty through human–machine collaboration [6,7]. Recent first-year engineering pilots also report a shift from using GenAI for knowledge acquisition toward problem framing and early professional identity building [8].
However, research on AI education applications has predominantly focused on senior undergraduate or graduate students, often emphasizing complex algorithm development, sophisticated model building, or advanced specialized software applications within specific disciplines [9,10]. This bias largely stems from the inertia of “traditional engineering education models”, which we define in this study as pedagogical approaches that strictly isolate technical tool learning (e.g., software operations, syntax) from professional context application, relying on passive knowledge transmission [11,12]. Under this traditional paradigm, first-year students are often deemed “not ready” for complex problem-solving. Thus, research on how they can systematically integrate AI as a “cognitive partner” and “thinking catalyst” that assists in knowledge construction and complex problem analysis, rather than as a passive tool for information retrieval or text generation, remains scarce [13]. This research gap has, to some extent, constrained the immense potential of AI technology in fostering students’ professional identity and establishing systematic thinking patterns that may serve them throughout their lives [14].
Against this background, we aimed to explore the following questions: (1) How can an AI-driven interdisciplinary “AI-Environmental Systems Analysis” teaching module be scientifically designed and effectively implemented for first-year environmental engineering and ecological engineering students? (2) What specific impacts will this innovative teaching practice have on first-year students in defining complex problems, constructing systematic analytical frameworks, and developing future-oriented AI literacy? Building upon teaching practices conducted during the Fall semester 2025, we systematically evaluated the effectiveness of the teaching module through multidimensional analysis of student projects, reflective journals, and assessment data. We contextualized our research within the real-world case of “Sousa chinensis Habitat Conservation”, a locally relevant and professionally representative issue, and aimed to provide a robust empirical foundation and actionable model for advancing AI-enabled interdisciplinary teaching for first-year undergraduate students globally.

1.2. Literature Review

1.2.1. Theoretical Evolution of Education for Sustainable Development and Interdisciplinary Learning

As a pioneering educational philosophy, “Education for Sustainable Development” focuses on integrating social equity, economic viability, and environmental protection throughout the entire teaching and learning process [15]. It aims to cultivate students’ ability to understand and actively address the complex, interwoven challenges of the real world [16]. This educational philosophy strongly advocates for interdisciplinary learning approaches because core issues such as ecosystem management and clean energy transition cannot be effectively addressed through the isolated knowledge of any single discipline [17]. In engineering education, this trend necessitates breaking down traditional disciplinary barriers [18] to create learning environments that enable students to integrate knowledge from ecology, sociology, economics, engineering, and other fields [19]. Breaking down disciplinary barriers also involves cultivating critical skills for effective communication and collaboration with stakeholders from diverse knowledge backgrounds [20].
More importantly, this innovative teaching practice aligns closely with the United Nations Sustainable Development Goals (SDGs) [21]. UNESCO has consistently emphasized in recent years that higher education institutions bear the social responsibility of cultivating future leaders for sustainable development [22]. By integrating cutting-edge AI technology into the curriculum to foster future-ready engineering talents, we actively respond to SDG 4 (Quality Education) and SDG 9 (Industry, Innovation, and Infrastructure). Furthermore, by anchoring the course content in the conservation of the Sousa chinensis and its habitat, we guide students to directly engage with SDG 14 (Life Below Water), promoting awareness and solutions for marine biodiversity protection.

1.2.2. Systems Thinking: A Core Framework for Analyzing Complex Environmental Issues and Teaching Practices

Systems thinking has been widely recognized as a foundational capability for understanding and addressing sustainable development challenges [23]. It requires learners to move beyond rote memorization of isolated facts and instead focus on the dynamic interrelationships among components within a system, non-linear feedback loops, and the complex behaviors that emerge from the system as a whole [24]. Within environmental science and engineering, conceptualizing specific environmental issues (such as the conservation of particular species) as complex adaptive systems formed by the continuous interaction and co-evolution of natural subsystems (e.g., hydrological, ecological, geological) and human subsystems (e.g., policy, economic, cultural) [25,26] has become a mainstream cognitive paradigm. Compared with more theoretically rigorous and constructively demanding “Explanatory Models” [27], “Systems Analysis” offers a more accessible conceptual entry point for academically novice undergraduates [28,29]. It enables them to conduct preliminary yet meaningful systemic analyses of real-world problems using visualization tools such as causal loop and stock-and-flow diagrams [30,31]. Research indicates that early training in systems thinking significantly enhances students’ ability to understand complex issues and their creative problem-solving skills in subsequent specialized studies [32].
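The stock-and-flow perspective can be made concrete with a few lines of code. The following minimal sketch (all parameter values are invented for illustration, not empirical estimates for Sousa chinensis) shows how a single population “stock” responds to a balancing interplay between reproduction and human pressure — the kind of toy model a first-year student can reason about alongside a causal loop diagram:

```python
# Minimal stock-and-flow sketch: one population "stock" whose net flow
# is births minus deaths; `pressure` (0-1) adds mortality from shipping
# noise and habitat loss. All parameter values are illustrative only.

def simulate(years=20, pop0=1000.0, birth_rate=0.05,
             base_death_rate=0.03, pressure=0.2):
    """Euler-step the toy model one year at a time."""
    pop = pop0
    trajectory = [pop]
    for _ in range(years):
        births = birth_rate * pop
        deaths = (base_death_rate + 0.04 * pressure) * pop
        pop += births - deaths
        trajectory.append(pop)
    return trajectory

# Compare a business-as-usual scenario with a strict-protection scenario.
bau = simulate(pressure=0.8)        # net growth rate 0.02 - 0.032 < 0
protected = simulate(pressure=0.1)  # net growth rate 0.02 - 0.004 > 0
```

In this toy model, the net growth rate crosses zero at pressure = 0.5, so the two scenarios diverge qualitatively — exactly the kind of threshold behavior that causal loop and stock-and-flow diagrams are meant to surface.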

1.2.3. Positioning AI as a Cognitive Augmentation and Critical Thinking Tool

AI, particularly generative AI, is undergoing a paradigm shift in education from being a “supportive tool” to becoming a “cognitive partner.” This transformation can be summarized as a transition from “tool usage” to “intelligence augmentation” [33]. Based on its robust natural language understanding and content generation capabilities, generative AI demonstrates immense potential as a powerful cognitive partner. It assists students in efficiently processing vast information, generating and preliminarily testing multiple hypotheses, and practicing logical reasoning, which can significantly expand their cognitive boundaries [34,35]. However, achieving this deep, constructive human–machine collaboration demands new competencies from students: prompt engineering capabilities and the ability to correctly interpret AI outputs [36]. This reflective, responsible, and ethical engagement with AI constitutes a core component of AI literacy, an essential competency for 21st-century citizens and future engineers [37]. A significant frontier in current educational research lies in exploring how to organically embed the cultivation of these competencies into the specialized curricula of various disciplines [38].
However, integrating AI is not without risks. Recent scholarship cautions against the “automation of thought,” where over-reliance on generative AI may lead to superficial learning and a decline in epistemic diligence. If students treat AI as an oracle rather than a fallible source, they risk inheriting the model’s hallucinations and biases. Therefore, a critical pedagogical challenge lies in designing interventions that leverage AI’s processing power while enforcing human critical oversight.

2. Materials and Methods

2.1. Module Design Philosophy: AI-Empowered Systems Analysis

The overarching design philosophy of this teaching module is “AI-Empowered Systems Analysis” [39]. Its core objective is to guide first-year students in applying a systems theory perspective to understand a real-world regional environmental issue (“Habitat Change of the Sousa chinensis”) as a dynamic, open, and complex system. This analytical framework (Figure 1) emphasizes a holistic view, requiring students not only to identify key components within the system (e.g., water temperature, depth, and turbidity in the physical environment; prey abundance and competitors in the biological environment; shipping traffic, fishing activities, coastal pollution, and tourism development in the human-social environment), but also to delve into the intricate interactions among these elements (e.g., shipping activities not only pose direct collision risks but also generate underwater noise that masks the dolphins’ sonar signals, disrupting their communication, navigation, and foraging behaviors). In this process, generative AI tools are strategically positioned as “cognitive partners” and “force multipliers” that assist students in executing critical steps of systems analysis, such as large-scale information integration and summarization, multidimensional relationship mapping and visualization, and future scenario simulation and evaluation [40].
The teaching objectives of this module closely align with the core requirements of the “Fundamentals of Artificial Intelligence” course syllabus, i.e., cultivating and improving students’ computational thinking and modern tool application skills in systems analysis and human–machine collaboration. Specifically, upon completing this module, students are expected to (1) employ fundamental systems thinking methods to deconstruct and analyze complex real-world environmental problems; (2) proficiently and critically utilize generative AI tools to assist in information gathering, data analysis, concept visualization, and report writing; and (3) gain a detailed understanding of the strengths and limitations of AI technology, thus developing the habit and ability to responsibly use and evaluate AI-generated content.
In the context of this module, we operationalize Critical Thinking not merely as a general cognitive disposition, but specifically as “AI-mediated Evaluative Judgement.” This involves two specific competencies: (1) Skepticism: The ability to question the provenance and accuracy of AI-generated information [41]. (2) Triangulation: The proactive habit of cross-referencing AI outputs with external authoritative data sources (e.g., government reports, scientific literature) before acceptance.

2.2. Sequence of Instructional Activities

This teaching practice spans four weeks and is embedded as a core module within the overall framework of the “Fundamentals of Artificial Intelligence” course. Its activity sequence has been carefully designed into two interconnected phases of progressively increasing difficulty (Figure 2).
  • Phase I: AI Tool Exploration and Problem Framework Establishment (1 week)
This phase aligns with the “AI-Assisted Office Work” section of the syllabus. Under faculty guidance, students first become familiar with the basic operations of one or two mainstream generative AI tools. The core tasks explicitly go beyond simple functional tool operation, instead focusing on using AI to accomplish two foundational tasks through carefully designed and continuously iterated prompt engineering:
Information Gathering and Synthesis: Students design a series of prompts to guide AI in extracting and synthesizing a concise, structured summary report based on diverse online sources. This report covers the core ecological habits of the Sousa chinensis, its primary threats (e.g., fisheries bycatch, underwater noise pollution, habitat loss/fragmentation, water pollution), and existing domestic and international conservation measures and policies.
Preliminary System Structure Exploration and Visualization: After obtaining initial information, students further design prompts instructing the AI to create a conceptual or system relationship diagram of “White Dolphin Habitat Impact Factors” based on the aforementioned summary. This visualizes causal, correlative, and other interactive relationships among different factors, laying a visual foundation for the subsequent in-depth analysis.
  • Phase II: Interdisciplinary Project Practice (3 weeks)
This phase corresponds to the “Integrated Project Practice” segment in the course syllabus. Ninety-four students were grouped by disciplinary background into 24 research teams (three to four members per team), each selecting one of two distinct project directions for in-depth exploration:
Project Track A (Scenario Narrative and Science Communication): Teams selecting this track are required to leverage the generative capabilities of AI tools to construct scenario narratives that depict potential future trajectories of the Sousa chinensis population over the next 20 years under varying human intervention and management strategies (e.g., implementing strict vessel speed zones, establishing and effectively managing marine protected areas, rigorously controlling total land-based pollutant emissions). The final deliverable is a 3–5 min science communication video featuring a clear narrative logic that balances scientific rigor with emotional resonance. This track focuses on developing students’ systems thinking, imagination, and science communication skills.
Project Track B (Data-Driven Decision Support): Teams selecting this track are required to leverage the data processing and analytical capabilities of AI tools to conduct exploratory correlation analyses on publicly available shipping automatic identification system (AIS) data [42], long-term coastal water quality monitoring data, and historical white dolphin observation records [43]. The final deliverable is a decision support briefing note submitted to local marine management authorities, presenting actionable conservation management recommendations grounded in data evidence. This track focuses on developing students’ data literacy, evidence-based logical reasoning, and ability to translate scientific knowledge into policy language.
The final assessment deliverable for all teams is a structurally complete Systems Analysis Briefing Note (Supplementary S1), which must include four core components: (1) a systems relationship diagram reflecting the complexity of the issue; (2) an in-depth analysis of key influencing factors; (3) a detailed description of the specific AI application methods used in the analysis process, including example prompts and key AI outputs; and (4) preliminary conservation recommendations or a reasoned outlook for future scenarios based on the above analysis.

2.3. Research Methodology

In this study, we adopted the Design-Based Research (DBR) paradigm, which aims to develop effective instructional intervention strategies in authentic educational settings through an iterative cycle of design, implementation, analysis, and refinement [44,45,46]. DBR has become a mainstream method for evaluating innovative instructional models in educational technology. The iterative cycle of DBR effectively captures dynamic issues in teaching, enabling timely adjustments to design plans to ensure both practical applicability and theoretical innovation of research outcomes [47]. We specifically focused on evaluating the effectiveness of the first round of teaching. Subsequent iterations will be conducted based on these results in the future, thus forming a complete closed-loop cycle of “design–implementation–evaluation–improvement.”

2.3.1. Participants

The participants in this study were first-year students enrolled in the “Fundamentals of Artificial Intelligence” elective course during the fall semester of 2025, comprising one class of Environmental Engineering majors (49 students) and one class of Ecological Engineering majors (45 students), totaling 94 participants. These 94 students were grouped by major into 24 research teams. Details regarding participant demographics and the research timeline are presented in Table 1.

2.3.2. Data Sources and Collection

(1) Core Qualitative Data: Final system analysis briefs submitted by all 24 groups at project completion (24 electronic documents).
(2) Supplementary Qualitative Data: Each student independently submitted a “Reflection Report on AI Usage Process” (Supplementary S2) of no fewer than 500 words. To reduce purely subjective reporting, students were required to include representative prompt excerpts and corresponding AI outputs, and to describe at least one cross-check/triangulation action (e.g., comparing sources or models). This report focused on describing the primary challenges encountered in the group project when using AI, the prompt optimization strategies employed, the process for assessing the reliability of AI outputs, and personal reflections on the experience.
(3) Quantitative Course Assessment Data: Two comprehensive case analysis questions were specifically designed before the course concluded to evaluate students’ propensity to apply systems thinking in analyzing problems within novel contexts.

2.3.3. Data Analysis Methods

(1) Qualitative Content Analysis of System Analysis Briefs: Drawing upon the standard thematic analysis framework presented by Ahmed et al. [48], we developed a three-dimensional approach aligned with our research objectives:
  • Complexity of System Structure: We counted and analyzed the total number of distinct impact factors identified in the briefs, along with the diversity of factor types (natural/human-made). We focused on evaluating the quality of descriptions regarding inter-factor relationships, including whether the direction and strength of interactions were accurately indicated, and whether systemic features such as indirect effects and feedback loops were recognized. Two researchers conducted the assessment, resolving discrepancies through discussion to ensure consistent results [49].
  • Argument Rigor: We evaluated whether the core arguments in the briefs were effectively supported by diverse evidence [50]. Evidence may include AI-generated information cross-validated by students, cited authoritative public data, sound logical reasoning chains, or simple computational analysis [51,52].
  • Depth of AI Application: We classified students into three levels based on their demonstrated AI usage in briefings and reflections [53]: (Level 1) Information retrieval and simple summarization; (Level 2) Data interpretation, relationship mapping, and chart generation; (Level 3) Scenario simulation, multi-scenario comparison, and creative content generation. To strengthen pedagogical interpretability in sustainability-oriented engineering education, we additionally align our three levels with post-LLM integration perspectives [54].
(2) Quantitative Data Analysis: Basic descriptive statistics (e.g., mean scores, achievement rates, score distributions) were performed for questions related to systems thinking in the final assessment. This served as supplementary evidence for evaluating the development in the students’ systems thinking competencies.
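The descriptive statistics described in (2) amount to a few lines of computation. The sketch below uses invented scores (chosen so that the mean reproduces the 72% score rate reported in Section 3.1; they are not the study’s raw data) to show how the mean, score rate, and pass rate against the 9-point threshold are derived:

```python
# Descriptive statistics for the systems-thinking questions (15 points
# total). The scores are invented for illustration, chosen to reproduce
# the reported 72% score rate; they are not the study's raw data.
from statistics import mean

scores = [12, 9, 11, 14, 8, 10, 13, 9, 12, 10]  # hypothetical subset
avg = mean(scores)                               # mean score out of 15
score_rate = avg / 15                            # share of full marks
pass_rate = sum(s >= 9 for s in scores) / len(scores)  # threshold: 9 pts
```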

3. Results

3.1. Multidimensional Demonstration of Students’ Systems Analysis Capabilities

Detailed evaluation and analysis of the 24 system analysis briefs (scoring criteria in the Supplementary Materials) revealed the effectiveness of the module in cultivating students’ systems thinking capabilities. All 24 groups successfully identified five or more key influencing factors, with 18 groups (75%) identifying more than eight factors. Each group covered both natural and human-induced factors and accurately described direct causal or correlational relationships between them. Notably, four groups (16.7%) demonstrated non-linear systemic thinking, successfully identifying chains involving indirect impacts and simple feedback loops (Figure 3).
For example, one group depicted the following dynamic chain in their presentation: regional economic growth drives port expansion and increases international shipping route density, leading to persistent increases in underwater noise pollution in nearshore waters, while land reclamation projects cause permanent loss and fragmentation of intertidal habitats. These pressures compound the stress on the dolphin population: noise disrupts the dolphins’ sonar and reduces hunting efficiency, while habitat loss forces contraction of core ranges, together lowering reproductive success, shortening average lifespan, and driving a declining population trend. The decline of this keystone species may simplify the food web structure of the bay ecosystem and thus impair ecosystem stability and services (such as resource maintenance for the fisheries sector and water purification), ultimately undermining the long-term economic development and ecological security of coastal communities (Supplementary S3). This analysis demonstrates the students’ insight into the complex feedback loops between socioeconomic and natural ecosystems.
Case assessment data provided supplementary support: The two questions related to systems thinking within the case analysis had a total weight of 15 points. The average score among all 94 students was 10.8 points (72% score rate), with a remarkable 85% of students exceeding the passing threshold (9 points). This indicates that through this module, most students established a foundational cognitive framework and the capability to analyze environmental issues using a systems perspective.

3.2. Differentiated Depth and Patterns in AI Application as a Cognitive Tool

The generative AI tools utilized by the student groups varied based on accessibility and personal preference within the campus network environment. The most frequently used platforms included Doubao (by ByteDance), DeepSeek (by High-Flyer), and Tencent Yuanbao. These platforms are domestic Large Language Models (LLMs) optimized for Chinese natural language processing, offering capabilities comparable to mainstream international models in terms of information synthesis and code generation within the context of this study [55]. Students’ AI applications exhibited clear differentiation, reflecting disparities in their AI literacy (Figure 4):
Level 1 (Problem familiarization and scope definition (Information Retrieval and Synthesis), nine groups, 37.5%): Level 1 teams primarily relied on AI for rapid information gathering and text consolidation. AI-generated content was incorporated into briefings after minimal editing and collation. Their reflection reports indicated superficial prompt optimization (e.g., “Please be more detailed”) and a lack of proactive questioning or verification regarding the authenticity, timeliness, and potential biases of AI outputs. One student stated: “At first, I thought AI answers were definitive. Later, I realized that different AIs can provide contradictory responses to the same question.”
Level 2 (Relationship modeling and evidence structuring (Data Interpretation and Relationship Analysis), 11 groups, 45.8%): Level 2 teams demonstrated more proactive and effective AI utilization. For example, a group in Project Track B uploaded a table containing 5-year shipping density and water quality indicator data (e.g., nitrogen and phosphorus levels) to the AI with the following prompt: “Analyze potential relationships between these data points, recommend the most appropriate statistical method to test these relationships, and generate a chart that clearly illustrates them.” The AI not only identified potential positive and negative correlations but also recommended correlation analysis and generated code for plotting a scatter diagram. Building on this, the team preliminarily validated the output using Excel. Their reflection report detailed the iterative process—such as adding time series analysis and controlling for other variables—to gain deeper insights.
Level 3 (Scenario exploration and decision/communication synthesis (Scenario Simulation and Creative Application), four groups, 16.7%): The most outstanding groups demonstrated the ability to use AI as a true “thinking partner.” One group selecting Project Track A employed a structured, progressive series of prompts that instructed the AI to “assume the role of a marine ecologist and simulate, based on population ecology and stress-response models, the population dynamics, age structure changes, and spatial distribution patterns of the local Sousa chinensis population over the next 20 years under two scenarios: ‘Business As Usual’ and ‘Strict Ecological Red Line Protection.’ Generate a narrative script highlighting critical tipping points and uncertainties.” Through multiple rounds of dialog with the AI, they continuously refined the model’s assumptions, ultimately producing a logically rigorous and creative video script. The team’s reflection impressively noted the following: “AI is not a repository of answers, but a mirror. The more precise your questions, the deeper the insights it reflects. We must use our professional expertise to judge whether the scenarios it generates are plausible.”

3.3. Comprehensive Assessment of Learning Outcomes and Reflective Growth

Combining quantitative and qualitative data analysis, this teaching practice for first-year students successfully built a bridge that connected AI technology with professional problem-solving. The final course evaluation revealed an average score of 85.3 among 94 students, with a 100% pass rate (≥60 points) and an outstanding rate (≥90 points) of 25.5%, indicating that students generally mastered the competencies required.
More significantly, student reflection reports revealed growth at the metacognitive level. The vast majority of students noted that the course made them realize for the first time that “formulating a clear, specific, and meaningful question” far outweighs the adoption of ready-made answers. They began actively evaluating the credibility of AI-generated content, attempting cross-validation through consulting authoritative literature and comparing outputs from different AI models. This budding awareness of critically using AI embodies the core of “responsible AI literacy.” An environmental engineering student wrote: “This project made me realize that my professional value lies not in operating AI, but in knowing what specialized problems to assign to AI and how to judge whether its proposed solutions are reliable.”

4. Discussion

4.1. Redefining AI’s Role: Strategic Value as a Cognitive Partner Beyond an Efficiency Tool

Drawing on the theory of Distributed Cognition, we define the role of AI not as a replacement but as a “Cognitive Partner”. In this partnership, the AI handles computationally intensive tasks, such as broad pattern recognition or rapid generation of multiple scenario variables [56,57], thereby offloading lower-level cognitive burdens. This frees up the student’s cognitive resources to focus on higher-order tasks, such as defining problem scope and ethical judgment [58,59]. This study powerfully demonstrates that the effectiveness of AI as a “cognitive partner” is highly dependent on students’ “prompt design capabilities” and “critical evaluation skills”.
The success of the Level 3 groups illustrates that when students guide AI toward iteratively refined outputs through “question chains” and apply subject knowledge to compensate for AI’s limitations, human–machine collaboration achieves a synergistic effect. Conversely, the struggles of the Level 1 groups reveal that without these competencies, AI can become a mere “information delivery tool” that potentially misleads students through outdated data or logical flaws [60,61]. This underscores that AI education must transcend mere tool operation training and prioritize core competencies such as prompt engineering and AI output evaluation [62]. Generative AI excels at unstructured information integration, preliminary relationship mapping, and multi-scenario simulation, providing new students with cognitive scaffolding [63,64,65]. It frees students’ limited cognitive resources from tedious information filtering and basic integration, allowing them to invest more in challenging intellectual activities such as understanding system structures, reasoning about causal mechanisms, and creatively devising solutions. Repositioning AI in education from an efficiency-enhancing “auxiliary tool” to a “cognitive partner” that stimulates deep thinking fosters a human–machine collaboration model of problem-solving [66]. Such a model can overcome the bottlenecks posed by first-year students’ weak foundational knowledge and limited cognitive resources [67].
Moreover, the divergence observed in students’ AI application skills during instruction underscores the urgency of cultivating critical thinking when AI is introduced early in professional education. Instructional design must not stop at teaching students how to “use” AI. It must incorporate embedded, mandatory reflection components [68,69] that guide students to persistently ask: Where does this information or conclusion come from? What might the AI have missed? Are there biases or errors? How can I verify this? This habit of critical engagement is key to resisting information illusions, cultivating rigorous scientific thinking, and forming the foundation of “responsible AI literacy” [70,71]. Future curricula must therefore systematically integrate digital ethics and information verification training.

4.2. Implications for International Engineering Education: A Transferable Interdisciplinary Curriculum Template

The “AI-Environmental Systems Analysis” module designed and validated in this study offers a highly transferable and adaptable curriculum template for the reform of global engineering education. The module achieves this through its clear conceptual foundation (systems thinking), cutting-edge tools (generative AI), and real-world anchors (SDG-related issues). It demonstrates how learning emerging technologies can be deeply integrated with core professional competencies (such as systems analysis and complex problem-solving), akin to a “chemical reaction” rather than a simple “physical superposition” [72]. This model offers universal insights for global engineering education, which manifest in the following two aspects:
(1)
Reshaping the “Technology-Professional” Integration Model in First-year Student Education
Traditional engineering education often separates technical tool learning (e.g., AI, statistical software) from professional problem-solving: students first learn tool operation in foundational courses and only later apply these tools in advanced specialized courses [73]. The current study demonstrates that integrating AI tools with complex professional challenges (e.g., species conservation, ecological restoration) in first-year courses simultaneously enhances technical learning and professional cognition [74]. This integrated approach disrupts the traditional “theory-first, practice-later” path and aligns with the international engineering education trend of “early practice, early integration” [75]. For instance, multiple universities in the United Kingdom launched pilot “AI + Engineering” courses for first-year engineering students in the 2024–2025 academic year. These courses engage students in solving real-world engineering problems with generative AI tools from their initial enrollment, fostering the simultaneous development of technical proficiency and professional understanding through project-based learning [76].
(2)
Defining the Core Position of “Responsible AI Literacy” in Engineering Education
Our current study confirms that “Responsible AI Literacy,” which encompasses critical evaluation of AI outputs, ethical AI reasoning, and human–machine collaborative decision-making, is not an abstract concept but a concrete competency that can be progressively cultivated through “real-world problem-solving.” This finding provides a clear direction for developing AI literacy in international engineering education: AI literacy must be elevated from “technical elective content” to “core professional competency” and embedded into all practical interdisciplinary projects [77]. For instance, the Accreditation Board for Engineering and Technology has incorporated the responsible use of AI and AI ethics into its accreditation standards, offering a reference framework to help universities systematically cultivate students’ AI literacy (including critical evaluation, ethical reflection, and human–machine collaborative decision-making) via their curricula [78]. The practical experiences from this study (such as AI reflection reports and AI limitation analysis tasks) can provide actionable references for universities implementing these standards worldwide.

4.3. Challenges and Optimization Pathways

Despite the positive outcomes of this teaching module, its implementation revealed several challenges, primarily stemming from the limited knowledge base of first-year students, insufficient experience with AI tools, and the complexity of interdisciplinary collaboration.
First, students’ weak foundational knowledge limited the depth of their systems analysis. Most students remained at the surface level when identifying environmental factors, struggling to uncover complex feedback mechanisms or non-linear dynamics. Hmelo-Silver et al. noted that novices often overlook indirect effects and delayed feedback in systems modeling [79], consistent with our observation that most student teams failed to recognize intricate feedback loops. Furthermore, some students demonstrated a poor understanding of fundamental ecological concepts (e.g., “trophic cascades,” “habitat connectivity”), leading to biases in causal reasoning when using AI. Yoon et al. emphasized that systems thinking requires a solid foundation of subject knowledge; otherwise, students tend to get stuck in “superficial connections” without mechanism-based explanations [80].
Second, the use of AI tools carries risks of “technological dependency” and “critical thinking deficits.” Some students relied excessively on AI-generated content, lacked the initiative to verify information, and even equated AI outputs with authoritative conclusions. Rafiq et al. found that students who have not received specialized training in critical AI usage are prone to “automatic trust,” overlooking potential AI hallucinations or biases [81]. Furthermore, variations in prompt design capability significantly affected AI-assisted learning outcomes. Level 1 groups predominantly employed generic prompts, failing to elicit the deep reasoning capabilities of AI. Kasneci et al. [61] and Boussioux et al. [82] emphasized that prompt quality is the critical variable determining whether generative AI can function as a “cognitive partner” in education.
Third, assessing group collaboration and individual contributions posed significant challenges. Within interdisciplinary mixed groups, some students exhibited low engagement due to disciplinary differences, and AI tool usage further blurred the boundaries of individual contributions. Fairly evaluating individual cognitive input in technology-enhanced collaborative learning remains a methodological challenge [83].
Because this is a single-site, short-cycle (4-week) first-iteration DBR study without a non-AI comparison group, our evidence supports design feasibility and credible learning artifacts under authentic constraints. It does not support strong causal claims about AI effects or population-level generalization; these will be addressed through matched-cohort comparisons and multi-cohort replications in future iterations.
Future teaching iterations should thus prioritize the following points (Figure 5). First, more structured analytical scaffolding should be provided by developing a “system analysis template library” containing typical environmental system cases (e.g., watershed pollution, species conservation), a glossary of key concepts, and an index of authoritative data sources. Lin et al. [84] and Nugroho et al. [85] demonstrated that structured scaffolding effectively supports novices in decomposing complex problems and conducting system modeling. Second, formative assessment should be enhanced, for example by introducing individual prompt logs as evaluation criteria [86]. This requires students to document the intent behind each AI interaction, the prompt iteration process, and their reflections on the outputs. Ng et al. [37] identified “prompt design” and “critical evaluation of outputs” as core dimensions of AI literacy, and logging methods can tangibly reflect student growth in these areas. Third, the formation of interdisciplinary teaching teams comprising computer science instructors and subject-specific course teachers should be promoted [19] to ensure that students receive concurrent support in both technological application and professional knowledge.
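One way to structure the individual prompt logs proposed above is a simple record per AI interaction that captures the three documented elements: intent, prompt iterations, and reflection. The schema below is a minimal sketch for illustration; the field names and the aggregate indicators in `summarize` are our own assumptions, not a prescribed assessment instrument.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PromptLogEntry:
    """One AI interaction in a student's prompt log (illustrative schema)."""
    intent: str                                        # what the student wanted from the AI
    prompts: List[str] = field(default_factory=list)   # prompt iterations, in order
    reflection: str = ""                               # critique of the output: gaps, biases, checks

    @property
    def iterations(self) -> int:
        """Number of prompt refinements the student attempted."""
        return len(self.prompts)

def summarize(log: List[PromptLogEntry]) -> Dict[str, int]:
    """Aggregate simple formative-assessment indicators from a prompt log:
    how many interactions, how much iteration, how often reflection occurred."""
    return {
        "entries": len(log),
        "total_iterations": sum(e.iterations for e in log),
        "reflected": sum(1 for e in log if e.reflection.strip()),
    }
```

Indicators like these could feed the process-based assessment described above, making a student’s iteration effort and reflection habits visible to instructors without grading the AI outputs themselves.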

5. Conclusions

Faced with global sustainability challenges and the technological wave of generative AI, higher education urgently needs to explore new AI-enabled interdisciplinary teaching models. This study addresses this gap by designing and implementing an interdisciplinary “AI-Environmental Systems Analysis” module for first-year environmental and ecological engineering students. Built around the real-world “Sousa chinensis Habitat Conservation” case, the module strategically positions generative AI as a “cognitive partner” that supports students’ systems analysis skills. It successfully guided 94 first-year students (24 teams) through an intensive four-week course.
Our results demonstrate that this teaching approach helped students meet the core course objectives. (1) In cultivating systems thinking, the module effectively guided students to construct preliminary frameworks for analyzing complex environmental issues. All 24 teams (100%) identified five or more natural and human-induced key factors influencing dolphin habitats, with 18 teams (75%) identifying over eight factors. Some groups (29.2%) were able to identify indirect impacts and feedback loops. End-of-term assessment data further corroborated the success of the module, with an average score of 72% (10.8/15) on systemic thinking questions; 85% of students exceeded the passing threshold, indicating the development of systemic analytical thinking. (2) Regarding AI literacy development, student applications exhibited significant differentiation across levels, forming a complete competency spectrum. Specifically, 9 groups (37.5%) operated at Level 1 (information retrieval and aggregation), 11 groups (45.8%) reached Level 2 (data interpretation and relationship mapping), and 4 groups (16.7%) demonstrated advanced Level 3 capabilities (scenario simulation and creative application). The course’s final overall assessment scores (average 85.3, with 25.5% of students achieving excellence and 100% passing) confirmed students’ strong overall mastery. More importantly, through mandatory process reflection, students demonstrated clear metacognitive growth in prompt engineering, critical evaluation of AI outputs, and human–machine collaborative decision-making. This signifies the effective cultivation of “responsible AI literacy,” centered on critical use and ethical responsibility.
This study demonstrates that integrating AI into foundational professional education for first-year students not only enhances technical tool application skills (modern tool application) but also amplifies cognitive abilities for tackling complex problems through human–machine collaboration (systems thinking). It also provides a concrete, transferable template for designing and implementing such courses. Future work will focus on optimizing structured analytical scaffolding to reduce the cognitive load on first-year students; introducing process-based assessment tools, such as prompt logs, to precisely measure individual growth; and strengthening collaborative guidance mechanisms within interdisciplinary teaching teams. Ultimately, we will extend this model to broader sustainability topics such as urban water resource management and carbon neutrality pathway simulation, aiming to continuously equip future engineers with the critical ability to harness human–machine intelligence in addressing the complexities of Earth systems.
As this paper reports the first DBR implementation cycle without a non-AI control group, the findings should be interpreted as evidence of design feasibility and learning-product quality under authentic classroom conditions rather than as causal estimates of AI effects. Future research will focus on two key areas: (1) Quasi-experimental comparisons: when parallel sections are available, we will compare an AI-supported section with a non-AI section using the same case, rubric, and assessment items, with propensity/matching controls for baseline differences. (2) Longitudinal tracking: we will follow the cohort into Years 2–3 and re-administer systems-thinking transfer tasks to evaluate retention and cross-course transfer.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/su18041812/s1.

Author Contributions

Conceptualization, Y.L., Y.Z. and S.Z.; methodology, Y.L., J.L. and Y.Z.; software, Y.L. and J.L.; validation, J.L., Y.Z., L.L. and S.Z.; formal analysis, Y.L. and J.L.; investigation, Y.L.; resources, Y.Z. and S.Z.; data curation, Y.L. and J.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., J.L., Y.Z., L.L. and S.Z.; visualization, J.L.; supervision, L.L. and S.Z.; project administration, Y.L. and L.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Xiamen Municipality of Xiamen Municipal Bureau of Science and Technology, People’s Republic of China [grant 3502Z202573067], the High-level Talent Program of Xiamen University of Technology, People’s Republic of China [grant YKJ24008R], and the 2024 Education and Teaching Research Project of Xiamen University of Technology, People’s Republic of China [grant JYCG202425].

Institutional Review Board Statement

The study is classified as low-risk educational research. It was conducted in a standard educational setting involving normal educational practices (curriculum evaluation and student feedback analysis). All data collected were anonymized, and no sensitive personal information or biological samples were involved. Consequently, ethical review and approval were waived for this study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request. Note that the raw qualitative data (e.g., student briefs and reflection reports) are primarily documented in Chinese.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
DBR: Design-Based Research
SDG: Sustainable Development Goal

References

  1. IPCC. Climate Change 2022: Impacts, Adaptation, and Vulnerability; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
  2. Luo, Z.; Abbasi, B.N.; Yang, C.; Li, J.; Sohail, A. A systematic review of evaluation and program planning strategies for technology integration in education: Insights for evidence-based practice. Educ. Inf. Technol. 2024, 29, 21133–21167. [Google Scholar] [CrossRef]
  3. Graham, R. The Global State of the Art in Engineering Education; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  4. International Engineering Alliance. Graduate Attributes and Professional Competencies (Version 4); International Engineering Alliance: Wellington, New Zealand, 2021. [Google Scholar]
  5. Zhang, J.; Sun, D. A systematic review of generative artificial intelligence in education. In Proceedings of the 2025 7th International Conference on Computer Science and Technologies in Education (CSTE), Wuhan, China, 18–20 April 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 552–556. [Google Scholar] [CrossRef]
  6. Teräs, M. Education and technology: Key issues and debates. Int. Rev. Educ. 2022, 68, 635–636. [Google Scholar] [CrossRef]
  7. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education-where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  8. Starling, A.L.P.; Tian, E. GIFTS: Integrating generative AI into first-year engineering education: From knowledge acquisition to defining accessibility problems. In Proceedings of the 2025 ASEE Annual Conference & Exposition, Montreal, QC, Canada, 22–25 June 2025. [Google Scholar] [CrossRef]
  9. Chen, L.; Li, T.; Chen, Y.; Chen, X.; Wozniak, M.; Xiong, N.; Liang, W. Design and analysis of quantum machine learning: A survey. Connect. Sci. 2024, 36, 2312121. [Google Scholar] [CrossRef]
  10. Kong, X.; Yang, Y.; Lv, Z.; Zhao, J.; Fu, R. A dynamic dual-population co-evolution multi-objective evolutionary algorithm for constrained multi-objective optimization problems. Appl. Soft Comput. 2023, 141, 110311. [Google Scholar] [CrossRef]
  11. Chu, H.C.; Hwang, G.H.; Tu, Y.F.; Yang, K.H. Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australas. J. Educ. Technol. 2022, 38, 22–42. [Google Scholar]
  12. Malhotra, R.; Massoudi, M.; Jindal, R. Shifting from traditional engineering education towards competency-based approach: The most recommended approach-review. Educ. Inf. Technol. 2023, 28, 9081–9111. [Google Scholar] [CrossRef]
  13. Lee, D.; Arnold, M.; Srivastava, A.; Plastow, K.; Strelan, P.; Ploeckl, F.; Lekkas, D.; Palmer, E. The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Comput. Educ. Artif. Intell. 2024, 6, 100221. [Google Scholar] [CrossRef]
  14. Ruano-Borbalan, J.C. The transformative impact of artificial intelligence on higher education: A critical reflection on current trends and futures directions. Tsinghua J. Educ. 2024, 5, 13–24. [Google Scholar] [CrossRef]
  15. UNESCO. Reimagining Our Futures Together: A New Social Contract for Education; UNESCO Publishing: Paris, France, 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000379381 (accessed on 3 November 2025).
  16. Wiek, A.; Withycombe, L.; Redman, C.L. Key competencies in sustainability: A reference framework for academic program development. Sustain. Sci. 2011, 6, 203–218. [Google Scholar] [CrossRef]
  17. Klein, J.T. Beyond Interdisciplinarity: Boundary Work, Communication, and Collaboration; Oxford University Press: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  18. Schijf, J.E.; van der Werf, G.P.C.; Jansen, E.P.W.A. Measuring interdisciplinary understanding in higher education. Eur. J. High. Educ. 2022, 13, 429–447. [Google Scholar] [CrossRef]
  19. Borrego, M.; Cutler, S. Constructive alignment of interdisciplinary graduate curriculum in engineering and science: An analysis of successful IGERT proposals. J. Eng. Educ. 2010, 99, 355–369. [Google Scholar] [CrossRef]
  20. Bayuo, B.B.; Chaminade, C.; Göransson, B. Unpacking the role of universities in the emergence, development and impact of social innovations—A systematic review of the literature. Technol. Forecast. Soc. Change 2020, 155, 120030. [Google Scholar] [CrossRef]
  21. UNESCO. Sustainable Development Goals (SDGs). Available online: https://unric.org/en/united-nations-sustainable-development-goals/ (accessed on 25 November 2025).
  22. García-González, E.; Jiménez-Fontana, R.; Goded, P.A. Approaches to teaching and learning for sustainability: Characterizing students’ perceptions. J. Clean. Prod. 2020, 274, 122928. [Google Scholar] [CrossRef]
  23. Sterman, J. System dynamics at sixty: The path forward. Syst. Dyn. Rev. 2018, 34, 5–47. [Google Scholar] [CrossRef]
  24. Hanisch, S.; Eirdosh, D. Behavioral science and education for sustainable development: Towards metacognitive competency. Sustainability 2023, 15, 7413. [Google Scholar] [CrossRef]
  25. Ben-Zvi Assaraf, O.; Orion, N. Development of system thinking skills in the context of earth system education. Sci. Teach. 2005, 42, 518–560. [Google Scholar] [CrossRef]
  26. Liu, J.; Dietz, T.; Carpenter, S.R.; Taylor, W.W.; Alberti, M.; Deadman, P.; Redman, C.; Pell, A.; Folke, C.; Ouyang, Z.; et al. Coupled human and natural systems: The evolution and applications of an integrated framework. Ambio 2021, 50, 1778–1783. [Google Scholar] [CrossRef]
  27. Sezen-Barrie, A.; Stapleton, M.K.; Marbach-Ad, G.; Miller-Rushing, A. Epistemic discourses and conceptual coherence in students’ explanatory models: The case of ocean acidification and its impacts on oysters. Educ. Sci. 2023, 13, 496. [Google Scholar] [CrossRef]
  28. Thompson Rivers University. Thompson Rivers University Calendar 2021–2022 (September Edition). 2021. Available online: https://tru.arcabc.ca/node/4222 (accessed on 3 November 2025).
  29. Al-Ansi, A.M.; Jaboob, M.; Garad, A.; Al-Ansi, A. Analyzing augmented reality (AR) and virtual reality (VR) recent development in education. Soc. Sci. Humanit. Open 2023, 8, 100532. [Google Scholar] [CrossRef]
  30. Deaton, M.; Macdonald, R. System Dynamics Learning Guide. James Madison University Libraries. 2025. Available online: https://pressbooks.lib.jmu.edu/sdlearningguide/open/download?type=print_pdf (accessed on 3 November 2025).
  31. Rodgers, V.L.; Deets, S.; Foster, S.; Schmitz, P.; Way, M. An Interdisciplinary, Co-Teaching Approach to a Course in Urban Social-Ecological Systems. 2025. Available online: https://link.springer.com/content/pdf/10.1007/s11252-025-01694-7.pdf (accessed on 3 November 2025).
  32. Jacobson, M.J.; Wilensky, U. Complex systems and the learning sciences: Educational, theoretical, and methodological implications. In The Cambridge Handbook of the Learning Sciences; Sawyer, R.K., Ed.; Cambridge University Press: New York, NY, USA, 2022; pp. 504–522. [Google Scholar]
  33. Kayyali, M. The evolution of AI in education from concept to classroom. In Navigating Barriers to AI Implementation in the Classroom; IGI Global: Hershey, PA, USA, 2025; pp. 325–368. [Google Scholar] [CrossRef]
  34. Alwakid, W.N.; Dahri, N.A.; Humayun, M.; Alwakid, G.N. Exploring the role of AI and teacher competencies on instructional planning and student performance in an outcome-based education system. Systems 2025, 13, 517. [Google Scholar] [CrossRef]
  35. Zeng, Z.; Wang, X.; Zhang, J.; Wu, Q. Semi-supervised feature selection based on local discriminative information. Neurocomputing 2016, 173, 102–109. [Google Scholar] [CrossRef]
  36. Mollick, E.; Mollick, L. Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
  37. Ng, D.T.K.; Leung, J.K.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  38. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  39. McAlister, M.M.; Zhang, Q.; Annis, J.; Schweitzer, R.W.; Guidotti, S.; Mihelcic, J.R. Systems thinking for effective interventions in global environmental health. Environ. Sci. Technol. 2022, 56, 732–738. [Google Scholar] [CrossRef]
  40. Liu, Y.; Chang, C.-C.; Huang, P.-C.; Hsu, C.-Y. Efficient Information Hiding Based on Theory of Numbers. Symmetry 2018, 10, 19. [Google Scholar] [CrossRef]
  41. Zhang, X.; Huang, B.; Tay, R. Estimating spatial logistic model: A deterministic approach or a heuristic approach? Inf. Sci. 2016, 330, 358–369. [Google Scholar] [CrossRef]
  42. Fan, Y.; Liu, J.; Weng, W.; Chen, B.; Chen, Y.; Wu, S. Multi-label feature selection with constraint regression and adaptive spectral graph. Knowl. Based Syst. 2021, 212, 106621. [Google Scholar] [CrossRef]
  43. Wang, Y.; Li, J.; Zhong, Y.; Zhu, S.; Guo, D.; Shang, S. Discovery of accessible locations using region-based geo-social data. World Wide Web 2019, 22, 929–944. [Google Scholar] [CrossRef]
  44. Wang, Y.-H. Design-based research on integrating learning technology tools into higher education classes to achieve active learning. Comput. Educ. 2020, 156, 103935. [Google Scholar] [CrossRef]
  45. Karsten, I.; van Zyl, A. Design-Based Research (DBR) as an effective tool to create context-sensitive and data-informed student success initiatives. J. Stud. Aff. Afr. 2022, 10, 15–31. [Google Scholar] [CrossRef]
  46. Henriksen, D.T.; Eising-Duun, S. Implementation in design-based research projects: A map of implementation typologies and strategies. Nord. J. Digit. Lit. 2022, 17, 234–247. [Google Scholar] [CrossRef]
  47. Abdusselam, M.S.; Kilis, S. Development and evaluation of an augmented reality microscope for science learning: A design-based research. Int. J. Technol. Educ. 2021, 4, 708–728. [Google Scholar] [CrossRef]
  48. Ahmed, S.K.; Mohammed, R.; Nashwan, A.J.; Ibrahim, R.H.; Abdalla, A.Q.; Ameen, B.M.M.; Khdhir, R.M. Using thematic analysis in qualitative research. J. Med. Surg. Public Health 2025, 6, 100198. [Google Scholar] [CrossRef]
  49. Page, M.J.; Moher, D.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ 2021, 372, n160. [Google Scholar] [CrossRef]
  50. Shapland, C.Y.; Bell, J.A.; Borges, M.C.; Goncalves Soares, A.; Davey Smith, G.; Gaunt, T.R.; Lawlor, D.A.; McGuinness, L.A.; Tilling, K.; Higgins, J.P. A quantitative approach to evidence triangulation: Development of a framework to address rigour and relevance. Int. J. Epidemiol. 2024, 53, 456–470. [Google Scholar] [CrossRef]
  51. Papavasileiou, E.F.; Dimou, I. Evidence of construct validity for work values using triangulation analysis. Eur. Manag. J. Bus. 2024, 20, 98–115. [Google Scholar] [CrossRef]
  52. Bans-Akutey, A.; Tiimub, B.M. Triangulation in Research. Acad. Lett. 2021, 2, 1–6. [Google Scholar] [CrossRef]
  53. Liefooghe, B.; van Maanen, L. Three levels at which the user’s cognition can be represented in artificial intelligence. Front. Artif. Intell. 2022, 5, 1092053. [Google Scholar] [CrossRef] [PubMed]
  54. Cañavate, J.; Martínez-Marroquín, E.; Colom, X. Engineering a sustainable future through the integration of generative AI in engineering education. Sustainability 2025, 17, 3201. [Google Scholar] [CrossRef]
  55. Chen, W.; Hussain, W.; Chen, J. GLMTopic: A Hybrid Chinese Topic Model Leveraging Large Language Models. Comput. Mater. Contin. 2025, 85, 1559. [Google Scholar] [CrossRef]
  56. Ma, Y.; Lei, Y.; Wang, T. A Natural Scene Recognition Learning Based on Label Correlation. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 150–158. [Google Scholar] [CrossRef]
  57. Zeng, Z.; Wang, X.; Yan, F.; Chen, Y. Local adaptive learning for semi-supervised feature selection with group sparsity. Knowl. Based Syst. 2019, 181, 104787. [Google Scholar] [CrossRef]
  58. Kim, J.; Lee, H.; Cho, Y.H. Learning design to support student-AI collaboration: Perspectives of leading teachers for AI in education. Educ. Inf. Technol. 2022, 27, 6069–6104. [Google Scholar] [CrossRef]
  59. Shibani, A.; Knight, S.; Kitto, K.; Karunanayake, A.; Shum, S.B. Untangling critical interaction with AI in students’ written assessment. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), Honolulu, HI, USA, 11–16 May 2024; ACM: New York, NY, USA, 2024; pp. 357–362. [Google Scholar] [CrossRef]
  60. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 2023, 61, 228–239. [Google Scholar] [CrossRef]
  61. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  62. Zhao, X. Leveraging Artificial Intelligence (AI) Technology for English Writing: Introducing Wordtune as a Digital Writing Assistant for EFL Writers. RELC J. 2022, 54, 890–894. [Google Scholar] [CrossRef]
  63. Annapureddy, R.; Fornaroli, A.; Gatica-Perez, D. Generative AI literacy: Twelve defining competencies. ACM Digit. Gov. Res. Pract. 2024, 6, 1–21. [Google Scholar] [CrossRef]
  64. Omar, A.S.; Mgala, M.; Mwakondo, F. AI-driven visual scaffolding in education: A comprehensive literature review. Int. J. Res. Sci. Innov. (IJRSI) 2025, 12, 740–750. [Google Scholar] [CrossRef]
  65. Laflamme, K.A. Scaffolding AI literacy: An instructional model for academic librarianship. J. Acad. Librariansh. 2025, 51, 103041. [Google Scholar] [CrossRef]
  66. Tong, D.; Jin, B.; Tao, Y.; Ren, H.; Islam, A.Y.M.A.; Bao, L. Exploring the role of human-AI collaboration in solving scientific problems. Phys. Rev. Phys. Educ. Res. 2024, 21, 010149. [Google Scholar] [CrossRef]
  67. Hwang, Y.; Lee, J.H. Exploring students’ experiences and perceptions of human-AI collaboration in digital content making. Int. J. Educ. Technol. High. Educ. 2025, 12, 42. [Google Scholar] [CrossRef]
  68. Giannakos, M.; Azevedo, R.; Brusilovsky, P.; Cukurova, M.; Dimitriadis, Y.; Hernandez-Leo, D.; Järvelä, S.; Mavrikis, M.; Rienties, B. The promise and challenges of generative AI in education. Behav. Inf. Technol. 2024, 44, 2518–2544. [Google Scholar] [CrossRef]
  69. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarok or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  70. Chakravartty, A. Reflections on new thinking about scientific realism. Synthese 2017, 194, 3379–3392. [Google Scholar] [CrossRef]
Figure 1. Core Framework of the “AI-Environmental System Analysis” Teaching Module. This framework demonstrates how systems thinking, generative AI (as a cognitive partner), and real-world cases are integrated to collectively support the educational goal of developing the ability of first-year students to solve complex environmental problems.
Figure 2. Teaching Activity Flowchart for the “AI-Environmental System Analysis” Module. This diagram illustrates the two phases of the four-week teaching module. Phase 1 focuses on AI tool exploration and problem framing, while Phase 2 offers students two distinct interdisciplinary project tracks (A: Narrative Scenarios and Science Communication; B: Data-Driven Decision Support) for in-depth practice. The entire process emphasizes integrating generative AI as a cognitive partner into the systematic analysis of complex environmental problems through prompt engineering and continuous reflection.
Figure 3. Assessment Dimensions of Student System Analysis Briefs. The chart visualizes the four key evaluation metrics: Systemic Structural Complexity (SSC), Argumentative Rigor (AR), Depth of AI Integration (DAI), and Expressive Communication and Creative Synthesis (ECCS).
Figure 4. Distribution of Student AI Application Levels and Typical Behavioral Patterns. The lower-left pie chart displays the depth distribution of AI usage among the 94 first-year students during the project. The charts above and to the right illustrate cognitive behavioral differences across application levels through typical workflows, progressing from passive information consumers (Level 1) to active, critical human–machine collaborators (Level 3). AIS, Automatic Identification System; BAU, business as usual.
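The three application levels in Figure 4 were presumably assigned through qualitative coding of students' reflection logs. As a purely hypothetical illustration of how such a coding scheme could be operationalized, the sketch below assigns a level with a keyword heuristic; the keywords, function name, and thresholds are invented for illustration and are not from the study.

```python
# Hypothetical sketch of coding a reflection-log excerpt into one of the three
# AI application levels from Figure 4 (Level 1: passive information consumer,
# Level 3: critical human-machine collaborator). All keywords are invented.

LEVEL_KEYWORDS = {
    3: ["scenario", "simulation", "cross-check", "verified", "challenged"],
    2: ["refined prompt", "follow-up", "compared", "restructured"],
    1: ["searched", "summarized", "copied", "looked up"],
}

def code_usage_level(log: str) -> int:
    """Return the highest level whose keywords appear in the log (default 1)."""
    text = log.lower()
    for level in (3, 2, 1):
        if any(kw in text for kw in LEVEL_KEYWORDS[level]):
            return level
    return 1

logs = [
    "We searched for dolphin habitat facts and summarized them.",
    "We refined prompt wording and compared two answers.",
    "We asked the AI to run a scenario simulation and verified its claims.",
]
levels = [code_usage_level(log) for log in logs]  # [1, 2, 3]
```

In practice, this kind of coding was almost certainly done by human raters rather than keyword matching; the sketch only makes the ordinal three-level scheme concrete.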
Figure 5. Iterative Optimization Pathway for Design-Based Research (DBR) Teaching. This figure illustrates the iterative optimization pathway developed for the “AI-Environmental System Analysis” module in response to challenges identified after implementation. This closed loop embodies the core of the DBR paradigm, driving the spiral refinement of the teaching model through continuous “design–implementation–evaluation–improvement” cycles.
Table 1. Participant Demographics and Research Timeline.
Component | Details
Participants (N = 94) | Class A (Environmental Eng.): 49 students (27 male, 22 female); Class B (Ecological Eng.): 45 students (21 male, 24 female)
Academic Level | First-year undergraduates (freshmen)
Grouping | 24 teams (3–4 students per team); mixed-gender, single-discipline groupings
Timeline | Week 1: AI tool tutorials and prompt-engineering basics. Week 2: problem definition and information gathering (Phase I). Week 3: system analysis and scenario simulation (Phase II). Week 4: final project synthesis and reflection.
Data Collection | Pre-course survey, weekly reflection logs, final group briefs, course grades
Lin, Y.; Liao, J.; Zhong, Y.; Liu, L.; Zhu, S. Bridging AI Education and Sustainable Development: Design-Based Research on First-Year Undergraduates’ Systems Analysis for Habitat Conservation. Sustainability 2026, 18, 1812. https://doi.org/10.3390/su18041812
