Systematic Review

AI Sparring in Conceptual Architectural Design: A Systematic Review of Generative AI as a Pedagogical Partner (2015–2025)

by Mirko Stanimirovic 1,*, Ana Momcilovic Petronijevic 1, Branislava Stoiljkovic 1, Slavisa Kondic 1 and Bojana Nikolic 2

1 Faculty of Civil Engineering and Architecture, University of Nis, 18000 Nis, Serbia
2 Academy of Applied Technical and Preschool Studies, 18000 Nis, Serbia
* Author to whom correspondence should be addressed.
Buildings 2026, 16(3), 488; https://doi.org/10.3390/buildings16030488
Submission received: 22 December 2025 / Revised: 14 January 2026 / Accepted: 20 January 2026 / Published: 24 January 2026
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

Over the past five years, generative AI has carved out a major role in architecture, especially in education and visual idea generation. Most of the time, the literature talks about AI as a tool, an assistant, or sometimes a co-creator, always highlighting efficiency and the end product in architectural design. There is a steady rise in empirical studies, yet the real impact on how young architects learn still lacks a solid theory behind it. In this systematic review, we dig into peer-reviewed work from 2015 to 2025, looking at how generative AI fits into architectural design education. Using PRISMA guidelines, we pull together findings from 40 papers across architecture, design studies, human–computer interaction and educational research. What stands out is a clear tension: on one hand, students crank out more creative work; on the other, their reflective engagement drops, especially when AI steps in as a replacement during early ideation instead of working alongside them. To address this, we introduce the idea of “AI sparring”. Here, generative AI is not just a helper—it becomes a provocateur, pushing students to think critically and develop stronger architectural concepts. Our review offers new ways to interpret AI’s role, moving beyond seeing it just as a productivity booster. Instead, we argue for AI as an active, reflective partner in education, and we lay out practical recommendations for studio-based teaching and future research. This paper is a theoretical review and conceptual proposal, and we urge future studies to test these ideas in practice.

1. Introduction

Computer technology has shaped architecture for decades, starting with early CAD tools and pushing forward to parametric design and BIM. Researchers experimented with artificial intelligence in conceptual design for years, but these tools never really left the lab. Everything changed around 2021. That is when the first major review papers appeared, mapping out the possibilities and problems in this field [1]. Still, the real shift came in 2022, when generative AI models—Stable Diffusion, DALL-E 2 (https://openart.ai), ChatGPT (https://chatgpt.com)—became widely available. Suddenly, anyone could use them in architectural practice. This surge in access during 2023 and 2024 sparked a wave of new studies and papers, all digging into how AI shapes creativity, teaching and design itself. The field is not just growing; it is evolving fast, and it is now one of the most dynamic areas in architectural research [2].
Most research talks about artificial intelligence in architecture as a quick way to spark ideas or create visuals [3]. Some studies go further and see AI as a real co-creator in the process [4]. These perspectives offer valuable evidence, sure, but they mostly look at what comes out at the end—the results, the performance. They do not spend much time on how students actually learn or build their understanding of architecture with AI in the mix. At the same time, data from practice and classroom experiments paint a mixed picture. Generative AI can boost productivity and spark a wider range of ideas [5]. But there is a flip side—researchers warn that easy access to AI tools might mean students reflect less deeply or lose their personal sense of authorship [6,7]. All this points to one thing: the real impact of AI in architectural education comes down to how educators frame and guide its use. The pedagogical context matters.
There is no shortage of literature on artificial intelligence, but most reviews just catalog technologies or outline process phases (see [1,8]). They rarely dig into the deeper, theoretical side of how students and AI actually interact in a teaching context. So, we still do not have a clear sense of how to position AI in the studio as a genuine tool for learning—not just for generating outputs. This paper argues that the real issue is the absence of a solid conceptual model for AI’s pedagogical role. In the literature we have analyzed, authors usually describe AI in broad strokes: as a tool, an assistant, or a co-creator [3,4]. We treat these descriptions as conceptual metaphors—ideas that also show up as paradigms, narratives, or discursive patterns. Each term carries its own set of assumptions about teaching and learning, shaping the way humans and AI work together in education. Strikingly, most papers do not really unpack these underlying frameworks—they just use the language without questioning it. That is where this study comes in. We take an interpretative approach, aiming to do two things: first, to clarify what these frameworks bring into focus and what they leave out; and second, to show how they quietly steer teaching practice by shaping how we understand AI’s role in architectural education.
This systematic review sets out to do two things. First, it brings together and takes a hard look at what researchers have already said about using AI in conceptual architectural design, especially in educational settings. Second, it digs into the idea of AI sparring—a teaching approach that treats AI as an active, challenging partner in dialog, not just a tool. By doing this, the study sharpens our understanding of the issues at play and points the way toward teaching practices where AI actually pushes students to think deeper and develop their ideas.

2. Methods

This review was conducted following the PRISMA 2020 guidelines [9] (also see Supplementary Materials PRISMA 2020 Checklist) to ensure a transparent and systematic process. Its primary aim was to identify patterns, tensions and gaps within the existing body of research. In carrying out the systematic literature review, particular attention was paid to how the relationship between the student and the AI is described, through words such as ‘tool’, ‘partner’ or ‘co-creator’. These terms were treated not merely as descriptors but as active metaphors that shape pedagogical beliefs and frame how student–AI interaction is conceptualized in each study. The review itself was conceptualized and written entirely by its authors; generative AI was used only for translation and editing assistance, specifically ChatGPT (https://chatgpt.com) and DeepSeek (https://chat.deepseek.com).

2.1. Inclusion and Exclusion Criteria

The search was performed on the Science Direct (http://www.sciencedirect.com), Scopus (http://www.scopus.com/) and Web of Science (http://www.webofscience.com/) databases, selected for their comprehensive coverage of high-quality, peer-reviewed journals relevant to the subject. The search covered literature at the intersection of architecture, design, education and human–computer interaction, using terms such as: artificial intelligence, generative AI, architectural design, conceptual design, design studio, architectural education and creativity. The search window of 2015 to 2025 captures the literature on either side of the widespread adoption of generative AI tools (Table 1).
Articles were selected against explicit inclusion and exclusion criteria. Articles were included if they were peer-reviewed scientific articles dealing with the use of AI in conceptual architectural design or related areas, especially those focused on the creative process or educational settings. Articles were excluded if they: (1) dealt with purely technical aspects of AI without pedagogical or creative considerations; (2) addressed later design stages (for example, performance-based design) without incorporating creativity or education; or (3) did not follow scientific research procedures.

2.2. Paper Selection Process (PRISMA)

The selection process was structured in three stages, as shown in the PRISMA 2020 flow diagram (Figure 1). First, ScienceDirect, Scopus and Web of Science were searched, yielding 732 records. After the removal of duplicates, 600 articles remained for screening based on titles and abstracts. This step reduced the sample to 104 papers selected for full-text review. Applying the predefined inclusion and exclusion criteria, 40 papers were ultimately selected for in-depth analysis and qualitative synthesis. No additional exclusions were made at this stage, as all selected studies met the required level of relevance to address the research questions.
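For readers who want to verify the screening arithmetic, the funnel can be sketched as a short script (a minimal illustrative sketch only; the stage labels are shorthand for this review, not official PRISMA terminology):

```python
# PRISMA 2020 screening funnel for this review, expressed as
# (stage, records remaining) pairs taken from the text above.
funnel = [
    ("identified (ScienceDirect + Scopus + Web of Science)", 732),
    ("after duplicate removal", 600),
    ("after title/abstract screening", 104),
    ("included after full-text review", 40),
]

# Report how many records each stage excluded relative to the previous one.
for (stage, count), (_, prev) in zip(funnel[1:], funnel[:-1]):
    print(f"{stage}: {count} (excluded {prev - count})")
```

Running this confirms, for instance, that duplicate removal excluded 132 records (732 minus 600) before title/abstract screening began.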

2.3. Analytical Framework and Synthesis Approach

The approach undertaken was a narrative and thematic synthesis focused on the interpretation of conceptual and pedagogical frameworks; the analysis was non-comparative. It was conducted along four key dimensions:
  • the role of AI in the design process,
  • the definition and display of creativity,
  • the educational and pedagogical context,
  • implications for authorship, agency and reflective practice.
This framework was adopted not only to present a synopsis of the current findings, but also as a critical response to, and explanation of, the underlying conceptual difficulties within this field of research. Detailed characteristics extracted for each included study are provided in Appendix A (Table A1, Table A2, Table A3 and Table A4).

3. Results

An examination of the literature identified a corpus of 40 interdisciplinary papers spanning the fields of architecture, design, psychology of creativity, educational sciences and human–computer interaction. A significant increase in publication volume was observed post-2020, coinciding with the public emergence of generative models such as Stable Diffusion and DALL-E. The majority of these studies were published in high-impact journals, including Frontiers of Architectural Research, Automation in Construction, Design Studies, Computers in Industry and several MDPI titles, indicating strong momentum and a rising scholarly interest in the domain. Thematically, the corpus was categorized into four primary research clusters (see Table 2), each providing a distinct conceptual framework for understanding scholarly perspectives on AI.
The most common view in the literature is AI as an inspirational tool—16 papers fall into this category. Here, researchers treat AI like a spark generator for quick idea exploration. Kwon et al. [5] and Horvath et al. [3] show how these platforms speed up divergent thinking, but they do not really dig into the teaching side of things.
A second perspective, AI as a co-creator, is gaining ground. Nine papers—like Yu [4] and Karadag and Ozar [10]—look at more interactive setups. In these studies, AI is not just a tool but a creative partner that helps shape the concept. Still, authorship and control often get messy. Who is really making the decisions? That is still up in the air.
The most varied group, with ten papers, examines AI in educational contexts. Studies such as Medel-Vera et al. [11] and Abrusci et al. [12] report that students tend to get more done and stay more engaged. But the results on reflection are mixed. Generative AI can push students to move faster, but sometimes at the cost of deeper thinking. Instead of building and examining ideas step by step, students might just pick or tweak whatever the AI spits out.
Critical and theoretical perspectives—five papers—take the deepest dive. Ivcevic and Grandinetti [7] and Mei et al. [6] both raise red flags about losing touch with authentic creative experience and authorship. They argue for a more thoughtful, reflective approach that goes past using AI as just another tool. A full breakdown of all 40 papers, with authors, years and categories, appears in Appendix A.
One thing keeps coming up: there is a real tension between better performance—faster work, sharper images—and a fading sense of personal creativity. The research shows students produce more and try more ideas [12], but they often feel less ownership and spend less time reflecting [6]. Clearly, there is a need for a new pedagogical framework—one that puts more emphasis on the creative process, not just the final product (see Table 3). A complete overview of the included studies, including their key characteristics, is presented in Appendix A (Table A1, Table A2, Table A3 and Table A4).

4. Discussion

The systematic review shows that most current research on generative AI in architectural design leans heavily on instrumental metaphors. Researchers tend to frame AI as a tool—something that sparks inspiration, speeds up the process, or opens up new ways to explore visuals [3,5]. Lately, some go further and call AI a co-creative agent, shifting the narrative a bit [4,10]. These metaphors help integrate AI into the design process, sure, but they also gloss over a bigger issue: what kind of relationship are students actually forming with these AI-driven generative platforms? Are they just users, or are they developing a deeper sense of knowledge, responsibility and reflection? That is the pedagogical question the literature tends to overlook.

4.1. Limitations of Dominant Metaphors

Instrumental approaches treat creativity as something you can boost from the outside—more options, faster cycles, flashier visuals. But research in creativity and learning tells a different story. Progress in creative development does not hinge on how much material you throw at someone. It is about what the person does with it: how they interpret, critique and fold those ideas into their own way of thinking [7]. So, when we call AI a “tool,” we are not actually saying much about what a student gets out of the experience. We are just describing the software, not the learning.
These metaphors really start to fall apart in the architecture studio. Here, education is not just about picking up skills. It is about building practical judgment, thinking critically, making ethical choices and inventing real, meaningful solutions [13]. When we frame AI as a tool, we quietly shift the focus away from learning and toward technical efficiency. And that leads to three big problems. First, if we treat creativity as productivity, we start measuring success by how many images a student generates, not the depth of their ideas. Second, students turn into editors—curating AI output instead of crafting their own concepts. That chips away at authorship. And third, the conversation with AI gets shallow. Students end up just picking their favorites, not questioning their own assumptions or values. The result? Reflection stays on the surface.
The “co-creator” metaphor sounds better, but it is not much of an improvement. It suggests an equal partnership, but let us be honest—AI does not have intent, responsibility, or any sense of ethics. This imbalance creates the illusion of creative teamwork, while the student actually slips into a more passive role.
That is why the authors push back against these metaphors. Not only do they fail to capture what really happens in human–AI learning, they actually reinforce an outcome-obsessed model of education. This paper offers a different idea: “AI sparring”. Instead of fixating on results, this approach puts the back-and-forth between student and AI at the center. It is about fostering dialog, critical thinking and real reflection. And in architecture, where knowledge grows through active engagement with process, material and context, “AI sparring” is not just a new metaphor—it is a necessary shift. It brings genuine reflection back into the heart of learning with technology.

4.2. Theoretical Foundations for AI Sparring

AI sparring draws on the idea of reflective practice, as described by Schön back in the 1980s. He argued that designers build knowledge by reflecting in the moment—working things out as they go, wrestling with uncertainty, even conflict. That messy process drives learning. The authors also lean on Kolb’s work on dialogical and problem-oriented learning. Kolb saw knowledge as something you build by acting, reflecting and then rethinking the problem—always cycling through those stages. In this light, AI turns into a partner for reflective learning. It offers feedback, pushes back and sometimes even surprises you. Instead of just handing out answers, it creates those moments that force you to stop and think—exactly what reflective learning needs [13,14,15,16,17].
AI sparring is not just inspired by theory—it puts two major ideas about learning and professional growth right into practice: Schön’s reflective practice and Kolb’s experiential learning cycle. These theories give us a strong lens for seeing how structured back-and-forth with generative AI can help architecture students learn in a deeper, more reflective way.
Schön’s reflective practice [18,19] breaks down into two types of reflection. First, there is reflection-in-action. That is when someone thinks about what they are doing as they do it, tweaking or changing course on the fly when something surprising or confusing comes up. Then there is reflection-on-action, which happens afterward—looking back, analyzing what happened and pulling out lessons to use next time.
Usually, in an architecture studio, teachers or classmates help spark these reflective moments, often through critique. With AI sparring, the interaction with the AI itself takes over this role. The AI introduces unexpected or even contradictory feedback, forcing students to hit pause and really think about their process.
Kolb’s experiential learning cycle [20] lays out four phases that keep looping: Concrete Experience (actually doing something), Reflective Observation (stepping back and looking at what happened from different angles), Abstract Conceptualization (figuring out the bigger ideas or patterns) and Active Experimentation (trying those new ideas out in practice).
AI sparring puts this cycle into action by setting up a structured dialog between the student and the AI. Each phase shows up clearly in how students and the AI interact. Table 4 lays out exactly how the steps and methods of AI sparring line up with these classic theories. The result is not just compatibility—it is a direct, practical use of established learning frameworks in architectural education.
AI sparring does not just borrow from Schön’s and Kolb’s theories—it brings them right into the open, puts them under the microscope and lets us pick them apart in a digital setting. Schön talked a lot about reflection happening between people—a teacher and student, or a mentor and apprentice. Here, AI sparring takes those same ideas and builds them into how students interact with a tool that is not even sentient. Suddenly, students see their own thinking in a new way. They get a clearer sense of how they learn. Kolb’s model usually sits up in the clouds, pretty abstract. AI sparring drags it down and turns it into a clear, step-by-step routine you can use, monitor and evaluate right there in the studio. So, what is AI sparring really? It is a piece of pedagogical technology, but more than that, it is a system for reflective practice. It creates those much-needed pauses and sparks of friction—the kind that actually drive deep learning. And with this, it moves past old metaphors of tools as just instruments or creative partners. It becomes something more deliberate, something that shapes how we reflect and grow.

4.3. Concept and Operationalization of AI Sparring

To conceptualize the dynamic interplay between student, AI and pedagogical context, Figure 2 illustrates the AI sparring Interaction Model—an iterative cycle of provocation, critical analysis, reflection and conceptual refinement. “AI sparring” is a concept the authors put forward to tackle a central challenge: how do we use generative AI in the architecture studio so it actually helps students learn and think critically, instead of just speeding up the creation of pretty images? Here, AI is not a co-author—it is more like a provocateur, deliberately designed to push back. The idea is to set up a model where students use AI to create counter-proposals, hybrid or antagonistic solutions, conceptual contradictions and out-there alternatives. Then, they have to dig into these outcomes—analyze, challenge and reinterpret them.
Schön’s idea of reflective practice [13] comes into play. It is all about a back-and-forth between action and reflection—experimenting, facing uncertainty and thinking hard about what works and what does not. “AI sparring” turns this into a kind of intellectual wrestling match: students do not just accept what the AI spits out. They use those results as springboards to sharpen their own ideas and question their thinking. It is not just about drawing or generating forms—it is about asking, “Why am I making these choices? What do I believe about space, function, aesthetics?” Uncertainty and mistakes are not problems to avoid; they are essential to the creative process. On the flip side, what we want to avoid is mindless routine—just following steps or caring only about the final product instead of the thinking and learning that happen along the way. That sort of superficial “problem-solving” misses the point. So, when students get an AI-generated solution to an architectural problem, their job is not to accept it at face value. They have to analyze what is wrong with it, using principles from architecture’s history, theory and practice. This keeps them in the driver’s seat—they keep learning and the AI’s proposal becomes just a jumping-off point.
Let us get concrete. Suppose AI spits out a design that completely ignores the local context. The student’s job is to spot that and analyze it through the lens of place theory and historical examples of contextual architecture. They do not just reject the AI’s proposal—they also rethink and redefine the design problem, so the next round is conceptually stronger. That is how they take charge of the process. The authors argue this is the only real way to design. There are practical ways to put this idea into action. Imagine a student asks AI for a “minimalist home,” but the AI returns something ornate and flashy. The student has to stop and ask, “Why does this not fit what I wanted?” Now, they have to explain minimalism—not just to themselves, but to the AI. They record their reasoning. This pushes them to articulate their ideas, not just pick an image that looks good.
Here is another method: building visual literacy. Say AI produces two totally different designs—one clean and simple, one elaborate. The student compares them, sees how the same elements can play out in different ways, and then tries to create their own hybrid. This sharpens their eye and their ability to describe what they see. The deepest version of this is a true dialog. The student poses a tough problem—say, designing an “adaptive public space.” The AI does not just answer; it fires back questions that challenge the basic premise: “Why not make a single space that does everything?” Now, the student has to defend their position, drawing on theory and real-world examples. The conversation goes back and forth over several rounds. The design concept does not pop up instantly—it evolves through questioning and reflection. These examples make something clear: “AI sparring” does not deny that AI speeds up production (as Abrusci et al. [12] have shown). But it builds on that foundation, shifting the focus from “what AI can generate” to “what the student learns through interacting with AI.” In this way, the research lines up with what is already in the literature, but pushes the conversation further—toward deeper learning and critical engagement.
To clearly highlight the distinctions between AI Sparring and other recognizable approaches in the literature, Table 5 compares three pedagogical models: ‘AI as a Tool’, ‘AI as a Co-Creator’, and ‘AI Sparring’, in terms of the role of AI, the role of the student, the pedagogical model, and the main risk.

4.4. Designing an AI Sparring Teaching Session: Operational Protocol

To translate the concept of AI sparring into practical teaching practice, it is essential to develop a clear operational protocol. This section provides a detailed, step-by-step guide for designing and conducting an AI sparring session in the architectural studio, structured in five phases that enable consistent application and evaluation.
In Phase 0 (Preparation/Pre-session), the goal is to define the focus of the session and prepare all necessary materials and frameworks. The teacher’s activities are:
  • Selecting a conceptual problem: Identify a specific architectural or design problem that will be the core of the session (e.g., “the relationship between public and private,” “adaptive reuse of a post-industrial building,” “design for climate resilience”).
  • Preparing the initial ‘trigger’: Prepare starting material for students that defines the problem. This can be a textual brief, a contextual photograph, a basic sketch, or a set of parameters.
  • Defining analysis criteria: Establish the theoretical, historical, or practical references that students will use to critically analyze the AI’s output (e.g., theories of place, principles of sustainable design, case studies).
  • Selecting and configuring the AI tool: Choose an appropriate generative AI tool (e.g., DALL-E, Midjourney, ChatGPT with specific plugins) and develop basic prompt strategies that will encourage provocative, rather than merely confirmatory, responses.
In Phase 1 (Challenge Framing), the student defines the problem in a way that is clear but still leaves room for creative thinking. It should guide the conversation, not box it in. What the student does: They start by shaping the initial prompt. Using the given ‘trigger,’ the student writes a focused task for the AI. Instead of something vague like “draw a modern house,” they go for a more open prompt, such as “generate conceptual visuals for a residential unit that prioritizes flexibility of work and living space for a freelancer in an urban center.”
Next, the student notes down their intent and what they want from the prompt. They spell out what they are aiming for and what kind of results they expect. This step matters because it gives them something to look back on later. This phase takes about 10 to 15 min. At the end, the student has a written prompt and a short description explaining their intent. That is the tangible result (measurable outcome).
In Phase 2 (AI Provocation), the goal is to create material—visuals, text, or parametric data—that students can later pick apart and critique. Here is how it works. The student types their prompt into the AI tool of choice. The AI spits out one or more results. Sometimes, the teacher sets parameters like how random the response should be. Other times, the student might push the AI to go off the rails, maybe by asking for something totally unexpected. Set aside 5 to 10 min for this step. By the end, you should have the AI’s output saved: images, text, diagrams—whatever it generated.
In Phase 3 (Critical Deconstruction), the goal is to dive into the AI’s response using your architectural expertise. Figure out exactly where the output lines up with what you wanted—and where it does not.
The student’s activities begin with lining up the AI’s output against the original intent, pinpointing exactly where things drift off and why. Digging into the causes: using both the set criteria and their own understanding, the student asks: Did the AI miss a key parameter? Did it follow, or ignore, a theoretical principle? Did it actually grasp the core concepts? Structured analysis: the findings are laid out clearly: What did the AI get right? Where did it miss the mark? Which architectural theories, principles, or case studies help explain what happened? Duration is 20–30 min. The measurable outcome is a written analysis that ties the AI’s response back to architectural knowledge and the original intent.
In Phase 4 (Reflective Synthesis and Redefinition), the goal is to take what you have learned from critical analysis and use it to see the original problem with fresh eyes—more depth, more clarity. The student should reformulate the task. After digging into the analysis, the student reshapes the problem or prompt. It gets sharper, sometimes more layered, maybe even shifts focus to the parts that really matter now.
Next, the student should spell out what has been learned. The student puts into words what changed in their thinking. What did they figure out about their own approach? How did the problem itself evolve? Did AI help push creative boundaries, or did it throw up new limits?
In the next move, the student decides: Is it time for another round of AI sparring with the newly defined problem, or do they take these insights and run with them in the traditional design process? Duration is 10–15 min. Measurable outcome is a clearer, redefined problem or prompt, plus a short, honest reflection on what shifted during the process.
In Phase 5 (Moderated Dialog), the goal is to place the one-to-one sparring session inside the wider teaching plan of the studio and let the whole group learn from it. The teacher should lead a short talk with the student (or with a few students) about what happened during sparring. It is important to look at the way the work was done, not only at the final result. The teacher should also help the student connect the new thoughts to theories, historical cases or present-day debates that the class has already studied, and steer the talk toward how the student will use the new insights in the next steps of the project. Duration is 10–20 min, shorter for one student, longer for a small group. The measurable outcome is a brief presentation of the main points of the talk, showing how they connect the sparring session to the rest of the course.
Table 6 summarizes a typical time frame for a 90 min AI sparring session, integrating these phases into a coherent protocol that can serve teachers as a ready-made template for planning.
This protocol is not a rigid template but an adjustable framework. The schedule and emphasis of individual phases may vary depending on the goals, student experience and studio context. Its primary value lies in explicitly structuring the student–AI interaction, imposing an intentional pause for critical reflection and theoretical elaboration, thereby transforming generative AI from a tool for rapid production into a catalyst for deep conceptual learning.
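As a planning aid, the in-session phases and their indicative durations can be encoded as a simple data structure (an illustrative sketch only; the phase names and minute ranges are taken from the protocol above, with Phase 0 omitted because it is preparatory and happens before the session):

```python
# Indicative in-session phase durations (minutes) for one AI sparring
# session, as (minimum, maximum) tuples per the operational protocol.
phases = {
    "Phase 1: Challenge Framing":                    (10, 15),
    "Phase 2: AI Provocation":                       (5, 10),
    "Phase 3: Critical Deconstruction":              (20, 30),
    "Phase 4: Reflective Synthesis and Redefinition": (10, 15),
    "Phase 5: Moderated Dialog":                     (10, 20),
}

# Total session length implied by the per-phase ranges.
lo = sum(t[0] for t in phases.values())
hi = sum(t[1] for t in phases.values())
print(f"Session length: {lo}-{hi} min")
```

Summing the ranges gives a session of 55 to 90 min, consistent with the 90 min slot summarized in Table 6; a teacher adapting the protocol can adjust the tuples and immediately see whether the plan still fits the studio schedule.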

4.5. AI Sparring as a Bridge Between Theory and Measurable Outcomes

When we talk about the instrumental approach, we are basically treating AI like a fancy pencil or a drawing app. It is just a tool—something to help you work faster, come up with more ideas, or whip up visuals in less time. In this view, education starts to look like training students to use tools as efficiently as possible. If someone can pump out a hundred AI-generated images in an hour, that is supposed to count as progress. But what is really happening here? Does cranking out more AI renders actually mean you are thinking more deeply about architecture, or developing sharper critical skills? Not really. Collecting a pile of images is not the same as wrestling with an architectural problem or coming up with a creative solution.
Unlike the instrumental approach summarized in Table 5, which frames AI primarily as a solution generator and students as consumers of output, AI sparring repurposes the technology as a provocateur and positions students as critical, reflective subjects. It does not reject tools, but it slows things down on purpose. Instead of just dropping a tool in your lap and saying “go,” it forces you to stop and think right in the middle of the process. Say the AI spits out something you did not ask for. Now you have to answer: “Why does this not work? What does that say about my original idea?” You cannot just gloss over these moments; you have to confront them.
With instrumental approaches, the link between using the tool and actually learning something usually stays hidden. AI sparring, on the other hand, makes that connection obvious—and intentional. It weaves in these reflective pauses, or “pedagogical intervals,” so students are always questioning and analyzing as they go. The model lays out clear roles (like Provocateur or Critical Subject) and ties specific thinking and reflection tasks to tangible results. Take the “antagonistic pairs” exercise: it trains you to spot differences in visuals and use terminology precisely. Or the “adaptive public space” dialog, which pushes you to argue with real evidence. This clarity is why AI sparring fits so well in settings where you have to prove students are actually learning—think of accreditation reviews.
So, the bottom line: the instrumental approach sees AI as just a tool and a result. AI sparring adds a deliberate challenge that forces students to reflect and record their thought process, leading to a much deeper grasp of architectural ideas.

4.6. Response to Potential Criticisms and Contextual Expansion

Some critics might say the AI sparring concept, even if it is more reflective, still puts people at the center. In this view, AI just serves to reach goals we have already set. But that misses the point. The whole idea of AI sparring depends on real dialog—on being willing to change the task itself when AI throws something unexpected at us. Instead of just pushing toward a particular outcome, AI sparring invites us to rethink what we are even aiming for. It is not only about helping students think better, though that matters. It is also about pushing at the edges of what AI can be in education. The authors argue that sparring is not some fixed routine. It is an evolving process. It shifts and grows with each educational setting, adapting to what teachers and students actually need. In practice, AI sparring stays open—always ready for reinterpretation and improvement.

4.7. Future-Proofing the Framework: Core Principles over Tools

One of the biggest concerns with teaching anything tied to fast-changing technology is that it can quickly feel outdated. The AI sparring framework challenges this head-on by separating its lasting educational core from the ever-shifting tools and techniques used to bring it to life. What gives the framework staying power? It grounds itself in fundamental learning principles drawn from educational theory, not in any particular AI model.
At its heart, the framework adapts across three levels, each with its own degree of stability. Here is how it breaks down:
  • Core Philosophy (Enduring): At the base is a belief—deep learning happens through dialog, pushback and real reflection. This idea is not new. It goes back to traditions of reflective practice and experiential learning, concepts that long predate digital technology.
  • Pedagogical Strategies (Adaptable): Next come the methods for putting that philosophy into action. Think of roles like the “provocateur”, cycles of proposing, analyzing and redefining, and routines for documenting reflection. These strategies are not fixed; you can reshape them for new kinds of interaction, whether you are working with text, images, immersive AI, or whatever comes next.
  • Specific Tools and Techniques (Transient): At the surface are the actual platforms—ChatGPT, DALL-E, Midjourney—prompting methods and digital interfaces. These change all the time and that is to be expected.
This layered approach means the framework does not fall apart just because the tech advances. In fact, each new wave of technology gives it more ways to work. Maybe today, students critique a 2D image made from a text prompt. Tomorrow, they might be debating with an AI that generates real-time, adaptive 3D models or runs simulations of social and environmental performance. The essential task stays the same: students engage critically with an AI-generated proposal, measure it against architectural criteria, reflect on mismatches between intention and outcome and then rethink their own understanding.
The continuity of the framework comes from the big, unchanging questions in education: How do we foster critical thinking? How do we encourage authentic authorship? How do we connect theory to practice? AI sparring is built to hold these questions. It is a structure that adapts to new technologies without losing its purpose. The protocol is not about today’s AI tools—it is about the ongoing challenge of shaping reflective practitioners, no matter what the next technological shift brings.

4.8. Broader Critical Frameworks: From Pedagogy to the Politics and Ethics of Technology

AI sparring started out as a teaching framework, but it does not stop there—it digs into bigger, more complicated questions about how we think about education and technology. At its core, it pushes for reflective practice, which stands out as a direct response to two big trends: the neoliberal push in education and the rise of “technological solutionism” that shapes much of the talk around digital universities [18,19]. People usually pitch AI tools as ways to boost efficiency or get students ready for the job market. That is the sales pitch, at least. But this glosses over something essential: education shapes people who can think for themselves and question the world around them [18].
AI sparring pushes back against that efficiency-first mindset. Instead of chasing speed or automatic answers, it invites a slower, more dialogical way to engage with technology. There is value in “slow knowledge”—in taking time, sitting with uncertainty and letting creativity breathe. That is not inefficiency; that is depth.
But there is another layer here. This approach also asks us to look hard at the political economy behind the AI tools we use. When students turn to commercial generative platforms, they are handing over part of their creative labor to corporations. They end up relying on tech built on black-box systems and business models rooted in surveillance capitalism [20,21]. So, AI sparring is not just a new classroom method—it is an ethical and political stance. It reshapes the student’s role, treating them not as passive consumers, but as active, critical players in the messy web of power that technology weaves.
Of course, AI sparring is not perfect or final. It is a protocol built by people, and it needs ongoing refinement to stay true to its reflective spirit. Its real strength is that it makes our relationship with technology visible—and open to challenge. It does not let that relationship harden into just another technical fix. Instead, it stays alive, up for debate and always ready for change. That is the point: keeping the conversation open rather than settling for a quick solution.

4.9. Specific Requirements for Generative AI Models in Architectural Education and Practice

Beyond these wider critical issues, putting the framework to work also demands that the underlying models meet concrete technical and pedagogical requirements. Given the earlier discussion of AI sparring models, it is time to get specific about what generative AI actually needs to offer to be useful in architecture, whether in the classroom or on the drawing board. General-purpose tools like ChatGPT or DeepSeek are trained on all sorts of public information, which means they are more likely to get things wrong or make things up (so-called “hallucinations”) when pushed into specialized territory like architectural history, building codes, or design theory. To make these tools genuinely reliable and responsible in this field, the following are essential:
  • Domain-Specific Adaptation—The model needs to be trained on actual architectural sources: textbooks, project records, technical standards and materials from trusted archives. This cuts down on off-base or sloppy answers and keeps the output focused and relevant.
  • Validation and Verification Mechanisms—You need real people—teachers, mentors, or experienced architects—checking the AI’s work. That means cross-checking with solid reference sources, critically reviewing its suggestions and setting clear rules for what counts as accurate or useful.
  • Transparency and Explainability—Students and architects deserve to know exactly which ideas came from AI. The model should point to its sources or explain its reasoning whenever possible. This is not just about accountability; it pushes users to think critically and make their own calls.
  • Ethical and Practical Frameworks for Use—Treat AI like a sparring partner, not a stand-in for your own creativity or judgment. It is there to push your thinking, spark new ideas and offer alternatives—not to take over the design process. When everyone understands these boundaries, there is less risk of blindly following whatever the model spits out.
All of these requirements need to be put to the test in real projects, with outside experts weighing in to keep things honest. This is not just about ticking boxes—trial and feedback will help sharpen these guidelines and turn them into practical advice for students and professionals alike. In the end, this approach ties the theory to actual practice and sets the stage for more research and development in AI sparring for architecture.
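The validation and transparency requirements above can be operationalized quite simply: every AI contribution is logged with its provenance and is treated as unverified until a human reviewer with cited sources signs off. The following is a hypothetical Python sketch under those assumptions, not an implementation of any existing tool; the names (`AIProposal`, `needs_review`) and fields are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIProposal:
    """One AI-generated suggestion, kept auditable for human review.

    Hypothetical record: fields chosen to mirror the requirements of
    transparency (model), verification (verified_by) and grounding
    (cited_sources) described in the text.
    """
    prompt: str
    output: str
    model: str                          # which tool produced it
    cited_sources: List[str] = field(default_factory=list)
    verified_by: Optional[str] = None   # teacher/mentor who checked it

    def needs_review(self) -> bool:
        # Unverified or unsourced output must not be treated as authoritative.
        return self.verified_by is None or not self.cited_sources

# Example: a proposal starts out flagged, then passes human review.
proposal = AIProposal(
    prompt="Precedents for courtyard housing in hot-arid climates",
    output="(model answer omitted)",
    model="general-purpose LLM",
)
assert proposal.needs_review()              # raw output is flagged
proposal.cited_sources.append("course reader, ch. 3")
proposal.verified_by = "studio tutor"
assert not proposal.needs_review()          # cleared after review
```

The design choice here is deliberate: verification status is a property of the record, so "blindly following whatever the model spits out" becomes visible as a skipped step rather than an invisible habit.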

5. Conclusions

The review points out a real problem: the way people talk about teaching with AI is not clear enough. When you dig into the research, you see a pattern. Generative AI has the power to boost performance, but at the same time, it can dull the reflective process, especially when teachers use it without a clear structure. The main idea here is the concept of “AI sparring.” Instead of treating AI as just another tool, this approach casts it as a provocateur, shaking up the learning process. It lines up well with theories about reflective practice and problem-based learning, both of which argue that students build knowledge by facing uncertainty head-on.

Bringing AI sparring into the architecture studio therefore requires a reassessment of the role of AI in the educational studio: from a means of accelerating production (the instrumental paradigm of Table 5) to a means for developing conceptual thinking, visual literacy and reflective practice (see Table 7). In this setup, the teacher’s role shifts, too; they step in as moderators of critical dialog, guiding students through discussions rather than just handing out answers.
Indicators from Table 7 can be put into action using both qualitative and quantitative methods. To assess the strength of the arguments and the sharpness of the analysis, established rubrics can be applied, with AI tools used to help identify patterns in the text. To determine how often reflective cycles are undertaken, digital activity can be automatically tracked. To ensure that ideas are clearly credited, process documentation and source-tracking tools can be employed. The quality and credibility of evidence in problem-based tasks can be evaluated through expert judgment, peer review and rubrics that focus on relevance, accuracy and coherence between evidence and claims. In this way, continuous improvement of the process is supported and scalability is enabled as technology continues to advance.
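As a minimal illustration of such automatic tracking, the frequency of reflective cycles could be counted from a logged sequence of student actions. The event names (`propose`, `analyze`, `redefine`) and the flat log format below are assumptions made for this sketch, not a prescribed schema:

```python
# A hypothetical activity log of studio events; a "reflective cycle"
# is one complete propose -> analyze -> redefine sequence.
log = ["propose", "analyze", "redefine",
       "propose",                      # abandoned attempt (no analysis)
       "propose", "analyze", "redefine",
       "propose"]                      # cycle still in progress

def count_cycles(events, pattern=("propose", "analyze", "redefine")):
    """Count completed occurrences of `pattern` in order, allowing restarts."""
    count, i = 0, 0
    for e in events:
        if e == pattern[i]:
            i += 1
            if i == len(pattern):   # full cycle completed
                count += 1
                i = 0
        elif e == pattern[0]:       # a fresh "propose" restarts the cycle
            i = 1
        else:
            i = 0
    return count

print(count_cycles(log))  # 2 completed cycles in this log
```

A counter like this only measures frequency, of course; judging the quality of each cycle would still fall to the rubrics and expert review described above.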
To further validate and refine the proposed AI sparring framework, a student workshop is being planned in which the approach will be implemented and evaluated with the support of external experts. Through this practical application, the real-world usability of AI in educational dialog will be assessed and specific guidelines for its effective integration will be formulated. The outcomes of this implementation will be reported in a subsequent study, offering empirical evidence for the theoretical model developed through the present analysis.
The authors openly point out several limitations that shape what this study can offer—and where future research needs to go. First, the AI sparring idea is not a finished tool. It is a theoretical framework and a teaching proposal, grounded in what we know from the literature and learning theories. But it still needs to be tested with real students, in actual workshops and pilot studies, especially in studio settings. The paper lays out a conceptual map, yet whether this approach actually works in practice is still an open question.
Another limitation comes from the pace of the technology itself. The field is moving fast. Some studies referenced here draw on generative AI models that will soon be outdated. This review captures a snapshot—a look at this particular, shifting moment—rather than offering the last word on the topic.
There is also the matter of educational context. The framework grew out of certain teaching traditions, mostly Western. Bringing AI sparring into an American studio, a European atelier or an Asian classroom is not as simple as copying and pasting. Each context demands its own adjustments, and the literature base has its own gaps. Most studies focus on end results, not the process. Documentation is often thin, so it is tough to directly confirm the benefits that AI sparring claims to offer. More process-oriented research is needed—work that digs into what actually happens during learning, not just what comes out at the end.
Yet, these limitations are not just obstacles; they point the way forward. This review is less a final verdict than a launchpad, and it deliberately stops short of prescribing those context-specific adaptations.
Methodologically, future research should test these ideas in practice, compare them with standard uses of AI and examine changes in student writing through both qualitative and semantic analysis.
Generative AI is not inherently a threat or a fix-all. Its real educational value comes down to the framework we build around it. The central challenge is clear: use AI to encourage critical thinking and genuine authorship. AI sparring is one such framework. It is not a finished model, but a starting point for further research and fresh teaching experiments—shifting the focus from just speeding up output to actually deepening reflection.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/buildings16030488/s1, PRISMA 2020 Checklist.

Author Contributions

Conceptualization, M.S.; methodology, M.S. and A.M.P.; software, M.S.; validation, M.S., A.M.P., B.S., S.K. and B.N.; formal analysis, M.S.; investigation, M.S.; resources, M.S. and B.N.; data curation, A.M.P.; writing—original draft preparation, M.S.; writing—review and editing, A.M.P. and B.N.; visualization, M.S.; supervision, B.S.; project administration, S.K.; funding acquisition, A.M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia, under the Agreement on Financing the Scientific Research Work of Teaching Staff at the Faculty of Civil Engineering and Architecture, University of Niš—Registration number: 451-03-137/2025-03/200095 dated 4 February 2025.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Complete Corpus of Analyzed Papers (n = 40)

This appendix contains the complete list of all 40 scientific papers included in the qualitative synthesis of this systematic review. The papers are organized according to the thematic groups defined in Table 1 of the main text.
Table A1. Thematic Group 1: AI as a tool for inspiration (n = 16).

| No. | Main Author (First Surname) | Year | Journal | Key Focus of the Paper | Citation Count |
|---|---|---|---|---|---|
| 1 | Kwon et al. [5] | 2023 | Design Studies | How designers use AI platforms to find inspiration | 40 |
| 2 | Horvath et al. [3] | 2024 | Frontiers of Architectural Research | Reflections on designing with text-to-image and image-to-image generators | 31 |
| 3 | Alcaide-Marzal and Diego-Mas [22] | 2025 | Computers in Industry | Comparative study of text-to-image models for conceptual design | 14 |
| 4 | Ploennigs and Berger [23] | 2023 | AI in Civil Engineering | Effects of AI text-to-image on creativity in architecture | 88 |
| 5 | Li et al. [8] | 2025 | Frontiers of Architectural Research | Review of generative AI models for different steps in architectural design | 34 |
| 6 | Castro Peña et al. [1] | 2021 | Automation in Construction | Review of AI application in conceptual design of architecture | 173 |
| 7 | Zhang and Zhang [2] | 2025 | Progress in Engineering Science | Review of generative AI in designing the built environment | 2 |
| 8 | Yiannoudes [24] | 2025 | Architecture | Shaping Architecture with Generative Artificial Intelligence: Deep Learning Models in Architectural Design Workflow | 0 |
| 9 | Albukhari [25] | 2025 | Journal of Umm Al-Qura University for Engineering and Architecture | The role of artificial intelligence (AI) in architectural design: a systematic review of emerging technologies and applications | 10 |
| 10 | Lystbæk [26] | 2025 | Automation in Construction | Machine learning-driven processes in architectural building design | 4 |
| 11 | Yao et al. [27] | 2025 | Nexus | Artificial intelligence for sustainable architectural design | 0 |
| 12 | Khan et al. [28] | 2025 | Automation in Construction | Generative AI approaches for architectural design automation | 2 |
| 13 | Law et al. [29] | 2025 | Buildings | Generative AI for Architectural Façade Design: Measuring Perceptual Alignment Across Geographical, Objective, and Affective Descriptors | 1 |
| 14 | Ercsey and Storcz [30] | 2025 | Architecture | Building Geometry Generation Example Applying GPT Models | 0 |
| 15 | Sebestyen et al. [31] | 2025 | International Journal of Architectural Computing | From NURBS to neural networks: Efficient geometry encoding for generative AI in architectural design | 1 |
| 16 | Ghimire et al. [32] | 2024 | Buildings | Opportunities and Challenges of Generative AI in Construction Industry: Focusing on Adoption of Text-Based Models | 91 |
Table A2. Thematic Group 2: AI as a co-creator (n = 9).

| No. | Main Author (First Surname) | Year | Journal | Key Focus of the Paper | Citation Count |
|---|---|---|---|---|---|
| 1 | Yu [4] | 2023 | Design Studies | AI as a co-creator and design material: process transformation | 14 |
| 2 | Karadağ and Ozar [10] | 2025 | Frontiers of Architectural Research | A new frontier in the design studio: AI and human collaboration | 3 |
| 3 | Abrusci et al. [12] | 2025 | Computers and Education: Artificial Intelligence | AI4Design system for enhancing creativity—a field evaluation | 7 |
| 4 | Alamasi et al. [33] | 2025 (in press) | Ain Shams Engineering Journal | Impact of generative AI on architectural education: student experiences | 0 |
| 5 | Solórzano Requejo et al. [16] | 2024 | Cell Reports Physical Science | Fostering creativity through constructive dialogs with generative AI | 4 |
| 6 | Shen et al. [17] | 2025 | Journal of Innovation and Knowledge | Solving the AI-creativity paradox: educational innovations and knowledge | 0 |
| 7 | Belaroussi [34] | 2025 | Big Data and Cognitive Computing | Subjective assessment of the built environment by ChatGPT | 6 |
| 8 | Özeren and Özeren [35] | 2025 | Sage Open | ChatGPT as a Jury? Multi-Modal AI Versus Human Evaluation in an Architectural Design Competition | 0 |
| 9 | Vo [36] | 2025 | Design Science | Who designs better? A competition among human, artificial intelligence and human–AI collaboration | 0 |
Table A3. Thematic Group 3: AI in educational context (n = 10).

| No. | Main Author (First Surname) | Year | Journal | Key Focus of the Paper | Citation Count |
|---|---|---|---|---|---|
| 1 | Medel-Vera et al. [11] | 2025 | Computers and Education: Artificial Intelligence | Exploring the role of generative AI in fostering creativity in architectural learning | 0 |
| 2 | Alsswey [37] | 2025 | Acta Psychologica | Student perspectives on using AI tools in higher education: a case study | 0 |
| 3 | Stefańska and Kurcjusz [38] | 2025 | Sustainability | From Nature to Neutral Networks: AI-Driven Biomimetic Optimization in Architectural Design and Fabrication | 0 |
| 4 | Labib et al. [39] | 2025 | Sustainability | The Role of Generative AI in Architecture Education from Students’ Perspectives—A Cross-Sectional Descriptive and Correlational Study | 0 |
| 5 | Choo et al. [40] | 2025 | Journal of the Architectural Institute of Korea | Consideration on Adopting Generative AI in Architectural Design Education-Focusing on the Analysis of Practical Courses in Architecture Utilizing Stable Diffusion | 0 |
| 6 | Zhang et al. [41] | 2025 | Sustainable Cities and Society | Leveraging LLM-based multi-agent simulations to boost participatory design education: An experimental exploration in residential area design | 0 |
| 7 | Wang et al. [42] | 2025 | Buildings | Teaching with Artificial Intelligence in Architecture: Embedding Technical Skills and Ethical Reflection in a Core Design Studio | 1 |
| 8 | Gül et al. [43] | 2025 | ITU Press of the Istanbul Technical University | Development and applications of Generative AI in architectural design studios | 0 |
| 9 | Hao [44] | 2025 | Intelligent Education and Computer Technology | Research on the Application of Generative Artificial Intelligence in Interior Design Education and Teaching | 0 |
| 10 | Saritepeci and Durak [45] | 2024 | Education and Information Technologies | Effectiveness of artificial intelligence integration in design-based learning on design thinking mindset, creative and reflective thinking skills: An experimental study | 60 |
Table A4. Thematic Group 4: Critical and theoretical perspectives (n = 5).

| No. | Main Author (First Surname) | Year | Journal/Book | Key Focus of the Paper | Citation Count |
|---|---|---|---|---|---|
| 1 | Ivcevic and Grandinetti [7] | 2024 | Journal of Creativity | AI as a tool for creativity: a critical perspective | 77 |
| 2 | Mei et al. [6] | 2025 | Computers in Human Behavior: Artificial Humans | Generative AI improves performance but diminishes the creative experience | 28 |
| 3 | Crawford [21] | 2021 | Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence | Critical examination of AI as a system of power that relies on material extraction, labor exploitation, and political control, rather than a neutral or purely technical technology | 417 |
| 4 | Selwyn [18] | 2016 | Is Technology Good for Education? A critical analysis | Critical examination of the assumption that digital technologies inherently improve education, questioning values, inequalities, and unintended consequences | 300 |
| 5 | Campo-Ruiz [46] | 2025 | AI and Society | Spaces for democracy with generative artificial intelligence: public architecture at stake | 0 |
Usage Note: This table represents one possible way of organizing and categorizing the corpus. Some papers may contain elements of multiple thematic groups; the classification was performed based on the dominant focus of each paper according to the criteria defined in Chapter 2.4 of the main text. Full bibliographic data for all references are available in the Literature section. Some articles included in the synthesis were marked as “in press” during the search period (2025) and carry an official publication year of 2026. For consistency with the stated search timeframe (2015–2025), these articles are listed with the year 2025 and annotated as “(in press)” in the tables and reference list.

References

  1. Castro Pena, M.L.; Carballal, A.; Rodríguez-Fernández, N.; Santos, I.; Romero, J. Artificial Intelligence Applied to Conceptual Design. A Review of Its Use in Architecture. Autom. Constr. 2021, 124, 103550. [Google Scholar] [CrossRef]
  2. Zhang, H.; Zhang, R. Generative Artificial Intelligence (AI) in Built Environment Design and Planning—A State-of-the-Art Review. Progress. Eng. Sci. 2025, 2, 100040. [Google Scholar] [CrossRef]
  3. Horvath, A.-S.; Pouliou, P. AI for Conceptual Architecture: Reflections on Designing with Text-to-Text, Text-to-Image, and Image-to-Image Generators. Front. Archit. Res. 2024, 13, 593–612. [Google Scholar] [CrossRef]
  4. Yu, W.F. AI as a Co-Creator and a Design Material: Transforming the Design Process. Des. Stud. 2025, 97, 101303. [Google Scholar] [CrossRef]
  5. Kwon, E.; Rao, V.; Goucher-Lambert, K. Understanding Inspiration: Insights into How Designers Discover Inspirational Stimuli Using an AI-Enabled Platform. Des. Stud. 2023, 88, 101202. [Google Scholar] [CrossRef]
  6. Mei, P.; Brewis, D.N.; Nwaiwu, F.; Sumanathilaka, D.; Alva-Manchego, F.; Demaree-Cotton, J. If ChatGPT Can Do It, Where Is My Creativity? Generative AI Boosts Performance but Diminishes Experience in Creative Writing. Comput. Hum. Behav. Artif. Hum. 2025, 4, 100140. [Google Scholar] [CrossRef]
  7. Ivcevic, Z.; Grandinetti, M. Artificial Intelligence as a Tool for Creativity. J. Creat. 2024, 34, 100079. [Google Scholar] [CrossRef]
  8. Li, C.; Zhang, T.; Du, X.; Zhang, Y.; Xie, H. Generative AI Models for Different Steps in Architectural Design: A Literature Review. Front. Archit. Res. 2025, 14, 759–783. [Google Scholar] [CrossRef]
  9. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  10. Karadağ, D.; Ozar, B. A New Frontier in Design Studio: AI and Human Collaboration in Conceptual Design. Front. Archit. Res. 2025, 14, 1536–1550. [Google Scholar] [CrossRef]
  11. Medel-Vera, C.; Britton, S.; Gates, W.F. An Exploration of the Role of Generative AI in Fostering Creativity in Architectural Learning Environments. Comput. Educ. Artif. Intell. 2025, 9, 100501. [Google Scholar] [CrossRef]
  12. Abrusci, L.; Dabaghi, K.; D’Urso, S.; Sciarrone, F. AI4Design: A Generative AI-Based System to Improve Creativity in Design—A Field Evaluation. Comput. Educ. Artif. Intell. 2025, 8, 100401. [Google Scholar] [CrossRef]
  13. Schon, D. The Reflective Practitioner: How Professionals Think in Action; Basic Books: New York, NY, USA, 1984. [Google Scholar]
  14. Schon, D. The Design Studio: An Exploration of Its Traditions and Potential; RIBA: London, UK, 1985. [Google Scholar]
  15. Kolb, D. Experiential Learning: Experience as the Source of Learning and Development; Prentice Hall: Hoboken, NJ, USA, 1984. [Google Scholar]
  16. Solórzano Requejo, W.; Franco Martínez, F.; Aguilar Vega, C.; Zapata Martínez, R.; Martínez Cendrero, A.; Díaz Lantada, A. Fostering Creativity in Engineering Design through Constructive Dialogues with Generative Artificial Intelligence. Cell Rep. Phys. Sci. 2024, 5, 102157. [Google Scholar] [CrossRef]
  17. Shen, G.; Wu, W.; Huang, W. Tackling the AI-Creativity Paradox: How Educational Innovation and Knowledge Unlocks Positive-Sum Dynamics? J. Innov. Knowl. 2025, 10, 100816. [Google Scholar] [CrossRef]
  18. Selwyn, N. Is Technology Good for Education? Polity Press: Cambridge, UK, 2016. [Google Scholar]
  19. Morozov, E. To Save Everything, Click Here: The Folly of Technological Solutionism; PublicAffairs: New York, NY, USA, 2013. [Google Scholar]
  20. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY, USA, 2019. [Google Scholar]
  21. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021. [Google Scholar]
  22. Alcaide-Marzal, J.; Diego-Mas, J.A. Computers as Co-Creative Assistants. A Comparative Study on the Use of Text-to-Image AI Models for Computer Aided Conceptual Design. Comput. Ind. 2025, 164, 104168. [Google Scholar] [CrossRef]
  23. Ploennigs, J.; Berger, M. AI art in architecture. AI Civ. Eng. 2023, 2, 8. [Google Scholar] [CrossRef]
  24. Yiannoudes, S. Shaping Architecture with Generative Artificial Intelligence: Deep Learning Models in Architectural Design Workflow. Architecture 2025, 5, 94. [Google Scholar] [CrossRef]
  25. Albukhari, I.N. The Role of Artificial Intelligence (AI) in Architectural Design: A Systematic Review of Emerging Technologies and Applications. J. Umm Al-Qura Univ. Eng. Archit. 2025, 16, 1457–1476. [Google Scholar] [CrossRef]
  26. Lystbæk, M.S. Machine Learning-Driven Processes in Architectural Building Design. Autom. Constr. 2025, 178, 106379. [Google Scholar] [CrossRef]
  27. Yao, J.; Jian, Y.; Huang, C.; Yuan, L.; Ye, J.; Shi, Z.; Calautit, J.K.; Wei, S.; Deng, X.; Broyd, T.; et al. Artificial Intelligence for Sustainable Architectural Design. Nexus 2025, 2, 100100. [Google Scholar] [CrossRef]
  28. Khan, A.; Chang, S.; Chang, H. Generative AI Approaches for Architectural Design Automation. Autom. Constr. 2025, 180, 106506. [Google Scholar] [CrossRef]
  29. Law, S.; Valentine, C.; Kahlon, Y.; Seresinhe, C.I.; Tang, J.; Morad, M.G.; Fujii, H. Generative AI for Architectural Façade Design: Measuring Perceptual Alignment Across Geographical, Objective, and Affective Descriptors. Buildings 2025, 15, 3212. [Google Scholar] [CrossRef]
  30. Ercsey, Z.; Storcz, T. Building Geometry Generation Example Applying GPT Models. Architecture 2025, 5, 79. [Google Scholar] [CrossRef]
  31. Sebestyen, A.; Wiltsche, A.; Stavric, M.; Özdenizci, O. From NURBS to Neural Networks: Efficient Geometry Encoding for Generative AI in Architectural Design. Int. J. Archit. Comput. 2025, 23, 720–741. [Google Scholar] [CrossRef]
  32. Ghimire, P.; Kim, K.; Acharya, M. Opportunities and Challenges of Generative AI in Construction Industry: Focusing on Adoption of Text-Based Models. Buildings 2024, 14, 220. [Google Scholar] [CrossRef]
  33. Alamasi, R.; Asfour, O.S.; Ashmeel, R. The Impact of Generative AI on Architectural Design Education: Insights from Hands-on Experience with Architecture Students. Ain Shams Eng. J. 2025, 17, 103879. [Google Scholar] [CrossRef]
  34. Belaroussi, R. Subjective Assessment of a Built Environment by ChatGPT, Gemini and Grok: Comparison with Architecture, Engineering and Construction Expert Perception. Big Data Cogn. Comput. 2025, 9, 100. [Google Scholar] [CrossRef]
  35. Özeren, E.B.; Özeren, Ö. ChatGPT as a Jury? Multi-Modal AI Versus Human Evaluation in an Architectural Design Competition. Sage Open 2025, 15, 1–13. [Google Scholar] [CrossRef]
  36. Vo, K.H.T. “Who” Designs Better? A Competition among Human, Artificial Intelligence and Human–AI Collaboration. Des. Sci. 2025, 11, e37. [Google Scholar] [CrossRef]
  37. Alsswey, A. Examining Students’ Perspectives on the Use of Artificial Intelligence Tools in Higher Education: A Case Study on AI Tools of Graphic Design. Acta Psychol. 2025, 258, 105190. [Google Scholar] [CrossRef]
  38. Stefańska, A.; Kurcjusz, M. From Nature to Neural Networks: AI-Driven Biomimetic Optimization in Architectural Design and Fabrication. Sustainability 2025, 17, 11333.
  39. Labib, W.; Abdelsattar, A.; Abowardah, E.; Abdelalim, M.; Mahmoud, H. The Role of Generative AI in Architecture Education from Students’ Perspectives—A Cross-Sectional Descriptive and Correlational Study. Sustainability 2025, 17, 10029.
  40. Choo, S.; Park, J.; Hong, S.M. Consideration on Adopting Generative AI in Architectural Design Education—Focusing on the Analysis of Practical Courses in Architecture Utilizing Stable Diffusion. J. Archit. Inst. Korea 2025, 41, 57–68.
  41. Zhang, Y.; Lin, Y.; Tian, L.; Yang, X. Leveraging LLM-Based Multi-Agent Simulations to Boost Participatory Design Education: An Experimental Exploration in Residential Area Design. Sustain. Cities Soc. 2025, 131, 106761.
  42. Wang, J.; Shi, Y.; Chen, X.; Lan, Y.; Liu, S. Teaching with Artificial Intelligence in Architecture: Embedding Technical Skills and Ethical Reflection in a Core Design Studio. Buildings 2025, 15, 3069.
  43. Gül, L.F.; Delikanlı, B.; Üneşi, O.; Gül, E.Ö. Development and Applications of Generative AI in Architectural Design Studios; ITU Press, Press of the Istanbul Technical University: Istanbul, Turkey, 2025.
  44. Hao, L. Research on the Application of Generative Artificial Intelligence in Interior Design Education and Teaching. In Proceedings of the 2nd International Conference on Intelligent Education and Computer Technology, Nantong, China, 27–29 June 2025; ACM: New York, NY, USA, 2025; pp. 748–753.
  45. Saritepeci, M.; Yildiz Durak, H. Effectiveness of Artificial Intelligence Integration in Design-Based Learning on Design Thinking Mindset, Creative and Reflective Thinking Skills: An Experimental Study. Educ. Inf. Technol. 2024, 29, 25175–25209.
  46. Campo-Ruiz, I. Spaces for Democracy with Generative Artificial Intelligence: Public Architecture at Stake. AI Soc. 2025, 40, 5951–5966.
Figure 1. PRISMA 2020 flow diagram of the study identification and selection process.
Figure 2. The AI Sparring Interaction Model: an iterative cycle of dialogical learning between student and AI within the architectural studio context.
Table 1. Search strategy overview.

| Database | Search String Example | Date Range |
|---|---|---|
| ScienceDirect, Scopus and Web of Science | (“generative AI” OR “artificial intelligence”) AND (“architectural design” OR “design studio”) AND (“education” OR “creativity”) | 2015–2025 |
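The search string in Table 1 is a conjunction of three OR-groups (technology, domain, and outcome terms). As a purely illustrative sketch (not part of the review protocol), the string can be assembled programmatically, which helps keep it identical across databases; the exact field codes and quoting rules of each database may differ.

```python
# Illustrative only: assemble the boolean search string of Table 1 from its
# three term groups, so the same query can be reused across databases.
groups = [
    ['"generative AI"', '"artificial intelligence"'],   # technology terms
    ['"architectural design"', '"design studio"'],      # domain terms
    ['"education"', '"creativity"'],                    # outcome terms
]

# Join terms with OR inside each group, then join the groups with AND.
query = " AND ".join("(" + " OR ".join(g) + ")" for g in groups)
print(query)
```

Running this prints the exact string from Table 1; adding a synonym to any group updates the query everywhere it is used.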
Table 2. Thematic groups and pedagogical frameworks of the analyzed papers.

| Thematic Group | Number of Papers (n = 40) | Research Focus | Dominant Pedagogical Model |
|---|---|---|---|
| AI as a Tool for Inspiration | 16 | Idea generation, visual stimuli | Divergent thinking (instrumental) |
| AI as a Co-Creator | 9 | Human–AI collaboration, co-creation | Co-creative model (partially reflective) |
| AI in Educational Contexts | 10 | Empirical effects on student learning | Contextual analysis (ambivalent) |
| Critical Perspectives | 5 | Creativity, authorship, reflection | Reflective practice (recommended) |
Table 3. Analytical matrix: From problem to the AI sparring concept.

| Dimension of Analysis | Key Findings from Literature | Critical Problem | Implication for AI Sparring |
|---|---|---|---|
| Role of AI in the Process | AI as generator/assistant/partner | Lack of a reflective agent | AI as a provocateur (dialog, resistance) |
| Effect on Creativity | Productivity ↑, Reflection ↓ | Performance ≠ experience | Quality over quantity of ideas |
| Authorship | Unclear distribution | Erosion of student responsibility | Clear division of responsibilities |
| Process vs. Result | Focus on visual output | Neglect of the process | Documented iterative dialog |
| Pedagogical Model | Instrumental, inspirational | Superficial learning | Reflective learning through problems |
Table 4. Theoretical mapping: How AI sparring operationalizes Schön’s reflective practice and Kolb’s learning cycle.

| AI Sparring Phase | Description of the Phase in the Context of an Architectural Studio | Corresponding Schön Concept | Corresponding Kolb Cycle Phase |
|---|---|---|---|
| Initial Problem Setting | The student articulates a design task (e.g., “a minimalist house in an urban context”). | Setting the stage for reflection; defining the “known territory”. | Concrete Experience: Encountering a specific, defined design challenge. |
| AI as Provocateur (Generating Counter-Proposals) | AI generates solutions that intentionally deviate from expectations (e.g., an overly ornamental design instead of a minimalist one). | Trigger for reflection-in-action: An unexpected outcome disrupts routine and demands immediate re-examination. | Reflective Observation: The student is forced to observe and compare their own intention with the AI’s (flawed) interpretation. |
| Critical Analysis of AI Output | The student deconstructs the AI’s proposal, identifying neglected contextual, functional, or theoretical aspects (e.g., “This elaboration neglects urban grid constraints.”). | Reflection-in-action in process: The student “converses with the situation,” using the AI’s mistake as a mirror for their own understanding. | Abstract Conceptualization: The student articulates abstract principles and criteria (e.g., principles of minimalism, contextualism) to explain the failure of the AI’s proposal. |
| Reflective Synthesis and Redefinition | The student documents the analysis and reformulates the original problem based on new insights (e.g., “The problem is not just ‘minimalism’, but ‘minimalism’ as a response to density.”). | Reflection-on-action (subsequent): Systematic consideration of the entire interaction and extraction of broader lessons for one’s own design approach. | Active Experimentation: The new, more precise understanding is tested through a new set of instructions for the AI or through one’s own sketching process. |
| Iterative Dialog (New Cycle) | The process repeats with an expanded or revised prompt, deepening the conceptual elaboration. | Continuous reflective practice: Learning becomes cyclical, not linear, which is the essence of professional development. | New Concrete Experience: The cycle begins anew, but at a higher level of understanding and with more complex starting points. |
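The phases in Table 4 form a loop: provocation, critical analysis, and redefinition repeat on an ever-sharper problem statement. The following sketch is purely hypothetical and is not the authors’ implementation; the helper functions are stand-ins for a generative model and for the student’s documented reasoning, shown only to make the cyclical structure explicit.

```python
# Hypothetical sketch of the AI sparring cycle from Table 4. The three helper
# functions are placeholders, not real APIs: in practice the first would call
# a generative model and the other two would capture the student's own work.

def generate_counter_proposal(problem):
    # Stand-in for an AI provocateur that deliberately deviates from the brief.
    return f"counter-proposal deviating from: {problem}"

def critique(proposal, problem):
    # Stand-in for the student's critical deconstruction of the AI output.
    return f"what '{proposal}' neglects about '{problem}'"

def reformulate(problem, analysis):
    # Reflective synthesis: the problem statement is sharpened each round.
    return f"{problem} [refined]"

def sparring_cycle(problem, rounds=3):
    """One Kolb-style loop per round: provocation -> analysis -> redefinition."""
    history = []  # documented iterative dialog (process, not just result)
    for _ in range(rounds):
        proposal = generate_counter_proposal(problem)  # AI as provocateur
        analysis = critique(proposal, problem)         # critical analysis
        problem = reformulate(problem, analysis)       # reflective synthesis
        history.append((proposal, analysis, problem))
    return problem, history

final, history = sparring_cycle("a minimalist house in an urban context", rounds=2)
print(len(history), final.count("[refined]"))
```

The point of the sketch is the shape of the loop, not the stubs: each round feeds a redefined problem back in, so learning is cyclical rather than linear, and the `history` list is the documented dialog that Table 3 calls for.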
Table 5. Comparison of pedagogical approaches: AI as a Tool, AI as a Co-Creator, and AI Sparring.

| Approach | Role of AI | Role of Student | Pedagogical Model | Main Risk |
|---|---|---|---|---|
| AI as a Tool | Solution Generator | Consumer of Output | Instrumental | Superficial Learning |
| AI as a Co-Creator | Collaborator | Partial Author | Co-creative | Loss of Authorship |
| AI Sparring | Provocateur | Critical Subject | Reflective | Requires Mentorship Work |
Table 6. Prototype time frame for a 90-min AI sparring session.

| Phase | Activity | Duration (min) | Key Tasks | Documentation/Outcome |
|---|---|---|---|---|
| Preparation | Problem definition | Pre-session | Concept, material and tool selection | Teacher guidelines |
| 1 | Challenge Framing | 15 | Prompt formulation, intent documentation | Initial prompt + intent description |
| 2 | AI Provocation | 10 | Prompt entry, generation and capture of output | AI-generated output(s) |
| 3 | Critical Deconstruction | 30 | Discrepancy analysis, theoretical reasoning | Structured analysis with references |
| 4 | Reflective Synthesis | 15 | Problem redefinition, reflection on learning | Redefined problem + reflective note |
| 5 | Moderated Dialog | 20 | Discussion, connection to curriculum | Recorded discussion conclusions |
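The five timed phases of Table 6 sum to exactly the 90-minute studio slot. As a small illustrative check (not part of the paper), the plan can be encoded as data and validated, which is convenient when instructors adapt the durations to a shorter or longer session.

```python
# Illustrative only: the timed phases of the Table 6 session plan as data,
# with a consistency check that they fill the 90-minute slot.
SESSION_PLAN = [
    ("Challenge Framing", 15),
    ("AI Provocation", 10),
    ("Critical Deconstruction", 30),
    ("Reflective Synthesis", 15),
    ("Moderated Dialog", 20),
]

total = sum(minutes for _, minutes in SESSION_PLAN)
assert total == 90, f"phases sum to {total} min, expected 90"
print(f"{len(SESSION_PLAN)} timed phases, {total} minutes total")
```

An instructor rescaling the plan (e.g., for a 60-minute slot) only needs to edit the durations; the assertion then flags any plan that no longer fills the slot.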
Table 7. Examples of AI sparring approach implementation.

| Pedagogical Goal | AI’s Role in Sparring | Student Activity | Measurable Indicators | Adaptability Aspect (How It Evolves with Technology) |
|---|---|---|---|---|
| Conceptual Thinking | Provocateur | Reflective notes that challenge the AI | Depth of argumentation | Platform evolution: Paper → digital journal → AI-assisted analysis of reflection patterns |
| Visual Literacy | Antagonist Generator | Analysis and explanation of pairs | Quality of analytical writing | Visual model adaptation: 2D images → 3D models → immersive VR environments → multimodal (visual + tactile) generation |
| Reflective Practice | Dialogic Partner | Documentation of iterations with reasons | Number of reflective cycles | Interface evolution: Text chat → voice interaction → embodied interaction (AR/VR) → neuroadaptive interfaces |
| Preserving Authorship | Limited Generator | Transparent process documentation | Clear attribution of ideas | Authentication methods: Manual logging → blockchain-style provenance tracking → AI-assisted originality detection |
| Problem-Based Learning | Source of New Problems | Construction of counterarguments | Quality and validity of evidence | Problem complexity scaling: Static problems → dynamic, real-time generated problems → collaborative multi-agent problem spaces |
Share and Cite

MDPI and ACS Style

Stanimirovic, M.; Momcilovic Petronijevic, A.; Stoiljkovic, B.; Kondic, S.; Nikolic, B. AI Sparring in Conceptual Architectural Design: A Systematic Review of Generative AI as a Pedagogical Partner (2015–2025). Buildings 2026, 16, 488. https://doi.org/10.3390/buildings16030488