Article

Students’ Perceptions of Generative AI Image Tools in Design Education: Insights from Architectural Education

by Michelle Boyoung Huh 1,*, Marjan Miri 2 and Torrey Tracy 3
1 Interior Design, School of Design, College of Architecture, Arts, and Design, Virginia Tech, Blacksburg, VA 24060, USA
2 Department of Architecture, Design & Urbanism, Antoinette Westphal College of Media Arts and Design, Drexel University, Philadelphia, PA 19104, USA
3 Department of Interior Architecture and Design, Fay Jones School of Architecture and Design, University of Arkansas, Fayetteville, AR 72701, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1160; https://doi.org/10.3390/educsci15091160
Submission received: 11 August 2025 / Revised: 28 August 2025 / Accepted: 1 September 2025 / Published: 5 September 2025
(This article belongs to the Topic AI Trends in Teacher and Student Training)

Abstract

The rapid emergence of generative artificial intelligence (GenAI) has sparked growing interest across educational disciplines, reshaping how knowledge is produced, represented, and assessed. While recent research has increasingly explored the implications of text-based tools such as ChatGPT in education, far less attention has been paid to image-based GenAI tools—despite their particular relevance to fields grounded in visual communication and creative exploration, such as architecture and design. These disciplines raise distinct pedagogical and ethical questions, given their reliance on iteration, authorship, and visual representation as core elements of learning and practice. This exploratory study investigates how architecture and interior architecture students perceive the use of AI-generated images, focusing on ethical responsibility, educational relevance, and career implications. To ensure participants had sufficient exposure to visual GenAI tools, we conducted a series of workshops before surveying 42 students familiar with image generation processes. Findings indicate strong enthusiasm for GenAI image tools, which students viewed as supportive during early-stage design processes and beneficial to their creativity and potential future professional competitiveness. Participants regarded AI use as ethically acceptable when accompanied by transparent acknowledgment. However, acceptance declined in later design stages, where originality and critical judgment were perceived as more central. While limited in scope, this exploratory study foregrounds student voices to offer preliminary insights into evolving conversations about AI in creative education and to inform future reflection on developing ethically and pedagogically responsive curricula across the design disciplines.

1. Introduction

The rapid advancement of artificial intelligence—particularly Generative AI (GenAI)—is transforming how knowledge is produced, communicated, and practiced across disciplines, including design education. GenAI tools such as ChatGPT, Stable Diffusion, DALL·E, and Midjourney can now generate text, images, code, and more in response to complex prompts (Chan & Hu, 2023; Chui et al., 2023; Ogunleye et al., 2024).
While early discussions of AI in education focused largely on algorithm development and system engineering (Chiu, 2023; Zawacki-Richter et al., 2019), the educational discourse has increasingly turned toward pedagogical, ethical, and socio-emotional implications (Bates et al., 2020; Ifenthaler et al., 2024). Scholars have called for deeper investigation into how GenAI intersects with academic integrity, authorship, and the values of creative disciplines (Chan & Hu, 2023; Holmes et al., 2023). Yet, much of the current conversation remains educator- or policy-driven. There is limited understanding of how students—as primary stakeholders and future practitioners—perceive GenAI’s role in their learning and development. Furthermore, more attention is needed in creative disciplines, where iterative creation and authorship of artifacts are central to practice.
This exploratory study is grounded in the theory of constructionism, developed by Papert (1980), which asserts that meaningful learning occurs when learners actively construct personally meaningful artifacts. Unlike constructivism, which emphasizes internal knowledge construction, constructionism highlights the externalization of ideas through the creation of shareable artifacts. Papert (1980) described these artifacts as “objects to think with,” enabling learners to explore, refine, and communicate their ideas.
In architectural design education, where students rarely work at full scale due to the size of the built environment, representational tools are essential for making abstract ideas tangible and communicable (Sopher et al., 2018). Aligned with constructionist thinking, the process of visualizing and externalizing ideas—from rough concepts in the early design stages to refined spatial proposals in later stages—is not only central to student learning but also mirrors the iterative practices of professional design work.
Text-to-image GenAI tools can produce visually striking, unexpected, and intuitive depictions of design concepts (Chandrasekera et al., 2024; Kahraman et al., 2024). Their capacity to rapidly generate high-quality visual alternatives has been seen as both a creative accelerator and a potential disruptor to traditional design processes (Hanafy, 2023; Horvath & Pouliou, 2024). Scholars have noted that these tools can support idea iteration, reduce cognitive load, and enhance visual clarity—particularly for students with less technical proficiency (Almaz et al., 2024; Kee et al., 2024). Within an evolving constructionist framework, these tools may serve as “partners to think with” (Levin et al., 2025), supporting more dynamic and co-creative interactions in the learning process (Hsu, 2025).
Still, it remains essential to foreground the constructionist emphasis on learner autonomy, responsibility, and ownership (Hsu, 2025). Architectural design education involves sequential, stage-based learning—from early ideation to detailed proposals—each stage requiring distinct forms of decision-making and visualization within the context of student-led projects in the design studio (N. L. N. Ibrahim & Utaberta, 2012; Nussbaumer & Guerin, 2000). While GenAI may accelerate or mediate aspects of this process, concerns remain about its limitations in supporting critical thinking, design ownership, and the irreplaceable role of human designers in evaluating feasibility and contextual responsiveness (Jaruga-Rozdolska, 2022).
Moreover, from a constructionist view that values learner subjectivity, how students perceive these tools and their experiences with them significantly shape what and how they learn (Ackermann, 1996). Therefore, student perception is not only useful for user-centered tool adoption but also critical to exploring the potential of the educational validity of GenAI within creative, constructionist learning environments.
Although several large-scale studies have explored student attitudes toward text-based GenAI tools like ChatGPT (Chan & Hu, 2023; H. Ibrahim et al., 2023; Ngo, 2023), these are often conducted across general disciplines, and research on visual GenAI tools within design education remains notably scarce. For example, Chan and Hu (2023) surveyed nearly 400 students across six Hong Kong universities, finding generally positive attitudes toward ChatGPT despite participants’ limited exposure. Similarly, H. Ibrahim et al. (2023) conducted a cross-country study with nearly 150 respondents, highlighting students’ interest in offloading mundane tasks to ChatGPT to focus on creative work, while also noting ethical concerns. Ngo’s (2023) survey of 200 Vietnamese university students showed broad peer acceptance of ChatGPT for assignments and learning support.
While these studies reflect a growing normalization of GenAI in higher education and increased research interest, they often overlook discipline-specific pedagogical demands—particularly the sequential and iterative learning models central to architecture and design education. Unlike general coursework, architectural design learning unfolds through distinct stages—often referred to as concept design, design development, and final documentation—though the terminology may vary across institutions. In the concept stage, students generate and explore initial ideas; in design development, they refine these into applicable solutions; and in the final stage, they prepare technical documentation and presentation materials (Hettithanthri et al., 2023; Soliman, 2017). Each stage requires different forms of creative decision-making and externalization, reflecting constructionism’s emphasis on iterative making and the progressive transformation of ideas into shareable artifacts.
This gap highlights the need for targeted research within design disciplines—specifically focusing on image-generative tools—to explore how students perceive visual GenAI in relation to creative agency, authorship, and learning throughout the full arc of the design process. Grounded in constructionism, this exploratory study uses architectural design education as its frame to investigate how students perceive the integration of AI image generators—focusing on their impact on meaningful making, responsibility, and ownership of ideas across the design process, as well as students’ perspectives on their future careers.
Accordingly, this study asks the following research questions:
RQ1: How do architectural design students perceive the role of generative AI image tools across different stages of the design process?
RQ2: How do architectural design students perceive the ethical dimensions of using AI-generated images, including responsibility for detection and disclosure?
RQ3: How do students view the necessity of learning AI image generators, their contribution to design competitiveness, and the role of institutional support?
RQ4: How do students anticipate the impact of AI image generators on their future careers and design practices?

2. Methods

This research was conducted at a higher education institution in the United States, housing an interior design program accredited by the Council for Interior Design Accreditation (CIDA) and a five-year architecture program accredited by the National Council of Architectural Registration Boards (NCARB).
To ensure that participants had firsthand familiarity with AI image generation, we hosted a series of three workshops introducing Midjourney, chosen for its accessibility and ease of use. The workshops were open to all students on a voluntary basis and structured progressively across three sessions—each lasting 90 to 120 min—covering introductory, intermediate, and advanced applications. Guest speakers with expertise in creative design led each session, encouraging students’ active engagement through project-relevant demonstrations rather than traditional lectures. Students were encouraged to follow along with live examples, experimenting with image creation, prompt refinement, and hybrid workflows combining AI with other visualization tools. For instance, during the workshop, students joined a shared Midjourney account, where they engaged in prompt building, image generation, and iterative refinement, with all participants able to view and respond to real-time updates from the instructor and peers.
Across the three workshop sessions, nearly 130 students participated in total, with more than 50 attending all three sessions. Following the workshop series, students who attended at least two sessions were invited to complete an online survey. This ensured that participants had sufficient exposure to both the capacities and limitations of AI image generators, providing a basis for more informed reflections. However, we acknowledge that this design may reflect more positive perceptions of GenAI than might be observed in a broader, less-primed population. Survey participation was voluntary, and respondents received a $15 gift card as compensation. The study protocol was reviewed and granted exempt approval by the university’s Institutional Review Board (IRB).
The survey instrument was developed based on prior literature and the study’s research questions, with careful attention to the unique features of architectural design education. It comprised 19 items rated on a five-point Likert scale, including both positively and negatively worded statements to encourage attentive responses and improve internal validity. Optional open-ended questions at the end of the survey allowed participants to elaborate on their experiences, concerns, and aspirations regarding AI image generators. The full questionnaire is provided in Appendix A.
In total, 42 students completed the survey. The demographic profile of participants is presented in Table 1. All quantitative data were analyzed using IBM SPSS Statistics 29.0.2.0. Open-ended responses were optional and typically brief—often one to two sentences—directly addressing the related survey items. This brevity, combined with their focused content, enabled straightforward manual review. Two authors independently examined the responses to extract representative quotes. Because the responses were documented verbatim and were already concise and unambiguous, a formal coding process was not undertaken. Any differences in interpretation were discussed and resolved collaboratively.

3. Results

A principal component analysis (PCA) with varimax rotation was conducted to identify patterns among survey items. Prior to PCA, six items designed to measure students’ perceptions of acceptability at each design stage (N_CD, A_CD, N_DD, A_DD, N_FP, and A_FP) were excluded because they were conceptually distinct and intended to be grouped under a single category; these items are reported under Category 4.
Sampling adequacy was marginal but acceptable (Kaiser–Meyer–Olkin [KMO] = 0.53), and Bartlett’s test of sphericity was significant, χ2(78) = 107.82, p = 0.014, indicating that the data were factorable. Initial eigenvalues indicated three components with values greater than 1, accounting for 48.6% of the total variance (Table 2). The unrotated component matrix (Appendix B) showed several cross-loadings, whereas the rotated solution yielded a clearer factor structure. Three items (TT*, SC, and EFU) that showed weak loadings in the rotated solution were manually assigned to Category 3 based on their thematic alignment with that factor, preserving the survey’s conceptual integrity and their relevance to the items already grouped there.
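For readers who wish to reproduce this step outside SPSS (the software used in this study), the following is a minimal sketch in Python using the factor_analyzer package. The item abbreviations follow Table 3; the file name and DataFrame layout are assumptions, and the starred items are reverse-coded as noted in the table footnotes.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical input: one row per respondent, one column per retained item,
# using the abbreviations listed in Table 3 (file name is an assumption).
items = ["IR", "DA", "LN", "EU", "SA", "IS", "ED",
         "TT", "SC", "FI", "NFC", "FJS", "EFU"]
df = pd.read_csv("survey_responses.csv")[items]

# Reverse-code the starred items on the five-point scale (1 <-> 5),
# as noted for Cronbach's alpha and the PCA.
for col in ["IR", "DA", "TT"]:
    df[col] = 6 - df[col]

# Factorability checks reported in the text.
_, kmo_overall = calculate_kmo(df)                        # Kaiser-Meyer-Olkin measure
chi_square, p_value = calculate_bartlett_sphericity(df)   # Bartlett's test of sphericity

# Principal component extraction with varimax rotation, retaining three components.
pca = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
pca.fit(df)
eigenvalues, _ = pca.get_eigenvalues()                    # inspect components with eigenvalue > 1
loadings = pd.DataFrame(pca.loadings_, index=items,
                        columns=["Comp1", "Comp2", "Comp3"])

print(f"KMO = {kmo_overall:.2f}, Bartlett chi2 = {chi_square:.2f}, p = {p_value:.3f}")
print(loadings.round(2))
```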
We then assessed internal consistency using Cronbach’s alpha for the items within each category; all values fell within or above the acceptable range of 0.6 to 0.7 (Taber, 2018). Table 3 reports the Cronbach’s alpha for each category along with PCA loadings and the means and standard deviations (SDs) of each questionnaire item.
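The reliability check can be sketched in the same hypothetical setup, here using pingouin’s cronbach_alpha as one possible implementation (again, not the software used in the study):

```python
import pandas as pd
import pingouin as pg

# Hypothetical data: same reverse-coded item DataFrame as in the previous sketch.
df = pd.read_csv("survey_responses.csv")
for col in ["IR", "DA", "TT"]:
    df[col] = 6 - df[col]          # reverse-code starred items (five-point scale)

# Item groupings follow the categories reported in Table 3; Category 4 items
# would be handled the same way (with their starred items reverse-coded).
categories = {
    "Ethical Responsibility and Personal Attitudes": ["IR", "DA", "LN", "EU"],
    "Acknowledgment and Institutional Support": ["SA", "IS"],
    "Perceived Impact on Career and Future Use": ["ED", "TT", "SC", "FI", "NFC", "FJS", "EFU"],
}

for name, cols in categories.items():
    alpha, ci = pg.cronbach_alpha(data=df[cols])  # returns alpha and its 95% CI
    print(f"{name}: alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```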

3.1. Descriptive Overview of Student Responses

3.1.1. Ethical Responsibility of Using AI-Generated Images

On a five-point Likert scale (1 = strongly disagree, 5 = strongly agree), respondents agreed that students should acknowledge the use of AI-generated images whenever they appear in their work (SA), M = 4.40, SD = 0.94. In contrast, regarding whether it is the responsibility of instructors to detect AI-generated images in students’ work (IR), responses tended toward disagreement, M = 2.88, SD = 0.97. Interestingly, respondents generally agreed that the use of AI-generated images in students’ work could be easily detected (ED), M = 3.74, SD = 1.06. Respondents disagreed with the idea that using AI-generated images is unethical in the context of the authenticity of the design project (DA), M = 2.00, SD = 0.86, suggesting a generally positive perception of using AI-generated images in design.

3.1.2. Integration of AI-Generated Images in Design Education

Respondents expressed a positive belief that their academic institution would support the use of AI-generated images to help students with their design work (IS), M = 3.98, SD = 1.02. They also recognized that AI-generated images could enhance the competitiveness of students lacking confidence in their creativity and design skills (SC), M = 3.64, SD = 1.12, and acknowledged the necessity of learning AI-image generating tools to improve their performance (LN), M = 3.62, SD = 1.15.

3.1.3. Use of AI-Generated Images in the Design Process of Studio Project

Figure 1 shows the box plot of students’ perception of using AI-generated images in each design phase of the design studio project under the condition of students’ acknowledgment. Respondents strongly agreed that it could be acceptable in the concept design (A_CD), M = 4.60, SD = 0.54, and design development (A_DD), M = 4.14, SD = 0.81, if students acknowledge using AI-generated images. However, they were less in agreement with the acceptability of using AI-generated images in the final presentation (A_FP), M = 3.19, SD = 1.37.
The ordering of mean scores for the allowability of AI-generated images across the three design phases—concept design, design development, and final presentation—mirrored the pattern observed for the items on absolute prohibition of AI-generated images, regardless of student acknowledgment, as illustrated by the box plot in Figure 2. Respondents strongly disagreed with prohibiting AI-generated images in concept design (N_CD), M = 1.88, SD = 1.17, and moderately disagreed with prohibiting them in design development (N_DD), M = 2.19, SD = 1.09. They disagreed least with prohibiting AI-generated images in the final presentation (N_FP), M = 2.81, SD = 1.50, echoing the pattern in Figure 1 and indicating lower acceptance of AI-generated images in the final presentation phase than in the two earlier phases.

3.1.4. Impact of AI-Generated Images in Future Career

Respondents generally thought that AI-generated images would critically affect architectural design disciplines in their future career (FI), M = 3.79, SD = 1.00, and disagreed with the idea that the trend of using AI-generated images is hyped and would be over soon (TT), M = 2.45, SD = 0.94. Respondents were strongly excited about using AI-generated images as a future architectural designer (EU), M = 4.12, SD = 0.94, while showing less concern about AI image generators potentially replacing their jobs in the future (FJS), M = 2.43, SD = 1.29. Respondents felt that they could outsource creative visualizing tasks to AI image generators while focusing on critical problem-solving aspects of their future jobs (EFU), M = 3.74, SD = 0.94.

3.2. Correlation Analysis

The Pearson correlation analysis was conducted to understand the linear relationships between items, as shown in Table 4.
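For illustration only (the analysis itself was run in SPSS), a minimal sketch of how such a correlation matrix and a pairwise significance test could be computed, assuming a hypothetical DataFrame holding all 19 Likert items on their original coding with column names matching the abbreviations in Table 4:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical data: all 19 Likert items on their original coding,
# column names matching the abbreviations used in Table 4.
df_all = pd.read_csv("survey_responses.csv")

corr_matrix = df_all.corr(method="pearson")   # full Pearson correlation matrix (cf. Table 4)
r, p = pearsonr(df_all["IR"], df_all["LN"])   # significance test for one pair
print(f"IR-LN: r = {r:.3f}, p = {p:.3f}")
```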
IR showed a moderate negative correlation with LN (r = −0.372, p < 0.05) and a moderate positive correlation with N_DD (r = 0.323, p < 0.05) and N_FP (r = 0.370, p < 0.05), while it showed a strong negative correlation with A_CD (r = −0.511, p < 0.01).
DA had a moderate negative correlation with LN (r = −0.348, p < 0.05) and A_CD (r = −0.315, p < 0.05), while it had a moderate positive correlation with FJS (r = 0.331, p < 0.05). DA and EU were significantly negatively correlated (r = −0.424, p < 0.01). Interestingly, DA strongly negatively correlated with A_DD (r = −0.561, p < 0.01) and A_FP (r = −0.543, p < 0.01), which aligns with its strong positive correlations with N_DD (r = 0.420, p < 0.01) and N_FP (r = 0.475, p < 0.01).
LN exhibited a moderate positive correlation with EU (r = 0.359, p < 0.05), A_CD (r = 0.373, p < 0.05), and A_DD (r = 0.373, p < 0.05), and a moderate negative correlation with N_CD (r = −0.307, p < 0.05) and N_DD (r = −0.312, p < 0.05).
EU, which captures students’ excitement to use these tools, strongly correlated with A_CD (r = 0.430, p < 0.01) and moderately correlated with A_DD (r = 0.391, p < 0.05).
SA and IS showed a significant correlation (r = 0.564, p < 0.01), while SA and N_DD had a moderate correlation (r = 0.312, p < 0.05). ED was strongly correlated with FJS (r = 0.405, p < 0.01), while it was negatively correlated with N_CD (r = −0.359, p < 0.05). N_CD and TT were also strongly correlated (r = 0.403, p < 0.01), and FI and NFC were significantly correlated (r = 0.473, p < 0.01). FJS showed moderately negative correlations with A_DD (r = −0.315, p < 0.05) and A_FP (r = −0.352, p < 0.05).
There were strong correlations between N_CD and N_DD (r = 0.535, p < 0.01) and between N_DD and N_FP (r = 0.426, p < 0.01), while no meaningful correlation was observed between N_CD and N_FP.
A_CD and N_DD were negatively correlated (r = −0.320, p < 0.05), and there were significant negative correlations between N_DD and A_DD (r = −0.693, p < 0.01) and between N_DD and A_FP (r = −0.616, p < 0.01).
A_DD and N_FP showed a negative correlation (r = −0.336, p < 0.05), while there was a significant positive correlation between A_DD and A_FP (r = 0.568, p < 0.01). A strong negative correlation was observed between N_FP and A_FP (r = −0.779, p < 0.01).

3.3. MANOVA Analysis

Correlation analysis indicated that IR, DA, LN, and EU were meaningfully related to A_CD, A_DD, A_FP, N_CD, N_DD, and N_FP. Because students’ perceptions of AI-generated images varied across design stages, a multivariate analysis of variance (MANOVA) was conducted to examine how each independent variable (IR, DA, LN, EU) affected multiple dependent variables. The acceptance-related variables (A_CD, A_DD, A_FP) were combined into one set, and the prohibition-related variables (N_CD, N_DD, N_FP) into another. MANOVA was chosen given the conceptual relatedness and moderate correlations among the dependent variables, allowing control for Type I error and assessment of shared variance. Because the predictors were continuous and the sample size was relatively small, Pillai’s trace was used as the primary multivariate statistic given its robustness to violations of assumptions. Assumptions of MANOVA were examined. Box’s M tests indicated that the covariance matrices of the dependent variables did not significantly differ across groups (all p > 0.05), and Levene’s tests suggested that the error variances were generally equal across groups, with only a few marginal exceptions (all other p > 0.05), supporting the use of MANOVA.
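As an illustration of how one such multivariate test could be run outside SPSS (the software actually used), a minimal statsmodels sketch for the acceptance set with IR as the predictor, under the assumption that the five-point predictor is treated as a categorical factor—consistent with the F(12, 111) degrees of freedom reported below for n = 42:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: survey responses with columns named as in the Abbreviations list.
df_all = pd.read_csv("survey_responses.csv")

# Acceptance set (A_CD, A_DD, A_FP) with IR treated as a categorical predictor.
manova = MANOVA.from_formula("A_CD + A_DD + A_FP ~ C(IR)", data=df_all)
print(manova.mv_test())   # Pillai's trace, Wilks' lambda, Hotelling's trace, Roy's largest root
```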
As the key findings in Table 5 show, IR exhibited a significant multivariate effect on the acceptance set (A_CD, A_DD, A_FP), Pillai’s trace = 0.51, F(12, 111) = 1.91, p = 0.040. Follow-up univariate tests suggested that IR had a notable effect on A_CD, R2 = 0.387, F(4, 37) = 5.85, p < 0.001, whereas its effects on A_DD and A_FP were relatively small.
DA also showed a significant overall multivariate effect, Pillai’s trace = 0.55, F(9, 114) = 2.87, p = 0.004, with follow-up tests indicating significant effects on A_DD, F(3, 38) = 6.01, p = 0.002, and A_FP, F(3, 38) = 6.80, p = 0.001, but not on A_CD. Results were consistent across other multivariate test statistics (Wilks’ lambda, Hotelling’s trace, Roy’s largest root).
LN did not have a statistically significant overall effect on the acceptance set, Pillai’s trace = 0.388, F(12, 111) = 1.37, p = 0.190. Follow-up univariate tests suggested a marginal effect of LN on A_CD, R2 = 0.221, F(4, 37) = 2.62, p = 0.050, whereas its effects on A_DD (R2 = 0.143, F(4, 37) = 1.55, p = 0.208) and A_FP (R2 = 0.083, F(4, 37) = 0.84, p = 0.508) were relatively small.
EU did not have a statistically significant overall effect on the acceptance set, Pillai’s trace = 0.311, F(9, 114) = 1.47, p = 0.168. Follow-up univariate tests suggested that EU had a significant effect on A_CD, R2 = 0.220, F(3, 38) = 3.57, p = 0.023, a marginal effect on A_DD, R2 = 0.163, F(3, 38) = 2.46, p = 0.078, and no notable effect on A_FP, R2 = 0.072, F(3, 38) = 0.99, p = 0.410.
For the prohibition set, the multivariate test indicated that IR did not have a statistically significant overall effect, Pillai’s trace = 0.283, F(12, 111) = 0.96, p = 0.488. Follow-up univariate tests suggested that IR did not have significant effects on N_CD, R2 = 0.090, F(4, 37) = 0.92, p = 0.463, N_DD, R2 = 0.144, F(4, 37) = 1.56, p = 0.205, or N_FP, R2 = 0.159, F(4, 37) = 1.74, p = 0.161.
DA had a statistically significant overall effect on the prohibition set, Pillai’s trace = 0.452, F(9, 114) = 2.24, p = 0.024. Follow-up univariate tests suggested that DA did not significantly affect N_CD, R2 = 0.028, F(3, 38) = 0.37, p = 0.779, but had significant effects on N_DD, R2 = 0.211, F(3, 38) = 3.40, p = 0.028, and N_FP, R2 = 0.371, F(3, 38) = 7.47, p < 0.001.
LN did not have a statistically significant overall effect on the prohibition set, Pillai’s trace = 0.233, F(12, 111) = 0.78, p = 0.672. Follow-up univariate tests suggested that LN did not have significant effects on N_CD, R2 = 0.117, F(4, 37) = 1.22, p = 0.319, N_DD, R2 = 0.118, F(4, 37) = 1.24, p = 0.310, or N_FP, R2 = 0.081, F(4, 37) = 0.82, p = 0.522.
EU did not have a statistically significant overall effect on the prohibition set, Pillai’s trace = 0.225, F(9, 114) = 1.03, p = 0.423. Follow-up univariate tests suggested that EU did not have significant effects on N_CD, R2 = 0.074, F(3, 38) = 1.01, p = 0.401, N_DD, R2 = 0.063, F(3, 38) = 0.85, p = 0.475, or N_FP, R2 = 0.029, F(3, 38) = 0.38, p = 0.771.

4. Discussion

As GenAI becomes more present in creative disciplines, examining what students accept, question, or hesitate to embrace can offer entry points into deeper conversations about the values shaping design learning. While student perceptions offer important insights into the educational integration of GenAI, we acknowledge that these must be understood within broader pedagogical and disciplinary contexts. Furthermore, perception alone cannot determine how GenAI should be integrated into curricula. We interpret the findings both as reflections of current attitudes and as prompts for critical pedagogical reflection.
To explore how students perceive GenAI image tools, the discussion is organized into two thematic levels: (1) foundational issues—including ethics, authorship, and human-centered design—which relate to core values and professional identity; and (2) contextual insights—such as job security, tool usability, and detectability—that reflect pragmatic and situational considerations shaping engagement with these technologies.

4.1. Foundational Issues

4.1.1. Ethics and Disclosure

Students generally did not perceive using GenAI image tools as unethical (DA: M = 2.00, SD = 0.86). However, respondents highlighted the importance of acknowledgement when AI-generated images are used (SA: M = 4.40, SD = 0.94). Quantitative results revealed a strong negative correlation between perceiving AI as unethical and excitement to use it, as well as between ethical concern and the perceived necessity of learning these tools. These patterns suggest that ethical discomfort may be associated with lower enthusiasm or perceived relevance, while students who view GenAI as ethically acceptable tend to express more openness toward its use in creative contexts.
Open-ended responses echoed this sentiment. Many students compared GenAI to traditional design practices of inspiration, borrowing, and remixing, emphasizing a pragmatic view of creativity. For instance, students noted that “most things we design are drawn from other things we have seen, which is the same approach AI takes,” and that “ideas are often shared or borrowed from others; it is just how the design development goes.” Others highlighted that GenAI tools depend on human-generated prompts and are “open source, available to everyone.” However, several students expressed uncertainty about issues of plagiarism, originality, and intellectual property, particularly as GenAI outputs become more sophisticated.

4.1.2. Authorship and Human-Centered Design Decision

In the optional open-ended responses (Table 6), the majority of respondents (n = 31) expressed interest in using GenAI during the concept design phase, highlighting its value for rapid experimentation and idea development. One student noted, “It allows for a lot more experimentation in such a short time,” while others described it as helpful for “kick-starting a project” and “seeing an idea highly rendered.” Additionally, some respondents (n = 15) recognized its potential to support inspiration and creative confidence, describing it as useful for “getting creativity going” and “helping newer designers feel more confident about coming up with ideas.”
This preference was reflected in the mean acceptance ratings, which were highest during the concept design phase (A_CD: M = 4.60, SD = 0.54) and declined through the design development (A_DD: M = 4.14, SD = 0.81) and final presentation phases (A_FP: M = 3.19, SD = 1.37). A similar pattern appeared in responses to the prohibition items: students strongly disagreed with prohibiting visual GenAI use at the concept stage (N_CD: M = 1.88, SD = 1.17), showed greater agreement with prohibition during the design development stage (N_DD: M = 2.19, SD = 1.09), and expressed the strongest agreement at the final presentation stage (N_FP: M = 2.81, SD = 1.50). Figure 3 displays the overlay of the mean values of students’ perceived acceptance and prohibition at each design stage.
This trend aligns with several students (n = 7) who indicated they would avoid using AI-generated images in the final stage. As one student noted, AI tools do not yet “understand many design rules and the human condition in space.”
Students’ varying sentiments toward GenAI across design stages were also reflected in the MANOVA results. Acceptance during the concept design phase (A_CD) was influenced by Instructor Responsibility to detect (IR)—the belief that teachers should detect AI-generated images in students’ work—and Learning Necessity (LN), relating to students’ perceived need to learn GenAI tools. Follow-up tests showed a strong effect of IR and a marginal effect of LN on A_CD.
In contrast, acceptance in the later design development (A_DD) and final presentation phases (A_FP) was more strongly influenced by Design Authenticity (DA), reflecting ethical concerns about the use of AI-generated images and project authenticity. DA showed significant effects on both A_DD and A_FP, aligning with students’ caution regarding GenAI use in later stages where human judgment and contextual specificity are critical.
These findings suggest that the use of visual GenAI could be examined with attention to specific design stages, considering their unique demands for externalization and the skill sets of students.

4.2. Contextual Insights

4.2.1. Job Security and Future Professional Roles

Students’ perceptions of GenAI’s professional impact reflected a mix of optimism and caution. The survey results presented excitement about using GenAI in future practice (EU: M = 4.12, SD = 0.94) and low concern over job replacement (FJS: M = 2.43, SD = 1.29).
Correlational patterns suggest an association between ethical concerns and students’ professional anxieties. Concern about future job security (FJS) was moderately correlated with ethical concerns related to design authenticity (DA). FJS also showed moderate negative correlations with acceptance of GenAI in the later phases of design (A_DD and A_FP), suggesting that students who reported greater concern about job security were less likely to view GenAI as appropriate for use in advanced stages of the design process. While these relationships do not imply causality, they point to a pattern in which ethical and employment-related concerns may coincide with students’ hesitation to adopt GenAI in later design stages.

4.2.2. Tool Usability and Prompt Literacy

Students increasingly recognized that the quality of GenAI outputs depends heavily on the strategies used to craft input prompts, though limitations remain: it can still be difficult to anticipate the exact outcome. This reflects the emergence of a new form of design literacy: prompt design. In architectural education, where technical, functional, and esthetic considerations must be integrated within real-world contexts, careful prompt design is particularly important to ensure that generated outputs align with the human designer’s intent. For example, during the workshop, we discussed strategies such as adding environmental descriptions with adjectives and contextual terms; specifying the viewpoint (e.g., top view, elevation, or user perspective); referencing stylistic traditions; articulating spatial and formal qualities; and determining rendering styles.
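As a purely hypothetical illustration of how these strategy categories can be combined into a single prompt (none of the specific wording below comes from the workshops):

```python
# Hypothetical example of assembling a Midjourney-style prompt from the strategy
# categories discussed above; every specific phrase is illustrative only.
prompt_parts = {
    "environment": "sunlit double-height reading room in a converted brick warehouse",
    "viewpoint":   "user perspective at eye level",
    "style":       "Scandinavian minimalist tradition",
    "form":        "emphasis on vertical rhythm and layered spatial thresholds",
    "rendering":   "photorealistic architectural rendering, soft diffuse daylight",
}
prompt = ", ".join(prompt_parts.values())
print(prompt)
```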
This emphasis on prompting also resonates with an evolving constructionist perspective, where prompts can be understood as cognitive tools for exploration and iteration that facilitate learning through making (Hsu, 2025). Beyond engaging with AI-generated images as potential objects to think with or even partners to think with (Levin et al., 2025), prompting text itself can also be regarded as an artifact (Hsu, 2025)—an externalized expression of the learner’s ideas that supports reflection, refinement, and communication.

4.2.3. Detectability and Perceived Technological Trajectory

Most participants believed that AI-generated images are currently easy to detect, possibly because they do not yet reflect the nuanced decision-making of human designers (ED: M = 3.74, SD = 1.06). Open-ended responses supported this view, with students noting that GenAI outputs often “require a dedication to filtering out poor designs and many stages of refinement,” and that the results often remain “very limiting even with the high graphic output,” showing “questionable design choices.” However, some students expressed concern that this distinction may fade as technology advances. Notably, a stronger belief in the detectability of AI-generated images was correlated with greater concern about job security—perhaps reflecting unease about rapid technological change and its unpredictable implications. These complex dynamics echo the mix of optimism and caution evident across the results and merit further investigation.

4.2.4. Reflection on GenAI Workshop

Open-ended reflections suggest that initial apprehensions about GenAI often shifted following exposure during the workshops. Several students noted that understanding the tool’s limitations helped reduce their anxiety, though some continued to express concern that over-reliance could weaken foundational design skills. While some participants were previously unaware of GenAI’s capabilities, others had underestimated its potential. After engaging with the tools, many came to see GenAI as a helpful aid for early conceptual development—while still recognizing limitations such as inconsistent output quality and a dependence on carefully formulated prompts. As these reflections were drawn from optional questions rather than a structured pre- and post-design, they should be interpreted cautiously. Nonetheless, they suggest that longitudinal investigation may offer valuable insight into how GenAI exposure shapes students’ attitudes over time.

5. Limitations and Future Study

This study has several important limitations. It was conducted at a single academic institution with a small, homogeneous, and self-selected sample (n = 42), skewed toward a particular program and demographic profile, which constrains the generalizability of findings. The voluntary nature of participation—including both the workshops and survey—introduces self-selection bias that may have skewed results toward more positive perceptions of GenAI. Although participants received a small gift card as compensation, this modest incentive may still have influenced participation decisions.
The brief exposure period, 90–120 min across three sessions, further limits the extent to which participants could develop fully informed views about complex pedagogical implications. In addition, the exclusive focus on a single platform, Midjourney, narrows applicability to other GenAI tools, and the absence of a control group or pre/post design precludes assessment of long-term impacts or causal relationships. Taken together, these limitations underscore the exploratory nature of the study, which is intended to surface preliminary insights rather than draw definitive conclusions.
Future research could further explore the perspectives of instructors and design professionals, incorporating qualitative methods such as interviews or longitudinal studies. This would help develop actionable guidelines for integrating visual GenAI into creative education. Expanding the scope to include other types of visual GenAI tools could also deepen our understanding of their pedagogical potential. As GenAI technologies continue to evolve, ongoing research should remain current to inform curriculum development, ethical practices, and skill-building strategies that equip students for the changing landscape of design practice.

6. Conclusions

This exploratory study contributes insight into the evolving discourse on visual GenAI in design education by centering the perspectives of architectural design students. Given the exploratory nature of this study and its methodological limitations, the findings should be interpreted with caution. We position this work as groundwork for future research that engages larger samples and incorporates diverse perspectives from educational stakeholders through more rigorous investigations into the role of generative AI in design education.
The preliminary patterns observed show that students generally consider using GenAI tools ethically acceptable, especially when they are transparent about their use, emphasizing the importance of openly acknowledging it. However, their acceptance of visual GenAI tools varies across different design stages, showing greater enthusiasm during the early ideation phase and more cautious use in later stages.
Based on the patterns observed in this study, we suggest several starting points that could be considered in developing guidance for responding to the use of visual GenAI in architectural design curricula.
(1) Stage-specific guidelines for the use of AI-generated images could be developed in alignment with the learning objectives of each design stage, recognizing that different phases require distinct skill sets. Acknowledgment of AI use should be expected at all stages to ensure transparency. For instance, during the concept design stage, where initial brainstorming and iterative idea generation primarily occur, AI-generated images may be used to support visual concept exploration for inspiration and communication, provided that students acknowledge their use. At later design stages, which involve critical decision-making—such as space layout, material selection, and code compliance—that require technical and practical design knowledge, the use of visual GenAI should be carefully considered due to its limitations.
(2) Prompt literacy education could help students engage with GenAI tools more effectively. As part of this, incorporating requirements to submit final prompts—or prompt histories—alongside AI-generated outputs can support verification, provide traceability, and foster critical reflection as part of the design process.
(3) Authorship safeguards, particularly during final design stages where human-centered decisions are essential, could be integrated with stage-specific assessment criteria to emphasize originality, contextual responsiveness, and the responsible interpretive role of the human designer—both in education and in future professional practice.
While this study is limited to architectural and interior architecture students, the tensions surfaced here are likely to resonate across adjacent design disciplines. Further research is needed to refine cross-disciplinary guidelines and institutional frameworks that support the responsible and informed integration of visual GenAI into creative education.

Author Contributions

Conceptualization, M.B.H., M.M. and T.T.; methodology, M.B.H.; validation, M.B.H., M.M. and T.T.; formal analysis, M.B.H.; investigation and resources, M.B.H., M.M. and T.T.; writing—original draft preparation, review and editing, M.B.H.; visualization, M.B.H.; project administration, T.T., M.B.H. and M.M.; funding acquisition, T.T., M.B.H. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Angelo Donghia Foundation, Inc. under the Angelo Donghia Foundation Grant 2023.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of Arkansas (protocol #2402522755/approval date 16 April 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed at the corresponding author.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication. They express their gratitude to the Department of Interior Architecture and Design, as well as to all student research participants, for their support in this study. Generative artificial intelligence (GenAI) tools (specifically ChatGPT and Grammarly) were used during the preparation of this manuscript exclusively for language-related assistance, including grammar correction, sentence rephrasing, and improving clarity of expression. Zotero reference management software was used to organize, format, and insert citations and references according to the APA style. All research design, data collection, analysis, interpretation, and conclusions are the sole work of the authors, and all AI-assisted text was reviewed and verified for accuracy and intended meaning.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GenAI: Generative Artificial Intelligence
IR: Instructor Responsibility to detect
DA: Design Authenticity
LN: Learning Necessity
EU: Excitement to Use
SA: Student Acknowledgment
IS: Institutional Support
ED: Ease to Detect
TT: Temporal Trend
SC: Student Competitiveness
FI: Future Impact
NFC: Necessity for Future Career
FJS: Future Job Security
EFU: Expectation for Future Use
N_CD: Never allowed Concept Design
A_CD: Acceptable with acknowledgment Concept Design
N_DD: Never allowed Design Development
A_DD: Acceptable with acknowledgment Design Development
N_FP: Never allowed Final Presentation
A_FP: Acceptable with acknowledgment Final Presentation

Appendix A

Survey Questionnaire
Q1.1 What is your age?
Q1.2 How do you describe yourself?
  • Male
  • Female
  • Non-binary/third gender
  • Prefer to self-describe
  • Prefer not to say
Q1.3 Which of the following best describes you?
  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Pacific Islander
  • Other (specify)
Q1.4 What is the highest level of education you have completed?
  • Less than high school diploma
  • High School degree or equivalent (e.g., GED)
  • Bachelor’s degree (e.g., BA, BS)
  • Master’s degree (e.g., MA, MS, MEd)
  • Doctorate (e.g., PhD, EdD)
  • Others (please specify)
Q1.5 What is your major?
  • IARD (Interior Architecture)
  • ARCH (Architecture)
  • Others (please specify)
Q1.6 Please select your current year in IARD or ARCH program in school.
  • 1st
  • 2nd
  • 3rd
  • 4th
  • 5th
  • Others (please specify)
Q2.1 To what degree do you agree or disagree with the following statement:
Strongly disagree 1, Somewhat disagree 2, Neutral 3, Somewhat agree 4, Strongly agree 5
  • Whenever a student uses AI-generated images in their work, they should acknowledge the use of AI image generated platform.
  • It is the responsibility of teachers and professors to detect if AI-generated images were used in students’ work.
  • I believe that my institution will be supportive of the usage of AI-generated images to assist students with their design.
  • The use of AI-generated images in students’ work can be easily detected.
  • The use of AI-generated images is unethical in the context of the authenticity of the design project.
  • AI-generated images are hyped at the moment only because they are new and unique, but soon the hype will be over.
  • AI-generated images will contribute to increasing the competitiveness of students who are not confident in creativity and design capability.
  • I need to learn how to use AI-generated image platforms to improve my performance as a designer.
  • I am excited about using AI-generated images as a future architectural designer.
  • AI-generated images will critically affect architectural design disciplines in my future career.
  • I will need to use AI-generated images to be competitive in my future career.
  • I worry that AI-generated images will take my job in the future.
  • In my future job, I will be able to outsource creative visualizing tasks to AI-generated images while I focus on critical problem-solving aspects.
Q2.2 Let’s think about using AI-generated images in various phases of design in your studio project. To what degree do you agree or disagree with the following statement:
Strongly disagree 1, Somewhat disagree 2, Neutral 3, Somewhat agree 4, Strongly agree 5
  • The use of AI-generated images in the CONCEPT DESIGN process as an inspiration tool should not be allowed in design, regardless of whether the student acknowledges it.
  • Using AI-generated images in the CONCEPT DESIGN process, as an inspiration tool, could be acceptable if the student acknowledges it.
  • The use of AI-generated images in DESIGN DEVELOPMENT, especially for developing material, lighting, and colors, should not be allowed in design, regardless of whether the student acknowledges it.
  • Using AI-generated images in DESIGN DEVELOPMENT, especially for developing material, lighting, and colors, could be acceptable if the student acknowledges it.
  • The use of AI-generated images in the PRESENTATION (such as in a final review) as a final rendering image should not be allowed in design, regardless of whether the student acknowledges it.
  • Using AI-generated images in the presentation (such as in a final review) as a final rendering image could be acceptable if the student acknowledges it.
Open-Ended Questions
Q3.1 What were your initial impressions regarding the emergence of AI image generators before you learned about them in the workshop?
Q3.2 What are your impressions of AI image generators now that you’ve attended the AI workshop and learned how to use them?
Q3.3 How do you feel about incorporating AI image generators into your design process?
Q3.4 How do you feel about the authenticity of AI-generated images related to ethical concerns, such as copyright and plagiarism?
Q3.5 Feel free to describe your thoughts/concerns/potential about AI image generators in architectural design disciplines.

Appendix B

Unrotated component matrix (principal component analysis)
Item | Component 1 (Ethical Responsibility and Personal Attitudes) | Component 2 (Acknowledgment and Institutional Support) | Component 3 (Perceived Impact on Career and Future Use)
IR * Instructor Responsibility to detect | 0.377 | −0.002 | −0.494
DA * Design Authenticity | 0.156 | 0.615 | −0.543
LN Learning Necessity | 0.583 | 0.251 | −0.411
EU Excitement to Use | 0.266 | 0.269 | −0.622
SA Student Acknowledgment | −0.347 | 0.569 | 0.377
IS Institutional Support | −0.252 | 0.787 | 0.330
ED Ease to Detect | 0.493 | 0.239 | 0.324
TT * Temporal Trend | 0.536 | −0.101 | −0.035
SC Student Competitiveness | 0.252 | 0.344 | 0.395
FI Future Impact | 0.539 | −0.206 | 0.465
NFC Necessity for Future Career | 0.445 | 0.186 | 0.476
FJS Future Job Security | 0.314 | −0.152 | 0.531
EFU Expectation for Future Use | 0.543 | 0.015 | 0.038
Note. ‘*’ indicates the item that was reverse computed for Cronbach’s alpha and Principal Component Analysis.

References

  1. Ackermann, E. (1996). Perspective-taking and object construction. In Y. B. Kafai, & M. Resnick (Eds.), Constructionism in practice (1st ed.). Routledge.
  2. Almaz, A. F., El-Agouz, E. A. E., Abdelfatah, M. T., & Mohamed, I. R. (2024). The future role of Artificial Intelligence (AI) design’s integration into architectural and interior design education is to improve efficiency, sustainability, and creativity. Civil Engineering and Architecture, 12(3), 1749–1772.
  3. Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education, 17(1), 42.
  4. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
  5. Chandrasekera, T., Hosseini, Z., & Perera, U. (2024). Can artificial intelligence support creativity in early design processes? International Journal of Architectural Computing, 23(1), 122–136.
  6. Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187–6203.
  7. Chui, M., Hazan, E., Roberts, R., Singla, A., & Smaje, K. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier (accessed on 21 August 2024).
  8. Hanafy, N. O. (2023). Artificial intelligence’s effects on design process creativity: “A study on used A.I. Text-to-Image in architecture”. Journal of Building Engineering, 80, 107999.
  9. Hettithanthri, U., Hansen, P., & Munasinghe, H. (2023). Exploring the architectural design process assisted in conventional design studio: A systematic literature review. International Journal of Technology and Design Education, 33(5), 1835–1859.
  10. Holmes, W., Miao, F., & UNESCO. (2023). Guidance for generative AI in education and research. UNESCO Publishing.
  11. Horvath, A.-S., & Pouliou, P. (2024). AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators. Frontiers of Architectural Research, 13(3), 593–612.
  12. Hsu, H.-P. (2025). From programming to prompting: Developing computational thinking through large language model-based generative artificial intelligence. TechTrends, 69(3), 485–506.
  13. Ibrahim, H., Liu, F., Asim, R., Battu, B., Benabderrahmane, S., Alhafni, B., Adnan, W., Alhanai, T., AlShebli, B., Baghdadi, R., Bélanger, J. J., Beretta, E., Celik, K., Chaqfeh, M., Daqaq, M. F., Bernoussi, Z. E., Fougnie, D., Garcia De Soto, B., Gandolfi, A., … Zaki, Y. (2023). Perception, performance, and detectability of conversational artificial intelligence across 32 university courses. Scientific Reports, 13(1), 12187.
  14. Ibrahim, N. L. N., & Utaberta, N. (2012). Learning in architecture design studio. Procedia—Social and Behavioral Sciences, 60, 30–35.
  15. Ifenthaler, D., Majumdar, R., Gorissen, P., Judge, M., Mishra, S., Raffaghelli, J., & Shimada, A. (2024). Artificial intelligence in education: Implications for policymakers, researchers, and practitioners. Technology, Knowledge and Learning, 29(4), 1693–1710.
  16. Jaruga-Rozdolska, A. (2022). Artificial intelligence as part of future practices in the architect’s work: Midjourney generative tool as part of a process of creating an architectural form. Architectus, 3(71), 95–104.
  17. Kahraman, M. U., Şekerci, Y., Develier, M., & Koyuncu, F. (2024). Integrating artificial intelligence in interior design education: Concept development. Journal of Computational Design, 5(1), 31–60.
  18. Kee, T., Kuys, B., & King, R. (2024). Generative artificial intelligence to enhance architecture education to develop digital literacy and holistic competency. Journal of Artificial Intelligence in Architecture, 3(1), 24–41.
  19. Levin, I., Semenov, A. L., & Gorsky, M. (2025). Smart learning in the 21st century: Advancing constructionism across three digital epochs. Education Sciences, 15(1), 45.
  20. Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (iJET), 18(17), 4–19.
  21. Nussbaumer, L. L., & Guerin, D. A. (2000). The relationship between learning styles and visualization skills among interior design students. Journal of Interior Design, 26(2), 1–15.
  22. Ogunleye, B., Zakariyyah, K. I., Ajao, O., Olayinka, O., & Sharma, H. (2024). A systematic review of generative AI for teaching and learning practice. Education Sciences, 14(6), 636.
  23. Papert, S. A. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
  24. Soliman, A. M. (2017). Appropriate teaching and learning strategies for the architectural design process in pedagogic design studios. Frontiers of Architectural Research, 6(2), 204–217.
  25. Sopher, H., Fisher-Gewirtzman, D., & Kalay, Y. E. (2018). Use of immersive virtual environment in the design studio. Proceedings of eCAADe 2018—36th Annual Conference, 17, 856–862.
  26. Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296.
  27. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39.
Figure 1. Allowability of using AI-generated images in each design phase, with the premise of students’ acknowledgment.
Figure 2. Prohibition of using AI-generated images in each design phase regardless of students’ acknowledgment.
Figure 3. Mean values of students’ perceived acceptance and prohibition at each design stage.
Table 1. Research respondent demographics (n = 42).
Category | n | %
Gender
  Female | 35 | 83.3
  Male | 6 | 14.3
  Prefer not to say | 1 | 2.4
Race
  White | 36 | 85.7
  Hispanic or Latino | 3 | 7.1
  American Indian or Alaska Native | 2 | 4.8
  Asian | 1 | 2.4
Major and Years
  Architecture | 10 | 23.8
    Second years | 3 | 7.1
    Third years | 2 | 4.8
    Fourth years | 1 | 2.4
    Fifth years | 4 | 9.5
  Interior Architecture | 32 | 76.2
    First years | 6 | 14.3
    Second years | 10 | 23.8
    Third years | 3 | 7.1
    Fourth years | 13 | 31.0
Age (mean) | 21.78
Table 2. Total variance explained by PCA.
Component | Initial Eigenvalues (Total) | % of Variance | Cumulative %
1 Ethical Responsibility and Personal Attitudes | 2.24 | 17.21 | 35.16
2 Acknowledgment and Institutional Support | 1.74 | 13.40 | 48.57
3 Perceived Impact on Career and Future Use | 2.33 | 17.95 | 17.95
Table 3. Means and SDs of each item.

Category 1: Ethical Responsibility and Personal Attitudes (α = 0.66)
  IR * (Instructor Responsibility to Detect): “It is the responsibility of teachers (professors) to detect if AI-generated images were used in students’ work.” M = 2.88, SD = 0.97, PCA loading = 0.54
  DA * (Design Authenticity): “The use of AI-generated images is unethical in the context of the authenticity of the design project.” M = 2.00, SD = 0.86, PCA loading = 0.78
  LN (Learning Necessity): “I need to learn how to use AI-generated image platforms to improve my performance as a designer.” M = 3.62, SD = 1.15, PCA loading = 0.70
  EU (Excitement to Use): “I am excited about using AI-generated images as a future architectural designer.” M = 4.12, SD = 0.94, PCA loading = 0.71

Category 2: Acknowledgment and Institutional Support (α = 0.72)
  SA (Student Acknowledgment): “Whenever a student uses AI-generated images in their work, they should acknowledge the use of an AI image-generated platform.” M = 4.40, SD = 0.99, PCA loading = 0.75
  IS (Institutional Support): “I believe that my institution will be supportive of the usage of AI-generated images to assist students with their design.” M = 3.98, SD = 1.02, PCA loading = 0.89

Category 3: Perceived Impact on Career and Future Use (α = 0.62)
  ED (Ease to Detect): “The use of AI-generated images in students’ work can be easily detected.” M = 3.74, SD = 1.06, PCA loading = 0.61
  TT * (Temporal Trend): “AI-generated images are hyped at the moment only because they are new and unique, but soon the hype will be over.” M = 2.45, SD = 0.94, PCA loading = 0.41
  SC (Student Competitiveness): “AI-generated images will contribute to increasing the competitiveness of students who are not confident in creativity and design capability.” M = 3.64, SD = 1.12, PCA loading = 0.46
  FI (Future Impact): “AI-generated images will critically affect architectural design disciplines in my future career.” M = 3.79, SD = 1.00, PCA loading = 0.69
  NFC (Necessity for Future Career): “I will need to use AI-generated images to be competitive in my future career.” M = 3.36, SD = 1.14, PCA loading = 0.65
  FJS (Future Job Security): “I worry that AI-generated images will take my job in the future.” M = 2.43, SD = 1.29, PCA loading = 0.55
  EFU (Expectation for Future Use): “In my future job, I will be able to outsource creative visualizing tasks to AI-generated images while I focus on critical problem-solving aspects.” M = 3.74, SD = 0.94, PCA loading = 0.47

Category 4: Use of AI-generated Images in the Design Process (α = 0.78)
  N_CD * (Never Allowed, Concept Design): “The use of AI-generated images in the CONCEPT DESIGN process as an inspiration tool should not be allowed in design, regardless of whether the student acknowledges it.” M = 1.88, SD = 1.17
  A_CD (Acceptable with Acknowledgment, Concept Design): “Using AI-generated images in the CONCEPT DESIGN process as an inspiration tool could be acceptable if the student acknowledges it.” M = 4.60, SD = 0.54
  N_DD * (Never Allowed, Design Development): “The use of AI-generated images in DESIGN DEVELOPMENT, especially for developing material, lighting, and colors, should not be allowed in design, regardless of whether the student acknowledges it.” M = 2.19, SD = 1.09
  A_DD (Acceptable with Acknowledgment, Design Development): “Using AI-generated images in DESIGN DEVELOPMENT, especially for developing material, lighting, and colors, could be acceptable if the student acknowledges it.” M = 4.14, SD = 0.81
  N_FP * (Never Allowed, Final Presentation): “The use of AI-generated images in the PRESENTATION (such as in a final review) as a final rendering image should not be allowed in design, regardless of whether the student acknowledges it.” M = 2.81, SD = 1.50
  A_FP (Acceptable with Acknowledgment, Final Presentation): “Using AI-generated images in the PRESENTATION (such as in a final review) as a final rendering image could be acceptable if the student acknowledges it.” M = 3.19, SD = 1.37

Note. ‘*’ indicates an item that was reverse-coded for Cronbach’s alpha and the Principal Component Analysis. 1 = strongly disagree, 5 = strongly agree.
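The note above indicates that starred items were reverse-coded before computing Cronbach’s alpha and the PCA loadings. A minimal sketch of both steps is shown below, assuming a hypothetical DataFrame `survey` whose columns use the item codes listed in this table (illustrative only, not the study’s analysis script).

    import pandas as pd

    def reverse_code(item: pd.Series, scale_min: int = 1, scale_max: int = 5) -> pd.Series:
        """Reverse a Likert item so that 1 <-> 5, 2 <-> 4, and so on."""
        return (scale_max + scale_min) - item

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: rows = respondents, columns = items in one category."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Example for Category 1 (IR and DA are the reverse-coded items):
    # cat1 = survey[["IR", "DA", "LN", "EU"]].copy()
    # cat1[["IR", "DA"]] = cat1[["IR", "DA"]].apply(reverse_code)
    # alpha_1 = cronbach_alpha(cat1)    # reported value in Table 3: 0.66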
Table 4. Correlation analysis results.

Variables: 1 = IR, 2 = DA, 3 = LN, 4 = EU, 5 = SA, 6 = IS, 7 = ED, 8 = TT, 9 = SC, 10 = FI, 11 = NFC, 12 = FJS, 13 = EFU, 14 = N_CD, 15 = A_CD, 16 = N_DD, 17 = A_DD, 18 = N_FP, 19 = A_FP (item codes as defined in Table 3). Each row lists that variable’s correlations with variables 1 through (row number − 1), in order.

2 DA:    0.265
3 LN:    −0.372 *, −0.348 *
4 EU:    −0.198, −0.424 **, 0.359 *
5 SA:    0.255, 0.000, −0.140, −0.184
6 IS:    0.194, −0.195, −0.112, −0.048, 0.564 **
7 ED:    −0.031, 0.027, 0.217, 0.032, 0.150, 0.039
8 TT:    0.060, −0.030, −0.130, −0.227, 0.113, 0.163, −0.147
9 SC:    0.117, −0.025, 0.119, −0.097, −0.020, 0.290, 0.206, −0.051
10 FI:   0.049, 0.199, 0.118, −0.205, −0.132, −0.077, 0.130, −0.179, 0.191
11 NFC:  −0.027, −0.025, −0.080, −0.131, 0.020, 0.195, 0.240, −0.154, 0.254, 0.473 **
12 FJS:  0.061, 0.331 *, −0.002, −0.143, 0.052, −0.047, 0.405 **, −0.043, 0.209, 0.262, 0.142
13 EFU:  −0.062, −0.091, 0.222, 0.009, −0.041, −0.057, 0.174, −0.304, −0.068, 0.250, 0.203, 0.055
14 N_CD: 0.181, 0.097, −0.307 *, −0.163, 0.190, 0.059, −0.359 *, 0.403 **, −0.126, −0.168, −0.295, 0.018, 0.037
15 A_CD: −0.511 **, −0.315 *, 0.373 *, 0.430 **, −0.278, −0.018, −0.231, −0.062, −0.083, −0.253, −0.272, −0.303, −0.069, −0.154
16 N_DD: 0.323 *, 0.420 **, −0.312 *, −0.237, 0.312 *, 0.158, −0.167, 0.104, 0.077, −0.029, −0.213, 0.184, −0.046, 0.535 **, −0.320 *
17 A_DD: −0.195, −0.561 **, 0.373 *, 0.391 *, −0.286, −0.113, 0.044, 0.009, −0.076, −0.051, 0.101, −0.315 *, 0.178, −0.110, 0.299, −0.693 **
18 N_FP: 0.370 *, 0.475 **, −0.086, −0.053, 0.152, −0.162, −0.017, −0.058, −0.070, −0.141, −0.215, 0.232, −0.227, 0.125, −0.276, 0.426 **, −0.336 *
19 A_FP: −0.296, −0.543 **, 0.203, 0.266, −0.293, 0.021, 0.018, −0.144, 0.045, 0.120, 0.205, −0.352 *, 0.287, −0.229, 0.271, −0.616 **, 0.568 **, −0.779 **

* Correlation is significant at the 0.05 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed).
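Table 4 reports pairwise correlations with two-tailed significance flags at the 0.05 and 0.01 levels. A minimal sketch of how such a lower-triangular matrix could be generated is shown below, assuming a hypothetical DataFrame `survey` with the item codes as columns; Pearson correlations are used here purely as an example, and this is not the authors’ analysis script.

    import pandas as pd
    from scipy.stats import pearsonr

    def correlation_with_stars(data: pd.DataFrame) -> pd.DataFrame:
        """Lower-triangular Pearson r matrix with '*' (p < .05) and '**' (p < .01), two-tailed."""
        cols = data.columns
        out = pd.DataFrame("", index=cols, columns=cols)
        for i, a in enumerate(cols):
            for j, b in enumerate(cols):
                if j >= i:
                    continue                       # keep only the lower triangle
                r, p = pearsonr(data[a], data[b])  # two-tailed by default
                stars = "**" if p < 0.01 else "*" if p < 0.05 else ""
                out.loc[a, b] = f"{r:.3f}{stars}"
        return out

    # Usage (hypothetical): print(correlation_with_stars(survey[item_codes]))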
Table 5. MANOVA results.

IR (Instructor’s Responsibility to Detect)
  A_CD: Type III SS = 4.693, df = 4, 37, MS = 1.173, F = 5.846, p < 0.001, R² = 0.387
  A_DD: Type III SS = 2.990, df = 4, 37, MS = 0.747, F = 1.145, p = 0.351, R² = 0.110
  A_FP: Type III SS = 7.228, df = 4, 37, MS = 1.807, F = 0.965, p = 0.438, R² = 0.095
DA (Design Authenticity)
  A_CD: Type III SS = 1.482, df = 3, 38, MS = 0.494, F = 1.765, p = 0.170, R² = 0.122
  A_DD: Type III SS = 8.733, df = 3, 38, MS = 2.911, F = 6.008, p = 0.002, R² = 0.322
  A_FP: Type III SS = 26.707, df = 3, 38, MS = 8.902, F = 6.797, p < 0.001, R² = 0.349
  N_CD: Type III SS = 1.580, df = 3, 38, MS = 0.527, F = 0.365, p = 0.779, R² = 0.028
  N_DD: Type III SS = 10.245, df = 3, 38, MS = 3.415, F = 3.395, p = 0.028, R² = 0.211
  N_FP: Type III SS = 34.301, df = 3, 38, MS = 11.434, F = 7.469, p < 0.001, R² = 0.371
LN (Learning Necessity)
  A_CD: Type III SS = 2.675, df = 4, 37, MS = 0.669, F = 2.620, p = 0.050, R² = 0.221
  A_DD: Type III SS = 3.893, df = 4, 37, MS = 0.973, F = 1.549, p = 0.208, R² = 0.143
  A_FP: Type III SS = 6.379, df = 4, 37, MS = 1.595, F = 0.842, p = 0.508, R² = 0.083
EU (Excitement to Use)
  A_CD: Type III SS = 2.667, df = 3, 38, MS = 0.889, F = 3.573, p = 0.023, R² = 0.220
  A_DD: Type III SS = 4.413, df = 3, 38, MS = 1.471, F = 2.459, p = 0.078, R² = 0.163
  A_FP: Type III SS = 5.524, df = 3, 38, MS = 1.841, F = 0.986, p = 0.410, R² = 0.072
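Table 5 lists univariate follow-up tests with Type III sums of squares, treating each perception item as a grouping factor and each design-phase acceptance score as the outcome. As an illustration only, assuming a hypothetical `survey` DataFrame with the Table 3 item codes as columns, one such row could be reproduced roughly as follows (a sketch, not the authors’ analysis script).

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    def univariate_type3(df: pd.DataFrame, dv: str, iv: str) -> pd.DataFrame:
        """Fit dv ~ categorical iv and return a Type III ANOVA table (SS, df, F, p)."""
        # Sum-to-zero contrasts keep Type III sums of squares interpretable.
        model = smf.ols(f"{dv} ~ C({iv}, Sum)", data=df).fit()
        return anova_lm(model, typ=3)

    # Hypothetical usage mirroring the first row of Table 5 (IR predicting A_CD):
    # table = univariate_type3(survey, dv="A_CD", iv="IR")
    # r_squared = smf.ols("A_CD ~ C(IR, Sum)", data=survey).fit().rsquared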
Table 6. Responses from the open-ended question: “How do you feel about incorporating AI image generators into your design process?”

Intend to use in concept design at the early design stage (n = 31)
  “It allows for a lot more experimentation in such a short time. This could really help kick-start a project.”
  “I feel like it is extremely helpful because my brain is only so limited and AI is unlimited.”
  “I think it is a great start to a project with brainstorming and seeing an idea highly rendered.”
Recognize its contribution to inspiration and confidence (n = 15)
  “I feel like AI will be a great resource for concept development, just as places like Pinterest help with inspiration.”
  “I think this is an amazing tool for launching a project. It will help get my creativity going.”
  “It helps newer designers feel more confident about coming up with ideas.”
  “Certain tasks are sped up with AI tools, and it gives confidence to put more detail into the project.”
Intend not to use in the final stage of the design process (n = 7)
  “I would use it in preliminary stages, but I would want my work to be my own later.”
  “I don’t feel I would use it for a final project.”
  “I may try to use it as a brainstorming tool, but I am still hesitant to use it past the conceptual design stage.”
  “I believe that AI should be avoided in final rendering and technical design. Given the current state of AI, it does not understand many design rules and the human condition in space.”
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
