Article

The Impact of Generative AI Images on Consumer Attitudes in Advertising

Department of Business Administration, Hankyong National University, Anseong 17579, Republic of Korea
* Author to whom correspondence should be addressed.
Adm. Sci. 2025, 15(10), 395; https://doi.org/10.3390/admsci15100395
Submission received: 18 July 2025 / Revised: 8 October 2025 / Accepted: 10 October 2025 / Published: 16 October 2025

Abstract

While generative AI's capability to produce high-quality content is well recognized, in-depth research on its actual impact on marketing effectiveness in real-world environments remains scarce. This study addresses this gap through experiments examining the effects of AI-generated advertisement images, created using text-to-image diffusion models, on consumer responses, as well as the boundary conditions of these effects. Study 1 (n = 130) found that for coffee ads, attitudes were descriptively higher toward AI-generated images (ηp² = 0.17), whereas for medical-aesthetics and public-service ads, evaluations favored human-made images; none of these differences reached significance. Study 2 (n = 79) revealed that when consumers were informed of an image's source (AI or human), they showed significantly more positive attitudes toward human-made images than toward AI-generated ones (d = 0.52). Study 3 (n = 209) demonstrated that in commercial advertising contexts where usage motivations were disclosed, consumers' negative reactions to AI-generated images were moderated by the specific motivation (η² = 0.04): when the motivation was privacy protection, evaluations were comparable to those of human-made images; visual appeal produced slightly lower but non-significant ratings; and cost efficiency led to significant declines in trust and purchase intention, with attitude showing only a marginal decrease and preference no difference. This study illuminates the innovative potential of generative AI and provides critical insights for businesses, consumers, and policymakers regarding its effective utilization.

1. Introduction

Historically, creative tasks such as poetry writing, software development, fashion design, and music composition were considered exclusive to human capability. However, recent advancements in artificial intelligence have significantly challenged this notion, as AI can now generate content indistinguishable from human craftsmanship (Feuerriegel et al., 2024). Generative AI has emerged as a key driver accelerating AI adoption across various industries (Dwivedi et al., 2023; Kshetri, 2023). While many organizational functions benefit from AI innovation (Dwivedi et al., 2024; MIT Technology Review Insight, 2023), the positive impact of generative AI is particularly well-documented in the marketing sector (Kshetri, 2023). The AI market in marketing was valued at approximately $15.84 billion in 2021, with projections suggesting it could reach around $107.5 billion by 2028 (Statista Research Department, 2024). Compared to previous generations of digital technologies, generative AI exerts a stronger influence on marketing processes and outcomes (Kshetri, 2023).
In both marketing practice and academia, numerous examples highlight the innovations driven by generative AI. For instance, AI has been used to create magazine covers (Economist, 2022), award-winning artworks (Grierson, 2023), and scientific paper abstracts that are nearly indistinguishable from those written by humans (Gao et al., 2023). Although generative AI is widely recognized for its cost-effectiveness (Reisenbichler et al., 2022) and its positive impact on productivity (Brynjolfsson et al., 2025; Peng et al., 2023), it still has notable limitations. For example, in South Korea’s Midnight Horror Stories Season 4, AI-generated images were used to reduce costs but were criticized for disrupting viewer immersion (Jeong, 2024).
As interest in generative AI continues to grow in the industry, its application in marketing is expanding. However, academic research has largely centered on the technical aspects of generative AI or its evaluation in marketing (Feuerriegel et al., 2024; Kshetri et al., 2024; Longoni et al., 2022). Specifically, prior studies have explored the impact of generative AI on human creative productivity (Zhou & Lee, 2024) and discussed ethical principles surrounding AI-generated content (Sands et al., 2024). Research has also examined the current state and future prospects of generative AI (Storey et al., 2025), as well as performance comparisons between AI-generated texts (e.g., large language models (LLMs)) and human-written texts (Carlson et al., 2023; Reisenbichler et al., 2022). More recently, studies have investigated the quality of AI-generated advertisement images and their effects on online ad click-through rates (Hartmann et al., 2025), as well as consumer perceptions of AI-generated content (Arango et al., 2023).
Although prior research has provided valuable insights into the potential of generative AI in advertising and marketing, systematic investigations into how consumers evaluate and compare AI-generated versus human-made images under different disclosure conditions remain limited. As the call for greater transparency in AI practices grows within the advertising industry (Campbell et al., 2022a), scholars have begun to examine the impact of AI disclosure on consumer responses. However, this literature remains fragmented, leaving key questions unanswered, particularly regarding commercial advertising and contexts without AI disclosure (Baek et al., 2024). Some studies have focused on prosocial advertising contexts, exploring how AI disclosure influences consumer reactions to charity donation ads (Arango et al., 2023; Baek et al., 2024). These studies show that disclosing the use of AI may reduce emotional engagement, empathy, and even ad credibility, thereby weakening positive responses. However, research on commercial advertising remains scarce, and the mechanisms through which consumers respond to AI disclosure in such contexts have not been thoroughly examined. The rapid integration of AI-generated content into marketing makes addressing this gap a timely and theoretically significant endeavor. Moreover, most prior work has concentrated on whether AI should be disclosed. In contrast, relatively little attention has been paid to how consumers compare AI-generated and human-made content in the absence of disclosure. This gap limits a comprehensive understanding of AI advertising effects across different disclosure conditions.
From the perspective of the Persuasion Knowledge Model (PKM; Friestad & Wright, 1994), disclosure activates consumers’ persuasion knowledge, leading them to question the authenticity and motives of the advertisement. As AI is perceived as a new type of persuasive agent, disclosing its use not only raises concerns about manipulative intent but may also trigger algorithm aversion (D. Lee & Ham, 2023; Longoni & Cian, 2022). Thus, under disclosure conditions, AI-generated images are predicted to be less effective than human-made images. In contrast, when no disclosure is provided, consumers lack explicit cues of manipulation and are more likely to evaluate the advertisement on the basis of its content. In such cases, the effectiveness of AI-generated images may be comparable to, or even exceed, that of human-made images. Furthermore, prior research indicates that in prosocial advertising, disclosing the motives for using AI (e.g., highlighting moral reasons or resource constraints) can partly reduce negative consumer reactions to AI-generated content (Arango et al., 2023). However, whether such mitigating mechanisms also apply in commercial advertising remains to be empirically examined.
To address these gaps, this research draws on three empirical studies to systematically investigate consumer responses to AI-generated versus human-made images across commercial and public service advertising contexts. Specifically, Study 1 examines whether differences in consumer attitudes occur under non-disclosure conditions (i.e., a “black-box” condition). Study 2 investigates how consumer attitudes change when the source of the images is explicitly disclosed in advance. Study 3 extends Study 2 by examining the boundary conditions that moderate consumer responses to AI-generated versus human-made images in commercial advertising under disclosure.
This research extends prior studies by systematically examining AI-generated images in both commercial and public service contexts. Drawing on the Persuasion Knowledge Model and authenticity theory, we demonstrate that disclosure serves as a critical moderator of consumer responses to AI-generated content. Absent disclosure, consumers evaluate images based on output quality, allowing AI-generated content to achieve parity with or even outperform human-made alternatives. However, when AI use is disclosed, consumers systematically penalize AI-generated images compared to human-made counterparts. In commercial advertising, we identify three boundary conditions—cost-saving, privacy-protection, and visual-quality rationales—that attenuate or amplify disclosure’s negative effects. By establishing disclosure-contingent evaluation patterns and their contextual boundaries, this research advances our theoretical understanding of AI advertising effectiveness and provides evidence-based guidance for transparent AI implementation strategies.

2. Literature Review

2.1. Generative AI and Its Impact on the Advertising Industry

Generative AI refers to a computational technology capable of creating novel and contextually meaningful content, such as text, images, and audio, based on learned data patterns. With the rapid adoption of technologies like DALL-E 3, GPT-4, and Copilot, generative AI is transforming various industries by reshaping creative workflows and enhancing digital interactions (Feuerriegel et al., 2024). These advancements enable companies to refine consumer engagement strategies, personalize recommendations, and optimize brand interactions through AI-driven innovations in advertising, customer service, and interactive shopping experiences (Mogaji & Jain, 2024).
Recent breakthroughs in generative AI have generated significant interest in the advertising industry. For instance, in 2021, Kraft Heinz, a U.S.-based food manufacturing company, partnered with the creative agency Rethink to develop AI-generated images that closely resembled its iconic ketchup bottle, reinforcing brand recognition and consumer engagement (H. Park, 2023). Similarly, in January 2023, Netflix released an experimental animation titled Dog and Boy, featuring AI-generated background art, demonstrating the technology’s capability to support content creation (Kim & Kim, 2023). In April of the same year, Revolve, a U.S.-based online fashion retailer, gained widespread attention with its 20th-anniversary campaign, in which generative AI was used to create the entire advertisement, including background sets, models, and clothing designs (S. Lee, 2023). Additionally, Amazon integrated AI-powered image generation tools to assist advertisers in producing visually compelling advertisements, thereby improving the overall customer ad experience (Amazon, 2023). In June 2023, the South Korean chicken breast brand I’m Dak became the first company in the country to release an advertisement video entirely generated using ChatGPT (GPT-3.5), further showcasing AI’s expanding role in marketing (Data News, 2023).
Despite its relatively recent commercialization, generative AI is poised to drive substantial innovation in advertising, public relations, and marketing by increasing efficiency, reducing costs, and broadening creative possibilities. Its ability to lower entry barriers for content creation allows non-experts to generate professional-grade visuals within a short timeframe (Campbell et al., 2022a). As generative AI continues to evolve, its potential to redefine market dynamics and consumer interactions will likely become a critical area of exploration in both academic research and industry applications (S. Han et al., 2024).

2.2. Consumers’ Perceptions of Generative AI

The rapid advancement of generative AI, propelled by deep-learning breakthroughs, is transforming marketing and advertising. Yet because the technology—and especially its marketing applications—is still nascent, the extant literature remains relatively limited and has largely emphasized technical capabilities over consumer response. With a few notable exceptions, comparatively few studies directly contrast consumer attitudes and reactions to AI-generated versus human-made images.
Nam (2023) examined advertisements featuring AI-generated versus artist-created artwork and showed that pairing AI imagery with utilitarian ad messages can yield favorable outcomes, underscoring the importance of image–message congruence in brand strategy. In a related stream, D. Han et al. (2023) investigated collaborative creation with DALL·E and documented the gap between users’ expectations and their experienced sense of creative agency, proposing positioning strategies to strengthen that agency. Extending to applied settings, Lim (2024) compared consumer preferences for fruit-drink posters produced with leading image generators (e.g., Midjourney, Stable Diffusion, DALL·E, Firefly). Complementary evidence indicates that ads incorporating virtual or AI elements can heighten perceived novelty (Franke et al., 2023). Moreover, Sun et al. (2025) show that personalization in AI-generated advertising exhibits an inverted-U relationship with consumer attitudes: moderate personalization is most persuasive, whereas too little or too much can backfire.
From a technology-adoption perspective, the Technology Acceptance Model (TAM) posits that perceived ease of use and perceived usefulness are primary determinants of users’ attitudes and adoption. Generative AI—as a new application domain—fits well within TAM (S. F. Wang & Chen, 2024; S. F. Wang et al., 2025), and prior work finds both ease of use and usefulness positively predict acceptance of generative-AI tools (Wei & Chen, 2025). Building on this foundation, Y. Wang et al. (2025) integrate TAM with the Theory of Planned Behavior, the Information Systems Success Model, and perceived risk to explain designers’ cognitive acceptance of generative AI image tools, highlighting the roles of subjective norms, attitudes, and perceived behavioral control in adoption.
At the same time, caution is warranted regarding the economic and branding implications of adopting generative AI. Magni et al. (2024) demonstrate that when audiences know a product was created by AI, they sometimes rate its creativity lower—partly because AI creators are perceived as exerting less effort—although this effect is not universal. Similarly, Brüns and Meißner (2024) find that a brand’s use of generative AI can provoke negative consumer reactions via diminished perceived authenticity; however, these effects are attenuated when AI is framed as augmenting rather than replacing human effort. These patterns align with evidence that recognizing the artificiality of AI-generated content can shape downstream brand evaluations (Arango et al., 2023).

2.3. Persuasion Knowledge, Ethical Implications, and AI Advertising Disclosure

The Persuasion Knowledge Model (PKM; Friestad & Wright, 1994) provides a foundational framework for understanding how consumers interpret and respond to marketing persuasion attempts. Challenging early conceptualizations of consumers as passive recipients, Friestad and Wright (1994) posit that consumers are active, cognitively capable agents who develop sophisticated persuasion-related knowledge structures. These knowledge structures enable consumers to interpret, evaluate, and respond to advertising in goal-consistent ways. The model identifies three interrelated knowledge types that consumers deploy when encountering persuasive communication: topic knowledge (about the product or service), agent knowledge (beliefs about the advertiser’s motives and characteristics), and persuasion knowledge proper (recognition of specific persuasive strategies and tactics). Critically, once cues of persuasive intent are detected, persuasion knowledge activation often triggers defensive cognitive processes—including suspicion, skepticism, and resistance—that ultimately reduce advertising effectiveness (Boerman et al., 2012; Kirmani & Zhu, 2007; Xu & Wyer, 2010).
Empirical extensions of PKM to contemporary native advertising contexts support these theoretical predictions. Boerman et al. (2017) demonstrate that when sponsorship disclosures are noticed by consumers, they heighten conceptual persuasion knowledge and trigger defensive responses, such as more critical evaluations and reduced electronic word-of-mouth (eWOM) intentions. This finding illustrates how the recognition of persuasive intent translates into measurable resistance behaviors (S. S. Lee & Johnson, 2022), establishing PKM as a robust framework for understanding consumer responses to transparent persuasion attempts.
The emergence of generative AI and deepfake technologies introduces novel complexities to this established persuasion knowledge framework. Campbell et al. (2022b) argue that these technologies are fundamentally reshaping the advertising ecosystem in two simultaneous ways: dramatically lowering content-production costs while elevating concerns about authenticity, manipulation, and consumer protection. Their agenda-setting analysis identifies disclosure and explainability as critical governance mechanisms for mitigating risks and preserving trust in AI-mediated persuasion. As AI-generated content proliferates across advertising contexts, the question of whether—and how—to disclose AI involvement has become increasingly pressing (Schilke & Reimann, 2025). The disclosure decision creates a fundamental paradox: transparency about AI use may activate persuasion knowledge and defensive responses, yet concealment risks greater backlash if discovered and raises ethical concerns.
Empirical research reveals the double-edged nature of AI disclosure in advertising. When consumers are not informed about content origins, they may actually prefer AI-generated marketing copy over human-created alternatives (Y. Zhang & Gosline, 2023), suggesting that AI can produce effective persuasive content. However, when AI use is disclosed, multiple negative consequences can emerge. Disclosure can depress ad attitudes and brand trust (Baek et al., 2024), and over-disclosure may further exacerbate distrust and suspicion of falsity (Karpinska-Krakowiak & Eisend, 2025). In luxury advertising contexts, disclosing AI-generated imagery lowers perceived brand effort and authenticity, producing negative reactions, though this penalty diminishes when the imagery demonstrates high creativity (To et al., 2025). Beyond traditional advertising, consumers may interpret AI-assisted selling as a manipulative tactic that heightens perceived manipulative intent and reduces service-quality evaluations (D. Liu et al., 2025). In corporate social responsibility (CSR) communications, perceived falsity in AI-generated advertising undermines online brand engagement through reduced perceived sincerity (Aljarah et al., 2025). Adding further complexity, making AI use salient through explicit cues can increase online engagement metrics while simultaneously evoking negative affect and psychological reactance that harm attitudes toward both the ad and the brand (Hanson et al., 2025).
Consumer responses to AI disclosure are further shaped by wider societal concerns beyond immediate advertising effectiveness. Fears that AI will replace creative professionals in illustration, photography, music, writing, and design raise fundamental fairness and labor displacement concerns (Dornis, 2020; Hernández-Ramírez & Ferreira, 2024). Evidence that AI-generated faces reproduce gender stereotypes and racial homogenization adds to apprehensions about algorithmic bias (AlDahoul et al., 2025; Y. S. Park, 2024). Generative AI’s misuse for harmful purposes, including fraud schemes and non-consensual explicit deepfakes, further compounds public skepticism (Hernández-Ramírez & Ferreira, 2024). From an industry standpoint, AI adoption faces resistance from creative professionals fearing job displacement and from organizations lacking clarity on how to integrate AI into existing workflows (Davenport et al., 2020). Consistent with the stewardship perspective highlighted by Campbell et al. (2022b), these multifaceted risks underscore the need for well-designed disclosures and context-appropriate explanations to balance innovation benefits with consumer protection.
Importantly, ethical concerns about AI in advertising do not invariably produce negative outcomes. When disclosures emphasize motives aligned with consumer interests or social responsibility—such as privacy protection or ethical justifications for AI use—they can mitigate perceptions of manipulation, reduce defensive reactions, and even enhance acceptance of AI advertising. Arango et al. (2023) demonstrate that highlighting privacy protection or ethical rationales for AI use yields evaluations of AI-generated images comparable to those of human-made images, lowering skepticism that would otherwise arise from AI disclosure. This suggests a promising path forward: rather than concealing AI use or offering minimal disclosure, advertisers should adopt transparent, values-aligned disclosure strategies that contextualize AI use within frameworks of consumer benefit and ethical responsibility. Such approaches balance innovation benefits with consumer protection needs, potentially transforming disclosure from a defensive necessity into a trust-building opportunity. The challenge for practitioners and policymakers lies in designing disclosure frameworks that activate appropriate levels of persuasion knowledge—enough to ensure informed consumer decision-making, but not so much as to trigger counterproductive defensive reactions that undermine legitimate marketing communications.

3. Overview of Studies

The present study conducts three empirical investigations to systematically examine consumer responses and attitudes toward AI-generated versus human-made images across both commercial and public service advertising contexts. Moreover, it explores and tests the boundary conditions under which these effects occur in the commercial advertising context. Table 1 summarizes the objectives and experimental designs of Studies 1–3.
Study 1 investigates whether differences in consumer responses arise when the source of the images is not disclosed (i.e., a “black-box” condition). Participants evaluated 30 advertisement images, including 27 AI-generated images (produced by three different text-to-image models based on three original human-made advertisement images) and 3 human-made images. Their overall attitudes, preferences, trust, and purchase intentions were measured and compared across conditions.
Study 2 examines how consumer attitudes change when the source of the images is explicitly disclosed. An AI-generated image from Study 1—selected for having received ratings most comparable to human-made images—was used as the stimulus. Prior to evaluation, participants were informed whether the image was generated by AI or created by a human.
Study 3 explores the conditions under which attitudinal differences between AI-generated and human-made images may be mitigated. Specifically, participants’ trust, attitudes, preferences, and purchase intentions toward AI-generated advertisement images were measured after emphasizing different motivations for their use (i.e., cost efficiency, privacy protection, and visual appeal).

4. Study 1

Previous studies have demonstrated that participants often cannot distinguish between AI-generated images and real images (Arango et al., 2023), and that in certain contexts, such as appetite-related ones, AI-generated images may even appear more appealing (Califano & Spence, 2024). At the same time, research by Hartmann et al. (2025) indicates that although many AI models outperform human creations in terms of aesthetics, some models still fall short of human work in quality and realism, with substantial differences observed across models. In other words, evaluations of AI-generated images on dimensions such as quality, realism, and aesthetics reveal a mixed picture of strengths and weaknesses. When consumers are unaware of the image source, they lack this information as a cue for judgment and tend to respond based on their overall perception of the image. Building on this logic, we propose that because AI-generated images do not demonstrate consistent advantages over human-made images across these perceptual dimensions, consumers' trust, attitudes, preferences, and purchase intentions should not differ significantly between the two. Accordingly, the following hypothesis was formulated:
H1. 
Consumers will exhibit no significant differences in trust, attitude, preference, and purchase intention when exposed to AI-generated advertising images compared to human-made advertising images.

4.1. Method

To compare human-made benchmark images with AI-generated synthetic images, this study employed a two-step approach based on the methodology of Hartmann et al. (2025). Each synthetic image was generated by converting a textual description of a corresponding human-made benchmark image into a visual representation using a text-to-image diffusion model. The benchmark images, pre-selected from marketing advertisements, comprised three images representing varied use cases and visual compositions. Specifically, we chose (1) coffee advertisements, representative of low-involvement, everyday consumer products; (2) medical aesthetics advertisements, examples of high-involvement product categories requiring more deliberate consumer engagement; and (3) public service advertisements, characterized by non-commercial, prosocial messaging distinct from conventional product marketing. By incorporating these diverse advertisement types, we aimed to enhance the external validity and generalizability of the study's findings across different product categories and advertising goals.
As illustrated in Figure 1, the first step involved converting the benchmark images into textual descriptions using the image-text model CLIP Interrogator (pharma, 2022). In the second step, three representative text-to-image AI models—DALL-E 3, Midjourney v6, and Dream Up—were selected. Each model generated three synthetic images based on the textual descriptions produced by CLIP. Following Hartmann et al. (2025), when a platform returned a multi-image grid, we deterministically retained the first (top-left) image from each run. All prompts passed the platforms' default safety/moderation filters.
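For readers who wish to replicate this two-step pipeline, the sketch below shows one way to wire the image→text and text→image steps in Python. The clip-interrogator package and the OpenAI images API are assumptions chosen for exposition (Midjourney v6 and Dream Up are accessed through their own interfaces), and the file path is hypothetical; this is a minimal sketch, not the study's exact tooling.

```python
# Sketch of the two-step benchmark-to-synthetic pipeline (hypothetical paths/models).
from PIL import Image
from clip_interrogator import Config, Interrogator  # pip install clip-interrogator
from openai import OpenAI                           # pip install openai

# Step 1: convert a human-made benchmark image into a textual description.
interrogator = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
benchmark = Image.open("benchmark_coffee_ad.png").convert("RGB")  # hypothetical file
prompt = interrogator.interrogate(benchmark)

# Step 2: feed that description to a text-to-image model (DALL-E 3 shown here;
# the study also used Midjourney v6 and Dream Up via their own interfaces).
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024", n=1)
print(result.data[0].url)  # URL of one synthetic counterpart to the benchmark
```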
To validate the quality of the transformation process, we assessed image–text alignment in both directions using Google’s protocol (Saharia et al., 2022). Two human raters evaluated all pairs on a three-point accuracy scale (Yes = 100, Somewhat = 50, No = 0). For the image→text step, the mean alignment across the three benchmark pairs was 83.3 (SD = 14.4; 95% CI = [47.5, 119.2]), reflecting high average agreement but a wide interval due to the small number of items. Collapsing across all 6 ratings (2 raters × 3 pairs), Yes/Somewhat/No accounted for 66.7%/33.3%/0%, indicating that the CLIP descriptions generally captured the core content of the original images. For the text→image step, the mean alignment across the 27 synthesized pairs was 69.4 (SD = 17.4; 95% CI = [62.5, 76.3]). Aggregating over all 54 ratings (2 raters × 27 pairs), Yes/Somewhat/No comprised 38.9%/61.1%/0%, suggesting moderate-to-high fidelity when reconstructing images from text prompts. Taken together, these findings support the reliability of the two-stage pipeline for research use.
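As a worked check on the reported intervals, the image→text statistics can be reproduced from pair-level scores with a small-sample t interval. The three pair means below are one allocation consistent with the reported Yes/Somewhat counts, not the raw ratings.

```python
import numpy as np
from scipy import stats

# Pair-level alignment scores (mean of the two raters per benchmark pair).
# Values are one allocation consistent with 4 "Yes" and 2 "Somewhat" ratings.
pair_means = np.array([100.0, 75.0, 75.0])

m = pair_means.mean()                        # 83.3
sd = pair_means.std(ddof=1)                  # 14.4
se = sd / np.sqrt(len(pair_means))
t_crit = stats.t.ppf(0.975, df=len(pair_means) - 1)
ci = (m - t_crit * se, m + t_crit * se)      # approx. [47.5, 119.2]; wide because n = 3
print(m, sd, ci)
```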
Study 1 was conducted with 134 participants recruited online. Each participant viewed 10 images per product category (1 human-made image and 9 AI-generated images; 3 models × 3 images per model), presented sequentially in randomized order. Ratings were collected for attitude, preference, trust, and purchase intention on a 7-point Likert scale. Ratings were analyzed at the trial level by fitting linear mixed-effects models with crossed random intercepts for participants and images (REML; Satterthwaite dfs). Within each ad category (coffee, medical aesthetics, public service), we controlled multiplicity across the four correlated outcomes (attitude, preference, trust, purchase intention) using the Holm–Bonferroni procedure (familywise α = 0.05), and we also report Benjamini–Hochberg FDR-adjusted p-values (q = 0.05) for transparency. The measurement items and their reliability and validity assessments are provided in Appendix A.
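A minimal sketch of this analysis in Python follows. The file and column names are hypothetical; statsmodels approximates fully crossed random intercepts via variance components within a single all-encompassing group, and the Satterthwaite degrees of freedom reported in the paper would require lme4/lmerTest in R. The multiplicity step reproduces the coffee-ad adjustments reported in Section 4.2.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Trial-level ratings: one row per participant x image (hypothetical file/columns).
df = pd.read_csv("study1_trials.csv")  # columns: pid, img, source, attitude, ...

# Crossed random intercepts for participants and images, expressed as variance
# components within one group; re_formula="0" suppresses the (unidentified)
# group-level random intercept.
model = smf.mixedlm(
    "attitude ~ C(source)", df,
    groups=np.ones(len(df)),
    re_formula="0",
    vc_formula={"pid": "0 + C(pid)", "img": "0 + C(img)"},
)
fit = model.fit(reml=True)

# Multiplicity control across the four correlated outcomes within a category
# (raw p-values shown for the coffee ads, Table 3).
raw_p = [0.24, 0.29, 0.37, 0.33]  # attitude, preference, trust, purchase intention
holm_p = multipletests(raw_p, alpha=0.05, method="holm")[1]    # all 0.96
bh_p = multipletests(raw_p, alpha=0.05, method="fdr_bh")[1]    # all 0.37
```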

4.2. Results

In Study 1, consumer perceptions were compared across multiple product categories between AI-generated and human-made advertising images. Participants were recruited via Credamo, a professional online survey and experiment platform in China, and each received CNY 3.5. To ensure data quality and respondent authenticity, only participants with a credibility score of 80 or higher and a historical approval rate of at least 80% on the survey platform were allowed to participate. In addition, an attention-check question was embedded in the questionnaire, and response completion times were monitored. Responses that failed the attention check or exhibited implausibly short completion durations were excluded. As a result, four responses were removed, and the final dataset included 130 participants. The participants were divided into groups by product category: 44 for coffee advertisements (56.8% female), 42 for medical aesthetics advertisements (85.7% female), and 44 for public service advertisements (59.1% female). The higher female proportion in the medical aesthetics group reflects real-world customer demographics, enhancing external validity (see Table 2).
First, we fitted trial-level linear mixed-effects models with crossed random intercepts for participants and images. For coffee advertisements, across all four outcomes, the condition effect was not significant, as shown in Table 3. Attitude: F(1, 8) = 1.59, p = 0.24; EMMs AI = 4.46 (SE = 0.20) vs. human = 4.06 (SE = 0.44); difference = 0.40, 95% CI [−0.33, 1.14]. Preference: F(1, 8) = 1.27, p = 0.29; EMMs AI = 4.33 (0.21) vs. human = 3.92 (0.38); difference = 0.41, 95% CI [−0.42, 1.24]. Trust: F(1, 8) = 0.91, p = 0.37; EMMs AI = 4.34 (0.20) vs. human = 4.03 (0.35); difference = 0.31, 95% CI [−0.44, 1.05]. Purchase intention: F(1, 8) = 1.06, p = 0.33; EMMs AI = 4.14 (0.23) vs. human = 3.77 (0.39); difference = 0.37, 95% CI [−0.45, 1.19]. Variance decomposition yielded ICC_participant ≈ 0.53–0.56 and ICC_image ≈ 0.03–0.04, justifying the crossed random-effects specification. After multiplicity control within the outcome family, all Holm-adjusted ps = 0.96 and BH-FDR ps = 0.37; conclusions remained non-significant.
Second, for medical aesthetics advertisements we fitted the same trial-level linear mixed-effects models (REML; Satterthwaite dfs). Across outcomes there were no condition effects, as shown in Table 4. Attitude: F(1, 8) = 0.01, p = 0.94; EMMs AI = 4.41 (SE = 0.18) vs. human = 4.39 (SE = 0.27); difference = 0.02, 95% CI [−0.52, 0.56]. Preference: F(1, 8) = 0.51, p = 0.49; EMMs AI = 4.34 (0.17) vs. human = 4.47 (0.24); difference = −0.13, 95% CI [−0.56, 0.29]. Trust: F(1, 8) = 0.40, p = 0.55; EMMs AI = 4.22 (0.19) vs. human = 4.41 (0.32); difference = −0.19, 95% CI [−0.87, 0.50]. Purchase intention: F(1, 8) = 0.29, p = 0.61; EMMs AI = 4.01 (0.19) vs. human = 4.11 (0.26); difference = −0.10, 95% CI [−0.54, 0.33]. Variance decomposition yielded ICC_participant ≈ 0.49–0.59 and ICC_image ≈ 0.00–0.02, indicating that variance is dominated by stable between-participant differences and supporting the crossed participant × image random-effects specification. Holm-adjusted ps were 1.00 for all outcomes; BH-FDR ps were 0.81 for Preference, Trust, and Purchase intention, and 0.94 for Attitude; inferences remained non-significant.
Public service advertisements were analyzed with the same trial-level linear mixed-effects models (REML; Satterthwaite dfs) with crossed random intercepts for participants and images, as shown in Table 5. For attitude, the condition effect was not significant, F(1, 8) = 1.01, p = 0.34; estimated marginal means (EMMs) were AI = 4.81 (SE = 0.20) and human = 5.00 (0.26), difference = −0.19, 95% CI [−0.63, 0.25]. For preference, F(1, 8) = 0.26, p = 0.63; EMMs AI = 4.68 (0.20) and human = 4.79 (0.27), difference = −0.11, 95% CI [−0.59, 0.38]. For trust, F(1, 8) = 0.24, p = 0.64; EMMs AI = 4.62 (0.21) and human = 4.76 (0.34), difference = −0.14, 95% CI [−0.82, 0.53]. For purchase intention, F(1, 8) = 0.04, p = 0.85; EMMs AI = 5.17 (0.22) and human = 5.22 (0.31), difference = −0.05, 95% CI [−0.61, 0.52]. Variance decomposition showed that participant differences dominated (ICC_participant ≈ 0.60–0.72), whereas image clustering was small (ICC_image ≈ 0.01–0.05), supporting image-level sampling and the crossed random-effects specification. Holm-adjusted ps were 1.00 for all outcomes; BH-FDR ps were 0.85 across outcomes; conclusions remained non-significant.
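For reference, the intraclass correlations reported throughout this section follow directly from the fitted variance components of the crossed model; a worked sketch with illustrative (not fitted) values:

```python
def crossed_iccs(var_pid: float, var_img: float, var_resid: float) -> tuple[float, float]:
    """ICCs for participants and images from crossed random-intercept variances."""
    total = var_pid + var_img + var_resid
    return var_pid / total, var_img / total

# Illustrative variance components (not the fitted values from the paper):
icc_pid, icc_img = crossed_iccs(1.10, 0.07, 0.88)  # approx. 0.54 and 0.03
```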

4.3. Discussion

The results of this experiment indicated that, in the case of coffee advertisements, consumers reported higher attitudes toward AI-generated images than toward human-made images, but the difference was not statistically significant. In medical-aesthetics and public-service advertisements, consumers overall showed a slight preference for human-made images over AI-generated ones; however, these differences were likewise not significant. These findings suggest that when consumers are unaware of whether an advertising image is AI-generated or human-made, they may exhibit similar attitudes toward both types of images, which is consistent with previous research (Hartmann et al., 2025).
These results imply that AI-generated images may possess a unique style and creativity distinct from human-made images, providing consumers with a fresh and novel impression. As AI technology advances, it is likely that consumers will develop greater trust in the quality and sophistication of AI-generated images, particularly as AI can capture intricate details that might be overlooked by human creators.

5. Study 2

Prior research has shown that consumers often hold more negative attitudes toward content created by AI than by humans. For example, Longoni et al. (2022) found that AI-generated news was more likely to be evaluated as inaccurate. Similarly, Nozawa et al. (2022) examined consumer evaluations of restaurants operated by AI versus humans and found that consumers rated AI-produced food and AI-operated restaurants more negatively, with the effect being particularly pronounced in high-end restaurants. More recently, Schilke and Reimann (2025) conducted 13 independent experiments to systematically test whether disclosing the use of AI reduces trust in the actor. Their findings showed that, compared to situations where AI use was not disclosed, disclosure led to a significant decline in trust. This effect was consistent across different task types, role identities (e.g., supervisor, subordinate, professor, analyst, creator), and organizational settings (e.g., investment funds).
Notably, some prior studies have focused on prosocial advertising and examined how AI disclosure influences charitable giving behavior. Arango et al. (2023) found that disclosure of AI-generated content reduced consumers’ empathy, guilt, and emotional engagement, thereby lowering donation intentions. Baek et al. (2024) further demonstrated that AI disclosure negatively affected ad credibility, ad attitudes, and donation intentions, and identified perceived ad credibility as the key mediating mechanism.
More broadly, advertising research has also shown that disclosure can undermine ad effectiveness. Existing studies indicate that ad disclosures often highlight manipulative persuasion knowledge, which in turn damages perceptions of authenticity (Beckert et al., 2021) and reduces ad credibility and purchase intentions (Amazeen & Muddiman, 2018; Darke & Ritchie, 2007; S. S. Lee & Johnson, 2022). Related work also suggests that consumers generally hold negative attitudes toward “synthetic” content, especially when it is perceived as deceptive or manipulative (Campbell et al., 2022a).
According to the PKM, when consumers realize that an advertisement is AI-generated, their persuasion knowledge is activated, leading to reduced credibility and heightened skepticism (Campbell et al., 2022a; Isaac et al., 2016; Voorveld et al., 2024). In other words, when an ad explicitly discloses that it has been created through artificial means (e.g., AI), consumers may perceive it as false, believing that the world it depicts does not match reality. Such perceptions negatively influence persuasion outcomes and weaken ad effectiveness (Campbell et al., 2022a). At the same time, because AI is often perceived as a persuasive agent, its involvement in ad creation may also trigger algorithm aversion, thereby intensifying consumers’ negative evaluations of AI-generated advertising (Longoni & Cian, 2022).
In sum, AI disclosure may reduce ad effectiveness through two distinct pathways: first, by activating persuasion knowledge, which decreases ad credibility and attitudes; and second, by eliciting algorithm aversion, which further strengthens consumer resistance to AI advertising. Based on this reasoning, we predict that disclosure of AI-generated content will have adverse effects on consumer trust, attitudes, preferences, and purchase intentions toward advertising images.
In Study 1 of the current research, it was confirmed that when consumers were unaware of whether an advertisement image was generated by AI or created by a human, they either preferred or held similar attitudes toward AI-generated images compared to human-made ones. Therefore, Study 2 aims to investigate how consumers’ attitudes toward advertisement images are affected when they are informed that the image was generated by AI or by a human. The second hypothesis was thus formulated as follows:
H2. 
Consumers’ evaluations of AI-generated advertising images will be significantly lower than their evaluations of human-made advertising images in terms of trust, attitude, preference, and purchase intention when the image source is disclosed.

5.1. Method

In Study 2, participants were informed whether the advertisement image they viewed was created by AI or by a human. They were then asked to evaluate their attitude, trust, preference, and purchase intention toward the image using a 7-point Likert scale. Stimulus 1 consisted of three human-made benchmark images from Study 1, labeled as “Photographer-captured images”. Stimulus 2 included AI-generated images from Study 1 that had received ratings closest to the benchmark, labeled as “AI-generated images”. The disclosure label appeared directly on the image as a conspicuous banner with a blue background and white text. Each page displayed only one image, and participants were required to view it for at least 30 s before proceeding.
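As an illustration of the stimulus construction, the disclosure banner can be stamped onto an image with PIL; the file name, banner geometry, and label text below are hypothetical, matching only the described blue-background, white-text design.

```python
# Sketch: overlay a disclosure banner (blue background, white text) on an ad image.
from PIL import Image, ImageDraw, ImageFont

ad = Image.open("stimulus_ai_generated.png").convert("RGB")  # hypothetical file
draw = ImageDraw.Draw(ad)
banner_height = 60
draw.rectangle([(0, 0), (ad.width, banner_height)], fill=(20, 80, 200))  # blue banner
draw.text((12, 18), "AI-generated image", fill="white", font=ImageFont.load_default())
ad.save("stimulus_ai_generated_labeled.png")
```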
Study 2 was conducted online with 81 participants, who were divided into two groups. One group was shown Stimulus 1 and informed that the images were created by humans, while the other group was shown Stimulus 2 and informed that the images were generated by AI. Group differences were analyzed using linear mixed-effects models (REML) with crossed random intercepts for participants and images to account for the three repeated ratings per participant; familywise error across the four outcomes was controlled with the Holm procedure (BH-FDR reported for robustness), and standardized effect sizes (Cohen’s d with 95% CIs) were computed from participant-level means.
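The standardized effect sizes were computed as pooled-SD Cohen's d from participant-level means; a minimal sketch of that computation follows (the CI uses the standard large-sample normal approximation for the variance of d; the inputs are hypothetical arrays of per-participant mean ratings).

```python
import numpy as np
from scipy import stats

def cohens_d_ci(x: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """Cohen's d (pooled SD) for two independent groups of participant-level
    means, with a normal-approximation confidence interval for d."""
    n1, n2 = len(x), len(y)
    pooled_var = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    d = (x.mean() - y.mean()) / np.sqrt(pooled_var)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - alpha / 2)
    return d, (d - z * se, d + z * se)

# Usage: x = per-participant mean attitude in the human-disclosed group,
# y = the same in the AI-disclosed group (positive d means Human > AI).
```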

5.2. Results

Study 2 examined whether consumer attitudes toward advertising images differed when the images were explicitly labeled as AI-generated versus human-made. Participants were again recruited via the Credamo platform and received CNY 3 each. After excluding two responses that failed authenticity checks, data from 79 participants (77.2% female) were analyzed (see Table 6). We tested whether the four scales (attitude, preference, trust, purchase intention) operated equivalently across the AI-disclosed and human-disclosed conditions using multi-group CFA. A configural model (same factor structure across groups) provided acceptable comparative fit, χ²(58) = 137.35, CFI = 0.939, TLI = 0.906, RMSEA = 0.133. Constraining factor loadings to equality (metric invariance) did not degrade fit relative to the configural model: χ²(64) = 138.04, CFI = 0.944 (ΔCFI = +0.005), TLI = 0.921 (ΔTLI = +0.015), RMSEA = 0.123 (ΔRMSEA = −0.010). Further constraining item intercepts (scalar invariance) and residual variances (strict invariance) likewise left overall fit unchanged (both: χ²(64) = 138.04, CFI = 0.944, TLI = 0.921, RMSEA = 0.123; all ΔCFI ≤ 0.010 and ΔRMSEA ≤ 0.015). These results indicate that the measures exhibit at least scalar (and even strict) invariance across disclosure conditions, supporting the comparability of mean levels and relations across the two groups.
We fitted linear mixed-effects models (LMMs) with crossed random intercepts for participants (pid) and images (img_idx) to account for repeated ratings (three images per participant). The fixed effect was disclosure condition (human-made vs. AI-generated). To control for familywise error across the four primary outcomes (attitude, preference, trust, and purchase intention), we applied the Holm procedure, with BH-FDR reported as a robustness check. Using AI as the reference category, the human-made condition showed consistently higher ratings on all outcomes.
As shown in Table 7, type III tests of fixed effects indicated significant differences: Attitude: F(1, 77) = 5.263, p = 0.025; estimated marginal means (EMMs): Human = 4.81 vs. AI = 4.22; Δ = 0.592, 95% CI [0.078, 1.105]. Preference: F(1, 77) = 6.136, p = 0.015; EMMs: 4.74 vs. 4.07; Δ = 0.668, 95% CI [0.131, 1.204]. Trust: F(1, 77) = 11.109, p = 0.001; EMMs: 4.94 vs. 4.04; Δ = 0.895, 95% CI [0.360, 1.429]. Purchase intention: F(1, 77) = 11.017, p = 0.001; EMMs: 4.93 vs. 4.02; Δ = 0.912, 95% CI [0.365, 1.459].
Holm-adjusted ps were 0.030 (attitude), 0.030 (preference), 0.004 (trust), and 0.004 (purchase intention); BH-FDR values were 0.025, 0.020, 0.002, and 0.002, respectively. All effects remained significant after adjustment.
Finally, we computed Cohen’s d based on participant-level means (three images per participant) using the pooled SD. Positive values indicate Human > AI. The results confirmed medium to large effect sizes: Attitude: d = 0.515, 95% CI [0.067, 0.963]; Preference: d = 0.561, 95% CI [0.113, 1.008]; Trust: d = 0.752, 95% CI [0.304, 1.200]; Purchase intention: d = 0.745, 95% CI [0.297, 1.193].

5.3. Discussion

The results of Study 2 indicate that, when consumers are informed about an image’s origin, they exhibit a more positive attitude toward human-made images than AI-generated images. This contrasts with the findings from Study 1 and may be explained by the fact that once consumers realize AI-generated content does not reflect real-world objects or human creation, they may feel deceived, resulting in a negative attitude toward such content.
According to Arango et al. (2023), participants in their study experienced a stronger sense of manipulation when they were told that the images they viewed were AI-generated. This suggests that consumers may show defensive reactions after recognizing content as synthetic (Cotte et al., 2005; Darke & Ritchie, 2007). Although societal acceptance and awareness of AI technology continue to grow, consumers can still be skeptical about the inherent artificiality of AI-generated images, even those featuring hyperrealistic details. Such skepticism may be particularly pronounced among consumer groups less familiar with emerging technologies (Arango et al., 2023), potentially reducing trust in AI-generated images. Furthermore, some consumers may hold biases against AI due to moral or ethical concerns regarding its use.

6. Study 3

The results of Study 2 indicate that consumers generally hold more negative views toward AI-generated images. However, prior research has also shown that under certain conditions, consumer attitudes toward AI advertising may be mitigated. For example, studies on charitable giving have demonstrated that when potential donors learn that children’s faces are AI-generated, their willingness to donate decreases significantly. Yet, when the ethical motives for using AI images are emphasized, or when such images are applied in specific contexts such as disaster relief, the negative effect can be alleviated (Arango et al., 2023). It remains unclear, however, whether these boundary conditions also apply in the context of commercial advertising. Based on this, the present study further proposes that consumers’ attitudes may vary depending on the motives communicated for using AI images. In other words, this study aims to examine whether emphasizing intrinsic, extrinsic, or objective motives for applying AI in commercial advertising can reduce consumers’ negative perceptions of AI-generated images. Based on this, the third hypothesis is formulated as follows:
H3. 
Consumers’ negative reactions to AI-generated images will be moderated by the motives emphasized in the advertisement (cost efficiency, privacy protection, and visual appeal).

6.1. Method

In Study 3, participants were presented with various motivations for using AI-generated advertising images and then asked to evaluate their trust, attitude, preference, and purchase intention toward both AI-generated and human-made images on a 7-point Likert scale. As shown in Table 8, these motivations were categorized into three groups: "Cost Efficiency: AI-generated images were used for cost efficiency"; "Privacy Protection: AI-generated images were used for privacy protection"; and "Visual Appeal: AI-generated images were used to enhance visual appeal." The AI-generated images were selected from Study 1 based on their similarity in ratings to the human-made images. Each AI-generated image was labeled with the motivation for using AI-generated images, and the specific reasons for employing AI technology were disclosed beneath each advertisement image (see Figure 2). Human-made images served as the control group: "Control Group: Real images were used, with the subject's approval." The disclosure label appeared directly on the image as a conspicuous banner with a blue background and white text. Each page displayed only one image, and participants were required to view it for at least 30 s before proceeding.
Study 3 was conducted online with 212 participants, who were divided into four groups: three groups reflecting different motivations for using AI-generated images and one control group. Each group was presented with an advertisement image labeled with the corresponding motivation. Participants then evaluated the image based on the provided information. A one-way ANOVA was used to examine the differences in responses across the four groups.

6.2. Results

Study 3 examined how highlighting different motivations for using AI-generated images influenced consumer attitudes toward advertising images. Participants were again recruited via the Credamo platform and received CNY 3 each. After excluding three responses that failed authenticity checks, data from 209 participants (93.8% female) were analyzed. The higher female proportion reflects real-world customer demographics, enhancing external validity (see Table 9). We conducted a four-group multi-group CFA to test whether the four scales functioned equivalently across the motivation conditions. The configural model showed acceptable comparative fit, χ²(168) = 283.10, CFI = 0.951, TLI = 0.948, RMSEA = 0.058 (90% CI [0.046, 0.069], PCLOSE = 0.134; χ²/df = 1.685). Constraining factor loadings to equality (metric) did not degrade fit relative to the configural model: χ²(174) = 289.94, CFI = 0.951 (ΔCFI = 0), TLI = 0.949 (ΔTLI = +0.001), RMSEA = 0.057 (ΔRMSEA = −0.001; χ²/df = 1.666). Adding equality constraints on item intercepts (scalar) and on residual variances (strict) yielded identical fit to the metric model (both χ²(174) = 289.94, CFI = 0.951, TLI = 0.949, RMSEA = 0.057; all |ΔCFI| ≤ 0.001, |ΔRMSEA| ≤ 0.001). Using the standard decision rules (ΔCFI ≤ 0.010; ΔRMSEA ≤ 0.015), the measures exhibit scalar—and even strict—invariance across the four motivation conditions, supporting valid comparisons of both mean levels and relations among the constructs across groups.
We ran separate one-way ANOVAs for attitude, preference, trust, and purchase intention with Condition (Human-made (N = 52), Privacy Protection (N = 52), Visual Appeal (N = 53), Cost Efficiency (N = 52)) as a between-subjects factor. Levene tests supported homogeneity of variances; for completeness, Welch's ANOVA led to the same conclusions. Pairwise comparisons used Tukey's HSD (familywise α = 0.05) with adjusted p-values and 95% CIs, as shown in Table 10. Figure 3 reports condition means with 95% CIs and cell sizes.
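A minimal Python sketch of this pipeline for the trust outcome follows. The file and column names are hypothetical; the Welch robustness check is available in packages such as pingouin, which is an assumption rather than the study's stated tooling.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("study3.csv")  # hypothetical columns: condition, trust, ...
groups = [g["trust"].to_numpy() for _, g in df.groupby("condition")]

print(stats.levene(*groups))    # homogeneity of variances
print(stats.f_oneway(*groups))  # omnibus one-way ANOVA
# Tukey HSD pairwise comparisons at familywise alpha = 0.05
print(pairwise_tukeyhsd(endog=df["trust"], groups=df["condition"], alpha=0.05))
```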
Attitude. Overall means were similar for Privacy Protection and Human-made, Visual Appeal was slightly lower, and Cost Efficiency was the lowest (see Figure 3a). The omnibus ANOVA was significant, F(3, 205) = 2.80, MS_between = 3.51, MS_within = 1.25, p = 0.04; partial η² = 0.04, ω² = 0.03. However, Tukey HSD pairwise tests were not significant (Cost < Privacy: p_adj = 0.06; Cost < Human-made: p_adj = 0.07).
Preference. Human-made, Privacy Protection, and Visual Appeal were similar, with Cost Efficiency lower in mean. The omnibus ANOVA was not significant, F(3, 205) = 1.66, p = 0.18; partial η² = 0.02, ω² = 0.01, and all Tukey comparisons were n.s.
Trust. ANOVA was significant, F(3, 205) = 4.03, MS_between = 7.01, MS_within = 1.74, p = 0.01; partial η² = 0.06, ω² = 0.04. Tukey: Cost < Human-made (p_adj = 0.01) and Cost < Privacy (p_adj = 0.05) were significant, whereas Cost vs. Visual was not (p_adj = 0.11).
Purchase intention. ANOVA was significant, F(3, 205) = 3.17, MS_between = 6.19, MS_within = 1.95, p = 0.03; partial η² = 0.04, ω² = 0.03. Tukey: Cost < Human-made remained significant (p_adj = 0.02), but Cost vs. Privacy was not (p_adj = 0.11).

6.3. Discussion

Study 2 confirmed that consumers hold negative perceptions of AI-generated images. The results of Study 3 demonstrate that emphasizing specific motivations for using AI-generated images can mitigate these negative perceptions. Specifically, when “Privacy Protection” was emphasized, consumers’ attitudes were similar to those toward human-made images. When “Visual Appeal” was emphasized, the ratings were slightly lower than for human-made images, although the difference was not statistically significant. In contrast, when “Cost Efficiency” was emphasized, evaluations declined; Tukey HSD showed significant decreases versus human-made images for trust and purchase intention, whereas the decrease for attitude was only marginal and preference did not differ.
Consumers may view AI-generated images as advantageous for privacy protection because they do not reveal the faces of real individuals, which could be considered a benefit compared to human-made images. These findings are consistent with Arango et al. (2023) and align with the assertion of Ellemers et al. (2019) that people tend to see themselves as ethical actors. In fields such as medical aesthetics, using AI to provide customized designs can enhance consumer trust by showcasing a company's advanced technology and innovative capabilities. Furthermore, improving the visual appeal of images through AI can strengthen consumer confidence in a company's aesthetic sense and technical expertise.
Conversely, consumers might associate an emphasis on “Cost Efficiency” with a potential compromise in quality. In line with the saying “you get what you pay for,” utilizing AI-generated images solely for cost-saving purposes could lead consumers to believe that the overall quality of the services offered is inferior. In areas like medical aesthetics—where consumers highly value the naturalness and quality of outcomes—there is a risk that AI-generated images will be seen as inaccurate reflections of actual results. This perception may provoke more negative reactions, especially when cost efficiency is highlighted.

7. Discussion

7.1. Contribution and Implications

Since the 1970s, digital technology has fundamentally transformed the fields of marketing and advertising (Kamal, 2016). Recently, generative AI has introduced a new paradigm, once again driving revolutionary changes in the marketing industry (Peres et al., 2023). Given its advantages in accessibility, cost efficiency, quality, and variety, AI-generated content is rapidly gaining traction in advertising and marketing. Synthetic content is expected to continue evolving and be adopted exponentially by marketing and advertising professionals.
However, research on how consumers perceive AI-generated content remains relatively scarce. Although several conceptual models have been proposed to address this issue (Campbell et al., 2022a; Whittaker et al., 2021), empirical studies are only just beginning to emerge. Considering the potential for synthetic content to become mainstream in marketing and advertising communications, this study aims to highlight existing gaps in the literature and contribute to the growing need for research on consumer responses to synthetic advertisements.
The research identified the following insights based on three studies. In Study 1, conducted under a “black-box” condition in which the image source was not disclosed, consumer responses to AI-generated versus human-made images were compared across coffee, medical aesthetics, and public service advertisements. The results indicated that although AI images in coffee advertising received slightly higher attitude ratings, none of the differences reached statistical significance, suggesting that when the source is undisclosed, consumers generally evaluate AI-generated and human-made images similarly. In Study 2, where the image source was disclosed, consumers expressed significantly more favorable attitudes toward human-made images. This suggests that once consumers recognize an image as AI-generated, activation of persuasion knowledge and algorithm aversion may reduce evaluations, thereby heightening risks in marketing applications (Baek et al., 2024). In Study 3, boundary conditions under disclosure were examined. The findings revealed that when “privacy protection” was emphasized, evaluations of AI images were comparable to those of human-made images; when “visual appeal” was highlighted, differences were nonsignificant; however, when “cost efficiency” was emphasized, consumer trust and purchase intention declined significantly, with attitudes also showing a slight decrease.
Building on these findings, this research makes several theoretical contributions to the existing literature.
First, it extends prior work on consumer responses to AI-generated versus human-made images in advertising. Earlier studies have primarily focused on public service advertising (e.g., Arango et al., 2023; Baek et al., 2024), whereas the present research broadens the scope to commercial advertising. The results show that, whether in commercial or public service contexts, consumers display more negative attitudes toward AI-generated images once the source is disclosed. This not only corroborates previous findings but also highlights contextual differences in the applicability of AI advertising, thereby deepening our understanding of consumer responses to synthetic content across a wider range of advertising domains. Moreover, prior research under “non-disclosure” conditions rarely compared human-made and AI-generated content directly, overlooking consumers’ natural reactions when source information is entirely absent. By examining this condition, the present research fills this gap and clearly reveals how consumers respond to images without knowing their mode of creation.
Second, by drawing on the Persuasion Knowledge Model (PKM; Friestad & Wright, 1994), the research provides a theoretical explanation for why consumers develop negative attitudes toward AI-generated images in commercial advertising. According to the PKM, disclosure of the image source may activate persuasion knowledge and trigger algorithm aversion, thereby lowering evaluations of AI content (Baek et al., 2024). The current findings align with this theoretical framework and strengthen its explanatory power in the context of digital advertising.
Finally, the research identifies boundary effects under disclosure, showing that motivational cues can either mitigate or amplify consumer responses. This finding expands existing work on the role of motives in AI advertising and offers a theoretical basis for firms to adopt appropriate disclosure and motivation strategies in practice. Furthermore, this study broadens the scope of existing research that has primarily emphasized technological factors—often drawing on frameworks such as the Technology Acceptance Model (e.g., T. Liu et al., 2024; Ma et al., 2025)—to examine consumer attitudes toward AI-based technologies and services. By introducing usage motivation as a key contextual variable, this study empirically demonstrates that consumer responses to generative AI in advertising imagery differ depending on the purpose for which the technology is used. These findings highlight that, in addition to technological attributes, situational factors such as usage motivation play a vital role in shaping consumer perceptions and should not be overlooked in efforts to predict consumer acceptance of AI-driven technologies.
This research analyzed consumer attitudes toward human-made versus AI-generated advertising images under both non-disclosure and disclosure conditions, across commercial and public service contexts. It further examined how emphasizing different motives for using AI-generated images shapes consumer responses in commercial advertising when disclosure is present. The findings provide several practical implications for advertisers, platform managers, and policymakers as AI-generated advertising images become increasingly prevalent.
First, under non-disclosure conditions, results show that consumer responses to AI-generated and human-made images are generally comparable, with no significant differences. In real-world contexts where consumers cannot directly discern how an image was created, advertisers may employ AI-generated images as a flexible tool to expand creative options and save time and budgetary resources, without concern about substantial consumer resistance. This approach resonates with current industry trends, as major consulting firms like McKinsey & Company (2023) report that generative AI could automate activities that currently absorb 60 to 70 percent of employees' work time, freeing up resources for more strategic and creative tasks. Thus, using AI for content creation is not just a theoretical possibility but a rapidly adopted strategy for gaining a competitive edge. This has practical relevance for advertising categories in which imagery primarily conveys atmosphere and appeal, such as coffee promotions and everyday consumer goods.
However, under mandatory disclosure conditions, consumer evaluations of AI-generated images are significantly less favorable than those of human-made images. This suggests that when disclosure is required by regulation or platform transparency policies, firms must proactively adopt strategies to mitigate potential negative perceptions. This finding is particularly critical as industry analyses consistently highlight a growing “trust deficit” between consumers and brands regarding AI usage. For example, the Ipsos AI Monitor 2024 report shows that, globally, 50% of respondents say AI makes them nervous and 37% think AI will make online disinformation worse (Ipsos, 2024)—underscoring the urgent need for brands to build trust.
Currently, consumers are concerned about the ethical issues associated with AI-generated content, such as job displacement, racial and gender stereotyping, and deception. The use of AI-generated images may further heighten these concerns. These apprehensions are not merely academic; they are directly reflected in market research. For instance, WARC’s Marketer’s Toolkit 2024 warns that advertisers using generative AI may be perceived as deceiving consumers and that transparency is key (WARC, 2024), and WARC commentary notes that Gen Z in particular values authenticity and will call out brands that are not transparent about their AI use (Dekker, 2024). Therefore, in the decision-making process, firms and advertisers should emphasize ethical motives to reduce consumer concerns. For instance, a policy stating that “AI-generated images are used to protect privacy” is likely to be more effective than merely disclosing that “the image was generated by AI,” and markedly more effective than emphasizing instrumental motives such as “cost efficiency.” This is especially relevant in domains with high privacy sensitivity, such as medical aesthetics advertising. In addition, regulatory bodies could allow firms to frame AI usage in alignment with consumer values (e.g., privacy, safety) when setting disclosure requirements. Such practices would balance transparency with reduced perceptions of manipulation and resistance, thereby promoting the healthy development of AI advertising. Ultimately, consumer trust has become the central challenge for the widespread adoption of AI (Accenture, 2025), and the narrative surrounding why and how AI is used will be just as important as the technology itself in determining its long-term acceptance.
This research addresses a critical gap in understanding consumer responses to AI-generated advertising content by demonstrating that disclosure fundamentally transforms how consumers evaluate AI versus human-created images. Contrary to widespread concerns about transparency, our findings reveal that disclosure need not inevitably undermine advertising effectiveness. When AI use is strategically framed with appropriate justifications, consumer acceptance can be preserved even under full disclosure. This suggests a promising path forward: rather than concealing AI use or offering minimal disclosure, advertisers should adopt transparent, values-aligned disclosure strategies that contextualize AI use within frameworks of consumer benefit and ethical responsibility. Such approaches balance innovation benefits with consumer protection needs, potentially transforming disclosure from a defensive necessity into a trust-building opportunity. The challenge for practitioners and policymakers lies in designing disclosure frameworks that activate appropriate levels of persuasion knowledge—enough to ensure informed consumer decision-making, but not so much as to trigger counterproductive defensive reactions that undermine legitimate marketing communications.

7.2. Limitations and Future Research Directions

This study explored consumer attitudes toward AI-generated advertisement images; however, several limitations warrant attention in future research:
First, although this study covered diverse contexts such as coffee, public service, and medical aesthetics advertisements, future studies should examine the impact of generative AI in a broader range of marketing settings, such as luxury goods, electronics, technology, and fashion, which represent high-involvement product categories.
Second, to ensure a fair comparison with human-made images, the study did not fine-tune or adjust hyperparameters for each AI model during the multi-stage image generation process, and prompt engineering was deliberately avoided so as not to introduce human biases into prompt construction. Such optimization steps can enhance the effectiveness of generative text-to-image models (Jansen et al., 2023; Rombach et al., 2022), and it is anticipated that combining generative AI models with task-specific data will further improve the cognitive evaluation and real-world effectiveness of synthetic images (S. Zhang & Srinivasan, 2023). Future research may therefore compare optimized and non-optimized AI outputs to assess whether better prompts improve consumer attitudes, as in the sketch below.
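To make such a comparison concrete, the following minimal sketch (not the pipeline used in this study) contrasts a plain prompt with an engineered prompt. It assumes the Hugging Face diffusers library, a public Stable Diffusion checkpoint, and hypothetical prompt wording; fixing the random seed isolates the effect of prompt wording from sampling noise.

```python
# Illustrative only: plain vs. engineered prompt with a fixed seed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint, not the study's model
    torch_dtype=torch.float16,
).to("cuda")

plain_prompt = "a cup of coffee on a table"
engineered_prompt = (
    "a steaming cup of artisan coffee on a rustic wooden table, "
    "soft morning light, shallow depth of field, advertising photography"
)

# Same seed for both runs so only the prompt differs.
baseline = pipe(plain_prompt,
                generator=torch.Generator("cuda").manual_seed(42)).images[0]
optimized = pipe(engineered_prompt,
                 generator=torch.Generator("cuda").manual_seed(42)).images[0]
baseline.save("baseline.png")
optimized.save("optimized.png")
```

Stimuli generated this way could then be rated by separate participant groups to test whether prompt optimization shifts the outcome measures used here.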
Third, because the studies included medical aesthetics advertisements, the samples were intentionally skewed toward female participants to align with the primary consumer base in this sector. While this choice supported internal validity by reflecting real market conditions, it may limit external validity. Future studies should pay closer attention to balancing gender and age composition to validate the robustness of the findings.
Fourth, all three studies were conducted with Chinese participants. Prior research suggests that cultural factors significantly influence consumer responses to AI and algorithmic systems (N. T. Y. Liu et al., 2023; Yam et al., 2023). While China’s rapidly growing AI advertising market (Qin & Jiang, 2019) makes it a theoretically relevant context, Chinese consumers may exhibit distinct patterns compared to Western consumers due to differences in collectivism–individualism orientations (Hofstede, 2001), trust formation mechanisms (Zhao et al., 2021), and historical experiences with transparency (Mol, 2015; Shin et al., 2022). Specifically, research indicates that collectivist cultures tend to show lower algorithm aversion (N. T. Y. Liu et al., 2023) and greater acceptance of AI-driven technologies (Yam et al., 2023), which may amplify or attenuate the disclosure effects observed in our studies. Future research should replicate these findings across diverse cultural contexts to establish boundary conditions and test whether the disclosure-contingent evaluation patterns generalize beyond the Chinese market.
Fifth, in real markets, advertising disclosure is often driven by policies or regulations, and firms do not have complete discretion over whether to disclose. Therefore, future research should examine how regulatory contexts shape consumer attitudes and behavioral responses to mandatory disclosure. Such investigations would not only enhance the external validity of the findings but also provide empirical evidence to inform policymaking.
Sixth, although this study measured consumer attitudes, trust, preference, and purchase intention, it did not systematically incorporate psychological variables that could explain the observed differences. Future research could further examine mediating mechanisms such as perceived authenticity, perceived quality, psychological discomfort (for example, the “uncanny valley” effect), and perceived privacy risks and benefits. In addition, potential moderators such as product involvement, technological familiarity, psychological anxiety, social exclusion, and perceived control could be introduced to capture individual and contextual differences in consumer responses. These extensions would contribute to building a more comprehensive theoretical model.
Seventh, the examination of motivational frames for AI usage in this study primarily focused on the context of medical aesthetics advertising. While the findings revealed differences between privacy protection and cost-efficiency frames, the underlying psychological mechanisms were not fully tested. Future research should explore potential mediators, such as perceived quality or authenticity under the cost-efficiency frame, and perceived privacy protection or moral perception under the privacy-protection frame.
Eighth, in Studies 2 and 3, the manipulations were delivered through prominent on-screen prompts rather than verified with explicit manipulation checks, consistent with the approach of Baek et al. (2024). Future research may employ comprehension checks to further strengthen the validation of the manipulations.
Lastly, consumer attitudes toward artificial intelligence may evolve alongside technological development and changes in societal perceptions. Future studies should track shifts in cognition and emotion as AI becomes increasingly normalized, thereby capturing the dynamic influence of AI-generated content on consumer attitudes and behaviors.
Despite these limitations, our study provides an early empirical foundation for understanding consumer trust in AI advertising—a springboard for future research to explore psychological mechanisms, cross-cultural variations, and dynamic consumer adaptations to AI-generated content.

Author Contributions

Conceptualization, L.Z. and C.H.; Methodology, L.Z. and C.H.; Software, L.Z.; Validation, L.Z. and C.H.; Formal analysis, L.Z.; Investigation, L.Z. and C.H.; Resources, C.H.; Data curation, L.Z.; Writing—original draft, L.Z.; Writing—review & editing, L.Z. and C.H.; Visualization, L.Z.; Supervision, C.H.; Project administration, C.H.; Funding acquisition, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Institutional Review Board of the Ministry of Health and Welfare, South Korea (protocol code P01-202506-01-021 and date of approval 13 June 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Using multi-group CFA, we assessed measurement invariance across the three studies. Configural and metric invariance were supported (Configural: CFI = 0.974, RMSEA = 0.071; Metric: CFI = 0.974, RMSEA = 0.067; ΔCFI = 0.000, ΔRMSEA = −0.004). Accordingly, we pooled the three studies and report reliability and correlation-/loading-based validity indices to enhance estimation stability.
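As a rough, non-authoritative illustration of the configural step, the sketch below fits the same four-factor CFA separately in each study and compares global fit. It assumes the Python semopy library, the item labels reported below (a1–d2), and a hypothetical pooled data file; a full invariance test would additionally impose cross-group equality constraints on the loadings.

```python
# Illustrative configural-style check: the same CFA fitted per study (semopy).
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
Attitude =~ a1 + a2 + a3
Preference =~ b1 + b2 + b3
Trust =~ c1 + c2
PurchaseIntention =~ d1 + d2
"""

df = pd.read_csv("pooled_responses.csv")  # hypothetical file with a 'study' column
for study, group in df.groupby("study"):
    model = Model(MODEL_DESC)
    model.fit(group)
    stats = calc_stats(model)  # one-row table with chi2, CFI, RMSEA, AIC, ...
    print(study, stats[["CFI", "RMSEA"]].round(3).to_string(index=False))
```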
Internal consistency was assessed with Cronbach’s alpha for each multi-item scale. Results indicated excellent reliability across all four constructs: Attitude, α = 0.93; Preference, α = 0.96; Trust, α = 0.91; and Purchase Intention, α = 0.96. All coefficients exceed conventional 0.70/0.80 benchmarks, supporting the use of composite scores. Accordingly, we computed scale scores by averaging the items within each construct for the experimental analyses reported in the main text.
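A minimal sketch of this reliability-and-composite step, assuming the pingouin library, the item labels used in this appendix (a1–d2), and a hypothetical data file:

```python
# Cronbach's alpha per construct, then composite scores as item means.
import pandas as pd
import pingouin as pg

df = pd.read_csv("pooled_responses.csv")  # hypothetical pooled data
constructs = {
    "Attitude": ["a1", "a2", "a3"],
    "Preference": ["b1", "b2", "b3"],
    "Trust": ["c1", "c2"],
    "PurchaseIntention": ["d1", "d2"],
}
for name, items in constructs.items():
    alpha, ci = pg.cronbach_alpha(data=df[items])
    df[name] = df[items].mean(axis=1)  # composite score used in the main analyses
    print(f"{name}: alpha = {alpha:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```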
Table A1. Questionnaire items.

| Construct | Items | Cronbach's α | Source |
|---|---|---|---|
| Attitude | This advertisement is interesting. / This advertisement attracts my attention. / This advertisement is impressive. | 0.93 | De Pelsmacker et al. (2002) |
| Preference | I have a favorable impression of this advertisement. / Overall, this advertisement is good. / I like this advertisement. | 0.96 | MacKenzie et al. (1986); Yoo et al. (2005) |
| Trust | This advertisement seems trustworthy. / This advertisement seems honest. | 0.91 | Sarofim and Cabano (2018) |
| Purchase intention | I want to conserve water. / I feel it is necessary to conserve water. | 0.96 | Dodds et al. (1991); Spears and Singh (2004) |
Prior to extraction, sampling adequacy was excellent (KMO = 0.942) and Bartlett’s test of sphericity was significant, χ2(45) = 5909.424, p < 0.001, indicating that the correlation matrix was factorable. We conducted an EFA using principal components extraction with varimax (Kaiser-normalized) rotation. Four components with eigenvalues > 1 were retained, consistent with the theorized four-construct structure. The rotated solution explained 92.53% of the total variance (Component 1 = 25.78%, Component 2 = 24.76%, Component 3 = 24.72%, Component 4 = 17.28%; rotation converged in 11 iterations).
Item communalities were uniformly high (0.875–0.959). In the rotated pattern, the Attitude items (a1–a3) loaded strongly on one factor (0.632–0.780), the Preference items (b1–b3) on a second factor (0.644–0.718), the Purchase Intention items (d1–d2) on a third factor (0.759, 0.790), and the Trust items on a fourth factor (c2 = 0.712; c1 showed a moderate cross-loading on the Preference and Trust factors, 0.525 and 0.522, respectively). Given the high communalities and theoretical coherence, all items were retained.
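This EFA pipeline can be sketched as follows, assuming the factor_analyzer package and the same hypothetical data file; the code mirrors, rather than reproduces, the analysis above.

```python
# KMO and Bartlett's test, then principal-components EFA with varimax rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

item_cols = ["a1", "a2", "a3", "b1", "b2", "b3", "c1", "c2", "d1", "d2"]
items = pd.read_csv("pooled_responses.csv")[item_cols]  # hypothetical file

chi2, p = calculate_bartlett_sphericity(items)  # factorability of the R matrix
_, kmo_total = calculate_kmo(items)             # overall sampling adequacy
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}; KMO = {kmo_total:.3f}")

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(items)
print("Communalities:", fa.get_communalities().round(3))
loadings = pd.DataFrame(fa.loadings_, index=item_cols).round(3)
print("Rotated loadings:\n", loadings)
# The third element of get_factor_variance() is cumulative variance explained.
print("Cumulative variance:", fa.get_factor_variance()[2].round(3))
```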

References

  1. Accenture. (2025). Technology vision 2025: AI: A declaration of autonomy—Is trust the limit of AI’s limitless possibilities? Available online: https://www.accenture.com/us-en/insights/technology/technology-trends-2023 (accessed on 7 October 2025).
  2. AlDahoul, N., Rahwan, T., & Zaki, Y. (2025). AI-generated faces influence gender stereotypes and racial homogenization. Scientific Reports, 15(1), 14449. [Google Scholar] [CrossRef]
  3. Aljarah, A., Ibrahim, B., & López, M. (2025). In AI, we do not trust! The nexus between awareness of falsity in AI-generated CSR ads and online brand engagement. Internet Research, 35(3), 1406–1426. [Google Scholar] [CrossRef]
  4. Amazeen, M. A., & Muddiman, A. R. (2018). Saving media or trading on trust? The effects of native advertising on audience perceptions of legacy and online news publishers. Digital Journalism, 6(2), 176–195. [Google Scholar] [CrossRef]
  5. Amazon. (2023). Amazon rolls out AI-powered image generation to help advertisers deliver a better ad experience for customers. Available online: https://www.aboutamazon.com/news/innovation-at-amazon/amazon-ads-ai-powered-image-generator (accessed on 25 July 2024).
  6. Arango, L., Singaraju, S. P., & Niininen, O. (2023). Consumer responses to AI-generated charitable giving ads. Journal of Advertising, 52(4), 486–503. [Google Scholar] [CrossRef]
  7. Baek, T. H., Kim, J., & Kim, J. H. (2024). Effect of disclosing AI-generated content on prosocial advertising evaluation. International Journal of Advertising, 1–22. [Google Scholar] [CrossRef]
  8. Beckert, J., Koch, T., Viererbl, B., & Schulz-Knappe, C. (2021). The disclosure paradox: How persuasion knowledge mediates disclosure effects in sponsored media content. International Journal of Advertising, 40(7), 1160–1186. [Google Scholar] [CrossRef]
  9. Boerman, S. C., Van Reijmersdal, E. A., & Neijens, P. C. (2012). Sponsorship disclosure: Effects of duration on persuasion knowledge and brand responses. Journal of Communication, 62(6), 1047–1064. [Google Scholar] [CrossRef]
  10. Boerman, S. C., Willemsen, L. M., & van der Aa, E. P. (2017). “This post is sponsored”: Effects of sponsorship disclosure on persuasion knowledge and electronic word of mouth in the context of Facebook. Journal of Interactive Marketing, 38, 82–92. [Google Scholar] [CrossRef]
  11. Brüns, J. D., & Meißner, M. (2024). Do you create your content yourself? Using generative artificial intelligence for social media content creation diminishes perceived brand authenticity. Journal of Retailing and Consumer Services, 79, 103790. [Google Scholar] [CrossRef]
  12. Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–942. [Google Scholar] [CrossRef]
  13. Califano, G., & Spence, C. (2024). Assessing the visual appeal of real/AI-generated food images. Food Quality and Preference, 116, 105149. [Google Scholar] [CrossRef]
  14. Campbell, C., Plangger, K., Sands, S., & Kietzmann, J. (2022a). Preparing for an era of deepfakes and AI-generated ads: A framework for understanding responses to manipulated advertising. Journal of Advertising, 51(1), 22–38. [Google Scholar] [CrossRef]
  15. Campbell, C., Plangger, K., Sands, S., Kietzmann, J., & Bates, K. (2022b). How deepfakes and artificial intelligence could reshape the advertising industry: The coming reality of AI fakes and their potential impact on consumer behavior. Journal of Advertising Research, 62(3), 241–251. [Google Scholar] [CrossRef]
  16. Carlson, K., Kopalle, P. K., Riddell, A., Rockmore, D., & Vana, P. (2023). Complementing human effort in online reviews: A deep learning approach to automatic content generation and review synthesis. International Journal of Research in Marketing, 40(1), 54–74. [Google Scholar] [CrossRef]
  17. Cotte, J., Coulter, R. A., & Moore, M. (2005). Enhancing or disrupting guilt: The role of ad credibility and perceived manipulative intent. Journal of Business Research, 58(3), 361–368. [Google Scholar] [CrossRef]
  18. Darke, P. R., & Ritchie, R. J. (2007). The defensive consumer: Advertising deception, defensive processing, and distrust. Journal of Marketing Research, 44(1), 114–127. [Google Scholar] [CrossRef]
  19. Data News. (2023). I’m Chicken, the first domestic ChatGPT-made advertising video. Available online: https://www.datanews.co.kr/news/article.html?no=127951 (accessed on 21 June 2023).
  20. Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42. [Google Scholar] [CrossRef]
  21. Dekker, M. (2024). Have you Gen Z-proofed your approach to artificial intelligence? Available online: https://www.warc.com/newsandopinion/opinion/have-you-gen-z-proofed-your-approach-to-artificial-intelligence/en-gb/6764 (accessed on 7 October 2025).
  22. De Pelsmacker, P., Geuens, M., & Anckaert, P. (2002). Media context and advertising effectiveness: The role of context appreciation and context/Ad similarity. Journal of Advertising, 31(2), 49–61. [Google Scholar] [CrossRef]
  23. Dodds, W. B., Monroe, K. B., & Grewal, D. (1991). Effects of price, brand, and store information on buyers’ product evaluations. Journal of Marketing Research, 28(3), 307–319. [Google Scholar] [CrossRef]
  24. Dornis, T. W. (2020). Artificial creativity: Emergent works and the void in current copyright doctrine. Yale Journal of Law & Technology, 22, 1–60. [Google Scholar] [CrossRef]
  25. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., & Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. [Google Scholar] [CrossRef]
  26. Dwivedi, Y. K., Pandey, N., Currie, W., & Micu, A. (2024). Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: Practices, challenges and research agenda. International Journal of Contemporary Hospitality Management, 36(1), 1–12. [Google Scholar] [CrossRef]
  27. The Economist. (2022). How a computer designed this week’s cover. Available online: https://www.economist.com/topics/artificial-intelligence?after=3916e326-c0a3-4299-b42a-d7d7c5f55205 (accessed on 30 June 2024).
  28. Ellemers, N., Van Der Toorn, J., Paunov, Y., & Van Leeuwen, T. (2019). The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Personality and Social Psychology Review, 23(4), 332–366. [Google Scholar] [CrossRef]
  29. Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. [Google Scholar] [CrossRef]
  30. Franke, C., Groeppel-Klein, A., & Müller, K. (2023). Consumers’ responses to virtual influencers as advertising endorsers: Novel and effective or uncanny and deceiving? Journal of Advertising, 52(4), 523–539. [Google Scholar] [CrossRef]
  31. Friestad, M., & Wright, P. (1994). The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research, 21(1), 1–31. [Google Scholar] [CrossRef]
  32. Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023). Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digital Medicine, 6(1), 75. [Google Scholar] [CrossRef]
  33. Grierson, J. (2023). Photographer admits prize-winning image was AI generated. Available online: https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated (accessed on 30 June 2024).
  34. Han, D., Choi, D., & Oh, C. (2023). A study on user experience through analysis of the creative process of using image generative AI: Focusing on user agency in creativity. Journal of Convergence on Culture Technology, 9(4), 667–679. [Google Scholar]
  35. Han, S., Lee, H., Kim, J., & Koo, Y. (2024). Utilizing generative AI services in the image production process of the print media content industry: Focusing on user demographics. Design Convergence Study, 23, 1–24. [Google Scholar]
  36. Hanson, S., Carlson, J., & Pressler, H. (2025). The differential impact of AI salience on advertising engagement and attitude: Scary good AI advertising. Journal of Advertising Research, 65(2), 190–201. [Google Scholar] [CrossRef]
  37. Hartmann, J., Exner, Y., & Domdey, S. (2025). The power of generative marketing: Can generative AI create superhuman visual marketing content? International Journal of Research in Marketing, 42(1), 13–31. [Google Scholar] [CrossRef]
  38. Hernández-Ramírez, R., & Ferreira, J. B. (2024). The future end of design work: A critical overview of managerialism, generative AI, and the nature of knowledge work, and why craft remains relevant. She Ji: The Journal of Design, Economics, and Innovation, 10(4), 414–440. [Google Scholar] [CrossRef]
  39. Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations (2nd ed.). Sage Publications. [Google Scholar]
  40. Ipsos. (2024). The Ipsos AI monitor 2024. Available online: https://www.ipsos.com/en-us/ipsos-ai-monitor-2024 (accessed on 7 October 2025).
  41. Isaac, M. S., Brough, A. R., & Grayson, K. (2016). Is top 10 better than top 9? The role of expectations in consumer response to imprecise rank claims. Journal of Marketing Research, 53(3), 338–353. [Google Scholar] [CrossRef]
  42. Jansen, T., Heitmann, M., Reisenbichler, M., & Schweidel, D. A. (2023). Automated alignment: Guiding visual generative AI for brand building and customer engagement. Available online: https://ssrn.com/abstract=4656622 (accessed on 7 October 2025). [CrossRef]
  43. Jeong, M. (2024). “Because of production costs…” MBC’s ‘Midnight Ghost Story’ on the chopping block for using AI instead of actors. Available online: https://v.daum.net/v/20240718163553177 (accessed on 18 July 2024).
  44. Kamal, Y. (2016). Study of trend in digital marketing and evolution of digital marketing strategies. International Journal of Engineering Science, 6(5), 5300–5302. [Google Scholar]
  45. Karpinska-Krakowiak, M., & Eisend, M. (2025). Realistic portrayals of untrue information: The effects of deepfaked ads and different types of disclosures. Journal of Advertising, 54(3), 432–442. [Google Scholar] [CrossRef]
  46. Kim, K. H., & Kim, H. G. (2023). A case study of ChatGPT and Midjourney-Exploring the possibility of use for art and creation using AI. The Treatise on The Plastic Media, 26(2), 1–10. [Google Scholar] [CrossRef]
  47. Kirmani, A., & Zhu, R. (2007). Vigilant against manipulation: The effect of regulatory focus on the use of persuasion knowledge. Journal of Marketing Research, 44(4), 688–701. [Google Scholar] [CrossRef]
  48. Kshetri, N. (2023). Generative artificial intelligence in marketing. IT Professional, 25(5), 71–75. [Google Scholar] [CrossRef]
  49. Kshetri, N., Dwivedi, Y. K., Davenport, T. H., & Panteli, N. (2024). Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. International Journal of Information Management, 75, 102716. [Google Scholar] [CrossRef]
  50. Lee, D., & Ham, C. D. (2023). AI versus human: Rethinking the role of agent knowledge in consumers’ coping mechanism related to influencer marketing. Journal of Interactive Advertising, 23(3), 241–258. [Google Scholar] [CrossRef]
  51. Lee, S. (2023). No need for models, sets, or photographers…AI-created giant fashion advertisements. Available online: https://www.hankookilbo.com/News/Read/A2023040505480005047 (accessed on 30 June 2024).
  52. Lee, S. S., & Johnson, B. K. (2022). Are they being authentic? The effects of self-disclosure and message sidedness on sponsored post effectiveness. International Journal of Advertising, 41(1), 30–53. [Google Scholar] [CrossRef]
  53. Lim, J. (2024). A study on quality preference of image generation AI for advertising poster design: Focusing on fruit drink advertisements. Journal of Communication Design, 88, 66–77. [Google Scholar] [CrossRef]
  54. Liu, D., Wang, H., & Zhu, Y. (2025). You plan to manipulate me: A persuasion knowledge perspective for understanding the effects of AI-assisted selling. Journal of Business Research, 200, 115598. [Google Scholar] [CrossRef]
  55. Liu, N. T. Y., Kirshner, S. N., & Lim, E. T. K. (2023). Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion. Journal of Retailing and Consumer Services, 72, 103259. [Google Scholar] [CrossRef]
  56. Liu, T., Zhang, Y., Zhang, M., Chen, M., & Yu, S. (2024). Factors influencing consumer willingness to use AI-driven autonomous taxis. Behavioral Sciences, 14(12), 1216. [Google Scholar] [CrossRef]
  57. Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The “word-of-machine” effect. Journal of Marketing, 86(1), 91–108. [Google Scholar] [CrossRef]
  58. Longoni, C., Fradkin, A., Cian, L., & Pennycook, G. (2022, June 21–24). News from generative artificial intelligence is believed less. 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 97–106), Seoul, Republic of Korea. [Google Scholar]
  59. Ma, J., Wang, P., Li, B., Wang, T., Pang, X. S., & Wang, D. (2025). Exploring user adoption of ChatGPT: A technology acceptance model perspective. International Journal of Human–Computer Interaction, 41(2), 1431–1445. [Google Scholar] [CrossRef]
  60. MacKenzie, S. B., Lutz, R. J., & Belch, G. E. (1986). The role of attitude toward the ad as a mediator of advertising effectiveness: A test of competing explanations. Journal of Marketing Research, 23(2), 130–143. [Google Scholar] [CrossRef]
  61. Magni, F., Park, J., & Chao, M. M. (2024). Humans as creativity gatekeepers: Are we biased against AI creativity? Journal of Business and Psychology, 39(3), 643–656. [Google Scholar] [CrossRef]
  62. McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier (accessed on 7 October 2025).
  63. MIT Technology Review Insight. (2023). The great acceleration: CIO perspectives on generative AI: How technology leaders are adopting emerging tools to deliver enterprise-wide AI. Available online: https://www.technologyreview.com/2023/07/18/1076423/the-great-acceleration-cio-perspectives-on-generative-ai/ (accessed on 30 June 2024).
  64. Mogaji, E., & Jain, V. (2024). How generative AI is (will) change consumer behaviour: Postulating the potential impact and implications for research, practice, and policy. Journal of Consumer Behaviour, 23(5), 2379–2389. [Google Scholar] [CrossRef]
  65. Mol, A. P. J. (2015). Transparency and value chain sustainability. Journal of Cleaner Production, 107, 154–161. [Google Scholar] [CrossRef]
  66. Nam, J. (2023). The impact of fit in artwork–product combinations on consumer evaluations: Focusing on artificial intelligence (AI)-generated images and artist. Journal of Communication Design, 85, 472–483. [Google Scholar] [CrossRef]
  67. Nozawa, C., Togawa, T., Velasco, C., & Motoki, K. (2022). Consumer responses to the use of artificial intelligence in luxury and non-luxury restaurants. Food Quality and Preference, 96, 104436. [Google Scholar] [CrossRef]
  68. Park, H. (2023). A Case Study on Application of Text to Image Generator AI DALL·E. The Treatise on The Plastic Media, 26(1), 102–110. [Google Scholar] [CrossRef]
  69. Park, Y. S. (2024). White default: Examining racialized biases behind AI-generated images. Art Education, 77(4), 36–45. [Google Scholar] [CrossRef]
  70. Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv, arXiv:2302.06590. [Google Scholar] [CrossRef]
  71. Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). On ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40(2), 269–275. [Google Scholar] [CrossRef]
  72. pharma. (2022). CLIP interrogator. Available online: https://huggingface.co/spaces/pharma/CLIP-Interrogator (accessed on 17 October 2024).
  73. Qin, X., & Jiang, Z. (2019). The impact of AI on the advertising process: The Chinese experience. Journal of Advertising, 48(4), 338–346. [Google Scholar] [CrossRef]
  74. Reisenbichler, M., Reutterer, T., Schweidel, D. A., & Dan, D. (2022). Frontiers: Supporting content marketing with natural language generation. Marketing Science, 41(3), 441–452. [Google Scholar] [CrossRef]
  75. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022, June 18–24). High-resolution image synthesis with latent diffusion models. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 10684–10695), New Orleans, LA, USA. [Google Scholar] [CrossRef]
  76. Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., Salimans, T., Ho, J., Fleet, D. J., & Norouzi, M. (2022). Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35, 36479–36494. [Google Scholar]
  77. Sands, S., Campbell, C., Ferraro, C., Demsar, V., Rosengren, S., & Farrell, J. (2024). Principles for advertising responsibly using generative AI. Organizational Dynamics, 53(2), 101042. [Google Scholar] [CrossRef]
  78. Sarofim, S., & Cabano, F. G. (2018). In God we hope, in ads we believe: The influence of religion on hope, perceived ad credibility, and purchase behavior. Marketing Letters, 29(3), 391–404. [Google Scholar] [CrossRef]
  79. Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, 104405. [Google Scholar] [CrossRef]
  80. Shin, D., Chotiyaputta, V., & Zaid, B. (2022). The effects of cultural dimensions on algorithmic news: How do cultural value orientations affect how people perceive algorithms? Computers in Human Behavior, 126, 107007. [Google Scholar] [CrossRef]
  81. Spears, N., & Singh, S. N. (2004). Measuring attitude toward the brand and purchase intentions. Journal of Current Issues & Research in Advertising, 26(2), 53–66. [Google Scholar] [CrossRef]
  82. Statista Research Department. (2024). AI in marketing revenue worldwide 2020–2028. Available online: https://www.statista.com/statistics/1293758/ai-marketing-revenue-worldwide/#:~:text=In%202021%2C%20the%20market%20for,than%20107.5%20billion%20by%202028 (accessed on 30 December 2024).
  83. Storey, V. C., Yue, W. T., Zhao, J. L., & Lukyanenko, R. (2025). Generative artificial intelligence: Evolving technology, growing societal impact, and opportunities for information systems research. Information Systems Frontiers, 1–22. [Google Scholar] [CrossRef]
  84. Sun, H., Xie, P., & Sun, Y. (2025). The inverted U-shaped effect of personalization on consumer attitudes in AI-generated ads: Striking the right balance between utility and threat. Journal of Advertising Research, 65(2), 237–258. [Google Scholar] [CrossRef]
  85. To, R. N., Wu, Y. C., Kianian, P., & Zhang, Z. (2025). When AI doesn’t sell Prada: Why using AI-generated advertisements backfires for luxury brands. Journal of Advertising Research, 65(2), 202–236. [Google Scholar] [CrossRef]
  86. Voorveld, H. A., Meppelink, C. S., & Boerman, S. C. (2024). Consumers’ persuasion knowledge of algorithms in social media advertising: Identifying consumer groups based on awareness, appropriateness, and coping ability. International Journal of Advertising, 43(6), 960–986. [Google Scholar] [CrossRef]
  87. Wang, S. F., & Chen, C. C. (2024). Exploring designer trust in artificial intelligence-generated content: TAM/TPB model study. Applied Sciences, 14(16), 6902. [Google Scholar] [CrossRef]
  88. Wang, S. F., Tang, Y. X., Meng, F. Y., & Sun, B. (2025). Evaluating designers’ acceptance of AI-generated content: Insights from the TAM and TRI frameworks. Available online: https://www.researchsquare.com/article/rs-7305405/v1 (accessed on 7 October 2025). [CrossRef]
  89. Wang, Y., Guan, X., Sun, Y., Wang, H., & Chen, D. (2025). The cognitive acceptance of generative AI image tools based on TPB-TAM model and multi-theory integration. Advanced Design Research, 3(1), 38–54. [Google Scholar] [CrossRef]
  90. WARC. (2024). The marketer’s toolkit 2024. Available online: https://www.warc.com/content/article/WARC-Exclusive/The_Marketers_Toolkit_2024/153414 (accessed on 7 October 2025).
  91. Wei, L., & Chen, M. (2025). Impact of AI-generated visual design features on user acceptance: A TAM-based analysis. Asia-Pacific Journal of Convergent Research Interchange (APJCRI), 11(2), 55–71. [Google Scholar] [CrossRef]
  92. Whittaker, L., Letheren, K., & Mulcahy, R. (2021). The rise of deep fakes: A conceptual framework and research agenda for marketing. Australasian Marketing Journal, 29(3), 204–214. [Google Scholar] [CrossRef]
  93. Xu, A. J., & Wyer, R. S., Jr. (2010). Puffery in advertisements: The effects of media context, communication norms, and consumer knowledge. Journal of Consumer Research, 37(2), 329–343. [Google Scholar] [CrossRef]
  94. Yam, K. C., Tan, T., Jackson, J. C., Shariff, A., & Gray, K. (2023). Cultural differences in people’s reactions and applications of robots, algorithms, and artificial intelligence. Management and Organization Review, 19(5), 859–875. [Google Scholar] [CrossRef]
  95. Yoo, C., MacInnis, D. J., & St. James, Y. (2005). The brand attitude formation process of emotional and informational ads. Journal of Business Research, 58(10), 1397–1406. [Google Scholar] [CrossRef]
  96. Zhang, S., & Srinivasan, K. (2023). Marketing through the machine’s eyes: Image analytics and interpretability. Artificial Intelligence in Marketing, 20, 217–237. [Google Scholar] [CrossRef]
  97. Zhang, Y., & Gosline, R. (2023). Human favoritism, not AI aversion: People’s perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation. Judgment and Decision Making, 18, e41. [Google Scholar] [CrossRef]
  98. Zhao, D., Shi, X., Wei, S., & Ren, J. (2021). Comparing antecedents of Chinese consumers’ trust and distrust. Frontiers in Psychology, 12, 648883. [Google Scholar] [CrossRef]
  99. Zhou, E., & Lee, D. (2024). Generative artificial intelligence, human creativity, and art. PNAS Nexus, 3(3), 052. [Google Scholar] [CrossRef]
Figure 1. Image generation procedure. The source images were cited from publicly available marketing advertisements (Coffee: https://www.marketing91.com/marketing-strategy-cafe-coffee-day/ (accessed on 28 June 2024); medical aesthetics: https://maylinjyd.com/category/regenerative/ (accessed on 28 June 2024); Public service: https://contents.premium.naver.com/backbriefing/news/contents/220823152847857ed (accessed on 29 June 2024)). The synthetic images were generated by the authors using generative text-to-image models.
Figure 2. AI-Generated Image with Stated Motivation. The images were generated by the authors using generative text-to-image models.
Figure 3. Mean Table for One-Way Analysis of Variance (ANOVA): (a) Attitude; (b) Preference; (c) Trust; (d) Purchase Intention.
Table 1. Overview of studies.

| | Phase 1—Study 1 | Phase 2—Study 2 | Phase 3—Study 3 |
|---|---|---|---|
| Research Question | Under non-disclosure (“black-box”) conditions, do AI-generated vs. human-made ads differ in evaluations? | Does ex-ante source disclosure (AI vs. human) change evaluations? | Which disclosed usage motivations (e.g., Privacy Protection vs. Visual Appeal vs. Cost Efficiency) attenuate or exacerbate the effect? |
| Key Design/Manipulation | Source not revealed; participants rate a fixed set of ads (AI vs. human). | Source explicitly labeled before evaluation (AI vs. human). | Under disclosure, vary stated motivation for using AI. |
| Outcomes (DVs) | Attitude, Preference, Trust, Purchase Intention (same across all three studies) | | |
Table 2. Sample characteristics of study 1 (N = 130).

| Variables | Classifications | Coffee ads (N = 44), n (%) | Medical Aesthetics ads (N = 42), n (%) | Public Service ads (N = 44), n (%) |
|---|---|---|---|---|
| Sex | Male | 19 (43.2) | 6 (14.3) | 18 (40.9) |
| | Female | 25 (56.8) | 36 (85.7) | 26 (59.1) |
| Age | 18–30 | 30 (68.1) | 39 (92.8) | 41 (93.1) |
| | 31–50 | 12 (27.3) | 3 (7.2) | 3 (6.8) |
| | 51 and older | 2 (4.5) | 0 (0) | 0 (0) |
| Education | Associate degree or below | 7 (15.9) | 4 (9.6) | 4 (9.1) |
| | Bachelor's degree | 22 (50.0) | 28 (66.7) | 19 (43.2) |
| | Master's degree or above | 15 (34.1) | 10 (23.8) | 21 (47.7) |
Table 3. Coffee ads: LMMs with crossed random intercepts (participants, images).

| Outcome | Observed M (SD), Human/AI | t(43), p 1 | EMM (SE), Human/AI | Δ EMM [95% CI] | Holm p |
|---|---|---|---|---|---|
| Attitude | 4.06 (1.44)/4.46 (1.16) | −3.08, 0.00 | 4.06 (0.44)/4.46 (0.20) | 0.40 [−0.33, 1.14] | n.s. 2 |
| Preference | 3.92 (1.50)/4.33 (1.19) | −3.01, 0.00 | 3.92 (0.38)/4.33 (0.21) | 0.41 [−0.42, 1.24] | n.s. |
| Trust | 4.03 (1.54)/4.34 (1.21) | −2.05, 0.05 | 4.03 (0.35)/4.34 (0.20) | 0.31 [−0.44, 1.05] | n.s. |
| Purchase intention | 3.77 (1.74)/4.14 (1.34) | −2.48, 0.02 | 3.77 (0.39)/4.14 (0.23) | 0.37 [−0.45, 1.19] | n.s. |

1 t-tests are descriptive (not model-based); inferential claims rely on the LMM. 2 n.s. = not significant (p > 0.05).
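A crossed-random-intercepts specification of this kind can be approximated in Python with statsmodels' variance-components interface. The sketch below is illustrative rather than the authors' exact estimation and assumes long-format data with hypothetical column names (one row per participant-image rating).

```python
# LMM with crossed random intercepts for participants and images (statsmodels).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("coffee_long.csv")  # hypothetical long-format ratings
df["all"] = 1  # single dummy group so both factors enter as crossed components

model = smf.mixedlm(
    "attitude ~ source",  # source: human vs. AI image
    data=df,
    groups="all",
    re_formula="0",  # no random effect at the dummy grouping level itself
    vc_formula={
        "participant": "0 + C(participant)",
        "image": "0 + C(image)",
    },
)
result = model.fit(reml=True)
print(result.summary())  # the fixed effect of source corresponds to the EMM gap
```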
Table 4. Medical Aesthetics ads: LMMs with crossed random intercepts.

| Outcome | Observed M (SD), Human/AI | t(43), p 1 | EMM (SE), Human/AI | Δ EMM [95% CI] | Holm p |
|---|---|---|---|---|---|
| Attitude | 4.39 (1.35)/4.41 (1.09) | −0.14, 0.89 | 4.39 (0.27)/4.41 (0.18) | 0.02 [−0.52, 0.56] | n.s. 2 |
| Preference | 4.47 (1.35)/4.34 (1.07) | 0.88, 0.38 | 4.47 (0.24)/4.34 (0.17) | −0.13 [−0.56, 0.29] | n.s. |
| Trust | 4.40 (1.51)/4.22 (1.11) | 0.98, 0.33 | 4.41 (0.32)/4.22 (0.19) | −0.19 [−0.87, 0.50] | n.s. |
| Purchase intention | 4.11 (1.55)/4.01 (1.23) | 0.61, 0.55 | 4.11 (0.26)/4.01 (0.19) | −0.10 [−0.54, 0.33] | n.s. |

1 t-tests are descriptive (not model-based); inferential claims rely on the LMM. 2 n.s. = not significant (p > 0.05).
Table 5. Public Service ads: LMMs with crossed random intercepts.

| Outcome | Observed M (SD), Human/AI | t(43), p 1 | EMM (SE), Human/AI | Δ EMM [95% CI] | Holm p |
|---|---|---|---|---|---|
| Attitude | 5.00 (1.40)/4.81 (1.30) | 1.36, 0.18 | 5.00 (0.26)/4.81 (0.20) | −0.19 [−0.63, 0.25] | n.s. 2 |
| Preference | 4.79 (1.41)/4.68 (1.31) | 0.88, 0.39 | 4.79 (0.27)/4.68 (0.20) | −0.11 [−0.59, 0.38] | n.s. |
| Trust | 4.76 (1.57)/4.62 (1.32) | 1.01, 0.32 | 4.76 (0.34)/4.62 (0.21) | −0.14 [−0.82, 0.53] | n.s. |
| Purchase intention | 5.22 (1.50)/5.17 (1.40) | 0.43, 0.67 | 5.22 (0.31)/5.17 (0.22) | −0.05 [−0.61, 0.52] | n.s. |

1 t-tests are descriptive (not model-based); inferential claims rely on the LMM. 2 n.s. = not significant (p > 0.05).
Table 6. Sample characteristics of study 2 (N = 79).

| Variables | Classifications | AI-Generated (N = 39), n (%) | Human-Made (N = 40), n (%) |
|---|---|---|---|
| Sex | Male | 12 (30.8) | 6 (15.0) |
| | Female | 27 (69.2) | 34 (85.0) |
| Age | 18–30 | 31 (79.5) | 36 (90.0) |
| | 31–50 | 8 (20.5) | 4 (10.0) |
| Education | Associate degree or below | 10 (25.7) | 2 (5.0) |
| | Bachelor's degree | 12 (30.8) | 26 (65.0) |
| | Master's degree or above | 17 (43.6) | 12 (30.0) |
Table 7. Study 2 group means, LMM results, and standardized effects.

| Outcome | Human M (SD) | AI M (SD) | Δ (Human–AI) | Cohen's d [95% CI] | Holm p |
|---|---|---|---|---|---|
| Attitude | 4.81 (1.08) | 4.22 (1.21) | 0.59 | 0.52 [0.07, 0.96] | 0.03 |
| Preference | 4.74 (1.20) | 4.07 (1.19) | 0.67 | 0.56 [0.11, 1.01] | 0.03 |
| Trust | 4.94 (1.12) | 4.04 (1.27) | 0.90 | 0.75 [0.30, 1.21] | 0.00 |
| Purchase intention | 4.93 (1.14) | 4.02 (1.30) | 0.91 | 0.75 [0.29, 1.20] | 0.00 |
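Quantities of this kind can be computed in a few lines of Python; the sketch below is illustrative, assuming pingouin and statsmodels and hypothetical column names for the Study 2 data.

```python
# Two-group comparisons with Cohen's d and Holm-adjusted p-values.
import pandas as pd
import pingouin as pg
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("study2.csv")  # hypothetical: source plus four outcome columns
outcomes = ["attitude", "preference", "trust", "purchase_intention"]

pvals, dvals = [], {}
for dv in outcomes:
    human = df.loc[df["source"] == "human", dv]
    ai = df.loc[df["source"] == "ai", dv]
    tt = pg.ttest(human, ai)  # returns t, dof, p-val, cohen-d, CI95%, ...
    pvals.append(tt["p-val"].iloc[0])
    dvals[dv] = tt["cohen-d"].iloc[0]

_, holm_p, _, _ = multipletests(pvals, method="holm")
for dv, p in zip(outcomes, holm_p):
    print(f"{dv}: d = {dvals[dv]:.2f}, Holm p = {p:.3f}")
```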
Table 8. Motivations for Using AI-Generated Images.

| Motivation | Description Presented with Ad Image |
|---|---|
| Cost Efficiency | “AI-generated images were used for cost efficiency.” |
| Privacy Protection | “AI-generated images were used for privacy protection.” |
| Visual Appeal | “AI-generated images were used to enhance visual appeal.” |
| Control Group | “Real images were used, with the subject’s approval.” |
Table 9. Sample characteristics of study 3 (N = 209).

| Variables | Classifications | Human-Made (N = 52), n (%) | Privacy Protection (N = 52), n (%) | Visual Appeal (N = 53), n (%) | Cost Efficiency (N = 52), n (%) |
|---|---|---|---|---|---|
| Sex | Male | 3 (5.8) | 3 (5.8) | 4 (7.5) | 3 (5.8) |
| | Female | 49 (94.2) | 49 (94.2) | 49 (92.5) | 49 (94.2) |
| Age | 18–30 | 42 (80.7) | 37 (71.2) | 45 (84.9) | 47 (90.4) |
| | 31–50 | 10 (19.2) | 15 (28.9) | 8 (15.1) | 5 (9.6) |
| Education | Associate degree or below | 7 (13.4) | 4 (7.6) | 10 (18.8) | 5 (9.6) |
| | Bachelor's degree | 34 (65.4) | 36 (69.2) | 37 (69.8) | 38 (73.1) |
| | Master's degree or above | 11 (21.2) | 12 (23.1) | 6 (11.3) | 9 (17.3) |
Table 10. Results of the One-way ANOVA.

| Outcome | MS (Between) | MS (Within) | F(3, 205) | p | Partial η2 |
|---|---|---|---|---|---|
| Attitude | 3.51 | 1.25 | 2.80 | 0.04 | 0.04 |
| Preference | 2.63 | 1.58 | 1.66 | 0.18 | 0.02 |
| Trust | 7.01 | 1.74 | 4.03 | 0.01 | 0.06 |
| Purchase intention | 6.19 | 1.95 | 3.17 | 0.03 | 0.04 |
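These one-way ANOVAs could be reproduced along the following lines, assuming pingouin and hypothetical column names for the Study 3 data (pingouin reports partial eta-squared as np2).

```python
# One-way ANOVA per outcome across the four disclosure conditions.
import pandas as pd
import pingouin as pg

df = pd.read_csv("study3.csv")  # hypothetical: condition plus outcome columns
for dv in ["attitude", "preference", "trust", "purchase_intention"]:
    aov = pg.anova(data=df, dv=dv, between="condition", detailed=True)
    between = aov.loc[aov["Source"] == "condition"].iloc[0]
    within = aov.loc[aov["Source"] == "Within"].iloc[0]
    print(f"{dv}: F({int(between['DF'])}, {int(within['DF'])}) = "
          f"{between['F']:.2f}, p = {between['p-unc']:.3f}, "
          f"partial eta2 = {between['np2']:.2f}")
```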