Article

Modeling Consumer Reactions to AI-Generated Content on E-Commerce Platforms: A Trust–Risk Dual Pathway Framework with Ethical and Platform Responsibility Moderators

by Tao Yu 1,2, Younghwan Pan 2,* and Wansok Jang 3,*
1 China-Korea International Institute of Visual Arts Research, Qingdao University of Science and Technology, Qingdao 260061, China
2 Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
3 College of Communication, Qingdao University of Science and Technology, Qingdao 260061, China
* Authors to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 257; https://doi.org/10.3390/jtaer20040257
Submission received: 5 May 2025 / Revised: 5 September 2025 / Accepted: 17 September 2025 / Published: 1 October 2025

Abstract

With the widespread integration of Artificial Intelligence-Generated Content (AIGC) into e-commerce platforms, understanding how users perceive, evaluate, and respond to such content has become a critical issue for both academia and industry. This study examines the influence mechanism of AIGC Content Quality (AIGCQ) on users’ Purchase Intention (PI) by constructing a cognitive model centered on Trust (TR) and Perceived Risk (PR). Additionally, it introduces two moderating variables—Ethical Concern (EC) and Perceived Platform Responsibility (PLR)—to explore higher-order psychological influences. The research variables were identified through a systematic literature review and expert interviews, followed by structural equation modeling based on data collected from 507 e-commerce users. The results indicate that AIGCQ significantly reduces users’ PR and enhances TR, while PR negatively and TR positively influence PI, validating the fundamental dual-pathway structure. However, the moderating effects reveal unexpected complexities: PLR simultaneously amplifies both the negative effect of PR and the positive effect of TR on PI, presenting a “dual amplification” pattern; meanwhile, EC weakens the strength of both pathways, manifesting a “dual attenuation” effect. These findings highlight the nonlinear cognitive mechanisms underlying users’ acceptance of AIGC, suggesting that PLR and EC influence decision-making in more intricate ways than previously anticipated. By uncovering the unanticipated patterns in moderation, this study extends the boundary conditions of the trust–risk theoretical framework within AIGC contexts. In practical terms, it reveals that PLR acts as a “double-edged sword,” providing more nuanced guidance for platform governance of AI-generated content, including responsibility frameworks and ethical labeling strategies.

1. Introduction

With the rapid development of AIGC technology, AI applications in content production have widely penetrated various media forms, including text, images, audio, and video, profoundly reshaping human information dissemination and cognitive structures [1]. Against this backdrop, e-commerce, as one of the application scenarios closest to consumers, has gradually become a core arena for the integration and implementation of AIGC technologies [2]. Currently, e-commerce platforms and merchants are extensively utilizing AIGC tools to enhance content generation efficiency, reduce labor costs, and, to some extent, improve user experience. Application areas include product title creation, detailed description optimization, review generation, and recommendation drafting [3]. According to a forecast by Statista (2024), more than half of global e-commerce platforms are expected to adopt some form of AI content generation mechanisms for product display, intelligent customer service, virtual presenters, and marketing creativity support [4]. Leading platforms in countries such as China, South Korea, and the United States (e.g., Taobao, JD.com, Coupang, and Amazon) have successively deployed AIGC modules, and some brand enterprises have achieved large-scale automation in product copywriting [5].
However, the proliferation of AI-generated content has also sparked widespread discussions and controversies, particularly among consumers. How do consumers perceive these “algorithm-driven anthropomorphic messages” [6]? Is the authenticity of such content convincing [7]? If misleading information arises, should the platform bear corresponding responsibilities [8]? Would consumers’ PI be affected once they realize that the related content was not human-generated [9]?
Existing research has indicated that users’ PR and TR mechanisms are key psychological foundations influencing behavioral decision-making on digital platforms [10]. Nevertheless, the current literature predominantly focuses on trust construction in platform recommendation systems or human customer service scenarios, with insufficient attention to the dynamic shifts in user trust and risk perception when AIGC serves as the primary information carrier [11]. Meanwhile, given the rapid expansion of AIGC content, discussions on users’ ethical judgment capabilities and the attribution of platform responsibilities are becoming increasingly critical [12]. Issues related to “AI ethical governance” have yet to be systematically explored in the e-commerce context.
At the policy level, multiple countries have begun to impose higher compliance requirements regarding the use of AIGC content. For instance, China’s Cyberspace Administration issued the “Interim Measures for the Management of Generative Artificial Intelligence Services” in 2023, clearly stating that platforms are directly responsible for the authenticity and traceability of AI-generated content, emphasizing the importance of “clear labeling,” “content auditing,” and “algorithm transparency” [13]. Similarly, the European Union’s proposed “Artificial Intelligence Act” has established specific clauses regulating information disclosure and consumer protection mechanisms for e-commerce platforms [14].
In summary, under the dual pressures of strengthened regulation and accelerated industry integration, exploring how users perceive, accept, and even question AIGC-generated content holds considerable practical relevance and theoretical interest [15].
Building upon the aforementioned background, this study focuses on the e-commerce platform context to investigate the impact mechanism of AIGC on consumer behavioral intentions. A dual-pathway structural model centered on TR and PR is initially constructed, and key moderating factors within user cognition are further identified through a systematic literature review and expert interviews. Based on this foundation, the study proposes that users’ perception of AIGCQ influences their PI through TR and PR, with EC and PLR acting as potential moderators.
This research aims to address the following two stages of research questions:
Stage 1: Variable Identification and Model Construction
RQ1: When confronted with AI-generated content in e-commerce scenarios, what are the primary psychological cognitive dimensions involved for users? What potential relationships exist among these dimensions?
Stage 2: Model Validation and Mechanism Analysis
RQ2: Does users’ perception of AIGCQ influence their PI through TR and PR?
RQ3: Do EC and PLR moderate the pathways from risk and trust to PI?
The expected contributions of this study are as follows: From a user cognitive perspective, the study systematically identifies the key psychological mechanisms by which AIGC content affects users’ behavioral intentions in e-commerce contexts through a systematic literature review and expert interviews. Based on the dual-pathway logic of “trust–risk,” it constructs and validates an integrated user behavior model encompassing perceived quality, trust, risk, ethical concern, and platform responsibility. Furthermore, the study extends the application boundaries of user acceptance theories within the AIGC e-commerce context and provides empirical evidence and data support for platform governance and content regulation policymaking.

2. Literature Review

2.1. Research Progress on AI-Generated Content in E-Commerce

As a significant advancement in artificial intelligence technology, AIGC is increasingly embedded in the information production and consumption processes of the digital commerce sector, becoming a key driver of transformation within the e-commerce content ecosystem. Powered by large language models (LLMs) and multimodal generation algorithms, AIGC possesses the capability to autonomously generate high-quality text, images, and audio content [16], and has been widely applied in core business areas such as product information management, recommendation system optimization, and user interface interactions [17]. Currently, e-commerce platforms leverage AIGC systems to automatically generate product titles and descriptions, intelligently synthesize user reviews and Q&A content, and contextually output personalized recommendation messages and advertising copy. These applications have enhanced content production efficiency and commercial adaptability [18], while also lowering operational barriers for small and medium-sized enterprises to some extent [19]. Existing studies suggest that in the e-commerce environment, AIGC primarily fulfills three functions: first, it automatically generates structured content for product pages, such as titles, descriptions, and key selling points, thereby optimizing content management processes [20]; second, it produces review summaries and simulates user Q&A interactions to assist consumers in information evaluation and purchasing decisions [21]; third, it works in conjunction with recommendation systems to generate personalized advertising messages and product pairing suggestions to enhance user experience and conversion outcomes [22]. With the continuous evolution of large model technologies and multimodal generation capabilities, the application scope of AIGC has extended beyond text to encompass richer visual scenarios, including product image generation, virtual model demonstrations, and virtual presenter creations [23] (Table 1).
Compared to traditional manual editing models, AI-generated content demonstrates significant advantages in generation speed and consistency of expression. It can also integrate user profiles, semantic contexts, and platform marketing strategies to enable personalized content customization, fostering more diverse and precise expression styles [24]. In this transformation, platform content management is gradually shifting from “human-driven” to “algorithm-driven,” resulting in a new mechanism of information orchestration and content organization [25]. However, technological advancement has also introduced new challenges to user trust. Existing studies have found that when users encounter AI-generated product reviews, Q&A content, or display information, they often actively trigger “source awareness,” that is, they attempt to discern whether the content originated from real users or professional contributors [26]. This cognitive mechanism not only affects users’ subjective evaluations of content quality but also further influences their overall trust in the platform and their willingness to use it. Particularly in user-generated content, AIGC is often perceived as lacking emotional nuance and exhibiting homogenized language expression, which raises doubts about its authenticity and informational value [27]. Some studies have indicated that when users cannot clearly distinguish whether a review is AI-generated, their overall trust in the platform’s content system significantly decreases [28]. Moreover, in the absence of clear labeling, AIGC may elicit concerns among users regarding “covert manipulation” by platforms—specifically, the fear that platforms could control information structures through AI to guide or even mislead consumers’ purchasing decisions. This mechanism is attributed to the intrinsic tension between the “semantic realism” and “production opacity” embodied by AIGC content [29].
In the context of the rapid penetration of AIGC into e-commerce platforms, AIGCQ can be defined as users’ overall perception and subjective evaluation of AIGC in terms of linguistic fluency, informational accuracy, stylistic consistency, semantic naturalness, and visual presentation [30]. This construct integrates both the technical generation attributes and the psychological cognition of users. Compared with traditional user-generated content (UGC) or brand-authored content, AIGC lacks trust anchors such as author identity, contextual cues, and emotional expression, which compels users to rely solely on the manifest characteristics of the content itself to assess its credibility [31]. Hence, AIGCQ functions not only as an indicator of aesthetic form and language performance but also plays a pivotal role in trust formation and risk recognition. Furthermore, existing studies have shown that the cognitive processing of AIGCQ involves multiple unique psychological mechanisms. For example, the uncanny valley effect may cause discomfort in users when faced with highly realistic yet subtly unnatural content, thereby influencing their quality assessments [32]. On the other hand, algorithmic appreciation describes users’ tendency to assign “beyond-human” trust expectations to high-quality machine-generated content [33]. Additionally, in the context of highly standardized and templated AI content, user perceptions of diversity, authenticity, and creative intent significantly affect quality judgments. Therefore, AIGCQ should be conceptualized as a multidimensional construct encompassing technological visibility, semantic completeness, and psychological acceptability, rather than being reduced to a simplistic “good vs. bad” content measure.
Although existing studies have advanced in AIGC system architecture, text generation mechanisms, and e-commerce platform application strategies, research on the “user–content” interaction relationship remains relatively underexplored, leaving room for further theoretical development. Much of the current research has primarily positioned AIGC as a technical tool for enhancing platform efficiency, while relatively less attention has been paid to users’ psychological responses and decision-making pathways when engaging with such content. On the other hand, the processes by which users weigh risk and trust when facing AI-generated information—potentially experiencing cognitive conflict, ethical concerns, or confusion regarding responsibility—have not been systematically modeled and deeply explored. More critically, as a “dehumanized” content generation approach, AIGC raises perceptual biases, semantic dissonance, and trust erosion issues that transcend mere technical adaptation, and should instead be treated as complex interactive phenomena encompassing information ethics, user security perceptions, and cognitive load. Therefore, from the consumer perspective, systematically analyzing users’ quality perception, risk alerts, and trust judgment pathways toward AI-generated content not only contributes to a deeper understanding of AIGC acceptance mechanisms but also forms a critical theoretical foundation for evaluating the influence of AIGC on e-commerce platforms [34].

2.2. Formation Mechanisms of Perceived Risk and Trust

The classic trust model defines trust as a psychological state in which the trustor is willing to be vulnerable based on an assessment of the trustee’s ability, benevolence, and integrity [35]. However, in the AIGC context, the trustee is replaced by an “algorithmic system”—a depersonalized black box whose decision paths are often opaque. Users cannot evaluate an algorithm’s reliability and intent using past reputation or affective cues as they would with human actors [36]. To address this, automation and human–machine trust research has proposed two approaches: “trust transfer” and “algorithmic credibility.” First, users transfer their trust in the platform or developers to the deployed automated system [37,38]. Second, algorithmic credibility can be signaled via explainability, transparency, and controllability [39]. Specifically, when a platform effectively labels AI content sources, discloses algorithmic logic, or provides visualized decision explanations, users can form trust in the algorithmic system even in the absence of traditional reputation cues [40]. Therefore, in our trust–risk framework, in addition to users’ direct evaluations of content quality, we also consider users’ overall expectations of the platform’s algorithm governance and responsibility to fully capture the trust-building process for AIGC.
In the e-commerce environment, the absence of face-to-face interaction and physical contact between consumers and platforms inherently introduces uncertainty and information asymmetry, which constitute the core sources of “PR” [41]. PR is defined as the psychological discomfort users experience when making purchasing decisions due to concerns about uncertainty regarding expected outcomes or potential negative consequences. This concept has been widely applied in fields such as technology adoption, online shopping, and fintech research to explain user avoidance behaviors and adoption resistance. On e-commerce platforms, perceived risk typically manifests as doubts about product authenticity, concerns over logistics fulfillment capabilities, apprehensions about payment system security, and fears of personal data breaches [42]. Particularly with the increasing prevalence of AIGC content, users’ concerns about “content authenticity” and “information manipulation” have intensified. Compared to human-edited information, AIGC content—due to its unverifiable origin, untraceable logic, and lack of emotional warmth—is often perceived by users as a more uncertain form of information, thereby potentially amplifying their perceived risk levels [43].
In contrast to perceived risk is “TR,” academically defined as a psychological state in which users, in the absence of supervision and safeguards, are willing to accept the information, services, or products provided by a platform based on a positive expectation [10]. In the context of e-commerce, trust serves not only as a prerequisite for user adoption of platform services but also as a critical factor influencing continued use and brand loyalty [44]. Numerous empirical studies have demonstrated that trust can effectively mitigate users’ negative evaluations related to system complexity and technological unfamiliarity, thereby enhancing their willingness to use the platform and increasing transaction conversion rates [45]. It is important to emphasize that trust and perceived risk are not independent variables; rather, they form a dynamic interactive relationship. Multiple studies have found a negative correlation between the two: the higher the user’s risk assessment of a platform or its information, the lower their trust level, which in turn negatively impacts their behavioral intentions [46]. This mechanism, known as the “trust–risk interaction model,” has been widely applied in user behavior research under conditions of high technological complexity or information opacity.
As AIGC technologies increasingly permeate the e-commerce content ecosystem, the uncertainties users face are no longer limited to the transaction itself but stem from a fundamental change in the information structure—that is, the content producers are no longer verifiable real users or professional editors, but rather algorithmic models with opaque origins and processes. This shift poses new challenges to the mechanisms of trust construction [47]. Some studies have indicated that AIGC content, due to its lack of emotional and social cues, often fails to elicit empathetic judgments and perceptions of authenticity from users [48]. Moreover, in the absence of clear responsibility labeling and explanatory mechanisms by the platform, users who are unable to confirm the content’s source or accountability are prone to triggering “risk-defense” cognitive responses, leading to negative behavioral strategies such as abandoning purchases, lowering trust levels, or migrating to platforms with clearer human content labeling [49]. Therefore, in an AIGC-centered information environment, understanding how users psychologically balance “trust” and “risk” becomes a critical entry point for revealing the formation mechanisms of their purchase intentions.
Moreover, to gain a more comprehensive understanding of users’ cognitive and emotional responses when encountering AI-generated content, this study draws on seminal theories from HCI and computer science. Reeves and Nass (1996) proposed the “Computers Are Social Actors” (CASA) framework, which posits that users ascribe social attributes to machine systems and interact with them in ways similar to interpersonal communication, thereby influencing trust and acceptance behaviors [50]. Research by Logg (2019) and Dietvorst (2018) has documented the phenomenon of “algorithmic aversion”: when users become aware that decisions are made by algorithms rather than humans, they tend to elevate their risk perceptions and exhibit resistance, and this automation bias may intensify defensive mindsets in AIGC contexts [51,52]. Furthermore, drawing on Mori’s (1970) “uncanny valley” theory, studies have shown that when algorithms behave too “human-like” or are overly smooth yet lack human imperfections, users may experience discomfort and suspicion, further heightening risk perceptions [53]. These multidisciplinary perspectives not only enrich the traditional trust–risk interaction model but also provide a robust theoretical foundation for our exploration of how users navigate the trade-off between “trust” and “risk” in AIGC settings.

2.3. Platform Responsibility and Ethical Concern

In traditional technology acceptance research, commonly used moderating variables include personal innovativeness [54], technology anxiety [55], and cognitive-affective trust dimensions [56], which primarily reflect users’ willingness to adopt new technologies. While these variables remain influential in general technology adoption, recent studies on AI acceptance indicate that ethical and institutional factors have begun to outperform traditional technical capability indicators in predictive power [43,57]. However, in the context of AIGC, users are not merely evaluating a neutral technological tool but are also confronting deeper ethical issues and questions of accountability embedded in the “invisible algorithm.” Although algorithmic literacy influences users’ understanding of the technology, actual purchase decisions are more strongly shaped by concerns over content legitimacy and platform accountability rather than the technical intricacies of algorithmic functioning [58]. EC captures users’ sensitivity to potential risks posed by AIGC in areas such as authenticity, manipulation, and misinformation. This variable reflects users’ value-based judgments about the core ethical boundary of whether algorithms should be allowed to represent the human voice. Meanwhile, PLR reflects users’ expectations and confidence in the platform’s role in content governance mechanisms such as AI content labeling, algorithmic transparency, and error correction. Prior research has shown that this perception can enhance user trust and reduce perceived risk within content systems [59]. Our literature analysis reveals that within the body of AIGC-related research, EC and PLR are repeatedly identified as critical influencing factors, whereas traditional variables such as personal innovativeness show comparatively limited explanatory power [60]. Therefore, EC and PLR not only extend the boundaries of conventional acceptance models but also resonate with the dual ethical-institutional considerations unique to the AIGC context. They contribute to revealing how the trust–risk pathway dynamically shifts under varying moral values and expectations of organizational responsibility [61,62].
In an algorithm-driven information distribution environment, where content generation mechanisms are highly automated and increasingly opaque, users are more likely to assess whether the platform possesses oversight responsibility and corrective capabilities when information distortion or misleading content occurs. Belanche et al. (2014) pointed out that when users perceive that platforms exercise strong control over content but fail to fulfill corresponding responsibilities, their fundamental trust in the platform can be undermined [63]. In the context of AIGC, this mechanism is particularly salient; prior research has argued that users tend to attribute content errors not to merchants or the content itself, but rather to the platform’s failure to label AI-generated content or to establish a clear responsibility framework [64]. Du et al. (2023) further demonstrated that such “responsibility expectation deviation” intensifies users’ uncertainty during the risk perception stage and exerts a negative moderating effect in the trust-building process [65]. Moreover, Li et al. (2025) suggested that if platforms proactively label AIGC content, establish user feedback mechanisms, and implement clear error correction procedures, they can mitigate users’ distrust and enhance their willingness to accept AIGC content [48]. Therefore, whether a platform assumes sufficient ethical responsibility profoundly affects not only users’ moral sense of security but also their ultimate behavioral intentions. Compared to traditional human-generated content, AIGC is more likely to trigger users’ ethical concerns regarding dimensions such as “authenticity,” “manipulativeness,” “imitativeness,” and “responsibility boundaries” [66]. Particularly in scenarios where users are unable to distinguish between AI-generated and human-generated content, or when platforms fail to disclose the content generation mechanisms clearly, users are more prone to ethical skepticism, thereby lowering their trust in the information itself [67]. For instance, when reviews are authored by AI without appropriate labeling, or when customer service responses are algorithmically synthesized while mimicking a “human tone,” such phenomena often trigger users’ alertness to the platform’s “deliberate obfuscation of information sources,” thereby eliciting negative cognitive and behavioral responses [12]. It is important to emphasize that EC does not equate to a wholesale rejection of AI technology itself; rather, it reflects users’ defensive psychological reactions to issues of “opacity,” “asymmetry,” and “non-neutrality” during the application process [68]. This variable not only affects users’ initial acceptance of content but may also play a moderating role within the cognitive pathways of trust building and risk assessment, raising users’ sensitivity thresholds.
Reviewing existing research, PLR and EC are often incorporated into user behavior models as moderating variables or contextual factors. Their effects are not limited to the direct influence on any single variable but operate by impacting users’ psychological response mechanisms during the information evaluation process, thereby altering the strength of the inter-variable pathways [69]. In the “low-visibility, high-dependency” content generation context of AIGC, users often cannot directly comprehend the operational logic and must instead rely on their own experiences and value systems for judgment. Once a platform is perceived as lacking responsibility awareness, or AIGC content is perceived as ethically risky, even high-quality generated content may trigger “cognitive unease” or “moral conflict,” suppressing users’ positive evaluations, weakening trust, amplifying perceived risk, and ultimately influencing their PI [70].
Despite the growing interest in AIGC adoption and ethical implications, three important gaps remain. First, most prior studies focus on general applications of AIGC, but fail to address the unique behavioral context of e-commerce platforms where transactional decision-making dominates. Second, while content quality has been linked to user perception, the mechanisms through which AIGC content influences purchase intentions—particularly through trust and perceived risk—have not been empirically tested in a dual-path framework. Third, individual ethical concerns and platform responsibility perceptions, as value-sensitive variables, are rarely incorporated as moderators in such behavioral models. This study aims to address these gaps by building a cognitive framework centered on the trust–risk dual pathway and integrating ethical and platform-related moderators to enrich our understanding of consumer decision-making in AIGC-enabled commerce environments.

3. Research Methods

This study adopts a two-phase mixed-method design.
Phase I identifies key variables and develops the research model through a Systematic Literature Review (SLR) [71] and expert interviews. The SLR followed a structured retrieval and screening process, while interviews with specialists in AI technology, e-commerce operations, and user behavior explored potential cognitive responses to AIGC applications. Variables were classified into content stimulus, mediating, moderating, and outcome categories.
Phase II collects large-scale data via a structured questionnaire and applies SEM to test the hypotheses. Analyses include reliability and validity assessment, model fit evaluation, and estimation of mediation and moderation effects, providing empirical validation of the proposed model.
The three research components of this study (the SLR, the expert interviews, and the survey) were sequentially interdependent. The SLR provided the theoretical scaffolding for the research. Building on this, expert interviews contextualized and refined the identified variables, ensuring their practical relevance in the e-commerce context. Subsequently, the survey operationalized these variables and tested the hypothesized relationships through quantitative analysis. This sequential dependency ensured methodological coherence, with each phase building upon and validating the outputs of the preceding phase, ultimately forming a logically connected and empirically supported research process.

3.1. Systematic Literature Review (SLR) Process

To identify key variables and their theoretical pathways, this study conducted an SLR following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [72]. The process comprised four stages: Identification, Screening, Eligibility, and Inclusion. In the Identification stage, search strategies were defined, specifying databases (Web of Science, Scopus, Google Scholar), keyword combinations (e.g., “AIGC,” “AI-generated content,” “trust,” “risk perception,” “purchase intention,” “platform responsibility,” “ethics”), publication years (2015–2024), and language (English). The Screening and Eligibility stages applied predefined inclusion and exclusion criteria, such as thematic relevance, research subject consistency, and methodological rigor. In the Inclusion stage, only literature offering clear theoretical and methodological contributions was retained. All retained documents underwent deduplication, quality assessment, and coding for variable extraction. The selection process was fully documented and visualized to ensure traceability and replicability.

3.2. Expert Interview Design and Analysis

To further complement the user cognitive factors not fully covered in the SLR and to enhance the explanatory power of the research model in real-world contexts, this study incorporated Expert Interviews as part of the first-stage research [73]. This method is particularly suitable for identifying deep psychological mechanisms exhibited by users in complex, ambiguous, or still-institutionalizing technological environments.
The expert selection criteria were: (1) over three years of experience in generative AI, e-commerce content operations, consumer behavior, or AI ethics/regulation; (2) interdisciplinary expertise across technical, managerial, ethical, and behavioral domains; (3) participation in AIGC-related project design, regulation, or strategic planning; and (4) strong communication skills. The final sample included AI product managers, content operations directors, platform moderators, UX researchers, ethicists, marketing consultants, AI engineers, and postdoctoral researchers in platform governance, covering areas such as AI ethics, regulatory policy, and consumer behavior. During coding, multiple rounds of cross-checking and discussion ensured consistency, and reflexive practices were maintained to enhance objectivity and reliability.
Semi-structured interviews focused on five topics: (1) emotional and cognitive responses to AIGC; (2) concerns about authenticity and transparency; (3) perceptions of platform responsibility; (4) ethical judgments on potential manipulation; and (5) trust formation and risk alerting under uncertainty. Each 40–60-min session was conducted remotely, recorded with consent, and transcribed for analysis. In the data analysis phase, interview transcripts underwent open coding, thematic categorization, and abstraction following an inductive process of “semantic annotation → code consolidation → theme abstraction.” (1) Semantic Annotation—two researchers independently coded relevant statements, generating 45 initial codes across dimensions such as content quality, risk perception, and trust formation; (2) Code Consolidation—three rounds of discussion refined these into 32 categories, resolving discrepancies through consensus; (3) Theme Abstraction—iterative analysis identified five higher-order themes, each supported by 15–20 coding instances, with theoretical saturation reached after the sixth interview. No predetermined model was applied, and reflexive memos documented analytical decisions to ensure objectivity and reliability.

3.3. Questionnaire Design and Data Collection Strategy

The questionnaire’s content, structure, and measurement methods were developed based on the core variables and research model identified in the first phase, ensuring theoretical coherence and alignment between measurement dimensions and conceptual definitions. The design and data collection followed three main steps:
Step 1: Questionnaire Drafting
Based on the variables identified through the SLR and expert interviews, the research team designed measurement items for each construct by adapting established scales from prior studies and refining them to fit the e-commerce AIGC context. The questionnaire consisted of six sections: introduction and consent, screening questions, measurement items for all variables, demographic questions, attention checks, and closing remarks. The complete list of measurement items for each variable is provided in Appendix A, with corresponding references for each item.
Step 2: Pilot Test
A preliminary version of the questionnaire was distributed to 50 respondents, of which 46 valid responses were retained after quality checks. Pilot participants were recruited from diverse e-commerce user backgrounds to ensure variability in familiarity with AIGC. The pilot test was used to assess item wording, logical flow, and respondent experience. Feedback from participants was incorporated to refine semantic clarity, unify scale formats, and adjust item length to reduce cognitive load and avoid ambiguity. This process enhanced the content validity and internal consistency of the final instrument.
Step 3: Formal Survey Administration
The finalized questionnaire was administered via the Wenjuanxing platform using a combination of convenience sampling and snowball sampling. The distribution process was as follows:
1. Targeted Recruitment—Invitations were sent through WeChat, QQ groups, e-commerce user communities, and self-media channels to reach participants with prior experience of encountering AIGC in e-commerce (e.g., AI customer service, product descriptions, AI-generated reviews).
2. Screening and Logic Branching—Initial screening questions ensured that only respondents with AIGC exposure proceeded to the main survey, improving sample relevance.
3. Incentivized Participation—Valid responses were rewarded with a randomized cash incentive (3.3 CNY, 6.6 CNY, or 8.8 CNY) to enhance participation motivation while discouraging low-quality responses.
4. Data Quality Control—The Wenjuanxing platform’s technical functions were used to enforce quality checks, including response time monitoring (to detect abnormally fast completions), attention check questions (e.g., “Please select ‘strongly agree’ for this item”), and logic consistency verification.
This step-by-step approach ensured that the questionnaire maintained high theoretical rigor, operational precision, and data reliability throughout its design and administration process.
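To make the data quality control step concrete, the sketch below illustrates how such screening rules could be applied to the exported responses. The column names (duration_sec, attention_check, and the item prefixes) and the time threshold are illustrative assumptions, not the exact rules used in this study.

```python
# A minimal sketch of the quality-control screening described above.
# Column names and the 120-second threshold are illustrative assumptions.
import pandas as pd

def filter_responses(df: pd.DataFrame, min_seconds: int = 120) -> pd.DataFrame:
    """Drop responses that fail basic quality checks."""
    # 1. Remove abnormally fast completions.
    df = df[df["duration_sec"] >= min_seconds]

    # 2. Keep only respondents who passed the embedded attention check
    #    (the item instructing "Please select 'strongly agree'").
    df = df[df["attention_check"] == "strongly agree"]

    # 3. Drop straight-lining respondents who gave the same answer to every Likert item.
    likert_cols = [c for c in df.columns
                   if c.startswith(("AIGCQ", "PR", "TR", "PLR", "EC", "PI"))]
    df = df[df[likert_cols].nunique(axis=1) > 1]

    return df.reset_index(drop=True)

# Example usage:
# raw = pd.read_csv("wenjuanxing_export.csv")
# valid = filter_responses(raw)
```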

4. Results

4.1. Phase I: Construct Identification

4.1.1. Findings from Systematic Literature Review

Following the PRISMA procedure, a systematic literature search and screening process was conducted. Initially, 412 records were identified. After removing duplicates and conducting a preliminary screening, 351 records remained. From these, 61 core studies with strong theoretical relevance and empirical support were selected to extract key variables and construct the research model. Specifically, the inclusion and exclusion criteria applied during the initial screening phase were as follows: Inclusion criteria: (1) studies focused on AIGC or related algorithmic content; (2) target population included users or consumers of e-commerce platforms; (3) addressed key user behavior variables such as trust, perceived risk, and purchase intention; (4) employed quantitative or qualitative empirical methods with complete data or analysis. Exclusion criteria: (1) non-English publications (e.g., conference abstracts, commentary articles); (2) studies not situated in e-commerce contexts; (3) studies focusing solely on technical implementation of AI-generated content without considering user behavior perspectives; (4) insufficient methodological descriptions or lack of empirical evidence.
During the screening process, two researchers independently assessed the titles and abstracts of the 412 records and excluded 238 that did not meet the inclusion criteria. The remaining 174 articles underwent full-text review, and 53 were excluded due to methodological weaknesses or topic irrelevance. All exclusion reasons were documented, and a third researcher reviewed and arbitrated the screening outcomes. Ultimately, 61 high-quality studies were retained for variable extraction and model development, ensuring transparency and reproducibility of the review. The complete literature screening process is illustrated in Figure 1.
After conducting open coding and thematic categorization on the included core literature, the research team extracted several high-frequency conceptual keywords and preliminarily identified key variables with modeling potential. The statistical results are presented in Table 2. Among them, “Trust (TR)” was the most frequently appearing variable (n = 38, accounting for 62.3%), followed by “Purchase Intention (PI, n = 47),” “Perceived Risk (PR, n = 35),” and “AIGC Content Quality (AIGCQ, n = 24).” In contrast, “Perceived Platform Responsibility (PLR, n = 12)” and “Ethical Concern (EC, n = 9)” appeared less frequently in the literature but have shown a clear upward trend in studies over the past two years, indicating their potential theoretical value and development prospects in explaining AIGC acceptance mechanisms.
The keyword distribution results indicate that variables such as “TR,” “PR,” and “PI” continue to constitute the core structural framework within the AIGC acceptance pathways, reflecting the fundamental psychological mechanisms users employ when encountering algorithmically generated content. However, “PLR” and “EC”—representing users’ emerging focus on content authenticity and system transparency—remain relatively peripheral in the existing literature. Nevertheless, their increasing research prominence suggests considerable theoretical expansion potential and empirical exploration value. Consequently, the subsequent model construction will be based on the structure of these high-frequency variables while moderately incorporating the additional dimensions of “PLR” and “EC.” Furthermore, cross-validation with expert interview data will be conducted to confirm their theoretical applicability and determine their pathway positioning within the model.

4.1.2. Findings from Expert Interviews

To supplement the theoretical coverage of user cognitive dimensions established through the SLR, the research team invited eight experts from fields such as artificial intelligence, e-commerce platform operations, user behavior research, and digital ethics to participate in semi-structured interviews (Table 3), aiming to obtain firsthand insights and cognitive feedback from the practical application of AIGC. The collected data were analyzed using Open Coding and Thematic Categorization, involving semantic deconstruction and conceptual induction based on expert statements.
The entire analysis process was divided into three stages: the first stage involved semantic annotation, identifying key terms and emotional expressions used by the experts during the interviews; the second stage focused on initial label aggregation, where repetitive or synonymous labels were classified and merged; and the third stage involved mid-level theme abstraction, extracting thematic dimensions that reflect typical user psychological mechanisms and behavioral tendencies (Figure 2).
Through open coding and thematic categorization of the interview data, five high-frequency core themes were extracted, as detailed below:
Experts consistently noted that when users initially encounter AI-generated content, their primary focus is on whether the language appears “natural” and “authentic,” forming the basis for preliminary trust judgments. As one platform operations manager pointed out: “If the writing is too smooth, it actually feels like AI. Users expect the ‘imperfections’ that come with human writing.” This indicates that the evaluation criteria for AIGCQ are no longer limited to technical “completeness” but are based on a complex cognitive structure involving “perceived authenticity” and “contextual fit.”
Several experts emphasized that once users become aware that content is AI-generated, they tend to enter a “cautious evaluation” mode, especially when reading product reviews or descriptions. Users exhibit heightened skepticism toward “template-like expressions” or “overly positive” content. As one expert remarked: “It’s not that the content is poor that raises doubts; it’s because it ‘sounds too much like AI’ that users distrust it.”
In traditional content environments, trust is typically built on the author’s identity or emotional cues. AIGC, however, disrupts this trust structure. Multiple experts pointed out that when facing AI-generated content, users shift their evaluations toward the platform’s technological capabilities, filtering mechanisms, and brand reputation. An AI engineer summarized: “Users no longer ask ‘who said this’ but rather ‘who made the AI say this.’”
Experts also reported a strong ethical vigilance among users concerning the scope of AI content usage, disclosure boundaries, and potential misleading expressions. Particularly in sensitive domains such as healthcare, finance, and emotional counseling, users harbor doubts about “whether such content should be generated by AI” at all. As one expert noted: “Users are not technophobic; they are highly resistant to behaviors where AI infringes on expressive rights.”
Although users do not fundamentally reject the existence of AIGC, they hold high expectations for how platforms respond to errors. Most experts agreed that users tend to attribute misleading responsibilities to the platform’s “failure to disclose AI-generated content” rather than to the AI itself. A content governance expert stated: “The platform’s non-disclosure behavior is more damaging than the AI-generated content itself.”
The above five cognitive themes not only reflect users’ emotional and evaluative responses to AIGC content at the semantic level but also outline the cognitive pathways from content exposure to behavioral decision-making at the psychological structure level. These themes closely align with the variable structure identified through the Systematic Literature Review: “AIGCQ,” “PR,” and “TR” form the core pathway variables of the research model, while “EC” and “PLR” are incorporated as moderating variables, providing a solid theoretical foundation for the model’s pathway complexity and contextual adaptability (Table 4).
To clarify the logical relationship between the interview themes and the research model variables, Table 5 maps each categorized theme to the corresponding final constructs, thereby strengthening the semantic foundation of the model structure.

5. Variable Analysis and Hypotheses Development

5.1. Theoretical Basis for Hypotheses

Based on the cognitive processes users undergo when encountering AI-generated content on e-commerce platforms, this study constructs a structural model centered on the pathway of “content evaluation–cognitive judgment–behavioral intention.” It focuses on examining how AIGCQ indirectly affects users’ PI through PR and TR. Additionally, EC and PLR are introduced as key moderating variables to reveal their interactive moderation mechanisms within the cognition–behavior pathway. The following research hypotheses are proposed from the perspectives of the main effect pathways and interaction moderation pathways. Given that this study is based on cross-sectional correlational data, the following hypotheses specify statistical associations (paths) between variables rather than asserting strict causal relationships.

5.1.1. Effects of AIGC Content Quality on User Cognition

Systematic literature review findings indicate that when users encounter AIGC content on e-commerce platforms, they evaluate its information quality through cues such as language naturalness, logical coherence, and stylistic expression [6]. Prior research shows that high-quality AIGC content can alleviate doubts about information authenticity and reduce perceptions of platform manipulation or algorithmic bias, thereby inhibiting the formation of perceived risk. Studies in the e-commerce context have also highlighted that content quality is a critical driver of trust in platform mechanisms and recommendation systems [74]. In line with these insights, and building on the theoretical and empirical evidence synthesized in the literature review, this study proposes the following hypotheses.
Based on this, the following hypotheses are proposed:
H1. 
AIGCQ is negatively associated with users’ PR.
H2. 
AIGCQ is positively associated with users’ TR.

5.1.2. Transmission Mechanism of PR and TR Toward PI

Previous research indicates that PR and TR are central psychological mechanisms influencing users’ decision-making processes when interacting with AI-generated content [75]. PR reflects users’ anticipation of potential uncertainty and negative consequences, which can diminish their perceived security and control over platform systems. Conversely, TR stems from users’ positive evaluations of a platform’s competence, fairness, and benevolence, acting as a key driver of adoption intentions [76]. Empirical findings in the e-commerce context consistently show that PR and TR are negatively correlated and jointly shape users’ purchasing decisions.
Based on this, the following hypotheses are proposed:
H3. 
PR is negatively associated with users’ TR.
H4. 
PR is negatively associated with users’ PI.
H5. 
TR is positively associated with users’ PI.

5.1.3. Moderating Role of PLR

PLR reflects users’ subjective judgments regarding the platform’s fulfillment of responsibilities in information disclosure, content auditing, and error correction feedback [77]. When users form a positive perception of the platform’s responsibility, their potential concerns regarding AIGC content may be alleviated [78]. Our systematic literature review further revealed that in e-commerce contexts, platforms perceived as responsible are more likely to be viewed as protective intermediaries that safeguard content authenticity and user rights, thereby influencing how users interpret and respond to AI-generated content.
Based on this, the following hypotheses are proposed:
H6. 
PLR negatively moderates the relationship between PR and PI; that is, the higher the PLR, the weaker the negative relationship between PR and PI.
H7. 
PLR positively moderates the relationship between TR and PI; that is, the higher the PLR, the stronger the positive relationship between TR and PI.

5.1.4. Moderating Role of EC

With the widespread application of AIGC in commercial content, users have become increasingly sensitive to its potential ethical implications, such as the lack of authenticity, manipulation risks, and blurred responsibility [79]. When users exhibit higher levels of EC, they are more likely to amplify perceived risks and exhibit greater skepticism toward trust formation, thereby influencing their final behavioral decisions [80]. Findings from our systematic literature review further suggest that higher levels of EC not only strengthen the inhibitory effect of perceived risk on purchase intention but also weaken the positive influence of trust on purchase intention. This is because elevated ethical concern may undermine the moral legitimacy of both the platform and the content, thereby influencing users’ final behavioral decisions.
Based on this, the following hypotheses are proposed:
H8. 
EC positively moderates the relationship between PR and PI; that is, the higher the ethical concern, the stronger the negative relationship between perceived risk and purchase intention.
H9. 
EC negatively moderates the relationship between TR and PI; that is, the higher the ethical concern, the weaker the positive relationship between TR and PI.

5.2. Research Model Structure

Based on variables identified through the SLR and expert interviews, this study develops a structural model grounded in the Trust–Risk Framework, incorporating EC and PLR as moderators to capture the complexity of users’ psychological judgments toward AIGC in e-commerce (Figure 3). The model comprises six latent variables: AIGCQ as the stimulus variable; PR and TR as dual mediators linking AIGCQ to PI; EC and PLR as moderators affecting the PR–PI and TR–PI pathways; and PI as the outcome variable. It hypothesizes that AIGCQ indirectly influences PI via PR and TR, with PR negatively related to TR, and that EC and PLR moderate these effects such that the influence of PR and TR on PI varies according to users’ ethical sensitivity and responsibility expectations.
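As an illustration of how the main-effect portion of this model can be specified, the sketch below uses semopy's lavaan-style syntax. The item names (AIGCQ1 to AIGCQ4, etc.) are assumed labels for the four reflective indicators per construct, and the moderation terms of EC and PLR are omitted here because they are tested separately via interaction regressions in Section 5.3.

```python
# A minimal sketch, assuming semopy and hypothetical item names, of the
# measurement and structural (main-effect) portion of the model in Figure 3.
import semopy

MODEL_SPEC = """
# Measurement model: four reflective items per construct
AIGCQ =~ AIGCQ1 + AIGCQ2 + AIGCQ3 + AIGCQ4
PR    =~ PR1 + PR2 + PR3 + PR4
TR    =~ TR1 + TR2 + TR3 + TR4
PI    =~ PI1 + PI2 + PI3 + PI4

# Structural model: trust-risk dual pathway
PR ~ AIGCQ
TR ~ AIGCQ + PR
PI ~ AIGCQ + PR + TR
"""

# Example usage (data: one column per item, n = 507):
# model = semopy.Model(MODEL_SPEC)
# model.fit(data)
# print(model.inspect())           # path estimates
# print(semopy.calc_stats(model))  # CFI, TLI, RMSEA, and other fit indices
```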

5.3. Phase II: Variable Analysis and Hypothesis Testing

To empirically examine the proposed research model and test the hypothesized relationships, a structured questionnaire was developed based on the key variables identified through the systematic literature review and expert interviews in Phase I. The questionnaire incorporated six core constructs within the Trust–Risk dual-pathway framework: AIGCQ, PR, TR, PLR, EC, and PI. Each construct was measured using four items adapted from established scales in the literature and modified to fit the AIGC e-commerce context (see Appendix A for details). This study employed Harman’s single-factor test, which revealed that the first factor accounted for 34.395% of the variance, below the 40% threshold, suggesting that common method bias is unlikely to be a serious concern (Table 6).
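For transparency, a common way to run Harman's single-factor test is to fit a single unrotated factor to all measurement items and inspect the variance it explains. The sketch below, which assumes the factor_analyzer package and hypothetical item columns, follows that conventional procedure.

```python
# A rough sketch of Harman's single-factor test using factor_analyzer.
# Item column names are hypothetical placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def harman_single_factor(items: pd.DataFrame) -> float:
    """Proportion of total variance explained by the first unrotated factor."""
    fa = FactorAnalyzer(n_factors=1, rotation=None, method="principal")
    fa.fit(items)
    _, proportion, _ = fa.get_factor_variance()
    return float(proportion[0])

# Example usage:
# share = harman_single_factor(df[item_columns])
# print(f"First factor explains {share:.1%} of total variance")  # e.g., below the 40% threshold
```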
A total of 580 questionnaires were distributed, with 534 collected and 27 invalid responses excluded, resulting in 507 valid samples. As shown in Table 7, respondents were predominantly young (60% aged 19–35), highly educated (80% with a bachelor’s degree or above), and active online shoppers (84% shopping at least twice monthly), with a near gender balance (53% male, 47% female). Notably, 59% reported frequently or occasionally encountering AIGC in e-commerce, indicating its broad adoption, while 16% were unsure whether the content they saw was AI-generated.
Reliability testing of the questionnaire evaluates its consistency and credibility, reflecting the stability of measurement results and the authenticity of the data. Reliability encompasses both internal and external reliability. Internal reliability primarily examines whether the items within the questionnaire measure the same underlying concept and assesses their internal consistency; the higher the consistency, the stronger the credibility of the questionnaire [81].
As shown in Table 8, the questionnaire comprises six dimensions, each with a Cronbach’s alpha above 0.7, indicating good reliability. Validity testing covered content and construct validity: content validity was ensured by adapting established scales based on literature and pretest results; construct validity was assessed via Confirmatory Factor Analysis (CFA) (Figure 4).
As shown in Table 9, all model fit indices meet the evaluation criteria: CMIN/DF = 1.658 (acceptable < 3), GFI = 0.940, AGFI = 0.925 (good fit > 0.9), RMSEA = 0.036 (good fit < 0.05), NFI = 0.945, IFI = 0.977, TLI = 0.974, CFI = 0.977 (good fit > 0.9), PNFI = 0.811, and PCFI = 0.839 (acceptable > 0.5).
Convergent validity was assessed using Average Variance Extracted (AVE) and Composite Reliability (CR) (Table 10). AVE = 0.555–0.703 (acceptable > 0.5); CR = 0.833–0.904 (acceptable > 0.8); factor loadings = 0.704–0.877 (p < 0.001) [82].
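For reference, the reliability and convergent validity statistics reported above follow the standard definitions (stated here as a summary of conventional formulas rather than the authors' own derivation): Cronbach's alpha for internal consistency, and composite reliability (CR) and average variance extracted (AVE) computed from the standardized factor loadings $\lambda_i$ with error variances $\theta_i = 1 - \lambda_i^{2}$:

$$
\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right),\qquad
\mathrm{CR}=\frac{\left(\sum_i \lambda_i\right)^{2}}{\left(\sum_i \lambda_i\right)^{2}+\sum_i \theta_i},\qquad
\mathrm{AVE}=\frac{\sum_i \lambda_i^{2}}{\sum_i \lambda_i^{2}+\sum_i \theta_i}
$$

where $k$ is the number of items in a construct, $\sigma_i^{2}$ the variance of item $i$, and $\sigma_T^{2}$ the variance of the summed scale score.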
Discriminant validity was assessed using correlation coefficients and the Fornell–Larcker criterion (Table 11). All correlations were significant (p < 0.001). The square root of each latent variable’s AVE exceeded its correlations with other variables (Fornell–Larcker criterion met).
The heterotrait-monotrait ratio of correlations (HTMT) is an indicator used to assess discriminant validity in SEM. HTMT is computed as the ratio of the average correlations between indicators of different constructs to the average correlations among indicators within the same construct (Table 12). Generally, HTMT values below 0.85 indicate good discriminant validity between constructs.
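As a reference for how the values in Table 12 are typically computed, the conventional HTMT formula for constructs $i$ and $j$ is:

$$
\mathrm{HTMT}_{ij}=\frac{\bar{r}_{ij}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}}
$$

where $\bar{r}_{ij}$ is the mean correlation between the indicators of constructs $i$ and $j$ (heterotrait-heteromethod), and $\bar{r}_{ii}$ and $\bar{r}_{jj}$ are the mean correlations among the indicators within each construct (monotrait-heteromethod).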
Standardized factor loadings exceeded 0.6 (p < 0.05), CR values were above 0.7, and AVE values exceeded 0.5, confirming good convergent validity. The square root of each AVE was greater than its inter-construct correlations, further supporting discriminant validity. Overall, the scale demonstrates strong structural validity (Figure 5).
Structural Equation Modeling (SEM) was used to test the research hypotheses. Model fit indices (Table 13): CMIN/DF = 1.888 (acceptable < 3), GFI = 0.958, AGFI = 0.941, RMSEA = 0.042 (good fit < 0.05), NFI = 0.957, IFI = 0.979, TLI = 0.974, CFI = 0.979 (good fit > 0.9), PNFI = 0.781, PCFI = 0.800 (acceptable > 0.5) [83].
Maximum likelihood estimation was used to estimate path coefficients (Table 14). All path coefficients were significant, with critical ratios exceeding the conventional thresholds (C.R. > 1.96 for p < 0.05, C.R. > 2.58 for p < 0.01, C.R. > 3.29 for p < 0.001).
Hypothesis testing results (Table 14) show: AIGCQ → PR (β = −0.448, p < 0.001), AIGCQ → TR (β = 0.343, p < 0.001), PR → TR (β = −0.463, p < 0.001), AIGCQ → PI (β = 0.369, p < 0.001), PR → PI (β = −0.227, p < 0.001), TR → PI (β = 0.215, p < 0.001). Mediation effects were tested using the bias-corrected Bootstrap method (5000 resamples, 95% CI) with Maximum Likelihood estimation, as recommended by Preacher and Hayes [84]. Following Wen [85], mediation is significant when the CI does not include zero. Results are presented in Table 15.
Mediation analysis results: AIGCQ → PR → PI = 0.101 (95% CI: 0.034–0.178, p = 0.005 < 0.01, partial mediation); AIGCQ → TR → PI = 0.074 (95% CI: 0.027–0.143, p = 0.003 < 0.01, partial mediation); AIGCQ → PR → TR → PI = 0.045 (95% CI: 0.018–0.093, p = 0.003 < 0.01, chain mediation). Direct effect = 0.369 (p < 0.001); total effect = 0.589 (p < 0.001).
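The figures in Table 15 can be read as a standard effect decomposition: the total effect of AIGCQ on PI equals the direct effect plus the three indirect effects, which, with the reported values, reconciles as follows:

$$
\text{Total effect}=c'+a_1 b_1+a_2 b_2+a_1 d\, b_2
=0.369+0.101+0.074+0.045=0.589,
$$

where $c'$ is the direct effect (AIGCQ → PI), $a_1 b_1$ the indirect effect via PR, $a_2 b_2$ the indirect effect via TR, and $a_1 d\, b_2$ the chain effect via PR and TR (these path labels are notational conventions added here for clarity, not taken from the original tables).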
All continuous variables were mean-centered before creating interaction terms to reduce multicollinearity and facilitate interpretation.
Moderation analysis of PLR (controls: gender, age, education, occupation, shopping frequency): R2 = 0.429, F = 33.806, p < 0.01.
  • PR → PI: β = −0.202 (p < 0.001); PLR × PR = −0.089 (p = 0.007), indicating that PLR strengthens the negative effect.
  • TR → PI: β = 0.198 (p < 0.001); PLR × TR = 0.164 (p < 0.001), indicating that PLR strengthens the positive effect (Figure 6, Table 16).
Moderation analysis of EC (controls: gender, age, education, occupation, shopping frequency): R2 = 0.423, F = 32.956, p < 0.01.
  • PR → PI: β = −0.137 (p = 0.001); EC × PR = −0.126 (p = 0.001), indicating that EC weakens the negative effect.
  • TR → PI: β = 0.202 (p < 0.001); EC × TR = −0.162 (p < 0.001), indicating that EC weakens the positive effect (Figure 7, Table 17).
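For transparency about the analytical procedure, the following is a minimal sketch of the moderated regression described above (mean-centered predictors, interaction terms, and demographic controls), written with statsmodels; the data frame and column names (pr, tr, plr, ec, pi, shop_freq) are hypothetical stand-ins for the scale scores used in the actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

def run_moderation(df: pd.DataFrame, moderator: str):
    """OLS moderation model: PI regressed on mean-centered PR, TR, the moderator,
    the two interaction terms, and demographic controls."""
    d = df.copy()
    for col in ["pr", "tr", moderator]:
        d[f"{col}_c"] = d[col] - d[col].mean()      # mean-centering reduces collinearity
    formula = (
        f"pi ~ pr_c + tr_c + {moderator}_c "
        f"+ pr_c:{moderator}_c + tr_c:{moderator}_c "
        "+ C(gender) + C(age) + C(education) + C(occupation) + C(shop_freq)"
    )
    return smf.ols(formula, data=d).fit()

# e.g. plr_model = run_moderation(survey_df, "plr")   # PLR moderation (hypothetical data frame)
#      ec_model  = run_moderation(survey_df, "ec")    # EC moderation
```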

6. Discussion

6.1. Summary of Key Findings

The results of the structural equation modeling show that seven out of the nine proposed hypotheses were supported, while two moderating hypotheses (H6 and H8) yielded results contrary to expectations. These findings collectively highlight the complex psychological mechanisms underlying user decision-making in AIGC contexts.
First, AIGCQ, as a cognitive starting point, exerts a dual influence on user psychology: it significantly reduces PR (β = −0.448, p < 0.001, supporting H1) and enhances TR (β = 0.343, p < 0.001, supporting H2). This indicates that high-quality AIGC can simultaneously alleviate users’ uncertainty concerns and establish a positive foundation for TR. Second, PR and TR exhibit a significant negative relationship (β = −0.463, p < 0.001, supporting H3), validating the erosive effect of risk perception on trust formation. Regarding their impact on PI, PR has a suppressive effect (β = −0.227, p < 0.001, supporting H4), while TR promotes purchasing behavior (β = 0.215, p < 0.001, supporting H5). These results provide strong empirical support for the applicability of the dual-path “trust–risk” framework in the AIGC acceptance context.
As for the moderating effects, both expected and unexpected patterns emerged. PLR demonstrated a dual amplification role: on the one hand, PLR positively moderated the effect of trust on PI (β = 0.164, p < 0.001, supporting H7); on the other hand, the negative interaction of PLR × PR (β = −0.089, p = 0.007) suggests that PLR in fact intensified the negative effect of risk, which runs counter to the original H6 assumption of a mitigating role. This finding implies that when users perceive a platform as more accountable, they not only become more responsive to trust cues but also more sensitive to perceived risks. The moderating role of EC displayed similarly complex dynamics. The negative interaction of EC × TR (β = −0.162, p < 0.001) supports H9, indicating that moral concerns weaken the positive effect of TR. However, the negative interaction of EC × PR (β = −0.126, p = 0.001) shows that EC actually weakens the negative impact of risk, contrary to the expectation in H8. This “double attenuation” pattern suggests that high moral sensitivity may activate a distinct cognitive mechanism independent of traditional risk–trust evaluation.
Collectively, these findings confirm the core pathway of “AIGCQ → PR/TR → PI,” but also suggest that users’ acceptance of AIGC is more nuanced than previously theorized. In particular, the unexpected moderation effects reveal that (1) PLR may simultaneously enhance users’ opportunity recognition and risk vigilance; and (2) EC may represent a third evaluative dimension that operates independently of the trust–risk framework. These insights offer new theoretical perspectives for deepening our understanding of AIGC acceptance mechanisms.

6.2. Main Pathway Discussion: Content Quality–Trust/Risk–Intention

This study validated the applicability of the “Trust–Risk” dual-pathway in the context of AIGC-based e-commerce and, from the perspective of user psychological mechanisms, revealed how content quality, perceived risk, and trust jointly form the cognitive foundation for PI. First, AIGCQ significantly affects users’ PR and TR, consistent with findings from traditional studies on user-generated content (UGC) and professional recommendation scenarios [86,87]. However, the underlying mechanisms exhibit notable differences within the AIGC context [88]. It is worth noting that the mechanism through which AIGCQ simultaneously influences PR and TR may fundamentally differ from that in traditional UGC contexts. In AIGC scenarios, users not only evaluate the content itself but also implicitly assess the underlying technological system and the platform’s governance capabilities. This multi-level evaluation may help explain why the moderating variables exhibit unexpected effects. The essential nature of AI-generated content is “depersonalization,” meaning users cannot rely on external cues such as “publisher authority” or “historical reputation” for judgment. Instead, linguistic expression, structural logic, and professional tone of the information itself become the “only diagnosable signals” [89]. This aligns with the perspective of “Information Diagnosticity Theory,” which suggests that when users are unable to determine the source of information, they heavily depend on the inherent quality of the content for reliability assessment [47]. Therefore, AIGC content quality is not merely an issue of “information readability” but acts as a vehicle for a “trust proxy mechanism.”
Furthermore, the pathways of risk and trust in this study exhibit strong symmetry; however, their directional effects stem from distinct psychological evaluation mechanisms. The negative influence of PR on TR (β = −0.463) is particularly pronounced, which may reflect a psychological distinctiveness in the context of AIGC. When the “human” origin of content is stripped away, the relationship between risk and trust may no longer function as a simple opposition, but rather be moderated by more complex cognitive and value-based factors. PR typically arises from concerns about content authenticity, controllability of intentions, and unpredictability of consequences. Particularly when AIGC lacks explicit labeling or appears excessively smooth in tone, users may develop a “technological camouflage alertness,” thereby undermining the platform’s overall credibility [90]. In contrast, the formation of trust is not solely driven by content quality but is jointly determined by the perceived rationality of information and users’ broader perception of whether the platform “takes responsibility” and “discloses AI sources” [34]. This differs from traditional e-commerce contexts, where trust primarily relies on platform reputation and accumulated user reviews [10,75]. Therefore, perceived risk reflects a response to “fundamental information processing uncertainty,” while trust represents an intersection of “systemic trust” and “content perception.” This study confirms that both factors independently influence purchase intention, and that perceived risk significantly suppresses the formation of trust, demonstrating a typical “risk erosion of trust” effect.
Unlike existing studies in the e-commerce domain, the model developed in this study exhibits a characteristic of “similar weight between the trust effect and the risk effect,” suggesting that although AIGC content possesses a certain degree of persuasive power, users’ behavioral transformation is not entirely built on trust. Instead, users demonstrate a stronger “self-protection mechanism” when encountering AIGC [91]. This finding supplements prior research that often assumed “high trust automatically accompanies low risk” [75], revealing that in a depersonalized content environment, trust and risk can coexist and independently influence user decision-making. This phenomenon may be associated with the dual features of AI content—its “human-like imitation” and “de-responsibilization”: on one hand, AI can imitate human linguistic styles, raising the initial level of trust; on the other hand, once users realize that there is no clearly identifiable entity bearing responsibility behind the content, their perceived risk escalates rapidly [92]. Therefore, in practical applications, while content quality is a prerequisite for trust, it must be complemented by a clear “responsibility-taking mechanism” to genuinely activate users’ purchase intentions. Platforms cannot solely rely on content optimization to earn user trust; they must also establish “traceable and accountable” AI governance structures at the institutional and ethical levels.
Overall, the basic pathway of “AIGCQ → PR/TR → PI” was supported, indicating that the traditional Trust–Risk framework remains applicable in the context of AIGC. However, this applicability may be conditional—as revealed in the moderation analysis, users’ PLR and EC altered the strength and direction of these fundamental relationships in unexpected ways. This suggests the necessity of a more nuanced understanding of the boundary conditions involved in the AIGC adoption process.

6.3. Moderation Effects Discussion: Platform Responsibility and Ethical Concern

Although AIGCQ largely shapes users’ initial perceptions of AI-generated content, the extent to which these perceptions translate into PI is influenced by higher-order psychological moderators. This study introduced two moderators—PLR and EC—to examine their interaction effects within the “PR and TR → PI” pathway. All four moderation paths were statistically significant; however, two exhibited directions opposite to expectations, highlighting the context-dependency of behavioral transformation in AIGC environments [78,93].
First, regarding PLR, results revealed a surprising dual amplification pattern. When users perceive e-commerce platforms as highly responsible in terms of content review, labeling, and error handling, the positive effect of TR on PI is further strengthened (supporting H7). This aligns with findings from Corporate Social Responsibility (CSR) research, which show that stronger perceptions of organizational responsibility enhance behavioral intention through system-based trust [77,94]. This indicates that structural trust is becoming a foundational support in the use of AI-generated content. Contrary to H6, however, PLR did not attenuate but instead intensified the negative impact of PR on PI (β = −0.089, p = 0.007). This unexpected dual amplification effect extends beyond the patterns observed by Paulssen et al. [95], suggesting that in AIGC contexts, users become more sensitive to both trust and risk under high-responsibility conditions, triggering stronger attributional reactions. This reflects a dual-track cognitive mechanism: users not only expect responsible platforms to manage AI content effectively but may also be more likely to attribute any “AI failure” to the platform itself rather than to the technology. As a result, heightened expectations of platform responsibility may paradoxically lead to stronger rebound effects when those expectations are violated [88]. In this sense, PLR functions as a “double-edged sword”—it acts as both a catalyst for trust and a magnifier of risk. For platforms to earn user trust, it is not enough to merely project an image of responsibility; they must fulfill these expectations in practice, or users may heighten their vigilance toward potential risks due to perceived breaches of trust.
In contrast, the moderating role of EC exhibited a more complex dual-attenuation pattern, with some results diverging from expectations. As predicted by H9, users with high levels of EC demonstrated a significantly weakened capacity to convert TR into PI (β = −0.162, p < 0.001) [96,97]. However, contrary to the expectation of H8, EC attenuated rather than amplified the negative impact of PR on PI (β = −0.126, p = 0.001). This surprising finding suggests that users with heightened moral awareness may develop unique cognitive processing mechanisms. Although they harbor ethical concerns about AI-generated content, such concerns may operate independently of traditional risk assessments, resulting in a “moral concern–risk perception decoupling” phenomenon. This perspective is consistent with consumer ethical perception models, which suggest that ethical discomfort has the potential to disrupt the trust-building process between users and platforms or brands [98]. One plausible explanation is that highly ethically concerned users tend to be more rational and reflective. They are capable of distinguishing between technical risks (e.g., information accuracy) and moral risks (e.g., whether AI should be used to generate content), thereby responding more calmly to actual risks. In other words, EC may activate a mode of value-based reasoning rather than instrumental rationality, shifting the decision-making process away from conventional risk–trust calculations [99,100]. This mechanism may represent a new form of moral priming, in which users evaluate AIGC based on moral positioning, forming a third evaluative dimension that functions independently of the traditional PR–TR framework. Such dynamics are further supported by Gabriela’s research on ethical consumer behavior, which emphasizes the profound impact of value-based defenses on final decision-making [101]. Theoretical implications of these unexpected findings lie in their challenge to traditional linear moderation assumptions, unveiling the cognitive complexity of AIGC acceptance. The dual amplification effect of PLR suggests that users’ expectations of responsible platforms are multifaceted—they not only anticipate more reliable services (which reinforces trust) but also become more alert to potential issues (which reinforces risk perception). Meanwhile, the dual attenuation effect of EC implies that ethical judgment may constitute an independent evaluative axis, one that neither simply magnifies risk nor entirely blocks trust but instead influences user decisions in more nuanced and subtle ways.
Therefore, the acceptance process of AIGC is more complex than previously anticipated. It must pass not only through the stages of content quality assessment, risk perception, and trust formation, but also through the multiple lenses of PLR and EC. This finding suggests that in the commercialization of AIGC, simply optimizing generative models and algorithmic logic is far from sufficient. It is equally critical to understand and respond to users’ multidimensional psychological needs, including their expectations of platform accountability and their nuanced moral sensitivities [102]. In summary, both PLR and EC influence the AIGC acceptance pathways in more intricate ways than initially hypothesized. PLR demonstrates a “dual amplification” mechanism, intensifying the effects of both PR and TR. In contrast, EC exhibits a “dual attenuation” pattern, which may represent a third cognitive mechanism that operates independently of the conventional PR–TR framework. These findings offer new theoretical insights into the boundary conditions of AIGC adoption and acceptance, highlighting the importance of contextual and value-driven factors in shaping user behavior. Furthermore, while the simple slopes analysis demonstrates clear moderation patterns, the Johnson–Neyman technique would identify the precise moderator values at which each effect becomes significant; we regard this analysis as a priority for future research with larger samples.
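As a pointer for such follow-up work, the following is a minimal, illustrative sketch of how Johnson–Neyman transition points could be computed from a fitted moderated regression; it assumes the hypothetical statsmodels model and centered variable names used in the earlier moderation sketch and is not part of the reported analysis.

```python
import numpy as np
from scipy.stats import t as t_dist

def johnson_neyman(res, x="pr_c", xz="pr_c:plr_c", alpha=0.05):
    """Values of the (centered) moderator at which the simple slope of `x`
    on the outcome crosses the significance boundary."""
    b1, b3 = res.params[x], res.params[xz]
    V = res.cov_params()
    v11, v13, v33 = V.loc[x, x], V.loc[x, xz], V.loc[xz, xz]
    t_crit = t_dist.ppf(1 - alpha / 2, res.df_resid)
    # solve (b1 + b3*w)^2 = t_crit^2 * (v11 + 2*w*v13 + w^2*v33) for w
    a = b3 ** 2 - t_crit ** 2 * v33
    b = 2 * (b1 * b3 - t_crit ** 2 * v13)
    c = b1 ** 2 - t_crit ** 2 * v11
    disc = b ** 2 - 4 * a * c
    if disc < 0:
        return None                 # no transition point within the real line
    return np.sort((-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2 * a))
```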

6.4. Theoretical Contributions and Practical Implications

First, this study extends the dual-pathway model of TR and PR to the AIGC-dominated e-commerce context, demonstrating that the core structure of the model remains valid even in environments characterized by non-human authorship and anonymous content sources. Despite the lack of personal endorsements and social verification mechanisms found in traditional UGC environments [103], users are still able to form initial judgments of trust and risk based on the perceived quality of AIGC content. However, the unexpected findings in the moderation effects highlight key boundary conditions of this framework in AIGC contexts: PLR exhibits a “dual amplification” effect—simultaneously enhancing the positive effect of TR and strengthening the negative effect of PR. Meanwhile, EC demonstrates a “dual attenuation” pattern—diminishing the strength of both risk and trust pathways. These non-linear moderation patterns challenge traditional unidirectional assumptions and suggest a theoretical evolution of the TR–PR framework toward a more multi-dimensional model grounded in content-centric, mechanism-mediated, and context-dependent logic [86,92]. Second, by introducing PLR and EC as moderators, this study reveals that the formation of users’ PI is more complex than expected. In particular, the surprising finding that EC weakens rather than strengthens the negative effect of PR implies that moral concerns may activate a distinct cognitive processing path, separate from traditional risk assessments.
This study provides actionable insights for e-commerce platforms, brands, and policymakers.
For platform operators, the findings indicate that PLR is a double-edged sword: while high perceived responsibility strengthens the conversion effect of trust, it also increases users’ sensitivity to risk. Platforms should therefore adopt refined governance strategies, including:
  • Establishing a tiered AI content management system that applies differentiated labels and review standards based on content risk levels;
  • Developing expectation management mechanisms that not only highlight the platform’s responsible image but also communicate the limitations of AI technologies, preventing unrealistic user expectations.
For brands, this study underscores that AIGC is not merely a tool for efficiency but entails a profound reconfiguration of brand perception. In light of the complex role of EC, brands are advised to:
  • Apply a scenario-adaptive strategy, exercising caution or clear labeling when using AIGC in morally sensitive domains (e.g., healthcare, children’s products, financial recommendations);
  • Maintain brand persona consistency, ensuring that AI-generated content aligns with core brand values and linguistic tone.
For policymakers, the findings suggest that risk prevention alone is insufficient to address the multifaceted impacts of AIGC. A three-dimensional governance framework encompassing risk, trust, and ethics is recommended. This includes not only enhanced regulation of platform accountability mechanisms but also special regulatory provisions for morally sensitive applications. Specifically, regulators may promote the development of industry-wide AIGC content certification standards, employing differentiated labels such as “AI-generated,” “AI-assisted,” and “human-reviewed” to grant consumers informed choice, thereby achieving a dynamic balance between user protection and technological innovation.

6.5. Future Research Directions and Limitations

This study validated the applicability of the Trust–Risk dual-pathway in the AIGC-based e-commerce context and introduced EC and PLR as moderating mechanisms. However, certain methodological limitations should be acknowledged. First, the study primarily relied on cross-sectional data collected through structured questionnaires, which makes it difficult to capture the dynamic evolution of user behavior as technological familiarity and platform usage frequency change. Future research could adopt longitudinal tracking or experimental designs to better capture the temporal dynamics of trust construction and risk assessment [34]. Second, cross-sectional data cannot establish causal relationships, and reverse causality remains possible (e.g., users with stronger purchase intentions may retrospectively report higher trust). Third, no formal alternative model comparisons were conducted, which constitutes another limitation.
Building on these limitations, future research can be extended in several directions:
  • Technical perception variables: incorporate AI explainability, algorithmic transparency, and other perception constructs to explore how “black-box models” influence cognitive processes [104].
  • Content attribute analysis: examine different types of AI-generated content (e.g., product descriptions, review summaries, virtual influencer scripts) to assess marginal differences in trust formation across content forms. Future studies could also triangulate subjective AIGCQ ratings with objective textual indicators (e.g., semantic fluency, redundancy, templating indices; an illustrative sketch follows this list) and incorporate scenario-based manipulations (e.g., AIGC labeling, platform-responsibility cues, semantic coherence) to enhance causal inference and disentangle linguistic performance from governance cues across contexts.
  • Individual differences: consider user characteristics such as frequency of AI interaction and ability to recognize AI-generated content as moderators to refine psychological stratification mechanisms.
  • Cross-cultural validation: replicate the model in different cultural contexts to examine whether cultural values (e.g., uncertainty avoidance, collectivism) moderate the risk–trust pathways, enhancing external validity and explanatory power [105].
  • Expanded literature coverage: extend the systematic search beyond Web of Science, Scopus, and Google Scholar to discipline-specific databases such as PsycINFO (consumer psychology), Business Source Premier (e-commerce and business management), ACM Digital Library, and IEEE Xplore (computer science and engineering), and expand the keyword set to include synonyms such as “generative AI,” “machine-generated content,” “synthetic media,” and “computational creativity” to improve the completeness and depth of literature coverage.
By addressing these directions, future studies can further enrich the theoretical foundation and enhance the practical applicability of AIGC research in e-commerce contexts.
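As a pointer for the triangulation idea above, the following is a minimal, illustrative sketch of two objective textual indicators (a redundancy proxy and a crude templating proxy); the function names and cut-offs are hypothetical and would need validation against human AIGCQ ratings.

```python
from collections import Counter

def redundancy_index(text: str) -> float:
    """1 - type/token ratio: higher values indicate more repeated wording."""
    tokens = text.lower().split()
    return 1 - len(set(tokens)) / len(tokens) if tokens else 0.0

def templating_index(descriptions: list[str], n: int = 3) -> float:
    """Share of trigram occurrences accounted for by the 10 most frequent
    trigrams across a set of product descriptions (a crude template proxy)."""
    grams = Counter()
    for text in descriptions:
        tok = text.lower().split()
        grams.update(tuple(tok[i:i + n]) for i in range(len(tok) - n + 1))
    total = sum(grams.values())
    return sum(c for _, c in grams.most_common(10)) / total if total else 0.0
```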

7. Conclusions

In the context of AIGC rapidly permeating e-commerce platforms, how users perceive and accept algorithm-generated product information has become a critical issue. This study constructs and tests a dual-pathway model integrating AIGCQ, PR, TR, and two moderators: EC and PLR.
Based on structural equation modeling of 507 valid responses from e-commerce users, the results reveal that AIGCQ significantly reduces PR and enhances TR. In turn, TR positively and PR negatively influence PI, supporting the basic dual-pathway structure. However, the moderation effects show unexpected complexity: PLR simultaneously amplifies the effects of both PR and TR (“dual amplification”), while EC attenuates both pathways (“dual attenuation”)—not only weakening the effect of TR but also unexpectedly reducing the negative impact of PR. These findings suggest that AIGC acceptance involves multidimensional cognitive mechanisms. The main effects validate the applicability of the traditional trust–risk framework, while the moderating effects reveal important boundary conditions. EC may serve as an independent third evaluative dimension beyond trust and risk, and the bidirectional amplification of PLR reflects users’ comprehensive expectations of platforms.
Theoretically, this study not only extends the trust–risk model to AIGC contexts but also uncovers its nonlinear boundary conditions. Practically, platforms should recognize the “double-edged” nature of responsibility perception, while brands must address the nuanced reactions of morally sensitive users. Future research should further explore the cognitive mechanisms underlying these unexpected moderating effects to develop a more complete theory of AIGC acceptance.

Author Contributions

Data curation, Y.P.; formal analysis, W.J.; investigation, T.Y.; methodology, Y.P.; supervision, Y.P. and W.J.; validation, Y.P.; visualization, T.Y.; writing—original draft, T.Y.; writing—review and editing, T.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by research projects of Qingdao University of Science and Technology (WST2021020) and Kookmin University.

Institutional Review Board Statement

All participants provided informed consent before participating in the study. This study did not require ethical approval and complied with the local regulations of the institution’s location (https://www.law.go.kr/LSW//lsLinkCommonInfo.do?lspttninfSeq=75929&chrClsCd=010202, accessed on 22 January 2025). Additionally, the study adhered to the local government requirements of the data collection site. According to Chapter III Ethical Review—Article 32 of the Implementation of Ethical Review Measures for Human-Related Life Science and Medical Research issued by the Chinese government, this study used anonymized information for research purposes, posed no harm to participants, and did not involve sensitive personal information or commercial interests; therefore, it was exempt from ethical review and approval (https://www.gov.cn/zhengce/zhengceku/2023-02/28/content_5743658.htm, accessed on 22 January 2025).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

All data generated or analyzed during this study are included in this article. The raw data are available from the corresponding authors upon reasonable request.

Acknowledgments

The authors thank all the participants in this study for their time and willingness to share their experiences and feelings.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Quantitative Survey Questionnaire Items

| Variables | Items | Item Statement | References |
| --- | --- | --- | --- |
| AI-Generated Content Quality (AIGCQ) | AIGCQ1 | The AI-generated content on e-commerce platforms is very clear in its information expression. | [6,30,31] |
| | AIGCQ2 | I find that the overall structure of AI-generated product information is well organized. | |
| | AIGCQ3 | The descriptions generated by AI demonstrate a certain degree of professionalism and credibility. | |
| | AIGCQ4 | Compared with user-generated content, AI-generated content performs well in terms of expression quality. | |
| Perceived Risk (PR) | PR1 | I am concerned that AI-generated product information may contain errors or be misleading. | [41,42,75] |
| | PR2 | AI-generated content may obscure the true condition of the product. | |
| | PR3 | I feel that AI-generated content lacks full transparency. | |
| | PR4 | I feel uneasy when making decisions based on AI-generated content. | |
| Trust (TR) | TR1 | I consider AI-generated product information to be trustworthy. | [10,34,56] |
| | TR2 | I believe the platform is capable of managing the quality and norms of AI-generated content. | |
| | TR3 | Even knowing the content is generated by AI, I am still willing to use it. | |
| | TR4 | Overall, I trust the AI-assisted content services provided by the platform. | |
| Perceived Platform Responsibility (PLR) | PLR1 | The platform has the responsibility to clearly inform users which content is generated by AI. | [59,77,78] |
| | PLR2 | The platform should establish mechanisms to monitor the accuracy and applicability of AI content. | |
| | PLR3 | When issues arise due to AI-generated content, the platform should proactively provide explanations. | |
| | PLR4 | Whether the platform takes responsibility affects my acceptance of AI-generated content. | |
| Ethical Concern (EC) | EC1 | I believe replacing human creation with AI poses certain ethical problems. | [69,79,99] |
| | EC2 | I am concerned that AI-generated content may infringe upon expression rights or originality. | |
| | EC3 | If AI-generated content is not clearly labeled, it triggers conflicts with my personal values. | |
| | EC4 | I have ethical concerns about AI-generated content that is not explicitly disclosed as such. | |
| Purchase Intention (PI) | PI1 | I am willing to make purchase decisions based on AI-generated content. | [9,75,91] |
| | PI2 | Even if the content is not human-written, I am still willing to purchase as long as the quality is high. | |
| | PI3 | If AI-generated content provides useful information, I am willing to consider it when choosing products. | |
| | PI4 | When facing AI-generated content, I will decide whether to purchase based on its perceived reliability. | |

References

  1. Bansal, G.; Nawal, A.; Chamola, V.; Herencsar, N. Revolutionizing Visuals: The Role of Generative AI in Modern Image Generation. ACM Trans. Multimedia Comput. Commun. Appl. 2024, 20, 356. [Google Scholar] [CrossRef]
  2. Kimura, T. Exploring the Frontier: Generative AI Applications in Online Consumer Behavior Analytics. Cuad. Gest. 2025, 25, 57–70. [Google Scholar] [CrossRef]
  3. Dai, J.; Mao, X.; Wu, P.; Zhou, H.; Cao, L. Revolutionizing Cross-Border e-Commerce: A Deep Dive into AI and Big Data-Driven Innovations for the Straw Hat Industry. PLoS ONE 2024, 19, e0305639. [Google Scholar] [CrossRef]
  4. Artificial Intelligence (AI) in e-Commerce. Available online: https://www.statista.com/study/146530/artificial-intelligence-ai-and-extended-reality-xr-in-e-commerce/ (accessed on 23 April 2025).
  5. Software Engineer 3 Walmart Inc.; Sinha, A.R. Revolutionizing Retail User Experience: Leveraging Generative AI for Data Summarization, Chatbot Integration, and AI-Driven Sentiment Analysis. Int. Sci. J. Eng. Manag. 2023, 2, 1–6. [Google Scholar] [CrossRef]
  6. Olmedilla, M.; Romero, J.C.; Martínez-Torres, R.; Galván, N.R.; Toral, S. Evaluating Coherence in AI-Generated Text. In Proceedings of the 6th International Conference on Advanced Research Methods and Analytics—CARMA 2024, Valencia, Spain, 26–28 June 2024; pp. 149–156. [Google Scholar] [CrossRef]
  7. Ding, L.; Antonucci, G.; Venditti, M. Unveiling User Responses to AI-Powered Personalised Recommendations: A Qualitative Study of Consumer Engagement Dynamics on Douyin. Qual. Mark. Res. Int. J. 2025, 28, 234–255. [Google Scholar] [CrossRef]
  8. Shareef, M.; Dwivedi, Y.K.; Kumar, V.; Davies, G.; Rana, N.P.; Baabdullah, A. Purchase Intention in an Electronic Commerce Environment: A Trade-off between Controlling Measures and Operational Performance. Inf. Technol. People 2019, 32, 1345–1375. [Google Scholar] [CrossRef]
  9. Sipos, D. The Effects of AI-Powered Personalization on Consumer Trust, Satisfaction, and Purchase Intent. Eur. J. Appl. Sci. Eng. Technol. 2025, 3, 14–24. [Google Scholar] [CrossRef] [PubMed]
  10. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in Online Shopping: An Integrated Model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  11. Yang, Y.; Jia, Q.; Zhao, Y. The Impact of Platform Social Responsibility on Consumer Trust the Impact of Platform Social Responsibility on Consumer Trust. In Proceedings of the International Conference on Electronic Business, Nanjing, China, 3–7 December 2021. [Google Scholar]
  12. Sharma, S.; Chaitanya, K.; Jawad, A.B.; Premkumar, I.; Mehta, J.V.; Hajoary, D. Ethical Considerations in AI-Based Marketing: Balancing Profit and Consumer Trust. Tuijin Jishu/J. Propuls. Technol. 2023, 44, 1301–1309. [Google Scholar] [CrossRef]
  13. Cyberspace Administration of China; National Development and Reform Commission; Ministry of Education; Ministry of Science and Technology; Ministry of Industry and Information Technology; Ministry of Public Security; National Radio and Television Administration. Interim Measures for the Administration of Generative Artificial Intelligence Services; 2023. Available online: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed on 23 April 2025).
  14. EU AI Act: First Regulation on Artificial Intelligence. Available online: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 23 April 2025).
  15. Jobin, A.; Ienca, M.; Vayena, E. The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  16. Hua, Y.; Niu, S.; Cai, J.; Chilton, L.B.; Heuer, H.; Wohn, D.Y. Generative AI in User-Generated Content. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–7. [Google Scholar] [CrossRef]
  17. Chua, T.-S. Towards Generative Search and Recommendation: A Keynote at RecSys 2023. ACM SIGIR Forum 2023, 57, 1–14. [Google Scholar] [CrossRef]
  18. Wang, Y.; Luo, H.; Liu, H. Research on the Application of AIGC Technology in E-commerce Platforms Advertising. Int. J. Asian Soc. Sci. Res. 2025, 2, 32–41. [Google Scholar] [CrossRef]
  19. Zhang, X.; Guo, F.; Chen, T.; Pan, L.; Beliakov, G.; Wu, J. A Brief Survey of Machine Learning and Deep Learning Techniques for E-Commerce Research. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 2188–2216. [Google Scholar] [CrossRef]
  20. Wasilewski, A. Harnessing Generative AI for Personalized E-Commerce Product Descriptions: A Framework and Practical Insights. Comput. Stand. Interfaces 2025, 94, 104012. [Google Scholar] [CrossRef]
  21. Wasilewski, A.; Chawla, Y.; Pralat, E. Enhanced E-Commerce Personalization through AI-Powered Content Generation Tools. IEEE Access 2025, 13, 48083–48095. [Google Scholar] [CrossRef]
  22. Malikireddy, S.K.R. Revolutionizing Product Recommendations with Generative AI: Context-Aware Personalization at Scale. IJSREM 2024, 8, 1–8. [Google Scholar] [CrossRef]
  23. Generative AI and the Future of Interactive and Immersive Advertising|Semantic Scholar. Available online: https://www.semanticscholar.org/paper/Generative-AI-and-the-Future-of-Interactive-and-Gujar-Paliwal/4d14e8b7f56066963cbd28a49dbdc22b0be19979?utm_source=consensus (accessed on 23 April 2025).
  24. Balamurugan, M. AI-Driven Adaptive Content Marketing: Automating Strategy Adjustments for Enhanced Consumer Engagement. Int. J. Multidiscip. Res. 2024, 6, 27940. [Google Scholar] [CrossRef]
  25. Teepapal, T. AI-Driven Personalization: Unraveling Consumer Perceptions in Social Media Engagement. Comput. Hum. Behav. 2025, 165, 108549. [Google Scholar] [CrossRef]
  26. Rae, I. The Effects of Perceived AI Use on Content Perceptions. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–14. [Google Scholar] [CrossRef]
  27. Kirk, C.P.; Givi, J. The AI-Authorship Effect: Understanding Authenticity, Moral Disgust, and Consumer Responses to AI-Generated Marketing Communications. J. Bus. Res. 2025, 186, 114984. [Google Scholar] [CrossRef]
  28. Pandey, P.; Rai, A.K. Modeling Consequences of Brand Authenticity in Anthropomorphized AI-Assistants: A Human-Robot Interaction Perspective. PURUSHARTHA-J. Manag. Ethics Spiritual. 2025, 17, 116–135. [Google Scholar] [CrossRef]
  29. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT 2023. arXiv 2023, arXiv:2303.04226. [Google Scholar]
  30. Fang, X.; Chen, P.; Wang, M.; Wang, S. Examining the Role of Compression in Influencing AI-Generated Image Authenticity. Sci. Rep. 2025, 15, 12192. [Google Scholar] [CrossRef] [PubMed]
  31. Eisner, J.; Holub, S.; Goller, F.; Steiner, E. Revolutionizing Product Descriptions: The Impact of AI Generated Product Descriptions on Product Quality Perception and Purchase Intention. In Proceedings of the AMS World Marketing Congress, Bel Ombre, Mauritius, 25–29 June 2024. [Google Scholar]
  32. Kishnani, D. The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2025. [Google Scholar]
  33. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2014, 144, 114–126. [Google Scholar] [CrossRef] [PubMed]
  34. Zhou, T.; Lu, H. The Effect of Trust on User Adoption of AI-Generated Content. Electron. Libr. 2025, 43, 61–76. [Google Scholar] [CrossRef]
  35. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  36. Bigman, Y.E.; Gray, K. People Are Averse to Machines Making Moral Decisions. Cognition 2018, 181, 21–34. [Google Scholar] [CrossRef]
  37. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  38. Hoffman, R.R.; Johnson, M.; Bradshaw, J.M.; Underbrink, A. Trust in Automation. IEEE Intell. Syst. 2013, 28, 84–88. [Google Scholar] [CrossRef]
  39. Sundar, S.S. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). J. Comput.-Mediat. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
  40. Zhang, B.; Dafoe, A. Artificial Intelligence: American Attitudes and Trends; Center for the Governance of AI, Future of Humanity Institute, University of Oxford: Oxford, UK, 2019. [Google Scholar]
  41. Featherman, M.S.; Pavlou, P.A. Predicting E-Services Adoption: A Perceived Risk Facets Perspective. Int. J. Hum.-Comput. Stud. 2003, 59, 451–474. [Google Scholar] [CrossRef]
  42. Forsythe, S.; Liu, C.; Shannon, D.; Gardner, L.C. Development of a Scale to Measure the Perceived Benefits and Risks of Online Shopping. J. Interact. Mark. 2006, 20, 55–75. [Google Scholar] [CrossRef]
  43. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  44. Harrison McKnight, D.; Choudhury, V.; Kacmar, C. The Impact of Initial Consumer Trust on Intentions to Transact with a Web Site: A Trust Building Model. J. Strateg. Inf. Syst. 2002, 11, 297–323. [Google Scholar] [CrossRef]
  45. Paul, A. Pavlou Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model. Int. J. Electron. Commer. 2003, 7, 101–134. [Google Scholar] [CrossRef]
  46. Kim, H.-W.; Kankanhalli, A. Investigating User Resistance to Information Systems Implementation: A Status Quo Bias Perspective. MIS Q. 2009, 33, 567–582. [Google Scholar] [CrossRef]
  47. Nicolaou, A.I.; McKnight, D.H. Perceived Information Quality in Data Exchanges: Effects on Risk, Trust, and Intention to Use. Inf. Syst. Res. 2006, 17, 332–351. [Google Scholar] [CrossRef]
  48. Li, F.; Yang, Y.; Yu, G. Nudging Perceived Credibility: The Impact of AIGC Labeling on User Distinction of AI-Generated Content. Emerg. Media 2025, 3, 275–304. [Google Scholar] [CrossRef]
  49. Dezao, T. Enhancing Transparency in AI-Powered Customer Engagement. J. AI Robot. Workplace Autom. 2024, 3, 134. [Google Scholar] [CrossRef]
  50. Reeves, B.; Nass, C. The Media Equation—How People Treat Computers, Television, and New Media like Real People and Places; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  51. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef]
  52. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  53. Mori, M.; MacDorman, K.; Kageki, N. The Uncanny Valley [from the Field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  54. Agarwal, R.; Prasad, J. A Conceptual and Operational Definition of Personal Innovativeness in the Domain of Information Technology. Inf. Syst. Res. 1998, 9, 204–215. [Google Scholar] [CrossRef]
  55. Venkatesh, V. Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Inf. Syst. Res. 2000, 11, 342–365. [Google Scholar] [CrossRef]
  56. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and Validating Trust Measures for E-Commerce: An Integrative Typology. Inf. Syst. Res. 2002, 13, 334–359. [Google Scholar] [CrossRef]
  57. Zhang, B.; Anderljung, M.; Kahn, L.; Dreksler, N.; Horowitz, M.C.; Dafoe, A. Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers. J. Artif. Intell. Res. 2021, 71, 591–666. [Google Scholar] [CrossRef]
  58. Shin, D.; Park, Y.J. Role of Fairness, Accountability, and Transparency in Algorithmic Affordance. Comput. Hum. Behav. 2019, 98, 277–284. [Google Scholar] [CrossRef]
  59. Stanaland, A.J.S.; Lwin, M.O.; Murphy, P.E. Consumer Perceptions of the Antecedents and Consequences of Corporate Social Responsibility. J. Bus. Ethics 2011, 102, 47–55. [Google Scholar] [CrossRef]
  60. Gillath, O.; Ai, T.; Branicky, M.S.; Keshmiri, S.; Davison, R.B.; Spaulding, R. Attachment and Trust in Artificial Intelligence. Comput. Hum. Behav. 2021, 115, 106607. [Google Scholar] [CrossRef]
  61. Xu, S.; Mou, Y.; Ding, Z. The More Open, the Better? Research on the Influence of Subject Diversity on Trust of Tourism Platforms. Mark. Intell. Plan. 2023, 41, 1213–1235. [Google Scholar] [CrossRef]
  62. Ahn, J.; Kim, J.; Sung, Y. The Role of Perceived Freewill in Crises of Human-AI Interaction: The Mediating Role of Ethical Responsibility of AI. Int. J. Advert. 2024, 43, 847–873. [Google Scholar] [CrossRef]
  63. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Trust Transfer in the Continued Usage of Public E-Services. Inf. Manag. 2014, 51, 627–640. [Google Scholar] [CrossRef]
  64. Vasir, G.; Huh-Yoo, J. Characterizing the Flaws of Image-Based AI-Generated Content. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; Association for Computing Machinery: New York, NY, USA, 25 April 2025; pp. 1–7. [Google Scholar]
  65. Du, D.; Zhang, Y.; Ge, J. Effect of AI Generated Content Advertising on Consumer Engagement; Nah, F., Siau, K., Eds.; Springer Nature: Cham, Switzerland, 2023; Volume 14039, pp. 121–129. [Google Scholar]
  66. Franzoni, V. From Black Box to Glass Box: Advancing Transparency in Artificial Intelligence Systems for Ethical and Trustworthy AI; Gervasi, O., Murgante, B., Rocha, A.M.A.C., Garau, C., Scorza, F., Karaca, Y., Torre, C.M., Eds.; Springer Nature: Cham, Switzerland, 2023; Volume 14107, pp. 118–130. [Google Scholar]
  67. Kumar, R. Ethics of Artificial Intelligence and Automation: Balancing Innovation and Responsibility. J. Comput. Signal Syst. Res. 2024, 1, 1–8. Available online: https://www.researchgate.net/publication/386537846_Ethics_of_Artificial_Intelligence_and_Automation_Balancing_Innovation_and_Responsibility (accessed on 23 April 2025). [CrossRef]
  68. Sundar, S.S.; Kim, J. Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–9. [Google Scholar]
  69. Choung, H.; David, P.; Ross, A. Trust and Ethics in AI. AI Soc. 2023, 38, 733–745. [Google Scholar] [CrossRef]
  70. Lopes, E.L.; Yunes, L.Z.; Bandeira De Lamônica Freire, O.; Herrero, E.; Contreras Pinochet, L.H. The Role of Ethical Problems Related to a Brand in the Purchasing Decision Process: An Analysis of the Moderating Effect of Complexity of Purchase and Mediation of Perceived Social Risk. J. Retail. Consum. Serv. 2020, 53, 101970. [Google Scholar] [CrossRef]
  71. Cabrera, D.; Cabrera, L.L. The Steps to Doing a Systems Literature Review (SLR). J. Syst. Think. Prepr. 2023, 3, 1–27. [Google Scholar] [CrossRef]
  72. Sarkis-Onofre, R.; Catalá-López, F.; Aromataris, E.; Lockwood, C. How to Properly Use the PRISMA Statement. Syst. Rev. 2021, 10, 117. [Google Scholar] [CrossRef]
  73. Dorussen, H.; Lenz, H.; Blavoukos, S. Assessing the Reliability and Validity of Expert Interviews. Eur. Union Politics 2005, 6, 315–337. [Google Scholar] [CrossRef]
  74. Liang, J.; Guo, J.; Liu, Z.; Tang, J. “Ask Everyone?” Understanding How Social Q&a Feedback Quality Influences Consumers’ Purchase Intentions. In Proceedings of the Eighteenth Wuhan International Conference on E-Social Network and Commerce, Wuhan, China, 24–26 May 2019. [Google Scholar]
  75. Kim, D.J.; Ferrin, D.L.; Rao, H.R. A Trust-Based Consumer Decision-Making Model in Electronic Commerce: The Role of Trust, Perceived Risk, and Their Antecedents. Decis. Support Syst. 2008, 44, 544–564. [Google Scholar] [CrossRef]
  76. Amin, S.; Mahasan, S.S. Relationship between Consumers Perceived Risks and Consumer Trust: A Study of Sainsbury Store. Middle-East J. Sci. Res. 2014, 19, 647–655. [Google Scholar]
  77. Yang, F.; Abedin, M.Z.; Qiao, Y.; Ye, L. Toward Trustworthy Governance of AI-Generated Content (AIGC): A Blockchain-Driven Regulatory Framework for Secure Digital Ecosystems. IEEE Trans. Eng. Manag. 2024, 71, 14945–14962. [Google Scholar] [CrossRef]
  78. David, P.; Choung, H.; Seberger, J.S. Who Is Responsible? US Public Perceptions of AI Governance through the Lenses of Trust and Ethics. Public Underst. Sci. 2024, 33, 654–672. [Google Scholar] [CrossRef]
  79. Adanyin, A. Ethical AI in Retail: Consumer Privacy and Fairness. Eur. J. Comput. Sci. Inf. Technol. 2024, 12, 21–35. [Google Scholar] [CrossRef]
  80. Brunk, K.H. Consumer Perceived Ethicality: An Impression Formation Perspective. In Proceedings of the EMAC Annual Conference 2010, Nantes, France, 1 June 2010. [Google Scholar]
  81. Kuder, G.F.; Richardson, M.W. The Theory of the Estimation of Test Reliability. Psychometrika 1937, 2, 151–160. [Google Scholar] [CrossRef]
  82. dos Santos, P.M.; Cirillo, M.Â. Construction of the Average Variance Extracted Index for Construct Validation in Structural Equation Models with Adaptive Regressions. Commun. Stat.-Simul. Comput. 2023, 52, 1639–1650. [Google Scholar] [CrossRef]
  83. Ullman, J.B.; Bentler, P.M. Structural Equation Modeling. In Handbook of Psychology, 2nd ed.; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2012; ISBN 978-1-118-13388-0. [Google Scholar]
  84. Preacher, K.J.; Rucker, D.D.; Hayes, A.F. Addressing Moderated Mediation Hypotheses: Theory, Methods, and Prescriptions. Multivar. Behav. Res. 2007, 42, 185–227. [Google Scholar] [CrossRef]
  85. Wen, Z.; Ye, B. Mediation Effect Analysis: Methods and Model Development. Adv. Psychol. Sci. 2014, 22, 731. [Google Scholar] [CrossRef]
  86. Filieri, R.; McLeay, F.; Tsui, B.; Lin, Z. Consumer Perceptions of Information Helpfulness and Determinants of Purchase Intention in Online Consumer Reviews of Services. Inf. Manag. 2018, 55, 956–970. [Google Scholar] [CrossRef]
  87. Alamyar, I.H. The Role of User-Generated Content in Shaping Consumer Trust: A Communication Psychology Approach to E-Commerce. Medium 2025, 12, 175–191. [Google Scholar] [CrossRef]
  88. Jiang, X.; Wu, Z.; Yu, F. Constructing Consumer Trust through Artificial Intelligence Generated Content. Acad. J. Bus. Manag. 2024, 6, 263–272. [Google Scholar] [CrossRef]
  89. Gu, C.; Jia, S.; Lai, J.; Chen, R.; Chang, X. Exploring Consumer Acceptance of AI-Generated Advertisements: From the Perspectives of Perceived Eeriness and Perceived Intelligence. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 2218–2238. [Google Scholar] [CrossRef]
  90. Suryaningsih, I.B.; Hadiwidjojo, D.; Rohman, F.; Sumiati, S. A Theoretical Framework: The Role of Trust and Perceived Risks in Purchased Decision. Res. Bus. Manag. 2014, 1, 103. [Google Scholar] [CrossRef]
  91. Lăzăroiu, G.; Neguriţă, O.; Grecu, I.; Grecu, G.; Mitran, P.C. Consumers’ Decision-Making Process on Social Commerce Platforms: Online Trust, Perceived Risk, and Purchase Intentions. Front. Psychol. 2020, 11, 890. [Google Scholar] [CrossRef] [PubMed]
  92. Ou, M.; Zheng, H.; Zeng, Y.; Hansen, P. Trust It or Not: Understanding Users’ Motivations and Strategies for Assessing the Credibility of AI-Generated Information. New Media Soc. 2024, 14614448241293154. [Google Scholar] [CrossRef]
  93. Gonçalves, A.R.; Pinto, D.C.; Rita, P.; Pires, T. Artificial Intelligence and Its Ethical Implications for Marketing. Emerg. Sci. J. 2023, 7, 313–327. [Google Scholar] [CrossRef]
  94. Liu, W.; Wang, C.; Ding, L.; Wang, C. Research on the Influence Mechanism of Platform Corporate Social Responsibility on Customer Extra-Role Behavior. Discrete Dyn. Nat. Soc. 2021, 2021, 1895598. [Google Scholar] [CrossRef]
  95. Paulssen, M.; Roulet, R.; Wilke, S. Risk as Moderator of the Trust-Loyalty Relationship. Eur. J. Mark. 2014, 48, 964–981. [Google Scholar] [CrossRef]
  96. Tang, Y.; Su, L. Graduate Education in China Meets AI: Key Factors for Adopting AI-Generated Content Tools. Libri 2025, 75, 81–96. [Google Scholar] [CrossRef]
  97. Riquelme, I.P.; Román, S. The Relationships among Consumers’ Ethical Ideology, Risk Aversion and Ethically-Based Distrust of Online Retailers and the Moderating Role of Consumers’ Need for Personal Interaction. Ethics Inf. Technol. 2014, 16, 135–155. [Google Scholar] [CrossRef]
  98. Peukert, C.; Kloker, S. Trustworthy AI: How Ethicswashing Undermines Consumer Trust. In Proceedings of the 15th International Conference on Wirtschaftsinformatik (WI), Potsdam, Germany, 8–11 March 2020; GITO Verlag: Berlin, Germany, 2020; pp. 1100–1115. [Google Scholar] [CrossRef]
  99. Ferhataj, A.; Memaj, F.; Sahatcija, R.; Ora, A.; Koka, E. Ethical Concerns in AI Development: Analyzing Students’ Perspectives on Robotics and Society. J. Inf. Commun. Ethics Soc. 2025, 23, 165–187. [Google Scholar] [CrossRef]
  100. Aldulaimi, S.; Soni, S.; Kampoowale, I.; Krishnan, G.; Yajid, M.S.A.; Khatibi, A.; Minhas, D.; Khurana, M. Customer Perceived Ethicality and Electronic Word of Mouth Approach to Customer Loyalty: The Mediating Role of Customer Trust. Int. J. Ethics Syst. 2024, 41, 258–278. [Google Scholar] [CrossRef]
  101. Gabriela, S. Ethical Consumerism in the 21st Century. Ovidius Univ. Ann. Econ. Sci. Ser. 2010, X, 1327–1331. [Google Scholar]
  102. Wang, X.; Tajvidi, M.; Lin, X.; Hajli, N. Towards an Ethical and Trustworthy Social Commerce Community for Brand Value Co-Creation: A Trust-Commitment Perspective. J. Bus. Ethics 2020, 167, 137–152. [Google Scholar] [CrossRef]
  103. Cheung, C.M.K.; Lee, M.K.O.; Rabjohn, N. The Impact of Electronic Word-of-mouth. Internet Res. 2008, 18, 229–247. [Google Scholar] [CrossRef]
  104. Wang, S.-F.; Chen, C.-C. Exploring Designer Trust in Artificial Intelligence-Generated Content: TAM/TPB Model Study. Appl. Sci. 2024, 14, 6902. [Google Scholar] [CrossRef]
  105. Liu, Z.; Chen, Z. Cross-Cultural Perspectives on Artificial Intelligence Generated Content (AIGC): A Comparative Study of Attitudes and Acceptance among Global Products. In Cross-Cultural Design, Proceedings of the 16th International Conference, CCD 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Washington, DC, USA, 29 June–4 July 2024; Rau, P.-L.P., Ed.; Springer Nature: Cham, Switzerland, 2024; pp. 287–298. [Google Scholar]
Figure 1. PRISMA flow diagram of the SLR process.
Figure 2. Qualitative Analysis Process of Expert Interviews.
Figure 3. Structural Model of Users’ Acceptance Mechanism toward AIGC Content.
Figure 4. Confirmatory Factor Analysis Model Diagram.
Figure 5. Structural Equation Model Diagram.
Figure 6. Moderating Effect of PLR on PR-PI and TR-PI Relationships.
Figure 7. Moderating Effect of EC on PR-PI and TR-PI Relationships.
Table 1. Main Categories of AI-Generated Content.

| Content Type | Description | Example Platforms or Tools |
| --- | --- | --- |
| Product Information Generation | Automatically writes product titles, selling points, specifications, and functional descriptions | Taobao “Smart Copywriting,” Amazon Auto Title Generation, JD AI Graphic Assistant |
| User Review and Q&A Generation | Synthesizes review summaries, simulates authentic user reviews, and auto-completes FAQs | E-commerce AI Customer Service, Review Summary Generators, Shopee Auto Reply |
| Marketing Copy Generation | Generates personalized recommendation phrases, discount prompts, ad slogans, and SMS content | JD Advertising AI System, Pinduoduo Push Message Generator |
| Image/Video Content Generation | Generates product images, virtual model try-on, promotional short videos, and livestream personas | Aliyun Visual AIGC, Runway, Midjourney, Luma |
| Customer Service and Interaction Content Generation | Generates smart customer service responses, auto-guidance phrases, and scenario-based dialogues | JD Cloud Smart Customer Service, Coupang Auto Response System |
| User Recommendation and Personalized Content | Generates personalized product descriptions in recommendation sections based on user behavior | Amazon Personalized Product Descriptions, TikTok E-commerce Content Recommendations |
Table 2. Frequency Statistics of Keywords/Variables in Systematic Literature.

| Category | Content/Keyword | Frequency (n) | Proportion (%) |
| --- | --- | --- | --- |
| Research Topic Keywords | Trust | 38 | 62.3% |
| | Risk Perception | 35 | 57.4% |
| | AIGC | 24 | 39.3% |
| | Platform Responsibility | 12 | 19.7% |
| | Ethical Concern | 9 | 14.8% |
| | Purchase Intention | 47 | 77.0% |
| Theoretical Frameworks | Trust–Risk Framework | 21 | 34.4% |
| | TAM/UTAUT | 18 | 29.5% |
| | No Explicit Theoretical Framework | 13 | 21.3% |
| Research Methods | Quantitative (Survey) | 42 | 68.9% |
| | Qualitative (Interview/Content Analysis) | 9 | 14.8% |
| | Mixed Methods | 10 | 16.3% |
| Application Scenarios | E-commerce Platforms | 41 | 67.2% |
| | Social Media Content Recommendation | 12 | 19.7% |
| | Enterprise-Generated Content Systems | 8 | 13.1% |
Table 3. Information of Interviewed Experts.

| ID | Professional Role | Industry/Organization | Relevant Expertise |
| --- | --- | --- | --- |
| E1 | AI Product Manager | Domestic E-commerce Platform A | AI Copywriting Generation, Content System Deployment |
| E2 | Director of Content Operations | International E-commerce Platform B | Product Content Quality Management, Automated Comment Review |
| E3 | UX Design Researcher | Research Institute / UX Lab | Consumer Behavior Analysis, A/B Testing |
| E4 | Technology Ethics Scholar | University Philosophy & Social Research Center | AIGC Ethical Review, Platform Policy Consultation |
| E5 | Digital Marketing Consultant | Independent Consulting Firm | AI Recommendation Strategies, E-commerce Operation Optimization |
| E6 | Platform Content Review Specialist | E-commerce Platform C | User Report Handling, Violation Content Review |
| E7 | AI Writing System Development Engineer | AI Tech Company D | Text Generation Model Design, Content Credibility Optimization |
| E8 | Postdoctoral Researcher in Social Sciences (Platform Governance) | Public Policy Research Institute | Platform Responsibility Mechanisms, User Trust Crisis Management |
Table 4. Coding Summary of Expert Interviews (Partial).

| ID | Original Statement (Excerpt) | Initial Code | Thematic Category | Mapped Variable |
| --- | --- | --- | --- | --- |
| E2-01 | “This review is too polished—it doesn’t look like a real person wrote it.” | Highly Uniform Expression Style | Authenticity Judgment | AIGCQ |
| E4-02 | “AI content is acceptable, but the platform should tell me if it’s machine-written.” | Lack of Source Disclosure | Expectation of Transparency | PLR |
| E3-03 | “I’m afraid AI-generated reviews might be manipulated to look overly positive.” | Concern about Manipulated Reviews | Risk Trigger | PR |
| E6-01 | “Most user reports target those that seem human-written but sound weird.” | Difficulty in Identifying Suspicious Content | Risk Identification Mechanism | PR |
| E1-04 | “I don’t mind AI copywriting, but not in emotionally sensitive product categories.” | Boundary of Content Applicability | Moral Fit Assessment | EC |
| E5-02 | “A platform can’t profit from AI but deny responsibility when things go wrong.” | Attribution of Platform Responsibility | Rejection of Responsibility Shifting | PLR |
| E7-03 | “Whether users trust AI content depends largely on their trust in the platform.” | Platform-Driven Trust Mechanism | Proxy Trust Structure | TR |
| E8-01 | “The worst thing about AI content is errors with no accountability—that uncertainty creates anxiety.” | Unclear Consequences of Errors | Opaque Technical Risks | PR |
Table 5. Mapping of Expert Interview Themes to Model Constructs.

| ID | Thematic Category | Descriptive Keywords (Examples) | Mapped Construct |
| --- | --- | --- | --- |
| T1 | Content Quality | Natural, logically clear, overly standardized, lacking authenticity | AIGCQ |
| T2 | Risk Salience | Untrustworthy, manipulated reviews, vague sources, information overload, obvious AI traces | PR |
| T3 | Rebuilding Trust | Trust in platform, distrust in author, sense of system control, rule transparency, brand reputation | TR |
| T4 | Ethical Boundary Sensitivity | Feeling deceived, moral ambiguity, inappropriate in certain domains, blurred right to expression, information manipulation | EC |
| T5 | Responsibility Expectation | Should be labeled, who is responsible for errors, lack of knowledge leads to distrust, platforms must not evade responsibility | PLR |
Table 6. Common Method Bias Test.
Component | Initial Eigenvalues (Total, % of Variance, Cumulative %) | Extraction Sums of Squared Loadings (Total, % of Variance, Cumulative %)
1 | 8.255, 34.395, 34.395 | 8.255, 34.395, 34.395
2 | 3.007, 12.530, 46.925 | 3.007, 12.530, 46.925
3 | 1.861, 7.753, 54.678 | 1.861, 7.753, 54.678
4 | 1.697, 7.069, 61.747 | 1.697, 7.069, 61.747
5 | 1.383, 5.764, 67.511 | 1.383, 5.764, 67.511
6 | 1.249, 5.206, 72.716 | 1.249, 5.206, 72.716
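Table 6 corresponds to a Harman single-factor test: the first unrotated component explains 34.395% of the variance, below the 40–50% thresholds commonly cited, which suggests that common method bias is not a serious concern. Purely for illustration, a sketch of such a test in Python is given below; the DataFrame name `items` and the use of scikit-learn PCA are assumptions, since the exact extraction settings are not reported here.

```python
# Illustrative sketch of a Harman single-factor test (cf. Table 6).
# Assumption: `items` is a pandas DataFrame holding the 24 survey items
# (six constructs, four items each); the name and data source are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def harman_single_factor(items: pd.DataFrame) -> pd.DataFrame:
    """Unrotated PCA on standardized items; returns variance explained per component."""
    z = (items - items.mean()) / items.std(ddof=1)   # standardize each item
    pca = PCA().fit(z)
    var_pct = pca.explained_variance_ratio_ * 100
    return pd.DataFrame({
        "Component": np.arange(1, len(var_pct) + 1),
        "% of Variance": var_pct.round(3),
        "Cumulative %": var_pct.cumsum().round(3),
    })

# A first component explaining well under 40-50% of the total variance
# (34.395% in Table 6) is usually read as showing no serious common method bias.
```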
Table 7. Demographic Variables.
Item | Option | Frequency | Percentage
Gender | Male | 269 | 53.06%
 | Female | 238 | 46.94%
Age | Under 18 | 22 | 4.34%
 | 19–25 | 151 | 29.78%
 | 26–35 | 154 | 30.37%
 | 36–45 | 116 | 22.88%
 | Over 46 | 64 | 12.62%
Education | High school or below | 99 | 19.53%
 | Bachelor’s degree | 262 | 51.68%
 | Master’s degree or above | 146 | 28.80%
Shopping frequency per month | ≤1 time | 81 | 15.98%
 | 2–3 times | 156 | 30.77%
 | 4–6 times | 169 | 33.33%
 | ≥7 times | 101 | 19.92%
Frequency of encountering AI-generated content | Frequently | 162 | 31.95%
 | Occasionally | 139 | 27.42%
 | Rarely | 125 | 24.65%
 | Not sure | 81 | 15.98%
Table 8. Results of Questionnaire Reliability Analysis.
Dimension | Number of Items | Cronbach’s Alpha
AIGCQ | 4 | 0.872
PR | 4 | 0.858
TR | 4 | 0.867
PLR | 4 | 0.903
EC | 4 | 0.888
PI | 4 | 0.831
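All six scales in Table 8 exceed the conventional 0.7 reliability threshold. As an illustrative note only, Cronbach’s alpha for a four-item scale can be computed as sketched below; the DataFrame `scale_items` and the example column names are hypothetical placeholders, not the authors’ data files.

```python
# Illustrative computation of Cronbach's alpha (cf. Table 8).
# Assumption: `scale_items` is a pandas DataFrame whose columns are the four
# items of one construct; names like "AIGCQ1"..."AIGCQ4" are placeholders.
import pandas as pd

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    k = scale_items.shape[1]                          # number of items
    item_var = scale_items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scale_items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_var / total_var)

# e.g. cronbach_alpha(df[["AIGCQ1", "AIGCQ2", "AIGCQ3", "AIGCQ4"]]) would be
# expected to reproduce the 0.872 in Table 8, given the original responses.
```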
Table 9. Model Fit Indices of Confirmatory Factor Analysis.
Fit Index | Criterion | Actual Value | Fit Result
CMIN/DF | <3 | 1.658 | Excellent
GFI | >0.80 | 0.940 | Excellent
AGFI | >0.80 | 0.925 | Excellent
RMSEA | <0.08 | 0.036 | Excellent
NFI | >0.9 | 0.945 | Excellent
IFI | >0.9 | 0.977 | Excellent
TLI | >0.9 | 0.974 | Excellent
CFI | >0.9 | 0.977 | Excellent
PNFI | >0.5 | 0.811 | Excellent
PCFI | >0.5 | 0.839 | Excellent
Table 10. Results of Confirmatory Factor Analysis.
Dimension | Observed Variable | Factor Loading | S.E. | C.R. | p | CR | AVE
AIGCQ | AIGCQ1 | 0.828 | – | – | – | 0.873 | 0.631
 | AIGCQ2 | 0.762 | 0.049 | 18.608 | ***
 | AIGCQ3 | 0.813 | 0.048 | 20.180 | ***
 | AIGCQ4 | 0.773 | 0.053 | 18.958 | ***
PR | PR1 | 0.788 | – | – | – | 0.860 | 0.606
 | PR2 | 0.815 | 0.056 | 18.742 | ***
 | PR3 | 0.716 | 0.061 | 16.248 | ***
 | PR4 | 0.790 | 0.063 | 18.144 | ***
TR | TR1 | 0.825 | – | – | – | 0.867 | 0.621
 | TR2 | 0.794 | 0.049 | 19.575 | ***
 | TR3 | 0.718 | 0.050 | 17.248 | ***
 | TR4 | 0.810 | 0.049 | 20.059 | ***
PLR | PLR1 | 0.806 | – | – | – | 0.904 | 0.703
 | PLR2 | 0.877 | 0.053 | 22.391 | ***
 | PLR3 | 0.849 | 0.053 | 21.534 | ***
 | PLR4 | 0.820 | 0.046 | 20.566 | ***
EC | EC1 | 0.830 | – | – | – | 0.890 | 0.669
 | EC2 | 0.750 | 0.053 | 18.711 | ***
 | EC3 | 0.829 | 0.051 | 21.420 | ***
 | EC4 | 0.859 | 0.049 | 22.395 | ***
PI | PI1 | 0.704 | – | – | – | 0.833 | 0.555
 | PI2 | 0.770 | 0.071 | 15.237 | ***
 | PI3 | 0.720 | 0.070 | 14.399 | ***
 | PI4 | 0.783 | 0.071 | 15.454 | ***
Note: *** indicates significance at the 0.001 level.
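The CR and AVE columns in Table 10 follow directly from the standardized loadings. As a worked check (illustrative Python; the AIGCQ loadings are copied from the table, nothing else is assumed):

```python
# Recomputing CR and AVE from the standardized loadings reported in Table 10.
def composite_reliability(loadings):
    lam_sum = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)        # indicator error variances
    return lam_sum ** 2 / (lam_sum ** 2 + error)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

aigcq_loadings = [0.828, 0.762, 0.813, 0.773]        # AIGCQ1-AIGCQ4 from Table 10
print(round(composite_reliability(aigcq_loadings), 3))       # 0.872 (table: 0.873; the
                                                             # gap comes from rounded loadings)
print(round(average_variance_extracted(aigcq_loadings), 3))  # 0.631, matching the table
```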
Table 11. Results of Discriminant Validity Analysis.
Construct | AIGCQ | PR | TR | PLR | EC | PI
AIGCQ | 0.795
PR | −0.447 | 0.778
TR | 0.550 | −0.616 | 0.788
PLR | 0.086 | −0.206 | 0.278 | 0.839
EC | 0.493 | −0.442 | 0.493 | 0.139 | 0.818
PI | 0.589 | −0.526 | 0.559 | 0.199 | 0.543 | 0.745
Note: Diagonal values are the square roots of each construct’s AVE; off-diagonal values are inter-construct correlations.
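Given the values already reported, the Fornell–Larcker criterion behind Table 11 can be checked mechanically: the square root of each construct’s AVE must exceed that construct’s correlations with every other construct. A small verification sketch (Python; values copied from Tables 10 and 11, nothing beyond them assumed):

```python
# Mechanical check of the Fornell-Larcker criterion using the values in
# Tables 10 and 11 (no raw data required).
import numpy as np
import pandas as pd

names = ["AIGCQ", "PR", "TR", "PLR", "EC", "PI"]
corr = pd.DataFrame(
    [[ 1.000, -0.447,  0.550,  0.086,  0.493,  0.589],
     [-0.447,  1.000, -0.616, -0.206, -0.442, -0.526],
     [ 0.550, -0.616,  1.000,  0.278,  0.493,  0.559],
     [ 0.086, -0.206,  0.278,  1.000,  0.139,  0.199],
     [ 0.493, -0.442,  0.493,  0.139,  1.000,  0.543],
     [ 0.589, -0.526,  0.559,  0.199,  0.543,  1.000]],
    index=names, columns=names)                       # off-diagonals from Table 11
ave = pd.Series([0.631, 0.606, 0.621, 0.703, 0.669, 0.555], index=names)  # Table 10

off_diag = corr.abs().where(~np.eye(len(names), dtype=bool))  # mask the diagonal
holds = np.sqrt(ave) > off_diag.max()                 # sqrt(AVE) vs. largest correlation
print(holds.all())                                    # True -> discriminant validity holds
```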
Table 12. HTMT (Heterotrait-Monotrait Ratio).
Construct | AIGCQ | PR | TR | PLR | EC | PI
AIGCQ | –
PR | 0.444
TR | 0.546 | 0.615
PLR | 0.084 | 0.204 | 0.271
EC | 0.496 | 0.449 | 0.489 | 0.137
PI | 0.594 | 0.529 | 0.562 | 0.200 | 0.548
Table 13. Model Fit Indices for Structural Equation Modeling.
Fit Index | Criterion | Actual Value | Fit Result
CMIN/DF | <3 | 1.888 | Excellent
GFI | >0.80 | 0.958 | Excellent
AGFI | >0.80 | 0.941 | Excellent
RMSEA | <0.08 | 0.042 | Excellent
NFI | >0.9 | 0.957 | Excellent
IFI | >0.9 | 0.979 | Excellent
TLI | >0.9 | 0.974 | Excellent
CFI | >0.9 | 0.979 | Excellent
PNFI | >0.5 | 0.781 | Excellent
PCFI | >0.5 | 0.800 | Excellent
Table 14. Path Coefficients and Hypothesis Testing Results.
Path | Path Coefficient | S.E. | C.R. | p
PR <--- AIGCQ | −0.448 | 0.054 | −8.718 | ***
TR <--- AIGCQ | 0.343 | 0.049 | 7.023 | ***
TR <--- PR | −0.463 | 0.049 | −8.999 | ***
PI <--- AIGCQ | 0.369 | 0.051 | 6.462 | ***
PI <--- PR | −0.227 | 0.051 | −3.797 | ***
PI <--- TR | 0.215 | 0.057 | 3.338 | ***
Note: *** indicates significance at the 0.001 level.
Table 15. Results of Mediation Effect Testing.
Parameter | Estimate | SE | Lower | Upper | p
AIGCQ → PR → PI (Indirect Effect) | 0.101 | 0.036 | 0.034 | 0.178 | 0.005
AIGCQ → TR → PI (Indirect Effect) | 0.074 | 0.029 | 0.027 | 0.143 | 0.003
AIGCQ → PR → TR → PI (Indirect Effect) | 0.045 | 0.019 | 0.018 | 0.093 | 0.003
AIGCQ → PI (Direct Effect) | 0.369 | 0.068 | 0.238 | 0.505 | 0.000
AIGCQ → PI (Total Effect) | 0.589 | 0.051 | 0.482 | 0.688 | 0.000
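The Lower and Upper bounds in Table 15 are consistent with the bootstrap confidence intervals commonly used for mediation testing in structural models. Purely to illustrate the resampling logic (this is not the authors’ SEM procedure), a percentile bootstrap of one indirect effect such as AIGCQ → TR → PI could look like the sketch below; `df` and its column names are hypothetical.

```python
# Simplified percentile-bootstrap illustration of one indirect effect
# (AIGCQ -> TR -> PI). Uses two OLS regressions instead of the full structural
# model, so it only approximates the logic behind Table 15.
# Assumption: `df` is a DataFrame with columns "AIGCQ", "TR", "PI".
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_indirect(df: pd.DataFrame, n_boot: int = 5000, seed: int = 1):
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))       # resample rows with replacement
        s = df.iloc[idx]
        a = sm.OLS(s["TR"], sm.add_constant(s["AIGCQ"])).fit().params["AIGCQ"]
        b = sm.OLS(s["PI"], sm.add_constant(s[["AIGCQ", "TR"]])).fit().params["TR"]
        estimates.append(a * b)                            # indirect effect = a * b
    lower, upper = np.percentile(estimates, [2.5, 97.5])
    return lower, upper                                    # CI excluding 0 -> significant
```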
Table 16. Testing Results of the Moderating Effect of PLR.
Predictor | B | SE | t | p
(Constant) | 2.668 | 0.236 | 11.318 | 0.000
Gender | 0.112 | 0.068 | 1.653 | 0.099
Age | 0.018 | 0.031 | 0.586 | 0.558
Highest Educational Attainment | −0.114 | 0.049 | −2.334 | 0.020
Occupation | −0.038 | 0.032 | −1.180 | 0.239
Shopping Frequency | −0.025 | 0.035 | −0.717 | 0.474
AIGCQ | 0.280 | 0.040 | 7.071 | 0.000
PR | −0.202 | 0.041 | −4.955 | 0.000
TR | 0.198 | 0.044 | 4.483 | 0.000
PLR | 0.131 | 0.033 | 4.013 | 0.000
PLR × PR | −0.089 | 0.033 | −2.698 | 0.007
PLR × TR | 0.164 | 0.033 | 4.946 | 0.000
R² | 0.429
F | 33.806
Dependent Variable: PI. Note: Variables were mean-centered.
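Tables 16 and 17 report moderated regressions in which the predictors are mean-centered before the interaction terms are formed, as noted under each table. A minimal sketch of this setup follows (illustrative only; `df` and the control-variable column names such as "Education" and "ShoppingFreq" are assumptions, not taken from the authors’ files).

```python
# Minimal sketch of the mean-centered moderation regression behind Table 16.
# Assumptions: `df` holds the averaged construct scores and the controls;
# the control-variable column names are placeholders.
import statsmodels.formula.api as smf

def fit_plr_moderation(df):
    d = df.copy()
    for col in ["AIGCQ", "PR", "TR", "PLR"]:
        d[col + "_c"] = d[col] - d[col].mean()            # mean-centering
    d["PLRxPR"] = d["PLR_c"] * d["PR_c"]                  # PLR x PR interaction
    d["PLRxTR"] = d["PLR_c"] * d["TR_c"]                  # PLR x TR interaction
    model = smf.ols(
        "PI ~ Gender + Age + Education + Occupation + ShoppingFreq"
        " + AIGCQ_c + PR_c + TR_c + PLR_c + PLRxPR + PLRxTR",
        data=d,
    ).fit()
    return model   # model.summary() reports B, SE, t, p, and R-squared as in Table 16

# The EC moderation in Table 17 follows the same pattern with EC in place of PLR.
```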
Table 17. Testing Results of the Moderating Effect of EC.
Predictor | B | SE | t | p
(Constant) | 2.873 | 0.244 | 11.767 | 0.000
Gender | 0.064 | 0.068 | 0.941 | 0.347
Age | 0.015 | 0.031 | 0.478 | 0.633
Highest Educational Attainment | −0.076 | 0.049 | −1.537 | 0.125
Occupation | −0.036 | 0.032 | −1.132 | 0.258
Shopping Frequency | −0.013 | 0.035 | −0.379 | 0.705
AIGCQ | 0.237 | 0.042 | 5.579 | 0.000
PR | −0.137 | 0.042 | −3.245 | 0.001
TR | 0.202 | 0.045 | 4.448 | 0.000
EC | 0.203 | 0.039 | 5.247 | 0.000
EC × PR | −0.126 | 0.037 | −3.449 | 0.001
EC × TR | −0.162 | 0.038 | −4.222 | 0.000
R² | 0.423
F | 32.956
Dependent Variable: PI. Note: Variables were mean-centered.