Article

How Do Ethical Factors Affect User Trust and Adoption Intentions of AI-Generated Content Tools? Evidence from a Risk-Trust Perspective

1 China-Korea International Institute of Visual Arts Research, Qingdao University of Science and Technology, Qingdao 266101, China
2 Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea
3 Culture Design Lab, Graduate School of Techno Design, Kookmin University, Seoul 02707, Republic of Korea
4 College of Communication, Qingdao University of Science and Technology, Qingdao 266101, China
* Authors to whom correspondence should be addressed.
Systems 2025, 13(6), 461; https://doi.org/10.3390/systems13060461
Submission received: 14 April 2025 / Revised: 9 June 2025 / Accepted: 9 June 2025 / Published: 11 June 2025

Abstract

With the widespread application of AI-generated content (AIGC) tools in creative domains, users have become increasingly concerned about the ethical issues they raise, which may influence their adoption decisions. To explore how ethical perceptions affect user behavior, this study constructs an ethical perception model based on the trust–risk theoretical framework, focusing on its impact on users’ adoption intention (ADI). Through a systematic literature review and expert interviews, eight core ethical dimensions were identified: Misinformation (MIS), Accountability (ACC), Algorithmic Bias (ALB), Creativity Ethics (CRE), Privacy (PRI), Job Displacement (JOD), Ethical Transparency (ETR), and Control over AI (CON). Based on 582 valid responses, structural equation modeling (SEM) was conducted to empirically test the proposed paths. The results show that six factors significantly and positively influence perceived risk (PR): JOD (β = 0.216), MIS (β = 0.161), ETR (β = 0.150), ACC (β = 0.137), CON (β = 0.136), and PRI (β = 0.131), while the effects of ALB and CRE were not significant. Regarding trust in AI (TR), six factors significantly negatively influence it: CRE (β = −0.195), PRI (β = −0.145), ETR (β = −0.148), CON (β = −0.133), ALB (β = −0.113), and ACC (β = −0.098), while MIS and JOD were not significant. In addition, PR has a significant negative effect on TR (β = −0.234), which further impacts ADI. Specifically, PR has a significant negative effect on ADI (β = −0.259), while TR has a significant positive effect (β = 0.187). This study not only expands the applicability of the trust–risk framework in the context of AIGC but also proposes an ethical perception model for user adoption research, offering empirical evidence and practical guidance for platform design, governance mechanisms, and trust-building strategies.

1. Introduction

In recent years, AI-Generated Content (AIGC) tools have rapidly emerged worldwide, becoming a driving force in the paradigm shift of digital content production. From natural language generation models such as ChatGPT-4o and DeepSeek-V2 to image generation tools like Midjourney v6 and DALL·E 3, and further to video generation platforms such as Runway, AIGC is gradually permeating various domains, including media communication, creative design, film production, education, and advertising [1,2,3,4]. These tools not only significantly enhance the efficiency of professional creators but also empower ordinary users with unprecedented creative freedom and expressive capabilities [5]. As AIGC technologies continue to evolve and spread, their implications for social benefits, transformations in production modes, and the restructuring of value systems have attracted increasing attention from both academia and industry [6]. Particularly amid the growing trend of “technological decentralization,” understanding the attitudes and behavioral mechanisms of general users toward AIGC tools has become a pressing issue in the development of artificial intelligence technologies [7].
However, the rapid proliferation of AIGC technologies has given rise to a growing array of ethical concerns. As these tools become increasingly embedded in everyday life, they have triggered significant debates surrounding their ethical implications. For example, the authenticity of AI-generated content is often unverifiable, making it susceptible to misinformation dissemination [8]. Algorithmic biases in model training may threaten social equity [9], while unresolved issues around data privacy and intellectual property rights remain prevalent in content generation contexts [10]. Furthermore, when the generated content involves infringement, discrimination, or deception, the mechanisms for assigning accountability are often ambiguous [11]. Concerns over job displacement, opaque “black-box” algorithms, and users’ limited control over outputs have also emerged as pressing ethical challenges [12]. These concerns not only drive regulatory and policy discussions but are increasingly influencing individual users’ decisions to adopt or reject AIGC tools. As a result, a fundamental tension has emerged between the broad accessibility of such technologies and the uncertainty surrounding their ethical consequences.
Previous studies have indicated that user adoption of emerging technologies is shaped not only by perceived functionality or performance but also by psychological mechanisms such as trust (TR) and perceived risk (PR) [13,14]. Especially in interactions with intelligent and unpredictable AI systems, users must constantly weigh their trust in the system’s outputs against perceived risks [15]. Trust acts as a critical enabler of technology adoption, while perceived risk may lead to hesitation or outright avoidance. As a result, scholars have increasingly examined AIGC from a technology ethics perspective, exploring how ethical concerns influence user perceptions of risk and trust, which subsequently affect behavioral intentions. While classical models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) emphasize factors like perceived usefulness and ease of use [16], a growing body of literature has incorporated trust and risk as mediating variables. Recent approaches also emphasize ethical dimensions—such as privacy, transparency, and accountability—to better capture user behavior in complex sociotechnical environments [17].
Nevertheless, there remains a lack of systematic research on the interrelationships among ethical factors, psychological mechanisms, and adoption intention (ADI). On the one hand, the existing literature predominantly focuses on a limited range of ethical dimensions—such as data privacy and algorithmic transparency—without offering a comprehensive model of multidimensional ethical perception. On the other hand, the pathways through which ethical variables affect user decisions via perceived risk and trust have yet to be fully conceptualized and empirically validated. This gap is especially critical in the emerging context of AIGC, where the logic of content generation, control over information, and the boundaries of content ownership, accountability, and human-AI interaction are being fundamentally reshaped. The generative nature of AIGC tools introduces new ethical uncertainties—such as misinformation, authorship ambiguity, and loss of user control—that challenge traditional acceptance models grounded primarily in utility or usability. Therefore, there is a pressing need to construct a multidimensional system of ethical perception variables, integrate them with key psychological mechanisms such as perceived risk and trust, and examine their combined influence on user behavior through a comprehensive, empirically validated framework.
Accordingly, this study focuses on the following core research questions:
  • Do the ethical issues associated with AIGC tools significantly affect users’ adoption intentions?
  • Among various ethical factors, which specific dimensions exert a significant influence on user behavior?
  • Do these ethical factors indirectly affect adoption intentions through the pathways of perceived risk and trust?
To answer the above questions, this study proposes a structural equation model in which multidimensional ethical perception serves as the antecedent variable, PR and TR serve as mediating variables, and ADI serves as the outcome variable. A questionnaire survey and structural equation modeling (SEM) are used for empirical testing. The anticipated contributions of this study are threefold. First, at the theoretical level, it integrates a newly constructed multidimensional ethical perception framework—derived specifically for AIGC tools—into the trust–risk model. This goes beyond traditional technology acceptance models that primarily emphasize perceived usefulness and ease of use, and extends recent work that considers ethics by offering a structured, empirical model linking ethical concerns to user psychology and behavior. Second, in terms of variable design, it draws on a systematic literature review and expert interviews to develop eight distinct ethical dimensions (e.g., misinformation, accountability, creativity ethics), enriching current approaches that often rely on single-factor constructs such as privacy or transparency. Third, at the practical level, the findings are expected to provide theoretical grounding and actionable insights for the ethical design, user guidance, and governance strategies of AIGC platforms—particularly in helping developers and policymakers understand how ethical concerns shape adoption decisions in uncertain AI environments.

2. Literature Review

2.1. User Adoption of AI-Generated Content (AIGC) Tools

As a representative form of generative artificial intelligence, AI-Generated Content (AIGC) tools are profoundly reshaping how users create content and interact with technology. With the widespread adoption of tools such as ChatGPT, Midjourney, and Runway across multimodal domains—including text, image, and video—academic attention has gradually shifted from evaluating their technical capabilities to understanding user behavior. Existing studies have indicated that users interacting with highly automated and intelligent generative tools exhibit distinct perceptual characteristics compared to those using traditional digital technologies. From a behavioral process perspective, users typically move through several stages when engaging with AIGC tools, including information exposure, technology adoption, creative engagement, trust evaluation, and continued use. Some studies, grounded in the TAM and UTAUT, have explored how perceived usefulness, ease of use, and social influence affect users’ behavioral intentions [18,19]. Others have drawn from the perspectives of Human-Computer Interaction (HCI) and Creative Cognition, focusing on how AIGC tools enable personalized expression and creative empowerment [20]. Additional research from the domains of User Experience (UX) and emotional design has examined users’ sense of enjoyment, immersion, and interactive engagement during content generation [21]. Collectively, these studies suggest that AIGC tools are significantly transforming the interaction patterns between humans and content and are exerting a notable influence on users’ cognitive structures and behavioral pathways.
However, in terms of variable design and research dimensions, existing studies have primarily concentrated on technical features—such as system performance, interface functionality, and operational fluency—emphasizing the influence of technical performance on user attitudes and behavior [12]. In contrast, users’ perceptions of the ethical issues surrounding AIGC tools—such as the generation of misinformation, ambiguous accountability, algorithmic bias, data misuse, and disputes over content ownership—remain underexplored and lack systematic theoretical frameworks and empirical validation [10]. Ethical concerns are often mentioned as contextual background or negative cases, but are rarely treated as structural variables within user behavior models. Notably, the generative and opaque nature of AIGC tools inherently introduces ethical risks. Users’ subjective evaluations of these ethical dimensions may unconsciously influence their levels of trust and willingness to adopt the technology. Therefore, it is necessary to introduce a more structured system of ethical variables and integrate them into behavioral modeling frameworks—particularly in conjunction with PR and TR—to more comprehensively explain users’ AIGC adoption pathways [22].

2.2. Categorization of AI Ethical Issues and User Perception

The rapid development of artificial intelligence technologies has prompted widespread ethical reflection and institutional responses. Several international organizations have successively established AI ethical governance frameworks that systematically classify and standardize related issues. For example, the European Union’s Ethics Guidelines for Trustworthy AI outline seven key principles, including human agency, prevention of harm, fairness, explicability, privacy and data governance, transparency and accountability, and technical robustness [23]. The OECD’s Principles on Artificial Intelligence emphasize that AI should promote human well-being and possess explainability, accountability, robustness, and inclusiveness [24]. The IEEE’s Ethically Aligned Design seeks to establish cross-industry ethical standards by focusing on system safety, algorithmic bias, data privacy, and user control [25]. Although these frameworks differ in emphasis, they all commonly recognize that the development of AI is not only a process of technological evolution, but also a profound reshaping of social structures, value systems, and boundaries of responsibility. As AI technologies increasingly integrate into everyday life, AI-related ethical concerns have moved beyond abstract discussions at the institutional or policy level and have entered users’ experiences and behavioral judgments in concrete application scenarios such as AIGC [10]. Studies have found that users frequently encounter various ethical dilemmas when using AIGC tools, such as: “Is the generated content authentic and reliable?”, “Who should be accountable for the generated output?”, “Does the system exhibit bias or discrimination?”, “Is personal data being misused?”, “Is ownership of the content clearly defined?”, “Does the tool threaten job security?”, and “Is the system sufficiently transparent and controllable?” [9]. Although these issues have been addressed to some extent at the policy level, for ordinary users they often manifest through subjective perception, emotional response, and cognitive judgment, all of which significantly influence their acceptance and adoption of the technology.
It is important to note that users’ ethical concerns are highly multidimensional and context-dependent. For instance, some users are more concerned with the authenticity and explainability of the tool’s output, while others place greater emphasis on privacy protection during data processing and clarity of information ownership [26]. Professional creators may be particularly sensitive to the originality of generated content, copyright allocation, and platform accountability [27]. In other words, when using AIGC tools, users are not only assessing “can it be used?” or “is it efficient?”—they are also weighing “should it be used?” and “what consequences might result?”. Ethical perception has thus become a critical cognitive dimension in their decision-making process [28]. More fundamentally, AI ethical issues are not perceived by users as purely objective institutional constraints, but rather as subjectively constructed cognitive processes. Judgments regarding whether a system is transparent, fair, or controllable are often shaped by users’ prior experiences, information sources, platform communication mechanisms, and cultural background [29]. This form of highly subjective and differentiated ethical perception is closely tied to users’ trust formation and risk evaluation. Incorporating ethical perception into the explanatory framework of user behavior not only helps to expand existing technology acceptance theories but also offers valuable insights into the value orientations and psychological logic underlying user decisions [30]. To further understand how ethical perception affects behavioral choices, it is necessary to introduce mediating mechanisms. Among these, TR and PR—two psychological variables that have been extensively validated in recent user adoption research—provide theoretical support for explaining the pathway from ethical cognition to behavioral intention [31].

2.3. Perceived Risk (PR) and Trust (TR)

Perceived Risk (PR) and Trust (TR) are two critical psychological variables, particularly relevant for explaining user decision-making in contexts characterized by high complexity, uncertainty, and information asymmetry. Compared to traditional technology acceptance models, which focus primarily on perceived usefulness and ease of use, these two constructs emphasize users’ sense of psychological security and control during interaction, highlighting the subjective judgments formed under conditions of opacity and unpredictable outcomes.
Perceived risk is defined as the user’s anticipation of potential negative outcomes associated with using a technology or system. These risks may include financial loss, privacy breaches, reputational harm, psychological discomfort, or legal liability. In the context of AIGC tool usage, the complexity of algorithmic mechanisms and the opacity of system operations often lead to user concerns over misinformation, misleading outputs, copyright infringement, and data privacy violations, which may generate psychological resistance and avoidance behaviors [32]. In contrast, trust refers to users’ positive expectations regarding the ability, reliability, integrity, and benevolence of a technological system under uncertain conditions. Trust serves as a positive psychological resource that can enhance users’ sense of safety and willingness to adopt technology when control is limited [33]. When users believe that an AIGC tool operates based on rational mechanisms, produces reliable outputs, and is backed by a platform that demonstrates responsibility and safeguards their interests, they are more likely to establish trust and show favorable adoption intentions.
Prior research has demonstrated a close interrelationship between perceived risk and trust. On the one hand, heightened risk perception typically weakens user trust; on the other hand, the establishment of trust can alleviate uncertainty brought on by risk, thereby increasing user acceptance of the technology. Together, these two variables serve as a critical “psychological filtering mechanism” when users confront ethical concerns and technological uncertainties [15]. In other words, when users perceive ethical risks such as unclear accountability, data misuse, or loss of content control, their risk sensitivity may increase, leading to a decline in trust and, subsequently, a reduction in adoption intention. From a path mechanism perspective, perceived risk and trust are not only integral to users’ cognitive structures but also function as mediating variables through which ethical perceptions influence adoption intention. Numerous empirical studies have validated this mediating mechanism across various domains, including artificial intelligence, financial technology, online healthcare, and autonomous driving [34]. For instance, Kumar’s research on intelligent voice assistants found that user concerns over privacy breaches and algorithmic opacity influenced their trust via the risk pathway, thereby altering usage behavior [35]. Similarly, Bawack’s study revealed that users’ trust in AI-based recommendation systems significantly predicted their tendency to accept or reject recommended content [36]. An increasing number of empirical studies have directly examined how ethical perceptions influence users’ trust and perceived risk in AI systems. Carlotta et al. discussed that users’ concerns about algorithmic fairness and responsibility attribution negatively affect their trust in AI-based recruitment platforms [37]. Abid, in the context of intelligent medical systems, pointed out that concerns regarding data misuse and privacy violations increase perceived risk and reduce trust in the system [38]. Floridi et al. emphasized that when AI systems lack ethical consistency—such as exhibiting manipulative tendencies or value conflicts—user trust declines significantly, especially in highly opaque decision-making scenarios [39]. Furthermore, Al-kfairy et al. [40] noted that users widely express concerns about the authenticity of generated content, the boundaries of platform responsibility, system transparency, and the degree of user control. These perceptions not only constitute sources of perceived risk but also directly trigger behavioral resistance [40]. Law stressed that ethical dimensions such as transparency, fairness, accountability, and responsibility attribution are not only normative principles but also essential psychological mechanisms for establishing user trust and adoption intention [41].

3. Research Methods

3.1. Overview of Research Design Process

The overall research process of this study is illustrated in Figure 1, which consists of two main phases:
Phase 1: Variable Exploration.
First, the study adopts the PRISMA procedure to conduct a systematic literature review. Relevant research was retrieved from leading international journals and conference proceedings, focusing on themes such as AI ethics, user perception, and generative content tools. This process yielded a classification of commonly discussed ethical issues associated with AIGC usage.
Second, semi-structured interviews were conducted with 10 experts and frontline practitioners from fields including artificial intelligence, ethics, communication studies, and human–computer interaction. The interviews focused on the ethical concerns perceived by general users when using AIGC tools. Thematic coding was used to analyze the interview data. By integrating findings from both the literature review and the expert interviews, a set of representative ethical perception dimensions was summarized, which served as the theoretical foundation for questionnaire development and hypothesis formulation.
Phase 2: Model Construction and Empirical Testing.
Based on the established variable system, a theoretical model was constructed with the core pathway: Ethical Perception → PR/TR → Adoption Intention. The questionnaire was distributed using a convenience sampling approach through online platforms and academic networks. This method was deemed appropriate as the research aims to capture a broad spectrum of ordinary users’ ethical perceptions regarding AIGC tools, rather than to test platform-specific or demographic-specific effects. Similar sampling strategies have been widely adopted in user-centered AI studies that prioritize perceived psychological mechanisms over strict population generalization. Moreover, the study emphasizes cognitive pathways rather than demographic prediction, which supports the validity of this approach. A questionnaire was then developed in accordance with this model and distributed for large-scale data collection. SEM was employed to analyze the collected data, verify the hypothesized paths, and systematically explore the mechanisms by which different ethical perception dimensions influence user behavior. Particular attention was given to the mediating effects of perceived risk and trust within this pathway.

3.2. Systematic Literature Review (SLR)

To ensure that the design of ethical variables is grounded in robust theoretical foundations and reflects real-world relevance, this study employs an SLR to synthesize existing research on AI ethics and user perception. This method emphasizes procedural rigor and replicability, making it particularly suitable for refining theoretical constructs and integrating knowledge in areas where key concepts and variable structures remain underdefined [42].
This study follows the four-stage PRISMA process: Identification, Screening, Eligibility, and Inclusion. In the identification stage, literature was retrieved from three major databases: Web of Science, Scopus, and Google Scholar. The search query combined the following keywords: (“AI” OR “Artificial Intelligence” OR “AIGC”) AND (“Ethics” OR “Ethical Issues” OR “AI Ethics”) AND (“User Perception” OR “User Attitude” OR “Technology Acceptance”). The search was limited to English-language publications from 2015 to 2024.
During the screening stage, duplicate records, conference abstracts, patents, studies from unrelated disciplines, and articles lacking methodological descriptions were excluded. In the eligibility phase, full-text reviews were conducted to eliminate studies that did not adopt a user-centered perspective, lacked clear ethical variables, or had weak relevance to AIGC tools. The final selection included core studies that met the inclusion criteria and were used for subsequent thematic analysis [43]. In terms of analytical strategy, this study applied content analysis and thematic coding, employing open coding, axial coding, and selective coding to systematically extract and classify ethical concerns from the included literature. The analysis focused on users’ expressed ethical concerns, psychological judgments, and behavioral responses when engaging with AI technologies—particularly AIGC tools—with the goal of identifying high-frequency and representative ethical perception themes.
It is important to note that due to the contextual and polysemous nature of AI ethical issues, literature alone may not fully capture the psychological mechanisms and value trade-offs users experience in real-world scenarios. Therefore, the results of the SLR serve as a theoretical basis for preliminary variable identification, which will be further supplemented and validated through expert interviews in the next stage of the research.

3.3. Expert Interview Design and Implementation

To supplement the findings of the systematic literature review and enhance the contextual relevance and explanatory power of the variable framework, this study incorporated expert interviews. Building on the literature synthesis, expert input was used to explore and cross-validate the ethical issues encountered by users during their engagement with AIGC tools. Expert interviews not only help clarify ambiguous concepts found in prior research but also enable the identification of ethical concerns that are widely experienced in practice yet underrepresented in academic literature. This approach ensures that the constructed variables are effectively aligned with both theoretical foundations and practical realities.
A semi-structured interview method was adopted. A total of 10 experts and industry practitioners were invited from fields including AI ethics, digital content production, human–computer interaction, information systems, and philosophy of technology. The interviewees comprised 5 senior scholars from universities and research institutions with expertise in AI ethics and user behavior; 3 product leads from content generation platforms or AI startups, familiar with the development logic of AIGC tools and user feedback; and 2 media professionals and creators, representing end-user perspectives and hands-on experience (Table 1). All interviewees held at least a master’s degree and demonstrated a solid understanding of AIGC technological trends and ethical challenges.
The interview protocol was designed based on the thematic categories identified in the SLR and contextualized using real-world application scenarios. It focused on four key dimensions: What are the most common concerns or doubts users express when using AIGC tools? Do these concerns affect their trust in the tools or their willingness to continue using them? In current AIGC-related ethical discussions, which issues are considered most critical yet insufficiently addressed by scholars? Are there user concerns frequently observed in practice but still underexplored in academic research?
All interviews were conducted either face-to-face or via video conferencing platforms. Each session lasted approximately 30–45 min. With participants’ consent, the interviews were audio-recorded and transcribed for analysis. The transcripts were processed using thematic analysis, following a three-phase coding procedure: open coding, axial coding, and selective coding. Core concepts were extracted and organized into thematic labels, which were then compared with the preliminary variable dimensions derived from the SLR phase. To enhance the reliability of the thematic coding process, all interview transcripts were independently reviewed by two researchers. Discrepancies in code application or interpretation were discussed and resolved through collaborative sessions. Given the manageable sample size (N = 10), this dual-coding and consensus-based approach ensured both analytical rigor and contextual sensitivity. Inter-coder consistency was assessed informally through overlapping review and alignment of category structures (Figure 2).

4. Results

4.1. Phase I: Identification and Construction of Ethical Variables

4.1.1. Findings from Systematic Literature Review

An initial search yielded 783 records. Following the screening process, duplicates and irrelevant entries—such as conference abstracts, technical patents, studies from unrelated disciplines, and articles lacking methodological clarity—were removed, resulting in 426 retained articles. A full-text review was then conducted to exclude studies that did not adopt a user-centered perspective, lacked clearly defined ethical dimensions, or showed weak relevance to AIGC tools. Ultimately, 82 core studies met the inclusion criteria and were used as the basis for keyword analysis and variable identification (Figure 3).
The research team conducted keyword analysis and thematic clustering on the 82 selected articles. These studies span multiple interdisciplinary fields, including AI ethics, user behavior, technology adoption models, risk perception of generative content, accountability, and privacy protection. The literature, primarily published between 2015 and 2024, provides a systematic overview of the academic discourse and theoretical evolution concerning ethical issues in AIGC tools over the past decade. Each article was reviewed using open coding and thematic analysis to identify recurring semantic patterns. The results revealed a set of high-frequency keywords: “privacy” (54 articles), “bias” (47), “accountability” (41), “misinformation” (39), “job displacement” (31), “copyright / intellectual property” (29), “transparency” (28), and “control / autonomy” (22). Although terminology varied across studies, the core ethical concerns demonstrated a high degree of consistency. Based on semantic integration and structural categorization, the research team ultimately distilled eight representative dimensions of users’ perceived ethical concerns: Misinformation, Accountability, Algorithmic Bias, Creativity Ethics (including copyright and originality), Privacy, Job Displacement, Ethical Transparency, and Control over AI (Table 2).
These dimensions not only comprehensively capture the typical ethical issues encountered by users when using AIGC tools but also offer a solid foundation for measurement, providing robust theoretical support for subsequent variable design.

4.1.2. Findings from Expert Interviews

To further supplement and validate the ethical dimensions identified through the systematic literature review, and to ensure the contextual relevance and semantic accuracy of the variable framework, this study conducted semi-structured interviews. The interviews focused on users’ perceived ethical risks during their use of AIGC tools and the underlying mechanisms of such perceptions. A combination of open-ended questions and thematic prompts was adopted to balance free expression of expert experience with structured data aggregation.
All interviews were audio-recorded with informed consent and fully transcribed, resulting in approximately 38,000 words of textual data. The research team used NVivo 12 software for qualitative coding. The analysis began with open coding, generating 62 initial labels. These were then grouped through semantic clustering, frequency analysis, and conceptual integration into 13 intermediate-level themes, which were further synthesized into 8 high-level ethical dimensions. The overall coding structure aligned closely with the thematic findings of the literature review, while also revealing some context-specific refinements and variations in terminology.
The results show that experts generally affirmed the core ethical concerns identified in the literature, particularly with regard to privacy, accountability, content authenticity, creative ownership, and technological controllability. For example, E01 stated: “Many AI tools now default to collecting user data, but users often don’t know whether their inputs are being stored or used for training.” This directly reflects real-world concerns associated with the Privacy dimension.
At the same time, some experts offered suggestions for refining existing terminology. For instance, E03 noted: “Users don’t say ‘accountability’—that’s an academic term. They ask, ‘If something goes wrong, who’s responsible? Me or the platform?’” In response, this study considers both academic accuracy and user language alignment when naming variables. Additionally, E06 pointed out: “Bias doesn’t only come from the model. The way content is distributed—who it reaches and who it doesn’t—also reinforces inequality.” This observation led the research team to include user perceptions of algorithmic distribution mechanisms in the Algorithmic Bias dimension.
Regarding semantic refinement, the term transparency was frequently elaborated upon during interviews. On the one hand, experts emphasized whether platforms explicitly inform users if their data will be used for model training; on the other hand, they stressed whether users can comprehend how AI-generated content is created and the associated risks. Based on these insights, the research team renamed this dimension Ethical Transparency, distinguishing it from purely technical transparency.
The keyword frequency analysis revealed that the most frequently mentioned themes were privacy, content authenticity, trust in tools, generation boundaries, copyright ownership, and responsibility attribution. These results highlight the multiplicity and practical orientation of users’ ethical perceptions. The clustering of related keywords and their corresponding ethical dimensions is summarized in Table 3.
In terms of variable structure, the experts generally agreed that the eight dimensions demonstrated strong discriminant validity and comprehensive coverage, and therefore did not recommend adding or removing any variables. However, based on expert feedback, the research team made the following three adjustments to operational definitions:
  • “Copyright/Ownership” was revised to “Creativity Ethics” to more comprehensively capture issues of creative subjectivity, value judgment, and ethical attribution.
  • “Transparency” was redefined as “Ethical Transparency” to clarify that it refers to the platform’s clarity in disclosing ethical risks, its methods of handling them, and the boundaries of system use.
  • Within the “Control over AI” dimension, two sub-elements—user intervention capability and output predictability—were added to better reflect users’ actual experiences of control.
The interview findings further confirmed that many of the ethical concerns raised in the literature—such as content authenticity, privacy protection, and algorithmic bias—are indeed perceived by users and recurrently expressed during real-world interactions with AIGC tools. Moreover, experts highlighted that some ethical issues exhibit more complex interrelations in practice: for instance, accountability is often entangled with system transparency, and the sense of job displacement is frequently intensified by ambiguity over creative ownership. In addition, several ethical concerns that users frequently raise in practice have not yet been systematically addressed in the academic literature, revealing a gap between scholarly discourse and practical experience. In summary, the expert interviews not only validated the eight ethical perception dimensions derived from the systematic literature review but also led to critical refinements in terminology and semantic boundaries.

4.2. Hypotheses Development

4.2.1. Influence of Ethical Perception on Perceived Risk (PR)

The following section explores the potential mechanisms through which each of the eight ethical perception dimensions may influence PR and formulates corresponding hypotheses.
MIS refers to users’ perception that AIGC-generated content may contain fabricated, misleading, or factually incorrect information [44]. When users doubt the truthfulness of the system’s output, they are more likely to question its reliability, which increases their overall sense of uncertainty during use. Previous studies have shown that generative AI often lacks fact-checking mechanisms, which can trigger cognitive risks and ethical concerns [45].
H1. 
Perceived MIS has a significant positive effect on users’ PR.
ACC perception reflects users’ concern over ambiguous responsibility attribution when system errors or misuse occur. When users are unable to clearly determine whether responsibility lies with the platform, developers, or themselves, they tend to anticipate negative consequences, which amplifies their sense of risk [46].
H2. 
Perceived ACC issues have a significant positive effect on users’ PR.
ALB perception refers to users’ awareness that the system’s outputs may contain discrimination, stereotypes, or value-laden biases [47]. Such unfair generative patterns are often perceived as potential threats, undermining the perceived fairness and safety of the system, thereby heightening risk perception [48].
H3. 
Perceived ALB has a significant positive effect on users’ PR.
CRE perception mainly concerns issues such as content originality, style imitation, and copyright attribution. When users believe that AIGC tools may infringe upon others’ works or find it difficult to determine ownership of creative outputs, they often experience ethical anxiety, which raises their perceived risk of using the tool.
H4. 
Perceived CRE concerns have a significant positive effect on users’ PR.
PRI perception reflects users’ concerns about how their personal data is collected, stored, and used during interactions with AIGC tools [49]. When the platform’s data handling mechanisms are vague or opaque, users tend to be more alert to potential risks of information leakage, misuse, or surveillance, thereby significantly increasing their perceived risk [50].
H5. 
Perceived PRI concerns have a significant positive effect on users’ PR.
JOD perception reflects users’ concern that AI may replace human creativity and threaten employment opportunities [51]. When users perceive AIGC as a threat to the value of their skills or future career prospects, they are likely to adopt a more cautious usage attitude, leading to elevated levels of perceived risk [52].
H6. 
Perceived JOD has a significant positive effect on users’ PR.
ETR perception is low when users are unable to understand the system’s training processes, generative logic, or ethical boundaries, often resulting in the perception of the system as a “black box” [53]. A lack of transparency reduces users’ assessment of predictability and controllability, heightening psychological unease and risk anticipation [54].
H7. 
Lower perceived ETR has a significant positive effect on users’ PR.
CON is perceived as low when users feel they cannot manage the type, style, or boundaries of AI-generated content [55]. When users recognize that the outcomes of using AIGC tools are difficult to predict or influence, they may enter a heightened state of vigilance, which amplifies their perception of potential risks [56].
H8. 
Lower perceived CON has a significant positive effect on users’ PR.

4.2.2. Influence of Ethical Perception on Trust (TR)

This section analyzes the potential influence pathways of the eight ethical perception dimensions on TR and formulates corresponding hypotheses.
MIS perception undermines users’ evaluation of content authenticity and information reliability, thereby disrupting the perceived stability and trustworthiness of the system [33]. The authenticity of generated content serves as a fundamental basis for user TR, and frequent exposure to MIS directly weakens that foundation [45].
H9. 
Perceived MIS has a significant negative effect on users’ TR.
ACC perception creates ambiguity when users encounter errors or misuse, making it difficult for them to determine who should be held responsible. Users may perceive AIGC tools as lacking clear governance mechanisms and ACC structures, thereby diminishing their TR in the system’s underlying values and management capacity [57].
H10. 
Perceived ACC issues have a significant negative effect on users’ TR.
ALB perception reflects users’ sensitivity to the fairness and neutrality of system outputs [58]. If users detect discrimination, stereotypes, or cultural biases in the generated content, they are likely to view the system as violating principles of fairness, thereby lowering their evaluation of the platform’s moral trustworthiness [59].
H11. 
Perceived ALB has a significant negative effect on users’ TR.
CRE perception involves user doubts about content originality, copyright attribution, and the legitimacy of training data. When users believe AIGC tools may plagiarize or infringe on others’ creative work, it directly affects their judgment of the system’s moral legitimacy, thereby eroding TR [50].
H12. 
Perceived CRE concerns have a significant negative effect on users’ TR.
PRI perception relates to users’ confidence in the protection of their personal data. If the system lacks clear authorization mechanisms or demonstrates potential misuse of data, users’ perception of platform security is significantly reduced, making it difficult for TR to be established [49].
H13. 
Perceived PRI concerns have a significant negative effect on users’ TR.
JOD perception, although commonly studied within the risk pathway, may also influence trust. If users feel that AIGC tools are diminishing their creative roles or professional value, it may provoke skepticism toward the system’s ethical stance, thereby weakening their TR [52].
H14. 
Perceived JOD has a significant negative effect on users’ TR.
Lower perceived ETR leads to users being unable to comprehend the system’s training mechanisms, review standards, or value orientation. This increases concerns about unclear behavioral boundaries and ambiguous principles, thereby significantly diminishing overall TR in the system [60].
H15. 
Lower perceived ETR has a significant negative effect on users’ TR.
Low perceived CON indicates that users feel they lack the ability to intervene in or adjust AI outputs. The resulting unpredictability of system behavior undermines users’ confidence and damages the psychological foundation of TR [56].
H16. 
Lower perceived CON has a significant negative effect on users’ TR.

4.2.3. The Effects of Mediating Variables on Adoption Intention (ADI)

Within the Trust–Risk Framework, PR and TR are regarded as two key psychological variables influencing users’ technology adoption decisions. PR generally inhibits behavioral intention, while TR enhances users’ acceptance of a system. Users form both risk evaluations and trust judgments based on their ethical perceptions, which together shape their ADI. When users believe that AIGC systems may cause negative consequences—such as erroneous outputs, data breaches, content infringement, or loss of system control—their overall sense of security regarding the system is significantly diminished. This uncertainty not only directly reduces their willingness to use the technology but may also weaken their positive expectations about its value and capabilities [61].
H17. 
PR has a significant negative effect on users’ TR.
A high level of PR also directly suppresses user behavior. Especially in scenarios involving significant ethical, legal, or social consequences, users are more likely to reject the technology based on risk-avoidance psychology [62]. This effect is particularly evident when AI outputs are unpredictable or responsibility boundaries are unclear; under such conditions, risk-driven expectations significantly reduce users’ motivation to adopt the technology [63].
H18. 
PR has a significant negative effect on users’ ADI.
In contrast, TR is widely regarded as a key positive driver of technology adoption. When users believe that AIGC tools are reliable, fair, controllable, and ethically aligned, their behavioral intention is significantly enhanced. A high level of TR not only reduces cognitive uncertainty but also mitigates the negative impact of PR [61].
H19. 
TR has a significant positive effect on users’ ADI.

4.3. Research Model

Based on the aforementioned variable definitions and theoretical foundations, this study constructs a multi-path research model grounded in the trust–risk framework (Figure 4). The model is designed to systematically explore how users’ perceptions of ethical issues in AIGC tools influence their adoption intention through key psychological mechanisms. In this model, the eight ethical perception dimensions are treated as independent variables, each representing a specific type of ethical concern perceived by users during their interaction with AIGC tools. These variables exert their effects through two critical mediating variables—PR and TR—which jointly influence the dependent variable, ADI, that is, whether users are willing to continue using or promoting AIGC tools. Accordingly, the model hypothesizes that ethical perceptions do not influence user behavior directly, but rather have indirect effects via the risk and trust pathways. Moreover, the model accounts for the heterogeneity of ethical dimensions and the diversity of psychological pathways. For example, ACC may primarily weaken user TR, while PRI is more likely to increase PR. CON and ETR may simultaneously influence both PR and TR, reflecting the complex interrelations among ethical concerns within users’ cognitive structures.
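Expressed algebraically, the structural portion of the model can be summarized as follows. This is a simplified linear sketch that omits the measurement equations linking each latent construct to its questionnaire items, and the coefficient symbols are illustrative labels rather than notation used elsewhere in this paper.

```latex
% Structural equations of the trust-risk model (simplified sketch)
\begin{aligned}
\mathrm{PR}  &= \textstyle\sum_{k=1}^{8} \beta_{k} X_{k} + \varepsilon_{\mathrm{PR}},\\
\mathrm{TR}  &= \textstyle\sum_{k=1}^{8} \gamma_{k} X_{k} + \delta\,\mathrm{PR} + \varepsilon_{\mathrm{TR}},\\
\mathrm{ADI} &= \theta_{1}\,\mathrm{PR} + \theta_{2}\,\mathrm{TR} + \varepsilon_{\mathrm{ADI}},
\end{aligned}
```

where X_k denotes the eight ethical perception dimensions (MIS, ACC, ALB, CRE, PRI, JOD, ETR, CON) and the ε terms are structural disturbances.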

4.4. Phase II: Model Testing and Empirical Analysis

To further validate the proposed research model and its hypothesized pathways, this study conducted a second-phase empirical analysis based on the findings from the systematic literature review and expert interviews. A structured questionnaire was developed, grounded in the Trust–Risk Framework, covering eight user ethical perception dimensions (MIS, ACC, ALB, CRE, PRI, JOD, ETR, and CON), two key mediating variables (PR and TR), and one outcome variable (ADI). Each variable was measured using three to four declarative items, which were adapted from established instruments in authoritative literature and modified to fit the AIGC usage context. This ensured that the item phrasing was aligned with user cognition and actual experience. All items were rated on a five-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) to capture participants’ subjective evaluations across all variable dimensions. After the initial draft of the questionnaire was completed, the research team invited three academic experts and two AIGC platform practitioners to conduct a pretest evaluation. Feedback was collected on content clarity, semantic accuracy, and measurement logic consistency (See Appendix A for details). Based on their suggestions, the questionnaire was revised and finalized. Data collection and subsequent empirical testing were then conducted to evaluate the structural validity and path relationships of the proposed model.

4.4.1. Participants

This study employed a non-probability convenience sampling method, distributing questionnaires via the online survey platform Wenjuanxing to a diverse group of respondents, including university students, online content creators, freelancers, and technology professionals (Table 4). To expand the sample coverage, the survey was also promoted through social media channels such as WeChat groups, QQ communities, and creator forums. A total of 650 questionnaires were distributed, of which 612 were returned. After data cleaning and the removal of incomplete responses, 582 valid samples were retained for analysis. Overall, the sample demonstrated adequate representation across different genders, age groups, occupational categories, and AIGC usage frequencies, providing a robust foundation for subsequent SEM.

4.4.2. Quantitative Analysis Results

Reliability analysis was conducted to examine the consistency and stability of the questionnaire responses, which reflect the overall quality and accuracy of the data. Reliability includes both internal and external reliability. In this context, internal reliability evaluates whether items within the same construct consistently measure the same concept. Higher internal consistency indicates greater reliability.
As shown in Table 5, the Cronbach’s alpha coefficients for all 11 dimensions of the questionnaire exceed the commonly accepted threshold of 0.70, indicating a high level of reliability for the measurement instrument used in this study [64].
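To make the reliability criterion concrete, the short sketch below computes Cronbach’s alpha for a single construct from item-level responses. It is an illustrative example only: the item names (MIS1 to MIS3) and the simulated responses are hypothetical placeholders, not the study’s actual data or item labels.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 5-point Likert responses for a three-item construct (hypothetical items MIS1-MIS3).
rng = np.random.default_rng(42)
latent = rng.integers(1, 6, size=200)
responses = pd.DataFrame({
    f"MIS{i}": np.clip(latent + rng.integers(-1, 2, size=200), 1, 5) for i in (1, 2, 3)
})

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.3f}")  # values above 0.70 are conventionally considered reliable
```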
In this study, validity testing includes both content validity and construct validity. Content validity evaluates the extent to which the questionnaire items adequately cover the intended measurement domains. Based on a review of relevant literature and pretest revisions, the questionnaire demonstrates strong content validity [65]. Construct validity assesses whether the measurement instrument appropriately reflects the underlying theoretical structure. This study employed Confirmatory Factor Analysis (CFA) to verify construct validity. As shown in Table 6, all key model fit indices meet commonly accepted thresholds: for example, CMIN/DF = 2.197 < 3, GFI = 0.913, and RMSEA = 0.045, indicating that the overall model fit is satisfactory. Subsequent sections further evaluate convergent validity and discriminant validity.
This study examined convergent validity using Average Variance Extracted (AVE) and Composite Reliability (CR). As shown in Table 7, the AVE values for all 11 dimensions exceed the recommended threshold of 0.50 (ranging from 0.592 to 0.738), and all CR values are above 0.80 (ranging from 0.813 to 0.918). In addition, factor loadings for all items are greater than 0.70 (ranging from 0.712 to 0.885) and statistically significant (p < 0.001).
These results indicate that the measurement model demonstrates good convergent validity and internal consistency, and that the model fit is acceptable.
Discriminant validity was assessed by examining the correlation coefficients and the square roots of AVE values. As shown in Table 8, all variables are positively correlated and statistically significant at the p < 0.001 level. All standardized factor loadings exceed 0.60, with CR > 0.70 and AVE > 0.50, confirming acceptable convergent validity. Moreover, for each dimension, the square root of the AVE is greater than its correlations with any other construct, indicating that each dimension demonstrates good discriminant validity.
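For reference, the convergent and discriminant validity checks reported above follow standard formulas: AVE is the mean of a construct’s squared standardized loadings, CR equals (Σλ)² / [(Σλ)² + Σ(1 − λ²)], and the Fornell-Larcker criterion requires the square root of each construct’s AVE to exceed its correlations with every other construct. The sketch below applies these formulas to illustrative loadings and an illustrative correlation; the numbers are placeholders, not the estimates reported in Tables 7 and 8.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of (1 - loading^2)]."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

# Illustrative standardized loadings for two constructs and their correlation (placeholders).
loadings = {"PR": [0.78, 0.81, 0.75], "TR": [0.80, 0.76, 0.83]}
r_pr_tr = -0.23

for name, lam in loadings.items():
    print(f"{name}: AVE = {ave(lam):.3f}, CR = {composite_reliability(lam):.3f}")

# Fornell-Larcker criterion: sqrt(AVE) must exceed the absolute inter-construct correlation.
for name, lam in loadings.items():
    print(f"{name}: sqrt(AVE) = {np.sqrt(ave(lam)):.3f} > |r| = {abs(r_pr_tr):.2f} "
          f"-> {np.sqrt(ave(lam)) > abs(r_pr_tr)}")
```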
This study employed SEM to test the proposed hypotheses, analyze causal relationships among variables, and assess the model’s overall fit (Figure 5). As shown in Table 9, the absolute fit indices indicate a good fit: CMIN/DF = 2.175 < 3, GFI = 0.913, AGFI = 0.892, and RMSEA = 0.045. The relative fit indices also exceed recommended thresholds: NFI = 0.909, IFI = 0.948, TLI = 0.939, and CFI = 0.948. In addition, the parsimonious fit indices are acceptable: PNFI = 0.778 and PCFI = 0.811, both above the 0.50 benchmark. These results confirm that the model demonstrates excellent fit and structural soundness, and that it effectively explains the relationships among the studied variables.
Path coefficients were estimated using the Maximum Likelihood (ML) method. As shown in Table 10, all standard errors are positive and within acceptable ranges, and the critical ratios (C.R.) exceed the threshold of 1.96 in absolute value, indicating statistical significance at the p < 0.05 level. The specific criteria are as follows: C.R. > 1.96 indicates significance at p < 0.05, C.R. > 2.58 indicates significance at p < 0.01, C.R. > 3.29 indicates significance at p < 0.001. These results suggest that the model’s path relationships are statistically valid and robust.
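As an illustration of how such maximum-likelihood path estimation can be specified, the sketch below encodes a reduced version of the structural model in lavaan-style syntax using the open-source Python package semopy. This is an assumption-laden example: the paper does not name its SEM software in this excerpt, only two of the eight ethical dimensions are written out, and the item names and data file are hypothetical placeholders.

```python
# Minimal SEM sketch with semopy (illustrative; item names and file path are hypothetical).
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model (three assumed items per construct)
MIS =~ MIS1 + MIS2 + MIS3
JOD =~ JOD1 + JOD2 + JOD3
PR  =~ PR1 + PR2 + PR3
TR  =~ TR1 + TR2 + TR3
ADI =~ ADI1 + ADI2 + ADI3

# Structural paths; the full model adds ACC, ALB, CRE, PRI, ETR and CON in the same way
PR  ~ MIS + JOD
TR  ~ MIS + JOD + PR
ADI ~ PR + TR
"""

data = pd.read_csv("aigc_survey_items.csv")  # hypothetical item-level responses (1-5 Likert)
model = semopy.Model(MODEL_DESC)
model.fit(data)                              # maximum-likelihood estimation

print(model.inspect())                       # path estimates, standard errors, z-values, p-values
print(semopy.calc_stats(model).T)            # fit indices such as chi2/df, CFI, TLI, RMSEA
```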
With acceptable model fit indices, the model was deemed suitable for path and hypothesis testing [66]. The results indicate that MIS, ACC, PRI, JOD, ETR, and CON all exert significant positive effects on PR, with path coefficients of 0.161, 0.137, 0.131, 0.216, 0.150, and 0.136, respectively (all p-values < 0.05). Meanwhile, ACC, ALB, CRE, PRI, ETR, CON, and PR all exhibit significant negative effects on TR, with path coefficients ranging from −0.098 to −0.234 (all p-values < 0.05). TR has a significant positive effect on ADI (0.187, p = 0.001), while PR negatively affects ADI (−0.259, p < 0.001). The effects of ALB and CRE on PR, and of MIS and JOD on TR, were found to be not significant (p > 0.05). Overall, the results provide empirical support for most of the hypothesized paths (Figure 6).

5. Discussion

5.1. The Impact of Ethical Perceptions on Perceived Risk (PR)

This study found that most ethical perception variables significantly influence users’ PR regarding AIGC tools, indicating that users’ decision-making is not based solely on the functionality of the tools but relies heavily on the ethical assurance mechanisms behind them. Especially as generative AI increasingly enters content creation, dissemination, and creative labor, users’ risk assessment logic has gradually shifted from “Is the technology usable?” to “Is the technology trustworthy?”.
JOD is the ethical factor that most strongly influences users’ risk perception, and this finding has clear practical implications in the context of creative labor [67]. Recent research has highlighted that AIGC poses a substitution threat to intermediate creative jobs such as writing, design, translation, and video editing. When facing AI tools, users often exhibit a “dual response”: on the one hand, they expect improved efficiency, while on the other, they worry about the tools replacing their own job value [68]. This “expectation-threat paradox” evolves into heightened uncertainty during usage. Moreover, in Chinese society, job stability, identity recognition, and tool usage are closely related. Once a tool is perceived as a “substitute for human functions,” its usage becomes not just a technology adoption issue, but a potential self-devaluation risk [69]. This may explain why JOD has the strongest impact on risk perception, despite being non-significant in the trust path—users may not distrust AI due to the fear of “replacement,” but they do anticipate greater uncertainty regarding the system.
MIS perception also has a significant impact on users’ risk assessment. Users worry that AI-generated content may contain fabricated facts, logical errors, or unfiltered information, especially in contexts like news summaries or health advice. When such content is trusted, it can lead to substantial harm [70]. Compared to Western users, who focus more on whether AI is “neutral,” Chinese users are more concerned with “whether AI outputs are trustworthy” [71]. In this context, if AIGC tools fail to indicate that “this content is AI-generated and for reference only,” users are more likely to feel uncertainty. Ethical Transparency significantly enhances users’ perception of risk, indicating that users are not only concerned with the technical performance of AI but also care about whether the AI operates within reasonable ethical constraints. If users cannot understand how AI makes decisions or whether it is subject to review and constraints, they tend to perceive it as an “unpredictable black box” [72]. This result also highlights that even as users become more accepting of AI technology, if platforms do not actively disclose their content generation mechanisms, filtering logic, and ethical boundaries, users will not be able to establish basic “ethical security” and will naturally perceive higher risks [73].
ACC perception’s significant influence on risk perception reflects users’ high sensitivity to the “consequence responsibility mechanism”. This finding resonates with Bachmann’s research [74]. In systems where the responsibility chain is unclear, users are likely to assume that they will bear the consequences themselves, thus increasing their risk judgment [75]. Currently, many AIGC platforms use disclaimers to avoid legal and ethical consequences of generated content, but this approach ironically increases users’ concerns [76]. The CON dimension’s significant effect indicates that whether users can control the content generated by AI is a crucial factor in their risk evaluation [55]. Control concerns not only the user interface friendliness but also whether users have the minimal ability to intervene, such as “correcting erroneous outputs” or “withdrawing inappropriate generations.” Especially in image and video generation scenarios, where output randomness is high and feedback mechanisms are weak, users tend to feel that the system is “uncontrollable”, which leads to distrust and higher risk perception [56].
Although the path coefficient for PRI is lower than those of other variables, its impact remains significant. This suggests that users still worry about their data being recorded, used for training, or leaked by the platform. However, unlike visible risks such as misinformation or job loss, privacy concerns tend to be more latent and cumulative. Users may not feel an immediate threat during each use, but their sense of risk is shaped by long-term impressions—such as whether the platform provides data usage disclosures, opt-out options, or clear privacy protections. This background anxiety may not be activated constantly, but it still contributes meaningfully to their overall risk perception. It also highlights the importance of platform transparency in reducing long-term uncertainty and building psychological safety [11].
Additionally, this study found that ALB and CRE had no significant effect on PR. This may be attributed to how these factors are cognitively processed by users. Compared to issues like PRI or JOD, which evoke personal vulnerability and direct harm, algorithmic bias and creativity ethics are more abstract and policy-level concerns [9]. Most ordinary users lack the technical literacy to detect subtle forms of bias in AIGC outputs, especially when the content is textual, visual, or creative in nature rather than decision-based. Likewise, users may view plagiarism or originality disputes as issues between platforms and content owners, with minimal perceived consequences for themselves [50]. Theoretically, this finding underscores the heterogeneous nature of ethical perceptions in shaping perceived risk. It suggests that not all ethical concerns equally activate users’ psychological defense mechanisms. Future models may benefit from distinguishing between “immediate personal risks” and “systemic ethical concerns” when evaluating the impact of ethics on user cognition. This differentiation contributes to a more nuanced understanding of how users process complex ethical stimuli in AI environments and highlights the need for variable classification in trust–risk–ethics models.

5.2. The Impact of Ethical Perceptions on Trust (TR)

TR, as a core psychological variable driving technology adoption, is generally considered to be built on predictability, accountability, and moral consistency.
CRE has the strongest negative impact on user TR. This variable primarily reflects users’ perceptions of AI-generated content in terms of ownership, originality, and ethical boundaries. When users believe that AIGC tools blur the creative boundaries between humans and machines, or even suspect that these tools are “plagiarizing human experience,” their moral alignment with the system declines, thus weakening their TR [77]. Therefore, even when AI-generated results are satisfactory, users still find it difficult to establish a positive relationship with the system at the trust level if their creativity ethics concerns remain unaddressed [78].
ETR significantly negatively affects TR, suggesting that users tend to view systems as “untrustworthy” when faced with opaque or black-box-like operational logic [79]. This finding aligns closely with the view that “ethical perception is a prerequisite for TR formation”. Specifically, when platforms fail to clearly disclose their content review standards, generation boundaries, or ethical limitations, users cannot form a “predictable” basis for TR at the cognitive level [80]. For example, users often cannot determine whether platforms restrict violent, pornographic, or political content, nor do they understand whether certain topics are selectively suppressed by the model [81]. This uncertainty about the “system’s intentions” weakens the moral consistency and institutional control needed for TR.
PRI perception also has a significant negative impact on TR, which is consistent with existing studies discussing the relationship between perceived privacy invasion and trust erosion [33]. When users suspect that platforms may store, analyze, or leak their input data, even if no actual harm has occurred, they preemptively lower their trust level by mentally anticipating risks. It is important to note that AIGC tools often involve actions such as content uploading, context description, and personalized prompts, all of which users may view as “personal data.” Without a clear data usage policy or data deletion mechanism, trust in the platform is difficult to establish [82].
The negative impact of CON indicates that when users perceive that the system’s outputs cannot be effectively intervened with or modified, their TR in the system significantly declines. The foundation of TR lies not only in the system’s legitimate behavior but also in whether users possess the right to intervene. If an AI system is perceived as “autonomous with no feedback,” it loses its identity as a “collaborative tool” and becomes more akin to an “uncontrollable agent”, a shift that easily triggers cognitive conflicts [83].
Although the path coefficient for ALB is relatively lower, its negative impact is still statistically significant. This result highlights users’ sensitivity to moral values and social fairness in AI systems. Numerous recent studies have shown that AI systems often inherit or even amplify real-world biases during training, and users are likely to question the legitimacy of AI-generated content if they perceive gender discrimination, cultural bias, or political leaning [84]. For AIGC users, even if bias is not explicitly observed, as long as the platform fails to explain how the model was trained or whether biased data was included, users’ expectations of the system’s fairness become difficult to sustain. In other words, even if bias is not visible, it still triggers a trust erosion mechanism due to users’ cognitive alertness.
ACC issues, particularly unclear responsibility attribution, are widely regarded as key causes of TR erosion. When users believe that a platform cannot offer clear mechanisms for taking responsibility for erroneous content outputs, even if the technology is powerful, the platform will be seen as “untrustworthy.” This study’s results validate this logic. One possible explanation is that trust in AI systems is not only based on technical performance but also on users’ perception of accountability fairness and institutional reliability. When systems produce problematic content—such as misinformation or offensive outputs—users expect someone, either human or institutional, to be held accountable. If the platform shifts responsibility entirely to users or algorithms through legal disclaimers, it signals a lack of ethical maturity and weakens moral trust. It is worth noting that in most practical applications of AIGC platforms, the widespread use of disclaimers actually increases users’ suspicion of “platform evasion.” Users often interpret disclaimers not as risk mitigation tools, but as preemptive strategies to avoid responsibility, which can further amplify psychological distance between user and system. This is particularly true in content generation scenarios, where outputs are highly variable and users may lack the capacity to fully evaluate what is safe or appropriate. In such contexts, trust depends not only on how the system performs, but also on how the platform assumes ethical obligations when failures occur [74].
Although this study finds that MIS and JOD significantly influence PR, their effects on TR are not statistically significant. This divergence reveals users’ cognitive distinction between “system capability” and “system intention.” In other words, users may acknowledge the possibility that AIGC tools can generate false information or pose a replacement threat to creative professions, but they do not necessarily perceive the system as “untrustworthy” or “malicious.” Instead, they still regard it as a “usable tool,” especially when it performs reliably in completing specific tasks. This phenomenon has been referred to in existing literature as “operational trust” or “functional acceptance” [85], which differs from “moral trust” [86] built upon value alignment and ethical consistency. The insignificant path from MIS and JOD to TR in this study suggests that users are able to psychologically separate risks such as “content errors” or “job threats” from judgments about whether a system is trustworthy. Theoretically, this finding highlights the importance of distinguishing “performance-related risks” from “ethics-driven distrust” when constructing models of AI adoption behavior. It also indicates that trust in AI systems should not be treated as a binary construct, but rather as a complex structure encompassing multiple dimensions such as “reliability” and “ethical accountability” [53].

5.3. The Logical Relationship Between Trust–Risk Mechanism and Adoption Intention (ADI)

PR significantly negatively affects both ADI and TR, while TR has a significant positive effect on ADI. This indicates that, in a context where technological complexity and ethical concerns are increasingly prevalent, users do not adopt AI tools passively. Instead, they make their decisions based on a dynamic psychological balancing mechanism formed through ethical judgments, risk assessments, and TR building.
PR manifests in two significant pathways in this study: first, it directly and negatively impacts ADI; second, it indirectly influences user behavior by negatively affecting TR. This result supports Featherman and Pavlou’s argument that risk acts as a blocking variable in the adoption process, suggesting that users tend to adopt a risk-avoidant strategy when facing technologies with high uncertainty or potentially uncontrollable consequences [87]. Further analysis shows that users do not develop risk awareness only after encountering specific harm; rather, they engage in proactive risk evaluation when ethical factors have not been institutionalized and platform behaviors are unpredictable. This “precautionary risk assessment” mechanism is especially evident in contexts where data privacy is unclear, the system is uncontrollable, and accountability is difficult to trace. Therefore, if AIGC platforms fail to effectively alleviate users’ ethical anxieties, even with strong functionality and a user-friendly experience, they may still face limited adoption intention.
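Using the standardized coefficients reported in Table 10, this dual pathway can be made concrete with simple arithmetic: the indirect effect of PR on ADI through TR is the product of the PR→TR and TR→ADI paths, which adds to the direct PR→ADI path. The snippet below merely reproduces this decomposition from the reported estimates; it is not a re-estimation, and a formal test of the indirect effect would normally rely on bootstrapped confidence intervals.

```python
# Effect decomposition from the standardized coefficients in Table 10.
beta_pr_tr = -0.234    # PR -> TR
beta_tr_adi = 0.187    # TR -> ADI
beta_pr_adi = -0.259   # PR -> ADI (direct effect)

indirect = beta_pr_tr * beta_tr_adi    # approx. -0.044
total = beta_pr_adi + indirect         # approx. -0.303

print(f"Indirect effect of PR on ADI via TR: {indirect:.3f}")
print(f"Total effect of PR on ADI:           {total:.3f}")
```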
The path coefficient for TR in relation to ADI is 0.187, indicating that while TR is not the sole determining factor, it remains a core psychological support variable within the technological ethical perception framework. This result aligns with McKnight, who emphasized that “trust is a prerequisite for usage” [88]. In AIGC usage scenarios, trust encompasses not only technical reliability (e.g., system stability, rational outputs) but also ethical reliability (e.g., whether creative rights are respected, whether user data is protected, and whether usage boundaries are set). Particularly, under the current trend of AI technology “functional generalization,” the scope of trust has expanded from simply “Is the technology usable?” to “Is the platform trustworthy?” Once users establish this higher-level trust, their attachment to the platform will significantly increase. On the other hand, if trust is absent, even with a good user experience, users may quickly disengage. Therefore, to truly drive user behavior transformation, platforms should prioritize institutional safeguards and ethical mechanism design, establishing a transparent, stable, and accountable trust environment.

6. Contributions and Future Directions

6.1. Theoretical and Practical Contributions

This study offers several important theoretical and practical contributions to the field of AI adoption research. Theoretically, it constructs a multi-path structural model that integrates user ethical perceptions, PR, TR, and ADI, thereby extending the applicability of the Trust–Risk Framework to the emerging domain of AIGC. Unlike traditional models based on perceived usefulness or ease of use (e.g., TAM, UTAUT), this research emphasizes the role of ethical cognition as a critical antecedent of user decision-making. By incorporating eight empirically derived ethical dimensions—misinformation, accountability, algorithmic bias, creativity ethics, privacy, job displacement, ethical transparency, and control over AI—this study builds a novel, domain-specific variable system that can serve as a theoretical foundation for future AIGC behavior research. Moreover, the model empirically validates how these ethical concerns influence users’ behavioral intention through dual cognitive pathways: increasing perceived risk and reducing trust. This enriches our understanding of sociotechnical decision-making under ethical uncertainty, especially in high-autonomy and low-transparency AI systems.
From a practical perspective, the findings reveal that users are particularly sensitive to issues related to job displacement, data privacy, unclear responsibility, and lack of system controllability. These results provide actionable insights for AIGC platform developers to design ethically aware generation mechanisms, enhance user transparency (e.g., content warnings, model explainability), and implement trust-building features. Additionally, this research offers a psychological foundation for policymakers to construct anticipatory regulatory frameworks addressing user protection and algorithmic responsibility.

6.2. Research Limitations and Future Research Directions

Although this study has developed a comprehensive theoretical model and conducted empirical testing, several limitations should be acknowledged.
First, the data were collected using convenience sampling, which may introduce potential sampling bias. Participants were predominantly young, well-educated, and digitally literate individuals, which could limit the generalizability of the findings to populations with different demographic or technological profiles. For example, users with lower AI exposure may perceive ethical issues differently or exhibit distinct adoption behaviors.
Second, the study used cross-sectional data, which restricts the ability to examine how ethical perceptions, perceived risk, and trust evolve over time. Future research could employ longitudinal or experimental designs to capture dynamic changes in user cognition and behavior.
Third, while the model includes eight core ethical variables, it may not fully reflect newly emerging or highly sensitive ethical issues such as AI-based resurrection of deceased individuals, deepfake fraud, and political manipulation. These complex concerns require further refinement and expansion.
Future studies may also consider incorporating emotional or affective variables as mediators or moderators to better understand user reactions. Additionally, validating this model in cross-cultural contexts would help explore variations in ethical sensitivity and technology acceptance across societies, thereby enhancing both the theoretical robustness and the practical applicability of ethical adoption frameworks for generative AI.

7. Conclusions

In the context of rapid artificial intelligence development, AIGC tools are increasingly penetrating various creative fields such as text writing, image creation, and video generation. This has led to a more complex interaction between users and technology. Compared to traditional tools, AIGC tools not only enhance efficiency but also bring about unprecedented ethical challenges, raising public concern over issues such as misinformation, responsibility ambiguity, data privacy, and content ownership. Against this backdrop, this study addresses the core question of whether AI ethical factors influence user adoption intention by constructing a structural model integrating ethical perceptions, trust, and perceived risk. Using a combination of systematic literature review, expert interviews, and survey-based empirical methods, the study verifies the pathways and mechanisms among these factors.
The results show that users’ ethical perceptions significantly influence their perceived risk and trust in AI. Factors such as job displacement, misinformation, ethical transparency, and responsibility attribution play a prominent role in these paths. Perceived risk and trust act as mediating variables, forming a dual mechanism that affects adoption intention, further validating the theoretical applicability of the Trust-Risk Framework in AIGC ethical adoption studies. Specifically, in the Chinese context, users demonstrate high sensitivity to platform accountability mechanisms, content controllability, and system transparency, reflecting a tendency for “moral caution” in technology use. In conclusion, this study not only theoretically constructs an ethical perception model adapted to generative AI but also provides empirical evidence from a user perspective for platforms to optimize product design and ethical guidelines. As the scope of AIGC applications becomes more diversified and complex, platform managers and policymakers should focus on shaping users’ risk judgments and trust mechanisms to promote the sustainable development of artificial intelligence technology in a more transparent, trustworthy, and human-centered direction.

Author Contributions

Data curation, Y.P. and W.J.; Formal analysis, Y.T.; Investigation, T.Y.; Methodology, Y.C.; Supervision, Y.P. and W.J.; Validation, Y.H.; Visualization, Y.T.; Writing—original draft, T.Y.; Writing—review and editing, T.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by research projects of Qingdao University of Science and Technology (WST2021020) and Kookmin University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

All data generated or analyzed during this study are included in this article. The raw data are available from the corresponding author upon reasonable request.

Acknowledgments

The authors thank all the participants in this study for their time and willingness to share their experiences and feelings.

Conflicts of Interest

The authors declare no conflicts of interest concerning the research, authorship, and publication of this article.

Appendix A. Quantitative Survey Questionnaire Items

| Variables | Items | Issue | References |
|---|---|---|---|
| Misinformation (MIS) | MIS1 | I am concerned that AI-generated content may not be truthful or accurate. | [8,45,77] |
| | MIS2 | I find it difficult to determine whether AI content has been fact-checked. | |
| | MIS3 | AI may produce misleading or deceptively realistic false information. | |
| Accountability (ACC) | ACC1 | I am worried about whether the platform will take responsibility for errors in AI-generated content. | [11,57] |
| | ACC2 | I am unclear about who should be held accountable when problems occur. | |
| | ACC3 | I believe AI platforms should clearly define responsibility attribution. | |
| Algorithmic Bias (ALB) | ALB1 | I am concerned that AI-generated content may contain gender, racial, or cultural bias. | [30,59,70] |
| | ALB2 | I think AI systems may make decisions based on unfair data. | |
| | ALB3 | Content generated or recommended by AI may reinforce stereotypes. | |
| Creativity Ethics (CRE) | CRE1 | I am concerned that AI may infringe upon others’ original works. | [22,49,50] |
| | CRE2 | It is difficult for me to determine whether AI-generated content constitutes plagiarism. | |
| | CRE3 | I feel confused about the ownership of AI-generated creations. | |
| Privacy (PRI) | PRI1 | I am concerned that the platform may store or analyze the content I input. | [22,24,27] |
| | PRI2 | I am unsure whether AI tools make use of my personal data. | |
| | PRI3 | I am worried that the platform has not clearly explained its data usage policies. | |
| Job Displacement (JOD) | JOD1 | I am worried that AI-generated tools may replace parts of my job. | [1,17,51] |
| | JOD2 | Using AI makes me concerned that my professional skills will be devalued. | |
| | JOD3 | The development of AIGC may lead to job loss for creative workers. | |
| Ethical Transparency (ETR) | ETR1 | I do not understand how AI-generated content is constructed. | [24,41] |
| | ETR2 | I feel that the platform has not adequately explained its technologies and data sources. | |
| | ETR3 | I hope AI platforms will be more transparent about their usage rules and limitations. | |
| Control over AI (CON) | CON1 | I find the output of AI difficult to predict at times. | [5,56,70] |
| | CON2 | Sometimes I feel I cannot effectively control the behavior of the AI. | |
| | CON3 | I would like to fine-tune the style or structure of AI-generated content more precisely. | |
| Perceived Risk (PR) | PR1 | Using AI tools makes me feel a certain degree of uncertainty. | [39,60] |
| | PR2 | I am concerned that AI outputs may cause negative consequences. | |
| | PR3 | I believe the potential risks associated with AI cannot be fully anticipated. | |
| Trust in AI (TR) | TR1 | I trust that the AI platform can reasonably manage its generated content. | [32,41] |
| | TR2 | I believe the AI platform is generally trustworthy. | |
| | TR3 | I trust these tools will not cause me harm or distress. | |
| Adoption Intention (ADI) | ADI1 | If conditions permit, I am willing to continue using AIGC tools. | [14,18,33] |
| | ADI2 | I am willing to recommend AI-generated tools to others. | |
| | ADI3 | I plan to use such AI tools more frequently in the future. | |
| | ADI4 | I will actively explore more ways to use AI-generated tools. | |

References

  1. Chen, H.-C.; Chen, Z. Using ChatGPT and Midjourney to Generate Chinese Landscape Painting of Tang Poem ‘the Difficult Road to Shu’. Int. J. Soc. Sci. Artist. Innov. 2023, 3, 1–10. [Google Scholar] [CrossRef]
  2. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT. arXiv 2023, arXiv:2303.04226. Available online: https://arxiv.org/abs/2303.04226 (accessed on 7 April 2025).
  3. Gao, M.; Leong, W.Y. Research on the Application of AIGC in the Film Industry. J. Innov. Technol. 2024, 2024, 1–12. [Google Scholar] [CrossRef]
  4. Wu, F.; Hsiao, S.-W.; Lu, P. An AIGC-Empowered Methodology to Product Color Matching Design. Displays 2024, 81, 102623. [Google Scholar] [CrossRef]
  5. Wang, Y.; Pan, Y.; Yan, M.; Su, Z.; Luan, T.H. A Survey on ChatGPT: AI–Generated Contents, Challenges, and Solutions. IEEE Open J. Comput. Soc. 2023, 4, 280–302. [Google Scholar] [CrossRef]
  6. Song, F. Optimizing User Experience: AI-Generated Copywriting and Media Integration with ChatGPT on Xiaohongshu. Commun. Humanit. Res. 2023, 18, 215–221. [Google Scholar] [CrossRef]
  7. Murphy, C.; Thomas, F.P. Navigating the AI Revolution: This Journal’s Journey Continues. J. Spinal Cord Med. 2023, 46, 529–530. [Google Scholar] [CrossRef] [PubMed]
  8. Doyal, A.S.; Sender, D.; Nanda, M.; Serrano, R.A. Chat GPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations. Cureus 2023, 15, e43292. [Google Scholar] [CrossRef] [PubMed]
  9. Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J. Med. Internet Res. 2023, 25, e48009. [Google Scholar] [CrossRef]
  10. Peng, L.; Zhao, B. Navigating the Ethical Landscape behind ChatGPT. Big Data Soc. 2024, 11, 20539517241237488. [Google Scholar] [CrossRef]
  11. Ghandour, A.; Woodford, B.J.; Abusaimeh, H. Ethical Considerations in the Use of ChatGPT: An Exploration through the Lens of Five Moral Dimensions. IEEE Access 2024, 12, 60682–60693. [Google Scholar] [CrossRef]
  12. Stahl, B.C.; Eke, D. The Ethics of ChatGPT—Exploring the Ethical Issues of an Emerging Technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
  13. Choudhury, A.; Elkefi, S.; Tounsi, A.; Statler, B.M. Exploring Factors Influencing User Perspective of ChatGPT as a Technology That Assists in Healthcare Decision Making: A Cross Sectional Survey Study. PLoS ONE 2024, 19, e0296151. [Google Scholar] [CrossRef] [PubMed]
  14. Jo, H. Decoding the ChatGPT Mystery: A Comprehensive Exploration of Factors Driving AI Language Model Adoption. Inf. Dev. 2023, 2666669231202764. [Google Scholar] [CrossRef]
  15. Balaskas, S.; Tsiantos, V.; Chatzifotiou, S.; Rigou, M. Determinants of ChatGPT Adoption Intention in Higher Education: Expanding on TAM with the Mediating Roles of Trust and Risk. Information 2025, 16, 82. [Google Scholar] [CrossRef]
  16. Chen, G.; Fan, J.; Azam, M. Exploring Artificial Intelligence (AI) Chatbots Adoption among Research Scholars Using Unified Theory of Acceptance and Use of Technology (UTAUT). J. Librariansh. Inf. Sci. 2024, 9610006241269189. [Google Scholar] [CrossRef]
  17. Shuhaiber, A.; Kuhail, M.A.; Salman, S. ChatGPT in Higher Education—A Student’s Perspective. Comput. Hum. Behav. Rep. 2025, 17, 100565. [Google Scholar] [CrossRef]
  18. Biloš, A.; Budimir, B. Understanding the Adoption Dynamics of ChatGPT among Generation Z: Insights from a Modified UTAUT2 Model. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 863–879. [Google Scholar] [CrossRef]
  19. Wang, S.-F.; Chen, C.-C. Exploring Designer Trust in Artificial Intelligence-Generated Content: TAM/TPB Model Study. Appl. Sci. 2024, 14, 6902. [Google Scholar] [CrossRef]
  20. Wang, C.; Chen, X.; Hu, Z.; Jin, S.; Gu, X. Deconstructing University Learners’ Adoption Intention towards AIGC Technology: A Mixed-methods Study Using ChatGPT as an Example. J. Comput. Assisted Learn. 2025, 41, e13117. [Google Scholar] [CrossRef]
  21. Yu, H.; Dong, Y.; Wu, Q. User-Centric AIGC Products: Explainable Artificial Intelligence and AIGC Products. arXiv 2023, arXiv:2308.09877. [Google Scholar] [CrossRef]
  22. Boina, R.; Achanta, A. Balancing Language Brilliance with User Privacy: A Call for Ethical Data Handling in ChatGPT. Int. J. Sci. Res. 2023, 12, 440–443. [Google Scholar] [CrossRef]
  23. Smuha, N.A. The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
  24. Li, Z. AI Ethics and Transparency in Operations Management: How Governance Mechanisms Can Reduce Data Bias and Privacy Risks. J. Appl. Econ. Policy Stud. 2024, 13, 89–93. [Google Scholar] [CrossRef]
  25. Lewis, D.; Hogan, L.; Filip, D.; Wall, P. Global Challenges in the Standardization of Ethics for Trustworthy AI. J. ICT Stand 2020, 8, 123–150. [Google Scholar] [CrossRef]
  26. Qureshi, N.I.; Choudhuri, S.S.; Nagamani, Y.; Varma, R.A.; Shah, R. Ethical Considerations of AI in Financial Services: Privacy, Bias, and Algorithmic Transparency. In Proceedings of the 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS), Chikkaballapur, India, 18–19 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
  27. Campbell, M.; Barthwal, A.; Joshi, S.; Shouli, A.; Shrestha, A.K. Investigation of the Privacy Concerns in AI Systems for Young Digital Citizens: A Comparative Stakeholder Analysis. In Proceedings of the 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2025; pp. 30–37. [Google Scholar] [CrossRef]
  28. Morante, G.; Viloria-Núñez, C.; Florez-Hamburger, J.; Capdevilla-Molinares, H. Proposal of an Ethical and Social Responsibility Framework for Sustainable Value Generation in AI. In Proceedings of the 2024 IEEE Technology and Engineering Management Society (TEMSCON LATAM), Panama, Panama, 18–19 July 2024; pp. 1–6. [Google Scholar] [CrossRef]
  29. Shrestha, A.K.; Joshi, S. Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives. In Proceedings of the 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2025; pp. 22–29. [Google Scholar] [CrossRef]
  30. Yang, Q.; Lee, Y.-C. Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation. Big Data Cogn. Comput. 2024, 8, 105. [Google Scholar] [CrossRef]
  31. Angerschmid, A.; Zhou, J.; Theuermann, K.; Chen, F.; Holzinger, A. Fairness and Explanation in AI-Informed Decision Making. Mach. Learn. Knowl. Extr. 2022, 4, 556–579. [Google Scholar] [CrossRef]
  32. Bhaskar, P.; Misra, P.; Chopra, G. Shall I Use ChatGPT? A Study on Perceived Trust and Perceived Risk towards ChatGPT Usage by Teachers at Higher Education Institutions. Int. J. Inf. Learn. Technol. 2024, 41, 428–447. [Google Scholar] [CrossRef]
  33. Zhou, T.; Lu, H. The Effect of Trust on User Adoption of AI-Generated Content. Electron. Libr. 2024, 43, 61–76. [Google Scholar] [CrossRef]
  34. Kenesei, Z.; Ásványi, K.; Kökény, L.; Jászberényi, M.; Miskolczi, M.; Gyulavári, T.; Syahrivar, J. Trust and Perceived Risk: How Different Manifestations Affect the Adoption of Autonomous Vehicles. Transp. Res. Part A Policy Pract. 2022, 164, 379–393. [Google Scholar] [CrossRef]
  35. Kumar, M.; Sharma, S.; Singh, J.B.; Dwivedi, Y.K. “Okay Google, What about My Privacy?”: User’s Privacy Perceptions and Acceptance of Voice Based Digital Assistants. Comput. Hum. Behav. 2021, 120, 106763. [Google Scholar]
  36. Bawack, R.E.; Bonhoure, E.; Mallek, S. Why Would Consumers Risk Taking Purchase Recommendations from Voice Assistants? Inf. Technol. People 2024, 38, 1686–1711. [Google Scholar] [CrossRef]
  37. Rigotti, C.; Fosch-Villaronga, E. Fairness, AI & Recruitment. Comput. Law Secur. Rev. 2024, 53, 105966. [Google Scholar] [CrossRef]
  38. Haleem, A.; Javaid, M.; Khan, I.H. Current Status and Applications of Artificial Intelligence (AI) in Medical Field: An Overview. Curr. Med. Res. Pract. 2019, 9, 231–237. [Google Scholar] [CrossRef]
  39. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
  40. Al-kfairy, M.; Mustafa, D.; Kshetri, N.; Insiew, M.; Alfandi, O. Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics 2024, 11, 58. [Google Scholar] [CrossRef]
  41. Law, R.; Ye, H.; Lei, S.S.I. Ethical Artificial Intelligence (AI): Principles and Practices. Int. J. Contemp. Hosp. Manag. 2024, 37, 279–295. [Google Scholar] [CrossRef]
  42. Shrestha, A.K.; Barthwal, A.; Campbell, M.; Shouli, A.; Syed, S.; Joshi, S.; Vassileva, J. Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review. arXiv 2024, arXiv:2412.16369. [Google Scholar] [CrossRef]
  43. Stracke, C.M.; Chounta, I.A.; Holmes, W.; Tlili, A.; Bozkurt, A. A Standardised PRISMA-Based Protocol for Systematic Reviews of the Scientific Literature on Artificial Intelligence and Education (AI&ED). J. Appl. Learn. Teach. 2023, 6, 64–70. [Google Scholar] [CrossRef]
  44. Hamdan, Q.U.; Umar, W.; Hasan, M. Navigating Ethical Dilemmas of Generative AI In Medical Writing. J. Rawalpindi Med. Coll. 2024, 28, 363–364. [Google Scholar] [CrossRef]
  45. Li, F.; Yang, Y. Impact of Artificial Intelligence–Generated Content Labels on Perceived Accuracy, Message Credibility, and Sharing Intentions for Misinformation: Web-Based, Randomized, Controlled Experiment. JMIR Form. Res. 2024, 8, e60024. [Google Scholar] [CrossRef] [PubMed]
  46. Ozanne, M.; Bhandari, A.; Bazarova, N.N.; DiFranzo, D. Shall AI Moderators Be Made Visible? Perception of Accountability and Trust in Moderation Systems on Social Media Platforms. Big Data Soc. 2022, 9, 20539517221115666. [Google Scholar] [CrossRef]
  47. Wang, C.; Wang, K.; Bian, A.; Islam, R.; Keya, K.; Foulde, J.; Pan, S. User Acceptance of Gender Stereotypes in Automated Career Recommendations. arXiv 2021, arXiv:2106.07112. [Google Scholar]
  48. Brauner, P.; Hick, A.; Philipsen, R.; Ziefle, M. What Does the Public Think about Artificial Intelligence?—A Criticality Map to Understand Bias in the Public Perception of AI. Front. Comput. Sci. 2023, 5, 1113903. [Google Scholar] [CrossRef]
  49. Guo, X.; Li, Y.; Peng, Y.; Wei, X. Copyleft for Alleviating AIGC Copyright Dilemma: What-If Analysis, Public Perception and Implications. arXiv 2024, arXiv:2402.12216. [Google Scholar] [CrossRef]
  50. Zhuang, L. AIGC (Artificial Intelligence Generated Content) Infringes the Copyright of Human Artists. Appl. Comput. Eng. 2024, 34, 31–39. [Google Scholar] [CrossRef]
  51. Wang, K.-H.; Lu, W.-C. AI-Induced Job Impact: Complementary or Substitution? Empirical Insights and Sustainable Technology Considerations. Sustain. Technol. Entrep. 2025, 4, 100085. [Google Scholar] [CrossRef]
  52. Liu, Y.; Meng, X.; Li, A. AI’s Ethical Implications: Job Displacement. Adv. Comput. Commun. 2023, 4, 138–142. [Google Scholar] [CrossRef]
  53. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  54. Kim, S.S.Y. Establishing Appropriate Trust in AI through Transparency and Explainability. In Proceedings of the CHI EA’24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 11–16 May 2024; pp. 1–6. [Google Scholar] [CrossRef]
  55. Wamba-Taguimdje, S.-L.; Wamba, S.F.; Twinomurinzi, H. Why Should Users Take the Risk of Sustainable Use of Generative Artificial Intelligence Chatbots: An Exploration of ChatGPT’s Use. J. Glob. Inf. Manag. 2024, 32, 1–32. [Google Scholar] [CrossRef]
  56. Sieger, L.N.; Hermann, J.; Schomäcker, A.; Heindorf, S.; Meske, C.; Hey, C.-C.; Doğangün, A. User Involvement in Training Smart Home Agents: Increasing Perceived Control and Understanding. In Proceedings of the 10th International Conference on Human-Agent Interaction, Christchurch, New Zealand, 5–8 December 2022; pp. 76–85. [Google Scholar] [CrossRef]
  57. Aumüller, U.; Meyer, E. Trusting AI: Factors Influencing Willingness of Accountability for AI-Generated Content in the Workplace. In Human Factors and Systems Interaction; AHFE International: Orlando, FL, USA, 2024. [Google Scholar] [CrossRef]
  58. Chen, C.; Sundar, S.S. Is This AI Trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–11. [Google Scholar] [CrossRef]
  59. Hou, T.-Y.; Tseng, Y.-C.; Yuan, C.W. Is This AI Sexist? The Effects of a Biased AI’s Anthropomorphic Appearance and Explainability on Users’ Bias Perceptions and Trust. Int. J. Inf. Manag. 2024, 76, 102775. [Google Scholar] [CrossRef]
  60. Aquilino, L.; Bisconti, P.; Marchetti, A. Trust in AI: Transparency, and Uncertainty Reduction. Development of a New Theoretical Framework. In Proceedings of the MULTITTRUST 2023 Multidisciplinary Perspectives on Human-AI Team Trust 2023, Gothenburg, Sweden, 4 December 2023. [Google Scholar]
  61. Choudhury, A.; Asan, O.; Medow, J.E. Effect of Risk, Expectancy, and Trust on Clinicians’ Intent to Use an Artificial Intelligence System—Blood Utilization Calculator. Appl. Ergon. 2022, 101, 103708. [Google Scholar] [CrossRef]
  62. Marjerison, R.K.; Dong, H.; Kim, J.-M.; Zheng, H.; Zhang, Y.; Kuan, G. Understanding User Acceptance of AI-Driven Chatbots in China’s E-Commerce: The Roles of Perceived Authenticity, Usefulness, and Risk. Systems 2025, 13, 71. [Google Scholar] [CrossRef]
  63. Ashrafi, D.M.; Ahmed, S.; Shahid, T.S. Privacy or Trust: Understanding the Privacy Paradox in Users Intentions towards e-Pharmacy Adoption through the Lens of Privacy-Calculus Model. J. Sci. Technol. Policy Manag. 2024; ahead-of-print. [Google Scholar] [CrossRef]
  64. Ferketich, S. Internal Consistency Estimates of Reliability. Res. Nurs. Health 1990, 13, 437–440. [Google Scholar] [CrossRef]
  65. Sireci, S.G. The Construct of Content Validity. Social Indic. Res. 1998, 45, 83–117. [Google Scholar] [CrossRef]
  66. Bentler, P.M. Comparative Fit Indexes in Structural Models. Psychol. Bull. 1990, 107, 238–246. [Google Scholar] [CrossRef]
  67. Caporusso, N. Generative Artificial Intelligence and the Emergence of Creative Displacement Anxiety: Review. Res. Directs Psychol. Behav. 2023, 3, 9. [Google Scholar] [CrossRef]
  68. Molla, M.M. Artificial Intelligence (AI) and Fear of Job Displacement in Banks in Bangladesh. Int. J. Sci. Bus. 2024, 42, 1–18. [Google Scholar] [CrossRef]
  69. Liu, X.; Liu, Y. Developing and Validating a Scale of Artificial Intelligence Anxiety among Chinese EFL Teachers. Euro. J. Educ. 2025, 60, e12902. [Google Scholar] [CrossRef]
  70. Zhou, J.; Zhang, Y.; Luo, Q.; Parker, A.G.; De Choudhury, M. Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; pp. 1–20. [Google Scholar] [CrossRef]
  71. Brauner, P.; Glawe, F.; Liehner, G.L.; Vervier, L.; Ziefle, M. AI Perceptions across Cultures: Similarities and Differences in Expectations, Risks, Benefits, Tradeoffs, and Value in Germany and China. arXiv 2024, arXiv:2412.13841. [Google Scholar] [CrossRef]
  72. Moon, D.; Ahn, S. A Study on Functional Requirements and Inspection Items for AI System Change Management and Model Improvement on the Web Platform. J. Web Eng. 2024, 23, 831–848. [Google Scholar] [CrossRef]
  73. Smith, H. Clinical AI: Opacity, Accountability, Responsibility and Liability. AI Soc. 2020, 36, 535–545. [Google Scholar] [CrossRef]
  74. Henriksen, A.; Enni, S.; Bechmann, A. Situated Accountability: Ethical Principles, Certification Standards, and Explanation Methods in Applied AI. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual, 19–21 May 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  75. Beckers, S. Moral Responsibility for AI Systems. Adv. Neural Inf. Process. Syst. 2023, 36, 4295–4308. [Google Scholar] [CrossRef]
  76. Kim, H. Investigating the Effects of Generative-AI Responses on User Experience after AI Hallucination. In Proceedings of the MBP 2024 Tokyo International Conference on Management & Business Practices, Tokyo, Japan, 18–19 January 2024; pp. 92–101. [Google Scholar] [CrossRef]
  77. Jia, H.; Appelman, A.; Wu, M.; Bien-Aimé, S. News Bylines and Perceived AI Authorship: Effects on Source and Message Credibility. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100093. [Google Scholar] [CrossRef]
  78. Mazzi, F. Authorship in Artificial Intelligence-generated Works: Exploring Originality in Text Prompts and Artificial Intelligence Outputs through Philosophical Foundations of Copyright and Collage Protection. J. World Intellect. Prop. 2024, 27, 410–427. [Google Scholar] [CrossRef]
  79. Ferrario, A. Design Publicity of Black Box Algorithms: A Support to the Epistemic and Ethical Justifications of Medical AI Systems. J. Med. Ethics 2022, 48, 492–494. [Google Scholar] [CrossRef]
  80. Chaudhary, G. Unveiling the Black Box: Bringing Algorithmic Transparency to AI. Masaryk Univ. J. Law Technol. 2024, 18, 93–122. [Google Scholar] [CrossRef]
  81. Thalpage, N. Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems. J. Digit. Art Humanit. 2023, 4, 31–36. [Google Scholar] [CrossRef]
  82. Ijaiya, H. Harnessing AI for Data Privacy: Examining Risks, Opportunities and Strategic Future Directions. Int. J. Sci. Res. Arch. 2024, 13, 2878–2892. [Google Scholar] [CrossRef]
  83. Shin, D. User Perceptions of Algorithmic Decisions in the Personalized AI System: Perceptual Evaluation of Fairness, Accountability, Transparency, and Explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
  84. Zhou, J.; Verma, S.; Mittal, M.; Chen, F. Understanding Relations between Perception of Fairness and Trust in Algorithmic Decision Making. In Proceedings of the 2021 8th International Conference on Behavioral and Social Computing (BESC), Doha, Qatar, 29–31 October 2021; pp. 1–5. [Google Scholar] [CrossRef]
  85. Lankton, N.K.; McKnight, D.H. What Does It Mean to Trust Facebook? Examining Technology and Interpersonal Trust Beliefs. SIGMIS Database 2011, 42, 32–54. [Google Scholar] [CrossRef]
  86. Muir, B.M. Trust in Automation: Part I. Theoretical Issues in the Study of Trust and Human Intervention in Automated Systems. Ergonomics 1994, 37, 1905–1922. [Google Scholar] [CrossRef]
  87. Featherman, M.S.; Pavlou, P.A. Predicting E-Services Adoption: A Perceived Risk Facets Perspective. Int. J. Hum.-Comput. Stud. 2003, 59, 451–474. [Google Scholar] [CrossRef]
  88. Mcknight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a Specific Technology: An Investigation of Its Components and Measures. ACM Trans. Manage. Inf. Syst. 2011, 2, 1–25. [Google Scholar] [CrossRef]
Figure 1. Overall Research Methodology Design.
Figure 2. Coding Process of Expert Interviews.
Figure 3. PRISMA Flow Diagram of Literature Screening.
Figure 4. Proposed Research Model.
Figure 5. Structural Equation Model (SEM).
Figure 6. Diagram of Significant and Non-Significant Paths.
Table 1. Basic Information of Interviewed Experts.

| ID | Gender | Affiliation | Field of Research | Brief Description of AIGC-Related Experience |
|---|---|---|---|---|
| E01 | Male | A Comprehensive University | AI Ethics | Principal investigator of a national AI ethics project; focuses on the societal impact of generative models |
| E02 | Female | AI Content Platform Company | Product Development | Participated in video generation module design and user testing for Runway-like platforms |
| E03 | Male | University | Human–Computer Interaction (HCI) | Published multiple papers on AIGC user interaction and experience |
| E04 | Male | Digital Creative Startup | Image Generation Product Design | Extensive use of Midjourney/Stable Diffusion for commercial content creation |
| E05 | Female | Social Research Institute | Sociology of Technology | Investigates how AI tools influence creative behavior among young users |
| E06 | Female | University | Digital Communication | Researches the impact of AIGC on information credibility and dissemination structures |
| E07 | Female | AI Ethics and Policy Think Tank | Public Policy | Authored several policy advisory reports on AI ethics and regulatory frameworks |
| E08 | Male | Media Industry | Video Content Editing | Hands-on experience with Runway for automated video editing; familiar with creator pain points |
| E09 | Female | University | Philosophy of Technology | Explores AI creativity, subjectivity, and algorithmic bias from a philosophical perspective |
| E010 | Male | AI Application Development Company | Dialogue System Development | Responsible for semantic control and safety mechanisms in AIGC text generation products |
Table 2. Ethical Perception Dimensions Identified from the Systematic Literature Review.

| Ethical Dimension | Core Keywords | Number of Articles |
|---|---|---|
| Misinformation | misinformation, disinformation, factuality, fake content | 39 |
| Accountability | accountability, responsibility, liability | 41 |
| Algorithmic Bias | bias, discrimination, fairness | 47 |
| Creativity Ethics | copyright, IP, ownership, authorship | 29 |
| Privacy | privacy, data protection, user data | 54 |
| Job Displacement | job loss, creative replacement, automation threat | 31 |
| Ethical Transparency | transparency, explainability, black-box | 28 |
| Control over AI | control, autonomy, system unpredictability | 22 |
Table 3. Summary of Ethical Perception Variables (part).

| Code | Ethical Dimension | User Concerns | Keywords in Literature | Conceptual Definition |
|---|---|---|---|---|
| V1 | Misinformation (MIS) | “Sometimes it makes things up with full confidence.” / “I can’t tell whether what it says is true or not.” | misinformation, fake content, hallucination, deepfake | User concerns about the factual accuracy of AI-generated content and the potential for misleading or fabricated information. |
| V2 | Accountability (ACC) | “If AI makes a mistake, who takes responsibility?” / “I’m just using it, I didn’t build it.” | accountability, liability, responsibility | User perception of unclear responsibility when AIGC produces harmful, inappropriate, or incorrect content. |
| V3 | Algorithmic Bias (ALB) | “Does it prefer white faces?” / “The outputs are all stereotyped, no matter what I input.” | bias, discrimination, fairness, training data | User awareness of unfair or biased outputs related to gender, race, culture, etc., caused by algorithmic training data or system design. |
| V4 | Creativity Ethics (CRE) | “Did it copy from my artwork?” / “That image I made—can someone generate the same thing with AI?” | copyright, IP, authorship, originality | Ethical concerns about content ownership, originality, and the erosion of creative labor value in the age of generative AI. |
| V5 | Privacy (PRI) | “Will the prompts I input be saved?” / “Are they using my data to train the model?” | data privacy, personal data, collection, consent | Concerns over how user data are collected, stored, used, and whether consent and boundaries are clearly communicated. |
| V6 | Job Displacement (JOD) | “Why would clients hire me when AI can do the same?” / “Will video editors be obsolete soon?” | job loss, automation, creative replacement | Anxiety over the potential of AIGC to replace creative or professional roles, threatening job security. |
| V7 | Ethical Transparency (ETR) | “I don’t know what material it’s pulling from.” / “It’s a black box—I can’t see what it’s doing.” | transparency, explainability, black-box, usage disclosure | Perceived lack of moral and procedural clarity about how AIGC tools function, where data come from, and what boundaries are in place. |
| V8 | Control over AI (CON) | “I can’t tweak the output to match my intent.” / “It sometimes just ignores my prompts.” | control, autonomy, intervention, unpredictability | A sense of limited user agency or unpredictability when interacting with AIGC tools, leading to concerns over lack of control. |
Table 4. Demographic Variables.

| Variable | Category | Frequency | Percentage |
|---|---|---|---|
| Gender | Male | 309 | 53.09% |
| | Female | 273 | 46.91% |
| Age Group | Under 18 | 24 | 4.12% |
| | 18–25 | 178 | 30.58% |
| | 26–30 | 177 | 30.41% |
| | 31–40 | 127 | 21.82% |
| | Over 40 | 76 | 13.06% |
| Current Occupation | Student | 94 | 16.15% |
| | Teacher | 120 | 20.62% |
| | Media Professional | 109 | 18.73% |
| | Technology Industry | 86 | 14.78% |
| | Freelancer | 95 | 16.32% |
| | Other | 78 | 13.40% |
| Highest Educational Attainment | High school or below | 62 | 10.65% |
| | Associate degree | 179 | 30.76% |
| | Bachelor’s degree | 205 | 35.22% |
| | Master’s degree | 100 | 17.18% |
| | Doctorate or above | 36 | 6.19% |
| Average Weekly Usage Frequency of AIGC Tools | Rarely | 43 | 7.39% |
| | 1–2 times per week | 53 | 9.11% |
| | 3–5 times per week | 217 | 37.29% |
| | Almost daily | 175 | 30.07% |
| | Multiple times daily | 94 | 16.15% |
Table 5. Reliability Analysis Results of the Questionnaire.

| Dimension | Number of Items | Cronbach’s Alpha |
|---|---|---|
| MIS | 3 | 0.820 |
| ACC | 3 | 0.824 |
| ALB | 3 | 0.835 |
| CRE | 3 | 0.813 |
| PRI | 3 | 0.833 |
| JOD | 3 | 0.866 |
| ETR | 3 | 0.868 |
| CON | 3 | 0.855 |
| PR | 3 | 0.846 |
| TR | 3 | 0.856 |
| ADI | 4 | 0.918 |
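For reference, the values in Table 5 follow the standard Cronbach’s alpha formula, which relates the number of items, the individual item variances, and the variance of the summed scale. The snippet below is a minimal sketch of that computation for a single construct; the data frame and column names are hypothetical placeholders matching the item codes in Appendix A.

```python
# Minimal Cronbach's alpha computation for one construct (illustrative only).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per questionnaire item, one row per respondent."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Example (hypothetical data frame 'data' with Likert-scale responses):
# alpha_mis = cronbach_alpha(data[["MIS1", "MIS2", "MIS3"]])
```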
Table 6. Model Fit Indices from Confirmatory Factor Analysis (CFA).

| Fit Index | Recommended Threshold | Observed Value | Evaluation Result |
|---|---|---|---|
| CMIN/DF | <3 | 2.197 | Excellent |
| GFI | >0.80 | 0.913 | Excellent |
| AGFI | >0.80 | 0.89 | Excellent |
| RMSEA | <0.08 | 0.045 | Excellent |
| NFI | >0.9 | 0.909 | Good |
| IFI | >0.9 | 0.948 | Excellent |
| TLI | >0.9 | 0.938 | Excellent |
| CFI | >0.9 | 0.948 | Excellent |
| PNFI | >0.5 | 0.765 | Excellent |
| PCFI | >0.5 | 0.798 | Excellent |
Table 7. Results of Confirmatory Factor Analysis (CFA).

| Dimension | Observed Variable | Factor Loading | S.E. | C.R. | P | CR | AVE |
|---|---|---|---|---|---|---|---|
| MIS | MIS1 | 0.751 | | | | 0.824 | 0.611 |
| | MIS2 | 0.712 | 0.063 | 16.117 | *** | | |
| | MIS3 | 0.874 | 0.069 | 17.708 | *** | | |
| ACC | ACC1 | 0.785 | | | | 0.828 | 0.617 |
| | ACC2 | 0.716 | 0.057 | 16.681 | *** | | |
| | ACC3 | 0.849 | 0.058 | 18.625 | *** | | |
| ALB | ALB1 | 0.778 | | | | 0.835 | 0.629 |
| | ALB2 | 0.799 | 0.057 | 18.307 | *** | | |
| | ALB3 | 0.801 | 0.058 | 18.332 | *** | | |
| CRE | CRE1 | 0.744 | | | | 0.813 | 0.592 |
| | CRE2 | 0.769 | 0.064 | 16.157 | *** | | |
| | CRE3 | 0.795 | 0.068 | 16.386 | *** | | |
| PRI | PRI1 | 0.790 | | | | 0.833 | 0.625 |
| | PRI2 | 0.769 | 0.056 | 17.893 | *** | | |
| | PRI3 | 0.813 | 0.057 | 18.614 | *** | | |
| JOD | JOD1 | 0.805 | | | | 0.867 | 0.685 |
| | JOD2 | 0.810 | 0.050 | 20.569 | *** | | |
| | JOD3 | 0.867 | 0.048 | 21.690 | *** | | |
| ETR | ETR1 | 0.817 | | | | 0.869 | 0.689 |
| | ETR2 | 0.805 | 0.046 | 20.808 | *** | | |
| | ETR3 | 0.867 | 0.048 | 22.130 | *** | | |
| CON | CON1 | 0.802 | | | | 0.855 | 0.663 |
| | CON2 | 0.795 | 0.052 | 19.597 | *** | | |
| | CON3 | 0.845 | 0.052 | 20.530 | *** | | |
| PR | PR1 | 0.827 | | | | 0.845 | 0.645 |
| | PR2 | 0.774 | 0.049 | 19.473 | *** | | |
| | PR3 | 0.807 | 0.049 | 20.321 | *** | | |
| TR | TR1 | 0.836 | | | | 0.857 | 0.666 |
| | TR2 | 0.801 | 0.050 | 21.131 | *** | | |
| | TR3 | 0.811 | 0.047 | 21.420 | *** | | |
| ADI | ADI1 | 0.845 | | | | 0.918 | 0.738 |
| | ADI2 | 0.885 | 0.043 | 26.981 | *** | | |
| | ADI3 | 0.860 | 0.041 | 25.852 | *** | | |
| | ADI4 | 0.845 | 0.038 | 25.133 | *** | | |
Note: “*** p < 0.001”. For model identification purposes, the first item of each construct was fixed to a loading of 1. Thus, standard error (S.E.), critical ratio (C.R.), and p-value are not reported for these items.
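The CR and AVE columns in Table 7 are derived from the standardized loadings: AVE is the mean of the squared loadings, and CR is the squared sum of the loadings divided by that quantity plus the summed indicator error variances. The sketch below applies these formulas to the MIS loadings; because the published loadings are rounded to three decimals, the output matches the reported 0.824 and 0.611 up to rounding.

```python
# Composite reliability (CR) and average variance extracted (AVE)
# computed from standardized factor loadings (illustrated with MIS).
import numpy as np

def cr_ave(loadings):
    lam = np.asarray(loadings)
    error_var = 1 - lam**2                                  # indicator error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var.sum())    # composite reliability
    ave = (lam**2).mean()                                   # average variance extracted
    return cr, ave

cr, ave = cr_ave([0.751, 0.712, 0.874])   # MIS loadings from Table 7
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # CR = 0.824, AVE ~ 0.612 (reported: 0.611)
```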
Table 8. Results of Discriminant Validity Analysis.

| | MIS | ACC | ALB | CRE | PRI | JOD | ETR | CON | PR | TR | ADI |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MIS | 0.782 | | | | | | | | | | |
| ACC | 0.332 | 0.785 | | | | | | | | | |
| ALB | 0.362 | 0.341 | 0.793 | | | | | | | | |
| CRE | 0.262 | 0.345 | 0.327 | 0.77 | | | | | | | |
| PRI | 0.311 | 0.419 | 0.406 | 0.306 | 0.791 | | | | | | |
| JOD | 0.352 | 0.423 | 0.339 | 0.317 | 0.410 | 0.828 | | | | | |
| ETR | 0.240 | 0.283 | 0.466 | 0.307 | 0.362 | 0.350 | 0.83 | | | | |
| CON | 0.346 | 0.307 | 0.456 | 0.326 | 0.422 | 0.359 | 0.378 | 0.814 | | | |
| PR | 0.424 | 0.439 | 0.386 | 0.329 | 0.454 | 0.504 | 0.417 | 0.444 | 0.803 | | |
| TR | −0.377 | −0.467 | −0.504 | −0.488 | −0.523 | −0.451 | −0.500 | −0.515 | −0.592 | 0.816 | |
| ADI | −0.118 | −0.173 | −0.196 | −0.189 | −0.227 | −0.167 | −0.235 | −0.169 | −0.377 | 0.341 | 0.859 |
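The diagonal entries in Table 8 correspond to the square roots of the AVE values in Table 7 (for example, √0.611 ≈ 0.782 for MIS). Under the Fornell–Larcker criterion, discriminant validity requires each diagonal value to exceed the construct’s correlations with all other constructs. The snippet below illustrates this check for MIS using the values reported above; extending it to all constructs is straightforward.

```python
# Fornell-Larcker check for one construct (MIS), using values from Tables 7 and 8.
import numpy as np

ave_mis = 0.611
mis_correlations = [0.332, 0.362, 0.262, 0.311, 0.352, 0.240, 0.346,
                    0.424, -0.377, -0.118]   # MIS row/column entries in Table 8

sqrt_ave = np.sqrt(ave_mis)                               # ~0.782, the diagonal entry
passes = all(sqrt_ave > abs(r) for r in mis_correlations)
print(f"sqrt(AVE) = {sqrt_ave:.3f}; discriminant validity for MIS: {passes}")
```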
Table 9. Model Fit Indices for Structural Equation Modeling (SEM).

| Fit Index | Recommended Threshold | Observed Value | Evaluation Result |
|---|---|---|---|
| CMIN/DF | <3 | 2.175 | Excellent |
| GFI | >0.80 | 0.913 | Excellent |
| AGFI | >0.80 | 0.892 | Excellent |
| RMSEA | <0.08 | 0.045 | Excellent |
| NFI | >0.9 | 0.909 | Excellent |
| IFI | >0.9 | 0.948 | Excellent |
| TLI | >0.9 | 0.939 | Excellent |
| CFI | >0.9 | 0.948 | Excellent |
| PNFI | >0.5 | 0.778 | Excellent |
| PCFI | >0.5 | 0.811 | Excellent |
Table 10. Path Coefficients and Hypothesis Testing Results.

| Path | Path Coefficient | S.E. | C.R. | p |
|---|---|---|---|---|
| PR <--- MIS | 0.161 | 0.060 | 3.420 | <0.001 |
| PR <--- ACC | 0.137 | 0.059 | 2.738 | 0.006 |
| PR <--- ALB | 0.011 | 0.058 | 0.197 | 0.844 |
| PR <--- CRE | 0.037 | 0.055 | 0.792 | 0.429 |
| PR <--- PRI | 0.131 | 0.056 | 2.530 | 0.011 |
| PR <--- JOD | 0.216 | 0.052 | 4.322 | <0.001 |
| PR <--- ETR | 0.150 | 0.048 | 3.079 | 0.002 |
| PR <--- CON | 0.136 | 0.053 | 2.688 | 0.007 |
| TR <--- MIS | −0.016 | 0.053 | −0.382 | 0.702 |
| TR <--- ACC | −0.098 | 0.052 | −2.135 | 0.033 |
| TR <--- ALB | −0.113 | 0.051 | −2.339 | 0.019 |
| TR <--- CRE | −0.195 | 0.049 | −4.489 | <0.001 |
| TR <--- PRI | −0.145 | 0.050 | −3.057 | 0.002 |
| TR <--- JOD | −0.026 | 0.047 | −0.567 | 0.571 |
| TR <--- ETR | −0.148 | 0.043 | −3.294 | <0.001 |
| TR <--- CON | −0.133 | 0.047 | −2.864 | 0.004 |
| TR <--- PR | −0.234 | 0.050 | −4.503 | <0.001 |
| ADI <--- PR | −0.259 | 0.070 | −4.379 | <0.001 |
| ADI <--- TR | 0.187 | 0.071 | 3.204 | 0.001 |
