1. Introduction
The COVID-19 pandemic accelerated digital transformation across media industries worldwide. Before the crisis, media industries operated through established production workflows and centralized governance structures; under crisis conditions, professionals rapidly adopted ICT tools and resilience-oriented practices, often bypassing systematic governance frameworks [1,2,3,4,5,6,7]. As a result, the post-pandemic era presents the challenge of institutionalizing these adaptive behaviors into durable community-driven oversight mechanisms. The emergence of advanced generative AI technologies exemplifies this governance challenge.
In February 2024, OpenAI unveiled Sora, its first large-scale text-to-video generative model, representing a watershed moment for media professionals globally. The development of AI technology profoundly affects media professionals by fundamentally transforming content production processes and reshaping the entire professional ecosystem. Sora's ability to generate videos of up to 60 seconds directly from text prompts demonstrates AI's potential to dramatically enhance efficiency in content creation, enabling professionals to produce high-quality content with unprecedented speed and minimal resource requirements. However, alongside these efficiency gains, AI technologies introduce critical challenges that threaten professional integrity and public trust [8]. AI's potential to mislead media professionals emerges as a central concern, manifesting through (1) algorithmic bias that skews content recommendations and production decisions, (2) generated misinformation that professionals may unknowingly incorporate into their work, (3) manipulated training data that embed harmful stereotypes or false information into AI outputs, and (4) black-box decision-making processes that obscure how AI systems reach their conclusions, making it difficult for professionals to verify accuracy or appropriateness.
Existing research on AI adoption has predominantly employed technology-centric models that treat governance as an external constraint rather than a behavioral driver. While COVID-era studies [2,3] documented how organizational resilience and ICT adoption shaped operational outcomes in supply chains and micro, small, and medium enterprises (MSMEs), these macro-level analyses cannot explain how individual professionals process governance-related information when deciding whether to adopt AI tools with the potential to mislead them. This study addresses this gap by examining whether individual orientations (AI knowledge, institutional trust) moderate the relationship between information elaboration and adoption intentions—a moderation claim with profound implications for decentralized governance design.
According to global industry reports, video content dominates digital consumption across all major markets, with professionals in Europe, North America, Latin America, and Asia-Pacific facing similar pressures to adopt AI tools while managing associated risks [9]. The fundamental tension between AI's efficiency benefits and its potential to mislead exists regardless of geographical or cultural context. China is a global leader in both video production and consumption. According to [10], by December 2023 the country had 1.074 billion online audiovisual users, representing 98.3% of all internet users and making online audiovisual services the most widely used category of internet application. Video apps also dominate usage time, with short-video platforms ranking highest at 151 min per person per day, followed by long-video platforms at 112 min [11]. This massive market sustains the Chinese video industry and provides livelihoods for millions of content creators. By 2023, China hosted 1.55 billion short-video accounts and 15.08 million full-time live-streamers. On short-video platforms alone, roughly 80 million new videos are uploaded daily, alongside over 3.5 million live-stream sessions [11]. This massive scale provides an ideal empirical setting for examining professional responses to GenAI, although China's specific institutional arrangements and regulatory frameworks do not transfer directly to other national contexts with different governance traditions and media industry structures. The urgency of the governance question reflects the post-pandemic transition noted above: pandemic disruptions accelerated the digitization of media production as professionals rapidly adopted new ICT tools under crisis conditions, often bypassing established governance frameworks in favor of immediate operational continuity.
The rapid spread of AI across industries is irreversible, and public attitudes toward such technologies are more complex than ever. On the one hand, AI sparks public anxiety [12], particularly fears of technological replacement of human labor [13,14]. For content creators, the possibility that Sora may trigger waves of unemployment is especially concerning. Indeed, in May 2023 Hollywood witnessed its first industry-wide strike in 63 years, as writers and actors protested unregulated AI use by streaming platforms, warning that creative jobs such as screenwriting could be replaced [15]. China's short-video industry is arguably even more vulnerable: while AI can already generate scripts, Sora demonstrates that actors, videographers, and other roles might also be displaced, making the risks of disruption far more immediate and tangible for Chinese creators. On the other hand, AI also opens new opportunities. As one popular Reddit post put it, "Having ChatGPT is like carrying a PhD in your pocket". AI tools are designed not to replace humans but to augment human knowledge and decision-making capacity [16]. This technological optimism complicates how content creators view Sora. Rapid adoption of AI tools may help them boost efficiency, reduce workload, and gain competitive advantage, an especially compelling prospect in China's fiercely competitive video industry, where Sora's commercial potential cannot be ignored.
For China's new media sector, Sora embodies both an imminent threat and a possible breakthrough: a technology that could eliminate creative jobs, yet also one that may allow creators to outpace peers in an "evolutionary" environment. This paradox underscores the value of examining how Chinese content creators and media professionals perceive and accept Sora, and critically, how to govern such technologies to counteract their potential to mislead professionals. Effective governance requires mechanisms that can balance oversight with innovation. Existing governance approaches face critical limitations. Traditional regulatory frameworks suffer from regulatory lag that cannot keep pace with rapidly evolving AI technologies, jurisdictional constraints that struggle with globally deployed systems, and limited technical expertise to assess complex algorithms [17,18,19]. Industry self-regulation encounters fundamental conflicts of interest, where profit motives override public protection, and lacks binding enforcement mechanisms [20,21]. Multi-stakeholder governance bodies experience decision-making paralysis, vulnerability to capture by dominant players, and absence of enforcement authority [22,23,24].
In contrast, blockchain-based decentralized autonomous organizations (DAOs) offer unique advantages that directly address these limitations. DAOs provide algorithmic transparency and auditability through immutable on-chain records of governance decisions and algorithmic parameters [25,26]. Distributed consensus mechanisms aggregate expert knowledge while preventing single-entity dominance [27,28]. Programmable enforcement via smart contracts automatically executes community decisions without centralized authorities [29,30]. Economic incentive alignment through tokenomics rewards quality oversight and long-term community benefit [31,32]. Furthermore, cross-border coordination enables global participation while maintaining regulatory interoperability [33,34]. Such blockchain-based communities can develop collective norms and rules that guide AI algorithm behavior through transparent governance protocols; implement real-time auditing systems in which community members continuously monitor AI outputs for bias, misinformation, or harmful content; create accountability frameworks with smart contracts that automatically penalize AI systems producing misleading content and reward accurate, ethical outputs; and establish professional standards through community consensus, ensuring AI tools enhance rather than compromise professional integrity.
The community governance frameworks we propose based on our behavioral findings apply universally across different media markets and regulatory environments. DAOs can operate across jurisdictional boundaries while respecting local regulations, enabling international cooperation in AI oversight. Media professionals worldwide can participate in these decentralized governance systems regardless of their geographic location, contributing to collective intelligence that benefits the global media ecosystem. Although scholarship on AI adoption is expanding, most studies rely on technology-centered models while neglecting how behavioral insights can inform community-based governance design for addressing AI's potential to mislead professionals. Our study addresses this gap by applying the O-S-O-R model to identify behavioral patterns that inform the design of DAO-based governance systems. Specifically, we show how the knowledge paradox (where higher AI expertise correlates with increased skepticism through information elaboration) necessitates governance mechanisms that properly weight expert skepticism through token-weighting; how strong benefit-perception effects require transparent verification of AI benefits and risks through blockchain's immutable audit trails; and how variation in information-processing patterns demands multi-token architectures with differentiated participation mechanisms for technical experts, community validators, and affected stakeholders.
Based on our empirical findings, we propose a multi-layered DAO structure:
Technical Assessment DAOs composed of AI experts who evaluate algorithm design and identify potential bias sources;
Content Verification DAOs, where media professionals collectively verify AI-generated content accuracy and appropriateness;
Ethics Oversight DAOs that establish and enforce professional standards for AI use in media;
Community Impact DAOs that assess how AI deployment affects different stakeholder groups and adjust governance policies accordingly.
To address these gaps, this study applies the Orientation–Stimulus–Orientation–Response (O-S-O-R) model to investigate how Chinese content creators and media professionals perceive and respond to Sora. Specifically, it examines how they interpret this disruptive tool, why they may resist or embrace it, and what their responses reveal about broader debates over AI governance. Our findings inform the design of blockchain-based governance mechanisms that could address the identified behavioral patterns through decentralized decision-making structures.
2. Theoretical Framework
2.1. The O-S-O-R Model and Decentralized Information Processing
The Orientation–Stimulus–Orientation–Response (O-S-O-R) model is widely applied to explain how media messages influence individual behavioral intentions. Within this framework, O1 represents pre-orientation variables, S denotes stimuli, O2 refers to post-orientation variables influenced by those stimuli, and R stands for behavioral response. In essence, the model illustrates how individuals' pre-existing orientations shape their exposure to certain stimuli, which in turn affects their subsequent orientations and ultimately determines their behavioral outcomes [35].
Crucially, the O-S-O-R framework maps directly onto blockchain-based governance mechanisms, enabling a behavioral foundation for decentralized AI oversight. Individual orientations (O1) correspond to stakeholder characteristics that determine participation roles in decentralized autonomous organizations (DAOs): professionals with high AI knowledge (measured through technical assessments) hold EXPERT tokens and evaluate algorithm design, while professionals with domain expertise in content creation hold COMMUNITY tokens and validate outputs. Stimuli (S) represent information-processing activities enhanced through blockchain-based transparency: algorithm audit reports, content-flagging events, and governance proposals qualify as stimuli that trigger community review when they exceed predefined risk thresholds encoded in smart contracts. Post-orientation variables (O2), such as credibility assessments, risk perceptions, and benefit evaluations, align with the collective intelligence formation that DAOs facilitate through smart contract-mediated consensus mechanisms: when individual O2 evaluations aggregate to form collective risk scores above threshold levels, smart contracts automatically trigger on-chain checks, including mandatory third-party audits, temporary algorithm pauses, or escalated review procedures. Finally, behavioral responses (R) translate into governance participation and token-based decision-making: adoption intentions manifest as on-chain votes, staking behaviors, and participation in validation activities that characterize Web3 ecosystems [25].
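To make this mapping concrete, the sketch below illustrates (in Python pseudocode rather than an on-chain language) how individual O2 risk evaluations might be aggregated into a collective score that triggers escalating checks once it crosses thresholds encoded in a contract. All names, weights, and threshold values are illustrative assumptions, not parameters from the study.

```python
from dataclasses import dataclass

# Illustrative thresholds; real values would be set by DAO vote.
AUDIT_THRESHOLD = 0.6   # collective risk score forcing a third-party audit
PAUSE_THRESHOLD = 0.8   # collective risk score pausing the algorithm

@dataclass
class Evaluation:
    """One member's O2 assessment of an AI system."""
    risk_score: float   # perceived risk in [0, 1]
    reputation: float   # weight from past assessment accuracy

def collective_risk(evaluations: list[Evaluation]) -> float:
    """Reputation-weighted mean of individual risk scores (O2 aggregation)."""
    total_weight = sum(e.reputation for e in evaluations)
    if total_weight == 0:
        return 0.0
    return sum(e.risk_score * e.reputation for e in evaluations) / total_weight

def triggered_actions(evaluations: list[Evaluation]) -> list[str]:
    """Map the collective O2 score onto the escalating on-chain checks."""
    score = collective_risk(evaluations)
    actions = []
    if score >= AUDIT_THRESHOLD:
        actions.append("mandatory third-party audit")
    if score >= PAUSE_THRESHOLD:
        actions.append("temporary algorithm pause")
    return actions

# Example: three members, one highly reputed skeptic.
members = [Evaluation(0.9, 2.0), Evaluation(0.5, 1.0), Evaluation(0.4, 1.0)]
print(collective_risk(members))   # 0.675 -> audit triggered, no pause
print(triggered_actions(members))
```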
Specifically, pre-orientation variables (O1) typically capture internal personal characteristics such as personality traits or values, which influence how individuals engage with stimuli (S). Stimuli represent environmental factors that trigger emotional or cognitive reactions, often in the form of mediated information. Post-orientation variables (O2) describe the cognitive processes through which individuals interpret and evaluate stimuli prior to forming a behavioral response (R) [36]. Although the O-S-O-R framework has been less frequently applied to studies of technology adoption, it is particularly suitable for examining how content creators and professionals in the Chinese media sector perceive Sora. The model helps to reveal how individual characteristics and information-processing behaviors shape acceptance intentions toward this disruptive AI technology.
First, the O-S-O-R model emphasizes the role of information environments in shaping attitudes and behavioral intentions. Since Sora has not yet been officially released, most content creators' knowledge and perceptions of the technology derive from their information environments, especially social media platforms. These platforms provide ample opportunities for encountering and discussing technology-related topics [37,38] and have become a primary source of AI-related information. Moreover, the ways in which AI technologies are presented in news and media coverage strongly influence and reflect public attitudes [39]. Using the O-S-O-R framework therefore allows this study to examine how information environments, via mediating mechanisms, shape intentions to adopt Sora.
Second, the model highlights the importance of personal characteristics (O1) in shaping responses to technology. Given the uncertainty and opacity of AI, individuals who are more risk-tolerant, more willing to learn, or more motivated by curiosity are more likely to adopt AI tools [40]. By emphasizing O1, the O-S-O-R model offers a useful lens for understanding how differences among Chinese content creators and professionals affect their engagement with information about Sora, and how these differences ultimately shape their acceptance intentions [41].
This behavioral governance approach builds on resilience mechanisms that emerged during the COVID-19 pandemic, institutionalizing effective crisis adaptations into durable oversight structures. Recent COVID-era studies [2,6,7] reveal organizational adaptations that parallel the DAO mechanisms we propose. These mechanisms align with our benefit-perception pathways (H4, H7), where transparent DAO governance enhances professionals' perceived benefits by providing algorithmic accountability that centralized systems lack.
2.2. Blockchain-Based Governance Theory
The behavioral patterns captured by the O-S-O-R model have direct applications in blockchain governance system design. Traditional centralized AI development creates information asymmetries where technology companies control algorithm design, training data, and deployment policies while communities bear the consequences. These asymmetries manifest as the behavioral paradoxes we observe in adoption studies.
Smart Contract Governance Mechanisms: Blockchain networks enable programmable governance through smart contracts that can automate decision-making based on community consensus. Unlike centralized systems where corporate executives make unilateral decisions about AI deployment, smart contracts can implement multi-stakeholder voting mechanisms that weight different perspectives based on expertise, stake, and community contribution [27,29]. However, smart contract governance faces notable limitations, including (1) code vulnerabilities that can be exploited, as demonstrated by The DAO hack in 2016, in which $60 million was stolen through a recursive-call vulnerability [42]; (2) immutability challenges when bugs are discovered post-deployment; and (3) high technical barriers that limit broad participation [43].
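The sketch below gives a minimal, off-chain Python illustration of the kind of multi-stakeholder voting such contracts could encode, with separate weights for expertise, stake, and community contribution. The weighting formula and all values are assumptions for exposition, not a specification from the literature cited above.

```python
from dataclasses import dataclass

@dataclass
class Voter:
    expertise: float     # e.g., score from credential verification
    stake: float         # governance tokens held
    contribution: float  # accumulated community-contribution score
    approves: bool       # vote on the proposal

def voting_weight(v: Voter, a: float = 0.5, b: float = 0.3, c: float = 0.2) -> float:
    """Blend the three perspectives; a/b/c are illustrative weights that a
    DAO would itself set (and could renegotiate) by governance vote."""
    return a * v.expertise + b * v.stake + c * v.contribution

def proposal_passes(voters: list[Voter], quorum: float = 0.5) -> bool:
    """Pass if weighted approval exceeds the quorum share of total weight."""
    total = sum(voting_weight(v) for v in voters)
    approve = sum(voting_weight(v) for v in voters if v.approves)
    return total > 0 and approve / total > quorum

# A high-stake holder is outvoted by two experts under this weighting.
voters = [
    Voter(expertise=0.9, stake=0.1, contribution=0.5, approves=False),
    Voter(expertise=0.8, stake=0.2, contribution=0.6, approves=False),
    Voter(expertise=0.1, stake=1.0, contribution=0.1, approves=True),
]
print(proposal_passes(voters))  # False: expertise outweighs raw stake here
```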
Decentralized Autonomous Organizations (DAOs): DAOs represent organizational structures in which governance rights are distributed among token holders who participate in transparent, blockchain-recorded decision-making processes. For AI governance, DAOs can implement (1) technical assessment committees with domain experts, (2) community impact evaluation groups, (3) risk-monitoring DAOs that continuously audit AI system behavior, and (4) benefit distribution mechanisms that ensure fair value capture from AI-generated content [26]. Recent advances in multi-agent blockchain systems demonstrate how these governance structures can be scaled and secured through collaborative AI agents that facilitate automated consensus mechanisms [44]. Nevertheless, empirical evidence from existing DAOs reveals persistent challenges. Polkadot's governance system, despite its sophisticated design, experiences voter participation rates below 15% for most proposals, with token concentration enabling governance by a small number of whale addresses [45]. MakerDAO similarly struggles with governance-capture concerns, where technical complexity and voter apathy result in effective control by a minority of highly engaged token holders. These patterns suggest that DAO governance may replicate rather than resolve the concentration of power observed in traditional corporate structures.
Token Economics for Participation Incentives: Blockchain systems implement sophisticated incentive mechanisms through tokenomics that address the participation barriers revealed by behavioral research. Governance tokens reward quality information processing, expert knowledge contribution, and long-term community benefit rather than short-term adoption optimization [31]. However, token-based governance introduces plutocratic tendencies in which economic stake translates directly into voting power, potentially marginalizing stakeholders who are most affected by decisions but hold fewer tokens [46]. Additionally, token markets create speculative dynamics that misalign governance incentives, as short-term traders influence long-term protocol decisions [47].
Information Processing in Blockchain Ecosystems: The O-S-O-R model's emphasis on information elaboration (S → O2) closely aligns with blockchain's transparency requirements. On-chain governance creates immutable records of all discussions, proposals, and decisions, enabling community members to engage in deep information processing about AI technologies. Unlike social media platforms where algorithms curate information exposure, blockchain governance platforms implement community-controlled information curation that addresses the expert skepticism we observe in our findings. Yet blockchain's transparency creates trade-offs:
Privacy concerns, as all governance activities are publicly visible [48];
Potential for vote-buying or coordinated manipulation attacks [49];
Deterrence of candid deliberation, since participants recognize that all statements are immutably recorded [50].
Multi-Token Governance Architecture: Based on these behavioral insights, we propose differentiated token systems:
Expertise Tokens earned through demonstrated technical knowledge and awarded to participants who provide accurate risk assessments;
Community Tokens distributed to stakeholders affected by AI deployment decisions;
Validation Tokens earned through quality information processing and peer verification activities.
This multi-token approach addresses the knowledge paradox by ensuring that expert skepticism is properly weighted in governance outcomes. However, multi-token systems introduce additional complexity that exacerbates participation barriers and can create new forms of inequality if certain stakeholder groups systematically struggle to earn particular token types [51]. The interoperability and exchange rates between token types also create design challenges without clear optimal solutions [52].
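As a minimal illustration of one possible design, the Python sketch below combines the three token types listed above into a capped voting weight, using a square-root (quadratic-voting-style) transform to dampen plutocratic accumulation. The combination rule, multipliers, and cap are hypothetical design choices, not prescriptions from the cited work.

```python
import math

# Hypothetical per-member balances for the three token types proposed above.
member = {"EXPERTISE": 400.0, "COMMUNITY": 100.0, "VALIDATION": 25.0}

# Per-type multipliers a DAO might tune by governance vote (assumed values).
MULTIPLIERS = {"EXPERTISE": 1.5, "COMMUNITY": 1.0, "VALIDATION": 1.2}
WEIGHT_CAP = 50.0  # ceiling limiting any single member's influence

def voting_weight(balances: dict[str, float]) -> float:
    """Square-root each balance so influence grows sublinearly with holdings,
    then apply type multipliers and an overall cap."""
    raw = sum(MULTIPLIERS[t] * math.sqrt(b) for t, b in balances.items())
    return min(raw, WEIGHT_CAP)

print(voting_weight(member))  # 1.5*20 + 1.0*10 + 1.2*5 = 46.0
```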
2.3. Smart Contract Implementation for AI Governance
Automated Policy Execution: Smart contracts implement community decisions about AI usage policies without requiring centralized enforcement. For example, if a DAO votes to restrict certain types of AI-generated content, smart contracts automatically flag and filter such content based on cryptographic verification of AI model signatures and content provenance [53]. Advanced AI agents enhance these systems by providing intelligent data annotation and transformation capabilities that improve policy-enforcement accuracy [54]. However, automated policy execution through smart contracts faces practical limitations of its own.
Content Authenticity Verification: Blockchain systems create immutable provenance records for AI-generated content, addressing trust concerns revealed in our study. Every AI-generated video includes cryptographic signatures indicating the model used, the parameters applied, and the human oversight involved, with this information permanently recorded on-chain for community verification. While this approach offers enhanced transparency, it also encounters limitations:
Provenance systems only verify that content metadata matches blockchain records, not whether the content itself is truthful or appropriate [57].
Adversaries can create authentic records of malicious content, making verification of technical provenance distinct from content quality assurance [58].
Widespread adoption requires industry coordination and standardization that proves difficult to achieve given competitive dynamics among AI providers [57].
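The sketch below illustrates the basic provenance pattern described above: hash the content, record the metadata, and later check that a presented video still matches its on-chain record. It uses Python's standard hashlib and a placeholder signature step; the field names and signing scheme are illustrative assumptions, and, as noted above, a matching record says nothing about whether the content is truthful.

```python
import hashlib
import json

def make_provenance_record(video_bytes: bytes, model_id: str,
                           params: dict, signer: str) -> dict:
    """Build the record that would be written on-chain for one video."""
    record = {
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),
        "model_id": model_id,          # which generative model produced it
        "params": params,              # generation parameters applied
        "human_reviewer": signer,      # who attested to human oversight
    }
    # Placeholder for a real cryptographic signature (e.g., an ECDSA
    # signature by the reviewer's key); here we just hash the payload.
    record["signature"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def matches_record(video_bytes: bytes, record: dict) -> bool:
    """Verify only that the content matches its on-chain metadata."""
    return hashlib.sha256(video_bytes).hexdigest() == record["content_hash"]

video = b"...rendered video bytes..."
rec = make_provenance_record(video, "sora-v1", {"duration_s": 60}, "reviewer-42")
print(matches_record(video, rec))          # True
print(matches_record(b"tampered", rec))    # False
```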
Economic Incentive Alignment: Smart contracts can implement automated benefit-distribution mechanisms in which AI-generated revenue is distributed to community stakeholders based on their governance participation, risk-assessment accuracy, and long-term value contribution rather than simple capital investment. This approach addresses economic concerns about AI displacing human workers by ensuring that communities capture value from AI deployment. Nevertheless, economic mechanism design for AI governance involves unresolved challenges:
Measuring risk-assessment accuracy or long-term value contribution objectively on-chain remains technically difficult [59].
Automated distribution mechanisms can be gamed by participants who optimize for measurable metrics rather than genuine contribution [60].
Legal and tax uncertainty around token-based compensation creates barriers to mainstream adoption, particularly for professional media organizations operating under traditional employment frameworks [61].
In addition, this study introduces AI knowledge as a moderating variable in the S → O2 pathway in order to explore whether individuals’ actual knowledge levels condition how social media information elaboration influences their cognitive evaluations of Sora. Understanding these moderation effects has direct implications for designing inclusive governance mechanisms that account for different expertise levels within communities. Thus, the O-S-O-R model in this study is operationalized as follows:
O1 (Pre-orientation variable): Interest in science and technology (IST).
S (Stimulus): Social media information elaboration (SMIE), defined as the extent to which individuals actively interpret Sora-related content on social platforms.
O2 (Post-orientation variables): Confidence in AI knowledge (AIcon), risk perception (RP), and benefit perception (BP).
R (Response): Intention to adopt Sora.
Moderator: Objective AI knowledge level (AIK).
3. Research Hypotheses
Guided by the O-S-O-R framework and prior literature, we propose the following hypotheses linking information elaboration, knowledge, perceptions, and Sora adoption.
3.1. From Interest in Science and Technology to Social Media Information Elaboration (O1 → S): Blockchain Information Curation
Accordingly, this study introduces social media information elaboration (SMIE) as the stimulus (S) and also as the outcome predicted by interest in science and technology (IST) (O1). Information elaboration refers to the cognitive process of integrating new information with existing knowledge to achieve effective learning [62]. In blockchain governance contexts, this process becomes community-validated information processing, in which multiple stakeholders contribute to collective intelligence formation.
Blockchain-based information systems can enhance traditional social media information processing by implementing (1) cryptographic verification of information sources, (2) community-driven fact-checking with economic incentives for accuracy, and (3) transparent curation algorithms controlled by DAOs rather than corporate platforms. Users with a stronger interest in science and technology are more likely to engage with these enhanced information-processing mechanisms, making them ideal participants in blockchain-based AI governance systems.
Therefore, Hypothesis 1 is proposed:
H1. Interest in science and technology positively influences social media information elaboration about Sora among new media content creators and industry professionals, with implications for their potential participation in blockchain-based AI governance systems.
3.2. From Social Media Information Elaboration to AI Knowledge Confidence, Risk Perception, and Benefit Perception (S → O2): Community Validation Mechanisms
In blockchain governance systems, individual information processing becomes collective intelligence through community validation mechanisms. Smart contracts can aggregate individual assessments into community consensus, weighing contributions based on demonstrated expertise and past accuracy. This addresses the limitations of current social media environments where information quality varies significantly.
Blockchain-Enhanced Knowledge Confidence: Traditional knowledge confidence suffers from information-source opacity and a lack of verification mechanisms. Blockchain systems can implement reputation-based knowledge scoring, in which confidence is validated by community consensus and tracked immutably over time. Contributors who consistently provide accurate assessments earn higher reputation scores, creating sustainable incentives for quality information processing [25].
Decentralized Risk Assessment: Rather than relying on corporate risk assessments, blockchain governance can implement distributed risk evaluation protocols where multiple stakeholders contribute to comprehensive risk matrices. Smart contracts can automatically aggregate these assessments, weighting them based on contributor expertise and historical accuracy. This addresses the risk perception biases observed in centralized information environments.
Community-Driven Benefit Evaluation: Blockchain systems enable transparent benefit tracking where the actual outcomes of AI deployment are recorded on-chain and automatically inform future benefit assessments. Unlike traditional systems where benefit claims cannot be verified, blockchain governance creates auditable records of AI system performance and community impact.
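As a simple illustration of the reputation-based scoring and accuracy-weighted evaluation described above, the Python sketch below updates a contributor's reputation from the accuracy of past assessments using an exponential moving average. The update rule and learning rate are assumed for exposition; a deployed system would encode a community-ratified rule on-chain.

```python
def update_reputation(reputation: float, assessed_risk: float,
                      realized_risk: float, lr: float = 0.2) -> float:
    """Move reputation toward the accuracy of the latest assessment.

    Accuracy is 1 minus the absolute error between the contributor's
    assessed risk and the outcome later recorded on-chain (both in [0, 1]).
    """
    accuracy = 1.0 - abs(assessed_risk - realized_risk)
    return (1 - lr) * reputation + lr * accuracy

# A contributor starts at neutral reputation and assesses two deployments.
rep = 0.5
rep = update_reputation(rep, assessed_risk=0.7, realized_risk=0.8)  # close call
rep = update_reputation(rep, assessed_risk=0.2, realized_risk=0.9)  # big miss
print(round(rep, 3))  # 0.524: reputation falls after the inaccurate assessment
```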
Therefore, Hypotheses 2, 3, and 4 are proposed:
H2. Social media information elaboration about Sora is positively associated with confidence in AI knowledge, with this relationship enhanced in blockchain-based governance systems through community validation mechanisms.
H3. Social media information elaboration about Sora is negatively associated with risk perception of Sora, with blockchain governance enabling more accurate collective risk assessment through distributed evaluation protocols.
H4. Social media information elaboration about Sora is positively associated with benefit perception of Sora, with blockchain systems providing transparent verification of benefit claims through immutable outcome tracking.
3.3. From AI Knowledge Confidence, Risk Perception, and Benefit Perception to Intention to Adopt Sora (O2 → R)
Confidence in one's own knowledge exerts a significant influence on judgment and decision-making [63,64,65,66,67,68,69,70,71,72]. Subjective confidence in one's computer-security knowledge has a greater impact on user behavior than actual knowledge, as what people believe they know influences adoption more than what they truly know [64]. For example, individuals with IT backgrounds may be overconfident about their ability to detect deepfakes [73]. Evidence links subjective knowledge confidence with more positive views of technology [74]. Thus, if content creators are more confident in their AI knowledge, they are more inclined to adopt AI tools at work, such as Sora. Hence, Hypothesis 5 is proposed.
H5. Confidence in AI knowledge positively predicts intention to adopt Sora.
Research also shows that risk perception (RP) and benefit perception (BP) are key predictors of new-technology acceptance [75,76,77]. When evaluating new technologies, risk and benefit perceptions tend to be negatively correlated [78,79], perhaps because people automatically down-weight risks and up-weight benefits to avoid cognitive dissonance when they see a technology as advantageous [78]. Humans generally seek high-benefit, low-cost options; lower risk perception and higher benefit perception positively predict acceptance across technologies [80,81,82]. For content creators, introducing AI tools into their work is not risk-free. Fear of AI-induced unemployment is widely noted across countries [83]. Yet even in the face of risks, perceived benefits exert a stronger positive effect on AI-adoption intentions than perceived risks [81]. Hence, Hypotheses 6 and 7 are proposed.
H6. Risk perception of Sora negatively predicts intention to adopt Sora.
H7. Benefit perception of Sora positively predicts intention to adopt Sora.
3.4. Moderating Role of AI Knowledge
Existing research on AI knowledge (AIK) mainly relies on surveys to assess respondents' AI knowledge levels [84,85,86,87]. Some studies examine relationships between AI knowledge and attitudes toward or trust in AI, but findings are inconsistent. For instance, one study found that knowledge about autonomous vehicles correlates positively with willingness to ride in them—consistent with the intuition that knowledge helps people grasp small risks and large benefits, thus facilitating acceptance [88]. By contrast, a study of AI in healthcare work reported no direct relationship between physicians' AI knowledge and their adoption of AI tools; even doctors with limited knowledge still tended to adopt AI in practice [89]. These contradictions may stem from multiple sources: (a) heterogeneous measurement approaches, with some studies employing objective tests and others using subjective questionnaires; (b) domain specificity, since AI applications span many sectors, making a single standardized scale difficult; and (c) indirect effects, as insufficient attention to the mediating or moderating roles of AI knowledge may contribute to divergent conclusions. Therefore, this study not only assesses objective AI knowledge among Chinese new media workers in their professional domain but also explores the mechanisms by which AI knowledge operates in the adoption process. Accordingly, Research Question 1 is posed.
RQ1. Does AI knowledge (AIK) moderate the effects of social media information elaboration (SMIE) on AI knowledge confidence (AIcon), risk perception (RP), and benefit perception (BP)?
3.5. Demographic Correlates of Adoption Intentions
In technology adoption research, demographics—such as gender, age, income, and education—are long-standing predictors. For example, gender and age significantly influence mobile technology adoption [90,91]. In autonomous-vehicle research, higher-income, younger, and more educated individuals are more willing to pay, while gender effects are sometimes null [92], though other studies find men more likely to adopt [93]. Higher education is associated with greater technological knowledge [94,95] and faster adoption, and is also considered an important moderator in autonomous-vehicle acceptance [96]. Given the work-domain focus of this study, we include industry tenure in the new media sector as a factor. Although prior research has not directly linked tenure to AI adoption, studies in the hospitality sector show significant moderating effects of industry tenure on technology acceptance [97]. Thus, Research Question 2 is posed.
RQ2. Do age, gender, education, industry tenure, and income significantly relate to intention to adopt Sora among China’s new media content creators and industry professionals?
The theoretical model is presented in Figure 1.
6. Discussion
The findings of this study reveal critical behavioral patterns that inform the design of blockchain-based governance systems to address AI’s potential to mislead media professionals globally. Our empirical results demonstrate the need for decentralized community governance mechanisms that can establish norms, rules, and strategies to guide AI algorithms and audit their outputs, ensuring proper conduct for media professionals worldwide.
6.1. The Knowledge Paradox: Evidence for Community-Driven AI Oversight
Most critically, our findings reveal a “knowledge paradox” (β = −0.251, p < 0.001) that provides empirical evidence for why centralized AI governance fails media professionals. Those with higher AI knowledge become more skeptical as they engage more deeply with available information, while those with less knowledge show increased confidence. This paradox demonstrates how current centralized information environments may mislead even knowledgeable professionals by creating systematic exclusion of expert perspectives.
DAO-Based Solution: This knowledge paradox provides strong empirical justification for implementing multi-tiered DAO governance, where
Expert Assessment DAOs capture technical knowledge through specialized governance tokens that weight expert skepticism as valuable input rather than an obstacle to adoption;
Community Validation DAOs ensure broader stakeholder perspectives are included in AI oversight decisions;
Cross-DAO Consensus Mechanisms use smart contracts to reconcile different perspectives and prevent any single viewpoint from dominating AI governance decisions.
Universal Applicability: This knowledge paradox pattern likely exists among media professionals globally, not just in China, making our DAO governance solutions applicable across different countries and media markets. A consolidated mapping from these empirical patterns to concrete DAO governance components is provided in Table 6.
6.2. Community-Driven AI Algorithm Auditing Systems
Our findings on information elaboration patterns (β = 0.294 for confidence, β = 0.486 for benefit perception, β = 0.231 for risk perception) reveal how media professionals process AI-related information. These behavioral patterns inform the design of DAO-based auditing systems that can prevent AI from misleading professionals.
Real-Time Algorithm Monitoring DAOs: Implement blockchain-based systems in which community members continuously audit AI outputs through scalable multi-agent architectures that ensure secure collaboration between human validators and automated systems [54]:
Bias Detection Protocols: Smart contracts that automatically flag AI-generated content showing systematic bias against specific demographics, viewpoints, or content types (a minimal sketch follows this list).
Misinformation Verification Networks: DAO members cross-reference AI outputs against verified information sources, with consensus mechanisms determining content accuracy through advanced node-classification algorithms that can evolve with large-scale attributed network data [113].
Professional Standards Enforcement: Community-established rules about appropriate AI use in different media contexts (news vs. entertainment vs. advertising).
Algorithmic Transparency Requirements: Mandate that AI systems provide explainable outputs that DAO members can evaluate for appropriateness.
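The sketch below illustrates the bias-detection idea in Python: compare how often a system's outputs reference each demographic group against community-set tolerance bands and flag deviations for DAO review. The statistic, tolerance value, and group labels are all illustrative assumptions; a production protocol would need a community-ratified fairness metric.

```python
# Hypothetical tolerance band ratified by DAO vote: each group's share of
# AI outputs must stay within +/- 0.10 of its share in a reference corpus.
TOLERANCE = 0.10
reference_share = {"group_a": 0.30, "group_b": 0.45, "group_c": 0.25}

def flag_bias(output_counts: dict[str, int]) -> list[str]:
    """Return groups whose observed share deviates beyond the tolerance,
    i.e., outputs a DAO audit would escalate for human review."""
    total = sum(output_counts.values())
    flags = []
    for group, expected in reference_share.items():
        observed = output_counts.get(group, 0) / total
        if abs(observed - expected) > TOLERANCE:
            flags.append(group)
    return flags

# One audit window of generated content: group_c is underrepresented.
window = {"group_a": 350, "group_b": 520, "group_c": 130}
print(flag_bias(window))  # ['group_c'] -> triggers community review
```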
Global Implementation Strategy: These auditing systems can be deployed across different countries and media markets, with local DAOs adapting global standards to regional contexts while maintaining interoperability through blockchain networks.
6.3. DAO-Established Norms and Rules for AI Guidance
The strong positive effect of benefit perception on adoption intentions (β = 0.051, p < 0.001, f² = 0.854) suggests that media professionals need clear frameworks for realizing AI benefits while avoiding misleading outcomes. DAOs can establish these frameworks through the following mechanisms.
Community Governance Protocols: These outline DAO-governed rules to ensure responsible AI use, such as
Ethical AI Use Standards: DAO-voted guidelines on when and how AI should be used in different media production contexts.
Quality Assurance Requirements: Community-defined minimum standards for AI-generated content before publication.
Disclosure Mandates: Rules requiring transparent labeling of AI-generated or AI-assisted content.
Professional Development Programs: DAO-funded training initiatives to help media professionals use AI tools responsibly. The multi-tier DAO architecture that operationalizes these norms is specified in Table 7.
Smart Contract Implementation: These norms and rules can be automatically enforced through smart contracts that
Block publication of content violating community standards;
Automatically reward media professionals who follow established guidelines;
Provide real-time feedback to help professionals improve their AI use practices;
Create reputation systems that build trust among community members.
Figure 3 illustrates the knowledge paradox solution: compared with traditional arrangements, the DAO design increases professional confidence, especially at higher knowledge levels, aligning with the expert-weighting and community validation logic detailed in Table 7.
Figure 3 compares professional confidence levels under traditional centralized governance versus the proposed DAO-based governance system. Three knowledge groups were constructed based on objective AI knowledge test scores: Low (0–7 correct answers out of 15, n = 71), Medium (8–11 correct, n = 68), and High (12–15 correct, n = 70). Mean confidence scores were calculated for each group under two governance scenarios. Under traditional governance, confidence declines sharply as knowledge increases: 0.80 for low-knowledge professionals, 0.60 for medium-knowledge professionals, and only 0.30 for high-knowledge professionals. This declining pattern reflects the knowledge paradox identified in our empirical results (β = −0.251, p < 0.001), where greater AI expertise amplifies skepticism toward centralized governance systems. Under DAO governance, confidence instead increases with knowledge: 0.70 for low-knowledge professionals, 0.80 for medium-knowledge professionals, and 0.90 for high-knowledge professionals. The crossover pattern is particularly striking: low-knowledge professionals show slightly higher confidence in traditional systems (0.80 vs. 0.70), while high-knowledge professionals exhibit dramatically higher confidence in DAO systems (0.90 vs. 0.30). This reversal demonstrates that decentralized blockchain-based governance addresses the transparency deficits and expert-exclusion concerns that generate skepticism among knowledgeable professionals, transforming expert awareness from a barrier to adoption into support for properly designed governance mechanisms.
6.4. Blockchain Platform Selection and Technical Architecture
Implementing the multi-tiered DAO architecture specified in Table 7 requires careful selection among available blockchain platforms, each offering distinct trade-offs relevant to AI governance requirements. The literature on blockchain governance systems identifies several platform options with varying characteristics [8].
Ethereum-Based Implementation: Ethereum remains the dominant platform for DAO deployment, offering mature smart contract infrastructure through the Solidity programming language and established governance frameworks, including Compound Governor and OpenZeppelin contracts [114]. Ethereum's advantages for AI governance include
Extensive developer ecosystem and audited contract libraries that reduce security risks;
Proven track record with major DAOs, including MakerDAO and Uniswap governance;
Robust decentralization with over 500,000 validator nodes post-Merge.
However, Ethereum faces limitations, including relatively high gas costs that make frequent governance transactions expensive (averaging $5–50 per transaction depending on network congestion) and throughput constraints of approximately 15–30 transactions per second that limit scalability for real-time content monitoring. For the multi-token architecture proposed in Table 8, Ethereum's ERC-20 token standard provides well-established infrastructure, though managing multiple token types introduces complexity in voting-weight calculations [115].
Polkadot Parachain Architecture: Polkadot offers an alternative approach through its parachain model, which enables specialized blockchain customization while maintaining cross-chain interoperability [116]. For AI governance, Polkadot's advantages include
Substrate framework allowing custom governance logic tailored to media industry requirements;
Shared security model in which parachains benefit from relay-chain validators without maintaining separate validator sets;
Cross-chain message passing (XCMP) that facilitates the cross-border coordination DAOs specified in Table 7.
However, Polkadot introduces trade-offs, including parachain slot acquisition costs (recent auctions exceeded $100 million in bonded DOT tokens), governance complexity as the platform itself undergoes rapid evolution through its own referendum system, and limited smart contract maturity compared with Ethereum. The technical complexity that contributed to low participation rates in Polkadot governance (below 15%, as noted in Section 2.2) raises concerns about whether media professionals would effectively engage with parachain governance [45].
Layer-2 Scaling Solutions: Given Ethereum's gas cost and throughput limitations, Layer-2 solutions, including Optimistic Rollups (Optimism, Arbitrum) and Zero-Knowledge Rollups (zkSync, StarkNet), offer promising alternatives that maintain Ethereum's security guarantees while dramatically reducing costs and increasing throughput [117]. For the real-time audit enforcement DAOs specified in Table 7, Layer-2 platforms provide
Transaction costs reduced by 10–100x compared with Ethereum mainnet, making continuous monitoring economically viable;
Throughput increases to hundreds or thousands of transactions per second, supporting large-scale content verification;
Eventual settlement to Ethereum mainnet, providing security for high-value governance decisions [118].
However, Layer-2 solutions involve trade-offs, including withdrawal delays (typically 7 days for Optimistic Rollups due to fraud-proof windows), additional technical complexity in bridging assets between layers, and fragmented liquidity across multiple Layer-2 networks [119].
Blockchain Feature Selection Rationale: Based on the behavioral findings and governance requirements, we propose a hybrid architecture that leverages different blockchain layers for distinct functions. Core governance decisions, including expert assessment DAO votes on algorithm standards and professional standards DAO policy setting, utilize Ethereum mainnet, where security and decentralization outweigh gas-cost concerns given infrequent but high-stakes decisions [118]. Real-time content monitoring and audit enforcement DAOs operate on Layer-2 solutions, where high throughput and low costs enable continuous operation. Cross-border coordination DAOs leverage cross-chain bridges or Polkadot XCMP to facilitate international cooperation while respecting jurisdictional boundaries [120]. This hybrid approach addresses the blockchain trilemma by optimizing different components for security, scalability, or decentralization based on functional requirements, rather than seeking a single platform that compromises all three dimensions [121].
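The Python sketch below expresses this hybrid routing as a simple decision table mapping each governance function to the layer where it would execute. The function names and layer assignments restate the allocation described in the paragraph above; the dictionary structure itself is only an illustrative way to encode such a policy.

```python
# Routing policy for the hybrid architecture described above:
# high-stakes, infrequent decisions settle on L1; high-frequency
# monitoring runs on L2; international coordination uses bridges/XCMP.
LAYER_POLICY = {
    "algorithm_standard_vote":   "ethereum_mainnet",
    "professional_policy_vote":  "ethereum_mainnet",
    "content_flagging":          "layer2_rollup",
    "audit_enforcement":         "layer2_rollup",
    "cross_border_coordination": "cross_chain_bridge_or_xcmp",
}

def route(action: str) -> str:
    """Return the execution layer for a governance action; unknown or
    novel actions default to the most secure (and most expensive) layer."""
    return LAYER_POLICY.get(action, "ethereum_mainnet")

print(route("content_flagging"))        # layer2_rollup
print(route("emergency_model_freeze"))  # ethereum_mainnet (safe default)
```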
Smart Contract Language and Security Considerations: The choice between Solidity (Ethereum), Rust (Polkadot/Substrate), or other smart contract languages involves trade-offs between developer availability, security tooling, and expressiveness. Given the security-critical nature of AI governance, where flawed smart contracts create vulnerabilities of the kind demonstrated by The DAO hack (Section 2.2), we prioritize platforms with mature auditing infrastructure. Solidity benefits from extensive security tools, including Slither, Mythril, and formal verification frameworks, alongside established audit firms specializing in DAO contracts [122]. However, Solidity's design enables reentrancy attacks and other vulnerabilities that require careful defensive programming. Rust-based smart contracts (used in Substrate/Polkadot and Solana) offer memory-safety guarantees that prevent entire classes of vulnerabilities but involve steeper learning curves and less mature auditing tools [123]. For the proposed AI governance system, we recommend Solidity-based implementation on Ethereum/Layer-2 given the current maturity of the developer ecosystem, with gradual exploration of Rust-based contracts as tooling improves [116].
Consensus Mechanism Implications: The choice of consensus mechanism affects governance participation patterns and energy efficiency. Ethereum's transition from Proof-of-Work to Proof-of-Stake reduced energy consumption by 99.95% while maintaining security, addressing sustainability concerns relevant to media organizations with environmental commitments. Proof-of-Stake's requirement that validators stake capital creates potential plutocratic tendencies (noted in Section 2.2), where wealthy participants disproportionately influence consensus, though this operates at the blockchain layer rather than the DAO governance layer, where token-weighted voting already introduces similar dynamics [124]. Alternative consensus mechanisms, including Delegated Proof-of-Stake (used by EOS and Tron), offer higher throughput but greater centralization, with typically 21–101 validators, trading decentralization for performance in ways that may undermine the governance transparency goals motivating blockchain adoption. For AI governance DAOs, the consensus mechanism primarily affects transaction finality speed and costs rather than governance outcomes, making Ethereum's Proof-of-Stake a reasonable default choice, balanced between decentralization and efficiency [120].
6.5. Implementation Preconditions and Practical Challenges
While the DAO architecture outlined above addresses the behavioral paradoxes identified in our empirical findings, successful implementation requires establishing critical preconditions and confronting practical limitations that blockchain governance inherits or introduces.
Identity and Reputation Systems: Effective DAO governance depends on verifiable identity without compromising privacy. Traditional blockchain pseudonymity enables Sybil attacks, in which single actors create multiple identities to manipulate voting outcomes. For the Expert Assessment DAOs specified in Table 7, verifying credentials such as PhD qualifications and five years of professional experience requires identity infrastructure that links on-chain addresses to real-world credentials. Zero-knowledge proof systems offer promising solutions, enabling participants to cryptographically prove credential possession without revealing personal information [125]. Our findings on the knowledge paradox suggest that reputation must capture not just participation frequency but also demonstrated expertise, requiring sophisticated on-chain metrics that reward quality skepticism and accurate risk assessment rather than mere activity levels.
Anti-Capture Safeguards: Token-weighted voting creates plutocratic risks in which wealthy participants accumulate disproportionate governance power, potentially replicating the centralized control that DAO governance aims to overcome. Empirical evidence from existing DAOs supports these concerns: MakerDAO governance concentrates among large token holders, with the top 10 addresses controlling over 50% of voting power [117], while Polkadot referenda consistently show participation below 15%, with outcomes determined by whale addresses [126]. For AI governance, where professional standards affect livelihoods, token concentration enables capture by technology companies, investors, or other actors whose interests diverge from those of working media professionals. Several anti-capture mechanisms warrant consideration; however, each safeguard introduces complexity and potential gaming vectors that require careful mechanism design informed by ongoing empirical study of DAO governance in practice.
On-Chain Auditing Costs: The real-time algorithm monitoring DAOs specified in Section 6.2 require frequent transactions for content flagging, validation voting, and smart contract execution. Layer-2 solutions reduce costs by 10–100x, as noted in Section 6.4, but still impose non-trivial expenses that accumulate across large-scale operations. Furthermore, computational costs extend beyond transaction fees to include node-operation expenses, data-storage costs (particularly for content hashes and audit trails), and oracle costs for importing off-chain information about AI outputs into smart contracts. These economic realities necessitate hybrid architectures in which only high-stakes decisions, such as algorithm standard changes and professional sanctions, utilize expensive on-chain governance, while routine content validation employs off-chain coordination with periodic on-chain checkpoints [127].
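A minimal sketch of the checkpointing pattern just described: routine validation votes are collected off-chain, and only a periodic hash-based digest of the batch is committed on-chain, so per-item costs stay off the expensive layer. The batch size and digest construction are illustrative assumptions.

```python
import hashlib

BATCH_SIZE = 100  # illustrative: checkpoint after every 100 validations

def digest(items: list[str]) -> str:
    """Hash-chain a batch of off-chain validation records into one
    fixed-size commitment suitable for a single on-chain transaction."""
    h = hashlib.sha256()
    for item in items:
        h.update(hashlib.sha256(item.encode()).digest())
    return h.hexdigest()

pending: list[str] = []   # off-chain buffer of validation votes

def record_validation(vote: str) -> str | None:
    """Buffer a routine vote off-chain; return a checkpoint digest to
    submit on-chain only when the batch is full."""
    pending.append(vote)
    if len(pending) >= BATCH_SIZE:
        checkpoint = digest(pending)
        pending.clear()
        return checkpoint  # one on-chain write replaces 100 individual ones
    return None

result = None
for i in range(100):
    result = record_validation(f"content:{i}:accurate")
print(result[:16], "...")  # the single digest committed on-chain
```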
Centralization Through Token Accumulation: Even with anti-capture safeguards, token economics create accumulation dynamics in which early participants, successful contributors, and financially privileged actors amass governance power over time. The REPUTATION token system in Table 8 rewards accurate assessments and quality contributions, but professionals who participate full-time accumulate more tokens than those balancing governance with content-creation responsibilities, potentially creating governance classes divorced from practical production realities. Addressing these dynamics requires ongoing mechanism adjustment informed by empirical monitoring of token-distribution patterns, complemented by non-token-based governance elements, such as randomly selected review juries or mandatory participation rotation, that ensure governance remains representative despite economic inequalities [117].
6.6. Blockchain-Based Professional Integrity Protection
Our finding that confidence in AI knowledge positively predicts adoption intentions (β = 0.169, p < 0.001) reveals the importance of building legitimate confidence rather than false confidence that could leave professionals misled by AI. DAO governance systems can protect professional integrity as follows.
Reputation-Based Verification Systems: These blockchain-anchored systems verify credibility, incentivize accountability, and support continuous learning through the following components:
Professional Credibility Tokens: Blockchain-based credentials that track media professionals’ accuracy and ethical standards over time.
Peer Review Networks: DAO members evaluate each other’s AI-assisted work, building collective quality assurance.
Accountability Mechanisms: Smart contracts that automatically track the outcomes of AI-assisted content, rewarding professionals whose work maintains high standards.
Continuous Learning Protocols: DAO-organized training programs that help professionals stay current with AI capabilities and limitations.
Global Professional Standards: These protection mechanisms can be implemented across different media markets while respecting local professional norms and regulatory requirements. The supporting token-economics design is summarized in Table 8.
Figure 4 represents the activity-to-token reward matrix (Table 8) as an intensity heatmap showing which tokens most strongly incentivize different governance activities. Six governance activity types (Algorithm Assessment, Content Validation, Integrity Tracking, Audit Participation, Cross-Border Coordination, Training Contribution) form the rows, while six token types (EXPERT, COMMUNITY, REPUTATION, AUDIT, GLOBAL, EDUCATION) constitute the columns. Cell colors indicate reward intensity on a scale from 10 (minimal reward) to 90 (maximum reward). Intensity values reflect behavioral justifications from our empirical findings and the functional alignment between activities and tokens. Algorithm Assessment activities receive maximum rewards (90) for EXPERT and AUDIT tokens, as the knowledge paradox (β = −0.251) demonstrates that expert technical knowledge is essential for identifying algorithmic bias. Content Validation shows high rewards (80–85) for COMMUNITY and AUDIT tokens, reflecting information elaboration effects (β = 0.294–0.486) whereby broad stakeholder engagement improves quality assessment. Integrity Tracking receives maximum REPUTATION token rewards (90), since the confidence-adoption link (β = 0.169) indicates that tracking professional credibility builds legitimate confidence. Cross-Border Coordination shows peak rewards (85) for GLOBAL tokens, aligning with the finding that 66.6% knowledge accuracy suggests universal challenges requiring international cooperation. Training Contribution reaches maximum intensity (90) for EDUCATION tokens, as the model's 58% explanatory power reveals substantial learning needs.
The heatmap pattern reveals functional specialization: bright regions along diagonal-like positions show where token types align with their primary intended activities, while moderate intensities (50–60) in off-diagonal positions indicate secondary incentives that encourage cross-functional participation without diluting specialized expertise. This intensity distribution ensures that participants receive the strongest rewards when contributing to activities matching their demonstrated capabilities, while maintaining sufficient incentives for broader engagement that prevents governance silos.
6.7. Universal Implementation Framework for Global Media Industries
Our knowledge test, showing 66.6% accuracy among Chinese media professionals, suggests that similar challenges likely exist globally. The knowledge paradox we identify transcends cultural boundaries, making our DAO governance solutions applicable worldwide and across adjacent creative industries facing parallel AI governance challenges.
Generalization Beyond Media to Creative Industries: The behavioral mechanisms and governance design principles derived from our study of media professionals extend naturally to adjacent creative sectors confronting generative AI disruption. Music production faces AI composition tools such as Suno and Udio that generate complete tracks from text prompts, raising authenticity and attribution challenges analogous to video generation. Visual arts and illustration communities confront Midjourney, DALL-E, and Stable Diffusion, systems that generate images competing with human artists while training on copyrighted works without consent [128]. Game development encounters AI systems that generate code, 3D assets, and narrative content, transforming production pipelines [129]. Across these domains, professionals exhibit similar behavioral patterns: expertise creates awareness of AI limitations and governance gaps that reduces adoption confidence under centralized control, while benefit perceptions drive adoption when appropriate oversight exists. The multi-token DAO architecture in Table 8 generalizes directly (a configuration sketch follows this paragraph): EXPERT tokens for domain specialists (music producers, illustrators, game designers), COMMUNITY tokens for broader practitioner participation, REPUTATION tokens tracking contribution quality, AUDIT tokens incentivizing real-time output monitoring, GLOBAL tokens coordinating cross-border standards, and EDUCATION tokens rewarding skill development. Industry-specific adaptations require only adjusting threshold criteria and audit priorities while maintaining the core governance structure, enabling a federated implementation in which music DAOs, visual arts DAOs, and game development DAOs operate independently while sharing anti-capture mechanisms and identity infrastructure through cross-chain interoperability protocols.
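As a hedged illustration of this federated design, the sketch below keeps the six Table 8 token types fixed while varying only threshold criteria and audit priorities per sector. The sector names, threshold values, and audit targets are hypothetical placeholders, not specifications from this study.

```python
# Sketch of per-sector parameterization of the shared Table 8 token
# architecture. All thresholds and audit priorities below are illustrative
# assumptions; only the six token types come from the paper.

from dataclasses import dataclass

# The shared token set every sector DAO issues (Table 8).
TOKEN_TYPES = ("EXPERT", "COMMUNITY", "REPUTATION", "AUDIT", "GLOBAL", "EDUCATION")

@dataclass(frozen=True)
class SectorDAOConfig:
    sector: str
    expert_threshold: int               # e.g., years of verified domain experience (hypothetical)
    audit_priorities: tuple[str, ...]   # AI output categories auditors monitor first (hypothetical)

CONFIGS: dict[str, SectorDAOConfig] = {
    "media": SectorDAOConfig("media", 5, ("generated_video", "deepfakes")),
    "music": SectorDAOConfig("music", 5, ("generated_tracks", "voice_cloning")),
    "visual_arts": SectorDAOConfig("visual_arts", 3, ("style_mimicry", "training_data_provenance")),
    "games": SectorDAOConfig("games", 4, ("generated_code", "generated_assets")),
}

if __name__ == "__main__":
    # Each sector DAO differs only in its parameters; the token set is shared,
    # which is what allows federation across sector DAOs.
    cfg = CONFIGS["music"]
    print(cfg.sector, cfg.expert_threshold, cfg.audit_priorities)
    print(TOKEN_TYPES)
```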
Post-Pandemic Resilience into Scalable Protocols: The DAO governance framework we propose institutionalizes organizational adaptations that emerged during COVID-19 disruptions, transforming temporary crisis responses into permanent governance infrastructure. Recent empirical studies [2,5,7] document resilience mechanisms that parallel our DAO design. By encoding these crisis-tested resilience behaviors into immutable smart contracts and token economics, blockchain governance transforms adaptive responses that organizations adopted under pandemic pressure into durable institutional features that persist beyond crisis conditions. This addresses the post-pandemic governance challenge identified in our introduction: whereas crisis-driven ICT adoption bypassed systematic oversight in favor of operational continuity, the current phase demands institutionalizing effective practices into frameworks that balance innovation with professional protection.
Cross-Border DAO Networks: These networks enable coordinated international governance, local adaptation, shared resources, and joint incident response across jurisdictions through the following components (a minimal rule-resolution sketch follows the list):
International Coordination Protocols: Blockchain systems that enable cooperation between DAOs in different countries while respecting local regulations, leveraging advanced cross-blockchain data migration techniques to ensure seamless information sharing [130].
Cultural Adaptation Mechanisms: Smart contracts that automatically adjust governance rules based on regional contexts while maintaining core professional standards.
Resource Sharing Systems: DAOs can share best practices, training materials, and auditing tools across international networks using content-centric software-defined approaches that optimize information distribution [131].
Global Incident Response: Coordinated DAO response to AI systems that mislead professionals across multiple jurisdictions.
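The Cultural Adaptation Mechanisms above can be read as a merge of invariant core standards with jurisdiction-specific parameters. The sketch below illustrates that resolution order under assumed rule names, regions, and values; none of these come from the paper, and a production version would live in a smart contract rather than off-chain Python.

```python
# Sketch of cultural adaptation: regional overrides adjust governance
# parameters, but core professional standards always take precedence.
# All regions, rule names, and values are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class CoreStandards:
    """Invariant standards every regional DAO must enforce (hypothetical)."""
    disclose_ai_generated_content: bool = True
    audit_trail_required: bool = True
    min_expert_reviewers: int = 3

# Region-specific governance parameters (hypothetical values).
REGIONAL_OVERRIDES: dict[str, dict[str, object]] = {
    "EU": {"data_retention_days": 30, "quorum_fraction": 0.5},
    "APAC": {"data_retention_days": 90, "quorum_fraction": 0.4},
}

DEFAULTS: dict[str, object] = {"data_retention_days": 60, "quorum_fraction": 0.5}

def resolve_rules(region: str) -> dict[str, object]:
    """Merge defaults, regional overrides, and non-negotiable core standards."""
    rules: dict[str, object] = dict(DEFAULTS)
    rules.update(REGIONAL_OVERRIDES.get(region, {}))
    core = CoreStandards()
    # Applied last: a regional override can never weaken core standards.
    rules.update({
        "disclose_ai_generated_content": core.disclose_ai_generated_content,
        "audit_trail_required": core.audit_trail_required,
        "min_expert_reviewers": core.min_expert_reviewers,
    })
    return rules

if __name__ == "__main__":
    print(resolve_rules("EU"))     # EU retention/quorum, core standards intact
    print(resolve_rules("LATAM"))  # falls back to defaults, core standards intact
```

The design choice worth noting is the application order: defaults, then regional overrides, then core standards, so local adaptation can only tune parameters within the envelope the core standards define.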
Implementation Roadmap: The rollout proceeds in four phases, from pilot DAOs to interoperable, globally scaled, and regulator-aligned governance:
Phase 1: Establish pilot DAOs in major media markets (North America, Europe, Asia-Pacific) and adjacent creative industries (music, visual arts, game development).
Phase 2: Develop interoperability protocols between regional and sector-specific DAOs, sharing identity infrastructure and anti-capture mechanisms.
Phase 3: Scale governance systems to cover global creative economies while federating authority to respect jurisdictional and cultural differences.
Phase 4: Integrate with existing regulatory frameworks while maintaining decentralized oversight capabilities, clarifying the complementary roles of blockchain governance and statutory enforcement.
Necessary Role of Statutory Regulation: While blockchain-based DAO governance addresses the transparency, participation, and accountability challenges that centralized systems create, decentralized mechanisms cannot replace statutory regulation entirely. Legal enforcement of criminal violations such as fraud, defamation, and copyright infringement requires state authority with coercive power that smart contracts lack [132]. Cross-border disputes involving conflicting national laws exceed DAO jurisdiction, necessitating international legal frameworks and treaties. Consumer protection regulations, including data privacy requirements (GDPR, CCPA) and accessibility standards, demand governmental oversight with audit authority and penalty mechanisms beyond blockchain capabilities [133]. Systemic risks, where AI-generated misinformation threatens public health, election integrity, or national security, warrant centralized intervention when decentralized coordination proves insufficient. The optimal governance architecture therefore combines decentralized DAO oversight for algorithmic transparency, professional standards, and community coordination with statutory regulation for legal enforcement, consumer protection, and crisis response. This hybrid model positions DAOs as a complement to, rather than a replacement for, traditional regulation, addressing the governance gaps and participation deficits that centralized approaches create while preserving state capacity for functions requiring legal authority. Future research should examine integration mechanisms that enable information flow between DAO governance systems and regulatory agencies, specify escalation protocols for transferring issues from blockchain to legal jurisdiction when appropriate, and assess how decentralized oversight can inform statutory rulemaking by surfacing practitioner knowledge and community norms.
7. Conclusions
This study examines how to prevent AI systems from misleading media professionals while ensuring equitable access to AI benefits. Using the O-S-O-R framework and data from 209 Chinese media professionals, we identify a knowledge paradox, whereby higher AI expertise amplifies skepticism toward adoption under centralized governance, alongside strong information elaboration effects through confidence, benefit perception, and risk perception pathways. Building on these behavioral insights, we propose a blockchain-based governance framework with five integrated mechanisms: Expert Assessment DAOs, Community Validation DAOs, real-time algorithm monitoring, professional integrity protection, and cross-border coordination. These mechanisms address the transparency deficits and participation barriers that generate skepticism among knowledgeable professionals, transforming expertise from a barrier to adoption into support for properly designed oversight.
This study has important limitations that future research should address. The cross-sectional design limits causal inference about how behavioral dynamics evolve over time. The single-country sample from China constrains generalizability, given the country's distinct regulatory environment and platform ecosystem. Construct operationalization choices require validation across varied settings. Future research should pursue four complementary directions. First, longitudinal tracking before and after key AI policy or model releases can examine how the knowledge paradox and information elaboration patterns change as technologies mature. Second, multi-country replications across varied platform regimes can assess whether the findings generalize to other institutional contexts. Third, field pilots of DAO modules with pre-registered outcomes, including audit accuracy rates, time-to-correction metrics, and participation sustainability, can validate the theoretical governance architecture through real-world implementation. Fourth, economic analyses comparing token incentive efficacy with traditional quality assurance approaches can assess cost-effectiveness and long-run viability. This research agenda addresses whether decentralized governance successfully institutionalizes adaptive coordination and transparency practices, or whether organizations revert to centralized patterns once adoption pressures stabilize.