Article

Evolutionary Game Analysis of AI-Generated Disinformation Governance on UGC Platforms Based on Prospect Theory

Business School, Xiangtan University, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
Systems 2026, 14(4), 416; https://doi.org/10.3390/systems14040416
Submission received: 11 February 2026 / Revised: 27 March 2026 / Accepted: 30 March 2026 / Published: 9 April 2026
(This article belongs to the Special Issue Advancing Open Innovation in the Age of AI and Digital Transformation)

Abstract

While Generative Artificial Intelligence technology empowers content production on user-generated content platforms, it also gives rise to novel risks of disinformation dissemination. The effective governance of these risks is critical to ensuring the cybersecurity of the online ecosystem and maintaining long-term social stability. To address the collaborative governance dilemma, this study constructs a tripartite “platform-user-government” evolutionary game model based on prospect theory. It explores the evolutionarily stable strategies and stability conditions of each actor, supplemented by numerical simulations and practical case validation. The results indicate that: (1) under specific conditions, the system can converge to an ideal equilibrium {active platform governance, engaged user participation, stringent government supervision}; (2) the government’s reward–penalty mechanisms can drive the system towards this ideal equilibrium; (3) users’ digital literacy is a key variable influencing the system’s evolutionary path; (4) both the risk preference coefficient (β) and loss aversion coefficient (λ) from prospect theory have a significant moderating effect on the system’s evolution. Finally, targeted recommendations are proposed for the three aforementioned stakeholders to accelerate the improvement of China’s collaborative governance of the content ecosystem.

1. Introduction

As an emerging field of human development, artificial intelligence profoundly transforms human production and lifestyles, creating unprecedented opportunities for the world while simultaneously presenting unprecedented risks and challenges [1]. Generative Artificial Intelligence (GAI) produces content, including text, images, audio, and video, based on user instructions; such output is referred to as artificial intelligence-generated content (AIGC) [2]. Its proliferation drives the intelligent transformation of user-generated content (UGC) platforms, shifting from “user creation” to “human-machine collaborative creation”. This shift significantly lowers the barrier to user creation and enables exponential growth in both the scale and diversity of content supply on these platforms [3]. However, the double-edged sword effect of the technology has also become apparent. While GAI enhances creative efficiency, it also exacerbates the risk of disinformation dissemination, potentially leading to severe, even catastrophic, consequences [4]. Mass-produced false and harmful content generated by GAI can be highly similar to authentic information, easily obscuring the truth. If not promptly identified and controlled, once disseminated via the social networks of UGC platforms or other channels, it can directly mislead public perception, disrupt social order, and cause detrimental effects [5].
Some individuals or organizations exploit GAI to deliberately create disinformation and publish it on UGC platforms, severely polluting the online content ecosystem and urgently necessitating the establishment of an effective governance system. Compared to traditional disinformation, artificial intelligence-generated disinformation (AIGD) exhibits higher fidelity and faster propagation speed, complicating its identification and prevention, and thus poses unprecedented challenges to governance [6]. How to effectively govern AIGD on UGC platforms has become a pressing real-world issue that needs to be addressed. According to the “Regulations on the Ecological Governance of Online Information Content” [7], the governance of the content ecosystem, which includes the governance of AIGD, is a comprehensive activity guided by the Core Socialist Values and involves the collaborative participation of multiple entities, including the government (cyberspace administration departments), platforms (online information content service platforms), and users (producers and consumers of online information content). In the context of governing AIGD, UGC platforms, as key channels for the dissemination of disinformation, bear the primary responsibility for content review and risk prevention. Users, who act as both creators and disseminators, directly influence the clarity of the online ecosystem through their behavioral self-discipline and media literacy. The government is responsible for coordinating governance efforts and overseeing the fulfillment of responsibilities by all parties, using laws, regulations, and policy instruments to regulate and guide the behavior of platforms and users.
However, in reality, the interactive relationships among platforms, users, and the government are intricate [8]. The governance of AIGD in practice faces multiple dilemmas: the boundary between government regulation and guidance is ambiguous, where excessive oversight may stifle technological innovation, while insufficient oversight risks allowing disinformation to proliferate [9]. The primary responsibility of platforms in disinformation governance has not been fully clarified; their governance strategies often adjust dynamically based on commercial interests, societal pressure, or the intensity of government oversight [10], leading to a lack of continuity and stability in governance efforts. Users can be creators and disseminators of disinformation, as well as its victims or defenders [11]. Conflicts of interest and complex interactions among the three parties exacerbate decision uncertainty. Dynamic evolutionary game theory (EGT) provides an analytical framework for dissecting such multi-actor interactions. However, the assumptions of “complete rationality” and “utility maximization” in traditional evolutionary games struggle to adequately capture the bounded rationality, subjective value perception bias, and heterogeneity in risk preferences of actors in reality. To address this gap, this paper integrates prospect theory (PT), revealing the caution exhibited by boundedly rational actors toward potential gains and their risk appetite toward potential losses [12].
In summary, this paper constructs a tripartite dynamic evolutionary game model involving the platform, users, and the government. It aims to clarify three core issues in the governance of AI-generated disinformation: (1) the specific role positioning of each participating actor; (2) the characteristics of the game relationships and dynamic evolutionary processes among the actors; (3) the strategic combinations for effectively curbing the dissemination of false content. The marginal contributions are reflected in: (1) introducing prospect theory into the analytical framework to analyze the evolutionarily stable strategies for the collaborative governance of AI-generated disinformation and the conditions under which the actors’ strategies converge to an ideal state, thereby expanding the application boundaries of evolutionary game theory in the field of information content governance; (2) employing evolutionary game theory and simulation analysis methods to investigate the dynamic evolutionary patterns and coordination mechanisms of multi-actor strategies, providing a theoretical basis for the government to formulate differentiated incentive and constraint policies, for platforms to optimize governance mechanisms, and for users to effectively participate in governance, thereby facilitating the formation of a co-construction, co-governance, and co-sharing framework for the information ecosystem.

2. Literature Review

2.1. Research on the Governance of AI-Generated Disinformation

The rapid development of GAI technology, while stimulating the innovative vitality of UGC platforms, has also turned them into a breeding ground for the generation and dissemination of novel disinformation. In contrast to traditional disinformation, which relies on manual fabrication, AIGD, produced by GAI tools such as Large Language Models (LLMs) and multimodal generation models, features automation and high realism. Concurrently, its dissemination, facilitated by algorithmic recommendations on social media, demonstrates traits of instantaneity, cross-platform reach, and concealment [13]. Sun et al. also pointed out that AIGD, in terms of its technological logic, possesses inevitability and infinite replicability, while its harmful consequences are characterized by complexity and uncontrollability [14]. Therefore, the unique characteristics of this new form of disinformation render traditional governance approaches ineffective, leading to an exponential increase in the difficulty of its management.
In response to the formidable challenges posed by AIGD, current research and practice have primarily progressed along two dimensions: technological detection and regulatory governance. At the technological level, scholars are dedicated to enhancing the accuracy and robustness of identifying AIGC through methods such as semantic analysis [15], multimodal fusion [16], and large language model enhancement [17]. However, technological detection suffers from the inherent flaw of being a “cat-and-mouse game,” where the development of detection algorithms often lags behind the iterative advancements in generation technology [4,18]. At the regulatory level, existing research indicates that relying solely on government supervision or platform self-governance is insufficient to effectively manage this type of disinformation. Consequently, establishing a multi-stakeholder collaborative governance system has become a consensus within the academic community [19,20]. Existing discussions have focused on government legislation, platform responsibility, and the enhancement of public media literacy [21,22]. However, these discussions are largely concentrated at the macro level, and standalone regulatory measures may encounter challenges such as high implementation costs, slow response times, and potential impacts on freedom of expression. Given that the governance of AIGD on UGC platforms, which is the focus of this study, is a complex social interaction involving multiple stakeholders, it not only requires technological support but also urgently necessitates the construction of a governance system that enables effective collaboration among diverse stakeholders to integrate their respective strengths and form a synergistic force. Notably, existing research has already provided ample empirical analysis of the diffusion mechanisms of AIGD.
Therefore, rather than attempting to simulate how AIGD spreads among users, this study treats it as a given governance context and focuses on the incentive structure and strategic interactions among governance actors when responding to such information, complementing existing research.

2.2. Evolutionary Game Research on Disinformation Governance

As a new frontier of disinformation governance in the era of artificial intelligence, the governance of AIGD not only follows the general principles of the former but also imposes higher demands on behavioral interactions among governance stakeholders due to its highly deceptive nature and other characteristics. In this context, multiple actors—including UGC platforms, the government, and users—each face differentiated cost–benefit structures, and their decision-making behaviors are jointly influenced by information uncertainty, risk perception, and cognitive biases. EGT, due to its capacity to effectively explain the decision-making processes of participants in complex systems and to enable a micro-level exploration of the evolution of involved stakeholders’ behavioral strategies, has become a powerful tool for analyzing such social dilemmas.
Existing research has extensively explored multi-stakeholder interactions in disinformation governance. The government is typically positioned as the macro-level regulator, responsible for establishing rules and implementing rewards and penalties to guide online order [23], while platforms, as information service providers, bear the responsibility for content moderation and compliant operations [24]. The role of the public, or users, is more complex; they can be potential victims of disinformation [25], potential disseminators of disinformation [26], and even potential monitors and whistleblowers [27]. These studies have clearly outlined the prototype of a tripartite collaborative framework involving the “government, platforms, and users.” However, existing research often addresses the user’s role in a fragmented manner, focusing either on the negative behaviors of users as disseminators or emphasizing their positive role as monitors. This monolithic portrayal overlooks a core feature of UGC platforms: any user, empowered by technology, can simultaneously possess the dual identity of content creator and content consumer. To elaborate, a user may both deliberately use GAI tools to create disinformation and, while browsing, identify and report disinformation generated by others. These two identities often coexist within the same individual in practice, and the inherent motivational conflict profoundly influences the user’s strategic choices, such as the short-term benefits of creating disinformation versus the monitoring costs of reporting it. However, existing game models fail to effectively integrate this complexity. Therefore, constructing a game framework that accounts for users’ dual identity is a necessary prerequisite for deepening the governance of AIGD.
EGT provides a suitable framework for analyzing the interaction mechanisms among the aforementioned stakeholders. A substantial body of research has constructed tripartite evolutionary game models involving the “government, platforms, and users,” exploring the influence of factors such as penalty mechanisms [28], regulatory costs [29], and technological empowerment [30] on the evolutionarily stable strategies of the system. For instance, Liu et al. (2024) [28] pointed out that increasing the severity of penalties imposed by governments and platforms can effectively curb the dissemination of disinformation. These studies have laid a solid foundation for understanding the dynamic mechanisms of AIGD governance on UGC platforms. However, traditional evolutionary game models are built upon the foundations of perfect rationality and expected utility theory, with the core assumption that decision-makers can accurately calculate and maximize their objective expected utility. This deviates from the actual context of AIGD governance. Specifically, the high realism of AIGD makes it difficult for users to accurately discern the truthfulness of information, thereby exacerbating decision-making ambiguity [31]; the low cost and high concealment of AIGD production, in turn, lead to a systematic bias in users’ perception of the probability of “being punished” [32]. In this context, users’ decisions rely more on subjective perceptions, risk preferences, and emotional responses rather than on objective cost–benefit calculations [33].

2.3. Application of Prospect Theory in Evolutionary Games of Social Governance

Prospect theory, proposed by Kahneman and Tversky [34,35], is based on the assumption of bounded rationality. By introducing the concept of psychologically perceived gains and losses, it precisely addresses the aforementioned shortcomings and offers a more accurate explanation for characterizing decision-making under such uncertain circumstances. This theory comprehensively characterizes the different responses of decision-makers when facing gains and losses through the value function and the decision weight function. Specifically, the theory demonstrates that decision-makers are not perfectly rational; their judgments of gains and losses are often based on a reference point, and they tend to exhibit risk aversion when facing gains but risk-seeking behavior when facing losses. This theory provides a scientific psychological foundation for understanding the “irrational behaviors” of governance stakeholders within complex and dynamic environments.
In recent years, scholars have begun integrating prospect theory with evolutionary game theory and applying this combined approach to the field of social governance, yielding fruitful results. For instance, Chen et al. (2024) constructed a collaborative governance model for public crises from the perspective of value perception, finding that the behaviors of game participants are influenced by perceived benefits and perceived costs, with greater sensitivity to risk perceptions regarding the costs [36]. Chen et al. (2025), in their study on public health emergencies, found that the public’s risk preference positively influences panic-buying behavior, while loss aversion exerts an inhibitory effect [37]. Bu et al. (2025) further revealed how the information environment, such as the accuracy of information on social media, shapes stakeholders’ value perceptions and consequently alters the equilibrium outcomes of multi-stakeholder games [38]. These studies indicate that in dynamic and complex environments, stakeholders’ subjective value perception is a pivotal variable driving strategy evolution. This provides a solid theoretical foundation and methodological support for this paper’s endeavor to introduce prospect theory into the context of AIGD’s governance, in order to construct an evolutionary game model that more closely aligns with reality.

2.4. Research Gaps and Innovations

In summary, existing research has established a solid foundation for understanding AIGD and its governance, yet the following research gaps remain:
(1) Research perspective: Existing research has clearly revealed the novel characteristics of AIGD [13,14], rendering governance approaches that rely solely on technological detection [15,16,17] or macro-level regulation [21,22] unsustainable. Although scholars have called for the construction of a multi-stakeholder collaborative governance system [19,20], an analytical framework capable of dynamically characterizing the behavioral interactions, conflicts of interest, and strategy evolution among stakeholders is still lacking in the context of AIGD’s governance mechanisms on UGC platforms. This deficiency makes it impossible to explain under what conditions the ideal state of collaborative governance can be achieved.
(2) Role definition: Evolutionary game theory provides an effective tool for analyzing the behavioral interactions of the aforementioned stakeholders, and existing research has successfully identified the distinct roles and game relationships among governments, platforms, and users in disinformation governance [23,24,25,26,27]. However, in game models involving users, their behaviors are commonly conceptualized in a monolithic manner, overlooking the dual identity of users on UGC platforms as both potential violators and potential monitors.
(3) Decision-making assumptions: Existing governance research based on evolutionary game theory [28,29,30] is largely predicated on the assumption of perfect rationality, which deviates from the unique characteristics of AIGD governance [31,32,33]. Although some studies have applied prospect theory to fields of social governance such as public crises [36,37,38], their models have not simultaneously accounted for the characteristics of UGC platforms and the novel context of AIGD’s governance.
In light of these gaps, this study focuses on the context of AIGD governance on UGC platforms. Adopting the perspective of perceived value and integrating prospect theory, it constructs a tripartite evolutionary game model involving “platforms, users, and the government” that accounts for users’ dual identity, thereby aligning the model more closely with the actual context. Through numerical simulations and practical case validation, it reveals the influence of factors such as the government’s reward and penalty mechanisms, users’ digital literacy, and stakeholders’ risk preferences on system evolution. This research aims to provide theoretical support and quantitative references for the government to optimize regulatory policies and for platforms to enhance governance efficiency, thereby advancing the improvement of a collaborative governance system for the online information content ecosystem.

3. Model Construction

3.1. Problem Description

Against the backdrop of rapid artificial intelligence development, disinformation governance faces unprecedented challenges. Building on policy interpretations and prior research [29,39,40], this paper identifies three core actors in an evolutionary game model for the co-governance of AIGD: platforms, users, and the government. In the model, the UGC platform serves as the core carrier. Users, functioning as both content producers and recipients [41], are characterized by both free creation and public oversight. The government and the platform act as the regulatory actor and the governance actor, respectively, bearing core responsibilities for information dissemination control and risk prevention.
Specifically, to balance industry development and cyberspace ecology objectives, the government formulates differentiated regulatory strategies based on regulatory costs and societal impact. These strategies may entail either stringent or lenient supervision. Platforms need to weigh governmental regulatory requirements against their own commercial interests. They may opt for active governance of disinformation by optimizing algorithmic recommendation mechanisms, strengthening content moderation investments, and establishing user credit systems. Alternatively, they may choose a passive approach, reducing moderation costs to maximize short-term benefits. Users, characterized by this dual role, function as both creators and supervisors. In content production, they may use GAI normally to disseminate authentic information, or they may be driven by profit or swayed by emotion to maliciously use GAI, becoming agents for spreading disinformation. Simultaneously, users can also act as supervisors through behaviors such as reporting and commenting, spontaneously curbing disinformation. Consequently, users may either choose to participate in governance, engaging in rational creation and active supervision to foster a healthier cyberspace ecology, or they may opt out of governance, choosing to spread rumors for gain and neglect their responsibilities, thereby exacerbating information distortion.
Therefore, establishing a co-governance system characterized by effective government supervision, diligent platform compliance, and active user participation constitutes a critical pathway to address the dissemination of disinformation under GAI abuse. How can such tripartite co-governance be advanced? To facilitate model analysis, this paper categorizes the continuous behavioral spectrum of each actor into two representative strategies, thereby constructing a tripartite evolutionary game model involving the government, platforms, and users. The strategies and their mechanisms of action are illustrated in Figure 1.
This paper adopts a binary discrete strategy setup—the government’s stringent supervision versus lenient supervision, platforms’ active governance versus passive governance, and users’ participation in governance versus non-participation—which is essentially a discrete strategy game design with continuous intensity. Specifically, the core behavioral direction of each actor is represented as a binary strategy, while the specific intensity of strategy implementation (such as regulatory intensity, review effort, and degree of participation) is captured through continuous parameters within the [0, 1] interval in the model. This setup not only accurately captures the core decision-making direction of each actor in the governance game and avoids the blurring of evolutionary logic that may arise from an excessive number of continuous strategy dimensions, but also aligns with the mainstream paradigm of evolutionary game research in the field of information content governance, ensuring the comparability of this study’s conclusions with existing literature. At the same time, it achieves a decoupled analysis of “strategic direction choice” and “behavioral implementation intensity”, enabling the model to effectively capture nuanced differences in behavioral intensity in real-world governance scenarios while maintaining analytical tractability.

3.2. Model Assumptions

Given that AIGD on UGC platforms exhibits distinctive communication characteristics—namely, rapid dissemination, emotional amplification, and network polarization—this paper does not directly construct a communication dynamics model to capture these processes. Instead, it internalizes these typical communication attributes into the core parameters and payoff matrix of the tripartite evolutionary game model through cost–benefit adjustments and behavioral preference calibration, thereby effectively bridging the realistic communication context of disinformation with the strategic interactions of governance actors. Specifically, the rapid dissemination of AIGD amplifies the potential reputational and regulatory risks for platforms and the government. It increases both the reputational losses associated with platforms’ passive governance and the loss of public credibility associated with the government’s lenient supervision, while also raising the government’s penalty intensity for platforms’ passive governance and the negative externalities generated by platforms. Meanwhile, emotional amplification and network polarization increase the difficulty of identifying disinformation, raise users’ initial supervision costs, and reduce their willingness to participate. Furthermore, the high exposure resulting from rapid dissemination lowers users’ subsequent supervision costs and enhances governance efficiency. Based on the internalization logic, the basic assumptions of the tripartite evolutionary game model are proposed as follows.
Assumption 1.
Prospect theory. The three actors in the game—the government, platforms, and users—all exhibit bounded rationality, possessing corresponding learning and strategy-adjustment capabilities. Their strategy choices are influenced by multiple factors such as the environment and policies. These choices are typically based on the actors’ subjective perceived value of gains and losses rather than on actual utility, thus incurring a degree of subjective cognitive bias. Over time, the strategy choices of each actor will gradually converge to a stable strategy profile, ultimately achieving a dynamic equilibrium in co-governance. From the perspective of prospect theory, this paper uses the actors’ perceived value of gains and losses, rather than actual utility, as the criterion for evaluating different strategy choices [42]. When actors face situations involving uncertain gains or losses (Δπ), they derive, according to prospect theory, a perceived value V(Δπ), as specified by the following formulas:
U = \sum w(p) \, V(\Delta\pi) \quad (1)
V(\Delta\pi) = \begin{cases} (\Delta\pi)^{\alpha}, & \Delta\pi \ge 0 \\ -\lambda(-\Delta\pi)^{\beta}, & \Delta\pi < 0 \end{cases} \quad (2)
w(p) = \frac{p^{\sigma}}{\left( p^{\sigma} + (1-p)^{\sigma} \right)^{1/\sigma}} \quad (3)
In Formulas (1) to (3), Δπ denotes the difference between the payoff value obtained from an actor’s behavioral strategy and the reference-point payoff value. Δπ ≥ 0 indicates the decision-maker’s perceived gain value for that strategy; conversely, Δπ < 0 indicates a loss. To facilitate the analysis, this study assumes that the value of the reference point in the value function is 0, and the absolute values of gains and losses are represented by their respective deviations [43]. α, β (0 ≤ α, β ≤ 1) are the risk preference coefficients of the perceived value function; a larger coefficient indicates a greater degree of risk preference in the decision-maker. λ (λ ≥ 1) is the loss aversion coefficient of the perceived value function; a larger value indicates a greater degree of loss aversion in the decision-maker. w(p) represents the decision weight of an event, p denotes the actual probability of the event, and σ (0 < σ < 1) represents the decision impact coefficient.
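The value and weight functions in Formulas (1)–(3) can be sketched directly in code. The following is a minimal illustration, not part of this paper's model implementation; the default coefficients (α = β = 0.88, λ = 2.25, σ = 0.65) are Tversky and Kahneman's classic estimates, used here only as placeholders rather than values calibrated in this study:

```python
def value(delta_pi, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function V(Δπ), Formula (2).

    Gains (Δπ >= 0) are discounted by the risk-preference exponent alpha;
    losses are amplified by the loss-aversion coefficient lam and exponent beta.
    """
    if delta_pi >= 0:
        return delta_pi ** alpha
    return -lam * (-delta_pi) ** beta


def weight(p, sigma=0.65):
    """Decision weight w(p), Formula (3): small probabilities are
    overweighted and large probabilities underweighted."""
    return p ** sigma / (p ** sigma + (1 - p) ** sigma) ** (1 / sigma)


def perceived_utility(outcomes):
    """Perceived utility U, Formula (1): sum of w(p)·V(Δπ) over a list of
    (probability, payoff-deviation) pairs describing an uncertain prospect."""
    return sum(weight(p) * value(dpi) for p, dpi in outcomes)
```

With these defaults, a sure loss is perceived as roughly 2.25 times as painful as an equal gain is pleasant, and w(p) overweights small probabilities, which is precisely the asymmetry the model exploits when actors evaluate uncertain rewards and penalties.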
Assumption 2.
Strategy selection. In the co-governance game of AIGD, UGC platforms, as the core media, bear the responsibility for user-generated content moderation. Based on the level of moderation intensity μ, their strategy set is (active governance, passive governance), with the selection probability denoted by (x, 1 − x). Users, possessing the dual roles of creators and supervisors, have a strategy set of (participate in governance, not participate in governance), with the selection probability denoted by (y, 1 − y). The government, acting as a neutral regulatory actor independent of interest group influence [44], has a strategy set of (stringent supervision, lenient supervision) based on the regulatory intensity ε, with the selection probability denoted by (z, 1 − z). Here, 0 ≤ x, y, z, μ, ε ≤ 1.
Assumption 3.
Actor costs. Actor costs are positively correlated with the stringency of regulatory standards; stricter standards necessitate greater resource investment and thus higher costs, and vice versa [45]. In the context of AIGD governance, the governance costs for UGC platforms are influenced by μ, expressed as μ·C_p (where μ = 1 under active governance, and 0 ≤ μ < 1 under passive governance). Users’ costs comprise two types: (1) a monitoring cost associated with users’ willingness to participate in monitoring γ, expressed as γ·C_{u1} (where 0 < γ ≤ 1 when participating in monitoring, and γ = 0 when not participating); (2) a content creation cost (C_{u2}) when using GAI. Government supervision costs are related to ε, expressed as ε·C_g (where ε = 1 under stringent supervision, and 0 ≤ ε < 1 under lenient supervision).
Assumption 4.
Gains and losses. When platforms choose the active governance strategy, the economic gain is E_{p1}, and the resulting positive externality benefits for the cyberspace ecology are shared by the government (denoted P_g) and users (denoted P_u). When choosing the passive governance strategy, the economic gain is E_{p2}, and the resulting negative externality losses for the cyberspace ecology are shared by the government (denoted N_g) and users (denoted N_u). Here, the government’s externality gains and losses primarily reflect the impact of the online environment on social governance. Specifically, the healthy environment fostered by active governance facilitates the achievement of social objectives (a positive gain), while the proliferation of disinformation resulting from passive governance increases governance costs (a negative loss). Users’ externality gains and losses reflect their perception of online experience: active governance leads to a positive experience (a positive gain), whereas passive governance results in a deteriorated experience (a negative loss). When participating in governance, users use GAI normally and engage in supervision. When not participating, they maliciously use GAI with a probability m ∈ [0, 1] and refrain from supervision. The gain from normal posting is E_{u1}, and the gain from successful malicious posting is E_{u2}. (Note: passive governance yields higher short-term traffic gains for platforms [29], hence E_{p1} < E_{p2}; malicious use of GAI can bring users more ‘grey benefits’ [46], hence E_{u1} < E_{u2}.)
Assumption 5.
Rewards and penalties. The government regulates the governance performance of platforms and users through reward–penalty mechanisms. Specifically, platforms receive a government reward Rp for active governance, and users receive a government reward Ru1 for participating in supervision. If passive governance by a platform is detected, the platform incurs a government penalty ε·Dp. Under stringent government supervision, the platform must additionally compensate users for ecological losses with an amount Ru2. Platforms impose a penalty Du on users discovered to be maliciously using GAI. Furthermore, users' supervision behavior involves a spillover effect: when users participate in supervision, a government that adopts lenient supervision incurs a credibility loss γ·Lg, and a platform that engages in passive governance incurs a reputational loss γ·Lp (with γ ∈ (0, 1] in these cases). Otherwise, no loss is incurred (γ = 0).
Prospect theory posits that for deterministic gains and losses, the actual utility aligns with the perceived value; psychological perceived utility arises only when decision-makers face uncertain gains and losses [47]. In the model, the deterministic gains and losses of the actors are therefore represented by their actual values, while the other parameters are represented by their perceived values V(·). Following the prospect-theoretic value function, the perceived value of a gain is positive and the perceived value of a loss is negative, so perceived payoff components can be written additively. Based on the above assumptions, the parameter settings and definitions for the evolutionary game model are presented in Table 1.

3.3. Payoff Matrix Construction

Building upon the traditional evolutionary game framework, this study integrates prospect theory to transform the expected payoff values into prospect values for each variable. Based on this transformation, a perceived payoff matrix is constructed, as detailed in Table 2.
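As a concrete illustration of this transformation, the standard Tversky–Kahneman value function can be written in a few lines. This is a minimal sketch: the functional form v(x) = x^α for gains and v(x) = −λ(−x)^β for losses is the standard prospect-theory specification, the parameter values are the Tversky–Kahneman baselines used later in the simulations, and the function name is illustrative rather than taken from the paper.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Perceived value V(x) of an objective gain (x >= 0) or loss (x < 0):
    v(x) = x**alpha for gains, -lam * (-x)**beta for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Loss aversion: an objective loss of 2 is perceived as much larger in
# magnitude than an equal objective gain of 2.
perceived_gain = prospect_value(2.0)    # ≈ 1.84
perceived_loss = prospect_value(-2.0)   # ≈ -4.14
```

Applying such a function to each uncertain payoff component (rewards, penalties, externality losses) yields the perceived payoffs that populate Table 2.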

3.4. Construction of the Replicator Dynamics Equation

The replicator dynamic equation is a classic form of selection dynamics in evolutionary game theory. It captures the strategic evolution tendency of each actor and provides a quantitative tool for analyzing the strategy choices and behavioral evolution of the multiple actors in the evolutionary game of AIGD governance on UGC platforms [48], based on the perceived payoffs in Table 2.
The expected prospect value for platforms choosing the active governance strategy ( U 11 ), the expected prospect value for choosing the passive governance strategy ( U 12 ), the average expected prospect value ( U 1 ), and the corresponding replicator dynamic equation F ( x ) are given by Formulas (4) through (7), respectively.
U11 = yz(−Cp + Ep1 + V(Rp)) + y(1 − z)(−Cp + Ep1 + V(Rp)) + (1 − y)z(−Cp + Ep1 + V(Rp) + mV(Du)) + (1 − y)(1 − z)(−Cp + Ep1 + V(Rp) + mV(Du))
U12 = yz(−μCp + Ep2 + V(Dp) + γV(Lp) + V(Ru2)) + y(1 − z)(−μCp + Ep2 + εV(Dp) + γV(Lp)) + (1 − y)z(−μCp + Ep2 + V(Dp)) + (1 − y)(1 − z)(−μCp + Ep2 + εV(Dp))
U1 = x·U11 + (1 − x)·U12
F(x) = x(U11 − U1) = x(1 − x)(U11 − U12) = x(1 − x)[−Cp + Ep1 − Ep2 + V(Rp) + μCp − εV(Dp) − (1 − ε)V(Dp)·z − γV(Lp)·y + mV(Du)·(1 − y) − V(Ru2)·yz]
The expected prospect value for users choosing the participate in governance strategy ( U 21 ), the expected prospect value for choosing the not participate in governance strategy ( U 22 ), the average expected prospect value ( U 2 ), and the corresponding replicator dynamic equation F ( y ) are given by Formulas (8) through (11), respectively.
U21 = xz(−γCu1 − Cu2 + V(Pu) + Eu1 + V(Ru1)) + x(1 − z)(−γCu1 − Cu2 + V(Pu) + Eu1 + V(Ru1)) + (1 − x)z(−γCu1 − Cu2 + V(Nu) + Eu1 + V(Ru1) + V(Ru2)) + (1 − x)(1 − z)(−γCu1 − Cu2 + V(Nu) + Eu1 + V(Ru1))
U22 = xz(−Cu2 + V(Pu) + (1 − m)Eu1 + mV(Du)) + x(1 − z)(−Cu2 + V(Pu) + (1 − m)Eu1 + mV(Du)) + (1 − x)z(−Cu2 + V(Nu) + (1 − m)Eu1 + mEu2) + (1 − x)(1 − z)(−Cu2 + V(Nu) + (1 − m)Eu1 + mEu2)
U2 = y·U21 + (1 − y)·U22
F(y) = y(U21 − U2) = y(1 − y)(U21 − U22) = y(1 − y)[V(Ru1) − γCu1 + mEu1 − mEu2·(1 − x) + V(Ru2)·(1 − x)z − mV(Du)·x]
The expected prospect value for the government choosing the stringent supervision strategy ( U 31 ), the expected prospect value for choosing the lenient supervision strategy ( U 32 ), the average expected prospect value ( U 3 ), and the corresponding replicator dynamic equation F ( z ) are given by Formulas (12) through (15), respectively.
U31 = xy(−Cg + V(Pg) + V(Rp) + V(Ru1)) + x(1 − y)(−Cg + V(Pg) + V(Rp)) + (1 − x)y(−Cg + V(Ng) + V(Dp) + V(Ru1)) + (1 − x)(1 − y)(−Cg + V(Ng) + V(Dp))
U32 = xy(−εCg + V(Pg) + γV(Lg) + V(Rp) + V(Ru1)) + x(1 − y)(−εCg + V(Pg) + V(Rp)) + (1 − x)y(−εCg + V(Ng) + γV(Lg) + εV(Dp) + V(Ru1)) + (1 − x)(1 − y)(−εCg + V(Ng) + εV(Dp))
U3 = z·U31 + (1 − z)·U32
F(z) = z(U31 − U3) = z(1 − z)(U31 − U32) = z(1 − z)[−(1 − ε)Cg + (1 − ε)V(Dp)·(1 − x) − γV(Lg)·y]
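To make the joint dynamics tangible, the three replicator equations can be integrated numerically. The sketch below uses a hypothetical linear specification for each actor's perceived-payoff difference (the numeric coefficients are illustrative placeholders, not the calibrated values of Table 4) and simple forward-Euler integration:

```python
import numpy as np

def derivs(s):
    """Replicator dynamics F(x), F(y), F(z) for an illustrative parameterization."""
    x, y, z = s
    gx = -1.0 + 2.0 * y + 1.5 * z    # platform: active minus passive payoff
    gy = -0.5 + 1.0 * x + 1.2 * z    # user: participate minus not participate
    gz = 0.4 - 0.8 * x + 0.6 * y     # government: stringent minus lenient
    return np.array([x * (1 - x) * gx, y * (1 - y) * gy, z * (1 - z) * gz])

def evolve(s0, dt=0.01, steps=5000):
    """Forward-Euler integration of the tripartite replicator system."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        s = np.clip(s + dt * derivs(s), 0.0, 1.0)  # guard against overshoot
    return s
```

Starting from equal initial willingness (0.5, 0.5, 0.5), this particular parameterization converges to the corner (1, 1, 1), i.e., a state of the {active governance, participation, stringent supervision} type.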

4. Stability Analysis

4.1. Analysis of Players’ Evolutionary Stable Strategies

4.1.1. Analysis of the Platform’s Evolutionary Stable Strategy

Based on the replicator dynamics equation for the platform given in Equation (7), the first-order partial derivative of F(x) is solved, with the result shown in Equation (16). Here, a = Cp − Ep1 + Ep2 − V(Rp) − μCp + εV(Dp) − mV(Du), b = γV(Lp) + mV(Du), c = (1 − ε)V(Dp).
∂F(x)/∂x = (2x − 1)·[a + b·y + c·z + V(Ru2)·y·z]
According to the stability theorem of differential equations, for the platform's strategy selection to be in an evolutionarily stable state, the following conditions must be satisfied: F(x) = 0 and ∂F(x)/∂x < 0. Setting F(x) = 0 yields the solutions x = 0, x = 1, and y* = −(a + c·z)/(b + V(Ru2)·z) (denominator ≠ 0). Based on this, a case-by-case discussion is conducted.
(1) When y = y*, F(x) ≡ 0. In this case, any value of x is stable for the platform, and the corresponding phase diagram for strategy evolution is shown in Figure 2a.
(2) When y ≠ y*, two cases arise. When 0 < y < y*, ∂F(x)/∂x|x=0 < 0 and ∂F(x)/∂x|x=1 > 0, so x = 0 is the Evolutionarily Stable Strategy (ESS) for the platform; when y* < y < 1, x = 1 is the ESS. This indicates that when the probability of user participation in governance is low, the platform tends to choose the passive governance strategy; otherwise, it tends to adopt the active governance strategy. The phase diagrams for strategy evolution corresponding to these two cases are shown in Figure 2b,c, respectively.

4.1.2. Analysis of the User’s Evolutionary Stable Strategy

Based on the replicator dynamics equation for the user given in Equation (11), the first-order partial derivative of F(y) is solved, with the result shown in Equation (17). Here, d = −mV(Du) + mEu2, e = −V(Ru1) + γCu1 − mEu1 + mEu2.
∂F(y)/∂y = (2y − 1)·[e − d·x − V(Ru2)·z + V(Ru2)·x·z]
According to the stability theorem of differential equations, for the user's strategy selection to be in an evolutionarily stable state, the following conditions must be satisfied: F(y) = 0 and ∂F(y)/∂y < 0. Setting F(y) = 0 yields the solutions y = 0, y = 1, and z* = (e − d·x)/(V(Ru2)·(1 − x)) (denominator ≠ 0). Based on this, a case-by-case discussion is conducted.
(1) When z = z*, F(y) ≡ 0. In this case, any value of y is stable for the user, and the corresponding phase diagram for strategy evolution is shown in Figure 3a.
(2) When z ≠ z*, two cases arise. When 0 < z < z*, ∂F(y)/∂y|y=0 < 0 and ∂F(y)/∂y|y=1 > 0, so y = 0 is the ESS for the user; when z* < z < 1, y = 1 is the ESS. This indicates that when the probability of stringent government supervision is low, users tend to choose non-participation in governance; otherwise, they tend to participate. The phase diagrams for strategy evolution corresponding to these two cases are shown in Figure 3b,c, respectively.

4.1.3. Analysis of the Government’s Evolutionary Stable Strategy

Based on the replicator dynamics equation for the government given in Equation (15), the first-order partial derivative of F(z) is solved, with the result shown in Equation (18). Here, f = (ε − 1)·V(Dp), g = −γ·V(Lg), h = (1 − ε)·Cg − (1 − ε)·V(Dp).
∂F(z)/∂z = (2z − 1)·[h − f·x − g·y]
According to the stability theorem of differential equations, for the government's strategy selection to be in an evolutionarily stable state, the following conditions must be satisfied: F(z) = 0 and ∂F(z)/∂z < 0. Setting F(z) = 0 yields the solutions z = 0, z = 1, and x* = (h + γ·V(Lg)·y)/f (denominator ≠ 0). Based on this, a case-by-case discussion is conducted.
(1) When x = x*, F(z) ≡ 0. In this case, any value of z is stable for the government, and the corresponding phase diagram for strategy evolution is shown in Figure 4a.
(2) When x ≠ x*, two cases arise. When 0 < x < x*, ∂F(z)/∂z|z=0 > 0 and ∂F(z)/∂z|z=1 < 0, so z = 1 is the ESS for the government; when x* < x < 1, z = 0 is the ESS. This indicates that when the probability of the platform adopting active governance is low, the government tends to adopt the stringent supervision strategy; otherwise, it tends to adopt the lenient supervision strategy. The phase diagrams for strategy evolution corresponding to these two cases are shown in Figure 4b,c, respectively.

4.2. Stability Analysis of System Equilibrium Points

By simultaneously solving the three replicator dynamics equations given in Equations (7), (11), and (15), we obtain the replicator dynamic system for the tripartite collaborative governance game of AIGD involving the platform, users, and the government. Setting this system of equations equal to zero yields 15 equilibrium points, which are E1(0,0,0), E2(1,0,0), E3(0,1,0), E4(0,0,1), E5(1,1,0), E6(1,0,1), E7(0,1,1), E8(1,1,1), E9(0,y1,z1), E10(x1,y2,0), E11(x2,y3,1), E12(x3,1,z2), E13(x4,0,z3), E14(x5,y4,z4), and E15(x6,y5,z5).
Among these, E1–E8 are the pure-strategy (corner) solutions, which collectively delineate the boundary D = {(x,y,z) | 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1} of this evolutionary game. E9–E15 correspond to mixed strategies, whose existence requires the satisfaction of specific conditions [49]. The Jacobian matrix of the tripartite evolutionary game's replicator dynamic system is given by Equation (19). The stability of the system is assessed using Lyapunov's first method [50]. An equilibrium point is asymptotically stable if all eigenvalues of the Jacobian matrix have negative real parts, and unstable if at least one eigenvalue has a positive real part; a point whose eigenvalues have both positive and negative real parts is a saddle point (a transitional state). Analyzing the local stability of the Jacobian matrix allows for the identification of evolutionarily stable equilibrium solutions [51]. Substituting the potential equilibrium points into the Jacobian matrix shows that the eigenvalue expressions for the seven mixed-strategy points E9–E15 are exceedingly lengthy, and analysis of their conditions indicates that their stability is highly uncertain. Consequently, this study focuses solely on the asymptotic stability of the eight pure-strategy points E1–E8. Their corresponding eigenvalues are presented in Table 3.
J =
| ∂F(x)/∂x  ∂F(x)/∂y  ∂F(x)/∂z |
| ∂F(y)/∂x  ∂F(y)/∂y  ∂F(y)/∂z |
| ∂F(z)/∂x  ∂F(z)/∂y  ∂F(z)/∂z |
where
∂F(x)/∂x = (2x − 1)·[a + b·y + c·z + V(Ru2)·y·z],  ∂F(x)/∂y = −x(1 − x)·[b + V(Ru2)·z],  ∂F(x)/∂z = −x(1 − x)·[c + V(Ru2)·y];
∂F(y)/∂x = y(1 − y)·[d − V(Ru2)·z],  ∂F(y)/∂y = (2y − 1)·[e − d·x − V(Ru2)·z + V(Ru2)·x·z],  ∂F(y)/∂z = y(1 − y)·(1 − x)·V(Ru2);
∂F(z)/∂x = z(1 − z)·f,  ∂F(z)/∂y = z(1 − z)·g,  ∂F(z)/∂z = (2z − 1)·[h − f·x − g·y].
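Lyapunov's first method lends itself to a quick numerical check: approximate the Jacobian at each pure-strategy corner by central finite differences and test whether all eigenvalues have negative real parts. The payoff-difference coefficients below are hypothetical placeholders (not the paper's calibrated parameters), chosen so that only the corner corresponding to E8(1,1,1) passes the test:

```python
import numpy as np
from itertools import product

def derivs(s):
    """Replicator dynamics for an illustrative (placeholder) parameterization."""
    x, y, z = s
    gx = -1.0 + 2.0 * y + 1.5 * z
    gy = -0.5 + 1.0 * x + 1.2 * z
    gz = 0.4 - 0.8 * x + 0.6 * y
    return np.array([x * (1 - x) * gx, y * (1 - y) * gy, z * (1 - z) * gz])

def jacobian(f, s, h=1e-6):
    """Central-difference approximation of the 3x3 Jacobian of f at s."""
    s = np.asarray(s, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (f(s + e) - f(s - e)) / (2 * h)
    return J

# Asymptotic stability: every eigenvalue has a negative real part.
stable = [p for p in product((0, 1), repeat=3)
          if np.linalg.eigvals(jacobian(derivs, p)).real.max() < 0]
```

For these placeholder numbers, `stable` contains only the corner (1, 1, 1); with other coefficient choices the same procedure recovers regimes of the E1, E4, or E5 type discussed below.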
When the model parameter values change, the system’s stable strategies evolve accordingly. A scenario-based analysis of the stability of the aforementioned potential equilibrium points is now conducted, building on the results presented in Table 3.
Scenario 1. Inefficient equilibrium. When parameters a, e, and h are all greater than 0, it implies that the platform’s perceived benefits from passive governance (including cost avoidance and loss prevention) exceed those from active governance, users’ perceived benefits from non-participation in governance (including gains from malicious use) outweigh those from participation, and the government’s perceived benefits from lax regulation (including regulatory cost avoidance) surpass those from strict regulation. Under these conditions, E1(0,0,0) becomes a stable equilibrium point, indicating that the game converges to a steady state of {passive governance, non-participation, lenient supervision}. Consequently, the platform opts for a passive approach due to the poor returns on active governance, users choose non-participation because of insufficient rewards for engagement, and the government adopts a lenient stance owing to the high costs of stringent supervision. The behaviors of the three parties fall into a vicious cycle, plunging the entire system into an inefficient and stagnant equilibrium.
Scenario 2. Compromise equilibrium.
(1) When a + c > 0 , V ( R u 2 ) < e , and h < 0 , the key difference from the stability conditions of E1 lies in the fact that the government’s perceived benefits from stringent supervision now offset its regulatory costs. However, due to insufficient rewards and penalties imposed by the government on the platform, coupled with limited rewards and compensation provided to users by both the government and the platform, the perceived benefits for the platform choosing passive governance and for users choosing non-participation still outweigh those of proactive strategies. Consequently, E4(0,0,1) emerges as a stable equilibrium point, indicating that the game converges to a steady state of {passive governance, non-participation, stringent supervision}. In this state, although the government exercises stringent supervision, the supervisory efficacy remains constrained due to the platform’s passive governance and the lack of user participation. This leads to an underutilization of supervisory resources. Consequently, the entire system operates at a low level of efficiency, and its stability remains weak.
(2) When a + b < 0, e < d, and f + g < h, the platform's perceived benefits from active governance temporarily offset its additional costs, and users' perceived benefits from participation outweigh the gains they would forgo by abandoning malicious use. Meanwhile, the government opts for a lenient strategy due to excessively high regulatory costs or an insufficient perception of potential losses. Under these conditions, E5(1,1,0) becomes a stable equilibrium point, meaning the game converges to a steady state of {active governance, participation in governance, lenient supervision}. In this state, the platform engages in proactive governance in response to government rewards and penalties, and users participate actively due to high returns on their oversight. The system relies on the platform's self-discipline and users' voluntary compliance, indicating that user participation can, to some extent, substitute for government supervision in motivating the platform to actively address AIGD. However, a prolonged absence of government oversight may undermine the motivation for sustained proactive efforts by both platforms and users, potentially inducing opportunistic behaviors. This results in insufficient compliance and leaves the overall system with relatively weak stability.
Scenario 3. Desirable equilibrium. When a + b + c + V ( R u 2 ) < 0 , e < d , and f + g > h , the platform’s perceived benefits from active governance offset its governance costs, penalty losses, reputational damage, and other related expenses. Similarly, users’ perceived benefits from participation outweigh their opportunity costs, and the government’s perceived benefits from stringent supervision cover its regulatory costs and the loss of public trust. Under these conditions, E8(1,1,1) emerges as a stable equilibrium point, indicating that the game converges to a steady state of {active governance, participation, stringent supervision}. In this state, tripartite collaboration enables highly effective governance. This is manifested in the government’s supervision effectively constraining platform behavior, user participation enhancing governance efficiency, and the platform’s proactive governance reducing the regulatory burden on the government. Consequently, the system achieves a desirable equilibrium, characterized by high stability and high governance efficiency.
Grounded in the real-world context of AIGD governance on UGC platforms, the evolutionary process of this system can be summarized as follows: (a) In the initial stage of governance, the government possesses an insufficient understanding of the importance of AIGD governance, and the supervisory framework remains underdeveloped. Platforms prioritize user base expansion and commercial profit maximization, resulting in limited investment in governing such content. Users exhibit weak awareness of participating in governance. Some individuals lack the capacity to identify AIGD, while others display opportunistic tendencies, aiming to derive additional benefits through the malicious use of GAI. At this point, the tripartite strategy corresponds to E1(0,0,0). (b) In the intermediate stage of governance, as the risks associated with the dissemination of AIGD accumulate, the government progressively deepens its understanding of governance and strengthens its regulatory and supervisory efforts. Its strategy consequently shifts towards E4(0,0,1). As governance advances, the government introduces explicit reward and penalty policies alongside governance standards. However, given the high costs of stringent supervision, it gradually transitions towards a more lenient and enabling approach. Meanwhile, driven by regulatory pressure, the desire to avoid reputational damage, and the pursuit of long-term commercial interests, platforms begin to intensify their efforts in disinformation review. They introduce intelligent detection algorithms and establish mechanisms for user oversight feedback and for penalizing malicious content creation. Concurrently, users’ perceived benefits from participating in governance gradually increase. This drives platforms towards active governance and users toward participation, culminating in a “platform-user” collaborative governance model. Consequently, the system’s state evolves into E5(1,1,0). 
(c) In the advanced stage of governance, the effects of collaborative governance become increasingly evident. The government’s supervisory framework matures, effectively mitigating information asymmetry. Platforms fully recognize the long-term value of disinformation governance for their sustainable development and further enhance their governance efforts. Through sustained participation in governance practices, users experience a marked improvement in their digital literacy, which significantly curtails the motivation for malicious creation, while their willingness to participate in monitoring gradually increases. At this juncture, the system attains the ideal equilibrium of E8(1,1,1). The government, platforms, and users collectively establish a highly efficient and collaborative co-governance system for disinformation, achieving a virtuous cycle.
Based on the above analysis, the stable equilibrium states corresponding to processes (a) and (b) can be optimized through specific adjustment strategies to achieve the desired ideal state of E8(1,1,1). In particular, parameters related to the intensity of government rewards and penalties play a pivotal role. The following numerical simulations will focus on exploring this ideal equilibrium state and the dynamic characteristics of its key parameters.

5. Simulation Analysis

To more intuitively explore the influencing factors and associated parameter sensitivity of the tripartite evolutionary strategies in the governance of AIGD on UGC platforms, this study employs MATLAB R2016b (MathWorks, Natick, MA, USA) to conduct numerical simulations of the aforementioned model.
To conduct the simulation analysis, benchmark assignments for the model parameters are required. The parameter settings adhere to the following principles: (1) the core parameters of prospect theory (PT), including the risk preference coefficient for gains (α), the risk preference coefficient for losses (β), and the loss aversion coefficient (λ), are initially assigned the average individual parameter values from the classic study by Tversky and Kahneman [35], i.e., α = β = 0.88, λ = 2.25; (2) the platforms' governance cost and benefit parameters are assigned initial values based on existing UGC platform governance game studies [24,52] and industry governance practices; (3) the users' benefit and cost parameters are set by taking into account both the technical characteristics of AIGD and existing reports on the governance of disinformation; (4) the behavioral probability and intensity parameters μ, γ, and ε lie in the range [0, 1]; for the simulation calculations they are discretized with a step size of 0.1, and in the initial scenario they are set to a medium level of 0.5. Considering that malicious users account for a small proportion, the initial value of m is set to 0.3; (5) the remaining parameters are determined by satisfying the stability conditions of E8(1,1,1), provided they are consistent with the realistic context. The specific initial parameter assignments are presented in Table 4.
It should be noted that, although the core PT parameters above take the classic values from Tversky and Kahneman (1992) [35] as the baseline, this study first considered the heterogeneity in decision-making characteristics among the three types of agents: the government, platforms, and users. Specifically, (1) the government in China, as a public authority, is extremely sensitive to the loss of public credibility and negative social impacts, and its decision-making tends to be more risk-averse, while its perception of gains such as tax revenue may be relatively rational; (2) platforms, as for-profit enterprises, pursue profit maximization and are sensitive to economic gains and reputational losses; their risk attitude lies somewhere between that of the government and users, but market competition may focus them on short-term gains and bias their perception of gains upward; (3) users, as independent individuals, are often influenced by emotions in their decision-making, may pay more attention to immediate gains and penalties, exhibit a relatively lower degree of loss aversion, and may perceive gains and losses non-linearly. Therefore, a control-group experiment was designed to compare the evolutionary outcomes under unified parameters versus differentiated parameters. The evolutionary results under differentiated parameters are presented in Appendix A. The two settings differ only slightly in convergence speed and do not lead to different equilibria, indicating that the influence of parameter heterogeneity is not significant. Thus, even though the risk attitudes of the three types of actors differ in theory, within the parameter range of this model, unified parameters are sufficient to capture the main game characteristics.
To simplify the analysis, the unified parameter form is adopted in the subsequent sections of this paper.
In addition, to verify the effectiveness of PT in characterizing the risk attitudes and loss aversion of decision-makers, this paper compares the constructed PT-based tripartite evolutionary game model (hereinafter the "PT model") with the evolutionary game model under the assumption of perfect rationality (hereinafter the "traditional model"). In the traditional model, the utility of each participating actor directly adopts the objective payoff values, which is equivalent to setting the PT value-function parameters to α = β = 1 and λ = 1; both perceived gains and perceived losses then equal the actual gains or losses. The remaining parameters are set exactly as in the PT model (see Model Construction for details), and the evolutionary results are presented in Appendix B. Simulating the evolutionarily stable strategies of the two models under different initial conditions shows that the main difference is that, under the PT model, the system is more likely to converge to the desirable equilibrium characterized by platforms' active governance, users' participation in governance, and the government's stringent supervision. This suggests that, in the governance of AIGD on UGC platforms, introducing PT more realistically reflects each actor's sensitivity and risk preferences when facing gains and losses, thereby more accurately capturing the real-world governance dilemma.
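The gap between the two models can be seen at the level of a single payoff component. Under the PT baseline, an objective penalty is perceived as considerably larger than its face value, while an equally sized reward is perceived as smaller; setting α = β = λ = 1 recovers the traditional model exactly. A minimal sketch (the payoff size of 4 is an arbitrary illustrative figure, not a calibrated parameter):

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """PT value function; alpha = beta = lam = 1 recovers the rational model."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

penalty, reward = -4.0, 4.0                      # illustrative objective payoffs

pt_penalty = value(penalty)                      # ≈ -7.62: the loss looms larger
pt_reward = value(reward)                        # ≈ 3.39: the gain is dampened
rational_penalty = value(penalty, alpha=1, beta=1, lam=1)   # exactly -4.0
```

This asymmetry is one mechanism behind the comparison just described: perceived penalties and losses weigh more heavily under PT, pulling the system toward the desirable equilibrium.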

5.1. Analysis of System Evolution Stability Test

Substituting the given parameter values into the replicator dynamic equations and varying the initial willingness of the three actors to participate, we investigate their impact on the system’s evolutionary path. (1) Conducting 50 evolutionary iterations under different initial strategy combinations yields the results shown in Figure 5a. The results indicate that regardless of the initial values of the platform’s probability of active governance (x), users’ probability of participation in governance (y), and the government’s probability of stringent supervision (z), the system ultimately converges to the stable equilibrium point E8(1,1,1), corresponding to the strategy profile {active governance, participation in governance, stringent supervision}. (2) Holding other parameters constant and assuming identical initial willingness for all three actors (x = y = z), the evolutionary outcomes for values of 0.2, 0.5, and 0.8 are shown in Figure 5b–d. While all scenarios eventually trend towards E8(1,1,1), a higher initial probability leads to faster convergence speed. This occurs because an elevated initial willingness reflects greater recognition and acceptance of AIGD governance among the actors. Consequently, the positive incentive effects within the collaborative process emerge earlier. Specifically, the platform’s active governance measures receive quicker user response and cooperation, and government stringent supervision more effectively regulates actor behavior. The resulting virtuous cycle of interaction significantly accelerates the process of strategy alignment.
These numerical simulation results align with the conclusions drawn from the preceding strategic stability analysis, thereby robustly validating the effectiveness of the model.
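The initial-willingness effect can also be reproduced in a toy version of the replicator system. The coefficients below are illustrative placeholders (not the calibrated Table 4 values); the sketch simply measures how many Euler steps each start needs before all three probabilities exceed 0.95:

```python
import numpy as np

def steps_to_converge(s0, dt=0.01, max_steps=20000, tol=0.95):
    """Euler steps until min(x, y, z) exceeds tol for a placeholder system."""
    s = np.array([s0, s0, s0], dtype=float)   # equal initial willingness
    for step in range(max_steps):
        x, y, z = s
        gx = -1.0 + 2.0 * y + 1.5 * z         # platform payoff difference
        gy = -0.5 + 1.0 * x + 1.2 * z         # user payoff difference
        gz = 0.4 - 0.8 * x + 0.6 * y          # government payoff difference
        s = np.clip(s + dt * np.array([x*(1-x)*gx, y*(1-y)*gy, z*(1-z)*gz]),
                    0.0, 1.0)
        if s.min() > tol:
            return step
    return max_steps

fast = steps_to_converge(0.8)
slow = steps_to_converge(0.5)
```

Consistent with Figure 5b–d, the higher initial willingness converges in fewer steps (`fast < slow`), although both runs end at the same (1, 1, 1)-type corner.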

5.2. Analysis of Influencing Factors

To eliminate the potential influence of initial strategy probability settings on system evolution, the initial strategy probabilities for the three game actors are uniformly set to 0.5 in subsequent analyses [53]. Based on this configuration, we analyze the impact of variations in individual parameters on the system’s evolutionary path and the strategic choices of the game actors.

5.2.1. Impact of Government Reward–Penalty Mechanisms on Evolutionary Paths

Based on the replicator dynamic equations of the game actors, it is evident that the government’s reward and penalty policies toward the platform and users influence the strategic choices of the associated actors, thereby affecting the evolutionary stability of the entire system. Subsequently, three parameters— R p , D p , and R u 1 —are selected for analysis. These correspond to the government’s reward to the platform, penalty on the platform, and reward to users, respectively. As the most direct policy instruments within the government’s reward and penalty mechanism, they systematically reflect the impact of government intervention on the evolutionary path.
(1) Government reward for platform active governance ( R p ). The focus is on three R p values with practical significance, representing symbolic, moderate, and substantial government rewards to the platform, specifically set at R p = 1, R p = 2, and R p = 3. The evolutionary paths of the system and its actors under these conditions are shown in Figure 6. Tests show that variations in R p have no significant impact on the evolutionary paths of users and the government; therefore, this aspect is not analyzed further. Figure 6a indicates that the system ultimately converges to the equilibrium point E8(1,1,1) for all tested values of R p . This indicates that when the government provides varying degrees of rewards for active governance to platforms, there are slight differences in the speed at which the system reaches a stable state, and ultimately, a tripartite equilibrium is formed, characterized by the government’s stringent supervision, platforms’ active governance, and users’ participation in governance. Figure 6b shows that as R p increases, the evolutionary speed of platforms choosing the active governance strategy accelerates slightly. This indicates that the government reward for platforms’ active governance ( R p ) plays a role in AIGD’s governance, and enhancing it can effectively strengthen the platforms’ governance willingness. However, we postulate that as a form of government expenditure, an increase in R p may weaken the enforcement intensity of the government’s stringent supervision strategy, potentially leading to a slowdown in the overall convergence speed of the system.
(2) Government penalty for platform passive governance ( D p ). The focus is on three D p values with practical significance, representing symbolic punishment (primarily for deterrence), moderate punishment commensurate with governance costs, and severe punishment with strong deterrent effects imposed by the government on platforms. These are specifically set at D p = 2, D p = 4, and D p = 6. The evolutionary paths of the system and its actors under these conditions are shown in Figure 7. Figure 7a indicates that the system ultimately converges to the equilibrium point E8(1,1,1) across the tested values of D p . This demonstrates that, although the speed at which the system reaches a steady state varies with different levels of government penalties for platforms’ passive governance, the tripartite equilibrium of {active governance, participation in governance, stringent supervision} is ultimately attained in all cases. Figure 7b–d show that as D p increases, the evolutionary speed at which platforms, users, and the government choose their respective positive strategies accelerates. This finding indicates that, within a reasonable range, raising D p can not only effectively regulate platforms’ behavior but may also, through a signaling effect, boost users’ confidence in participation in governance and strengthen the authority of government supervision. Consequently, it comprehensively accelerates the formation of a tripartite collaborative governance framework.
(3) Government reward for user participation in oversight ( R u 1 ). Similarly, the focus is on three R u 1 values with practical significance, representing no reward, a moderate reward, and a substantial reward provided by the government to users. These are specifically set at R u 1 = 0, R u 1 = 1, and R u 1 = 2. The evolutionary paths of the system and its actors under these conditions are shown in Figure 8. Figure 8a indicates that the system ultimately converges to the equilibrium point E8(1,1,1) across the tested values of R u 1 . This demonstrates that, although the speed at which the system reaches a steady state varies with different levels of government rewards for user participation in oversight, the tripartite equilibrium of {active governance, participation in governance, stringent supervision} is ultimately attained in all cases. Figure 8b–d show that an increase in R u 1 does not produce a significant impact on the platforms’ strategic choice; its strategy evolution rapidly converges to active governance across the varying levels of government rewards. Meanwhile, the evolutionary speed of the user strategy significantly increases, while that of the government shows a slight rise. This indicates that, although incentive policies targeting user participation in oversight do not directly affect platforms, raising R u 1 within a reasonable range can effectively enhance user engagement and the efficiency of government supervision. Consequently, it promotes the convergence of the collaborative governance process towards the ideal state.
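The penalty effect in point (2) can be reproduced in miniature. In the toy replicator sketch below, a harsher perceived penalty for passive governance is represented by raising the intercept k of the platform's payoff difference (a crude stand-in for a larger perceived V(Dp)); all numbers are hypothetical placeholders rather than calibrated values. The platform then crosses a high-probability threshold for active governance in fewer steps:

```python
import numpy as np

def steps_until_active(k, dt=0.01, max_steps=20000, tol=0.95):
    """Euler steps until the platform's probability x exceeds tol.
    k is the intercept of the platform's perceived payoff difference."""
    s = np.array([0.5, 0.5, 0.5])
    for step in range(max_steps):
        x, y, z = s
        gx = k + 2.0 * y + 1.5 * z       # platform: active minus passive
        gy = -0.5 + 1.0 * x + 1.2 * z    # user: participate minus not
        gz = 0.4 - 0.8 * x + 0.6 * y     # government: stringent minus lenient
        s = np.clip(s + dt * np.array([x*(1-x)*gx, y*(1-y)*gy, z*(1-z)*gz]),
                    0.0, 1.0)
        if s[0] > tol:
            return step
    return max_steps

# Symbolic, moderate, and harsh penalty settings (illustrative):
times = [steps_until_active(k) for k in (-1.5, -1.0, -0.5)]
```

With these placeholders, `times` is strictly decreasing: the harsher the perceived penalty, the sooner the platform locks into active governance, mirroring the sensitivity to D_p described above.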

5.2.2. Impact of User Digital Literacy on Evolutionary Paths

Digital literacy is defined as the knowledge, skills, and competencies that help learners become confident, critical, and responsible users of digital technologies. Its core elements include digital information management and digital responsibility, which refer, respectively, to “effectively locating information resources and evaluating the accuracy, credibility, and relevance of information” and “demonstrating respect for oneself and others in digital spaces, and practicing safe, responsible, and ethical use.” Against the backdrop of the rapid development of GAI technologies, digital literacy is expanding to encompass new dimensions such as algorithm and AI literacy [54], requiring users not only to possess the ability to discern information but also to have the awareness and capability to actively participate in digital society governance. In this paper, users' digital literacy refers specifically to their ability and willingness to identify the risks of disinformation, resist malicious usage, and actively participate in content supervision on UGC platforms while using GAI tools and engaging in content creation. In the model, this concept is captured by two parameters: the probability of users maliciously using GAI (m) and the willingness of users to participate in monitoring (γ); the lower m and the higher γ, the higher users' digital literacy. Based on the replicator dynamic equations of the game actors, the level of users' digital literacy, as manifested in the governance of AIGD, directly influences the evolutionary paths of the system and the three actors.
(1) Probability of malicious use of GAI by users (m). With a step size of 0.2, three m values representing different creative motivations are selected: low malicious tendency (m = 0.1), moderate malicious tendency (m = 0.3), and high malicious tendency (m = 0.5). The evolutionary paths of the system and its actors under these conditions are shown in Figure 9. Figure 9a indicates that, regardless of the value of m, the system ultimately converges to E8(1,1,1). However, Figure 9b–d reveal that a higher m value leads to a faster evolutionary rate at which each actor adopts its positive strategy. Among them, users exhibit the greatest variation in responsiveness to changes in m, followed by the government, with platforms showing the least change. This indicates that an increase in the probability of malicious use does not alter the final stable equilibrium. Instead, it accelerates the collaborative governance process by intensifying the perceived risk among all actors. Specifically, users shift more rapidly towards participation in oversight due to facing heightened deterrence from penalties. The government expedites its regulatory response to counteract potential negative effects. Platforms, however, experience the smallest change in evolutionary rate because the transmission of risk to them is relatively indirect. These findings suggest that a moderate level of external risk can, in fact, stimulate the system’s potential for collaborative governance. They further corroborate that the risk perception mechanism grounded in prospect theory constitutes a key theoretical lens for explaining the evolution of this collaborative governance.
(2) Willingness of user participation in monitoring (γ). Using a step size of 0.4, three γ values reflecting different levels of willingness to participate in monitoring are selected: low willingness (γ = 0.1), medium willingness (γ = 0.5), and high willingness (γ = 0.9). The evolutionary paths of the system and its actors under these conditions are shown in Figure 10. Figure 10a reveals that when γ equals 0.1, the system evolves toward E5(1,1,0), whereas when γ takes values of 0.5 and 0.9, the system evolves toward E8(1,1,1). Figure 10b–d show that an increase in γ slightly accelerates the convergence of platforms and users toward their respective positive strategies. Additionally, it prompts the government to shift from lenient to stringent supervision, and as γ increases further, the rate at which the government converges to the stringent supervision strategy also accelerates. These results reveal that an improvement in users’ monitoring willingness not only incentivizes platforms and users to accelerate their positive strategic choices but also serves as a key driver compelling the government to transition from lenient to stringent supervision. When users’ willingness to monitor is insufficient, the government lacks external supervisory pressure and tends to maintain lenient supervision. As users’ willingness improves, their capacity to identify and provide feedback on platforms’ passive governance and government supervisory deficiencies is enhanced. This significantly raises the risk of a loss of government credibility, thereby compelling the government to accelerate its shift towards a stringent supervision strategy and ultimately facilitating the achievement of tripartite collaborative governance.
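The mechanics discussed in this subsection can be reproduced with a toy tripartite replicator-dynamics simulation. The payoff gaps below are hypothetical linear stand-ins for the paper's perceived-payoff expressions (only the replicator form dx/dt = x(1 − x)·ΔU is standard), so the sketch illustrates how γ, m, D_p, and R_u1 enter the dynamics rather than reproducing the calibrated thresholds such as the E5/E8 switch:

```python
import numpy as np

def simulate(x0=0.2, y0=0.2, z0=0.2, gamma=0.9, m=0.3,
             D_p=4.0, R_u1=1.0, steps=400, dt=0.05):
    """Toy tripartite replicator dynamics with HYPOTHETICAL payoff gaps."""
    x, y, z = x0, y0, z0  # platform / user / government positive-strategy shares
    path = [(x, y, z)]
    for _ in range(steps):
        # Hypothetical perceived-payoff gaps, not the paper's exact terms:
        dU_p = z * D_p + gamma * y - 0.5   # platform: active vs passive
        dU_u = z * R_u1 + gamma - 0.9 * z  # user: participate vs not
        dU_g = gamma * y + 2.0 * m - 0.5   # government: strict vs lenient
        # Forward-Euler replicator steps, clamped to [0, 1]:
        x = float(np.clip(x + dt * x * (1 - x) * dU_p, 0.0, 1.0))
        y = float(np.clip(y + dt * y * (1 - y) * dU_u, 0.0, 1.0))
        z = float(np.clip(z + dt * z * (1 - z) * dU_g, 0.0, 1.0))
        path.append((x, y, z))
    return np.array(path)

final = simulate(gamma=0.9)[-1]
print(np.round(final, 3))  # all three shares approach 1 under these toy payoffs
```

Sweeping gamma or m in `simulate` reproduces the qualitative pattern of Figures 9 and 10: higher willingness or risk accelerates convergence toward the positive-strategy corner.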

5.2.3. Impact of Risk Preference and Loss Aversion Coefficient on Evolutionary Paths

The risk preference coefficients (α, β) and the loss aversion coefficient (λ) in PT reflect actors’ non-rational perceptual biases regarding gains and losses. Analyzing their influence on the evolutionary paths can validate the explanatory value of this theory within the context of AIGD governance.
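These parameters enter through the standard Tversky-Kahneman value function, v(x) = x^α for gains and v(x) = −λ(−x)^β for losses. A minimal sketch follows; the default parameter values are illustrative, taken from the platform setting used in Appendix A:

```python
import numpy as np

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman prospect theory value function.

    Gains are dampened by the concavity exponent alpha; losses are
    dampened by beta and then amplified by the loss aversion
    coefficient lam, so losses loom larger than equal-sized gains.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

print(pt_value(4.0))   # ≈ 3.39: a gain of 4 is perceived as 4**0.88
print(pt_value(-4.0))  # ≈ -7.62: the equal-sized loss feels over twice as bad
```

In the game model, objective payoff terms are passed through this transformation before entering the replicator dynamics, which is how the perceptual biases below shape the evolutionary paths.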
(1) Risk preference coefficient α/β. Based on PT, the risk preference coefficients in the perceived value model consist of the perceived gain sensitivity (α) and the perceived loss sensitivity (β). Tests indicate that variations in α have no significant impact on system evolution (the results are presented in Appendix C); therefore, this parameter is not discussed further. Simulations are conducted with β values of 0.58, 0.68, 0.78, and 0.88. The evolutionary paths of the system and its actors under these conditions are shown in Figure 11. Figure 11a indicates that the system ultimately converges to the equilibrium point E8(1,1,1) across all tested values of β. Figure 11b,c show that the higher the β value, the faster platforms and users converge to their active and positive strategies, with platforms exhibiting the larger fluctuation amplitude in response to changes in β. Figure 11d shows that, compared with the other values, the government’s strategy converges fastest when β = 0.78. Thus, although an increase in the sensitivity to perceived losses (β) does not alter the long-term equilibrium of the system, it significantly accelerates the convergence of platforms and users toward positive strategies. This phenomenon validates the applicability of PT in the context of AIGD governance: actors’ sensitivity to losses amplifies their motivation to avoid potential penalties, thereby driving faster behavioral change. Meanwhile, the impact of β on the government’s strategy is nonlinear, suggesting that there exists an optimal level of loss perception for the government when weighing the costs of supervision against the loss of public credibility.
This indicates that, in governance practice, attention should be paid to strengthening the perception of platforms and users regarding the consequences of passive governance, while setting a reasonable level of supervision intensity for the government itself, so as to guide the system toward the desirable state more quickly.
(2) Loss aversion coefficient λ. This coefficient is the core parameter that distinguishes behavioral decision-making from traditional rational assumptions. With a step size of 1.0, simulations are conducted for λ = 1.25, λ = 2.25, λ = 3.25, and λ = 4.25. The evolutionary paths of the system and its actors under these conditions are shown in Figure 12. Figure 12a indicates that the system ultimately converges to the equilibrium point E8(1,1,1) across the tested values of λ. Figure 12b–d show that the higher the value of λ, the faster each actor adopts its positive strategy, with the government exhibiting the largest amplitude of fluctuation, followed by platforms and then users. Compared with β, the actors are more sensitive to changes in λ. Furthermore, as λ increases in equal increments, the resulting fluctuations in each actor’s trajectory gradually diminish. Taken together, an increase in λ not only accelerates convergence but also stabilizes strategy choices. This result validates a core proposition of PT, namely that the degree of loss aversion is a key psychological mechanism driving behavioral change, and that the payoff structure of each actor determines its differential sensitivity to λ. In governance practice, therefore, strengthening the actors’ perception of loss can guide the system toward the desirable state more quickly.
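The strong sensitivity to λ can be seen directly in the value function itself: holding β fixed, the perceived magnitude of a given objective loss scales linearly in λ. A quick illustration for an objective loss of 4 at the λ values simulated above (β = 0.88 is an assumed illustrative value):

```python
# Perceived value of an objective loss of 4 under the Tversky-Kahneman
# value function v(x) = -lam * (-x)**beta, at the lambda values simulated
# above. beta = 0.88 is an illustrative assumption.
beta = 0.88
for lam in (1.25, 2.25, 3.25, 4.25):
    print(lam, round(-lam * 4.0 ** beta, 2))
# Each unit increase in lambda deepens the perceived loss by about 3.39
# (i.e., 4**0.88), from roughly -4.23 at lambda = 1.25 to roughly
# -14.39 at lambda = 4.25.
```

This linear amplification of every loss term in the perceived payoff matrix is consistent with λ exerting a larger pull on the replicator dynamics than the exponent β, which acts only through a power transform.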

5.3. Practical Case Validation

To verify the applicability of the model in this paper to real-world governance scenarios, this study selects Douyin (ByteDance Ltd., Beijing, China), a leading UGC platform in China, as a typical case, calibrates the model parameters based on its AIGC content governance practices, and compares the simulation results with those of the baseline model described above.
Drawing on Douyin’s governance characteristics—including its mature AI-assisted review technologies, stringent regulatory penalty rules, and well-established user reporting incentive mechanisms—this paper conducts contextual calibration of the core parameters. A comparison between the calibrated results and the baseline parameters is presented in Table 5. The remaining parameters are kept consistent with those in the baseline model to ensure univariate consistency in the comparison. The calibrated parameters more closely reflect Douyin’s actual governance costs, reward and penalty intensities, and the behavioral characteristics of the actors involved.
Based on the calibrated Douyin parameters, this paper re-examines the stability of system evolution, with the results presented in Figure 13. Comparing the simulation results of the Douyin scenario with those of the baseline scenario reveals that both sets of simulations ultimately converge to the desirable equilibrium state of {platforms’ active governance, users’ participation in governance, the government’s stringent supervision}, validating the generalizability of the core conclusions of this paper. In terms of evolutionary paths, the convergence speeds of the strategies of platforms, users, and the government in the Douyin scenario are significantly faster than those in the baseline scenario. Specifically, the platform’s strategy approaches 1 within one step, the user’s strategy within three steps, and the government’s strategy within two steps, albeit with more pronounced fluctuations. This difference stems from Douyin’s lower governance costs, stronger reward and penalty incentives, and higher risk sensitivity among actors, accurately reflecting its governance characteristics as a leading UGC platform. This demonstrates that the model presented in this paper can effectively adapt to different platform scenarios and possesses strong practical applicability.

6. Discussion

6.1. Conclusions

Based on prospect theory, this study constructs a tripartite evolutionary game model of “UGC platforms–users–the government” and systematically analyzes the strategic decision-making mechanisms of multiple actors in the collaborative governance of AIGD. Through numerical simulation and practical case validation, it is revealed that users’ digital literacy and the psychological characteristics of various actors play a crucial role in governance, with the system being most sensitive to users’ willingness to supervise γ and the loss aversion coefficient λ. The main research conclusions are as follows:
(1) The system exhibits multiple possible evolutionary paths. When the conditions a + b + c + V(−R_u2) < 0, e < d, and f + g > h are satisfied—that is, when the perceived benefits of platforms’ active governance cover governance costs, penalty losses, reputational losses, and similar items; the perceived benefits of users’ participation in governance cover their opportunity costs; and the perceived benefits of the government’s stringent supervision cover supervision costs and the loss of public credibility—then, regardless of the initial willingness levels of the three actors, the system converges to the desirable equilibrium point of {platforms’ active governance, users’ participation in governance, the government’s stringent supervision}, and the higher the initial willingness, the faster the convergence. This indicates that, within the tripartite game framework, the collaborative governance mechanism exhibits endogenous stability. As long as the institutional design is sound, the system can spontaneously move toward the desirable governance pattern, and an increase in initial willingness accelerates the realization of this desirable state.
(2) The government’s reward–penalty mechanisms play a role in promoting the evolution of the system toward the desirable equilibrium state, but the impact is limited. Specifically, the reward R_p and penalty D_p for platforms can accelerate the convergence speed of platforms toward positive strategies, and an increase in the latter has a positive accelerating effect on the evolutionary speed of all three actors. The reward R_u1 for users can significantly enhance users’ willingness to participate, but its impact on platforms’ strategies is weak. This indicates that the government’s reward–penalty mechanisms need to be precisely designed for different actors. For platforms, a balanced combination of rewards and penalties should be maintained, whereas for users, incentives should be the primary approach.
(3) Users’ digital literacy is a key variable influencing the evolutionary path of the system. Users’ low malicious tendency (low m) and high willingness to monitor (high γ) together constitute high digital literacy, but the mechanisms through which they affect the system differ. Specifically, the higher m is, the faster the convergence speed of all actors, reflecting that high-risk behaviors trigger stronger regulatory responses. Meanwhile, when γ is low, the government tends toward lenient supervision, whereas when γ is high, it shifts to stringent supervision. This indicates that user participation serves as an important trigger for government regulatory behavior.
(4) Both the risk preference coefficient (β) and the loss aversion coefficient (λ) exert a significant moderating effect on the evolution of the system. The higher β is, the faster platforms and users converge toward positive strategies, with platforms being more sensitive to risk. An increase in λ accelerates the evolutionary speed of all three actors’ strategies, with the government exhibiting the largest amplitude of fluctuation, indicating that in governance the government places greater emphasis on loss avoidance. Furthermore, compared with β, the actors are more sensitive to changes in λ, highlighting the dominant role of loss aversion psychology in shaping the irrational characteristics of actors’ behavior in this context.

6.2. Recommendations

As stated in the Book of Rites · Doctrine of the Mean, “All things are nurtured together without harming one another; paths are pursued in parallel without contradicting one another.” Only through the collaborative efforts of all three actors can the governance effectiveness be maximized. To accelerate the improvement of China’s collaborative governance system for the content ecosystem, the following actionable recommendations are proposed for each actor based on the above research conclusions and policy interpretations:
(1) As the regulator in the governance system, the government should optimize the governance mechanism in the following aspects. Firstly, improve the reward–penalty mechanisms. Reward and penalty standards that match platforms’ governance costs should be established, and the intensity of penalties should be appropriately increased (e.g., raising D_p to a level comparable to governance costs) in order to accelerate platforms’ positive strategy responses. Secondly, incentivize user participation. A user monitoring reward system (e.g., R_u1) should be established, and non-material incentives such as credit scores and honorary titles should be used to enhance users’ sense of accomplishment in participating in governance, thereby stimulating their willingness to supervise γ. Thirdly, promote the improvement of users’ digital literacy. The identification of disinformation, participation in content supervision, and resistance to malicious creation should be integrated into digital literacy education, thereby increasing users’ γ value and reducing their m value, fundamentally optimizing the evolutionary path of the system. Furthermore, establish a linkage mechanism whereby user participation triggers regulatory actions. Given that the government places greater emphasis on loss avoidance (as evidenced by the largest fluctuation when λ increases), it is recommended that when the volume of user reports or the level of supervision willingness reaches a certain threshold, platform review or government intervention mechanisms be automatically triggered to avoid potential loss of public credibility (L_g), thereby forming a “users-platforms-government” tripartite collaborative governance loop.
(2) As the primary executor in the governance of AIGD, UGC platforms should focus on taking the following measures. Firstly, optimize the governance cost structure and enhance the expected benefits of governance. By introducing AI-assisted review technologies, the active governance cost (C_p) incurred in responding to government regulation can be reduced. Meanwhile, the economic benefits (E_p1) derived from active governance can be increased through measures such as promoting high-quality content and enhancing platform reputation, thereby strengthening the motivation for active governance. Secondly, establish a user reporting feedback mechanism. Provide timely feedback and rewards (such as exposure opportunities, coupons, etc.) for users’ reporting behaviors, thereby enhancing users’ sense of accomplishment in participation and consequently increasing their willingness to supervise γ. Thirdly, guide users to use GAI tools rationally. Through methods such as user agreements, content labels, and usage prompts, the probability of users’ malicious usage m can be reduced, and their sense of digital responsibility can be enhanced. Furthermore, strengthen reputation management and risk early warning mechanisms. Given that platforms are more sensitive to losses (as evidenced by the significant impact of λ), the mechanism for detecting public sentiment regarding disinformation should be improved, and timely responses should be made to social concerns in order to reduce potential reputational losses (L_p).
(3) Users in the governance of AIGD are both a potential source of risk and an important supervisory force. It is recommended that users: Firstly, enhance their information discernment ability and risk awareness. Actively learn methods for identifying disinformation, improve their digital literacy, and reduce the risk of being misled by malicious content or penalized by platforms. Secondly, actively participate in governance supervision. Participate in disinformation supervision through channels such as platforms’ reporting functions and content ratings, thereby enhancing their willingness to supervise γ and serving as “watchdogs” within the governance system. Thirdly, resist malicious creation behaviors. Cultivate a sense of digital responsibility, learn about the risks and ethical norms of GAI tools, refuse to use them to generate disinformation, reduce the probability of malicious usage m, and curb the spread of disinformation at its source. Finally, view the reward–penalty mechanisms rationally. Understand and respond to the incentive policies of the government and platforms (such as credit scores, exposure opportunities, etc.), obtain positive feedback from participation in governance, and form a virtuous cycle of “participation-benefit-re-participation.”

6.3. Limitations and Future Directions

This study, from the perspective of bounded rationality and incorporating prospect theory, focuses on the context of AIGD governance. It constructs a tripartite evolutionary game model of “UGC platforms–users–the government” that takes into account the dual role of users and conducts numerical simulations and practical case validation. Corresponding conclusions and recommendations are derived, extending the application boundary of evolutionary game theory in the field of information content governance. Although efforts were made to align the model as closely as possible with the realistic context during its construction, the following limitations remain.
(1) The model only selects the government, platforms, and users as the three actors in the game, defining them as the minimum core governance structure for the governance of AIGD on UGC platforms. In realistic contexts, the governance system does include other stakeholders such as media, fact-checking organizations, and advertisers, which play important roles in shaping reward and penalty mechanisms and monitoring governance effectiveness. From the perspective of theoretical sufficiency, the core objective of this study is not to fully replicate the entire ecosystem of AIGD governance, but rather to focus on identifying the fundamental strategic interaction patterns within the minimum core governance structure. Considering that the government serves as the rule-maker and enforcer of regulation, platforms serve as the primary responsible parties for content review, and users serve as content producers, disseminators, and supervisors, these three actors together constitute the basic responsibility chain for content governance on UGC platforms. Their strategic interactions directly determine the core direction of governance effectiveness. The roles of other actors are incorporated into the model through parameter internalization (e.g., the professional capability of fact-checking organizations is reflected in the reduction of platforms’ governance costs, and media supervision is reflected in the improvement of the government’s regulatory efficiency), thereby avoiding the difficulties in model solution and the blurring of core logic that may arise from excessive complexity caused by multiple actors. Therefore, under the premise of ensuring analytical sufficiency, the current setup achieves a balance between theoretical parsimony and practical explanatory power. 
Future research may consider further expanding the scope of actors to incorporate more actors and behaviors that play critical roles into the governance system, in order to construct a theoretical model that more closely aligns with complex reality.
(2) The model assumes that the core parameters of prospect theory (α, β, λ) are static constants, indicating that the actors’ risk preferences and degrees of loss aversion remain unchanged throughout the governance cycle. However, in real-world governance scenarios, as the government dynamically adjusts the regulatory intensity ε for AIGD, as major disinformation incidents erupt, or as platforms’ governance effectiveness and users’ participation levels change, the psychological preferences of the three types of actors undergo time-varying evolution. For instance, major disinformation-related public sentiment events significantly increase the government’s aversion to the loss of public credibility and platforms’ risk perception of regulatory penalties, whereas a long-term stable governance environment may gradually reduce the actors’ levels of risk aversion. Although the current static parameter setup clearly reveals the fundamental game patterns of governance, it fails to capture the dynamic characteristics of psychological preferences as they evolve with contextual changes. Future research could introduce time-varying prospect theory parameters (e.g., modeling α, β, λ as functions of regulatory intensity (ε), public sentiment intensity, or governance stage) to further explore the impact of dynamic shifts in actors’ psychological preferences on the system’s evolutionary path, equilibrium convergence speed, and stable strategies under different governance scenarios, thereby constructing an evolutionary game model that more closely aligns with real-world governance dynamics.
(3) The current model does not incorporate the potential social costs that may arise from stringent governance as explicit benefit items into the perceived payoff matrix. The current model focuses only on the direct benefits and costs of governing AIGD and does not consider indirect social welfare losses such as the suppression of innovation in the AIGC industry caused by excessive government regulation, or the chilling effect on users’ freedom of expression resulting from overly strict content review by platforms. Future research could incorporate such potential social costs, represented as explicit negative benefit terms, to construct a more balanced and comprehensive evolutionary game model that takes into account both “the effectiveness of disinformation governance” and “the maximization of social welfare,” thereby further enhancing the practical applicability of policy recommendations.

Author Contributions

Conceptualization, L.L. and Y.W.; methodology, Y.W. and S.G.; software, Y.W.; validation, L.L. and Y.W.; formal analysis, Y.W.; investigation, Y.W.; resources, L.L. and S.G.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, L.L. and S.G.; visualization, Y.W.; supervision, L.L. and S.G.; project administration, L.L.; funding acquisition, L.L. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, Grant No. 72171207 and No. 72501251.

Data Availability Statement

No new data were created or analyzed in this study. All numerical simulation parameters and results are presented in detail in the manuscript.

Acknowledgments

The authors are very grateful to the anonymous referees for their insightful, constructive, and valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GAI: Generative Artificial Intelligence
UGC: User-generated content
AIGC: Artificial intelligence-generated content
AIGD: Artificial intelligence-generated disinformation (AI-generated disinformation)
LLMs: Large Language Models
ESS: Evolutionarily stable strategies
EGT: Evolutionary game theory
PT: Prospect theory
MCNs: Multi-Channel Networks

Appendix A. Simulation Results Under Differentiated PT Parameters

In this appendix, we adopt a heterogeneous setting for the core prospect theory parameters of the three actors (the government, platforms, and users) by aligning with their distinct decision-making characteristics in real-world scenarios:
(1) Platforms: α = β = 0.88, λ = 2.25;
(2) Users: α = 0.58, β = 0.68, λ = 2.00;
(3) The government: α = 0.78, β = 0.88, λ = 2.50.
The corresponding evolutionary simulation results are presented in the figures below.

Appendix A.1. Analysis of System Evolution Stability Test

Figure A1. Dynamic evolutionary trajectories of the system under differentiated PT parameters. (a) Evolutionary paths under different initial strategy combinations; (b) the evolutionary path when the initial strategies are x = y = z = 0.2 ; (c) the evolutionary path when the initial strategies are x = y = z = 0.5 ; (d) the evolutionary path when the initial strategies are x = y = z = 0.8 .

Appendix A.2. Analysis of Influencing Factors

Figure A2. Impact of R_p on the evolutionary trajectories of the system and its actors under differentiated PT parameters. (a) Impact of R_p on the system’s evolutionary path; (b) impact of R_p on the platform’s evolutionary path; (c) impact of R_p on the user’s evolutionary path; (d) impact of R_p on the government’s evolutionary path.
Figure A3. Impact of D_p on the evolutionary trajectories of the system and its actors under differentiated PT parameters. (a) Impact of D_p on the system’s evolutionary path; (b) impact of D_p on the platform’s evolutionary path; (c) impact of D_p on the user’s evolutionary path; (d) impact of D_p on the government’s evolutionary path.
Figure A4. Impact of R_u1 on the evolutionary trajectories of the system and its actors under differentiated PT parameters. (a) Impact of R_u1 on the system’s evolutionary path; (b) impact of R_u1 on the platform’s evolutionary path; (c) impact of R_u1 on the user’s evolutionary path; (d) impact of R_u1 on the government’s evolutionary path.
Figure A5. Impact of m on the evolutionary trajectories of the system and its actors under differentiated PT parameters. (a) Impact of m on the system’s evolutionary path; (b) impact of m on the platform’s evolutionary path; (c) impact of m on the user’s evolutionary path; (d) impact of m on the government’s evolutionary path.
Figure A6. Impact of γ on the evolutionary trajectories of the system and its actors under differentiated PT parameters. (a) Impact of γ on the system’s evolutionary path; (b) impact of γ on the platform’s evolutionary path; (c) impact of γ on the user’s evolutionary path; (d) impact of γ on the government’s evolutionary path.

Appendix B. Simulation Results Under Perfect Rationality

To verify that prospect theory adds explanatory power in characterizing decision-makers’ risk attitudes and loss aversion, this appendix reports simulation results for the traditional evolutionary game model under the perfect-rationality assumption.
In this control group, the core parameters of the prospect theory value function are set to α = β = 1.00 and λ = 1.00, which reduces the perceived-value function to the objective payoff; all other parameter settings remain identical to those in the PT model to ensure comparability. The corresponding evolutionary results are illustrated in the figures below.
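The control-group setting can be made concrete with the standard prospect theory value function, v(x) = x^α for gains and v(x) = −λ·(−x)^β for losses, which matches the parameters α, β, and λ defined in Table 1. The following minimal sketch (function name and example payoffs are illustrative, not taken from the paper) shows why α = β = λ = 1.00 recovers the perfect-rationality baseline:

```python
def perceived_value(x: float, alpha: float = 0.88, beta: float = 0.88,
                    lam: float = 2.25) -> float:
    """Prospect theory value function: concave for gains, convex and
    steeper (loss-averse) for losses, relative to a reference point of 0."""
    if x >= 0:
        return x ** alpha           # diminishing sensitivity to gains
    return -lam * (-x) ** beta      # losses loom larger than gains

# Baseline PT parameters distort an objective payoff of +/-4 ...
print(perceived_value(4.0))    # 4^0.88, about 3.39 (perceived gain)
print(perceived_value(-4.0))   # -2.25 * 4^0.88, about -7.62 (perceived loss)

# ... while the control-group setting alpha = beta = lam = 1.00
# returns the objective payoff unchanged.
print(perceived_value(-4.0, alpha=1.0, beta=1.0, lam=1.0))  # -4.0
```

The asymmetry between the second and third outputs is exactly what the PT model captures and the perfect-rationality control group removes.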

Appendix B.1. Analysis of System Evolution Stability Test

Figure A7. Dynamic evolutionary trajectories of the system under perfect rationality. (a) Evolutionary paths under different initial strategy combinations; (b) the evolutionary path when the initial strategies are x = y = z = 0.2 ; (c) the evolutionary path when the initial strategies are x = y = z = 0.5 ; (d) the evolutionary path when the initial strategies are x = y = z = 0.8 .

Appendix B.2. Analysis of Influencing Factors

Figure A8. Impact of R_p on the evolutionary trajectories of the system and its actors under perfect rationality. (a) Impact of R_p on the system’s evolutionary path; (b) impact of R_p on the platform’s evolutionary path; (c) impact of R_p on the user’s evolutionary path; (d) impact of R_p on the government’s evolutionary path.
Figure A9. Impact of D_p on the evolutionary trajectories of the system and its actors under perfect rationality. (a) Impact of D_p on the system’s evolutionary path; (b) impact of D_p on the platform’s evolutionary path; (c) impact of D_p on the user’s evolutionary path; (d) impact of D_p on the government’s evolutionary path.
Figure A10. Impact of R_u1 on the evolutionary trajectories of the system and its actors under perfect rationality. (a) Impact of R_u1 on the system’s evolutionary path; (b) impact of R_u1 on the platform’s evolutionary path; (c) impact of R_u1 on the user’s evolutionary path; (d) impact of R_u1 on the government’s evolutionary path.
Figure A11. Impact of m on the evolutionary trajectories of the system and its actors under perfect rationality. (a) Impact of m on the system’s evolutionary path; (b) impact of m on the platform’s evolutionary path; (c) impact of m on the user’s evolutionary path; (d) impact of m on the government’s evolutionary path.
Figure A12. Impact of γ on the evolutionary trajectories of the system and its actors under perfect rationality. (a) Impact of γ on the system’s evolutionary path; (b) impact of γ on the platform’s evolutionary path; (c) impact of γ on the user’s evolutionary path; (d) impact of γ on the government’s evolutionary path.

Appendix C. Simulation Results for α

In this appendix, we report the supplementary sensitivity analysis for the parameter α. Within the main prospect theory model, variations in α have no significant impact on the system’s evolutionary dynamics or equilibrium outcomes. The corresponding simulation result is illustrated in the following figure.
Figure A13. Impact of α on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of α on the system’s evolutionary path; (b) impact of α on the platform’s evolutionary path; (c) impact of α on the user’s evolutionary path; (d) impact of α on the government’s evolutionary path.

Figure 1. Strategies and action mechanisms of actors in the co-governance game of AI-generated disinformation on UGC (User-generated content) platforms.
Figure 2. Phase diagram for strategy evolution of the UGC platform.
Figure 3. Phase diagram for user strategy evolution.
Figure 4. Phase diagram for government strategy evolution.
Figure 5. Dynamic evolutionary trajectories of the system under uniform PT (Prospect theory) parameters. (a) Evolutionary paths under different initial strategy combinations; (b) the evolutionary path when the initial strategies are x = y = z = 0.2 ; (c) the evolutionary path when the initial strategies are x = y = z = 0.5 ; (d) the evolutionary path when the initial strategies are x = y = z = 0.8 .
Figure 6. Impact of R_p on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of R_p on the system’s evolutionary path; (b) impact of R_p on the platform’s evolutionary path; (c) impact of R_p on the user’s evolutionary path; (d) impact of R_p on the government’s evolutionary path.
Figure 7. Impact of D_p on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of D_p on the system’s evolutionary path; (b) impact of D_p on the platform’s evolutionary path; (c) impact of D_p on the user’s evolutionary path; (d) impact of D_p on the government’s evolutionary path.
Figure 8. Impact of R_u1 on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of R_u1 on the system’s evolutionary path; (b) impact of R_u1 on the platform’s evolutionary path; (c) impact of R_u1 on the user’s evolutionary path; (d) impact of R_u1 on the government’s evolutionary path.
Figure 9. Impact of m on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of m on the system’s evolutionary path; (b) impact of m on the platform’s evolutionary path; (c) impact of m on the user’s evolutionary path; (d) impact of m on the government’s evolutionary path.
Figure 10. Impact of γ on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of γ on the system’s evolutionary path; (b) impact of γ on the platform’s evolutionary path; (c) impact of γ on the user’s evolutionary path; (d) impact of γ on the government’s evolutionary path.
Figure 11. Impact of β on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of β on the system’s evolutionary path; (b) impact of β on the platform’s evolutionary path; (c) impact of β on the user’s evolutionary path; (d) impact of β on the government’s evolutionary path.
Figure 12. Impact of λ on the evolutionary trajectories of the system and its actors under uniform PT parameters. (a) Impact of λ on the system’s evolutionary path; (b) impact of λ on the platform’s evolutionary path; (c) impact of λ on the user’s evolutionary path; (d) impact of λ on the government’s evolutionary path.
Figure 13. Dynamic evolutionary trajectories of the system under uniform PT parameters (Douyin Platform). (a) Evolutionary paths under different initial strategy combinations; (b) the evolutionary path when the initial strategies are x = y = z = 0.2 ; (c) the evolutionary path when the initial strategies are x = y = z = 0.5 ; (d) the evolutionary path when the initial strategies are x = y = z = 0.8 .
Table 1. Parameter settings.

Fundamental variables:
- α: Risk preference coefficient in the perceived benefit function (α ∈ [0, 1])
- β: Risk preference coefficient in the perceived loss function (β ∈ [0, 1])
- λ: Loss aversion coefficient in the perceived loss function (λ ∈ [1, ∞))
- x: Probability that the platform adopts an active governance strategy (x ∈ [0, 1])
- y: Probability that the user adopts a governance-participation strategy (y ∈ [0, 1])
- z: Probability that the government adopts a stringent supervision strategy (z ∈ [0, 1])
- m: Probability that the user maliciously uses GAI (generative artificial intelligence) (m ∈ [0, 1])
- μ: The platform’s review intensity for AIGD (AI-generated disinformation) (μ ∈ [0, 1])
- γ: Willingness of users to participate in monitoring (γ ∈ [0, 1])
- ε: Government’s regulatory intensity on AIGD (ε ∈ [0, 1])
- C_p: Governance cost of the platform for active governance
- C_u1: Supervision cost for users participating in supervision
- C_u2: Creation cost for users when using GAI
- C_g: Supervision cost of the government for stringent supervision
- P_g: Positive network ecological externalities brought to the government by active platform governance
- P_u: Positive network ecological externalities brought to users by active platform governance
- E_p1: Economic benefits gained by the platform through active governance
- E_p2: Economic benefits gained by the platform through passive governance
- E_u1: Posting benefits for users who use GAI normally
- E_u2: Posting benefits for users who use GAI maliciously
- N_g: Negative network ecological externalities imposed on the government by passive platform governance
- N_u: Negative network ecological externalities imposed on users by passive platform governance
- R_p: Government’s reward for the platform’s active governance
- D_p: Government’s penalty for the platform’s passive governance
- R_u1: Government’s reward for users’ participation in supervision
- D_u: Platform’s punishment for users’ malicious content creation
- R_u2: Platform’s compensation to users for network ecological damage
- L_g: Loss of government credibility when users detect lenient government supervision
- L_p: Loss of platform reputation when users detect passive platform governance

Note: E_p1 < E_p2; E_u1 < E_u2.

Perceived value variables:
- V(P_g): Government’s perceived benefit from positive network ecological externalities
- V(P_u): Users’ perceived benefit from positive network ecological externalities
- V(N_g): Government’s perceived loss from negative network ecological externalities
- V(N_u): Users’ perceived loss from negative network ecological externalities
- V(R_p): Platform’s perceived benefit from government rewards
- V(R_p): Government’s perceived loss from rewarding the platform
- V(D_p): Platform’s perceived loss from government punishment
- V(D_p): Government’s perceived benefit from punishing the platform
- V(R_u1): Users’ perceived benefit from government rewards
- V(R_u1): Government’s perceived loss from rewarding users
- V(D_u): Malicious content users’ perceived loss from platform punishment
- V(D_u): Platform’s perceived benefit from punishing malicious content users
- V(R_u2): Users’ perceived benefit from compensation for network ecological damage
- V(R_u2): Platform’s perceived loss from compensation for network ecological damage
- V(L_g): Government’s perceived loss from credibility loss
- V(L_p): Platform’s perceived loss from reputation loss
Table 2. Perceived payoff matrix. Each cell lists the perceived payoffs of the platform, the user, and the government, in that order.

Active Governance (x), Governance Participation (y):
- Stringent Supervision (z):
  Platform: −C_p + E_p1 + V(R_p)
  User: −γ·C_u1 − C_u2 + V(P_u) + E_u1 + V(R_u1)
  Government: −C_g + V(P_g) + V(R_p) + V(R_u1)
- Lenient Supervision (1 − z):
  Platform: −C_p + E_p1 + V(R_p)
  User: −γ·C_u1 − C_u2 + V(P_u) + E_u1 + V(R_u1)
  Government: −ε·C_g + V(P_g) + γ·V(L_g) + V(R_p) + V(R_u1)

Active Governance (x), Non-participation in Governance (1 − y):
- Stringent Supervision (z):
  Platform: −C_p + E_p1 + V(R_p) + m·V(D_u)
  User: −C_u2 + V(P_u) + (1 − m)·E_u1 + m·V(D_u)
  Government: −C_g + V(P_g) + V(R_p)
- Lenient Supervision (1 − z):
  Platform: −C_p + E_p1 + V(R_p) + m·V(D_u)
  User: −C_u2 + V(P_u) + (1 − m)·E_u1 + m·V(D_u)
  Government: −ε·C_g + V(P_g) + V(R_p)

Passive Governance (1 − x), Governance Participation (y):
- Stringent Supervision (z):
  Platform: −μ·C_p + E_p2 + V(D_p) + γ·V(L_p) + V(R_u2)
  User: −γ·C_u1 − C_u2 + V(N_u) + E_u1 + V(R_u1) + V(R_u2)
  Government: −C_g + V(N_g) + V(D_p) + V(R_u1)
- Lenient Supervision (1 − z):
  Platform: −μ·C_p + E_p2 + ε·V(D_p) + γ·V(L_p)
  User: −γ·C_u1 − C_u2 + V(N_u) + E_u1 + V(R_u1)
  Government: −ε·C_g + V(N_g) + γ·V(L_g) + ε·V(D_p) + V(R_u1)

Passive Governance (1 − x), Non-participation in Governance (1 − y):
- Stringent Supervision (z):
  Platform: −μ·C_p + E_p2 + V(D_p)
  User: −C_u2 + V(N_u) + (1 − m)·E_u1 + m·E_u2
  Government: −C_g + V(N_g) + V(D_p)
- Lenient Supervision (1 − z):
  Platform: −μ·C_p + E_p2 + ε·V(D_p)
  User: −C_u2 + V(N_u) + (1 − m)·E_u1 + m·E_u2
  Government: −ε·C_g + V(N_g) + ε·V(D_p)
Table 3. Eigenvalues of potential equilibrium points (λ1, λ2, λ3 are the eigenvalues of the Jacobian matrix).

- E1(0, 0, 0): λ1 = −a; λ2 = −e; λ3 = −h
- E2(1, 0, 0): λ1 = a; λ2 = −e + d; λ3 = −h + f
- E3(0, 1, 0): λ1 = −a − b; λ2 = e; λ3 = g − h
- E4(0, 0, 1): λ1 = −a − c; λ2 = e + V(R_u2); λ3 = h
- E5(1, 1, 0): λ1 = a + b; λ2 = e − d; λ3 = −h + f + g
- E6(1, 0, 1): λ1 = a + c; λ2 = −e + d; λ3 = h − f
- E7(0, 1, 1): λ1 = −a − b − c − V(R_u2); λ2 = −e − V(R_u2); λ3 = h − g
- E8(1, 1, 1): λ1 = a + b + c + V(R_u2); λ2 = e − d; λ3 = h − f − g
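The stability test behind Table 3 can be checked programmatically: a pure-strategy equilibrium is asymptotically stable (an ESS) only if all three eigenvalues of the Jacobian are strictly negative. The sketch below encodes the Table 3 eigenvalue expressions for E1 and E8; the numerical values of the composite terms a–h and V(R_u2) are hypothetical illustrations, not values from the paper:

```python
# Eigenvalues of E1(0,0,0) and E8(1,1,1) as listed in Table 3,
# written in the composite terms a-h and V(R_u2).
def eigs_E1(a, e, h):
    return (-a, -e, -h)

def eigs_E8(a, b, c, d, e, f, g, h, v_ru2):
    return (a + b + c + v_ru2, e - d, h - f - g)

def is_ess(eigenvalues):
    """All eigenvalues strictly negative -> asymptotically stable (ESS)."""
    return all(ev < 0 for ev in eigenvalues)

# Hypothetical magnitudes chosen so the net incentives favour the ideal
# equilibrium {active governance, participation, stringent supervision}.
a, b, c, d = -2.0, -1.0, -0.5, 5.0
e, f, g, h = 3.0, 4.0, 2.0, 1.0
v_ru2 = 2.0

print(is_ess(eigs_E8(a, b, c, d, e, f, g, h, v_ru2)))  # True: E8 is an ESS
print(is_ess(eigs_E1(a, e, h)))                        # False: E1 is unstable
```

The same sign test applies to E2 through E7; whichever point has all three eigenvalues negative under the chosen parameterization is the system’s evolutionarily stable strategy combination.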
Table 4. Initial parameter values.

α = 0.88; β = 0.88; λ = 2.25; m = 0.3; μ = 0.5; γ = 0.5; ε = 0.5;
C_p = 5; C_u1 = 1; C_g = 4; E_p1 = 3; E_p2 = 6; E_u1 = 2; E_u2 = 5;
R_p = 2; D_p = 4; R_u1 = 1; D_u = 3; R_u2 = 2; L_g = 5; L_p = 4.
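Trajectories such as those in Figure 5 are produced by numerically integrating the tripartite replicator dynamics dx/dt = x(1 − x)·Δ_p, dy/dt = y(1 − y)·Δ_u, dz/dt = z(1 − z)·Δ_g, where each Δ is that actor’s perceived-payoff advantage of its first strategy. The Δ functions below are hypothetical placeholders (the paper’s actual expressions follow from the full Table 2 payoffs); the sketch only illustrates the integration scheme and the convergence check:

```python
def simulate(delta_p, delta_u, delta_g, x0=0.2, y0=0.2, z0=0.2,
             dt=0.01, steps=5000):
    """Forward-Euler integration of the tripartite replicator dynamics.
    The x(1-x) factor keeps each strategy share inside [0, 1]."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x += dt * x * (1 - x) * delta_p(x, y, z)
        y += dt * y * (1 - y) * delta_u(x, y, z)
        z += dt * z * (1 - z) * delta_g(x, y, z)
    return x, y, z

# Hypothetical payoff advantages (illustrative only): each actor's first
# strategy pays off more as the others cooperate, so everything is drawn
# toward the ideal equilibrium (1, 1, 1).
dp = lambda x, y, z: 0.5 + 2.0 * z      # active vs passive governance
du = lambda x, y, z: 0.5 + 1.5 * x      # participation vs non-participation
dg = lambda x, y, z: 2.0 - 1.0 * x * y  # stringent vs lenient supervision

x, y, z = simulate(dp, du, dg)
print(x, y, z)  # all three approach 1, mirroring convergence to E8
```

In the full model, raising a reward such as R_p enlarges the corresponding Δ and, as in Figure 6, speeds up convergence along that actor’s dimension.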
Table 5. Comparison of parameter values (Douyin platform): parameter, baseline value → Douyin value, and basis for assignment.

- λ_g: 2.25 → 2.50 — Disinformation on Douyin tends to trigger large-scale public sentiment events, making the government highly sensitive to the loss of public credibility and thus highly loss-averse.
- λ_p: 2.25 → 2.25 — The Douyin platform is sensitive to regulatory penalties and damage to brand reputation, exhibiting a strong tendency toward loss aversion.
- λ_u: 2.25 → 2.00 — The Douyin user base is diverse, and ordinary individuals exhibit a relatively low degree of loss aversion.
- m: 0.3 → 0.25 — Douyin has strict labeling and review rules for AIGC content, so the probability of users maliciously using GAI is lower.
- γ: 0.5 → 0.8 — Douyin has a large user base and a well-established reporting mechanism, so users’ willingness to supervise is significantly higher than the industry average.
- C_p: 5 → 4 — Douyin has mature AI-assisted review technologies and high automated-review coverage, so the governance cost per piece of content is significantly lower than the industry average.
- C_u1: 1 → 0.8 — Douyin provides convenient reporting channels and rapid feedback, lowering users’ time cost of participating in supervision.
- R_p: 2 → 3 — The brand premium and traffic-support benefits derived from Douyin’s compliant operations are higher, providing stronger positive incentives for active governance.
- D_p: 4 → 5; L_p: 4 → 4.5 — AIGD has a broad reach and significant public sentiment impact, so regulatory penalties and reputational losses are more pronounced.

Citation

Lei, L.; Wu, Y.; Gao, S. Evolutionary Game Analysis of AI-Generated Disinformation Governance on UGC Platforms Based on Prospect Theory. Systems 2026, 14, 416. https://doi.org/10.3390/systems14040416
