Article

The Impact of Blame Attribution on Moral Contagion in Controversial Events

1 School of Journalism and Communication, Beijing Normal University, Beijing 100875, China
2 Center for Computational Communication Research, Beijing Normal University, Zhuhai 519087, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(10), 1052; https://doi.org/10.3390/e27101052
Submission received: 19 August 2025 / Revised: 1 October 2025 / Accepted: 4 October 2025 / Published: 10 October 2025
(This article belongs to the Special Issue Complexity of Social Networks)

Abstract

Controversial events are social incidents that trigger wide discussion and strong emotions, often touching on public interests, moral judgment, or social values. Their diffusion typically involves moral evaluations and affect-laden language. Prior work has mostly examined how the quantity of moral and emotional words shapes diffusion, while largely overlooking blame attribution—that is, whether audiences locate the cause of a controversial event in individual actions or in social structures, across different contexts. Using 189,872 original Weibo posts covering 105 events in three domains— street-level bureaucracy (SLB; individual attribution), education governance (EG; structural attribution), and gender-based violence (GBV; mixed attribution)—we estimate negative binomial models with an interaction between word type and account verification and report incidence rate ratios (IRR). Moral contagion is strongest for SLB (IRR = 1.337) and attenuated for EG (IRR = 1.037). For GBV, moral-emotional language decreases reposts (IRR = 0.844). Unverified accounts amplify the diffusion advantage of moral-emotional wording for both individually and structurally attributed issues, with the largest gains in SLB. When disaggregating by valence and discrete emotions, fear-type moral-emotional words are positively associated with reposts in GBV (IRR = 1.314). Theoretically, we shift the question from whether moral contagion occurs to when it operates, highlighting attribution tendencies and verification status as key moderators. Empirically, we provide cross-issue evidence from large-scale Chinese social media. Methodologically, we offer a replicable workflow that combines length-normalized lexical measures with negative binomial models, including interaction terms.

1. Introduction

Public evaluations of controversial events on social media commonly express moral stances and emotional bias. Language that blends moral judgment with emotion can attract attention and influence public opinion [1,2]. Yet the same cue does not travel equally well across all issues [3]. Weibo is a Chinese social platform similar in functionality to Twitter. In settings like Weibo, where institutional actors and ordinary users interact under invisible moderation, how people assign blame—to individuals or to systems—can change diffusion dynamics and, by extension, the tone of public life [4,5]. These dynamics matter because they can amplify polarization in events that focus on individual deviant behavior while muting concern for system-level failures.
Prior work shows that moral-emotional wording can boost online diffusion [6], but its effects differ across domains [3]. In events focusing on individual behavior, narratives highlight the actions of specific people, inviting individual blame; in governance disputes, coverage foregrounds abstract rules and institutions, inviting structural attributions [7] (p. 482). These contrasts should influence how moral-emotional language performs. In the Weibo context, media accounts post more cautiously, whereas unverified users rely on affective appeals to spur peer reposting. This pattern is consistent with professional media’s credibility norms and with the platform’s reposting incentives [4,8].
Despite rich theorizing, we lack comparative, cross-issue evidence from non-Western platforms that identifies when moral contagion strengthens, weakens, or reverses. We also know little about how user identity moderates these effects in practice. Without such evidence, people may overgeneralize from highly polarizing cases, and platforms may unintentionally give more visibility to incendiary content through their curation. Our results preview the stakes: moral-emotional language is far more impactful in SLB than in EG, while in GBV, fear-type moral-emotional language drives diffusion; moreover, unverified users benefit most from affect-heavy wording, while verified accounts see muted or even negative returns. Understanding these contingencies is essential for responsible amplification and for designing guardrails that do not simply reward the loudest voices.
This study asks when moral-emotional language increases sharing and for whom. We focus on attribution framing—whether audiences read a problem as caused by individual misconduct or by structural conditions—and on user identity (verified vs. unverified) [9] (p. 16). Our goal is to identify the boundary conditions of “moral contagion” on social media and to translate them into actionable guidance for communicators and platforms. Concretely, we compare three issue contexts that naturally differ in attribution: street-level bureaucracy (SLB, individual blame) [10], education governance (EG, structural blame) [11], and gender-based violence (GBV, mixed attribution) [12,13], and test whether the effect of moral-emotional language depends on topic attributes and user identity.
To this end, we test a model of moral contagion that centers attribution framing and user identity, asking when moral-emotional language boosts reposts and when it fails to do so.
Our contributions are fourfold. (1) We describe the contextual boundary conditions of moral contagion, showing it is conditional rather than universal—robust in individual-attribution settings (SLB), attenuated or null in structural frames (EG), and reversed in mixed contexts (GBV) where fear-type moral-emotional language predominates. (2) We demonstrate that user identity influences the effect of moral contagion: unverified accounts capture the largest gains from moral-emotional language, whereas verified or institutional actors do not. (3) Methodologically, we introduce a transparent, replicable workflow that integrates length-normalized linguistic measures with topic classification and moderation analysis, tailored to short-text platforms. (4) We distill practical implications for platform design and public communication under Chinese platform norms. Taken together, these contributions shift the research agenda from asking whether moral contagion exists to specifying when it occurs—and for whom.

2. Literature Review

2.1. Contextual Boundaries of Moral-Emotional Diffusion

Moral contagion theory [6] posits that language combining moral and emotional content diffuses more widely than language containing either element alone. Moral-emotional words are those that appear in both the emotional and the moral dictionaries. In other words, when users comment on controversial events on social platforms, words that carry both moral values and emotional signals are more likely to promote reposting than purely moral or purely emotional words. The theory holds that moral language signals group identity and norms, while emotional language provides the motivation to share. Together, these elements produce a synergistic effect. Analyzing Twitter debates on contentious topics, Brady et al. [6] found that each moral-emotional word in a tweet increased its expected retweet count by approximately 20%.
The Motivation–Attention–Design (MAD) model [1] further explains this amplification. It posits that users are motivated to share moral-emotional content to reinforce group identity. This content naturally captures attention due to its salience, and platform design (e.g., likes, algorithms) amplifies these motivational and attentional biases [14]. This mechanism motivates our first hypothesis:
H1: 
Posts containing moral-emotional language will be reposted more than posts containing only moral or only emotional language.
However, scholars have recently questioned whether “moral contagion” generalizes across topics and contexts [3]. Much of the strongest evidence to date comes from polarized issues (e.g., gun control) on Western platforms [6], where what counts as a “polarizing topic” can be subjective, and effect sizes vary by domain. In some polarizing topics, moral-emotional words even predict reduced sharing, indicating that moral contagion does not apply universally [3]. These results do not undermine moral contagion; they point to its boundary conditions. Guided by the MAD framework [1], we treat user sharing motivations, context-driven selective attention, and platform design as the key boundary conditions of moral contagion. Our study examines these conditions in a non-Western setting and offers an exploratory test of when moral-emotional language amplifies—and when it fails to amplify—sharing.
China’s content-moderation system permits some criticism but seeks to deter large-scale collective expression [4]; this institutional context partly shapes how discussions of controversial events spread on Chinese social media. In this moderated space, institutional and official accounts play an outsized agenda-setting role and frequently function as opinion hubs [5,15]. Because verified accounts generally reach larger audiences, they tend to show higher baseline diffusion in routine contexts [16]. However, audiences often read low-arousal or neutral language as more credible [8]. For verified professional or institutional accounts, using moral-emotional language can therefore backfire—prompting doubts about credibility and reducing willingness to repost, especially on controversial events. By contrast, for unverified users, moral-emotional language can signal in-group alignment and mobilize like-minded audiences [6], consistent with the echo-chamber dynamics observed on Weibo [17]. Additionally, posts by conservative-leaning users that employ moral-emotional language are more likely to spread [18], suggesting that moral contagion resonates more readily with broad, non-elite audiences. In such cases, emotionally charged moral claims from grassroots accounts can diffuse rapidly on Weibo, at least prior to the full salience of any collective-action risks.
We therefore hypothesize the following:
H2: 
For posts containing moral-emotional language, unverified user status will predict more reposts than verified user status.

2.2. Arousal-Driven Mechanisms of Emotional Sharing

Emotional contagion is a fundamental mechanism of social cohesion [19]. Emotional tweets are shared more often and faster than neutral ones [20], and emotional interaction can become part of a society’s social beliefs and reshape relational structures, thereby shaping how we see and judge social problems [2].
Finer-grained evidence shows that emotional valence does not have a uniform impact across settings. In news, positive content tends to spread more virally [21]; in rumor diffusion, false rumors go viral when they contain more positive-emotion words [22]. By event type, positive emotion builds up for highly anticipated events, whereas unexpected events are marked by negative emotion; correspondingly, positive information reaches broader audiences, while negative information spreads faster [23]. In political contexts, negative affect more reliably boosts sharing [24]. Together, these patterns indicate that the effects of emotional valence are context-dependent rather than universal.
Compared with valence, physiological arousal provides a clearer framework for explaining why different emotions spread to different degrees. High-arousal states—positive (e.g., awe) or negative (e.g., anger, anxiety)—promote viral diffusion, whereas low-arousal emotions (e.g., sadness) suppress it [21]. Studies of discrete emotions further show that anger is more contagious than joy, an effect amplified by weak social ties [25]. This perspective motivates our first research question:
RQ1: 
Across different valences and categories, which specific moral-emotional language best predicts reposts?

2.3. Attribution Frames and Issue Types

According to attribution theory, different emotional responses are evoked depending on where blame is placed. When individuals are held responsible (internal attribution), the common emotional reaction is anger [26]. When the cause is seen as structural (external attribution), the response is more likely to be sympathy [26].
“Attribution in terms of impersonal and personal causes, and with the latter, in terms of intent, are everyday occurrences that determine much of our understanding of and reaction to our surroundings” [9] (p. 16). When observing others, people tend to underestimate situational factors and overestimate dispositional ones. This cognitive bias, known as the fundamental attribution error, is a robust phenomenon observed even in trained psychologists [27] and is likely widespread in the general public.
Attributing social problems is inherently political. While the sociological imagination encourages linking individual biographies to broader social structures [28] (pp. 10–11), the dominant tendency is often to “blame the victim” [29] (pp. 11, 17). This ideology individualizes systemic issues, framing them as personal failings that require exceptional, case-by-case solutions rather than universal reforms. This focus on personal deficits over structural causes ultimately reinforces the existing social order [29] (p. 19).
Media narratives shape public attribution by framing the causes of social problems [7] (pp. 477, 493). News coverage that focuses on individual actions and choices encourages internal attributions. Conversely, coverage that examines systemic or social causes promotes external, situational attributions [7] (p. 482). Therefore, person-centric event narratives are more likely to lead to individual blame, while abstract, institution-focused narratives are more likely to lead to structural blame.
To operationalize these framing effects in our study, we next define three issue categories that map media narratives to attributional logics: attributable to individual actions, attributable to social structures, and attributable to a mix of the two.
  • Attributable to individual actions: Events framed mainly as the choices, intentions, or misconduct of specific people (e.g., a police officer, a teacher, a local official). The problem is treated as a discrete event caused by identifiable actors, and solutions focus on disciplining, rewarding, or replacing those individuals.
  • Attributable to social structures: Events framed as the result of rules, incentives, institutions, or broad social conditions (e.g., laws, hiring systems, cultural norms). The problem is treated as systemic and persistent across cases, and solutions emphasize policy or organizational reform.
  • Attributable to a mix of individual actions and social structures: Events where narratives link specific actors’ behaviors to the larger systems that enable or constrain them. Both personal agency and structural conditions are presented as necessary parts of the explanation, and solutions combine accountability for individuals with reforms to rules or contexts.
We compare three types of issues that naturally encourage different attributions. For individual attribution, we selected street-level bureaucracy, which involves the concrete actions of specific people. We then selected education governance as a case of structural attribution, as it involves abstract institutions. To further examine how issues that combine individual and structural attributions unfold, we analyze gender-based violence as a mixed case. By analyzing the moral-emotional content and repost counts of posts within these domains, we can examine the boundary conditions of the theory.
The concept of street-level bureaucracy, introduced by Lipsky [10,30], refers to frontline public service personnel who exercise discretion under conditions of limited resources and ambiguous goals, effectively shaping how policies are implemented. In China, such cases often involve urban management officers or police. Media narratives of these incidents usually highlight the actions of individual officials (e.g., “a police officer kicked a student”), which encourages the public to attribute the problem to personal misconduct rather than broader structural factors.
Education governance refers to controversies over the structural arrangements, rules, and policies within the education system. In contrast to the concrete actions of SLB, EG is an abstract, multi-level system where actors are often faceless institutions [11]. Media coverage of EG issues therefore focuses on macro-level rules, guiding the public toward situational or structural attributions.
Although Weibo shares the core microblogging affordances of Twitter/X, the cultural and institutional environment in which those affordances operate is distinct. While platforms like Weibo possess an open structure capable of empowering users to spread information publicly [31], the same moral-emotional cue can have weaker or stronger effects depending on whether the public reads a problem as structural (stable, systemic) or personal (discrete, individual-level). Structural frames implicitly raise the perceived potential for wide-scale collective expression, making such content more likely to be constrained under moderation rules [4], whereas person-centric incidents pose less threat to institutional stability and thus allow moral-emotional content to travel farther.
We therefore predict the following:
H3: 
Moral-emotional language will have a weaker effect on reposts in the context of education governance compared to street-level bureaucracy.
We follow the United Nations High Commissioner for Refugees (UNHCR) [12] and the World Health Organization (WHO) [13] in defining gender-based violence as violence, discrimination, or coercion directed at individuals—especially women and gender minorities—on the basis of gender, gender identity, or socially constructed gender roles. Manifestations include physical and sexual violence, verbal abuse and harassment, humiliating or stigmatizing speech (including online), and institutional or systemic inequities; these harms can occur in both private and public settings and carry severe, sometimes lifelong, consequences.
“Violence, Peace and Peace Research” [32] distinguishes personal (direct) from structural (indirect, system-level) violence and argues they are intertwined rather than separable: individual actions are patterned by institutional arrangements and social norms. Building on this, ecological models of violence against women conceptualize GBV as arising from the interaction across levels—individual traits and relationships (micro), situational and organizational contexts (meso), and cultural, legal, and economic structures (macro) [33]. Taken together, these frameworks justify treating GBV as a mixed attribution domain in which personal agency and structural conditions jointly shape expression and diffusion.

3. Materials and Methods

3.1. Data Collection and Issue Classification

Our data are drawn from the Sina Weibo Online Emergency Public Opinion Dissemination Dataset provided by the 2025 Micro Hotspot Big Data Research Institute, which contains 349 trending events from 2024 onward. The provider clustered Weibo posts into events by lexical similarity and assigned a unique ID to each event. We then collected the original Weibo posts for these events, extracting the full text, user verification status, follower count, media type, and repost count for analysis.
We employed a human–Artificial Intelligence (AI) collaborative approach for issue classification, leveraging the ability of Large Language Models (LLMs) to replicate expert annotations while maintaining human supervision for accuracy [34,35]. Our process adapts the workflow from Chew et al. [36] to classify the event titles.
From the universe, we drew two random subsets for calibration. In each subset, two trained coders independently labeled the event titles into four mutually exclusive categories—street-level bureaucracy (SLB), education governance (EG), gender-based violence (GBV), and other—using short operational definitions. We then prompted Gemini 2.5 Pro to code the same events (full prompt in Appendix C.2). Human labels and model outputs were compared; both calibration rounds showed high agreement, leading us to proceed with model-assisted coding of the full corpus while keeping human oversight for flagged or ambiguous cases. This “calibrate → audit → scale” design follows recent guidance on LLM-assisted deductive coding and on integrating LLMs into workflows that emphasize human oversight and triangulation [36,37].
After the two audits confirmed stable agreement, we applied the same instructions and settings to all 349 events and retained human spot-checks for any low-confidence determinations. Following data quality screening, we excluded events with missing critical fields (e.g., user IDs) required for downstream analyses. The resulting research sample comprised 42 SLB events, 46 EG events, and 17 GBV events; the remainder were labeled “other” and not analyzed as focal categories. Our procedure mirrors the recommended final step in LLM-assisted content analysis—using the model to create the final coded dataset once non-inferiority to human coding has been demonstrated—while documenting prompts, model name, and run parameters for replicability [36,37] (see Appendix C).
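To make the calibration audit concrete, the agreement check can be scripted in a few lines. The sketch below is illustrative rather than the authors’ actual code: the file name and column names (coder1, coder2, llm_label) are hypothetical, and agreement is summarized with Cohen’s kappa and percent agreement.

```python
# Illustrative calibration audit: compare the two human coders with each other and
# the human consensus with the LLM labels. The CSV and its column names are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

CATEGORIES = ["SLB", "EG", "GBV", "other"]

calib = pd.read_csv("calibration_round1.csv")   # columns: event_id, coder1, coder2, llm_label

# Inter-coder reliability between the two trained human coders.
kappa_humans = cohen_kappa_score(calib["coder1"], calib["coder2"], labels=CATEGORIES)

# Agreement between the human consensus (titles both coders labeled identically) and the model.
consensus = calib[calib["coder1"] == calib["coder2"]]
kappa_llm = cohen_kappa_score(consensus["coder1"], consensus["llm_label"], labels=CATEGORIES)
pct_llm = (consensus["coder1"] == consensus["llm_label"]).mean()

print(f"human-human kappa = {kappa_humans:.2f}")
print(f"human-LLM kappa = {kappa_llm:.2f}; percent agreement = {pct_llm:.1%}")
```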
After initial cleaning (de-duplication, removal of missing values, and trimming repost outliers at the 0.1% level), the final analytic samples comprised 23,730 posts for SLB (42 events), 97,731 posts for EG (46 events), and 68,411 posts for GBV (17 events).
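The screening and cleaning steps can be expressed as a short, reusable routine. The sketch below is a minimal illustration assuming one file per topic and hypothetical column names; the 0.1% trim is implemented as a cut at the 99.9th percentile of repost counts.

```python
# Minimal cleaning sketch: de-duplicate, drop rows with missing critical fields,
# and trim the top 0.1% of repost counts. File and column names are hypothetical.
import pandas as pd

def clean_posts(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset=["post_id"])
    df = df.dropna(subset=["user_id", "text", "reposts", "verified", "followers"])
    cutoff = df["reposts"].quantile(0.999)      # 0.1% upper trim on reposts
    return df[df["reposts"] <= cutoff].copy()

slb = clean_posts(pd.read_csv("slb_posts.csv"))
eg = clean_posts(pd.read_csv("eg_posts.csv"))
gbv = clean_posts(pd.read_csv("gbv_posts.csv"))
print(len(slb), len(eg), len(gbv))   # analytic sample sizes per topic
```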

3.2. Operationalization of Variables

Our dependent variable is the repost count, a key indicator of information diffusion [6]. Because repost counts are over-dispersed count data (i.e., the variance is much larger than the mean), we selected a statistical model appropriate for this distribution.
Our core independent variables measure the language used in each Weibo post. We constructed these variables using two Chinese lexicons: the Chinese Moral Foundation Dictionary (C-MFD) 2.0 moral lexicon [38] and the Information Retrieval Laboratory of Dalian University of Technology (DUTIR) emotional lexicon [39]. From these, we created three mutually exclusive categories:
  • Moral-Emotional Words: Words present in both lexicons (n = 1957);
  • Distinctly Moral Words: Words unique to the moral lexicon (n = 3747);
  • Distinctly Emotional Words: Words unique to the affective lexicon (n = 25,358).
To control for post length, we normalized the count of each word type using the following formula:
LinguisticVariable = (f / (N + 1)) × 100,
where f is the count of the given word type in the post and N is the post’s character length.
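A minimal sketch of this length-normalized density computation follows, assuming the C-MFD 2.0 and DUTIR word lists are stored as plain-text files (file names hypothetical) and using jieba for Chinese word segmentation.

```python
# Minimal sketch of the length-normalized lexical densities. Lexicon file names are
# hypothetical; jieba is used for Chinese word segmentation.
import jieba

def load_lexicon(path: str) -> set:
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

moral = load_lexicon("cmfd2_moral.txt")
emotional = load_lexicon("dutir_emotion.txt")

moral_emotional = moral & emotional      # words in both lexicons (n = 1957 in our data)
distinct_moral = moral - emotional       # words unique to the moral lexicon
distinct_emotional = emotional - moral   # words unique to the affective lexicon

def density(text: str, lexicon: set) -> float:
    """Words of a given type per 100 characters: f / (N + 1) * 100."""
    matches = sum(1 for token in jieba.lcut(text) if token in lexicon)
    return matches / (len(text) + 1) * 100
```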
The normalized metric represents the number of words of a given type per 100 characters, which improves the interpretability and stability of the model coefficients. Our analysis also includes the following moderator and control variables:
  • User Verification (Moderator): Effects-coded as ordinary user (−1) or verified user (+1);
  • Follower Count (Control): To control for user influence, we use the log-transformed number of followers due to the variable’s right-skewed distribution;
  • Media Type (Control): Effects-coded as text-only (−1) or multimedia (e.g., images, video) (+1);
  • Post Length (Control): The character count of the post is included as an additional control in our robustness checks.
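The moderator and controls listed above can be derived directly from the raw post fields. The following sketch is illustrative; the raw column names (text, verified, media_urls, followers) are assumptions, and the density() function comes from the previous sketch.

```python
# Sketch of the variable construction: length-normalized densities, effects-coded
# moderator/controls, and the log-transformed follower count. Raw column names are assumptions.
import numpy as np

def add_model_variables(df):
    df = df.copy()
    df["moral"] = df["text"].apply(lambda t: density(t, distinct_moral))
    df["emotional"] = df["text"].apply(lambda t: density(t, distinct_emotional))
    df["moral_emotional"] = df["text"].apply(lambda t: density(t, moral_emotional))
    df["auth_type"] = np.where(df["verified"], 1, -1)             # verified +1, ordinary -1
    df["media_type"] = np.where(df["media_urls"].notna(), 1, -1)  # multimedia +1, text-only -1
    df["log_followers"] = np.log1p(df["followers"])               # right-skewed, so log-transform
    df["text_length"] = df["text"].str.len()                      # robustness-check control
    return df

slb, eg, gbv = (add_model_variables(d) for d in (slb, eg, gbv))
```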
To address RQ1, we created fine-grained sentiment intensity scores for each post. These scores were calculated using a standard Chinese sentiment formula [40] that integrates several components:
  • Core Lexicons: The DUTIR lexicon [39] provided word polarity (positive/negative) and discrete emotion categories (e.g., joy, sadness).
  • Modifier Lexicons: We incorporated established negation [41] and degree-adverb [42] lexicons.
  • Scoring Logic: The formula accounts for a word’s polarity (+1 or −1), its strength (on a 5-point scale), the multiplicative effect of negators, and the weighting of degree adverbs.
This process generated our final exploratory variables: scores for positive/negative emotion, positive/negative moral-emotion, and each discrete moral-emotion category. The formula is as follows:
TotalScore = Σ_{i=1}^{n} (Intensity_i × Polarity_i × DegreeWeight_i × NegationFlag_i)
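A simplified sketch of this scoring logic is shown below. It assumes the DUTIR entries are available as a dictionary mapping each word to its intensity and polarity, and applies negators and degree adverbs to the next emotion word encountered; the cited formula [40] defines the exact scoping rules.

```python
# Simplified sketch of the sentiment-scoring formula. The lexicon data structures are
# assumptions: emotion_lex maps word -> (intensity, polarity), degree_adverbs maps
# word -> weight, and negators is a set of negation words.
import jieba

def sentiment_score(text, emotion_lex, negators, degree_adverbs):
    """Sum of intensity x polarity x degree weight x negation flag over emotion words."""
    total, degree, negation = 0.0, 1.0, 1
    for token in jieba.lcut(text):
        if token in negators:
            negation *= -1                      # each negator flips the sign
        elif token in degree_adverbs:
            degree *= degree_adverbs[token]     # adverb weight, e.g. 2.0 for "非常"
        elif token in emotion_lex:
            intensity, polarity = emotion_lex[token]   # 5-point intensity, polarity +1/-1
            total += intensity * polarity * degree * negation
            degree, negation = 1.0, 1           # reset modifiers after scoring a word
    return total
```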

3.3. Analytic Strategy

We used negative binomial regression to model the repost counts. This method was chosen because our dependent variable is an over-dispersed count, meaning its variance is much larger than its mean. Unlike Poisson regression, which assumes equal mean and variance, the negative binomial model accounts for this over-dispersion and is therefore more appropriate for our data [43].
To test our hypotheses, we included interaction terms to assess both the main effects of language type and the moderating effect of user verification. Exponentiated coefficients are reported as incidence rate ratios (IRRs):
ln(μ_i) = β_0 + Σ_{j=1}^{6} β_j X_{j,i} + Σ_{k=1}^{3} γ_k Z_{k,i}
where
  • i indexes the i-th post;
  • μ_i is the expected repost count for post i;
  • β_0 is the intercept;
  • X_{j,i} is the value of the j-th predictor (main effect) for post i;
  • β_j is the coefficient for the j-th predictor;
  • Z_{k,i} is the value of the k-th interaction term for post i;
  • γ_k is the coefficient for the k-th interaction term.
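A minimal sketch of this specification in Python, using statsmodels and the variable names from the sketches above (auth_type effects-coded as −1/+1), is shown below; the model is fit separately for each topic.

```python
# Sketch of the main negative binomial specification with language-by-verification
# interactions; column names are illustrative. IRRs are exponentiated coefficients.
import numpy as np
import statsmodels.formula.api as smf

formula = (
    "reposts ~ moral + emotional + moral_emotional"
    " + log_followers + auth_type + media_type"
    " + auth_type:moral + auth_type:emotional + auth_type:moral_emotional"
)

model = smf.negativebinomial(formula, data=slb).fit()   # repeat for eg and gbv

irr = np.exp(model.params)          # incidence rate ratios
irr_ci = np.exp(model.conf_int())   # 95% confidence intervals on the IRR scale
print(irr.round(3))
print(irr_ci.round(3))
```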
To investigate RQ1, we specified two exploratory models. The first model tests the effect of emotion valence, classifying emotions as positive (joy, good, surprise) or negative (anger, sadness, fear, disgust) based on the DUTIR lexicon while holding all controls constant [39]. Second, we created variables for five discrete moral-emotion categories, excluding two due to low frequency (surprise and anger). In both models, intensity is measured as a normalized frequency per 100 characters. The models are specified as follows:
ln(Reposts) = β0 + β1⋅Moral Words + β2⋅Positive Emotion + β3⋅Negative Emotion + β4⋅Positive Moral Emotion + β5⋅Negative Moral Emotion + β6⋅Log Followers + β7⋅Auth Type + β8⋅Media Type + ε
ln(Reposts) = β0 + β1⋅Joy + β2⋅Good + β3⋅Sadness + β4⋅Fear + β5⋅Disgust

4. Results

4.1. Main Effects of Language on Reposts and Cross-Issue Differences (Model 1)

Table 1, Table 2 and Table 3 present the negative binomial regression results for the SLB, EG, and GBV topics, respectively.
The results do not support H1 across all issues. In SLB, moral-emotional language has the strongest effect (IRR = 1.337), exceeding distinctly moral (IRR = 1.067) and distinctly emotional (IRR = 1.061) language. In EG, the pattern is similar but smaller: moral-emotional (IRR = 1.037) exceeds distinctly emotional (IRR = 1.022). However, in GBV, the pattern reverses: distinctly emotional language increases sharing (IRR = 1.118), whereas moral-emotional decreases it (IRR = 0.844), and distinctly moral language also decreases it (IRR = 0.952). These findings indicate that moral contagion effects vary by topic and message type.
The data support H3, revealing the context-dependent effectiveness of language strategies. Moral-emotional language was significantly more impactful in SLB (IRR = 1.337) than in EG (IRR = 1.037). These findings suggest that moral contagion is strongest when the media narrative highlights a direct conflict and blames the problem on individual misconduct.

4.2. The Moderating Role of User Identity

The analysis supports H2, showing that user verification status acts as a moderator for the diffusion effect of language. Given the complexity of interpreting interaction coefficients directly, we plotted the predicted marginal effects. This graphical analysis demonstrates how the relationship between language and post diffusion differs significantly for verified versus unverified users within each topic.
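The marginal-effects figures can be generated by predicting from the fitted model over a grid of moral-emotional density values while holding the other predictors at their means. The sketch below continues the earlier illustrative code (variable and model names are assumptions, shown here for SLB).

```python
# Sketch of the predicted-marginal-effects plot for SLB: predicted reposts over the
# observed range of moral-emotional density, separately for unverified (-1) and
# verified (+1) accounts, holding other predictors at their sample means.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

grid = np.linspace(slb["moral_emotional"].min(), slb["moral_emotional"].max(), 50)

fig, ax = plt.subplots()
for auth, label in [(-1, "Unverified"), (1, "Verified")]:
    newdata = pd.DataFrame({
        "moral": slb["moral"].mean(),
        "emotional": slb["emotional"].mean(),
        "moral_emotional": grid,
        "log_followers": slb["log_followers"].mean(),
        "auth_type": auth,
        "media_type": slb["media_type"].mean(),
    })
    ax.plot(grid, model.predict(newdata), label=label)

ax.set_xlabel("Moral-emotional words per 100 characters")
ax.set_ylabel("Predicted reposts")
ax.legend()
plt.show()
```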
For the SLB topic (Figure 1), a stark divergence emerges between user types. A higher density of moral-emotional language predicts an exponential increase in reposts for unverified users, indicating it can trigger virality. Conversely, this language has no discernible impact on the engagement of verified users, whose repost counts remain low regardless of its use.
In the EG topic (Figure 2), the engagement pattern differs. Overall engagement is lower, with predicted reposts for unverified users peaking at approximately 12. For these accounts, the positive relationship between moral-emotional language and reposts is weaker and linear, not exponential.
For the GBV topic (Figure 3), diffusion is driven primarily by emotional language. As the density of emotional words increases, predicted reposts for unverified accounts rise steeply while verified accounts also gain but plateau at a lower level, indicating that verification attenuates the payoff from emotional language. By contrast, moral-emotional language does not fuel sharing: predicted reposts stay low and even edge downward across the range for both user types, with only minor separation between the lines. Distinctly moral language shows a similarly flat-to-negative pattern near zero.
Taken together, the results show clear moderation by user verification. Unverified users gain most from affective cues: in SLB, moral-emotional language turns viral; in EG, it helps modestly; in GBV, emotion drives the gains. Verified users see little benefit from moral-emotional language and a muted return to emotion in GBV, consistent with credibility-preserving strategies [8]. In short, weaker actors lean on affect to mobilize sharing, while institutional actors keep tone restrained to protect trust.

4.3. Robustness Checks

We conducted several robustness checks which confirmed that our core findings are stable. The effect of moral-emotional language and its moderation by user type were unaffected by the following changes:
  • Model Simplification: The results held when retaining only moral-emotional words, their interactions, and controls;
  • Control Variables: The results were consistent when adding post length or removing media type as controls (except EG);
  • Interaction Terms: The results were unchanged when retaining only interactions involving moral-emotional words.
Finally, a bootstrap analysis (1000 resamples) confirmed that the effects are not artifacts of user-level clustering (except EG). Therefore, the findings are robust to alternative model specifications and data structures (see Appendix A for details).
Only the education governance checks raised reliability concerns, likely because this structurally framed topic is disproportionately influenced by users who post multiple times. When the chance of adopting or sharing a message rises with repeated exposure, and cluster size (the number of posts per user) is related to outcomes conditional on covariates, an “informative cluster size” (ICS) problem can arise.
In our data, about 27% of authors contributed more than one post, introducing non-independence. As Table 4 shows, most of these multi-post authors contributed exactly two posts, while authors with five or more posts were rare. Cluster sizes were also highly heterogeneous across topics (ranges in Table 4): SLB: 1–76; EG: 1–380; GBV: 1–466.
Moreover, within each topic the share of verified accounts increases as the number of posts per author rises, indicating that highly active users are disproportionately verified. Compositional differences across user types likely account for the observed interaction (see Figure 4).
We conducted two checks on the education governance topic. First, a cluster-aware bootstrap (1000 resamples; one post drawn per multi-post user in each resample) showed that, when the influence of highly active users is muted, the effects of distinctly emotional and moral-emotional language on reposts become small and often trend below unity, indicating weak or even negative associations (see Appendix A.5, Figure A2). Second, we implemented a threshold-slices analysis that incrementally re-introduces multi-post users by estimating separate negative binomial models on six nested subsets: users with exactly one post (=1), and users with ≥2, ≥3, ≥4, ≥5, and ≥6 posts (notated ">1" through ">5"). The resulting IRR trajectories (Figure 5 and Figure 6) reveal a clear pattern: for distinctly emotional language, the IRR crosses the 1.00 line precisely when moving from the single-post slice to ">1" (≈0.97 → ≈1.09) and then creeps upward with wider confidence intervals as more high-activity users are included (see Figure 5); for moral-emotional language, the IRR similarly jumps above 1.00 at ">1" (≈0.97 → ≈1.08) but then flattens back toward ≈1.00 as thresholds increase, with broadening uncertainty bands (see Figure 6). Taken together, these results show that the apparent promotive effects in the EG main model are largely induced by a small set of highly active accounts; once we account for this ICS bias, the effects attenuate or reverse.
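The cluster-aware bootstrap can be sketched as follows; this is an illustrative implementation of the procedure described above (one randomly drawn post per author in each of 1000 resamples), not the authors’ exact code, and it reuses the formula and data names from the earlier sketches.

```python
# Illustrative cluster-aware bootstrap for the EG topic: in each resample one post is
# drawn at random per author, so highly active accounts contribute a single post,
# and the moral-emotional IRR is re-estimated. Names follow the earlier sketches.
import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
boot_irrs = []
for _ in range(1000):
    shuffled = eg.sample(frac=1, random_state=int(rng.integers(10**9)))
    one_per_user = shuffled.groupby("user_id", group_keys=False).head(1)
    fit = smf.negativebinomial(formula, data=one_per_user).fit(disp=0)
    boot_irrs.append(np.exp(fit.params["moral_emotional"]))

lo, hi = np.percentile(boot_irrs, [2.5, 97.5])
print(f"moral-emotional IRR: median {np.median(boot_irrs):.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```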

4.4. Exploratory Analysis: Effects of Emotion Valence and Categories

Negative emotions show a diffusion advantage. In the SLB (IRR = 1.028) and EG (IRR = 1.03) topics, posts containing negative emotional language were significantly more likely to be reposted, whereas those with positive emotional language were not. In the GBV topic, by contrast, positive emotional language reliably increases reposts (IRR = 1.041), whereas negative emotion is not significant.
The effects of polarity are more complex for moral-emotional language. In the SLB topic, both positive (IRR = 1.084) and negative (IRR = 1.055) moral-emotional language significantly promoted reposts. In the EG topic, however, only negative moral-emotional language had a significant effect, slightly suppressing reposts (IRR = 0.988). Similarly, in the GBV topic, negative moral-emotional language slightly reduces reposts (IRR = 0.993).
Effects differ across discrete emotion categories. An analysis of specific categories (restricted to good, sadness, fear, disgust, and joy due to data sparsity) revealed further distinctions. Moral-emotional language related to good promoted reposts in the SLB topic (IRR = 1.022), while language related to disgust significantly suppressed them in the EG topic (IRR = 0.988). In the GBV topic, the discrete moral-emotional categories reveal sharper contrasts: while good-related moral-emotional language is associated with a modest increase in reposts (IRR = 1.069), fear-type moral-emotional words show a large promotive effect (IRR = 1.314).
These results suggest that future research could benefit from using finer-grained emotional taxonomies to capture how specific emotions influence diffusion (see Appendix B for details).

5. Discussion

5.1. Summary of Findings

Our exploratory attribution analysis shows that moral contagion is not uniform across issues. When problems are framed at the individual level (SLB), moral-emotional language strongly boosts diffusion; when framed as structural (EG), the effect weakens or reverses once we address high-activity users and cluster bias. In the mixed case (GBV), purely emotional cues increase sharing while moral-emotional cues reduce it, echoing evidence from #MeToo contexts that moralized appeals can backfire in mixed-attribution debates [3]. These patterns suggest that attribution—not emotion alone—conditions when moral language spreads.
User type also matters. Unverified accounts gain most from affect-heavy messaging, consistent with discursive empowerment of non-elite actors. Verified and institutional accounts see muted returns, likely because credibility norms favor a neutral tone [8] and because platform dynamics around public-affairs topics penalize overt moralization [4]. Together, these results point to distinct audience–disseminator fit.
Emotion structure further clarifies the boundary conditions. In GBV, both fear-related and good-related moral-emotional language are associated with increased reposting, whereas sadness-related language is not. High-arousal signals travel; low-arousal signals do not. The implication is both practical and theoretical: amplifying high-arousal moral language can heighten visibility through diffusion, but it may intensify polarization and reduce space for reasoned debate.
These findings advance a conditional view of moral contagion: attribution framing, user type, and emotion structure jointly determine diffusion. We show strong effects for individual-level issues, attenuation or reversal for structural issues, and fear-type moral-emotional language driven diffusion in GBV. We therefore shift the question from “does moral contagion exist?” to “when does it occur?”.

5.2. Theoretical and Methodological Implications

This study advances moral-contagion theory on three fronts. First, using large-scale Weibo data, we show that moral contagion is not confined to a single cultural context—consistent with cross-lingual evidence that emotion–virality relationships generalize across languages when modeled via valence, arousal, and dominance (VAD) [44]. Its impact varies by topic and disseminator identity, supporting a contextual, rather than universal, model. Second, by linking contagion to arousal theory, we clarify the mechanism: in contentious debates, high-arousal cues—especially negative ones—do most of the work. Arousal, not valence alone, appears to translate moralized content into sharing [21].
Our findings also suggest a social-psychological bridge between attribution and polarization. When issues are framed at the individual level, audiences can more easily locate a responsible agent, often outside their in-group [45]. Prior work shows that perceived out-group hostility is over-detected online and that such perceptions spur reposting [46]. This points to a testable hypothesis: individual-level attribution may heighten out-group antagonism, which in turn amplifies moral contagion. If supported, interventions could target early attribution framing (e.g., encouraging structural explanations) to slow polarization.
Methodologically, we contribute three tools for computational text analysis. We introduce a relative density metric that normalizes word use by text length, improving sensitivity for short posts. We develop a human–AI coding pipeline that combines researcher rules with LLM validation to classify large volumes of social media data at low cost. Following prior work on Chinese sentiment scoring [40], which integrates polarity, intensity, negation, and degree adverbs, we repurpose that formula to analyze moral-emotional valence and discrete emotions in Chinese moral contagion.
Finally, we provide a replicable framework for studying moral and emotional discourse in Chinese social media: custom lexicons, a transparent human–AI workflow, and statistical models that handle clustering and heterogeneity. The framework is portable and can be adapted across platforms and contexts to test when individual-, structural-, or mixed-attribution amplifies or attenuates moral contagion.

5.3. Practical Implications

Our finding that moral-emotional language generally boosts diffusion on Weibo, but with sizable heterogeneity by issue domain and user identity (e.g., ≈33.7% in SLB vs. ≈3.7% in EG; identity moderation strongest for unverified users), aligns with and extends platform-agnostic theories of affective amplification.
This pattern resonates with evidence from Facebook that highly engaged participation within like-minded communities can tilt collective sentiment toward negativity and shape group dynamics—an “echo-chamber” mechanism that helps explain why strong moralized emotion travels farther in conflict-laden, individually attributable topics (SLB) than in structurally framed governance discussions (EG) [47].
At the same time, our topic- and identity-specific contingencies complement recent modeling work that tracks user and community sentiments over time and introduces “cross-contamination” between communities and their neighborhoods; our results suggest that such community-level sentiment dynamics are likely to differ by issue attribution and actor status on Weibo, and they motivate future analyses that couple our diffusion estimates with network-based sentiment evolution frameworks [48].
For platform governance, our findings highlight a tension between maximizing user engagement and maintaining social responsibility. Algorithms designed for engagement may unintentionally amplify the most incendiary and emotionally negative content, increasing exposure to polarizing material and potentially worsening social divisions [1]. This suggests a need to integrate principles of social responsibility directly into the design of content curation and distribution algorithms.
For social actors such as government agencies, media, and non-governmental organizations (NGOs), these findings highlight the critical role of language in communication strategy. The results show that while moral-emotional language can maximize a message’s reach, it also risks oversimplifying complex issues and inflaming conflict. This presents a key ethical dilemma: social actors must balance the strategic goal of effective dissemination against the responsibility to foster reasoned public discourse.

5.4. Study Limitations and Future Research

First, our operationalization of attribution by issue category relies on theoretically informed—but ultimately conceptual—distinctions rather than a direct, respondent-level measure of how audiences assign blame. Future work should incorporate more fine-grained, quantitative indicators of attribution—for example, employing a BERT-based feature set and human–AI collaborative coding to classify attributions in comments on polarizing events, then comparing moral-contagion effects across attribution types.
Second, our analyses of heterogeneity by user verification status and by moral-emotional polarity/category are exploratory. While we document systematic differences, we do not fully unpack the mechanisms that generate these patterns. Future studies should move beyond descriptive moderation to probe causal pathways.
Finally, we analyze moral contagion on Weibo to compare attribution effects, not to localize the phenomenon to China. Replications across platforms and across cultural/moderation contexts are needed to validate how attribution shapes moral contagion beyond this setting. Taken together, these limitations point to a clear agenda: pair audience-level attribution measures with mechanism-focused designs, and test the theory across diverse platforms and cultural–institutional contexts.

6. Conclusions

This study shows that moral contagion is conditional, not universal. Whether moral-emotional language spreads depends on three levers: attribution framing, user identity, and arousal profile. When issues are framed at the individual level (SLB), moral-emotional cues travel widely; for structural governance topics (EG), the effect weakens or can reverse once we account for highly active users and clustering. In mixed-attribution debates (GBV), diffusion is driven by high-arousal fear, while moral-emotional wording can backfire. Unverified users benefit most from affect-heavy language, whereas verified and institutional accounts see muted returns, consistent with credibility norms and public-affairs dynamics [8].
Theoretically, we shift the question from whether moral contagion exists to when it occurs. A conditional model—anchored in attribution and arousal—better explains variation across topics than emotion alone [6]. Methodologically, we provide a replicable toolkit for Chinese social media: a length-normalized density metric, a transparent human–AI coding workflow, and the adoption of an existing affect-scoring formula (polarity, intensity, negation, degree adverbs).
Practically, the findings highlight a tension for platforms and communicators. Engagement-optimized ranking can over-amplify high-arousal, incendiary content; responsible design should account for attribution cues and user type to avoid amplifying polarization. For public agencies, media, and NGOs, moralized appeals mobilize grassroots supporters yet have limited traction with policy audiences and can oversimplify complex problems.
Our evidence is bounded by conceptual attribution categories and exploratory heterogeneity analyses. Future work should measure audience attributions directly and test these mechanisms across platforms and cultural-institutional settings. A more precise, evidence-based account of how moral contagion operates can clarify why polarizing events escalate on social media, how they can be interrupted, and how online signals reshape offline collective beliefs [2].

Author Contributions

Conceptualization, H.L.; Data curation, R.C.; Formal analysis, Q.W.; Methodology, Q.W.; Project administration, H.L.; Resources, H.L.; Supervision, R.C.; Validation, R.C.; Visualization, Q.W.; Writing—original draft, Q.W.; Writing—review & editing, H.L. and R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Fundamental Research Funds for the Central Universities, grant number 1233300003.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

We thank the 2025 Micro Hotspot Big Data Research Institute for providing the Weibo Online Emergency Public Opinion Dissemination Dataset. We also acknowledge the use of Gemini 2.5 Pro for assistance with topic classification and event coding. The authors reviewed all AI-generated content and take full responsibility for the final text of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial Intelligence
C-MFD   Chinese Moral Foundation Dictionary
DUTIR   Information Retrieval Laboratory of Dalian University of Technology
EG      Education Governance
GBV     Gender-Based Violence
ICS     Informative Cluster Size
IRR     Incidence Rate Ratio
LLMs    Large Language Models
MAD     Motivation–Attention–Design
ME      Moral-Emotional
NGOs    Non-Governmental Organizations
SLB     Street-Level Bureaucracy
UNHCR   United Nations High Commissioner for Refugees
VAD     Valence, Arousal, and Dominance
WHO     World Health Organization

Appendix A

The following robustness checks test the stability of our core predictors: the main effect of Moral-emotional words and their interaction with authentication type.

Appendix A.1. Retaining Only Moral-Emotional Words, Its Interaction, and Controls

Table A1. SLB. With only moral-emotional language and controls retained, moral-emotional language strongly increases reposts (IRR = 1.456). Verification lowers baseline diffusion (IRR = 0.768) and dampens this payoff (Auth × ME IRR = 0.839). Larger audiences and media type also raise reposts (log-followers IRR = 1.79; media type IRR = 2.17).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.217390 | 0.041110 | 0.000000 | 0.804616 | 0.742329 | 0.872131
Moral-Emotional Words | 0.375602 | 0.026065 | 0.000000 | 1.455867 | 1.383360 | 1.532175
Log-Followers | 0.579699 | 0.011191 | 0.000000 | 1.785500 | 1.746764 | 1.825095
Authentication Type | −0.263814 | 0.042460 | 0.000000 | 0.768116 | 0.706781 | 0.834774
Media Type | 0.775174 | 0.040371 | 0.000000 | 2.170970 | 2.005811 | 2.349728
Authentication Type × Moral-Emotional Words | −0.175586 | 0.025691 | 0.000000 | 0.838966 | 0.797766 | 0.882293
Table A2. EG. Moral-emotional language shows a small positive association (IRR = 1.041). Verification reduces baseline diffusion (IRR = 0.773) and weakens the moral-emotional effect (Auth × ME IRR = 0.944). Followers and media type remain strong positives (IRR = 1.48 and 3.02, respectively).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.534210 | 0.016998 | 0.000000 | 0.586132 | 0.566926 | 0.605989
Moral-Emotional Words | 0.040000 | 0.011373 | 0.000436 | 1.040810 | 1.017867 | 1.064271
Log-Followers | 0.389409 | 0.005399 | 0.000000 | 1.476108 | 1.460571 | 1.491812
Authentication Type | −0.257230 | 0.023808 | 0.000000 | 0.773190 | 0.737940 | 0.810125
Media Type | 1.105046 | 0.016779 | 0.000000 | 3.019364 | 2.921683 | 3.120310
Authentication Type × Moral-Emotional Words | −0.057767 | 0.011364 | 0.000000 | 0.943869 | 0.923079 | 0.965128
Table A3. GBV. Moral-emotional language is linked to fewer reposts (IRR = 0.836). Verification lowers baseline diffusion (IRR = 0.703) but partly offsets the negative moral-emotional effect (Auth × ME IRR = 1.071). Followers and media type exert large positive effects (IRR = 1.59 and 4.28).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.796287 | 0.020897 | 0.000000 | 0.451000 | 0.432901 | 0.469856
Moral-Emotional Words | −0.178625 | 0.009666 | 0.000000 | 0.836420 | 0.820723 | 0.852417
Log-Followers | 0.465685 | 0.007418 | 0.000000 | 1.593105 | 1.570109 | 1.616438
Authentication Type | −0.352854 | 0.029390 | 0.000000 | 0.702679 | 0.663346 | 0.744345
Media Type | 1.454114 | 0.019577 | 0.000000 | 4.280688 | 4.119552 | 4.448127
Authentication Type × Moral-Emotional Words | 0.068475 | 0.009606 | 0.000000 | 1.070874 | 1.050900 | 1.091227

Appendix A.2. Adding “Weibo Text Length” as a Control Variable

Table A4. SLB. Adding Weibo text length leaves the core results unchanged: moral, emotional, and especially moral-emotional language predict more reposts (ME IRR = 1.34). Verification lowers baseline diffusion (IRR = 0.78), dampens the moral-emotional payoff (Auth × ME IRR = 0.86), and slightly amplifies the emotional effect (Auth × Emo IRR = 1.05). Text length is negligible (p = 0.86).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.266911 | 0.041788 | 0.000000 | 0.765741 | 0.705524 | 0.831098
Moral Words | 0.065051 | 0.012663 | 0.000000 | 1.067213 | 1.041052 | 1.094031
Emotional Words | 0.059407 | 0.013989 | 0.000022 | 1.061207 | 1.032505 | 1.090707
Moral-Emotional Words | 0.290223 | 0.030306 | 0.000000 | 1.336726 | 1.259638 | 1.418532
Log-Followers | 0.573758 | 0.011179 | 0.000000 | 1.774925 | 1.736459 | 1.814243
Weibo Text Length | 0.000011 | 0.000060 | 0.858478 | 1.000011 | 0.999893 | 1.000129
Authentication Type | −0.248562 | 0.042358 | 0.000000 | 0.779921 | 0.717787 | 0.847434
Media Type | 0.818020 | 0.041630 | 0.000000 | 2.266008 | 2.088458 | 2.458653
Authentication Type × Moral Words | −0.030702 | 0.012559 | 0.014498 | 0.969765 | 0.946185 | 0.993931
Authentication Type × Emotional Words | 0.048822 | 0.013802 | 0.000404 | 1.050034 | 1.022009 | 1.078826
Authentication Type × Moral-Emotional Words | −0.149645 | 0.030042 | 0.000001 | 0.861014 | 0.811780 | 0.913233
Table A5. EG. With text length controlled, moral-emotional language shows a small positive association (IRR = 1.04), while moral and emotional language are not significant. Followers and media type remain strong positives (IRR = 1.49; 2.95). Verification reduces baseline diffusion (IRR = 0.77), weakens moral and moral-emotional effects (Auth × M IRR = 0.985; Auth × ME IRR = 0.956), and slightly amplifies the emotional effect (Auth × Emo IRR = 1.020). Text length has a tiny but significant increase (IRR = 1.00018).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.556557 | 0.017126 | 0.000000 | 0.573179 | 0.554259 | 0.592745
Moral Words | 0.002020 | 0.006677 | 0.762226 | 1.002022 | 0.988995 | 1.015221
Emotional Words | 0.006198 | 0.006865 | 0.366581 | 1.006218 | 0.992770 | 1.019848
Moral-Emotional Words | 0.035733 | 0.011490 | 0.001872 | 1.036379 | 1.013300 | 1.059983
Log-Followers | 0.398845 | 0.005473 | 0.000000 | 1.490103 | 1.474205 | 1.506173
Weibo Text Length | 0.000184 | 0.000016 | 0.000000 | 1.000184 | 1.000152 | 1.000216
Authentication Type | −0.258659 | 0.024131 | 0.000000 | 0.772086 | 0.736420 | 0.809480
Media Type | 1.080857 | 0.017218 | 0.000000 | 2.947204 | 2.849406 | 3.048360
Authentication Type × Moral Words | −0.015137 | 0.006654 | 0.022906 | 0.984977 | 0.972215 | 0.997906
Authentication Type × Emotional Words | 0.019762 | 0.006738 | 0.003358 | 1.019959 | 1.006577 | 1.033518
Authentication Type × Moral-Emotional Words | −0.045073 | 0.011486 | 0.000087 | 0.955928 | 0.934648 | 0.977691
Table A6. GBV. Results are robust to text length: emotional language increases reposts (IRR = 1.08), while moral and moral-emotional language decrease them (IRR = 0.95; 0.86). Verification is negative overall (IRR = 0.67) and moderates the moral-emotional penalty upward (Auth × ME IRR = 1.077); other interactions are small/NS. Followers and media type are strong positives (IRR = 1.65; 3.62), and text length has only a minute positive effect (IRR = 1.00054).
Predictor | Coefficient | Std. Err. | p > |z| | IRR | IRR CI 2.5% | IRR CI 97.5%
Intercept | −0.939071 | 0.021373 | 0.000000 | 0.390991 | 0.374951 | 0.407718
Moral Words | −0.053188 | 0.007042 | 0.000000 | 0.948202 | 0.935204 | 0.961380
Emotional Words | 0.076158 | 0.008092 | 0.000000 | 1.079133 | 1.062153 | 1.096384
Moral-Emotional Words | −0.153362 | 0.009586 | 0.000000 | 0.857819 | 0.841853 | 0.874089
Log-Followers | 0.498102 | 0.007378 | 0.000000 | 1.645596 | 1.621970 | 1.669566
Weibo Text Length | 0.000541 | 0.000035 | 0.000000 | 1.000542 | 1.000473 | 1.000610
Authentication Type | −0.401442 | 0.028933 | 0.000000 | 0.669354 | 0.632452 | 0.708409
Media Type | 1.287149 | 0.020405 | 0.000000 | 3.622444 | 3.480429 | 3.770253
Authentication Type × Moral Words | 0.004250 | 0.006996 | 0.543498 | 1.004259 | 0.990583 | 1.018124
Authentication Type × Emotional Words | −0.010354 | 0.007928 | 0.191549 | 0.989699 | 0.974439 | 1.005198
Authentication Type × Moral-Emotional Words | 0.074454 | 0.009589 | 0.000000 | 1.077296 | 1.057239 | 1.097734

Appendix A.3. Removing the “Media Type” Control

Table A7. SLB. Removing media type leaves the substantive pattern intact: moral-emotional language strongly raises reposts (IRR = 1.33), moral language is modestly positive, and followers remain a major booster (IRR = 1.83). Verification lowers baseline diffusion (IRR = 0.79), dampens the ME effect (Auth × ME IRR = 0.86), and amplifies the emotional effect (Auth × Emo IRR = 1.08).
Table A7. SLB. Removing media type leaves the substantive pattern intact: moral-emotional language strongly raises reposts (IRR = 1.33), moral language is modestly positive, and followers remain a major booster (IRR = 1.83). Verification lowers baseline diffusion (IRR = 0.79), dampens the ME effect (Auth × ME IRR = 0.86), and amplifies the emotional effect (Auth × Emo IRR = 1.08).
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | 0.343293 | 0.030056 | 0.000000 | 1.409582 | 1.328944 | 1.495113 |
| Moral Words | 0.048405 | 0.012447 | 0.000101 | 1.049596 | 1.024301 | 1.075515 |
| Emotional Words | 0.021084 | 0.013800 | 0.126549 | 1.021308 | 0.994055 | 1.049309 |
| Moral-Emotional Words | 0.285592 | 0.029590 | 0.000000 | 1.330549 | 1.255580 | 1.409996 |
| Log-Followers | 0.603235 | 0.011199 | 0.000000 | 1.828022 | 1.788336 | 1.868590 |
| Authentication Type | −0.230176 | 0.042500 | 0.000000 | 0.794393 | 0.730903 | 0.863399 |
| Authentication Type × Moral Words | −0.022805 | 0.012443 | 0.066844 | 0.977453 | 0.953904 | 1.001585 |
| Authentication Type × Emotional Words | 0.072603 | 0.013799 | 0.000000 | 1.075303 | 1.046610 | 1.104783 |
| Authentication Type × Moral-Emotional Words | −0.150410 | 0.029404 | 0.000000 | 0.860355 | 0.812174 | 0.911394 |
Table A8. EG. Excluding media type shifts effects downward: moral language is only slightly positive (IRR = 1.033), while emotional and moral-emotional language are not significant. Verification remains negative (IRR = 0.85) and weakens both moral (Auth × M IRR = 0.975) and moral-emotional cues (Auth × ME IRR = 0.952). Combined with the bootstrap and threshold-slices checks, this suggests that earlier positives were partly driven by media-form mix and high-activity (often verified) users.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | 0.136150 | 0.016189 | 0.000000 | 1.145854 | 1.110066 | 1.182795 |
| Moral Words | 0.032359 | 0.007630 | 0.000022 | 1.032888 | 1.017556 | 1.048451 |
| Emotional Words | −0.012159 | 0.007287 | 0.095204 | 0.987915 | 0.973905 | 1.002126 |
| Moral-Emotional Words | 0.008760 | 0.013338 | 0.511351 | 1.008798 | 0.982767 | 1.035518 |
| Log-Followers | 0.386775 | 0.005835 | 0.000000 | 1.472225 | 1.455484 | 1.489158 |
| Authentication Type | −0.162665 | 0.026295 | 0.000000 | 0.849876 | 0.807185 | 0.894825 |
| Authentication Type × Moral Words | −0.025404 | 0.007637 | 0.000879 | 0.974916 | 0.960432 | 0.989618 |
| Authentication Type × Emotional Words | 0.004536 | 0.007291 | 0.533890 | 1.004546 | 0.990292 | 1.019005 |
| Authentication Type × Moral-Emotional Words | −0.049634 | 0.013338 | 0.000198 | 0.951578 | 0.927025 | 0.976781 |
Table A9. GBV. Dropping media type sharpens contrasts: emotional language shows a stronger positive association (IRR = 1.195), while moral and moral-emotional language are more negative (IRR = 0.90; 0.76). Verification reduces baseline diffusion (IRR = 0.64) but mitigates the ME penalty (Auth × ME IRR = 1.166). Followers remain strongly positive (IRR = 1.69).
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.285021 | 0.021238 | 0.000000 | 0.751998 | 0.721338 | 0.783961 |
| Moral Words | −0.103117 | 0.007617 | 0.000000 | 0.902022 | 0.888656 | 0.915588 |
| Emotional Words | 0.178041 | 0.010284 | 0.000000 | 1.194874 | 1.171031 | 1.219203 |
| Moral-Emotional Words | −0.273849 | 0.009372 | 0.000000 | 0.760447 | 0.746605 | 0.774545 |
| Log-Followers | 0.525339 | 0.007392 | 0.000000 | 1.691032 | 1.666709 | 1.715710 |
| Authentication Type | −0.447767 | 0.028629 | 0.000000 | 0.639054 | 0.604183 | 0.675937 |
| Authentication Type × Moral Words | −0.001112 | 0.007614 | 0.883846 | 0.998888 | 0.984093 | 1.013906 |
| Authentication Type × Emotional Words | −0.019505 | 0.010102 | 0.053502 | 0.980684 | 0.961458 | 1.000294 |
| Authentication Type × Moral-Emotional Words | 0.153685 | 0.009328 | 0.000000 | 1.166123 | 1.144997 | 1.187639 |

Appendix A.4. Retaining Only the Interaction Involving Moral-Emotional Words

Table A10. SLB. Keeping only the interaction with moral-emotional (ME) language, ME remains a strong positive predictor of reposts (IRR = 1.33). Verification lowers baseline diffusion (IRR = 0.78) and attenuates the ME effect (Auth × ME IRR = 0.87). Moral and emotional language are modestly positive; followers and media are strong positives.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.264842 | 0.041446 | 0.000000 | 0.767327 | 0.707460 | 0.832261 |
| Moral Words | 0.057704 | 0.011536 | 0.000001 | 1.059401 | 1.035717 | 1.083627 |
| Emotional Words | 0.067855 | 0.013628 | 0.000001 | 1.070210 | 1.042003 | 1.099179 |
| Moral-Emotional Words | 0.286592 | 0.028211 | 0.000000 | 1.331881 | 1.260238 | 1.407597 |
| Log-Followers | 0.572715 | 0.011216 | 0.000000 | 1.773075 | 1.734522 | 1.812485 |
| Authentication Type | −0.254347 | 0.042549 | 0.000000 | 0.775422 | 0.713380 | 0.842861 |
| Media Type | 0.828858 | 0.041189 | 0.000000 | 2.290701 | 2.113042 | 2.483297 |
| Authentication Type × Moral-Emotional Words | −0.143663 | 0.025637 | 0.000000 | 0.866180 | 0.823732 | 0.910815 |
Table A11. EG. With only the ME interaction retained, effects are small: emotional (IRR = 1.02) and ME language (IRR = 1.03) show slight positives, while moral is non-significant. Verification reduces baseline diffusion (IRR = 0.77) and weakens the ME effect (Auth × ME IRR = 0.95). Followers and media remain strong positives.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.537638 | 0.017030 | 0.000000 | 0.584126 | 0.564952 | 0.603952 |
| Moral Words | 0.007749 | 0.006880 | 0.260018 | 1.007779 | 0.994281 | 1.021461 |
| Emotional Words | 0.021775 | 0.006948 | 0.001725 | 1.022014 | 1.008190 | 1.036027 |
| Moral-Emotional Words | 0.033407 | 0.011707 | 0.004324 | 1.033971 | 1.010516 | 1.057971 |
| Log-Followers | 0.389684 | 0.005397 | 0.000000 | 1.476515 | 1.460980 | 1.492215 |
| Authentication Type | −0.257130 | 0.023826 | 0.000000 | 0.773268 | 0.737988 | 0.810234 |
| Media Type | 1.110094 | 0.016959 | 0.000000 | 3.034642 | 2.935434 | 3.137203 |
| Authentication Type × Moral-Emotional Words | −0.054054 | 0.011350 | 0.000002 | 0.947381 | 0.926539 | 0.968691 |
Table A12. GBV. Results are polarized: emotional language increases reposts (IRR = 1.11), whereas moral and ME language reduce them (IRR = 0.95; 0.84). Verification is negative overall (IRR = 0.69) but partly offsets the ME penalty (Auth × ME IRR = 1.07). Followers and media exert large positive effects.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.848220 | 0.021064 | 0.000000 | 0.428176 | 0.410860 | 0.446223 |
| Moral Words | −0.049636 | 0.007076 | 0.000000 | 0.951575 | 0.938470 | 0.964864 |
| Emotional Words | 0.108148 | 0.008337 | 0.000000 | 1.114212 | 1.096154 | 1.132568 |
| Moral-Emotional Words | −0.170564 | 0.009693 | 0.000000 | 0.843189 | 0.827321 | 0.859361 |
| Log-Followers | 0.476161 | 0.007393 | 0.000000 | 1.609882 | 1.586722 | 1.633381 |
| Authentication Type | −0.371823 | 0.029063 | 0.000000 | 0.689476 | 0.651300 | 0.729891 |
| Media Type | 1.420245 | 0.019737 | 0.000000 | 4.138136 | 3.981114 | 4.301350 |
| Authentication Type × Moral-Emotional Words | 0.072037 | 0.009671 | 0.000000 | 1.074695 | 1.054517 | 1.095259 |

Appendix A.5. Cluster-Robust Bootstrapping

To account for the non-independence of observations from users with multiple posts, we performed a cluster-robust bootstrap analysis: in each of 1000 replicates, one post was randomly retained from every multi-post user and the model was refit, generating a distribution of effect sizes. The resulting 95% confidence intervals (CIs) show that the estimates for the key language variables and their interaction terms are largely stable, with the EG model as the main exception (Figure A2), where the positive language effects depend on the inclusion of highly active accounts.
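For completeness, the resampling step can be sketched as follows. The sketch assumes a pandas DataFrame with one row per post and illustrative column names ('user_id', 'reposts', 'moral', 'emotional', 'moral_emotional', 'log_followers', 'auth', 'media'); these names and the statsmodels-based refit are placeholders rather than the exact code used for the analysis.

```python
# Minimal sketch of the cluster-aware bootstrap described above (column names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

FORMULA = ("reposts ~ (moral + emotional + moral_emotional) * auth"
           " + log_followers + media")

def bootstrap_coefficients(posts: pd.DataFrame, n_boot: int = 1000, seed: int = 42) -> pd.DataFrame:
    """Refit the negative binomial model n_boot times, keeping one random post per author."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        # Shuffle, then keep the first (i.e., one random) post per user, so that
        # multi-post authors contribute a single observation to each replicate.
        shuffled = posts.sample(frac=1.0, random_state=int(rng.integers(1 << 31)))
        sampled = shuffled.groupby("user_id", sort=False).head(1)
        draws.append(smf.negativebinomial(FORMULA, data=sampled).fit(disp=0).params)
    draws = pd.DataFrame(draws)
    # Bootstrap means and percentile 95% CIs for every coefficient.
    return pd.DataFrame({"mean": draws.mean(),
                         "ci_2.5%": draws.quantile(0.025),
                         "ci_97.5%": draws.quantile(0.975)})
```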
Figure A1. Distribution of bootstrapped coefficients for the SLB model. The figure displays the results of 1000 cluster-aware bootstrap simulations to assess the robustness of the model’s coefficients. Each histogram shows the distribution of a coefficient’s estimate across the simulations. The dashed blue line represents the mean effect size (Mean), while the solid red lines delineate the 95% confidence interval (95% CI). A narrow confidence interval that does not cross zero indicates a robust and statistically significant effect.
Figure A2. Distribution of bootstrapped coefficients for the EG model. Each panel shows the sampling distribution of a coefficient, with the mean (blue dashed line) and 95% CI (red lines). When high-activity users are restricted to one post per multi-post user in each resample, the emotional and moral-emotional main effects shift toward or below zero and their CIs often span zero, in contrast to the main model. By comparison, the moral effect is near zero or slightly positive, Auth × Emotional is positive, and Auth × Moral-Emotional is negative. Taken together, the bootstrap indicates that the apparent positive effects in EG are sensitive to user-level clustering and are largely driven by a small set of highly active accounts.
Figure A3. Distribution of bootstrapped coefficients for the GBV model. The figure displays the results of 1000 cluster-aware bootstrap simulations to assess the robustness of the model’s coefficients. Each histogram shows the distribution of a coefficient’s estimate across the simulations. The dashed blue line represents the mean effect size (mean), while the solid red lines delineate the 95% confidence interval (95% CI). A narrow confidence interval that does not cross zero indicates a robust and statistically significant effect.

Appendix B

Appendix B.1. Effect of Emotional Polarity on Repost Rates

Table A13. SLB. Negative emotion is associated with more reposts (IRR = 1.028, p < 0.001), while positive emotion is not. Both positive and negative moral-emotional scores increase diffusion (IRR = 1.084 and 1.055). Moral language is modestly positive (IRR = 1.079). Followers and media strongly raise reposts; authentication is negative.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.207792 | 0.041573 | 0.000001 | 0.812376 | 0.748807 | 0.881341 |
| Moral Words | 0.075579 | 0.011330 | 0.000000 | 1.078509 | 1.054824 | 1.102726 |
| Positive Emotional Score | 0.004156 | 0.005098 | 0.414945 | 1.004165 | 0.994181 | 1.014250 |
| Negative Emotional Score | 0.027638 | 0.005008 | 0.000000 | 1.028024 | 1.017982 | 1.038164 |
| Positive Moral-Emotional Score | 0.080523 | 0.010645 | 0.000000 | 1.083854 | 1.061475 | 1.106705 |
| Negative Moral-Emotional Score | 0.053824 | 0.010204 | 0.000000 | 1.055298 | 1.034403 | 1.076616 |
| Log-Followers | 0.568665 | 0.011244 | 0.000000 | 1.765908 | 1.727416 | 1.805259 |
| Authentication Type | −0.273876 | 0.044214 | 0.000000 | 0.760426 | 0.697304 | 0.829263 |
| Media Type | 0.815004 | 0.041677 | 0.000000 | 2.259186 | 2.081979 | 2.451475 |
Table A14. EG. Only negative emotion shows a small positive association (IRR = 1.030). Positive emotion and positive moral-emotional are not significant, whereas negative moral-emotional slightly reduces reposts (IRR = 0.988). Followers and media are strong positives; authentication is negative.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.549141 | 0.016991 | 0.000000 | 0.577446 | 0.558532 | 0.597000 |
| Moral Words | 0.008885 | 0.006688 | 0.184020 | 1.008924 | 0.995786 | 1.022236 |
| Positive Emotional Score | 0.003300 | 0.002657 | 0.214267 | 1.003306 | 0.998094 | 1.008545 |
| Negative Emotional Score | 0.029804 | 0.003122 | 0.000000 | 1.030252 | 1.023968 | 1.036575 |
| Positive Moral-Emotional Score | 0.002167 | 0.005511 | 0.694139 | 1.002170 | 0.991403 | 1.013053 |
| Negative Moral-Emotional Score | −0.011641 | 0.002945 | 0.000077 | 0.988426 | 0.982737 | 0.994149 |
| Log-Followers | 0.385857 | 0.005415 | 0.000000 | 1.470874 | 1.455345 | 1.486569 |
| Authentication Type | −0.235552 | 0.023961 | 0.000000 | 0.790134 | 0.753885 | 0.828126 |
| Media Type | 1.109948 | 0.016852 | 0.000000 | 3.034202 | 2.935619 | 3.136095 |
Table A15. GBV. Positive emotion boosts reposts (IRR = 1.041); negative emotion is not significant. The negative moral-emotional score tilts slightly negative (IRR = 0.993), while the positive moral-emotional score is not significant. Moral language lowers reposts (IRR = 0.959). Followers and media remain strong positives; authentication is negative.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | −0.730555 | 0.020130 | 0.000000 | 0.481641 | 0.463009 | 0.501024 |
| Moral Words | −0.041851 | 0.007185 | 0.000000 | 0.959013 | 0.945603 | 0.972614 |
| Positive Emotional Score | 0.040241 | 0.004393 | 0.000000 | 1.041062 | 1.032137 | 1.050065 |
| Negative Emotional Score | 0.003724 | 0.002653 | 0.160375 | 1.003731 | 0.998526 | 1.008964 |
| Positive Moral-Emotional Score | −0.007633 | 0.006387 | 0.232051 | 0.992396 | 0.980051 | 1.004897 |
| Negative Moral-Emotional Score | −0.006962 | 0.003419 | 0.041730 | 0.993062 | 0.986429 | 0.999739 |
| Log-Followers | 0.458547 | 0.007319 | 0.000000 | 1.581775 | 1.559246 | 1.604628 |
| Authentication Type | −0.354262 | 0.029656 | 0.000000 | 0.701691 | 0.662068 | 0.743686 |
| Media Type | 1.484741 | 0.019711 | 0.000000 | 4.413823 | 4.246556 | 4.587677 |

Appendix B.2. Effects of Discrete Moral-Emotional Categories on Reposts

Table A16. SLB. Among moral-emotional (ME) subdimensions, only ME_Good shows a small positive association with reposts (IRR = 1.022); ME_Joy, ME_Sadness, ME_Fear, and ME_Disgust are not significant.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | 2.198364 | 0.031632 | 0.000000 | 9.010261 | 8.468609 | 9.586558 |
| ME_Joy Score | 0.152332 | 0.099538 | 0.125920 | 1.164546 | 0.958143 | 1.415413 |
| ME_Good Score | 0.021793 | 0.010856 | 0.044692 | 1.022032 | 1.000517 | 1.044011 |
| ME_Sadness Score | 0.052037 | 0.046180 | 0.259808 | 1.053415 | 0.962257 | 1.153209 |
| ME_Fear Score | 0.184603 | 0.149119 | 0.215731 | 1.202741 | 0.897929 | 1.611024 |
| ME_Disgust Score | −0.009094 | 0.013691 | 0.506525 | 0.990947 | 0.964710 | 1.017897 |
Note: The anger and surprise categories were omitted from the model due to insufficient data for reliable estimation.
Table A17. EG. The ME subdimensions are mostly null; only ME_Disgust exhibits a small negative effect (IRR = 0.988).
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | 1.320753 | 0.018629 | 0.000000 | 3.746242 | 3.611925 | 3.885555 |
| ME_Joy Score | 0.017168 | 0.016244 | 0.290554 | 1.017317 | 0.985438 | 1.050226 |
| ME_Good Score | −0.002324 | 0.008210 | 0.777152 | 0.997679 | 0.981754 | 1.013863 |
| ME_Sadness Score | 0.028687 | 0.044693 | 0.520958 | 1.029103 | 0.942792 | 1.123315 |
| ME_Fear Score | 0.118506 | 0.092438 | 0.199841 | 1.125814 | 0.939254 | 1.349428 |
| ME_Disgust Score | −0.011865 | 0.003992 | 0.002956 | 0.988205 | 0.980503 | 0.995967 |
Note: The anger and surprise categories were omitted from the model due to insufficient data for reliable estimation.
Table A18. GBV. ME_Fear strongly promotes diffusion (IRR = 1.314), and ME_Good is also positive (IRR = 1.069). ME_Sadness is marginally negative (IRR = 0.95, p = 0.093); ME_Joy and ME_Disgust are not significant.
| Term | Coefficient | Std. Err. | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|---|
| Intercept | 1.425181 | 0.024268 | 0.000000 | 4.158611 | 3.965439 | 4.361192 |
| ME_Joy Score | 0.021623 | 0.047039 | 0.645747 | 1.021858 | 0.931861 | 1.120547 |
| ME_Good Score | 0.066351 | 0.012561 | 0.000000 | 1.068602 | 1.042616 | 1.095236 |
| ME_Sadness Score | −0.050929 | 0.030307 | 0.092870 | 0.950346 | 0.895539 | 1.008507 |
| ME_Fear Score | 0.272934 | 0.061817 | 0.000010 | 1.313814 | 1.163899 | 1.483038 |
| ME_Disgust Score | 0.001301 | 0.004676 | 0.780798 | 1.001302 | 0.992168 | 1.010520 |
Note: The anger and surprise categories were omitted from the model due to insufficient data for reliable estimation.

Appendix C

Appendix C.1. Human–AI Coding Run Log

Table A19. Environment. All human–AI coding runs were conducted in UTC+8 using Google Gemini 2.5 Pro (chat interface) on 14 July 2025 with default deterministic settings. The model used Prompt v1.0 (see Appendix C.2). The dataset comprised 349 Weibo hot-event titles from the Micro Hotspot Big Data Research Institute, where posts were pre-clustered into events and assigned unique IDs.
| Item | Detail |
|---|---|
| Time zone | UTC+8 |
| Provider/Model | Google, Gemini 2.5 Pro (chat interface) |
| Access date | 14 July 2025 |
| Deterministic settings | Default |
| Prompt | Prompt v1.0 (see Appendix C.2) |
| Data | Micro Hotspot Big Data Research Institute, Weibo hot events 2024 (n = 349 events; provider clustered posts to events and assigned unique IDs). |

Appendix C.2. Prompt v1.0

Now we are going to do human-AI collaborative coding. The coding task is to classify 349 hot search event titles on Weibo. They need to be divided into four categories: gender-based violence (GBV), street-level bureaucrats, education governance, and others. That is to say, all events except the first three topics are classified as others, and no detailed judgment is made. Please read the definitions of the first three topics carefully and classify the events. First, you will be given a subset of events to judge. These events are random. Each event may be one of the four categories of topics. They have been manually coded. After you finish coding, you will compare the accuracy with the manual coding results. Now please read the definitions:
Gender-based violence: violence, discrimination or oppression against individuals, especially women and gender minorities, based on gender differences, gender identity or social gender roles. Its forms include but are not limited to physical violence, sexual violence, verbal insults, sexual harassment, humiliating remarks, online rumors, institutional injustice, etc. Such behavior is often accompanied by patriarchal structures, gender stereotypes, and unequal power relations, and causes physical and mental harm or social exclusion to victims in the public or private sphere.
Street bureaucracy: events involving frontline administrative law enforcement or public service personnel (such as urban management, traffic police, community grid workers, etc.) exercising administrative discretion, implementing policies or providing services when directly facing the public. Its core characteristics include: frontline personnel interact with the public face-to-face as policy implementers; under limited resources, vague norms or implementation pressure, individual behavior to a certain extent reflects the actual implementation of policies; events often involve the use of power, law enforcement standards, service attitudes, execution efficiency, public reaction and other content. All social conflicts, disputes, innovations or daily practices related to “how grassroots frontline public officials enforce the law, how they provide services, whether they abuse their power or neglect their duties” can be classified as “street bureaucracy” issues.
Education governance: social concerns and disputes caused by rule-making, policy implementation, power and responsibility allocation and governance structure arrangements in the education system. It involves coordination and conflict among multiple actors. This issue focuses on the performance of education policies and practices in terms of fairness, transparency, and institutional rationality, and is an important entry point for understanding the effectiveness of the education system, public trust, and social response. The events covered in the education governance topic should reflect a problem or adverse outcome caused by the structural mechanism of education.
After understanding, please make a judgment on the following events: just give the judgment result (gender-based violence (GBV), street-level bureaucrats, education governance, other) and reasons for each event.
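Although the accuracy comparison was carried out in the chat interface, the same check can be quantified offline once the AI labels are exported alongside the manual codes. The snippet below is a generic sketch with toy label lists; the variable names and values are illustrative and do not come from the study's data.

```python
# Toy sketch: agreement between manually coded and AI-coded event categories.
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

labels = ["GBV", "street-level bureaucrats", "education governance", "other"]
manual = ["GBV", "other", "education governance", "other", "street-level bureaucrats"]  # human codes (toy)
ai = ["GBV", "other", "other", "other", "street-level bureaucrats"]                     # model codes (toy)

print("accuracy:", accuracy_score(manual, ai))
print("Cohen's kappa:", cohen_kappa_score(manual, ai, labels=labels))
print(confusion_matrix(manual, ai, labels=labels))
```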

Figure 1. Moderating effect of user authentication on language diffusion in SLB discussions. The plots visualize the predicted marginal effects from the negative binomial regression model. The vertical axis represents the predicted number of reposts, where the axis label 10⁶ denotes millions (e.g., 1.0 × 10⁶ equals 1,000,000). The steeper upward slope indicates the stronger viral effect of moral-emotional language.
Figure 2. Moderating effect of user authentication on language diffusion in EG discussions. The plots visualize the predicted marginal effects from the negative binomial regression model. The vertical axis represents the predicted number of reposts. The steeper upward slope indicates the stronger viral effect of moral-emotional language.
Figure 3. Moderating effect of user authentication on language diffusion in GBV discussions. The plots visualize the predicted marginal effects from the negative binomial regression model. The vertical axis represents the predicted number of reposts. The steeper upward slope indicates the stronger viral effect of emotional language.
Figure 4. Verified vs. unverified share by post threshold and topic (100% stacked). Each x-axis group shows the number of posts by the same user on the same topic (1, >1, >2, >3, >4, >5). Within each group, bars (left→right) correspond to SLB, EG, and GBV. The verified proportion rises steadily as the posting threshold increases (roughly from ~30–40% at one post to ~70–80% beyond >4), indicating that high-activity users are disproportionately verified and the unverified share correspondingly declines.
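As a pointer for replication, the quantity plotted in Figure 4 can be computed from per-author post counts roughly as in the sketch below; the DataFrame and column names ('topic', 'user_id', 'verified') are assumptions for illustration, not the study's actual variable names.

```python
# Sketch of the verified-author share behind Figure 4 (illustrative column names).
import pandas as pd

def verified_share_by_threshold(posts: pd.DataFrame) -> pd.DataFrame:
    # Posts per author within each topic, plus the author's verification flag.
    authors = (posts.groupby(["topic", "user_id"])
                    .agg(n_posts=("verified", "size"), verified=("verified", "max"))
                    .reset_index())
    thresholds = [("1", lambda n: n == 1), (">1", lambda n: n > 1), (">2", lambda n: n > 2),
                  (">3", lambda n: n > 3), (">4", lambda n: n > 4), (">5", lambda n: n > 5)]
    rows = []
    for label, keep in thresholds:
        # Share of verified authors among those whose post count meets the threshold.
        share = authors[keep(authors["n_posts"])].groupby("topic")["verified"].mean()
        rows += [{"threshold": label, "topic": t, "verified_share": s} for t, s in share.items()]
    return pd.DataFrame(rows)  # pivot by threshold/topic and plot as 100% stacked bars
```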
Figure 5. Emotional words—threshold slices. Negative-binomial IRRs with 95% CIs are estimated on six nested subsets by re-introducing multi-post users (1, >1, >2, >3, >4, >5). The IRR shifts from ~0.97 at the single-post slice to ~1.09 at “>1” and then increases slightly as higher-activity users are included, with widening uncertainty bands. This pattern indicates that the apparent positive effect of emotional language in EG is largely driven by highly active accounts.
Figure 6. Moral-emotional words—threshold slices. Negative-binomial IRRs with 95% CIs estimated on six nested subsets (1, >1, >2, >3, >4, >5). The IRR rises from ~0.97 at the single-post slice to ~1.08 at “>1”, then tapers back toward ~1.00 as increasingly active users are included, with widening uncertainty. This pattern indicates that the apparent positive effect is largely driven by highly active accounts.
Table 1. Negative binomial regression (DV: reposts)—SLB. Moral, emotional, and especially moral-emotional language is positively associated with reposts (IRR = 1.34 for moral-emotional), net of controls. Interactions show that verification dampens the payoff of moral-emotional language (IRR = 0.86) and slightly reduces the effect of moral words (IRR = 0.97), while it amplifies the effect of emotional words (IRR = 1.05).
| Term | Coefficient | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|
| Intercept | −0.267364 | 0.000000 | 0.765394 | 0.705306 | 0.830601 |
| Moral Words | 0.065249 | 0.000000 | 1.067424 | 1.041346 | 1.094156 |
| Emotional Words | 0.059540 | 0.000020 | 1.061348 | 1.032668 | 1.090825 |
| Moral-Emotional Words | 0.290710 | 0.000000 | 1.337377 | 1.260496 | 1.418948 |
| Log-Followers | 0.573866 | 0.000000 | 1.775116 | 1.736695 | 1.814387 |
| Authentication Type | −0.248834 | 0.000000 | 0.779710 | 0.717619 | 0.847173 |
| Media Type | 0.818284 | 0.000000 | 2.266607 | 2.089126 | 2.459166 |
| Authentication Type × Moral Words | −0.030681 | 0.014607 | 0.969785 | 0.946196 | 0.993962 |
| Authentication Type × Emotional Words | 0.048961 | 0.000383 | 1.050180 | 1.022182 | 1.078945 |
| Authentication Type × Moral-Emotional Words | −0.149819 | 0.000001 | 0.860864 | 0.811626 | 0.913088 |
Note: All language variables and followers are mean-centered. User type and media type use effects coding. n = 23,730.
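To make the specification behind Tables 1–3 concrete (main effects of the three language measures, followers and media type, plus the verification interactions), the sketch below fits the same model form on synthetic stand-in data and exponentiates the coefficients to obtain IRRs. The variable names and the simulated data are placeholders; only the model structure mirrors the tables.

```python
# Sketch of the negative binomial specification with verification interactions,
# fitted to synthetic stand-in data (all variable names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    "moral": rng.normal(size=n),              # mean-centered lexical scores
    "emotional": rng.normal(size=n),
    "moral_emotional": rng.normal(size=n),
    "log_followers": rng.normal(size=n),
    "auth": rng.choice([-1.0, 1.0], size=n),  # effects-coded verification status
    "media": rng.choice([-1.0, 1.0], size=n), # effects-coded media type
})
mu = np.exp(0.3 + 0.3 * data["moral_emotional"] + 0.5 * data["log_followers"])
data["reposts"] = rng.negative_binomial(2, 2.0 / (2.0 + mu.to_numpy()))  # NB counts with mean mu

formula = ("reposts ~ (moral + emotional + moral_emotional) * auth"
           " + log_followers + media")
res = smf.negativebinomial(formula, data=data).fit(disp=0)

irr = np.exp(res.params).drop("alpha").rename("IRR")   # incidence rate ratios
irr_ci = np.exp(res.conf_int()).drop("alpha")          # 95% CIs on the IRR scale
irr_ci.columns = ["IRR CI 2.5%", "IRR CI 97.5%"]
print(pd.concat([irr, irr_ci], axis=1).round(3))
```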
Table 2. Negative binomial regression (DV: reposts)—EG. Emotional and moral-emotional language show small positive associations with reposts (IRR ≈ 1.02–1.04), while moral language is not significant. Interactions indicate that verification slightly amplifies the effect of emotional words (IRR = 1.02), but reduces the effect of moral-emotional language (IRR = 0.95); the interaction with moral words is marginal.
| Term | Coefficient | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|
| Intercept | −0.539663 | 0.000000 | 0.582945 | 0.563792 | 0.602749 |
| Moral Words | 0.006391 | 0.355392 | 1.006411 | 0.992863 | 1.020144 |
| Emotional Words | 0.021411 | 0.002210 | 1.021642 | 1.007729 | 1.035747 |
| Moral-Emotional Words | 0.035958 | 0.002339 | 1.036612 | 1.012883 | 1.060897 |
| Log-Followers | 0.390396 | 0.000000 | 1.477566 | 1.461997 | 1.493300 |
| Authentication Type | −0.259564 | 0.000000 | 0.771388 | 0.736167 | 0.808295 |
| Media Type | 1.111927 | 0.000000 | 3.040211 | 2.940554 | 3.143246 |
| Authentication Type × Moral Words | −0.013255 | 0.054699 | 0.986832 | 0.973578 | 1.000267 |
| Authentication Type × Emotional Words | 0.018812 | 0.006967 | 1.018990 | 1.005161 | 1.033009 |
| Authentication Type × Moral-Emotional Words | −0.051778 | 0.000011 | 0.949540 | 0.927834 | 0.971753 |
Note: All language variables and followers are mean-centered. User type and media type use effects coding. n = 97,731.
Table 3. Negative binomial regression (DV: reposts)—GBV. Emotional language is positively associated with reposts (IRR = 1.12), whereas moral (IRR = 0.95) and moral-emotional language (IRR = 0.84) are negative predictors. The authentication main effect is negative (IRR = 0.69). Interactions are modest: authentication slightly attenuates the emotional effect (IRR = 0.98), is non-significant with moral words, and shifts the moral-emotional effect toward zero (IRR = 1.07).
| Term | Coefficient | p-value | Incidence Rate Ratio (IRR) | IRR CI 2.5% | IRR CI 97.5% |
|---|---|---|---|---|---|
| Intercept | −0.853224 | 0.000000 | 0.426039 | 0.408699 | 0.444116 |
| Moral Words | −0.049441 | 0.000000 | 0.951761 | 0.938478 | 0.965232 |
| Emotional Words | 0.111466 | 0.000000 | 1.117916 | 1.099604 | 1.136533 |
| Moral-Emotional Words | −0.169417 | 0.000000 | 0.844157 | 0.828182 | 0.860439 |
| Log-Followers | 0.477302 | 0.000000 | 1.611720 | 1.588509 | 1.635269 |
| Authentication Type | −0.370692 | 0.000000 | 0.690257 | 0.652124 | 0.730619 |
| Media Type | 1.421567 | 0.000000 | 4.143608 | 3.986287 | 4.307139 |
| Authentication Type × Moral Words | −0.001102 | 0.876981 | 0.998898 | 0.985056 | 1.012936 |
| Authentication Type × Emotional Words | −0.022161 | 0.007877 | 0.978083 | 0.962226 | 0.994201 |
| Authentication Type × Moral-Emotional Words | 0.069734 | 0.000000 | 1.072223 | 1.051976 | 1.092860 |
Note: All language variables and followers are mean-centered. User type and media type use effects coding. n = 68,411.
Table 4. Distribution of posts per author across topics. Single-post authors dominate in all topics (SLB = 67.75%, EG = 73.33%, GBV = 77.08%), while multi-post authors account for roughly a quarter to a third (22.92–32.25%). The multi-post tail thins quickly (e.g., “>3” falls to 5.68–11.07%), but cluster sizes are highly heterogeneous across topics (ranges: SLB 1–76; EG 1–380; GBV 1–466), underscoring potential non-independence from high-activity users.
| Number of Posts Appearing in Data Set | Street-Level Bureaucracy (%) | Education Governance (%) | Gender-Based Violence (%) | Mean (%) |
|---|---|---|---|---|
| 1 | 67.75 | 73.33 | 77.08 | 72.72 |
| >1 | 32.25 | 26.67 | 22.92 | 27.28 |
| >2 | 17 | 12.99 | 9.75 | 13.25 |
| >3 | 11.07 | 8.23 | 5.68 | 8.33 |
| >4 | 7.88 | 5.85 | 3.77 | 5.83 |
| >5 | 5.99 | 4.46 | 2.72 | 4.39 |
| Range | 1–76 | 1–380 | 1–466 | |
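For reference, the shares in Table 4 can be reproduced from per-author post counts along the lines of the sketch below; the input columns ('topic', 'user_id') are illustrative assumptions rather than the study's actual field names.

```python
# Sketch of the posts-per-author shares reported in Table 4 (illustrative column names).
import pandas as pd

def author_activity_shares(posts: pd.DataFrame) -> pd.DataFrame:
    per_author = posts.groupby(["topic", "user_id"]).size().rename("n_posts").reset_index()
    out = {}
    for topic, grp in per_author.groupby("topic"):
        row = {"1": (grp["n_posts"] == 1).mean() * 100}         # % of authors with a single post
        row.update({f">{k}": (grp["n_posts"] > k).mean() * 100  # % of authors with more than k posts
                    for k in range(1, 6)})
        row["Range"] = f"{grp['n_posts'].min()}-{grp['n_posts'].max()}"
        out[topic] = row
    return pd.DataFrame(out)  # rows: 1, >1, ..., >5, Range; columns: topics
```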