C-STEER: A Dynamic Sentiment-Aware Framework for Fake News Detection with Lifecycle Emotional Evolution
Abstract
1. Introduction
2. Related Work
3. Model Details
3.1. Overall Framework
- Data preprocessing: The data preprocessing phase involves two main steps. First, user–news interaction records are cleaned via deduplication and timestamp completion, and user stances are annotated (support, denial, neutral, retweet). Second, news text is preprocessed using standard tokenization and stopword removal.
- Parallel feature extraction: The graph module constructs a heterogeneous graph and derives structural features [7,15]. The life-cycle module partitions propagation into stages based on interaction signals [10]. The emotion–temporal module extracts stage-wise dynamic emotions and models temporal dependencies [6,9,24,26]. The text module obtains semantic representations using a pretrained language model [33,34].
- Feature fusion and classification: All feature streams are projected to 64 dimensions and then fused via concatenation and linear transformations to enable cross-feature interaction. The classifier outputs the probability that a news item is fake; we adopt a decision threshold of 0.5, i.e., an item is predicted as fake if its predicted probability exceeds 0.5 [18,21].
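As a sketch of this fusion-and-classification step, the following minimal NumPy example projects three feature streams to the shared 64-d space, fuses them by concatenation plus a linear layer with LeakyReLU, and applies the 0.5 decision threshold. The input dimensions (128 for emotion/temporal, 768 for text) and the random, untrained weights are illustrative assumptions, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, out_dim=64):
    """Map a feature vector into the shared 64-d space with a (random, untrained) linear layer."""
    W = rng.standard_normal((out_dim, x.shape[0])) / np.sqrt(x.shape[0])
    return W @ x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three illustrative feature streams with assumed input dimensions.
graph_feat   = rng.standard_normal(64)   # graph module (already 64-d)
emotion_feat = rng.standard_normal(128)  # emotion/temporal module
text_feat    = rng.standard_normal(768)  # text module (e.g., a PLM pooled output)

# Dimension alignment, concatenation, and a linear fusion layer with LeakyReLU.
fused_in = np.concatenate([graph_feat, project(emotion_feat), project(text_feat)])  # 192-d
W_fuse = rng.standard_normal((64, fused_in.shape[0])) / np.sqrt(fused_in.shape[0])
z = W_fuse @ fused_in
fused = np.where(z > 0, z, 0.01 * z)     # LeakyReLU activation

# Classifier head: probability of "fake", thresholded at 0.5.
w_out = rng.standard_normal(64) / 8.0
p_fake = sigmoid(w_out @ fused)
label = "fake" if p_fake > 0.5 else "real"
```

In the full model the projection and fusion weights are of course learned end-to-end; this sketch only shows the shapes and the decision rule.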
3.2. Graph Construction
- Nodes: We include news nodes and user nodes.
- News nodes: initial features comprise publication time and the global mean emotion.
- User nodes: initial features comprise the historical stance distribution and the user's interaction activity level.
- Edges and weights (emotion-driven design): We instantiate two edge types:
- User–news interaction edges: if a user engages with a news item at time t (retweet, comment, like, etc.), an edge is created. Its weight is jointly determined by time decay, the emotion intensity of the interaction text, and the emotional similarity (synchrony) to the news item’s current mean emotion.
- User–user edges: connect users who interact with one another (e.g., through retweets or replies), forming the user–user part of the heterogeneous graph.
- Structural feature learning: We adopt a lightweight neighbor sampling and aggregation scheme to preserve topology while remaining efficient:
- Neighbor aggregation: Sampled neighbor features are aggregated via a weighted mean that emphasizes neighbors with higher emotional synchrony and stronger emotion intensity. The aggregated vector is then concatenated with the node’s own features and passed through a linear transformation plus activation to yield updated node representations.
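The edge-weighting and neighbor-aggregation steps above can be sketched as follows. This is a minimal illustration assuming emotion scores in [-1, 1] and an exponential time decay with scale tau; the decay form, the value of tau, and the multiplicative combination of the three factors are our assumptions, as the paper's exact formulas are not reproduced here.

```python
import numpy as np

def edge_weight(t_now, t_interact, emo_interact, emo_news_mean, tau=3600.0):
    """User-news edge weight: time decay x emotion intensity x emotional synchrony."""
    decay = np.exp(-(t_now - t_interact) / tau)                # older interactions matter less
    intensity = abs(emo_interact)                              # strength of the interaction's emotion
    synchrony = 1.0 - abs(emo_interact - emo_news_mean) / 2.0  # 1 when emotions align
    return decay * intensity * max(synchrony, 0.0)

def aggregate(self_feat, neigh_feats, neigh_weights, W):
    """Weighted-mean neighbor aggregation, then concat with self + linear + tanh."""
    w = np.asarray(neigh_weights, dtype=float)
    w = w / w.sum()                                            # normalize edge weights
    neigh_mean = (w[:, None] * np.asarray(neigh_feats)).sum(axis=0)
    h = np.concatenate([self_feat, neigh_mean])                # [self ; aggregated neighbors]
    return np.tanh(W @ h)                                      # updated node representation

rng = np.random.default_rng(1)
d = 8
self_feat = rng.standard_normal(d)
neigh = rng.standard_normal((3, d))
# Edge weights for three sampled neighbors (timestamps and emotion scores are toy values).
weights = [edge_weight(1000.0, t, e, -0.6)
           for t, e in [(900.0, -0.8), (500.0, -0.5), (100.0, 0.3)]]
W = rng.standard_normal((d, 2 * d)) / 4.0                      # untrained linear layer
h_new = aggregate(self_feat, neigh, weights, W)
```

Note how the first neighbor (recent, strongly negative, emotionally synchronous with the news item) receives the largest weight, matching the design intent described above.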
3.3. Life-Cycle Segmentation of News
- (1) Awareness exhibits a typical cumulative pattern, with fake news growing faster in the early stage;
- (2) The interpersonal propagation share of fake news rises first and then recedes, whereas real news remains more stable overall;
- (3) In terms of emotion intensity, fake news is more negative during the initiation and burst stages and gradually reverts toward neutrality thereafter.
3.3.1. Life-Cycle Segmentation
- (1) Sliding-window configuration.
- (2) Instantaneous propagation speed.
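The sliding-window speed computation can be illustrated as follows; the one-hour window size and the count-per-window proxy for instantaneous propagation speed are our assumptions for the sketch, since the paper's window formula is not reproduced here.

```python
import numpy as np

def propagation_speed(timestamps, window=3600.0):
    """Bucket interaction timestamps into fixed windows and return the count per
    window, a simple proxy for instantaneous propagation speed (interactions/window)."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    t = t - t[0]                                   # measure time from the first interaction
    n_bins = int(t[-1] // window) + 1
    speed, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * window))
    return speed

# Toy interaction timestamps (seconds): a quiet first hour, a burst, then decay.
ts = [0, 10, 50, 3700, 3800, 3900, 3950, 7300]
speed = propagation_speed(ts, window=3600.0)       # counts per one-hour window
```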
3.3.2. Phase Segmentation Algorithm
- (1) Peak-detection parameterization:
- Prominence: set to the 50th percentile (median) of the non-zero values in the speed sequence, ensuring that only significant peaks are detected;
- Width: set to (where denotes the floor function), ensuring that each detected peak corresponds to a sustained high-speed interval;
- Distance: set to , preventing adjacent windows from being repeatedly flagged as peaks.
- (2) Phase-boundary determination:
- Initiation: starts at the first interaction time and ends at the time corresponding to the window 20% before the main peak;
- Burst: starts at , and ends at the time corresponding to the window 30% after the main peak, which includes the post-peak plateau;
- Decay: starts at , and ends at the last interaction time , with the additional requirement that the propagation speed in this stage stays below 20% of the peak value.
- (3) Special cases.
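A simplified sketch of the phase-boundary rules is given below. The 20%/30% offsets around the main peak follow the text above, but the plain argmax peak is our simplification of the paper's prominence/width/distance-parameterized peak detector (e.g., as provided by `scipy.signal.find_peaks`).

```python
import numpy as np

def segment_lifecycle(speed):
    """Split a propagation-speed sequence into initiation/burst/decay windows
    around the main peak. Returns half-open window-index ranges per stage."""
    speed = np.asarray(speed, dtype=float)
    n = len(speed)
    peak = int(np.argmax(speed))                 # main peak (simplified detection)
    t1 = max(int(peak - 0.2 * n), 1)             # initiation ends 20% before the peak
    t2 = min(int(peak + 0.3 * n) + 1, n - 1)     # burst ends 30% after the peak
    return {"initiation": (0, t1), "burst": (t1, t2), "decay": (t2, n)}

# Toy speed sequence: slow start, sharp burst, long decay.
speed = [1, 2, 3, 10, 25, 40, 22, 9, 4, 2, 1, 1]
phases = segment_lifecycle(speed)
```

The boundaries are contiguous by construction, so every interaction window is assigned to exactly one stage.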
3.4. Stage-Wise Emotion Feature Extraction Under the Life-Cycle
3.4.1. Extraction of Stage-Wise Static Emotion Features
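The 13-dimensional stage-wise feature vector defined in the feature table (emotion mean, emotion std, negative-valence ratio, and user entropy for each of the three stages, plus one cross-stage dynamic feature) can be sketched as follows. The unweighted mean (in place of the paper's time-weighted mean) and the toy data are simplifying assumptions.

```python
import numpy as np

def stage_features(emotions, users):
    """Four static features for one stage: emotion mean, emotion std,
    negative-valence ratio, and user entropy (diversity of interacting users)."""
    e = np.asarray(emotions, dtype=float)        # emotion scores assumed in [-1, 1]
    neg_ratio = float((e < 0).mean())            # proportion of negative stances
    _, counts = np.unique(users, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-(p * np.log2(p)).sum())     # low entropy = few homogeneous users
    return [float(e.mean()), float(e.std()), neg_ratio, entropy]

# Toy initiation / burst / decay stages: (emotion scores, interacting user IDs).
stages = [
    ([-0.8, -0.6, -0.7], ["u1", "u1", "u2"]),             # initiation: concentrated negativity
    ([-0.5, 0.2, -0.9, -0.4], ["u1", "u2", "u3", "u4"]),  # burst: conflicting stances
    ([-0.1, 0.0, 0.1], ["u5", "u6", "u7"]),               # decay: reversion to neutrality
]
feats = [f for emo, usr in stages for f in stage_features(emo, usr)]  # 12 static features
# One dynamic feature: shift in mean emotion from initiation to decay.
feats.append(float(np.mean(stages[2][0]) - np.mean(stages[0][0])))    # 13-d total
```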
3.4.2. Temporal Encoding of Interaction Sequences with BiLSTM
3.5. Text Feature Extraction
3.6. Multi-Feature Fusion and Classification
- Dimension alignment: All module outputs are mapped to 64-D:
- Graph: (already 64-D).
- Emotion/temporal:
- Text: (from Section 3.5).
- Feature fusion: We then integrate the streams by concatenation followed by a linear layer:
- Classifier: A two-layer MLP with Dropout and LeakyReLU outputs the fake probability:
- Loss with class imbalance: We employ weighted binary cross-entropy:
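The weighted binary cross-entropy can be written as below. The choice w_pos = N_real / N_fake is a common convention for class imbalance and is our assumption, not a value stated in the paper.

```python
import numpy as np

def weighted_bce(y_true, p_pred, w_pos, w_neg=1.0, eps=1e-7):
    """Weighted binary cross-entropy; w_pos up-weights the positive (fake) class."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)  # avoid log(0)
    loss = -(w_pos * y * np.log(p) + w_neg * (1 - y) * np.log(1 - p))
    return float(loss.mean())

# E.g., on Twitter16 (205 fake / 613 real), the fake class would get weight ~613/205 ≈ 3.
loss = weighted_bce([1, 0, 1, 0], [0.9, 0.1, 0.6, 0.3], w_pos=3.0)
```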
4. Experiments and Results
4.1. Experimental Setup
4.1.1. Datasets and Splits
- Weibo21: 4488 fake/4640 real; includes headline, body, user interactions (retweet/comment/like), timestamps, and user IDs.
- Twitter16: 205 fake/613 real (total 818 items); includes source tweets, comment trees, timestamps, and user metadata.
4.1.2. Experimental Settings
4.2. Baselines and Overall Results
- Graph-centric baselines: TextGCN, SA-HyperGAT, HGT [8];
- Our method: C-STEER (text semantics + life-cycle emotions + BiLSTM with attention + lightweight graph).
- Versus text-centric baselines: On Weibo21, C-STEER outperforms strong text-centric models such as SSE-BERT (and the text-heavy temporal detector TDEI) by a notable margin (F1-macro: 91.6% vs. 89.7%). This highlights the limitation of relying solely on static semantics: by integrating temporal dynamics and propagation structure, our model captures anomalous diffusion signals that pure-text methods cannot perceive [20,26].
- Versus graph-based baselines: Crucially, C-STEER surpasses the advanced graph model SA-HyperGAT, reinforcing our core hypothesis. As argued in the introduction, models like SA-HyperGAT can be confused by viral real breaking news whose propagation graphs resemble those of fake news. By introducing life-cycle-aware emotional dynamics, C-STEER supplies the missing decision axis to disambiguate emotion-driven misinformation from fact-driven hot events, directly remedying a key shortcoming of existing graph approaches [8,9,13], as shown in Figure 5.
4.3. Ablation Study
- w/o LifeCycle-Emotion: remove the 13-dimensional stage-wise emotion feature vector (defined in Section 3.4) and use global emotion statistics instead;
- w/o BiLSTM: drop sequence dependence; keep only stage-wise emotion statistics;
- w/o Attn: keep BiLSTM but remove attention (uniform averaging over timesteps);
- w/o Graph: exclude graph features entirely;
- Text-only: keep the text module only.
4.4. Validating Life-Cycle Segmentation
- Global-Emotion: no stage segmentation; global statistics only;
- LifeCycle-Emotion (ours): stage-wise (initiation/burst/decay) statistics plus a cross-stage dynamic feature [9].
4.5. Early Detection Capability
4.6. Cross-Platform Generalization and Robustness
4.7. Robustness and Sensitivity Analyses
4.8. Computational Efficiency and Scalability
4.9. Experimental Summary
4.10. Ethical Considerations
5. Conclusions
- Time-aware emotion dynamics are the key driver of the gains. Removing life-cycle emotions or temporal attention yields the most pronounced performance drops, validating the discriminative value of combining stage differences with temporal dependencies.
Limitations and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kolev, V.; Weiss, G.; Spanakis, G. FOREAL: RoBERTa Model for Fake News Detection based on Emotions. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022), Online, 3–5 February 2022; Volume 2, pp. 429–440.
- Nassif, A.B.; Elnagar, A.; Elgendy, O.; Afadar, Y. Arabic fake news detection based on deep contextualized embedding models. Neural Comput. Appl. 2022, 34, 16019–16032.
- Al-Yahya, M.; Al-Khalifa, H.; Al-Baity, H.; AlSaeed, D.; Essam, A. Arabic Fake News Detection: Comparative Study of Neural Networks and Transformer-Based Approaches. Complexity 2021, 2021, 5516945.
- Hu, G.; Ding, Y.; Qi, S.; Wang, X.; Liao, Q. Multi-depth graph convolutional networks for fake news detection. In CCF International Conference on Natural Language Processing and Chinese Computing; Springer International Publishing: Cham, Switzerland, 2019; pp. 698–710.
- Li, T.; Sun, Y.; Hsu, S.; Li, Y.; Wong, R.C.W. Fake news detection with heterogeneous transformer. arXiv 2022, arXiv:2205.03100.
- Hatfield, E.; Cacioppo, J.T.; Rapson, R.L. Emotional Contagion. Curr. Dir. Psychol. Sci. 1993, 2, 96–100.
- Granovetter, M.S. The Strength of Weak Ties. Am. J. Sociol. 1973, 78, 1360–1380.
- Dong, D.; Lin, F.; Li, G.; Liu, B. Sentiment-aware fake news detection on social media with hypergraph attention networks. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 2174–2180.
- Liu, Z.; Zhang, T.; Yang, K.; Thompson, P.; Yu, Z.; Ananiadou, S. Emotion detection for misinformation: A review. Inf. Fusion 2024, 107, 102300.
- Rogers, E.M.; Singhal, A.; Quinlan, M.M. Diffusion of Innovations. In An Integrated Approach to Communication Theory and Research; Routledge: London, UK, 2014; pp. 432–448.
- Katz, E. Utilization of mass communication by the individual. In The Uses of Mass Communications: Current Perspectives on Gratifications Research; Sage: Beverly Hills, CA, USA, 1974; pp. 19–32.
- Lu, D.; Hong, D. Emotional contagion: Research on the influencing factors of social media users’ negative emotional communication during the COVID-19 pandemic. Front. Psychol. 2022, 13, 931835.
- Chu, M.; Song, W.; Zhao, Z.; Chen, T.; Chiang, Y.C. Emotional contagion on social media and the simulation of intervention strategies after a disaster event: A modeling study. Humanit. Soc. Sci. Commun. 2024, 11, 968.
- Wei, C.; Nawi, H.M.; Naeem, S.B. The uses and gratifications (U&G) model for understanding fake news sharing behavior on social media. J. Acad. Libr. 2024, 50, 102938.
- Zhang, Y.; Ma, X.; Wu, J.; Yang, J.; Fan, H. Heterogeneous Subgraph Transformer for Fake News Detection. In Proceedings of the ACM Web Conference 2024 (WWW ’24), Singapore, 13–17 May 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 1272–1282.
- Wu, J.; Xu, W.; Liu, Q.; Wu, S.; Wang, L. Adversarial contrastive learning for evidence-aware fake news detection with graph neural networks. IEEE Trans. Knowl. Data Eng. 2023, 36, 5591–5604.
- Ma, J.; Gao, W.; Wong, K.F. Rumor Detection on Twitter with Tree-Structured Recursive Neural Networks; Association for Computational Linguistics: Stroudsburg, PA, USA, 2018.
- Saha, K.; Kobti, Z. Debertnext: A multimodal fake news detection framework. In International Conference on Computational Science; Springer Nature: Cham, Switzerland, 2023; pp. 348–356.
- Miao, X.; Rao, D.; Jiang, Z. Syntax and sentiment enhanced bert for earliest rumor detection. In CCF International Conference on Natural Language Processing and Chinese Computing; Springer International Publishing: Cham, Switzerland, 2021; pp. 570–582.
- Wei, S.; Wu, B.; Xiang, A.; Zhu, Y.; Song, C. DGTR: Dynamic graph transformer for rumor detection. Front. Res. Metrics Anal. 2023, 7, 1055348.
- Abdali, S.; Shaham, S.; Krishnamachari, B. Multi-modal misinformation detection: Approaches, challenges and opportunities. ACM Comput. Surv. 2024, 57, 1–29.
- Luvembe, A.M.; Li, W.; Li, S.; Liu, F.; Xu, G. Dual emotion based fake news detection: A deep attention-weight update approach. Inf. Process. Manag. 2023, 60, 103354.
- Zhang, X.; Cao, J.; Li, X.; Sheng, Q.; Zhong, L.; Shu, K. Mining dual emotion for fake news detection. In Proceedings of the Web Conference 2021 (WWW ’21), Ljubljana, Slovenia, 19–23 April 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 3465–3476.
- Padalko, H.; Chomko, V.; Chumachenko, D. A novel approach to fake news classification using LSTM-based deep learning models. Front. Big Data 2024, 6, 1320800.
- Amer, E.; Kwak, K.S.; El-Sappagh, S. Context-based fake news detection model relying on deep learning models. Electronics 2022, 11, 1255.
- Wang, C.; Zhou, B.; Tu, H.; Liu, Y. Rumor detection on social media using temporal dynamic structure and emotional information. In Proceedings of the 2021 IEEE Sixth International Conference on Data Science in Cyberspace (DSC), Shenzhen, China, 9–11 October 2021; pp. 16–22.
- Weeks, B.E. Emotion, Digital Media, and Misinformation. In Emotions in the Digital World: Exploring Affective Experience and Expression in Online Interactions; Nabi, R.L., Myrick, J.G., Eds.; Oxford University Press: New York, NY, USA, 2023; pp. 422–442.
- Zhu, J.; Gao, C.; Yin, Z.; Li, X.; Kurths, J. Propagation structure-aware graph transformer for robust and interpretable fake news detection. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 4652–4663.
- Farhoudinia, B.; Ozturkcan, S.; Kasap, N. Emotions unveiled: Detecting COVID-19 fake news on social media. Humanit. Soc. Sci. Commun. 2024, 11, 640.
- Salamanos, N.; Leonidou, P.; Laoutaris, N.; Sirivianos, M.; Aspri, M.; Paraschiv, M. HyperGraphDis: Leveraging hypergraphs for contextual and social-based disinformation detection. Proc. Int. AAAI Conf. Web Soc. Media 2024, 18, 1381–1394.
- Gong, S.; Sinnott, R.O.; Qi, J.; Paris, C. Fake news detection through graph-based neural networks: A survey. arXiv 2023, arXiv:2307.12639.
- Nasser, M.; Arshad, N.I.; Ali, A.; Alhussian, H.; Saeed, F.; Da’u, A.; Nafea, I. A systematic review of multimodal fake news detection on social media using deep learning models. Results Eng. 2025, 26, 104752.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers); Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4171–4186.
- He, P.; Gao, J.; Chen, W. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv 2021, arXiv:2111.09543.
- Zhao, Z.; Zhao, J.; Sano, Y.; Levy, O.; Takayasu, H.; Takayasu, M.; Li, D.; Wu, J.; Havlin, S. Fake news propagates differently from real news even at early stages of spreading. EPJ Data Sci. 2020, 9, 7.
- Pfeffer, J.; Matter, D.; Jaidka, K.; Varol, O.; Mashhadi, A.; Lasser, J.; Assenmacher, D.; Wu, S.; Yang, D.; Brantner, C.; et al. Just another day on Twitter: A complete 24 hours of Twitter data. Proc. Int. AAAI Conf. Web Soc. Media 2023, 17, 1073–1081.
- Hu, W.; Wang, Y.; Jia, Y.; Liao, Q.; Zhou, B. A multi-modal prompt learning framework for early detection of fake news. Proc. Int. AAAI Conf. Web Soc. Media 2024, 18, 651–662.
- Liu, Q.; Wu, J.; Wu, S.; Wang, L. Out-of-distribution evidence-aware fake news detection via dual adversarial debiasing. IEEE Trans. Knowl. Data Eng. 2024, 36, 6801–6813.
- Pfänder, J.; Altay, S. Spotting false news and doubting true news: A systematic review and meta-analysis of news judgements. Nat. Hum. Behav. 2025, 9, 688–699.
- Quelle, D.; Cheng, C.Y.; Bovet, A.; Hale, S.A. Lost in translation: Using global fact-checks to measure multilingual misinformation prevalence, spread, and evolution. EPJ Data Sci. 2025, 14, 22.
- Shen, X.; Huang, M.; Hu, Z.; Cai, S.; Zhou, T. Multimodal Fake News Detection with Contrastive Learning and Optimal Transport. Front. Comput. Sci. 2024, 6, 1473457.










| Communication Theory | Key Role/Insight | Empirical Measurement/Features | Corresponding Module |
|---|---|---|---|
| Diffusion Theory | Characterize stage differences via propagation rate | Metrics: Propagation Velocity (v), Peak Detection (), Adaptive Windows () | Life-cycle segmentation module |
| Uses and Gratifications | Interaction motives shape emotional expression and volatility | Features: Negative-Valence Ratio (venting needs), User Entropy (diversity of needs), Emotion Std. (conflicting stances) | Stage-wise emotion feature module |
| Emotional Contagion Theory | Emotions diffuse along social links and synchronize/converge | Mechanisms: BiLSTM Temporal Dependencies, Attention Weights (), Emotion-Similarity Edge Weights | Edge-weight design and BiLSTM-based temporal module |
| Social Network Theory | Heterogeneous ties and the strength-of-weak-ties affect diffusion | Structure: Heterogeneous Graph (User–News/User–User edges), Neighbor Aggregation | Heterogeneous graph structure module |
| Stage | Window | Diffusion-Scope Criterion | Notes |
|---|---|---|---|
| Initiation | Pre-global-peak 20% window | User coverage and retweet-chain depth remain low | Captures the nascent phase before acceleration |
| Burst | Global-peak ± (20%/30%) window | Coverage and chain depth rise rapidly; core nodes emerge | Propagation rate, emotional volatility, and interaction density increase markedly |
| Decay | Post-global-peak | Coverage approaches saturation; new interactions become sparse | Enters plateau–decline; emotions trend toward neutrality |
| Feature Name | Stage | Type | Description |
|---|---|---|---|
| Initiation—Emotion Mean | Initiation | Static | Stage-level emotional tendency (time-weighted) |
| Initiation—Emotion Std. | Initiation | Static | Amplitude of emotional fluctuations within the stage |
| Initiation—Negative-Valence Ratio | Initiation | Static | Proportion of negative stances |
| Initiation—User Entropy | Initiation | Static | Diversity (concentration) of interacting users |
| Burst—Emotion Mean | Burst | Static | Stage-level emotional tendency (time-weighted) |
| Burst—Emotion Std. | Burst | Static | Amplitude of emotional fluctuations within the stage |
| Burst—Negative-Valence Ratio | Burst | Static | Proportion of negative stances |
| Burst—User Entropy | Burst | Static | Diversity (concentration) of interacting users |
| Decay—Emotion Mean | Decay | Static | Stage-level emotional tendency (time-weighted) |
| Decay—Emotion Std. | Decay | Static | Amplitude of emotional fluctuations within the stage |
| Decay—Negative-Valence Ratio | Decay | Static | Proportion of negative stances |
| Decay—User Entropy | Decay | Static | Diversity (concentration) of interacting users |
| Cross-stage Emotion Change Rate | Cross-stage | Dynamic | Trend from venting toward reversion/neutrality across stages |
| Metric | Stage | Fake News (Mean) | Real News (Mean) | Absolute Diff. | Relative Diff. | Notes |
|---|---|---|---|---|---|---|
| Emotion Std. | Burst | 0.40 | 0.27 | 0.13 | 48% | Conflicts are more salient during burst; gap is moderately large. |
| Negative-Valence Ratio | Initiation | 0.62 | 0.41 | 0.21 | 51% | Fake news shows stronger incitement early on, with more concentrated negativity. |
| User Entropy | Initiation | 1.80 | 2.40 | −0.60 | −25% | Fake news is more influenced by a few homogeneous clusters, yielding lower entropy. |
| Cross-Stage Emotion Change Rate | Cross-stage | 0.34 | 0.22 | 0.12 | 55% | Fake news returns to neutrality faster, showing larger shifts. |
| Method | Acc | F1-Macro | F1-Fake | F1-Real | AUC |
|---|---|---|---|---|---|
| BERT | 85.1 | 87.3 | — | — | 91.8 |
| SSE-BERT | 88.8 | 90.6 | — | — | 94.0 |
| RoBERTa-wwm-ext | 89.4 | 91.0 | — | — | 94.3 |
| DeBERTa-v3 | 89.2 | 90.5 | — | — | 94.0 |
| TextGCN | 86.0 | 88.5 | — | — | 92.1 |
| SA-HyperGAT | 87.6 | 89.7 | — | — | 93.0 |
| HGT | 88.1 | 90.2 | — | — | 93.4 |
| RvNN | 87.0 | 89.2 | — | — | 92.8 |
| TDEI | 89.6 | 91.0 | — | — | 94.5 |
| MDE | 89.9 | 91.1 | — | — | 94.2 |
| GRU-TS | 88.5 | 90.4 | — | — | 93.7 |
| C-STEER | 93.2 ± 0.3 | 91.6 ± 0.4 | 91.0 | 92.1 | 95.5 |
| Method | Acc | F1-Macro | F1-Fake | F1-Real | AUC |
|---|---|---|---|---|---|
| BERT | 83.0 | 85.0 | — | — | 90.6 |
| SSE-BERT | 87.9 | 89.4 | — | — | 93.2 |
| RoBERTa | 88.1 | 89.9 | — | — | 93.6 |
| DeBERTa-v3 | 88.6 | 90.0 | — | — | 93.8 |
| TextGCN | 84.7 | 86.2 | — | — | 91.4 |
| SA-HyperGAT | 84.9 | 86.9 | — | — | 91.8 |
| HGT | 85.5 | 87.3 | — | — | 92.1 |
| RvNN | 84.6 | 86.2 | — | — | 91.6 |
| TDEI | 86.9 | 88.7 | — | — | 93.7 |
| MDE | 86.5 | 88.8 | — | — | 93.5 |
| GRU-TS | 86.0 | 87.8 | — | — | 92.5 |
| C-STEER | 91.2 ± 0.8 | 90.1 ± 0.8 | 89.5 | 90.7 | 94.7 |
| Variants | Acc | F1 (Macro) | ΔF1 vs. C-STEER | Significance |
|---|---|---|---|---|
| C-STEER | 93.2 ± 0.3 | 91.6 ± 0.4 | — | — |
| w/o LifeCycleEmotion | | | | ** |
| w/o BiLSTM | | | | ** |
| w/o Attn | | | | * |
| w/o Graph | | | | ns |
| Text-only | | | | ** |
| Method | Acc | F1 (Macro) | ΔF1 |
|---|---|---|---|
| Global-Emotion | — | ||
| LifeCycle-Emotion |
| Time Window | Acc | F1 (Macro) | ΔF1 vs. Full-Cycle |
|---|---|---|---|
| 3 h | |||
| 6 h | |||
| 9 h | |||
| 12 h | |||
| 18 h | |||
| 24 h | |||
| Full-cycle | | | 0 |
| Train → Test | Acc | F1 (Macro) | ΔF1 vs. In-Domain |
|---|---|---|---|
| Weibo → Twitter | | | (vs. Twitter 90.1) |
| Twitter → Weibo | | | (vs. Weibo 91.6) |
| Setting | F1 | Setting | F1 | Setting | F1 |
|---|---|---|---|---|---|
| | 91.2 | | 91.2 | Boundary | 91.5 |
| | 91.6 | | 91.6 | No perturbation | 91.6 |
| | 91.3 | | 91.3 | Boundary | 91.4 |
| Scenario | Setting | F1 | vs. No-Noise |
|---|---|---|---|
| Random token deletion | 10% token | 90.9 | |
| Random token deletion | 30% token | 89.8 | |
| Timestamp jitter | h | 91.0 | |
| Emotion-label flips | 10% | 90.1 | |
| Graph edge deletion | 10% | 91.1 | |
| Graph edge deletion | 30% | 90.6 |
Share and Cite
Zhen, Z.; Li, Y. C-STEER: A Dynamic Sentiment-Aware Framework for Fake News Detection with Lifecycle Emotional Evolution. Informatics 2026, 13, 4. https://doi.org/10.3390/informatics13010004
