Search Results (383)

Search Parameters:
Keywords = fake information

21 pages, 559 KiB  
Review
Interest Flooding Attacks in Named Data Networking and Mitigations: Recent Advances and Challenges
by Simeon Ogunbunmi, Yu Chen, Qi Zhao, Deeraj Nagothu, Sixiao Wei, Genshe Chen and Erik Blasch
Future Internet 2025, 17(8), 357; https://doi.org/10.3390/fi17080357 - 6 Aug 2025
Abstract
Named Data Networking (NDN) represents a promising Information-Centric Networking architecture that addresses limitations of traditional host-centric Internet protocols by emphasizing content names rather than host addresses for communication. While NDN offers advantages in content distribution, mobility support, and built-in security features, its stateful forwarding plane introduces significant vulnerabilities, particularly Interest Flooding Attacks (IFAs). These attacks exploit the Pending Interest Table (PIT) by injecting malicious Interest packets for non-existent or unsatisfiable content, leading to resource exhaustion and denial-of-service conditions for legitimate users. This survey examines research advances in IFA detection and mitigation from 2013 to 2024, analyzing seven relevant published detection and mitigation strategies to provide current insights into this evolving security challenge. We establish a taxonomy of attack variants, including Fake Interest, Unsatisfiable Interest, Interest Loop, and Collusive models, while examining their operational characteristics and network performance impacts. Our analysis categorizes defense mechanisms into five primary approaches: rate-limiting strategies, PIT management techniques, machine learning and artificial intelligence methods, reputation-based systems, and blockchain-enabled solutions. These approaches are evaluated for their effectiveness, computational requirements, and deployment feasibility. The survey extends to domain-specific implementations in resource-constrained environments, examining adaptations for Internet of Things deployments, wireless sensor networks, and high-mobility vehicular scenarios. Five critical research directions are proposed: adaptive defense mechanisms against sophisticated attackers, privacy-preserving detection techniques, real-time optimization for edge computing environments, standardized evaluation frameworks, and hybrid approaches combining multiple mitigation strategies.
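
For intuition into the rate-limiting family of defenses the survey categorizes, here is a minimal sketch, assuming Python and a per-prefix token bucket; the `PrefixRateLimiter` class, its rates, and the prefix granularity are illustrative assumptions, not a mechanism from any surveyed paper.

```python
import time
from collections import defaultdict

class PrefixRateLimiter:
    """Token-bucket limiter keyed by name prefix (illustrative only).

    Each prefix earns `rate` tokens per second up to `burst`; an Interest
    is forwarded only if a token is available, bounding how quickly any
    one prefix can occupy Pending Interest Table (PIT) slots.
    """

    def __init__(self, rate: float = 10.0, burst: float = 20.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)   # tokens per prefix
        self.stamp = defaultdict(time.monotonic)   # last refill time

    def allow(self, prefix: str) -> bool:
        now = time.monotonic()
        # Refill tokens earned since the last Interest for this prefix.
        self.tokens[prefix] = min(
            self.burst,
            self.tokens[prefix] + (now - self.stamp[prefix]) * self.rate,
        )
        self.stamp[prefix] = now
        if self.tokens[prefix] >= 1.0:
            self.tokens[prefix] -= 1.0
            return True
        return False  # drop: suspected Interest flooding on this prefix

limiter = PrefixRateLimiter(rate=5.0, burst=10.0)
print(limiter.allow("/example/video"))  # True until the bucket drains
```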

22 pages, 454 KiB  
Article
You Understand, So I Understand: How a “Community of Knowledge” Shapes Trust and Credibility in Expert Testimony Evidence
by Ashley C. T. Jones and Morgan R. Haga
Behav. Sci. 2025, 15(8), 1071; https://doi.org/10.3390/bs15081071 - 6 Aug 2025
Abstract
Sloman and Rabb found support for the existence of the community of knowledge (CK) effect, which occurs when individuals are more likely to report understanding and being able to explain even fake scientific information when told that an expert understands the information. To date, no studies have attempted to replicate the original findings, let alone test the presence of the CK effect in realistic legal scenarios. Therefore, Study One replicated the original CK effect studies in a jury-eligible M-Turk sample (N = 291) using both Sloman and Rabb's experimental stimuli and new stimuli. Study Two then tested the presence of the CK effect using scientific testimony from a forensic evaluator in a mock court hearing (N = 396). Not only did the CK effect improve laypeople's perceptions of the scientific information in court, but it also improved their perceptions of the expert witness's credibility, increased the weight assigned to the scientific information, and increased the weight assigned to the expert testimony. This effect was mediated by participants' perceived similarity to the expert, supporting the theory behind the CK effect. These studies have important implications for the use of scientific information in court, which are discussed.
(This article belongs to the Special Issue Social Cognitive Processes in Legal Decision Making)

32 pages, 1885 KiB  
Article
Mapping Linear and Configurational Dynamics to Fake News Sharing Behaviors in a Developing Economy
by Claudel Mombeuil, Hugues Séraphin and Hemantha Premakumara Diunugala
Technologies 2025, 13(8), 341; https://doi.org/10.3390/technologies13080341 - 6 Aug 2025
Abstract
The proliferation of social media has paradoxically facilitated the widespread dissemination of fake news, impacting individuals, politics, economics, and society as a whole. Despite increasing scholarly research on this phenomenon, a significant gap exists regarding its dynamics in developing countries, particularly how predictors of fake news sharing interact, rather than merely their net effects. To acquire a more nuanced understanding of fake news sharing behavior, we identify both the direct effects and the complex interplay among key variables using a dual analytical framework, leveraging Structural Equation Modeling (SEM) for linear relationships and Fuzzy-Set Qualitative Comparative Analysis (fsQCA) to uncover asymmetric patterns. Specifically, we investigate the influence of news-find-me orientation, social media trust, information-sharing tendencies, and status-seeking motivation on the propensity for fake news sharing, as well as the moderating influence of social media literacy on these effects. Based on a cross-sectional survey of 1028 Haitian social media users, the SEM analysis revealed that news-find-me perception had a negative but statistically insignificant influence on fake news sharing behavior. In contrast, information sharing exhibited a significant negative association. Trust in social media was positively and significantly linked to fake news sharing behavior. Meanwhile, status-seeking motivation was positively associated with fake news sharing behavior, although the association did not reach statistical significance. Crucially, social media literacy moderated the effects of trust and information sharing. Interestingly, fsQCA identified three core configurations for fake news sharing: (1) low status seeking, (2) low information-sharing tendencies, and (3) a unique interaction of low news-find-me orientation and high social media trust. Furthermore, low social media literacy emerged as a direct core configuration. These findings underscore the urgent need to prioritize social media literacy as a key intervention in combating the dissemination of fake news.
(This article belongs to the Section Information and Communication Technologies)
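
As a concrete illustration of the fsQCA side of the dual framework, the sketch below shows the standard direct calibration step that converts raw survey scores into fuzzy-set memberships via a log-odds transform. The `calibrate` helper and the Likert anchor values are hypothetical; the paper's actual calibration thresholds are not reproduced here.

```python
import numpy as np

def calibrate(raw, full_non, crossover, full_mem):
    """Direct calibration of raw scores into fuzzy-set memberships.

    Uses the common log-odds method: the three anchors map to
    memberships of roughly 0.05, 0.50, and 0.95 respectively.
    """
    raw = np.asarray(raw, dtype=float)
    # Scale deviations from the crossover so the anchors hit log-odds of +/-3.
    upper = 3.0 / (full_mem - crossover)
    lower = 3.0 / (crossover - full_non)
    log_odds = np.where(raw >= crossover,
                        (raw - crossover) * upper,
                        (raw - crossover) * lower)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical 1-7 Likert anchors for a construct like "trust in social media".
print(calibrate([1, 4, 7], full_non=2, crossover=4, full_mem=6))
```
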
13 pages, 248 KiB  
Article
Fake News: Offensive or Defensive Weapon in Information Warfare
by Iuliu Moldovan, Norbert Dezso, Daniela Edith Ceană and Toader Septimiu Voidăzan
Soc. Sci. 2025, 14(8), 476; https://doi.org/10.3390/socsci14080476 - 30 Jul 2025
Abstract
Background and Objectives: Rumors, disinformation, and fake news are problems of contemporary society. We live in a world where the truth no longer holds much importance, and the line dividing truth from lies, real news from disinformation, becomes increasingly blurred and difficult to identify. The purpose of this study is to describe this concept, to draw attention to one of the "pandemics" of the 21st-century world, and to find methods by which we can defend ourselves against them. Materials and Methods: A cross-sectional study was conducted based on a sample of 442 respondents. Results: For 77.8% of the people surveyed, the concept of "fake news" is important in Romania. Regarding trust in the mass media, a clear dominance (72.4%) was observed among participants who have little trust in the mass media. Although 98.2% of participants reported detecting false information on the internet, 78.5% are occasionally deceived by the information provided. Of the participants, 47.3% acknowledged their vulnerability to disinformation. The internet was considered the main source of disinformation by 59% of the interviewed subjects. As for the best measure against disinformation, the study group was divided almost equally across the three possible answers, all of which were considered equally important: imposing legal restrictions and blocking the posting of certain news (35.4%), imposing stricter measures for authors (33.9%), and increasing vigilance among people (30.5%). Conclusions: According to the participants' responses, the main purposes of disinformation are propaganda, manipulation, distracting attention from the truth, making money, and misleading the population. In the perception of the study participants, the main intention of disinformation is manipulation.
(This article belongs to the Special Issue Disinformation and Misinformation in the New Media Landscape)
23 pages, 4256 KiB  
Article
A GAN-Based Framework with Dynamic Adaptive Attention for Multi-Class Image Segmentation in Autonomous Driving
by Bashir Sheikh Abdullahi Jama and Mehmet Hacibeyoglu
Appl. Sci. 2025, 15(15), 8162; https://doi.org/10.3390/app15158162 - 22 Jul 2025
Abstract
Image segmentation is a foundation of autonomous driving frameworks that enable vehicles to perceive and navigate their surrounding environment. It provides essential context for decision-making by dividing the image into meaningful parts such as roads, vehicles, pedestrians, and traffic signs. Precise segmentation supports safe navigation and collision avoidance, and compliance with traffic rules is critical for the seamless operation of self-driving cars. The most recent deep learning-based image segmentation models have demonstrated impressive performance in structured environments, yet they often fall short when applied to the complex and unpredictable conditions encountered in autonomous driving. This study proposes an Adaptive Ensemble Attention (AEA) mechanism within a Generative Adversarial Network (GAN) architecture to deal with dynamic and complex driving conditions. The AEA adaptively integrates self-, spatial, and channel attention, dynamically adjusting the contribution of each according to the input and its contextual relevance. The discriminator network of the GAN evaluates the segmentation mask created by the generator, distinguishing real from fake masks by examining a concatenated pair consisting of the original image and its mask. Adversarial training prompts the generator to produce masks that both align with the expected ground truth and appear realistic, and the exchange of information between generator and discriminator improves segmentation quality. To assess the accuracy of the proposed method, three widely used datasets, BDD100K, Cityscapes, and KITTI, were selected, on which the model achieved average IoU values of 89.46%, 89.02%, and 88.13%, respectively. These outcomes emphasize the model's effectiveness and consistency. Overall, it achieved a remarkable accuracy of 98.94% and an AUC of 98.4%, indicating strong improvements over state-of-the-art (SOTA) models.
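
A minimal sketch of the adversarial setup described above, assuming PyTorch: a PatchGAN-style discriminator that scores an (image, mask) pair concatenated along the channel axis. Layer sizes, the class count, and the `MaskDiscriminator` name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskDiscriminator(nn.Module):
    """Judges (image, segmentation mask) pairs as real or fake.

    The RGB image and the class-probability mask are concatenated on the
    channel axis, so the network scores how plausibly the mask fits the
    scene, in the spirit of the adversarial setup in the abstract.
    """

    def __init__(self, img_ch: int = 3, n_classes: int = 19):
        super().__init__()
        ch = img_ch + n_classes
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-level real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

disc = MaskDiscriminator()
img = torch.randn(2, 3, 128, 128)
mask = torch.randn(2, 19, 128, 128).softmax(dim=1)  # soft generator output
print(disc(img, mask).shape)  # torch.Size([2, 1, 31, 31])
```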

38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated based on these datasets using a cycle-consistent generative adversarial network. The results showed that, based on the average classification error rate, FGFNet achieved 0.3% and 0.3% on the VERA and PLUS datasets, respectively, demonstrating superior performance compared to existing state-of-the-art spoof detection methods.
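
To make the frequency-domain idea concrete, here is a toy masked-FFT feature extractor in PyTorch in the spirit of the masked fast Fourier transform component. The low-pass mask shape and the `keep` ratio are assumptions for illustration; the paper's actual masking strategy may differ.

```python
import torch

def masked_fft_features(img: torch.Tensor, keep: float = 0.25) -> torch.Tensor:
    """Toy masked-FFT front end: keep only low-frequency coefficients.

    Generative artifacts often show up in the frequency spectrum, so a
    mask over the centered 2D FFT is one way to expose them; `keep` is
    the fraction of each spatial axis retained around the center.
    """
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    h, w = img.shape[-2:]
    mask = torch.zeros(h, w)
    dh, dw = int(h * keep / 2), int(w * keep / 2)
    mask[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw] = 1.0
    spec = spec * mask
    # Log-magnitude is a common, stable representation of the spectrum.
    return torch.log1p(spec.abs())

x = torch.rand(1, 1, 64, 64)          # stand-in for a palm-vein image
print(masked_fft_features(x).shape)   # torch.Size([1, 1, 64, 64])
```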

35 pages, 1458 KiB  
Article
User Comment-Guided Cross-Modal Attention for Interpretable Multimodal Fake News Detection
by Zepu Yi, Chenxu Tang and Songfeng Lu
Appl. Sci. 2025, 15(14), 7904; https://doi.org/10.3390/app15147904 - 15 Jul 2025
Abstract
The proliferation of fake news in the digital age poses a pressing challenge, with a profound and harmful impact on societal structures, including the misguidance of public opinion, the erosion of social trust, and the exacerbation of social polarization. Current fake news detection methods are largely limited to superficial text analysis or basic text–image integration, which face significant limitations in accurately identifying deceptive information. To bridge this gap, we propose the UC-CMAF framework, which comprehensively integrates news text, images, and user comments through an adaptive co-attention fusion mechanism. The UC-CMAF workflow consists of four key subprocesses: multimodal feature extraction, cross-modal adaptive collaborative attention fusion of news text and images, cross-modal attention fusion of user comments with news text and images, and finally, feeding the fused features into a fake news detector. Specifically, we introduce multi-head cross-modal attention heatmaps and comment importance visualizations to provide interpretability support for the model's predictions, revealing the key semantic areas and user perspectives that influence its judgments. Through the cross-modal adaptive collaborative attention mechanism, UC-CMAF achieves deep semantic alignment between news text and images and uses social signals from user comments to build an enhanced credibility evaluation path, offering a new paradigm for interpretable fake information detection. Experimental results demonstrate that UC-CMAF consistently outperforms 15 baseline models across two benchmark datasets, achieving F1 scores of 0.894 and 0.909. These results validate the effectiveness of its adaptive cross-modal attention mechanism and the incorporation of user comments in enhancing both detection accuracy and interpretability.
(This article belongs to the Special Issue Explainable Artificial Intelligence Technology and Its Applications)
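
A rough sketch, assuming PyTorch, of one comment-guided cross-attention hop like those in the third subprocess: comment features query the concatenated text and image tokens, and the attention weights double as an interpretability map. All dimensions and the `CrossModalAttention` name are invented for illustration.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One cross-attention hop: comments query the news content.

    Queries come from user-comment features; keys/values come from the
    concatenated text and image token sequences, loosely mirroring the
    comment-guided fusion described in the abstract.
    """

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, comments, text_tokens, image_tokens):
        content = torch.cat([text_tokens, image_tokens], dim=1)
        fused, weights = self.attn(comments, content, content)
        # `weights` doubles as an interpretability map over content tokens.
        return self.norm(comments + fused), weights

layer = CrossModalAttention()
c = torch.randn(2, 8, 256)    # 8 comment tokens
t = torch.randn(2, 32, 256)   # 32 text tokens
v = torch.randn(2, 49, 256)   # 49 image patches
out, attn_map = layer(c, t, v)
print(out.shape, attn_map.shape)  # (2, 8, 256) (2, 8, 81)
```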

23 pages, 3614 KiB  
Article
A Multimodal Semantic-Enhanced Attention Network for Fake News Detection
by Weijie Chen, Yuzhuo Dang and Xin Zhang
Entropy 2025, 27(7), 746; https://doi.org/10.3390/e27070746 - 12 Jul 2025
Abstract
The proliferation of social media platforms has triggered an unprecedented increase in multimodal fake news, creating pressing challenges for content authenticity verification. Current fake news detection systems predominantly rely on isolated unimodal analysis (text or image), failing to exploit critical cross-modal correlations or leverage latent social context cues. To bridge this gap, we introduce SCCN (Semantic-enhanced Cross-modal Co-attention Network), a novel framework that synergistically combines multimodal features with refined social graph signals. Our approach innovatively combines text, image, and social relation features through a hierarchical fusion framework. First, we extract modality-specific features and enhance semantics by identifying entities in both text and visual data. Second, an improved co-attention mechanism selectively integrates social relations, removing irrelevant connections to reduce noise and exploring latent informative links. Finally, the model is optimized via cross-entropy loss with entropy minimization. Experimental results on benchmark datasets (PHEME and Weibo) show that SCCN consistently outperforms existing approaches, achieving relative accuracy improvements of 1.7% and 1.6% over the best-performing baseline on each dataset.
(This article belongs to the Section Multidisciplinary Applications)
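
The final optimization step, cross-entropy with entropy minimization, is compact enough to sketch directly. The snippet below assumes a binary real/fake head and a made-up `ent_weight` hyperparameter; it is an illustration of the loss family, not the paper's tuned objective.

```python
import torch
import torch.nn.functional as F

def sccn_style_loss(logits, labels, ent_weight: float = 0.1):
    """Cross-entropy plus an entropy-minimization term (illustrative).

    The extra term pushes predicted distributions toward low entropy,
    i.e., more confident real/fake decisions, as the abstract sketches.
    """
    ce = F.cross_entropy(logits, labels)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    return ce + ent_weight * entropy

logits = torch.randn(4, 2, requires_grad=True)   # real vs fake
labels = torch.tensor([0, 1, 1, 0])
loss = sccn_style_loss(logits, labels)
loss.backward()
print(float(loss))
```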

26 pages, 2178 KiB  
Article
Cross-Modal Fake News Detection Method Based on Multi-Level Fusion Without Evidence
by Ping He, Hanxue Zhang, Shufu Cao and Yali Wu
Algorithms 2025, 18(7), 426; https://doi.org/10.3390/a18070426 - 10 Jul 2025
Abstract
Although multimodal feature fusion in fake news detection can integrate complementary information from different modalities, semantic inconsistency across modalities makes fusion difficult, and information can be lost when fusion is performed in a single pass. In addition, although detection can be improved by drawing on external evidence, such evidence lags behind the news it must verify, the reliability and completeness of its sources are difficult to guarantee, and it may introduce noise that interferes with the model's judgment. Therefore, a cross-modal fake news detection method based on evidence-free multilevel fusion (CM-MLF) is proposed. The method resolves semantic inconsistency through cross-modal alignment and then uses attention mechanisms to fuse text and image features at multiple levels, without the assistance of external evidential features, to further enhance the expressiveness of the features. Experiments show that the method achieves better detection results on multiple benchmark datasets, effectively improving the accuracy and robustness of cross-modal fake news detection.
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))
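
As a sketch of what evidence-free multilevel fusion can look like in PyTorch: bidirectional cross-attention aligns the two modalities, then a learned gate merges them. This is an illustrative reading of the abstract under assumed layer sizes, not the CM-MLF architecture.

```python
import torch
import torch.nn as nn

class TwoLevelFusion(nn.Module):
    """Sketch of evidence-free multilevel fusion of text and image.

    Level 1 aligns the modalities with cross-attention in both
    directions; level 2 merges the aligned features through a learned
    gate. Layer sizes are illustrative, not the paper's.
    """

    def __init__(self, dim: int = 256):
        super().__init__()
        self.t2i = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.i2t = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text, image):
        t, _ = self.t2i(text, image, image)   # text attends to image
        i, _ = self.i2t(image, text, text)    # image attends to text
        t, i = t.mean(dim=1), i.mean(dim=1)   # pool token sequences
        g = self.gate(torch.cat([t, i], dim=-1))
        return g * t + (1 - g) * i            # second-level gated fusion

fusion = TwoLevelFusion()
print(fusion(torch.randn(2, 32, 256), torch.randn(2, 49, 256)).shape)
```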

25 pages, 2297 KiB  
Article
Detecting Fake News in Urdu Language Using Machine Learning, Deep Learning, and Large Language Model-Based Approaches
by Muhammad Shoaib Farooq, Syed Muhammad Asadullah Gilani, Muhammad Faraz Manzoor and Momina Shaheen
Information 2025, 16(7), 595; https://doi.org/10.3390/info16070595 - 10 Jul 2025
Abstract
Fake news is false or misleading information that looks like real news and spreads through traditional and social media. It has a significant impact on social life, especially in politics. In Pakistan, where Urdu is the main language, detecting fake news in Urdu is difficult because few effective systems exist for the language. This study aims to solve this problem by creating a detailed pipeline and training models using machine learning, deep learning, and large language models (LLMs). The research uses document- and class-level features to detect fake news in Urdu. Different models were tested, including machine learning models such as Naïve Bayes and Support Vector Machine (SVM), as well as deep learning models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, which used embedding techniques. The study also used advanced models such as BERT and GPT to improve the detection process. These models were first evaluated on the Bend-the-Truth dataset, where CNN achieved an F1 score of 72%, Naïve Bayes scored 78%, and the BERT Transformer achieved the highest F1 score of 79%. To further validate the approach, the models were tested on a more diverse dataset, Ax-to-Grind, where both SVM and LSTM achieved an F1 score of 89%, while BERT outperformed them with an F1 score of 93%.
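
The classical end of the model lineup is easy to reproduce in outline. Below is a minimal scikit-learn sketch with Naïve Bayes and SVM baselines for Urdu text; the toy corpus and the character n-gram settings are placeholders, not the paper's preprocessing, and the real Bend-the-Truth or Ax-to-Grind data would be loaded instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder corpus standing in for the paper's Urdu datasets.
texts = ["یہ خبر درست ہے", "یہ خبر جعلی ہے", "حکومت نے اعلان کیا", "جھوٹی افواہ پھیل گئی"]
labels = [1, 0, 1, 0]  # 1 = real, 0 = fake

for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    # Character n-grams sidestep Urdu tokenization/segmentation issues.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), clf
    )
    model.fit(texts, labels)
    print(name, model.predict(["نئی جعلی خبر"]))
```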

23 pages, 1523 KiB  
Article
Deep One-Directional Neural Semantic Siamese Network for High-Accuracy Fact Verification
by Muchammad Naseer, Jauzak Hussaini Windiatmaja, Muhamad Asvial and Riri Fitri Sari
Big Data Cogn. Comput. 2025, 9(7), 172; https://doi.org/10.3390/bdcc9070172 - 30 Jun 2025
Abstract
Fake news has eroded trust in credible news sources, driving the need for tools to verify the accuracy of circulating information. Fact verification addresses this issue by classifying claims as Supports (S), Refutes (R), or Not Enough Info (NEI) based on evidence. Neural Semantic Matching Networks (NSMN) is an algorithm designed for this purpose, but its reliance on BiLSTM has shown limitations, particularly overfitting. This study aims to enhance NSMN for fact verification through a structured framework comprising encoding, alignment, matching, and output layers. The proposed approach employed Siamese MaLSTM in the matching layer and introduced the Manhattan Fact Relatedness Score (MFRS) in the output layer, culminating in a novel algorithm called Deep One-Directional Neural Semantic Siamese Network (DOD–NSSN). Performance evaluation compared DOD–NSSN with NSMN and transformer-based algorithms (BERT, RoBERTa, XLM, XL-Net). Results demonstrated that DOD–NSSN achieved 91.86% accuracy and consistently outperformed other models, achieving over 95% accuracy across diverse topics, including sports, government, politics, health, and industry. The findings highlight the DOD–NSSN model's capability to generalize effectively across various domains, providing a robust tool for automated fact verification.
(This article belongs to the Special Issue Machine Learning and AI Technology for Sustainable Development)
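
The Manhattan similarity at the heart of a Siamese MaLSTM is compact enough to sketch: a shared LSTM encodes both claim and evidence, and similarity is the exponential of the negative L1 distance between the final hidden states, giving a score in (0, 1]. Vocabulary size, dimensions, and the `MaLSTMScorer` class are assumptions; the MFRS output layer built on top is not reproduced here.

```python
import torch
import torch.nn as nn

class MaLSTMScorer(nn.Module):
    """Siamese LSTM with a Manhattan similarity head (illustrative)."""

    def __init__(self, vocab: int = 5000, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # shared encoder

    def encode(self, ids):
        _, (h, _) = self.lstm(self.embed(ids))
        return h[-1]                      # final hidden state

    def forward(self, claim_ids, evidence_ids):
        d = (self.encode(claim_ids) - self.encode(evidence_ids)).abs().sum(-1)
        return torch.exp(-d)              # Manhattan similarity in (0, 1]

scorer = MaLSTMScorer()
claim = torch.randint(0, 5000, (2, 12))
evidence = torch.randint(0, 5000, (2, 40))
print(scorer(claim, evidence))            # one score per claim/evidence pair
```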

22 pages, 1326 KiB  
Article
The Detection Optimization of Low-Quality Fake Face Images: Feature Enhancement and Noise Suppression Strategies
by Ge Wang, Yue Han, Fangqian Xu, Yuteng Gao and Wenjie Sang
Appl. Sci. 2025, 15(13), 7325; https://doi.org/10.3390/app15137325 - 29 Jun 2025
Abstract
With the rapid advancement of deepfake technology, the detection of low-quality synthetic facial images has become increasingly challenging, particularly in cases involving low resolution, blurriness, or noise, under which traditional detection methods often perform poorly. To address these limitations, this paper proposes a novel algorithm, YOLOv9-ARC, designed to enhance the accuracy of detecting low-quality fake facial images. The algorithm introduces an innovative convolution module, Adaptive Kernel Convolution (AKConv), which dynamically adjusts kernel sizes to effectively extract image features, thereby mitigating the challenges posed by low resolution, blurriness, and noise. Furthermore, a hybrid attention mechanism, the Convolutional Block Attention Module (CBAM), is integrated to amplify salient features while suppressing irrelevant information. Extensive experiments demonstrate that YOLOv9-ARC achieves a mean average precision (mAP) of 75.1% on the DFDC (DeepFake Detection Challenge) dataset, a 3.5% improvement over the baseline model.
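
CBAM is a standard published module, so a faithful minimal version can be sketched in PyTorch: channel attention from pooled descriptors followed by spatial attention from channel-pooled maps. The reduction ratio and kernel size are the usual defaults, not the paper's tuned values, and the integration into YOLOv9 and the AKConv module are not shown.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-pooled maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```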

24 pages, 595 KiB  
Article
An Empirical Comparison of Machine Learning and Deep Learning Models for Automated Fake News Detection
by Yexin Tian, Shuo Xu, Yuchen Cao, Zhongyan Wang and Zijing Wei
Mathematics 2025, 13(13), 2086; https://doi.org/10.3390/math13132086 - 25 Jun 2025
Abstract
Detecting fake news is a critical challenge in natural language processing (NLP), demanding solutions that balance accuracy, interpretability, and computational efficiency. Despite advances in NLP, systematic empirical benchmarks that directly compare both classical and deep models—across varying input richness and with careful attention to interpretability and computational tradeoffs—remain underexplored. In this study, we systematically evaluate the mathematical foundations and empirical performance of five representative models for automated fake news classification: three classical machine learning algorithms (Logistic Regression, Random Forest, and Light Gradient Boosting Machine) and two state-of-the-art deep learning architectures (A Lite Bidirectional Encoder Representations from Transformers—ALBERT and Gated Recurrent Units—GRUs). Leveraging the large-scale WELFake dataset, we conduct rigorous experiments under both headline-only and headline-plus-content input scenarios, providing a comprehensive assessment of each model's capability to capture linguistic, contextual, and semantic cues. We analyze each model's optimization framework, decision boundaries, and feature importance mechanisms, highlighting the empirical tradeoffs between representational capacity, generalization, and interpretability. Our results show that transformer-based models, especially ALBERT, achieve state-of-the-art performance (macro F1 up to 0.99) with rich context, while classical ensembles remain viable for constrained settings. These findings directly inform practical fake news detection.
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)
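
The two input scenarios are straightforward to mirror in scikit-learn. This sketch compares two of the classical models under headline-only and headline-plus-content inputs; the four placeholder rows stand in for WELFake, and the reported scores are training accuracy on toy data, not benchmark results.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Placeholder rows; WELFake provides (title, text, label) at scale.
titles = ["Shock cure found", "Senate passes budget", "Aliens run banks", "Rain due Tuesday"]
bodies = ["Doctors hate this trick", "The vote was 62-38", "Sources say so", "Forecast for the week"]
labels = [0, 1, 0, 1]  # 0 = fake, 1 = real

for scenario, docs in [("headline only", titles),
                       ("headline + content", [t + " " + b for t, b in zip(titles, bodies)])]:
    for name, clf in [("LogReg", LogisticRegression(max_iter=1000)),
                      ("RandomForest", RandomForestClassifier(n_estimators=50))]:
        model = make_pipeline(TfidfVectorizer(), clf).fit(docs, labels)
        print(scenario, name, model.score(docs, labels))
```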

21 pages, 4050 KiB  
Article
SAFE-GTA: Semantic Augmentation-Based Multimodal Fake News Detection via Global-Token Attention
by Like Zhang, Chaowei Zhang, Zewei Zhang and Yuchao Huang
Symmetry 2025, 17(6), 961; https://doi.org/10.3390/sym17060961 - 17 Jun 2025
Abstract
Large pre-trained models (PLMs) have provided tremendous opportunities for multimodal fake news detection. However, existing multimodal fake news detection methods do not exploit the token-wise hierarchical semantics of news yielded by PLMs, and they rely heavily on contrastive learning while ignoring the symmetry between text and image in terms of abstraction level. This paper proposes a novel multimodal fake news detection method that balances the understanding of text and image by (1) designing a global-token cross-attention mechanism to capture the correlations between global text and token-wise image representations (or token-wise text and global image representations) obtained from BERT and ViT; (2) proposing a QK-sharing strategy within cross-attention that enforces model symmetry, reducing information redundancy and accelerating fusion without sacrificing representational power; and (3) deploying a semantic augmentation module that systematically extracts token-wise multilayered text semantics from stacked BERT blocks via CNN and Bi-LSTM layers, thereby rebalancing abstract-level disparities by symmetrically enriching shallow and deep textual signals. We demonstrate the effectiveness of our approach by comparing it with four state-of-the-art baselines on three widely adopted multimodal fake news datasets. The results show that our approach outperforms the benchmarks by 0.8% in accuracy and 2.2% in F1-score on average across the three datasets, demonstrating that symmetric, token-centric fusion of fine-grained semantics drives more robust fake news detection.
(This article belongs to the Special Issue Symmetries and Symmetry-Breaking in Data Security)
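
The QK-sharing idea can be sketched directly in PyTorch: a single linear projection serves as both the query and key map in a global-token cross-attention, so a global text vector attends over image tokens with fewer projection parameters. Dimensions and the `QKSharedGlobalAttention` name are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKSharedGlobalAttention(nn.Module):
    """Global-token cross-attention with a shared Q/K projection.

    A single global text vector queries token-wise image features; one
    linear map produces both queries and keys (the QK-sharing idea),
    halving those projection parameters.
    """

    def __init__(self, dim: int = 256):
        super().__init__()
        self.qk = nn.Linear(dim, dim)   # shared by queries and keys
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, global_text, image_tokens):
        q = self.qk(global_text).unsqueeze(1)        # (B, 1, D)
        k = self.qk(image_tokens)                    # (B, N, D)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return (attn @ self.v(image_tokens)).squeeze(1)

layer = QKSharedGlobalAttention()
text_cls = torch.randn(2, 256)         # e.g., a BERT [CLS] vector
patches = torch.randn(2, 196, 256)     # e.g., ViT patch tokens
print(layer(text_cls, patches).shape)  # torch.Size([2, 256])
```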

27 pages, 1417 KiB  
Article
A BERT-Based Multimodal Framework for Enhanced Fake News Detection Using Text and Image Data Fusion
by Mohammed Al-alshaqi, Danda B. Rawat and Chunmei Liu
Computers 2025, 14(6), 237; https://doi.org/10.3390/computers14060237 - 16 Jun 2025
Abstract
The spread of fake news on social media is complicated by the fact that false information propagates extremely fast in both textual and visual formats. Traditional approaches to fake news detection focus mainly on text or image features in isolation, thereby missing valuable textual information embedded within images. In response, we propose a BERT-based multimodal fake news detection method that combines article text with text extracted from attached images through Optical Character Recognition (OCR). We use BERT_base_uncased to process the combined input, retrieving relevant text from images and producing a confidence score that estimates the probability of the news being authentic. We report extensive experimental results on the ISOT, WELFAKE, TRUTHSEEKER, and ISOT_WELFAKE_TRUTHSEEKER datasets. Our proposed model demonstrates better generalization on the TRUTHSEEKER dataset with an accuracy of 99.97%, achieving substantial improvements over existing methods with an F1-score of 0.98. Experimental results indicate a potential accuracy increment of +3.35% compared to the latest baselines. These results highlight the potential of our approach to serve as a strong resource for automatic fake news detection by effectively integrating both textual and visual data streams. The findings suggest that using diverse datasets enhances the resilience of detection systems against misinformation strategies.
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
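
The OCR-then-classify flow is simple to outline. The sketch below uses the real `pytesseract.image_to_string` and Hugging Face `pipeline` APIs, but the checkpoint, the `[SEP]` concatenation convention, the file name, and the character-level truncation are stand-ins; the paper fine-tunes BERT_base_uncased for fake news rather than using an off-the-shelf classification head.

```python
import pytesseract
from PIL import Image
from transformers import pipeline

def classify_post(text: str, image_path: str):
    """Append OCR text from the attached image before classification.

    The checkpoint below is a placeholder with an untrained head; a
    fine-tuned fake-news model would be substituted in practice.
    """
    ocr_text = pytesseract.image_to_string(Image.open(image_path))
    combined = text + " [SEP] " + ocr_text.strip()
    clf = pipeline("text-classification", model="bert-base-uncased")
    return clf(combined[:512])  # crude truncation to a BERT-sized input

# "post_image.png" is a hypothetical attached image file.
print(classify_post("Breaking: city floods downtown", "post_image.png"))
```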
