Search Results (65)

Search Parameters:
Keywords = fake news sharing

17 pages, 1151 KiB  
Article
Proposal of a Blockchain-Based Data Management System for Decentralized Artificial Intelligence Devices
by Keundug Park and Heung-Youl Youm
Big Data Cogn. Comput. 2025, 9(8), 212; https://doi.org/10.3390/bdcc9080212 - 18 Aug 2025
Viewed by 268
Abstract
A decentralized artificial intelligence (DAI) system is a human-oriented artificial intelligence (AI) system that performs self-learning and shares its knowledge with other DAI systems, much as humans do. A DAI device is an individual device (e.g., a mobile phone, a personal computer, a robot, or a car) running a DAI system. A DAI device acquires validated knowledge data and raw data from a blockchain system acting as a trust anchor and improves its knowledge level by self-learning on the validated data. A DAI device using the proposed system reduces unreliable tasks, including the generation of unreliable products (e.g., deepfakes, fake news, and hallucinations), and the proposed system prevents malicious DAI devices from acquiring the validated data. This paper proposes a new architecture for a blockchain-based data management system for DAI devices, together with the service scenario and data flow, security threats, and security requirements. It also describes the key features and expected effects of the proposed system. This paper discusses considerations for developing or operating the proposed system and concludes with future work.

29 pages, 1969 KiB  
Article
Mapping Linear and Configurational Dynamics to Fake News Sharing Behaviors in a Developing Economy
by Claudel Mombeuil, Hugues Séraphin and Hemantha Premakumara Diunugala
Technologies 2025, 13(8), 341; https://doi.org/10.3390/technologies13080341 - 6 Aug 2025
Viewed by 234
Abstract
The proliferation of social media has paradoxically facilitated the widespread dissemination of fake news, impacting individuals, politics, economics, and society as a whole. Despite increasing scholarly research on this phenomenon, a significant gap exists regarding its dynamics in developing countries, particularly how predictors of fake news sharing interact, rather than merely their net effects. To acquire a more nuanced understanding of fake news sharing behavior, we identify the direct and complex interplay among key variables using a dual analytical framework, leveraging Structural Equation Modeling (SEM) for linear relationships and Fuzzy-Set Qualitative Comparative Analysis (fsQCA) to uncover asymmetric patterns. Specifically, we investigate the influence of news-find-me orientation, social media trust, information-sharing tendencies, and status-seeking motivation on the propensity for fake news sharing. Additionally, we examine the moderating influence of social media literacy on these effects. Based on a cross-sectional survey of 1028 Haitian social media users, the SEM analysis revealed that news-find-me perception had a negative but statistically insignificant influence on fake news sharing behavior. In contrast, information sharing exhibited a significant negative association. Trust in social media was positively and significantly linked to fake news sharing behavior. Status-seeking motivation was positively associated with fake news sharing behavior, although the association did not reach statistical significance. Crucially, social media literacy moderated the effects of trust and information sharing. Interestingly, fsQCA identified three core configurations for fake news sharing: (1) low status seeking, (2) low information-sharing tendencies, and (3) a unique interaction of low news-find-me orientation and high social media trust. Furthermore, low social media literacy emerged as a direct core configuration. These findings support the urgent need to prioritize social media literacy as a key intervention in combating the dissemination of fake news.
(This article belongs to the Section Information and Communication Technologies)

21 pages, 4050 KiB  
Article
SAFE-GTA: Semantic Augmentation-Based Multimodal Fake News Detection via Global-Token Attention
by Like Zhang, Chaowei Zhang, Zewei Zhang and Yuchao Huang
Symmetry 2025, 17(6), 961; https://doi.org/10.3390/sym17060961 - 17 Jun 2025
Viewed by 632
Abstract
Large pre-trained models (PLMs) have opened tremendous opportunities for multimodal fake news detection. However, existing multimodal fake news detection methods do not exploit the token-wise hierarchical semantics of news yielded by PLMs, and they rely heavily on contrastive learning while ignoring the symmetry between text and image at the abstract level. This paper proposes a novel multimodal fake news detection method that balances the understanding of text and image by (1) designing a global-token cross-attention mechanism to capture the correlations between global text and token-wise image representations (or token-wise text and global image representations) obtained from BERT and ViT; (2) proposing a QK-sharing strategy within cross-attention to enforce model symmetry, which reduces information redundancy and accelerates fusion without sacrificing representational power; and (3) deploying a semantic augmentation module that systematically extracts token-wise multilayered text semantics from stacked BERT blocks via CNN and Bi-LSTM layers, thereby rebalancing abstract-level disparities by symmetrically enriching shallow and deep textual signals. We demonstrate the effectiveness of our approach by comparing it with four state-of-the-art baselines on three widely adopted multimodal fake news datasets. The results show that our approach outperforms the benchmarks by 0.8% in accuracy and 2.2% in F1-score on average across the three datasets, demonstrating that symmetric, token-centric fusion of fine-grained semantics drives more robust fake news detection.
(This article belongs to the Special Issue Symmetries and Symmetry-Breaking in Data Security)
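The QK-sharing idea in point (2) of the abstract can be illustrated with a toy sketch: one projection matrix serves both the query (from the global text vector) and the keys (from token-wise image vectors). The plain-Python code below is an illustrative reading of that strategy, not the paper's implementation; the tiny 2-dimensional vectors and identity projections are assumptions for demonstration only.

```python
import math

def matvec(W, x):
    """Apply a projection matrix W (list of rows) to vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def qk_shared_cross_attention(global_text, image_tokens, Wq, Wv):
    """Attend a single global text vector over token-wise image vectors.

    QK-sharing: the same projection Wq produces both the query (global
    text) and the keys (image tokens), halving projection parameters
    and making the attention bilinear form symmetric.
    """
    q = matvec(Wq, global_text)
    keys = [matvec(Wq, t) for t in image_tokens]   # shared projection
    values = [matvec(Wv, t) for t in image_tokens]
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors = fused representation
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]
```

In the paper's setting the roles would be swapped symmetrically as well (token-wise text attended by a global image vector); here only one direction is sketched.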

33 pages, 3077 KiB  
Article
Perspective-Based Microblog Summarization
by Chih-Yuan Li, Soon Ae Chun and James Geller
Information 2025, 16(4), 285; https://doi.org/10.3390/info16040285 - 1 Apr 2025
Viewed by 793
Abstract
Social media allows people to express and share a variety of experiences, opinions, beliefs, interpretations, or viewpoints on a single topic. Summarizing a collection of social media posts (microblogs) on one topic is challenging and can result in an incoherent summary due to the multiple perspectives of different users. We introduce a novel approach to microblog summarization, the Multiple-View Summarization Framework (MVSF), designed to efficiently generate multiple summaries from the same social media dataset depending on chosen perspectives and to deliver personalized, fine-grained summaries. MVSF leverages component-of-perspective computing, which can recognize the perspectives expressed in microblogs, such as sentiments, political orientations, or unreliable opinions (fake news), and filter social media data so they are summarized according to specific user-selected perspectives. For the summarization methods, our framework implements three extractive approaches: Entity-based, Social Signal-based, and Triple-based. We conduct comparative evaluations of MVSF against state-of-the-art summarization models, including BertSum, SBert, T5, and Bart-Large-CNN, using a gold-standard BBC news dataset and ROUGE scores. Furthermore, we utilize a dataset of 18,047 tweets about COVID-19 vaccines to demonstrate applications of MVSF. Our contributions include the innovative use of user perspectives in a unified summarization framework capable of generating multiple summaries that reflect different perspectives, in contrast to prior approaches that generate one-size-fits-all summaries for a dataset. The practical implication of MVSF is that it offers users diverse perspectives on social media data. A prototype web application implemented using ChatGPT demonstrates the feasibility of our approach.
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)

37 pages, 2517 KiB  
Article
Multitask Learning for Authenticity and Authorship Detection
by Gurunameh Singh Chhatwal and Jiashu Zhao
Electronics 2025, 14(6), 1113; https://doi.org/10.3390/electronics14061113 - 12 Mar 2025
Cited by 1 | Viewed by 1264
Abstract
Traditionally, detecting misinformation (real vs. fake) and authorship (human vs. AI) have been addressed as separate classification tasks, leaving a critical gap in real-world scenarios where these challenges increasingly overlap. Motivated by this need, we introduce a unified framework—the Shared–Private Synergy Model (SPSM)—that tackles both authenticity and authorship classification under one umbrella. Our approach is tested on a novel multi-label dataset and evaluated through an exhaustive suite of methods, including traditional machine learning, stylometric feature analysis, and pretrained large language model-based classifiers. Notably, the proposed SPSM architecture incorporates multitask learning, shared–private layers, and hierarchical dependencies, achieving state-of-the-art results with over 96% accuracy for authenticity (real vs. fake) and 98% for authorship (human vs. AI). Beyond its superior performance, our approach is interpretable: stylometric analyses reveal how factors like sentence complexity and entity usage can differentiate between fake news and AI-generated text. Meanwhile, LLM-based classifiers show moderate success. Comprehensive ablation studies further highlight the impact of task-specific architectural enhancements such as shared layers and balanced task losses on boosting classification performance. Our findings underscore the effectiveness of synergistic PLM architectures for tackling complex classification tasks while offering insights into linguistic and structural markers of authenticity and attribution. This study provides a strong foundation for future research, including multimodal detection, cross-lingual expansion, and the development of lightweight, deployable models to combat misinformation in the evolving digital landscape and smart society.

23 pages, 1392 KiB  
Article
An Optimized Weighted-Voting-Based Ensemble Learning Approach for Fake News Classification
by Muhammad Shahzaib Toor, Hooria Shahbaz, Muddasar Yasin, Armughan Ali, Norma Latif Fitriyani, Changgyun Kim and Muhammad Syafrudin
Mathematics 2025, 13(3), 449; https://doi.org/10.3390/math13030449 - 28 Jan 2025
Cited by 2 | Viewed by 2604
Abstract
The emergence of diverse content-sharing platforms and social media has rendered the dissemination of fake news and misinformation increasingly widespread. This misinformation can cause extensive confusion and fear throughout the populace. Confronting this dilemma necessitates an effective and accurate approach to identifying misinformation, an intrinsically intricate process. This research introduces an automated and efficient method for detecting false information. We evaluated the efficacy of various machine learning and deep learning models on two separate fake news datasets of differing sizes via holdout cross-validation. Furthermore, we evaluated the efficacy of three distinct word vectorization methods. Additionally, we employed an enhanced weighted voting ensemble model that enhances fake news detection by integrating logistic regression (LR), support vector machine (SVM), gated recurrent unit (GRU), and long short-term memory (LSTM) networks. This method exhibits enhanced performance relative to previous techniques: 98.76% for the PolitiFact dataset and 97.67% for the BuzzFeed dataset. Furthermore, the model outperforms individual components, resulting in superior accuracy, precision, recall, and F1 scores. The enhancements in performance result from the ensemble method’s capacity to use the advantages of each base model, hence providing robust generalization across datasets. Cross-validation was employed to enhance the model’s trustworthiness, validating its capacity to generalize effectively to novel data.
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)
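The weighted-voting step described in the abstract can be sketched in a few lines of plain Python. The per-model probabilities and weights below are illustrative placeholders, not the paper's tuned values:

```python
def weighted_vote(probs, weights):
    """Combine per-model fake-news probabilities by weighted soft voting.

    probs:   each base model's predicted probability that an article is fake
    weights: per-model weights (e.g., tuned on validation performance)
    Returns the ensemble probability and the hard label (1 = fake).
    """
    total = sum(weights)
    p = sum(w * pr for w, pr in zip(weights, probs)) / total
    return p, int(p >= 0.5)

# Hypothetical outputs of the four base models (LR, SVM, GRU, LSTM)
probs = [0.62, 0.55, 0.81, 0.74]
weights = [1.0, 0.8, 1.5, 1.4]   # assumed weights, not from the paper
p, label = weighted_vote(probs, weights)
```

Giving larger weights to the stronger recurrent models lets the ensemble exploit each base model's advantages, which is the intuition the abstract credits for the improved generalization.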

22 pages, 382 KiB  
Article
Narrow Margins and Misinformation: The Impact of Sharing Fake News in Close Contests
by Samuel Rhodes
Soc. Sci. 2024, 13(11), 571; https://doi.org/10.3390/socsci13110571 - 24 Oct 2024
Cited by 1 | Viewed by 10590
Abstract
This study investigates the impact of candidates disseminating fake news on voter behavior and electoral outcomes in highly competitive, partisan races. While the effects of fake news on electoral outcomes have been studied, research has yet to examine the impact of candidates’ strategic use of fake news in elections where it may have the greatest impact—close races. This research explores whether the use of fake news influences voter support, particularly among independent voters, in tightly contested elections. Through a conjoint survey experiment involving participants from Amazon MTurk, this study analyzes how variables such as race competitiveness, perceived risk of alienating independents, and the presence of partisan labels affect voter responses to candidates who spread misinformation. The findings indicate that while the competitiveness of a race does not significantly enhance support for candidates sharing fake news, the presence of partisan labels does. These results suggest that voter behavior in response to fake news is more closely tied to partisan identity than to strategic electoral considerations. This study highlights the complex dynamics of misinformation in electoral contexts and its implications for democratic processes.
(This article belongs to the Special Issue Disinformation and Misinformation in the New Media Landscape)

16 pages, 731 KiB  
Article
Stance Detection in the Context of Fake News—A New Approach
by Izzat Alsmadi, Iyad Alazzam, Mohammad Al-Ramahi and Mohammad Zarour
Future Internet 2024, 16(10), 364; https://doi.org/10.3390/fi16100364 - 6 Oct 2024
Cited by 1 | Viewed by 2326
Abstract
Online social networks (OSNs) are inundated with an enormous daily influx of news shared by users worldwide. Information can originate from any OSN user and quickly spread, making the task of fact-checking news both time-consuming and resource-intensive. To address this challenge, researchers are exploring machine learning techniques to automate fake news detection. This paper specifically focuses on detecting the stance of content producers—whether they support or oppose the subject of the content. Our study aims to develop and evaluate advanced text-mining models that leverage pre-trained language models enhanced with meta features derived from headlines and article bodies. We sought to determine whether incorporating the cosine distance feature could improve model prediction accuracy. After analyzing and assessing several previous competition entries, we identified three key tasks for achieving high accuracy: (1) a multi-stage approach that integrates classical and neural network classifiers, (2) the extraction of additional text-based meta features from headline and article body columns, and (3) the utilization of recent pre-trained embeddings and transformer models.
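The headline/body cosine feature mentioned in the abstract can be sketched as follows. This is a bag-of-words toy (the paper works with pre-trained embeddings), and the example texts are made up:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts; the cosine
    *distance* meta feature is simply 1 minus this value."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical headline/body pair; the score is appended to the
# classifier's feature vector as a meta feature.
headline = "vaccine causes outbreak"
body = "officials deny that the vaccine causes any outbreak"
feature = cosine_similarity(headline, body)
```

A low similarity between a headline and its body can itself be a weak stance signal (e.g., a body that disputes its own headline), which is why such meta features are worth feeding to the classifier.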

14 pages, 7087 KiB  
Article
Generated or Not Generated (GNG): The Importance of Background in the Detection of Fake Images
by Marco Tanfoni, Elia Giuseppe Ceroni, Sara Marziali, Niccolò Pancino, Marco Maggini and Monica Bianchini
Electronics 2024, 13(16), 3161; https://doi.org/10.3390/electronics13163161 - 10 Aug 2024
Cited by 3 | Viewed by 1966
Abstract
Facial biometrics are widely used to reliably and conveniently recognize people in photos, in videos, or from real-time webcam streams. It is therefore of fundamental importance to detect synthetic faces in images in order to reduce the vulnerability of biometrics-based security systems. Furthermore, manipulated images of faces can be intentionally shared on social media to spread fake news related to the targeted individual. This paper shows how fake face recognition models may mainly rely on the information contained in the background when dealing with generated faces, thus reducing their effectiveness. Specifically, a classifier is trained to separate fake images from real ones, using their representation in a latent space. Subsequently, the faces are segmented and the background removed, and the detection procedure is performed again, observing a significant drop in classification accuracy. Finally, an explainability tool (SHAP) is used to highlight the salient areas of the image, showing that the background and face contours crucially influence the classifier decision.
(This article belongs to the Special Issue Deep Learning Approach for Secure and Trustworthy Biometric System)

18 pages, 599 KiB  
Article
Fake News: “No Ban, No Spread—With Sequestration”
by Serge Galam
Physics 2024, 6(2), 859-876; https://doi.org/10.3390/physics6020053 - 6 Jun 2024
Cited by 5 | Viewed by 2555
Abstract
To curb the spread of fake news, I propose an alternative to the current trend of implementing coercive measures. This approach would preserve freedom of speech while neutralizing the social impact of fake news. The proposal relies on creating an environment that naturally sequesters fake news within quite small networks of people. I illustrate the process using a stylized model of opinion dynamics. In particular, I explore the effect of the simultaneous activation of prejudice tie breaking and contrarian behavior on the spread of fake news. The results show that indeed most pieces of fake news do not propagate beyond quite small groups of people and thus pose no global threat. However, some peculiar sets of parameters are found to boost fake news so that it “naturally” invades an entire community with no resistance, even if initially shared by only a handful of agents. These findings identify the modifications of the parameters required to reverse the boosting effect into a sequestration effect by an appropriate reshaping of the social geometry of the opinion dynamics landscape. Then, all fake news items become “naturally” trapped inside limited networks of people. No prohibition is required. The next significant challenge is implementing this groundbreaking scheme within social media.
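The flavor of such stylized opinion dynamics can be conveyed with a standard Galam-type update. The sketch below is my reading, not the paper's exact model: it assumes local-majority groups of three and a contrarian fraction `a` that flips against the group outcome, with illustrative parameters:

```python
def update(p, a):
    """One step of the dynamics for the fraction p of agents holding
    the fake news. Agents meet in groups of three and adopt the local
    majority; a fraction `a` of contrarians then flips to the opposite
    of the majority outcome."""
    majority = p ** 3 + 3 * p ** 2 * (1 - p)  # prob. the group majority holds it
    return (1 - a) * majority + a * (1 - majority)

def iterate(p0, a, steps=100):
    """Iterate the update map from initial support p0."""
    p = p0
    for _ in range(steps):
        p = update(p, a)
    return p
```

With no contrarians (`a = 0`), minority support dies out and majority support invades the whole community, reproducing the two regimes the abstract describes: sequestration for most parameter sets, full invasion for peculiar ones; contrarians can instead trap the dynamics at a mixed fixed point.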

17 pages, 872 KiB  
Article
Federated Learning in the Detection of Fake News Using Deep Learning as a Basic Method
by Kristína Machová, Marián Mach and Viliam Balara
Sensors 2024, 24(11), 3590; https://doi.org/10.3390/s24113590 - 2 Jun 2024
Cited by 2 | Viewed by 3711
Abstract
This article explores the possibilities of federated learning with a deep learning method as a basic approach to training detection models for fake news recognition. Federated learning is the key issue in this research because it makes machine learning more secure by training models on decentralized data at decentralized places, for example, at different IoT edges. The data are not transferred between decentralized places, which means that personally identifiable data are not shared. This could increase the security of data from sensors in intelligent houses and medical devices or data from various resources in online spaces. Each edge station could train a model separately on data obtained from its sensors and on data extracted from different sources. Consequently, the models trained on local data at local clients are aggregated at a central endpoint. We designed three different deep learning architectures as a basis for use within federated learning. The detection models were based on embeddings, CNNs (convolutional neural networks), and LSTM (long short-term memory). The best results were achieved using more LSTM layers (F1 = 0.92), although all three architectures achieved similar results. We also compared results obtained with and without federated learning. The analysis found that the use of federated learning, in which data are decomposed into smaller local datasets, does not significantly reduce the accuracy of the models.
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)
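The aggregation step — local models trained at the edges, combined at a central endpoint — can be sketched as FedAvg-style weighted averaging. In this plain-Python sketch the local "training" is a deliberately trivial stand-in (one step toward the local data mean), not the paper's LSTM models, and the client data are hypothetical:

```python
def local_train(weights, data, lr=0.1):
    """Stand-in for local training at one edge: a single gradient-like
    step pulling the weights toward the local data mean (illustrative)."""
    mean = sum(data) / len(data)
    return [w + lr * (mean - w) for w in weights]

def federated_average(client_models, client_sizes):
    """FedAvg aggregation: average local models weighted by local
    dataset size. Only model weights, never raw data, leave a client."""
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [
        sum(n * m[i] for m, n in zip(client_models, client_sizes)) / total
        for i in range(dim)
    ]

# One federated round over two hypothetical edge clients
global_model = [0.0, 0.0]
clients = [[1.0, 2.0, 3.0], [5.0]]
local_models = [local_train(global_model, d) for d in clients]
global_model = federated_average(local_models, [len(d) for d in clients])
```

Because only weight vectors travel to the endpoint, the personally identifiable raw data stay on each device, which is the privacy property the abstract emphasizes.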

25 pages, 924 KiB  
Article
Graph-Based Interpretability for Fake News Detection through Topic- and Propagation-Aware Visualization
by Kayato Soga, Soh Yoshida and Mitsuji Muneyasu
Computation 2024, 12(4), 82; https://doi.org/10.3390/computation12040082 - 15 Apr 2024
Cited by 1 | Viewed by 4725
Abstract
In the context of the increasing spread of misinformation via social network services, this study addresses the critical challenge of detecting and explaining the spread of fake news. Early detection methods focused on content analysis, whereas recent approaches exploit the distinctive propagation patterns of fake news to analyze network graphs of news sharing. However, these accurate methods lack accountability and provide little insight into the reasoning behind their classifications. We aim to fill this gap by elucidating the structural differences in the spread of fake and real news, with a focus on opinion consensus within these structures. We present a novel method that improves the interpretability of graph-based propagation detectors by visualizing article topics and propagation structures, using BERTopic for topic classification and analyzing the effect of topic agreement on propagation patterns. By applying this method to a real-world dataset and conducting a comprehensive case study, we not only demonstrate the effectiveness of the method in identifying characteristic propagation paths but also propose new metrics for evaluating the interpretability of detection methods. Our results provide valuable insights into the structural behavior and patterns of news propagation, contributing to the development of more transparent and explainable fake news detection systems.
(This article belongs to the Special Issue Computational Social Science and Complex Systems)

13 pages, 1574 KiB  
Article
The Influence of Affective Empathy on Online News Belief: The Moderated Mediation of State Empathy and News Type
by Yifan Yu, Shizhen Yan, Qihan Zhang, Zhenzhen Xu, Guangfang Zhou and Hua Jin
Behav. Sci. 2024, 14(4), 278; https://doi.org/10.3390/bs14040278 - 27 Mar 2024
Viewed by 2874
Abstract
The belief in online news has become a topical issue. Previous studies demonstrated the role emotion plays in fake news vulnerability. However, few studies have explored the effect of empathy on online news belief. This study investigated the relationship between trait empathy, state empathy, belief in online news, and the potential moderating effect of news type. One hundred and forty undergraduates evaluated 50 online news pieces (25 real, 25 fake) regarding their belief, state empathy, valence, arousal, and familiarity. Trait empathy data were collected using the Chinese version of the Interpersonal Reactivity Index. State empathy was positively correlated with affective empathy in trait empathy and believability, and affective empathy was positively correlated with believability. The influence of affective empathy on news belief was partially mediated by state empathy and regulated by news type (fake, real). We discuss the influence of empathy on online news belief and its internal processes. This study shares some unique insights for researchers, practitioners, social media users, and social media platform providers.

14 pages, 12048 KiB  
Article
Decoding the News Media Diet of Disinformation Spreaders
by Anna Bertani, Valeria Mazzeo and Riccardo Gallotti
Entropy 2024, 26(3), 270; https://doi.org/10.3390/e26030270 - 19 Mar 2024
Cited by 2 | Viewed by 2790
Abstract
In the digital era, information consumption is predominantly channeled through online news media and disseminated on social media platforms. Understanding the complex dynamics of the news media environment and users’ habits within the digital ecosystem is a challenging task that requires, at the same time, large databases and accurate methodological approaches. This study contributes to this expanding research landscape by employing network science methodologies and entropic measures to analyze the behavioral patterns of social media users sharing news pieces and dig into the diverse news consumption habits within different online social media user groups. Our analyses reveal that users are more inclined to share news classified as fake when they have previously posted conspiracy or junk science content and vice versa, creating a series of “misinformation hot streaks”. To better understand these dynamics, we used three different measures of entropy to gain insights into the news media habits of each user, finding that the patterns of news consumption significantly differ among users when focusing on disinformation spreaders as opposed to accounts sharing reliable or low-risk content. Thanks to these entropic measures, we quantify the variety and the regularity of the news media diet, finding that those disseminating unreliable content exhibit a more varied and, at the same time, a more regular choice of web-domains. This quantitative insight into the nuances of news consumption behaviors exhibited by disinformation spreaders holds the potential to significantly inform the strategic formulation of more robust and adaptive social media moderation policies.
(This article belongs to the Special Issue Entropy-Based Applications in Sociophysics)
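The "variety of the news media diet" that the abstract quantifies can be sketched as the Shannon entropy of the distribution of web domains a user shares. The paper uses three distinct entropy measures; this plain-Python sketch shows only the variety-style one, and the domain lists are hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(domains):
    """Shannon entropy (bits) of a user's shared-domain distribution:
    higher values mean a more varied news media diet."""
    counts = Counter(domains)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Hypothetical sharing histories of two users
focused = ["siteA.com"] * 9 + ["siteB.com"]                    # low variety
varied = ["siteA.com", "siteB.com", "siteC.com", "siteD.com"] * 3  # high variety
```

Regularity (how predictably a user cycles through domains over time) would need a sequence-aware measure rather than this bag-of-domains entropy, which is why the paper combines several entropic quantities.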

44 pages, 7889 KiB  
Article
Mapping the Landscape of Misinformation Detection: A Bibliometric Approach
by Andra Sandu, Ioana Ioanăș, Camelia Delcea, Laura-Mădălina Geantă and Liviu-Adrian Cotfas
Information 2024, 15(1), 60; https://doi.org/10.3390/info15010060 - 19 Jan 2024
Cited by 17 | Viewed by 6661
Abstract
The proliferation of misinformation presents a significant challenge in today’s information landscape, impacting various aspects of society. While misinformation is often confused with terms like disinformation and fake news, it is crucial to distinguish that misinformation involves, in most cases, inaccurate information without the intent to cause harm. In some instances, individuals unwittingly share misinformation, driven by a desire to assist others without thorough research. However, there are also situations where misinformation involves negligence, or even intentional manipulation, with the aim of shaping the opinions and decisions of the target audience. Another key factor contributing to misinformation is its alignment with individual beliefs and emotions. This alignment magnifies the impact and influence of misinformation, as people tend to seek information that reinforces their existing beliefs. As a starting point, 56 papers containing ‘misinformation detection’ in the title, abstract, or keywords, marked as “articles”, written in English, and published between 2016 and 2022 were extracted from the Web of Science platform and analyzed using Biblioshiny. This bibliometric study aims to offer a comprehensive perspective on the field of misinformation detection by examining its evolution and identifying emerging trends, influential authors, collaborative networks, highly cited articles, key terms, institutional affiliations, themes, and other relevant factors. Additionally, the study reviews the most cited papers and provides an overview of all selected papers in the dataset, shedding light on methods employed to counter misinformation and the primary research areas where misinformation detection has been explored, including sources such as online social networks, communities, and news platforms. Recent events related to health issues stemming from the COVID-19 pandemic have heightened interest within the research community in misinformation detection, a trend also reflected in the fact that half of the ten most cited papers in the dataset address this subject. The insights derived from this analysis contribute valuable knowledge to the issue, enhancing our understanding of the field’s dynamics and aiding in the development of effective strategies to detect and mitigate the impact of misinformation. The results show that IEEE Access occupies the first position based on the number of published papers, King Saud University is the top contributing institution for misinformation detection, and the five countries contributing most to this area are the USA, India, China, Spain, and the UK. Moreover, the study supports the promotion of verified and reliable sources of data, fostering a more informed and trustworthy information environment.
(This article belongs to the Special Issue Recent Advances in Social Media Mining and Analysis)
