
Search Results (115)

Search Parameters:
Keywords = online fake news

23 pages, 888 KiB  
Article
Explainable Deep Learning Model for ChatGPT-Rephrased Fake Review Detection Using DistilBERT
by Rania A. AlQadi, Shereen A. Taie, Amira M. Idrees and Esraa Elhariri
Big Data Cogn. Comput. 2025, 9(8), 205; https://doi.org/10.3390/bdcc9080205 - 11 Aug 2025
Viewed by 479
Abstract
Customers heavily depend on reviews for product information. Fake reviews may influence the perception of product quality, making online reviews less effective. ChatGPT’s (GPT-3.5 and GPT-4) ability to generate human-like reviews and responses to inquiries across several disciplines has increased recently. This leads to an increase in the number of reviewers and applications using ChatGPT to create fake reviews. Consequently, the detection of fake reviews generated or rephrased by ChatGPT has become essential. This paper proposes a new approach that distinguishes ChatGPT-rephrased reviews, considered fake, from real ones, utilizing a balanced dataset to analyze the sentiment and linguistic patterns that characterize both reviews. The proposed model further leverages Explainable Artificial Intelligence (XAI) techniques, including Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) for deeper insights into the model’s predictions and the classification logic. The proposed model performs a pre-processing phase that includes part-of-speech (POS) tagging, word lemmatization, tokenization, and then fine-tuned Transformer-based Machine Learning (ML) model DistilBERT for predictions. The obtained experimental results indicate that the proposed fine-tuned DistilBERT, utilizing the constructed balanced dataset along with a pre-processing phase, outperforms other state-of-the-art methods for detecting ChatGPT-rephrased reviews, achieving an accuracy of 97.25% and F1-score of 97.56%. The use of LIME and SHAP techniques not only enhanced the model’s interpretability, but also offered valuable insights into the key factors that affect the differentiation of genuine reviews from ChatGPT-rephrased ones. According to XAI, ChatGPT’s writing style is polite, uses grammatical structure, lacks specific descriptions and information in reviews, uses fancy words, is impersonal, and has deficiencies in emotional expression. 
These findings emphasize the effectiveness and reliability of the proposed approach.
(This article belongs to the Special Issue Natural Language Processing Applications in Big Data)
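The LIME side of the pipeline above rests on a simple idea: perturb the input text and watch the prediction move. A minimal sketch of that perturbation loop, assuming a toy `score_fake` scorer in place of the fine-tuned DistilBERT (the cue words and review text are invented for illustration):

```python
# Minimal sketch of LIME's core idea: estimate each token's importance by
# removing it and measuring how much the classifier's "fake" probability drops.
# `score_fake` is a hypothetical stand-in for the fine-tuned DistilBERT model.

def score_fake(tokens):
    # Toy scorer: polite, generic wording pushes the score toward "fake",
    # echoing the stylistic cues the abstract attributes to ChatGPT output.
    cues = {"delightful", "truly", "wonderful", "impeccable"}
    return sum(t in cues for t in tokens) / max(len(tokens), 1)

def token_importance(tokens):
    base = score_fake(tokens)
    importance = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]
        # Positive value: removing the token lowered the "fake" score,
        # so the token was pushing the prediction toward "fake".
        importance[tok] = base - score_fake(perturbed)
    return importance

review = "a truly wonderful product with impeccable packaging".split()
weights = token_importance(review)
```

Real LIME fits a local linear surrogate over many random perturbations rather than single-token deletions, but the attribution signal it surfaces is the same kind of quantity.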

15 pages, 2869 KiB  
Article
“Virtual Masks” and Online Identity: The Use of Fake Profiles in Armenian Social Media Communication
by Arthur V. Atanesyan, Samson Mkhitaryan and Anrieta Karapetyan
Journal. Media 2025, 6(2), 49; https://doi.org/10.3390/journalmedia6020049 - 26 Mar 2025
Viewed by 4119
Abstract
The goal of the study is to reveal the reasons (strategies) behind the use of “virtual masks” (fake profiles and altered identities) by real (human) users of social media networks (SMNs) within a cultural context, specifically in Armenia. Applying Erving Goffman’s Dramaturgical Theory and concepts of virtual identity, the research explores how users construct their online personas, either reflecting their real identities or modifying them to achieve specific communicative goals. A statistical analysis of the most popular SMNs in Armenia, combined with semi-structured interviews with 400 users, reveals diverse approaches to virtual communication. While SMNs facilitate news consumption, socializing, and professional networking, many users deliberately conceal personal information or engage in deceptive practices. Approximately 35% prefer anonymity when following others, and 24% of men and 11% of women admit to posting false information. Additionally, 26% of men and 12% of women alter their online appearance to enhance attractiveness. The study also highlights the role of anonymity in expressing controversial opinions, particularly in political discussions. Men are more inclined than women to create fake accounts and manipulate information to avoid social repercussions. Ultimately, the study highlights how “virtual masks” in Armenia reflect both cultural attitudes and broader global digital communication trends. Full article

25 pages, 622 KiB  
Article
Cross-Domain Fake News Detection Through Fusion of Evidence from Multiple Social Media Platforms
by Jannatul Ferdush, Joarder Kamruzzaman, Gour Karmakar, Iqbal Gondal and Rajkumar Das
Future Internet 2025, 17(2), 61; https://doi.org/10.3390/fi17020061 - 3 Feb 2025
Cited by 1 | Viewed by 2129
Abstract
Fake news has become a significant challenge on online social platforms, increasing uncertainty and unwanted tension in society. The negative impact of fake news on political processes, public health, and social harmony underscores the urgency of developing more effective detection systems. Existing methods for fake news detection often focus solely on one platform, potentially missing important clues that arise from multiple platforms. Another important consideration is that the domain of fake news changes rapidly, making cross-domain analysis more difficult than in-domain analysis. To address both of these limitations, our method takes evidence from multiple social media platforms, enhances our cross-domain analysis, and improves overall detection accuracy. Our method employs the Dempster–Shafer combination rule for aggregating probabilities for comments being fake from two different social media platforms. Instead of directly using the comments as features, our approach improves fake news detection by examining the relationships and calculating correlations among comments from different platforms. This provides a more comprehensive view of how fake news spreads and how users respond to it. Most importantly, our study reveals that true news is typically rich in content, while fake news tends to generate a vast thread of comments. Therefore, we propose a combined method that merges content- and comment-based approaches, allowing our model to identify fake news with greater accuracy and showing an overall improvement of 7% over previous methods. Full article
(This article belongs to the Special Issue Information Communication Technologies and Social Media)
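The Dempster–Shafer combination rule named in this abstract can be sketched compactly. This is the textbook rule, not the paper's implementation; the per-platform mass values below are hypothetical:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (keyed by frozenset focal elements)
    using Dempster's rule: multiply masses, discard conflicting pairs,
    and renormalize by 1 minus the total conflict."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

FAKE, REAL = frozenset({"fake"}), frozenset({"real"})
BOTH = FAKE | REAL  # mass assigned to "either" expresses uncertainty
# Hypothetical evidence from comments on two platforms that a story is fake.
platform_a = {FAKE: 0.6, REAL: 0.2, BOTH: 0.2}
platform_b = {FAKE: 0.5, REAL: 0.3, BOTH: 0.2}
fused = dempster_combine(platform_a, platform_b)
```

Because both platforms lean toward "fake", the combined belief in "fake" exceeds either individual mass, which is exactly the corroboration effect the aggregation is meant to capture.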

28 pages, 1346 KiB  
Article
Cross-Cultural Perspectives on Fake News: A Comparative Study of Instagram Users in Greece and Portugal
by Evangelia Pothitou, Maria Perifanou and Anastasios A. Economides
Information 2025, 16(1), 41; https://doi.org/10.3390/info16010041 - 13 Jan 2025
Cited by 2 | Viewed by 4675
Abstract
As our society increasingly relies on digital platforms for information, the spread of fake news has become a pressing concern. This study investigates the ability of Greek and Portuguese Instagram users to identify fake news, highlighting the influence of cultural differences. The responses of 220 Instagram users were collected through questionnaires in Greece and Portugal. The data analysis investigates characteristics of Instagram posts, social endorsement, and platform usage duration. The results reveal distinct user behaviors: Greeks exhibit a unique inclination towards social connections, displaying an increased trust in friends’ content and investing more time on Instagram, reflecting the importance of personal connections in their media consumption. They also give less importance to a certain post’s characteristics, such as content opposing personal beliefs, emotional language, and poor grammar, spelling, or formatting when identifying fake news, compared to the Portuguese, suggesting a weaker emphasis on content quality in their evaluations. These findings show that cultural differences affect how people behave on Instagram. Hence, content creators, platforms, and policymakers need specific plans to make online spaces more informative. Strategies should focus on enhancing awareness of key indicators of fake news, such as linguistic quality and post structure, while addressing the role of personal and social networks in the spread of misinformation. Full article
(This article belongs to the Section Information Applications)

17 pages, 2434 KiB  
Article
A Fuzzy AHP and PCA Approach to the Role of Media in Improving Education and the Labor Market in the 21st Century
by Branislav Sančanin, Aleksandra Penjišević, Dušan J. Simjanović, Branislav M. Ranđelović, Nenad O. Vesić and Maja Mladenović
Mathematics 2024, 12(22), 3616; https://doi.org/10.3390/math12223616 - 19 Nov 2024
Cited by 2 | Viewed by 1208
Abstract
In a hyperproductive interactive environment, where speed and cost-effectiveness often overshadow accuracy, the media’s role is increasingly shifting towards an educational function, beyond its traditional informative and entertaining roles. This shift, particularly through the promotion of science and education, aims to bridge the gap between educational institutions and the labor market. In this context, the importance of 21st-century competencies—encompassing a broad range of knowledge and skills—becomes increasingly clear. Educational institutions are now expected to equip students with relevant, universally applicable, and market-competitive competencies. This paper proposes using a combination of principal component analysis (PCA) and fuzzy analytic hierarchy process (FAHP) to rank 21st-century competencies developed throughout the educational process to improve the system. The highest-ranked competency identified is the ability to manage information—specifically, gathering and analyzing information from diverse sources. It has been shown that respondents who developed “soft skills” and media literacy during their studies are better able to critically assess content on social networks and distinguish between credible and false information. The significance of this work lies in its focus on the damaged credibility of online media caused by user-generated content and the rapid spread of unverified and fake news. Denying such discourse or erasing digital traces is therefore futile. Developing a critical approach to information is essential for consistently identifying fake news, doctored images, and recordings taken out of context, as well as preventing their spread. Full article
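The AHP ranking step described above can be illustrated with the standard crisp geometric-mean approximation (the paper uses the fuzzy variant, FAHP, which extends this with triangular fuzzy numbers; the pairwise judgments below are hypothetical):

```python
from math import prod

def ahp_weights(matrix):
    """Priority weights from an AHP pairwise-comparison matrix via the
    geometric-mean (row) approximation of the principal eigenvector."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical comparison of three competencies: information management
# judged 3x as important as media literacy, 5x as important as soft skills.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
```

With judgments like these, information management receives the largest weight, mirroring the abstract's finding that it was the highest-ranked competency.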

59 pages, 11596 KiB  
Review
Fake News Detection Revisited: An Extensive Review of Theoretical Frameworks, Dataset Assessments, Model Constraints, and Forward-Looking Research Agendas
by Sheetal Harris, Hassan Jalil Hadi, Naveed Ahmad and Mohammed Ali Alshara
Technologies 2024, 12(11), 222; https://doi.org/10.3390/technologies12110222 - 6 Nov 2024
Cited by 5 | Viewed by 17365
Abstract
The emergence and acceptance of digital technology have caused information pollution and an infodemic on Online Social Networks (OSNs), blogs, and online websites. The malicious broadcast of illegal, objectionable and misleading content causes behavioural changes and social unrest, impacts economic growth and national security, and threatens users’ safety. The proliferation of AI-generated misleading content has further intensified the current situation. In the previous literature, state-of-the-art (SOTA) methods have been implemented for Fake News Detection (FND). However, the existing research lacks multidisciplinary considerations for FND based on theories on FN and OSN users. Theories’ analysis provides insights into effective and automated detection mechanisms for FN, and the intentions and causes behind wide-scale FN propagation. This review evaluates the available datasets, FND techniques, and approaches and their limitations. The novel contribution of this review is the analysis of the FND in linguistics, healthcare, communication, and other related fields. It also summarises the explicable methods for FN dissemination, identification and mitigation. The research identifies that the prediction performance of pre-trained transformer models provides fresh impetus for multilingual (even for resource-constrained languages), multidomain, and multimodal FND. Their limits and prediction capabilities must be harnessed further to combat FN. It is possible by large-sized, multidomain, multimodal, cross-lingual, multilingual, labelled and unlabelled dataset curation and implementation. SOTA Large Language Models (LLMs) are the innovation, and their strengths should be focused on and researched to combat FN, deepfakes, and AI-generated content on OSNs and online sources. The study highlights the significance of human cognitive abilities and the potential of AI in the domain of FND. Finally, we suggest promising future research directions for FND and mitigation. 

43 pages, 11339 KiB  
Article
Machine Learning and Deep Learning Applications in Disinformation Detection: A Bibliometric Assessment
by Andra Sandu, Liviu-Adrian Cotfas, Camelia Delcea, Corina Ioanăș, Margareta-Stela Florescu and Mihai Orzan
Electronics 2024, 13(22), 4352; https://doi.org/10.3390/electronics13224352 - 6 Nov 2024
Cited by 9 | Viewed by 3142 | Correction
Abstract
Fake news is one of the biggest challenging issues in today’s technological world and has a huge impact on the population’s decision-making and way of thinking. Disinformation can be classified as a subdivision of fake news, the main purpose of which is to manipulate and generate confusion among people in order to influence their opinion and obtain certain advantages in multiple domains (politics, economics, etc.). Propaganda, rumors, and conspiracy theories are just a few examples of common disinformation. Therefore, there is an urgent need to understand this phenomenon and offer the scientific community a paper that provides a comprehensive examination of the existing literature, lay the foundation for future research areas, and contribute to the fight against disinformation. The present manuscript provides a detailed bibliometric analysis of the articles oriented towards disinformation detection, involving high-performance machine learning and deep learning algorithms. The dataset has been collected from the popular Web of Science database, through the use of specific keywords such as “disinformation”, “machine learning”, or “deep learning”, followed by a manual check of the papers included in the dataset. The documents were examined using the popular R tool, Biblioshiny 4.2.0; the bibliometric analysis included multiple perspectives and various facets: dataset overview, sources, authors, papers, n-gram analysis, and mixed analysis. The results highlight an increased interest from the scientific community on disinformation topics in the context of machine learning and deep learning, supported by an annual growth rate of 96.1%. The insights gained from the research bring to light surprising details, while the study provides a solid basis for both future research in this area, as well for the development of new strategies addressing this complex issue of disinformation and ensuring a trustworthy and safe online environment. Full article

9 pages, 454 KiB  
Article
COVID-19 Parental Vaccine Hesitancy: The Role of Trust in Science and Conspiracy Beliefs
by Ambra Gentile and Marianna Alesi
Int. J. Environ. Res. Public Health 2024, 21(11), 1471; https://doi.org/10.3390/ijerph21111471 - 5 Nov 2024
Viewed by 1685
Abstract
Background. Parental vaccine hesitancy is a sensitive topic despite the benefits associated with children's vaccination. Regarding the COVID-19 vaccination especially, parents expressed concerns about vaccinating their children, questioning the vaccine's effectiveness and safety. Although several studies have been conducted on the general population, few have investigated this relationship with respect to parents' intentions. Methods. An online survey was advertised from May to December 2022 on social networks, collecting data from 109 participants (90% F; mean age: 41.34 years, SD: ±6.40). The survey assessed sociodemographic characteristics, vaccine hesitancy through the Parents' Attitude towards Childhood Vaccines scale (PAVC), trust in science through the Belief in Science Scale (BISS), and conspiracy beliefs through the Generic Conspiracist Beliefs Scale (GCBS). Results. In our sample, 29 parents (26.6%) scored more than 50 points on the PAVC and were therefore considered hesitant. Moreover, more than half of the parents (60.6%) declared that they did not intend to vaccinate their children in the future. The path analysis model showed that parents with low education tended to hold stronger conspiracy beliefs (β = −0.40). Holding conspiracy beliefs (β = 0.28) and having low trust in science (β = −0.23) were associated with higher parental hesitancy and, in turn, with no future intention to vaccinate their children against COVID-19 (OR = 0.83, p < 0.001). Conclusion. These results suggest that targeted campaigns, mainly on social media, should be aimed at parents with lower levels of education, debunking the most common fake news and myths independently of the vaccine type, and highlighting the importance of scientific research for improving people's living conditions. Full article
(This article belongs to the Special Issue Control and Prevention of COVID-19 Spread in Post-Pandemic Era)

16 pages, 731 KiB  
Article
Stance Detection in the Context of Fake News—A New Approach
by Izzat Alsmadi, Iyad Alazzam, Mohammad Al-Ramahi and Mohammad Zarour
Future Internet 2024, 16(10), 364; https://doi.org/10.3390/fi16100364 - 6 Oct 2024
Cited by 1 | Viewed by 2326
Abstract
Online social networks (OSNs) are inundated with an enormous daily influx of news shared by users worldwide. Information can originate from any OSN user and quickly spread, making the task of fact-checking news both time-consuming and resource-intensive. To address this challenge, researchers are exploring machine learning techniques to automate fake news detection. This paper specifically focuses on detecting the stance of content producers—whether they support or oppose the subject of the content. Our study aims to develop and evaluate advanced text-mining models that leverage pre-trained language models enhanced with meta features derived from headlines and article bodies. We sought to determine whether incorporating the cosine distance feature could improve model prediction accuracy. After analyzing and assessing several previous competition entries, we identified three key tasks for achieving high accuracy: (1) a multi-stage approach that integrates classical and neural network classifiers, (2) the extraction of additional text-based meta features from headline and article body columns, and (3) the utilization of recent pre-trained embeddings and transformer models. Full article
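The cosine-distance feature examined in this study can be sketched in a few lines. The paper builds it on top of pre-trained embeddings; the bag-of-words version below is a simplified stand-in, and the headline/body strings are invented:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts. The meta feature
    discussed in the abstract is the corresponding distance, 1 - similarity,
    between a headline and its article body."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

headline = "City council approves new park budget"
body = "The city council voted to approve the budget for a new park"
related = 1 - cosine_similarity(headline, body)                   # small distance
unrelated = 1 - cosine_similarity(headline, "Team wins championship game")
```

A headline far (in cosine distance) from its own body is a useful stance signal: "unrelated" headline/body pairs are one of the stance classes in the fake-news stance-detection task.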

21 pages, 6745 KiB  
Article
Multimodal Social Media Fake News Detection Based on 1D-CCNet Attention Mechanism
by Yuhan Yan, Haiyan Fu and Fan Wu
Electronics 2024, 13(18), 3700; https://doi.org/10.3390/electronics13183700 - 18 Sep 2024
Cited by 2 | Viewed by 3187
Abstract
Due to the explosive rise of multimodal content in online social communities, cross-modal learning is crucial for accurate fake news detection. However, current multimodal fake news detection techniques face challenges in extracting features from multiple modalities and fusing cross-modal information, failing to fully exploit the correlations and complementarities between different modalities. To address these issues, this paper proposes a fake news detection model based on a one-dimensional CCNet (1D-CCNet) attention mechanism, named BTCM. This method first utilizes BERT and BLIP-2 encoders to extract text and image features. Then, it employs the proposed 1D-CCNet attention mechanism module to process the input text and image sequences, enhancing the important aspects of the bimodal features. Meanwhile, this paper uses the pre-trained BLIP-2 model for object detection in images, generating image descriptions and augmenting text data to enhance the dataset. This operation aims to further strengthen the correlations between different modalities. Finally, this paper proposes a heterogeneous cross-feature fusion method (HCFFM) to integrate image and text features. Comparative experiments were conducted on three public datasets: Twitter, Weibo, and Gossipcop. The results show that the proposed model achieved excellent performance. Full article
(This article belongs to the Special Issue Application of Data Mining in Social Media)
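The abstract does not specify the internals of the 1D-CCNet module, but the generic operation underlying attention-based cross-modal fusion is scaled dot-product attention, where (for example) text queries attend over image keys/values. A minimal sketch with invented toy features:

```python
from math import exp, sqrt

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention over plain Python lists:
    each query vector produces softmax weights over the keys and
    returns the corresponding weighted average of the values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in keys]
        m = max(scores)                       # subtract max for stability
        w = [exp(s - m) for s in scores]
        total = sum(w)
        w = [x / total for x in w]
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

text_feats = [[1.0, 0.0], [0.0, 1.0]]    # hypothetical text token features
image_feats = [[1.0, 0.0], [0.0, 1.0]]   # hypothetical image patch features
fused = cross_attention(text_feats, image_feats, image_feats)
```

Each fused text position is a convex combination of image features, which is how attention lets one modality selectively emphasize the relevant parts of the other.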

36 pages, 2362 KiB  
Article
A Predictive Model for Benchmarking the Performance of Algorithms for Fake and Counterfeit News Classification in Global Networks
by Nureni Ayofe Azeez, Sanjay Misra, Davidson Onyinye Ogaraku and Ademola Philip Abidoye
Sensors 2024, 24(17), 5817; https://doi.org/10.3390/s24175817 - 7 Sep 2024
Cited by 1 | Viewed by 2142
Abstract
The pervasive spread of fake news in online social media has emerged as a critical threat to societal integrity and democratic processes. To address this pressing issue, this research harnesses the power of supervised AI algorithms aimed at classifying fake news with selected algorithms. Algorithms such as Passive Aggressive Classifier, perceptron, and decision stump undergo meticulous refinement for text classification tasks, leveraging 29 models trained on diverse social media datasets. Sensors can be utilized for data collection. Data preprocessing involves rigorous cleansing and feature vector generation using TF-IDF and Count Vectorizers. The models’ efficacy in classifying genuine news from falsified or exaggerated content is evaluated using metrics like accuracy, precision, recall, and more. In order to obtain the best-performing algorithm from each of the datasets, a predictive model was developed, through which SG with 0.681190 performs best in Dataset 1, BernoulliRBM has 0.933789 in Dataset 2, LinearSVC has 0.689180 in Dataset 3, and BernoulliRBM has 0.026346 in Dataset 4. This research illuminates strategies for classifying fake news, offering potential solutions to ensure information integrity and democratic discourse, thus carrying profound implications for academia and real-world applications. This work also suggests the strength of sensors for data collection in IoT environments, big data analytics for smart cities, and sensor applications which contribute to maintaining the integrity of information within urban environments. Full article
(This article belongs to the Special Issue IoT and Big Data Analytics for Smart Cities)
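The Passive Aggressive Classifier mentioned above has a very compact update rule: stay passive on examples classified with margin at least 1, otherwise make the smallest ("aggressive") weight change that restores the margin. A self-contained sketch with invented feature vectors (in the paper the features come from TF-IDF/Count vectorization):

```python
def pa_train(samples, epochs=5):
    """Online Passive-Aggressive training with hinge loss
    (the unbounded-step variant, i.e. no aggressiveness cap C)."""
    w = [0.0] * (len(samples[0][0]) + 1)       # +1 for a bias feature
    for _ in range(epochs):
        for feats, y in samples:               # y is +1 (fake) or -1 (real)
            x = feats + [1.0]
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            loss = max(0.0, 1.0 - margin)
            if loss > 0.0:
                tau = loss / sum(xi * xi for xi in x)
                w = [wi + tau * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, feats):
    x = feats + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.0 else -1

# Hypothetical 2-dimensional vectors, e.g. TF-IDF weights of two terms.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], -1), ([0.2, 0.1], -1)]
w = pa_train(data)
```

In practice one would use `sklearn.linear_model.PassiveAggressiveClassifier` with a `TfidfVectorizer`; the sketch above only exposes the update rule those implementations share.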

29 pages, 521 KiB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Cited by 18 | Viewed by 16076
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information. Full article

16 pages, 1963 KiB  
Article
Cross-Domain Fake News Detection Using a Prompt-Based Approach
by Jawaher Alghamdi, Yuqing Lin and Suhuai Luo
Future Internet 2024, 16(8), 286; https://doi.org/10.3390/fi16080286 - 8 Aug 2024
Cited by 3 | Viewed by 2813
Abstract
The proliferation of fake news poses a significant challenge in today’s information landscape, spanning diverse domains and topics and undermining traditional detection methods confined to specific domains. In response, there is a growing interest in strategies for detecting cross-domain misinformation. However, traditional machine learning (ML) approaches often struggle with the nuanced contextual understanding required for accurate news classification. To address these challenges, we propose a novel contextualized cross-domain prompt-based zero-shot approach utilizing a pre-trained Generative Pre-trained Transformer (GPT) model for fake news detection (FND). In contrast to conventional fine-tuning methods reliant on extensive labeled datasets, our approach places particular emphasis on refining prompt integration and classification logic within the model’s framework. This refinement enhances the model’s ability to accurately classify fake news across diverse domains. Additionally, the adaptability of our approach allows for customization across diverse tasks by modifying prompt placeholders. Our research significantly advances zero-shot learning by demonstrating the efficacy of prompt-based methodologies in text classification, particularly in scenarios with limited training data. Through extensive experimentation, we illustrate that our method effectively captures domain-specific features and generalizes well to other domains, surpassing existing models in terms of performance. These findings contribute significantly to the ongoing efforts to combat fake news dissemination, particularly in environments with severely limited training data, such as online platforms. Full article
(This article belongs to the Special Issue Embracing Artificial Intelligence (AI) for Network and Service)
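The prompt-with-placeholders design described above can be sketched without committing to a specific model API. The template wording, the `generate` stub, and the parsing logic below are all hypothetical; the point is that only the prompt and label parsing, not model weights, define the zero-shot classifier:

```python
PROMPT = (
    "Decide whether the following news article is real or fake.\n"
    "Article: {article}\n"
    "Answer with exactly one word, 'real' or 'fake': {label}"
)

def build_prompt(article):
    # The {label} slot is left empty at inference time; editing the
    # placeholder text is how the template adapts to other tasks.
    return PROMPT.format(article=article, label="")

def parse_label(completion):
    text = completion.strip().lower()
    if text.startswith("fake"):
        return "fake"
    if text.startswith("real"):
        return "real"
    return "unknown"

def classify(article, generate):
    """`generate` stands in for a call to a pre-trained GPT-style model."""
    return parse_label(generate(build_prompt(article)))

stub_model = lambda prompt: "Fake. The claims are unverifiable."
label = classify("Aliens endorse mayoral candidate.", stub_model)
```

Wiring `generate` to an actual LLM endpoint (and making the parser robust to verbose completions) is where real implementations spend their effort; no gradient updates or labeled training data are involved.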

17 pages, 872 KiB  
Article
Federated Learning in the Detection of Fake News Using Deep Learning as a Basic Method
by Kristína Machová, Marián Mach and Viliam Balara
Sensors 2024, 24(11), 3590; https://doi.org/10.3390/s24113590 - 2 Jun 2024
Cited by 2 | Viewed by 3711
Abstract
This article explores the possibilities for federated learning with a deep learning method as a basic approach to train detection models for fake news recognition. Federated learning is the key issue in this research because this kind of learning makes machine learning more secure by training models on decentralized data at decentralized places, for example, at different IoT edges. The data are not transferred between decentralized places, which means that personally identifiable data are not shared. This could increase the security of data from sensors in intelligent houses and medical devices or data from various resources in online spaces. Each edge station could train a model separately on data obtained from its sensors and on data extracted from different sources. Consequently, the models trained on local data on local clients are aggregated at a central endpoint. We have designed three different architectures for deep learning as a basis for use within federated learning. The detection models were based on embeddings, CNNs (convolutional neural networks), and LSTM (long short-term memory). The best results were achieved using more LSTM layers (F1 = 0.92). On the other hand, all three architectures achieved similar results. We also analyzed results obtained using federated learning and without it. As a result of the analysis, it was found that the use of federated learning, in which data were decomposed and divided into smaller local datasets, does not significantly reduce the accuracy of the models. Full article
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)
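The aggregation step the abstract describes (local training on decentralized data, then averaging at a central point) can be sketched as a minimal federated-averaging round. The toy `local_train` update is a placeholder for the paper's deep models (embeddings, CNN, LSTM); only weights, never raw data, reach the aggregator.

```python
import numpy as np

def local_train(global_weights, local_data):
    # Stand-in for local training on a client's own data (e.g., an LSTM
    # fine-tuned at an IoT edge); here we just nudge the weights toward
    # the client's data mean.
    return global_weights + 0.1 * (local_data.mean() - global_weights)

def federated_round(global_weights, client_datasets):
    # Each client trains separately on its decentralized dataset...
    client_weights = [local_train(global_weights, d) for d in client_datasets]
    # ...and the server aggregates the models by a weighted average
    # (FedAvg), weighting each client by its local dataset size.
    sizes = np.array([len(d) for d in client_datasets], dtype=float)
    return np.average(client_weights, axis=0, weights=sizes)

w = np.zeros(1)
clients = [np.array([1.0, 1.0]), np.array([3.0])]
w = federated_round(w, clients)
print(w)
```

Because the clients exchange only model parameters, personally identifiable data stays on each edge device, which is the security property the article emphasizes.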

11 pages, 330 KiB  
Article
Influences on COVID-19 Vaccine Adherence among Pregnant Women: The Role of Internet Access and Pre-Vaccination Emotions
by Rosângela Carvalho de Sousa, Maria Juliene Lima da Silva, Maria Rita Fialho do Nascimento, Mayara da Cruz Silveira, Franciane de Paula Fernandes, Tatiane Costa Quaresma, Simone Aguiar da Silva Figueira, Maria Goreth Silva Ferreira, Adjanny Estela Santos de Souza, Waldiney Pires Moraes, Sheyla Mara Silva de Oliveira and Livia de Aguiar Valentim
Int. J. Environ. Res. Public Health 2024, 21(6), 719; https://doi.org/10.3390/ijerph21060719 - 31 May 2024
Cited by 2 | Viewed by 1390
Abstract
Introduction: The onset of the COVID-19 pandemic brought about global uncertainties and fears, escalating the dissemination of fake news. This study aims to analyze the impact of fake news on COVID-19 vaccine adherence among pregnant women, providing crucial insights for effective communication strategies [...] Read more.
Introduction: The onset of the COVID-19 pandemic brought about global uncertainties and fears, escalating the dissemination of fake news. This study aims to analyze the impact of fake news on COVID-19 vaccine adherence among pregnant women, providing crucial insights for effective communication strategies during the pandemic. Methods: A cross-sectional, exploratory study was conducted with 113 pregnant women under care at a Women’s Health Reference Center. Data analysis included relative frequencies and odds ratios to assess the relationship between sociodemographic and behavioral variables regarding vaccination. Results: In the behavioral context of vaccination, internet access shows a significant association with decision-making, influencing vaccine refusal due to online information. Nuances in the odds ratio results highlight the complexity of vaccine hesitancy, emphasizing the importance of information quality. Pre-vaccination sentiments include stress (87.61%), fear (50.44%), and anxiety (40.7%), indicating the need for sensitive communication strategies. Discussion: Results revealed that pregnant women with higher education tend to adhere more to vaccination. Exposure to news about vaccine inefficacy had a subtle association with hesitancy, while finding secure sources was negatively associated with hesitancy. The behavioral complexity in the relationship between online information access and vaccination decision underscores the need for effective communication strategies. Conclusions: In the face of this challenging scenario, proactive strategies, such as developing specific campaigns for pregnant women, are essential. These should provide clear information, debunk myths, and address doubts. A user-centered approach, understanding their needs, is crucial. Furthermore, ensuring information quality and promoting secure sources are fundamental measures to strengthen trust in vaccination and enhance long-term public health. Full article
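The odds-ratio analysis mentioned in the methods can be illustrated with a standard 2x2 contingency-table calculation. The counts below are made up for demonstration only; they are not the study's data.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio from a 2x2 table:
    a: exposed with outcome,   b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: vaccine refusal among pregnant women with internet
# access (a, b) versus without internet access (c, d).
print(odds_ratio(20, 40, 5, 48))  # (20 * 48) / (40 * 5) = 4.8
```

An odds ratio above 1 would indicate the exposure (here, internet access) is associated with higher odds of the outcome (vaccine refusal), which is the kind of association the study examines.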