Search Results (20)

Search Parameters:
Keywords = fact checking claims

16 pages, 2741 KiB  
Article
EVOCA: Explainable Verification of Claims by Graph Alignment
by Carmela De Felice, Carmelo Fabio Longo, Misael Mongiovì, Daniele Francesco Santamaria and Giusy Giulia Tuccari
Information 2025, 16(7), 597; https://doi.org/10.3390/info16070597 - 11 Jul 2025
Viewed by 285
Abstract
The paper introduces EVOCA (Explainable Verification Of Claims by Graph Alignment), a hybrid approach that combines natural language processing (NLP) techniques with the structural advantages of knowledge graphs to manage and reduce the amount of evidence required to evaluate statements. The approach leverages the explicit and interpretable structure of semantic graphs, which naturally represent the semantic structure of a sentence (or a set of sentences) and explicitly encode the relationships among concepts, thereby facilitating the extraction and manipulation of relevant information. The primary objective is to condense the evidence into a short sentence that preserves only the salient information relevant to the target claim. This process eliminates superfluous and redundant information that could degrade the performance of the subsequent verification task, while providing useful information for explaining the outcome. To achieve this, EVOCA generates a sub-graph in Abstract Meaning Representation (AMR) that represents the tokens of the claim–evidence pair exhibiting high semantic similarity. The structured representation offered by the AMR graph not only aids in identifying the most relevant information but also improves the interpretability of the results. The resulting sub-graph is converted back into natural language with the SPRING AMR tool, producing a concise but meaning-rich “sub-evidence” sentence. The output can then be processed by lightweight language models to determine whether the evidence supports, contradicts, or is neutral about the claim. The approach is tested on the 4297 sentence pairs of the Climate-BERT-fact-checking dataset, and the promising results are discussed.
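The condensation step can be illustrated with a deliberately simplified sketch. EVOCA itself aligns AMR graphs and regenerates text with SPRING; here the core idea of keeping only evidence tokens that align semantically with the claim is mimicked with a crude lexical similarity. The function names and the threshold below are hypothetical stand-ins, not the paper's method:

```python
# Illustrative sketch only: character-bigram Jaccard overlap stands in for the
# semantic similarity that EVOCA computes over AMR graph nodes.

def token_similarity(a: str, b: str) -> float:
    """Crude character-bigram Jaccard similarity between two tokens."""
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    x, y = bigrams(a.lower()), bigrams(b.lower())
    return len(x & y) / len(x | y) if x | y else 0.0

def condense_evidence(claim: str, evidence: str, threshold: float = 0.25) -> str:
    """Keep only evidence tokens that align with at least one claim token."""
    claim_tokens = claim.split()
    kept = [
        tok for tok in evidence.split()
        if any(token_similarity(tok, ct) >= threshold for ct in claim_tokens)
    ]
    return " ".join(kept)

claim = "Global temperatures are rising"
evidence = "Measurements show global surface temperatures have been rising steadily since 1970"
print(condense_evidence(claim, evidence))
```

The real system works on graph structure rather than surface tokens, which is what lets it preserve grammatical, meaning-rich “sub-evidence” instead of a bag of words.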

36 pages, 1084 KiB  
Article
Quantifying Claim Robustness Through Adversarial Framing: A Conceptual Framework for an AI-Enabled Diagnostic Tool
by Christophe Faugere
AI 2025, 6(7), 147; https://doi.org/10.3390/ai6070147 - 7 Jul 2025
Viewed by 1034
Abstract
Objectives: We introduce the conceptual framework for the Adversarial Claim Robustness Diagnostics (ACRD) protocol, a novel tool for assessing how factual claims withstand ideological distortion. Methods: Based on semantics, adversarial collaboration, and the devil’s advocate approach, we develop a three-phase evaluation process combining baseline evaluations, adversarial speaker reframing, and dynamic AI calibration along with quantified robustness scoring. We introduce the Claim Robustness Index that constitutes our final validity scoring measure. Results: We model the evaluation of claims by ideologically opposed groups as a strategic game with a Bayesian-Nash equilibrium to infer the normative behavior of evaluators after the reframing phase. The ACRD addresses shortcomings in traditional fact-checking approaches and employs large language models to simulate counterfactual attributions while mitigating potential biases. Conclusions: The framework’s ability to identify boundary conditions of persuasive validity across polarized groups can be tested across important societal and political debates ranging from climate change issues to trade policy discourses. Full article
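The abstract does not specify the Claim Robustness Index formula. As a purely hypothetical illustration of the three-phase idea (baseline evaluation, adversarial reframing, comparison), one might measure how well a claim's credibility survives reframing:

```python
# Hypothetical sketch only: the real ACRD/CRI definition is not given in the
# abstract. Here "robustness" is the mean post-reframing credibility divided
# by the mean baseline credibility, capped at 1.0.

def claim_robustness_index(baseline: list[float], reframed: list[float]) -> float:
    """Toy index: 1.0 means evaluations are unmoved by adversarial framing."""
    pre = sum(baseline) / len(baseline)
    post = sum(reframed) / len(reframed)
    return min(post / pre, 1.0) if pre > 0 else 0.0

# Evaluator credibility ratings (0-1) before and after adversarial speaker reframing:
cri = claim_robustness_index([0.8, 0.7, 0.9], [0.6, 0.5, 0.7])
```

A claim whose ratings drop sharply after an out-group speaker is attributed to it would score low, flagging framing-sensitive rather than evidence-grounded belief.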
(This article belongs to the Special Issue AI Bias in the Media and Beyond)

21 pages, 1276 KiB  
Article
Quantifying Truthfulness: A Probabilistic Framework for Atomic Claim-Based Misinformation Detection
by Fahim Sufi and Musleh Alsulami
Mathematics 2025, 13(11), 1778; https://doi.org/10.3390/math13111778 - 27 May 2025
Viewed by 830
Abstract
The increasing sophistication and volume of misinformation on digital platforms necessitate scalable, explainable, and semantically granular fact-checking systems. Existing approaches typically treat claims as indivisible units, overlooking internal contradictions and partial truths, thereby limiting their interpretability and trustworthiness. This paper addresses this gap by proposing a novel probabilistic framework that decomposes complex assertions into semantically atomic claims and computes their veracity through a structured evaluation of source credibility and evidence frequency. Each atomic unit is matched against a curated corpus of 11,928 cyber-related news entries using a binary alignment function, and its truthfulness is quantified via a composite score integrating both source reliability and support density. The framework introduces multiple aggregation strategies—arithmetic and geometric means—to construct claim-level veracity indices, offering both sensitivity and robustness. Empirical evaluation across eight cyber misinformation scenarios—encompassing over 40 atomic claims—demonstrates the system’s effectiveness. The model achieves a Mean Squared Error (MSE) of 0.037, Brier Score of 0.042, and a Spearman rank correlation of 0.88 against expert annotations. When thresholded for binary classification, the system records a Precision of 0.82, Recall of 0.79, and an F1-score of 0.805. The Expected Calibration Error (ECE) of 0.068 further validates the trustworthiness of the score distributions. These results affirm the framework’s ability to deliver interpretable, statistically reliable, and operationally scalable misinformation detection, with implications for automated journalism, governmental monitoring, and AI-based verification platforms. Full article
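The abstract names its two aggregation strategies explicitly. A minimal sketch of how arithmetic and geometric means behave on per-atomic-claim scores (the input scores below are invented, and the paper's composite of source reliability and support density is not reproduced):

```python
import math

# Hedged sketch: per-atomic-claim veracity scores are assumed to already lie in
# [0, 1]; we only demonstrate the two aggregation strategies named in the abstract.

def claim_veracity(atomic_scores: list[float]) -> dict[str, float]:
    """Aggregate atomic-claim scores into claim-level veracity indices."""
    n = len(atomic_scores)
    arithmetic = sum(atomic_scores) / n              # sensitive to partial truths
    geometric = math.prod(atomic_scores) ** (1 / n)  # punished by any near-zero atom
    return {"arithmetic": arithmetic, "geometric": geometric}

# A claim with three well-supported atoms and one dubious one:
scores = claim_veracity([0.9, 0.8, 0.85, 0.1])
```

The geometric mean drops sharply when any single atom is weakly supported (here about 0.50 versus an arithmetic mean of about 0.66), which is the sensitivity-versus-robustness trade-off the abstract alludes to.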

34 pages, 4011 KiB  
Article
Climate Change Disinformation on Social Media: A Meta-Synthesis on Epistemic Welfare in the Post-Truth Era
by Essien Oku Essien
Soc. Sci. 2025, 14(5), 304; https://doi.org/10.3390/socsci14050304 - 14 May 2025
Viewed by 1570
Abstract
Climate change disinformation has emerged as a substantial issue in the internet age, affecting public perceptions, policy response, and climate actions. This study, grounded on the theoretical frameworks of social epistemology, Habermas’s theory of communicative action, post-truth, and Foucault’s theory of power-knowledge, examines the effect of digital infrastructures, ideological forces, and epistemic power dynamics on climate change disinformation. The meta-synthesis approach in the study reveals the mechanics of climate change disinformation on social media, the erosion of epistemic welfare influenced by post-truth dynamics, and the ideological and algorithmic amplification of disinformation, shedding light on climate change misinformation as well. The findings show that climate change disinformation represents not only a collection of false claims but also a broader epistemic issue sustained by digital environments, power structures, and fossil corporations. Right-wing populist movements, corporate interests, and algorithmic recommendation systems substantially enhance climate skepticism, intensifying political differences and public distrust in scientific authority. The study highlights the necessity of addressing climate change disinformation through improved scientific communication, algorithmic openness, and digital literacy initiatives. Resolving this conundrum requires systemic activities that go beyond fact-checking, emphasizing epistemic justice and legal reforms. Full article

32 pages, 4415 KiB  
Review
Disinformation in the Digital Age: Climate Change, Media Dynamics, and Strategies for Resilience
by Andrea Tomassi, Andrea Falegnami and Elpidio Romano
Publications 2025, 13(2), 24; https://doi.org/10.3390/publications13020024 - 6 May 2025
Cited by 2 | Viewed by 3169
Abstract
Scientific disinformation has emerged as a critical challenge at the interface of science and society. This paper examines how false or misleading scientific content proliferates across both social media and traditional media and evaluates strategies to counteract its spread. We conducted a comprehensive literature review of research on scientific misinformation across disciplines and regions, with particular focus on climate change and public health as exemplars. Our findings indicate that social media algorithms and user dynamics can amplify false scientific claims, as seen in case studies of viral misinformation campaigns on vaccines and climate change. Traditional media, meanwhile, are not immune to spreading inaccuracies—journalistic practices such as sensationalism or “false balance” in reporting have at times distorted scientific facts, impacting public understanding. We review efforts to fight disinformation, including technological tools for detection, the application of inoculation theory and prebunking techniques, and collaborative approaches that bridge scientists and journalists. To empower individuals, we propose practical guidelines for critically evaluating scientific information sources and emphasize the importance of digital and scientific literacy. Finally, we discuss methods to quantify the prevalence and impact of scientific disinformation—ranging from social network analysis to surveys of public belief—and compare trends across regions and scientific domains. Our results underscore that combating scientific disinformation requires an interdisciplinary, multi-pronged approach, combining improvements in science communication, education, and policy. We conducted a scoping review of 85 open-access studies focused on climate-related misinformation and disinformation, selected through a systematic screening process based on PRISMA criteria. 
This approach was chosen to address the lack of comprehensive mappings that synthesize key themes and identify research gaps in this fast-growing field. The analysis classified the literature into 17 thematic clusters, highlighting key trends, gaps, and emerging challenges in the field. Our results reveal a strong dominance of studies centered on social media amplification, political denialism, and cognitive inoculation strategies, while underlining a lack of research on fact-checking mechanisms and non-Western contexts. We conclude with recommendations for strengthening the resilience of both the public and information ecosystems against the spread of false scientific claims. Full article

16 pages, 242 KiB  
Article
Global Compacts and the EU Pact on Asylum and Migration: A Clash Between the Talk and the Walk
by Gamze Ovacık and François Crépeau
Laws 2025, 14(2), 13; https://doi.org/10.3390/laws14020013 - 5 Mar 2025
Viewed by 3178
Abstract
The current global mobility paradigm suffers from a great paradox. The illegality of human mobility is manufactured through restrictive migration and asylum policies, which claim to address the supposed challenges of human mobility, such as erosion of border security, burden on the labour market, and social disharmony. On the contrary, they reinforce them, resulting in strengthened anti-migrant sentiments at the domestic level. The contradiction is that the more restrictive migration policies are and the more they are directed at containment of human mobility, the more counterproductive they become. The fact that the policies of the destination states are shaped through the votes of their citizens, and migrants are never part of the conversation which would bring the reality check of their lived lives, is a defining factor that enables state policies preventing and deterring access to territory and containing asylum seekers elsewhere. We demonstrate that this is the dynamic behind the new EU Pact on Migration and Asylum, as it thickens the European borders even further through harsher border procedures and expanded externalisation of migration control. Whereas the Global Compacts represent the paradigm of facilitated mobility and are a significant step in the right direction for moving beyond the defined paradox, the EU Pact represents the containment paradigm and showcases that the tension between the commitments and the actions of states is far from being resolved. Through an assessment of the EU Pact on Migration and Asylum’s alignment with the Global Compacts, this article scrutinizes the trajectory of the global mobility paradigm since the adoption of the Global Compacts. Full article
16 pages, 715 KiB  
Article
Sentence Embeddings and Semantic Entity Extraction for Identification of Topics of Short Fact-Checked Claims
by Krzysztof Węcel, Marcin Sawiński, Włodzimierz Lewoniewski, Milena Stróżyna, Ewelina Księżniak and Witold Abramowicz
Information 2024, 15(10), 659; https://doi.org/10.3390/info15100659 - 21 Oct 2024
Viewed by 1958
Abstract
The objective of this research was to design a method to assign topics to claims debunked by fact-checking agencies. During the fact-checking process, access to more structured knowledge is necessary; therefore, we aim to describe topics with semantic vocabulary. Classification of topics should go beyond simple connotations like instance-class and rather reflect broader phenomena that are recognized by fact checkers. The assignment of semantic entities is also crucial for the automatic verification of facts using the underlying knowledge graphs. Our method is based on sentence embeddings, various clustering methods (HDBSCAN, UMAP, K-means), semantic entity matching, and terms importance assessment based on TF-IDF. We represent our topics in semantic space using Wikidata Q-ids, DBpedia, Wikipedia topics, YAGO, and other relevant ontologies. Such an approach based on semantic entities also supports hierarchical navigation within topics. For evaluation, we compare topic modeling results with claims already tagged by fact checkers. The work presented in this paper is useful for researchers and practitioners interested in semantic topic modeling of fake news narratives. Full article
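The TF-IDF term-importance step can be sketched without the embedding and clustering machinery. The toy corpus and function below are illustrative assumptions, not the authors' pipeline, which additionally maps terms to Wikidata/DBpedia/YAGO entities:

```python
import math
from collections import Counter

# Illustrative sketch: the paper clusters sentence embeddings (HDBSCAN / UMAP /
# K-means) and then ranks cluster terms by TF-IDF to characterise each topic.
# Here a plain TF-IDF pass runs over tiny hand-made claim clusters.

def top_terms(cluster_docs: list[str], all_docs: list[str], k: int = 2) -> list[str]:
    """Rank the terms of one cluster by TF-IDF against the whole claim corpus."""
    n = len(all_docs)
    df = Counter(t for doc in all_docs for t in set(doc.lower().split()))
    tf = Counter(t for doc in cluster_docs for t in doc.lower().split())
    scored = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return sorted(scored, key=scored.get, reverse=True)[:k]

corpus = [
    "vaccine causes autism claim debunked",
    "vaccine side effects exaggerated online",
    "climate warming is a hoax claim",
    "climate data manipulated claim",
]
print(top_terms(corpus[:2], corpus))  # terms characteristic of the vaccine cluster
```

Terms that appear across all clusters (such as "claim" above) score low, so each cluster is labelled by its distinctive vocabulary before entity linking.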

16 pages, 842 KiB  
Article
Social Media, Endometriosis, and Evidence-Based Information: An Analysis of Instagram Content
by Hannah Adler, Monique Lewis, Cecilia Hoi Man Ng, Cristy Brooks, Mathew Leonardi, Antonina Mikocka-Walus, Deborah Bush, Alex Semprini, Jessica Wilkinson-Tomey, George Condous, Nikhil Patravali, Jason Abbott and Mike Armour
Healthcare 2024, 12(1), 121; https://doi.org/10.3390/healthcare12010121 - 4 Jan 2024
Cited by 16 | Viewed by 7695
Abstract
Social media platforms are used for support and as resources by people from the endometriosis community who are seeking advice about diagnosis, education, and disease management. However, little is known about the scientific accuracy of information circulated on Instagram about the disease. To fill this gap, this study analysed the evidence-based nature of Instagram content about endometriosis. A total of 515 Instagram posts published between February 2022 and April 2022 were gathered and analysed using a content analysis method, resulting in sixteen main content categories, including “educational”, which comprised eleven subcategories. Claims within educational posts were further analysed for their evidence-based accuracy, guided by a process that included fact-checking all claims against the current scientific evidence and research. Of the eleven educational subcategories, only four (cure, scientific article, symptoms, and fertility) comprised claims that were at least 50% evidence-based. More commonly, claims comprised varying mixtures of evidence-based, mixed, and non-evidence-based information, and some categories, such as surgery, were dominated by non-evidence-based information about the disease. This is concerning, as social media can affect real-life decision-making and disease management for individuals with endometriosis. This study therefore suggests that health communicators, clinicians, scientists, educators, and community groups trying to engage with the online endometriosis community need to be aware of social media discourses about endometriosis, while also ensuring that accurate and translatable information is provided.

17 pages, 3320 KiB  
Article
The Impact of Patient Characteristics, Risk Factors, and Surgical Intervention on Survival in a Cohort of Patients Undergoing Neoadjuvant Treatment for Cervical Cancer
by Irinel-Gabriel Dicu-Andreescu, Marian-Augustin Marincaș, Virgiliu-Mihail Prunoiu, Ioana Dicu-Andreescu, Sînziana-Octavia Ionescu, Anca-Angela Simionescu, Eugen Brătucu and Laurențiu Simion
Medicina 2023, 59(12), 2147; https://doi.org/10.3390/medicina59122147 - 11 Dec 2023
Cited by 3 | Viewed by 2199
Abstract
Introduction: Cervical cancer is among the most frequent types of neoplasia worldwide and remains the fourth leading cause of cancer death in women, a fact that raises the necessity for further development of therapeutic strategies. NCCN guidelines recommend radiation therapy with or without chemotherapy as the gold standard for locally advanced cervical cancer. Also, some studies claim that performing surgery after chemo-radiation therapy does not necessarily improve the therapeutic outcome. This study aims to determine the impact of the risk factors, various characteristics, and surgical treatment for patients in different stages of the disease on survival rate. Material and methods: Our study started as a retrospective, observational, unicentric one, carried out on a cohort of 96 patients diagnosed with cervical cancer from the surgical department of the Bucharest Oncological Institute, followed from 1 January 2019 for a period of 3 years. After the registration of the initial parameters, however, the study became prospective, as the patients were closely monitored through periodical check-ups. The end-point of the study is either the death of the participants or reaching the end of the follow-up period, and, therefore, we divided the cohort into two subgroups: the ones who survived after three years and the ones who did not. All 96 patients, with disease stages ranging from IA2 to IIIB, underwent radio-chemotherapy followed by adjuvant surgery. Results: Among the 96 patients, 45 (46%) presented residual tumor after radio-chemotherapy. Five patients (5%) presented positive resection margins at the post-operative histopathological examination. The presence of residual tumor, the FIGO stage post-radiotherapy, positive resection margins, and lympho-vascular and stromal invasions differed significantly between the subgroups, being more represented in the subgroup that reached the end-point. 
Variables correlated with the worst survival in Kaplan–Meier were the pelvic lymph node involvement—50% at three years (p—0.015)—and the positive resection margins—only 20% at three years (p < 0.001). The univariate Cox model identified as mortality-associated risk factors the same parameters as above, but also the intraoperative stage III FIGO (p < 0.001; HR 9.412; CI: 2.713 to 32.648) and the presence of post-radiotherapy adenopathy (p—0.031; HR: 3.915; CI: 1.136 to 13.487) identified through imagistic methods. The independent predictors of the overall survival rate identified were the positive resection margins (p—0.002; HR: 6.646; CI 2.0 to 22.084) and the post-radiotherapy stage III FIGO (p—0.003; HR: 13.886; CI: 2.456 to 78.506). Conclusions: The most important predictor factors of survival rate are the positive resection margins and the FIGO stage after radiotherapy. According to the NCCN guidelines in stages considered advanced (beyond stages IB3, IIA2), the standard treatment is neoadjuvant chemoradiotherapy. In our study, with radical surgery after neoadjuvant therapy, 46% of patients presented residual tumor at the intraoperative histopathological examination, a fact that makes the surgical intervention an important step in completing the treatment of these patients. In addition, based on the patient’s features/comorbidities and the clinical response to chemotherapy/radiotherapy, surgeons could carefully tailor the extent of radical surgery, thus resulting in a personalized surgical approach for each patient. However, a potential limitation can be represented by the relatively small number of patients (96) and the unicentric nature of our study. Full article
(This article belongs to the Special Issue Diagnosis and Treatment of Cervical Cancer)

32 pages, 8511 KiB  
Article
The PolitiFact-Oslo Corpus: A New Dataset for Fake News Analysis and Detection
by Nele Põldvere, Zia Uddin and Aleena Thomas
Information 2023, 14(12), 627; https://doi.org/10.3390/info14120627 - 23 Nov 2023
Cited by 8 | Viewed by 7042
Abstract
This study presents a new dataset for fake news analysis and detection, namely, the PolitiFact-Oslo Corpus. The corpus contains samples of both fake and real news in English, collected from the fact-checking website PolitiFact.com. It grew out of a need for a more controlled and effective dataset for fake news analysis and detection model development based on recent events. Three features make it uniquely placed for this: (i) the texts have been individually labelled for veracity by experts, (ii) they are complete texts that strictly correspond to the claims in question, and (iii) they are accompanied by important metadata such as text type (e.g., social media, news and blog). In relation to this, we present a pipeline for collecting quality data from major fact-checking websites, a procedure which can be replicated in future corpus building efforts. An exploratory analysis based on sentiment and part-of-speech information reveals interesting differences between fake and real news as well as between text types, thus highlighting the importance of adding contextual information to fake news corpora. Since the main application of the PolitiFact-Oslo Corpus is in automatic fake news detection, we critically examine the applicability of the corpus and another PolitiFact dataset built based on less strict criteria for various deep learning-based efficient approaches, such as Bidirectional Long Short-Term Memory (Bi-LSTM), LSTM fine-tuned transformers such as Bidirectional Encoder Representations from Transformers (BERT) and RoBERTa, and XLNet. Full article

15 pages, 5714 KiB  
Article
Leverage Boosting and Transformer on Text-Image Matching for Cheap Fakes Detection
by Tuan-Vinh La, Minh-Son Dao, Duy-Dong Le, Kim-Phung Thai, Quoc-Hung Nguyen and Thuy-Kieu Phan-Thi
Algorithms 2022, 15(11), 423; https://doi.org/10.3390/a15110423 - 10 Nov 2022
Cited by 11 | Viewed by 3488
Abstract
The explosive growth of the social media community has increased many kinds of misinformation and is attracting tremendous attention from the research community. One of the most prevalent forms of misleading news is the cheapfake. Cheapfakes rely on non-AI techniques, such as pairing unaltered images with false contextual news, which makes them easy and “cheap” to create and therefore abundant on social media. Moreover, the development of deep learning has opened up many news-related research domains, such as fake news detection, rumour detection, fact-checking, and verification of claimed images. Nevertheless, despite the harm cheapfakes cause to online communities and the real world, there is little research on detecting them in the computer science domain. Detecting misused, false, or out-of-context pairs of images and captions is challenging, even for humans, because of the complex correlation between the attached image and the veracity of the caption content. Existing research focuses mostly on training and evaluating on a given dataset, which limits the proposals in terms of the categories, semantics, and situations covered by that dataset's characteristics. In this paper, to address these issues, we leverage textual semantic understanding from a large corpus and integrate it with different combinations of text-image matching and image captioning methods via an ANN/Transformer boosting schema to classify a triple of (image, caption1, caption2) into OOC (out-of-context) and NOOC (not out-of-context) labels. We customized these combinations according to various exceptional cases observed during data analysis. We evaluate our approach using the dataset and evaluation metrics provided by the COSMOS baseline. Compared to other methods, including the baseline, our method achieves the highest Accuracy, Recall, and F1 scores.
(This article belongs to the Special Issue Deep Learning Architecture and Applications)

12 pages, 900 KiB  
Article
Automatic Fact Checking Using an Interpretable Bert-Based Architecture on COVID-19 Claims
by Ramón Casillas, Helena Gómez-Adorno, Victor Lomas-Barrie and Orlando Ramos-Flores
Appl. Sci. 2022, 12(20), 10644; https://doi.org/10.3390/app122010644 - 21 Oct 2022
Cited by 4 | Viewed by 3545
Abstract
We present a neural network architecture focused on verifying facts against evidence found in a knowledge base. The architecture can perform relevance evaluation and claim verification, parts of a well-known three-stage method of fact-checking. We fine-tuned BERT to codify claims and pieces of evidence separately. An attention layer between the claim and evidence representation computes alignment scores to identify relevant terms between both. Finally, a classification layer receives the vector representation of claims and evidence and performs the relevance and verification classification. Our model allows a more straightforward interpretation of the predictions than other state-of-the-art models. We use the scores computed within the attention layer to show which evidence spans are more relevant to classify a claim as supported or refuted. Our classification models achieve results compared to the state-of-the-art models in terms of classification of relevance evaluation and claim verification accuracy on the FEVER dataset. Full article
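A toy version of the described attention idea: dot-product scores between a claim vector and evidence-span vectors, softmax-normalised so that the largest weight marks the span most relevant to the verdict. The vectors below are invented for illustration; the paper computes them with fine-tuned BERT encoders:

```python
import math

# Minimal sketch of attention-based claim/evidence alignment. The weight vector
# it returns is exactly the kind of signal the paper reads off to explain which
# evidence spans drove a supported/refuted decision.

def attention_weights(claim_vec: list[float],
                      evidence_vecs: list[list[float]]) -> list[float]:
    """Softmax over dot-product alignment scores, one weight per evidence span."""
    scores = [sum(c * e for c, e in zip(claim_vec, ev)) for ev in evidence_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

claim = [0.9, 0.1, 0.3]
evidence_spans = [[0.8, 0.2, 0.2], [0.1, 0.9, 0.0], [0.2, 0.1, 0.9]]
weights = attention_weights(claim, evidence_spans)
```

Because the weights sum to one, they can be displayed directly as a relevance heat map over evidence spans, which is the interpretability advantage the abstract claims.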
(This article belongs to the Special Issue Applications of Deep Learning and Artificial Intelligence Methods)

21 pages, 1233 KiB  
Article
PEINet: Joint Prompt and Evidence Inference Network via Language Family Policy for Zero-Shot Multilingual Fact Checking
by Xiaoyu Li, Weihong Wang, Jifei Fang, Li Jin, Hankun Kang and Chunbo Liu
Appl. Sci. 2022, 12(19), 9688; https://doi.org/10.3390/app12199688 - 27 Sep 2022
Cited by 3 | Viewed by 2740
Abstract
Zero-shot multilingual fact-checking, which aims to discover and infer subtle clues from retrieved relevant evidence to verify a given claim in cross-language and cross-domain scenarios, is crucial for fostering a free, trusted, and wholesome global network environment. Previous works have made enlightening and practical explorations in claim verification, but the zero-shot multilingual task faces new challenges: the neglect of authenticity-dependent learning between multilingual claims, the lack of heuristic checking, and the bottleneck of insufficient evidence. To alleviate these gaps, a novel Joint Prompt and Evidence Inference Network (PEINet) is proposed to verify multilingual claims according to the human fact-checking cognitive paradigm. In detail, we first leverage a language-family encoding mechanism to strengthen knowledge transfer among multilingual claims. Then, a prompt-tuning module is designed to infer the falsity of the fact, and sufficient fine-grained evidence is extracted and aggregated based on a recursive graph attention network to verify the claim again. Finally, we build a unified inference framework via multi-task learning for final fact verification. The newly achieved state-of-the-art performance on the released challenging benchmark dataset, which includes not only an out-of-domain test but also a zero-shot test, proves the effectiveness of our framework, and further analysis demonstrates the superiority of PEINet in multilingual claim verification and inference, especially in the zero-shot scenario.
(This article belongs to the Special Issue AI Techniques in Computational and Automated Fact Checking)
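The abstract above describes aggregating fine-grained evidence with a recursive graph attention network. The paper's actual architecture is not reproduced here; as an illustration only, the sketch below shows the core idea of claim-conditioned attention over evidence vectors (dot-product scores normalized with softmax, then a weighted sum) in plain Python. All vectors and names are invented for the example.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_aggregate(claim_vec, evidence_vecs):
    """One attention round: score each evidence vector by its dot
    product with the claim, normalize with softmax, and return the
    weighted sum plus the attention weights."""
    scores = [sum(c * e for c, e in zip(claim_vec, ev)) for ev in evidence_vecs]
    weights = softmax(scores)
    dim = len(claim_vec)
    aggregated = [
        sum(w * ev[i] for w, ev in zip(weights, evidence_vecs))
        for i in range(dim)
    ]
    return aggregated, weights

# Toy claim and evidence embeddings (2-dimensional, invented).
claim = [1.0, 0.0]
evidence = [[0.9, 0.1], [0.0, 1.0], [0.8, 0.2]]
aggregated, weights = attention_aggregate(claim, evidence)
```

Evidence vectors that align with the claim receive larger weights, so irrelevant evidence contributes little to the aggregated representation; a recursive version would repeat this over a graph's neighborhoods.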
16 pages, 283 KiB  
Article
Checked and Approved? Human Resources Managers’ Uses of Social Media for Cybervetting
by Michel Walrave, Joris Van Ouytsel, Kay Diederen and Koen Ponnet
J. Cybersecur. Priv. 2022, 2(2), 402-417; https://doi.org/10.3390/jcp2020021 - 8 Jun 2022
Cited by 2 | Viewed by 6449
Abstract
Human resource (HR) professionals who assess job candidates may engage in cybervetting: the collection and analysis of applicants’ personal information available on social network sites (SNS). This raises important questions about the privacy of job applicants. In this study, interviews were conducted with 24 HR professionals from profit-sector and governmental organizations to examine how information found on SNS is used to screen job applicants. HR managers were found to check for possible mismatches between the online information and the experiences and competences claimed by candidates. Pictures of job candidates’ spare-time activities, drinking behavior, and physical appearance are seen as very informative. Pictures posted by job candidates’ connections are valued as more informative than those posted by the applicants themselves. HR managers in governmental organizations differ from their profit-sector counterparts in that political views may play a role for the former. Finally, some HR professionals do not collect personal information about job candidates through social media, since they aim to respect a clear distinction between private life and work and do not want to be influenced by information that has no relation to candidates’ qualifications. The study’s implications for theory and practice are also discussed. Full article
(This article belongs to the Special Issue Cyber Situational Awareness Techniques and Human Factors)
22 pages, 1327 KiB  
Review
Using NLP for Fact Checking: A Survey
by Eric Lazarski, Mahmood Al-Khassaweneh and Cynthia Howard
Designs 2021, 5(3), 42; https://doi.org/10.3390/designs5030042 - 14 Jul 2021
Cited by 15 | Viewed by 9075
Abstract
In recent years, disinformation and “fake news” have been spreading throughout the internet at rates never seen before. This has created the need for fact-checking organizations, groups that seek out claims and comment on their veracity, which have emerged worldwide to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in using natural language processing to automate fact checking. It follows the entire process of automated fact checking, from detecting claims, to checking them against evidence, to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement before widespread use. Full article
(This article belongs to the Section Electrical Engineering Design)
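The survey above follows the automated fact-checking process from claim detection through verification. As a toy end-to-end illustration only (not any system described in the survey), the sketch below wires the three stages together with deliberately naive heuristics: a digit/comparative-cue claim detector, word-overlap evidence retrieval, and an overlap-threshold verdict. All functions and data are invented for the example; real systems replace each stage with trained models.

```python
import re

def detect_claims(text):
    """Toy claim detection: flag sentences containing a number or a
    comparative cue word as check-worthy."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    cues = ("more", "less", "increase", "decrease")
    return [s for s in sentences
            if re.search(r"\d", s) or any(c in s.lower() for c in cues)]

def retrieve_evidence(claim, corpus):
    """Toy retrieval: return the corpus sentence with the largest
    word overlap with the claim."""
    claim_words = set(claim.lower().split())
    return max(corpus, key=lambda s: len(claim_words & set(s.lower().split())))

def verdict(claim, evidence):
    """Toy verification: SUPPORTS if most claim words appear in the evidence."""
    claim_words = set(claim.lower().split())
    evidence_words = set(evidence.lower().split())
    overlap = len(claim_words & evidence_words) / max(len(claim_words), 1)
    return "SUPPORTS" if overlap > 0.5 else "NOT ENOUGH INFO"

# Invented toy data to run the three stages end to end.
text = "The sky is blue. Unemployment fell by 2 percent in 2021."
corpus = [
    "Official statistics show unemployment fell by 2 percent in 2021.",
    "Cats are mammals.",
]
claims = detect_claims(text)
evidence = retrieve_evidence(claims[0], corpus)
label = verdict(claims[0], evidence)
```

Even this crude pipeline shows why the survey's conclusion holds: each heuristic stage works on easy inputs but generalizes poorly, which is exactly where learned claim detectors, retrievers, and verifiers come in.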