Review
Peer-Review Record

A Systematic Literature Review and Meta-Analysis of Studies on Online Fake News Detection

Information 2022, 13(11), 527; https://doi.org/10.3390/info13110527
by Robyn C. Thompson *, Seena Joseph and Timothy T. Adeliyi
Submission received: 10 October 2022 / Revised: 1 November 2022 / Accepted: 1 November 2022 / Published: 4 November 2022
(This article belongs to the Section Review)

Round 1

Reviewer 1 Report

In my view, this work is a remarkable effort to offer a systematic review and meta-analysis on fake news detection methods based on Deep Learning, Machine Learning and ensemble approaches, using N=125 relevant scientific articles published between 2014 and 2022. 

 

I have only four observations to be considered and corrected, three of them about content and one about a formal issue.

 

Three content remarks:

 

1)    In my view, the authors should review the definition of fake news they chose to open their work with. There are many definitions (Allcott & Gentzkow, 2018; McNair, 2018; Lazer et al., 2018; Dalkir & Katz, 2020; Dentith, 2017; Gelfert, 2018; Jaster & Lanius, 2018; Hinsley & Holton, 2021, etc.). None of them (to my knowledge) includes the nuances the authors decide to highlight in their own definition.

They say: “The term "fake news" describes untrue speculations and insinuations made on social media, such as outright fabrications or flagrant misrepresentations of an actual occurrence” (p. 1).

This is not accurate: fake news mostly takes the form of claims (not speculations or insinuations), and it is sometimes fabricated entirely from scratch (not necessarily based on an actual occurrence).

McNair defined fake news as follows: “intentional disinformation (invention or falsification of known facts) for political and/or commercial purposes, presented as real news”.

Allcott and Gentzkow (2017) also made a valuable contribution to the definition of fake news: “News articles that are intentionally and verifiably false and could mislead readers”. Moreover, they chose a statement as an example: “Pope Francis openly endorses Donald Trump” (wtoe5news.com). This is not a speculation or an insinuation, and it is totally made up. Allcott and Gentzkow recalled another well-known fake news story that also spread around the 2016 US Presidential Election: “FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide.” (denverguardian.com). Again, this is not a speculation or an insinuation at all.

Fake news is presented as claims or stories that mimic, resemble, or masquerade under the guise of media news. Fake news is information that tries to pass itself off as news reporting or journalism.

2)    It is well known that fake news detection has been approached with different strategies, not only those based on AI techniques (machine learning, deep learning, ensemble approaches, etc.), which constitute only one of four families of methods.

Put simply, there are four basic strategies for fake news detection:

(1) Knowledge-based methods, which detect fake news by verifying whether the knowledge within the news content (text) is consistent with known facts (fact-checking techniques, whether manual or automatic).

(2) Style-based methods, which are concerned with how fake news is written (aspects of lexicon, syntax, semantics, and discourse, e.g., whether it is written with extreme emotion: sentiment analysis). That is to say, malicious entities prefer to write fake news in a “special” style that encourages others to read it and convinces them to trust it.

(3) Propagation-based methods, which detect fake news based on how it spreads online; and

(4) Source-based methods, which detect fake news by investigating the credibility of news sources at various stages (as the news is created, published online, and spread on social media).

I think the authors should present these four kinds of approaches to frame the focus of their study, although their goal is centered mostly on (2).
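To make the distinction concrete, the style-based strategy (2) can be sketched, in its very simplest form, as a lexical scorer over emotionally charged wording. The lexicon, function names, and threshold below are hypothetical and purely illustrative; real style-based detectors use trained classifiers over lexical, syntactic, and sentiment features rather than a hand-picked word list.

```python
# Minimal illustrative sketch of a style-based check (strategy 2):
# score a headline by the density of emotionally charged words.
# EMOTIVE_WORDS and the threshold are hypothetical, for illustration only.

EMOTIVE_WORDS = {
    "shocking", "outrageous", "unbelievable", "secret",
    "exposed", "miracle", "dead", "scandal",
}

def emotive_score(headline: str) -> float:
    """Return the fraction of tokens found in the emotive lexicon."""
    tokens = [t.strip(".,!?\"'").lower() for t in headline.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EMOTIVE_WORDS)
    return hits / len(tokens)

def looks_sensational(headline: str, threshold: float = 0.2) -> bool:
    """Flag a headline whose emotive-word density exceeds the threshold."""
    return emotive_score(headline) > threshold

print(looks_sensational("Shocking secret exposed about the election"))  # True
print(looks_sensational("City council approves new budget"))            # False
```

A trained style-based system would replace the fixed lexicon with features learned from labeled corpora, but the underlying intuition, that deceptive writing exhibits a measurable style, is the same.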

 

3)    What about detecting fake news by exploring news images or, more generally, multimodality? Obviously this is not the authors’ goal, but it is worth mentioning, because not all fake news detection is driven by natural language processing. When we refer to fake news “content”, images (be they pictures or videos) should be on an equal footing with text. The authors mention a couple of works on the matter (Singh, B.; Sharma, 2021; Varshney, D.; Vishwakarma, D.K., 2022), but they do not allude explicitly to image-based detection of fakery (Shu et al., 2017; Farid, 2009, 2019a, 2019b), and they do not mention any kind of multimodality.

Some suggested references:

 

Farid, H. (2009). “Digital doctoring: Can we trust photographs?”. In B. Harrington (Ed.), Deception. Stanford: Stanford University Press, pp. 95–108.

Farid, H. (2019a). Fake Photos. Cambridge, MA: MIT Press.

Farid, H. (2019b). Image Forensics. Annual Review of Vision Science, 5(1), 549–573. https://doi.org/10.1146/annurev-vision-091718-014827

Formal issue:

I think the authors should revise their text to avoid repetition.

Examples:

Abstract:

“Despite the large number of studies on fake news detection, they have not yet been combined to offer coherent insight on the trends and advancements in fake news detection” (p. 1).

 

“The discovery of a variety of sources in research on the detection of online fake news can help researchers make better decisions by identifying appropriate AI approaches for detecting fake news online” (p. 2).

Author Response

Good Day

Herewith our response:

Content comment 1: fake news definition.
Response: Thank you for pointing this out; we have included an additional definition - Line 38 (no markup).

Content comment 2: refer to the different strategies.
Response: Lines 45-54 (no markup): an additional reference to the different strategies has been included in the introduction.
Lines 191-197 & 200-202 (no markup): the authors have provided additional motivation for the inclusion of DL, ML and ensemble approaches, with the text now also highlighting the use of supervised methods.

Content comment 3: consideration for detection of fake news through images.
Response: We agree with this and have updated the contents to specify the use of image data for fake news detection.
Lines 99-108 and the inclusion criteria (IC2 in Table 1) have been updated to support the visual dataset used.

Formal issue: avoid repetition.
Response: The identified sections have been paraphrased.


Reviewer 2 Report

This manuscript presents a thorough introduction from which the objective, contributions, and structure are coherent. The work provides a decent literature review. The Materials and Methods section is debatable but acceptable. The results of the study are presented fully and clearly.

Several minor remarks are to be addressed:

1. In line 393, there is possibly an extra dot.

2. The Conclusion section should be extended with the numerical results obtained in the paper and the limitations of the proposed method.

In sum, the submitted manuscript can, in principle, be accepted after minor revisions based on the reviewer’s comments.

Author Response

Good Day

Please see the authors responses below:

Comment 1: extra dot on line 393.
Response: Corrected.

Comment 2: Conclusion - include numeric values & limitations.
Response: Numeric values have been included in the abstract; the authors are of the opinion that including them in the conclusion is not necessary.
Lines 450-455 (no markup): limitations regarding the reliance on a single database and on only supervised methods have been acknowledged in the conclusion.

Kind Regards
