Peer-Review Record

Investigating the Difference of Fake News Source Credibility Recognition between ANN and BERT Algorithms in Artificial Intelligence

Appl. Sci. 2022, 12(15), 7725; https://doi.org/10.3390/app12157725
by Tosti H. C. Chiang *, Chih-Shan Liao and Wei-Ching Wang
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 30 June 2022 / Revised: 18 July 2022 / Accepted: 29 July 2022 / Published: 31 July 2022
(This article belongs to the Special Issue AI, Machine Learning and Deep Learning in Signal Processing)

Round 1

Reviewer 1 Report

- The article is on a topic of obvious interest and will undoubtedly be useful to readers of the journal. The work is well written; the theoretical bases, methods, and results are well presented, and the discussion is adequate.

Specific comments

- Literature review: I recommend considering the UNESCO report "Journalism, ‘Fake News’ & Disinformation" (Cherilyn Ireton and Julie Posetti, Eds., 2018) and the Council of Europe report "INFORMATION DISORDER: Toward an interdisciplinary framework for research and policy making" (Claire Wardle and Hossein Derakhshan). The latter is important for the taxonomy of information disorder that it introduces.

- Use the term "information disorder" or "disinformation" instead of "fake news", in line with the policies recommended by the UNESCO and Council of Europe reports.

Author Response

Reviewer 1

 

- The article is on a topic of obvious interest and will undoubtedly be useful to readers of the journal. The work is well written; the theoretical bases, methods, and results are well presented, and the discussion is adequate.

Ans: Thank you for your valuable opinions.

Specific comments

- Literature review: I recommend considering the UNESCO report "Journalism, ‘Fake News’ & Disinformation" (Cherilyn Ireton and Julie Posetti, Eds., 2018) and the Council of Europe report "INFORMATION DISORDER: Toward an interdisciplinary framework for research and policy making" (Claire Wardle and Hossein Derakhshan). The latter is important for the taxonomy of information disorder that it introduces.

Ans: Thank you for your careful reading. We have updated the explanation of the term “fake news” in Section 2.1 (page 3). The revised text is below.

There is no consistent definition of fake news [7]. For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) report "Journalism, ‘Fake News’ & Disinformation" [8] discusses the terms “fake news” and “disinformation”. The Council of Europe report “Information Disorder: Toward an interdisciplinary framework for research and policy making” [9] divides “information disorder” into three categories: mis-information, dis-information, and mal-information. Guess, Nagler & Tucker [10] regarded fake news as content that wrongly or intentionally misleads others while resembling genuine news articles. From a business perspective, the dissemination of fake news aims at advertising revenue. By synthesizing various researchers’ definitions of fake news, Leeder [11] pointed out the relationship of fake news to two types of incorrect information, namely inappropriate information and false information. Both are wrong information, but false information is intentionally disseminated to mislead. In this sense, fake news can be problematic or incorrect information that looks like real news and is intended to deceive or mislead people. Cooke [12] indicated that inappropriate information is incomplete, ambiguous, and uncertain information that may nevertheless be correct, requiring judgment based on the background context; false information, on the other hand, is wrong information that may be intentionally disseminated with specific planning.

 

- Use the term "information disorder" or "disinformation" instead of "fake news", in line with the policies recommended by the UNESCO and Council of Europe reports.

Ans: Thank you for your careful reading. We have added two references, the UNESCO report and the Council of Europe report, in Section 2.1 (page 3). The revised text is below.

 

(The revised passage is the same as that quoted in the response to the previous comment.)

 

Author Response File: Author Response.docx

Reviewer 2 Report

Investigating the difference in fake news source credibility recognition between ANN and BERT algorithms in artificial intelligence

--------------------

The paper investigates the difference in fake news source credibility recognition between ANN and BERT.

The paper has potential but it requires a major revision.

 

* The topic could be "Investigating the difference of ............"

* The benefits of the proposed study could be highlighted in the abstract.

* The quantitative figures of the study could be presented in the abstract.

* English proofreading is required to improve the readability.

* The first sentence of the introduction is confusing. Please split it to make it readable.

* It seems that the model is overfitting (refer to Figs. 1 & 2). How does the paper justify that the proposed model is highly generalisable? You may use early stopping to deal with this.

* The paper lacks recent works used to represent text or news in natural language processing:

https://www.hindawi.com/journals/cin/2021/2158184/ (explains the domain-specific, syntactic approach)

https://www.hindawi.com/journals/cin/2022/5681574/ (explains the syntactic approach)

Please elaborate on those papers and explain why the proposed work is required for the representation of texts.

* Is it possible to compare the proposed work with the SOTA methods?

Author Response

Reviewer 2

 

Investigating the difference in fake news source credibility recognition between ANN and BERT algorithms in artificial intelligence

--------------------

The paper investigates the difference in fake news source credibility recognition between ANN and BERT.

The paper has potential but it requires a major revision.

Ans: Thank you for your valuable opinions. We have updated our article.

 

* The topic could be "Investigating the difference of ............"

Ans: Thank you for your careful reading. The title has been updated to “Investigating the difference of fake news source credibility recognition between ANN and BERT algorithms in artificial intelligence”.

* The benefits of the proposed study could be highlighted in the abstract.

Ans: Thank you for your careful reading. We have highlighted the benefits of the proposed study in the abstract. The revised abstract is below.

Abstract: Fake news spreading through many channels misleads people with disinformation. To reduce the harm of fake news and provide multiple, effective channels for assessing news credibility, this study applies a linguistic approach to a word frequency-based ANN system and a semantics-based BERT system, using mainstream news as the general news dataset and content farm articles as the fake news dataset, so that the models judge news source credibility and the difference in news source credibility recognition between ANN and BERT can be compared. The findings show high similarity between the ANN system and the BERT system in the highest and lowest hit rates (Liberty Time has the highest hit rate; ETtoday and nooho.net have the lowest). The BERT system presents a higher and more stable overall source credibility recognition rate than the ANN system (BERT 91.2% > ANN 82.75%). Recognizing news source credibility through artificial intelligence could not only effectively enhance people’s sensitivity to news sources but also, in the long term, cultivate public media literacy, achieving a synergy of fake news resistance with technology.
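For readers unfamiliar with the two approaches named in the abstract, the following is a minimal, hypothetical sketch, not the authors' implementation, contrasting word frequency features fed to a small ANN with semantic features taken from a pretrained BERT encoder; the corpus, labels, and the model choice bert-base-chinese are all illustrative assumptions.

```python
# Hypothetical sketch contrasting the two feature routes compared in the paper.
# The corpus, labels, and model name below are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

texts = ["mainstream news article ...", "content farm article ..."] * 50  # placeholder corpus
labels = [1, 0] * 50  # 1 = mainstream source, 0 = content farm

# --- Word frequency route: bag-of-words counts fed to a small dense ANN ---
vec = CountVectorizer(max_features=5000)
ann = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
ann.fit(vec.fit_transform(texts), labels)

# --- Semantic route: contextual [CLS] embeddings from a pretrained BERT ---
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed model choice
bert = AutoModel.from_pretrained("bert-base-chinese")
with torch.no_grad():
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    cls_vectors = bert(**enc).last_hidden_state[:, 0]  # one vector per article

# The same kind of dense classifier can then be trained on cls_vectors,
# making the comparison between the two feature spaces direct.
```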

* The quantitative figures of the study could be presented in the abstract.

Ans: Thank you for your careful reading. We have added the quantitative results of the study to the abstract.

* English proofreading is required to improve the readability.

Ans: Thank you for your suggestion. We contacted an English proofreading company, but they would need a month, whereas MDPI gives us only ten days to revise the article. Once we have determined that there are no major issues with the article, we will send it to an editing service.

 

* The first sentence of the introduction is confusing. Please split it to make it readable.

Ans: Thank you for your careful reading. We have revised the first sentence of the introduction on page 1.

* It seems that the model is overfitting (refer to Figs. 1 & 2). How does the paper justify that the proposed model is highly generalisable? You may use early stopping to deal with this.

Ans: Thank you for your careful reading. Perhaps we can set the number of epochs to 50 for the training model in Figure 2. We will modify the parameters to enhance the training model in future work.
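For reference, below is a minimal sketch of the early stopping the reviewer suggests, using Keras callbacks; the model, data names, and epoch counts are assumptions, not the paper's actual code.

```python
# Minimal sketch of early stopping in Keras; names and values are assumptions.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch validation loss for signs of overfitting
    patience=5,                 # tolerate 5 epochs without improvement
    restore_best_weights=True,  # roll back to the best epoch's weights
)

# model = tf.keras.Sequential([...])   # the paper's ANN architecture (assumed)
# model.fit(x_train, y_train,
#           validation_split=0.2,      # held-out data to detect overfitting
#           epochs=200,                # generous upper bound; stops early
#           callbacks=[early_stop])
```

With this callback, the reported number of epochs becomes an upper bound rather than a fixed hyperparameter, which directly addresses the generalisability concern raised by the reviewer.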

* The paper lacks recent works used to represent text or news in natural language processing:

https://www.hindawi.com/journals/cin/2021/2158184/ (explains the domain-specific, syntactic approach)

https://www.hindawi.com/journals/cin/2022/5681574/ (explains the syntactic approach)

Please elaborate those papers and explain why the proposed work is required for the representation of texts.

Ans: Thank you for your careful reading. We have added related references in Section 2.2 (page 7) and Section 2.3 (page 8).

* Is it possible to compare the proposed work with the SOTA methods?

Ans: Thank you for your careful reading. We will extend our scope and keep looking for related articles.

 

 

Author Response File: Author Response.docx

Reviewer 3 Report

This paper appears to be about detecting the difference in fake news source credibility by employing two different Neural Network approaches to the datasets collected.

It was good to see the authors make the distinction that this is not fact-checking, but rather a case of training the models to pick up on the differences in writing style/content style between the two types of news sources.

It appears the content from the news farm sites is blanket-labelled as fake news and the articles from the "reputable" websites are labelled as "good news". I think this is far too black and white - why not just label according to source in that way? Poorer websites still report true stories from time to time (although they may use loaded words or have an inherent bias of some sort).

I found the results reporting a bit weird. What is a "hit"? The results themselves look good, but I feel the training set may be too diverse across the two classes and somewhat "rigged" in the sense it is being evaluated (maybe this was the intent? Just to detect articles from a site such as this regardless of content?). The results, given how they are achieved, are not surprising, and I would expect a high recognition rate.

Other comments

- The writing is very poor. Grammatical mistakes litter this document throughout, and I get the impression it has not been proofread at all. You should never submit articles in this state.

  - The background on AI and its history was not needed.  

- A severe lack of citations is a common problem throughout. Many facts or claims are asserted without references. This is problematic.

Comments for author File: Comments.pdf

Author Response

Reviewer 3

 

This paper appears to be about detecting the difference in fake news source credibility by employing two different Neural Network approaches to the datasets collected.

It was good to see the authors make the distinction that this is not fact-checking, but rather a case of training the models to pick up on the differences in writing style/content style between the two types of news sources.

Ans : Thank you for your valuable opinions.

It appears the content from the news farm sites is blanket-labelled as fake news and the articles from the "reputable" websites are labelled as "good news". I think this is far too black and white - why not just label according to source in that way? Poorer websites still report true stories from time to time (although they may use loaded words or have an inherent bias of some sort).

Ans: Thank you for your careful reading. We agree with your comment; that is why our system is named a fake news source “credibility” recognition system, not an “accuracy” recognition system.

I found the results reporting a bit weird. What is a "hit"? The results themselves look good, but I feel the training set may be too diverse across the two classes and somewhat "rigged" in the sense it is being evaluated (maybe this was the intent? Just to detect articles from a site such as this regardless of content?). The results, given how they are achieved, are not surprising, and I would expect a high recognition rate.

Ans: Thank you for your careful reading. Although the news content in our dataset is not fact-checked, we treat a test article whose model output exceeds 50% as a “hit”; the reported figures are therefore hit rates, not correctness rates.
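To make this definition concrete, here is a minimal sketch of the hit-rate computation as we read the response; the scores, threshold placement, and function name are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of the hit-rate definition described above: a test
# article counts as a "hit" when the model's score for its labelled source
# class exceeds 0.5, regardless of whether the article's content is true.
def hit_rate(scores):
    """scores: model outputs in [0, 1] for each test article's labelled class."""
    hits = sum(1 for s in scores if s > 0.5)
    return hits / len(scores)

print(hit_rate([0.91, 0.62, 0.43, 0.88]))  # 3 of 4 hits -> 0.75
```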

Other comments

  - The writing is very poor.  Grammatical mistakes litter this document throughout and I get the impression it has not been proof read at all.  You should not submit articles in this state ever.

Ans: Thank you for your suggestion. We contacted an English proofreading company, but they would need a month, whereas MDPI gives us only ten days to revise the article. Once we have determined that there are no major issues with the article, we will send it to an editing service.

  - The background on AI and its history was not needed.

Ans: Thank you for your careful reading. We have deleted Section 2.2 (Artificial Intelligence) on page 5.

  - Severe lack of citations that is a common problem throughout.  Many facts or claims are asserted without reference.  This is problematic.

Ans: Thank you. This article mainly compares the differences between word frequency-based AI (ANN) and semantics-based AI (BERT). We believe this research offers substantial innovation to researchers, although it is admittedly difficult for AI to explain its operation process.

 

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Thanks to the authors for improving the manuscript. Please proofread the manuscript.
