Article
Peer-Review Record

Towards the Detection of Fake News on Social Networks Contributing to the Improvement of Trust and Transparency in Recommendation Systems: Trends and Challenges

Information 2022, 13(3), 128; https://doi.org/10.3390/info13030128
by Oumaima Stitini *, Soulaimane Kaloun and Omar Bencharef
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 29 November 2021 / Revised: 13 December 2021 / Accepted: 13 December 2021 / Published: 3 March 2022

Round 1

Reviewer 1 Report

The authors modified the paper according to my suggestions, and its quality is now improved. Thus, I recommend its publication.

Author Response

Thank you for the positive evaluation of our work.

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript was well revised in response to my previous comments. I have only a few more questions and suggestions:

  1. Figure 1 is not clear to me. What is the idea this figure is trying to convey? Maybe a formal taxonomy of terms would be more beneficial.
  2. Table 2: explain the notation (✓ and ×) used. What does it mean: supports, provides, implements, or something else? Explain in the caption.
  3. Table 1: the proposed approach does not seem to have better results than [1]. More discussion is needed, including on the limitations of the proposed methodology.
  4. Present a motivation for the use of the selected machine learning methods (logistic regression, decision tree, naive Bayes, and linear SVM). These are quite old methods. Did you consider any more recent alternatives? Support your claims and reasoning with appropriate references.
  5. Conclusions: use the main numerical finding from the experiments to support your claims.

Author Response

Thank you for the comments concerning our manuscript. We have studied them carefully and made corrections that we hope meet with your approval. Thank you very much for your comments and suggestions.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The authors tried to analyze potential fake news published on social networks. They proposed a semi-supervised method whose efficiency was tested on two benchmark datasets and compared with the most commonly available simple learners: logistic regression, decision tree, naive Bayes, and linear SVM.
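(For illustration only, the following is a minimal sketch of how such a baseline comparison is typically set up, assuming Python with scikit-learn, a TF-IDF text representation, and an illustrative dataset file and column names; none of these details come from the manuscript itself.)

    # Illustrative sketch only (not the authors' code): comparing the four
    # baseline learners mentioned above on a labelled text dataset.
    # The file name and column names ("text", "label") are assumptions.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier

    df = pd.read_csv("fake_news.csv")  # assumed benchmark file
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=42)

    # Represent the news texts as TF-IDF vectors.
    vectorizer = TfidfVectorizer(max_features=50_000, stop_words="english")
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    # Train each baseline and report its test accuracy.
    baselines = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=42),
        "naive Bayes": MultinomialNB(),
        "linear SVM": LinearSVC(),
    }
    for name, clf in baselines.items():
        clf.fit(X_train_vec, y_train)
        print(f"{name}: accuracy = {accuracy_score(y_test, clf.predict(X_test_vec)):.3f}")

Accuracy is used here only as an example metric; the manuscript's own evaluation protocol and datasets may differ.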

The paper is intriguing, but a few issues should be "fixed" before publication.

1) I would suggest reading and considering two recent works published in the Information journal, perhaps comparing your results with those of other authors who adopted a different recipe: https://doi.org/10.3390/info12090376; https://doi.org/10.3390/info12060248

2) There are some insights from the Psychology of Virtual Environments that could be useful to enhance the paper and increase the multidisciplinarity of the work. There is an interesting work, very recently published in the Future Internet journal, that the authors could use to frame their paper: https://doi.org/10.3390/fi13050110. In the work of Duradoni and colleagues (2021) there is evidence that, on the Internet, we tend to trust strangers more than we reasonably should because we implicitly represent/treat them as having a good reputation. This aspect can contribute (among many others) to explaining why many people trust fake news. The same research group also published a work on reputation dynamics that can strengthen your point about trust-related behavior: https://doi.org/10.3390/fi10060050. Indeed, the reputation/popularity of people or content may enhance the trust and credibility of the information received. Finally, regarding trust's potential to be self-reinforcing, the literature describes the "Reputation Inertia Effect"; please refer to it: https://doi.org/10.1002/hbe2.170.

3) In the state of the art, please report, if available, the accuracy rate in successfully detecting fake news for each of the proposed solutions. Otherwise, it would be difficult for the reader to understand what level of technological readiness we are currently at and to compare the solutions.

Reviewer 2 Report

This work aims to detect forms of fake news disseminated on social networks in order to enhance the quality of trust and transparency in the social network recommendation system. However, the contribution of this paper falls short of the minimum requirements for a journal publication. Moreover, the technical quality and presentation are poor.

Comments:

  1. The contribution of this article is not clear. It seems that a standard methodology is adopted without any innovation. If there is any novelty, it must be clearly stated in the introduction section.
  2. The technical description of the methodology in Section 5 lacks technical detail. The methods used are not explained in sufficient detail to allow for replicability.
  3. Figure 1 is not a Venn diagram, since the intersections of the concepts are not labelled.
  4. In the related works section, discuss more recent work on recommendation in social networks, such as “Recommendation based on review texts and social communities: A hybrid model”.
  5. There is no information on what dataset, if any, was used for experiments.
  6. I did not find answers to the Research Challenges that were formulated.
  7. The similarity algorithm mentioned in the conclusions section is not mentioned anywhere in the paper.