Article
Peer-Review Record

Graph-Augmentation-Free Self-Supervised Learning for Social Recommendation

Appl. Sci. 2023, 13(5), 3034; https://doi.org/10.3390/app13053034
by Nan Xiang 1,2,*, Xiaoxia Ma 1, Huiling Liu 1, Xiao Tang 1 and Lu Wang 1,2
Submission received: 31 January 2023 / Revised: 23 February 2023 / Accepted: 24 February 2023 / Published: 27 February 2023
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks)

Round 1

Reviewer 1 Report

Dear Author,

The paper is well written.

Good concept.

Give a name to the proposed model. Instead of writing "our model" everywhere, it could be referred to by a name.

Author Response

Response to Reviewer 1 Comments

Point 1: Give a name to the proposed model. Instead of writing "our model" everywhere, it could be referred to by a name.

Response 1: We sincerely thank the reviewer for the careful reading. As suggested, we have replaced "our model" with the model name "GAFSRec" throughout. Thank you again for your positive comments and valuable suggestions to improve the quality of our manuscript.

 

 

Reviewer 2 Report

The manuscript presents a study on self-supervised contrastive learning for social recommendation. The author proposed a large dataset of three classes under the topic of epidemics and floods. The author clearly discussed the experimental results on different datasets and compared the results with some past methods. I have some minor comments to improve the quality of the manuscript.

1. The authors proposed a shallow network (only two layers of GCN) compared to the amount of data available for training. Therefore, it would be better to use a deeper network for more generalized results.

2. Table 4 illustrates the statistical comparison of the proposed method and existing ones. The computational complexity of the networks also needs to be compared.

3. A correction is needed on page 9, line 294: "TABLE II" must be "Table 2".

Author Response

Response to Reviewer 2 Comments

Point 1: The authors proposed a shallow network (only two layers of GCN) compared to the amount of data available for training. Therefore, it would be better to use a deeper network for more generalized results.

Response 1: We sincerely thank the reviewer for the careful reading. We agree that deeper networks can aid generalization in principle, but our model uses a 2-layer GCN because our experiments showed that two layers is the best training depth for this model; the experimental results are given in Section 4.4.2. We therefore kept the 2-layer GCN to maintain the best performance in the comparison experiments, and for this reason chose not to make this change.
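As background for readers, a minimal sketch of the kind of 2-layer graph-convolution propagation discussed here might look as follows. This is illustrative only, assuming a LightGCN-style propagation (symmetric normalization, no per-layer weight matrices, layer averaging) commonly used in recommendation models; the function name and toy graph are hypothetical, not the authors' code:

```python
import numpy as np

def gcn_embeddings(adj, features, num_layers=2):
    """Propagate node features through `num_layers` rounds of symmetrically
    normalized neighborhood aggregation (LightGCN-style: no per-layer weight
    matrices), then average the outputs of all layers (including layer 0)."""
    deg = adj.sum(axis=1)
    # D^{-1/2}, with isolated nodes mapped to 0 instead of dividing by zero
    d_inv_sqrt = np.power(deg, -0.5, where=deg > 0, out=np.zeros_like(deg))
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    layer_outputs = [features]
    h = features
    for _ in range(num_layers):
        h = norm_adj @ h  # aggregate normalized neighbor embeddings
        layer_outputs.append(h)
    return np.mean(layer_outputs, axis=0)

# toy symmetric social/interaction graph with 4 nodes
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
features = np.eye(4)  # one-hot initial embeddings
emb = gcn_embeddings(adj, features, num_layers=2)
print(emb.shape)  # (4, 4)
```

With only two propagation rounds, each node's embedding mixes information from at most its 2-hop neighborhood, which is one intuition for why shallow GCNs can suffice (and deeper ones can over-smooth).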

Point 2: Table 4 illustrates the statistical comparison of the proposed method and existing ones. The computational complexity of the networks also needs to be compared.

Response 2: In Section 3.5 we analyzed the network complexity, and in Section 4.2.1 we experimentally compared the time complexity of the models.

Point 3: A correction is needed on page 9, line 294: "TABLE II" must be "Table 2".

Response 3: Thanks for your careful checks, and we apologize for the oversight. Based on your comments, we have corrected the table reference and harmonized its usage throughout the manuscript.

 

Reviewer 3 Report

Good paper. Here are some suggestions for improvement:

Abstract (line 16): Provide an alternate phrase, or a definition or description of "social noise"; if this is not possible, consider removing the term from the abstract and introducing it later in the text, where it can be properly explained.

Introduction (line 45): "head" and "tail" are poorly described/explained; it is difficult to understand this as described by the authors. Are the authors talking about probability densities/distributions? If so, try to make this description more formal. Although "tail" is a common term when describing distributions, such as the tails of a Gaussian, "head" is not common; one does not speak of the "head" of a Gaussian distribution.

Equations: Throughout the document, inline equations appear elevated (superscript?) above the line of text they are part of. This inserts unnecessary spacing into the lines of the document.

 

Figure 2: This figure is a very high-level dataflow that tries to show too many things at once and is difficult to follow. It does not contain enough detail to allow someone who wishes to replicate the authors' work, for scientific reproducibility, to build the system the authors claim to have built. I recommend adding several additional figures detailing important parts such as the attention mechanism, the contrastive learning mechanism, etc. The encoder should be further defined using actual network architecture diagrams instead of a notional flow of data through the encoder. Consider adding appendices as needed to contain the details if the main body of the document becomes too cluttered.

 

Bibliography: Review the bibliography carefully. For example, there are multiple entries that cite "In Proceedings of the Proceedings of the…", which should be corrected to "In Proceedings of the…".

 

Author Response

Response to Reviewer 3 Comments

 

Point 1: Abstract (line 16): Provide an alternate phrase, or a definition or description of "social noise"; if this is not possible, consider removing the term from the abstract and introducing it later in the text, where it can be properly explained.

Response 1: We sincerely thank the reviewer for the careful reading. We have replaced the phrase "long tail phenomenon" with "social noise".

 

Point 2: Introduction (line 45): "head" and "tail" are poorly described/explained; it is difficult to understand this as described by the authors. Are the authors talking about probability densities/distributions? If so, try to make this description more formal. Although "tail" is a common term when describing distributions, such as the tails of a Gaussian, "head" is not common; one does not speak of the "head" of a Gaussian distribution.

Response 2: We have rewritten this part according to the reviewer's suggestion.

 

Point 3: Equations: Throughout the document, inline equations appear elevated (superscript?) above the line of text they are part of. This inserts unnecessary spacing into the lines of the document.

Response 3: Thanks for your careful checks, and we apologize for the oversight. Based on your comments, we have corrected the formatting so that the inline equations are consistent throughout the manuscript.

 

Point 4: Figure 2: This figure is a very high-level dataflow that tries to show too many things at once and is difficult to follow. It does not contain enough detail to allow someone who wishes to replicate the authors' work, for scientific reproducibility, to build the system the authors claim to have built. I recommend adding several additional figures detailing important parts such as the attention mechanism, the contrastive learning mechanism, etc. The encoder should be further defined using actual network architecture diagrams instead of a notional flow of data through the encoder. Consider adding appendices as needed to contain the details if the main body of the document becomes too cluttered.

Response 4: In response to the reviewer's comments, we have made extensive modifications to the manuscript and supplemented additional data to make our results more convincing. (1) For Figure 2, we have added detailed diagrams and the necessary supplementary explanations to facilitate readers' understanding and experiments. (2) We added experiments on the attention mechanism (Section 4.3.1) to help verify the contribution of the multiple views, and also added a contrastive learning experiment. (3) Instead of an encoder, we use a graph convolutional neural network as the network framework.
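For readers unfamiliar with the contrastive learning objective mentioned above, a common form of such a loss is InfoNCE between two embedding views of the same nodes. The sketch below is a generic illustration under that assumption (NumPy, hypothetical names and toy data), not the authors' implementation:

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.2):
    """InfoNCE contrastive loss between two embedding views of the same
    nodes: row i of view_a and row i of view_b form the positive pair,
    while all other rows of view_b act as in-batch negatives."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                   # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # -log p(positive | row)

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))                 # toy node embeddings
noise = emb + 0.01 * rng.normal(size=emb.shape)  # slightly perturbed view
aligned = info_nce_loss(emb, noise)              # matched positive pairs
shuffled = info_nce_loss(emb, rng.permutation(noise))  # broken pairing
print(float(aligned), float(shuffled))
```

Aligned views should yield a noticeably lower loss than the shuffled pairing, which is exactly the signal a self-supervised objective of this kind optimizes.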

Thank you again for your positive comments and valuable suggestions to improve the quality of our manuscript.

 

Point 5: Bibliography: Review the bibliography carefully. For example, there are multiple entries that cite "In Proceedings of the Proceedings of the…", which should be corrected to "In Proceedings of the…".

Response 5: We sincerely thank the reviewer for the careful reading. As suggested, we have corrected "In Proceedings of the Proceedings of the…" to "In Proceedings of the…" throughout the bibliography. We earnestly appreciate the reviewers' work and hope the corrections meet with approval. Once again, thank you very much for your comments and suggestions.

 
