Article
Peer-Review Record

Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN

Algorithms 2022, 15(3), 80; https://doi.org/10.3390/a15030080
by Yi Liu 1,2, Chengyu Yin 1, Jingwei Li 3, Fang Wang 4,* and Senzhang Wang 5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 24 January 2022 / Revised: 21 February 2022 / Accepted: 25 February 2022 / Published: 28 February 2022
(This article belongs to the Special Issue Graph Embedding Applications)

Round 1

Reviewer 1 Report

The paper proposes a method to learn dynamic embedding-vector trajectories for users and items simultaneously in collaborative filtering, rather than static embedding vectors. Specifically, the Meta-path guided Recursive RNN based Shift embedding method, named MRRNN-S, is proposed to learn the continuously evolving embeddings of users and items so as to more accurately predict their future interactions. The authors state that the proposed MRRNN-S is an extended version of the RRNN-S model previously published at ADMA 2020, to which a new module is added to better capture auxiliary information.
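For context, the core idea can be sketched as two mutually recursive RNN cells, where each observed interaction updates both the user and the item embedding, so that both trajectories evolve over time. The sketch below is illustrative only (PyTorch, with the class name, dimensions, and feature layout assumed for the example), not the authors' implementation; the meta-path guidance module, which additionally aggregates neighbor embeddings along selected meta-paths before each update, is omitted for brevity.

import torch
import torch.nn as nn

class MutualRecursiveUpdate(nn.Module):
    # Toy sketch of coupled user/item embedding trajectories: each
    # interaction updates BOTH embeddings, so they evolve continuously
    # instead of remaining static as in classical collaborative filtering.
    def __init__(self, emb_dim, feat_dim):
        super().__init__()
        # Each cell consumes the partner's current embedding concatenated
        # with the interaction features (a hypothetical input layout).
        self.user_cell = nn.GRUCell(emb_dim + feat_dim, emb_dim)
        self.item_cell = nn.GRUCell(emb_dim + feat_dim, emb_dim)

    def forward(self, u, v, feat):
        # u, v: current user/item embeddings; feat: interaction features.
        new_u = self.user_cell(torch.cat([v, feat], dim=-1), u)
        new_v = self.item_cell(torch.cat([u, feat], dim=-1), v)
        return new_u, new_v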


Since MRRNN-S shows better performance than RRNN-S in the experimental results, the motivation and the explanation of the contribution in the introductory sections should focus on the novelty of the new module rather than on the motivation behind the original RRNN-S. Instead of just a paragraph clarifying the difference from the ADMA 2020 paper, the advantages of the new module and the rationale for it should be clearly described from the very beginning, and the approach should be introduced as an extension of RRNN-S from the abstract onward.

The experiments show that MRRNN-S outperforms RRNN-S on both datasets. However, on Wikipedia the improvements are smaller (statistical significance should be reported), whereas on JingDong the differences are substantial with respect to both the previous approach and the baselines. This issue needs further discussion: which dataset characteristics cause these differences? Is it just the size? An analysis with additional datasets should help clarify this point.
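To make the significance request concrete, one standard option is a paired test over repeated runs with different random seeds; a minimal sketch follows (scipy, with placeholder metric values that are not taken from the paper).

from scipy import stats

# Hypothetical Recall@10 over five seeds (placeholder values, NOT the
# paper's results) for MRRNN-S vs. RRNN-S on a single dataset.
mrrnn_s = [0.412, 0.408, 0.415, 0.410, 0.409]
rrnn_s = [0.401, 0.399, 0.404, 0.400, 0.398]

t_stat, p_value = stats.ttest_rel(mrrnn_s, rrnn_s)  # paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")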

The paper would benefit from focusing on the explanation of the additions that make this method novel with respect to the previous one. That would clarify the novelty of the contribution and the advantages of MRRNN-S over RRNN-S and other methods. It would also help explain the experimental results, which should be extended to consider datasets of varied sizes (if size is indeed the reason for the improvements).

Small typo in line 149: “various domains [27? ,28].”


Author Response

Please see our response letter.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article presents an interesting method for predicting user–item interactions. The results of the proposed method confirm its advantage over more traditional methods. The description of the study and the presented results are clear and comprehensible.

My only remark concerns the high rate of similarity with previous work, especially with the paper “Recursive RNN Based Shift Representation Learning for Dynamic User-Item Interaction Prediction”. Some parts (Sections 4.2, 4.3, 5.1, and 5.2) are almost identical to the previous work. I encourage the authors to make more substantial modifications to these sections.


As the similarity between the two papers is high, I also recommend including in the conclusions a comparison of the results from both articles, with inferences about the improvement gained.

Author Response

Please see our response letter.

Author Response File: Author Response.pdf
