Article
Peer-Review Record

A Hybrid Text Generation-Based Query Expansion Method for Open-Domain Question Answering

Future Internet 2023, 15(5), 180; https://doi.org/10.3390/fi15050180
by Wenhao Zhu 1,†, Xiaoyu Zhang 1,†, Qiuhong Zhai 1 and Chenyun Liu 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 27 April 2023 / Revised: 8 May 2023 / Accepted: 9 May 2023 / Published: 12 May 2023

Round 1

Reviewer 1 Report

The paper presents a method to improve retrieval efficiency: Hybrid Text Generation-Based Query Expansion (HTGQE). The Authors' method combines large language models with pseudo-relevance feedback techniques to enhance the input for generative models, improving text generation speed and quality. According to the Authors, the proposed method achieved good results on both the Natural Questions (NQ) and TriviaQA (Trivia) datasets for passage retrieval and reading tasks. The topic is interesting and the paper corresponds well with the journal's aim and scope.


However, there are shortcomings in this paper. In the Introduction section, the Authors introduce their contributions. I suggest the Authors move the 3rd and 4th points to the Conclusions section: the Introduction is the beginning of the paper, and references to the results should be provided at the end, after the experiments. Moreover, information about the structure of the paper is missing.


When comparing the contributions listed in the Introduction with the Conclusions, it is worth considering expanding the latter. The Conclusions section does not discuss the limitations of the proposed HTGQE.


The rest of the paper looks good. The Authors conducted the experiments and provided analyses of the number of expansion terms on NQ and Trivia.


Minor typos:


Line 52: “In this paper,We propose HTGQE….” – “we” should be written in lowercase (it is not the beginning of a sentence).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a query expansion method to improve retrieval efficiency.

I have several review comments to improve the quality of this paper.

1) The author needs to explain the background and motivation of the paper more clearly in the introduction. 

2) The author needs to explain the dataset used in the paper in detail for better understanding. 

3) The authors need to clarify why they set k to 3 in the initial retrieval step. 

4) The authors need to further explain some unrealistic contexts and how noise in the retrieval phase can provide advantages.

5) The authors need to further explain the rationale for the hyperparameter settings. 

6) The authors need to clearly describe why HTGQE can be easily applied to the existing R2 system as a plug-and-play method. 


The level of English expression is moderate.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
