Article
Peer-Review Record

Hierarchical Clause Annotation: Building a Clause-Level Corpus for Semantic Parsing with Complex Sentences

Appl. Sci. 2023, 13(16), 9412; https://doi.org/10.3390/app13169412
by Yunlong Fan 1,2, Bin Li 1,2, Yikemaiti Sataer 1,2, Miao Gao 1,2, Chuanqi Shi 1,2, Siyi Cao 3 and Zhiqiang Gao 1,2,*
Submission received: 3 June 2023 / Revised: 8 August 2023 / Accepted: 16 August 2023 / Published: 19 August 2023
(This article belongs to the Special Issue Natural Language Processing: Novel Methods and Applications)

Round 1

Reviewer 1 Report

The paper takes an interesting approach to building a clause-level corpus for semantic parsing with complex sentences. The authors propose a novel framework, hierarchical clause annotation (HCA), based on linguistic research on clause hierarchy. With the HCA framework, they annotate a large HCA corpus to explore the potential of integrating HCA structural features into semantic parsing with complex sentences. Moreover, they decompose HCA into two subtasks, i.e., clause segmentation and clause parsing, and provide neural baseline models for producing more silver annotations.

The article is clear, the literature references are sufficient, and the results are supported by examples.

Experimental results are presented to highlight and validate the proposed approach with the support of two case studies.

The final results of the study, together with the experimental analysis, should be summarized in the abstract, including a comparison with previous studies.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report


Comments for author File: Comments.pdf


Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Few suggestions to the Authors are:

1. The Introduction section should include more references.

2. The related work is shallow and needs to be revised; the following related works may be helpful in showing how an annotation scheme is developed for labeling a corpus:

Rabani, S. T., Khanday, A. M. U. D., Khan, Q. R., Hajam, U. A., Imran, A. S., & Kastrati, Z. (2023). Detecting suicidality on social media: Machine learning at rescue. Egyptian Informatics Journal, 24(2), 291-302.

Khanday, A. M. U. D., Khan, Q. R., & Rabani, S. T. (2021). Detecting textual propaganda using machine learning techniques. Baghdad Science Journal, 18(1), 0199.

Khanday, A. M. U. D., Khan, Q. R., & Rabani, S. T. (2020, December). Analysing and predicting propaganda on social media using machine learning techniques. In 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (pp. 122-127). IEEE.

3. The authors have not discussed the inter-annotator agreement.

4. What was the value of the kappa coefficient?

5. Some crowd-sourcing annotation strategies need to be discussed.

6. State-of-the-art NLP methods need to be elaborated, along with the limitations of theirs that the proposed approach overcomes.

7. What about the bias of the approach? Are the authors taking care of bias?
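Regarding points 3 and 4: inter-annotator agreement is commonly quantified with Cohen's kappa, which corrects raw agreement for chance agreement. As context for the reviewer's question, here is a minimal sketch of the statistic (not the authors' code; the labels and data are hypothetical):

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement, from each annotator's marginal label distribution.
    counts_a, counts_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators marking clause boundaries (1 = boundary).
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.5
```

Values above roughly 0.8 are conventionally read as strong agreement; reporting the corpus-level kappa would address both points.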

 

Satisfactory; needs slight improvement.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Dear Authors, your manuscript is well structured however, following critical comments should be accommodated, prior to further processing of the article.

1) Refer to the whole article: the article has a 21% similarity score, which is high.

2) Refer to the keywords: use a consistent style for all keywords; either capitalize the first letter of every keyword or of none.

3) Refer to Section 1: the last paragraph of Section 1 should present the structure of the article, which is missing in this study. The authors should include the missing paragraph.

4) Refer to Sub-section 2.2.2: the authors have discussed split-and-rephrase in Sub-section 2.2.2 and simple sentence decomposition in Sub-section 2.2.4. The text suggests that both are the same and aim at decomposing sentences into smaller ones. What is the major difference between the two?

5) Refer to Sub-section 2.2.3: the authors have stated that TS maintains the original information and meaning; why, then, is the proposed HCA required, and what special feature of the proposed HCA framework makes it better?

6) Refer to Table 2: most of the time, the authors have tested their proposed HCA using the same sentence, i.e., "If I do not check, I get very anxious, which does sort of go away after 15-30 mins, but often the anxiety is so much that I can not wait that long." Did they test, or do they believe, that their proposed HCA framework is equally beneficial for other sentences, or even for processing whole English-language text?

7) Refer to line 303: the authors have recruited a group of human annotators who obviously may have different backgrounds and different levels of understanding. How do the authors ensure/standardize that each annotator's input is of equal importance and significance?

8) Refer to line 356: include a valid reference for the AMR 2.0 dataset.

9) Refer to Table 9: did the authors check the efficacy of the proposed HCA framework beyond the mentioned batch size and number of epochs? Moreover, why did the authors select AdamW as the optimizer when other optimizers exist?

Good luck

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Would you please add why you use Parseval-Full scores in the evaluation?
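For context on the reviewer's question: Parseval-style scores evaluate a predicted tree by comparing its labeled brackets (constituent spans) against the gold tree, reporting precision, recall, and F1. A minimal sketch of the metric, not the authors' evaluation code, with a hypothetical nested `(label, children)` tree encoding:

```python
def bracket_spans(tree):
    """Collect (label, start, end) spans from a nested (label, children) tree;
    leaves are plain token strings."""
    spans, pos = set(), [0]
    def walk(node):
        if isinstance(node, str):   # leaf token advances the position counter
            pos[0] += 1
            return
        label, children = node
        start = pos[0]
        for child in children:
            walk(child)
        spans.add((label, start, pos[0]))
    walk(tree)
    return spans

def parseval(gold, pred):
    """Labeled-bracket precision, recall, and F1 for one tree pair."""
    g, p = bracket_spans(gold), bracket_spans(pred)
    match = len(g & p)
    prec, rec = match / len(p), match / len(g)
    return prec, rec, 2 * prec * rec / (prec + rec)

# Hypothetical example: the prediction over-splits the VP.
gold = ("S", [("NP", ["I"]), ("VP", ["ran", "home"])])
pred = ("S", [("NP", ["I"]), ("VP", ["ran"]), ("NP", ["home"])])
print(parseval(gold, pred))  # precision 0.5, recall ≈ 0.667
```

"Full" variants typically score all brackets, including ones that simpler Parseval settings ignore, which makes the metric sensitive to the whole hierarchical structure rather than only the top-level segmentation.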

Good.
