Peer-Review Record

Improving Many-to-Many Neural Machine Translation via Selective and Aligned Online Data Augmentation

Appl. Sci. 2023, 13(6), 3946; https://doi.org/10.3390/app13063946
by Weitai Zhang 1,2,*, Lirong Dai 1, Junhua Liu 2 and Shijin Wang 2
Submission received: 10 February 2023 / Revised: 14 March 2023 / Accepted: 16 March 2023 / Published: 20 March 2023
(This article belongs to the Special Issue Natural Language Processing (NLP) and Applications)

Round 1

Reviewer 1 Report

The article proposes a selective and aligned online data augmentation algorithm to improve massively many-to-many neural machine translation. The algorithm incorporates a selective online back-translation (SOBT) method to pick suitable, high-quality training samples. The authors also use contrastive learning to map similar sentences across languages into a shared representation space, minimizing the distance between their representations (a minimal illustrative sketch of such a contrastive objective is given after this report). They further boost the SOBT algorithm with Cross-Lingual Online Substitution to strengthen transfer learning between zero-shot language pairs.
The paper is interesting.
The abstract highlights the important findings of the study.
The methods used are appropriate.
The data support the conclusions.
The title properly reflects the subject of the paper.
The keywords accurately reflect the content.
The paper is an appropriate length.
The language is clear.
The introduction is well-written.
The paper summarizes recent research related to the topic.
The conclusion provides a good summary of the paper and leaves the reader with a clear understanding of the key contributions and findings of the research.
The references are relevant and recent.
In addition to BLEU, it would be good to include other evaluation metrics to support the experimental results.
Recommendation: Accept with minor revision.
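
For readers unfamiliar with the contrastive alignment idea summarized above, the following is a minimal sketch of an InfoNCE-style loss over parallel sentence representations. It illustrates the general technique only, not the authors' implementation; the function name, batch layout, and temperature value are assumptions.

```python
# Minimal sketch of a cross-lingual contrastive alignment loss (InfoNCE-style).
# Illustrative only, not the authors' implementation; names and the
# temperature hyperparameter are assumptions.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src_repr: torch.Tensor,
                               tgt_repr: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """src_repr, tgt_repr: (batch, dim) sentence representations; row i of each
    tensor encodes the same sentence in two different languages. Parallel pairs
    are pulled together, in-batch negatives are pushed apart."""
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    logits = src @ tgt.t() / temperature                   # (batch, batch) cosine similarities
    labels = torch.arange(src.size(0), device=src.device)  # positive pairs lie on the diagonal
    # Symmetric InfoNCE: source-to-target and target-to-source directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```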

Author Response

The co-authors and I would like to thank you for the time and effort spent reviewing the manuscript. We agree that additional metrics would help support our experimental results, so we have included METEOR and TER in the revised manuscript. For detailed results with the new metrics, please see Table 3 on page 11 and Table 4 on page 12 of the revised manuscript. The METEOR and TER scores consistently confirm the effectiveness of our algorithms. Thank you again for this good suggestion.
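
For readers who want to compute such scores on their own outputs, a minimal sketch using sacrebleu (BLEU, TER) and NLTK (METEOR) is shown below. The placeholder sentences are illustrative only, and this is not the exact evaluation pipeline used in the paper; NLTK's METEOR additionally requires the 'wordnet' data package.

```python
# Minimal sketch of computing BLEU, TER and METEOR for system outputs against
# single references. Illustrative only; not the paper's evaluation pipeline.
from sacrebleu.metrics import BLEU, TER
from nltk.translate.meteor_score import meteor_score   # needs the nltk 'wordnet' data package

hypotheses = ["the cat sits on the mat"]        # placeholder system outputs
references = ["there is a cat on the mat"]      # placeholder reference translations

bleu = BLEU().corpus_score(hypotheses, [references])
ter = TER().corpus_score(hypotheses, [references])
# NLTK's METEOR works on tokenized sentences; average it over the corpus.
meteor = sum(meteor_score([ref.split()], hyp.split())
             for hyp, ref in zip(hypotheses, references)) / len(hypotheses)

print(f"BLEU {bleu.score:.2f}  TER {ter.score:.2f}  METEOR {meteor:.3f}")
```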

Reviewer 2 Report

Great work. I only recommend that you read and revise it once more to polish it.

Author Response

The co-authors and I would like to thank you for the time and effort spent reviewing the manuscript. We have polished the manuscript and submitted the revised version.

Reviewer 3 Report

Very important and interesting topic that should be of interest to readers. I suggest improving the evaluation. First of all, BLEU is the most common metric but not the best possible one. I suggest adding other metrics that account for synonyms, such as METEOR, and also TER, which is very intuitive. Collgram is another good approach for checking how far the output is from native speakers. The BLEU scores and improvements are rather low, so you should add significance tests and discuss whether results at such low BLEU levels are usable in any way, especially in business applications. I also suggest adding an out-of-the-box working Google Colaboratory script for readers.
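
As a pointer for readers, paired bootstrap resampling is one common way to check whether small BLEU differences between two systems are significant. The sketch below is a generic illustration assuming sacrebleu is installed; the resampling count and data layout are assumptions, and it is not part of the paper.

```python
# Minimal sketch of paired bootstrap resampling for BLEU significance.
# Generic illustration only; the resampling count is an assumption.
import random
import sacrebleu

def paired_bootstrap_bleu(sys_a, sys_b, refs, n_samples=1000, seed=0):
    """Fraction of bootstrap resamples in which system A outscores system B.
    sys_a, sys_b, refs are equal-length lists of sentence strings."""
    rng = random.Random(seed)
    n = len(refs)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]   # resample sentence indices with replacement
        a = [sys_a[i] for i in idx]
        b = [sys_b[i] for i in idx]
        r = [refs[i] for i in idx]
        if sacrebleu.corpus_bleu(a, [r]).score > sacrebleu.corpus_bleu(b, [r]).score:
            wins += 1
    return wins / n_samples      # e.g. >= 0.95 suggests A is significantly better than B
```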

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

The authors made the requested corrections.
