Article

Comparative Analysis of Current Approaches to Quality Estimation for Neural Machine Translation

Sugyeong Eo, Chanjun Park, Hyeonseok Moon, Jaehyung Seo and Heuiseok Lim
Department of Computer Science and Engineering, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Academic Editors: Arturo Montejo-Ráez and Salud María Jiménez-Zafra
Appl. Sci. 2021, 11(14), 6584; https://doi.org/10.3390/app11146584
Received: 6 May 2021 / Revised: 1 July 2021 / Accepted: 15 July 2021 / Published: 17 July 2021
(This article belongs to the Special Issue Current Approaches and Applications in Natural Language Processing)
Quality estimation (QE) has recently gained increasing interest because it can predict the quality of machine translation results without a reference translation. QE is an annual shared task at the Conference on Machine Translation (WMT), and most recent studies have applied multilingual pretrained language models (mPLMs) to address it. These studies have focused on improving performance through data augmentation combined with finetuning a large-scale mPLM. In this study, we eliminate the effects of data augmentation and conduct a pure performance comparison between various mPLMs. Unlike recent performance-driven QE research conducted through shared-task competitions, we carry out this comparison on the WMT20 sub-tasks and identify an optimal mPLM. Moreover, we demonstrate QE using the multilingual BART model, which has not previously been applied to this task, and conduct comparative experiments and analyses against cross-lingual language models (XLMs), multilingual BERT, and XLM-RoBERTa.
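In practice, the comparison described above reduces to finetuning each mPLM with a small prediction head on (source, MT hypothesis) pairs. Below is a minimal sketch of sentence-level QE as regression, assuming the Hugging Face transformers library; the xlm-roberta-base checkpoint, the QERegressor class, and the example sentence pair are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sentence-level QE sketch: an mPLM encodes the (source, MT hypothesis)
# pair and a linear head predicts a scalar quality score. Assumptions: the
# Hugging Face transformers library, the xlm-roberta-base checkpoint, and a
# simple regression head over the <s> token; not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class QERegressor(nn.Module):
    def __init__(self, plm_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # <s> token as the pair representation
        return self.head(cls).squeeze(-1)   # predicted quality score

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = QERegressor()
model.eval()

# Source sentence and its machine translation, encoded as one sequence pair.
batch = tokenizer(
    ["The cat sat on the mat."],            # source
    ["Die Katze saß auf der Matte."],       # hypothetical MT output
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    score = model(batch["input_ids"], batch["attention_mask"])
print(score)  # unbounded regression output before finetuning
```

At training time the scalar output would be regressed against human quality labels such as DA scores or HTER; swapping plm_name lets the same head compare mBERT, XLM, or XLM-RoBERTa under identical conditions, which is the spirit of the comparison conducted here.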
Keywords: quality estimation; neural machine translation; pretrained language model; multilingual pretrained language model; WMT
MDPI and ACS Style

Eo, S.; Park, C.; Moon, H.; Seo, J.; Lim, H. Comparative Analysis of Current Approaches to Quality Estimation for Neural Machine Translation. Appl. Sci. 2021, 11, 6584. https://doi.org/10.3390/app11146584

