Open Access Article

Translation Quality and Error Recognition in Professional Neural Machine Translation Post-Editing

English Linguistics and Translation Studies, Johannes Gutenberg University, 76726 Mainz/Germersheim, Germany
* Author to whom correspondence should be addressed.
Informatics 2019, 6(3), 41; https://doi.org/10.3390/informatics6030041
Received: 30 April 2019 / Revised: 7 September 2019 / Accepted: 11 September 2019 / Published: 17 September 2019
(This article belongs to the Special Issue Advances in Computer-Aided Translation Technology)
This study analyses how translation experts from the German department of the European Commission’s Directorate-General for Translation (DGT) identify and correct different error categories in neural machine-translated texts (NMT) and their post-edited versions (NMTPE). The term translation expert encompasses translators, post-editors and revisors. Even though we focus on neural machine-translated segments, translator and post-editor are used synonymously because of the combined workflow, which uses CAT tools as well as machine translation. Only the distinction between post-editor, a DGT translation expert correcting the neural machine translation output, and revisor, a DGT translation expert correcting the post-edited version of the neural machine translation output, is important and is made clear whenever relevant. Using an automatic error annotation tool (Hjerson) and the more fine-grained manual error annotation framework (MQM) to identify characteristic error categories in the DGT texts, a corpus analysis revealed that quality assurance measures by DGT post-editors and revisors are most often necessary for lexical errors. More specifically, the corpus analysis showed that, if post-editors correct mistranslations, terminology errors or stylistic errors in an NMT sentence, revisors are likely to correct the same error type in the same post-edited sentence, suggesting that the DGT experts were primed by the NMT output. Subsequently, we designed a controlled eye-tracking and key-logging experiment to compare participants’ eye movements for test sentences containing the three identified error categories (mistranslations, terminology errors, stylistic errors) and for control sentences without errors. We examined the three error types’ effect on early eye-movement measures (first fixation durations, first pass durations) and late eye-movement measures (e.g., total reading times and regression path durations). Linear mixed-effects regression models predict which behaviour of the DGT experts is associated with the correction of the different error types during the post-editing process.
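To make the statistical analysis described above more concrete, the following Python snippet is a minimal sketch, not the authors' actual model specification, of fitting a linear mixed-effects regression with statsmodels: a late eye-movement measure (total reading time) is predicted from error type, with control sentences as the reference level and a by-participant random intercept. The column names and the simulated data are purely hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per sentence read by a participant.
    rng = np.random.default_rng(42)
    rows = []
    for p in [f"p{i}" for i in range(1, 11)]:            # 10 simulated participants
        participant_offset = rng.normal(0, 150)          # by-participant variability (ms)
        for error_type in ["control", "mistranslation", "terminology", "style"]:
            for _ in range(5):                           # 5 sentences per condition
                base = 2000 + (600 if error_type != "control" else 0)
                rows.append({
                    "participant": p,
                    "error_type": error_type,
                    "total_reading_time": base + participant_offset + rng.normal(0, 200),
                })
    df = pd.DataFrame(rows)

    # Late eye-movement measure predicted by error type, with control sentences
    # as the reference level and a random intercept per participant.
    model = smf.mixedlm(
        "total_reading_time ~ C(error_type, Treatment('control'))",
        data=df,
        groups=df["participant"],
    )
    print(model.fit().summary())

In the same spirit, early measures such as first fixation duration or first pass duration would replace total_reading_time on the left-hand side of the formula, one model per measure.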
Keywords: neural machine translation; post-editing; revision; error annotations; Hjerson; MQM; European Commission (DGT); eye-tracking; key-logging; post-editing effort

MDPI and ACS Style

Vardaro, J.; Schaeffer, M.; Hansen-Schirra, S. Translation Quality and Error Recognition in Professional Neural Machine Translation Post-Editing. Informatics 2019, 6, 41.

