Multi-Source Neural Model for Machine Translation of Agglutinative Language

by Yirong Pan 1,2,3, Xiao Li 1,2,3,*, Yating Yang 1,2,3,* and Rui Dong 1,2,3

1 Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
2 Department of Computer and Control, University of Chinese Academy of Sciences, Beijing 100049, China
3 Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China
* Authors to whom correspondence should be addressed.
Future Internet 2020, 12(6), 96; https://doi.org/10.3390/fi12060096
Received: 18 May 2020 / Revised: 25 May 2020 / Accepted: 1 June 2020 / Published: 3 June 2020
(This article belongs to the Section Big Data and Augmented Intelligence)
Benefiting from the rapid development of artificial intelligence (AI) and deep learning, neural machine translation has achieved impressive performance on many high-resource language pairs. However, neural machine translation (NMT) models still struggle with agglutinative languages, which have complex morphology and limited resources. Inspired by the finding that source-side linguistic knowledge can further improve NMT performance, we propose a multi-source neural model that employs two separate encoders to encode the source word sequence and the linguistic feature sequences. Compared with the standard NMT model, we utilize an additional encoder that incorporates the linguistic features of lemma, part-of-speech (POS) tag, and morphological tag by extending the input embedding layer of the encoder. Moreover, we use a serial combination method to integrate the conditional information from the encoders with the outputs of the decoder, which aims to help the neural model learn a high-quality context representation of the source sentence. Experimental results show that our approach is effective for agglutinative language translation, achieving improvements of up to +2.4 BLEU points on the Turkish–English translation task and +0.6 BLEU points on the Uyghur–Chinese translation task.
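The extended input embedding described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes each source position carries a lemma, POS tag, and morphological tag ID, embeds each feature with its own (here randomly initialized) table, and concatenates them per position to form the feature encoder's input, while the word encoder receives ordinary word embeddings. All vocabulary sizes, dimensions, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary sizes and embedding dimensions (illustrative only).
d_word, d_feat = 8, 4
vocab_sizes = {"word": 100, "lemma": 80, "pos": 12, "morph": 30}
emb = {name: rng.normal(size=(size, d_word if name == "word" else d_feat))
       for name, size in vocab_sizes.items()}

def embed_source(word_ids, lemma_ids, pos_ids, morph_ids):
    """Build the two encoder input sequences for one source sentence.

    The word encoder sees plain word embeddings; the feature encoder's
    input embedding layer is "extended" by concatenating the lemma, POS,
    and morphological-tag embeddings at each source position.
    """
    word_in = emb["word"][word_ids]                        # shape (T, d_word)
    feat_in = np.concatenate([emb["lemma"][lemma_ids],
                              emb["pos"][pos_ids],
                              emb["morph"][morph_ids]],
                             axis=-1)                      # shape (T, 3 * d_feat)
    return word_in, feat_in

# A 3-token source sentence with aligned feature sequences.
word_in, feat_in = embed_source([3, 7, 2], [1, 5, 5], [0, 4, 2], [9, 9, 1])
```

In a full model, `word_in` and `feat_in` would each feed a separate encoder, and the decoder would attend to both encoders' outputs in sequence (the serial combination mentioned above).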
Keywords: artificial intelligence; neural machine translation; agglutinative language translation; complex morphology; linguistic knowledge
MDPI and ACS Style

Pan, Y.; Li, X.; Yang, Y.; Dong, R. Multi-Source Neural Model for Machine Translation of Agglutinative Language. Future Internet 2020, 12, 96.
