Article

Low-Resourced Alphabet-Level Pivot-Based Neural Machine Translation for Translating Korean Dialects

School of Computing, Kyung-Hee University, Giheung-gu, Yongin-si 17104, Gyeonggi-do, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9459; https://doi.org/10.3390/app15179459
Submission received: 25 July 2025 / Revised: 20 August 2025 / Accepted: 22 August 2025 / Published: 28 August 2025
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)

Abstract

Developing a machine translator from a Korean dialect to a foreign language presents significant challenges due to the lack of a parallel corpus for direct dialect translation. To solve this issue, this paper proposes a pivot-based machine translation model that consists of two sub-translators. The first sub-translator is a sequence-to-sequence model with minGRU as an encoder and GRU as a decoder. It normalizes a dialect sentence into a standard sentence and employs alphabet-level tokenization. The second sub-translator is a legacy translator, such as an off-the-shelf neural machine translator or an LLM, which translates the normalized standard sentence into a foreign sentence. The effectiveness of the alphabet-level tokenization and the minGRU encoder for the normalization model is demonstrated through empirical analysis. Alphabet-level tokenization proves more effective for Korean dialect normalization than other widely used sub-word tokenizations. The minGRU encoder exhibits performance comparable to a GRU encoder while being faster and more effective at managing longer token sequences. The pivot-based translation method is also validated through a broad range of experiments, and its effectiveness in translating Korean dialects to English, Chinese, and Japanese is demonstrated empirically.

1. Introduction

Most languages have dialects arising from social, ethnic, and regional causes. The Korean language has eight major dialects according to the work of Choi [1]: Gyeonggi, Gangwon, Chungcheong, Jeolla, Gyeongsang, Jeju, Pyeongan, and Hamgyeong, of which the Gyeonggi dialect is regarded as standard Korean. The main characteristic of Korean dialects is that they are geographically specialized, yet each region in which a dialect is spoken is not large. Thus, all the dialects share the same grammar, while every dialect has its own specialized words and some morphological transitions.
The specialized words in a dialect make it difficult to translate the dialect directly into a foreign language. Figure 1 shows such an example. The standard sentence “이 나이에도 여전히 걱정만 끼쳐 죄송해요.” is translated to “I’m sorry I still worry you even at this age.” by Google Translate. However, its corresponding sentence in the Gangwon dialect, “이 나으에도 여전히 극정만 끼쳐 죄송해요,” is wrongly translated to “I’m sorry that I still cause extreme pain even in this situation.” This poor translation is caused by two dialect words, ‘나으 (age)’ and ‘극정 (worry)’. Even though they look similar on the surface to their standard forms ‘나이’ and ‘걱정’, they are treated as out-of-vocabulary (OOV) words by legacy Korean–English machine translators and result in an incorrect translation.
The aforementioned OOV issue in managing Korean dialects is mainly caused by modern sub-word tokenizers that are naively applied to Korean at the syllable level. This is where the first research question arises: ‘Are there any better options for normalizing Korean dialects than simply applying syllable-level sub-word tokenizers and a transformer architecture?’ A fundamental insight is that Korean dialects exhibit unique phonetic and morphological characteristics that are not adequately captured by syllable-level tokenization. Consider, for instance, the verb ‘카다 (do)’ in the Gyeongsang dialect, pronounced ka-da. The only difference from its standard form ‘하다’, pronounced ha-da, is the first syllable, ‘ka’ instead of ‘ha’. Such examples are given in Table 1. Nevertheless, the two words are represented completely differently by a sub-word tokenizer. According to Li et al. [2], character-level modeling is better than sub-word modeling for agglutinative languages, especially when a translator is trained with a small dataset. Since Korean is an agglutinative language and its alphabet shares many features with English characters, the proposed normalization model operates at the level of the Korean alphabet, Jamo. This is the answer to our first research question.
Then, the second research question of this paper arises: ‘Is there any way to apply a novel dialect normalization method to more practical fields?’ We found the answer: ‘by translating Korean dialects to foreign languages’. Korean dialects are spoken rather than written, but the need for translating written dialects is increasing. Nowadays, commercial speech-to-text and speech-to-speech translators are emerging in translation fields, and in such cases spoken dialects are transcribed into text before translation. The current challenge in the field is a lack of data to train a machine translator from a Korean dialect to a foreign language; there is no publicly available parallel corpus for Korean dialect translation. Thus, the proposed model employs a pivot-based approach that adopts standard Korean as a pivot language. The pivot-based approach is a common choice for many low-resourced languages [3,4,5]. The normalization from a dialect sentence to a standard sentence can be achieved by the proposed alphabet-level translation model. Once a dialect sentence is normalized to a standard sentence, the standard sentence can be translated to a foreign sentence by a legacy translator such as an off-the-shelf neural machine translator or a large language model.
The contributions of this paper can be summarized as follows:
  • This is the first work, to the best of our knowledge, to translate a Korean dialect to a foreign language with a pivot-based translation model. According to the experimental results, the proposed pivot-based translation outperforms the direct translation.
  • Alphabet-level tokenization is used to normalize dialect sentences, and its superiority to other sub-word tokenizations is shown empirically.
  • The proposed sequence-to-sequence model for dialect normalization adopts minGRU as an encoder and GRU as a decoder. This paper shows that minGRU is a viable alternative to GRU as an encoder, since minGRU is faster and more effective in managing longer token sequences than GRU.
The rest of this paper is organized as follows. Section 2 surveys previous works on dialects and their translations. Section 3 explains how the proposed pivot-based Korean dialect translator is structured, and Section 4 describes normalizing a dialect sentence to a standard form in detail. Section 5 presents the evaluation results, and finally Section 6 draws conclusions of this work.

2. Related Work

Dialect translation is clustered into two categories: inner dialect translation and dialect foreign translation. Inner dialect translation targets translation between dialects of the same language. It includes the translation of a non-standard dialect into a standard one, which is known as dialect normalization [6]. On the other hand, dialect foreign translation focuses on translation between a dialect and a foreign language.
One critical issue in both types of dialect translation is coping with lexical variation. Tan et al. [7] proposed Base-Inflectional Encoding (BITE), which can be applied to any pre-trained language model with ease. It leverages inflectional features of English and is thus robust even for non-standard English. Abe et al. [8] tried to capture consistent phonetic transformation rules shared by various Japanese dialects; to this end, they used a multilingual NMT [9] to translate a dialect into standard Japanese. On the other hand, Honnet et al. [10] and Sajjad et al. [11] empirically showed that character-level processing is effective in managing variations of Swiss German and Egyptian Arabic, respectively.
Another critical issue in dialect translation is the lack of resources for training translation models. Faheem et al. [12] applied a semi-supervised approach to normalize Egyptian Arabic to Standard Arabic in order to overcome the lack of training data. On the other hand, Liu et al. [13] prepared a dataset for direct dialect translation by creating a parallel corpus from Singlish to Standard English. Their work stresses lexical-level normalization, syntactic-level editing, and semantic-level rewriting. When a machine translator is trained with limited data, it is prone to being excessively affected by noise or superficial lexical features; therefore, they adopted input perturbations at both the word and sentence levels.
In dialect foreign translation, pivot-based translation is a common approach to circumvent the low-resource problem [14,15]. In this approach, a dialect is first translated into a standard form, and then the standard form is translated into a foreign language. In addition, back translation is often adopted to address the issue of limited dialect data [16,17]. For instance, Tahssin et al. [18] applied back translation to overcome data imbalance. At the same time, there have been some efforts to construct data for the direct translation of dialects. Riley et al. [19] presented a benchmark for few-shot region-aware machine translation, which includes language pairs of English and two regional dialects each of Portuguese and Mandarin Chinese. On the other hand, Sun et al. [20] proposed a translation evaluation method that is robust to dialects.
Recent studies on the translation of Korean regional dialects have mainly focused on exploring and improving existing neural machine translation methods. Lim et al. [21] adopted a transformer-based architecture and a syllable-level SentencePiece tokenizer for Korean dialect translation. They also confirmed the effectiveness of the copy mechanism and the many-to-one translation approach. Hwang and Yang [22] took a pre-training and fine-tuning approach in Korean normalization. They fine-tuned a BART variant using standard BPE tokenization and regional information tokens. Similarly, Lee et al. [23] demonstrated the potential of large language models (LLMs) as a translator for the Jeju dialect.
Korean dialects are used more frequently in speech than in writing. Therefore, research on dialect speech recognition is essential, yet this topic has been addressed by only a few studies. Roh and Lee [24] presented an early exploration of this topic, performing experiments to investigate how commercial APIs could be used to recognize Korean dialects. According to their study, the Google Speech Recognition API is more accurate than other APIs. However, despite its high accuracy, challenges remain in recognizing dialects due to the unique phonetic and lexical characteristics of the Korean language. Na et al. [25] provided insight into how off-the-shelf ASR systems can be adapted for dialect recognition. Their experimental results showed that modern ASR systems such as Whisper and wav2vec 2.0 perform well in recognizing Korean dialects. More recently, Bak et al. [26] improved Whisper’s dialect recognition by refining its results with GPT-4o-mini; transcription errors made by Whisper were corrected by applying retrieval-augmented generation (RAG) with the GPT-4o-mini language model.

3. Pivot-Based Translation for Korean Dialects

Due to the lack of a parallel corpus between Korean dialects and foreign languages, it is extremely difficult to construct a direct machine translator for Korean dialects. Thus, this paper adopts standard Korean as a pivot language between a dialect and a foreign language. That is, the proposed translator first translates a dialect sentence into a standard Korean sentence and then translates that sentence into a foreign language.
Figure 2 depicts the overall structure of the proposed pivot-based translator for Korean dialects. It consists of two sub-translators. A dialect sentence is first normalized to a standard sentence by the GRU-based sequence-to-sequence model explained below. For instance, legacy machine translators do not understand the Jeju dialect sentence “목사님 그 앞에 모니터 좀 있으면 좋지 않안허쿠가 영 하니까,” which is translated incorrectly into “The pastor, it would be nice to have a monitor in front of him because he’s so young.” However, they accept its standard sentence “목사님 그 앞에 모니터 좀 있으면 좋지 않겠어요 이렇게 하니까,” whose meaning is “Pastor, wouldn’t it be better if there is a monitor in front, like this?”
Once a standard sentence has been obtained, it can be translated again into a foreign language by a legacy machine translator. Due to the large volume of parallel corpora between standard Korean and major foreign languages, a number of machine translators including LLMs show high and reliable performance. This paper leverages neural machine translators from easyNMT (https://github.com/UKPLab/EasyNMT (accessed on 28 July 2025)) and Exaone [27], a Korean LLM, to translate standard Korean to a foreign language. Then, finally, the Jeju dialect sentence is translated into “Wouldn’t it be better if there is a monitor in front of the pastor like this?” if a target language is English.
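To make the two-stage flow concrete, the following minimal sketch chains a hypothetical normalize() function, standing in for the proposed normalization model of Section 4, with the easyNMT library used for the second stage; it is an illustration of the pipeline under these assumptions rather than the actual implementation.

    from easynmt import EasyNMT

    def normalize(dialect_sentence: str) -> str:
        # Placeholder for the alphabet-level seq2seq normalizer described in Section 4.
        raise NotImplementedError

    # easyNMT wraps the Opus-MT checkpoints used in the experiments.
    nmt = EasyNMT("opus-mt")

    def pivot_translate(dialect_sentence: str, target_lang: str = "en") -> str:
        standard = normalize(dialect_sentence)      # stage 1: dialect -> standard Korean
        return nmt.translate(standard,              # stage 2: standard Korean -> foreign language
                             source_lang="ko",
                             target_lang=target_lang)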

4. Normalization Model from Dialect to Standard

4.1. Tokenization

There are several options for tokenizing Korean text. Four widely used ones are syllable-level SentencePiece, byte-level BPE, morpheme-level, and alphabet-level tokenization. The tokens produced by these methods for the example dialect sentence from Figure 2 are shown in Table 2. Korean text is written in syllable blocks, so a syllable is a natural tokenization unit. However, since a syllable is a combination of several base letters, the number of possible syllables is extremely large (11,172 precomposed syllables). As a result, a syllable-level tokenizer such as syllable-level SentencePiece trained on an insufficient corpus can result in poor performance. Byte-level BPE does not suffer from the OOV problem because it processes Korean text at the byte level. However, it generates illegible tokens that are far removed from the original sentence, as shown in the table, and is therefore not intuitive.
Another option is to leverage morphemes as the unit of tokenization. This is standard practice in both rule-based and statistics-based machine translation. Since a morpheme is the smallest unit of meaning, this tokenization can preserve the meaning of each word. However, it depends on the performance of a morphological analyzer and also suffers from the OOV problem.
The last option for Korean tokenization is to use the Korean alphabet, Jamo. This paper proposes alphabet-level tokenization for dialect normalization. The proposed tokenizer uses Korean alphabet letters, alphanumeric symbols, and two special tokens, ‘<SEP>’ and ‘<SPC>’, as tokenization units. ‘<SEP>’ indicates the end of a syllable and ‘<SPC>’ represents a white space. For instance, the word “목사님” is represented as “ㅁ, ㅗ, ㄱ, <SEP>, ㅅ, ㅏ, <SEP>, ㄴ, ㅣ, ㅁ, <SEP>, <SPC>”. Since there are only 24 basic letters in the Korean alphabet, Korean alphabet-level tokenization enjoys the same benefits as English character-level tokenization.
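The decomposition into alphabet-level tokens can be illustrated with the following minimal Python sketch, which splits precomposed Hangul syllables by Unicode arithmetic; the paper itself relies on the jamo package for this step, so the exact token order shown here is an assumption for illustration only.

    # Lead consonants, vowels, and tail consonants of precomposed Hangul syllables.
    LEADS  = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
    VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
    TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

    def jamo_tokenize(sentence: str) -> list[str]:
        tokens = []
        for ch in sentence:
            code = ord(ch)
            if 0xAC00 <= code <= 0xD7A3:          # precomposed Hangul syllable block
                idx = code - 0xAC00
                lead, vowel, tail = idx // 588, (idx % 588) // 28, idx % 28
                tokens += [LEADS[lead], VOWELS[vowel]]
                if tail:
                    tokens.append(TAILS[tail])
                tokens.append("<SEP>")            # end of a syllable
            elif ch == " ":
                tokens.append("<SPC>")            # white space
            else:
                tokens.append(ch)                 # alphanumerics, punctuation
        return tokens

    # jamo_tokenize("목사님") -> ['ㅁ','ㅗ','ㄱ','<SEP>','ㅅ','ㅏ','<SEP>','ㄴ','ㅣ','ㅁ','<SEP>']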

4.2. Normalization Model

A Korean dialect sentence and its standard counterpart share most of their alphabet sequences. Therefore, the Gated Recurrent Unit (GRU) model proposed by Cho et al. [28] can be used for dialect normalization. It has shown reasonable performance in many NLP tasks and is more efficient than LSTM since it has fewer parameters.
The proposed model for dialect normalization is depicted in Figure 3. It is a GRU-based sequence-to-sequence model. However, its encoder is minGRU [29] rather than GRU, since GRU suffers from slow training due to back-propagation through time and alphabet-level token sequences are generally quite long, as shown in Table 2. On the other hand, its decoder is an original GRU, because the decoder aims at the precise generation of a target sentence in an auto-regressive way.
Assume that a natural-language dialect sentence $\mathbf{x} = x_1, \ldots, x_n$ is given. If the $t$-th alphabet-level token is expressed as a vector embedding $x_t \in \mathbb{R}^b$, then the sentence is represented as a matrix $X \in \mathbb{R}^{n \times b}$ by concatenating the $x_t$'s. The encoder is a bi-directional multi-layer minGRU; that is, it consists of $L$ minGRU layers. In the $l$-th layer ($1 \le l \le L$), a minGRU transforms $x_t$ into a hidden state vector $h_t^l \in \mathbb{R}^b$. In order to speed up training, minGRU removes the hidden-state dependencies of GRU and drops its hyperbolic tangent functions. That is, with $\odot$ denoting point-wise multiplication, $h_t^l$ is obtained by

$$h_t^l = (1 - z_t) \odot h_{t-1}^l + z_t \odot \tilde{h}_t, \qquad (1)$$

where

$$z_t = \sigma(\mathrm{Linear}_d(h_t^{l-1})), \qquad (2)$$

$$\tilde{h}_t = \mathrm{Linear}_d(h_t^{l-1}). \qquad (3)$$

Here, $\sigma$ and $\mathrm{Linear}_d$ denote a sigmoid activation function and a $d$-dimensional linear transformation, respectively. Note that there is no reset gate $r_t$ in these equations. In addition, the forget gate of GRU, $z_t^{GRU} = \sigma(\mathrm{Linear}_d([x_t, h_{t-1}^l]))$, is replaced with $z_t = \sigma(\mathrm{Linear}_d(x_t))$. Compared with $z_t^{GRU}$, $z_t$ has no dependency on the previous hidden state $h_{t-1}^l$. Similarly, $\tilde{h}_t$ does not depend on $h_{t-1}^l$. As a result, $z_t$ and $\tilde{h}_t$ can be computed in parallel for all $t$. After $1 - z_t$ and $z_t \odot \tilde{h}_t$ are computed for all $t$, the hidden states $h_t^l$ in Equation (1) are obtained in parallel using the Parallel Scan algorithm [30,31]. The final output of the $l$-th layer is $H_e^l = (h_1^l, \ldots, h_n^l)$. Since Equations (2) and (3) depend on $h_t^{l-1}$, the hidden state of the $(l-1)$-th layer, computing $H_e^l$ can be understood as

$$H_e^l = \mathrm{minGRU}(H_e^{l-1}), \qquad (4)$$

where $H_e^0 = X$.

In order to reflect bi-directional contexts in the hidden states, both a forward pass $\overrightarrow{H}_e^l = (\overrightarrow{h}_1^l, \ldots, \overrightarrow{h}_n^l)$ and a backward pass $\overleftarrow{H}_e^l = (\overleftarrow{h}_1^l, \ldots, \overleftarrow{h}_n^l)$ are computed. That is, $\bar{H}_e^l \in \mathbb{R}^{n \times 2b}$ is used as the final hidden state matrix of the $l$-th layer, where $\bar{H}_e^l = (\bar{h}_1^l, \ldots, \bar{h}_n^l)$ and $\bar{h}_t^l = [\overrightarrow{h}_t^l, \overleftarrow{h}_t^l]$. Here, $[\cdot, \cdot]$ denotes the concatenation of two vectors. The final output of the encoder is $\bar{H}_e^L$, the hidden state matrix of the $L$-th bi-directional minGRU layer.
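As an illustration of the recurrence above, the following PyTorch sketch implements a single uni-directional minGRU layer; the gates are computed for all positions at once, and the loop over time is written sequentially for clarity, whereas the actual model applies the Parallel Scan algorithm and stacks bi-directional layers.

    import torch
    import torch.nn as nn

    class MinGRULayer(nn.Module):
        """One minGRU layer: gates depend only on the current input, so z_t and
        the candidate state can be pre-computed for every position at once."""

        def __init__(self, d_in: int, d_hidden: int):
            super().__init__()
            self.linear_z = nn.Linear(d_in, d_hidden)   # forget gate, Eq. (2)
            self.linear_h = nn.Linear(d_in, d_hidden)   # candidate state, Eq. (3)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_in) -> hidden states (batch, seq_len, d_hidden)
            z = torch.sigmoid(self.linear_z(x))         # all time steps in parallel
            h_tilde = self.linear_h(x)
            h = torch.zeros_like(h_tilde[:, 0])
            outputs = []
            for t in range(x.size(1)):                  # Eq. (1), sequential for clarity
                h = (1 - z[:, t]) * h + z[:, t] * h_tilde[:, t]
                outputs.append(h)
            return torch.stack(outputs, dim=1)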
The decoder of the proposed translator is a GRU that generates $y_1, \ldots, y_m$ auto-regressively. It applies dot-product attention [32] between $h_t^d \in \mathbb{R}^{2b}$, the hidden state of the decoder, and $\bar{H}_e^L$, the last hidden state matrix of the encoder, to generate an output. That is, in generating $y_t$, the attention score $a_t$ is first calculated by

$$a_t = \mathrm{softmax}(\bar{H}_e^L \cdot h_t^d). \qquad (5)$$

Then, the context vector $c_t$ becomes

$$c_t = \bar{H}_e^L \cdot a_t. \qquad (6)$$

Since the decoder is a GRU, its hidden state vector becomes

$$h_t^d = \mathrm{GRU}([y_{t-1}, c_{t-1}], h_{t-1}^d), \qquad (7)$$

where $y_{t-1} \in \mathbb{R}^b$ is an embedding of $y_{t-1}$. Finally, with $V$ denoting the vocabulary, the $t$-th output $y_t$ is generated by

$$\tilde{y}_t = \mathrm{Linear}_{|V|}([h_t^d, c_t]), \qquad \tilde{y}_t^{\,one} = \mathrm{one\text{-}hot}(\tilde{y}_t), \qquad y_t = \mathrm{Lookup}(\tilde{y}_t^{\,one}). \qquad (8)$$

In these equations, $\tilde{y}_t$ is a vector of dimension $|V|$, and it is converted to the output token $y_t$ through the one-hot operation and a vocabulary lookup. In Figure 3, ‘ㅗ’ is generated through this process, and its embedding $y_t$ is fed into the generation of $y_{t+1}$.
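A single decoding step corresponding to Equations (5)-(8) can be sketched in PyTorch as follows; the module name and the greedy argmax output are illustrative assumptions rather than the exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttnGRUDecoderStep(nn.Module):
        """One decoding step with dot-product attention over the encoder states.
        Encoder states H_e have width 2b, and so does the decoder hidden state."""

        def __init__(self, b: int, vocab_size: int):
            super().__init__()
            self.gru_cell = nn.GRUCell(input_size=b + 2 * b, hidden_size=2 * b)
            self.out = nn.Linear(2 * b + 2 * b, vocab_size)   # Linear_|V| in Eq. (8)

        def forward(self, y_prev_emb, c_prev, h_prev, H_e):
            # y_prev_emb: (batch, b); c_prev, h_prev: (batch, 2b); H_e: (batch, n, 2b)
            h_t = self.gru_cell(torch.cat([y_prev_emb, c_prev], dim=-1), h_prev)  # Eq. (7)
            scores = torch.bmm(H_e, h_t.unsqueeze(-1)).squeeze(-1)                # Eq. (5)
            a_t = F.softmax(scores, dim=-1)
            c_t = torch.bmm(a_t.unsqueeze(1), H_e).squeeze(1)                     # Eq. (6)
            logits = self.out(torch.cat([h_t, c_t], dim=-1))                      # Eq. (8)
            return logits.argmax(dim=-1), h_t, c_t                                # greedy token id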

5. Experiments

5.1. Experimental Settings

The Korean Dialect Speech Dataset released on the Korea AI Hub (https://aihub.or.kr (accessed on 28 July 2025)) is used for the training and evaluation of Korean dialect normalization. This dataset contains five South Korean dialects: Gyeongsang, Jeolla, Jeju, Gangwon, and Chungcheong. The Pyeongan and Hamgyeong dialects are not included, since the dataset was collected and published in South Korea and these two dialects are mainly spoken in North Korea, where data collection is impossible. Further details on the linguistic features of these dialects can be found in the work of Choi [1]. The dataset consists of audio dialogues and their corresponding labels. The labels include various information about the dialect speakers, such as their age and place of residence. The labeled sentences are presented in two aligned forms, a dialect sentence and its standard equivalent, provided for every utterance. Thus, the pairs of a dialect sentence and its standard sentence extracted from this dataset are regarded as a parallel corpus for normalizing Korean dialects.
Since the sentences in the dataset are spoken dialects, they are pre-processed to remove noise such as stutters and laughter. In addition, sentences that are too short, with fewer than four eojeols, are removed; an eojeol is a spacing unit in Korean. The original dataset contains a large portion of pairs in which the standard and dialect sentences are exactly the same. This is because the dataset comprises spoken dialogues: if a speaker conversed in standard Korean, the standard and dialect forms are labeled identically. All such cases are filtered out of the dataset. Summary statistics of the final pre-processed dataset are provided in Table 3, where the numbers in parentheses indicate the numbers of original pairs. The ratio of dialect pairs in the training set varies considerably, from 12% to 35%. Nevertheless, there are enough pairs to train and test a model for normalizing Korean dialects.
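The pre-processing filter described above amounts to a simple check per sentence pair, as in the following sketch; the example pair is taken from Figure 1, and the field layout of the AI-Hub labels is abstracted away.

    def keep_pair(dialect: str, standard: str, min_eojeols: int = 4) -> bool:
        if dialect.strip() == standard.strip():      # speaker used standard Korean
            return False
        if len(standard.split()) < min_eojeols:      # fewer than four eojeols
            return False
        return True

    raw_pairs = [
        ("이 나으에도 여전히 극정만 끼쳐 죄송해요", "이 나이에도 여전히 걱정만 끼쳐 죄송해요"),
    ]
    pairs = [(d, s) for d, s in raw_pairs if keep_pair(d, s)]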
Four kinds of tokenizations are evaluated for normalizing Korean dialects: syllable-level SentencePiece, byte-level BPE, morpheme level, and alphabet level. SentencePiece tokenization is implemented with the SentencePiece module from GitHub (v. 0.2.0) and BPE is implemented with the ByteLevelBPETokenizer class from the tokenizers module of HuggingFace. Korean morphemes are analyzed by the MeCab-ko (https://github.com/hephaex/mecab-ko (accessed on 28 July 2025)) analyzer, and the decomposition of a syllable to an alphabet sequence is accomplished with the jamo module from GitHub.
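For reference, the two sub-word tokenizers can be built roughly as follows, assuming the training sentences are collected in a plain-text file train_ko.txt (one sentence per line); the file name and the default SentencePiece model type are assumptions.

    import sentencepiece as spm
    from tokenizers import ByteLevelBPETokenizer

    # SentencePiece with the 30,000-token vocabulary noted in Section 5.2.
    spm.SentencePieceTrainer.train(
        input="train_ko.txt", model_prefix="ko_sp", vocab_size=30000)
    sp = spm.SentencePieceProcessor(model_file="ko_sp.model")

    # Byte-level BPE with the same vocabulary size.
    bpe = ByteLevelBPETokenizer()
    bpe.train(files=["train_ko.txt"], vocab_size=30000)

    print(sp.encode("목사님 그 앞에 모니터 좀", out_type=str))
    print(bpe.encode("목사님 그 앞에 모니터 좀").tokens)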
The normalization models are trained to optimize the cross-entropy loss with the Adam optimizer and the ReduceLROnPlateau scheduler. The batch size is set to 200 for all tokenizations and dialects except for morpheme-level tokens; as the vocabulary of morpheme-level tokens is larger than the others, their batch size is set to 64. The learning rate is initialized at $5 \times 10^{-4}$, and the weight decay is set to $1 \times 10^{-5}$. Additionally, $b$, the dimension of the hidden state vectors of the encoder (see Equation (1)), is set to 128; thus, that of the decoder is 256. The number of layers, $L$, is set to three.
The translation results are evaluated with ChrF++, BLEU, and BERT score. Among these metrics, BLEU and ChrF++ are used to assess the normalization results. Note that phonemic transitions are predominantly observed in Korean dialects, so the primary job of the normalization models is to revert phonemic variations and restore the standard form. This is why n-gram-based metrics are useful for evaluating dialect normalization models; in particular, ChrF++ is designed for character-level evaluation, while BLEU focuses on the word level. On the other hand, the results of pivot translation are evaluated with ChrF++ and BERT score. Since this task is an ordinary translation task, the BERT score is adopted to evaluate the semantic similarity between the original and translated sentences.
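The metrics can be computed with standard packages, as in the hedged sketch below (sacrebleu for BLEU and ChrF++, bert-score for the BERT score); the hypothesis and reference strings are illustrative only.

    from sacrebleu.metrics import BLEU, CHRF
    from bert_score import score as bert_score

    hyps = ["I'm sorry I still worry you even at this age."]
    refs = ["I'm sorry I still worry you even at this age."]

    bleu   = BLEU().corpus_score(hyps, [refs])
    chrfpp = CHRF(word_order=2).corpus_score(hyps, [refs])   # word_order=2 gives ChrF++
    _, _, f1 = bert_score(hyps, refs, lang="en")

    print(bleu.score, chrfpp.score, f1.mean().item())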
The translation models used to translate normalized dialects into foreign languages are (i) Opus-MT [33], (ii) m2m_100_1.2B [34], and (iii) EXAONE-3.0-7.8B-Instruct [27]. Llama-3.1-8B-Instruct is used to generate reference translations for the pivot translation experiments; the Llama model is adopted since it is one of the most popular publicly available LLMs. Opus-MT and m2m_100_1.2B are neural machine translators. Opus-MT is built on Marian NMT [35] and is trained on OPUS datasets [36]. On the other hand, m2m_100_1.2B is a many-to-many translation model that can translate between any pair of the one hundred languages it has been trained on. Their trained checkpoints are loaded and used through easyNMT, a Python library (v. 2.0.2) for neural machine translation that provides a unified interface to various translation models. These two neural translation models were selected as baselines because they are effective and can be easily adapted for practical use through easyNMT. Exaone is a large language model (LLM) developed by LG AI Research. According to Sim et al. [37], it is specialized in understanding Korean culture, which suggests that it is better at processing Korean dialects; this is the core reason why Exaone was adopted for this experiment. The LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct checkpoint from HuggingFace is used in the experiments. Translations into English, Japanese, and Chinese are evaluated for m2m_100_1.2B and Exaone. However, only the English translation is reported for Opus-MT because the only available Opus-MT checkpoint is from Korean to English.
Two LLMs, Llama-3.1-8B-Instruct and Exaone-3.0-7.8B-Instruct, are used in the experiments. The prompts for each model are designed to elicit the desired translation outcomes; prompt examples are shown in Figure 4. The prompts are given in a zero-shot setting with no additional fine-tuning. They commonly include “다음 문장을 영어로 번역해줘 (Please translate the following sentence into English)”. For other target languages, the term ‘영어 (English)’ is replaced with ‘일본어 (Japanese)’ or ‘중국어 (Chinese)’. The prompts are written in Korean for both models, forcing the language models to focus on the Korean translation task. The output instruction, however, takes two forms: “You should generate only the translated text” for Llama-3.1-8B-Instruct and “번역한 문장만 출력하도록 해 (Please output only the translated sentence)” for Exaone-3.0-7.8B-Instruct. This is because Llama-3.1-8B-Instruct was primarily trained on English data, whereas Exaone-3.0-7.8B-Instruct is specialized for Korean.
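The zero-shot prompting of Exaone can be sketched with the HuggingFace chat-template interface as follows; the exact prompt wording is taken from the text above, but its arrangement and the decoding parameters are illustrative assumptions rather than the paper's settings.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct"
    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    standard_sentence = "목사님 그 앞에 모니터 좀 있으면 좋지 않겠어요 이렇게 하니까"
    prompt = "다음 문장을 영어로 번역해줘. 번역한 문장만 출력하도록 해.\n" + standard_sentence

    inputs = tok.apply_chat_template([{"role": "user", "content": prompt}],
                                     add_generation_prompt=True, return_tensors="pt")
    output = model.generate(inputs, max_new_tokens=128)
    print(tok.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))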
The normalization experiments were performed three times with randomly initialized weights, and the tables report the average and standard deviation of the evaluation metrics across these three runs. This is crucial for evaluating the reliability of each tokenization method. Conversely, the pivot translation experiments are conducted once for each translation model, because the translation models are pre-trained and their weights are fixed. The normalization model used in the pivot translation experiments is the one that performed best in the normalization experiments.

5.2. Evaluations on Normalization from Dialect to Standard

Table 4 and Table 5 compare the performance of tokenization methods when normalizing Korean dialects. According to these tables, alphabet-level tokenization outperforms all other methods. Its average chrF++ score is over 90, implying that it restores almost perfect standard sentences from dialect sentences. Alphabet-level tokenization has a statistically significant advantage over other tokenization methods except for the Jeju dialect. A similar tendency is observed when BLEU is used as an evaluation metric. This is because the Korean dialects share their grammar and most words with standard Korean, as shown in Table 2.
One thing to note about these tables is that sub-word tokenizations are ineffective for this task; they are heavily influenced by the initial weights of the model, and their standard deviations are higher than those of alphabet-level tokenization. SentencePiece and BPE are more complex and restrictive than the other methods: they model the language within a pre-defined vocabulary size and require a larger corpus to capture all the patterns necessary for the task. The vocabulary size in sub-word tokenization is set to 30,000, which is smaller than that of morpheme-level tokenization, and this difference in vocabulary size is clearly reflected in the performance gap between sub-word and morpheme-level tokenization.
Another thing to note is that the performance on the Jeju dialect is consistently lower than on the other dialects, whichever tokenization is used. That is, even though the Jeju dialect has the largest number of training instances, its performance is the worst. This is due to the geographical characteristics of the Jeju area: Jeju is an isolated island located far south of Seoul. Thus, the Jeju dialect differs significantly from standard Korean, primarily in its surface form, resulting in poor normalization performance. This is also the reason why the sub-word tokenizers perform better on the Jeju dialect than on other dialects. No statistically significant difference was observed between sub-word tokenization and alphabet-level tokenization on the Jeju dialect.
The proposed normalization model with alphabet-level tokenization is also much smaller than the models with sub-word tokenizers. It has 1.2M trainable parameters for a vocabulary of 156 Korean alphabet letters, numbers, and symbols. In contrast, the model with syllable-level SentencePiece has 21M parameters. That is, alphabet-level tokenization achieves higher performance with far fewer parameters.
The proposed model adopts minGRU as its encoder because alphabet-level tokenization makes the input sequence long. However, GRU can also be used as an encoder. According to Table 6, the normalization performance of a bidirectional GRU is slightly better than that of a bidirectional minGRU, but this difference is not statistically significant ($p < 0.05$). The true advantage of minGRU lies in its efficiency during training and inference; its execution time is shorter than that of GRU. Figure 5 depicts how much faster minGRU is than GRU during training. MinGRU takes less time per epoch than GRU for all dialects. Overall, using a minGRU encoder saves about 15% of epoch time, even though the minGRU encoder consists of three minGRU layers and the GRU encoder has only one GRU layer. The three-layer minGRU and the one-layer GRU are compared because they demonstrate similar performance with character-level tokenization.
Figure 6 compares minGRU and GRU in terms of the GPU processing time used for normalizing dialects to the standard. In this figure, the x-axis is the number of tokens in a dialect sentence, and the y-axis represents the GPU time (msec) consumed to normalize the sentence. The figure shows that minGRU always consumes less GPU time than GRU. The difference is not significant when the sentence length is less than 400 tokens, but the longer a dialect sentence is, the larger the time gain of minGRU.

5.3. Evaluations on Pivot Translation

There is no parallel corpus between Korean dialects and foreign languages. Thus, a parallel corpus has been constructed from the normalization dataset. Note that the normalization dataset contains a standard Korean sentence for each dialect sentence. The standard sentences of the test set are first translated into foreign languages by an LLM, Llama-3.1-8B-Instruct, under the assumption that sentences translated in this way are correct. Although this model has limited language modeling capability, it was chosen due to resource constraints. Then, three translation models—m2m_100_1.2B, Opus-MT, and Exaone—are used to prepare pairs of a dialect sentence and its translated foreign equivalent, as well as pairs of a normalized sentence and its translated foreign equivalent.
Table 7 shows the evaluation results of the proposed pivot-based translation model for English. Here, ‘Direct Translation’ means that the dialects are translated directly into English without using a standard pivot. This table reports the ChrF++ and BERT scores for pivot-based dialect translation. The proposed model achieves better performance than direct translation for all dialects, proving the effectiveness of using standard Korean as a pivot language. Although the improvement is modest for both ChrF++ and BERT score, it is still meaningful. The ChrF++ score reflects surface-form distance, but this is not the only factor that determines translation quality. The BERT score measures the semantic similarity between two sentences. However, according to Hanna and Bojar [38], the BERT score often assigns a high score even to incorrect translations: in their experiment, sentences containing several grammatical errors achieved a BERT score of around 82, while grammatically correct sentences achieved around 83. That is, although the BERT score assigns high absolute values to incorrect translations, it still penalizes defective ones. This implies that an improvement in the BERT score, even by less than 1.0 point, indicates a certain amount of improvement at the semantic level. In summary, the table shows that the proposed pivot-based translation is better in terms of both surface form and semantics.
Table 8 and Table 9 show the performance when translating dialects into Chinese and Japanese, respectively. Since neither language uses word spacing, ChrF is used instead of ChrF++. Unlike Table 7, these tables do not include Opus-MT’s performance, as there is no Opus-MT checkpoint for translating Korean to Chinese or Japanese. A similar phenomenon to that observed for English is seen for Chinese and Japanese: the proposed pivot-based translation consistently outperforms direct translation in these languages.
Exaone is a general-purpose LLM, while m2m_100_1.2B is a specialized neural machine translation model. It is important to note that both models outperform direct translation when the proposed pivot-based approach is used. This tendency is observed in all three foreign languages. This demonstrates the robustness and effectiveness of the proposed pivot-based translation approach with character-level tokenization for dialect normalization.

6. Conclusions

In this paper, we propose a pivot-based translation model for translating Korean dialects into foreign languages. To overcome the lack of a parallel corpus for direct translation from a dialect to a foreign language, our model first normalizes a dialect sentence to a standard sentence and then translates the standard sentence into a foreign language. The dialect normalization model is a GRU-based sequence-to-sequence model, using minGRU as an encoder and GRU as a decoder. Since the model adopts alphabet-level tokenization, the input sentence tends to be a long token sequence. To address this issue, a multi-layer minGRU is adopted as the encoder instead of a GRU. A legacy translator is then used for the translation between standard and foreign sentences; in this paper, two neural translators (Opus-MT and m2m_100_1.2B) and an LLM, Exaone, have been tested for this purpose.
Experiments on the Korean Dialect Speech Dataset demonstrate that alphabet-level tokenization achieves higher performance than sub-word and morpheme-level tokenization. Furthermore, minGRU is shown to be a comparable yet more efficient encoder than GRU for dialect normalization. In addition, the proposed pivot-based translation is shown to be superior to direct translation when translating Korean dialects to English, Chinese, and Japanese.

Author Contributions

Conceptualization, J.P.; methodology, S.-B.P.; software, J.P.; validation, S.-B.P.; formal analysis, J.P.; investigation, J.P.; resources, S.-B.P.; data curation, J.P.; writing—original draft preparation, J.P.; writing—review and editing, S.-B.P.; visualization, J.P.; supervision, S.-B.P.; funding acquisition, S.-B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP)-ITRC (Information Technology Research Center) grant funded by the Korea government (MSIT) (IITP-2025-RS-2023-00258649, 50%) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155911, Artificial Intelligence Convergence Innovation Human Resources Development (Kyung Hee University), 50%).

Data Availability Statement

The data used in this paper are published at https://www.aihub.or.kr/ (accessed on 28 July 2025). The datasets are distributed in five categories: Gyeongsang (https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=119) (accessed on 20 July 2025), Jeolla (https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=120) (accessed on 20 July 2025), Jeju (https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=121) (accessed on 20 July 2025), Gangwon (https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=118) (accessed on 20 July 2025), and Chungcheong (https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=122) (accessed on 20 July 2025). However, as noted in Section 5, the dataset is available in Korea only.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Choi, M.O. Korean Dialects; Sechang: Seoul, Republic of Korea, 1995. [Google Scholar]
  2. Li, J.; Shen, Y.; Huang, S.; Dai, X.; Chen, J. When is char better than subword: A systematic study of segmentation algorithms for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 1–6 August 2021; pp. 543–549. [Google Scholar]
  3. Chen, Y.; Liu, Y.; Cheng, Y.; Li, V. A Teacher-Student Framework for Zero-Resource Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; pp. 1925–1935. [Google Scholar]
  4. Firat, O.; Sankaran, B.; Al-Onaizan, Y.; Vural, F.; Cho, K.H. Zero-Resource Translation with Multi-lingual Neural Machine Translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; pp. 268–277. [Google Scholar]
  5. Kim, Y.S.; Petrov, P.; Petrushkov, P.; Ney, H. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019. [Google Scholar]
  6. Kuparinen, O.; Miletić, A.; Scherrer, Y. Dialect-to-Standard Normalization: A Large-Scale Multilingual Evaluation. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, 6–10 December 2023; pp. 13814–13828. [Google Scholar]
  7. Tan, S.; Joty, S.; Varshney, L.; Kan, M.Y. Mind Your Inflections! Improving NLP for Non-Standard Englishes with Base-Inflection Encoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online, 16–20 November 2020; pp. 5647–5663. [Google Scholar] [CrossRef]
  8. Abe, K.; Matsubayashi, Y.; Okazaki, N.; Inui, K. Multi-dialect Neural Machine Translation and Dialectometry. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, China, 1–3 December 2018. [Google Scholar]
  9. Johnson, M.; Schuster, M.; Le, Q.; Krikun, M.; Wu, Y.; Chen, Z.; Thorat, N.; Viégas, F.; Wattenberg, M.; Corrado, G.; et al. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Trans. Assoc. Comput. Linguist. 2017, 5, 339–351. [Google Scholar] [CrossRef]
  10. Honnet, P.E.; Popescu-Belis, A.; Musat, C.; Baeriswyl, M. Machine Translation of Low-Resource Spoken Dialects: Strategies for Normalizing Swiss German. In Proceedings of the 11th International Conference on Language Resources and Evaluation, Miyazaki, Japan, 7–12 May 2018. [Google Scholar]
  11. Sajjad, H.; Darwish, K.; Belinkov, Y. Translating Dialectual Arabic to English. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, 4–9 August 2013; pp. 1–6. [Google Scholar]
  12. Faheem, M.; Wassif, K.; Bayomi, H.; Abdou, S. Improving neural machine translation for low resource languages through non-parallel corpora: A case study of Egyptian dialect to modern standard Arabic translation. Sci. Rep. 2024, 14, 2265. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, Z.; Ni, S.; Aw, A.; Chen, N. Singlish Message Paraphrasing: A Joint Task of Creole Translation and Text Normalization. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; pp. 3924–3936. [Google Scholar]
  14. Paul, M.; Finch, A.; Dixon, P.; Sumita, E. Dialect Translation: Integrating Bayesian Co-segmentation Models with Pivot-based SMT. In Proceedings of the 1st Workshop on Algorithms and Resources for Modelling of Dialects and Language Varieties, Edinburgh, UK, 31 July 2011; pp. 1–9. [Google Scholar]
  15. Jeblee, S.; Feely, W.; Bouamor, H.; Lavie, A.; Habash, N.; Oflazer, K. Domain and Dialect Adaptation for Machine Translation into Egyptian Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 196–206. [Google Scholar]
  16. Edunov, S.; Ott, M.; Auli, M.; Grangier, D. Understanding Back-Translation at Scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 489–500. [Google Scholar] [CrossRef]
  17. Sennrich, R.; Haddow, B.; Birch, A. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 86–96. [Google Scholar] [CrossRef]
  18. Tahssin, R.; Kishk, Y.; Torki, M. Identifying Nuanced Dialect for Arabic Tweets with Deep Learning and Reverse Translation Corpus Extension System. In Proceedings of the 5th Arabic Natural Language Processing Workshop, Barcelona, Spain, 12 December 2020; pp. 288–294. [Google Scholar]
  19. Riley, P.; Dozat, T.; Botha, J.; Garcia, X.; Garrette, D.; Riesa, J.; Firat, O.; Constant, N. FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation. Trans. Assoc. Comput. Linguist. 2023, 11, 671–685. [Google Scholar] [CrossRef]
  20. Sun, J.; Sellam, T.; Clark, E.; Vu, T.; Dozat, T.; Garrette, D.; Siddhant, A.; Eisenstein, J.; Gehrmann, S. Dialect-robust Evaluation of Generated Text. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, ON, Canada, 9–14 July 2023; pp. 6010–6028. [Google Scholar] [CrossRef]
  21. Lim, S.B.; Park, C.J.; Yang, Y.W. Deep Learning-based Korean Machine Translation Research Considering Linguistics Features and Service. J. Korean Converg. Soc. 2022, 13, 21–29. [Google Scholar]
  22. Hwang, J.S.; Yang, H.C. Korean dialect-standard language translation using special token in KoBART. In Proceedings of the Symposium of the Korean Institute of Communications and Information Sciences, Pyeongchang, Republic of Korea, 31 January–2 February 2024; pp. 1178–1179. [Google Scholar]
  23. Lee, S.Y.; Jung, D.-E.; Sim, J.Y.; Kim, S.H. Study on Jeju Dialect Machine Translation Utilizing an Open-Source Large Language Model. In Proceedings of the Summer Annual Conference of IEIE 2024, Jeju, Republic of Korea, 26–28 June 2024; pp. 2923–2926. [Google Scholar]
  24. Roh, H.G.; Lee, K.H. A Basic Performance Evaluation of the Speech Recognition APP of Standard Language and Dialect using Google, Naver, and Daum KAKAO APIs. Asia-Pac. J. Multimed. Serv. Converg. Art Humanit. Sociol. 2017, 7, 819–829. [Google Scholar] [CrossRef]
  25. Na, J.; Park, Y.; Lee, B. A Comparative Study on the Biases of Age, Gender, Dialects, and L2 speakers of Automatic Speech Recognition for Korean Language. In Proceedings of the 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Macau, China, 3–6 December 2024; pp. 1–6. [Google Scholar] [CrossRef]
  26. Bak, S.H.; Choi, S.M.; Jung, Y.C. Voice Recognition Control using LLM for Regional Dialects. In Proceedings of the KIIT Conference, Jeju, Republic of Korea, 14–17 October 2025; pp. 617–620. [Google Scholar]
  27. An, S.Y.; Bae, K.H.; Choi, E.B.; Choi, S.; Choi, Y.M.; Hong, S.K.; Hong, Y.J.; Hwang, J.W.; Jeon, H.J.; Jo, G.; et al. EXAONE 3.0 7.8B Instruction Tuned Language Model. arXiv 2024, arXiv:2408.03541. [Google Scholar] [CrossRef]
  28. Cho, K.H.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar]
  29. Feng, L.; Tung, F.; Ahmed, M.; Bengio, Y.; Hajimirsadeghi, H. Were RNNs All We Needed? arXiv 2024, arXiv:2410.01201. [Google Scholar] [CrossRef]
  30. Blelloch, G. Prefix Sums and Their Applications; Technical Report CMU-CS-90-190; School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, USA, 1990. [Google Scholar]
  31. Heinsen, F. Efficient Parallelization of a Ubiquitous Sequential Computation. arXiv 2023, arXiv:2311.06281. [Google Scholar] [CrossRef]
  32. Luong, T.; Pham, H.; Manning, C.; Su, J. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1412–1421. [Google Scholar] [CrossRef]
  33. Tiedemann, J.; Thottingal, S. OPUS-MT—Building open translation services for the World. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, Lisboa, Portugal, 3–5 November 2020; pp. 479–480. [Google Scholar]
  34. Fan, A.; Bhosale, S.; Schwenk, H.; Ma, Z.; El-Kishky, A.; Goyal, S.; Baines, M.; Celebi, O.; Wenzek, G.; Chaudhary, V. Beyond English-Centric Multilingual Machine Translation. J. Mach. Learn. Res. 2020, 22, 4839–4886. [Google Scholar]
  35. Junczys-Dowmunt, M.; Grundkiewicz, R.; Dwojak, T.; Hoang, H.; Heafield, K.; Neckermann, T.; Seide, F.; Germann, U.; Aji, A.; Bogoychev, N.; et al. Marian: Fast Neural Machine Translation in C++. In Proceedings of the ACL 2018, System Demonstrations, Melbourne, Australia, 15–20 July 2018; pp. 116–121. [Google Scholar]
  36. Tiedemann, J. Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation, Istanbul, Turkey, 23–25 May 2012; pp. 2214–2218. [Google Scholar]
  37. Sim, Y.J.; Lee, W.J.; Kim, H.J.; Kim, H.S. Evaluating Large Language Models on Korean Cultural Understanding in Empathetic Response Generation. In Proceedings of the 36th Annual Conference on Human and Cognitive Language Technology, Seongnam, Republic of Korea, 11–12 October 2024; pp. 325–330. [Google Scholar]
  38. Hanna, M.; Bojar, O. A Fine-Grained Analysis of BERTScore. In Proceedings of the Sixth Conference on Machine Translation, Online, 10–11 November 2021; Barrault, L., Bojar, O., Bougares, F., Chatterjee, R., Costa-jussa, M.R., Federmann, C., Fishel, M., Fraser, A., Freitag, M., Graham, Y., et al., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 507–517. [Google Scholar]
Figure 1. An example translation of a Gangwon dialect sentence to English. The input Korean sentence is “이 나이에도 여전히 걱정만 끼쳐 죄송해요.” Its English translation is “I’m sorry I still worry you even at this age.” but the dialect sentence is mistranslated to “I’m sorry that I still cause extreme pain even in this situation.”
Figure 2. The overall structure of the proposed pivot-based machine translator for Korean dialects. The red font indicates the normalized part of the given sentence.
Figure 3. The architecture of the proposed translator from Korean regional dialects to the standard.
Figure 4. Examples of prompts for Llama-3.1-8B and Exaone-3.0-7.8B.
Figure 5. Epoch time comparison between bi-directional minGRU and bi-directional GRU.
Figure 6. Change of GPU execution time per sentence according to token length in dialect normalization.
Table 1. Example specialized words in Korean dialects.
Dialect | Standard Form | Dialect Form | Meaning
Gyeongsang | 하다 [hada] | 카다 [kada] | do
Jeolla | 버르장머리 [pʌrɯdzaŋmʌri] | 버르쟁이 [pʌrɯdzɛŋi] | courtesy
Jeju | 있었어? [iśʌśʌ] | 있언? [iśʌn] | was it?
Gangwon | 고등학교 [kodɯŋhakkyo] | 고등핵교 [kodɯŋhɛkkyo] | high school
Chungcheong | 어떻게 [ʌt́ʌkhe] | 어트케 [ʌdhɯkhe] | how
Table 2. An example of tokenization for a Korean dialect sentence. ‘##’ in morpheme-level tokens and ‘▁’ in SentencePiece tokens represent white space information.
Tokenization Method | Tokens
dialect sentence | 목사님 그 앞에 모니터 좀 있으면 좋지 않안허쿠가 영 하니까
standard sentence | 목사님 그 앞에 모니터 좀 있으면 좋지 않겠어요 이렇게 하니까
SentencePiece | ▁목사님 ▁그 ▁앞에 ▁모니터 ▁좀 ▁있으면 ▁좋지 ▁않안 허 쿠가 ▁영 ▁하니까
byte-level BPE | 목 ìĤ¬ëĭĺ Ġê·¸ ĠìķŀìĹIJ Ġ모ëĭĪ íĦ° Ġì¢Ģ ĠìŀĪìľ¼ë©´ Ġì¢ĭì§Ģ ĠìķĬìķĪ íĹĪ ì¿łê°Ģ Ġìĺģ ĠíķĺëĭĪê¹Į
morpheme-level | 목사 ##님 그 앞 ##에 모니터 좀 있 ##으면 좋 ##지 않 ##안 ##허 ##쿠 ##가 영 하 ##니까
alphabet-level | ㅁ ㅗ ㄱ <SEP> ㅅ ㅏ <SEP> ㄴ ㅣ ㅁ <SEP> <SPC> ㄱ ㅡ <SEP> <SPC> ㅇ ㅏ ㅍ <SEP> ㅇ ㅔ <SEP> <SPC> ㅁ ㅗ <SEP> ㄴ ㅣ <SEP> ㅌ ㅓ <SEP> <SPC> ㅈ ㅗ ㅁ <SEP> <SPC> ㅇ ㅣ ㅆ <SEP> ㅇ ㅡ <SEP> ㅁ ㅕ ㄴ <SEP> <SPC> ㅈ ㅗ ㅎ <SEP> ㅈ ㅣ <SEP> <SPC> ㅇ ㅏ ㄶ <SEP> ㅇ ㅏ ㄴ <SEP> ㅎ ㅓ <SEP> ㅋ ㅜ <SEP> ㄱ ㅏ <SEP> <SPC> ㅇ ㅕ ㅇ <SEP> <SPC> ㅎ ㅏ <SEP> ㄴ ㅣ <SEP> ㄲ ㅏ <SEP>
Table 3. A simple statistic of the dataset for Korean dialect normalization.
Dialect | # of Training Pairs | # of Validation Pairs | # of Test Pairs
Gyeongsang | 260,494 (2,088,717) | 14,210 (89,512) | 14,181 (89,511)
Jeolla | 254,207 (1,992,101) | 25,922 (110,458) | 25,727 (110,459)
Jeju | 758,384 (2,774,257) | 42,938 (80,062) | 42,669 (80,061)
Gangwon | 557,969 (1,573,237) | 24,203 (91,346) | 24,084 (91,345)
Chungcheong | 260,494 (1,848,455) | 15,434 (95,000) | 15,601 (95,000)
Table 4. Korean dialect normalization results evaluated on BLEU. * Means that the difference is statistically significant (p < 0.05) compared to the alphabet-level tokenization. The bold values indicate the best performance in each column.
Tokenizations | Gyeongsang | Jeolla | Jeju | Gangwon | Chungcheong | Overall
SentencePiece | 76.3 ± 4.0 * | 28.6 ± 14 * | 79.2 ± 4.8 | 51.7 ± 12 * | 33.3 ± 13 * | 53.82
byte-level BPE | 48.5 ± 9.9 * | 24.0 ± 7.0 * | 81.2 ± 4.1 | 28.8 ± 12 * | 21.7 ± 6.8 * | 40.84
morpheme-level | 93.5 ± 0.8 * | 85.8 ± 2.2 * | 81.4 ± 2.1 * | 83.1 ± 2.4 * | 83.1 ± 3.1 * | 85.38
alphabet-level | 97.9 ± 0.4 | 96.5 ± 0.4 | 90.0 ± 0.4 | 93.7 ± 0.8 | 96.3 ± 0.8 | 94.88
Table 5. Korean dialect normalization results evaluated on chrF++. * Implies statistical significance (p < 0.05) over the alphabet-level tokenization. The bold values indicate the best performance in each column.
Tokenizations | Gyeongsang | Jeolla | Jeju | Gangwon | Chungcheong | Overall
SentencePiece | 66.6 ± 5.1 * | 20.3 ± 11 * | 69.4 ± 6.0 | 39.2 ± 12 * | 22.7 ± 11 * | 43.64
byte-level BPE | 36.3 ± 9.7 * | 15.1 ± 4.9 * | 71.9 ± 5.2 | 19.7 ± 9.6 * | 13.5 ± 4.5 * | 31.3
morpheme-level | 90.3 ± 0.9 * | 79.0 ± 3.2 * | 72.5 ± 2.8 * | 75.1 ± 3.2 * | 75.2 ± 4.4 * | 78.42
alphabet-level | 96.4 ± 0.7 | 94.0 ± 0.5 | 84.0 ± 0.6 | 89.8 ± 1.2 | 93.6 ± 1.3 | 91.56
Table 6. Performance comparison according to encoder types: GRU vs. minGRU.
Dialect | BLEU (minGRU) | BLEU (GRU) | chrF++ (minGRU) | chrF++ (GRU)
Gyeongsang | 97.9 ± 0.4 | 98.2 ± 0.1 | 96.4 ± 0.7 | 96.8 ± 0.2
Jeolla | 96.5 ± 0.4 | 97.2 ± 0.1 | 94.0 ± 0.5 | 95.2 ± 0.1
Jeju | 90.0 ± 0.4 | 90.7 ± 0.1 | 84.0 ± 0.6 | 85.1 ± 0.2
Gangwon | 93.7 ± 0.8 | 94.7 ± 0.5 | 89.8 ± 1.2 | 91.4 ± 0.6
Chungcheong | 96.3 ± 0.8 | 97.4 ± 0.1 | 93.6 ± 1.3 | 95.5 ± 0.3
Overall | 94.88 | 95.44 | 91.56 | 92.50
Table 7. Evaluations on Korean dialect translation to English. In this table, chr./B. means chrF++/BERT score.
Dialect | Direct: Opus-MT (chr./B.) | Direct: m2m_100_1.2B (chr./B.) | Direct: Exaone (chr./B.) | Proposed: Opus-MT (chr./B.) | Proposed: m2m_100_1.2B (chr./B.) | Proposed: Exaone (chr./B.)
Gyeongsang | 27.82/89.41 | 27.64/89.62 | 39.80/91.23 | 29.19/89.65 | 29.17/89.90 | 40.45/91.26
Jeolla | 27.86/89.21 | 27.66/89.37 | 39.81/90.95 | 28.96/89.48 | 29.27/89.75 | 40.37/91.05
Jeju | 23.30/88.15 | 22.97/88.18 | 33.86/89.85 | 26.06/88.90 | 25.94/89.20 | 36.75/90.51
Gangwon | 25.95/88.92 | 25.61/89.15 | 37.18/90.56 | 27.83/89.45 | 27.43/89.62 | 38.36/90.77
Chungcheong | 28.06/89.22 | 28.12/89.40 | 39.75/90.93 | 29.10/89.45 | 29.54/89.70 | 40.22/90.98
Overall | 26.60/88.98 | 26.40/89.16 | 38.08/90.70 | 28.23/89.39 | 28.27/89.63 | 39.05/90.91
Table 8. Evaluations on translation from Korean dialects to Chinese. In this table, BERT means BERT score.
Dialect | Direct: m2m_100_1.2B (chrF/BERT) | Direct: Exaone (chrF/BERT) | Proposed: m2m_100_1.2B (chrF/BERT) | Proposed: Exaone (chrF/BERT)
Gyeongsang | 9.03/69.28 | 13.16/73.92 | 9.69/70.37 | 13.31/74.05
Jeolla | 9.01/69.35 | 12.71/73.65 | 9.67/70.55 | 12.94/73.90
Jeju | 7.62/67.19 | 10.82/71.02 | 9.02/69.92 | 12.09/73.10
Gangwon | 8.18/68.47 | 12.01/72.86 | 9.12/70.16 | 12.50/73.58
Chungcheong | 8.74/69.44 | 12.33/73.66 | 9.37/70.40 | 12.60/73.90
Overall | 8.52/68.75 | 12.21/73.02 | 9.37/70.28 | 12.69/73.71
Table 9. Evaluations on translation from Korean dialects to Japanese. In this table, BERT means BERT score.
Dialect | Direct: m2m_100_1.2B (chrF/BERT) | Direct: Exaone (chrF/BERT) | Proposed: m2m_100_1.2B (chrF/BERT) | Proposed: Exaone (chrF/BERT)
Gyeongsang | 12.94/74.18 | 19.69/78.59 | 13.83/75.01 | 20.67/79.07
Jeolla | 13.03/74.18 | 19.44/78.48 | 13.78/75.00 | 19.92/78.81
Jeju | 9.38/71.71 | 15.42/76.20 | 11.02/73.35 | 17.05/77.47
Gangwon | 10.99/73.12 | 17.92/77.79 | 12.09/74.26 | 18.60/78.31
Chungcheong | 12.71/73.90 | 19.08/78.21 | 13.51/74.60 | 19.70/78.51
Overall | 11.81/73.42 | 18.31/77.85 | 12.84/74.44 | 19.19/78.39
