Article

Multi-Knowledge-Enhanced Model for Korean Abstractive Text Summarization

1 Department of IT Convergence Engineering, Gachon University, Sungnam 13120, Republic of Korea
2 Department of Computer Engineering, Gachon University, Sungnam 13120, Republic of Korea
3 Department of Health Administration, Kongju National University, Gongju 32588, Republic of Korea
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(9), 1813; https://doi.org/10.3390/electronics14091813
Submission received: 29 March 2025 / Revised: 24 April 2025 / Accepted: 25 April 2025 / Published: 29 April 2025

Abstract

Text summarization plays a crucial role in processing extensive textual data, particularly in low-resource languages such as Korean. However, abstractive summarization faces persistent challenges, including semantic distortion and inconsistency. This study addresses these limitations by proposing a multi-knowledge-enhanced abstractive summarization model tailored for Korean texts. The model integrates internal knowledge, specifically keywords and topics that are extracted using a context-aware BERT-based approach. Unlike traditional statistical extraction methods, our approach utilizes the semantic context to ensure that the internal knowledge is both diverse and representative. By employing a multi-head attention mechanism, the proposed model effectively integrates multiple types of internal knowledge with the original document embeddings. Experimental evaluations on Korean datasets (news and legal texts) demonstrate that our model significantly outperforms baseline methods, achieving notable improvements in lexical overlap, semantic consistency, and structural coherence, as evidenced by higher ROUGE and BERTScore metrics. Furthermore, the method maintains information consistency across diverse categories, including dates, quantities, and organizational details. These findings highlight the potential of context-aware multi-knowledge integration in enhancing Korean abstractive summarization and suggest promising directions for future research into broader knowledge-incorporation strategies.

1. Introduction

Text summarization is a task that requires effectively utilizing key information from a text and has evolved since the 1950s through various techniques, such as statistical methods and linguistics [1]. Text summarization methods can be classified into extractive and abstractive summarization, based on the summarization approach. Extractive summarization involves selecting important sentences or words and establishing natural connections between them. To identify these key sentences and words, techniques such as statistical analysis, conceptual approaches, and graph-based methods have been proposed [2]. However, extractive summarization works significantly differently from how humans summarize texts and often fails to simplify complex and lengthy sentences in the summarized results [3]. Abstractive summarization was introduced to address these issues and to adopt a more human-like summarization approach. It generates concise and coherent summaries while preserving essential information and the overall meaning of the input text, thereby mitigating the limitations of extractive summarization.
Early abstractive summarization methods used deep learning models, such as recurrent neural networks (RNNs). However, these models had limitations in learning the relationships between distant words, such as the first and last words in long sentences, which constrained their ability to generate effective summaries. To address the issues of early models, transformer-based pre-trained language models (PLMs), which have proven to be effective in the field of natural language processing, have been introduced for abstractive summarization [1,4]. Transformer-based PLMs have enabled the effective summarization not only of general conversations but also of texts from more specific domains such as healthcare [5,6]. However, although significant research has been conducted on transformer-based PLMs for English data, validation for non-English data is still lacking. Recently, transformer-based PLMs for summarization tasks have been proposed that are specialized in specific languages, such as Czech and Arabic [7,8]. This approach is also applicable to the abstractive summarization of Korean texts, which is the problem we aim to address.
Despite advancements in transformer-based PLMs, abstractive summarization using these models still faces challenges, such as generating semantically inappropriate summaries, distortion, and issues with temporal and causal consistency [4]. Knowledge enhancement has the potential to mitigate these problems by enabling models to better understand the given data and produce more accurate results.
Knowledge enhancement refers to the process of integrating knowledge in order to improve the recognition and understanding of input texts in natural language generation (NLG). It has demonstrated effectiveness in enhancing the interpretability of NLG models and producing coherent texts [9]. The knowledge used in this process can be classified as either internal or external, depending on its source, and various architectures and training methods have been proposed for integrating it with the original text. Internal knowledge includes elements such as topics and keywords extracted from the input text itself. Prominent examples of knowledge enhancement using internal knowledge involve extracting topics using latent Dirichlet allocation (LDA) and integrating them into summarization models, or using neural topic models and variational inference to enhance the model’s summarization performance [10,11]. In contrast, external knowledge relies on knowledge graphs or other datasets constructed from external sources, and enhancing a model through such resources requires additional data. As Korean is a low-resource language, sufficient resources for natural language processing are not readily available [12]. This study therefore focuses on applying knowledge enhancement based on internal knowledge to Korean texts, addressing the limitations posed by the lack of abundant resources.
This study proposes a knowledge-enhanced approach to Korean abstractive text summarization (ATS) that incorporates various types of internal knowledge. Knowledge enhancement using internal knowledge emphasizes critical information from the input text, which can improve the consistency of the generated summaries. Knowledge enhancement with topics and keywords commonly follows two approaches: integrating the extracted knowledge into the NLG model, or jointly optimizing the NLG and knowledge-extraction models [9]. To enhance word matching, sentence structure, and semantic consistency in Korean abstractive summarization, we propose a method that simultaneously integrates extracted topics and keywords into a Korean ATS model:
  • The proposed internal knowledge extraction method utilizes a Bidirectional Encoder Representations from Transformer (BERT)-based PLM to extract diverse and consistent knowledge from a text while considering the semantic context;
  • By employing an attention mechanism that simultaneously considers various types of internal knowledge, the proposed approach aims to consistently preserve critical information from the input text in the summary, thereby mitigating distortion issues;
  • The proposed method demonstrates that the knowledge-enhanced approach, which integrates multiple types of knowledge, outperforms PLMs that only use the original document as input and those that are enhanced with a single type of knowledge in the task of Korean ATS.

2. Related Work

2.1. Abstractive Text Summarization

Abstractive summarization, an NLG task, aims to produce summaries that preserve the essential content and overall meaning of an input text. Recently, transformer-based PLMs have proven effective in natural language processing and have been applied in the field of NLG. For example, BERT has demonstrated effectiveness in understanding the complex relationships between words and meanings, thereby enabling the generation of accurate and consistent summaries, whereas the Generative Pre-trained Transformer (GPT) has leveraged its generative capabilities to produce contextually relevant summaries for ATS tasks [4]. However, most existing research has been conducted primarily on English text summarization.
The recent research on abstractive summarization has increasingly focused on underrepresented languages. For instance, a study on Czech summarization proposed a GPT-based model that was pre-trained on Czech web data and fine-tuned on Czech news datasets, which tailored it for Czech abstractive summarization [8]. An evaluation of RNNs, transformers, and transformer-based PLMs for Arabic news summarization demonstrated that the transformers outperformed the RNNs, while the transformer-based PLMs exhibited strong performance even in low-resource settings [7]. In the case of Urdu summarization, a hybrid approach combining extractive and abstractive summarization was introduced. Extractive summaries were generated using three methods: sentence weighting algorithms, term frequency–inverse document frequency (TF-IDF) algorithms, and word frequency-based summarization. Common sentences from these summaries were then input into a BERT model to produce human-like summaries [13]. Furthermore, for the task of Korean ATS, which is the focus of this study, a multi-encoder transformer leveraging multiple transformer-based PLMs has been proposed to enhance the summarization performance of ATS models [14]. Recently, large language models (LLMs) have shown strong performance across various natural language processing tasks, including language understanding and generation. However, LLMs require substantial computational and energy resources for training, and in some cases, they have demonstrated lower performance than models such as BERT, which can be fine-tuned for specific tasks [15,16]. Additionally, since pre-trained LLMs may contain evaluation data within their training corpora, their performance can be overestimated. They also tend to exhibit an increased bias depending on the nature of the training data [17,18]. Therefore, this study proposes a method tailored to Korean abstractive summarization using a transformer-based model.
Despite the technological advancements in this field, abstractive summarization still faces several unresolved challenges. In particular, abstractive summarization models often encounter issues related to distortions in their generated summaries. To address this issue, knowledge-based approaches have been explored to ensure factual consistency during model development [4]. However, as these approaches have been developed primarily for English texts, we propose a knowledge-based method specifically designed to ensure factual consistency in Korean summarization.

2.2. Knowledge-Enhanced Model

Knowledge enhancement addresses the problem of limited information in text-only models by integrating auxiliary knowledge, thereby improving their performance and interpretability [9]. Earlier studies used specialized architectures or supervised learning to incorporate knowledge such as dependency relations or structured representations into summarization models, and attention mechanisms have been widely adopted to support this integration [9]. Internal knowledge sources such as keywords and topics, which do not require external resources, have been particularly effective. For example, TextRank-based keyword encoding improved the summary quality of English abstractive models [19], and topic distributions generated via LDA were successfully used to enhance the summarization of British news articles [10]. While earlier approaches based on RNNs or statistical models laid the groundwork for ATS, recent work has focused on Transformer-based models with integrated knowledge. Our approach extends this line of research by proposing a lightweight, context-aware attention mechanism that fuses internal knowledge (keywords and topics) without relying on external databases, which is especially beneficial for low-resource languages such as Korean.
Recently, knowledge-enhanced PLMs have been studied to address issues such as their lack of interpretability, limitations in representing rare words, and insufficient logical reasoning capabilities [20]. Various approaches have been proposed for integrating knowledge into PLMs, including embedding knowledge during the embedding process or using it as training data [21]. Such knowledge-enhanced models have demonstrated improved performance through the integration of implicit knowledge for conversational summarization or incorporation of knowledge into prompts for large language models. However, there is a lack of research on optimization strategies for simultaneously integrating multiple types of knowledge or applying these methods to non-English texts [21]. In this study, we apply knowledge-enhancement methods that are proven to be effective for English texts to Korean summarization tasks. Furthermore, we propose a method for integrating multiple types of knowledge simultaneously, addressing a major challenge in the field, and aim to verify its potential to improve summarization results. However, incorporating types of knowledge beyond keywords and topics remains a subject for future research.

2.3. Prior Work on Internal Knowledge Extraction Methods

Internal knowledge extraction is employed in knowledge-enhancement methods that utilize topics or keywords. Although traditional research has relied on statistical methods to extract topics and keywords, these approaches have limitations in adequately capturing the semantic meanings of texts. Recently, BERT-based knowledge extraction methods have addressed this issue by overcoming the reliance on word frequency that is inherent to statistical techniques. By leveraging BERT, these methods ensure that contextual meaning is incorporated into the extracted knowledge [22,23].
BERTopic [22] leverages the capability of BERT-based PLMs to effectively capture contextual meaning by vectorizing the input documents and extracting topics that reflect semantic similarity. Additionally, BERT-based PLMs enable the extraction of more diverse and consistent keywords than traditional statistical methods [23]. Specifically, keyword extraction using BERT-based PLMs has been applied to conversational summarization tasks and has contributed to improved outcomes [24]. These methods can also be applied to Korean knowledge extraction by using BERT-based PLMs that are specifically designed for the Korean language.

3. Background: Knowledge-Enhanced Model

This study employs knowledge-enhancement methods that have recently been proposed to address challenges in summarization tasks and PLMs. While the existing knowledge-enhancement methods typically integrate a single type of information, this study adopts a multi-knowledge-enhancement approach by combining various types of knowledge. Specifically, we propose an internal-knowledge-based enhancement architecture that can be utilized without requiring additional external data, which makes it particularly suitable for low-resource languages.

3.1. Generative Knowledge Models

Internal knowledge is generated using statistical techniques and integrated into the input of the text summarization model. Among the types of internal knowledge, topics have mainly been extracted using LDA, a statistical method. However, LDA has limitations in capturing semantic relationships. BERTopic addresses this issue by leveraging a Transformer-based model to generate document embeddings, perform clustering, and derive topic representations that reflect semantic relationships [22]. In this study, we enhance BERTopic for use with Korean texts by adding a morpheme-level tokenization step and employing a pre-trained Sentence-BERT model that supports Korean.
Keywords, another form of internal knowledge, have traditionally been assigned from predefined vocabularies or extracted based on statistical importance from the input document. Recently, KeyBERT [23] was proposed to extract keywords while considering the semantic context of the given text. Building on KeyBERT, our study uses the robustly optimized BERT approach (RoBERTa), an embedding model known for efficiently extracting meaningful keywords from small datasets.

3.2. Multiple Knowledge Integration

The most commonly used approach for integrating knowledge is to design specialized architectures that can represent specific types of knowledge [9]. Among these, methods based on attention mechanisms effectively integrate knowledge representations. In this study, we propose a multi-knowledge integration method using an attention mechanism. An attention mechanism is a method designed to focus on important information among the various elements of the input data [25]. It calculates attention scores by measuring the similarity between the query Q, which represents the focus criterion, and the vector representation K of each element in the input data. The calculated attention scores are then used to compute a weighted sum of the actual values of the input data, generating a context vector. This context vector is integrated into NLG tasks in the form of knowledge enhancement, for example, by being used as the initial input for the decoder in a Transformer model [9].
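For reference, the scaled dot-product form in which these attention scores are usually computed [25] can be written as follows; this is the standard Transformer formulation rather than notation introduced by this paper:

```latex
% Scaled dot-product attention: Q (queries), K (keys), V (values),
% d_k = key dimensionality used for scaling before the softmax.
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
```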
Multi-head attention is a type of attention mechanism in which each attention head focuses on different parts of the input text, enabling the model to learn diverse relationships and features, and, thereby, to generate richer representations [26]. In summarization research, multi-head attention has been used to effectively learn and integrate relationships across video, text, and audio data [27]. This capability of integrating multiple sources of information demonstrates the effectiveness of this approach and supports its applicability to the integration of multiple types of knowledge as proposed in this study.

4. Proposed Method

The proposed method (Figure 1) is a multi-knowledge-enhanced model that simultaneously integrates multiple types of knowledge for Korean ATS. It consists of three main stages: internal knowledge extraction, knowledge enhancement, and Korean abstractive summarization. In the internal knowledge extraction stage, contextual knowledge is extracted with a pre-trained BERT-based model from preprocessed documents in which outliers and stopwords [28] have been removed. In the knowledge-enhancement stage, the extracted knowledge is combined with the original document through a multi-head attention mechanism and transformed into a knowledge vector. In the Korean abstractive summarization stage, the first vector of the embedded original document is replaced with the knowledge vector, and a pre-trained transformer-based model is used to generate the final summary in Korean.

4.1. Internal Knowledge Extraction

The proposed method is an internal-knowledge-based knowledge-enhancement model that utilizes extracted keywords and topics. For internal knowledge extraction, the original text undergoes preprocessing, during which 59,528 stopwords including particles and pronouns are removed, along with non-Korean characters, numbers, and punctuation. The preprocessed text is then used for the extraction of keywords and topics using methods that are specialized for the Korean language.
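As a minimal illustration of this preprocessing step, the following sketch removes non-Korean characters with a regular expression and filters stopword tokens; the stopword file name and the exact filtering rules are assumptions rather than the authors' released code.

```python
import re

def preprocess_korean(text: str, stopwords: set[str]) -> str:
    """Keep Hangul and whitespace only, then drop stopword tokens."""
    # Numbers, punctuation, and non-Korean characters are removed,
    # mirroring the preprocessing described above.
    text = re.sub(r"[^가-힣ㄱ-ㅎㅏ-ㅣ\s]", " ", text)
    tokens = text.split()
    return " ".join(tok for tok in tokens if tok not in stopwords)

# Hypothetical usage with a stopword list such as the one referenced in [28].
with open("stopwords-ko.txt", encoding="utf-8") as f:
    stopwords = {line.strip() for line in f if line.strip()}

clean_doc = preprocess_korean("추경호 부총리는 3일 물가 대응 방안을 밝혔다.", stopwords)
```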
As illustrated in Figure 2, we adopt BERTopic to extract document-level topics. BERTopic begins by embedding the input document using a pre-trained Sentence-BERT model, which captures semantic relationships by encoding contextual information. In contrast to traditional statistical approaches, Sentence-BERT maps semantically similar documents closer together in the embedding space. For topic modeling in this study, we employ a Korean-specific Sentence-BERT model with approximately 117.7 million parameters. To reduce the computational complexity of the model while preserving its structural characteristics, we apply uniform manifold approximation and projection (UMAP) for dimensionality reduction. This technique maintains both local and global structures in the reduced vector space. Next, hierarchical density-based clustering (HDBSCAN) is used to detect document clusters with varying densities and performs soft clustering to filter out irrelevant or noisy documents. Each resulting cluster is treated as a single composite document. A class-based TF-IDF algorithm is then applied within each cluster to extract high-importance words, which are selected as candidate topic terms. Finally, topics with low semantic similarity are merged during post-processing to produce a refined set of representative topics. To enhance the model’s compatibility with Korean text, we incorporate morpheme-level tokenization using the Open Korean Text (Okt) processor and apply a pre-trained Sentence-BERT model that has been fine-tuned on Korean corpora.
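A pipeline along these lines can be assembled from the public bertopic, umap-learn, hdbscan, and konlpy packages. The sketch below is an approximation of the described configuration; the Sentence-BERT checkpoint name and the UMAP/HDBSCAN hyperparameters are assumptions, since the paper does not list them explicitly.

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP
from hdbscan import HDBSCAN
from konlpy.tag import Okt

okt = Okt()

def morph_tokenizer(text: str) -> list[str]:
    # Morpheme-level tokenization feeding the class-based TF-IDF step.
    return okt.morphs(text)

# Hypothetical Korean Sentence-BERT checkpoint (the paper only states ~117.7M parameters).
embedding_model = SentenceTransformer("jhgan/ko-sroberta-multitask")

topic_model = BERTopic(
    embedding_model=embedding_model,                                   # document embeddings
    umap_model=UMAP(n_components=5, metric="cosine"),                  # dimensionality reduction
    hdbscan_model=HDBSCAN(min_cluster_size=10, prediction_data=True),  # soft, density-based clustering
    vectorizer_model=CountVectorizer(tokenizer=morph_tokenizer),       # candidate topic terms per cluster
    nr_topics="auto",                                                  # merge similar topics in post-processing
)

documents = ["기획재정부는 3일 물가 대응 방안을 발표했다.", "..."]  # placeholder: preprocessed Korean corpus
topics, probabilities = topic_model.fit_transform(documents)
```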
For keyword extraction, we utilize a RoBERTa-based KeyBERT, a lightweight and high-performing keyword extraction model that is optimized for small-scale data. Specifically, we employ a Korean RoBERTa model with approximately 124.6 million parameters. The input document is segmented into candidate n-grams ranging from one to three words. The embeddings for the document and each n-gram candidate are computed independently. The cosine similarity is then measured between the document vector and each candidate, and the top five most relevant n-grams are selected as keywords. To ensure both semantic relevance and lexical diversity, we set use_mmr=True and the diversity parameter to 0.7. This configuration has demonstrated strong performance in Korean keyword extraction tasks and is effectively integrated into our model to provide informative inputs for summarization.
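This keyword extraction step maps closely onto the public keybert package; the sketch below reflects the stated settings (one- to three-token n-grams, top five candidates, MMR with diversity 0.7), while the Korean RoBERTa checkpoint name is an assumed placeholder.

```python
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer

# Hypothetical Korean RoBERTa-based encoder (the paper reports ~124.6M parameters).
encoder = SentenceTransformer("jhgan/ko-sroberta-multitask")
kw_model = KeyBERT(model=encoder)

document = "추경호 부총리 겸 기획재정부 장관은 3일 물류 대응 방안을 밝혔다."  # preprocessed input

keywords = kw_model.extract_keywords(
    document,
    keyphrase_ngram_range=(1, 3),  # candidate n-grams of one to three tokens
    top_n=5,                       # keep the five most relevant candidates
    use_mmr=True,                  # maximal marginal relevance
    diversity=0.7,                 # diversity setting reported above
    stop_words=None,               # stopwords were already removed during preprocessing
)
# keywords: list of (phrase, cosine-similarity) tuples
```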

4.2. Multi-Knowledge-Enhanced Model

Figure 3 illustrates the proposed multi-knowledge-enhanced model and the process of creating a knowledge-integrated input. First, the original document and the extracted knowledge are encoded separately using a pre-trained KoBERTSum (Korean BERT summarization) model. To incorporate diverse types of knowledge into the model, contextual embeddings are extracted from the encoder for the input document, keywords, and topics. In this study, we use five keywords, each consisting of one to three morphemes, and five topics, each composed of a single morpheme, and obtain individual embeddings for each.
Let $E_{doc} \in \mathbb{R}^{1 \times d}$ be the embedding of the input document, $E_{kw_i} \in \mathbb{R}^{1 \times d}$ the embedding of the $i$-th keyword, and $E_{tp_j} \in \mathbb{R}^{1 \times d}$ the embedding of the $j$-th topic. These embeddings are concatenated along the sequence dimension to form the final multi-knowledge representation:

$$E_{all} = \mathrm{Concat}(E_{doc}, E_{kw_1}, \ldots, E_{kw_K}, E_{tp_1}, \ldots, E_{tp_T}) \in \mathbb{R}^{(1+K+T) \times d}$$
Here, $K$ denotes the number of keywords, $T$ the number of topics, and $d$ the dimensionality of each embedding vector; $\mathbb{R}^{m \times d}$ denotes a real-valued matrix with $m$ rows and $d$ columns. The combined embedding is then passed through a multi-head attention (MHA) mechanism to generate a context-aware representation:

$$C = \mathrm{MHA}(E_{all}, E_{all}, E_{all}) \in \mathbb{R}^{(1+K+T) \times d}$$

Each row of $C$ corresponds to an attention-enhanced embedding for the document, keywords, and topics, respectively. In this work, only the first row $C_0$, which corresponds to the document input position, is extracted and used as the enhanced document embedding:

$$e_{doc}^{enhanced} = C_0 \in \mathbb{R}^{1 \times d}$$
This enhanced document embedding $e_{doc}^{enhanced}$ incorporates semantic information from auxiliary knowledge such as keywords and topics. The approach follows the knowledge-enhanced attention architecture [9], in which knowledge tokens are concatenated with document tokens and only the output at the first position is used. In this study, we integrate multiple forms of knowledge into the input of the KoBERTSum model. Specifically, the document embedding is combined with the knowledge embeddings and passed through an attention mechanism to enhance their contextual representation. After the model has interacted with the knowledge tokens (i.e., keywords and topics), only the vector corresponding to the first position, originally occupied by the document embedding, is used. In other words, the first token serves as a representative of the original document, enriched with semantic information through interaction with the knowledge embeddings.
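The integration described above can be sketched with torch.nn.MultiheadAttention as follows. This is a simplified illustration under the assumption that all embeddings share the dimensionality d; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MultiKnowledgeFusion(nn.Module):
    """Fuse document, keyword, and topic embeddings and return the enhanced document vector."""

    def __init__(self, d_model: int, num_heads: int = 4):
        super().__init__()
        # Four heads matched the best configuration reported for the law dataset (Table 3).
        self.mha = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, e_doc, e_keywords, e_topics):
        # e_doc: (B, 1, d), e_keywords: (B, K, d), e_topics: (B, T, d)
        e_all = torch.cat([e_doc, e_keywords, e_topics], dim=1)  # E_all: (B, 1+K+T, d)
        context, _ = self.mha(e_all, e_all, e_all)               # self-attention over all inputs
        return context[:, 0, :]                                  # C_0: enhanced document embedding

# Hypothetical usage with d = 768, K = 5 keywords, T = 5 topics.
fusion = MultiKnowledgeFusion(d_model=768, num_heads=4)
e_doc, e_kw, e_tp = torch.randn(2, 1, 768), torch.randn(2, 5, 768), torch.randn(2, 5, 768)
e_doc_enhanced = fusion(e_doc, e_kw, e_tp)                       # shape: (2, 768)
```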

4.3. Korean ATS Model

The proposed Korean ATS model utilizes the pre-trained KoBERTSum model, which consists of a BERT-based encoder-decoder architecture adapted for Korean language summarization. The encoder processes both the original document and the knowledge-enhanced vector to produce contextual representations, which the decoder then uses to generate fluent and semantically accurate summaries in Korean. The model is initialized with the publicly available Hugging Face checkpoint “EbanLee/kobert-summary-v3”, which contains 123,859,968 parameters, and its weights are updated during training. The model supports input sequences of up to 512 tokens. For training, the model employs the cross-entropy loss function and the Adam optimizer, with a learning rate of 0.00002 and a maximum of 100 epochs. To prevent overfitting, early stopping is applied if the loss does not improve for more than three consecutive epochs. The knowledge-enhanced vector replaces the embedding of the first token in the encoder output of the ATS model. Let the original encoder hidden states of the input document be $H_{enc} \in \mathbb{R}^{L \times d}$; the final encoder output is defined as:
$$H_{enc}^{final} = [\,e_{doc}^{enhanced};\, H_{enc}^{1:}\,] \in \mathbb{R}^{L \times d}$$
where $L$ is the input sequence length and $[\cdot\,;\cdot]$ denotes the replacement of the first token embedding. To enrich the document representation with semantic information from auxiliary knowledge, we replace the original document token embedding $H_{enc}^{0}$ with the enhanced embedding $e_{doc}^{enhanced}$, obtained through multi-head attention over the keyword and topic vectors. This attention-based mechanism integrates auxiliary knowledge into the document representation by computing interactions among the document, keyword, and topic embeddings [9]. Only the output at the first position, corresponding to the document, is retained as the knowledge-integrated vector. This approach preserves the original sequence structure and positional encodings while enriching the semantic representation of the document. As a result, the decoder receives a knowledge-informed context vector, improving both the semantic coherence and summary consistency of the model without disrupting the transformer’s architecture. The resulting encoder output $H_{enc}^{final}$ is then passed to the decoder as context, and the decoder uses this modified representation to generate abstractive summaries. By integrating auxiliary knowledge directly into the document representation, the decoder gains a richer semantic context, which leads to improved summarization performance. The decoder generates a Korean abstractive summary consisting of a minimum of 12 tokens and a maximum of 300 tokens.
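Replacing the first encoder position can be expressed in a few lines of PyTorch. The sketch below assumes encoder hidden states of shape (B, L, d) and is illustrative rather than the authors' code.

```python
import torch

def inject_knowledge(h_enc: torch.Tensor, e_doc_enhanced: torch.Tensor) -> torch.Tensor:
    """Replace the first token embedding of the encoder output with the
    knowledge-enhanced document vector; positions 1..L-1 are left unchanged."""
    # h_enc: (B, L, d), e_doc_enhanced: (B, d)
    h_final = h_enc.clone()
    h_final[:, 0, :] = e_doc_enhanced   # H_enc^final = [e_doc^enhanced ; H_enc^{1:}]
    return h_final                      # passed to the decoder as context

# Hypothetical shapes: batch of 2 documents, 512 tokens, hidden size 768.
h_enc = torch.randn(2, 512, 768)
e_doc_enhanced = torch.randn(2, 768)
h_final = inject_knowledge(h_enc, e_doc_enhanced)
```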

5. Results

5.1. Datasets

We use the News (Hugging Face) dataset and the Law (AI Hub) dataset, both of which are Korean abstractive summarization datasets, to evaluate the proposed method. Table 1 presents statistics of the datasets, including the average word count, document length, and sentence count for the original documents and reference summaries across the training, validation, and test sets. The datasets are split into training, validation, and test sets in a ratio of 8:1:1.
The News (Hugging Face) dataset was constructed by crawling IT- and economy-related news articles from Naver, which were collected between 1 July and 10 July 2022 [29]. This dataset consists of 17,380 training samples, 2482 validation samples, and 4967 test samples. On average, the original documents contain 400 words and 17 sentences, while the reference summaries consist of 70 words and 1 sentence.
The Law (AI Hub) dataset was constructed using civil, criminal, and other case rulings obtained via an open API for obtaining full-text judgments that was provided through the Public Data Portal. After removing duplicate documents, a total of 27,033 cases were used. This dataset includes 18,923 training samples, 2703 validation samples, and 5407 test samples. On average, the original documents contain 270 words and 4 sentences, and the reference summaries are written to be approximately one-third the length of the original documents.
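For reference, the news dataset can be loaded directly from the Hugging Face Hub using the identifier given in the Data Availability Statement; the split names below follow the published dataset card and are assumed to correspond to the sample counts reported above.

```python
from datasets import load_dataset

# Naver news summarization dataset (Korean), as cited in [29].
news = load_dataset("daekeun-ml/naver-news-summarization-ko")
for split_name, split in news.items():
    print(split_name, len(split))  # expected: train/validation/test sample counts
```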

5.2. Research Environment

This study utilizes Google Colaboratory, a cloud-based computational environment provided by Google. The system specifications include an Intel® Xeon® CPU with six cores and a clock speed of 2.2 GHz, an NVIDIA L4 GPU with 23 GB of memory, 54 GB of system memory, and 79 GB of disk storage. Additionally, the following key libraries were used: Python v3.10.12, Transformers v4.42.4, BERTScore v0.3.12, and SentenceTransformers v3.0.1. For the News dataset, the fine-tuning took approximately 1469.30 s per epoch and the inference required an average of 0.96 s per document, totaling 4756.82 s for the entire evaluation. In the Law dataset, the fine-tuning took approximately 1638.08 s per epoch and the inference required an average of 1.33 s per document, totaling 7176.64 s.

5.3. Automatic Evaluation

The generated summaries are evaluated using Korean Recall-Oriented Understudy for Gisting Evaluation (KoROUGE) [30], a representative metric designed to assess the overlap of Korean morphemes between a reference and generated summaries. Specifically, we use ROUGE-N and ROUGE-L. ROUGE-N evaluates the word overlap by calculating the n-gram match rate between the reference and generated summaries. ROUGE-1 measures the unigram overlap between the generated and reference summaries based on morphemes. ROUGE-2 evaluates bigram matches, reflecting the short-range syntactic structures in the document. ROUGE-L assesses the structural similarity by identifying the longest common subsequence (LCS) of morphemes between the two summaries, and thereby captures both the structure and word order preservation. These metrics provide insight into how well the generated summary retains the lexical and structural features of the reference at the morpheme level. In addition to surface-level metrics, we employ BERTScore to evaluate the semantic similarity between generated and reference summaries. Unlike traditional n-gram-based metrics such as ROUGE, which may penalize paraphrased or restructured sentences, BERTScore leverages contextual embeddings from pre-trained BERT models to compute the token-level cosine similarity between summaries. This enables a more robust evaluation of meaning preservation. Specifically, we adopt KoBERTScore, a variant tailored for Korean morpheme embeddings, to effectively assess the semantic consistency at the morpheme level. By combining token-level precision, recall, and F1 scores based on contextualized representations, BERTScore provides a meaning-aware evaluation that complements ROUGE and enhances the reliability of summarization performance assessment.
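Scores of this kind can be approximated with the public bert-score package; the sketch below uses a multilingual BERT backbone as a stand-in for the KoBERTScore variant, so the exact values would differ from those reported here.

```python
from bert_score import score

candidates = ["생성된 요약문 예시입니다."]  # model-generated summaries
references = ["참조 요약문 예시입니다."]    # reference summaries

# Multilingual BERT as an approximation of the Korean-specific KoBERTScore setup.
P, R, F1 = score(candidates, references, model_type="bert-base-multilingual-cased")
print(f"BERTScore F1: {F1.mean().item():.4f}")
```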

5.4. Baselines

To evaluate the performance of the proposed multi-knowledge-based summarization model, we compare it with several baseline models that either simplify the proposed architecture or include only partial components. The raw model uses only the original document as input, without incorporating any additional knowledge. In this experiment, we adopt a pre-trained KoBERT-based Korean ATS model, which serves as a reference point for assessing the effectiveness of knowledge enhancement. The M1 model replaces BERT-based context-aware knowledge extraction with traditional statistical methods such as the TF-IDF algorithm and LDA, while keeping the rest of the architecture identical to the proposed model. The M2 model is designed to verify the knowledge integration method; it includes both topic and keyword inputs but integrates them using a standard attention layer instead of multi-head attention. The M3 model evaluates the isolated effect of using a single type of knowledge, either a topic or keyword, while maintaining the use of multi-head attention. This allows us to assess the impact of multi-knowledge integration. Lastly, we include two LLMs, ChatGPT 4 and Gemma 2, as additional comparison targets. A detailed analysis of these models is provided in the ablation study section.

5.5. Quantitative Evaluation of the Proposed Method

Table 2 presents the performance of the proposed model and baseline models on the news dataset. The proposed model achieved the highest performance across all evaluation metrics, recording 50.92% in ROUGE-1, 42.44% in ROUGE-2, 46.26% in ROUGE-L, and 82.75% in BERTScore. These results demonstrate the effectiveness of context-aware multi-knowledge extraction and the integration mechanism based on the multi-head attention structure.
Compared to the raw model, which uses only the original document as input, the proposed model showed improvements of approximately 3% with ROUGE-1 and over 1.1% with BERTScore. This suggests that incorporating internal knowledge contributes not only to better summarization quality but also to improved lexical overlap and semantic consistency. Although M1 used knowledge extracted by statistical methods, its performance gain was minimal, indicating that such approaches fail to sufficiently capture contextual information. M2, which uses the same knowledge inputs but integrates them via a simple attention layer, slightly outperformed both the raw model and M1. This highlights that how knowledge is integrated is as important as what knowledge is integrated. M3, which uses only a single type of knowledge (either topic or keyword), showed better performance than M1 and M2 but still fell short of the proposed model. Notably, the performance gap between using only topics versus only keywords was negligible, suggesting a complementary effect when both are used together. Overall, the proposed model demonstrated strong abstractive summarization performance in terms of both content richness and semantic consistency by effectively integrating multiple internal knowledge sources through a well-structured attention mechanism.
To assess the generalizability of the proposed model, we conducted additional experiments on the law dataset, a domain-specific dataset composed of legal documents. Table 3 summarizes the results in comparison with the raw model and the M3 baseline (single-knowledge input), along with the performance changes under different numbers of attention heads.
The proposed model with a four-head attention structure achieved the best performance, obtaining 43.36% with ROUGE-1, 29.38% with ROUGE-2, 41.14% with ROUGE-L, and 82.67% with BERTScore. These results confirm that the proposed model can operate effectively even in formal and low-resource domains such as legal documents. However, increasing the number of attention heads to eight resulted in an overall performance drop. For instance, the performance of BERTScore decreased by over 1%, and that of ROUGE-L dropped by more than 2%. This implies that excessively dispersed attention may hinder the model’s ability to focus on important knowledge or critical parts of the document, especially in domains characterized by long sentences and complex structures. On the other hand, the M3 baseline, which uses only a single type of knowledge (either topic or keyword), showed slightly better performance than the raw model but did not produce significant improvements. These findings further emphasize the necessity of integrating multiple knowledge sources. In conclusion, the legal domain experiment demonstrates that the proposed model is not limited to news summarization and can be effectively adapted to low-resource, domain-specific settings when properly configured.
As shown in Table 3, the attention configuration with four heads outperformed the variant with eight heads. This suggests that, in MHA, using an excessive number of heads can lead to redundancy, as multiple heads may converge to similar attention distributions. Prior research [31] also reported that increasing the number of attention heads improved the performance up to eight heads, but that further increases degraded the performance due to overlapping focus and noise amplification. In our Korean ATS task, the four-head configuration appears to better balance diversity and focus in the attention mechanism. Furthermore, according to Hrycej et al. [32], when the dataset size is limited, excessive parameter counts can harm the model’s generalization performance. This supports the notion that the number of heads should be adjusted in proportion to the data size, aligning with the effectiveness of our chosen configuration.
Table 4 presents a comparative evaluation of the proposed model against prior research and state-of-the-art pretrained models for Korean abstractive summarization. Shin [14] proposed a multi-encoder transformer-based summarization model, achieving a value of 77.04% with BERTScore (F1) on the same legal dataset used in our study. Among recent models, “alaggung/bart-r3f”, a BART-based model pretrained specifically for Korean summarization, achieved a value of 78.62% with BERTScore (F1) for legal documents and 76.97% BERTScore (F1) for news articles. In comparison, our proposed model achieved a value of 81.06% BERTScore (F1) on legal documents and 79.67% BERTScore (F1) on news documents, outperforming the existing models by approximately 4% and 2.7%, respectively. These results demonstrate the superior performance of our model in both domains, particularly in legal document summarization, where capturing nuanced domain-specific information is critical.

5.6. Ablation Study

5.6.1. Analysis of Extracted Knowledge on the News Dataset

The internal knowledge extracted using the proposed method yielded significantly more diverse information than that obtained using statistical methods, with 317 topics and 15,829 keywords being extracted. Table 5 provides examples of the extracted internal knowledge, showing that the topics consisted of an average of 2.24 characters, while the keywords averaged 9.96 characters. Additionally, 40% of the topics and 96% of the keywords were composed of morphemes that were not present in the original document.
Table 6 presents the inclusion of extracted knowledge in the generated summaries. When using the knowledge-enhanced method with topics only, the generated summaries included the extracted topics but no new knowledge appeared, in contrast to the summaries generated using only the original document. This trend was also observed in the proposed method. By contrast, the knowledge-enhanced method using only keywords showed that the number of extracted keywords included in the generated summaries was higher than when using only the original document. In some cases, only parts of the keywords composed of up to three morphemes were included. Notably, the proposed method tended to incorporate more internal knowledge into the generated summaries than methods using only topics or only keywords.
Table 7 presents a consistency analysis of the generated summaries based on different knowledge-enhancement methods, focusing on information related to people, organizations, dates/times, places, quantities, and other categories. The people-related information includes names, characteristics, and traits of individuals, as well as information about their pets. The place-related information covers locations such as countries, cities, and oceans. The analysis was conducted using a named entity recognition model trained on news data [33]. The summaries generated using only the original document retained the most information about people and places. The proposed method outperformed the single-knowledge-based approach in maintaining consistency for various types of information, including organizational categories (e.g., economy and education), temporal details (e.g., dates and times), quantitative values (e.g., age, size, ratio, and price), and other miscellaneous categories not related to people.

5.6.2. Comparison with LLM

To evaluate the proposed method against LLMs, which have recently shown strong performance in natural language processing tasks, we conducted experiments using Gemma 2-2b and ChatGPT-4. For Gemma 2-2b, we applied prompt tuning using the training data, which we reformulated in the style of Alpaca-style instructions, and used the same prompt format during testing. For ChatGPT-4, we tested two types of prompts on 10 randomly selected samples: one consisting of only the original document with the instruction “Summarize”, and another combining the same instruction with the original document and the extracted knowledge. Figure 4 shows examples of the prompts used for each LLM method.
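For illustration, a knowledge-augmented Alpaca-style prompt of the kind used for Gemma 2-2b might look like the sketch below; the exact wording is hypothetical, since the original prompt text appears only in Figure 4.

```python
# Hypothetical Alpaca-style template for the knowledge-enhanced (KE) prompt setting.
PROMPT_TEMPLATE = """### Instruction:
Summarize the following Korean document.

### Input:
Document: {document}
Keywords: {keywords}
Topics: {topics}

### Response:
"""

prompt = PROMPT_TEMPLATE.format(
    document="추경호 부총리는 3일 물가 대응 방안을 밝혔다...",
    keywords="기획재정부 장관 3일, 추경호 부총리 기획재정부",
    topics="했다, 기업",
)
```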
Table 8 presents the results of the comparison between the proposed method and Gemma 2-2b, showing that the proposed method achieved higher performance. Table 9 shows the results obtained for ChatGPT-4 for 10 randomly selected samples using prompts with only the original document (raw), prompts with extracted knowledge (KE), and the proposed method. The proposed method outperformed the other prompt-based settings, demonstrating its effectiveness.

6. Conclusions

Korean abstractive summarization presents unique challenges that distinguish it from English ATS. Unlike English, which uses space-delimited words with relatively rigid syntactic rules, Korean is an agglutinative language with a flexible word order and extensive use of particles to mark grammatical roles. As a result, word-level tokenization is less effective in Korean, making morpheme-level processing essential to accurately capture the semantic structure. Furthermore, the scarcity of high-quality pretraining corpora in Korean exacerbates the difficulty of learning robust contextual representations. This study proposes a Korean ATS method that uses a multi-knowledge-enhanced model. Knowledge enhancement offers advantages in improving the performance of ATS and ensuring the consistency of the summarized text. In particular, extracting internal knowledge while considering the context has the potential to achieve better performance in knowledge integration. The proposed method utilizes a BERT-based model pre-trained for Korean that considers the context of documents and extracts knowledge at the morpheme level, the smallest semantic unit in the Korean language. This approach enables the extraction of internal knowledge specialized for Korean, contributing to performance improvements in the Korean ATS model. The model demonstrates enhanced word overlap, structural consistency, and semantic coherence when incorporating single types of knowledge, compared to using only the original document. Furthermore, integrating multiple types of knowledge simultaneously outperforms using a single type of knowledge on both the news and law datasets.
The proposed model not only demonstrates enhanced summarization quality through multi-knowledge integration but also shows strong potential for real-world deployment in AI-based embedded systems and edge natural language processing modules. Since the model relies exclusively on internal knowledge extracted from the input document and does not require external resources or access to external APIs, it is well suited to low-power environments. Its architecture can be quantized and pruned for deployment on memory- and computation-constrained platforms such as mobile summarization apps or offline document-processing edge devices. Moreover, the efficient multi-head attention mechanism used for knowledge integration supports low-latency inference, making the model applicable to real-time summarization platforms where an immediate response is required, such as news aggregation services or live event summarization. Given the growing importance of privacy preservation, especially for sensitive domains such as legal or healthcare documents, the proposed model provides a practical solution by enabling meaningful summarization without transmitting raw data to external servers. In future work, we will explore methods to enhance the model’s summarization performance by integrating various types of internal and external knowledge, including discourse structures, entity graphs, and domain-specific information. Inspired by hybrid prompt-based knowledge integration research [34], we also plan to investigate a prompt-based approach that extracts and combines knowledge from multiple PLMs.

Author Contributions

Conceptualization, K.O.; Methodology, K.O.; Software, K.O.; Validation, K.O.; Formal analysis, K.O.; Investigation, K.O.; Resources, K.O.; Data curation, K.O.; Writing—original draft preparation, K.O.; Writing—review and editing, Y.L. and H.W.; Visualization, K.O.; Supervision, Y.L. and H.W.; Project administration, Y.L. and H.W.; Funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2024-00350688).

Data Availability Statement

The datasets used in this study are publicly available. The Naver News Summarization Dataset (Korean) is available at https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko (accessed on 6 January 2025), and the Document Summary Text dataset is available at https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=97 (accessed on 6 January 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RNN: Recurrent Neural Network
PLMs: Pre-trained Language Models
NLG: Natural Language Generation
LDA: Latent Dirichlet Allocation
ATS: Abstractive Text Summarization
BERT: Bidirectional Encoder Representations from Transformer
GPT: Generative Pre-trained Transformer
TF-IDF: Term Frequency-Inverse Document Frequency
LLMs: Large Language Models
RoBERTa: Robustly Optimized BERT Approach
UMAP: Uniform Manifold Approximation and Projection
HDBSCAN: Hierarchical Density-Based Clustering
Okt: Open Korean Text
KoBERTSum: Korean BERT Summarization
MHA: Multi-Head Attention
KoROUGE: Korean Recall-Oriented Understudy for Gisting Evaluation

References

  1. Sharma, G.; Sharma, D. Automatic text summarization methods: A comprehensive review. SN Comput. Sci. 2022, 4, 33. [Google Scholar] [CrossRef]
  2. El-Kassas, W.S.; Salama, C.R.; Rafea, A.A.; Mohamed, H.K. Automatic text summarization: A comprehensive survey. Expert Syst. Appl. 2021, 165, 113679. [Google Scholar] [CrossRef]
  3. Widyassari, A.P.; Rustad, S.; Shidik, G.F.; Noersasongko, E.; Syukur, A.; Affandy, A.; Setiadi, D.R.I.M. Review of automatic text summarization techniques & methods. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 1029–1046. [Google Scholar]
  4. Shakil, H.; Farooq, A.; Kalita, J. Abstractive text summarization: State of the art, challenges, and improvements. Neurocomputing 2024, 603, 128255. [Google Scholar] [CrossRef]
  5. Feng, X.; Feng, X.; Qin, L.; Qin, B.; Liu, T. Language model as an annotator: Exploring DialoGPT for dialogue summarization. arXiv 2021, arXiv:2105.12544. [Google Scholar]
  6. Chintagunta, B.; Katariya, N.; Amatriain, X.; Kannan, A. Medically aware GPT-3 as a data generator for medical dialogue summarization. In Proceedings of the Machine Learning for Healthcare Conference, Online, 6–7 August 2021; pp. 354–372. [Google Scholar]
  7. Bani-Almarjeh, M.; Kurdy, M.-B. Arabic abstractive text summarization using RNN-based and transformer-based architectures. Inf. Process. Manag. 2023, 60, 103227. [Google Scholar] [CrossRef]
  8. Hájek, A.; Horák, A. Czegpt-2–training new model for czech generative text processing evaluated with the summarization task. IEEE Access 2024, 12, 34570–34581. [Google Scholar] [CrossRef]
  9. Yu, W.; Zhu, C.; Li, Z.; Hu, Z.; Wang, Q.; Ji, H.; Jiang, M. A survey of knowledge-enhanced text generation. ACM Comput. Surv. 2022, 54, 1–38. [Google Scholar] [CrossRef]
  10. Narayan, S.; Cohen, S.B.; Lapata, M. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. arXiv 2018, arXiv:1808.08745. [Google Scholar]
  11. Fu, X.; Wang, J.; Zhang, J.; Wei, J.; Yang, Z. Document summarization with vhtm: Variational hierarchical topic-aware mechanism. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 7740–7747. [Google Scholar]
  12. Lee, S.; Park, C.; Jung, D.; Moon, H.; Seo, J.; Eo, S.; Lim, H.-S. Leveraging Pre-existing Resources for Data-Efficient Counter-Narrative Generation in Korean. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 20–25 May 2024; pp. 10380–10392. [Google Scholar]
  13. Raza, A.; Soomro, M.H.; Shahzad, I.; Batool, S. Abstractive Text Summarization for Urdu Language. J. Comput. Biomed. Inform. 2024, 7, 1–19. [Google Scholar]
  14. Shin, Y. Multi-encoder transformer for Korean abstractive text summarization. IEEE Access 2023, 11, 48768–48782. [Google Scholar] [CrossRef]
  15. Zhong, Q.; Ding, L.; Liu, J.; Du, B.; Tao, D. Can chatgpt understand too? A comparative study on chatgpt and fine-tuned bert. arXiv 2023, arXiv:2302.10198. [Google Scholar]
  16. Raiaan, M.A.K.; Mukta, M.S.H.; Fatema, K.; Fahad, N.M.; Sakib, S.; Mim, M.M.J.; Ahmad, J.; Ali, M.E.; Azam, S. A review on large language models: Architectures, applications, taxonomies, open issues and challenges. IEEE Access 2024, 12, 26839–26874. [Google Scholar] [CrossRef]
  17. Ali, M.; Panda, S.; Shen, Q.; Wick, M.; Kobren, A. Understanding the interplay of scale, data, and bias in language models: A case study with bert. arXiv 2024, arXiv:2407.21058. [Google Scholar]
  18. Jiang, M.; Liu, K.Z.; Zhong, M.; Schaeffer, R.; Ouyang, S.; Han, J.; Koyejo, S. Investigating data contamination for pre-training language models. arXiv 2024, arXiv:2401.06059. [Google Scholar]
  19. Li, C.; Xu, W.; Li, S.; Gao, S. Guiding generation for abstractive text summarization based on key information guide network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, LA, USA, 1–6 June 2018; Volume 2, pp. 55–60. [Google Scholar]
  20. Hu, L.; Liu, Z.; Zhao, Z.; Hou, L.; Nie, L.; Li, J. A survey of knowledge enhanced pre-trained language models. IEEE Trans. Knowl. Data Eng. 2023, 36, 1413–1430. [Google Scholar] [CrossRef]
  21. Yang, J.; Hu, X.; Xiao, G.; Shen, Y. A survey of knowledge enhanced pre-trained language models. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2024. [Google Scholar] [CrossRef]
  22. Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv 2022, arXiv:2203.05794. [Google Scholar]
  23. Kim, S.-E.; Lee, J.-B.; Park, G.-M.; Sohn, S.-M.; Park, S.-B. RoBERTa-Based Keyword Extraction from Small Number of Korean Documents. Electronics 2023, 12, 4560. [Google Scholar] [CrossRef]
  24. Wang, S.; Ma, H.; Zhang, Y.; Ma, J.; He, L. Enhancing Abstractive Dialogue Summarization with Internal Knowledge. In Proceedings of the 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 30 June–5 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–8. [Google Scholar]
  25. Niu, Z.; Zhong, G.; Yu, H. A review on the attention mechanism of deep learning. Neurocomputing 2021, 452, 48–62. [Google Scholar] [CrossRef]
  26. Li, J.; Wang, X.; Tu, Z.; Lyu, M.R. On the diversity of multi-head attention. Neurocomputing 2021, 454, 14–24. [Google Scholar] [CrossRef]
  27. Baek, D.; Kim, J.; Lee, H. VATMAN: Integrating Video-Audio-Text for Multimodal Abstractive SummarizatioN via Crossmodal Multi-Head Attention Fusion. IEEE Access 2024, 12, 119174–119184. [Google Scholar] [CrossRef]
  28. Spikeekips. Stopwords-ko.txt. 2016. Available online: https://gist.github.com/spikeekips/40eea22ef4a89f629abd87eed535ac6a#file-stopwords-ko-txt (accessed on 26 March 2025).
  29. Kim, D. Naver News Summarization Dataset (Korean). 2024. Available online: https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko (accessed on 6 January 2025).
  30. Kim, H. Korouge. 2023. Available online: https://github.com/HeegyuKim/korouge (accessed on 26 March 2025).
  31. Jiang, S.; Suriawinata, A.A.; Hassanpour, S. MHAttnSurv: Multi-head attention for survival prediction using whole-slide pathology images. Comput. Biol. Med. 2023, 158, 106883. [Google Scholar] [CrossRef]
  32. Hrycej, T.; Bermeitinger, B.; Handschuh, S. Number of Attention Heads vs. Number of Transformer-Encoders in Computer Vision. arXiv 2022, arXiv:2209.07221. [Google Scholar]
  33. Yeajinmin. NER-NewsBI-150142-e3b4 2025. Available online: https://huggingface.co/yeajinmin/NER-NewsBI-150142-e3b4 (accessed on 26 March 2025).
  34. Bai, J.; Yan, Z.; Zhang, S.; Yang, J.; Guo, H.; Li, Z. Infusing internalized knowledge of language models into hybrid prompts for knowledgeable dialogue generation. Knowl. Based Syst. 2024, 296, 111874. [Google Scholar] [CrossRef]
Figure 1. Multi-knowledge-enhanced model.
Figure 2. Modified BERTopic model for Korean topic extraction.
Figure 3. Integration of multi-knowledge with the Korean ATS model.
Figure 4. Prompt examples for large language models.
Table 1. News and law dataset statistics.

Dataset | Split | Original Document (Word / Sentence / Length) | Reference Summary (Word / Sentence / Length)
News | Train | 421.32 / 17.33 / 1045.77 | 2.24 / 1.35 / 181.87
News | Validation | 452.46 / 18.64 / 1122.57 | 2.18 / 1.32 / 180.75
News | Test | 431.91 / 17.86 / 1071.17 | 1.79 / 1.42 / 180.1
Law | Train | 297.85 / 4.66 / 659.43 | 88.56 / 2.1 / 201.75
Law | Validation | 278.19 / 5.14 / 615.29 | 91.02 / 2.08 / 207.17
Law | Test | 276.58 / 4.1 / 612.16 | 88.09 / 2.1 / 200.79
Table 2. News dataset summarization results.

Method | KE | Description | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore
Raw | | Only original document | 0.473789 | 0.390143 | 0.432548 | 0.815607
M1 | Topic | Context-agnostic TF-IDF/LDA knowledge | 0.488053 | 0.401502 | 0.442997 | 0.820140
M1 | Keyword | | 0.487946 | 0.401369 | 0.442775 | 0.820177
M1 | Topic and Keyword | | 0.488094 | 0.401496 | 0.442955 | 0.820160
M2 | Topic | Contextual knowledge + standard attention | 0.488743 | 0.402308 | 0.443648 | 0.820815
M2 | Keyword | | 0.488743 | 0.402308 | 0.443648 | 0.820815
M2 | Topic and Keyword | | 0.488711 | 0.402271 | 0.443607 | 0.820813
M3 | Topic | Contextual knowledge + multi-head attention (single type) | 0.488016 | 0.401413 | 0.442902 | 0.82047
M3 | Keyword | | 0.488109 | 0.401566 | 0.443011 | 0.820453
Ours | | Contextual knowledge + multi-head attention (multi-type) | 0.509174 | 0.42441 | 0.462577 | 0.827523
Table 3. Law dataset summarization results.

KE | Heads | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore
Topic | 4 | 0.411584 | 0.293716 | 0.387074 | 0.816412
Topic | 8 | 0.411502 | 0.29363 | 0.387042 | 0.816463
Keyword | 4 | 0.410621 | 0.292594 | 0.386091 | 0.816252
Keyword | 8 | 0.411374 | 0.293458 | 0.386896 | 0.816372
Ours | 4 | 0.433614 | 0.293803 | 0.411424 | 0.826701
Ours | 8 | 0.411638 | 0.293729 | 0.387123 | 0.816508
Table 4. Performance comparison between the proposed model and prior methods.

Data | Model | BERTScore Precision | BERTScore Recall | BERTScore F1
Law | Shin [14] | 78.25 | 79.08 | 78.61
Law | T5 | 73.74 | 80.77 | 77.04
Law | BART | 80.78 | 77.35 | 78.62
Law | Ours | 81.68 | 83.82 | 82.67
News | T5 | 83.79 | 76.75 | 80.04
News | BART | 72.15 | 82.61 | 76.97
News | Ours | 82.36 | 83.26 | 82.75
Table 5. Examples of knowledge extracted by extraction method.

Knowledge | Method | Extracted Knowledge | Romanized
Topic | LDA | ., 는, 은, 한, 수 | ., neun, eun, han, su
Topic | BERTopic | 했다, 하는, 기업, 하고, 이다 | haessda, haneun, gieob, hago, ida
Keyword | TextRank | 장관, 서울, 대응, 대외, 가격 | jang-gwan, seoul, daeeung, daeoe, gagyeog
Keyword | KeyBERT | 기획재정부 장관 3일, 추경호 부총리 기획재정부, 하고 부총리 물류, 하겠다고 밝혔다, 3일 | gihoegjaejeongbu jang-gwan 3il, chugyeongho buchongli gihoegjaejeongbu, hago buchongli mullyu, hagessdago balghyeossda, 3il
Table 6. Analysis of the impact of knowledge on summary generation.

KE | Type | Refer Summary | Gen Summary | After KE
Keyword | Whole | 5192 | 2938 | 7
Keyword | Part | 11,867 | 9800 | 178
Topic | | 279 | 435 | 0
Ours | Keyword (Whole) | 5483 | 3262 | 24
Ours | Keyword (Part) | 12,188 | 10,117 | 586
Ours | Topic | 278 | 442 | 0
Table 7. Information consistency of knowledge enhancement method.

Method | Person | Organization | Date/Time | Location | Quantity | Others
Topic | 185 | 452 | 774 | 346 | 795 | 2552
Keyword | 186 | 452 | 775 | 346 | 792 | 2551
Ours | 149 | 487 | 833 | 351 | 903 | 2723
Table 8. Comparison of summarization results for Gemma 2.

Method | Dataset | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore
Gemma 2 | News | 0.075616 | 0.04518 | 0.071917 | 0.576962
Gemma 2 | Law | 0.068125 | 0.035259 | 0.064892 | 0.606618
Ours | News | 0.509174 | 0.42441 | 0.462577 | 0.827523
Ours | Law | 0.433614 | 0.293803 | 0.411424 | 0.826701
Table 9. Comparison of summarization results for ChatGPT 4.

Method | Dataset (Prompt) | ROUGE-1 | ROUGE-2 | ROUGE-L | BERTScore
ChatGPT 4 | News (Raw) | 0.272996 | 0.123478 | 0.216829 | 0.761404
ChatGPT 4 | News (KE) | 0.270483 | 0.129607 | 0.230195 | 0.766662
ChatGPT 4 | Law (Raw) | 0.301239 | 0.154458 | 0.290327 | 0.79321
ChatGPT 4 | Law (KE) | 0.294528 | 0.133702 | 0.270598 | 0.791059
Ours | News | 0.50673 | 0.410506 | 0.46537 | 0.831302
Ours | Law | 0.362707 | 0.274607 | 0.357966 | 0.787268
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Oh, K.; Lee, Y.; Woo, H. Multi-Knowledge-Enhanced Model for Korean Abstractive Text Summarization. Electronics 2025, 14, 1813. https://doi.org/10.3390/electronics14091813
