Open Access Article
Information 2018, 9(9), 217; https://doi.org/10.3390/info9090217

Skeleton to Abstraction: An Attentive Information Extraction Schema for Enhancing the Saliency of Text Summarization

X. Xiang 1,2,3, G. Xu 1,2,3,*, X. Fu 1,2,3, Y. Wei 2,3, L. Jin 2,3 and L. Wang 2,3
1 University of Chinese Academy of Sciences, No. 19 (A), Yuquan Road, Shijingshan District, Beijing 100049, China
2 Institute of Electronics, Chinese Academy of Sciences, No. 19, North Fourth Ring West Road, Haidian District, Beijing 100190, China
3 Key Laboratory of Spatial Information Processing and Applied System Technology, Chinese Academy of Sciences, No. 19, North Fourth Ring West Road, Haidian District, Beijing 100190, China
* Author to whom correspondence should be addressed.
Received: 14 July 2018 / Revised: 19 August 2018 / Accepted: 28 August 2018 / Published: 29 August 2018
(This article belongs to the Section Artificial Intelligence)

Abstract

Current popular abstractive summarization models are based on an attentional encoder-decoder framework, in which the decoder generates the summary from the full text. As a result, the decoder is often distracted by irrelevant information, and the generated summaries suffer from low saliency. Moreover, we observed how people write summaries and found that they base a summary on the necessary information rather than on the full text. Thus, to enhance the saliency of abstractive summarization, we propose an attentive information extraction model. It consists of a multi-layer perceptron (MLP) gated unit that pays more attention to the important information in the source text, and a similarity module that encourages high similarity between the reference summary and the extracted important information. Before the summary decoder, the MLP and the similarity module work together to extract the important information for the decoder, thus obtaining the skeleton of the source text. This effectively reduces the interference of irrelevant information on the decoder, thereby improving the saliency of the summary. Our proposed model was tested on the CNN/Daily Mail and DUC-2004 datasets and achieved a 42.01 ROUGE-1 F-score and a 33.94 ROUGE-1 recall, respectively, outperforming the state-of-the-art abstractive models on the same datasets. In addition, subjective human evaluation showed that the saliency of the generated summaries was further enhanced.
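The paper's implementation is not reproduced here. As a rough illustration of the idea the abstract describes, the sketch below uses numpy to score each encoder state with a small sigmoid-gated MLP (a stand-in for the "MLP gated unit") and to compute a cosine similarity between the gated "skeleton" representation and a reference-summary vector (the quantity a similarity module would encourage to be high). All names, dimensions, and the pooling choice are hypothetical, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_gate(h, W1, b1, w2, b2):
    """Score each encoder state h_t with a two-layer MLP and a sigmoid,
    producing a per-token importance gate in (0, 1)."""
    z = np.tanh(h @ W1 + b1)          # (T, d_hidden)
    s = z @ w2 + b2                   # (T,) raw importance scores
    return 1.0 / (1.0 + np.exp(-s))   # sigmoid gate per token

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy dimensions: T source tokens, d-dimensional encoder states.
T, d, d_hidden = 6, 8, 4
h = rng.standard_normal((T, d))           # hypothetical encoder hidden states
W1 = rng.standard_normal((d, d_hidden))
b1 = np.zeros(d_hidden)
w2 = rng.standard_normal(d_hidden)
b2 = 0.0

g = mlp_gate(h, W1, b1, w2, b2)           # per-token importance gates
skeleton = (g[:, None] * h).mean(axis=0)  # gated "skeleton" representation

ref = rng.standard_normal(d)              # stand-in for a reference-summary vector
sim = cosine(skeleton, ref)               # term a similarity loss would push up
print(g.shape, skeleton.shape, round(sim, 3))
```

In training, a loss term based on `sim` would push the gated representation toward the reference summary, so the gates learn to keep the salient tokens and suppress the rest before decoding.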
Keywords: recurrent neural network (RNN); abstractive text summarization; information extraction; attention mechanism; semantic relevance; saliency of summarization
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Xiang, X.; Xu, G.; Fu, X.; Wei, Y.; Jin, L.; Wang, L. Skeleton to Abstraction: An Attentive Information Extraction Schema for Enhancing the Saliency of Text Summarization. Information 2018, 9, 217.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Information EISSN 2078-2489, Published by MDPI AG, Basel, Switzerland