Article

Can We Survive without Labelled Data in NLP? Transfer Learning for Open Information Extraction

by Injy Sarhan 1,2,* and Marco Spruit 2
1 Department of Computer Engineering, Arab Academy for Science, Technology and Maritime Transport (AAST), Alexandria 21500, Egypt
2 Department of Information and Computing Sciences, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(17), 5758; https://doi.org/10.3390/app10175758
Received: 29 July 2020 / Revised: 14 August 2020 / Accepted: 18 August 2020 / Published: 20 August 2020
Abstract: Various tasks in natural language processing (NLP) suffer from a lack of the labelled training data that deep neural networks are hungry for. In this paper, we relied upon the features learned to generate relation triples in the open information extraction (OIE) task. First, we studied how transferable these features are from one OIE domain to another, such as from a news domain to a bio-medical domain. Second, we analyzed their transferability to a semantically related NLP task, namely, relation extraction (RE). We thereby contribute to answering the question: can OIE help us achieve adequate NLP performance without labelled data? In both experiments, inductive transfer learning that relied on only a very small amount of target data achieved promising results, comparable to traditional learning. When transferring to the OIE bio-medical domain, we achieved an F-measure of 78.0%, only 1% lower than with traditional learning. Additionally, transferring to RE using an inductive approach scored an F-measure of 67.2%, which was 3.8% lower than training and testing on the same task. Our analysis hereby shows that OIE can act as a reliable source task.
Keywords: transfer learning; open information extraction; relation extraction; recurrent neural networks; word embeddings
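The inductive transfer setting described in the abstract, pre-training on a data-rich source task and then fine-tuning on a very small amount of target data, can be illustrated with a minimal sketch. This is not the authors' actual RNN architecture; it is a hypothetical two-layer NumPy model in which the source-trained representation layer (`W1`) is frozen and only the output layer (`W2`) is fine-tuned on a handful of target examples. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, W2):
    # Two-layer network: representation layer W1, output layer W2.
    return relu(x @ W1) @ W2

def train(x, y, W1, W2, lr=0.05, steps=300, freeze_W1=False):
    # Plain gradient descent on mean-squared error.
    # With freeze_W1=True only the output layer is updated,
    # mimicking fine-tuning on top of frozen source features.
    for _ in range(steps):
        h = relu(x @ W1)
        err = h @ W2 - y
        if not freeze_W1:
            grad_h = (err @ W2.T) * (h > 0)
            W1 = W1 - lr * (x.T @ grad_h) / len(x)
        W2 = W2 - lr * (h.T @ err) / len(x)
    return W1, W2

# Source task: plentiful labelled data.
Xs = rng.normal(size=(500, 8))
true_W = rng.normal(size=(8, 1))
ys = Xs @ true_W

W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
W1, W2 = train(Xs, ys, W1, W2)

# Target task: only 20 labelled examples from a shifted distribution.
Xt = rng.normal(size=(20, 8))
yt = Xt @ true_W + 0.5

mse_before = float(np.mean((forward(Xt, W1, W2) - yt) ** 2))
# Inductive transfer: freeze the source-trained representation (W1)
# and fine-tune only the output layer (W2) on the small target set.
W1, W2 = train(Xt, yt, W1, W2, freeze_W1=True)
mse_after = float(np.mean((forward(Xt, W1, W2) - yt) ** 2))
```

The point of the sketch is the division of labour: the expensive representation is learned once on the source task, while the small target set only has to adapt the final layer, which is why comparable performance is attainable with very little target data.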
MDPI and ACS Style

Sarhan, I.; Spruit, M. Can We Survive without Labelled Data in NLP? Transfer Learning for Open Information Extraction. Appl. Sci. 2020, 10, 5758. https://doi.org/10.3390/app10175758

AMA Style

Sarhan I, Spruit M. Can We Survive without Labelled Data in NLP? Transfer Learning for Open Information Extraction. Applied Sciences. 2020; 10(17):5758. https://doi.org/10.3390/app10175758

Chicago/Turabian Style

Sarhan, Injy, and Marco Spruit. 2020. "Can We Survive without Labelled Data in NLP? Transfer Learning for Open Information Extraction" Applied Sciences 10, no. 17: 5758. https://doi.org/10.3390/app10175758
