Search Results (7)

Search Parameters:
Keywords = distantly supervised relation extraction

20 pages, 1902 KB  
Article
Distantly Supervised Relation Extraction Method Based on Multi-Level Hierarchical Attention
by Zhaoxin Xuan, Hejing Zhao, Xin Li and Ziqi Chen
Information 2025, 16(5), 364; https://doi.org/10.3390/info16050364 - 29 Apr 2025
Cited by 1 | Viewed by 987
Abstract
Distantly Supervised Relation Extraction (DSRE) aims to automatically identify semantic relationships within large text corpora by aligning with external knowledge bases. Despite the success of current methods in automating data annotation, they introduce two main challenges: label noise and data long-tail distribution. Label noise results in inaccurate annotations, which can undermine the quality of relation extraction. The long-tail problem, on the other hand, leads to an imbalanced model that struggles to extract less frequent, long-tail relations. In this paper, we introduce a novel relation extraction framework based on multi-level hierarchical attention. This approach utilizes Graph Attention Networks (GATs) to model the hierarchical structure of the relations, capturing the semantic dependencies between relation types and generating relation embeddings that reflect the overall hierarchical framework. To improve the classification process, we incorporate a multi-level classification structure guided by hierarchical attention, which enhances the accuracy of both head and tail relation extraction. A local probability constraint is introduced to ensure coherence across the classification levels, fostering knowledge transfer from frequent to less frequent relations. Experimental evaluations on the New York Times (NYT) dataset demonstrate that our method outperforms existing baselines, particularly in the context of long-tail relation extraction, offering a comprehensive solution to the challenges of DSRE.
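The core operation the abstract above attributes to GATs — propagating information over the relation hierarchy with attention-weighted neighbor averaging — can be sketched in a few lines. This is a minimal single-head illustration in pure Python; the toy relation names, feature vectors, and dot-product scoring function are illustrative assumptions, not the paper's actual parameterization.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gat_aggregate(node_feats, neighbors, attn_score):
    """One graph-attention step: each node's new feature is an
    attention-weighted average of its neighbors' features."""
    new_feats = {}
    for node, nbrs in neighbors.items():
        scores = [attn_score(node_feats[node], node_feats[n]) for n in nbrs]
        alphas = softmax(scores)
        dim = len(node_feats[node])
        new_feats[node] = [
            sum(a * node_feats[n][d] for a, n in zip(alphas, nbrs))
            for d in range(dim)
        ]
    return new_feats

# Toy relation hierarchy: a parent relation "r" and two children.
feats = {"r": [1.0, 0.0], "r/a": [0.0, 1.0], "r/b": [1.0, 1.0]}
nbrs = {"r": ["r/a", "r/b"], "r/a": ["r"], "r/b": ["r"]}
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
out = gat_aggregate(feats, nbrs, dot)
```

In a real GAT the attention score is a learned function of both endpoint features; the fixed dot product here only stands in for it.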

22 pages, 749 KB  
Article
Improving Distantly Supervised Relation Extraction with Multi-Level Noise Reduction
by Wei Song and Zijiang Yang
AI 2024, 5(3), 1709-1730; https://doi.org/10.3390/ai5030084 - 23 Sep 2024
Cited by 3 | Viewed by 1803
Abstract
Background: Distantly supervised relation extraction (DSRE) aims to identify semantic relations in large-scale texts automatically labeled via knowledge base alignment. It has garnered significant attention due to its high efficiency, but existing methods are plagued by noise at both the word and sentence level and fail to address these issues adequately. The former level of noise arises from the large proportion of irrelevant words within sentences, while noise at the latter level is caused by inaccurate relation labels for various sentences. Method: We propose a novel multi-level noise reduction neural network (MLNRNN) to tackle both issues by mitigating the impact of multi-level noise. We first build an iterative keyword semantic aggregator (IKSA) to remove noisy words, and capture distinctive features of sentences by aggregating the information of keywords. Next, we implement multi-objective multi-instance learning (MOMIL) to reduce the impact of incorrect labels in sentences by identifying the cluster of correctly labeled instances. Meanwhile, we leverage mislabeled sentences with cross-level contrastive learning (CCL) to further enhance the classification capability of the extractor. Results: Comprehensive experimental results on two DSRE benchmark datasets demonstrated that the MLNRNN outperformed state-of-the-art methods for distantly supervised relation extraction in almost all cases. Conclusions: The proposed MLNRNN effectively addresses both word- and sentence-level noise, providing a significant improvement in relation extraction performance under distant supervision.
(This article belongs to the Section AI Systems: Theory and Applications)
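The sentence-level denoising idea behind multi-instance learning — under the at-least-one assumption, the highest-scoring instances in a bag are treated as correctly labeled and the rest as suspect — can be sketched simply. The keep-ratio heuristic and the score values below are illustrative assumptions, not the MOMIL procedure itself.

```python
def select_instances(instance_scores, keep_ratio=0.5):
    """Multi-instance denoising sketch: keep the instances that score
    highest for the bag's relation label and treat the remainder as
    potentially mislabeled."""
    ranked = sorted(range(len(instance_scores)),
                    key=lambda i: instance_scores[i], reverse=True)
    k = max(1, int(len(instance_scores) * keep_ratio))
    kept = set(ranked[:k])
    noisy = set(range(len(instance_scores))) - kept
    return kept, noisy

# Scores of four sentences in one bag for the bag-level relation label
# (illustrative numbers).
scores = [0.91, 0.12, 0.77, 0.08]
clean, noisy = select_instances(scores, keep_ratio=0.5)
```

The abstract's CCL component goes a step further by reusing the `noisy` set as contrastive signal rather than discarding it.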

18 pages, 1106 KB  
Article
MKDAT: Multi-Level Knowledge Distillation with Adaptive Temperature for Distantly Supervised Relation Extraction
by Jun Long, Zhuoying Yin, Yan Han and Wenti Huang
Information 2024, 15(7), 382; https://doi.org/10.3390/info15070382 - 30 Jun 2024
Cited by 4 | Viewed by 3081
Abstract
Distantly supervised relation extraction (DSRE), first used to address the limitations of manually annotated data by automatically annotating data with triplet facts, is prone to issues such as mislabeled annotations due to the interference of noisy annotations. To address this interference, we leveraged a novel knowledge distillation (KD) method that differs from conventional DSRE models. More specifically, we proposed a model-agnostic KD method, Multi-Level Knowledge Distillation with Adaptive Temperature (MKDAT), which mainly involves two modules: Adaptive Temperature Regulation (ATR) and Multi-Level Knowledge Distilling (MKD). ATR allocates adaptive entropy-based distillation temperatures to different training instances to provide moderately softened supervision to the student; label hardening is possible for instances with high entropy. MKD combines the bag-level and instance-level knowledge of the teacher as supervision for the student, and trains the teacher and student at the bag and instance levels, respectively, which aims at mitigating the effects of noisy annotation and improving sentence-level prediction performance. In addition, we implemented three MKDAT models based on the CNN, PCNN, and ATT-BiLSTM neural networks, respectively, and the experimental results show that our distillation models outperform the baseline models on bag-level and instance-level evaluations.
(This article belongs to the Section Artificial Intelligence)
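The ATR idea — map a teacher prediction's entropy to a per-instance distillation temperature, softening confident predictions and hardening uncertain ones — can be sketched as below. The linear entropy-to-temperature mapping and the `t_min`/`t_max` bounds are illustrative assumptions; the paper's actual regulation scheme may differ.

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_temperature(teacher_probs, t_min=0.5, t_max=4.0):
    """Entropy-based temperature sketch: a confident (low-entropy) teacher
    prediction gets a high temperature so the student sees softer targets;
    an uncertain (high-entropy) prediction gets a temperature below 1,
    which hardens the label."""
    h = entropy(teacher_probs)
    h_max = math.log(len(teacher_probs))  # entropy of the uniform distribution
    frac = h / h_max  # 0 = fully confident, 1 = fully uncertain
    return t_max - (t_max - t_min) * frac

def soften(logits, t):
    """Temperature-scaled softmax used to produce the student's targets."""
    m = max(logits)
    exps = [math.exp((x - m) / t) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

t_confident = adaptive_temperature([0.9, 0.05, 0.05])
t_uncertain = adaptive_temperature([0.4, 0.3, 0.3])
```

With these numbers the confident prediction receives a noticeably larger temperature than the near-uniform one, which is the behavior the abstract describes.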

17 pages, 10737 KB  
Article
Distantly Supervised Relation Extraction via Contextual Information Interaction and Relation Embeddings
by Huixin Yin, Shengquan Liu and Zhaorui Jian
Symmetry 2023, 15(9), 1788; https://doi.org/10.3390/sym15091788 - 18 Sep 2023
Cited by 5 | Viewed by 1964
Abstract
Distantly supervised relation extraction (DSRE) utilizes an external knowledge base to automatically label a corpus, which inevitably leads to the problem of mislabeling. Existing approaches utilize BERT to provide instance and relation embeddings to capture a wide set of relations and address the noise problem. However, the method suffers from a single method of textual information processing, underutilizes the feature information of entity pairs in the relation embeddings part, and is interfered with by noisy labels when classifying multiple labels. For this reason, we propose the contextual information interaction and relation embeddings (CIRE) method. First, we utilize BERT and Bi-LSTM to construct a neural network model to enhance contextual information interaction by filtering and supplementing sequence information through the error repair capability of the Bi-LSTM gating mechanism. At the same time, we combine entity pairs with the vector difference between entity pairs in the relation embeddings layer to improve the relation embeddings accuracy. Finally, we choose sparse softmax as the classifier, which improves the ability to control the noise categories by controlling the number of output categories. The experimental results show that our method significantly outperforms the baseline method and improves the AUC metric by 2.6% on the NYT2010 dataset.
(This article belongs to the Section Computer)
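The abstract's closing idea — controlling the number of output categories with a sparse softmax — can be sketched as a top-k softmax: keep the k largest logits, renormalize over them, and give every other category exactly zero probability. This is one simple realization under that reading; the value of k and the toy logits are illustrative assumptions.

```python
import math

def sparse_softmax(logits, k=2):
    """Sparse-softmax sketch: softmax restricted to the top-k logits.
    Low-scoring (often noisy) relation categories receive probability
    exactly zero instead of a small positive mass."""
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    top = set(order[:k])
    m = max(logits[i] for i in top)
    exps = {i: math.exp(logits[i] - m) for i in top}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0 for i in range(len(logits))]

probs = sparse_softmax([3.0, 1.0, 0.2, -1.0], k=2)
```

Compared with a standard softmax, the tail categories contribute nothing to the loss, which is how restricting the support limits the influence of noisy labels.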

17 pages, 904 KB  
Article
KGGCN: Knowledge-Guided Graph Convolutional Networks for Distantly Supervised Relation Extraction
by Ningyi Mao, Wenti Huang and Hai Zhong
Appl. Sci. 2021, 11(16), 7734; https://doi.org/10.3390/app11167734 - 22 Aug 2021
Cited by 6 | Viewed by 3575
Abstract
Distantly supervised relation extraction is the most popular technique for identifying semantic relations between two entities. Most prior models focus only on the supervision information present in training sentences. In addition to training sentences, external lexical resources and knowledge graphs often contain other relevant prior knowledge; however, relation extraction models usually ignore such readily available information. Moreover, previous works only utilize a selective attention mechanism over sentences to alleviate the impact of noise; they lack consideration of the implicit interaction between sentences and relation facts. In this paper, (1) a knowledge-guided graph convolutional network is proposed based on a word-level attention mechanism to encode the sentences. It can capture key words and cue phrases to generate expressive sentence-level features by attending to relation indicators obtained from the external lexical resource. (2) A knowledge-guided sentence selector is proposed, which exploits the semantic and structural information of triples from the knowledge graph as sentence-level knowledge attention to distinguish the importance of each individual sentence. Experimental results on two widely used datasets, NYT-FB and GDS, show that our approach is able to efficiently use prior knowledge from the external lexical resource and knowledge graph to enhance the performance of distantly supervised relation extraction.
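The word-level half of this idea — biasing attention toward cue words found in an external lexical resource — can be sketched by adding a score bonus to indicator words before the softmax. The bonus value, the base scores, and the tiny indicator set below are all illustrative assumptions, not the paper's learned attention.

```python
import math

def knowledge_guided_word_attention(words, base_scores, indicators, bonus=2.0):
    """Word-level attention sketch: words appearing in an external list of
    relation indicators get a pre-softmax score bonus, so they dominate
    the resulting sentence representation."""
    scores = [s + (bonus if w in indicators else 0.0)
              for w, s in zip(words, base_scores)]
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    z = sum(exps)
    return [e / z for e in exps]

words = ["obama", "was", "born", "in", "hawaii"]
weights = knowledge_guided_word_attention(
    words, [0.5, 0.1, 0.4, 0.1, 0.5], indicators={"born"})
```

Here "born" acts as the relation indicator for a birthplace relation and receives the largest weight; in the paper the indicators come from the lexical resource rather than being hand-listed.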

13 pages, 1068 KB  
Article
Populating Web-Scale Knowledge Graphs Using Distantly Supervised Relation Extraction and Validation
by Sarthak Dash, Michael R. Glass, Alfio Gliozzo, Mustafa Canim and Gaetano Rossiello
Information 2021, 12(8), 316; https://doi.org/10.3390/info12080316 - 6 Aug 2021
Cited by 3 | Viewed by 3453
Abstract
In this paper, we propose a fully automated system to extend knowledge graphs using external information from web-scale corpora. The designed system leverages a deep-learning-based technology for relation extraction that can be trained by a distantly supervised approach. In addition, the system uses a deep learning approach for knowledge base completion by utilizing the global structure information of the induced KG to further refine the confidence of the newly discovered relations. The designed system does not require any effort for adaptation to new languages and domains, as it does not use any hand-labeled data, NLP analytics, or inference rules. Our experiments, performed on a popular academic benchmark, demonstrate that the suggested system boosts the performance of relation extraction by a wide margin, reporting error reductions of 50%, resulting in a relative improvement of up to 100%. Furthermore, a web-scale experiment conducted to extend DBPedia with knowledge from Common Crawl shows that our system is not only scalable but also does not require any adaptation cost, while yielding a substantial accuracy gain.
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
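The validation step described above — refining an extracted triple's confidence using a knowledge-base-completion plausibility score derived from the induced KG's global structure — could in its simplest form be a blend of the two scores. The linear combination and the `alpha` weight are illustrative assumptions; the paper's refinement model is learned, not a fixed formula.

```python
def refine_confidence(re_conf, kbc_score, alpha=0.6):
    """Validation sketch: blend the relation extractor's confidence with a
    KB-completion plausibility score for the same triple, down-weighting
    extractions that contradict the induced KG's global structure."""
    return alpha * re_conf + (1 - alpha) * kbc_score

# A triple the extractor likes but the KG finds implausible, and one where
# both signals agree (illustrative numbers).
suspect = refine_confidence(0.9, 0.1)
trusted = refine_confidence(0.7, 0.9)
```

The point of the blend is the ordering it induces: a triple with moderate extractor confidence but strong structural support can end up ranked above a confidently extracted but structurally implausible one.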

12 pages, 543 KB  
Article
Semantic Enhanced Distantly Supervised Relation Extraction via Graph Attention Network
by Xiaoye Ouyang, Shudong Chen and Rong Wang
Information 2020, 11(11), 528; https://doi.org/10.3390/info11110528 - 14 Nov 2020
Cited by 3 | Viewed by 3446
Abstract
Distantly Supervised relation extraction methods can automatically extract the relation between entity pairs, which is essential for the construction of a knowledge graph. However, the automatically constructed datasets contain large amounts of low-quality sentences and noisy words, and current Distantly Supervised methods ignore these noisy data, resulting in unacceptable accuracy. To mitigate this problem, we present a novel Distantly Supervised approach, SEGRE (Semantic Enhanced Graph attention networks Relation Extraction), for improved relation extraction. Our model first uses word position and entity type information to provide abundant local features and background knowledge. Then it builds dependency trees to remove noisy words that are irrelevant to relations and employs Graph Attention Networks (GATs) to encode syntactic information, which also captures the important semantic features of relational words in each instance. Furthermore, to make our model more robust against noisy words, an intra-bag attention module is used to weight the bag representation and mitigate noise in the bag. Through extensive experiments on the Riedel New York Times (NYT) and Google IISc Distantly Supervised (GIDS) datasets, we demonstrate SEGRE's effectiveness.
(This article belongs to the Section Artificial Intelligence)
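The intra-bag attention module mentioned above follows a standard pattern: score each sentence in the bag against a relation query vector, softmax the scores, and form the bag representation as the weighted sum, so sentences that look unrelated to the relation receive small weights. The toy feature vectors and dot-product scoring below are illustrative assumptions.

```python
import math

def intra_bag_attention(sentence_feats, relation_query):
    """Intra-bag attention sketch: attention-weighted sum of sentence
    features within one bag, weighted by similarity to a relation query."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    scores = [dot(s, relation_query) for s in sentence_feats]
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(sentence_feats[0])
    bag = [sum(w * s[d] for w, s in zip(weights, sentence_feats))
           for d in range(dim)]
    return bag, weights

# Two informative sentences and one noisy one (illustrative vectors).
feats = [[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]]
bag, weights = intra_bag_attention(feats, relation_query=[1.0, 0.0])
```

The noisy third sentence gets the smallest weight, so its contribution to the bag representation is suppressed rather than discarded outright.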
