Search Results (19)

Search Parameters:
Keywords = knowledge graph embedding (KGE)

28 pages, 549 KiB  
Review
Large Language Models for Knowledge Graph Embedding: A Survey
by Bingchen Liu, Yuanyuan Fang, Naixing Xu, Shihao Hou, Xin Li and Qian Li
Mathematics 2025, 13(14), 2244; https://doi.org/10.3390/math13142244 - 10 Jul 2025
Viewed by 888
Abstract
Large language models (LLMs), which train hundreds of millions or more parameters on large amounts of text data to understand and generate natural language, have attracted considerable attention in various fields due to their superior performance. As this superior performance has become apparent, LLMs are increasingly being applied to knowledge graph embedding (KGE)-related tasks to improve processing results. Traditional KGE representation learning methods map entities and relations into a low-dimensional vector space, enabling the triples in the knowledge graph to satisfy a specific scoring function in that space. Building on the powerful language understanding and semantic modeling capabilities of LLMs, which have recently been invoked to varying degrees in different types of KGE-related scenarios such as multi-modal KGE and open KGE according to their task characteristics, researchers are increasingly exploring how to integrate LLMs to enhance knowledge representation, improve generalization to unseen entities or relations, and support reasoning beyond static graph structures. In this paper, we investigate a wide range of approaches for performing LLM-related tasks in different types of KGE scenarios. To better compare the various approaches, we organize each KGE scenario into a classification. We also discuss the applications in which these methods are mainly used and suggest several forward-looking directions for the development of this new research area. Full article
(This article belongs to the Special Issue Data-Driven Decentralized Learning for Future Communication Networks)
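As a point of reference for the specific scoring function mentioned in the abstract above, the sketch below shows a minimal TransE-style score, where a triple (h, r, t) is plausible when the head embedding translated by the relation embedding lands near the tail embedding. The toy entities, relation, and random vectors are purely illustrative; real KGE models learn these vectors from training triples.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical toy vocabulary; a real KGE model learns these vectors by gradient descent.
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin"]}
relations = {name: rng.normal(size=dim) for name in ["capital_of"]}

def transe_score(h, r, t):
    """TransE-style score: higher (less negative) means a more plausible triple."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Rank candidate tails for the query (Paris, capital_of, ?).
candidates = sorted(entities, key=lambda t: transe_score("Paris", "capital_of", t), reverse=True)
print(candidates)
```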

18 pages, 14780 KiB  
Article
Boosting Deep Reinforcement Learning with Semantic Knowledge for Robotic Manipulators
by Lucía Güitta-López, Vincenzo Suriani, Jaime Boal, Álvaro J. López-López and Daniele Nardi
Robotics 2025, 14(7), 86; https://doi.org/10.3390/robotics14070086 - 24 Jun 2025
Viewed by 489
Abstract
Deep Reinforcement Learning (DRL) is a powerful framework for solving complex sequential decision-making problems, particularly in robotic control. However, its practical deployment is often hindered by the substantial amount of experience required for learning, which results in high computational and time costs. In this work, we propose a novel integration of DRL with semantic knowledge in the form of Knowledge Graph Embeddings (KGEs), aiming to enhance learning efficiency by providing contextual information to the agent. Our architecture combines KGEs with visual observations, enabling the agent to exploit environmental knowledge during training. Experimental validation with robotic manipulators in environments featuring both fixed and randomized target attributes demonstrates that our method achieves up to 60% reduction in learning time and improves task accuracy by approximately 15 percentage points, without increasing training time or computational complexity. These results highlight the potential of semantic knowledge to reduce sample complexity and improve the effectiveness of DRL in robotic applications. Full article
(This article belongs to the Special Issue Applications of Neural Networks in Robot Control)
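A minimal sketch of the kind of observation fusion described in the abstract above: a pre-computed KGE vector for the target object is concatenated with a visual feature vector before being passed to the agent's policy network. The array sizes and function names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def build_observation(visual_features: np.ndarray, target_kge: np.ndarray) -> np.ndarray:
    """Concatenate image-derived features with the semantic embedding of the target object."""
    return np.concatenate([visual_features, target_kge])

# Hypothetical shapes: a 256-d CNN feature vector and a 64-d knowledge graph embedding.
visual = np.random.rand(256)
kge = np.random.rand(64)
obs = build_observation(visual, kge)
print(obs.shape)  # (320,) -> input dimension of the policy network
```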

26 pages, 4300 KiB  
Article
HGeoKG: A Hierarchical Geographic Knowledge Graph for Geographic Knowledge Reasoning
by Tailong Li, Renyao Chen, Yilin Duan, Hong Yao, Shengwen Li and Xinchuan Li
ISPRS Int. J. Geo-Inf. 2025, 14(1), 18; https://doi.org/10.3390/ijgi14010018 - 3 Jan 2025
Viewed by 1419
Abstract
The Geographic Knowledge Graph (GeoKG) serves as an effective method for organizing geographic knowledge, playing a crucial role in facilitating semantic interoperability across heterogeneous data sources. However, existing GeoKGs are limited by a lack of hierarchical modeling and insufficient coverage of geographic knowledge (e.g., limited entity types, inadequate attributes, and insufficient spatial relationships), which hinders their effective use and representation of semantic content. This paper presents HGeoKG, a hierarchical geographic knowledge graph that comprehensively models hierarchical structures, attributes, and spatial relationships of multi-type geographic entities. Based on the concept and construction methods of HGeoKG, this paper developed a dataset named HGeoKG-MHT-670K. Statistical analysis reveals significant regional heterogeneity and long-tail distribution patterns in HGeoKG-MHT-670K. Furthermore, extensive geographic knowledge reasoning experiments on HGeoKG-MHT-670K show that most knowledge graph embedding (KGE) models fail to achieve satisfactory performance. This suggests the need to accommodate spatial heterogeneity across different regions and improve the embedding quality of long-tail geographic entities. HGeoKG serves as both a reference for GeoKG construction and a benchmark for geographic knowledge reasoning, driving the development of geographical artificial intelligence (GeoAI). Full article

22 pages, 1244 KiB  
Article
KLR-KGC: Knowledge-Guided LLM Reasoning for Knowledge Graph Completion
by Shengwei Ji, Longfei Liu, Jizhong Xi, Xiaoxue Zhang and Xinlu Li
Electronics 2024, 13(24), 5037; https://doi.org/10.3390/electronics13245037 - 21 Dec 2024
Cited by 2 | Viewed by 2725
Abstract
Knowledge graph completion (KGC) involves inferring missing entities or relationships within a knowledge graph, playing a crucial role across various domains, including intelligent question answering, recommendation systems, and dialogue systems. Traditional knowledge graph embedding (KGE) methods have proven effective in utilizing structured data and relationships. However, these methods often overlook the vast amounts of unstructured data and the complex reasoning capabilities required to handle ambiguous queries or rare entities. Recently, the rapid development of large language models (LLMs) has demonstrated exceptional potential in text comprehension and contextual reasoning, offering new prospects for KGC tasks. By using traditional KGE to capture the structural information of entities and relations to generate candidate entities and then reranking them with a generative LLM, the output of the LLM can be constrained to improve reliability. Despite this, new challenges, such as omissions and incorrect responses, arise during the ranking process. To address these issues, a knowledge-guided LLM reasoning for knowledge graph completion (KLR-KGC) framework is proposed. This model retrieves two types of knowledge from the knowledge graph—analogical knowledge and subgraph knowledge—to enhance the LLM’s logical reasoning ability for specific tasks while injecting relevant additional knowledge. By integrating a chain-of-thought (CoT) prompting strategy, the model guides the LLM to filter and rerank candidate entities, constraining its output to reduce omissions and incorrect responses. The framework aims to learn and uncover the latent correspondences between entities, guiding the LLM to make reasonable inferences based on supplementary knowledge for more accurate predictions. The experimental results demonstrate that on the FB15k-237 dataset, KLR-KGC outperformed the entity generation model (CompGCN), achieving a 4.8% improvement in MRR and a 5.8% improvement in Hits@1. Full article
(This article belongs to the Special Issue Advances in Graph-Based Data Mining)
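The candidate-then-rerank control flow described in the abstract above can be outlined roughly as follows. Both kge_score and llm_rerank are hypothetical stubs standing in for a trained KGE model (the paper uses CompGCN) and a chain-of-thought-prompted LLM, respectively; only the overall flow is meant to be indicative.

```python
def kge_score(head: str, relation: str, tail: str) -> float:
    """Placeholder for a structural KGE score from a trained embedding model."""
    return hash((head, relation, tail)) % 100 / 100.0  # dummy value for illustration

def llm_rerank(query: str, candidates: list, context: str) -> list:
    """Placeholder for prompting an LLM with analogical/subgraph knowledge and CoT instructions."""
    return candidates  # a real system would parse the LLM's ranked answer here

def complete_triple(head: str, relation: str, all_entities: list, k: int = 10) -> str:
    # Step 1: the KGE model narrows the search space to the top-k structurally plausible tails.
    candidates = sorted(all_entities, key=lambda t: kge_score(head, relation, t), reverse=True)[:k]
    # Step 2: retrieved knowledge plus a CoT prompt constrain the LLM to rerank only these candidates.
    context = f"analogical and subgraph facts about {head} and {relation}"
    reranked = llm_rerank(f"({head}, {relation}, ?)", candidates, context)
    return reranked[0]
```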

22 pages, 2629 KiB  
Article
Universal Knowledge Graph Embedding Framework Based on High-Quality Negative Sampling and Weighting
by Pengfei Zhang, Huang Peng, Yang Fang, Zongqiang Yang, Yanli Hu, Zhen Tan and Weidong Xiao
Mathematics 2024, 12(22), 3489; https://doi.org/10.3390/math12223489 - 8 Nov 2024
Viewed by 1645
Abstract
The traditional model training approach based on negative sampling randomly samples a portion of negative samples for training, which can easily overlook important negative samples and adversely affect the training of knowledge graph embedding models. Some researchers have explored non-sampling model training frameworks that use all unobserved triples as negative samples to improve model training performance. However, both training methods inevitably introduce false negative samples and easy-to-separate negative samples that are far from the model’s decision boundary, and they do not consider the adverse effects of long-tail entities and relations during training, thus limiting the improvement of model training performance. To address this issue, we propose a universal knowledge graph embedding framework based on high-quality negative sampling and weighting, called HNSW-KGE. First, we conduct pre-training based on the NS-KGE non-sampling training framework to quickly obtain an initial set of relatively high-quality embedding vector representations for all entities and relations. Second, we design a candidate negative sample set construction strategy that samples a certain number of negative samples that are neither false negatives nor easy-to-separate negatives for all positive triples, based on the embedding vectors obtained from pre-training. This ensures the provision of high-quality negative samples for model training. Finally, we apply weighting to the loss function based on the frequency of the entities and relations appearing in the triples to mitigate the adverse effects of long-tail entities and relations on model training. Experiments conducted on benchmark datasets FB15K237 and WN18RR using various knowledge graph embedding models demonstrate that our proposed framework HNSW-KGE, based on high-quality negative sampling and weighting, achieves better training performance and exhibits versatility, making it applicable to various types of knowledge embedding models. Full article
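The frequency-based weighting step can be sketched as below: triples containing rare (long-tail) entities or relations receive larger loss weights so they are not drowned out during training. The inverse-frequency form and the toy triples are assumptions for illustration; the exact weighting function used by HNSW-KGE is defined in the paper.

```python
from collections import Counter

triples = [("A", "r1", "B"), ("A", "r1", "C"), ("A", "r2", "B"), ("D", "r2", "E")]

ent_freq = Counter([h for h, _, t in triples] + [t for h, _, t in triples])
rel_freq = Counter(r for _, r, _ in triples)

def triple_weight(h, r, t):
    """Up-weight triples whose entities/relations are rare so long-tail items are not neglected."""
    return 1.0 / ent_freq[h] + 1.0 / rel_freq[r] + 1.0 / ent_freq[t]

weights = {tr: triple_weight(*tr) for tr in triples}
print(weights)  # the triple with the rare entities D and E gets the largest weight
```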

20 pages, 1425 KiB  
Article
Knowledge Graph Embedding Using a Multi-Channel Interactive Convolutional Neural Network with Triple Attention
by Lin Shi, Weitao Liu, Yafeng Wu, Chenxu Dai, Zhanlin Ji and Ivan Ganchev
Mathematics 2024, 12(18), 2821; https://doi.org/10.3390/math12182821 - 11 Sep 2024
Cited by 2 | Viewed by 1723
Abstract
Knowledge graph embedding (KGE) has been identified as an effective method for link prediction, which involves predicting missing relations or entities based on existing entities or relations. KGE is an important method for implementing knowledge representation and, as such, has been widely used in driving intelligent applications such as question-answering systems, recommendation systems, and relationship extraction. Models based on convolutional neural networks (CNNs) have achieved good results in link prediction. However, as the coverage areas of knowledge graphs expand, the increasing volume of information significantly limits the performance of these models. This article introduces a triple-attention-based multi-channel CNN model, named ConvAMC, for the KGE task. In the embedding representation module, entities and relations are embedded into a complex space and the embeddings are performed in an alternating pattern. This approach helps capture richer semantic information and enhances the expressive power of the model. In the encoding module, a multi-channel approach is employed to extract more comprehensive interaction features. A triple attention mechanism and max pooling layers are used to ensure that interactions between spatial dimensions and output tensors are captured during the subsequent tensor concatenation and reshaping process, which preserves local and detailed information. Finally, feature vectors are transformed into prediction targets for embedding through the Hadamard product of feature mapping and reshaping matrices. Extensive experiments were conducted to evaluate the performance of ConvAMC on three benchmark datasets compared with state-of-the-art (SOTA) models, demonstrating that the proposed model outperforms all compared models across all evaluation metrics on two of the datasets and achieves advanced link prediction results on most evaluation metrics on the third dataset. Full article
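For readers unfamiliar with convolution-based KGE scorers, the sketch below shows the generic reshape-convolve-project pattern that this family of models builds on. It is a single-channel simplification without attention, not the ConvAMC architecture, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MiniConvScorer(nn.Module):
    """Simplified ConvE-style scorer: reshape embeddings to 2D, convolve, project, score all tails."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 200, h: int = 10, w: int = 20):
        super().__init__()
        assert dim == h * w
        self.h, self.w = h, w
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)
        self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)

    def forward(self, head_idx, rel_idx):
        e = self.ent(head_idx).view(-1, 1, self.h, self.w)
        r = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([e, r], dim=2)              # stack head and relation as one 2D "image"
        x = torch.relu(self.conv(x))              # extract local interaction features
        x = self.fc(x.flatten(start_dim=1))       # project back to the embedding dimension
        return x @ self.ent.weight.t()            # one score per candidate tail entity

scores = MiniConvScorer(1000, 50)(torch.tensor([0]), torch.tensor([3]))
print(scores.shape)  # torch.Size([1, 1000])
```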

16 pages, 892 KiB  
Article
Enhancing Knowledge Graph Embedding with Hierarchical Self-Attention and Graph Neural Network Techniques for Drug-Drug Interaction Prediction in Virtual Reality Environments
by Lizhen Jiang and Sensen Zhang
Symmetry 2024, 16(5), 587; https://doi.org/10.3390/sym16050587 - 9 May 2024
Cited by 2 | Viewed by 1653
Abstract
In biomedicine, a critical task is to decode Drug–Drug Interactions (DDIs) from complex biomedical texts. The scientific community employs Knowledge Graph Embedding (KGE) methods, enhanced with advanced neural network technologies, including capsule networks. However, existing methodologies primarily focus on the structural details of individual entities or relations within Biomedical Knowledge Graphs (BioKGs), overlooking the overall structural context of BioKGs, molecular structures, positional features of drug pairs, and their critical Relational Mapping Properties. To tackle these challenges, this study presents HSTrHouse, an innovative hierarchical self-attention BioKG embedding framework. This architecture integrates self-attention mechanisms with advanced neural network technologies, including a Convolutional Neural Network (CNN) and a Graph Neural Network (GNN), for enhanced computational modeling in biomedical contexts. The model bifurcates the BioKG into entity and relation layers for structural analysis. It employs self-attention across these layers, utilizing PubMedBERT and the CNN for position feature extraction and the GNN for drug-pair molecular structure analysis. The position and molecular structure features are then concatenated and integrated into the self-attention calculation of entities and relations. The output of the self-attention layer is combined with the concatenated position and molecular structure feature vectors to obtain the final representation vector. Finally, to model the Relational Mapping Properties (RMPs), the representation vector is embedded into the complex vector space using Householder projections to obtain the BioKG model. The paper validates HSTrHouse’s efficacy by comparing it with advanced models on three standard BioKGs for DDI research. Full article
(This article belongs to the Section Computer)
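The Householder projection mentioned at the end of the abstract above is a standard linear-algebra construction; a minimal real-valued sketch of one Householder reflection applied to a representation vector is shown below (illustrative only, not the HSTrHouse parameterization, which operates in a complex space).

```python
import numpy as np

def householder_reflect(x: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Reflect x across the hyperplane orthogonal to v: H = I - 2 vv^T / ||v||^2."""
    v = v / np.linalg.norm(v)
    return x - 2.0 * np.dot(v, x) * v

x = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 0.0, 1.0])
print(householder_reflect(x, v))  # [ 1.  2. -3.] -- the component along v is flipped, norms are preserved
```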

17 pages, 1497 KiB  
Article
Decentralized Federated Learning-Enabled Relation Aggregation for Anomaly Detection
by Siyue Shuai, Zehao Hu, Bin Zhang, Hannan Bin Liaqat and Xiangjie Kong
Information 2023, 14(12), 647; https://doi.org/10.3390/info14120647 - 3 Dec 2023
Cited by 5 | Viewed by 3253
Abstract
Anomaly detection plays a crucial role in data security and risk management across various domains, such as financial insurance security, medical image recognition, and Internet of Things (IoT) device management. Researchers rely on machine learning to address potential threats in order to enhance data security. In the financial insurance industry, enterprises tend to leverage the relation-mining capabilities of knowledge graph embedding (KGE) for anomaly detection. However, auto insurance fraud detection strongly relies on manual labeling by experts, and the efficiency and cost issues of labeling mean that it remains a small-sample detection challenge. Existing schemes, such as transfer learning and data augmentation methods, are susceptible to local characteristics, which leads to poor generalization performance. To improve generalization, the recently emerging Decentralized Federated Learning (DFL) framework provides new ideas for mining more fraud cases through the joint cooperation of companies. Based on DFL, we propose a federated framework named DFLR for relation embedding aggregation. This framework trains each auto insurance company’s private KGE locally on the client and dynamically selects servers for relation aggregation with the aim of protecting privacy. Finally, we validate the effectiveness of the proposed DFLR on a real auto insurance dataset. The results show that the cooperative approach provided by DFLR improves a client’s ability to detect auto insurance fraud compared to single-client training. Full article
(This article belongs to the Special Issue Emerging Research on Neural Networks and Anomaly Detection)
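A minimal sketch of the relation-aggregation step in such a federated setup: each client keeps its entity embeddings private, and only relation embeddings are sent to the selected server for averaging. The unweighted mean and the toy company data are assumptions for illustration; DFLR's actual aggregation and server-selection strategy are described in the paper.

```python
import numpy as np

def aggregate_relations(client_relation_embs):
    """Average each relation's embedding across the clients that hold it."""
    aggregated = {}
    relation_names = {r for client in client_relation_embs for r in client}
    for r in relation_names:
        vectors = [client[r] for client in client_relation_embs if r in client]
        aggregated[r] = np.mean(vectors, axis=0)
    return aggregated

# Two hypothetical insurance companies sharing only relation vectors (dimension 4 for brevity).
client_a = {"claims_for": np.ones(4), "repaired_at": np.zeros(4)}
client_b = {"claims_for": np.full(4, 3.0)}
print(aggregate_relations([client_a, client_b])["claims_for"])  # [2. 2. 2. 2.]
```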

16 pages, 1390 KiB  
Article
Convolutional Models with Multi-Feature Fusion for Effective Link Prediction in Knowledge Graph Embedding
by Qinglang Guo, Yong Liao, Zhe Li, Hui Lin and Shenglin Liang
Entropy 2023, 25(10), 1472; https://doi.org/10.3390/e25101472 - 21 Oct 2023
Cited by 2 | Viewed by 2329
Abstract
Link prediction remains paramount in knowledge graph embedding (KGE), aiming to discern obscured or non-manifest relationships within a given knowledge graph (KG). Despite the critical nature of this endeavor, contemporary methodologies grapple with notable constraints, predominantly in terms of computational overhead and the intricacy of encapsulating multifaceted relationships. This paper introduces a sophisticated approach that amalgamates convolutional operators with pertinent graph structural information. By meticulously integrating information pertinent to entities and their immediate relational neighbors, we enhance the performance of the convolutional model, culminating in an averaged embedding ensuing from the convolution across entities and their proximal nodes. Significantly, our methodology presents a distinctive avenue, facilitating the inclusion of edge-specific data into the convolutional model’s input, thus endowing users with the latitude to calibrate the model’s architecture and parameters congruent with their specific dataset. Empirical evaluations underscore the ascendancy of our proposition over extant convolution-based link prediction benchmarks, particularly evident across the FB15k, WN18, and YAGO3-10 datasets. The primary objective of this research lies in forging KGE link prediction methodologies imbued with heightened efficiency and adeptness, thereby addressing salient challenges inherent to real-world applications. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)
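The neighbor-averaging idea described in the abstract above, where an entity's embedding is enriched with those of its immediate relational neighbors before convolution, could look roughly like this (the plain mean and the toy graph are illustrative assumptions).

```python
import numpy as np

def neighbor_enriched_embedding(entity, embeddings, neighbors):
    """Average an entity's vector with the vectors of its one-hop neighbors."""
    vectors = [embeddings[entity]] + [embeddings[n] for n in neighbors.get(entity, [])]
    return np.mean(vectors, axis=0)

embeddings = {"A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0]), "C": np.array([1.0, 1.0])}
neighbors = {"A": ["B", "C"]}
print(neighbor_enriched_embedding("A", embeddings, neighbors))  # approximately [0.667 0.667]
```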

17 pages, 761 KiB  
Article
Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules
by Yinyu Lan, Shizhu He, Kang Liu and Jun Zhao
Appl. Sci. 2023, 13(19), 10660; https://doi.org/10.3390/app131910660 - 25 Sep 2023
Cited by 5 | Viewed by 1873
Abstract
Knowledge graphs (KGs) play a crucial role in many applications, such as question answering, but incompleteness is an urgent issue for their broad application. Much research in knowledge graph completion (KGC) has been performed to resolve this issue. The methods of KGC can be classified into two major categories: rule-based reasoning and embedding-based reasoning. The former has high accuracy and good interpretability, but a major challenge is to obtain effective rules on large-scale KGs. The latter has good efficiency and scalability, but it relies heavily on data richness and cannot fully use domain knowledge in the form of logical rules. We propose a novel method that injects rules and learns representations iteratively to take full advantage of rules and embeddings. Specifically, we model the conclusions of rule groundings as 0–1 variables and use a rule confidence regularizer to remove the uncertainty of the conclusions. The proposed approach has the following advantages: (1) It combines the benefits of both rules and knowledge graph embeddings (KGEs) and achieves a good balance between efficiency and scalability. (2) It uses an iterative method to continuously improve KGEs and remove incorrect rule conclusions. Evaluations of two public datasets show that our method outperforms the current state-of-the-art methods, improving performance by 2.7% and 4.3% in mean reciprocal rank (MRR). Full article
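The rule-injection idea can be sketched as below: a soft composition rule with a confidence score generates candidate conclusions that can be fed back into embedding training as weighted pseudo-triples. The rule format and confidence handling here are simplified assumptions, not the paper's 0-1 variable formulation or its regularizer.

```python
def ground_rule(triples, body_r1, body_r2, head_r, confidence):
    """If (x, r1, y) and (y, r2, z) hold, conclude (x, head_r, z) with the rule's confidence."""
    facts = set(triples)
    conclusions = {}
    for (x, r1, y) in triples:
        if r1 != body_r1:
            continue
        for (y2, r2, z) in triples:
            if y2 == y and r2 == body_r2 and (x, head_r, z) not in facts:
                conclusions[(x, head_r, z)] = confidence
    return conclusions  # weighted pseudo-triples to add to the next embedding-training round

triples = [("Alice", "born_in", "Lyon"), ("Lyon", "located_in", "France")]
print(ground_rule(triples, "born_in", "located_in", "nationality_country", 0.8))
```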

14 pages, 1897 KiB  
Article
Complex Knowledge Graph Embeddings Based on Convolution and Translation
by Lin Shi, Zhao Yang, Zhanlin Ji and Ivan Ganchev
Mathematics 2023, 11(12), 2627; https://doi.org/10.3390/math11122627 - 8 Jun 2023
Cited by 1 | Viewed by 2105
Abstract
Link prediction involves the use of entities and relations that already exist in a knowledge graph to reason about missing entities or relations. Different approaches have been proposed to date for performing this task. This paper proposes a combined use of the translation-based approach with the Convolutional Neural Network (CNN)-based approach, resulting in a novel model, called ConCMH. In the proposed model, first, entities and relations are embedded into the complex space, followed by a vector multiplication of entity embeddings and relational embeddings and taking the real part of the results to generate a feature matrix of their interaction. Next, a 2D convolution is used to extract features from this matrix and generate feature maps. Finally, the feature vectors are transformed into predicted entity embeddings by obtaining the inner product of the feature mapping and the entity embedding matrix. The proposed ConCMH model is compared against state-of-the-art models on the four most commonly used benchmark datasets and the obtained experimental results confirm its superiority in the majority of cases. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
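The interaction step described in the abstract above, multiplying complex entity and relation embeddings and keeping the real part, reduces element-wise to Re((a+bi)(c+di)) = ac - bd. A minimal sketch (dimensions are illustrative; in ConCMH the resulting feature matrix is then passed through a 2D convolution):

```python
import numpy as np

def real_part_interaction(ent_re, ent_im, rel_re, rel_im):
    """Element-wise complex multiplication of entity and relation embeddings, keeping the real part."""
    return ent_re * rel_re - ent_im * rel_im

ent_re, ent_im = np.array([1.0, 2.0]), np.array([0.5, -1.0])
rel_re, rel_im = np.array([0.0, 1.0]), np.array([2.0, 3.0])
print(real_part_interaction(ent_re, ent_im, rel_re, rel_im))  # [-1.  5.]
```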

19 pages, 3782 KiB  
Article
A Quick Prototype for Assessing OpenIE Knowledge Graph-Based Question-Answering Systems
by Giuseppina Di Paolo, Diego Rincon-Yanez and Sabrina Senatore
Information 2023, 14(3), 186; https://doi.org/10.3390/info14030186 - 16 Mar 2023
Cited by 7 | Viewed by 3857
Abstract
Due to the rapid growth of knowledge graphs (KG) as representational learning methods in recent years, question-answering approaches have received increasing attention from academia and industry. Question-answering systems use knowledge graphs to organize, navigate, search and connect knowledge entities. Managing such systems requires a thorough understanding of the underlying graph-oriented structures and, at the same time, an appropriate query language, such as SPARQL, to access relevant data. Natural language interfaces are needed to enable non-technical users to query ever more complex data. The paper proposes a question-answering approach to support end users in querying graph-oriented knowledge bases. The system pipeline is composed of two main modules: one is dedicated to translating a natural language query submitted by the user into a triple of the form <subject, predicate, object>, while the second module implements knowledge graph embedding (KGE) models, exploiting the previous module triple and retrieving the answer to the question. Our framework delivers a fast OpenIE-based knowledge extraction system and a graph-based answer prediction model for question-answering tasks. The system was designed by leveraging existing tools to accomplish a simple prototype for fast experimentation, especially across different knowledge domains, with the added benefit of reducing development time and costs. The experimental results confirm the effectiveness of the proposed system, which provides promising performance, as assessed at the module level. In particular, in some cases, the system outperforms the literature. Finally, a use case example shows the KG generated by user questions in a graphical interface provided by an ad-hoc designed web application. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and Its Applications)
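The two-module pipeline can be outlined as below; extract_triple stands in for the OpenIE-based translation module and predict_object for the KGE answer-prediction model. Both are hypothetical stubs shown only to make the control flow concrete.

```python
def extract_triple(question):
    """Placeholder for the OpenIE module: map a question to a <subject, predicate, object> triple."""
    return ("France", "capital", None)   # the missing slot is what the KGE module must fill

def predict_object(subject, predicate, candidates):
    """Placeholder for the KGE module: score and rank candidate objects for the incomplete triple."""
    return candidates[0]                 # a real model would rank candidates with its embeddings

def answer(question, candidates):
    subject, predicate, obj = extract_triple(question)
    return obj if obj is not None else predict_object(subject, predicate, candidates)

print(answer("What is the capital of France?", ["Paris", "Lyon"]))
```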

16 pages, 2123 KiB  
Article
Modeling Noncommutative Composition of Relations for Knowledge Graph Embedding
by Chao Xiang, Cong Fu, Deng Cai and Xiaofei He
Electronics 2023, 12(6), 1348; https://doi.org/10.3390/electronics12061348 - 12 Mar 2023
Cited by 1 | Viewed by 2134
Abstract
Knowledge Graph Embedding (KGE) is a powerful way to express Knowledge Graphs (KGs), which can help machines learn patterns hidden in the KGs. Relation patterns are useful hidden patterns, and they usually help machines predict unseen facts. Many existing KGE approaches can model some common relation patterns, such as symmetry/antisymmetry, inversion, and commutative composition patterns. However, most of them are weak in modeling noncommutative composition patterns. This means these approaches cannot distinguish many composite relations, such as “father’s mother” and “mother’s father”. In this work, we propose a new KGE method called QuatRotatScalE (QRSE) to overcome this weakness, which utilizes rotation and scaling transformations of quaternions to design the relation embedding. Specifically, we embed the relations and entities into a quaternion vector space under the difference-norm KGE framework. Since the multiplication of quaternions does not satisfy the commutative law, QRSE can model noncommutative composition patterns naturally. The experimental results on a synthetic dataset also support that QRSE has this ability. In addition, the experimental results on real-world datasets show that QRSE achieves state-of-the-art performance on the link prediction problem. Full article
(This article belongs to the Special Issue Applications of Computational Intelligence, Volume 2)
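The noncommutativity that QRSE exploits comes directly from the quaternion Hamilton product; the sketch below checks that q1*q2 and q2*q1 differ for two unit quaternions. This is a generic demonstration of the algebra, not the QRSE scoring function.

```python
import numpy as np

def hamilton_product(q, p):
    """Quaternion multiplication in (w, x, y, z) form; not commutative in general."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

q1 = np.array([0.0, 1.0, 0.0, 0.0])   # the quaternion i
q2 = np.array([0.0, 0.0, 1.0, 0.0])   # the quaternion j
print(hamilton_product(q1, q2))       # [0. 0. 0. 1.]  =  k
print(hamilton_product(q2, q1))       # [0. 0. 0. -1.] = -k  -> order matters
```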

17 pages, 7684 KiB  
Article
TeCre: A Novel Temporal Conflict Resolution Method Based on Temporal Knowledge Graph Embedding
by Jiangtao Ma, Chenyu Zhou, Yonggang Chen, Yanjun Wang, Guangwu Hu and Yaqiong Qiao
Information 2023, 14(3), 155; https://doi.org/10.3390/info14030155 - 1 Mar 2023
Cited by 7 | Viewed by 3170
Abstract
Since the facts in a knowledge graph (KG) cannot be updated automatically over time, some facts have temporal conflicts. To discover and eliminate temporal conflicts in the KG, this paper proposes a novel temporal conflict resolution method based on temporal KG embedding, named TeCre. First, the predicate relation and timestamp information of the time series are incorporated into the entity–relation embedding representation by leveraging the temporal KG embedding (KGE) method. Then, taking into account the chronological evolution of the entity–relation representation over time, TeCre constrains the temporal relations in the KG according to time-disjoint, time-precedence, and time-mutual-exclusion constraints. In addition, TeCre considers the sequence vectorization of the predicate relation to discover temporal conflict facts in the KG. Finally, to eliminate these conflicts, TeCre deletes the tail entities of the conflicting facts and employs link prediction to complete the missing tail entities according to the output of the score function based on the entity–relation embedding. Experimental results on four public datasets show that TeCre is significantly better than state-of-the-art temporal KG conflict resolution models. The mean reciprocal rank (MRR) and Hits@10 of TeCre are at least 5.46% and 3.2% higher, respectively, than those of the baseline methods. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and Its Applications)
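The time-disjoint idea can be illustrated with a simple interval-overlap check over timestamped facts: two different subjects holding the same functional relation to the same object over overlapping spans signal a conflict. This is a toy validity test with invented spans, not TeCre's embedding-based detection.

```python
def overlaps(span_a, span_b):
    """Two (start, end) year spans conflict if they strictly overlap (shared endpoints are allowed)."""
    return span_a[0] < span_b[1] and span_b[0] < span_a[1]

# Invented facts: two different people holding the same office over overlapping years is a conflict.
facts = [
    ("Chirac", "president_of", "France", (1995, 2007)),
    ("Sarkozy", "president_of", "France", (2007, 2012)),
    ("Hollande", "president_of", "France", (2010, 2017)),  # deliberately overlaps the previous span
]

for i, (s1, r1, o1, t1) in enumerate(facts):
    for s2, r2, o2, t2 in facts[i + 1:]:
        if r1 == r2 and o1 == o2 and s1 != s2 and overlaps(t1, t2):
            print("temporal conflict:", (s1, t1), "vs", (s2, t2))
```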

19 pages, 1932 KiB  
Article
Attention Knowledge Network Combining Explicit and Implicit Information
by Shangju Deng, Jiwei Qin, Xiaole Wang and Ruijin Wang
Mathematics 2023, 11(3), 724; https://doi.org/10.3390/math11030724 - 1 Feb 2023
Cited by 2 | Viewed by 2046
Abstract
Existing knowledge graph embedding (KGE) methods have achieved good performance in recommendation systems. However, the relevance between entities gradually decreases as information spreads through the knowledge graph. Focusing on the explicit and implicit relationships among entities, this paper proposes an attention knowledge network combining explicit and implicit information (AKNEI) to effectively capture and precisely describe the correlation between entities in the knowledge graph. First, we design an information-sharing layer (ISL) to realize information sharing between items and entities through implicit interaction, and we propose a cross-feature fusion module to extract high-order feature information in the model. At the same time, an attention mechanism is used to counter the decline of information relevance during knowledge graph propagation. Finally, the KGE and cross-feature fusion module features are integrated into an end-to-end learning framework, in which the item information from the recommendation task and the knowledge graph entity information interact implicitly and explicitly and the characteristics between them are learned automatically. We performed extensive experiments on multiple public datasets covering movies, music, and books. The experimental results show that our model achieves a substantial performance improvement over the latest baselines. Full article
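The attention idea, down-weighting entities whose relevance decays as signals propagate through the graph, can be sketched with a plain softmax over dot-product similarity scores (a generic attention weighting with toy vectors, not the AKNEI architecture).

```python
import numpy as np

def attention_weights(query, neighbor_embs):
    """Softmax over dot-product scores: more relevant neighbors contribute more to the aggregate."""
    scores = neighbor_embs @ query
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

query = np.array([1.0, 0.0])                                 # e.g., an item representation
neighbors = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])   # entity embeddings at growing hops
weights = attention_weights(query, neighbors)
aggregated = weights @ neighbors                              # attention-weighted neighborhood summary
print(weights.round(3), aggregated.round(3))
```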
