Relgraph: A Multi-Relational Graph Neural Network Framework for Knowledge Graph Reasoning Based on Relation Graph

Abstract: Multi-relational graph neural networks (GNNs) have found widespread application in tasks involving knowledge representation and knowledge graph (KG) reasoning. However, existing multi-relational GNNs still face limitations in modeling the exchange of information between predicates. To address these challenges, we introduce Relgraph, a novel KG reasoning framework. Relgraph introduces relation graphs to explicitly model the interactions between different relations, enabling more comprehensive and accurate handling of representation learning and reasoning tasks on KGs. Furthermore, we design a machine learning algorithm based on the attention mechanism to simultaneously optimize the original graph and its corresponding relation graph. Benchmark results and experiments on large-scale KGs demonstrate that the Relgraph framework improves KG reasoning performance. The framework is versatile and can be seamlessly integrated with various traditional translation models.


Introduction
In the field of Artificial Intelligence, knowledge graph (KG) reasoning has become a pivotal research topic. As a powerful representation, a KG integrates billions of available relational facts. Its importance lies in its potential to enhance the representation and inference of knowledge, enabling more intelligent decision making and problem solving.
KGs are now utilized in various downstream applications, including recommendation systems [1], query answering [2], and drug discovery [3]. However, due to constraints in human knowledge and text extraction technology, even the largest KGs remain incomplete. Given that the construction of knowledge graphs involves extensive data and complex semantic relationships, omissions and defects are inevitable. To address these issues, knowledge graph reasoning techniques fill in the gaps and optimize the knowledge graph, with a typical task being knowledge graph completion (KGC). KG reasoning has become an important research field, attracting the attention and exploration of numerous researchers. With the continuous advancement of technology and increasing application demands, KG reasoning will play an even more significant role in future intelligent applications.
Multi-relational graph neural networks (GNNs) [4,5] extend traditional GNNs by considering multiple relations in the graph as distinct node attributes. This extension enables the model to capture a broader range of semantic information and better encode the contextual relationships between entities. By incorporating multiple relations, multi-relational GNNs are able to handle more complex reasoning tasks, such as transitive reasoning, role reversal, and collective entity recognition. This increased expressivity significantly improves the accuracy and scalability of knowledge graph reasoning.
Despite their advantages, current multi-relational GNNs face limitations. Firstly, most existing models assume that different relations have independent impacts on entities, ignoring potential interactions between different relations. This limitation can lead to incomplete or inaccurate reasoning results. Secondly, existing models often treat all relations equally, disregarding their different levels of importance in different reasoning tasks. This uniform treatment may result in information loss or overfitting in complex reasoning tasks. Figure 1 offers an example of the interactions between predicates.
In this paper, we aim to address the limitation of existing multi-relational GNNs in ignoring interactive effects between relations. We introduce Relgraph, a novel knowledge graph reasoning framework that explores logical relationships between relations by introducing a relation graph. The relation graph serves as a dual graph of the original KG, treating relations of the original KG (or predicates) as entities and entities in the original KG as relations. Relgraph explicitly models the interactions between different relations as relations on the relation graph (referred to as entity-links); correspondingly, the relations in the original KG are relation-links. By assigning different weights to different entity links based on their importance in specific reasoning tasks, Relgraph effectively handles complex reasoning tasks while reducing information loss and overfitting. A typical original knowledge graph and its corresponding relation graph are shown in Figure 2.
We extend the traditional graph attention network (GAT) to its dual neural network, the relation graph attention network (RGAT), and design a machine learning algorithm based on the attention mechanism to synchronously optimize the two networks. As a result, we successfully apply the Relgraph framework to knowledge graph reasoning tasks. The Relgraph framework can embed various representation learning algorithms, such as TransE and RotatE, to optimize the representation learning of entities and relations, ultimately enhancing reasoning performance on KGs.
Through benchmark tests and a drug repurposing task based on biochemical KGs, we experimentally demonstrate that Relgraph improves the performance of reasoning models on KGs.
The primary contributions of this article are as follows.

•
We introduce the concept of the relation graph, which extends the capabilities of multi-relational GNNs for knowledge graph reasoning tasks. On this foundation, we present Relgraph, a novel knowledge graph reasoning framework.

•
We design and implement a machine learning mechanism based on GAT that simultaneously optimizes GAT and RGAT using an attention mechanism. This integration allows the Relgraph framework to learn representations from KGs and apply them to reasoning tasks, thereby enhancing the performance of multi-relational GNNs in knowledge graph reasoning tasks.

•
The proposed Relgraph framework is versatile and can integrate various representation learning algorithms, such as TransE [6] and RotatE [7].

Related Work
The primary objective of this article is to introduce an innovative approach to representation learning on KGs that leverages the attention mechanism of GNNs for KGC reasoning tasks. To this end, we conducted a literature review encompassing KG embeddings for KGC, advancements in GNN-based KGC, and models that incorporate multi-relational GNNs. Unlike traditional embedding-based and GNN-based KGC models, the Relgraph framework excels at capturing relational dependencies within GNNs, enabling more effective KG embedding learning and subsequently enhancing the performance of KGC models.
KGC via KG embeddings. The utilization of learnable embeddings for KGs has spawned numerous works aimed at KGC tasks. For instance, references [6][7][8][9][10] describe methods that learn to map KG relations into vector spaces and employ scoring functions for KGC tasks. In contrast, NTN [11] parameterizes each relation using a neural network. Paper [12] proposes a divide-search-combine algorithm, RelEns-DSC, to efficiently search relation-aware ensemble weights for KG embedding. The common drawback of these methods is that they cannot explicitly model the relationships between predicates.
KGC via GNNs. The GNN is a framework introduced by [13] for learning deep models or embeddings on graph-structured data. The theoretical foundation for GNNs' ability to capture common graph heuristics for KGC tasks in simple graphs was established by [14]. Furthermore, ref. [15] employed a similar approach to achieve competitive results on inductive matrix completion. However, homogeneous GNN models often cannot be directly applied to tasks involving multiple relations.
Multi-relational GNNs. Multi-relational GNNs have been widely studied in recent years as a powerful tool for enhancing knowledge representation and reasoning on KGs [4,5]. These networks aim to capture the complex relationships between entities and predicates in KGs. One line of research focuses on introducing more complex network architectures to capture the multi-relational interactions better [16]. These models often involve the use of attention mechanisms or graph convolutional layers to capture the interactions between different relations explicitly. Another line of research aims to develop learning algorithms that can effectively optimize the representations learned by multi-relational GNNs [4]. Paper [17] proposes an adaptive propagation path learning method for GNN-based inductive KG reasoning, addressing the limitations of hand-designed paths and the explosive growth of entities. Relphormer [18] leverages Triple2Seq to handle heterogeneity and a structure-enhanced self-attention mechanism to encode relational information. These algorithms often involve the use of contrastive learning or other optimization techniques to improve the performance of representation learning and reasoning tasks on KGs. However, they often cannot explicitly model the relationships between predicates, which limits their accuracy and efficiency in knowledge graph reasoning.

Preliminaries
KGs. A KG is a structured representation of facts in the form of triples. It is defined as a set of triples K containing elements (h, r, t), where h and t belong to the set of entities E and r belongs to the set of predicates R. A triple (h, r, t) ∈ K represents a fact in the KG, where h and t are entities and r is a relation; K ⊆ E × R × E.
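To make the notation concrete, the triple view of a KG can be sketched in a few lines of Python. The example facts and helper names below are purely illustrative, not taken from the paper's code.

```python
# A minimal sketch of the triple representation K ⊆ E × R × E.
K = {
    ("ibuprofen", "treats", "headache"),
    ("ibuprofen", "is_a", "drug"),
    ("headache", "is_a", "symptom"),
}

E = {h for h, _, _ in K} | {t for _, _, t in K}   # entity set
R = {r for _, r, _ in K}                          # predicate set

def hd(r, K):
    """Set of all head entities HD_r for predicate r."""
    return {h for h, p, _ in K if p == r}

def tl(r, K):
    """Set of all tail entities TL_r for predicate r."""
    return {t for _, p, t in K if p == r}
```

For instance, `hd("is_a", K)` returns both "ibuprofen" and "headache", since both appear as head entities of an is_a fact.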
To avoid confusion, we use the terms "relation" and "predicate" interchangeably throughout this discussion, as they carry the same meaning in our context.
When referring to the triple (h, r, t), h and t are called the "head entity" and "tail entity", respectively. The sets of all head entities and all tail entities corresponding to a particular predicate r are denoted HD_r and TL_r, respectively.

The Definition of Relgraph
Meta-nodes. In this paper, we introduce the concept of meta-nodes to represent the relationship between entities and predicates in KGs. These meta-nodes are divided into two categories: entity meta-nodes and predicate meta-nodes. Entity meta-nodes, denoted emn_i,a, represent the component of entity e_i corresponding to predicate r_a; they express the influence of entities on predicates. The set EMN represents all such meta-nodes in the KG, formally defined as {emn_i,a | e_i ∈ E, r_a ∈ R}, and is contained in the cross product E × R. Similarly, EMN_a represents all entity meta-nodes associated with a specific predicate r_a.
For the relation r_a ∈ R, we define the graph G_a = ⟨V_a, r_a, D_a⟩, where the set of edges D_a contains triples (h, r_a, t); h, t ∈ V_a represent the source and target vertices, respectively, and r_a is the unique relation of the graph G_a. For each fact (e_m, r_a, e_n) ∈ K, let h, t denote the entity meta-nodes emn_m,a, emn_n,a ∈ EMN_a, respectively. Then, the triple (h, r_a, t) corresponds to the fact (e_m, r_a, e_n) in the knowledge graph. For a triple (emn_m,a, r_a, emn_n,a) ∈ D_a associated with entities e_m, e_n ∈ E related to the predicate r_a ∈ R, we define the logical function:

P(emn_m,a, r_a, emn_n,a) = True if (e_m, r_a, e_n) ∈ K, and False otherwise. (1)

Equation (1) is referred to as the truth function of graph G_a.
Dually, for e_i ∈ E and r_a ∈ R, we set pmn_i,a as the predicate meta-node representing the influence of predicate r_a on entity e_i. The set PMN = {pmn_i,a | e_i ∈ E, r_a ∈ R} ⊆ R × E represents all predicate meta-nodes in the knowledge graph. Similarly, PMN_i = {pmn_i,a | r_a ∈ R} represents all predicate meta-nodes associated with entity e_i.
Entity Links. Let e_i ∈ E be an entity. We define a graph RG_i = ⟨RV_i, el_i, RD_i⟩ such that each edge rd ∈ RD_i is a triple (h, el_i, t), where h, t ∈ RV_i are the source and destination vertices. The only relation in RG_i is el_i. For predicates r_f, r_g ∈ R, let h, t be the predicate meta-nodes pmn_i,f, pmn_i,g ∈ PMN_i; we call el_i an entity link. The graph RG_i is called a relation graph about entity link el_i. To distinguish G_a from relation graphs, we refer to it as an entity graph.
It follows from the definition that for each entity e_i ∈ E, there exists a corresponding collection of predicate meta-nodes PMN_i and an entity link el_i. The collection of all entity links is denoted EL, and |EL| = |E|. To reflect the influence of entities on predicate pairs, we extend the truth function of Equation (1) to relation graphs. The truth function for relation graph RG_i is defined as follows:

P(pmn_i,f, el_i, pmn_i,g) = True if there exist e_x, e_y ∈ E such that (e_x, r_f, e_i) ∈ K and (e_i, r_g, e_y) ∈ K, and False otherwise. (2)

A visual representation of the truth function for relation graphs can be found in Figure 2. It shows that if an entity e_i is the tail entity of a fact (e_x, r_f, e_i) and the head entity of a fact (e_i, r_g, e_y), and the predicate meta-nodes pmn_i,f, pmn_i,g represent the influence of predicates r_f, r_g on e_i, respectively, then P(pmn_i,f, el_i, pmn_i,g) = True. In analogy to the definition of facts in knowledge graphs, for any pmn_i,f, pmn_i,g ∈ PMN_i, a triple for which Equation (2) evaluates to True is called a relation-fact, and the set of all relation-facts is denoted RK.
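The truth function of Equation (2) can be materialized directly: an entity that is the tail of a fact under r_f and the head of a fact under r_g links those two predicates. The following sketch (function and variable names are our own, not from the paper's code) derives the relation-fact set from the original triples, using the kinship example of Figure 1.

```python
from itertools import product

def relation_facts(K):
    """Derive the relation-fact set RK from K, following the truth
    function for relation graphs: (r_f, el_i, r_g) holds iff entity e_i
    is the tail of some fact under r_f and the head of some fact under r_g."""
    incoming = {}   # entity -> predicates of facts arriving at it
    outgoing = {}   # entity -> predicates of facts leaving it
    for h, r, t in K:
        incoming.setdefault(t, set()).add(r)
        outgoing.setdefault(h, set()).add(r)
    RK = set()
    for e in set(incoming) & set(outgoing):
        for r_f, r_g in product(incoming[e], outgoing[e]):
            RK.add((r_f, e, r_g))   # e plays the role of entity link el_e
    return RK

# Entity D connects "father" and "uncle", as in Figure 1.
K = {("A", "father", "D"), ("D", "uncle", "B")}
print(relation_facts(K))  # {('father', 'D', 'uncle')}
```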

Relgraph Attention Network
Knowledge graph embedding (KGE) represents entities and relations in a continuous space as embeddings. A scoring function based on these embeddings can be defined to score triplets (h, r, t), h, t ∈ E, r ∈ R. The embeddings are trained to ensure that observed facts in the KG receive higher scores than unobserved ones. Well-trained embeddings are typically used for tasks such as KGC and rule mining.
According to the definitions provided in Section 4, we have transformed the knowledge graph into |R| entity graphs {G_a | r_a ∈ R}, each containing |E| vertices representing entity meta-nodes. Additionally, we have |E| relation graphs {RG_i | e_i ∈ E}, each containing |R| vertices representing predicate meta-nodes. In the entity graphs, the relations are defined by the predicates present in the KG, while in the relation graphs, the relations are defined by entity-links. To perform tasks such as KGC, Relgraph utilizes two interconnected graph attention networks to train embeddings for entities, predicates, and entity links. These networks are referred to as the Entity GAT (graph attention network) and the Relation GAT (relation graph attention network). The two GATs share the same embedding of predicates.
Entity GAT. The Entity GAT corresponding to entity graph G_k is responsible for analyzing the message-passing process of Relgraph and the attention between the vertices emn_i,k and emn_j,k (entity meta-nodes representing the influence of entities e_i, e_j on the predicate r_k, respectively). We denote by e_ijk the component of emn_i,k corresponding to the vertex emn_j,k. The matrix F ∈ R^{|E|×d} represents the input features, where d is the feature dimension. Each row f_i = F_i: represents the embedding of entity e_i. Similarly, the matrix P ∈ R^{|R|×d} represents the features of the predicates, and each row p_k = P_k: represents the embedding of predicate r_k. We denote by f_ijk the embedding of e_ijk.
In accordance with [16], the Entity GAT uses the following aggregation function to update the embeddings of entities and predicates, where p_k is the embedding of the predicate r_k determined by (e_j, r_k, e_i) ∈ K; N_i = {j | e_j ∈ E, r_k ∈ R, (e_j, r_k, e_i) ∈ K} is the index set of e_i's one-hop neighboring vertices, and N_i also includes i (i.e., there is a self-loop); R_ij = {k | e_i ∈ E, e_j ∈ E, (e_j, r_k, e_i) ∈ K} is the index set of predicates appearing in facts with e_j as the head entity and e_i as the tail entity; a_ijk is the attention weight between the target vertex emn_i,k and the neighboring vertex emn_j,k, generated by applying softmax to the values computed by b_ijk. The parameters W_f, W_b, W_F, and W_P are trainable parameters of the attention function.
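The exact aggregation (Equations (7) and (8)) follows [16]. As a rough, hedged sketch only — the weight shapes and the form of the message function here are our own illustrative assumptions, not the paper's formulation — an attention-weighted aggregation over relational neighbors can look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_f = rng.normal(size=(d, 2 * d))   # mixes a neighbor's entity and predicate features
W_b = rng.normal(size=(1, d))       # scores each message -> unnormalized b_ijk

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def aggregate(neighbors):
    """neighbors: list of (f_j, p_k) pairs for facts (e_j, r_k, e_i),
    self-loop included. Returns the updated embedding of e_i."""
    msgs = np.stack([W_f @ np.concatenate([f_j, p_k]) for f_j, p_k in neighbors])
    b = np.array([(W_b @ np.tanh(m))[0] for m in msgs])   # b_ijk scores
    a = softmax(b)                                        # attention weights a_ijk
    return (a[:, None] * msgs).sum(axis=0)                # weighted aggregation

f_i = rng.normal(size=d)
out = aggregate([(f_i, rng.normal(size=d)),               # self-loop message
                 (rng.normal(size=d), rng.normal(size=d))])
```

The key property illustrated is that the softmax-normalized weights let each target vertex attend differently to each (neighbor, predicate) pair.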
Relation GAT. Dually, for each relation graph RG_s in the Relgraph, to comprehend the message-passing process and the attention between the vertices pmn_u,s and pmn_v,s (predicate meta-nodes representing the influence of predicates r_u, r_v on entity e_s, respectively), we use r_uvs to denote the component of pmn_u,s corresponding to the vertex pmn_v,s. The matrix Q ∈ R^{|E|×d} represents the features of entity links, with each row q_e = Q_e: denoting the embedding of entity link el_e. We denote by p_uvs the embedding of r_uvs.
After updating the embeddings of entities and predicates using Equations (7) and (8) via the Entity GAT, the Relation GAT employs the aggregation function below to update the embeddings of entity links and predicates once again.
Here, q_s represents the embedding of entity link el_s according to the relation-fact (r_v, el_s, r_u) ∈ RK; QN_u = {v | r_v ∈ R, el_s ∈ EL, (r_v, el_s, r_u) ∈ RK} is the index set of r_u's one-hop neighboring vertices, and QN_u also includes u (i.e., there is a self-loop on each vertex); EL_uv = {s | r_u ∈ R, r_v ∈ R, (r_v, el_s, r_u) ∈ RK} is the index set of entity-links appearing in relation-facts with r_v as the head vertex and r_u as the tail vertex; b_uvs is the attention weight between the target vertex pmn_u,s and the neighboring vertex pmn_v,s, computed by applying softmax to the values determined by d_uvs. W_p, W_d, W_PQ, and W_Q are the trainable parameters of the attention function.

Training Algorithm
We train Relgraph through KGC tasks on KGs for embedding learning. For the Entity GAT, we utilize the training dataset K_train ⊂ K, as well as a negative-sample training set K'_train generated from K_train in a 3:1 ratio. For the Relation GAT, the model generates the relation-facts as a training set RK_train ⊂ RK based on K_train, using Equation (2). Similarly, the negative-sample training set RK'_train is generated following the same method.
The scoring function SC(e_i, r_a, e_j) predicts the probability of (e_i, r_a, e_j) ∈ K being true. For relation-facts (r_a, el_j, r_b) ∈ RK, the model employs the same scoring function, SC(r_a, el_j, r_b). In each iteration, the model minimizes a margin-based loss in which γ denotes the margin hyperparameter and η denotes the RGAT weight hyperparameter, which controls the penalty for incorrect predictions in the relation graph.
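As a hedged sketch of such a joint objective — the paper's exact loss is not reproduced above, so the particular margin form and default values below are our assumptions — a margin loss over the original graph plus an η-weighted margin loss over the relation graph can be written as:

```python
import numpy as np

def margin_loss(pos_scores, neg_scores, gamma):
    """Standard margin ranking loss: push positive scores above
    negative scores by at least the margin gamma."""
    return np.maximum(0.0, gamma - pos_scores + neg_scores).mean()

def relgraph_loss(pos, neg, rpos, rneg, gamma=6.0, eta=0.5):
    """Joint loss: entity-graph term plus eta-weighted relation-graph
    term, mirroring the roles of gamma and eta described in the text."""
    return margin_loss(pos, neg, gamma) + eta * margin_loss(rpos, rneg, gamma)

pos = np.array([5.0, 4.0])   # scores of observed facts
neg = np.array([1.0, 2.0])   # scores of corrupted (negative) facts
loss = relgraph_loss(pos, neg, pos, neg)
```

Setting `eta = 0` removes the relation-graph term entirely, which is exactly how the GAT-RotatE/GAT-TransE ablations are obtained later in the paper.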

Variants of Relgraph
The Relgraph framework can be seamlessly integrated with TransE, RotatE, and other translational representation learning methods by utilizing distinct scoring functions. Based on the scoring function used, Relgraph can be classified into distinct variants. We have developed the Relgraph-RotatE and Relgraph-TransE versions of the model, along with the GAT-RotatE version for comparative experiments that do not involve entity links.

•
Relgraph-TransE. In the Relgraph-TransE version, we use a TransE-style scoring function that measures how well f_h + p_r matches f_t. Here, f_h and f_t represent the embeddings of h and t, respectively, and p_r represents the embedding of r.

•
Relgraph-RotatE. In the original RotatE model, entity embeddings consist of real and imaginary parts, which are operated on separately with the predicate embeddings representing the rotation angle in the complex space. Consequently, the dimensionality of entity embeddings needs to be twice that of the predicates. However, in Relgraph there exists a duality between entities and predicates, requiring the dimensions of entities and predicates to be consistent. This contradiction necessitates a special design for Relgraph-RotatE. Let p_r be the embedding of a predicate or entity link r with dimensionality D. We represent p_r as the concatenation of two parts, p^u_r and p^d_r, where p^u_r comprises the first D/2 dimensions of p_r and p^d_r the last D/2 dimensions. Mathematically, p_r = Cat(p^u_r, p^d_r), where Cat denotes the concatenation operation on vectors. In Relgraph-RotatE, the rotation angle from the head vertex to the tail vertex in a triplet is calculated using Equation (23), where p^θ_r represents the rotation angle from the head entity to the tail entity in RotatE. The scoring function of Relgraph-RotatE then scores a triple based on this angle and the embeddings f_h, f_t of h and t. The reason for this design is the discontinuous nature of Equation (23). This discontinuity allows the representation learning model to better distinguish between entities and predicates that result in true or false triple assignments. It also helps maintain a relatively clear and stable geometric interpretation of the dimensions in the embeddings.

•

GAT-RotatE and GAT-TransE. We employ GAT-RotatE and GAT-TransE as benchmark models to assess the impact of introducing entity links on model performance. By setting η in Equation (18) to 0, we obtain GAT-RotatE/GAT-TransE from Relgraph-RotatE/Relgraph-TransE.
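Two mechanical pieces of the variants above can be sketched concretely: the TransE-style translation check and the Cat(p^u_r, p^d_r) dimension split used by Relgraph-RotatE. These are illustrative sketches only; the paper's actual scoring functions (Equations (16) and (17)) and the rotation-angle formula (Equation (23)) are not reproduced here.

```python
import numpy as np

def score_transe(f_h, p_r, f_t):
    """TransE-style score: a triple is plausible when f_h + p_r ≈ f_t,
    so higher (less negative) scores mean more plausible facts."""
    return -np.linalg.norm(f_h + p_r - f_t, ord=1)

def split_embedding(p_r):
    """Relgraph-RotatE keeps entity and predicate dimensions equal by
    viewing p_r as Cat(p_u, p_d): the first and last D/2 dimensions."""
    D = p_r.shape[0]
    return p_r[: D // 2], p_r[D // 2 :]

p_r = np.arange(4.0)                # a predicate embedding with D = 4
p_u, p_d = split_embedding(p_r)     # p_u = [0, 1], p_d = [2, 3]
```

A perfect translation gives the maximal score of 0, e.g. `score_transe(np.zeros(4), p_r, p_r)`.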

Results
We conducted rigorous experiments to demonstrate the efficacy of Relgraph in enhancing the performance of reasoning models on KGs. These experiments include KGC experiments on benchmark datasets as well as a drug repurposing experiment.

Benchmark datasets.
The evaluation of open-world KGC tasks often relies on subsets of WordNet and Freebase, such as WN18RR [10] and FB15K-237 [19]. To verify the effectiveness of Relgraph, we need datasets with numerous predicates and high difficulty levels. Therefore, we selected FB15K-237, WN18RR, and UMLS [20] as our experimental datasets. UMLS is a domain-specific knowledge graph in the medical domain, containing biomedical concepts and their relationships.
Drug repurposing datasets. To verify the efficacy of this method for knowledge extraction and logical reasoning on large-scale datasets, we conducted drug repurposing experiments on the open-source biochemical knowledge graph RTX-KG2c [21]. RTX-KG2c integrates data from 70 public knowledge sources into a comprehensive graph in which all biological entities (e.g., "ibuprofen") are represented as nodes and all concept-predicate-concept relationships (e.g., "ibuprofen-increased activity-GP1BA gene") are encoded as edges. This dataset comprises approximately 6.4 M entities across 56 distinct categories, with 39.3 M relationship edges described by 77 distinct relations.
Drug repurposing, also known as drug rediscovery or drug repositioning, refers to discovering a new indication for an existing medication. The objective of this experiment is to employ the KGC model to learn the interactions between diseases and drugs from RTX-KG2c, aiming to predict potential therapeutic relationships between drugs and diseases. To identify "new" applications for existing drugs, therapeutic relationships were retrieved from external databases, including the MyChem [22] and SemMedDB [23] datasets.
To prevent information leakage during training, we excluded all existing edges connecting potential drug nodes (nodes labeled "Drug" or "SmallMolecule") with potential disease nodes (nodes labeled "Disease", "PhenotypicFeature", "BehavioralFeature", or "DiseaseOrPhenotypicFeature") in RTX-KG2c. We then added drug-disease pairs that were confirmed true positives (pairs with the relation "indication" from the MyChem dataset or the predicate "treats" from the SemMedDB dataset). A new predicate, treat, was introduced to represent this therapeutic relationship in the experimental KG.
We generated new triples based on these drug-disease pairs and added them to the KG, dividing them into training, validation, and testing sets in a 7:2:1 ratio.
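The leakage control and split described above can be sketched as follows. The function and variable names are hypothetical, and the tiny example graph stands in for RTX-KG2c.

```python
import random

DRUG = {"Drug", "SmallMolecule"}
DISEASE = {"Disease", "PhenotypicFeature", "BehavioralFeature",
           "DiseaseOrPhenotypicFeature"}

def prepare(kg_edges, labels, treat_pairs, seed=0):
    """Drop every edge linking a drug node to a disease node (leakage
    control), then add confirmed (drug, 'treat', disease) triples and
    split them into train/valid/test sets in a 7:2:1 ratio."""
    kept = [(h, r, t) for h, r, t in kg_edges
            if not (labels[h] in DRUG and labels[t] in DISEASE)
            and not (labels[t] in DRUG and labels[h] in DISEASE)]
    triples = [(d, "treat", s) for d, s in treat_pairs]
    random.Random(seed).shuffle(triples)
    n = len(triples)
    train = triples[: int(0.7 * n)]
    valid = triples[int(0.7 * n): int(0.9 * n)]
    test = triples[int(0.9 * n):]
    return kept, train, valid, test
```

In the real experiment, the retained graph edges are used for training alongside the train split of the treat triples, while evaluation considers only treat predictions.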
Baselines. To test the effectiveness and versatility of Relgraph, we selected an extensive set of well-established knowledge graph reasoning models and used them on the benchmarks as baselines. We then used each baseline in conjunction with Relgraph to conduct performance testing and recorded the performance improvement. The chosen baseline models include TransE, DistMult [9], RotatE, ConvE [10], and CompGCN [4], among others.
In the drug repurposing experiments, all models were trained using the modified RTX-KG2c training dataset. However, when tested on the test set, only the predicted results of triples associated with the predicate treat were considered.
Experiment setting details. We set the entity and relation embedding dimensions to 200 for our experiments. To optimize the model, we utilized the Adam optimization algorithm [24]. We experimented with different learning rates within the range of 0.002 to 0.006, as well as mini-batch sizes from 64 to 256. Additionally, we applied dropout regularization to the entity and relation embeddings, as well as to all feed-forward layers, and searched for an optimal dropout rate within the range of 0.55.
In line with the common practice in [25,26], we utilize standard evaluation metrics for the link prediction task: Hits@1, the proportion of test triples for which the correct entity is ranked first among the predictions, and mean reciprocal rank (MRR), the mean of the reciprocal rank of the correct answer in the list of predictions.
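Given the rank of the correct entity for each test triple, both metrics reduce to a few lines; the following sketch (with made-up ranks) shows the computation.

```python
def mrr(ranks):
    """Mean reciprocal rank over the ranks of the correct answers."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(k, ranks):
    """Fraction of test triples whose correct answer is ranked in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 2, 4, 10]
# MRR = (1 + 1/2 + 1/4 + 1/10) / 4 = 0.4625
# Hits@1 = 1/4 = 0.25, Hits@3 = 2/4 = 0.5
```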
To generate predictions, we feed the predicate and entity representations learned by each model version into a ConvKB model [27]. We then train this ConvKB model to act as a decoder, assigning scores to candidate head and tail entities for each triplet in the test set. Finally, we sort the entities by these scores, calculate the corresponding metrics (Hits@K and MRR), and evaluate the performance of each model.
All experiments were conducted on a machine equipped with 6 Nvidia Tesla V100 GPUs and 32 GB RAM (Beijing, China). We used the PyTorch library in Python for implementation.

As shown in Table 1, in the KGC tasks the performance of Relgraph is optimal on most metrics. Relgraph-TransE and Relgraph-RotatE outperform other representation-based methods such as TransE, RotatE, and ConvE. For MRR on the FB15K-237 and UMLS datasets, Relgraph-RotatE is 56.5% and 22.0% higher than the best previous algorithm, and the improvements of Relgraph-TransE are 52.9% and 21.6%, respectively.

The impact of the relation graph on reasoning performance. Table 2 shows the improvement brought by the relation graph in embedding learning and KG reasoning. The model versions that use the relation graph (Relgraph-RotatE and Relgraph-TransE) show clear performance advantages over the versions that do not (GAT-RotatE and GAT-TransE).

Analytical Experiments
To gain a deeper understanding of the impact of various parameters in the model and delve into the inner workings of the model, we meticulously designed and executed a series of analytical experiments.
The impact of embedding dimension size. As Figure 4 demonstrates, we conducted an experimental analysis of the impact of the embedding dimension of entities, predicates, and entity-links on Relgraph's performance. The embedding dimensions of the model were set to 50, 100, 150, and 200, respectively. The results indicate that the performance improvement plateaus at a dimension of 200.
The performance comparison between the Relgraph-RotatE-min, Relgraph-RotatE-max, and Relgraph-RotatE models on the FB15K-237 dataset is displayed in Figure 6. It is evident that the rotation angle calculation scheme proposed in this paper offers several advantages.
Comparison of different similarity algorithms in the Relgraph model. We also analyzed and verified the performance impact of different similarity algorithms in the scoring functions (Equations (16) and (17)) on the Relgraph model. For this experiment, we utilized cosine similarity, the F1 norm, and the F2 norm in the scoring functions to calculate the similarity between embeddings. The comparison of their performance on the FB15K-237 dataset is presented in Figure 7.

Performance Analysis of Relgraph
From the benchmark results presented in Table 1 and the drug repurposing experiment outcomes in Table 3, it is evident that Relgraph outperforms the baseline models across various KGC tasks. This underscores the advantage of Relgraph in leveraging the relation graph to extract the information exchanged between predicates.
Notably, Relgraph exhibits the most significant performance improvement over baseline methods on the FB15K-237 dataset, as evident from Table 1. The improvements achieved by Relgraph-RotatE and Relgraph-TransE over the suboptimal baseline methods amount to approximately 56.5% and 52.9%, respectively.
On the WN18RR dataset, Relgraph exhibits a relatively smaller improvement over the baseline methods. Relgraph-RotatE shows the most notable improvement, approximately 1.9%, followed by Relgraph-TransE with approximately 0.4%.
Relgraph exhibits a significant performance improvement over the baseline methods on the UMLS dataset. Compared to the suboptimal baseline, Relgraph-RotatE and Relgraph-TransE achieve improvements of 22.0% and 21.6%, respectively.
The Relgraph model also shows an advantage over the baselines in its ability to identify new indications for existing drugs, achieving a performance improvement of 4.4% for Relgraph-RotatE relative to RotatE and 1.3% for Relgraph-TransE relative to TransE. This suggests that Relgraph remains effective on large-scale knowledge bases.
Comparing the different experimental tasks, the Relgraph model exhibits its greatest advantage on FB15K-237, which has the highest number of predicates, and its smallest advantage on WN18RR, which has the lowest number of predicates. This underscores the unique advantage of the relation graph in handling complex KGs with numerous predicates.

The Performance Improvement Brought by the Relation Graph and the Universality of Relgraph
The results in Table 2 demonstrate that Relgraph outperforms ordinary GAT regardless of the representation learning method used. Specifically, Relgraph-RotatE offers a 5.6% improvement over GAT-RotatE, while Relgraph-TransE offers a 5.9% improvement over GAT-TransE on the FB15K-237 dataset. This highlights the value of introducing relation graphs to capture predicate interactions.
The results in Table 2 also indicate that the Relgraph framework is versatile and can be seamlessly integrated with other translational representation learning methods on KGs, enhancing their performance. Notably, the performance improvement of Relgraph-RotatE over the original RotatE is 56.5%, and the improvement of Relgraph-TransE over the original TransE is 72.3%. As Figure 3 shows, this improvement is consistent across datasets, highlighting the generalizability of the Relgraph framework for enhancing knowledge graph reasoning.

Analysis of Hyperparameters and Related Settings
In Figure 4, it can be observed that as the embedding dimensions in Relgraph increase from 50 to 200, the model's performance improves accordingly. However, further increasing the dimensions does not significantly enhance performance. Our experimental conclusion is that the optimal embedding dimension depends on factors such as the size of the dataset, the number of predicates, and the representation learning method used (TransE, RotatE, etc.).
As shown in Figure 5, Relgraph typically achieves rapid convergence, approaching approximately optimal performance at around 1000 epochs, highlighting its relative efficiency. As the number of training epochs increases further, the performance of Relgraph continues to improve slightly.
The comparative experiment in Figure 6 demonstrates the effectiveness of the rotation angle calculation method (Equation (23)) utilized in Relgraph-RotatE. Compared to using max or min functions, Equation (23) exhibits a performance advantage of approximately 3%.
Based on the experimental results depicted in Figure 7, we conclude that in the Relgraph model, scoring triplets using cosine similarity yields the best performance. Conventionally, representation learning models for KGs utilize the F1 norm to measure vector similarity. The superiority of cosine similarity as the scoring function in Relgraph may be due to the influence of the relation graph: the predicate representations learned by the model are insensitive to vector magnitude and focus more on vector angles.

Conclusions
This article proposes a new KG reasoning framework, Relgraph, which explicitly models the interaction between different relations by introducing a relation graph. An attention mechanism-based machine learning algorithm is designed to synchronously optimize the GAT for the original graph and the relation GAT for the relation graph, thereby improving the performance of translational representation learning methods and multi-relational graph neural network models in reasoning tasks. The experimental results demonstrate the effectiveness of Relgraph in knowledge graph reasoning tasks. The universality of the framework and its ability to embed various representation learning algorithms make it widely applicable. We found that Relgraph is particularly suitable for reasoning on datasets with rich predicates, and that it can still perform KGC on large-scale datasets. The computational complexity of the model is on the same level as that of a traditional GAT.
Limitations. The primary limitations of this article are twofold. Firstly, the graph attention learning mechanism it relies on is relatively conventional and does not incorporate the latest advancements in the field. Secondly, the application potential of the proposed framework has not been comprehensively explored: it is confined primarily to transductive KGC, while other promising scenarios, such as inductive KGC, rule mining, and knowledge discovery, are left unexamined.
In future work, we will further explore the unique advantages of Relgraph in predicate information mining and rule mining. We will also attempt to integrate new GAT mechanisms into our framework, striving for new breakthroughs in interpretable and highly generalizable reasoning.

Figure 1 .
Figure 1. The interaction between predicates of a toy knowledge graph (KG) about kinship relationships. The blue lines and blue squares represent the relations and entities in the KG, respectively. Entity D can be used to analyze the information connection between the two predicates father and uncle, while entity A helps to analyze the interaction between grandson and father; both are depicted as red lines in the figure.

Figure 2 .
Figure 2. The toy kinship knowledge graph (KG) (top) and its corresponding relation graph (bottom). In the knowledge graph, vertices represent entities and links represent relations (or predicates). In the relation graph, vertices represent relations from the knowledge graph, and links (red dashed lines) represent entity links. The presence of an entity link between vertices is determined by Equation (2).
Let d be the dimension of the embeddings in the model. The spatial complexity of Relgraph is O(2|E|d + |R|d) and its temporal complexity is O(|K|d^2 + |K|d + |RK|d^2 + |RK|d). In contrast, the TransE model has a spatial complexity of O(|E|d + |R|d) and a temporal complexity of O(d), while the single-relation GAT model has a spatial complexity of O(|E|d + |R|d) and a temporal complexity of O(|E|d^2 + |K|d). Without the relation graph, the Entity GAT (multi-relational GAT) has a spatial complexity of O(|E|d + |R|d) and a temporal complexity of O(|K|d^2 + |K|d). Compared to the GAT and multi-relational GAT models, the complexity of Relgraph therefore increases only linearly. In the KGC experiment on the FB15K-237 dataset, with d = 200, Relgraph completes training in about 1 h.

Figure 3 .
Figure 3. MRR improvement brought by the relation graph in Relgraph on benchmarks.

Drug repurposing. As shown in Table 3, Relgraph performed best in most metrics of the drug repurposing experiment. Relgraph-TransE and Relgraph-RotatE outperformed baseline methods such as TransE, RotatE, and ConvE. Relgraph-RotatE was 4.4% higher than the best prior algorithm, and Relgraph-TransE was 1.3% higher than the original TransE. The model versions using the relation graph (Relgraph-RotatE and Relgraph-TransE) also outperformed the versions without it (GAT-RotatE and GAT-TransE).

Figure 4 .
Figure 4. MRR of Relgraph under different embedding dimensions on the FB15K-237 dataset.

The impact of the training epoch. To analyze the learning efficiency of the Relgraph model, the MRR of the model was sampled at each training epoch during experiments on the FB15K-237 dataset, as displayed in Figure 5. The model achieved satisfactory performance at approximately 500 epochs and reached optimal fitness at 3000 epochs.

Figure 5 .
Figure 5. The learning curve of Relgraph over training epochs on the FB15K-237 dataset.

Comparison of schemes for calculating the rotation angles in Relgraph-RotatE. To verify the effectiveness of the rotation angle calculation scheme in the Relgraph-RotatE model (Equation (23)), we conducted a comparative experiment. We established two modified versions of the model, Relgraph-RotatE-min and Relgraph-RotatE-max, based on the min and max functions, respectively.

Figure 6 .
Figure 6. Comparison of model performance under different rotation angle calculation schemes on the FB15K-237 dataset.

Figure 7 .
Figure 7. Comparison of model performance under different similarity algorithms on the FB15K-237 dataset.

Table 1 .
Results of knowledge graph completion (KGC) tasks on benchmarks. * denotes results from publications; - denotes unpublished results. The experimental results indicate that the KGC performance of Relgraph-RotatE and Relgraph-TransE exceeds that of traditional methods.

Table 2 .
The impact of the relation graph on enhancing the performance of embedding learning and knowledge graph (KG) reasoning. * denotes results from publications. The experimental results indicate that introducing the relation graph improves the accuracy of KG reasoning.

Table 3 .
Results of drug repurposing experiment.