Article

ERGCN: Enhanced Relational Graph Convolution Network, an Optimization for Entity Prediction Tasks on Temporal Knowledge Graphs

School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai 200433, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Future Internet 2022, 14(12), 376; https://doi.org/10.3390/fi14120376
Submission received: 23 November 2022 / Revised: 5 December 2022 / Accepted: 6 December 2022 / Published: 13 December 2022

Abstract

Reasoning on temporal knowledge graphs, which aims to infer new facts from existing knowledge, has attracted extensive attention and in-depth research recently. One of the important reasoning tasks on temporal knowledge graphs is entity prediction, which focuses on predicting the missing objects in facts at the current time step when the relevant histories are known. The problem is that most previous studies of entity prediction on temporal knowledge graphs pay attention to aggregating various semantic information from entities but ignore the impact of semantic information from relation types. We believe that relation types are a good supplement for this task and that making full use of the semantic information of facts can improve the results. Therefore, this paper puts forward the Enhanced Relational Graph Convolution Network (ERGCN) framework. Rather than considering only the representations of entities, the framework considers and merges the contextual semantic information of both relations and entities. Experimental results show that the proposed approach outperforms state-of-the-art methods.

1. Introduction

Knowledge graphs (KGs), which store human knowledge and facts about the real world, are widely used in various applications [1,2,3]. However, knowledge graphs are often incomplete, which limits their application in the real world. As the incompleteness of facts may obstruct the reasoning procedure, it is necessary to complete knowledge graphs by predicting missing facts. Several methods have been proposed for completing knowledge graphs, such as TransE [4], DistMult [5] and ConvE [6]. Another issue that cannot be ignored is that facts often change over time. To depict the changing trend of facts over time, the relevant information can be organized into a series of knowledge graphs, each of which corresponds to a group of facts at a different timestamp [1,3,7,8]. Such a series of knowledge graphs organized in chronological order is called a temporal knowledge graph (TKG). A natural concern is whether we can predict unseen facts from historical information. Therefore, learning the evolution of facts over time and then predicting unseen entities on TKGs has attracted the attention of researchers and has recently become a hot topic.
Prediction of facts over TKGs falls into two categories: interpolation and extrapolation [9]. Interpolation, also known as the completion problem, is mainly used for completing missing information during a given time interval [10,11,12]. A sample interpolation problem is to infer the president of the United States in 2016 when this fact is not observed between 1990 and 2020. Extrapolation, also known as the entity prediction task, involves forecasting unknown facts at a future time and is a more difficult challenge than interpolation. An example of extrapolation is predicting who will win the next US presidential election; Figure 1 gives an example of entity prediction. Extrapolation research is not only of practical significance but also of theoretical value, because studying the evolution of facts can help us understand the informative relationships hidden behind structured knowledge graphs. Many efforts have focused on this problem, but it is far from solved.
Entity prediction methods on knowledge graphs can be separated into two groups: static prediction methods and dynamic prediction models.
According to their optimization targets, static prediction methods can be further classified into three types: distance-based methods, semantic similarity-based methods and deep learning methods. Among the distance-based methods, TransE [4] is a classical approach to interpreting relations on KGs. TransE regards the representations of entities and relations as translational vectors; the goal is to minimize the distance within the triple $(s, r, o)$, i.e., $\min \|s + r - o\|$. Based on this idea, TransD [13], TransR [14] and TransH [14] were proposed with different weight matrices to transform entity vectors before scoring the distance loss. The semantic similarity-based methods, e.g., DistMult [5], use a bilinear function to calculate the plausibility of the triple. Some studies on knowledge graph completion follow this idea, such as HolE [15] and RippleNet [3]. The scoring function generally has the form $f(s, r, o) = s^{T} W_r o$, where $W_r$ is a parameter matrix representing the relation type. A popular genre of deep learning methods for entity prediction in KGs is GCN-based approaches, such as GAT [16] and GraphSAGE [17]. A GCN-based model consists of multiple layers of neural network blocks that generate hidden representations of entities enriched with semantic information from their context. A common GCN block has the form
$$h_s^{(l+1)} = \sigma\Big(\sum_{m \in M_s} g_m(h_s^l, h_j^l)\Big)$$
Here, $h_s^l$ and $h_j^l$ are the hidden representations of entity $s$ and its related entity $j$ in the $l$-th layer, $M_s$ denotes the set of neighbors of entity $s$, and $g_m$ is a specific neural network function for propagating messages. For instance, Kipf et al. [18] propose a linear parameter matrix to transform the representations of entities and their neighbors. GAT [16] uses local attention weights to distinguish the importance of the target entity's neighbors. GraphSAGE [17] concatenates the embedding of a target entity and its neighbors into a feature vector that preserves more of the target's original features. Different from the models mentioned above, which assume graphs with only one type of relation, RGCN [19] is a notable approach that introduces a relation-specific transformation function to deal with multiple relation types.
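To make the generic GCN update above concrete, the following is a minimal PyTorch sketch of one message-passing step. It is illustrative only: the function name, the edge-list format, the single shared weight matrix standing in for the per-message functions $g_m$, and the choice of ReLU for $\sigma$ are our own assumptions rather than details of any cited model.

```python
import torch
import torch.nn.functional as F

def gcn_layer(h, edges, W_self, W_nbr):
    """One message-passing step:
    h_s^{l+1} = relu(W_self h_s^l + sum over incoming edges of W_nbr h_j^l)."""
    src, dst = edges[:, 0], edges[:, 1]
    messages = h[src] @ W_nbr            # transform each neighbor state h_j
    agg = torch.zeros_like(h)
    agg.index_add_(0, dst, messages)     # sum incoming messages per target node
    return F.relu(h @ W_self + agg)      # self-loop term plus aggregation
```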
As static methods ignore the influence of time, they cannot model the evolutionary trend of facts on knowledge graphs. To predict future facts from histories, dynamic prediction models instead train time-varying representations of facts to reflect their evolution over time. Several works modified static methods to adapt to temporal changes in the data. One line of effort adds extra weights or features to entity representations, such as Time-Aware [12], TA-TransE [11] and DE-TransE [20]. Comparative experiments show that dynamic methods which learn the evolution of facts perform better. Time-Aware [12] is an early work on predicting the changes of relations on TKGs; it uses an asymmetric matrix to translate the relation matrix of TransE and adds integer programming constraints to capture temporal features. TTransE [21] uses a series of weights to represent relations at different times. TA-TransE [11] directly defines the representation of time as a series of vectors. DE-TransE [20] introduces a diachronic method to represent the evolution of entities. Know-Evolve [10] and its follow-up DyRep [22] use RNN-based models to create dynamic representations of entities. The other line of effort uses sequence-encoder modeling methods to create extra hidden vectors standing for the chronological features of facts [9]. GCRN [23] is the first sequence modeling method on TKGs. RE-NET [1] follows GCRN's structure but adds a global vector to represent the global state of all facts at each time. EvolveGCN [24] merges a GCN block into a GRU [25] unit to update the GCN's weights, which allows the GCN block to adapt to relations at different times. REGCN [26] designs a static-properties algorithm to reflect the evolutionary trend of TKGs.
However, most previous dynamic prediction methods pay attention to extracting semantic features of entities and their neighbors, but fail to consider the interactions between entities and their pairwise relations. Therefore, in this research, we try to make up for the deficiencies of previous methods by learning the semantic interactions between entities and relations and then combining them into the prediction model. We believe that these semantic interactions contain informative clues about context dependencies, and that capturing them holds promise for making inference on TKGs more accurate.
Therefore, we propose a GCN-based model, the Enhanced Relational Graph Convolution Network (ERGCN) (code is available at https://github.com/Uynixu/ERGCN (accessed on 22 November 2022)), to accomplish the entity prediction task. The model focuses on learning the full semantic information of facts, covering both relations and entities. We evaluate the performance of the model and aim to demonstrate the necessity of adding the full semantic interaction between relations and entities into the model. We compare the proposed model with previous models on relevant data through several experiments designed for the task. Overall, the contributions of this paper can be summarized as follows:
1. We propose and test a new GCN-based method, named ERGCN, which takes context dependencies between pairwise entities and relations into account during training, and achieves better performance than previous methods;
2. We design a new approach to predict unseen facts on TKGs and compare it with different models to demonstrate the necessity of using the full semantic information of facts in relevant reasoning tasks.

2. Methodology

2.1. Problem Definition

We first give the definitions used in this paper.
Definition 1
(Temporal knowledge graph). A temporal knowledge graph (TKG) is represented as a set of chronological knowledge graphs with discrete timestamps, $\{G_1, G_2, \ldots, G_T\}$, where each graph at time $t$ is $G_t = (V, R, E_t)$, $t \in [1, T]$. Here, $V$ is the set of entities, $R$ is the set of relation types, and $E_t$ is the set of edges. Each edge represents a fact that links two entities through a relation type. Therefore, $E_t \subseteq \{(s, r, o)_t \mid s \in V, o \in V, r \in R\}$, where the triple $(s, r, o)_t$ stands for an event or fact in which the subject entity $s$ has the relationship $r$ with the object entity $o$ at time $t$.
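As a toy illustration of Definition 1, a TKG can be stored as a list of $(s, r, o, t)$ quadruples and split into per-timestamp snapshots; all ids and facts below are invented for the example.

```python
from collections import defaultdict

# Quadruples (subject, relation, object, timestamp); values are made up.
quadruples = [
    (0, 2, 5, 1),
    (0, 3, 5, 2),
    (5, 1, 7, 2),
]

snapshots = defaultdict(list)        # t -> E_t, the edge set of G_t
for s, r, o, t in quadruples:
    snapshots[t].append((s, r, o))

# snapshots[2] == [(0, 3, 5), (5, 1, 7)], i.e., the graph G_2
```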
Definition 2
(Entity prediction task). Given the query $(s, r, ?, t)$, the entity prediction task is to model the conditional probability distribution of all object entities given the subject $s$, the relation $r$, and the historical graphs within a fixed observation window of length $m$, $\{G_{t-m+1}, \ldots, G_{t-1}\}$. This conditional probability distribution is represented as the function $f_1$ in Formula (1). Meanwhile, we add a sub-query $(s, ?, t)$ to constrain the reasoning process: it models the conditional probability distribution of all relation types when $s$ and the historical graphs are given, represented as the function $f_2$ in Formula (2). Our task is therefore to find appropriate trainable functions $f_1$ and $f_2$ to fit the conditional probability distributions on TKGs. The formulations are:
$$p(o \mid s, r, t) = f_1(s, r, G_{t-1:t-m+1}) \quad (1)$$
$$p(r \mid s, t) = f_2(s, G_{t-1:t-m+1}) \quad (2)$$
Definition 3
(Neighbor set of an entity). Given a snapshot of the TKG at time $t$, the entity $s$, its neighbor entities and the linking relation types form a sub-graph $Sub_t(s)$. In this sub-graph, the nodes around $s$ form its neighbor entity set, denoted as $N_e^t(s)$, and the linking relations constitute the neighbor relation set of $s$, denoted as $N_r^t(s)$.

2.2. Framework of the Model

Following the study of RE-NET [1], the key idea of our approach is to learn the local context dependencies near the central facts with our ERGCN block, as well as the global semantic structure of the whole graph on TKGs. The reasoning logic is based on the following assumptions: (1) reasoning about future facts can be regarded as sequential inference over past relevant histories at different timestamps; (2) temporally adjacent facts may contain informative patterns that imply the evolutionary trend of facts.
To approach the problem, our model is divided into two parts: the local learning unit and the global unit. The local learning unit aggregates features in the neighborhood to extract the local dependencies around a specific entity, which constitute the local temporal features. At the same time, the goal of the global unit is to generate a single vector representing the informative structure of the current graph as a whole, referred to as the global representation.
Both the local learning unit and the global unit follow an encoder–decoder structure. Here, the encoder consists of several GCN layers and one GRU layer. The GCN block integrates the dependencies of edges in the knowledge graph at each timestamp; then, the informative sequential features learned by the GCN and their pairwise time representations are merged by the GRU block into single vectors that represent the evolution of facts at different timestamps. Based on these vectors and the static representations of entities and relations, temporal reasoning results at the next timestamp can be evaluated by the decoder function. The structure of our model reflecting the above ideas is illustrated in Figure 2.

2.3. Local Learning Unit

To represent the semantic features of entities and relations, we use internally initialized embedding vectors, $E_{(s,o)} \in \mathbb{R}^{n \times d}$ and $E_r \in \mathbb{R}^{r \times d}$, to stand for entities and relations, respectively. Here, $n$ and $r$ are the numbers of entities and relation types, respectively, and $d$ is the dimension of each embedding.
Since static embedding vectors cannot reflect the evolution of facts over time, two types of representation, the local temporal feature and the global vector, are proposed to reflect this evolution. The local temporal feature $h_s^t$ summarizes the local information around a central entity up to timestamp $t$, reflecting the past changes of the relationships between the linked facts. The global vector $g_t$ focuses on learning the trend of the background information of all facts on the current knowledge graph. The two types of dynamic representation capture different aspects of informative knowledge from TKGs, which allows us to verify the reasoning process in different ways.
To capture the local structural information around a fact, GCN blocks are used to aggregate neighbor information and transform it into a single representation standing for the main feature of the central entity. The problem is that previous GCN blocks used on knowledge graphs ignore the semantics of relations, and some recent models only regard relation types as a part of entities. However, classical knowledge graph embedding studies show that semantic features from entities and from relations have different effects on model performance. To account for this divergence, we introduce a new GCN algorithm, named ERGCN, which uses the full semantic information of facts to create representations of facts. The aggregator is formally defined as follows:
$$h_{s,t}^{l+1} = W_0^l h_{s,t}^l + \frac{1}{n} \sum_{r \in N_r^t(s)} \sum_{o \in N_e^t(s)} \left( W_{r,1}^l e_o + W_{r,2}^l e_r \right) \quad (3)$$
Here, $h_{s,t}^l$ stands for the neighborhood message of entity $s$ at the $l$-th layer; $W_0^l$, $W_{r,1}^l$ and $W_{r,2}^l$ are trainable parameters for the self-loop and for aggregating features at the $l$-th layer; $e_o$ and $e_r$ are the embeddings of entities and relations; and $n$ is the number of neighbors of entity $s$.
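The following PyTorch sketch shows how the aggregator in Formula (3) could be implemented; the class and variable names are ours, and details such as parameter initialization are simplified relative to the released code.

```python
import torch
import torch.nn as nn

class ERGCNLayer(nn.Module):
    """Sketch of Formula (3): each edge (s, r, o) sends the message
    W_{r,1} e_o + W_{r,2} e_r to s; messages are averaged over the n
    incident edges and added to the self-loop term W_0 h_s."""

    def __init__(self, num_rels, dim):
        super().__init__()
        self.w0 = nn.Linear(dim, dim, bias=False)                 # W_0
        self.w1 = nn.Parameter(torch.randn(num_rels, dim, dim))   # W_{r,1} per relation
        self.w2 = nn.Parameter(torch.randn(num_rels, dim, dim))   # W_{r,2} per relation

    def forward(self, h, ent_emb, rel_emb, edges):
        # h: (num_nodes, dim); edges: (num_edges, 3) rows of (s, r, o)
        s, r, o = edges[:, 0], edges[:, 1], edges[:, 2]
        msg = torch.einsum('ed,edk->ek', ent_emb[o], self.w1[r]) \
            + torch.einsum('ed,edk->ek', rel_emb[r], self.w2[r])
        agg = torch.zeros_like(h)
        agg.index_add_(0, s, msg)                                 # sum messages per subject
        deg = torch.zeros(h.size(0), 1, device=h.device)
        deg.index_add_(0, s, torch.ones(len(s), 1, device=h.device))
        return self.w0(h) + agg / deg.clamp(min=1)                # 1/n averaging
```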
Therefore, the local historical representation of entity $s$ at time $t$ can be written as the sequence of neighborhood messages within an observation window of length $m$:
$$h_m(s, t) = \{h_{s,t-1}, h_{s,t-2}, \ldots, h_{s,t-m+1}\} \quad (4)$$
Then, we update the state of the local temporal feature for the query and its sub-query via a GRU block:
$$H_s^t = \mathrm{GRU}_1([h_m(s, t) : T_m(t)]) \quad (5)$$
We use the final hidden state vector $H_s^t$ to represent the local temporal feature of entity $s$ at time $t$. $T_m(t)$ is the sequence of global temporal features trained in the global unit, which will be discussed in the next part. The symbol $:$ represents the concatenation operation.
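A small sketch of Formula (5), where the sequence of neighborhood messages is concatenated with the global temporal features and encoded by a GRU; the shapes and the `batch_first` layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

m, d = 5, 200                  # observation window length and embedding size
gru1 = nn.GRU(input_size=2 * d, hidden_size=d, batch_first=True)

h_seq = torch.randn(1, m, d)   # h_m(s, t): neighborhood messages from ERGCN
T_seq = torch.randn(1, m, d)   # T_m(t): global temporal features (Section 2.4)
_, H_st = gru1(torch.cat([h_seq, T_seq], dim=-1))   # concatenate, then encode
H_st = H_st.squeeze(0)         # (1, d): local temporal feature H_s^t
```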

2.4. Global Unit

The distribution of entities on a given knowledge graph carries specific temporal information that implies the evolutionary trend of facts. Therefore, we try to represent these global evolutionary trends by modeling the entity distribution over time. We assume that the entity distribution depends on the historical graph features over the last $m$ steps. The entity distribution is therefore modeled by the function $f_3$ in Formula (6), where the current graph embedding vector $T_t$ is the input:
$$p(s \mid t) = f_3(T_t) \quad (6)$$
To learn the graph embedding, we propose the global unit to capture the global structural state of the entire current graph and to record the evolutionary trend of that state. To capture the global structural state of each TKG snapshot, we use our ERGCN block to learn the semantic vectors of all entities $\{h_{s,t}\}$ and then apply an element-wise max-pooling operation $f_{max}$ to represent the current global state:
$$g_t = f_{max}(\{h_{s,t}\}) \quad (7)$$
Then, we use the historical sequence of graph states over the last $m$ timestamps to represent the evolutionary trend:
$$g_m(t) = \{g_{t-1}, g_{t-2}, \ldots, g_{t-m+1}\} \quad (8)$$
To extract the evolutionary trend from $g_m(t)$, we use the hidden state trained by a GRU block:
$$T_t = \mathrm{GRU}_2(g_m(t)) \quad (9)$$
$T_t$ summarizes the evolutionary trend of the whole graph from a global view. Clearly, a neighbor message aggregated by ERGCN only provides a local view around a fact, so many context dependencies and semantic interactions between distant facts are lost if we focus only on local views. To compensate for this drawback, we use the graph embedding as a complement to the local views, representing a global view of all facts. We then define the historical sequence of global embeddings $T_m(t)$ as the global temporal features in an observation window of length $m$:
$$T_m(t) = \{T_{t-1}, T_{t-2}, \ldots, T_{t-m+1}\} \quad (10)$$
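A sketch of the global unit (Formulas (7)–(9)): each snapshot's entity vectors are max-pooled into $g_t$, and a second GRU summarizes the last $m$ pooled vectors into $T_t$. The random tensors below stand in for the per-snapshot outputs of the ERGCN block.

```python
import torch
import torch.nn as nn

d, m, num_entities = 200, 5, 1000
gru2 = nn.GRU(input_size=d, hidden_size=d, batch_first=True)

g_seq = []
for _ in range(m):
    h_all = torch.randn(num_entities, d)   # {h_{s,t}}: stand-in for ERGCN output
    g_t, _ = h_all.max(dim=0)              # element-wise max-pooling f_max
    g_seq.append(g_t)

_, T_t = gru2(torch.stack(g_seq).unsqueeze(0))   # T_t: evolutionary trend vector
```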

2.5. Decoding Process

To answer the query and sub-query, the conditional distributions of the predicted objects $p(\tilde{o} \mid s, r, t)$ and relations $p(\tilde{r} \mid s, t)$ are modeled by two linear functions, presented as Formulas (11) and (12):
$$p(\tilde{o} \mid s, r, t) = [H_s^t ; e_s ; e_r] W_o + b_o \quad (11)$$
$$p(\tilde{r} \mid s, t) = [H_s^t ; e_s] W_r + b_r \quad (12)$$
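A sketch of the two decoders in Formulas (11) and (12); the dimensions and variable names are illustrative.

```python
import torch
import torch.nn as nn

d, num_ents, num_rels = 200, 1000, 24
decode_obj = nn.Linear(3 * d, num_ents)   # [H_s^t ; e_s ; e_r] -> object scores
decode_rel = nn.Linear(2 * d, num_rels)   # [H_s^t ; e_s]       -> relation scores

H_st, e_s, e_r = torch.randn(1, d), torch.randn(1, d), torch.randn(1, d)
obj_scores = decode_obj(torch.cat([H_st, e_s, e_r], dim=-1))
rel_scores = decode_rel(torch.cat([H_st, e_s], dim=-1))
```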
As the entity prediction task is treated as a multi-class classification task, cross-entropy is selected as the loss function. For simplicity of notation, we omit the prediction marks in Formula (13). The loss function is:
$$\mathcal{L} = -\sum_{(s,r,o)_t \in G_t} \left[ \log p(o \mid s, r, t) + \lambda \log p(r \mid s, t) \right] \quad (13)$$
Here, λ is a hyper-parameter that balances the importance of the two parts. In entity prediction tasks, we aim to predict objects given the relevant subjects and their linked relations.
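The joint loss of Formula (13) combines a cross-entropy term over objects with a λ-weighted cross-entropy term over relation types, as in this sketch (scores, targets and the λ value are made up):

```python
import torch
import torch.nn.functional as F

num_ents, num_rels, lam = 1000, 24, 0.5
obj_scores = torch.randn(1, num_ents)   # logits for p(o | s, r, t)
rel_scores = torch.randn(1, num_rels)   # logits for p(r | s, t)
obj_gold = torch.tensor([42])           # gold object id for the query
rel_gold = torch.tensor([3])            # gold relation id for the sub-query

loss = F.cross_entropy(obj_scores, obj_gold) \
     + lam * F.cross_entropy(rel_scores, rel_gold)
```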
We summarize the whole training process in Algorithm 1. Our training approach is divided into two steps. In the first step, for a preset maximum number of iterations (epochs), we generate the graph embedding from the global unit and save the optimal result for the next step; note that we choose the 2-norm as the loss function in the global model to fit the temporal distribution of all subject entities $D_{s,t}$. In the second step, we use the local learning unit to estimate the conditional probability distribution of object entities $\hat{p}(o \mid s, r, t)$ and answer the queries. Within the preset maximum number of iterations, we keep the model with the best MRR as the final model.
Algorithm 1: Learning algorithm of ERGCN
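The published Algorithm 1 appears as a figure; the following is a hedged Python sketch of the two-step procedure it describes. The optimizer choice and the `l2_loss`, `joint_loss` and `validate` methods are hypothetical placeholders for the corresponding steps in the text, not functions of the released code.

```python
import copy
import torch

def train_ergcn(global_unit, local_unit, snapshots, epochs=30, lr=1e-3):
    # Step 1: fit the global unit with a 2-norm loss against the temporal
    # distribution of subject entities D_{s,t}; keep the best checkpoint.
    opt_g = torch.optim.Adam(global_unit.parameters(), lr=lr)
    best_g, best_loss = None, float('inf')
    for _ in range(epochs):
        loss = global_unit.l2_loss(snapshots)        # hypothetical method
        opt_g.zero_grad(); loss.backward(); opt_g.step()
        if loss.item() < best_loss:
            best_loss, best_g = loss.item(), copy.deepcopy(global_unit)

    # Step 2: train the local unit with the joint loss of Formula (13),
    # keeping the checkpoint with the best validation MRR.
    opt_l = torch.optim.Adam(local_unit.parameters(), lr=lr)
    best_l, best_mrr = None, 0.0
    for _ in range(epochs):
        loss = local_unit.joint_loss(snapshots, best_g)   # hypothetical method
        opt_l.zero_grad(); loss.backward(); opt_l.step()
        mrr = local_unit.validate(snapshots, best_g)      # hypothetical method
        if mrr > best_mrr:
            best_mrr, best_l = mrr, copy.deepcopy(local_unit)
    return best_g, best_l
```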

3. Results

3.1. Datasets

To evaluate the performance of ERGCN, we selected six representative datasets widely used in previous works on the entity prediction task over TKGs: YAGO [27], WIKI [21], ICEWS14 [11], ICEWS15 [11], ICEWS18 [28] and GDELT [29]. YAGO and WIKI contain temporal facts extracted from open-source datasets. The ICEWS series are event-based datasets from the Integrated Crisis Early Warning System, and GDELT comes from the Global Database of Events, Language and Tone. The statistics of all datasets are shown in Table 1.

3.2. Evaluation Metrics

In the experiments, MRR and Hits@1/3/10 are selected as the metrics for entity prediction. Because Hits@1 on YAGO and WIKI is not reported in previous works [1,26], we only record Hits@3 and Hits@10 for those datasets. It is worth noting that some previous works use different filter settings to evaluate performance; hence, to make the results comparable, we only report the raw (unfiltered) results of each model.
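For reference, raw MRR and Hits@k can be computed as in the sketch below: the gold object is ranked against all entity scores without filtering out other valid triples (function and variable names are ours).

```python
import torch

def raw_metrics(scores, gold, ks=(1, 3, 10)):
    """scores: (num_queries, num_entities) logits; gold: (num_queries,) ids."""
    gold_scores = scores.gather(1, gold.view(-1, 1))
    ranks = (scores > gold_scores).sum(dim=1) + 1          # 1-based raw ranks
    mrr = (1.0 / ranks.float()).mean().item()
    hits = {k: (ranks <= k).float().mean().item() for k in ks}
    return mrr, hits
```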

3.3. Benchmarks

Our ERGCN model is compared with two types of models: static KG models and dynamic TKG reasoning models. DistMult [5], ConvE [6], RGCN [19] and HyTE [30] are selected as static models. TTransE [21], TA-DistMult [11], R-GCRN [23], RE-NET [1] and REGCN [26] are selected as dynamic methods released in recent years.

3.4. Implementation Settings

The embedding dimension d is 200 in both the local learning unit and the global unit. The number of ERGCN layers is 1 in the local learning unit and 2 in the global unit. The dropout rate is 0.2 in both units. We tested history lengths m from 1 to 10 and found the optimal length to be 5 on all datasets. The experiments use one-step inference in validation and testing. All experiments report only the results of reasoning about objects on the test set under the raw metric. We run each experiment five times on each dataset and report the average results.

3.5. Result Analysis

The experimental results are presented in Table 2 and Table 3. ERGCN outperforms the benchmarks on WIKI and the ICEWS datasets; in particular, the performance on WIKI rises significantly. The experimental results show that making full use of semantic information is helpful in entity prediction tasks. ERGCN clearly works better than static models because it captures the evolutionary pattern of facts and can therefore achieve higher performance when tested on unseen temporal knowledge graphs. Compared with recent dynamic models, such as REGCN and RE-NET, our ERGCN overtakes the others in most tasks. Although ERGCN does not achieve the best performance on YAGO and GDELT, its results are very close to the best, so its overall performance is better. The results verify the importance of treating various relation types differently, as they contain much useful semantic information about the temporal dependencies of facts. As mentioned above, we only use a one-layer ERGCN block in these tasks; the reason is that performance on the high-accuracy metrics, such as Hits@1 and Hits@3, drops significantly when there is more than one layer. This phenomenon may indicate that ERGCN focuses on 1-hop neighborhoods, while long-distance relationships may interfere with the entity reasoning process. Nevertheless, ERGCN still outperforms the other dynamic models, which suggests that information in the 1-hop neighborhood was underutilized by previous approaches and that ERGCN extracts this information more effectively.
ERGCN is similar to RE-NET, but we pay more attention to applying the full semantic information of facts. By capturing more precise temporal representations of sequential knowledge, ERGCN overtakes RE-NET on the majority of the datasets, and our results are close to those of RE-NET on GDELT. Unlike REGCN, which includes a new recurrent block to learn the sequential histories of entities, the structure of ERGCN is simple, yet its performance is good.
Compared with the previous best results on WIKI, ERGCN improves the MRR by 11.44%, Hits@3 by 11.35% and Hits@10 by 8.40%. In this dataset, temporal facts are widely collected from the open-source dataset Wikipedia. The informative interactions between different entities are discrete and the temporal dependencies around facts are often limited to small areas, so 1-hop neighborhoods contain the most important structural information about dependencies. Rather than concentrating attention on neighboring entities, ERGCN focuses on the interaction between entities and their linked relation types, which provides more relevant structural dependencies between facts.
The results on ICEWS show that, compared with other methods, using more semantic information of facts provides more accurate temporal characteristics for the reasoning process. In terms of MRR, ERGCN performs 0.17%/5.38%/0.78% better than the previous best on ICEWS14/ICEWS15/ICEWS18, respectively; moreover, ERGCN improves Hits@1/3/10 on each dataset as well. The ICEWS series is event-based, and its facts change frequently. Therefore, learning the relation type becomes a key point, as it provides basic information indicating the trend of the facts. Different from previous studies, ERGCN focuses on learning the interaction between entities and relation types, which preserves various temporal dependencies; if these complex structural dependencies are ignored, much is lost in modeling sequential patterns. The results demonstrate that ERGCN is more capable of learning the complex temporal structures in TKGs.
ERGCN's performance on YAGO and GDELT is slightly worse than the previous best. YAGO consists of many temporal facts with repetitive patterns; ERGCN does not handle this situation well, but the phenomenon does not appear on other datasets. GDELT includes massive numbers of concepts and definitions that follow specific rules, which makes entity reasoning difficult; the results of all models are similarly poor on GDELT.
Notably, the results of all methods on ICEWS18 and GDELT remain at a low level; for example, the MRR is under 20% on GDELT and under 30% on ICEWS18. This shows that capturing the evolutionary trends of facts on TKGs is still a hard challenge and that further studies are needed to identify the complex dependencies between facts at different times.

3.6. Ablation Study

In this part, we discuss the effect of each component of ERGCN by conducting ablation studies on WIKI and ICEWS18. To test the importance of the graph embedding, we remove the global unit from our approach, denoted ERGCN wtg. To illustrate the essential contextual semantic information, we remove the learnable weight $W_{r,2}$ from ERGCN, denoted ERGCN wtr. To demonstrate the necessity of the semantic interaction, we remove the sub-query during training, denoted ERGCN wtc. The contributions of each component of ERGCN are reported in Table 4 and Table 5.
To illustrate how the global embedding affects the results, we conduct experiments without the global model; the results are denoted ERGCN wtg. Removing the global embedding causes a significant decline in performance on WIKI and ICEWS18: without it, ERGCN loses many temporal dependencies over the whole graph and focuses only on learning the neighborhood structural information.
After removing the independent weight on relation types, the model becomes ERGCN wtr, which is similar to other studies in which entities and relations share the same transformation weights during training. Since the correlations between various entities and relations usually differ, removing the relation-type weight loses specific features of certain combinations of facts. The results confirm this prediction, decreasing by about 1–2% on WIKI and ICEWS18.
Finally, the results labeled ERGCN wtc remove the relation constraint between entities and relations. This constraint can be seen as the interaction between entities and their linked relations, which helps the model obtain the combined features of facts; removing it likewise degrades performance.

4. Conclusions

After reviewing the previous literature, we found that the interaction of semantic features between entities and relations is often omitted, as many studies focus on designing methods to extract features from entities and their neighbors. We propose the Enhanced Relational Graph Convolution Network (ERGCN), modified from previous GCN models, to model this interaction by assembling relations and entities together. Although experiments show that relations by themselves provide less information for prediction tasks than entities, combining relations and entities enhances the contextual information between them and benefits the tasks; the experimental results show that the improvement is significant.
In future work, we plan to apply ERGCN to different datasets and applications to verify its effectiveness, and we will also try other methods to further extract information from the interaction between entities and relations. We notice that our method focuses on aggregating nearby information around the source entity but struggles to utilize information from long-distance relational paths; finding ways to capture this information to improve the model's performance is an important part of our future work.

Author Contributions

Conceptualization, X.X.; Methodology, Y.W. and X.X.; Software, X.X.; Writing—original draft, X.X.; Writing—review & editing, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number 61375053.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TKG    Temporal Knowledge Graph
GCN    Graph Convolutional Network
MRR    Mean Reciprocal Rank
RNN    Recurrent Neural Network

References

  1. Jin, W.; Zhang, C.; Szekely, P.A.; Ren, X. Recurrent Event Network for Reasoning over Temporal Knowledge Graphs. arXiv 2019, arXiv:1904.05530. [Google Scholar]
  2. Tiddi, I.; Schlobach, S. Knowledge graphs as tools for explainable machine learning: A survey. Artif. Intell. 2022, 302, 103627. [Google Scholar] [CrossRef]
  3. Wang, H.; Zhang, F.; Wang, J.; Zhao, M.; Li, W.; Xie, X.; Guo, M. Ripple Network: Propagating User Preferences on the Knowledge Graph for Recommender Systems. arXiv 2018, arXiv:1803.03467. [Google Scholar]
  4. Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Proceedings of the Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, NV, USA, 5–8 December 2013; pp. 2787–2795. [Google Scholar]
  5. Yang, B.; Yih, W.; He, X.; Gao, J.; Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  6. Dettmers, T.; Minervini, P.; Stenetorp, P.; Riedel, S. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, LA, USA, 2–7 February 2018; pp. 1811–1818. [Google Scholar]
  7. Wang, H.; Zhang, F.; Xie, X.; Guo, M. DKN: Deep Knowledge-Aware Network for News Recommendation. arXiv 2018, arXiv:1801.08284. [Google Scholar]
  8. Bonatti, P.A.; Ioffredo, L.; Petrova, I.M.; Sauro, L.; Siahaan, I.R. Real-time reasoning in OWL2 for GDPR compliance. Artif. Intell. 2020, 289, 103389. [Google Scholar] [CrossRef]
  9. Kazemi, S.M.; Goel, R.; Jain, K.; Kobyzev, I.; Sethi, A.; Forsyth, P.; Poupart, P. Representation Learning for Dynamic Graphs: A Survey. J. Mach. Learn. Res. 2020, 21, 70:1–70:73. [Google Scholar]
  10. Trivedi, R.; Dai, H.; Wang, Y.; Song, L. Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017; Volume 70, pp. 3462–3471. [Google Scholar]
  11. García-Durán, A.; Dumancic, S.; Niepert, M. Learning Sequence Encoders for Temporal Knowledge Graph Completion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 4816–4821. [Google Scholar]
  12. Jiang, T.; Liu, T.; Ge, T.; Sha, L.; Chang, B.; Li, S.; Sui, Z. Towards Time-Aware Knowledge Graph Completion. In Proceedings of the COLING 2016, 26th International Conference on Computational Linguistics, Osaka, Japan, 11–16 December 2016; pp. 1715–1724. [Google Scholar]
  13. Ji, G.; He, S.; Xu, L.; Liu, K.; Zhao, J. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, Beijing, China, 26–31 July 2015; Volume 1, pp. 687–696. [Google Scholar]
  14. Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; pp. 2181–2187. [Google Scholar]
  15. Nickel, M.; Rosasco, L.; Poggio, T. Holographic Embeddings of Knowledge Graphs; AAAI Press: Washington, DC, USA, 2015. [Google Scholar]
  16. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  17. Hamilton, W.L.; Ying, Z.; Leskovec, J. Inductive Representation Learning on Large Graphs. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 1024–1034. [Google Scholar]
  18. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017. [Google Scholar]
  19. Schlichtkrull, M.S.; Kipf, T.N.; Bloem, P.; van den Berg, R.; Titov, I.; Welling, M. Modeling Relational Data with Graph Convolutional Networks. In Proceedings of the The Semantic Web—15th International Conference, ESWC 2018, Heraklion, Crete, Greece, 3–7 June 2018; Volume 10843, pp. 593–607. [Google Scholar]
  20. Goel, R.; Kazemi, S.M.; Brubaker, M.; Poupart, P. Diachronic Embedding for Temporal Knowledge Graph Completion. arXiv 2019, arXiv:1907.03143. [Google Scholar] [CrossRef]
  21. Leblay, J.; Chekol, M.W. Deriving Validity Time in Knowledge Graph. In Proceedings of the Companion of the The Web Conference 2018 on The Web Conference 2018, WWW 2018, Lyon, France, 23–27 April 2018; pp. 1771–1776. [Google Scholar]
  22. Trivedi, R.; Farajtabar, M.; Biswal, P.; Zha, H. DyRep: Learning Representations over Dynamic Graphs. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  23. Seo, Y.; Defferrard, M.; Vandergheynst, P.; Bresson, X. Structured Sequence Modeling with Graph Convolutional Recurrent Networks. In Proceedings of the Neural Information Processing—25th International Conference, ICONIP 2018, Siem Reap, Cambodia, 13–16 December 2018; Volume 11301, pp. 362–373. [Google Scholar]
  24. Pareja, A.; Domeniconi, G.; Chen, J.; Ma, T.; Suzumura, T.; Kanezashi, H.; Kaler, T.; Schardl, T.B.; Leiserson, C.E. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, 7–12 February 2020; pp. 5363–5370. [Google Scholar]
  25. Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar]
  26. Li, Z.; Jin, X.; Li, W.; Guan, S.; Guo, J.; Shen, H.; Wang, Y.; Cheng, X. Temporal knowledge graph reasoning based on evolutional representation learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 408–417. [Google Scholar]
  27. Mahdisoltani, F.; Biega, J.; Suchanek, F.M. YAGO3: A Knowledge Base from Multilingual Wikipedias. In Proceedings of the Seventh Biennial Conference on Innovative Data Systems Research, CIDR 2015, Asilomar, CA, USA, 4–7 January 2015. [Google Scholar]
  28. ICEWS Coded Event Data. Harvard Dataverse, 2015, Volume 12. Available online: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/28075 (accessed on 22 November 2022).
  29. Leetaru, K.; Schrodt, P.A. Gdelt: Global data on events, location, and tone, 1979–2012. In Proceedings of the ISA Annual Convention, San Francisco, CA, USA, 3–6 April 2013; Citeseer: Princeton, NJ, USA, 2013; Volume 2, pp. 1–49. [Google Scholar]
  30. Dasgupta, S.S.; Ray, S.N.; Talukdar, P. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 2001–2011. [Google Scholar]
Figure 1. An example of the entity prediction task on temporal knowledge graphs.
Figure 2. Main structure of ERGCN.
Table 1. Dataset statistics.

| Dataset | N_Train | N_Valid | N_Test | Entities | Relations | Time Gap |
|---|---|---|---|---|---|---|
| YAGO | 161,540 | 19,523 | 20,026 | 10,623 | 10 | 1 year |
| WIKI | 539,286 | 67,538 | 63,110 | 12,554 | 24 | 1 year |
| GDELT | 1,734,399 | 238,765 | 305,241 | 7,691 | 240 | 15 min |
| ICEWS14 | 74,845 | 8,514 | 7,371 | 6,869 | 230 | 24 h |
| ICEWS15 | 368,868 | 46,302 | 46,159 | 10,094 | 251 | 24 h |
| ICEWS18 | 373,018 | 45,995 | 45,995 | 23,033 | 256 | 24 h |
Table 2. Experimental results of entity prediction on ICEWS14, ICEWS15 and ICEWS18 in raw metrics.

| Model | ICEWS14 (MRR/H@1/H@3/H@10) | ICEWS15 (MRR/H@1/H@3/H@10) | ICEWS18 (MRR/H@1/H@3/H@10) |
|---|---|---|---|
| DistMult | 20.32 / 6.13 / 27.59 / 46.61 | 19.91 / 5.63 / 27.22 / 47.33 | 13.86 / 5.61 / 15.22 / 31.26 |
| ConvE | 30.30 / 21.30 / 34.42 / 47.89 | 31.40 / 21.56 / 35.70 / 50.96 | 22.81 / 13.63 / 25.83 / 41.43 |
| RGCN | 28.03 / 19.42 / 31.95 / 44.83 | 27.13 / 18.83 / 30.41 / 43.16 | 15.05 / 8.13 / 16.49 / 29.01 |
| HyTE | 16.78 / 2.13 / 24.84 / 43.94 | 16.05 / 6.53 / 20.2 / 34.72 | 7.41 / 3.11 / 7.33 / 16.01 |
| TTransE | 12.86 / 3.14 / 15.72 / 33.65 | 16.53 / 5.51 / 20.77 / 39.26 | 8.44 / 1.85 / 8.95 / 22.38 |
| TA-DistMult | 26.22 / 16.83 / 29.72 / 45.23 | 27.51 / 17.57 / 31.46 / 47.32 | 16.42 / 8.61 / 18.13 / 32.51 |
| R-GCRN | 33.31 / 24.08 / 36.55 / 51.54 | 35.93 / 26.23 / 40.02 / 54.63 | 23.46 / 14.24 / 26.62 / 41.96 |
| RE-NET | 35.77 / 25.99 / 40.10 / 54.87 | 36.86 / 26.24 / 41.85 / 57.60 | 26.17 / 16.43 / 29.89 / 44.37 |
| REGCN | 37.78 / 27.17 / 42.50 / 58.84 | 38.27 / 27.43 / 43.06 / 59.93 | 27.51 / 17.82 / 31.17 / 46.55 |
| ERGCN (ours) | 37.95 / 28.77 / 42.54 / 55.32 | 43.65 / 33.48 / 48.92 / 62.94 | 28.29 / 18.46 / 32.60 / 47.47 |
Table 3. Experimental results of entity prediction on WIKI, YAGO and GDELT in raw metrics.

| Model | WIKI (MRR/H@1/H@3/H@10) | YAGO (MRR/H@1/H@3/H@10) | GDELT (MRR/H@1/H@3/H@10) |
|---|---|---|---|
| DistMult | 27.96 / – / 32.45 / 39.51 | 44.05 / – / 49.70 / 59.94 | 8.61 / 3.91 / 8.27 / 17.04 |
| ConvE | 26.03 / – / 30.51 / 39.18 | 41.22 / – / 47.03 / 59.90 | 18.37 / 11.29 / 19.36 / 32.13 |
| RGCN | 13.96 / – / 15.75 / 22.05 | 20.25 / – / 24.01 / 37.30 | 12.17 / 7.40 / 12.37 / 20.63 |
| HyTE | 25.40 / – / 29.16 / 37.54 | 14.42 / – / 39.73 / 46.98 | 6.69 / 0.01 / 7.57 / 19.06 |
| TTransE | 20.66 / – / 23.88 / 33.04 | 26.10 / – / 36.28 / 47.73 | 5.53 / 0.46 / 4.97 / 15.37 |
| TA-DistMult | 26.44 / – / 31.36 / 38.97 | 44.98 / – / 50.64 / 61.11 | 10.34 / 4.44 / 10.44 / 21.63 |
| R-GCRN | 28.68 / – / 31.44 / 38.58 | 38.58 / – / 43.71 / 48.53 | 18.63 / 11.53 / 19.81 / 32.42 |
| RE-NET | 30.87 / – / 33.55 / 41.27 | 46.81 / – / 52.71 / 61.93 | 19.60 / 12.03 / 20.56 / 33.89 |
| REGCN | 39.84 / – / 44.43 / 53.88 | 58.27 / – / 65.62 / 72.97 | 19.15 / 11.92 / 20.41 / 33.19 |
| ERGCN (ours) | 51.28 / – / 55.78 / 62.28 | 55.25 / – / 64.22 / 70.69 | 18.96 / 11.65 / 20.24 / 33.14 |
Table 4. Ablation studies on WIKI.

| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|
| ERGCN | 51.28 | 44.52 | 55.78 | 62.28 |
| ERGCN wtg | 40.87 | 31.21 | 48.71 | 54.36 |
| ERGCN wtr | 49.09 | 42.71 | 53.41 | 59.07 |
| ERGCN wtc | 49.12 | 42.67 | 53.55 | 59.22 |
Table 5. Ablation studies on ICEWS18.

| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|---|
| ERGCN | 28.29 | 18.46 | 32.60 | 47.47 |
| ERGCN wtg | 26.97 | 17.31 | 31.94 | 46.71 |
| ERGCN wtr | 27.13 | 16.68 | 31.18 | 47.07 |
| ERGCN wtc | 26.88 | 16.59 | 30.98 | 46.77 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
