Special Issue "Trustworthy Graph Neural Networks: Models and Applications"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 15 September 2022 | Viewed by 3181

Special Issue Editors

Prof. Dr. Zhao Kang
Guest Editor
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610031, China
Interests: data mining; machine learning; artificial intelligence; information retrieval; social networks
Prof. Dr. Xiao Wang
Guest Editor
School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China
Interests: data mining; machine learning; analysis of complex networks; network embedding; graph neural networks; community detection

Special Issue Information

Dear Colleagues,

In the era of big data, graph data have attracted considerable attention. We have witnessed the impressive performance of graph neural networks (GNNs) on graph data, as well as their success in various real-world applications (e.g., recommender systems and molecular property prediction). The growing number of works on GNNs reflects a global trend in both the academic and industrial communities. Despite this progress, various challenges remain open, unexplored, or unidentified. One major concern is whether current GNNs are trustworthy, an inescapable question once GNNs are deployed in real-world applications, especially in risk-sensitive domains. To address it, we need to make GNNs more robust, explainable, and stable; there is thus a pressing demand for novel and advanced trustworthy GNNs. In this Special Issue, our goal is to bring together researchers and practitioners working on GNNs to address a wide range of theoretical and practical issues.

Prof. Dr. Zhao Kang
Prof. Dr. Xiao Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural networks
  • deep learning for graphs
  • graph representation learning
  • spectral graph theory
  • robust graph neural networks
  • explainable graph neural networks
  • stable graph neural networks
  • uncertainty in graph neural networks
  • applications of graph neural networks

Published Papers (6)


Research

Article
Fairness-Aware Predictive Graph Learning in Social Networks
Mathematics 2022, 10(15), 2696; https://doi.org/10.3390/math10152696 - 29 Jul 2022
Viewed by 189
Abstract
Predictive graph learning approaches bring significant advantages to many real-life applications, such as social networks, recommender systems, and other social-related downstream tasks. In these applications, learning models should produce accurate predictions to maximize their usefulness. However, current graph learning methods generally neglect differences in link strength, which leads to discriminative predictions and uneven performance across tasks. A fairness-aware predictive learning model is therefore needed to balance these link strength differences. To address this problem, we first formally define two biases (Preference and Favoritism) that widely exist in previous representation learning models. We then employ modularity maximization to distinguish strong links from weak links from a quantitative perspective. Finally, we propose a novel predictive learning framework, ACE, that first implements a link-strength-differentiated learning process and then integrates it with a dual propagation process. The effectiveness and fairness of ACE have been verified on four real-world social networks: compared to nine state-of-the-art methods, ACE and its variants show better performance. The ACE framework also reconstructs networks more faithfully, offering a promising route to resolving misinformation in graph-structured data.
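The modularity-based link split described in the abstract can be illustrated with a short, hypothetical sketch (not the authors' ACE implementation): given a community partition, Newman modularity scores the partition, and edges inside a community are treated as strong while edges across communities are treated as weak.

```python
from collections import defaultdict

def modularity(edges, comm):
    """Newman modularity Q for an undirected graph given as an edge list
    and a node -> community assignment."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # (1/2m) * sum_ij A_ij * delta(c_i, c_j): each intra-community edge counts twice.
    q = sum(1 for u, v in edges if comm[u] == comm[v]) / m
    # Subtract the expected term (1/2m) * sum_ij (k_i * k_j / 2m) * delta(c_i, c_j).
    for u in deg:
        for v in deg:
            if comm[u] == comm[v]:
                q -= deg[u] * deg[v] / (4.0 * m * m)
    return q

def split_links(edges, comm):
    """Label an edge 'strong' when both endpoints fall in the same community."""
    strong = [e for e in edges if comm[e[0]] == comm[e[1]]]
    weak = [e for e in edges if comm[e[0]] != comm[e[1]]]
    return strong, weak

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
strong, weak = split_links(edges, comm)
```

Here the bridge (2, 3) is the only weak link, matching the intuition that inter-community edges carry the fairness-relevant structure.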
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

Article
Knowledge-Based Scene Graph Generation with Visual Contextual Dependency
Mathematics 2022, 10(14), 2525; https://doi.org/10.3390/math10142525 - 20 Jul 2022
Viewed by 212
Abstract
Scene graph generation is the basis of various computer vision applications, including image retrieval, visual question answering, and image captioning. Previous studies have relied on visual features or incorporated auxiliary information to predict object relationships. However, the rich semantics of external knowledge have not yet been fully utilized, and combining visual and auxiliary information can lead to visual dependencies that impair relationship prediction among objects. Therefore, we propose a novel knowledge-based model with adjustable visual contextual dependency. Our model has three key components. The first module extracts the visual features and bounding boxes from the input image. The second module uses two encoders to fully integrate visual information and external knowledge. The third module introduces a visual context loss and a visual relationship loss to adjust the visual dependency of the model; the difference between the initial predictions and the visual dependency results is used to generate dependency-corrected results. The proposed model obtains better global and contextual information for predicting object relationships, and its visual dependencies can be adjusted through the two loss functions. The results of extensive experiments show that our model outperforms most existing methods.
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

Article
A Clustering Ensemble Framework with Integration of Data Characteristics and Structure Information: A Graph Neural Networks Approach
Mathematics 2022, 10(11), 1834; https://doi.org/10.3390/math10111834 - 26 May 2022
Cited by 1 | Viewed by 374
Abstract
Clustering ensemble is a research hotspot in data mining that aggregates several base clustering results into a single output clustering with improved robustness and stability. However, the validity of the ensemble result is usually affected by unreliability in the generation and integration of the base clusterings. To address this issue, we develop a clustering ensemble framework, viewed from the perspective of graph neural networks, that generates an ensemble result by integrating data characteristics and structure information. In this framework, we extract structure information from the base clustering results of the data set using a coupling affinity measure. We then combine the structure information with the data characteristics using a graph neural network (GNN) to learn their joint embeddings in latent space, employ a Gaussian mixture model (GMM) to predict the final cluster assignment in that space, and construct the GNN and GMM as a unified optimization model that integrates the objectives of graph embedding and consensus clustering. Our framework not only elegantly combines information from the feature space and the structure space, but also achieves representations well suited to final cluster partitioning. Experimental results on six synthetic benchmark data sets and six real-world data sets show that the proposed framework outperforms 12 reference algorithms developed on either a clustering ensemble architecture or a deep clustering strategy.
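The paper's coupling affinity measure is its own contribution; as a stand-in, the classic co-association matrix conveys the same idea of extracting structure information from base clusterings — entry (i, j) is the fraction of base clusterings that group samples i and j together. A minimal sketch:

```python
def co_association(base_clusterings):
    """Co-association matrix for a clustering ensemble: M[i][j] is the
    fraction of base clusterings that place samples i and j in the same
    cluster (cluster ids are arbitrary per clustering)."""
    n = len(base_clusterings[0])
    r = len(base_clusterings)
    M = [[0.0] * n for _ in range(n)]
    for labels in base_clusterings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    M[i][j] += 1.0 / r
    return M

# Three base clusterings of five samples.
base = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [2, 2, 1, 1, 0],
]
M = co_association(base)
```

The resulting matrix can serve as the structure-space input that a GNN-based ensemble would fuse with the raw data characteristics.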
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

Article
Multi-View Graph Clustering by Adaptive Manifold Learning
Mathematics 2022, 10(11), 1821; https://doi.org/10.3390/math10111821 - 25 May 2022
Viewed by 396
Abstract
Graph-oriented methods have been widely adopted in multi-view clustering because of their efficiency in learning the heterogeneous relationships and complex structures hidden in data. However, existing methods are typically built on a Euclidean structure rather than the more suitable manifold topological structure, which is better suited to intrinsic similarity learning. In this paper, we explore the implied adaptive manifold for multi-view graph clustering. Specifically, our model seamlessly integrates multiple adaptive graphs into a consensus graph while taking the manifold topological structure into account. We further impose a rank constraint on the consensus graph so that its connected components correspond precisely to distinct clusters. As a result, our model directly yields a discrete clustering result without any post-processing. In terms of clustering quality, our method achieves the best performance in 22 out of 24 cases across four evaluation metrics on six datasets, demonstrating the effectiveness of the proposed model. In terms of computational performance, our optimization algorithm is generally faster than, or in line with, other state-of-the-art algorithms, validating its efficiency.
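The rank constraint mentioned above exploits a standard fact of spectral graph theory: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components, so enforcing rank(L) = n − k forces the consensus graph into exactly k components. A small illustration of the component side of that equivalence (component counting only, not the paper's optimization):

```python
def connected_components(n, edges):
    """Count connected components with union-find. For a graph on n nodes,
    the Laplacian L has rank n - c, where c is the number of components,
    so a rank constraint rank(L) = n - k fixes exactly k clusters."""
    parent = list(range(n))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(x) for x in range(n)})

# A consensus graph on 6 nodes whose edges form exactly 2 components,
# i.e. rank(L) = 6 - 2 = 4 under the rank constraint with k = 2.
c = connected_components(6, [(0, 1), (1, 2), (3, 4), (4, 5)])
```

Each component is read off directly as a cluster, which is why no post-processing (such as k-means on embeddings) is needed.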
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

Article
Robust Graph Neural Networks via Ensemble Learning
Mathematics 2022, 10(8), 1300; https://doi.org/10.3390/math10081300 - 14 Apr 2022
Cited by 1 | Viewed by 462
Abstract
Graph neural networks (GNNs) have demonstrated remarkable ability in the task of semi-supervised node classification. However, most existing GNNs suffer from nonrobustness, which poses a great challenge for applying GNNs in sensitive scenarios. Some researchers concentrate on constructing ensemble models to mitigate this issue; nevertheless, these methods ignore the interaction among base models, leading to similar graph representations. Moreover, due to the deterministic propagation applied in most existing GNNs, each node relies heavily on its neighbors, leaving nodes sensitive to perturbations. Therefore, in this paper, we propose a novel framework of graph ensemble learning based on knowledge passing (called GEL) to address these issues. To achieve interaction, we treat the predictions of prior models as knowledge, obtaining more reliable predictions. Moreover, we design a multilayer DropNode propagation strategy to reduce each node's dependence on particular neighbors; this strategy also empowers each node to aggregate information from diverse neighbors, alleviating oversmoothing. We conduct experiments on three benchmark datasets: Cora, Citeseer, and Pubmed. GEL outperforms GCN by more than 5% in terms of accuracy across all three datasets and also performs better than other state-of-the-art baselines. Extensive experimental results further show that GEL alleviates the nonrobustness and oversmoothing issues.
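A plausible reading of the DropNode strategy (a sketch under assumptions, not the GEL code) is: drop each node's whole feature row with probability p, rescale the survivors by 1/(1 − p) so the expectation stays unbiased, then average over neighbors plus a self-loop. With p = 0 this reduces to plain deterministic mean aggregation.

```python
import random

def drop_node_propagate(adj, X, p, rng):
    """One DropNode propagation step on adjacency lists `adj` and feature
    rows `X`: zero out each row with probability p, rescale kept rows by
    1/(1-p), then mean-aggregate over neighbors with a self-loop."""
    n, d = len(X), len(X[0])
    # Drop whole feature rows at random; rescale kept rows to stay unbiased.
    kept = [[x / (1.0 - p) for x in X[i]] if rng.random() >= p else [0.0] * d
            for i in range(n)]
    out = []
    for i in range(n):
        neigh = adj[i] + [i]  # include the node itself (self-loop)
        out.append([sum(kept[j][f] for j in neigh) / len(neigh)
                    for f in range(d)])
    return out

rng = random.Random(0)
adj = [[1], [0, 2], [1]]        # path graph 0-1-2 as adjacency lists
X = [[1.0], [2.0], [3.0]]
H = drop_node_propagate(adj, X, 0.0, rng)  # p = 0: plain mean aggregation
```

Because each stochastic pass drops a different subset of rows, repeated applications expose a node to varied neighborhoods, which is the intuition behind the robustness and anti-oversmoothing claims.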
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)

Article
Inferring from References with Differences for Semi-Supervised Node Classification on Graphs
Mathematics 2022, 10(8), 1262; https://doi.org/10.3390/math10081262 - 11 Apr 2022
Cited by 1 | Viewed by 327
Abstract
Following the application of deep learning to graph data, graph neural networks (GNNs) have become the dominant method for node classification on graphs in recent years. To assign nodes preset labels, most GNNs inherit the end-to-end approach of deep learning, in which node features are input to the model while the labels of pre-classified nodes supervise learning. However, while these methods can make full use of node features and their associations, they treat labels separately and ignore the structural information of those labels. To utilize information on label structure, this paper proposes a method called 3ference, which infers from references with differences. Specifically, 3ference predicts a node's label from the features of that node concatenated with both the features and the labels of its relevant nodes. With this additional label information, 3ference captures the transition pattern of labels between nodes, as subsequent analysis and visualization revealed. Experiments on a synthetic graph and seven real-world graphs showed that this knowledge about label associations helps 3ference predict accurately with fewer parameters, fewer pre-classified nodes, and varying label patterns compared with GNNs.
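The inference-from-references idea can be sketched as follows; the helper and its input layout are illustrative assumptions, not the 3ference architecture: a node's model input is its own features concatenated with the mean of its relevant nodes' features and the mean of their one-hot labels, so label structure enters the prediction directly.

```python
def reference_input(x, neigh_feats, neigh_labels, num_classes):
    """Build the input vector for one node (illustrative): own features,
    then the mean of relevant nodes' features, then the mean of their
    one-hot labels."""
    d = len(x)
    mean_f = [sum(f[k] for f in neigh_feats) / len(neigh_feats)
              for k in range(d)]
    mean_y = [0.0] * num_classes
    for y in neigh_labels:
        mean_y[y] += 1.0 / len(neigh_labels)
    return x + mean_f + mean_y

# One node with 2-d features, two relevant nodes, and 3 label classes.
vec = reference_input([0.5, 1.0], [[1.0, 0.0], [0.0, 2.0]], [0, 1],
                      num_classes=3)
```

Any downstream classifier consuming such vectors sees neighbor labels as explicit evidence, which is how label transition patterns become learnable.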
(This article belongs to the Special Issue Trustworthy Graph Neural Networks: Models and Applications)
