
Robustness of Graph Neural Networks

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 April 2025) | Viewed by 1912

Special Issue Editors


Prof. Dr. Irwin King
Guest Editor
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
Interests: machine learning; social computing; neural networks; web intelligence; data mining; information retrieval; plagiarism detection; multimedia information processing

Dr. Ziqiao Meng
Guest Editor Assistant
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
Interests: geometric deep learning and its applications in AI for science, including equivariance analysis, molecular modeling, understanding chemical reactions, and mate…

Special Issue Information

Dear Colleagues,

The use of deep learning on graph-structured data has seen a significant rise in popularity in recent years. Among the various techniques for graph representation learning, Graph Neural Networks (GNNs) stand out as the most promising. They have been widely adopted in numerous applications, including, but not limited to, social network analysis, recommender systems, financial fraud detection, combinatorial optimization, knowledge graph embedding, drug discovery, and materials discovery.

The primary objective of this Special Issue is to assemble experts from diverse disciplines to showcase innovative methodologies and models that enhance the robustness of GNNs. While GNNs have achieved initial successes across a broad spectrum of applications, they can be overly sensitive to the noise and biases present in real-world datasets. This sensitivity can result in suboptimal performance and potentially lead to adverse societal implications. As such, the development of more robust GNNs is crucial for their effective deployment in practical scenarios.

In this Special Issue, we will discuss various aspects, including the development of novel robust GNN architectures; the robustness analysis of GNNs with respect to noise and dataset biases; and the examination of GNN robustness from the perspectives of entropy and information theory. We will also delve into the scalability and reliability of GNNs on large-scale graph datasets; their robustness to adversarial attacks, out-of-distribution samples, and group transformations (equivariance); and the application of robust GNN algorithms to social network analysis, point clouds, and biology- and chemistry-related fields.

Prof. Dr. Irwin King
Guest Editor

Dr. Ziqiao Meng
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robust GNN architectures
  • GNN robustness to noise
  • graph dataset biases
  • GNNs and information theory
  • scalable and reliable GNNs
  • adversarial robustness
  • out-of-distribution robustness
  • robustness to group transformations
  • robust GNNs for social network analysis
  • robust GNNs for point clouds
  • robust GNNs for proteins, molecules, and materials

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research

23 pages, 689 KiB  
Article
GBsim: A Robust GCN-BERT Approach for Cross-Architecture Binary Code Similarity Analysis
by Jiang Du, Qiang Wei, Yisen Wang and Xingyu Bai
Entropy 2025, 27(4), 392; https://doi.org/10.3390/e27040392 - 7 Apr 2025
Viewed by 312
Abstract
Recent advances in graph neural networks have transformed structural pattern learning in domains ranging from social network analysis to biomolecular modeling. Nevertheless, practical deployments in mission-critical scenarios such as binary code similarity detection face two fundamental obstacles: first, the inherent noise in graph construction processes, exemplified by incomplete control flow edges during binary function recovery; second, the substantial distribution discrepancies caused by cross-architecture instruction set variations. Conventional GNN architectures suffer severe performance degradation under such low signal-to-noise-ratio conditions and cross-domain operational environments, particularly in security-sensitive vulnerability identification tasks, where feature instability or domain shifts can trigger critical false judgments. To address these challenges, we propose GBsim, a novel approach that combines graph neural networks with natural language processing. GBsim employs a cross-architecture language model to transform binary functions into semantic graphs, uses a multilayer GCN for structural feature extraction, and applies a Transformer layer to integrate semantic information, generating robust cross-architecture embeddings that maintain high performance despite significant distribution shifts. Extensive experiments on a large-scale cross-architecture dataset show that GBsim achieves an MRR of 0.901 and a Recall@1 of 0.831, outperforming state-of-the-art methods. In real-world vulnerability detection tasks, GBsim achieves an average recall rate of 81.3% on a 1-day vulnerability dataset, demonstrating its practical effectiveness in identifying security threats and outperforming existing methods by 2.1%. This performance advantage stems from GBsim's ability to maximize information preservation across architectural boundaries, enhancing model robustness in the presence of noise and distribution shifts.
(This article belongs to the Special Issue Robustness of Graph Neural Networks)
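The full GBsim pipeline cannot be reproduced from the abstract, but its structural-feature stage can be illustrated. The sketch below is a hypothetical stand-in in plain NumPy (the graph, dimensions, and all function names are invented for illustration, not taken from the paper): one symmetrically normalized graph-convolution step over a toy control-flow graph, mean-pooling into a function embedding, and a cosine comparison of two embeddings.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbors, apply ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm @ feats @ weight, 0.0)

def embed(adj, feats, weights):
    """Stack GCN layers, then mean-pool node states into a
    fixed-size function embedding."""
    h = feats
    for w in weights:
        h = gcn_layer(adj, h, w)
    return h.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Toy control-flow graph: 4 basic blocks, 8-dim node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
adj = adj + adj.T  # treat control-flow edges as undirected
feats = rng.normal(size=(4, 8))
weights = [rng.normal(size=(8, 8)), rng.normal(size=(8, 8))]

emb_a = embed(adj, feats, weights)
# A slightly perturbed copy stands in for a "noisy" recovery of the
# same function; a robust embedding should keep the similarity high.
emb_b = embed(adj, feats + 0.01 * rng.normal(size=(4, 8)), weights)
print(round(cosine(emb_a, emb_b), 3))
```

In GBsim the node features would come from a cross-architecture language model and a Transformer layer would follow the GCN stack; this sketch only shows why neighborhood aggregation tends to smooth out small feature-level noise.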

15 pages, 664 KiB  
Article
Few-Shot Graph Anomaly Detection via Dual-Level Knowledge Distillation
by Xuan Li, Dejie Cheng, Luheng Zhang, Chengfang Zhang and Ziliang Feng
Entropy 2025, 27(1), 28; https://doi.org/10.3390/e27010028 - 1 Jan 2025
Viewed by 1315
Abstract
Graph anomaly detection is crucial in many high-impact applications across diverse fields. In anomaly detection tasks, collecting sufficient annotated data is costly and laborious. As a result, few-shot learning has been explored to address this issue, requiring only a few labeled samples to achieve good performance. However, conventional few-shot models may not fully exploit the information within auxiliary sets, leading to suboptimal performance. To tackle these limitations, we propose a dual-level knowledge distillation-based approach for graph anomaly detection, DualKD, which leverages two distinct distillation losses to improve generalization capabilities. In our approach, we initially train a teacher model to generate prediction distributions as soft labels, capturing the entropy of uncertainty in the data. These soft labels are then employed to construct the corresponding loss for training a student model, which can capture more detailed node features. In addition, we introduce two representation distillation losses, short and long representation distillation, to effectively transfer knowledge from the auxiliary set to the target set. Comprehensive experiments conducted on four datasets verify that DualKD remarkably outperforms the advanced baselines, highlighting its effectiveness in enhancing identification performance.
(This article belongs to the Special Issue Robustness of Graph Neural Networks)
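The soft-label component described in the abstract can be sketched in isolation. Below is a minimal, hypothetical NumPy illustration of the classic temperature-softened distillation objective (KL divergence between teacher and student distributions); it is not the authors' implementation, and the logits and function names are invented for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the standard soft-label distillation objective.
    A higher temperature spreads probability mass, exposing the
    teacher's uncertainty to the student."""
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = [2.0, 0.5, -1.0]   # confident "normal" prediction
aligned = [1.8, 0.4, -0.9]   # student close to the teacher
off     = [-1.0, 2.0, 0.5]   # student that disagrees

# The loss is near zero when the student matches the teacher and
# grows as the distributions diverge.
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, off))
```

DualKD adds two further representation-distillation losses (short and long) on top of this prediction-level term; those depend on model internals the abstract does not specify, so they are omitted here.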
