New Advances in Graph Neural Networks (GNNs) and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E: Applied Mathematics".

Deadline for manuscript submissions: 15 April 2026

Special Issue Editor


Prof. Dr. Hua Duan
Guest Editor
College of Mathematics and System Science, Shandong University of Science and Technology, Qingdao 266590, China
Interests: deep learning; graph neural networks; machine learning; applied mathematics; computer vision; process mining

Special Issue Information

Dear Colleagues,

We sincerely invite you to submit your latest research on graph neural networks (GNNs) and their applications to this Special Issue, "New Advances in Graph Neural Networks (GNNs) and Applications", with a particular focus on new advances and application practices.

In recent years, graph neural networks have emerged as a powerful tool for handling complex relational data and have been widely applied in fields such as social network analysis, recommendation systems, bioinformatics, transportation networks, and intelligent perception. By learning representations of nodes, edges, and substructures, GNNs can effectively capture the hidden information in large-scale heterogeneous graphs, enabling breakthroughs in many key tasks. However, many challenges remain before GNNs can be deployed efficiently and at scale in practical scenarios.
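As a minimal illustration of the node-representation learning mentioned above, the sketch below shows one round of mean-aggregation message passing in plain PyTorch. It is a generic, simplified example for orientation only, not code from any paper in this Special Issue.

    # A minimal sketch of one round of GNN message passing (mean aggregation),
    # illustrating how node representations are updated from their neighbours.
    # Generic illustration; not taken from any paper in this Special Issue.
    import torch
    import torch.nn as nn

    class SimpleGNNLayer(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # adj: (N, N) adjacency matrix with self-loops; x: (N, in_dim) node features.
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # node degrees
            agg = adj @ x / deg                                 # mean of neighbour features
            return torch.relu(self.linear(agg))                 # transformed node embeddings

    # Toy usage: 4 nodes with 8-dimensional features on a small ring graph.
    x = torch.randn(4, 8)
    adj = torch.eye(4) + torch.tensor([[0, 1, 0, 1],
                                       [1, 0, 1, 0],
                                       [0, 1, 0, 1],
                                       [1, 0, 1, 0]], dtype=torch.float32)
    h = SimpleGNNLayer(8, 16)(x, adj)   # (4, 16) node representations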

The main difficulties include high computational cost, model adaptability in resource-constrained environments, robustness against adversarial attacks, and performance stability on dynamic and heterogeneous graphs. Active research directions therefore include algorithm optimization, model compression, distributed training, and robustness improvement.

This Special Issue aims to present the latest technological breakthroughs, innovative applications, and future research directions in the field of graph neural networks. We welcome original research papers, review articles, and case studies, with the aim of bringing academia and industry together to explore the future development of GNNs.

We look forward to your submissions and to ushering in a new era of graph neural networks together with peers worldwide!

Prof. Dr. Hua Duan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural network
  • machine learning
  • applied mathematics
  • computer vision
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

21 pages, 831 KB  
Article
TSAD: Transformer-Based Semi-Supervised Anomaly Detection for Dynamic Graphs
by Jin Zhang and Ke Feng
Mathematics 2025, 13(19), 3123; https://doi.org/10.3390/math13193123 - 30 Sep 2025
Abstract
Anomaly detection aims to identify abnormal instances that significantly deviate from normal samples. With the natural connectivity between instances in the real world, graph neural networks have become increasingly important in solving anomaly detection problems. However, existing research mainly focuses on static graphs, while there is less research on mining anomaly patterns in dynamic graphs, which has important application value. This paper proposes a Transformer-based semi-supervised anomaly detection framework for dynamic graphs. The framework adopts the Transformer architecture as the core encoder, which can effectively capture long-range dependencies and complex temporal patterns between nodes in dynamic graphs. By introducing time-aware attention mechanisms, the model can adaptively focus on important information at different time steps, thereby better understanding the evolution process of graph structures. The multi-head attention mechanism of Transformer enables the model to simultaneously learn structural and temporal features of nodes, while positional encoding helps the model understand periodic patterns in time series. Comprehensive experiments on three real datasets show that TSAD significantly outperforms existing methods in anomaly detection accuracy, particularly demonstrating excellent performance in label-scarce scenarios.
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
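The abstract above describes a Transformer encoder with time-aware attention and positional encoding over dynamic graph snapshots. The sketch below illustrates that general idea in PyTorch; class and parameter names are hypothetical, and this is not the authors' TSAD implementation.

    # A minimal sketch of Transformer-style temporal self-attention over per-node
    # snapshot embeddings, in the spirit of the abstract above. Hypothetical names;
    # not the authors' TSAD implementation.
    import torch
    import torch.nn as nn

    class TemporalNodeEncoder(nn.Module):
        def __init__(self, dim: int, num_heads: int = 4, max_steps: int = 64):
            super().__init__()
            self.pos = nn.Parameter(torch.randn(max_steps, dim) * 0.02)  # learned positional encoding
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.score = nn.Linear(dim, 1)  # anomaly score head (semi-supervised loss omitted)

        def forward(self, node_seq: torch.Tensor) -> torch.Tensor:
            # node_seq: (batch, T, dim) embeddings of the same nodes at T time steps.
            T = node_seq.size(1)
            h = node_seq + self.pos[:T]                 # inject temporal position information
            h, _ = self.attn(h, h, h)                   # time-aware self-attention across steps
            return torch.sigmoid(self.score(h[:, -1]))  # anomaly score from the latest step

    # Toy usage: 32 nodes, 10 snapshots, 64-dimensional structural embeddings.
    scores = TemporalNodeEncoder(64)(torch.randn(32, 10, 64))   # (32, 1) anomaly scores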

27 pages, 2378 KB  
Article
Advancing Graph Neural Networks for Complex Relational Learning: A Multi-Scale Heterogeneity-Aware Framework with Adversarial Robustness and Interpretable Analysis
by Hao Yang, Yunhong Zhou, Xianzhe Ji, Zifan Liu, Zhen Tian, Qiang Tang and Yanchao Shi
Mathematics 2025, 13(18), 2956; https://doi.org/10.3390/math13182956 - 12 Sep 2025
Abstract
Graph Neural Networks (GNNs) face fundamental algorithmic challenges in real-world applications due to a combination of data heterogeneity, adversarial heterophily, and severe class imbalance. A critical research gap exists for a unified framework that can simultaneously address these issues, limiting the deployment of GNNs in high-stakes domains like financial fraud detection and social network analysis. This paper introduces HAG-CFNet, a novel framework designed to bridge this gap by integrating three key innovations: (1) a heterogeneity-aware message-passing mechanism that uses relation-specific attention to capture rich semantic information; (2) a dual-channel heterophily detection module that explicitly identifies and neutralizes adversarial camouflage through separate aggregation pathways; and (3) a domain-aware counterfactual generator that produces plausible, actionable explanations by co-optimizing feature and structural perturbations. These are supported by a synergistic imbalance correction strategy combining graph-adapted oversampling with cost-sensitive learning. Extensive testing on large-scale financial datasets validates the framework’s impact: HAG-CFNet achieves a 4.2% AUC-PR improvement over state-of-the-art methods, demonstrates superior robustness by reducing performance degradation under structural noise by over 50%, and generates counterfactual explanations with 91.8% validity while requiring minimal perturbations. These advances provide a direct pathway to building more trustworthy and effective AI systems for critical applications ranging from financial risk management to supply chain analysis and social media content moderation.
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
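The first innovation listed above is heterogeneity-aware message passing with relation-specific attention. The sketch below shows one simple way to aggregate messages per relation type and then attend over relations; names and relation types are hypothetical, and this is not the authors' HAG-CFNet implementation.

    # A minimal sketch of relation-specific aggregation with attention over relation
    # types, in the spirit of the heterogeneity-aware message passing described above.
    # Hypothetical names; not the authors' HAG-CFNet implementation.
    import torch
    import torch.nn as nn

    class RelationAttentionLayer(nn.Module):
        def __init__(self, dim: int, num_relations: int):
            super().__init__()
            self.transforms = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_relations))
            self.rel_attn = nn.Linear(dim, 1)   # scores how much each relation contributes

        def forward(self, x: torch.Tensor, adjs: list) -> torch.Tensor:
            # x: (N, dim) node features; adjs: one (N, N) adjacency matrix per relation.
            per_rel = []
            for adj, lin in zip(adjs, self.transforms):
                deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
                per_rel.append(torch.relu(lin(adj @ x / deg)))      # relation-specific messages
            stacked = torch.stack(per_rel, dim=1)                   # (N, R, dim)
            weights = torch.softmax(self.rel_attn(stacked), dim=1)  # attention over relations
            return (weights * stacked).sum(dim=1)                   # fused node representation

    # Toy usage: 5 nodes, 2 hypothetical relation types.
    x = torch.randn(5, 16)
    adjs = [torch.eye(5), torch.ones(5, 5)]
    h = RelationAttentionLayer(16, 2)(x, adjs)   # (5, 16)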

31 pages, 2025 KB  
Article
Enterprise Bankruptcy Prediction Model Based on Heterogeneous Graph Neural Network for Fusing External Features and Internal Attributes
by Xinke Du, Jinfei Cao, Xiyuan Jiang, Jianyu Duan, Zhen Tian and Xiong Wang
Mathematics 2025, 13(17), 2755; https://doi.org/10.3390/math13172755 - 27 Aug 2025
Abstract
Enterprise bankruptcy prediction is a critical task in financial risk management. Traditional methods, such as logistic regression and decision trees, rely heavily on structured financial data, which limits their ability to capture complex relational networks and unstructured industry information. Heterogeneous graph neural networks (HGNNs) offer a solution by modeling multiple relationships between enterprises. However, current models struggle with financial risk graph data challenges, such as the oversimplification of internal financial features and the lack of dynamic imputation for missing external topological features. To address these issues, we propose HGNN-EBP, an enterprise bankruptcy prediction algorithm that integrates both internal and external features. The model constructs a multi-relational heterogeneous graph that combines structured financial data, unstructured textual information, and real-time industry data. A multi-scale graph convolution network captures diverse relationships, while a Transformer-based self-attention mechanism dynamically imputes missing external topological features. Finally, a multi-layer perceptron (MLP) predicts bankruptcy probability. Experimental results on a dataset of 32,459 Chinese enterprises demonstrate that HGNN-EBP outperforms traditional models, especially in handling relational diversity, missing features, and dynamic financial risk data.
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
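The abstract above describes self-attention-based imputation of missing external features followed by an MLP that outputs bankruptcy probability. The sketch below illustrates that two-stage pattern in simplified form; names are hypothetical, and this is not the authors' HGNN-EBP implementation.

    # A minimal sketch of attention-based imputation of missing node features followed
    # by an MLP risk head, loosely following the pipeline described above.
    # Hypothetical names; not the authors' HGNN-EBP implementation.
    import torch
    import torch.nn as nn

    class ImputeAndPredict(nn.Module):
        def __init__(self, dim: int, num_heads: int = 4):
            super().__init__()
            self.missing_token = nn.Parameter(torch.zeros(dim))   # placeholder for missing features
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def forward(self, x: torch.Tensor, missing: torch.Tensor) -> torch.Tensor:
            # x: (1, N, dim) enterprise embeddings; missing: (N,) boolean mask of nodes
            # whose external (topological) features are unavailable.
            x = torch.where(missing[None, :, None], self.missing_token.expand_as(x), x)
            h, _ = self.attn(x, x, x)                      # observed nodes inform the missing ones
            return torch.sigmoid(self.mlp(h)).squeeze(0)   # (N, 1) bankruptcy probabilities

    # Toy usage: 6 enterprises, 32-dim fused features, two with missing external features.
    x = torch.randn(1, 6, 32)
    missing = torch.tensor([False, True, False, False, True, False])
    probs = ImputeAndPredict(32)(x, missing)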
