How Graph Convolutional Networks Work: Mechanisms and Models

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 October 2025 | Viewed by 1042

Special Issue Editors

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610039, China
Interests: machine learning; medical image analysis; graph learning; artificial neural networks; graph-related neural networks
School of Mathematical and Computational Sciences, Massey University, Auckland 1142, New Zealand
Interests: clustering analysis; spectral learning; graph machine learning
CBICA, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: medical image registration; medical image segmentation; machine learning; deep learning

Special Issue Information

Dear Colleagues,

Graph Convolutional Networks (GCNs) have undergone rapid development, giving rise to a wide variety of models across numerous domains—including biomedicine, genetic analysis, and pattern recognition. As a class of deep learning methods designed to operate on graph-structured data, GCNs excel at capturing local structural information and identifying meaningful patterns tailored to tasks such as node classification, graph classification, and link prediction. Furthermore, they can learn node representations that reflect underlying topological relationships and serve as informative features for downstream applications like classification and clustering.
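As a concrete illustration of the local aggregation these models perform, below is a minimal NumPy sketch of the symmetrically normalized propagation rule popularized by Kipf and Welling, H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W); the toy graph, feature dimensions, and weights are illustrative values, not part of any specific submission:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)                     # degrees of the self-looped graph
    d_inv_sqrt = np.diag(deg ** -0.5)           # D^-1/2
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
    return np.maximum(norm_adj @ features @ weight, 0.0)  # ReLU activation

# Toy 3-node path graph, 2-dim input features, 2-dim output
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.normal(size=(3, 2)), rng.normal(size=(2, 2)))
print(h.shape)  # one row of hidden features per node
```

Each node's new representation mixes its own features with those of its neighbors, which is precisely the "local structural information" exploited in node classification, graph classification, and link prediction.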

Despite these strengths, several challenges remain in applying GCNs. First, their transductive nature often limits generalization to unseen data, since the graph is typically fixed and constructed only from the training set. Second, storing the full graph structure can be memory-intensive, so scalability requires careful consideration. Third, effectively modeling diverse data types, whether in homogeneous or heterogeneous graphs, remains a critical challenge that depends on the task at hand.
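The scalability concern above is commonly addressed by mini-batch training over sampled neighborhoods, as in GraphSAGE, rather than materializing the full graph in memory. A toy sketch of uniform neighbor sampling (the adjacency list and budget `k` are illustrative assumptions):

```python
import random

def sample_neighbors(adj_list, node, k):
    """Uniformly sample up to k neighbors of a node (GraphSAGE-style)."""
    neigh = adj_list[node]
    return list(neigh) if len(neigh) <= k else random.sample(neigh, k)

# Toy star graph: node 0 is connected to nodes 1, 2, 3
adj_list = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
random.seed(0)
batch = {n: sample_neighbors(adj_list, n, 2) for n in adj_list}
print(batch)  # at most 2 sampled neighbors per node
```

Capping the neighborhood size bounds the memory and compute per batch, at the cost of a stochastic approximation to full-graph aggregation.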

To address these issues and advance the field, this Special Issue invites scholars to contribute novel research on the mechanisms and modeling of GCN frameworks. We also welcome high-quality submissions focused on theoretical analysis and on improving the interpretability of GCNs.

Below is a non-exhaustive list of topics relevant to this Special Issue:

Theoretical foundations and analytical studies of GCNs;

Kernel-based, metric-based, and causal inference-based learning in GCNs;

Explainable representation learning with GCNs;

Supervised, semi-supervised, unsupervised, transfer, and reinforcement learning approaches for GCNs;

Missing data imputation using GCN models;

Safety and reliability in GCN representation learning;

Subgraph representation learning in GCNs;

Federated learning with GCNs;

Modeling homogeneous and heterogeneous graphs with GCNs.

Dr. Rongyao Hu
Dr. Tong Liu
Dr. Jiong Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • theoretical construction and analysis of GCNs
  • kernel-based, metric-based, and causal inference-based learning for GCNs
  • explainable representation learning for GCNs
  • supervised, semi-supervised, unsupervised, transfer, and reinforcement learning for GCNs
  • missing data imputation with GCN models
  • safety and reliability of GCN representation learning
  • subgraph learning for GCNs
  • federated learning with GCN models
  • homogeneous and heterogeneous graphs for GCNs

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research

18 pages, 3460 KB  
Article
Explainable Multi-Frequency Long-Term Spectrum Prediction Based on GC-CNN-LSTM
by Wei Xu, Jianzhao Zhang, Zhe Su and Luliang Jia
Electronics 2025, 14(17), 3530; https://doi.org/10.3390/electronics14173530 - 4 Sep 2025
Viewed by 535
Abstract
The rapid development of wireless communication technology is leading to increasingly scarce spectrum resources, making efficient utilization a critical challenge. This paper proposes a Convolutional Neural Network–Long Short-Term Memory-Integrated Gradient-Weighted Class Activation Mapping (GC-CNN-LSTM) model, aimed at enhancing the accuracy of long-term spectrum prediction across multiple frequency bands and improving model interpretability. First, we achieve multi-frequency long-term spectrum prediction using a CNN-LSTM and compare its performance against models including LSTM, GRU, CNN, Transformer, and CNN-LSTM-Attention. Next, we use an improved Grad-CAM method to explain the model and obtain global heatmaps in the time–frequency domain. Finally, based on these interpretable results, we optimize the input data by selecting high-importance frequency points and removing low-importance time segments, thereby enhancing prediction accuracy. The simulation results show that the Grad-CAM-based approach achieves good interpretability, reducing RMSE and MAPE by 6.22% and 4.25%, respectively, compared to CNN-LSTM, while a similar optimization using SHapley Additive exPlanations (SHAP) achieves reductions of 0.86% and 3.55%.
(This article belongs to the Special Issue How Graph Convolutional Networks Work: Mechanisms and Models)

18 pages, 2639 KB  
Article
CA-NodeNet: A Category-Aware Graph Neural Network for Semi-Supervised Node Classification
by Zichang Lu, Meiyu Zhong, Qiguo Sun and Kai Ma
Electronics 2025, 14(16), 3215; https://doi.org/10.3390/electronics14163215 - 13 Aug 2025
Viewed by 297
Abstract
Graph convolutional networks (GCNs) have demonstrated remarkable effectiveness in processing graph-structured data and have been widely adopted across various domains. Existing methods mitigate over-smoothing through selective aggregation strategies such as attention mechanisms, edge dropout, and neighbor sampling. While some approaches incorporate global structural context, they often underexplore category-aware representations and inter-category differences, which are crucial for enhancing node discriminability. To address these limitations, a novel framework, CA-NodeNet, is proposed for semi-supervised node classification. CA-NodeNet comprises three key components: (1) coarse-grained node feature learning, (2) category-decoupled multi-branch attention, and (3) inter-category difference feature learning. Initially, a GCN-based encoder is employed to aggregate neighborhood information and learn coarse-grained representations. Subsequently, the category-decoupled multi-branch attention module employs a hierarchical multi-branch architecture, in which each branch incorporates category-specific attention mechanisms to project coarse-grained features into disentangled semantic subspaces. Furthermore, a layer-wise intermediate supervision strategy is adopted to facilitate the learning of discriminative category-specific features within each branch. To further enhance node feature discriminability, we introduce an inter-category difference feature learning module. This module first encodes pairwise differences between the category-specific features obtained from the previous stage and then integrates complementary information across multiple feature pairs to refine node representations. Finally, we design a dual-component optimization function that synergistically combines intermediate supervision loss with the final classification objective, encouraging the network to learn robust and fine-grained node representations. Extensive experiments on multiple real-world benchmark datasets demonstrate the superior performance of CA-NodeNet over existing state-of-the-art methods. Ablation studies further validate the effectiveness of each module in contributing to overall performance gains.
(This article belongs to the Special Issue How Graph Convolutional Networks Work: Mechanisms and Models)
