Analyzing the Properties of Graph Neural Networks with Evolutionary Algorithms
Abstract
1. Introduction
- (1) The model proposed in this paper uses an evolutionary algorithm to learn GNN parameters from perturbed graphs. It aims to exploit the properties of the perturbed graph to learn better node representation embeddings through the evolutionary mechanism, whose core principle is to retain the parameters that produce the best graph embeddings (see the minimal sketch after this list).
- (2) To the best of our knowledge, this is the first framework to apply an evolutionary algorithm to graph structure optimization guided by graph properties. The model continuously evolves and selects the optimal graph structure through the mutation, evaluation, and selection steps of the evolutionary algorithm. By evolving the graph structure, the model can adapt to different structures during learning and better capture the features shared between graphs, thereby improving its generalization ability.
- (3) Experiments on multiple challenging datasets demonstrate that EProGNN achieves convincing performance. In addition, EProGNN can adjust its parameters more flexibly, making the learned graph embeddings more adaptable to the requirements of the task.
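The selection principle behind contribution (1) can be made concrete with a minimal sketch. The loop below is illustrative only, not the paper's implementation: candidate parameter vectors are Gaussian-mutated, scored by a fitness function, and only the fittest survive. The toy fitness surrogate, population size, and dimensionality are assumptions.

```python
import numpy as np

def evolve(fitness, dim, pop_size=20, sigma=0.1, generations=100, seed=0):
    """Minimal (mu + lambda)-style loop: mutate, evaluate, keep the best."""
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop_size, dim))        # candidate parameter vectors
    for _ in range(generations):
        offspring = population + sigma * rng.normal(size=population.shape)  # Gaussian mutation
        combined = np.vstack([population, offspring])
        scores = np.array([fitness(ind) for ind in combined])
        population = combined[np.argsort(scores)[-pop_size:]]  # retain the fittest
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]

# Toy usage: maximize a simple concave fitness surrogate.
best = evolve(lambda w: -np.sum((w - 1.0) ** 2), dim=8)
```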
2. Related Work
2.1. Graph Neural Networks
2.2. Research on Perturbation Graphs
3. Problem Statement
4. The Proposed Framework
4.1. Low Rank and Sparsity
4.2. Feature Smoothness
4.3. Mutation
| Algorithm 1 Evolutionary Algorithm | |
|---|---|
| Input: GNNs population P, total training epochs, parameters used in the evolution process | |
| Output: evolved population | |
| 1: | Initialization: graph structures |
| 2: | Mutate the population into new individuals according to Equations (11), (14) and (17) |
| 3: | Evaluate the individuals according to Equation (18) |
| 4: | while individuals cannot adapt to the environment do |
| 5: | Input attacked graphs |
| 6: | Mutate the population into new individuals according to Equations (11), (14) and (17) |
| 7: | Evaluate the individuals according to Equation (18) |
| 8: | Select the best individual |
| 9: | end while |
| 10: | return the evolved population |
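A compact Python rendering of Algorithm 1 may help; this is a sketch under assumptions, not the released code. The mutation operators of Equations (11), (14) and (17) and the fitness of Equation (18) are abstracted as callables, since the paper's exact formulas are not reproduced here, and the toy `flip_random_edge` mutation is purely illustrative.

```python
import numpy as np

def flip_random_edge(a, rng):
    """Toy mutation (illustrative only): toggle one off-diagonal entry."""
    i, j = rng.integers(a.shape[0], size=2)
    if i != j:
        a = a.copy()
        a[i, j] = a[j, i] = 1 - a[i, j]
    return a

def run_evolution(adj, mutations, fitness, pop_size=10, max_iter=50, seed=0):
    """Sketch of Algorithm 1: mutate, evaluate, select until the stop condition."""
    rng = np.random.default_rng(seed)
    population = [adj.copy() for _ in range(pop_size)]    # line 1: initial structures
    for _ in range(max_iter):                             # lines 4-9
        pick = lambda a: mutations[rng.integers(len(mutations))](a, rng)
        offspring = [pick(ind) for ind in population]     # line 6: mutate
        ranked = sorted(population + offspring, key=fitness, reverse=True)
        population = ranked[:pop_size]                    # lines 7-8: evaluate, select
    return population[0]                                  # line 10: evolved population
```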
4.4. Evaluation
4.5. Overview of EProGNN
4.6. Complexity Analysis
- Parallelism: Unlike the gradient-based sequential updates of traditional ProGNN, the fitness evaluations of individuals in EProGNN are mutually independent. The complexity contributed by the population factor P can therefore be effectively offset by parallel computation across multiple GPUs (see the sketch after this list).
- Performance trade-offs: The additional computational overhead enables EProGNN to overcome the limitations of traditional ProGNN gradient descent, thereby escaping local optima and achieving greater robustness against targeted attacks, a primary goal for security-critical applications.
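As a concrete illustration of the parallelism point above, fitness evaluations can be farmed out to a process pool. This is a minimal sketch, not the paper's implementation: the worker count is an assumed value, `fitness` must be picklable, and the same pattern extends to one-GPU-per-worker setups.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_population(population, fitness, workers=4):
    """Score individuals independently; there is no shared gradient state,
    so the factor-P overhead is amortized across `workers` processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))
```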
| Algorithm 2 Properties GNNs with Evolutionary Algorithm | |
|---|---|
| Input: Adjacency matrix of the perturbation graph A, attribute matrix of the perturbation graph X, label set of the perturbation graph, hyperparameters, probability matrix. | |
| Output: EProGNN model parameter sets. | |
| 1: | Initialization: initialize basic perturbation graphs, basic perturbation attribute matrices, and model parameters; |
| 2: | while stopping condition is not met do |
| 3: | % Perform low-rank, sparse, and feature-smoothing processing on the perturbation graphs |
| 4: | Apply the low-rank approximation and sparsification to the input adjacency matrix according to Equations (3) and (8); |
| 5: | Smooth the input feature matrix according to Equation (9); |
| 6: | for to i do |
| 7: | % Train the model classifier |
| 8: | for to do |
| 9: | Mutate the attribute-processed graphs through Equations (11), (14) and (17); |
| 10: | Evaluate the quality of different individuals through Equation (18); |
| 11: | end for |
| 12: | end for |
| 13: | ; |
| 14: | |
| 15: | end while |
| 16: | return |
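To illustrate the preprocessing that lines 4 and 5 of Algorithm 2 perform, here is a generic sketch: a truncated-SVD low-rank projection with hard thresholding for the adjacency matrix (in the spirit of Equations (3) and (8)) and one step of neighborhood averaging for the features (in the spirit of Equation (9)). The paper's exact proximal operators are not reproduced; the rank, threshold, and smoothing weight are assumed values.

```python
import numpy as np

def preprocess(adj, features, rank=50, threshold=0.1, smooth_weight=0.5):
    """Generic low-rank + sparsity + feature-smoothing preprocessing."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    low_rank = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]       # low-rank part
    sparse = np.where(np.abs(low_rank) > threshold, low_rank, 0.0)  # sparsify
    deg = np.maximum(np.abs(sparse).sum(axis=1), 1e-8)
    norm_adj = sparse / deg[:, None]                                # row-normalize
    smoothed = (1 - smooth_weight) * features + smooth_weight * (norm_adj @ features)
    return sparse, smoothed
```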
5. Experiments
5.1. Experimental Settings
5.1.1. Datasets
- Citation Networks: We employ three standard citation graphs: Cora [43], Citeseer [44], and Pubmed [45]. In these graphs, nodes represent scientific publications, and edges denote citation links. For Cora and Citeseer, node features are binary word vectors indicating the presence of specific words. Pubmed involves diabetes-related publications, utilizing TF-IDF weighted word vectors as node features.
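One common way to load these benchmarks is the `torch_geometric` Planetoid loader; this is an assumption for illustration, as the paper does not specify its data-loading code.

```python
from torch_geometric.datasets import Planetoid

# Each dataset yields one graph with bag-of-words (Cora/Citeseer) or
# TF-IDF (Pubmed) node features and citation edges.
cora = Planetoid(root="data/Planetoid", name="Cora")[0]
print(cora.num_nodes, cora.num_edges, cora.x.shape)
```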
5.1.2. Baselines
- (1) GCN [47] performs semi-supervised node classification with a layer-wise propagation rule derived from a first-order approximation of spectral graph convolutions; each layer aggregates neighbor features through the symmetrically normalized adjacency matrix and then applies a learned linear transform and nonlinearity (see the layer sketch after this list).
- (2) GAT processes graph-structured data by aggregating neighbor nodes through an attention mechanism, adaptively assigning weights to different neighbors and thereby greatly improving the expressiveness of GNNs.
- (3) GraphSAGE is a popular graph neural network architecture for inductive node embeddings. The model exploits node features (e.g., text attributes, node profile information, node degree) to learn an embedding function that generalizes to unseen nodes. By incorporating node features into the learning algorithm, the model simultaneously learns the topology of each node's neighborhood and the distribution of node features within it.
- (4) ProGNN is a model that learns the graph structure and the GNN parameters simultaneously. It aims to defend against the most common adversarial attack setting on graph data, namely poisoning attacks on the graph structure: the attacker modifies edges before the GNN is trained, perturbing the graph structure while leaving the node features unchanged.
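For reference, a single GCN propagation step from [47] can be written in a few lines of NumPy; this is a didactic sketch, not the baseline implementation used in the experiments.

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W) [47]."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(a_norm @ h @ w, 0.0)              # aggregate, transform, ReLU
```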
5.1.3. Parameter Settings
5.2. Defense Performance
- Targeted Perturbation: This scheme perturbs a targeted set of edges around a specific node in the clean graph in order to mislead the graph model on that target node. This paper uses two targeted perturbation methods, poisoning perturbation and evasion perturbation, which are currently among the most advanced targeted perturbations of graph-structured data. Under this scheme, the attacker may be unable to modify the target node directly and may only access nodes other than the target, or only incomplete data; this allows the experimenter to clearly distinguish the perturbed nodes from the target node.
- Targetless Indiscriminate Perturbation: Unlike the previous scheme, indiscriminate perturbation attempts to reduce the model's overall performance on the entire graph, probing its resistance to comprehensive perturbations. The attacker tries to reduce the model's generalization ability and its performance on unlabeled nodes. The core idea is to treat the adjacency matrix of the experimental graph as a hyperparameter. Notably, the scheme's objective function is a bi-level problem, and since each entry of the adjacency matrix can only be 0 or 1, it is also a discrete optimization problem.
- Randomization Perturbation: This scheme differs from the previous two. Given a fixed perturbation budget, it deletes randomly chosen edges from a clean graph and fills in randomly chosen wrong edges in their place; such wrong insertions can be regarded as a source of poisoning noise, and several such noise sources are added to each graph structure (a minimal sketch follows this list). According to the experimental data, this perturbation scheme is demonstrably effective.
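The randomization scheme is simple enough to state in code. The following sketch deletes random edges and inserts the same number of spurious ones; the flip budget is an assumed parameter, and this is not necessarily the attack implementation used in the experiments.

```python
import numpy as np

def random_perturb(adj, n_flips, seed=0):
    """Delete `n_flips` existing edges and insert `n_flips` wrong ones."""
    rng = np.random.default_rng(seed)
    a = adj.copy()
    rows, cols = np.triu_indices_from(a, k=1)           # upper-triangle node pairs
    present = np.flatnonzero(a[rows, cols] == 1)
    absent = np.flatnonzero(a[rows, cols] == 0)
    for idx in rng.choice(present, size=min(n_flips, len(present)), replace=False):
        a[rows[idx], cols[idx]] = a[cols[idx], rows[idx]] = 0  # remove a real edge
    for idx in rng.choice(absent, size=min(n_flips, len(absent)), replace=False):
        a[rows[idx], cols[idx]] = a[cols[idx], rows[idx]] = 1  # add a poisoning edge
    return a
```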
5.2.1. Against Targetless Indiscriminate Perturbation
5.2.2. Against Targeted Adversarial Perturbation
5.2.3. Against Randomization Adversarial Perturbation
5.3. Effectiveness Analysis of Graph Structure Learning
5.4. Ablation Experiment
- EProGNN w/o RW: The variant removing the Random Walk Mutation.
- EProGNN w/o BT: The variant removing the Betweenness Mutation.
- EProGNN w/o SB: The variant removing the Stochastic Block Mutation.
- EProGNN w/o FS: The variant removing the Feature Smoothing module.
- Impact of Mutations: Removing any of the mutation operators from the complete model (i.e., dropping RW, BT, or SB) leads to a decrease in performance. This indicates that the different mutation strategies complement each other in exploring the graph-structure space and in preventing the population from converging prematurely to local optima (a generic sketch of one such operator appears after this list).
- Impact of feature smoothing: EProGNN’s performance degrades without feature smoothing, indicating that feature smoothing is crucial for filtering high-frequency noise in the feature matrix, especially when the graph structure is perturbed.
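For intuition about the mutation operators ablated above, here is a generic random-walk-style rewiring. It is an illustrative stand-in for the RW operator of Equation (11), whose exact form is not reproduced here; the walk length and edge budget are assumptions.

```python
import numpy as np

def random_walk_mutation(adj, walk_length=4, n_new_edges=5, seed=0):
    """Connect each short walk's start node to its endpoint, biasing new
    edges toward well-connected regions of the graph."""
    rng = np.random.default_rng(seed)
    a = adj.copy()
    n = a.shape[0]
    for _ in range(n_new_edges):
        v = start = rng.integers(n)
        for _ in range(walk_length):                    # take a short random walk
            neighbors = np.flatnonzero(a[v])
            if len(neighbors) == 0:
                break
            v = rng.choice(neighbors)
        if v != start:
            a[start, v] = a[v, start] = 1               # rewire start -> endpoint
    return a
```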
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef]
- Guo, Z.; Wang, H. A deep graph neural network-based mechanism for social recommendations. IEEE Trans. Ind. Inform. 2020, 17, 2776–2783. [Google Scholar] [CrossRef]
- Song, Z.; Zhang, Y.; King, I. Towards fair financial services for all: A temporal GNN approach for individual fairness on transaction networks. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 2331–2341. [Google Scholar]
- Li, M.M.; Huang, K.; Zitnik, M. Graph representation learning in biomedicine. arXiv 2021, arXiv:2104.04883. [Google Scholar]
- Yasunaga, M.; Ren, H.; Bosselut, A.; Liang, P.; Leskovec, J. QA-GNN: Reasoning with language models and knowledge graphs for question answering. arXiv 2021, arXiv:2104.06378. [Google Scholar]
- Hu, Z.; Dong, Y.; Wang, K.; Chang, K.W.; Sun, Y. Gpt-gnn: Generative pre-training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA, 23–27 August 2020; pp. 1857–1867. [Google Scholar]
- Shi, W.; Rajkumar, R. Point-gnn: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1711–1719. [Google Scholar]
- Han, K.; Wang, Y.; Guo, J.; Tang, Y.; Wu, E. Vision gnn: An image is worth graph of nodes. Adv. Neural Inf. Process. Syst. 2022, 35, 8291–8303. [Google Scholar]
- Yu, Y.; Chen, J.; Gao, T.; Yu, M. DAG-GNN: DAG structure learning with graph neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 7154–7163. [Google Scholar]
- Fan, X.; Gong, M.; Wu, Y.; Tang, Z.; Liu, J. Neural gaussian similarity modeling for differential graph structure learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 11919–11926. [Google Scholar]
- Wang, S.; Wang, D.; Ruan, X.; Fan, X.; Gong, M.; Zhang, H. HGRL-S: Towards Heterogeneous Graph Representation Learning with Optimized Structures. IEEE Trans. Emerg. Top. Comput. 2025, 9, 2359–2370. [Google Scholar] [CrossRef]
- Liao, R.; Li, Y.; Song, Y.; Wang, S.; Hamilton, W.; Duvenaud, D.K.; Urtasun, R.; Zemel, R. Efficient graph generation with graph recurrent attention networks. Adv. Neural Inf. Process. Syst. 2019, 32–42. Available online: https://proceedings.neurips.cc/paper_files/paper/2019/file/d0921d442ee91b896ad95059d13df618-Paper.pdf (accessed on 25 November 2025).
- Errica, F.; Podda, M.; Bacciu, D.; Micheli, A. A fair comparison of graph neural networks for graph classification. arXiv 2019, arXiv:1912.09893. [Google Scholar]
- Lan, Z.; Yu, L.; Yuan, L.; Wu, Z.; Niu, Q.; Ma, F. Sub-gmn: The subgraph matching network model. arXiv 2021, arXiv:2104.00186. [Google Scholar]
- Gao, C.; Zheng, Y.; Li, N.; Li, Y.; Qin, Y.; Piao, J.; Quan, Y.; Chang, J.; Jin, D.; He, X.; et al. A survey of graph neural networks for recommender systems: Challenges, methods, and directions. ACM Trans. Recomm. Syst. 2023, 1, 1–51. [Google Scholar] [CrossRef]
- Zhang, B.; Xiao, J.; Jiao, J.; Wei, Y.; Zhao, Y. Affinity attention graph neural network for weakly supervised semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8082–8096. [Google Scholar] [CrossRef]
- Wu, L.; Chen, Y.; Ji, H.; Liu, B. Deep learning on graphs for natural language processing. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, 11–15 July 2021; pp. 2651–2653. [Google Scholar]
- Liu, Z.; Yang, D.; Wang, Y.; Lu, M.; Li, R. EGNN: Graph structure learning based on evolutionary computation helps more in graph neural networks. Appl. Soft Comput. 2023, 135, 110040. [Google Scholar] [CrossRef]
- Tang, H.; Ma, G.; Chen, Y.; Guo, L.; Wang, W.; Zeng, B.; Zhan, L. Adversarial attack on hierarchical graph pooling neural networks. arXiv 2020, arXiv:2005.11560. [Google Scholar] [CrossRef]
- Wang, R.; Mou, S.; Wang, X.; Xiao, W.; Ju, Q.; Shi, C.; Xie, X. Graph structure estimation neural networks. In Proceedings of the Web Conference, Ljubljana, Slovenia, 19–23 April 2021; pp. 342–353. [Google Scholar]
- Chen, H.; Zhou, K.; Lai, K.H.; Hu, X.; Wang, F.; Yang, H. Adversarial graph perturbations for recommendations at scale. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; pp. 1854–1858. [Google Scholar]
- Wu, L.; Lin, H.; Huang, Y.; Li, S.Z. Knowledge distillation improves graph structure augmentation for graph neural networks. Adv. Neural Inf. Process. Syst. 2022, 35, 11815–11827. [Google Scholar]
- Shu, J.; Xi, B.; Li, Y.; Wu, F.; Kamhoua, C.; Ma, J. Understanding Dropout for Graph Neural Networks. In Proceedings of the Companion Proceedings of the Web Conference, Lyon, France, 25–29 April 2022; pp. 1128–1138. [Google Scholar]
- Sun, L.; Dou, Y.; Yang, C.; Zhang, K.; Wang, J.; Philip, S.Y.; He, L.; Li, B. Adversarial attack and defense on graph data: A survey. IEEE Trans. Knowl. Data Eng. 2022, 35, 7693–7711. [Google Scholar] [CrossRef]
- Xue, H.; Zhou, K.; Chen, T.; Guo, K.; Hu, X.; Chang, Y.; Wang, X. CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks. arXiv 2021, arXiv:2110.14855. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Yang, X.; Yan, M.; Pan, S.; Ye, X.; Fan, D. Simple and efficient heterogeneous graph neural network. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 10816–10824. [Google Scholar]
- Huang, Z.; Li, K.; Jiang, Y.; Jia, Z.; Lv, L.; Ma, Y. Graph Relearn Network: Reducing performance variance and improving prediction accuracy of graph neural networks. Knowl.-Based Syst. 2024, 301, 112311. [Google Scholar] [CrossRef]
- Liu, Z.; Yu, X.; Fang, Y.; Zhang, X. Graphprompt: Unifying pre-training and downstream tasks for graph neural networks. In Proceedings of the ACM Web Conference, Austin, TX, USA, 30 April–4 May 2023; pp. 417–428. [Google Scholar]
- Li, J.; Chen, J.; Liu, J.; Ma, H. Learning a graph neural network with cross modality interaction for image fusion. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 4471–4479. [Google Scholar]
- Jin, W.; Ma, Y.; Liu, X.; Tang, X.; Wang, S.; Tang, J. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, CA, USA, 23–27 August 2020; pp. 66–74. [Google Scholar]
- Bacciu, D.; Numeroso, D. Explaining deep graph networks via input perturbation. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 10334–10345. [Google Scholar] [CrossRef]
- Tang, X.; Zhang, C.; Guo, R.; Yang, X.; Qian, X. A Causality-Aware Graph Convolutional Network Framework for Rigidity Assessment in Parkinsonians. IEEE Trans. Med. Imaging 2023, 43, 229–240. [Google Scholar] [CrossRef]
- Xie, J.; Miao, Q.; Liu, R.; Xin, W.; Tang, L.; Zhong, S.; Gao, X. Attention adjacency matrix based graph convolutional networks for skeleton-based action recognition. Neurocomputing 2021, 440, 230–239. [Google Scholar] [CrossRef]
- Xie, Y.; Xu, Z.; Zhang, J.; Wang, Z.; Ji, S. Self-supervised learning of graph neural networks: A unified review. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2412–2429. [Google Scholar] [CrossRef]
- Qu, M.; Bengio, Y.; Tang, J. Gmnn: Graph markov neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 5241–5250. [Google Scholar]
- Yang, J.; Sun, J.; Ren, Y.; Li, S.; Ding, S.; Hu, J. GACP: Graph neural networks with ARMA filters and a parallel CNN for hyperspectral image classification. Int. J. Digit. Earth 2023, 16, 1770–1800. [Google Scholar] [CrossRef]
- Huang, P.Y.; Frederking, R. Rwr-gae: Random walk regularization for graph auto encoders. arXiv 2019, arXiv:1908.04003. [Google Scholar] [CrossRef]
- You, Y.; Chen, T.; Sui, Y.; Chen, T.; Wang, Z.; Shen, Y. Graph contrastive learning with augmentations. Adv. Neural Inf. Process. Syst. 2020, 33, 5812–5823. [Google Scholar]
- Wang, M.; Wang, C.; Yu, J.X.; Zhang, J. Community detection in social networks: An in-depth benchmarking study with a procedure-oriented framework. Proc. VLDB Endow. 2015, 8, 998–1009. [Google Scholar] [CrossRef]
- Leskovec, J.; Lang, K.J.; Dasgupta, A.; Mahoney, M.W. Statistical properties of community structure in large social and information networks. In Proceedings of the 17th International Conference on World Wide Web, Beijing, China, 21–25 April 2008; pp. 695–704. [Google Scholar]
- McCallum, A.; Nigam, K.; Ungar, L.H. Efficient clustering of high-dimensional data sets with application to reference matching. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Boston, MA, USA, 20–23 August 2000; pp. 169–178. [Google Scholar]
- Hansen, P.; Ruiz, M.; Aloise, D. A VNS heuristic for escaping local extrema entrapment in normalized cut clustering. Pattern Recognit. 2012, 45, 4337–4345. [Google Scholar] [CrossRef]
- Krauthammer, M.; Nenadic, G. Term identification in the biomedical literature. J. Biomed. Inform. 2004, 37, 512–526. [Google Scholar] [CrossRef]
- Li, Y.; Jin, W.; Xu, H.; Tang, J. Deeprobust: A platform for adversarial attacks and defenses. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 16078–16080. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Zügner, D.; Akbarnejad, A.; Günnemann, S. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2847–2856. [Google Scholar]
| Dataset | Type | Nodes | Edges | Classes |
|---|---|---|---|---|
| BlogCatalog | Social | 10,312 | 333,982 | 39 |
| Orkut | Social | 48,760 | 98,020 | - |
| Cora | Citation | 2708 | 5429 | 7 |
| Citeseer | Citation | 3327 | 4732 | 6 |
| Pubmed | Citation | 19,717 | 44,338 | 3 |
| Dataset | Ptb Rate (%) | GCN | GAT | GraphSAGE | RGCN | GCN-Jaccard | GCN-SVD | Pro-GNN-fs | Pro-GNN | EProGNN |
|---|---|---|---|---|---|---|---|---|---|---|
| Cora | 0 | 79.93 | | | | | | | | |
| | 5 | 79.13 | | | | | | | | |
| | 10 | 75.10 | | | | | | | | |
| | 15 | 71.42 | | | | | | | | |
| | 20 | 68.19 | | | | | | | | |
| | 25 | 64.85 | | | | | | | | |
| Citeseer | 0 | 69.88 | | | | | | | | |
| | 5 | 68.91 | | | | | | | | |
| | 10 | 67.42 | | | | | | | | |
| | 15 | 66.91 | | | | | | | | |
| | 20 | 65.36 | | | | | | | | |
| | 25 | 60.16 | | | | | | | | |
| Polblogs | 0 | 91.98 | - | - | | | | | | |
| | 5 | - | 91.09 | - | | | | | | |
| | 10 | - | - | 90.90 | | | | | | |
| | 15 | - | - | 87.81 | | | | | | |
| | 20 | - | - | 85.24 | | | | | | |
| | 25 | - | - | 80.26 | | | | | | |
| Pubmed | 0 | - | 83.98 | | | | | | | |
| | 5 | - | 83.39 | | | | | | | |
| | 10 | - | 83.24 | | | | | | | |
| | 15 | - | 83.06 | | | | | | | |
| | 20 | - | 82.87 | | | | | | | |
| | 25 | - | 82.57 | | | | | | | |
| Orkut | 0 | - | 78.98 | | | | | | | |
| | 5 | - | 77.13 | | | | | | | |
| | 10 | - | 75.92 | | | | | | | |
| | 15 | - | 73.25 | | | | | | | |
| | 20 | - | 72.45 | | | | | | | |
| | 25 | - | 70.36 | | | | | | | |
| Method | Cora (Acc%) | CiteSeer (Acc%) |
|---|---|---|
| EProGNN (Full) | | |
| EProGNN w/o RW | | |
| EProGNN w/o BT | | |
| EProGNN w/o SB | | |
| EProGNN w/o FS | | |