Article

Stability-Optimized Graph Convolutional Network: A Novel Propagation Rule with Constraints Derived from ODEs

Liping Chen, Hongji Zhu and Shuguang Han *
1 School of Science, Zhejiang Sci-Tech University, Hangzhou 310000, China
2 School of Computer Science, Zhejiang Sci-Tech University, Hangzhou 310000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(5), 761; https://doi.org/10.3390/math13050761
Submission received: 20 January 2025 / Revised: 19 February 2025 / Accepted: 23 February 2025 / Published: 26 February 2025

Abstract

The node representation learning capability of Graph Convolutional Networks (GCNs) is fundamentally constrained by dynamic instability during feature propagation, yet existing research lacks systematic theoretical analysis of stability control mechanisms. This paper proposes a Stability-Optimized Graph Convolutional Network (SO-GCN) that enhances training stability and feature expressiveness in shallow architectures through continuous–discrete dual-domain stability constraints. By constructing continuous dynamical equations for GCNs and rigorously proving conditional stability under arbitrary parameter dimensions using nonlinear operator theory, we establish theoretical foundations. A Precision Weight Parameter Mechanism is introduced to determine critical Frobenius norm thresholds through feature contraction rates, optimized via differentiable penalty terms. Simultaneously, a Dynamic Step-size Adjustment Mechanism regulates propagation steps based on spectral properties of instantaneous Jacobian matrices and forward Euler discretization. Experimental results demonstrate SO-GCN’s superiority: 1.1–10.7% accuracy improvement on homophilic graphs (Cora/CiteSeer) and 11.22–12.09% enhancement on heterophilic graphs (Texas/Chameleon) compared to conventional GCN. Hilbert–Schmidt Independence Criterion (HSIC) analysis reveals SO-GCN’s superior inter-layer feature independence maintenance across 2–7 layers. This study establishes a novel theoretical paradigm for graph network stability analysis, with practical implications for optimizing shallow architectures in real-world applications.

1. Introduction

Graph Convolutional Networks (GCNs) have emerged as a fundamental framework for graph data modeling by extending convolutional operations from Euclidean to non-Euclidean domains [1,2,3]. Since the seminal work of Kipf and Welling [4], GCNs have demonstrated remarkable success in social network analysis [5], recommendation systems [6], and biomolecular interaction prediction [7]. The core mechanism of GCNs—spectral-domain filtering for neighborhood information propagation—effectively captures topological features while maintaining computational efficiency. However, recent studies reveal that GCNs’ dynamic stability limitations in heterophilic graph structures constrain both theoretical interpretability and practical applicability [8,9].

1.1. Related Work

Despite significant advances in GCN performance [10,11,12,13], three fundamental theoretical gaps persist. First, a comprehensive framework for quantifying dynamic stability is lacking; while methods such as residual connections [14] and geometry-driven propagation [15] mitigate oversmoothing, rigorous mathematical characterization of Lyapunov stability remains absent. Although Chen et al. [16] demonstrated that an uncontrollable spectral radius causes deep GCN failure, stability mapping from parameter to feature space has not been established.
Second, discretization error control remains inadequate. Traditional forward Euler discretization often leads to numerical divergence in dynamic graph convolutions, and while Rusch et al. [17] introduced a gradient gating mechanism, fixed-step-size strategies still cause superlinear growth of node feature variance in heterophilic graphs, exacerbating training instability.
Third, existing studies predominantly address oversmoothing in deep networks, largely neglecting stability optimization in shallow architectures [18], resulting in suboptimal performance on complex graph structures.

1.2. Our Approach

To address these challenges, we propose the Stability-Optimized Graph Convolutional Network (SO-GCN), a dynamically stabilized framework that enhances intrinsic stability and feature representation in shallow architectures (2–7 layers). The innovations include the following:
  • We developed a time-varying differential equation model for graph convolution propagation, establishing mathematical criteria for the Precision Weight Parameter Mechanism through stability mapping analysis.
  • We formulated stability domain constraints for the weight matrix through Jacobian spectral analysis and proposed a Dynamic Step-size Adjustment Mechanism based on forward Euler discretization, applying stability mapping during gradient descent.
  • We designed stable propagation rules integrating dual protection mechanisms with self-loop factors, optimizing feature stability and independence through quantitative metrics.
Recent theoretical advances, including adaptive frequency response filters [19,20,21] and the potential connection of Lipschitz stability to graph neural diffusion, provide foundational support for our stability quantification framework [22,23,24,25]. These developments not only validate our approach but also advance robust applications of GCNs in complex systems and depth optimization in shallow, stable networks.

2. Preliminaries

Based on Kipf’s seminal work [4], the standard Graph Convolutional Network (GCN) propagates node features by spectral filtering. The propagation rule is defined as follows:
H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\, H^{(l)} W^{(l)}\right)    (1)
Consider an undirected graph G = (V, E), where V = {v_i}_{i=1}^{N} denotes the node set and E ⊆ V × V represents the edge set. The augmented adjacency matrix is Ã = A + I_N ∈ R^{N×N}, with A as the original adjacency matrix and I_N as the identity matrix. Here, H^{(l)} ∈ R^{N×d_l} is the feature matrix at layer l, W^{(l)} ∈ R^{d_l×d_{l+1}} denotes the learnable weight matrix, and σ(·) is the activation function. The initial input is H^{(0)} = X ∈ R^{N×d_0}, the node feature matrix.
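For concreteness, here is a minimal NumPy sketch of one propagation step of Equation (1); the helper names (normalize_adjacency, gcn_layer) and the toy graph are illustrative, not part of the original formulation.

```python
import numpy as np

def normalize_adjacency(A):
    """Compute D~^{-1/2} (A + I) D~^{-1/2} for a binary adjacency matrix A."""
    A_tilde = A + np.eye(A.shape[0])            # A~ = A + I_N (add self-loops)
    deg = A_tilde.sum(axis=1)                   # degrees of A~
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))    # D~^{-1/2}
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def gcn_layer(H, W, L_hat):
    """One propagation step of Equation (1) with a ReLU activation."""
    return np.maximum(0.0, L_hat @ H @ W)

# toy example: a 4-node cycle with 3-dimensional input features
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H0 = np.random.randn(4, 3)                      # H^(0) = X
W0 = 0.1 * np.random.randn(3, 2)                # W^(0)
H1 = gcn_layer(H0, W0, normalize_adjacency(A))  # H^(1), shape (4, 2)
```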
Despite its effectiveness in graph modeling, this framework suffers from training instability and limited noise robustness on heterophilic graphs [8,9]. To address these limitations, we reformulate the graph convolution mechanism from a continuous dynamics perspective and establish a rigorous stability guarantee framework.

2.1. Continuous Dynamics Model

To establish the theoretical connection between graph convolutional networks and dynamical systems, we extend the discrete layer index l to a continuous time variable t ∈ R_+, deriving the continuous propagation equation for graph convolution as follows:
\frac{dH(t)}{dt} = \sigma\big(L\, H(t)\, W\big), \qquad L = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}    (2)
where H(t) ∈ R^{N×d_t} represents the node feature matrix at time t, L ∈ R^{N×N} denotes the symmetric normalized graph Laplacian operator, W(t) ∈ R^{d_t×d_{t+1}} is the time-dependent parameter matrix, and σ(·) indicates the element-wise activation function.

2.1.1. Stability Analysis of Equilibrium State

The existence of an equilibrium state is essential for stable propagation, as it enables the system to resist noise from minor disturbances or initial condition variations. Using fixed-point theory [26,27], we establish the following key result:
Theorem 1
(Existence of Equilibrium). Suppose ‖L‖_F, ‖W‖_F < ∞ and the activation function σ is globally Lipschitz continuous. Then the continuous propagation equation admits an equilibrium state H* ∈ R^{N×d_{t+1}} satisfying
H^{*} = \sigma\big(L\, H^{*}\, W\big)    (3)
This differential equation characterizes the evolution dynamics of node features in continuous time. We analyze the system's dynamic behavior near the equilibrium point H*, where input and output feature dimensions are identical and Equation (3) holds. To quantify the system's convergence rate, we define the feature contraction rate:
Definition 1
(Feature Contraction Rate). The instantaneous contraction rate of the propagation operator is
\gamma(t) = \sup_{H \neq 0} \frac{\| L\, H(t)\, W \|_F}{\| H(t) \|_F}    (4)
When γ(t) < 1, the system is asymptotically stable.
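As a sanity check, the contraction rate of Definition 1 can be estimated numerically. The sketch below is our own illustration: it probes the linear part of the propagation operator with random feature matrices and compares the result with the exact operator norm ‖L‖₂‖W‖₂.

```python
import numpy as np

def contraction_rate(L_hat, W, n_probes=200, seed=0):
    """Monte-Carlo lower bound on gamma = sup_{H != 0} ||L H W||_F / ||H||_F,
    together with the exact value for the linear map H -> L H W."""
    rng = np.random.default_rng(seed)
    N, d = L_hat.shape[0], W.shape[0]
    estimate = 0.0
    for _ in range(n_probes):
        H = rng.standard_normal((N, d))
        estimate = max(estimate,
                       np.linalg.norm(L_hat @ H @ W) / np.linalg.norm(H))
    exact = np.linalg.norm(L_hat, 2) * np.linalg.norm(W, 2)  # product of spectral norms
    return estimate, exact
```

Values below 1 indicate that the propagation operator contracts feature perturbations, which is exactly the regime Theorem 2 enforces.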
Based on this, we derive the theoretical foundation for the Precision Weight Parameter Mechanism (PW Mechanism):
Theorem 2
(Weight Frobenius Norm Constraint). When the weight matrix satisfies
\| W \|_F \le \frac{1}{\mathbb{E}[\sigma'] \cdot \| L \|_F}    (5)
it ensures γ(t) < 1, where E[σ′] is the expected value of the activation function's derivative.

2.1.2. Stability Guarantee of the Forward Euler Discretization Scheme

Discretizing Equation (2) by the forward Euler method yields the following:
H_{t+1} = H_t + \eta_t\, \sigma\big(L\, H_t\, W\big)    (6)
where η_t denotes the adaptive step size.
The stability of the discrete system depends on the spectral properties of the instantaneous Jacobian J_t, providing the theoretical foundation for the Dynamic Step-size Adjustment Mechanism (DS Mechanism):
Theorem 3
(Step Size Upper Bound Constraint). The discrete system achieves numerical stability when the step size η_t satisfies
\eta_t \le \min\!\left( \frac{2}{|\lambda_{\min}(J_t)|},\ \frac{1}{\sigma_{\max}(J_t)} \right)    (7)
where J_t = L \cdot \mathrm{diag}\big(\sigma'(L H_t W)\big) \cdot W represents the instantaneous Jacobian matrix, λ_min denotes the smallest eigenvalue modulus, and σ_max signifies the largest singular value.
When the forward Euler discretization of the GCN continuous propagation equation (Equation (6)) meets the stability condition (Equation (7)), we introduce a self-loop influence factor α ∈ [0, 1] to enhance stability, yielding the discrete propagation equation:
H_{t+1} = \sigma\big( (I + \eta_t (L - \alpha I))\, H_t\, W \big)    (8)
We define \tilde{\Omega}_t = I + \eta_t (L - \alpha I) as the discrete adjustment term, which balances node self-information preservation with neighborhood information aggregation. This formulation establishes a rigorous theoretical basis for SO-GCN's stable propagation rule.
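A minimal sketch of the update in Equation (8) is shown below, assuming a Leaky ReLU activation (as used later in Section 3.2); the step size eta would be supplied by the DS Mechanism described in Section 3.2.2.

```python
import numpy as np

def leaky_relu(X, slope=0.01):
    return np.where(X > 0, X, slope * X)

def stable_step(H, W, L_hat, eta, alpha=0.5):
    """Discrete propagation of Equation (8):
    H_{t+1} = sigma((I + eta (L - alpha I)) H_t W)."""
    N = L_hat.shape[0]
    Omega = np.eye(N) + eta * (L_hat - alpha * np.eye(N))  # discrete adjustment term
    return leaky_relu(Omega @ H @ W)
```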
The proofs of Theorems 1–3 appear in Appendix A, Appendix B, and Appendix C, respectively.

3. Stability-Optimized Graph Convolutional Network

3.1. Network Overview

Based on the preceding theoretical results, we propose a stability propagation rule that integrates dynamic step-size adjustment, weight matrix norm constraints, and an upper-bound penalty term. By discretizing the propagation equation, we develop a Stability-Optimized Graph Convolutional Network (SO-GCN) with enhanced training stability, improved prediction accuracy, and superior loss convergence. The workflow of SO-GCN with two layers is shown in Figure 1. During training, the network dynamically determines the weight matrix norm threshold to constrain parameter updates and optimizes parameter space distribution through a differentiable penalty term. Simultaneously, based on the spectral properties of the Jacobian matrix, we employ the forward Euler discretization method to dynamically adjust the step size and introduce a self-loop influence factor to enhance propagation stability.
This framework incorporates three modules into the GCN feature propagation process:
  • Precision Weight Parameter Mechanism (PW Mechanism): constrains the Frobenius norm of the weight matrix through stability analysis of differential equations;
  • Dynamic Step-size Adjustment Mechanism (DS Mechanism): adaptively controls the propagation step size based on the instantaneous Jacobian matrix;
  • Stable propagation rule: combines the PW Mechanism and DS Mechanism to optimize both numerical stability and model expressiveness.
The basic framework of SO-GCN with two layers is shown in Figure 2.

3.2. Stability-Optimized Propagation Rule

To enhance graph filtering performance, we develop a stable propagation rule based on discrete dynamical system theory:
H^{(l+1)} = \sigma\big( (I + \eta_l (L - \alpha I))\, H^{(l)} W^{(l)} \big)    (9)
where σ(·) denotes the Leaky ReLU activation function, α ∈ [0, 1] represents the self-loop influence factor, W^{(l)} satisfies the norm constraint in the Precision Weight Parameter Mechanism, and η_l is dynamically determined by the Dynamic Step-size Adjustment Mechanism. This rule maintains node self-information through the αI term while enabling adaptive neighborhood information aggregation via the discrete adjustment term \tilde{\Omega}^{(l)} = I + \eta_l (L - \alpha I).

3.2.1. Precision Weight Parameter Mechanism

To accelerate optimization, we propose the Precision Weight Parameter Mechanism (PW Mechanism). The Frobenius norm threshold for the weight matrix at layer l is
\theta^{(l)} = \big( \mathbb{E}[\sigma'] \cdot \| L \|_F \big)^{-1}
where E[σ′] = \frac{1}{N d_l} \sum_{i,j} \sigma'(M_{ij}) is the expected value of the activation function derivative, M^{(l)} = L H^{(l)} W^{(l)} is the intermediate feature matrix, and L = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} is the symmetric normalized graph Laplacian.
First, the mechanism inputs node features H^{(l)} into SO-GCN for standard graph convolution, computes the norm threshold for layer l, and performs two-stage optimization based on Theorem 2 to obtain network response distribution characteristics. When ‖W^{(l)}‖_F > θ^{(l)}, it applies Frobenius norm projection,
\hat{W}^{(l)} = \theta^{(l)} \cdot W^{(l)} / \| W^{(l)} \|_F
to constrain parameters within the stability domain. Simultaneously, it constructs a quadratic gradient penalty term:
P = \max\big( 0,\ \| W^{(l)} \|_F - \theta^{(l)} \big)^2
This penalty term guides parameter convergence through backpropagation, preventing the weight norm from exceeding the predefined range. The pseudocode for the Precision Weight Parameter Mechanism is given in Algorithm 1:
Algorithm 1: Precision Weight Parameter Mechanism
Require: Feature matrix H^{(l)}, parameter matrix W^{(l)}, graph Laplacian L
Ensure: Stable parameter matrix Ŵ^{(l)}, penalty term P
  M^{(l)} ← L H^{(l)} W^{(l)}
  E[σ′] ← (1 / (N d_l)) Σ_{i,j} σ′(M_{ij})        ▹ Expectation of activation derivative
  θ^{(l)} ← (E[σ′] · ‖L‖_F)^{-1}
  if ‖W^{(l)}‖_F > θ^{(l)} then
      Ŵ^{(l)} ← θ^{(l)} · W^{(l)} / ‖W^{(l)}‖_F   ▹ Parameter projection
  else
      Ŵ^{(l)} ← W^{(l)}
  end if
  P ← max(0, ‖W^{(l)}‖_F − θ^{(l)})^2             ▹ Quadratic penalty term
  return Ŵ^{(l)}, P
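A compact PyTorch sketch of Algorithm 1 follows, assuming a Leaky ReLU activation so that σ′ equals 1 for positive pre-activations and the negative slope otherwise; in a training loop, the returned penalty would simply be added to the loss. Function and variable names are ours.

```python
import torch

def pw_mechanism(H, W, L_hat, leaky_slope=0.01):
    """Sketch of the Precision Weight Parameter Mechanism (Algorithm 1)."""
    M = L_hat @ H @ W                                          # intermediate features
    sigma_prime = torch.where(M > 0,                           # Leaky ReLU derivative
                              torch.ones_like(M),
                              torch.full_like(M, leaky_slope))
    E_sigma = sigma_prime.mean()                               # E[sigma']
    theta = 1.0 / (E_sigma * torch.linalg.norm(L_hat, 'fro'))  # threshold theta^(l)
    W_norm = torch.linalg.norm(W, 'fro')
    W_hat = theta * W / W_norm if W_norm > theta else W        # hard projection
    penalty = torch.clamp(W_norm - theta, min=0.0) ** 2        # soft quadratic penalty
    return W_hat, penalty
```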
This strategy combines hard constraints with soft penalties to maintain numerical stability while preserving end-to-end differentiability, preventing extreme weight variations during iterations and ensuring smooth feature updates. After integrating the Precision Weight Parameter Mechanism into the baseline GCN, we refer to the resulting model as the Precision Weight Graph Convolutional Network (PW-GCN).

3.2.2. Dynamic Step-Size Adjustment Mechanism

To enhance inter-layer propagation stability, we propose the Dynamic Step-size Adjustment Mechanism (DS Mechanism). The instantaneous Jacobian matrix is defined as
J^{(l)} = L \cdot \mathrm{diag}\big(\sigma'(M^{(l)})\big) \cdot W^{(l)}, \qquad M^{(l)} = L H^{(l)} W^{(l)}
Based on Theorem 3's analysis of the spectral properties of J^{(l)}, the upper bound of the step size is
\eta_l \le \min\!\left( \frac{2}{|\lambda_{\min}(J^{(l)})|},\ \frac{1}{\sigma_{\max}(J^{(l)})} \right)
where λ_min denotes the smallest eigenvalue modulus and σ_max represents the largest singular value.
Using the power iteration method to estimate the principal eigenvector, we set the step size as η_l = 0.85 η_max, where the safety factor of 0.85 compensates for spectral estimation errors and ensures numerical stability. This strategy maximizes information propagation efficiency while maintaining discrete system stability. The pseudocode for the Dynamic Step-size Adjustment Mechanism is given in Algorithm 2.
Integrating the Dynamic Step-size Adjustment Mechanism into the baseline GCN yields the Dynamic Step Graph Convolutional Network (DS-GCN). The Precision Weight Parameter Mechanism ensures weight matrix stability through Frobenius norm projection and gradient penalties, while the Dynamic Step-size Adjustment Mechanism further optimizes discrete error accumulation via safety-factor-adjusted step sizes. The stable propagation rule, combining both mechanisms, provides theoretical support for enhancing feature output stability.
Algorithm 2: Dynamic Step-size Adjustment Mechanism
Require: Feature matrix H^{(l)}, parameter matrix W^{(l)}, graph Laplacian L
Ensure: Stable step size η_l, propagation matrix H^{(l+1)}
  J^{(l)} ← L · diag(σ′(M^{(l)})) · W^{(l)}           ▹ M^{(l)} = L H^{(l)} W^{(l)}
  v ← random unit vector in R^N
  for k = 1 to K do
      v ← J^{(l)} v
      v ← v / ‖v‖_2
  end for
  σ̂ ← ‖J^{(l)} v‖_2                                   ▹ Spectral norm estimation
  η_max ← min( 2 / |λ_min(J^{(l)})|, 1 / σ̂ )
  η_l ← 0.85 η_max                                     ▹ Safety factor
  H^{(l+1)} ← σ( (I + η_l · (L − α I)) H^{(l)} W^{(l)} )
  return η_l, H^{(l+1)}
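The Jacobian notation L · diag(σ′(M)) · W is dimensionally loose in matrix form; one practical reading, consistent with the linearization in Appendix B, is the map δH ↦ σ′(M) ⊙ (L δH W). The NumPy sketch below, which is our interpretation rather than the authors' code, estimates its largest singular value by power iteration and applies the 0.85 safety factor of Algorithm 2.

```python
import numpy as np

def leaky_relu(X, slope=0.01):
    return np.where(X > 0, X, slope * X)

def ds_step(H, W, L_hat, alpha=0.5, n_iter=30, safety=0.85, seed=0):
    """Sketch of the Dynamic Step-size Adjustment Mechanism (Algorithm 2)."""
    rng = np.random.default_rng(seed)
    M = L_hat @ H @ W
    s_prime = np.where(M > 0, 1.0, 0.01)                 # Leaky ReLU derivative
    J  = lambda V: s_prime * (L_hat @ V @ W)             # Jacobian action on dH
    Jt = lambda U: L_hat.T @ (s_prime * U) @ W.T         # adjoint action

    V = rng.standard_normal(H.shape)                     # random start
    for _ in range(n_iter):                              # power iteration on J^T J
        V = Jt(J(V))
        V /= np.linalg.norm(V) + 1e-12
    sigma_hat = np.linalg.norm(J(V))                     # largest-singular-value estimate

    eta = safety / sigma_hat                             # bound 1/sigma_max, scaled by 0.85
    # Algorithm 2 additionally caps eta by 2/|lambda_min(J)|; omitted in this sketch.
    N = L_hat.shape[0]
    H_next = leaky_relu((np.eye(N) + eta * (L_hat - alpha * np.eye(N))) @ H @ W)
    return eta, H_next
```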

4. Experiments

This study conducts systematic validation on three homophilic graph datasets (Cora, CiteSeer, and PubMed) [20] and three heterophilic graph datasets (Chameleon, Texas, and Squirrel) [5]. We perform comprehensive ablation studies with four architectures: GCN, PW-GCN, DS-GCN, and SO-GCN. Model performance is evaluated through semi-supervised node classification tasks and compared to the performance of mainstream methods including GIN, GraphSAGE, GAT, and GCN. Our experimental setup adopts a learning rate of 0.01, 64-dimensional embeddings, a dropout rate of 0.5, and an α parameter of 0.5. Table 1 summarizes the topological statistics of the datasets, where homophily metrics quantitatively verify structural heterogeneity.
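The benchmark graphs are available through standard libraries; a minimal loading sketch using PyTorch Geometric's dataset wrappers (our choice, not necessarily the authors' pipeline) together with the reported hyperparameters is shown below.

```python
from torch_geometric.datasets import Planetoid, WebKB, WikipediaNetwork

# homophilic benchmarks
cora     = Planetoid(root='data', name='Cora')
citeseer = Planetoid(root='data', name='CiteSeer')
pubmed   = Planetoid(root='data', name='PubMed')

# heterophilic benchmarks
texas     = WebKB(root='data', name='Texas')
chameleon = WikipediaNetwork(root='data', name='chameleon')
squirrel  = WikipediaNetwork(root='data', name='squirrel')

# hyperparameters reported in Section 4
config = dict(lr=0.01, hidden_dim=64, dropout=0.5, alpha=0.5)

print(cora[0])  # e.g. Data(x=[2708, 1433], edge_index=[2, ...], y=[2708], ...)
```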

4.1. Experiment Results and Analysis

Table 2 presents a classification accuracy comparison for GCN, PW-GCN, DS-GCN, and SO-GCN across six benchmark datasets. The results demonstrate significant synergistic optimization effects between the Precision Weight Parameter Mechanism and the Dynamic Step-size Adjustment Mechanism in SO-GCN.
On homophilic datasets such as Cora and PubMed, PW-GCN improves accuracy by 0.9–1.8% through the Precision Weight Parameter Mechanism’s optimization of information propagation efficiency. However, its accuracy decreases by 0.35% on heterophilic graphs such as Chameleon, suggesting that using this mechanism alone can amplify noise. In contrast, DS-GCN achieves performance gains of 6.56% and 5.7% on Texas and Chameleon, respectively, through the Dynamic Step-size Adjustment Mechanism, effectively suppressing noise propagation in heterophilic graphs. The combined model SO-GCN achieves an average 11.65% improvement in accuracy on heterophilic datasets, demonstrating the complementary nature of the dual stabilization mechanisms.
Figure 3 further reveals the dynamic properties of the four models' training: SO-GCN's loss curve fluctuates significantly less than GCN's, confirming the improvement in training stability due to the Dynamic Step-size Adjustment Mechanism. On PubMed's sparse graph, PW-GCN converges the fastest, but SO-GCN's overall performance is best, with faster convergence, lower training loss on average, and smoother accuracy curves.
This paper validates the effectiveness of SO-GCN through semi-supervised node classification tasks across six benchmark datasets. As shown in Table 3, SO-GCN achieves an average classification accuracy of 68.01%, a 10.8% improvement over baseline GCN. Notably, on heterophilic graphs, the model achieves 56.83% and 71.07% accuracy on the Texas and Squirrel datasets, respectively, significantly outperforming models such as GAT.
This indicates that the stability propagation rule based on stability theory effectively optimizes information flow and enhances the network’s ability to capture graph signals.

4.2. Depth Analysis and Feature Preservation

To explore performance variations in shallow networks, we compare SO-GCN and GCN on classification accuracy across two- to seven-layer configurations on six benchmark datasets, as shown in Table 4:
SO-GCN consistently outperforms GCN across all six benchmark datasets and layer configurations, particularly showing a 15.2% improvement in accuracy on the five-layer configuration of PubMed. This validates the framework’s ability to alleviate vanishing gradients by optimizing gradient stability and feature smoothness.
We quantify feature preservation through Hilbert–Schmidt Independence Criterion (HSIC) analysis between network layers. As shown in Table 5, SO-GCN reduces inter-layer HSIC values by 16.7% in seven-layer PubMed configurations compared to GCN, demonstrating effective prevention of feature homogenization while preserving original graph structural information.
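The paper does not state which kernel is used for HSIC; the sketch below uses the standard biased estimator with Gaussian kernels as one plausible choice for measuring the dependence between layer representations.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC between feature matrices X, Y of shape (n, d)."""
    def rbf(Z):
        sq = np.sum(Z ** 2, axis=1, keepdims=True)
        d2 = sq + sq.T - 2.0 * Z @ Z.T           # pairwise squared distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    n = X.shape[0]
    K, L = rbf(X), rbf(Y)
    C = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ C @ L @ C) / (n - 1) ** 2

# Lower HSIC between consecutive layer outputs indicates less homogenized
# (more independent) representations, as reported in Table 5.
```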

4.3. Computational Complexity Analysis

The above experimental results from multiple dimensions confirm that the SO-GCN framework, through the synergistic Precision Weight Parameter Mechanism and Dynamic Step-size Adjustment Mechanism, enhances model performance. However, its computational complexity inevitably increases in two respects: (1) the dynamic step-size calculation and the spectral analysis of dense neighborhood matrices introduce O(N² d_out) extra overhead; (2) the Precision Weight Parameter Mechanism requires calculating the Frobenius norm, resulting in an additional O(N d_in d_out) cost, where d_in and d_out are the input and output feature dimensions. For large-scale graph data, the Nyström low-rank approximation can be applied to reduce the matrix computation complexity to O(N r²) (r ≪ N), achieving a balance between efficiency and performance.
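As a reference for the last point, a generic Nyström sketch (not the authors' implementation) is given below: it samples r landmark columns of a symmetric matrix and uses the resulting factors so that products with feature matrices cost O(N r d + r² d) instead of O(N² d).

```python
import numpy as np

def nystrom_factors(S, r, seed=0):
    """Nystrom approximation of a symmetric N x N matrix S:
    S ~ C @ pinv(W_rr) @ C.T with C = S[:, idx] and W_rr = S[idx][:, idx]."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(S.shape[0], size=r, replace=False)   # landmark indices
    C = S[:, idx]                                         # N x r column sample
    W_pinv = np.linalg.pinv(S[np.ix_(idx, idx)])          # r x r pseudo-inverse
    return C, W_pinv

def approx_matmul(C, W_pinv, H):
    """Approximate S @ H without ever forming the N x N matrix S."""
    return C @ (W_pinv @ (C.T @ H))
```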

5. Conclusions

This study systematically validates the effectiveness of the SO-GCN model in semi-supervised node classification tasks. Experimental results demonstrate that the stability propagation rule, combining the Precision Weight Parameter Mechanism and the Dynamic Step-size Adjustment Mechanism, significantly enhances model performance: on homophilic graphs such as Cora and CiteSeer, SO-GCN improves classification accuracy by 1.1–10.7% compared to traditional GCN; on heterophilic graphs such as Texas and Chameleon, accuracy increases by 11.22–12.09% over the baseline model through dynamic adjustment of neighborhood information aggregation intensity. HSIC analysis further reveals that SO-GCN reduces inter-layer feature homogenization by 16.7% compared to GCN, effectively mitigating the gradient vanishing problem in shallow networks through optimized gradient propagation stability.
From an application perspective, SO-GCN’s architectural characteristics provide significant advantages in the following scenarios:
  • In heterophilic graph settings such as social network anomaly detection, the Dynamic Step-size Adjustment Mechanism dynamically suppresses noise propagation through spectral characteristic perception;
  • On knowledge graphs with sparse features but complex structures, such as biomedical networks, the Precision Weight Parameter Mechanism optimizes information propagation efficiency through Frobenius norm constraints;
  • In industrial-grade graph computing systems (e.g., multi-hop inference in recommendation systems), the synergistic effect of dual stabilization mechanisms ensures numerical stability during higher-order propagation.

Author Contributions

Conceptualization, L.C.; Methodology, L.C.; Validation, H.Z.; Formal analysis, L.C. and H.Z.; Investigation, H.Z.; Resources, S.H.; Data curation, L.C. and H.Z.; Writing—original draft, L.C.; Writing—review & editing, L.C., H.Z. and S.H.; Visualization, L.C.; Funding acquisition, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [12471304].

Data Availability Statement

The data used in this study, including the Cora, Citeseer, PubMed, Chameleon, Texas, and Squirrel datasets, are publicly available. These datasets can be accessed through relevant libraries in PyTorch or other machine learning frameworks. Detailed instructions on how to load the datasets can be found in the official documentation of these libraries (https://pytorch.org/docs/stable/). No new data were created for this study, and the data used in the experiments are openly archived for public use.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 1

Appendix A.1. Existence of Theorem 1

Proof. 
Define the nonlinear mapping Φ : R^{N×d_{t+1}} → R^{N×d_{t+1}} as
\Phi(H) = \sigma\big(L H W\big), \qquad L = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}
where σ(·) is an activation function satisfying the global Lipschitz condition with constant L_σ.
For any sequence H_k converging to H in the Frobenius norm, consider the continuity of the linear operator C(H) = L H W:
\| C(H_k) - C(H) \|_F \le \| L \|_F \| W \|_F \| H_k - H \|_F \to 0 \quad (k \to \infty)
From the finiteness of the Frobenius norms of L and W, we conclude that C(·) is continuous. When we compose it with the Lipschitz continuous function σ(·), Φ remains continuous in the Frobenius norm topology.
Define the closed ball
B_M = \{ H \in \mathbb{R}^{N \times d_{t+1}} : \| H \|_F \le M \}
Choosing M such that L_σ ‖L‖_F ‖W‖_F M ≤ M, we can show by induction that Φ(B_M) ⊆ B_M. By the Heine–Borel theorem, B_M is a compact convex set in finite-dimensional space.
Applying Brouwer's Fixed Point Theorem, the continuous mapping Φ : B_M → B_M has at least one fixed point H* ∈ B_M satisfying
H^{*} = \sigma\big(L H^{*} W\big)
□

Appendix A.2. Uniqueness of Theorem 1

Proof. 
We prove the contractive property of the mapping Φ in the Banach space (R^{N×d_{t+1}}, ‖·‖_F).
For any H_1, H_2 ∈ R^{N×d_{t+1}}:
\| \Phi(H_1) - \Phi(H_2) \|_F \le L_\sigma \| L (H_1 - H_2) W \|_F \le L_\sigma \| L \|_F \| W \|_F \| H_1 - H_2 \|_F
Let the contraction coefficient be κ = L_σ ‖L‖_F ‖W‖_F. When κ < 1, Φ is a strict contraction mapping.
To achieve a globally unique equilibrium state, we impose the following weight constraint:
\| W \|_F < \frac{1}{L_\sigma \| L \|_F}
By the Banach Fixed Point Theorem, there exists a unique H* ∈ R^{N×d_{t+1}} satisfying H* = Φ(H*). □

Appendix B. Proof of Theorem 2

Proof. 
Let the perturbation of the equilibrium state be δH(t) = H(t) − H*; its dynamic equation is given by
\frac{d}{dt}\,\delta H = \sigma\big(L (H^{*} + \delta H) W\big) - \sigma\big(L H^{*} W\big)
Performing Fréchet differentiation at δH = 0, the Jacobian operator acts as
J[\delta H] = \sigma'\big(L H^{*} W\big) \odot \big(L\, \delta H\, W\big)
\gamma(t) = \sup_{\delta H \neq 0} \frac{\| L\, \delta H\, W \|_F}{\| \delta H \|_F} \le \| L \|_F \| W \|_F
where σ′ denotes the derivative of the activation function and ⊙ represents the Hadamard product.
Let E[σ′] = \frac{1}{N d_{t+1}} \sum_{i,j} \sigma'(M^{*}_{ij}) be the spatial mean of the derivative, where M* = L H* W. By Jensen's inequality,
\| L \|_F \| W \|_F\, \mathbb{E}[\sigma'] \ge \gamma(t)
When ‖W‖_F ≤ (E[σ′] ‖L‖_F)^{-1} is satisfied, we have γ(t) ≤ 1, and the system exhibits asymptotic stability. □

Appendix C. Proof of Theorem 3

Proof. 
Consider the forward Euler discretization scheme H_{t+1} = H_t + η_t σ(L H_t W), where the local truncation error is controlled by the Jacobian matrix J_t = L · diag(σ′(M_t)) · W with M_t = L H_t W.
Spectral Constraint Condition
To ensure numerical stability, the following condition must be satisfied:
\rho\big(I + \eta_t J_t\big) \le 1
where ρ ( · ) denotes the spectral radius. Using the Gershgorin circle theorem, we obtain
|\lambda_{\min}(J_t)| \le \tfrac{1}{2}\,|\mathrm{tr}(J_t)|
\sigma_{\max}(J_t) \le 1 \iff \| J_t \|_2 \le 1
By choosing the step size upper bound \eta_t \le \min\big( 2\,|\lambda_{\min}(J_t)|^{-1},\ \sigma_{\max}(J_t)^{-1} \big), we ensure that the eigenvalue distribution remains within the unit circle and the discrete system remains Lyapunov stable. □

References

  1. Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral networks and locally connected networks on graphs. In Proceedings of the International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014.
  2. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98.
  3. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3844–3852.
  4. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
  5. Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1024–1034.
  6. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 974–983.
  7. Fout, A.; Byrd, J.; Shariat, B.; Ben-Hur, A. Protein interface prediction using graph convolutional networks. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6533–6542.
  8. Li, X.; Chen, S.; Hu, X.; Yang, J. Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift. arXiv 2018, arXiv:1801.05134.
  9. Oono, K.; Suzuki, T. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. arXiv 2019, arXiv:1905.10947. Available online: https://api.semanticscholar.org/CorpusID:209994765 (accessed on 27 May 2019).
  10. Wang, X.; Zhang, M. How Powerful are Spectral Graph Neural Networks. In Proceedings of the 39th International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 23341–23362.
  11. Chien, E.; Peng, J.; Li, P.; Milenkovic, O. Adaptive Universal Generalized PageRank Graph Neural Network. arXiv 2020, arXiv:2001.06922.
  12. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81.
  13. Xu, B.; Shen, H.; Cao, Q.; Cen, K.; Cheng, X. Graph convolutional networks using heat kernel for semi-supervised learning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 1928–1934.
  14. Li, Q.; Han, Z.; Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 3538–3545.
  15. Pei, H.; Wei, B.; Chang, K.C.-C.; Lei, Y.; Yang, B. Geom-GCN: Geometric Graph Convolutional Networks. arXiv 2020, arXiv:2002.05287.
  16. Chen, D.; Liao, R. Stability and Generalization of Graph Neural Networks via Spectral Dynamics. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11289–11302.
  17. Rusch, T.K.; Bronstein, M.; Mishra, S. Gradient Gating for Deep Multi-Rate Learning on Graphs. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2022; pp. 22136–22149.
  18. Topping, J.; Giovanni, F.D.; Chamberlain, B.P.; Dong, X.; Bronstein, M.M. Understanding Over-Squashing and Bottlenecks on Graphs via Curvature. In Proceedings of the 10th International Conference on Learning Representations, Virtual, 25–29 April 2022.
  19. Dong, Y.; Ding, K.; Jalaeian, B.; Ji, S.; Li, J. AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual Event, Australia, 1–5 November 2021.
  20. Yang, Z.; Cohen, W.; Salakhutdinov, R. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the International Conference on Machine Learning (ICML), New York, NY, USA, 19–24 June 2016.
  21. Lim, D.; Robinson, J.; Zhao, L.; Smidt, T.; Sra, S.; Maron, H.; Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 1 May 2023.
  22. Metz, L.; Maheswaranathan, N.; Freeman, C.D.; Poole, B.; Sohl-Dickstein, J.N. Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves. arXiv 2020, arXiv:2009.11243.
  23. Haber, E.; Lensink, K.; Treister, E.; Ruthotto, L. IMEXnet: A Forward Stable Deep Neural Network. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2525–2534.
  24. Haber, E.; Ruthotto, L. Stable architectures for deep neural networks. Inverse Probl. 2018, 34, 014004.
  25. Haber, E.; Ruthotto, L.; Holtham, E. Learning across scales—A multiscale method for convolution neural networks. arXiv 2017, arXiv:1703.02009.
  26. Brouwer, L.E.J. Über Abbildung von Mannigfaltigkeiten. Math. Ann. 1911, 71, 97–115.
  27. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181.
Figure 1. Workflow of Stability-Optimized Graph Convolutional Network with two layers.
Figure 2. Basic framework of Stability-Optimized Graph Convolutional Network with two layers.
Figure 3. Training process of four networks with two layers on homophilic graph datasets.
Table 1. Dataset statistics.

Dataset     Nodes    Edges     Classes  Features  Homophily Level
Cora        2708     5429      7        1433      0.81
CiteSeer    3327     4732      6        3703      0.74
PubMed      19,717   44,338    3        500       0.80
Chameleon   2277     36,101    3        2325      0.18
Texas       183      309       5        1703      0.11
Squirrel    5201     217,073   3        2089      0.018
Table 2. Mean accuracy (%) of SO-GCN in two-layer ablation experiment.

Method    Cora   CiteSeer  PubMed  Texas  Chameleon  Squirrel  Avg
GCN       83.60  50.23     75.70   44.74  46.55      68.08     61.48
PW-GCN    84.50  50.53     77.53   50.36  46.20      67.13     62.71
DS-GCN    84.20  58.07     76.07   51.30  52.25      70.18     65.34
SO-GCN    84.70  60.90     78.40   56.83  57.77      71.07     68.01
Table 3. Mean accuracy (%) comparison across six benchmark networks with two layers.

Method      Cora   CiteSeer  PubMed  Texas  Chameleon  Squirrel
MLP         57.00  53.35     72.90   67.18  51.44      38.56
ChebNet     73.60  53.85     69.00   73.78  55.77      39.75
GIN         64.20  44.80     73.30   59.46  68.64      42.75
GraphSAGE   78.00  52.19     76.00   56.76  65.67      53.20
GCN         83.60  50.23     75.70   44.74  46.55      68.08
GAT         84.10  48.57     77.70   46.85  57.31      43.65
SO-GCN      84.70  60.90     78.40   56.83  57.77      71.07
Table 4. Performance comparison across models with different layer configurations.

Dataset    Model    2 Layers  3 Layers  4 Layers  5 Layers  6 Layers  7 Layers
Cora       GCN      0.836     0.750     0.707     0.472     0.431     0.413
           SO-GCN   0.847     0.791     0.738     0.574     0.568     0.557
CiteSeer   GCN      0.502     0.413     0.302     0.273     0.277     0.274
           SO-GCN   0.609     0.519     0.460     0.381     0.374     0.369
PubMed     GCN      0.757     0.749     0.758     0.532     0.479     0.458
           SO-GCN   0.784     0.761     0.759     0.684     0.607     0.554
Table 5. HSIC values across different datasets and models.

Dataset    Layers  Model    HSIC (Layers 1–2)  HSIC (Input–Layer 1)
Cora       2       GCN      0.18 ± 0.02        0.40 ± 0.03
                   SO-GCN   0.10 ± 0.01        0.38 ± 0.02
           4       GCN      0.25 ± 0.03        0.32 ± 0.04
                   SO-GCN   0.15 ± 0.02        0.35 ± 0.03
           7       GCN      0.29 ± 0.04        0.28 ± 0.05
                   SO-GCN   0.18 ± 0.03        0.33 ± 0.04
CiteSeer   2       GCN      0.22 ± 0.03        0.35 ± 0.04
                   SO-GCN   0.12 ± 0.02        0.34 ± 0.03
           4       GCN      0.31 ± 0.04        0.25 ± 0.05
                   SO-GCN   0.20 ± 0.03        0.30 ± 0.04
           7       GCN      0.36 ± 0.05        0.21 ± 0.06
                   SO-GCN   0.25 ± 0.04        0.28 ± 0.05
PubMed     2       GCN      0.15 ± 0.02        0.45 ± 0.03
                   SO-GCN   0.08 ± 0.01        0.43 ± 0.02
           4       GCN      0.20 ± 0.03        0.38 ± 0.04
                   SO-GCN   0.12 ± 0.02        0.40 ± 0.03
           7       GCN      0.26 ± 0.04        0.30 ± 0.05
                   SO-GCN   0.20 ± 0.03        0.40 ± 0.03

