Article

Topology-Robust Power System Stability Prediction with a Supervised Contrastive Spatiotemporal Graph Convolutional Network

1 State Grid Fujian Electric Power Research Institute, Fuzhou 350007, China
2 Fujian Key Laboratory of Smart Grid Protection and Operation Control, Fuzhou 350007, China
3 State Grid Fujian Electric Power Co., Ltd., Fuzhou Company, Fuzhou 350007, China
4 School of Electrical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Electricity 2025, 6(4), 71; https://doi.org/10.3390/electricity6040071
Submission received: 14 September 2025 / Revised: 10 November 2025 / Accepted: 5 December 2025 / Published: 9 December 2025

Abstract

Modern power systems face growing challenges in stability assessment due to large-scale renewable energy integration and rapidly changing operating conditions. Data-driven approaches have emerged as promising solutions for real-time stability assessment, yet their performance often degrades under network topology reconfigurations. To address this limitation, the Spatiotemporal Contrastive Graph Convolutional Network (STCGCN) is proposed for the joint prediction of voltage and transient stability across known and unknown topologies. The framework integrates a graph convolutional network (GCN) encoder to capture spatial dependencies and a temporal convolutional network to model electromechanical dynamics. It also employs supervised contrastive learning to extract features that remain discriminative under grid topology variation, enhance stability class separability, and mitigate class imbalance under varying operating conditions, such as fluctuating loads and renewable integration. Case studies on the IEEE 39-bus system demonstrate that STCGCN achieves 89.66% accuracy on in-sample datasets from known topologies and 87.73% on out-of-sample datasets from unknown topologies, outperforming single-task learning approaches. These results highlight the method’s robustness to topology variations and its strong generalization across configurations, providing a topology-aware and resilient solution for real-time joint voltage and transient stability assessment in power systems.

1. Introduction

Modern power grids are undergoing a profound transformation driven by large-scale renewable energy integration and ambitious decarbonization targets [1]. Unlike conventional systems reliant on synchronous generators, renewable-rich grids depend on inverter-based resources, such as wind and photovoltaic generation, which introduce variability, forecast uncertainty, and reduced physical inertia. These characteristics, combined with evolving demand patterns, market-driven operations, and frequent topology changes due to switching, outages, or reconfigurations, intensify nonlinear interactions and shrink stability margins [2]. Real-time stability assessment is thus essential for secure operation. Two critical aspects of system security are voltage stability, which ensures acceptable bus voltages after disturbances, and transient stability, which maintains rotor-angle synchronism following large disturbances like faults or line trips [3]. Voltage stability is increasingly challenged by limited reactive power support and weak grid conditions under high renewable penetration. Similarly, transient stability is compromised by reduced system inertia and damping due to the displacement of synchronous generators. In renewable-rich grids, these stability phenomena are tightly coupled across space and time, as active and reactive power dynamics interact through converter controls and low-inertia networks. Topology changes further complicate this coupling by altering power flows and stability boundaries. Analyzing voltage and transient stability separately overlooks these interdependencies, necessitating integrated, efficient, and adaptive assessment methods for renewable-rich, topology-varying systems.
Transient stability refers to a power system’s ability to maintain generator synchronism after major disturbances, such as short circuits, line outages, or sudden loss of generation or load [4]. It depends on the balance between mechanical input power ($P_m$) and electromagnetic output power ($P_e$), where a sustained mismatch causes rotor acceleration or deceleration, risking loss of synchronism and potentially triggering cascading failures or blackouts [5]. Voltage stability, conversely, is the ability to maintain bus voltages within acceptable limits during normal operation and after disturbances [6]. It hinges on the balance of reactive power supply and demand, influenced by load behavior, network configuration, and transmission losses. Without sufficient reactive support, heavy loading can lead to progressive voltage declines and ultimately result in collapse. In renewable-rich grids, these stability types are interconnected, as reduced voltages weaken angular stability, and rotor oscillations impact voltage recovery. These interactions, shaped by induction motor dynamics, excitation systems, network strength, and reactive power compensation, require high-fidelity modeling, typically through time-domain simulation of differential-algebraic equations (DAEs) for generators, excitation systems, loads, and compensation [7]. However, traditional computational methods face significant challenges. Conventional contingency screening solves large-scale DAEs for each scenario, often taking 5–30 min for realistic systems, which is too slow for real-time or wide-area assessment [7]. Furthermore, decoupled analysis, which evaluates voltage and transient stability separately, neglects their dynamic coupling. This is especially problematic in weak grids with high renewable penetration, reduced inertia, and variable reactive support, underscoring the need for integrated approaches to ensure accurate and timely stability assessment.
Classical control-theoretic methods have long underpinned power-system stabilization, with robust designs based on Linear Parameter-Varying (LPV) models and mixed $H_2/H_\infty$ synthesis offering performance guarantees under bounded uncertainties and disturbances. For example, Dehghani et al. [8] proposed a robust model predictive control (RMPC) scheme for AC microgrids that achieves effective voltage and frequency regulation via gain-scheduled state feedback. While such approaches are resilient to model uncertainty and exogenous perturbations, they typically rely on accurate plant parameters and can become computationally intractable at scale. Related advances include decentralized robust excitation control [9], stability-index-guided placement of Thyristor-Controlled Series Capacitors (TCSCs) [10], and nonlinear robust controllers for Flexible AC Transmission Systems (FACTS)—e.g., Static Synchronous Series Compensators (SSSCs)—to damp oscillations and enhance transient stability [11]. However, increasing system complexity, frequent topology changes, and limited model fidelity constrain the applicability of these model-based strategies, motivating a shift toward data-driven paradigms enabled by wide-area Phasor Measurement Unit (PMU) deployments [12]. Representative data-driven contributions include the Temporal and Topological Embedding Deep Neural Network (TTEDNN) for accurate, efficient rotor-angle stability prediction from PMU streams [13]; a concurrent transient- and voltage-stability framework based on a Spatiotemporal-Embedding Graph Neural Network (STEGNN) that encodes network structure and temporal dynamics and outperforms single-task baselines on the IEEE 118-bus system [14]; and surveys covering machine-learning-based decision and control—spanning artificial neural networks (ANNs), decision trees (DTs), support vector machines (SVMs), and (deep) reinforcement learning—for secure operation and transient-stability assessment [15]. Building on this trajectory, Zhou et al. developed a multi-input deep convolutional neural network for joint rotor-angle and voltage-stability classification, achieving high accuracy on an eight-machine, 36-bus system [16], while Hu et al. employed deep belief networks to identify instability patterns via learned features [17]. Early ANN/SVM approaches delivered strong classification performance but struggled with time-varying dynamics [18]; subsequent RNN/CNN models improved temporal feature learning but failed to sufficiently capture grid topology [19]. Graph convolutional networks (GCNs) address this limitation by explicitly encoding network structure, improving predictive accuracy for stability assessment [20]. More recently, topology-aware deep reinforcement learning has leveraged GCNs to accommodate distribution-grid reconfigurations for voltage-control tasks [21], collectively highlighting the promise of data-driven methods for real-time stability analysis in complex, evolving power systems.
Despite this progress, significant limitations persist. Many methods overlook the interdependence of active and reactive power, topology changes, and severe class imbalance. Treating voltage and transient stability independently can obscure synergistic risks, resulting in incomplete assessments. Graph convolutional network (GCN)-based models commonly assume a fixed adjacency matrix, which fails to adapt to real-time reconfigurations; performance degrades under frequent switching operations—for example, the 12–18 daily topology changes reported for the IEEE 39-bus system [22]. In practical datasets, unstable events often constitute fewer than 5% of samples, biasing models toward the stable class and elevating false-negative rates (FNR > 20%), undermining early-warning reliability. These gaps motivate the development of joint, spatiotemporal, topology-aware contrastive GCN learning frameworks that (a) encode grid structure and fast dynamics, (b) adapt to topology variations, and (c) address class imbalance, thereby enabling scalable, reliable online assessment for modern power systems.
The remainder of this paper is structured as follows: Section 2 introduces the Spatiotemporal Contrastive Graph Convolutional Network (STCGCN), which integrates a graph convolutional network encoder and a temporal convolutional network. It employs a topology-adaptive adjacency matrix and supervised contrastive learning to enhance robustness against grid reconfigurations. Section 3 details the dataset generation methodology, including stochastic topology perturbations for the IEEE 39-bus system, PMU data simulation, and stability labeling. Section 4 presents experimental results: STCGCN achieves 89.66% accuracy on in-sample topologies and 87.73% on out-of-sample topologies, demonstrating strong generalization across configurations. Finally, Section 5 concludes by highlighting the framework’s efficacy for real-time stability assessment in renewable-rich power systems.

2. Joint-Task Prediction Framework for Transient and Voltage Stability Under Topology Variation

To address topology changes and the interdependence between active and reactive power, this paper proposes a Spatiotemporal Contrastive Graph Convolutional Network (STCGCN) framework, as shown in Figure 1, for joint stability prediction. STCGCN employs a hybrid GCN-TCN architecture with a topology-adaptive adjacency matrix and data augmentation. STCGCN adopts a sequential fusion architecture to learn spatiotemporal representations. A supervised contrastive GCN first extracts spatial features at each time step, after which mean pooling across all nodes produces a compact system-level topological representation. This pooled representation is then passed to the TCN to capture temporal dependencies in the transient dynamics. Through this GCN–TCN pipeline, STCGCN effectively models both the power grid topology and its dynamic response, enabling robust spatiotemporal feature extraction. Simultaneously, it captures spatial topology dependencies and disturbance-induced temporal dynamics through shared feature layers, enabling joint modeling. It dynamically constructs adjacency relationships based on real-time electrical parameters (line impedance and power flow), ensuring robustness under grid reconfiguration. It also employs class-weighted loss functions and synthetic minority oversampling to alleviate data imbalance.
The STCGCN architecture combines contrastive graph representation learning with temporal feature extraction, as illustrated in Figure 1. Let $X \in \mathbb{R}^{N \times T \times d}$ denote the PMU input tensor for $N$ buses, $T$ time steps, and $d$ features.

2.1. Supervised Contrastive GCN Encoder

The contrastive GCN learns spatial topological representations by aggregating localized neighborhoods across all nodes and time steps [23]. Given an undirected graph $G = (V, E)$ with adjacency matrix $A \in \mathbb{R}^{N \times N}$ and input tensor $X \in \mathbb{R}^{T \times N \times d}$ ($T$ time steps, $N$ buses, and $d$ features per bus), the spatial dependencies are processed at each time step [23]:
$$H_t^{(l+1)} = \mathrm{ReLU}\!\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H_t^{(l)} \Theta^{(l)}\right), \quad t = 1, \dots, T$$
where $H_t^{(0)} = X[t,:,:] \in \mathbb{R}^{N \times d}$ is the feature matrix at time $t$, $\tilde{A} = A + I_N$ adds self-connections, and $\tilde{D}$ is the degree matrix of $\tilde{A}$ with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.
This implements spatiotemporal Laplacian smoothing, where each node’s representation becomes a weighted average of its neighbors and itself at each time step. After the final layer L, node features are aggregated to obtain a global temporal representation:
$$h_t = \frac{1}{N} \sum_{i=1}^{N} H_t^{(L)}[i,:] \in \mathbb{R}^{d_{\mathrm{gcn}}}$$
The sequence of global representations forms the output:
$$H = \left[h_1, h_2, \dots, h_T\right] \in \mathbb{R}^{T \times d_{\mathrm{gcn}}}$$
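A minimal PyTorch sketch of this per-time-step propagation and node-level mean pooling is given below. It assumes a dense adjacency matrix and illustrative dimension names (d_in, d_gcn, num_layers); it illustrates the propagation rule above and is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SpatialGCNEncoder(nn.Module):
    """Sketch of the contrastive GCN encoder: per-time-step Laplacian
    smoothing followed by mean pooling over all nodes."""

    def __init__(self, d_in, d_gcn, num_layers=3):
        super().__init__()
        dims = [d_in] + [d_gcn] * num_layers
        self.weights = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(dims[l], dims[l + 1]))
             for l in range(num_layers)]
        )

    @staticmethod
    def normalized_adjacency(A):
        # A_tilde = A + I_N, then symmetric normalization D^{-1/2} A_tilde D^{-1/2}
        A_tilde = A + torch.eye(A.size(0), device=A.device)
        d_inv_sqrt = A_tilde.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)

    def forward(self, A, X):
        # A: (N, N) adjacency of the current topology; X: (T, N, d_in) PMU tensor
        A_hat = self.normalized_adjacency(A)
        H = X
        for W in self.weights:
            # spatial aggregation applied independently at every time step
            H = torch.relu(torch.einsum("ij,tjd->tid", A_hat, H @ W))
        return H.mean(dim=1)  # (T, d_gcn): sequence of global representations h_t
```

Calling `SpatialGCNEncoder(d_in=4, d_gcn=256)(A, X)` on a $(T, N, 4)$ tensor returns the sequence $H \in \mathbb{R}^{T \times d_{\mathrm{gcn}}}$ consumed by the downstream modules.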
In contrastive learning, sample relationships are defined through positive and negative pairs, which jointly determine the structure of the learned representation space. In terms of stability class, a positive pair consists of two samples that share the same joint stability class label, regardless of differences in their underlying topologies or operating conditions. Any pair of samples with different joint stability labels is treated as a negative pair. At the same time, the power grid topology is encoded by the contrastive GCN for feature representation.
The supervised contrastive loss $L_{\mathrm{cont}}$ operates on the entire sequence representation [24]:
$$L_{\mathrm{cont}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\!\left(s(H^{(i)}, H^{(p)})/\tau\right)}{\sum_{j \in S(i)} \exp\!\left(s(H^{(i)}, H^{(j)})/\tau\right)}$$
where $I = \{1, \dots, 2B\}$ is the index set of the augmented samples in a batch of size $B$, $P(i) = \{p \in I \setminus \{i\} \mid y_p = y_i\}$ is the positive set of samples sharing the same joint stability class label as sample $i$, $S(i) = I \setminus \{i\}$ is the comparison set, $s(H^{(i)}, H^{(j)}) = \frac{\langle \mathrm{vec}(H^{(i)}),\, \mathrm{vec}(H^{(j)}) \rangle}{\| \mathrm{vec}(H^{(i)}) \| \, \| \mathrm{vec}(H^{(j)}) \|}$ is the normalized sequence similarity, and $\tau > 0$ is the temperature parameter. The formulation applies supervised contrastive learning by utilizing label-derived relationships, treating all same-class sequences as positives. The loss is computed over each anchor $i$, with the positive set $P(i)$ containing all samples that share anchor $i$'s label (excluding $i$ itself). It simultaneously pulls the $(i, p)$ pairs for $p \in P(i)$ closer together while pushing apart representations from different classes.
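The loss above can be sketched as follows; this is a generic supervised contrastive implementation over flattened sequence representations (the tensor shapes and the batch-mean reduction are assumptions of this sketch rather than details reported in the paper).

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(H, labels, temperature=0.1):
    """Supervised contrastive loss over sequence representations.

    H:      (2B, T, d_gcn) original and augmented views stacked along the batch
    labels: (2B,) joint stability class of each sample
    """
    z = F.normalize(H.flatten(start_dim=1), dim=1)          # vec(H) / ||vec(H)||
    sim = (z @ z.t()) / temperature                          # scaled cosine similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))          # S(i) excludes i itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)          # avoid -inf * 0 below
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)             # |P(i)|
    loss_per_anchor = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    return loss_per_anchor.mean()
```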

2.2. Temporal Convolutional Network

The Temporal Convolutional Network (TCN) extracts temporal features from the global system dynamics. The TCN takes as input the global temporal representation $H \in \mathbb{R}^{T \times d_{\mathrm{gcn}}}$ generated by the contrastive GCN, where $H = f_\theta(A, X)$ is obtained through spatial aggregation over all nodes (Section 2.1), $T$ is the number of time steps, and $d_{\mathrm{gcn}}$ is the spatial feature dimension. It is worth noting that our fusion is strictly sequential rather than branched. The GCN-derived $H$ encapsulates not only nodal states but also graph-induced relational context, making it a topology-infused temporal input. The TCN processes the global sequence using stacked dilated causal convolutions. The output at position $t$ in layer $l$ is computed as
$$Z_t^{(l)} = \sum_{k=0}^{K-1} W_k^{(l)}\, h_{t - k \cdot \delta^{(l)}}^{(l-1)} + b^{(l)}$$
where $Z_t^{(l)} \in \mathbb{R}^{d_{\mathrm{out}}^{(l)}}$ is the output feature at time $t$ and layer $l$, $K$ denotes the convolution kernel size, $\delta^{(l)}$ represents the dilation factor at layer $l$ ($\delta^{(l)} = 2^{l-1}$ for $l \geq 1$), $W_k^{(l)} \in \mathbb{R}^{d_{\mathrm{in}}^{(l)} \times d_{\mathrm{out}}^{(l)}}$ are learnable filters ($d_{\mathrm{in}}^{(1)} = d_{\mathrm{gcn}}$), $b^{(l)} \in \mathbb{R}^{d_{\mathrm{out}}^{(l)}}$ is the bias term, and $h^{(0)} = H$ is the input sequence.
The causal constraint prevents future information leakage:
$$h_{t - k \cdot \delta^{(l)}}^{(l-1)} = 0 \quad \text{for} \quad t - k \cdot \delta^{(l)} < 0$$
Dilated convolutions enable exponential receptive field growth. For L layers with kernel size K:
$$R = 1 + (K-1) \cdot \sum_{l=1}^{L} \delta^{(l)} = 1 + (K-1)\left(2^{L} - 1\right)$$
Residual blocks with skip connections mitigate gradient vanishing as follows:
$$h^{(l)} = \mathrm{ReLU}\!\left(\mathcal{F}\!\left(h^{(l-1)}\right) + \mathcal{H}\!\left(h^{(l-1)}\right)\right)$$
where $\mathcal{F}$ denotes the convolution operations, and $\mathcal{H}$ is a 1×1 convolution for dimension matching.
The final TCN output $Z \in \mathbb{R}^{d_{\mathrm{tcn}} \times d_{\mathrm{gcn}}}$ is obtained through temporal aggregation, where $d_{\mathrm{tcn}}$ is the temporal feature dimension (number of output channels), and $d_{\mathrm{gcn}}$ is preserved as the spatial feature dimension. This matrix $Z$ contains compressed spatiotemporal features of the entire system and serves as input to the MLP classifier for global stability prediction.
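A compact PyTorch sketch of one dilated causal convolution block with a residual skip, stacked with the dilation rates {1, 2, 4} listed in Table 2, is shown below. The left-only zero padding enforces causality; the final temporal aggregation to $Z$ is omitted, and all layer widths are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalTCNBlock(nn.Module):
    """One dilated causal convolution with a 1x1 residual skip connection."""

    def __init__(self, d_in, d_out, kernel_size=3, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation       # pad the past only => causal
        self.conv = nn.Conv1d(d_in, d_out, kernel_size, dilation=dilation)
        self.skip = nn.Conv1d(d_in, d_out, kernel_size=1)  # dimension matching

    def forward(self, h):
        # h: (batch, d_in, T)
        out = self.conv(F.pad(h, (self.left_pad, 0)))      # zero-pad earlier time steps
        return torch.relu(out + self.skip(h))

class TemporalConvNet(nn.Module):
    """Stack of causal blocks with exponentially growing dilation."""

    def __init__(self, d_gcn, d_tcn, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        dims = [d_gcn] + [d_tcn] * len(dilations)
        self.blocks = nn.Sequential(*[
            CausalTCNBlock(dims[i], dims[i + 1], kernel_size, dilation)
            for i, dilation in enumerate(dilations)
        ])

    def forward(self, H):
        # H: (batch, T, d_gcn) sequence of global representations from the GCN
        return self.blocks(H.transpose(1, 2))              # (batch, d_tcn, T)
```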

2.3. MLP Classifier for Stability Prediction

The MLP classifier receives the global temporal feature matrix $Z \in \mathbb{R}^{d_{\mathrm{tcn}} \times d_{\mathrm{gcn}}}$ from the TCN output, where $d_{\mathrm{tcn}}$ is the TCN output temporal feature dimension and $d_{\mathrm{gcn}}$ is the GCN output spatial feature dimension. The input to the MLP is prepared by flattening the feature matrix into a vector:
$$z = \mathrm{vec}(Z) \in \mathbb{R}^{d_{\mathrm{tcn}} \cdot d_{\mathrm{gcn}}}$$
The MLP architecture processes this global representation to produce the final stability prediction:
• Input Layer: Receives the flattened feature vector $z \in \mathbb{R}^{d_{\mathrm{in}}}$, where $d_{\mathrm{in}} = d_{\mathrm{tcn}} \cdot d_{\mathrm{gcn}}$.
• Hidden Layers: Each hidden layer applies an affine transformation followed by ReLU activation:
$$u^{(k)} = \mathrm{ReLU}\!\left(\Phi^{(k)} u^{(k-1)} + \beta^{(k)}\right)$$
where $u^{(0)} = z$, $\Phi^{(k)} \in \mathbb{R}^{d_k \times d_{k-1}}$ is the weight matrix, and $\beta^{(k)} \in \mathbb{R}^{d_k}$ is the bias vector.
• Output Layer: Produces logits for the system-wide stability prediction:
$$\hat{y} = \Phi^{(K)} u^{(K-1)} + \beta^{(K)}$$
where $K$ is the total number of layers. The output dimension $d_K = 4$ corresponds to the four stability classes defined in Equation (14).
The predicted probability vector for the entire system is obtained via softmax:
$$p(c) = \frac{\exp\!\left(\hat{y}(c)\right)}{\sum_{k=1}^{4} \exp\!\left(\hat{y}(k)\right)}, \quad c = 1, 2, 3, 4$$
where $p(c)$ represents the probability that the power system belongs to stability class $c$.
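A minimal sketch of this head is given below; the hidden width of 512 follows Table 2, while the single hidden layer and the softmax placement are simplifying assumptions (in training, the logits would typically be passed to a cross-entropy loss that applies the softmax internally).

```python
import torch
import torch.nn as nn

class StabilityClassifier(nn.Module):
    """MLP head: flatten Z to vec(Z), one hidden ReLU layer, 4-class output."""

    def __init__(self, d_tcn, d_gcn, hidden=512, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # z = vec(Z)
            nn.Linear(d_tcn * d_gcn, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),        # logits y_hat
        )

    def forward(self, Z):
        # Z: (batch, d_tcn, d_gcn) global spatiotemporal feature matrix
        logits = self.net(Z)
        return torch.softmax(logits, dim=1)        # class probabilities p(c)
```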

2.4. Spatiotemporal Joint Training and Loss Functions

The joint training strategy is presented in Algorithm 1.
The parameters in the training procedure are associated with the supervised contrastive and joint-task cross-entropy loss functions $L_{\mathrm{cont}}$ and $L_{\mathrm{task}}$. The joint training utilizes the supervised contrastive loss function $L_{\mathrm{cont}}$ for representation learning as follows:
$$L_{\mathrm{cont}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\!\left(s(H_i, H_p)/\tau\right)}{\sum_{j \in S(i)} \exp\!\left(s(H_i, H_j)/\tau\right)}$$
where $I$ is the set of indices for all augmented samples in a batch, $P(i)$ is the set of indices of positive samples sharing the same stability label $y_i$, $S(i) = I \setminus \{i\}$ is the comparison set, $s(H_i, H_j) = \frac{\langle \mathrm{vec}(H_i),\, \mathrm{vec}(H_j) \rangle}{\| \mathrm{vec}(H_i) \| \, \| \mathrm{vec}(H_j) \|}$ computes the cosine similarity between flattened sequence representations, $\tau > 0$ is the temperature hyperparameter, and $H_i \in \mathbb{R}^{T \times d_{\mathrm{gcn}}}$ is the global temporal representation for sample $i$. Under supervised contrastive learning, the model maps samples from the same stability class to nearby points in the embedding space while pushing apart samples from different classes. Because positive pairs are constructed across heterogeneous network topologies, the objective explicitly promotes invariance to structural variations, thereby enhancing generalization to previously unseen topologies.
The GCN mapping function $f_\theta$ generates global temporal representations:
$$H_{\mathrm{gcn}} = f_\theta(A, X) = \left[\frac{1}{N} \sum_{n=1}^{N} \mathrm{GCN}^{(L)}(A, X_t)\right]_{t=1}^{T} \in \mathbb{R}^{T \times d_{\mathrm{gcn}}}$$
where $X \in \mathbb{R}^{N \times T \times d}$ is the PMU measurement tensor, $A \in \mathbb{R}^{N \times N}$ is the topology-adaptive adjacency matrix, and $\mathrm{GCN}^{(L)}$ denotes the final GCN layer output. The adjacency matrix $A$ is dynamically constructed using grid-informed electrical parameters:
$$A_{ij} = \begin{cases} \dfrac{K_{ij} \sin(\delta_i - \delta_j)}{p_{\mathrm{norm}}}, & (i,j) \in E \\[4pt] 0, & \text{otherwise} \end{cases}$$
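The construction above can be sketched as follows. The helper is purely illustrative: the coupling coefficients `K`, the bus-angle vector `delta`, and the normalization constant `p_norm` are placeholders, and the reading of the equation (sin-weighted coupling on in-service branches, normalized by $p_{\mathrm{norm}}$, zero elsewhere) is an assumption of this sketch.

```python
import torch

def build_adjacency(edges, K, delta, p_norm=1.0):
    """Topology-adaptive adjacency: nonzero only on in-service branches.

    edges:  list of (i, j) branch index pairs currently in service
    K:      (N, N) tensor of electrical coupling coefficients
    delta:  (N,) bus voltage angles in radians
    p_norm: normalization constant (assumed)
    """
    N = delta.numel()
    A = torch.zeros(N, N)
    for i, j in edges:
        w = K[i, j] * torch.sin(delta[i] - delta[j]) / p_norm
        A[i, j] = w
        A[j, i] = w    # undirected graph => symmetric entries
    return A
```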
After pretraining and freezing $\theta^{*}$, the joint-task cross-entropy loss function $L_{\mathrm{task}}$ is defined as
$$L_{\mathrm{task}} = -\sum_{c=1}^{4} y_c \log(\hat{y}_c)$$
where $y_c$ is the ground-truth stability label for class $c$ (system-level), and $\hat{y}_c$ is the predicted probability for class $c$ (system-level). The prediction vector $\hat{y} \in \mathbb{R}^{4}$ represents system-wide stability probabilities:
$$\hat{y} = f_\phi(Z) = \sigma\!\left(\Phi^{(2)}\, \mathrm{ReLU}\!\left(\Phi^{(1)} \mathrm{vec}(Z) + \beta^{(1)}\right) + \beta^{(2)}\right)$$
where $Z = f_w(H_{\mathrm{gcn}}) \in \mathbb{R}^{d_{\mathrm{tcn}} \times d_{\mathrm{gcn}}}$ is the global feature matrix, $\mathrm{vec}(Z)$ flattens the matrix to $\mathbb{R}^{d_{\mathrm{tcn}} \cdot d_{\mathrm{gcn}}}$, and $\sigma(\cdot)$ is the softmax activation.
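The two-stage procedure (contrastive pretraining of the GCN encoder, then task training with cross-entropy on the frozen encoder) can be sketched as follows. The `encoder` and `head` callables (assumed to accept batched inputs and return per-sample representations and 4-class logits, respectively), the loader yielding `(A, X, X_aug, y)` batches, and the epoch counts are assumptions of this sketch, not the paper's exact Algorithm 1.

```python
import torch

def train_two_stage(encoder, head, sup_con_loss, loader,
                    pre_epochs=50, task_epochs=50, lr=1e-3):
    """Stage 1: optimize the encoder with L_cont; Stage 2: freeze it (theta*)
    and train the TCN + MLP head with the joint-task cross-entropy L_task."""
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(pre_epochs):
        for A, X, X_aug, y in loader:
            H = encoder(A, X)                          # original view
            H_aug = encoder(A, X_aug)                  # physics-informed augmented view
            loss = sup_con_loss(torch.cat([H, H_aug]), torch.cat([y, y]))
            opt_enc.zero_grad()
            loss.backward()
            opt_enc.step()

    for p in encoder.parameters():                     # freeze theta*
        p.requires_grad_(False)
    opt_task = torch.optim.Adam(head.parameters(), lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(task_epochs):
        for A, X, _, y in loader:
            loss = ce(head(encoder(A, X)), y)          # L_task on frozen features
            opt_task.zero_grad()
            loss.backward()
            opt_task.step()
    return encoder, head
```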

3. Dataset Generation with Topology Variation

3.1. PMU Data Generation with Topology Variation

To emulate a realistic wide-area monitoring system (WAMS) and ensure observability of critical dynamics, PMUs are assumed to be installed at all generator buses and a subset of critical load buses. In the IEEE 39-bus system, this corresponds to PMUs placed at the 10 generator buses (Buses 30–39), which directly capture the rotor-angle and frequency dynamics of synchronous machines. In addition, PMUs are installed at several key load buses selected based on electrical centrality, e.g., Buses 4, 8, 15, 16, 20, and 21, to enhance spatial coverage of the network. This configuration ensures that all input features, including voltage magnitude $|U|$ and relative phase $U_\theta$, are consistently derived from these designated PMU locations.
The dataset generation framework presented in Algorithm 2 is used for the IEEE N-bus system through three-level stochastic perturbations.
$$\mathcal{D} = \bigcup_{i=1}^{N_{\mathrm{topo}}} \bigcup_{j=1}^{N_{\mathrm{load}}} \bigcup_{k=1}^{N_{\mathrm{fault}}} \mathrm{Sim}\!\left(G_i, P_{L,j}, \phi_k\right)$$
where $N_{\mathrm{topo}}$ is the number of topology variants, $N_{\mathrm{load}}$ the number of loading conditions, and $N_{\mathrm{fault}} = |E|$ the number of branch fault locations. Given the setup, one topology instance $G_i = (V, E_i)$ is generated via the Markov chain edge replacement algorithm (Algorithm 2).
For each graph topology instance $G_i$, the loading profile matrix is generated as follows:
$$P_L = P_L^{0}\left(1 + \Delta M\right)$$
where $\Delta \sim U(-0.1, 0.1)$ and $M \in \{0, 1\}^{N \times T}$ masks the nodes to which the load variation is applied. The inputs to the proposed joint prediction framework are the feature matrices of transient dynamic time-series data $X = \{x_1, x_2, \dots, x_F\} \in \mathbb{R}^{F \times N \times l}$:
$$x_i = \begin{bmatrix} \mathrm{PMU}_1^{i}(0) & \mathrm{PMU}_2^{i}(0) & \cdots & \mathrm{PMU}_N^{i}(0) \\ \mathrm{PMU}_1^{i}(1) & \mathrm{PMU}_2^{i}(1) & \cdots & \mathrm{PMU}_N^{i}(1) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{PMU}_1^{i}(l-1) & \mathrm{PMU}_2^{i}(l-1) & \cdots & \mathrm{PMU}_N^{i}(l-1) \end{bmatrix}^{T}$$
where $i = 1, 2, \dots, F$, $F$ denotes the number of features, and $N$ represents the number of nodes. The $j$-th column of $x_i$ constitutes the post-fault time series of length $l$, describing one PMU data stream $\mathrm{PMU}_j^{i}$ at node $j$ ($j = 1, 2, \dots, N$) in the power system. Consequently, the bus relative phase ($U_\theta$), bus voltage magnitude ($|U|$), rotor angles ($\theta$), and rotor speed ($\omega$) are selected as features for the input $X = \{U_\theta, |U|, \theta, \omega\}$, where $F = 4$.
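A sketch of assembling this tensor from simulated PMU streams is given below. The `sim_result` dictionary, its field names, and the per-bus availability of every channel are illustrative assumptions; the actual PST output format differs in practice.

```python
import numpy as np

def build_input_tensor(sim_result, pmu_buses, window_len):
    """Stack the four feature channels into X with shape (F, N, l)."""
    # U_theta, |U|, rotor angle, rotor speed (hypothetical field names)
    channels = ["bus_angle", "bus_voltage", "rotor_angle", "rotor_speed"]
    X = np.zeros((len(channels), len(pmu_buses), window_len))
    for f, name in enumerate(channels):
        for n, bus in enumerate(pmu_buses):
            # post-fault time series of length l for PMU stream n, feature f
            X[f, n, :] = sim_result[name][:window_len, bus]
    return X
```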

3.2. Stability Labeling

Transient stability is determined by the transient stability index (TSI) $\sigma_{TSI}$ [25]:
$$\sigma_{TSI} = \frac{180 - |\Delta\delta_{ij}|_{\max}}{180 + |\Delta\delta_{ij}|_{\max}}$$
where $|\Delta\delta_{ij}|_{\max}$ denotes the maximum absolute value of the rotor angle difference between any two synchronous machines during the simulation. The label $y_T$ is defined by the sign of $\sigma_{TSI}$:
$$y_T = \begin{cases} 0, & \sigma_{TSI} > 0 \\ 1, & \sigma_{TSI} \leq 0 \end{cases}$$
The voltage stability label $y_V$ determines whether the voltage magnitude dip/sag of any bus in the system exceeds 20% for more than 50 cycles [26], defined as
$$y_V = \begin{cases} 1, & \text{if } \min_t V_{\mathrm{limit}}(t) > 0.8 \text{ p.u.} \\ 0, & \text{otherwise} \end{cases}$$
where $V_{\mathrm{limit}}$ represents the minimum voltage limit.
As presented in Table 1, the stability label $y_c$ for the STCGCN model training is a one-hot encoded variable for the four classes:
$$\{[1, 0, 0, 0],\ [0, 1, 0, 0],\ [0, 0, 1, 0],\ [0, 0, 0, 1]\},$$
determined by the transient stability label $y_T$ and the voltage stability label $y_V$.
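The labeling rules and the Table 1 mapping can be sketched as follows; the array shapes, the pre-computed minimum-voltage trace, and the omission of the 50-cycle duration check are simplifications of this sketch.

```python
import numpy as np

def label_sample(delta_deg, v_min_pu):
    """Assign the joint stability label of one simulated event.

    delta_deg: (T, G) rotor angles of the G machines in degrees
    v_min_pu:  (T,) minimum bus voltage magnitude over all buses, in p.u.
    """
    # transient stability index from the largest pairwise rotor-angle separation
    pairwise = np.abs(delta_deg[:, :, None] - delta_deg[:, None, :])
    d_max = pairwise.max()
    tsi = (180.0 - d_max) / (180.0 + d_max)
    y_t = 0 if tsi > 0 else 1                      # 0 if transient stable, 1 otherwise
    y_v = 1 if v_min_pu.min() > 0.8 else 0         # 0.8 p.u. voltage criterion
    one_hot = {(1, 1): [1, 0, 0, 0], (0, 1): [0, 1, 0, 0],
               (1, 0): [0, 0, 1, 0], (0, 0): [0, 0, 0, 1]}   # Table 1 mapping
    return tsi, y_t, y_v, one_hot[(y_t, y_v)]
```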

4. Results and Discussion

The proposed STCGCN model is validated on the IEEE 39-bus system using time-domain simulations implemented in the Power System Toolbox (PST). Each simulation spans 10 s with a time step of 0.01 s (i.e., 100 Hz sampling), comprising 1 s of pre-fault evolution and 9 s of post-fault response. Numerical integration employs the trapezoidal method with a tolerance of $10^{-6}$ per unit.
Fault scenarios are generated by applying a balanced three-phase short circuit at t = 0.10 s, cleared at t = 0.1667 s (four cycles at 60 Hz). Load levels are varied by scaling active power uniformly to 90–110% of the base case while maintaining a constant reactive power factor. To reflect uncertainty, zero-mean Gaussian perturbations are superimposed with a standard deviation of ±5% of the base active-power level. For each of the 20 topology configurations considered, each of the 46 transmission branches is faulted in turn. Every (topology, faulted branch) pair is repeated 25 times with independent load/noise realizations, yielding a dataset of 20 × 46 × 25 = 23,000 labeled samples.
The joint stability assessment is cast as a binary classification for transient stability and voltage stability (stable vs. unstable), using the conventional confusion-matrix definitions. Unstable operating states are labeled as the positive class; stable states are labeled as the negative class. Let T P , T N , F P , and F N denote true positives, true negatives, false positives, and false negatives, respectively. Accuracy (ACC), False Negative Rate (FNR), and False Positive Rate (FPR) are reported and defined as
$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{FNR} = \frac{FN}{TP + FN}, \quad \mathrm{FPR} = \frac{FP}{FP + TN}.$$
ACC reflects overall correctness. FNR quantifies missed detections of instability, which are critical for safety-critical operation. FPR measures false alarms, which affect operator burden and automated protection logic.
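These definitions translate directly into a small helper; the counts in the usage comment are made-up values for illustration, not results from the paper.

```python
def stability_metrics(tp, tn, fp, fn):
    """Accuracy, false negative rate, and false positive rate from confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    fnr = fn / (tp + fn) if (tp + fn) else 0.0   # missed instability detections
    fpr = fp / (fp + tn) if (fp + tn) else 0.0   # false alarms
    return acc, fnr, fpr

# Example with made-up counts: stability_metrics(430, 3600, 90, 80) ~ (0.96, 0.157, 0.024)
```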

4.1. Dataset Construction and Splits

Twenty varied power systems are synthesized with distinct topologies using Algorithm 2. These are partitioned into disjoint sets: fourteen topologies for training, two for validation, and four for testing. Two regimes are evaluated as follows: (i) in-sample, where test events come from topologies that also appear in the training split (but are held out as distinct events); and (ii) out-of-sample, where test events arise from topologies unseen during training. All models are first trained on 20,700 labeled events and then evaluated on 2300 held-out topological loading scenarios drawn from the same topology family (in-sample) without reusing any training events. The joint STCGCN model is trained on the dataset. The optimal hyperparameters for STCGCN are summarized in Table 2. A grid search strategy was employed to identify these optimal hyperparameters.
The supervised contrastive learning objective requires the construction of positive sample pairs, generated via a physics-informed data augmentation process. For each input graph-sequence sample $x_i$ in a batch, an augmented view $\tilde{x}_i$ is generated by applying stochastic perturbations that reflect realistic power system variations while preserving the original stability label $y_i$. Specifically, three augmentation strategies are employed as follows: injecting Gaussian noise into raw PMU measurements to emulate sensor errors; applying minor topology perturbations, such as randomly switching the status of a single branch, under the constraint that the resulting system retains the same stability class, thereby explicitly enforcing topology-invariant representations; and perturbing load profiles within ±5% to capture normal operating condition fluctuations.
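Two of these label-preserving augmentations are sketched below (measurement noise and a single-branch status switch); the ±5% load perturbation is applied at simulation time, and the check that the switched topology retains the same stability class is assumed to be performed separately.

```python
import random
import numpy as np

def augment_view(X, in_service_branches, noise_std=0.01):
    """Generate an augmented view of one (F, N, l) PMU sample.

    X:                   PMU feature tensor of the original sample
    in_service_branches: list of branch index pairs currently in service
    """
    X_noisy = X + np.random.normal(0.0, noise_std, size=X.shape)  # emulate sensor errors
    branches = list(in_service_branches)
    if len(branches) > 1:
        branches.pop(random.randrange(len(branches)))             # minor topology perturbation
    return X_noisy, branches
```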

4.2. Training Procedure and Representation Quality

Training proceeds in two stages. First, a supervised contrastive GCN encoder is optimized to map PMU time-series samples into a feature space where intra-class pairs are tightly clustered and inter-class pairs are well separated. Figure 2 shows the training loss trajectories. To visualize the clustering of GCN features, the high-dimensional embeddings are projected to two dimensions with UMAP [27], and Figure 3 shows clear separation between stable and unstable events, and between voltage-unstable and transient-unstable subsets. As shown in Figure 3, samples belonging to the same stability category form tight clusters in the latent space, indicating that the model indeed learns topology-invariant feature representations. The cluster segregation indicates that supervised contrastive GCN learning yields discriminative, class-aware representations that provide a strong basis for the downstream TCN and MLP classifier. Generalization is examined by applying the supervised contrastive GCN encoder to out-of-sample events from power systems with grid topologies never seen during training. As shown in Figure 4, the contrastive GCN feature embeddings preserve cluster structure despite topology changes, evidencing robust cross-topology feature segregation.
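The projection behind Figures 3 and 4 can be reproduced along the following lines, using the umap-learn package; the marker size and random seed are arbitrary choices of this sketch.

```python
import numpy as np
import umap                      # umap-learn
import matplotlib.pyplot as plt

def plot_embedding_clusters(embeddings, labels):
    """2D UMAP projection of flattened contrastive-GCN features, colored by class."""
    proj = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
    for c in np.unique(labels):
        idx = labels == c
        plt.scatter(proj[idx, 0], proj[idx, 1], s=5, label=f"class {c}")
    plt.legend()
    plt.xlabel("UMAP 1")
    plt.ylabel("UMAP 2")
    plt.show()
```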

4.3. Classification Benchmark

The confusion matrices are reported for both regimes, the in-sample and out-of-sample datasets. Figure 5 shows the confusion matrix of the STCGCN model on the in-sample test dataset with topologies seen during training, where the accuracy is 89.66%. Figure 6 shows the confusion matrix of the STCGCN model on the out-of-sample test dataset with topologies unseen during training, where the accuracy is 87.73%. In addition, Table 3 and Table 4 show the precision, recall, and F1-Score of STCGCN evaluated on the in-sample and out-of-sample datasets.
While performance decreases slightly under topology shift, accuracy remains high, indicating that the model adapts to structural changes and sustains effective stability prediction.
To quantify the advantages of the proposed contrastive learning-based joint-task (STCGCN) prediction model, the Spatiotemporal Embedding Graph Neural Network (STEGNN) [14] is used as the spatiotemporal baseline for comparison. The STEGNN architecture comprises (i) a multi-head GCN for topological feature extraction, (ii) a TCN for temporal dynamics, and (iii) an MLP classifier. Training uses standard cross-entropy loss.
Figure 7 presents the confusion matrix of STEGNN on the in-sample test set (seen topologies), and Figure 8 shows the corresponding results on the out-of-sample test set (unseen topologies). Table 5 and Table 6 report accuracy and sample counts for STEGNN and STCGCN in both settings. On the out-of-sample set, STCGCN attains 87.73% accuracy versus 79.00% for STEGNN, an absolute gain of 8.73 percentage points, corresponding to 402 additional correct classifications out of 4600 samples. The improvement stems from the topology-aware representation learning induced by the supervised contrastive objective, which enhances cross-topology feature separability, and from the joint (multi-task) formulation that exploits the coupling between voltage and transient stability.
To rigorously evaluate the reliability of the STCGCN method, experiments were repeated over three independent runs with different random seeds for model initialization and data shuffling. The mean accuracy and corresponding standard deviation are reported in Table 7 for the in-sample and out-of-sample datasets.

4.4. Joint-Task and Single-Task Prediction

The joint-task STCGCN prediction is further compared against the single-task prediction that estimates only transient stability (TSP) or only voltage stability (VSP). As shown in Table 8 and Table 9, the joint STCGCN improves both heads: TSP accuracy rises from single-task 86.60% to joint-task 90.34%, and VSP accuracy from 82.56% to 94.63%. These gains reflect the benefits of joint-task learning, where shared spatiotemporal representations and supervised contrastive alignment leverage the physical coupling between voltage and angle dynamics.

4.5. Impact of Observation Window Length

The effect of the post-fault observation window length on prediction accuracy is evaluated, aiming to maximize accuracy while minimizing latency. Using the same hyperparameters as in Table 2, the performance is assessed for multiple PMU window lengths after fault clearing, increasing the window in 0.05 s increments. Figure 9 plots test accuracy versus window length; the corresponding performance metrics are listed in Table 10. Accuracy plateaus for windows of about 0.40 s and longer: the joint STCGCN model attains 87.73% at 0.40 s and 87.91% at 0.45 s (with no meaningful gains beyond). Therefore, 0.40 s is selected as a practical trade-off between real-time responsiveness and accuracy.
The practical deployability of STCGCN is assessed by benchmarking inference latency on a system with an AMD EPYC 7763 (64-core) CPU and an NVIDIA GeForce RTX 4090 GPU, using PyTorch 1.9.0 with CUDA 11.1. Experiments were run on 10,000 randomly selected test samples. Table 11 reports the average per-sample inference time for multiple batch sizes, a metric that is critical for real-time deployment where end-to-end latency typically must remain below 50 ms per prediction. STCGCN attains a single-sample inference latency of 18.5 ms, well below the 50 ms real-time threshold, and achieves a batched throughput of approximately 3265 samples/s. These results indicate ample headroom for real-time deployment, enabling sub-second stability assessment even for large-scale power grids.
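The per-sample latency figures in Table 11 can be measured along the lines below; the warm-up count, the number of timed runs, and the use of CUDA synchronization are conventional benchmarking choices of this sketch rather than details reported in the paper.

```python
import time
import torch

@torch.no_grad()
def benchmark_latency(model, batch, n_runs=100, device="cuda"):
    """Return (average ms per sample, samples per second) for one batch size."""
    model = model.to(device).eval()
    batch = batch.to(device)
    for _ in range(10):                 # warm-up to exclude one-off setup costs
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()        # make sure queued GPU work has finished
    start = time.perf_counter()
    for _ in range(n_runs):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    seconds_per_batch = (time.perf_counter() - start) / n_runs
    return 1000.0 * seconds_per_batch / batch.shape[0], batch.shape[0] / seconds_per_batch
```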

4.6. Sensitivity to PMU Noise

The analyses above assumed lossless PMU delivery [28]. In practice, PMU measurements exhibit noise with typical standard deviations in the range 0.0005–0.01 p.u. [29] and signal-to-noise ratios (SNR) around 45 dB. Robustness is evaluated by injecting Gaussian noise at different SNR levels and reporting ACC, FNR, and FPR for both tasks. Results in Table 12 show that performance degrades gradually as SNR declines; degradation remains modest above 40 dB and becomes more pronounced below this threshold, yet overall accuracy stays comparatively high, indicating strong noise robustness of STCGCN. On the IEEE 39-bus system, the joint-task STCGCN model achieves 87.73% accuracy on the out-of-sample dataset with unseen topologies, outperforming the STEGNN baseline (79.00%) by 8.73 percentage points and exhibiting superior generalization under N−1 contingencies and topology changes.
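The noise injection used for Table 12 amounts to scaling zero-mean Gaussian noise to a target SNR, for example as in the following sketch (the per-tensor power estimate is an assumption; per-channel scaling would also be reasonable).

```python
import torch

def add_noise_at_snr(X, snr_db):
    """Add zero-mean Gaussian noise so that the signal-to-noise ratio equals snr_db (dB)."""
    signal_power = X.pow(2).mean()
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return X + torch.randn_like(X) * noise_power.sqrt()
```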

5. Conclusions

Modern power systems, driven by high renewable penetration and frequent topology reconfigurations, require precise and rapid stability assessments to ensure reliable operation. The STCGCN model is proposed for the joint prediction of voltage and transient stability. The architecture integrates a graph-convolutional encoder to model spatial and topological dependencies, coupled with a temporal–convolutional network to capture fast electromechanical dynamics. This design enables an accurate representation of coupled voltage and transient stability dynamics. To maintain resilience under switching operations, STCGCN applies a topology-adaptive adjacency matrix in the contrastive GCN learning using real-time electrical features, including line admittance, impedance, power-flow magnitudes, and directions. Furthermore, it employs a class-weighted loss function and physics-informed data augmentation to mitigate class imbalance and enhance feature discrimination across diverse operating conditions, such as load and renewable fluctuations.
On the IEEE 39-bus system, STCGCN achieves 87.73% accuracy for joint stability prediction on a test set with unseen topologies, outperforming the STEGNN baseline’s 79.00% accuracy. Compared to the single-task models, the joint-task STCGCN model significantly enhances performance for both voltage stability prediction (94.63% vs. 82.56%) and transient stability prediction (90.34% vs. 86.60%). These results demonstrate the value of the joint-task STCGCN model. The model achieves stable accuracy with only 0.40 s of post-fault data, supporting real-time contingency screening. It also exhibits resilience under N–1 contingencies and topology variations. Noise-stress tests confirm consistent performance for PMU signal-to-noise ratios (SNR) ≥ 40 dB, ensuring practical deployability in noisy environments.
STCGCN redefines real-time power system stability assessment by addressing three critical challenges: the decoupled modeling of voltage and transient stability, reliance on static topology assumptions, and class imbalance. By integrating spatial, temporal, and task-coupled dynamics, it enables robust and scalable stability prediction, paving the way for the secure operation of renewable-rich grids. Future research will scale STCGCN to larger systems, such as the IEEE 118-bus grid, incorporate cyber–physical constraints like cyberattack resilience, and optimize for edge deployment to support ultra-fast, decentralized stability control. These advancements will drive the adoption of data-driven solutions for next-generation power systems, ensuring resilience and efficiency in dynamic grid environments. The STCGCN framework, with its demonstrated ability to learn topology-invariant representations, opens avenues for applications in broader cyber-physical power-energy systems. For example, the GCN naturally models the physical connectivity of charging stations and the logical communication links in a unified manner.

Author Contributions

Conceptualization, L.D., X.D., W.C., J.H., J.W. and X.C.; methodology, X.C. and S.L.; validation, X.C.; data analysis, W.Q. and S.L.; writing—original draft preparation, S.L. and W.Q.; writing—review and editing, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Project of State Grid Fujian Power Co., Ltd. of China (521304240018), titled “Research on Artificial Intelligence Joint Prediction Method for Transient Stability Based on Physics Driven Feature Engineering”.

Data Availability Statement

Due to the complexity of the data, the data presented in this study are available from the corresponding author on request.

Conflicts of Interest

Authors Liyu Dai, Wujie Chao, Junwei Huang and Jinke Wang were employed by the company State Grid Fujian Electric Power Research Institute, Fujian Key Laboratory of Smart Grid Protection and Operation Control. Author Xuhui Deng was employed by the company State Grid Fujian Electric Power Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Olanrewaju, R.; Thanasingh, M.J.C.; Al-Greer, M. Techno-Economic Evaluation of Hybrid Renewable Energy Systems for Power Supply. In Proceedings of the 2024 29th International Conference on Automation and Computing (ICAC), Sunderland, UK, 28–30 August 2024; pp. 1–6.
2. Wang, D.; Jiang, Y.; Qiu, C.; Xiong, H.; Bi, M.; Ge, Y.; Li, J.; Cao, Y.; Li, G.; Cui, Z.; et al. Power System Real Time Reliability Monitoring and Security Assessment in Short-term and on-line Mode. In Proceedings of the 2019 IEEE Innovative Smart Grid Technologies—Asia (ISGT Asia), Chengdu, China, 21–24 May 2019; pp. 758–763.
3. Kundur, P.; Paserba, J.; Ajjarapu, V.; Andersson, G.; Bose, A.; Canizares, C.; Hatziargyriou, N.; Hill, D.; Stankovic, A.; Taylor, C.; et al. Definition and classification of power system stability IEEE/CIGRE joint task force on stability terms and definitions. IEEE Trans. Power Syst. 2004, 19, 1387–1401.
4. Wang, H.; Li, Z. A Review of Power System Transient Stability Analysis and Assessment. In Proceedings of the 2019 Prognostics and System Health Management Conference (PHM-Qingdao), Qingdao, China, 25–27 October 2019; pp. 1–6.
5. ALShamli, Y.; Hosseinzadeh, N.; Yousef, H.; Al-Hinai, A. A review of concepts in power system stability. In Proceedings of the 2015 IEEE 8th GCC Conference & Exhibition, Muscat, Oman, 1–4 February 2015; pp. 1–6.
6. Li, X.; Li, Z.; Guan, L.; Zhu, L.; Liu, F. Review on Transient Voltage Stability of Power System. In Proceedings of the 2020 IEEE Sustainable Power and Energy Conference (iSPEC), Chengdu, China, 23–25 November 2020; pp. 940–947.
7. Hatziargyriou, N.; Milanovic, J.; Rahmann, C.; Ajjarapu, V.; Canizares, C.; Erlich, I.; Hill, D.; Hiskens, I.; Kamwa, I.; Pal, B.; et al. Definition and Classification of Power System Stability—Revisited & Extended. IEEE Trans. Power Syst. 2021, 36, 3271–3281.
8. Dehghani, M.; Ghiasi, M.; Niknam, T.; Rouzbehi, K.; Wang, Z.; Siano, P.; Alhelou, H.H. Control of LPV Modeled AC-Microgrid Based on Mixed H2/H∞ Time-Varying Linear State Feedback and Robust Predictive Algorithm. IEEE Access 2022, 10, 3738–3755.
9. Sun, Y.; Li, X.; Yan, S.; Song, Y.; Farsangi, M. Novel decentralized robust excitation control for power system stability improvement. In Proceedings of the International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, London, UK, 4–7 April 2000; pp. 443–448.
10. Ishimaru, M.; Yokoyama, R.; Shirai, G.; Lee, K. Allocation and design of robust TCSC controllers based on power system stability index. In Proceedings of the 2002 IEEE Power Engineering Society Winter Meeting, New York, NY, USA, 27–31 January 2002; Volume 1, pp. 573–578.
11. Lei, B.; Wu, X.; Fei, S. Nonlinear robust control design for SSSC to improve damping oscillations and transient stability of power system. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 3101–3106.
12. Löper, M.; Trummal, T.; Kilter, J. Analysis of the Applicability of PMU Measurements for Power Quality Assessment. In Proceedings of the 2018 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), Sarajevo, Bosnia and Herzegovina, 21–25 October 2018; pp. 1–6.
13. Sun, P.; Huo, L.; Chen, X.; Liang, S. Rotor Angle Stability Prediction Using Temporal and Topological Embedding Deep Neural Network Based on Grid-Informed Adjacency Matrix. J. Mod. Power Syst. Clean Energy 2024, 12, 695–706.
14. Deng, C.; Dai, L.; Chao, W.; Huang, J.; Wang, J.; Lin, L.; Qin, W.; Lai, S.; Chen, X. An Advanced Spatio-Temporal Graph Neural Network Framework for the Concurrent Prediction of Transient and Voltage Stability. Energies 2025, 18, 672.
15. Alimi, O.A.; Ouahada, K.; Abu-Mahfouz, A.M. A Review of Machine Learning Approaches to Power System Security and Stability. IEEE Access 2020, 8, 113512–113531.
16. Zhou, Y.; Xu, T.; Ye, L.; Liu, M.; Chen, X.; Yang, Y.; Guo, Q.; Sun, H. Transient Rotor Angle and Voltage Stability Discrimination Based on Deep Convolutional Neural Network with Multiple Inputs. In Proceedings of the 2021 IEEE 4th International Electrical and Energy Conference (CIEEC), Wuhan, China, 28–30 May 2021; pp. 1–6.
17. Hu, Y.; Yang, X.; Gu, W.; Lang, Y.; Wu, X.; Yu, Y. Identification of Power Angle Instability and Transient Voltage Instability Based on Deep Learning. In Proceedings of the 2019 IEEE 3rd Conference on Energy Internet and Energy System Integration (EI2), Changsha, China, 8–10 November 2019; pp. 759–764.
18. Thukaram, D.; Khincha, H.; Ravikumar, B. A New Approach for Fault Location Identification in Transmission system using Stability Analysis and SVMs. In Proceedings of the 2006 International Conference on Power Electronic, Drives and Energy Systems, New Delhi, India, 12–15 December 2006; pp. 1–6.
19. Plathottam, S.J.; Salehfar, H.; Ranganathan, P. Convolutional Neural Networks (CNNs) for power system big data analysis. In Proceedings of the 2017 North American Power Symposium (NAPS), Morgantown, WV, USA, 17–19 September 2017; pp. 1–6.
20. Hossain, R.R.; Huang, Q.; Huang, R. Graph Convolutional Network-Based Topology Embedded Deep Reinforcement Learning for Voltage Stability Control. IEEE Trans. Power Syst. 2021, 36, 4848–4851.
21. Hossain, R.; Gautam, M.; MansourLakouraj, M.; Livani, H.; Benidris, M. Voltage Control in Distribution Grids Using Topology Aware Deep Reinforcement Learning. In Proceedings of the 2023 IEEE Industry Applications Society Annual Meeting (IAS), Nashville, TN, USA, 29 October–2 November 2023; pp. 1–7.
22. Deng, Y.; Lin, S.; Jia, H.; Tong, X. Optimal allocation of voltage sag meters with power system topology variation considered. In Proceedings of the 2022 China Automation Congress (CAC), Xiamen, China, 25–27 November 2022; pp. 3145–3150.
23. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2017, arXiv:1609.02907.
24. Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised Contrastive Learning. arXiv 2021, arXiv:2004.11362.
25. Ye, X.; Morales, J.D.; Milanović, J.V. Truncated Transient Stability Index for On-line Power System Transient Stability Assessment. In Proceedings of the 2021 IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe), Espoo, Finland, 18–21 October 2021; pp. 1–5.
26. He, Q.; Qi, F.; Wang, S.; Zeng, Y.; Sheng, H.; Ma, J. Research on Static Voltage Stability Index of Regional Power Network with New Energy Stations Based on Voltage Stability Criterion. In Proceedings of the 2023 IEEE 7th Conference on Energy Internet and Energy System Integration (EI2), Hangzhou, China, 15–18 December 2023; pp. 2597–2601.
27. Li, B.; Zheng, Y.; Ran, R. 2DUMAP: Two-Dimensional Uniform Manifold Approximation and Projection for Fault Diagnosis. IEEE Access 2025, 13, 12819–12831.
28. Rafique, Z.; Khalid, H.M.; Muyeen, S.; Kamwa, I. Bibliographic review on power system oscillations damping: An era of conventional grids and renewable energy integration. Int. J. Electr. Power Energy Syst. 2022, 136, 107556.
29. Okendo, E.O.; Wekesa, C.W.; Saulo, M.J. Optimal placement of Phasor Measurement Unit considering System Observability Redundancy Index: Case study of the Kenya power transmission network. Heliyon 2021, 7, e07670.
Figure 1. The framework of the joint STCGCN stability prediction.
Figure 2. Training loss curves for the joint prediction model.
Figure 3. In-sample dataset GCN encoder representation quality. Two-dimensional UMAP projection of supervised contrastive GCN features showing clear cluster separation among stable, unstable, voltage-unstable, and transient-unstable feature labels.
Figure 4. Out-of-sample dataset GCN encoder representation quality. Two-dimensional UMAP projection of supervised contrastive GCN features obtained on the out-of-sample dataset with unseen topologies, showing clear cluster separation and demonstrating cross-topology robustness.
Figure 5. STCGCN confusion matrix on the in-sample dataset. The four classes (class 1, class 2, class 3, and class 4) correspond to the one-hot labels [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], and [0, 0, 0, 1], respectively.
Figure 6. STCGCN confusion matrix on the out-of-sample dataset. The four classes (class 1, class 2, class 3, and class 4) correspond to the one-hot labels [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], and [0, 0, 0, 1], respectively.
Figure 7. STEGNN confusion matrix on the in-sample test dataset with seen topologies. The four classes (class 1, class 2, class 3, and class 4) correspond to the one-hot labels [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], and [0, 0, 0, 1].
Figure 8. STEGNN confusion matrix on the out-of-sample test dataset with unseen topologies. The four classes (class 1, class 2, class 3, and class 4) correspond to the one-hot labels [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], and [0, 0, 0, 1].
Figure 9. Effect of post-fault PMU window length on the joint STCGCN prediction accuracy in the IEEE 39-bus system.
Table 1. Definition of the labels y_T and y_V in one-hot form for transient stability and voltage stability.
Label y | y_T = 1 | y_T = 0
y_V = 1 | [1, 0, 0, 0] | [0, 1, 0, 0]
y_V = 0 | [0, 0, 1, 0] | [0, 0, 0, 1]
Table 2. Hyperparameter settings and selection rationale for the STCGCN model.
Category | Hyperparameter | Value | Selection Criteria
Architecture | GCN Layers | 3 | Balances spatial feature extraction depth and computational cost. Fewer layers led to underfitting, while more caused overfitting.
Architecture | GCN Hidden Dimension | 256 | Provides sufficient model capacity. Determined via grid search over {128, 256, 512}.
Architecture | TCN Kernel Size | 3 | Effective for capturing local temporal patterns.
Architecture | TCN Dilation Rates | {1, 2, 4} | Designed to capture multi-scale electromechanical dynamics within the 0.4 s window.
Architecture | Dropout Rate | 0.2 | Mitigates overfitting; validated on the out-of-sample topology test set.
Architecture | MLP Hidden Units | 512 | Offers ample nonlinear transformation capability for the final joint classification.
Training | Batch Size | 64 | Maximizes GPU memory utilization while maintaining stable gradient descent.
Training | Initial Learning Rate | $1 \times 10^{-3}$ | Used with the Adam optimizer for stable and efficient convergence.
SCL | Temperature (τ) | 0.1 | Scales the cosine similarity in the contrastive loss function.
SCL | Projection Head Dim | 128 | Projects GCN embeddings into the space where contrastive loss is applied.
SCL | Positive Pair Margin | 0.5 | Defines the similarity threshold for constructing positive pairs in SCL.
Data | Observation Window | 0.40 s | Selected as the optimal trade-off between prediction accuracy and real-time latency, as analyzed in Section 4.5.
Table 3. Per-class performance metrics of STCGCN on the in-sample dataset.
Class | Precision | Recall | F1-Score | Support
Class 1 | 0.889 | 0.959 | 0.923 | 1690
Class 2 | 0.948 | 0.858 | 0.901 | 1203
Class 3 | 0.934 | 0.899 | 0.916 | 785
Class 4 | 0.896 | 0.908 | 0.902 | 925
Table 4. Per-class performance metrics of STCGCN on the out-of-sample dataset.
Class | Precision | Recall | F1-Score | Support
Class 1 | 0.853 | 0.941 | 0.871 | 1690
Class 2 | 0.942 | 0.842 | 0.865 | 1178
Class 3 | 0.874 | 0.861 | 0.868 | 782
Class 4 | 0.869 | 0.847 | 0.856 | 945
Table 5. Performance of STCGCN vs. STEGNN on the out-of-sample dataset with unseen power grid topologies.
Method | Correct | Missing | Accuracy (%)
STCGCN | 4036 | 564 | 87.73
STEGNN | 3746 | 854 | 81.43
Table 6. Performance of STCGCN vs. STEGNN on the in-sample dataset with seen power grid topologies.
Method | Correct | Missing | Accuracy (%)
STCGCN | 4124 | 476 | 89.66
STEGNN | 3634 | 966 | 79.00
Table 7. Performance and result reliability.
Model | In-Sample Dataset (%) | Out-of-Sample Dataset (%)
STCGCN (Run 1) | 89.51 | 87.62
STCGCN (Run 2) | 90.12 | 88.05
STCGCN (Run 3) | 89.08 | 86.91
STCGCN (Mean ± SD) | 89.66 ± 0.85 | 87.73 ± 1.02
Table 8. Transient stability prediction (TSP): single-task vs. joint-task for the IEEE 39-bus system.
Method | Correct | Missing | Accuracy (%)
Single-task TSP | 3984 | 616 | 86.60
Joint-task TSP | 4156 | 444 | 90.34
Table 9. Voltage stability prediction (VSP): single-task vs. joint-task for the IEEE 39-bus system.
Method | Correct | Missing | Accuracy (%)
Single-task VSP | 3798 | 802 | 82.56
Joint-task VSP | 4353 | 247 | 94.63
Table 10. Performance comparison across different window sizes.
Metric | 0.10 s | 0.15 s | 0.20 s | 0.25 s | 0.30 s | 0.35 s | 0.40 s | 0.45 s | 0.50 s
Joint-task in four classes: ACC (%) | 71.24 | 75.01 | 76.12 | 80.54 | 82.24 | 84.71 | 87.73 | 87.91 | 87.34
Joint-task TSP: ACC (%) | 74.32 | 77.85 | 79.08 | 83.67 | 85.42 | 88.13 | 90.34 | 90.28 | 89.97
Joint-task TSP: FPR (%) | 10.87 | 9.42 | 8.65 | 7.34 | 6.73 | 6.28 | 6.06 | 6.11 | 6.09
Joint-task TSP: FNR (%) | 24.25 | 21.50 | 19.15 | 16.84 | 14.92 | 14.08 | 13.82 | 13.91 | 13.87
Joint-task VSP: ACC (%) | 78.95 | 82.43 | 84.17 | 88.92 | 90.86 | 92.98 | 94.63 | 94.57 | 94.21
Joint-task VSP: FPR (%) | 6.15 | 5.48 | 4.87 | 4.13 | 3.78 | 3.52 | 3.45 | 3.48 | 3.46
Joint-task VSP: FNR (%) | 15.43 | 13.62 | 11.95 | 9.84 | 8.92 | 8.67 | 8.61 | 8.65 | 8.69
Table 11. Inference time for STCGCN on the IEEE 39-bus system.
Batch Size | Avg. Time (ms) | Throughput (Samples/s)
1 | 18.5 | 54.1
8 | 12.3 | 650.4
32 | 9.8 | 3265.3
Table 12. Joint-task STCGCN performance under different PMU SNR levels in the IEEE 39-bus system.
Task | SNR (dB) | ACC (%) | FNR (%) | FPR (%)
Joint-task TSP | None | 90.34 | 13.82 | 6.06
Joint-task TSP | 60 | 89.97 | 14.25 | 6.38
Joint-task TSP | 50 | 89.23 | 14.88 | 6.92
Joint-task TSP | 40 | 87.85 | 16.24 | 7.81
Joint-task TSP | 30 | 84.06 | 19.83 | 10.52
Joint-task TSP | 20 | 80.37 | 23.14 | 13.75
Joint-task VSP | None | 94.63 | 8.61 | 3.45
Joint-task VSP | 60 | 94.25 | 8.92 | 3.78
Joint-task VSP | 50 | 93.68 | 9.47 | 4.25
Joint-task VSP | 40 | 92.31 | 10.85 | 5.14
Joint-task VSP | 30 | 88.92 | 14.24 | 8.03
Joint-task VSP | 20 | 85.43 | 17.86 | 11.32

Share and Cite

MDPI and ACS Style

Dai, L.; Deng, X.; Chao, W.; Huang, J.; Wang, J.; Lai, S.; Qin, W.; Chen, X. Topology-Robust Power System Stability Prediction with a Supervised Contrastive Spatiotemporal Graph Convolutional Network. Electricity 2025, 6, 71. https://doi.org/10.3390/electricity6040071

