Article

Forced Oscillation Detection via a Hybrid Network of a Spiking Recurrent Neural Network and LSTM

College of Electrical Engineering, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(8), 2607; https://doi.org/10.3390/s25082607
Submission received: 15 February 2025 / Revised: 31 March 2025 / Accepted: 18 April 2025 / Published: 20 April 2025
(This article belongs to the Special Issue Diagnosis and Risk Analysis of Electrical Systems)

Abstract

The detection of forced oscillations, especially distinguishing them from natural oscillations, has emerged as a major concern in power system stability monitoring. Deep learning (DL) holds significant potential for accurately detecting forced oscillations. However, existing artificial neural networks (ANNs) are difficult to deploy on edge devices for timely detection due to their inherent computational complexity and high power consumption. This paper proposes a novel hybrid network that integrates a spiking recurrent neural network (SRNN) with long short-term memory (LSTM). The SRNN achieves computational and energy efficiency, while the integration with LSTM helps effectively capture temporal dependencies in time-series oscillation data. The proposed hybrid network is trained using the backpropagation-through-time (BPTT) optimization algorithm, with adjustments made to address the discontinuous gradient in the SRNN. We evaluate the proposed model on both simulated and real-world oscillation datasets. Overall, the experimental results demonstrate that the proposed model achieves higher accuracy and superior performance in distinguishing forced oscillations from natural oscillations, even in the presence of strong noise, compared to pure LSTM and other SRNN-related models.

1. Introduction

Detecting forced oscillations plays a significant role in ensuring the security and stability of power systems. Forced oscillations can cause undesirable power fluctuations, damage equipment, and even interrupt the power supply. If the frequency of a forced oscillation is close to that of a poorly damped natural oscillation, the oscillation energy can be greatly amplified, causing the forced oscillation to propagate across the entire system and potentially leading to catastrophic cascading blackouts [1,2,3].
Unlike natural oscillations caused by intrinsic natural interactions among dynamic components within the power system, forced oscillations are usually excited by periodic external disturbances, e.g., malfunctions in generator exciters, inverter controllers of renewable sources, and large-scale cyclic loads [1,2]. Therefore, the strategy of mitigating forced oscillations is different from that of natural ones; natural oscillations can be remedied by improving damping, while forced oscillations are effectively mitigated by removing forced oscillation sources [3]. To determine the correct remedial strategy, it is a prerequisite to distinguish forced and natural oscillations accurately.
With the help of synchrophasor data provided by phasor measurement units (PMUs), many methods for detecting forced oscillations or distinguishing them from natural oscillations have been developed. These methods are broadly divided into two categories: traditional approaches and deep learning approaches. The traditional methods commonly utilize spectral analysis and time–frequency representation to extract features of different oscillation signals [4,5,6,7,8,9,10,11,12]. For example, based on the mathematical representation of the two oscillation types, a spectral approach is proposed using the noise responses and their spectral differences [8]. Based on statistical signatures from the power spectral density, an oscillation diagnosis algorithm is developed [10]. To improve detection performance when the frequency of forced oscillations is close to that of natural oscillations, a residual spectral analysis method is proposed [11]. To handle non-stationary oscillating signals, time–frequency representation methods, such as the short-time Fourier transform (STFT), the STFT-based synchrosqueezing transform, and the continuous wavelet transform, are utilized to extract non-stationary components, and the time–frequency ridges with maximum energy are then applied to detect forced oscillations [6,9,13]. Further, some machine learning methods, such as the support vector machine (SVM) and the K-nearest neighbors algorithm, utilize the extracted features to distinguish the two oscillation types [12] or to classify time-series data corresponding to the location of forced oscillation sources [14]. Although traditional methods are computationally efficient and require less data, their heavy reliance on expert knowledge to extract features from oscillation signals limits their ability to handle the complex and changing operating conditions in power systems.
Recently, the success of deep learning methods has facilitated their application to oscillation problems, owing to their strong feature extraction capability. Commonly, deep learning methods are implemented using artificial neural networks (ANNs), namely, the second generation of neural networks (NNs). For example, a shallow NN, consisting of a two-layer feedforward network, is proposed to distinguish forced oscillations from natural oscillations [15]. A hierarchical deep learning NN, comprising three NNs, is developed to perform forced oscillation detection, identification, and suppression [16]. Several NNs, based on long short-term memory (LSTM) and convolutional NNs (CNNs), are employed to predict poorly damped low-frequency oscillation modes [17]. To improve the training efficiency of deep learning models, a two-stage deep transfer learning and CNN method is proposed to convert the forced oscillation location problem into an image recognition problem [18]. Additionally, a branch-level transformer-based deep learning approach is proposed to localize the forced oscillation source without requiring extensive training data [19]. These developments indicate that ANNs are a fundamental technology for handling forced oscillation problems.
Despite the effectiveness of previous work based on ANNs, their inherent computational complexity and high power consumption pose significant challenges for forced oscillation detection applications, especially in edge computing. Typically, forced oscillations occur randomly in power systems due to various factors, such as changing operating conditions and the integration of renewable energy. It is essential to detect the presence of forced oscillations in a timely manner at edge devices, which are deployed at critical nodes in the power system, before these oscillations spread widely throughout the entire power grid and cause catastrophic cascading blackouts. However, edge devices are often designed with relatively limited computational resources (e.g., memory and storage) and constrained power supplies. Thus, deploying ANNs that require complex computations and high power consumption becomes difficult on edge devices.
In contrast, spiking neural networks (SNNs) [20,21,22] have been theoretically proven to achieve greater computational and energy efficiency than ANNs [23], thanks to their bio-inspired spiking computation mechanism. As the third generation of neural networks, SNNs are designed to emulate information processing in the brain. While the structure of SNNs is similar to that of ANNs, unlike ANNs, which use continuous activation values to transmit information between neurons, SNNs represent and transmit information by encoding and processing data as discrete electrical pulses, or spikes. Since energy is mostly consumed only when a spike event occurs, this spike-driven computation strategy of SNNs provides attractive energy-saving benefits, making it possible to deploy detection algorithms on edge devices [24,25]. Currently, SNNs have found applications in diverse domains, including power transformer fault diagnosis [26], image recognition [27], and more. However, SNNs are typically constructed as shallow networks and are difficult to train due to the non-differentiable nature of brain-like neurons. Consequently, the accuracy of SNNs is usually lower than that of popular ANNs [28,29]. Brief descriptions of the advantages and disadvantages of traditional methods, ANNs, and SNNs are given in Table 1.
To exploit the advantages of both ANNs and SNNs, hybrid networks integrating SNNs and ANNs, such as the SNN–CNN [28,30], have been used and proven effective in some applications. Unlike these existing hybrid networks, we propose a novel hybrid network that integrates a spiking recurrent neural network (SRNN) with LSTM. In this hybrid architecture, the SRNN is constructed using adaptive leaky integrate-and-fire (ALIF) neurons with self-recurrency. The ALIF neurons enhance computational capability through an adaptive firing threshold, while the recurrent mechanism of the ALIF neurons facilitates the capture of temporal dependencies in time-series data. Meanwhile, the LSTM, as a type of recurrent ANN, enables selective information processing, which further captures long-term dependencies and avoids issues such as vanishing gradients. The proposed hybrid network is trained directly by adopting a Gaussian function as a surrogate for the non-differentiable activation function of the ALIF neurons in the backpropagation-through-time (BPTT) algorithm [31]. Finally, the trained hybrid network is applied to PMU data to detect forced oscillations, mainly by distinguishing them from natural oscillations. Our contributions are summarized as follows:
  • The proposed hybrid network not only leverages the energy-saving advantage of SRNNs but also improves the detection performance for forced oscillations, since the LSTM helps better capture the time-dependent information in oscillation data.
  • Different from the existing detection methods based on ANNs, the proposed hybrid network, driven by spike events, is better suited for deployment on edge devices.
  • The proposed hybrid network achieves more satisfactory performance for detecting forced oscillations on simulated and real-world measured PMU data compared with other schemes, even under resonance conditions and periodic non-sinusoidal injected disturbances.
The rest of this paper is organized as follows. The proposed methodology, including the problem formulation of forced oscillation detection, the proposed hybrid network, and the training strategy, is presented in Section 2. The detection performance and visual results of the proposed hybrid network are demonstrated in Section 3. Conclusions and potential future work are addressed in Section 4.

2. Methodology

In this section, we first introduce the problem formulation for forced oscillation detection. Then, we propose a hybrid network that integrates an SRNN structure with LSTM to distinguish forced oscillations from natural oscillations in PMU data. To train the proposed hybrid network directly, the BPTT algorithm is utilized. This involves replacing the discontinuous gradient with a smooth gradient function, which addresses the non-differentiability of the activation function in the brain-inspired neurons of the SRNN structure. Finally, several evaluation indices are introduced to assess the detection performance of the proposed hybrid network.

2.1. The Problem Formulation of Forced Oscillation Detection

Under the condition of forced oscillations, a mathematical model of the power system can be represented as a set of nonlinear differential equations [32], given by

$$\dot{x}(t) = f(x(t), d_u(t), t), \qquad y(t) = g(x(t), d_u(t), t), \qquad (1)$$

where $f(\cdot)$ and $g(\cdot)$ are nonlinear functions, $x(t) \in \mathbb{R}^{n}$ is the state vector (e.g., rotor angles and generator speeds), $\dot{x}(t) \in \mathbb{R}^{n}$ is the derivative of $x$ with respect to time $t$, $y(t) \in \mathbb{R}^{m}$ is the output vector (e.g., voltage magnitudes and voltage angles), and $d_u(t) \in \mathbb{R}^{n_j}$ is a vector of $n_j$ periodic external disturbances.
Due to the presence of periodic forced disturbances, sustained oscillations appear in the measurements (e.g., active power, voltage, and current) of the power system. Generally, forced oscillations interact with multiple natural modes, and the measured output vector $y(t)$ can be modeled as [9,16]

$$y(t) = y_N(t) + y_F(t) + \epsilon(t) = \sum_{i=1}^{n_i} A_i^{N} e^{\alpha_i^{N} t} \sin(\omega_i^{N} t) + \sum_{j=1}^{n_j} A_j^{F} \sin(\omega_j^{F} t + \psi_j^{F}) + \epsilon(t), \qquad (2)$$

where $y_N(t)$ and $y_F(t)$ are the output components of the dominant natural modes and of the forced oscillations, respectively; $(A_i^{N}, \alpha_i^{N}, \omega_i^{N})$ are the amplitude, damping ratio, and frequency of the $i$-th dominant natural mode; $(A_j^{F}, \omega_j^{F}, \psi_j^{F})$ are the amplitude, frequency, and phase of the $j$-th frequency component of the forced oscillations; $n_i$ and $n_j$ are the total numbers of natural modes and forced-oscillation components, respectively; and $\epsilon(t)$ is the measurement noise.
The main target of forced oscillation detection is to distinguish forced from natural oscillations by analyzing PMU data. From (2), it can be observed that under forced oscillation conditions, the measured outputs $y(t)$ contain $y_N(t)$, $y_F(t)$, and noise. In addition, when the frequency $\omega_j^{F}$ of the disturbance $d_{u,j}(t)$ is close to the frequency $\omega_i^{N}$ of a natural mode with a weak damping ratio, i.e., $\omega_j^{F} \approx \omega_i^{N}$, resonance occurs, and the resulting response is in most cases similar to a pure natural oscillation [2]. Thus, the combination of nonlinearity, natural oscillations, and resonance makes forced oscillation detection difficult.
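To make the signal model in (2) concrete, the following minimal Python sketch synthesizes one natural-oscillation sample and one forced-oscillation sample over a 40 s window at a 20-sample/s PMU reporting rate; all amplitude, damping, and frequency values here are illustrative assumptions rather than parameters taken from the test cases used later.

```python
import numpy as np

def natural_mode(t, A=0.05, alpha=-0.08, omega=2 * np.pi * 0.86):
    """Poorly damped natural mode: A * exp(alpha * t) * sin(omega * t)."""
    return A * np.exp(alpha * t) * np.sin(omega * t)

def forced_component(t, A=0.05, omega=2 * np.pi * 0.86, psi=0.0):
    """Sustained forced component: A * sin(omega * t + psi)."""
    return A * np.sin(omega * t + psi)

fs = 20.0                              # assumed PMU reporting rate (samples/s)
t = np.arange(0.0, 40.0, 1.0 / fs)     # 40 s window, i.e., 800 samples
noise = 0.005 * np.random.randn(t.size)

y_natural = 1.0 + natural_mode(t) + noise                                  # y_N(t) + noise
y_forced = 1.0 + natural_mode(t, A=0.01) + forced_component(t) + noise     # y_N + y_F + noise
```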

2.2. Architecture of Proposed Hybrid Network

To achieve both computational energy efficiency and high detection accuracy in detecting forced oscillations, a hybrid network integrating an SRNN structure with LSTM is proposed. The hybrid network consists of four layers: one input layer, one SRNN layer, one LSTM layer, and one output classification layer. As shown in Figure 1, the input layer receives time-series floating-point data and generates spike sequences through a spike encoder, which serve as the input of the SRNN layer. The SRNN layer is built with a set of ALIF neurons [33] and acts as a spike response model that receives and processes time-dependent information. In addition, there are self-recurrent connections of the ALIF neurons within the SRNN layer and lateral recurrent connections between the input layer and the SRNN layer. The LSTM layer further extracts features that capture longer-range time-dependent information, and the output layer provides the classification results to distinguish forced oscillations from natural oscillations. The dimensions of the input–output data flow in each layer are also shown in Figure 1. The details of the key components of the proposed hybrid network are described as follows.

2.3. Spike Encoder

The spike encoder converts the raw PMU time-series data within a time window into spike sequences, since the time-series data consist of continuous floating-point values, whereas the subsequent SRNN layer only accepts discrete spike trains. Before encoding the time-series data into spikes, we normalize the data values into the range [0, 1] so that the model adapts to measurements collected at different voltage levels. During the encoding process, a random number is generated from a Poisson distribution or a radial basis function (RBF), denoted as Poisson or RBF encoding in our experiments, and a spike event is triggered if the generated random number is larger than the magnitude of the corresponding normalized input value [34,35]. In addition, an RBF-Th encoding, which generates a spike when the generated random number is larger than a set threshold (Th) related to the input magnitude, is also utilized in Section 3.
For a forced oscillation signal, shown in Figure 2a, and taking Poisson encoding as an example, Figure 2b shows the spike sequences generated over time steps for 80 encoder neurons, where the generated spikes are drawn as black dots. The density of generated spikes at each time step corresponds to the value of the corresponding input sample. To verify the correctness of the spike encoding, we count the number of spikes across all 80 neurons at each time step to reconstruct a real-valued signal; the result, shown in Figure 2c, is similar to the original data in Figure 2a. The generated spike sequences drive the biologically realistic neurons that implement the forced oscillation detection task.
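As a rough illustration of this rate-style encoding, the following sketch assumes that each of the 80 encoder neurons fires at a time step with a probability equal to the normalized input value at that step; the actual Poisson and RBF implementations used in the paper may differ in detail. Counting spikes per step reconstructs the signal, analogous to Figure 2c.

```python
import numpy as np

def rate_encode(signal, n_neurons=80, seed=0):
    """Rate-style spike encoding: neuron n fires at step t with probability
    equal to the normalized signal value at step t (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    s = (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)  # normalize to [0, 1]
    spikes = (rng.random((s.size, n_neurons)) < s[:, None]).astype(np.uint8)
    return spikes  # shape: (time_steps, n_neurons)

signal = 1.0 + 0.05 * np.sin(2 * np.pi * 0.86 * np.arange(0.0, 40.0, 0.05))
spikes = rate_encode(signal)
reconstructed = spikes.sum(axis=1) / spikes.shape[1]   # spike counts per step track the signal
```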

2.4. SRNN Layer with ALIF Neurons

The SRNN layer is constructed from a set of ALIF neurons with self-recurrency, which emulate the behavior of biological neurons to receive and process time-dependent information. Incoming spiking signals are integrated, and action potentials are fired when a certain membrane potential threshold is reached, as shown in Figure 3a. Different from the leaky integrate-and-fire (LIF) neuron in Figure 3b, commonly used in SNN structures, the ALIF neuron in Figure 3c, a modified version of the LIF neuron, improves computational capability through an adaptive firing threshold and is adopted in our work [33].
For a common LIF neuron $p$, the dynamic membrane potential (or voltage) $U_p(t)$ at time $t$ is governed by a differential equation, given by

$$\tau_m \frac{dU_p(t)}{dt} = -\bigl(U_p(t) - U_r(t)\bigr) + R_m I_p(t), \qquad (3)$$

where $\tau_m$ and $R_m$ are the time constant and leaky resistance of the membrane, respectively, $U_r(t)$ is the resting membrane potential, and $I_p(t)$ is the input current of neuron $p$, as shown in Figure 3a, given by

$$I_p(t) = \sum_{q} W_{pq} S_q(t), \qquad (4)$$

where $W_{pq}$ is the weight from presynaptic neuron $q$ to neuron $p$, and $S_q(t)$ are the incoming spikes from the presynaptic neurons $q$. When $U_p(t)$ reaches a certain threshold $U_{\mathrm{th}}$, neuron $p$ emits a spike, and $U_p(t)$ is reset to $U_r(t)$, as shown in Figure 3b. The firing and resetting processes in discrete time can be modeled as

$$S_p(t) = \begin{cases} 1, & U_p(t) \ge U_{\mathrm{th}} \\ 0, & U_p(t) < U_{\mathrm{th}} \end{cases} \qquad (5)$$

and

$$U_p(t + \delta t) = U_p(t)\bigl(1 - S_p(t)\bigr) + U_r(t) S_p(t), \qquad (6)$$

respectively, where $\delta t$ is the minimum time step.
Although computationally simple, the LIF neuron lacks much of the more complex behavior of real neurons, e.g., the response to longer history dependencies. To address this issue, the ALIF neuron adds an adaptive threshold, which increases after each emitted spike and decays exponentially back to a baseline threshold $U_0$, as shown in Figure 3c. For an ALIF neuron $j$, the threshold adaptation can be modeled as

$$U_{j,\mathrm{th}}(t) = U_0 + \beta \eta_j(t), \qquad \eta_j(t + \delta t) = \rho_j \eta_j(t) + (1 - \rho_j) S_j(t), \qquad \rho_j = \exp(-\delta t / \tau_{j,adp}), \qquad (7)$$

where $\beta$ is a constant that controls the deviation $\eta_j$ from the baseline $U_0$, and $\rho_j$ is a parameter related to the temporal dynamics, governing the exponential decay of the threshold with time constant $\tau_{j,adp}$. With the adaptive threshold, the neural dynamics of ALIF neuron $j$ in discrete time are given by

$$U_j(t + \delta t) = \alpha_j U_j(t) + (1 - \alpha_j) R I_j(t) - U_{j,\mathrm{th}}(t) S_j(t), \qquad (8)$$

where $\alpha_j = \exp(-\delta t / \tau_{j,m})$ represents the exponential decay of the membrane potential. The behavior of ALIF neurons can thus be modeled as self-recurrent with weights $\alpha_j$ and $\rho_j$, and the related self-recurrent parameters $(\tau_{j,m}, \tau_{j,adp})$ and the weights $W_{pq}$ are trainable, which is expected to improve the performance of the ALIF neurons [33].
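The discrete-time ALIF updates in (4)–(8) can be summarized in a short NumPy sketch; the layer sizes follow Table 2, while the time constants, baseline threshold, and soft-reset form are assumptions consistent with the equations above rather than the paper's exact implementation.

```python
import numpy as np

def alif_forward(spikes_in, W, dt=1.0, tau_m=20.0, tau_adp=200.0,
                 R=1.0, U0=0.1, beta=1.8):
    """Forward pass of one ALIF layer following Eqs. (4)-(8).
    spikes_in: (T, n_in) binary input spikes; W: (n_in, n_out) weights."""
    alpha, rho = np.exp(-dt / tau_m), np.exp(-dt / tau_adp)
    n_out = W.shape[1]
    U = np.zeros(n_out)        # membrane potentials
    eta = np.zeros(n_out)      # threshold adaptation variables
    S = np.zeros(n_out)        # spikes emitted at the previous step
    out = np.zeros((spikes_in.shape[0], n_out))
    for t in range(spikes_in.shape[0]):
        I = spikes_in[t] @ W                                  # Eq. (4): input current
        U_th = U0 + beta * eta                                # Eq. (7): adaptive threshold
        U = alpha * U + (1 - alpha) * R * I - U_th * S        # Eq. (8): membrane update
        S = (U >= U_th).astype(float)                         # Eq. (5): firing condition
        eta = rho * eta + (1 - rho) * S                       # Eq. (7): threshold adaptation
        out[t] = S
    return out

# Example: 800 time steps of 80-neuron input spikes feeding 256 ALIF neurons.
in_spikes = (np.random.rand(800, 80) < 0.1).astype(float)
out_spikes = alif_forward(in_spikes, 0.1 * np.random.randn(80, 256))
```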

2.5. Spike-Driven LSTM Layer

To further mine the temporal dependencies of the spikes from the SRNN layer, an LSTM layer [36] is utilized in this model. As a special variant of recurrent neural networks (RNNs), LSTM not only inherits the ability of RNNs to capture the temporal dependencies of their inputs but also alleviates the vanishing/exploding gradient problem when the BPTT algorithm is used to learn from long sequences.
The spike-driven LSTM is similar to the conventional LSTM in structure but differs in its input state, which consists of spikes. Figure 4 shows the structure of the spike-driven LSTM unit, where $X_t$ is the input spike state from the preceding SRNN layer, $f_t$ is the forget gate controlling how much of the previous state is carried into the next state, $i_t$ is the input gate controlling the extent to which new values are allowed to change the cell state, $\tilde{C}_t$ is the candidate new cell state, $o_t$ is the output gate controlling which part of the learned state is returned by the unit, $C_t$ is the cell state serving as the long-term memory, and $H_t$ is the hidden state made available to the next layer. Concretely, these states and gates are described by

$$f_t = \sigma(W_{xf} X_t + W_{hf} H_{t-1} + b_f)$$
$$i_t = \sigma(W_{xi} X_t + W_{hi} H_{t-1} + b_i)$$
$$\tilde{C}_t = \tanh(W_{xc} X_t + W_{hc} H_{t-1} + b_c)$$
$$o_t = \sigma(W_{xo} X_t + W_{ho} H_{t-1} + b_o)$$
$$C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$$
$$H_t = o_t \times \tanh(C_t), \qquad (9)$$

where $C_{t-1}$ and $H_{t-1}$ denote the cell state and hidden state from the previous recurrent unit, $\sigma$ denotes the sigmoid activation function, $\mathbf{W}$ and $\mathbf{b}$, i.e., $\theta_{LSTM} = (\mathbf{W}, \mathbf{b})$, denote the trainable weights and biases, respectively, and $\times$ denotes the Hadamard (element-wise) product.
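Once the SRNN spike train is materialized as a tensor, the spike-driven LSTM and the subsequent classification head can be sketched with standard PyTorch modules. The layer sizes below follow Table 2; classifying from the last hidden state and all other details are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class SpikeDrivenLSTMHead(nn.Module):
    """LSTM + linear classifier operating on SRNN spike outputs (sketch)."""
    def __init__(self, n_spike_in=256, n_hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_spike_in, hidden_size=n_hidden,
                            batch_first=True)
        self.fc = nn.Linear(n_hidden, n_classes)

    def forward(self, spikes):
        # spikes: (batch, time_steps, n_spike_in), binary values from the SRNN layer
        out, _ = self.lstm(spikes)
        logits = self.fc(out[:, -1, :])   # classify from the last hidden state
        return logits

head = SpikeDrivenLSTMHead()
dummy_spikes = (torch.rand(4, 800, 256) < 0.1).float()   # 800 time steps, as in Table 3
print(head(dummy_spikes).shape)                          # torch.Size([4, 2])
```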

2.6. Output Layer

A fully connected linear layer with 2 neurons is applied to the output of the previous LSTM layer, where the 2 neurons correspond to the number of classes $n = 2$, i.e., forced oscillations and natural oscillations. Then, the softmax activation function is used to predict the oscillation class $z$, given by

$$z = \arg\max \, \mathrm{softmax}(\mathbf{P}), \qquad (10)$$

where $\mathbf{P}$ denotes the prediction vector, consisting of one component $p_i$ per class.

2.7. Training of the Proposed Hybrid Network

To train the hybrid network, the correct label of each time sequence is compared with the output of the whole hybrid network. Since forced oscillation detection is treated as a classification task in our work, the cross-entropy function is used as the loss, given by

$$L_{tot}(z_i, \hat{z}_i) = -\frac{1}{M}\sum_{i=1}^{M}\left[ z_i \log(\hat{z}_i) + (1 - z_i)\log(1 - \hat{z}_i) \right], \qquad (11)$$

where $M$ is the number of training examples, $z_i \in \{0, 1\}$ is the target label of a training example $y_i$ in (2), measured by PMUs, and $\hat{z}_i = P_\theta(y_i)$ is the output probability produced by the proposed model with network parameters $\theta$ for the given $y_i$.
To update the network parameters $\theta$ during training, the BPTT algorithm [31] is utilized to minimize the loss function in (11) by computing the partial gradient $\partial L_{tot} / \partial \theta$ with the chain rule. However, since the activation function of the ALIF neuron in (5) is non-differentiable, the chain rule cannot be applied directly. An alternative is to replace the discontinuous gradient with a smooth gradient function. In our work, a Gaussian function $G_s(\cdot)$ is used as the surrogate gradient of (5) during the backpropagation process, given by

$$G_s(U_p(t)) = \mathcal{N}(U_p(t) \mid U_{\mathrm{th}}, \sigma_g^2), \qquad (12)$$

where $\sigma_g$ is the standard deviation of the Gaussian. It has been shown that such a surrogate gradient resembles the actual gradient of the spikes [31], so the learnable parameters can be updated by the BPTT algorithm with an Adam optimizer.
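A minimal PyTorch sketch of such a surrogate-gradient spike function is given below; σ_g = 0.5 follows Table 3, while the threshold value and the surrounding wiring are placeholders rather than the paper's exact implementation.

```python
import torch

class GaussianSurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; Gaussian surrogate gradient of
    Eq. (12) in the backward pass (a sketch, not the authors' exact code)."""

    @staticmethod
    def forward(ctx, membrane, u_th=0.5, sigma_g=0.5):
        ctx.save_for_backward(membrane)
        ctx.u_th, ctx.sigma_g = u_th, sigma_g
        return (membrane >= u_th).float()                    # Eq. (5)

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        u_th, sigma = ctx.u_th, ctx.sigma_g
        # Gaussian density N(U | U_th, sigma^2) as the surrogate derivative.
        gauss = torch.exp(-0.5 * ((membrane - u_th) / sigma) ** 2) \
                / (sigma * (2 * torch.pi) ** 0.5)
        return grad_output * gauss, None, None

spike_fn = GaussianSurrogateSpike.apply   # usable inside the ALIF update
```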
Given the loss function $L_{tot}$, the BPTT algorithm updates the network parameters $\theta$ by computing the partial derivatives $\partial L_{tot} / \partial \theta$. The learnable parameters $\theta_{LSTM}$ of the LSTM layer are updated as

$$\theta_{LSTM}^{k+1} = \theta_{LSTM}^{k} - \eta \frac{\partial L_{tot}}{\partial \theta_{LSTM}^{k}}, \qquad (13)$$

and the parameters of the SRNN layer, i.e., $W$ in (4), $\tau_{adp}$ in (7), and $\tau_m$ in (8), are updated as

$$W^{k+1} = W^{k} - \eta \frac{\partial L_{tot}}{\partial W^{k}}, \qquad \tau_{adp}^{k+1} = \tau_{adp}^{k} - \eta \frac{\partial L_{tot}}{\partial \tau_{adp}^{k}}, \qquad \tau_{m}^{k+1} = \tau_{m}^{k} - \eta \frac{\partial L_{tot}}{\partial \tau_{m}^{k}}, \qquad (14)$$

where $\eta \in (0, 1)$ is the learning rate. In practice, these updates are carried out by the Adam optimizer, and the entire training procedure is outlined in Algorithm 1.
Algorithm 1 The training procedure of the proposed SRNN-LSTM network.
Input: Time-sequence data and the class labels (y, z);
Output: Parameters of the SRNN-LSTM network;
  • Initialize: randomly initialize the parameters of the SRNN-LSTM network, the number of epochs, and the learning rate η;
  • Preprocessing: data segmentation and normalization;
  • Encoding: s = SpikingEncoding(y);
  • for epoch = 1 to epochs do
  •    % forward propagation
  •    x = SRNN(s);
  •    p = Softmax(LSTM(x));
  •    ẑ = argmax(p);
  •    % backpropagation
  •    Compute L_tot(z, ẑ);
  •    θ_LSTM ← θ_LSTM − η ∂L_tot/∂θ_LSTM;
  •    W ← W − η ∂L_tot/∂W;
  •    τ_adp ← τ_adp − η ∂L_tot/∂τ_adp;
  •    τ_m ← τ_m − η ∂L_tot/∂τ_m;
  • end for
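The steps of Algorithm 1 map onto a short PyTorch training routine; the sketch below assumes a model object chaining the SRNN and LSTM layers and a spike_encode callable, neither of which is taken from the paper's code, while the epoch count and learning rate follow Table 3.

```python
import torch
import torch.nn as nn

# Sketch of the loop in Algorithm 1. `model`, `spike_encode`, and `loader`
# are assumed to be supplied by the caller; these names are placeholders.
def train(model, loader, spike_encode, epochs=200, lr=1e-2, device="cpu"):
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # performs updates (13)-(14)
    criterion = nn.CrossEntropyLoss()                          # cross-entropy loss (11)
    for epoch in range(epochs):
        for y_batch, z_batch in loader:           # PMU windows and class labels
            s = spike_encode(y_batch).to(device)  # spike encoding of the input window
            logits = model(s)                     # forward pass: SRNN -> LSTM -> linear
            loss = criterion(logits, z_batch.to(device))
            optimizer.zero_grad()
            loss.backward()    # BPTT with the Gaussian surrogate gradient of Eq. (12)
            optimizer.step()
    return model
```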

2.8. Evaluation Metrics

To evaluate the detection performance of the proposed hybrid network, four evaluation indices, i.e., Accuracy (Acc), Precision (Pre), Recall (Rec), and F1-score (F1), are used. They are calculated as

$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Pre} = \frac{TP}{TP + FP}, \quad \mathrm{Rec} = \frac{TP}{TP + FN}, \quad \mathrm{F1} = \frac{2\,\mathrm{Pre} \cdot \mathrm{Rec}}{\mathrm{Pre} + \mathrm{Rec}}, \qquad (15)$$

where $TP$, $FP$, $TN$, and $FN$ are the numbers of true positives, false positives, true negatives, and false negatives, respectively.
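Equivalently, these indices can be computed with scikit-learn; in the small sketch below, the forced oscillation class is assumed to be the positive class, and the label vectors are dummy placeholders.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]   # ground-truth labels of test samples (1 = forced oscillation)
y_pred = [1, 0, 0, 1, 0, 1]   # labels predicted by the network
acc = accuracy_score(y_true, y_pred)
pre = precision_score(y_true, y_pred)
rec = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
```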

3. Experimental Results and Discussion

In this section, the performance of the proposed hybrid network is analyzed and compared with LSTM and two SRNN-related models (i.e., SRNN + SRNN and SRNN + SRNN + LSTM) using both simulated and real-world data. The LSTM is used as the representative of ANNs. The SRNN + SRNN network is employed to assess the impact of integrating LSTM with an SRNN, while the SRNN + SRNN + LSTM network is utilized to evaluate the performance of adding more SRNN layers, particularly in scenarios with limited training data. To intuitively demonstrate the effectiveness of the proposed model, visualization analyses are conducted for the simulated oscillation data under two special operating scenarios: the resonance case and periodically non-sinusoidal injected disturbances. Additionally, experiments on simulated data are carried out to analyze the impact of different parameters on the proposed model, mainly including β and U 0 in (7), and R m in (3).

3.1. Datasets

Simulated and real-world measured PMU data are utilized in our experiments. The simulated PMU data are generated through 40 s time-domain simulations using TSAT 10.0 software on the WECC 179-bus system with 29 machines, as shown in Figure 5 [37]. There are 27 simulated oscillation cases, i.e., 9 poorly damped natural electromechanical oscillation cases and 18 forced oscillation cases. The natural oscillation cases cover local modes, inter-area modes, or a combination of both, generated from single or multiple sources, while the forced oscillation cases include resonance and non-resonance scenarios. The simulation data comprise 14,202 time-sequence samples in total. Meanwhile, the real-world measured PMU data contain 1327 time-sequence samples from six actual oscillation events captured in the ISO New England (ISO-NE) system [38].
To enlarge the dataset, we employ a sliding window segmentation approach. Each signal sample has a length of 800 × 1 data points, and the data values are normalized into the range [0, 1] to adapt to measurements collected at different voltage levels. From the simulated and/or real-world data, we randomly select 75% of the samples for training and 25% for testing, i.e., a ratio of 3:1.
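A minimal sketch of the segmentation and normalization step is given below; the 800-point window length follows the text, while the stride is an assumption, since the overlap between windows is not reported.

```python
import numpy as np

def segment_and_normalize(signal, window=800, stride=100):
    """Sliding-window segmentation of one PMU channel into fixed-length samples,
    each normalized to [0, 1]. The stride value is illustrative."""
    segments = []
    for start in range(0, len(signal) - window + 1, stride):
        w = np.asarray(signal[start:start + window], dtype=float)
        w = (w - w.min()) / (w.max() - w.min() + 1e-12)   # normalize to [0, 1]
        segments.append(w)
    return np.stack(segments)   # shape: (n_samples, window)
```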

3.2. Parameter Setting

The proposed model is implemented in Python 3.10 and PyTorch 2.1.2 and is trained on a 13th Gen Intel(R) Core(TM) i5-13490F CPU and an NVIDIA GeForce RTX 3060 GPU. There are 80 encoding neurons in the input layer, 256 ALIF neurons in the SRNN layer, 128 units in the LSTM layer, and 2 neurons in the output layer, as shown in Table 2. The main parameters of the ALIF neurons and the key hyperparameters of network training are shown in Table 3.
The number of neurons in the other three comparative networks is also shown in Table 2. For a fair comparison, the LSTM network has the same number of layers and the same number of neurons in each layer as the proposed model. The SRNN + SRNN network is constructed with only two SRNN layers, without an LSTM layer, while the SRNN + SRNN + LSTM network is constructed with two SRNN layers and one LSTM layer. Three encoding methods, i.e., Poisson, RBF, and RBF-Th, are utilized to generate spike sequences for the proposed model and the two SRNN-based networks. Moreover, the parameters of the ALIF neurons and network training are kept the same for the three SRNN-based networks.

3.3. Results with Simulation Data

The simulation data are first used to train and test the four networks, where the three SRNN-related models utilize the three different encoding methods. Figure 6 shows the loss and accuracy curves of the four models during training, with RBF encoding applied to the three SRNN-related models. As can be seen from Figure 6, the LSTM converges at a slower rate and experiences larger fluctuations in both the loss and accuracy curves compared to the other three SRNN-related networks. The reason is that multiple LSTM layers are susceptible to the exploding gradient problem, which causes the weights to update by large amounts, leading to instability of the loss function. In addition, comparing the three SRNN-related models, the loss values of the two models with an LSTM layer decrease faster and more stably than those of the SRNN + SRNN model without an LSTM layer, and the two models with an LSTM layer achieve detection accuracy values exceeding 97% after 50 epochs of training. This indicates that combining SRNN and LSTM layers is conducive to the training of the SRNN-based models. With the other encoding methods, the loss and accuracy curves of the three SRNN-related models are similar to those obtained with RBF encoding.
The four trained networks are then tested on other simulation data, which do not overlap with the training data. The evaluation indices in (15) are calculated and listed in Table 4. As shown in Table 4, the LSTM achieves high precision but low recall. This implies that the LSTM is good at correctly labeling the forced oscillation cases it detects but misses many actual forced oscillation instances in the data. Thus, the accuracy and F1 score of the LSTM are not satisfactory given the limited training epochs and data. Meanwhile, the two SRNN-related models with an LSTM layer provide higher values in each index than the SRNN + SRNN model without an LSTM layer. This demonstrates that integrating an SRNN with LSTM can provide satisfactory detection performance. Moreover, the proposed SRNN + LSTM network achieves the highest accuracy and precision among the four networks in the case of RBF or RBF-Th encoding.

3.4. Results with Addition of Real-World Data

In this experiment, four networks are re-trained using data that include both simulation data and limited real-world measurement data to evaluate their performance in real operating scenarios. The training and testing settings are the same as those in Section 3.3. The performance indices for each model are shown in Table 5.
Compared with Table 4, the indices for each network in Table 5 decrease due to the introduction of measurement data. The reason is that the distribution of the measurement data differs from that of the simulation data. Moreover, the amount of measurement data is much smaller than that of the simulation data. This degrades the performance of the trained networks on the testing data, which are randomly selected partly from the measurement data and partly from the simulation data. In addition, in the case of Poisson encoding, integrating LSTM with an SRNN layer leads to detection failure; the reason for this phenomenon remains to be explored in future work.
We also note that although the LSTM achieves a certain level of detection accuracy, its recall and F1 score are quite low. This is because the LSTM misclassifies a significant number of forced oscillations as natural oscillations; since the number of correctly detected natural oscillation samples is high, the overall accuracy of the LSTM appears satisfactory. In contrast, integrating LSTM and an SRNN helps effectively mine the temporal dependencies in time-series oscillation data and achieves good performance. In addition, comparing the SRNN + SRNN + LSTM network with the proposed SRNN + LSTM network, the indices in Table 5 show that adding more SRNN layers does not improve detection performance under limited data. Overall, the proposed hybrid network demonstrates strong classification performance for distinguishing forced oscillations from natural oscillations.

3.5. Visual Analysis

To visualize the operation of the proposed hybrid network, we take a resonance case and a periodic rectangular injected-signal case as examples and present, in this section, the fired spikes of the input and SRNN layers and the prediction results after the LSTM and output layers.
First, a resonance case is considered, in which the forced oscillation is caused by a sinusoidal signal injected at generator 4 in Figure 5 at exactly the local natural mode frequency of 0.86 Hz. To demonstrate that the proposed hybrid network adapts to voltage signals with weak oscillation signatures, we take, as an example, the voltage of a forced oscillation measured far away from the oscillation source. The shape of the voltage data is similar to that of the natural oscillation, as shown in Figure 7a, and the resonance makes it difficult to determine whether a forced oscillation exists. The two voltage signals are separately input into the proposed hybrid network, and pulse spikes are generated by the RBF-based spiking encoder in the input layer.
To observe the generated spikes clearly, only the spikes from four neurons, i.e., neurons 20, 40, 60, and 80, within the time-step range [1, 200] are shown in Figure 7b for the forced and natural oscillations, respectively. After the pulse spikes are processed by the ALIF neurons in the SRNN layer, the fired spikes of each ALIF neuron at each time step are represented by single black dots, as shown in Figure 7c. Each column represents the fired spikes of the 256 ALIF neurons at one time step, and each row represents the fired spikes of a given ALIF neuron over the 800 time steps. The successive spikes from time step 1 to 200 indicate an increasing trend in the signal value, while the absence of spikes signifies a decrease in the signal. Meanwhile, the intervals between dense and sparse spikes also reflect the frequency of the signal. After the SRNN layer, the fired spikes are fed into the LSTM layer and the classification layer; the prediction probabilities for the two types of oscillation are shown in Figure 7d. The spike differences between the two oscillation signals in the input and SRNN layers demonstrate that the proposed hybrid network can effectively detect forced oscillations in resonance cases.
Next, another oscillation case, caused by a periodic rectangular disturbance signal, is considered. The voltage signal of the forced oscillation exhibits a roughly rectangular shape, as shown in Figure 8a. To show the detection effectiveness of the proposed hybrid network, we choose as an example the voltage data of the natural oscillation shown in Figure 8a, whose initial stage is similar to that of the forced oscillation. Similarly, the processing results from the input layer using RBF-based encoding, the SRNN layer, and the LSTM + softmax layer for the two types of oscillation are shown in Figure 8b–d, respectively. We also note that the density distribution of the fired neurons in Figure 8c differs from that in Figure 7c; this difference is attributed to the distinct responses of the encoding and firing mechanisms to rectangular and sinusoidal signals. From Figure 8, it can be observed that the proposed hybrid network can effectively distinguish forced oscillations from natural oscillations under a periodic rectangular injected disturbance, since there are obvious spike differences between the two oscillation types in the input and SRNN layers.

3.6. Discussion

3.6.1. Parameter Sensitivity

To analyze the parameter sensitivity of the proposed SRNN-LSTM hybrid model, we conduct experiments on the simulation data with the RBF-Th encoding strategy. The parameters considered include β and U_0 in (7) and R_m in (3). These parameters were chosen because β and U_0 are key parameters of the adaptive firing threshold of the ALIF neuron, and R_m is related to the dynamic membrane potential (or voltage).
For β, we examined values ranging from 0.0 to 3.0 in intervals of 0.6, where β = 0 indicates that a non-adaptive threshold is used. When β is varied, the other parameters are kept constant as specified in Section 3.2. As shown in Table 6, the detection accuracy of the model first decreases as β increases, rises sharply to its highest value at β = 1.8, and then decreases again. In short, an appropriate setting of β yields better results, whereas a poorly chosen one degrades them. For U_0 and R_m, we considered the ranges [0.07, 0.12] and [0.4, 1.4] with intervals of 0.01 and 0.2, respectively. As their values increase, the detection accuracies first increase, reach their respective peaks, and then decrease. The underlying cause of this behavior remains to be explored in our future work.

3.6.2. Robustness on Various Noise Level

To demonstrate the noise robustness of the proposed hybrid network, we compare its detection accuracy with that of the three other comparative networks. The simulation data are mixed with noise at SNRs ranging from +50 to 0 dB for both training and testing. The noisy floating-point data are then converted into spike sequences using the RBF-Th encoding method for the three SRNN-related models. After training and testing the four models on the noisy simulation data, we collect the prediction results and present the corresponding accuracy values in Table 7.
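The noise mixing can be reproduced with a standard SNR-based construction such as the sketch below; the use of white Gaussian noise and the power-based SNR definition are assumptions, as the paper does not detail its noise model.

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add white Gaussian noise so that the result has the requested SNR in dB."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```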
The proposed hybrid network achieves accuracies of 98.83% down to 93.06% as the SNR decreases from 50 dB to 0 dB; its accuracy decreases as the noise increases, as is the case for the other three comparative models. However, as shown in Table 7, the accuracy decline of the proposed SRNN-LSTM network is considerably smaller than that of the other methods. The SRNN + SRNN + LSTM network and the proposed model, both of which integrate LSTM with an SRNN, show much smaller accuracy drops than the pure LSTM and SRNN models. These results highlight that the integration of an SRNN and LSTM enhances the noise robustness of the model.

4. Conclusions

This paper proposes a hybrid network that combines an SRNN with an LSTM structure to distinguish forced oscillations from natural oscillations. For oscillation PMU data, the time-series floating-point data are first converted into spike sequences using a spiking encoder. The SRNN leverages the advantages of ALIF neurons with self-recurrence to enhance computational capability while inheriting the high energy efficiency inherent to the spiking computation mechanism. Integrating the SRNN with the LSTM structure further enhances the capability of mining temporal dependencies in spike signals. Extensive experiments were conducted on both simulated and limited real-world PMU data to verify the effectiveness of the proposed model. The experimental results demonstrate the superiority of the proposed hybrid model over pure LSTM and other SRNN-related models for detecting forced oscillations. In a nutshell, the proposed model achieves satisfactory detection performance, even in the presence of strong noise.
In the future, the performance of the proposed model can be further improved. For instance, the results obtained using the limited real measured data show that the performance of the proposed model is not as high as that obtained using simulation data, and the small sample size of the real-world data is one of the main reasons for this degradation. Thus, how to train the proposed model effectively on small datasets needs to be addressed in future work. Moreover, the underlying causes of the proposed model's parameter sensitivity remain to be explored. For real applications, the proposed model can be implemented on an FPGA and deployed in edge devices to detect forced oscillations in a timely manner.

Author Contributions

Conceptualization, X.Y., J.W. and Y.W.; data curation, J.W. and X.H.; formal analysis, X.Y.; investigation, J.W.; methodology, X.Y., J.W. and X.H.; resources, X.Y.; supervision, X.X.; validation, X.Y.; visualization, J.W. and X.H.; writing—original draft, X.Y. and J.W.; writing—review and editing, Y.W. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from [38].

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SNN  Spiking neural network
SRNN  Spiking recurrent neural network
PMU  Phasor measurement unit
BPTT  Backpropagation through time
ALIF  Adaptive leaky integrate-and-fire
RBF  Radial basis function
RBF-Th  Radial basis function with a set threshold
LIF  Leaky integrate-and-fire
NO  Natural oscillation
FO  Forced oscillation

References

  1. Ghorbaniparvar, M. Survey on forced oscillations in power system. J. Mod. Power Syst. Clean Energy 2017, 5, 671–682. [Google Scholar] [CrossRef]
  2. Ye, H.; Liu, Y.; Zhang, P.; Du, Z. Analysis and Detection of Forced Oscillation in Power System. IEEE Trans. Power Syst. 2017, 32, 1149–1160. [Google Scholar] [CrossRef]
  3. Trudnowski, D.J.; Guttromson, R. A Strategy for Forced Oscillation Suppression. IEEE Trans. Power Syst. 2020, 35, 4699–4708. [Google Scholar] [CrossRef]
  4. Follum, J.; Pierre, J.W. Detection of Periodic Forced Oscillations in Power Systems. IEEE Trans. Power Syst. 2016, 31, 2423–2433. [Google Scholar] [CrossRef]
  5. Khan, M.A.; Pierre, J.W. Detection of Periodic Forced Oscillations in Power Systems Using Multitaper Approach. IEEE Trans. Power Syst. 2019, 34, 1086–1094. [Google Scholar] [CrossRef]
  6. Estevez, P.G.; Marchi, P.; Galarza, C.; Elizondo, M. Non-Stationary Power System Forced Oscillation Analysis Using Synchrosqueezing Transform. IEEE Trans. Power Syst. 2021, 36, 1583–1593. [Google Scholar] [CrossRef]
  7. Agrawal, U.; Pierre, J.W. Detection of Periodic Forced Oscillations in Power Systems Incorporating Harmonic Information. IEEE Trans. Power Syst. 2019, 34, 782–790. [Google Scholar] [CrossRef]
  8. Xie, R.; Trudnowski, D. Distinguishing features of natural and forced oscillations. In Proceedings of the 2015 IEEE Power & Energy Society General Meeting, Denver, CO, USA, 26–30 July 2015; pp. 1–5. [Google Scholar] [CrossRef]
  9. Jha, R.; Senroy, N. Wavelet Ridge Technique Based Detection of Forced Oscillation in Power System Signal. IEEE Trans. Power Syst. 2019, 34, 3306–3308. [Google Scholar] [CrossRef]
  10. Wang, X.; Turitsyn, K. Data-Driven Diagnostics of Mechanism and Source of Sustained Oscillations. IEEE Trans. Power Syst. 2016, 31, 4036–4046. [Google Scholar] [CrossRef]
  11. Ghorbaniparvar, M.; Zhou, N.; Li, X.; Trudnowski, D.J.; Xie, R. A Forecasting-Residual Spectrum Analysis Method for Distinguishing Forced and Natural Oscillations. IEEE Trans. Smart Grid 2019, 10, 493–502. [Google Scholar] [CrossRef]
  12. Liu, J.; Yao, W.; Wen, J.; He, H.; Zheng, X. Active Power Oscillation Property Classification of Electric Power Systems Based on SVM. J. Appl. Math. 2014, 2014, 218647. [Google Scholar] [CrossRef]
  13. Estevez, P.G.; Marchi, P.; Messina, F.; Galarza, C. Forced Oscillation Identification and Filtering From Multi-Channel Time-Frequency Representation. IEEE Trans. Power Syst. 2023, 38, 1257–1269. [Google Scholar] [CrossRef]
  14. Meng, Y.; Yu, Z.; Lu, N.; Shi, D. Time Series Classification for Locating Forced Oscillation Sources. IEEE Trans. Smart Grid 2021, 12, 1712–1721. [Google Scholar] [CrossRef]
  15. Singh, P.; Prakash, A.; Parida, S. Neural network based pattern recognition for classification of the forced and natural oscillation. Electr. Power Syst. Res. 2023, 224, 109706. [Google Scholar] [CrossRef]
  16. Surinkaew, T.; Emami, K.; Shah, R.; Islam, M.R.; Islam, S. Forced oscillation management in a microgrid with distributed converter-based resources using hierarchical deep-learning neural network. Electr. Power Syst. Res. 2023, 222, 109479. [Google Scholar] [CrossRef]
  17. Muhammed, A.O.; Isbeih, Y.J.; Moursi, M.S.E.; Hosani, K.H.A. Deep Learning-Based Models for Predicting Poorly Damped Low-Frequency Modes of Oscillations. IEEE Trans. Power Syst. 2024, 39, 3257–3270. [Google Scholar] [CrossRef]
  18. Feng, S.; Chen, J.; Ye, Y.; Wu, X.; Cui, H.; Tang, Y.; Lei, J. A two-stage deep transfer learning for localisation of forced oscillations disturbance source. Int. J. Electr. Power Energy Syst. 2022, 135, 107577. [Google Scholar] [CrossRef]
  19. Matar, M.; Estevez, P.G.; Marchi, P.; Messina, F.; Elmoudi, R.; Wshah, S. Transformer-based deep learning model for forced oscillation localization. Int. J. Electr. Power Energy Syst. 2023, 146, 108805. [Google Scholar] [CrossRef]
  20. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Netw. 1997, 10, 1659–1671. [Google Scholar] [CrossRef]
  21. Cao, Y.; Chen, Y.; Khosla, D. Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition. Int. J. Comput. Vis. 2015, 113, 54–66. [Google Scholar] [CrossRef]
  22. Zheng, N.; Mazumder, P. Online Supervised Learning for Hardware-Based Multilayer Spiking Neural Networks Through the Modulation of Weight-Dependent Spike-Timing-Dependent Plasticity. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4287–4302. [Google Scholar] [CrossRef] [PubMed]
  23. Maass, W.; Markram, H. On the computational power of circuits of spiking neurons. J. Comput. Syst. Sci. 2004, 69, 593–616. [Google Scholar] [CrossRef]
  24. Wang, X.; Lin, X.; Dang, X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw. 2020, 125, 258–280. [Google Scholar] [CrossRef]
  25. Peng, C.; Qiao, G.; Ge, B. Dynamic Cascade Spiking Neural Network Supervisory Controller for a Nonplanar Twelve-Rotor UAV. Sensors 2025, 25, 1177. [Google Scholar] [CrossRef] [PubMed]
  26. Raja Pagalavan, B.; Venkatakrishnan, G.; Rengaraj, R. Power transformer fault diagnosis and condition monitoring using hybrid TDO-SNN technique. Int. J. Hydrogen Energy 2024, 68, 1370–1381. [Google Scholar] [CrossRef]
  27. Hussaini, S.; Milford, M.; Fischer, T. Applications of Spiking Neural Networks in Visual Place Recognition. IEEE Trans. Robot. 2025, 41, 518–537. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Xu, H.; Huang, L.; Chen, C. A storage-efficient SNN–CNN hybrid network with RRAM-implemented weights for traffic signs recognition. Eng. Appl. Artif. Intell. 2023, 123, 106232. [Google Scholar] [CrossRef]
  29. Deng, L.; Wu, Y.; Hu, X.; Liang, L.; Ding, Y.; Li, G.; Zhao, G.; Li, P.; Xie, Y. Rethinking the performance comparison between SNNS and ANNS. Neural Netw. 2020, 121, 294–307. [Google Scholar] [CrossRef]
  30. Rana, A.; Kim, K.K. Electrocardiography Classification with Leaky Integrate-and-Fire Neurons in an Artificial Neural Network-Inspired Spiking Neural Network Framework. Sensors 2024, 24, 3426. [Google Scholar] [CrossRef]
  31. Neftci, E.O.; Mostafa, H.; Zenke, F. Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks. IEEE Signal Process. Mag. 2019, 36, 51–63. [Google Scholar] [CrossRef]
  32. Kerdphol, T.; Ngamroo, I.; Surinkaew, T. Forced oscillation suppression using extended virtual synchronous generator in a low-Inertia microgrid. Int. J. Electr. Power Energy Syst. 2023, 151, 109126. [Google Scholar] [CrossRef]
  33. Yin, B.; Corradi, F.; Bohté, S.M. Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks. In Proceedings of the International Conference on Neuromorphic Systems 2020 (ICONS 2020), Chicago, IL, USA, 28–30 July 2020. [Google Scholar] [CrossRef]
  34. Xie, X.; Qu, H.; Yi, Z.; Kurths, J. Efficient Training of Supervised Spiking Neural Network via Accurate Synaptic-Efficiency Adjustment Method. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1411–1424. [Google Scholar] [CrossRef] [PubMed]
  35. Lee, C.; Srinivasan, G.; Panda, P.; Roy, K. Deep Spiking Convolutional Neural Network Trained with Unsupervised Spike-Timing-Dependent Plasticity. IEEE Trans. Cogn. Dev. Syst. 2019, 11, 384–394. [Google Scholar] [CrossRef]
  36. Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y. Short-Term Residential Load Forecasting Based on LSTM Recurrent Neural Network. IEEE Trans. Smart Grid 2019, 10, 841–851. [Google Scholar] [CrossRef]
  37. Maslennikov, S.; Wang, B.; Zhang, Q.; Ma, F.; Luo, X.; Sun, K.; Litvinov, E. A test cases library for methods locating the sources of sustained oscillations. In Proceedings of the 2016 IEEE Power and Energy Society General Meeting (PESGM), Boston, MA, USA, 17–21 July 2016; pp. 1–5. [Google Scholar] [CrossRef]
  38. Maslennikov, S.; Wang, B.; Zhang, Q.; Ma, F.; Luo, X.; Sun, K.; Litvinov, E. Test Cases Library of Power System Sustained Oscillations. 2016. Available online: https://web.eecs.utk.edu/~kaisun/Oscillation/simulatedcases.html (accessed on 6 August 2021).
Figure 1. Structure of the proposed hybrid network.
Figure 2. Generated spikes and their reconstructed signal using Poisson encoding: (a) forced oscillation raw PMU data; (b) spiking sequences; (c) reconstructed oscillation data from the spiking sequences in (b).
Figure 3. Spike generation of biological neurons. (a) Behavior of biological neuron; (b) LIF neuron; (c) ALIF neuron with adaptive threshold.
Figure 4. Structure of LSTM.
Figure 5. WECC 179-bus model as the test system.
Figure 6. Loss and detection accuracy curves versus epochs for four networks during training procedure. (a) LSTM; (b) SRNN + SRNN; (c) SRNN + SRNN + LSTM; (d) proposed SRNN + LSTM.
Figure 7. For the case of resonance, the results of three key layers in the proposed hybrid network. The first row refers to the forced oscillation, which resonates with local natural oscillation; the second row refers to the natural oscillation. (a) PMU voltage raw data; (b) fired spikes generated by RBF encoding in the input layer; (c) fired spikes of the SRNN layer; (d) prediction probability after LSTM layer and softmax function.
Figure 8. For the case of periodic/rectangular injected disturbance, the results of three key layers in the proposed hybrid network. The first row refers to forced oscillation; the second row refers to the natural oscillation. (a) PMU voltage raw data; (b) fired spikes generated by RBF encoding in the input layer; (c) fired spikes of the SRNN layer; (d) prediction probability after LSTM layer and softmax function.
Table 1. Comparison of advantages and disadvantages of traditional methods, ANNs, and SNNs.
Methods | Advantages | Disadvantages
Traditional | Low computational cost. Easy to implement. Less data requirement. | Feature extraction complexity. Limited ability to handle complex and changing operating conditions.
ANNs | Strong feature extraction capability. Effective networks. | Inherent computational complexity. High power consumption.
SNNs | Energy-efficient. Bio-inspired. | Non-differentiable. Training difficulty.
Table 2. Number of neurons in each layer of the four networks.
Network | Input Layer | Hidden Layers | Output Layer
LSTM | 80 | [256, 128] | 2
SRNN + SRNN | 80 | [256, 128] | 2
SRNN + SRNN + LSTM | 80 | [256, 128, 128] | 2
Proposed (SRNN + LSTM) | 80 | [256, 128] | 2
Table 3. Parameters of ALIF neurons and network training.
Parameters of ALIF | Values | Training Parameters | Values
R_m in (3) | 1 | length of time window | 800
β in (7) | 1.8 | batch size | 200
U_0 in (7) | 0.5 | epochs | 200
σ_g in (12) | 0.5 | learning rate | 1 × 10⁻²
Table 4. Evaluation indices for four networks on simulation testing data.
Network | Encoding | Acc (%) | Pre (%) | Rec (%) | F1 (%)
LSTM | None | 87.48 | 99.90 | 66.67 | 79.97
SRNN + SRNN | RBF | 87.16 | 76.37 | 95.24 | 84.76
SRNN + SRNN | RBF-Th | 91.73 | 96.09 | 81.27 | 88.06
SRNN + SRNN | Poisson | 95.19 | 90.93 | 96.83 | 93.78
SRNN + SRNN + LSTM | RBF | 94.52 | 98.98 | 87.93 | 93.13
SRNN + SRNN + LSTM | RBF-Th | 93.88 | 96.53 | 86.79 | 91.41
SRNN + SRNN + LSTM | Poisson | 95.45 | 92.25 | 95.47 | 93.83
Proposed (SRNN + LSTM) | RBF | 95.61 | 92.02 | 96.70 | 94.30
Proposed (SRNN + LSTM) | RBF-Th | 98.83 | 99.80 | 97.08 | 98.42
Proposed (SRNN + LSTM) | Poisson | 94.19 | 91.56 | 93.08 | 92.32
Table 5. Evaluation indices for four networks with the addition of real-world measurement data into the simulation data.
Network | Encoding | Acc (%) | Pre (%) | Rec (%) | F1 (%)
LSTM | None | 75.32 | 50.00 | 0.10 | 0.19
SRNN + SRNN | RBF | 85.23 | 91.05 | 44.57 | 59.84
SRNN + SRNN | RBF-Th | 86.13 | 97.71 | 44.86 | 61.48
SRNN + SRNN | Poisson | 83.92 | 76.83 | 49.90 | 60.51
SRNN + SRNN + LSTM | RBF | 84.76 | 92.22 | 41.81 | 57.53
SRNN + SRNN + LSTM | RBF-Th | 86.78 | 96.74 | 48.10 | 64.25
SRNN + SRNN + LSTM | Poisson | xx | xx | xx | xx
Proposed (SRNN + LSTM) | RBF | 87.18 | 98.09 | 49.05 | 65.39
Proposed (SRNN + LSTM) | RBF-Th | 86.74 | 89.32 | 52.57 | 66.18
Proposed (SRNN + LSTM) | Poisson | xx | xx | xx | xx
xx: denotes detection failure.
Table 6. Accuracy under different parameter settings for the proposed hybrid network.
β | Acc (%) | U_0 | Acc (%) | R_m | Acc (%)
0.0 | 95.33 | 0.07 | 95.57 | 0.4 | 93.55
0.6 | 93.36 | 0.08 | 95.60 | 0.6 | 94.50
1.2 | 91.17 | 0.09 | 97.48 | 0.8 | 96.57
1.8 | 98.83 | 0.10 | 98.83 | 1.0 | 98.83
2.4 | 95.83 | 0.11 | 96.62 | 1.2 | 95.12
3.0 | 93.26 | 0.12 | 95.29 | 1.4 | 93.60
Table 7. Detection accuracy of four networks under different SNR levels.
Network (Acc (%)) | 0 dB | 10 dB | 20 dB | 30 dB | 40 dB | 50 dB | Noise-Free
LSTM | 71.16 | 71.40 | 75.84 | 85.03 | 86.50 | 87.05 | 87.48
SRNN + SRNN | 79.29 | 83.37 | 86.53 | 90.26 | 91.40 | 93.55 | 95.19
SRNN + SRNN + LSTM | 85.44 | 87.75 | 91.48 | 92.16 | 93.95 | 95.23 | 95.45
Proposed (SRNN + LSTM) | 93.06 | 94.69 | 97.01 | 97.98 | 98.83 | 98.83 | 98.83
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
