Article

Methodology for Power Systems’ Emergency Control Based on Deep Learning and Synchronized Measurements

1 Department of Automated Electrical Systems, Ural Federal University, 620002 Yekaterinburg, Russia
2 Faculty of Electrical and Environmental Engineering, Riga Technical University, 1048 Riga, Latvia
3 College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4667; https://doi.org/10.3390/math11224667
Submission received: 20 October 2023 / Revised: 13 November 2023 / Accepted: 15 November 2023 / Published: 16 November 2023

Abstract: Modern electrical power systems place special demands on the speed and accuracy of transient and steady-state process control. The introduction of renewable energy sources has significantly influenced the amount of inertia and the uncertainty of transient processes occurring in energy systems. These changes have led to the need to clarify the existing principles for the implementation of devices protecting power systems from the loss of small-signal and transient stability. Traditional methods of developing these devices do not provide the required adaptability because they rely on a predefined list of accidents to be considered. Therefore, there is a clear need to develop fundamentally new devices for the emergency control of power system modes based on adaptive algorithms. This work proposes to develop emergency control methods based on deep machine learning algorithms and data obtained from synchronized vector measurement devices. This approach makes it possible to ensure adaptability and high performance when choosing control actions. Recurrent neural networks, long short-term memory networks, restricted Boltzmann machines, and self-organizing maps were selected as deep learning algorithms. Testing was performed using the IEEE14, IEEE24, and IEEE39 power system models. Two data samples were considered: with and without data from synchronized vector measurement devices. The highest accuracy of classification of the control actions' value corresponds to the long short-term memory networks algorithm: the accuracy was 94.31% without the data from the synchronized vector measurement devices and 94.45% when considering these data. The obtained results confirm the possibility of using deep learning algorithms to build an adaptive emergency control system for power systems.

1. Introduction

The emergency control (EC) of electrical power systems (EPSs) is an important part of ensuring the reliability of power supply to consumers. For modern EPSs, emergency control is aimed at maintaining small-signal stability (SSS), transient stability (TS), and voltage stability, as well as maintaining the required alternating current frequency and the permissible current load of electrical network elements [1]. A separate automation system is responsible for each EPS control task, analyzing the parameters of the electrical mode (currents, voltages, active power flows, reactive power flows, etc.) and issuing control actions (CAs) (load shedding, generator shutdown, generator unloading, etc.) according to specified logical rules. The operation of automation devices is implemented in stages: the first includes devices protecting local energy areas, and the last includes devices aimed at dividing EPSs.
The EC structure described above has shown high reliability and efficiency even with the active introduction of renewable energy sources (RESs) [2,3,4,5]. The goal of introducing RESs is to reduce the impact of the electricity sector on global warming [6] by transitioning from traditional power plants running on fossil fuels to low-carbon electricity production from solar, wind, tidal, and other renewable energy sources. This goal has been partially achieved in many European countries, the USA, India, and China [7].
The key features of RESs are the stochasticity of electricity generation [8,9,10], low inertia [11], and the difficulty of predicting their operating modes. The increase in the share of RESs in EPSs is accompanied by the simultaneous dismantling of traditional fossil-fuel power units. As a result, the properties of modern EPSs have changed significantly: transient processes have become faster, and the accuracy of predicting steady-state processes has decreased.
Modern EPSs are complex technical systems with a significant predominance of digital devices for the analysis, control, and planning of steady-state and transient processes. The digitalization of the electric power industry has led to the accumulation of large amounts of data and the possibility of processing them using machine learning (ML) algorithms [12].
Following the tightening of electricity market rules, digital systems that allow intersystem active power flows to be increased are being actively implemented [13,14,15], which reduces the stability margin and increases the likelihood of an accident developing into EPS division.
Table 1 provides an analysis of the features of modern EPSs.
The described features of modern EPSs place new demands on traditional EC systems in terms of adaptability, accuracy, and speed. Traditional EC systems are based on one of two principles:
  • The choice of CA is carried out based on a predetermined logic, considering the worst-case scenario for accident development;
  • The choice of CA is carried out based on the actual operating mode of the EPS in a cyclic mode based on a mathematical model of the protected section of the electrical network.
To implement both EC principles, a pre-prepared list of examined accidents is used. This list is compiled manually based on operating experience and EPS accident statistics. In addition, the logic of operation of traditional EC devices is based on strictly deterministic approaches by implying a numerical analysis of a system of differential-algebraic equations that describe the mathematical model of the protected EPS. These features of traditional EC devices do not meet the requirements of modern EPSs in terms of speed and adaptability.
The development of computer technology, mathematical statistics, and ML methods, as well as the active implementation of synchronized vector measurement devices (phasor measurement units, PMUs), make it possible to develop an adaptive EC EPS system based on big data classification algorithms using PMU data obtained during the registration of transient processes. This approach ensures the required adaptability and speed of CA selection.
The purpose of this article is to develop and test the EC EPS methodology based on the IEEE14 [16], IEEE24 [17], and IEEE39 [18] EPS mathematical models to preserve TS and SSS, considering a classification algorithm for big data obtained from PMUs.
The scientific contribution of this paper is described as follows:
  • A meta-analysis of existing EC EPS methods based on ML algorithms is presented. The advantages and disadvantages of existing methods are identified.
  • A comprehensive methodology for using ML algorithms in EC EPSs is proposed.
  • A study of the accuracy of EC EPS value classification based on ML algorithms is carried out.
  • The feasibility of using data from synchronized vector measurement devices for use in data sampling for ML algorithms is proved.
  • The feasibility of using ML algorithms to develop a flexible approach to provide SSS and TS EPSs is proved.

2. Related Works

From the ML perspective, the task of selecting CAs for EC EPSs to provide SSS and TS can be considered a multi-class classification problem. Unlike the task of estimating TS and SSS, the EC task is more complex due to the transition from binary classification [19] to multi-class classification. Sets of optimal CAs aimed at ensuring the stability of the post-accident regime for the accident under consideration are treated as classes. The features for the EC EPS problem are the pre-emergency values of electrical mode parameters: voltages in EPS nodes, values of active and reactive power flows across the elements of the electrical network, load angles of synchronous generators (SGs), etc. In addition, most of the considered accidents in EPSs do not require EC, i.e., the post-emergency operating mode of the EPS is stable without the use of CAs. Thus, the EC EPS problem based on ML algorithms has the following features:
  • A significant number of features in the data sample;
  • Uneven distribution of classes in the sample (most emergency processes do not require CA implementation);
  • Significant degree of overlap between classes (one CA can be part of many classes).
These features of the EC problem based on ML algorithms lead to the need for the pre-processing of data samples through the following actions:
  • Reducing the dimension of the problem by reducing the number of features;
  • Balancing the distribution of classes in the original data set;
  • Removing outliers from the original data set.
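The three pre-processing steps above can be sketched in Python with NumPy on synthetic data; the z-score threshold, the undersampling strategy, the correlation cutoff, and the data itself are hypothetical choices for illustration, not the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sample: 200 rows, 5 features; class 0 heavily dominates (hypothetical).
X = rng.normal(size=(200, 5))
y = np.array([0] * 180 + [1] * 20)

# 1) Remove outliers: drop rows where any feature has |z-score| > 3.
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
mask = (z < 3).all(axis=1)
X, y = X[mask], y[mask]

# 2) Balance classes by undersampling the majority class.
idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
keep0 = rng.choice(idx0, size=len(idx1), replace=False)
idx = np.sort(np.concatenate([keep0, idx1]))
Xb, yb = X[idx], y[idx]

# 3) Reduce dimensionality: drop one of each highly correlated feature pair.
corr = np.corrcoef(Xb, rowvar=False)
drop = sorted({j for i in range(corr.shape[0])
               for j in range(i + 1, corr.shape[1]) if abs(corr[i, j]) > 0.95})
Xr = np.delete(Xb, drop, axis=1)
print(Xr.shape, np.bincount(yb))
```

After these steps the working sample is balanced between classes and contains only weakly correlated features.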
A similar algorithm is described in [20]. In addition to processing the initial data sample, for the EC EPS task based on ML algorithms, an important step is the collection and retraining of the model in response to changes in the protected energy region (construction or dismantling of electrical network elements, active power sources, changes in SG excitation controller algorithms, etc.).
To solve the EC EPS problem based on ML algorithms, the following algorithms are used:
  • Decision trees (DTs) [21,22,23];
  • Random forest (RF) [24];
  • Support vector machines (SVMs) [25,26,27];
  • Artificial neural networks (ANNs) [28,29,30,31];
  • Deep learning (DL) [32,33,34,35];
  • Extreme gradient boosting (XGBoost) [36,37].
DT is a classification and regression algorithm that combines logical conditions into a tree structure [38]. The DT algorithm describes data in the form of a tree-like hierarchical structure with a specific classification rule in each node, which is determined as a result of training. A DT can be constructed based on the analysis of the Gini coefficient [39]:
$$\Delta G = \mathrm{Gini}(m) - \left[ \frac{n_1}{n_m}\,\mathrm{Gini}(m_1) + \frac{n_2}{n_m}\,\mathrm{Gini}(m_2) \right],$$
where Gini(m) is the value of the Gini coefficient, which shows the degree of heterogeneity of the data set m; ΔG is the increment of the Gini coefficient after splitting the data at a DT node; n_m is the amount of data in the set m; n_1 is the amount of data in the set m_1; n_2 is the amount of data in the set m_2; and m_1 and m_2 are the data after splitting the set m at the DT node.
As a result of DT learning, a set of rules is determined at each node in such a way as to minimize the value of the Gini coefficient. During the training process, logical rules are determined for only one sample feature in one iteration, so training data with multiple features is a time-consuming task.
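The Gini impurity and the split gain ΔG defined above can be sketched directly (the toy label set is hypothetical):

```python
import numpy as np

def gini(labels):
    # Gini impurity of a label set: 1 - sum(p_k^2) over class frequencies p_k.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_gain(m, m1, m2):
    # Delta G = Gini(m) - [n1/nm * Gini(m1) + n2/nm * Gini(m2)]
    n_m, n1, n2 = len(m), len(m1), len(m2)
    return gini(m) - (n1 / n_m * gini(m1) + n2 / n_m * gini(m2))

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
# A perfect split sends all zeros left and all ones right.
gain = gini_gain(labels, labels[:4], labels[4:])
print(gain)  # 0.5 for a perfect split of a balanced binary set
```

A DT training procedure would evaluate this gain for every candidate split and keep the one maximizing ΔG (equivalently, minimizing the weighted child impurity).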
In the study [21], the DT algorithm was used to determine the CA to ensure EPS stability. To reduce the time delay for algorithm training, the authors used Fisher’s linear discriminant. When forming DT, the following objective function was used:
$$\sum_{i_G = 1}^{n_G} k_G P_{i_G} + \sum_{j_L = 1}^{n_L} k_L P_{j_L} \to \min,$$
where n_G is the number of SGs whose disconnection can implement a CA, n_L is the number of loads whose disconnection can implement a CA, P_{i_G} is the power of SG i, and P_{j_L} is the power of load j.
The IEEE39 EPS model [22] was used to train and test the developed algorithm. The accuracy of the DT algorithm on the training set was 99.43% and on the test set—99.66%.
In [23], an algorithm for ensuring acceptable voltage levels is proposed based on the DT algorithm and data obtained from PMU devices. Testing was performed using a mathematical model from Hong Kong.
One way to overcome the disadvantages of the DT algorithm, namely the significant time costs for training and the tendency to overfit, is to combine several shallow DT models into an ensemble. This method of combining DT algorithms is called RF [40].
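This ensemble idea can be illustrated with scikit-learn's RandomForestClassifier, deliberately restricting tree depth so that each member is shallow; the synthetic two-class data and all parameter values below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic two-class sample standing in for electrical mode parameters.
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An ensemble of deliberately shallow trees (max_depth=3) voting together.
rf = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)
rf.fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
print(acc)
```

Each tree sees a bootstrap sample and a random feature subset, which is what reduces the overfitting of a single deep DT.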
In the study [24], the RF algorithm was used to select CAs. The solution to the EC problem using EPS modes was divided into two stages:
  • Stage 1: determination of the TS reserve for a given emergency process;
  • Stage 2: CA selection using RF algorithms followed by post-emergency prediction using recurrent neural networks.
The developed algorithm was tested based on the EPS IEEE16 model.
SVM is another ML algorithm for solving the classification problem. This method is based on searching for separating hyperplanes that maximize the distance between classes in the data sample. When using SVMs, the assumption is made that classification reliability increases with the distance between the separating hyperplanes. To construct the hyperplanes, the following objective function is used [41]:
$$\frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i \to \min,$$
where ||w|| is the norm of a vector perpendicular to the separating hyperplane, ξi are variables describing the classification error, C is a coefficient that provides a compromise between the training error and the separation boundary, and N is the number of elements in the data sample.
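A minimal sketch of this soft-margin formulation using scikit-learn's SVC on synthetic data; the parameter C below is the same trade-off coefficient as in the objective function, and for a linear kernel the margin width equals 2/||w|| (the data and parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Two well-separated synthetic classes in a 2-D feature space (hypothetical).
X = np.vstack([rng.normal(-2, 1, (80, 2)), rng.normal(2, 1, (80, 2))])
y = np.array([0] * 80 + [1] * 80)

# C trades off the training error (slack variables xi_i) against margin width.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# For a linear SVM, the separating margin width is 2 / ||w||.
margin = 2.0 / np.linalg.norm(clf.coef_)
print(clf.score(X, y), margin)
```

Increasing C penalizes slack more heavily, narrowing the margin to fit the training data more tightly; decreasing it widens the margin at the cost of more training errors.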
The authors of [25] used SVMs to analyze the static stability of EPSs based on data obtained from PMUs. The choice of SVMs is due to their ability to learn on small data samples, the absence of an obvious overfitting problem, and the small number of adjustable parameters compared to ANNs. In this study, voltage values at EPS nodes are used to analyze the control system. To test the proposed algorithm, the IEEE39 EPS mathematical model was used. The results of 492 simulations were used to form the data sample. The accuracy of the SVMs was 97% with a delay of 0.042 s for one emergency process.
In [26], an SVM-based algorithm for analyzing the static stability of EPSs was developed. The paper compares the classification accuracy of SVMs and ANNs on data obtained during the calculation of a series of transient processes for a mathematical model of the Brazilian power system, consisting of 2684 nodes. Compared to ANNs, SVMs increased the accuracy by 2.5 times and reduced the time of analyzing the static stability of EPSs by 2.5 times.
The authors of the study [27] proposed a method for ensuring acceptable voltage levels based on the SVM algorithm and data obtained from PMU devices. The proposed algorithm consists of two stages: training the SVM algorithm on historical data and running the trained algorithm in real time. The proposed algorithm was tested based on the EPS IEEE39 mathematical model.
The authors of [28] proposed an algorithm for analyzing the SSS of EPSs based on deep learning ANNs, taking into account RESs. The initial data for the algorithm are retrospective data on electricity generation from solar and wind power plants, parameters of the electrical mode of the power system, and a set of CAs for the emergency processes under consideration. The next stage is a statistical analysis of the resulting sample to reduce its dimensionality by removing features with a low correlation to the classification target. Next, the sample is divided into test and training sets. The final stage is training the model, followed by testing. The proposed algorithm was tested on the IEEE68 EPS mathematical model. The accuracy of the model was 99.89%.
The study [29] presents an EC EPS algorithm based on ANNs and the Lyapunov function, taking into account the minimization of the cost of CA implementation.
The authors of [30] used the ANN algorithm to ensure acceptable voltage levels in EPS nodes. To increase the accuracy of the ANNs, the Proximal Policy Optimization algorithm was used [31]. Testing was performed based on the EPS IEEE39 mathematical model.
In [32,33], the DL algorithm was used to develop the EC algorithm for EPS modes. This algorithm is based on the interaction of an agent with the external environment, which is modeled as a Markov decision process. At each time step of the simulation, the agent observes the state of the external environment and receives a reward signal depending on the actions it issues. The goal of the algorithm is to apply optimal actions to the external environment to obtain the maximum total reward.
In [32], to describe the EPS as an external environment for the DL algorithm, the following system of differential-algebraic equations with restrictions is used:
$$\begin{cases} \dot{x}_t = f(x_t, y_t, d_t, a_t) \\ 0 = g(x_t, y_t, d_t, a_t) \\ x_t^{\min} \le x_t \le x_t^{\max} \\ y_t^{\min} \le y_t \le y_t^{\max} \\ a_t^{\min} \le a_t \le a_t^{\max} \end{cases}$$
where xt is the list of variables that determine the dynamic state of the EPS, yt is the list of variables that determine the steady state of the EPS, at is the list of CAs participating in the EC operating mode of the EPS, dt is the magnitude of the disturbance, and values with superscripts max and min denote the maximum and minimum limitation of each variable.
To train and test an algorithm that implements EC modes of EPSs based on DL, the study [32] proposed a platform containing two modules:
  • Module 1 implements the DL algorithm (OpenAI Gym, Python3);
  • Module 2 is used for EPS modeling and EC implementation (InterPSS, Java).
Communication between modules is ensured using the Py4J wrapper function. To configure the work of Module 1, a pre-prepared training sample is used. Next, in Module 2, the EPS transient process is simulated with the transfer of selected operating parameters to Module 1, in which the CA volume is calculated. To test the proposed platform, the standard EPS IEEE39 model was used; load shedding, generation shutdown, and the dynamic braking of generators due to braking resistors were used as CAs to ensure TS. As a result of a series of mathematical experiments, a high efficiency of the proposed platform was shown. The average CA-selection time for one emergency process was 0.18 s.
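The agent-environment interaction described above can be illustrated with a toy Markov decision loop; the one-dimensional "frequency deviation" state, the load-block action set, and the reward below are hypothetical simplifications, not the InterPSS/OpenAI Gym platform from [32]:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy environment state: a scalar frequency deviation (hypothetical).
# Actions shed (positive) or restore (negative) load blocks, each with a
# 0.4 p.u. effect; small Gaussian noise models unmodeled dynamics.
def step(state, action):
    next_state = state - 0.4 * action + rng.normal(0, 0.05)
    reward = -abs(next_state)          # penalize any remaining deviation
    return next_state, reward

state, total_reward = 1.0, 0.0
for t in range(20):
    # Greedy agent: choose the action that minimizes the expected deviation.
    action = min((-2, -1, 0, 1, 2), key=lambda a: abs(state - 0.4 * a))
    state, reward = step(state, action)
    total_reward += reward
print(state, total_reward)
```

A DL agent replaces the greedy rule with a learned policy, but the observe-act-reward cycle is the same.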
To ensure the required frequency level in isolated EPSs, the authors of [34] proposed an algorithm based on the use of DL. The proposed algorithm was tested on the EPS IEEE37 mathematical model. For the considered test scenarios, the results of calculating the accuracy of CA determination and the time delay required to analyze one emergency process are not provided. The authors argue that a promising direction for the development of the proposed algorithm is to reduce its computational complexity. Work [35] discusses the use of DL to ensure the required frequency level in isolated EPSs with RESs. The work also does not provide the results of determining the accuracy and time delays of the algorithm.
The development of the RF algorithm led to the XGBoost algorithm [36], which uses gradient descent for training, making it possible to overcome the disadvantages of the RF algorithm.
The authors of the study [37] used the XGBoost algorithm to provide TS EPSs. Testing was performed on the IEEE39 EPS mathematical model, and real data were obtained from the South Carolina EPS. For the IEEE39 model, 478 stable modes and 362 unstable modes were considered, and the EC accuracy was 97.88%. For the South Carolina model, 483 stable modes and 405 unstable modes were considered, and the CA selection accuracy was 98.62%.
Table 2 provides an analysis of the reviewed ML algorithms used for EC EPSs.
The considered ML algorithms used to select a CA for preserving the TS of EPSs are characterized by several disadvantages associated with the suboptimal choice of CAs, significant time costs for training the model, high RAM requirements, a tendency to overfit, and the complexity of determining hyperparameters. The XGBoost algorithm has the shortest CA-selection time for preserving TS compared to the DL and DT algorithms. For the RF algorithm, the study does not provide an estimate of time costs. To analyze and preserve the SSS of EPSs, two machine learning algorithms were considered: SVMs and ANNs. The highest performance of CA selection was shown by the SVM algorithm. The main disadvantage of this algorithm is its sensitivity to outliers and noise in the source data, which can lead to a significant displacement of the separating hyperplane from the optimal position. This problem can be effectively solved through the use of statistical filters at the data pre-processing stage.
One of the significant disadvantages of the considered ML algorithms is their non-universality: each selects a CA for preserving either TS or SSS, but not both. In addition, for the considered algorithms to work, a pre-generated list of accidents for which CA selection is required must be prepared. This approach significantly reduces the adaptability of EC EPSs.
The purpose of this study is to develop and test, on mathematical data, a universal CA-selection algorithm for preserving the TS and SSS of EPSs based on big data analysis through the use of DL algorithms. The following algorithms are under consideration: recurrent neural networks (RNNs), long short-term memory networks (LSTMs), restricted Boltzmann machines (RBMs), and self-organizing maps (SOMs). The selection of these algorithms was based on the analysis of the study [42].
The scientific novelty of this research lies in solving the problem of adaptive EC EPSs based on deep learning methods for preserving TS and SSS. The adaptability of the proposed method is ensured by the absence of the need to specify a list of accidents under consideration for which the CA is selected. To increase the accuracy of CA selection in the original data set, measurements obtained from the PMU are used.

3. Description of the EPS EC Technique Based on the ML Algorithm

For the EC EPS algorithm, the block diagram shown in Figure 1 is used.
The first step in the flowchart shown in Figure 1 is data generation. To form a data sample, two sources can be used: the results of mathematical modeling and real data obtained during the registration of transient processes in real EPSs. The next stage is the selection of features to increase the speed of training procedures for the DL model. After training, the DL model is tested, and the classification quality criteria are analyzed. If the classification quality is unacceptable, the DL model is retrained with the correction of hyperparameters or a set of features. Once an acceptable classification quality is achieved, the trained DL model is integrated into the EC EPS process, which runs in real time. When there is a change in the EPS structure [43], the DL model is retrained. The following is a detailed description of each of the stages of the flowchart shown in Figure 1.

3.1. Data Generation

The results of mathematical modeling of transient processes can be used as data for sampling, indicating the selected CA for preserving TS or SSS. This modeling is most often performed using software packages for calculating transient processes in EPSs [44]. The main advantage of this method of obtaining data is the ability to simulate a wide range of transient processes with different electrical network configurations and different types of accidents. Its main disadvantage is the possible error in modeling transient processes in EPSs; such errors are associated with the use of irrelevant parameters of EPS mathematical models, regulators, and control devices. The second source of data is records of real transient processes occurring in the EPS. The registration of accidents can be performed using PMUs [45], which makes it possible to obtain a description of transient processes with high accuracy. The main disadvantage of this source of information is the difficulty of obtaining data, because a loss-of-stability accident is a rare event [46]. Therefore, a data sample can be generated by combining real and simulated data. If there are changes in the EPS structure, the data sample must be replaced, because adding or removing elements from the EPS block diagram almost completely changes the nature of the transient processes. The data sample should be updated when there are changes in the EPS structure (implementation of new SGs and transmission lines) or when there are new records of accidents occurring in the EPS.
In this study, numerical modeling performed in the MATLAB/Simulink environment was used to generate a data sample. The standard mathematical models EPS IEEE14 [16], IEEE24 [17], and IEEE39 [18] with modified load values and SG powers were used. A detailed description of the mathematical models used is given in Appendix A, Appendix B, and Appendix C. To select CAs, an algorithm adapted for the analysis of TS and SSS from the study [47] was used. To simulate a series of transient processes, a proportional change in loads and generations was performed in the EPS models used.
After generating the initial data sample, it must be processed to remove outliers, noise, and features with a high correlation.

3.2. Feature Selection

Feature selection is an important step in preparing a data sample before training an ML algorithm. Feature selection is performed to reduce the dimensionality of the CA selection problem being solved and to reduce the model training time. The selection of features is carried out according to the stages described in Figure 2.
Thus, a three-step procedure combining the Spearman correlation coefficient analysis and the use of the RF algorithm [48] is used for feature selection. The Spearman correlation coefficient is calculated using the following equation:
$$\rho = 1 - \frac{6 \sum d^2}{n(n^2 - 1)},$$
where ρ is the Spearman correlation value, d is the difference in ranks for pair (X, Y) of two series of numbers, and n is the length of rows X and Y.
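The Spearman coefficient can be computed directly from this rank-based formula (assuming no tied ranks; the sample vectors are hypothetical):

```python
import numpy as np

def spearman(x, y):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference.
    rx = np.argsort(np.argsort(x))     # 0-based ranks (no ties assumed)
    ry = np.argsort(np.argsort(y))
    d = rx - ry
    n = len(x)
    return 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(spearman(x, x ** 3))   # monotone relation: rho = 1.0
print(spearman(x, -x))       # reversed ranks: rho = -1.0
```

Because it depends only on ranks, the coefficient detects any monotone dependence between a feature and the target, not just a linear one, which is why it suits feature screening here.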

3.3. Description of the DL Algorithms Used

This study considers four DL algorithms: RNNs, LSTMs, RBMs, and SOMs. Below are the basic operating principles of each algorithm. To train and use DL algorithms, the Python3 programming language was used in conjunction with the TensorFlow, Keras, scikit-learn, and sklearn_som libraries.

3.3.1. RNNs

The RNN algorithm is based on ANNs, in which the connections between layers are a directed sequence [49]. The calculations required at each step in an RNN are defined by the following equations:
$$\begin{cases} h(t) = \sigma\big(W_{hx}\, x(t) + W_{hh}\, h(t-1) + b_h\big) \\ y(t) = \mathrm{softmax}\big(W_{yh}\, h(t) + b_y\big) \end{cases}$$
where h(t) is the value in the hidden node at time t, h(t−1) is the value in the hidden node at time t − 1, x(t) is the input value at time t, y(t) is the output value at time t, W_hx is the weight matrix between the input and hidden layers, W_hh is the weight matrix between hidden layers, W_yh is the weight matrix between the hidden and output layers, b_h is the hidden layer bias vector, b_y is the output layer bias vector, softmax() is the multivariate logistic function, and σ is the activation function.
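A single forward step of these equations can be sketched in NumPy; the layer sizes, the choice of tanh as the activation σ, and the random weights are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable multivariate logistic function.
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev, Whx, Whh, Wyh, bh, by):
    # h(t) = tanh(Whx x(t) + Whh h(t-1) + bh); y(t) = softmax(Wyh h(t) + by)
    h_t = np.tanh(Whx @ x_t + Whh @ h_prev + bh)
    y_t = softmax(Wyh @ h_t + by)
    return h_t, y_t

rng = np.random.default_rng(4)
n_in, n_hid, n_out = 3, 5, 2              # hypothetical layer sizes
Whx = rng.normal(size=(n_hid, n_in))
Whh = rng.normal(size=(n_hid, n_hid))
Wyh = rng.normal(size=(n_out, n_hid))
bh, by = np.zeros(n_hid), np.zeros(n_out)

h = np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):      # a length-4 input sequence
    h, y = rnn_step(x, h, Whx, Whh, Wyh, bh, by)
print(y)
```

The hidden state h carries information across time steps, which is what lets the network classify a whole transient-process trajectory rather than a single snapshot.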

3.3.2. LSTMs

The architecture of LSTMs is similar to that of RNNs. The difference is that an LSTM replaces hidden layers with a memory cell. Each memory cell contains a node with a self-connected recurrent edge of a certain weight [50]. The calculations in LSTMs are defined by the following equations:
$$\begin{cases} f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\ i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\ o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\ \tilde{C}_t = \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{C}_t \\ h_t = o_t \odot \sigma_h(c_t) \end{cases}$$
where x_t is the input vector; h_t is the output vector; c_t is the cell state vector; C̃_t is the candidate state vector; W, U, and b are the matrices and vectors of model parameters; f_t, i_t, and o_t are the forget, input, and output gate activation vectors; σ_g is the sigmoid activation function; σ_c and σ_h are activation functions based on the hyperbolic tangent; and ⊙ is the Hadamard product operator.
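One step of the LSTM cell equations can be sketched in NumPy; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # Gates: f (forget), i (input), o (output); C_tilde is the candidate state.
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    C_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    c_t = f * c_prev + i * C_tilde        # elementwise (Hadamard) products
    h_t = o * np.tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(5)
n_in, n_hid = 3, 4                        # hypothetical layer sizes
W = {k: rng.normal(size=(n_hid, n_in)) for k in "fioc"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}

h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h)
```

The gated cell state c_t is what lets an LSTM retain information over long sequences, mitigating the vanishing-gradient problem of plain RNNs.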

3.3.3. RBMs

An RBM is a type of generative stochastic ANN in which neurons are divided into visible and hidden layers and connections are only allowed between neurons of different types, thus limiting connections [51]. For RBMs, there is a concept of network energy through which all calculations are performed:
$$E(u, h) = -a^{T} u - b^{T} h - u^{T} W h,$$
where E is the network energy value, a is the bias vector of the visible layer, b is the bias vector of the hidden layer, W is the weight matrix, u is the vector of binary elements of the visible layer, h is the vector of binary elements of the hidden layer, and T denotes the transposition operation.
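The energy function can be computed directly for a given configuration of binary units; the layer sizes and random parameters are illustrative assumptions:

```python
import numpy as np

def rbm_energy(u, h, a, b, W):
    # E(u, h) = -a^T u - b^T h - u^T W h
    return -a @ u - b @ h - u @ W @ h

rng = np.random.default_rng(6)
n_vis, n_hid = 4, 3                    # hypothetical layer sizes
a = rng.normal(size=n_vis)             # visible-layer biases
b = rng.normal(size=n_hid)             # hidden-layer biases
W = rng.normal(size=(n_vis, n_hid))    # visible-hidden weights only (no
                                       # intra-layer connections: "restricted")

u = rng.integers(0, 2, n_vis)          # binary visible units
h = rng.integers(0, 2, n_hid)          # binary hidden units
print(rbm_energy(u, h, a, b, W))
```

Training an RBM (e.g., by contrastive divergence) adjusts a, b, and W so that low-energy configurations correspond to patterns frequent in the data.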

3.3.4. SOMs

An SOM is a type of ANN. The main difference between this technology and neural networks trained using the backpropagation algorithm is that SOMs use unsupervised learning; that is, the learning result depends only on the structure of the input data [52]. The SOM operation algorithm can be presented as follows:
  • Initialization;
  • Selecting a vector from a data set;
  • Finding the best matching unit (BMU) for the selected vector;
  • Determining the neighborhood of the BMU;
  • Definition of error.
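The BMU search and the unsupervised neighborhood update at the core of these steps can be sketched in NumPy; the map size, learning rate, and Gaussian neighborhood below are illustrative assumptions (the study itself uses the sklearn_som library):

```python
import numpy as np

rng = np.random.default_rng(7)
# A 5x5 map of weight vectors in a 3-dimensional feature space (hypothetical).
weights = rng.normal(size=(5, 5, 3))

def best_matching_unit(weights, x):
    # BMU = the grid node whose weight vector is closest to the input vector.
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def update(weights, x, bmu, lr=0.5, radius=1.0):
    # Pull the BMU and its grid neighbours toward x (unsupervised update).
    rows, cols = np.indices(weights.shape[:2])
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    return weights + lr * influence[..., None] * (x - weights)

x = rng.normal(size=3)
bmu = best_matching_unit(weights, x)
d_before = np.linalg.norm(weights[bmu] - x)
weights = update(weights, x, bmu)
d_after = np.linalg.norm(weights[bmu] - x)
print(bmu, d_before, d_after)
```

Repeating these two steps over the whole data set, while shrinking the learning rate and radius, arranges the map so that nearby grid nodes respond to similar inputs.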

3.4. Algorithm Quality Assessment

To assess the classification quality of the trained algorithm, the following parameters were used: accuracy (ACC), missed detection rate (MDR), false alarm rate (FAR), and area under the receiver operating characteristic curve (AUC). The parameters ACC, MDR, and FAR are expressed through the values true positive (TP), true negative (TN), false positive (FP), and false negative (FN) [53].
ACC is a coefficient that determines the ratio of correctly classified data to the total number of classifications:
$$ACC = \frac{TP + TN}{TP + TN + FP + FN} \cdot 100\%.$$
MDR is a coefficient that determines the ratio of missed detections (false negatives) to the total number of classifications:
$$MDR = \frac{FN}{TP + TN + FP + FN} \cdot 100\%.$$
FAR is a coefficient that determines the ratio of false alarms (false positives) to the total number of classifications:
$$FAR = \frac{FP}{TP + TN + FP + FN} \cdot 100\%.$$
AUC is the area under the receiver operating characteristic curve. The AUC metric is defined for binary classification; in multi-class problems, it can only be computed pairwise for two classes at a time [54].
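The three coefficient formulas can be sketched as a small helper; the confusion counts are hypothetical, with MDR taken as the missed-detection share (FN) and FAR as the false-alarm share (FP), both normalized by the total number of classifications:

```python
def classification_metrics(tp, tn, fp, fn):
    # ACC, MDR, and FAR, each normalized by the total number of
    # classifications; MDR counts missed detections, FAR counts false alarms.
    total = tp + tn + fp + fn
    acc = (tp + tn) / total * 100.0
    mdr = fn / total * 100.0
    far = fp / total * 100.0
    return acc, mdr, far

# Hypothetical confusion counts for a binary classifier.
acc, mdr, far = classification_metrics(tp=90, tn=85, fp=15, fn=10)
print(acc, mdr, far)  # percentages; the three values sum to 100 here
```

Because every misclassification is either a missed detection or a false alarm, ACC + MDR + FAR = 100% under these definitions.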

3.5. Real-Time Application

In this study, the real-time EC problem is considered only theoretically and is a direction for future research. A block diagram describing the use of the proposed EC algorithm based on ML and data received from the PMU is shown in Figure 3. When an accident occurs (in Figure 3, the accident location is shown with a red arrow), PMU devices register emergency currents and voltages with transmission (in Figure 3, the process of transmitting information from the PMU to the data center is shown by blue arrows) to the data center, which has EC command transmission channels (in Figure 3, the process of transmitting a command to implement CAs is shown by green arrows) to turn off SGs and loads in a protected energy region. Based on the obtained values of current and voltage synchrophasors, the trained DL algorithm selects CAs in the form of a set of switchable SGs and loads. Next, the set of CAs is transmitted via communication channels to control objects.
To implement this EC principle, it is necessary to estimate the acceptable time delay between the transfer of current and voltage synchrophasors to the data center and the implementation of CAs at power system facilities. This problem can be solved using real-time EPS transient simulations [55,56,57,58,59,60].

4. Case Study

For numerical experiments, MATLAB/Simulink was used to simulate transient processes, and Python3 was used to train and analyze DL algorithms.

4.1. Data Preparation

To generate data samples for the IEEE14, IEEE24, and IEEE39 models, cyclic calculations of transient processes with CA selection were used. During the cyclic calculations, the following model parameters were varied: the loads in the nodes and the active and reactive powers of the SGs; the topology was also changed to account for single outages (repairs) of network elements. Repair configurations that would create isolated islands were excluded [61]. To generate the data and select CAs, the following perturbations were considered [47]:
  • Three-phase short circuits of 0.1 and 0.2 s durations;
  • Two-phase short circuits of 0.1, 0.2, and 0.3 s durations;
  • Single-phase short circuits of 0.1, 0.2, 0.3, and 0.4 s durations;
  • Disconnection of the transmission line;
  • SG switching off.
Short circuits were considered for selecting CAs to preserve TS. Transmission lines and SG disconnections were considered for selecting CAs to preserve SSS. The transient calculations in the considered EPS mathematical models were performed in MATLAB/Simulink. CA selection was performed by transferring the initial data to a dynamic library implementing CA-selection algorithms.
The following features were considered in the data sample: voltages in nodes, voltage phases in nodes, current loads of electrical network elements, active power flows, reactive power flows, active power SGs, reactive power SGs, and load angle SGs. Table 3 shows the characteristics of the obtained initial data samples: the number of features, the total volume of the data sample, the number of scenarios with CAs, the number of scenarios without CAs, and the degree of class imbalance. A scenario is understood as the result of calculating the transient process for the considered accident. Figure 4 provides a graphical representation of class imbalance in the original data samples. The abscissa axis of Figure 4 shows the names of the considered EPS mathematical models.
The degree of class imbalance (ID) [62] is calculated using the following equation:
ID = NCA / NS × 100%,
where ID is the degree of class imbalance, NCA is the number of scenarios with CAs, and NS is the total number of scenarios considered.
The volume of the initial sample for each model is determined by the following equation [19]:
NDS = NSG · NR · NL · NV,
where NDS is the initial data sample volume, NR is the number of repairs being considered, NL is the number of nodes with variable loads, NSG is the number of SGs with variable active powers, and NV is the coefficient characterizing the number of changes in loads and generations during modeling.
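Both bookkeeping equations are simple products and ratios; the sketch below (helper names and the example numbers are ours, not the paper's actual counts) computes them:

```python
def imbalance_degree(n_ca, n_s):
    """ID = N_CA / N_S * 100%: share of scenarios that require CAs."""
    return n_ca / n_s * 100.0

def sample_volume(n_sg, n_r, n_l, n_v):
    """N_DS = N_SG * N_R * N_L * N_V: number of generated scenarios."""
    return n_sg * n_r * n_l * n_v

# Illustrative numbers only: 5 SGs, 20 repair schemes, 9 load nodes,
# 4 load/generation variations per combination.
n_ds = sample_volume(n_sg=5, n_r=20, n_l=9, n_v=4)   # 3600 scenarios
id_pct = imbalance_degree(n_ca=900, n_s=n_ds)        # 25.0%
```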
In addition to the imbalance between the classes with and without CAs, the data sample contains an imbalance among the classes describing the CA compositions that must be applied to maintain EPS stability. Therefore, the initial data sample must be balanced both between the classes with and without CAs and among the classes with different sets of CAs. For this purpose, a data sample-thinning procedure is used.
Table 4 shows the results of class balancing in the original data set. Figure 5 shows a graphical representation of the class imbalance in the processed data sample.
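The thinning procedure itself is not detailed in the text; one common realization is random undersampling of every class down to the size of the smallest class. A sketch under that assumption (helper names are ours):

```python
import random

def thin_to_balance(scenarios, label_of, seed=0):
    """Randomly undersample each class to the smallest class size.

    `scenarios` is any list of simulation results; `label_of` maps a
    scenario to its class (e.g., a tuple of CAs, or "no CA").
    """
    by_class = {}
    for s in scenarios:
        by_class.setdefault(label_of(s), []).append(s)
    n_min = min(len(group) for group in by_class.values())
    rng = random.Random(seed)  # fixed seed keeps the thinning reproducible
    balanced = []
    for group in by_class.values():
        balanced.extend(rng.sample(group, n_min))
    return balanced
```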
After balancing the data sample, feature selection is performed using the algorithm described in Figure 2. The first stage of this algorithm removes features with a low correlation with the response. Table 5 shows the results of removing features whose Spearman correlation with the response is below 0.3.
The low correlation of the current loads of the electrical network elements with the CA class, for both TS and SSS preservation, is explained by the nature of EPS instability [63,64,65]. The preservation of TS and SSS depends largely on active power flows and voltage levels; there is practically no direct connection with the current loads of the network elements, only an indirect one through the loading currents and the associated voltage losses. Low correlation with the CA class is also observed for the voltages of nodes that are electrically distant from the SGs. Including highly cross-correlated features in the training set can lead to poor conditioning of the task and to instability and unreliability in determining the optimal hyperparameters of the DL model [66]. Table 6 shows the results of removing features with Spearman cross-correlation values higher than 0.9.
In the data sample, a Spearman correlation value higher than 0.9 was obtained for the voltages of some nodes; such cases correspond to a small electrical distance between EPS nodes. As a result of the cross-correlation analysis, two features were removed from the IEEE14 sample, one from the IEEE24 sample, and three from the IEEE39 sample. The next stage of processing is the analysis of feature importance for the RF algorithm, which provides an importance value for each feature used in classification. The RF algorithm was trained by dividing the data sample into training and testing parts. Table 7 shows the hyperparameter values obtained during the training of the RF algorithm; to obtain optimal hyperparameters, the standard random search procedure was used [67].
Table 7 uses the following notation: n_estimators is the number of decision trees in the ensemble, max_features is the maximum number of features considered when splitting a node, max_depth is the maximum depth of a decision tree, min_samples_split is the minimum number of samples in a node before splitting, and min_samples_leaf is the minimum number of samples in a leaf node. Figure 6 shows an example of one of the decision trees for the IEEE14 model. In Figure 6, the following notations are used: U1, U2, and U3 are the voltages in nodes 1, 2, and 3; PSG1 is the active power of the first SG; QSG2 is the reactive power of the second SG; and leaf is the value in the leaf node. Table 8 shows the results of removing features with an importance of less than 0.2; for the RF algorithm, the minimum significance corresponds to the reactive power flows indicated in Table 8. Figure 7 shows how the number of features changes during the processing of the data sample. The abscissa axis indicates the processing stage, as described in Figure 2; step 0 shows the initial number of features in the data sample.
During the processing of the initial data sample, the balance of the sample by class was ensured and features were selected using the methods of statistical analysis and analysis of the importance of features for the RF algorithm.
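The three-stage selection (|Spearman correlation with the response| at least 0.3, pairwise Spearman cross-correlation at most 0.9, RF importance at least 0.2) can be sketched as follows. The thresholds come from the text; the function shape, data layout, and the choice of which member of a correlated pair to drop are illustrative assumptions:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, names, corr_min=0.3, cross_max=0.9, imp_min=0.2):
    """Three-stage feature filter, as in Figure 2 of the paper (sketch)."""
    X = np.asarray(X, dtype=float)
    # Stage 1: drop features weakly correlated with the response.
    keep = [j for j in range(X.shape[1])
            if abs(spearmanr(X[:, j], y)[0]) >= corr_min]
    # Stage 2: of each highly cross-correlated pair, keep only the first.
    pruned = []
    for j in keep:
        if all(abs(spearmanr(X[:, j], X[:, k])[0]) <= cross_max
               for k in pruned):
            pruned.append(j)
    # Stage 3: keep features whose RF importance reaches the threshold.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[:, pruned], y)
    return [names[j] for j, imp in zip(pruned, rf.feature_importances_)
            if imp >= imp_min]
```

The paper does not specify which member of a correlated pair is removed; keeping the earlier feature is one reasonable convention.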

4.2. Training and Testing Algorithms without Taking into Account PMU Data

After preparing the data sample, the RNN, LSTM, RBM, and SOM algorithms were trained and tested. The algorithms were trained using the standard exhaustive grid search procedure with the TensorFlow, Keras, scikit-learn, and sklearn_som libraries. Table 9, Table 10, Table 11 and Table 12 show the main defined hyperparameters of the algorithms. In Table 9, Table 10, Table 11 and Table 12, the following notations are used: RNN_Size is the size of the hidden layer of the RNN, RNN_Layers is the number of layers in the RNN, Sequence_Length is the length of the input sequence for the RNN, Batch_Size is the size of the data batch, Epoch_Number is the number of epochs (one forward and backward pass over the training set), Act_Func is the activation function, LSTM_Layer is the number of layers in the LSTM, Weight_Initializer is the method of setting the initial weights, glorot_uniform is the Glorot uniform initializer [68], Dropout_Ratio is the regularization coefficient of the DL model, tanh is the hyperbolic tangent, Relu is the rectified linear unit, Kernel_size is the kernel size, Hidden_Number is the number of nodes in the hidden-feature layer, PrimaryCaps_Number is the number of nodes in the PrimaryCaps layer, N_function is the neighborhood distance function, Gaussian_width is the width of the Gaussian function used, Learning_Rate is the learning rate, and BMU_Selection is the method for determining the BMU.
To train the model, an ACC threshold value of 95% was chosen [69,70]. When this value is reached, the training process stops to prevent overfitting of the DL model. The ACC threshold can be selected depending on the requirements of the problem being solved and can be changed during the operation of the trained DL algorithm.
Figure 8 shows the changes in ACC depending on the training iteration. Graphs of changes in ACC for the various considered models (IEEE14, IEEE24, and IEEE39) and for the various algorithms (RNNs, LSTMs, RBMs, SOMs) are presented. The red dotted line shows the ACC threshold value of 95%.
For training and testing, the sample was divided into two parts: 80% of the original sample was used for training and 20% for testing [71].
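The 80/20 split and the 95% ACC stopping rule can be expressed independently of any particular DL framework; in this sketch, `train_step` and `eval_acc` are hypothetical callbacks that any of the considered libraries could supply:

```python
import random

def split_80_20(samples, seed=0):
    """Shuffle and split the sample: 80% for training, 20% for testing."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

def train_until_threshold(train_step, eval_acc, max_epochs=200,
                          acc_threshold=95.0):
    """Train epoch by epoch; stop once test ACC reaches the threshold
    (95% in the paper) to limit overfitting."""
    history = []
    for epoch in range(1, max_epochs + 1):
        train_step(epoch)        # one pass over the training data
        acc = eval_acc()         # ACC on the test part, in percent
        history.append(acc)
        if acc >= acc_threshold:
            break
    return history
```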
Table 13 shows the results of the trained DL algorithms on the test data.
The test results show that the accuracy of CA selection decreases as the dimension of the EPS model increases, which can be explained by the growing uncertainty caused by the difficulty of covering the entire list of accidents. The best classification results correspond to the LSTM algorithm and the worst to the SOM algorithm. In this section, the DL algorithms were trained and tested without PMU data [72], i.e., the data sample contained no voltage phases.

4.3. Testing Algorithms by Taking into Account PMU Data

Table 14, Table 15, Table 16 and Table 17 show the main defined hyperparameters of the algorithms, taking into account the data received from the PMU.
To train the model, an ACC threshold value of 95% was chosen [69,70]. When this value is reached, the training process stops to prevent overfitting of the DL model. Figure 9 shows the changes in ACC depending on the training iteration. Table 18 shows the results of the trained DL algorithms on the test data.
Taking into account the voltage phases of the considered EPS nodes increases the accuracy of the CA classification. This is explained by the strong dependence of SSS and TS on active power flows, which are determined by the distribution of voltage phases across the EPS nodes. The best classification results again correspond to the LSTM algorithm.

4.4. Testing the Ability to Determine a Set of CA for an Accident That Is Not in the Data Sample

To test the ability of the DL algorithms to independently generate an optimal set of CAs for a given set of features, a transient process was calculated for an accident not included in the initial data sample: a three-phase short circuit in node 9 of the IEEE14 model. The LSTM algorithm was used to select CAs. Figure 10 shows the angular speeds of the SGs without the introduction of CAs; in the IEEE14 model, SG1 is the balancing generator, so it is not shown in Figure 10. Figure 11 shows the angular speeds of the SGs with the introduction of CAs.
On the SG angular-speed plots, the loss of TS is indicated by speeds exceeding 1 p.u. [73]. Figure 10 shows the process of EPS TS loss, with two groups of generators standing out: (SG3, SG8) and (SG2, SG6). Using the LSTM algorithm, a CA was selected in the form of disconnecting SG8, which preserves EPS TS.

4.5. Comparison of Accuracy of DL Algorithms with DT, RF, SVM, and XGBoost Algorithms

The DT, RF, SVM, and XGBoost algorithms and the DL algorithms (RNNs, LSTMs, RBMs, and SOMs) were selected to compare the results of classifying the CA EPS values. The comparison was performed on the data obtained for the IEEE39 mathematical model. The hyperparameters of the DT, RF, SVM, and XGBoost algorithms were determined using the standard exhaustive grid search approach.
Table 19 shows the obtained values of the hyperparameters of the DT, RF, SVM, and XGBoost algorithms.
Table 19 uses the following notations: criterion is the function for measuring the quality of a split of the DT data sample, splitter is the strategy used for splitting at each DT node, max_depth is the maximum DT depth, min_samples_split is the minimum number of samples needed to split an inner DT node, n_estimators is the number of DTs in the RF, min_samples_leaf is the minimum number of samples required in a leaf node, max_features is the number of features considered when searching for the best split, degree is the degree of the polynomial kernel function, gamma (for the SVM) is the kernel coefficient, kernel is the kernel type, alpha is the L1 regularization factor, lambda is the L2 regularization factor, gamma (for XGBoost) is the minimum loss reduction required for node separation, learning_rate is the learning rate, base_score is the initial prediction score, and max_delta_step is the maximum delta step.
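As an illustration of the exhaustive grid search used for these hyperparameters, the sketch below tunes a DT over a small grid in the Table 19 notation. The grid values and the toy data set are our assumptions, standing in for the IEEE39 feature sample:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the IEEE39 feature sample.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

param_grid = {
    "criterion": ["gini", "entropy"],   # split-quality function
    "splitter": ["best", "random"],     # node-splitting strategy
    "max_depth": [5, 10, 20],           # maximum DT depth
    "min_samples_split": [2, 5, 10],    # minimum samples to split a node
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
best_params = search.best_params_      # hyperparameters, as in Table 19
best_acc = search.best_score_          # cross-validated ACC (0..1)
```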
Table 20 shows the values of the ACC ratio determined by expression (9).
Due to the significant number of features in the data set and the high share of hidden patterns described by the system of algebraic-differential equations of the EPS dynamic state, the accuracy of the DT, RF, SVM, and XGBoost algorithms is, on average, 18.6% lower than that of the RNN, LSTM, RBM, and SOM algorithms.
Based on both Table 13 and Table 18, the highest accuracy of the CA EPS classification corresponds to the LSTM algorithm.

5. Conclusions

This study presents the results of developing an EC EPS methodology based on DL algorithms and data obtained from PMUs. Due to the introduction of a significant number of RESs and the tightening of electricity market rules, the transient processes of modern EPSs have become significantly faster. Therefore, traditional EC EPS systems do not meet the requirements for speed and reliability. This paper proposes a methodology for selecting CAs to preserve EPS TS and SSS.
The following DL algorithms were considered: RNNs, LSTMs, RBMs, and SOMs. To form a data sample, the IEEE14, IEEE24, and IEEE39 EPS mathematical models were used. To process the data sample, a three-step algorithm was used, consisting of a sequential analysis of the Spearman correlation coefficient of each feature with the response, the Spearman cross-correlation coefficients between features, and the importance of the features for the RF algorithm. For the IEEE14 model, the initial number of features in the data set was 107; for the IEEE24 model, it was 244; and for the IEEE39 model, it was 313. After applying the feature selection algorithm shown in Figure 2, the number of features was reduced to 69 for IEEE14, 189 for IEEE24, and 238 for IEEE39.
Next, the considered DL algorithms were trained and tested on the processed data sample, both with and without the voltage phase values that can be measured by PMUs. During testing, the following classification accuracy characteristics were analyzed: MDR, FAR, ACC, and AUC. Table 21 shows the ACC values of the RNN, LSTM, RBM, and SOM algorithms averaged over the IEEE14, IEEE24, and IEEE39 models.
As a result of testing, it was found that the maximum accuracy corresponds to the LSTM algorithm. The ACC value for the LSTM algorithm is shown in bold in Table 21.
To test the possibility of selecting a CA for an accident that is missing in the data sample, for the IEEE14 model, an accident consisting of a short circuit in node 9 was considered. As a result of modeling this accident, SG8 and SG3 lose stability and operate in asynchronous mode. To provide TS, the CA was chosen in the form of SG8 shutdown. The correctness of this CA is confirmed by the results of the calculation of the transient process.
Further research will be aimed at developing an EC methodology for isolated EPSs that contain a significant proportion of RESs and energy storage devices; for such EPSs, the CAs must provide the required AC voltage and frequency levels. Other important directions for future research include more detailed testing of the ability of DL algorithms to select optimal CAs for accidents that are not in the training set; the determination of the acceptable time delays of DL algorithms that still allow the optimal CAs to be defined during the development of the transient process; the determination of the minimum size of the data set; and the analysis of the influence of errors in the synchronization of PMU data on the accuracy of CA EPS selection.

Author Contributions

All authors have made valuable contributions to this paper. Conceptualization, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), A.P., O.P., I.Z. and S.B.; methodology, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), A.P., O.P., I.Z. and S.B.; software, M.S., M.S. (Murodbek Safaraliev), A.P. and O.P.; validation, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), A.P., O.P., I.Z. and S.B.; formal analysis, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), O.P., I.Z. and S.B.; investigation, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), A.P., O.P. and I.Z.; writing—original draft preparation, M.S. (Mihail Senyuk), M.S. (Murodbek Safaraliev), A.P., O.P., I.Z. and S.B.; writing—review and editing, I.Z., M.S. (Mihail Senyuk) and S.B.; supervision, M.S. (Murodbek Safaraliev). All authors have read and agreed to the published version of the manuscript.

Funding

The reported study was supported by the Russian Science Foundation, research project № 23-79-01024.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ACC      Accuracy
ANN      Artificial neural network
AUC      Area under receiver operating characteristic curve
BMU      Best matching unit
CA       Control action
DL       Deep learning
DT       Decision tree
EC       Emergency control
EPS      Electrical power system
FAR      False alarm rate
FN       False negative
FP       False positive
ID       Imbalance degree
LSTM     Long short-term memory networks
MDR      Missed detection rate
ML       Machine learning
PMU      Phasor measurement unit
RBM      Restricted Boltzmann machines
RES      Renewable sources of energy
RF       Random forest
RNN      Recurrent neural networks
SG       Synchronous generator
SOM      Self-organizing maps
SSS      Small signal stability
SVM      Support vector machine
TN       True negative
TP       True positive
TS       Transient stability
XGBoost  Extreme gradient boosting

Appendix A

The IEEE14 model parameters used are shown in Table A1 and Table A2.
Figure A1. IEEE14 model diagram.
Table A1. Parameters of SGs used in the IEEE14 model.
SG Number  Pmax, MW  Pmin, MW  Qmax, MVAr  Qmin, MVAr  EC
1          332.4     0.0       10.0        0.0         No
2          140.0     30.0      50.0        0.0         Yes
3          100.0     0.0       40.0        0.0         Yes
6          100.0     0.0       24.0        0.0         Yes
8          100.0     0.0       24.0        0.0         Yes
Table A2. Parameters of IEEE14 model nodes participating in EC.
Bus Number  P, MW  Q, MVAr  Vmax, p.u.  Vmin, p.u.
2           21.7   12.7     1.05        0.95
3           94.2   19.0     1.05        0.95
4           47.8   3.9      1.05        0.95
5           7.6    1.6      1.05        0.95
10          9.0    5.8      1.05        0.95
11          3.5    1.8      1.05        0.95
12          6.1    1.6      1.05        0.95
13          13.5   5.8      1.05        0.95
14          14.9   5.0      1.05        0.95

Appendix B

The IEEE24 model parameters used are shown in Table A3 and Table A4.
Figure A2. IEEE24 model diagram.
Table A3. Parameters of SGs used in the IEEE24 model.
SG Number  Pmax, MW  Pmin, MW  Qmax, MVAr  Qmin, MVAr  EC
1          192.0     0.0       96.0        0.0         Yes
2          192.0     0.0       96.0        0.0         Yes
7          300.0     0.0       150.0       0.0         Yes
13         591.0     0.0       290.0       0.0         Yes
15         215.0     0.0       100.0       0.0         Yes
16         155.0     40.0      60.0        0.0         Yes
18         400.0     0.0       200.0       0.0         Yes
21         400.0     20.0      200.0       0.0         Yes
22         300.0     0.0       100.0       0.0         Yes
23         650.0     0.0       300.0       0.0         No
Table A4. Parameters of IEEE24 model nodes participating in EC.
Bus Number  P, MW  Q, MVAr  Vmax, p.u.  Vmin, p.u.
1           108.0  22.0     1.05        0.95
2           97.0   20.0     1.05        0.95
4           74.0   15.0     1.05        0.95
5           71.0   14.0     1.05        0.95
6           136.0  28.0     1.05        0.95
9           175.0  36.0     1.05        0.95
13          265.0  54.0     1.05        0.95
15          317.0  64.0     1.05        0.95
16          100.0  20.0     1.05        0.95
18          333.0  68.0     1.05        0.95
20          128.0  26.0     1.05        0.95

Appendix C

The IEEE39 model parameters used are shown in Table A5 and Table A6.
Table A5. Parameters of SGs used in the IEEE39 model.
SG Number  Pmax, MW  Pmin, MW  Qmax, MVAr  Qmin, MVAr  EC
1          1129.0    0.0       411.0       0.0         No
2          526.0     0.0       257.0       0.0         Yes
3          692.0     50.0      270.0       0.0         Yes
4          638.0     0.0       151.0       0.0         Yes
5          511.0     0.0       177.0       0.0         Yes
6          657.0     0.0       222.0       0.0         Yes
7          605.0     20.0      45.0        0.0         Yes
8          547.0     30.0      74.0        0.0         Yes
9          837.0     0.0       30.0        0.0         Yes
10         301.0     0.0       151.0       0.0         Yes
Figure A3. IEEE39 model diagram.
Table A6. Parameters of IEEE39 model nodes participating in EC.
Bus Number  P, MW  Q, MVAr  Vmax, p.u.  Vmin, p.u.
3           322.0  2.0      1.05        0.95
4           500.0  184.0    1.05        0.95
6           320.0  153.0    1.05        0.95
8           158.0  30.0     1.05        0.95
12          308.0  92.0     1.05        0.95
15          320.0  153.0    1.05        0.95
18          158.0  30.0     1.05        0.95
22          274.0  115.0    1.05        0.95
23          274.0  85.0     1.05        0.95
25          224.0  47.0     1.05        0.95
26          139.0  17.0     1.05        0.95
27          281.0  75.0     1.05        0.95
28          206.0  28.0     1.05        0.95
29          283.0  27.0     1.05        0.95

References

  1. Hatziargyriou, N.; Milanovic, J.; Rahmann, C.; Ajjarapu, V.; Canizares, C.; Erlich, I.; Hill, D.; Hiskens, I.; Kamwa, I.; Pal, B.; et al. Definition and Classification of Power System Stability—Revisited & Extended. IEEE Trans. Power Syst. 2020, 36, 3271–3281. [Google Scholar] [CrossRef]
  2. Carrasco, J.M.; Franquelo, L.G.; Bialasiewicz, J.T.; Galvan, E.; PortilloGuisado, R.; Prats, M.A.M.; Leon, J.I.; Moreno-Alfonso, N. Power-electronic systems for the grid integration of renewable energy sources: A survey. IEEE Trans. Ind. Electron. 2004, 53, 1002–1016. [Google Scholar] [CrossRef]
  3. Liu, J.; Miura, Y.; Ise, T. Comparison of dynamic characteristics between virtual synchronous generator and droop control in inverter-based distributed generators. IEEE Trans. Power Electron. 2016, 31, 3600–3611. [Google Scholar] [CrossRef]
  4. Han, H.; Hou, X.; Yang, J.; Wu, J.; Su, M.; Guerrero, J.M. Review of Power Sharing Control Strategies for Islanding Operation of AC Microgrids. IEEE Trans. Smart Grid 2016, 7, 200–215. [Google Scholar] [CrossRef]
  5. Deng, R.; Yang, Z.; Chow, M.-Y.; Chen, J. A Survey on Demand Response in Smart Grids: Mathematical Models and Approaches. IEEE Trans. Ind. Inform. 2015, 11, 570–582. [Google Scholar] [CrossRef]
  6. Ula, A.H.M.S. Global warming and electric power generation: What is the connection? IEEE Trans. Energy Convers. 1991, 6, 599–604. [Google Scholar] [CrossRef]
  7. Elavarasan, R.M.; Shafiullah, G.M.; Padmanaban, S.; Kumar, N.M.; Annam, A.; Vetrichelvan, A.M.; Mihet-Popa, L.; Holm-Nielsen, J.B. A Comprehensive Review on Renewable Energy Development, Challenges, and Policies of Leading Indian States with an International Perspective. IEEE Access 2020, 8, 74432–74457. [Google Scholar] [CrossRef]
  8. Zhou, Z.; Xiong, F.; Huang, B.; Xu, C.; Jiao, R.; Liao, B.; Yin, Z.; Li, J. Game-Theoretical Energy Management for Energy Internet with Big Data-Based Renewable Power Forecasting. IEEE Access 2017, 5, 5731–5746. [Google Scholar] [CrossRef]
  9. Liang, J.; Tang, W. Ultra-Short-Term Spatiotemporal Forecasting of Renewable Resources: An Attention Temporal Convolutional Network-Based Approach. IEEE Trans. Smart Grid 2022, 13, 3798–3812. [Google Scholar] [CrossRef]
  10. Prema, V.; Bhaskar, M.S.; Almakhles, D.; Gowtham, N.; Rao, K.U. Critical Review of Data, Models and Performance Metrics for Wind and Solar Power Forecast. IEEE Access 2021, 10, 667–688. [Google Scholar] [CrossRef]
  11. Huang, Y.; Wang, Y.; Li, C.; Zhao, H.; Wu, Q. Physics insight of the inertia of power system and methods to provide inertial response. CSEE J. Power Energy Syst. 2022, 8, 559–568. [Google Scholar] [CrossRef]
  12. Alimi, O.A.; Ouahada, K.; Abu-Mahfouz, A.M. A Review of Machine Learning Approaches to Power System Security and Stability. IEEE Access 2020, 8, 113512–113531. [Google Scholar] [CrossRef]
  13. Cao, J.; Fan, Z. Deep Learning-Based Online Small Signal Stability Assessment of Power Systems with Renewable Generation. In Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Guangzhou, China, 8–12 October 2018; pp. 216–221. [Google Scholar] [CrossRef]
  14. Azman, S.K.; Isbeih, Y.J.; El Moursi, M.S.; Elbassioni, K. A Unified Online Deep Learning Prediction Model for Small Signal and Transient Stability. IEEE Trans. Power Syst. 2020, 35, 4585–4598. [Google Scholar] [CrossRef]
  15. Papadopoulos, T.A.; Kontis, E.O.; Barzegkar-Ntovom, G.A.; Papadopoulos, P.N. A Three-Level Distributed Architecture for the Real-Time Monitoring of Modern Power Systems. IEEE Access 2022, 10, 29287–29306. [Google Scholar] [CrossRef]
  16. Hashim, N.; Hamzah, N.; Latip, M.A.; Sallehhudin, A. Transient Stability Analysis of the IEEE 14-Bus Test System Using Dynamic Computation for Power Systems (DCPS). In Proceedings of the 2012 Third International Conference on Intelligent Systems Modelling and Simulation, Kota Kinabalu, Malaysia, 8–10 February 2012; pp. 481–486. [Google Scholar] [CrossRef]
  17. Khairuddin, A.; Ahmed, S.; Mustafa, M.; Zin, A.; Ahmad, H. A Novel Method for ATC Computations in a Large-Scale Power System. IEEE Trans. Power Syst. 2004, 19, 1150–1158. [Google Scholar] [CrossRef]
  18. Subrahmanyam, V.S.; Jain, S.; Narayanan, G. Real-time Simulation of IEEE 10-Generator 39-Bus System with Power System Stabilizers on Miniature Full Spectrum Simulator. In Proceedings of the 2019 IEEE International Conference on Sustainable Energy Technologies and Systems (ICSETS), Bhubaneswar, India, 26 February–1 March 2019; pp. 161–166. [Google Scholar]
  19. Senyuk, M.; Safaraliev, M.; Kamalov, F.; Sulieman, H. Power System Transient Stability Assessment Based on Machine Learning Algorithms and Grid Topology. Mathematics 2023, 11, 525. [Google Scholar] [CrossRef]
  20. Sarajcev, P.; Kunac, A.; Petrovic, G.; Despalatovic, M. Artificial Intelligence Techniques for Power System Transient Stability Assessment. Energies 2022, 15, 507. [Google Scholar] [CrossRef]
  21. Li, T.; Liu, J.; Gao, K.; Tang, J.; Cui, D.; Zeng, H.; Wang, T.; Wang, Z.; Zhang, Y.; Xu, X. Decision Tree-based Real-time Emergency Control Strategy for Power System. In Proceedings of the 2018 International Conference on Power System Technology (POWERCON), Guangzhou, China, 6–8 November 2018; pp. 1832–1838. [Google Scholar] [CrossRef]
  22. Pyo, G.; Park, J.; Moon, S. A new method for dynamic reduction of power system using pam algorithm. In Proceedings of the IEEE PES General Meeting, Minneapolis, MN, USA, 25–29 July 2010. [Google Scholar]
  23. Zhu, L.; Lu, C.; Kamwa, I.; Zeng, H. Spatial–Temporal Feature Learning in Smart Grids: A Case Study on Short-Term Voltage Stability Assessment. IEEE Trans. Ind. Inform. 2018, 16, 1470–1482. [Google Scholar] [CrossRef]
  24. Zhang, X.; Wang, Y.; Xie, P.; Lin, S.; Luo, H.; Ling, H.; Li, W. Power System Transient Stability Control Method Based on Deep Learning Hybrid Model. In Proceedings of the 2021 IEEE/IAS Industrial and Commercial Power System Asia (I&CPS Asia), Chengdu, China, 18–21 July 2021; pp. 1447–1451. [Google Scholar]
  25. Gomez, F.R.; Rajapakse, A.D.; Annakkage, U.D.; Fernando, I.T. Support vector machine-based algorithm for post-fault transient stability status prediction using synchronized measurements. IEEE Trans. Power Syst. 2011, 26, 1474–1483. [Google Scholar] [CrossRef]
  26. Moulin, L.S.; Da Silva, A.; El-Sharkawi, M.A.; MarksII, R.J. Support Vector Machines for Transient Stability Analysis of Large-Scale Power Systems. IEEE Trans. Power Syst. 2004, 19, 818–825. [Google Scholar] [CrossRef]
  27. Yang, H.; Zhang, W.; Chen, J.; Wang, L. PMU-based voltage stability prediction using least square support vector machine with online learning. Electr. Power Syst. Res. 2018, 160, 234–242. [Google Scholar] [CrossRef]
  28. Ramirez-Gonzalez, M.; Nösberger, L.; Sevilla, F.R.S.; Korba, P. Small-signal stability assessment with transfer learning-based convolutional neural networks. In Proceedings of the 2022 IEEE Electrical Power and Energy Conference (EPEC), Victoria, BC, Canada, 5–7 December 2022; pp. 386–391. [Google Scholar] [CrossRef]
  29. Bellizio, F.; Cremer, J.L.; Strbac, G. Transient Stable Corrective Control Using Neural Lyapunov Learning. IEEE Trans. Power Syst. 2023, 38, 3245–3253. [Google Scholar]
  30. Jiang, C.X.; Li, Z.; Zheng, J.H.; Wu, Q.H. Power System Emergency Control to Improve Short-Term Voltage Stability Using Deep Reinforcement Learning Algorithm. In Proceedings of the 2019 IEEE 3rd International Electrical and Energy Conference (CIEEC), Beijing, China, 7–9 September 2019; pp. 1872–1877. [Google Scholar]
  31. Cheng, Y.; Huang, L.; Wang, X. Authentic Boundary Proximal Policy Optimization. IEEE Trans. Cybern. 2021, 52, 9428–9438. [Google Scholar] [CrossRef]
  32. Huang, Q.; Huang, R.; Hao, W.; Tan, J.; Fan, R.; Huang, Z. Adaptive Power System Emergency Control Using Deep Reinforcement Learning. IEEE Trans. Smart Grid 2019, 11, 1171–1182. [Google Scholar] [CrossRef]
  33. Glavic, M.; Fonteneau, R.; Ernst, D. Reinforcement Learning for Electric Power System Decision and Control: Past Considerations and Perspectives. IFAC-PapersOnLine 2017, 50, 6918–6927. [Google Scholar] [CrossRef]
  34. Horri, R.; Roudsari, H.M. Reinforcement-learning-based load shedding and intentional voltage manipulation approach in a microgrid considering load dynamics. IET Gener. Transm. Distrib. 2022, 16, 3384–3401. [Google Scholar] [CrossRef]
  35. Zhou, T.; Wang, Y.; Xu, Y.; Wang, Q.; Zhu, Z. Applications of Reinforcement Learning in Frequency Regulation Control of New Power Systems. In Proceedings of the 2022 International Conference on Cyber-Physical Social Intelligence (ICCSI), Nanjing, China, 18–21 November 2022; pp. 501–506. [Google Scholar]
  36. Hou, J.; Xie, C.; Wang, T.; Yu, Z.; Lü, Y.; Dai, H. Power System Transient Stability Assessment Based on Voltage Phasor and Convolution Neural Network. In Proceedings of the 2018 IEEE International Conference on Energy Internet (ICEI), Beijing, China, 21–25 May 2018; pp. 247–251. [Google Scholar] [CrossRef]
  37. Zhang, S.; Zhang, D.; Qiao, J.; Wang, X.; Zhang, Z. Preventive control for power system transient security based on XGBoost and DCOPF with consideration of model interpretability. CSEE J. Power Energy Syst. 2021, 7, 279–294. [Google Scholar]
  38. Li, Y.; Dong, M.; Kothari, R. Classifiability-Based Omnivariate Decision Trees. IEEE Trans. Neural Netw. 2005, 16, 1547–1560. [Google Scholar] [CrossRef]
  39. Jiang, D.; Zang, W.; Sun, R.; Wang, Z.; Liu, X. Adaptive Density Peaks Clustering Based on K-Nearest Neighbor and Gini Coefficient. IEEE Access 2020, 8, 113900–113917. [Google Scholar] [CrossRef]
  40. Guo, C.-Y.; Lin, Y.-J. Random Interaction Forest (RIF)–A Novel Machine Learning Strategy Accounting for Feature Interaction. IEEE Access 2022, 11, 1806–1813. [Google Scholar] [CrossRef]
  41. Tsang, I.-H.; Kwok, J.-Y.; Zurada, J. Generalized Core Vector Machines. IEEE Trans. Neural Netw. 2006, 17, 1126–1140. [Google Scholar] [CrossRef] [PubMed]
  42. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  43. Senyuk, M.; Beryozkina, S.; Berdin, A.; Moiseichenkov, A.; Safaraliev, M.; Zicmane, I. Testing of an Adaptive Algorithm for Estimating the Parameters of a Synchronous Generator Based on the Approximation of Electrical State Time Series. Mathematics 2022, 10, 4187. [Google Scholar] [CrossRef]
  44. Senyuk, M.; Safaraliev, M.; Gulakhmadov, A.; Ahyoev, J. Application of the Conditional Optimization Method for the Synthesis of the Law of Emergency Control of a Synchronous Generator Steam Turbine Operating in a Complex-Closed Configuration Power System. Mathematics 2022, 10, 3979. [Google Scholar] [CrossRef]
  45. Senyuk, M.; Rajab, K.; Safaraliev, M.; Kamalov, F. Evaluation of the Fast Synchrophasors Estimation Algorithm Based on Physical Signals. Mathematics 2023, 11, 256. [Google Scholar] [CrossRef]
  46. Senyuk, M.; Beryozkina, S.; Gubin, P.; Dmitrieva, A.; Kamalov, F.; Safaraliev, M.; Zicmane, I. Fast Algorithms for Estimating the Disturbance Inception Time in Power Systems Based on Time Series of Instantaneous Values of Current and Voltage with a High Sampling Rate. Mathematics 2022, 10, 3949. [Google Scholar] [CrossRef]
  47. Smolovik, S.V.; Koshcheev, L.A.; Lisitsyn, A.A.; Denisenko, A.I. Special Automation for Isolated Power Systems Emergency Control. In Proceedings of the 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), Moscow, Russia, 26–29 January 2021; pp. 1558–1561. [Google Scholar] [CrossRef]
  48. Deng, H.; Runger, G. Feature selection via regularized trees. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar] [CrossRef]
  49. Xia, Y.; Wang, J. A Recurrent Neural Network for Solving Nonlinear Convex Programs Subject to Linear Constraints. IEEE Trans. Neural Netw. 2005, 16, 379–386. [Google Scholar] [CrossRef]
  50. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  51. Guo, S.; Zhou, C.; Wang, B.; Zheng, X. Training Restricted Boltzmann Machines Using Modified Objective Function Based on Limiting the Free Energy Value. IEEE Access 2018, 6, 78542–78550. [Google Scholar] [CrossRef]
  52. Kohonen, T. The self-organizing map. Proc. IEEE 1990, 78, 1464–1480. [Google Scholar] [CrossRef]
  53. Vaccari, I.; Carlevaro, A.; Narteni, S.; Cambiaso, E.; Mongelli, M. eXplainable and Reliable against Adversarial Machine Learning in Data Analytics. IEEE Access 2022, 10, 83949–83970. [Google Scholar] [CrossRef]
  54. Atapattu, S.; Tellambura, C.; Jiang, H. MGF Based Analysis of Area under the ROC Curve in Energy Detection. IEEE Commun. Lett. 2011, 15, 1301–1303. [Google Scholar] [CrossRef]
  55. Yang, C.; Xue, Y.; Zhang, X.-P.; Zhang, Y.; Chen, Y. Real-Time FPGA-RTDS Co-Simulator for Power Systems. IEEE Access 2018, 6, 44917–44926. [Google Scholar] [CrossRef]
  56. Al-Ismail, F.S.; Hassan, M.A.; Abido, M.A. RTDS Implementation of STATCOM-Based Power System Stabilizers. Can. J. Electr. Comput. Eng. 2014, 37, 48–56. [Google Scholar] [CrossRef]
  57. Wang, B.; Dong, X.; Bo, Z.; Perks, A. RTDS Environment Development of Ultra-High-Voltage Power System and Relay Protection Test. IEEE Trans. Power Deliv. 2008, 23, 618–623. [Google Scholar] [CrossRef]
  58. Lee, H.; Jung, C.; Song, C.S.; Lee, S.-R.; Yang, B.-M.; Jang, G. Novel Protection Scheme with the Superconducting Power Cables and Fault Current Limiters Through RTDS Test in Icheon Substation. IEEE Trans. Appl. Supercond. 2011, 22, 4705304. [Google Scholar] [CrossRef]
  59. Yang, Z.; Wang, Y.; Xing, L.; Yin, B.; Tao, J. Relay Protection Simulation and Testing of Online Setting Value Modification Based on RTDS. IEEE Access 2019, 8, 4693–4699. [Google Scholar] [CrossRef]
  60. Bansal, Y.; Sodhi, R. PMUs Enabled Tellegen’s Theorem-Based Fault Identification Method for Unbalanced Active Distribution Network Using RTDS. IEEE Syst. J. 2020, 14, 4567–4578. [Google Scholar] [CrossRef]
  61. Farrokhabadi, M.; Canizares, C.A.; Bhattacharya, K. Frequency Control in Isolated/Islanded Microgrids through Voltage Regulation. IEEE Trans. Smart Grid 2015, 8, 1185–1194. [Google Scholar] [CrossRef]
  62. Ebenuwa, S.H.; Sharif, M.S.; Alazab, M.; Al-Nemrat, A. Variance Ranking Attributes Selection Techniques for Binary Classification Problem in Imbalance Data. IEEE Access 2019, 7, 24649–24666. [Google Scholar] [CrossRef]
  63. Ge, X.; Qian, J.; Fu, Y.; Lee, W.-J.; Mi, Y. Transient Stability Evaluation Criterion of Multi-Wind Farms Integrated Power System. IEEE Trans. Power Syst. 2022, 37, 3137–3140. [Google Scholar] [CrossRef]
  64. Ma, J.; Wang, S.; Qiu, Y.; Li, Y.; Wang, Z.; Thorp, J.S. Angle Stability Analysis of Power System with Multiple Operating Conditions Considering Cascading Failure. IEEE Trans. Power Syst. 2016, 32, 873–882. [Google Scholar] [CrossRef]
  65. Liu, H.; Su, J.; Qi, J.; Wang, N.; Li, C. Decentralized Voltage and Power Control of Multi-Machine Power Systems with Global Asymptotic Stability. IEEE Access 2019, 7, 14273–14282. [Google Scholar] [CrossRef]
  66. Zhang, Z.; Zhao, M.; Chow, T.W. Binary- and Multi-class Group Sparse Canonical Correlation Analysis for Feature Extraction and Classification. IEEE Trans. Knowl. Data Eng. 2012, 25, 2192–2205. [Google Scholar] [CrossRef]
  67. Javeed, A.; Zhou, S.; Liao, Y.; Qasim, I.; Noor, A.; Nour, R. An Intelligent Learning System Based on Random Search Algorithm and Optimized Random Forest Model for Improved Heart Disease Detection. IEEE Access 2019, 7, 180235–180243. [Google Scholar] [CrossRef]
  68. Li, H.; Krček, M.; Perin, G. A Comparison of Weight Initializers in Deep Learning-Based Side-Channel Analysis. In Applied Cryptography and Network Security Workshops: ACNS 2020; Lecture Notes in Computer Science; Zhou, J., Conti, M., Ahmed, C.M., Au, M.H., Batina, L., Li, Z., Lin, J., Losiouk, E., Luo, B., Majumdar, S., et al., Eds.; Springer: Cham, Switzerland, 2020; Volume 12418. [Google Scholar] [CrossRef]
  69. Khalyasmaa, A.I.; Senyuk, M.D.; Eroshenko, S.A. High-Voltage Circuit Breakers Technical State Patterns Recognition Based on Machine Learning Methods. IEEE Trans. Power Deliv. 2019, 34, 1747–1756. [Google Scholar] [CrossRef]
  70. Khalyasmaa, A.I.; Senyuk, M.D.; Eroshenko, S.A. Analysis of the State of High-Voltage Current Transformers Based on Gradient Boosting on Decision Trees. IEEE Trans. Power Deliv. 2020, 36, 2154–2163. [Google Scholar] [CrossRef]
  71. Yang, N.; Tang, H.; Yue, J.; Yang, X.; Xu, Z. Accelerating the Training Process of Convolutional Neural Networks for Image Classification by Dropping Training Samples Out. IEEE Access 2020, 8, 142393–142403. [Google Scholar] [CrossRef]
  72. He, M.; Vittal, V.; Zhang, J. Online dynamic security assessment with missing pmu measurements: A data mining approach. IEEE Trans. Power Syst. 2013, 28, 1969–1977. [Google Scholar] [CrossRef]
  73. Klein, M.; Rogers, G.; Kundur, P. A fundamental study of inter-area oscillations in power systems. IEEE Trans. Power Syst. 1991, 6, 914–921. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the EC EPS algorithm.
Figure 2. Flowchart of feature selection algorithm.
Figure 3. Flowchart of real-time EC EPS algorithm’s application.
Figure 4. Imbalance of classes of the original data samples.
Figure 5. Imbalance of classes of processed data samples.
Figure 6. One of the decision trees for the IEEE14 model.
Figure 7. Changes in features during the processing of data samples: (a) IEEE14 model; (b) IEEE24 model; (c) IEEE39 model.
Figure 8. Change in ACC indicator during training of DL models without taking into account data from PMU: (a) model IEEE14, RNN algorithm; (b) model IEEE24, RNN algorithm; (c) model IEEE39, RNN algorithm; (d) model IEEE14, LSTM algorithm; (e) model IEEE24, LSTM algorithm; (f) model IEEE39, LSTM algorithm; (g) model IEEE14, RBM algorithm; (h) model IEEE24, RBM algorithm; (i) model IEEE39, RBM algorithm; (j) model IEEE14, SOM algorithm; (k) model IEEE24, SOM algorithm; (l) model IEEE39, SOM algorithm.
Figure 9. Change in ACC indicator during the training of DL models taking into account data from PMU: (a) model IEEE14, RNN algorithm; (b) model IEEE24, RNN algorithm; (c) model IEEE39, RNN algorithm; (d) model IEEE14, LSTM algorithm; (e) model IEEE24, LSTM algorithm; (f) model IEEE39, LSTM algorithm; (g) model IEEE14, RBM algorithm; (h) model IEEE24, RBM algorithm; (i) model IEEE39, RBM algorithm; (j) model IEEE14, SOM algorithm; (k) model IEEE24, SOM algorithm; (l) model IEEE39, SOM algorithm.
Figure 10. Change in the angular speeds of SG2, SG3, SG6, and SG8 during a short circuit in node 9 without CA implementation.
Figure 11. Change in angular speeds of SG2, SG3, SG6, and SG8 during a short circuit in node 9 with CA implementation.
Table 1. Features of modern EPSs.
Specificity | Merits | Drawbacks
A significant share of RESs | Reducing the impact of the electricity generation process on the environment | Increased speed of transient processes, increased uncertainty of EPS operations
Tightening the rules of the electricity market | Increasing the financial efficiency of EPSs | Reduced stability margin
A high degree of digitalization | Possibility of developing and implementing fundamentally new algorithms for managing and analyzing EPS operations | Increased complexity and labor costs for maintaining digital systems
Table 2. Analysis of the considered ML algorithms.
Algorithm | References | Merits | Drawbacks | Field of Applicability
DT | [21,22,23] | High reliability and representativeness of the results | Significant time costs for training the algorithm | TS
RF | [24] | A high degree of parallelism, high performance | High random access memory requirements | TS
SVM | [25,26,27] | High performance of the trained algorithm | Instability when working with noisy data | SSS
ANN | [28,29,30,31] | Resistant to data noise, significant fault tolerance | Tendency to overtrain | SSS
DL | [32,33,34,35] | Does not require a significant amount of training samples | The choice of optimal CAs is limited by the restricted action set of the agent | TS
XGBoost | [36,37] | The high speed of the trained algorithm | Difficulty in determining hyperparameters | TS
Table 3. Parameters of initial data samples.
Model | Number of Features | Total Data Sample Size | Number of Scenarios with CAs | Number of Scenarios without CAs | ID, %
IEEE14 | 107 | 3465 | 692 | 2772 | 19.9
IEEE24 | 244 | 23,940 | 6548 | 16,758 | 27.3
IEEE39 | 313 | 43,470 | 13,546 | 30,429 | 31.1
Table 4. Parameters of processed data samples.
Model | Number of Features | Total Data Sample Size | Number of Scenarios with CAs | Number of Scenarios without CAs | ID, %
IEEE14 | 107 | 1386 | 749 | 637 | 54.0
IEEE24 | 244 | 8618 | 4412 | 4206 | 51.1
IEEE39 | 313 | 14,779 | 8172 | 6607 | 55.3
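The ID column of Tables 3 and 4 is consistent with the share of CA scenarios in the total sample. A minimal sketch of this bookkeeping (pure Python; the helper name and the exact ID formula are assumptions inferred from the table values, not taken from the paper):

```python
def imbalance_degree(n_with_ca: int, n_without_ca: int) -> float:
    """Share of scenarios with control actions (CAs) in the whole sample,
    in percent; reproduces the ID column of Table 4."""
    total = n_with_ca + n_without_ca
    return 100.0 * n_with_ca / total

# Processed IEEE14 sample from Table 4: 749 scenarios with CAs, 637 without.
print(round(imbalance_degree(749, 637), 1))  # 54.0
```

Applied to the processed samples of Table 4, this formula reproduces the printed ID values to one decimal place, confirming that balancing brought the classes close to parity.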
Table 5. The result of removing features with a low correlation to the target.
Model | Initial Number of Features | Number of Features Removed | Number of Features after Processing | Removed Features
IEEE14 | 107 | 31 | 76 | Values of current loads of network elements; voltages in nodes 14, 11, and 10; reactive powers along lines adjacent to nodes 9 and 5
IEEE24 | 244 | 42 | 202 | Values of current loads of network elements; voltages in nodes 6, 14, 19, and 20
IEEE39 | 313 | 54 | 259 | Values of current loads of network elements; voltages in nodes 3, 4, 7, 11–14, and 24
Table 6. The result of removing features with high cross-correlations.
Model | Initial Number of Features | Number of Features Removed | Number of Features after Processing | Removed Features
IEEE14 | 76 | 2 | 74 | Node voltages 12 and 13
IEEE24 | 202 | 1 | 201 | Node voltage 5
IEEE39 | 259 | 3 | 256 | Node voltages 18, 27, and 28
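The removals in Table 6 drop one feature of every strongly cross-correlated pair (e.g., voltages of adjacent nodes that track each other). A minimal sketch of such a filter, assuming a simple greedy rule over the Pearson coefficient (the threshold and the helper names are illustrative assumptions, not the authors' code):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_cross_correlated(features: dict, threshold: float = 0.95):
    """Greedily drop the later feature of every pair whose absolute
    Pearson correlation exceeds the threshold."""
    names = list(features)
    dropped = set()
    for i, a in enumerate(names):
        if a in dropped:
            continue
        for b in names[i + 1:]:
            if b not in dropped and abs(pearson(features[a], features[b])) > threshold:
                dropped.add(b)
    return [n for n in names if n not in dropped], sorted(dropped)

# Toy example: u13 is a near copy of u12, so it is removed.
data = {
    "u12": [1.00, 1.02, 0.99, 1.01, 0.98],
    "u13": [1.00, 1.02, 0.99, 1.01, 0.98],
    "p_line": [0.5, 0.7, 0.2, 0.9, 0.4],
}
kept, dropped = drop_cross_correlated(data)
print(kept, dropped)  # ['u12', 'p_line'] ['u13']
```

Which of the two correlated features is kept is arbitrary in this sketch; in practice the one with the higher correlation to the target would be retained.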
Table 7. Values of the RF algorithm hyperparameters.
Model | n_Estimators | Max_Features | Max_Depth | Min_Samples_Split | Min_Samples_Leaf
IEEE14 | 112 | 2 | 3 | 10 | 8
IEEE24 | 214 | 7 | 6 | 12 | 9
IEEE39 | 287 | 9 | 5 | 30 | 16
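The hyperparameter names in Table 7 correspond directly to the constructor arguments of scikit-learn's `RandomForestClassifier`. A hedged sketch of that mapping (the per-model dictionary is transcribed from Table 7; the scikit-learn call shown in the docstring is illustrative, not the authors' implementation):

```python
# Random forest hyperparameters per test system, transcribed from Table 7.
RF_PARAMS = {
    "IEEE14": dict(n_estimators=112, max_features=2, max_depth=3,
                   min_samples_split=10, min_samples_leaf=8),
    "IEEE24": dict(n_estimators=214, max_features=7, max_depth=6,
                   min_samples_split=12, min_samples_leaf=9),
    "IEEE39": dict(n_estimators=287, max_features=9, max_depth=5,
                   min_samples_split=30, min_samples_leaf=16),
}

def rf_kwargs(model: str) -> dict:
    """Keyword arguments for a forest, e.g.:
    sklearn.ensemble.RandomForestClassifier(**rf_kwargs("IEEE39"))"""
    return dict(RF_PARAMS[model])

print(rf_kwargs("IEEE39")["n_estimators"])  # 287
```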
Table 8. The result of removing features after decision trees analyses.
Model | Initial Number of Features | Number of Features Removed | Number of Features after Processing | Removed Features
IEEE14 | 74 | 5 | 69 | Flows of reactive power along branches extending from node 4
IEEE24 | 201 | 12 | 189 | Flows of reactive power along branches extending from nodes 8, 13, 17, and 21
IEEE39 | 256 | 18 | 238 | Flows of reactive power along branches extending from nodes 16, 19, 26, and 28
Table 9. Values of hyperparameters of the RNN algorithm.
Model | RNN_Size | RNN_Layers | Sequence_Length | Batch_Size | Epoch_Number
IEEE14 | 256 | 2 | 25 | 50 | 250
IEEE24 | 256 | 3 | 28 | 55 | 250
IEEE39 | 256 | 5 | 28 | 70 | 250
Table 10. Values of hyperparameters of the LSTM algorithm.
Model | Act_Func | LSTM_Layer | Epoch_Number | Weight_Initializer | Dropout_Ratio
IEEE14 | tanh | 100 | 250 | glorot_uniform | 0.1
IEEE24 | tanh | 100 | 250 | glorot_uniform | 0.2
IEEE39 | tanh | 150 | 250 | glorot_uniform | 0.2
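The `glorot_uniform` initializer listed in Table 10 draws each weight from a uniform distribution whose limit depends on the layer's fan-in and fan-out. A short sketch of that rule (the function name is ours; the formula is the standard Glorot/Xavier limit used by the Keras initializer of the same name):

```python
import math
import random

def glorot_uniform(fan_in: int, fan_out: int, rng=random):
    """One weight drawn from U(-limit, limit) with
    limit = sqrt(6 / (fan_in + fan_out)), as in Keras 'glorot_uniform'."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit)

# For a square 3x3 weight matrix the limit is sqrt(6/6) = 1.0,
# so every sampled weight lies in [-1, 1].
print(round(math.sqrt(6.0 / (3 + 3)), 2))  # 1.0
```

This initializer keeps the variance of activations roughly constant across layers, which matters for the 250-epoch training runs reported in the tables.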
Table 11. Values of hyperparameters of the RBM algorithm.
Model | Act_Func | Epoch_Number | Kernel_Size | Hidden_Number | PrimaryCaps_Number
IEEE14 | Relu | 250 | 10 | 128 | 64
IEEE24 | Relu | 250 | 10 | 128 | 64
IEEE39 | Relu | 250 | 24 | 128 | 64
Table 12. Values of hyperparameters of the SOM algorithm.
Model | Kernel_Size | N_Function | Gaussian_Width | Learning_Rate | BMU_Selection
IEEE14 | 70 | Gaussian | 7 | 0.04 | Euclidean distance
IEEE24 | 150 | Gaussian | 12 | 0.03 | Euclidean distance
IEEE39 | 210 | Gaussian | 22 | 0.01 | Euclidean distance
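The BMU_Selection column of Table 12 refers to the standard self-organizing map step of picking the best matching unit: the neuron whose weight vector is closest to the input in Euclidean distance. A minimal sketch (neuron layout and values are toy illustrations):

```python
import math

def best_matching_unit(weights, sample):
    """Index of the SOM neuron whose weight vector is nearest to the
    input sample in Euclidean distance (the BMU rule of Table 12)."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, sample)))
    return min(range(len(weights)), key=lambda i: dist(weights[i]))

# Three toy neurons; the sample (0.9, 1.1) is nearest to neuron 1.
neurons = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(best_matching_unit(neurons, (0.9, 1.1)))  # 1
```

During training, the BMU and its Gaussian neighborhood (width per Table 12) are pulled toward the sample at the listed learning rate.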
Table 13. Analysis of the results of DL algorithms on test data samples.
Model | Parameter | RNN | LSTM | RBM | SOM
IEEE14 | MDR, % | 1.11 | 1.38 | 1.67 | 1.73
IEEE14 | FAR, % | 1.43 | 1.65 | 1.34 | 1.29
IEEE14 | ACC, % | 93.87 | 94.63 | 94.15 | 92.29
IEEE14 | AUC | 0.97 | 0.95 | 0.95 | 0.94
IEEE24 | MDR, % | 2.11 | 1.87 | 1.61 | 1.93
IEEE24 | FAR, % | 2.25 | 2.15 | 2.36 | 1.81
IEEE24 | ACC, % | 93.83 | 94.85 | 93.84 | 92.17
IEEE24 | AUC | 0.97 | 0.96 | 0.98 | 0.97
IEEE39 | MDR, % | 2.52 | 1.86 | 1.68 | 1.59
IEEE39 | FAR, % | 2.10 | 1.80 | 1.65 | 1.16
IEEE39 | ACC, % | 92.25 | 93.44 | 92.74 | 91.90
IEEE39 | AUC | 0.97 | 0.98 | 0.95 | 0.96
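The MDR, FAR, and ACC columns of Tables 13 and 18 follow from a binary confusion matrix over CA/no-CA decisions. A sketch under common definitions (missed detections among actual positives, false alarms among actual negatives); the paper's exact formulas may differ in detail, and the counts below are illustrative:

```python
def classification_rates(tp: int, fn: int, fp: int, tn: int):
    """Missed-detection rate, false-alarm rate, and accuracy, in percent,
    from binary confusion-matrix counts (common definitions assumed)."""
    mdr = 100.0 * fn / (tp + fn)                  # CA needed but not issued
    far = 100.0 * fp / (fp + tn)                  # CA issued but not needed
    acc = 100.0 * (tp + tn) / (tp + fn + fp + tn) # overall correctness
    return mdr, far, acc

mdr, far, acc = classification_rates(tp=45, fn=5, fp=2, tn=48)
print(mdr, far, acc)  # 10.0 4.0 93.0
```

Note that ACC alone can mask imbalance, which is why the tables report MDR and FAR (and AUC) alongside it.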
Table 14. Values of hyperparameters of the RNN algorithm taking into account PMU data.
Model | RNN_Size | RNN_Layers | Sequence_Length | Batch_Size | Epoch_Number
IEEE14 | 256 | 2 | 23 | 48 | 250
IEEE24 | 256 | 3 | 26 | 51 | 250
IEEE39 | 256 | 5 | 28 | 65 | 250
Table 15. Values of hyperparameters of the LSTM algorithm taking into account PMU data.
Model | Act_Func | LSTM_Layer | Epoch_Number | Weight_Initializer | Dropout_Ratio
IEEE14 | tanh | 90 | 250 | glorot_uniform | 0.1
IEEE24 | tanh | 120 | 250 | glorot_uniform | 0.2
IEEE39 | tanh | 136 | 250 | glorot_uniform | 0.2
Table 16. Values of hyperparameters of the RBM algorithm taking into account PMU data.
Model | Act_Func | Epoch_Number | Kernel_Size | Hidden_Number | PrimaryCaps_Number
IEEE14 | Relu | 250 | 8 | 128 | 64
IEEE24 | Relu | 250 | 8 | 128 | 64
IEEE39 | Relu | 250 | 20 | 128 | 64
Table 17. Values of hyperparameters of the SOM algorithm taking into account PMU data.
Model | Kernel_Size | N_Function | Gaussian_Width | Learning_Rate | BMU_Selection
IEEE14 | 60 | Gaussian | 6 | 0.04 | Euclidean distance
IEEE24 | 120 | Gaussian | 10 | 0.03 | Euclidean distance
IEEE39 | 200 | Gaussian | 22 | 0.01 | Euclidean distance
Table 18. Analysis of the results of DL algorithms on test data samples, taking into account data from the PMU.
Model | Parameter | RNN | LSTM | RBM | SOM
IEEE14 | MDR, % | 1.08 | 1.36 | 1.62 | 1.72
IEEE14 | FAR, % | 1.43 | 1.61 | 1.33 | 1.21
IEEE14 | ACC, % | 93.72 | 94.83 | 94.17 | 92.31
IEEE14 | AUC | 0.96 | 0.92 | 0.95 | 0.92
IEEE24 | MDR, % | 2.08 | 1.83 | 1.60 | 1.92
IEEE24 | FAR, % | 2.12 | 2.12 | 2.31 | 1.80
IEEE24 | ACC, % | 93.65 | 94.88 | 93.86 | 92.14
IEEE24 | AUC | 0.92 | 0.94 | 0.90 | 0.91
IEEE39 | MDR, % | 2.50 | 1.84 | 1.61 | 1.58
IEEE39 | FAR, % | 2.07 | 1.72 | 1.60 | 1.14
IEEE39 | ACC, % | 92.18 | 93.64 | 92.76 | 91.91
IEEE39 | AUC | 0.98 | 0.97 | 0.93 | 0.96
Table 19. Values of hyperparameters of the DT, RF, SVM, and XGBoost algorithms.
Algorithm | Hyperparameters
DT | criterion = "gini"; splitter = "best"; max_depth = 10; min_samples_split = 2
RF | n_estimators = 25; max_depth = 3; min_samples_split = 0.01; min_samples_leaf = 0.01; max_features = 7
SVM | degree = 3; gamma = 1; kernel = Radial Basis Function
XGBoost | alpha = 0.05; lambda = 0.05; gamma = 1; max_depth = 5; base_score = 0.6; n_estimators = 35; learning_rate = 1; max_delta_step = 1
Table 20. Average values of ACC ratio for the considered algorithms.
Algorithm | ACC, %
Without PMU data
RNN | 92.25
LSTM | 93.44
RBM | 92.74
SOM | 91.90
DT | 62.17
RF | 71.85
SVM | 70.13
XGBoost | 80.66
With PMU data
RNN | 92.18
LSTM | 93.64
RBM | 92.76
SOM | 91.91
DT | 72.54
RF | 77.12
SVM | 75.43
XGBoost | 81.57
Table 21. Average values of ACC ratio of the RNN, LSTM, RBM, and SOM algorithms.
Algorithm | ACC, %
Without PMU data
RNN | 93.31
LSTM | 94.31
RBM | 93.57
SOM | 92.12
With PMU data
RNN | 93.18
LSTM | 94.45
RBM | 93.58
SOM | 92.12
Citation: Senyuk, M.; Safaraliev, M.; Pazderin, A.; Pichugova, O.; Zicmane, I.; Beryozkina, S. Methodology for Power Systems’ Emergency Control Based on Deep Learning and Synchronized Measurements. Mathematics 2023, 11, 4667. https://doi.org/10.3390/math11224667