Article

Implementing Optimization Techniques in PSS Design for Multi-Machine Smart Power Systems: A Comparative Study

1 Advanced Lightning and Power Energy System (ALPER), Department of Electrical/Electronic Engineering, Faculty of Engineering, University Putra Malaysia (UPM), Serdang 43400, Selangor, Malaysia
2 Smart Microgrid Research Center, Najafabad Branch, Islamic Azad University, Najafabad 85141-43131, Iran
3 Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran 15916-34311, Iran
4 Department of Electrical and Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran 14778-93855, Iran
5 Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Najafabad 85141-43131, Iran
* Author to whom correspondence should be addressed.
Energies 2023, 16(5), 2465; https://doi.org/10.3390/en16052465
Submission received: 21 January 2023 / Revised: 15 February 2023 / Accepted: 3 March 2023 / Published: 5 March 2023
(This article belongs to the Special Issue Sustainable Technologies for Decarbonising the Energy Sector)

Abstract:

This study presents a comparative analysis of five recent meta-heuristic algorithms drawn from two general classes: nature-inspired algorithms, comprising artificial ecosystem optimization (AEO), the African vulture optimization algorithm (AVOA), and gorilla troop optimization (GTO); and non-nature-inspired algorithms based on mathematical and physical concepts, comprising gradient-based optimization (GBO) and Runge-Kutta optimization (RUN), applied to the optimal tuning of multi-machine power system stabilizers (PSSs). To this end, each algorithm was applied to PSS design for a multi-machine smart power system. The PSS design was formulated as an optimization problem, and an eigenvalue-based objective function was adopted to improve the damping of electromechanical modes. This objective function determines the stabilizer parameters and enhances the dynamic performance of the multi-machine power system. The performance of the algorithms in the PSS design was evaluated on the Western System Coordinating Council (WSCC) multi-machine power test system, and the results were compared with each other. The non-nature-inspired algorithms (GBO and RUN) damped low-frequency oscillations faster than the nature-inspired algorithms (AEO, AVOA, and GTO), improving the damping of electromechanical modes and providing better convergence and statistical performance.

1. Introduction

The interconnection of synchronous machines in the power grid system makes the power system complex. This complexity gives rise to electromechanical modes, reduces low-frequency oscillation control within the synchronous machines, and affects the stability of the power grid system. To improve the damping of the electromechanical modes, smart damping controllers are designed and coupled to synchronous machines. A study [1] conducted a thorough review of damping controller design. The authors pointed out that one of the smart damping controllers is the power system stabilizer (PSS). However, for optimal performance of the PSS, optimization techniques are adopted in its design [2]. Meta-heuristic algorithms which use deep and global search mechanisms to discover optimal solutions are widely adopted in PSS design [3]. A comprehensive review of meta-heuristic algorithms in [4] classified the algorithms. Therefore, this study experimentally compares two broad groups of meta-heuristic algorithms (nature- and non-nature-inspired) by using algorithms from each group to design power system stabilizers for a multimachine power system.
Dealing with the problem of low-frequency oscillation in a power system is essential to maintaining a reliable, stable, and synchronized power system. The low-frequency oscillation occurs in the synchronous machines of the power system and can lead to damage and huge economic impact if not dealt with [5]. Therefore, damping low-frequency oscillations is important to enhance power transfer capability and stability in the power system. Stability here involves small signal analysis due to low-frequency oscillations [6]. A power system stabilizer (PSS) damping controller is used to damp low-frequency oscillations. PSS is a compensator for the lag error between the generator exciter and electrical torque as it produces extra (compensating) torque on the rotor [7,8,9].
The power system is non-linear; this non-linearity results in sudden oscillations over a wide operating range [10]. Sudden oscillations make conventional PSS with fixed parameters ineffective for the power system’s best damping efficiency [11,12]. The PSS design concept is based on linear control theory; hence, the power system is linearized around a designated operating point.
Over the years, various PSS design techniques have emerged, including self-tuning regulators [13,14], feedback control loops [15], and pole placement methods [16,17,18]. These methods, however, suffer from low computational efficiency and require a long time to process information [19,20].
Metaheuristic algorithm techniques in PSS design have emerged recently. Several metaheuristic algorithms have been proposed for PSS design in a multi-machine power system, including the farmland fertility algorithm in [21,22], the kidney-inspired algorithm in [23], particle swarm optimization in [24,25], improved particle swarm optimization in [26,27], bacterial foraging in [28], an improved salp swarm algorithm in [29], evolutionary programming in [30], an improved Harris Hawk algorithm in [31], chaotic teaching-learning in [32], the firefly algorithm in [33], the cuckoo search algorithm in [34,35], new chaotic sunflower optimization in [36], whale and improved whale optimization in [37,38], a combination of gray wolf and sine-cosine algorithms in [39], and the ant-lion algorithm in [40]. The important advantage of metaheuristic algorithms is that their search engine simulates two important steps: exploration and exploitation. The former is concerned with locating new positions in the search space that are distinct from the current positions. The latter entails searching near the current optimal positions. Simulating exploration alone may yield new positions with low precision; conversely, deploying exploitation alone increases the risk of getting stuck in local optima. Hence, many new metaheuristic algorithms balance the exploration and exploitation processes in their search mechanisms. Metaheuristic algorithms continue to be developed and proposed for PSS optimization problems because of the no-free-lunch theorem, which states that no single algorithm is best for all optimization problems [1]. This study performs a comparative study of five metaheuristic algorithms: GBO and RUN, which are based on mathematical concepts, and AVOA, AEO, and GTO, which are nature-inspired, for optimal PSS design in a multi-machine power system considering different operating conditions; the whole procedure is graphically represented in Figure 1.
The aim is to propose the best-performing algorithms from this study.
The rest of the paper is arranged as follows: Section 2 briefly explains the mathematical modeling of the five algorithms, Section 3 describes the power system test model and PSS design, Section 4 presents the results and discussions from the analysis, and Section 5 concludes the paper.

2. Metaheuristic Algorithms

2.1. Artificial Eco-System (AEO)

The artificial ecosystem is a nature-inspired novel metaheuristic algorithm developed by [41]. The algorithm imitates the flow of energy among the three unique components of an ecosystem and their behavioral processes: the producer (production), the consumer (consumption), and the decomposer (decomposition). The interaction between these three components makes up the ecosystem food chain, which describes the feeding process and shows the flow of energy in an ecosystem. Only one producer and one decomposer exist as individuals in the ecosystem's population. The rest of the population, representing the search space, are consumers chosen as carnivores, herbivores, or omnivores with equal probability. For a minimization problem, the value of the defined objective function indicates an individual's energy level, with higher energy corresponding to a worse solution.

2.1.1. Production Process

The production process allows AEO to randomly produce a new search entity. The newly created search entity replaces the previous best entity (solution) x_n by moving it toward a point x_rand produced randomly in the search space. A production operator is used to describe this behavior mathematically, as shown in Equations (1)-(3):
$x_1(t+1) = (1-a)\,x_n(t) + a\,x_{rand}(t)$  (1)
$a = \left(1 - \frac{t}{max_{it}}\right) r_1$  (2)
$x_{rand} = r \times (V_{PSS}^{max} - V_{PSS}^{min}) + V_{PSS}^{min}$  (3)
where n is the population size, a is the weight coefficient, t is the current iteration, max_it is the maximum number of iterations (the stopping criterion), V_PSS^max and V_PSS^min are the upper and lower PSS limits, respectively, r_1 is a value generated randomly in the range [0, 1], r is a vector produced randomly within the range [0, 1], and x_rand is an individual position generated randomly in the search space.
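As a minimal illustration, the production operator of Equations (1)-(3) can be sketched in Python as below; the function name and list-based vector representation are ours, and the PSS limits are passed as generic lower/upper bound vectors.

```python
import random

def aeo_production(x_best, lb, ub, t, max_it):
    """AEO production operator (Eqs. (1)-(3)): blend the best solution
    with a random point in the search space; the random share shrinks
    linearly with the iteration count t."""
    a = (1 - t / max_it) * random.random()            # Eq. (2)
    x_rand = [random.random() * (hi - lo) + lo        # Eq. (3)
              for lo, hi in zip(lb, ub)]
    return [(1 - a) * xb + a * xr                     # Eq. (1)
            for xb, xr in zip(x_best, x_rand)]
```

At the final iteration (t = max_it), a = 0 and the producer simply returns the current best entity, which matches the weight schedule of Equation (2).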

2.1.2. Consumption Process

After the production process, consumption takes place among the consumers. Each consumer wishing to obtain food energy may feed on another randomly chosen consumer with a lower energy level, on the producer, or on both. A consumption factor C, with Levy-flight characteristics, is proposed and defined in Equations (4) and (5) as a simple parameter-free random walk:
$C = \frac{1}{2} \frac{v_1}{|v_2|}$  (4)
$v_1 \sim N(0, 1), \quad v_2 \sim N(0, 1)$  (5)
where v_1 and v_2 follow the Levy-flight distributions and N(0, 1) is the normal distribution with a mean of 0 and a standard deviation of 1.
The consumption factor is crucial because it assists each consumer in hunting for food. The three consumers, carnivores, herbivores, and omnivores, adopt unique consumption strategies.
If a herbivore is randomly selected as a consumer, it can only eat the producer. Its behavior is represented by the mathematical model in Equation (6):
$x_i(t+1) = x_i(t) + C \times (x_i(t) - x_1(t)), \quad i \in \{2, \ldots, n\}$  (6)
However, if the randomly chosen consumer is a carnivore, it can only eat a randomly chosen consumer with a higher energy level. This is shown mathematically in Equations (7) and (8):
$x_i(t+1) = x_i(t) + C \times (x_i(t) - x_j(t)), \quad i \in \{3, \ldots, n\}$  (7)
$j = randi([2, \; i-1])$  (8)
Moreover, if an omnivore is randomly selected as a consumer, it can eat both a random consumer with a higher energy level and the producer. This is mathematically modeled in Equations (9) and (10):
$x_i(t+1) = x_i(t) + C \times \big(r_2 \times (x_i(t) - x_1(t)) + (1 - r_2)(x_i(t) - x_j(t))\big), \quad i = 3, \ldots, n$  (9)
$j = randi([2, \; i-1])$  (10)
where r_2 is a number generated randomly in the range [0, 1].
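The herbivore update of Equation (6), driven by the consumption factor of Equations (4) and (5), can be sketched as follows; the helper names are ours, not from the original AEO paper.

```python
import random

def consumption_factor():
    """Parameter-free random walk C = 0.5 * v1 / |v2| (Eqs. (4)-(5)),
    with v1, v2 drawn from the standard normal distribution."""
    v1, v2 = random.gauss(0, 1), random.gauss(0, 1)
    return 0.5 * v1 / abs(v2)

def herbivore_update(x_i, x_producer):
    """Herbivore consumer moves relative to the producer x_1 (Eq. (6))."""
    C = consumption_factor()
    return [xi + C * (xi - xp) for xi, xp in zip(x_i, x_producer)]
```

Note that a consumer already located at the producer's position is left unchanged, since the difference term in Equation (6) vanishes.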

2.1.3. Decomposition Process

The decomposition process is essential because it provides the producer with the required nutrients for growth. The decomposer chemically breaks down the remains of each individual in the population after death. Its behavior is mathematically modeled with a decomposition factor D and weighting coefficients e and h in Equations (11)-(14) as follows:
$x_i(t+1) = x_n(t) + D \times (e \times x_n(t) - h \times x_i(t)), \quad i = 1, \ldots, n$  (11)
$D = 3u, \quad u \sim N(0, 1)$  (12)
$e = r_3 \times randi([1, 2]) - 1$  (13)
$h = 2 \times r_3 - 1$  (14)
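A compact sketch of the decomposition update of Equations (11)-(14) is given below; the function name and vector representation are ours.

```python
import random

def decomposition_update(x_i, x_n):
    """AEO decomposition (Eqs. (11)-(14)): move individual x_i around
    the decomposer x_n, scaled by the factor D and weights e, h."""
    u = random.gauss(0, 1)
    D = 3 * u                                # Eq. (12)
    r3 = random.random()
    e = r3 * random.randint(1, 2) - 1        # Eq. (13)
    h = 2 * r3 - 1                           # Eq. (14)
    return [xn + D * (e * xn - h * xi)       # Eq. (11)
            for xn, xi in zip(x_n, x_i)]
```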

2.2. African Vulture Optimization Algorithm (AVOA)

The AVOA is like most metaheuristic algorithms that mimic the biological principles of living things. The AVOA is based on the theories and principles of Old World African vultures described in [42], with habitats in Africa, Europe, and Asia. The AVOA optimization steps are grouped into four and are presented as follows:
Step One (Population initialization): To determine the population in which the best vulture is located, the population is initialized and the fitness of all members is calculated. The best solution is chosen as the best vulture of the first group; likewise, the second-best solution is the best vulture of the second group, and the other solutions follow from Equation (15):
$R(i) = \begin{cases} BestVulture_1 & \text{if } P_i = L_1 \\ BestVulture_2 & \text{if } P_i = L_2 \end{cases}$  (15)
where L_1 and L_2 are parameters set before the search operation. They lie within the range [0, 1] and their sum is 1. Roulette-wheel selection is used to obtain the probability of choosing the best solution from each group, expressed in Equation (16) as
$P_i = \frac{F_i}{\sum_{i=1}^{n} F_i}$  (16)
where F denotes the satisfaction rate and P_i is the probability of selecting the best solution of each group. If the numeric parameter α is near 1 and β is near 0, intensification increases, while if β is near 1 and α is near 0, AVOA diversity increases.
Step Two (Vulture starvation rate): The starvation rate of vultures determines the distance vultures travel in search of food (the search space). Equations (17) and (18) model this behavior:
$t = h \times \left( \sin^{w}\!\left(\frac{\pi}{2} \times \frac{iteration_i}{max_{iterations}}\right) + \cos\!\left(\frac{\pi}{2} \times \frac{iteration_i}{max_{iterations}}\right) - 1 \right)$  (17)
$F = (2 \times rand_1 + 1) \times z \times \left(1 - \frac{iteration_i}{max_{iterations}}\right) + t$  (18)
where F denotes the satisfaction rate, max_iterations represents the total number of iterations initiated, iteration_i indicates the current iteration number, z is a randomly generated number in the range [-1, 1], h is a randomly generated number in the range [-2, 2], and rand_1 takes values in [0, 1]. A value of z below 0 means the vulture is starving, while a value above or equal to 0 means the vulture is satisfied.
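A minimal sketch of the satisfaction-rate computation of Equations (17) and (18) is shown below. The exponent w and the function name are our assumptions for illustration; the parameter ranges follow the text above.

```python
import math, random

def vulture_satisfaction(iteration, max_iterations, w=2.5):
    """Starvation/satisfaction rate F (Eqs. (17)-(18)); large |F| keeps
    the algorithm exploring, small |F| switches it to exploitation.
    w is an assumed exponent controlling the shape of t."""
    h = random.uniform(-2, 2)
    z = random.uniform(-1, 1)
    frac = iteration / max_iterations
    t = h * (math.sin(math.pi / 2 * frac) ** w
             + math.cos(math.pi / 2 * frac) - 1)          # Eq. (17)
    return (2 * random.random() + 1) * z * (1 - frac) + t  # Eq. (18)
```

As the iteration counter approaches max_iterations, both terms shrink toward zero, reflecting the transition from exploration to exploitation.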
Step Three (Exploration): Vultures in the AVOA experience difficulty finding food; hence, they explore their habitat for a long time and travel far and wide in search of food. Two strategies are adopted in this search process, and a parameter P_1 is used to decide which strategy is chosen. This parameter is set before the optimization operation and lies within the range [0, 1].
To choose a strategy in the exploration phase, a random number rand_P1 is generated; if P_1 >= rand_P1, Equation (20) is used; otherwise, Equation (21) is adopted. In this scenario, each vulture randomly searches the environment for food. This behavior is modeled in Equation (19):
$P(i+1) = \begin{cases} \text{Equation (20)} & \text{if } P_1 \ge rand_{P1} \\ \text{Equation (21)} & \text{if } P_1 < rand_{P1} \end{cases}$  (19)
$P(i+1) = R(i) - F + rand_2 \times ((ub - lb) \times rand_3 + lb)$  (20)
$P(i+1) = R(i) - D(i) \times F$  (21)
$D(i) = |X \times R(i) - P(i)|$  (22)
In Equation (21), vultures initiate a random search for food in their immediate environment, at a distance from one of the best vultures of the two groups. The vulture position vector in the next iteration is P(i+1), and F is the satisfaction rate of the vulture computed with Equation (18) at the present iteration. R(i) in Equation (21) is one of the best vultures, chosen via Equation (15). rand_2 is a random number within the range [0, 1] used to provide a high random coefficient at the environmental search scale, thus increasing the variety of search-space areas. D(i) in Equation (22) denotes the distance between the vulture and the present optimal one. X is a randomly generated value within the range [0, 2]. ub and lb denote the upper and lower boundary limits.
Step Four: (Exploitation) This stage is further divided into two parts.
Stage one: Stage one exploits the efficiency of the AVOA. The AVOA initiates the exploitation process if |F| < 1. P_2 is a parameter within the range [0, 1] adopted to decide which strategy is selected. A gentle siege-fight strategy is applied if P_2 >= rand_P2; otherwise, a rotational-flight strategy is adopted. This behavior is modeled in Equation (23):
$P(i+1) = \begin{cases} \text{Equation (24)} & \text{if } P_2 \ge rand_{P2} \\ \text{Equation (25)} & \text{if } P_2 < rand_{P2} \end{cases}$  (23)
$P(i+1) = D(i) \times (F + rand_4) - d(t)$  (24)
$P(i+1) = R(i) - (S_1 + S_2)$  (25)
$d(t) = R(i) - P(i)$  (26)
where rand_4 is a random value generated in the range [0, 1] and d(t) is the distance between one of the best vultures of one group and the best vultures of the other group, calculated using Equation (26).
S_1 and S_2 are computed using Equations (27) and (28), as expressed below:
$S_1 = R(i) \times \left(\frac{rand_5 \times P(i)}{2\pi}\right) \times \cos(P(i))$  (27)
$S_2 = R(i) \times \left(\frac{rand_6 \times P(i)}{2\pi}\right) \times \sin(P(i))$  (28)
where rand_5 and rand_6 in Equations (27) and (28) are random numbers in the range [0, 1].
Stage two: Stage two of the AVOA is employed if |F| < 0.5. At the start, rand_3 is produced in the range [0, 1]. If parameter P_3 >= rand_3, a strategy known as "competition for food" is initiated, which attracts various vultures to the food source. The position of the vultures is updated via Equation (29):
$P(i+1) = \frac{A_1 + A_2}{2}$  (29)
Equations (30) and (31) compute A_1 and A_2, as explained below:
$A_1 = BestVulture_1(i) - \frac{BestVulture_1(i) \times P(i)}{BestVulture_1(i) - (P(i))^2} \times F$  (30)
$A_2 = BestVulture_2(i) - \frac{BestVulture_2(i) \times P(i)}{BestVulture_2(i) - (P(i))^2} \times F$  (31)
In the second stage of AVOA exploitation, vultures gather toward the best vulture to feast on leftover food. As a result, the vulture's position is updated using Equation (32):
$P(i+1) = R(i) - |d(t)| \times F \times Levy(d)$  (32)
where d is the problem's dimension; the "Levy flight" phenomenon enhances AVOA efficiency and is described as
$LF(x) = 0.001 \times \frac{u \times \sigma}{|v|^{1/\beta}}$  (33)
$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}} \right)^{1/\beta}$  (34)
where β = 1.5 (a constant value) and v and u are randomly generated numbers within the range [0, 1].
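The Levy-flight step of Equations (33) and (34) can be computed directly with the standard-library gamma function; the function name is ours, and u and v are drawn uniformly from [0, 1] as stated above.

```python
import math, random

def levy_flight(beta=1.5):
    """Levy flight step LF (Eqs. (33)-(34)) used in AVOA's second
    exploitation stage; beta = 1.5 as in the text."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2)
                * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq. (34)
    u, v = random.random(), random.random()
    return 0.001 * u * sigma / abs(v) ** (1 / beta)             # Eq. (33)
```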
For more details about the AVOA, see [42].

2.3. Gorilla Troop Optimization (GTO)

A new metaheuristic algorithm inspired by the pack behavior of gorillas, developed by [43], is mathematically modeled around the two key phases of optimization (exploration and exploitation). The GTO algorithm uses five operators across exploration and exploitation:
Exploitation: following the silverback, and competition for adult females.
Exploration: migrating to unknown places, migrating around known places, and moving toward other gorilla groups.

2.3.1. Exploration Process

The process applied in the exploration phase is explained here. Gorillas live in groups under the leadership and dominance of the silverback; however, there are times when gorillas move away from their group. This movement can occur in three ways, governed by a parameter P in the range [0, 1]: gorillas can move to unknown places (if rand < P), toward known places (if rand >= 0.5), or toward other gorilla groups (if rand < 0.5). The first movement type lets the algorithm survey the entire search space well, the second improves the algorithm's exploration performance, and the third enhances the algorithm's ability to escape local optima. Equation (35) models this behavior:
$GX(t+1) = \begin{cases} (UB - LB) \times r_1 + LB & rand < P \\ (r_2 - C) \times X_r(t) + L \times H & rand \ge 0.5 \\ X(i) - L \times \big(L \times (X(t) - GX_r(t)) + r_3 \times (X(t) - GX_r(t))\big) & rand < 0.5 \end{cases}$  (35)
GX(t+1) is the candidate position vector of a gorilla in the next iteration t, and X(t) is the present position vector of the gorilla. r_1, r_2, r_3, and rand are random values within the range [0, 1], updated in each iteration. UB and LB denote the upper and lower boundary limits of the variables. X_r is a member of the gorilla group selected randomly from the entire population, and GX_r is one of the candidate gorilla position vectors, selected randomly; it also includes the positions updated in each phase. C, H, and L are computed using Equations (36)-(38):
$C = F \times \left(1 - \frac{It}{maxIt}\right)$  (36)
$F = \cos(2 \times r_4) + 1$  (37)
$L = C \times l$  (38)
It in Equation (36) is the current iteration value, maxIt is the total number of iterations performed, cos in Equation (37) denotes the cosine function, and r_4 is a random value in the range [0, 1] updated in each iteration. l in Equation (38) is a random value in the range [-1, 1]. Silverback dominance is modeled using Equation (38). In Equation (35), H is computed using Equation (39), and Z is computed using Equation (40):
$H = Z \times X(t)$  (39)
$Z = [-C, \; C]$  (40)
where Z is a random value in the problem dimensions within the range [-C, C].
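The three exploration branches of Equation (35) can be sketched as a single update function; the function name, the scalar treatment of Z, and the default value of p are our assumptions for illustration.

```python
import math, random

def gto_explore(X, GX_r, X_r, lb, ub, it, max_it, p=0.03):
    """One GTO exploration move (Eqs. (35)-(40)); the branch taken
    depends on a fresh random draw, as described in the text."""
    F = math.cos(2 * random.random()) + 1           # Eq. (37)
    C = F * (1 - it / max_it)                       # Eq. (36)
    L = C * random.uniform(-1, 1)                   # Eq. (38)
    r = random.random()
    if r < p:
        # Migrate to an unknown place: uniform resampling in the bounds.
        return [(u - l) * random.random() + l for l, u in zip(lb, ub)]
    if r >= 0.5:
        # Move toward known places around another gorilla X_r.
        Z = random.uniform(-C, C)                   # Eq. (40)
        H = [Z * x for x in X]                      # Eq. (39)
        return [(random.random() - C) * xr + L * h
                for xr, h in zip(X_r, H)]
    # Move toward another gorilla group GX_r.
    return [x - L * (L * (x - gx) + random.random() * (x - gx))
            for x, gx in zip(X, GX_r)]
```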

2.3.2. Exploitation Process

The two behaviors applied here are following the silverback and competition for adult females. The silverback dominates the group, makes decisions for the group, directs gorillas to food sources, and determines their movements. Silverbacks may get old and die eventually, thus letting blackbacks (the young male gorillas) become the leader, or the other male gorillas may fight the silverback and dominate the group. Each of the behaviors in the exploitation phase, as already mentioned, can be selected using C in Equation (36); if C W , the following silverback behavior is selected, while if C < W , competition for adult females is selected. W is a value to be set before optimization.
Equation (41) simulates following the silverback:
$GX(t+1) = L \times M \times (X(t) - X_{silverback}) + X(t)$  (41)
where X_silverback is the position vector of the silverback gorilla, X(t) is the position vector of the gorilla, L is computed using Equation (38), and M is computed using Equation (42):
$M = \left( \left| \frac{1}{N} \sum_{i=1}^{N} GX_i(t) \right|^{g} \right)^{1/g}$  (42)
$g = 2^{L}$  (43)
where GX_i(t) indicates each candidate gorilla's position vector in iteration t, N is the total number of gorillas, g is estimated via Equation (43), and L in Equation (43) is computed using Equation (38). The competition for adult females uses the quantities Q and A of Equations (44) and (45):
$Q = 2 \times r_5 - 1$  (44)
$A = \beta \times E$  (45)
where r_5 is a random value within the range [0, 1], A in Equation (45) is a coefficient vector that determines the degree of violence in conflicts, β is a parameter set before the optimization operation, and E in Equation (45) models the effect of violence on the dimensions of the solution. If rand >= 0.5, E is equal to a vector of random values drawn from a normal distribution with the problem's dimensions, while if rand < 0.5, E is equal to a single random value from a normal distribution. For more details about GTO, see [43].

2.4. Gradient-Based Optimization (GBO)

A search engine based on Newton's method, developed by [44], employs a set of vectors to search the solution space using two operators: the gradient search rule (GSR) and the local escaping operator (LEO). These two operators help balance the exploitation and exploration processes.

2.4.1. Initialization

In GBO, a population of N vectors in a D-dimensional search space is initialized as shown in Equation (46):
$X_n = X_{min} + rand(0, 1) \times (X_{max} - X_{min})$  (46)
where X_min and X_max are the boundary limits of the variables X and rand is a randomly generated number within the range [0, 1].

2.4.2. GSR Process

An important factor, ρ, is adopted to balance the exploration of important regions of the search space while maintaining positions near the global optimum. The factor ρ is based on the sine function and is expressed as follows:
$\rho_1 = 2 \times rand \times \alpha - \alpha$  (47)
$\alpha = \left| \beta \times \sin\left( \frac{3\pi}{2} + \sin\left( \beta \times \frac{3\pi}{2} \right) \right) \right|$  (48)
$\beta = \beta_{min} + (\beta_{max} - \beta_{min}) \times \left( 1 - \left( \frac{m}{M} \right)^3 \right)^2$  (49)
where β_min and β_max are constant values of 0.2 and 1.2, respectively, m is the current iteration, and M is the maximum number of iterations. The ρ value changes during the optimization iterations: starting with a large value to ensure wide variety, it varies through the iterations within an assigned range, increasing GBO solution diversity and enabling the algorithm to explore multiple solutions to the problem. The GSR is expressed as
$GSR = randn \times \rho_1 \times \frac{2 \Delta x \times x_n}{x_{worst} - x_{best} + \varepsilon}$  (50)
GBO uses random behavior to create an exploration method, including escaping local optima. In Equation (50), the random offset Δx reflects the difference between a randomly chosen solution x_r1^m and the best solution x_best; the meaning of the Δx variable changes during the iterations following Equation (53) below. randn is an additional random number used for exploration, as shown below:
$\Delta x = rand(1{:}N) \times |step|$  (51)
$step = \frac{(x_{best} - x_{r1}^{m}) + \delta}{2}$  (52)
$\delta = 2 \times rand \times \left( \left| \frac{x_{r1}^{m} + x_{r2}^{m} + x_{r3}^{m} + x_{r4}^{m}}{4} - x_n^m \right| \right)$  (53)
where rand(1:N) denotes a random vector of N elements in the range [0, 1], and the integers r1, r2, r3, and r4 are chosen randomly such that r1 ≠ r2 ≠ r3 ≠ r4 ≠ n. Equation (52) expresses a step size computed from x_best and x_r1^m.
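A scalar sketch of the gradient search rule of Equations (47)-(50) is given below, with Δx supplied by the caller; the function name and default parameter values (β_min = 0.2, β_max = 1.2, as stated above) are assumptions for illustration.

```python
import math, random

def gsr(x_n, x_best, x_worst, delta_x, m, M,
        beta_min=0.2, beta_max=1.2, eps=1e-12):
    """Gradient search rule (Eqs. (47)-(50)): the step is modulated by
    the adaptive parameter beta, which decays over iterations m = 1..M."""
    beta = beta_min + (beta_max - beta_min) * (1 - (m / M) ** 3) ** 2  # Eq. (49)
    alpha = abs(beta * math.sin(3 * math.pi / 2
                                + math.sin(beta * 3 * math.pi / 2)))  # Eq. (48)
    rho1 = 2 * random.random() * alpha - alpha                        # Eq. (47)
    randn = random.gauss(0, 1)
    return randn * rho1 * (2 * delta_x * x_n) / (x_worst - x_best + eps)  # Eq. (50)
```

With Δx = 0 the rule yields no movement, consistent with Equation (50).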
For convergence, directed movement is adopted to enable convergence across the solution field x_n. To provide a convenient local search ability that affects GBO convergence, a direction-of-movement property DM uses the best vector from the population of candidate vectors and moves the present vector x_n in the direction x_best − x_n; it is calculated as follows:
$DM = rand \times \rho_2 \times (x_{best} - x_n)$  (54)
where rand is a uniformly generated number in the range [0, 1] and ρ_2 is a random value adopted to scale the step size of each vector; ρ_2 is also an important parameter and is calculated as
$\rho_2 = 2 \times rand \times \alpha - \alpha$  (55)
Using the GSR and DM, the present vector position x_n^m is improved as shown in Equation (56):
$x1_n^m = x_n^m - GSR + DM$  (56)
where x1_n^m is the improved vector. Substituting Equations (50) and (54), the x1_n^m transformation is expressed as
$x1_n^m = x_n^m - randn \times \rho_1 \times \frac{2 \Delta x \times x_n^m}{yp_n^m - yq_n^m + \varepsilon} + randn \times \rho_2 \times (x_{best} - x_n^m)$  (57)
where yp_n^m = y_n + Δx, yq_n^m = y_n − Δx, and y_n is the average of two vectors: the present solution x_n and z_{n+1}. For more details about GBO, see [44]. Figure 2 shows the optimization process of GBO.

2.5. Runge Kutta Optimization (RUN)

The RUN algorithm concept comes from the Runge-Kutta method for solving ordinary differential equations in numeric form and was developed by [45]. The algorithm has two stages: the first stage is based on Runge-Kutta theory, while the second stage is known as enhanced solution quality (ESQ). The RUN algorithm is explained as follows:
Solution Updating: RUN optimization adopts a search mechanism (SM) from the Runge-Kutta concept in updating the present solution position of each iteration; this is scripted in stage one below.
Stage One: Search mechanism for updating the present solution position in RUN.
Step 1: If rand < 0.5 then
Exploration process:
$x_{n+1} = (x_c + r \times SF \times g \times x_c) + SF \times SM + \mu \times randn \times (x_m - x_c)$  (58)
Else
Exploitation process:
$x_{n+1} = (x_m + r \times SF \times g \times x_m) + SF \times SM + \mu \times randn \times (x_{r1} - x_{r2})$  (59)
End if.
r represents an integer that takes the value 1 or −1; this parameter helps enhance RUN diversity. g is a randomly generated value within the range [0, 2], as is the parameter μ. SM is computed as in the study [30]. SF is an adaptive factor computed as
$SF = 2 \times (0.5 - rand) \times f$  (60)
$f = a \times \exp\left( -b \times rand \times \frac{i}{max_{it}} \right)$  (61)
where max_it is the maximum number of iterations, and the parameters x_c and x_m are computed as
$x_c = \varphi \times x_n + (1 - \varphi) \times x_{r1}$  (62)
$x_m = \varphi \times x_{best} + (1 - \varphi) \times x_{cbest}$  (63)
where φ is a randomly generated number within the range [0, 1], x_best is the best solution found, and x_cbest is the position of the best solution found after each iteration.
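A scalar sketch of the stage-one update of Equations (58)-(63) follows, treating the search mechanism SM and adaptive factor SF as already computed inputs; the function name and default μ are ours.

```python
import random

def run_update(x_n, x_r1, x_r2, x_best, x_cbest, SM, SF,
               g=None, mu=0.5):
    """RUN stage-one solution update (Eqs. (58)-(63)); g defaults to a
    fresh draw in [0, 2], and r is +/-1 as described in the text."""
    if g is None:
        g = random.uniform(0, 2)
    r = random.choice([-1, 1])
    phi = random.random()
    x_c = phi * x_n + (1 - phi) * x_r1            # Eq. (62)
    x_m = phi * x_best + (1 - phi) * x_cbest      # Eq. (63)
    if random.random() < 0.5:                     # exploration, Eq. (58)
        return (x_c + r * SF * g * x_c) + SF * SM \
               + mu * random.gauss(0, 1) * (x_m - x_c)
    # exploitation, Eq. (59)
    return (x_m + r * SF * g * x_m) + SF * SM \
           + mu * random.gauss(0, 1) * (x_r1 - x_r2)
```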
Stage two: Enhanced solution quality (ESQ).
This process is adopted to increase solution quality and evade local optimal trapping during iterations. The process is scripted below:
Step 2: ESQ adoption in RUN to compute solution x n e w 2 .
If rand < 0.5 then
If w < 1 then
$x_{new2} = x_{new1} + r \times w \times |(x_{new1} - x_{avg}) + randn|$  (64)
Else
$x_{new2} = (x_{new1} - x_{avg}) + r \times w \times |(u \times x_{new1} - x_{avg}) + randn|$  (65)
End if
End if.
From Step 2, the values of w, x_avg, and x_new1 are calculated via the equations below:
$w = rand(0, 2) \times \exp\left( -c \times \frac{i}{max_{it}} \right)$  (66)
$x_{avg} = \frac{x_{r1} + x_{r2} + x_{r3}}{3}$  (67)
$x_{new1} = \beta \times x_{avg} + (1 - \beta) \times x_{best}$  (68)
where β is a randomly generated number within the range [0, 1], c = 5 × rand, the parameter rand is random, and r is an integer taking the value −1, 0, or 1. x_best is the best solution found. The computed solution x_new2 does not always have better fitness than the best solution; thus, the RUN algorithm further computes x_new3 to improve fitness via Step 3.
Step 3: Improving the new solution x n e w 3 .
If rand < w then
$x_{new3} = (x_{new2} - rand \times x_{new2}) + SF \times (rand \times x_{RK} + (v \times x_b - x_{new2}))$  (69)
End if.
where v is a random number equal to 2 × rand. Figure 3 shows the Runge-Kutta optimization process. For more details about RUN, see [45].

3. Problem Formulation

3.1. Power System Model

The power system dynamic model is described using the differential-algebraic equations (DAEs) in [46] and represents a power system with m synchronous machines, each equipped with an automatic voltage regulator (AVR). The DAEs are given in Equations (70)-(74):
$T'_{d0i} \frac{dE'_{qi}}{dt} = -E'_{qi} - (X_{di} - X'_{di}) I_{di} + E_{fdi}$  (70)
$T'_{q0i} \frac{dE'_{di}}{dt} = -E'_{di} + (X_{qi} - X'_{qi}) I_{qi}$  (71)
$\frac{d\delta_i}{dt} = \omega_i - \omega_s$  (72)
$\frac{2H_i}{\omega_s} \frac{d\omega_i}{dt} = T_{Mi} - E'_{di} I_{di} - E'_{qi} I_{qi} - (X'_{qi} - X'_{di}) I_{di} I_{qi} - D_i (\omega_i - \omega_s)$  (73)
$T_{Ai} \frac{dE_{fdi}}{dt} = -E_{fdi} + K_{Ai} (V_{refi} - V_i)$  (74)
where in Equations (70)-(74) the subscript i denotes the i-th synchronous generator, T'_d0 and T'_q0 are the d-axis and q-axis open-circuit transient time constants, E'_d and E'_q are the transient EMFs of the d-axis and q-axis due to flux linkage in the damper coils, E_fd is the excitation field voltage, X_d and X_q are the d-axis and q-axis synchronous reactances and X'_d and X'_q the corresponding transient reactances, δ is the rotor angle of the generator, ω is the generator rotor speed, ω_s is the generator synchronous speed, H is the generator inertia constant, D is the damping coefficient, T_M is the mechanical torque (power input), I_d and I_q are the d-axis and q-axis components of the stator current, V_ref is the reference excitation voltage, V is the terminal voltage of the generator, K_A is the static excitation gain, T_A is the regulator time constant, T_E is the electrical torque, V_d and V_q are the d-axis and q-axis components of the generator terminal voltage, and R_s is the armature resistance.
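To make the rotor dynamics of Equations (72) and (73) concrete, the following is a minimal explicit-Euler integration step; it is an illustrative sketch (not the paper's simulation code), assuming a 60 Hz system and lumping the electrical-torque terms of Equation (73) into a single input Te.

```python
def swing_step(delta, omega, Tm, Te, H, D,
               ws=2 * 3.141592653589793 * 60, dt=0.001):
    """One explicit-Euler step of the rotor equations (Eqs. (72)-(73)).
    Te stands for the aggregated electrical-torque terms of Eq. (73);
    ws is the synchronous speed in rad/s for a 60 Hz machine."""
    d_delta = omega - ws                                   # Eq. (72)
    d_omega = ws / (2 * H) * (Tm - Te - D * (omega - ws))  # Eq. (73)
    return delta + dt * d_delta, omega + dt * d_omega
```

At synchronous speed with mechanical and electrical torque in balance, the state is an equilibrium and the step leaves it unchanged, which is the operating point around which the PSS design linearizes the system.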
T M i , the input mechanical torque, is kept constant in designing the excitation controller, i.e., to not significantly affect machine dynamics, the generator action is assumed to be slow. Electrical torque is substituted in Equation (73) as follows:
$$T_{Ei} = E'_{di}I_{di} + E'_{qi}I_{qi} + (X'_{qi} - X'_{di})I_{di}I_{qi} \tag{75}$$
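To make the rotor dynamics of Equations (72), (73) and (75) concrete, the short sketch below integrates a simplified one-machine swing equation with forward Euler. All numerical values (inertia, damping, torque levels) are illustrative placeholders rather than WSCC data, and the electrical torque is collapsed to a single sin(δ) term.

```python
import numpy as np

# Forward-Euler integration of a simplified swing equation (cf. Eqs. (72)-(73)).
# All parameter values below are illustrative, not taken from the WSCC system.
ws = 2 * np.pi * 60      # synchronous speed (rad/s)
H, D = 6.4, 0.05         # inertia constant (s) and damping coefficient
Tm = 0.9                 # constant mechanical torque (pu), per Section 3.1
b = 1.2                  # simplified electrical torque: Te = b * sin(delta)
dt, steps = 1e-3, 5000   # 5 s of simulated time
delta, w = 0.5, ws       # initial rotor angle (rad) and rotor speed (rad/s)
for _ in range(steps):
    Te = b * np.sin(delta)                         # Eq. (75), one-term version
    d_delta = w - ws                               # Eq. (72)
    d_w = ws / (2 * H) * (Tm - Te - D * (w - ws))  # Eq. (73)
    delta += dt * d_delta
    w += dt * d_w
# The rotor oscillates about, and decays toward, the equilibrium angle
# arcsin(Tm / b), mirroring the damped electromechanical modes analyzed later.
```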
A power grid of n buses and m generators, and hence n − m load buses, is described by the algebraic Equations (76)–(78):
$$0 = V_i e^{j\theta_i} + (R_{si} + jX'_{di})(I_{di} + jI_{qi})e^{j(\delta_i - \pi/2)} - \left[E'_{di} + (X'_{qi} - X'_{di})I_{qi} + jE'_{qi}\right]e^{j(\delta_i - \pi/2)}, \quad i = 1,\dots,m \tag{76}$$
$$V_i e^{j\theta_i}(I_{di} - jI_{qi})e^{-j(\delta_i - \pi/2)} + P_{Li}(V_i) + jQ_{Li}(V_i) = \sum_{k=1}^{n} V_i V_k Y_{ik} e^{j(\theta_i - \theta_k - \alpha_{ik})}, \quad i = 1,\dots,m \tag{77}$$
$$P_{Li}(V_i) + jQ_{Li}(V_i) = \sum_{k=1}^{n} V_i V_k Y_{ik} e^{j(\theta_i - \theta_k - \alpha_{ik})}, \quad i = m+1,\dots,n \tag{78}$$
The load active and reactive powers are represented by P_L and Q_L, respectively; Y_{ik}e^{jα_{ik}} denotes the (i,k) element of the power system admittance matrix, and θ is the angle of the bus voltage V. The load buses are eliminated from the admittance matrix by the order-reduction method, and linearizing the resulting model about an operating point gives Equation (79):
$$\Delta\dot{x} = A\,\Delta x + B\,\Delta u \tag{79}$$
The power system linear model is described by Equation (79), where Δx is the vector of system state variables, A is the system state matrix, B is the system input matrix, and Δu is the system control input vector.
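The small-signal behavior of Equation (79) is governed by the eigenvalues of A: each complex pair σ ± jω contributes an oscillatory mode of frequency ω/2π and damping ratio −σ/√(σ² + ω²). A minimal sketch of this computation, using a hypothetical 2 × 2 state matrix rather than the much larger WSCC linearization, is:

```python
import numpy as np

# Hypothetical 2-state matrix standing in for the (much larger) WSCC A matrix
A = np.array([[0.0,  377.0],
              [-0.2, -0.05]])
eigvals = np.linalg.eigvals(A)
modes = []
for lam in eigvals[eigvals.imag > 0]:    # one entry per complex-conjugate pair
    zeta = -lam.real / abs(lam)          # damping ratio
    f_hz = lam.imag / (2 * np.pi)        # modal frequency in Hz
    modes.append((zeta, f_hz))
```

For this toy matrix the single oscillatory mode is stable but very lightly damped, which is exactly the situation a PSS is meant to correct.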

3.2. PSS Design Procedure

For analysis, this study employs a conventional lead-lag PSS connected to an IEEE-ST1-type excitation system [18]. The transfer function of the ith PSS in Equation (80) describes its connection with the IEEE-ST1 excitation system:
$$G_i(s) = \frac{V_{PSSi}(s)}{\Delta\omega_i(s)} = K_{PSSi}\,\frac{T_w s}{1 + sT_w}\,\frac{(1 + sT_{1i})(1 + sT_{3i})}{(1 + sT_{2i})(1 + sT_{4i})} \tag{80}$$
where V_{PSSi} is the output signal of the PSS at the ith machine, T_w is the washout time constant (equal to 10 s in this study), and Δω_i is the synchronous machine speed-deviation signal. The stabilizer gain K_{PSSi} and the time constants T_{1i}, T_{2i}, T_{3i}, and T_{4i} are the parameters to be determined.
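As an illustration of Equation (80), the snippet below evaluates the frequency response of the lead-lag stabilizer at a given oscillation frequency. The parameter values are the GBO-tuned G2 gains from Table 4; the chosen test frequency of 1.2 Hz is simply a representative low-frequency-oscillation value, not a quantity stated in the paper.

```python
import numpy as np

def pss_freq_response(f_hz, K, T1, T2, T3, T4, Tw=10.0):
    """Evaluate the lead-lag PSS transfer function of Eq. (80) at f_hz."""
    s = 1j * 2 * np.pi * f_hz
    washout = K * (Tw * s) / (1 + s * Tw)   # high-pass: removes the DC offset
    lead_lag = ((1 + s * T1) / (1 + s * T2)) * ((1 + s * T3) / (1 + s * T4))
    return washout * lead_lag

# GBO-tuned parameters for G2 (Table 4)
G = pss_freq_response(1.2, 3.9795, 0.8135, 0.0257, 0.4994, 0.0242)
gain_db = 20 * np.log10(abs(G))
phase_deg = np.degrees(np.angle(G))  # positive => phase lead at this frequency
```

Because T1 > T2 and T3 > T4 in the tuned design, both stages contribute phase lead in the electromechanical frequency range, which is the compensation a lead-lag PSS is meant to provide.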
An eigenvalue-based objective function is employed to enhance the damping of the electromechanical modes (EMs) of the system by shifting the eigenvalues of the power system toward the left half of the complex s-plane. The stabilizer gain and time constants of each PSS are determined by minimizing the objective function defined in Equation (81):
$$J_{eig} = \max\{\,\mathrm{Re}(\lambda_i) \mid \lambda_i \in EMs\,\} + P_C \sum\{\,\mathrm{Re}(\lambda_j) \mid \mathrm{Re}(\lambda_j) > 0\,\}, \qquad EMs = \left\{\lambda_k \;\Big|\; 0 < \frac{\mathrm{Im}(\lambda_k)}{2\pi} < 5\right\} \tag{81}$$
The eigenvalues of the power system state-space matrix are denoted by λ_i, while P_C is a penalty constant that discourages positive (unstable) eigenvalues and accelerates slow ones [18]; in this study, P_C = 50. The objective function J_eig is minimized subject to the PSS gain and time-constant constraints 0.001 ≤ K_{PSSi} ≤ 50, 0.001 ≤ T_{1i}, T_{3i} ≤ 1, and 0.02 ≤ T_{2i}, T_{4i} ≤ 1 [47].
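A direct transcription of Equation (81) is sketched below. The penalty term is written as P_C times the sum of any positive real parts, which is one common reading of the formula; the paper does not spell out the exact aggregation, so treat that detail as an assumption.

```python
import numpy as np

def j_eig(eigvals, pc=50.0):
    """Eigenvalue-based objective of Eq. (81).

    eigvals: iterable of complex closed-loop eigenvalues.
    Electromechanical modes (EMs) are those with 0 < Im(lambda)/(2*pi) < 5 Hz.
    """
    ems = [lam for lam in eigvals if 0 < lam.imag / (2 * np.pi) < 5]
    worst_em = max(lam.real for lam in ems) if ems else 0.0
    penalty = pc * sum(lam.real for lam in eigvals if lam.real > 0)
    return worst_em + penalty

# GBO-tuned modes 1 and 2 from Table 5 (upper-half-plane members only):
# both fall in the 0-5 Hz EM band, so the objective is the worst EM real part.
val = j_eig([complex(-4.875, 18.240), complex(-3.795, 7.660)])
```

Minimizing this objective pushes the worst-damped electromechanical mode as far left in the s-plane as the PSS parameter bounds allow, while the penalty term makes any unstable candidate strongly unattractive to the optimizer.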

4. Results and Discussion

The power test system adopted in this study is the Western System Coordinating Council (WSCC) three-machine, nine-bus test system shown in Figure 4. The modeling parameters for the test system are listed in Table 1, and the standard generator and load parameters also appear in Figure 4. PSSs are coupled to generators G2 and G3, with a washout time constant T_w = 10 s, so the total number of optimized parameters was 10 (five per PSS). The five algorithms were each run 15 times on the model, with the parameter settings shown in Table 2. The study was carried out under the different operating conditions listed in Table 3, on a 100 MVA per-unit base. The optimal PSS parameters obtained for generators G2 and G3 under the base condition are given in Table 4, and five dominant modes of the power test system are considered, as shown in Table 5. Eigenvalue analysis and nonlinear time-domain simulations were then run for the different operating conditions under a 100 ms symmetrical three-phase fault applied at t = 1 s.

4.1. Base Condition

In the base operating condition from Table 3, the optimal PSS parameters obtained for generators (G2 and G3) are shown in Table 4, where the optimized PSS parameters were used to carry out a small signal stability analysis (SSA).
Table 4. Optimal PSS parameters.
Machine   Algorithm   K_PSS2   T_21     T_22     T_23     T_24
G2        AEO         6.2789   0.5222   0.0216   0.5813   0.0315
          AVOA        6.0246   0.5609   0.0318   0.5622   0.0220
          GTO         4.5150   0.6839   0.0309   0.6122   0.0263
          RUN         3.5079   0.7349   0.0223   0.7106   0.0343
          GBO         3.9795   0.8135   0.0257   0.4994   0.0242
Machine   Algorithm   K_PSS3   T_31     T_32     T_33     T_34
G3        AEO         6.1017   0.0018   0.1678   0.0012   0.2185
          AVOA        4.8140   0.0197   0.3420   0.1815   0.2420
          GTO         2.5359   0.0036   0.9930   0.9996   0.3233
          RUN         2.5617   0.5713   0.0202   0.5842   0.0209
          GBO         3.0922   0.8156   0.0329   0.6459   0.0287
With no PSS on the power system, electromechanical modes 1 and 2 (−0.686 ± 12.776i and −0.123 ± 8.287i) have weak damping ratios. These modes change significantly after the optimization process, particularly for the RUN (−4.989 ± 15.024i, −3.834 ± 8.270i) and GBO (−4.875 ± 18.240i, −3.795 ± 7.660i) algorithms, as shown in Table 5. Moreover, the worst damping ratio of 0.0148 was enhanced to 0.4203 for RUN and 0.4439 for GBO, as shown in Table 6.
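The damping ratios in Table 6 follow directly from the eigenvalues in Table 5 via ζ = −σ/√(σ² + ω²) for a mode λ = σ ± jω. The quick check below reproduces the mode-2 figures quoted above to within the tables' rounding:

```python
import numpy as np

def damping_ratio(lam):
    """zeta = -Re(lam) / |lam| for an oscillatory mode lam = sigma + j*omega."""
    return -lam.real / abs(lam)

# Mode 2 of Table 5: no PSS vs. the RUN- and GBO-tuned designs
no_pss = complex(-0.123, 8.287)   # worst-damped mode, zeta ~ 0.0148
run_2  = complex(-3.834, 8.270)   # zeta ~ 0.42 (Table 6: 0.4203)
gbo_2  = complex(-3.795, 7.660)   # zeta ~ 0.44 (Table 6: 0.4439)
zetas = [damping_ratio(lam) for lam in (no_pss, run_2, gbo_2)]
```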
Table 5. Eigenvalue results for NO-PSS and the algorithms after optimization in base operating condition.
Mode   NO-PSS             AEO                AVOA               GTO                RUN                GBO
1      −0.686 ± 12.776i   −2.972 ± 11.355i   −2.847 ± 11.384i   −3.077 ± 11.429i   −4.989 ± 15.024i   −4.875 ± 18.240i
2      −0.123 ± 8.287i    −3.121 ± 11.312i   −3.205 ± 11.316i   −3.062 ± 11.407i   −3.834 ± 8.270i    −3.795 ± 7.660i
3      −2.379 ± 2.617i    −2.955 ± 3.014i    −2.844 ± 3.075i    −3.094 ± 3.064i    −3.837 ± 3.051i    −3.888 ± 3.185i
4      −4.671 ± 1.375i    −4.298 ± 2.033i    −4.435 ± 1.977i    −4.199 ± 2.222i    −3.832 ± 2.190i    −3.789 ± 2.202i
5      −3.5199 ± 1.016i   −3.374 ± 2.052i    −3.360 ± 1.554i    −3.603 ± 1.670i    −3.832 ± 2.090i    −3.789 ± 2.131i
After fault clearance the system returns to its pre-fault form, and the generator rotor speed and angle variations indicate the stability status of the power system. Hence, the rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations of generators 2 and 3 with respect to generator 1 (δ2 − δ1, δ3 − δ1) are plotted in Figure 5. The time-domain simulations and convergence curves in Figure 5 show that the mathematically inspired RUN- and GBO-designed PSSs damp oscillations more effectively than the nature-inspired AEO, AVOA, and GTO; the two former algorithms are therefore recommended for robust optimal PSS design. Table 7 summarizes the transient analysis of Figure 5 in terms of rise time and settling time, which are key indicators of each algorithm's performance. The settling time, i.e., the time each algorithm's PSS design takes to suppress the fault-induced low-frequency oscillation, is presented as a bar chart in Figure 6. Without a PSS controller, the system remains unstable for the whole simulation period, so its settling time is not considered. Of the five algorithms compared, GBO and RUN show slight improvements in the bar charts for both the rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations of generators 2 and 3 with respect to generator 1 (δ2 − δ1, δ3 − δ1).
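Settling times like those in Tables 7–10 can be extracted from a simulated deviation signal as the last instant the response leaves a tolerance band around its final value. The sketch below uses a 2% band on the peak deviation; the paper does not state its exact band, so that choice is an assumption, and the test signal is a synthetic damped oscillation rather than an actual WSCC trace.

```python
import numpy as np

def settling_time(t, y, band=0.02):
    """Last time |y - y_final| exceeds band * (peak deviation)."""
    dev = np.abs(y - y[-1])
    tol = band * dev.max()
    outside = np.nonzero(dev > tol)[0]
    if outside.size == 0:
        return t[0]                              # never leaves the band
    return t[min(outside[-1] + 1, len(t) - 1)]   # first instant back inside

# Synthetic rotor-speed deviation: damped 1.6 Hz oscillation after a fault
t = np.linspace(0.0, 5.0, 5001)
y = np.exp(-2.0 * t) * np.cos(2 * np.pi * 1.6 * t)
ts = settling_time(t, y)
```

For this signal the exponential envelope crosses the 2% band just before t = 2 s, so the estimated settling time lands in that neighborhood; a better-damped design would shrink it, which is exactly the comparison the bar charts make.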

4.2. Condition 1

To further verify the robustness of the RUN- and GBO-based PSSs, the operating conditions were changed according to Table 3. Figure 7 shows the generator rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations of generators 2 and 3 with respect to generator 1 (δ2 − δ1, δ3 − δ1), together with the convergence curves. Table 8 summarizes the transient analysis of Figure 7 in terms of rise time and settling time, which are key indicators of the performance of each algorithm, and Figure 8 presents the settling times, i.e., the time each algorithm's PSS design takes to suppress the fault-induced low-frequency oscillation, as a bar chart. Without a PSS controller, the system remains unstable for the whole simulation period, so its settling time is not considered. Of the five algorithms compared, GBO and RUN again show slight improvements in the bar charts for both the rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations (δ2 − δ1, δ3 − δ1).

4.3. Condition 2

The operating conditions were again changed according to Table 3. Figure 9 shows the generator rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations of generators 2 and 3 with respect to generator 1 (δ2 − δ1, δ3 − δ1), together with the convergence curves. Table 9 summarizes the transient analysis of Figure 9 in terms of rise time and settling time, which are key indicators of the performance of each algorithm, and Figure 10 presents the settling times, i.e., the time each algorithm's PSS design takes to suppress the fault-induced low-frequency oscillation, as a bar chart. Without a PSS controller, the system remains unstable for the whole simulation period, so its settling time is not considered. Of the five algorithms compared, GBO and RUN again show slight improvements in the bar charts for both the rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations (δ2 − δ1, δ3 − δ1).

4.4. Condition 3

The condition 3 operating point is given in Table 3. Figure 11 shows the generator rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations of generators 2 and 3 with respect to generator 1 (δ2 − δ1, δ3 − δ1), together with the convergence curves. Table 10 summarizes the transient analysis of Figure 11 in terms of rise time and settling time, which are key indicators of the performance of each algorithm, and Figure 12 presents the settling times, i.e., the time each algorithm's PSS design takes to suppress the fault-induced low-frequency oscillation, as a bar chart. Without a PSS controller, the system remains unstable for the whole simulation period, so its settling time is not considered. Of the five algorithms compared, GBO and RUN again show slight improvements in the bar charts for both the rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations (δ2 − δ1, δ3 − δ1).

5. Conclusions

This study compared five meta-heuristic optimization algorithms for PSS design to damp low-frequency oscillations in a multi-machine power system. Three of the five algorithms are nature-inspired (AEO, AVOA, and GTO), while the remaining two are based on mathematical concepts (RUN and GBO). An eigenvalue-based objective function was defined to achieve this aim, and eigenvalue analysis and time-domain simulations were carried out under different operating conditions of the multi-machine system. In the base condition, the optimized PSS parameters used for small-signal stability analysis improved the worst damping ratio of the system from 0.0148 (1.48%) with no PSS controller to 0.2660 (26.60%) for AEO-PSS, 0.2725 (27.25%) for AVOA-PSS, 0.2592 (25.92%) for GTO-PSS, 0.4203 (42.03%) for RUN-PSS, and 0.4439 (44.39%) for GBO-PSS. Furthermore, in the 5 s time-domain simulations of the different conditions (base and conditions 1, 2, and 3), GBO and RUN provided a slight overall improvement in settling time for the generator rotor speed deviations (ω2 − ω1, ω3 − ω1) and the angle deviations (δ2 − δ1, δ3 − δ1) of generators 2 and 3 with respect to generator 1; this improvement is evident in the bar plots of the settling-time values. The power test model used in this study was the Western System Coordinating Council system, a standard IEEE test system; the approach can reasonably be expected to extend to larger test systems (for example, the 10-machine New England test system) with similar results. Conclusively, RUN and GBO outperformed AEO, AVOA, and GTO in optimally designing lead-lag PSSs for the multi-machine power system, which strengthens the claim made in [45] ("RUN beyond the metaphor") that metaphor-based nature-inspired algorithms such as AEO, AVOA, and GTO, which imitate animal search patterns, may have limitations in optimization problems.

Author Contributions

Conceptualization, A.S. and T.E.O.; methodology, A.S. and T.E.O.; software, H.S., M.M. and Z.A.; validation, H.S., M.M. and Z.A.; formal analysis, A.S. and T.E.O.; investigation, H.S., M.M. and Z.A.; resources, H.S., M.M. and Z.A.; data curation, A.S. and T.E.O.; writing—original draft preparation, A.S. and T.E.O.; writing—review and editing, H.S., M.M., Z.A., A.S. and T.E.O.; visualization, H.S., A.S. and T.E.O.; supervision, A.S. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are contained within this article.

Conflicts of Interest

The authors declare no potential conflict of interest with respect to the research, authorship, and/or publication of this article.

References

  1. Sabo, A.; Kolapo, B.Y.; Odoh, T.E.; Dyari, M.; Abdul Wahab, N.I.; Veerasamy, V. Solar, Wind and Their Hybridization Integration for Multi-Machine Power System Oscillation Controllers Optimization: A Review. Energies 2023, 16, 24. [Google Scholar] [CrossRef]
  2. Sabo, A.; Abdul Wahab, N.I.; Othman, M.L.; Mohd Jaffar, M.Z.A.; Beiranvand, H. Optimal design of power system stabilizer for multimachine power system using farmland fertility algorithm. Int. Trans. Electr. Energy Syst. 2020, 30, e12657. [Google Scholar] [CrossRef]
  3. Hannan, M.A.; Islam, N.N.; Mohamed, A.; Lipu, M.S.H.; Ker, P.J.; Rashid, M.M.; Shareef, H. Artificial intelligent based damping controller optimization for the multi-machine power system: A review. IEEE Access 2018, 6, 39574–39594. [Google Scholar] [CrossRef]
  4. Moshtaghi, H.R.; Eshlaghy, A.T.; Motadel, M.R. A Comprehensive Review on Meta-Heuristic Algorithms and their Classification with Novel Approach. J. Appl. Res. Ind. Eng. 2021, 8, 63–89. [Google Scholar]
  5. Sabo, A.; Wahab, N.I.; Othman, M.L.; Mohd Jaffar, M.Z.; Acikgoz, H.; Beiranvand, H. Application of Neuro-Fuzzy Controller to Replace SMIB and Interconnected Multi-Machine Power System Stabilizers. Sustainability 2020, 12, 9591. [Google Scholar] [CrossRef]
  6. Sabo, A.; Abdul Wahab, N.I.; Othman, M.L.; Mohd Jaffar, M.Z.A.; Beiranvand, H.; Acikgoz, H. Application of a neuro-fuzzy controller for single machine infinite bus power system to damp low-frequency oscillations. Trans. Inst. Meas. Control. 2021, 43, 3633–3646. [Google Scholar] [CrossRef]
  7. Ekinci, S.; İzci, D.; Hekimoğlu, B. Implementing the Henry gas solubility optimization algorithm for optimal power system stabilizer design. Electrica 2021, 21, 250–258. [Google Scholar] [CrossRef]
  8. El-Dabah, M.A.; Kamel, S.; Khamies, M.; Shahinzadeh, H.; Gharehpetian, G.B. Artificial Gorilla Troops Optimizer for Optimum Tuning of TID Based Power System Stabilizer. In Proceedings of the 2022 9th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), Bam, Iran, 2–4 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5. [Google Scholar]
  9. Kumar, A. Nonlinear AVR for power system stabilisers robust phase compensation design. IET Gener. Transm. Distrib. 2020, 14, 4927–4935. [Google Scholar] [CrossRef]
  10. Sabo, A.; Wahab, N.I.A.; Othman, M.L.; Zurwatul, M.; Jaffar, A.M. Novel Farmland Fertility Algorithm Based PIDPSS Design for SMIB Angular Stability Enhancement. Int. J. Adv. Sci. Technol. 2020, 29, 873–882. [Google Scholar]
  11. Sabo, A.; Wahab, N.I.A.; Othman, M.L.; Jaffar, M.Z.A.B.M.; Acikgoz, H.; Nafisi, H.; Shahinzadeh, H. Artificial Intelligence-Based Power System Stabilizers for Frequency Stability Enhancement in Multi-machine Power Systems. IEEE Access 2021, 9, 166095–166116. [Google Scholar] [CrossRef]
  12. Odoh, T.E.; Sabo, A.; Wahab, N.I.A. Mitigation of Power System Oscillation in a DFIG-Wind Integrated Grid: A Review. Appl. Model. Simul. 2022, 6, 134–149. [Google Scholar]
  13. Ghandakly, A.A.; Farhoud, A.M. A parametrically optimized self-tuning regulator for power system stabilizers. IEEE Trans. Power Syst. 1992, 7, 1245–1250. [Google Scholar] [CrossRef]
  14. Barreiros, J.A.L.; e Silva, A.S.; Costa, A.J.A.S. A self-tuning generalized predictive power system stabilizer. Int. J. Electr. Power Energy Syst. 1998, 20, 213–219. [Google Scholar] [CrossRef]
  15. Khodabakhshian, A.; Hemmati, R. Robust decentralized multi-machine power system stabilizer design using quantitative feedback theory. Int. J. Electr. Power Energy Syst. 2012, 41, 112–119. [Google Scholar] [CrossRef]
  16. Denai, M.B.A.; Mohamed, B. Robust stabilizer of electric power generator using H∞ with pole placement constraints. J. Electr. Eng. 2005, 56, 176–182. [Google Scholar]
  17. Gomes, S., Jr.; Guimarães, C.H.C.; Martins, N.; Taranto, G.N. Damped Nyquist Plot for a pole placement design of power system stabilizers. Electr. Power Syst. Res. 2018, 158, 158–169. [Google Scholar] [CrossRef]
  18. Peres, W.; Coelho, F.C.R.; Costa, J.N.N. A pole placement approach for multi-band power system stabilizer tuning. Int. Trans. Electr. Energy Syst. 2020, 30, e12548. [Google Scholar] [CrossRef]
  19. Davut, I. A novel modified arithmetic optimization algorithm for power system stabilizer design. Sigma J. Eng. Nat. Sci. 2022, 40, 529–541. [Google Scholar]
  20. Izci, D. A novel improved atom search optimization algorithm for designing power system stabilizer. Evol. Intell. 2022, 15, 2089–2103. [Google Scholar] [CrossRef]
  21. Sabo, A.; Wahab, N.I.A.; Othman, M.L.; Jaffar, M.Z.A.M.; Beiranvand, H. Farmland fertility optimization for designing of interconnected multi-machine power system stabilizer. Appl. Model. Simul. 2020, 4, 183–201. [Google Scholar]
  22. Sabo, A.; Wahab, N.I.A.; Othman, M.L.; Jaffar, M.Z.A.M. Mitigation of Oscillations in SMIB using a Novel Farmland Fertility Optimization based PIDPSS. In Proceedings of the 2020 2nd International Conference on Smart Power & Internet Energy Systems (SPIES), Bangkok, Thailand, 15–18 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 234–239. [Google Scholar]
  23. Ekinci, S.; Demiroren, A.; Hekimoglu, B. Parameter optimization of power system stabilizers via kidney-inspired algorithm. Trans. Inst. Meas. Control. 2019, 41, 1405–1417. [Google Scholar] [CrossRef]
  24. Ekinci, S.; Demiroren, A. PSO based PSS design for transient stability enhancement 2. Modeling of Flux Decay Model and Fast. IU J. Electr. Electron. Eng. 2015, 15, 1855–1862. [Google Scholar]
  25. Wang, D.; Ma, N.; Wei, M.; Liu, Y. Parameters tuning of power system stabilizer PSS4B using hybrid particle swarm optimization algorithm. Int. Trans. Electr. Energy Syst. 2018, 28, e2598. [Google Scholar] [CrossRef]
  26. Latif, S.; Irshad, S.; Ahmadi Kamarposhti, M.; Shokouhandeh, H.; Colak, I.; Eguchi, K. Intelligent Design of Multi-Machine Power System Stabilizers (PSSs) Using Improved Particle Swarm Optimization. Electronics 2022, 11, 946. [Google Scholar] [CrossRef]
  27. Bento, M.E.C. A hybrid particle swarm optimization algorithm for the wide-area damping control design. IEEE Trans. Ind. Inform. 2021, 18, 592–599. [Google Scholar] [CrossRef]
  28. Ibrahim, N.M.A.; Elnaghi, B.E.; Ibrahim, H.A.; Talaat, H.E.A. Performance assessment of bacterial foraging based power system stabilizer in multi-machine power system. Int. J. Intell. Syst. Appl. 2019, 11, 43. [Google Scholar] [CrossRef] [Green Version]
  29. Akbari, E.; Mollajafari, M.; Al-Khafaji, H.M.R.; Alkattan, H.; Abotaleb, M.; Eslami, M.; Palani, S. Improved salp swarm optimization algorithm for damping controller design for multimachine power system. IEEE Access 2022, 10, 82910–82922. [Google Scholar] [CrossRef]
  30. Abido, M.A.; Member, S. Optimal Design of Power System Stabilizers Using Evolutionary Programming. Trans. Energy Convers. 2002, 17, 429–436. [Google Scholar] [CrossRef]
  31. Chaib, L.; Choucha, A.; Arif, S.; Zaini, H.G.; El-Fergany, A.; Ghoneim, S.S.M. Robust design of power system stabilizers using improved harris hawk optimizer for interconnected power system. Sustainability 2021, 13, 11776. [Google Scholar] [CrossRef]
  32. Farah, A.; Guesmi, T.; Hadj Abdallah, H.; Ouali, A. A novel chaotic teaching–learning-based optimization algorithm for multi-machine power system stabilizers design problem. Int. J. Electr. Power Energy Syst. 2016, 77, 197–209. [Google Scholar] [CrossRef]
  33. Singh, M.; Patel, R.N.; Neema, D.D. Robust tuning of excitation controller for stability enhancement using multi-objective metaheuristic Firefly algorithm. Swarm Evol. Comput. 2019, 44, 136–147. [Google Scholar] [CrossRef]
  34. Chitara, D.; Niazi, K.R.; Swarnkar, A.; Gupta, N. Cuckoo search optimization algorithm for designing of a multimachine power system stabilizer. IEEE Trans. Ind. Appl. 2018, 54, 3056–3065. [Google Scholar] [CrossRef]
  35. Aswathi, V.S.; Laly, M.J.; Mathew, A.T.; Cheriyan, E.P. Cuckoo Search Optimization Algorithm based Small Signal Stability Improvement of a Two-Area Power System incorporated with AVR and PSS. In Emerging Technologies for Sustainability; CRC Press: Boca Raton, FL, USA, 2020; pp. 303–311. ISBN 0429353626. [Google Scholar]
  36. Alshammari, B.M.; Guesmi, T. New chaotic sunflower optimization algorithm for optimal tuning of power system stabilizers. J. Electr. Eng. Technol. 2020, 15, 1985–1997. [Google Scholar] [CrossRef]
  37. Dasu, B.; Sivakumar, M.; Srinivasarao, R. Interconnected multi-machine power system stabilizer design using whale optimization algorithm. Prot. Control. Mod. Power Syst. 2019, 4, 1–11. [Google Scholar] [CrossRef]
  38. Butti, D.; Mangipudi, S.K.; Rayapudi, S.R. An improved whale optimization algorithm for the design of multi-machine power system stabilizer. Int. Trans. Electr. Energy Syst. 2020, 30, e12314. [Google Scholar] [CrossRef]
  39. Devarapalli, R.; Bhattacharyya, B. A hybrid modified grey wolf optimization-sine cosine algorithm-based power system stabilizer parameter tuning in a multimachine power system. Optim. Control. Appl. Methods 2020, 41, 1143–1159. [Google Scholar] [CrossRef]
  40. Bayu, E.S.; Khan, B.; Ali, Z.M.; Alaas, Z.M.; Mahela, O.P. Mitigation of Low-Frequency Oscillation in Power Systems through Optimal Design of Power System Stabilizer Employing ALO. Energies 2022, 15, 3809. [Google Scholar] [CrossRef]
  41. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425. [Google Scholar] [CrossRef]
  42. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  43. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  44. Ahmadianfar, I.; Bozorg-haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  45. Ahmadianfar, I.; Asghar, A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  46. Pai, M.A.; Sauer, P.W.; Chow, J.H. Power System Dynamics and Stability With Synchrophasor Measurement and Power System Toolbox, 2nd ed.; IEEE Press: Piscataway, NJ, USA; Wiley: Hoboken, NJ, USA, 2018; pp. 1–364. [Google Scholar]
  47. Beiranvand, H.; Rokrok, E. General relativity search algorithm: A global optimization approach. Int. J. Comput. Intell. Appl. 2015, 14, 1550017. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the overall procedure for PSS design.
Figure 2. GBO optimization process.
Figure 3. Runge-Kutta optimization process.
Figure 4. Western System Coordinating Council (WSCC) power test system.
Figure 5. Base condition time domain simulations and convergence curves for the power test system.
Figure 6. Bar chart of settling time for the algorithms for base condition.
Figure 7. Condition 1-time domain simulations and convergence curves for the power test system.
Figure 8. Bar chart of settling time for the algorithms for condition 1.
Figure 9. Condition 2-time domain simulations and convergence curves for the power test system.
Figure 10. Bar chart of settling time for the algorithms for condition 2.
Figure 11. Condition 3-time domain simulations and convergence curves for the power test system.
Figure 12. Bar chart of settling time for the algorithms for condition 3.
Table 1. Parameters of the WSCC test system.
Parameter           Value
Transmission line   X_T14 = 0.0576 pu, X_T27 = 0.0625 pu, X_T39 = 0.0586 pu, X_T45 = 0.085 pu, X_T46 = 0.092 pu, X_T57 = 0.161 pu, X_T69 = 0.17 pu, X_T78 = 0.072 pu, X_T89 = 0.1008 pu, X_L14 = 0 pu, X_L27 = 0 pu, X_L39 = 0 pu, X_L45 = 0.176 pu, X_L46 = 0.158 pu, X_L57 = 0.306 pu, X_L69 = 0.358 pu, X_L78 = 0.149 pu, X_L89 = 0.209 pu
Machine             H1 = 23.63 s, H2 = 6.4 s, H3 = 3.01 s, D = 0.0, T'_d01 = 8.96 s, T'_d02 = 6.0 s, T'_d03 = 5.89 s, T'_q01 = 0.31 s, T'_q02 = 0.535 s, T'_q03 = 0.6 s, X_q1 = 0.0969 pu, X_q2 = 0.8645 pu, X_q3 = 1.2578 pu, X_d1 = 0.146 pu, X_d2 = 0.8958 pu, X_d3 = 1.3125 pu, X'_q1 = 0.0969 pu, X'_q2 = 0.8645 pu, X'_q3 = 1.2578 pu, X'_d1 = 0.0608 pu, X'_d2 = 0.1198 pu, X'_d3 = 0.1813 pu, X_d = 0.8958 pu, X'_q = 0.1969 pu, X'_d = 0.1198 pu
Exciter             K_A1 = 20.0, T_A1 = 0.2 s, K_A2 = 20.0, T_A2 = 0.2 s, K_A3 = 20.0, T_A3 = 0.2 s
Table 2. Parameter settings of the five algorithms in PSSs design.
Parameter             Value
Population size       100
Number of variables   10 (K_PSS2, K_PSS3, T_21, T_22, T_23, T_24, T_31, T_32, T_33, T_34)
Maximum iterations    100
Lower bound           [0.001, 0.001, 0.001, 0.02, 0.001, 0.02, 0.001, 0.02, 0.001, 0.02]
Upper bound           [50, 50, 1, 1, 1, 1, 1, 1, 1, 1]
Number of runs        15
Simulation time       5 s
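Under the settings of Table 2, every candidate solution is a 10-element vector [K_PSS2, K_PSS3, T_21, …, T_34] restricted to the listed bounds. A minimal sketch of the population initialization shared by all five algorithms (the per-algorithm update rules differ and are omitted here) is:

```python
import numpy as np

# Decision vector: [K_PSS2, K_PSS3, T21, T22, T23, T24, T31, T32, T33, T34]
lb = np.array([0.001, 0.001, 0.001, 0.02, 0.001, 0.02, 0.001, 0.02, 0.001, 0.02])
ub = np.array([50.0, 50.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(seed=1)
pop = lb + rng.random((100, 10)) * (ub - lb)   # population size 100 (Table 2)
# Each row is one candidate PSS parameter set; out-of-range moves during the
# search are typically clipped back with np.clip(candidate, lb, ub).
```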
Table 3. Operating conditions for the WSCC test system (100 MVA per unit) used for simulation.
Generator   Base Condition   Condition 1   Condition 2   Condition 3
            P      Q         P      Q      P      Q      P      Q
G1          0.72   0.27      2.21   1.09   0.36   0.16   0.33   1.12
G2          1.63   0.07      1.92   0.56   0.80   −0.11  2.00   0.57
G3          0.85   −0.11     1.28   0.36   0.45   −0.20  1.50   0.38
Load        P      Q         P      Q      P      Q      P      Q
A           1.25   0.50      2.00   0.80   0.65   0.55   1.50   0.90
B           0.90   0.30      1.80   0.60   0.45   0.35   1.20   0.80
C           1.00   0.35      1.50   0.60   0.50   0.25   1.00   0.50
Table 6. Eigenvalue damping ratio and frequency.
Mode   NO-PSS            AEO               AVOA              GTO               RUN               GBO
       Damping Ratio / Frequency (Hz) for each column
1      0.0536 / 2.0333   0.2532 / 1.8072   0.2426 / 1.8118   0.2600 / 1.8190   0.3152 / 2.3911   0.2581 / 2.9037
2      0.0148 / 1.3189   0.2660 / 1.8003   0.2725 / 1.8010   0.2592 / 1.8155   0.4203 / 1.3170   0.4439 / 1.2192
3      0.6726 / 0.4165   0.7001 / 0.4796   0.6790 / 0.4894   0.7106 / 0.4876   0.7819 / 0.4869   0.7736 / 0.5069
4      0.9593 / 0.2188   0.9040 / 0.3235   0.9134 / 0.3147   0.8838 / 0.3537   0.8677 / 0.3494   0.8646 / 0.3504
5      0.9608 / 0.1616   0.8544 / 0.3266   0.9076 / 0.2473   0.9070 / 0.2662   0.8782 / 0.3322   0.8716 / 0.3392
Table 7. Transient stability analysis of base condition time-domain simulations.
Algorithm   ω2 − ω1: Rise Time (s)   Settling Time (s)   ω3 − ω1: Rise Time (s) (×10⁻⁴)   Settling Time (s)
AEO         0.00046                  5.9066              1.1522                           3.1463
AVOA        0.0046                   3.2823              8.8800                           2.8602
GTO         0.0035                   2.9716              6.8550                           2.8711
GBO         0.0022                   2.3175              5.6230                           2.0694
RUN         0.0024                   2.0463              6.1301                           2.4344
Algorithm   δ2 − δ1: Rise Time (s)   Settling Time (s)   δ3 − δ1: Rise Time (s) (×10⁻⁴)   Settling Time (s)
AEO         0.0016                   4.4583              66.000                           4.2811
AVOA        0.0154                   3.2109              7.1930                           3.9317
GTO         0.0149                   3.1678              8.3940                           3.8720
GBO         0.0135                   2.8213              7.8607                           3.9857
RUN         0.0132                   2.8530              7.9990                           3.9027
Table 8. Transient stability analysis of condition 1 time-domain simulations.
Algorithm   ω2 − ω1: Rise Time (s)   Settling Time (s)   ω3 − ω1: Rise Time (s)   Settling Time (s)
AEO         0.0037                   2.9400              0.0016                   2.9637
AVOA        0.0012                   3.2454              0.00058                  3.0309
GTO         0.0046                   2.9307              0.0020                   2.8008
GBO         0.0040                   2.7993              0.0018                   2.0736
RUN         0.0041                   2.6255              0.0018                   2.4559
Algorithm   δ2 − δ1: Rise Time (s)   Settling Time (s)   δ3 − δ1: Rise Time (s) (×10⁻⁴)   Settling Time (s)
AEO         0.000008                 3.0845              0.0150                           3.2810
AVOA        0.0134                   3.1465              1.2119                           3.2913
GTO         0.0015                   3.0561              1.1691                           3.2376
GBO         0.0024                   2.6180              2.8990                           3.1512
RUN         0.0052                   2.3847              1.8165                           3.3010
Table 9. Transient stability analysis of condition 2 time-domain simulations.
Algorithm   ω2 − ω1: Rise Time (s)   Settling Time (s)   ω3 − ω1: Rise Time (s)   Settling Time (s)
AEO         0.0060                   7.0577              0.0011                   2.4576
AVOA        0.0121                   3.4334              0.0020                   2.8849
GTO         0.0075                   3.9204              0.0015                   2.4299
GBO         0.0106                   2.9077              0.0020                   2.5460
RUN         0.0101                   2.9543              0.0101                   2.9543
Algorithm   δ2 − δ1: Rise Time (s)   Settling Time (s)   δ3 − δ1: Rise Time (s)   Settling Time (s)
AEO         0.0028                   4.3070              0.01110                  4.2811
AVOA        0.0226                   3.2913              0.00056                  3.9317
GTO         0.0031                   3.0178              0.01590                  3.8720
GBO         0.0161                   2.8095              0.00050                  3.9857
RUN         0.0159                   2.8299              0.00016                  3.9027
Table 10. Transient stability analysis of condition 3 time-domain simulations.
Algorithm   ω2 − ω1: Rise Time (s) (×10⁻⁴)   Settling Time (s)   ω3 − ω1: Rise Time (s) (×10⁻⁴)   Settling Time (s)
AEO         9.3990                           2.9255              4.1200                           2.3288
AVOA        4.5786                           3.5638              1.8760                           2.6453
GTO         10.000                           2.9287              4.2604                           2.3503
GBO         43.000                           2.7442              16.000                           2.3487
RUN         12.000                           2.8934              4.8472                           2.3544
Algorithm   δ2 − δ1: Rise Time (s)   Settling Time (s)   δ3 − δ1: Rise Time (s)   Settling Time (s)
AEO         0.0113                   3.1938              0.0026                   3.5620
AVOA        0.0092                   3.3975              0.0104                   3.4283
GTO         0.0104                   3.2286              0.0056                   3.5465
GBO         0.0094                   2.8456              0.00019                  3.4575
RUN         0.0103                   3.1076              0.0048                   3.4732