Article

Performance Analysis of Partitioned Step Particle Swarm Optimization in Function Evaluation

Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(6), 2670; https://doi.org/10.3390/app11062670
Submission received: 26 January 2021 / Revised: 9 March 2021 / Accepted: 12 March 2021 / Published: 17 March 2021

Abstract

The partitioned step particle swarm optimization (PSPSO) introduces a two-fold search mechanism that increases the search capability of particle swarm optimization. The first layer introduces the γ and λ values, which describe the current state of the searched solutions and diversify the particles when they converge too strongly on some optima. The second layer partitions the particles to prevent premature convergence. With these two mechanisms, PSPSO offers a simple way for particles to communicate with each other without greatly compromising computational time. The proposed algorithm was compared with different variants of particle swarm optimization (PSO) on benchmark functions as well as on the IEEE 10-unit unit commitment problem. The results demonstrate the effectiveness of PSPSO across different function types and its competitive advantage over published PSO variants.

1. Introduction

Heuristic, which means "to find" or "to discover by trial and error", refers to producing an acceptable solution to a complex problem in a reasonable time [1]. Metaheuristic algorithms are a further development of heuristic algorithms; in the literature there is no clear distinction between the two, and the terms are usually interchangeable. Because heuristic methods use randomness to search for solutions, they are flexible enough to search through non-smooth spaces where mathematical methods have difficulty [2].
Metaheuristic algorithms, at times called intelligent search methods [3], are patterned on behaviors observed in our environment [4]: the genetic algorithm (GA) is based on evolution; particle swarm optimization (PSO) and the firefly algorithm (FA) are examples of swarm intelligence; the gravitational search algorithm (GSA) is a physics-based algorithm; teaching-learning-based optimization (TLBO) and harmony search (HS) are based on human behavior; and an ever-growing list of algorithms is patterned on other behaviors of organisms.
The success of a metaheuristic algorithm depends on how it balances exploration and exploitation [5]. Exploration ensures the diversity of the solutions by searching on a global scale, while exploitation focuses the search on the local region where good solutions lie. Algorithms weight these two terms differently. Another distinguishing characteristic is how an algorithm performs the random walk that determines a new and, hopefully, better solution. Certain components of an algorithm also shape this movement, such as the inertia weight in PSO, the differential weight in differential evolution (DE) [6], the circling movement around the target in the whale algorithm [7], and the quantum bit that stores the direction and magnitude of the rotation angle in quantum-inspired algorithms (QEA) [8]. These random walks mostly provide a unique movement that aids both exploration and exploitation.

1.1. Review of Related Algorithms

The three algorithms discussed below inspired the partitioned step particle swarm optimization (PSPSO). Their concepts and behaviors were used as a guide to develop the proposed PSO variant.

1.1.1. Genetic Algorithm

John Holland's GA, introduced in 1975, became the foundation of modern evolutionary intelligent search methods [1]. It is based on the Darwinian theory of natural selection: crossover, mutation, and selection. Each individual in the population is composed of chromosomes encoded with data, which then evolve through recombination with other individuals or through mutation. The individuals are evaluated according to their fitness and randomly selected, with this fitness value as their probability of being chosen, to generate the next population, or children. Owing to this rich structure, the GA has been very successful at finding the global optimum in a wide variety of functions, but at the expense of long calculation times [3].

1.1.2. Particle Swarm Optimization

PSO was introduced by Kennedy and Eberhart [9]. The algorithm simulates the social behavior of flocking birds and schooling fish. It uses the autobiographical memory of each particle (pbest), described as the inclination of an individual to return to the place that was most satisfying in the past, and the target (gbest). Each particle represents a solution to the problem, and the collection of particles is called the swarm. At every iteration, the velocity of each particle is determined from its distance to its personal best (pbest), the overall best (gbest), and its previous direction. The new position of the swarm is obtained by moving the particles according to their calculated velocities [1].
The structure of PSO seems simple, with few parameters and easy implementation. However, PSO often suffers from premature convergence [2], becoming stuck at local optima, since it always moves towards pbest and gbest, which may themselves be merely local optima [3]. It has difficulty scattering its particles, that is, exploring. Hence, balancing exploration and exploitation needs to be studied for PSO.

1.1.3. Harmony Search

HS was introduced by Geem, Lee, and Loganathan [10]. It was inspired by how a musician improvises to create better harmony. A harmony can be generated in three basic steps: (1) playing from memory, (2) playing a variation of a remembered harmony, or (3) composing a new harmony. With its simple structure, fast convergence, and good balance between exploration and exploitation [5], HS has been widely implemented and combined with other algorithms such as random search [10], PSO [11], and the virus optimization algorithm [12].
The format of HS is not quite like that of the other algorithms, but its concepts have equivalents in them, such as GA's elitism and mutation and PSO's simplicity. In HS, the harmony is composed of different melodies (solutions). The harmony is divided into harmony memory, variations of the memory, and new melodies by initializing the percentage of each of these components. Better melodies are then saved in the harmony memory at every iteration [1]. This division of the harmony is performed with a random selector variable that dictates what type of harmony a certain solution is, and it is the inspiration for the proposed partitioned step particle swarm optimization (PSPSO), as sketched below.
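As an illustration of the three-step improvisation and the random selector described above, the following MATLAB sketch generates one new harmony; the parameter names and values (HM, HMCR, PAR, bw, lb, ub) are hypothetical stand-ins, not taken from [10]:
```matlab
% One harmony improvisation; all names and values are illustrative.
HM   = rand(5, 3);                 % harmony memory: 5 harmonies, 3 decision variables
HMCR = 0.9; PAR = 0.3; bw = 0.05;  % memory rate, pitch-adjust rate, bandwidth
lb = zeros(1, 3); ub = ones(1, 3); % variable bounds

new = zeros(1, 3);
for d = 1:3
    if rand < HMCR                            % (1) play from memory
        new(d) = HM(randi(5), d);
        if rand < PAR                         % (2) play a variation of it
            new(d) = new(d) + bw*(2*rand - 1);
        end
    else                                      % (3) compose a new pitch
        new(d) = lb(d) + rand*(ub(d) - lb(d));
    end
end
```
The random selectors (the comparisons against HMCR and PAR) are what divide the harmony into memory, variation, and new composition.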
From the discussion above, it can be observed that an algorithm has four main components: the step, elitism, crossover, and mutation. In GA, each of these components is easily distinguished because they are separate operations. In HS, the step merges memory, variation, and composition, representing elitism, crossover, and mutation, respectively, in one equation. In standard PSO, the representation of each component is less obvious: the velocity computation is the step, and within this step are its elitism (pbest and gbest) and its limited mutation (the random multipliers on pbest and gbest). The crossover in PSO occurs only with pbest and gbest, meaning particles only move toward their own best value and the current global best. While this allows each particle to operate independently, the lack of knowledge about other particles' pbest values keeps every particle's velocity biased towards gbest. This is the main reason why PSO suffers from premature convergence. Therefore, PSO particles should learn to communicate with other particles to improve their search capability.
The PSPSO introduces a fast way of incorporating crossover and mutation without compromising computational time. Two layers of search are introduced. The first layer handles the direction of the inertia weight, which is responsible for the partial mutation in PSO. The second is a modified step equation that partitions the crossover between pbest and gbest and between pbest and another particle's pbest. The proposed algorithm is then compared with different variants of PSO in terms of function characteristics and computational time.
This paper analyzes the effectiveness of PSPSO in solving different benchmark problems, an analysis that was not performed when the algorithm was first introduced by the authors [13]. It provides a better understanding of how and why PSPSO is capable of solving such problems through further tests. Only PSO variants were tested because we wanted to show the improvement over the PSO algorithm. The paper is organized as follows. Section 1.2 discusses in detail the components of PSO and its known variants in the literature. Section 2 presents the PSPSO algorithm and the testing methods. Parameter-setting and function-evaluation results are discussed in Section 3. Section 4 discusses the possible implications of using this algorithm. Lastly, Section 5 presents the conclusions.

1.2. Particle Swarm Optimization Components

Several studies have focused on enhancing the PSO algorithm through the following components.

1.2.1. Velocity Rule

The velocity rule is how the particles of the algorithm determine their next update, or step. The original PSO [9] uses three components for the step: the previous velocity, the particle best (pbest), and the global best (gbest). The inertia weight variant later introduced by Shi and Eberhart [16] is the most widely used variant and follows the equation
$vel_{new} = \omega \cdot vel_{old} + c_1 \cdot rand \cdot (pbest - x) + c_2 \cdot rand \cdot (gbest - x)$ (1)
where c1 and c2 are the cognitive and cooperative constants, respectively; x is the particle position; ω is the inertia weight; and vel is the velocity of the particle.
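To make Equation (1) concrete, the minimal MATLAB sketch below performs one velocity and position update for a single particle; all numeric values are illustrative:
```matlab
% One inertia-weight PSO update, Equation (1); all values illustrative.
c1 = 2; c2 = 2; w = 0.7;          % cognitive/cooperative constants, inertia weight
x     = [ 0.5, -1.2];             % current position of one particle
vel   = [ 0.1,  0.3];             % its previous velocity
pbest = [ 0.4, -1.0];             % its own best position so far
gbest = [ 0.0,  0.1];             % best position found by the swarm

vel = w*vel + c1*rand*(pbest - x) + c2*rand*(gbest - x);  % Equation (1)
x   = x + vel;                                            % move the particle
```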
However, as discussed by Bonyadi and Michalewicz [15], the weighted-velocity variant above is only locally convergent and may suffer from premature convergence. Other papers have tried to mitigate this problem by changing the value of the weight and/or changing the velocity rule. One notable method removes the pbest component and uses only the previous velocity and the global best for the update [16,17], which showed better results, especially on multimodal functions.
Another variant adds a component to the original PSO: the weight-improved crazy PSO (WICPSO) [18] uses a bad-experience component (pworst) to help particles move away from poor positions.

1.2.2. Inertia Weight

As discussed by Shi and Eberhart [16], the velocity is intended to expand the search space, giving the particles the ability to explore new areas. The inertia weight controls the influence of the previous velocity, thereby controlling the movement of particles along their previous direction. This inertia weight is also responsible for balancing the global search (toward gbest) and the local search (toward pbest).
Because the inertia weight controls the exploration of PSO, several PSO variants modify the value of ω. The crazy PSO (CPSO) [14] uses a probability of craziness to determine whether the new velocity will be used or replaced by a random number between zero and the preset maximum velocity. For both CPSO and the weight-improved PSO (WIPSO) [19], the value of ω decreases as the iteration count increases; this decrease is necessary to make sure the algorithm converges. The generalized PSO (GPSO) [20] likewise changes the inertia weight with the iteration. However, instead of a craziness component, the GPSO uses the previous and current gbest to decide how much of the previous inertia is kept; if the GPSO gets stuck in a local minimum, the inertia does not change. The GPSO compensates by including the position of a random particle and the velocity of another random particle as the fourth and fifth terms of its velocity update, which can lead to too much exploration.
The adaptive PSO (APSO) has been presented in several works as the consideration of the current state of the particles, or the sparseness of the solutions, for exploration. Li et al. [21] used the distances between particles to compute a local sparseness degree and select the exemplars of each group of particles as pbest particles. Although this seems a good way to select pbest particles, the algorithm itself requires long, complicated steps. In the work of Han et al. [22], the APSO uses additional constant parameters to adjust the degree of sparseness, which then adjusts the value of ω. Han et al.'s variant is promising because the particles move according to the current state of the solutions rather than being limited by a range of ω or relying on randomness, as in the CPSO and WIPSO.

1.2.3. Crossover

A crossover operation in PSO is unusual because the velocity update itself acts as the crossover operator, a crossover with pbest and gbest [9]. Chen et al. [17] discuss how crossover enhances information sharing between particles and prevents premature convergence. They proposed the PSO with crossover operation (PSOCO), which uses two types of crossover: arithmetic crossover and differential evolution crossover. They also combined the concept of exemplars, as a replacement strategy for stagnated values, with competitive selection from differential evolution, resulting in faster convergence. However, arriving at the solution takes longer than usual because of the multiple procedures and function evaluations performed in every iteration.

2. Partitioned Step Particle Swarm Optimization

2.1. Theory

The PSPSO incorporates adaptive components and crossover on the basis of the current state of the particles. In Li et al.'s [21] concept of APSO, the adaptive component is based on the congestion and distribution of particles. For the PSPSO, these two conditions are reformulated as the λ and γ components, acting similarly to the congestion and distribution components, respectively. The λ component indicates the sparseness of the solutions in the current iteration; the larger the λ, the closer together the fitness values are. The γ component indicates how far a particle's fitness is from the current best solution; the larger the value, the nearer the particle is to the current best solution.
$\lambda = \frac{f(x)_{min}}{f(x)_{max}}$ (2)
$\gamma = 1 - \frac{f(x) - f(gbest)}{f(x)_{max} - f(gbest)}$ (3)
$\omega = \lambda \cdot \gamma \cdot \frac{MaxIter - i}{MaxIter}$ (4)
Equations (2)-(4) show the computation of the three components λ, γ, and ω, respectively, where x is the particle position, f(x) is the fitness function, MaxIter is the maximum iteration, and i is the iteration number. At the beginning of the algorithm, when the sparseness is still high, the particles try to improve themselves by moving towards the known better solutions, pbest and gbest. When the particles suddenly start to converge together, ω becomes high, which means the algorithm performs more exploration. In other studies, ω is referred to as the "mutation" of PSO; when ω is high, the algorithm searches more of the solution space. This means that the PSPSO uses the previous trajectory more to look for other solutions, that is, to diversify.
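As a minimal sketch with illustrative fitness values, Equations (2)-(4) can be evaluated for a whole swarm in a few vectorized MATLAB lines:
```matlab
% Equations (2)-(4) for one iteration; fitness values are illustrative.
fx     = [3.2, 1.1, 4.8, 2.0];    % current fitness of each particle
fgbest = 0.9;                     % best fitness found so far
i = 10; MaxIter = 1000;           % iteration counter and budget

lam = min(fx) / max(fx);                        % Eq. (2): near 1 when fitnesses crowd together
gam = 1 - (fx - fgbest) ./ (max(fx) - fgbest);  % Eq. (3): near 1 close to the best solution
omg = lam .* gam .* (MaxIter - i) / MaxIter;    % Eq. (4): per-particle inertia weight
```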
The traditional random walk for PSO includes the previous velocity, pbest, and gbest. Nonetheless, some research suggests that including gbest makes the algorithm converge prematurely. To remedy this, the probability of using gbest is decreased at the start of the iterations using the partition p in Equation (5). Even though the use of gbest is reduced, the cooperative component remains throughout the iterations through the use of a random particle's pbest (pbestrnd). The velocity update (6) thus produces two types of particles: those formed by a crossover of pbest and gbest, and those formed by a crossover of pbest and pbestrnd. The vector representation of the velocity (6) and the particle movement update (7) is shown in Figure 1a.
$p = \left( rand < \frac{i}{MaxIter} \right)$ (5)
$vel_{new} = \omega \cdot vel_{old} + c_1 \cdot rand \cdot (pbest - x) + c_2 \cdot rand \cdot \left( p \cdot gbest + (1-p) \cdot pbest_{rnd} - x \right)$ (6)
$part_{new} = part_{old} + vel_{new}$ (7)
Figure 1b shows the flowchart of the PSPSO, with the blue boxes indicating the modified equations for the PSPSO.
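A minimal MATLAB sketch of Equations (5)-(7) follows, assuming the swarm is stored as an N-by-D position matrix X with velocity matrix V; all names and placeholder values are illustrative, not taken from the authors' implementation:
```matlab
% Partitioned step, Equations (5)-(7); placeholder data for illustration.
N = 4; D = 2; c1 = 1.5; c2 = 1.5; i = 10; MaxIter = 1000;
X = rand(N, D); V = zeros(N, D);     % positions and velocities
pbest = X; gbest = X(1, :);          % remembered bests (placeholders)
omg = 0.5*ones(N, 1);                % per-particle inertia from Eq. (4)

p    = rand(N, 1) < i/MaxIter;       % Eq. (5): gbest is used more often later on
mate = pbest(randi(N, N, 1), :);     % a random particle's pbest for each particle
targ = p.*gbest + (1 - p).*mate;     % partitioned cooperative target
V = omg.*V + c1*rand(N,1).*(pbest - X) + c2*rand(N,1).*(targ - X);  % Eq. (6)
X = X + V;                           % Eq. (7)
```
Because p is drawn per particle, a single update mixes both particle types: some particles cross over with gbest while the rest cross over with another particle's pbest.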

2.2. Parameter Setting

The PSPSO has only two kinds of parameters: the cognitive and cooperative constants (c1 and c2) and the number of particles. These parameters are determined by a brute-force grid search, first on the constants and then on the number of particles. The search range for the constants is 1.00 to 2.50 with 0.25 increments, while the number of particles (N) ranges from 30 to 90 with increments of 10. The mean, standard deviation, and computational time over 30 runs of all the test functions are recorded, and the parameter setting with the best (lowest) mean, standard deviation, and computational time is chosen. A sketch of this search loop is given below.
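The search loop itself is straightforward; the MATLAB sketch below shows its shape, with run_pspso standing in as a hypothetical function that performs one optimization run and returns its final cost:
```matlab
% Brute-force parameter search; run_pspso is a hypothetical stand-in
% for one full PSPSO run returning the final cost.
run_pspso = @(c, N) rand;            % stub so the sketch executes
cGrid = 1.00:0.25:2.50;
summary = zeros(numel(cGrid), 3);    % [mean, std, time] per candidate c
for k = 1:numel(cGrid)
    tic;
    costs = arrayfun(@(r) run_pspso(cGrid(k), 50), 1:30);  % 30 runs, N = 50
    summary(k, :) = [mean(costs), std(costs), toc/30];
end
```
The same loop is then repeated over N with the chosen constant held fixed.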

2.3. Performance Tests

2.3.1. PSO Variants

Six PSO variants were used to compare the performance of the proposed PSPSO: PSO [16], CPSO [14], PSOCO [17], WICPSO [18], APSO [21], and the scout PSO (ScPSO) [23]. Each of these algorithms focuses on a particular component of PSO: the velocity rule, inertia weight, crossover, or mutation. The CPSO introduced the craziness probability to adjust the inertia weight. WICPSO improved the CPSO by introducing the pworst component in the velocity rule, but it was not tested for global search. Li et al.'s APSO considers the relative positions of the particles in the movement. Lastly, the ScPSO uses a mutation criterion based on the artificial bee colony (ABC) algorithm.
All the variants were coded in MATLAB R2020a on an Intel Core i5 at 3.2 GHz with 8 GB RAM. The particles are initialized with the Halton quasirandom point set to give a uniform distribution over the solution space and uniform starting conditions for all algorithms. Parameter settings for each algorithm are shown in Table 1. The number of particles (N) and the maximum iteration count are set to 50 and 1000 × dimension, respectively.
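For reference, a minimal sketch of this initialization follows (bounds illustrative); haltonset and net are part of MATLAB's Statistics and Machine Learning Toolbox:
```matlab
% Halton quasirandom initialization of N particles in D dimensions.
N = 50; D = 30; lb = -10; ub = 10;   % illustrative bounds
hs = haltonset(D);                   % D-dimensional Halton point set
X  = lb + (ub - lb)*net(hs, N);      % N points mapped from [0,1)^D to [lb,ub]^D
```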

2.3.2. Benchmark Functions

The list of unconstrained functions, with their corresponding properties taken from [24], can be found in Table 2. Moreover, five benchmark constrained optimization problems were also used for testing, as shown in Table 3.

2.3.3. Unit Commitment Problem

The PSPSO is likewise applied to the IEEE 10-unit unit commitment (UC) problem, which is a non-convex, large-scale, non-linear, mixed-integer combinatorial problem. There have been several attempts in the literature to solve the IEEE 10-unit UC problem using WICPSO [18], the hybrid genetic-imperialist competitive algorithm (HGICA) [25], GA [26], the quantum-inspired binary grey wolf optimizer (QBGWO) [27], the binary artificial sheep algorithm (BASA) [28], the binary grey wolf optimizer (BGWO) [29], and MILP-based stochastic multi-objective UC (SMOUC) using one scenario and a single-objective base case [30].
The UC problem deals with the 24-h scheduling of generators, deciding when a generator turns on or off (unit commitment decision) and how much power should be scheduled for each generator (economic dispatch decision) in order to obtain the least operating cost. The generator constants and loads are listed in Table 4 and Table 5. Table 6 lists the objective function and constraints of the UC problem. The objective (Table 6-a) is to minimize the sum of the fuel cost and start-up (SU) cost of the generators over the entire period, subject to seven system constraints. The total fuel cost at each hour t equals the sum of the fuel costs of each generator i given its fuel constants a, b, and c (Table 6-b). UCi is the commitment of generator i; a value of 1 means the generator is on, and 0 means it is off. The power balance of the system must be maintained at each hour, so that the sum of the power generated by all generators PDG equals the load demand PL at that hour (Table 6-c). Each generator's power output must not exceed its maximum (UB) and minimum (LB) operating limits (Table 6-d). Ramping power up or down must not exceed the generator's ramping limit RmpDG (Table 6-e). Each generator has a minimum up time (MUT) and minimum down time (MDT) during which it cannot turn off or on, respectively (Table 6-f). The start-up cost of each generator depends on its cold start hour (CSH) (Table 6-g). Finally, the scheduled reserve (10% of the load) must be available at each hour (Table 6-h).
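As a small worked check of Table 6-b and 6-c, the MATLAB sketch below evaluates the hour-1 fuel cost using the unit 1 and unit 2 constants from Table 4 and the hour-1 dispatch from Table 11; it reproduces the $13,683.13 figure reported there:
```matlab
% Hour-1 fuel cost (Table 6-b) for units 1 and 2; data from Table 4.
a  = [1000; 970]; b = [16.19; 17.26]; c = [0.00048; 0.00031];
UC = [1; 1];                         % both units committed at hour 1
P  = [455; 245];                     % dispatch from Table 11 (MW)
PL = 700;                            % hour-1 load from Table 5 (MW)

fuel = sum(UC .* (a + b.*P + c.*P.^2));     % = 1.3683e+04, matching Table 11
balanced = abs(sum(UC.*P) - PL) < 1e-6;     % power balance, Table 6-c
```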

2.3.4. Function Evaluation Tests

The test is conducted by running each function evaluation 30 times with 30, 40, and 50 unknown variables. The number of iterations is set to 1000 × dimension for all algorithms. The average value, standard deviation, and running time over the 30 runs are recorded for each function and algorithm.
The best algorithm for each function characteristic is determined by averaging the rank of every algorithm over all functions with that characteristic. The overall best algorithm is determined by averaging the rank of each algorithm over all functions, as sketched below.
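A minimal MATLAB sketch of this average-ranking computation, assuming a matrix M whose rows are functions and whose columns are algorithms (values illustrative; lower is better, rank 1 is best):
```matlab
% Average ranking across functions; M holds one metric per function (row)
% and algorithm (column). Values are illustrative.
M = [0.0     5.6e-296  6.7e2;
     5.0e-4  2.5e-2    1.7e4];
[~, order] = sort(M, 2);             % best-to-worst per function
ranks = zeros(size(M));
for f = 1:size(M, 1)
    ranks(f, order(f, :)) = 1:size(M, 2);   % convert order to ranks
end
avgRank = mean(ranks, 1);            % average rank of each algorithm
```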

3. Data and Results

3.1. Variables and Parameter Setting

Table 7 shows the results of varying the c1 = c2 value from 1.00 to 2.50 (in 0.25 increments) while keeping the number of particles constant at 50, and of varying the number of particles N (in increments of 10) while keeping c1 = c2 constant at 1.50. These settings are evaluated using all functions in Table 2 and Table 3 that have zero as the optimal value.
On the basis of the data in Table 7a, we can conclude that 1.50 is the most suitable constant, since it has the lowest mean, standard deviation, and computational time. Increasing the c value increases all the criteria values; decreasing it also increases them, though not as much.
Using 1.50 as the constant value, as seen in Table 7b, N = 50 can be selected as the number of particles for PSPSO, since increasing N does not give an appreciable change in the mean and standard deviation. Although N = 70 has the lowest mean and standard deviation, the increase in computational time is large compared with the decrease in mean and standard deviation.

3.2. PSPSO

Figure 2 shows the change in the fitness f(x) of a randomly selected particle together with its ω, γ, and λ. Notice that when the diversity of the particles is high at the start of the iterations (high λ and γ), ω is also at its higher values. When the particle moves closer to the best solution, the value of ω increases, which means the algorithm tries to explore more and does not approach the best solution too closely in the earlier iterations. This can happen at any iteration, as seen in the pulses in ω; these pulses are larger when a particle finds a bad solution, moving the particle away from it by giving it a larger ω. Convergence is also assured, as the value of ω decreases as the iterations increase.

3.3. Benchmark Functions

Figure 3 shows the average convergence over the 30 runs for the Rosenbrock, Pinter, G02, and Pathological functions for PSPSO and the other PSO variants. On almost all functions, PSPSO was able to find the solution within the first 25% of the total iterations. Of all the benchmark functions, only on (f12) Pathological and (f19) Styblinski-Tang did PSPSO fail to converge within 25% of the total iterations. However, for the Pathological function, PSPSO still outperformed the others by finding a better final solution.
The mean cost, standard deviation (std) of the costs, and average computational time in seconds over the 30 runs are recorded in Table 8 and Table 9 for each PSO variant on every benchmark function. The results of Table 8 and Table 9 are summarized in Table 10 by ranking the performance of each algorithm. PSPSO ranked first in almost all categories, except for separable, non-scalable, and multimodal functions, where it ranked second. PSOCO ranked second in almost all categories, performing better on discontinuous, separable, non-scalable, and multimodal functions but poorly on constrained functions. Overall, PSPSO ranked first, followed by PSOCO; CPSO and APSO tied for third, ScPSO and PSO tied for fifth, and WICPSO came last.
Figure 4a shows the average ranking of the algorithms with increasing dimension, where PSPSO almost always remained first in function evaluation, with slightly improved performance. Both APSO and PSPSO, with downward slopes, performed better as the dimension increased; equivalently, the other algorithms' performance worsened, especially CPSO, whose average ranking increased (upward slope). Figure 4b shows the average computational time of every variant with increasing dimension. The computational times of PSPSO, CPSO, WICPSO, ScPSO, and PSO were not far from each other, even as the dimension increased, while PSOCO always took more than twice the computational time of the other variants. The computational times of the PSO variants are magnified in Figure 4c; the effect of increasing dimension is linear for all of them but with different slopes. The proposed PSPSO had the lowest slope, followed by APSO, PSO, ScPSO, WICPSO, CPSO, and PSOCO. Although the average computational time of PSPSO was the lowest, its overall ranking in computational time was only third, as seen in Table 10. This means that PSPSO did not solve every function evaluation the fastest; most of the time, APSO or PSO solved a given function faster than PSPSO.

3.4. Unit Commitment Performance

Table 11 shows the UC result from the PSPSO, with a total UC cost of $563,934.16: a total fuel cost of $559,844.16 and a total start-up cost of $4,090. This is the same commitment of units reported in the other published literature, with discrepancies only in the dispatch and/or costs.
Table 12 compares the results of the PSPSO with the algorithms in the literature that also solved the 10-unit UC problem. Although the maximum iteration is 1000, the algorithm terminated at around the 400th iteration, with an average time of 3.57 s. The best result of the PSPSO reached the minimum solution found in the literature. The average cost of the algorithm was $104.33 more than the best cost, a 0.018% difference. Moreover, some algorithms were more than three times slower than the proposed PSPSO, and only one algorithm was faster.
Table 13 shows the results for multiples of the 10-unit system with the termination criterion changed to maximum iteration for all algorithms. In almost all cases, the PSPSO performed best. In the 30-unit and 60-unit systems, the WICPSO outperformed PSPSO in finding the best cost, yet PSPSO remained the best in average cost, worst cost, and computational time. The performance of WICPSO was expected because it has already been used to solve this test system in the literature, but the fact that PSPSO almost always demonstrates the best performance across all multiples, costs, and computational times makes it a viable and competitive algorithm for solving the unit commitment problem.

4. Discussion

The PSPSO showed very competitive results against the other known variants of PSO in both search quality and computational time. The PSPSO outperformed CPSO, WICPSO, ScPSO, and PSO on all function properties except non-scalable functions. Although PSOCO's search for the global optimum was better than that of the other variants, PSOCO had the worst computational time, at times twice that of a standard PSO, because its multiple crossover operations are computationally taxing. The PSPSO, on the other hand, computed faster than the other variants. This may be due to the algorithm's freedom in directing the particles; combining this concept with a crossover with another particle makes the particles move more intelligently through the solution space. This suggests that PSPSO can be an alternative to PSO and its variants.

5. Conclusions

In the literature, it has always been a disadvantage of PSO that it has no crossover operation, aside from the crossover with its global best solution, to prevent premature convergence. Although some variants have tried to include a crossover operation, the computational time is usually compromised, as in the case of PSOCO. The proposed PSPSO's γ and λ values summarize the characteristics of the current set of particles so that every particle gains some information about the other particles and continually searches for a better solution. Combining these values with the partitioned velocity rule provides an alternative crossover operation that does not compromise the computational time.
The results demonstrate the effectiveness of PSPSO compared with variants without a crossover operation across different types of functions. Its computational speed is likewise competitive with published variants. Even with increased dimension, the effectiveness of PSPSO is maintained, with a slight improvement in computational time. The PSPSO also proved effective in solving the IEEE 10-unit problem and is competitive with other algorithms found in the literature.

Author Contributions

Conceptualization, E.O. and C.-C.K.; methodology and software, E.O.; formal analysis, E.O. and C.-C.K.; writing—review and editing, C.-H.L.; supervision and project administration, C.-C.K. All authors have read and agreed to the published version of the manuscript.

Funding

The support of this research by the Ministry of Science and Technology of the Republic of China under Grant No. MOST108-3116-F-042A-004-CC2 is gratefully acknowledged.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

All data are provided in this manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Yang, X.-S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Beckington, UK, 2010. [Google Scholar]
  2. Abujarad, S.Y.; Mustafa, M.W.; Jamian, J.J. Recent approaches of unit commitment in the presence of intermittent renewable energy resources: A review. Renew. Sustain. Energy Rev. 2017, 70, 215–223. [Google Scholar] [CrossRef]
  3. Abdmouleh, Z.; Gastli, A.; Ben-Brahim, L.; Haouari, M.; Al-Emadi, N.A. Review of optimization techniques applied for the integration of distributed generation from renewable energy sources. Renew. Energy 2017, 113, 266–280. [Google Scholar] [CrossRef]
  4. Chou, J.-S.; Truong, D.-N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  5. Portilla-Flores, E.A.; Sanchez-Marquez, A.; Flores-Pulido, L.; Vega-Alvarado, E.; Yanez, M.B.C.; Aponte-Rodriguez, J.A.; Nino-Suarez, P.A. Enhancing the Harmony Search Algorithm Performance on Constrained Numerical Optimization. IEEE Access 2017, 5, 25759–25780. [Google Scholar] [CrossRef]
  6. Storn, R.; Price, K. Differential Evolution–A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  7. Mafarja, M.M.; Mirjalili, S. Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  8. Xiong, H.; Wu, Z.; Fan, H.; Li, G.; Jiang, G. Quantum rotation gate in quantum-inspired evolutionary algorithm: A review, analysis and comparison study. Swarm Evol. Comput. 2018, 42, 43–57. [Google Scholar] [CrossRef]
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  10. Kamboj, V.K.; Bath, S.K.; Dhillon, J.S. Implementation of hybrid harmony search/random search algorithm for single area unit commitment problem. Int. J. Electr. Power Energy Syst. 2016, 77, 228–249. [Google Scholar] [CrossRef]
  11. Nazari-Heris, M.; Fakhim-Babaei, A.; Mohammadi-Ivatloo, B. A novel hybrid harmony search and particle swarm optimization method for solving combined heat and power economic dispatch. In Proceedings of the 2017 Smart Grid Conference (SGC), Tehran, Iran, 20–21 December 2017; pp. 1–9. [Google Scholar] [CrossRef]
  12. Liang, Y.; Juarez, J.R.C. Harmony search and virus optimization algorithm for multi-objective combined economic energy dispatching problems. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 3947–3954. [Google Scholar] [CrossRef]
  13. Ocampo, E.; Huang, Y.-C.; Kuo, C.-C. Feasible Reserve in Day-Ahead Unit Commitment Using Scenario-Based Optimization. Energies 2020, 13, 5239. [Google Scholar] [CrossRef]
  14. Victoire, T.A.A.; Jeyakumar, A.E. Reserve constrained dynamic dispatch of units with valve-point effects. IEEE Trans. Power Syst. 2005, 20, 1273–1282. [Google Scholar] [CrossRef]
  15. Bonyadi, M.R.; Michalewicz, Z. Analysis of Stability, Local Convergence, and Transformation Sensitivity of a Variant of the Particle Swarm Optimization Algorithm. IEEE Trans. Evol. Comput. 2016, 20, 370–385. [Google Scholar] [CrossRef] [Green Version]
  16. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar] [CrossRef]
  17. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Li, T. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169. [Google Scholar] [CrossRef]
  18. Shukla, A.; Singh, S.N. Multi-objective unit commitment with renewable energy using hybrid approach. IET Renew. Power Gener. 2016, 10, 327–338. [Google Scholar] [CrossRef]
  19. Khokhar, B.; Parmar, K. A novel weight-improved particle swarm optimization for combined economic and emission dispatch problems. Int. J. Eng. Sci. Technol. 2012, 4, 2012. [Google Scholar]
  20. Sedighizadeh, D.; Masehian, E.; Sedighizadeh, M.; Akbaripour, H. GEPSO: A new generalized particle swarm optimization algorithm. Math. Comput. Simul. 2021, 179, 194–212. [Google Scholar] [CrossRef]
  21. Li, D.; Guo, W.; Lerch, A.; Li, Y.; Wang, L.; Wu, Q. An adaptive particle swarm optimizer with decoupled exploration and exploitation for large scale optimization. Swarm Evol. Comput. 2021, 60, 100789. [Google Scholar] [CrossRef]
  22. Han, H.; Lu, W.; Hou, Y.; Qiao, J. An Adaptive-PSO-Based Self-Organizing RBF Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 104–117. [Google Scholar] [CrossRef]
  23. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  24. Jamil, M.; Yang, X. A literature survey of benchmark functions for global optimisation problems. arXiv 2013, arXiv:abs/1308.4008. [Google Scholar] [CrossRef] [Green Version]
  25. Saber, N.A.; Salimi, M.; Mirabbasi, D. A priority list based approach for solving thermal unit commitment problem with novel hybrid genetic-imperialist competitive algorithm. Energy 2016, 117, 272–280. [Google Scholar] [CrossRef]
  26. Quan, H.; Srinivasan, D.; Khosravi, A. Integration of renewable generation uncertainties into stochastic unit commitment considering reserve and risk: A comparative study. Energy 2016, 103, 735–745. [Google Scholar] [CrossRef]
  27. Srikanth, K.; Panwar, L.K.; Panigrahi, B.K.; Herrera-Viedma, E.; Sangaiah, A.K.; Wang, G.-G. Meta-heuristic framework: Quantum inspired binary grey wolf optimizer for unit commitment problem. Comput. Electr. Eng. 2018, 70, 243–260. [Google Scholar] [CrossRef]
  28. Wang, W.; Li, C.; Liao, X.; Qin, H. Study on unit commitment problem considering pumped storage and renewable energy via a novel binary artificial sheep algorithm. Appl. Energy 2017, 187, 612–626. [Google Scholar] [CrossRef]
  29. Panwar, L.K.; Reddy, K.S.; Verma, A.; Panigrahi, B.K.; Kumar, R. Binary Grey Wolf Optimizer for large scale unit commitment problem. Swarm Evol. Comput. 2018, 38, 251–266. [Google Scholar] [CrossRef]
  30. Soltani, Z.; Ghaljehei, M.; Gharehpetian, G.B.; Aalami, H.A. Integration of smart grid technologies in stochastic multi-objective unit commitment: An economic emission analysis. Int. J. Electr. Power Energy Syst. 2018, 100, 565–590. [Google Scholar] [CrossRef]
  31. MATLAB. MATLAB Documenation: Solve a Constrained Non-Linear Problem. Available online: https://www.mathworks.com/help/optim/ug/example-nonlinear-constrained-minimization.html (accessed on 5 October 2020).
  32. Bird Problem (Constrained). Available online: https://web.archive.org/web/20150506143633/http://www.phoenixint.com/software/benchmark_report/index.php (accessed on 5 October 2020).
  33. Constrained Optimization in Chebfun. Available online: http://www.chebfun.org/examples/opt/ConstrainedOptimization.html (accessed on 5 October 2020).
  34. Ma, H.; Simon, D. Constrained Benchmark Functions. In Evolutionary Computation with Biogeography-based Optimization; ISTE Ltd and John Wiley & Sons, Inc: New York, NY, USA, 2017; pp. 265–287. [Google Scholar]
Figure 1. PSPSO (a) sample vector representation of the particle movement and (b) flowchart.
Figure 2. Sample constant values of a single particle during the first 200 iterations of f20 Trigonometric 2. (a) The function value of a randomly selected particle together with the other particles (in red); the blue line indicates the best value. The values of this particle's ω, γ, and λ are shown in (b-d), respectively.
Figure 3. Average convergence of partitioned step particle swarm optimization (PSPSO) compared with different variants of particle swarm optimization (PSO) for (a) Rosenbrock, (b) Pinter, (c) G02, and (d) Pathological.
Figure 4. PSO variants in increasing dimension for (a) average rank in function evaluation, and (b) average computational time. (c) Enlargement of (b).
Table 1. Parameters for compared algorithms.
Algorithm | Parameter Setting
PSO | c1 = c2 = 2, ωmin = 0.4, ωmax = 0.9
CPSO | c1 = c2 = 2, ωmin = 0.4, ωmax = 0.9
PSOCO | c = 1.49618, ω = 0.7298, CR = 0.05, G = 7
WICPSO | c1max = 2.2, c1min = 1.5, c2max = 2.2, c2min = 1.5, ωmin = 0.4, ωmax = 0.9
APSO | c1 = c2 = 2, L = 3, C = 1
ScPSO | c1 = c2 = 2, ωmin = 0.4, ωmax = 0.9
Table 2. Benchmark functions (unconstrained).
No. | Function Name | Characteristics | Range
f1 | Sphere | C, D, S, Sc, U | [−10, 10]
f2 | Rosenbrock | C, D, NS, Sc, U | [−30, 30]
f3 | Chung Reynolds | C, D, PS, Sc, U | [−100, 100]
f4 | Dixon and Price | C, D, NS, Sc, U | [−10, 10]
f5 | Powell Singular | C, D, NS, Sc, U | [−4, 5]
f6 | Step | DC, ND, S, Sc, U | [−100, 100]
f7 | Cosine Mixture | DC, ND, S, Sc, M | [−1, 1]
f8 | Ackley 1 | C, D, NS, Sc, M | [−35, 35]
f9 | Alpine | C, ND, S, NSc, M | [−10, 10]
f10 | Egg Holder | C, D, NS, Sc, M | [−512, 512]
f11 | Griewank | C, D, NS, Sc, M | [−100, 100]
f12 | Pathological | C, D, NS, NSc, M | [−100, 100]
f13 | Weierstrass | C, D, S, Sc, M | [−0.5, 0.5]
f14 | Schwefel | C, D, S, NSc, M | [−500, 500]
f15 | Pinter | C, D, NS, Sc, M | [−100, 100]
f16 | Qing | C, D, S, Sc, M | [−500, 500]
f17 | Quintic | C, D, S, NSc, M | [−10, 10]
f18 | Stretched V Sine Wave Function | C, D, NS, Sc, U | [−10, 10]
f19 | Styblinski-Tang | C, D, NS, NSc, M | [−5, 5]
f20 | Trigonometric 2 | C, D, NS, Sc, M | [−500, 500]
Nomenclature: Continuous (C), Differentiable (D), Discontinuous (DC), Multimodal (M), Non-Differentiable (ND), Non-Separable (NS), Non-Scalable (NSc), Partially Separable (PS), Separable (S), Scalable (Sc), and Unimodal (U).
Table 3. Benchmark functions (constrained).
No. | Function Name | Dimension
c1 | Rosenbrock [31] | 2
c2 | Bird Problem [32] | 2
c3 | Townsend [33] | 2
c4 | G01 [34] | 13
c5 | G02 [34] | 20
Table 4. Generator data.
Gen | Max OP (MW) | Min OP (MW) | a ($) | b ($/MW) | c ($/MW²) | Hot Start Cost (HSC) | Cold Start Cost (CSC) | Cold Start Hour (CSH) | MUT (h) | MDT (h) | tini (h) | Ramp Up/Down (MW)
1 | 455 | 150 | 1000 | 16.19 | 0.00048 | 4500 | 9000 | 5 | 8 | 8 | 8 | 160
2 | 455 | 150 | 970 | 17.26 | 0.00031 | 5000 | 10,000 | 5 | 8 | 8 | 8 | 160
3 | 130 | 20 | 700 | 16.60 | 0.00200 | 550 | 1100 | 5 | 5 | 4 | −5 | 100
4 | 130 | 20 | 680 | 16.50 | 0.00211 | 560 | 1120 | 5 | 5 | 4 | −5 | 100
5 | 162 | 25 | 450 | 19.70 | 0.00398 | 900 | 1800 | 6 | 6 | 4 | −6 | 100
6 | 80 | 20 | 370 | 22.25 | 0.00712 | 170 | 340 | 2 | 3 | 3 | −3 | 60
7 | 85 | 25 | 480 | 27.74 | 0.00079 | 260 | 520 | 2 | 3 | 3 | −3 | 60
8 | 55 | 10 | 660 | 25.92 | 0.00413 | 30 | 60 | 0 | 1 | 1 | −1 | 40
9 | 55 | 10 | 665 | 27.27 | 0.00222 | 30 | 60 | 0 | 1 | 1 | −1 | 40
10 | 55 | 10 | 670 | 27.79 | 0.00173 | 30 | 60 | 0 | 1 | 1 | −1 | 40
Table 5. Load data.
Hour | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Load | 700 | 750 | 850 | 950 | 1000 | 1100 | 1150 | 1200 | 1300 | 1400 | 1450 | 1500
Hour | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24
Load | 1400 | 1300 | 1200 | 1050 | 1000 | 1100 | 1200 | 1400 | 1300 | 1100 | 900 | 800
Table 6. Ten-unit unit commitment problem.
Objective:
$\min \sum_{t=1}^{T} \left( Fuel_t + SU_t \right)$ (6-a)
subject to:
$Fuel_t = \sum_{i=1}^{ngen} UC_i(t) \left[ a_i + b_i P_{DG,i}(t) + c_i P_{DG,i}^2(t) \right]$ (6-b)
$\sum_{i=1}^{ngen} P_{DG,i}(t) = P_L(t)$ (6-c)
$LB_i(t) \le P_{DG,i}(t) \le UB_i(t)$ (6-d)
$\left| P_{DG,i}(t) - P_{DG,i}(t-1) \right| \le Rmp_{DG,i}$ (6-e)
$t_{ON,i} \ge MUT_i; \quad t_{OFF,i} \ge MDT_i$ (6-f)
$SU_i(t) = \begin{cases} HSC_i, & MDT_i \le t_{OFF,i} \le MDT_i + CSH_i \\ CSC_i, & t_{OFF,i} > MDT_i + CSH_i \end{cases}$ (6-g)
$\sum_{i=1}^{ngen} \left[ UC_i(t) \, P_{max,i}(t) - P_{DG,i}(t) \right] \ge SR(t)$ (6-h)
Table 7. Parameter setting results using functions with zero as global minima.
(a) N = 50
c | Mean | Std Dev | Time (s)
1.00 | 65.6091 | 14.3104 | 8.773
1.25 | 30.4968 | 29.3233 | 8.020
1.50 | 0.0447 | 0.0006 | 7.625
1.75 | 2.5755 | 0.7062 | 9.687
2.00 | 13,050.09 | 15,406.5 | 10.373
2.25 | 5.69 × 10^29 | 3.11 × 10^30 | 10.358
2.50 | 1.07 × 10^36 | 2.64 × 10^36 | 10.107
(b) c1 = c2 = 1.50
N | Mean | Std Dev | Time (s)
30 | 1.0940 | 5.1989 | 5.303
40 | 0.0546 | 0.0282 | 6.353
50 | 0.0446 | 0.0006 | 7.591
60 | 0.0445 | 0.0002 | 8.868
70 | 0.0444 | 7.16 × 10^-6 | 10.131
80 | 0.0445 | 1.37 × 10^-4 | 11.622
90 | 0.0445 | 9.62 × 10^-5 | 12.995
Note: the chosen parameters (shown in bold in the original) are c = 1.50 and N = 50.
Table 8. Results of benchmark unconstrained functions (D = 30).
Fn | Metric | PSPSO | PSOCO | CPSO | WICPSO | APSO | ScPSO | PSO
f1 | mean | 0.0 × 10^0 | 5.6 × 10^-296 | 6.7 × 10^2 | 7.0 × 10^4 | 3.9 × 10^2 | 1.4 × 10^-208 | 3.3 × 10^2
f1 | std | 0.0 × 10^0 | 0.0 × 10^0 | 2.5 × 10^3 | 2.2 × 10^3 | 1.8 × 10^3 | 0.0 × 10^0 | 1.8 × 10^3
f1 | time | 2.597 | 5.746 | 2.84 | 3.264 | 1.805 | 2.423 | 1.778
f2 | mean | 5.0 × 10^-4 | 2.5 × 10^-2 | 1.7 × 10^4 | 8.9 × 10^5 | 4.6 × 10^4 | 1.5 × 10^5 | 1.1 × 10^5
f2 | std | 1.6 × 10^-3 | 3.5 × 10^-2 | 2.5 × 10^4 | 7.2 × 10^4 | 5.7 × 10^4 | 7.6 × 10^4 | 7.2 × 10^4
f2 | time | 2.766 | 6.956 | 3.028 | 3.4 | 1.957 | 2.76 | 2.058
f3 | mean | 0.0 × 10^0 | 0.0 × 10^0 | 3.3 × 10^6 | 4.9 × 10^9 | 3.3 × 10^6 | 1.0 × 10^7 | 3.3 × 10^6
f3 | std | 0.0 × 10^0 | 0.0 × 10^0 | 1.8 × 10^7 | 3.2 × 10^8 | 1.8 × 10^7 | 3.1 × 10^7 | 1.8 × 10^7
f3 | time | 2.712 | 5.459 | 2.796 | 3.292 | 1.867 | 2.488 | 1.798
f4 | mean | 6.7 × 10^-1 | 3.2 × 10^-1 | 3.3 × 10^4 | 1.7 × 10^6 | 2.4 × 10^1 | 5.7 × 10^4 | 4.5 × 10^4
f4 | std | 0.0 × 10^0 | 3.3 × 10^-1 | 5.0 × 10^4 | 3.1 × 10^4 | 3.8 × 10^1 | 6.5 × 10^4 | 6.1 × 10^4
f4 | time | 2.665 | 9.563 | 3.536 | 3.435 | 1.992 | 2.889 | 2.214
f5 | mean | 2.3 × 10^-7 | 3.5 × 10^-4 | 5.7 × 10^2 | 5.0 × 10^3 | 6.5 × 10^1 | 9.4 × 10^2 | 8.9 × 10^2
f5 | std | 1.5 × 10^-7 | 1.2 × 10^-4 | 3.4 × 10^2 | 6.0 × 10^2 | 6.8 × 10^1 | 6.7 × 10^2 | 6.9 × 10^2
f5 | time | 6.293 | 19.669 | 7.419 | 6.363 | 5.773 | 6.217 | 5.314
f6 | mean | 0.0 × 10^0 | 0.0 × 10^0 | 1.0 × 10^3 | 7.0 × 10^4 | 3.6 × 10^2 | 3.3 × 10^2 | 6.7 × 10^2
f6 | std | 0.0 × 10^0 | 0.0 × 10^0 | 3.1 × 10^3 | 1.5 × 10^3 | 1.8 × 10^3 | 1.8 × 10^3 | 2.5 × 10^3
f6 | time | 3.698 | 9.278 | 6.12 | 5.21 | 2.908 | 4.684 | 3.67
f7 | mean | −1.8 × 10^0 | −1.8 × 10^0 | −1.8 × 10^0 | −1.8 × 10^0 | −1.8 × 10^0 | −1.8 × 10^0 | −1.8 × 10^0
f7 | std | 9.0 × 10^-16 | 9.0 × 10^-16 | 9.0 × 10^-16 | 9.0 × 10^-16 | 9.0 × 10^-16 | 9.0 × 10^-16 | 9.0 × 10^-16
f7 | time | 0.094 | 0.252 | 0.089 | 0.11 | 0.073 | 0.091 | 0.065
f8 | mean | 3.7 × 10^-15 | 3.8 × 10^-15 | 2.0 × 10^1 | 2.0 × 10^1 | 1.8 × 10^1 | 2.0 × 10^1 | 2.0 × 10^1
f8 | std | 6.5 × 10^-16 | 9.0 × 10^-16 | 2.1 × 10^-3 | 8.2 × 10^-4 | 5.1 × 10^0 | 1.5 × 10^-3 | 1.7 × 10^-3
f8 | time | 4.015 | 9.64 | 7.242 | 5.462 | 3.545 | 5.993 | 4.942
f9 | mean | 5.8 × 10^-20 | 4.2 × 10^-9 | 5.2 × 10^0 | 5.3 × 10^1 | 4.1 × 10^-1 | 6.0 × 10^0 | 5.3 × 10^0
f9 | std | 3.2 × 10^-19 | 1.6 × 10^-8 | 4.2 × 10^0 | 3.6 × 10^-14 | 1.0 × 10^0 | 4.0 × 10^0 | 4.3 × 10^0
f9 | time | 4.089 | 11.099 | 5.508 | 5.503 | 3.254 | 4.73 | 3.74
f10 | mean | −959.6 | −958 | −959.6 | −959.6 | −959.6 | −943.5 | −951.6
f10 | std | 5.8 × 10^-13 | 5.4 × 10^0 | 5.8 × 10^-13 | 2.8 × 10^-5 | 5.8 × 10^-13 | 6.1 × 10^1 | 4.4 × 10^1
f10 | time | 0.096 | 0.267 | 0.09 | 0.113 | 0.078 | 0.096 | 0.07
f11 | mean | 1.3 × 10^-3 | 0.0 × 10^0 | 1.9 × 10^-1 | 1.8 × 10^1 | 2.8 × 10^-1 | 1.9 × 10^-2 | 1.8 × 10^-2
f11 | std | 3.0 × 10^-3 | 0.0 × 10^0 | 6.8 × 10^-1 | 6.3 × 10^-1 | 7.2 × 10^-1 | 2.5 × 10^-2 | 2.0 × 10^-2
f11 | time | 4.741 | 10.487 | 5.596 | 5.933 | 3.107 | 4.845 | 3.837
f12 | mean | 2.2 × 10^0 | 3.6 × 10^0 | 5.5 × 10^0 | 4.3 × 10^0 | 5.3 × 10^0 | 6.4 × 10^0 | 6.4 × 10^0
f12 | std | 6.9 × 10^-1 | 3.9 × 10^-1 | 1.0 × 10^0 | 4.7 × 10^-1 | 8.7 × 10^-1 | 7.5 × 10^-1 | 7.0 × 10^-1
f12 | time | 4.678 | 16.267 | 8.084 | 5.603 | 3.875 | 6.406 | 5.487
f13 | mean | 0.0 × 10^0 | 6.9 × 10^-1 | 2.3 × 10^0 | 5.2 × 10^1 | 7.1 × 10^0 | 3.6 × 10^0 | 3.2 × 10^0
f13 | std | 0.0 × 10^0 | 4.6 × 10^-1 | 2.5 × 10^0 | 6.1 × 10^-1 | 4.5 × 10^0 | 3.0 × 10^0 | 3.5 × 10^0
f13 | time | 58.365 | 305.53 | 194.078 | 76.381 | 86.413 | 92.411 | 91.694
f14 | mean | 1.2 × 10^-219 | 5.6 × 10^-165 | 7.3 × 10^2 | 1.9 × 10^40 | 6.1 × 10^2 | 7.7 × 10^2 | 7.2 × 10^2
f14 | std | 0.0 × 10^0 | 0.0 × 10^0 | 2.1 × 10^2 | 1.4 × 10^40 | 2.1 × 10^2 | 2.2 × 10^2 | 1.9 × 10^2
f14 | time | 3.616 | 8.159 | 4.998 | 4.067 | 2.298 | 3.884 | 2.982
f15 | mean | 0.0 × 10^0 | 2.8 × 10^1 | 2.3 × 10^3 | 1.8 × 10^4 | 1.6 × 10^3 | 1.9 × 10^3 | 2.3 × 10^3
f15 | std | 0.0 × 10^0 | 1.5 × 10^2 | 1.1 × 10^3 | 2.1 × 10^2 | 1.3 × 10^3 | 9.4 × 10^2 | 9.5 × 10^2
f15 | time | 10.028 | 19.846 | 8.521 | 11.07 | 6.231 | 7.882 | 7.23
f16 | mean | 1.3 × 10^-21 | 1.2 × 10^-28 | 1.4 × 10^-28 | 2.2 × 10^11 | 9.7 × 10^-11 | 1.5 × 10^-28 | 1.5 × 10^-28
f16 | std | 7.3 × 10^-21 | 0.0 × 10^0 | 2.9 × 10^-29 | 2.2 × 10^9 | 5.2 × 10^-10 | 5.1 × 10^-29 | 5.5 × 10^-29
f16 | time | 4.017 | 9.733 | 4.133 | 4.227 | 2.341 | 3.385 | 2.55
f17 | mean | 1.5 × 10^-17 | 0.0 × 10^0 | 1.6 × 10^-15 | 2.3 × 10^5 | 5.3 × 10^-1 | 4.6 × 10^-16 | 2.0 × 10^-15
f17 | std | 8.1 × 10^-17 | 0.0 × 10^0 | 2.3 × 10^-15 | 1.6 × 10^4 | 1.9 × 10^0 | 1.1 × 10^-15 | 3.7 × 10^-15
f17 | time | 9.248 | 64.709 | 21.968 | 23.553 | 21.128 | 20.731 | 19.947
f18 | mean | 0.0 × 10^0 | 2.3 × 10^-37 | 2.0 × 10^-38 | 3.0 × 10^-1 | 0.0 × 10^0 | 2.3 × 10^-38 | 2.6 × 10^-38
f18 | std | 0.0 × 10^0 | 9.9 × 10^-37 | 4.2 × 10^-38 | 9.2 × 10^-2 | 0.0 × 10^0 | 4.5 × 10^-38 | 4.6 × 10^-38
f18 | time | 0.118 | 0.335 | 0.109 | 0.13 | 0.077 | 0.114 | 0.088
f19 | mean | −1.1 × 10^3 | −1.2 × 10^3 | −1.2 × 10^3 | −4.1 × 10^2 | −1.1 × 10^3 | −1.2 × 10^3 | −1.2 × 10^3
f19 | std | 1.7 × 10^1 | 4.2 × 10^-13 | 1.0 × 10^1 | 7.9 × 10^0 | 4.1 × 10^1 | 1.1 × 10^1 | 1.1 × 10^1
f19 | time | 9.642 | 29.122 | 9.991 | 10.628 | 3.294 | 9.697 | 8.744
f20 | mean | 1.0 × 10^0 | 1.0 × 10^0 | 1.7 × 10^4 | 1.8 × 10^6 | 1.7 × 10^4 | 1.7 × 10^4 | 4.5 × 10^0
f20 | std | 1.0 × 10^-16 | 9.2 × 10^-17 | 6.3 × 10^4 | 4.1 × 10^4 | 6.3 × 10^4 | 6.3 × 10^4 | 3.4 × 10^0
f20 | time | 5.278 | 10.321 | 6.108 | 15.027 | 3.591 | 5.658 | 4.728
Note: time is in seconds.
Table 9. Results of benchmark constrained functions.
Fn | Metric | PSPSO | PSOCO | CPSO | WICPSO | APSO | ScPSO | PSO
c1 | mean | 7.90 × 10^-28 | 8.25 × 10^-6 | 2.53 × 10^-29 | 1.65 × 10^-3 | 1.00 × 10^-29 | 2.92 × 10^-28 | 6.23 × 10^-28
c1 | std | 4.28 × 10^-27 | 1.30 × 10^-5 | 7.39 × 10^-29 | 1.33 × 10^-3 | 4.00 × 10^-29 | 8.95 × 10^-28 | 1.89 × 10^-27
c1 | time | 0.091 | 0.224 | 0.069 | 0.091 | 0.059 | 0.075 | 0.05
c2 | mean | −1.07 × 10^2 | −1.07 × 10^2 | −1.07 × 10^2 | −1.07 × 10^2 | −1.10 × 10^2 | −1.07 × 10^2 | −1.07 × 10^2
c2 | std | 2.81 × 10^-14 | 3.82 × 10^-14 | 2.84 × 10^-14 | 2.27 × 10^-1 | 3.90 × 10^-14 | 3.07 × 10^-14 | 3.02 × 10^-14
c2 | time | 0.127 | 0.277 | 0.084 | 0.11 | 0.072 | 0.089 | 0.064
c3 | mean | −2.13 × 10^0 | −2.08 × 10^0 | −2.13 × 10^0 | −2.13 × 10^0 | −2.10 × 10^0 | −2.13 × 10^0 | −2.13 × 10^0
c3 | std | 1.17 × 10^-15 | 1.42 × 10^-1 | 8.96 × 10^-16 | 3.83 × 10^-4 | 6.00 × 10^-16 | 9.72 × 10^-16 | 1.06 × 10^-15
c3 | time | 0.095 | 0.285 | 0.084 | 0.109 | 0.081 | 0.09 | 0.065
c4 | mean | −1.50 × 10^1 | −1.50 × 10^1 | −9.77 × 10^0 | −8.15 × 10^0 | −1.10 × 10^1 | −6.63 × 10^0 | −6.43 × 10^0
c4 | std | 0.00 × 10^0 | 0.00 × 10^0 | 2.21 × 10^0 | 2.61 × 10^0 | 2.40 × 10^0 | 1.69 × 10^0 | 1.43 × 10^0
c4 | time | 1.207 | 3.389 | 1.557 | 1.298 | 0.841 | 1.36 | 1.027
c5 | mean | −7.96 × 10^-1 | −7.83 × 10^-1 | −5.71 × 10^-1 | −1.54 × 10^-1 | −7.20 × 10^-1 | −6.00 × 10^-1 | −6.32 × 10^-1
c5 | std | 1.10 × 10^-2 | 8.73 × 10^-3 | 1.31 × 10^-1 | 9.32 × 10^-3 | 9.80 × 10^-2 | 1.50 × 10^-1 | 1.28 × 10^-1
c5 | time | 4.2 | 10.678 | 3.923 | 4.018 | 3.301 | 3.746 | 3.221
Note: time is in seconds.
Table 10. Ranking of each PSO variant (D = 30).
 | PSPSO | PSOCO | CPSO | WICPSO | APSO | ScPSO | PSO
Unconstrained function property
Continuous | 1 | 2 | 3 | 7 | 4 | 6 | 4
Discontinuous | 1 | 1 | 6 | 7 | 4 | 3 | 4
Differentiable | 1 | 2 | 3 | 7 | 5 | 5 | 4
Non-differentiable | 1 | 2 | 5 | 7 | 3 | 4 | 5
Separable | 2 | 1 | 4 | 7 | 6 | 3 | 5
Non-separable | 1 | 2 | 3 | 7 | 4 | 6 | 5
Scalable | 1 | 2 | 3 | 7 | 4 | 6 | 5
Non-scalable | 2 | 1 | 3 | 7 | 6 | 5 | 4
Unimodal | 1 | 2 | 4 | 7 | 3 | 5 | 5
Multimodal | 2 | 1 | 3 | 7 | 6 | 5 | 4
Constrained functions | 1 | 4 | 2 | 7 | 3 | 4 | 6
Overall Ranking in Function Evaluation | 1 | 2 | 3 | 7 | 3 | 5 | 5
Overall Ranking in Computational Time | 3 | 7 | 5 | 6 | 1 | 4 | 2
Table 11. Unit commitment schedule and costs.
Hour | U1 | U2 | U3 | U4 | U5 | U6 | U7 | U8 | U9 | U10 | Fuel Cost ($) | SU Cost ($)
1 | 455 | 245 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13,683.13 | 0
2 | 455 | 295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14,554.50 | 0
3 | 455 | 370 | 0 | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 16,809.45 | 900
4 | 455 | 455 | 0 | 0 | 40 | 0 | 0 | 0 | 0 | 0 | 18,597.67 | 0
5 | 455 | 390 | 0 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 20,020.02 | 560
6 | 455 | 360 | 130 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 22,387.04 | 1100
7 | 455 | 410 | 130 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 23,261.98 | 0
8 | 455 | 455 | 130 | 130 | 30 | 0 | 0 | 0 | 0 | 0 | 24,150.34 | 0
9 | 455 | 455 | 130 | 130 | 85 | 20 | 25 | 0 | 0 | 0 | 27,250.86 | 860
10 | 455 | 455 | 130 | 130 | 162 | 33 | 25 | 10 | 0 | 0 | 30,057.22 | 60
11 | 455 | 455 | 130 | 130 | 162 | 73 | 25 | 10 | 10 | 0 | 31,915.33 | 60
12 | 455 | 455 | 130 | 130 | 162 | 80 | 25 | 43 | 10 | 10 | 33,889.36 | 60
13 | 455 | 455 | 130 | 130 | 162 | 33 | 25 | 10 | 0 | 0 | 30,057.22 | 0
14 | 455 | 455 | 130 | 130 | 85 | 20 | 25 | 0 | 0 | 0 | 27,250.86 | 0
15 | 455 | 455 | 130 | 130 | 30 | 0 | 0 | 0 | 0 | 0 | 24,150.34 | 0
16 | 455 | 310 | 130 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 21,513.66 | 0
17 | 455 | 260 | 130 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 20,641.82 | 0
18 | 455 | 360 | 130 | 130 | 25 | 0 | 0 | 0 | 0 | 0 | 22,387.04 | 0
19 | 455 | 455 | 130 | 130 | 30 | 0 | 0 | 0 | 0 | 0 | 24,150.34 | 0
20 | 455 | 455 | 130 | 130 | 162 | 33 | 25 | 10 | 0 | 0 | 30,057.22 | 490
21 | 455 | 455 | 130 | 130 | 85 | 20 | 25 | 0 | 0 | 0 | 27,250.86 | 0
22 | 455 | 455 | 0 | 0 | 145 | 20 | 25 | 0 | 0 | 0 | 22,735.32 | 0
23 | 455 | 425 | 0 | 0 | 20 | 0 | 0 | 0 | 0 | 0 | 17,645.16 | 0
24 | 455 | 345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15,427.42 | 0
Total | | | | | | | | | | | 559,844.16 | 4,090
Total Cost ($): 563,934.16
Table 12. Benchmark unit commitment (UC) problem.
(a) Without Ramp Constraint
Approach | Best ($) | Average ($) | Worst ($) | Avg. Time (s) | Max Iter | n
WICPSO [18] | 563,937.69 | - | - | 39.6 | 100 | 10
HGICA [25] | 563,935.31 | - | - | 2.44 | 325 | 5
GA [26] | 563,937.69 | - | - | - | 300 | 64
QBGWO [27] | 563,936.30 | 563,963.30 | 563,936.30 | 62.3 | 500 | 30
BASA [28] | 563,937 | 563,937 | 563,937 | - | - | -
BGWO [29] | 563,937.30 | 563,937.30 | 563,937.30 | 66.15 | 500 | 30
MILP-SMOUC [30] | 563,937 | - | - | - | - | -
PSPSO | 563,934.17 | 564,112.10 | 564,357.63 | 3.57 | 1000 | 50
(b) With Ramp Constraint
Approach | Best ($) | Average ($) | Worst ($) | Avg. Time (s) | Max Iter | n
HGICA [25] | 564,186.63 | - | - | 1.61 | 325 | 5
MILP-SMOUC [30] | 564,326.80 | - | - | 1.08 | - | -
PSPSO | 564,111.36 | 564,178.22 | 564,563.00 | 5.9 | 1000 | 50
Table 13. System cost ($) and computational time (s) of the 10-unit UC problem with increased dimension compared with other PSO variants.
 | PSO | CPSO | PSOCO | WICPSO | PSPSO
10-unit DUC
best | 564,112 | 564,278 | 566,485 | 563,974 | 563,934
average | 564,966 | 564,892 | 569,874 | 564,623 | 564,039
worst | 566,891 | 566,847 | 572,136 | 566,127 | 564,214
time | 24.05 | 21.93 | 35.86 | 19.94 | 7.24
30-unit DUC
best | 1,124,723 | 1,125,119 | 1,134,358 | 1,124,639 | 1,124,663
average | 1,126,983 | 1,126,516 | 1,140,327 | 1,126,166 | 1,125,221
worst | 1,131,985 | 1,129,734 | 1,144,489 | 1,128,543 | 1,126,375
time | 46.01 | 42.26 | 65.07 | 38.14 | 27.80
40-unit DUC
best | 2,249,498 | 2,250,868 | 2,273,559 | 2,246,659 | 2,245,909
average | 2,254,239 | 2,255,014 | 2,282,112 | 2,249,593 | 2,248,930
worst | 2,257,990 | 2,260,150 | 2,290,050 | 2,252,665 | 2,250,830
time | 96.10 | 87.31 | 131.00 | 86.45 | 85.65
60-unit DUC
best | 3,379,482 | 3,376,946 | 3,407,486 | 3,367,881 | 3,368,371
average | 3,384,379 | 3,385,574 | 3,420,476 | 3,373,715 | 3,371,589
worst | 3,392,098 | 3,392,595 | 3,430,814 | 3,379,109 | 3,375,929
time | 164.85 | 152.47 | 217.43 | 176.86 | 146.83
80-unit DUC
best | 4,508,627 | 4,509,984 | 4,550,595 | 4,495,549 | 4,495,053
average | 4,517,806 | 4,521,129 | 4,567,651 | 4,502,553 | 4,497,875
worst | 4,533,652 | 4,536,629 | 4,585,982 | 4,509,585 | 4,500,931
time | 235.21 | 214.21 | 295.07 | 240.66 | 235.68
100-unit DUC
best | 5,633,268 | 5,643,919 | 5,691,819 | 5,622,310 | 5,619,793
average | 5,652,797 | 5,657,421 | 5,719,049 | 5,630,848 | 5,626,222
worst | 5,669,649 | 5,669,151 | 5,826,662 | 5,636,712 | 5,631,809
time | 318.28 | 280.12 | 368.28 | 309.69 | 319.69
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
