Article

Dimension-Wise Particle Swarm Optimization: Evaluation and Comparative Analysis

1
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
2
Department of Applied Cybernetics, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(13), 6201; https://doi.org/10.3390/app11136201
Submission received: 1 June 2021 / Revised: 25 June 2021 / Accepted: 1 July 2021 / Published: 4 July 2021
(This article belongs to the Special Issue New Challenges in Evolutionary Computation)

Abstract

This article evaluates a recently introduced algorithm that adjusts each dimension in particle swarm optimization semi-independently and compares it with traditional particle swarm optimization. The comparison is also extended to differential evolution and a genetic algorithm. This comparative study provides a clear exposition of the effects introduced by the proposed algorithm. All optimizers are evaluated on how well they find the global minima of 24 multi-dimensional benchmark functions, each having 7, 14, or 21 dimensions. Each algorithm is put through a session of self-tuning with 100 iterations to ensure convergence of its optimization parameters. The results confirm that the new variant is a significant improvement over the traditional algorithm. It also obtained notably better results than differential evolution when applied to problems with high-dimensional spaces relative to the number of available particles.

1. Introduction

Particle swarm optimization (PSO) algorithms are reputed for optimizing complex multidimensional problems with a balanced trade-off between convergence reliability and computational efficiency. PSO relies on momentum, local/personal best attraction, and global best attraction to find the global optimum. Although PSO tends to be slightly slower than differential evolution (DE), it can produce similar or even slightly better convergence rates depending on the problem at hand [1,2]. PSO also tends to be faster and more efficient than genetic algorithms (GAs), which rely heavily on random mutation and crossover.
The work reported in this article builds on prior work where PSO was modified to optimize each dimension semi-independently [3]. This variant, called dimension-wise particle swarm optimization (DPSO), aims to increase the convergence reliability at the cost of some additional time complexity. The focus of the original article [3] was on how varying the ratio of particles to dimensions and per-dimension sensitivity—henceforth referred to as ill-conditioning—affected the algorithm’s ability to find the global optimum. These evaluations were carried out with scarce populations, i.e., with more dimensions than particles, which increased the difficulty and importance of exploration. The same article also introduced self-tuning as a method of PSO parameter selection. These configurations are carried over to this work with some changes in the rules, an inclusion of DE and GA in the evaluation, and an increase in the number of functions to be optimized.
An in-sample evaluation with some degree of similarity to each out-of-sample function is used for each optimizer’s parameter optimization process. The most suitable optimizer parameters are determined using self-tuning. This way, the algorithm of interest can explore and change its parameters during the in-sample test should it find a statistically better set w.r.t. fitness. For the out-of-sample tests, each algorithm must rely on the generally optimized parameters derived from the self-tuning process. The results are evaluated based on the statistical mean and standard deviation of each algorithm’s global optimum results for the 24 problem functions used. These benchmark functions are composed of 7-, 14-, and 21-dimensional spaces with randomly generated offsets, ill-conditioning, rotation, and overlaps. To ensure the results are reliable, each problem function is randomly generated and evaluated 30 times.

Related Work

PSO has a wide range of modifications, which address specific aspects of the algorithm or the problem of interest. PSO can be improved by integrating other algorithms into its process—such as neural networks and support vector machines—or by modifying the fundamental rules for particle movement [4,5,6,7,8]. For relatively high dimensional problems, it is usually preferred to selectively optimize a subset of dimensions at a time [9]. By occasionally changing the subset, progressive convergence towards the overall global optimum can be improved.
DPSO differs from other variants such as dual gbest PSO (DGPSO) in that it does not rely on preprocessing steps to identify, order, and select the dimensions with the greatest sensitivity [9]. A simpler method of feature selection borrows from the GA, where crossover is used to combine the particle’s current location, a predetermined relevant feature vector, the global best position, and its local best position. However, this method looks ahead, evaluating the three candidate locations before choosing the best one as the new individual, effectively tripling the population size during evaluation [10]. PSO with an enhanced learning strategy and crossover operator uses a weighted sum of local bests over all particles, based on a normalized fitness distribution, to achieve variations in attractor locations [11]. DGPSO follows a divide-and-conquer strategy, while PSO with crossover hybridizes features from GA into PSO. These methods improve PSO’s ability to find the optimum; however, many come at the expense of notably larger preprocessing requirements or a superficially increased population size. The aim of DPSO is to improve fitness with relatively small increases in computational demand by limiting the scale of the modifications. DPSO’s approach is less complicated, randomly selecting dimensions to which the global and local best attractions are applied based on a fixed probability value. The number of selected dimensions is not fixed, nor is the selection based on pre-processed information about the local search space, thus maintaining a relatively low computational cost. The binary nature of DPSO’s attractors is akin to casting a net-like structure of potential attraction points on the search space based solely on the particles’ current locations and the locations of the local and global best points; i.e., it requires neither pre-processing nor a look-ahead feature.
The velocity limitations imposed on DPSO are similar to the boundary restrictions of the population migration PSO (PMPSO). However, DPSO’s limits do not change in scale over time and only restrict each particle’s maximum viable search space for the next iteration relative to its current location [12]. Velocity restriction-based improvised PSO (VRIPSO) uses a dynamic method of limiting velocity and relies on an escape velocity mechanism to explore beyond its velocity limit [13]. In contrast, DPSO warps individuals who appear to be stagnating, while VRIPSO occasionally allows particles to escape the imposed velocity limit. Warping has been applied to all algorithms in this article as a general rule to ensure that it does not become a primary difference in results. Although VRIPSO’s approach may allow for faster convergence in some circumstances, DPSO’s warping condition recycles particles to increase the exploration of dimensions that may not have been sufficiently covered.
Tabu search PSO has an interesting feature, where the fitness results of particles and the respective locations are saved for a specified number of iterations [8]. This type of logging has been applied for all algorithms covered in this article but without removing older records. By recording the fitness of a location for later, it is possible to skip the function evaluation step and, over time, to significantly reduce the time spent on iterations with repeated positions. An element of reserving the position was also included in all algorithms to ensure that multiple particles on the same position would only require one shared evaluation.
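This fitness logging can be sketched as a simple memoization wrapper. The following is an illustrative Python fragment, not the authors' code; the helper name (`make_cached`) and the use of six-decimal rounding as the cache key (matching the recording resolution described in Section 2) are assumptions:

```python
import numpy as np

def make_cached(f):
    """Wrap a fitness function so repeated positions share one evaluation.

    Positions are rounded to six decimals to form the cache key (an
    assumption, matching the recording resolution described in Section 2).
    """
    log = {}
    def cached(p):
        key = tuple(np.round(np.asarray(p), 6))
        if key not in log:
            log[key] = f(p)   # evaluate only on a cache miss
        return log[key]
    return cached
```

With such a wrapper, several particles occupying the same position trigger only one shared evaluation, mirroring the position-reservation rule described above.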
Another feature found in VRIPSO is that it adjusts momentum over time [13]. Continuous PSO has a more in-depth version of this momentum feature in that it determines a gradient by which to adjust the momentum factors, encouraging movement in directions with greater improvements and not just in the current direction [14]. Though these dynamic methods are appealing, DPSO was limited to using random noise injection, which can either dampen or excite the particle’s movement irrespective of the perceived gradient or its direction of travel. These random perturbations allow for greater degrees of exploration along the dimensions lacking in representation or for which movement has stagnated. Given that this research is interested in solving high-dimensional problems with relatively sparse population sizes, it is important to ensure that particles do not restrict themselves to a subset of searchable dimensions while neglecting the rest.

2. Algorithms

For all algorithms, the local best position is the individual’s best recorded value such that, in the case of minimizing the reward $r_t$,

$$\{p_{\text{local}}, r_{\text{local}}\} = \begin{cases} \{p_t, r_t\} & \text{if } r_t < r_{\text{local}} \text{ or } (r_t = r_{\text{local}} \text{ and } \mathrm{rand}(0,1) < 1/N), \\ \{p_{\text{local}}, r_{\text{local}}\} & \text{otherwise}, \end{cases}$$
where $N$ is the number of individuals used in the algorithm. The conditional replacement of the position when $r_t = r_{\text{local}}$ allows equivalent locations to replace the current attraction point. The same comparison applies to the global best, evaluated across each individual $i$ of the current iteration:

$$\{p_{\text{global}}, r_{\text{global}}\} = \begin{cases} \{p_{\text{local}|i}, r_{\text{local}|i}\} & \text{if } r_{\text{local}|i} < r_{\text{global}} \text{ or } (r_{\text{local}|i} = r_{\text{global}} \text{ and } \mathrm{rand}(0,1) < 1/N), \\ \{p_{\text{global}}, r_{\text{global}}\} & \text{otherwise}. \end{cases}$$
The reward $r$ is an evaluation of fitness w.r.t. the parameters $p$ when they are applied to the problem of interest. As the chosen problems are evaluated over 30 runs, $r$ is the average plus standard deviation, so as to prefer not only parameters that perform well on average but also those that give consistent results. By recording with six decimal places of resolution and a range of $(-1, +1)$ for problem functions and $(0, 1)$ for self-tuning algorithms, the search spaces become partially discretized, ensuring the recorded values are precise in their outcomes. This setup assures that differences in the reward do not arise from rounding off the seventh decimal place, and allows for some extrapolation into more granular ranges such as integers. As the algorithms iterate over a relatively long series of optimization steps, a log of fitness values is kept to save processing time at the expense of memory. If the algorithm fails to process the current position, the fitness value is set to $+\infty$. Should a given particle stagnate, i.e., when $|\Delta p_t| = 0$, the individual is warped to a new random position, maintaining a degree of random exploration.
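The local-best rule and the stagnation warp above can be sketched as follows. This is an illustrative Python fragment under the notation of this section, not the authors' implementation, and the helper names are hypothetical:

```python
import numpy as np

def update_local_best(p_t, r_t, p_local, r_local, N, rng):
    """Replace the local best when strictly better, or, on an exact tie,
    with probability 1/N (the conditional replacement rule above)."""
    if r_t < r_local or (r_t == r_local and rng.random() < 1.0 / N):
        return p_t.copy(), r_t
    return p_local, r_local

def warp_if_stagnant(p_t, delta_p, p_min, p_max, rng):
    """Warp a stagnating particle (|dp| = 0) to a new random position."""
    if np.all(delta_p == 0):
        return rng.uniform(p_min, p_max, size=p_t.shape)
    return p_t
```

The same two rules are applied uniformly to every algorithm compared in this article.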

2.1. Traditional PSO

In the traditional PSO algorithm, each individual has a velocity and position update method [15]

$$v_{t+1} = c_0 \cdot v_t + c_1 \cdot \mathrm{rand}(0,1) \cdot (p_{\text{global}} - p_t) + c_2 \cdot \mathrm{rand}(0,1) \cdot (p_{\text{local}} - p_t),$$

and

$$p_{t+1} = p_t + v_{t+1},$$

respectively, where $c_0$ is the momentum constant, and $c_1$ and $c_2$ are the global and local attraction strengths. $\mathrm{rand}(0,1)$ is a randomly generated value from a uniform distribution within the range $[0, 1)$; it varies the degree of attraction to the respective best value every iteration, up to $c_1$ and $c_2$, respectively. The parameter labels have been changed for ease of mapping to data arrays in code. This convention is applied to all later algorithms as well, although the parameters of each algorithm are entirely independent.
Velocity and position vectors are subject to the dimension-wise limitation

$$v_{t+1} = \begin{cases} v_{t+1} & \text{if } p_{\max} > p_{t+1} > p_{\min}, \\ -v_{t+1} & \text{otherwise}, \end{cases}$$

and

$$p_{t+1} = \begin{cases} p_{t+1} & \text{if } p_{\max} > p_{t+1} > p_{\min}, \\ p_{\min} & \text{if } p_{t+1} \le p_{\min}, \\ p_{\max} & \text{if } p_{\max} \le p_{t+1}. \end{cases}$$
Allowing particles to bounce off the upper and lower limits maintains a degree of activity, preventing early termination of exploration due to boundary collisions. Additionally, given that the position is stopped at the respective limit, a measure of performance at that limit is still obtained. The general process flow for PSO is shown in Figure 1a. The velocity calculation time complexity is $O(5D \times N)$, position updates are $O(D \times N)$, and each dimension limit check is $O(D \times N)$, resulting in a total time complexity per iteration of $O(8D \times N)$.
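One iteration of these update and bounce rules can be condensed into a short sketch. The vectorized NumPy form and the function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pso_step(p, v, p_local, p_global, c, p_min, p_max, rng):
    """One traditional PSO update: velocity, position, then the
    bounce/clip limits above. c = (c0, c1, c2)."""
    c0, c1, c2 = c
    v_new = (c0 * v
             + c1 * rng.random(p.shape) * (p_global - p)
             + c2 * rng.random(p.shape) * (p_local - p))
    p_new = p + v_new
    # Negate velocity where the particle would leave the bounds (bounce),
    # and stop the position at the violated boundary.
    out = (p_new <= p_min) | (p_new >= p_max)
    v_new[out] = -v_new[out]
    p_new = np.clip(p_new, p_min, p_max)
    return p_new, v_new
```

Because the check is applied per dimension, a particle can bounce along one axis while moving freely along the others.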

2.2. Dimension-Wise PSO

The Dimension-wise PSO (DPSO) algorithm has been designed with the intention of handling relatively large dimensional spaces with a sparse population size. This is accomplished by determining the global and local attraction points separately for each dimension. DPSO’s velocity is defined as
$$v_{t+1} = c_0 \cdot v_t + c_1 \cdot \mathrm{rand}(-1,1) \cdot |v_t| + c_2 \cdot b_3 \cdot (p_{\text{global}} - p_t) + c_4 \cdot b_5 \cdot (p_{\text{local}} - p_t),$$
and is also subject to dimension-wise limitations
$$v_{t+1} = \begin{cases} v_{\lim} & \text{if } v_{\lim} < v_{t+1}, \\ -v_{\lim} & \text{if } -v_{\lim} > v_{t+1}, \\ v_{t+1} & \text{otherwise}, \end{cases}$$
followed by Equation (5). As with PSO, the individual’s position update relies on Equation (4) subject to Equation (6).
Percent noise injection, regulated by $c_1$, is used as a precaution to improve variation in movement and increase the likelihood of escaping from local minima without causing a large change in course. Noise injection replaces the random values affecting attractor strengths and has an effect independent of a particle’s distance from the optima. Visually, it can be likened to roughening a surface in proportion to the speed of a ball rolling down it, causing the ball to deviate from a trivial path. The trajectory and speed are often disrupted, but the general direction of travel is not necessarily changed, as three other elements are largely responsible for determining velocity: momentum, global best attraction, and local best preference.
The velocity limit, $v_{\lim}$, partially opposes the effect of percent noise injection, as it attempts to restrict the overshooting caused by an excessive build-up of momentum and attractive strength along a given dimension. Situations where a particle builds up too much momentum and is slingshot further away from the known optimal locations can help with increasing diversity in exploration, but scattering particles away from the optimal attractors can also impair the rate of convergence [16]. To prevent an excessive expansion in the required search parameters, $v_{\lim}$ is simplified to
$$v_{\lim} = c_6 \cdot (p_{\max} - p_{\min}),$$

where $c_6$ is the percentage of a given dimension’s span a particle can traverse in one iteration, and $p_{\max}$ and $p_{\min}$ are the position boundaries bracketing the valid range of exploration along the given dimension.
The primary change that gives DPSO the ability to search on a per-dimension basis is that the uniform random attraction values of PSO are replaced with probabilistic binary values, $b_3$ and $b_5$, respectively. These values determine whether the individual should move toward the respective best position of a given dimension. The activation of $b_3$ and $b_5$ can be described as

$$b_n = \begin{cases} 1 & \text{if } \mathrm{rand}(0,1) < c_n, \\ 0 & \text{otherwise}, \end{cases}$$
where $c_3$ and $c_5$ denote the respective probability of activation. It must be stressed that $b_3$ and $b_5$ are evaluated on a per-dimension basis, as opposed to being a single value applied to each dimension. Although this discretized approach reduces the coverage of points between the two known best locations, it increases the exploration of points in the surrounding neighborhood (see Figure 2). When momentum and noise injection are accounted for, the individual is less likely to become fixated on a single vector and can partially explore the area around each point. Without noise or momentum, the probability of moving to a given point is based on the products of $b_3$, $b_5$, $1 - b_3$, and/or $1 - b_5$ for each dimension. For the top-left example in Figure 2, if the lower ‘×’ marks the global optimum, the point in the top left corner would have the following transition probability
$$P(\text{TL}) = (1 - b_3^{\text{hor}}) \cdot (1 - b_5^{\text{hor}}) \cdot b_3^{\text{vir}} \cdot b_5^{\text{vir}},$$
where hor and vir denote the dimension with which the binary attractor is associated. The probability of transitioning to the global optimum’s mark is
$$P(\text{glob}) = b_3^{\text{hor}} \cdot (1 - b_5^{\text{hor}}) \cdot b_3^{\text{vir}} \cdot (1 - b_5^{\text{vir}}).$$
As with the traditional PSO, when $c_2$ and $c_4$ are reduced, the scale of movement, i.e., the grid size, is reduced. Momentum skews the final landing point in the direction of motion, while noise injection causes each potential attractor point to represent the center of a distribution. It is worth noting that the DPSO values $c_0 = 0$ and $c_6 = 1$ make this variant the closest possible form to the original PSO, where the key differences are the attraction random values. The general process flow for DPSO is identical to PSO (see Figure 1a). The velocity calculation time complexity is $O(9D \times N)$, binary attractor selections are $O(D \times N)$, position updates are $O(D \times N)$, and each dimension limit check is $O(D \times N)$, resulting in a total time complexity per iteration of $O(10D \times N)$, i.e., approximately $O(2D \times N)$ more than PSO.
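The DPSO velocity rule above can be condensed into a short sketch. The vectorized form below is an illustration under the notation of this section (the authors note their implementation loops over dimensions), and the function name is hypothetical:

```python
import numpy as np

def dpso_velocity(p, v, p_local, p_global, c, p_min, p_max, rng):
    """DPSO velocity: momentum, percent noise injection, and per-dimension
    binary attractors, clipped to +/- v_lim. c = (c0, c1, c2, c3, c4, c5, c6)."""
    c0, c1, c2, c3, c4, c5, c6 = c
    b3 = (rng.random(p.shape) < c3).astype(float)   # global pull, per dimension
    b5 = (rng.random(p.shape) < c5).astype(float)   # local pull, per dimension
    v_new = (c0 * v
             + c1 * rng.uniform(-1, 1, p.shape) * np.abs(v)  # noise injection
             + c2 * b3 * (p_global - p)
             + c4 * b5 * (p_local - p))
    v_lim = c6 * (p_max - p_min)                    # per-dimension speed cap
    return np.clip(v_new, -v_lim, v_lim)
```

The position update then follows the same rules as PSO, including the boundary bounce and clip.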

2.3. Genetic Algorithm

The Genetic Algorithm (GA) is very different from PSO in its method of optimization, as it does not rely on momentum or points of attraction. Instead, it uses the fitness of individuals from the last generation [17]. The first step of modifying the parameters is selecting parents. In this study, it is carried out through a roulette-style competition without replacement, where each individual’s fitness is determined by the following
$$\hat{r}(i) = \frac{(r_i - \min(r_{\text{gen}})) \times \left(1 - \frac{\mathrm{len}(\mathrm{argmin}(r_{\text{gen}}))}{N}\right)}{\max(r_{\text{gen}}) - \min(r_{\text{gen}})} + \frac{\mathrm{len}(\mathrm{argmin}(r_{\text{gen}}))}{N^2},$$

where $\mathrm{len}(\mathrm{argmin}(r))$ returns the number of individuals who obtained the current minimum fitness, which allows the fitness distribution to approach a uniform distribution as more individuals produce equivalent fitness values.
As the resulting list of parent indices carries no particular order, the parents are considered sufficiently mixed. They are used in pairs to produce the kth child
$$\mathrm{parent}_1 = \mathrm{mod}(k, q_0),$$
$$\mathrm{parent}_2 = \mathrm{mod}(k + 1, q_0),$$

where $\mathrm{mod}(k, q)$ is the modulus division of $k$ by the number of selected parents $q$. $\mathrm{parent}_1$ and $\mathrm{parent}_2$ are the indices of the chosen parents, whose genes are $p_{\mathrm{parent}_1}$ and $p_{\mathrm{parent}_2}$, respectively.
The number of parents permitted to produce offspring is defined by
$$q_0 = \max(2, \min(N - 2, c_0 \times N)),$$
where $c_0$ is the percentage of the population allowed to reproduce. The percentage replaced is $(1 - c_0)$, i.e., the portion of the population that was not selected is replaced, to minimize redundant evaluations. After the parents are selected, crossover occurs with
$$p_{\text{child}} = p_{\mathrm{parent}_1} \times b_g + p_{\mathrm{parent}_2} \times (1 - b_g),$$
where $b_g = 1$ for the gene indices in the random permutation $g = P(G, G/2)$ and $b_g = 0$ otherwise. $G$ is the length of the genome, and $P(G, X)$ denotes a random choice of $X$ elements of $G$ without replacement, while $P_R(G, X)$ is with replacement. Gene mutation for each child is determined by a random permutation with replacement $g = P_R(G, c_1 \times G)$, where $c_1$ is the maximum allowed percentage of a genome that can undergo mutation (the minimum is one mutated gene). For each unique gene/dimension selected for mutation, a uniform random value within the respective limits is applied. The general process flow for GA is shown in Figure 1b. The individual fitness calculation time complexity is $O(4N)$, parent selection, which shuffles and fully utilizes the current population, is $O(2N)$, a random permutation of the genes for each child is $O(D \times N)$, crossover with full replacement is $O(2D \times N)$, mutation is $O(D \times N)$, and each dimension limit check is $O(D \times N)$, resulting in a total time complexity per iteration of $O((5D + 6) \times N)$.
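The GA steps above can be sketched as one generation in Python. This is an illustrative fragment under assumptions: the function name is hypothetical, and since the text does not spell out how the scaled fitness enters the roulette for minimization, a selection weight of $1 - \hat{r}(i)$ is assumed here:

```python
import numpy as np

def ga_generation(pop, r, c0, c1, lims, rng):
    """One GA generation sketch: scaled fitness, roulette parent selection
    without replacement, cyclic pairing, half-genome crossover, mutation."""
    N, G = pop.shape
    ties = int(np.sum(r == r.min()))
    span = max(r.max() - r.min(), 1e-12)
    r_hat = (r - r.min()) * (1 - ties / N) / span + ties / N**2
    w = 1.0 - r_hat                      # assumed weight for minimization
    w /= w.sum()
    q0 = int(max(2, min(N - 2, round(c0 * N))))
    parents = rng.choice(N, size=q0, replace=False, p=w)
    children = np.empty((N - q0, G))
    for k in range(N - q0):              # replace the non-selected fraction
        pa = pop[parents[k % q0]]
        pb = pop[parents[(k + 1) % q0]]
        b = np.zeros(G)
        b[rng.choice(G, G // 2, replace=False)] = 1.0   # crossover mask
        child = pa * b + pb * (1.0 - b)
        genes = np.unique(rng.choice(G, size=max(1, int(c1 * G)), replace=True))
        child[genes] = rng.uniform(lims[0], lims[1], size=genes.size)
        children[k] = child
    return children
```

The `(1 - ties/N)` factor and `ties/N**2` offset flatten the weight distribution as more individuals tie at the minimum, as described for the scaled fitness above.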

2.4. Differential Evolution

Differential Evolution (DE) tends to be faster in processing due to its relative simplicity. A key difference from PSO and GA is that DE mutates its population before evaluating the fitness of its individuals [18]. The first step is choosing a random permutation $a, b, c = P(N, 3)$ of the genetic material to be used for mutation

$$p_m = p_{\text{local},a} + 2 \times c_0 \times (p_{\text{local},b} - p_{\text{local},c}),$$
where $c_0$ is the degree of mutation that can be imposed on the base material provided by sample ‘a’. After a mutant is generated, each gene/dimension has the possibility of going through crossover—determined by the crossover probability $c_1$—where the respective mutated gene is clipped to fit within the dimension’s limits and applied to the targeted individual, i.e., $p_{i,s} = p_{m,s}$. If no genes are selected for crossover, one is selected at random.
The general process flow of DE is shown in Figure 1c. The random permutation of three genetic sources has a time complexity of $O(3N)$, mutated gene production is $O(3D \times N)$, crossover using a random permutation is $O(D \times N)$, and each dimension limit check is $O(D \times N)$, resulting in a total time complexity per iteration of $O((5D + 3) \times N)$.
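The DE mutation and crossover step described above can be sketched as follows. This is an illustrative Python fragment; the function name and the NumPy calls used for the permutations are assumptions, not the authors' code:

```python
import numpy as np

def de_mutate_crossover(p_local, i, c0, c1, lims, rng):
    """DE step for target individual i: build a mutant from three random
    sources, then apply per-gene crossover with probability c1
    (at least one gene is always crossed over)."""
    N, D = p_local.shape
    a, b, c = rng.choice(N, size=3, replace=False)   # three genetic sources
    mutant = p_local[a] + 2 * c0 * (p_local[b] - p_local[c])
    mutant = np.clip(mutant, lims[0], lims[1])       # respect dimension limits
    cross = rng.random(D) < c1
    if not cross.any():
        cross[rng.integers(D)] = True                # force one crossed gene
    trial = p_local[i].copy()
    trial[cross] = mutant[cross]
    return trial
```

The trial vector is then evaluated and replaces the target only if it improves on it, per the local-best rule shared by all algorithms here.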

3. Methodology

The configuration for this experiment is such that the algorithm is assigned an initial estimate of the best parameter combination, which is randomly seeded as one of the individuals to be evaluated. This tuning algorithm (Alg1) is set to optimize 5 particles for up to 100 iterations. Thirty copies of the algorithm under evaluation (Alg2) serve as Alg1’s problem function, each optimizing 5 particles for up to 500 iterations. For each of these optimized algorithms, 30 randomly offset and ill-conditioned copies of the in-sample problem function are set to be optimized. Alg2’s average global optimum result across these 30 copies, plus the standard deviation, is taken as the algorithm’s measure of fitness during self-tuning. The outcome of evaluating the fitness of Alg2, i.e., its final global optimum result, is used as follows: if it is better than what was recorded for the global optimum in Alg1, the parameters used by Alg1 are updated to those of Alg2 before moving on to the next iteration. After self-tuning, the optimal parameters are applied to the algorithm again with 30 separate runs of the out-of-sample function problems. The out-of-sample optimization uses 5 individuals and lasts for 1000 iterations without allowance for early termination. The mean results of the out-of-sample global best logs are recorded for plotting, and the final mean and standard deviation values are recorded for tabulation.

3.1. Self-Tuning

Self-tuning is a form of bootstrapping, where the algorithm attempts to improve its own parameters based on its relative improvement in optimizing a problem [19]. Granted, it is inefficient to self-tune on the problem of interest as this will likely result in having found the global optimum several times over. Therefore, it is preferred to self-tune using a simpler approximation of the out-of-sample problem(s) as an in-sample training step. This allows the algorithm parameters to be generally optimized for any problem similar to the in-sample function. For self-tuning to be effective, the in-sample problem must be a close approximation of the out-of-sample function set. In the case where the out-of-sample set only has one function, the in-sample function must at least be sufficiently simplified to make the additional evaluation steps worthwhile. In contrast to some alternative self-tuning methods, an external approach does not require specialized modifications or rules to adjust the parameters on the fly [20,21].

3.2. Function Problems

The base problem functions used as benchmarks are: Elliptical (scale: 500), Ackley (scale: 32), Rosenbrock (scale: 7.5; offset: 1.5), Rastrigin (scale: 5.12), Drop-wave (scale: 5.12), Zero-sum (scale: 10), and Salomon (scale: 100) [22,23]. For 2 to 3 dimensions, most of these functions would be considered somewhat challenging, as they have many local minima. To add complexity to each problem, random offsets (up to 80% off center), ill-conditioning (up to 10 times the scale), rotations, and partial separability (30% overlap) were applied in steps as shown in Table 1 [24]. The dimensions shown in the table are for the 14D problem; however, these steps were similarly carried out for the 7D and 21D problems. Ill-conditioning, rotation, and overlap effects were applied to all 7, 14, and 21 dimensions, respectively, while the 4 copies of each function had independently allocated dimensions. For the cases of overlap and rotation, some of these functions are partially dependent on the same inputs, further complicating the search process.

4. Results

The resulting optimal parameters after self-tuning on $f_0$ are shown in Appendix A (Table A1, Table A2 and Table A3). For each algorithm, the final optimal parameters tended to be found well before reaching the limit of 100 iterations. In all cases, DPSO has shown itself to be a significant improvement over PSO, which was consistently in last place. The resulting fitness in Table A4, the normalized fitness and per-problem rank in Table 2, and the plots shown in Figure A1, Figure A4, Figure A7 and Figure A10 suggest that DPSO is capable of outperforming DE and GA in overall fitness by a margin of approximately 20% when using five particles on seven-dimensional problems. For 14 dimensions, Table 3 and Table A5, as well as Figure A2, Figure A5, Figure A8 and Figure A11, show that, although DPSO easily bests the GA results and places third or fourth in only 6 of the 24 problem functions, it underperforms w.r.t. DE by a margin of 17%. Rank Factor is the average result of the normalized fitness divided by the smallest average normalized fitness. The normalized fitness values in Table 2, Table 3 and Table 4 are the fitness results scaled for ease of comparison.
Increasing the dimensions to 21, DPSO’s overall fitness is notably better; with respect to Table 4 and Table A6, it leads the second-ranked DE by a margin of more than 58%. The trends shown in Figure A3, Figure A6, Figure A9 and Figure A12 also suggest that DPSO’s ability to make gradual improvements was not severely hindered by the small ratio of particles to dimensions. Given that the standard deviation is relatively small compared to the mean fitness, the algorithms’ final rankings are expected to be reliable for the applied out-of-sample functions and chosen parameters.
The results of this experiment show that DPSO and DE can be optimized through self-tuning given a sufficient number of iterations and that they both demonstrate satisfactory results. The tuned parameters allow these algorithms to remain largely effective when the target problem deviates from the in-sample function, i.e., when they are able to self-tune on an approximation of the out-of-sample problems, they can be expected to perform well. Given that the rules were applied equally to all algorithms, the primary cause for the improvement of DPSO over PSO is the use of dimension-wise updates. A likely reason why a number of fitness results are relatively large is that ill-conditioning scales up the dimensional range, making it harder to find the global optimum. Regardless, every algorithm was able to demonstrate some improvement in each problem, e.g., ending with a fitness of $10^{32}$ is a significant improvement when starting at $10^{44}$. It is interesting that the Ellipse problems had the worst performance results despite the relative simplicity of their surfaces—likely attributable to the exponential nature of each axis. PSO and DPSO have momentum factors that can cause them to overshoot or orbit the minima, but this problem does not exist for DE and GA. The likely factor that makes some of these problems more difficult for DE and GA is that the population size, which they rely on for diversity, is relatively small compared to the number of dimensions. As a result, they are entirely reliant on warping and mutation to increase the variety of candidate solutions. Without randomly generating new genomes, they can only try to improve on the dimensions for which there is a sufficient variety of overall fit individuals to experiment on.
From Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6, Figure A7, Figure A8, Figure A9, Figure A10, Figure A11 and Figure A12, it is apparent that most algorithms tend to converge or approach convergence within the first 500 iterations, with relatively small improvements thereafter. The last improvement in the global optimum is given at x-axis value 1.0 in the plots—shown in Table A4, Table A5 and Table A6—but data with next to no improvement at the end were removed to improve clarity where possible. The sudden improvement given by DPSO may be due to the fact that the dimension-wise activation of attractors is similar in principle to the crossover method found in GA and DE.
A separate test without logging was conducted on the elliptical function ($f_3$) to analyze the computational costs of these algorithms (given in Table 5). As expected, DPSO is slightly more demanding than PSO, by approximately 300 bytes (4.7%) for seven dimensions, a gap that diminishes to 3.6% and 3.2% for 14 and 21 dimensions, respectively. Compared to GA and DE, both DPSO and PSO are notably more memory demanding.
In terms of execution time, DPSO requires a few milliseconds more per iteration than GA and DE. The time lapse was determined by the average execution time to complete one iteration over 200 iterations and 21 dimensions. Given that the time complexities for GA and DE are similar and smaller than for PSO and DPSO, it is likely that some areas of code are not fully optimized even though the base code was the same and care was taken to minimize deviations from the base. In exchange for a slightly larger processing time relative to the other algorithms, DPSO was able to reduce the rate at which its ability to converge worsened when the disparity between population and search-space dimensionality increased.
There are several components of the DPSO algorithm that contribute to its execution time. Their individual contributions can be measured by recording the corresponding time lapse. The time required for PSO to calculate momentum, followed by the time lapse for local and global attraction forces, can be compared to DPSO’s momentum plus noise injection and dimension-wise attraction forces. The difference in calculation time for momentum (<1 μs) versus momentum plus noise injection (36 μs), and for the regular attraction (7 μs) versus the dimension-wise attraction (116 μs), is larger than expected. A notable portion of this increase can likely be attributed to DPSO relying on a for-loop to perform its per-dimension calculations, while PSO is able to exploit the optimizations given by the numpy library. Regardless, the velocity update steps for DPSO are expected to take more time given that the attractors are decided on a per-dimension basis.

5. Conclusions and Future Work

This article examines the recently introduced dimension-wise particle swarm optimization (DPSO) algorithm and compares its performance with that of other commonly used metaheuristic optimizers. Specifically, it compares the statistical mean of the global fitness values for DPSO, PSO, GA, and DE in a two-step process: in-sample tuning and out-of-sample evaluation. The optimal parameters of each algorithm were selected by self-tuning over 30 independent runs of a generic in-sample problem that approximated the set of all evaluation functions. To evaluate performance, each algorithm was then tested on 24 separate problems, with the mean and standard deviation obtained from 30 independent runs for each of 7, 14, and 21 dimensions. The obtained results show that DPSO outperforms DE, PSO, and GA when the population is sparse w.r.t. the number of dimensions to be explored.
In future work, it may be worth incorporating other methods, such as Adaptive PSO, to reduce the number of parameters that must be tuned. It may also be interesting to investigate other rules for warping particles and to make the population size one of the adjustable parameters. To better grasp the benefits of DPSO, it would also be worth comparing the algorithm with more state-of-the-art PSO variants, such as Dual Gbest PSO, without regard for computational costs. Given that the changes made in DPSO do not conflict with momentum degradation methods, such as the one found in continuous PSO, integrating them into DPSO may allow further improvements in fitness with minimal additional computational cost [9].

Author Contributions

Conceptualization, P.M.; methodology, J.S.; software, J.S.; investigation, J.S.; resources, P.M.; writing—original draft preparation, J.S.; writing—review and editing, P.M.; supervision, P.M.; project administration, P.M.; and funding acquisition, P.M. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada grant number RGPIN-2017-05866 and by the Government of Alberta under the Major Innovation Fund, project RCP-19-001-MIF.

Conflicts of Interest

The authors declare no conflict of interest.

Symbols

The following symbols are used in this manuscript:
p t          Position of a particle at time t
v t          Velocity of a particle at time t
r            Location fitness on the problem surface at a given point/time
N            The total number of particles/individuals used
D            The total number of dimensions in the search space
i            An arbitrary particle/individual of the current iteration
t            Time step/iteration/generation
local        Denoting local attractor information
global       Denoting global attractor information
rand(0,1)    A uniform randomly generated number within the range [0, 1)
c n          An algorithm parameter designated for tuning
b n          Binary decider for attractors and crossover

Appendix A. Additional Tables and Graphs

Table A1. Global best self-tuning parameters for 7 dimensions.
DPSO    c0        c1        c2        c3        c4        c5        c6
        0.862056  0.859008  0.620299  0.526893  0.103819  0.986512  0.801248
PSO     c0        c1        c2
        0.107710  0.741607  0.492210
GA      c0        c1
        0.974085  0.558012
DE      c0        c1
        0.251732  0.000000 1
1 DE is forced to choose one parameter to crossover.
Table A2. Global best self-tuning parameters for 14 dimensions.
DPSO    c0        c1        c2        c3        c4        c5        c6
        0.507646  0.438668  1.000000  0.308361  1.000000  0.655705  0.970521
PSO     c0        c1        c2
        0.649608  0.594950  0.175723
GA      c0        c1
        0.899098  0.233414
DE      c0        c1
        0.470144  0.544929
Table A3. Global best self-tuning parameters for 21 dimensions.
DPSO    c0        c1        c2        c3        c4        c5        c6
        0.734627  0.712416  0.891312  1.000000  0.767508  0.853393  0.786354
PSO     c0        c1        c2
        0.715330  0.386808  0.897650
GA      c0        c1
        0.390245  0.092636
DE      c0        c1
        1.000000  0.532237
Table A4. Fitness w.r.t. each function problem with 7 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 Ave1.234 × 10 31 1.424 × 10 5 8.956 × 10 4 3.439 × 10 23
Std2.252 × 10 15 2.910 × 10 11 0.0001.342 × 10 8
f 1 Ave1.641 × 10 6 3.849 × 10 11 1.326 × 10 8 2.560 × 10 4
Std2.328 × 10 10 1.831 × 10 4 4.470 × 10 8 1.091 × 10 11
f 2 Ave2.402 × 10 6 6.209 × 10 10 3.242 × 10 6 5.608 × 10 5
Std9.313 × 10 10 2.289 × 10 5 1.397 × 10 9 2.328 × 10 10
f 3 Ave1.257 × 10 31 2.950 × 10 31 9.232 × 10 31 3.505 × 10 43
Std6.755 × 10 15 4.504 × 10 15 1.801 × 10 16 9.904 × 10 27
f 4 Ave1.235 × 10 12 1.626 × 10 11 1.039 × 10 11 1.293 × 10 6
Std0.0006.104 × 10 5 1.526 × 10 5 0.000
f 5 Ave9.617 × 10 5 3.099 × 10 11 1.026 × 10 11 5.356 × 10 11
Std3.492 × 10 10 1.221 × 10 4 1.526 × 10 5 3.052 × 10 4
f 6 Ave4.011 × 10 1 6.085 × 10 1 3.239 × 10 1 4.845 × 10 1
Std1.421 × 10 14 2.132 × 10 14 1.421 × 10 14 3.553 × 10 14
f 7 Ave4.172 × 10 1 6.060 × 10 1 3.821 × 10 1 2.000 × 10 1
Std2.132 × 10 14 3.553 × 10 14 2.842 × 10 14 3.553 × 10 15
f 8 Ave4.172 × 10 1 5.472 × 10 1 3.821 × 10 1 3.983 × 10 1
Std2.132 × 10 14 3.553 × 10 14 2.842 × 10 14 1.421 × 10 14
f 9 Ave1.0071.248 × 10 4 2.851 × 10 3 6.054 × 10 1
Std4.441 × 10 16 1.819 × 10 12 2.274 × 10 12 2.842 × 10 14
f 10 Ave8.802 × 10 3 9.429 × 10 4 2.645 × 10 4 1.889 × 10 4
Std7.276 × 10 12 4.366 × 10 11 1.091 × 10 11 0.000
f 11 Ave1.179 × 10 1 1.303 × 10 5 1.277 × 10 4 4.141 × 10 3
Std5.329 × 10 15 8.731 × 10 11 9.095 × 10 12 0.000
f 12 Ave3.482 × 10 1 1.310 × 10 2 2.987 × 10 1 8.200 × 10 1
Std2.842 × 10 14 0.0007.105 × 10 15 4.263 × 10 14
f 13 Ave2.288 × 10 1 1.189 × 10 2 5.204 × 10 1 1.364 × 10 2
Std7.105 × 10 15 7.105 × 10 14 3.553 × 10 14 2.842 × 10 14
f 14 Ave2.444 × 10 1 1.990 × 10 2 8.907 × 10 1 1.593 × 10 1
Std0.0001.137 × 10 13 1.421 × 10 14 5.329 × 10 15
f 15 Ave1.913 × 10 1 2.3571.9561.945
Std1.388 × 10 16 0.0004.441 × 10 16 8.882 × 10 16
f 16 Ave1.7782.4481.6181.963
Std1.332 × 10 15 1.332 × 10 15 8.882 × 10 16 1.332 × 10 15
f 17 Ave1.0002.0511.7821.924
Std4.441 × 10 16 4.441 × 10 16 1.110 × 10 15 6.661 × 10 16
f 18 Ave7.162 × 10 2 2.955 × 10 2 1.094 × 10 3 8.071 × 10 1
Std5.684 × 10 13 1.705 × 10 13 0.0004.263 × 10 14
f 19 Ave2.052 × 10 2 6.003 × 10 2 6.232 × 10 2 3.490
Std5.684 × 10 14 1.137 × 10 13 4.547 × 10 13 1.332 × 10 15
f 20 Ave1.812 × 10 2 4.409 × 10 2 3.381 × 10 2 2.830 × 10 2
Std1.137 × 10 13 2.274 × 10 13 5.684 × 10 14 1.705 × 10 13
f 21 Ave2.223 × 10 4 1.836 × 10 1 4.693 × 10 3 9.548
Std0.0001.421 × 10 14 0.0000.000
f 22 Ave1.660 × 10 1 1.291 × 10 2 2.451 × 10 1 1.630 × 10 1
Std1.066 × 10 14 5.684 × 10 14 1.421 × 10 14 7.105 × 10 15
f 23 Ave6.7001.643 × 10 2 2.302 × 10 1 8.036
Std1.776 × 10 15 0.0001.066 × 10 14 3.553 × 10 15
Table A5. Fitness w.r.t. each function problem with 14 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 Ave7.483 × 10 31 2.379 × 10 6 2.393 × 10 14 2.127 × 10 2
Std0.0004.657 × 10 10 9.375 × 10 2 8.527 × 10 14
f 1 Ave7.640 × 10 3 1.753 × 10 5 1.680 × 10 5 3.327 × 10 2
Std2.728 × 10 12 5.821 × 10 11 8.731 × 10 11 5.684 × 10 14
f 2 Ave7.064 × 10 5 8.116 × 10 31 1.452 × 10 24 2.590 × 10 5
Std1.164 × 10 10 3.603 × 10 16 2.684 × 10 8 0.000
f 3 Ave2.018 × 10 31 1.638 × 10 26 1.943 × 10 32 2.578 × 10 17
Std1.126 × 10 16 6.872 × 10 10 3.603 × 10 16 6.400 × 10 1
f 4 Ave7.842 × 10 26 1.378 × 10 12 4.856 × 10 10 3.923 × 10 11
Std2.749 × 10 11 0.0001.526 × 10 5 1.221 × 10 4
f 5 Ave4.560 × 10 11 8.607 × 10 36 2.788 × 10 10 3.949 × 10 19
Std0.0002.361 × 10 21 7.629 × 10 6 3.277 × 10 4
f 6 Ave7.538 × 10 1 8.185 × 10 1 6.799 × 10 1 7.900 × 10 1
Std0.0004.263 × 10 14 4.263 × 10 14 1.421 × 10 14
f 7 Ave8.224 × 10 1 8.116 × 10 1 6.852 × 10 1 6.462 × 10 1
Std0.0000.0004.263 × 10 14 0.000
f 8 Ave8.228 × 10 1 8.116 × 10 1 6.852 × 10 1 6.462 × 10 1
Std4.263 × 10 14 0.0004.263 × 10 14 0.000
f 9 Ave1.963 × 10 4 5.403 × 10 5 2.195 × 10 5 3.387 × 10 2
Std7.276 × 10 12 3.492 × 10 10 2.910 × 10 11 5.684 × 10 14
f 10 Ave3.243 × 10 4 1.549 × 10 6 2.234 × 10 6 6.067 × 10 3
Std2.183 × 10 11 1.164 × 10 9 1.397 × 10 9 4.547 × 10 12
f 11 Ave7.982 × 10 4 2.034 × 10 7 2.279 × 10 5 8.108 × 10 4
Std5.821 × 10 11 7.451 × 10 9 2.910 × 10 11 4.366 × 10 11
f 12 Ave6.891 × 10 1 1.443 × 10 3 2.789 × 10 2 3.996 × 10 2
Std1.421 × 10 14 6.821 × 10 13 1.705 × 10 13 2.842 × 10 13
f 13 Ave7.068 × 10 1 7.193 × 10 2 2.165 × 10 2 1.123 × 10 2
Std2.842 × 10 14 4.547 × 10 13 0.0005.684 × 10 14
f 14 Ave1.647 × 10 2 2.841 × 10 2 2.389 × 10 2 7.673 × 10 1
Std8.527 × 10 14 1.705 × 10 13 1.137 × 10 13 4.263 × 10 14
f 15 Ave2.2563.6892.9323.500
Std8.882 × 10 16 2.665 × 10 15 1.332 × 10 15 1.332 × 10 15
f 16 Ave2.6633.5102.8972.871
Std0.0001.332 × 10 15 1.332 × 10 15 4.441 × 10 16
f 17 Ave2.5402.9702.9603.017
Std4.441 × 10 16 2.220 × 10 15 4.441 × 10 16 8.882 × 10 16
f 18 Ave6.945 × 10 2 1.831 × 10 3 1.122 × 10 3 1.005 × 10 3
Std1.137 × 10 13 9.095 × 10 13 2.274 × 10 13 6.821 × 10 13
f 19 Ave1.708 × 10 2 2.040 × 10 3 6.648 × 10 2 5.270
Std0.0004.547 × 10 13 3.411 × 10 13 8.882 × 10 16
f 20 Ave1.545 × 10 2 1.397 × 10 3 6.831 × 10 2 3.054 × 10 2
Std8.527 × 10 14 0.0000.0000.000
f 21 Ave2.142 × 10 2 7.904 × 10 3 3.584 × 10 3 2.357 × 10 4
Std0.0009.095 × 10 13 1.819 × 10 12 0.000
f 22 Ave9.540 × 10 1 6.300 × 10 1 7.769 × 10 2 4.672 × 10 1
Std4.263 × 10 14 7.105 × 10 15 1.137 × 10 13 2.132 × 10 14
f 23 Ave5.050 × 10 1 5.681 × 10 1 1.995 × 10 2 3.500 × 10 1
Std1.421 × 10 14 7.105 × 10 15 5.684 × 10 14 7.105 × 10 15
Table A6. Fitness w.r.t. each function problem with 21 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 1.375 × 10 19 6.121 × 10 27 6.249 × 10 11 9.395 × 10 9
Std0.0000.0000.0003.815 × 10 6
f 1 2.600 × 10 7 2.824 × 10 8 2.417 × 10 9 5.773 × 10 6
Std1.490 × 10 8 0.0009.537 × 10 7 2.794 × 10 9
f 2 2.401 × 10 5 4.489 × 10 12 1.091 × 10 7 1.657 × 10 17
Std8.731 × 10 11 9.766 × 10 4 3.725 × 10 9 6.400 × 10 1
f 3 2.028 × 10 30 5.324 × 10 35 5.551 × 10 23 8.484 × 10 27
Std1.407 × 10 15 2.951 × 10 20 2.013 × 10 8 2.199 × 10 12
f 4 1.012 × 10 10 5.906 × 10 11 3.516 × 10 25 1.311 × 10 26
Std3.815 × 10 6 2.441 × 10 4 2.147 × 10 10 6.872 × 10 10
f 5 2.981 × 10 23 2.988 × 10 12 7.175 × 10 27 1.038 × 10 25
Std1.342 × 10 8 0.0004.398 × 10 12 0.000
f 6 8.179 × 10 1 8.342 × 10 1 8.114 × 10 1 7.567 × 10 1
Std1.421 × 10 14 0.0001.421 × 10 14 2.842 × 10 14
f 7 7.943 × 10 1 8.394 × 10 1 8.307 × 10 1 8.329 × 10 1
Std1.421 × 10 14 7.105 × 10 14 2.842 × 10 14 4.263 × 10 14
f 8 8.311 × 10 1 8.394 × 10 1 7.225 × 10 1 8.329 × 10 1
Std0.0007.105 × 10 14 4.263 × 10 14 4.263 × 10 14
f 9 1.220 × 10 6 3.937 × 10 7 5.202 × 10 5 2.246 × 10 5
Std6.985 × 10 10 7.451 × 10 9 1.746 × 10 10 8.731 × 10 11
f 10 1.817 × 10 5 2.345 × 10 7 1.674 × 10 6 1.345 × 10 7
Std5.821 × 10 11 1.118 × 10 8 6.985 × 10 10 0.000
f 11 2.353 × 10 5 1.266 × 10 7 8.217 × 10 5 3.199 × 10 5
Std0.0001.863 × 10 9 3.492 × 10 10 5.821 × 10 11
f 12 4.166 × 10 2 1.787 × 10 3 6.351 × 10 2 1.185 × 10 3
Std1.137 × 10 13 2.274 × 10 13 1.137 × 10 13 4.547 × 10 13
f 13 5.709 × 10 2 7.462 × 10 2 5.936 × 10 2 7.634 × 10 2
Std3.411 × 10 13 1.137 × 10 13 4.547 × 10 13 3.411 × 10 13
f 14 3.199 × 10 2 6.809 × 10 2 5.400 × 10 2 3.077 × 10 2
Std5.684 × 10 14 3.411 × 10 13 4.547 × 10 13 0.000
f 15 3.8833.9413.7403.517
Std0.0001.332 × 10 15 4.441 × 10 16 4.441 × 10 16
f 16 3.6933.9583.5663.776
Std1.332 × 10 15 4.441 × 10 16 0.0004.441 × 10 16
f 17 3.0453.7313.7642.943
Std1.332 × 10 15 8.882 × 10 16 0.0004.441 × 10 16
f 18 2.111 × 10 3 3.932 × 10 3 2.477 × 10 3 5.943 × 10 2
Std1.364 × 10 12 1.819 × 10 12 1.364 × 10 12 0.000
f 19 9.646 × 10 2 1.594 × 10 3 3.848 × 10 2 7.506 × 10 2
Std3.411 × 10 13 4.547 × 10 13 1.137 × 10 13 2.274 × 10 13
f 20 6.138 × 10 2 8.441 × 10 2 1.936 × 10 3 5.966 × 10 2
Std2.274 × 10 13 5.684 × 10 13 1.364 × 10 12 4.547 × 10 13
f 21 2.518 × 10 3 1.894 × 10 4 6.016 × 10 3 5.172 × 10 3
Std4.547 × 10 13 7.276 × 10 12 2.728 × 10 12 1.819 × 10 12
f 22 3.762 × 10 2 1.709 × 10 2 9.187 × 10 3 3.008 × 10 3
Std5.684 × 10 14 5.684 × 10 14 3.638 × 10 12 4.547 × 10 13
f 23 3.401 × 10 3 1.415 × 10 2 6.376 × 10 3 2.164 × 10 2
Std1.364 × 10 12 8.527 × 10 14 2.728 × 10 12 8.527 × 10 14
Figure A1. The average In-Sample ( f 0 , f 1 , f 2 ) and Elliptical ( f 3 , f 4 , f 5 ) global best results over time for 7 dimensions.
Figure A2. The average In-Sample ( f 0 , f 1 , f 2 ) and Elliptical ( f 3 , f 4 , f 5 ) global best results over time for 14 dimensions.
Figure A3. The average In-Sample ( f 0 , f 1 , f 2 ) and Elliptical ( f 3 , f 4 , f 5 ) global best results over time for 21 dimensions.
Figure A4. The average Ackley ( f 6 , f 7 , f 8 ) and Rosenbrock ( f 9 , f 10 , f 11 ) global best results over time for 7 dimensions.
Figure A5. The average Ackley ( f 6 , f 7 , f 8 ) and Rosenbrock ( f 9 , f 10 , f 11 ) global best results over time for 14 dimensions.
Figure A6. The average Ackley ( f 6 , f 7 , f 8 ) and Rosenbrock ( f 9 , f 10 , f 11 ) global best results over time for 21 dimensions.
Figure A7. The average Rastrigin ( f 12 , f 13 , f 14 ) and Drop-Wave ( f 15 , f 16 , f 17 ) global best results over time for 7 dimensions.
Figure A8. The average Rastrigin ( f 12 , f 13 , f 14 ) and Drop-Wave ( f 15 , f 16 , f 17 ) global best results over time for 14 dimensions.
Figure A9. The average Rastrigin ( f 12 , f 13 , f 14 ) and Drop-Wave ( f 15 , f 16 , f 17 ) global best results over time for 21 dimensions.
Figure A10. The average Zero-Sum ( f 18 , f 19 , f 20 ) and Salomon ( f 21 , f 22 , f 23 ) global best results over time for 7 dimensions.
Figure A11. The average Zero-Sum ( f 18 , f 19 , f 20 ) and Salomon ( f 21 , f 22 , f 23 ) global best results over time for 14 dimensions.
Figure A12. The average Zero-Sum ( f 18 , f 19 , f 20 ) and Salomon ( f 21 , f 22 , f 23 ) global best results over time for 21 dimensions.

References

1. Chandrasekar, K.; Ramana, N.V. Performance Comparison of GA, DE, PSO and SA Approaches in Enhancement of Total Transfer Capability using FACTS Devices. J. Electr. Eng. Technol. 2012, 7.
2. Deb, A.; Roy, J.S.; Gupta, B. Performance Comparison of Differential Evolution, Particle Swarm Optimization and Genetic Algorithm in the Design of Circularly Polarized Microstrip Antennas. IEEE Trans. Antennas Propag. 2014, 62, 3920–3928.
3. Schlauwitz, J.; Musilek, P. A Dimension-Wise Particle Swarm Optimization Algorithm Optimized via Self-Tuning. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
4. Moodi, M.; Ghazvini, M.; Moodi, H. A hybrid intelligent approach to detect Android Botnet using Smart Self-Adaptive Learning-based PSO-SVM. Knowl.-Based Syst. 2021, 222, 106988.
5. Li, B.; Yingli, D.; Penghua, L. Application of improved PSO algorithm in power grid fault diagnosis. In Proceedings of the 2020 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2020; pp. 242–247.
6. Verma, H.; Verma, D.; Tiwari, P.K. A population based hybrid FCM-PSO algorithm for clustering analysis and segmentation of brain image. Expert Syst. Appl. 2021, 167, 114121.
7. Lu, G.; Cao, Z. Radiation pattern synthesis with improved high dimension PSO. In Proceedings of the 2017 Progress in Electromagnetics Research Symposium-Fall (PIERS-FALL), Singapore, 19–22 November 2017; pp. 2160–2165.
8. Zhang, Y.D.; Wu, L. A hybrid TS-PSO optimization algorithm. J. Converg. Inf. Technol. 2011, 6, 169–174.
9. Dong, H.; Pan, Y.; Sun, J. High Dimensional Feature Selection Method of Dual Gbest Based on PSO. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
10. Hichem, H.; Rafik, M.; Mesaaoud, M.T. PSO with crossover operator applied to feature selection problem in classification. Informatica 2018, 42, 189–198.
11. Molaei, S.; Moazen, H.; Najjar-Ghabel, S.; Farzinvash, L. Particle swarm optimization with an enhanced learning strategy and crossover operator. Knowl.-Based Syst. 2021, 215, 106768.
12. Song, S.; Lu, B.; Kong, L.; Cheng, J. A Novel PSO Algorithm Model Based on Population Migration Strategy and its Application. J. Comput. 2011, 6, 280–287.
13. Mouna, H.; Mukhil Azhagan, M.S.; Radhika, M.N.; Mekaladevi, V.; Nirmala Devi, M. Velocity Restriction-Based Improvised Particle Swarm Optimization Algorithm. In Progress in Advanced Computing and Intelligent Engineering; Saeed, K., Chaki, N., Pati, B., Bakshi, S., Mohapatra, D.P., Eds.; Springer: Singapore, 2018; pp. 351–360.
14. Orlando, C.; Ricciardello, A. Analytic solution of the continuous particle swarm optimization problem. Optim. Lett. 2020, 1–11.
15. Mousakazemi, S.M.H. Computational effort comparison of genetic algorithm and particle swarm optimization algorithms for the proportional–integral–derivative controller tuning of a pressurized water nuclear reactor. Ann. Nucl. Energy 2020, 136, 107019.
16. Abdmouleh, Z.; Gastli, A.; Ben-Brahim, L.; Haouari, M.; Al-Emadi, N. Review of optimization techniques applied for the integration of distributed generation from renewable energy sources. Renew. Energy 2017, 113.
17. Hermawanto, D. Genetic Algorithm for Solving Simple Mathematical Equality Problem. arXiv 2017, arXiv:1308.4675.
18. Eltaeib, T.; Mahmood, A. Differential Evolution: A Survey and Analysis. Appl. Sci. 2018, 8, 1945.
19. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; Computational Neuroscience Series; MIT Press: London, UK; Cambridge, MA, USA, 2013.
20. Nobile, M.S.; Cazzaniga, P.; Besozzi, D.; Colombo, R.; Mauri, G.; Pasi, G. Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization. Swarm Evol. Comput. 2018, 39, 70–85.
21. Nobile, M.S.; Pasi, G.; Cazzaniga, P.; Besozzi, D.; Colombo, R.; Mauri, G. Proactive Particles in Swarm Optimization: A self-tuning algorithm based on Fuzzy Logic. In Proceedings of the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Istanbul, Turkey, 2–5 August 2015; pp. 1–8.
22. Adorio, E.P.; January, R. MVF-Multivariate Test Functions Library in C for Unconstrained Global Optimization; University of the Philippines Diliman: Quezon City, Philippines, 2005.
23. Gavana, A. Test Functions Index. 2013. Available online: http://infinity77.net/global_optimization/test_functions.html#test-functions-index (accessed on 16 June 2021).
24. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC'2013 Special Session and Competition on Large-Scale Global Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1–23.
Figure 1. Flow diagrams depicting the process of each algorithm.
Figure 2. The difference between exploitation with PSO (blue envelope) and DPSO (red points on the grid).
Table 1. The list of functions evaluated on for each algorithm.
Abbrev.    Name    Augments    Dimensions
f 0 In-Sample(one of each base function) offset and ill-conditioning(2 each = 14 )
f 1 In-Sample(one of each base function) offset, ill-conditioning, and rotation(2 each = 14 )
f 2 In-Sample(one of each base function) offset, ill-conditioning, rotation, and overlap(2 each = 14 )
f 3 Elliptical(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 4 Elliptical(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 5 Elliptical(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 6 Ackley(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 7 Ackley(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 8 Ackley(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 9 Rosenbrock(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 10 Rosenbrock(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 11 Rosenbrock(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 12 Rastrigin(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 13 Rastrigin(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 14 Rastrigin(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 15 Drop-Wave(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 16 Drop-Wave(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 17 Drop-Wave(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 18 Zero-Sum(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 19 Zero-Sum(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 20 Zero-Sum(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
f 21 Salomon(4 copies) offset and ill-conditioning(~ 14 / 4 each = 14 )
f 22 Salomon(4 copies) offset, ill-conditioning, and rotation(~ 14 / 4 each = 14 )
f 23 Salomon(4 copies) offset, ill-conditioning, rotation, and overlap(~ 14 / 4 each = 14 )
Table 2. Normalized fitness w.r.t. each function problem with 7 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 Norm1.0004.286 × 10 27 0.0002.788 × 10 8
f 1 Norm4.198 × 10 6 1.0003.444 × 10 4 0.000
f 2 Norm2.965 × 10 5 1.0004.319 × 10 5 0.000
f 3 Norm0.0004.829 × 10 13 2.276 × 10 12 1.000
f 4 Norm1.0001.317 × 10 1 8.409 × 10 2 0.000
f 5 Norm0.0005.785 × 10 1 1.915 × 10 1 1.000
f 6 Norm2.713 × 10 1 1.0000.0005.643 × 10 1
f 7 Norm5.350 × 10 1 1.0004.485 × 10 1 0.000
f 8 Norm2.126 × 10 1 1.0000.0009.816 × 10 2
f 9 Norm0.0001.0002.284 × 10 1 4.773 × 10 3
f 10 Norm0.0001.0002.065 × 10 1 1.180 × 10 1
f 11 Norm0.0001.0009.790 × 10 2 3.169 × 10 2
f 12 Norm4.899 × 10 2 1.0000.0005.153 × 10 1
f 13 Norm0.0008.458 × 10 1 2.568 × 10 1 1.000
f 14 Norm4.650 × 10 2 1.0003.996 × 10 1 0.000
f 15 Norm0.0001.0008.147 × 10 1 8.099 × 10 1
f 16 Norm1.924 × 10 1 1.0000.0004.153 × 10 1
f 17 Norm0.0001.0007.439 × 10 1 8.794 × 10 1
f 18 Norm6.275 × 10 1 2.121 × 10 1 1.0000.000
f 19 Norm3.254 × 10 1 9.631 × 10 1 1.0000.000
f 20 Norm0.0001.0006.042 × 10 1 3.920 × 10 1
f 21 Norm1.0003.965 × 10 4 2.108 × 10 1 0.000
f 22 Norm2.659 × 10 3 1.0007.282 × 10 2 0.000
f 23 Norm0.0001.0001.036 × 10 1 8.478 × 10 3
Rank Factor    1.000    3.560    1.228    1.299
Rank Sum       47       84       59       50
Overall Rank   1        4        3        2
Table 3. Normalized fitness w.r.t. each function problem with 14 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 Norm1.0003.179 × 10 26 3.198 × 10 18 0.000
f 1 Norm4.177 × 10 2 1.0009.584 × 10 1 0.000
f 2 Norm5.512 × 10 27 1.0001.790 × 10 8 0.000
f 3 Norm1.038 × 10 1 8.430 × 10 7 1.0000.000
f 4 Norm1.0001.695 × 10 15 0.0004.382 × 10 16
f 5 Norm4.974 × 10 26 1.0000.0004.589 × 10 18
f 6 Norm5.333 × 10 1 1.0000.0007.938 × 10 1
f 7 Norm1.0009.388 × 10 1 2.211 × 10 1 0.000
f 8 Norm1.0009.366 × 10 1 2.205 × 10 1 0.000
f 9 Norm3.572 × 10 2 1.0004.059 × 10 1 0.000
f 10 Norm1.183 × 10 2 6.927 × 10 1 1.0000.000
f 11 Norm0.0001.0007.310 × 10 3 6.228 × 10 5
f 12 Norm0.0001.0001.529 × 10 1 2.407 × 10 1
f 13 Norm0.0001.0002.248 × 10 1 6.415 × 10 2
f 14 Norm4.245 × 10 1 1.0007.824 × 10 1 0.000
f 15 Norm0.0001.0004.718 × 10 1 8.685 × 10 1
f 16 Norm0.0001.0002.758 × 10 1 2.454 × 10 1
f 17 Norm0.0009.025 × 10 1 8.807 × 10 1 1.000
f 18 Norm0.0001.0003.761 × 10 1 2.728 × 10 1
f 19 Norm8.134 × 10 2 1.0003.241 × 10 1 0.000
f 20 Norm0.0001.0004.254 × 10 1 1.214 × 10 1
f 21 Norm0.0003.293 × 10 1 1.443 × 10 1 1.000
f 22 Norm6.666 × 10 2 2.230 × 10 2 1.0000.000
f 23 Norm9.424 × 10 2 1.326 × 10 1 1.0000.000
Rank Factor    1.171    3.897    2.143    1.000
Rank Sum       49       83       64       44
Overall Rank   2        4        3        1
Table 4. Normalized fitness w.r.t. each function problem with 21 dimensions and the resulting ranking: rank 1 (best), rank 2, rank 3, and rank 4.
Function    DPSO    PSO    GA    DE
f 0 Norm2.246 × 10 9 1.0001.006 × 10 16 0.000
f 1 Norm8.388 × 10 3 1.147 × 10 1 1.0000.000
f 2 Norm0.0002.710 × 10 5 6.438 × 10 11 1.000
f 3 Norm3.810 × 10 6 1.0000.0001.593 × 10 8
f 4 Norm0.0004.428 × 10 15 2.682 × 10 1 1.000
f 5 Norm4.154 × 10 5 0.0001.0001.447 × 10 3
f 6 Norm7.895 × 10 1 1.0007.057 × 10 1 0.000
f 7 Norm0.0001.0008.062 × 10 1 8.557 × 10 1
f 8 Norm9.287 × 10 1 1.0000.0009.443 × 10 1
f 9 Norm2.543 × 10 2 1.0007.551 × 10 3 0.000
f 10 Norm0.0001.0006.413 × 10 2 5.706 × 10 1
f 11 Norm0.0001.0004.721 × 10 2 6.814 × 10 3
f 12 Norm0.0001.0001.595 × 10 1 5.604 × 10 1
f 13 Norm0.0009.106 × 10 1 1.182 × 10 1 1.000
f 14 Norm3.266 × 10 2 1.0006.224 × 10 1 0.000
f 15 Norm8.635 × 10 1 1.0005.252 × 10 1 0.000
f 16 Norm3.222 × 10 1 1.0000.0005.351 × 10 1
f 17 Norm1.238 × 10 1 9.600 × 10 1 1.0000.000
f 18 Norm4.544 × 10 1 1.0005.639 × 10 1 0.000
f 19 Norm4.795 × 10 1 1.0000.0003.025 × 10 1
f 20 Norm1.288 × 10 2 1.848 × 10 1 1.0000.000
f 21 Norm0.0001.0002.130 × 10 1 1.616 × 10 1
f 22 Norm2.277 × 10 2 0.0001.0003.147 × 10 1
f 23 Norm5.228 × 10 1 0.0001.0001.202 × 10 2
Rank Factor    1.000    3.744    2.202    1.584
Rank Sum       47       80       61       52
Overall Rank   1        4        3        2
Table 5. Computational costs in terms of average memory and time for each dimension size on the elliptical problem.
        DPSO             PSO              GA               DE
Dims    [Bytes]   [s]    [Bytes]   [s]    [Bytes]   [s]    [Bytes]   [s]
7       6872      1.849  6552      1.218  5528      1.849  4944      1.849
14      7904      1.879  7616      1.060  6280      1.856  5448      1.851
21      8968      1.866  8680      1.474  7096      1.860  5952      1.857
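As a rough illustration of why the byte counts above grow linearly with dimensionality, the sketch below sums the sizes of a hypothetical swarm's core arrays. The particle count (50) and the choice of state arrays are assumptions for illustration, not taken from the article, so the totals will not match Table 5 exactly.

```python
import numpy as np

def swarm_state_bytes(n_particles, n_dims):
    """Bytes held by a hypothetical swarm state: positions, velocities,
    per-particle best positions, plus one global-best vector."""
    per_particle = [np.zeros((n_particles, n_dims)) for _ in range(3)]
    global_best = np.zeros(n_dims)
    return sum(a.nbytes for a in per_particle) + global_best.nbytes

# Memory grows linearly with the number of dimensions, as in Table 5.
mem_7 = swarm_state_bytes(50, 7)    # 3*50*7*8 + 7*8 = 8456 bytes
mem_21 = swarm_state_bytes(50, 21)
```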
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
