Article

Semi-Steady-State Jaya Algorithm for Optimization

by
Uday K. Chakraborty
Department of Computer Science, University of Missouri, St. Louis, MO 63121, USA
Appl. Sci. 2020, 10(15), 5388; https://doi.org/10.3390/app10155388
Submission received: 22 June 2020 / Revised: 26 July 2020 / Accepted: 27 July 2020 / Published: 4 August 2020

Abstract
The Jaya algorithm is arguably one of the fastest-emerging metaheuristics amongst the newest members of the evolutionary computation family. The present paper proposes a new, improved Jaya algorithm by modifying the update strategies of the best and the worst members in the population. Simulation results on a twelve-function benchmark test-suite and a real-world problem show that the proposed strategy produces results that are better and faster in the majority of cases. Statistical tests of significance are used to validate the performance improvement.

1. Introduction

For the optimization of computationally hard problems and of problems that are mathematically intractable, machine-learning-based strategies such as evolutionary computation (EC) [1] and artificial neural networks (ANNs) [2] have seen significant success in numerous application areas. The “no-free-lunch theorem” [3] tells us that, theoretically, over all possible optimization functions, all algorithms perform equally well. In practice, however, for specific problems (particularly hard problems), the need for ever-better algorithms (and heuristics) remains.
The Jaya algorithm [4], one of the newest members of the evolutionary computation family, has seen remarkable success across a wide variety of applications in continuous optimization. Jaya’s success can arguably be attributed to two features: (a) it requires very few algorithm parameters, and (b) compared to most of its EC cousins, Jaya is extremely simple to implement. A user of the Jaya algorithm has to decide on suitable values for only two parameters: population size and the number of iterations (generations). Because any population-based algorithm (or heuristic) must have a population size, and because the user of any algorithm/heuristic must know when to stop the process, it can be argued that the population size and the stopping condition are fundamental attributes of any population-based heuristic and that the Jaya algorithm is therefore effectively parameterless. In this paper, we present an algorithm that improves on the Jaya algorithm by modifying the search strategy, without compromising either of the above two qualities. The improved algorithm uses new update strategies for the best and the worst members of the population. The comparative performance of Jaya and the proposed method is studied empirically on a twelve-function benchmark test-suite as well as on a real-world problem from fuel cell stack design optimization. The improvement in performance afforded by the proposed algorithm is validated with statistical tests of significance. (Technically, Jaya is not an algorithm but a heuristic. However, following common practice in the evolutionary computation community, we continue to refer to it as an algorithm in this paper.)
The remainder of this paper is organized as follows. A very brief outline of some of the most interesting previous work on the Jaya algorithm is presented in Section 2. Section 3 presents the proposed algorithm. Simulation results and statistical tests for performance analysis are presented in Section 4. Finally, conclusions are drawn in Section 5.

2. A Brief Overview of Previous Work on Jaya

A variation of the standard Jaya algorithm is presented in the multi-team perturbation-guiding Jaya (MTPG-Jaya) [5] where several “teams” explore the search space, with the same population being used by each team, while the “perturbations” governing the progression of the teams are different. The MTPG-Jaya was applied to the layout optimization problem of a wind farm. The Jaya algorithm was originally designed for continuous (real-valued) optimization, and most of Jaya’s applications to date have been in the continuous domain. A binary version of Jaya, however, was proposed in [6], where the authors borrowed (from [7]) the idea of combining particle swarm optimization with angle modulation and adapted that idea for Jaya. The binary Jaya was applied to feature selection in [6]. Modifications to the standard Jaya algorithm include a self-adaptive multi-population-based Jaya algorithm that was applied to entropy generation minimization of a plate-fin heat exchanger [8], a multi-objective Jaya algorithm that was applied to waterjet machining process optimization [9], and a hybrid parallel Jaya algorithm for a multi-core environment [10]. Application areas of the Jaya algorithm have included such diverse fields as pathological brain detection systems [11], flow-shop scheduling [12], maximum power point tracking problems in photovoltaic systems [13], identification and monitoring of electroencephalogram-based brain-computer interface for motor imagery tasks [14], and traffic signal control [15].

3. The Proposed Algorithm

The new algorithm is presented in Algorithm 1 where, without loss of generality, an array representation with conventional indexed access is assumed for the members (individuals) of a population. At each generation, we examine the individuals in the population one by one, in sequence, conditionally replacing each with a newly created individual. A new individual is created from the current individual by using the best individual, the worst individual, and, per problem parameter (variable), two random numbers, each chosen uniformly at random in (0, 1]. The generation of the new individual $x^{\text{new}}$, given the current individual $x^{\text{current}}$, is described by the following equation ($x^{\text{new}}$, $x^{\text{current}}$, $x^{\text{best}}$, and $x^{\text{worst}}$ are each $d$-component vectors):

$$x_i^{\text{new}} = x_i^{\text{current}} + r_{t,i,1}\left(x_i^{\text{best}} - |x_i^{\text{current}}|\right) - r_{t,i,2}\left(x_i^{\text{worst}} - |x_i^{\text{current}}|\right),$$

where the $x_i$, $i = 1$ to $d$, represent the $d$ parameters (variables) to be optimized; $r_{t,i,1}$ and $r_{t,i,2}$ are each a random number in (0.0, 1.0]; $t$ indicates the iteration (generation) number; and $x^{\text{best}}$ and $x^{\text{worst}}$ represent, respectively, the best and the worst individual in the population at the time of the creation of $x^{\text{new}}$ from $x^{\text{current}}$. When $x_i^{\text{new}}$ falls outside its problem-specified lower or upper bound, it is clamped at the appropriate bound.
In the original Jaya algorithm, the new individual replaces the current individual only if the former is strictly better than the latter. The present algorithm, however, accepts the new individual if it is at least as good as the current one, that is, if its fitness is equal to or better than that of the current individual (smaller for minimization, larger for maximization).
The original Jaya updates the population-best and the population-worst individuals once every generation. Algorithm 1, however, checks whether $x^{\text{best}}$ needs to be updated, and performs the update if needed, after every single replacement of an existing individual. A similar approach is adopted for updating $x^{\text{worst}}$, but in this case an update is needed only when the existing (current) individual is the worst one; this is because a replacement is guaranteed never to make the objective (cost) function value worse.
Algorithm 1: Pseudocode of the improved algorithm.
[The pseudocode of Algorithm 1 appears as an image in the published article.]
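Since the published pseudocode is an image, the following is a minimal Python sketch of the proposed algorithm reconstructed from the description in this section; it is a reading of the text, not the author's original code. Note that `random.random()` samples [0, 1) rather than the (0, 1] specified above, a difference of no practical consequence.

```python
import random

def semi_steady_state_jaya(f, bounds, pop_size, gens):
    """Minimize f over the box 'bounds' = [(lo_1, hi_1), ..., (lo_d, hi_d)]."""
    d = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    best = min(range(pop_size), key=lambda k: fit[k])
    worst = max(range(pop_size), key=lambda k: fit[k])
    for _ in range(gens):
        for j in range(pop_size):
            new = []
            for i in range(d):
                r1, r2 = random.random(), random.random()
                xi = (pop[j][i]
                      + r1 * (pop[best][i] - abs(pop[j][i]))
                      - r2 * (pop[worst][i] - abs(pop[j][i])))
                lo, hi = bounds[i]
                new.append(min(max(xi, lo), hi))    # clamp at the bounds
            f_new = f(new)
            if f_new <= fit[j]:                     # accept if at least as good
                was_worst = (j == worst)
                pop[j], fit[j] = new, f_new
                if f_new <= fit[best]:              # best updated after every
                    best = min(range(pop_size),     # single replacement, by a
                               key=lambda k: fit[k])  # full-population scan
                if was_worst:                       # worst can change only when
                    worst = max(range(pop_size),    # the worst member itself
                                key=lambda k: fit[k])  # was replaced
    return pop[best], fit[best]
```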
The simultaneous presence in the population of more than one best (or worst) individual (clones of the same individual and/or different genotypes with the same phenotype) presents no problem for the new algorithm, because the computation of the best (or worst) is always over the entire population, that is, it is never done incrementally.
We improve upon Jaya by changing the policies for updating the best and the worst members and also by changing the criterion used to accept a new member as a replacement of an existing member. The motivation for the first pair of changes comes from the argument that an early availability and use of the best and worst individuals should lead to an earlier creation of better individuals; this is similar to the idea behind the “steady-state” operation of genetic algorithms [16,17]. The logic behind the second change is to try to avoid the “plateau problem”: accepting moves of equal fitness allows the search to drift across flat regions of the fitness landscape instead of stalling on them.
We call the proposed algorithm semi-steady-state Jaya or SJaya.

4. Simulation Results

For studying the comparative performance of Jaya and SJaya, we use a benchmark test-suite comprising a dozen well-known test functions from the literature and a real-world problem of fuel cell stack design optimization. All of the thirteen problems involve minimization of the objective function value (fitness). The following metrics [18] are used for performance comparison:
  • Best-of-run fitness: the best (lowest), mean, and standard deviation of the best-of-run fitness values from 30 (or 100 [Section 4.3]) runs;
  • The number of fitness evaluations (FirstHitEvals) needed to reach a specified fitness value for the first time in a run: the best (fewest), mean, and standard deviation of these numbers from 30 (or 100 [Section 4.3]) runs;
  • Success count: the number of runs (out of the thirty or the hundred) in which the specified fitness level is reached (it is possible that the specified level is never reached with the given population size and the given number of generations).
The best-of-run fitness provides a measure of the quality of the solution, while the FirstHitEvals metric expresses how fast the algorithm is able to find a solution of a given quality. The two metrics are thus complementary to each other.

4.1. Results on the Benchmark Test-Suite

The benchmark suite (Table 1) [4,19] includes functions covering a wide variety of features and levels of difficulty: unimodal/multimodal, separable/non-separable, continuous/discontinuous, differentiable/non-differentiable, and convex/non-convex.
For each test function, the population size and the number of generations were chosen loosely based on the problem size (number of variables) and the problem difficulty. No systematic tuning of the population size (PopSize) or the number of generations (Gens) was attempted; the values used in this study were found to be reasonably good across a majority of the problems after a few initial trials. Two PopSize-Gens combinations were used for each function (see Table 2). For d = 30, population sizes of 100 and 150 were used, with the corresponding numbers of generations being 3000 and 5000. For d = 2, the population sizes were 15 and 20, with 5000 generations used for both. Thirty independent runs of each of the two algorithms were executed for each PopSize-Gens combination on each of the test functions. A run is considered a success if it manages to produce at least one solution with a fitness within $\pm 1.0 \times 10^{-6}$ of the true (known) global optimum, and the number of fitness evaluations corresponding to the first appearance of such a solution is recorded as the FirstHitEvals of that run (see the sketch below).
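To make the success criterion concrete, the following hypothetical helper (not part of the original study's code) extracts a run's FirstHitEvals from a log of (evaluations-so-far, fitness) pairs:

```python
def first_hit(eval_log, optimum, tol=1.0e-6):
    """Return the FirstHitEvals of a run, or None if the run is unsuccessful.

    eval_log: (num_evaluations_so_far, fitness) pairs in the order in which
    the fitness evaluations occurred during the run."""
    for num_evals, fitness in eval_log:
        if abs(fitness - optimum) <= tol:   # within +/- 1.0e-6 of the optimum
            return num_evals
    return None
```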
Table 2 and Table 3 show the results of SJaya and Jaya, respectively, on the 12-function test-suite. In all the tables in this paper, results are rounded at the fourth decimal place.
From Table 2 and Table 3 we see that SJaya produces results superior to Jaya’s on all the metrics. Specifically,
  • On the best of best-of-runs metric, out of 24 cases, SJaya outperforms Jaya in 12 cases and is outperformed by Jaya in 2, with 10 ties. In a few cases (such as the 3.0000 values of the best and the mean of the best-of-run fitnesses for the Goldstein-Price function under both SJaya and Jaya), differences exist at the fifth or a later decimal position and thus do not show in Table 2 and Table 3.
  • On the mean of best-of-runs metric, SJaya is the winner with win-loss-tie figures of 18-1-5.
  • The success counts are higher (5-1-18) for SJaya.
  • SJaya outperforms Jaya 19-1-4 on the best FirstHitEvals metric.
  • On the mean FirstHitEvals metric, SJaya outperforms Jaya 19-1-4.
Table 4 presents the t-scores and one-tailed p-values from Smith–Satterthwaite tests (Welch’s tests) [20], which do not assume equal population variances, run on the data in Table 2 and Table 3 to examine whether or not the difference between the means of Jaya and SJaya (for the best-of-run fitness metric and, separately, for the FirstHitEvals metric) is significant. Using the subscripts 1 and 2 for Jaya and SJaya, respectively, we obtain the test statistic as a t-score given by
$$t = \frac{\bar{x}_1 - \bar{x}_2 - 0}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}},$$
and the degrees of freedom of the t-distribution (this t-distribution is used to approximate the sampling distribution of the difference between the two means) as
$$\frac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}},$$
where the symbols $\bar{x}$, $s$, and $n$ represent mean, standard deviation, and sample size, respectively. Note that even though 30 runs were executed in each case, the sample sizes are not always 30 (because not all runs were successful in all cases); for instance, for the Goldstein-Price function (executed with parameters PopSize = 15 and Gens = 5000), $n_1 = n_2 = 30$ for the mean best-of-run fitness calculation, whereas $n_1 = 5$ and $n_2 = 6$ for the mean FirstHitEvals computation. (To avoid division by zero, the above formulas cannot be used when both $s_1$ and $s_2$ are zero or when either $n_1$ or $n_2$ is unity.)
Using α = 0.05 as the level of significance, we see from the results in Table 4 that on the best-of-run metric, out of a total of 19 cases, ten produce a positive t-statistic with a one-tailed p-value less than α (the p-values were obtained with t-tests from scipy.stats). Thus the null hypothesis $\bar{x}_1 = \bar{x}_2$ must be rejected in favor of $\bar{x}_1 > \bar{x}_2$ for those ten cases. The 19 cases include a lone negative t-score, but the corresponding p-value is greater than 0.05. On the FirstHitEvals metric, we again have a total of 19 cases (that both totals are 19 is a coincidence), of which fourteen have a positive t with a p-value less than 0.05, and a single case has a negative t-score with a less-than-0.05 p-value.
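As an illustration, the sketch below computes the same one-tailed Welch test with scipy.stats; the paper states only that the p-values came from scipy.stats, so the wrapper and its name are illustrative rather than the author's code.

```python
import numpy as np
from scipy import stats

def welch_one_tailed(jaya, sjaya):
    """Smith-Satterthwaite (Welch) test; H1: mean(jaya) > mean(sjaya)."""
    x1, x2 = np.asarray(jaya, dtype=float), np.asarray(sjaya, dtype=float)
    n1, n2 = len(x1), len(x2)
    v1, v2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2   # s_i^2 / n_i
    t = (x1.mean() - x2.mean()) / np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df, stats.t.sf(t, df)                     # one-tailed p-value

# scipy.stats.ttest_ind(x1, x2, equal_var=False) yields the same t-score,
# with a two-tailed p-value.
```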
The statistical tests in Table 4 provide performance comparisons separately on each of the twelve functions (using two different algorithm parameter settings per function). A measure of the combined performance on the 12 functions taken together can be obtained with a paired-sample Wilcoxon signed rank test on the 12-function suite. The results of this test for each of the two metrics are presented in Table 5, where the null hypothesis is that the Jaya mean and the SJaya mean are identical and the alternative hypothesis is that the former is larger than the latter. The second column in Table 5 shows the number of zero differences between SJaya and Jaya; n represents the effective number of samples obtained by ignoring the samples, if any, corresponding to zero differences (e.g., n is 24 − 5 = 19 for the mean of best-of-run fitness metric); W is the test statistic obtained as the minimum of $W^+$ and $W^-$; α represents the level of significance (a value of 0.05 is used here); and the critical W for a given n and for α = 0.05 is obtained from standard statistical tables. The W statistic is seen to be less than the critical W. The mean of W is
$$\text{mean} = \frac{n(n+1)}{4},$$
and its standard deviation is given by
$$\text{std dev} = \sqrt{\frac{n(n+1)(2n+1)}{24}},$$
and, arguing that the sample size n is large enough for the discrete distribution of the W statistic to be approximated by a normal distribution, we obtain the z-statistic as
$$z = \frac{W - \text{mean}}{\text{std dev}}.$$
The one-tailed p-value corresponding to the above z-statistic is obtained from standard tables of the normal distribution.
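A short sketch of this normal approximation, checked against the document's own Table 5 numbers:

```python
import math
from scipy import stats

def wilcoxon_normal_approx(w, n):
    """z-statistic and left-tail p for W = min(W+, W-), with n nonzero diffs."""
    mean_w = n * (n + 1) / 4.0
    std_w = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mean_w) / std_w
    return z, stats.norm.cdf(z)

# Reproduces Table 5 (mean of best-of-run fitnesses): n = 19, W = 15.
print(wilcoxon_normal_approx(15, 19))   # approx. (-3.2194, 0.0006)
```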
From the results in Table 4 and Table 5 we conclude that at the 5% significance level, SJaya is better than Jaya on the benchmark test-set.

4.2. Results on Fuel Cell Stack Design Optimization

A proton exchange membrane fuel cell (PEMFC) [21,22] stack design optimization problem [23,24,25] is considered here. This problem has been investigated in the fuel cell literature as a problem of practical importance whose global minimum is believed to be mathematically intractable [24]. It is a constrained optimization problem in which the task is to minimize the cost of building a PEMFC stack that meets specified requirements. The objective (cost) function is a function of the three variables $N_p$, $N_s$, $A_{\text{cell}}$:

$$\text{cost} = K_n \, N_p \, N_s + K_{\text{diff}} \left| V_{\text{load,rated}} - V_{\text{load,mpp}} \right| + K_a \, A_{\text{cell}} + P,$$

where $N_s$ is the number of cells connected in series in each group; $N_p$ is the number of groups connected in parallel; $A_{\text{cell}}$ is the cell area; $V_{\text{load,r}}$ is the rated (given) terminal voltage of the stack; $V_{\text{load,mpp}}$ is the output voltage at the maximum power point of the stack; $P_{\text{load,r}}$ is the rated (given) output power of the stack; $P_{\text{load,max}}$ is the maximum output power of the stack; $K_n$, $K_{\text{diff}}$, and $K_a$ are pre-determined constants [24] used to adjust the relative importance of the different components of the cost function; and $P$ is a penalty term given by

$$P = \begin{cases} 0 & \text{if } P_{\text{load,max}} \geq P_{\text{load,r}}, \\ c \,(P_{\text{load,r}} - P_{\text{load,max}}) & \text{otherwise}. \end{cases}$$

$P_{\text{load,max}}$ and $V_{\text{load,mpp}}$ are obtained numerically from the following equation by iterating over the load current $i_{\text{load}}$ (recall that power is voltage times current), using a step size of $\Delta i_{\text{load}} = 1$ mA:

$$V_{\text{st}} = N_s \left[ E_{\text{Nernst}} - A \ln\!\left( \frac{i_{\text{load,d}}/N_p + i_{n,d}}{i_{0,d}} \right) + B \ln\!\left( 1 - \frac{i_{\text{load,d}}/N_p + i_{n,d}}{i_{\text{limit,d}}} \right) - \left( i_{\text{load,d}}/N_p + i_{n,d} \right) r_a \right],$$

where $V_{\text{st}}$ is the stack voltage, $E_{\text{Nernst}}$ is the Nernst e.m.f., $A$ and $B$ are constants known from electrochemistry, $r_a$ is the area-specific resistance, and the $i$'s represent different types of current densities (the subscript $d$ indicates density) in the cell [21,26]. The lower and upper bounds of $N_s$, $N_p$, and $A_{\text{cell}}$ are provided in Table 6, and the numerical values of the parameters in Table 7.
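For concreteness, a minimal sketch of the cost evaluation follows. Two details are assumptions read into the equations above rather than facts confirmed by the text: (i) the voltage term in the cost is taken as the absolute difference between $V_{\text{load,rated}}$ and $V_{\text{load,mpp}}$, and (ii) $i_{\text{load,d}}$ is taken as $i_{\text{load}}/A_{\text{cell}}$, so the cell current density is $i_{\text{load}}/(N_p A_{\text{cell}}) + i_{n,d}$. Units follow Table 7 (current densities in mA/cm², $r_a$ in kΩ·cm², so their product is directly in volts):

```python
import math

# Parameters from Table 7.
E_NERNST, A_CONST, B_CONST = 1.04, 0.05, 0.08     # V
I_LIMIT_D, I0_D, IN_D = 129.0, 0.21, 1.26         # mA/cm^2
R_A = 98.0e-6                                     # kOhm.cm^2
K_N, K_DIFF, K_A, C_PENALTY = 0.5, 10.0, 0.001, 200.0
V_RATED, P_RATED = 12.0, 200.0                    # V, W

def stack_voltage(i_load, n_s, n_p, a_cell):
    """V_st for load current i_load (mA); None outside the model's validity."""
    i_d = i_load / (n_p * a_cell) + IN_D          # cell current density
    if i_d <= 0.0 or i_d >= I_LIMIT_D:
        return None
    v_cell = (E_NERNST
              - A_CONST * math.log(i_d / I0_D)
              + B_CONST * math.log(1.0 - i_d / I_LIMIT_D)
              - i_d * R_A)
    return n_s * v_cell

def cost(n_p, n_s, a_cell):
    """Cost of a candidate stack, sweeping i_load in 1 mA steps to locate
    the maximum power point (P_load,max, V_load,mpp)."""
    p_max, v_mpp, i_load = 0.0, 0.0, 1.0
    while True:
        v_st = stack_voltage(i_load, n_s, n_p, a_cell)
        if v_st is None or v_st <= 0.0:
            break
        p = v_st * i_load / 1000.0                # W (V times mA / 1000)
        if p > p_max:
            p_max, v_mpp = p, v_st
        i_load += 1.0                             # 1 mA step
    penalty = 0.0 if p_max >= P_RATED else C_PENALTY * (P_RATED - p_max)
    return K_N * n_p * n_s + K_DIFF * abs(V_RATED - v_mpp) + K_A * a_cell + penalty
```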
Table 8 and Table 9 present the results of the two algorithms on the fuel cell problem; 30 independent runs are executed for each of 13 PopSize-Gens combinations for each algorithm. For this problem, a run is considered a success if it produces at least one solution with a fitness of 13.62 or lower [24]. For 12 of the 13 cases in Table 8, the mean of the best-of-run costs is better for SJaya than for Jaya. Furthermore, on the mean FirstHitEvals metric, SJaya outperforms Jaya in 10 of the 13 cases, and SJaya beats Jaya 9-3-1 on the success-count metric. Results of Smith–Satterthwaite tests (Table 10) show that for the best-of-run cost metric, the t-statistic is positive in all but one case, yet the one-tailed p-values are not below 0.05; thus we do not have a strong reason, at the 5% significance level, to reject the null hypothesis that the two means of the best-of-run costs are equal. For the best-of-run metric, the single negative t-score in Table 10 corresponds to a p-value close to 0.5, indicating no reason to consider Jaya significantly better than SJaya in that case. The FirstHitEvals metric shows SJaya to be significantly better (at the 5% level) in two of the 12 cases, the others being ties at that level of significance.
Table 11 shows results of Wilcoxon signed-rank tests for the PEMFC problem. For each of the two metrics, the W-statistic is less than the critical W. Moreover, the one-tailed p-value computed from the z-score is less than 0.05 for both the metrics, thereby establishing a significant (at the 5% level) superiority of SJaya over Jaya on the fuel cell problem.

4.3. Performance Comparison with the Algorithm of Chakraborty (Energies, 2019)

For a head-to-head comparison of SJaya with the Jaya variant developed in [24], 100 independent runs of SJaya are executed, and the results are summarized in Table 12. A comparison of the means of the 100 best-of-run costs (Table 12 in this paper and Table 14 in [24]) shows that the present approach’s mean value is lower in five of the 13 cases and higher in the remaining eight. The difference, however, is not statistically significant, as seen from the results (Table 13) of the Wilcoxon signed rank test, which shows no clear advantage for either algorithm on this metric (the one-tailed p-value is much closer to 0.5 than to zero). On the success count metric (Table 15 in [24]), SJaya outperforms the method of [24] in six cases and is outperformed in four, with three ties. On the mean FirstHitEvals metric (Table 15 in [24]), SJaya wins in 11 of the 13 cases, a difference that is statistically significant at the 5% level (the p-value in Table 13 is 0.0044). Thus we conclude that SJaya is quite competitive with the method of [24].

4.4. Performance Comparison with Other Heuristics

Because the performance of EC heuristics depends, often dramatically, on parameter settings, an empirical performance comparison of SJaya with other heuristics may not mean much unless either those competing heuristics require no parameters other than the population size and the number of generations, or the comparative study is based on runs with a very large number of parameter-setting combinations. Most stochastic heuristics in the EC family, however, employ additional parameters (probability of crossover [27], probability of mutation [28], and strategy parameters [1,29], to name a few). A proper head-to-head comparison of SJaya with non-Jaya methods is therefore difficult. Table 14 presents the results of a brief comparative study of SJaya with four well-known EC algorithms, namely the genetic algorithm (GA) [1], particle swarm optimization (PSO) [30], differential evolution (DE) [29], and the artificial bee colony algorithm (ABC) [31]. The metrics used for comparison are the mean and the standard deviation of the best-of-run solutions of 30 independent runs for each problem, with each run executed for 500,000 evaluations (population size = 50 and number of generations = 10,000). (The population size and the number of generations used to produce the data in Table 14 are different from those used earlier in this paper; and, for Ackley and Rosenbrock, the bounds are not the same as the corresponding ones in Table 1.) The non-SJaya results in this table are taken from [31]. These results show that SJaya is competitive with the other methods.

5. Conclusions

This paper presented an improvement to the Jaya algorithm obtained by introducing new update policies into the search process. The usefulness of the present approach is that, unlike most other improvements to Jaya reported in the literature, our strategy does not require the introduction of any additional parameter. It retains both features for which the original Jaya is known, namely “parameterlessness” and simplicity, while providing performance that is statistically significantly better (in terms of solution quality) and/or faster (in terms of the speed of finding a near-optimal solution) than that of Jaya.

Funding

This research was funded in part by United States National Science Foundation Grant IIS-1115352.

Acknowledgments

The author thanks the anonymous reviewers for their comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Michalewicz, Z. Genetic Algorithms+ Data Structures = Evolution Programs; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
  2. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  3. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  4. Rao, R. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar]
  5. Rao, R.V.; Keesari, H.S. Multi-team perturbation guiding Jaya algorithm for optimization of wind farm layout. Appl. Soft Comput. 2018, 71, 800–815. [Google Scholar] [CrossRef]
  6. Li, Y.; Yang, Z. Application of EOS-ELM with binary Jaya-based feature selection to real-time transient stability assessment using PMU data. IEEE Access 2017, 5, 23092–23101. [Google Scholar] [CrossRef]
  7. Pampara, G.; Franken, N.; Engelbrecht, A.P. Combining particle swarm optimisation with angle modulation to solve binary problems. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Scotland, UK, 2–5 September 2005; Volume 1, pp. 89–96. [Google Scholar]
  8. Rao, R.V.; Saroj, A. A self-adaptive multi-population based Jaya algorithm for engineering optimization. Swarm Evol. Comput. 2017, 37, 1–26. [Google Scholar]
  9. Rao, R.V.; Rai, D.P.; Balic, J. Multi-objective optimization of abrasive waterjet machining process using Jaya algorithm and PROMETHEE Method. J. Intell. Manuf. 2019, 30, 2101–2127. [Google Scholar] [CrossRef]
  10. Michailidis, P.D. An efficient multi-core implementation of the Jaya optimisation algorithm. Int. J. Parallel Emergent Distrib. Syst. 2019, 34, 288–320. [Google Scholar] [CrossRef]
  11. Nayak, D.R.; Dash, R.; Majhi, B. Development of pathological brain detection system using Jaya optimized improved extreme learning machine and orthogonal ripplet-II transform. Multimed. Tools Appl. 2018, 77, 22705–22733. [Google Scholar] [CrossRef]
  12. Buddala, R.; Mahapatra, S.S. Improved teaching–learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems. J. Ind. Eng. Int. 2018, 14, 555–570. [Google Scholar] [CrossRef] [Green Version]
  13. Huang, C.; Wang, L.; Yeung, R.S.C.; Zhang, Z.; Chung, H.S.H.; Bensoussan, A. A prediction model-guided Jaya algorithm for the PV system maximum power point tracking. IEEE Trans. Sustain. Energy 2017, 9, 45–55. [Google Scholar] [CrossRef]
  14. Sinha, R.K.; Ghosh, S. Jaya based ANFIS for monitoring of two class motor imagery task. IEEE Access 2016, 4, 9273–9282. [Google Scholar]
  15. Gao, K.; Zhang, Y.; Sadollah, A.; Su, R. Jaya algorithm for solving urban traffic signal control problem. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1–6. [Google Scholar]
  16. Syswerda, G. A study of reproduction in generational and steady-state genetic algorithms. In Foundations of Genetic Algorithms; Elsevier: Amsterdam, The Netherlands, 1991; Volume 1, pp. 94–101. [Google Scholar]
  17. Chakraborty, U.K.; Deb, K.; Chakraborty, M. Analysis of selection algorithms: A Markov chain approach. Evol. Comput. 1996, 4, 133–167. [Google Scholar] [CrossRef]
  18. Chakraborty, U.K.; Abbott, T.E.; Das, S.K. PEM fuel cell modeling using differential evolution. Energy 2012, 40, 387–399. [Google Scholar] [CrossRef]
  19. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef] [Green Version]
  20. Johnson, R.A.; Miller, I.; Freund, J.E. Probability and Statistics for Engineers; Pearson Education: London, UK, 2000; Volume 2000. [Google Scholar]
  21. Larminie, J.; Dicks, A.; McDonald, M.S. Fuel Cell Systems Explained; J. Wiley: Chichester, UK, 2003; Volume 2. [Google Scholar]
  22. O’Hayre, R.; Cha, S.W.; Colella, W.; Prinz, F.B. Fuel Cell Fundamentals; John Wiley & Sons: Chichester, UK, 2016. [Google Scholar]
  23. Mohamed, I.; Jenkins, N. Proton exchange membrane (PEM) fuel cell stack configuration using genetic algorithms. J. Power Sources 2004, 131, 142–146. [Google Scholar] [CrossRef]
  24. Chakraborty, U.K. Proton exchange membrane fuel cell stack design optimization using an improved Jaya algorithm. Energies 2019, 12, 3176. [Google Scholar] [CrossRef] [Green Version]
  25. Besseris, G.J. Using qualimetric engineering and extremal analysis to optimize a proton exchange membrane fuel cell stack. Appl. Energy 2014, 128, 15–26. [Google Scholar] [CrossRef]
  26. Chakraborty, U.K. A new model for constant fuel utilization and constant fuel flow in fuel cells. Appl. Sci. 2019, 9, 1066. [Google Scholar] [CrossRef] [Green Version]
  27. Yi, J.H.; Xing, L.N.; Wang, G.G.; Dong, J.; Vasilakos, A.V.; Alavi, A.H.; Wang, L. Behavior of crossover operators in NSGA-III for large-scale optimization problems. Inf. Sci. 2020, 509, 470–487. [Google Scholar] [CrossRef]
  28. Michalewicz, Z.; Fogel, D.B. How to Solve It: Modern Heuristics; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
  29. Chakraborty, U.K. Advances in Differential Evolution; Springer: Berlin, Germany, 2008; Volume 143. [Google Scholar]
  30. Bonyadi, M.R.; Michalewicz, Z. Particle swarm optimization for single objective continuous space problems: A review. Evol. Comput. 2017, 25, 1–54. [Google Scholar] [CrossRef]
  31. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
Table 1. Benchmark functions.

| Name | Definition | Dim. | Global Minimum | Bounds |
|---|---|---|---|---|
| Ackley | $f(x_1,\dots,x_n) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 30 | $f(x^*) = 0$, $x^* = (0,\dots,0)$ | $-10 \le x_i \le 10$ |
| Rosenbrock | $f(x_1,\dots,x_n) = \sum_{i=1}^{n-1}\left[100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right]$ | 30 | $f(x^*) = 0$, $x^* = (1,\dots,1)$ | $-10 \le x_i \le 10$ |
| Chung-Reynolds | $f(x_1,\dots,x_n) = \left(\sum_{i=1}^{n} x_i^2\right)^2$ | 30 | $f(x^*) = 0$, $x^* = (0,\dots,0)$ | $-10 \le x_i \le 10$ |
| Step | $f(x_1,\dots,x_n) = \sum_{i=1}^{n} \lfloor \lvert x_i \rvert \rfloor$ | 30 | $f(x^*) = 0$, $x_i^* \in (-1, 1)$ | $-100 \le x_i \le 100$ |
| Alpine-1 | $f(x_1,\dots,x_n) = \sum_{i=1}^{n} \lvert x_i \sin(x_i) + 0.1 x_i \rvert$ | 30 | $f(x^*) = 0$, $x^* = (0,\dots,0)$ | $-10 \le x_i \le 10$ |
| SumSquares | $f(x_1,\dots,x_n) = \sum_{i=1}^{n} i\, x_i^2$ | 30 | $f(x^*) = 0$, $x^* = (0,\dots,0)$ | $-10 \le x_i \le 10$ |
| Sphere | $f(x_1,\dots,x_n) = \sum_{i=1}^{n} x_i^2$ | 30 | $f(x^*) = 0$, $x^* = (0,\dots,0)$ | $-100 \le x_i \le 100$ |
| Bohachevsky-3 | $f(x_1,x_2) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1 + 4\pi x_2) + 0.3$ | 2 | $f(x^*) = 0$, $x^* = (0,0)$ | $-100 \le x_1, x_2 \le 100$ |
| Bohachevsky-2 | $f(x_1,x_2) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1)\cos(4\pi x_2) + 0.3$ | 2 | $f(x^*) = 0$, $x^* = (0,0)$ | $-100 \le x_1, x_2 \le 100$ |
| Bartels Conn | $f(x_1,x_2) = \lvert x_1^2 + x_2^2 + x_1 x_2 \rvert + \lvert \sin(x_1) \rvert + \lvert \cos(x_2) \rvert$ | 2 | $f(x^*) = 1$, $x^* = (0,0)$ | $-500 \le x_1, x_2 \le 500$ |
| Goldstein-Price | $f(x_1,x_2) = \left[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)\right]$ | 2 | $f(x^*) = 3$, $x^* = (0,-1)$ | $-2 \le x_1, x_2 \le 2$ |
| Matyas | $f(x_1,x_2) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$ | 2 | $f(x^*) = 0$, $x^* = (0,0)$ | $-10 \le x_1, x_2 \le 10$ |
Table 2. Results of SJaya on the 12-function test-suite (each row corresponds to 30 independent runs). Most numbers are shown with rounding at the fourth place after the decimal. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals; "—" = no successful runs.

| Function | PopSize | Gens | BoR Best | BoR Mean | BoR Std Dev | Success | FHE Best | FHE Mean | FHE Std Dev |
|---|---|---|---|---|---|---|---|---|---|
| Ackley | 100 | 3000 | 7.4347 × 10^−10 | 1.8090 × 10^−9 | 9.1920 × 10^−10 | 30 | 209,499 | 217,209.4333 | 4885.3830 |
| Ackley | 150 | 5000 | 1.0938 × 10^−12 | 2.7097 × 10^−12 | 7.9283 × 10^−13 | 30 | 407,146 | 426,516.7667 | 7522.8400 |
| Rosenbrock | 100 | 3000 | 0.0015 | 25.4532 | 28.8764 | 0 | — | — | — |
| Rosenbrock | 150 | 5000 | 0.0001 | 17.0565 | 26.9145 | 0 | — | — | — |
| Chu-Rey | 100 | 3000 | 5.0261 × 10^−37 | 1.1798 × 10^−35 | 3.0313 × 10^−35 | 30 | 77,035 | 84,420.6 | 3325.8495 |
| Chu-Rey | 150 | 5000 | 1.2691 × 10^−48 | 4.9288 × 10^−47 | 6.4529 × 10^−47 | 30 | 153,594 | 162,497.0667 | 3651.8492 |
| Step | 100 | 3000 | 0.0 | 0.0667 | 0.2494 | 28 | 39,004 | 43,895.0357 | 5319.6538 |
| Step | 150 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 68,099 | 73,154.9333 | 3639.5311 |
| Alpine-1 | 100 | 3000 | 0.0247 | 6.8245 | 6.4345 | 0 | — | — | — |
| Alpine-1 | 150 | 5000 | 0.0137 | 4.5976 | 5.7499 | 0 | — | — | — |
| SumSquares | 100 | 3000 | 6.1724 × 10^−18 | 3.8440 × 10^−17 | 4.0234 × 10^−17 | 30 | 138,646 | 144,029.4333 | 3771.9204 |
| SumSquares | 150 | 5000 | 1.3309 × 10^−23 | 7.2599 × 10^−23 | 5.5164 × 10^−23 | 30 | 266,653 | 280,539.4333 | 6344.5950 |
| Sphere | 100 | 3000 | 5.6616 × 10^−17 | 2.9297 × 10^−16 | 2.6115 × 10^−16 | 30 | 152,133 | 157,149.2333 | 2954.1983 |
| Sphere | 150 | 5000 | 1.3981 × 10^−22 | 6.1597 × 10^−22 | 4.1632 × 10^−22 | 30 | 298,554 | 306,880.0667 | 4927.0814 |
| Boha-3 | 15 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 882 | 1322.4667 | 308.4498 |
| Boha-3 | 20 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 1182 | 1838.7 | 333.6645 |
| Boha-2 | 15 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 718 | 1005.3333 | 268.2153 |
| Boha-2 | 20 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 890 | 1443.3667 | 222.3957 |
| Bartels | 15 | 5000 | 1.0 | 1.0 | 0.0 | 30 | 893 | 1061.0 | 90.1706 |
| Bartels | 20 | 5000 | 1.0 | 1.0 | 0.0 | 30 | 1128 | 1523.4333 | 124.4451 |
| Gold-Pr | 15 | 5000 | 3.0000 | 3.0000 | 1.0820 × 10^−5 | 6 | 28,320 | 55,587.5 | 14,917.3860 |
| Gold-Pr | 20 | 5000 | 3.0000 | 3.0000 | 1.8986 × 10^−5 | 5 | 58,442 | 82,977.0 | 14,243.2530 |
| Matyas | 15 | 5000 | 0.0 | 3.0482 × 10^−35 | 1.6415 × 10^−34 | 30 | 471 | 856.1 | 169.1497 |
| Matyas | 20 | 5000 | 0.0 | 5.6005 × 10^−123 | 3.0160 × 10^−122 | 30 | 692 | 1152.7333 | 264.9280 |
Table 3. Results of Jaya on the 12-function test-suite (each row corresponds to 30 independent runs). Most numbers are shown with rounding at the fourth place after the decimal. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals; "—" = no successful runs.

| Function | PopSize | Gens | BoR Best | BoR Mean | BoR Std Dev | Success | FHE Best | FHE Mean | FHE Std Dev |
|---|---|---|---|---|---|---|---|---|---|
| Ackley | 100 | 3000 | 4.2232 × 10^−6 | 7.6506 × 10^−6 | 1.9595 × 10^−6 | 0 | — | — | — |
| Ackley | 150 | 5000 | 3.9148 × 10^−8 | 8.2624 × 10^−8 | 2.5913 × 10^−8 | 30 | 620,422 | 651,813.4333 | 11,801.5819 |
| Rosenbrock | 100 | 3000 | 0.0310 | 26.8113 | 27.5200 | 0 | — | — | — |
| Rosenbrock | 150 | 5000 | 0.0521 | 37.0939 | 32.6063 | 0 | — | — | — |
| Chu-Rey | 100 | 3000 | 6.1251 × 10^−23 | 2.2695 × 10^−21 | 2.7432 × 10^−21 | 30 | 122,216 | 130,083.4667 | 3283.9261 |
| Chu-Rey | 150 | 5000 | 1.1798 × 10^−30 | 1.1626 × 10^−29 | 1.2429 × 10^−29 | 30 | 230,733 | 245,191.6 | 6139.8179 |
| Step | 100 | 3000 | 0.0 | 0.0 | 0.0 | 30 | 82,115 | 88,940.6 | 4467.5422 |
| Step | 150 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 154,374 | 166,652.7667 | 6105.3915 |
| Alpine-1 | 100 | 3000 | 0.0240 | 9.7502 | 5.6913 | 0 | — | — | — |
| Alpine-1 | 150 | 5000 | 0.0381 | 6.2610 | 5.6690 | 0 | — | — | — |
| SumSquares | 100 | 3000 | 1.5297 × 10^−10 | 4.5700 × 10^−10 | 2.2292 × 10^−10 | 30 | 213,427 | 222,775.1667 | 3910.2976 |
| SumSquares | 150 | 5000 | 4.9038 × 10^−15 | 3.7103 × 10^−14 | 1.7512 × 10^−14 | 30 | 406,918 | 421,195.1 | 7055.2581 |
| Sphere | 100 | 3000 | 1.2410 × 10^−9 | 4.6650 × 10^−9 | 2.4779 × 10^−9 | 30 | 231,137 | 245,599.1667 | 4874.0277 |
| Sphere | 150 | 5000 | 8.6939 × 10^−14 | 3.6152 × 10^−13 | 2.3875 × 10^−13 | 30 | 441,574 | 464,684.3667 | 10,701.7923 |
| Boha-3 | 15 | 5000 | 0.0 | 0.0301 | 0.1624 | 29 | 947 | 1368.5517 | 257.8614 |
| Boha-3 | 20 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 1461 | 1877.5333 | 275.5259 |
| Boha-2 | 15 | 5000 | 0.0 | 0.0347 | 0.1866 | 29 | 809 | 1102.7931 | 158.0520 |
| Boha-2 | 20 | 5000 | 0.0 | 0.0 | 0.0 | 30 | 1160 | 1590.8667 | 243.5768 |
| Bartels | 15 | 5000 | 1.0 | 1.0 | 0.0 | 30 | 995 | 1238.7667 | 91.8632 |
| Bartels | 20 | 5000 | 1.0 | 1.0 | 0.0 | 30 | 1375 | 1684.0667 | 152.4998 |
| Gold-Pr | 15 | 5000 | 3.0000 | 3.0000 | 1.4203 × 10^−5 | 5 | 36,981 | 57,683.4 | 12,921.8507 |
| Gold-Pr | 20 | 5000 | 3.0000 | 3.0000 | 1.7344 × 10^−5 | 3 | 37,550 | 52,030.0 | 15,017.5414 |
| Matyas | 15 | 5000 | 0.0 | 1.6173 × 10^−11 | 8.7092 × 10^−11 | 30 | 572 | 906.9667 | 261.1821 |
| Matyas | 20 | 5000 | 0.0 | 1.9566 × 10^−55 | 1.0537 × 10^−54 | 30 | 761 | 1286.0 | 264.6156 |
Table 4. Smith–Satterthwaite tests: Jaya vs. SJaya on the benchmark functions. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals; "—" = test not possible (no successful runs, or zero variance in both samples).

| Function | PopSize | Gens | BoR t-Statistic | BoR p-Value | FHE t-Statistic | FHE p-Value |
|---|---|---|---|---|---|---|
| Ackley | 100 | 3000 | 21.3800 | 1.3355 × 10^−19 | — | — |
| Ackley | 150 | 5000 | 17.4636 | 3.1280 × 10^−17 | 88.1720 | 3.7508 × 10^−56 |
| Rosenbrock | 100 | 3000 | 0.1865 | 0.4264 | — | — |
| Rosenbrock | 150 | 5000 | 2.5958 | 0.0060 | — | — |
| Chu-Rey | 100 | 3000 | 4.5314 | 4.6542 × 10^−5 | 53.5110 | 2.3156 × 10^−51 |
| Chu-Rey | 150 | 5000 | 5.1236 | 8.9954 × 10^−6 | 63.4031 | 1.1548 × 10^−47 |
| Step | 100 | 3000 | −1.4639 | 0.0770 | 34.7952 | 1.9600 × 10^−38 |
| Step | 150 | 5000 | — | — | 72.0480 | 2.6003 × 10^−50 |
| Alpine-1 | 100 | 3000 | 1.8655 | 0.0336 | — | — |
| Alpine-1 | 150 | 5000 | 1.1283 | 0.1319 | — | — |
| SumSquares | 100 | 3000 | 11.2285 | 2.2360 × 10^−12 | 79.3863 | 4.1571 × 10^−61 |
| SumSquares | 150 | 5000 | 11.6045 | 1.0180 × 10^−12 | 81.1938 | 3.3244 × 10^−61 |
| Sphere | 100 | 3000 | 10.3116 | 1.6374 × 10^−11 | 85.0016 | 4.2333 × 10^−54 |
| Sphere | 150 | 5000 | 8.2938 | 1.9158 × 10^−9 | 73.3631 | 3.1842 × 10^−45 |
| Boha-3 | 15 | 5000 | 1.0171 | 0.1588 | 0.6234 | 0.2678 |
| Boha-3 | 20 | 5000 | — | — | 0.4915 | 0.3125 |
| Boha-2 | 15 | 5000 | 1.0171 | 0.1588 | 1.7071 | 0.0472 |
| Boha-2 | 20 | 5000 | — | — | 2.4494 | 0.0087 |
| Bartels | 15 | 5000 | — | — | 7.5641 | 1.6549 × 10^−10 |
| Bartels | 20 | 5000 | — | — | 4.4699 | 1.9412 × 10^−5 |
| Gold-Pr | 15 | 5000 | 1.0676 | 0.1452 | 0.2496 | 0.4042 |
| Gold-Pr | 20 | 5000 | 0.7407 | 0.2309 | −2.8765 | 0.0217 |
| Matyas | 15 | 5000 | 1.0171 | 0.1588 | 0.8954 | 0.1875 |
| Matyas | 20 | 5000 | 1.0171 | 0.1588 | 1.9494 | 0.0280 |
Table 5. Wilcoxon signed rank tests: Jaya vs. SJaya on the 12-function benchmark suite.

| Metric | #Zero Diff. | n | W+ | W− | W | α | Critical W | Mean of W | Std. Dev. of W | z-Statistic | Left-Tail p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean of best-of-run fitnesses | 5 | 19 | 175 | 15 | 15 | 0.05 | 53 | 95 | 24.8495 | −3.2194 | 0.0006 |
| Mean of FirstHitEvals | 0 | 19 | 180 | 10 | 10 | 0.05 | 53 | 95 | 24.8495 | −3.4206 | 0.0003 |
Table 6. Bounds of the design variables [23].

| Variable | Lower Bound | Upper Bound |
|---|---|---|
| $N_s$ | 1 | 50 |
| $N_p$ | 1 | 50 |
| $A_{\text{cell}}$ (cm²) | 10 | 400 |
Table 7. PEMFC parameters and coefficients.

| Parameter | Value |
|---|---|
| $V_{\text{load,r}}$ | 12 V |
| $P_{\text{load,r}}$ | 200 W |
| $K_n$ | 0.5 |
| $K_{\text{diff}}$ | 10 |
| $K_a$ | 0.001 |
| $c$ | 200 |
| $r_a$ | 98.0 × 10^−6 kΩ·cm² |
| $i_{\text{limit,d}}$ | 129 mA/cm² |
| $i_{0,d}$ | 0.21 mA/cm² |
| $i_{n,d}$ | 1.26 mA/cm² |
| $A$ | 0.05 V |
| $B$ | 0.08 V |
| $E_{\text{Nernst}}$ | 1.04 V |
Table 8. Results of SJaya on the fuel cell problem (each row corresponds to 30 independent runs). Most numbers are shown with rounding at the fourth place after the decimal. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals.

| PopSize | Gens | BoR Best | BoR Mean | BoR Std Dev | Success | FHE Best | FHE Mean | FHE Std Dev |
|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 13.6162 | 13.6885 | 0.0759 | 3 | 127 | 172.0 | 31.9479 |
| 15 | 20 | 13.6161 | 13.6255 | 0.0190 | 21 | 128 | 254.8095 | 47.8187 |
| 20 | 20 | 13.6159 | 13.6376 | 0.0523 | 21 | 127 | 310.0 | 71.8338 |
| 20 | 25 | 13.6159 | 13.6302 | 0.0484 | 25 | 127 | 335.8 | 89.0222 |
| 25 | 40 | 13.6157 | 13.6164 | 0.0023 | 29 | 89 | 510.6897 | 166.4118 |
| 40 | 25 | 13.6158 | 13.6184 | 0.0044 | 25 | 291 | 654.24 | 213.3435 |
| 20 | 100 | 13.6157 | 13.6158 | 8.7813 × 10^−5 | 30 | 127 | 436.1333 | 304.5035 |
| 100 | 20 | 13.6159 | 13.6195 | 0.0029 | 20 | 463 | 1491.5 | 437.1448 |
| 30 | 100 | 13.6157 | 13.6158 | 0.0002 | 30 | 370 | 585.4667 | 230.4605 |
| 100 | 30 | 13.6158 | 13.6179 | 0.0022 | 25 | 463 | 1675.08 | 550.3110 |
| 40 | 100 | 13.6157 | 13.6160 | 0.0006 | 30 | 291 | 778.9333 | 385.4906 |
| 100 | 40 | 13.6157 | 13.6174 | 0.0022 | 26 | 463 | 1737.1154 | 622.4179 |
| 100 | 100 | 13.6157 | 13.6162 | 0.0010 | 29 | 463 | 2155.3103 | 1395.2800 |
Table 9. Results of Jaya on the fuel cell problem (each row corresponds to 30 independent runs). Most numbers are shown with rounding at the fourth place after the decimal. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals; "—" = no successful runs.

| PopSize | Gens | BoR Best | BoR Mean | BoR Std Dev | Success | FHE Best | FHE Mean | FHE Std Dev |
|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 13.6213 | 13.7026 | 0.0713 | 0 | — | — | — |
| 15 | 20 | 13.6160 | 13.6374 | 0.0342 | 13 | 124 | 241.8462 | 45.9378 |
| 20 | 20 | 13.6163 | 13.6367 | 0.0483 | 20 | 298 | 363.65 | 31.7636 |
| 20 | 25 | 13.6160 | 13.6312 | 0.0463 | 25 | 298 | 382.36 | 48.3867 |
| 25 | 40 | 13.6158 | 13.6298 | 0.0520 | 28 | 144 | 540.6071 | 144.4191 |
| 40 | 25 | 13.6158 | 13.6229 | 0.0236 | 26 | 250 | 739.5 | 170.0993 |
| 20 | 100 | 13.6157 | 13.6182 | 0.0126 | 29 | 298 | 454.6897 | 236.2226 |
| 100 | 20 | 13.6160 | 13.7947 | 0.9338 | 14 | 907 | 1595.2857 | 360.2738 |
| 30 | 100 | 13.6157 | 15.1444 | 8.2308 | 29 | 368 | 740.6207 | 546.2285 |
| 100 | 30 | 13.6159 | 13.7910 | 0.9344 | 25 | 907 | 1922.44 | 492.2595 |
| 40 | 100 | 13.6157 | 13.6202 | 0.0237 | 29 | 250 | 787.5517 | 222.2544 |
| 100 | 40 | 13.6157 | 13.7907 | 0.9345 | 26 | 907 | 1972.7308 | 544.2687 |
| 100 | 100 | 13.6157 | 13.7900 | 0.9346 | 27 | 907 | 2118.4074 | 914.8884 |
Table 10. Smith–Satterthwaite tests: Jaya vs. SJaya on the fuel cell problem. "BoR" = best-of-run cost; "FHE" = FirstHitEvals; "—" = test not possible (Jaya had no successful runs).

| PopSize | Gens | BoR t-Statistic | BoR p-Value | FHE t-Statistic | FHE p-Value |
|---|---|---|---|---|---|
| 20 | 10 | 0.7429 | 0.2303 | — | — |
| 15 | 20 | 1.6673 | 0.0512 | −0.7872 | 0.2191 |
| 20 | 20 | −0.0627 | 0.4751 | 3.1175 | 0.0021 |
| 20 | 25 | 0.0865 | 0.4657 | 2.2976 | 0.0137 |
| 25 | 40 | 1.4068 | 0.0850 | 0.7256 | 0.2356 |
| 40 | 25 | 1.0202 | 0.1578 | 1.5742 | 0.0612 |
| 20 | 100 | 1.0461 | 0.1521 | 0.2620 | 0.3971 |
| 100 | 20 | 1.0279 | 0.1562 | 0.7564 | 0.2276 |
| 30 | 100 | 1.0172 | 0.1587 | 1.4129 | 0.0830 |
| 100 | 30 | 1.0147 | 0.1593 | 1.6751 | 0.0503 |
| 40 | 100 | 0.9838 | 0.1667 | 0.1056 | 0.4582 |
| 100 | 40 | 1.0158 | 0.1591 | 1.4530 | 0.0763 |
| 100 | 100 | 1.0190 | 0.1583 | −0.1178 | 0.4534 |
Table 11. Wilcoxon signed rank tests: Jaya vs. SJaya on the fuel cell problem.

| Metric | #Zero Diff. | n | W+ | W− | W | α | Critical W | Mean of W | Std. Dev. of W | z-Statistic | Left-Tail p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean of best-of-run fitnesses | 0 | 13 | 90 | 1 | 1 | 0.05 | 21 | 45.5 | 14.3091 | −3.1099 | 0.0009 |
| Mean of FirstHitEvals | 0 | 12 | 71 | 7 | 7 | 0.05 | 17 | 39 | 12.7475 | −2.5103 | 0.0060 |
Table 12. Results of SJaya on the fuel cell problem (each row corresponds to 100 independent runs). Most numbers are shown with rounding at the fourth place after the decimal. "BoR" = best-of-run fitness; "FHE" = FirstHitEvals.

| PopSize | Gens | BoR Best | BoR Mean | BoR Std Dev | Success | FHE Best | FHE Mean | FHE Std Dev |
|---|---|---|---|---|---|---|---|---|
| 20 | 10 | 13.6162 | 13.6847 | 0.0677 | 6 | 127 | 184.0 | 29.2461 |
| 15 | 20 | 13.6159 | 14.6855 | 10.5138 | 64 | 116 | 249.6094 | 48.8686 |
| 20 | 20 | 13.6159 | 13.6276 | 0.0336 | 69 | 127 | 322.4203 | 62.6424 |
| 20 | 25 | 13.6158 | 13.6225 | 0.0278 | 86 | 127 | 348.9767 | 78.3951 |
| 25 | 40 | 13.6157 | 13.6176 | 0.0136 | 97 | 89 | 486.6598 | 149.3530 |
| 40 | 25 | 13.6157 | 13.6203 | 0.0143 | 81 | 267 | 693.6543 | 189.3033 |
| 20 | 100 | 13.6157 | 13.6159 | 0.0005 | 100 | 127 | 422.63 | 236.1126 |
| 100 | 20 | 13.6159 | 13.6203 | 0.0049 | 61 | 365 | 1422.6230 | 434.6534 |
| 30 | 100 | 13.6157 | 13.6159 | 0.0006 | 99 | 175 | 581.5859 | 200.6175 |
| 100 | 30 | 13.6157 | 13.6177 | 0.0022 | 86 | 365 | 1723.3721 | 616.9152 |
| 40 | 100 | 13.6157 | 13.6159 | 0.0004 | 100 | 267 | 829.13 | 383.5640 |
| 100 | 40 | 13.6157 | 13.6170 | 0.0019 | 92 | 365 | 1839.6522 | 743.6137 |
| 100 | 100 | 13.6157 | 13.6162 | 0.0010 | 99 | 365 | 2136.2727 | 1335.2097 |
Table 13. Wilcoxon signed rank tests: algorithm of [24] vs. SJaya on the fuel cell problem.

| Metric | #Zero Diff. | n | W+ | W− | W | α | Critical W | Mean of W | Std. Dev. of W | z-Statistic | Left-Tail p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mean of best-of-run fitnesses | 1 | 12 | 42 | 36 | 36 | 0.05 | 17 | 39 | 12.7476 | −0.2353 | 0.4070 |
| Mean of FirstHitEvals | 0 | 13 | 83 | 8 | 8 | 0.05 | 21 | 45.5 | 14.3091 | −2.6207 | 0.0044 |
Table 14. Comparative performance: mean and standard deviation (in parentheses) of 30 best-of-run fitnesses (results are rounded at the fourth decimal place, except where more decimals were reported in the source).

| Function | Dim. | Bounds | Glob. Min. | GA | PSO | DE | ABC | SJaya |
|---|---|---|---|---|---|---|---|---|
| Ackley | 30 | (−32, 32) | 0 | 14.6718 (0.178141) | 0.1646 (0.493867) | 0 (0) | 0 (0) | 0.2723 (0.5022) |
| Rosenbrock | 30 | (−30, 30) | 0 | 1.96 × 10^5 (3.85 × 10^4) | 15.0886 (24.1702) | 18.2039 (5.0362) | 0.0888 (0.0774) | 0.0009 (0.0046) |
| Step | 30 | (−100, 100) | 0 | 1.17 × 10^3 (76.5615) | 0 (0) | 0 (0) | 0 (0) | 0.2333 (0.4955) |
| SumSquares | 30 | (−10, 10) | 0 | 1.48 × 10^2 (12.4093) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
| Sphere | 30 | (−100, 100) | 0 | 1.11 × 10^3 (74.2145) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
| Boha-3 | 2 | (−100, 100) | 0 | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
| Boha-2 | 2 | (−100, 100) | 0 | 0.0683 (0.0782) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
| Goldstein-Price | 2 | (−2, 2) | 3 | 5.2506 (5.8701) | 3 (0) | 3 (0) | 3 (0) | 3 (0) |
| Matyas | 2 | (−10, 10) | 0 | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) |
