# Out of the Niche: Using Direct Search Methods to Find Multiple Global Optima

## Abstract


## 1. Introduction

## 2. Direct Search Methods

#### 2.1. Pattern Search Methods

#### Compass Search Method
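As a point of reference, here is a minimal sketch of the classical compass search iteration: poll the $2n$ coordinate directions and halve the step after a full failed poll. The function name, interface, and default values are illustrative, not taken from the paper.

```python
def compass_search(f, x0, step=1.0, min_step=1e-4, max_fes=10_000):
    """Minimize f by polling the 2n coordinate directions.

    Accepts the first improving poll; when all 2n polls fail,
    the step length is halved, until it falls below min_step
    or the evaluation budget max_fes is exhausted.
    """
    x = list(x0)
    fx = f(x)
    fes = 1
    while step >= min_step and fes < max_fes:
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                f_trial = f(trial)
                fes += 1
                if f_trial < fx:        # first improving move is accepted
                    x, fx = trial, f_trial
                    improved = True
                    break
            if improved:
                break
        if not improved:                # full failed poll: contract the step
            step *= 0.5
    return x, fx, fes
```

For example, minimizing $(x-1)^2 + (y+2)^2$ from the origin drives the iterate to $(1, -2)$ within a few dozen evaluations.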

#### 2.2. Simplex Search Methods

#### Nelder-Mead Method

#### 2.3. Methods with Adaptive Sets of Search Directions

#### 2.3.1. Rosenbrock’s Method

#### 2.3.2. Powell’s Search Method

## 3. Benchmark Functions for the Niching Competition

(ii) A peak height `ph`, defined as the value of the objective function at the global optima; (iii) A niche radius `r` that can sufficiently distinguish the two closest global optima.

- The peak ratio PR, defined as the average percentage of all known global optima found over the NR runs, $$\mathtt{PR}=\frac{\sum_{i=1}^{\mathtt{NR}}\mathtt{NPF}_{i}}{\mathtt{NKP}\cdot\mathtt{NR}},$$ where $\mathtt{NPF}_{i}$ denotes the number of global optima found on the $i$-th run.
- The success rate SR, computed as the percentage of successful runs, defined, in turn, as those runs where all known global optima are found, $$\mathtt{SR}=\frac{\mathtt{NSR}}{\mathtt{NR}},$$ where `NSR` denotes the number of successful runs.
- The average number of function evaluations AveFEs required to find all known global optima, $$\mathtt{AveFEs}=\frac{\sum_{i=1}^{\mathtt{NR}}\mathtt{FE}_{i}}{\mathtt{NR}},$$ where $\mathtt{FE}_{i}$ is the number of function evaluations spent on the $i$-th run.
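Under the definitions above, the three metrics reduce to a few lines of code. The following sketch uses our own variable names; it assumes a per-run record of the number of optima found and the evaluations spent.

```python
def niching_metrics(npf_per_run, fe_per_run, nkp):
    """PR, SR and AveFEs as defined for the niching competition.

    npf_per_run[i] -- number of known global optima found on run i
    fe_per_run[i]  -- function evaluations spent on run i
    nkp            -- number of known global optima of the test function
    """
    nr = len(npf_per_run)
    pr = sum(npf_per_run) / (nkp * nr)              # peak ratio
    nsr = sum(1 for npf in npf_per_run if npf == nkp)
    sr = nsr / nr                                   # success rate
    ave_fes = sum(fe_per_run) / nr                  # average evaluations
    return pr, sr, ave_fes
```

For instance, three runs finding 2, 4, and 4 of 4 optima give PR = 10/12 and SR = 2/3.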

#### 3.1. Nominating Direct Methods for the Competition

Algorithm 1: Multimodal optimization strategy using Direct Search Methods.

We define `mPR` as the proportion of the total number of global optima found at the current stage of the optimization process on a given run. Therefore, `mPR` can theoretically take the values $\{0, 1/\mathrm{NKP}, 2/\mathrm{NKP}, \dots, 1\}$ for a given test function with NKP known global optima, 0 meaning that no global optimum has yet been reached and 1 that all have already been found. Note that the largest values of `mPR` might never be attained for some functions, indicating that the corresponding method was not able to find all known global optima. Then, for a given (attained) value of `mPR`, we record the average number of function evaluations AveFEs (computed over the `NR` runs) that was needed to find the corresponding number of global optima $\nu(\mathtt{mPR})=\mathtt{mPR}\cdot\mathtt{NKP}$ for a given accuracy level $\epsilon$. By plotting all attained `mPR` vs. AveFEs pairs, we obtain the so-called performance curve. Each point in the curve corresponds to the average number of function evaluations that was needed to find the number of global optima $\nu(\mathtt{mPR})$ associated with the matching `mPR`.

Similarly, we define `mSR` as the proportion of runs that were actually able to find the number of global optima indicated by $\nu(\mathtt{mPR})$.
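The construction of the performance curve can be sketched as follows, charging MaxFEs to runs that never reach a given level, as the competition rules prescribe for unsuccessful runs. The input format and all names are our own illustrative assumptions.

```python
def performance_curve(fes_when_found, nkp, max_fes):
    """Return (mPR, AveFEs, mSR) triples for each attained level.

    fes_when_found[i] -- sorted list of the FE counts at which run i
                         found its 1st, 2nd, ... global optimum
    nkp               -- number of known global optima
    max_fes           -- evaluation budget of the test function
    """
    nr = len(fes_when_found)
    curve = []
    for k in range(1, nkp + 1):              # nu(mPR) = k optima
        hits = [run[k - 1] for run in fes_when_found if len(run) >= k]
        if not hits:                         # level never attained on any run
            break
        msr = len(hits) / nr
        # unsuccessful runs are charged the full budget max_fes
        ave = (sum(hits) + (nr - len(hits)) * max_fes) / nr
        curve.append((k / nkp, ave, msr))
    return curve
```

With two runs that found their optima after [10, 50] and [20] evaluations (NKP = 2, MaxFEs = 100), the curve is [(0.5, 15.0, 1.0), (1.0, 75.0, 0.5)].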

## 4. Implementation

#### 4.1. Latin Hypercube Sampling

Algorithm 2: Latin Hypercube Sampling Algorithm.
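A standard Latin hypercube sampler in the spirit of Algorithm 2 partitions each dimension into one stratum per sample and places exactly one point in each stratum, with an independently shuffled permutation per dimension. The following is our own sketch; the function name, interface, and use of Python's `random` module are illustrative assumptions.

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw n_samples points whose projections onto each dimension
    occupy n_samples equal-width strata exactly once.

    bounds -- list of (low, high) pairs, one per dimension
    """
    rng = rng or random.Random()
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d, (low, high) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                  # independent permutation per dim
        width = (high - low) / n_samples
        for i, s in enumerate(strata):
            # uniform draw within the assigned stratum
            samples[i][d] = low + (s + rng.random()) * width
    return samples
```

The stratification guarantees that every one-dimensional projection of the sample is spread across the whole search interval, which is what makes it attractive for seeding multiple local searches.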

#### 4.2. Termination Criteria

We could require that the distance moved in the terminating step be smaller than some tolerance `tol`, which cannot be greater than the niche radius `r` defined in Section 3. Alternatively, we could require that the increase in the function value in the terminating step be fractionally smaller than some tolerance `ftol`, which cannot be greater than the accuracy level $\epsilon$ defined in Section 3. In addition, we must observe a restriction on the maximum number of function evaluations MaxFEs. We will indicate in Section 5 the specific values used for each direct search method implemented.
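Taken together, the three stopping conditions (step tolerance `tol`, relative function-value tolerance `ftol`, and the MaxFEs budget) amount to a single check such as the following sketch. The function and its default values are illustrative, not the paper's implementation.

```python
import math

def should_stop(x_old, x_new, f_old, f_new, fes,
                tol=1e-4, ftol=1e-4, max_fes=400_000):
    """True when any of the three termination criteria fires."""
    if fes >= max_fes:                   # evaluation budget exhausted
        return True
    step = math.dist(x_old, x_new)       # distance moved in this step
    if step < tol:                       # tol must not exceed the niche radius r
        return True
    # relative improvement in function value below ftol (<= accuracy level)
    denom = max(abs(f_old), abs(f_new), 1e-12)
    return abs(f_old - f_new) / denom < ftol
```

A restart mechanism would typically call this after every accepted step and launch a new local search from a fresh sample point once it returns `True`.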

## 5. Numerical Results

The two baseline niching algorithms are `DE/nrand/1/bin` (abbr. DE1) and `Crowding DE/rand/1/bin` (abbr. DE2). We are aware that new and more powerful algorithms have been devised to take part in more recent niching competitions, but, as we have already discussed, we do not intend to compete directly against them. Rather, our aim is to offer new ways of exploiting the abundant data that is generated on each iteration when performing multimodal optimization. And, as far as we are aware, the two aforementioned algorithms provide the most complete information about their dynamic performance when searching for global optima. We will, therefore, compare our methods with them in terms of how fast and how robustly new global optima are found, paying special attention to the two new metrics introduced in Section 3.1.

#### 5.1. Pattern Search Methods: Compass Search

`(6, 20)`. We additionally set the minimum step length for every dimension at ${10}^{-4}$, which is, as required, smaller than any of the niche radii defined for the test functions [34].

`NR = 50` runs for the different values of the accuracy level. We compare these figures with those in Tables II and III in [34], which, for the sake of completeness, are reproduced in Table A1 and Table A2 in Appendix A. We have highlighted in dark gray those values for which our method outperforms both baseline algorithms DE1 and DE2, and in light gray those cases where our method ties with the best result between DE1 and DE2. White cells represent the remaining cases. We would like to emphasize how exacting this ranking is: if, for instance, CS beats DE1 but loses against DE2, we record that as a loss. Even with such a restrictive reckoning, CS attains remarkable results, winning (tying) in 41% (42%) of the cases for PR, and 8% (80%) for SR, respectively, with notable improvements in PR in most functions where CS wins.

`NR = 50` runs when the accuracy level $\epsilon$ is set to ${10}^{-4}$, following the same color code for wins and ties. Together with the mean value for FE, we also provide the standard deviation, where zero values mean that `SR = 0`, i.e., that CS was not able to find all the known global optima in any of the `NR = 50` runs. The corresponding information for DE1 and DE2 is displayed in Table A1 and Table A2. Again, the performance of CS is noteworthy in terms of convergence speed, winning in 7 of the 20 functions, with striking relative reductions in the number of FEs compared to DE1 and DE2, of up to 99% in ${F}_{1}$.

We now turn to the two dynamic metrics `mPR` and `mSR` defined in Section 3.1. The available information in [34] for DE1 and DE2 is the average PR attained and the average number of FEs spent, computed over the `NR = 50` runs and for an accuracy level $\epsilon={10}^{-4}$. We have plotted these values in Figure 1, using plus and cross symbols for DE1 and DE2, respectively. Note that we have used a logarithmic scale on the abscissa axis for better visualization. For ease of viewing, we present four different plots, grouping the test functions according to their MaxFEs budget. Specifically, Figure 1a (`MaxFEs = 50,000`) displays functions ${F}_{1}$ to ${F}_{5}$; Figure 1b (`MaxFEs = 200,000`), ${F}_{6}$–${F}_{11}$ (all 2D); Figure 1c (`MaxFEs = 400,000`), ${F}_{6}$, ${F}_{7}$, ${F}_{11}$ and ${F}_{12}$ (all 3D); and Figure 1d (`MaxFEs = 400,000`), ${F}_{11}$ (5D and 10D) and ${F}_{12}$ (5D, 10D and 20D). For CS, each panel plots `mPR` vs. AveFEs with solid dots for each test function and for the same accuracy level $\epsilon={10}^{-4}$.

The performance curves also encode the values of `mSR` defined in Section 3.1. To that aim, we have adopted the following graphical convention (particularly well observed in Figure 1d) for the lines joining every two consecutive points within a performance curve: (1) A solid line, when the number of global optima found in every run coincides with the value of $\nu(\mathtt{mPR})$ at the endpoint of the segment, i.e., when `mSR = 1` at both extremes of the segment; note, however, that, as long as $\mathtt{mPR}<1$ holds, different sets of global optima may have been found on different runs. (2) A dashed line, when only some runs found the number of global optima associated with $\nu(\mathtt{mPR})$ at the endpoint, that is, when `0 < mSR < 1`; in such cases, the average FE was computed by assigning the maximum value MaxFEs to the unsuccessful runs, as indicated in the competition rules. (3) A horizontal dotted line extending as far as MaxFEs, meaning that no more global optima were found on any run within the available budget, that is, `mSR = 0`. Note that, for the sake of clarity, we have clipped in each graphic the region between $\nu(\mathtt{mPR})=0$ and $\nu(\mathtt{mPR})=1$.

#### 5.2. Simplex Search Methods: Nelder-Mead

`1`, whereas the contraction, expansion, and reflection coefficients were set to `0.5`, `2`, and `1`, respectively. We used two termination criteria in the final step: (i) the vector distance moved is fractionally smaller in magnitude than ${10}^{-4}$; (ii) the increase in the function value is fractionally smaller than $\epsilon$.

$\epsilon={10}^{-4}$. Again, we can see that most plus and cross symbols lie below and to the right of the corresponding curve, illustrating the excellent results obtained by NM.

#### 5.3. Rosenbrock’s Method

#### 5.4. Comparison between Direct Search Methods

## 6. Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Appendix A. Performance of Baseline Niching Algorithms

| Accuracy level $\epsilon$ | ${F}_{1}$ (1D) PR | SR | ${F}_{2}$ (1D) PR | SR | ${F}_{3}$ (1D) PR | SR | ${F}_{4}$ (2D) PR | SR | ${F}_{5}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 22,886.0 | 2689.056 | 1552.0 | 386.106 | 1258.0 | 781.179 | 13,610.0 | 1399.453 | 3806.0 | 618.890 |
| ${10}^{-5}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{6}$ (2D) PR | SR | ${F}_{7}$ (2D) PR | SR | ${F}_{6}$ (3D) PR | SR | ${F}_{7}$ (3D) PR | SR | ${F}_{8}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.450 | 0.000 | 0.347 | 0.000 | 0.108 | 0.000 | 0.097 | 0.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 0.438 | 0.000 | 0.346 | 0.000 | 0.105 | 0.000 | 0.095 | 0.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 0.440 | 0.000 | 0.349 | 0.000 | 0.113 | 0.000 | 0.099 | 0.000 | 0.998 | 0.980 |
| ${10}^{-4}$ | 0.434 | 0.000 | 0.337 | 0.000 | 0.112 | 0.000 | 0.095 | 0.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 9858.0 | 833.015 |
| ${10}^{-5}$ | 0.000 | 0.000 | 0.333 | 0.000 | 0.113 | 0.000 | 0.094 | 0.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{9}$ (2D) PR | SR | ${F}_{10}$ (2D) PR | SR | ${F}_{11}$ (2D) PR | SR | ${F}_{11}$ (3D) PR | SR | ${F}_{12}$ (3D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.683 | 0.000 | 0.855 | 0.240 | 0.667 | 0.000 | 0.667 | 0.000 | 0.522 | 0.000 |
| ${10}^{-2}$ | 0.673 | 0.000 | 0.837 | 0.220 | 0.667 | 0.000 | 0.667 | 0.000 | 0.535 | 0.000 |
| ${10}^{-3}$ | 0.683 | 0.000 | 0.815 | 0.140 | 0.667 | 0.000 | 0.667 | 0.000 | 0.507 | 0.000 |
| ${10}^{-4}$ | 0.673 | 0.000 | 0.815 | 0.160 | 0.667 | 0.000 | 0.667 | 0.000 | 0.502 | 0.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 181,658.0 | 42,543.630 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.670 | 0.000 | 0.777 | 0.100 | 0.667 | 0.000 | 0.667 | 0.000 | 0.507 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{11}$ (5D) PR | SR | ${F}_{12}$ (5D) PR | SR | ${F}_{11}$ (10D) PR | SR | ${F}_{12}$ (10D) PR | SR | ${F}_{12}$ (20D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.677 | 0.000 | 0.345 | 0.000 | 0.403 | 0.000 | 0.227 | 0.000 | 0.130 | 0.000 |
| ${10}^{-2}$ | 0.663 | 0.000 | 0.325 | 0.000 | 0.343 | 0.000 | 0.167 | 0.000 | 0.127 | 0.000 |
| ${10}^{-3}$ | 0.663 | 0.000 | 0.295 | 0.000 | 0.323 | 0.000 | 0.152 | 0.000 | 0.130 | 0.000 |
| ${10}^{-4}$ | 0.663 | 0.000 | 0.290 | 0.000 | 0.270 | 0.000 | 0.125 | 0.000 | 0.125 | 0.000 |
| Convergence speed (mean, st. dev.) | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.657 | 0.000 | 0.287 | 0.000 | 0.250 | 0.000 | 0.127 | 0.000 | 0.123 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{1}$ (1D) PR | SR | ${F}_{2}$ (1D) PR | SR | ${F}_{3}$ (1D) PR | SR | ${F}_{4}$ (2D) PR | SR | ${F}_{5}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 0.710 | 0.500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 0.090 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 0.020 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.995 | 0.980 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 50,000.0 | 0.000 | 3386.0 | 1368.749 | 2576.0 | 2625.974 | 41,666.0 | 3772.598 | 12,980.0 | 2046.799 |
| ${10}^{-5}$ | 0.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.420 | 0.040 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{6}$ (2D) PR | SR | ${F}_{7}$ (2D) PR | SR | ${F}_{6}$ (3D) PR | SR | ${F}_{7}$ (3D) PR | SR | ${F}_{8}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 0.703 | 0.000 | 0.847 | 0.000 | 0.271 | 0.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 0.999 | 0.980 | 0.724 | 0.000 | 0.835 | 0.000 | 0.272 | 0.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 0.972 | 0.740 | 0.715 | 0.000 | 0.716 | 0.000 | 0.274 | 0.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 0.107 | 0.000 | 0.709 | 0.000 | 0.290 | 0.000 | 0.274 | 0.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 30,306.0 | 1984.677 |
| ${10}^{-5}$ | 0.000 | 0.000 | 0.716 | 0.000 | 0.038 | 0.000 | 0.270 | 0.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{9}$ (2D) PR | SR | ${F}_{10}$ (2D) PR | SR | ${F}_{11}$ (2D) PR | SR | ${F}_{11}$ (3D) PR | SR | ${F}_{12}$ (3D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.937 | 0.720 | 0.380 | 0.000 | 0.837 | 0.400 | 0.683 | 0.000 | 0.730 | 0.000 |
| ${10}^{-2}$ | 0.690 | 0.040 | 0.055 | 0.000 | 0.683 | 0.020 | 0.667 | 0.000 | 0.690 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.007 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.627 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.007 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.490 | 0.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.002 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.375 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{11}$ (5D) PR | SR | ${F}_{12}$ (5D) PR | SR | ${F}_{11}$ (10D) PR | SR | ${F}_{12}$ (10D) PR | SR | ${F}_{12}$ (20D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.697 | 0.000 | 0.567 | 0.080 | 0.517 | 0.080 | 0.000 | 0.000 | 0.502 | 0.380 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.425 | 0.000 | 0.250 | 0.000 | 0.000 | 0.000 | 0.013 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.280 | 0.000 | 0.200 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.115 | 0.000 | 0.173 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| Convergence speed (mean, st. dev.) | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.047 | 0.000 | 0.170 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |

## References

- Duarte, A.; Martí, R.; Glover, F.; Gortazar, F. Hybrid scatter tabu search for unconstrained global optimization. Ann. Oper. Res. **2011**, 183, 95–123.
- Pan, J.; McInnes, F.; Jack, M. Application of parallel genetic algorithm and property of multiple global optima to VQ codevector index assignment for noisy channels. Electron. Lett. **1996**, 32, 296–297.
- Pintér, J.D. Global Optimization: Scientific and Engineering Case Studies; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 85.
- Rao, R.; Waghmare, G. Solving composite test functions using teaching learning based optimization algorithm. In Proceedings of the International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA), Odisha, India, 22–23 December 2012; Springer: Berlin/Heidelberg, Germany, 2013; pp. 395–403.
- Preuss, M. Multimodal Optimization by Means of Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2015.
- Back, T. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms; Oxford University Press: Oxford, UK, 1996.
- Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989.
- Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Chapter Evolution Strategies and Other Methods; Springer: Berlin/Heidelberg, Germany, 1996; pp. 159–177.
- Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. **1997**, 11, 341–359.
- Kennedy, J. Particle swarm optimization. In Encyclopedia of Machine Learning; Springer: Berlin/Heidelberg, Germany, 2011; pp. 760–766.
- Kushner, H.J. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. J. Basic Eng. **1964**, 86, 97–106.
- Mockus, J. Application of Bayesian approach to numerical methods of global and stochastic optimization. J. Glob. Optim. **1994**, 4, 347–365.
- Jones, D.R.; Schonlau, M.; Welch, W.J. Efficient global optimization of expensive black-box functions. J. Glob. Optim. **1998**, 13, 455–492.
- Mahfoud, S.W. Niching Methods for Genetic Algorithms. Ph.D. Thesis, University of Illinois, Champaign County, IL, USA, 1995.
- Shir, O.M. Niching in Evolutionary Algorithms. In Handbook of Natural Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1035–1069.
- Shir, O. Niching in Derandomized Evolution Strategies and Its Applications in Quantum Control; Natural Computing Group, LIACS, Faculty of Science, Leiden University: Leiden, The Netherlands, 2008.
- Stoean, C.; Preuss, M.; Stoean, R.; Dumitrescu, D. Multimodal Optimization by Means of a Topological Species Conservation Algorithm. IEEE Trans. Evol. Comput. **2010**, 14, 842–864.
- Rönkkönen, J. Continuous Multimodal Global Optimization with Differential Evolution-Based Methods. Ph.D. Thesis, Lappeenranta University of Technology, Lappeenranta, Finland, 2009.
- Barrera, J.; Coello, C.A.C. A review of particle swarm optimization methods used for multimodal optimization. In Innovations in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2009; pp. 9–37.
- Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts—Towards Memetic Algorithms. Caltech Concurr. Comput. Program C3P Rep. **1989**, 826, 68.
- Neri, F.; Cotta, C. Memetic algorithms and memetic computing optimization: A literature review. Swarm Evol. Comput. **2012**, 2, 1–14.
- Locatelli, M.; Schoen, F. Local search based heuristics for global optimization: Atomic clusters and beyond. Eur. J. Oper. Res. **2012**, 222, 1–9.
- Rios, L.M.; Sahinidis, N.V. Derivative-free optimization: A review of algorithms and comparison of software implementations. J. Glob. Optim. **2013**, 56, 1247–1293.
- Hooke, R.; Jeeves, T.A. Direct Search Solution of Numerical and Statistical Problems. J. ACM **1961**, 8, 212–229.
- Lewis, R.M.; Torczon, V.; Trosset, M.W. Direct search methods: Then and now. J. Comput. Appl. Math. **2000**, 124, 191–207.
- Torczon, V. On the convergence of pattern search algorithms. SIAM J. Optim. **1997**, 7, 1–25.
- Dolan, E.D. Pattern Search Behavior in Nonlinear Optimization. Ph.D. Thesis, College of William and Mary, Williamsburg, VA, USA, 1999.
- Torczon, V.J. Multidirectional Search: A Direct Search Algorithm for Parallel Machines. Ph.D. Thesis, Rice University, Houston, TX, USA, 1989.
- Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. **1965**, 7, 308–313.
- Chang, K.H. Stochastic Nelder-Mead simplex method—A new globally convergent direct search method for simulation optimization. Eur. J. Oper. Res. **2012**, 220, 684–694.
- Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes: The Art of Scientific Computing, 3rd ed.; Cambridge University Press: Cambridge, UK, 2007.
- Rosenbrock, H. An automatic method for finding the greatest or least value of a function. Comput. J. **1960**, 3, 175–184.
- Powell, M.J. An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput. J. **1964**, 7, 155–162.
- Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; Technical Report; RMIT University, Evolutionary Computation and Machine Learning Group: Melbourne, Australia, 2013.
- McKay, M.D.; Beckman, R.J.; Conover, W.J. Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics **1979**, 21, 239–245.
- Stein, M. Large sample properties of simulations using Latin hypercube sampling. Technometrics **1987**, 29, 143–151.
- Zhao, Z.; Yang, J.; Hu, Z.; Che, H. A differential evolution algorithm with self-adaptive strategy and control parameters based on symmetric Latin hypercube design for unconstrained optimization problems. Eur. J. Oper. Res. **2016**, 250, 30–45.
- Hansen, P.; Mladenović, N.; Pérez, J.A.M. Variable neighbourhood search: Methods and applications. Ann. Oper. Res. **2010**, 175, 367–407.
- Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. **1997**, 24, 1097–1100.

**Figure 1.** Comparison of performance curves between CS, DE1 (plus) and DE2 (cross). (**a**) ${F}_{1}$–${F}_{5}$. (**b**) ${F}_{6}$–${F}_{11}$ (all 2D). (**c**) ${F}_{6}$, ${F}_{7}$, ${F}_{11}$, ${F}_{12}$ (all 3D). (**d**) ${F}_{11}$ (5D, 10D) and ${F}_{12}$ (5D, 10D, 20D).

**Figure 2.** Comparison of performance curves between NM, DE1 (plus) and DE2 (cross). (**a**) ${F}_{1}$–${F}_{5}$. (**b**) ${F}_{6}$–${F}_{11}$ (all 2D). (**c**) ${F}_{6}$, ${F}_{7}$, ${F}_{11}$, ${F}_{12}$ (all 3D). (**d**) ${F}_{11}$ (5D, 10D) and ${F}_{12}$ (5D, 10D, 20D).

**Figure 3.** Comparison of performance curves between RB, DE1 (plus) and DE2 (cross). (**a**) ${F}_{1}$–${F}_{5}$. (**b**) ${F}_{6}$–${F}_{11}$ (all 2D). (**c**) ${F}_{6}$, ${F}_{7}$, ${F}_{11}$, ${F}_{12}$ (all 3D). (**d**) ${F}_{11}$ (5D, 10D) and ${F}_{12}$ (5D, 10D, 20D).

**Figure 4.** Performance curves for CS, NM, RB, DE1 (plus) and DE2 (cross): ${F}_{6}$, ${F}_{7}$, ${F}_{11}$, ${F}_{12}$ (all 3D).

| Accuracy level $\epsilon$ | ${F}_{1}$ (1D) PR | SR | ${F}_{2}$ (1D) PR | SR | ${F}_{3}$ (1D) PR | SR | ${F}_{4}$ (2D) PR | SR | ${F}_{5}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 199.0 | 117.312 | 465.0 | 284.163 | 293.0 | 307.422 | 981.0 | 510.217 | 273.0 | 124.014 |
| ${10}^{-5}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{6}$ (2D) PR | SR | ${F}_{7}$ (2D) PR | SR | ${F}_{6}$ (3D) PR | SR | ${F}_{7}$ (3D) PR | SR | ${F}_{8}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.989 | 0.480 | 0.458 | 0.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 0.758 | 0.000 | 0.987 | 0.360 | 0.403 | 0.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 0.759 | 0.000 | 0.988 | 0.320 | 0.411 | 0.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 0.756 | 0.000 | 0.876 | 0.000 | 0.408 | 0.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 17,688.0 | 7002.265 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 3688.0 | 1595.058 |
| ${10}^{-5}$ | 1.000 | 1.000 | 0.749 | 0.000 | 0.050 | 0.000 | 0.412 | 0.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{9}$ (2D) PR | SR | ${F}_{10}$ (2D) PR | SR | ${F}_{11}$ (2D) PR | SR | ${F}_{11}$ (3D) PR | SR | ${F}_{12}$ (3D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.673 | 0.000 | 0.765 | 0.000 | 0.667 | 0.000 | 0.670 | 0.000 | 0.750 | 0.000 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.750 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.750 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.747 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.750 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.750 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.750 | 0.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.592 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.543 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{11}$ (5D) PR | SR | ${F}_{12}$ (5D) PR | SR | ${F}_{11}$ (10D) PR | SR | ${F}_{12}$ (10D) PR | SR | ${F}_{12}$ (20D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.670 | 0.000 | 0.645 | 0.000 | 0.623 | 0.000 | 0.495 | 0.000 | 0.207 | 0.000 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.670 | 0.000 | 0.617 | 0.000 | 0.477 | 0.000 | 0.187 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.663 | 0.000 | 0.643 | 0.000 | 0.487 | 0.000 | 0.195 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.665 | 0.000 | 0.633 | 0.000 | 0.475 | 0.000 | 0.192 | 0.000 |
| Convergence speed (mean, st. dev.) | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.417 | 0.000 | 0.627 | 0.000 | 0.270 | 0.000 | 0.062 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{1}$ (1D) PR | SR | ${F}_{2}$ (1D) PR | SR | ${F}_{3}$ (1D) PR | SR | ${F}_{4}$ (2D) PR | SR | ${F}_{5}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 0.020 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 0.020 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 0.050 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 50,000.0 | 0.000 | 446.0 | 189.987 | 370.0 | 348.013 | 565.0 | 249.638 | 288.0 | 144.609 |
| ${10}^{-5}$ | 0.004 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{6}$ (2D) PR | SR | ${F}_{7}$ (2D) PR | SR | ${F}_{6}$ (3D) PR | SR | ${F}_{7}$ (3D) PR | SR | ${F}_{8}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.998 | 0.960 | 0.886 | 0.000 | 0.385 | 0.000 | 0.676 | 0.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 0.991 | 0.980 | 0.882 | 0.000 | 0.397 | 0.000 | 0.667 | 0.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 0.884 | 0.000 | 0.412 | 0.000 | 0.679 | 0.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 0.998 | 0.960 | 0.887 | 0.000 | 0.389 | 0.000 | 0.674 | 0.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 44,975.0 | 34,213.793 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 4935.0 | 2589.305 |
| ${10}^{-5}$ | 0.997 | 0.980 | 0.870 | 0.000 | 0.192 | 0.000 | 0.680 | 0.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{9}$ (2D) PR | SR | ${F}_{10}$ (2D) PR | SR | ${F}_{11}$ (2D) PR | SR | ${F}_{11}$ (3D) PR | SR | ${F}_{12}$ (3D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 0.997 | 0.980 | 0.997 | 0.980 | 0.680 | 0.000 | 0.733 | 0.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 0.000 | 1.000 | 0.983 | 0.900 | 0.667 | 0.000 | 0.733 | 0.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.987 | 0.920 | 0.667 | 0.000 | 0.707 | 0.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 1.000 | 1.000 | 0.993 | 0.960 | 0.667 | 0.000 | 0.733 | 0.000 |
| Convergence speed (mean, st. dev.) | 6877.0 | 5065.769 | 65,671.0 | 46,501.38 | 65,151.0 | 54,227.57 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 1.000 | 1.000 | 0.997 | 0.980 | 0.977 | 0.860 | 0.667 | 0.000 | 0.655 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{11}$ (5D) PR | SR | ${F}_{12}$ (5D) PR | SR | ${F}_{11}$ (10D) PR | SR | ${F}_{12}$ (10D) PR | SR | ${F}_{12}$ (20D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.667 | 0.000 | 0.502 | 0.000 | 0.353 | 0.000 | 0.045 | 0.000 | 0.000 | 0.000 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.462 | 0.000 | 0.373 | 0.000 | 0.052 | 0.000 | 0.000 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.490 | 0.000 | 0.370 | 0.000 | 0.052 | 0.000 | 0.000 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.465 | 0.000 | 0.193 | 0.000 | 0.010 | 0.000 | 0.000 | 0.000 |
| Convergence speed (mean, st. dev.) | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.317 | 0.000 | 0.130 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{1}$ (1D) PR | SR | ${F}_{2}$ (1D) PR | SR | ${F}_{3}$ (1D) PR | SR | ${F}_{4}$ (2D) PR | SR | ${F}_{5}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 163.0 | 126.861 | 311.0 | 138.230 | 170.0 | 156.255 | 966.0 | 541.892 | 389.0 | 246.292 |
| ${10}^{-5}$ | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{6}$ (2D) PR | SR | ${F}_{7}$ (2D) PR | SR | ${F}_{6}$ (3D) PR | SR | ${F}_{7}$ (3D) PR | SR | ${F}_{8}$ (2D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 1.000 | 1.000 | 0.889 | 0.020 | 0.896 | 0.000 | 0.515 | 0.000 | 1.000 | 1.000 |
| ${10}^{-2}$ | 1.000 | 1.000 | 0.856 | 0.000 | 0.896 | 0.000 | 0.505 | 0.000 | 1.000 | 1.000 |
| ${10}^{-3}$ | 1.000 | 1.000 | 0.864 | 0.000 | 0.891 | 0.000 | 0.503 | 0.000 | 1.000 | 1.000 |
| ${10}^{-4}$ | 1.000 | 1.000 | 0.863 | 0.000 | 0.832 | 0.000 | 0.492 | 0.000 | 1.000 | 1.000 |
| Convergence speed (mean, st. dev.) | 37,156.0 | 15,930.223 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 5223.0 | 2967.538 |
| ${10}^{-5}$ | 1.000 | 1.000 | 0.859 | 0.000 | 0.097 | 0.000 | 0.482 | 0.000 | 1.000 | 1.000 |

| Accuracy level $\epsilon$ | ${F}_{9}$ (2D) PR | SR | ${F}_{10}$ (2D) PR | SR | ${F}_{11}$ (2D) PR | SR | ${F}_{11}$ (3D) PR | SR | ${F}_{12}$ (3D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.680 | 0.000 | 0.768 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.782 | 0.020 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.750 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.750 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.750 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.750 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.750 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.748 | 0.000 |
| Convergence speed (mean, st. dev.) | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 200,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.680 | 0.000 | 0.667 | 0.000 | 0.667 | 0.000 | 0.432 | 0.000 |

| Accuracy level $\epsilon$ | ${F}_{11}$ (5D) PR | SR | ${F}_{12}$ (5D) PR | SR | ${F}_{11}$ (10D) PR | SR | ${F}_{12}$ (10D) PR | SR | ${F}_{12}$ (20D) PR | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| ${10}^{-1}$ | 0.907 | 0.540 | 0.740 | 0.120 | 0.673 | 0.040 | 0.182 | 0.000 | 0.002 | 0.000 |
| ${10}^{-2}$ | 0.667 | 0.000 | 0.592 | 0.000 | 0.593 | 0.000 | 0.182 | 0.000 | 0.000 | 0.000 |
| ${10}^{-3}$ | 0.667 | 0.000 | 0.585 | 0.000 | 0.570 | 0.000 | 0.170 | 0.000 | 0.000 | 0.000 |
| ${10}^{-4}$ | 0.667 | 0.000 | 0.550 | 0.000 | 0.567 | 0.000 | 0.140 | 0.000 | 0.000 | 0.000 |
| Convergence speed (mean, st. dev.) | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 | 400,000.0 | 0.000 |
| ${10}^{-5}$ | 0.667 | 0.000 | 0.432 | 0.000 | 0.587 | 0.000 | 0.050 | 0.000 | 0.000 | 0.000 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Cano, J.; Alfaro, C.; Gomez, J.; Duarte, A.
Out of the Niche: Using Direct Search Methods to Find Multiple Global Optima. *Mathematics* **2022**, *10*, 1494.
https://doi.org/10.3390/math10091494

**AMA Style**

Cano J, Alfaro C, Gomez J, Duarte A.
Out of the Niche: Using Direct Search Methods to Find Multiple Global Optima. *Mathematics*. 2022; 10(9):1494.
https://doi.org/10.3390/math10091494

**Chicago/Turabian Style**

Cano, Javier, Cesar Alfaro, Javier Gomez, and Abraham Duarte.
2022. "Out of the Niche: Using Direct Search Methods to Find Multiple Global Optima" *Mathematics* 10, no. 9: 1494.
https://doi.org/10.3390/math10091494