# Pure Random Orthogonal Search (PROS): A Plain and Elegant Parameterless Algorithm for Global Optimization


## Abstract


## Featured Application

**The proposed optimization method is a general tool which can have numerous applications in various mathematical, engineering and technological fields. Due to its simplicity and straightforward implementation, it can easily be applied by non-experts to obtain optimized results.**


## 1. Introduction

#### 1.1. No Free Lunch Theorem in Optimization

#### 1.2. Occam’s Razor and Simplicity in Optimization

- **Test problems bias**. IOA performs better than OA on the test examples published by B, but not on other problems. In other words, there is some bias, as the problems chosen and presented favor IOA over OA.
- **Parameter bias**. The extra parameters introduced in IOA work well for some problems, such as the ones chosen and presented by B, but not for others. Of course, extra knowledge and experience are needed in order to find the right values of the extra parameters for every single problem.
- **Author bias**. A combination of the two points above. Author B knows IOA better than anyone else and also has experience with optimization problems. As a result, B can fine-tune the algorithm in such a way that he/she achieves optimal performance, but the same is not true for other users of the algorithm facing different optimization problems. Thus, the “improved” algorithm works well only if it can be fine-tuned by an expert, such as B.

## 2. Materials and Methods

#### 2.1. Formulation of the Optimization Problem

**x** is a vector of decision variables, **x** = {x_{1}, …, x_{D}} (a candidate solution in the search space), f: $\mathbb{R}$^{D} → $\mathbb{R}$ is a scalar-valued function called the objective (or cost) function to be minimized, while D is the dimensionality of the optimization problem. In the present work, the search space Ω ⊂ $\mathbb{R}$^{D} is defined by the lower bound and upper bound vectors as Ω = [**lb**, **ub**], i.e., lb_{i} ≤ x_{i} ≤ ub_{i} for every i ∈ {1, …, D}. Truly unconstrained optimization corresponds to the case where Ω = $\mathbb{R}$^{D}.
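As an illustration of this formulation (a minimal sketch, not code from the paper; the function name follows the F01 code name used later in the study), an objective function and a box-bounded search space can be written as:

```python
import numpy as np

# Objective function f: R^D -> R (the sphere function, F01 in this study)
def sphere_func(x):
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

# Search space Omega = [lb, ub] for D = 3
D = 3
lb = np.full(D, -100.0)  # lower bound vector lb
ub = np.full(D, 100.0)   # upper bound vector ub

# A candidate solution x is feasible if lb_i <= x_i <= ub_i for every i
x = np.array([10.0, -20.0, 5.0])
assert np.all((lb <= x) & (x <= ub))
print(sphere_func(x))  # 10^2 + 20^2 + 5^2 = 525
```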

#### 2.2. Pure Random Search

- Initialize **x** with a random position in the search space (feasible region Ω), i.e., lb_{i} ≤ x_{i} ≤ ub_{i} for every i ∈ {1, …, D}.
- Until a termination criterion is met (e.g., number of iterations performed, or adequate fitness reached), repeat the following:
  - Sample a new position **y** randomly in the search space, using a uniform distribution.
  - If f(**y**) < f(**x**), then move to the new position by setting **x** = **y**.
- At the end, **x** holds the best-found position and f(**x**) is the best objective function value found.
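The steps above can be sketched in a few lines of Python (an illustrative implementation, not the authors' code; the `rng` seed parameter is an addition for reproducibility):

```python
import numpy as np

def prs(f, lb, ub, max_iter=10_000, rng=None):
    """Pure random search: keep the best of max_iter uniform samples."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub)      # random initial position in Omega
    fx = f(x)
    for _ in range(max_iter):
        y = rng.uniform(lb, ub)  # sample a new position uniformly in Omega
        fy = f(y)
        if fy < fx:              # move only on strict improvement
            x, fx = y, fy
    return x, fx                 # best-found position and objective value
```

For example, `prs(lambda v: float((v ** 2).sum()), [-100, -100], [100, 100])` minimizes the 2D sphere function over [−100, 100]^{2}.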

#### 2.3. Pure Random Orthogonal Search

- Initialize **x** with a random position in the search space (feasible region Ω), i.e., lb_{i} ≤ x_{i} ≤ ub_{i} for every i ∈ {1, …, D}.
- Until a termination criterion is met, repeat the following:
  - Set **y** = **x**.
  - Sample a random integer j ∈ {1, …, D}.
  - Sample a random number r in the range [lb_{j}, ub_{j}], using a uniform distribution.
  - Set y_{j} = r.
  - If f(**y**) < f(**x**), then move to the new position by setting **x** = **y**.
- At the end, **x** holds the best-found position and f(**x**) is the best objective function value found.
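The PROS loop above differs from PRS only in that a single randomly chosen coordinate is resampled per iteration. A minimal sketch (again an illustration, not the authors' code; the `rng` seed parameter is an addition):

```python
import numpy as np

def pros(f, lb, ub, max_iter=10_000, rng=None):
    """Pure random orthogonal search: resample one random coordinate per step."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    x = rng.uniform(lb, ub)               # random initial position in Omega
    fx = f(x)
    for _ in range(max_iter):
        y = x.copy()                      # y = x
        j = rng.integers(D)               # random coordinate j in {0, ..., D-1}
        y[j] = rng.uniform(lb[j], ub[j])  # resample only y_j in [lb_j, ub_j]
        fy = f(y)
        if fy < fx:                       # accept only strict improvement
            x, fx = y, fy
    return x, fx                          # best-found position and objective value
```

Note that the method is parameterless: apart from the termination criterion, only the objective function and the bounds are required.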

The F01-Sphere function (sphere_func) in [−100, 100]^{2} and the F10-Rastrigin’s function (rastrigin_func) in [−5.12, 5.12]^{2} have been used as example cases (Figure 2). The functions are described in detail in Appendix B, together with the other functions used in the study.

Table 1 presents the first five steps of the PROS algorithm for the sphere function in [−100, 100]^{5}. For each candidate solution **y**_{i} at step i, the bold cell in brackets ([ ]) represents the j-th element that is different from the corresponding element of the current solution **x**_{i}.

#### 2.4. Methodology of the Study and Objective Functions Used

In the case D = 5, each design vector has five elements, **x** = [x_{1}, x_{2}, x_{3}, x_{4}, x_{5}], and the optimization problem is relatively simple. This is not the case with higher values of D, where the problem gets significantly harder. For example, in the case D = 50, each design vector has 50 elements to adjust (x_{1} to x_{50}). The five algorithms used are the following:

- Genetic algorithm (GA);
- Particle swarm optimization (PSO);
- Differential evolution (DE);
- Pure random search (PRS);
- Pure random orthogonal search (PROS).

#### 2.5. Objective Functions

The minimum value f(**x**^{*}), the location of the optimum **x**^{*} in the design space and some other properties are presented in Table 3. More details on the functions can be found in Appendix B, where the functions are plotted for the two-dimensional case (D = 2). The aim of every optimization algorithm used in the study is to find the design vector **x**^{*} that minimizes the value of f, for each function f_{i} (i = 1, 2, …, 12).

## 3. Results

#### 3.1. Objective Function Values

#### 3.2. Convergence History for Each Algorithm

#### 3.3. Computational Efficiency

#### 3.4. Conditions under Which PROS Can Be Trapped in a Local Optimum

Consider the objective function f(**x**) = x_{1} + x_{2} − 3·x_{1}·x_{2} + 1 in [0, 1]^{2} (Figure 10), with the current solution at x_{1} = 0, x_{2} = 0. The values of the objective function at the four corners of the [0, 1]^{2} square are the following:

- f(0, 0) = 1 (current solution)
- f(1, 0) = f(0, 1) = 2
- f(1, 1) = 0 (global minimum)

It holds that f(x_{1}, 0) > f(0, 0) for every x_{1} ∈ (0, 1], while f(0, x_{2}) > f(0, 0) for every x_{2} ∈ (0, 1]. The current point (0, 0) is a local minimum and, in addition, any change in only one of the x_{1} or x_{2} variables cannot lead to a better value of the objective function f. There is no improvement in f in any of the orthogonal x_{1} or x_{2} directions considered on their own. To find a better value of the objective function, the optimizer would need to move diagonally, but such a movement is not allowed in PROS.
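The argument above can be checked numerically; the following is a minimal sketch for the function of Figure 10:

```python
# The objective function of Section 3.4 / Figure 10 on [0, 1]^2.
def f(x1, x2):
    return x1 + x2 - 3 * x1 * x2 + 1

# Corner values of the [0, 1]^2 square
assert f(0, 0) == 1              # current solution
assert f(1, 0) == f(0, 1) == 2
assert f(1, 1) == 0              # global minimum (reachable only diagonally)

# PROS changes a single coordinate per step, but from (0, 0):
# f(t, 0) = f(0, t) = t + 1 >= 1 = f(0, 0) for all t in [0, 1],
# so no orthogonal move is ever an improvement and (0, 0) is never left.
assert all(f(t / 100, 0) >= f(0, 0) for t in range(101))
assert all(f(0, t / 100) >= f(0, 0) for t in range(101))
```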

## 4. Discussion

#### 4.1. Advantages and Drawbacks of PROS

#### 4.2. PROS Applications

## 5. Conclusions

#### Future Directions

## Author Contributions

## Funding

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Appendix A. Nomenclature

Acronym | Meaning |
---|---|
ACO | Ant colony optimization |
DE | Differential evolution |
GA | Genetic algorithm |
HS | Harmony search |
IOA | Improved optimization algorithm |
NFL | No free lunch |
OA | Optimization algorithm |
PROS | Pure random orthogonal search |
PRS | Pure random search |
PSO | Particle swarm optimization |

Symbol | Meaning |
---|---|
f(x) | Objective function |
D | Dimensionality of the optimization problem (no. of dimensions) |
Ω | Search space |
$\mathbb{R}$^{D} | A real coordinate space of dimension D |
x or y | Vector of decision variables |
x_{i} or y_{i} | i-th decision variable, i-th element of the decision variables’ vector |
lb/ub | Lower/upper bounds vector |
lb_{i}/ub_{i} | Lower/upper bound for the decision variable x_{i} |
x^{*} | Decision variables’ vector corresponding to the optimum solution |
f(x^{*}) | Optimum value of the function f |
r | Random number |
NP | Population size |
MaxIter | Maximum number of iterations (or generations) |
OFE | Maximum number of function evaluations |
F | Differential weight of the DE algorithm |
CR | Crossover probability of the DE algorithm |

## Appendix B. Objective Functions Used in the Study

## References

- Solorzano, G.; Plevris, V. Optimum Design of RC Footings with Genetic Algorithms According to ACI 318-19. Buildings **2020**, 10, 110.
- Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Longman Publishing Co.: Boston, MA, USA, 1989.
- Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
- Plevris, V.; Karlaftis, M.G.; Lagaros, N.D. A Swarm Intelligence Approach for Emergency Infrastructure Inspection Scheduling. In Sustainable and Resilient Critical Infrastructure Systems: Simulation, Modeling, and Intelligent Engineering; Gopalakrishnan, K., Peeta, S., Eds.; Springer: Heidelberg, Germany, 2010; pp. 201–230.
- Plevris, V.; Papadrakakis, M. A Hybrid Particle Swarm—Gradient Algorithm for Global Structural Optimization. Comput. Aided Civ. Infrastruct. Eng. **2011**, 26, 48–68.
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995; pp. 1942–1948.
- Georgioudakis, M.; Plevris, V. A Combined Modal Correlation Criterion for Structural Damage Identification with Noisy Modal Data. Adv. Civ. Eng. **2018**, 2018, 318067.
- Georgioudakis, M.; Plevris, V. On the Performance of Differential Evolution Variants in Constrained Structural Optimization. Procedia Manuf. **2020**, 44, 371–378.
- Georgioudakis, M.; Plevris, V. A Comparative Study of Differential Evolution Variants in Constrained Structural Optimization. Front. Built Environ. **2020**, 6, 1–14.
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. **1997**, 11, 341–359.
- Dorigo, M.; Maniezzo, V.; Colorni, A. The Ant System: Optimization by a Colony of Cooperating Agents. IEEE Trans. Syst. Man Cybern. B **1996**, 26, 29–41.
- Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. SIMULATION **2001**, 76, 60–68.
- Gao, X.Z.; Govindasamy, V.; Xu, H.; Wang, X.; Zenger, K. Harmony Search Method: Theory and Applications. Comput. Intell. Neurosci. **2015**, 2015, 258491.
- Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science **1983**, 220, 671–680.
- Glover, F.; Taillard, E. A User’s Guide to Tabu Search. Ann. Oper. Res. **1993**, 41, 1–28.
- Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. **1997**, 1, 67–82.
- Culberson, J.C. On the Futility of Blind Search: An Algorithmic View of “No Free Lunch”. Evol. Comput. **1998**, 6, 109–127.
- Ho, Y.C.; Pepyne, D.L. Simple Explanation of the No-Free-Lunch Theorem and Its Implications. J. Optim. Theory Appl. **2002**, 115, 549–570.
- Koziel, S.; Michalewicz, Z. Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization. Evol. Comput. **1999**, 7, 19–44.
- Plevris, V. Innovative Computational Techniques for the Optimum Structural Design Considering Uncertainties; National Technical University of Athens: Athens, Greece, 2009; p. 312.
- Zabinsky, Z.B. Stochastic Adaptive Search for Global Optimization. In Nonconvex Optimization and Its Applications; Springer: New York, NY, USA, 2003; ISBN 978-1-4419-9182-9.
- Spall, J.C. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control; Wiley: Hoboken, NJ, USA, 2003; ISBN 9780471330523.
- Brooks, S.H. A Discussion of Random Methods for Seeking Maxima. Oper. Res. **1958**, 6, 244–251.
- Peng, J.-P.; Shi, D.-H. Improvement of Pure Random Search in Global Optimization. J. Shanghai Univ. **2000**, 4, 92–95.

**Figure 2.** Search path of the PROS algorithm in two 2D problem examples: (**a**) F01-Sphere function (sphere_func) in [−100, 100]^{2}; (**b**) F10-Rastrigin’s function (rastrigin_func) in [−5.12, 5.12]^{2}.

**Figure 3.**Time needed for the calculation of each objective function 100,000 times, for a random vector of 5, 10, 30 or 50 variables (D = 5, 10, 30, 50). The y-axis is in logarithmic scale.

**Figure 4.**Best results (final values of the objective functions) for the various optimizers for D = 5, 10, 30, and 50, for all 12 objective functions (averages of 10 runs)—lower is better.

**Figure 5.**Convergence history of the various optimizers for 5 dimensions (D = 5), for all 12 objective functions (average of 10 runs).

**Figure 6.**Convergence history of the various optimizers for 10 dimensions (D = 10), for all 12 objective functions (average of 10 runs).

**Figure 7.**Convergence history of the various optimizers for 30 dimensions (D = 30), for all 12 objective functions (average of 10 runs).

**Figure 8.**Convergence history of the various optimizers for 50 dimensions (D = 50), for all 12 objective functions (average of 10 runs).

**Figure 9.**Time needed for the various optimizers, for all 12 objective functions and D = 5, 10, 30, and 50. The y-axis is in logarithmic scale.

**Figure 10.** The objective function f(x_{1}, x_{2}) = x_{1} + x_{2} − 3·x_{1}·x_{2} + 1, plotted in [0, 1]^{2}. PROS is trapped at the local minimum (0, 0) since there is no improvement in f in any of the x_{1} or x_{2} directions on their own: (**a**) 3D surface plot; (**b**) 2D contour plot.

**Table 1.** Example case: the first five steps of the PROS algorithm for the F01-Sphere function (sphere_func) in [−100, 100]^{5}.

Step | x_{i}/y_{i} | 1 | 2 | 3 | 4 | 5 | Obj. Value | Notes |
---|---|---|---|---|---|---|---|---|
1 | x_{1} | −34.32 | −5.13 | −58.68 | 1.29 | −22.26 | 5.15·10^{3} | No improvement; keep x_{1} for Step 2 (x_{2} = x_{1}) |
1 | y_{1} (j = 4) | −34.32 | −5.13 | −58.68 | [14.73] | −22.26 | 5.36·10^{3} | |
2 | x_{2} | −34.32 | −5.13 | −58.68 | 1.29 | −22.26 | 5.15·10^{3} | Improvement; keep y_{2} for Step 3 (x_{3} = y_{2}) |
2 | y_{2} (j = 3) | −34.32 | −5.13 | [35.52] | 1.29 | −22.26 | 2.96·10^{3} | |
3 | x_{3} | −34.32 | −5.13 | 35.52 | 1.29 | −22.26 | 2.96·10^{3} | No improvement; keep x_{3} for Step 4 (x_{4} = x_{3}) |
3 | y_{3} (j = 5) | −34.32 | −5.13 | 35.52 | 1.29 | [−43.44] | 4.36·10^{3} | |
4 | x_{4} | −34.32 | −5.13 | 35.52 | 1.29 | −22.26 | 2.96·10^{3} | Improvement; keep y_{4} for Step 5 (x_{5} = y_{4}) |
4 | y_{4} (j = 1) | [11.97] | −5.13 | 35.52 | 1.29 | −22.26 | 1.93·10^{3} | |
5 | x_{5} | 11.97 | −5.13 | 35.52 | 1.29 | −22.26 | 1.93·10^{3} | No improvement; keep x_{5} for Step 6 (x_{6} = x_{5}) |
5 | y_{5} (j = 5) | 11.97 | −5.13 | 35.52 | 1.29 | [−88.95] | 9.35·10^{3} | |

**Table 2.**Optimization parameters and convergence criteria used for each category of problems based on the number of dimensions.

No of Dimensions, D | D = 5 | D = 10 | D = 30 | D = 50 |
---|---|---|---|---|
Population size NP (NP = 10·D) | 50 | 100 | 300 | 500 |
Max. generations/iterations MaxIter (MaxIter = 20·D − 50) | 50 | 150 | 550 | 950 |
Max. obj. function evaluations MaxFE (MaxFE = NP·MaxIter) | 2500 | 15,000 | 165,000 | 475,000 |

**Table 3.** The 12 objective functions used in the study.

ID | Name and Code Name | Search Range | Minimum | Properties |
---|---|---|---|---|
F01 | Sphere function sphere_func | [−100, 100]^{D} | f_{01}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Unimodal; highly symmetric, in particular rotationally invariant |
F02 | Ellipsoid function ellipsoid_func | [−100, 100]^{D} | f_{02}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Unimodal; symmetric |
F03 | Sum of Different Powers function sumpow_func | [−10, 10]^{D} | f_{03}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Unimodal |
F04 | Quintic function quintic_func | [−20, 20]^{D} | f_{04}(x^{*}) = 0 at x^{*} = {−1, −1, …, −1} or x^{*} = {2, 2, …, 2} | Has two global optima |
F05 | Drop-Wave function drop_wave_func | [−5.12, 5.12]^{D} | f_{05}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal; highly complex |
F06 | Weierstrass function weierstrass_func | [−0.5, 0.5]^{D} | f_{06}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal; continuous everywhere but differentiable only on a set of points |
F07 | Alpine 1 function alpine1_func | [−10, 10]^{D} | f_{07}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal |
F08 | Ackley’s function ackley_func | [−32.768, 32.768]^{D} | f_{08}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal; many local optima, with the global optimum located in a very small basin |
F09 | Griewank’s function griewank_func | [−100, 100]^{D} | f_{09}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal; many regularly distributed local optima |
F10 | Rastrigin’s function rastrigin_func | [−5.12, 5.12]^{D} | f_{10}(x^{*}) = 0 at x^{*} = {0, 0, …, 0} | Multi-modal; many regularly distributed local optima |
F11 | HappyCat function happycat_func | [−20, 20]^{D} | f_{11}(x^{*}) = 0 at x^{*} = {−1, −1, …, −1} | Multi-modal; global optimum located in a curved narrow valley |
F12 | HGBat function hgbat_func | [−15, 15]^{D} | f_{12}(x^{*}) = 0 at x^{*} = {−1, −1, …, −1} | Multi-modal; global optimum located in a curved narrow valley |

**Table 4.**Best results * of the PROS algorithm for all 12 objective functions, for D = 5, 10, 30, 50 dimensions (averages of 10 runs). The target (optimum) value is zero in all the cases considered.

ID | Function Name | D = 5 | D = 10 | D = 30 | D = 50 |
---|---|---|---|---|---|
F01 | Sphere | 4.99E-01 | 9.56E-02 | 2.17E-02 | 9.25E-03 |
F02 | Ellipsoid | 9.78E-01 | 6.72E-01 | 2.92E-01 | 3.77E-01 |
F03 | Sum of Dif. Powers | 8.07E-04 | 5.58E-05 | 5.35E-06 | 2.80E-06 |
F04 | Quintic | 8.56E-01 | 6.61E-01 | 5.40E-01 | 5.18E-01 |
F05 | Drop-Wave | 2.63E-01 | 5.48E-01 | 9.10E-01 | 9.74E-01 |
F06 | Weierstrass | 3.66E-01 | 3.62E-01 | 4.25E-01 | 5.36E-01 |
F07 | Alpine 1 | 4.99E-03 | 3.16E-03 | 2.36E-03 | 2.46E-03 |
F08 | Ackley’s | 7.92E-01 | 1.72E-01 | 3.99E-02 | 1.87E-02 |
F09 | Griewank’s | 5.34E-02 | 4.13E-02 | 2.51E-02 | 1.13E-02 |
F10 | Rastrigin’s | 3.44E-01 | 4.94E-02 | 1.03E-02 | 5.74E-03 |
F11 | HappyCat | 4.71E-01 | 4.25E-01 | 5.63E-01 | 6.92E-01 |
F12 | HGBat | 4.40E-01 | 4.61E-01 | 6.51E-01 | 5.97E-01 |

**Table 5.**Best results * for all 5 optimizers and all 12 objective functions, for 5 dimensions (D = 5, averages of 10 runs). The target (optimum) value is zero in all cases considered.

ID | Function Name | GA | PSO | DE | PRS | PROS |
---|---|---|---|---|---|---|
F01 | Sphere | 6.99E-04 | 9.04E-04 | 2.34E-01 | 8.30E+02 | 4.99E-01 |
F02 | Ellipsoid | 2.12E-03 | 5.58E-03 | 5.26E-01 | 1.56E+03 | 9.78E-01 |
F03 | Sum of Dif. Powers | 6.38E-06 | 1.85E-08 | 3.11E-05 | 2.15E+01 | 8.07E-04 |
F04 | Quintic | 2.49E-01 | 1.95E-01 | 4.03E+00 | 2.33E+03 | 8.56E-01 |
F05 | Drop-Wave | 1.56E-01 | 6.38E-02 | 9.55E-02 | 4.20E-01 | 2.63E-01 |
F06 | Weierstrass | 2.21E-01 | 4.50E-02 | 5.27E-01 | 3.87E+00 | 3.66E-01 |
F07 | Alpine 1 | 6.18E-03 | 2.39E-03 | 3.65E-01 | 1.93E+00 | 4.99E-03 |
F08 | Ackley’s | 1.89E-01 | 9.54E-02 | 6.52E-01 | 1.19E+01 | 7.92E-01 |
F09 | Griewank’s | 1.96E-02 | 1.46E-01 | 2.60E-01 | 1.00E+00 | 5.34E-02 |
F10 | Rastrigin’s | 5.20E-01 | 3.70E+00 | 7.59E+00 | 2.08E+01 | 3.44E-01 |
F11 | HappyCat | 5.31E-01 | 1.92E-01 | 3.89E-01 | 5.41E+00 | 4.71E-01 |
F12 | HGBat | 5.46E-01 | 2.40E-01 | 2.97E-01 | 1.81E+01 | 4.40E-01 |

**Table 6.**Best results * for all 5 optimizers and all 12 objective functions, for 10 dimensions (D = 10, averages of 10 runs). The target (optimum) value is zero in all cases considered.

ID | Function Name | GA | PSO | DE | PRS | PROS |
---|---|---|---|---|---|---|
F01 | Sphere | 6.93E-06 | 4.41E-11 | 1.02E-01 | 4.83E+03 | 9.56E-02 |
F02 | Ellipsoid | 5.89E-05 | 3.83E-10 | 4.03E-01 | 1.83E+04 | 6.72E-01 |
F03 | Sum of Dif. Powers | 4.22E-08 | 7.75E-23 | 1.57E-06 | 2.04E+03 | 5.58E-05 |
F04 | Quintic | 2.42E-02 | 3.52E-05 | 1.04E+01 | 4.95E+04 | 6.61E-01 |
F05 | Drop-Wave | 3.45E-01 | 1.24E-01 | 1.73E-01 | 7.85E-01 | 5.48E-01 |
F06 | Weierstrass | 1.24E-01 | 9.90E-05 | 8.30E-01 | 1.03E+01 | 3.62E-01 |
F07 | Alpine 1 | 1.10E-03 | 4.56E-07 | 1.68E+00 | 7.33E+00 | 3.16E-03 |
F08 | Ackley’s | 1.94E-03 | 1.98E-06 | 2.66E-01 | 1.70E+01 | 1.72E-01 |
F09 | Griewank’s | 1.06E-02 | 1.48E-01 | 4.83E-01 | 2.11E+00 | 4.13E-02 |
F10 | Rastrigin’s | 1.96E-04 | 8.12E+00 | 3.61E+01 | 6.62E+01 | 4.94E-02 |
F11 | HappyCat | 7.92E-01 | 1.86E-01 | 3.98E-01 | 1.14E+01 | 4.25E-01 |
F12 | HGBat | 8.00E-01 | 3.00E-01 | 3.09E-01 | 1.02E+02 | 4.61E-01 |

**Table 7.**Best results * for all 5 optimizers and all 12 objective functions, for 30 dimensions (D = 30, averages of 10 runs). The target (optimum) value is zero in all cases considered.

ID | Function Name | GA | PSO | DE | PRS | PROS |
---|---|---|---|---|---|---|
F01 | Sphere | 7.50E-07 | 4.03E-26 | 3.70E+02 | 3.58E+04 | 2.17E-02 |
F02 | Ellipsoid | 2.25E-05 | 1.15E-24 | 2.80E+03 | 4.57E+05 | 2.92E-01 |
F03 | Sum of Dif. Powers | 1.50E-12 | 1.54E-52 | 3.50E+03 | 2.46E+15 | 5.35E-06 |
F04 | Quintic | 1.80E-02 | 4.43E-14 | 2.73E+03 | 1.92E+06 | 5.40E-01 |
F05 | Drop-Wave | 7.29E-01 | 3.76E-01 | 8.71E-01 | 9.63E-01 | 9.10E-01 |
F06 | Weierstrass | 3.63E-01 | 1.55E-01 | 2.06E+01 | 4.06E+01 | 4.25E-01 |
F07 | Alpine 1 | 3.37E-04 | 7.42E-15 | 2.54E+01 | 3.86E+01 | 2.36E-03 |
F08 | Ackley’s | 6.89E-04 | 4.37E-14 | 6.08E+00 | 1.94E+01 | 3.99E-02 |
F09 | Griewank’s | 1.23E-03 | 6.39E-03 | 1.07E+00 | 1.01E+01 | 2.51E-02 |
F10 | Rastrigin’s | 1.99E-01 | 4.56E+01 | 2.34E+02 | 3.14E+02 | 1.03E-02 |
F11 | HappyCat | 9.79E-01 | 3.39E-01 | 9.38E-01 | 2.88E+01 | 5.63E-01 |
F12 | HGBat | 9.55E-01 | 4.95E-01 | 7.12E+00 | 8.32E+02 | 6.51E-01 |

**Table 8.**Best results * for all 5 optimizers and all 12 objective functions, for 50 dimensions (D = 50, averages of 10 runs). The target (optimum) value is zero in all cases considered.

ID | Function Name | GA | PSO | DE | PRS | PROS |
---|---|---|---|---|---|---|
F01 | Sphere | 1.03E-06 | 2.04E-34 | 1.03E+04 | 8.00E+04 | 9.25E-03 |
F02 | Ellipsoid | 3.50E-05 | 1.19E-32 | 1.46E+05 | 1.77E+06 | 3.77E-01 |
F03 | Sum of Dif. Powers | 8.98E-13 | 3.65E-60 | 6.01E+14 | 1.50E+30 | 2.80E-06 |
F04 | Quintic | 1.87E-02 | 3.42E-15 | 4.79E+05 | 5.92E+06 | 5.18E-01 |
F05 | Drop-Wave | 8.36E-01 | 6.09E-01 | 9.68E-01 | 9.82E-01 | 9.74E-01 |
F06 | Weierstrass | 4.66E-01 | 2.08E+00 | 5.39E+01 | 7.30E+01 | 5.36E-01 |
F07 | Alpine 1 | 3.84E-04 | 1.99E-14 | 6.33E+01 | 7.97E+01 | 2.46E-03 |
F08 | Ackley’s | 5.50E-04 | 1.16E-01 | 1.53E+01 | 2.01E+01 | 1.87E-02 |
F09 | Griewank’s | 4.44E-08 | 2.47E-03 | 3.59E+00 | 2.04E+01 | 1.13E-02 |
F10 | Rastrigin’s | 1.25E-04 | 9.60E+01 | 5.02E+02 | 5.99E+02 | 5.74E-03 |
F11 | HappyCat | 1.08E+00 | 5.01E-01 | 8.57E+00 | 3.89E+01 | 6.92E-01 |
F12 | HGBat | 1.06E+00 | 6.04E-01 | 2.46E+02 | 1.75E+03 | 5.97E-01 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Plevris, V.; Bakas, N.P.; Solorzano, G. Pure Random Orthogonal Search (PROS): A Plain and Elegant Parameterless Algorithm for Global Optimization. *Appl. Sci.* **2021**, *11*, 5053.
https://doi.org/10.3390/app11115053
