Article

A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking

by Vagelis Plevris 1 and German Solorzano 2,*

1 Department of Civil and Architectural Engineering, Qatar University, Doha P.O. Box 2713, Qatar
2 Department of Civil Engineering and Energy Technology, OsloMet—Oslo Metropolitan University, Pilestredet 35, 0166 Oslo, Norway
* Author to whom correspondence should be addressed.
Submission received: 24 February 2022 / Revised: 8 April 2022 / Accepted: 9 April 2022 / Published: 11 April 2022

Abstract

A collection of thirty mathematical functions that can be used for optimization purposes is presented and investigated in detail. The functions are defined for any number of dimensions and can be used as benchmark functions for unconstrained multidimensional single-objective optimization problems. The functions feature a wide variability in terms of complexity. We investigate the performance of three optimization algorithms on the functions: two metaheuristic algorithms, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and one gradient-based mathematical algorithm, Sequential Quadratic Programming (SQP). All implementations are done in MATLAB, with full source code availability. The focus of the study is on the objective functions themselves, on the optimization algorithms used, and on the suitability of each algorithm for each problem. We use the three optimization methods to investigate the difficulty and complexity of each problem and to determine whether the problem is better suited for a metaheuristic approach or for a gradient-based mathematical method. We also investigate how increasing the dimensionality affects the difficulty of each problem and the performance of the optimizers. Some functions are extremely difficult to optimize efficiently, especially in higher dimensions. Notable examples are the two new objective functions, F29 and F30, which are very hard to optimize even though their optimum point is clearly visible, at least in the two-dimensional case.

Dataset

All the functions and the optimization algorithms are provided with full MATLAB source code, for anyone interested in using, testing, or exploring them further. All the results of the paper can be reproduced, tested, and verified using the provided source code. A dedicated GitHub repository has been set up for this purpose at https://github.com/vplevris/Collection30Functions (accessed on 24 February 2022).

Dataset License

CC-BY

1. Background

1.1. Introduction

Mathematical optimization is the process of finding the best element, with regard to a given criterion, from a set of available alternatives. Optimization problems arise in various quantitative disciplines from computer science and engineering to economics and operational research. The development of solution methods to optimization problems has been of interest in mathematics and engineering for centuries [1].
Even though there are some well-established optimization methods, the truth is that no single method outperforms all the others when several different optimization problems are considered. This is often referred to as the No Free Lunch (NFL) theorem [2,3]. Consequently, new optimization methods, or new variants of existing ones, are proposed on a regular basis [4,5,6]. When a new optimization method is proposed, its developers usually choose a set of popular optimization problems (or objective functions) to test the algorithm on, which also serve as a basis for comparing the new algorithm with existing ones. The objective functions chosen for this testing phase are known as benchmark functions and play a decisive role in determining whether the new algorithm can be considered successful, i.e., whether its performance is better than, or at least similar to, that of existing, established algorithms.
Benchmark functions are usually defined in such a way that they can be computed in an arbitrarily chosen number of dimensions. As the number of dimensions increases, the complexity of the optimization task also increases. A certain optimization algorithm may perform very well for a small number of dimensions but poorly in higher-dimensional spaces. This is the so-called “Curse of Dimensionality”, a well-known problem in data science referring to various phenomena that arise when analyzing and organizing data in high-dimensional spaces and that do not occur in low-dimensional settings [7]. Additionally, the size of the search domain is another important variable. Benchmark functions based on explicit mathematical expressions are usually defined over an infinite domain; therefore, an appropriate size of the search space must be chosen a priori. As a result, choosing the benchmark functions, the number of dimensions, and the size of the search domain is not a trivial task when testing and comparing optimization algorithms.
In this study, we investigate a total of 30 mathematical functions that can be used as optimization benchmark functions. There is no consensus among researchers on how an optimization algorithm should be properly tested or which benchmark functions should be used. The goal of the present study is not to answer this question. Instead, the study aims at providing a compilation of ready-to-use functions of various complexities that are suited for benchmarking purposes. We investigate and assess the properties and complexity of these functions by observing and comparing the difficulties encountered by popular optimization algorithms when searching for their respective optima. The selected methods used for these comparisons are: Genetic Algorithm (GA) [8,9,10], Particle Swarm Optimization (PSO) [11,12,13,14,15], and Sequential Quadratic Programming (SQP) [11,16,17]. Based on the obtained results, the complete set of the 30 functions can be used for checking the efficiency of any other optimization algorithm.
Before the description of the implemented methodology, a brief introduction to the basic concepts, notation, and common search strategies used in optimization methods is given in Section 1.2 and Section 1.3.

1.2. Formulation of an Optimization Problem

An optimization problem is usually written in terms of an objective function f(x), to be minimized (or maximized), which denotes the purpose of the problem. The vector x is known as the design vector, and it constitutes a candidate solution to the problem. It is composed of several design variables, x = {x1, …, xD}, which represent the unknown optimal parameters that are to be found. The number of design variables D is the number of dimensions of the design vector and of the optimization problem in general. Design variables are expressed in various forms and can have binary, integer, or real values. In all cases, some sort of boundaries must be specified to restrict the search space to a realistic domain Ω (i.e., the lower and upper bounds, Ω = [lb, ub], where lb_i ≤ x_i ≤ ub_i for every i ∈ {1, …, D}) [18]. The optimization task is then described as the process of finding a design vector x* such that the following expression is fulfilled, for a minimization problem:
$$f\left(\mathbf{x}^*\right) \leq f\left(\mathbf{x}\right) \quad \text{for all } \mathbf{x} \in \Omega \tag{1}$$
The definition expressed through Equation (1) implies that there is no better solution to the optimization problem than the one denoted by the design vector x*. In that case, the solution is known as the global optimum. However, in most practical optimization problems, an exact solution such as x* is difficult or practically impossible to obtain. Instead, an approximate solution of the actual global optimum, which is usually a local minimum, can be acceptable for practical purposes. Moreover, most optimization problems in practice are subject to restrictions within their search space, meaning that, due to the imposed constraints, some values of the domain Ω are not valid solutions to the problem. These constraints can be expressed as equality functions, h(x) = 0, or more frequently as inequality functions, g(x) ≤ 0. When there are no constraints other than the design variable bounds, the formal formulation of an optimization problem (for minimization) is simply:
$$\min_{\mathbf{x} \in \Omega} f\left(\mathbf{x}\right) \tag{2}$$
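As an illustration, this bound-constrained formulation maps directly to a few lines of MATLAB. The sketch below is a minimal example; the objective and the bounds are arbitrary demonstration choices, not part of the benchmark set:

    % Minimal sketch of the formulation of Equation (2):
    % minimize f(x) over the box-shaped domain Omega = [lb, ub]
    D  = 5;                         % number of dimensions
    f  = @(x) sum(x.^2);            % example objective (the Sphere function)
    lb = -100*ones(1,D);            % lower bounds lb_i
    ub =  100*ones(1,D);            % upper bounds ub_i
    x  = lb + rand(1,D).*(ub-lb);   % a random candidate solution inside Omega
    fx = f(x);                      % its objective function value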

1.3. Optimization Search Strategies

There are two general types of strategies that can be used to solve optimization problems. On the one hand, deterministic or exact methods are based on a solid mathematical formulation and are commonly used to solve simple optimization problems where the effort grows only polynomially with the problem size. However, if the problem is NP-hard, the computational effort grows exponentially, and even small-sized problems can become unsolvable by these methods as they usually get trapped in local minima. In the present study, we use SQP as a mathematical (exact) method.
Alternatively, metaheuristic optimization algorithms (MOAs) [19] are based on stochastic search strategies that incorporate some form of randomness or probability that increases their robustness [4,20,21]. As a result, such algorithms are very effective in handling hard or ill-conditioned optimization problems where the objective function may be nonconvex, nondifferentiable, and possibly discontinuous over a continuous, discrete, or mixed continuous–discrete domain. Furthermore, these algorithms often show good performance for many NP-complete problems and are widely used in practical applications. Although MOAs usually provide good quality solutions, they can offer no guarantee that the optimal solution has been found. In the present study, we use two well-known MOAs, namely the GA and PSO, as explained in detail in Section 2.1. MOAs have been used to solve mathematical problems as well as more practical optimization problems in various scientific fields, including computer science, economics, operations research, engineering, and others. Other popular MOAs that have been successfully applied to a variety of problems in different disciplines are Evolution Strategies (ES) [22,23], Differential Evolution (DE) [24,25,26,27,28], and Ant Colony Optimization (ACO) [15,29], among others.

2. Methodology

A total of 30 objective functions that can serve for benchmarking purposes are investigated, denoted as F01 to F30. Their mathematical expressions as well as a 2-dimensional graphical visualization and other details are thoroughly described in Appendix A. The functions are chosen according to the following specific criteria so that they are well-suited for benchmarking purposes:
(i) They are scalable in terms of their number of dimensions, i.e., they can be defined for any number of dimensions D.
(ii) They can be expressed explicitly in a clear mathematical form, without any ambiguities.
(iii) All correspond to minimization problems. Therefore, a specific minimum value (and a corresponding solution vector) exists.
(iv) All functions have a minimum value of zero, for consistency. This is not a limitation, as a constant can easily be added to any function to make the minimum value whatever is desired.
All the objective functions are investigated in this study in multiple numbers of dimensions, namely: (i) D = 5, (ii) D = 10, (iii) D = 30, and (iv) D = 50. In other words, each problem is defined and investigated with 5, 10, 30, or 50 variables. Three different optimization algorithms are used to find the minimum value of every objective function, in each of the chosen dimensions. The chosen algorithms and their respective parameters are discussed in Section 2.1.
Each of the optimization tasks is defined as an unconstrained optimization problem (see Equation (2)). All the tested objective functions are scalar-valued, i.e., f: ℝ^D → ℝ, where D is the number of dimensions. The search space Ω ⊂ ℝ^D is box-shaped in the D-dimensional space and is defined by the lower and upper bound vectors, Ω = [lb, ub], where lb_i ≤ x_i ≤ ub_i for every i ∈ {1, …, D} [18]. A design vector x with design variables x = {x1, …, xD} is a candidate solution inside the search space, x ∈ Ω (the adopted notation is introduced in Section 1.2). The obtained results are presented and compared in Section 3, where the complexity and properties of the presented objective functions are discussed.
All the simulations and the numerical work in this study have been completed in MATLAB. All the work is available with its source code (in a GitHub repository), so any interested researcher can download the scripts, run the program, and reproduce the results on their own computer. This is particularly useful for researchers, as they can (i) use the provided functions for their own optimization and benchmarking work; (ii) use the provided optimizers for other optimization problems; and (iii) investigate the performance and suitability of these algorithms in optimizing the provided functions in various dimensions, replicating and validating the results of the present study.

2.1. Optimization Algorithms Used

We have chosen three well-known optimization algorithms to study the selected optimization functions:
  • Genetic Algorithm (GA) [8,9,10];
  • Particle Swarm Optimization (PSO) [11,12,15];
  • Sequential Quadratic Programming (SQP) [11,16,17].
GA and PSO are metaheuristic methods that use a population of agents (or particles) at each generation (iteration). In addition, they are stochastic methods, which means that the final result of the optimization procedure will be different each time the method is run. For this reason, we run these two algorithms 50 times each and we process the results statistically in the end. On the other hand, SQP is a deterministic method which will give the very same result every time the algorithm is run, provided that the starting point of the search is the same. In this study, SQP is also run 50 times, starting from different random points in the design space. After the results of 50 runs for each algorithm have been obtained, we calculate and report the average and the median objective function values, as well as the standard deviation, over the 50 runs, for each problem. In addition, we report the median values of two useful evaluation metrics, Δx and Δf [30,31], that are defined in the domain space and the image space, respectively. Finally, we calculate the median value of a third final evaluation metric, Δt, which is a combination of the other two. The metrics are described in detail in Section 2.3.
All three algorithms (GA, PSO, SQP) are based on MATLAB implementations and are executed using the MATLAB commands ga, particleswarm, and fmincon, respectively.
GA uses the following default parameters:
  • ‘CrossoverFraction’, 0.8. The CrossoverFraction option specifies the fraction of each population, other than elite children, that is made up of crossover children;
  • ‘EliteCount’, 0.05*PopulationSize. EliteCount specifies the number of elite children;
  • ‘FunctionTolerance’, 10^−6.
For the MATLAB fmincon command, which is a mathematical optimizer, we also use the additional option ‘Algorithm’, ‘sqp’ to ensure that the SQP variant is employed.
We use the maximum function evaluations as the only convergence criterion for GA and PSO, i.e., both algorithms will stop after a certain number of function evaluations is completed. In the case of SQP (fmincon MATLAB command), we use, additionally, the following parameters that can affect the convergence criterion:
  • ‘StepTolerance’, 10^−30;
  • ‘ConstraintTolerance’, 10^−30;
  • ‘OptimalityTolerance’, 10^−30;
  • ‘MaxFunctionEvaluations’, NP*MaxIter.
In fact, for the SQP case, we enforce very strict tolerance criteria to help ensure that the max. number of function evaluations is reached, so that the comparison between the three methods is reasonably fair. Since GA, PSO, and SQP are each run 50 times for each problem, the total number of optimization problems solved is 3 (methods) × 4 (dimension cases) × 30 (problems) × 50 (runs) = 18,000. To maintain consistency across all problems and all the different cases, for GA and PSO the population size is set to NP = 10∙D and the maximum number of iterations (or generations) is set to MaxIter = 20∙D − 50. The max. number of function evaluations can then be calculated as MaxFE = NP ∙ MaxIter. Table 1 shows the population size, max. number of generations/iterations, and max. number of function evaluations for each category of problems, based on the number of dimensions.
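A minimal sketch of how the three MATLAB solvers might be invoked with the settings described above is shown below. The objective handle and the bounds are demonstration placeholders, and the exact option set used in the repository may differ in detail:

    D       = 10;                         % number of dimensions
    NP      = 10*D;                       % population size
    MaxIter = 20*D - 50;                  % max. number of generations/iterations
    fun     = @(x) sum(x.^2);             % example objective (the Sphere function)
    lb = -100*ones(1,D);  ub = 100*ones(1,D);

    optGA  = optimoptions('ga', 'PopulationSize', NP, ...
        'MaxGenerations', MaxIter, 'FunctionTolerance', 1e-6);
    optPSO = optimoptions('particleswarm', 'SwarmSize', NP, ...
        'MaxIterations', MaxIter);
    optSQP = optimoptions('fmincon', 'Algorithm', 'sqp', ...
        'StepTolerance', 1e-30, 'ConstraintTolerance', 1e-30, ...
        'OptimalityTolerance', 1e-30, 'MaxFunctionEvaluations', NP*MaxIter);

    [xGA,  fGA]  = ga(fun, D, [], [], [], [], lb, ub, [], optGA);
    [xPSO, fPSO] = particleswarm(fun, D, lb, ub, optPSO);
    x0 = lb + rand(1,D).*(ub-lb);         % random starting point for SQP
    [xSQP, fSQP] = fmincon(fun, x0, [], [], [], [], lb, ub, [], optSQP);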

2.2. Objective Functions

The selected objective functions together with their suggested search range and the location of the global optimum x* in the design space are briefly presented in Table 2. For uniformity reasons, the optimum (minimum) value of all functions is zero, in all cases (fi(x*) = 0, for all i = 1, 2, …, 30). However, the location of the minimum, x*, varies with the problems. It is at x* = {0, 0, …, 0} in 24 of the functions (80% of them), while it is different in 6 of them, namely, F04, F11, F12, F13, F17, and F21.
At this point, it is worth noting that some algorithms, such as PSO, tend to converge (at least in the free response of the associated dynamical system) toward the {0, 0, …, 0} point, which can bias the procedure in favor of these algorithms in cases where the optimum lies at or near {0, 0, …, 0}. For a fair and more general comparison, it would be advisable to shift and rotate the functions using proper transformations before using them. On the other hand, the direct comparison of the performance of the different algorithms is not the main purpose of the present study, and to keep things simple and consistent we use these functions in their original form in the paper and in the MATLAB code implementation.
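As a hint of what such a transformation might look like, a shift by a vector o and a rotation by an orthogonal matrix Q can be wrapped around any of the functions. The sketch below is purely illustrative and is not part of the provided code:

    % Illustrative shift-and-rotate wrapper: g(x) = f(Q*(x - o))
    % moves the optimum from {0, 0, ..., 0} to o and rotates the landscape
    D     = 10;
    o     = 10*rand(1,D) - 5;        % random shift vector
    [Q,~] = qr(randn(D));            % random orthogonal (rotation) matrix
    f     = @(x) sum(x.^2);          % example: the Sphere function
    g     = @(x) f((Q*(x - o)')');   % shifted and rotated variant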
The properties, mathematical formulation, suggested search space, and the location of the global minimum for each function are given in detail in Appendix A, together with figures visualizing the functions in the simple two-dimensional (D = 2) case. The mathematical functions have been implemented in MATLAB and their code has been optimized to achieve a faster execution time. Wherever possible, the use of “for-loops” is avoided and replaced with vectorized operations, as it is known that MATLAB is slow when processing for-loops, while it is very fast and efficient in handling vectors and matrices. Only 3 of the functions, namely F06-Weierstrass, F17-Perm D, Beta, and F19-Expanded Schaffer’s F6, use some limited for-loops in their code, while the other functions use only vectorized operations without any for-loops. Most functions are very fast to calculate using a modern computer, with the exceptions of F06 (Weierstrass function) and F17 (Perm D, Beta function), which require relatively more time, especially for the higher dimension cases.
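To illustrate the difference, the sketch below shows a loop-based and an equivalent vectorized MATLAB implementation of Rastrigin’s function (F10); the actual implementations in the repository may differ in naming and detail:

    % Loop-based implementation of Rastrigin's function (F10)
    function f = rastrigin_loop(x)
        D = numel(x);
        f = 10*D;
        for i = 1:D
            f = f + x(i)^2 - 10*cos(2*pi*x(i));
        end
    end

    % Equivalent vectorized implementation: no for-loop at all
    function f = rastrigin_vec(x)
        f = sum(x.^2 - 10*cos(2*pi*x)) + 10*numel(x);
    end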

2.3. Evaluation Metrics

Various metrics can be used to evaluate the performance of an optimization algorithm on an objective function. In this study, we first use the average value of the objective function, the median value, and the standard deviation over 50 runs. Although these provide some information on the performance of each algorithm on each problem, they are not normalized metrics and are therefore not comparable across different functions. The functions are defined over various ranges, in different dimensions, and their values within the multidimensional search space also vary. For this reason, we use three additional normalized evaluation metrics, Δx, Δf, and Δt [30,31], in particular their median values over 50 runs. Δx is the root mean square of the normalized distance (in the domain space) between the optimizer-found optimum location x and the location of the global optimum x*. Δf is the associated normalized distance in the image space. The first two metrics are defined as follows:
$$\Delta x = \sqrt{\frac{1}{D}\sum_{i=1}^{D}\left(\frac{x_i - x_i^*}{R_i}\right)^2} \tag{3}$$
$$\Delta f = \frac{f_{min} - f_{min}^*}{f_{max}^* - f_{min}^*} = \frac{f_{min}}{f_{max}^*} \tag{4}$$
where R_i is the range of the i-th variable, i.e., R_i = ub_i − lb_i, f_min is the final objective function value found by an optimizer, f*min = 0 is the global optimum, which is zero for all functions in the present study, and f*max is the maximum value of the objective function in the search space. The third metric, Δt, is a combination of the other two, as shown in Equation (5), and gives an overall evaluation of the quality of the final result.
$$\Delta t = \sqrt{\frac{\Delta x^2 + \Delta f^2}{2}} \tag{5}$$
Again, the final value of the Δt evaluation metric reported in this study is the median value over 50 runs. Equation (5) should not be applied to the final median values of Δx and Δf to obtain Δt in a single step; rather, it should be applied to the individual Δx and Δf values of each optimization run, with the median of Δt then taken over the 50 runs.
It should be noted that the exact value of f*max for every function (for a given number of dimensions, D) is not known a priori. For this reason, we perform a Monte Carlo simulation to approximate it. For every function and every number of dimensions (5, 10, 30, 50), we generate 10,000 sample points in the search space and calculate the corresponding objective function values. We then take the maximum of these values as f*max in Equation (4).
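A sketch of how the metrics of Equations (3)–(5) and the Monte Carlo estimate of f*max might be computed is given below, assuming the objective handle fun, the bounds lb and ub, the optimizer result (xFound, fMin), and the known optimum location xStar are available; the variable names are illustrative and do not necessarily match the repository code:

    % Monte Carlo estimate of fmax* (10,000 random samples in the search space)
    N = 10000;
    X = lb + rand(N,D).*(ub - lb);               % N random points, one per row
    fmaxStar = max(arrayfun(@(k) fun(X(k,:)), (1:N)'));

    % Evaluation metrics for a single optimization run
    R      = ub - lb;                            % range R_i of each variable
    deltaX = sqrt(mean(((xFound - xStar)./R).^2));   % Equation (3)
    deltaF = fMin/fmaxStar;                      % Equation (4), since fmin* = 0
    deltaT = sqrt((deltaX^2 + deltaF^2)/2);      % Equation (5)

The median of deltaT over the 50 runs is then reported, applying Equation (5) run by run as described above.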

3. Results

3.1. Obtained Objective Function Values

For all 30 objective functions, the minimum (target) value of the objective function is zero, as shown in Table 2. In our case, we run each algorithm 50 times, for each problem. The total number of optimization runs is therefore 3 × 4 × 30 × 50 = 18,000. Considering that the full convergence history of each individual run is recorded, together with the final optimum, the execution time, and other important parameters, it is obvious that the generated amount of data is massive, and it is not easy to present all these results in a simple, compact, and comprehensive way.
For comparison purposes, we present in the figures: (i) the median values of the final optimum over 50 runs; (ii) the median of the Δf metric; (iii) the median of the Δx metric; and (iv) the median of the Δt metric, for each problem and each optimization algorithm. When a problem has two global optima (the case of F04, the Quintic function), we take the minimum of the Δx and Δt metrics. The results are presented in Figure 1 (for the case D = 5), Figure 2 (D = 10), Figure 3 (D = 30), and Figure 4 (D = 50). In all four figures, the y-axis of the first subfigure, which shows the objective function value, is in logarithmic scale and is capped at 10^5 in all cases. For the Δf, Δx, and Δt metrics (2nd, 3rd, and 4th subfigures), the y-axis is in linear scale with automatic min/max values.
More detailed results are presented in table format in Appendix C, where Table A1 shows the results obtained from the three optimizers for the cases D = 5 and D = 10, as averages over 50 runs, and Table A2 shows the corresponding average results for the cases D = 30 and D = 50. Table A3 (cases D = 5 and D = 10) and Table A4 (cases D = 30 and D = 50) show the results obtained from the three optimizers as median values. Table A5 and Table A6 show the standard deviation of the results, for each algorithm and each dimension case. In addition, Table A7 and Table A8 show the median values of the Δx metric, Table A9 and Table A10 show the median values of the Δf metric, and last, Table A11 and Table A12 show the median values of the combined Δt metric.
As expected, the SQP shows the least variance in the results (lowest values of the standard deviation) in most cases, particularly in higher dimensions. The SQP excels in some specific problems, such as F01, F02, F13, F14, F15, F16, F20, F24, and F28, where it manages to get very close to the optimum solution, in comparison to the GA and PSO. The values of the Δf and Δx metrics provide a good indication of the performance of the algorithms and of which problems are hard to solve. According to the Δf metric, the functions F05, F06, F08, F10, F18, F19, F21, F24, F26, F29, and F30 are hard to optimize, with F29 and F30 being the hardest.

3.2. Convergence History for Each Problem and Each Optimization Algorithm

The convergence histories for each problem and each optimization algorithm, for the various numbers of dimensions, are presented in the following figures: D = 5 (Figure 5 and Figure 6), D = 10 (Figure 7 and Figure 8), D = 30 (Figure 9 and Figure 10), and D = 50 (Figure 11 and Figure 12), as median values over 50 runs for each case. The median is the 0.5 quantile of a data set, i.e., the middle number in a sorted list of numbers. Presenting these results with the median curve is more informative than using the average curve, as the median is not affected by outliers, in contrast to the average. It should be noted that in these convergence history plots, the y-axis (median of objective function values) is in logarithmic scale, while the x-axis (number of iterations) is in linear scale.
Although the median curve is presented in these figures, there is variation among the 50 independent runs of the algorithms, and it is worth also investigating the spread of these results. For this purpose, at the end of each optimization case (i.e., 50 runs) we calculate the 0.1 quantile, Q0.1 and the 0.9 quantile, Q0.9. The 0.1 quantile is the 10th percentile, i.e., the point where 10% percent of the data have values less than this number. Similarly, the 0.9 quantile is the 90th percentile, i.e., the point where 90% percent of the data have values less than this number. In our case, with 50 elements (50 runs), these two correspond to the average of the 5th and the 6th elements (Q0.1), and the average of the 45th and the 46th elements (Q0.9) of the ordered list containing, in ascending order, the values of the objective function (50 elements in total). Within this range [Q0.1, Q0.9], there are 80% of the values of the objective function (i.e., 40 values in our case).
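These quantiles can be computed directly from the 50 final objective values. A minimal sketch, following the averaging rule stated above rather than MATLAB’s built-in interpolation-based quantile function, is:

    % fvals: vector of the 50 final objective function values (one per run)
    s   = sort(fvals);             % ascending order
    med = median(s);               % 0.5 quantile (the median)
    Q10 = (s(5)  + s(6))/2;        % 0.1 quantile: average of 5th and 6th values
    Q90 = (s(45) + s(46))/2;       % 0.9 quantile: average of 45th and 46th values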
In the figures, this range [Q0.1, Q0.9] is depicted as a vertical line at the end of each convergence curve. We see that in some cases this vertical line is long, i.e., there is a large spread of the results above and below the median value, while in other cases the line is barely visible or not drawn at all, i.e., the spread of the results is small. Again, it should be emphasized that this vertical line is drawn along an axis presented in logarithmic scale; for this reason, its top part (the part above the median) is drawn shorter than the bottom part (below the median) in cases where the two actually have the same length in absolute values.

4. Discussion and Conclusions

There is plenty of data to analyze from the 18,000 optimization runs performed. The number of unique problems is in fact 360, since each problem is solved 50 times in order to compute the average, the median, and other statistical quantities and evaluation metrics for every algorithm and every problem. The presented results and the convergence histories show both the relative difficulty of each optimization problem, in the given range of dimensions, and a comparison of the performance of the different optimization algorithms on each problem.
Every function has its own unique characteristics, and every optimization algorithm has its own special features, advantages, and drawbacks. Some functions are easily optimized by all algorithms, while others pose a real challenge to some (or even all) of the optimizers. It appears to be impossible to establish a single criterion to determine the complexity of the functions; however, we will try to provide a general overview by identifying some common patterns found in the results. In the following, we use the labels “low”, “middle”, and “high” to estimate each function’s complexity relative to the challenge it poses to each optimizer.
The convergence history plots shown in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 provide an overall picture of how easy or difficult it is for an optimization algorithm to find the minimum value of each problem. If the curve shows a steady decrease towards zero early in the process, the algorithm is working as intended for the specific problem, which is a sign of good performance. When the curve flattens out at a point above zero, the algorithm is trapped in a local minimum and cannot move further. Note that the results presented as convergence history plots are median values over 50 runs, for all three algorithms, the GA, PSO, and SQP. The combined Δt metric can also give a good indication of the success of each algorithm on each problem.
The first major pattern involves functions that appear to be easily solvable by the deterministic SQP approach but are much more difficult for the GA or PSO metaheuristics. These functions are, namely: F01, F02, F04, F11, F12, F13, F14, F15, F16, F20, F24, F27, and F28. Such an observation is not a surprise, as these functions are convex and/or unimodal, and pure mathematical methods usually excel in such problems, taking advantage of gradient information. Nevertheless, in most of these cases, it appears that increasing the number of iterations may improve the result for the GA or PSO. These functions are classified as low complexity for the SQP and as low-to-middle complexity for the GA and PSO.
The next distinction is made for problems that show a good convergence history curve (i.e., a steady decrease towards zero) in all the tested dimensions, considering only the metaheuristics, i.e., the GA and PSO algorithms. In other words, they are relatively easily solvable by at least one of the tested metaheuristic approaches. The SQP is not considered here, to avoid comparing results based on algorithms that serve very different purposes. The identified functions with this characteristic are: F01, F02, F03, F04, F07, F09, F10, F14, F16, F20, F22, F23, F26, and F27. These functions are classified as low-to-middle complexity for optimizers based on metaheuristic approaches.
Functions with a high level of complexity are considered to be the ones where none of the algorithms managed to find a satisfactory solution lying close to the global minimum, in at least one of the tested dimensions. Functions with such properties are: F05, F06, F08, F17, F18, F19, F21, F23, F25, F29, and F30. Some of these functions are difficult only in higher dimensions (i.e., D = 30 or D = 50), while others, such as F05, F17, F29, and F30, are very challenging in all the tested dimensions, even in the simplest D = 5 case. Most of these functions are nonconvex and multimodal, and the optimizers quite often get trapped in local minima. For the last two functions, F29 and F30, although the optimum point is clearly visible in the 2D case, as shown in Figure A30 and Figure A31, respectively, it is extremely difficult to locate in practice using optimization procedures. Due to the presence of numerous local minima and the isolation of the global minimum, these two problems represent difficult “needle in a haystack” optimization cases that are extremely hard to solve effectively. In both problems, all three optimizers fail to reach objective function values lower than 10,000 in all dimension cases, even in the simplest D = 5 case. Based on some additional tests that were performed, it appears that the task is very challenging even when only two dimensions are considered (the D = 2 case).
The three optimizers, GA, PSO, and SQP, in their MATLAB implementations, require different computational times to reach the optimum solutions. In general, the PSO was found to be the fastest algorithm in all the examined problems. In most cases, the SQP was the slowest algorithm, requiring more time than the GA, especially when low-dimensional spaces were examined (the D = 5 and D = 10 cases). The time needed for each algorithm and each problem is recorded by the program, and the relevant results are available in the GitHub repository hosting the source code of the project.

5. User Notes

A dedicated GitHub repository, freely available at https://github.com/vplevris/Collection30Functions (accessed on 24 February 2022), has been made for this project, where the interested reader can download the code, run it, and reproduce all the results and data of the paper, including tables, figures, etc. A detailed instruction file, in Word format, is also provided, explaining how to run the different modules of the code.

Author Contributions

Conceptualization, V.P.; methodology, V.P.; software, V.P. and G.S.; validation, V.P. and G.S.; formal analysis, V.P. and G.S.; investigation, V.P. and G.S.; resources, V.P. and G.S.; data curation, V.P. and G.S.; writing—original draft preparation, V.P. and G.S.; writing—review and editing, V.P. and G.S.; visualization, V.P. and G.S.; supervision, V.P.; project administration, V.P.; funding acquisition, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Oslo Metropolitan University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The full source code of the study, including the 30 functions, the 3 optimizers, and the full methodology, is provided. The code is written in MATLAB. Any reader can download the source code, run it in MATLAB, and reproduce the results of the study. It is available online at https://github.com/vplevris/Collection30Functions (accessed on 24 February 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Detailed Description of the 30 Functions

The properties, mathematical formulation, suggested search space, and the location of the global minimum are given in detail for each function in this appendix. In addition, the 30 functions are plotted for the simple two-dimensional case (D = 2), to provide a visual idea of their shapes and complexities. For each function, there are two plots. The one on the right (b) provides a general overview as the plotting area is set to the suggested search range according to Table 2. The plot on the left (a) is a closer look (or a zoom-in) into the search area by a factor of ×10 (in other words, the plot range is limited to 1/10 of the suggested search range).
1. Sphere function (sphere_func)
The Sphere function [32], also known as De Jong’s function [33], is one of the simplest optimization test functions, probably the simplest, easiest, and most commonly used continuous domain search problem. It is continuous, convex, unimodal, differentiable, separable, highly symmetric, and rotationally invariant. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f01(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{01}(\mathbf{x}) = \sum_{i=1}^{D} x_i^2$$
Figure A1 depicts the function in the 2D case (D = 2). In this case, the function is simplified as:
$$f_{01}(x_1, x_2) = x_1^2 + x_2^2$$
Figure A1. F01—Sphere function in two dimensions.
2. Ellipsoid function (ellipsoid_func)
The Ellipsoid function [32], or Hyper-ellipsoid function or Axis Parallel Hyper-ellipsoid function, is similar to the Sphere function and is also known as the Weighted Sphere function [33]. It is continuous, convex, differentiable, separable, and unimodal. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f02(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{02}(\mathbf{x}) = \sum_{i=1}^{D} i\,x_i^2$$
Figure A2 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{02}(x_1, x_2) = x_1^2 + 2x_2^2$$
3. Sum of Different Powers function (sumpow_func)
The Sum of Different Powers function [33] is a commonly used unimodal test function. The suggested search area is the hypercube [−10, 10]^D. The global minimum is f03(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{03}(\mathbf{x}) = \sum_{i=1}^{D} |x_i|^{\,i+1}$$
Figure A3 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{03}(x_1, x_2) = |x_1|^2 + |x_2|^3$$
Figure A2. F02—Ellipsoid function in two dimensions.
Figure A3. F03—Sum of Different Powers function in two dimensions.
4. Quintic function (quintic_func)
The Quintic function has the following general formulation:
$$f_{04}(\mathbf{x}) = \sum_{i=1}^{D} \left| x_i^5 - 3x_i^4 + 4x_i^3 + 2x_i^2 - 10x_i - 4 \right|$$
The suggested search area is the hypercube [−20, 20]^D. The function has two distinct global minima with f04(x*) = 0 at x* = {−1, −1, …, −1} or x* = {2, 2, …, 2}.
Figure A4 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{04}(x_1, x_2) = \left| x_1^5 - 3x_1^4 + 4x_1^3 + 2x_1^2 - 10x_1 - 4 \right| + \left| x_2^5 - 3x_2^4 + 4x_2^3 + 2x_2^2 - 10x_2 - 4 \right|$$
Figure A4. F04—Quintic function in two dimensions.
5. Drop-Wave function (drop_wave_func)
The Drop-Wave function is a multimodal function with high complexity. The suggested search area is the hypercube [−5.12, 5.12]^D. The global minimum is f05(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{05}(\mathbf{x}) = 1 - \frac{1 + \cos\left(12\sqrt{\sum_{i=1}^{D} x_i^2}\right)}{0.5\sum_{i=1}^{D} x_i^2 + 2}$$
Figure A5 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{05}(x_1, x_2) = 1 - \frac{1 + \cos\left(12\sqrt{x_1^2 + x_2^2}\right)}{0.5\left(x_1^2 + x_2^2\right) + 2}$$
Figure A5. F05—Drop-Wave function in two dimensions.
6. Weierstrass function (weierstrass_func)
The Weierstrass function [32] is multimodal and it is continuous everywhere but only differentiable on a set of points. It is a computationally expensive function. The suggested search area is the hypercube [−0.5, 0.5]^D. In this search area, the global minimum is unique, and it is f06(x*) = 0 at x* = {0, 0, …, 0}. Note that if a larger search area is considered, then there might be multiple global optima, as the function is periodic. For this reason, it is strongly suggested to use the previously mentioned search area of [−0.5, 0.5]^D. The general formulation of the function is:
$$f_{06}(\mathbf{x}) = \sum_{i=1}^{D}\left( \sum_{k=0}^{k_{max}} a^k \cos\left(2\pi b^k \left(x_i + 0.5\right)\right) \right) - D\sum_{k=0}^{k_{max}} a^k \cos\left(\pi b^k\right), \qquad a = 0.5,\; b = 3,\; k_{max} = 20$$
Figure A6 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{06}(x_1, x_2) = \sum_{k=0}^{k_{max}} a^k \cos\left(2\pi b^k \left(x_1 + 0.5\right)\right) + \sum_{k=0}^{k_{max}} a^k \cos\left(2\pi b^k \left(x_2 + 0.5\right)\right) - 2\sum_{k=0}^{k_{max}} a^k \cos\left(\pi b^k\right)$$
Figure A6. F06—Weierstrass function in two dimensions.
7. Alpine 1 function (alpine1_func)
The Alpine 1 function is a non-convex, multimodal, differentiable function. The suggested search area is the hypercube [−10, 10]^D. The global minimum is f07(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{07}(\mathbf{x}) = \sum_{i=1}^{D} \left| x_i \sin\left(x_i\right) + 0.1 x_i \right|$$
Figure A7 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{07}(x_1, x_2) = \left| x_1 \sin\left(x_1\right) + 0.1 x_1 \right| + \left| x_2 \sin\left(x_2\right) + 0.1 x_2 \right|$$
Figure A7. F07—Alpine 1 function in two dimensions.
8. Ackley’s function (ackley_func)
Ackley’s function [32,33,34] is non-convex and multimodal, having many local optima with the global optimum located in a very small basin. The suggested search area is the hypercube [−32.768, 32.768]^D. The global minimum is f08(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{08}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos\left(2\pi x_i\right)\right) + e + 20$$
Figure A8 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{08}(x_1, x_2) = -20\exp\left(-0.2\sqrt{0.5\left(x_1^2 + x_2^2\right)}\right) - \exp\left(0.5\left(\cos\left(2\pi x_1\right) + \cos\left(2\pi x_2\right)\right)\right) + e + 20$$
Figure A8. F08—Ackley’s function in two dimensions.
9. Griewank’s function (griewank_func)
Griewank’s function [32,33] is a multimodal function with many regularly distributed local minima. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f09(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{09}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$$
Figure A9 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{09}(x_1, x_2) = \frac{x_1^2 + x_2^2}{4000} - \cos\left(x_1\right)\cos\left(\frac{x_2}{\sqrt{2}}\right) + 1$$
Figure A9. F09—Griewank’s function in two dimensions.
10. Rastrigin’s function (rastrigin_func)
Rastrigin’s function [32,33,34] is highly multimodal, with many regularly distributed local optima (roughly 10^D local optima). The suggested search area is the hypercube [−5.12, 5.12]^D. The global minimum is f10(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{10}(\mathbf{x}) = \sum_{i=1}^{D}\left( x_i^2 - 10\cos\left(2\pi x_i\right) \right) + 10D$$
Figure A10 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{10}(x_1, x_2) = x_1^2 + x_2^2 - 10\cos\left(2\pi x_1\right) - 10\cos\left(2\pi x_2\right) + 20$$
Figure A10. F10—Rastrigin’s function in two dimensions.
11. HappyCat function (happycat_func)
The HappyCat function [32] is multimodal, with the global minimum located in a curved, narrow valley. The suggested search area is the hypercube [−20, 20]^D. The global minimum is f11(x*) = 0 at x* = {−1, −1, …, −1}. The general formulation of the function is:
$$f_{11}(\mathbf{x}) = \left| \sum_{i=1}^{D} x_i^2 - D \right|^{1/4} + \frac{0.5\sum_{i=1}^{D} x_i^2 + \sum_{i=1}^{D} x_i}{D} + 0.5$$
Figure A11 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{11}(x_1, x_2) = \left| x_1^2 + x_2^2 - 2 \right|^{1/4} + 0.25\left(x_1^2 + x_2^2\right) + 0.5\left(x_1 + x_2\right) + 0.5$$
Figure A11. F11—HappyCat function in two dimensions.
12. HGBat function (hgbat_func)
The HGBat function [32] is similar to the HappyCat function but even more complex. It is a multimodal function. The suggested search area is the hypercube [−15, 15]^D. The global minimum is f12(x*) = 0 at x* = {−1, −1, …, −1}. The general formulation of the function is:
$$f_{12}(\mathbf{x}) = \left| \left(\sum_{i=1}^{D} x_i^2\right)^2 - \left(\sum_{i=1}^{D} x_i\right)^2 \right|^{1/2} + \frac{0.5\sum_{i=1}^{D} x_i^2 + \sum_{i=1}^{D} x_i}{D} + 0.5$$
Figure A12 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{12}(x_1, x_2) = \left| \left(x_1^2 + x_2^2\right)^2 - \left(x_1 + x_2\right)^2 \right|^{1/2} + 0.25\left(x_1^2 + x_2^2\right) + 0.5\left(x_1 + x_2\right) + 0.5$$
Figure A12. F12—HGBat function in two dimensions.
13. Rosenbrock’s function (rosenbrock_func)
Rosenbrock’s function [33] is a classic optimization problem, also known as Rosenbrock’s valley or the Banana function. The global optimum lies inside a long, narrow, parabola-shaped flat valley. Finding the valley is trivial, but converging to the global optimum is difficult. The suggested search area is the hypercube [−10, 10]^D. The global minimum is f13(x*) = 0 at x* = {1, 1, …, 1}. The general formulation of the function is:
$$f_{13}(\mathbf{x}) = \sum_{i=1}^{D-1}\left( 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right)$$
Figure A13 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{13}(x_1, x_2) = 100\left(x_2 - x_1^2\right)^2 + \left(x_1 - 1\right)^2$$
Figure A13. F13—Rosenbrock’s function in two dimensions.
14. High Conditioned Elliptic function (ellipt_func)
The High Conditioned Elliptic function [32] is a unimodal, globally quadratic, and ill-conditioned function with smooth local irregularities. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f14(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{14}(\mathbf{x}) = \sum_{i=1}^{D} \left(10^6\right)^{\frac{i-1}{D-1}} x_i^2$$
Figure A14 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{14}(x_1, x_2) = x_1^2 + 10^6 x_2^2$$
Figure A14. F14—High Conditioned Elliptic function in two dimensions.
15. Discus function (discus_func)
The Discus function is a globally quadratic unimodal function with smooth local irregularities, where a single direction in the search space is thousands of times more sensitive than all others (the conditioning is about 10^6). The suggested search area is the hypercube [−100, 100]^D. The global minimum is f15(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{15}(\mathbf{x}) = 10^6 x_1^2 + \sum_{i=2}^{D} x_i^2$$
Figure A15 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{15}(x_1, x_2) = 10^6 x_1^2 + x_2^2$$
Figure A15. F15—Discus function in two dimensions.
16. Bent Cigar function (bent_cigar_func)
The Bent Cigar function is unimodal and nonseparable, with the optimum located in a smooth but very narrow valley. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f16(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{16}(\mathbf{x}) = x_1^2 + 10^6 \sum_{i=2}^{D} x_i^2$$
Figure A16 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{16}(x_1, x_2) = x_1^2 + 10^6 x_2^2$$
We notice that in the 2D case, functions f14, f15, and f16 give essentially the same optimization problem, but for D > 2 this is not the case.
Figure A16. F16—Bent Cigar function in two dimensions.
17. Perm D, Beta function (permdb_func)
The Perm D, Beta function is a unimodal function. The suggested search area is the hypercube [−D, D]^D, since the global minimum f17(x*) = 0 is at x* = {1, 2, …, D}; this choice ensures that the minimum always lies inside the search area. In the present study, since D = 50 is the max. number of dimensions considered, and to keep things consistent, we use the search range [−50, 50]^D for all the cases considered (for all dimensions).
The general formulation of the function is:
$$f_{17}(\mathbf{x}) = \sum_{i=1}^{D}\left( \sum_{j=1}^{D} \left(j^i + \beta\right)\left( \left(\frac{x_j}{j}\right)^i - 1 \right) \right)^2, \qquad \beta = 0.5$$
Figure A17 depicts the function in the 2D case (D = 2) for the search range considered, [−50, 50]^2, and the zoomed case [−5, 5]^2. In this case, the formula is:
$$f_{17}(x_1, x_2) = \left( (1+\beta)\left(x_1 - 1\right) + (2+\beta)\left(\frac{x_2}{2} - 1\right) \right)^2 + \left( (1+\beta)\left(x_1^2 - 1\right) + (4+\beta)\left(\frac{x_2^2}{4} - 1\right) \right)^2, \qquad \beta = 0.5$$
By setting β = 0.5 in the 2D case, we obtain:
$$f_{17}(x_1, x_2) = \left(1.5x_1 + 1.25x_2 - 4\right)^2 + \left(1.5x_1^2 + 1.125x_2^2 - 6\right)^2$$
Figure A17. F17—Perm D, Beta function in two dimensions.
For illustration purposes, Figure A18 depicts the same 2D function in the range [−20, 20]^2 and the zoomed case [−2, 2]^2.
Figure A18. F17—Perm D, Beta function in two dimensions (closer look).
18. Schaffer’s F7 function (schafferf7_func)
Schaffer’s F7 function [32,34] is multimodal and nonseparable. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f18(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{18}(\mathbf{x}) = \left( \frac{1}{D-1} \sum_{i=1}^{D-1} \left( \sqrt{s_i} + \sqrt{s_i}\,\sin^2\left(50\,s_i^{1/5}\right) \right) \right)^2, \qquad s_i = \sqrt{x_i^2 + x_{i+1}^2}$$
Figure A19 depicts the function in the 2D case (D = 2). In this case, since D − 1 = 1, the formula is:
$$f_{18}(x_1, x_2) = \left( \sqrt{s_1} + \sqrt{s_1}\,\sin^2\left(50\,s_1^{1/5}\right) \right)^2, \qquad s_1 = \sqrt{x_1^2 + x_2^2}$$
Figure A19. F18—Schaffer’s F7 function in two dimensions.
19. Expanded Schaffer’s F6 function (expschafferf6_func)
The Expanded Schaffer’s F6 function is a multidimensional function based on Schaffer’s F6 function [34]. It is multimodal and nonseparable. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f19(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{19}(\mathbf{x}) = \sum_{i=1}^{D-1} g\left(x_i, x_{i+1}\right) + g\left(x_D, x_1\right), \qquad g(x, y) = 0.5 + \frac{\sin^2\left(\sqrt{x^2 + y^2}\right) - 0.5}{\left(1 + 0.001\left(x^2 + y^2\right)\right)^2}$$
Figure A20 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{19}(x_1, x_2) = 1 + \frac{2\sin^2\left(\sqrt{x_1^2 + x_2^2}\right) - 1}{\left(1 + 0.001\left(x_1^2 + x_2^2\right)\right)^2}$$
Figure A20. F19—Expanded Schaffer’s F6 function in two dimensions.
20. Rotated Hyper-ellipsoid function (rothellipsoid_func)
The Rotated Hyper-ellipsoid function is similar to the Ellipsoid function. It is continuous, convex, and unimodal. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f20(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{20}(\mathbf{x}) = \sum_{i=1}^{D} \sum_{j=1}^{i} x_j^2 = \sum_{i=1}^{D} \left(D + 1 - i\right) x_i^2$$
Figure A21 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{20}(x_1, x_2) = 2x_1^2 + x_2^2$$
Figure A21. F20—Rotated Hyper-ellipsoid function in two dimensions.
21. Schwefel function (schwefel_func)
The Schwefel function [33,34] is quite complex, with multiple local minima. The suggested search area is the hypercube [−500, 500]^D. The global minimum is f21(x*) = 0 at x* = {c, c, …, c}, where c = 420.968746359982025. The general formulation of the function is:
$$f_{21}(\mathbf{x}) = -\sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right) + 418.9828872724337\,D$$
In the literature, the function is also found with the constant value 418.9829∙D, where the optimum location is reported as c = 420.9687 [34]. This formulation is not very precise. For details on this and a relevant detailed investigation of the function, please see Appendix B.
Figure A22 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{21}(x_1, x_2) = -x_1 \sin\left(\sqrt{|x_1|}\right) - x_2 \sin\left(\sqrt{|x_2|}\right) + 837.9657745448674$$
Figure A22. F21—Schwefel function in two dimensions.
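A one-line vectorized MATLAB sketch of F21, using the precise constant, could look as follows (illustrative; see schwefel_func in the repository for the reference implementation):

    % Vectorized sketch of the Schwefel function (F21)
    function f = schwefel_sketch(x)
        f = -sum(x.*sin(sqrt(abs(x)))) + 418.9828872724337*numel(x);
    end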
22. Sum of Different Powers 2 function (sumpow2_func)
The Sum of Different Powers 2 function [32] is similar to the Sum of Different Powers function, but its formulation is slightly different. It is unimodal and nonseparable, with different sensitivities for the various design variables. The suggested search area is again the hypercube [−10, 10]^D. The global minimum is f22(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{22}(\mathbf{x}) = \sum_{i=1}^{D} |x_i|^{\,2 + \frac{4(i-1)}{D-1}}$$
Figure A23 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{22}(x_1, x_2) = |x_1|^2 + |x_2|^6$$
Figure A23. F22—Sum of Different Powers 2 function in two dimensions.
23. Xin-She Yang’s 1 function (xinsheyang1_func)
Xin-She Yang’s 1 function [33] is nonconvex and nonseparable. The function is not smooth, and its derivatives are not well-defined at the optimum. The suggested search area is the hypercube [−2π, 2π]^D. The global minimum is f23(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{23}(\mathbf{x}) = \left( \sum_{i=1}^{D} |x_i| \right) \exp\left( -\sum_{i=1}^{D} \sin\left(x_i^2\right) \right)$$
Figure A24 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{23}(x_1, x_2) = \left( |x_1| + |x_2| \right) \exp\left( -\sin\left(x_1^2\right) - \sin\left(x_2^2\right) \right)$$
Figure A24. F23—Xin-She Yang’s 1 function in two dimensions.
24. Schwefel 2.21 function (schwefel221_func)
The Schwefel 2.21 function is convex, continuous, and unimodal. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f24(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{24}(\mathbf{x}) = \max_{i=1,\dots,D} |x_i|$$
Figure A25 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{24}(x_1, x_2) = \max\left(|x_1|, |x_2|\right)$$
Figure A25. F24—Schwefel 2.21 function in two dimensions.
25. Schwefel 2.22 function (schwefel222_func)
The Schwefel 2.22 function is convex, continuous, separable, and unimodal. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f25(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{25}(\mathbf{x}) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$$
Figure A26 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{25}(x_1, x_2) = |x_1| + |x_2| + |x_1 x_2|$$
Figure A26. F25—Schwefel 2.22 function in two dimensions.
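Both Schwefel variants reduce to one-line vectorized MATLAB expressions; a minimal sketch (illustrative handles, not the repository code):

    f24 = @(x) max(abs(x));                    % Schwefel 2.21 (F24)
    f25 = @(x) sum(abs(x)) + prod(abs(x));     % Schwefel 2.22 (F25)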
26. Salomon function (salomon_func)
The Salomon function is nonconvex, continuous, multimodal, and nonseparable. The suggested search area is the hypercube [−20, 20]^D. The global minimum is f26(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{26}(\mathbf{x}) = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{D} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{D} x_i^2}$$
Figure A27 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{26}(x_1, x_2) = 1 - \cos\left(2\pi\sqrt{x_1^2 + x_2^2}\right) + 0.1\sqrt{x_1^2 + x_2^2}$$
Figure A27. F26—Salomon function in two dimensions.
27. Modified Ridge function (modridge_func)
The original Ridge function has the form:
$$f_{Ridge}(\mathbf{x}) = x_1 + d\left( \sum_{i=2}^{D} x_i^2 \right)^{a}$$
In this formula, d and a are constants, usually set to d = 1 and a = 0.1. Other values (d = 2, a = 0.5, etc.) can also be found in the literature. The Modified Ridge function proposed in this study has the form:
$$f_{27}(\mathbf{x}) = |x_1| + 2\left( \sum_{i=2}^{D} x_i^2 \right)^{0.1}$$
The suggested search area is the hypercube [−100, 100]^D. The global minimum is f27(x*) = 0 at x* = {0, 0, …, 0}.
Figure A28 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{27}(x_1, x_2) = |x_1| + 2|x_2|^{0.2}$$
Figure A28. F27—Modified Ridge function in two dimensions.
28. Zakharov function (zakharov_func)
The Zakharov function is continuous and unimodal. The suggested search area is the hypercube [−10, 10]^D. The global minimum is f28(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{28}(\mathbf{x}) = \sum_{i=1}^{D} x_i^2 + \left( \sum_{i=1}^{D} 0.5\,i\,x_i \right)^2 + \left( \sum_{i=1}^{D} 0.5\,i\,x_i \right)^4$$
Figure A29 depicts the function in the 2D case (D = 2). In this case, the formula is:
$$f_{28}(x_1, x_2) = x_1^2 + x_2^2 + \left(0.5x_1 + x_2\right)^2 + \left(0.5x_1 + x_2\right)^4$$
Figure A29. F28—Zakharov function in two dimensions.
29. Modified Xin-She Yang’s 3 function (modxinsyang3_func)
The original Xin-She Yang’s 3 function is the third function proposed in the excellent work by Xin-She Yang [33]. The Modified Xin-She Yang’s 3 function proposed in this study is based on that, with some modifications. It is a standing-wave function with a defect, which is nonconvex and nonseparable, with multiple local minima and a unique global minimum. The suggested search area is the hypercube [−20, 20]^D. The global minimum is f29(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
$$f_{29}(\mathbf{x}) = 10^4 \left( 1 + \left[ \exp\left( -\sum_{i=1}^{D} \left(\frac{x_i}{15}\right)^{10} \right) - 2\exp\left( -\sum_{i=1}^{D} x_i^2 \right) \right] \prod_{i=1}^{D} \cos^2\left(x_i\right) \right)$$
Figure A30 depicts the function in the 2D case (D = 2). The function is simplified as:
$$f_{29}(x_1, x_2) = 10^4 \left( 1 + \left[ \exp\left( -\left(\frac{x_1}{15}\right)^{10} - \left(\frac{x_2}{15}\right)^{10} \right) - 2\exp\left( -x_1^2 - x_2^2 \right) \right] \cos^2\left(x_1\right)\cos^2\left(x_2\right) \right)$$
Figure A30. F29—Modified Xin-She Yang’s 3 function in two dimensions.
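Since F29 is one of the two functions newly proposed in this study, a compact vectorized sketch may be helpful (illustrative naming; see the repository for the reference implementation):

    % Vectorized sketch of the Modified Xin-She Yang's 3 function (F29)
    function f = f29_sketch(x)
        bracket = exp(-sum((x/15).^10)) - 2*exp(-sum(x.^2));
        f = 1e4*(1 + bracket*prod(cos(x).^2));
    end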
30. Modified Xin-She Yang’s 5 function (modxinsyang5_func)
The original Xin-She Yang’s 5 function is the fifth function proposed in the work by Xin-She Yang [33]. The Modified Xin-She Yang’s 5 function proposed in this study is based on that, with some minor modifications. The suggested search area is the hypercube [−100, 100]^D. The global minimum is f30(x*) = 0 at x* = {0, 0, …, 0}. The general formulation of the function is:
f_{30}(\mathbf{x}) = 10^4\left[1 + \left(\sum_{i=1}^{D}\sin^2(x_i) - \exp\left(-\sum_{i=1}^{D} x_i^2\right)\right)\exp\left(-\sum_{i=1}^{D}\sin^2\left(\sqrt{|x_i|}\right)\right)\right]
The function has multiple local minima, but the global minimum is unique. Figure A31 depicts the function in the 2D case (D = 2) where its landscape looks like a wonderful candlestick [33]. The function is simplified as:
f_{30}(x_1, x_2) = 10^4\left[1 + \left(\sin^2(x_1) + \sin^2(x_2) - \exp\left(-x_1^2 - x_2^2\right)\right)\exp\left(-\sin^2\left(\sqrt{|x_1|}\right) - \sin^2\left(\sqrt{|x_2|}\right)\right)\right]
Figure A31. F30—Modified Xin-She Yang’s 5 function in two dimensions.
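Again, the formula maps directly to MATLAB; a sketch (illustrative; the modxinsyang5_func file provided with the dataset is the authoritative implementation):
        function f = modxinsyang5_sketch(x)
        % Modified Xin-She Yang's 5 function; x is a row vector (illustrative sketch)
        t = sum(sin(x).^2) - exp(-sum(x.^2));              % equals -1 at x = 0
        f = 1e4*(1 + t*exp(-sum(sin(sqrt(abs(x))).^2)));   % f(0) = 1e4*(1 - 1) = 0
        end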

Appendix B. Investigation of the Schwefel Function (F21)

In the literature [33,34], the Schwefel function (F21 in this study) is usually given with the value 418.9829∙D in its formula, and the optimum location is reported with c = 420.9687. This formulation is not very precise, compared to the formulation adopted in this study. Indeed, to find the correct values, one can take the one-dimensional case (D = 1) and find the minimum of the function
y = -x\sin\left(\sqrt{|x|}\right)
in the search area [−500, 500]. A plot of this function is presented in Figure A32.
Figure A32. Plot of the function y = −x·sin(√|x|) for −500 < x < 500.
As shown in the figure, the minimum is obviously within the range [400, 500] for x. To find the exact location of the minimum, we can omit the absolute term (since x > 0 in this range) and find the value of x ∈ [400, 500] that makes the derivative of the function equal to zero. In this case, we have:
y(x) = -x\sin\left(\sqrt{x}\right) \quad \text{for } x \geq 0
y'(x) = -\sin\left(\sqrt{x}\right) - \frac{\sqrt{x}\,\cos\left(\sqrt{x}\right)}{2} \quad \text{for } x \geq 0
By using MATLAB and the function “vpasolve”, we can numerically find the root of y’(x) for x ∈ [400, 500] as follows (code in MATLAB):
        syms x
        y = -x*sin(sqrt(x));      % valid for x >= 0, where |x| = x
        yd = diff(y, x);          % derivative y'(x)
        s = vpasolve(yd == 0, x, [400 500])
Then we obtain the result:
s = 420.96874635998202731184436501869
We substitute the above s value into the function y (as x), to find the minimum value, as follows:
        fmin = subs(y, x, s)
and we obtain
fmin = −418.9828872724337062747864351956
Practically, there is no need to take into account so many decimal places. In the formulation proposed in this study, the formula for f21 is the following:
f_{21}(\mathbf{x}) = -\sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right) + 418.9828872724337\,D
For the above function, the global minimum is f21(x*) = 0 at x* = {c, c, …, c}, where c = 420.968746359982025.
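As a quick sanity check (an illustrative snippet; the variable names are ours), evaluating the one-dimensional case of f21 at the optimum should return zero to within double precision:
        c = 420.968746359982025;
        f21_at_opt = -c*sin(sqrt(abs(c))) + 418.9828872724337   % approx. 0 (round-off only)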

Appendix C. Tables with the Numerical Results

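Each entry in the tables below aggregates the best objective values obtained in 50 independent runs of an optimizer on a function. The aggregation itself is one MATLAB call per statistic; an illustrative sketch, assuming a 1-by-50 vector results of final objective values (the variable name is ours):
        % results: 1-by-50 vector of best objective values from the 50 runs
        avgVal = mean(results);     % averages, as reported in Tables A1 and A2
        medVal = median(results);   % medians, as reported in Tables A3 and A4
        stdVal = std(results);      % standard deviations, as reported in Tables A5 and A6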
Table A1. Average values (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 7.59E-04 | 7.37E-03 | 1.38E-14 | 2.52E-04 | 5.65E-08 | 1.05E-15
F02 | Ellipsoid | 1.02E-03 | 2.86E-02 | 9.16E-14 | 1.00E-03 | 2.49E-07 | 1.44E-13
F03 | Sum of Different Powers | 4.25E-05 | 2.70E-07 | 5.68E-11 | 1.85E-06 | 1.52E-14 | 2.64E-08
F04 | Quintic | 2.12E-01 | 2.42E-01 | 3.78E-06 | 1.43E-01 | 2.47E-03 | 2.27E-06
F05 | Drop-Wave | 1.86E-01 | 6.38E-02 | 8.61E-01 | 4.61E-01 | 1.45E-01 | 9.18E-01
F06 | Weierstrass | 3.21E-01 | 4.78E-02 | 5.89E+00 | 7.41E-01 | 1.13E-01 | 1.20E+01
F07 | Alpine 1 | 8.21E-03 | 2.32E-02 | 2.52E-06 | 1.67E-03 | 8.92E-04 | 1.58E-06
F08 | Ackley’s | 2.60E-01 | 3.21E-02 | 1.89E+01 | 1.48E-01 | 4.63E-02 | 1.95E+01
F09 | Griewank’s | 1.48E-02 | 1.58E-01 | 3.35E+00 | 1.10E-02 | 1.46E-01 | 6.38E-01
F10 | Rastrigin’s | 5.93E-01 | 3.49E+00 | 2.55E+01 | 1.10E+00 | 1.10E+01 | 6.77E+01
F11 | HappyCat | 4.66E-01 | 1.89E-01 | 4.18E-02 | 9.48E-01 | 2.39E-01 | 1.56E-01
F12 | HGBat | 5.27E-01 | 2.65E-01 | 4.90E-01 | 8.87E-01 | 4.04E-01 | 5.20E-01
F13 | Rosenbrock’s | 2.57E+00 | 2.54E+00 | 5.50E-01 | 6.50E+00 | 1.35E+01 | 4.78E-01
F14 | High Cond. Elliptic | 7.39E+01 | 6.00E+01 | 3.50E-11 | 4.88E+01 | 5.44E+03 | 3.57E-11
F15 | Discus | 5.80E+02 | 1.86E+02 | 1.50E-11 | 1.92E+03 | 7.16E-06 | 7.18E-12
F16 | Bent Cigar | 7.71E+02 | 2.42E+03 | 1.16E-10 | 9.10E+01 | 3.15E+01 | 9.83E-11
F17 | Perm D, Beta | 3.86E+03 | 7.69E+02 | 7.81E+01 | 4.77E+15 | 9.91E+14 | 8.35E+15
F18 | Schaffer’s F7 | 3.08E-01 | 1.01E-01 | 7.12E+01 | 3.76E-01 | 1.50E-01 | 7.24E+01
F19 | Expanded Schaffer’s F6 | 9.11E-01 | 7.40E-01 | 2.32E+00 | 2.40E+00 | 2.76E+00 | 4.64E+00
F20 | Rotated Hyper-ellipsoid | 2.03E-03 | 9.91E-03 | 6.24E-14 | 6.20E-04 | 1.30E-07 | 8.83E-14
F21 | Schwefel | 2.58E+02 | 2.84E+02 | 9.63E+02 | 5.80E+02 | 9.50E+02 | 1.83E+03
F22 | Sum of Dif. Powers 2 | 5.61E-05 | 8.71E-07 | 9.31E-13 | 1.54E-06 | 7.59E-14 | 1.91E-10
F23 | Xin-She Yang’s 1 | 4.84E-02 | 9.07E-02 | 1.68E-01 | 8.94E-04 | 2.60E-03 | 2.56E-03
F24 | Schwefel 2.21 | 4.13E-01 | 6.92E-02 | 2.81E-07 | 1.26E+00 | 2.60E-02 | 2.59E-07
F25 | Schwefel 2.22 | 2.24E-02 | 2.51E-01 | 5.09E+00 | 5.87E-02 | 4.81E-04 | 6.53E+01
F26 | Salomon | 2.50E-01 | 1.06E-01 | 2.16E+00 | 3.08E-01 | 2.10E-01 | 3.11E+00
F27 | Modified Ridge | 8.25E-01 | 1.09E+00 | 6.08E-02 | 7.40E-01 | 3.32E-01 | 8.00E-02
F28 | Zakharov | 2.50E-01 | 2.54E-03 | 6.01E-14 | 1.95E+00 | 2.03E+00 | 8.05E-14
F29 | Mod. Xin-She Yang’s 3 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
F30 | Mod. Xin-She Yang’s 5 | 9.61E+03 | 1.00E+04 | 1.15E+04 | 1.00E+04 | 1.00E+04 | 1.03E+04
Table A2. Average values (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 1.77E-02 | 3.43E-10 | 3.38E-15 | 2.34E-01 | 3.46E-06 | 6.00E-15
F02 | Ellipsoid | 2.63E-01 | 9.61E-09 | 3.26E-13 | 5.35E+00 | 1.12E-04 | 2.87E-13
F03 | Sum of Different Powers | 1.30E-07 | 2.33E-11 | 1.93E+10 | 6.21E-06 | 1.98E+00 | 3.64E+25
F04 | Quintic | 5.76E+00 | 3.20E-03 | 6.61E-06 | 2.06E+01 | 1.89E-01 | 1.76E-05
F05 | Drop-Wave | 8.67E-01 | 7.27E-01 | 9.85E-01 | 9.30E-01 | 9.10E-01 | 9.72E-01
F06 | Weierstrass | 7.89E+00 | 4.47E+00 | 3.60E+01 | 1.88E+01 | 1.45E+01 | 5.70E+01
F07 | Alpine 1 | 3.81E-02 | 2.05E-06 | 4.47E-06 | 2.59E-01 | 2.72E-04 | 7.77E-06
F08 | Ackley’s | 9.51E-01 | 2.02E+00 | 1.95E+01 | 1.52E+00 | 5.37E+00 | 1.95E+01
F09 | Griewank’s | 4.43E-03 | 1.50E-02 | 1.48E-14 | 9.88E-03 | 3.86E-02 | 1.06E-14
F10 | Rastrigin’s | 8.32E+00 | 8.37E+01 | 1.78E+02 | 2.44E+01 | 1.51E+02 | 2.84E+02
F11 | HappyCat | 1.07E+00 | 4.86E-01 | 7.18E-02 | 1.12E+00 | 5.81E-01 | 6.50E-02
F12 | HGBat | 1.18E+00 | 6.06E-01 | 5.00E-01 | 1.25E+00 | 5.86E-01 | 5.00E-01
F13 | Rosenbrock’s | 3.88E+01 | 5.27E+01 | 1.36E+00 | 9.74E+01 | 9.53E+01 | 7.97E-01
F14 | High Cond. Elliptic | 1.72E+02 | 5.10E+01 | 1.46E-11 | 1.05E+03 | 5.90E+02 | 7.28E-12
F15 | Discus | 1.03E+03 | 2.00E+02 | 5.12E-12 | 2.24E+03 | 3.03E-07 | 6.01E-13
F16 | Bent Cigar | 1.26E+04 | 1.10E-03 | 1.62E-10 | 2.08E+05 | 3.48E-01 | 5.71E-10
F17 | Perm D, Beta | 7.45E+86 | 7.19E+82 | 1.10E+80 | 8.96E+161 | 6.44E+162 | 2.34E+168
F18 | Schaffer’s F7 | 5.36E-01 | 7.84E+00 | 7.59E+01 | 6.83E-01 | 1.80E+01 | 7.56E+01
F19 | Expanded Schaffer’s F6 | 1.08E+01 | 1.17E+01 | 1.39E+01 | 1.94E+01 | 2.10E+01 | 2.33E+01
F20 | Rotated Hyper-ellipsoid | 3.74E-01 | 4.43E-09 | 2.14E-13 | 6.73E+00 | 2.53E-05 | 3.00E-13
F21 | Schwefel | 3.42E+03 | 4.07E+03 | 5.77E+03 | 7.10E+03 | 7.20E+03 | 9.96E+03
F22 | Sum of Dif. Powers 2 | 3.01E-04 | 1.70E-14 | 1.05E-06 | 1.33E-02 | 7.21E-11 | 1.01E-04
F23 | Xin-She Yang’s 1 | 7.13E-12 | 2.56E-11 | 7.03E-10 | 2.47E-20 | 1.35E-19 | 1.02E-10
F24 | Schwefel 2.21 | 2.03E+00 | 1.34E+01 | 4.81E-07 | 2.17E+00 | 3.34E+01 | 8.34E-07
F25 | Schwefel 2.22 | 1.10E+00 | 2.20E+00 | 1.12E+08 | 5.80E+00 | 7.00E+00 | 5.61E+55
F26 | Salomon | 5.78E-01 | 1.05E+00 | 5.85E+00 | 7.54E-01 | 2.11E+00 | 7.31E+00
F27 | Modified Ridge | 1.34E+00 | 2.18E-01 | 8.87E-02 | 1.73E+00 | 4.57E-01 | 1.59E-01
F28 | Zakharov | 2.48E+02 | 2.41E+02 | 1.35E-13 | 8.32E+02 | 6.12E+02 | 1.40E-12
F29 | Mod. Xin-She Yang’s 3 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
F30 | Mod. Xin-She Yang’s 5 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
Table A3. Median values (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 3.48E-04 | 6.70E-04 | 3.77E-16 | 7.29E-05 | 1.95E-08 | 6.72E-16
F02 | Ellipsoid | 4.27E-04 | 3.72E-03 | 1.59E-14 | 2.46E-04 | 2.40E-08 | 4.64E-14
F03 | Sum of Different Powers | 1.23E-05 | 7.91E-09 | 1.71E-15 | 1.59E-07 | 1.72E-17 | 1.78E-12
F04 | Quintic | 1.56E-01 | 1.27E-01 | 1.02E-06 | 7.70E-02 | 3.41E-04 | 2.02E-06
F05 | Drop-Wave | 2.14E-01 | 6.38E-02 | 9.08E-01 | 5.22E-01 | 2.14E-01 | 9.57E-01
F06 | Weierstrass | 2.59E-01 | 3.78E-02 | 5.68E+00 | 6.27E-01 | 1.48E-03 | 1.19E+01
F07 | Alpine 1 | 2.89E-03 | 8.94E-04 | 7.31E-07 | 8.03E-04 | 5.09E-06 | 1.35E-06
F08 | Ackley’s | 2.80E-02 | 1.35E-02 | 1.92E+01 | 1.04E-02 | 4.93E-05 | 1.96E+01
F09 | Griewank’s | 9.05E-05 | 1.41E-01 | 3.29E+00 | 6.29E-06 | 8.27E-02 | 6.52E-02
F10 | Rastrigin’s | 6.30E-02 | 2.93E+00 | 2.24E+01 | 9.95E-01 | 8.95E+00 | 6.57E+01
F11 | HappyCat | 3.67E-01 | 1.73E-01 | 3.72E-02 | 1.00E+00 | 2.42E-01 | 1.29E-01
F12 | HGBat | 4.29E-01 | 2.43E-01 | 4.99E-01 | 8.92E-01 | 3.88E-01 | 5.00E-01
F13 | Rosenbrock’s | 2.07E+00 | 1.88E+00 | 5.29E-11 | 4.05E+00 | 6.03E+00 | 6.02E-11
F14 | High Cond. Elliptic | 1.58E+01 | 1.46E+00 | 4.69E-11 | 1.69E+00 | 3.23E-04 | 2.03E-11
F15 | Discus | 6.04E+00 | 4.27E-02 | 1.18E-12 | 5.51E-01 | 1.81E-07 | 3.50E-13
F16 | Bent Cigar | 2.31E+02 | 2.78E+02 | 5.35E-11 | 1.84E+01 | 1.14E-02 | 1.61E-11
F17 | Perm D, Beta | 7.01E+02 | 1.60E+02 | 1.13E-01 | 1.37E+15 | 1.60E+14 | 5.98E+14
F18 | Schaffer’s F7 | 1.12E-01 | 2.62E-02 | 7.55E+01 | 1.92E-01 | 4.29E-03 | 7.61E+01
F19 | Expanded Schaffer’s F6 | 9.48E-01 | 7.07E-01 | 2.43E+00 | 2.46E+00 | 2.78E+00 | 4.73E+00
F20 | Rotated Hyper-ellipsoid | 7.04E-04 | 2.34E-03 | 1.18E-14 | 2.37E-04 | 4.37E-08 | 3.27E-14
F21 | Schwefel | 2.47E+02 | 2.38E+02 | 9.98E+02 | 5.83E+02 | 9.51E+02 | 1.83E+03
F22 | Sum of Dif. Powers 2 | 9.14E-06 | 8.51E-09 | 9.93E-16 | 2.93E-07 | 2.00E-15 | 4.16E-14
F23 | Xin-She Yang’s 1 | 4.18E-02 | 9.14E-02 | 2.09E-01 | 8.53E-04 | 2.62E-03 | 2.62E-03
F24 | Schwefel 2.21 | 1.56E-01 | 4.51E-02 | 2.77E-07 | 1.13E+00 | 1.74E-02 | 2.32E-07
F25 | Schwefel 2.22 | 1.93E-02 | 4.86E-02 | 1.09E-01 | 1.24E-02 | 1.88E-04 | 5.64E+01
F26 | Salomon | 2.00E-01 | 9.99E-02 | 2.50E+00 | 3.00E-01 | 2.00E-01 | 3.25E+00
F27 | Modified Ridge | 8.25E-01 | 1.08E+00 | 4.80E-02 | 7.30E-01 | 3.17E-01 | 6.13E-02
F28 | Zakharov | 5.72E-03 | 3.89E-04 | 4.06E-14 | 2.44E-01 | 1.84E-03 | 6.49E-14
F29 | Mod. Xin-She Yang’s 3 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
F30 | Mod. Xin-She Yang’s 5 | 1.00E+04 | 1.00E+04 | 1.13E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
Table A4. Median values (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 1.41E-02 | 5.30E-11 | 2.84E-15 | 1.84E-01 | 7.70E-09 | 2.55E-15
F02 | Ellipsoid | 1.64E-01 | 6.66E-10 | 7.86E-14 | 3.75E+00 | 2.69E-07 | 1.48E-13
F03 | Sum of Different Powers | 1.63E-08 | 1.84E-17 | 4.73E+07 | 7.15E-08 | 9.87E-08 | 1.44E+22
F04 | Quintic | 3.77E+00 | 1.09E-04 | 6.30E-06 | 2.05E+01 | 3.78E-03 | 1.14E-05
F05 | Drop-Wave | 8.73E-01 | 7.70E-01 | 9.85E-01 | 9.31E-01 | 9.20E-01 | 9.91E-01
F06 | Weierstrass | 7.52E+00 | 4.24E+00 | 3.62E+01 | 1.86E+01 | 1.43E+01 | 5.71E+01
F07 | Alpine 1 | 2.34E-02 | 3.99E-07 | 4.41E-06 | 2.30E-01 | 1.53E-05 | 7.75E-06
F08 | Ackley’s | 1.06E+00 | 2.01E+00 | 1.96E+01 | 1.58E+00 | 4.67E+00 | 1.96E+01
F09 | Griewank’s | 7.26E-04 | 9.86E-03 | 1.49E-14 | 7.64E-03 | 9.86E-03 | 1.05E-14
F10 | Rastrigin’s | 6.73E+00 | 7.21E+01 | 1.73E+02 | 2.25E+01 | 1.38E+02 | 2.70E+02
F11 | HappyCat | 1.06E+00 | 4.46E-01 | 5.49E-02 | 1.12E+00 | 5.69E-01 | 5.54E-02
F12 | HGBat | 1.18E+00 | 5.24E-01 | 5.00E-01 | 1.27E+00 | 4.10E-01 | 5.00E-01
F13 | Rosenbrock’s | 1.52E+01 | 2.94E+01 | 6.33E-11 | 9.95E+01 | 9.42E+01 | 6.59E-11
F14 | High Cond. Elliptic | 3.79E+01 | 3.47E-06 | 5.14E-12 | 5.98E+02 | 6.28E-04 | 3.86E-12
F15 | Discus | 1.25E-01 | 7.42E-10 | 4.47E-13 | 7.54E-01 | 5.49E-08 | 2.65E-13
F16 | Bent Cigar | 8.42E+03 | 8.94E-05 | 4.46E-11 | 1.50E+05 | 1.20E-02 | 5.15E-11
F17 | Perm D, Beta | 6.98E+82 | 6.14E+82 | 7.92E+78 | 3.62E+160 | 2.62E+162 | 1.88E+160
F18 | Schaffer’s F7 | 4.97E-01 | 6.93E+00 | 7.67E+01 | 6.67E-01 | 1.83E+01 | 7.56E+01
F19 | Expanded Schaffer’s F6 | 1.10E+01 | 1.20E+01 | 1.40E+01 | 1.96E+01 | 2.11E+01 | 2.33E+01
F20 | Rotated Hyper-ellipsoid | 1.89E-01 | 8.18E-10 | 1.49E-13 | 5.87E+00 | 4.16E-07 | 1.19E-13
F21 | Schwefel | 3.48E+03 | 4.16E+03 | 5.93E+03 | 6.91E+03 | 7.10E+03 | 9.99E+03
F22 | Sum of Dif. Powers 2 | 1.28E-04 | 1.68E-15 | 4.37E-07 | 5.83E-03 | 1.98E-12 | 7.28E-05
F23 | Xin-She Yang’s 1 | 6.75E-12 | 2.53E-11 | 4.41E-10 | 2.35E-20 | 1.36E-19 | 3.38E-12
F24 | Schwefel 2.21 | 2.00E+00 | 1.28E+01 | 2.86E-07 | 2.19E+00 | 3.38E+01 | 6.43E-07
F25 | Schwefel 2.22 | 9.57E-01 | 1.54E-04 | 3.49E+04 | 5.71E+00 | 2.96E-02 | 4.97E+23
F26 | Salomon | 6.00E-01 | 1.10E+00 | 6.10E+00 | 7.00E-01 | 2.10E+00 | 7.85E+00
F27 | Modified Ridge | 1.33E+00 | 2.06E-01 | 6.10E-02 | 1.72E+00 | 3.51E-01 | 1.23E-01
F28 | Zakharov | 2.38E+02 | 2.52E+02 | 8.77E-14 | 8.52E+02 | 5.98E+02 | 1.70E-13
F29 | Mod. Xin-She Yang’s 3 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
F30 | Mod. Xin-She Yang’s 5 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04 | 1.00E+04
Table A5. Standard deviation (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 1.10E-03 | 1.94E-02 | 4.00E-14 | 5.12E-04 | 1.29E-07 | 2.15E-15
F02 | Ellipsoid | 1.43E-03 | 9.20E-02 | 1.65E-13 | 2.56E-03 | 5.30E-07 | 1.88E-13
F03 | Sum of Different Powers | 6.83E-05 | 1.01E-06 | 2.80E-10 | 5.03E-06 | 7.80E-14 | 1.22E-07
F04 | Quintic | 1.46E-01 | 2.93E-01 | 1.48E-05 | 2.28E-01 | 7.64E-03 | 1.20E-06
F05 | Drop-Wave | 1.17E-01 | 7.59E-07 | 1.57E-01 | 1.40E-01 | 8.17E-02 | 1.61E-01
F06 | Weierstrass | 2.32E-01 | 4.01E-02 | 1.22E+00 | 6.43E-01 | 3.62E-01 | 2.38E+00
F07 | Alpine 1 | 1.33E-02 | 8.67E-02 | 7.94E-06 | 2.23E-03 | 5.87E-03 | 1.30E-06
F08 | Ackley’s | 7.18E-01 | 4.49E-02 | 1.64E+00 | 3.72E-01 | 2.26E-01 | 3.44E-01
F09 | Griewank’s | 3.78E-02 | 8.99E-02 | 1.93E+00 | 3.79E-02 | 1.61E-01 | 1.50E+00
F10 | Rastrigin’s | 9.86E-01 | 2.70E+00 | 1.79E+01 | 1.42E+00 | 7.93E+00 | 2.41E+01
F11 | HappyCat | 3.15E-01 | 7.90E-02 | 1.97E-02 | 3.53E-01 | 8.77E-02 | 1.03E-01
F12 | HGBat | 3.42E-01 | 1.06E-01 | 8.07E-02 | 3.39E-01 | 1.38E-01 | 1.45E-01
F13 | Rosenbrock’s | 4.00E+00 | 2.23E+00 | 1.36E+00 | 1.26E+01 | 2.65E+01 | 1.30E+00
F14 | High Cond. Elliptic | 1.49E+02 | 2.08E+02 | 2.89E-11 | 2.56E+02 | 3.07E+04 | 3.68E-11
F15 | Discus | 2.31E+03 | 1.30E+03 | 2.39E-11 | 6.68E+03 | 3.32E-05 | 1.72E-11
F16 | Bent Cigar | 1.65E+03 | 3.71E+03 | 1.35E-10 | 1.95E+02 | 1.93E+02 | 1.69E-10
F17 | Perm D, Beta | 9.50E+03 | 1.47E+03 | 2.54E+02 | 6.62E+15 | 2.43E+15 | 3.01E+16
F18 | Schaffer’s F7 | 4.03E-01 | 2.21E-01 | 2.19E+01 | 4.61E-01 | 3.38E-01 | 1.43E+01
F19 | Expanded Schaffer’s F6 | 5.12E-01 | 3.73E-01 | 2.65E-01 | 5.87E-01 | 5.36E-01 | 2.83E-01
F20 | Rotated Hyper-ellipsoid | 4.67E-03 | 1.82E-02 | 1.05E-13 | 1.04E-03 | 2.66E-07 | 1.24E-13
F21 | Schwefel | 1.49E+02 | 1.62E+02 | 2.92E+02 | 2.40E+02 | 3.07E+02 | 4.08E+02
F22 | Sum of Dif. Powers 2 | 1.30E-04 | 4.42E-06 | 3.74E-12 | 3.07E-06 | 3.22E-13 | 8.59E-10
F23 | Xin-She Yang’s 1 | 1.24E-02 | 2.85E-02 | 5.73E-02 | 2.39E-04 | 1.65E-04 | 7.09E-04
F24 | Schwefel 2.21 | 4.96E-01 | 6.56E-02 | 1.42E-07 | 6.22E-01 | 2.90E-02 | 1.05E-07
F25 | Schwefel 2.22 | 1.50E-02 | 1.23E+00 | 1.37E+01 | 1.75E-01 | 1.10E-03 | 3.79E+01
F26 | Salomon | 1.60E-01 | 2.37E-02 | 8.83E-01 | 8.45E-02 | 8.06E-02 | 1.17E+00
F27 | Modified Ridge | 1.54E-01 | 2.57E-01 | 4.52E-02 | 1.41E-01 | 1.13E-01 | 5.38E-02
F28 | Zakharov | 1.52E+00 | 5.37E-03 | 6.91E-14 | 4.86E+00 | 7.22E+00 | 5.77E-14
F29 | Mod. Xin-She Yang’s 3 | 0.00E+00 | 0.00E+00 | 9.51E-08 | 0.00E+00 | 0.00E+00 | 1.49E-07
F30 | Mod. Xin-She Yang’s 5 | 1.91E+03 | 2.58E+01 | 1.36E+03 | 2.47E-04 | 2.03E+00 | 4.86E+02
Table A6. Standard deviation (over 50 runs) of the optimum results, for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 1.27E-02 | 8.88E-10 | 3.80E-15 | 1.60E-01 | 2.37E-05 | 1.47E-14
F02 | Ellipsoid | 3.37E-01 | 3.08E-08 | 5.48E-13 | 3.99E+00 | 7.44E-04 | 4.77E-13
F03 | Sum of Different Powers | 3.28E-07 | 1.43E-10 | 1.21E+11 | 3.22E-05 | 1.38E+01 | 2.42E+26
F04 | Quintic | 5.37E+00 | 1.09E-02 | 1.54E-06 | 7.07E+00 | 1.07E+00 | 2.24E-05
F05 | Drop-Wave | 5.42E-02 | 9.40E-02 | 2.48E-03 | 2.13E-02 | 3.37E-02 | 1.30E-01
F06 | Weierstrass | 2.41E+00 | 2.33E+00 | 4.32E+00 | 3.29E+00 | 3.64E+00 | 5.26E+00
F07 | Alpine 1 | 4.78E-02 | 5.63E-06 | 1.92E-06 | 1.49E-01 | 8.96E-04 | 2.63E-06
F08 | Ackley’s | 5.34E-01 | 1.24E+00 | 1.73E-01 | 3.14E-01 | 2.45E+00 | 1.19E-01
F09 | Griewank’s | 1.32E-02 | 1.81E-02 | 5.24E-15 | 6.62E-03 | 8.21E-02 | 3.31E-15
F10 | Rastrigin’s | 5.49E+00 | 4.09E+01 | 5.48E+01 | 8.06E+00 | 4.78E+01 | 6.88E+01
F11 | HappyCat | 1.96E-01 | 1.43E-01 | 7.90E-02 | 1.55E-01 | 1.28E-01 | 5.26E-02
F12 | HGBat | 1.81E-01 | 2.83E-01 | 8.52E-04 | 1.19E-01 | 2.80E-01 | 3.84E-04
F13 | Rosenbrock’s | 4.17E+01 | 3.13E+01 | 1.89E+00 | 5.96E+01 | 3.75E+01 | 1.59E+00
F14 | High Cond. Elliptic | 6.12E+02 | 3.56E+02 | 2.72E-11 | 1.31E+03 | 2.81E+03 | 9.63E-12
F15 | Discus | 2.72E+03 | 1.40E+03 | 1.46E-11 | 4.01E+03 | 8.91E-07 | 7.70E-13
F16 | Bent Cigar | 1.24E+04 | 4.03E-03 | 4.07E-10 | 1.75E+05 | 1.28E+00 | 2.44E-09
F17 | Perm D, Beta | 4.07E+87 | 7.41E+82 | 2.63E+80 | Inf | Inf | Inf
F18 | Schaffer’s F7 | 1.88E-01 | 5.98E+00 | 8.63E+00 | 1.81E-01 | 6.34E+00 | 7.13E+00
F19 | Expanded Schaffer’s F6 | 1.22E+00 | 1.05E+00 | 5.40E-01 | 1.39E+00 | 1.35E+00 | 6.00E-01
F20 | Rotated Hyper-ellipsoid | 7.17E-01 | 1.17E-08 | 2.18E-13 | 3.89E+00 | 1.32E-04 | 6.49E-13
F21 | Schwefel | 5.33E+02 | 7.89E+02 | 7.26E+02 | 8.93E+02 | 9.33E+02 | 9.12E+02
F22 | Sum of Dif. Powers 2 | 4.40E-04 | 4.27E-14 | 1.64E-06 | 2.73E-02 | 2.33E-10 | 8.72E-05
F23 | Xin-She Yang’s 1 | 1.44E-12 | 1.84E-12 | 1.19E-09 | 6.26E-21 | 1.06E-20 | 2.30E-10
F24 | Schwefel 2.21 | 4.52E-01 | 5.58E+00 | 9.71E-07 | 4.29E-01 | 5.64E+00 | 7.09E-07
F25 | Schwefel 2.22 | 7.75E-01 | 1.40E+01 | 4.45E+08 | 2.27E+00 | 2.33E+01 | 3.93E+56
F26 | Salomon | 8.07E-02 | 2.78E-01 | 1.23E+00 | 8.30E-02 | 4.63E-01 | 1.89E+00
F27 | Modified Ridge | 1.61E-01 | 6.81E-02 | 7.34E-02 | 1.24E-01 | 6.57E-01 | 9.51E-02
F28 | Zakharov | 1.24E+02 | 1.51E+02 | 1.60E-13 | 2.88E+02 | 2.82E+02 | 3.04E-12
F29 | Mod. Xin-She Yang’s 3 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
F30 | Mod. Xin-She Yang’s 5 | 1.54E-09 | 6.42E-09 | 3.70E-03 | 5.14E-13 | 0.00E+00 | 3.29E-05
Table A7. Median Δx metric values (over 50 runs), for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 4.17E-05 | 5.79E-05 | 4.34E-11 | 1.35E-05 | 2.21E-07 | 4.10E-11
F02 | Ellipsoid | 2.90E-05 | 7.83E-05 | 2.20E-10 | 1.23E-05 | 1.31E-07 | 1.71E-10
F03 | Sum of Different Powers | 1.02E-03 | 5.37E-04 | 6.45E-05 | 6.05E-04 | 3.61E-04 | 1.45E-03
F04 | Quintic | 3.54E-02 | 3.54E-02 | 4.18E-02 | 3.91E-02 | 3.55E-02 | 4.24E-02
F05 | Drop-Wave | 4.55E-02 | 2.27E-02 | 2.74E-01 | 6.45E-02 | 3.22E-02 | 2.91E-01
F06 | Weierstrass | 1.01E-03 | 4.46E-05 | 1.96E-01 | 6.73E-03 | 1.11E-07 | 1.88E-01
F07 | Alpine 1 | 6.80E-02 | 4.46E-03 | 2.11E-01 | 5.13E-02 | 7.15E-02 | 2.10E-01
F08 | Ackley’s | 9.83E-05 | 4.93E-05 | 2.46E-01 | 3.82E-05 | 1.88E-07 | 2.92E-01
F09 | Griewank’s | 4.63E-05 | 4.35E-02 | 2.56E-01 | 1.39E-05 | 2.78E-02 | 2.54E-02
F10 | Rastrigin’s | 7.78E-04 | 6.28E-02 | 2.06E-01 | 3.07E-02 | 8.69E-02 | 2.50E-01
F11 | HappyCat | 2.05E-02 | 1.06E-02 | 3.04E-03 | 3.52E-02 | 1.50E-02 | 1.08E-02
F12 | HGBat | 3.08E-02 | 2.24E-02 | 3.33E-02 | 4.45E-02 | 2.87E-02 | 3.33E-02
F13 | Rosenbrock’s | 3.95E-02 | 3.86E-02 | 3.26E-07 | 3.37E-02 | 4.19E-02 | 2.44E-07
F14 | High Cond. Elliptic | 1.55E-04 | 8.67E-04 | 3.84E-10 | 1.89E-04 | 7.21E-06 | 3.37E-10
F15 | Discus | 6.10E-05 | 4.30E-04 | 3.43E-10 | 5.05E-05 | 6.09E-07 | 3.74E-10
F16 | Bent Cigar | 8.64E-05 | 1.12E-02 | 2.01E-10 | 4.44E-05 | 4.65E-05 | 2.77E-10
F17 | Perm D, Beta | 2.55E-02 | 2.62E-02 | 2.12E-02 | 4.45E-02 | 7.01E-02 | 7.28E-02
F18 | Schaffer’s F7 | 5.51E-04 | 7.66E-05 | 2.82E-01 | 1.61E-03 | 5.43E-05 | 2.91E-01
F19 | Expanded Schaffer’s F6 | 6.07E-02 | 3.53E-02 | 2.95E-01 | 1.09E-01 | 1.76E-01 | 2.96E-01
F20 | Rotated Hyper-ellipsoid | 3.67E-05 | 7.00E-05 | 1.53E-10 | 1.23E-05 | 1.55E-07 | 1.44E-10
F21 | Schwefel | 3.24E-01 | 4.58E-01 | 5.28E-01 | 3.34E-01 | 5.99E-01 | 5.07E-01
F22 | Sum of Dif. Powers 2 | 7.99E-04 | 5.23E-04 | 6.67E-05 | 4.26E-04 | 3.27E-05 | 9.37E-05
F23 | Xin-She Yang’s 1 | 9.77E-02 | 2.39E-01 | 4.95E-01 | 1.84E-01 | 4.22E-01 | 4.74E-01
F24 | Schwefel 2.21 | 5.48E-04 | 1.52E-04 | 8.23E-10 | 3.55E-03 | 5.72E-05 | 6.30E-10
F25 | Schwefel 2.22 | 2.27E-05 | 5.88E-05 | 2.10E-04 | 1.12E-05 | 1.24E-07 | 5.75E-02
F26 | Salomon | 2.23E-02 | 1.12E-02 | 2.79E-01 | 2.37E-02 | 1.58E-02 | 2.57E-01
F27 | Modified Ridge | 3.25E-05 | 1.59E-04 | 5.89E-05 | 1.20E-05 | 6.64E-06 | 6.25E-05
F28 | Zakharov | 1.58E-03 | 4.14E-04 | 3.74E-09 | 7.67E-03 | 6.42E-04 | 3.93E-09
F29 | Mod. Xin-She Yang’s 3 | 3.87E-01 | 4.46E-01 | 3.34E-01 | 3.25E-01 | 4.04E-01 | 2.88E-01
F30 | Mod. Xin-She Yang’s 5 | 2.21E-01 | 2.28E-01 | 4.72E-01 | 2.46E-01 | 2.51E-01 | 3.34E-01
Table A8. Median Δx metric values (over 50 runs), for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 1.08E-04 | 6.64E-09 | 4.86E-11 | 3.03E-04 | 6.21E-08 | 3.57E-11
F02 | Ellipsoid | 1.53E-04 | 7.93E-09 | 1.16E-10 | 5.28E-04 | 1.02E-07 | 8.89E-11
F03 | Sum of Different Powers | 4.91E-03 | 4.86E-03 | 1.62E-01 | 1.17E-02 | 1.67E-02 | 2.11E-01
F04 | Quintic | 4.48E-02 | 4.46E-02 | 4.45E-02 | 4.67E-02 | 4.46E-02 | 4.42E-02
F05 | Drop-Wave | 9.33E-02 | 6.52E-02 | 2.89E-01 | 1.01E-01 | 9.40E-02 | 2.89E-01
F06 | Weierstrass | 5.69E-02 | 6.36E-02 | 1.58E-01 | 7.72E-02 | 9.53E-02 | 1.42E-01
F07 | Alpine 1 | 5.88E-02 | 1.54E-01 | 2.21E-01 | 5.14E-02 | 1.59E-01 | 2.24E-01
F08 | Ackley’s | 3.85E-03 | 7.80E-03 | 2.89E-01 | 5.91E-03 | 2.01E-02 | 2.89E-01
F09 | Griewank’s | 1.38E-04 | 5.73E-03 | 6.40E-10 | 4.71E-04 | 4.44E-03 | 4.99E-10
F10 | Rastrigin’s | 4.37E-02 | 1.51E-01 | 2.34E-01 | 6.13E-02 | 1.62E-01 | 2.26E-01
F11 | HappyCat | 3.61E-02 | 2.35E-02 | 4.33E-03 | 3.73E-02 | 2.66E-02 | 4.41E-03
F12 | HGBat | 5.12E-02 | 3.39E-02 | 3.33E-02 | 5.30E-02 | 3.02E-02 | 3.33E-02
F13 | Rosenbrock’s | 1.77E-02 | 4.53E-02 | 1.41E-07 | 1.46E-02 | 4.05E-02 | 1.09E-07
F14 | High Cond. Elliptic | 1.47E-03 | 4.09E-07 | 2.55E-10 | 2.71E-03 | 4.11E-06 | 2.24E-10
F15 | Discus | 1.98E-04 | 2.46E-08 | 1.73E-10 | 6.03E-04 | 1.47E-07 | 1.47E-10
F16 | Bent Cigar | 2.48E-04 | 8.43E-07 | 2.67E-10 | 4.11E-04 | 4.91E-06 | 1.73E-10
F17 | Perm D, Beta | 1.50E-01 | 2.27E-01 | 2.53E-01 | 3.50E-01 | 3.71E-01 | 4.17E-01
F18 | Schaffer’s F7 | 3.73E-03 | 7.05E-02 | 2.89E-01 | 4.08E-03 | 1.24E-01 | 2.86E-01
F19 | Expanded Schaffer’s F6 | 2.18E-01 | 2.41E-01 | 2.90E-01 | 2.34E-01 | 2.59E-01 | 2.88E-01
F20 | Rotated Hyper-ellipsoid | 1.57E-04 | 9.11E-09 | 1.33E-10 | 6.36E-04 | 1.22E-07 | 7.98E-11
F21 | Schwefel | 4.19E-01 | 6.12E-01 | 4.99E-01 | 4.29E-01 | 5.86E-01 | 5.09E-01
F22 | Sum of Dif. Powers 2 | 2.15E-03 | 2.45E-05 | 1.12E-03 | 4.27E-03 | 8.26E-05 | 2.68E-03
F23 | Xin-She Yang’s 1 | 2.29E-01 | 4.84E-01 | 3.69E-01 | 2.24E-01 | 4.90E-01 | 3.21E-01
F24 | Schwefel 2.21 | 5.06E-03 | 4.28E-02 | 7.09E-10 | 5.10E-03 | 1.08E-01 | 1.56E-09
F25 | Schwefel 2.22 | 4.13E-04 | 3.82E-08 | 1.39E-01 | 1.37E-03 | 7.59E-06 | 1.39E-01
F26 | Salomon | 2.74E-02 | 5.02E-02 | 2.78E-01 | 2.47E-02 | 7.42E-02 | 2.78E-01
F27 | Modified Ridge | 1.19E-04 | 7.29E-07 | 4.34E-05 | 3.37E-04 | 3.27E-07 | 5.47E-05
F28 | Zakharov | 1.41E-01 | 1.45E-01 | 2.69E-09 | 2.06E-01 | 1.73E-01 | 2.92E-09
F29 | Mod. Xin-She Yang’s 3 | 2.76E-01 | 2.85E-01 | 2.86E-01 | 2.45E-02 | 2.85E-01 | 2.80E-01
F30 | Mod. Xin-She Yang’s 5 | 2.35E-01 | 2.47E-01 | 2.81E-01 | 2.40E-01 | 2.59E-01 | 2.90E-01
Table A9. Median Δf metric values (over 50 runs), for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 8.43E-09 | 1.63E-08 | 9.16E-21 | 1.03E-09 | 2.76E-13 | 9.48E-21
F02 | Ellipsoid | 3.19E-09 | 2.78E-08 | 1.19E-19 | 6.26E-10 | 6.11E-14 | 1.18E-19
F03 | Sum of Different Powers | 1.13E-11 | 7.22E-15 | 1.56E-21 | 1.51E-18 | 1.63E-28 | 1.68E-23
F04 | Quintic | 1.31E-08 | 1.07E-08 | 8.61E-14 | 3.95E-09 | 1.75E-11 | 1.04E-13
F05 | Drop-Wave | 2.14E-01 | 6.38E-02 | 9.08E-01 | 5.22E-01 | 2.14E-01 | 9.57E-01
F06 | Weierstrass | 1.50E-02 | 2.19E-03 | 3.30E-01 | 2.06E-02 | 4.87E-05 | 3.93E-01
F07 | Alpine 1 | 8.11E-05 | 2.51E-05 | 2.05E-08 | 1.25E-05 | 7.94E-08 | 2.11E-08
F08 | Ackley’s | 1.26E-03 | 6.07E-04 | 8.64E-01 | 4.68E-04 | 2.23E-06 | 8.85E-01
F09 | Griewank’s | 7.32E-06 | 1.14E-02 | 2.66E-01 | 3.37E-07 | 4.43E-03 | 3.50E-03
F10 | Rastrigin’s | 3.66E-04 | 1.70E-02 | 1.30E-01 | 3.10E-03 | 2.79E-02 | 2.05E-01
F11 | HappyCat | 2.09E-03 | 9.84E-04 | 2.12E-04 | 6.58E-03 | 1.58E-03 | 8.46E-04
F12 | HGBat | 4.04E-04 | 2.29E-04 | 4.70E-04 | 5.56E-04 | 2.41E-04 | 3.11E-04
F13 | Rosenbrock’s | 5.59E-07 | 5.09E-07 | 1.43E-17 | 7.01E-07 | 1.04E-06 | 1.04E-17
F14 | High Cond. Elliptic | 1.54E-09 | 1.42E-10 | 4.55E-21 | 1.39E-10 | 2.65E-14 | 1.67E-21
F15 | Discus | 6.04E-10 | 4.27E-12 | 1.18E-22 | 5.52E-11 | 1.81E-17 | 3.50E-23
F16 | Bent Cigar | 6.46E-09 | 7.78E-09 | 1.50E-21 | 2.72E-10 | 1.69E-13 | 2.37E-22
F17 | Perm D, Beta | 5.85E-16 | 1.33E-16 | 9.41E-20 | 5.51E-21 | 6.46E-22 | 2.41E-21
F18 | Schaffer’s F7 | 2.53E-04 | 5.91E-05 | 1.70E-01 | 5.42E-04 | 1.21E-05 | 2.15E-01
F19 | Expanded Schaffer’s F6 | 2.69E-01 | 2.01E-01 | 6.89E-01 | 3.99E-01 | 4.53E-01 | 7.70E-01
F20 | Rotated Hyper-ellipsoid | 5.35E-09 | 1.78E-08 | 8.97E-20 | 5.84E-10 | 1.08E-13 | 8.07E-20
F21 | Schwefel | 6.64E-02 | 6.41E-02 | 2.69E-01 | 9.19E-02 | 1.50E-01 | 2.88E-01
F22 | Sum of Dif. Powers 2 | 8.35E-12 | 7.78E-15 | 9.08E-22 | 2.07E-13 | 1.42E-21 | 2.95E-20
F23 | Xin-She Yang’s 1 | 1.45E-05 | 3.18E-05 | 7.28E-05 | 3.08E-08 | 9.47E-08 | 9.46E-08
F24 | Schwefel 2.21 | 1.56E-03 | 4.51E-04 | 2.77E-09 | 1.13E-02 | 1.74E-04 | 2.32E-09
F25 | Schwefel 2.22 | 3.31E-12 | 8.34E-12 | 1.87E-11 | 7.77E-22 | 1.18E-23 | 3.54E-18
F26 | Salomon | 3.33E-02 | 1.67E-02 | 4.17E-01 | 4.00E-02 | 2.67E-02 | 4.34E-01
F27 | Modified Ridge | 7.82E-03 | 1.02E-02 | 4.55E-04 | 6.90E-03 | 2.99E-03 | 5.80E-04
F28 | Zakharov | 3.50E-10 | 2.38E-11 | 2.48E-21 | 1.65E-10 | 1.24E-12 | 4.40E-23
F29 | Mod. Xin-She Yang’s 3 | 5.33E-01 | 5.33E-01 | 5.33E-01 | 9.65E-01 | 9.65E-01 | 9.65E-01
F30 | Mod. Xin-She Yang’s 5 | 2.40E-01 | 2.40E-01 | 2.72E-01 | 2.37E-01 | 2.37E-01 | 2.37E-01
Table A10. Median Δf metric values (over 50 runs), for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 8.77E-08 | 3.30E-16 | 1.77E-20 | 7.49E-07 | 3.13E-14 | 1.04E-20
F02 | Ellipsoid | 5.80E-08 | 2.36E-16 | 2.78E-20 | 5.70E-07 | 4.10E-14 | 2.25E-20
F03 | Sum of Different Powers | 1.59E-39 | 1.80E-48 | 4.62E-24 | 7.06E-59 | 9.75E-59 | 1.42E-29
F04 | Quintic | 1.03E-07 | 2.98E-12 | 1.72E-13 | 3.44E-07 | 6.31E-11 | 1.90E-13
F05 | Drop-Wave | 8.73E-01 | 7.70E-01 | 9.85E-01 | 9.31E-01 | 9.20E-01 | 9.91E-01
F06 | Weierstrass | 9.88E-02 | 5.57E-02 | 4.75E-01 | 1.54E-01 | 1.19E-01 | 4.73E-01
F07 | Alpine 1 | 1.69E-04 | 2.89E-09 | 3.19E-08 | 1.03E-03 | 6.87E-08 | 3.48E-08
F08 | Ackley’s | 4.85E-02 | 9.24E-02 | 8.99E-01 | 7.27E-02 | 2.15E-01 | 9.02E-01
F09 | Griewank’s | 1.71E-05 | 2.33E-04 | 3.52E-16 | 1.20E-04 | 1.55E-04 | 1.64E-16
F10 | Rastrigin’s | 8.56E-03 | 9.18E-02 | 2.20E-01 | 1.82E-02 | 1.12E-01 | 2.18E-01
F11 | HappyCat | 9.16E-03 | 3.86E-03 | 4.76E-04 | 9.90E-03 | 5.03E-03 | 4.90E-04
F12 | HGBat | 3.25E-04 | 1.44E-04 | 1.38E-04 | 2.14E-04 | 6.93E-05 | 8.44E-05
F13 | Rosenbrock’s | 1.22E-06 | 2.36E-06 | 5.08E-18 | 4.79E-06 | 4.53E-06 | 3.17E-18
F14 | High Cond. Elliptic | 1.65E-09 | 1.51E-16 | 2.24E-22 | 1.99E-08 | 2.09E-14 | 1.29E-22
F15 | Discus | 1.25E-11 | 7.42E-20 | 4.47E-23 | 7.54E-11 | 5.49E-18 | 2.65E-23
F16 | Bent Cigar | 5.36E-08 | 5.68E-16 | 2.83E-22 | 6.13E-07 | 4.92E-14 | 2.10E-22
F17 | Perm D, Beta | 3.35E-21 | 2.95E-21 | 3.80E-25 | 4.54E-11 | 3.29E-09 | 2.36E-11
F18 | Schaffer’s F7 | 1.88E-03 | 2.62E-02 | 2.90E-01 | 2.70E-03 | 7.41E-02 | 3.06E-01
F19 | Expanded Schaffer’s F6 | 6.61E-01 | 7.17E-01 | 8.37E-01 | 7.33E-01 | 7.91E-01 | 8.71E-01
F20 | Rotated Hyper-ellipsoid | 6.49E-08 | 2.80E-16 | 5.09E-20 | 8.70E-07 | 6.17E-14 | 1.77E-20
F21 | Schwefel | 2.08E-01 | 2.49E-01 | 3.54E-01 | 2.65E-01 | 2.73E-01 | 3.84E-01
F22 | Sum of Dif. Powers 2 | 5.36E-11 | 7.07E-22 | 1.84E-13 | 2.31E-09 | 7.83E-19 | 2.89E-11
F23 | Xin-She Yang’s 1 | 1.26E-18 | 4.71E-18 | 8.20E-17 | 2.53E-29 | 1.46E-28 | 3.64E-21
F24 | Schwefel 2.21 | 2.00E-02 | 1.28E-01 | 2.86E-09 | 2.19E-02 | 3.38E-01 | 6.43E-09
F25 | Schwefel 2.22 | 3.47E-55 | 5.58E-59 | 1.27E-50 | 1.95E-89 | 1.01E-91 | 1.70E-66
F26 | Salomon | 5.96E-02 | 1.09E-01 | 6.06E-01 | 5.91E-02 | 1.77E-01 | 6.63E-01
F27 | Modified Ridge | 1.25E-02 | 1.94E-03 | 5.73E-04 | 1.62E-02 | 3.29E-03 | 1.16E-03
F28 | Zakharov | 2.18E-10 | 2.31E-10 | 8.05E-26 | 1.76E-11 | 1.23E-11 | 3.51E-27
F29 | Mod. Xin-She Yang’s 3 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
F30 | Mod. Xin-She Yang’s 5 | 9.83E-01 | 9.83E-01 | 9.83E-01 | 1.00E+00 | 1.00E+00 | 1.00E+00
Table A11. Median Δt metric values (over 50 runs), for the 3 optimizers, for dimensions D = 5 and D = 10.
ID | Function Name | GA (D = 5) | PSO (D = 5) | SQP (D = 5) | GA (D = 10) | PSO (D = 10) | SQP (D = 10)
F01 | Sphere | 2.95E-05 | 4.09E-05 | 3.07E-11 | 9.54E-06 | 1.56E-07 | 2.90E-11
F02 | Ellipsoid | 2.05E-05 | 5.54E-05 | 1.55E-10 | 8.68E-06 | 9.30E-08 | 1.21E-10
F03 | Sum of Different Powers | 7.23E-04 | 3.80E-04 | 4.56E-05 | 4.28E-04 | 2.55E-04 | 1.03E-03
F04 | Quintic | 2.51E-02 | 2.50E-02 | 2.95E-02 | 2.77E-02 | 2.51E-02 | 3.00E-02
F05 | Drop-Wave | 1.55E-01 | 4.79E-02 | 6.71E-01 | 3.72E-01 | 1.53E-01 | 7.07E-01
F06 | Weierstrass | 1.06E-02 | 1.55E-03 | 2.74E-01 | 1.61E-02 | 3.45E-05 | 3.02E-01
F07 | Alpine 1 | 4.81E-02 | 3.16E-03 | 1.49E-01 | 3.63E-02 | 5.05E-02 | 1.49E-01
F08 | Ackley’s | 8.92E-04 | 4.30E-04 | 6.35E-01 | 3.32E-04 | 1.58E-06 | 6.59E-01
F09 | Griewank’s | 3.33E-05 | 3.15E-02 | 2.61E-01 | 9.86E-06 | 2.02E-02 | 1.81E-02
F10 | Rastrigin’s | 6.08E-04 | 4.94E-02 | 1.72E-01 | 2.18E-02 | 6.39E-02 | 2.28E-01
F11 | HappyCat | 1.46E-02 | 7.54E-03 | 2.15E-03 | 2.53E-02 | 1.06E-02 | 7.69E-03
F12 | HGBat | 2.18E-02 | 1.58E-02 | 2.35E-02 | 3.15E-02 | 2.03E-02 | 2.36E-02
F13 | Rosenbrock’s | 2.79E-02 | 2.73E-02 | 2.30E-07 | 2.38E-02 | 2.96E-02 | 1.72E-07
F14 | High Cond. Elliptic | 1.10E-04 | 6.13E-04 | 2.71E-10 | 1.34E-04 | 5.10E-06 | 2.38E-10
F15 | Discus | 4.31E-05 | 3.04E-04 | 2.43E-10 | 3.57E-05 | 4.31E-07 | 2.64E-10
F16 | Bent Cigar | 6.11E-05 | 7.89E-03 | 1.42E-10 | 3.14E-05 | 3.29E-05 | 1.96E-10
F17 | Perm D, Beta | 1.80E-02 | 1.85E-02 | 1.50E-02 | 3.14E-02 | 4.96E-02 | 5.15E-02
F18 | Schaffer’s F7 | 4.29E-04 | 6.98E-05 | 2.31E-01 | 1.34E-03 | 3.97E-05 | 2.56E-01
F19 | Expanded Schaffer’s F6 | 1.96E-01 | 1.44E-01 | 5.29E-01 | 2.91E-01 | 3.43E-01 | 5.83E-01
F20 | Rotated Hyper-ellipsoid | 2.59E-05 | 4.95E-05 | 1.08E-10 | 8.72E-06 | 1.10E-07 | 1.02E-10
F21 | Schwefel | 2.32E-01 | 3.27E-01 | 4.14E-01 | 2.44E-01 | 4.37E-01 | 4.14E-01
F22 | Sum of Dif. Powers 2 | 5.65E-04 | 3.70E-04 | 4.72E-05 | 3.01E-04 | 2.31E-05 | 6.62E-05
F23 | Xin-She Yang’s 1 | 6.91E-02 | 1.69E-01 | 3.50E-01 | 1.30E-01 | 2.98E-01 | 3.35E-01
F24 | Schwefel 2.21 | 1.17E-03 | 3.34E-04 | 2.04E-09 | 8.38E-03 | 1.29E-04 | 1.73E-09
F25 | Schwefel 2.22 | 1.60E-05 | 4.16E-05 | 1.49E-04 | 7.92E-06 | 8.75E-08 | 4.07E-02
F26 | Salomon | 2.84E-02 | 1.42E-02 | 3.55E-01 | 3.29E-02 | 2.19E-02 | 3.56E-01
F27 | Modified Ridge | 5.53E-03 | 7.22E-03 | 3.27E-04 | 4.88E-03 | 2.12E-03 | 4.12E-04
F28 | Zakharov | 1.12E-03 | 2.93E-04 | 2.64E-09 | 5.42E-03 | 4.54E-04 | 2.78E-09
F29 | Mod. Xin-She Yang’s 3 | 4.66E-01 | 4.91E-01 | 4.45E-01 | 7.20E-01 | 7.40E-01 | 7.12E-01
F30 | Mod. Xin-She Yang’s 5 | 2.31E-01 | 2.34E-01 | 3.85E-01 | 2.41E-01 | 2.44E-01 | 2.90E-01
Table A12. Median Δt metric values (over 50 runs), for the 3 optimizers, for dimensions D = 30 and D = 50.
ID | Function Name | GA (D = 30) | PSO (D = 30) | SQP (D = 30) | GA (D = 50) | PSO (D = 50) | SQP (D = 50)
F01 | Sphere | 7.66E-05 | 4.70E-09 | 3.44E-11 | 2.15E-04 | 4.39E-08 | 2.53E-11
F02 | Ellipsoid | 1.08E-04 | 5.60E-09 | 8.19E-11 | 3.74E-04 | 7.24E-08 | 6.28E-11
F03 | Sum of Different Powers | 3.47E-03 | 3.43E-03 | 1.14E-01 | 8.26E-03 | 1.18E-02 | 1.49E-01
F04 | Quintic | 3.17E-02 | 3.15E-02 | 3.15E-02 | 3.30E-02 | 3.15E-02 | 3.12E-02
F05 | Drop-Wave | 6.21E-01 | 5.47E-01 | 7.26E-01 | 6.62E-01 | 6.54E-01 | 7.30E-01
F06 | Weierstrass | 8.12E-02 | 6.75E-02 | 3.58E-01 | 1.25E-01 | 1.08E-01 | 3.50E-01
F07 | Alpine 1 | 4.16E-02 | 1.09E-01 | 1.56E-01 | 3.64E-02 | 1.12E-01 | 1.58E-01
F08 | Ackley’s | 3.44E-02 | 6.56E-02 | 6.67E-01 | 5.15E-02 | 1.53E-01 | 6.70E-01
F09 | Griewank’s | 9.82E-05 | 4.05E-03 | 4.53E-10 | 3.41E-04 | 3.14E-03 | 3.53E-10
F10 | Rastrigin’s | 3.16E-02 | 1.25E-01 | 2.27E-01 | 4.53E-02 | 1.39E-01 | 2.22E-01
F11 | HappyCat | 2.63E-02 | 1.68E-02 | 3.08E-03 | 2.73E-02 | 1.92E-02 | 3.15E-03
F12 | HGBat | 3.62E-02 | 2.40E-02 | 2.36E-02 | 3.75E-02 | 2.14E-02 | 2.36E-02
F13 | Rosenbrock’s | 1.25E-02 | 3.20E-02 | 9.99E-08 | 1.04E-02 | 2.86E-02 | 7.72E-08
F14 | High Cond. Elliptic | 1.04E-03 | 2.89E-07 | 1.81E-10 | 1.92E-03 | 2.91E-06 | 1.58E-10
F15 | Discus | 1.40E-04 | 1.74E-08 | 1.23E-10 | 4.26E-04 | 1.04E-07 | 1.04E-10
F16 | Bent Cigar | 1.76E-04 | 5.96E-07 | 1.89E-10 | 2.91E-04 | 3.47E-06 | 1.22E-10
F17 | Perm D, Beta | 1.06E-01 | 1.61E-01 | 1.79E-01 | 2.47E-01 | 2.62E-01 | 2.95E-01
F18 | Schaffer’s F7 | 3.06E-03 | 5.31E-02 | 2.91E-01 | 3.43E-03 | 1.01E-01 | 2.96E-01
F19 | Expanded Schaffer’s F6 | 4.97E-01 | 5.37E-01 | 6.23E-01 | 5.43E-01 | 5.89E-01 | 6.46E-01
F20 | Rotated Hyper-ellipsoid | 1.11E-04 | 6.45E-09 | 9.44E-11 | 4.50E-04 | 8.65E-08 | 5.64E-11
F21 | Schwefel | 3.31E-01 | 4.71E-01 | 4.33E-01 | 3.51E-01 | 4.57E-01 | 4.50E-01
F22 | Sum of Dif. Powers 2 | 1.52E-03 | 1.74E-05 | 7.92E-04 | 3.02E-03 | 5.84E-05 | 1.90E-03
F23 | Xin-She Yang’s 1 | 1.62E-01 | 3.42E-01 | 2.61E-01 | 1.58E-01 | 3.47E-01 | 2.27E-01
F24 | Schwefel 2.21 | 1.47E-02 | 9.55E-02 | 2.11E-09 | 1.59E-02 | 2.52E-01 | 4.68E-09
F25 | Schwefel 2.22 | 2.92E-04 | 2.70E-08 | 9.84E-02 | 9.70E-04 | 5.36E-06 | 9.82E-02
F26 | Salomon | 4.64E-02 | 8.50E-02 | 4.72E-01 | 4.53E-02 | 1.36E-01 | 5.08E-01
F27 | Modified Ridge | 8.84E-03 | 1.37E-03 | 4.07E-04 | 1.14E-02 | 2.32E-03 | 8.19E-04
F28 | Zakharov | 9.94E-02 | 1.02E-01 | 1.90E-09 | 1.46E-01 | 1.22E-01 | 2.06E-09
F29 | Mod. Xin-She Yang’s 3 | 7.34E-01 | 7.35E-01 | 7.35E-01 | 7.07E-01 | 7.35E-01 | 7.34E-01
F30 | Mod. Xin-She Yang’s 5 | 7.14E-01 | 7.16E-01 | 7.23E-01 | 7.27E-01 | 7.30E-01 | 7.36E-01

References

1. Plevris, V.; Tsiatas, G. Computational Structural Engineering: Past Achievements and Future Challenges. Front. Built Environ. 2018, 4, 1–5.
2. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
3. Culberson, J.C. On the Futility of Blind Search: An Algorithmic View of “No Free Lunch”. Evol. Comput. 1998, 6, 109–127.
4. Kaveh, A. Advances in Metaheuristic Algorithms for Optimal Design of Structures, 3rd ed.; Springer International Publishing: Cham, Switzerland, 2021; ISBN 978-3-030-59392-6.
5. Plevris, V.; Bakas, N.P.; Solorzano, G. Pure Random Orthogonal Search (PROS): A Plain and Elegant Parameterless Algorithm for Global Optimization. Appl. Sci. 2021, 11, 5053.
6. Bakas, N.P.; Plevris, V.; Langousis, A.; Chatzichristofis, S.A. ITSO: A novel inverse transform sampling-based optimization algorithm for stochastic search. Stoch. Environ. Res. Risk Assess. 2022, 36, 67–76.
7. Chen, S.; Montgomery, J.; Bolufé-Röhler, A. Measuring the curse of dimensionality and its effects on particle swarm optimization and differential evolution. Appl. Intell. 2015, 42, 514–526.
8. Solorzano, G.; Plevris, V. Optimum Design of RC Footings with Genetic Algorithms According to ACI 318-19. Buildings 2020, 10, 110.
9. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Longman Publishing Co.: Boston, MA, USA, 1989.
10. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
11. Plevris, V.; Papadrakakis, M. A Hybrid Particle Swarm—Gradient Algorithm for Global Structural Optimization. Comput.-Aided Civ. Infrastruct. Eng. 2011, 26, 48–68.
12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995; pp. 1942–1948.
13. Plevris, V. Innovative Computational Techniques for the Optimum Structural Design Considering Uncertainties; National Technical University of Athens: Athens, Greece, 2009.
14. Moayyeri, N.; Gharehbaghi, S.; Plevris, V. Cost-Based Optimum Design of Reinforced Concrete Retaining Walls Considering Different Methods of Bearing Capacity Computation. Mathematics 2019, 7, 1232.
15. Plevris, V.; Karlaftis, M.G.; Lagaros, N.D. A Swarm Intelligence Approach for Emergency Infrastructure Inspection Scheduling. In Sustainable and Resilient Critical Infrastructure Systems: Simulation, Modeling, and Intelligent Engineering; Gopalakrishnan, K., Peeta, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 201–230.
16. Thanedar, P.B.; Arora, J.S.; Tseng, C.H.; Lim, O.K.; Park, G.J. Performance of some SQP algorithms on structural design problems. Int. J. Numer. Meth. Engng. 1986, 23, 2187–2203.
17. Bonnans, J.-F.; Gilbert, J.C.; Lemarechal, C.; Sagastizábal, C.A. Numerical Optimization: Theoretical and Practical Aspects, 2nd ed.; Springer: Berlin, Germany, 2006; ISBN 978-3-540-35447-5.
18. Plevris, V.; Mitropoulou, C.C.; Lagaros, N.D. Structural Seismic Design Optimization and Earthquake Engineering: Formulations and Applications; IGI Global: Hershey, PA, USA, 2012.
19. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Chapter 10—Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Sangaiah, A.K., Sheng, M., Zhang, Z., Eds.; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231.
20. Wang, F.-S.; Chen, L.-H. Heuristic Optimization. In Encyclopedia of Systems Biology; Dubitzky, W., Wolkenhauer, O., Cho, K.-H., Yokota, H., Eds.; Springer: New York, NY, USA, 2013; p. 885.
21. Sörensen, K.; Glover, F.W. Metaheuristics. In Encyclopedia of Operations Research and Management Science; Gass, S.I., Fu, M.C., Eds.; Springer: Boston, MA, USA, 2013; pp. 960–970.
22. Rechenberg, I. Evolution Strategy: Optimization of Technical Systems according to the Principles of Biological Evolution; Frommann-Holzboog: Stuttgart, Germany, 1973.
23. Papadrakakis, M.; Lagaros, N.D.; Plevris, V. Optimum Design of Space Frames under Seismic Loading. Int. J. Struct. Stabil. Dyn. 2001, 1, 105–123.
24. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces; International Computer Science Institute (ICSI): Berkeley, CA, USA, 1995.
25. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
26. Georgioudakis, M.; Plevris, V. A Combined Modal Correlation Criterion for Structural Damage Identification with Noisy Modal Data. Adv. Civ. Eng. 2018, 2018, 20.
27. Georgioudakis, M.; Plevris, V. A comparative study of differential evolution variants in constrained structural optimization. Front. Built Environ. 2020, 6, 1–14.
28. Georgioudakis, M.; Plevris, V. On the Performance of Differential Evolution Variants in Constrained Structural Optimization. Procedia Manuf. 2020, 44, 371–378.
29. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed Optimization by Ant Colonies. In Proceedings of the First European Conference on Artificial Life; Varela, F., Bourgine, P., Eds.; Elsevier Publishing: Paris, France, 1992; pp. 134–142.
30. Serani, A.; Leotardi, C.; Iemma, U.; Campana, E.F.; Fasano, G.; Diez, M. Parameter selection in synchronous and asynchronous deterministic particle swarm optimization for ship hydrodynamics problems. Appl. Soft Comput. 2016, 49, 313–334.
31. Serani, A.; Diez, M.; Leotardi, C.; Peri, D.; Fasano, G.; Iemma, U.; Campana, E.F. On the use of synchronous and asynchronous single-objective deterministic particle swarm optimization in ship design problems. In Proceedings of the 1st International Conference in Engineering and Applied Sciences Optimization, Kos Island, Greece, 4–6 June 2014.
32. Tan, Y. Chapter 12—A CUDA-Based Test Suite. In GPU-Based Parallel Implementation of Swarm Intelligence Algorithms; Tan, Y., Ed.; Morgan Kaufmann: San Francisco, CA, USA, 2016; pp. 179–206.
33. Yang, X.-S. Test Problems in Optimization. arXiv 2010, arXiv:1008.0549v1.
34. Dieterich, J.M.; Hartke, B. Empirical review of standard benchmark functions using evolutionary global optimization. arXiv 2012, arXiv:1207.4318.
Figure 1. Results (over 50 runs), for the 3 optimizers, for D = 5, for all 30 objective functions.
Figure 2. Results (over 50 runs), for the 3 optimizers, for D = 10, for all 30 objective functions.
Figure 3. Results (over 50 runs), for the 3 optimizers, for D = 30, for all 30 objective functions.
Figure 4. Results (over 50 runs), for the 3 optimizers, for D = 50, for all 30 objective functions.
Figure 5. Convergence histories of the various optimizers for 5 dimensions (D = 5), for the first 15 objective functions, F01 to F15 (median of 50 runs).
Figure 6. Convergence histories of the various optimizers for 5 dimensions (D = 5), for the objective functions F16 to F30 (median of 50 runs).
Figure 7. Convergence histories of the various optimizers for 10 dimensions (D = 10), for the first 15 objective functions, F01 to F15 (median of 50 runs).
Figure 8. Convergence histories of the various optimizers for 10 dimensions (D = 10), for the objective functions F16 to F30 (median of 50 runs).
Figure 9. Convergence histories of the various optimizers for 30 dimensions (D = 30), for the first 15 objective functions, F01 to F15 (median of 50 runs).
Figure 10. Convergence histories of the various optimizers for 30 dimensions (D = 30), for the objective functions F16 to F30 (median of 50 runs).
Figure 11. Convergence histories of the various optimizers for 50 dimensions (D = 50), for the first 15 objective functions, F01 to F15 (median of 50 runs).
Figure 12. Convergence histories of the various optimizers for 50 dimensions (D = 50), for the objective functions F16 to F30 (median of 50 runs).
Table 1. Optimization parameters and convergence criteria used for each category of problems based on the number of dimensions.
Parameter | D = 5 | D = 10 | D = 30 | D = 50
Population size NP (NP = 10∙D) | 50 | 100 | 300 | 500
Max. iterations MaxIter (MaxIter = 20∙D − 50) | 50 | 150 | 550 | 950
Max. obj. function evaluations MaxFE (MaxFE = NP∙MaxIter) | 2500 | 15,000 | 165,000 | 475,000
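All three parameters are simple functions of the dimension D, so the table can be reproduced with a few lines of MATLAB (an illustrative check, not part of the optimizers themselves):
        D = [5 10 30 50];
        NP = 10*D;               % population size: 50, 100, 300, 500
        MaxIter = 20*D - 50;     % max. iterations: 50, 150, 550, 950
        MaxFE = NP.*MaxIter;     % max. evaluations: 2500, 15,000, 165,000, 475,000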
Table 2. The 30 objective functions used in the study, search range, and location of the optimum.
ID | Function Name | File Name | Search Range | Location of the Optimum 1
F01 | Sphere | sphere_func | [−100, 100]^D | x* = {0, 0, …, 0}
F02 | Ellipsoid | ellipsoid_func | [−100, 100]^D | x* = {0, 0, …, 0}
F03 | Sum of Different Powers | sumpow_func | [−10, 10]^D | x* = {0, 0, …, 0}
F04 | Quintic | quintic_func | [−20, 20]^D | x* = {−1, −1, …, −1} or x* = {2, 2, …, 2}
F05 | Drop-Wave | drop_wave_func | [−5.12, 5.12]^D | x* = {0, 0, …, 0}
F06 | Weierstrass | weierstrass_func | [−0.5, 0.5]^D | x* = {0, 0, …, 0}
F07 | Alpine 1 | alpine1_func | [−10, 10]^D | x* = {0, 0, …, 0}
F08 | Ackley’s | ackley_func | [−32.768, 32.768]^D | x* = {0, 0, …, 0}
F09 | Griewank’s | griewank_func | [−100, 100]^D | x* = {0, 0, …, 0}
F10 | Rastrigin’s | rastrigin_func | [−5.12, 5.12]^D | x* = {0, 0, …, 0}
F11 | HappyCat | happycat_func | [−20, 20]^D | x* = {−1, −1, …, −1}
F12 | HGBat | hgbat_func | [−15, 15]^D | x* = {−1, −1, …, −1}
F13 | Rosenbrock’s | rosenbrock_func | [−10, 10]^D | x* = {1, 1, …, 1}
F14 | High Conditioned Elliptic | ellipt_func | [−100, 100]^D | x* = {0, 0, …, 0}
F15 | Discus | discus_func | [−100, 100]^D | x* = {0, 0, …, 0}
F16 | Bent Cigar | bent_cigar_func | [−100, 100]^D | x* = {0, 0, …, 0}
F17 | Perm D, Beta | permdb_func | [−D, D]^D generally 2 | x* = {1, 2, …, D}
F18 | Schaffer’s F7 | schafferf7_func | [−100, 100]^D | x* = {0, 0, …, 0}
F19 | Expanded Schaffer’s F6 | expschafferf6_func | [−100, 100]^D | x* = {0, 0, …, 0}
F20 | Rotated Hyper-ellipsoid | rothellipsoid_func | [−100, 100]^D | x* = {0, 0, …, 0}
F21 | Schwefel | schwefel_func | [−500, 500]^D | x* = {c, c, …, c} 3
F22 | Sum of Different Powers 2 | sumpow2_func | [−10, 10]^D | x* = {0, 0, …, 0}
F23 | Xin-She Yang’s 1 | xinsheyang1_func | [−2π, 2π]^D | x* = {0, 0, …, 0}
F24 | Schwefel 2.21 | schwefel221_func | [−100, 100]^D | x* = {0, 0, …, 0}
F25 | Schwefel 2.22 | schwefel222_func | [−100, 100]^D | x* = {0, 0, …, 0}
F26 | Salomon | salomon_func | [−20, 20]^D | x* = {0, 0, …, 0}
F27 | Modified Ridge | modridge_func | [−100, 100]^D | x* = {0, 0, …, 0}
F28 | Zakharov | zakharov_func | [−10, 10]^D | x* = {0, 0, …, 0}
F29 | Modified Xin-She Yang’s 3 | modxinsyang3_func | [−20, 20]^D | x* = {0, 0, …, 0}
F30 | Modified Xin-She Yang’s 5 | modxinsyang5_func | [−100, 100]^D | x* = {0, 0, …, 0}
1 The optimum value of the objective function is f(x*) = 0, for all cases.
2 The search range [−50, 50]^D has been used in this study for uniformity, for all numbers of dimensions, since Dmax = 50.
3 Where c = 420.968746359982025, see also Appendix B.