Allocation of Starting Points in Global Optimization Problems

We propose new multistart techniques for finding good local solutions in global optimization problems. The objective function is assumed to be differentiable, and the feasible set is a convex compact set. The techniques are based on finding maximum distant points on the feasible set. A special global optimization problem is used to determine the maximum distant points. Preliminary computational results are given.


Introduction
Within the concept of a "smart" digital environment, methods of mathematical modeling and machine learning are actively used to design and implement digital twins of complex technical, technological, and organizational systems. In this case, it is usually necessary to solve complex global optimization problems to automate the selection of effective structures and parameters of the corresponding models of these digital twins. The effectiveness of global optimization methods depends significantly on the choice of the initial set of solutions, which is subsequently used to find the global optimum or a good local optimum that approximates it. This is especially important when global optimization methods are applied to continuously differentiable functions of real variables, because in this case it is possible to obtain optimal solutions guaranteed by the rigorous mathematical apparatus of applied mathematics.
Let a differentiable function f : R^n → R and a convex compact set X ⊂ R^n with a nonempty interior, int(X) ≠ ∅, be given. The problem considered in this paper consists in finding a good local minimum using the multistart strategy. In order to achieve this, it is necessary to allocate p starting points x^1, . . ., x^p in X such that they cover X "more or less uniformly." The proposed multistart strategy is based on the CONOPT solver [1].
Various uniform sampling procedures can be used for this purpose. A survey of special methods for allocating points on spheres is presented in [2]. If X is a polytope, sampling based on a simplicial decomposition of X can be applied, as given in [3]. In [4], a class of Markov chain Monte Carlo (MCMC) algorithms for distributing points on polytopes is described. In the more general case when X is a convex body, a random walk strategy [5] based on the MCMC technique is successfully applied. A brief review of different kinds of random walks can be found in [4]. However, uniform random sampling algorithms are of exponential complexity [6]. Uniform sampling is usually used for the approximate calculation of an integral or the volume of X, whereas we are interested in finding a good local solution in global optimization problems. The most attractive feature of uniform sampling is the following: a global minimum can be found with probability one as the sample size tends to infinity. However, due to the specifics of high-dimensional spaces [7], random sampling is not efficient from a practical point of view. Nevertheless, uniform sampling continues to draw attention, and investigations on this topic are of serious interest [8]. Approaches based on the p-location problem [9] and the p-center methodology [10] can also be used for solving the problems considered in our paper. However, we aimed to check the efficiency of a global optimization approach.
In our paper, we propose a procedure for the good allocation of points on a convex compact set X. The idea is to use a special global optimization problem as an auxiliary one for the allocation. This special problem consists in maximizing the Euclidean norm plus a linear term over a convex compact set. Because of its particular form, it can be solved to global optimality for a sufficiently large number of variables, for example, for n ≈ 30-50. In doing so, we achieve a better covering of the set X by a family of points. We believe that a combination of the proposed approach and advanced metaheuristics [11] will be of serious practical importance.
The first approach. The most attractive statement of the problem can be formalized as follows:

t → max, t = ∥x^i − x^j∥², x^i, x^j ∈ X, 1 ≤ i < j ≤ p. (1)

Problem (1) means that it is necessary to allocate p points such that the distance between any two points is the same and as large as possible. In this case, the set {x^1, . . ., x^p} is called the set of equidistant points. However, it is well known that Problem (1) is solvable only if p ≤ n + 1. When p = n + 1, the points {x^1, . . ., x^{n+1}} are the vertices of a regular simplex. If ∥x^i − x^j∥ = δ, 1 ≤ i < j ≤ n + 1, then all points x^i belong to a sphere centered at the centroid of the simplex, whose radius is related to δ by (2). However, in many applications, it is necessary to allocate more than n + 1 points.
The second approach. We move to another problem of the following form: allocate p points such that the minimum distance between any two of them is as large as possible,

max { min_{1 ≤ i < j ≤ p} ∥x^i − x^j∥² : x^1, . . ., x^p ∈ X }. (3)

Problem (3) always has a solution, since the objective function is continuous and the feasible set is nonempty and compact. The objective function is nonsmooth, but this can be avoided by the standard reduction of Problem (3) to the following one:

t → max, t ≤ ∥x^i − x^j∥², x^i ∈ X, 1 ≤ i < j ≤ p. (4)

Two main difficulties are unavoidable when solving Problem (4). Firstly, the number of variables is equal to np + 1. Secondly, the feasible domain is nonconvex. Hence, we have to overcome the nonconvexity of the feasible domain, but we are seriously restricted in the dimension n.
The third approach. Given p − 1 points v^1, . . ., v^{p−1} ∈ X, find the point v^p as a solution to the problem

φ_p(v^p) = max { φ_p(x) : x ∈ X }, φ_p(x) = min_{1 ≤ i ≤ p−1} ∥x − v^i∥. (5)

As a result, the set X is covered by p balls centered at v^1, . . ., v^p with radius r_p equal to φ_p(v^p).
The theoretical foundation of the approach based on solving Problem (5) is given by the following theorem.
Theorem 1. r_p → 0 as p → ∞.
Proof. The functions φ_p, p = 2, . . . are Lipschitz functions with the same Lipschitz constant; therefore, φ_p, p = 2, . . . is an equicontinuous sequence of functions. Since X is a compact set, φ_p(x) ≤ D(X) < +∞, where D(X) is the diameter of X, so the functions φ_p, p = 2, . . . are uniformly bounded. By construction, φ_p(x) ≤ φ_{p−1}(x) for all x ∈ X. Hence, due to the Arzelà-Ascoli theorem, φ_p, p = 2, . . . is a sequence of functions uniformly convergent to a continuous function η : X → R. Assume that lim_{p→∞} r_p = r̄ > 0. Since r_p is nonincreasing, r_p = φ_p(v^p) ≥ r̄ for all p, so the points v^1, v^2, . . . are pairwise at distance at least r̄ from each other. An infinite set of such points cannot exist in the compact set X; due to the continuity of η, we obtain lim_{p→∞} r_p = 0, a contradiction, which proves the theorem.
Hence, we can theoretically achieve a covering of X by a number of balls with sufficiently small radius. In practice, especially in high dimensions, we restrict ourselves to a reasonable value of p.
Let us rewrite Problem (5) in a more computationally tractable form. The point v^p is the maximum distant point from the points v^1, . . ., v^{p−1}:

max { min_{1 ≤ i ≤ p−1} [ x^⊤x − 2(v^i)^⊤x + (v^i)^⊤v^i ] : x ∈ X }. (7)

The feasible domain in (7) is convex, and each function under the minimum is a convex quadratic with the same quadratic part x^⊤x. Therefore, we have a problem of the convex maximization type, and special advanced methods [12] can be used for solving (7).
In our paper, we develop the iterative scheme of the third approach based on solving problems of type (7). The description is the following. Take an arbitrary first point v^1. The other points are determined as the solutions to Problem (7) for p = 2, 3, . . .. The points are found sequentially: a new point is determined after finding the previous ones. This is why we call the points v^1, v^2, . . ., v^p obtained on the basis of the iterative solution of Problem (7) sequentially maximum distant points, or simply sequentially distant points. Notation: e_j, j = 1, . . ., n are the unit vectors with 1 in the j-th position and 0 elsewhere; x_j is the j-th component of a vector x ∈ R^n; x^i is the i-th vector in a sequence of n-dimensional vectors x^1, . . ., x^i, . . .; x^⊤y is the dot (inner) product of vectors x, y ∈ R^n.
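This sequential scheme is easy to prototype. The sketch below replaces X by a finite candidate grid (an illustrative assumption; the paper solves Problem (7) over the full set X with global solvers) and greedily picks each new point as the candidate farthest from the already-chosen ones:

```python
import numpy as np

def sequentially_distant_points(candidates, p, start_index=0):
    """Greedy sketch of the third approach: each new point maximizes the
    minimum distance to the previously chosen points (cf. Problem (7)),
    here over a finite candidate set instead of the full convex set X."""
    chosen = [candidates[start_index]]
    for _ in range(p - 1):
        # distance from every candidate to its nearest already-chosen point
        diffs = candidates[:, None, :] - np.array(chosen)[None, :, :]
        nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
        chosen.append(candidates[int(np.argmax(nearest))])
    return np.array(chosen)

# X = unit square discretized on a 21x21 grid (illustrative assumption)
g = np.linspace(0.0, 1.0, 21)
grid = np.array([(a, b) for a in g for b in g])
pts = sequentially_distant_points(grid, p=4)   # starts from the vertex (0, 0)
```

Starting from the vertex (0, 0), the first four points come out as the four corners of the square, which mirrors the vertex-seeking behaviour observed later in Example 3.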

Allocation of Points in the Unit Ball
Assume that X is the unit ball, that is, X = B = {x ∈ R^n : x^⊤x ≤ 1}. In this case, Problem (5) can be solved analytically. The obtained points are called ball sequentially distant points. We start with the problem of setting n + 1 equidistant points in B, which is equivalent to inscribing a regular simplex in B. The distance between the points can be determined from (2) with R = 1:

δ = √(2(1 + 1/n)). (8)

Since the points are equidistant and lie on the boundary of B,

(x^i)^⊤x^j = 1 − δ²/2 = −1/n, 1 ≤ i < j ≤ n + 1. (9)

Due to the symmetry of B, we can set x^1 = e_1 = (1, 0, . . ., 0)^⊤. Since the points x^j, j = 2, . . ., n + 1 belong to the intersection of a plane orthogonal to x^1 and the boundary of B, we can choose the point x^2 as a point with the maximal number of zero components. Therefore, we set x^2 = (x^2_1, x^2_2, 0, . . ., 0)^⊤ with (x^1_1 − x^2_1)² + (x^2_2)² = δ² and (x^2_1)² + (x^2_2)² = 1. From these two equations and (9), we obtain x^2 = (−1/n, √(n² − 1)/n, 0, . . ., 0)^⊤. Now, let us repeat the same consideration for the (n − 1)-dimensional ball obtained as the intersection of the plane {x ∈ R^n : x_1 = −1/n} with B and centered at (−1/n, 0, . . ., 0)^⊤. After repeating this consideration for the remaining cases, we obtain the final description (10) of the equidistant points in the unit ball.
Let us switch now to the construction of the sequentially maximum distant points. Again, due to the symmetry of B, the starting point is v^1 = e_1. The maximum distant point from e_1 in B is −e_1, so we set v^{n+1} = −e_1. The next point is determined as a solution to

max { min{ ∥x − e_1∥², ∥x + e_1∥² } : x ∈ B }. (11)

Let us introduce the sets X_21 = {x ∈ B : x_1 ≥ 0} and X_22 = {x ∈ B : x_1 ≤ 0}. Then, solving Problem (11) is reduced to solving the following two problems:

max { f_21(x) = ∥x∥² − 2x_1 + 1 : x ∈ X_21 } (12)

and

max { f_22(x) = ∥x∥² + 2x_1 + 1 : x ∈ X_22 }. (13)

An upper bound for the maximum value in (12) is given by f_21(x) ≤ 2 − 2x_1 ≤ 2 for all x ∈ X_21, and this bound is achieved, for example, at the point e_2: f_21(e_2) = 2. Therefore, e_2 is a solution to Problem (12). Similarly, f_22(x) = ∥x∥² + 2x_1 + 1 ≤ 2 + 2x_1 ≤ 2 for all x ∈ X_22; the upper bound is also achieved at e_2, and f_22(e_2) = 2. Hence, e_2 is a solution to Problem (13). The latter means that e_2 is a solution to Problem (11), and we can set v^2 = e_2.
The point v^{n+2} is determined as the maximum distant point from e_1, −e_1, and e_2:

max { min{ ∥x − e_1∥², ∥x + e_1∥², ∥x − e_2∥² } : x ∈ B }. (14)

Determine the sets X_31, X_32, and X_33 analogously to X_21 and X_22. Problem (14) is reduced to finding the solutions to three auxiliary problems. In the first two cases, the maximum value 2 is attained at the point −e_2. For the last auxiliary problem, we have f_32(x) ≤ 2 − 2x_2 for all x ∈ X_32; that is, the corresponding maximum value cannot be greater than 2.
Therefore, the point v^{n+2} = −e_2 is a solution to Problem (14). So far, four points v^i = e_i, v^{n+i} = −e_i, i = 1, 2 have been obtained. We are going to prove by induction that the same principle holds for 2n points. The basis of induction: the hypothesis is true for k = 2. The induction step: let us prove that the hypothesis is true for the case k + 1. Consider the problem

max { min{ ∥x − v^i∥², ∥x + v^i∥², i = 1, . . ., k } : x ∈ B }. (15)

Define for i ∈ K = {1, . . ., k} the corresponding sets on which the minimum is attained by each distance. Then, Problem (15) disintegrates into 2k problems of the same type as (12) and (13). As above, we can take e_{k+1} as a solution to Problem (15) and set v^{k+1} = e_{k+1}. Let us consider now the next problem:

max { min{ ∥x − v^i∥², ∥x + v^i∥², i = 1, . . ., k, ∥x − e_{k+1}∥² } : x ∈ B }. (18)

Using the same arguments as earlier, it is easy to see that f_{k+2}(x) ≤ 2 for all x ∈ B and f_{k+2}(−e_{k+1}) = 2. Hence, we can accept −e_{k+1} as a solution to (18) and set v^{n+k+1} = −e_{k+1}. Therefore, the first 2n points are determined as

v^i = e_i, v^{n+i} = −e_i, i = 1, . . ., n. (19)

The maximum distance between any two points in (19) is equal to 2, and the minimum distance between any two points is equal to √2. Let us now determine the point v^{2n+1}. In order to do this, we have to solve the problem

max { f(x) = min{ ∥x − v^i∥², ∥x − v^{n+i}∥², i = 1, . . ., n } : x ∈ B }. (20)

Rewrite f as follows:

f(x) = ∥x∥² + 1 − 2 max{ |x_i|, i = 1, . . ., n }. (21)

The maximal value of the expression in (21) over B is equal to 1 and is achieved at the origin 0 = (0, . . ., 0)^⊤. From (19) and (20), we have v^{2n+1} = 0. The maximum distance between any two points in the set {v^i, v^{n+i}, i = 1, . . ., n, v^{2n+1}} is equal to 2, and the minimum distance is equal to 1. The solution to the next problem is given by the point v^{2n+2} = (1/√n)(1, . . ., 1)^⊤. Finally, the sequentially distant 2n + 1 + 2^n points for the unit ball are given by

v^i = e_i, v^{n+i} = −e_i, i = 1, . . ., n, v^{2n+1} = 0, (22)

together with the 2^n points

(±1/√n, . . ., ±1/√n)^⊤. (23)

The distance from each of the points (23) to the center is equal to 1. Due to the symmetry of the ball, the minimum distance can be determined as the distance between v^{2n+2} and any point v^j, j = 1, . . ., n; for example, ∥v^{2n+2} − e_1∥ = √(2 − 2/√n). Thus, the points (22) and (23) are calculated without solving the corresponding optimization problems.
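The equidistant construction at the beginning of this section can be checked numerically. A standard alternative to the coordinate-by-coordinate derivation (an auxiliary trick for verification, not the paper's method) is to center the unit vectors of R^{n+1} at their mean and normalize; the n + 1 resulting points span an n-dimensional subspace, lie on the unit sphere, and are pairwise at the distance δ from (8):

```python
import numpy as np

def regular_simplex_vertices(n):
    """n + 1 equidistant unit vectors: take e_1, ..., e_{n+1} in R^{n+1},
    subtract their centroid, and normalize.  The points form a regular
    simplex inscribed in the unit sphere of an n-dimensional subspace."""
    E = np.eye(n + 1)
    V = E - E.mean(axis=0)          # subtract the centroid
    return V / np.linalg.norm(V, axis=1, keepdims=True)

n = 10
V = regular_simplex_vertices(n)
delta = np.sqrt(2 * (1 + 1 / n))    # the pairwise distance predicted by (8) with R = 1
```

All pairwise distances agree with (8), confirming the value δ = √(2(1 + 1/n)).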
The above procedures can be generalized to the allocation of points in a general ball B(x_c, R) = {x ∈ R^n : ∥x − x_c∥ ≤ R}.
Case A. Generalization of the n + 1 equidistant points. We add the center x_c to the set of points and obtain the following n + 2 ball sequentially distant points:

v^i = x_c + R u^i, i = 1, . . ., n + 1, (24)
v^{n+2} = x_c, (25)

where u^1, . . ., u^{n+1} are the equidistant points (10) for the unit ball. The obtained points are not equidistant: the maximum distance between any two points is equal to R√(2(1 + 1/n)) (see (8)), and the minimum distance is equal to R.
Case B. Ball sequentially distant 2n + 1 points. These points are just a direct generalization of (22):

v^i = x_c + R e_i, v^{n+i} = x_c − R e_i, i = 1, . . ., n, v^{2n+1} = x_c. (26)

The maximum distance is equal to 2R, and the minimum distance is equal to R.
Case C. Ball sequentially distant 2^n + 2n + 1 points. Then, the points are determined as follows:

v^i = x_c + R e_i, v^{n+i} = x_c − R e_i, i = 1, . . ., n, v^{2n+1} = x_c, (27)
v^{2n+1+j} = x_c + (R/√n)(±1, . . ., ±1)^⊤, j = 1, . . ., 2^n. (28)

The maximum distance between any two points is equal to 2R, and the minimum distance is equal to R√(2 − 2/√n), the distance between a point of (28) and the nearest point of (27).
Let us compare the allocation of the ball sequentially distant 2n + 1 points from (26), without the center v^{2n+1}, with a uniform distribution over the unit sphere. We take the minimum distance between two points as a measure of allocation efficiency: the greater the minimum distance, the better the allocation. The uniform distribution over the unit sphere is obtained by normalizing points generated from the normal distribution with mean 0 and standard deviation 1. The minimum distance between two ball sequentially distant points is √2 ≈ 1.414 for any n. If we uniformly distribute 200 points over the unit sphere in the 100-dimensional case, then the minimum distance is on average 1.098 (over 10 repetitions). Therefore, the ball sequentially distant allocation is almost 30% better than the uniform allocation.
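The comparison with uniform sphere sampling is easy to reproduce. The sketch below uses an assumed random seed and the normalization construction just described:

```python
import numpy as np

def min_pairwise_distance(P):
    """Minimum distance between two distinct points (rows) of P."""
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return D.min()

n = 100
# the 2n ball sequentially distant points +/- e_i of (19): min distance sqrt(2)
structured = np.vstack([np.eye(n), -np.eye(n)])
# 2n points uniform on the unit sphere: normalize standard normal samples
rng = np.random.default_rng(0)
Z = rng.standard_normal((2 * n, n))
uniform = Z / np.linalg.norm(Z, axis=1, keepdims=True)
```

For n = 100, the structured allocation attains the minimum distance √2 exactly, while the random spherical points land noticeably closer to each other, in line with the figures quoted above.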

Mapping the Ball Sequentially Distant Points on a Compact Convex Set
Let X be a convex compact set defined by a system of inequalities g_i(x) ≤ 0, i = 1, . . ., m, where the g_i are convex and twice continuously differentiable functions and int(X) ≠ ∅. We use the concept of the analytic center x_a [13]. The point x_a is the solution to the convex optimization problem

max { F(x) = ∑_{i=1}^{m} ln(−g_i(x)) }, (29)

where F is a twice continuously differentiable concave function. Since int(X) ≠ ∅, we have g_i(x_a) < 0, i = 1, . . ., m, so the following ellipsoid can be defined:

E = {x ∈ R^n : (x − x_a)^⊤ H (x − x_a) ≤ 1}, (30)

where H is the Hessian of −F at x_a. Then, X ⊃ E. The Hessian H can be represented as H = U^⊤ΛU, where U is an n × n orthogonal matrix whose rows are eigenvectors of H, and Λ is an n × n diagonal matrix with the eigenvalues λ_i > 0, i = 1, . . ., n on the main diagonal. Let us introduce new variables y = Λ^{1/2}U(x − x_a). Then, in the variables y, the ellipsoid E in (30) becomes the unit ball B = {y ∈ R^n : y^⊤y ≤ 1}. Let {v^i, i = 1, . . ., N} be ball sequentially distant points in the y-space constructed according to Case A (N = n + 2), Case B (N = 2n + 1), or Case C (N = 2^n + 2n + 1) from the previous section. In the x-space, we define the points

w^i = x_a + U^⊤Λ^{−1/2}v^i, i = 1, . . ., N. (31)

Example 1. Consider the following problem: We use Case C from the previous section, so N = 2^n + 2n + 1 = 9 for n = 2. The points v^i, i = 1, . . ., 9 are determined by (27) and (28) with R = 1; the points w^i = x_a + U^⊤Λ^{−1/2}v^i, i = 1, . . ., 9; the points x^{*,i} are stationary points determined by the CONOPT solver [1] starting from the points w^i; and f^{*,i} = f(x^{*,i}) are the corresponding objective function values (see Table 1).
We can see from Table 1 that the global minimum point was determined three times. In the other six cases, different stationary points were found, with two points x^{*,2} and x^{*,3} having the same value −0.656, and two points x^{*,5} and x^{*,8} having the value −0.885.
The geometrical interpretation of the points w^i, i = 1, . . ., 9 and the ellipsoid (shown as a dashed curve) is given in Figure 1. The advantage of the proposed approach consists in the following: well-allocated points can be determined even in "narrow" and arbitrarily oriented convex compact sets, since the ellipsoid (30) provides a good inner approximation of X.
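The change of variables behind (30) and (31) can be sketched as follows (the matrix H and the center x_a below are assumed illustrative data, not the Hessian of Example 1):

```python
import numpy as np

def map_ball_to_ellipsoid(V, H, x_a):
    """Map points v (rows of V) from the unit ball in y-space to
    w = x_a + U^T Lambda^{-1/2} v, as in (31), so that each w lies in the
    ellipsoid E = {x : (x - x_a)^T H (x - x_a) <= 1} of (30).  H must be
    symmetric positive definite (in the paper, the Hessian of -F at x_a)."""
    lam, Ucols = np.linalg.eigh(H)     # H = Ucols diag(lam) Ucols^T
    U = Ucols.T                        # rows of U are eigenvectors: H = U^T Lambda U
    return x_a + V @ np.diag(lam ** -0.5) @ U   # row form of x_a + U^T Lambda^{-1/2} v

# assumed illustrative data
H = np.array([[4.0, 1.0], [1.0, 3.0]])
x_a = np.array([0.5, 1.0])
V = np.vstack([np.eye(2), -np.eye(2)])   # ball points +/- e_i
W = map_ball_to_ellipsoid(V, H, x_a)
```

Each image w satisfies (w − x_a)^⊤H(w − x_a) = ∥v∥², so unit vectors v are mapped exactly onto the boundary of the ellipsoid E.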
Example 2. We extend the proposed approach to solve the following problem [15]: Set X is determined by the following system: The points v^i, i = 1, . . ., 2n + 1 = 27 were determined according to Case B (26). The points w^i, i = 1, . . ., 27 were computed by (31), where x_a is the analytic center of X. Since the objective function is nonconvex and quadratic, the global minimum is achieved on the boundary of X. The points u^i were obtained as the intersections of the rays x_a + τ(w^i − x_a), τ ≥ 0, i = 1, . . ., 27 with the boundary of X. Then, the multistart procedure started from the points u^i was applied, and the global minimum x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)^⊤, f(x*) = −15 was found.

Allocation of an Arbitrary Given Number of Points
In the previous section, the number of allocated points was equal to n + 2, 2n + 1, or 2^n + 2n + 1, and the allocation procedure was based on setting the points in a ball. In this section, we assume that the number of allocated points is p, which may differ from the previous values, and, more importantly, the allocation procedure is not connected to a ball. The price for such an approach is the sequential solution of a special global optimization problem.
Problem (7) is to be solved iteratively, as was announced in the Introduction. This is a problem of the global maximization of a convex quadratic function over a bounded polyhedral set. Hence, special methods can be used for its solution.
Let the number p of allocated points be given. The first point v^1 can be chosen arbitrarily. The remaining points are found by solving the global optimization problem

max { min_{1 ≤ i ≤ k−1} ∥x − v^i∥² : x ∈ X }, k = 2, . . ., p. (32)

In solving the examples below, we used the SCIP solver [16] for finding the global maximum in Problem (32).

Example 3. The number of allocated points is p = 16.
Since the feasible set is a polytope, it was decided to start from the vertex v^1 = (0, 0)^⊤. In Figure 2, a geometrical interpretation of the allocated points is given. In Table 2, the coordinates of the vectors v^i are given, and r² is the squared maximum distance from the current point to the previous ones.
Example 4. The starting vertex is v^1 = (0, 3)^⊤. The determined points are shown in Figure 3. Table 3 contains the coordinates of the v^i and again the squared maximum distances (r²) from the current point to the previously found ones. This example shows that the vertices of the given polytope are not necessarily covered by the points v^i: the vertex (3, 6)^⊤ is not covered.
In practice, it is enough to find a new point that is sufficiently far from the previous points. Hence, a good local solver can be used for finding a solution to Problem (32). In the testing below, we used the IPOPT solver [17] for this purpose. In the testing problems, the feasible set X was a bounded polyhedral set whose defining data were generated randomly in such a way that int(X) ≠ ∅. The first two points v^1 and v^2 are approximate solutions to the problem

max { ∥x − y∥² : x ∈ X, y ∈ X }, (33)

where v^1 = x*, v^2 = y*. For solving Problem (33), the SCIP solver was used with a solution time limit of 30 s. The number of points was equal to 100. The solutions to the corresponding problems (32) for k = 3, . . ., 99 were obtained by the IPOPT solver. The last point, v^100, was obtained by the SCIP solver with the time limit increased to 300 s. In Table 4, n is the number of variables, m is the number of rows in the matrix A, ∆12 = ∥v^1 − v^2∥, δ is the obtained maximum distance from the last point v^100 to the previous ones, and T is the solving time in seconds. Testing was performed on an Intel Core i7-3610QM (2.3 GHz, 8 GB DDR3 memory). In the problems with five and ten variables, globally optimal solutions were found. In other words, for example, when n = 10, the diameter of X was equal to 1931.523, and the exact maximum distance from the 99 previous points to the point v^100 was equal to 608.201. In higher-dimensional problems, approximate solutions were determined.
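As a stand-in for a local solver such as IPOPT (the box X, step sizes, and tolerances below are assumptions for illustration), even a simple coordinate search can climb the objective min_i ∥x − v^i∥² of Problem (32):

```python
import numpy as np

def local_max_min_dist(x0, lo, hi, previous, step0=0.25, tol=1e-6, max_rounds=10000):
    """Coordinate-search hill climbing for max min_i ||x - v_i||^2 over the
    box {lo <= x <= hi}: a crude local-solver sketch for Problem (32)."""
    previous = np.asarray(previous, float)
    g = lambda x: ((previous - x) ** 2).sum(axis=1).min()
    x = np.clip(np.asarray(x0, float), lo, hi)
    step = step0
    for _ in range(max_rounds):
        improved = False
        for j in range(x.size):
            for s in (step, -step):
                y = x.copy()
                y[j] = np.clip(y[j] + s, lo[j], hi[j])
                if g(y) > g(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, g(x)

# toy run: unit box, one previous point at the origin
lo, hi = np.zeros(2), np.ones(2)
x, val = local_max_min_dist([0.3, 0.3], lo, hi, previous=[[0.0, 0.0]])
```

From the interior start (0.3, 0.3) the search climbs to the opposite corner (1, 1), the point of the box farthest from the single previous point; as a purely local method, it offers no global guarantee on harder instances, which is exactly why the last point in the testing above was handed to SCIP.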

Two Kinds of Multistart Strategy
We know that the feasible set X can be covered by p balls with centers at v^1, . . ., v^p and with radius r_p = φ_p(v^p) (see Problem (5)). Consider the p optimization problems

min { f(x) : x ∈ X, ∥x − v^j∥ ≤ r_p }, j = 1, . . ., p. (34)

Let x^{♯,j}, j = 1, . . ., p be the points obtained as a result of applying the CONOPT solver to Problems (34) using v^j, j = 1, . . ., p as the starting points. Compare Problem (34) with the following one:

min { f(x) : x ∈ X }. (35)

Let x^{*,j}, j = 1, . . ., p be the solutions of (35) obtained also by the CONOPT solver applied p times from the points v^j, j = 1, . . ., p as the starting points. The points x^{♯,j}, j = 1, . . ., p have a "local nature" because of the constraints ∥x − v^j∥ ≤ r_p, j = 1, . . ., p. Therefore, we can make the following assumption: the set Ω♯_p = {x^{♯,j} : j = 1, . . ., p} contains more different local minima than the set Ω*_p = {x^{*,j} : j = 1, . . ., p}. It is not difficult to construct an example in which all points x^{♯,j}, j = 1, . . ., p, as well as all points x^{*,j}, j = 1, . . ., p, are points of different local minima. The first multistart strategy is connected to the construction of the sets Ω*_p; the second multistart strategy is connected to the construction of the sets Ω♯_p. However, in practice there can be a significant difference between these sets of points for particular cases. Let us consider the following examples.
Example 5. The global minimum is unique, x_g = (−3.2, 12.53)^⊤, f(x_g) = 5.559. When p ≤ 17, the sets Ω♯_17 and Ω*_17 do not contain the global minimum point. The set Ω*_18 contains nine different local minima, and one of them is the global minimum. The set Ω♯_18 also contains nine different local minima, and one of them is the global minimum. The sets Ω*_18 and Ω♯_18 do not coincide, and their union Ω*_18 ∪ Ω♯_18 contains ten different local minima, and one of them is the global minimum.
Example 6. The global minimum is unique, x_g = (0, 0)^⊤, f(x_g) = 0. The set Ω*_5 contains five different local minima, and one of them is the global minimum. When p ≤ 4, the sets Ω*_p do not contain the global minimum. As for the sets Ω♯_p, they contain the global minimum for p ≥ 26. The set Ω♯_26 contains twenty-five different local minima, and one of them is the global minimum. In comparison, the set Ω*_26 contains eighteen different local minima, and one of them is the global minimum.
Example 8. Consider the Mishra problem: The problem has the unique global minimum x_g = (−1.987, −10)^⊤, f(x_g) = −0.1198. The sets Ω*_p contain the global minimum for p ≥ 6, while the set Ω*_6 contains five different local minima, and one of them is global. The sets Ω♯_p do not contain the global minimum for p ≤ 600. The corresponding radius of each covering ball for p = 600 is equal to 0.625. Hence, the Mishra problem has very many "narrow" points of local minima.
Example 9. Consider the Price problem: The global minimum is unique, x_g = (0, 0)^⊤, f(x_g) = 0.9. The sets Ω*_p contain the global minimum for p ≥ 26. The sets Ω♯_p contain the global minimum for p ≥ 13.
Example 10. Consider the problem with the objective function containing the terms
. . . + sin(70 sin(x_1)) + sin(sin(80 x_2)), . . .
The global minimum is unique, x_g = (−0.0244, 0.2106)^⊤, f(x_g) = −3.3069. Even assuming the differentiability of the objective function and the finiteness of the set of local minima, it is generally not possible to assess the number of local minima in advance. Therefore, we propose the following approach. Assess the number p of local minima from some additional practical considerations. Then, construct the set Ω*_p containing a good local minimum point or even a global minimum point. After that, construct the set Ω♯_p to enlarge the number of local minima and to catch situations similar to the Price function. Due to the very high efficiency of the CONOPT solver, finding the sets Ω*_p and Ω♯_p is not too computationally demanding. We can obtain a practical assessment of the number of minima of the objective function by using such a mixture of the two kinds of multistart strategy. If the number of total determined local minima is not very large (for example, many of them are found many times), then we can conclude that we have performed a good exploration of the objective function. Otherwise, we conclude that the objective function has a very complicated structure.
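The difference between the two strategies can be sketched with a toy local solver. The coordinate search below is a stand-in for CONOPT, and the separable Rastrigin-type function is an assumed example, not one of Examples 5-10; the ball constraint ∥x − v∥ ≤ r is what gives the points x^{♯,j} their "local nature":

```python
import numpy as np

def local_min_in_ball(f, v, r, step0=0.25, tol=1e-6, max_rounds=10000):
    """Coordinate-search local minimization of f over {x : ||x - v|| <= r},
    started from the ball center v (a sketch of one run of (34); use
    r = np.inf to obtain one run of the unconstrained strategy (35))."""
    v = np.asarray(v, float)
    x = v.copy()
    step = step0
    for _ in range(max_rounds):
        improved = False
        for j in range(x.size):
            for s in (-step, step):
                y = x.copy()
                y[j] += s
                if np.linalg.norm(y - v) <= r and f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x

# separable Rastrigin-type function: local minima near the integer points
f = lambda x: float(np.sum(x ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))))

x_ball = local_min_in_ball(f, v=[0.9, 0.9], r=0.3)      # trapped in the nearby basin
x_free = local_min_in_ball(f, v=[0.4, 0.4], r=np.inf)   # free run reaches the global basin
```

The ball-constrained run from (0.9, 0.9) settles in the local basin near (1, 1), while the unconstrained run from (0.4, 0.4) descends into the global basin at the origin: the constrained starts chart more distinct basins, and the free starts tend to reach deeper minima, mirroring the trade-off between Ω♯_p and Ω*_p discussed above.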

Testing Sequentially Distant Points in Optimization Problems
We present the results of testing the comparative efficiency of using sequentially distant and randomly generated points in solving optimization problems. Three strategies, A, B, and C, based on the cases from the section on the allocation of points in a ball, are tested. The optimization problems are problems of minimizing highly nonlinear functions over a box (parallelepiped). Firstly, the maximum-radius ball centered at the center of the parallelepiped is constructed. Secondly, for strategy A, the n + 2 ball sequentially distant points corresponding to (24)-(25) are determined. For strategy B, the 2n + 1 points based on (26) are determined. For strategy C, we use the points (28) plus the center of the parallelepiped, in total 2^n + 1 points.
We used the multistart strategy with the generated points as the starting points. Strategies A, B, and C are compared with random strategies Rnd_A, Rnd_B, and Rnd_C of the corresponding sizes. In strategy Rnd_A, n + 2 uniformly distributed points are generated; in strategy Rnd_B, the number of uniformly distributed points is 2n + 1; and in strategy Rnd_C, the number of uniformly distributed points is 2^n + 1. In all strategies, a parallel local search process based on the CONOPT solver was started.
In Tables 6-9, the column "Duplicated Solutions" shows the number of points that were found several times; the column "Different Solutions" shows the number of different found points; the column "Different Minimum Values" shows the number of different local minimum values among the different solutions (i.e., there could be different local minimum points with the same objective value); the column "Record Value" shows the value of the objective function at the best point; in the column "Global Minimum," the sign "+" means that the global minimum was found, otherwise the sign "−" is used; and the column "Time" shows the total solution time in seconds. Testing was performed on an Intel Core i7-3610QM computer (2.3 GHz, 8 GB DDR3 memory). All computations were performed in the GAMS demo version.
Strategies C and Rnd_C were used only for the dimensions n = 5 and n = 10, since they are of exponential complexity.
The global minimum is x* = (0, . . ., 0)^⊤, f(x*) = 0. The testing results are given in Table 7. Let us make some comments on the results in Table 7. A uniform distribution of the starting points happened to be very inefficient: the best solution is very far from the optimum. Take, for example, the case n = 300. Strategy A returned 302 local minima with 11 different objective function values; checking the list of local minimum points shows that there are 78 different local minimum points, with the best value being 0.995. Therefore, strategy A shows that there are quite a number of different local minima with objective values close to the optimal one. Formally, the same can be said about strategies Rnd_A and Rnd_B: these random strategies also found a large number of different local minima; however, the objective function values are very far from the optimal value.

Figure 1. Starting points w^i in the feasible domain and the inscribed ellipsoid in Example 1.

Figure 2. Allocation of the starting points in Example 3.

Figure 3. Allocation of starting points in Example 4.

Example 7. Consider the egg crate problem.
Schwefel function. Consider the optimization problem
f(x) = 418.9829 n − ∑_{i=1}^{n} x_i sin(√|x_i|) → min, x ∈ Π = {x ∈ R^n : −500 ≤ x_i ≤ 500, i = 1, . . ., n}.

Table 1. Starting and stationary points in Example 1.

Table 2. Points and distances in Example 3. The number of allocated points p = 16.

Table 3. Points and distances in Example 4.

Table 4. Initial and final distances for testing Problem (33).
For p ≤ 5, the sets Ω♯_p and Ω*_p do not contain global minimum points. When p = 6, the set Ω♯_6 contains five different local minima, and one of them is a global minimum. The set Ω*_6 contains four different local minima, and one of them is a global minimum. The sets Ω*_p start to contain a global minimum from p = 5. The set Ω*_5 contains only two different local minima, and one of them is global. The sets Ω♯_p contain a global minimum when p ≥ 29, and all twenty-nine local minima of the set Ω♯_29 are different.
This problem has very many local minima. For example, the set Ω*_30 consists of thirty different local minima, with no global minimum among them. The set Ω♯_30 contains twenty-eight new different local minima in addition to those of Ω*_30, again with no global minimum among them. Therefore, the union Ω*_30 ∪ Ω♯_30 contains fifty-eight different local minima and no global minimum. Only for p ≥ 570 do the sets Ω*_p contain the global minimum. The set Ω♯_570 contains five hundred seventy different local minima and no global minimum. The radius of each of the five hundred seventy balls that cover the feasible set is equal to 0.625. In all considered examples, the following properties should be mentioned. As a rule, the sets Ω*_p need fewer points to detect a global minimum. Example 8 with the Mishra function provides a very remarkable confirmation of this assumption: only six points were used in the set Ω*_6 to cover the global minimum, whereas even six hundred points were not enough to detect the global minimum in the case of the set Ω♯_600. The price for such behaviour is that many points in the sets Ω*_p are found several times, in contrast to the sets Ω♯_p. We also have to keep in mind that in Example 9 with the Price function, the situation is the opposite: thirteen points sufficed to detect the global minimum for the set Ω♯_13, while twenty-six points were needed for the set Ω*_26. The number of different local minimum points in the sets Ω♯_p is usually larger than in the sets Ω*_p. Nevertheless, the local minimum points in the sets Ω*_p, although fewer in number, usually (but not always) have lower objective function values. Let us compare the sets Ω*_p and Ω♯_p for all tested problems and for the same number of points p = 20, that is, we compare the sets Ω*_20 and Ω♯_20. The results of the comparison are given in Table 5.
Column N*_L (N*_G) shows the number N*_L of different local minima in the corresponding sets Ω*_20, with N*_G being the number of global minima among them. Similarly, column N♯_L (N♯_G) shows the number N♯_L of different local minima in the sets Ω♯_20, with N♯_G being the number of global minima among them.

Table 5. Comparison of two multistart strategies.

Table 7. Testing results for the Rastrigin function.

Table 9. Testing results for the Levy function.