A New Parameterless Filled Function Method for Global Optimization

The filled function method is an effective way to solve global optimization problems. However, its effectiveness is greatly affected by the selection of parameters and by non-continuous or non-differentiable properties of the constructed filled function. To overcome these drawbacks, this paper proposes a new parameterless filled function that is continuous and differentiable. Theoretical proofs are given for the properties of the proposed filled function. Based on the new filled function, a filled function algorithm is proposed to solve unconstrained global optimization problems. Experiments are carried out on widely used test problems and on an application to supply chain problems with equality and inequality constraints. The numerical results show that the proposed filled function is effective.


Introduction
Global optimization is a rich and widely applied subject in mathematics. With the development of science and information technology, global optimization has been applied to economic models, finance, image processing, machine design and so on. Therefore, the theories and methods of global optimization need to be studied deeply. Through the efforts of scholars in this field, various methods have been developed for global optimization. However, finding the global optimal solution is usually not easy due to two properties of global optimization problems: (1) there usually exist many local optimal solutions, and (2) optimization algorithms are easily trapped in local optimal solutions and unable to escape. Therefore, one key problem is how to help the optimization method escape from local optimal solutions. The filled function method is specifically designed to solve this problem. We now introduce some basic information about the filled function method.
The filled function method was first proposed by Ge [1], who constructed an auxiliary function named the filled function to help the optimization algorithm escape from local optimal solutions. In the following, we introduce the original definition of the filled function proposed by Ge [1] and its related concepts. In this paper, we consider the unconstrained optimization problem min F(x), x ∈ R^n, where n is the dimension of the objective function F(x), which is continuous and differentiable. F(x) has a finite number of local optimal solutions x*_1, x*_2, ..., x*_m. Suppose x*_k is the local optimal solution found by the optimization algorithm in the kth iteration; the definition of the basin B*_k is as follows.
Definition 1. The basin B*_k of the objective function F(x) at an isolated minimum (local optimal solution) x*_k is the connected domain that contains x*_k in which the steepest descent trajectory of F(x) converges to x*_k from any initial point; outside the basin, the steepest descent trajectory of F(x) does not converge to x*_k.
A basin B*_1 at x*_1 is lower (or higher) than a basin B*_2 at x*_2 iff F(x*_1) < F(x*_2) (or F(x*_1) > F(x*_2)). A basin is an area that contains one local optimal solution. Within this area, a gradient descent algorithm converges to the corresponding local optimal solution regardless of the initial point. One basin is lower than another if its corresponding local optimal value is smaller (better, for minimization problems).

Definition 2. A function P(x, x*_1) is called a filled function of F(x) at a local minimum x*_1 if it satisfies the following properties:
1. x*_1 is a strictly local maximum of P(x, x*_1), and the whole basin B*_1 of F(x) becomes a part of a hill of P(x, x*_1).
2. P(x, x*_1) has no minima or stationary points in any basin of F(x) higher than B*_1.
3. If F(x) has a basin lower than B*_1, then there is a point x' in that basin that minimizes P(x, x*_1) on the line through x' and x*_1.
From the definition of the filled function, we can see that the three properties together ensure the optimization algorithm escapes from one local optimal solution to a better one. Suppose the optimization algorithm is trapped in a local optimal solution x*_1 and cannot escape, and we construct a filled function to help it escape from x*_1. The first property makes the local optimal solution x*_1 a local worst solution of the filled function; hence, when the filled function P(x, x*_1) is optimized, the search will surely leave this local worst solution, i.e., escape the local optimum. The second property ensures that when optimizing P(x, x*_1), the search will not end up at a solution worse than x*_1, because there are no minima or stationary points in any basin higher than B*_1. Instead, the optimization of P(x, x*_1) will enter a basin better than B*_1, if such a basin exists. The overall optimization procedure is as follows. First, starting from an initial point, optimize the objective function F(x) to find a local optimal solution (e.g., x*_1). Secondly, construct a filled function at this point (e.g., P(x, x*_1)) and optimize it starting from x*_1; by the properties of the filled function, the algorithm then enters a better region (basin). Thirdly, starting from the new, better basin, continue to optimize F(x) to find a local optimal solution better than x*_1. Repeating these steps, the algorithm continuously moves from one local optimal solution to better ones until the global optimal solution is found.
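The procedure above can be sketched as an alternation between local searches on F and on the filled function. The sketch below is a minimal illustration, assuming a toy one-dimensional objective and Ge's classical two-parameter filled function [1] as a stand-in (the paper's own parameterless function is introduced in Section 3); r, rho, the starting point and the escape offset are all illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def F(x):
    # Toy objective (not from the paper): a tilted double well whose
    # lower basin is near x = -1 and higher basin near x = +1.
    x = np.atleast_1d(x)[0]
    return (x**2 - 1.0)**2 + 0.3 * x

def filled(x, x_star, r=1.0, rho=1.0):
    # Stand-in filled function: Ge's classical two-parameter form [1].
    x = np.atleast_1d(x)[0]
    return np.exp(-(x - x_star)**2 / rho**2) / (r + F(x))

best_x = 1.2                       # start deliberately in the higher basin
best_f = F(best_x)
for _ in range(5):                 # a few escape iterations
    x_star = minimize(F, [best_x], method='BFGS').x[0]   # local search on F
    P = lambda x, xs=x_star: filled(x, xs)
    # Try to escape in both directions, then re-minimise F from each escape point.
    candidates = [minimize(F, minimize(P, [x_star + d], method='BFGS').x,
                           method='BFGS').x[0] for d in (0.1, -0.1)]
    x_new = min(candidates, key=F)
    if F(x_new) < best_f - 1e-10:
        best_x, best_f = x_new, F(x_new)   # improved: accept and continue
    else:
        break                              # no better basin found: stop
```

After the loop, the search has moved from the higher basin near x = +1 into the lower basin near x = -1, mirroring the three-step procedure described above.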

Related Work
With optimization algorithms extensively used in various fields, more and more effort has been devoted to optimization theory. As a deterministic method for optimization, the filled function method has drawn a lot of attention. Its main idea is to locate a current local optimal solution using any local search algorithm and then construct an auxiliary function, called the filled function, at that local optimal solution. The filled function should have three properties that help the search escape from the current local optimal solution and enter regions that contain better solutions.
The first definition of a filled function was proposed by Ge in ref. [1], in which he constructed a filled function with two parameters:

P(x, x*_1) = (1 / (r + F(x))) exp(-||x - x*_1||^2 / ρ^2).

Experiments show this filled function is effective. However, it has two disadvantages. First, it has two adjustable parameters, r and ρ, whose values must be tuned to ensure the global optimal solution is not missed during the optimization procedure. Secondly, because of the exponential term, the function value becomes very small as ||x - x*_1|| grows, and thus the filled function may exhibit a fake stationary point.
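The flattening effect can be seen numerically. The snippet below is an illustration only, assuming the standard form of Ge's function with an arbitrary quadratic objective and illustrative parameter values r = 1, ρ = 1; far from x*_1 the function decays toward zero, which is what allows a descent method to stall at a fake stationary point.

```python
import numpy as np

def F(x):
    # Illustrative objective (not from the paper): a simple quadratic
    # with its minimum at the origin.
    return float(np.dot(x, x))

def P(x, x_star, r=1.0, rho=1.0):
    # Ge's two-parameter filled function [1]; r and rho are illustrative.
    d2 = float(np.dot(x - x_star, x - x_star))
    return float(np.exp(-d2 / rho**2) / (r + F(x)))

x_star = np.zeros(2)
# Moving away from x1*, the exponential factor drives P toward zero,
# so its gradient also vanishes and a descent method may stall at a
# "fake" stationary point far from any basin of F.
vals = [P(np.array([t, 0.0]), x_star) for t in (1.0, 3.0, 6.0)]
```

The sampled values shrink rapidly with distance, so the numerical gradient seen by a local search is soon indistinguishable from zero.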
In order to improve the efficiency of the filled function method, a lot of effort has been made, and new contributions have been achieved. In ref. [2], a filled function with only one parameter and no exponential term is proposed. To overcome the discontinuity and non-differentiability of filled functions, Liu proposed a class of filled functions that are continuous and differentiable [3], built from two real functions u and w that are twice continuously differentiable in their domains and satisfy certain conditions. However, this class of filled functions is not easy to construct and still contains two parameters to adjust. Afterward, new continuously differentiable filled functions were proposed [4-6], yet these filled functions all have one or two parameters. To alleviate the parameter-adjusting problem, the authors of [7] proposed a filled function with two parameters and gave a reasonable and effective way to choose them. In ref. [8], the authors proposed a filled function without any parameter; it contains no exponential term and is simple in form, but it is not continuously differentiable, which may produce extra local optimal solutions. To overcome this problem, the authors of [9] proposed a continuously differentiable filled function without any adjustable parameter. Researchers have since proposed more parameterless, continuously differentiable filled functions, such as in refs. [10-14]; the authors of [12], for example, proposed a new continuously differentiable filled function with neither parameters nor an exponential term. These parameterless, continuously differentiable filled functions have several advantages. First, more efficient local search algorithms can be applied. Secondly, they do not easily produce fake local optimal solutions. Thirdly, no parameter adjusting is needed. Thus, this kind of filled function can improve the performance of filled function methods.
To further enhance the efficiency of filled function methods, a two-stage method with a stretch function was proposed [15]. After a current local minimum is located in the stage of optimizing the objective function, a stretch function is used to raise this local minimum; a filled function is then constructed and optimized in the second stage. However, this filled function is not continuous, which means classical efficient local search methods cannot be applied.
The authors of [16] proposed a new algorithm based on the filled function. First, the multidimensional objective function is transformed into one-dimensional functions; then, for each direction, a filled function is constructed to optimize the corresponding one-dimensional function.
To overcome the potential failure of finding only a local minimum, the authors of [17] proposed a new filled function method: by combining an adaptive strategy for determining the initial points with a narrow-valley widening strategy, the ability to escape local minima and locate the global minimum is further enhanced. In ref. [18], the authors proposed a new filled function that uses a smoothing method to eliminate local optimal solutions, together with an adaptive method to determine the step length and handle shallow valleys. The filled function method is now used not only for unconstrained optimization problems but has also been extended to constrained optimization problems with inequalities, bi-level programming, nonlinear integer programming and non-smooth constrained problems. The authors of [19] proposed a continuously differentiable filled function with one parameter to solve constrained optimization problems. The authors of [20] proposed a single-parameter filled function and applied it to a supply chain problem, a nonlinear programming problem with equality and inequality constraints. For bi-level programming with inequality and equality constraints, the authors of [21] first transformed the bi-level problem into a single-level constrained optimization problem and then constructed the filled function by combining it with penalty functions.
The authors of [22] first transformed the original problem into an equivalent constrained optimization problem and then constructed a filled function to solve it.
Consider the following type of constrained global optimization problem P: min F(x), x ∈ S, where S = {x ∈ Z^n | g_i(x) ≤ 0, i = 1, 2, ..., m} is bounded and Z^n is the integer lattice in R^n. The authors of [23] proposed a method to transform this constrained problem into a box-constrained integer programming problem and then constructed a filled function to solve it. In ref. [24], the authors proposed a parameterless filled function to solve nonlinear equations with box constraints.
The filled function method has also been extended to non-smooth optimization problems. The authors of [25,26] proposed a one-parameter filled function, based on a new definition of the filled function, for non-smooth constrained programming problems.
Based on the idea of the filled function, this paper proposes a new parameterless filled function that is continuous and differentiable. The properties of the new filled function are proven in Section 3. Based on it, a filled function algorithm is proposed to handle unconstrained optimization problems. Numerical experiments and comparisons are presented in Section 4.

A New Parameterless Filled Function and a Filled Function Algorithm
In this section, a new filled function is proposed with the advantages of being parameterless, continuous and differentiable.The three properties of the proposed filled function are described and proven.Based on it, a new filled function method is designed to solve unconstrained optimization problems.

A New Parameterless Filled Function and Its Properties
The first definition of the filled function was given in [1]. However, the third property of that definition is not entirely clear; e.g., it does not specify where the point x lies or which line passes through x and x*_1. To make the definition clearer and stricter, several scholars gave revised definitions of the filled function [27,28]. In this paper, we use the revised definition from ref. [9], which is clearer and stricter because it uses the gradient. The revised definition is as follows.

Definition 3. A function P(x, x*_k) is called a filled function of F(x) at a local minimum x*_k if it satisfies the following properties:
1. x*_k is a strictly local maximum of P(x, x*_k), and the whole basin B*_k of F(x) becomes a part of a hill of P(x, x*_k).
2. For any x ∈ Ω_1, we have ∇P(x, x*_k) ≠ 0, where Ω_1 = {x | F(x) ≥ F(x*_k), x ≠ x*_k}.
3. If Ω_2 = {x | F(x) < F(x*_k)} is not empty, then P(x, x*_k) has a minimum in Ω_2.

We now give a brief explanation of the revised definition. Property 1 is the same as in the original definition: it turns the local minimum x*_k of the objective function into a local maximum of the filled function, so when the filled function is optimized, it is easy to escape x*_k, since it is a local maximum (a worst solution for the minimization problem). Property 2 ensures the optimization procedure will not end up at solutions worse than the current local minimum x*_k, because there are no stationary points there. Property 3 means the optimization procedure can end in a region containing a solution better than the current local minimum, because a local minimum of the filled function exists in that region. Together, the three properties drive the optimization procedure to escape the current local minimum and enter a better region.
Based on Definition 3, we design a new parameterless filled function that is continuous and differentiable. The new filled function mainly has two advantages. First, it has no parameter to adjust, which makes it easier to apply to different optimization problems. Secondly, it is continuous and differentiable. Note that continuity and differentiability are two excellent properties for a filled function: compared to filled functions that are not continuous or differentiable, such a function is easier to optimize, since more algorithms, especially efficient algorithms designed for continuously differentiable functions, can be used, and it is less likely to generate extra local optimal solutions during optimization. We will first prove that the new filled function is continuously differentiable and then prove that it fulfills the three properties of the definition of the filled function.
The only point that may cause the filled function P(x, x*_k) to fail to be continuously differentiable is t = 0 in g(t); thus, if g(t) is continuously differentiable at t = 0, so is the filled function. The one-sided derivatives satisfy g'_+(0) = g'_-(0) = 0, so the new filled function P(x, x*_k) is continuously differentiable. We now prove that P(x, x*_k) satisfies the three properties of the filled function.
Theorem 1. Suppose x*_k is a local minimum of the objective function F(x) and P(x, x*_k) is the filled function constructed at x*_k; then x*_k is a strictly local maximum of P(x, x*_k).
Proof. Suppose B*_k is the basin containing x*_k (please refer to Definition 1 for the notion of basin). Since x*_k is a local minimum of F(x), according to the construction of the filled function P(x, x*_k), x*_k is a strictly local maximum of P(x, x*_k). This proves Theorem 1.

Theorem 2. For any x ∈ Ω_1, we have ∇P(x, x*_k) ≠ 0, where Ω_1 = {x | F(x) ≥ F(x*_k), x ≠ x*_k}. The proof follows from the construction of P(x, x*_k); this proves Theorem 2.
Theorem 3. If Ω_2 = {x | F(x) < F(x*_k)} is not empty, then P(x, x*_k) has a minimum in Ω_2.

Proof. Since Ω_2 is not empty, F(x) must have a minimum in Ω_2. Since P(x, x*_k) is continuous and differentiable on R^n, it must have a minimum, say x_k. By Theorem 2, x_k ∉ Ω_1 (according to the definition of Ω_1 from Theorem 2); therefore, x_k ∈ Ω_2.

A Filled Function Algorithm to Solve Unconstrained Optimization Problems
Based on the proposed filled function, we design a filled function algorithm to solve unconstrained optimization problems.The steps of the algorithm are as follows.

1.
Initialization. Randomly generate 10 points in the feasible region and choose the point with the best function value as the initial point x_0. Set bestX = x_0 and bestVal = F(x_0) to record the best solution and its function value, set ε = 10^(-10) as the stopping tolerance, and set k = 1 as the iteration counter.

2.
Optimize the objective function F(x). Starting from the initial point x_0, we use the BFGS quasi-Newton method as the local search method to optimize the objective function and locate a local optimal point x*_k. The main steps of the BFGS method are shown in Algorithm 1.

3.
Construct the filled function P(x, x*_k) at x*_k.

4.

Optimize the filled function P(x, x*_k).

In the following, we use an example to demonstrate the optimization procedure of the filled function algorithm. Figure 1 shows the objective function f(x) = x + 10 sin(5x) + 7 cos(4x) on the search region [-2, 2]. From Figure 1, we can see that f(x) has three basins B*_1, B*_2 and B*_3 in the search region, where B*_3 is the lowest basin and contains the global optimal solution. Suppose the optimization procedure starts from x_0; using the BFGS local search method, we obtain a local minimal solution x*_1 of the objective function f(x). To escape from this local minimum x*_1, we construct the filled function P(x, x*_1) at x*_1, as shown in Figure 2.
From Figure 2, we can see that x*_1 is a strictly local maximum of P(x, x*_1), as guaranteed by the definition of the filled function. Therefore, a local search on P(x, x*_1) starting from the point x*_1 + 0.1 easily escapes from x*_1 and yields a local minimum x_1 of P(x, x*_1). Next, using x_1 + 0.1 as the initial point to optimize the objective function f(x), we obtain another local minimal solution x*_2 that is better than x*_1. At this point, the first iteration is complete. To escape from the local minimum x*_2, we repeat the above steps and construct the filled function P(x, x*_2) at x*_2, as shown in Figure 3. Similarly, P(x, x*_2) peaks at x*_2, which makes it easy to escape from this point. We continue to optimize P(x, x*_2) to obtain a local minimal point x_2. Then, using x_2 + 0.1 as the initial point to optimize f(x), a new, better local optimal solution x*_3 is obtained, completing the second iteration. We continue this procedure, optimizing the objective function and the filled function alternately, escaping from each local optimal solution to a better one, until the global optimal solution is located.
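One escape step of this example can be reproduced numerically. The sketch below is an illustration only: the paper's own filled function is given in Section 3, so Ge's classical two-parameter function is substituted as a stand-in, with illustrative values r = 20 (chosen so that r + f(x) > 0 on [-2, 2]) and ρ = 1; the offset 0.1 follows the text.

```python
import numpy as np
from scipy.optimize import minimize

# Objective from Figure 1: three basins on the search region [-2, 2].
def f(x):
    x = np.atleast_1d(x)[0]
    return x + 10*np.sin(5*x) + 7*np.cos(4*x)

# Stand-in filled function (Ge's two-parameter form [1]; illustrative r, rho).
def P(x, x_star, r=20.0, rho=1.0):
    x = np.atleast_1d(x)[0]
    return np.exp(-(x - x_star)**2 / rho**2) / (r + f(x))

# Local search on f from x0 = 0 gives the first local minimum x1*.
x1 = minimize(f, [0.0], method='BFGS').x[0]
f1 = f(x1)

# x1* is a local minimum of f but a strict local maximum of P(., x1*).
assert f(x1 + 0.05) > f1 and f(x1 - 0.05) > f1
assert P(x1 + 0.05, x1) < P(x1, x1) and P(x1 - 0.05, x1) < P(x1, x1)

# Escape: minimise P from x1* + 0.1, clip back to the search region, and
# run the local search on f again from the escape point.
x_esc = float(np.clip(minimize(lambda x: P(x, x1), [x1 + 0.1],
                               method='BFGS').x[0], -2.0, 2.0))
x2 = minimize(f, [x_esc], method='BFGS').x[0]
best = min(f1, f(x2))
```

The two assertions check exactly the behaviour shown in Figure 2: the point that minimises f locally simultaneously maximises the filled function locally.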
From the above optimization procedure, we can clearly see that the proposed method can repeatedly escape from a current local optimal solution to obtain a better one until the global optimal solution is located. This is a good way to overcome premature convergence in optimization algorithms. Moreover, the proposed method has three further advantages. First, since the proposed filled function is parameterless, the algorithm has no adjustable parameters to tune for different problems. Secondly, since the new filled function is continuous and differentiable, the proposed algorithm is less apt to produce extra local minima, while more local search methods, especially efficient gradient-based ones, can be applied to make the optimization more efficient and effective. Thirdly, once the filled function is designed and constructed, it is easy to implement and apply to different optimization problems.
The filled function method has two main disadvantages. First, it is not easy to design a good filled function, and each time a local optimal solution is found, the filled function has to be constructed anew. Secondly, the filled function method becomes less effective when the dimensionality of the problem is large. More research is needed to extend the scope of the filled function method.

Numerical Experiments
The proposed filled function algorithm is implemented in Matlab 2021 and tested on widely used test problems. Comparisons are made with a state-of-the-art filled function algorithm [18], another continuously differentiable filled function algorithm [5] and Ge's filled function algorithm [1]. The test problems used in this paper are listed as follows.

Test case 1. (Rastrigin function) The global minimum solution is x* = (0, 0)^T.

Test case 2. (Two-dimensional function with parameter c, where c = 0.05, 0.2, 0.5) The global minimum solution is x* = (1, 0)^T, and the corresponding function value is F(x*) = 0 for all values of c.

Test case 3. (Three-hump camel function) The global minimum solution is x* = (0, 0)^T, and the corresponding function value is F(x*) = 0.

Test case 4. (Six-hump camel function) The global minimum solutions are x* = (±0.0898, ∓0.7127)^T, and the corresponding function value is F(x*) = -1.0316.
Test case 5. The global minimum solutions are x* = (-2, 0)^T and x* = (0, 0)^T, and the corresponding function value is F(x*) = 0.

Test case 6. (Two-dimensional Shubert function) There are multiple local minimum solutions in the feasible region, and the global minimum function value is F(x*) = -186.7309.
Test case 7. (n-dimensional function) The global minimum solution is x* = (1, 1, ..., 1)^T, and the corresponding function value is F(x*) = 0 for all values of n.
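For reference, standard textbook forms of three of these benchmarks are given below; these are the common forms and may differ from the exact variants used in the paper. The assertions check the minima stated above.

```python
import numpy as np

def rastrigin(x):
    # Standard Rastrigin: global minimum 0 at the origin.
    x = np.asarray(x, dtype=float)
    return 10*len(x) + float(np.sum(x**2 - 10*np.cos(2*np.pi*x)))

def three_hump_camel(x):
    # Global minimum 0 at (0, 0).
    x1, x2 = x
    return 2*x1**2 - 1.05*x1**4 + x1**6/6 + x1*x2 + x2**2

def six_hump_camel(x):
    # Global minima of about -1.0316 at (0.0898, -0.7127) and (-0.0898, 0.7127).
    x1, x2 = x
    return (4 - 2.1*x1**2 + x1**4/3)*x1**2 + x1*x2 + (-4 + 4*x2**2)*x2**2

assert abs(rastrigin([0.0, 0.0])) < 1e-12
assert abs(three_hump_camel([0.0, 0.0])) < 1e-12
assert abs(six_hump_camel([0.0898, -0.7127]) + 1.0316) < 1e-3
```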
First, all the results obtained by the new filled function algorithm are listed in Tables 1-14, and comparisons are made with another continuously differentiable filled function algorithm, CDFA [5]. In these tables, we use the following notation:

x*_k: the local minimum of the objective function in the kth iteration;
f*_k: the function value of the objective function at x*_k;
k: the iteration counter;
Ff: the total number of function evaluations of the objective function and the filled function;
CDFA: the filled function algorithm proposed in [5];
FFFA: the filled function algorithm proposed in [18].

Tables 1-14 show the numerical results on the test problems under different settings (different parameters and different dimensions) obtained by the proposed filled function algorithm. In these tables, k is the number of iterations the filled function algorithm needs to locate the global minimum solution x*, F(x*) is the corresponding function value and n is the dimension of the test problem.
From the numerical results, we can see that the proposed filled function algorithm locates all the global minimum solutions successfully (some with a precision error of less than 10^-10) and within a small number of iterations. For these test problems, we also list the results of CDFA [5]. Since CDFA was only run on part of the test problems, we use slashes (/) to indicate missing values. From the comparison, we can see that for problem 2 (c = 0.2), we use one fewer iteration to locate a minimum solution with six orders of magnitude higher accuracy than CDFA. For problem 2 (c = 0.5), although we use one more iteration than CDFA, we successfully locate the global minimum 0. For problem 2 (c = 0.05), we use one fewer iteration and obtain a better result than CDFA. For problem 3, we locate a better result (five orders of magnitude higher accuracy) than CDFA with the same number of iterations. For problems 5 and 7 (n = 7), our algorithm uses one more iteration than CDFA. For problem 6, we use one fewer iteration to locate the global minimum than CDFA. For problem 7 (n = 10), although we use one fewer iteration, CDFA locates the global minimum 0 while we obtain 1.51 × 10^-13. In summary, our algorithm has four wins (problem 2 with c = 0.2 and c = 0.05, and problems 3 and 6), two losses (problems 5 and 7 with n = 7) and three ties (problem 2 with c = 0.5, and problems 4 and 7 with n = 10) over the compared test settings. We therefore conclude that the proposed filled function algorithm is more effective than CDFA.
Comparisons are also made with a state-of-the-art filled function algorithm, FFFA [18], and Ge's filled function algorithm [1]. The comparison results are listed in Table 15, where No. is the test problem number, n is the dimension and Ff is the total number of function evaluations consumed to obtain the optimal solution.
Since all three filled function algorithms can find the global minimum solutions, we compare their efficiency by the total number of iterations and function evaluations consumed by each algorithm. From Table 15, we can see that on all test problems our algorithm is much better than Ge's algorithm. As for the comparison with FFFA, for test problems 1, 3 and 4, although our algorithm takes one more iteration to reach the optimal solution, it uses far fewer function evaluations. For problems 2, 5 and 6, our algorithm uses fewer iterations and fewer function evaluations than FFFA. For problem 7 with dimension n = 2, our algorithm uses fewer iterations but more function evaluations than FFFA; for n = 3 and n = 7, our algorithm uses more function evaluations than FFFA; but for n = 5 and n = 10, our algorithm performs much better: for n = 5, it uses three fewer iterations and only 2287 function evaluations, while FFFA uses 12,681; for n = 10, it uses only 12,795 function evaluations (nearly half of FFFA's 20,044) to reach the global optimal solution. Overall, we conclude that the filled function algorithm proposed in this paper is more efficient than FFFA. From the numerical results and the comparisons with the other three filled function algorithms, we conclude that the new filled function algorithm is effective and efficient for solving unconstrained global optimization problems.

An Application of the Filled Function Algorithm
In this section, the proposed filled function algorithm is applied to a supply chain problem. Supply chain problems can be divided into three types: the manufacturer-core supply chain, the supplier-core supply chain and the seller-core supply chain. In this paper, we mainly consider the manufacturer-core supply chain, in which there are multiple suppliers, multiple shippers, multiple generalized transportation modes, multiple sellers and one manufacturer. The manufacturer uses different raw materials to produce various products that are sold by multiple sellers. The optimization objective of the supply chain is to minimize the total transportation cost.
We suppose there is a supply chain with a manufacturer as the core, one supplier and one kind of raw material required for production. The unit cost of this raw material supplied by the supplier is 2000 USD/t, the maximum supply is 5000 t, and all shippers can deliver it. The manufacturer produces only one product and requires 1.2 t of raw material per ton of product. There are two shippers, both of which can provide two generalized modes of transport. There are three sellers, and the order quantity of each seller must be strictly satisfied. The manufacturer initially has no product inventory, the production cost per unit product is 1000 USD, and the maximum production capacity is 4500 t. The relevant unit costs are shown in Tables 16-18.

Conclusions

The proposed filled function is parameterless, continuous and differentiable. These properties mean the filled function is less apt to produce extra fake local minima during the optimization; furthermore, more local search methods, especially efficient gradient-based ones, can be applied to make the optimization more efficient and effective. The proposed filled function algorithm is tested on widely used benchmark problems and is also applied to a supply chain problem. Numerical experiments show the algorithm is effective and efficient. However, we also notice that the filled function algorithm becomes less effective on high-dimensional optimization problems. The reason may lie in two aspects: first, the search space grows exponentially with the dimension, and secondly, the local search method is not efficient enough for high-dimensional problems. The time complexity of the proposed filled function algorithm depends on the local search method adopted. Since the filled function designed in this paper is continuous and differentiable, the efficient gradient-based BFGS quasi-Newton method can be used in the proposed model. BFGS is well known for its fast superlinear convergence, although its per-iteration complexity is O(n^2). This may be one of the reasons that its performance degrades as the problem size grows. The proposed algorithm mainly helps the optimization process repeatedly escape from local optimal solutions to better ones until the global optimal solution is located, and different local search methods can be used in this process. In future work, we will continue to work on this issue and design new, better local search methods to make the filled function algorithm more efficient and perform better on higher-dimensional problems.

Algorithm 1
Main steps of the BFGS quasi-Newton method.
1: Given an initial point x_0 and an accuracy threshold ε, set D_0 = I, k := 0.
2: Determine the search direction d_k = -D_k g_k, where g_k is the gradient at x_k.
3: Find a step size α_k by line search and set x_{k+1} = x_k + α_k d_k.
4: If ||g_{k+1}|| < ε, the algorithm ends.
5: Calculate s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.
6: Calculate D_{k+1} = (I - (s_k y_k^T)/(y_k^T s_k)) D_k (I - (y_k s_k^T)/(y_k^T s_k)) + (s_k s_k^T)/(y_k^T s_k), set k := k + 1, and go to Step 2.
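A minimal sketch of these steps, assuming a backtracking Armijo line search in place of whichever line search the authors used:

```python
import numpy as np

def bfgs(f, grad, x0, eps=1e-8, max_iter=200):
    """BFGS quasi-Newton method per Algorithm 1 (Armijo backtracking line search)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    D = np.eye(n)                      # step 1: D_0 = I (inverse-Hessian approx.)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:    # step 4: gradient small enough, stop
            break
        d = -D @ g                     # step 2: search direction
        alpha = 1.0                    # step 3: backtracking line search
        while f(x + alpha*d) > f(x) + 1e-4*alpha*(g @ d):
            alpha *= 0.5
        x_new = x + alpha*d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g    # step 5
        if y @ s > 1e-12:              # step 6: inverse-Hessian update
            rho = 1.0 / (y @ s)
            I = np.eye(n)
            D = (I - rho*np.outer(s, y)) @ D @ (I - rho*np.outer(y, s)) \
                + rho*np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage: minimise a convex quadratic whose exact minimiser is (1, -2).
f = lambda x: (x[0]-1)**2 + 2*(x[1]+2)**2
grad = lambda x: np.array([2*(x[0]-1), 4*(x[1]+2)])
x_min = bfgs(f, grad, [0.0, 0.0])
```

The curvature guard `y @ s > 1e-12` skips the update when the secant condition would break positive definiteness of D.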

Figure 1 .
Figure 1.Illustration of steps 1 and 2 of the filled function algorithm.

Figure 2 .
Figure 2. Illustration of steps 3 to 5 of the filled function algorithm.

Figure 3 .
Figure 3. Illustration of the filled function algorithm in the second iteration.

Author Contributions:
Conceptualization, H.L. and S.X.; methodology, H.L. and S.X.; software, S.X.; validation, Y.C.; writing-original draft, H.L. and S.X.; writing-review and editing, Y.C. and S.T.; supervision, H.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.Funding: This work was supported by the National Natural Science Foundation of China (No.62002289).
Set x*_k + 0.1 as the initial point, and use the BFGS quasi-Newton method as the local search to optimize the filled function P(x, x*_k) and obtain a local minimum point x_k of P(x, x*_k). By property 3 of the filled function, the point x_k must lie in a lower basin than x*_k.
5. Set the point x_k + 0.1 as the initial point, and continue to optimize the objective function F(x) to obtain a new local minimum point x*_{k+1} of F(x).
6. Determine whether F(x*_{k+1}) - bestVal is less than -ε. If so, update bestX to x*_{k+1} and bestVal to F(x*_{k+1}), set k = k + 1 and go to Step 2; otherwise, bestX is the global optimum and the algorithm terminates.
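Step 6's acceptance test can be sketched as a small helper; EPS is the ε = 10^-10 tolerance from the initialization step, and the function name is illustrative.

```python
EPS = 1e-10   # stopping tolerance epsilon from step 1

def step6(best_x, best_val, x_new, f_new, eps=EPS):
    """Step 6: accept x_new only if it improves bestVal by more than eps.

    Returns (bestX, bestVal, keep_going); keep_going=False means bestX
    is returned as the global optimum and the algorithm terminates."""
    if f_new - best_val < -eps:
        return x_new, f_new, True
    return best_x, best_val, False
```

For example, `step6(0.0, 1.0, 0.5, 0.2)` accepts the improvement and continues, while `step6(0.0, 1.0, 0.5, 1.0)` keeps the incumbent and terminates.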

Table 7 .
Further results for Problem 4.

Table 15 .
The overall comparisons.

Table 16 .
Unit transportation cost and maximum transportation capacity of shippers.

Table 17 .
Sales capacity and unit sales cost.