Axioms
  • Article
  • Open Access

19 December 2022

A New Parameterless Filled Function Method for Global Optimization

School of Computer Science and Technology, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Optimization Algorithms and Applications

Abstract

The filled function method is an effective way to solve global optimization problems. However, its effectiveness is greatly affected by the selection of parameters and by the non-continuity or non-differentiability of many constructed filled functions. To overcome these drawbacks, this paper proposes a new parameterless filled function that is continuous and differentiable. Theoretical proofs are given to establish the properties of the proposed filled function. Based on the new filled function, a filled function algorithm is proposed to solve unconstrained global optimization problems. Experiments are carried out on widely used test problems and on an application to a supply chain problem with equality and inequality constraints. The numerical results show that the proposed filled function is effective.

1. Introduction

Global optimization is a rich and widely applied subject in mathematics. With the development of science and information technology, global optimization has been applied to economic modeling, finance, image processing, machine design and so on. Therefore, the theories and methods of global optimization deserve deep study. Through the efforts of scholars in this field, various methods have been developed for global optimization. However, finding the global optimal solution is usually not easy, due to two properties of global optimization problems: (1) there usually exist many local optimal solutions, and (2) optimization algorithms are easily trapped in a local optimal solution and unable to escape. Therefore, one key problem is how to help an optimization method escape from local optimal solutions. The filled function method is specifically designed to solve this problem. We now introduce some basic information about the filled function method.
The filled function method was first proposed by Ge [1], who constructed an auxiliary function, called the filled function, to help the optimization algorithm escape from local optimal solutions. In the following, we introduce the original definition of the filled function proposed by Ge [1] and its related concepts. In this paper, we consider the following optimization problem:
$$\min F(x) \quad \text{s.t.} \quad x \in \Omega = [l, u] = \{\, x \mid l \le x \le u,\ l, u \in \mathbb{R}^n \,\}$$
where $n$ is the dimension of the objective function $F(x)$, which is continuous and differentiable. $F(x)$ has a finite number of local optimal solutions $x_1^*, x_2^*, \ldots, x_m^*$. Suppose $x_k^*$ is the local optimal solution found by the optimization algorithm in the $k$-th iteration; the definition of the basin $B_k^*$ is as follows.
Definition 1.
The basin $B_k^*$ of the objective function $F(x)$ at an isolated minimum (local optimal solution) $x_k^*$ refers to the connected domain that contains $x_k^*$ and in which the steepest descent trajectory of $F(x)$ converges to $x_k^*$ from any initial point, while outside the basin the steepest descent trajectory of $F(x)$ does not converge to $x_k^*$.
A basin $B_1^*$ at $x_1^*$ is lower (or higher) than a basin $B_2^*$ at $x_2^*$ iff
$$F(x_1^*) < (\text{or} >)\ F(x_2^*).$$
A basin is an area that contains one local optimal solution: within this area, a gradient descent optimization algorithm converges to the corresponding local optimal solution no matter what the initial point is. One basin is lower than another if its corresponding local optimal value is smaller (better for minimization problems).
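To make Definition 1 concrete, the following toy sketch (our own illustration, not from the paper) runs a crude fixed-step steepest descent on $f(x) = x^4 - 2x^2$, which has two basins with minima at $x = -1$ and $x = +1$; starting points inside each basin converge to that basin's minimizer:

```python
def steepest_descent(grad, x0, lr=1e-3, tol=1e-8, max_steps=200_000):
    """Crude fixed-step steepest descent, enough to illustrate basins."""
    x = float(x0)
    for _ in range(max_steps):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x**4 - 2*x**2 has two basins with minima at x = -1 and x = +1.
grad = lambda x: 4 * x**3 - 4 * x
print(steepest_descent(grad, x0=-0.3))  # converges to about -1 (left basin)
print(steepest_descent(grad, x0=+0.3))  # converges to about +1 (right basin)
```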
Definition 2.
A function $P(x, x_1^*)$ is called a filled function of $F(x)$ at a local minimum $x_1^*$ if it satisfies the following properties:
1. 
$x_1^*$ is a strict local maximum of $P(x, x_1^*)$, and the whole basin $B_1^*$ of $F(x)$ becomes a part of a hill of $P(x, x_1^*)$.
2. 
$P(x, x_1^*)$ has no minima or stationary points in any basin of $F(x)$ higher than $B_1^*$.
3. 
If $F(x)$ has a basin lower than $B_1^*$, then there is a point $x'$ in such a basin that minimizes $P(x, x_1^*)$ on the line through $x'$ and $x_1^*$.
From the definition of the filled function, we can see that the three properties together ensure that the optimization algorithm escapes from one local optimal solution to a better one. For example, suppose the optimization algorithm is trapped in a local optimal solution $x_1^*$ and cannot escape. We construct a filled function to help it escape from $x_1^*$. The first property makes the local optimal solution $x_1^*$ a local worst solution of the filled function, so when the filled function $P(x, x_1^*)$ is optimized, the search surely leaves this point; that is, it escapes from $x_1^*$. The second property ensures that optimizing $P(x, x_1^*)$ cannot end at a solution worse than $x_1^*$, because there are no minima or stationary points in any basin higher than $B_1^*$. Instead, the optimization of $P(x, x_1^*)$ enters a basin better than $B_1^*$, if such a basin exists. The overall optimization procedure is as follows. First, the algorithm starts from an initial point, optimizes the objective function $F(x)$ and finds a local optimal solution (e.g., $x_1^*$). Secondly, a filled function at this point is constructed (e.g., $P(x, x_1^*)$) and optimized starting from $x_1^*$; after this optimization, the algorithm enters a better region (basin), as ensured by the properties of the filled function. Thirdly, starting from the new, better basin, the algorithm continues to optimize $F(x)$ to find a local optimal solution better than $x_1^*$. Repeating the above steps, the algorithm continuously moves from one local optimal solution to better ones until the global optimal solution is found.

3. A New Parameterless Filled Function and a Filled Function Algorithm

In this section, a new filled function is proposed with the advantages of being parameterless, continuous and differentiable. The three properties of the proposed filled function are described and proven. Based on it, a new filled function method is designed to solve unconstrained optimization problems.

3.1. A New Parameterless Filled Function and Its Properties

The first definition of the filled function was given in [1]. However, the third property of that definition is not entirely clear; e.g., it does not specify where the point $x'$ lies or which line through $x'$ and $x_1^*$ is meant. To make the definition clearer and stricter, several scholars proposed revised definitions of the filled function [27,28]. In this paper, we adopt the revised definition from ref. [9], since it is clearer and stricter through its use of the gradient. The revised definition is as follows.
Definition 3.
A function $P(x, x_k^*)$ is called a filled function of $F(x)$ at a local minimum $x_k^*$ if it satisfies the following properties:
1. 
$x_k^*$ is a strict local maximum of $P(x, x_k^*)$, and the whole basin $B_k^*$ of $F(x)$ becomes a part of a hill of $P(x, x_k^*)$.
2. 
For any $x \in \Omega_1$, we have $\nabla P(x, x_k^*) \neq 0$, where $\Omega_1 = \{x \in \Omega \mid F(x) \geq F(x_k^*),\ x \neq x_k^*\}$.
3. 
If $\Omega_2 = \{x \in \Omega \mid F(x) < F(x_k^*)\}$ is not empty, then there exists $x_k' \in \Omega_2$ such that $x_k'$ is a local minimum of $P(x, x_k^*)$.
We now give a brief explanation of the revised definition of the filled function. Property 1 is the same as in the original definition: it turns the local minimum $x_k^*$ of the objective function into a local maximum of the filled function, so that when the filled function is optimized, it is easy to escape from $x_k^*$, which is now a local maximum (the worst solution for a minimization problem). Property 2 makes sure the optimization procedure will not end up at a solution worse than the current local minimum $x_k^*$, because the filled function has no stationary points in that region. Property 3 means that the optimization procedure can easily end in a region that contains a better solution than the current local minimum, because the filled function has a local minimum in that region. Therefore, the three properties together drive the optimization procedure to escape from the current local minimum and enter a better region containing a better solution.
Based on Definition 3, we design a new parameterless filled function that is also continuous and differentiable:
$$P(x, x_k^*) = \frac{1}{1 + \|x - x_k^*\|} \cdot g\big(F(x) - F(x_k^*)\big), \qquad g(t) = \begin{cases} 1, & t \geq 0 \\ t^3 + 1, & t < 0 \end{cases}$$
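For concreteness, the definition above can be transcribed directly into code. The following Python sketch is ours (the paper's experiments use MATLAB); `F` is any objective callable, and the names `g` and `filled_function` are our own:

```python
import numpy as np

def g(t):
    """Piecewise factor of the filled function: 1 for t >= 0, t**3 + 1 for t < 0."""
    return 1.0 if t >= 0 else t**3 + 1.0

def filled_function(x, x_star, F):
    """P(x, x_k*) = g(F(x) - F(x_k*)) / (1 + ||x - x_k*||)."""
    x, x_star = np.asarray(x, dtype=float), np.asarray(x_star, dtype=float)
    return g(F(x) - F(x_star)) / (1.0 + np.linalg.norm(x - x_star))
```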
The new filled function has two main advantages. First, it has no parameters to adjust, which makes it easier to apply to different optimization problems. Secondly, it is continuous and differentiable. Note that continuity and differentiability are two excellent properties for a filled function: compared to filled functions that are not continuous or differentiable, such a function is easier to optimize, since more algorithms, especially the efficient ones designed for continuously differentiable functions, can be used, and it is less likely to generate extra local optimal solutions during the optimization. We first prove that the new filled function is continuously differentiable and then prove that it fulfills the three properties of the definition of the filled function.
The only point at which the filled function $P(x, x_k^*)$ may fail to be continuously differentiable corresponds to $t = 0$ in $g(t)$; hence, if $g(t)$ is continuously differentiable at $t = 0$, then $P(x, x_k^*)$ is continuously differentiable.
Since $\lim_{t \to 0^+} g(t) = \lim_{t \to 0^-} g(t) = 1 = g(0)$, the new filled function $P(x, x_k^*)$ is continuous.
Since
$$g'_+(0) = \lim_{t \to 0^+} \frac{g(t) - g(0)}{t - 0} = \lim_{t \to 0^+} \frac{1 - 1}{t} = 0$$
and
$$g'_-(0) = \lim_{t \to 0^-} \frac{g(t) - g(0)}{t - 0} = \lim_{t \to 0^-} \frac{t^3 + 1 - 1}{t} = \lim_{t \to 0^-} t^2 = 0.$$
Thus, $g'_+(0) = g'_-(0) = 0$, so $g$ is differentiable at $t = 0$ and the new filled function $P(x, x_k^*)$ is differentiable. Moreover, $g'(t) = 0$ for $t > 0$ and $g'(t) = 3t^2$ for $t < 0$ both tend to $0 = g'(0)$ as $t \to 0$, so $g$, and hence $P(x, x_k^*)$, is continuously differentiable.
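As an informal numerical sanity check of the argument above (not part of the proof), the one-sided difference quotients of $g$ at $t = 0$ can be evaluated directly; both shrink to zero, matching $g'_+(0) = g'_-(0) = 0$:

```python
def g(t):
    # Same piecewise definition as above.
    return 1.0 if t >= 0 else t**3 + 1.0

for h in (1e-2, 1e-4, 1e-6):
    right = (g(h) - g(0.0)) / h       # identically 0: g is constant 1 for t >= 0
    left = (g(-h) - g(0.0)) / (-h)    # equals h**2, which tends to 0
    print(f"h={h:.0e}  right={right:.3e}  left={left:.3e}")
```

Now we prove that $P(x, x_k^*)$ satisfies the three properties of the filled function.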
Theorem 1.
Suppose $x_k^*$ is a local minimum of the objective function $F(x)$ and $P(x, x_k^*)$ is the filled function constructed at $x_k^*$; then $x_k^*$ is a strict local maximum of $P(x, x_k^*)$.
Proof. 
Suppose $B_k^*$ is the basin containing $x_k^*$ (see Definition 1). Since $x_k^*$ is a local minimum of $F(x)$, for all $x \in B_k^*$ with $x \neq x_k^*$ we have $F(x) > F(x_k^*)$. Thus $F(x) - F(x_k^*) > 0$, and in this case $g(F(x) - F(x_k^*)) = 1$. According to the construction of the filled function $P(x, x_k^*)$, we get
$$P(x, x_k^*) = \frac{g\big(F(x) - F(x_k^*)\big)}{1 + \|x - x_k^*\|} = \frac{1}{1 + \|x - x_k^*\|} < 1$$
and
$$P(x_k^*, x_k^*) = \frac{g\big(F(x_k^*) - F(x_k^*)\big)}{1 + \|x_k^* - x_k^*\|} = g(0) = 1.$$
Thus, $P(x, x_k^*) < P(x_k^*, x_k^*)$, which means that $x_k^*$ is a strict local maximum of $P(x, x_k^*)$.    □
Theorem 2.
For any $x \in \Omega_1$, we have $\nabla P(x, x_k^*) \neq 0$, where $\Omega_1 = \{x \in \Omega \mid F(x) \geq F(x_k^*),\ x \neq x_k^*\}$.
Proof. 
Since $\Omega_1 = \{x \in \Omega \mid F(x) \geq F(x_k^*),\ x \neq x_k^*\}$, for any $x \in \Omega_1$ we have $F(x) \geq F(x_k^*)$ and hence $g(F(x) - F(x_k^*)) = 1$; thus
$$P(x, x_k^*) = \frac{g\big(F(x) - F(x_k^*)\big)}{1 + \|x - x_k^*\|} = \frac{1}{1 + \|x - x_k^*\|}$$
and
$$\nabla P(x, x_k^*) = -\frac{\nabla \|x - x_k^*\|}{\left(1 + \|x - x_k^*\|\right)^2} \neq 0,$$
since for $x \neq x_k^*$ the vector $\nabla \|x - x_k^*\| = (x - x_k^*)/\|x - x_k^*\|$ has unit norm.
This proves Theorem 2.   □
Theorem 3.
If $\Omega_2 = \{x \in \Omega \mid F(x) < F(x_k^*)\}$ is not empty, then there exists $x_k' \in \Omega_2$ such that $x_k'$ is a local minimum of $P(x, x_k^*)$.
Proof. 
Since $\Omega_2$ is not empty and $F(x)$ is continuous, $F(x)$ must attain a minimum in $\Omega_2$.
Since $P(x, x_k^*)$ is continuous and differentiable on $\mathbb{R}^n$, it must attain a minimum, say $x_k'$, on $\Omega_2$. Because $P(x, x_k^*)$ is differentiable at $x_k'$, this minimum must be a stationary point; that is, $\nabla P(x_k', x_k^*) = 0$.
Since $\Omega_2$ is not empty, there exists a point $z \in \Omega_2$; for such a $z$ we have $g(F(z) - F(x_k^*)) < 1$ and therefore $P(z, x_k^*) < 1 = P(x_k^*, x_k^*)$. Thus $P(x_k', x_k^*) \leq P(z, x_k^*) < 1$, so $x_k' \neq x_k^*$. Moreover, by Theorem 2, $\nabla P(x, x_k^*) \neq 0$ for all $x \in \Omega_1$, so the stationary point $x_k'$ cannot lie in $\Omega_1$ (with $\Omega_1$ as defined in Theorem 2); therefore $x_k' \in \Omega_2$.   □

3.2. A Filled Function Algorithm to Solve Unconstrained Optimization Problems

Based on the proposed filled function, we design a filled function algorithm to solve unconstrained optimization problems. The steps of the algorithm are as follows (a code sketch of the complete procedure is given after Algorithm 1).
  • Initialization. Randomly generate 10 points in the feasible region and choose the point with the best function value as the initial point $x_0$. Then set $bestX = x_0$ and $bestVal = F(x_0)$ to record the best solution and its function value, set the stopping tolerance $\epsilon = 10^{-10}$, and set the iteration counter $k = 1$.
  • Optimize the objective function $F(x)$. Starting from the initial point $x_0$, use the BFGS quasi-Newton method as the local search to optimize the objective function and locate a local optimal point $x_k^*$. The main steps of the BFGS method are shown in Algorithm 1.
  • Construct the filled function at $x_k^*$:
$$P(x, x_k^*) = \frac{1}{1 + \|x - x_k^*\|} \cdot g\big(F(x) - F(x_k^*)\big), \qquad g(t) = \begin{cases} 1, & t \geq 0 \\ t^3 + 1, & t < 0 \end{cases}$$
  • Optimize the filled function $P(x, x_k^*)$. Set $x_k^* + 0.1$ as the initial point and use the BFGS quasi-Newton method as the local search to optimize $P(x, x_k^*)$, obtaining a local minimum point $x_k'$ of $P(x, x_k^*)$. By property 3 of the filled function, the point $x_k'$ lies in a lower basin than that of $x_k^*$.
  • Set the point $x_k' + 0.1$ as the initial point and continue to optimize the objective function $F(x)$ to obtain a new local minimum point $x_{k+1}^*$ of $F(x)$.
  • Check whether $F(x_{k+1}^*) - bestVal < -\epsilon$, i.e., whether a sufficiently better solution has been found. If so, update $bestX$ to $x_{k+1}^*$ and $bestVal$ to $F(x_{k+1}^*)$, set $k = k + 1$, and go to step 2; otherwise, $bestX$ is taken as the global optimum and the algorithm terminates.
Algorithm 1 Main steps of the BFGS quasi-Newton method
1: Given an initial point $x_0$ and an accuracy threshold $\epsilon$, set $D_0 = I$, $k := 0$.
2: Determine the search direction: $d_k = -D_k g_k$, where $g_k$ denotes the gradient of $f$ at $x_k$.
3: Set $s_k = \lambda_k d_k$ and $x_{k+1} := x_k + s_k$, where $\lambda_k = \arg\min_{\lambda \in \mathbb{R}} f(x_k + \lambda d_k)$.
4: If $\|g_{k+1}\| < \epsilon$, the algorithm ends.
5: Calculate $y_k = g_{k+1} - g_k$.
6: Calculate $D_{k+1} = \left(I - \dfrac{s_k y_k^T}{y_k^T s_k}\right) D_k \left(I - \dfrac{y_k s_k^T}{y_k^T s_k}\right) + \dfrac{s_k s_k^T}{y_k^T s_k}$.
7: Let $k := k + 1$ and go to Step 2.
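The complete procedure can be sketched compactly in Python. This is a hedged re-implementation under our own assumptions, not the paper's MATLAB code: the helper names (`make_filled`, `filled_function_method`) are ours, SciPy's BFGS is used as the local search, and the box bounds are used only to sample initial points, since the local searches themselves are unconstrained:

```python
import numpy as np
from scipy.optimize import minimize

def g(t):
    """Piecewise factor of the filled function."""
    return 1.0 if t >= 0 else t**3 + 1.0

def make_filled(F, x_star, F_star):
    """Build P(x, x_k*) for the current local minimum x_k*."""
    def P(x):
        return g(F(x) - F_star) / (1.0 + np.linalg.norm(x - x_star))
    return P

def filled_function_method(F, lower, upper, eps=1e-10, max_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    # Step 1: best of 10 random points as the initial point.
    candidates = rng.uniform(lower, upper, size=(10, lower.size))
    x0 = min(candidates, key=F)
    best_x, best_val = x0, F(x0)
    for _ in range(max_iter):
        # Step 2: local search (BFGS) on the objective.
        x_star = minimize(F, x0, method="BFGS").x
        # Steps 3-4: construct the filled function and minimize it from x_k* + 0.1.
        P = make_filled(F, x_star, F(x_star))
        x_prime = minimize(P, x_star + 0.1, method="BFGS").x
        # Step 5: restart the local search on F from x_k' + 0.1.
        x_new = minimize(F, x_prime + 0.1, method="BFGS").x
        # Step 6: stop when no sufficiently better local minimum is found.
        if F(x_new) - best_val < -eps:
            best_x, best_val, x0 = x_new, F(x_new), x_new
        else:
            break
    return best_x, best_val
```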
In the following, we use an example to demonstrate the optimization procedure of the filled function algorithm. Figure 1 shows the objective function $f(x) = x + 10\sin(5x) + 7\cos(4x)$ on the search region $[-2, 2]$. From Figure 1, we can see that $f(x)$ has three basins $B_1^*$, $B_2^*$ and $B_3^*$ in the search region, where $B_3^*$ is the lowest basin and contains the global optimal solution. Suppose the optimization procedure starts from $x_0$; using the BFGS local search method, we obtain a local minimal solution $x_1^*$ of the objective function $f(x)$.
Figure 1. Illustration of steps 1 and 2 of the filled function algorithm.
To escape from this local minimum x 1 * , we construct the filled function P ( x , x 1 * ) at x 1 * , as shown in Figure 2.
Figure 2. Illustration of steps 3 to step 5 of the filled function algorithm.
From Figure 2, we can see that $x_1^*$ is a strict local maximum of $P(x, x_1^*)$, as guaranteed by the definition of the filled function. Therefore, a local search on $P(x, x_1^*)$ starting from the point $x_1^* + 0.1$ easily escapes from $x_1^*$ and yields a local minimum $x_1'$ of $P(x, x_1^*)$. Next, using $x_1' + 0.1$ as the initial point to optimize the objective function $f(x)$, we obtain another local minimal solution $x_2^*$ that is better than $x_1^*$. At this point, the first iteration is completed.
To escape from the local minimum x 2 * , we repeat the above steps to construct the filled function P ( x , x 2 * ) at x 2 * , as shown in Figure 3.
Figure 3. Illustration of the filled function algorithm in the second iteration.
Similarly, $P(x, x_2^*)$ peaks at $x_2^*$, which makes it easy to escape from this point. We optimize $P(x, x_2^*)$ to obtain a local minimal point $x_2'$. Then, using $x_2' + 0.1$ as the initial point to optimize the objective function $f(x)$, a new and better local optimal solution $x_3^*$ is obtained. Now the second iteration is completed. We continue the above procedure, optimizing the objective function and the filled function alternately to escape from the current local optimal solution to a better one, until the global optimal solution is located.
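The walk-through above can be reproduced with the `filled_function_method` sketch given after Algorithm 1 (assuming those definitions are in scope); $f(x) = x + 10\sin(5x) + 7\cos(4x)$ is the function of Figure 1:

```python
import numpy as np

def f(x):
    x = np.atleast_1d(x)[0]  # accept both scalars and length-1 arrays
    return x + 10.0 * np.sin(5.0 * x) + 7.0 * np.cos(4.0 * x)

best_x, best_val = filled_function_method(f, lower=[-2.0], upper=[2.0])
print(best_x, best_val)  # should land in the lowest basin B_3* of Figure 1
```

Note that the BFGS line searches in this sketch may step slightly outside $[-2, 2]$, since the local searches are unconstrained.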
From the above optimization procedure, we can clearly see that the proposed method can easily and repeatedly escape from the current local optimal solution to a better one until the global optimal solution is located. This is a good way to overcome the premature convergence of optimization algorithms. Moreover, the proposed method has three further advantages. First, the proposed filled function is parameterless, so the algorithm has no adjustable parameters to tune for different problems. Secondly, since the new filled function is continuous and differentiable, the algorithm is less apt to produce extra local minima, and more local search methods, especially the efficient gradient-based ones, can be applied to make the optimization more efficient and effective. Thirdly, once the filled function is designed and constructed, it is easy to implement and apply to different optimization problems.
The filled function method mainly has two disadvantages. First, it is not easy to design a good filled function, and each time a local optimal solution is found, the filled function has to be reconstructed. Secondly, the method becomes less effective when the dimensionality of the problem is large. More research is needed to extend the scope of the filled function method.

4. Numerical Experiments

The proposed filled function algorithm is implemented in MATLAB 2021 and tested on widely used test problems. Comparisons are made with a state-of-the-art filled function algorithm [18], another continuously differentiable filled function algorithm [5], and Ge's filled function algorithm [1]. The test problems used in this paper are listed as follows.
Test case 1. (The Rastrigin function)
$$\min F(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2) \quad \text{s.t.}\ -3 < x_1 < 3,\ -3 < x_2 < 3$$
The global minimum solution is $x^* = (0, 0)^T$, and the corresponding function value is $F(x^*) = -2$.
Test case 2. (Two-dimensional function)
$$\min F(x) = \big[1 - 2x_2 + c\sin(4\pi x_2) - x_1\big]^2 + \big[x_2 - 0.5\sin(2\pi x_1)\big]^2 \quad \text{s.t.}\ 0 < x_1 < 10,\ -10 < x_2 < 0$$
where $c = 0.05, 0.2, 0.5$. The global minimum solution is $x^* = (1, 0)^T$, and the corresponding function value is $F(x^*) = 0$ for all values of $c$.
Test case 3. (Three-hump back camel function)
$$\min F(x) = 2x_1^2 - 1.05x_1^4 + \frac{1}{6}x_1^6 - x_1 x_2 + x_2^2 \quad \text{s.t.}\ -3 < x_1 < 3,\ -3 < x_2 < 3$$
The global minimum solution is $x^* = (0, 0)^T$, and the corresponding function value is $F(x^*) = 0$.
Test case 4. (Six-hump back camel function)
$$\min F(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 - x_1 x_2 - 4x_2^2 + 4x_2^4 \quad \text{s.t.}\ -3 < x_1 < 3,\ -3 < x_2 < 3$$
The global minimum solutions are $x^* = (\pm 0.0898, \pm 0.7127)^T$, and the corresponding function value is $F(x^*) = -1.0316$.
Test case 5. (Treccani function)
$$\min F(x) = x_1^4 + 4x_1^3 + 4x_1^2 + x_2^2 \quad \text{s.t.}\ -3 < x_1 < 3,\ -3 < x_2 < 3$$
The global minimum solutions are $x^* = (-2, 0)^T$ and $x^* = (0, 0)^T$, and the corresponding function value is $F(x^*) = 0$.
Test case 6. (Two-dimensional Shubert function)
$$\min F(x) = \left[\sum_{i=1}^{5} i\cos\big((i+1)x_1 + i\big)\right]\left[\sum_{i=1}^{5} i\cos\big((i+1)x_2 + i\big)\right] \quad \text{s.t.}\ 0 < x_1 < 10,\ 0 < x_2 < 10$$
There are multiple local minimum solutions in the feasible region, and the global minimum function value is $F(x^*) = -186.7309$.
Test case 7. (n-dimensional function)
$$\min F(x) = \frac{\pi}{n}\left[10\sin^2(\pi x_1) + g(x) + (x_n - 1)^2\right] \quad \text{s.t.}\ -10 < x_i < 10,\ i = 1, 2, \ldots, n$$
where
$$g(x) = \sum_{i=1}^{n-1}\left[(x_i - 1)^2\left(1 + 10\sin^2(\pi x_{i+1})\right)\right]$$
The global minimum solution is $x^* = (1, 1, \ldots, 1)^T$, and the corresponding function value is $F(x^*) = 0$ for all values of $n$.
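For reproducibility, the test problems are straightforward to encode. A sketch of two of them in Python (the names `F1` and `F7` are ours, following the formulas above):

```python
import numpy as np

# Test case 1 (two-dimensional Rastrigin-type problem).
def F1(x):
    return x[0]**2 + x[1]**2 - np.cos(18.0 * x[0]) - np.cos(18.0 * x[1])

# Test case 7 (n-dimensional problem); n is inferred from len(x).
def F7(x):
    n = len(x)
    s = sum((x[i] - 1.0)**2 * (1.0 + 10.0 * np.sin(np.pi * x[i + 1])**2)
            for i in range(n - 1))
    return (np.pi / n) * (10.0 * np.sin(np.pi * x[0])**2 + s + (x[-1] - 1.0)**2)

print(F1(np.zeros(2)))  # -2.0, the global minimum value of test case 1
print(F7(np.ones(5)))   # ~0.0, the global minimum value of test case 7
```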
First, all results obtained by the new filled function algorithm are listed in Tables 1–14. Further, comparisons are made with another continuously differentiable filled function algorithm, CDFA [5]. In these tables, we use the following notation:
Table 1. Results of Problem 1.
Table 2. Results of Problem 2 (c = 0.2).
Table 3. Results of Problem 2 (c = 0.5).
Table 4. Results of Problem 2 (c = 0.05).
Table 5. Results of Problem 3.
Table 6. Results of Problem 4.
Table 7. Additional results of Problem 4.
Table 8. Results of Problem 5.
Table 9. Results of Problem 6.
Table 10. Results of Problem 7 (n = 2).
Table 11. Results of Problem 7 (n = 3).
Table 12. Results of Problem 7 (n = 5).
Table 13. Results of Problem 7 (n = 7).
Table 14. Results of Problem 7 (n = 10).
$x_k^*$: the local minimum of the objective function in the $k$-th iteration.
$f_k^*$: the function value of the objective function at $x_k^*$.
$k$: the iteration counter.
$F_f$: the total number of function evaluations of the objective function and the filled function.
CDFA: the filled function algorithm proposed in [5].
FFFA: the filled function algorithm proposed in [18].
Tables 1–14 show the numerical results for the test problems under different test settings (different parameters and different dimensions) obtained by the proposed filled function algorithm. In these tables, $k$ is the number of iterations the filled function algorithm needs to locate the global minimum solution $x^*$, $F(x^*)$ is the corresponding function value, and $n$ is the dimension of the test problem.
From the numerical results, we can see that the proposed filled function algorithm locates all the global minimum solutions successfully (some with a precision error of less than $10^{-10}$) and within a small number of iterations.
For these test problems, we also list the results of another continuously differentiable filled function algorithm, CDFA [5]. Since CDFA was only run on part of the test problems, we use slashes (/) to indicate the missing values. From the comparison, we can see that for problem 2 (c = 0.2), we use one less iteration to locate a minimum solution with six orders of magnitude higher accuracy than CDFA. For problem 2 (c = 0.5), although we use one more iteration than CDFA, we successfully locate the global minimum 0. For problem 2 (c = 0.05), we use one less iteration and obtain a better result than CDFA. For problem 3, we locate a better result (five orders of magnitude higher accuracy) than CDFA with the same number of iterations. For problems 5 and 7 (n = 7), our algorithm uses one more iteration than CDFA. For problem 6, we use one less iteration than CDFA to locate the global minimum. For problem 7 (n = 10), although we use one less iteration, CDFA locates the global minimum 0 while we obtain $1.51 \times 10^{-13}$. From the above analysis, our algorithm has four wins (problem 2 with c = 0.2 and c = 0.05, and problems 3 and 6), two losses (problem 5 and problem 7 with n = 7), and three ties (problem 2 with c = 0.5, problem 4, and problem 7 with n = 10) over the ten compared test cases. Therefore, we conclude that the proposed filled function algorithm is more effective than CDFA.
Comparisons are also made with a state-of-the-art filled function algorithm, FFFA [18], and Ge's filled function algorithm [1]. The comparison results are listed in Table 15, where No. is the index of the test problem, $n$ is the dimension, and $F_f$ is the total number of function evaluations consumed to obtain the optimal solution (the minimum solution for minimization problems).
Table 15. The overall comparisons.
Since all three filled function algorithms can find the global minimum solutions, we compare their efficiency by the total number of iterations and function evaluations consumed by each algorithm. From Table 15, we can see that for all test problems, our algorithm is much better than Ge's algorithm. As for the comparison with FFFA, for test problems 1, 3 and 4, although our algorithm takes one more iteration to reach the optimal solution, it uses far fewer function evaluations. For problems 2, 5 and 6, our algorithm uses fewer iterations and fewer function evaluations than FFFA. For problem 7, with dimension n = 2, our algorithm uses fewer iterations but more function evaluations than FFFA; for n = 3 and n = 7, our algorithm uses more function evaluations than FFFA; but for n = 5 and n = 10, our algorithm performs much better. For n = 5, our algorithm uses three fewer iterations and only 2287 function evaluations, while FFFA uses 12,681; for n = 10, our algorithm uses only 12,795 function evaluations (nearly half of FFFA's 20,044) to reach the global optimal solution. Overall, we conclude that the filled function algorithm proposed in this paper is more efficient than FFFA. From the numerical results and the comparisons with the other three filled function algorithms, we conclude that the new filled function algorithm is effective and efficient for solving unconstrained global optimization problems.

5. An Application of the Filled Function Algorithm

In this section, the proposed filled function algorithm is applied to a supply chain problem. Supply chain problems can be divided into three types: the manufacturer-core supply chain, the supplier-core supply chain and the seller-core supply chain. In this paper, we mainly consider the manufacturer-core supply chain, in which there are multiple suppliers, multiple shippers, multiple generalized transportation methods, multiple sellers and one manufacturer. The manufacturer uses different raw materials to produce various products that are sold by multiple sellers. The optimization objective of the supply chain is to minimize the total transportation cost.
We suppose there is a supply chain with a manufacturer as the core, one supplier, and one kind of raw material required for production. The unit cost of the raw material supplied by the supplier is 2000 USD/t, the maximum supply is 5000 t, and all shippers can deliver it. The manufacturer produces only one product and requires 1.2 t of raw material per ton of product. There are two shippers, both of which provide two generalized modes of transport. There are three sellers, and the order quantity of each seller must be strictly satisfied. The manufacturer initially has no product inventory, the production cost per unit product is 1000 USD, and the maximum production capacity is 4500 t. The relevant unit costs are shown in Table 16, Table 17 and Table 18.
Table 16. Unit transportation cost and maximum transportation capacity of shippers.
Table 17. Sales capacity and unit sales cost.
Table 18. Unit cost and maximum transportation capacity of shippers.
The optimization model used here is:
$$\begin{aligned} \min Z_C ={} & (216\beta_{1111} + 252\beta_{1121} + 228\beta_{1211} + 264\beta_{1221}) \times Q \\ & + 3700x_{1111} + 3740x_{1112} + 3695x_{1113} + 3660x_{1121} + 3690x_{1122} + 3695x_{1123} \\ & + 3680x_{1211} + 3710x_{1212} + 3695x_{1213} + 3680x_{1221} + 3700x_{1222} + 3705x_{1223}. \end{aligned}$$
The constraints are:
$$\begin{aligned}
& Q \leq 4500 \\
& x_{1111} + x_{1112} + x_{1113} \leq 1000 \\
& x_{1121} + x_{1122} + x_{1123} \leq 1200 \\
& x_{1211} + x_{1212} + x_{1213} \leq 1500 \\
& x_{1221} + x_{1222} + x_{1223} \leq 1000 \\
& x_{1111} + x_{1121} + x_{1211} + x_{1221} = 1000 \\
& x_{1112} + x_{1122} + x_{1212} + x_{1222} = 1200 \\
& x_{1113} + x_{1123} + x_{1213} + x_{1223} = 800 \\
& 1.2 \times Q \leq 5000 \\
& 1.2 \times \beta_{1111} \times Q \leq 2000 \\
& 1.2 \times \beta_{1121} \times Q \leq 2200 \\
& 1.2 \times \beta_{1211} \times Q \leq 2500 \\
& 1.2 \times \beta_{1221} \times Q \leq 2000 \\
& \beta_{1111} + \beta_{1121} + \beta_{1211} + \beta_{1221} = 1
\end{aligned}$$
where:
$$Q = x_{1111} + x_{1112} + x_{1113} + x_{1121} + x_{1122} + x_{1123} + x_{1211} + x_{1212} + x_{1213} + x_{1221} + x_{1222} + x_{1223}.$$
Here the $x_{ijnk}$ are non-negative integers, with $k = 1, 2, 3$; $j = 1, 2$; $n = 1, 2$; and the $\beta_{1jnl}$ are non-negative real numbers. The symbols used in the model are explained as follows:
$Z_C$: the total supply chain cost;
$x_{ijnk}$: the amount of the $i$-th product delivered to the $k$-th seller using the $n$-th generalized transportation method by the $j$-th transporter;
$\beta_{ijnl}$: the ratio of the amount of the $i$-th raw material shipped from the $l$-th supplier by the $j$-th transporter using the $n$-th generalized transportation method to the manufacturer's total demand for that raw material.
It can be seen from the model that the objective function is nonlinear and that the constraints on the supplier and on the transportation of raw materials are also nonlinear, so the model is a nonlinear mixed-integer programming model.
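The paper does not spell out how the equality and inequality constraints are handled inside the filled function framework; one common option is to fold them into the objective with a quadratic penalty and to relax the integrality of the $x$ variables during the continuous search. The sketch below encodes the model this way under our own assumptions (the penalty weight `MU` and the flat variable layout `v` are ours):

```python
import numpy as np

MU = 1e6  # penalty weight (our assumption; the paper reports none)
COST = np.array([3700, 3740, 3695, 3660, 3690, 3695,
                 3680, 3710, 3695, 3680, 3700, 3705], dtype=float)

def supply_chain_penalized(v):
    """v = (x_1111, x_1112, ..., x_1223, beta_1111, beta_1121, beta_1211, beta_1221)."""
    v = np.asarray(v, dtype=float)
    x, b = v[:12], v[12:]
    Q = x.sum()
    cost = (216*b[0] + 252*b[1] + 228*b[2] + 264*b[3]) * Q + COST @ x
    # Inequality constraints written as g(v) <= 0.
    g = np.array([Q - 4500,
                  x[0] + x[1] + x[2] - 1000, x[3] + x[4] + x[5] - 1200,
                  x[6] + x[7] + x[8] - 1500, x[9] + x[10] + x[11] - 1000,
                  1.2*Q - 5000,
                  1.2*b[0]*Q - 2000, 1.2*b[1]*Q - 2200,
                  1.2*b[2]*Q - 2500, 1.2*b[3]*Q - 2000])
    # Equality constraints written as h(v) = 0.
    h = np.array([x[0] + x[3] + x[6] + x[9] - 1000,
                  x[1] + x[4] + x[7] + x[10] - 1200,
                  x[2] + x[5] + x[8] + x[11] - 800,
                  b.sum() - 1.0])
    return cost + MU * (np.maximum(g, 0.0)**2).sum() + MU * (h**2).sum()
```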
We applied the proposed filled function algorithm to this supply chain model to optimize the total transportation cost. The experiments were run in MATLAB 2021b on a 64-bit Windows 10 personal computer with an Intel(R) Core(TM) i5-9400F CPU @ 2.90 GHz. We executed 20 independent runs, and the results are listed in Table 19.
Table 19. The optimization results.
Table 19 shows the optimization results for the nonzero variables (the values of all other variables are 0). We can see that this supply chain has multiple optimal solutions: when $x_{1121}$, $x_{1122}$, $x_{1222}$, $\beta_{1111}$ and $\beta_{1211}$ are fixed to the values shown in Table 19, any $x_{1113}$ and $x_{1213}$ satisfying
$$x_{1113} + x_{1213} = 800$$
give an optimal solution. The minimum transportation cost of this example is USD 1171.8 million. Here, $x_{1122} = 200$ means that the manufacturer should arrange shipper 1 to deliver 200 t of the product to the second seller using the second generalized mode of transportation, and $\beta_{1211} = 0.444$ means that the manufacturer should let shipper 2 complete 44.4 percent of the raw material transportation task using the first generalized transportation method.
We compare our results with those of ref. [20]. The proposed filled function algorithm finds multiple optimal solutions and takes less computational time. From Table 19, we can see that our algorithm successfully finds five optimal solutions, while the algorithm in [20] finds only the single optimal solution $x^* = (0, 0, 800, 0, 1000, 200, 0, 0, 0, 0, 1000, 0)$, $\beta^* = (0.556, 0, 0.444, 0)$. Moreover, the average running time of our algorithm is 1106 s, while the running time of [20] is 5128 s. Therefore, we conclude that the filled function algorithm in this paper is more effective and efficient.

6. Conclusions

In this paper, we design a new filled function method to solve unconstrained global optimization problems. The new filled function has two advantages. First, it has no adjustable parameters to tune for different optimization problems. Secondly, it is continuous and differentiable: these properties mean the filled function is less apt to produce extra fake local minima during the optimization, and more local search methods, especially efficient gradient-based ones, can be applied to make the optimization more efficient and effective. The proposed filled function algorithm is tested on widely used benchmark problems and is also applied to a supply chain problem. Numerical experiments show that the algorithm is effective and efficient. However, we also notice that the filled function algorithm becomes less effective on high-dimensional optimization problems. The reason may lie in two aspects: first, the search space grows exponentially with the dimension, and secondly, the local search method is not efficient enough for high-dimensional problems. The time complexity of the proposed filled function algorithm depends on the local search method adopted. Since the filled function designed in this paper is continuous and differentiable, the efficient gradient-based BFGS quasi-Newton method can be used in the proposed method. BFGS is well known for its fast superlinear convergence, although its per-iteration complexity is $O(n^2)$; this may be one of the reasons why its performance degrades as the problem size grows. The proposed algorithm mainly helps the optimization process repeatedly escape from local optimal solutions to better ones until the global optimal solution is located, and a different local search method can be used in this process. In future work, we will continue to study this issue and design new and better local search methods to make the filled function algorithm more efficient and better-performing on higher-dimensional problems.

Author Contributions

Conceptualization, H.L. and S.X.; methodology, H.L. and S.X.; software, S.X.; validation, Y.C.; writing—original draft, H.L. and S.X.; writing—review and editing, Y.C. and S.T.; supervision, H.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No.62002289).

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ge, R. A filled function method for finding a global minimizer of a function of several variables. Math. Program. 1990, 46, 191–204.
  2. Liu, X. Finding Global Minima with a Computable Filled Function. J. Glob. Optim. 2001, 19, 151–161.
  3. Liu, X. A Class of Continuously Differentiable Filled Functions for Global Optimization. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 38–47.
  4. Lin, H.; Wang, Y.; Fan, L.; Gao, Y. A new discrete filled function method for finding global minimizer of the integer programming. Appl. Math. Comput. 2013, 219, 4371–4378.
  5. Lin, H.; Gao, Y.; Wang, Y. A continuously differentiable filled function method for global optimization. Numer. Algorithms 2014, 66, 511–523.
  6. Gao, Y.; Yang, Y.; You, M. A new filled function method for global optimization. Appl. Math. Comput. 2015, 268, 685–695.
  7. El-Gindy, T.; Salim, M.; Ahmed, A. A new filled function method applied to unconstrained global optimization. Appl. Math. Comput. 2016, 273, 1246–1256.
  8. Ma, S.; Yang, Y.; Liu, H. A parameter free filled function for unconstrained global optimization. Appl. Math. Comput. 2010, 215, 3610–3619.
  9. Liu, H.; Wang, Y.; Guan, S.; Liu, X. A new filled function method for unconstrained global optimization. Int. J. Comput. Math. 2017, 94, 2283–2296.
  10. Pandiya, R.; Widodo, W.; Endrayanto, I. Non parameter-filled function for global optimization. Appl. Math. Comput. 2021, 391, 125642.
  11. Ahmed, A. A new parameter free filled function for solving unconstrained global optimization problems. Int. J. Comput. Math. 2021, 98, 106–119.
  12. Ahmed, A. A new filled function for global minimization and system of nonlinear equations. Optimization 2021, 71, 1–24.
  13. Pandiya, R.; Salmah; Widodo; Endrayanto, I. A Class of Parameter-Free Filled Functions for Unconstrained Global Optimization. Int. J. Comput. Methods 2022, 19, 2250003.
  14. Qu, D.; Shang, Y.; Zhan, Y.; Wu, D. A new parameter-free filled function for the global optimization problem. Oper. Res. Trans. 2021, 25, 89–95.
  15. Wang, Y.J.; Zhang, J.S. A new constructing auxiliary function method for global optimization. Math. Comput. Model. 2008, 47, 1396–1410.
  16. Sahiner, A.; Abdulhamid, İ.A.M.; Ibrahem, S.A. A new filled function method with two parameters in a directional search. J. Multidiscip. Model. Optim. 2019, 2, 34–42.
  17. Wu, X.; Wang, Y.; Fan, N. A new filled function method based on adaptive search direction and valley widening for global optimization. Appl. Intell. 2021, 51, 6234–6254.
  18. Liu, J.; Wang, Y.; Wei, S.; Sui, X.; Tong, W. A Filled Flatten Function Method Based on Basin Deepening and Adaptive Initial Point for Global Optimization. Int. J. Pattern Recognit. Artif. Intell. 2020, 34, 2059011.
  19. Lin, H.; Wang, Y.; Gao, Y.; Wang, X. A filled function method for global optimization with inequality constraints. Comput. Appl. Math. 2018, 37, 1524–1536.
  20. Qu, D.; Shang, Y.; Wu, D.; Sun, G. Filled function method to optimize supply chain transportation costs. J. Ind. Manag. Optim. 2022, 18, 3339.
  21. Yuan, L.; Li, Q. Filling function method for solving two-level programming problems. J. Math. 2022, 42, 153–161.
  22. Wan, Z.; Yuan, L.; Chen, J. A filled function method for nonlinear systems of equalities and inequalities. Comput. Appl. Math. 2012, 31, 391–405.
  23. Ma, S.; Gao, Y.; Zhang, B.; Zuo, W. A New Nonparametric Filled Function Method for Integer Programming Problems with Constraints. Mathematics 2022, 10, 734.
  24. Yuan, L.Y.; Wan, Z.P.; Tang, Q.H.; Zheng, Y. A class of parameter-free filled functions for box-constrained system of nonlinear equations. Acta Math. Appl. Sin. Engl. Ser. 2016, 32, 355–364.
  25. Zhang, Y.; Xu, Y.T. A one-parameter filled function method applied to nonsmooth constrained global optimization. Comput. Math. Appl. 2009, 58, 1230–1238.
  26. Zhang, Y.; Xu, Y.; Zhang, L. A filled function method applied to nonsmooth constrained global optimization. J. Comput. Appl. Math. 2009, 232, 415–426.
  27. Zhang, L.S.; Ng, C.K.; Li, D.; Tian, W.W. A New Filled Function Method for Global Optimization. J. Glob. Optim. 2004, 28, 17–43.
  28. Yang, Y.; Shang, Y. A new filled function method for unconstrained global optimization. Appl. Math. Comput. 2006, 173, 501–512.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
