Article

NeuralMinimizer: A Novel Method for Global Optimization

by Ioannis G. Tsoulos 1,*, Alexandros Tzallas 1, Evangelos Karvounis 1 and Dimitrios Tsalikakis 2
1 Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
2 Department of Engineering Informatics and Telecommunications, University of Western Macedonia, 50100 Kozani, Greece
* Author to whom correspondence should be addressed.
Information 2023, 14(2), 66; https://doi.org/10.3390/info14020066
Submission received: 19 December 2022 / Revised: 19 January 2023 / Accepted: 20 January 2023 / Published: 25 January 2023
(This article belongs to the Special Issue Feature Papers in Information in 2023)

Abstract
The problem of finding the global minimum of multidimensional functions arises in a wide range of applications. An innovative method for locating the global minimum of multidimensional functions is presented here. The method first builds an approximation of the objective function from only a few real samples of it, using a machine learning model trained on those samples. The sampling required by the optimization is then performed on the approximation rather than on the objective function itself. Furthermore, the approximation is refined at every step by adding the local minima discovered during the optimization to the training set of the machine learning model. As a termination criterion, the proposed technique uses a well-established stopping rule from the relevant literature, which is evaluated after each execution of the local minimization procedure. The proposed technique was applied to a number of well-known problems from the relevant literature, and the comparative results with respect to modern global minimization techniques are extremely promising.

1. Introduction

An innovative method for finding the global minimum of multidimensional functions is presented here. The functions considered are continuous and differentiable, defined as $f: S \rightarrow \mathbb{R}$, $S \subset \mathbb{R}^{n}$. The problem of locating the global optimum is usually formulated as:
$$x^{*} = \arg\min_{x \in S} f(x) \qquad (1)$$
with the set $S$ defined as:
$$S = \left[a_1, b_1\right] \otimes \left[a_2, b_2\right] \otimes \cdots \otimes \left[a_n, b_n\right] \qquad (2)$$
A variety of problems in the physical world can be represented as global minimum problems, such as problems from physics [1,2,3], chemistry [4,5,6], economics [7,8], medicine [9,10], etc. During the past years, many methods, especially stochastic ones, have been proposed to tackle the problem of Equation (1), such as Controlled Random Search methods [11,12,13], Simulated Annealing methods [14,15,16], Differential Evolution methods [17,18], Particle Swarm Optimization (PSO) methods [19,20,21], Ant Colony Optimization [22,23], Genetic Algorithms [24,25,26], etc. A systematic review of global optimization methods can be found in the work of Floudas et al. [27]. In addition, during the last few years, a variety of works have been proposed that combine or modify global optimization methods in order to locate the global minimum more efficiently, such as methods that combine PSO with other methods [28,29,30], methods aimed at discovering all the local minima of a function [31,32,33], new stopping rules to terminate global optimization techniques efficiently [34,35,36], etc. Furthermore, due to the widespread availability of parallel processing, several methods have been proposed that take full advantage of it, such as parallel techniques [37,38,39] and methods that utilize GPU architectures [40,41].
In addition, during the past years many metaheuristic algorithms have appeared for global optimization problems, such as the Quantum-based avian navigation optimizer algorithm [42], the Tunicate Swarm Algorithm (TSA), inspired by the life of tunicates at sea and the way they obtain food [43], the Starling murmuration optimizer [44,45], the Diversity-maintained multi-trial vector differential evolution algorithm (DMDE) used in large-scale global optimization [46], an improved moth-flame optimization algorithm with an adaptation mechanism for numerical and mechanical engineering problems [47], the dwarf-mongoose optimization algorithm [48], etc.
In this paper, a new multistart method is proposed that uses a machine learning model trained in parallel with the evolution of the optimization process. Although multistart methods are considered the basis of more modern optimization techniques, they have been used successfully in several problems such as the Traveling Salesman Problem (TSP) [49,50,51], the maximum clique problem [52,53], the vehicle routing problem [54,55], scheduling problems [56,57], etc. In the new technique, a Radial Basis Function (RBF) network [58] is used to construct an approximation of the objective function. This construction is carried out in parallel with the execution of the optimization. A limited number of samples from the objective function, together with the local minima discovered during the optimization, are used to construct the approximation function. During the execution of the method, the samples needed to start the local minimizers are taken from the approximation function constructed by the neural network. The RBF network was chosen as the approximation tool because it has been used successfully in a wide range of problems in the field of artificial intelligence [59,60,61,62] and its training procedure is very fast compared with, for example, artificial neural networks. In addition, for a more efficient termination of the method, the termination rule proposed by Tsoulos [63] is used, but it is applied after each execution of the local minimization procedure. The method was applied to a set of test functions from the relevant literature and the results are extremely promising compared with other global optimization techniques.
The proposed method does not sample the actual function but an approximation of it, which is generated incrementally. The approximation is created using an RBF neural network, known for its reliability and its ability to approximate functions efficiently. The initial approximation is created from a limited number of points and is then improved with the local minima found during the execution of the method. With the above procedure, the required number of function calls is drastically reduced, since samples are produced from the approximation rather than from the actual function. Only samples with low function values are taken from the approximation function, which means that the global minimum is likely to be found faster and more reliably than with other techniques. Furthermore, the construction of the approximation function does not use any prior knowledge about the objective problem.
The rest of this article is organized as follows: in Section 2, the proposed method is described; in Section 3, the experimental functions used, the experimental results, and the comparisons are listed; and finally, in Section 4, some conclusions and final thoughts are given.

2. Method Description

The proposed technique builds an estimate of the objective function during the optimization using an RBF network. This estimate is initially constructed from a few samples of the objective function, and the local minima discovered during the optimization are gradually added to it. In this way, the estimate of the objective function is continuously improved so that it approximates the true function as closely as possible. At every iteration, a number of samples are taken from the estimated function and sorted in ascending order; those with the lowest values become the starting points of the local minimization method. The local optimization method used here is a BFGS variant due to Powell [64]. This process drastically reduces the total number of function calls and, at the same time, drives the points used as starting points of the local minimization technique towards the global minimum of the objective function. In addition, the proposed method checks the termination rule after every application of the local search method. That way, if the global minimum has already been discovered with some certainty, no more function calls are wasted trying to find it.
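To make the role of the local search concrete, the following minimal Python sketch wraps a bounded local minimizer. The paper's local search is a tolerant BFGS variant due to Powell [64]; since that routine is not available in common Python libraries, SciPy's L-BFGS-B is used here purely as a stand-in, and the name local_search is illustrative rather than part of the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def local_search(f, x0, bounds):
    """Run a bounded local minimization from the starting point x0.

    The paper employs a BFGS variant due to Powell [64]; SciPy's L-BFGS-B is
    used here only as a readily available stand-in that respects the box S.
    bounds is a sequence of (a_i, b_i) pairs.
    """
    res = minimize(f, np.asarray(x0, dtype=float), method="L-BFGS-B", bounds=bounds)
    return res.x, float(res.fun)
```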
In the following subsections, the training procedure of RBF networks as well as the proposed method are fully described.

2.1. RBF Preliminaries

An RBF network can be defined as:
$$N(x) = \sum_{i=1}^{k} w_i \, \phi\left(\left\| x - c_i \right\|\right) \qquad (3)$$
where
  • The vector $x$ is the input pattern of the network.
  • The vectors $c_i$, $i = 1, \dots, k$ are called the center vectors.
  • The vector $w$ stands for the output weights of the RBF network.
In most cases, the function $\phi(x)$ is a Gaussian function:
$$\phi(x) = \exp\left( -\frac{\left\| x - c \right\|^{2}}{\sigma^{2}} \right)$$
The training error of the RBF network on a set of points $T = \left\{ \left(x_1, y_1\right), \left(x_2, y_2\right), \dots, \left(x_M, y_M\right) \right\}$ is estimated as
$$E\left(N(x)\right) = \sum_{i=1}^{M} \left( N\left(x_i\right) - y_i \right)^{2} \qquad (4)$$
In most approaches, Equation (4) is minimized with respect to the parameters of the RBF network using a two-phase procedure:
  • In the first phase, the K-Means algorithm [65] is used to approximate the k centers and the corresponding variances.
  • In the second phase, the weight vector $w = \left(w_1, w_2, \dots, w_k\right)$ is calculated by solving a linear system of equations as follows:
    (a) Set $W = \left(w_{kj}\right)$.
    (b) Set $\Phi = \left(\phi_j\left(x_i\right)\right)$.
    (c) Set $T = \left(t_i\right) = \left(f\left(x_i\right)\right)$, $i = 1, \dots, M$.
    (d) The system to be solved is:
    $$\Phi^{T}\left(T - \Phi W^{T}\right) = 0$$
    with the solution:
    $$W^{T} = \left(\Phi^{T}\Phi\right)^{-1}\Phi^{T} T = \Phi^{\dagger} T$$
    where $\Phi^{\dagger}$ denotes the pseudo-inverse of $\Phi$.
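The two-phase training procedure described above can be illustrated with a short Python sketch that uses scikit-learn's KMeans for the centers and widths and a linear least-squares solve for the output weights. The function names train_rbf and rbf_predict, as well as the particular width heuristic, are assumptions made for this sketch and do not come from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, k=10):
    """Two-phase RBF training: K-Means for centers/widths, least squares for weights."""
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    centers = km.cluster_centers_
    # One width per center: mean distance of the assigned points to their center.
    # This is a common heuristic; the paper only states that K-Means provides the variances.
    sigmas = np.array([
        np.linalg.norm(X[km.labels_ == j] - centers[j], axis=1).mean() + 1e-12
        for j in range(k)
    ])
    # Design matrix Phi_ij = phi_j(x_i) and least-squares solution of Phi w = y.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(dists ** 2) / (sigmas[None, :] ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, sigmas, w

def rbf_predict(x, centers, sigmas, w):
    """Evaluate the trained RBF network N(x) at a single point x."""
    phi = np.exp(-(np.linalg.norm(x - centers, axis=1) ** 2) / (sigmas ** 2))
    return float(phi @ w)
```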

2.2. The Main Algorithm

The main steps of the proposed algorithm are as follows:
1. Initialization step.
    (a) Set $k$, the number of weights in the RBF network.
    (b) Set $N_S$, the number of initial samples that will be taken from the function $f(x)$.
    (c) Set $N_T$, the number of samples that will be used in every iteration as starting points for the local optimization procedure.
    (d) Set $N_R$, the number of samples that will be drawn from the RBF network at every iteration, with $N_R > N_T$.
    (e) Set $N_G$, the maximum number of allowed iterations.
    (f) Set iter = 0, the iteration counter.
    (g) Set $\left(x^{*}, y^{*}\right)$ as the global minimum found so far. Initially, $y^{*} = \infty$.
2. Creation step.
    (a) Set $T = \emptyset$, the training set for the RBF network.
    (b) For $i = 1, \dots, N_S$ do
        i. Take a new random sample $x_i \in S$.
        ii. Calculate $y_i = f\left(x_i\right)$.
        iii. Set $T = T \cup \left\{ \left(x_i, y_i\right) \right\}$.
    (c) End For
    (d) Train the RBF network on the training set $T$.
3. Sampling step.
    (a) Set $T_R = \emptyset$.
    (b) For $i = 1, \dots, N_R$ do
        i. Take a random sample $\left(x_i, y_i\right)$ from the RBF network.
        ii. Set $T_R = T_R \cup \left\{ \left(x_i, y_i\right) \right\}$.
    (c) End For
    (d) Sort $T_R$ according to the $y$ values in ascending order.
4. Optimization step.
    (a) For $i = 1, \dots, N_T$ do
        i. Take the next sample $\left(x_i, y_i\right)$ from $T_R$.
        ii. Compute $y_i = \mathrm{LS}\left(x_i\right)$, where $\mathrm{LS}(x)$ is a predefined local search method.
        iii. Set $T = T \cup \left\{ \left(x_i, y_i\right) \right\}$; this step updates the training set of the RBF network.
        iv. Train the RBF network on the set $T$.
        v. If $y_i \le y^{*}$, then set $x^{*} = x_i$, $y^{*} = y_i$.
        vi. Check the termination rule suggested in [63]. If it holds, then report $\left(x^{*}, y^{*}\right)$ as the located global minimum and terminate.
    (b) End For
5. Set iter = iter + 1.
6. Go to the Sampling step.
The steps of the algorithm are illustrated graphically in Figure 1.
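A compact Python sketch of the whole loop is given below. It reuses the train_rbf, rbf_predict, and local_search helpers sketched earlier; the simple stagnation counter is only a placeholder for the stopping rule of [63], and the default parameter values mirror Table 1. This is an illustration of the idea, not the OPTIMUS implementation.

```python
import numpy as np

def neural_minimizer_sketch(f, bounds, N_S=50, N_T=100, N_R=1000, N_G=200, k=10, rng=None):
    """Illustrative surrogate-guided multistart loop (not the OPTIMUS implementation)."""
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)

    # Creation step: N_S real samples of f form the first training set of the surrogate.
    X = rng.uniform(lo, hi, size=(N_S, dim))
    y = np.array([f(x) for x in X])
    model = train_rbf(X, y, k)
    best = int(np.argmin(y))
    x_star, y_star = X[best].copy(), float(y[best])

    stagnation = 0                                   # crude placeholder for the rule of [63]
    for _ in range(N_G):
        # Sampling step: draw N_R candidates from the surrogate and keep the N_T lowest.
        C = rng.uniform(lo, hi, size=(N_R, dim))
        pred = np.array([rbf_predict(c, *model) for c in C])
        starts = C[np.argsort(pred)[:N_T]]

        # Optimization step: local search from every start, then refresh the surrogate.
        for x0 in starts:
            xm, ym = local_search(f, x0, bounds)
            improved = ym < y_star
            X, y = np.vstack([X, xm]), np.append(y, ym)
            model = train_rbf(X, y, k)
            if ym <= y_star:
                x_star, y_star = xm, float(ym)
            stagnation = 0 if improved else stagnation + 1
            if stagnation >= 20:                     # stand-in threshold, not the criterion of [63]
                return x_star, y_star
    return x_star, y_star
```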

3. Experiments

To estimate the efficiency of the new technique, a number of functions from the relevant literature were used [66,67]. These functions are provided in Appendix A. The proposed technique was tested on these functions and the results were compared with those produced by a simple genetic algorithm, a PSO method, and a Differential Evolution (DE) method. The genetic algorithm used is based on the $\mathrm{GA}_{cr1,l}$ algorithm from the work of Kaelo and Ali [68]. To ensure fairness in the comparison of the results, the same local minimization method as in the proposed method was used for all global optimization techniques. In addition, the number of chromosomes in the genetic algorithm and the number of particles in the PSO method are identical to the parameter $N_T$ of the proposed procedure, and for the DE method the number of agents was also set to $N_T$. The values of the parameters used in the conducted experiments are shown in Table 1. For every function and for every global optimizer, 30 independent runs were executed using a different seed for the random generator each time. The proposed method is implemented under the name NeuralMinimizer in the OPTIMUS global optimization environment, which is freely available from https://github.com/itsoulos/OPTIMUS (accessed on 19 January 2023). All the experiments were conducted on an AMD Ryzen 5950X with 128 GB of RAM running the Debian Linux operating system.
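As an illustration of this protocol, a small driver along the following lines could repeat the sketch from Section 2 for 30 independent runs with a different seed each time; the camel objective is transcribed from Appendix A, and nothing here corresponds to the actual OPTIMUS command line.

```python
import numpy as np

def camel(x):
    """Six-hump camel function from Appendix A, x in [-5, 5]^2."""
    return (4.0 * x[0]**2 - 2.1 * x[0]**4 + x[0]**6 / 3.0
            + x[0] * x[1] - 4.0 * x[1]**2 + 4.0 * x[1]**4)

values = []
for seed in range(30):                                    # 30 independent runs
    rng = np.random.default_rng(seed)                     # different seed per run
    _, y_star = neural_minimizer_sketch(camel, bounds=[(-5.0, 5.0)] * 2, rng=rng)
    values.append(y_star)
print(f"mean best value over 30 runs: {np.mean(values):.4f}")   # global minimum is -1.0316
```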
The experimental results from the application of the proposed method and the other methods are shown in Table 2. The numbers in the cells represent the average number of function calls. The number in parentheses indicates the fraction of runs in which the global optimum was successfully discovered; the absence of this fraction indicates that the global minimum was discovered in every run (100% success). At the end of the table, an additional row named AVERAGE shows the total number of function calls and the average success rate in locating the global minimum. The experimental results clearly show the superiority of the proposed technique over the other methods in terms of the number of function calls: the proposed technique requires, on average, about 90% fewer function calls. In addition, the proposed technique appears to be more reliable, as it locates the global minimum more often, on average, across the test functions. The statistical comparison between the global optimization methods is shown in Figure 2.
In addition, Table 3 presents the experimental results of the proposed method for various values of the parameter $N_S$. As can be seen, increasing this parameter does not cause a large increase in the total number of function calls, while at the same time it improves to some extent the ability of the proposed technique to find the global minimum.
The efficiency of the method is also shown in Table 4, where the proposed method is compared against the genetic algorithm and particle swarm optimization for a range of atom counts in the Potential problem. As can be seen in the table, the proposed method requires a significantly smaller number of function calls than the other techniques, and its reliability in finding the global minimum remains high even when the number of atoms in the potential increases significantly.

4. Conclusions

An innovative technique for finding the global minimum of multidimensional functions was presented in this work. The new technique is based on the multistart procedure, but it also builds an estimate of the objective function through a machine learning model. The machine learning model constructs this estimate using a small number of samples from the true function, together with the local minima discovered during the execution of the method. In this way, the estimate of the objective function is continuously improved, and the sampling used to start the local minimizations is performed on the estimated function rather than the actual one. This procedure, combined with checking the termination criterion after each execution of the local minimization method, leads the proposed method to excellent results both in terms of the speed of finding the global minimum and in terms of reliability. In addition, the method shows significant stability in its performance even under large changes to its parameters, as presented in the experimental results section.
In the future, the use of an RBF network to construct an approximation of the objective function could be applied to more modern optimization techniques such as genetic algorithms. It would also be interesting to create a parallel implementation of the proposed method, in order to speed up its execution significantly and make it applicable to higher-dimensional optimization problems.

Author Contributions

I.G.T., A.T., E.K. and D.T. conceived of the idea and methodology. I.G.T. and A.T. conducted the experiments, employing several test functions and provided the comparative experiments. E.K. and D.T. performed the statistical analysis and all other authors prepared the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

  • Bent Cigar function. The function is
    $$f(x) = x_1^{2} + 10^{6} \sum_{i=2}^{n} x_i^{2}$$
    with the global minimum $f\left(x^{*}\right) = 0$. For the conducted experiments, the value $n = 10$ was used.
  • Bf1 function. The Bohachevsky 1 function is given by the equation
    $$f(x) = x_1^{2} + 2 x_2^{2} - \frac{3}{10} \cos\left(3 \pi x_1\right) - \frac{4}{10} \cos\left(4 \pi x_2\right) + \frac{7}{10}$$
    with $x \in [-100, 100]^{2}$.
  • Bf2 function. The Bohachevsky 2 function is given by the equation
    $$f(x) = x_1^{2} + 2 x_2^{2} - \frac{3}{10} \cos\left(3 \pi x_1\right) \cos\left(4 \pi x_2\right) + \frac{3}{10}$$
    with $x \in [-50, 50]^{2}$.
  • Branin function. The function is defined by $f(x) = \left(x_2 - \frac{5.1}{4\pi^{2}} x_1^{2} + \frac{5}{\pi} x_1 - 6\right)^{2} + 10 \left(1 - \frac{1}{8\pi}\right) \cos\left(x_1\right) + 10$ with $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$. The value of the global minimum is 0.397887.
  • CM function. The Cosine Mixture function is given by the equation
    $$f(x) = \sum_{i=1}^{n} x_i^{2} - \frac{1}{10} \sum_{i=1}^{n} \cos\left(5 \pi x_i\right)$$
    with $x \in [-1, 1]^{n}$. For the conducted experiments, the value $n = 4$ was used.
  • Camel function. The function is given by
    $$f(x) = 4 x_1^{2} - 2.1 x_1^{4} + \frac{1}{3} x_1^{6} + x_1 x_2 - 4 x_2^{2} + 4 x_2^{4}, \quad x \in [-5, 5]^{2}$$
    The global minimum has the value $f\left(x^{*}\right) = -1.0316$.
  • Discus function. The function is defined as
    $$f(x) = 10^{6} x_1^{2} + \sum_{i=2}^{n} x_i^{2}$$
    with global minimum $f\left(x^{*}\right) = 0$. For the conducted experiments, the value $n = 10$ was used.
  • Easom function. The function is given by the equation
    $$f(x) = -\cos\left(x_1\right) \cos\left(x_2\right) \exp\left( -\left(x_2 - \pi\right)^{2} - \left(x_1 - \pi\right)^{2} \right)$$
    with $x \in [-100, 100]^{2}$ and global minimum $-1.0$.
  • Exponential function. The function is given by
    $$f(x) = -\exp\left( -0.5 \sum_{i=1}^{n} x_i^{2} \right), \quad -1 \le x_i \le 1$$
    The values $n = 4, 16, 64$ were used here and the corresponding function names are EXP4, EXP16, EXP64.
  • Griewank10 function, defined as:
    $$f(x) = \sum_{i=1}^{n} \frac{x_i^{2}}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$$
    with $n = 10$.
  • Hansen function. $f(x) = \sum_{i=1}^{5} i \cos\left( (i-1) x_1 + i \right) \sum_{j=1}^{5} j \cos\left( (j+1) x_2 + j \right)$, $x \in [-10, 10]^{2}$. The global minimum of the function is $-176.541793$.
  • Hartman 3 function. The function is given by
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} \left( x_j - p_{ij} \right)^{2} \right)$$
    with $x \in [0, 1]^{3}$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix}$$
    The value of the global minimum is $-3.862782$.
  • Hartman 6 function.
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} \left( x_j - p_{ij} \right)^{2} \right)$$
    with $x \in [0, 1]^{6}$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix},$$
    $$p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix}$$
    The value of the global minimum is $-3.322368$.
  • High Conditioned Elliptic function, defined as
    $$f(x) = \sum_{i=1}^{n} \left(10^{6}\right)^{\frac{i-1}{n-1}} x_i^{2}$$
    with $n = 10$ for the conducted experiments.
  • Potential function, used to represent the lowest energy for the molecular conformation of N atoms via the Lennard–Jones potential [69]. The function is defined as:
    $$V_{LJ}(r) = 4 \epsilon \left( \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right)$$
    In the current experiments, two different cases were studied: $N = 3, 5$.
  • Rastrigin function. The function is given by
    $$f(x) = x_1^{2} + x_2^{2} - \cos\left(18 x_1\right) - \cos\left(18 x_2\right), \quad x \in [-1, 1]^{2}$$
  • Shekel 7 function.
    $$f(x) = -\sum_{i=1}^{7} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^{T} + c_i}$$
    with $x \in [0, 10]^{4}$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix}$$
  • Shekel 5 function.
    $$f(x) = -\sum_{i=1}^{5} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^{T} + c_i}$$
    with $x \in [0, 10]^{4}$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix}$$
  • Shekel 10 function.
    $$f(x) = -\sum_{i=1}^{10} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^{T} + c_i}$$
    with $x \in [0, 10]^{4}$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix}$$
  • Sinusoidal function. The function is given by
    $$f(x) = -\left( 2.5 \prod_{i=1}^{n} \sin\left(x_i - z\right) + \prod_{i=1}^{n} \sin\left(5 \left(x_i - z\right)\right) \right), \quad 0 \le x_i \le \pi.$$
    The global minimum is located at $x^{*} = (2.09435, 2.09435, \dots, 2.09435)$ with $f\left(x^{*}\right) = -3.5$. For the conducted experiments, the cases $n = 4, 8$ with $z = \frac{\pi}{6}$ were studied.
  • Test2N function. This function is given by the equation
    $$f(x) = \frac{1}{2} \sum_{i=1}^{n} \left( x_i^{4} - 16 x_i^{2} + 5 x_i \right), \quad x_i \in [-5, 5].$$
    The function has $2^{n}$ local minima in the specified range and, in our experiments, we used $n = 4, 5, 6, 7$.
  • Test30N function. This function is given by
    $$f(x) = \frac{1}{10} \sin^{2}\left(3 \pi x_1\right) \sum_{i=2}^{n-1} \left( \left(x_i - 1\right)^{2} \left( 1 + \sin^{2}\left(3 \pi x_{i+1}\right) \right) \right) + \left(x_n - 1\right)^{2} \left( 1 + \sin^{2}\left(2 \pi x_n\right) \right)$$
    with $x \in [-10, 10]^{n}$. The function has $30^{n}$ local minima in the specified range and we used $n = 3, 4$ in the conducted experiments.
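For concreteness, two of the benchmarks listed above can be transcribed directly into Python as follows; these are straightforward readings of the formulas and not code taken from the paper.

```python
import numpy as np

def rastrigin2(x):
    """Two-dimensional Rastrigin variant used above, x in [-1, 1]^2."""
    return x[0]**2 + x[1]**2 - np.cos(18.0 * x[0]) - np.cos(18.0 * x[1])

def griewank(x):
    """Griewank function for arbitrary dimension; the experiments use n = 10."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```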

References

  1. Honda, M. Application of genetic algorithms to modelings of fusion plasma physics. Comput. Phys. Commun. 2018, 231, 94–106.
  2. Luo, X.L.; Feng, J.; Zhang, H.H. A genetic algorithm for astroparticle physics studies. Comput. Phys. Commun. 2020, 250, 106818.
  3. Aljohani, T.M.; Ebrahim, A.F.; Mohammed, O. Single and Multiobjective Optimal Reactive Power Dispatch Based on Hybrid Artificial Physics–Particle Swarm Optimization. Energies 2019, 12, 2333.
  4. Pardalos, P.M.; Shalloway, D.; Xue, G. Optimization methods for computing global minima of nonconvex potential energy functions. J. Glob. Optim. 1994, 4, 117–133.
  5. Liwo, A.; Lee, J.; Ripoll, D.R.; Pillardy, J.; Scheraga, A.H. Protein structure prediction by global optimization of a potential energy function. Biophysics 1999, 96, 5482–5485.
  6. An, J.; He, G.; Qin, F.; Li, R.; Huang, Z. A new framework of global sensitivity analysis for the chemical kinetic model using PSO-BPNN. Comput. Chem. Eng. 2018, 112, 154–164.
  7. Gaing, Z.-L. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195.
  8. Basu, M. A simulated annealing-based goal-attainment method for economic emission load dispatch of fixed head hydrothermal power systems. Int. J. Electr. Power Energy Syst. 2005, 27, 147–153.
  9. Cherruault, Y. Global optimization in biology and medicine. Math. Comput. Model. 1994, 20, 119–132.
  10. Lee, E.K. Large-Scale Optimization-Based Classification Models in Medicine and Biology. Ann. Biomed. Eng. 2007, 35, 1095–1109.
  11. Price, W.L. Global optimization by controlled random search. J. Optim. Theory Appl. 1983, 40, 333–348.
  12. Křivý, I.; Tvrdík, J. The controlled random search algorithm in optimizing regression models. Comput. Stat. Data Anal. 1995, 20, 229–234.
  13. Ali, M.M.; Törn, A.; Viitanen, S. A Numerical Comparison of Some Modified Controlled Random Search Algorithms. J. Glob. Optim. 1997, 11, 377–385.
  14. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  15. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973.
  16. Eglese, R.W. Simulated annealing: A tool for operational research. Eur. J. Oper. Res. 1990, 46, 271–281.
  17. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  18. Liu, J.; Lampinen, J. A Fuzzy Adaptive Differential Evolution Algorithm. Soft Comput. 2005, 9, 448–462.
  19. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  20. Poli, R.; Kennedy, J.K.; Blackwell, T. Particle swarm optimization algorithm: An Overview. Swarm Intell. 2007, 1, 33–57.
  21. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325.
  22. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  23. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173.
  24. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989.
  25. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer-Verlag: Berlin, Germany, 1996.
  26. Grady, S.A.; Hussaini, M.Y.; Abdullah, M.M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270.
  27. Floudas, C.A.; Gounaris, C.E. A review of recent advances in global optimization. J. Glob. Optim. 2009, 45, 3–38.
  28. Da, Y.; Xiurun, G. An improved PSO-based ANN with simulated annealing technique. Neurocomputing 2005, 63, 527–533.
  29. Liu, H.; Cai, Z.; Wang, Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl. Soft Comput. 2010, 10, 629–640.
  30. Pan, X.; Xue, L.; Lu, Y.; Sun, N. Hybrid particle swarm optimization with simulated annealing. Multimed. Tools Appl. 2019, 78, 29921–29936.
  31. Ali, M.M.; Storey, C. Topographical multilevel single linkage. J. Glob. Optim. 1994, 5, 349–358.
  32. Salhi, S.; Queen, N.M. A hybrid algorithm for identifying global and local minima when optimizing functions with many minima. Eur. J. Oper. Res. 2004, 155, 51–67.
  33. Tsoulos, I.G.; Lagaris, I.E. MinFinder: Locating all the local minima of a function. Comput. Phys. Commun. 2006, 174, 166–179.
  34. Betro, B.; Schoen, F. Optimal and sub-optimal stopping rules for the multistart algorithm in global optimization. Math. Program. 1992, 57, 445–458.
  35. Hart, W.E. Sequential stopping rules for random optimization methods with applications to multistart local search. SIAM J. Optim. 1998, 9, 270–290.
  36. Lagaris, I.E.; Tsoulos, I.G. Stopping Rules for Box-Constrained Stochastic Global Optimization. Appl. Math. Comput. 2008, 197, 622–632.
  37. Schutte, J.F.; Reinbolt, J.A.; Fregly, B.J.; Haftka, R.T.; George, A.D. Parallel global optimization with the particle swarm algorithm. Int. J. Numer. Methods Eng. 2004, 61, 2296–2315.
  38. Larson, J.; Wild, S.M. Asynchronously parallel optimization solver for finding multiple minima. Math. Comput. 2018, 10, 303–332.
  39. Tsoulos, I.G.; Tzallas, A.; Tsalikakis, D. PDoublePop: An implementation of parallel genetic algorithm for function optimization. Comput. Phys. Commun. 2016, 209, 183–189.
  40. Kamil, R.; Reiji, S. An Efficient GPU Implementation of a Multi-Start TSP Solver for Large Problem Instances. In Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; pp. 1441–1442.
  41. Van Luong, T.; Melab, N.; Talbi, E.G. GPU-Based Multi-start Local Search Algorithms. In Learning and Intelligent Optimization; Coello, C.A.C., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6683.
  42. Hoda, Z.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
  43. Gharehchopogh, F.S. An Improved Tunicate Swarm Algorithm with Best-random Mutation Strategy for Global Optimization Problems. J. Bionic Eng. 2022, 19, 1177–1202.
  44. Hoda, Z.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616.
  45. Nadimi-Shahraki, M.H.; Asghari Varzaneh, Z.; Zamani, H.; Mirjalili, S. Binary Starling Murmuration Optimizer Algorithm to Select Effective Features from Medical Data. Appl. Sci. 2023, 13, 564.
  46. Nadimi-Shahraki, M.H.; Hoda, Z. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895.
  47. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Abualigah, L. An improved moth-flame optimization algorithm with adaptation mechanism to solve numerical and mechanical engineering problems. Entropy 2021, 23, 1637.
  48. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570.
  49. Li, W. A Parallel Multi-start Search Algorithm for Dynamic Traveling Salesman Problem. In Experimental Algorithms; Pardalos, P.M., Rebennack, S., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6630.
  50. Martí, R.; Resende, M.G.C.; Ribeiro, C.C. Multi-start methods for combinatorial optimization. Eur. J. Oper. Res. 2013, 226, 1–8.
  51. Pandiri, V.; Singh, A. Two multi-start heuristics for the k-traveling salesman problem. Opsearch 2020, 57, 1164–1204.
  52. Wu, Q.; Hao, J.K. An adaptive multistart tabu search approach to solve the maximum clique problem. J. Comb. Optim. 2013, 26, 86–108.
  53. Djeddi, Y.; Haddadene, H.A.; Belacel, N. An extension of adaptive multi-start tabu search for the maximum quasi-clique problem. Comput. Ind. Eng. 2019, 132, 280–292.
  54. Bräysy, O.; Hasle, G.; Dullaert, W. A multi-start local search algorithm for the vehicle routing problem with time windows. Eur. J. Oper. Res. 2004, 159, 586–605.
  55. Michallet, J.; Prins, C.; Amodeo, L.; Yalaoui, F.; Vitry, G. Multi-start iterated local search for the periodic vehicle routing problem with time windows and time spread constraints on services. Comput. Oper. Res. 2014, 41, 196–207.
  56. Peng, K.; Pan, Q.K.; Gao, L.; Li, X.; Das, S.; Zhang, B. A multi-start variable neighbourhood descent algorithm for hybrid flowshop rescheduling. Swarm Evol. Comput. 2019, 45, 92–112.
  57. Mao, J.Y.; Pan, Q.K.; Miao, Z.H.; Gao, L. An effective multi-start iterated greedy algorithm to minimize makespan for the distributed permutation flowshop scheduling problem with preventive maintenance. Expert Syst. Appl. 2021, 169, 114495.
  58. Park, J.; Sandberg, I.W. Approximation and Radial-Basis-Function Networks. Neural Comput. 1993, 5, 305–316.
  59. Yoo, S.H.; Oh, S.K.; Pedrycz, W. Optimized face recognition algorithm using radial basis function neural networks and its practical applications. Neural Netw. 2015, 69, 111–125.
  60. Huang, G.B.; Saratchandran, P.; Sundararajan, N. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans. Neural Netw. 2005, 16, 57–67.
  61. Majdisova, Z.; Skala, V. Radial basis function approximations: Comparison and applications. Appl. Math. Model. 2017, 51, 728–743.
  62. Kuo, B.C.; Ho, H.H.; Li, C.H.; Hung, C.C.; Taur, J.S. A Kernel-Based Feature Selection Method for SVM With RBF Kernel for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 317–326.
  63. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607.
  64. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program. 1989, 45, 547–566.
  65. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Davis, CA, USA, 27 December 1965–7 January 1966; Volume 1, pp. 281–297.
  66. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A Numerical Evaluation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test Problems. J. Glob. Optim. 2005, 31, 635–672.
  67. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposito, W.R.; Gümüs, Z.H.; Harding, S.; Klepeis, J.; Meyer, C.; Schweiger, C. Handbook of Test Problems in Local and Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999.
  68. Kaelo, P.; Ali, M.M. Integrated crossover rules in real coded genetic algorithms. Eur. J. Oper. Res. 2007, 176, 60–76.
  69. Lennard-Jones, J.E. On the Determination of Molecular Fields. Proc. R. Soc. Lond. A 1924, 106, 463–477.
Figure 1. The steps of the proposed algorithm.
Figure 2. Statistical comparison between the global optimization methods.
Table 1. Experimental settings.
Parameter | Meaning | Value
$k$ | Number of weights | 10
$N_S$ | Start samples | 50
$N_T$ | Number of samples used as starting points | 100
$N_R$ | Number of samples that will be drawn from the RBF network | $10 \times N_T$
$N_C$ | Chromosomes, particles, or agents | 100
$N_G$ | Maximum number of iterations | 200
Table 2. Comparison between the proposed method and the Genetic and PSO methods.
Function | Genetic | PSO | DE | Proposed
BF1 | 7150 | 9030 (0.87) | 5579 | 1051
BF2 | 7504 | 6505 (0.67) | 5598 | 921
BRANIN | 6135 | 6865 (0.93) | 5888 | 460
CAMEL | 6564 | 5162 | 6403 | 778
CIGAR10 | 11,813 | 18,803 | 13,313 | 1896
CM4 | 10,537 | 11,124 | 9018 | 1877 (0.87)
DISCUS10 | 20,208 | 6039 | 7797 | 478
EASOM | 5281 | 2037 | 7917 | 258
ELP10 | 20,337 | 16,731 | 2863 | 2263
EXP4 | 10,537 | 9155 | 5944 | 750
EXP16 | 20,131 | 14,061 | 3653 | 885
EXP64 | 20,140 | 8958 | 3692 | 948
GRIEWANK10 | 20,151 (0.10) | 17,497 (0.03) | 16,469 (0.03) | 2697
POTENTIAL3 | 18,902 | 9936 | 5452 | 1192
POTENTIAL5 | 18,477 | 12,385 | 3972 | 2399
HANSEN | 10,708 | 9104 | 14,016 | 2370 (0.93)
HARTMAN3 | 8481 | 12,971 | 4677 | 642
HARTMAN6 | 17,723 (0.60) | 15,174 (0.57) | 14,372 (0.90) | 883
RASTRIGIN | 6744 | 7639 (0.97) | 6148 | 1408 (0.80)
ROSENBROCK4 | 20,815 (0.63) | 11,526 | 16,763 | 1619
ROSENBROCK8 | 20,597 (0.67) | 16,967 | 16,631 | 2444
SHEKEL5 | 14,456 (0.73) | 15,082 (0.47) | 13,178 | 2333 (0.87)
SHEKEL7 | 16,786 (0.83) | 14,625 (0.40) | 12,050 | 1844 (0.93)
SHEKEL10 | 15,586 (0.80) | 12,628 (0.53) | 13,107 | 2451
SINU4 | 11,908 | 10,659 | 9048 | 802
SINU8 | 20,115 | 13,912 | 16,210 | 1500 (0.97)
TEST2N4 | 13,943 | 12,948 | 10,864 | 878 (0.93)
TEST2N5 | 15,814 | 13,936 (0.90) | 15,259 | 971 (0.77)
TEST2N6 | 18,987 | 15,449 (0.70) | 12,839 | 997 (0.70)
TEST2N7 | 20,035 | 16,020 (0.50) | 8185 (0.97) | 1084 (0.30)
TEST30N3 | 13,029 | 7239 | 4839 | 1061
TEST30N4 | 12,889 | 8051 | 5070 | 854
Average | 472,596 (0.89) | 368,218 (0.86) | 296,814 (0.96) | 42,994 (0.94)
Table 3. Experimental results for the proposed method and for different values of the critical parameter $N_S$ (50, 100, 200). Numbers in the cells represent averages of 30 runs.
Function | $N_S$ = 50 | $N_S$ = 100 | $N_S$ = 200
BF1 | 1051 | 1116 | 1224
BF2 | 921 | 949 | 1058
BRANIN | 460 | 506 | 599
CAMEL | 778 | 676 | 739
CIGAR10 | 1896 | 1934 | 2042
CM4 | 1877 (0.87) | 1859 (0.93) | 1877 (0.90)
DISCUS10 | 478 | 531 | 634
EASOM | 258 | 307 | 450
ELP10 | 2263 | 2339 | 3130
EXP4 | 750 | 778 | 884
EXP16 | 885 | 932 | 1030
EXP64 | 948 | 998 | 1091
GRIEWANK10 | 2697 | 2647 | 2801
POTENTIAL3 | 1192 | 1228 | 1305
POTENTIAL5 | 2399 | 2417 | 2544
HANSEN | 2370 (0.93) | 2602 (0.93) | 2578 (0.97)
HARTMAN3 | 642 | 696 | 798
HARTMAN6 | 883 | 940 | 1038
RASTRIGIN | 1408 (0.80) | 989 (0.83) | 1041
ROSENBROCK4 | 1619 | 1674 | 1751
ROSENBROCK8 | 2444 | 2499 | 2583
SHEKEL5 | 2333 (0.87) | 1267 | 1878 (0.97)
SHEKEL7 | 1844 (0.93) | 1517 (0.93) | 1685 (0.97)
SHEKEL10 | 2451 | 2695 | 1498
SINU4 | 802 | 821 | 901
SINU8 | 1500 (0.97) | 1216 | 1247
TEST2N4 | 878 (0.93) | 934 | 850 (0.97)
TEST2N5 | 971 (0.77) | 941 (0.80) | 993
TEST2N6 | 997 (0.70) | 1087 (0.77) | 1098
TEST2N7 | 1084 (0.30) | 1160 (0.53) | 1313 (0.57)
TEST30N3 | 1061 | 998 | 1320
TEST30N4 | 854 | 830 | 1108
Average | 42,994 (0.94) | 42,083 (0.96) | 45,088 (0.97)
Table 4. Optimizing the Potential problem for different number of atoms.
Atoms | Genetic | PSO | Proposed
3 | 18,902 | 9936 | 1192
4 | 17,806 | 12,560 | 1964
5 | 18,477 | 12,385 | 2399
6 | 19,069 (0.20) | 9683 | 3198
7 | 16,390 (0.33) | 10,533 (0.17) | 3311 (0.97)
8 | 15,924 (0.50) | 8053 (0.50) | 3526
9 | 15,041 (0.27) | 9276 (0.17) | 4338
10 | 14,817 (0.03) | 7548 (0.17) | 5517 (0.87)
11 | 13,885 (0.03) | 6864 (0.13) | 6588 (0.80)
12 | 14,435 (0.17) | 12,182 (0.07) | 7508 (0.83)
13 | 14,457 (0.07) | 10,748 (0.03) | 6717 (0.77)
14 | 13,906 (0.07) | 14,235 (0.13) | 6201 (0.93)
15 | 12,832 (0.10) | 12,980 (0.10) | 7802 (0.90)
Average | 205,941 (0.37) | 137,134 (0.42) | 60,258 (0.93)
