Abstract
Multi-objective optimization problems (MOPs) have been widely studied during the last decades. In this paper, we present a new approach based on chaotic search to solve MOPs. Various Tchebychev scalarization strategies have been investigated. Moreover, a comparison with state-of-the-art algorithms on different well-known bound-constrained benchmarks shows the efficiency and effectiveness of the proposed chaotic search approach.
1. Introduction
Many problems in science and industry are concerned with multi-objective optimization problems (MOPs). Multi-objective optimization seeks to optimize several components of an objective function vector. Contrary to single-objective optimization, the solution of a MOP is not a single solution, but a set of solutions known as Pareto optimal set, which is called Pareto front when it is plotted in the objective space. Any solution of this set is optimal in the sense that no improvement can be made on a component of the objective vector without worsening at least another of its components. The main goal in solving a difficult MOP is to approximate the set of solutions within the Pareto optimal set and, consequently, the Pareto front.
Definition 1
(MOP). A multi-objective optimization problem (MOP) may be defined as:
min F(x) = (f_1(x), f_2(x), ..., f_k(x)), subject to x ∈ X,
where k is the number of objectives, x = (x_1, ..., x_n) is the vector representing the decision variables, and X represents the set of feasible solutions associated with equality and inequality constraints and explicit bounds. F(x) is the vector of objectives to be optimized.
The set of all values satisfying the constraints defines the feasible region X, and any point x ∈ X is a feasible solution. As mentioned before, we seek the Pareto optima.
Definition 2
(Pareto). A point x* ∈ X is Pareto optimal if, for every x ∈ X and I = {1, 2, ..., k}, either f_i(x) ≥ f_i(x*) for all i ∈ I, or there is at least one i ∈ I such that f_i(x) > f_i(x*).
This definition states that x* is Pareto optimal if no feasible vector x exists which would improve some criterion without causing a simultaneous worsening in at least one other criterion.
Definition 3
(Dominance). A vector u = (u_1, ..., u_k) is said to dominate v = (v_1, ..., v_k) (denoted by u ≼ v) if and only if u is partially less than v, i.e., u_i ≤ v_i for all i ∈ {1, ..., k} and u_i < v_i for at least one i.
Definition 4
(Pareto set). For a given MOP (F, X), the Pareto optimal set is defined as P* = {x ∈ X | ¬∃ x' ∈ X : F(x') ≼ F(x)}.
Definition 5
(Pareto front). For a given MOP (F, X) and its Pareto optimal set P*, the Pareto front is defined as PF* = {F(x) | x ∈ P*}.
Definition 6
(Reference point). A reference point z* = (z_1, z_2, ..., z_k) is a vector which defines the aspiration level (or goal) z_i to reach for each objective f_i.
Definition 7
(Nadir point). A point y* = (y_1, ..., y_k) is the nadir point if it maximizes each objective function of F over the Pareto set, i.e., y_i = max{f_i(x) | x ∈ P*}, i = 1, ..., k.
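These definitions translate directly into code. The sketch below is a minimal illustrative implementation (not the paper's) of a dominance test and a non-dominated filter over a finite set of objective vectors, assuming minimization:

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u dominates v (minimization):
    u is no worse in every component and strictly better in at least one."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return bool(np.all(u <= v) and np.any(u < v))

def pareto_filter(F):
    """Indices of the non-dominated rows of F (one objective vector per row),
    i.e., the Pareto-optimal subset of the given sample."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For instance, among the vectors (1, 2), (2, 1), (2, 2) and (3, 3), only the first two are non-dominated.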
The approaches developed for treating optimization problems can be mainly divided into deterministic and stochastic. Deterministic methods (e.g., linear programming, nonlinear programming, and mixed-integer nonlinear programming, etc.) provide a theoretical guarantee of locating the global optimum or at least a good approximation of it whereas the stochastic methods offer a guarantee in probability [1,2,3].
Most of the well-known metaheuristics (e.g., evolutionary algorithms, particle swarm, ant colonies) have been adapted to solve multi-objective problems [4,5] with a growing number of applications [6,7,8].
Multi-objective metaheuristics can be classified in three main categories:
- Scalarization-based approaches: this class of multi-objective metaheuristics contains the approaches which transform a MOP into a single-objective problem or a set of such problems. Among these methods one can find aggregation methods, weighted metrics, the Tchebychev method, goal programming methods, achievement functions, goal attainment methods and the ε-constraint methods [9,10].
- Dominance-based approaches: the dominance-based approaches (also named Pareto approaches) use the concept of dominance and Pareto optimality to guide the search process. Since the beginning of the nineties, interest in tackling MOPs with Pareto approaches has grown steadily. Most Pareto approaches use EMO (Evolutionary Multi-criterion Optimization) algorithms. Population-based metaheuristics seem particularly suitable to solve MOPs, because they deal simultaneously with a set of solutions, which allows finding several members of the Pareto optimal set in a single run of the algorithm. Moreover, they are less sensitive to the shape of the Pareto front (continuity, convexity). The main differences between the various proposed approaches arise in the following search components: fitness assignment, diversity management, and elitism [11].
- Decomposition-based approaches: most decomposition-based algorithms for solving MOPs operate in the objective space. One of the well-known frameworks for MOEAs using decomposition is MOEA/D [12]. It uses scalarization to decompose the MOP into multiple scalar optimization subproblems and solves them simultaneously by evolving a population of candidate solutions. Subproblems are solved using information from the neighbouring subproblems [13].
We are interested in tackling MOPs using chaotic optimization approaches. In our previous work, we proposed an efficient metaheuristic for single-objective optimization called Tornado, which is based on chaotic search. It is a metaheuristic developed to solve large-scale continuous optimization problems. In this paper we extend our chaotic approach Tornado to solve MOPs by using various Tchebychev scalarization approaches.
The paper is organized as follows. Section 2 recalls the main principles of the chaotic search Tornado algorithm. Then the extended chaotic search for multi-objective optimization is detailed in Section 3. In Section 4 the experimental settings and computational results against competing methods are detailed and analyzed. Finally, Section 5 concludes and presents some future works.
2. The Tornado Chaotic Search Algorithm
2.1. Chaotic Optimization Algorithm: Recall
The chaotic optimization algorithm (COA) is a recently proposed metaheuristic [14] which relies on chaotic sequences, instead of random number generators, that are mapped onto the design variables for global optimization. It generally includes two main stages:
- Global search: this search corresponds to the exploration phase. A sequence of chaotic solutions is generated using a chaotic map. This sequence is then mapped into the range of the search space to obtain the decision variables, the objective function is evaluated, and the solution with the best objective value is chosen as the current solution.
- Local search: after the exploration of the search space, the current solution is assumed to be close to the global optimum after a given number of iterations, and it is viewed as the centre around which a small chaotic perturbation is applied; the global optimum is then obtained through local search. The above two steps are iterated until some specified stopping criterion is satisfied.
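The two stages above can be sketched as follows. This is an illustrative toy version only: the logistic map used as the chaotic generator, the shrink factor r and the iteration budgets are assumptions for the sketch, not the settings of any particular COA:

```python
import numpy as np

def logistic_map(z):
    """Classical chaotic generator on (0, 1)."""
    z = 4.0 * z * (1.0 - z)
    # guard against the absorbing endpoints 0 and 1 in floating point
    return np.where((z <= 0.0) | (z >= 1.0), 0.123456789, z)

def coa(f, lb, ub, n_global=2000, n_local=1000, r=0.05):
    """Minimal two-stage chaotic optimization sketch: global chaotic
    exploration, then chaotic perturbation around the best point found."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    z = np.linspace(0.11, 0.87, lb.size)     # distinct chaotic seeds per dimension
    best_x, best_f = None, np.inf
    # Global search: map the chaotic variables onto the whole domain.
    for _ in range(n_global):
        z = logistic_map(z)
        x = lb + z * (ub - lb)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    # Local search: small chaotic moves in a zone of radius r*(ub-lb) around the best point.
    for _ in range(n_local):
        z = logistic_map(z)
        x = np.clip(best_x + r * (ub - lb) * (2.0 * z - 1.0), lb, ub)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

On a smooth test function such as the sphere, the global stage locates the promising region and the local stage refines the incumbent.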
In recent years COA has attracted widespread attention, has been widely investigated, and many chaotic approaches have been proposed in the literature [15,16,17,18,19,20,21].
2.2. Tornado Principle
The Tornado algorithm was proposed as an improvement to the basic COA approach to correct some of its major drawbacks, such as the inability of the method to deal with high-dimensional problems [22,23]. Indeed, the Tornado algorithm introduces new strategies such as symmetrization and levelling of the distribution of chaotic variables in order to break the rigidity inherent in chaotic dynamics.
The proposed Tornado algorithm is composed of three main procedures:
- The chaotic global search (CGS): this procedure explores the entire search space and selects the best point among a symmetrized distribution of chaotic points.
- The chaotic local search (CLS): the CLS carries out a local search in the neighbourhood of the solution initially found by CGS, exploiting the vicinity of that solution. Moreover, by focusing on successive promising solutions, CLS also allows the exploration of promising neighboring regions.
- The chaotic fine search (CFS): It is programmed after the CGS and CLS procedures in order to refine their solutions by adopting a coordinate adaptive zoom strategy to intensify the search around the current optimum.
As a COA approach, the Tornado algorithm relies on a chaotic sequence to generate the chaotic variables used in the search process. In particular, the Hénon map was adopted as the generator of the chaotic sequence in Tornado.
To this end, we consider a sequence of normalized Hénon vectors built by the following linear transformation of the standard Hénon map [24]:
where and
Thus, we obtain . In this paper, the parameters of the Henon map sequence are set as follows:
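The normalized Hénon sequence can be sketched as follows. Since the exact transformation and parameter settings are given by the equations above, this sketch only illustrates the idea: it iterates the Hénon map with its classical coefficients (a = 1.4, b = 0.3, an assumption here) and uses a min-max rescaling of each coordinate to [0, 1] as the normalization:

```python
import numpy as np

def henon_sequence(n, a=1.4, b=0.3, burn_in=100):
    """Generate n points of the Henon map, then rescale each coordinate
    to [0, 1] by a min-max linear transformation (normalized Henon vectors)."""
    x, y = 0.0, 0.0                 # canonical initial condition on the attractor's basin
    pts = []
    for _ in range(burn_in + n):
        x, y = 1.0 - a * x * x + y, b * x
        pts.append((x, y))
    pts = np.array(pts[burn_in:])   # discard the transient
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return (pts - lo) / (hi - lo)   # normalized chaotic vectors in [0, 1]^2
```

Each normalized coordinate can then be mapped affinely onto the range of a design variable.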
The structure of the proposed Tornado approach is given in Algorithm 1.
Algorithm 1: The Tornado algorithm structure.
2.2.1. Chaotic Global Search (CGS)
CGS starts by mapping the chaotic sequence generated by the standard Hénon map variable Z into the ranges of the design variable X by considering the following transformations (Figure 1):
Figure 1.
Selection of chaotic variables for CGS.
Those proposed transformations aim to overcome the inherent rigidity of the chaos dynamics.
- Levelling approach: in order to provide more diversification in the chaotic distribution, CGS proceeds by levelling with several chaotic levels. In each chaotic level, and for each iteration k, three chaotic variables are generated through the following formulas. Note that, for the sake of simplicity, a simplified notation is used henceforth.
- Symmetrization approach: in high-dimensional space, the exploration of all the dimensions is not practical because of combinatorial explosion. To get around this difficulty, we have introduced a new strategy based on a stochastic decomposition of the search space into two vectorial subspaces: a vectorial line and its corresponding hyperplane. By consequence, the search space decomposes accordingly.
The symmetrization approach based on this stochastic decomposition of the design space offers two significant advantages:
- ∘
- Significant reduction of complexity in high-dimensional problems, as if we were dealing with a 2D space with four directions.
- ∘
- The symmetric chaos is more regular and more ergodic than the basic one (Figure 2).
Figure 2. Illustration of symmetrisation approach in 2D.
Thanks to the stochastic decomposition (Equation (7)), CGS generates four symmetric chaotic points using axial symmetries (Figure 3):
where the axial symmetries are defined as follows:
Figure 3.
Illustration of axial symmetries and .
In other words, :
Finally, the best solution among all these generated chaotic points is selected, as illustrated by Figure 4.
Figure 4.
Generation of chaotic variables by the symmetrization approach in CGS.
The code of CGS is detailed in Algorithm 2.
Algorithm 2: Chaotic global search (CGS).
2.2.2. Chaotic Local Search (CLS)
The CLS procedure is designed to refine the search by exploiting the neighborhood of the solution found by the chaotic global search CGS. In fact, the search process is conducted near the current solution, within a local search area focused on it (see Figure 5a), where a random parameter corresponds to the reduction rate and the radius of the search zone is such that:
Figure 5.
Illustration of the CLS mechanism.
The CLS procedure uses the following strategy to produce chaotic variables:
- Like the CGS, the CLS also involves a levelling approach, creating chaotic levels focused on the current solution. The local search process in each chaotic level is limited to a local area focused on the current solution (see Figure 5b), characterized by a radius governed by a parameter that decreases through the levels, which we have formulated in this work. This levelling approach used by the CLS can be interpreted as a progressive zoom on the current solution carried out through the chaotic levels, with a factor indicating the speed of this zoom process (see Figure 5b). Furthermore, by looking for potential solutions relatively far from the current solution, CLS also contributes to the exploration of the decision space. Indeed, once the CGS provides an initial solution, the CLS intensifies the search around this solution through several chaotic layers. After each CGS run, CLS carries out several cycles of local search. This way, the CLS also participates in the exploration of neighboring regions by following the zoom dynamic through the CLS cycles, as shown in Figure 6.
Figure 6. Illustration of the exploration aspect in CLS, with the zoom dynamic produced by many local search cycles.
- Moreover, in each chaotic level, CLS creates two symmetric chaotic variables, defined as follows (Figure 7):
Figure 7. Selection of symmetric chaotic variables in CLS.
By randomly choosing an index, a stochastic decomposition of the search space is built. Based on this, a decomposition of each chaotic variable is applied. Finally, from each chaotic variable, symmetric chaotic points are generated using the polygonal model (Figure 8).
Figure 8. Illustration of the generation of symmetric chaotic points in CLS.
Moreover, if the current solution is close enough to the borders of the search area, the search process risks leaving it and may then yield an infeasible solution located outside the search area.
Indeed, this particularly happens when the local radius exceeds, for at least one component, the distance of that component to the borders (Figure 9), defined as follows:
Figure 9.
Illustration of overflow: .
To prevent this overflow, the local search radius is corrected through the following formula:
This correction ensures that the search remains inside the feasible area. Hence, Equation (17) becomes
Finally, the chaotic local search (CLS) code is detailed in Algorithm 3.
Algorithm 3: Chaotic local search (CLS).
2.2.3. Chaotic Fine Search (CFS)
The proposed CFS procedure aims to speed up the intensification process and to refine the accuracy of the search. Indeed, suppose that the solution X obtained at the end of the search process is close to the global optimum with a given precision. That can be formulated as:
The challenge is then: how can we go beyond this accuracy?
One can observe that the distance to the optimum can be interpreted as a parasitic signal added to the solution, which it suffices to filter in a suitable way to reach the global optimum; equivalently, it corresponds to the distance between the approximate solution and the global optimum. Thus, one solution is to conduct a search in a local area whose radius adapts to this distance component by component.
However, in practice, the global optimum is not known a priori. To overcome this difficulty, and since the resulting solution X is supposed to be close enough to the global optimum as the search process proceeds, one solution is to consider, instead of relation (24), the difference between the current solution X and its fractional parts of increasing order:
where the fractional part of a given order is the closest point to X at the corresponding precision, formalised as follows (see Figure 10):
Figure 10.
Illustration of the 10 power zoom via the successive fractional parts.
Furthermore, we propose to add a stochastic component to the rounding process in order to perturb potential local optima. Indeed, we consider the stochastic round defined by:
where a stochastic perturbation is applied to X alternately through the Tornado cycles. This way, the new formulation of the error of X is given by:
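To make the mechanism concrete, here is an illustrative sketch of the fractional-part error and of a stochastic round; the 50/50 ceil/floor rule used below is a hypothetical stand-in for the paper's exact perturbation:

```python
import numpy as np

def fractional_error(x, p):
    """Component-wise distance between x and its closest point at precision
    10^-p; such errors drive the coordinate-adaptive radius of the fine search."""
    x = np.asarray(x, float)
    xp = np.round(x * 10.0**p) / 10.0**p   # fractional part of order p
    return np.abs(x - xp)

def stochastic_round(x, p, rng):
    """Round at precision 10^-p, choosing ceil or floor at random in order to
    perturb potential local optima (hypothetical stand-in for the paper's rule)."""
    x = np.asarray(x, float)
    scaled = x * 10.0**p
    up = rng.random(x.shape) < 0.5         # random up/down choice per coordinate
    return np.where(up, np.ceil(scaled), np.floor(scaled)) / 10.0**p
```

For example, at order p = 1 the point 1.234 is at distance 0.034 from its rounding 1.2, and the stochastic round returns either 1.2 or 1.3.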
The structure of the chaotic fine search CFS is similar to that of the chaotic local search CLS. Indeed, it proceeds by a levelling approach creating levels, except that the local area of each level is defined by a radius based on the error above, given by:
This way, the local area search in CFS is carried out in a narrow domain that allows a focus on the current solution adapted coordinate by coordinate, unlike the uniform local search in CLS, as illustrated in Figure 11. The modified radius is formulated as follows:
where and .
Figure 11.
Illustration of the coordinate-adaptive local search in CFS.
The design of the radius allows zooming at an exponential rate of one decimal per level. Indeed, we have:
As a consequence, the CFS provides an ultra-fast exploitation of the neighborhood of the current solution and allows, in principle, the refinement of the optimum with good precision.
The Chaotic Fine Search (CFS) is described in Algorithm 4 and the Tornado algorithm is detailed in Algorithm 5:
Algorithm 4: Chaotic fine search (CFS).
Algorithm 5: Tornado pseudo-code.
3. Scalarization-Based Chaotic Search
3.1. Tchebychev Scalarization Approaches
The aggregation (or weighted) method is one of the most popular scalarization methods for the generation of Pareto optimal solutions. It consists in using an aggregation function to transform a MOP into a single-objective problem by combining the various objective functions into a single objective function f, generally in a linear way.
The first proposed scalarization approach uses the Tchebychev function [10,25]. It introduces the concept of an ideal or reference point as follows:
where z* = (z_1, ..., z_k) is the reference point, and w = (w_1, ..., w_k) is the weight vector.
There have been numerous studies of decomposition approaches using different types of reference points to provide evolutionary search directions. According to the position of the reference point relative to the true PF in the objective space, we consider here three Tchebychev approaches:
- The standard Tchebychev approach (TS), which considers the utopian point as the reference point.
- The modified Tchebychev variant (TM), which considers multiple utopian reference points instead of just one reference point [26] (see Figure 12).
Figure 12. Illustration of the Tchebychev decomposition according to the choice of reference points.
- The augmented Tchebychev variant (AT), which is defined by the augmented Tchebychev function [27]:
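The variants share the same scalarizing core. Below is a minimal sketch of the standard and augmented Tchebychev functions; the value rho = 1e-3 is an assumed, typical augmentation coefficient, not necessarily the one used in the paper:

```python
import numpy as np

def tchebychev(f, w, z):
    """Standard Tchebychev scalarization: max_i w_i * |f_i - z_i|,
    with reference (utopian) point z and weight vector w."""
    f, w, z = np.asarray(f, float), np.asarray(w, float), np.asarray(z, float)
    return float(np.max(w * np.abs(f - z)))

def augmented_tchebychev(f, w, z, rho=1e-3):
    """Augmented variant: adds a small weighted-sum term to the max term,
    which helps avoid weakly Pareto-optimal solutions."""
    f, w, z = np.asarray(f, float), np.asarray(w, float), np.asarray(z, float)
    d = w * np.abs(f - z)
    return float(np.max(d) + rho * np.sum(d))
```

For example, with f = (2, 3), unit weights and the origin as reference point, the standard function returns 3 and the augmented one 3 + rho * 5.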
3.2. X-Tornado Algorithm
By using N different weight vectors, we solve N different problems with the chaotic Tornado approach, each generating one solution composing the final Pareto front (PF). One of the downsides of using scalarization methods is that the number of solutions composing the PF found by the algorithm will be, at most, the same as the number of different weight vectors N. In certain cases, if two or more weight vectors are too close, the algorithm might find the same solution several times.
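The overall scheme can be sketched as follows. Here a crude random search stands in for the Tornado single-objective solver, and the uniform bi-objective weight sweep is an assumption for illustration:

```python
import numpy as np

def scalarized_front(F, n_weights, bounds, z_ref, n_eval=2000, seed=1):
    """Solve one standard-Tchebychev subproblem per weight vector; each
    subproblem's solution contributes one point to the approximated front.
    A crude random search replaces the Tornado solver in this sketch."""
    rng = np.random.default_rng(seed)
    lb = np.array([b[0] for b in bounds], float)
    ub = np.array([b[1] for b in bounds], float)
    front = []
    for t in np.linspace(0.0, 1.0, n_weights):
        w = np.array([t, 1.0 - t])          # one weight vector per subproblem
        best_x, best_g = None, np.inf
        for _ in range(n_eval):
            x = lb + rng.random(lb.size) * (ub - lb)
            g = np.max(w * np.abs(np.asarray(F(x)) - z_ref))
            if g < best_g:
                best_x, best_g = x, g
        front.append(F(best_x))             # one front point per weight vector
    return np.array(front)
```

Since each subproblem is independent, the N resolutions can be run in parallel, which is the property X-Tornado exploits.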
4. Computational Experiments
The proposed X-Tornado algorithm is implemented in Matlab. The computing platform consists of an Intel(R) Core(TM) i3-4005U CPU at 1.70 GHz with 4 GB RAM.
4.1. Test Problems
In order to evaluate the performance of the proposed X-Tornado algorithm, 14 test problems are selected from the literature. These functions test the proposed algorithm's performance on different characteristics of the Pareto front: convexity, concavity, discontinuity, non-uniformity, and multimodality. For instance, the test problems KUR and ZDT3 have disconnected Pareto fronts; ZDT4 has many locally optimal Pareto solutions, whereas ZDT6 has a non-convex Pareto optimal front with a low density of solutions near the front. The test problems and their properties are shown in Table 1.
Table 1.
Benchmark Problems used in our experiments.
4.2. Parameters Setting
In X-Tornado, the parameters were set as follows:
- The number of CGS chaotic levels : .
- The number of CLS chaotic levels : .
- The number of CFS chaotic levels : .
- The number of CLS-CFS per cycle : .
- The number of subproblems resolved with the Tchebychev decomposition approach : .
4.3. Performances Measures
Since convergence to the Pareto optimal front and maintenance of a diverse set of solutions are two different goals of multi-objective optimization, two performance measures were adopted in this study: the generational distance (GD) to evaluate the convergence, and the spacing (S) to evaluate the diversity and cardinality.
- The convergence metric GD measures the extent of convergence to the true Pareto front. It is defined as follows, where N is the number of solutions found and the distance used is the Euclidean distance between each solution and its nearest point in the true Pareto front. The lower the GD value, the better the convergence of the obtained Pareto front to the true one.
- The Spacing metric S indicates how the solutions of an obtained Pareto front are spaced with respect to each other. It is defined as:
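For reference, both indicators can be computed as in the sketch below. GD is taken here as the mean nearest-point distance, matching the text (some papers use a root-mean-square variant), and the spacing metric uses the usual L1 nearest-neighbour distances, an assumption of this sketch:

```python
import numpy as np

def generational_distance(front, true_front):
    """GD: mean Euclidean distance from each obtained point
    to its nearest point on the true Pareto front."""
    front, true_front = np.asarray(front, float), np.asarray(true_front, float)
    d = np.array([np.min(np.linalg.norm(true_front - p, axis=1)) for p in front])
    return float(d.mean())

def spacing(front):
    """S: standard deviation of the nearest-neighbour L1 distances within
    the obtained front; 0 means the points are perfectly evenly spaced."""
    front = np.asarray(front, float)
    n = len(front)
    d = np.array([min(np.linalg.norm(front[i] - front[j], 1)
                      for j in range(n) if j != i) for i in range(n)])
    return float(np.sqrt(np.sum((d - d.mean()) ** 2) / (n - 1)))
```

A front that coincides with the true front gives GD = 0, and a perfectly evenly spaced front gives S = 0.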
4.4. Impact of the Tchebychev Scalarization Strategies
By adopting the three Tchebychev strategies, we have tested three X-Tornado variants as follows:
- X-Tornado-TS: denotes X-Tornado with the standard Tchebychev approach.
- X-Tornado-TM: denotes X-Tornado with the Tchebychev variant involving multiple utopian reference points instead of just one.
- X-Tornado-ATS: denotes X-Tornado with the augmented Tchebychev approach.
The computational results in terms of GD and S for 300,000 function evaluations are shown in Table 2 and Table 3, respectively, for the three variants of X-Tornado.
Table 2.
The generational distance metric results for the three versions of X-Tornado on the 14 selected problems.
Table 3.
Results of the Spacing metric for the three versions of X-Tornado on the 14 selected problems.
The analysis of the results obtained for the 14 selected problems shows that X-Tornado-TM achieves the best performance in terms of the two considered metrics GD and S. Indeed, in terms of convergence, X-Tornado-TM wins the competition on 6 problems, X-Tornado-TS wins on 5 problems, whereas X-Tornado-ATS wins on only 3 problems. In terms of the spacing metric, X-Tornado-TM clearly achieves the best performance, whereas X-Tornado-TS and X-Tornado-ATS each win on only three problems.
Based on this performance analysis, the X-Tornado-TM variant seems to be the most promising one; therefore, in the next section we compare its performance against some state-of-the-art evolutionary algorithms. Moreover, it will be denoted as X-Tornado for the sake of simplicity.
4.5. Comparison with Some State-of-the-Art Evolutionary Algorithms
In this section, we choose three well-known multiobjective evolutionary algorithms: NSGA-II, PESA-II, and MOEA/D (MATLAB implementations obtained from the Yarpiz library available at www.yarpiz.com). The Tchebychev function was selected as the scalarizing function in MOEA/D, and the neighborhood size was specified as a fraction of the population size. The population size in the three algorithms was set to 100, and the size of the archive was set to 50 in PESA-II and MOEA/D.
Besides the 14 bi-objective problems considered in the previous section, four additional three-objective problems were also tested: DTLZ1, DTLZ2, DTLZ3, and DTLZ4. Note, however, that as the TM decomposition is not suitable for the three-objective case, those problems were tested with the X-Tornado-TS variant.
The computational results using the performance indicators GD and S for 300,000 function evaluations are shown in Table 4 and Table 5, respectively, for all four algorithms: NSGA-II, PESA-II, MOEA/D, and X-Tornado. The mean and variance of the simulation results over 10 independent runs are reported for each algorithm. The mean of a metric reveals the average evolutionary performance and serves for the comparison between algorithms, while its variance indicates the consistency of an algorithm. The best performance is represented in bold font.
Table 4.
The generational distance metric comparison results for the four algorithms on the 14 selected problems.
Table 5.
The Spacing metric comparison result for the four algorithms on the 14 selected problems.
By analysing the results in Table 4, it is clear that the X-Tornado approach has the best performance in terms of convergence to the front. Indeed, the proposed X-Tornado obtains the lowest metric value for twelve out of the 18 test problems, with a small standard deviation on almost all problems. A low metric value on a problem indicates an accurate approximation of the Pareto front. That is to say, the proposed algorithm shows good accuracy and stability on these problems.
In Table 5, X-Tornado outperforms all other algorithms on the mean of the spacing metric on almost all test problems, except ZDT3, KUR, MSC, and DTLZ4.
Table 6 and Table 7 show the comparison results when doubling the archive length used by the three metaheuristics compared with our X-Tornado method. As we can see, X-Tornado still achieves slightly better performance among the compared methods. However, the most interesting point that motivated this additional simulation is to demonstrate the superiority of our method, in terms of execution time, over the classical approach adopting an archiving mechanism. In this kind of metaheuristic, an update of the archive of non-dominated points is carried out after each cycle of the algorithm. Therefore, if all the compared algorithms adopted an archiving mechanism (which is often the case), the corresponding cost would usually not be considered, since it has no noticeable effect on the comparison. In our case, however, unlike the compared methods, X-Tornado does not use an archiving mechanism, whereas doubling the archive considerably increases the execution time of the other methods, as can be observed in Table 8.
Table 6.
The generational distance metric comparison results for the four algorithms on the 8 selected problems.
Table 7.
The Spacing metric comparison results for the four algorithms on the 8 selected problems.
Table 8.
Mean time results for the four algorithms on the selected problems.
Moreover, the distribution of the non-dominated points automatically reflects the regularity of the Tchebychev decomposition; unlike the three other algorithms in the comparison, a filtering mechanism such as crowding is no longer required in our X-Tornado approach. The consequence of this important advantage is that, unlike the other methods, the capture of the front by the X-Tornado method is more continuous and therefore more precise, as illustrated by Figure 13, Figure 14 and Figure 15. In addition, the X-Tornado method is much faster (at least 5 times less expensive in CPU time) than the other methods, as we can observe in Table 9.
Figure 13.
Pareto Front captured by X-Tornado for problems.
Figure 14.
Pareto Front captured by X-Tornado for problems.
Figure 15.
Pareto Front captured by X-Tornado for problems.
Table 9.
Mean time results for the four algorithms on the 14 selected problems.
Application to Multi-Objective Structural Optimization Problem
In this section, we consider two applications. The first one is the four-bar truss problem proposed by Stadler in [28]. The goal is to find the optimal truss structure while simultaneously minimizing the total mass and the static displacement at point C. These two criteria are in conflict, since minimizing the mass of a structure tends to increase its displacement, so the best solution is a trade-off between the two criteria. For this, we consider two cost functions to minimize: the total volume and the displacement (cm). The four-bar truss problem is shown in Figure 16.
Figure 16.
Four-bar Truss Problem (TR4).
The second application is the two-bar truss structure subjected to random Gaussian loading [29]. The two-bar problem is a bi-objective problem where the areas of the two bars are the decision variables of the optimization. The left end of the bars is fixed while the other end is subjected to a mean plus a fluctuating load. Two objectives are considered for the optimization problem: the mean and the standard deviation of the vertical displacement. Figure 17 illustrates the two-bar problem; further technical details can be found in [29].
Figure 17.
Two-bar Truss Problem (TR2).
Table 10 shows the comparison results for the four methods in terms of the GD and spacing metrics for 300,000 FEs. By analysing these results, we observe that X-Tornado performs as well as NSGA-II in terms of the convergence and spacing metrics and outperforms PESA-II and MOEA/D on these two problems. Moreover, X-Tornado is much faster than the three other algorithms.
Table 10.
Comparison Results obtained by the four methods for TR4 and TR2 problems.
In addition, Figure 18 and Figure 19 illustrate the advantages of the X-Tornado PF in terms of convergence and regularity over the other algorithms.
Figure 18.
Obtained Pareto fronts by X-Tornado, NSGA-II, MOEA/D and PESA-II for the problem TR2.
Figure 19.
Obtained Pareto fronts by X-Tornado, NSGA-II, MOEA/D and PESA-II for the problem TR4.
5. Conclusions and Future Work
In this paper, we have developed the X-Tornado algorithm, a multi-objective optimization approach based on chaotic search.
The proposed X-Tornado algorithm was tested on various benchmark problems with different features and complexity levels. The results obtained amply demonstrate that the approach is efficient in converging to the true Pareto fronts and in finding a diverse set of solutions along the Pareto front. Our approach largely outperforms some popular evolutionary algorithms such as MOEA/D, NSGA-II, and PESA-II in terms of the convergence, cardinality and diversity of the obtained Pareto fronts. The X-Tornado algorithm is characterized by its fast and accurate convergence, and by the parallel, independent decomposition of the objective space.
We are investigating new adaptive mechanisms in order to extend X-Tornado to the field of challenging constrained problems involving multi-extremal objectives [30,31].
A massively parallel implementation on heterogeneous architectures composed of multi-cores and GPUs is under development. The proposed algorithms obviously have to be improved to tackle many-objective optimization problems. We will also investigate the adaptation of the algorithms to large-scale MOPs, such as the hyperparameter optimization of deep neural networks.
Author Contributions
N.A. conceived the concept and performed the research. R.E. contributed to the design of the experiments and revised the paper. T.E.-g. provided instructions and contributed to the discussion and analysis of the results. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Pinter, J.D. Global Optimization in Action; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1979. [Google Scholar]
- Strongin, R.G.; Sergeyev, Y.D. Global Optimization with Non-convex Constraints: Sequential and Parallel Algorithms; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000. [Google Scholar]
- Paulavicius, R.; Zilinskas, J. Simplicial Global Optimization; Springer: New York, NY, USA, 2014. [Google Scholar]
- Talbi, E.G. Metaheuristics: From Design to Implementation; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
- Coello Coello, C.A. Multi-objective optimization. In Handbook of Heuristics; Martí, R., Pardalos, P., Resende, M., Eds.; Springer International Publishing AG: Cham, Switzerland, 2018; pp. 177–204. [Google Scholar]
- Diehl, M.; Glineur, F.; Jarlebring, E.; Michiels, W. Recent Advances in Optimization and Its Applications in Engineering; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Battaglia, G.; Di Matteo, A.; Micale, G.; Pirrotta, A. Vibration-based identification of mechanical properties of orthotropic arbitrarily shaped plates: Numerical and experimental assessment. Compos. Part Eng. 2018, 150, 212–225. [Google Scholar] [CrossRef]
- Di Matteo, A.; Masnata, C.; Pirrotta, A. Simplified analytical solution for the optimal design of Tuned Mass Damper Inerter for base isolated structures. Mech. Syst. Signal Process. 2019, 134, 106337. [Google Scholar] [CrossRef]
- Jaimes, A.L.; Martınez, S.Z.; Coello Coello, A.C. An introduction to multiobjective optimization techniques. In Optimization in Polymer Processing; Nova Science Publishers: New York, NY, USA, 2011; pp. 29–58. [Google Scholar]
- Miettinen, K.; Ruiz, F.; Wierzbicki, A. Introduction to Multiobjective Optimization: Interactive Approaches. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Springer: Berlin/Heidelberg, Germany, 2008; pp. 27–57. [Google Scholar]
- Liefooghe, A.; Basseur, M.; Jourdan, L.; Talbi, E.G. ParadisEO-MOEO: A Framework for Evolutionary Multi-objective Optimization. In International Conference on Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 386–400. [Google Scholar]
- Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
- Marquet, G.; Derbel, B.; Liefooghe, A.; Talbi, E.G. Shake them all!: Rethinking selection and replacement in MOEA/D. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2014; pp. 641–651. [Google Scholar]
- Li, B.; Jiang, W. Chaos optimization method and its application. J. Control. Theory Appl. 1997, 14, 613–615. [Google Scholar]
- Wu, L.; Zuo, C.; Zhang, H.; Liu, Z.H. Bimodal fruit fly optimization algorithm based on cloud model learning. J. Soft Comput. 2015, 21, 1877–1893. [Google Scholar] [CrossRef]
- Yuan, X.; Dai, X.; Zhao, J.; He, Q. On a novel multi-swarm fruit fly optimization algorithm and its application. J. Appl. Math. Comput. 2014, 233, 260–271. [Google Scholar] [CrossRef]
- Zhang, Y.; Wu, L.; Wang, S. UCAV Path Planning by Fitness-Scaling Adaptive Chaotic Particle Swarm Optimization. Math. Probl. Eng. 2013, 2013, 147–170. [Google Scholar]
- Shengsong, L.; Min, W.; Zhijian, H. Hybrid Algorithm of Chaos Optimization and SLP for Optimal Power Flow Problems with Multimodal Characteristic. IEE Proc. Gener. Transm. Distrib. 2003, 150, 543–547. [Google Scholar] [CrossRef]
- Tavazoei, M.S.; Haeri, M. An optimization algorithm based on chaotic behavior and fractal nature. J. Comput. Appl. Math. 2007, 206, 1070–1081. [Google Scholar] [CrossRef]
- Hamaizia, T.; Lozi, R. Improving Chaotic Optimization Algorithm using a new global locally averaged strategy. In Emergent Properties in Natural and Artificial Complex Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; pp. 17–20. [Google Scholar]
- Hamaizia, T.; Lozi, R.; Hamri, N. Fast chaotic optimization algorithm based on locally averaged strategy and multifold chaotic attractor. J. Appl. Math. Comput. 2012, 219, 188–196. [Google Scholar] [CrossRef]
- Yang, D.; Li, G.; Cheng, G. On the efficiency of chaos optimization algorithms for global optimization. Chaos Solitons Fractals 2007, 34, 1366–1375. [Google Scholar] [CrossRef]
- Li, B.; Jiang, W. Optimizing complex function by chaos search. J. Cybern. Syst. 1998, 29, 409–419. [Google Scholar]
- Al-Dhahir, A. The Hénon Map; Faculty of Applied Mathematics, University of Twente: Enschede, The Netherlands, 1996. [Google Scholar]
- Ma, X.L.; Zhang, Q.F.; Tian, G.; Yang, J.; Zhu, Z. On Tchebycheff Decomposition Approaches for Multiobjective Evolutionary Optimization. IEEE Trans. Evol. Comput. 2018, 22, 226–244. [Google Scholar] [CrossRef]
- Lin, W.; Lin, Q.; Zhu, Z.; Li, J.; Chen, J.; Ming, Z. Evolutionary Search with Multiple Utopian Reference Points in Decomposition-Based Multiobjective Optimization. Complexity 2019, 2019, 1–22. [Google Scholar] [CrossRef]
- Steuer, R.E.; Choo, E. An interactive weighted Tchebycheff procedure for multiple objective programming. J. Math. Program. 1983, 26, 326–344. [Google Scholar] [CrossRef]
- Stadler, W.; Duer, J. Multicriteria optimization in engineering: A tutorial and survey. In Structural Optimization: Status and Future; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 1992; pp. 209–249. [Google Scholar]
- Zidani, H.; Pagnacco, E.; Sampaio, R.; Ellaia, R.; Souza de Cursi, J.E. Multi-objective optimization by a new hybridized method: Applications to random mechanical systems. Eng. Optim. 2013, 45, 917–939. [Google Scholar] [CrossRef]
- Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Software for Generation of Classes of Test Functions with Known Local and Global Minima for Global Optimization. ACM Trans. Math. Softw. 2003, 29, 469–480. [Google Scholar]
- Grishagin, V.A.; Israfilov, R.A. Multidimensional Constrained Global Optimization in Domains with Computable Boundaries. In Proceedings of the CEUR Workshop Proceedings, Turin, Italy, 28–29 September 2015; pp. 75–84. [Google Scholar]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).