Reference Set Generator: A Method for Pareto Front Approximation and Reference Set Generation
Abstract
1. Introduction
2. Background and Related Work
- (a) Let $v, w \in \mathbb{R}^k$. Then, the vector $v$ is less than $w$ ($v <_p w$) if $v_i < w_i$ for all $i \in \{1, \dots, k\}$. The relation $\leq_p$ is defined analogously.
- (b) A point $y$ is dominated by a point $x$ ($x \prec y$) with respect to (MOP) if $F(x) \leq_p F(y)$ and $F(x) \neq F(y)$.
- (c) A point $x$ is called a Pareto point or Pareto optimal if there is no feasible point $y$ that dominates $x$.
- (d) The set of Pareto optimal solutions is called the Pareto set.
- (e) The image of the Pareto set under $F$ is called the Pareto front.
3. Motivation
4. Reference Set Generator (RSG)
4.1. General Idea
Algorithm 1 Reference Set Generator (RSG)
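Since the algorithm box itself did not survive extraction, the following is a minimal sketch of the overall RSG pipeline as described in Sections 4.2-4.5. The helper names `detect_components`, `fill_component`, and `select_reference_set` are placeholders for Algorithms 2-5 and the k-means reduction; they are sketched further below and are not the authors' code.

```python
import numpy as np

def rsg(A, N):
    """Sketch of the RSG pipeline.
    A: starting PF approximation, shape (l, k); N: desired reference set size.
    The three helpers stand in for Algorithms 2-5 and the reduction step."""
    components = detect_components(A)                        # Section 4.2 (DBSCAN-based)
    F = np.vstack([fill_component(C) for C in components])   # Section 4.3 (filling)
    Z = select_reference_set(F, N)                           # Section 4.4 (k-means reduction)
    return Z
```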
4.2. Component Detection
- If it is known that the PF is connected, this step can simply be omitted. Note that the main application of RSG is the approximation of Pareto fronts of known benchmark functions, where the shapes of the PFs are at least roughly known.
- If the range of the PF is roughly known or normalized and the PF is disconnected (or at least suspected to be), r can be set to 10% of the range and the minimum cluster size $minPts$ to 10% of ℓ when few components are expected. Alternatively, r can be set to a smaller value (e.g., 2–3% of the range) and $minPts$ to 1–2% of ℓ when several components are anticipated.
- No information about the PF is known a priori. To make the component detection process “parameter-free”, we perform a small grid search to find suitable values of $minPts$ and r based on the weakest link function defined in [56]. Based on our experiments, we suggest fixed settings of $minPts$ and r (one pair of values for bi-objective problems and another pair otherwise), expressed in terms of ℓ and $\bar{d}$, where $\bar{d}$ is the average pairwise distance between all points of the set.
Algorithm 2 Component Detection
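As a rough illustration of the DBSCAN-based component detection (not the authors' exact Algorithm 2), the following sketch clusters the points of the current approximation with scikit-learn's DBSCAN. The parameters r and min_pts are assumed to be chosen as discussed above; points labelled as noise are simply dropped here, which is a simplification.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_components(A, r, min_pts):
    """Split the current PF approximation A (shape (l, k)) into connected
    components using DBSCAN with radius r and minimum cluster size min_pts.
    Noise points (label -1) are dropped in this sketch."""
    labels = DBSCAN(eps=r, min_samples=min_pts).fit_predict(A)
    return [A[labels == c] for c in sorted(set(labels)) if c != -1]

# Example of the heuristic from the text (normalized PF, few components expected):
# r = 10% of the range, min_pts = 10% of the set size.
# A = np.random.rand(500, 2)
# components = detect_components(A, r=0.1, min_pts=max(2, len(A) // 10))
```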
4.3. Filling
- For $k = 2$, we sort the points of the component in increasing order of $f_1$, i.e., the first objective. Then, we consider the piecewise linear curve formed by the segments between the first and second point, the second and third point, and so on; the total length $L$ of this curve is the sum of the lengths of these segments. To perform the filling, we arrange the desired points along the curve such that the first point coincides with the first (sorted) element and the subsequent points are distributed equidistantly in arc length along the curve. See Algorithm 3 for details; a minimal code sketch follows the algorithm below.
- The filling process for $k \geq 3$ consists of several intermediate steps, which are described below; see Algorithm 4 for a general outline of the procedure. First, to better represent the set (particularly for the filling step), we triangulate it in $(k-1)$-dimensional space. This is done because the PF of a continuous MOP forms a set whose dimension is at most $k-1$. To achieve this, we compute a “normal vector” using Equation (8) and project the set onto the hyperplane orthogonal to this vector, obtaining the projected set. After this, we compute the Delaunay triangulation [57] of the projected set, which provides a triangulation that can also be used in the original k-dimensional space. For some PFs, the triangulation may include triangles (or simplices for $k > 3$) that extend beyond the front (Figure 12d), so a removal strategy is applied to eliminate these triangles and obtain the final triangulation T. Finally, each triangle is filled uniformly at random with a number of points proportional to its area (or volume for $k > 3$), resulting in the filled set F of the desired size. We now describe each step in more detail:
  - Computing the “normal vector”: since the front itself is not known, we compute the normal direction orthogonal to the convex hull defined by the minimal elements of the current approximation; see Equation (8) for the precise construction.
  - Projection: we use the normal vector as the first axis of a new orthonormal coordinate system. The remaining basis vectors of this system span a hyperplane orthogonal to the normal direction, and the projection of the points onto this hyperplane is obtained by first expressing the points in the new coordinate system and then removing the first coordinate, which yields the $(k-1)$-dimensional projected set.
  - Delaunay triangulation: we compute the Delaunay triangulation of the projected set. This returns a list of index tuples, each containing the indices of the points that form one triangle (or simplex for $k > 3$). Because this list consists only of indices, it is independent of the dimension and can therefore also serve as a triangulation of the original k-dimensional set. In the following, we refer to the number of triangles obtained and, for each triangle i, to the indices and coordinates of its vertices.
  - Triangle cleaning: we identify three types of unwanted triangles: those with large sides, those with large areas, and those for which the matrix containing the coordinates of the vertices has a large condition number. Which type of cleaning is applied depends on the problem; however, the procedure is the same in all cases and is outlined in Algorithm 5. First, the selected property (area, largest side, or condition number) is computed for all triangles. Next, all triangles whose property value exceeds a given threshold are removed.
  - Triangle filling: for each remaining triangle, we generate points uniformly at random inside it, following the procedure described in [58]. The number of points assigned to a triangle is proportional to its share of the total area A of the triangulation (or volume for $k > 3$).
Algorithm 3 Filling ($k = 2$ Objectives)
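The following sketch illustrates the equidistant filling along the piecewise linear curve for $k = 2$ described above; it is a minimal re-implementation of the idea, not the authors' Algorithm 3.

```python
import numpy as np

def fill_2d(C, m):
    """Place m points equidistantly (in arc length) along the piecewise linear
    curve through the points of component C (shape (l, 2)), sorted by f1."""
    P = C[np.argsort(C[:, 0])]                         # sort by the first objective
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)   # lengths of the segments
    s = np.concatenate(([0.0], np.cumsum(seg)))        # arc length at each vertex
    targets = np.linspace(0.0, s[-1], m)               # equidistant arc lengths
    # interpolate each objective separately against the arc length parameter
    return np.column_stack([np.interp(targets, s, P[:, j]) for j in range(2)])

# F = fill_2d(np.random.rand(50, 2), m=200)
```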
Algorithm 4 Filling ($k \geq 3$ Objectives)
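For $k \geq 3$, the projection and triangulation steps can be sketched as follows. Since Equation (8) for the normal vector is not reproduced here, the sketch substitutes a least-squares hyperplane normal obtained via SVD; the projected points are then triangulated with SciPy's Delaunay routine, whose index list can be reused for the original k-dimensional points.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.spatial import Delaunay

def project_and_triangulate(C):
    """Project component C (shape (l, k)) onto the hyperplane orthogonal to an
    estimated normal vector and compute the Delaunay triangulation of the
    projection. The SVD-based normal stands in for Equation (8) of the paper."""
    centered = C - C.mean(axis=0)
    eta = np.linalg.svd(centered, full_matrices=False)[2][-1]  # direction of least variance
    B = null_space(eta.reshape(1, -1))     # (k, k-1) orthonormal basis of the hyperplane
    C_proj = centered @ B                  # (l, k-1) projected coordinates
    tri = Delaunay(C_proj)                 # simplices are index tuples, valid for C as well
    return eta, C_proj, tri.simplices
```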
Algorithm 5 Triangle Cleaning
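The cleaning and filling steps (Algorithm 5 and the final step of Algorithm 4) can be sketched as follows. The cleaning property chosen here is the longest side, and points are drawn uniformly inside each simplex via uniform barycentric (Dirichlet) weights, which is one standard way to realize the uniform sampling of [58]; thresholds and counts are illustrative.

```python
import math
import numpy as np

def clean_triangles(C, simplices, threshold):
    """Keep only simplices whose longest edge (in the original k-dim space)
    does not exceed the given threshold."""
    kept = []
    for idx in simplices:
        V = C[idx]
        edges = [np.linalg.norm(V[a] - V[b])
                 for a in range(len(V)) for b in range(a + 1, len(V))]
        if max(edges) <= threshold:
            kept.append(idx)
    return np.array(kept)

def fill_triangles(C, simplices, n_fill, rng=np.random.default_rng()):
    """Fill the kept simplices with roughly n_fill points in total,
    proportionally to their (hyper)volume."""
    d = simplices.shape[1] - 1                         # simplex dimension (k - 1)
    vols = []
    for idx in simplices:
        E = (C[idx[1:]] - C[idx[0]]).T                 # (k, d) edge matrix
        vols.append(math.sqrt(max(np.linalg.det(E.T @ E), 0.0)) / math.factorial(d))
    vols = np.asarray(vols)
    counts = np.ceil(n_fill * vols / vols.sum()).astype(int)
    points = []
    for idx, c in zip(simplices, counts):
        w = rng.dirichlet(np.ones(d + 1), size=c)      # uniform barycentric weights
        points.append(w @ C[idx])                      # map to the simplex vertices
    return np.vstack(points)
```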
4.4. Reduction
4.5. Obtaining the Starting PF Approximation
- Sampling: In the easiest case, either the PS or the PF is given analytically, which is indeed the case for several benchmark problems. If the sampling can be performed directly in objective space (e.g., for linear fronts), the remaining steps of the RSG may not be needed to further improve the quality of the solution set. If the sampling is performed in decision variable space, the elements of the resulting image are not necessarily uniformly distributed along the Pareto front, as discussed above. In that case, however, the filling and reduction steps may help to remove biases. We have used sampling, e.g., for the test problems DTLZ1, CONV3, and CONV3-4; a small sampling sketch is given after this list.
- Continuation: If neither the PS nor the PF has an “easy” shape, one alternative is to use multi-objective continuation methods, possibly in combination with several different starting points and a non-dominance test. In particular, we have used the Pareto Tracer (PT, [61,62]), a state-of-the-art continuation method that can treat problems of, in principle, any dimension (both in n and k), can handle general constraints, and can even detect local degeneration of the solution set. Continuation is advisable if the PS/PF consists of relatively few connected components and if the gradient information can at least be approximated. We have used PT, e.g., for the test problems WFG2, DTLZ5, and DTLZ6.
- Archiving: The result of an MOEA or any other MOP solver can, of course, be taken. This could be the final population or archive, obtained by merging several populations from one or several runs [52], or by using external (unbounded) archives [43]. Note that this includes taking a reference set from a given repository. Archiving is advisable if none of the above techniques can be applied successfully. We have used archiving, e.g., for the test problems DTLZ1-4, DTLZ7, ZDT1-6, CONV3, CONV3-4, and CONV4-2F.
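As an example of direct sampling in objective space, the Pareto front of DTLZ1 is the linear simplex with $\sum_i f_i = 0.5$ and $f_i \geq 0$, which can be sampled uniformly as sketched below. This is an illustration of the sampling option under that known front shape, not the authors' exact procedure.

```python
import numpy as np

def sample_dtlz1_front(n, k, rng=np.random.default_rng()):
    """Draw n points uniformly from the DTLZ1 Pareto front in k objectives,
    i.e., from the simplex f_1 + ... + f_k = 0.5 with f_i >= 0."""
    return 0.5 * rng.dirichlet(np.ones(k), size=n)

# A = sample_dtlz1_front(1000, k=3)   # starting set for the remaining RSG steps
```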
4.6. Complexity Analysis
- Component Detection: The time complexity is $\mathcal{O}(\ell^2 + n_g\,(T_{\mathrm{DBSCAN}} + T_{\mathrm{WL}}))$, which accounts for the computation of the average pairwise distance ($\mathcal{O}(\ell^2)$) plus the size of the grid search ($n_g$) multiplied by the sum of the complexities of DBSCAN ($T_{\mathrm{DBSCAN}}$) and the WeakestLink computation ($T_{\mathrm{WL}}$). Here, ℓ is the size of the starting approximation, and $n_g$ is a small constant when the suggested values for the case of no a priori PF information are used. If it is known beforehand that the Pareto front is connected, the parameters of DBSCAN can be adjusted accordingly, and $n_g$ can be set to 1.
- Filling: The time complexity depends on the number of objectives:
  - For $k = 2$, the time complexity is dominated by sorting the ℓ points ($\mathcal{O}(\ell \log \ell)$) and by placing the filling points along the line segments, which is linear in the number of placed points.
  - For $k \geq 3$, the time complexity is governed by the computation of the normal vector, the change of coordinates and projection, the Delaunay triangulation, and the filling of the triangles, where the number of triangles of the cleaned triangulation T enters the filling term. Additionally, triangle cleaning must be considered, though its complexity depends on the method used: it is dominated by a determinant computation per triangle when the cleaning is based on the area or the condition number, and by pairwise vertex distances per triangle when the cleaning is based on the longest side.
- Select Reference Set: The time complexity is $\mathcal{O}(I \cdot |F| \cdot N \cdot k)$ due to the k-means clustering algorithm. Here, $I$ is the number of iterations of k-means, $|F|$ the size of the filled set, $N$ the number of clusters (the desired size of the reference set), and $k$ the number of objectives.
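A minimal sketch of the k-means-based selection of the final reference set (the reduction step). Whether the centroids themselves or the closest members of the filled set F are returned is not restated here; this sketch returns the closest members of F.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def select_reference_set(F, N, seed=0):
    """Reduce the filled set F (shape (|F|, k)) to N well-spread points by
    clustering with k-means and returning, for each center, the closest point of F."""
    km = KMeans(n_clusters=N, n_init=10, random_state=seed).fit(F)
    nearest = cdist(km.cluster_centers_, F).argmin(axis=1)
    return F[nearest]
```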
5. Numerical Results
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Symbol | Description |
|---|---|
|  | Current PF approximation |
| ℓ | Size of the starting PF approximation |
| N | Desired size of the approximation |
| Z | RSG result: reference set of size N |
| F | Filled set |
|  | Size of the filling |
| $C_i$ | i-th detected component |
| C | Set of all detected components |
| L | Total length of the 2D curve |
|  | Delaunay triangulation |
|  | Number of triangles in the Delaunay triangulation |
| T | Cleaned triangulation |
|  | Number of triangles in T |
|  | Normal vector |
|  | Projected set |
|  | Selected cleaning property (area/volume, largest side, or condition number) |
|  | Value of the property for triangle i |
|  | Threshold for removing triangles |
|  | Area/volume of triangle/simplex i |
| A | Total area/volume of the triangulation |
| r | Radius of DBSCAN |
Appendix A. Function Definitions
- CONV3
- CONV3-4
- CONV4-2F
References
- Hillermeier, C. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001; Volume 135.
- Coello Coello, C.A.; Goodman, E.; Miettinen, K.; Saxena, D.; Schütze, O.; Thiele, L. Interview: Kalyanmoy Deb Talks about Formation, Development and Challenges of the EMO Community, Important Positions in His Career, and Issues Faced Getting His Works Published. Math. Comput. Appl. 2023, 28, 34.
- Veldhuizen, D.A.V. Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations; Technical Report; Air Force Institute of Technology: Dayton, OH, USA, 1999.
- Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; Grunert da Fonseca, V. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Trans. Evol. Comput. 2003, 7, 117–132.
- Coello, C.A.C.; Cruz, N.C. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190.
- Schütze, O.; Esquivel, X.; Lara, A.; Coello Coello, C.A. Using the averaged Hausdorff distance as a performance measure in evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522.
- Ishibuchi, H.; Masuda, H.; Nojima, Y. A Study on Performance Evaluation Ability of a Modified Inverted Generational Distance Indicator. In Proceedings of the GECCO’15: Genetic and Evolutionary Computation Conference, Madrid, Spain, 11–15 July 2015; pp. 695–702.
- Bogoya, J.M.; Vargas, A.; Cuate, O.; Schütze, O. A (p,q)-Averaged Hausdorff Distance for Arbitrary Measurable Sets. Math. Comput. Appl. 2018, 23, 51.
- Deb, K.; Ehrgott, M. On Generalized Dominance Structures for Multi-Objective Optimization. Math. Comput. Appl. 2023, 28, 100.
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
- Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. In Proceedings of the Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), Athens, Greece, 19–21 September 2002; Giannakoglou, K., Tsahalis, D., Periaux, J., Papailiou, K., Eds.; International Center for Numerical Methods in Engineering (CIMNE): Barcelona, Spain, 2002; pp. 95–100.
- Fonseca, C.M.; Fleming, P.J. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 1995, 3, 1–16.
- Knowles, J.D.; Corne, D.W. Approximating the nondominated front using the Pareto Archived Evolution Strategy. Evol. Comput. 2000, 8, 149–172.
- Zhang, Q.; Li, H. MOEA/D: A Multi-objective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
- Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601.
- Jain, H.; Deb, K. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach. IEEE Trans. Evol. Comput. 2014, 18, 602–622.
- Zuiani, F.; Vasile, M. Multi Agent Collaborative Search based on Tchebycheff decomposition. Comput. Optim. Appl. 2013, 56, 189–208.
- Moubayed, N.A.; Petrovski, A.; McCall, J. D2MOPSO: MOPSO Based on Decomposition and Dominance with Archiving Using Crowding Distance in Objective and Solution Spaces. Evol. Comput. 2014, 22, 47–77.
- Beume, N.; Naujoks, B.; Emmerich, M.T.M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669.
- Zitzler, E.; Thiele, L.; Bader, J. SPAM: Set Preference Algorithm for multiobjective optimization. In Proceedings of the Parallel Problem Solving From Nature PPSN X, Dortmund, Germany, 13–17 September 2008; pp. 847–858.
- Wagner, T.; Trautmann, H. Integration of Preferences in Hypervolume-based multiobjective evolutionary algorithms by means of desirability functions. IEEE Trans. Evol. Comput. 2010, 14, 688–701.
- Fonseca, C.M.; Fleming, P.J. Genetic algorithms for multiobjective optimization: Formulation, discussion, and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, Champaign, IL, USA, 17–21 July 1993; pp. 416–423.
- Srinivas, N.; Deb, K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248.
- Horn, J.; Nafpliotis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 27–29 June 1994; IEEE Press: Piscataway, NJ, USA, 1994; pp. 82–87.
- Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
- Rudolph, G. Finite Markov Chain results in evolutionary computation: A Tour d’Horizon. Fundam. Inform. 1998, 35, 67–89.
- Rudolph, G. On a multi-objective evolutionary algorithm and its convergence to the Pareto set. In Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC 1998), Anchorage, AK, USA, 4–9 May 1998; IEEE Press: Piscataway, NJ, USA, 1998; pp. 511–516.
- Rudolph, G.; Agapie, A. Convergence Properties of Some Multi-Objective Evolutionary Algorithms. In Proceedings of the Congress on Evolutionary Computation (CEC), La Jolla, CA, USA, 16–19 July 2000; IEEE Press: Piscataway, NJ, USA, 2000.
- Rudolph, G. Evolutionary Search under Partially Ordered Fitness Sets. In Proceedings of the International NAISO Congress on Information Science Innovations (ISI 2001), Dubai, United Arab Emirates, 17–21 March 2001; ICSC Academic Press: Sliedrecht, The Netherlands, 2001; pp. 818–822.
- Hanne, T. On the convergence of multiobjective evolutionary algorithms. Eur. J. Oper. Res. 1999, 117, 553–564.
- Hanne, T. Global multiobjective optimization with evolutionary algorithms: Selection mechanisms and mutation control. In Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization, EMO 2001, Zurich, Switzerland, 7–9 March 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 197–212.
- Hanne, T. A multiobjective evolutionary algorithm for approximating the efficient set. Eur. J. Oper. Res. 2007, 176, 1723–1734.
- Hanne, T. A Primal-Dual Multiobjective Evolutionary Algorithm for Approximating the Efficient Set. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Singapore, 25–28 September 2007; IEEE Press: Piscataway, NJ, USA, 2007; pp. 3127–3134.
- Brockhoff, D.; Tran, T.D.; Hansen, N. Benchmarking numerical multiobjective optimizers revisited. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; pp. 639–646.
- Wang, R.; Zhou, Z.; Ishibuchi, H.; Liao, T.; Zhang, T. Localized weighted sum method for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 22, 3–18.
- Pang, L.M.; Ishibuchi, H.; Shang, K. Algorithm Configurations of MOEA/D with an Unbounded External Archive. arXiv 2020, arXiv:2007.13352.
- Nan, Y.; Shu, T.; Ishibuchi, H. Effects of External Archives on the Performance of Multi-Objective Evolutionary Algorithms on Real-World Problems. In Proceedings of the 2023 IEEE Congress on Evolutionary Computation (CEC), Chicago, IL, USA, 1–5 July 2023; pp. 1–8.
- Rodriguez-Fernandez, A.E.; Schäpermeier, L.; Hernández, C.; Kerschke, P.; Trautmann, H.; Schütze, O. Finding ϵ-Locally Optimal Solutions for Multi-Objective Multimodal Optimization. IEEE Trans. Evol. Comput. 2024.
- Schütze, O.; Rodriguez-Fernandez, A.E.; Segura, C.; Hernández, C. Finding the Set of Nearly Optimal Solutions of a Multi-Objective Optimization Problem. IEEE Trans. Evol. Comput. 2024, 29, 145–157.
- Nan, Y.; Ishibuchi, H.; Pang, L.M. Small Population Size is Enough in Many Cases with External Archives. In Evolutionary Multi-Criterion Optimization, Proceedings of the 13th International Conference, EMO 2025, Canberra, ACT, Australia, 4–7 March 2025; Singh, H., Ray, T., Knowles, J., Li, X., Branke, J., Wang, B., Oyama, A., Eds.; Springer Nature: Singapore, 2025; pp. 99–113.
- Loridan, P. ϵ-Solutions in Vector Minimization Problems. J. Optim. Theory Appl. 1984, 42, 265–276.
- Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 2002, 10, 263–282.
- Schütze, O.; Hernández, C. Archiving Strategies for Evolutionary Multi-Objective Optimization Algorithms; Springer: Berlin/Heidelberg, Germany, 2021.
- Knowles, J.D.; Corne, D.W. Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Trans. Evol. Comput. 2003, 7, 100–116.
- Knowles, J.D.; Corne, D.W. Bounded Pareto archiving: Theory and practice. In Metaheuristics for Multiobjective Optimisation; Springer: Berlin/Heidelberg, Germany, 2004; pp. 39–64.
- Knowles, J.D.; Corne, D.W.; Fleischer, M. Bounded archiving using the Lebesgue measure. In Proceedings of the IEEE Congress on Evolutionary Computation, Canberra, ACT, Australia, 8–12 December 2003; IEEE Press: Piscataway, NJ, USA, 2003; pp. 2490–2497.
- López-Ibáñez, M.; Knowles, J.D.; Laumanns, M. On Sequential Online Archiving of Objective Vectors. In Evolutionary Multi-Criterion Optimization (EMO 2011), Proceedings of the 6th International Conference, EMO 2011, Ouro Preto, Brazil, 5–8 April 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 46–60.
- Castellanos, C.I.H.; Schütze, O. A Bounded Archiver for Hausdorff Approximations of the Pareto Front for Multi-Objective Evolutionary Algorithms. Math. Comput. Appl. 2022, 27, 48.
- Laumanns, M.; Zenklusen, R. Stochastic convergence of random search methods to fixed size Pareto front approximations. Eur. J. Oper. Res. 2011, 213, 414–421.
- Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization. IEEE Comput. Intell. Mag. 2017, 12, 73–87.
- Blank, J.; Deb, K. Pymoo: Multi-Objective Optimization in Python. IEEE Access 2020, 8, 89497–89509.
- Wang, H.; Rodriguez-Fernandez, A.E.; Uribe, L.; Deutz, A.; Cortés-Piña, O.; Schütze, O. A Newton Method for Hausdorff Approximations of the Pareto Front Within Multi-objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2024.
- Rudolph, G.; Schütze, O.; Grimme, C.; Domínguez-Medina, C.; Trautmann, H. Optimal averaged Hausdorff archives for bi-objective problems: Theoretical and numerical results. Comput. Optim. Appl. 2016, 64, 589–618.
- Dilettoso, E.; Rizzo, S.A.; Salerno, N. A Weakly Pareto Compliant Quality Indicator. Math. Comput. Appl. 2017, 22, 25.
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the KDD, Portland, OR, USA, 2–4 August 1996; Simoudis, S., Han, J., Fayyad, U., Eds.; AAAI Press: Menlo Park, CA, USA, 1996; pp. 226–231.
- Ben-David, S.; Ackerman, M. Measures of Clustering Quality: A Working Set of Axioms for Clustering. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–10 December 2008; Koller, D., Schuurmans, D., Bengio, Y., Bottou, L., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2008; Volume 21.
- Delaunay, B. Sur la sphère vide. Bull. l’Académie des Sci. l’URSS, Cl. des Sci. Mathématiques 1934, 1934, 793–800.
- Smith, N.A.; Tromble, R.W. Sampling Uniformly from the Unit Simplex; Johns Hopkins University: Baltimore, MD, USA, 2004.
- Uribe, L.; Bogoya, J.M.; Vargas, A.; Lara, A.; Rudolph, G.; Schütze, O. A Set Based Newton Method for the Averaged Hausdorff Distance for Multi-Objective Reference Set Problems. Mathematics 2020, 8, 1822.
- Chen, W.; Ishibuchi, H.; Shang, K. Clustering-Based Subset Selection in Evolutionary Multiobjective Optimization. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 468–475.
- Martín, A.; Schütze, O. Pareto Tracer: A predictor-corrector method for multi-objective optimization problems. Eng. Optim. 2018, 50, 516–536.
- Schütze, O.; Cuate, O. The Pareto Tracer for the treatment of degenerated multi-objective optimization problems. Eng. Optim. 2024, 57, 261–286.
- Li, W.; Yao, X.; Zhang, T.; Wang, R.; Wang, L. Hierarchy ranking method for multimodal multiobjective optimization with local Pareto fronts. IEEE Trans. Evol. Comput. 2022, 27, 98–110.
- Cai, X.; Wu, L.; Zhao, T.; Wu, D.; Zhang, W.; Chen, J. Dynamic adaptive multi-objective optimization algorithm based on type detection. Inf. Sci. 2024, 654, 119867.
| = | 1000 | 5000 | 10,000 | 50,000 |
|---|---|---|---|---|
| ZDT1 | 0.050 | 0.127 | 0.434 | 11.548 |
| ZDT3 | 0.030 | 0.097 | 0.266 | 6.272 |
| ZDT4 | 0.016 | 0.086 | 0.289 | 6.639 |
| ZDT6 | 0.015 | 0.093 | 0.376 | 9.296 |
| = | 10,000 | 50,000 | 100,000 | 500,000 | 1,000,000 |
|---|---|---|---|---|---|
| DTLZ2 | 0.271 | 3.164 | 8.059 | 235.510 | 1049.119 |
| DTLZ7 | 1.305 | 3.359 | 7.962 | 230.973 | 981.419 |
| C2_DTLZ2 | 1.710 | 3.698 | 8.510 | 216.197 | 1025.821 |
| CDTLZ2 | 0.329 | 3.595 | 9.452 | 247.404 | 1013.905 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).