Article

Elite Multi-Criteria Decision Making—Pareto Front Optimization in Multi-Objective Optimization

by
Adarsh Kesireddy
1,2,* and
F. Antonio Medrano
1,2
1
Conrad Blucher Institute for Surveying and Science, Texas A&M University–Corpus Christi, Corpus Christi, TX 78412, USA
2
Department of Computer Science, Texas A&M University–Corpus Christi, Corpus Christi, TX 78412, USA
*
Author to whom correspondence should be addressed.
Algorithms 2024, 17(5), 206; https://doi.org/10.3390/a17050206
Submission received: 31 March 2024 / Revised: 8 May 2024 / Accepted: 10 May 2024 / Published: 10 May 2024

Abstract
Optimization is the process of minimizing or maximizing a given objective function under specified constraints. In multi-objective optimization (MOO), multiple conflicting functions are optimized within defined criteria. Numerous MOO techniques have been developed utilizing various meta-heuristic methods such as Evolutionary Algorithms (EAs), Genetic Algorithms (GAs), and other biologically inspired processes. In a cooperative environment, a Pareto front is generated, and an MOO technique is applied to solve for the solution set. On the other hand, Multi-Criteria Decision Making (MCDM) is often used to select a single best solution from a set of provided candidate solutions. The Multi-Criteria Decision Making–Pareto Front (M-PF) optimizer combines both of these techniques to find a quality set of heuristic solutions. This paper presents an improved version of the M-PF optimizer, called the elite Multi-Criteria Decision Making–Pareto Front (eMPF) optimizer. The eMPF method uses an evolutionary algorithm for the meta-heuristic process, generates a Pareto front, and applies MCDM to the Pareto front to rank the solutions in the set. The main objective of the new optimizer is to exploit the Pareto front while also exploring the solution area. The performance of the developed method is tested against M-PF, the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II), and the Non-Dominated Sorting Genetic Algorithm-III (NSGA-III). The test results demonstrate that eMPF outperforms M-PF, NSGA-II, and NSGA-III: it not only exploited the search domain but also found better heuristic solutions for most of the test functions used.

1. Introduction

Most real-world problems are multi-objective (MO) in nature. In an MO problem, one seeks a solution or set of solutions that is optimal for conflicting objective functions. Pareto optimal solutions represent an optimal trade-off between objectives in most real-world applications. Indeed, given one Pareto optimal solution for conflicting objectives, finding another feasible solution that improves in one objective requires worsening one or more other objectives. Resource-limiting constraints add to the complexity of finding solutions, and for discrete decision variables this results in a problem that is classified as NP-Hard, where the time to solve to exact optimality grows exponentially as the problem size increases linearly [1].
The main objective of an optimization technique is to find a set of solutions which are optimal or near-optimal for the conflicting objectives under the given constraints. Algorithms with stochastic components are referred to as meta-heuristics [2]. Stochastic optimization techniques are often applied to black-box optimization problems [3]. Most real-world problems are classified as black-box optimization problems due to uncertainty in the exact definition of the objective functions and constraints.
In multi-objective optimization (MOO), the search space needs to be explored to find the best solutions for the objective functions. Exploration and exploitation of the search area both need to be performed. Various techniques such as Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) [4], Non-Dominated Sorting Genetic Algorithm-III (NSGA-III) [5], Particle Swarm Optimization (PSO) [6], and Ant Colony Optimization (ACO) [7] have previously been developed to solve such difficult problems, and the new presented technique will be evaluated against those. Some real-world applications of MOO include finance [8], politics [9], mechanics [10], and spatial optimization [11].
On the contrary, if a set of feasible solutions is already provided, then Multi-Criteria Decision Making (MCDM) is used to find the best solution from the given set of solutions. In MCDM, a single best solution is generally selected from the provided set of solutions depending upon the significance of the objective. Various methods for the selection of a solution have previously been developed [12].
A decision maker (DM) first applies MOO techniques to obtain a set of Pareto front solutions. Next, the DM applies MCDM techniques to this set to pick one best solution. Various researchers have previously implemented this process [13,14,15]. An optimizer combining both of these approaches was previously developed, called the Multi-Criteria Decision Making–Pareto Front (M-PF) optimizer [16]. The main drawback of the M-PF optimizer is its tendency to converge toward one objective when exploiting its search area, with a low probability of escaping a local optimum. Other researchers have combined MOO and MCDM techniques [17] using objective weights predefined by the DM. A major issue with predefined objective weights is that they bias the search toward one of the objective functions. In this paper, an improved version of the M-PF optimizer is developed, called the elite Multi-Criteria Decision Making–Pareto Front (eMPF) optimizer. The simultaneous objectives of the new optimizer are the exploitation of the search area and the exploration of a diverse solution set.
The main contribution of this paper is an optimizer that improves on M-PF with a lower possibility of becoming stuck at a local optimum. A further contribution is a reduced time complexity relative to the M-PF optimizer. In experiments, the new optimizer achieved better solution quality on the evaluated test functions.
The structure of the subsequent sections of this paper is as follows. In Section 2, the relevant literature review of MOO, Pareto front, Multi-Criteria Decision Making, and Evolutionary Algorithms is presented. In Section 3, the new algorithm along with the step-by-step process is explained. In addition, all of the test functions used for this paper are explained. In Section 4, the results of the simulation are demonstrated, and the data are interpreted in Section 5. Finally, in Section 6, the conclusions for this paper are provided.

2. Literature Review

This section provides an overview of Multi-Objective Optimization (MOO), Pareto front (PF), Multi-Criteria Decision Making (MCDM), Evolutionary Algorithm (EA), and Multi-Objective Evolutionary Algorithms (MOEA).

2.1. Multi-Objective Optimization (MOO)

The aim of MOO is to find a set of optimal solutions for conflicting objectives, which may have constraints associated with them. Mathematically, MOO can be represented as [18]
$$\min \left[ f_1(x),\; f_2(x),\; \ldots,\; f_n(x) \right] \tag{1}$$
subject to $c_i(x)$, where $i = 1, 2, 3, \ldots, m$. Here, $f_1(x), f_2(x), \ldots, f_n(x)$ are the objective functions, $n$ is the number of objective functions, $c_i(x)$ are the constraints, and $m$ is the number of constraints.
In MOO, the number of objective functions should be greater than or equal to 2. Each optimal solution is a trade-off between objectives and may not be an optimal solution for any objective function taken individually [19]. This conflicting nature of the objective functions and their corresponding constraints makes finding the optimal set of solutions challenging.

2.2. Pareto Front

A single solution from the given set of solutions is considered Pareto optimal only if there exists no other feasible solution that improves any objective without worsening one or more objectives. The Pareto dominance can be explained using Figure 1.
In Figure 1, all of the points represent the feasible solutions obtained for the objective functions 1 and 2. Considering points A, B, and C and with the desire to minimize both objective functions, the following conclusions can be made:
  • Point A dominates point B in objective 1 and dominates point C in both objectives.
  • Point B dominates point A in objective 2 and dominates point C in both objectives.
  • Point C is dominated by points A and B in both objectives.
Here, points A and B do not dominate each other, and no other solutions dominate them. Hence, points A and B are in the same Pareto front. Point C is dominated by both solutions, which pushes it to the next Pareto front.
To sort all of the individuals into successive Pareto fronts, first, the set of non-dominated solutions $PF_1$ is found from the entire solution set $X$. Next, the leftover solutions $X \setminus PF_1$ are considered to find the non-dominated set $PF_2$. This process is repeated until every solution is assigned a Pareto front number [20].
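The front-by-front sorting described above can be sketched in Python. This is an illustrative, hypothetical implementation (the function name and the quadratic-time approach are assumptions made for clarity, not the authors' code), assuming minimization of every objective:

```python
def pareto_fronts(fitnesses):
    """Sort individuals into successive Pareto fronts (minimization assumed).

    fitnesses: list of objective-value tuples, one per individual.
    Returns a list of fronts, each a list of indices into `fitnesses`.
    """
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and
        # strictly better in at least one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    remaining = set(range(len(fitnesses)))
    fronts = []
    while remaining:
        # The current front holds every solution not dominated by
        # any other remaining solution.
        front = [i for i in remaining
                 if not any(dominates(fitnesses[j], fitnesses[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts
```

With hypothetical coordinates A = (1, 3), B = (3, 1), and C = (3, 3), A and B land in the first front and C in the second, mirroring the three-point example above.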

2.3. Multi-Criteria Decision Making (MCDM)

The aim of Multi-Criteria Decision Making (MCDM) is to obtain a single best solution from a set of candidate solutions, depending on various criteria or end-user requirements. As per [21], MCDM was first developed by Benjamin Franklin while working on his concept of moral algebra.
In MCDM, weights are assigned to each criterion depending on the requirements of the end user or the problem type. Methods such as Simple Additive Weighting (SAW) [22], the Analytic Hierarchy Process (AHP) [23], the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [24], and the ranking method [25] are then applied to the individuals to find the best solution in the set of candidates.

2.4. Evolutionary Algorithms (EA)

Evolutionary Algorithms (EAs) are biologically inspired, population-based meta-heuristics for solving MOO problems. The initial step of an EA is generating a random population of size $T = p + q$. The population is evaluated against the objective functions, and fitness values are calculated. Individuals in the population are then compared against each other using methods such as binary tournament selection [26]. Let $q$ be the number of individuals to be eliminated; these individuals are removed from the total population, leaving $p = T - q$ individuals. A new set of $q$ individuals is then added back to the surviving population: random individuals from $p$ are copied and randomized to fill the $q$ slots, and random elements of the total population $T$ are mutated. One pass of evaluation, selection, removal, and mutation is called a generation. The stopping condition can be reaching a given number of generations or reaching target fitness values for individuals in the population.
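The generation loop described above can be sketched as follows. This is a simplified, hypothetical single-objective illustration: it uses truncation selection in place of binary tournament search, mutates only the refilled individuals, and all names and parameter values are assumptions:

```python
import random

def evolve(fitness_fn, n_keep=20, n_new=10, n_genes=5, generations=50, seed=0):
    """Minimal (p + q) evolutionary loop: evaluate, select, refill, mutate.

    Simplifications vs. the text: truncation selection replaces binary
    tournament, and only the q refilled individuals are mutated.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_genes)]
           for _ in range(n_keep + n_new)]
    for _ in range(generations):
        # Evaluate and keep the p best individuals (minimization).
        pop.sort(key=fitness_fn)
        kept = pop[:n_keep]
        # Refill the q removed slots with mutated copies of survivors.
        children = []
        for _ in range(n_new):
            child = list(rng.choice(kept))
            i = rng.randrange(n_genes)
            child[i] += rng.gauss(0.0, 0.1)  # Gaussian mutation
            children.append(child)
        pop = kept + children
    return min(pop, key=fitness_fn)
```

Because the kept individuals always survive, the best fitness found never worsens between generations (elitism).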

2.5. Multi-Objective Evolutionary Algorithm (MOEA)

In an MOEA, the selection process differs from that described in Section 2.4. Pareto fronts are created for the entire population set, and the optimization techniques listed in Section 1 are applied during selection to find the best individuals for the next steps. MOEAs are classified into three categories: dominance-based, indicator-based, and decomposition-based [27]. Depending on its selection process, an optimizer fits into one of these three categories.

3. Methodology

In this section, the developed algorithm is discussed in Section 3.1. The calculation of the objective-function weights using entropy and degree of diversification is explained in Section 3.1.1, followed by the TOPSIS method [28], which is used for the selection of policies from the Pareto front. The performance of the optimizers is evaluated using the Pareto front spread ($\Delta$) (Section 3.2.1), the generational distance ($GD$) (Section 3.2.2), and the Pareto front spacing ($S_p$) (Section 3.2.3). Finally, the test functions used for this research are presented in Section 3.3.

3.1. Algorithm

The eMPF optimization technique combines an EA with MCDM, as shown in Figure 2. Initially, a random population of size $T = p + q$ is generated and evaluated in the environment, which can be user-defined as per requirements. With the fitness values obtained from the simulator, the Pareto fronts of the population are generated as discussed in Section 2.2. This yields $p$ better-performing individuals (policies) and $q$ worse-performing ones.
Once the Pareto fronts are generated, the number of individuals to remove, $q$, is determined from the required Pareto front set. If the number of individuals is less than or equal to 3, then any of those individuals can be selected. If the number of individuals is more than 3, then the TOPSIS method is applied to all the individuals of the Pareto front except for the extreme solutions, as shown in Algorithm 1. Before using the TOPSIS method, weights for the objectives need to be found; entropy and the degree of diversification are used for this purpose in this paper. Survival priority is given first to the extreme individuals, and the remaining survival order follows the ranking derived from the TOPSIS method.
Algorithm 1 eMPF selection process
  • Input: Pareto front P, Number of individuals to select N
  • Output: Selected individuals S
  • if $N \le 3$ then
  •     $S \leftarrow$ Randomly select $N$ individuals from $P$
  • else
  •     $E \leftarrow$ Select the individuals with the extreme fitness values from $P$
  •     Calculate the weight of each objective
  •     Apply TOPSIS to all individuals except $E$ to rank them
  •     $S \leftarrow$ Select the extreme individuals $E$, followed by the remaining individuals in TOPSIS rank order
  • end if
  • return S
Once the $q$ policies are removed from the population, $q$ new policies are generated using mutation. The process of evaluation is then repeated on the $p + q$ policies. The selection methodology is shown in Algorithm 1.
There are two major differences between the eMPF and M-PF optimizers. The first difference is the selection of the extreme fitness values in the Pareto front. This gives the eMPF optimizer the advantage of exploring the entire decision space, helping it search the area between the extreme fitness values. An extreme fitness value is the fitness value of the individual in the entire population with the lowest (for minimization) or highest (for maximization) value in one objective function. The second difference is that in eMPF, selection is based on the individuals within the same Pareto front, whereas in M-PF, the first Pareto front individuals are used to select the best individuals in the required Pareto front. This transforms the optimizer from an indicator-based optimizer to a dominance-based optimizer. The improvements in the performance of the optimizer are shown in the results section (Section 4).
From the aforementioned second difference between the eMPF and M-PF optimizers, the time complexity of eMPF is reduced. In M-PF, two loops are needed, one over the first Pareto front and one over the required Pareto front. With eMPF, only one loop over the required Pareto front is necessary, reducing the overall time complexity by O(n), where n is the number of individuals in the first Pareto front.

3.1.1. Calculating Weights of Each Objective

In the eMPF method, the weight of each objective is calculated in the process as shown below [16]:
  • The first step is to normalize the fitness values using Equation (2),
    $$F_{ij} = \frac{f_{ij}}{\sum_{i=1}^{N} f_{ij}} \tag{2}$$
    where $i$ is the individual policy, $j$ is the objective number, $N$ is the total number of individual policies, $f$ is the actual fitness value, and $F$ is the normalized fitness value.
  • The next step is to find the entropy value ($e_j$) of each objective using Equation (3),
    $$e_j = -h \sum_{i=1}^{N} F_{ij} \ln(F_{ij}) \tag{3}$$
    where $h = 1/\ln(N)$, $j$ is the objective, $N$ is the total number of individual policies, $F_{ij}$ is the normalized fitness value from Equation (2), and $i$ is the individual policy.
  • Lastly, the weight ($w_j$) of each objective is calculated using Equation (4),
    $$w_j = \frac{1 - e_j}{\sum_{j=1}^{M} (1 - e_j)} \tag{4}$$
    where $e_j$ is the entropy value from the previous step, $j$ is the objective index, and $M$ is the total number of objectives.
By following the above listed steps, the weight of each objective function is calculated for each generation in an MOEA.
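The three steps above can be sketched as a short routine. This is an illustrative implementation of the entropy-weighting process, not the authors' code; the function name is hypothetical, and it assumes all fitness values are strictly positive so that the logarithm is defined:

```python
import math

def entropy_weights(fitness):
    """Entropy-based objective weights.

    fitness: list of rows, one per individual policy; columns are objectives.
    Assumes strictly positive fitness values so ln() is defined.
    """
    n = len(fitness)       # number of individual policies (N)
    m = len(fitness[0])    # number of objectives (M)
    h = 1.0 / math.log(n)
    raw = []
    for j in range(m):
        col = [row[j] for row in fitness]
        total = sum(col)
        norm = [f / total for f in col]                      # Equation (2)
        e_j = -h * sum(F * math.log(F) for F in norm if F > 0)  # Equation (3)
        raw.append(1.0 - e_j)
    s = sum(raw)
    return [w / s for w in raw]                              # Equation (4)
```

An objective whose fitness values are identical across all individuals carries maximum entropy ($e_j = 1$) and therefore receives zero weight, so only discriminating objectives influence the ranking.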

3.1.2. TOPSIS

Once the weights of the objectives are calculated, the next step in the optimizer is to use the TOPSIS method to rank the individual policies except for the extreme individuals as shown in Figure 2. The TOPSIS method is as follows.
  • Normalize the actual fitness values using Equation (5),
    $$F_{ij} = \frac{f_{ij}}{\sqrt{\sum_{i=1}^{N} f_{ij}^2}} \tag{5}$$
    where $i$ is the individual policy, $j$ is the objective number, $f$ is the actual fitness value, and $F$ is the normalized fitness value.
  • Next, find the weighted fitness values by taking the product of the weight of each objective with the normalized fitness values, as shown in Equation (6),
    $$W_{ij} = w_j F_{ij} \tag{6}$$
    where $w_j$ is the weight of each objective and $F_{ij}$ is the normalized fitness value calculated in Equation (5) for the individual policy $i$ and objective $j$.
  • Depending on the $W_{ij}$ values, the best- and worst-performing values of each objective are selected and flagged as the ideal best and ideal worst.
  • The Euclidean distances from each individual to both the ideal best and the ideal worst are calculated and assigned as $S_i^+$ and $S_i^-$, where $S_i^+$ is the distance to the best and $S_i^-$ is the distance to the worst.
  • The last step is to find the degree of approximation $D_i$ using Equation (7),
    $$D_i = \frac{S_i^-}{S_i^- + S_i^+} \tag{7}$$
    where $S_i^+$ and $S_i^-$ are the distances to the best and worst individuals from the above calculations, and $i$ is the individual element.
Individual elements are then ranked by their degree of approximation $D_i$, with the highest value indicating the best solution.
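The TOPSIS procedure above can be sketched as follows, assuming every objective is to be minimized, so the ideal best in each weighted column is its minimum. The function name and this minimization convention are assumptions for illustration:

```python
import math

def topsis_rank(fitness, weights):
    """Rank alternatives by TOPSIS; all objectives are minimized.

    fitness: rows of objective values, one row per individual.
    weights: one weight per objective (summing to 1).
    Returns indices ordered from best to worst by closeness D_i.
    """
    n, m = len(fitness), len(fitness[0])
    # Vector-normalize each objective column and apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in fitness)) for j in range(m)]
    W = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in fitness]
    # For minimization, the ideal best per objective is the column minimum.
    best = [min(W[i][j] for i in range(n)) for j in range(m)]
    worst = [max(W[i][j] for i in range(n)) for j in range(m)]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Relative closeness D_i = S_i^- / (S_i^- + S_i^+).
    D = [dist(row, worst) / (dist(row, worst) + dist(row, best)) for row in W]
    return sorted(range(n), key=lambda i: -D[i])
```

An individual coinciding with the ideal best has $D_i = 1$, and one coinciding with the ideal worst has $D_i = 0$.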

3.2. Performance Evaluation

The performance of the optimizers is first assessed by visual graphical inspection. Next, the mean and standard deviation of the fitness values of the first Pareto front individuals are used to analyze performance. In addition, the Pareto front spread ($\Delta$) [29], the generational distance ($GD$) [30], and the Pareto front spacing are used to evaluate the optimizers.

3.2.1. Pareto Front Spread ( Δ )

The spread metric is generally represented as $\Delta$ and is calculated using Equation (8),
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{m-1} |d_i - \bar{d}|}{d_f + d_l + (m-1)\,\bar{d}} \tag{8}$$
Here, $d_f$ and $d_l$ are the distances between the extreme solutions of the Pareto front, $d_i$ is the distance between consecutive solutions, $\bar{d}$ is the average of all $d_i$ values, and $m$ is the number of solutions in the Pareto front. A higher spread value indicates a wider distribution of solutions across the objective space.
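For a bi-objective front, the spread metric can be computed as below. This is a hypothetical sketch: it assumes the front is sorted along the first objective and that the extreme points of the true Pareto front are supplied by the caller:

```python
import math

def spread(front, extreme_f, extreme_l):
    """Delta spread metric for a 2-D front sorted by objective 1.

    front: list of (f1, f2) points ordered along the front.
    extreme_f, extreme_l: extreme points of the true Pareto front.
    """
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_f = d(extreme_f, front[0])    # gap to the first extreme
    d_l = d(extreme_l, front[-1])   # gap to the last extreme
    gaps = [d(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_bar = sum(gaps) / len(gaps)
    num = d_f + d_l + sum(abs(g - d_bar) for g in gaps)
    den = d_f + d_l + len(gaps) * d_bar
    return num / den
```

A perfectly even front whose endpoints coincide with the extremes yields $\Delta = 0$.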

3.2.2. Generational Distance ( G D )

To calculate the generational distance, the Euclidean distance between each reference Pareto front point and its nearest non-dominated Pareto front point is computed. The generational distance is calculated using Equation (9).
$$GD = \frac{1}{n} \sum_{j=1}^{n} d_j \tag{9}$$
Here, $GD$ is the generational distance, $n$ is the number of distances computed, and $d_j$ is the Euclidean distance between the $j$-th reference Pareto front point and its nearest non-dominated Pareto front point. In this paper, the reference Pareto front was generated by combining the Pareto fronts of 30 statistical runs of each optimizer.
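A minimal sketch of the generational distance is given below; it averages, over the points of the first front passed in, the Euclidean distance to the nearest point of the second front, so the direction of comparison follows from the argument order:

```python
import math

def generational_distance(front, reference):
    """Average distance from each point in `front` to its nearest
    point in `reference`."""
    return sum(min(math.dist(p, r) for r in reference)
               for p in front) / len(front)
```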

3.2.3. Pareto Front Spacing ( S p )

One way to measure the performance of an optimizer is the spacing of the Pareto front [31]. The spacing determines how evenly the individuals are distributed in the Pareto front; the individuals are evenly distributed if the spacing value $S_p$ is 0. To find the spacing value, the distances between individual elements are calculated; $S_p$ is the standard deviation of these distances [32].
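The spacing metric can be sketched as the standard deviation of nearest-neighbor distances within the front; the population (rather than sample) standard deviation is assumed here:

```python
import math

def spacing(front):
    """S_p: standard deviation of nearest-neighbor distances in the front."""
    ds = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
          for i, p in enumerate(front)]
    d_bar = sum(ds) / len(ds)
    return math.sqrt(sum((d - d_bar) ** 2 for d in ds) / len(ds))
```

A perfectly evenly spaced front gives identical nearest-neighbor distances and thus a spacing of 0.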

3.3. Test Functions

To demonstrate the performance of each optimizer, 21 test functions were used for this research, as listed below:
  • Binh and Korn Function [33];
  • Chankong and Haimes Function [33];
  • Fonseca–Fleming Function [34];
  • Test Function 4 [35];
  • Kursawe Function [36];
  • Schaffer Functions N1 and N2 [37];
  • Poloni’s Two-Objective Function [38];
  • Zitzler–Deb–Thiele’s Function N1, N2, N3, N4, and N6 [39];
  • Osyczka and Kundu Function [40];
  • Constr-Ex Problem;
  • VR-UC Test 1, VR-UC Test 2 [41];
  • MSGA Test 1 [42];
  • Viennet Function [43];
  • MHHM1, MHHM2 [44].
The equations for each test function and the constraints corresponding to the test functions along with the search domain are presented in Appendix A.

4. Results

This section presents the results of applying the eMPF, M-PF, NSGA-II, and NSGA-III methodologies to the 21 test functions. As mentioned in Section 3.3, the test functions used for the analysis, along with their constraints and search domains, are presented in Appendix A. A mutation rate of 0.5 was used, and a Gaussian distribution with a mean of 0.0 provided the randomness in the population. A total of 100 individual policies were iterated over 1000 generations, and no other stopping condition was applied to any of the test functions. All of the test functions were to be minimized over the generations. As mentioned earlier, the goal of the eMPF method presented in this paper is to exploit the environment while maintaining diverse solutions.
The mean and standard deviation of the fitness values for each test function and methodology are shown in Appendix B. In addition, the generational distance ($GD$), the Pareto front spread ($\Delta$), and the fitness spacing ($S_p$) defined in Section 3.2 are also calculated; all results are included in Appendix C.

5. Discussion

In this section, the performance of the eMPF optimizer is compared with the other tested optimization techniques. The performance is evaluated with the test functions mentioned in Appendix A. The objective and search space of all of the test functions used are plotted and provided in the linked GitHub repository found at the end of this paper.
Binh and Korn Function: eMPF achieved a low standard deviation, demonstrating consistent solution quality across all test runs. Moreover, it had the lowest average fitness among the optimizers in the first objective function. When the spread ($\Delta$) was considered, eMPF had the highest spread value of the optimization techniques, with a larger set of solutions generated over the Pareto front. The eMPF optimizer had 50% better performance than the M-PF optimizer when compared using the generational distance ($GD$).
Chankong and Haimes Function: For this test function, the minimum solution obtained for each objective function was lowest with NSGA-III and M-PF, with NSGA-II around 0.1 units higher. However, eMPF showed a more centered distribution of solutions along with consistently more reliable solutions. When the spread ($\Delta$) was considered, eMPF generated a diverse set of solutions, while NSGA-III offered the widest coverage of the Pareto front for this specific function. The generational distance ($GD$) of eMPF was higher than that of the other optimizers due to the extreme solutions carried forward to the next generations. Figure 3 demonstrates the extreme solutions generated by the eMPF optimizer. Holding on to these extreme solutions becomes beneficial as the complexity of the search space increases.
Fonseca–Fleming Function: eMPF demonstrated good performance, closely competing with NSGA-II in finding the optimal solutions while surpassing it in average performance and consistency. With the highest spread value, eMPF outperformed all other methods in terms of diversity, indicating its superior capability to explore and cover a wider range of the Pareto front.
Test Function 4: All four optimization techniques had similar lowest, average, and standard deviation values. However, eMPF surpassed the other techniques in spread ($\Delta$), showing its capability to generate a diverse set of solutions.
Kursawe Function: M-PF surpassed all other optimization techniques in spread ($\Delta$), demonstrating a more diverse solution set. In addition, M-PF outperformed all other optimization techniques in the minimum, average, and standard deviation values. However, eMPF had more than 25% better performance than M-PF in the generational distance ($GD$).
Schaffer Function N1 and N2: eMPF was able to reach better results than other optimization techniques with respect to the minimum values, mean, and standard deviations. It also had higher spread ( Δ ) values than other optimizers. NSGA-II and NSGA-III, while offering competitive performances, were not able to match eMPF’s precision, consistency, and diversity. eMPF outperformed the other optimizers by obtaining accuracy (minimum values), consistency (low mean and standard deviation), and diversity (high spread).
Poloni’s Two-Objective Function: The NSGA-II and eMPF optimization techniques had nearly the same minimum value; however, eMPF had a better mean and standard deviation. NSGA-III and the M-PF optimizer had better spread ($\Delta$) values than eMPF.
Zitzler–Deb–Thiele’s Functions N1, N2, N3, N4, and N6: All of the optimization techniques had nearly the same performance on this set of functions, with negligible variance in the values. For example, in Zitzler–Deb–Thiele’s N2, the difference in the minimum function 1 values across all optimizers was 0.000001 at a precision of six decimal places. All optimization techniques performed equally well on these test functions. In addition, eMPF had a better generational distance ($GD$) than all other optimizers except on the N4 test function.
Osyczka and Kundu Function: All of the optimization techniques reached nearly the same minimum values for each objective function. However, the eMPF optimizer found a more diverse set of solutions than the other optimization techniques and generated good solutions with greater consistency. In addition, eMPF had a better generational distance ($GD$) than NSGA-II.
Constr-Ex Function: The eMPF optimizer achieved the lowest minimum values of all the optimizers. However, its average values were slightly higher than those of the other optimization techniques, making the distribution somewhat skewed; this skew can be clearly seen in Figure 4. The higher generational distance ($GD$) value for eMPF concurs with these results. The spread ($\Delta$), on the other hand, showed good diversification of the solutions generated by eMPF.
VR-UC Test Functions: In VR-UC test function 1, eMPF had the best average but the highest standard deviation compared with the other optimizers. eMPF also had a higher generational distance ($GD$) than the other optimizers; despite this, the spread of its solutions was better. In VR-UC test function 2, all the optimizers had negligible differences in the averages, the generational distance ($GD$), and the spread ($\Delta$).
MSGA Test Function: eMPF outperformed all other optimizers in finding the lowest fitness values of both objectives and produced reliable solutions, as established by its lower mean and higher standard deviations compared to the other optimizers. In addition, eMPF had a better generational distance ($GD$) and spread ($\Delta$) than NSGA-III and the M-PF optimizer.
Viennet Function: As mentioned in Appendix A, this was a three-dimensional minimization multi-objective optimization problem. The average fitness values of the eMPF optimizer were better than those of the other optimizers in all three objective functions. The eMPF optimizer had a better generational distance ($GD$) than the M-PF optimizer, and it also had the highest spread ($\Delta$) values of all the optimizers.
MHHM Functions: These test functions also had three objective functions as mentioned in Appendix A. In both of the test functions, eMPF had the lowest average fitness, lowest generational distance ( G D ), and highest spread ( Δ ) values over the other optimizers. This demonstrates eMPF’s dominance in generating better and more diverse solutions when compared to other optimizers on these test functions. The Pareto front plot confirms eMPF’s dominance, as shown in Figure 5.
Overall, eMPF was able to find diverse solutions in all test scenarios while maintaining consistency in the solution set. Moreover, it was able to find better solutions than the existing optimization techniques. In challenging problems such as Zitzler–Deb–Thiele’s functions, it generated better solutions than NSGA-II and NSGA-III. Reliability was also demonstrated by lower standard deviations for the Schaffer functions and Poloni’s two-objective function. The learning curves generated from the fitness values showed a higher learning rate than the other optimizers in most test scenarios. Moreover, when the number of objective functions was increased from two to three, the eMPF optimizer performed better than the other optimizers, as demonstrated by the Viennet and MHHM test functions.
However, the optimizer was not able to outperform the other optimizers on all of the test functions. The reliability and diversification of the solutions obtained using eMPF were not always better than those of the other optimizers. Additionally, when the true Pareto front was a straight line, as in the Chankong and Haimes function, the solutions from eMPF were not as reliable; this conclusion follows from the generational distance ($GD$) and the average fitness values. With a convex or concave Pareto front, eMPF always had the minimum optimal solution and consistently repeatable solutions.

6. Conclusions

With a greater spread ($\Delta$) than all of the other optimization techniques considered, eMPF has demonstrated its effectiveness in generating a diverse solution set. When the number of objective functions was increased, eMPF consistently achieved better performance. In most of the test functions, eMPF reached nearly the lowest standard deviations, demonstrating its ability to achieve quality solutions relative to the other methods. However, as mentioned in Section 5, there were certain cases where other optimizers slightly outperformed eMPF; overall, eMPF consistently performed best or near best. Future work will focus on evaluating eMPF’s performance on more than three objective functions (many-objective optimization) and in real-world applications.

Author Contributions

Conceptualization, A.K.; methodology, A.K.; software, A.K.; validation, A.K.; formal analysis, A.K.; investigation, A.K.; resources, A.K.; data curation, A.K.; writing—original draft preparation, A.K.; writing—review and editing, A.K. and F.A.M.; visualization, A.K.; supervision, F.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MOO: Multi-Objective Optimization
MCDM: Multi-Criteria Decision Making
EA: Evolutionary Algorithm
TOPSIS: Technique for Order of Preference by Similarity to Ideal Solution
NSGA-II: Non-Dominated Sorting Genetic Algorithm-II
NSGA-III: Non-Dominated Sorting Genetic Algorithm-III
M-PF: Multi-Criteria Decision Making–Pareto Front
eMPF: elite Multi-Criteria Decision Making–Pareto Front
CCEA: Cooperative Co-evolutionary Algorithms

Appendix A

This section contains all the test MOO functions used for this paper, including the constraints and search domains of each MOO function.
Table A1. Multi-objective optimization test functions.
Binh and Korn function
Objectives: $f_1(x) = 4x_1^2 + 4x_2^2$; $f_2(x) = (x_1 - 5)^2 + (x_2 - 5)^2$
Constraints: $(x_1 - 5)^2 + x_2^2 \le 25$; $(x_1 - 8)^2 + (x_2 + 3)^2 \ge 7.7$
Search domain: $0 \le x_1 \le 5$, $0 \le x_2 \le 3$

Chankong and Haimes function
Objectives: $f_1(x_1, x_2) = 2 + (x_1 - 2)^2 + (x_2 - 1)^2$; $f_2(x_1, x_2) = 9x_1 - (x_2 - 1)^2$
Constraints: $x_1^2 + x_2^2 \le 225$; $x_1 - 3x_2 + 10 \le 0$
Search domain: $-20 \le x_1, x_2 \le 20$

Fonseca–Fleming function
Objectives: $f_1(x) = 1 - \exp\left[-\sum_{i=1}^{n}\left(x_i - \frac{1}{\sqrt{n}}\right)^2\right]$; $f_2(x) = 1 - \exp\left[-\sum_{i=1}^{n}\left(x_i + \frac{1}{\sqrt{n}}\right)^2\right]$
Search domain: $-4 \le x_i \le 4$, $1 \le i \le n$

Test Function 4
Objectives: $f_1(x_1, x_2) = x_1^2 - x_2$; $f_2(x_1, x_2) = -0.5x_1 - x_2 - 1$
Constraints: $6.5 - x_1/6 - x_2 \ge 0$; $7.5 - 0.5x_1 - x_2 \ge 0$; $30 - 5x_1 - x_2 \ge 0$
Search domain: $-7 \le x_1, x_2 \le 4$

Kursawe function
Objectives: $f_1(x) = \sum_{i=1}^{n-1}\left[-10\exp\left(-0.2\sqrt{x_i^2 + x_{i+1}^2}\right)\right]$; $f_2(x) = \sum_{i=1}^{n}\left[|x_i|^{0.8} + 5\sin(x_i^3)\right]$
Search domain: $-5 \le x_i \le 5$, $1 \le i \le 3$

Schaffer function N1
Objectives: $f_1(x) = x^2$; $f_2(x) = (x - 2)^2$
Search domain: $-A \le x \le A$, with values of $A$ from 10 to $10^5$

Schaffer function N2
Objectives: $f_1(x) = \begin{cases} -x & x \le 1 \\ x - 2 & 1 < x \le 3 \\ 4 - x & 3 < x \le 4 \\ x - 4 & x > 4 \end{cases}$; $f_2(x) = (x - 5)^2$
Search domain: $-5 \le x \le 10$

Poloni's two-objective function
Objectives: $f_1(x_1, x_2) = [1 + (A_1 - B_1(x_1, x_2))^2 + (A_2 - B_2(x_1, x_2))^2]$; $f_2(x_1, x_2) = (x_1 + 3)^2 + (x_2 + 1)^2$
where
$A_1 = 0.5\sin(1) - 2\cos(1) + \sin(2) - 1.5\cos(2)$,
$A_2 = 1.5\sin(1) - \cos(1) + 2\sin(2) - 0.5\cos(2)$,
$B_1(x_1, x_2) = 0.5\sin(x_1) - 2\cos(x_1) + \sin(x_2) - 1.5\cos(x_2)$,
$B_2(x_1, x_2) = 1.5\sin(x_1) - \cos(x_1) + 2\sin(x_2) - 0.5\cos(x_2)$.
Search domain: $-\pi \le x_1, x_2 \le \pi$

Zitzler–Deb–Thiele's function (ZDT) N1
Objectives: $f_1(x) = x_1$; $f_2(x) = g(x)\,h(f_1(x), g(x))$, with $g(x) = 1 + \frac{9}{29}\sum_{i=2}^{30} x_i$ and $h(f_1, g) = 1 - \sqrt{f_1/g}$
Search domain: $0 \le x_i \le 1$, $1 \le i \le 30$

Zitzler–Deb–Thiele's function (ZDT) N2
Objectives: $f_1(x) = x_1$; $f_2(x) = g(x)\,h(f_1(x), g(x))$, with $g(x) = 1 + \frac{9}{29}\sum_{i=2}^{30} x_i$ and $h(f_1, g) = 1 - (f_1/g)^2$
Search domain: $0 \le x_i \le 1$, $1 \le i \le 30$

Zitzler–Deb–Thiele's function (ZDT) N3
Objectives: $f_1(x) = x_1$; $f_2(x) = g(x)\,h(f_1(x), g(x))$, with $g(x) = 1 + \frac{9}{29}\sum_{i=2}^{30} x_i$ and $h(f_1, g) = 1 - \sqrt{f_1/g} - (f_1/g)\sin(10\pi f_1)$
Search domain: $0 \le x_i \le 1$, $1 \le i \le 30$

Zitzler–Deb–Thiele's function (ZDT) N4
Objectives: $f_1(x) = x_1$; $f_2(x) = g(x)\,h(f_1(x), g(x))$, with $g(x) = 91 + \sum_{i=2}^{10}\left(x_i^2 - 10\cos(4\pi x_i)\right)$ and $h(f_1, g) = 1 - \sqrt{f_1/g}$
Search domain: $0 \le x_1 \le 1$; $-5 \le x_i \le 5$, $2 \le i \le 10$

Zitzler–Deb–Thiele's function (ZDT) N6
Objectives: $f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1)$; $f_2(x) = g(x)\,h(f_1(x), g(x))$, with $g(x) = 1 + 9\left[\frac{\sum_{i=2}^{10} x_i}{9}\right]^{0.25}$ and $h(f_1, g) = 1 - (f_1/g)^2$
Search domain: $0 \le x_i \le 1$, $1 \le i \le 10$

Osyczka and Kundu function
Objectives: $f_1(x) = -25(x_1 - 2)^2 - (x_2 - 2)^2 - (x_3 - 1)^2 - (x_4 - 4)^2 - (x_5 - 1)^2$; $f_2(x) = \sum_{i=1}^{6} x_i^2$
Constraints: $g_1(x) = x_1 + x_2 - 2 \ge 0$; $g_2(x) = 6 - x_1 - x_2 \ge 0$; $g_3(x) = 2 - x_2 + x_1 \ge 0$; $g_4(x) = 2 - x_1 + 3x_2 \ge 0$; $g_5(x) = 4 - (x_3 - 3)^2 - x_4 \ge 0$; $g_6(x) = (x_5 - 3)^2 + x_6 - 4 \ge 0$
Search domain: $0 \le x_1, x_2, x_6 \le 10$; $1 \le x_3, x_5 \le 5$; $0 \le x_4 \le 6$

Constr-Ex problem
Objectives: $f_1(x_1, x_2) = x_1$; $f_2(x_1, x_2) = \frac{1 + x_2}{x_1}$
Constraints: $x_2 + 9x_1 \ge 6$; $-x_2 + 9x_1 \ge 1$
Search domain: $0.1 \le x_1 \le 1$, $0 \le x_2 \le 5$

VR-UC Test 1
Objectives: $f_1(x_1, x_2) = \frac{1}{x_1^2 + x_2^2 + 1}$; $f_2(x_1, x_2) = x_1^2 + 3x_2^2 + 1$
Search domain: $-3 \le x_1, x_2 \le 3$

VR-UC Test 2
Objectives: $f_1(x_1, x_2) = x_1 + x_2 + 1$; $f_2(x_1, x_2) = x_1^2 + 2x_2 - 1$
Search domain: $-3 \le x_1, x_2 \le 3$

MSGA Test 1
Objectives: $f_1(x_1, x_2) = (x_1^2 + x_2^2)^{0.125}$; $f_2(x_1, x_2) = ((x_1 - 0.5)^2 + (x_2 - 0.5)^2)^{0.25}$
Search domain: $-5 \le x_1, x_2 \le 10$

MHHM1
Objectives: $f_1(x) = (x - 0.8)^2$; $f_2(x) = (x - 0.85)^2$; $f_3(x) = (x - 0.9)^2$
Search domain: $0 \le x \le 1$

MHHM2
Objectives: $f_1(x_1, x_2) = (x_1 - 0.8)^2 + (x_2 - 0.6)^2$; $f_2(x_1, x_2) = (x_1 - 0.85)^2 + (x_2 - 0.7)^2$; $f_3(x_1, x_2) = (x_1 - 0.9)^2 + (x_2 - 0.6)^2$
Search domain: $0 \le x_1, x_2 \le 1$

Viennet function
Objectives: $f_1(x_1, x_2) = 0.5(x_1^2 + x_2^2) + \sin(x_1^2 + x_2^2)$; $f_2(x_1, x_2) = \frac{(3x_1 - 2x_2 + 4)^2}{8} + \frac{(x_1 - x_2 + 1)^2}{27} + 15$; $f_3(x_1, x_2) = \frac{1}{x_1^2 + x_2^2 + 1} - 1.1\exp\left(-(x_1^2 + x_2^2)\right)$
Search domain: $-3 \le x_1, x_2 \le 3$
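As a concrete illustration, two of the benchmarks above can be written in a few lines of Python. This is a minimal sketch; the function names and return conventions are ours, not taken from the paper's code.

```python
import numpy as np

def binh_korn(x1, x2):
    """Binh and Korn objectives; `feasible` reports whether both constraints hold."""
    f1 = 4 * x1**2 + 4 * x2**2
    f2 = (x1 - 5)**2 + (x2 - 5)**2
    feasible = (x1 - 5)**2 + x2**2 <= 25 and (x1 - 8)**2 + (x2 + 3)**2 >= 7.7
    return f1, f2, feasible

def zdt1(x):
    """ZDT N1 for a 30-dimensional decision vector with components in [0, 1]."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1 + (9 / 29) * np.sum(x[1:])
    h = 1 - np.sqrt(f1 / g)
    return f1, g * h
```

For example, `zdt1` evaluated at $x_1 = 1$ with all other components zero lies on the true Pareto front, where $g = 1$ and $f_2 = 1 - \sqrt{f_1}$.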

Appendix B

Thirty statistical simulation runs were performed using NSGA-II, NSGA-III, M-PF, and the eMPF optimizer on the test functions mentioned in Appendix A.
Table A2. Results for the two-objective test functions using each optimization technique.
Test Function | Method | Minimum f1 | Minimum f2 | Mean f1 | Mean f2 | Std. dev. f1 | Std. dev. f2
Binh and Korn | NSGA-II | 0.000948 | 4.005556 | 52.645992 | 17.283002 | 39.400303 | 12.500695
 | NSGA-III | 9.332152 | 4.533799 | 100.464088 | 10.277145 | 43.565188 | 13.756248
 | M-PF | 9.498507 | 10.473067 | 32.993437 | 19.539373 | 18.265393 | 7.521567
 | eMPF | 0.002037 | 4.023522 | 14.015699 | 29.174421 | 17.331893 | 5.806211
Chankong and Haimes | NSGA-II | 10.119513 | −217.695484 | 113.45425 | −112.737342 | 62.128512 | 62.677727
 | NSGA-III | 96.621684 | −217.730419 | 213.296477 | −210.363403 | 20.656792 | 4.166874
 | M-PF | 57.01729 | −101.645503 | 80.131923 | −80.211533 | 48.913044 | 48.881173
 | eMPF | 10.125042 | −217.686516 | 127.519573 | −127.352052 | 23.872052 | 4.71453
Fonseca–Fleming | NSGA-II | 0.000034 | 0.000042 | 0.578248 | 0.580166 | 0.296733 | 0.296151
 | NSGA-III | 0.092419 | 0.032975 | 0.723804 | 0.497992 | 0.153209 | 0.159965
 | M-PF | 0.070628 | 0.066346 | 0.609386 | 0.596173 | 0.226319 | 0.233033
 | eMPF | 0.000022 | 0.000021 | 0.511756 | 0.499501 | 0.465786 | 0.466366
Test Function 4 | NSGA-II | −6.504972 | −8.499939 | −1.534355 | −8.145423 | 3.336720 | 0.277358
 | NSGA-III | −6.506321 | −8.498921 | −5.331177 | −7.623771 | 3.929542 | 0.229023
 | M-PF | −6.392928 | −8.316240 | −4.654117 | −7.854135 | 1.901964 | 0.210908
 | eMPF | −6.506243 | −8.499792 | −5.808833 | −7.636678 | 2.520614 | 0.161645
Kursawe | NSGA-II | −19.785629 | −5.138218 | −10.217168 | −2.560673 | 3.696744 | 1.566748
 | NSGA-III | −19.800519 | −2.761169 | −12.516551 | −1.290272 | 2.812103 | 0.732146
 | M-PF | −12.427413 | −3.105100 | −9.705192 | −1.623625 | 0.963143 | 0.812313
 | eMPF | −19.823842 | −5.133368 | −12.190258 | −1.771524 | 3.224230 | 1.219680
Schaffer function N1 | NSGA-II | 0.000000 | 0.000000 | 1.309796 | 1.335708 | 1.171495 | 1.178720
 | NSGA-III | 0.458913 | 0.452471 | 2.092166 | 1.548018 | 1.830392 | 1.803354
 | M-PF | 0.314452 | 0.365226 | 1.484631 | 1.911246 | 1.678226 | 1.713672
 | eMPF | 0.000000 | 0.000000 | 1.118533 | 1.121366 | 0.726276 | 0.734062
Schaffer function N2 | NSGA-II | −0.999966 | 0.000000 | 0.519161 | 7.217303 | 1.030891 | 4.693345
 | NSGA-III | −0.824930 | 7.796460 | −0.624833 | 14.028193 | 0.942749 | 4.035698
 | M-PF | 0.178468 | 3.991629 | 0.742443 | 8.141138 | 1.744642 | 6.828264
 | eMPF | −0.999503 | 0.000000 | 2.162599 | 0.964716 | 0.513256 | 1.922794
Poloni's two-objective | NSGA-II | 1.000107 | 0.000011 | 6.867533 | 6.300837 | 5.010289 | 9.464460
 | NSGA-III | 2.072667 | 8.096344 | 7.516216 | 13.666458 | 7.204542 | 11.749644
 | M-PF | 2.646164 | 1.748814 | 12.385470 | 3.525388 | 5.237112 | 7.848127
 | eMPF | 1.000656 | 0.000035 | 3.790499 | 2.429406 | 2.910871 | 3.427426
ZDT N1 | NSGA-II | 0.000016 | 1.830778 | 0.259772 | 3.291023 | 0.295645 | 0.985772
 | NSGA-III | 0.000012 | 1.836794 | 0.294363 | 3.228128 | 0.313797 | 1.012261
 | M-PF | 0.000016 | 1.852934 | 0.279310 | 3.224510 | 0.302015 | 0.972492
 | eMPF | 0.000027 | 1.831970 | 0.284748 | 3.219609 | 0.305802 | 0.967179
ZDT N2 | NSGA-II | 0.000020 | 3.360521 | 0.146886 | 4.209470 | 0.265380 | 0.626616
 | NSGA-III | 0.000018 | 3.397406 | 0.150892 | 4.200882 | 0.275458 | 0.627170
 | M-PF | 0.000015 | 3.389850 | 0.124176 | 4.231827 | 0.244155 | 0.646990
 | eMPF | 0.000015 | 3.384750 | 0.154723 | 4.262611 | 0.274839 | 0.691868
ZDT N3 | NSGA-II | 0.000015 | 1.197987 | 0.302574 | 3.009383 | 0.295633 | 1.165188
 | NSGA-III | 0.000016 | 1.250750 | 0.288800 | 3.049543 | 0.294440 | 1.148127
 | M-PF | 0.000015 | 1.230596 | 0.299095 | 3.019458 | 0.296260 | 1.142140
 | eMPF | 0.000014 | 1.221293 | 0.306138 | 2.997215 | 0.302244 | 1.155063
ZDT N4 | NSGA-II | 0.000014 | 28.918132 | 0.084849 | 64.349782 | 0.174179 | 32.479856
 | NSGA-III | 0.000017 | 30.178694 | 0.077131 | 69.368632 | 0.168524 | 34.873943
 | M-PF | 0.000011 | 28.018323 | 0.081965 | 65.590350 | 0.166882 | 33.832099
 | eMPF | 0.000013 | 27.480261 | 0.105229 | 62.256601 | 0.201626 | 31.532401
ZDT N6 | NSGA-II | 0.280775 | 6.356673 | 0.492336 | 7.271715 | 0.278471 | 0.661174
 | NSGA-III | 0.280775 | 6.233513 | 0.499382 | 7.221104 | 0.281440 | 0.653730
 | M-PF | 0.280775 | 6.276437 | 0.474475 | 7.257931 | 0.269862 | 0.630176
 | eMPF | 0.280775 | 6.268319 | 0.491137 | 7.239478 | 0.285670 | 0.677693
Osyczka and Kundu | NSGA-II | −258.627419 | 5.194767 | −164.898915 | 28.366042 | 62.986998 | 21.322610
 | NSGA-III | −258.633516 | 5.433823 | −168.598553 | 29.169913 | 62.053323 | 20.778173
 | M-PF | −255.080484 | 5.367792 | −162.235317 | 26.606650 | 60.836597 | 18.988171
 | eMPF | −257.382419 | 5.312247 | −167.069531 | 28.639926 | 61.643875 | 21.469461
Constr-Ex | NSGA-II | 0.390717 | 1.007121 | 0.547760 | 4.747238 | 0.139116 | 2.426397
 | NSGA-III | 0.390230 | 1.515615 | 0.441708 | 7.518592 | 0.093502 | 1.616727
 | M-PF | 0.445405 | 1.175281 | 0.616617 | 2.902549 | 0.093789 | 1.547132
 | eMPF | 0.391615 | 1.004618 | 0.766952 | 1.719732 | 0.099745 | 1.512755
VR-UC Test 1 | NSGA-II | 0.052761 | 1.000574 | 0.156242 | 17.296326 | 0.193932 | 10.902374
 | NSGA-III | 0.052718 | 2.866609 | 0.074163 | 31.895484 | 0.086781 | 8.590167
 | M-PF | 0.070903 | 2.921766 | 0.144710 | 7.997647 | 0.057197 | 3.582939
 | eMPF | 0.053224 | 1.000024 | 0.705932 | 2.339642 | 0.174928 | 5.048489
VR-UC Test 2 | NSGA-II | −4.995860 | −9.993871 | −4.995082 | −9.992874 | 0.002898 | 0.004093
 | NSGA-III | −4.995535 | −9.993506 | −4.995059 | −9.992905 | 0.002668 | 0.003992
 | M-PF | −4.995112 | −9.993227 | −4.994349 | −9.992267 | 0.003135 | 0.003989
 | eMPF | −4.996121 | −9.994576 | −4.995738 | −9.993998 | 0.002667 | 0.003675
MSGA Test 1 | NSGA-II | 0.234489 | 0.235890 | 0.706906 | 0.709010 | 0.197887 | 0.197780
 | NSGA-III | 0.549158 | 0.491633 | 0.779350 | 0.734232 | 0.086980 | 0.100268
 | M-PF | 0.554772 | 0.531218 | 0.768762 | 0.768141 | 0.051210 | 0.054106
 | eMPF | 0.231459 | 0.229316 | 0.661794 | 0.670574 | 0.247905 | 0.247862
Table A3. Results of the minimum values for the three-objective test functions using each optimization technique.
Test Function | Method | Minimum f1 | Minimum f2 | Minimum f3
Viennet | NSGA-II | 0.000051 | 15.000000 | −0.099997
 | NSGA-III | 131,609.806452 | 443,378.967742 | 0.000004
 | M-PF | 4292.110524 | 10,870.875716 | −0.002940
 | eMPF | 0.000028 | 15.000026 | −0.099998
MHHM1 | NSGA-II | 9.6666 × 10⁻¹¹ | 7.3333 × 10⁻¹¹ | 6.33333 × 10⁻¹¹
 | NSGA-III | 6.193743 × 10⁻⁵ | 4.7659963 × 10⁻⁵ | 3.433997 × 10⁻⁵
 | M-PF | 2.4781883 × 10⁻⁵ | 7.064937 × 10⁻⁵ | 7.134097 × 10⁻⁵
 | eMPF | 1.0666 × 10⁻¹⁰ | 9.6666 × 10⁻¹¹ | 5.3333 × 10⁻¹¹
MHHM2 | NSGA-II | 6.18220 × 10⁻⁶ | 1.7605 × 10⁻⁶ | 2.837620 × 10⁻⁶
 | NSGA-III | 0.001283 | 0.001229 | 0.00249529
 | M-PF | 0.00029338 | 0.0004046536 | 0.0006900754
 | eMPF | 6.868939 × 10⁻⁶ | 1.63898 × 10⁻⁶ | 4.5422 × 10⁻⁶
Table A4. Results of the mean values for the three-objective test functions using each optimization technique.
Test Function | Method | Mean f1 | Mean f2 | Mean f3
Viennet | NSGA-II | 3.330086 | 15.285244 | 0.053903
 | NSGA-III | 132,295.691684 | 445,689.241041 | 0.000004
 | M-PF | 4607.780516 | 11,696.522534 | 0.001230
 | eMPF | 0.572229 | 15.485611 | −0.024929
MHHM1 | NSGA-II | 0.0034099 | 0.0009104 | 0.00341086
 | NSGA-III | 0.0013004 | 0.0005224 | 0.00474447
 | M-PF | 0.00443803 | 0.002016614 | 0.00459519
 | eMPF | 0.0026903 | 0.00020094 | 0.00271158
MHHM2 | NSGA-II | 0.0054150 | 0.005925 | 0.00555724
 | NSGA-III | 0.0035569 | 0.003649 | 0.0063590
 | M-PF | 0.004642219 | 0.00528738 | 0.005879
 | eMPF | 0.0042817 | 0.0046030 | 0.004078
Table A5. Results of the standard deviations for the three-objective test functions using each optimization technique.
Test Function | Method | Std. dev. f1 | Std. dev. f2 | Std. dev. f3
Viennet | NSGA-II | 2.688537 | 0.511459 | 0.081092
 | NSGA-III | 6979.937413 | 26,870.603732 | 0.000000
 | M-PF | 2278.826105 | 7778.051973 | 0.010432
 | eMPF | 0.330933 | 0.328919 | 0.049485
MHHM1 | NSGA-II | 0.00313115 | 0.00079 | 0.0031071
 | NSGA-III | 0.0016641 | 0.0004383 | 0.00146057
 | M-PF | 0.0045989 | 0.00074356 | 0.004502838
 | eMPF | 0.001529 | 0.00059 | 0.0015419
MHHM2 | NSGA-II | 0.003749 | 0.00417569 | 0.0037675
 | NSGA-III | 0.001986 | 0.0018050 | 0.0020722
 | M-PF | 0.0032253 | 0.0035306 | 0.0034066
 | eMPF | 0.0020989 | 0.0021489 | 0.002019
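The minimum, mean, and standard deviation statistics reported in the tables above can be reproduced from raw run data with a short aggregation routine. The sketch below is a minimal illustration, assuming each of the 30 statistical runs yields an array of final objective vectors; the helper name is ours, not from the paper's code.

```python
import numpy as np

def summarize_runs(fitness_runs):
    """Pool the final objective vectors of all statistical runs and report
    the per-objective minimum, mean, and standard deviation."""
    pooled = np.vstack(fitness_runs)  # shape: (total_solutions, n_objectives)
    return pooled.min(axis=0), pooled.mean(axis=0), pooled.std(axis=0)
```

For example, calling `summarize_runs` on a list of 30 arrays, one per run, yields the three rows reported for each optimizer in Tables A2 through A5.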

Appendix C

This appendix presents the generational distance, spread, and spacing values computed from the fitness values. The Pareto fronts generated for Appendix B were used in these calculations. The code used to compute these metrics is available at the GitHub link provided.
Table A6. Generational distance (GD), spread (Δ), and spacing (Sp) of the test functions.
Test Function | Method | Generational Distance (GD) | Spread (Δ) | Spacing (Sp)
Binh and Korn | NSGA-II | 0.069016 | 0.938548 | 0.185688
 | NSGA-III | 0.378684 | 1.530796 | 0.554448
 | M-PF | 1.333222 | 1.233198 | 1.062628
 | eMPF | 0.770759 | 1.584592 | 1.040446
Chankong and Haimes | NSGA-II | 0.132257 | 0.803492 | 0.216754
 | NSGA-III | 6.711842 | 1.600441 | 3.754718
 | M-PF | 1.897858 | 0.99258 | 0.41429
 | eMPF | 10.481315 | 1.220204 | 2.947663
Fonseca–Fleming | NSGA-II | 0.000838 | 0.86052 | 0.001535
 | NSGA-III | 0.002347 | 1.401653 | 0.004618
 | M-PF | 0.000664 | 1.134345 | 0.00218
 | eMPF | 0.006234 | 1.664327 | 0.006798
Test Function 4 | NSGA-II | 0.006338 | 0.938548 | 0.039957
 | NSGA-III | 0.133243 | 1.530796 | 0.287708
 | M-PF | 0.020431 | 1.233198 | 0.061413
 | eMPF | 0.081445 | 1.584592 | 0.14012
Kursawe | NSGA-II | 0.006938 | 0.803527 | 0.336557
 | NSGA-III | 0.70871 | 0.911151 | 0.470105
 | M-PF | 0.127039 | 1.19283 | 0.334158
 | eMPF | 0.075359 | 0.892111 | 0.45627
Schaffer function N1 | NSGA-II | 0.000985 | 0.772657 | 0.002629
 | NSGA-III | 0.003258 | 1.44585 | 0.007116
 | M-PF | 0.003113 | 1.44585 | 0.00495
 | eMPF | 0.002111 | 1.274417 | 0.005048
Schaffer function N2 | NSGA-II | 0.001906 | 0.742149 | 0.006122
 | NSGA-III | 0.130632 | 1.514319 | 0.097936
 | M-PF | 0.010325 | 1.319566 | 0.023796
 | eMPF | 0.096201 | 1.621567 | 0.070892
Poloni's two-objective | NSGA-II | 0.0127 | 1.47718 | 0.737613
 | NSGA-III | 0.102661 | 1.789741 | 0.693373
 | M-PF | 0.031244 | 1.791253 | 0.664818
 | eMPF | 0.186827 | 1.55847 | 0.984685
Zitzler–Deb–Thiele's N1 | NSGA-II | 0.042448 | 0.7066 | 0.074755
 | NSGA-III | 0.052309 | 0.712719 | 0.077806
 | M-PF | 0.045381 | 0.721581 | 0.07787
 | eMPF | 0.048533 | 0.685075 | 0.147347
Zitzler–Deb–Thiele's N2 | NSGA-II | 0.073596 | 0.697774 | 0.19127
 | NSGA-III | 0.041085 | 0.710485 | 0.113637
 | M-PF | 0.068744 | 0.738163 | 0.120021
 | eMPF | 0.070442 | 0.712368 | 0.16111
Zitzler–Deb–Thiele's N3 | NSGA-II | 0.032912 | 0.714989 | 0.098628
 | NSGA-III | 0.030987 | 0.660256 | 0.102081
 | M-PF | 0.044047 | 0.662726 | 0.111724
 | eMPF | 0.040325 | 0.72205 | 0.084448
Zitzler–Deb–Thiele's N4 | NSGA-II | 0.810677 | 0.744636 | 9.043884
 | NSGA-III | 2.272926 | 0.757631 | 7.516822
 | M-PF | 2.018945 | 0.786935 | 8.574137
 | eMPF | 2.279626 | 0.738201 | 9.78726
Zitzler–Deb–Thiele's N6 | NSGA-II | 0.191495 | 0.685276 | 0.115021
 | NSGA-III | 0.099431 | 0.59864 | 0.140287
 | M-PF | 0.065991 | 0.638246 | 0.173285
 | eMPF | 0.092465 | 0.69825 | 0.260617
Osyczka and Kundu | NSGA-II | 1.504086 | 1.178373 | 3.283527
 | NSGA-III | 1.294484 | 1.127386 | 2.84922
 | M-PF | 1.316762 | 1.132501 | 3.077236
 | eMPF | 1.388888 | 1.2232 | 3.686033
Constr-Ex | NSGA-II | 0.00495 | 0.714432 | 0.012929
 | NSGA-III | 0.043087 | 1.313374 | 0.067725
 | M-PF | 0.02453 | 1.179857 | 0.042256
 | eMPF | 0.07408 | 1.683737 | 0.090675
MSGA Test 1 | NSGA-II | 0.000538 | 0.972065 | 0.002284
 | NSGA-III | 0.013212 | 1.038185 | 0.015305
 | M-PF | 0.012998 | 1.492854 | 0.006631
 | eMPF | 0.003187 | 1.242157 | 0.004642
VR-UC Test 1 | NSGA-II | 0.01332 | 0.786952 | 0.05018
 | NSGA-III | 0.070166 | 1.427575 | 0.176857
 | M-PF | 0.142823 | 1.316431 | 0.190408
 | eMPF | 0.38772 | 1.695775 | 0.402943
VR-UC Test 2 | NSGA-II | 9.4 × 10⁻⁵ | 0.717724 | 0.00000
 | NSGA-III | 9.4 × 10⁻⁵ | 0.636498 | 0.00000
 | M-PF | 0.000324 | 0.577738 | 0.00000
 | eMPF | 0.000273 | 0.712867 | 0.00000
Viennet | NSGA-II | 0.00169 | 0.782624 | 0.015777
 | NSGA-III | 0.627138 | 0.989541 | 0.087255
 | M-PF | 1.615187 | 1.21211 | 0.169447
 | eMPF | 0.648107 | 1.557092 | 0.095202
MHHM1 | NSGA-II | 6 × 10⁻⁶ | 0.772737 | 1.1 × 10⁻⁵
 | NSGA-III | 2.8 × 10⁻⁵ | 1.640253 | 3.3 × 10⁻⁵
 | M-PF | 1.8 × 10⁻⁵ | 1.620182 | 2.6 × 10⁻⁵
 | eMPF | 2.3 × 10⁻⁵ | 1.688381 | 3.5 × 10⁻⁵
MHHM2 | NSGA-II | 0.000401 | 0.651384 | 0.003814
 | NSGA-III | 0.001194 | 0.765195 | 0.001724
 | M-PF | 0.000514 | 0.798712 | 0.003292
 | eMPF | 0.001207 | 0.702216 | 0.001756
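The three metrics in Table A6 can be computed directly from an obtained front and a reference (true) front. The sketch below gives one common formulation of each metric; definitions vary slightly across papers, so treat this as an illustration under those assumptions rather than the authors' exact code.

```python
import numpy as np

def generational_distance(front, true_front):
    """Distance from each obtained point to its nearest true Pareto point,
    here the common sqrt-of-sum-of-squares divided by n formulation."""
    d = np.array([np.min(np.linalg.norm(true_front - p, axis=1)) for p in front])
    return np.sqrt(np.sum(d ** 2)) / len(front)

def spacing(front):
    """Schott's spacing: variation of nearest-neighbor L1 gaps along the front."""
    n = len(front)
    d = np.array([min(np.abs(front[i] - front[j]).sum() for j in range(n) if j != i)
                  for i in range(n)])
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1))

def spread(front, extremes):
    """Deb's Delta for two objectives: `front` is sorted by the first
    objective, `extremes` holds the two boundary points of the true front."""
    front = front[np.argsort(front[:, 0])]
    d = np.linalg.norm(np.diff(front, axis=0), axis=1)  # consecutive gaps
    df = np.linalg.norm(front[0] - extremes[0])
    dl = np.linalg.norm(front[-1] - extremes[1])
    return (df + dl + np.abs(d - d.mean()).sum()) / (df + dl + len(d) * d.mean())
```

A perfectly converged front gives GD of zero, and a perfectly uniform front gives spacing near zero, which is why these metrics complement each other in the comparisons above.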

Figure 1. Set of objective space solutions for objective functions $f_1, f_2$, along with solutions A, B, and C.
Figure 2. eMPF process flow chart.
Figure 3. Pareto front generated for the Chankong and Haimes function using NSGA-II, NSGA-III, M-PF, and eMPF optimizers.
Figure 4. Search space solutions for the Constr-Ex function.
Figure 5. Pareto front for the MHHM functions.

