Article

A Stable Large-Scale Multiobjective Optimization Algorithm with Two Alternative Optimization Methods

College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Entropy 2023, 25(4), 561; https://doi.org/10.3390/e25040561
Submission received: 20 February 2023 / Revised: 20 March 2023 / Accepted: 20 March 2023 / Published: 25 March 2023

Abstract

For large-scale multiobjective evolutionary algorithms based on the grouping of decision variables, the challenge is to design a stable grouping strategy to balance convergence and population diversity. This paper proposes a large-scale multiobjective optimization algorithm with two alternative optimization methods (LSMOEA-TM). In LSMOEA-TM, two alternative optimization methods, which adopt two grouping strategies to divide decision variables, are introduced to efficiently solve large-scale multiobjective optimization problems. Furthermore, this paper introduces a Bayesian-based parameter-adjusting strategy to reduce computational costs by optimizing the parameters in the proposed two alternative optimization methods. The proposed LSMOEA-TM and four efficient large-scale multiobjective evolutionary algorithms have been tested on a set of benchmark large-scale multiobjective problems, and the statistical results demonstrate the effectiveness of the proposed algorithm.

1. Introduction

Nowadays, optimization problems with large-scale decision variables appear in various fields, such as artificial intelligence [1], big data mining [2], large-scale software engineering [3], economic decision-making problems [4,5], and so on. Large-scale multiobjective optimization problems (LSMOPs) are generally referred to as multiobjective optimization problems involving hundreds or even thousands of decision variables [6,7]. However, studies have shown that traditional multiobjective evolutionary algorithms, such as MOEA/D [8], NSGA-III [9], and MOPSO [10], tend to converge slowly when solving LSMOPs. This is mainly because the volume of the decision-variable space grows exponentially as the number of decision variables increases, making the search for Pareto optimal solutions very difficult. This phenomenon is called the curse of dimensionality [11,12].
Recently, researchers have proved that the idea of “divide and conquer”, which has been widely adopted in cooperative coevolution (CC), can efficiently address LSMOPs [13]. The core idea of “divide and conquer” is to divide the large-scale decision variables in LSMOPs into multiple lower-dimensional groups, thereby transforming LSMOPs into multiple small-scale problems. After that, conventional evolutionary algorithms (EAs), such as the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution algorithm (DE), can be used as efficient tools to solve the transformed small-scale problems. Please note that conventional EAs cannot be used to solve LSMOPs directly because of the “curse of dimensionality”. The main difference between large-scale multiobjective optimization algorithms (LSMOEAs) and conventional EAs is that various strategies have been incorporated into LSMOEAs to solve the “curse of dimensionality” caused by large-scale decision variables.
Among various large-scale multiobjective optimization algorithms, LSMOEAs based on decision-variable grouping strategies have received more and more attention. Most of them divide the decision variables into convergence-related and diversity-related variables [12]. Specifically, the convergence-related variables help LSMOEAs find solution sets closer to the ideal solution sets, whereas the diversity-related variables help LSMOEAs find solution sets with a better distribution. Existing decision-variable grouping strategies can be divided into fixed grouping strategies [13,14,15,16,17,18,19,20] and dynamic grouping strategies [21,22,23,24]. In a fixed grouping strategy, the grouping results do not change during the evolution process, e.g., the evolutionary algorithm for large-scale many-objective optimization (LMEA) [16]. In a dynamic grouping strategy, the grouping results change during the optimization process [12], e.g., cooperative coevolutionary generalized differential evolution 3 (CCGDE3) [21]. The quality of the grouping of decision variables directly affects the performance of the algorithms [24]. Therefore, grouping decision variables reasonably and effectively is a challenging problem for LSMOEAs based on grouping strategies.
For LSMOEAs with fixed grouping strategies, the grouping results of the decision variables are obtained according to the initial population, whose members are the initial solutions of the optimized problem and are usually generated randomly in the decision-variable space. Since the grouping results never change in such algorithms, the optimization performance may be unsatisfactory if the grouping results are not good enough [12]. Figure 1 gives the grouping results of LMEA, a representative LSMOEA with a fixed grouping strategy, on the benchmark test problem BT6 [25]. In LMEA, the division of decision variables is implemented on the initial population, and the grouping results are used throughout the optimization process. It can be observed from Figure 1 that most decision variables (29 out of 30) are regarded as convergence-related, and only one is diversity-related. Therefore, LMEA will likely obtain a Pareto optimal set that is not well distributed on the Pareto front of BT6. In Figure 2, the black curve represents the ideal Pareto front, which shows the position of the ideal solutions of BT6. The gray dots are the members (solutions) of the current population; they should be distributed as evenly as possible on the Pareto front to guarantee population diversity. However, in LMEA, the population collapses onto a single point of the Pareto front as the evolutionary process continues (as shown in Figure 2c), which demonstrates that LMEA fails to maintain population diversity at the last stage of the evolution process. This is because the grouping results based on the initial population are sometimes unreasonable, i.e., too many decision variables of BT6 are classified as convergence-related (as shown in Figure 1). In this case, using fixed grouping results based on the initial population to guide the evolution process cannot achieve a balance between convergence and population diversity.
This paper proposes a large-scale multiobjective optimization algorithm with two alternative optimization methods (LSMOEA-TM) to deal with LSMOPs, and the main new contributions are summarized as follows.
(1)
In the proposed two alternative optimization methods, two grouping strategies, namely, the convergence-related grouping strategy and the diversity-related grouping strategy, are introduced to group the large-scale decision variables based on the evaluation of the population. Specifically, if there is a significant performance degradation in the current population, the diversity-oriented stage is implemented by adopting the diversity-related grouping strategy. Once the diversity-oriented stage has been carried out for a certain number of generations, LSMOEA-TM switches back to the convergence-oriented stage with the help of the convergence-related grouping strategy.
(2)
A Bayesian-based parameter adjustment strategy is proposed to modify the parameters in the convergence-related and diversity-related grouping strategies to reduce the computational cost of the proposed algorithm.
The rest of this article is organized as follows. In Section 2, the representative algorithms for solving LSMOPs are reviewed. Section 3 introduces the proposed LSMOEA-TM in detail. Section 4 presents the experimental results and analysis of LSMOEA-TM and four efficient LSMOEAs. Section 5 further investigates the proposed strategies, and Section 6 gives the concluding remarks.

2. Background

2.1. Large-Scale Multiobjective Optimization Problems (LSMOPs)

The problems having multiple conflicting objectives are called multiobjective optimization problems (MOPs) [8]. An MOP can be formulated as shown in Equation (1), where $\mathbf{x} = (x_1, x_2, \ldots, x_D)$ is a candidate solution, $M$ is the number of objectives, and $D$ is the number of decision variables. If $D \geq 100$, the MOP is regarded as a large-scale MOP, namely, an LSMOP.
$$\min F(\mathbf{x}) = \big(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x})\big), \quad \text{subject to } \mathbf{x} \in \Omega \subseteq \mathbb{R}^D \tag{1}$$
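To make the formulation concrete, the following minimal Python sketch defines a hypothetical bi-objective function of the form in Equation (1); the function itself is illustrative and is not one of the benchmark problems used in this paper.

```python
import numpy as np

def toy_mop(x):
    """Hypothetical bi-objective function F(x) = (f1(x), f2(x)) for a
    candidate solution x = (x_1, ..., x_D); with D >= 100 the problem
    would qualify as an LSMOP."""
    tail = np.sum(x[1:] ** 2)   # a distance-type term shared by both objectives
    f1 = x[0] + tail            # pulled down by small x[0]
    f2 = 1.0 - x[0] + tail      # pulled down by large x[0], so f1 and f2 conflict
    return np.array([f1, f2])
```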
Since the objectives of an MOP are mutually conflicting, no single solution can optimize all objectives simultaneously [26]. For a minimization MOP, the dominance relation between solutions $\mathbf{x}$ and $\mathbf{y}$ is defined by Equation (2):
$$\mathbf{x} \preceq \mathbf{y} \iff f_i(\mathbf{x}) \leq f_i(\mathbf{y}) \;\; \forall i \in \{1, 2, \ldots, M\} \;\wedge\; \exists j \in \{1, 2, \ldots, M\}: f_j(\mathbf{x}) < f_j(\mathbf{y}) \tag{2}$$
If a solution $\mathbf{x}^*$ is not dominated by any other solution, then $\mathbf{x}^*$ is a nondominated solution. For MOPs, optimization algorithms aim to find a set of nondominated solutions (Pareto optimal solutions). Recently, multiobjective evolutionary algorithms (MOEAs) have become efficient tools for handling MOPs [8,10]. However, the performance of MOEAs may degrade when they are extended to solve LSMOPs [12], because the decision-variable space grows dramatically with the number of decision variables. Therefore, it is difficult for traditional MOEAs to find promising nondominated solutions when handling LSMOPs.
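As a quick illustration, the dominance test of Equation (2) can be written in a few lines; this sketch assumes minimization and NumPy objective vectors.

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy under minimization:
    fx is no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))
```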

2.2. Large-Scale Multiobjective Evolutionary Algorithms (LSMOEAs)

Most existing LSMOEAs are based on the idea of “divide and conquer”: a complex problem is divided into several smaller subproblems, and the solution of the original problem is obtained by collating the solutions of the subproblems. Following this idea, an LSMOP can be divided into multiple small-scale multiobjective optimization problems, and the solution of the original LSMOP is obtained by integrating the solutions of these transformed small-scale problems. The existing LSMOEAs can be mainly divided into the following categories.

2.2.1. LSMOEAs Based on Fixed Grouping

The main idea of the fixed grouping strategy is to adopt fixed grouping results of the decision variables throughout the evolution process. Ma et al. [14] proposed a multiobjective evolutionary optimization algorithm based on decision-variable analysis, namely, MOEA/DVA, to solve LSMOPs. In MOEA/DVA, the decision variables of some individuals, randomly selected from the initial population, were perturbed. Then, the decision variables were divided into convergence-related and diversity-related variables according to the dominance relationships among the perturbed individuals. Based on MOEA/DVA, a new fixed grouping strategy, called LMEA [16], was proposed. This strategy divided the decision variables according to the angles between the perturbation fitting lines and the hyperplane normal lines. Cao et al. [20] proposed a new large-scale multiobjective optimization algorithm called mogDG-shift, which adopted a graph-based differential grouping strategy to decompose the large-scale decision variables.
The fixed grouping strategy achieves a stable grouping result since the grouping of the decision variables is fixed during the evolution process. However, most fixed grouping strategies obtain their grouping results from the initial population, which sometimes cannot reflect the characteristics of the decision-variable space. Therefore, a fixed grouping strategy implemented on the initial population may generate poor grouping results and lead to unsatisfactory optimization performance.

2.2.2. LSMOEAs Based on Dynamic Grouping

Antonio and Coello proposed a classic LSMOEA based on dynamic grouping, namely, CCGDE3 [21]. The main idea of CCGDE3 was to randomly divide the decision variables into several equal groups. Subsequently, Antonio and Coello replaced the third-generation generalized differential evolution (GDE3) in CCGDE3 with MOEA/D [8] and proposed a new algorithm, MOEA/D2 [22]. However, the group size was set to a fixed value in CCGDE3 and MOEA/D2, so these two algorithms may not adapt well to different types of LSMOPs. To deal with this problem, MOEA/D-RDG (MOEA/D combining random-based dynamic grouping) was proposed [24]. In MOEA/D-RDG, a grouping-parameter pool was constructed to improve the generalization ability of the algorithm.
Nowadays, dynamic grouping strategies are applied to solve various LSMOPs successfully. However, many of the existing dynamic grouping strategies are derived from random grouping, which may lead to unstable and unsatisfactory group results, especially for multiobjective problems with more than 1000 decision variables and multiobjective problems with complex relationships between decision variables [12].
For LSMOEAs with dynamic grouping strategies, the grouping results of the decision variables can adapt to the evolving population. However, dynamic grouping strategies cannot guarantee the stability of the grouping results, especially random-based ones. Figure 3 shows the IGD values obtained by CCGDE3 and LMEA over 30 independent runs on two benchmark problems, DTLZ2 and UF6. CCGDE3 and LMEA are typical LSMOEAs with dynamic and fixed grouping strategies, respectively. IGD is a widely used metric that evaluates a multiobjective algorithm's performance from the perspective of both convergence and diversity. As Figure 3 shows, the performance of CCGDE3 is considerably less stable than that of LMEA.

2.2.3. Other LSMOEAs

Besides the LSMOEAs based on fixed and dynamic grouping strategies, many other large-scale multiobjective optimization algorithms exist. A typical representative is the large-scale multiobjective optimization algorithm based on problem reformulation (LSMOF) [27]. The core idea of LSMOF is to transform the original LSMOP into a small-scale single-objective optimization problem through problem reconstruction. In DGEA [28] and S3-CMA-ES [29], the population was divided into multiple subpopulations, and each subpopulation focused on a single objective function. Farias and Araujo [30] proposed an inverse-modeling multiobjective evolutionary algorithm based on decomposition (IM-MOEA/D) to solve LSMOPs. IM-MOEA/D divided the decision-variable space into several subareas by reference vectors. Then, the population was divided into several groups according to their distribution in the decision-variable space, and each group evolved separately; the solutions of all groups cooperated to obtain promising results for LSMOPs. In [31], a fuzzy decision-variable framework with various internal optimizers (FDV) was proposed. In FDV, a two-stage optimization strategy, containing a rough optimization stage and a fine optimization stage, was introduced to handle LSMOPs efficiently.
It can be found that the algorithms mentioned above also adopt the idea of “divide and conquer”. However, in these algorithms, the division process focuses on objective spaces or populations rather than decision variables.

3. Method

Figure 4 shows the main framework of LSMOEA-TM. LSMOEA-TM adopts two alternative optimization methods, which comprise two optimization stages: the convergence-oriented stage and the diversity-oriented stage. As shown in Figure 5, the two stages adopt different grouping strategies to guide the evolution process. As shown in Figure 4, LSMOEA-TM chooses the appropriate optimization stage adaptively to balance convergence and population diversity. Specifically, if there is a significant performance degradation in the current population, the diversity-oriented stage is implemented by adopting the diversity-related grouping strategy. In Figure 4, newHV and oldHV are the values of the hypervolume (HV) metric [32] of the current and the last population, respectively. Once the diversity-oriented stage has been carried out for a certain number of generations, LSMOEA-TM implements the convergence-oriented stage with the help of the convergence-related grouping strategy. In LSMOEA-TM, the convergence-oriented and diversity-oriented stages are executed alternately to balance convergence and population diversity.
Furthermore, to reduce computational costs, this paper introduces a Bayesian-based parameter-adjusting strategy to modify the parameters in the convergence-related and diversity-related grouping strategies. As shown in Figure 5, the two stages have a similar structure; the difference is that they use the convergence-related grouping strategy and the diversity-related grouping strategy, respectively.
It can be observed from Figure 4 that the main operators in LSMOEA-TM are the initialization of the population and repository population, the Bayesian-based parameter adjustment, and the two alternative optimization methods. Detailed descriptions of the key operators are given below.

3.1. Initialization

As shown in Figure 6, the population (POP) is initialized as an N × D matrix, where N is the population size. Each row of POP is a candidate solution with D decision variables, and each element of a candidate solution is generated randomly within its search range. The repository population (REP) contains the nondominated solutions found by the algorithm so far; at the initialization step, REP contains the nondominated solutions in POP.
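A minimal sketch of this initialization, assuming box constraints with scalar (or broadcastable) bounds; the helper names are illustrative.

```python
import numpy as np

def initialize_pop(N, D, lower, upper, seed=0):
    """POP as an N x D matrix: each row is a candidate solution whose D
    elements are drawn uniformly from the search range [lower, upper]."""
    rng = np.random.default_rng(seed)
    return lower + rng.random((N, D)) * (upper - lower)

def nondominated_indices(F):
    """Indices of the rows of the N x M objective matrix F (minimization)
    that no other row Pareto-dominates; used here to fill REP from POP."""
    keep = []
    for i in range(F.shape[0]):
        if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                   for j in range(F.shape[0]) if j != i):
            keep.append(i)
    return keep
```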

3.2. Two Alternative Optimization Methods

In LSMOEA-TM, the two alternative optimization methods contain two stages: the convergence-oriented stage and the diversity-oriented stage. As shown in Figure 4, the difference between the two stages is that they adopt different grouping strategies to divide the decision variables. At first, the convergence-oriented stage is adopted to update POP. Then, LSMOEA-TM chooses the appropriate optimization stage according to the evaluation of POP. In Figure 4, newHV and oldHV are the values of the hypervolume (HV) metric of the current and the last POP, respectively. The HV is a commonly used metric for assessing the performance of a multiobjective algorithm; it is chosen here because calculating the HV does not require the ideal nondominated solutions of the problem [32], and the larger the value, the better the quality of the solutions. The condition $(newHV - oldHV)/oldHV > \varepsilon$ indicates that the updated POP is better than (or, for a negative $\varepsilon$, not significantly worse than) the last POP. In this case, LSMOEA-TM continues to perform the convergence-oriented stage by adopting the convergence-related grouping strategy. Otherwise, LSMOEA-TM performs the diversity-oriented stage by adopting the diversity-related grouping strategy. To balance population diversity and convergence, LSMOEA-TM switches back to the convergence-oriented stage after performing the diversity-oriented stage s times. A specific description of the update of POP and REP, the Bayesian-based parameter adjustment, and the convergence-related and diversity-related grouping strategies is presented below.
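The switching rule can be summarized by the sketch below; this is one reading of Figure 4, and the handling of the stage counter is an assumption rather than a detail given in the paper.

```python
def next_stage(new_hv, old_hv, eps, diversity_runs, s):
    """Keep the convergence-oriented stage while the relative HV change
    exceeds eps (eps may be negative, tolerating mild degradation); on a
    significant drop run the diversity-oriented stage, and after s
    consecutive diversity runs force a switch back to convergence."""
    improved = (new_hv - old_hv) / old_hv > eps
    if improved or diversity_runs >= s:
        return "convergence", 0        # reset the diversity-stage counter
    return "diversity", diversity_runs + 1
```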

3.2.1. Update of POP and REP

The update of POP is designed based on the grouping results of the decision variables. Algorithm 1 gives the detailed procedure. As described in Section 3.2.3, the decision variables are divided into convergence-related variables (CV) and diversity-related variables (DV). In Algorithm 1, the update of POP contains a convergence-related update stage and a diversity-related update stage, which are implemented on the convergence-related and diversity-related variables, respectively. In the convergence-related update stage, the parent individuals are selected by tournament selection according to the nondominated ranks of the individuals in POP (line 1 in Algorithm 1). After that, the crossover and mutation operators are applied only to the convergence-related variables of the parent individuals to obtain the offspring individuals (line 2 in Algorithm 1). If the offspring individuals are better than (dominate) the parent individuals, the parent individuals are replaced by the offspring individuals (lines 3–4 in Algorithm 1). Based on the POP1 obtained in the convergence-related update stage, the diversity-related update stage is implemented only on the diversity-related decision variables (lines 5–21 in Algorithm 1). There are two main differences between the two update stages. The first lies in the selection of the parent population: in the diversity-related update stage, the parent population is selected from the population randomly (line 5 in Algorithm 1). The second is that the diversity-related update stage obtains the updated population (new_POP) by selecting N individuals from Candidates with consideration of both nondominated ranks and angle distances (lines 8–21 in Algorithm 1), where N is the size of POP. The angle distance helps the algorithm obtain an updated population that is dispersedly distributed.
Since REP stores the nondominated solutions found by the algorithm so far, the update of REP is achieved by selecting the nondominated members from the updated POP and the original REP.
Algorithm 1: Update of POP
Input: population POP; convergence-related decision variables CV; diversity-related decision variables DV
Output: updated population new_POP
1. Select the parent population from POP by the tournament selection method according to the nondominated ranks of the individuals in POP;
2. Obtain offspring population Q by performing the crossover and mutation operators only on the CV of the parent population;
3. Candidates ← POP ∪ Q; obtain the nondominated ranks of the members in Candidates;
4. Select the N individuals with the best nondominated ranks from Candidates as POP1, where N is the size of POP;
5. Select the parent population from POP1 randomly;
6. Obtain offspring population Q by performing the crossover and mutation operators only on the DV of the parent population;
7. Candidates ← POP ∪ Q; obtain the nondominated ranks of the members in Candidates;
8. new_POP = {x | x ∈ Candidates and nondominated_rank(x) = 1}; if |new_POP| > N, delete members from new_POP randomly until |new_POP| = N;
9. R = 2;
10. While |new_POP| < N
11.   P ← the members of Candidates whose nondominated rank is R;
12.   If |new_POP| + |P| > N
13.     While |new_POP| < N
14.       new_POP ← new_POP ∪ {argmax_{x∈P} min_{y∈new_POP} angle(x, y)};
15.       Delete the selected x from P;
16.     EndWhile
17.   Else
18.     new_POP ← new_POP ∪ P;
19.   EndIf
20.   R = R + 1;
21. EndWhile
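The angle-based fill step in lines 13–16 can be sketched as follows; it assumes angle(x, y) is measured between objective vectors and that new_POP already holds at least one member (the rank-1 solutions from line 8).

```python
import numpy as np

def angle(u, v):
    """Angle between two objective vectors (the assumed metric of line 14)."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def fill_by_max_min_angle(new_pop, P, N):
    """While new_POP is short of N members, greedily add the candidate from
    the current rank P whose smallest angle to the already-selected members
    is largest, then remove it from P (lines 13-16 of Algorithm 1)."""
    P = [np.asarray(p, dtype=float) for p in P]
    while len(new_pop) < N and P:
        scores = [min(angle(x, y) for y in new_pop) for x in P]
        new_pop.append(P.pop(int(np.argmax(scores))))
    return new_pop
```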

3.2.2. Bayesian-Based Parameter Adjustment Strategy

As described in Section 3.2.3, both the convergence-related and diversity-related grouping strategies need two parameters, namely, nSel and nPer. nSel determines the number of individuals selected from POP to be perturbed, and nPer determines the number of perturbations applied to a decision variable during grouping. Therefore, nSel × nPer perturbations are required to group one variable. If nSel and nPer are too large, the grouping consumes too much computation; if they are too small, the grouping results may be inaccurate. In this paper, a Bayesian-based parameter adjustment strategy is introduced to obtain appropriate values for nSel and nPer and thus balance grouping accuracy and computational cost. More specifically, with the help of this strategy, the convergence-oriented and diversity-oriented stages can obtain the grouping results of the decision variables at a lower computational cost, as shown in Figure 5. The main steps of the Bayesian-based parameter adjustment strategy are as follows:
Step 1: Determine the form of the loss function, as shown in Equation (3);
Step 2: Generate a certain number of initial observation values for nSel and nPer and obtain the corresponding loss function values;
Step 3: Estimate the loss function values of observation samples by the probability surrogate model in Equation (4);
Step 4: Obtain the next values for nSel and nPer by the acquisition function in Equation (5).
In Step 1, the loss function is formulated as shown in Equation (3). The proposed parameter adjustment strategy aims to make nSel and nPer as small as possible while still producing acceptable grouping results. In Equation (3), CV(nSel, nPer) and DV(nSel, nPer) represent the numbers of convergence-related and diversity-related variables obtained by the grouping strategies with parameters nSel and nPer, respectively. If CV is much larger than DV, the algorithm may obtain a Pareto optimal set concentrated in a small area of the ideal Pareto front; if DV is much larger than CV, the algorithm may converge poorly. To balance convergence and diversity, CV and DV should be as close as possible [12]; the first term of Equation (3) quantifies this imbalance and thus evaluates the rationality of the grouping results. θ(nSel, nPer) is the regularization term of nSel and nPer, which drives the grouping parameters to be as small as possible once the decision variables are reasonably grouped. As shown in Equation (3), the L2 parameter regularization method is adopted to obtain θ(nSel, nPer), and λ is set to $2 / \left[(nSel_{\max}^2 - nSel_{\min}^2) + (nPer_{\max}^2 - nPer_{\min}^2)\right]$ so that θ varies between 0 and 1.
$$f(nSel, nPer) = \frac{\left| CV(nSel, nPer) - DV(nSel, nPer) \right|}{CV(nSel, nPer) + DV(nSel, nPer)} + \theta(nSel, nPer), \qquad \theta(nSel, nPer) = \frac{\lambda}{2}\left[\left(nSel^2 - nSel_{\min}^2\right) + \left(nPer^2 - nPer_{\min}^2\right)\right] \tag{3}$$
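A direct transcription of Equation (3) under the reconstruction above (the imbalance term normalized by CV + DV); the argument names are illustrative.

```python
def grouping_loss(n_cv, n_dv, nSel, nPer,
                  nSel_min, nSel_max, nPer_min, nPer_max):
    """Equation (3): a normalized imbalance between the numbers of
    convergence-related (n_cv) and diversity-related (n_dv) variables,
    plus the L2 regularizer that favors small nSel and nPer."""
    imbalance = abs(n_cv - n_dv) / (n_cv + n_dv)
    lam = 2.0 / ((nSel_max ** 2 - nSel_min ** 2) + (nPer_max ** 2 - nPer_min ** 2))
    theta = 0.5 * lam * ((nSel ** 2 - nSel_min ** 2) + (nPer ** 2 - nPer_min ** 2))
    return imbalance + theta
```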
In Step 2, a certain number of initial observation values of nSel and nPer are generated. For illustration, take nSel as an example, since the generation processes for nSel and nPer are similar. Suppose nSel_min and nSel_max are the lower and upper limits of nSel; the initial values of nSel are then obtained by a sampling method based on dichotomy. Specifically, the original interval [nSel_min, nSel_max] is first divided into two intervals, [nSel_min, (nSel_min + nSel_max)/2] and [(nSel_min + nSel_max)/2, nSel_max]. Then, two observation values of nSel are generated randomly, one from each interval. These steps are repeated until there are enough observation values for nSel.
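A small sketch of this dichotomy-based sampling; since the text does not state whether the halves are split further on repetition, this version simply resamples the same two halves.

```python
import random

def dichotomy_samples(lo, hi, n, seed=0):
    """Generate n initial observations of one parameter: split [lo, hi] at
    its midpoint and draw one value from each half, repeating until n
    values have been collected."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        mid = (lo + hi) / 2.0
        out.append(rng.uniform(lo, mid))
        if len(out) < n:
            out.append(rng.uniform(mid, hi))
    return out
```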
In Step 3, a probability surrogate model f*(nSel, nPer), shown in Equation (4), is adopted to estimate the value of the loss function f(nSel, nPer), since evaluating the loss function costs far more computation than evaluating the surrogate model. As shown in Equation (4), a Gaussian mixture model, i.e., a mixture of K Gaussian distributions, is adopted to estimate the loss function; it was chosen because it can, in theory, approximate any distribution [33]. In Equation (4), $\sum_{i=1}^{K} \alpha_i = 1$ and $\phi(nSel, nPer \mid \theta_i) \sim N(\mu_i, \sigma_i^2)$.
$$f^*(nSel, nPer) = \sum_{i=1}^{K} \alpha_i \, \phi(nSel, nPer \mid \theta_i) \tag{4}$$
In Step 4, the acquisition function is used to obtain the next values of nSel and nPer. Since expected improvement is the acquisition function most commonly adopted in Bayesian optimization [34], it is used here, as shown in Equation (5), to generate the new values of nSel and nPer. The generated values are then used in the grouping strategies to obtain the grouping results of the decision variables. Furthermore, the newly generated nSel and nPer and their loss function values are added to the observations used to build the probability surrogate model in the next iteration of the Bayesian-based parameter adjustment strategy. Therefore, as the evolution progresses, more observation values are generated, and the probability surrogate model approaches the real loss function.
$$EI(f^*) = \int_{-\infty}^{+\infty} \max(\eta - f^*, 0)\, p(f^* \mid nSel, nPer)\, df^*, \qquad (nSel_{new}, nPer_{new}) = \operatorname*{argmax}_{nSel,\, nPer} EI(f^*) \tag{5}$$
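When the surrogate's prediction of the loss at a candidate (nSel, nPer) is a single Gaussian, the integral in Equation (5) has the standard closed form below; for the Gaussian mixture surrogate of Equation (4), EI is the α_i-weighted sum of this quantity over the K components. The sketch assumes minimization, with η the best loss observed so far.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, eta):
    """Closed-form EI for a Gaussian predictive loss N(mu, sigma^2):
    EI = (eta - mu) * Phi(z) + sigma * phi(z), where z = (eta - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (eta - mu) / sigma
    return (eta - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```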

3.2.3. Convergence-Related and Diversity-Related Grouping Strategy

As discussed above, the grouping results of the decision variables directly influence the performance of LSMOEAs based on grouping strategies. This paper uses two different grouping strategies, carried out alternately during the optimization process, to obtain efficient grouping results for LSMOEA-TM.
In the convergence-related grouping strategy, the decision variables are grouped based on the perturbation results of individuals selected from POP: nSel individuals are randomly selected from the population, and each decision variable is perturbed nPer times by sampling in the decision-variable space. For a two-objective problem, Figure 7 demonstrates the perturbation results of three decision variables, i.e., x1, x2, and x3. In Figure 7, the dotted line L is the normal line of the hyperplane $\sum_{i=1}^{m} f_i = 1$ with m = 2; L therefore indicates the convergence direction for a minimized two-objective problem. If nSel = 2 and nPer = 10, then 2 individuals are selected, and each is perturbed 10 times. For example, the value of x1 in the 2 selected individuals is perturbed 10 times to obtain the perturbation results for x1, which are fitted by a straight line, as shown in Figure 7. After that, the angle between each fitting line and L is calculated: the smaller the angle, the more likely the variable is related to convergence. Since nSel = 2, each variable can be positioned in a two-dimensional angle space, and with the help of the K-means clustering method, all variables can be grouped into two clusters. The cluster with smaller angles contains the convergence-related variables, and the other cluster contains the diversity-related variables. In Figure 7, x1 has relatively large angle values in the angle space and is therefore assigned to the diversity-related group. Conversely, x3 is regarded as a convergence-related variable because of its small angle values, and x2 is assigned to the convergence-related group since it is closer to x3 in the angle space. However, as shown in Figure 7, the trajectory of the perturbations of x3 is nonlinear, so fitting the perturbation results of x3 with a straight line is inappropriate; it can be observed in Figure 7 that x3 is more likely to contribute to maintaining population diversity. Therefore, for variables with nonlinear perturbation curves, the convergence-related grouping strategy may produce wrong classification results and lead to unstable optimization performance. To compensate, the following diversity-related grouping strategy is adopted in this paper.
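A compact sketch of the clustering step, assuming the angle matrix has already been computed from the fitted lines; scikit-learn's KMeans stands in for whatever clustering implementation the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_by_angles(angles):
    """angles: D x nSel matrix whose (d, s) entry is the angle between the
    line fitted to the perturbations of variable d on selected individual s
    and the normal line L. Variables are clustered into two groups; the
    cluster with the smaller mean angle is taken as convergence-related."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(angles)
    mean_angle = [angles[labels == k].mean() for k in (0, 1)]
    cv_label = int(np.argmin(mean_angle))
    CV = np.flatnonzero(labels == cv_label)   # convergence-related variables
    DV = np.flatnonzero(labels != cv_label)   # diversity-related variables
    return CV, DV
```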
To overcome the shortcomings of the convergence-related strategy, the diversity-related grouping strategy is based on a dominance analysis of the perturbation results of the variables. As shown in Figure 7, the perturbation results of x3 are mutually dominated, so x3 is regarded as a convergence-related variable. For x1, the perturbation results are nondominated solutions; thus, x1 contributes more to maintaining population diversity and is regarded as a diversity-related variable. For x2, both dominated and nondominated relationships appear in its perturbation results, so x2 contributes to both diversity and convergence. In the diversity-related grouping strategy, the decision variables contributing to both diversity and convergence are classified as diversity-related variables. In this way, the algorithm using the diversity-related grouping strategy emphasizes maintaining population diversity in its evolution process.
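The dominance analysis for one variable can be sketched as below: any mutually nondominated pair among the perturbation results (whether all pairs, as for x1, or only some, as for x2) marks the variable as diversity-related.

```python
import numpy as np

def is_diversity_related(F):
    """F: K x M objective vectors obtained by perturbing one decision
    variable (minimization). If every pair is comparable under Pareto
    dominance, the variable only drives convergence; otherwise it is
    classified as diversity-related."""
    for i in range(F.shape[0]):
        for j in range(i + 1, F.shape[0]):
            comparable = np.all(F[i] <= F[j]) or np.all(F[j] <= F[i])
            if not comparable:       # found a mutually nondominated pair
                return True
    return False
```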

4. Results

4.1. Test Suites and Algorithms to Be Compared

In this section, five test suites, consisting of DTLZ [35], WFG [36], UF [37], BT [25], and LSMOP [7], were chosen to evaluate the performance of the proposed LSMOEA-TM. DTLZ, WFG, and UF are benchmark test suites widely used to evaluate algorithms for solving LSMOPs. BT contains a set of biased test problems that make it difficult for the evaluated algorithms to maintain population diversity. For DTLZ, WFG, UF, and BT, the number of decision variables was set to 100, 500, and 1000. LSMOP [7] is a recently proposed test suite that contains nine large-scale multiobjective problems (LSMOP1~LSMOP9). For all problems in LSMOP, the number of decision variables was set to 100 × M, where M is the number of objectives; since LSMOP1~LSMOP9 are all triobjective problems, they had 300 decision variables.
To verify the performance of the proposed algorithm, LSMOEA-TM was compared with four efficient large-scale multiobjective optimization algorithms:
(1)
MOEA/D2 [22], which is a representative algorithm based on the dynamic grouping strategy.
(2)
LMEA [16], which is a representative algorithm based on the fixed grouping strategy.
(3)
IM-MOEA/D [30], which uses a decomposition-based strategy to solve LSMOPs.
(4)
FDV [31], which utilizes a fuzzy search strategy to group decision variables when solving LSMOPs.

4.2. Experiment Setting and Measurement Methodology

All experiments in this paper were implemented in MATLAB R2020b on a desktop with a 3.60 GHz Intel Core i9-9900KF CPU, 32 GB of RAM, and a Windows 11 64-bit operating system.
For a fair comparison, MOEA/D2, LMEA, IM-MOEA/D, and FDV adopted the recommended parameter settings from [22], [16], [30], and [31], respectively. For the proposed LSMOEA-TM, the strategy switch threshold ε (Section 3.2) was set to −0.15, and the threshold value s (Section 3) was set to 3 empirically. All five algorithms adopted the simulated binary crossover (SBX) operator and the polynomial mutation operator to generate offspring, with crossover probability $p_c = 1.0$ and mutation probability $p_m = 1/D$, where D is the number of decision variables. For all algorithms, the population size was 100, and the maximum number of function evaluations was set to 1,000,000, 5,000,000, 15,000,000, 30,000,000, and 50,000,000 for the test problems with 100, 500, 1000, 2000, and 4000 decision variables, respectively.
In this paper, two widely used performance metrics, the inverted generational distance (IGD) [38] and the coverage over the Pareto front (CPF) [39], were adopted to quantitatively evaluate the compared algorithms. The IGD simultaneously evaluates the convergence and diversity of a solution set, while the CPF focuses on evaluating population diversity. For the IGD, the smaller the value, the better the evaluated algorithm performs; for the CPF, the larger the value, the better the obtained nondominated solutions are distributed along the ideal Pareto front. To obtain statistical results, all algorithms were run 30 times independently on each test problem.
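For reference, the IGD computation is a straightforward nearest-neighbor average; a minimal sketch, assuming the ideal Pareto front is available as a finite set of sampled reference points:

```python
import numpy as np

def igd(reference_front, solutions):
    """Inverted generational distance: the mean Euclidean distance from each
    sampled point of the ideal Pareto front to its nearest obtained
    solution. Smaller values indicate better convergence and diversity."""
    R = np.asarray(reference_front, dtype=float)
    S = np.asarray(solutions, dtype=float)
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2)  # |R| x |S|
    return float(d.min(axis=1).mean())
```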

4.3. Performance Comparison between LSMOEA-TM and Other Large-Scale MOEAs

Table 1 and Table 2 present the statistical results, i.e., the mean values and standard deviations, of the IGD and CPF metrics of the five algorithms on the 15 test problems with 100, 500, and 1000 decision variables, obtained over 30 independent runs. The Wilcoxon rank sum test at a significance level of 0.05 was adopted to compare the algorithms, where the symbols “+”, “−”, and “=” indicate that a result is significantly better than, significantly worse than, and statistically similar to that obtained by LSMOEA-TM, respectively. In addition, the best results in Table 1 and Table 2 are shown in bold.
It can be observed from Table 1 and Table 2 that LSMOEA-TM achieved the best results on most of the test problems in terms of both IGD and CPF. This indicates the effectiveness of the proposed LSMOEA-TM, which enhances the optimization performance by adopting two alternative optimization methods and thereby achieves a balance between convergence and population diversity. For DTLZ1, LSMOEA-TM obtained worse results than LMEA in terms of both IGD and CPF. This may be because the grouping results obtained by LMEA from the initial population were, in most cases, reasonable enough for solving DTLZ1, whereas LSMOEA-TM spent some computations on adjusting the grouping results dynamically, so fewer computations were left for the population's evolution than in LMEA. As computationally expensive problems, WFG2 and WFG3 required a large number of computations for grouping the decision variables in LSMOEA-TM, correspondingly reducing the computations left for updating the population; as shown in Table 1 and Table 2, the performance of LSMOEA-TM on WFG2 and WFG3 was therefore worse than that of some of the compared algorithms. For UF4 and UF7, the IGD values of LSMOEA-TM were slightly worse than the best results with fewer than 1000 decision variables, but LSMOEA-TM achieved the best statistical results on both IGD and CPF with 1000 decision variables. For BT1, BT2, BT3, and BT6, LSMOEA-TM obtained the best performance on both IGD and CPF, which may demonstrate the effectiveness of LSMOEA-TM in handling large-scale multiobjective problems.
LSMOP [7] is a recently proposed test suite that reflects many characteristics of real-world large-scale optimization problems. It can be observed from Table 3 and Table 4, where the best results are shown in bold, that LSMOEA-TM achieved the best performance on LSMOP1, LSMOP4, LSMOP5, LSMOP8, and LSMOP9. For LSMOP7, the IGD and CPF values obtained by LSMOEA-TM were slightly worse than those of FDV. For LSMOP2 and LSMOP3, the metric values of LSMOEA-TM showed a certain gap from the best results. This is because LSMOP2 is a mixed unimodal and multimodal problem with partially separable decision variables, while LSMOP3 is a multimodal problem with partially separable and fully separable decision variables. Since LSMOEA-TM tackles LSMOPs from the perspective of grouping decision variables, the effect of the grouping strategy may be weakened on multimodal problems and problems with partially separable decision variables, resulting in a poorer search performance. In contrast, IM-MOEA/D decomposes the problem in the decision-variable space, and FDV uses a fuzzy search instead of a grouping strategy; when facing this type of problem, the performance of LSMOEA-TM is therefore slightly worse than that of IM-MOEA/D and FDV. Overall, however, LSMOEA-TM achieved the best performance on the LSMOP test suite.

5. Discussion

5.1. Investigation of the Bayesian-Based Parameter Adjusting Strategy

In this section, six problems, including DTLZ2, UF1, BT1, WFG2, LSMOP1, and LSMOP2, were selected to investigate the effectiveness of the proposed Bayesian-based parameter-adjusting strategy. The number of decision variables was set to 100 for DTLZ2, UF1, BT1, and WFG2. For LSMOP1 and LSMOP2, the number of decision variables was 100 × M, where M is the dimension of the objective space; since LSMOP1 and LSMOP2 are three-objective problems, they had 300 decision variables. For LSMOEA-TM without the Bayesian-based parameter-adjusting strategy, the parameters used in the grouping strategies, namely, nSel and nPer, took the same values as in [16]. For the proposed LSMOEA-TM, nSel and nPer were adjusted by the Bayesian-based parameter-adjusting strategy to balance grouping accuracy and computational cost. As shown in Figure 8, LSMOEA-TM obtained the best IGD values faster than LSMOEA-TM without the Bayesian-based parameter-adjusting strategy on all test problems, which may indicate the effectiveness of the proposed strategy in finding appropriate values for the grouping parameters. It can also be observed from Figure 8 that LSMOEA-TM converged more slowly than LSMOEA-TM without the Bayesian-based parameter-adjusting strategy at the early stages on some problems, such as DTLZ2, UF1, and BT1. As the evolution progressed, more observation values were generated, and the probability surrogate model adopted in the Bayesian-based parameter-adjusting strategy came closer to the real loss function. Therefore, in the early stages, the strategy may not obtain satisfactory values for nSel and nPer, leading to relatively poor IGD values; however, as shown in Figure 8, it helped LSMOEA-TM find appropriate parameter values in the middle and late stages and finally achieve a better performance.

5.2. Investigation of the Scalability of LSMOEA-TM

To further investigate the scalability of LSMOEA-TM, test problems with more decision variables were adopted in this section. Figure 9 presents the average IGD values obtained by LSMOEA-TM over 30 independent runs on DTLZ1, DTLZ3, UF5, UF10, WFG2, and WFG6, with the number of decision variables ranging from 100 to 4000. As shown in Figure 9, the IGD values fluctuated as the number of decision variables increased, but for all six test problems there was no significant degradation in the performance of LSMOEA-TM. These experimental results demonstrate that LSMOEA-TM performs stably on large-scale MOPs with different numbers of decision variables.

5.3. Investigation of the Computational Efficiency of LSMOEA-TM

To save space, Table 5 presents the runtime of the five algorithms on selected problems, and Table 6 gives their average runtime on all problems of each test suite for a more comprehensive comparison. It should be noted that the number of decision variables in the LSMOP suite was fixed at 300, so the runtime on LSMOP was considerably lower than that on the other test suites for all five algorithms.
Because of the simplicity of its procedure, MOEA/D2 spent the least time among all algorithms. It can be observed from Table 5 and Table 6 that the LSMOEAs based on grouping strategies, i.e., LMEA and the proposed LSMOEA-TM, spent less time than IM-MOEA/D and FDV. For the DTLZ, UF, and LSMOP test suites, LSMOEA-TM required less runtime than LMEA, IM-MOEA/D, and FDV, because the proposed Bayesian-based parameter-adjustment strategy reduced the computational cost by estimating appropriate values of the grouping parameters (nSel and nPer). However, LSMOEA-TM took longer than LMEA on computationally expensive problems such as the WFG test problems, because the cost of calculating the HV metric, which controls the alternation of the convergence-oriented and diversity-oriented stages in LSMOEA-TM, was significantly higher for the WFG test suite than for the other suites. For the BT test suite, LSMOEA-TM spent more time in some cases because it had to execute the grouping strategies more often to balance convergence and diversity, since maintaining population diversity is difficult on the BT test suite.

6. Conclusions

This paper proposed a large-scale multiobjective optimization algorithm with two alternative optimization methods (LSMOEA-TM) to solve LSMOPs. In the two alternative optimization methods, two grouping strategies, namely, the convergence-related and the diversity-related grouping strategies, were used to divide the decision variables and obtain stable grouping results. Specifically, LSMOEA-TM chose between the two grouping strategies according to the performance of the evolved population in order to balance convergence and population diversity. Furthermore, this paper introduced a Bayesian-based parameter-adjusting strategy to reduce computational costs by optimizing the parameters of the two alternative optimization methods. In the experimental section, LSMOEA-TM was compared with four state-of-the-art large-scale optimization algorithms on benchmark large-scale test problems, and the statistical results demonstrated that LSMOEA-TM performed best on most of the test problems, indicating the effectiveness of the proposed two alternative optimization methods in solving LSMOPs. Moreover, the results in Section 5.1 showed that the proposed Bayesian-based parameter-adjusting strategy could reduce computational costs and improve the search efficiency of LSMOEA-TM. In addition, Figure 9 demonstrated that there was no significant degradation in the performance of LSMOEA-TM when it was extended to LSMOPs with more decision variables.
As described in Section 3.2, LSMOEA-TM chose between the convergence-related and diversity-related grouping strategies according to the performance of the evolved population, which was evaluated by the widely adopted HV metric. However, the cost of calculating the HV increases exponentially with the dimension of the objective space, so designing a more efficient evaluation method is left for future work. In addition, improving the performance of LSMOEA-TM on complex LSMOPs, such as problems with partially separable decision variables and problems with multimodal characteristics, is also a topic for future work.

Author Contributions

Conceptualization, T.L.; methodology, T.L. and J.Z.; software, J.Z.; validation, T.L. and L.C.; formal analysis, T.L. and J.Z.; investigation, T.L. and J.Z.; resources, T.L.; data curation, J.Z.; writing—original draft preparation, T.L., J.Z. and L.C.; writing—review and editing, J.Z. and L.C.; visualization, J.Z.; supervision, T.L.; project administration, T.L.; funding acquisition, T.L. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61806122 and 62102242.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Harada, T.; Kaidan, M.; Thawonmas, R. Comparison of synchronous and asynchronous parallelization of extreme surrogate-assisted multi-objective evolutionary algorithm. Nat. Comput. 2022, 21, 187–217.
2. Wu, Z.; Feng, H.; Chen, L.; Ge, Y. Performance Optimization of a Condenser in Ocean Thermal Energy Conversion (OTEC) System Based on Constructal Theory and a Multi-Objective Genetic Algorithm. Entropy 2020, 22, 641.
3. Li, J.; Zhao, H. Multi-Objective Optimization and Performance Assessments of an Integrated Energy System Based on Fuel, Wind and Solar Energies. Entropy 2021, 23, 431.
4. Qiu, X.; Chen, L.; Ge, Y.; Shi, S. Efficient Power Characteristic Analysis and Multi-Objective Optimization for an Irreversible Simple Closed Gas Turbine Cycle. Entropy 2022, 24, 1531.
5. Zhou, Y.; Ruan, J.; Hong, G.; Miao, Z. Multi-Objective Optimization of the Basic and Regenerative ORC Integrated with Working Fluid Selection. Entropy 2022, 24, 902.
6. Cheng, R. Nature Inspired Optimization of Large Problems; University of Surrey: Guildford, UK, 2016.
7. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. Test Problems for Large-Scale Multiobjective and Many-Objective Optimization. IEEE Trans. Cybern. 2017, 47, 4108–4121.
8. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
9. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems with Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601.
10. Ling, H.; Zhu, X.; Zhu, T.; Nie, M.; Liu, Z.-H.; Liu, H.-Y. A Parallel Multiobjective PSO Weighted Average Clustering Algorithm Based on Apache Spark. Entropy 2023, 25, 259.
11. Tian, Y.; Lu, C.; Zhang, X.; Tan, K.C.; Jin, Y. Solving Large-Scale Multiobjective Optimization Problems with Sparse Optimal Solutions via Unsupervised Neural Networks. IEEE Trans. Cybern. 2021, 51, 3115–3128.
12. Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary Large-Scale Multi-Objective Optimization: A Survey. ACM Comput. Surv. 2022, 54, 1–34.
13. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput. 2012, 16, 210–224.
14. Ma, X.; Liu, F.; Qi, Y.; Wang, X.; Li, L.; Jiao, L.; Yin, M.; Gong, M. A Multiobjective Evolutionary Algorithm Based on Decision Variable Analyses for Multiobjective Optimization Problems With Large-Scale Variables. IEEE Trans. Evol. Comput. 2016, 20, 275–298.
15. Chen, H.; Zhu, X.; Pedrycz, W.; Yin, S.; Wu, G.; Yan, H. PEA: Parallel Evolutionary Algorithm by Separating Convergence and Diversity for Large-Scale Multi-Objective Optimization. In Proceedings of the 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–6 July 2018.
16. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization. IEEE Trans. Evol. Comput. 2018, 22, 97–112.
17. Liu, H.-L.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455.
18. Du, W.; Tong, L.; Tang, Y. A framework for high-dimensional robust evolutionary multi-objective optimization. In Proceedings of the 2018 Genetic and Evolutionary Computation Conference, Kyoto, Japan, 15–19 July 2018.
19. Du, W.; Zhong, W.; Tang, Y.; Du, W.; Jin, Y. High-Dimensional Robust Multi-Objective Optimization for Order Scheduling: A Decision Variable Classification Approach. IEEE Trans. Ind. Inf. 2019, 15, 293–304.
20. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626.
21. Antonio, L.M.; Coello Coello, C.A. Use of cooperative coevolution for solving large scale multiobjective optimization problems. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013.
22. Miguel Antonio, L.; Coello Coello, C.A. Decomposition-Based Approach for Solving Large Scale Multi-objective Problems. In Proceedings of the Parallel Problem Solving from Nature–PPSN XIV: 14th International Conference, Edinburgh, UK, 17–21 September 2016.
23. Antonio, L.M.; Coello Coello, C.A.; Brambila, S.G.; González, J.F.; Tapia, G.C. Operational decomposition for large scale multi-objective optimization problems. In Proceedings of the 2019 Genetic and Evolutionary Computation Conference Companion, Prague, Czech Republic, 13–17 July 2019.
24. Song, A.; Yang, Q.; Chen, W.-N.; Zhang, J. A random-based dynamic grouping strategy for large scale multi-objective optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016.
25. Li, H.; Zhang, Q.; Deng, J. Biased Multiobjective Optimization and Decomposition Algorithm. IEEE Trans. Cybern. 2017, 47, 52–66.
26. Zhu, W.; Tianyu, L. A Novel Multi-Objective Scheduling Method for Energy Based Unrelated Parallel Machines With Auxiliary Resource Constraints. IEEE Access 2019, 7, 168688–168699.
27. He, C.; Li, L.; Tian, Y.; Zhang, X.; Cheng, R.; Jin, Y.; Yao, X. Accelerating Large-Scale Multiobjective Optimization via Problem Reformulation. IEEE Trans. Evol. Comput. 2019, 23, 949–961.
28. He, C.; Cheng, R.; Yazdani, D. Adaptive Offspring Generation for Evolutionary Large-Scale Multiobjective Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 786–798.
29. Chen, H.; Cheng, R.; Wen, J.; Li, H.; Weng, J. Solving large-scale many-objective optimization problems by covariance matrix adaptation evolution strategy with scalable small subpopulations. Inf. Sci. 2020, 509, 457–469.
30. Farias, L.R.C.; Araujo, A.F.R. IM-MOEA/D: An Inverse Modeling Multi-Objective Evolutionary Algorithm Based on Decomposition. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021.
31. Yang, X.; Zou, J.; Yang, S.; Zheng, J.; Liu, Y. A Fuzzy Decision Variables Framework for Large-scale Multiobjective Optimization. IEEE Trans. Evol. Comput. 2022, 23, 1.
32. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
33. Srinivas, N.; Krause, A.; Kakade, S.M.; Seeger, M. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. IEEE Trans. Inform. Theory 2012, 58, 3250–3265.
34. Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; de Freitas, N. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proc. IEEE 2016, 104, 148–175.
35. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable Test Problems for Evolutionary Multiobjective Optimization; Springer: London, UK, 2005; pp. 105–145.
36. Huband, S.; Hingston, P.; Barone, L.; While, L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 2006, 10, 477–506.
37. Zhang, Q.; Zhou, A.; Zhao, S.; Suganthan, P.N.; Liu, W.; Tiwari, S. Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition; Technical Report; University of Essex: Colchester, UK; Nanyang Technological University: Singapore, 2008; Volume 264, pp. 1–30.
38. Coello, C.A.C.; Cortes, N.C. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190.
39. Tian, Y.; Cheng, R.; Zhang, X.; Li, M.; Jin, Y. Diversity Assessment of Multi-Objective Evolutionary Algorithms: Performance Metric and Benchmark Problems. IEEE Comput. Intell. Mag. 2019, 14, 61–74.
Figure 1. Grouping Results of BT6 by LMEA.
Figure 1. Grouping Results of BT6 by LMEA.
Entropy 25 00561 g001
Figure 2. Evolution Process of BT6 by LMEA. (a) evolution starts; (b) evolution processes half; (c) evolution ends.
Figure 2. Evolution Process of BT6 by LMEA. (a) evolution starts; (b) evolution processes half; (c) evolution ends.
Entropy 25 00561 g002
Figure 3. Illustration of the Stability of CCGDE3 and LMEA: (a) on DTLZ2; (b) on UF6.
Figure 4. Framework of LSMOEA-TM.
Figure 5. Illustration of Convergence-Oriented and Diversity-Oriented Stages.
Figure 6. Illustration of the Structure of the Population.
Figure 7. Illustration of Perturbation Results for Decision Variables.
Figure 8. Comparison of LSMOEA-TM with and without the Bayesian-based Parameter-Adjusting Strategy.
Figure 9. IGD Metric Values of LSMOEA-TM on Six Problems with Different Numbers of Decision Variables, Averaged over 30 Runs.
Table 1. IGD Values of Five Algorithms on the DTLZ, UF, WFG, and BT Test Suites.

Problem | D | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
DTLZ1 | 100 | 1.17 × 10^3 (2.45 × 10^2) − | 2.05 × 10^−2 (1.42 × 10^−6) + | 3.16 × 10^−1 (1.05 × 10^−1) − | 2.05 × 10^−2 (8.72 × 10^−6) + | 2.09 × 10^−2 (2.53 × 10^−4)
DTLZ1 | 500 | 3.03 × 10^3 (3.54 × 10^2) − | 2.05 × 10^−2 (1.59 × 10^−6) + | 5.34 × 10^0 (8.06 × 10^−1) − | 2.12 × 10^−2 (2.20 × 10^−4) = | 2.11 × 10^−2 (3.96 × 10^−4)
DTLZ1 | 1000 | 4.67 × 10^3 (6.89 × 10^2) − | 2.05 × 10^−2 (1.84 × 10^−6) + | 1.65 × 10^1 (1.57 × 10^0) − | 2.87 × 10^−2 (1.65 × 10^−3) − | 2.12 × 10^−2 (3.01 × 10^−4)
DTLZ2 | 100 | 1.90 × 10^0 (6.12 × 10^−1) − | 5.44 × 10^−2 (3.85 × 10^−6) − | 5.92 × 10^−2 (1.17 × 10^−9) − | 5.44 × 10^−2 (1.82 × 10^−6) − | 5.33 × 10^−2 (4.29 × 10^−4)
DTLZ2 | 500 | 4.42 × 10^0 (8.40 × 10^−1) − | 5.44 × 10^−2 (5.19 × 10^−6) − | 5.92 × 10^−2 (6.74 × 10^−9) − | 5.44 × 10^−2 (6.38 × 10^−8) − | 5.38 × 10^−2 (5.64 × 10^−4)
DTLZ2 | 1000 | 7.74 × 10^0 (1.71 × 10^0) − | 5.44 × 10^−2 (4.25 × 10^−6) − | 5.92 × 10^−2 (3.75 × 10^−8) − | 5.44 × 10^−2 (2.73 × 10^−8) − | 5.39 × 10^−2 (4.68 × 10^−4)
DTLZ3 | 100 | 3.37 × 10^3 (7.76 × 10^2) − | 5.44 × 10^−2 (3.69 × 10^−6) − | 8.61 × 10^−1 (3.98 × 10^−1) − | 5.45 × 10^−2 (1.86 × 10^−5) − | 5.34 × 10^−2 (5.10 × 10^−4)
DTLZ3 | 500 | 7.88 × 10^3 (1.21 × 10^3) − | 5.44 × 10^−2 (5.79 × 10^−6) − | 1.41 × 10^1 (2.29 × 10^0) − | 5.66 × 10^−2 (8.20 × 10^−4) − | 5.40 × 10^−2 (6.39 × 10^−4)
DTLZ3 | 1000 | 1.32 × 10^4 (2.19 × 10^3) − | 5.44 × 10^−2 (4.94 × 10^−6) = | 4.60 × 10^1 (6.83 × 10^0) − | 7.97 × 10^−2 (5.60 × 10^−3) − | 5.46 × 10^−2 (7.88 × 10^−4)
DTLZ7 | 100 | 3.21 × 10^0 (3.66 × 10^−1) − | 2.91 × 10^−1 (1.81 × 10^−1) − | 1.24 × 10^−1 (1.05 × 10^−16) − | 7.76 × 10^−2 (3.20 × 10^−3) − | 5.89 × 10^−2 (1.33 × 10^−3)
DTLZ7 | 500 | 3.72 × 10^0 (2.29 × 10^−1) − | 2.85 × 10^−1 (1.83 × 10^−1) − | 1.24 × 10^−1 (5.70 × 10^−13) − | 7.89 × 10^−2 (3.01 × 10^−3) − | 5.90 × 10^−2 (1.21 × 10^−3)
DTLZ7 | 1000 | 3.97 × 10^0 (1.91 × 10^−1) − | 2.46 × 10^−1 (1.85 × 10^−1) − | 1.24 × 10^−1 (1.89 × 10^−7) − | 7.85 × 10^−2 (3.62 × 10^−3) − | 5.94 × 10^−2 (1.21 × 10^−3)
UF1 | 100 | 5.92 × 10^−1 (7.68 × 10^−2) − | 5.92 × 10^−1 (7.68 × 10^−2) − | 8.30 × 10^−2 (9.00 × 10^−3) − | 8.06 × 10^−3 (3.34 × 10^−3) − | 3.73 × 10^−3 (1.56 × 10^−8)
UF1 | 500 | 6.92 × 10^−1 (7.72 × 10^−2) − | 6.92 × 10^−1 (7.72 × 10^−2) − | 9.33 × 10^−2 (1.02 × 10^−2) − | 8.12 × 10^−3 (2.78 × 10^−3) − | 3.73 × 10^−3 (4.78 × 10^−8)
UF1 | 1000 | 7.55 × 10^−1 (7.47 × 10^−2) − | 7.55 × 10^−1 (7.47 × 10^−2) − | 1.01 × 10^−1 (1.19 × 10^−2) − | 8.22 × 10^−3 (2.41 × 10^−3) − | 3.73 × 10^−3 (9.09 × 10^−8)
UF2 | 100 | 2.18 × 10^−1 (3.94 × 10^−2) − | 2.18 × 10^−1 (3.94 × 10^−2) − | 4.22 × 10^−2 (2.09 × 10^−2) − | 7.96 × 10^−3 (1.07 × 10^−3) − | 3.73 × 10^−3 (1.17 × 10^−9)
UF2 | 500 | 2.88 × 10^−1 (5.56 × 10^−2) − | 2.88 × 10^−1 (5.56 × 10^−2) − | 5.06 × 10^−2 (2.08 × 10^−2) − | 8.37 × 10^−3 (1.11 × 10^−3) − | 3.73 × 10^−3 (4.36 × 10^−9)
UF2 | 1000 | 3.23 × 10^−1 (5.93 × 10^−2) − | 3.23 × 10^−1 (5.93 × 10^−2) − | 5.69 × 10^−2 (1.75 × 10^−2) − | 9.15 × 10^−3 (9.22 × 10^−4) − | 3.73 × 10^−3 (1.09 × 10^−8)
UF4 | 100 | 1.07 × 10^−1 (4.09 × 10^−3) − | 5.59 × 10^−2 (3.01 × 10^−3) − | 4.21 × 10^−2 (1.97 × 10^−3) − | 1.09 × 10^−2 (1.11 × 10^−3) + | 2.07 × 10^−2 (1.64 × 10^−4)
UF4 | 500 | 1.33 × 10^−1 (5.55 × 10^−3) − | 6.27 × 10^−2 (5.02 × 10^−3) − | 5.12 × 10^−2 (2.23 × 10^−3) − | 1.89 × 10^−2 (1.16 × 10^−3) + | 2.32 × 10^−2 (1.05 × 10^−4)
UF4 | 1000 | 1.45 × 10^−1 (3.63 × 10^−3) − | 6.78 × 10^−2 (4.50 × 10^−3) − | 5.63 × 10^−2 (1.86 × 10^−3) − | 2.59 × 10^−2 (4.84 × 10^−4) − | 2.43 × 10^−2 (8.24 × 10^−5)
UF7 | 100 | 6.22 × 10^−1 (1.14 × 10^−1) − | 9.72 × 10^−2 (2.08 × 10^−1) − | 8.02 × 10^−2 (1.03 × 10^−1) − | 3.24 × 10^−2 (3.94 × 10^−2) + | 5.97 × 10^−2 (5.79 × 10^−7)
UF7 | 500 | 8.01 × 10^−1 (8.70 × 10^−2) − | 2.23 × 10^−1 (3.12 × 10^−1) − | 8.48 × 10^−2 (9.00 × 10^−2) − | 4.77 × 10^−2 (4.55 × 10^−2) + | 5.97 × 10^−2 (1.42 × 10^−6)
UF7 | 1000 | 8.37 × 10^−1 (9.75 × 10^−2) − | 2.52 × 10^−1 (3.22 × 10^−1) = | 5.67 × 10^−2 (1.34 × 10^−2) + | 8.07 × 10^−2 (6.75 × 10^−2) = | 5.97 × 10^−2 (2.55 × 10^−6)
WFG1 | 100 | 2.24 × 10^0 (7.82 × 10^−2) − | 1.38 × 10^0 (1.34 × 10^−1) − | 3.11 × 10^−1 (2.42 × 10^−2) + | 1.41 × 10^−1 (4.28 × 10^−4) + | 6.32 × 10^−1 (8.79 × 10^−2)
WFG1 | 500 | 2.25 × 10^0 (7.21 × 10^−2) − | 1.50 × 10^0 (1.18 × 10^−1) − | 2.74 × 10^−1 (3.68 × 10^−2) + | 1.41 × 10^−1 (7.56 × 10^−5) + | 8.13 × 10^−1 (7.58 × 10^−2)
WFG1 | 1000 | 2.24 × 10^0 (7.73 × 10^−2) − | 1.52 × 10^0 (9.55 × 10^−2) − | 2.87 × 10^−1 (1.96 × 10^−1) + | 1.41 × 10^−1 (2.15 × 10^−5) + | 7.77 × 10^−1 (9.23 × 10^−2)
WFG2 | 100 | 4.65 × 10^−1 (5.75 × 10^−4) − | 5.71 × 10^−1 (4.84 × 10^−2) − | 1.87 × 10^−1 (4.60 × 10^−3) − | 1.65 × 10^−1 (1.01 × 10^−3) − | 1.65 × 10^−1 (5.24 × 10^−3)
WFG2 | 500 | 4.65 × 10^−1 (5.97 × 10^−4) − | 5.68 × 10^−1 (3.79 × 10^−2) − | 2.14 × 10^−1 (6.55 × 10^−3) − | 1.74 × 10^−1 (3.12 × 10^−3) − | 1.74 × 10^−1 (5.98 × 10^−2)
WFG2 | 1000 | 4.65 × 10^−1 (4.53 × 10^−4) − | 5.85 × 10^−1 (3.50 × 10^−2) − | 2.24 × 10^−1 (6.22 × 10^−3) − | 1.83 × 10^−1 (4.43 × 10^−3) − | 1.63 × 10^−1 (3.33 × 10^−3)
WFG3 | 100 | 1.08 × 10^−1 (2.65 × 10^−2) − | 6.06 × 10^−1 (4.08 × 10^−2) − | 2.33 × 10^−1 (1.72 × 10^−2) − | 6.76 × 10^−2 (6.12 × 10^−3) − | 3.06 × 10^−2 (4.74 × 10^−3)
WFG3 | 500 | 1.15 × 10^−1 (1.88 × 10^−2) − | 6.62 × 10^−1 (4.51 × 10^−2) − | 2.60 × 10^−1 (2.05 × 10^−2) − | 9.22 × 10^−2 (6.72 × 10^−3) − | 3.14 × 10^−2 (3.90 × 10^−3)
WFG3 | 1000 | 1.19 × 10^−1 (1.62 × 10^−2) − | 6.66 × 10^−1 (4.51 × 10^−2) − | 2.56 × 10^−1 (2.02 × 10^−2) − | 1.36 × 10^−1 (2.37 × 10^−2) − | 3.19 × 10^−2 (4.13 × 10^−3)
BT1 | 100 | 1.09 × 10^1 (8.92 × 10^−1) − | 9.25 × 10^0 (9.19 × 10^−1) − | 1.74 × 10^0 (3.03 × 10^−1) − | 1.83 × 10^0 (2.16 × 10^−1) − | 3.77 × 10^−2 (8.90 × 10^−3)
BT1 | 500 | 2.54 × 10^1 (3.08 × 10^0) − | 1.79 × 10^1 (1.53 × 10^0) − | 3.86 × 10^0 (5.68 × 10^−1) − | 4.42 × 10^0 (3.39 × 10^−1) − | 7.97 × 10^−2 (1.28 × 10^−2)
BT1 | 1000 | 3.96 × 10^1 (4.46 × 10^0) − | 2.85 × 10^1 (2.46 × 10^0) − | 8.49 × 10^0 (8.65 × 10^−1) − | 9.39 × 10^0 (4.05 × 10^−1) − | 1.33 × 10^−1 (2.25 × 10^−2)
BT2 | 100 | 8.42 × 10^0 (6.39 × 10^−1) − | 1.85 × 10^0 (9.82 × 10^−2) − | 7.84 × 10^−1 (4.69 × 10^−2) − | 8.18 × 10^−1 (1.96 × 10^−2) − | 3.45 × 10^−1 (4.32 × 10^−2)
BT2 | 500 | 1.91 × 10^1 (1.29 × 10^0) − | 3.95 × 10^0 (2.02 × 10^−1) − | 1.65 × 10^0 (4.89 × 10^−2) − | 1.77 × 10^0 (3.10 × 10^−2) − | 7.39 × 10^−1 (5.64 × 10^−2)
BT2 | 1000 | 3.06 × 10^1 (1.58 × 10^0) − | 6.40 × 10^0 (2.18 × 10^−1) − | 2.66 × 10^0 (7.64 × 10^−2) − | 3.21 × 10^0 (4.68 × 10^−2) − | 1.22 × 10^0 (8.36 × 10^−2)
BT3 | 100 | 1.12 × 10^1 (1.36 × 10^0) − | 3.23 × 10^0 (9.68 × 10^−1) − | 2.33 × 10^−1 (6.59 × 10^−2) − | 8.81 × 10^−1 (9.40 × 10^−2) − | 1.05 × 10^−2 (2.82 × 10^−3)
BT3 | 500 | 2.42 × 10^1 (3.08 × 10^0) − | 6.69 × 10^0 (1.30 × 10^0) − | 2.71 × 10^−1 (8.65 × 10^−2) − | 1.90 × 10^0 (1.77 × 10^−1) − | 1.83 × 10^−2 (4.97 × 10^−3)
BT3 | 1000 | 3.86 × 10^1 (4.71 × 10^0) − | 1.23 × 10^1 (1.70 × 10^0) − | 5.06 × 10^−1 (1.02 × 10^−1) − | 3.85 × 10^0 (3.29 × 10^−1) − | 3.05 × 10^−2 (5.04 × 10^−3)
BT6 | 100 | 1.10 × 10^1 (1.05 × 10^0) − | 9.12 × 10^0 (8.70 × 10^−1) − | 1.47 × 10^0 (3.43 × 10^−1) − | 1.95 × 10^0 (1.92 × 10^−1) − | 2.61 × 10^−2 (1.01 × 10^−2)
BT6 | 500 | 2.52 × 10^1 (3.16 × 10^0) − | 1.80 × 10^1 (1.88 × 10^0) − | 3.77 × 10^0 (5.81 × 10^−1) − | 4.65 × 10^0 (2.31 × 10^−1) − | 5.29 × 10^−2 (8.05 × 10^−3)
BT6 | 1000 | 3.94 × 10^1 (4.66 × 10^0) − | 2.82 × 10^1 (3.08 × 10^0) − | 7.98 × 10^0 (6.22 × 10^−1) − | 9.53 × 10^0 (4.77 × 10^−1) − | 8.38 × 10^−2 (1.36 × 10^−2)
+/−/= | — | 0/45/0 | 3/40/2 | 4/41/0 | 8/36/1
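In Tables 1–4, each cell reports a mean metric value over independent runs with the standard deviation in parentheses; following the standard convention for such comparisons, a trailing "+", "−", or "=" marks a competitor result that is significantly better than, worse than, or statistically comparable to that of LSMOEA-TM, and the final "+/−/=" row totals these outcomes per algorithm. For readers unfamiliar with the inverted generational distance (IGD), the following minimal NumPy sketch illustrates the standard definition (it is not the authors' implementation): the mean distance from each point of a sampled reference Pareto front to its nearest obtained solution, so smaller values indicate better convergence and spread.

```python
import numpy as np

def igd(obtained, reference):
    """Inverted generational distance (smaller is better).

    obtained:  (P, M) array of objective vectors found by the algorithm.
    reference: (R, M) array of points sampled from the true Pareto front.
    """
    obtained = np.asarray(obtained, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Pairwise Euclidean distances: one row per reference point,
    # one column per obtained solution.
    d = np.linalg.norm(reference[:, None, :] - obtained[None, :, :], axis=2)
    # Average, over the reference front, of the distance to the nearest solution.
    return d.min(axis=1).mean()
```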
Table 2. CPF Values of Five Algorithms on the DTLZ, UF, WFG, and BT Test Suites.

Problem | D | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
DTLZ1 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 8.41 × 10^−1 (3.51 × 10^−5) + | 1.56 × 10^−1 (1.39 × 10^−1) − | 8.41 × 10^−1 (1.06 × 10^−4) + | 8.40 × 10^−1 (5.22 × 10^−4)
DTLZ1 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 8.41 × 10^−1 (3.09 × 10^−5) + | 0.00 × 10^0 (0.00 × 10^0) − | 8.36 × 10^−1 (1.25 × 10^−3) − | 8.39 × 10^−1 (8.68 × 10^−4)
DTLZ1 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 8.41 × 10^−1 (2.27 × 10^−5) + | 0.00 × 10^0 (0.00 × 10^0) − | 8.11 × 10^−1 (4.36 × 10^−3) − | 8.37 × 10^−1 (8.24 × 10^−4)
DTLZ2 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (3.46 × 10^−5) − | 5.39 × 10^−1 (3.94 × 10^−8) − | 5.59 × 10^−1 (1.70 × 10^−5) − | 5.61 × 10^−1 (5.84 × 10^−4)
DTLZ2 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (4.56 × 10^−5) − | 5.39 × 10^−1 (3.09 × 10^−7) − | 5.59 × 10^−1 (1.83 × 10^−7) − | 5.61 × 10^−1 (7.25 × 10^−4)
DTLZ2 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (4.19 × 10^−5) − | 5.39 × 10^−1 (1.41 × 10^−6) − | 5.59 × 10^−1 (7.20 × 10^−8) − | 5.60 × 10^−1 (8.03 × 10^−4)
DTLZ3 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (5.74 × 10^−5) − | 4.24 × 10^−2 (4.42 × 10^−2) − | 5.58 × 10^−1 (4.27 × 10^−4) − | 5.61 × 10^−1 (9.12 × 10^−4)
DTLZ3 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (5.04 × 10^−5) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.42 × 10^−1 (3.66 × 10^−3) − | 5.59 × 10^−1 (6.82 × 10^−4)
DTLZ3 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 5.59 × 10^−1 (4.40 × 10^−5) + | 0.00 × 10^0 (0.00 × 10^0) − | 4.84 × 10^−1 (1.14 × 10^−2) − | 5.54 × 10^−1 (1.65 × 10^−3)
DTLZ7 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 2.42 × 10^−1 (1.58 × 10^−2) − | 2.60 × 10^−1 (1.01 × 10^−16) − | 2.67 × 10^−1 (1.72 × 10^−3) − | 2.79 × 10^−1 (6.07 × 10^−4)
DTLZ7 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 2.42 × 10^−1 (1.62 × 10^−2) − | 2.60 × 10^−1 (1.03 × 10^−14) − | 2.68 × 10^−1 (1.73 × 10^−3) − | 2.79 × 10^−1 (6.12 × 10^−4)
DTLZ7 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 2.46 × 10^−1 (1.65 × 10^−2) − | 2.60 × 10^−1 (4.84 × 10^−9) − | 2.69 × 10^−1 (1.55 × 10^−3) − | 2.79 × 10^−1 (6.79 × 10^−4)
UF1 | 100 | 1.02 × 10^−1 (4.38 × 10^−2) − | 7.16 × 10^−1 (9.80 × 10^−3) − | 7.14 × 10^−1 (3.98 × 10^−3) − | 6.23 × 10^−1 (1.60 × 10^−2) − | 7.20 × 10^−1 (6.99 × 10^−7)
UF1 | 500 | 5.78 × 10^−2 (3.52 × 10^−2) − | 7.14 × 10^−1 (1.27 × 10^−2) − | 7.14 × 10^−1 (3.07 × 10^−3) − | 6.12 × 10^−1 (1.84 × 10^−2) − | 7.20 × 10^−1 (1.51 × 10^−6)
UF1 | 1000 | 3.74 × 10^−2 (2.01 × 10^−2) − | 6.87 × 10^−1 (9.42 × 10^−2) − | 7.14 × 10^−1 (2.42 × 10^−3) − | 6.00 × 10^−1 (1.70 × 10^−2) − | 7.20 × 10^−1 (1.80 × 10^−6)
UF2 | 100 | 4.48 × 10^−1 (4.00 × 10^−2) − | 7.17 × 10^−1 (6.27 × 10^−4) − | 7.14 × 10^−1 (1.22 × 10^−3) − | 6.86 × 10^−1 (1.12 × 10^−2) − | 7.20 × 10^−1 (3.93 × 10^−8)
UF2 | 500 | 3.77 × 10^−1 (5.18 × 10^−2) − | 7.16 × 10^−1 (6.13 × 10^−4) − | 7.13 × 10^−1 (1.47 × 10^−3) − | 6.76 × 10^−1 (1.17 × 10^−2) − | 7.20 × 10^−1 (1.07 × 10^−7)
UF2 | 1000 | 3.44 × 10^−1 (5.36 × 10^−2) − | 7.15 × 10^−1 (8.32 × 10^−4) − | 7.11 × 10^−1 (1.26 × 10^−3) − | 6.68 × 10^−1 (1.10 × 10^−2) − | 7.20 × 10^−1 (2.16 × 10^−7)
UF4 | 100 | 3.00 × 10^−1 (5.28 × 10^−3) − | 3.67 × 10^−1 (4.10 × 10^−3) − | 4.35 × 10^−1 (1.50 × 10^−3) + | 3.89 × 10^−1 (2.12 × 10^−3) − | 4.17 × 10^−1 (2.61 × 10^−4)
UF4 | 500 | 2.66 × 10^−1 (6.65 × 10^−3) − | 3.58 × 10^−1 (6.78 × 10^−3) − | 4.21 × 10^−1 (1.22 × 10^−3) + | 3.77 × 10^−1 (2.90 × 10^−3) − | 4.13 × 10^−1 (1.87 × 10^−4)
UF4 | 1000 | 2.52 × 10^−1 (3.97 × 10^−3) − | 3.51 × 10^−1 (6.02 × 10^−3) − | 4.09 × 10^−1 (1.47 × 10^−3) − | 3.68 × 10^−1 (2.61 × 10^−3) − | 4.12 × 10^−1 (1.44 × 10^−4)
UF7 | 100 | 3.85 × 10^−2 (3.73 × 10^−2) − | 5.08 × 10^−1 (1.45 × 10^−1) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 5.05 × 10^−1 (7.37 × 10^−2) − | 5.16 × 10^−1 (1.46 × 10^−6)
UF7 | 500 | 3.65 × 10^−3 (6.82 × 10^−3) − | 4.20 × 10^−1 (2.14 × 10^−1) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 4.97 × 10^−1 (6.72 × 10^−2) − | 5.16 × 10^−1 (2.45 × 10^−6)
UF7 | 1000 | 1.82 × 10^−3 (3.67 × 10^−3) − | 3.98 × 10^−1 (2.20 × 10^−1) = | 5.51 × 10^−1 (3.63 × 10^−2) + | 5.15 × 10^−1 (1.57 × 10^−2) − | 5.16 × 10^−1 (4.55 × 10^−6)
WFG1 | 100 | 3.09 × 10^−4 (1.69 × 10^−3) − | 3.62 × 10^−1 (5.77 × 10^−2) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 9.44 × 10^−1 (1.36 × 10^−4) + | 8.34 × 10^−1 (1.96 × 10^−2)
WFG1 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 3.03 × 10^−1 (3.93 × 10^−2) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 9.44 × 10^−1 (4.67 × 10^−5) + | 7.42 × 10^−1 (2.91 × 10^−2)
WFG1 | 1000 | 4.83 × 10^−4 (2.65 × 10^−3) − | 2.94 × 10^−1 (3.38 × 10^−2) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 9.44 × 10^−1 (2.87 × 10^−5) + | 7.67 × 10^−1 (2.67 × 10^−2)
WFG2 | 100 | 8.64 × 10^−1 (3.93 × 10^−3) − | 6.77 × 10^−1 (1.77 × 10^−2) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 9.23 × 10^−1 (2.24 × 10^−3) − | 9.26 × 10^−1 (2.55 × 10^−3)
WFG2 | 500 | 8.60 × 10^−1 (2.81 × 10^−3) − | 6.71 × 10^−1 (1.27 × 10^−2) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 9.00 × 10^−1 (5.24 × 10^−3) − | 9.21 × 10^−1 (2.62 × 10^−2)
WFG2 | 1000 | 8.60 × 10^−1 (2.74 × 10^−3) − | 6.61 × 10^−1 (1.29 × 10^−2) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 8.87 × 10^−1 (5.44 × 10^−3) − | 9.24 × 10^−1 (2.93 × 10^−3)
WFG3 | 100 | 3.66 × 10^−1 (1.33 × 10^−2) − | 1.73 × 10^−1 (9.82 × 10^−3) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 3.96 × 10^−1 (2.14 × 10^−3) − | 4.13 × 10^−1 (3.08 × 10^−3)
WFG3 | 500 | 3.62 × 10^−1 (9.48 × 10^−3) − | 1.53 × 10^−1 (1.35 × 10^−2) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 3.84 × 10^−1 (2.91 × 10^−3) − | 4.11 × 10^−1 (2.50 × 10^−3)
WFG3 | 1000 | 3.60 × 10^−1 (7.88 × 10^−3) − | 1.51 × 10^−1 (1.02 × 10^−2) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 3.62 × 10^−1 (1.14 × 10^−2) − | 4.10 × 10^−1 (2.66 × 10^−3)
BT1 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 6.71 × 10^−1 (1.14 × 10^−2)
BT1 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 6.18 × 10^−1 (1.61 × 10^−2)
BT1 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 5.53 × 10^−1 (2.72 × 10^−2)
BT2 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 1.34 × 10^−2 (4.55 × 10^−3) − | 3.24 × 10^−1 (4.09 × 10^−2)
BT2 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 5.49 × 10^−2 (2.24 × 10^−2)
BT2 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 3.72 × 10^−1 (9.24 × 10^−3)
BT3 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 1.45 × 10^−3 (3.99 × 10^−3) − | 7.06 × 10^−1 (4.61 × 10^−3)
BT3 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 6.96 × 10^−1 (7.10 × 10^−3)
BT3 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 6.79 × 10^−1 (6.91 × 10^−3)
BT6 | 100 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 6.24 × 10^−1 (1.54 × 10^−2)
BT6 | 500 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.51 × 10^−1 (3.63 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 5.85 × 10^−1 (1.08 × 10^−2)
BT6 | 1000 | 0.00 × 10^0 (0.00 × 10^0) − | 0.00 × 10^0 (0.00 × 10^0) − | 5.34 × 10^−1 (3.96 × 10^−2) + | 0.00 × 10^0 (0.00 × 10^0) − | 5.43 × 10^−1 (1.90 × 10^−2)
+/−/= | — | 0/45/0 | 4/40/1 | 7/37/1 | 4/41/0
Table 3. IGD Values of the Five Algorithms on the LSMOP Test Suite.

Problem | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
LSMOP1 | 6.94 × 10^0 (8.21 × 10^−1) − | 1.65 × 10^−1 (1.82 × 10^−1) − | 2.40 × 10^−1 (8.69 × 10^−2) − | 2.19 × 10^−1 (2.59 × 10^−3) − | 5.62 × 10^−2 (3.52 × 10^−3)
LSMOP2 | 1.08 × 10^−1 (4.91 × 10^−3) − | 9.69 × 10^−2 (8.76 × 10^−2) − | 6.03 × 10^−2 (1.08 × 10^−3) + | 7.28 × 10^−2 (1.34 × 10^−3) + | 8.63 × 10^−2 (2.42 × 10^−3)
LSMOP3 | 1.69 × 10^1 (2.32 × 10^0) − | 9.38 × 10^−1 (1.21 × 10^0) = | 6.96 × 10^−1 (1.65 × 10^−1) − | 4.12 × 10^−1 (6.30 × 10^−2) + | 6.10 × 10^−1 (7.95 × 10^−2)
LSMOP4 | 3.03 × 10^−1 (9.56 × 10^−3) − | 1.37 × 10^−1 (1.02 × 10^−1) − | 9.90 × 10^−2 (2.49 × 10^−3) − | 1.24 × 10^−1 (3.35 × 10^−3) − | 8.87 × 10^−2 (5.95 × 10^−3)
LSMOP5 | 1.04 × 10^1 (2.98 × 10^0) − | 3.84 × 10^0 (2.98 × 10^0) − | 2.29 × 10^−1 (8.94 × 10^−2) − | 5.41 × 10^−1 (2.16 × 10^−4) − | 7.29 × 10^−2 (3.62 × 10^−3)
LSMOP6 | 1.59 × 10^1 (6.45 × 10^2) − | 3.08 × 10^1 (1.23 × 10^2) − | 1.04 × 10^−2 (3.21 × 10^−1) + | 1.18 × 10^−1 (2.02 × 10^−3) − | 5.26 × 10^−2 (9.46 × 10^−1)
LSMOP7 | 1.58 × 10^0 (6.60 × 10^−2) − | 1.36 × 10^0 (1.90 × 10^−1) − | 9.78 × 10^−1 (7.00 × 10^−2) − | 9.00 × 10^−1 (1.22 × 10^−2) + | 9.17 × 10^−1 (2.27 × 10^−1)
LSMOP8 | 9.26 × 10^−1 (6.60 × 10^−2) − | 1.08 × 10^−1 (7.33 × 10^−3) − | 3.51 × 10^−1 (1.98 × 10^−2) − | 3.60 × 10^−1 (9.23 × 10^−3) − | 8.29 × 10^−2 (4.33 × 10^−3)
LSMOP9 | 4.07 × 10^1 (9.28 × 10^0) − | 1.29 × 10^0 (1.16 × 10^0) − | 1.30 × 10^0 (2.48 × 10^−1) − | 1.19 × 10^0 (4.06 × 10^−1) − | 1.82 × 10^−1 (9.94 × 10^−3)
+/−/= | 0/9/0 | 0/8/1 | 2/7/0 | 3/6/0
Table 4. CPF Values of the Five Algorithms on the LSMOP Test Suite.

Problem | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
LSMOP1 | 0.00 × 10^0 (0.00 × 10^0) − | 6.71 × 10^−1 (2.21 × 10^−1) − | 6.12 × 10^−1 (7.79 × 10^−4) − | 6.03 × 10^−1 (1.24 × 10^−1) − | 8.03 × 10^−1 (7.08 × 10^−3)
LSMOP2 | 7.41 × 10^−1 (6.55 × 10^−3) − | 7.70 × 10^−1 (6.06 × 10^−2) + | 7.95 × 10^−1 (1.38 × 10^−3) + | 8.08 × 10^−1 (1.35 × 10^−3) + | 7.50 × 10^−1 (7.16 × 10^−3)
LSMOP3 | 0.00 × 10^0 (0.00 × 10^0) − | 1.46 × 10^−1 (1.02 × 10^−1) = | 4.27 × 10^−1 (1.03 × 10^−1) + | 1.26 × 10^−1 (1.16 × 10^−1) = | 1.11 × 10^−1 (5.89 × 10^−2)
LSMOP4 | 4.78 × 10^−1 (1.06 × 10^−2) − | 7.13 × 10^−1 (9.47 × 10^−2) − | 7.37 × 10^−1 (3.49 × 10^−3) − | 7.62 × 10^−1 (2.75 × 10^−3) + | 7.51 × 10^−1 (1.08 × 10^−2)
LSMOP5 | 0.00 × 10^0 (0.00 × 10^0) − | 1.08 × 10^−1 (1.63 × 10^−1) − | 3.35 × 10^−1 (1.66 × 10^−3) − | 4.19 × 10^−1 (4.33 × 10^−2) − | 5.05 × 10^−1 (6.57 × 10^−3)
LSMOP6 | 0.00 × 10^0 (0.00 × 10^0) − | 1.27 × 10^−1 (1.57 × 10^−2) − | 8.54 × 10^−1 (4.85 × 10^−2) + | 6.32 × 10^−1 (4.24 × 10^−3) − | 8.14 × 10^−1 (2.67 × 10^−2)
LSMOP7 | 0.00 × 10^0 (0.00 × 10^0) − | 1.57 × 10^−1 (1.92 × 10^−2) − | 6.78 × 10^−1 (1.71 × 10^−2) − | 8.49 × 10^−1 (4.74 × 10^−2) + | 8.18 × 10^−1 (1.71 × 10^−2)
LSMOP8 | 2.78 × 10^−2 (1.20 × 10^−3) − | 4.31 × 10^−1 (1.22 × 10^−2) − | 3.64 × 10^−1 (1.80 × 10^−3) − | 3.51 × 10^−1 (9.96 × 10^−3) − | 4.77 × 10^−1 (6.62 × 10^−3)
LSMOP9 | 0.00 × 10^0 (0.00 × 10^0) − | 8.55 × 10^−2 (6.57 × 10^−2) − | 1.28 × 10^−1 (4.18 × 10^−2) − | 1.14 × 10^−1 (2.18 × 10^−2) − | 2.04 × 10^−1 (5.25 × 10^−3)
+/−/= | 0/9/0 | 1/7/1 | 3/6/0 | 3/5/1
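The text does not state which statistical test produces the "+/−/=" labels; a Wilcoxon rank-sum test at the 0.05 significance level is the usual choice in this literature. Purely as an illustration under that assumption, the sketch below labels one algorithm's per-run metric values against those of LSMOEA-TM; note that the direction of "better" flips between IGD (smaller is better) and CPF (larger is better).

```python
from scipy.stats import ranksums  # Wilcoxon rank-sum test

def significance_label(alg_runs, base_runs, larger_is_better=False, alpha=0.05):
    """Return '+', '-', or '=' for an algorithm's per-run metric values
    (e.g., 30 IGD or CPF values) compared with the baseline's.
    Assumes a Wilcoxon rank-sum test at level alpha (an assumption; the
    paper does not name its test)."""
    _, p_value = ranksums(alg_runs, base_runs)
    if p_value >= alpha:
        return "="  # no statistically significant difference
    alg_mean = sum(alg_runs) / len(alg_runs)
    base_mean = sum(base_runs) / len(base_runs)
    better = alg_mean > base_mean if larger_is_better else alg_mean < base_mean
    return "+" if better else "-"
```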
Table 5. Average Runtime (s) of Five Algorithms on Some Test Problems.

Problem | D | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
DTLZ1 | 100 | 168.83 | 224.63 | 474.65 | 243.91 | 227.60
DTLZ1 | 300 | 272.06 | 384.71 | 607.00 | 576.66 | 352.90
DTLZ1 | 500 | 380.38 | 551.08 | 615.88 | 786.35 | 481.02
DTLZ2 | 100 | 157.58 | 230.39 | 464.97 | 240.74 | 230.78
DTLZ2 | 300 | 236.75 | 370.66 | 577.62 | 561.55 | 346.33
DTLZ2 | 500 | 319.52 | 521.12 | 562.91 | 739.27 | 470.82
UF1 | 100 | 165.81 | 234.19 | 498.66 | 251.45 | 244.10
UF1 | 300 | 261.72 | 407.26 | 632.01 | 598.13 | 379.70
UF1 | 500 | 369.13 | 572.05 | 625.15 | 810.61 | 496.00
UF2 | 100 | 175.96 | 257.42 | 577.15 | 271.65 | 268.19
UF2 | 300 | 313.04 | 460.12 | 901.47 | 733.59 | 458.52
UF2 | 500 | 450.95 | 652.48 | 808.19 | 940.94 | 618.54
WFG1 | 100 | 66.41 | 88.65 | 216.48 | 91.92 | 129.69
WFG1 | 300 | 270.90 | 366.31 | 664.18 | 529.00 | 436.69
WFG1 | 500 | 702.05 | 918.97 | 1420.00 | 1380.80 | 1192.80
WFG2 | 100 | 61.70 | 85.43 | 199.48 | 79.09 | 101.67
WFG2 | 300 | 236.76 | 344.46 | 564.17 | 440.87 | 566.61
WFG2 | 500 | 586.46 | 881.83 | 1113.90 | 1107.10 | 1211.00
BT1 | 100 | 50.26 | 70.73 | 147.84 | 801.63 | 119.52
BT1 | 300 | 206.88 | 356.95 | 559.30 | 499.84 | 493.77
BT1 | 500 | 568.67 | 936.16 | 1245.70 | 1526.20 | 1038.00
BT2 | 100 | 67.98 | 91.76 | 185.13 | 101.60 | 151.59
BT2 | 300 | 335.92 | 449.34 | 619.96 | 541.92 | 526.43
BT2 | 500 | 886.51 | 1193.81 | 1559.92 | 1732.30 | 1193.90
LSMOP1 | 300 | 196.34 | 292.76 | 475.78 | 418.54 | 289.54
LSMOP2 | 300 | 194.60 | 307.84 | 578.82 | 429.64 | 296.92
LSMOP3 | 300 | 205.80 | 299.76 | 571.12 | 457.34 | 310.92
LSMOP4 | 300 | 205.80 | 299.76 | 571.12 | 457.34 | 310.92
Table 6. Average Runtime (s) of Five Algorithms on the Test Suites.

Test Suite | D | MOEA/D | LMEA | IM-MOEA/D | FDV | LSMOEA-TM
DTLZ | 100 | 163.47 | 232.56 | 468.09 | 237.06 | 255.88
DTLZ | 300 | 255.47 | 387.12 | 581.66 | 553.52 | 379.97
DTLZ | 500 | 359.72 | 554.70 | 591.68 | 744.48 | 499.60
UF | 100 | 170.09 | 245.30 | 528.12 | 262.98 | 252.13
UF | 300 | 286.20 | 440.89 | 733.59 | 643.87 | 437.47
UF | 500 | 412.96 | 612.12 | 693.62 | 869.50 | 587.71
WFG | 100 | 67.56 | 88.97 | 208.45 | 84.90 | 115.65
WFG | 300 | 262.25 | 359.12 | 605.24 | 472.18 | 475.27
WFG | 500 | 651.99 | 911.01 | 1228.20 | 1203.77 | 1066.97
BT | 100 | 59.95 | 82.81 | 166.00 | 333.91 | 117.82
BT | 300 | 282.39 | 409.16 | 600.34 | 542.06 | 499.83
BT | 500 | 812.32 | 957.18 | 1061.06 | 1717.90 | 1103.87
LSMOP | 300 | 194.35 | 303.05 | 570.10 | 440.48 | 303.01