Article

Angle-Based Dual-Association Evolutionary Algorithm for Many-Objective Optimization

1 Faculty of Humanities and Arts, Macau University of Science and Technology, Macao 999078, China
2 Management School, The University of Sheffield, Sheffield S10 2TG, UK
3 James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1757; https://doi.org/10.3390/math13111757
Submission received: 15 April 2025 / Revised: 22 May 2025 / Accepted: 23 May 2025 / Published: 26 May 2025

Abstract

As the number of objectives increases, the performance of multi-objective optimization algorithms declines significantly. To address this challenge, this paper proposes an Angle-based dual-association Evolutionary Algorithm for Many-Objective Optimization (MOEA-AD). The algorithm enhances the exploration of unknown regions by associating empty subspaces with the fittest solutions through an angle-based dual-association strategy. Additionally, a novel quality assessment scheme is designed to evaluate the convergence and diversity of solutions, introducing dynamic penalty coefficients to balance the two. Adaptive hierarchical sorting of solutions is performed based on the global diversity distribution to ensure the selection of optimal solutions. The performance of MOEA-AD is validated on several classic benchmark problems (with up to 20 objectives) and compared with five state-of-the-art many-objective evolutionary algorithms. Experimental results demonstrate that the algorithm exhibits significant advantages in both convergence and diversity.

1. Introduction

Many complex problems in the real world can be abstracted as many-objective optimization problems (MaOPs) [1]. Such problems typically involve more than three conflicting objectives and can be mathematically described as follows:
$$\text{Minimize } F(x) = \left(f_1(x), \ldots, f_m(x)\right) \quad \text{s.t. } x \in \Omega \subseteq \mathbb{R}^n$$
Here, $x = (x_1, x_2, \ldots, x_n)$ is an n-dimensional decision variable in the decision space $\Omega$, and $F(x)$ comprises the $m$ ($m > 3$) conflicting objective functions to be optimized simultaneously in the m-dimensional objective space. Because the objectives conflict, no single solution can optimize all of them at the same time. In multi-objective optimization, an effective strategy therefore produces a set of Pareto-optimal solutions that trade off the competing objectives [2]. Mathematically, a solution x is said to Pareto-dominate a solution y (denoted $x \prec y$) when x performs at least as well as y in every objective and strictly better in at least one.
A solution $x^*$ is Pareto optimal if no solution $x \in \Omega$ satisfies $x \prec x^*$. The set of all Pareto-optimal solutions in the decision space constitutes the Pareto-optimal set, and their corresponding objective vectors in the objective space form the Pareto front.
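The dominance relation above is straightforward to implement. The following is a minimal Python sketch for minimization problems (the function name `dominates` is ours, not from the paper):

```python
import numpy as np

def dominates(fa, fb):
    """Return True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))
```

Note that the relation is not total: for conflicting vectors such as `[1, 2]` and `[2, 1]`, neither dominates the other, which is exactly why selection pressure weakens as the number of objectives grows.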
In various practical domains, a wide range of complex real-world problems can be modeled as many-objective optimization problems (MaOPs). Typical examples include the optimal design of water distribution systems [3], search-based recommendation systems [4], and route planning problems [5]. The proven effectiveness of evolutionary algorithms (EAs) in solving MaOPs in diverse application areas, such as magnetism research [6], supply chain optimization [7], task allocation in hierarchical multi-agent systems [8], wind-thermal economic emission dispatch [9], and vehicular navigation in road networks [3], has drawn considerable attention from the research community [10,11].
Despite these advances, recent empirical studies have shown that the performance of traditional multi-objective evolutionary algorithms (MOEAs) tends to degrade significantly as the number of objectives increases [12]. This degradation is primarily attributed to two major challenges. First, the effectiveness of the Pareto dominance relation diminishes in high-dimensional objective spaces: as more objectives are considered, a large proportion of the population becomes non-dominated, which reduces the selection pressure toward the true Pareto front [13]. Dominance-based sorting thus loses its discriminative power and can no longer guide the evolutionary process effectively. Second, maintaining a well-distributed population becomes increasingly difficult, since a population of fixed size must cover an objective space whose extent grows rapidly with the number of objectives.
As the number of objectives increases, many traditional metaheuristic algorithms, especially those based on Pareto dominance, face significant challenges in solving many-objective optimization problems. These challenges primarily include the loss of selection pressure, difficulty in maintaining diversity, and imbalance between convergence and diversity. In high-dimensional objective spaces, traditional algorithms often fail to effectively explore sparse regions and tend to produce incomplete coverage of the solution space or premature convergence. To overcome these issues, we propose an angle-based evolutionary algorithm for many-objective optimization problems (MaOPs). Compared with existing reference-point/vector-based approaches, the main contributions of this paper are summarized as follows:
(1)
The proposed angle-based twofold association strategy incorporates null subspace consideration, establishing associations with the most fitness-appropriate solutions. This approach not only enhances the probability of exploring uncharted regions but also maintains convergence guarantees.
(2)
This paper presents a novel quality assessment scheme that has been developed to quantify solution quality within subspaces. This scheme initially evaluates both convergence and diversity metrics for each solution, with the diversity component further decomposed into global and local diversity measures. The incorporation of dynamic penalty coefficients serves to penalize solutions with inferior global diversity while preserving those located in sparse regions. Additionally, an adaptive two-phase sorting mechanism has been implemented to simultaneously maintain convergence and diversity characteristics.
(3)
To validate the efficacy of MOEA-AD, a comprehensive series of simulation experiments were conducted. The experimental results demonstrate that MOEA-AD outperforms existing MOEAs in addressing MaOPs, primarily attributed to its dual-phase sorting strategy. This innovative approach not only facilitates superior search space exploration and population diversity maintenance but also ensures robust convergence characteristics, thereby significantly enhancing the algorithm’s overall performance.
The remainder of this paper is organized as follows. Section 2 introduces the related work and background knowledge. Section 3 elaborates on the framework of the proposed algorithm and the specific implementation of its components. Section 4 presents the experimental results and provides detailed analysis and discussion. Finally, Section 5 concludes the paper.

2. Related Work and Background Knowledge

This section is divided into two main components. Section 2.1 examines the existing approaches and advancements in addressing MaOPs challenges. Section 2.2 outlines the essential background information on the relevant algorithms, facilitating a deeper understanding of the following discussions.

2.1. Related Work

The scalability of evolutionary algorithms in handling many-objective optimization problems (MaOPs) remains a major challenge due to the increasing number of objectives, which significantly reduces algorithmic effectiveness. Existing research efforts to address this issue can generally be categorized into four main strategies as follows: enhanced Pareto dominance mechanisms, problem simplification, objective reduction, and preference-driven optimization.

2.1.1. Enhanced Pareto Dominance Strategies

The first class of approaches seeks to improve Pareto dominance mechanisms to mitigate the declining selection pressure in high-dimensional objective spaces. As the number of objectives increases, most solutions tend to become non-dominated, which weakens the driving force toward convergence. To address this issue, various enhanced dominance techniques have been proposed.
For instance, Aguirre et al. [14] proposed an adaptive ε -ranking strategy, which performs fine-grained sorting after the initial Pareto classification, followed by randomized ε -dominance sampling to promote solution diversity. Jiang et al. [15] introduced a method that uses reference vectors and angle-penalized distances to balance convergence and diversity. Cai et al. [16] developed a dual-distance-based dominance relation leveraging the PBI (Penalty-based Boundary Intersection) function. Grid-based strategies have also proven effective; for example, Yang et al. [17] enhanced selection pressure through grid-based solution partitioning. Additionally, fuzzy logic has been incorporated into dominance assessment, where fuzzy dominance degrees allow for more nuanced fitness evaluation and ranking among solutions [18,19].

2.1.2. Problem Simplification Strategies

The second category focuses on simplifying MaOPs, particularly by identifying and eliminating redundant objectives. This dimensionality reduction helps to ease the computational burden while retaining solution quality.
Vijai et al. [20] proposed a federated learning-assisted NSGA-II framework that uses an optimal point set initialization strategy to enhance solution quality for simplified MaOPs. Meanwhile, Zhu et al. [21] introduced a neural network-based gradient descent method that decouples complex interdependencies between objectives, improving both convergence and diversity. However, simplification methods tend to lose effectiveness when applied to problems with inherently high-dimensional or complex Pareto fronts, limiting their practical scope to moderate-sized MaOPs.

2.1.3. Objective Reduction Strategies

Objective reduction techniques aim to decrease the dimensionality of the objective space by identifying essential objectives while discarding irrelevant or weakly contributing ones. These methods are especially useful in scenarios where many objectives are correlated or redundant.
A pioneering effort in this area is the PCA-NSGA-II algorithm by Deb and Saxena [22], which integrates Principal Component Analysis into the NSGA-II framework to filter out redundant objectives. Singh et al. [23] extended this idea by estimating the effective dimensionality of the Pareto front using extreme points, thereby constructing minimal yet representative objective subsets. Despite their merits, such strategies typically suffer performance degradation in highly complex or dynamic MaOPs, as they may omit crucial information during reduction.

2.1.4. Preference-Driven Optimization Strategies

The final category involves preference-driven approaches, which guide the search toward regions of interest on the Pareto front using decision-maker input. These methods significantly improve computational efficiency by narrowing the solution space according to predefined preferences.
Deb and Kumar [24] developed RD-NSGA-II, which incorporates reference direction guidance into the search process. Thiele et al. [25] proposed PBEA by integrating reference points into the IBEA framework. Further advancing this line of work, Wang et al. [26] introduced PICEA-g, which generates a complete Pareto approximation before integrating decision preferences. Although these approaches demonstrate strong performance in specific scenarios, over-reliance on preference information can cause premature convergence and hinder exploration, as noted by Bechikh et al. [27].

2.2. Basic Definition

Definition 1. 
(Pareto dominance): Given two solutions $x_a$ and $x_b$, suppose the following condition holds:
$$f_i(x_a) \le f_i(x_b),\ \forall i \in \{1, 2, \ldots, m\} \quad \text{and} \quad \exists j \in \{1, 2, \ldots, m\}: f_j(x_a) < f_j(x_b)$$
Then $x_a$ is said to dominate $x_b$ (equivalently, $x_b$ is dominated by $x_a$), denoted as $x_a \prec x_b$.
Definition 2. 
(Pareto-optimal solution): A decision vector $x^* \in \Omega$ is Pareto optimal if there exists no $x \in \Omega$ such that $x \prec x^*$.
Definition 3. 
(Pareto-optimal set): The ensemble encompassing every Pareto-optimal solution is delineated as follows:
$$PS = \{x \in \Omega \mid x \text{ is Pareto optimal}\}$$
Definition 4. 
($z^*$): The ideal point $z^*$ is the vector $z^* = (z_1^*, z_2^*, \ldots, z_m^*)^T$, where each $z_i^*$ denotes the minimum value of $f_i$, for $i = 1, \ldots, m$.
Definition 5. 
($z^{nad}$): The nadir point $z^{nad}$ is the vector $z^{nad} = (z_1^{nad}, z_2^{nad}, \ldots, z_m^{nad})^T$, where each $z_i^{nad}$ denotes the maximum value of $f_i$, for $i \in \{1, 2, \ldots, m\}$.
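Using the ideal and nadir points of Definitions 4 and 5, objective vectors are typically normalized per objective as $\tilde f_i = (f_i - z_i^*)/(z_i^{nad} - z_i^*)$ before association. The following is a minimal Python sketch (the helper name and the division-by-zero guard are our assumptions; the paper estimates both points from the merged population):

```python
import numpy as np

def normalize(F):
    """Normalize an objective matrix F (rows = solutions) into [0, 1] per
    objective using the ideal point z* (column minima) and the nadir point
    z^nad (column maxima)."""
    z_star = F.min(axis=0)           # ideal point z*
    z_nad = F.max(axis=0)            # nadir point z^nad
    span = np.where(z_nad - z_star > 1e-12, z_nad - z_star, 1.0)  # avoid /0
    return (F - z_star) / span
```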
Definition 6. 
(Reference vector generation): The reference vector set $W = \{\lambda_1, \lambda_2, \ldots, \lambda_N\}$ partitions the objective space into N distinct subspaces. The reference vectors are drawn from the unit simplex, and their number is determined by the dimension m of the objective space and the number of divisions H per objective axis, with $N = \binom{H + m - 1}{m - 1}$. Here $\lambda_i = (\lambda_i^1, \lambda_i^2, \ldots, \lambda_i^m)$ denotes the i-th reference vector.
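The count $N = \binom{H+m-1}{m-1}$ corresponds to the Das-Dennis simplex-lattice design used by decomposition-based MOEAs. A small Python sketch using a stars-and-bars enumeration (our own illustrative implementation, not PlatEMO's):

```python
from itertools import combinations
import numpy as np

def reference_vectors(m, H):
    """Generate the Das-Dennis simplex-lattice reference vectors for m
    objectives with H divisions per axis; yields C(H + m - 1, m - 1)
    vectors whose components are multiples of 1/H summing to 1."""
    vectors = []
    # choose m-1 "bar" positions among H + m - 1 slots (stars and bars)
    for bars in combinations(range(H + m - 1), m - 1):
        prev = -1
        counts = []
        for b in bars:
            counts.append(b - prev - 1)
            prev = b
        counts.append(H + m - 2 - prev)
        vectors.append([c / H for c in counts])
    return np.array(vectors)
```

For example, m = 3 and H = 2 yields the six vectors of the two-division simplex lattice, matching $\binom{4}{2} = 6$.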

3. The Proposed MOEA-AD

This section not only elaborates on the angle-based dual-association evolutionary algorithm framework and its components but also provides an in-depth analysis of the proposed innovations, thereby validating the uniqueness and originality of MOEA-AD.

3.1. The Main Framework of MOEA-AD

The overall structure of the MOEA-AD algorithm is outlined in Algorithm 1. The process begins by generating a set of predefined reference vectors. When H < m, Deb et al. [28] recommend using a two-layer reference vector structure with a reduced value of H, which partitions the objective space more uniformly into N distinct subregions. Subsequently, an initial population of size N is randomly generated. Steps 3 and 4 compute the minimum and maximum values of each objective to identify the ideal point z* and the nadir point z^nad, respectively. The main loop of the algorithm (steps 6 to 28) is repeated until the termination criterion, a maximum number of function evaluations, is met. In step 7, the offspring population Q_t is generated by applying genetic operators to the current parent population P_t, namely simulated binary crossover (SBX) [29] and polynomial mutation (PM) [30]. The parent population P_t is then merged with the offspring population Q_t to form R_t (step 8). Step 9 performs non-dominated sorting on R_t, and step 10 updates the ideal and nadir points. To enhance computational efficiency, step 11 normalizes the objective values of all individuals. In step 12, a dual-association process assigns each solution to a reference vector. In step 13, the solutions within each subregion are ranked using the evaluation strategy proposed in this study; the best-performing solution in each subregion is assigned to Q_1, while the remaining solutions are distributed into further levels (Q_2, Q_3, etc.), forming the set S_t. In steps 14 to 19, adaptive selection is carried out in each subregion based on the progress of the iteration: the selected solutions S(i) are evaluated against an adaptive parameter K to decide whether diversity ranking or convergence ranking takes priority.
Finally, in steps 20 to 27, the solutions in S(i) are used level by level to fill the next generation. If the population size N is not reached after all complete levels have been added, the remaining slots are filled by randomly selecting from the next level to ensure population completeness. This mechanism balances convergence and diversity throughout the evolutionary process. The following sections explain the core components of MOEA-AD in detail.
Algorithm 1 The main framework of MOEA-AD
Input: N (population size)
Output: P (final population)
1: W ← Create reference vectors(N);
2: P_0 ← Initialize the population(N);
3: z* ← Initialize the ideal point(P_0);
4: z^nad ← Initialize the nadir point(P_0);
5: t ← 0;
6: while the termination criterion is not fulfilled do
7:   Q_t ← Generate offspring population(P_t);
8:   R_t ← Q_t ∪ P_t;
9:   S_t ← Stratify by Pareto non-domination(R_t);
10:  Update the ideal and nadir points with S_t;
11:  F̃(S_t) ← Normalize using the ideal point and the nadir point(S_t, z*, z^nad);
12:  [S(1), S(2), ..., S(N)] ← Double association(F̃(S_t), W);
13:  Fitness evaluation ranking([S(1), S(2), ..., S(N)]);
14:  S(i) ← Adaptive subpopulation ranking(S(i));
15:  if |S(i)| > K then
16:    S(i) ← Diversity-promoting ranking;
17:  else
18:    S(i) ← Convergence-promoting ranking;
19:  end if
20:  P_{t+1} ← ∅;
21:  i ← 1;
22:  while |P_{t+1}| + |S(i)| < N do
23:    P_{t+1} ← P_{t+1} ∪ S(i);
24:    i ← i + 1;
25:  end while
26:  Random ranking(S(i));
27:  P_{t+1} ← P_{t+1} ∪ S(i)[1 : N − |P_{t+1}|];
28: end while

3.2. The Angle-Based Dual-Association Strategy

Once the entire objective space is divided into N distinct subspaces, the population spread across the objective space must be distributed among these subspaces. The approach of linking each solution in the normalized objective set to its nearest reference vector has been embraced in various recent studies, each with unique features and rationales. In the conventional association strategy [31], if a subspace lacks solutions, meaning it is empty, a solution is randomly picked and linked to that subspace. This implies that the randomly selected solution is allocated to the vacant subspace. This technique is referred to as random association. While random association aims to enhance the exploration of uncharted areas, the random selection of solutions might adversely affect the search process. In algorithms for subspace decomposition based on density [32], every non-dominated solution is linked to a subspace, with the density of each subspace being gauged by the count of solutions connected to it. Nonetheless, the existence of empty subspaces can result in flawed estimations. Moreover, certain approaches do not undertake additional measures and merely disregard these empty subspaces upon their appearance, potentially undermining the diversity of the solutions achieved. This technique of association is known as single association.
To more effectively explore uncharted areas and boost diversity, we have developed a dual association approach. Initially, akin to current techniques for linking empty subspaces [33], every reference vector creates a subspace, and each solution is linked to its nearest subspace. Yet, as previously mentioned, this can lead to empty subspaces and may not preserve the diversity of the solutions acquired. To address this limitation, we implement a secondary association phase. For every empty subspace, we link the nearest solution by employing the perpendicular Euclidean distance d 2 (between each solution and each reference vector) to gauge the distance between the solution and the subspace (denoted by the respective reference vector). This method markedly enhances the diversity of the solutions obtained, as elaborated subsequently.
As illustrated in Figure 1, $\tilde F(x) = (\tilde f_1(x), \ldots, \tilde f_m(x))$ denotes the normalized objective vector, the origin o represents the ideal point, and u denotes the projection of $\tilde F(x)$ onto the reference vector $\lambda_i$. Let $d_1^i(x)$ denote the distance between the ideal point o and u, while $d_2^i(x)$ represents the perpendicular Euclidean distance from $\tilde F(x)$ to $\lambda_i$. The precise definitions of $d_1^i(x)$ and $d_2^i(x)$ are as follows:
$$d_1^i(x) = \left\| \tilde F(x)^T \lambda_i \right\| / \left\| \lambda_i \right\|$$
$$d_2^i(x) = \left\| \tilde F(x) - d_1^i(x) \left( \lambda_i / \left\| \lambda_i \right\| \right) \right\|$$
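For a single solution and reference vector, the two distances can be computed directly from the definitions above (a Python sketch; the function name is hypothetical):

```python
import numpy as np

def projection_distances(f, lam):
    """Distances of a normalized objective vector f w.r.t. reference vector lam:
    d1 = length of the projection of f onto lam (convergence measure),
    d2 = perpendicular Euclidean distance from f to the ray along lam
         (diversity measure)."""
    f = np.asarray(f, dtype=float)
    lam = np.asarray(lam, dtype=float)
    d1 = float(f @ lam) / np.linalg.norm(lam)
    d2 = float(np.linalg.norm(f - d1 * lam / np.linalg.norm(lam)))
    return d1, d2
```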
The initial association step follows the conventional approach: for every solution $x_i \in R_t$, the perpendicular Euclidean distance $d_2^j(x_i)$ between $\tilde F(x_i)$ and each reference vector $\lambda_j$ is computed, and $x_i$ is linked to the closest subspace. In the subsequent step, any empty subspace lacking associated solutions is connected to its nearest solution by comparing $d_2$. While the traditional dual-association approach significantly enhances diversity, it exhibits somewhat limited convergence on certain problems. The angle-based dual-association approach introduced in this study mitigates this issue. During the initial phase, solutions are connected following the conventional dual-association technique: every solution in the collection is associated with its corresponding reference vector in the normalized objective space by means of the perpendicular Euclidean distance $d_2^j(x)$.
During the second phase, we first check whether any subspace is empty. If so, we calculate the angle $angle(x_i, \lambda_j)$ between every solution $x_i \in R_t$ and the reference vector $\lambda_j$ ($j = 1, 2, \ldots, N$) of each vacant subspace, defined as follows:
$$angle(x_i, \lambda_j) = \arccos \frac{f(x_i)^T \lambda_j}{norm(f(x_i)) \cdot norm(\lambda_j)}$$
$$p(x_i, \lambda_j) = \sin\left(angle(x_i, \lambda_j)\right) \cdot d_2^j(x_i)$$
In this context, $f(x_i)$ denotes the objective vector of $x_i$, and $norm(\cdot)$ denotes the Euclidean norm, i.e., the distance from a point to the origin of the objective space. The sine of $angle(x_i, \lambda_j)$ is computed and multiplied by the perpendicular Euclidean distance $d_2^j(x_i)$, yielding $p(x_i, \lambda_j)$. This product is the evaluation criterion used to identify the best solution to link with an unoccupied subspace. If no empty subspace exists, the secondary association phase is omitted. By combining distance and angular information, solutions better aligned with the empty subspace are prioritized. This method ensures population convergence while boosting diversity, thereby strengthening the algorithm's robustness.
The angle-based dual-association strategy (Algorithm 2) associates each solution in the normalized merged population $\tilde F(R_t)$ with one of the predefined reference vectors in set W. First, all reference vectors are initialized with empty associated solution sets $S(1), S(2), \ldots, S(N)$. Then, for each solution $x_i \in \tilde F(R_t)$, the perpendicular Euclidean distance $d_2^j(x_i)$ between its objective vector and each reference vector $\lambda_j \in W$ is calculated, and $x_i$ is added to the set S(k) of the reference vector $\lambda_k$ with the minimum distance. Finally, every empty subspace is associated with the solution minimizing the penalty score $p(x_i, \lambda_j)$, which combines distance and angle information.
Algorithm 2 The angle-based dual-association strategy
Input: F̃(R_t) (the normalized merged population), W (reference vector set)
Output: {S(1), S(2), ..., S(N)} (the solutions associated with each reference vector)
1: {S(1), S(2), ..., S(N)} ← {∅, ∅, ..., ∅};
2: for each x_i ∈ F̃(R_t) do
3:   for each λ_j ∈ W do
4:     Calculate the perpendicular Euclidean distance d_2^j(x_i) between F̃(x_i) and λ_j;
5:   end for
6:   k = argmin_{j ∈ {1, ..., |W|}} d_2^j(x_i);
7:   S(k) = S(k) ∪ {x_i};
8: end for
9: for each S(j) do
10:   if isempty(S(j)) then
11:     Calculate the angle angle(x_i, λ_j) between each F̃(x_i) and λ_j;
12:     Compute the product p(x_i, λ_j) of d_2^j(x_i) and the angle term;
13:     q = argmin_{i ∈ {1, ..., |R_t|}} p(x_i, λ_j);
14:     S(j) = S(j) ∪ {x_q};
15:   end if
16: end for
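Putting the two phases together, the dual association can be sketched as follows (a simplified Python illustration operating on index lists; it assumes the second-phase score $p = \sin(angle) \cdot d_2$ described in the text, and the function name is ours):

```python
import numpy as np

def dual_association(F, W):
    """Sketch of angle-based dual association: each solution (row of F,
    normalized objectives) is first linked to the reference vector with
    minimum perpendicular distance d2; every subspace left empty is then
    linked to the solution minimizing p = sin(angle) * d2.
    Returns a list of index lists, one per reference vector."""
    def d2(f, lam):
        d1 = f @ lam / np.linalg.norm(lam)
        return np.linalg.norm(f - d1 * lam / np.linalg.norm(lam))

    S = [[] for _ in range(len(W))]
    for i, f in enumerate(F):                       # first association phase
        k = int(np.argmin([d2(f, lam) for lam in W]))
        S[k].append(i)
    for j, lam in enumerate(W):                     # second phase: empty subspaces
        if not S[j]:
            scores = []
            for f in F:
                cos_a = f @ lam / (np.linalg.norm(f) * np.linalg.norm(lam))
                ang = np.arccos(np.clip(cos_a, -1.0, 1.0))
                scores.append(np.sin(ang) * d2(f, lam))
            S[j].append(int(np.argmin(scores)))
    return S
```

Note that a solution filling an empty subspace keeps its first-phase association as well, so no subspace is left without a representative.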

3.3. Adaptive Evaluation Strategy

In order to holistically assess the potential of each solution, a novel quality evaluation function has been developed for solutions within each subspace S ( i ) . This function incorporates metrics for both convergence and diversity. The evaluation function for a solution x in subspace S ( i ) is defined as follows:
$$Z(x) = d_1^i(x) + \frac{u}{m} \cdot d_2^i(x) + mean_d(x, y)$$
In this context, $d_1^i(x)$ measures the distance from the ideal point to the projection of solution x onto its reference vector and serves as a convergence metric; a lower $d_1^i(x)$ signifies better convergence. The perpendicular Euclidean distance $d_2^i(x)$ between solution x and its associated reference vector $\lambda_i$ measures the solution's contribution to global diversity; a lower $d_2^i(x)$ reflects a better distribution. $mean_d(x, y)$ denotes the mean distance between solution x and all other solutions within subspace S(i) and captures local diversity. u acts as a penalty coefficient aimed at discouraging solutions in densely populated areas: its value grows with the number of solutions in subspace S(i), and in this work u is set to |S(i)|, while m denotes the number of problem objectives. As the number of objectives increases, the balance between convergence and diversity becomes harder to maintain, and adaptively adjusting the weight of $d_2^i(x)$ provides a more effective balance in high-dimensional scenarios.
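Under our reading of the evaluation function (with weight u/m on $d_2$ and u = |S(i)|, both assumptions stated above), the per-solution score can be sketched as follows (hypothetical interface; rows of `F_sub` are the normalized objective vectors of the solutions in one subspace):

```python
import numpy as np

def quality(F_sub, idx, lam, m):
    """Quality Z(x) of solution F_sub[idx] within its subspace (a sketch):
    Z = d1 (convergence) + (u/m) * d2 (global diversity)
        + mean distance to the other subspace members (local diversity),
    where u = |S(i)| penalizes crowded subspaces. Lower Z is better."""
    f = F_sub[idx]
    d1 = f @ lam / np.linalg.norm(lam)
    d2 = np.linalg.norm(f - d1 * lam / np.linalg.norm(lam))
    others = np.delete(F_sub, idx, axis=0)
    mean_d = np.mean(np.linalg.norm(others - f, axis=1)) if len(others) else 0.0
    u = len(F_sub)                   # penalty grows with subspace crowding
    return d1 + (u / m) * d2 + mean_d
```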

3.4. Adaptive Two-Stage Sorting

After evaluating all solutions as described in Section 3.3, the solutions in each subspace S(i) (i = 1, 2, ..., N) are sorted according to Z(x), yielding $\tilde S(i)$. An appropriate number of solutions $r_j$ is then selected from $\tilde S(i)$ according to the adaptive formula, expressed as follows:
$$r_j = \left\lceil \frac{\ln\left(G / G_{\max} + 1\right)}{2} \cdot (e - 1) \cdot \left| \tilde S(i) \right| \right\rceil$$
where G and $G_{\max}$ denote the current and maximum generation numbers, respectively.
$$k = \frac{N}{2m}$$
$$\tilde S(1{:}r_j) \leftarrow \begin{cases} d_2\text{-rank}\left(\tilde S(1{:}r_j)\right), & r_j \ge k \\ d_1\text{-rank}\left(\tilde S(1{:}r_j)\right), & r_j < k \end{cases}$$
The top $r_j$ solutions are selected from each $\tilde S(i)$ (i = 1, 2, ..., N), and $r_j$ is compared with k to choose the more suitable sorting criterion. If $r_j \ge k$, the first $r_j$ solutions in $\tilde S(i)$ are re-sorted by $d_2^i(x)$ to promote diversity; if $r_j < k$, they are re-sorted by $d_1^i(x)$ to promote convergence, better balancing the two goals. Here k is defined as the population size N divided by twice the number of objectives. In contrast to conventional selection methods, the two-stage sorting introduced in this study adaptively organizes solutions according to the iteration count and the number of problem objectives. By integrating both diversity and convergence metrics, it better preserves solutions in sparsely populated areas, leading to an improved equilibrium between diversity and convergence.
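The second sorting stage can be sketched as follows (a Python sketch; the interface with precomputed per-solution d1/d2 scores is our assumption, and $r_j$ is taken as an input rather than recomputed from the adaptive schedule):

```python
def second_stage_sort(S_sorted, r_j, N, m, d1, d2):
    """Re-rank the top r_j entries of the Z-sorted subspace list S_sorted:
    by d2 (diversity) when r_j >= k, by d1 (convergence) when r_j < k,
    with k = N / (2 * m). d1 and d2 map each solution id to its score."""
    k = N / (2 * m)
    head = list(S_sorted[:r_j])
    key = d2 if r_j >= k else d1     # choose the ranking criterion
    head.sort(key=lambda s: key[s])  # ascending: lower score ranks first
    return head + list(S_sorted[r_j:])
```

For instance, with N = 4 and m = 2 (so k = 1), selecting r_j = 2 solutions triggers the diversity-promoting branch.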

3.5. Handling Many-Objective Optimization Problems

To effectively address many-objective optimization problems (MaOPs), the proposed MOEA-AD adopts the following strategies:
(1)
Angle-based dual association: The algorithm assigns solutions to reference vectors by considering both perpendicular distance and the angular relationship, ensuring a well-spread distribution even in high-dimensional objective spaces.
(2)
Subspace-based quality evaluation: Each solution's quality is evaluated not only by its convergence toward the Pareto front but also by its diversity, which is further divided into global diversity, measuring the solution's contribution to the overall spread, and local diversity, maintaining the distribution within subregions.
(3)
Dynamic penalty mechanism: A penalty is applied to solutions with poor global diversity, protecting sparse regions and promoting exploration in less explored areas.
(4)
Adaptive two-stage selection: To balance convergence and diversity, the algorithm uses a two-stage selection strategy. First, solutions are ranked based on their convergence and diversity scores; then, an adaptive mechanism selects the most representative solutions for the next generation.
These strategies collectively help MOEA-AD to maintain a balance between convergence and diversity, effectively tackling the challenges of solving MaOPs.

4. Experimental Study

This section outlines the experimental framework designed to evaluate the performance of the proposed MOEA-AD in addressing many-objective optimization challenges. Initially, a concise overview of the test problems and experimental configurations is provided. To ensure an equitable comparison, all competing algorithms are executed on the PlatEMO [34] platform (MATLAB R2020b, 64-bit) under identical hardware conditions. The experimental outcomes are then analyzed and discussed to underscore the efficacy of the proposed MOEA-AD. Finally, ablation studies confirm the competitiveness of the introduced innovations.

4.1. Test Problems

To assess the performance of the six algorithms examined in this study, the DTLZ (DTLZ1-DTLZ7) [35] and WFG (WFG1-WFG9) [36] test suites were employed. These suites are extensively used as benchmarks in many-objective optimization. They feature a range of intricate properties, including degeneracy, bias, large-scale, non-separable, and partially separable decision variables in the decision space, along with linear, convex, concave, mixed geometric structures, and multimodal Pareto fronts (PF) in the objective space. These varied properties present considerable challenges to algorithm performance. Moreover, the objectives and decision variables in these test problems can be adjusted to any required scale.
In this experiment, the number of objectives for the test problems ranges from 5 to 20, i.e., m ∈ {5, 8, 12, 16, 20}. For the DTLZ test suite, the number of decision variables is determined by the formula n = m + k − 1, where k may vary across test problems. Following the recommendations in [29,35], k is set to 5 for DTLZ1, 10 for DTLZ2 to DTLZ6, and 20 for DTLZ7. For the WFG test suite, as suggested by [36,37], the number of decision variables for all test problems is set to 24, and the position-related parameter is set to m − 1.

4.2. Comparative Algorithms

To assess the effectiveness of the proposed MOEA-AD algorithm in addressing many-objective optimization problems (MaOPs), this research conducted comparative experiments with five prominent many-objective evolutionary algorithms (MaOEAs). NSGA-III [28] was selected as the benchmark algorithm, given its extensive application in solving MaOPs. ANSGA-III [38] was incorporated for its dynamic parameter and strategy adaptation, which enhances performance across diverse problems. SPEA/R [33], a decomposition-based algorithm that integrates Pareto dominance, was also included. Furthermore, the indicator-driven MaOEA-IGD [39] and the recently introduced MaOEA-IT [40] were part of the experiments, with MaOEA-IT utilizing novel techniques to favor solutions exhibiting strong convergence and diversity.

4.3. Experimental Settings

This subsection outlines the experimental configurations and details the parameter settings for each algorithm under comparison.
(1) Execution and stopping condition: Every algorithm is run 20 times independently on each test case. Each run terminates upon reaching the maximum number of function evaluations (MFEs). For the DTLZ1-7 and WFG1-9 test suites, the MFEs are set to 99,960; 99,990; 100,100; 100,386; and 99,960 for objective counts m of 5, 8, 12, 16, and 20, respectively.
(2) Statistical evaluation: The Wilcoxon rank-sum test is used to assess the statistical significance of differences between MOEA-AD and the five other algorithms, at a significance level of 0.05. In all tables, +, −, and ≈ indicate that the corresponding algorithm performs significantly better than, significantly worse than, or comparably to MOEA-AD, respectively, while gray highlighting denotes the best value achieved on the current test problem.
(3) Population size: The population size N is determined by the parameter H rather than being set arbitrarily. For problems involving 8, 12, 16, and 20 objectives, a two-layer reference vector generation approach with reduced H values is employed to generate intermediate reference vectors. To maintain fairness in comparisons, identical population sizes are used across all algorithms.
(4) Crossover and mutation parameters: All compared algorithms use simulated binary crossover (SBX) [29] and polynomial mutation (PM) [30]. For SBX, the crossover probability p_c is set to 1.0 and the distribution index η_c to 20. For PM, the distribution index η_m and mutation probability p_m are set to 20 and 1/n, respectively. Further algorithm parameter settings are given in Table 1.
(5) Evaluation metrics: Two metrics, IGD and HV, are used to assess the convergence and diversity of the algorithms. IGD quantifies the average distance from sampled points on the true Pareto front (PF) to the algorithm's solutions; a lower IGD value reflects better convergence and diversity. HV is computed using PlatEMO [34], with solutions dominated by the reference point omitted from the calculation; a higher HV value indicates better performance.
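As a concrete illustration of the IGD definition above, the following minimal sketch computes IGD with Euclidean distances (the reported results use PlatEMO's implementations of both metrics; this is only an explanatory reimplementation):

```python
import math

def igd(ref_points, solutions):
    """Inverted generational distance: mean Euclidean distance from each
    sampled point on the true PF to its nearest obtained solution."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(r, s) for s in solutions) for r in ref_points) / len(ref_points)
```

A front that exactly covers the reference sample yields IGD = 0, while missing regions of the PF inflate the average, which is why IGD captures diversity as well as convergence.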

4.4. Performance Comparison Analysis on the DTLZ Test Suite

The Wilcoxon rank-sum test results in Table 2 indicate that MOEA-AD achieves significantly better IGD values than NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT in most of the 35 test instances (17, 22, 21, 24, and 35 instances, respectively). In contrast, NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT achieve better IGD results than MOEA-AD in only 11, 7, 10, 6, and 0 instances, respectively.
Similarly, the Wilcoxon rank-sum test results in Table 3 reveal that MOEA-AD demonstrates significantly better HV performance than NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT in the majority of the 35 test instances (15, 17, 21, 26, and 35 instances, respectively). Conversely, the HV results of NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT surpass those of MOEA-AD in only 12, 9, 12, 3, and 0 instances, respectively. These findings highlight MOEA-AD’s consistent dominance in both IGD and HV metrics across most test scenarios.
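The significance claims above rely on the Wilcoxon rank-sum test at the 0.05 level. A self-contained sketch of the two-sided test with a large-sample normal approximation is shown below (mid-ranks are used for ties; the tie correction in the variance is omitted for brevity, so this is an approximation of what a statistics library would report):

```python
import math
from itertools import chain

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    combined = sorted(chain(x, y))
    # assign each distinct value the average (mid) rank of its tie group
    ranks, i = {}, 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    w = sum(ranks[v] for v in x)                      # rank sum of first sample
    mu = n1 * (n1 + n2 + 1) / 2                       # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # std. dev. (no tie correction)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))              # two-sided p-value
    return z, p
```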
In the DTLZ2 benchmark problem, MOEA-AD performs best in the 16- and 20-objective scenarios. For a clearer visualization, Figure 2 shows the parallel coordinate plots of the final solutions obtained by each algorithm on the 20-objective DTLZ2 instance. The figure reveals that MOEA-AD's approximate Pareto front (PF) exhibits better uniformity and convergence on this instance, with SPEA/R a close second, whereas the remaining algorithms fail to converge to the true PF.
The DTLZ3 problem, characterized by its multimodal complexity, poses significant challenges. An analysis of the IGD and HV metrics, as emphasized in Table 2 and Table 3, indicates MOEA-AD’s dominance in 12- and 20-objective instances. MOEA-AD surpasses both MaOEA-IGD and MaOEA-IT, a success potentially linked to its dual-association strategy that ensures robust diversity among solutions.
The DTLZ5 and DTLZ6 benchmark problems have degenerate PFs; however, a non-degenerate segment emerges in the PF when the objective count exceeds four, which significantly affects the performance of the compared algorithms on degenerate MaOPs. On these two problems, MOEA-AD consistently delivers superior IGD values for the 5-, 8-, and 12-objective instances, underscoring the robustness and versatility of its dual-association strategy. Regarding the HV metric, MOEA-AD outperforms the competing algorithms on such problems, as its collaborative mechanism maintains a sound balance between convergence and diversity.
A thorough examination of Table 2 and Table 3 and Figure 2 reveals that while MOEA-AD may not secure top results in certain objective test scenarios, it consistently excels across the majority of test problems. This consistent performance further validates MOEA-AD’s advantage over the five rival algorithms.

4.5. Performance Comparison Analysis on the WFG Test Suite

The statistical analysis of IGD and HV metrics, as presented in Table 4 and Table 5, demonstrates the superior performance of our proposed MOEA-AD over the other five algorithms in the majority of cases. Based on the Wilcoxon rank-sum test results in the final row of Table 4, MOEA-AD achieves significantly better IGD values than NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT across most of the 45 test instances (26, 25, 27, 41, and 45 instances, respectively). Conversely, the IGD results of NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT surpass those of MOEA-AD in only a limited number of instances (11, 8, 10, 0, and 0 instances, respectively).
Similarly, the Wilcoxon rank-sum test results in the last row of Table 5 indicate that MOEA-AD’s HV values are significantly higher than those of NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT in the majority of the 45 test instances (25, 29, 29, 40, and 42 instances, respectively). In contrast, the HV results of NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT outperform those of MOEA-AD in only a small subset of instances (8, 8, 8, 0, and 0 instances, respectively).
For a more detailed comparison between MOEA-AD and the five competing algorithms, Figure 3 illustrates the distribution of solutions generated by the six algorithms on the 16-objective WFG3 benchmark problem. The visualization indicates that MaOEA-IGD and MaOEA-IT yield the least favorable results, while NSGA-III, ANSGA-III, and SPEA/R produce solutions relatively closer to the Pareto front (PF). MOEA-AD achieves the closest proximity to the PF, further underscoring its competitiveness. The following paragraphs analyze these findings in more detail.
WFG3 is the connected counterpart of WFG2, whose PF comprises multiple disjoint convex segments, and it features non-separable variables. Its linear and degenerate PF presents a formidable challenge for reference vector-based algorithms. MOEA-AD excels in the 8-, 12-, 16-, and 20-objective test cases, with the exception of the 5-objective scenario, where NSGA-III and ANSGA-III outperform it. These findings underscore the efficacy of the proposed MOEA-AD in bolstering convergence and preserving diversity.
WFG4 through WFG9 share a hyper-elliptical PF in the objective space, yet they exhibit distinct characteristics in the decision space. Notably, WFG4's multimodal landscape, characterized by "hill sizes", tests an algorithm's ability to escape local optima. MOEA-AD holds a pronounced edge in the 5-, 8-, 16-, and 20-objective cases, with the exception of the 12-objective instance, where NSGA-III takes the lead. On WFG4, MOEA-AD attains the best HV values for all objective counts.
On WFG6, a non-separable problem, MOEA-AD leads in IGD values across all test instances except the 16-objective case, where SPEA/R prevails. The HV metric shows that SPEA/R's performance is on par with MOEA-AD in the 5- and 12-objective instances, highlighting MOEA-AD's strength in diversity and convergence, particularly in high-dimensional optimization challenges.
WFG7 to WFG9 incorporate specific biases to test algorithmic diversity. MOEA-AD consistently ranks first in all test instances, with SPEA/R, NSGA-III, and ANSGA-III trailing behind. In contrast, MaOEA-IGD and MaOEA-IT falter in these scenarios.

4.6. Performance Comparison Analysis on Real-World Problems

To further demonstrate the effectiveness of the proposed MOEA-AD algorithm, five representative real-world benchmark problems were selected for comparative experiments: the Disc Brake Design Problem (DBDP) [41], with two objectives and four constraints; the Car Side Impact Design Problem (CSIDP) [42], with three objectives and nine constraints; the Gear Train Design Problem (GTDP) [38], which optimizes gear parameters such as the number of teeth, module, and transmission ratio to satisfy criteria like efficiency, compactness, and cost; and two planar truss structures frequently used in engineering mechanics, the Four-Bar Plane Truss (FBPT) [11] and the Two-Bar Plane Truss (TBPT) [43], in which both the members and the applied forces are confined to a single plane. Further details on these problems are available in the cited original references.
Since real-world problems lack a known true Pareto front, the Hypervolume (HV) indicator is used to evaluate algorithm performance, with higher values indicating better performance. Table 6 presents the HV values of the six algorithms on the five real-world problems. MOEA-AD outperforms NSGA-III, ANSGA-III, SPEA/R, MaOEA-IGD, and MaOEA-IT on three, four, four, five, and three problems, respectively, and achieves the best HV value on three of the five test problems. This further demonstrates the strong practicality and competitiveness of MOEA-AD.
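For the two-objective DBDP instance, HV reduces to a sum of rectangle areas. The sketch below illustrates this computation for a minimization problem, discarding points that do not dominate the reference point as noted in Section 4.3 (the reported values are computed with PlatEMO's HV routine; this is only an explanatory special case):

```python
def hv_2d(points, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. reference point ref."""
    # keep only points strictly dominating the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:          # ascending f1; a valid front descends in f2
        if f2 < prev_f2:        # skip points dominated by an earlier one
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```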

5. Conclusions

This paper proposes an Angle-based Dual-Association Evolutionary Algorithm (MOEA-AD) for many-objective optimization problems. The algorithm employs a dual-association strategy based on angular information to associate empty subspaces with the most suitable solutions, thereby promoting the exploration of unexplored regions while maintaining good convergence performance. In addition, a novel quality evaluation framework for subspace solutions is developed. This framework first assesses the convergence and diversity of each solution, with diversity further divided into global and local aspects.
(1) To protect solutions in sparse regions, the framework introduces a dynamic penalty factor that penalizes solutions with insufficient global diversity.
(2) The algorithm also adopts an adaptive two-stage sorting mechanism, which effectively balances convergence and diversity, leading to favorable optimization performance.
(3) However, despite its innovative design, the efficiency of the proposed method decreases as the number of objectives increases.
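As a minimal illustration of the angle-based association underlying MOEA-AD, the sketch below assigns each solution to the reference vector subtending the smallest angle with it. The full dual-association step, which additionally links empty subspaces to the fittest solutions, is omitted here, so this is a simplified sketch rather than the complete procedure:

```python
import math

def angle(a, b):
    """Angle (radians) between two objective vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def associate(solutions, ref_vectors):
    """Map each solution index to the index of its closest reference vector."""
    return {i: min(range(len(ref_vectors)),
                   key=lambda j: angle(s, ref_vectors[j]))
            for i, s in enumerate(solutions)}
```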
Future research on MOEA-AD is expected to focus on several key directions aimed at comprehensively improving its performance and expanding its application scope. To address the challenges posed by high-dimensional objective spaces, effective dimensionality reduction techniques and diversity preservation strategies can be introduced to enhance solution distribution quality. In addition, by incorporating specialized constraint-handling mechanisms, the adaptability of MOEA-AD to constrained optimization problems can be further strengthened. Moreover, deploying the algorithm in real-world engineering and manufacturing applications, combined with adaptive parameter tuning, will not only improve its practical utility but also provide solid support for experimental validation and engineering implementation.

Author Contributions

Conceptualization, X.W. and J.C.; Data curation, X.W. and J.C.; Formal analysis, J.C.; Funding acquisition, W.W. and J.C.; Investigation, J.C.; Methodology, J.C.; Project administration, W.W. and J.C.; Resources, J.C.; Software, J.C.; Supervision, X.W. and J.C.; Validation, X.W. and J.C.; Visualization, X.W., H.W. and Z.T.; Writing—original draft, X.W., H.W., Z.T., W.W. and J.C.; Writing—review and editing, X.W., H.W., Z.T., W.W. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All experimental data in the article were obtained through testing experiments conducted on the test sets (WFG and DTLZ) of the PlatEMO 4.6 platform. The access link is https://github.com/BIMK/PlatEMO (accessed on 19 September 2023).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
N: Population size
M: Number of objectives
z*: Ideal point
z^nad: Nadir point
d: Perpendicular distance to the reference vector
θ: Angle between a solution vector and a reference vector
Q: Quality score of a solution
GD: Global Diversity
LD: Local Diversity
PF: Pareto Front
EAs: Evolutionary Algorithms
MaOPs: Many-objective Optimization Problems
MOEA-AD: Angle-based dual-association Evolutionary Algorithm for Many-Objective Optimization
MOEA: Multi-Objective Evolutionary Algorithm

References

  1. Li, G.; Li, L.; Cai, G. A two-stage coevolutionary algorithm based on adaptive weights for complex constrained multiobjective optimization. Appl. Soft Comput. 2025, 173, 112825. [Google Scholar] [CrossRef]
  2. Chao, Y.; Chen, X.; Chen, S.; Yuan, Y. An improved multi-objective antlion optimization algorithm for assembly line balancing problem considering learning cost and workstation area. Int. J. Interact. Des. Manuf. (IJIDeM) 2025, 1–15. [Google Scholar] [CrossRef]
  3. Nuthakki, P.; T., P.K.; Alhussein, M.; Anwar, M.S.; Aurangzeb, K.; Gunnam, L.C. AI-Driven Resource and Communication-Aware Virtual Machine Placement Using Multi-Objective Swarm Optimization for Enhanced Efficiency in Cloud-Based Smart Manufacturing. Comput. Mater. Contin. 2024, 81, 4743–4756. [Google Scholar] [CrossRef]
  4. Guo, X.; Gong, R.; Bao, H.; Lu, Z. A multiobjective optimization dispatch method of wind-thermal power system. IEICE Trans. Inf. Syst. 2020, 103, 2549–2558. [Google Scholar] [CrossRef]
  5. Tian, W.; Zhang, Y.; Fang, Q.; Liu, W. A route network resource allocation method based on multi-objective optimization and improved genetic algorithm. J. Intell. Fuzzy Syst. 2024, 48, 1–13. [Google Scholar] [CrossRef]
  6. Di Barba, P.; Mognaschi, M.E.; Wiak, S. A method for solving many-objective optimization problems in magnetics. In Proceedings of the 2019 19th International Symposium on Electromagnetic Fields in Mechatronics, Electrical and Electronic Engineering (ISEF), Nancy, France, 29–31 August 2019; pp. 1–2. [Google Scholar]
  7. Moncayo-Martínez, L.A.; Mastrocinque, E. A multi-objective intelligent water drop algorithm to minimise cost of goods sold and time to market in logistics networks. Expert Syst. Appl. 2016, 64, 455–466. [Google Scholar] [CrossRef]
  8. Li, M.; Wang, Z.; Li, K.; Liao, X.; Hone, K.; Liu, X. Task allocation on layered multiagent systems: When evolutionary many-objective optimization meets deep Q-learning. IEEE Trans. Evol. Comput. 2021, 25, 842–855. [Google Scholar] [CrossRef]
  9. Amorim, E.A.; Rocha, C. Optimization of wind-thermal economic-emission dispatch problem using NSGA-III. IEEE Lat. Am. Trans. 2021, 18, 1555–1562. [Google Scholar] [CrossRef]
  10. Wang, Z.; Wang, J.; Lv, C. Multi-Stage and Multi-Objective Optimization of Solar Air-Source Heat Pump Systems for High-Rise Residential Buildings in Hot-Summer and Cold-Winter Regions. Energies 2024, 17, 6414. [Google Scholar] [CrossRef]
  11. Hao, L.; Peng, W.; Liu, J.; Zhang, W.; Li, Y.; Qin, K. Competition-based two-stage evolutionary algorithm for constrained multi-objective optimization. Math. Comput. Simul. 2025, 230, 207–226. [Google Scholar] [CrossRef]
  12. Bechikh, S.; Elarbi, M.; Ben Said, L. Many-objective optimization using evolutionary algorithms: A survey. In Recent Advances in Evolutionary Multi-Objective Optimization; Springer: Cham, Switzerland, 2017; pp. 105–137. [Google Scholar]
  13. Ishibuchi, H.; Tsukamoto, N.; Nojima, Y. Evolutionary many-objective optimization: A short review. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 2419–2426. [Google Scholar]
  14. Gu, Q.; Li, K.; Wang, D.; Liu, D. A MOEA/D with adaptive weight subspace for regular and irregular multi-objective optimization problems. Inf. Sci. 2024, 661, 120143. [Google Scholar] [CrossRef]
  15. Jiang, B.; Dong, Y.; Zhang, Z.; Li, X.; Wei, Y.; Guo, Y.; Liu, H. Integrating a Physical Model with Multi-Objective Optimization for the Design of Optical Angle Nano-Positioning Mechanism. Appl. Sci. 2024, 14, 3756. [Google Scholar] [CrossRef]
  16. Cai, X.; Li, B.; Wu, L.; Chang, T.; Zhang, W.; Chen, J. A dynamic interval multi-objective optimization algorithm based on environmental change detection. Inf. Sci. 2025, 694, 121690. [Google Scholar] [CrossRef]
  17. Yang, S.; Li, M.; Liu, X.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  18. He, Z.; Yen, G.G.; Zhang, J. Fuzzy-based Pareto optimality for many-objective evolutionary algorithms. IEEE Trans. Evol. Comput. 2013, 18, 269–285. [Google Scholar] [CrossRef]
  19. Liu, S.; Lin, Q.; Tan, K.C.; Gong, M.; Coello, C.A.C. A fuzzy decomposition-based multi/many-objective evolutionary algorithm. IEEE Trans. Cybern. 2020, 52, 3495–3509. [Google Scholar] [CrossRef] [PubMed]
  20. Vijai, P.; P., B.S. A hybrid multi-objective optimization approach With NSGA-II for feature selection. Decis. Anal. J. 2025, 14, 100550. [Google Scholar] [CrossRef]
  21. Zhu, J.; He, Y.; Gao, Z. Wind power interval and point prediction model using neural network based multi-objective optimization. Energy 2023, 283, 129079. [Google Scholar] [CrossRef]
  22. Deb, K.; Saxena, D. Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In Proceedings of the World Congress on Computational Intelligence (WCCI-2006), Vancouver, BC, Canada, 16–21 July 2006; pp. 3352–3360. [Google Scholar]
  23. Singh, H.K.; Isaacs, A.; Ray, T. A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Trans. Evol. Comput. 2011, 15, 539–556. [Google Scholar] [CrossRef]
  24. Deb, K.; Kumar, A. Interactive evolutionary multi-objective optimization and decision-making using reference direction method. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, London, UK, 7–11 July 2007; pp. 781–788. [Google Scholar]
  25. Thiele, L.; Miettinen, K.; Korhonen, P.J.; Molina, J. A preference-based evolutionary algorithm for multi-objective optimization. Evol. Comput. 2009, 17, 411–436. [Google Scholar] [CrossRef]
  26. Wang, R.; Purshouse, R.C.; Fleming, P.J. Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans. Evol. Comput. 2012, 17, 474–494. [Google Scholar] [CrossRef]
  27. Bechikh, S. Incorporating Decision Maker’s Preference Information in Evolutionary Multi-Objective Optimization. Ph.D. Thesis, University of Tunis, Tunis, Tunisia, 2012. [Google Scholar]
  28. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2013, 18, 577–601. [Google Scholar] [CrossRef]
  29. Deb, K.; Agrawal, R.B. Simulated binary crossover for continuous search space. Complex Syst. 1995, 9, 115–148. [Google Scholar]
  30. Deb, K.; Goyal, M. A combined genetic adaptive search (GeneAS) for engineering design. Comput. Sci. Inform. 1996, 26, 30–45. [Google Scholar]
  31. Dai, C.; Wang, Y. A new multiobjective evolutionary algorithm based on decomposition of the objective space for multiobjective optimization. J. Appl. Math. 2014, 2014, 906147. [Google Scholar] [CrossRef]
  32. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2014, 19, 694–716. [Google Scholar] [CrossRef]
  33. Jiang, S.; Yang, S. A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization. IEEE Trans. Evol. Comput. 2017, 21, 329–346. [Google Scholar] [CrossRef]
  34. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef]
  35. Deb, K.; Thiele, L.; Laumanns, M.; Zitzler, E. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No. 02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 825–830. [Google Scholar]
  36. Huband, S.; Hingston, P.; Barone, L.; While, L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 2006, 10, 477–506. [Google Scholar] [CrossRef]
  37. Gómez, R.H.; Coello, C.A.C. MOMBI: A new metaheuristic for many-objective optimization based on the R2 indicator. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2488–2495. [Google Scholar]
  38. Jain, H.; Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 2013, 18, 602–622. [Google Scholar] [CrossRef]
  39. Sun, Y.; Yen, G.G.; Yi, Z. IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans. Evol. Comput. 2018, 23, 173–187. [Google Scholar] [CrossRef]
  40. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. A new two-stage evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2018, 23, 748–761. [Google Scholar] [CrossRef]
  41. Osyczka, A.; Kundu, S. A genetic algorithm-based multicriteria optimization method. In Proceedings of the 1st World Congress on Structural and Multidisciplinary Optimization, Goslar, Germany, 28 May–2 June 1995; pp. 909–914. [Google Scholar]
  42. Ray, T.; Liew, K. A swarm metaphor for multiobjective design optimization. Eng. Optim. 2002, 34, 141–153. [Google Scholar] [CrossRef]
  43. Cheng, F.Y.; Li, X. Generalized center method for multiobjective engineering optimization. Eng. Optim. 1999, 31, 641–661. [Google Scholar] [CrossRef]
Figure 1. Visual representation of distance d 1 i ( x ) and distance d 2 i ( x ) .
Figure 2. Parallel coordinate visualization of the non-dominated solutions from six algorithms on the 20-objective DTLZ2 problem.
Figure 3. Parallel coordinate visualization of the non-dominated solutions from six algorithms on the 16-objective WFG3 problem.
Table 1. Parameter settings of all compared algorithms.
Algorithm | Population Size | Generations | p_c | p_m | η_c | η_m
NSGA-III | 100 | 20 | 1.0 | 1/n | 20 | 20
A-NSGA-III | 100 | 20 | 1.0 | 1/n | 20 | 20
SPEA/R | 100 | 20 | 1.0 | 1/n | 20 | 20
MaOEA-IGD | 100 | 20 | 1.0 | 1/n | 20 | 20
MaOEA-IT | 100 | 20 | 1.0 | 1/n | 20 | 20
MOEA-AD | 100 | 20 | 1.0 | 1/n | 20 | 20
Table 2. IGD results (mean and standard deviation) obtained by six algorithms on the DTLZ problems.
Problem | M | D | NSGA-III | ANSGA-III | SPEA/R | MaOEA-IGD | MaOEA-IT | MOEA-AD
DTLZ1 | 5 | 9 | 6.8098e-2 (6.85e-5) + | 7.1616e-2 (6.77e-3) − | 1.1925e-1 (3.72e-2) − | 5.7702e-1 (3.45e-1) − | 2.1659e+0 (1.88e+0) − | 7.0213e-2 (1.56e-2)
 | 8 | 12 | 1.1217e-1 (1.13e-2) + | 1.6669e-1 (5.54e-2) ≈ | 1.6260e-1 (2.74e-2) ≈ | 4.2136e-1 (2.84e-1) − | 8.9007e+0 (1.40e+1) − | 2.3043e-1 (1.12e-1)
 | 12 | 16 | 1.8235e-1 (2.74e-2) + | 2.0208e-1 (8.33e-2) + | 2.5237e-1 (9.06e-2) + | 4.6780e-1 (2.74e-1) ≈ | 9.7623e+0 (7.39e+0) − | 3.7521e-1 (6.44e-2)
 | 16 | 20 | 3.2874e-1 (5.11e-2) ≈ | 3.2676e-1 (4.82e-2) ≈ | 2.7671e-1 (7.96e-2) + | 7.0980e-1 (5.86e-1) − | 1.1653e+1 (1.36e+1) − | 3.2329e-1 (5.21e-2)
 | 20 | 24 | 3.3646e-1 (4.56e-2) ≈ | 3.6796e-1 (4.32e-2) − | 3.2870e-1 (7.29e-2) ≈ | 1.1355e+0 (6.04e-1) − | 1.3091e+1 (1.31e+1) − | 3.2918e-1 (4.17e-2)
DTLZ2 | 5 | 14 | 2.1222e-1 (2.16e-5) + | 2.1657e-1 (3.43e-3) + | 2.1555e-1 (1.97e-3) + | 2.4122e-1 (8.45e-2) ≈ | 5.7181e-1 (1.17e-1) − | 2.2229e-1 (2.39e-3)
 | 8 | 17 | 4.3802e-1 (7.58e-2) ≈ | 5.4075e-1 (4.15e-2) − | 3.8749e-1 (7.25e-4) + | 4.2962e-1 (7.19e-2) − | 1.0506e+0 (1.34e-1) − | 4.1287e-1 (3.11e-3)
 | 12 | 21 | 6.2319e-1 (3.71e-2) − | 6.2230e-1 (4.45e-2) − | 6.0438e-1 (4.11e-3) − | 8.0273e-1 (1.15e-1) − | 1.0007e+0 (7.04e-2) − | 5.5233e-1 (4.47e-3)
 | 16 | 25 | 7.7400e-1 (1.87e-2) − | 7.7504e-1 (1.58e-2) − | 7.1031e-1 (6.90e-4) + | 7.9172e-1 (8.28e-2) − | 1.1505e+0 (9.18e-2) − | 7.2888e-1 (4.51e-3)
 | 20 | 29 | 9.9901e-1 (3.01e-2) − | 1.0207e+0 (2.70e-2) − | 7.6103e-1 (5.13e-4) − | 9.5389e-1 (1.84e-1) − | 1.4086e+0 (1.01e-1) − | 7.5049e-1 (3.68e-3)
DTLZ3 | 5 | 14 | 2.1398e-1 (1.92e-3) + | 2.4505e-1 (3.42e-2) + | 6.9919e-1 (4.08e-1) + | 1.2305e+1 (4.26e+0) − | 1.2415e+1 (7.75e+0) − | 8.5040e-1 (1.22e-1)
 | 8 | 17 | 7.8146e-1 (7.29e-1) + | 1.4563e+0 (1.10e+0) ≈ | 4.1491e+0 (1.98e+0) − | 1.0201e+1 (9.77e+0) − | 3.2674e+2 (1.20e+2) − | 1.0003e+0 (2.53e-2)
 | 12 | 21 | 2.7194e+0 (2.26e+0) − | 2.3482e+0 (1.45e+0) − | 3.2929e+1 (1.74e+1) − | 9.7749e+0 (8.60e+0) − | 3.1085e+2 (1.28e+2) − | 1.0516e+0 (2.65e-2)
 | 16 | 25 | 1.1405e+0 (4.43e-1) + | 1.1876e+0 (4.35e-1) + | 1.4001e+1 (7.32e+0) − | 4.5793e+0 (2.38e+0) − | 3.3470e+2 (1.29e+2) − | 1.2220e+0 (6.26e-2)
 | 20 | 29 | 5.0443e+0 (3.98e+0) − | 3.8575e+0 (2.57e+0) − | 3.5426e+1 (1.32e+1) − | 7.0219e+0 (4.35e+0) − | 4.1051e+2 (1.15e+2) − | 1.2677e+0 (4.93e-2)
DTLZ4 | 5 | 14 | 2.2386e-1 (5.19e-2) + | 3.0261e-1 (1.14e-1) ≈ | 2.1619e-1 (1.97e-3) + | 3.5437e-1 (1.69e-1) ≈ | 8.8496e-1 (7.62e-2) − | 2.2644e-1 (2.71e-3)
 | 8 | 17 | 4.5651e-1 (8.11e-2) ≈ | 5.0545e-1 (9.71e-2) ≈ | 3.8758e-1 (6.75e-4) + | 5.5873e-1 (1.24e-1) − | 1.1928e+0 (1.08e-1) − | 4.3019e-1 (8.54e-3)
 | 12 | 21 | 6.3702e-1 (3.97e-2) − | 6.3244e-1 (3.74e-2) − | 6.1237e-1 (2.88e-3) − | 6.5071e-1 (5.54e-2) − | 1.1201e+0 (7.07e-2) − | 5.7397e-1 (5.58e-3)
 | 16 | 25 | 7.8428e-1 (1.03e-2) − | 7.7229e-1 (2.60e-2) − | 7.1547e-1 (3.76e-3) + | 8.6686e-1 (6.73e-2) − | 1.4080e+0 (1.91e-1) − | 7.3737e-1 (3.75e-3)
 | 20 | 29 | 9.7895e-1 (3.92e-2) − | 9.6633e-1 (4.52e-2) − | 7.6489e-1 (1.67e-3) − | 8.6876e-1 (3.23e-2) − | 1.4626e+0 (1.32e-1) − | 7.5097e-1 (5.20e-3)
DTLZ5 | 5 | 14 | 1.0084e-1 (3.02e-2) ≈ | 9.5663e-2 (3.57e-2) ≈ | 2.7790e-1 (7.47e-2) − | 5.6062e-1 (1.73e-1) − | 4.3737e-1 (7.46e-2) − | 8.2493e-2 (2.13e-2)
 | 8 | 17 | 2.6923e-1 (9.91e-2) − | 2.7483e-1 (7.41e-2) − | 4.2423e-1 (1.02e-1) − | 6.3627e-1 (1.36e-1) − | 1.0924e+0 (3.40e-1) − | 1.7223e-1 (4.46e-2)
 | 12 | 21 | 2.3597e-1 (7.17e-2) ≈ | 2.3848e-1 (4.83e-2) − | 7.9204e-1 (2.07e-1) − | 6.1468e-1 (1.71e-1) − | 4.0869e-1 (7.46e-2) − | 2.0711e-1 (4.70e-2)
 | 16 | 25 | 2.9446e-1 (7.90e-2) ≈ | 3.4403e-1 (1.20e-1) − | 8.6579e-1 (2.99e-1) − | 9.1284e-2 (2.96e-4) + | 7.1616e-1 (2.15e-1) − | 2.5970e-1 (3.80e-2)
 | 20 | 29 | 8.5755e-1 (5.67e-1) − | 1.0574e+0 (5.59e-1) − | 1.0243e+0 (2.56e-1) − | 9.1237e-2 (3.27e-4) + | 5.9212e-1 (1.56e-1) − | 2.1242e-1 (5.58e-2)
DTLZ6 | 5 | 14 | 2.8246e-1 (6.13e-2) − | 2.3699e-1 (5.78e-2) − | 5.7268e-1 (1.93e-1) − | 7.8057e-1 (2.20e-1) − | 8.4868e+0 (1.83e-1) − | 1.6575e-1 (1.21e-1)
 | 8 | 17 | 4.6259e-1 (2.57e-1) − | 5.2664e-1 (3.23e-1) − | 7.0656e-1 (4.09e-1) − | 8.3843e-1 (3.80e-1) − | 8.5382e+0 (5.16e-1) − | 2.0354e-1 (8.49e-2)
 | 12 | 21 | 6.6405e-1 (3.97e-1) − | 7.5110e-1 (3.92e-1) − | 8.1620e+0 (5.10e-1) − | 7.8367e-1 (2.65e-1) − | 8.5004e+0 (1.78e-1) − | 2.7977e-1 (8.42e-2)
 | 16 | 25 | 6.5447e-1 (3.68e-1) − | 6.6824e-1 (4.12e-1) − | 2.8426e+0 (1.13e+0) − | 1.5643e-1 (2.00e-1) + | 8.4048e+0 (3.15e-1) − | 3.4459e-1 (1.57e-1)
 | 20 | 29 | 4.6529e+0 (1.80e+0) − | 4.8337e+0 (1.21e+0) − | 4.5914e+0 (1.11e+0) − | 1.5625e-1 (2.00e-1) + | 8.3272e+0 (6.06e-1) − | 3.3311e-1 (1.30e-1)
DTLZ7 | 5 | 24 | 3.8508e-1 (2.20e-2) − | 3.9368e-1 (2.19e-2) − | 4.9947e-1 (1.00e-2) − | 2.0942e+0 (1.13e+0) − | 3.6984e+0 (1.82e+0) − | 3.6730e-1 (3.78e-2)
 | 8 | 27 | 9.5196e-1 (8.05e-2) + | 9.2995e-1 (6.13e-2) + | 1.3207e+0 (2.10e-1) ≈ | 1.4248e+0 (9.19e-1) ≈ | 2.4654e+1 (3.87e+0) − | 1.8136e+0 (1.02e+0)
 | 12 | 31 | 2.8491e+0 (5.32e-1) − | 2.9186e+0 (4.56e-1) − | 4.6247e+0 (1.98e-2) − | 1.9166e+0 (8.83e-2) ≈ | 4.6134e+1 (2.47e+0) − | 2.1287e+0 (1.03e+0)
 | 16 | 35 | 6.9596e+0 (1.17e+0) + | 7.2672e+0 (7.60e-1) + | 1.2312e+1 (3.00e+0) ≈ | 3.0505e+0 (4.69e-1) + | 6.3561e+1 (8.78e+0) − | 1.2013e+1 (3.51e-1)
 | 20 | 39 | 1.4887e+1 (4.27e-1) + | 1.4811e+1 (4.96e-1) + | 1.3494e+1 (4.06e+0) + | 3.5570e+0 (3.33e-1) + | 8.2679e+1 (1.20e+1) − | 1.5370e+1 (3.34e-1)
+/−/≈ | | | 11/17/7 | 7/22/6 | 10/21/4 | 6/24/5 | 0/35/0 |
Table 3. HV results (mean and standard deviation) obtained by six algorithms on the DTLZ problems.
Problem | M | D | NSGA-III | ANSGA-III | SPEA/R | MaOEA-IGD | MaOEA-IT | MOEA-AD
DTLZ1 | 5 | 9 | 9.7065e-1 (2.01e-4) + | 9.6656e-1 (6.01e-3) + | 8.8478e-1 (7.79e-2) ≈ | 2.1152e-1 (3.23e-1) − | 9.2985e-2 (2.41e-1) − | 9.1165e-1 (3.73e-2)
 | 8 | 12 | 9.8734e-1 (6.55e-3) + | 9.4468e-1 (6.21e-2) + | 9.2251e-1 (4.23e-2) + | 3.6346e-1 (4.13e-1) − | 3.8363e-2 (1.66e-1) − | 6.9529e-1 (2.42e-1)
 | 12 | 16 | 9.9159e-1 (2.24e-2) + | 9.3777e-1 (2.06e-1) + | 7.2727e-1 (2.99e-1) + | 3.2726e-1 (3.69e-1) − | 9.7810e-3 (4.37e-2) − | 4.0168e-1 (1.24e-1)
 | 16 | 20 | 7.1179e-1 (7.73e-2) + | 7.2979e-1 (6.91e-2) + | 7.2592e-1 (1.73e-1) + | 2.1504e-1 (3.18e-1) − | 8.1313e-5 (3.64e-4) − | 4.9844e-1 (1.40e-1)
 | 20 | 24 | 6.5333e-1 (1.08e-1) ≈ | 5.8538e-1 (1.42e-1) ≈ | 6.2364e-1 (2.70e-1) + | 2.3611e-2 (5.58e-2) − | 0.0000e+0 (0.00e+0) − | 5.7950e-1 (1.26e-1)
DTLZ2 | 5 | 14 | 7.7468e-1 (4.39e-4) + | 7.6791e-1 (5.33e-3) + | 7.7138e-1 (2.05e-3) + | 7.5438e-1 (8.83e-2) + | 1.5193e-1 (1.32e-1) − | 7.2182e-1 (7.70e-3)
 | 8 | 17 | 8.5417e-1 (4.49e-2) ≈ | 7.7603e-1 (2.29e-2) − | 8.8121e-1 (2.55e-3) + | 8.5879e-1 (8.52e-2) + | 4.6399e-2 (8.56e-2) − | 8.3063e-1 (1.74e-2)
 | 12 | 21 | 9.4621e-1 (2.49e-2) + | 9.4449e-1 (3.18e-2) + | 9.6652e-1 (1.10e-3) + | 6.5508e-1 (1.81e-1) − | 1.1461e-1 (6.50e-2) − | 8.8895e-1 (1.65e-2)
 | 16 | 25 | 7.4359e-1 (2.60e-2) − | 7.3946e-1 (2.44e-2) − | 7.0307e-1 (3.90e-2) − | 7.3662e-1 (9.90e-2) − | 2.8194e-2 (3.72e-2) − | 8.0687e-1 (1.82e-2)
 | 20 | 29 | 4.4456e-1 (8.28e-2) − | 4.0930e-1 (6.89e-2) − | 7.5500e-1 (2.05e-2) − | 5.9706e-1 (2.52e-1) − | 1.9838e-2 (5.15e-2) − | 8.6454e-1 (1.48e-2)
DTLZ3 | 5 | 14 | 7.6348e-1 (1.12e-2) + | 7.2822e-1 (4.02e-2) + | 3.1638e-1 (1.49e-1) + | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 2.0286e-1 (7.23e-2)
 | 8 | 17 | 6.4491e-1 (3.83e-1) + | 2.8205e-1 (3.25e-1) ≈ | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 2.1103e-1 (1.75e-2)
 | 12 | 21 | 8.6706e-2 (2.39e-1) − | 2.0313e-1 (3.66e-1) − | 0.0000e+0 (0.00e+0) − | 4.0275e-3 (1.80e-2) − | 0.0000e+0 (0.00e+0) − | 2.3943e-1 (2.91e-2)
 | 16 | 25 | 4.3535e-1 (2.61e-1) + | 3.6382e-1 (2.83e-1) ≈ | 0.0000e+0 (0.00e+0) − | 4.1070e-3 (1.84e-2) − | 0.0000e+0 (0.00e+0) − | 1.3182e-1 (4.46e-2)
 | 20 | 29 | 3.1883e-2 (8.25e-2) − | 4.6696e-2 (8.38e-2) − | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 1.7223e-1 (6.27e-2)
DTLZ4 | 5 | 14 | 7.6800e-1 (2.89e-2) + | 7.2803e-1 (5.92e-2) ≈ | 7.6846e-1 (3.03e-3) + | 6.9326e-1 (1.18e-1) ≈ | 1.8523e-1 (6.33e-2) − | 7.4072e-1 (7.63e-3)
 | 8 | 17 | 8.4934e-1 (4.33e-2) ≈ | 8.2530e-1 (4.68e-2) ≈ | 8.7744e-1 (3.26e-3) + | 7.7494e-1 (1.05e-1) − | 6.6369e-2 (4.60e-2) − | 8.5949e-1 (1.12e-2)
 | 12 | 21 | 9.4495e-1 (2.31e-2) ≈ | 9.4854e-1 (1.98e-2) ≈ | 9.6433e-1 (1.90e-3) + | 9.2466e-1 (7.11e-2) ≈ | 2.1281e-2 (2.24e-2) − | 9.3856e-1 (5.75e-3)
 | 16 | 25 | 8.1048e-1 (2.20e-2) − | 8.1346e-1 (2.41e-2) − | 7.5353e-1 (1.71e-2) − | 6.1899e-1 (7.57e-2) − | 3.9028e-2 (7.68e-2) − | 8.8642e-1 (1.31e-2)
 | 20 | 29 | 6.2192e-1 (6.82e-2) − | 6.4820e-1 (8.11e-2) − | 7.7080e-1 (1.87e-2) − | 7.4069e-1 (4.62e-2) − | 1.6475e-2 (3.76e-2) − | 9.3206e-1 (8.57e-3)
DTLZ5 | 5 | 14 | 1.0817e-1 (4.44e-3) ≈ | 1.1059e-1 (4.98e-3) ≈ | 2.1929e-2 (2.88e-2) − | 8.4523e-2 (2.89e-2) − | 4.8124e-4 (7.19e-4) − | 1.0785e-1 (8.27e-3)
 | 8 | 17 | 9.3097e-2 (2.04e-3) − | 9.1567e-2 (1.62e-3) − | 6.0250e-3 (1.28e-2) − | 8.7424e-2 (2.06e-2) − | 3.3451e-7 (1.17e-6) − | 1.0007e-1 (1.55e-3)
 | 12 | 21 | 9.1645e-2 (7.02e-4) ≈ | 9.1601e-2 (7.92e-4) ≈ | 7.5173e-4 (3.36e-3) − | 8.2299e-2 (2.81e-2) − | 1.0882e-6 (4.23e-6) − | 9.2007e-2 (8.40e-4)
 | 16 | 25 | 9.1402e-2 (5.43e-4) ≈ | 9.1244e-2 (4.46e-4) ≈ | 4.9220e-3 (2.03e-2) − | 9.1610e-2 (3.32e-4) + | 0.0000e+0 (0.00e+0) − | 9.1347e-2 (4.70e-4)
 | 20 | 29 | 2.9854e-2 (4.20e-2) − | 1.6880e-2 (3.48e-2) − | 0.0000e+0 (0.00e+0) − | 9.1377e-2 (3.68e-4) ≈ | 2.2860e-3 (6.35e-3) − | 9.1187e-2 (3.77e-4)
DTLZ6 | 5 | 14 | 9.1692e-2 (1.83e-3) − | 9.3245e-2 (5.17e-3) − | 3.3052e-3 (1.42e-2) − | 3.7098e-2 (4.18e-2) − | 0.0000e+0 (0.00e+0) − | 1.0095e-1 (6.09e-3)
 | 8 | 17 | 6.3983e-2 (4.30e-2) − | 7.7316e-2 (3.33e-2) − | 1.4170e-2 (3.06e-2) − | 5.0364e-2 (4.37e-2) − | 0.0000e+0 (0.00e+0) − | 9.9026e-2 (2.30e-3)
 | 12 | 21 | 5.4579e-2 (4.57e-2) − | 5.0109e-2 (4.65e-2) − | 0.0000e+0 (0.00e+0) − | 6.8506e-2 (4.06e-2) ≈ | 0.0000e+0 (0.00e+0) − | 9.1714e-2 (1.16e-3)
 | 16 | 25 | 5.4561e-2 (4.57e-2) − | 5.4553e-2 (4.57e-2) − | 0.0000e+0 (0.00e+0) − | 9.1451e-2 (3.11e-4) ≈ | 0.0000e+0 (0.00e+0) − | 9.1499e-2 (5.32e-4)
 | 20 | 29 | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 0.0000e+0 (0.00e+0) − | 9.1231e-2 (3.27e-4) ≈ | 0.0000e+0 (0.00e+0) − | 9.1130e-2 (3.21e-4)
DTLZ7 | 5 | 24 | 2.3354e-1 (3.81e-3) + | 2.3235e-1 (3.65e-3) + | 2.2444e-1 (2.31e-3) + | 1.1949e-1 (3.53e-2) − | 2.4737e-2 (2.88e-2) − | 2.0400e-1 (7.93e-3)
 | 8 | 27 | 1.7957e-1 (3.01e-2) + | 1.9195e-1 (3.33e-3) + | 1.6281e-1 (9.11e-3) − | 9.9722e-2 (1.02e-2) − | 0.0000e+0 (0.00e+0) − | 1.7115e-1 (9.03e-3)
 | 12 | 31 | 1.3112e-1 (1.83e-2) ≈ | 1.3128e-1 (1.25e-2) − | 1.4143e-1 (6.64e-3) ≈ | 2.8396e-2 (2.05e-2) − | 0.0000e+0 (0.00e+0) − | 1.3833e-1 (7.86e-3)
16355.1877e-2 (1.03e-2) −4.4437e-2 (9.60e-3) −9.4014e-3 (1.25e-2) −3.5338e-2 (1.74e-2) −0.0000e+0 (0.00e+0) −1.1063e-1 (3.99e-3)
20391.0281e-1 (4.74e-3) −1.0207e-1 (5.84e-3) −1.9194e-3 (1.96e-3) −1.5167e-3 (2.97e-3) −0.0000e+0 (0.00e+0) −1.0768e-1 (3.67e-3)
+ / / 12/15/89/17/912/21/23/26/60/35/0
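The hypervolume (HV) values reported above measure the size of the objective-space region dominated by an approximation set and bounded by a reference point; larger is better, and 0 means no solution dominates the reference point. As a minimal illustration only (a two-objective sweep, not the normalized many-objective procedure used in the experiments), HV can be sketched as:

```python
def hv_2d(points, ref):
    """Hypervolume of a set of 2-objective minimization solutions:
    area dominated by the points and bounded above by the reference point."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    # sweep left to right; along a non-dominated front, y decreases as x grows,
    # so each kept point contributes a disjoint rectangle
    for x, y in pts:
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

For example, the front {(1, 2), (2, 1)} with reference point (3, 3) dominates an area of 3. Exact HV computation becomes expensive as the number of objectives grows, which is why many-objective studies typically rely on Monte Carlo estimation or specialized algorithms.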
Table 4. IGD results (mean and standard deviation) obtained by six algorithms on the WFG problems.
Problem | M | D | NSGA-III | A-NSGA-III | SPEA/R | MaOEA-IGD | MaOEA-IT | MOEA-AD
WFG1 | 5 | 14 | 4.7304e-1 (4.92e-3) + | 4.9953e-1 (1.82e-2) ≈ | 5.3277e-1 (1.06e-2) − | 3.4855e+0 (6.13e-1) − | 3.2866e+0 (1.36e+0) − | 4.9273e-1 (1.50e-2)
| 8 | 17 | 1.0156e+0 (3.89e-2) + | 1.0843e+0 (8.23e-2) + | 1.3966e+0 (9.89e-2) − | 6.0433e+0 (2.24e+0) − | 4.5525e+0 (3.05e+0) − | 1.0921e+0 (1.59e-2)
| 12 | 21 | 1.4370e+0 (1.86e-1) + | 1.4796e+0 (2.33e-1) + | 1.6506e+0 (6.31e-2) − | 9.5657e+0 (5.14e+0) − | 3.6350e+0 (2.01e-1) − | 1.5790e+0 (5.15e-2)
| 16 | 25 | 2.4191e+0 (1.74e-1) ≈ | 2.4365e+0 (2.43e-1) ≈ | 2.8801e+0 (5.17e-1) ≈ | 1.9326e+1 (8.31e+0) − | 4.7117e+0 (1.29e-1) − | 2.4782e+0 (1.10e-1)
| 20 | 29 | 5.1917e+0 (1.84e-1) − | 5.1612e+0 (2.50e-1) − | 4.9032e+0 (2.90e-1) + | 1.6425e+1 (9.08e+0) − | 6.1888e+0 (6.59e-2) − | 4.9661e+0 (1.39e-1)
WFG2 | 5 | 14 | 5.0503e-1 (2.93e-3) + | 6.4659e-1 (1.79e-1) ≈ | 5.0817e-1 (5.60e-3) + | 1.7875e+0 (1.98e-1) − | 1.7868e+0 (3.01e-1) − | 5.5681e-1 (1.66e-2)
| 8 | 17 | 1.1936e+0 (1.54e-1) + | 1.3477e+0 (1.38e-1) ≈ | 1.0660e+0 (2.10e-2) + | 2.4318e+0 (4.19e-1) − | 4.4970e+0 (6.93e-1) − | 1.3122e+0 (1.39e-1)
| 12 | 21 | 1.5552e+0 (1.15e-1) + | 1.4794e+0 (8.59e-2) + | 1.4078e+0 (3.57e-2) + | 5.1810e+0 (2.88e+0) − | 5.4335e+0 (1.10e+0) − | 2.8017e+0 (9.39e-1)
| 16 | 25 | 2.8249e+0 (6.49e-1) + | 2.8268e+0 (5.66e-1) + | 2.0708e+0 (2.20e-2) + | 8.7450e+0 (9.03e+0) ≈ | 8.0003e+0 (1.16e+0) − | 5.1623e+0 (1.42e+0)
| 20 | 29 | 8.7535e+0 (6.66e-1) − | 9.8186e+0 (2.59e+0) − | 3.9097e+0 (5.19e-2) + | 1.1434e+1 (9.48e+0) ≈ | 9.9334e+0 (1.83e+0) − | 6.4197e+0 (9.39e-1)
WFG3 | 5 | 14 | 5.9629e-1 (5.88e-2) + | 5.1618e-1 (6.06e-2) + | 1.0357e+0 (1.07e-1) − | 5.4348e+0 (2.14e-2) − | 1.9575e+0 (8.71e-1) − | 7.4350e-1 (7.21e-2)
| 8 | 17 | 2.0639e+0 (3.83e-1) − | 1.7971e+0 (4.25e-1) − | 2.1259e+0 (1.08e-1) − | 6.7842e+0 (3.31e+0) − | 7.4976e+0 (2.70e-1) − | 1.2646e+0 (4.69e-1)
| 12 | 21 | 2.9489e+0 (1.36e+0) − | 4.1876e+0 (1.21e+0) − | 3.5980e+0 (7.13e-1) − | 1.1606e+1 (3.87e+0) − | 3.5313e+0 (6.57e-1) − | 1.3330e+0 (2.62e-1)
| 16 | 25 | 5.1887e+0 (5.81e-1) − | 5.0764e+0 (7.24e-1) − | 8.4280e+0 (1.05e-1) − | 1.5723e+1 (4.87e+0) − | 1.0740e+1 (1.11e+0) − | 3.5339e+0 (6.22e-1)
| 20 | 29 | 1.4197e+1 (3.29e+0) − | 1.5258e+1 (2.99e+0) − | 1.0857e+1 (7.65e-1) − | 2.1262e+1 (4.57e+0) − | 1.3401e+1 (1.26e+0) − | 4.7943e+0 (8.42e-1)
WFG4 | 5 | 14 | 1.2257e+0 (4.36e-4) − | 1.2826e+0 (2.14e-2) − | 1.2359e+0 (8.00e-3) − | 6.5267e+0 (6.56e-1) − | 2.8586e+0 (4.93e-1) − | 1.2250e+0 (3.72e-4)
| 8 | 17 | 3.5919e+0 (1.04e-1) − | 4.1109e+0 (5.35e-1) − | 3.5540e+0 (8.78e-3) − | 1.0031e+1 (1.16e+0) − | 9.9791e+0 (1.10e+0) − | 3.5416e+0 (7.09e-3)
| 12 | 21 | 7.8965e+0 (1.00e-1) ≈ | 7.9295e+0 (1.12e-1) ≈ | 7.9484e+0 (7.88e-2) ≈ | 1.6825e+1 (2.58e+0) − | 1.2844e+1 (6.07e-1) − | 7.9153e+0 (7.05e-2)
| 16 | 25 | 1.3325e+1 (3.46e-1) − | 1.3279e+1 (3.81e-1) ≈ | 1.3249e+1 (1.78e-1) − | 2.6215e+1 (3.20e+0) − | 2.3118e+1 (1.72e+0) − | 1.3143e+1 (1.56e-1)
| 20 | 29 | 1.9437e+1 (2.15e+0) − | 1.9359e+1 (1.32e+0) − | 1.7563e+1 (1.11e-1) − | 3.8787e+1 (3.18e+0) − | 3.3914e+1 (2.01e+0) − | 1.7453e+1 (1.32e-1)
WFG5 | 5 | 14 | 1.2152e+0 (1.81e-4) − | 1.2596e+0 (1.41e-2) − | 1.2248e+0 (4.89e-3) − | 6.4524e+0 (1.52e+0) − | 2.4852e+0 (4.13e-1) − | 1.2151e+0 (1.25e-4)
| 8 | 17 | 3.5268e+0 (4.73e-3) − | 3.7220e+0 (1.86e-1) − | 3.5336e+0 (5.95e-3) − | 1.3228e+1 (1.70e+0) − | 9.4957e+0 (7.11e-1) − | 3.5239e+0 (5.48e-3)
| 12 | 21 | 7.8543e+0 (4.23e-2) ≈ | 7.8459e+0 (2.78e-2) ≈ | 7.8386e+0 (4.01e-2) + | 2.0786e+1 (3.44e+0) − | 1.0527e+1 (5.14e-1) − | 7.8710e+0 (5.00e-2)
| 16 | 25 | 1.2768e+1 (2.88e-1) + | 1.2661e+1 (3.98e-1) + | 1.2984e+1 (2.25e-2) ≈ | 2.9961e+1 (4.83e+0) − | 2.0273e+1 (1.40e+0) − | 1.2985e+1 (2.09e-1)
| 20 | 29 | 1.8346e+1 (1.33e+0) − | 1.8425e+1 (1.52e+0) − | 1.7488e+1 (1.22e-2) − | 3.8202e+1 (3.10e+0) − | 3.1727e+1 (2.01e+0) − | 1.6565e+1 (9.41e-1)
WFG6 | 5 | 14 | 1.2170e+0 (3.36e-3) − | 1.2956e+0 (2.31e-2) − | 1.2281e+0 (9.60e-3) − | 4.9411e+0 (1.58e+0) − | 2.5476e+0 (2.87e-1) − | 1.2144e+0 (7.21e-4)
| 8 | 17 | 3.6645e+0 (5.36e-1) − | 5.5604e+0 (4.12e-1) − | 3.6206e+0 (7.01e-2) − | 7.0445e+0 (4.28e+0) − | 9.9646e+0 (7.67e-1) − | 3.5365e+0 (3.42e-3)
| 12 | 21 | 8.0900e+0 (4.36e-1) − | 8.1357e+0 (5.56e-1) − | 8.0358e+0 (6.58e-2) − | 1.4451e+1 (6.57e+0) ≈ | 1.0992e+1 (6.30e-1) − | 7.8750e+0 (4.20e-2)
| 16 | 25 | 1.3210e+1 (5.74e-1) ≈ | 1.3103e+1 (4.66e-1) ≈ | 1.3083e+1 (5.22e-2) + | 2.5768e+1 (8.52e+0) − | 2.1947e+1 (1.57e+0) − | 1.3099e+1 (3.61e-1)
| 20 | 29 | 1.8407e+1 (7.57e-1) − | 1.8249e+1 (9.08e-1) ≈ | 1.7447e+1 (8.96e-2) ≈ | 2.6597e+1 (1.31e+1) ≈ | 3.3174e+1 (1.54e+0) − | 1.7771e+1 (7.85e-1)
WFG7 | 5 | 14 | 1.2276e+0 (1.49e-3) − | 1.2770e+0 (1.55e-2) − | 1.2311e+0 (3.59e-3) − | 5.6732e+0 (5.81e-1) − | 2.1871e+0 (1.79e-1) − | 1.2254e+0 (5.04e-4)
| 8 | 17 | 3.6668e+0 (2.21e-1) − | 4.0097e+0 (4.89e-1) − | 3.6289e+0 (9.57e-2) − | 1.0294e+1 (1.97e+0) − | 1.0180e+1 (3.59e-1) − | 3.5304e+0 (4.97e-3)
| 12 | 21 | 8.0813e+0 (1.92e-1) ≈ | 8.1779e+0 (3.01e-1) − | 8.1302e+0 (7.41e-2) − | 1.8519e+1 (2.77e+0) − | 1.1584e+1 (8.39e-1) − | 8.0341e+0 (6.94e-2)
| 16 | 25 | 1.3122e+1 (2.60e-1) ≈ | 1.3131e+1 (2.09e-1) ≈ | 1.3154e+1 (1.00e-1) ≈ | 2.6849e+1 (3.03e+0) − | 2.1296e+1 (9.28e-1) − | 1.3151e+1 (2.18e-1)
| 20 | 29 | 1.8306e+1 (8.84e-1) − | 1.8530e+1 (1.32e+0) − | 1.7351e+1 (1.37e-1) ≈ | 3.8065e+1 (5.03e+0) − | 3.3124e+1 (1.73e+0) − | 1.7422e+1 (6.92e-1)
WFG8 | 5 | 14 | 1.2475e+0 (9.28e-3) − | 1.3340e+0 (3.32e-2) − | 1.2280e+0 (3.12e-3) ≈ | 4.8567e+0 (1.75e+0) − | 2.6900e+0 (3.71e-1) − | 1.2295e+0 (1.57e-3)
| 8 | 17 | 3.7426e+0 (2.07e-1) − | 4.0868e+0 (2.73e-1) − | 3.8998e+0 (4.09e-2) − | 1.0949e+1 (2.49e+0) − | 1.0139e+1 (7.83e-1) − | 3.5757e+0 (2.39e-2)
| 12 | 21 | 7.7142e+0 (2.31e-1) − | 7.6630e+0 (2.74e-1) ≈ | 8.0206e+0 (4.91e-2) − | 1.8623e+1 (2.34e+0) − | 1.1063e+1 (7.52e-1) − | 7.5035e+0 (1.59e-1)
| 16 | 25 | 1.3348e+1 (5.02e-1) + | 1.3399e+1 (4.43e-1) + | 1.3206e+1 (4.72e-2) + | 3.0024e+1 (2.44e+0) − | 2.2199e+1 (1.28e+0) − | 1.3900e+1 (4.64e-1)
| 20 | 29 | 2.1473e+1 (2.73e+0) − | 2.2511e+1 (2.60e+0) − | 1.7553e+1 (6.11e-2) + | 3.9936e+1 (2.60e+0) − | 3.3661e+1 (1.39e+0) − | 1.8072e+1 (5.75e-1)
WFG9 | 5 | 14 | 1.2013e+0 (3.31e-3) ≈ | 1.2525e+0 (2.38e-2) − | 1.2082e+0 (5.39e-3) − | 5.1010e+0 (1.75e+0) − | 2.4189e+0 (1.73e-1) − | 1.2016e+0 (5.18e-3)
| 8 | 17 | 3.5736e+0 (8.85e-2) − | 3.7453e+0 (2.34e-1) − | 3.6288e+0 (6.32e-2) − | 1.1312e+1 (3.93e+0) − | 9.7805e+0 (5.39e-1) − | 3.5026e+0 (1.78e-2)
| 12 | 21 | 7.4134e+0 (2.11e-1) + | 7.4973e+0 (1.79e-1) + | 7.7948e+0 (3.16e-2) − | 1.7942e+1 (5.16e+0) − | 1.0808e+1 (6.10e-1) − | 7.6818e+0 (1.07e-1)
| 16 | 25 | 1.2566e+1 (2.83e-1) ≈ | 1.2534e+1 (2.89e-1) ≈ | 1.2977e+1 (1.45e-1) ≈ | 2.6153e+1 (8.69e+0) − | 2.3091e+1 (1.38e+0) − | 1.2756e+1 (4.10e-1)
| 20 | 29 | 1.8146e+1 (1.14e+0) − | 1.8173e+1 (6.98e-1) − | 1.7435e+1 (8.52e-2) − | 3.4248e+1 (1.12e+1) − | 3.4518e+1 (1.19e+0) − | 1.6450e+1 (8.88e-1)
+/−/≈ | 11/26/8 | 8/25/12 | 10/27/8 | 0/41/4 | 0/45/0 |
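The IGD values in Table 4 are computed by averaging, over a set of reference points sampled from the true Pareto front, the Euclidean distance from each reference point to its nearest obtained solution; smaller values indicate both better convergence and better coverage. A minimal sketch with NumPy (the actual reference sets come from the benchmark definitions and are not reproduced here):

```python
import numpy as np

def igd(reference_front, approximation):
    """Inverted Generational Distance: mean distance from each reference
    point on the true Pareto front to its nearest approximation point."""
    ref = np.asarray(reference_front, dtype=float)
    app = np.asarray(approximation, dtype=float)
    # pairwise Euclidean distances, shape (n_ref, n_app)
    d = np.linalg.norm(ref[:, None, :] - app[None, :, :], axis=2)
    # nearest obtained solution per reference point, then the mean
    return d.min(axis=1).mean()
```

An approximation set that covers the reference set exactly yields an IGD of 0; sparse or badly converged sets inflate the average, which is why IGD is sensitive to both objectives at once.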
Table 5. HV results (mean and standard deviation) obtained by six algorithms on the WFG problems.
Problem | M | D | NSGA-III | A-NSGA-III | SPEA/R | MaOEA-IGD | MaOEA-IT | MOEA-AD
WFG1 | 5 | 14 | 9.9733e-1 (1.95e-4) + | 9.9484e-1 (1.17e-3) + | 9.9714e-1 (1.66e-4) + | 2.2457e-1 (7.93e-2) − | 1.6952e-1 (8.84e-2) − | 9.8259e-1 (3.33e-3)
| 8 | 17 | 9.9936e-1 (2.99e-4) + | 9.9883e-1 (6.80e-4) + | 9.9250e-1 (5.58e-3) ≈ | 2.6128e-1 (8.65e-2) − | 1.6050e-1 (6.89e-2) − | 9.9562e-1 (1.01e-3)
| 12 | 21 | 9.9959e-1 (5.12e-4) + | 9.9523e-1 (1.72e-2) − | 9.9195e-1 (5.95e-3) − | 3.6601e-1 (1.72e-1) − | 4.8848e-2 (2.58e-2) − | 9.9600e-1 (6.91e-4)
| 16 | 25 | 9.8992e-1 (6.21e-3) ≈ | 9.8775e-1 (1.56e-2) ≈ | 9.1183e-1 (6.83e-2) − | 1.2394e-1 (9.50e-2) − | 1.2462e-1 (8.46e-3) − | 9.9048e-1 (4.91e-3)
| 20 | 29 | 9.6752e-1 (2.36e-2) − | 9.6971e-1 (3.08e-2) − | 9.6335e-1 (2.58e-2) − | 1.2707e-1 (7.81e-2) − | 1.1598e-1 (4.06e-3) − | 9.9660e-1 (2.39e-3)
WFG2 | 5 | 14 | 9.9538e-1 (6.41e-4) + | 9.8458e-1 (6.94e-3) + | 9.9498e-1 (7.14e-4) + | 9.0825e-1 (4.56e-2) − | 5.6138e-1 (3.47e-2) − | 9.5403e-1 (7.40e-3)
| 8 | 17 | 9.9548e-1 (2.97e-3) + | 9.9115e-1 (5.61e-3) + | 9.9437e-1 (1.81e-3) + | 9.2547e-1 (6.69e-2) − | 3.7767e-1 (6.52e-2) − | 9.8311e-1 (9.02e-3)
| 12 | 21 | 9.9632e-1 (1.49e-3) + | 9.9621e-1 (2.71e-3) + | 9.9536e-1 (2.02e-3) + | 8.1881e-1 (1.71e-1) ≈ | 5.2558e-1 (5.23e-2) − | 9.3353e-1 (5.86e-2)
| 16 | 25 | 9.5829e-1 (2.57e-2) + | 9.6698e-1 (2.64e-2) + | 9.8433e-1 (1.28e-2) + | 7.3878e-1 (2.73e-1) ≈ | 2.6048e-1 (5.93e-2) − | 8.7481e-1 (6.05e-2)
| 20 | 29 | 7.9345e-1 (1.07e-2) − | 7.6425e-1 (6.29e-2) − | 9.8409e-1 (1.40e-2) + | 7.1974e-1 (2.36e-1) − | 2.7898e-1 (7.24e-2) − | 9.0307e-1 (4.40e-2)
WFG3 | 5 | 14 | 1.8061e-1 (1.90e-2) + | 1.9856e-1 (1.99e-2) + | 1.4498e-1 (2.64e-2) ≈ | 7.8651e-2 (5.68e-3) − | 0.0000e+0 (0.00e+0) − | 1.5213e-1 (3.46e-2)
| 8 | 17 | 4.9226e-2 (3.07e-2) − | 6.1296e-2 (2.58e-2) − | 3.4272e-2 (3.33e-2) − | 4.3011e-3 (9.53e-3) − | 0.0000e+0 (0.00e+0) − | 1.3249e-1 (1.39e-2)
| 12 | 21 | 5.9230e-4 (2.65e-3) ≈ | 1.4011e-3 (6.27e-3) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0)
| 16 | 25 | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0)
| 20 | 29 | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0) ≈ | 0.0000e+0 (0.00e+0)
WFG4 | 5 | 14 | 7.7192e-1 (9.59e-4) − | 7.2430e-1 (5.58e-3) − | 7.7110e-1 (1.85e-3) − | 1.0275e-1 (3.87e-2) − | 2.7999e-1 (4.38e-2) − | 7.7273e-1 (9.23e-4)
| 8 | 17 | 8.7077e-1 (2.16e-2) − | 7.9927e-1 (2.90e-2) − | 8.7507e-1 (3.48e-3) − | 9.9142e-2 (2.55e-2) − | 9.7996e-2 (5.27e-2) − | 8.8095e-1 (1.69e-3)
| 12 | 21 | 9.6151e-1 (8.09e-3) − | 9.6239e-1 (9.05e-3) − | 9.6679e-1 (1.15e-3) − | 1.2324e-1 (4.08e-2) − | 3.1222e-1 (1.41e-2) − | 9.6820e-1 (9.05e-4)
| 16 | 25 | 7.6042e-1 (3.01e-2) − | 7.6646e-1 (3.48e-2) − | 7.6680e-1 (3.74e-2) − | 1.3174e-1 (4.20e-2) − | 6.9645e-2 (3.47e-2) − | 8.0272e-1 (5.71e-3)
| 20 | 29 | 3.9072e-1 (5.45e-2) − | 3.9669e-1 (8.04e-2) − | 8.0632e-1 (5.50e-2) − | 1.0260e-1 (2.96e-2) − | 7.3645e-2 (6.39e-2) − | 8.5826e-1 (1.44e-2)
WFG5 | 5 | 14 | 7.2388e-1 (4.04e-4) ≈ | 6.9553e-1 (4.59e-3) − | 7.2190e-1 (1.26e-3) − | 1.2300e-1 (1.17e-1) − | 2.1063e-1 (2.57e-2) − | 7.2383e-1 (4.13e-4)
| 8 | 17 | 8.2417e-1 (2.45e-3) ≈ | 7.7919e-1 (1.97e-2) − | 8.2034e-1 (2.21e-3) − | 8.7984e-2 (1.74e-2) − | 5.0847e-2 (9.89e-3) − | 8.2456e-1 (1.55e-3)
| 12 | 21 | 9.0026e-1 (5.59e-3) − | 9.0124e-1 (5.46e-4) − | 9.0103e-1 (5.59e-4) − | 1.0908e-1 (4.84e-2) − | 2.1589e-1 (1.88e-2) − | 9.0170e-1 (2.85e-4)
| 16 | 25 | 6.9084e-1 (3.49e-2) − | 6.9055e-1 (2.78e-2) − | 6.1666e-1 (6.77e-2) − | 1.1569e-1 (1.47e-1) − | 2.5597e-2 (1.30e-2) − | 7.2640e-1 (2.61e-2)
| 20 | 29 | 4.5036e-1 (6.82e-2) − | 4.6206e-1 (4.08e-2) − | 6.1802e-1 (1.54e-2) − | 8.2448e-2 (2.31e-4) − | 2.0581e-2 (9.15e-3) − | 8.0064e-1 (1.50e-2)
WFG6 | 5 | 14 | 7.0821e-1 (1.80e-2) ≈ | 6.7710e-1 (2.07e-2) − | 7.0863e-1 (1.24e-2) ≈ | 1.8715e-1 (1.30e-1) − | 1.9822e-1 (1.71e-2) − | 7.0093e-1 (1.46e-2)
| 8 | 17 | 7.9683e-1 (4.43e-2) ≈ | 6.8251e-1 (3.75e-2) − | 7.5439e-1 (6.92e-2) − | 3.3661e-1 (1.72e-1) − | 5.1077e-2 (2.27e-2) − | 7.9825e-1 (1.52e-2)
| 12 | 21 | 8.7509e-1 (3.42e-2) ≈ | 8.7923e-1 (3.30e-2) ≈ | 8.8526e-1 (2.05e-2) ≈ | 3.0792e-1 (2.01e-1) − | 2.1014e-1 (1.67e-2) − | 8.7325e-1 (1.98e-2)
| 16 | 25 | 6.6126e-1 (4.26e-2) − | 6.7283e-1 (4.76e-2) ≈ | 6.2883e-1 (6.26e-2) − | 2.3230e-1 (2.32e-1) − | 2.2970e-2 (9.17e-3) − | 6.8926e-1 (4.90e-2)
| 20 | 29 | 3.8999e-1 (7.79e-2) − | 3.7715e-1 (7.50e-2) − | 7.1044e-1 (6.21e-2) ≈ | 3.9866e-1 (2.86e-1) − | 1.9621e-2 (9.66e-3) − | 7.3240e-1 (6.43e-2)
WFG7 | 5 | 14 | 7.7141e-1 (5.66e-4) − | 7.3324e-1 (7.29e-3) − | 7.6886e-1 (1.53e-3) − | 2.0308e-1 (4.59e-2) − | 2.9382e-1 (1.98e-2) − | 7.7387e-1 (4.04e-4)
| 8 | 17 | 8.6922e-1 (2.35e-2) − | 8.1967e-1 (3.45e-2) − | 8.3078e-1 (9.25e-2) − | 1.9417e-1 (6.26e-2) − | 7.0205e-2 (1.63e-2) − | 8.8454e-1 (5.17e-4)
| 12 | 21 | 9.5360e-1 (2.02e-2) − | 9.4848e-1 (2.63e-2) − | 9.6833e-1 (8.57e-4) − | 1.6683e-1 (5.98e-2) − | 2.8836e-1 (3.42e-2) − | 9.7012e-1 (2.25e-4)
| 16 | 25 | 8.0432e-1 (2.50e-2) ≈ | 8.0069e-1 (3.12e-2) ≈ | 6.9151e-1 (6.34e-2) − | 1.4454e-1 (4.05e-2) − | 3.5548e-2 (6.02e-3) − | 7.9715e-1 (1.49e-2)
| 20 | 29 | 5.0487e-1 (7.97e-2) − | 5.3544e-1 (8.05e-2) − | 7.6466e-1 (5.33e-2) − | 1.5002e-1 (1.36e-1) − | 3.0104e-2 (5.77e-3) − | 8.2924e-1 (4.51e-2)
WFG8 | 5 | 14 | 6.5219e-1 (4.79e-3) − | 5.9065e-1 (1.48e-2) − | 6.6736e-1 (2.16e-3) + | 7.4895e-2 (4.40e-2) − | 1.9068e-1 (2.88e-2) − | 6.5900e-1 (2.47e-3)
| 8 | 17 | 7.0894e-1 (9.02e-3) − | 7.1359e-1 (2.21e-2) − | 6.0566e-1 (6.41e-2) − | 1.5481e-1 (1.07e-1) − | 3.4891e-2 (2.43e-2) − | 7.4188e-1 (2.96e-2)
| 12 | 21 | 8.6060e-1 (2.30e-2) ≈ | 8.7220e-1 (2.79e-2) + | 8.8530e-1 (1.89e-2) + | 1.6649e-1 (6.03e-2) − | 2.2240e-1 (2.33e-2) − | 8.4178e-1 (6.72e-2)
| 16 | 25 | 6.0684e-1 (2.74e-2) ≈ | 6.0546e-1 (2.27e-2) ≈ | 4.9026e-1 (7.51e-2) − | 1.1992e-1 (3.88e-2) − | 1.9111e-2 (1.55e-2) − | 6.1931e-1 (7.32e-3)
| 20 | 29 | 1.5544e-1 (4.42e-2) − | 1.7236e-1 (7.36e-2) − | 5.6342e-1 (4.77e-2) − | 1.2234e-1 (4.15e-2) − | 1.1959e-2 (6.51e-3) − | 6.8162e-1 (8.16e-3)
WFG9 | 5 | 14 | 7.1809e-1 (1.13e-2) − | 6.5478e-1 (4.60e-2) − | 7.0958e-1 (1.98e-2) − | 2.1721e-1 (1.20e-1) − | 2.2113e-1 (1.75e-2) − | 7.2747e-1 (3.26e-2)
| 8 | 17 | 7.3666e-1 (6.77e-2) − | 7.1609e-1 (6.49e-2) − | 7.2246e-1 (6.86e-2) − | 1.8587e-1 (1.57e-1) − | 6.4197e-2 (1.89e-2) − | 8.0474e-1 (3.72e-2)
| 12 | 21 | 8.3969e-1 (5.91e-2) − | 8.5270e-1 (5.80e-2) − | 8.6291e-1 (3.25e-2) − | 2.0559e-1 (1.58e-1) − | 2.4149e-1 (3.76e-2) − | 8.8821e-1 (1.11e-2)
| 16 | 25 | 6.1020e-1 (7.26e-2) − | 6.1712e-1 (7.46e-2) − | 4.6361e-1 (5.04e-2) − | 2.1251e-1 (2.10e-1) − | 3.3553e-2 (2.04e-2) − | 6.9268e-1 (6.78e-2)
| 20 | 29 | 4.2056e-1 (4.75e-2) − | 4.2298e-1 (6.29e-2) − | 5.4977e-1 (6.73e-2) − | 2.1692e-1 (1.98e-1) − | 2.2759e-2 (6.69e-3) − | 7.0363e-1 (6.98e-2)
+/−/≈ | 8/25/12 | 8/29/8 | 8/29/8 | 0/40/5 | 0/42/3 |
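The "+ / − / ≈" marks attached to each entry summarize a pairwise significance test of each competitor against MOEA-AD over the independent runs. The exact test settings are not restated in these tables, so the sketch below is only an assumed recreation: a two-sided Wilcoxon rank-sum test at the 0.05 level using the normal approximation (reasonable for typical run counts of 20-30), with average ranks for ties and no tie correction of the variance:

```python
import math
from statistics import median

def _avg_ranks(values):
    """1-based ranks with average ranks assigned to tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # mean of ranks i+1..j+1
        i = j + 1
    return ranks

def rank_sum_mark(a, b, alpha=0.05):
    """Mark for sample a vs. sample b on a minimization metric such as IGD:
    '+' if a is significantly smaller, '−' if significantly larger,
    '≈' if the two-sided rank-sum test finds no significant difference."""
    n1, n2 = len(a), len(b)
    ranks = _avg_ranks(list(a) + list(b))
    r1 = sum(ranks[:n1])                        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                 # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p >= alpha:
        return "≈"
    return "+" if median(a) < median(b) else "−"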
Table 6. HV results (mean and standard deviation) obtained by six algorithms on the real-world case problems.
Problem | M | D | NSGA-III | A-NSGA-III | SPEA/R | MaOEA-IGD | MaOEA-IT | MOEA-AD
DBDP | 2 | 4 | 4.2753e-1 (5.76e-3) − | 2.7328e-1 (2.02e-3) − | 2.7613e-1 (8.32e-6) − | 7.7334e-2 (6.90e-2) − | 4.3413e-1 (1.20e-3) ≈ | 4.3364e-1 (1.89e-3)
GTDP | 2 | 4 | 4.8380e-1 (4.15e-4) ≈ | 4.8292e-1 (1.78e-4) − | 4.8346e-1 (4.09e-4) − | 2.2583e-1 (2.06e-2) − | 4.8401e-1 (5.57e-5) − | 4.8446e-1 (4.52e-4)
CSIDP | 3 | 7 | 2.5507e-2 (1.69e-4) − | 2.3390e-2 (4.26e-4) − | 2.2765e-2 (2.71e-4) − | 1.2509e-2 (3.62e-3) − | 2.5564e-2 (9.96e-5) − | 2.5849e-2 (6.89e-5)
FBPT | 2 | 4 | 4.0993e-1 (1.48e-4) + | 4.0945e-1 (1.49e-4) + | 4.1036e-1 (2.24e-5) + | 2.2372e-1 (9.06e-3) − | 4.0993e-1 (1.48e-4) + | 4.0905e-1 (1.51e-4)
TBPT | 2 | 2 | 8.3442e-1 (1.28e-3) − | 8.4177e-1 (2.13e-3) − | 8.4558e-1 (2.56e-4) − | 6.2555e-1 (5.13e-2) − | 8.3441e-1 (1.42e-3) − | 8.4729e-1 (1.62e-4)
+/−/≈ | 1/3/1 | 1/4/0 | 1/4/0 | 0/5/0 | 1/3/1 |
Wang, X.; Wang, H.; Tian, Z.; Wang, W.; Chen, J. Angle-Based Dual-Association Evolutionary Algorithm for Many-Objective Optimization. Mathematics 2025, 13, 1757. https://doi.org/10.3390/math13111757