Article

Contribution-Driven Task Design: Multi-Task Optimization Algorithm for Large-Scale Constrained Multi-Objective Problems

School of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Computers 2026, 15(1), 31; https://doi.org/10.3390/computers15010031
Submission received: 16 December 2025 / Revised: 31 December 2025 / Accepted: 2 January 2026 / Published: 6 January 2026
(This article belongs to the Special Issue Operations Research: Trends and Applications)

Abstract

Large-scale constrained multi-objective optimization problems (LSCMOPs) are highly challenging due to the need to optimize multiple conflicting objectives under complex constraints within a vast search space. To address this challenge, this paper proposes a multi-task optimization algorithm based on contribution-driven task design (MTO-CDTD). The algorithm constructs a multi-task optimization framework comprising one original task and multiple auxiliary tasks. Guided by an optimal contribution objective assignment strategy, each auxiliary task optimizes a subset of decision variables that contribute most to a specific objective function. A contribution-guided initialization strategy is then employed to generate high-quality initial populations for the auxiliary tasks. Furthermore, a knowledge transfer strategy based on multi-population collaboration is developed to integrate optimization information from the auxiliary tasks, thereby effectively guiding the original task in searching the large-scale decision space. Extensive experiments on three benchmark test suites—LIRCMOP, CF, and ZXH_CF—with 100, 500, and 1000 decision variables demonstrate that the proposed MTO-CDTD algorithm achieves significant advantages in solving complex LSCMOPs.

1. Introduction

The core challenge of constrained multi-objective optimization problems (CMOPs) lies in the dual task of handling conflicting objective functions and constraints [1,2,3]. Because the objectives of a CMOP conflict with one another, it is difficult for an optimization algorithm to find a single solution that optimizes all of them simultaneously. The goal of solving a CMOP is therefore to obtain a set of constraint-satisfying Pareto optimal solutions, i.e., solutions that balance multiple objectives while fully meeting all constraint requirements. Without loss of generality, a minimization CMOP can be formulated as Equation (1), where m objective functions are involved, k and l denote the numbers of inequality and equality constraints, respectively, and x denotes a candidate solution in the d-dimensional decision space R^d. Such problems are common in practice, as many real-world problems can be modeled as CMOPs, such as multi-objective path planning [4], integrated circuit optimization [5], and cloud resource allocation [6].
Minimize  F(x) = ( f_1(x), f_2(x), …, f_m(x) )
s.t.      g_i(x) ≤ 0,  i = 1, 2, …, k
          h_j(x) = 0,  j = 1, 2, …, l
          x = (x_1, x_2, …, x_d),  x ∈ R^d        (1)
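For concreteness, the formulation in Equation (1) can be evaluated programmatically. The sketch below is not part of the original paper; all names are ours. It computes the objective vector F(x) and an overall constraint violation CV(x), with equality constraints relaxed by a small tolerance, as is common in constrained evolutionary optimization.

```python
import numpy as np

def constraint_violation(x, ineq_cons, eq_cons, tol=1e-4):
    """Overall constraint violation CV(x) for the CMOP of Equation (1).

    ineq_cons: callables g_i, required to satisfy g_i(x) <= 0
    eq_cons:   callables h_j, required to satisfy h_j(x) == 0
    Equality constraints are relaxed to |h_j(x)| <= tol, a common
    practice in constrained evolutionary optimization.
    """
    cv = sum(max(0.0, g(x)) for g in ineq_cons)
    cv += sum(max(0.0, abs(h(x)) - tol) for h in eq_cons)
    return cv

def evaluate(x, objectives, ineq_cons, eq_cons):
    """Return the objective vector F(x) and the violation CV(x)."""
    f = np.array([f_i(x) for f_i in objectives])
    return f, constraint_violation(x, ineq_cons, eq_cons)

# Toy two-objective CMOP: minimize (x1, 1 - x1 + x2^2) s.t. x1 + x2 >= 0.5,
# with the inequality rewritten in the g(x) <= 0 form of Equation (1).
objs = [lambda x: x[0], lambda x: 1.0 - x[0] + x[1] ** 2]
gs = [lambda x: 0.5 - x[0] - x[1]]
f, cv = evaluate(np.array([0.2, 0.1]), objs, gs, [])  # infeasible point, cv > 0
```

A feasible point yields CV(x) = 0, so CV doubles as a feasibility test and, for infeasible points, as a ranking criterion.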
Constrained multi-objective evolutionary algorithms (CMOEAs) have become powerful tools for solving CMOPs due to their strong global search capabilities and flexibility. Over the past decades, extensive research has been conducted on CMOEAs. In general, a CMOEA consists of two key components: a multi-objective evolutionary algorithm (MOEA) responsible for optimizing multiple objectives, and a constraint handling technique (CHT) dedicated to managing constraints. In the practical operation of a CMOEA, the CHT is often embedded into the environmental selection step of the MOEA or the parent selection step within the genetic operators, in order to filter out high-quality candidate solutions. By combining MOEAs with various CHTs, a large number of different types of CMOEAs have been proposed. These CMOEAs can be classified into four categories: penalty function-based methods, objective and constraint separation methods, operator modification-based methods, and hybrid methods. Specifically, penalty function-based methods introduce constraints as penalty terms into the objective functions, thereby transforming CMOPs into unconstrained optimization problems [7,8,9]. Objective and constraint separation methods handle objectives and constraints independently, determining the superiority between solutions by separately comparing them from the perspectives of objectives and constraints [10,11,12,13,14]. Operator modification-based methods aim to improve the performance of solving CMOPs by modifying key operators such as crossover, mutation, and selection [15,16,17]. Hybrid methods refer to approaches that combine MOEAs with mathematical optimization techniques [18,19].
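As an illustration of the first category, a penalty-based CHT can be sketched in a few lines. The additive form and the fixed penalty factor rho are our assumptions for illustration; concrete penalty schemes in [7,8,9] vary, and adaptive factors are also common.

```python
def penalized_objectives(f, cv, rho=100.0):
    """Static penalty CHT: add a scaled violation term to every objective,
    turning the CMOP into an unconstrained multi-objective problem.

    f:   objective vector of a candidate solution
    cv:  overall constraint violation of that solution
    rho: penalty factor (fixed here; adaptive schedules are also used)
    """
    return [fi + rho * cv for fi in f]
```

Feasible solutions (cv = 0) keep their original objective values, while infeasible ones are pushed away from the Pareto front in proportion to their violation.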
However, most of the above studies focus on small-scale CMOPs, where the problems may involve only about 10 to 30 decision variables. In practice, many large-scale CMOPs (LSCMOPs) exist, such as the energy optimization of large-scale natural gas pipelines [20], large-scale crude oil scheduling [21], airline crew scheduling [22], wireless sensor placement [23], and power system optimization [24]. The number of decision variables in these problems typically exceeds 100. A large number of decision variables can lead to the “curse of dimensionality,” making it easier for algorithms to get trapped in local optima during the search process. Compared to small-scale CMOPs, the difficulty of solving LSCMOPs is significantly increased, which is reflected in the following two aspects. On the one hand, there are more intricate coupling relationships between large-scale variables. The decision variables are interwoven with each other, and this complex interdependence significantly increases the difficulty of optimization. During the optimization process, algorithms are easily attracted to local optima, making it difficult for the algorithm to escape the local region and search for better solutions. On the other hand, the large-scale decision variables directly lead to an exponential expansion of the search space, which undoubtedly increases the difficulty for the algorithm to find feasible regions. The dramatic growth in the search space means that the algorithm not only requires significant computational resources but also has a relatively low probability of finding effective solutions. Furthermore, as the number of decision variables increases, the number of constraints in many problems also increases accordingly. These constraints can sometimes form complex local feasible regions in the decision space. The existence of these local regions further exacerbates the difficulty of solving CMOPs.
In recent years, the application of multi-task optimization (MTO) [25] to solve CMOPs has gradually become a research hotspot. MTO, as a mechanism capable of simultaneously handling multiple optimization tasks, enhances the efficiency and effectiveness of solving each task by sharing information between different tasks. The core idea behind this method is that there are often potential connections and complementarities between different optimization tasks, and effectively utilizing these characteristics can optimize the overall solving process. Numerous research studies have demonstrated the effectiveness of MTO, which has shown significant advantages in various optimization scenarios [26,27,28,29]. Recently, some studies have introduced the concept of MTO to solve CMOPs and achieved successful results [13,30,31]. However, MTO sometimes falls short when dealing with LSCMOPs. Key challenges that significantly affect its performance include how to design a multitasking framework capable of enhancing search ability in large-scale decision spaces, and how to ensure effective knowledge transfer among multiple tasks under constrained environments.
This paper proposes a novel multi-task optimization algorithm, referred to as MTO-CDTD, which adopts a contribution-driven task design technique to address LSCMOPs. Specifically, MTO-CDTD consists of one original task (the original LSCMOP) and a number of auxiliary tasks equal to the number of objective functions, with each task assigned a separate population for optimization. Because decision variables affect the convergence and diversity of the algorithm to different degrees, and their influence differs across objective functions, MTO-CDTD employs an optimal contribution objective assignment strategy that assigns each convergence-related decision variable to the objective function to which it contributes the most. To perform a targeted and efficient search in a large-scale decision space, each auxiliary task is dedicated to optimizing only those decision variables that contribute strongly to a specific objective function; in other words, the i-th auxiliary task focuses exclusively on the decision variables that make the largest contribution to the i-th objective function. In addition, this paper proposes a contribution-guided initialization strategy to generate high-quality initial populations for the auxiliary tasks: a curve-fitting method estimates the optimal values of convergence-related decision variables based on their contributions to the objective functions, whereas uniform sampling is applied to diversity-related decision variables to maintain population diversity. Finally, to enable information exchange between tasks, this paper proposes a multi-population-based knowledge transfer strategy, which guides the original task toward more promising directions in the large-scale search space by sharing information across tasks.
The main contributions of this paper can be summarized as follows:
  • A multi-task framework for LSCMOPs is proposed, incorporating a contribution-driven task design technique. The framework consists of one original task (the original LSCMOP) and m auxiliary tasks, where m is the number of objective functions. The i-th auxiliary task optimizes only those decision variables that contribute significantly to the i-th objective function.
  • An optimal contribution objective assignment strategy is proposed to assign each convergence-related decision variable to the objective function with the greatest contribution. The assignment results are used to determine the decision variables optimized by the auxiliary tasks and to generate the initial population for the auxiliary tasks.
  • A contribution-guided initialization strategy is introduced to generate high-quality initial populations for auxiliary tasks in the large-scale decision space.
  • A multi-population-based knowledge transfer strategy is proposed to improve the algorithm’s performance on the original LSCMOP task by integrating useful information from multiple auxiliary tasks.
The rest of this paper is organized as follows: Section 2 reviews the related work, Section 3 elaborates on the proposed MTO-CDTD, Section 4 discusses the experiments and presents corresponding analyses, and Section 5 summarizes the final conclusions.

2. Related Works

Given that current large-scale constrained multi-objective evolutionary algorithms (LSCMOEAs) largely draw inspiration from large-scale multi-objective evolutionary algorithms (LSMOEAs), this section will survey the relevant work on both LSMOEAs and LSCMOEAs, while also providing a brief overview of multi-task optimization (MTO).

2.1. Large-Scale Multi-Objective Evolutionary Algorithms

Currently, multi-objective evolutionary algorithms (MOEAs) for solving large-scale multi-objective problems (LSMOPs) can be categorized into three types [32,33,34]. The first category encompasses MOEAs based on decision variable analysis. This approach commences with an analysis of decision variables, followed by partitioning them according to their characteristics or devising specific optimization strategies. For instance, Ref. [35] developed an evolutionary algorithm based on decision variable clustering, termed LMEA. This algorithm employs the K-means clustering approach to classify decision variables into those related to convergence and those associated with diversity, after which it formulates respective optimization strategies for these two categories of variables. Ref. [36] developed a differential evolution (DE) algorithm that assesses the significance of individual variables in addressing LSMOPs, categorizes variables based on their significance, and devotes greater computational resources to variable groups of higher significance. Ref. [37] proposed an adaptive local decision variable analysis method based on a decomposition framework. The central idea lies in integrating the guidance of reference vectors into the analysis of decision variables and employing an adaptive strategy to optimize these variables. Ref. [38] proposed a novel decomposition method called decomposition with overlapping variables (DOV). This method constructs an interaction matrix of decision variables and groups them based on the interaction effects among variables, thereby reducing the number of common variables between groups. Ref. [39] proposed an efficient method for analyzing decision variables grounded in reformulation. In this approach, the analytical process is restructured into an optimization problem featuring binary decision variables, aimed at approximating various grouping outcomes. Ref. [40] proposed a large-scale multi-objective evolutionary algorithm (MOEAWOD) based on weighted overlapping grouping of decision variables. First, decision variables are divided into diversity variables and convergence variables. Then, the convergence variables are further subdivided into multiple groups considering the interaction effects among variables. Ref. [41] proposed a decision variable contribution-based objective evolutionary algorithm (DVCOEA), which calculates the contribution objectives of each variable by quantifying the optimization degree of decision variables for each objective, and then groups the variables according to their contribution objectives. Although such algorithms can detect interactions between variables, as the dimension of decision variables increases, the computational resources they consume on variable analysis rise sharply. Moreover, the results of decision variable analysis directly impact the algorithm’s performance: if the analysis results are not accurate enough (which easily occurs in complex LSMOPs), the algorithm’s performance degrades.
The second type refers to MOEAs based on decision space reduction, which employ various strategies to map the large-scale search space to a lower-dimensional space, thereby enhancing their search capability. Ref. [42] put forward a weighted optimization framework (WOF) that partitions decision variables into K groups, assigns a weight variable to each group, and thereafter optimizes these weights to accomplish problem transformation. Ref. [43] proposed a self-organizing weighted evolutionary framework (S-WOF). Compared with the original WOF framework, S-WOF simplifies the optimization process into a single stage and adaptively adjusts the ratio of evaluation times between weighted optimization ( t 1 ) and original optimization ( t 2 ) based on the evolutionary state of the population, thereby achieving a dynamic balance between convergence and diversity. In addition, Ref. [44] proposed a clustering-based weighted optimization framework (CWOF). This framework integrates hierarchical clustering and partition clustering to choose diverse reference solutions for guiding the search direction of weighted optimization, develops a spatial conversion method to reduce the decision space, and employs an adaptive computational resource allocation strategy, thus enhancing search performance and convergence speed. Ref. [45] developed a large-scale multi-objective evolutionary algorithm (LSMOEA) grounded in problem transformation and decomposition. This algorithm reduces the dimensionality of the search space through problem transformation, thereby lowering the complexity of the original LSMOPs and improving computational efficiency. Ref. [46] proposed a fuzzy decision variable framework (FDV) to address LSMOPs. This framework splits the evolutionary process into two phases: fuzzy evolution and precise evolution. 
In the fuzzy evolution phase, it narrows the search range in the decision space using fuzzy decision variables; in the precise evolution phase, it directly optimizes the original decision variables, thus achieving a balance between convergence and diversity. Ref. [47] proposed an LSMOEA based on problem transformation, which initializes weights through an improved reference solution selection method and adopts a two-line strategy to combine the optimized weight vectors with the original decision variables to generate new populations. Ref. [48] achieved problem dimensionality reduction by generating uniformly distributed direction vectors based on the opposition-based learning strategy and constructing search subspaces in pairs. After optimizing the solution set within the subspaces, they mapped it back to the original space for further optimization. By transforming the original LSMOPs into low-dimensional problems, such algorithms can effectively improve computational efficiency and accelerate convergence speed. However, the dimensionality reduction process may lead to the loss of interaction information between key variables, which will directly affect the optimization performance of the algorithms, especially in cases where variables are strongly correlated.
The third category consists of MOEAs based on various search strategies. Among them, the multi-stage search strategy is a common approach, which improves efficiency by decomposing the search process into stages with different focuses. For example, Ref. [49] proposed a two-stage competitive particle swarm optimization algorithm. In the first stage, different search strategies are designed for winning particles and losing particles, respectively, to explore the space; in the second stage, the global optimal value is integrated to guide the update of losing particles so as to accelerate convergence. Ref. [50] proposed a two-stage LSMOEA (DLMOEA-DLS). In the first stage, decision variables undergo classification and independent optimization to facilitate convergence; in the second stage, a dynamic learning strategy is devised to generate new offspring. Ref. [51] proposed a two-stage optimization algorithm (TSECSO) grounded in adaptive entropy vectors. During the first stage, entropy vectors serve to partition decision variables; in the second stage, an enhanced competitive swarm optimization (CSO) is utilized to achieve global balance between convergence and diversity. On the other hand, efficient sampling strategies are widely applied to solve LSMOPs [52,53,54]. Among them, Ref. [52] proposed a meta-knowledge-assisted sampling strategy (MKAS); Ref. [53] put forward a learning-guided cross-sampling strategy; and Ref. [54] presented a dual-sampling method that combines direction-oriented sampling and fuzzy Gaussian sampling. To further accelerate the speed of algorithms in solving LSMOPs, some studies have adopted fast search strategies [55,56,57]. 
Specifically, the Fast LSMOEA Framework (FLEA) proposed by [55] uses Chebyshev distance to evaluate the distribution characteristics of the population in both the objective space and the decision space, then constructs reference vectors to guide the evolutionary direction, and employs a Gaussian distribution to generate potential offspring. Ref. [56] developed a fast interpolation strategy that generates decision variables via interpolation functions using a small set of variables. This approach markedly reduces the number of variables requiring optimization, thereby enhancing the convergence speed. Ref. [57] put forward a directional fast search framework that can effectively identify promising individuals in large-scale decision spaces, thereby guiding population evolution through a fast search mechanism. In recent years, the use of co-evolutionary strategies to solve LSMOPs has also attracted the attention of many researchers. For example, Refs. [58,59,60] adopted a multi-population co-evolutionary strategy. By decomposing the problem or dividing the population, multiple sub-populations can optimize different parts or objectives in parallel or interactively, and collaborate with each other to improve the quality of the overall solution. Refs. [61,62] adopted multi-task evolutionary strategies, which improve overall performance by considering the correlations between different tasks and sharing information and resources. In addition, some studies have incorporated machine learning concepts into the proposed LSMOEAs. For example, Ref. [63] utilized a three-layer lightweight convolutional neural network (CNN) in their research. They used the homogeneous diversity subset and convergence subset of the population as input and output nodes, respectively, generating new individuals through backpropagation gradients. Ref. [64] proposed an LSMOEA based on a deep Gaussian mixture model (GMM), which solves LSMOPs by hierarchically learning interactions between variables to achieve efficient variable grouping. Such algorithms can effectively improve the efficiency and performance of solving LSMOPs by means of various different search strategies. However, it is worth noting that as the dimension of decision variables continues to increase, the search space will expand sharply, and the search performance of such algorithms will gradually decline, making them prone to “premature convergence”.

2.2. Large-Scale Constrained Multi-Objective Evolutionary Algorithms

LSCMOPs, as an extension of LSMOPs, have recently attracted increasing attention from researchers. Ref. [65] conducted the first research on LSCMOPs and proposed a pairwise offspring generation-based evolutionary algorithm (POCEA). This algorithm constructs subpopulations through reference vectors and adopts a pairing strategy to select parents, thereby generating offspring solutions that can effectively cross infeasible regions and promote convergence, so as to address the challenges posed by high-dimensional decision variables and complex constraints. Ref. [66] proposed a dominance reference vector-guided co-evolutionary multi-objective algorithm. Its core lies in constructing subpopulations using reference vectors, designing an environmental selection strategy by combining angular penalty distance and dominance relationships, and applying a co-evolutionary constraint handling technique to efficiently cross infeasible regions, thereby effectively solving LSCMOPs. Ref. [67] proposed a dual-scale optimization framework based on decision migration. Its core lies in constructing a dual-scale model of the large-scale original decision space and the small-scale parameter space using the Lagrange multiplier method, implementing space switching and maximum dimensionality reduction through a decision migration algorithm, and synergistically optimizing objectives and constraints by combining a dual-scale evolutionary strategy. Ref. [68] proposed an LSCMOEA based on triple sampling. The core idea is to perform sampling in three different directions: feasibility-first, convergence-first, and diversity-first, according to the current population state, and to adaptively select the constraint handling technique for each iteration by combining reinforcement learning methods. Ref. [69] proposed a hierarchical optimization method, which accelerates the convergence of solutions through layer-by-layer dimensionality reduction. Ref. [70] proposed a conjugate gradient (CG) and differential evolution (DE) algorithm (MOCGDE). This algorithm uses DE to promote diversity in the decision space, CG to drive rapid convergence in the objective space, and integrates a target decomposition strategy to distinguish the CG of solutions, thereby ensuring the quality of offspring. Ref. [71] proposed a dynamic subspace search-based evolutionary algorithm (DSSEA), which accelerates early convergence by searching in low-dimensional subspaces and gradually expands the subspaces to explore the entire decision space. Ref. [72] proposed the IMTCMO_BS algorithm based on the idea of multi-task evolutionary computation, where the solution set generated by the bidirectional sampling strategy dynamically guides the co-evolution of the original task and auxiliary tasks.
Although the above-mentioned algorithms have achieved certain results in solving LSCMOPs, there is still room for further improvement in terms of convergence and diversity. Specifically, some methods rely on refined variable analysis to reduce the difficulty of solving problems, but their high computational cost limits their efficiency when resources are limited; other methods search directly in the original space, which, although effective for small-scale problems, cannot avoid falling into local optima in LSCMOPs; in addition, strategies that directly reduce the decision space tend to lead to the loss of the complete Pareto frontier because they ignore the correlation between variables and objectives or constraints.

2.3. MTO

Multi-task optimization (MTO) is an important research direction in the fields of evolutionary computation, machine learning, and optimization. The concept of MTO was first proposed by [25], who regarded it as a new optimization paradigm aiming to solve multiple optimization problems simultaneously by leveraging the implicit parallelism of swarm search. The core idea of MTO lies in effectively conducting knowledge transfer by exploring and utilizing the similarities between tasks, thereby accelerating the convergence speed of each task and improving the solution quality [73]. Without loss of generality, the MTO problem can be expressed by the following Equation (2).
Minimize:  F_1(x^1) = ( f_1^1(x^1), …, f_1^{m_1}(x^1) )
           F_2(x^2) = ( f_2^1(x^2), …, f_2^{m_2}(x^2) )
           ⋮
           F_k(x^k) = ( f_k^1(x^k), …, f_k^{m_k}(x^k) )
s.t.       x^i ∈ Ω_i,  i = 1, 2, …, k        (2)
where F_1(x^1), F_2(x^2), …, F_k(x^k) denote the k tasks to be optimized, each being either a multi-objective problem (MOP) or a single-objective problem (SOP); x^i = (x^i_1, x^i_2, …, x^i_{n_i}) denotes a candidate solution for the i-th task; Ω_i denotes the search space of the i-th task; and n_i is the dimension of that search space.
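To make the setting of Equation (2) concrete, the sketch below runs one generation of a bare-bones multi-task loop in which each task keeps its own population and occasionally seeds an offspring from another task's population. The random-transfer probability `rmp` and the truncation selection used here are simplifications of ours, not a mechanism taken from the paper.

```python
import random

def mto_step(populations, evaluators, variation, rmp=0.3, rng=random):
    """One generation of a minimal multi-task loop for Equation (2).

    populations: one population (list of solutions) per task
    evaluators:  one scalar fitness function per task (lower is better)
    variation:   binary operator producing a child from two solutions
    rmp:         probability of cross-task knowledge transfer
    """
    k = len(populations)
    for i in range(k):
        offspring = []
        for parent in populations[i]:
            if k > 1 and rng.random() < rmp:
                # explicit knowledge transfer: mate with another task's member
                j = rng.choice([t for t in range(k) if t != i])
                donor = rng.choice(populations[j])
                child = variation(parent, donor)
            else:
                # ordinary intra-task variation
                mate = rng.choice(populations[i])
                child = variation(parent, mate)
            offspring.append(child)
        # truncation selection: keep the best half of parents + offspring
        merged = sorted(populations[i] + offspring, key=evaluators[i])
        populations[i] = merged[: len(offspring)]
    return populations
```

Because the merged pool always contains the parents, the best solution of each task cannot get worse from one generation to the next under this selection scheme.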
Currently, MTO has been effectively applied to address various challenges. For instance, Ref. [26] adopted a multi-task particle swarm optimization algorithm (MTPSO) to solve the feature selection problem in high-dimensional classification; Ref. [74] employed a surrogate-assisted evolutionary multi-task algorithm based on genetic programming (GP) to tackle the dynamic flexible job shop scheduling problem; Ref. [75] proposed an improved multi-objective multifactorial evolutionary algorithm (IMO-MFEA) to solve the assembly line balancing problem, considering scenarios of regular production and preventive maintenance.

3. The Proposed MTO-CDTD

This section provides a detailed explanation of the proposed MTO-CDTD. Specifically, it first outlines the multi-task optimization framework employed by the algorithm, followed by a sequential introduction of each core strategy used in the algorithm.

3.1. Multi-Task Framework in MTO-CDTD

Figure 1 illustrates the multi-task evolutionary framework in MTO-CDTD. As shown in Figure 1, the framework consists of a main/original task and m auxiliary tasks (m being the number of objective functions). The original LSCMOP is used as the main/original task, which simultaneously considers all objective functions and optimizes all decision variables to achieve global search. Each auxiliary task focuses on optimizing the subset of decision variables that have a higher contribution to a specific objective function, thereby enhancing the algorithm’s local search capability. For example, auxiliary task T 1 focuses on optimizing the decision variables that contribute more significantly to the objective function f 1 . In this paper, an optimal contribution objective assignment strategy is proposed to assign each decision variable to the objective function to which it contributes the most. In addition, a contribution-driven population initialization strategy is adopted in the auxiliary tasks. This strategy leverages the results of the optimal contribution objective assignment to generate high-quality initial populations for the auxiliary tasks. Furthermore, an ϵ -constraint-based environmental selection strategy is employed in the auxiliary tasks to balance feasibility and convergence, thereby improving the search efficiency. As shown in Figure 1, knowledge transfer among tasks is achieved through a multi-population cooperative strategy. On the one hand, the original task integrates optimization information from the auxiliary tasks regarding specific subsets of decision variables during the optimization process, helping it address the potential curse of dimensionality in a large-scale search space. On the other hand, the auxiliary tasks transfer their discovered potential solutions to the original task, guiding it to search in promising regions and thereby enhancing its search capability.
Algorithm 1 outlines the detailed steps of the proposed method. It first initializes the population for the original task (Line 1) and then applies the optimal contribution objective assignment strategy to assign each decision variable to the objective function to which it contributes the most (Line 2); a detailed description of this strategy can be found in Section 3.2. Next, based on the assignment results, the contribution-guided initialization (CGI) strategy is used to initialize the populations of the auxiliary tasks T_1, T_2, …, T_m (Line 3). During the evolutionary process, both the original task and the auxiliary tasks generate offspring populations using the classic differential evolution operator (DE/rand/1/bin) [35] (Lines 5–8). It is worth noting that the k-th auxiliary task applies the differential evolution operator only to the subset of decision variables that contribute significantly to the k-th objective function, while the values of the other decision variables are inherited from the original task's population. In addition, MTO-CDTD facilitates information exchange among tasks through a multi-population-based knowledge transfer strategy (MCKTS) (Line 9). Specifically, based on the assignment results, this strategy replaces the corresponding decision variables of superior individuals in the original population with the decision variables on which the auxiliary tasks focus during evolution, taken from superior individuals in the auxiliary populations, thereby creating a promising population, SubPOP, that integrates outstanding individuals from multiple auxiliary tasks. By incorporating SubPOP into the environmental selection of the original task, the algorithm effectively guides the original task toward promising regions in the large-scale decision space.
Algorithm 1: Main Loop
     Input: 
N (population size), maxFE (maximum number of function evaluations),
M (the number of objectives), D (the number of decision variables)
     Output: 
POP_o (Pareto optimal solution set)
1: POP_o ← Randomly initialize the population of the original task;
2: CO ← Optimal contribution objective assignment strategy for decision variables;
3: POP_1, POP_2, …, POP_m ← Generate m auxiliary populations by the contribution-guided initialization strategy according to the assignment results obtained in Step 2;
4: while Termination conditions are not satisfied do
5:     for k = 1 to M do
6:         OFF_k ← For the decision variables whose optimal contribution objective is f_k, apply the DE/rand/1/bin operator to POP_k to generate the evolved values; the values of the other decision variables are inherited from POP_o;
7:     end for
8:     OFF_o ← Generate offspring OFF_o from POP_o by the DE/rand/1/bin operator;
9:     SubPOP ← Obtain a merged population integrating superior individuals from both the auxiliary populations and the original population based on the multi-population-based knowledge transfer strategy;
10:    for k = 1 to M do
11:        POP_k ← AuxiliaryEnvironmentalSelection(POP_k, OFF_k);
12:    end for
13:    POP_o ← OriginalEnvironmentalSelection(POP_o, OFF_o, SubPOP);
14: end while
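The DE/rand/1/bin operator used in Lines 5–8 can be sketched as follows. This is an illustrative NumPy version, not the authors' MATLAB/PlatEMO implementation; the scale factor F and crossover rate CR shown here are common default values assumed for illustration.

```python
import numpy as np

def de_rand_1_bin(pop, F=0.5, CR=0.9, rng=None):
    """DE/rand/1/bin: for each target vector, build a mutant from three
    distinct random individuals and apply binomial crossover with the target."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    off = pop.copy()
    for i in range(n):
        # pick three distinct indices, all different from the target i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True   # guarantee at least one gene from the mutant
        off[i, cross] = mutant[cross]
    return off
```

In MTO-CDTD, an auxiliary task would apply this operator only to its assigned variable subset, copying the remaining dimensions from the original population.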
All tasks use an environmental selection operator to determine which individuals proceed to the next iteration (Lines 10–13). In this paper, both the auxiliary tasks and the original task employ the fitness calculation method from SPEA2 [76] for ranking individuals. This method considers both the non-dominated sorting ranks and the density of individuals during the ranking process. Please note that the original task and the auxiliary tasks adopt different constraint-handling strategies during environmental selection. Specifically, the CDP method [36] is used in the environmental selection of the original task. The core idea of this method is to always prioritize feasible solutions over infeasible ones. Among the feasible individuals, those with higher fitness are preferred. Among the infeasible individuals, priority is given to those with lower degrees of constraint violation, measured as the sum of violations across all constraints. In contrast, the ϵ-constraint method [37] is used in the environmental selection of the auxiliary tasks. This method allows individuals to violate constraints to a certain extent, as long as the violation remains within a predefined ϵ threshold. The rationale behind this design is to relax the constraints in the auxiliary tasks so that they can identify individuals that, while slightly infeasible, exhibit significant potential for the subsequent search. The interaction between the auxiliary and original tasks helps guide the original task toward promising regions, thereby improving the overall performance of the algorithm.
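The two constraint-handling rules described above can be sketched as follows. This is a minimal Python illustration rather than the authors' implementation; the function names are our own, and the "higher fitness is preferred" convention simply follows the wording above.

```python
def total_violation(g):
    """Overall constraint violation: sum of the positive parts of
    inequality constraints g_i(x) <= 0 (zero means feasible)."""
    return sum(max(0.0, gi) for gi in g)

def cdp_better(fit_a, cv_a, fit_b, cv_b):
    """CDP rule (original task): feasible beats infeasible; among feasible
    individuals higher fitness wins; among infeasible ones smaller violation wins."""
    feas_a, feas_b = cv_a == 0, cv_b == 0
    if feas_a != feas_b:
        return feas_a
    if feas_a:                  # both feasible -> compare fitness
        return fit_a > fit_b
    return cv_a < cv_b          # both infeasible -> smaller violation

def eps_feasible(cv, eps):
    """epsilon-constraint rule (auxiliary tasks): treat a solution as
    feasible while its violation stays within the threshold eps."""
    return cv <= eps
```

Relaxing feasibility via `eps_feasible` is what lets the auxiliary populations retain slightly infeasible but promising individuals.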

3.2. Optimal Contribution Objective Assignment Strategy

Since decision variables have different impacts on the convergence and diversity of solutions, and their influence on different objectives also varies, this section proposes an optimal contribution objective assignment strategy for decision variables. The purpose of this strategy is to assign each convergence-related decision variable to the objective function to which it contributes the most. The assignment results are then used to guide the algorithm in conducting an efficient search within a large-scale decision space. The proposed strategy consists of two phases. The first phase divides the decision variables into convergence-related and diversity-related variables. The second phase assigns each convergence-related variable to the objective function to which it contributes the most.
Algorithm 2 outlines the detailed steps of the proposed optimal contribution objective assignment strategy. In the first phase, the method for classifying decision variables follows the approach used in LMEA [35] (Lines 1–7). Taking the i-th decision variable as an example, one individual (denoted as X) is first randomly selected from the population POP_o (Line 2). Then, the i-th variable x_i of X is perturbed: six new individuals are generated by applying equidistant sampling to x_i within the interval [lbound_i, ubound_i], using a step size of 0.2 × (ubound_i − lbound_i) (Line 3). Here, lbound_i and ubound_i denote the lower and upper bounds of x_i in the decision space, respectively. A fitted line L is then obtained in the objective space by performing linear regression on the six perturbed individuals (Line 4). Finally, the angle between the fitted line L and the normal vector h of the hyperplane f_1 + f_2 + ⋯ + f_M = 1 is calculated (Line 5). In other words, each decision variable obtains an angle value, which serves as its feature value for the subsequent clustering process. Then, the k-means method is applied to group the decision variables into two categories: those with smaller angle values are classified as convergence-related decision variables (stored in CV), while those with larger angle values are considered diversity-related decision variables (stored in DV) (Line 7).
In the second phase, the convergence-related decision variables are assigned to the objective function to which they contribute the most. Specifically, according to Equation (3), the contribution of a decision variable x_i to an objective function f_k is quantified by estimating the gradient ∂f_k/∂x_i (Line 10). The larger the absolute value of the gradient, the more a small change in x_i causes a significant change in f_k, indicating a higher contribution of x_i to f_k. In Equation (3), x_i^(j) represents the value of x_i after the j-th perturbation, X^(j) is the individual obtained after applying the j-th perturbation to the i-th decision variable, and X denotes the original (pre-perturbation) individual. Finally, the objective function to which x_i contributes the most is selected as the optimal contribution objective of x_i, denoted as CO_i (Line 12).
$$\mathrm{contribution}_{ik} = \operatorname{mean}\left(\frac{\partial f_k}{\partial x_i}\right) = \frac{1}{6}\sum_{j=1}^{6}\frac{f_k(X^{(j)}) - f_k(X)}{x_i^{(j)} - x_i} \qquad (3)$$
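A finite-difference reading of Equation (3) can be sketched as follows. This is an illustrative Python helper (not the authors' code) that assumes unit variable bounds; skipping a perturbation that coincides with the original value, to avoid a zero division, is our own simplification.

```python
import numpy as np

def contribution(f_k, X, i, lo=0.0, hi=1.0, n_per=6):
    """Finite-difference estimate of Eq. (3): the mean slope of objective
    f_k with respect to variable x_i over n_per equidistant perturbations
    in [lo, hi]. The objective with the largest |contribution| would then
    be chosen as the optimal contribution objective CO_i."""
    slopes = []
    for s in np.linspace(lo, hi, n_per):
        if np.isclose(s, X[i]):        # skip a degenerate zero division
            continue
        Xj = X.copy()
        Xj[i] = s                      # j-th perturbed individual X^(j)
        slopes.append((f_k(Xj) - f_k(X)) / (Xj[i] - X[i]))
    return float(np.mean(slopes))
```

For a linear objective the estimate recovers the exact partial derivative, which is the intended behavior of the sampling scheme.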
Algorithm 2: Optimal contribution objective assignment strategy for decision variables
     Input: 
D (the number of decision variables), M (the number of objectives),
POP_o (the original population), N (population size)
     Output: 
CV (the convergence-related decision variables),
DV (the diversity-related decision variables),
CO (the contribution objective of convergence-related decision variables)
1: for i = 1 to D do
2:     X ← Select one solution randomly from POP_o;
3:     Perturb the i-th variable x_i of X by performing equidistant sampling within the interval [lbound_i, ubound_i] using a step size of 0.2 × (ubound_i − lbound_i), resulting in six perturbed individuals. Here, lbound_i and ubound_i represent the lower and upper bounds of x_i in the decision space, respectively;
4:     Obtain the fitted line L of the six perturbed solutions in the objective space;
5:     Angle_i ← the angle between line L and the normal vector of the hyperplane f_1 + ⋯ + f_M = 1;
6: end for
7: [CV, DV] ← Use k-means to cluster the decision variables into two sets based on Angle;
8: for each decision variable x_i in CV do
9:     for k = 1 to M do
10:        contribution_ik ← Calculate the contribution of x_i to f_k by Equation (3);
11:    end for
12:    CO_i ← the objective function to which x_i contributes the most;
13: end for
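The classification phase of Algorithm 2 (Lines 1–7) can be sketched as follows. This is an illustrative Python version, not the authors' MATLAB code: the line fit uses the principal direction of the perturbed points, and a hand-rolled 1-D k-means replaces a library call. The toy bi-objective function in the usage note is our own construction.

```python
import numpy as np

def line_angle_to_normal(points):
    """Angle (degrees) between the principal direction of the points
    (the fitted line in objective space) and the normal h = (1, ..., 1)
    of the hyperplane f_1 + ... + f_M = 1."""
    P = points - points.mean(axis=0)
    d = np.linalg.svd(P, full_matrices=False)[2][0]   # first principal direction
    h = np.ones(points.shape[1])
    cosang = abs(d @ h) / (np.linalg.norm(d) * np.linalg.norm(h))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

def two_means_split(vals, iters=50):
    """1-D k-means with k = 2; returns a boolean mask of the low-mean cluster."""
    vals = np.asarray(vals, float)
    c = np.array([vals.min(), vals.max()])
    lab = np.zeros(len(vals), dtype=int)
    for _ in range(iters):
        lab = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = vals[lab == k].mean()
    return lab == c.argmin()

def classify_variables(F, pop, bounds, n_per=6, rng=None):
    """Split variables into convergence-related (CV, small angle) and
    diversity-related (DV, large angle), as in Algorithm 2, Lines 1-7."""
    rng = np.random.default_rng() if rng is None else rng
    D = pop.shape[1]
    angles = np.empty(D)
    for i in range(D):
        X = pop[rng.integers(len(pop))]
        lo, hi = bounds[i]
        samples = lo + np.linspace(0.0, 1.0, n_per) * (hi - lo)  # step 0.2*(hi-lo)
        pts = np.array([F(np.where(np.arange(D) == i, s, X)) for s in samples])
        angles[i] = line_angle_to_normal(pts)
    cv_mask = two_means_split(angles)   # small angles -> convergence-related
    return np.where(cv_mask)[0], np.where(~cv_mask)[0], angles
```

For the toy problem F(x) = (x_1 + x_2, 1 − x_1 + x_2), perturbing x_1 moves solutions along the front (angle near 90°, diversity-related), while perturbing x_2 moves them along h (angle near 0°, convergence-related).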

3.3. Contribution-Guided Initialization Strategy

For MOEAs, the quality of the initial population plays a crucial role in determining whether the algorithm can quickly and effectively find the optimal solution set. In particular, for large-scale multi-objective optimization, a high-quality initial population can significantly enhance the search efficiency in a large-scale decision space. Most existing algorithms generate the initial population randomly. However, such randomly generated populations often require a large number of iterations to approach the ideal Pareto front in large-scale search spaces. Moreover, in some constrained optimization problems, the feasible regions may be separated by infeasible regions. In such cases, some individuals in the random population may struggle to cross these infeasible regions even after many generations, potentially causing the algorithm to become trapped in local optima. Therefore, this paper adopts a contribution-guided initialization strategy for the auxiliary tasks to generate the initial population. As illustrated in Figure 2, the core idea of this strategy is to treat convergence-related and diversity-related decision variables differently, in order to help the algorithm obtain a high-quality initial population in large-scale search spaces. Specifically, for convergence-related decision variables, the optimal value of each variable is determined using a curve-fitting method based on its optimal contribution objective. For diversity-related decision variables, a uniform sampling method is used to promote diversity in the initial population.
Algorithm 3 outlines the detailed procedure of the proposed initialization strategy. For each convergence-related decision variable x_i, its optimal contribution objective f_k is retrieved from CO (Line 3). A parabolic curve, f_k(x_i) = a·x_i² + b·x_i + c, is then fitted to describe the relationship between x_i and f_k (Line 4), based on the perturbations of x_i obtained in Step 3 of Algorithm 2. The extremum of the fitted parabola, x_i* = −b/(2a), is subsequently selected as the ideal initialization value for x_i (Lines 5–6). If the diversity-related variable set DV contains only one variable x_j, then N evenly spaced points are generated within the value range of x_j and used as the j-th dimension of individuals in POP (Line 9). If DV contains multiple diversity-related variables, the two variables with the greatest impact on population diversity are selected based on the angles between their fitted lines (obtained after perturbing these variables) and h (i.e., the normal vector of the hyperplane f_1 + f_2 + ⋯ + f_M = 1). According to the description in Section 3.2, the larger the angle between the fitted line and h, the greater the impact of that decision variable on the diversity of the population. The two selected variables are denoted as x_s and x_t (Line 11). For x_s and x_t, N evenly spaced points are sampled from their respective value ranges to generate N value combinations. These combinations are then used to replace the values of the s-th and t-th decision variables in the individuals of POP (Line 12).
Algorithm 3: Contribution-guided initialization strategy
      Input: 
M (the number of objectives), N (population size),
CV (the convergence-related decision variables),
DV (the diversity-related decision variables),
CO (the contribution objective of convergence-related decision variables)
      Output: 
POP (the initial population for auxiliary tasks)
1: Generate N individuals randomly in the decision space as POP;
% for convergence-related variables
2: for each decision variable x_i in CV do
3:     f_k ← Obtain the optimal contribution objective function according to CO;
4:     Based on the perturbations of x_i (as shown in Algorithm 2), obtain the parabola fitted between x_i and f_k, that is, f_k(x_i) = a·x_i² + b·x_i + c;
5:     x_i* ← −b/(2a);
6:     Use x_i* as the value of x_i for each individual in POP;
7: end for
% for diversity-related variables
8: if |DV| == 1 then
9:     Denote the variable in DV as x_j and obtain N uniformly spaced points from the value range of x_j as the values of the j-th variable for the individuals in POP;
10: else
11:    x_s, x_t ← the two members of DV that have the greatest impact on population diversity;
12:    For x_s and x_t, obtain N uniformly spaced points from their value ranges, then generate N value combinations. Replace the values of the s-th and t-th variables in POP with these value combinations;
13: end if
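Lines 4–5 of Algorithm 3 (parabola fit and vertex) can be sketched as follows. This is an illustrative helper, not the authors' implementation: the endpoint fallback for a (near-)linear fit is our own assumption, and clipping the vertex to the variable bounds is a defensive choice for a convex fit.

```python
import numpy as np

def parabola_optimum(xs, fs, lo, hi):
    """Fit f_k(x_i) = a*x_i^2 + b*x_i + c through the perturbation samples
    and return the vertex x_i* = -b/(2a), clipped to the variable's bounds
    (Algorithm 3, Lines 4-5). Assumes a convex (a > 0) fit; a concave fit
    would also warrant an endpoint check."""
    a, b, _c = np.polyfit(xs, fs, 2)
    if abs(a) < 1e-9:                # (near-)linear relation: take the better endpoint
        return lo if b > 0 else hi
    return float(np.clip(-b / (2.0 * a), lo, hi))
```

Applying this per convergence-related variable, and uniform sampling for the diversity-related ones, yields the auxiliary tasks' initial population.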

3.4. Multi-Population-Based Knowledge Transfer Strategy

This section presents a knowledge transfer strategy based on multi-population cooperation. The goal is to enhance the algorithm’s ability to solve the original LSCMOPs by transferring useful knowledge from multiple auxiliary tasks to the original task through cross-task information fusion.
The detailed procedure of this strategy is outlined in Algorithm 4. First, the fitness values of individuals in the original population (POP_o) and the auxiliary populations (POP_1, POP_2, …, POP_m) are obtained (Line 1). The fitness evaluation method used in this section is adopted from IMTCMO_BS [72], which quantifies the quality of individuals by integrating objective values, density, and constraint violations. Subsequently, based on the fitness values, the top α × N individuals are selected from the original population and from each auxiliary population, and stored in Q and C_k (k = 1, 2, …, m), respectively (Lines 2–5). Then, for each individual in Q, the following operation is performed on each of its decision variables. Taking the j-th individual in Q as an example, if x_i is a convergence-related decision variable, its optimal contribution objective function is identified (denoted as f_k, i.e., CO_i = f_k), and x_i of the j-th individual in Q is replaced with the corresponding x_i from the j-th individual in C_k (Lines 6–12). In this way, Q incorporates high-quality information from both the original and auxiliary populations. It is then used as SubPOP in the subsequent environmental selection of the original population, facilitating effective knowledge transfer from the auxiliary tasks to the original task.
Algorithm 4: Procedure of the multi-population-based knowledge transfer strategy
      Input: 
POP_o (the original population), POP_1, …, POP_m (auxiliary populations),
D (number of decision variables), M (number of objectives), N (population size),
CO (optimal contribution objective for each convergence-related decision variable),
α (knowledge transfer rate)
      Output: 
SubPOP (the knowledge-transferred population)
1: Obtain fitness values of individuals in POP_o and the auxiliary populations POP_1, …, POP_m;
2: Q ← Select the top α × N individuals with the best fitness values from POP_o;
3: for k = 1 to M do
4:     C_k ← Select the top α × N individuals with the best fitness values from POP_k;
5: end for
6: for j = 1 to |Q| do
7:     for i = 1 to D do
8:         if x_i is a convergence-related decision variable then
9:             Replace x_i of the j-th individual in Q with the corresponding x_i from the j-th individual in C_{CO_i};
10:        end if
11:    end for
12: end for
13: SubPOP ← Q;
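The variable-replacement step of Algorithm 4 (Lines 6–12) can be sketched as follows. This is an illustrative version under our own data-layout assumptions: Q and the elite sets C_k are equal-length arrays, CO is a mapping from variable index to objective index, and cv_set holds the convergence-related variable indices.

```python
import numpy as np

def build_subpop(Q, C, co, cv_set):
    """For the j-th elite in Q, replace each convergence-related variable
    x_i with the same variable taken from the j-th elite of the auxiliary
    population whose task optimizes CO_i (Algorithm 4, Lines 6-12)."""
    sub = Q.copy()                       # leave the elite set Q itself intact
    for j in range(len(sub)):
        for i in range(sub.shape[1]):
            if i in cv_set:
                sub[j, i] = C[co[i]][j, i]
    return sub
```

The returned array plays the role of SubPOP in the environmental selection of the original task.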

4. Experimental Results and Analysis

This section begins with a detailed description of the experimental setup, including the compared algorithms, benchmark test suites, parameter settings, and performance metrics. Subsequently, a thorough analysis is conducted on the key parameters of the proposed algorithm and the comparison results with other algorithms. Finally, the effectiveness of the key strategies employed in the proposed algorithm is investigated.

4.1. Experimental Setup

(1)
Comparative Algorithms
In this paper, comparative experiments are conducted between the proposed MTO-CDTD algorithm and two types of algorithms: (1) CMOEAs, including C3M [77] and BiCo [78]; (2) LSCMOEAs, including IMTCMO_BS [72], MOCGDE [70], and POCEA [65]. Two CMOEAs are selected as comparison algorithms due to their excellent performance in solving CMOPs. In addition, a set of LSCMOEAs specifically designed for LSCMOPs is included for comparison, where these algorithms employ various mechanisms, including MTO, objective decomposition, and a pairwise competition mechanism, to improve their performance. By conducting comparative experiments with these two groups of algorithms, the effectiveness of the proposed method can be systematically verified. Specifically, C3M handles constrained optimization problems in a phased manner. In the initial stage, it ignores feasibility to explore the objective space; then, it gradually evaluates individual constraints according to constraint priorities; finally, it comprehensively considers feasibility to obtain high-quality solutions. BiCo is a constrained multi-objective optimization algorithm based on bidirectional co-evolution, which aims to improve search performance by enhancing the diversity of the population between feasible and infeasible regions. IMTCMO_BS is designed based on MTO, which accelerates convergence and maintains diversity through a bidirectional sampling strategy. MOCGDE integrates the fast convergence capability of the conjugate gradient method with the diversity optimization mechanism of differential evolution, and combines the objective decomposition strategy and line search technology to efficiently approximate the Pareto frontier while ensuring the global distribution of the solution set. 
POCEA significantly enhances the solving capability for large-scale constrained multi-objective optimization problems by combining reference-vector-guided subpopulation division, dynamic constraint tolerance adjustment, and an efficient pairwise competition mechanism. To ensure the fairness of the comparison, all algorithms are integrated into PlatEMO [79] and implemented in MATLAB (R2022b, The MathWorks Inc., Natick, MA, USA) on a laptop equipped with a 2.21 GHz Intel(R) Core(TM) i7-8750H CPU (Intel Corporation, Santa Clara, CA, USA), running Windows 11 64-bit (Microsoft Corporation, Redmond, WA, USA), with 16 GB RAM (Kingston Technology Company, Fountain Valley, CA, USA).
(2)
Benchmark test suites
This paper conducts comparative experiments on three standard benchmark problem sets: LIRCMOP, CF, and ZXH_CF. The LIRCMOP test suite comprises 14 benchmark problems, characterized by extensive infeasible regions in the objective space. The CF test suite consists of 10 problems, featuring complex nonlinear constraints and discontinuous Pareto fronts, such as isolated points and segmented feasible regions. The ZXH_CF test suite consists of 16 benchmark problems. Its key characteristic is the introduction of complex infeasible regions in both the decision and objective spaces through the nonlinear coupling of position variables (which define the shape of the Pareto front) and distance variables (which control the proximity of solutions to the front), combined with a dual-constraint mechanism. These features are designed to thoroughly evaluate an algorithm’s capability in addressing LSCMOPs with multiple challenges, including variable dependencies, narrow feasible regions, and fragmented Pareto fronts. The three adopted test suites possess distinct characteristics, enabling a comprehensive assessment of the algorithm’s overall performance.
(3)
Parameter settings
For all algorithms, the population size is set to N = 100. The number of decision variables in the test problems is configured at three different scales, namely D = 100, 500, and 1000. The maximum number of function evaluations (maxFE) for all algorithms is set to 200 × D. Each algorithm is independently executed 30 times on all test problems to obtain statistically meaningful results. To ensure the reliability of the results, the remaining parameter settings for the comparative algorithms follow the configurations reported in the relevant literature. Detailed parameter settings are provided in Table 1.
(4)
Performance metrics
This study adopts Inverted Generational Distance (IGD) [80] and Hypervolume (HV) [81] as performance metrics to evaluate the performance of algorithms. IGD quantifies the proximity between the obtained Pareto front and the true Pareto front, where a smaller IGD value indicates better performance in terms of both convergence and diversity. HV evaluates the optimization capability of an algorithm by calculating the volume of the region in the objective space enclosed by the Pareto-optimal solution set obtained by the algorithm and a predefined reference point. A larger HV value indicates better algorithm performance. The reference point is defined as the worst point that is dominated by all Pareto-optimal solutions generated by the algorithm. For the minimization problems considered in this paper, the reference point is set to [1.1, 1.1,…, 1.1] in the normalized objective space. The Wilcoxon rank-sum test [82] at a significance level of 0.05 is employed to compare the results of the algorithms. The symbols “+”, “−”, and “=” denote that the comparison algorithm performs significantly better than, significantly worse than, or equivalent to the proposed MTO-CDTD, respectively.
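The two metrics can be sketched as follows. `igd` is the standard formulation; `hv_2d` is a minimal two-objective sweep for intuition only (the experiments use PlatEMO's general M-objective HV computation).

```python
import numpy as np

def igd(ref_front, obtained):
    """IGD: mean Euclidean distance from each reference (true-front) point
    to its nearest obtained point; smaller is better."""
    d = np.linalg.norm(ref_front[:, None, :] - obtained[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def hv_2d(points, ref=(1.1, 1.1)):
    """HV for a 2-objective minimization front: sweep the points sorted by
    f1 and accumulate the dominated rectangles up to the reference point."""
    vol, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points, key=lambda p: p[0]):
        if f2 < prev_f2:                 # skip points dominated in the sweep
            vol += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return vol
```

With the reference point at (1.1, …, 1.1) in the normalized objective space, larger HV means the obtained front covers more of the dominated region.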

4.2. Parameter Analysis in MTO-CDTD

In the proposed MTO-CDTD, the parameter α represents the knowledge transfer rate. α is used in the multi-population-based knowledge transfer strategy to control the extent of knowledge sharing between the original population and the auxiliary populations, thereby affecting the performance of the algorithm. In this section, some problems are selected from the three test suites to analyze the impact of the knowledge transfer rate α on the algorithm’s performance.
Figure 3 presents the average IGD values obtained by MTO-CDTD over 30 independent runs on different test functions with the number of decision variables D = 500, under various values of α . Specifically, Figure 3a–c correspond to LIRCMOP1, LIRCMOP3, and LIRCMOP5; Figure 3d–f correspond to CF1, CF3, and CF5; and Figure 3g–i correspond to ZXH_CF1, ZXH_CF3, and ZXH_CF5. When the value of α is too small, the original population may struggle to escape from local optima due to insufficient knowledge sharing. Conversely, if the value of α is too large, excessive reliance on the knowledge from auxiliary populations may suppress the global search capability of the original population, potentially causing the algorithm to miss some high-quality solutions. Moreover, as described in Section 3.4, a larger α results in more function evaluations being consumed during the knowledge transfer process. As shown in Figure 3, when α = 0.2 , MTO-CDTD achieves the best performance across all selected test problems. Therefore, α is set to 0.2 in the subsequent experiments of this paper.

4.3. Comparison Results and Analysis

(1)
LIRCMOP
The average IGD values obtained by MTO-CDTD and five other comparative algorithms over 30 independent runs on the LIRCMOP test suite are presented in Table 2. As shown in Table 2, MTO-CDTD demonstrates superior performance on 25 out of the 42 LIRCMOP test cases, where the number of decision variables is set to 100, 500, and 1000. However, for LIRCMOP1, LIRCMOP3, LIRCMOP4, LIRCMOP13, and LIRCMOP14, the IGD performance of MTO-CDTD is inferior to that of some comparative algorithms, such as IMTCMO_BS and MOCGDE. LIRCMOP1, LIRCMOP3, and LIRCMOP4 have relatively small feasible regions in the objective space and involve complex constraints with strong coupling among multiple decision variables. In the proposed MTO-CDTD, the contribution-guided initialization strategy may fail to accurately capture the interactions among variables for these test problems, resulting in an initial population that potentially deviates from the feasible region. IMTCMO_BS adopts a bidirectional sampling strategy that integrates diversity-driven and convergence-driven sampling. For LSCMOPs such as LIRCMOP1, which feature extensive infeasible regions, diversity sampling prevents the population from becoming trapped in locally feasible regions, while convergence sampling guides the search toward the ideal Pareto front across scattered feasible areas. LIRCMOP13 and LIRCMOP14 feature multiple narrow feasible regions. The superior performance of MOCGDE on these two problems can be attributed to its effective constraint-handling mechanism. Specifically, when constraint violations are detected, MOCGDE immediately switches the optimization objective from the multiple objectives to the degree of constraint violation. It then generates a correction direction for the infeasible solution based on the gradient information of the constraint functions. This gradient-driven directional search enables precise adjustments to infeasible solutions, allowing them to quickly escape from infeasible regions.
In addition, Table 3 presents the HV comparison results between the MTO-CDTD algorithm and other algorithms on the LIRCMOP test suite. As shown in Table 3, the MTO-CDTD algorithm achieves superior HV values on the majority of the test problems.
Figure 4 takes LIRCMOP5 with D = 500 as an example to illustrate the Pareto fronts obtained by all algorithms. As shown in Figure 4, although the Pareto-optimal solution sets obtained by C3M, MOCGDE, and IMTCMO_BS exhibit relatively strong diversity, they fail to reach the ideal Pareto front. Both BiCo and POCEA fall into local feasible regions. In contrast, only MTO-CDTD successfully finds the constrained Pareto front (CPF), though the diversity of its solutions still leaves room for improvement. Overall, compared with the other algorithms, MTO-CDTD achieves the best Pareto-optimal solution set on LIRCMOP5.
(2)
CF
Table 4 presents the IGD comparison results between MTO-CDTD and the other algorithms over 30 independent runs on the CF test suite. As shown in Table 4, MTO-CDTD exhibits superior performance on 21 out of the 30 CF test problems with the number of decision variables set to 100, 500, and 1000, outperforming the other compared algorithms. However, for CF2, the performance of MTO-CDTD is inferior to that of MOCGDE, which can be attributed to three specific challenges posed by CF2: nonlinear objective coupling, strongly correlated constraints, and a discontinuous Pareto front. MOCGDE employs the conjugate gradient method to rapidly approach promising solutions within the feasible region. Additionally, by leveraging the differential evolution operator and a decomposition strategy, MOCGDE effectively balances exploration and exploitation, thereby demonstrating outstanding performance on CF2. In addition, for the three CF9 test cases with different numbers of decision variables, the performance of MTO-CDTD is slightly inferior to that of C3M. This is attributed to the constraint-separation handling strategy adopted by C3M, which effectively alleviates the optimization difficulties of problems like CF9 that feature narrow feasible regions. MTO-CDTD demonstrates excellent performance on other test problems, mainly for two reasons: first, it employs a contribution-driven multitasking framework that guides the original population to evolve toward high-potential regions through a knowledge transfer strategy; second, it utilizes a contribution-guided initialization strategy to generate well-distributed initial populations in large-scale decision spaces, thereby enhancing the algorithm’s ability to quickly find high-quality solutions. Additionally, Table 5 presents the HV comparison results of MTO-CDTD and other comparison algorithms on the CF test suite. 
According to the statistical results, based on the Wilcoxon rank-sum test with a significance level of 0.05, MTO-CDTD achieves significantly better HV values than the compared algorithms on the majority of CF test problems. Considering both the IGD and HV metrics, it is evident that MTO-CDTD demonstrates outstanding performance on the CF test suite.
Figure 5 takes the CF1 problem with D = 500 as an example to illustrate the Pareto fronts obtained by all algorithms. As shown in the figure, although the final Pareto-optimal solution sets obtained by BiCo, C3M, and POCEA exhibit relatively good diversity, none of them successfully found the ideal Pareto front. MOCGDE and IMTCMO_BS achieved similar results: their Pareto-optimal solutions reached the vicinity of the CPF but only found a small portion of it. In contrast, MTO-CDTD successfully found the ideal Pareto front, outperforming the other algorithms.
(3)
ZXH_CF
Table 6 presents the average IGD values of MTO-CDTD and the other comparison algorithms on the ZXH_CF test suite. As shown in Table 6, MTO-CDTD outperforms the other algorithms on most of the ZXH_CF test problems. However, for ZXH_CF5, ZXH_CF12, and ZXH_CF15, the performance of MTO-CDTD is inferior to that of MOCGDE in terms of IGD. This is because the multiple nonlinear constraints in these problems (such as the angle constraints in ZXH_CF5, the piecewise constraints in ZXH_CF12, and the mixed constraints in ZXH_CF15) result in a highly fragmented feasible region. Moreover, as the dimension of the decision variables increases, it becomes difficult for an algorithm to explore effectively among the discrete feasible regions. MOCGDE, through its elite archive and diversity maintenance mechanism, retains elite solutions that cover the discrete feasible regions, ensuring that, in such scenarios where the objective functions and constraints are deeply coupled, the population achieves a dynamic balance between convergence and diversity. C3M, by contrast, handles constraints in three stages: the first stage ignores all constraints and focuses on exploring the objective space to approach the unconstrained Pareto front; the second stage sequentially optimizes the feasibility of individual constraints according to their priorities, dynamically identifying and skipping non-critical constraints so as to efficiently approach the feasible boundary of each constraint; in the final stage, all constraints are considered jointly and the boundary solution set obtained in the previous stages is refined, finally yielding solutions that satisfy all constraints with both high convergence and good diversity.
Moreover, Table 7 presents the HV comparison results between MTO-CDTD and the other competing algorithms on the ZXH_CF test suite. As shown in Table 7, MTO-CDTD achieves higher HV values on most problems in the ZXH_CF test suite compared to the other algorithms, further demonstrating the effectiveness of MTO-CDTD.
Figure 6 shows the Pareto fronts obtained by all algorithms on ZXH_CF6 with D = 500. As can be seen, the Pareto optimal sets produced by C3M and MOCGDE exhibit neither good convergence nor strong diversity. Although BiCo, POCEA, IMTCMO_BS, and MTO-CDTD achieve relatively good diversity, only MTO-CDTD strikes a better balance between convergence and diversity on ZXH_CF6.

4.4. Comparison of Runtime

For large-scale constrained multi-objective evolutionary algorithms, running time is a crucial indicator for measuring their computational efficiency. Therefore, this section evaluates the operational efficiency of algorithms by comparing the running time of different algorithms under the same experimental environment.
Table 8 presents the rankings of the average running times of MTO-CDTD and the compared algorithms over 30 independent runs on all test functions when the dimension of the decision variables is D = 1000. A smaller ranking value indicates less time consumption. It can be seen from the table that MTO-CDTD achieves the best running-time ranking on almost all test problems. This is mainly because MTO-CDTD adopts a multi-task framework that integrates the original task with multiple simplified auxiliary tasks, and therefore allocates a portion of the function evaluations to these simplified tasks. When the maximum number of function evaluations is taken as the termination criterion, MTO-CDTD consequently requires less time than the other algorithms. In contrast, although IMTCMO_BS also employs a multi-task mechanism, all of its tasks perform searches in the entire high-dimensional decision space, resulting in a longer running time than MTO-CDTD.

4.5. Ablation Study

As discussed in Section 3, this paper proposes a multi-task optimization framework to address LSCMOPs. The proposed algorithm incorporates three key strategies: the optimal contribution objective assignment strategy, the contribution-guided population initialization strategy, and the multi-population-based knowledge transfer strategy. The optimal contribution objective assignment strategy assigns each decision variable to the objective function to which it contributes the most. These assignment results are then used by the other two strategies to enhance the algorithm's search performance. The following subsections analyze the effectiveness of the contribution-guided initialization and multi-population-based knowledge transfer strategies in detail.

4.5.1. Investigation of the Contribution-Guided Initialization Strategy

The contribution-guided initialization strategy is proposed to generate high-quality initial populations for the auxiliary tasks, thereby enhancing the algorithm's search capability in large-scale decision spaces. To assess the effectiveness of this strategy, a variant of MTO-CDTD, denoted MTO-CDTD (WOC), is constructed by removing the initialization strategy.
Figure 7 illustrates the average IGD values obtained from 30 independent runs of MTO-CDTD and MTO-CDTD (WOC) on selected test problems (with D = 1000) from the three benchmark test suites. As Figure 7 shows, on the vast majority of the selected test problems, MTO-CDTD significantly outperforms MTO-CDTD (WOC), which does not adopt the initialization strategy, in terms of the IGD indicator. In addition, to demonstrate the effectiveness of the proposed initialization strategy more intuitively, this section compares the quality of the initial populations generated by the two variants. Taking LIRCMOP1, LIRCMOP3, LIRCMOP5, CF1, CF3, CF5, ZXH_CF1, ZXH_CF3, and ZXH_CF5 as examples, Table 9 reports the IGD values of the initial populations generated by MTO-CDTD and MTO-CDTD (WOC) over 30 independent runs. As Table 9 shows, in terms of the IGD indicator, the initial populations generated by MTO-CDTD are of significantly higher quality than those of MTO-CDTD (WOC). This indicates that the proposed contribution-guided initialization strategy effectively improves the solving ability of MTO-CDTD by generating high-quality initial populations in the large-scale decision space.
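As a hedged sketch of the idea behind the contribution-guided initialization (per Section 5.1, curve fitting for convergence-related variables and uniform sampling for the rest), the code below sweeps each convergence-related variable with the other variables fixed at the midpoint, fits a quadratic, and seeds the population near the fitted minimizer. The function name, the quadratic model, and the jitter width are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def contribution_guided_init(f, conv_idx, lb, ub, pop_size, n_samples=7, seed=0):
    """Illustrative initializer: convergence-related variables (conv_idx)
    are set near an estimated optimum obtained by quadratic curve fitting;
    all other (diversity-related) variables stay uniformly sampled."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pop = rng.uniform(lb, ub, size=(pop_size, len(lb)))   # uniform baseline
    base = (lb + ub) / 2.0                                # fixed context point
    for j in conv_idx:
        xs = np.linspace(lb[j], ub[j], n_samples)
        ys = [f(np.where(np.arange(len(lb)) == j, v, base)) for v in xs]
        a, b, _ = np.polyfit(xs, ys, 2)                   # quadratic fit
        x_star = -b / (2.0 * a) if a > 0 else xs[int(np.argmin(ys))]
        jitter = 0.01 * (ub[j] - lb[j]) * rng.standard_normal(pop_size)
        pop[:, j] = np.clip(x_star + jitter, lb[j], ub[j])
    return pop
```

For a separable convex objective, the fitted minimizer coincides with the true per-variable optimum, so the seeded columns concentrate around it while the remaining columns preserve diversity.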

4.5.2. Investigation of the Multi-Population-Based Knowledge Transfer Strategy

To verify the effectiveness of the proposed knowledge transfer strategy based on multi-population collaboration, this section compares MTO-CDTD with a variant, MTO-CDTD (RT), which adopts a random transfer strategy: individuals are randomly selected from the auxiliary populations and transferred directly to the original population. The experiments were conducted under the same conditions, with 30 independent runs used for statistical analysis.
Figure 8 shows the average IGD comparison between the two algorithms on different test problems when D = 1000. As Figure 8 shows, across the test problems, MTO-CDTD with the multi-population-based knowledge transfer strategy outperforms MTO-CDTD (RT) with the random transfer strategy. In addition, Table 10 reports, for a single run, the average rank of the transferred individuals within the original population, together with the proportions of transferred individuals falling in the top 50% and the bottom 25%. The data show that the individuals transferred by the multi-population collaboration strategy are of significantly higher quality than those transferred randomly. This is mainly attributed to the two core mechanisms of the proposed strategy: first, it selects the top 20% of individuals from each auxiliary population as knowledge sources, ensuring that the transferred variables originate from high-quality solutions; second, the transfer operation is directionally guided by the contribution of the decision variables to the objective functions. This targeted transfer directly strengthens the key variables affecting the optimization objectives, effectively guiding the search toward better solution regions. In contrast, the randomness of the RT strategy reduces the effectiveness of knowledge transfer and may introduce irrelevant or inefficient information into the original task, which degrades solution quality and slows convergence.
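The two mechanisms above can be sketched in a few lines. In this hedged illustration, the top 20% of each auxiliary population (ranked by its own fitness, lower is better) donate only their contribution-assigned variable block to copies of randomly chosen original-population individuals; auxiliary individuals are assumed to be encoded in the full decision space, and all names are illustrative rather than the paper's.

```python
import numpy as np

def transfer_knowledge(pop_o, aux_pops, aux_fitness, var_groups, elite_frac=0.2, seed=0):
    """Illustrative multi-population knowledge transfer: graft the
    contribution-assigned variable block of each auxiliary elite onto a
    copy of a randomly chosen original-population individual.
    var_groups[m] lists the decision-variable indices assigned to task m."""
    rng = np.random.default_rng(seed)
    migrants = []
    for m, (aux, fit) in enumerate(zip(aux_pops, aux_fitness)):
        n_elite = max(1, int(elite_frac * len(aux)))
        elite_rows = np.argsort(fit)[:n_elite]        # best individuals first
        for e in aux[elite_rows]:
            host = pop_o[rng.integers(len(pop_o))].copy()
            host[var_groups[m]] = e[var_groups[m]]    # targeted variable graft
            migrants.append(host)
    return np.array(migrants)
```

A random-transfer (RT) baseline would instead copy whole individuals chosen uniformly at random, without either the elite filter or the variable-group targeting.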

4.6. Validation of the Reliability of Optimal Contribution Objective Assignment Strategy

The optimal contribution objective assignment strategy serves as the core support of the MTO-CDTD multi-task framework; its primary goal is to accurately match decision variables to objective functions, laying the foundation for the targeted optimization of the auxiliary tasks. Through this strategy, each auxiliary task is restricted to optimizing the subset of high-contribution variables corresponding to its target objective function (e.g., auxiliary task $T_1$ focuses on optimizing the variables that contribute most to $f_1$), which avoids interference from irrelevant variables and enhances the pertinence of the local search. Meanwhile, it provides a clear variable-objective mapping for the contribution-guided initialization strategy and the multi-population-based knowledge transfer strategy, ensuring the effectiveness of multi-task collaboration.
To verify the accuracy of the optimal contribution objective assignment results, this study compares, for the CF1 test function, the evolution of the minimum values of the two objective functions in the main population and the auxiliary populations over the course of the iterations, as shown in Figure 9. It can be clearly observed that, for the first objective function $f_1$, the minimum $f_1$ value of the population corresponding to auxiliary task $T_1$ ($POP_1$) remained consistently lower than that of the main population ($POP_o$) throughout the evolutionary process, and this auxiliary population converged to the theoretical minimum of 0 for $f_1$ at an early evolutionary stage. Similarly, for the second objective function $f_2$, the population corresponding to auxiliary task $T_2$ ($POP_2$) exhibited the same pattern: its minimum $f_2$ value outperformed that of the main population over the entire iteration period, and it likewise converged to the theoretical minimum of 0 for $f_2$ at an early stage. These results demonstrate that each auxiliary population optimizes its corresponding objective function more efficiently, verifying the accuracy of the proposed objective assignment scheme.
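A variable-to-objective assignment of this kind can be sketched with a simple perturbation-based contribution measure: resample each variable a few times, accumulate the induced change in every objective, and assign the variable to the objective it changes most. This is a hedged illustration; the paper's exact contribution metric may differ, and the function name is ours.

```python
import numpy as np

def assign_variables(objs, x0, lb, ub, n_perturb=5, seed=0):
    """Illustrative contribution-based assignment: returns, for each
    decision variable, the index of the objective it influences most."""
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, float)
    base = np.array([f(x0) for f in objs])               # objectives at base point
    contrib = np.zeros((len(x0), len(objs)))
    for j in range(len(x0)):
        for _ in range(n_perturb):
            x = x0.copy()
            x[j] = rng.uniform(lb[j], ub[j])             # perturb variable j only
            contrib[j] += np.abs(np.array([f(x) for f in objs]) - base)
    return contrib.argmax(axis=1)                        # objective index per variable
```

On a fully separable pair of objectives, each variable is assigned to the objective it actually appears in, which is the behavior Figure 9 is designed to verify on CF1.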

4.7. Analysis of the Feasible Solution Ratio During Population Iteration

The feasible solution ratio is a core indicator for evaluating an algorithm's constraint-handling capability and search efficiency. Its dynamic changes during the iteration process intuitively reflect how quickly the population breaks through constraint barriers and explores and occupies the feasible region. In this section, the first four test problems from each of the three benchmark test suites (LIRCMOP, CF, and ZXH_CF) are selected. With the decision variable dimension set to D = 500, the evolution of the feasible solution ratio during the population iteration of MTO-CDTD is systematically analyzed, and the results are shown in Figure 10.
As can be clearly observed from the figure, for all 12 selected test problems, the feasible solution ratio of the MTO-CDTD population increases continuously and ultimately reaches 1.0. This indicates that all individuals in the population eventually satisfy the constraint conditions of the problems, i.e., the entire population has entered the feasible region. This result verifies the constraint-handling capability of MTO-CDTD on large-scale constrained problems.
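The indicator itself is straightforward to compute under the usual convention that an individual is feasible when its overall constraint violation (CV) is zero; a minimal sketch:

```python
def feasible_ratio(violations):
    """Fraction of individuals whose overall constraint violation is zero
    (CV <= 0 is treated as feasible, the usual convention)."""
    feasible = sum(1 for cv in violations if cv <= 0.0)
    return feasible / len(violations)

print(feasible_ratio([0.0, 0.3, 0.0, 0.0]))  # 0.75
```

Tracking this value once per generation produces curves of the kind shown in Figure 10.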

4.8. Application Study

Taking the berth and quay crane assignment problem (BACAP) as a real-world case, this section explores the capability of the proposed MTO-CDTD in solving practical problems.
Ports, as the core of modern maritime logistics, play a crucial role in driving economic development. For logistics ports, efficient operational mechanisms are vital to ensuring the rapid and smooth turnover of goods, which in turn accelerates logistics processes and promotes trade prosperity. In modern port operations, berth allocation and quay crane assignment are two critical and interconnected decision-making problems. Berth allocation dictates the docking positions and schedules of vessels, while quay crane assignment determines the efficiency of cargo handling operations. The two problems are interdependent: the choice of berths directly affects quay crane utilization, and the availability of quay cranes can restrict berth arrangements. Coordinating them efficiently can greatly improve the overall operational effectiveness and service quality of a port. The specific optimization objectives and constraints are as follows.
  • Parameters and Variables
$i$: index of vessels, $i \in V = \{1, 2, \ldots, n\}$.
$j$: index of berths, $j \in B = \{1, 2, \ldots, b\}$.
$q$: index of quay cranes, $q \in Q = \{1, 2, \ldots, c\}$.
$k$: index of priorities, $k \in O = \{1, 2, \ldots, n\}$.
$m$: index of service sequences, $m \in M = \{1, 2, \ldots, n\}$.
$AT_i$: the arrival time of vessel $i$.
$TF_i$: the expected departure time of vessel $i$.
$VQM_i$: the maximum number of quay cranes that can be allocated to vessel $i$.
$VE_i$: the number of containers to be loaded and unloaded for vessel $i$.
$BL_j$: the length of berth $j$.
$QEV$: the handling efficiency of the quay cranes.
$VP_i$: the preferred berth of vessel $i$.
$TD_i$: the distance between the actual docking berth of vessel $i$ and its preferred berth.
$TYM_i$: the maximum acceptable waiting time of vessel $i$.
$Q_{mj}$: the number of quay cranes allocated at berth $j$ during service sequence $m$.
$TS_i$: the start time of loading and unloading for vessel $i$.
$VO_i$: the service priority of vessel $i$.
$VB_i$: the berth allocated to vessel $i$.
$VQ_i$: the number of quay cranes allocated to vessel $i$.
$x_{ijk} = \begin{cases} 1 & \text{if vessel } i \text{ operates at berth } j \text{ in sequence } k \\ 0 & \text{otherwise} \end{cases}$
$y_{il} = \begin{cases} 1 & \text{if vessel } i \text{ begins operating before vessel } l \text{ completes its operation} \\ 0 & \text{otherwise} \end{cases}$
$z_{il} = \begin{cases} 1 & \text{if vessels } i \text{ and } l \text{ have the same service priority} \\ 0 & \text{otherwise} \end{cases}$
  • Objective functions
The BACAP in this paper employs a multi-objective problem model with three optimization objectives. The first objective function $f_1$ aims to minimize the total time that all vessels spend in port. As shown in Equation (4), $f_1$ consists of three components: operating time, waiting time, and departure delay time. The second objective function $f_2$ aims to minimize the number of quay crane movements, as shown in Equation (5). The third objective function $f_3$, represented by Equation (6), seeks to minimize the average additional truck transport distance between the actual berths of vessels and their preferred berths. The determination of the distance between the actual berth and the preferred berth of vessel $i$, denoted $TD_i$ in Equation (6), is illustrated in Figure 11: if vessel $i$ is assigned to berth 1 and its preferred berth is berth 3, then $TD_i$ is the distance between the center points of berth 1 and berth 3.
$$\min f_1 = \sum_{i \in V} \sum_{j \in B} \sum_{k \in O} \frac{VE_i}{QEV \cdot VQ_i}\, x_{ijk} + \sum_{i \in V} \sum_{j \in B} \sum_{k \in O} \left( TS_i - AT_i \right) x_{ijk} + \sum_{i \in V} \sum_{j \in B} \sum_{k \in O} \left( TS_i + \frac{VE_i}{QEV \cdot VQ_i} - TF_i \right) x_{ijk}$$
$$\min f_2 = \frac{1}{2} \sum_{m \in M} \sum_{j \in B} \left| Q_{(m+1)j} - Q_{mj} \right|$$
$$\min f_3 = \frac{1}{|V|} \sum_{i \in V} VE_i \cdot TD_i$$
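Given a candidate schedule, the three objectives can be evaluated directly from the per-vessel quantities; the hedged sketch below follows Equations (4)-(6) term by term. The array layout, the function name, and the $QEV$ value are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def bacap_objectives(AT, TF, TS, VE, VQ, TD, Qmat, QEV=30.0):
    """Illustrative evaluation of the three BACAP objectives for one
    schedule. Vessel arrays are aligned by index; Qmat[m, j] is the number
    of quay cranes at berth j during service sequence m."""
    AT, TF, TS = map(np.asarray, (AT, TF, TS))
    VE, VQ, TD = map(np.asarray, (VE, VQ, TD))
    handling = VE / (QEV * VQ)                  # operating time per vessel
    waiting = TS - AT                           # waiting time before service
    delay = TS + handling - TF                  # departure delay (as written in Eq. (4))
    f1 = float(np.sum(handling + waiting + delay))
    f2 = 0.5 * float(np.sum(np.abs(np.diff(np.asarray(Qmat), axis=0))))  # crane moves
    f3 = float(np.mean(VE * TD))                # avg extra truck transport load
    return f1, f2, f3
```

For a single vessel with 60 containers, 2 cranes at 30 containers per crane-hour, a 2-hour wait, and a 1-hour overrun of its expected departure, this yields $f_1 = 4$ hours in port.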
  • Constraints
Equations (7)–(14) delineate the constraints that the studied BACAP must adhere to. Specifically, Equation (7) states that vessel i is served only once at its docking berth in accordance with its designated priority. Equation (8) ensures that a berth can serve only one vessel at a time. Equation (9) stipulates that in the course of loading and unloading, the number of cranes allocated to a vessel must be less than its acceptable maximum. Equation (10) restricts the unloading time, meaning that each vessel should start its loading and unloading tasks after arriving at the port. Equation (11) ensures that the waiting time for each vessel does not exceed its maximum acceptable limit. Equation (12) ensures that on the same berth, vessels with lower priority must wait for those with higher priority to complete their operations before they can begin. Equation (13) guarantees that the number of allocated cranes does not exceed the total number of available cranes. Equation (14) ensures that each vessel has a unique service priority.
$$\sum_{j \in B} \sum_{k \in O} x_{ijk} = 1, \quad \forall i \in V$$
$$\sum_{i \in V} x_{ijk} \le 1, \quad \forall j \in B,\ \forall k \in O$$
$$VQ_i \le VQM_i, \quad \forall i \in V$$
$$TS_i \ge AT_i, \quad \forall i \in V$$
$$TS_i - AT_i \le TYM_i, \quad \forall i \in V$$
$$\sum_{i \in V} TS_i \cdot x_{ijk} > \sum_{l \in V} \left( TS_l + \frac{VE_l}{QEV \cdot VQ_l} \right) x_{lj(k-1)}, \quad \forall j \in B,\ \forall k \in O$$
$$VQ_i + \sum_{l \in V} VQ_l \cdot y_{il} \le c, \quad \forall i \in V$$
$$\sum_{i \in V} \sum_{l \in V,\, l \ne i} z_{il} = 0$$
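A feasibility check for part of this constraint set can be written directly from the per-vessel quantities. The hedged sketch below covers Equations (9)-(11) plus a simplified global crane-capacity check standing in for Equation (13); the sequencing and priority constraints (Equations (7), (8), (12), and (14)) need the full $x$, $y$, $z$ indicator variables and are omitted here for brevity.

```python
def bacap_feasible(AT, TS, VQ, VQM, TYM, total_cranes):
    """Illustrative partial feasibility check for a BACAP schedule."""
    for i in range(len(AT)):
        if VQ[i] > VQM[i]:                # Eq. (9): crane cap per vessel
            return False
        if TS[i] < AT[i]:                 # Eq. (10): service starts after arrival
            return False
        if TS[i] - AT[i] > TYM[i]:        # Eq. (11): waiting-time limit
            return False
    if sum(VQ) > total_cranes:            # simplified stand-in for Eq. (13)
        return False
    return True
```

In a constraint-handling evolutionary algorithm, such checks would typically be accumulated into a total constraint violation rather than a hard reject, so that promising infeasible individuals can still guide the search.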
In this problem, the number of vessels is set to 100, the dimension of the decision variables is 500, and the number of function evaluations is again set to 200 × D. It is worth noting that, since the true constrained Pareto front (CPF) is unknown, the hypervolume (HV) metric is adopted when comparing the results.
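When the true front is unknown, HV only requires a reference point. As a hedged illustration of the metric (a simple Monte Carlo stand-in, not the exact HV routine used in the experiments), the sketch below estimates the volume of objective space dominated by a front under minimization:

```python
import numpy as np

def hv_monte_carlo(front, ref, n=200_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimisation) with respect to reference point `ref`."""
    front = np.asarray(front, float)
    ref = np.asarray(ref, float)
    rng = np.random.default_rng(seed)
    lo = front.min(axis=0)                                   # bounding-box corner
    pts = rng.uniform(lo, ref, size=(n, len(ref)))
    # a sample counts if some front point weakly dominates it
    dominated = (front[None, :, :] <= pts[:, None, :]).all(axis=2).any(axis=1)
    return float(dominated.mean() * np.prod(ref - lo))
```

Exact HV algorithms exist for low objective counts, but the sampling estimate above is dimension-agnostic and adequate for relative comparisons between algorithms.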
Figure 12 presents the HV convergence curves and the final population distributions of the proposed MTO-CDTD and the comparative algorithms on the BACAP. As Figure 12a shows, as the population iterates, MTO-CDTD ultimately achieves a high HV value and reaches a non-zero HV earlier than the other algorithms, indicating that it acquires feasible solutions in the early stage of evolution. In addition, Figure 12b shows the final population distributions of the different algorithms. Compared with the other algorithms, the Pareto front obtained by MTO-CDTD performs excellently in both distribution and convergence. This indicates that the proposed MTO-CDTD algorithm is capable of solving practical problems.

5. Conclusions

In this section, we summarize the work presented in this paper, provide a theoretical analysis of the algorithm, and propose directions for future research.

5.1. Summary of Research Work

This study proposes a multi-task optimization algorithm based on contribution-driven task design (MTO-CDTD) to address LSCMOPs. The core innovation lies in constructing a multi-task framework that includes one original task (optimizing the original LSCMOP) and multiple auxiliary tasks (each focusing on optimizing the decision variables that contribute most to a specific objective function). The key strategies developed to implement this framework are as follows:
  • An optimal contribution objective assignment strategy, which evaluates the influence of the decision variables on the objective functions and assigns each variable to the objective to which it contributes the most, enabling each auxiliary task to directionally optimize its associated variable subset.
  • A contribution-guided initialization strategy, which uses curve fitting to estimate the optimal values of the convergence-related variables and uniform sampling for the diversity-related variables, generating high-quality initial populations for the auxiliary tasks.
  • A multi-population-based knowledge transfer strategy, which, according to the contribution assignment results, selectively integrates elite individual information discovered in the auxiliary populations into the original population, effectively guiding the original task toward more promising regions of the vast search space.
Extensive experiments on three benchmark test suites, LIRCMOP, CF, and ZXH_CF (with decision variable scales up to 1000), and comparisons with multiple state-of-the-art constrained multi-objective evolutionary algorithms (CMOAs) demonstrate the superior performance of MTO-CDTD. The results show that the proposed algorithm achieves better convergence and diversity on most test problems, regardless of problem size. The ablation experiments further verify that the proposed initialization and knowledge transfer strategies play a key role in improving the performance of the algorithm.
Meanwhile, the running time of MTO-CDTD is significantly lower than that of other comparative algorithms, highlighting its advantages in solving complex, large-scale optimization problems.

5.2. Theoretical Analysis of Algorithms

  • Convergence analysis: the auxiliary tasks focus on optimizing high-contribution decision variables for specific objectives, which reduces the search dimension of each subtask and accelerates its convergence. Through the multi-population-based knowledge transfer strategy, the original task continuously absorbs high-quality information from the auxiliary tasks, ensuring that the search advances toward the feasible region and the Pareto-optimal front. According to evolutionary algorithm convergence theory, integrating elite individuals from multiple auxiliary tasks via SubPOP provides a mechanism for the original population to escape local optima: the transferred information expands the search scope and increases the probability of approaching the globally optimal region. Additionally, the environmental selection strategies adopted by the original and auxiliary tasks prioritize retaining feasible solutions and promising infeasible individuals, ensuring the population consistently improves objective values and constraint satisfaction during the iterations, laying the foundation for convergence.
  • Stability analysis: the optimal contribution objective assignment strategy classifies decision variables into convergence-related and diversity-related categories, and assigns them to appropriate tasks based on quantitative contribution degrees. This reduces interference between variables and avoids fluctuations in search direction caused by unstructured variable optimization. The contribution-guided initialization strategy generates high-quality initial populations for auxiliary tasks, ensuring the algorithm starts from an advantageous position and reducing the risk of premature convergence or divergence. Meanwhile, the knowledge transfer rate α (set to 0.2 after parameter analysis) controls the proportion of information transfer, balancing the exploration capability of the original task and the exploitation efficiency of auxiliary tasks. It not only avoids instability caused by over-reliance on local information but also prevents convergence delays due to ineffective global search, thus guaranteeing the stability of the evolutionary process.

5.3. Future Work

The experimental results indicate that the proposed algorithm still exhibits limitations when dealing with some test problems with highly complex constraints (such as certain problems in the ZXH_CF test suite). This is because the objective functions and constraints of these problems are deeply coupled. As the number of decision variables increases, the algorithm struggles to effectively explore the search space and often gets trapped in locally feasible or infeasible regions. Therefore, further exploration of the complex interactions between decision variables and constraints to guide the optimization process will be one of the future directions of this research.

Author Contributions

Conceptualization, H.L.; methodology, H.L.; software, H.L.; validation, H.L.; formal analysis, H.L.; investigation, T.L.; resources, T.L.; data curation, T.L.; writing—original draft preparation, H.L.; writing—review and editing, T.L.; visualization, T.L.; supervision, H.L.; project administration, T.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable; this study uses publicly available benchmark problems.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ye, Q.; Wang, W.; Li, G.; Wang, Z. Dynamic-multi-task-assisted evolutionary algorithm for constrained multi-objective optimization. Swarm Evol. Comput. 2024, 90, 101683. [Google Scholar] [CrossRef]
  2. Neumann, A.; Neumann, F. Optimizing Monotone Chance-Constrained Submodular Functions Using Evolutionary Multiobjective Algorithms. Evol. Comput. 2024, 33, 363–393. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Y.; Cai, Z. Constrained evolutionary optimization by means of (μ + λ)-differential evolution and improved adaptive trade-off model. Evol. Comput. 2011, 19, 249–285. [Google Scholar] [CrossRef] [PubMed]
  4. Zhu, M.; Kong, M.; Wen, Y.; Gu, S.; Xue, B.; Huang, T. A multi-objective path planning method for ships based on constrained policy optimization. Ocean Eng. 2025, 319, 120165. [Google Scholar] [CrossRef]
  5. Yang, Z.; Deng, L.; Li, C.; Zhang, L. Optimization of Built-In Self-Test test chain configuration in 2.5 D Integrated Circuits Using Constrained Multi-Objective Evolutionary Algorithm. Eng. Appl. Artif. Intell. 2025, 143, 109876. [Google Scholar] [CrossRef]
  6. Ben Alla, H.; Ben Alla, S.; Ezzati, A.; Touhafi, A. A Novel, Self-Adaptive, Multiclass Priority Algorithm with VM Clustering for Efficient Cloud Resource Allocation. Computers 2025, 14, 81. [Google Scholar] [CrossRef]
  7. Jan, M.A.; Zhang, Q. MOEA/D for constrained multiobjective optimization: Some preliminary experimental results. In Proceedings of the 2010 UK Workshop on Computational Intelligence (UKCI), Colchester, UK, 8–10 September 2010; pp. 1–6. [Google Scholar]
  8. Ali, M.M.; Zhu, W. A penalty function-based differential evolution algorithm for constrained global optimization. Comput. Optim. Appl. 2013, 54, 707–739. [Google Scholar] [CrossRef]
  9. Fan, Z.; Li, W.; Cai, X.; Hu, K.; Lin, H.; Li, H. Angle-based constrained dominance principle in MOEA/D for constrained multi-objective optimization problems. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 460–467. [Google Scholar]
  10. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  11. Takahama, T.; Sakai, S. Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–9. [Google Scholar]
  12. Fan, Z.; Li, W.; Cai, X.; Huang, H.; Fang, Y.; You, Y.; Mo, J.; Wei, C.; Goodman, E. An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions. arXiv 2017, arXiv:1707.08767. [Google Scholar] [CrossRef]
  13. Qiao, K.; Yu, K.; Qu, B.; Liang, J.; Song, H.; Yue, C.; Lin, H.; Tan, K.C. Dynamic auxiliary task-based evolutionary multitasking for constrained multiobjective optimization. IEEE Trans. Evol. Comput. 2022, 27, 642–656. [Google Scholar] [CrossRef]
  14. Runarsson, T.P.; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 2000, 4, 284–294. [Google Scholar] [CrossRef]
  15. Lara, A.; Uribe, L.; Alvarado, S.; Sosa, V.A.; Wang, H.; Schütze, O. On the choice of neighborhood sampling to build effective search operators for constrained MOPs. Memetic Comput. 2019, 11, 155–173. [Google Scholar] [CrossRef]
  16. Xu, B.; Duan, W.; Zhang, H.; Li, Z. Differential evolution with infeasible-guiding mutation operators for constrained multi-objective optimization. Appl. Intell. 2020, 50, 4459–4481. [Google Scholar] [CrossRef]
  17. Ming, F.; Gong, W.; Wang, L.; Jin, Y. Constrained multi-objective optimization with deep reinforcement learning assisted operator selection. IEEE CAA J. Autom. Sin. 2024, 11, 919–931. [Google Scholar] [CrossRef]
  18. Uribe, L.; Lara, A.; Deb, K.; Schütze, O. A new gradient free local search mechanism for constrained multi-objective optimization problems. Swarm Evol. Comput. 2021, 67, 100938. [Google Scholar] [CrossRef]
  19. Morovati, V.; Pourkarimi, L. Extension of Zoutendijk method for solving constrained multiobjective optimization problems. Eur. J. Oper. Res. 2019, 273, 44–57. [Google Scholar] [CrossRef]
  20. Peng, Y.; Qiu, R.; Zhao, W.; Zhang, F.; Liao, Q.; Liang, Y.; Fu, G.; Yang, Y. Efficient energy optimization of large-scale natural gas pipeline network: An advanced decomposition optimization strategy. Chem. Eng. Sci. 2025, 309, 121456. [Google Scholar] [CrossRef]
  21. He, R.; Xie, Y.; Zhang, S.; Xu, F.; Long, J. Knowledge-assisted hybrid optimization strategy of large-scale crude oil scheduling integrated production planning. Comput. Chem. Eng. 2025, 192, 108904. [Google Scholar] [CrossRef]
  22. Zhou, S.Z.; Zhan, Z.H.; Chen, Z.G.; Kwong, S.; Zhang, J. A multi-objective ant colony system algorithm for airline crew rostering problem with fairness and satisfaction. IEEE Trans. Intell. Transp. Syst. 2020, 22, 6784–6798. [Google Scholar] [CrossRef]
  23. Yu, Q.; Yang, C.; Dai, G.; Peng, L.; Chen, X. Synchronous wireless sensor and sink placement method using dual-population co-evolutionary constrained multiobjective optimization algorithm. IEEE Trans. Ind. Inform. 2022, 19, 7561–7571. [Google Scholar] [CrossRef]
  24. Li, X.; Wang, W.; Wang, H.; Wu, J.; Fan, X.; Xu, Q. Dynamic environmental economic dispatch of hybrid renewable energy systems based on tradable green certificates. Energy 2020, 193, 116699. [Google Scholar] [CrossRef]
  25. Gupta, A.; Ong, Y.S.; Feng, L. Multifactorial evolution: Toward evolutionary multitasking. IEEE Trans. Evol. Comput. 2015, 20, 343–357. [Google Scholar] [CrossRef]
  26. Chen, K.; Xue, B.; Zhang, M.; Zhou, F. Evolutionary multitasking for feature selection in high-dimensional classification via particle swarm optimization. IEEE Trans. Evol. Comput. 2021, 26, 446–460. [Google Scholar] [CrossRef]
  27. Song, Q.; Zheng, Y.J.; Yang, J.; Huang, Y.J.; Sheng, W.G.; Chen, S.Y. Predicting demands of COVID-19 prevention and control materials via co-evolutionary transfer learning. IEEE Trans. Cybern. 2022, 53, 3859–3872. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, N.; Gupta, A.; Chen, Z.; Ong, Y.S. Evolutionary machine learning with minions: A case study in feature selection. IEEE Trans. Evol. Comput. 2021, 26, 130–144. [Google Scholar] [CrossRef]
  29. Zhang, F.; Mei, Y.; Nguyen, S.; Tan, K.C.; Zhang, M. Task relatedness-based multitask genetic programming for dynamic flexible job shop scheduling. IEEE Trans. Evol. Comput. 2022, 27, 1705–1719. [Google Scholar] [CrossRef]
  30. Qiao, K.; Yu, K.; Qu, B.; Liang, J.; Song, H.; Yue, C. An evolutionary multitasking optimization framework for constrained multiobjective optimization problems. IEEE Trans. Evol. Comput. 2022, 26, 263–277. [Google Scholar] [CrossRef]
  31. Ming, F.; Gong, W.; Wang, L.; Gao, L. Constrained multiobjective optimization via multitasking and knowledge transfer. IEEE Trans. Evol. Comput. 2022, 28, 77–89. [Google Scholar] [CrossRef]
  32. Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey. ACM Comput. Surv. CSUR 2021, 54, 1–34. [Google Scholar] [CrossRef]
  33. Liu, J.; Sarker, R.; Elsayed, S.; Essam, D.; Siswanto, N. Large-scale evolutionary optimization: A review and comparative study. Swarm Evol. Comput. 2024, 85, 101466. [Google Scholar] [CrossRef]
  34. Hong, W.J.; Yang, P.; Tang, K. Evolutionary computation for large-scale multi-objective optimization: A decade of progresses. Int. J. Autom. Comput. 2021, 18, 155–169. [Google Scholar] [CrossRef]
  35. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization. IEEE Trans. Evol. Comput. 2016, 22, 97–112. [Google Scholar] [CrossRef]
  36. Liu, S.; Lin, Q.; Tian, Y.; Tan, K.C. A variable importance-based differential evolution for large-scale multiobjective optimization. IEEE Trans. Cybern. 2021, 52, 13048–13062. [Google Scholar] [CrossRef]
  37. Ma, L.; Huang, M.; Yang, S.; Wang, R.; Wang, X. An adaptive localized decision variable analysis approach to large-scale multiobjective and many-objective optimization. IEEE Trans. Cybern. 2021, 52, 6684–6696. [Google Scholar] [CrossRef] [PubMed]
  38. Meselhi, M.; Sarker, R.; Essam, D.; Elsayed, S. A decomposition approach for large-scale non-separable optimization problems. Appl. Soft Comput. 2022, 115, 108168. [Google Scholar] [CrossRef]
  39. He, C.; Cheng, R.; Li, L.; Tan, K.C.; Jin, Y. Large-scale multiobjective optimization via reformulated decision variable analysis. IEEE Trans. Evol. Comput. 2022, 28, 47–61. [Google Scholar] [CrossRef]
  40. Chen, L.; Zhang, J.; Wu, L.; Cai, X.; Xu, Y. Large-ScaleMulti-Objective Optimization Algorithm Based on Weighted Overlapping Grouping of Decision Variables. Comput. Model. Eng. Sci. 2024, 140, 363–383. [Google Scholar]
41. Xu, Y.; Xu, C.; Zhang, H.; Huang, L.; Liu, Y.; Nojima, Y.; Zeng, X. A multi-population multi-objective evolutionary algorithm based on the contribution of decision variables to objectives for large-scale multi/many-objective optimization. IEEE Trans. Cybern. 2022, 53, 6998–7007.
42. Zille, H.; Ishibuchi, H.; Mostaghim, S.; Nojima, Y. A framework for large-scale multiobjective optimization based on problem transformation. IEEE Trans. Evol. Comput. 2017, 22, 260–275.
43. Li, Y.; Li, L.; Lin, Q.; Wong, K.C.; Ming, Z.; Coello, C.A.C. A self-organizing weighted optimization based framework for large-scale multi-objective optimization. Swarm Evol. Comput. 2022, 72, 101084.
44. Wang, H.; Zhu, S.; Fang, W. A clustering-based weighted optimization algorithm for large-scale multi-objective optimization problems. Memetic Comput. 2025, 17, 34.
45. Xiong, Z.; Wang, X.; Li, Y.; Feng, W.; Liu, Y. A problem transformation-based and decomposition-based evolutionary algorithm for large-scale multiobjective optimization. Appl. Soft Comput. 2024, 150, 111081.
46. Yang, X.; Zou, J.; Yang, S.; Zheng, J.; Liu, Y. A fuzzy decision variables framework for large-scale multiobjective optimization. IEEE Trans. Evol. Comput. 2021, 27, 445–459.
47. Sun, Y.; Jiang, D. An improved problem transformation algorithm for large-scale multi-objective optimization. Swarm Evol. Comput. 2024, 89, 101622.
48. Zhu, S.; Wang, W.; Fang, W.; Cui, M. Critical vector based evolutionary algorithm for large-scale multi-objective optimization. Clust. Comput. 2025, 28, 190.
49. Shang, Q.; Tan, M.; Hu, R.; Huang, Y.; Qian, B.; Feng, L. A multi-stage competitive swarm optimization algorithm for solving large-scale multi-objective optimization problems. Expert Syst. Appl. 2025, 260, 125411.
50. Cao, J.; Guo, K.; Zhang, J.; Chen, Z. A dual-stage large-scale multi-objective evolutionary algorithm with dynamic learning strategy. Expert Syst. Appl. 2023, 226, 120184.
51. Guo, W.; Li, S.; Dai, F.; Wang, J.; Zhang, M. A two-stage large-scale multi-objective optimization approach incorporating adaptive entropy and enhanced competitive swarm optimizer. Expert Syst. Appl. 2025, 278, 127374.
52. Wang, H.; Chen, L.; Hao, X.; Yu, T.; Qian, Y.; Yang, R.; Liu, W. Meta-knowledge-assisted sampling with variable sorting for large-scale multi-objective optimization. Appl. Soft Comput. 2025, 181, 113386.
53. Wang, H.; Chen, L.; Hao, X.; Qu, R.; Zhou, W.; Wang, D.; Liu, W. Learning-guided cross-sampling for large-scale evolutionary multi-objective optimization. Swarm Evol. Comput. 2024, 91, 101763.
54. Zhang, W.; Wang, S.; Li, G.; Zhang, W.; Wang, X. A dual-sampling based evolutionary algorithm for large-scale multi-objective optimization. Appl. Soft Comput. 2024, 167, 112344.
55. Li, L.; He, C.; Cheng, R.; Li, H.; Pan, L.; Jin, Y. A fast sampling based evolutionary algorithm for million-dimensional multiobjective optimization. Swarm Evol. Comput. 2022, 75, 101181.
56. Liu, Z.; Han, F.; Ling, Q.; Han, H.; Jiang, J. A fast interpolation-based multi-objective evolutionary algorithm for large-scale multi-objective optimization problems. Soft Comput. 2024, 28, 6475–6499.
57. Wu, Y.; Yang, N.; Chen, L.; Tian, Y.; Tang, Z. Directed quick search guided evolutionary framework for large-scale multi-objective optimization problems. Expert Syst. Appl. 2024, 239, 122370.
58. Lu, Y.; Li, B.; Liu, S.; Zhou, A. A population cooperation based particle swarm optimization algorithm for large-scale multi-objective optimization. Swarm Evol. Comput. 2023, 83, 101377.
59. Madani, A.; Engelbrecht, A.; Ombuki-Berman, B. Cooperative coevolutionary multi-guide particle swarm optimization algorithm for large-scale multi-objective optimization problems. Swarm Evol. Comput. 2023, 78, 101262.
60. Zhang, W.; Wang, S.; Li, G.; Zhang, W. Cooperative tri-population based evolutionary algorithm for large-scale multi-objective optimization. Expert Syst. Appl. 2023, 227, 120290.
61. Ge, Y.; Wang, Z.; Wang, H.; Cheng, F.; Zhang, L. Auxiliary optimization framework based on scaling transformation matrix for large-scale multi-objective problem. Swarm Evol. Comput. 2025, 95, 101931.
62. Liu, S.; Lin, Q.; Feng, L.; Wong, K.C.; Tan, K.C. Evolutionary multitasking for large-scale multiobjective optimization. IEEE Trans. Evol. Comput. 2022, 27, 863–877.
63. Cui, Z.; Wu, Y.; Zhao, T.; Zhang, W.; Chen, J. A two-stage accelerated search strategy for large-scale multi-objective evolutionary algorithm. Inf. Sci. 2025, 686, 121347.
64. Wang, M.; Li, X.; Chen, L.; Chen, H.; Chen, C.; Liu, M. A deep-based Gaussian mixture model algorithm for large-scale many objective optimization. Appl. Soft Comput. 2025, 172, 112874.
65. He, C.; Cheng, R.; Tian, Y.; Zhang, X.; Tan, K.C.; Jin, Y. Paired offspring generation for constrained large-scale multiobjective optimization. IEEE Trans. Evol. Comput. 2020, 25, 448–462.
66. Fan, C.; Wang, J.; Yang, L.T.; Xiao, L.; Ai, Z. Efficient constrained large-scale multi-objective optimization based on reference vector-guided evolutionary algorithm. Appl. Intell. 2023, 53, 21027–21049.
67. Wang, Q.; Li, T.; Meng, F.; Li, B. A framework for constrained large-scale multi-objective white-box problems based on two-scale optimization through decision transfer. Inf. Sci. 2024, 665, 120411.
68. Si, L.; Zhang, X.; Zhang, Y.; Yang, S.; Tian, Y. An efficient sampling approach to offspring generation for evolutionary large-scale constrained multi-objective optimization. IEEE Trans. Emerg. Top. Comput. Intell. 2025, 9, 2080–2092.
69. Wang, Q.; Xi, Y.; Zhang, Q.; Li, T.; Li, B. Hierarchical optimization by spatial-temporal indictor in multi-scale decision pyramid for constrained large-scale multi-objective problems. Expert Syst. Appl. 2024, 257, 125068.
70. Tian, Y.; Chen, H.; Ma, H.; Zhang, X.; Tan, K.C.; Jin, Y. Integrating conjugate gradients into evolutionary algorithms for large-scale continuous multi-objective optimization. IEEE/CAA J. Autom. Sin. 2022, 9, 1801–1817.
71. Ban, X.; Liang, J.; Yu, K.; Qiao, K.; Suganthan, P.N.; Wang, Y. A subspace search-based evolutionary algorithm for large-scale constrained multiobjective optimization and application. IEEE Trans. Cybern. 2025, 55, 2486–2499.
72. Qiao, K.; Liang, J.; Yu, K.; Guo, W.; Yue, C.; Qu, B.; Suganthan, P.N. Benchmark problems for large-scale constrained multi-objective optimization with baseline results. Swarm Evol. Comput. 2024, 86, 101504.
73. Chen, H.; Liu, H.L.; Gu, F.; Tan, K.C. A multiobjective multitask optimization algorithm using transfer rank. IEEE Trans. Evol. Comput. 2022, 27, 237–250.
74. Zhang, F.; Mei, Y.; Nguyen, S.; Zhang, M.; Tan, K.C. Surrogate-assisted evolutionary multitask genetic programming for dynamic flexible job shop scheduling. IEEE Trans. Evol. Comput. 2021, 25, 651–665.
75. Tang, Q.; Meng, K.; Cheng, L.; Zhang, Z. An improved multi-objective multifactorial evolutionary algorithm for assembly line balancing problem considering regular production and preventive maintenance scenarios. Swarm Evol. Comput. 2022, 68, 101021.
76. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm. TIK Rep. 2001, 103.
77. Sun, R.; Zou, J.; Liu, Y.; Yang, S.; Zheng, J. A multistage algorithm for solving multiobjective optimization problems with multiconstraints. IEEE Trans. Evol. Comput. 2022, 27, 1207–1219.
78. Liu, Z.Z.; Wang, B.C.; Tang, K. Handling constrained multiobjective optimization problems via bidirectional coevolution. IEEE Trans. Cybern. 2021, 52, 10163–10176.
79. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87.
80. Coello, C.A.C.; Cortés, N.C. Solving multiobjective optimization problems using an artificial immune system. Genet. Program. Evolvable Mach. 2005, 6, 163–190.
81. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
82. García, S.; Fernández, A.; Luengo, J.; Herrera, F. A study of statistical techniques and performance measures for genetics-based machine learning: Accuracy and interpretability. Soft Comput. 2009, 13, 959–977.
Figure 1. The Multi-Task Framework in MTO-CDTD.
Figure 2. Illustration of the contribution-guided initialization strategy.
Figure 3. Average IGD of MTO-CDTD under different values of α. (a) LIRCMOP1 test function; (b) LIRCMOP3 test function; (c) LIRCMOP5 test function; (d) CF1 test function; (e) CF3 test function; (f) CF5 test function; (g) ZXH_CF1 test function; (h) ZXH_CF3 test function; (i) ZXH_CF5 test function.
Figure 4. Pareto fronts obtained by BiCo, C3M, POCEA, MOCGDE, IMTCMO_BS and MTO-CDTD on LIRCMOP5 (D = 500). (a) BiCo on LIRCMOP5; (b) C3M on LIRCMOP5; (c) POCEA on LIRCMOP5; (d) MOCGDE on LIRCMOP5; (e) IMTCMO_BS on LIRCMOP5; (f) MTO-CDTD on LIRCMOP5.
Figure 5. Pareto fronts obtained by BiCo, C3M, POCEA, MOCGDE, IMTCMO_BS and MTO-CDTD on CF1 (D = 500). (a) BiCo on CF1; (b) C3M on CF1; (c) POCEA on CF1; (d) MOCGDE on CF1; (e) IMTCMO_BS on CF1; (f) MTO-CDTD on CF1.
Figure 6. Pareto fronts obtained by BiCo, C3M, POCEA, MOCGDE, IMTCMO_BS and MTO-CDTD on ZXH_CF6 (D = 500). (a) BiCo on ZXH_CF6; (b) C3M on ZXH_CF6; (c) POCEA on ZXH_CF6; (d) MOCGDE on ZXH_CF6; (e) IMTCMO_BS on ZXH_CF6; (f) MTO-CDTD on ZXH_CF6.
Figure 7. Average IGD values of MTO-CDTD and MTO-CDTD (WOC) for different test problems with D = 1000.
Figure 8. Average IGD values of MTO-CDTD and MTO-CDTD (RT) for different test problems with D = 1000.
Figure 9. Variation curves of the objective function minimum values of the main population and auxiliary populations during the evolutionary process. (a) Variation of the minimum f1 value of the main and auxiliary populations during evolution; (b) Variation of the minimum f2 value of the main and auxiliary populations during evolution.
Figure 10. Feasibility rate curves of MTO-CDTD on different test problems. (a) Feasibility rate of MTO-CDTD on LIRCMOP1–4; (b) Feasibility rate of MTO-CDTD on CF1–4; (c) Feasibility rate of MTO-CDTD on ZXH_CF1–4.
Figure 11. Calculation of TD_i.
Figure 12. HV convergence curves and final population distribution of different algorithms on BACAP. (a) HV convergence curves of different algorithms on BACAP; (b) Final population of different algorithms on BACAP.
Table 1. Parameter settings for algorithms.
Algorithm | Parameter Settings | Reference
C3M | Proportion of entering the third stage at the latest β = 0.7 | [77]
BiCo | Crossover probability pc = 1; mutation probability pm = 1/n (where n is the number of decision variables); distribution indexes of the crossover and mutation operators ηc = ηm = 20 | [78]
IMTCMO_BS | Sampling execution frequency g = 50; crossover probability pc ∈ {0.1, 0.2, 1.0} and scaling factor F ∈ {0.6, 0.8, 1.0} (both randomly selected per individual); mutation probability pm = 1/n; distribution index of the mutation operator ηm = 20 | [72]
MOCGDE | Small population size NP = 10 | [70]
POCEA | Neighborhood size K = 5; mutation probability pm = 1/n; distribution index of the mutation operator ηm = 20 | [65]
MTO-CDTD | Knowledge transfer rate α = 0.2; crossover probability pc = 1; scaling factor F = 0.5; mutation probability pm = 1/n; distribution index of the mutation operator ηm = 20 | —
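For readers who want to reproduce the variation step, the MTO-CDTD settings in Table 1 (pc = 1, F = 0.5, pm = 1/n, ηm = 20) correspond to a differential-evolution trial vector plus polynomial mutation. The sketch below is an illustrative reconstruction under those settings; the function names, the DE/rand/1/bin form, and the simplified (non-boundary-aware) mutation delta are assumptions, not the authors' code:

```python
import random

def de_rand_1(pop, i, F=0.5, pc=1.0):
    """DE/rand/1/bin trial vector for individual i (F and pc as in Table 1)."""
    n = len(pop[i])
    # three mutually distinct donors, all different from i
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    jrand = random.randrange(n)  # guarantees at least one gene is taken from the donor
    trial = list(pop[i])
    for j in range(n):
        if random.random() < pc or j == jrand:
            trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
    return trial

def polynomial_mutation(x, lb, ub, eta_m=20.0, pm=None):
    """Polynomial mutation with pm = 1/n and distribution index eta_m = 20."""
    n = len(x)
    pm = 1.0 / n if pm is None else pm
    y = list(x)
    for j in range(n):
        if random.random() < pm:
            u = random.random()
            # simplified polynomial-mutation perturbation in [-1, 1]
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            # scale by the variable range and clip to the box constraints
            y[j] = min(max(y[j] + delta * (ub[j] - lb[j]), lb[j]), ub[j])
    return y
```

With pc = 1, every gene of the trial vector is taken from the donor combination, so the search step is governed entirely by F and the subsequent mutation.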
Table 2. Average IGD values of MTO-CDTD and other algorithms on the LIRCMOP test suite (best result for each test case is highlighted; “–” indicates no feasible solution was found by the algorithm for that problem).
Problem | D | BiCo | C3M | POCEA | MOCGDE | IMTCMO_BS | MTO-CDTD
LIRCMOP11003.4069e-1 (8.38e-3) = 3.3807e-1 (1.65e-2) =3.6853e-1 (0.00e+0) =3.1923e-1 (1.96e-2) =3.4840e-1 (8.65e-2)
5003.5254e-1 (3.26e-3) = 3.4955e-1 (5.52e-3) = 3.4593e-1 (2.98e-3) =4.1106e-1 (1.17e-1)
10003.5617e-1 (2.70e-3) + 3.5537e-1 (4.91e-3) + 3.5048e-1 (2.12e-3) +4.2815e-1 (9.45e-2)
LIRCMOP21002.9512e-1 (8.28e-3) −2.8408e-1 (1.39e-2) −2.8443e-1 (1.73e-2) −2.9588e-1 (4.27e-2) −1.3484e-1 (9.00e-2) −1.2990e-1 (1.76e-1)
5003.1312e-1 (4.20e-3) −3.1557e-1 (2.12e-3) −3.0504e-1 (8.33e-3) − 2.0937e-2 (1.00e-2) −7.5287e-3 (8.68e-3)
1000 7.1389e-2 (2.62e-2) =2.2591e-1 (1.82e-1)
LIRCMOP31003.3632e-1 (8.47e-3) −3.3198e-1 (1.74e-2) −3.3260e-1 (2.69e-2) −3.5951e-1 (1.09e-2) =1.7320e-1 (1.09e-1) =2.2146e-1 (1.64e-1)
5003.4515e-1 (3.51e-3) −3.4378e-1 (3.47e-3) −3.4806e-1 (4.96e-3) − 2.4977e-2 (1.69e-2) −1.9785e-2 (6.75e-2)
10003.4868e-1 (1.77e-3) =3.4779e-1 (1.10e-3) =3.4800e-1 (3.66e-3) = 1.4691e-2 (1.11e-2) =2.6313e-1 (2.27e-1)
LIRCMOP41003.1332e-1 (4.47e-3) +3.0288e-1 (2.27e-2) +3.1909e-1 (1.50e-2) +3.0256e-1 (4.40e-2) =1.7709e-1 (1.12e-1) +3.3967e-1 (1.56e-1)
5003.2104e-1 (2.05e-3) =3.1756e-1 (2.85e-3) =3.1941e-1 (4.65e-3) = 2.9281e-2 (1.37e-2) +2.4630e-1 (1.91e-1)
10003.2244e-1 (2.27e-3) =3.2103e-1 (1.48e-3) =3.2155e-1 (3.29e-3) = 1.1716e-2 (5.77e-3) =1.9866e-1 (2.10e-1)
LIRCMOP51002.5734e+0 (3.21e-3) −2.5941e+1 (2.48e+0) −1.8846e+0 (6.62e-1) −2.1416e+1 (1.41e+1) −1.0525e+1 (3.51e+0) −3.3924e-1 (1.27e-1)
5002.5899e+0 (1.32e-2) −1.8969e+2 (1.46e+1) −2.5795e+0 (2.71e-3) −1.4552e+2 (7.35e+1) −1.4740e+2 (2.83e+1) −3.1766e-1 (1.49e-1)
10002.7315e+0 (2.81e-1) −4.3011e+2 (1.84e+1) −2.5892e+0 (6.94e-3) −3.7381e+2 (1.38e+2) −3.7914e+2 (3.22e+1) −2.9967e-1 (1.23e-1)
LIRCMOP61002.7586e+0 (1.73e-3) −2.7063e+1 (2.62e+0) −1.8400e+0 (6.69e-1) −1.9474e+1 (1.65e+1) −9.6954e+0 (3.52e+0) −4.1755e-1 (1.70e-1)
5002.7774e+0 (1.48e-2) −1.8894e+2 (1.34e+1) −2.7687e+0 (5.44e-3) −1.6499e+2 (7.41e+1) −1.5851e+2 (2.32e+1) −3.6825e-1 (1.55e-1)
10002.8318e+0 (1.03e-1) −4.2752e+2 (2.05e+1) −2.7818e+0 (9.75e-3) −3.0589e+2 (1.75e+2) −3.9109e+2 (4.37e+1) −3.3484e-1 (1.15e-1)
LIRCMOP71003.4403e+0 (9.87e-3) −2.5043e+1 (2.95e+0) −1.8735e+0 (1.10e+0) −1.6735e+1 (1.55e+1) −9.1255e+0 (3.95e+0) −1.4586e-1 (5.95e-2)
5003.5160e+0 (4.15e-2) −1.9011e+2 (1.71e+1) −3.3965e+0 (3.11e-1) −1.7281e+2 (6.51e+1) −1.4919e+2 (3.02e+1) −1.3341e-1 (7.05e-2)
10003.5677e+0 (6.06e-2) −4.2663e+2 (2.00e+1) −3.4742e+0 (9.57e-3) −3.2927e+2 (1.55e+2) −3.8497e+2 (3.10e+1) −1.5533e-1 (6.82e-2)
LIRCMOP81003.4416e+0 (1.54e-2) −2.5481e+1 (3.06e+0) −1.9985e+0 (8.88e-1) −2.1599e+1 (1.57e+1) −8.6258e+0 (3.54e+0) −3.4637e-1 (1.05e-1)
5003.5327e+0 (5.74e-2) −1.8842e+2 (1.39e+1) −3.4521e+0 (5.22e-3) −1.6362e+2 (8.15e+1) −1.4672e+2 (3.57e+1) −3.2217e-1 (1.45e-1)
10003.5913e+0 (6.09e-2) −4.2173e+2 (2.38e+1) −3.4787e+0 (1.08e-2) −3.4129e+2 (1.84e+2) −3.9040e+2 (3.83e+1) −3.2458e-1 (1.36e-1)
LIRCMOP91001.3780e+0 (1.14e-1) −1.4744e+1 (4.43e+0) −3.7179e+0 (3.39e+0) −8.0602e-1 (1.43e-1) =1.4463e+0 (7.20e-1) −7.7470e-1 (2.32e-1)
5001.6192e+0 (1.80e-1) −2.0658e+2 (1.49e+1) −1.4243e+2 (2.86e+1) −9.6135e-1 (1.78e-1) =1.1707e+0 (1.48e+0) =8.0723e-1 (2.71e-1)
10002.4560e+0 (4.67e-1) −5.1691e+2 (2.13e+1) −3.6729e+2 (2.80e+1) −9.2408e-1 (1.48e-1) −7.4258e-1 (1.01e-1) =7.5893e-1 (2.86e-1)
LIRCMOP101001.5023e+0 (2.58e-1) −1.5528e+1 (2.68e+0) −6.6378e+0 (1.98e+0) −7.7433e-1 (2.01e-1) −1.4280e+0 (5.99e-1) −6.5081e-1 (2.52e-1)
5001.2260e+1 (6.28e+0) −1.6628e+2 (4.97e+0) −9.7452e+1 (6.22e+0) −1.0163e+0 (1.95e-1) −1.1864e+0 (8.72e-3) −6.7845e-1 (1.93e-1)
10001.0771e+2 (1.35e+1) −3.6612e+2 (8.36e+0) −2.3975e+2 (1.09e+1) −1.0171e+0 (2.04e-1) −1.3001e+0 (7.47e-2) −5.6024e-1 (1.77e-1)
LIRCMOP111001.8810e+0 (3.55e-1) −1.5959e+1 (2.33e+0) −6.3966e+0 (2.22e+0) −5.1572e-1 (9.27e-2) =1.6238e+0 (9.11e-1) −5.8368e-1 (2.47e-1)
5001.5534e+1 (1.01e+1) −1.6625e+2 (7.32e+0) −9.8241e+1 (6.64e+0) −6.8776e-1 (1.37e-1) −8.9562e-1 (5.40e-2) −5.8493e-1 (2.53e-1)
10009.5937e+1 (1.81e+1) −3.6833e+2 (8.51e+0) −2.3891e+2 (8.87e+0) −7.6207e-1 (2.86e-1) −1.1825e+0 (1.32e-1) −5.2975e-1 (2.29e-1)
LIRCMOP121001.3512e+0 (1.91e-1) −1.4170e+1 (5.00e+0) −4.5292e+0 (4.59e+0) −1.2025e+0 (1.07e+0) −1.3103e+0 (5.26e-1) −7.9510e-1 (2.26e-1)
5001.3589e+0 (1.15e-1) −2.0804e+2 (1.84e+1) −1.4758e+2 (2.40e+1) −1.7156e+0 (3.05e+0) −1.0463e+0 (4.25e-1) =7.8703e-1 (2.30e-1)
10002.2574e+0 (5.25e-1) −5.2119e+2 (1.66e+1) −3.6453e+2 (1.76e+1) −4.0687e+0 (1.26e+1) −9.7358e-1 (2.99e-1) =8.2169e-1 (1.79e-1)
LIRCMOP131001.3199e+0 (1.99e-3) −2.6897e+0 (4.19e-1) −1.5431e+0 (1.20e-1) −9.2955e-2 (1.02e-3) +1.5402e+0 (1.28e-1) −1.7180e-1 (9.88e-2)
5002.5341e+0 (3.01e-1) −9.6686e+0 (3.01e+0) −1.5859e+0 (2.28e-1) −9.2541e-2 (9.27e-4) +7.7573e+0 (8.62e-1) −1.5857e-1 (3.76e-2)
10001.1108e+1 (1.20e+0) −2.3180e+1 (7.71e+0) −2.0403e+0 (1.09e+0) −9.2466e-2 (9.21e-4) +1.7964e+1 (2.72e+0) −1.9967e-1 (2.18e-1)
LIRCMOP141001.2765e+0 (1.55e-3) −2.7488e+0 (4.62e-1) −1.5329e+0 (1.45e-1) −1.3903e-1 (8.76e-2) +1.4903e+0 (1.61e-1) −1.9371e-1 (8.28e-2)
5002.6107e+0 (3.57e-1) −9.9407e+0 (2.97e+0) −1.6281e+0 (1.63e-1) −9.5806e-2 (1.27e-3) +7.5722e+0 (1.25e+0) −2.0073e-1 (6.60e-2)
10001.1292e+1 (1.29e+0) −2.3312e+1 (7.03e+0) −2.0186e+0 (4.27e-1) −9.4997e-2 (9.62e-4) +1.8961e+1 (2.99e+0) −2.4494e-1 (1.37e-1)
+/−/= | 2/35/5 | 1/38/3 | 2/35/5 | 6/30/6 | 3/29/10
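In the +/−/= rows of these tables, the marks conventionally indicate that a compared algorithm is significantly better than, worse than, or statistically equivalent to MTO-CDTD, typically judged by the Wilcoxon rank-sum test at the 5% level (cf. the statistical methodology surveyed in [82]). The following is an illustrative pure-Python sketch only; the function name, the normal approximation, and the two-sided test are assumptions, not the paper's stated procedure, and ASCII '-' stands in for the table's '−':

```python
import math

def ranksum_mark(a, b, alpha=0.05):
    """Return '+', '-', or '=' comparing IGD samples a (competitor) vs b (proposed).

    Wilcoxon rank-sum test with normal approximation; '+' means a is
    significantly smaller (better, for a minimized indicator like IGD) than b.
    """
    combined = sorted((v, src) for src, vals in ((0, a), (1, b)) for v in vals)
    vals = [v for v, _ in combined]
    ranks = {}
    i = 0
    while i < len(vals):              # assign average ranks to tied values
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        avg = (i + j + 1) / 2.0       # mean of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    R_a = sum(ranks[k] for k, (v, src) in enumerate(combined) if src == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0                       # expected rank sum
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # its standard deviation
    z = (R_a - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))                # two-sided p-value
    if p >= alpha:
        return '='
    return '+' if R_a < mu else '-'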
Table 3. Comparison of HV results between MTO-CDTD and other algorithms on the LIRCMOP test suite.
Algorithm | HV (+/−/=)
BiCo | 0/37/5
C3M | 0/40/2
POCEA | 0/38/4
MOCGDE | 8/28/6
IMTCMO_BS | 3/31/8
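The HV comparisons in Tables 3, 5 and 7 count how often each competitor's hypervolume [81] is significantly better (+), worse (−), or equivalent (=) relative to MTO-CDTD. For two minimization objectives, hypervolume reduces to a sum of rectangles between the non-dominated set and a reference point. A minimal sketch (the reference-point choice is not specified in this excerpt, and the function name is illustrative):

```python
def hv_2d(points, ref):
    """Hypervolume of a 2-objective set w.r.t. reference point ref (minimization).

    Sweeps solutions left-to-right by f1 and accumulates the rectangle each
    non-dominated point adds below the current f2 level.
    """
    # keep only points that actually dominate the reference point
    pts = sorted(p for p in points if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # dominated points add nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) yields an HV of 6.0. Unlike IGD, a larger HV is better.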
Table 4. Average IGD values of MTO-CDTD and other algorithms on the CF test suite (best result for each test case is highlighted; “–” indicates no feasible solution was found by the algorithm for that problem).
Problem | D | BiCo | C3M | POCEA | MOCGDE | IMTCMO_BS | MTO-CDTD
CF11001.3360e-1 (6.36e-3) −1.2404e-1 (1.08e-2) −1.8836e-1 (7.62e-3) −3.7919e-2 (7.18e-3) =4.1400e-2 (3.57e-2) =7.3178e-2 (8.51e-2)
5001.2710e-1 (3.60e-3) −1.7257e-1 (4.18e-3) −1.9749e-1 (4.60e-3) −2.2418e-2 (7.92e-3) −1.5892e-2 (1.49e-3) −2.3113e-3 (3.22e-3)
10001.4400e-1 (3.32e-3) −1.8061e-1 (3.33e-3) −2.0052e-1 (3.87e-3) −1.0167e-2 (2.30e-3) +1.5690e-2 (7.52e-4) −1.2313e-2 (3.19e-2)
CF21001.1569e-1 (3.87e-2) −7.2795e-1 (9.03e-2) −4.6647e-1 (1.05e-1) −5.3771e-2 (2.12e-2) +2.0561e-1 (3.98e-2) −9.5580e-2 (1.81e-2)
5001.2254e-1 (1.87e-2) −1.1233e+0 (4.38e-2) −5.3012e-1 (1.37e-1) −4.0643e-2 (1.10e-2) +2.0561e-1 (3.98e-2) −8.4484e-2 (1.30e-2)
10001.3763e-1 (3.85e-2) −1.2519e+0 (3.65e-2) −5.1169e-1 (1.64e-1) −5.2794e-2 (1.90e-2) +3.5414e-1 (3.66e-2) −1.0083e-1 (2.21e-2)
CF31004.2699e-1 (1.60e-1) −2.6512e+0 (6.79e-1) −4.5895e-1 (1.12e-1) −1.8635e-1 (8.47e-2) −7.7070e-1 (2.96e-1) −1.0252e-1 (4.63e-2)
5003.8269e-1 (1.93e-1) −3.1870e+0 (4.91e-1) −3.6716e-1 (1.02e-1) −1.4044e-1 (1.22e-1) =4.2263e-1 (1.81e-1) −1.0252e-1 (4.63e-2)
10003.9591e-1 (1.02e-1) −3.3795e+0 (4.41e-1) −4.2364e-1 (1.26e-1) −1.5475e-1 (1.23e-1) =3.4975e-1 (5.41e-2) −1.4157e-1 (5.76e-2)
CF41005.5852e-1 (8.78e-2) −1.3805e+1 (3.51e+0) −1.5350e+0 (4.92e+0) −5.9221e-1 (7.42e-1) −2.7344e+0 (1.74e+0) −1.2816e-1 (1.53e-2)
5001.0540e+0 (1.36e-1) −9.8562e+1 (1.95e+1) −1.7941e+1 (5.23e+1) −4.0132e+0 (1.12e+1) −6.7510e+0 (2.48e+0) −1.2649e-1 (2.08e-2)
10002.2987e+0 (3.04e-1) −2.0617e+2 (2.61e+1) −1.2422e+1 (3.86e+1) −5.3726e+0 (1.46e+1) −1.3758e+1 (3.27e+0) −1.3181e-1 (3.19e-2)
CF51001.1782e+1 (2.90e+0) −4.7115e+1 (5.48e+0) −1.1229e+1 (1.98e+0) −6.8077e+1 (1.63e+1) −4.3051e+1 (5.79e+0) −4.2948e+0 (3.26e+0)
5006.5584e+1 (4.05e+0) −2.1763e+2 (2.49e+1) −5.0912e+1 (5.56e+0) −3.6964e+2 (9.87e+1) −1.9328e+2 (3.25e+1) −4.9840e+0 (3.71e+0)
10001.5138e+2 (6.49e+0) −4.9370e+2 (5.85e+1) −1.0929e+2 (1.33e+1) −8.7283e+2 (2.86e+2) −3.4396e+2 (5.22e+1) −3.3248e+1 (3.11e+1)
CF61005.5519e-1 (9.69e-2) −1.7316e+0 (4.26e-1) −4.2822e-1 (8.85e-2) =4.3253e-1 (3.10e-1) =1.2545e+0 (4.22e-1) −4.1752e-1 (1.57e-1)
5001.0747e+0 (1.27e-1) −5.7214e+0 (7.36e-1) −5.2966e-1 (1.61e-1) −5.0571e-1 (1.70e-1) −4.6268e+0 (1.14e+0) −3.5681e-1 (8.77e-2)
10001.9182e+0 (1.50e-1) −1.5654e+1 (4.03e+0) −7.5716e-1 (1.41e-1) −5.7932e-1 (7.28e-2) −7.5024e+0 (1.49e+0) −4.9947e-1 (3.20e-1)
CF71001.3841e+1 (2.20e+0) +9.8844e+1 (1.11e+1) −2.0294e+1 (3.69e+0) −7.2672e+1 (1.39e+1) −6.4345e+1 (8.64e+0) −1.8405e+1 (1.77e+0)
5007.4763e+1 (5.45e+0) =5.3290e+2 (3.06e+1) −1.1365e+2 (1.58e+1) −4.6746e+2 (7.87e+1) −3.8368e+2 (1.52e+1) −7.2673e+1 (8.04e+0)
10001.7455e+2 (2.87e+1) −1.1515e+3 (1.20e+2) −2.1616e+2 (1.93e+1) −1.0977e+3 (1.50e+2) −7.6280e+2 (4.55e+1) −1.5188e+2 (1.43e+1)
CF8100 8.8327e-1 (1.39e-1) −6.2885e-1 (1.37e-1) =2.1760e+1 (3.67e+0) − 6.2218e-1 (2.79e-1)
500 1.2233e+0 (1.66e-1) −7.9209e-1 (1.38e-1) − 4.2674e-1 (7.05e-2)
1000 1.2015e+0 (1.50e-1) −7.6505e-1 (1.95e-1) − 5.2893e-1 (1.88e-1)
CF91008.0671e-1 (1.12e-1) −3.3271e-1 (2.80e-2) =5.1890e-1 (1.41e-1) =8.2071e-1 (6.03e-1) −5.1329e-1 (2.09e-1) =5.2817e-1 (4.74e-1)
5005.4905e-1 (1.57e-1) −3.2664e-1 (5.43e-3) =4.9072e-1 (1.24e-1) −7.5417e-1 (3.07e-1) −3.7567e-1 (1.67e-1) =3.3508e-1 (1.71e-1)
10004.6981e-1 (1.08e-1) −3.4294e-1 (1.15e-2) =4.3318e-1 (1.60e-1) =1.0924e+0 (8.66e-1) −3.6320e-1 (1.28e-1) =4.2752e-1 (3.43e-1)
CF10100 5.3139e-1 (1.96e-1)
500 8.4912e-1 (5.01e-1)
1000 7.9834e-1 (0.00e+0)
+/−/= | 1/28/1 | 0/27/3 | 0/26/4 | 4/22/4 | 0/26/4
Table 5. Comparison of HV results between MTO-CDTD and other algorithms on the CF test suite.
Algorithm | HV (+/−/=)
BiCo | 0/2/8
C3M | 0/3/7
POCEA | 0/20/10
MOCGDE | 4/14/12
IMTCMO_BS | 0/2/8
Table 6. Average IGD values of MTO-CDTD and other algorithms on the ZXH_CF test suite (best result for each test case is highlighted; “–” indicates no feasible solution was found by the algorithm for that problem).
Problem | D | BiCo | C3M | POCEA | MOCGDE | IMTCMO_BS | MTO-CDTD
ZXH_CF11004.6828e-1 (1.94e-1) −2.9302e-1 (6.14e-2) −3.5034e-1 (7.85e-2) −1.2429e+0 (6.10e-1) −5.0238e-1 (2.77e-1) −1.3231e-1 (4.53e-2)
5008.7654e-1 (1.23e-2) −8.7868e-1 (1.72e-1) −7.8767e-1 (6.23e-2) −9.4679e-1 (6.18e-1) −6.7598e-1 (1.67e-1) −1.4485e-1 (4.84e-2)
10001.1402e+0 (1.51e-2) −1.0724e+0 (2.87e-2) −1.0240e+0 (4.22e-2) −5.5188e-1 (1.97e-1) −9.8191e-1 (9.88e-2) −1.2694e-1 (2.30e-2)
ZXH_CF2100 1.5189e+0 (6.55e-1) = 1.5391e+0 (1.94e-1)
500 1.4843e+0 (4.55e-1) = 1.7711e+0 (3.10e-3)
1000 1.4570e+0 (5.50e-1) = 1.3216e+0 (4.10e-1)
ZXH_CF31007.9046e-1 (2.13e-1) −5.5377e-1 (2.51e-1) −9.0408e-1 (2.41e-1) −1.5873e+0 (1.17e+0) −8.2294e-1 (3.32e-1) −1.4071e-1 (1.65e-2)
5001.3118e+0 (5.82e-2) −1.2587e+0 (2.41e-2) −1.2453e+0 (7.68e-2) − 6.7082e-1 (3.34e-1) −2.8389e-1 (2.36e-1)
10001.3338e+0 (2.81e-2) −1.2792e+0 (1.86e-2) −1.2818e+0 (3.66e-2) − 7.6132e-1 (3.29e-1) −3.1336e-1 (2.93e-1)
ZXH_CF41001.8913e+0 (3.61e-1) − 3.4048e+0 (6.07e-1) − 2.0658e-1 (9.38e-2)
500 1.9369e+0 (4.03e-1) −1.6659e+0 (5.82e-2) −1.8007e-1 (8.86e-2)
1000 2.0377e+0 (1.66e-1) −1.7021e+0 (9.55e-2) −1.9486e-1 (9.42e-2)
ZXH_CF5100 5.6789e-2 (2.12e-2) +1.4421e+0 (0.00e+0) =1.0570e+0 (4.56e-2)
500 3.0598e-2 (5.17e-4) = 9.1771e-1 (0.00e+0)
1000 3.0534e-2 (6.84e-4) = 8.0808e-1 (0.00e+0)
ZXH_CF61002.6637e-1 (1.54e-1) −2.2969e-1 (6.35e-2) −2.8798e-1 (1.36e-1) −1.7000e-1 (2.43e-1) −5.6772e-1 (4.60e-1) −6.4007e-2 (3.71e-2)
5001.3288e+0 (1.34e-1) −1.6121e+0 (1.37e-1) −6.4442e-1 (1.72e-1) −1.3685e-1 (1.84e-1) −6.4200e-1 (1.68e-1) −8.0028e-2 (3.84e-2)
10001.5570e+0 (5.87e-2) −1.6169e+0 (0.00e+0) =1.3623e+0 (1.22e-1) −1.7799e-1 (3.31e-1) −1.0477e+0 (1.46e-1) −1.0557e-1 (6.64e-2)
ZXH_CF7100 4.5671e-1 (2.75e-1) = 2.5824e-1 (1.44e-1)
500 1.0971e+0 (2.40e-2) −2.8642e-1 (2.25e-1)
1000 2.4380e-1 (1.65e-1)
ZXH_CF81007.3313e-1 (2.39e-1) −4.3324e-1 (2.70e-1) −1.5255e+0 (0.00e+0) =1.8438e+0 (0.00e+0) =1.0151e+0 (3.97e-1) −1.2172e-1 (4.09e-2)
5001.6939e+0 (1.92e-1) −1.6899e+0 (3.05e-1) − 5.7952e-1 (1.58e-1) −4.0990e-1 (3.51e-1)
10001.8608e+0 (2.34e-2) −1.8417e+0 (4.96e-2) − 5.4978e-1 (1.00e-1) =5.0875e-1 (4.33e-1)
ZXH_CF91008.2905e-1 (2.49e-1) −3.7082e-1 (1.75e-1) = 1.7032e+0 (1.10e+0) −1.1290e+0 (4.77e-1) −3.1399e-1 (1.53e-1)
5001.2389e+0 (2.50e-1) −1.5045e+0 (2.71e-1) − 6.8732e+0 (8.89e-2) −5.5450e-1 (2.68e-1) −2.9585e-1 (2.69e-1)
10001.5651e+0 (1.76e-1) −1.6389e+0 (1.58e-1) − 4.7646e-1 (1.55e-1) =5.8363e-1 (5.11e-1)
ZXH_CF101002.9739e+0 (2.72e-1) − 2.3589e+0 (1.31e+0) − 4.3064e-1 (1.56e-1)
500 2.5578e+0 (6.00e-1) −2.1940e+0 (1.95e-1) −3.6702e-1 (1.57e-1)
1000 2.3653e+0 (2.08e+0) − 4.4694e-1 (2.04e-1)
ZXH_CF111004.1033e-1 (2.19e-1) =2.3711e-1 (1.33e-1) +1.1771e+0 (1.80e-1) −1.1574e+0 (3.27e-1) −7.6094e-1 (5.54e-1) −3.7324e-1 (2.13e-1)
5008.2031e-1 (2.90e-1) −2.1181e+0 (2.88e-1) −1.6720e+0 (4.07e-1) −1.2345e+0 (4.11e-1) −6.9699e-1 (1.22e-1) −2.5735e-1 (9.26e-2)
10001.4345e+0 (2.22e-1) −2.0701e+0 (2.78e-1) −1.5550e+0 (4.21e-1) −1.2290e+0 (3.22e-1) −1.0133e+0 (1.96e-1) −1.9382e-1 (8.53e-2)
ZXH_CF12100 2.5104e-1 (1.52e-1) = 1.2646e+0 (0.00e+0)
500 1.3050e-1 (1.21e-1) + 1.3616e+0 (5.82e-2)
1000 1.4560e-1 (1.48e-1) + 1.4176e+0 (9.06e-2)
ZXH_CF13100 2.3059e+0 (3.00e-1) − 2.4062e+0 (8.45e-1) −1.9127e+0 (1.37e-1) −5.6216e-2 (5.14e-2)
500 1.8272e+0 (1.32e+0) −1.8532e+0 (2.95e-1) −2.7489e-2 (2.14e-2)
1000 1.5032e+0 (1.51e-1) −1.8081e+0 (2.31e-1) −2.6140e-2 (2.21e-2)
ZXH_CF141001.7711e-1 (5.10e-2) −9.7674e-2 (1.04e-2) −5.9097e-1 (1.49e-1) −2.1774e+0 (2.01e+0) −3.0007e-1 (1.89e-1) −6.7706e-2 (8.96e-2)
5003.3499e-1 (6.59e-2) −1.6120e-1 (1.80e-2) +9.7024e-1 (1.04e-1) − 2.5217e-1 (1.65e-1) −2.0171e-1 (2.59e-1)
10006.5972e-1 (1.09e-1) −2.9731e-1 (1.07e-1) −1.1171e+0 (9.85e-2) − 2.1588e-1 (5.94e-2) −1.7272e-1 (2.11e-1)
ZXH_CF15100 2.2873e-2 (4.31e-2) +1.0410e+0 (0.00e+0) =6.6154e-1 (5.16e-2)
500 3.0107e-2 (4.94e-2) + 7.1629e-1 (2.47e-1)
1000 1.1601e-2 (2.21e-2) + 1.0231e+0 (4.14e-1)
ZXH_CF161001.0294e-1 (4.06e-2) −3.3358e-2 (9.16e-3) −2.4605e-1 (3.22e-2) −4.8493e-1 (7.68e-1) −1.4637e-1 (1.54e-1) −2.1160e-2 (2.87e-2)
5001.7964e-1 (7.75e-2) −5.4073e-1 (4.08e-1) −3.1872e-1 (4.82e-2) −1.5229e-1 (3.54e-1) =1.7092e-1 (4.88e-2) −1.1828e-2 (9.92e-3)
10004.5306e-1 (9.63e-2) −8.4732e-1 (4.62e-1) − 1.7389e-1 (3.74e-1) −2.4729e-1 (8.19e-2) −1.8345e-2 (2.18e-2)
+/−/= | 0/47/1 | 2/44/2 | 0/47/1 | 6/33/9 | 0/44/4
Table 7. Comparison of HV results between MTO-CDTD and other algorithms on the ZXH_CF test suite.
Algorithm | HV (+/−/=)
BiCo | 0/48/0
C3M | 0/47/1
POCEA | 0/47/1
MOCGDE | 15/26/7
IMTCMO_BS | 0/45/3
Table 8. MTO-CDTD runtime ranking on all test functions.
LIRCMOP Test Problem | Rank | CF Test Problem | Rank | ZXH_CF Test Problem | Rank
LIRCMOP1 | 1 | CF1 | 1 | ZXH_CF1 | 1
LIRCMOP2 | 3 | CF2 | 1 | ZXH_CF2 | 1
LIRCMOP3 | 1 | CF3 | 1 | ZXH_CF3 | 1
LIRCMOP4 | 1 | CF4 | 1 | ZXH_CF4 | 1
LIRCMOP5 | 1 | CF5 | 1 | ZXH_CF5 | 1
LIRCMOP6 | 1 | CF6 | 1 | ZXH_CF6 | 1
LIRCMOP7 | 1 | CF7 | 1 | ZXH_CF7 | 1
LIRCMOP8 | 1 | CF8 | 1 | ZXH_CF8 | 1
LIRCMOP9 | 1 | CF9 | 1 | ZXH_CF9 | 1
LIRCMOP10 | 1 | CF10 | 1 | ZXH_CF10 | 1
LIRCMOP11 | 1 | | | ZXH_CF11 | 1
LIRCMOP12 | 1 | | | ZXH_CF12 | 1
LIRCMOP13 | 1 | | | ZXH_CF13 | 1
LIRCMOP14 | 1 | | | ZXH_CF14 | 1
| | | | ZXH_CF15 | 1
| | | | ZXH_CF16 | 1
Table 9. The average IGD of the initial population for two strategies (“–” indicates that there are no feasible solutions in the initialized population).
Problem | MTO-CDTD (WOC) | MTO-CDTD
LIRCMOP1 | – | 2.4328e+00
LIRCMOP3 | – | 1.8345e+00
LIRCMOP5 | 1.1985e+03 | 9.7472e-01
CF1 | 3.9264e-01 | 3.4621e-01
CF3 | 1.8922e+01 | 3.1320e-01
CF5 | 2.4903e+03 | 7.7690e+01
ZXH_CF1 | – | 4.5728e-01
ZXH_CF3 | – | 6.2667e-01
ZXH_CF5 | – | –
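IGD, the indicator reported throughout Tables 2–9, is the mean Euclidean distance from each point of a sampled reference Pareto front to its nearest obtained solution, so lower is better, and it is undefined when the population contains no feasible solution (the “–” entries). A minimal sketch of the standard definition (function name illustrative):

```python
import math

def igd(reference_front, obtained_set):
    """Inverted generational distance: mean distance from each reference
    point to its nearest obtained solution (lower is better)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained_set)
               for r in reference_front) / len(reference_front)
```

Because every reference point contributes a nearest-neighbor term, IGD penalizes both poor convergence and poor coverage of the Pareto front.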
Table 10. Comparison of individual migration performance using different migration strategies.
Problem | MTO-CDTD (RT): Avg Rank | Top 50% | Bottom 25% | MTO-CDTD: Avg Rank | Top 50% | Bottom 25%
LIRCMOP1 | 28.76 | 88.50% | 5.00% | 24.36 | 97.50% | 0.00%
LIRCMOP3 | 19.63 | 93.75% | 0.00% | 15.92 | 100.00% | 0.00%
LIRCMOP5 | 36.43 | 64.50% | 10.75% | 34.83 | 69.375% | 10.625%
CF1 | 65.47 | 26.00% | 51.50% | 33.70 | 83.125% | 2.8125%
CF3 | 36.86 | 70.50% | 12.25% | 28.54 | 87.75% | 0.00%
CF5 | 27.35 | 86.50% | 1.00% | 20.91 | 100.00% | 0.00%
ZXH_CF1 | 22.34 | 89.00% | 0.00% | 16.20 | 100.00% | 0.00%
ZXH_CF3 | 61.16 | 42.25% | 53.75% | 13.51 | 100.00% | 0.00%
ZXH_CF5 | 14.52 | 100.00% | 0.00% | 11.47 | 100.00% | 0.00%
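The Table 10 statistics summarize where migrated individuals land once the receiving population is ranked (rank 1 = best): their average rank, the fraction falling in the top 50%, and the fraction falling in the bottom 25%. A hypothetical helper (the name and the exact ranking scheme feeding it are assumptions) that computes these three quantities from a list of ranks:

```python
def migration_rank_stats(migrant_ranks, pop_size):
    """Average rank, share in the top 50%, and share in the bottom 25%
    of migrated individuals within a population of pop_size (rank 1 = best)."""
    n = len(migrant_ranks)
    avg = sum(migrant_ranks) / n
    top50 = sum(r <= pop_size * 0.50 for r in migrant_ranks) / n
    bot25 = sum(r > pop_size * 0.75 for r in migrant_ranks) / n
    return avg, top50, bot25
```

Lower average ranks and higher top-50% shares indicate that transferred individuals are competitive in the target task rather than being discarded by environmental selection.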
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Li, H.; Liu, T. Contribution-Driven Task Design: Multi-Task Optimization Algorithm for Large-Scale Constrained Multi-Objective Problems. Computers 2026, 15, 31. https://doi.org/10.3390/computers15010031
