Article

Auxiliary Population Multitask Optimization Based on Chinese Semantic Understanding

1 School of Culture and Tourism, Kaifeng University, Kaifeng 475003, China
2 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9746; https://doi.org/10.3390/app15179746
Submission received: 18 August 2025 / Revised: 1 September 2025 / Accepted: 3 September 2025 / Published: 4 September 2025
(This article belongs to the Special Issue Applications of Genetic and Evolutionary Computation)

Abstract

In Chinese language semantic analysis, the processed languages often reveal similar representations in models for different application scenarios, resulting in similar language models. Given this characteristic, evolutionary multitask optimization (EMTO) algorithms, which realize synergy optimization across multiple tasks, have the potential to optimize such models for different scenarios. EMTO is an emerging topic in evolutionary computation (EC) for solving multitask optimization problems (MTOPs) with the help of knowledge transfer (KT). However, current EMTO algorithms often suffer from two limitations. First, many KT methods ignore the distribution information of populations when evaluating task similarity. Second, many EMTO algorithms directly transfer individuals from the source task to the target task, which cannot guarantee the quality of the transferred knowledge. To overcome these challenges, an auxiliary-population-based multitask optimization (APMTO) algorithm is proposed in this paper, which will be further applied to Chinese semantic understanding in our future work. We first propose an adaptive similarity estimation (ASE) strategy that exploits the distribution information among tasks to evaluate task similarity, so as to adaptively adjust the KT frequency. Then, an auxiliary-population-based KT (APKT) strategy is designed, which uses an auxiliary population to map the global best solution of the source task to the target task, offering more useful transferred information for the target task. APMTO is tested on the multitask test suite CEC2022 and compared with several state-of-the-art EMTO algorithms. The results show that APMTO outperforms the compared state-of-the-art algorithms, which fully reveals its effectiveness and superiority.

1. Introduction

Evolutionary computation (EC) algorithms, including evolutionary algorithms (EAs) [1,2,3,4] and swarm intelligence (SI) [5,6,7], have been explored in many research fields to solve different optimization problems, such as large-scale [8,9,10] and multimodal [11,12,13] problems. Recently, evolutionary multitask optimization (EMTO) [14] has emerged, which uses EC algorithms to solve multitask optimization problems (MTOPs), where an MTOP contains several tasks to be solved simultaneously. Since different tasks in MTOPs are often similar to each other, the knowledge from one task can facilitate the optimization of other tasks. Therefore, knowledge transfer (KT) is a crucial technique in EMTO algorithms.
The current EMTO algorithms can be roughly divided into two categories: single-population algorithms and multi-population algorithms. Specifically, single-population algorithms use only one population to solve all the tasks. The multifactorial evolutionary algorithm (MFEA) [15] is the first single-population algorithm, where each individual evaluates only one task, determined by its skill factor. Simulated binary crossover is applied to individuals to realize both self-evolution and KT. Building on this, variants of the MFEA have been developed to further enhance its performance, such as multifactorial differential evolution (MFDE), multifactorial particle swarm optimization (MFPSO) [16], MFEA-II [17], and MFEA with adaptive KT (MFEA-AKT) [18]. Different from the single-population algorithms, multi-population algorithms often maintain multiple populations, where each population corresponds to one task and various KT mechanisms are realized in different populations for different tasks. For example, adaptive EMTO (AEMTO) [19] designs the intra-SE and inter-KT mechanisms to realize self-evolution and KT, respectively. In the multitasking genetic algorithm (MTGA) [20], the bias between tasks is evaluated and removed to eliminate its influence. To realize index-unaligned knowledge transfer, block-level knowledge transfer (BLKT)-based differential evolution (BLKT-DE) [21] splits individuals into small blocks and applies the evolutionary operations among the blocks.
Although these EMTO algorithms have achieved success to some degree, they often involve two limitations. First, almost all of these EMTO algorithms do not consider the distribution information of populations and only use a fixed KT frequency when performing KT, such as the fixed random mating probability (rmp) used in MFEA and MFEA-AKT. If the distribution information of tasks can be fully utilized, the similarity of tasks can be effectively evaluated, which can further facilitate the adjustment of the KT frequency. Second, many EMTO algorithms often directly transfer individuals from the source task to the target task, such as AEMTO, which cannot guarantee the quality of the transferred information, causing insufficient or even negative KT. For example, in Figure 1, the individuals of target task T1 and source task T2 are marked in black and red, respectively, and the stars denote the global best solutions of the two tasks. During the evolutionary process, the individuals of the two tasks will gradually separate in the search space. If the individuals of source task T2 are directly transferred to target task T1, the transferred individuals may be useless for target task T1 due to such distinct distributions. On the contrary, if a suitable map between target task T1 and source task T2 can be designed, the individuals in source task T2 can be mapped into suitable solutions for target task T1 when KT occurs; consequently, the transferred information will be more useful and the KT process will be more effective.
To overcome the aforementioned limitations, an auxiliary-population-based multitask optimization (APMTO) algorithm is proposed in this paper. For the first limitation, we design an adaptive similarity estimation (ASE) strategy to adaptively adjust the KT frequency according to the similarity of population distributions. Specifically, the elite swarms of the source and target tasks are collected, and the average distance over all dimensions is calculated to evaluate the similarity of the two tasks. Then, the KT frequency is adjusted adaptively according to the task similarity. For the second limitation, we develop an auxiliary-population-based KT (APKT) method to map the global best solution from the source task to the target task, so as to offer more useful transfer information to the target task. Specifically, we take the minimum distance to the global best solution of the target task as the objective function and build an auxiliary population to optimize it. After the evolution of the auxiliary population, its best solution acts as the mapped global best solution to perform KT and generate more useful transfer information. The contributions of this paper are listed as follows:
  • An ASE strategy is designed to mine the distribution information of the populations and evaluate the similarity of tasks, so as to adaptively adjust the KT frequency.
  • An APKT method is proposed to map the global best solution from the source task to the target task, which will produce more useful transfer knowledge and facilitate the optimization of the target task.
  • APMTO is tested on the widely used multitask test suite CEC2022 [22] and compared with several state-of-the-art EMTO algorithms. The experimental results fully reveal the effectiveness and superiority of APMTO.
The remainder of the paper is organized as follows. Section 2 gives the preliminary of this paper, which includes the definition of MTOP, related works in EMTO, and the motivation of APMTO. Section 3 offers the detailed design of APMTO, containing the ASE and APKT strategies, together with the complete process of APMTO. Then, the experimental results and analysis are shown in Section 4. Finally, Section 5 concludes the whole paper.

2. Preliminary

2.1. MTOPs

MTOPs represent a special category of problems in the field of EC, which usually contain several similar or related tasks. For an MTOP with NT tasks, where the tth task has an objective function ft, the MTOP is defined as follows:
$\{X_1^*, X_2^*, \ldots, X_{NT}^*\} = \{\arg\min_X f_1(X), \arg\min_X f_2(X), \ldots, \arg\min_X f_{NT}(X)\}$
When solving an MTOP, since tasks often have different search spaces, a unified search space [0, 1]D (D = max{Dt}, t = 1, 2, …, NT) is commonly used in the current EMTO field, where individuals for tasks will be initialized and evolved. Only when calculating fitness will each individual be decoded into its original search space.
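In practice, the decoding step maps a unified-space individual back to a task's own bounds. The sketch below, with hypothetical 3-dimensional task bounds inside a D = 5 unified space, illustrates this convention:

```python
import numpy as np

def decode(x_unified, lower, upper):
    """Decode an individual from the unified space [0, 1]^D into a task's
    original search space; only the task's first D_t dimensions are used."""
    d_t = len(lower)
    return lower + x_unified[:d_t] * (upper - lower)

# Hypothetical example: a 3-dimensional task inside a D = 5 unified space,
# with task bounds [-10, 10] in every dimension.
x = np.array([0.5, 0.25, 1.0, 0.3, 0.8])
lo = np.array([-10.0, -10.0, -10.0])
up = np.array([10.0, 10.0, 10.0])
print(decode(x, lo, up))  # [ 0. -5. 10.]
```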

2.2. Differential Evolution (DE)

Differential evolution (DE) [2,3,4] is an efficient EC algorithm that can be divided into four steps:
(1)
Initialization: The N individuals are first randomly initialized within the search space to form the initial population as follows:
$X_{ij} = L_j + rand \times (U_j - L_j)$
where Xij denotes the jth dimension of the ith individual Xi, [Lj, Uj] is the range of the jth dimension, and rand denotes a random number in the interval [0, 1].
(2)
Mutation: Each individual Xi is mutated into a mutant vector Vi. Two mutation strategies are widely used in DE, as follows:
DE/rand/1:
$V_i = X_{r1} + F \times (X_{r2} - X_{r3})$
DE/best/1:
$V_i = gbest + F \times (X_{r1} - X_{r2})$
where r1, r2, and r3 denote three different randomly selected individuals, gbest is the global best solution, and F is the amplification factor. In this paper, we mainly use DE/best/1 as the basic evolutionary operator.
(3)
Crossover: Binomial crossover [23] is used to generate the trial vector Ui, described as
$U_{ij} = \begin{cases} V_{ij}, & \text{if } rand < CR \text{ or } j = j_r \\ X_{ij}, & \text{otherwise} \end{cases}$
where Uij and Vij denote the jth dimension of Ui and Vi, respectively, jr is a random dimension index ensuring that at least one dimension of Vi enters Ui, and CR denotes the crossover probability.
(4)
Selection: The better of Ui and Xi is kept for the next generation. In this paper, elitist selection is employed: from the N parents and N trial vectors, the best N individuals are selected.
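For concreteness, the four steps can be combined into a minimal DE/best/1 loop, sketched below on a toy sphere objective over the unified space [0, 1]^D; the population size, budget, and objective are illustrative choices, not settings from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x * x))

def de_best_1(f, D=10, N=20, F=0.5, CR=0.7, max_gen=200):
    """Minimal DE/best/1 with binomial crossover and elitist selection,
    searching the unified space [0, 1]^D (a sketch, not the paper's code)."""
    pop = rng.random((N, D))                      # (1) initialization
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        gbest = pop[np.argmin(fit)]
        r1, r2 = rng.integers(0, N, (2, N))
        V = gbest + F * (pop[r1] - pop[r2])       # (2) DE/best/1 mutation
        V = np.clip(V, 0.0, 1.0)
        mask = rng.random((N, D)) < CR            # (3) binomial crossover
        mask[np.arange(N), rng.integers(0, D, N)] = True  # guarantee j_r
        U = np.where(mask, V, pop)
        ufit = np.array([f(x) for x in U])
        merged = np.vstack([pop, U])              # (4) elitist selection
        mfit = np.concatenate([fit, ufit])
        keep = np.argsort(mfit)[:N]
        pop, fit = merged[keep], mfit[keep]
    return pop[np.argmin(fit)], float(fit.min())

best, val = de_best_1(sphere)
```

Because elitist selection keeps the best N of the merged parents and trial vectors, the best fitness is monotonically non-increasing over generations.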

2.3. Related Works in EMTO

To solve MTOPs, researchers have explored various EMTO algorithms [24]. As the crucial component of EMTO algorithms, KT mechanisms will exploit the common senses between tasks to realize synergy optimization. With the help of KT, tasks can fully mine the information from other tasks and help each other to better evolve. The current EMTO algorithms can be classified into the following two categories: single-population algorithms and multi-population algorithms.
Single-population algorithms use only one population to realize both self-evolution and KT. The most representative algorithm is MFEA [15], which assigns each individual a skill factor to determine the task it is evaluated on. Simulated binary crossover is applied between individuals with different skill factors to realize KT, which is controlled by a random mating probability (rmp). Researchers have explored many MFEA variants to improve its capacity. Feng et al. [16] combined the framework of MFEA with two basic evolutionary operators, DE and particle swarm optimization (PSO), and developed MFDE and MFPSO, respectively. Bali et al. [17] proposed MFEA-II, which adaptively adjusts the rmp through online learning. With the adaptive selection of six different crossover operators, Zhou et al. [18] proposed MFEA-AKT to select more suitable crossover operators for parents to generate their transferred offspring. Based on diffusion gradient descent (DGD), MFEA-DGD [25] was developed with novel crossover and mutation operators, accelerating the convergence speed when exploring solutions.
Different from the single-population algorithms, multi-population algorithms usually maintain multiple populations where the individuals in a population only optimize one task, and KT mechanisms are realized in different populations for different tasks. In AEMTO [19], intraSE and interKT are designed to realize self-evolution and KT, respectively, which are selected in each generation according to a transfer probability based on historical performance. Considering the bias of different tasks, Wu et al. [20] proposed the MTGA, which estimates and removes the bias of tasks to eliminate the influence of the bias information. To realize index-unaligned dimension learning, Liang et al. designed BLKT-DE [21], which splits individuals into blocks and applies the evolutionary operators among these blocks, enriching the transferred information source of all the dimensions. Li et al. [26] developed meta-knowledge transfer-based DE (MKTDE), which defines meta-knowledge, i.e., the knowledge to learn knowledge, and transfers the meta-knowledge instead of individuals to generate the transferred information. Using orthogonal experimental design, Wu et al. proposed orthogonal transfer EMTO (OTMTO) [27], which finds the optimal combinations of information from the source and target tasks to generate the transferred information for the target task. Yao et al. [28] proposed evolutionary multitasking with feature fusion (EMFF), which uses a feature fusion method to extract the features of both the source and target tasks, aggregating evolutionary information to improve the KT efficiency.

2.4. Motivation

First, almost all traditional EMTO algorithms ignore the distribution information of populations and only use a fixed KT frequency when performing KT. In fact, the distribution information is usually related to the similarity of tasks, so it can be used to evaluate the task similarity and accordingly facilitate the adjustment of the KT frequency. Tasks with similar distributions are similar and may better help each other optimize with more useful transfer information, which means that a high KT frequency will offer more opportunities for tasks to learn from each other. Conversely, if tasks have low similarity, the transfer information may not be useful for the target task to exploit. Hence, the KT frequency should be lowered to avoid useless or even negative transfer knowledge.
Moreover, many EMTO algorithms often directly transfer the individuals when performing KT, which cannot guarantee the quality of the transferred information. In fact, the distribution information of different task populations is different, which means that the selected transferred individuals may be useless for the target task. If a suitable map can be established between tasks, the transferred knowledge will better direct the optimization for the current task, and the KT process will be more effective.

3. APMTO

In this section, the detailed design of APMTO is demonstrated. The adaptive similarity estimation strategy is first introduced. Then, the detailed process of the auxiliary-population-based KT is introduced. Finally, the framework of APMTO is presented.

3.1. Adaptive Similarity Estimation

The ASE strategy is designed to estimate the task similarity, which is mainly based on the distribution features of populations, so as to adaptively adjust the KT frequency. We first collect the elite swarms of source and target tasks, and estimate the similarity of the two tasks, which is calculated as the average distance of the individuals in corresponding orders of source and target tasks. Then, the KT frequency is adjusted accordingly.
First, the individuals in elite swarms, i.e., the top fittest individuals from target and source tasks, respectively, are collected. Then, the average distance of all dimensions is calculated as follows:
$\bar{d} = \frac{1}{top} \sum_{i=1}^{top} \sqrt{\frac{1}{D} \sum_{j=1}^{D} (T_{ij} - S_{ij})^2}$
where Tij and Sij denote the jth dimension of Ti and Si, respectively, which are the ith individuals of the target and source task elite swarms, respectively, and D is the maximum dimension of the target and source tasks. Since a unified search space is employed in our algorithm, the average distance $\bar{d}$ will lie within [0, 1]. Then, the KT frequency of the target task is set as follows:
$f_{KT} = 1 - \bar{d}$
When adjusting the KT frequency, the ASE strategy effectively exploits the distribution information of populations to estimate the similarity of tasks, and thus adaptively sets a more suitable KT frequency to control the KT process. When the tasks are similar, the top fittest individuals of the two tasks are distributed closer together, which means that the distance between these individuals is small. In that case, the KT frequency should be higher so that more useful information is transferred to the target task, enhancing the KT efficiency. On the contrary, when the two tasks have quite different distributions, their individuals are far from each other, and the KT frequency should be lowered to prevent useless transferred information from being acquired by the target task, which would cause insufficient or even negative KT.
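Equations (6) and (7) can be sketched directly as below; the elite swarms are hypothetical arrays used only to show the two boundary cases.

```python
import numpy as np

def ase_kt_frequency(target_elite, source_elite):
    """ASE sketch: d_bar is the mean RMS distance between the order-aligned
    elite swarms (Equation (6)); the KT frequency is 1 - d_bar (Equation (7))."""
    T, S = np.asarray(target_elite), np.asarray(source_elite)
    # On the unified space [0, 1]^D, d_bar is guaranteed to lie in [0, 1].
    d_bar = np.mean(np.sqrt(np.mean((T - S) ** 2, axis=1)))
    return 1.0 - d_bar

top, D = 20, 50
identical = np.full((top, D), 0.5)
opposite_a, opposite_b = np.zeros((top, D)), np.ones((top, D))
print(ase_kt_frequency(identical, identical))    # identical elites -> f_KT = 1.0
print(ase_kt_frequency(opposite_a, opposite_b))  # maximally distant -> f_KT = 0.0
```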

3.2. Auxiliary-Population-Based KT

The APKT strategy is designed to build the map between the source and target tasks, so as to offer more useful transfer information to the target task. Specifically, we first build an auxiliary population initialized by the individuals in the elite swarm of the source task. Then, with a specified objective function, the auxiliary population evolves, and the optimal solution in the auxiliary population serves as the final mapped global best solution. Next, this mapped global best solution is used to generate more useful transfer information through crossover to facilitate the KT process in the target task. The optimization process of the auxiliary population is illustrated in Algorithm 1.
Algorithm 1 The optimization process of the auxiliary population
Input: top - The number of individuals in the auxiliary population;
   Elite - Elite swarm of the source task;
   TB - Global best solution of the target task;
   Median - The (top/2)th best individual of the target task;
   APMaxFEs - Maximum function evaluations for the auxiliary population.
Output: Xmap,t - The mapped global best from the source task to the target task.
Begin
1:  APFEs ← 0; // The function evaluation counter of the auxiliary population.
2:  PA ← Clone(Elite); // The initialization of the auxiliary population.
3:  NoUpdateCount ← 0;
4:  limit ← ||TB − Median||;
5:  While APFEs ≤ APMaxFEs do
6:    Apply DE/best/1 to PA;
7:    Evaluate all the offspring according to Equation (8);
8:    APFEs ← APFEs + top;
9:    Select the best top individuals from parents and offspring;
10:   Xmap,t ← The best individual in PA;
11:   If Xmap,t is updated do
12:     NoUpdateCount ← 0;
13:   Else
14:     NoUpdateCount ← NoUpdateCount + 1;
15:   End If
16:   If NoUpdateCount == 2 do
17:     Terminate the evolution;
18:   End If
19:   If ||Xmap,t − TB|| < limit do
20:     Terminate the evolution;
21:   End If
22: End While
End
The auxiliary population is used to build the map between the source and target tasks. For the target task, the auxiliary population evolves to make the individuals approximate the global best solution of the target task. The objective function of the auxiliary population is described as
$F(X) = \sum_{j=1}^{D} (X_j - TB_j)^2$
where X denotes an individual of the auxiliary population, TB is the global best solution of the target task, Xj and TBj denote the jth dimension of X and TB, respectively, and D is the dimension of the tasks.
First, the auxiliary population PA is initialized with size top in the same unified search space as the tasks. To accelerate the establishment of the map, the individuals in the auxiliary population are directly cloned from the individuals in the elite swarm of the source task, Elite, i.e., the top fittest individuals in the source task.
Then, the auxiliary population PA evolves with Equation (8) as the objective function and the function evaluation budget APMaxFEs as the termination condition. Herein, DE/best/1 is employed to evolve the auxiliary population. After evolution, the best solution Xmap,t of the auxiliary population serves as the mapped global best solution to perform KT on the target task.
Moreover, in order to save computational resources during the evolution of the auxiliary population, we set two early termination conditions as follows:
(1)
When Xmap,t is closer to the global best solution of the target task than the (top/2)th best individual of the target task is;
(2)
When the global best solution of the auxiliary population is not updated for two consecutive generations.
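A compact sketch of Algorithm 1, including both early-termination conditions, might look as follows; the toy elite swarm, target best, and median individual are hypothetical, and the DE details follow the general description above rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def auxiliary_map(elite, tb, median, ap_max_fes=100, F=0.5, CR=0.7):
    """Sketch of Algorithm 1: clone the source elite swarm and evolve it with
    DE/best/1 toward the target task's global best TB, minimizing Equation (8),
    F(X) = ||X - TB||^2, with the two early-stop conditions."""
    pa = np.array(elite, dtype=float)              # line 2: PA <- Clone(Elite)
    top, D = pa.shape
    obj = lambda P: np.sum((P - tb) ** 2, axis=1)  # Equation (8), vectorized
    fit = obj(pa)
    limit = np.linalg.norm(tb - median)            # line 4: early-stop radius
    fes, no_update = 0, 0
    while fes < ap_max_fes:
        best = pa[np.argmin(fit)]
        r1, r2 = rng.integers(0, top, (2, top))
        V = np.clip(best + F * (pa[r1] - pa[r2]), 0.0, 1.0)  # DE/best/1
        mask = rng.random((top, D)) < CR                     # binomial crossover
        mask[np.arange(top), rng.integers(0, D, top)] = True
        U = np.where(mask, V, pa)
        ufit = obj(U)
        fes += top
        merged, mfit = np.vstack([pa, U]), np.concatenate([fit, ufit])
        old_best = fit.min()
        keep = np.argsort(mfit)[:top]                        # elitist selection
        pa, fit = merged[keep], mfit[keep]
        no_update = 0 if fit.min() < old_best else no_update + 1
        x_map = pa[np.argmin(fit)]
        if no_update == 2 or np.linalg.norm(x_map - tb) < limit:
            break                                            # early termination
    return pa[np.argmin(fit)]

# Hypothetical toy setup: source elite near the origin, target best near 0.8.
elite = np.random.default_rng(7).random((20, 50)) * 0.2
x_map = auxiliary_map(elite, np.full(50, 0.8), np.full(50, 0.75))
```

Since selection is elitist and the population starts from the cloned elite, the returned mapped solution is never farther from TB (under Equation (8)) than the best initial elite individual.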
With the auxiliary population to build the map between tasks, the auxiliary-population-based KT is designed, which is shown as Algorithm 2. The auxiliary population is first initialized and evolved, as shown in line 1 in Algorithm 2. Then, the ASE strategy is used to adaptively adjust the KT frequency, as shown in line 2 in Algorithm 2.
Algorithm 2 The pseudocode of APKT
Input: Pt - Population of the target task;
   P3−t - Population of the source task;
   N - Population size of a single task;
   top - Size of the elite swarm;
   FEs - Current function evaluation count.
Output: Pt - Population of the target task;
   FEs - Function evaluation count after APKT.
Begin
1:  Update the mapped global best solution Xmap,t by Algorithm 1;
2:  Calculate fKT by Equation (7);
3:  If rand < fKT do
4:    For i = 1 to top do
5:      Generate offi by Equation (9);
6:    End For
7:    Evaluate the top offspring;
8:    FEs ← FEs + top;
9:    Keep the fittest N individuals from parents and offspring;
10: End If
End
Next, the mapped global best solution Xmap,t is exploited to generate the transferred offspring, as shown in lines 4–6. Xmap,t is used to perform binomial crossover [23] with the elite swarm of the target task, i.e., the top fittest individuals of the target task, to generate offspring. The ith offspring offi is generated as follows:
$off_{ij} = \begin{cases} X_{map,t}^j, & \text{if } rand < CR \text{ or } j = j_r \\ T_{ij}, & \text{otherwise} \end{cases}$
where offij is the jth dimension of offi, $X_{map,t}^j$ is the jth dimension of Xmap,t, rand denotes a randomly generated number in the interval [0, 1], and jr denotes a random dimension index ensuring that at least one dimension of Xmap,t is used in offi.
Finally, the top generated offspring are evaluated and the fittest N (the population size) individuals are selected from both the parents and offspring to generate the population of the next generation, as shown in lines 7–9 of Algorithm 2.
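Equation (9) can be sketched as below; the all-ones mapped solution and all-zeros elite swarm are hypothetical values chosen so that each offspring entry reveals which parent it came from.

```python
import numpy as np

rng = np.random.default_rng(3)

def transfer_offspring(x_map, target_elite, CR=0.7):
    """Equation (9) sketch: binomial crossover between the mapped global best
    X_map,t and each individual of the target task's elite swarm."""
    T = np.asarray(target_elite, dtype=float)
    top, D = T.shape
    mask = rng.random((top, D)) < CR                      # rand < CR
    mask[np.arange(top), rng.integers(0, D, top)] = True  # force dimension j_r
    return np.where(mask, x_map, T)

# Hypothetical values: x_map all ones, elites all zeros, so every offspring
# entry is 1 if taken from x_map and 0 if taken from the elite swarm.
off = transfer_offspring(np.ones(10), np.zeros((5, 10)))
```

The forced dimension j_r guarantees that every offspring inherits at least one component from the mapped global best solution.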
Using the APKT strategy, the transferred information from the source task is effectively transformed by the mapping method, which makes it consistent with the distribution of the target task, producing more useful transfer information to help the target task optimize. Although the mapped information is close to the global best solution of the target task, the mapped global best solution is still different from the optimal solution, so it not only offers a promising guide to accelerate the convergence speed but also guarantees information diversity, preventing the target task from stagnating.

3.3. Complete APMTO

With the combination of self-evolution, ASE, and APKT, APMTO is organized. We offer a simple flowchart of APMTO in Figure 2, and the pseudocode of APMTO is presented in Algorithm 3. First, the N individuals of each task are initialized in the unified search space. Meanwhile, the mapped global best solutions of the two tasks are also initialized by the auxiliary population, as in Algorithm 1. Then, each task uses DE/best/1 to realize self-evolution. Next, the APKT in Algorithm 2 is applied to the target task to realize KT. The mapped global best solution for each task is first updated, and the KT frequency of the target task is adjusted according to its similarity with the source task, as in Equation (7). Then, the transferred offspring are generated, evaluated, and selected to form the new population. APMTO is executed until the maximum function evaluation budget (MaxFEs) is used up.
Algorithm 3 The pseudocode of APMTO
Input: N - The population size for one task;
   MaxFEs - The maximum function evaluations.
Output: {X1*, X2*} - The optimal solutions of the two tasks.
Begin
1:  FEs ← 0;
2:  Initialize populations P1 and P2 with size N for the two tasks;
3:  Initialize the mapped global best solution Xmap,1 for P1 by Algorithm 1;
4:  Initialize the mapped global best solution Xmap,2 for P2 by Algorithm 1;
5:  While FEs ≤ MaxFEs do
6:    For t = 1 to 2 do
7:      Self-evolution in Pt;
8:      FEs ← FEs + N;
9:      Apply APKT in Algorithm 2 to task t;
10:   End For
11: End While
End

3.4. Time Complexity for APMTO

The time complexity of APMTO mainly consists of four parts: function evaluation, the ASE strategy, the APKT strategy, and elitist selection.
For function evaluation, the total time complexity is O(MaxFEs × D). For the ASE strategy, which is shown as Equations (6) and (7), the time complexity for two tasks in each generation is O(2 × top × D). For the auxiliary population, the time complexity in each generation is O(2 × APMaxFEs × D). For elitist selection, the time complexity is mainly for sorting, which is O(2 × N × log(N)). Combining the four components, the time complexity of APMTO is O(MaxFEs × D + G × (2 × top × D + 2 × APMaxFEs × D + 2 × N × log(N))), where G is the maximum generation of the whole evolutionary process. Note that the total generation for APMTO is approximately G = MaxFEs/(2 × N). Therefore, the total time complexity of APMTO is O(MaxFEs × (D + log(N) + D/N × (top + APMaxFEs))). Furthermore, a detailed comparison of the time complexity is shown in Table 1.
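As a rough sanity check, the Section 4.1 settings can be plugged into the per-part counts above; this is only a back-of-the-envelope sketch that treats each term as a raw operation count and assumes the natural logarithm for log(N).

```python
import math

# Parameter settings from Section 4.1 (MaxFEs = 1e5, D = 50, N = 100,
# top = 20, APMaxFEs = 100).
MaxFEs, D, N, top, APMaxFEs = 100_000, 50, 100, 20, 100
G = MaxFEs // (2 * N)                 # approximate total generations

evaluation = MaxFEs * D               # function evaluation
ase        = G * 2 * top * D          # ASE strategy (Equations (6) and (7))
auxiliary  = G * 2 * APMaxFEs * D     # auxiliary population (APKT)
selection  = G * 2 * N * math.log(N)  # elitist selection (sorting)

print(G, evaluation, ase, auxiliary, round(selection))
```

Under these illustrative counts, the auxiliary-population term is of the same order as the function evaluation term, which suggests why APMaxFEs is kept small.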

3.5. Applied to Chinese Semantic Understanding

In Chinese language semantic analysis, the processed languages often reveal similar representations in models for different application scenarios, resulting in similar language models. Hence, by considering the parameters of models as the individuals in the populations, and the loss functions as the objective functions, the EMTO algorithms, which optimize multiple tasks simultaneously, have the potential to optimize such models for different scenarios.

4. Experimental Results and Analysis

4.1. Experimental Settings

APMTO is tested on the widely used multitask test suite CEC2022 [22], which contains 10 problems, each comprising two 50-dimensional tasks. The parameters of APMTO are set as follows:
  • Maximum function evaluations: MaxFEs = 1 × 10^5;
  • The number of individuals: N = 100;
  • Self-evolution: DE/best/1, F = 0.5, CR = 0.7;
  • Size of elite swarm: top = 20;
  • Auxiliary population: APMaxFEs = 100.
To reveal the superiority of APMTO, six algorithms are employed for comparison, including MFEA (2016), MFEA-DGD (2024), AEMTO (2022), MKTDE (2022), OTMTO (2023), and BLKT-DE (2024). The settings of the six algorithms are all identical to those in the original papers. Each algorithm is executed 30 times. Wilcoxon's rank-sum test [29] is employed to analyze the statistical significance of the results, where the significance level is set as α = 0.05. For each task, the notations "+", "=", and "−" denote that APMTO performs significantly better than, comparably to, and significantly worse than the compared algorithm, respectively. The mean value, standard deviation, and p-value corresponding to the notation "+", "=", or "−" over the 30 runs of each task are reported in the table. The best result is marked in boldface. Detailed comparisons among APMTO and the six compared algorithms on CEC2022 are shown in Table 2. Meanwhile, we also offer the boxplots on eight representative tasks for APMTO and the six compared algorithms in Figure 3.

4.2. Experimental Results on CEC2022

From Table 2, we find that among all 20 tasks, APMTO achieves the most accurate solutions on 12 tasks. When compared with the single-population algorithms MFEA and MFEA-DGD, APMTO outperforms them on 14 and 13 tasks, respectively, while neither of them can surpass APMTO on more than 5 tasks. When compared with the multi-population EMTO algorithms, APMTO outperforms AEMTO, MKTDE, OTMTO, and BLKT-DE on 15, 14, 14, and 12 tasks, respectively, while AEMTO, MKTDE, OTMTO, and BLKT-DE can only dominate APMTO on 2, 2, 4, and 3 tasks, respectively. Note that although AEMTO and OTMTO also incorporate adaptive KT frequency mechanisms, our algorithm outperforms them on at least 14 tasks, which fully illustrates the effectiveness of ASE. The promising performance of APMTO may be due to the well-designed APKT strategy, which maps the solution between the source and target tasks and thus offers more useful transferred knowledge for better optimization.
Furthermore, to illustrate the convergence behavior of APMTO, we plot the convergence curves of APMTO and all the compared algorithms on eight representative tasks in Figure 4. First, in Figure 4e,f, we find that APMTO converges rapidly on Benchmark7-T1 and Benchmark7-T2; only BLKT-DE shows search behavior comparable with APMTO, while the other EMTO algorithms converge much more slowly. Nonetheless, APMTO still obtains the most accurate solutions on these two tasks. Then, from Figure 4a,b,g, although APMTO is temporarily surpassed by MFEA-DGD in the early stages, APMTO accelerates in the later stages of the evolution and still achieves the most satisfying solutions on Benchmark3-T1, Benchmark3-T2, and Benchmark10-T1. Next, as shown in Figure 4c,d,h, the performance of both MFEA-DGD and BLKT-DE degrades, while APMTO exhibits the fastest speed during the whole evolutionary process on Benchmark6-T1, Benchmark6-T2, and Benchmark10-T2.
In summary, APMTO is effective with the help of ASE and APKT, and achieves the most satisfying solutions on CEC2022.

4.3. Component Analysis

In this section, we mainly investigate the influence of the ASE and APKT strategies. In the ASE strategy, the KT frequency fKT is adaptively adjusted. To reveal its effectiveness, we employ seven variants of APMTO with fixed fKT = 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1.0 for comparison. Detailed comparisons are shown in Table 3. When fKT = 0, which is equivalent to a traditional EC algorithm for single-task optimization, APMTO outperforms it on 10 tasks. Next, when compared with the variants with the lower fKT values of 0.1, 0.3, and 0.5, APMTO outperforms them on 9, 12, and 10 tasks, respectively. Then, when compared with the higher fKT values of 0.7, 0.9, and 1.0, APMTO outperforms them on 10, 12, and 11 tasks, respectively. This phenomenon fully shows the effectiveness of our adaptive ASE strategy based on the task similarity.
Next, to reveal the effectiveness of APKT, we compare APMTO with the variant of APMTO without an APKT strategy, which directly transfers the top best individuals from the source task to the target task via binomial crossover, referred to as APMTO–w/o–APKT. The results are shown in Table 4, which illustrates that APMTO outperforms APMTO–w/o–APKT on 12 tasks and underperforms APMTO–w/o–APKT on only 2 tasks, illustrating the effectiveness of APKT.

4.4. Parameter Sensitivity Analysis

Herein, the influence of top, APMaxFEs, N, F, and CR is investigated. The parameter top, which is the size of the elite swarm and also controls the KT intensity, is investigated first. Herein, five variants of APMTO with top = 10, 40, 50, 70, and 100 are employed to compare with APMTO (top = 20). We test these variants on CEC2022 with the settings outlined in Section 4.1. Comparisons among APMTO and the five variants are revealed in Table 5, and detailed results are shown in Table S2 in the Supplementary Materials.
First, we find that when top is lower, APMTO outperforms the variants with top = 10, 40, and 50 on 7, 7, and 5 tasks, respectively, while they dominate APMTO on 7, 5, and 4 tasks, respectively. This illustrates that APMTO is not sensitive to top when top is small. However, when top is increased toward the population size N, APMTO outperforms the variants with top = 70 and 100 on 10 and 9 tasks, respectively, while they surpass APMTO on only 2 tasks and no task, respectively. This means that when the KT intensity is too large, the KT efficiency degrades because of redundant transferred information. Since APMTO with top = 20 performs the best and effectively saves computational resources when performing APKT, top is set to 20 in this paper.
Next, the number of function evaluations allocated to the auxiliary population, APMaxFEs, is investigated. Herein, we employ the variants of APMTO with APMaxFEs = 0, 50, and 500. The results are shown in Table 6, and the detailed results are shown in Table S3 in the Supplementary Materials. APMTO outperforms the variant with APMaxFEs = 0 on 11 tasks; in this case, the mapped global best is simply the optimal solution of the source task, which cannot offer useful transferred knowledge for the target task. Then, compared with the variants with APMaxFEs = 50 and 500, APMTO outperforms them on 11 and 9 tasks, respectively. This fully shows that APMaxFEs should take an intermediate value. With too small a value, the mapped global best solution remains close to the source task and may not offer an effective solution to the target task; conversely, with too many function evaluations, the mapped global best solution may show no difference from the individuals in the target task, lowering the diversity of the population and trapping it in local optima. Therefore, APMaxFEs is set to 100 in this paper.
Then, for the population size N, we compare APMTO (N = 100) with its variants with N = 25, 50, and 75; the detailed results are shown in Table S4 in the Supplementary Materials. Meanwhile, top is set to 20% of N for each variant, matching the proportion in APMTO, where N = 100 and top = 20. The results are shown in Table 7. It can be observed that APMTO outperforms its variants with N = 25, 50, and 75 on 14, 6, and 4 tasks, respectively. This shows that a larger population can enrich the diversity of the population and thereby achieve better results. Hence, the population size in APMTO is set to 100.
Finally, we investigate the parameters F and CR of the DE operator. For the amplification factor F, we compare APMTO (F = 0.5) with its variants with F = 0.6, 0.7, and 0.9; for the crossover rate CR, we compare APMTO (CR = 0.7) with its variants with CR = 0.6, 0.8, and 0.9. The comparison results for F and CR are shown in Table 8 and Table 9, respectively, and the detailed results are shown in Tables S5 and S6, respectively. First, APMTO outperforms its variants with F = 0.6, 0.7, and 0.9 on 12, 16, and 18 tasks, respectively; that is, APMTO with F = 0.5 achieves the best results with statistical significance. Next, for CR, APMTO outperforms its variant with CR = 0.6 on 5 tasks, and shows no significant difference from the variants with CR = 0.8 and 0.9. Hence, we employ these commonly used DE settings in APMTO.
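The DE operator behind F and CR is the classic DE/rand/1/bin scheme [2]; a minimal sketch with the settings adopted in the paper (F = 0.5, CR = 0.7), where the helper name is ours:

```python
import numpy as np

def de_rand_1_bin(pop, i, f=0.5, cr=0.7, rng=None):
    """Classic DE/rand/1/bin: mutate with three distinct random
    individuals, then recombine with parent i via binomial crossover."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape
    candidates = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + f * (pop[r2] - pop[r3])  # DE/rand/1 mutation
    jrand = rng.integers(d)                     # guaranteed crossover index
    mask = rng.random(d) < cr
    mask[jrand] = True
    return np.where(mask, mutant, pop[i])       # binomial crossover

rng = np.random.default_rng(2)
pop = rng.random((50, 10))
trial = de_rand_1_bin(pop, 0, rng=rng)
print(trial.shape)  # (10,)
```

A larger F amplifies the difference vector and thus the mutation step, which matches the observation above that F beyond 0.5 degrades accuracy on these benchmarks.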

4.5. Real-World Application

To further reveal the applicability of APMTO, we apply it to a real-world multitask application: the planar kinematic arm control problem (PKACP). An example of a PKACP with three joints is shown in Figure 5. The objective for each arm is to optimize the angles of its joints so that the tip of the arm P approximates the target position T, i.e., to minimize the distance between the end-effector P and the target position T. Herein, D denotes the number of joints (and links) of the arm, and the joint angles are denoted as α. Each PKACP is thus a minimization problem with D decision variables α1, α2, …, αD, and the objective function is defined as follows:
ft = ||PD − T||
where PD is the tip position of the Dth joint. Each arm has a total length Lmax over all links and a total maximum angle αmax over all joints. Since all links have the same length and the same maximum joint angle, the length of each link and the maximum angle of each joint are Lmax/D and αmax/D, respectively. Tasks are generated by varying Lmax and αmax. We generate seven PKACP test cases with dimensions of 20, 25, 30, 35, 40, 50, and 100, where each test case contains two tasks and the target position T is set to (0.5, 0.5). For more details about PKACP, please refer to [30]. In the experiments, the population size N of APMTO is set to 20, top is set to 5, and MaxFEs is set to 4000 for all compared algorithms. Detailed comparison results between APMTO and the compared EMTO algorithms are shown in Table 10.
From Table 10, we find that when compared with the single-population algorithms, APMTO outperforms MFEA on all 14 tasks and outperforms MFEA–DGD on 9 tasks. Then, when compared with the multi-population EMTO algorithms, APMTO achieves significantly better results than AEMTO, MKTDE, OTMTO, and BLKT–DE on 13, 8, 13, and 7 tasks, respectively, and only BLKT–DE surpasses APMTO on as many as 6 tasks. This fully illustrates the effectiveness and applicability of APMTO.
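The PKACP objective described above is straightforward to implement. The sketch below assumes, per the description, equal link lengths Lmax/D and joint angles measured relative to the previous link, so the absolute orientation of each link is a cumulative sum of the angles:

```python
import numpy as np

def pkacp_fitness(alpha, l_max=1.0, target=(0.5, 0.5)):
    """PKACP objective f = ||P_D - T||: the arm has D equal links of
    length l_max / D; alpha[i] is joint i's rotation relative to the
    previous link, so the link orientations are the cumulative sums."""
    alpha = np.asarray(alpha, dtype=float)
    link = l_max / alpha.size
    orient = np.cumsum(alpha)                       # absolute link orientations
    tip = np.array([np.sum(link * np.cos(orient)),  # tip position P_D
                    np.sum(link * np.sin(orient))])
    return float(np.linalg.norm(tip - np.asarray(target, dtype=float)))

# A fully stretched arm (all angles 0) has its tip at (l_max, 0):
print(round(pkacp_fitness(np.zeros(20), l_max=1.0, target=(1.0, 0.0)), 10))  # 0.0
```

Because arms with different Lmax and αmax share the same angle representation, the corresponding tasks are naturally related, which is what makes PKACP a suitable multitask benchmark.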

5. Conclusions

In this paper, an APMTO algorithm is proposed to effectively solve MTOPs. We first design an ASE strategy to evaluate the similarity of the population distributions of tasks and adaptively adjust the KT frequency. Next, to transfer more useful information from the source task, an APKT strategy is developed to map the global best solution from the source task to the target task, so as to produce more suitable transferred information for the target task. The results of APMTO on the multitask test suite CEC2022 fully demonstrate that APMTO performs significantly better than the compared state-of-the-art EMTO algorithms. Specifically, APMTO outperforms the compared algorithms on at least 10 tasks, while none of them surpasses APMTO on more than 3 tasks, which shows the effectiveness of APMTO. However, APMTO still has some potential limitations. First, APMTO is designed mainly for problems with two tasks, which means that it cannot be directly applied to many-task optimization problems (MTOPs with more than three tasks). Second, the ASE strategy estimates task similarity only from the Euclidean distance, which may not fully exploit implicit task correlations, topology, and so on.
In the future, we hope to further enhance the mechanisms designed in APMTO. We will extend them to solve many-task optimization problems [31]. Meanwhile, we will explore multi-criteria metrics to estimate task similarity. Also, we will apply APMTO to Chinese semantic understanding and to real-world multitask applications such as the planar kinematic arm control problem [30], the sensor coverage problem, the data clustering problem [32], and the double pole balancing problem [33]. Moreover, we will extend our KT mechanisms to more complicated environments such as multi-objective [34,35], large-scale [36,37,38], multi-modal [39,40,41], and dynamic [42,43] optimization.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15179746/s1: Table S1: comparison between APMTO and 11 algorithms on CEC2022; Table S2: detailed comparison among APMTO variants with different top; Table S3: detailed comparison among APMTO variants with different APMaxFEs; Table S4: detailed comparison among APMTO variants with different N; Table S5: detailed comparison among APMTO variants with different F; Table S6: detailed comparison among APMTO variants with different CR.

Author Contributions

Conceptualization, J.-H.Y., S.-Y.Z. and Z.-J.W.; methodology, J.-H.Y., S.-Y.Z. and Z.-J.W.; software, J.-H.Y., S.-Y.Z. and Z.-J.W.; validation, J.-H.Y., S.-Y.Z. and Z.-J.W.; formal analysis, J.-H.Y., S.-Y.Z. and Z.-J.W.; investigation, J.-H.Y., S.-Y.Z. and Z.-J.W.; resources, J.-H.Y., S.-Y.Z. and Z.-J.W.; data curation, J.-H.Y., S.-Y.Z. and Z.-J.W.; writing—original draft preparation, J.-H.Y. and S.-Y.Z.; writing—review and editing, Z.-J.W.; visualization, J.-H.Y.; supervision, Z.-J.W.; project administration, Z.-J.W.; funding acquisition, Z.-J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Guangdong Natural Science Foundation under Grant 2025A1515010256, as well as the Guangzhou Science and Technology Planning Project under Grants 2023A04J0388 and 2023A03J0662.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in github at https://github.com/Qingsongnc/Auxiliary-Population-based-Multitask-Optimization (accessed on 2 September 2025).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EMTO: Evolutionary multitask optimization
MTOP: Multitask optimization problem
KT: Knowledge transfer
DE: Differential evolution
ASE: Adaptive similarity estimation
APKT: Auxiliary-population-based knowledge transfer
APMTO: Auxiliary-population-based multitask optimization

References

  1. Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts—Towards Memetic Algorithms; California Institute of Technology: Pasadena, CA, USA, 2000. [Google Scholar]
  2. Storn, R.; Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  3. Nguyen, V.T.; Tran, V.M.; Bui, N.T. Self-Adaptive Differential Evolution with Gauss Distribution for Optimal Mechanism Design. Appl. Sci. 2023, 13, 6284. [Google Scholar] [CrossRef]
  4. Zhan, Z.H.; Wang, Z.J.; Jin, H.; Zhang, J. Adaptive distributed differential evolution. IEEE Trans. Cybern. 2020, 50, 4633–4647. [Google Scholar] [CrossRef]
  5. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995. [Google Scholar]
  6. Jiang, Z.; Zhu, D.; Li, X.-Y.; Han, L.-B. A Hybrid Adaptive Particle Swarm Optimization Algorithm for Enhanced Performance. Appl. Sci. 2025, 15, 6030. [Google Scholar] [CrossRef]
  7. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  8. Wang, Z.J.; Jian, J.R.; Zhan, Z.H.; Li, Y.; Kwong, S.; Zhang, J. Gene targeting differential evolution: A simple and efficient method for large-scale optimization. IEEE Trans. Evol. Comput. 2023, 27, 964–979. [Google Scholar] [CrossRef]
  9. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput. 2012, 16, 210–224. [Google Scholar]
  10. Wang, Z.J.; Zhan, Z.H.; Yu, W.J.; Lin, Y.; Zhang, J.; Gu, T.L.; Zhang, J. Dynamic group learning distributed particle swarm optimization for large-scale optimization and its application in cloud workflow scheduling. IEEE Trans. Cybern. 2020, 50, 2715–2729. [Google Scholar] [CrossRef]
  11. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Wang, H.; Kwong, S.; Zhang, J. Automatic niching differential evolution with contour prediction approach for multimodal optimization problems. IEEE Trans. Evol. Comput. 2020, 24, 114–128. [Google Scholar] [CrossRef]
  12. Cheng, Z.; Zhang, Y.; Fan, C.; Gao, X.; Jia, H.; Jiang, L. TCDE: Differential Evolution for Topographical Contour-Based Prediction to Solve Multimodal Optimization Problems. Appl. Sci. 2025, 15, 7557. [Google Scholar] [CrossRef]
  13. Wang, Z.J.; Zhou, Y.R.; Zhang, J. Adaptive estimation distribution distributed differential evolution for multimodal optimization problems. IEEE Trans. Cybern. 2022, 52, 6059–6070. [Google Scholar] [CrossRef] [PubMed]
  14. Xue, Z.F.; Wang, Z.J.; Jiang, Y.; Zhan, Z.H.; Kwong, S.; Zhang, J. Multi-level and multi-segment learning multitask optimization via a niching method. IEEE Trans. Evol. Comput. 2024. [Google Scholar] [CrossRef]
  15. Gupta, A.; Ong, Y.S.; Feng, L. Multifactorial evolution: Toward evolutionary multitasking. IEEE Trans. Evol. Comput. 2016, 20, 343–357. [Google Scholar] [CrossRef]
  16. Feng, L.; Zhou, W.; Zhou, L.; Jiang, S.W.; Zhong, J.H.; Da, B.S.; Zhu, Z.X.; Wang, Y. An empirical study of multifactorial PSO and multifactorial DE. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017. [Google Scholar]
  17. Bali, K.K.; Ong, Y.S.; Gupta, A.; Tan, P.S. Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II. IEEE Trans. Evol. Comput. 2020, 24, 69–83. [Google Scholar] [CrossRef]
  18. Zhou, L.; Feng, L.; Tan, K.C.; Zhong, J.; Zhu, Z.; Liu, K.; Chen, C. Toward adaptive knowledge transfer in multifactorial evolutionary computation. IEEE Trans. Cybern. 2021, 51, 2563–2576. [Google Scholar] [CrossRef]
  19. Xu, H.; Qin, A.K.; Xia, S. Evolutionary multitask optimization with adaptive knowledge transfer. IEEE Trans. Evol. Comput. 2022, 26, 290–303. [Google Scholar] [CrossRef]
  20. Wu, D.; Tan, X. Multitasking genetic algorithm (MTGA) for fuzzy system optimization. IEEE Trans. Fuzzy Syst. 2020, 28, 1050–1061. [Google Scholar] [CrossRef]
  21. Jiang, Y.; Zhan, Z.H.; Tan, K.C.; Zhang, J. Block-level knowledge transfer for evolutionary multitask optimization. IEEE Trans. Cybern. 2024, 54, 558–571. [Google Scholar] [CrossRef]
  22. IEEE. CEC 2022 Competition on Evolutionary Multitask Optimization. Available online: http://www.bdsc.site/websites/MTO_competition_2021/MTO_Competition_WCCI_2022.html (accessed on 18 July 2022).
  23. Chen, Y.; Zhong, J.; Feng, L.; Zhang, J. An adaptive archive-based evolutionary framework for many-task optimization. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 3870–3876. [Google Scholar] [CrossRef]
  24. Xue, Z.F.; Wang, Z.J.; Jiang, Y.; Zhan, Z.H.; Kwong, S.; Zhang, J. Neural network-based knowledge transfer for multitask optimization. IEEE Trans. Cybern. 2024, 54, 7541–7554. [Google Scholar] [CrossRef]
  25. Liu, Z.; Li, G.; Zhang, H.; Liang, Z.; Zhu, Z. Multifactorial evolutionary algorithm based on diffusion gradient descent. IEEE Trans. Cybern. 2024, 54, 4267–4279. [Google Scholar] [CrossRef] [PubMed]
  26. Li, J.Y.; Zhan, Z.H.; Tan, K.C.; Zhang, J. A meta-knowledge transfer-based differential evolution for multitask optimization. IEEE Trans. Evol. Comput. 2022, 26, 719–734. [Google Scholar] [CrossRef]
  27. Wu, S.H.; Zhan, Z.H.; Tan, K.C.; Zhang, J. Orthogonal transfer for multitask optimization. IEEE Trans. Evol. Comput. 2023, 27, 185–200. [Google Scholar] [CrossRef]
  28. Yao, L.; Zong, X.; Wang, L.; Li, R.; Yi, J. Explicit evolutionary framework with multitasking feature fusion for optimizing operational parameters in aluminum electrolysis process. IEEE Trans. Cybern. 2024, 54, 7527–7540. [Google Scholar] [CrossRef]
  29. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  30. Mouret, J.B.; Maguire, G. Quality diversity for multi-task optimization. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, New York, NY, USA, 8–12 July 2020. [Google Scholar]
  31. IEEE. CEC 2019 Competition on Evolutionary Multi-Task Optimization. Available online: https://www.bdsc.site/websites/MTO_competiton_2019/MTO_Competition_CEC_2019.html (accessed on 10 June 2019).
  32. Li, G.; Zhang, Q.; Wang, Z. Evolutionary competitive multitasking optimization. IEEE Trans. Evol. Comput. 2022, 26, 278–289. [Google Scholar] [CrossRef]
  33. Gomez, F.; Schmidhuber, J.; Miikkulainen, R.; Mitchell, M. Accelerated neural evolution through cooperatively coevolved synapses. J. Mach. Learn. Res. 2008, 9, 937–965. [Google Scholar]
  34. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  35. Wang, X.; Li, S. Multi-Objective Optimization Using Cooperative Garden Balsam Optimization with Multiple Populations. Appl. Sci. 2022, 12, 5524. [Google Scholar] [CrossRef]
  36. Wang, Z.J.; Zhan, Z.H.; Kwong, S.; Jin, H.; Zhang, J. Adaptive granularity learning distributed particle swarm optimization for large-scale optimization. IEEE Trans. Cybern. 2021, 51, 1175–1188. [Google Scholar] [CrossRef]
  37. Ma, X.; Liu, F.; Qi, Y.; Wang, X.; Li, L.; Jiao, L.; Yin, M.; Gong, M. A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables. IEEE Trans. Evolut. Comput. 2015, 20, 275–298. [Google Scholar] [CrossRef]
  38. Liu, S.; Wang, Z.J.; Kou, Z.; Zhan, Z.H.; Kwong, S.; Zhang, J. Less is More: A small-scale learning particle swarm optimization for large-scale optimization. IEEE Trans. Cybern. 2025, accepted. [Google Scholar]
  39. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Yuan, H.Q.; Gu, T.L.; Kwong, S.; Zhang, J. Dual-strategy differential evolution with affinity propagation clustering for multimodal optimization problems. IEEE Trans. Evol. Comput. 2018, 22, 894–908. [Google Scholar] [CrossRef]
  40. Wang, Y.; Li, H.X.; Yen, G.G.; Song, W. MOMMOP: Multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems. IEEE Trans. Cybern. 2015, 45, 830–843. [Google Scholar] [CrossRef] [PubMed]
  41. Wang, Z.J.; Zhan, Z.H.; Li, Y.; Kwong, S.; Jeon, S.W.; Zhang, J. Fitness and distance based local search with adaptive differential evolution for multimodal optimization problems. IEEE Emerg. Top. Comput. Intell. 2023, 7, 684–699. [Google Scholar] [CrossRef]
  42. Das, S.; Mandal, A.; Mukherjee, R. An adaptive differential evolution algorithm for global optimization in dynamic environments. IEEE Trans. Cybern. 2014, 44, 966–978. [Google Scholar] [CrossRef] [PubMed]
  43. Zhou, B.; Li, E. Dynamic Robust Optimization Method Based on Two-Stage Evaluation and Its Application in Optimal Scheduling of Integrated Energy System. Appl. Sci. 2024, 14, 4997. [Google Scholar] [CrossRef]
Figure 1. Illustration of the map from source task T2 to target task T1, where the stars denote the global best solutions of the two tasks.
Figure 2. The framework of APMTO.
Figure 3. The boxplots of APMTO and the compared algorithms on CEC2022. (a) Benchmark3-T1; (b) Benchmark3-T2; (c) Benchmark6-T1; (d) Benchmark6-T2; (e) Benchmark7-T1; (f) Benchmark7-T2; (g) Benchmark10-T1; (h) Benchmark10-T2.
Figure 4. The convergence curves for APMTO and compared EMTO algorithms on CEC2022. (a) Benchmark3-T1; (b) Benchmark3-T2; (c) Benchmark6-T1; (d) Benchmark6-T2; (e) Benchmark7-T1; (f) Benchmark7-T2; (g) Benchmark10-T1; (h) Benchmark10-T2.
Figure 5. Illustration of PKACP.
Table 1. Time complexity for APMTO and other EMTO algorithms.

Algorithm | Time Complexity
APMTO | O(MaxFEs × (D + log(N) + D/N × (top + APMaxFEs)))
MFEA | O(MaxFEs × (D + log(N)))
MFDE | O(MaxFEs × (D + log(N)))
MFPSO | O(MaxFEs × (D + log(N)))
MFEA-AKT | O(MaxFEs × (D + log(N)))
MFEA-II | O(MaxFEs × (D + T/N + log(N))), where T is the time complexity of each calculation of the rmp matrix
MFEA-DGD | O(MaxFEs × (D + log(N)))
AEMTO | O(MaxFEs × (D + 2 + log(N)))
MTGA | O(MaxFEs × (D + log(N)))
BLKT-DE | O(MaxFEs × (D + log(N) + t × D × K)), where t is the average number of iterations of K-means with K clusters
MKTDE | O(MaxFEs × (2 × D + log(N) + 1/N))
OTMTO | O(MaxFEs × (D + T × (nb × D + nb)/N + D/N + 1/(2 × N)))
EMFF | O(MaxFEs × D × ((m + u)/N + 1)), where m is the number of transferred individuals in each generation and u is the number of donor individuals
Table 2. Comparison between APMTO and the 6 compared EMTO algorithms on CEC2022. The notations "+", "=", and "−" denote that APMTO performs better than, approximate to, and worse than the compared algorithm. Each compared cell lists the mean result, the significance mark, and the p-value; the APMTO column lists the mean and standard deviation.

CEC2022 | APMTO | MFEA | MFEA–DGD | AEMTO | MKTDE | OTMTO | BLKT–DE
Benchmark1-T1 | 6.24E+02 (8.14E+00) | 6.52E+02 (+) 0.0000 | 6.17E+02 (−) 0.0000 | 6.21E+02 (−) 0.0152 | 6.09E+02 (−) 0.0000 | 6.14E+02 (−) 0.0000 | 6.17E+02 (−) 0.0000
Benchmark1-T2 | 6.26E+02 (1.08E+01) | 6.51E+02 (+) 0.0000 | 6.21E+02 (=) 0.4119 | 6.21E+02 (−) 0.0059 | 6.08E+02 (−) 0.0000 | 6.14E+02 (−) 0.0000 | 6.15E+02 (−) 0.0000
Benchmark2-T1 | 7.00E+02 (9.10E-03) | 7.01E+02 (+) 0.0000 | 7.05E+02 (+) 0.0000 | 7.01E+02 (+) 0.0000 | 7.00E+02 (+) 0.0000 | 7.00E+02 (+) 0.0002 | 7.00E+02 (+) 0.0000
Benchmark2-T2 | 7.00E+02 (5.55E-03) | 7.01E+02 (+) 0.0000 | 7.14E+02 (+) 0.0000 | 7.01E+02 (+) 0.0000 | 7.00E+02 (+) 0.0000 | 7.00E+02 (+) 0.0173 | 7.00E+02 (+) 0.0000
Benchmark3-T1 | 2.92E+05 (1.69E+05) | 4.69E+06 (+) 0.0000 | 3.13E+05 (=) 0.1907 | 2.03E+07 (+) 0.0000 | 8.27E+06 (+) 0.0000 | 2.54E+07 (+) 0.0000 | 3.31E+06 (+) 0.0000
Benchmark3-T2 | 2.71E+05 (1.79E+05) | 4.41E+06 (+) 0.0000 | 4.28E+06 (+) 0.0007 | 2.13E+07 (+) 0.0000 | 8.44E+06 (+) 0.0000 | 2.43E+07 (+) 0.0000 | 4.16E+06 (+) 0.0000
Benchmark4-T1 | 1.30E+03 (1.03E-01) | 1.30E+03 (−) 0.0038 | 1.30E+03 (=) 0.7958 | 1.30E+03 (=) 0.0798 | 1.30E+03 (=) 0.4553 | 1.30E+03 (−) 0.0000 | 1.30E+03 (=) 0.2838
Benchmark4-T2 | 1.30E+03 (7.52E-02) | 1.30E+03 (−) 0.0000 | 1.30E+03 (+) 0.0000 | 1.30E+03 (+) 0.0001 | 1.30E+03 (=) 0.0963 | 1.30E+03 (−) 0.0052 | 1.30E+03 (=) 0.0537
Benchmark5-T1 | 1.52E+03 (1.04E+01) | 1.57E+03 (+) 0.0000 | 1.27E+04 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.53E+03 (+) 0.0001 | 1.54E+03 (+) 0.0000
Benchmark5-T2 | 1.52E+03 (8.95E+00) | 1.57E+03 (+) 0.0000 | 5.01E+04 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.53E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000
Benchmark6-T1 | 2.06E+05 (1.11E+05) | 2.54E+06 (+) 0.0000 | 8.92E+06 (+) 0.0000 | 8.75E+06 (+) 0.0000 | 2.13E+07 (+) 0.0000 | 1.28E+07 (+) 0.0000 | 1.96E+06 (+) 0.0000
Benchmark6-T2 | 1.33E+05 (6.49E+04) | 2.33E+06 (+) 0.0000 | 4.09E+07 (+) 0.0000 | 9.23E+06 (+) 0.0000 | 2.11E+07 (+) 0.0000 | 1.35E+07 (+) 0.0000 | 1.79E+06 (+) 0.0000
Benchmark7-T1 | 2.87E+03 (2.82E+02) | 3.40E+03 (+) 0.0000 | 3.86E+03 (+) 0.0000 | 4.09E+03 (+) 0.0000 | 4.25E+03 (+) 0.0000 | 3.83E+03 (+) 0.0000 | 2.95E+03 (=) 0.3632
Benchmark7-T2 | 2.87E+03 (2.53E+02) | 3.38E+03 (+) 0.0000 | 3.65E+03 (+) 0.0000 | 4.13E+03 (+) 0.0000 | 4.24E+03 (+) 0.0000 | 3.92E+03 (+) 0.0000 | 3.09E+03 (+) 0.0028
Benchmark8-T1 | 5.21E+02 (3.34E-02) | 5.21E+02 (−) 0.0000 | 5.21E+02 (−) 0.0000 | 5.21E+02 (=) 0.1453 | 5.21E+02 (=) 0.3478 | 5.21E+02 (=) 0.7618 | 5.21E+02 (=) 0.1537
Benchmark8-T2 | 5.21E+02 (2.57E-02) | 5.21E+02 (−) 0.0000 | 5.21E+02 (−) 0.0000 | 5.21E+02 (=) 0.2838 | 5.21E+02 (=) 0.8534 | 5.21E+02 (=) 0.3790 | 5.21E+02 (=) 0.5997
Benchmark9-T1 | 1.36E+04 (2.37E+03) | 8.39E+03 (−) 0.0000 | 1.58E+04 (+) 0.0000 | 1.51E+04 (+) 0.0000 | 1.49E+04 (+) 0.0001 | 1.48E+04 (+) 0.0025 | 8.98E+03 (−) 0.0000
Benchmark9-T2 | 1.62E+03 (4.40E-01) | 1.62E+03 (=) 0.8187 | 1.62E+03 (−) 0.0017 | 1.62E+03 (+) 0.0000 | 1.62E+03 (+) 0.0000 | 1.62E+03 (+) 0.0000 | 1.62E+03 (+) 0.0000
Benchmark10-T1 | 1.27E+04 (6.47E+03) | 4.80E+04 (+) 0.0000 | 5.24E+04 (+) 0.0000 | 5.57E+04 (+) 0.0000 | 6.65E+04 (+) 0.0000 | 4.27E+04 (+) 0.0000 | 3.77E+04 (+) 0.0000
Benchmark10-T2 | 1.86E+05 (1.23E+05) | 4.16E+06 (+) 0.0000 | 6.71E+07 (+) 0.0000 | 1.03E+07 (+) 0.0000 | 2.30E+07 (+) 0.0000 | 1.45E+07 (+) 0.0000 | 2.71E+06 (+) 0.0000
Number of "+/=/−" | - | 14/1/5 | 13/3/4 | 15/3/2 | 14/4/2 | 14/2/4 | 12/5/3
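The "+", "=", and "−" marks and the p-values in Tables 2–4 come from a pairwise significance test over independent runs. As an illustration only (we assume a Wilcoxon rank-sum test at the 0.05 level over 30 runs; the exact test configuration follows the paper's setup), such marks can be produced as follows:

```python
import numpy as np
from scipy.stats import ranksums

def significance_mark(apmto_runs, other_runs, alpha=0.05):
    """Produce the '+', '=', or '−' mark from APMTO's point of view on
    a minimization task: '=' if the rank-sum test finds no significant
    difference at level alpha, otherwise '+'/'−' depending on which
    algorithm has the lower mean error."""
    _, p = ranksums(apmto_runs, other_runs)
    if p >= alpha:
        return "="
    return "+" if np.mean(apmto_runs) < np.mean(other_runs) else "−"

rng = np.random.default_rng(3)
apmto = rng.normal(100.0, 1.0, 30)   # hypothetical errors over 30 runs
other = rng.normal(110.0, 1.0, 30)
print(significance_mark(apmto, other))  # +
```

The final "Number of +/=/−" row is then simply the count of each mark over the 20 tasks.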
Table 3. Comparison between APMTO and its variants with fixed KT frequency. The notations "+", "=", and "−" denote that APMTO performs better than, approximate to, and worse than the compared variant with fixed fKT. Each compared cell lists the mean result, the significance mark, and the p-value; the APMTO column lists the mean and standard deviation.

CEC2022 | APMTO | fKT = 0.0 | fKT = 0.1 | fKT = 0.3 | fKT = 0.5 | fKT = 0.7 | fKT = 0.9 | fKT = 1.0
Benchmark1-T1 | 6.24E+02 (8.14E+00) | 6.23E+02 (=) 0.6414 | 6.24E+02 (=) 0.8534 | 6.23E+02 (=) 0.1907 | 6.22E+02 (=) 0.3183 | 6.23E+02 (=) 0.2643 | 6.22E+02 (=) 0.2581 | 6.22E+02 (−) 0.0116
Benchmark1-T2 | 6.26E+02 (1.08E+01) | 6.26E+02 (=) 0.1297 | 6.24E+02 (=) 0.9587 | 6.21E+02 (=) 0.0679 | 6.22E+02 (=) 0.1453 | 6.23E+02 (=) 0.1297 | 6.22E+02 (=) 0.2062 | 6.20E+02 (−) 0.0005
Benchmark2-T1 | 7.00E+02 (9.10E-03) | 7.00E+02 (+) 0.0103 | 7.00E+02 (+) 0.0037 | 7.00E+02 (+) 0.0021 | 7.00E+02 (+) 0.0045 | 7.00E+02 (+) 0.0049 | 7.00E+02 (+) 0.0009 | 7.00E+02 (+) 0.0088
Benchmark2-T2 | 7.00E+02 (5.55E-03) | 7.00E+02 (+) 0.0093 | 7.00E+02 (+) 0.0002 | 7.00E+02 (=) 0.2236 | 7.00E+02 (=) 0.0846 | 7.00E+02 (+) 0.0167 | 7.00E+02 (+) 0.0086 | 7.00E+02 (=) 0.0569
Benchmark3-T1 | 2.92E+05 (1.69E+05) | 3.18E+05 (=) 0.6952 | 3.73E+05 (=) 0.0773 | 3.40E+05 (=) 0.1087 | 3.83E+05 (=) 0.1023 | 3.58E+05 (=) 0.1669 | 4.78E+05 (+) 0.0022 | 4.07E+05 (+) 0.0189
Benchmark3-T2 | 2.71E+05 (1.79E+05) | 4.01E+05 (+) 0.0116 | 2.97E+05 (=) 0.4290 | 3.61E+05 (=) 0.0877 | 3.59E+05 (=) 0.2116 | 3.38E+05 (=) 0.0877 | 4.84E+05 (+) 0.0001 | 5.54E+05 (+) 0.0011
Benchmark4-T1 | 1.30E+03 (1.03E-01) | 1.30E+03 (=) 0.5592 | 1.30E+03 (=) 0.7394 | 1.30E+03 (−) 0.0203 | 1.30E+03 (=) 0.0798 | 1.30E+03 (=) 0.8883 | 1.30E+03 (=) 0.4204 | 1.30E+03 (=) 0.8418
Benchmark4-T2 | 1.30E+03 (7.52E-02) | 1.30E+03 (=) 0.5395 | 1.30E+03 (=) 0.8073 | 1.30E+03 (=) 0.2838 | 1.30E+03 (=) 0.8534 | 1.30E+03 (=) 0.6952 | 1.30E+03 (=) 0.8303 | 1.30E+03 (=) 0.5997
Benchmark5-T1 | 1.52E+03 (1.04E+01) | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.53E+03 (+) 0.0001 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000
Benchmark5-T2 | 1.52E+03 (8.95E+00) | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.53E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000 | 1.54E+03 (+) 0.0000
Benchmark6-T1 | 2.06E+05 (1.11E+05) | 2.83E+05 (=) 0.0798 | 2.25E+05 (=) 0.5106 | 3.31E+05 (+) 0.0007 | 3.36E+05 (+) 0.0014 | 3.57E+05 (+) 0.0004 | 4.44E+05 (+) 0.0001 | 3.59E+05 (+) 0.0003
Benchmark6-T2 | 1.33E+05 (6.49E+04) | 1.92E+05 (+) 0.0096 | 2.91E+05 (+) 0.0000 | 2.72E+05 (+) 0.0000 | 3.08E+05 (+) 0.0000 | 3.47E+05 (+) 0.0000 | 2.79E+05 (+) 0.0000 | 2.87E+05 (+) 0.0000
Benchmark7-T1 | 2.87E+03 (2.82E+02) | 2.73E+03 (=) 0.0555 | 2.80E+03 (=) 0.1858 | 2.70E+03 (−) 0.0099 | 2.84E+03 (=) 0.4825 | 2.73E+03 (=) 0.0519 | 2.72E+03 (−) 0.0210 | 2.67E+03 (−) 0.0035
Benchmark7-T2 | 2.87E+03 (2.53E+02) | 2.75E+03 (=) 0.1297 | 2.80E+03 (=) 0.5793 | 2.70E+03 (−) 0.0099 | 2.78E+03 (=) 0.3632 | 2.83E+03 (=) 0.5201 | 2.86E+03 (=) 0.6627 | 2.79E+03 (=) 0.4464
Benchmark8-T1 | 5.21E+02 (3.34E-02) | 5.21E+02 (=) 0.2838 | 5.21E+02 (=) 0.1154 | 5.21E+02 (=) 0.1624 | 5.21E+02 (=) 0.4464 | 5.21E+02 (=) 0.3711 | 5.21E+02 (=) 0.9117 | 5.21E+02 (=) 0.1715
Benchmark8-T2 | 5.21E+02 (2.57E-02) | 5.21E+02 (=) 0.3403 | 5.21E+02 (=) 0.1023 | 5.21E+02 (=) 0.0933 | 5.21E+02 (=) 0.8303 | 5.21E+02 (=) 0.0679 | 5.21E+02 (=) 0.3555 | 5.21E+02 (=) 0.1494
Benchmark9-T1 | 1.36E+04 (2.37E+03) | 1.47E+04 (=) 0.1120 | 1.49E+04 (+) 0.0027 | 1.50E+04 (+) 0.0000 | 1.48E+04 (+) 0.0014 | 1.48E+04 (+) 0.0075 | 1.46E+04 (=) 0.1373 | 1.50E+04 (+) 0.0002
Benchmark9-T2 | 1.62E+03 (4.40E-01) | 1.62E+03 (=) 0.2062 | 1.62E+03 (=) 0.9705 | 1.62E+03 (+) 0.0116 | 1.62E+03 (=) 0.1120 | 1.62E+03 (+) 0.0019 | 1.62E+03 (+) 0.0002 | 1.62E+03 (+) 0.0013
Benchmark10-T1 | 1.27E+04 (6.47E+03) | 1.11E+04 (=) 0.4643 | 1.26E+04 (=) 0.8883 | 1.58E+04 (=) 0.2838 | 1.62E+04 (+) 0.0176 | 1.59E+04 (=) 0.1188 | 1.49E+04 (=) 0.2905 | 1.80E+04 (+) 0.0036
Benchmark10-T2 | 1.86E+05 (1.23E+05) | 2.82E+05 (+) 0.0006 | 3.56E+05 (+) 0.0000 | 4.56E+05 (+) 0.0002 | 3.09E+05 (+) 0.0007 | 4.04E+05 (+) 0.0000 | 4.46E+05 (+) 0.0000 | 4.16E+05 (+) 0.0000
Number of "+/=/−" | - | 7/13/0 | 7/13/0 | 8/9/3 | 8/12/0 | 9/11/0 | 10/9/1 | 11/6/3
Table 4. Comparison between APMTO and APMTO–w/o–APKT. The notations "+", "=", and "−" denote that APMTO performs better than, approximate to, and worse than APMTO–w/o–APKT. Each compared cell lists the mean result, the significance mark, and the p-value; the APMTO column lists the mean and standard deviation.

CEC2022 | APMTO | APMTO–w/o–APKT
Benchmark1-T1 | 6.24E+02 (8.14E+00) | 6.18E+02 (−) 0.0003
Benchmark1-T2 | 6.26E+02 (1.08E+01) | 6.21E+02 (=) 0.0850
Benchmark2-T1 | 7.00E+02 (9.10E-03) | 7.00E+02 (+) 0.0049
Benchmark2-T2 | 7.00E+02 (5.55E-03) | 7.00E+02 (+) 0.0215
Benchmark3-T1 | 2.92E+05 (1.69E+05) | 3.87E+05 (+) 0.0112
Benchmark3-T2 | 2.71E+05 (1.79E+05) | 4.24E+05 (+) 0.0015
Benchmark4-T1 | 1.30E+03 (1.03E-01) | 1.30E+03 (=) 0.5298
Benchmark4-T2 | 1.30E+03 (7.52E-02) | 1.30E+03 (=) 0.3790
Benchmark5-T1 | 1.52E+03 (1.04E+01) | 1.54E+03 (+) 0.0000
Benchmark5-T2 | 1.52E+03 (8.95E+00) | 1.54E+03 (+) 0.0000
Benchmark6-T1 | 2.06E+05 (1.11E+05) | 4.71E+05 (+) 0.0001
Benchmark6-T2 | 1.33E+05 (6.49E+04) | 3.29E+05 (+) 0.0000
Benchmark7-T1 | 2.87E+03 (2.82E+02) | 2.70E+03 (−) 0.0052
Benchmark7-T2 | 2.87E+03 (2.53E+02) | 2.74E+03 (=) 0.1120
Benchmark8-T1 | 5.21E+02 (3.34E-02) | 5.21E+02 (=) 0.5298
Benchmark8-T2 | 5.21E+02 (2.57E-02) | 5.21E+02 (=) 0.6952
Benchmark9-T1 | 1.36E+04 (2.37E+03) | 1.48E+04 (+) 0.0108
Benchmark9-T2 | 1.62E+03 (4.40E-01) | 1.62E+03 (+) 0.0018
Benchmark10-T1 | 1.27E+04 (6.47E+03) | 1.67E+04 (+) 0.0035
Benchmark10-T2 | 1.86E+05 (1.23E+05) | 3.59E+05 (+) 0.0002
Number of "+/=/−" | - | 12/6/2
Table 5. Comparison between APMTO and APMTO with different top values. The notations "+", "=", and "−" denote that APMTO performs better than, comparably to, or worse than APMTO with the corresponding top value, respectively.
| top | APMTO (20) | 10 | 40 | 50 | 70 | 100 |
|---|---|---|---|---|---|---|
| Number of "+/=/−" | | 7/6/7 | 7/8/5 | 5/11/4 | 10/8/2 | 9/11/0 |
Table 6. Comparison between APMTO and its variants with different APMaxFEs. The notations "+", "=", and "−" denote that APMTO performs better than, comparably to, or worse than APMTO with the corresponding APMaxFEs value, respectively.
| APMaxFEs | APMTO (100) | 0 | 50 | 500 |
|---|---|---|---|---|
| Number of "+/=/−" | | 11/6/3 | 11/9/0 | 9/9/2 |
Table 7. Comparison between APMTO and its variants with different population sizes N. The notations "+", "=", and "−" denote that APMTO performs better than, comparably to, or worse than APMTO with the corresponding N value, respectively.
| N | APMTO (100) | 25 | 50 | 75 |
|---|---|---|---|---|
| Number of "+/=/−" | | 14/4/2 | 6/10/4 | 4/13/3 |
Table 8. Comparison between APMTO and its variants with different F values. The notations "+", "=", and "−" denote that APMTO performs better than, comparably to, or worse than APMTO with the corresponding F value, respectively.
| F | APMTO (0.5) | 0.6 | 0.7 | 0.9 |
|---|---|---|---|---|
| Number of "+/=/−" | | 12/6/2 | 16/2/2 | 18/2/0 |
Table 9. Comparison between APMTO and its variants with different CR values. The notations "+", "=", and "−" denote that APMTO performs better than, comparably to, or worse than APMTO with the corresponding CR value, respectively.
| CR | APMTO (0.7) | 0.6 | 0.8 | 0.9 |
|---|---|---|---|---|
| Number of "+/=/−" | | 5/13/2 | 4/8/8 | 8/4/8 |
Table 10. Experimental results of APMTO and the compared EMTO algorithms on PKACP.
| PKACP | Task | Std | APMTO (Mean) | MFEA (Mean, p) | MFEA–DGD (Mean, p) | AEMTO (Mean, p) | MKTDE (Mean, p) | OTMTO (Mean, p) | BLKT–DE (Mean, p) |
|---|---|---|---|---|---|---|---|---|---|
| d20 | T1 | 8.06 × 10^−5 | 3.57 × 10^−1 | 3.57 × 10^−1 (+), 0.0022 | 3.66 × 10^−1 (+), 0.0000 | 3.57 × 10^−1 (=), 0.2905 | 3.57 × 10^−1 (=), 0.3329 | 3.57 × 10^−1 (=), 0.9234 | 3.57 × 10^−1 (−), 0.0000 |
| | T2 | 5.44 × 10^−4 | 2.46 × 10^−1 | 2.48 × 10^−1 (+), 0.0000 | 2.46 × 10^−1 (−), 0.0000 | 2.48 × 10^−1 (+), 0.0000 | 2.46 × 10^−1 (+), 0.0000 | 2.46 × 10^−1 (+), 0.0000 | 2.46 × 10^−1 (+), 0.0000 |
| d25 | T1 | 1.20 × 10^−4 | 3.58 × 10^−1 | 3.58 × 10^−1 (+), 0.0000 | 3.67 × 10^−1 (+), 0.0000 | 3.58 × 10^−1 (+), 0.0085 | 3.58 × 10^−1 (=), 0.7062 | 3.58 × 10^−1 (+), 0.0189 | 3.58 × 10^−1 (−), 0.0014 |
| | T2 | 9.60 × 10^−4 | 2.49 × 10^−1 | 2.51 × 10^−1 (+), 0.0000 | 2.48 × 10^−1 (−), 0.0000 | 2.51 × 10^−1 (+), 0.0000 | 2.49 × 10^−1 (+), 0.0000 | 2.49 × 10^−1 (+), 0.0000 | 2.48 × 10^−1 (+), 0.0002 |
| d30 | T1 | 5.73 × 10^−5 | 3.58 × 10^−1 | 3.58 × 10^−1 (+), 0.0000 | 3.70 × 10^−1 (+), 0.0000 | 3.58 × 10^−1 (+), 0.0000 | 3.58 × 10^−1 (+), 0.0242 | 3.58 × 10^−1 (+), 0.0009 | 3.58 × 10^−1 (−), 0.0196 |
| | T2 | 5.64 × 10^−4 | 2.50 × 10^−1 | 2.57 × 10^−1 (+), 0.0000 | 2.50 × 10^−1 (−), 0.0000 | 2.54 × 10^−1 (+), 0.0000 | 2.51 × 10^−1 (+), 0.0000 | 2.50 × 10^−1 (+), 0.0001 | 2.50 × 10^−1 (+), 0.0226 |
| d35 | T1 | 4.08 × 10^−4 | 4.94 × 10^−2 | 4.98 × 10^−2 (+), 0.0000 | 5.51 × 10^−2 (+), 0.0000 | 5.00 × 10^−2 (+), 0.0000 | 4.94 × 10^−2 (+), 0.0001 | 5.18 × 10^−2 (+), 0.0000 | 4.93 × 10^−2 (+), 0.0042 |
| | T2 | 1.44 × 10^−5 | 4.92 × 10^−1 | 4.92 × 10^−1 (+), 0.0000 | 4.94 × 10^−1 (+), 0.0000 | 4.92 × 10^−1 (+), 0.0000 | 4.92 × 10^−1 (+), 0.0099 | 4.93 × 10^−1 (+), 0.0000 | 4.92 × 10^−1 (+), 0.0012 |
| d40 | T1 | 1.06 × 10^−4 | 2.14 × 10^−1 | 2.15 × 10^−1 (+), 0.0000 | 2.27 × 10^−1 (+), 0.0000 | 2.14 × 10^−1 (+), 0.0000 | 2.14 × 10^−1 (+), 0.0007 | 2.15 × 10^−1 (+), 0.0000 | 2.14 × 10^−1 (=), 0.1087 |
| | T2 | 2.04 × 10^−3 | 2.90 × 10^−1 | 3.00 × 10^−1 (+), 0.0000 | 2.97 × 10^−1 (+), 0.0000 | 2.97 × 10^−1 (+), 0.0000 | 2.91 × 10^−1 (=), 0.0748 | 2.96 × 10^−1 (+), 0.0000 | 2.89 × 10^−1 (−), 0.0005 |
| d50 | T1 | 4.64 × 10^−3 | 2.83 × 10^−1 | 2.97 × 10^−1 (+), 0.0000 | 2.79 × 10^−1 (−), 0.0000 | 2.93 × 10^−1 (+), 0.0000 | 2.84 × 10^−1 (=), 0.0594 | 2.86 × 10^−1 (+), 0.0001 | 2.80 × 10^−1 (−), 0.0003 |
| | T2 | 7.08 × 10^−5 | 3.37 × 10^−1 | 3.37 × 10^−1 (+), 0.0000 | 3.49 × 10^−1 (+), 0.0000 | 3.37 × 10^−1 (+), 0.0000 | 3.37 × 10^−1 (+), 0.0000 | 3.37 × 10^−1 (+), 0.0000 | 3.37 × 10^−1 (+), 0.0000 |
| d100 | T1 | 1.39 × 10^−4 | 4.82 × 10^−1 | 4.83 × 10^−1 (+), 0.0000 | 4.90 × 10^−1 (+), 0.0000 | 4.82 × 10^−1 (+), 0.0000 | 4.82 × 10^−1 (=), 0.1669 | 4.84 × 10^−1 (+), 0.0000 | 4.82 × 10^−1 (+), 0.0088 |
| | T2 | 1.09 × 10^−2 | 1.46 × 10^−1 | 1.90 × 10^−1 (+), 0.0000 | 1.10 × 10^−1 (−), 0.0000 | 1.81 × 10^−1 (+), 0.0000 | 1.46 × 10^−1 (=), 0.9587 | 1.84 × 10^−1 (+), 0.0000 | 1.26 × 10^−1 (−), 0.0000 |
| Number of "+/=/−" | | | | 14/0/0 | 9/0/5 | 13/1/0 | 8/6/0 | 13/1/0 | 7/1/6 |
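The "+/=/−" tallies above pair each compared algorithm's mean result with a two-sided p-value and a significance mark. As a minimal illustrative sketch only — assuming a Wilcoxon rank-sum test over per-run results at significance level 0.05 and a minimization objective (the exact statistical procedure is the one specified in the paper's experimental setup; the function names below are hypothetical) — such marks can be computed like this:

```python
import math

def _midranks(values):
    """Ranks (1-based), with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the block of values tied with position i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (adequate for sample sizes like 30 independent runs)."""
    n1, n2 = len(a), len(b)
    ranks = _midranks(list(a) + list(b))
    w = sum(ranks[:n1])                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|)

def mark(apmto_runs, other_runs, alpha=0.05):
    """'+': APMTO significantly better (lower error), '=': no significant
    difference, '-': APMTO significantly worse (ASCII minus used here)."""
    if ranksum_p(apmto_runs, other_runs) >= alpha:
        return "="
    mean_a = sum(apmto_runs) / len(apmto_runs)
    mean_b = sum(other_runs) / len(other_runs)
    return "+" if mean_a < mean_b else "-"
```

For instance, two clearly separated 30-run samples yield "+" for the lower-error algorithm, while identical samples yield "=". Counting the marks over all benchmark tasks reproduces summary rows of the form "Number of +/=/−".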
Yuan, J.-H.; Zhou, S.-Y.; Wang, Z.-J. Auxiliary Population Multitask Optimization Based on Chinese Semantic Understanding. Appl. Sci. 2025, 15, 9746. https://doi.org/10.3390/app15179746