Article

A Knowledge Sharing and Individually Guided Evolutionary Algorithm for Multi-Task Optimization Problems

1 Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
2 ECE Department, New Jersey Institute of Technology, Newark, NJ 07102, USA
3 Department of Computer Science, King Abdulaziz University, Jeddah 21481, Saudi Arabia
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 602; https://doi.org/10.3390/app13010602
Submission received: 2 December 2022 / Revised: 27 December 2022 / Accepted: 28 December 2022 / Published: 1 January 2023

Abstract: Multi-task optimization (MTO) is an emerging evolutionary computation paradigm. It solves multiple optimization tasks concurrently and improves optimization performance by utilizing similarities among tasks and historical optimization knowledge. To ensure high performance, it is important to choose proper individuals for each task. Most MTO algorithms limit each individual to one task, which weakens the effects of information exchange. To improve the efficiency of knowledge transfer and choose more suitable individuals to learn from other tasks, this work proposes a general MTO framework named individually guided multi-task optimization (IMTO). It divides evolution into vertical and horizontal processes, and each individual is fully explored to learn from the execution of other tasks. By using the concept of skill membership, individuals with higher solving ability are selected. In addition, to further improve the effect of knowledge transfer, only inferior individuals are selected to learn from other tasks at each generation. The significant advantage of IMTO over the multifactorial evolutionary framework and baseline solvers is verified via a series of benchmark studies.

1. Introduction

Evolutionary algorithms (EAs) are population-based stochastic optimization methods that incorporate mechanisms of natural selection and population genetics from the field of Artificial Intelligence [1,2,3]. They are based on a collective learning process and can start with an arbitrarily initialized population [4,5]. Individuals evolve towards progressively better solution regions by means of repeated reproduction and mutation [6,7,8,9]. Due to their flexibility, ease of interfacing, extensibility, and high search efficiency, population-based optimizers have been successfully applied to path planning, task assignment, energy saving, and network configuration [10,11,12,13,14,15,16].
Even though EAs have proven useful in many applications, they have some disadvantages. Many real-world optimization problems are similar to one another, yet traditional EAs focus on solving a single optimization problem and regard each optimization as an independent process, ignoring similarities among different tasks. When handling a new optimization problem, EAs have to start from scratch, assuming that no historical knowledge from solving related problems is available [17]. Inspired by the idea from transfer learning and multi-task learning that a system can learn knowledge or skills from performing previous tasks [18,19,20], this work aims to solve multiple optimization problems jointly and exploit their implicit mutual facilitation.
Traditionally, optimization problems are divided into single-objective optimization (SOO) and multi-objective optimization (MOO). In order to improve search efficiency among related tasks, a new paradigm named multi-task optimization has been proposed. Instead of solving multiple problems sequentially, MTO utilizes similarities among different tasks to improve search efficiency [21,22,23]. Inspired by multifactorial inheritance, Gupta et al. proposed a novel genetic EA named the multifactorial evolutionary algorithm (MFEA) to solve MTO problems [21]. In MFEA, complex developmental traits of offspring are influenced by a combination of genetic and cultural factors [24,25,26]. During evolution, MFEA utilizes information transfer between similar problems to improve optimization performance, which distinguishes it from traditional single-task methods [27,28,29,30].
Many MFEA variants have been proposed [31,32]. For example, in order to explore the generality of multifactorial optimization (MFO) under different population-based search mechanisms, Feng et al. [33] conducted MFO with particle swarm optimization (PSO) and differential evolution (DE). The two derived multi-tasking paradigms are named multifactorial particle swarm optimization (MFPSO) and multifactorial differential evolution (MFDE). Chen et al. [17] proposed a general evolutionary framework named MaTEA for solving many-task optimization problems. It chooses assisted tasks based on the Kullback–Leibler divergence (KLD), which measures the similarity among different tasks.
With a strong motivation for improving optimization efficiency and choosing suitable individuals to learn from other tasks, we propose an individually guided multi-task optimization framework. It is capable of generalizing almost all existing single-objective solvers, and it answers when to start transfer and who should learn. In this paper, we place four popular single-objective solvers, including the genetic algorithm (GA), PSO, DE, and artificial bee colony (ABC), into the proposed framework to validate its usefulness in advancing the field of MTO. We intend to make the following novel contributions:
(1)
We propose a novel MTO framework that includes partial-population information sharing and individual learning schemes to achieve higher search efficiency than existing frameworks.
(2)
In order to represent the interests of each individual for solving different tasks, we introduce a new concept of skill membership into the MTO framework.
(3)
We divide an MTO search process into vertical and horizontal evolutions, where the latter includes crossovers between individuals belonging to different tasks. Knowledge transfer is guided by task performance so as to suppress the negative transfer of each optimization task.
The rest of this paper is organized as follows: Section 2 introduces the background of MTO. IMTO is described in Section 3. Experimental studies are presented in Section 4. Section 5 concludes this work.

2. Related Work

MTO aims to conduct multiple tasks simultaneously through the use of knowledge transfer among different tasks. Suppose that there are K tasks to be completed, all of which are minimization problems. The jth task is denoted as $T_j$, with objective function $f_j: X_j \to \mathbb{R}$ defined in search space $X_j$. MTO aims to find a set of global optimal solutions $\{x_1^*, x_2^*, \ldots, x_K^*\}$ satisfying
$$x_i^* = \operatorname*{argmin}_{x \in X_i} f_i(x), \quad i \in \{1, 2, \ldots, K\}$$
MTO can solve multiple optimization tasks in a unified space and utilize similarities among different tasks to accelerate an optimization process, while single-task optimization just focuses on solving one task in one search space and finding the global optimal solution through vertical evolution. MFEA can transfer knowledge among different tasks by the interactions of genetic and cultural factors. During its operation, offspring not only inherit genetic factors, but also are influenced by habits and preferences of parents belonging to different tasks. To ensure different tasks can share useful optimization knowledge, it creates a unified search space for MTO problems.
We first introduce some definitions related to individuals in MFEA [21].
Factorial Cost: The factorial cost of candidate $p_i$ for a given task $T_j$ is $\Psi_j^i = \lambda \cdot \delta_j^i + f_j^i$, where $\lambda$ is a penalizing multiplier, $f_j^i$ is the objective function value, and $\delta_j^i$ represents the total constraint violation.
Factorial Rank: The factorial rank $r_j^i$ of individual $p_i$ on task $T_j$ is its index in the list of population members sorted in ascending order with respect to $\Psi_j$.
Scalar Fitness: The scalar fitness is calculated from the list of factorial ranks $\{r_1^i, r_2^i, \ldots, r_K^i\}$ as $\varphi_i = 1 / \min_{j \in \{1, 2, \ldots, K\}} \{r_j^i\}$.
Skill Factor: The skill factor $\tau_i$ is the task on which individual $p_i$ performs best, i.e., $\tau_i = \operatorname*{argmin}_j \{r_j^i\}$, where $j \in \{1, 2, \ldots, K\}$.
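To make these definitions concrete, the following minimal Python sketch (our own illustration rather than the original MFEA implementation; the function name and toy cost matrix are hypothetical) computes factorial ranks, scalar fitness, and skill factors from an N × K matrix of factorial costs.

```python
import numpy as np

def mfea_attributes(costs):
    """Compute per-individual MFEA attributes from a factorial-cost matrix.

    costs: (N, K) array; costs[i, j] is the factorial cost of individual i
           on task j (objective value plus penalized constraint violation).
    Returns factorial ranks (N, K), scalar fitness (N,), and skill factors (N,).
    """
    n, k = costs.shape
    ranks = np.empty((n, k), dtype=int)
    for j in range(k):
        order = np.argsort(costs[:, j])        # sort population on task j
        ranks[order, j] = np.arange(1, n + 1)  # best individual gets rank 1
    best_rank = ranks.min(axis=1)              # min_j r_j^i
    scalar_fitness = 1.0 / best_rank           # phi_i = 1 / min_j r_j^i
    skill_factor = ranks.argmin(axis=1)        # tau_i = argmin_j r_j^i
    return ranks, scalar_fitness, skill_factor

# Toy example: four individuals evaluated on two tasks.
costs = np.array([[0.3, 2.0],
                  [0.1, 3.5],
                  [0.7, 0.9],
                  [0.5, 1.2]])
ranks, phi, tau = mfea_attributes(costs)
print(ranks, phi, tau)
```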
Since the appearance of MFO [21], many improvements have been made. Considering that knowledge transfer between optimization tasks is sensitive to negative inter-task interactions, Bali et al. [34] proposed a novel evolutionary computation framework named MFEA-II. It achieves online learning and explores similarities among distinct tasks. Based on MFEA-II, they further proposed a cognizant evolutionary multitasking approach called MO-MFEA-II to solve MOO problems [35]. Zheng et al. [23] proposed self-regulated evolutionary multi-task optimization (SREMTO), which uses an ability vector to reflect an individual's ability to solve each task. It can automatically adapt the intensity of cross-task knowledge transfer to different and varying degrees of relatedness among tasks. Liu et al. [36] proposed a surrogate-assisted multi-tasking memetic algorithm (SaM-MA) that uses a surrogate model to assist the evolution procedure. Bali et al. proposed an enhanced MTO framework named LDA-MFEA that derives linear transformations among the solution spaces of component tasks [37].
Existing MTO algorithms only focus on solving a small number of tasks at the same time. If many tasks need to be solved, they become ineffective in such a complex multi-task environment. To handle many-tasking problems, Tang et al. proposed a group-based MFEA that groups similar tasks so that genetic information interaction occurs within the same group. To strengthen its effectiveness and efficiency, they further developed a new selection criterion and mating selection mechanism [38]. Zhang et al. proposed a framework named multisource selective transfer optimization that chooses source tasks well. In addition, an optimization instance representation method named centroid distribution is designed to measure the task relatedness of different optimization instances [39].
Due to its superior performance, MFEA has been used in many applications. For example, in order to evolve several Deep Q-Learning (DQL) models and converge to optimal policies, Martinez et al. [40] proposed a new MFO framework that blends meta-heuristic optimization, transfer learning, and DQL to automate the knowledge transfer and policy learning of distributed Reinforcement Learning (RL) agents. Feng et al. [41] explored the application of MTO to combinatorial optimization problems and verified its efficiency using vehicle routing as an illustrative example. Since conventional evolutionary algorithms are not well suited to expensive optimization problems, Ding et al. [28] proposed a new multitasking evolutionary optimization framework named generalized MFEA (G-MFEA) to solve expensive problems. In G-MFEA, cheap optimization problems can transfer knowledge to expensive ones, leading to faster convergence of the latter.
In addition to multifactorial MTO algorithms, multipopulation approaches have also been proposed to improve the efficiency of knowledge transfer. For example, Li et al. [42] proposed a multi-population framework to solve MTO problems in which each population has a corresponding mating probability for exchanging information. Cheng et al. [43] proposed a MultiTasking Coevolutionary Particle Swarm Optimization (MT-CPSO) algorithm, in which information from the assistant task is transferred if the personal best solution of the target sub-population cannot improve the search performance.

3. Proposed Method

3.1. Motivations

In MTO, optimization experience can be shared, and each task can utilize the superior experience of other tasks. Tasks are therefore expected to evolve faster and achieve higher accuracy than with traditional single-task optimization methods. As a typical MTO algorithm, MFEA demonstrates its performance on both SOO and MOO problems, as well as real-world optimization problems. MFEA uses the concept of a skill factor to divide the initial population into different task groups. However, one individual can be suitable for several tasks, which is common when these tasks are highly similar. The skill factor limits each individual to one task group and weakens the effects of information exchange among multiple tasks. As a result, task pairs with high similarity may exchange barely any useful information. In order to better represent the solving ability of individuals on component tasks and to choose more suitable individuals, we propose the concept of skill membership.

3.2. Proposed Framework

Our newly proposed IMTO is an individually guided multi-task optimization framework based on knowledge sharing, as shown in Figure 1. Figure 2 shows the diagram of IMTO when solving two optimization problems. It includes the following novelties in comparison with other MTO algorithms:
(1)
We divide the optimization process of MTO into vertical and horizontal evolutions. Each task has its own sub-population to execute vertical evolution, and each sub-population is called a task group. Traditional single-task optimization methods contain only vertical evolutions, which find global optimal solutions by a series of operations, e.g., selection, crossover, and mutation. The distinction between the proposed MTO framework and traditional single-task optimizers lies in the horizontal evolution among different task groups. The optimization processes of multiple tasks can influence each other via this information interaction.
(2)
MTO algorithms perform K optimization problems simultaneously. Suppose that the dimensionality of the jth task is $D_j$. We define the unified search space with dimensionality $D = \max_j \{D_j\}$, and each individual is encoded with random variables lying within the fixed range [0, 1]; a minimal encoding sketch is given after this list.
(3)
IMTO divides the initial population into K task groups, and each task group can evolve independently. In order to represent the ability to perform each component task, we introduce the concept of skill membership. A candidate may enter multiple task groups as long as it shows high skill membership values on component tasks.
(4)
To determine when optimization information should be exchanged, we use the convergence rate to guide knowledge transfer. When the convergence rate indicates that a task may be trapped in local optima, the knowledge transfer mechanism is triggered.
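As an illustration of item (2) above, the following sketch (our own; the helper names `decode` and `encode` and the example bounds are hypothetical) maps individuals between the unified space $[0, 1]^D$ and each task's native search space via linear scaling; a task with dimensionality $D_j < D$ simply ignores the trailing unified variables.

```python
import numpy as np

def decode(y, low, high):
    """Map a unified-space individual y in [0, 1]^D into a task's native space.

    low, high: per-dimension bounds of the task's search space (length D_j).
    Only the first D_j unified variables are used when D_j < D.
    """
    d_j = len(low)
    return low + y[:d_j] * (high - low)

def encode(x, low, high):
    """Map a native-space solution back into the unified space [0, 1]^{D_j}."""
    return (x - low) / (high - low)

# Example: a 50-dimensional unified individual decoded for a task defined
# on [-50, 50]^50, such as the Rastrigin component of P1 in test suite 1.
rng = np.random.default_rng(0)
y = rng.random(50)
x = decode(y, np.full(50, -50.0), np.full(50, 50.0))
```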

3.3. Individually Guided Multi-Task Evolutionary Optimization

Putting GA, PSO, DE, and ABC into IMTO leads to the corresponding algorithms named IMGA, IMPSO, IMDE, and IMABC. To better show the designed framework, we give the IMGA implementation as an example. Its pseudocode is described in Algorithm 1, where its module performing horizontal transfer is realized in Algorithm 2.
It is worth noting that individuals in MFO evolve together in each iteration, and the random mating probability (rmp) value decides the evolution direction of each individual. Different from MFO, vertical and horizontal evolutions are independent in IMTO, and each task group of IMTO has its own vertical evolution. The additional horizontal evolution, used to improve optimization performance, is triggered when the convergence rate drops below zero.
We use skill membership values to represent individuals’ solving ability on different tasks, which can be used for guiding the division of different task groups before the evolution.
Definition 1 (Skill membership). For a minimization problem, the skill membership value of individual $p_i$ on task $T_j$ is defined as
$$\mu_i^j = \min(f^j) / f_i^j$$
where $f_i^j$ is the objective function value of individual $p_i$ on task $T_j$, and $\min(f^j)$ is the minimum objective function value obtained on that task. If $\min(f^j)$ or $f_i^j$ equals zero, it is replaced by a random value $\xi_1$ or $\xi_2$ close to zero. The skill membership vector $\mu_i = \{\mu_i^j\}_{j=1}^{K}$ reflects the solving ability of individual $p_i$ on each component task $T_j$. Next, we show how to divide the initial population into different task groups according to skill membership values. This paper ranks skill membership values from highest to lowest, where a higher skill membership value means higher solving ability.
At the beginning of IMGA, $N_0$ individuals are randomly initialized within the unified search space. Given K optimization tasks, IMGA uses a partition strategy to divide the initial population $P_0$ into K task groups that focus on solving the component tasks. Since simply choosing better-performing individuals can easily lead to local optima, IMGA also randomly chooses some individuals according to the randomly chosen ratio γ. Specifically, the $N_0$ individuals of the initial population $P_0$ are ranked according to their skill membership values, and $P_0$ is divided into $P_{01}^j$ and $P_{02}^j$ for task $T_j$. $P_{01}^j$ contains the $N(1-\gamma)$ individuals with the highest skill membership values, and $P_{02}^j$ is composed of the remaining individuals of $P_0$. IMGA takes all candidates of $P_{01}^j$ and randomly chooses $N\gamma$ candidates from $P_{02}^j$ to form task group $G_j$; a sketch of this partition is given after Algorithm 1. The setting of parameter γ, with a default value of 0.2, is discussed later.
Algorithm 1. IMGA
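Since Algorithm 1 appears only as an image in the original publication, the following minimal sketch (our own illustration under the stated default γ = 0.2; the function names are hypothetical) shows the skill-membership computation of Definition 1 and the opening partition step of IMGA: each task group keeps the N(1 − γ) individuals with the highest membership values and fills the remaining slots with Nγ randomly chosen individuals.

```python
import numpy as np

def skill_membership(f_values, eps=1e-12):
    """mu_i^j = min(f^j) / f_i^j for one minimization task (Definition 1).

    f_values: (N0,) objective values of the initial population on task j.
    Zero values are replaced by a small positive constant, standing in
    for the random near-zero values xi_1 and xi_2 used in the paper.
    """
    f = np.where(f_values == 0.0, eps, f_values)
    return max(f.min(), eps) / f

def form_task_group(membership, group_size, gamma=0.2, rng=None):
    """Select the indices of one task group G_j from the initial population.

    Keeps the N(1 - gamma) individuals with the highest skill membership
    (subset P01) and adds N * gamma randomly chosen ones from the rest (P02).
    """
    rng = rng or np.random.default_rng()
    n_best = int(round(group_size * (1.0 - gamma)))
    order = np.argsort(-membership)        # indices in descending membership
    p01 = order[:n_best]                   # highest-membership individuals
    p02 = order[n_best:]                   # remaining candidates
    random_pick = rng.choice(p02, size=group_size - n_best, replace=False)
    return np.concatenate([p01, random_pick])
```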
After the partition procedure, each task group contains N individuals, with $KN = N_0$. Individuals belonging to the same task group evolve independently through the vertical operators. As in traditional single-task optimization, vertical offspring generation in IMGA includes crossover and mutation operators. During vertical evolution, each task generates vertical offspring solutions to update its task group.
To determine when knowledge should be transferred, we use a convergence rate to guide horizontal knowledge transfer. The convergence rate of a minimization problem in generation t is defined as $\rho_t = f_{t-1}^* - f_t^*$, where $f_{t-1}^*$ and $f_t^*$ represent the optimal fitness values in generations t − 1 and t. When the convergence rate of task $T_j$ is less than zero, IMGA starts the knowledge transfer to avoid being trapped in local optima.
In this paper, the task that needs to learn knowledge from another task is called a target task, and the task that provides knowledge is called an assistant task. To create a dedicated environment for transferring optimization knowledge while protecting the specific properties of better-performing individuals, an archive-memory knowledge transfer scheme is introduced for partial-population knowledge sharing. In this scheme, only partial information of the assistant and target tasks is shared. The knowledge transfer archive $\Lambda_j^t$ of task $T_j$ in generation t includes superior individuals $\hat{\Lambda}_j^t$ and inferior individuals $\check{\Lambda}_j^t$. It aims to maintain excellent evolutionary information and improve the behavior of inferior individuals. This work uses the communication rate Ω to choose the individuals of each task that form its knowledge transfer archive. With the archive size set to $N_\Lambda = \Omega N$, the $N_\Lambda$ better-performing individuals and the $N_\Lambda$ worse-performing individuals are added to the knowledge transfer archive in each generation, and only the latter need to learn.
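A minimal sketch of the trigger and the archive construction (our own illustration; the function names are hypothetical, and the trigger mirrors the paper's $\rho_t < 0$ condition):

```python
import numpy as np

def should_transfer(best_history):
    """Trigger horizontal transfer when the convergence rate rho_t < 0.

    best_history: best fitness value per generation (minimization), so
    rho_t = f*_{t-1} - f*_t measures the latest improvement.
    """
    if len(best_history) < 2:
        return False
    rho = best_history[-2] - best_history[-1]
    return rho < 0

def build_transfer_archive(fitness, omega=0.2):
    """Split one task group into its superior and inferior archive parts.

    fitness: (N,) objective values of a task group (minimization).
    Returns indices of the N_Lambda best individuals (knowledge providers)
    and the N_Lambda worst individuals (the only ones that learn).
    """
    n_archive = max(1, int(round(omega * len(fitness))))
    order = np.argsort(fitness)          # ascending: best individuals first
    superior = order[:n_archive]         # Lambda-hat
    inferior = order[-n_archive:]        # Lambda-check
    return superior, inferior
```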
A smaller communication rate Ω means that each task prefers maintaining its vertical evolution, while arbitrary communication among different tasks is permitted when Ω is close to 1. Cross-task communication can explore the entire search space and avoid entrapment in local optima. The setting of Ω, with its default value of 0.2, is discussed later. IMGA applies crossover and mutation operations among different tasks to perform horizontal evolution, as described in Algorithm 2. When there are more than two tasks, IMGA randomly chooses assisted tasks from the component tasks to provide knowledge. This work adopts simulated binary crossover and polynomial mutation to realize information exchange [44]. The crossover probability $p_C$ is set to 0.9, and the distribution indexes for the crossover and mutation operators are set as $\eta_C = 10$ and $\eta_M = 10$.
Algorithm 2. Horizontal Transfer
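As with Algorithm 1, the pseudocode is an image in the original publication; the sketch below (our own, using the stated settings $p_C = 0.9$ and $\eta_C = \eta_M = 10$; a basic SBX form without boundary-aware spread, and an assumed per-gene mutation probability of 1/D) illustrates one horizontal-transfer step between an inferior target-task individual and a superior assistant-task individual in the unified space.

```python
import numpy as np

rng = np.random.default_rng()

def sbx_crossover(p1, p2, eta_c=10.0, p_c=0.9):
    """Simulated binary crossover of two unified-space parents in [0, 1]^D."""
    c1, c2 = p1.copy(), p2.copy()
    if rng.random() < p_c:
        u = rng.random(len(p1))
        beta = np.where(u <= 0.5,
                        (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return np.clip(c1, 0.0, 1.0), np.clip(c2, 0.0, 1.0)

def polynomial_mutation(x, eta_m=10.0):
    """Polynomial mutation of a unified-space individual in [0, 1]^D."""
    d = len(x)
    y = x.copy()
    for i in range(d):
        if rng.random() < 1.0 / d:       # assumed mutation probability 1/D
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            y[i] += delta                # variable range is [0, 1], span = 1
    return np.clip(y, 0.0, 1.0)

# One horizontal-transfer step: cross an inferior target-task individual
# with a superior assistant-task individual, then mutate the offspring.
target_inferior = rng.random(50)
assistant_superior = rng.random(50)
child, _ = sbx_crossover(target_inferior, assistant_superior)
child = polynomial_mutation(child)
```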
After the horizontal evolution, we obtain horizontal offspring solutions $\check{\Lambda}_{j\_H}^t$ for the target task $T_j$. The generated horizontal offspring solutions are evaluated on the target task, which provides additional useful evolution information. Then IMGA chooses the $N_\Lambda$ better-performing individuals to replace the worse-performing individuals in $\check{\Lambda}_j^t$. The horizontal evolution continues until a termination condition is met.

3.4. Computational Complexity

IMTO includes the following two operations: (1) initialize the population and divide it into different task groups; and (2) perform vertical evolutions and knowledge transfer among different tasks.
This work sets the size of the initial population to KN, and each task has a sub-population of size N. IMGA begins by assigning skill membership values to the initial population. It takes $O(K^2N)$ to calculate the objective function values and assign task groups. In every generation, vertical evolution takes $O(KDN)$ to apply crossover and mutation operations, where D is the dimension of the decision space. In the worst case, all individuals perform horizontal evolution in every generation, and $O(KDN)$ is required for information exchange. The worst-case computational complexity per generation is thus $O(KDN)$. So, IMGA has a complexity of $O(\hat{g}KDN + K^2N)$, where $\hat{g}$ is the maximal number of generations. Similar conclusions can be drawn for IMPSO, IMDE, and IMABC.

4. Experiments

To verify the generality and effectiveness of the proposed MTO framework, we select representative single-objective algorithms, GA [45], PSO [46,47], DE [48], and ABC [49], to combine with IMTO. Comparative experiments are performed on a series of MTO problems to demonstrate that:
(1)
IMTO can significantly outperform corresponding baseline solvers.
(2)
In terms of the optimization knowledge transfer, IMTO outperforms the multifactorial optimization framework.
(3)
IMTO can adapt to different task similarities and ensures high transfer effectiveness.

4.1. Experimental Setup

In order to show the knowledge transfer effects of different kinds of MTO algorithms, we use two test suites in our experiments. Test suite 1 contains nine composite two-task problems used in the CEC 2017 EMTO Competition [50]. These nine problems belong to nine categories with different degrees of overlap and inter-task similarity: CI, PI, and NI represent complete, partial, and no intersection, and HS, MS, and LS mean high, medium, and low similarity, respectively. Details of the test problems are given in Table 1, where T1 and T2 denote the two component tasks of each multi-task optimization problem. The second test suite, from the CEC 2021 EMTO Competition, consists of much more complex objective functions.
The initial population size is set to 100, and each task is assigned 50 individuals. In MFEA, the distribution index for crossover $\eta_C$ is set to 10. The acceleration coefficients of MFPSO are set to 0.2. The inertia weight of IMPSO and MFPSO decreases linearly from 0.9 to 0.4. In IMABC, the number of onlooker bees is set to 50. The random mating probability controls knowledge transfer in the multifactorial paradigm; in MFEA, MFPSO, and MFDE, we set rmp to 0.3 in all experiments, following [21,33]. We set the individual learning probability p_il of MFEA, MFDE, and MFPSO to 0. The sensitivities of Ω and γ are discussed later.
Other parameters used in this work are summarized as: (1) Maximum number of generations: g ^ = 1000; (2) Independent number of runs: r ^ = 20.
We choose parallel GAs, PSOs, DEs, and ABCs as four baseline solvers, named B-GA, B-PSO, B-DE, and B-ABC, respectively. All baseline solvers share the same parameters and search operators as IMGA, IMPSO, IMDE, and IMABC to guarantee fair comparisons. Experiments are conducted on a computer with a 1.00 GHz Intel Core i5 processor and 16 GB RAM under Windows 10.

4.2. Parametric Analysis

(1)
Sensitivity of Ω: The communication rate Ω controls knowledge transfer among different tasks. We examine performance sensitivity to this parameter for IMDE by setting it to 0.2, 0.4, 0.6, 0.8, and 1. Table 2 shows the best achieved fitness values in 20 runs versus Ω for IMDE on test suite 1, with the best result shown in bold. IMDE performs well for Ω from 0.2 to 1, with only small differences. Nevertheless, a larger Ω encourages more individuals to learn from other tasks, thereby consuming more computing resources. To reduce running time on various problems, we employ a relatively small Ω in our algorithms.
(2)
Sensitivity of γ: The randomly chosen ratio γ is the other control parameter of IMTO. To examine performance sensitivity to γ, we test different settings on test suite 1. Table 3 shows the best achieved fitness values in 20 runs versus γ for IMDE on test suite 1, with the best result shown in bold. IMDE is easily trapped in local optima when only better-performing individuals are chosen, whereas adding some randomly selected individuals yields better convergence. IMDE performs best when γ is set to 0.2, so we set γ to 0.2 in our experiments.

4.3. Comparison with MFO

To verify the performance of IMTO, optimization results of IMTO and MFO on test suites 1 and 2 are summarized. MTO algorithms are designed based on different single-task algorithms. This work compares IMGA with MFEA, IMDE with MFDE, and IMPSO with MFPSO. The computational time results of different algorithms are shown in Table 4 and Figure 3.
Although IMGA, IMPSO, and IMDE need to perform information exchange in each generation, their running time is much shorter than MFO's. For example, Table 4 shows that MFPSO needs 19.03 s to optimize P2, while IMPSO needs just 3.43 s; the same holds for IMGA and IMDE. IMTO prefers to maintain the vertical evolution of each task, and each generation only needs to evaluate objective functions and communicate information. In contrast, MFEA, MFPSO, and MFDE need to divide task groups at each generation, and the calculations of factorial cost, factorial rank, scalar fitness, and skill factor consume many computational resources. Hence, the knowledge transfer scheme of IMTO is much more efficient than the multifactorial-influence method used in MFEA, MFPSO, and MFDE.
To better show the effects of knowledge transfer on solution accuracy, the optimization results of IMTO and MFO on test suites 1 and 2 are summarized in Table 4 and Table 5. The mean and standard deviation of the best achieved fitness values over 20 independent runs are presented, with the best fitness values shown in bold. The proposed IMTO exhibits a better overall performance than its peer, MFO. To judge whether there is a significant difference between IMTO and its peers, Wilcoxon's rank-sum test at the significance level of 0.05 is invoked. Symbols "−" and "+" denote that the compared MFO algorithm performs significantly better and significantly worse than IMTO, respectively, while "≈" indicates no statistically significant difference between them.
The performance superiority of IMTO over MFO is evidenced by the fact that, on test suite 1, IMGA significantly outperforms MFEA in 11 out of 18 tasks, IMPSO outperforms MFPSO in 13 out of 18 tasks, and IMDE outperforms MFDE in 7 out of 18 tasks. Table 5 shows that the optimization accuracies of IMTO and MFO are relatively similar on complex problems, but the computation time results in Figure 3 show that IMTO greatly shortens the running time. For example, MFPSO needs 228.1 s to optimize P7, while IMPSO needs just 26.5 s. IMTO allows individuals to belong to different tasks as long as they show high skill membership values, which promotes the individuals' potential in task solving. Besides, the superior individuals of assisted tasks can provide knowledge to guide the evolution of target tasks, which improves the optimization results. It is worth noting that all results are shown in scientific notation, so the presented values are limited by precision. Even though some comparative results in Table 4 appear nearly identical because of this precision limitation, their actual average best fitness values may differ considerably; when executing Wilcoxon's rank-sum test, significant differences emerge between these comparative results.
The convergence curves of IMTO and MFO over 1000 iterations are illustrated in Figure 4; they reveal the search efficiencies of the different algorithms. Each sub-figure shows the optimization processes of T1 and T2, with the Y-axis representing fitness and the X-axis the number of generations. Only the convergence trace of P5 is shown due to page limitations. IMTO clearly converges faster than MFO. This is because shared archives can generate similar genetic materials, and IMGA, IMPSO, and IMDE can update the evolution direction of worse-performing individuals. Obviously, the more similar the tasks, the greater the optimization facilitation. MFEA, MFPSO, and MFDE control knowledge transfer through the rmp value and neglect task similarities, which easily causes negative transfer; hence, their convergence is slower than IMTO's.

4.4. Comparison with Baseline Solvers

This section compares IMTO with baseline solvers to further illustrate its generality. Table 6 shows the mean and standard deviation of the best achieved fitness values obtained via 20 independent runs on test suite 1. The best fitness values are shown in bold.
It can be found that even though the baseline solvers have different characteristics, IMTO consistently achieves superior optimization performance over them. This superiority is evidenced by the fact that, on test suite 1, IMGA significantly outperforms B-GA in 17 out of 18 tasks, IMPSO outperforms B-PSO in 14 out of 18 tasks, IMDE outperforms B-DE in 13 out of 18 tasks, and IMABC outperforms B-ABC in 15 out of 18 tasks. The individually guided learning and archive-memory schemes of IMTO share useful optimization experience in the unified search space and continuously update inferior individuals by learning from the better-performing individuals of other tasks.
Solving tasks with low similarities poses a greater challenge for MTO. The results in Table 6 show that IMGA achieves higher optimization accuracy than B-GA in most cases. The knowledge transfer scheme helps the algorithms find globally optimal solutions for problems with different properties. For P6-T1, the baseline solver B-GA only finds a fitness value of 1.47e+01, whereas IMGA finds 3.74e+00 by sharing historical experience. Similarly, IMPSO, IMDE, and IMABC perform better than B-PSO, B-DE, and B-ABC in most cases, according to Table 6. In IMTO, superior individuals need not learn from other tasks, which maintains the vertical evolution of each task. Compared with a single-task solver, inferior individuals can constantly learn from other tasks to avoid being trapped in local optima, thereby suppressing negative transfer.
The convergence curves for P5 over 1000 iterations are shown in Figure 5. IMTO converges faster in most cases. IMTO's additional horizontal evolution allows the exchange of information among different tasks. Different task groups are located in homogeneous search spaces, and assisted tasks can provide useful knowledge, thereby speeding up convergence.

5. Discussion

In this work, we propose a novel framework to handle multi-task problems. It focuses on choosing the most suitable and promising individuals to solve each task. Experiments on benchmark problems show that the inter-task learning method used in IMTO significantly improves solution accuracy because the target task can learn superior experience from other tasks and avoid being trapped in local optima.
Clearly, the inter-task learning method determines the performance of MTO algorithms. However, the solution distributions of multiple tasks may differ; if the target task learns knowledge from the assistant task directly, algorithm performance may even degrade. Inter-task learning thus faces the challenge of how to learn knowledge correctly, and the knowledge transfer proposed in this work may be limited by its direct learning method. To further improve solution accuracy, we need to minimize the discrepancy between different sub-populations so that learned individuals remain suitable for the target task. Besides, much information exists in the assistant task; if we can choose the most useful knowledge for a target task, the effect of knowledge transfer on solving the target task can be further improved.

6. Conclusions

This paper has presented a novel high-efficiency multi-task optimization framework that focuses on choosing the most suitable individuals to handle each task, thereby improving optimization accuracy via knowledge transfer. It divides the initial population into different task groups according to individuals' skill membership values. Each task group performs not only vertical evolution but also horizontal evolution, which involves knowledge transfer among different tasks. The knowledge transfer includes population information sharing and inter-task learning. To determine when knowledge should be transferred, the convergence rate is used to guide horizontal knowledge transfer: when a solution process is trapped in local optima, the knowledge transfer mechanism is triggered. Four representative single-objective optimization algorithms are combined with IMTO. The experimental results show that the proposed framework significantly outperforms the baseline solvers and MFO.
Our future work aims to extend the proposed framework to multi-objective MTO problems [51,52,53,54,55,56] and to make this kind of individually guided knowledge transfer more general. How to maximize the effect of knowledge transfer while minimizing negative transfer [57,58] also deserves further research.

Author Contributions

X.W., Method development, Writing and original draft preparation, and Performing experiments and result analysis; Q.K., Idea development, and Review and editing; M.Z., Paper revision and research supervision; Z.F., Data analysis, and Paper revision; A.A., Paper review and revision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (51775385); in part by the Strategy Research Project of Artificial Intelligence Algorithms of the Ministry of Education of China; in part by the Shanghai Industrial Collaborative Science and Technology Innovation Project (2021-cyxt2-kj10); in part by the Innovation Program of the Shanghai Municipal Education Commission (202101070007E00098); and in part by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, under grant no. RG-11-135-43. We are also grateful for the efforts of our colleagues at the Sino-German Center of Intelligent Systems, Tongji University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The test problems used in this work come from standard test suites.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hua, Y.; Liu, Q.; Hao, K.; Jin, Y. A survey of evolutionary algorithms for multi-objective optimization problems with irregular Pareto fronts. IEEE/CAA J. Autom. Sin. 2021, 8, 303–318.
2. Tian, Y.; Li, X.; Ma, H.; Zhang, X.; Tan, K.C.; Jin, Y. Deep reinforcement learning based adaptive operator selection for evolutionary multi-objective optimization. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 1–14.
3. Deng, Q.; Kang, Q.; Zhang, L.; Zhou, M.; An, J. Objective space-based population generation to accelerate evolutionary algorithms for large-scale many-objective optimization. IEEE Trans. Evol. Comput. 2022.
4. Nguyen, T.T.; Yang, S.; Branke, J. Evolutionary dynamic optimization: A survey of the state of the art. Swarm Evol. Comput. 2012, 6, 1–24.
5. Wang, B.-C.; Li, H.-X.; Li, J.; Wang, Y. Composite differential evolution for constrained evolutionary optimization. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1482–1495.
6. Zhang, L.; Wang, K.; Xu, L.; Sheng, W.; Kang, Q. Evolving ensembles using multi-objective genetic programming for imbalanced classification. Knowl.-Based Syst. 2022, 255, 109611.
7. Sun, Y.; Yen, G.G.; Yi, Z. IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans. Evol. Comput. 2019, 23, 173–187.
8. Xu, H.; Zeng, W.; Zeng, X.; Yen, G.G. An evolutionary algorithm based on Minkowski distance for many-objective optimization. IEEE Trans. Cybern. 2019, 49, 3968–3979.
9. Hong, W.; Tang, K.; Zhou, A.; Ishibuchi, H.; Yao, X. A scalable indicator-based evolutionary algorithm for large-scale multiobjective optimization. IEEE Trans. Evol. Comput. 2019, 23, 525–537.
10. Chen, S.-Y.; Song, M.H. Energy-saving dynamic bias current control of active magnetic bearing positioning system using adaptive differential evolution. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 942–953.
11. Wang, X.; Choi, T.-M.; Liu, H.; Yue, X. A novel hybrid ant colony optimization algorithm for emergency transportation problems during post-disaster scenarios. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 545–556.
12. Wang, J.; Zhou, Y.; Wang, Y.; Zhang, J.; Chen, C.L.P.; Zheng, Z. Multiobjective vehicle routing problems with simultaneous delivery and pickup and time windows: Formulation, instances, and algorithms. IEEE Trans. Cybern. 2016, 46, 582–594.
13. Pan, Z.; Wang, L.; Wang, J.; Lu, J. Deep reinforcement learning based optimization algorithm for permutation flow-shop scheduling. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 1–12.
14. Kang, Q.; Song, X.; Zhou, M. A collaborative resource allocation strategy for decomposition-based multiobjective evolutionary algorithms. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2416–2423.
15. Fu, Y.; Ding, M.; Zhou, C.; Hu, H. Route planning for unmanned aerial vehicle (UAV) on the sea using hybrid differential evolution and quantum-behaved particle swarm optimization. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 1451–1465.
16. Lin, Q.; Fang, Z.; Chen, Y.; Tan, K.C.; Li, Y. Evolutionary architectural search for generative adversarial networks. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 783–794.
17. Chen, Y.; Zhong, J.; Feng, L.; Zhang, J. An adaptive archive-based evolutionary framework for many-task optimization. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 369–384.
18. Yao, S.; Kang, Q.; Zhou, M.; Rawa, M.; Albeshri, A. Discriminative manifold distribution alignment for domain adaptation. IEEE Trans. Syst. Man Cybern. Syst. 2022.
19. Kang, Q.; Yao, S.; Zhou, M.; Zhang, K.; Abusorrah, A. Effective visual domain adaptation via generative adversarial distribution matching. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3919–3929.
20. Yao, S.; Kang, Q.; Zhou, M.; Rawa, M.; Abusorrah, A. A survey of transfer learning for machinery diagnostics and prognostics. Artif. Intell. Rev. 2022, 1–52.
21. Gupta, A.; Ong, Y.-S.; Feng, L. Multifactorial evolution: Toward evolutionary multitasking. IEEE Trans. Evol. Comput. 2016, 20, 343–357.
22. Wang, X.; Kang, Q.; Zhou, M.; Yao, S.; Abusorrah, A. Domain adaptation multitask optimization. IEEE Trans. Cybern. 2022.
23. Zheng, X.; Qin, A.K.; Gong, M.; Zhou, D. Self-regulated evolutionary multitask optimization. IEEE Trans. Evol. Comput. 2020, 24, 16–28.
24. Huang, S.; Zhong, J.; Yu, W. Surrogate-assisted evolutionary framework with adaptive knowledge transfer for multi-task optimization. IEEE Trans. Emerg. Top. Comput. 2021, 9, 1930–1944.
25. Dang, Q.; Gao, W.; Gong, M. An efficient mixture sampling model for Gaussian estimation of distribution algorithm. Inf. Sci. 2022, 608, 1157–1182.
26. Gupta, A.; Ong, Y.-S.; Feng, L. Insights on transfer optimization: Because experience is the best teacher. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 51–64.
27. Zhou, L.; Feng, L.; Tan, K.C.; Zhong, J.; Zhu, Z.; Liu, K.; Chen, C. Toward adaptive knowledge transfer in multifactorial evolutionary computation. IEEE Trans. Cybern. 2021, 51, 2563–2576.
28. Ding, J.; Yang, C.; Jin, Y.; Chai, T. Generalized multitasking for evolutionary optimization of expensive problems. IEEE Trans. Evol. Comput. 2019, 23, 44–58.
29. Feng, L.; Zhou, W.; Zhou, L.; Jiang, S.W.; Zhong, J.H.; Da, B.S.; Zhu, Z.X.; Wang, Y. Evolutionary multitasking via explicit autoencoding. IEEE Trans. Cybern. 2019, 49, 3457–3470.
30. Gupta, A.; Ong, Y.-S.; Feng, L.; Tan, K.C. Multiobjective multifactorial optimization in evolutionary multitasking. IEEE Trans. Cybern. 2017, 47, 1652–1665.
31. Dang, Q.; Gao, W.; Gong, M. Multiobjective multitasking optimization assisted by multidirectional prediction method. Complex Intell. Syst. 2022, 8, 1663–1679.
32. Dang, Q.; Gao, W.; Gong, M. Dual transfer learning with generative filtering model for multiobjective multitasking optimization. Memetic Comput. 2022, 1–27.
33. Feng, L.; Zhou, W.; Zhou, L.; Jiang, S.W.; Zhong, J.H.; Da, B.S.; Zhu, Z.X.; Wang, Y. An empirical study of multifactorial PSO and multifactorial DE. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 921–928.
34. Bali, K.K.; Ong, Y.S.; Gupta, A.; Tan, P.S. Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II. IEEE Trans. Evol. Comput. 2020, 24, 69–83.
35. Bali, K.K.; Gupta, A.; Ong, Y.-S.; Tan, P.S. Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II. IEEE Trans. Cybern. 2021, 51, 1784–1796.
36. Liu, D.; Huang, S.; Zhong, J. Surrogate-assisted multi-tasking memetic algorithm. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
37. Bali, K.K.; Gupta, A.; Feng, L.; Ong, Y.S.; Siew, T.P. Linearized domain adaptation in evolutionary multitasking. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 1295–1302.
38. Tang, J.; Chen, Y.; Deng, Z.; Xiang, Y.; Joy, C.P. A group-based approach to improve multifactorial evolutionary algorithm. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 3870–3876.
39. Zhang, J.; Zhou, W.; Chen, X.; Yao, W.; Cao, L. Multisource selective transfer framework in multiobjective optimization problems. IEEE Trans. Evol. Comput. 2020, 24, 424–438.
40. Martinez, A.D.; Osaba, E.; Ser, J.D.; Herrera, F. Simultaneously evolving deep reinforcement learning models using multifactorial optimization. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020.
41. Feng, L.; Huang, Y.; Zhou, L.; Zhong, J.; Gupta, A.; Tang, K.; Tan, K.C. Explicit evolutionary multitasking for combinatorial optimization: A case study on capacitated vehicle routing problem. IEEE Trans. Cybern. 2021, 51, 3143–3156.
42. Li, G.; Zhang, Q.; Gao, W. Multipopulation evolution framework for multifactorial optimization. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, 6 July 2018; pp. 215–216.
43. Cheng, M.-Y.; Gupta, A.; Ong, Y.-S.; Ni, Z.-W. Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design. Eng. Appl. Artif. Intell. 2017, 64, 13–24.
44. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
45. Ghahramani, M.; Qiao, Y.; Zhou, M.; O'Hagan, A.; Sweeney, J. AI-based modeling and data-driven evaluation for smart manufacturing processes. IEEE/CAA J. Autom. Sin. 2020, 7, 1026–1037.
46. Wang, Y.; Zuo, X. An effective cloud workflow scheduling approach combining PSO and idle time slot-aware rules. IEEE/CAA J. Autom. Sin. 2021, 8, 1079–1094.
47. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions. IEEE Trans. Evol. Comput. 2019, 23, 718–731.
48. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
49. Wang, L.; Zhang, X. Antenna array design by artificial bee colony algorithm with similarity induced search method. IEEE Trans. Magn. 2019, 55, 1–4.
50. Da, B.; Ong, Y.; Feng, L.; Qin, A.K.; Gupta, A.; Zhu, Z.; Ting, C.; Tang, K.; Yao, X. Evolutionary multitasking for single-objective continuous optimization: Benchmark problems, performance metric, and baseline results. 2017. Available online: https://arxiv.org/abs/1706.03470 (accessed on 27 December 2022).
51. Gao, K.; Zhang, Y.; Su, R.; Yang, F.; Suganthan, P.N.; Zhou, M. Solving traffic signal scheduling problems in heterogeneous traffic network by using meta-heuristics. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3272–3282.
52. Fu, Y.; Zhou, M.; Guo, X.; Qi, L. Scheduling dual-objective stochastic hybrid flow shop with deteriorating jobs via bi-population evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 5037–5048.
53. Guo, X.W.; Zhou, M.C.; Liu, S.X.; Qi, L. Lexicographic multiobjective scatter search for the optimization of sequence-dependent selective disassembly subject to multiresource constraints. IEEE Trans. Cybern. 2020, 50, 3307–3317.
54. Li, Q.; Gravina, R.; Li, Y.; Alsamhi, S.; Sun, F.; Fortino, G. Multi-user activity recognition: Challenges and opportunities. Inf. Fusion 2020, 63, 121–135.
55. Li, W.; He, L.; Cao, Y. Many-objective evolutionary algorithm with reference point-based fuzzy correlation entropy for energy-efficient job shop scheduling with limited workers. IEEE Trans. Cybern. 2022, 52, 10721–10734.
56. Wang, Y.; Gao, S.; Zhou, M.; Yu, Y. A multi-layered gravitational search algorithm for function optimization and real-world problems. IEEE/CAA J. Autom. Sin. 2021, 8, 94–109.
57. Zhang, W.; Deng, L.; Zhang, L.; Wu, D. A survey on negative transfer. IEEE/CAA J. Autom. Sin. 2022, 8, 94–109.
58. Huang, Z.; Yang, S.; Zhou, M.; Li, Z.; Gong, Z.; Chen, Y. Feature map distillation of thin nets for low-resolution object recognition. IEEE Trans. Image Process. 2022, 31, 1364–1379.
Figure 1. The flowchart of individually guided multi-task optimization (IMTO).
Figure 2. Individually guided multi-task optimization (IMTO) when solving two optimization problems.
Figure 3. Comparison of six algorithms in terms of the average computational time (s) via 20 runs in test suite 2.
Figure 4. Convergence traces of IMTO and MFO on multitasking problem P5 in test suite 1. (a) Convergence traces of IMGA and MFEA, (b) Convergence traces of IMPSO and MFPSO, and (c) Convergence traces of IMDE and MFDE.
Figure 5. Convergence traces of IMTO and baseline solvers on multitasking problem P5 in test suite 1. (a) Convergence traces of IMGA and B-GA, (b) Convergence traces of IMPSO and B-PSO, (c) Convergence traces of IMDE and B-DE, and (d) Convergence traces of IMABC and B-ABC.
Table 1. Summary of properties of problem pairs in test suite 1.
| Task Set | Category | Task Component | Dimensionality | Search Space | Inter-Task Similarity |
|---|---|---|---|---|---|
| P1 | CI+HS | Griewank (T1) / Rastrigin (T2) | 50 / 50 | [−100, 100] / [−50, 50] | 1.00 |
| P2 | CI+MS | Ackley (T1) / Rastrigin (T2) | 50 / 50 | [−50, 50] / [−50, 50] | 0.22 |
| P3 | CI+LS | Ackley (T1) / Schwefel (T2) | 50 / 50 | [−50, 50] / [−500, 500] | 0.00 |
| P4 | PI+HS | Rastrigin (T1) / Sphere (T2) | 50 / 50 | [−50, 50] / [−100, 100] | 0.86 |
| P5 | PI+MS | Ackley (T1) / Rosenbrock (T2) | 50 / 50 | [−50, 50] / [−50, 50] | 0.21 |
| P6 | PI+LS | Ackley (T1) / Weierstrass (T2) | 50 / 25 | [−50, 50] / [−0.5, 0.5] | 0.07 |
| P7 | NI+HS | Rosenbrock (T1) / Rastrigin (T2) | 50 / 50 | [−50, 50] / [−50, 50] | 0.94 |
| P8 | NI+MS | Griewank (T1) / Weierstrass (T2) | 50 / 50 | [−100, 100] / [−0.5, 0.5] | 0.36 |
| P9 | NI+LS | Rastrigin (T1) / Schwefel (T2) | 50 / 50 | [−50, 50] / [−500, 500] | 0.00 |
Table 2. The mean and standard deviation (in brackets) of the best achieved fitness values of IMDE with different communication rates.
| Task Set | Task | IMDE (Ω = 0.2) | IMDE (Ω = 0.4) | IMDE (Ω = 0.6) | IMDE (Ω = 0.8) | IMDE (Ω = 1) |
|---|---|---|---|---|---|---|
| P1 | T1 | 1.20e−04 (3.63e−04) | 2.02e−03 (3.43e−03) | 1.92e−03 (3.95e−03) | 6.99e−04 (2.69e−03) | 4.44e−04 (1.62e−03) |
| P1 | T2 | 5.03e+01 (1.30e+01) | 4.32e+01 (7.83e+00) | 3.22e+01 (1.10e+01) | 3.48e+01 (9.85e+00) | 2.84e+01 (6.67e+00) |
| P2 | T1 | 4.74e−03 (9.54e−03) | 8.84e−02 (2.64e−01) | 1.11e−01 (3.27e−01) | 9.05e−04 (3.38e−03) | 1.02e−02 (3.95e−02) |
| P2 | T2 | 4.43e+01 (1.15e+01) | 3.98e+01 (1.13e+01) | 3.78e+01 (1.46e+01) | 3.04e+01 (7.59e+00) | 2.88e+01 (9.50e+00) |
| P4 | T1 | 8.10e+01 (9.75e+00) | 7.29e+01 (2.27e+01) | 7.46e+01 (1.22e+01) | 7.40e+01 (1.45e+01) | 6.77e+01 (1.66e+01) |
| P4 | T2 | 2.44e−06 (7.82e−06) | 8.06e−04 (2.91e−03) | 4.97e−05 (2.15e−04) | 1.46e−06 (4.66e−06) | 1.23e−04 (5.35e−04) |
| P5 | T1 | 6.74e−05 (9.91e−05) | 7.29e−05 (5.01e−05) | 3.14e−04 (5.71e−04) | 1.32e−04 (2.92e−04) | 9.48e−05 (1.79e−04) |
| P5 | T2 | 6.48e+01 (2.97e+01) | 9.82e+01 (3.15e+01) | 7.82e+01 (2.97e+01) | 6.96e+01 (2.81e+01) | 7.33e+01 (3.08e+01) |
| P7 | T1 | 9.25e+01 (2.75e+01) | 8.83e+01 (4.47e+01) | 7.78e+01 (6.08e+01) | 6.86e+01 (3.42e+01) | 8.29e+01 (4.78e+01) |
| P7 | T2 | 5.14e+01 (1.22e+01) | 4.72e+01 (1.28e+01) | 4.10e+01 (1.22e+01) | 3.53e+01 (9.74e+00) | 3.74e+01 (9.87e+00) |
| P8 | T1 | 1.41e−03 (3.33e−03) | 3.39e−03 (7.11e−03) | 1.49e−03 (3.51e−03) | 2.03e−03 (5.95e−03) | 1.62e−03 (3.84e−03) |
| P8 | T2 | 5.93e+00 (2.04e+00) | 3.84e+00 (1.34e+00) | 4.41e+00 (1.28e+00) | 4.20e+00 (1.30e+00) | 3.55e+00 (2.07e+00) |
Table 3. The mean and standard deviation (in brackets) of the best achieved fitness values of IMDE with different randomly chosen ratios.
| Task Set | Task | IMDE (γ = 0) | IMDE (γ = 0.2) | IMDE (γ = 0.4) | IMDE (γ = 0.6) | IMDE (γ = 0.8) | IMDE (γ = 1.0) |
|---|---|---|---|---|---|---|---|
| P1 | T1 | 2.12e−03 (4.22e−03) | 1.20e−04 (3.63e−04) | 1.46e−03 (4.21e−03) | 1.51e−03 (2.95e−03) | 1.38e−03 (3.36e−03) | 1.65e−03 (3.22e−03) |
| P1 | T2 | 5.55e+01 (1.67e+01) | 5.03e+01 (1.30e+01) | 4.95e+01 (1.23e+01) | 4.79e+01 (1.39e+01) | 5.08e+01 (1.34e+01) | 5.72e+01 (1.41e+01) |
| P2 | T1 | 4.41e−02 (1.92e−01) | 4.74e−03 (9.54e−03) | 1.33e−04 (1.86e−04) | 1.79e−03 (3.96e−03) | 1.48e−01 (3.50e−01) | 1.44e−01 (4.41e−01) |
| P2 | T2 | 4.72e+01 (1.47e+01) | 4.43e+01 (1.15e+01) | 4.80e+01 (1.48e+01) | 5.24e+01 (1.82e+01) | 5.11e+01 (1.52e+01) | 4.95e+01 (1.20e+01) |
| P4 | T1 | 8.57e+01 (2.72e+01) | 8.10e+01 (9.75e+00) | 8.06e+01 (1.78e+01) | 8.77e+01 (2.75e+01) | 7.98e+01 (2.01e+01) | 7.72e+01 (2.35e+01) |
| P4 | T2 | 1.70e−01 (6.46e−01) | 2.44e−06 (7.82e−06) | 1.96e−04 (7.32e−04) | 6.10e−05 (2.57e−04) | 6.76e−05 (2.94e−04) | 3.07e−02 (9.20e−02) |
| P5 | T1 | 3.30e−04 (6.71e−04) | 6.74e−05 (9.91e−05) | 3.30e−04 (8.34e−04) | 1.89e−04 (2.95e−04) | 1.23e−03 (4.55e−03) | 5.44e−04 (1.38e−03) |
| P5 | T2 | 8.93e+01 (2.67e+01) | 6.48e+01 (2.97e+01) | 8.45e+01 (2.96e+01) | 9.97e+01 (3.19e+01) | 9.29e+01 (2.75e+01) | 9.07e+01 (3.23e+01) |
| P7 | T1 | 9.93e+01 (4.71e+01) | 9.25e+01 (2.75e+01) | 1.01e+02 (5.66e+01) | 7.44e+01 (2.56e+01) | 8.83e+01 (3.74e+01) | 7.52e+01 (4.35e+01) |
| P7 | T2 | 5.82e+01 (1.06e+01) | 5.14e+01 (1.22e+01) | 5.76e+01 (9.24e+00) | 5.53e+01 (1.41e+01) | 5.37e+01 (1.04e+01) | 5.60e+01 (1.02e+01) |
| P8 | T1 | 1.72e−03 (3.97e−03) | 1.41e−03 (3.33e−03) | 2.54e−03 (6.20e−03) | 1.63e−03 (2.92e−03) | 4.27e−03 (6.69e−03) | 3.24e−03 (6.19e−03) |
| P8 | T2 | 6.18e+00 (2.55e+00) | 5.93e+00 (2.04e+00) | 5.47e+00 (2.95e+00) | 6.18e+00 (3.71e+00) | 5.11e+00 (2.06e+00) | 5.15e+00 (2.67e+00) |
Table 4. Results of computational time (s) and the mean and standard deviation (in brackets) of the best fitness values in test suite 1.
GA-based pair: IMGA vs. MFEA; PSO-based pair: IMPSO vs. MFPSO; DE-based pair: IMDE vs. MFDE.

| Task Set | Task | IMGA | MFEA | IMPSO | MFPSO | IMDE | MFDE |
|---|---|---|---|---|---|---|---|
| P1 | T1 | 1.11e−01 (5.27e−02) | 3.35e−01 (4.88e−02) + | 4.99e−03 (6.57e−03) | 5.20e−01 (1.42e−01) + | 1.20e−04 (3.63e−04) | 8.77e−04 (2.63e−03) + |
| P1 | T2 | 3.39e+02 (4.72e+01) | 2.27e+02 (5.33e+01) | 3.26e+02 (6.26e+01) | 3.32e+02 (2.54e+01) | 5.03e+01 (1.30e+01) | 3.69e+00 (1.14e+01) |
| P1 | Time (s) | 4.02 | 18.57 | 3.49 | 21.79 | 4.87 | 20.29 |
| P2 | T1 | 3.83e+00 (8.78e−01) | 8.00e+00 (6.38e−01) + | 3.93e−01 (4.54e−01) | 5.32e+00 (6.68e−01) + | 4.74e−03 (9.54e−03) | 1.08e−01 (3.28e−01) + |
| P2 | T2 | 4.16e+02 (3.65e+01) | 4.52e+02 (5.68e+01) + | 3.73e+01 (1.20e+01) | 3.97e+02 (4.24e+01) + | 4.43e+01 (1.15e+01) | 7.47e−01 (2.83e+00) |
| P2 | Time (s) | 4.24 | 19.00 | 3.43 | 19.03 | 5.47 | 18.87 |
| P3 | T1 | 2.08e+01 (4.19e−01) | 2.11e+01 (7.66e−02) | 2.10e+01 (5.53e−02) | 2.13e+01 (5.07e−02) + | 2.12e+01 (3.86e−02) | 2.12e+01 (3.33e−02) |
| P3 | T2 | 1.30e+04 (6.82e+02) | 9.46e+03 (6.91e+02) | 1.64e+04 (3.28e+02) | 1.54e+04 (8.39e+02) | 9.68e+03 (2.25e+03) | 1.15e+04 (1.45e+03) + |
| P3 | Time (s) | 4.40 | 19.06 | 2.61 | 18.89 | 5.79 | 18.98 |
| P4 | T1 | 1.95e+02 (4.43e+01) | 7.78e+02 (1.00e+02) + | 3.23e+02 (8.88e+01) | 7.72e+02 (1.09e+02) + | 8.10e+01 (9.75e+00) | 8.13e+01 (1.71e+01) |
| P4 | T2 | 7.64e+03 (6.35e+02) | 2.58e+02 (8.90e+01) | 3.38e−04 (2.65e−04) | 3.53e+03 (8.34e+02) + | 2.44e−06 (7.82e−06) | 1.64e−05 (1.30e−05) + |
| P4 | Time (s) | 4.25 | 19.17 | 2.60 | 21.91 | 4.79 | 20.00 |
| P5 | T1 | 3.59e+00 (7.93e−01) | 7.21e+00 (5.99e−01) + | 2.52e−01 (3.81e−01) | 3.69e+00 (6.01e−01) + | 6.74e−05 (9.91e−05) | 2.80e−03 (5.52e−03) + |
| P5 | T2 | 2.49e+04 (2.25e+04) | 7.37e+04 (4.33e+04) + | 6.67e+01 (3.11e+01) | 8.39e+03 (3.85e+03) + | 6.48e+01 (2.97e+01) | 6.52e+01 (2.28e+01) |
| P5 | Time (s) | 4.09 | 18.80 | 2.63 | 20.70 | 5.14 | 19.47 |
| P6 | T1 | 3.74e+00 (8.99e−01) | 2.10e+01 (7.61e−02) + | 3.10e−01 (5.02e−01) | 1.02e+01 (1.34e+00) + | 3.06e−01 (5.67e−01) | 7.71e−01 (1.08e+00) + |
| P6 | T2 | 5.52e+00 (8.98e−01) | 2.17e+01 (2.49e+00) + | 2.19e+01 (3.61e+00) | 7.87e+00 (1.47e+00) | 1.88e+00 (2.37e+00) | 2.61e−01 (6.59e−01) |
| P6 | Time (s) | 19.46 | 27.67 | 18.20 | 46.21 | 30.13 | 33.32 |
| P7 | T1 | 2.25e+03 (1.74e+03) | 7.75e+04 (3.70e+04) + | 8.56e+01 (8.06e+01) | 1.05e+05 (4.72e+04) + | 9.25e+01 (2.75e+01) | 1.17e+02 (1.17e+02) |
| P7 | T2 | 5.78e+02 (2.49e+02) | 4.30e+02 (6.03e+01) | 7.09e+01 (3.80e+01) | 3.77e+02 (6.90e+01) + | 5.14e+01 (1.22e+01) | 2.65e+01 (1.92e+01) |
| P7 | Time (s) | 4.47 | 18.42 | 3.57 | 21.59 | 5.55 | 18.78 |
| P8 | T1 | 4.21e−02 (1.28e−02) | 1.04e+00 (4.22e−02) + | 5.33e−03 (6.56e−03) | 1.06e+00 (3.68e−02) + | 1.41e−03 (3.33e−03) | 1.67e−03 (3.99e−03) + |
| P8 | T2 | 3.07e+01 (5.12e−01) | 2.86e+01 (2.66e+00) | 5.26e+01 (3.80e+00) | 2.91e+01 (2.01e+00) | 5.93e+00 (2.04e+00) | 2.81e+00 (1.20e+00) |
| P8 | Time (s) | 33.23 | 38.55 | 35.86 | 70.41 | 51.45 | 63.82 |
| P9 | T1 | 3.36e+02 (8.16e+02) | 7.33e+02 (7.24e+01) + | 3.55e+02 (6.91e+01) | 2.43e+03 (6.27e+02) + | 3.22e+02 (1.06e+02) | 1.01e+02 (2.66e+01) |
| P9 | T2 | 1.75e+04 (5.37e+02) | 9.81e+03 (6.92e+02) | 1.60e+04 (4.90e+02) | 1.57e+04 (5.88e+02) | 5.86e+03 (5.84e+02) | 4.24e+03 (8.61e+02) |
| P9 | Time (s) | 5.69 | 17.90 | 3.71 | 22.33 | 6.23 | 24.75 |
| +/−/≈ | | | 11/6/1 | | 13/3/2 | | 7/6/5 |
“+” means that IMTO significantly outperforms the compared algorithm, “−” means that IMTO is significantly outperformed by it, and “≈” means that there is no significant difference between IMTO and the compared algorithm.
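This excerpt does not name the statistical test behind the +/−/≈ marks; a two-sided Wilcoxon rank-sum test at the 0.05 level is the usual choice for such pairwise comparisons in evolutionary computation, and the sketch below shows how per-run best-fitness samples could be labeled under that assumption. `significance_mark` is a hypothetical helper, minimization is assumed, and SciPy is an assumed dependency.

```python
from scipy.stats import ranksums  # SciPy assumed available

def significance_mark(imto_runs, other_runs, alpha=0.05):
    """Label one comparison as '+', '−', or '≈' from per-run best-fitness samples.

    '+' : IMTO significantly better (lower best fitness; minimization assumed)
    '−' : IMTO significantly worse
    '≈' : no significant difference at level alpha
    """
    statistic, p_value = ranksums(imto_runs, other_runs)
    if p_value >= alpha:
        return "≈"
    # A negative statistic means the IMTO sample tends to rank lower (better).
    return "+" if statistic < 0 else "−"

# Toy example: IMTO's samples are clearly lower, so this prints '+'.
print(significance_mark([0.10, 0.20, 0.15, 0.12, 0.18],
                        [0.90, 1.10, 0.95, 1.05, 0.99]))
```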
Table 5. The mean and standard deviation (in brackets) of the best achieved fitness values in test suite 2. Columns are grouped by base solver: GA-based (IMGA, MFEA), PSO-based (IMPSO, MFPSO), and DE-based (IMDE, MFDE).
Task Set | IMGA | MFEA | IMPSO | MFPSO | IMDE | MFDE
P1 T1 | 6.226 × 10^2 (1.999 × 10^−1) | 6.241 × 10^2 (2.565 × 10^−1) + | 6.232 × 10^2 (2.039 × 10^−1) | 6.235 × 10^2 (2.507 × 10^−1) + | 6.217 × 10^2 (1.503 × 10^−1) | 6.217 × 10^2 (1.236 × 10^−1) ≈
P1 T2 | 6.279 × 10^2 (1.476 × 10^−1) | 6.272 × 10^2 (1.722 × 10^−1) − | 6.262 × 10^2 (2.346 × 10^−1) | 6.270 × 10^2 (2.411 × 10^−1) + | 6.246 × 10^2 (1.351 × 10^−1) | 6.246 × 10^2 (1.217 × 10^−1) ≈
P2 T1 | 7.112 × 10^2 (2.545 × 10^−3) | 7.113 × 10^2 (1.519 × 10^−2) + | 7.112 × 10^2 (2.068 × 10^−4) | 7.113 × 10^2 (6.677 × 10^−2) + | 7.112 × 10^2 (7.461 × 10^−10) | 7.112 × 10^2 (6.483 × 10^−8) +
P2 T2 | 7.194 × 10^2 (6.928 × 10^−2) | 7.177 × 10^2 (1.642 × 10^−2) − | 7.176 × 10^2 (3.411 × 10^−13) | 7.178 × 10^2 (1.976 × 10^−1) + | 7.176 × 10^2 (1.450 × 10^−9) | 7.176 × 10^2 (1.465 × 10^−7) +
P3 T1 | 2.887 × 10^6 (2.510 × 10^4) | 2.974 × 10^6 (2.537 × 10^4) + | 2.834 × 10^6 (0.000 × 10^0) | 2.878 × 10^6 (6.454 × 10^4) + | 2.834 × 10^6 (2.792 × 10^−3) | 2.834 × 10^6 (8.790 × 10^−2) +
P3 T2 | 5.787 × 10^7 (1.480 × 10^6) | 3.597 × 10^7 (2.123 × 10^5) − | 3.474 × 10^7 (7.451 × 10^−9) | 3.637 × 10^7 (1.434 × 10^6) + | 3.474 × 10^7 (2.889 × 10^−2) | 3.474 × 10^7 (1.096 × 10^0) +
P4 T1 | 1.304 × 10^3 (2.390 × 10^−4) | 3.400 × 10^5 (4.295 × 10^2) + | 1.304 × 10^3 (2.274 × 10^−13) | 1.304 × 10^3 (2.450 × 10^−3) − | 1.304 × 10^3 (2.119 × 10^−11) | 1.304 × 10^3 (1.346 × 10^−9) +
P4 T2 | 1.305 × 10^3 (2.121 × 10^−3) | 8.574 × 10^5 (1.582 × 10^3) + | 1.305 × 10^3 (4.547 × 10^−13) | 1.305 × 10^3 (3.531 × 10^−3) − | 1.305 × 10^3 (1.993 × 10^−11) | 1.305 × 10^3 (1.320 × 10^−9) +
P5 T1 | 3.374 × 10^5 (4.217 × 10^2) | 3.400 × 10^5 (4.230 × 10^2) + | 3.366 × 10^5 (4.285 × 10^−2) | 3.384 × 10^5 (2.654 × 10^3) + | 3.366 × 10^5 (3.049 × 10^0) | 3.366 × 10^5 (2.084 × 10^−3) −
P5 T2 | 9.640 × 10^5 (8.560 × 10^3) | 8.574 × 10^5 (1.548 × 10^3) − | 8.491 × 10^5 (2.755 × 10^−10) | 8.618 × 10^5 (8.758 × 10^3) + | 8.491 × 10^5 (7.984 × 10^−5) | 8.491 × 10^5 (7.594 × 10^−3) +
P6 T1 | 1.868 × 10^8 (1.105 × 10^5) | 1.892 × 10^8 (3.407 × 10^5) + | 1.867 × 10^8 (5.960 × 10^−8) | 1.885 × 10^8 (1.482 × 10^6) + | 1.867 × 10^8 (1.422 × 10^−2) | 1.867 × 10^8 (1.120 × 10^0) +
P6 T2 | 2.815 × 10^9 (6.530 × 10^6) | 2.671 × 10^9 (2.557 × 10^6) − | 2.653 × 10^9 (4.768 × 10^−7) | 2.674 × 10^9 (1.551 × 10^7) + | 2.653 × 10^9 (1.088 × 10^−1) | 2.653 × 10^9 (9.594 × 10^0) +
P7 T1 | 6.221 × 10^4 (9.702 × 10^1) | 6.284 × 10^4 (1.462 × 10^2) + | 6.201 × 10^4 (4.659 × 10^−1) | 6.323 × 10^4 (9.237 × 10^2) + | 6.201 × 10^4 (1.970 × 10^0) | 6.201 × 10^4 (5.445 × 10^−4) +
P7 T2 | 1.724 × 10^4 (1.394 × 10^2) | 1.495 × 10^4 (2.677 × 10^1) − | 1.478 × 10^4 (7.520 × 10^0) | 1.481 × 10^4 (4.567 × 10^1) + | 1.477 × 10^4 (7.585 × 10^−1) | 1.477 × 10^4 (2.106 × 10^0) ≈
P8 T1 | 5.201 × 10^2 (6.791 × 10^−2) | 5.203 × 10^2 (9.244 × 10^−2) + | 5.208 × 10^2 (1.072 × 10^−1) | 5.205 × 10^2 (1.066 × 10^−1) − | 5.202 × 10^2 (5.078 × 10^−2) | 5.202 × 10^2 (1.028 × 10^−1) ≈
P8 T2 | 5.214 × 10^2 (5.860 × 10^−2) | 5.202 × 10^2 (8.411 × 10^−2) − | 5.207 × 10^2 (1.176 × 10^−1) | 5.206 × 10^2 (1.349 × 10^−1) ≈ | 5.202 × 10^2 (5.740 × 10^−2) | 5.202 × 10^2 (6.809 × 10^−2) ≈
P9 T1 | 1.898 × 10^4 (3.961 × 10^0) | 1.902 × 10^4 (9.390 × 10^0) + | 1.897 × 10^4 (2.119 × 10^0) | 1.906 × 10^4 (1.112 × 10^2) + | 1.897 × 10^4 (1.770 × 10^0) | 1.897 × 10^4 (1.125 × 10^2) ≈
P9 T2 | 1.624 × 10^3 (1.820 × 10^−1) | 1.622 × 10^3 (7.316 × 10^−2) − | 1.622 × 10^3 (1.012 × 10^−1) | 1.622 × 10^3 (1.118 × 10^−1) ≈ | 1.622 × 10^3 (1.113 × 10^−1) | 1.622 × 10^3 (8.822 × 10^−2) ≈
P10 T1 | 1.947 × 10^9 (1.414 × 10^6) | 1.957 × 10^9 (1.920 × 10^6) + | 1.945 × 10^9 (2.384 × 10^−7) | 1.972 × 10^9 (1.430 × 10^7) + | 1.945 × 10^9 (1.043 × 10^−1) | 1.945 × 10^9 (8.881 × 10^0) +
P10 T2 | 7.516 × 10^8 (4.203 × 10^6) | 6.781 × 10^8 (6.624 × 10^5) − | 6.728 × 10^8 (1.192 × 10^−7) | 6.740 × 10^8 (2.580 × 10^6) + | 6.728 × 10^8 (7.069 × 10^−2) | 6.728 × 10^8 (4.084 × 10^0) +
+/−/≈ | — | 11/9/0 | — | 15/3/2 | — | 12/1/7
“+” means that IMTO significantly outperforms the compared algorithm, “−” means that IMTO is significantly outperformed by it, and “≈” means that there is no significant difference between IMTO and the compared algorithm.
Table 6. The mean and standard deviation (in brackets) of the best achieved fitness values in test suite 1. Columns are grouped by base solver: GA-based (IMGA, GA), DE-based (IMDE, DE), PSO-based (IMPSO, PSO), and ABC-based (IMABC, ABC).
Task Set | IMGA | GA | IMDE | DE | IMPSO | PSO | IMABC | ABC
P1 T1 | 1.11 × 10^−1 (5.27 × 10^−2) | 9.22 × 10^−1 (4.21 × 10^−1) + | 1.20 × 10^−4 (3.63 × 10^−4) | 2.49 × 10^−3 (5.33 × 10^−3) ≈ | 4.99 × 10^−3 (6.57 × 10^−3) | 4.80 × 10^−2 (1.13 × 10^−2) + | 2.47 × 10^−1 (1.22 × 10^−1) | 1.75 × 10^0 (1.32 × 10^−1) +
P1 T2 | 3.39 × 10^2 (4.72 × 10^1) | 9.23 × 10^2 (7.73 × 10^2) + | 5.03 × 10^1 (1.30 × 10^1) | 4.03 × 10^2 (1.84 × 10^1) + | 3.26 × 10^2 (6.26 × 10^1) | 4.90 × 10^2 (7.21 × 10^1) + | 2.06 × 10^2 (6.73 × 10^1) | 1.33 × 10^3 (1.47 × 10^2) +
P2 T1 | 3.83 × 10^0 (8.78 × 10^−1) | 1.55 × 10^1 (3.39 × 10^0) + | 4.74 × 10^−3 (9.54 × 10^−3) | 2.33 × 10^−1 (4.46 × 10^−1) + | 3.93 × 10^−1 (4.54 × 10^−1) | 8.16 × 10^0 (1.44 × 10^0) + | 1.28 × 10^0 (8.18 × 10^−1) | 2.12 × 10^1 (3.61 × 10^−2) +
P2 T2 | 4.16 × 10^2 (3.65 × 10^1) | 7.14 × 10^3 (7.28 × 10^3) + | 4.43 × 10^1 (1.15 × 10^1) | 4.04 × 10^2 (2.66 × 10^1) + | 3.73 × 10^1 (1.20 × 10^1) | 5.14 × 10^2 (1.11 × 10^2) + | 1.25 × 10^2 (8.33 × 10^1) | 1.31 × 10^3 (1.30 × 10^2) +
P3 T1 | 2.08 × 10^1 (4.19 × 10^−1) | 2.00 × 10^1 (4.14 × 10^−2) − | 2.12 × 10^1 (3.86 × 10^−2) | 2.12 × 10^1 (2.77 × 10^−2) + | 2.10 × 10^1 (5.53 × 10^−2) | 2.08 × 10^1 (1.28 × 10^−1) | 2.12 × 10^1 (3.57 × 10^−2) | 2.12 × 10^1 (3.43 × 10^−2)
P3 T2 | 1.30 × 10^4 (6.82 × 10^2) | 1.79 × 10^4 (6.29 × 10^2) + | 9.68 × 10^3 (2.25 × 10^3) | 9.81 × 10^3 (1.72 × 10^3) ≈ | 1.64 × 10^4 (3.28 × 10^2) | 1.68 × 10^4 (5.75 × 10^2) + | 3.20 × 10^119 (1.32 × 10^120) | 1.72 × 10^122 (4.00 × 10^122) +
P4 T1 | 1.95 × 10^2 (4.43 × 10^1) | 8.63 × 10^2 (2.95 × 10^2) + | 8.10 × 10^1 (9.75 × 10^0) | 3.98 × 10^2 (2.01 × 10^1) + | 3.23 × 10^2 (8.88 × 10^1) | 4.62 × 10^2 (7.88 × 10^1) + | 6.29 × 10^2 (4.97 × 10^1) | 1.38 × 10^3 (2.19 × 10^2) +
P4 T2 | 7.64 × 10^3 (6.35 × 10^2) | 1.02 × 10^4 (7.25 × 10^2) + | 2.44 × 10^−6 (7.82 × 10^−6) | 4.64 × 10^−6 (1.81 × 10^−5) + | 3.38 × 10^−4 (2.65 × 10^−4) | 7.95 × 10^−1 (2.54 × 10^−1) + | 2.50 × 10^2 (1.06 × 10^2) | 2.89 × 10^3 (4.87 × 10^2) +
P5 T1 | 3.59 × 10^0 (7.93 × 10^−1) | 1.69 × 10^1 (3.94 × 10^0) + | 6.74 × 10^−5 (9.91 × 10^−5) | 1.67 × 10^−1 (4.40 × 10^−1) + | 2.52 × 10^−1 (3.81 × 10^−1) | 4.80 × 10^−2 (1.35 × 10^0) | 2.64 × 10^−1 (6.23 × 10^−2) | 2.12 × 10^1 (4.47 × 10^−2) +
P5 T2 | 2.49 × 10^4 (2.25 × 10^4) | 1.02 × 10^9 (9.87 × 10^8) + | 6.48 × 10^1 (2.97 × 10^1) | 4.09 × 10^4 (1.61 × 10^5) + | 6.67 × 10^1 (3.11 × 10^1) | 4.90 × 10^2 (1.07 × 10^2) + | 1.07 × 10^2 (5.57 × 10^0) | 8.49 × 10^8 (2.22 × 10^8) +
P6 T1 | 3.74 × 10^0 (8.99 × 10^−1) | 1.47 × 10^1 (3.81 × 10^0) + | 3.06 × 10^−1 (5.67 × 10^−1) | 2.25 × 10^−1 (4.21 × 10^−1) ≈ | 3.10 × 10^−1 (5.02 × 10^−1) | 8.29 × 10^0 (1.31 × 10^0) + | 2.12 × 10^1 (4.34 × 10^−2) | 2.12 × 10^1 (3.51 × 10^−2)
P6 T2 | 5.52 × 10^0 (8.98 × 10^−1) | 3.58 × 10^1 (1.45 × 10^0) + | 1.88 × 10^0 (2.37 × 10^0) | 2.51 × 10^0 (2.67 × 10^0) ≈ | 2.19 × 10^1 (3.61 × 10^0) | 2.14 × 10^1 (3.77 × 10^0) | 2.00 × 10^1 (4.47 × 10^0) | 3.15 × 10^1 (1.61 × 10^0) +
P7 T1 | 2.25 × 10^3 (1.74 × 10^3) | 4.25 × 10^6 (9.03 × 10^6) + | 9.25 × 10^1 (2.75 × 10^1) | 1.21 × 10^4 (3.03 × 10^4) + | 8.56 × 10^1 (8.06 × 10^1) | 3.47 × 10^2 (1.81 × 10^2) + | 5.78 × 10^2 (1.66 × 10^2) | 8.29 × 10^8 (1.76 × 10^8) +
P7 T2 | 5.78 × 10^2 (2.49 × 10^2) | 1.60 × 10^3 (1.25 × 10^3) + | 5.14 × 10^1 (1.22 × 10^1) | 4.01 × 10^2 (1.82 × 10^1) + | 7.09 × 10^1 (3.80 × 10^1) | 4.87 × 10^2 (9.55 × 10^1) + | 2.67 × 10^2 (3.88 × 10^1) | 1.30 × 10^3 (1.25 × 10^2) +
P8 T1 | 4.21 × 10^−2 (1.28 × 10^−2) | 1.06 × 10^0 (3.91 × 10^−1) + | 1.41 × 10^−3 (3.33 × 10^−3) | 9.94 × 10^−3 (2.17 × 10^−2) + | 5.33 × 10^−3 (6.56 × 10^−3) | 5.57 × 10^−2 (1.68 × 10^−2) + | 1.01 × 10^0 (3.17 × 10^−2) | 1.79 × 10^0 (1.32 × 10^−1) +
P8 T2 | 3.07 × 10^1 (5.12 × 10^−1) | 7.86 × 10^1 (1.90 × 10^0) + | 5.93 × 10^0 (2.04 × 10^0) | 6.71 × 10^0 (1.44 × 10^0) ≈ | 5.26 × 10^1 (3.80 × 10^0) | 4.41 × 10^1 (1.42 × 10^1) | 2.37 × 10^1 (1.15 × 10^0) | 7.38 × 10^1 (1.76 × 10^0) +
P9 T1 | 3.36 × 10^2 (8.16 × 10^2) | 8.94 × 10^2 (3.07 × 10^2) + | 3.22 × 10^2 (1.06 × 10^2) | 3.97 × 10^2 (2.92 × 10^1) + | 3.55 × 10^2 (6.91 × 10^1) | 4.64 × 10^2 (1.33 × 10^2) + | 2.23 × 10^3 (3.87 × 10^2) | 1.28 × 10^3 (1.86 × 10^2)
P9 T2 | 1.75 × 10^4 (5.37 × 10^2) | 1.81 × 10^4 (6.28 × 10^2) + | 5.86 × 10^3 (5.84 × 10^2) | 9.33 × 10^3 (1.57 × 10^3) + | 1.60 × 10^4 (4.90 × 10^2) | 1.70 × 10^4 (5.97 × 10^2) + | 3.08 × 10^120 (1.33 × 10^121) | 1.22 × 10^122 (4.80 × 10^122) +
+/−/≈ | — | 17/1/0 | — | 13/0/5 | — | 14/2/2 | — | 15/1/2
“+” means that IMTO significantly outperforms the compared algorithm, “−” means that IMTO is significantly outperformed by it, and “≈” means that there is no significant difference between IMTO and the compared algorithm.