Self-Adaptive Multi-Task Differential Evolution Optimization: With Case Studies in Weapon–Target Assignment Problem

Abstract: Multi-task optimization (MTO) concerns the simultaneous optimization of multiple optimization problems, with the aim of solving these problems better in terms of optimization accuracy or time cost. To handle MTO problems, many evolutionary MTO (EMTO) algorithms have emerged, each with distinctive strategies or frameworks for handling knowledge transfer between different optimization problems (tasks). In this paper, we explore the possibility of developing a more efficient EMTO solver based on differential evolution by introducing the strategies of a self-adaptive multi-task particle swarm optimization (SaMTPSO) algorithm, and by developing a new knowledge incorporation strategy. We then apply the proposed algorithm to the weapon–target assignment (WTA) problem, which has never before been explored in the field of EMTO. Experiments were conducted on a popular MTO test benchmark and a WTA-MTO test set. Experimental results show that knowledge transfer in the proposed algorithm is effective and efficient, and that EMTO is promising for solving WTA problems.


Introduction
Multi-task optimization (MTO) [1,2] studies the simultaneous optimization of multiple optimization problems, with the aim of achieving higher optimization performance on each problem, especially when compared to traditional single-task methodologies. MTO assumes that there is some common knowledge or that there are common patterns between related optimization problems (i.e., tasks), and that this common knowledge can be exploited during the optimization of related tasks. Based on these assumptions, studies on MTO explore frameworks and strategies to exploit the relatedness and the common knowledge between tasks, so as to develop more efficient optimization solvers. The results of these studies show that MTO is promising in finding better solutions for a batch of optimization problems [3,4].
Generally, the optimization of multiple optimization problems (tasks) is described as an MTO problem. To solve MTO problems, researchers have explored many optimization techniques over the years, including works based on Bayesian optimization and works based on evolutionary algorithms. In 2016, Gupta et al. [1] explored the possibility of evolutionary MTO (EMTO) for the first time. Since then, many efficient EMTO works have been published. Some works focus on adapting knowledge transfer to task relatedness [5][6][7][8], and other studies focus on the feasibility and effectiveness of EMTO in handling real-world problems [9][10][11][12].
Among all published works, we focus our attention on the self-adaptive multi-task particle swarm optimization (SaMTPSO) algorithm developed in [13]. The SaMTPSO is built on a traditional particle swarm optimization algorithm. In the SaMTPSO, three main strategies, i.e., the knowledge transfer adaptation strategy, the focus search strategy and the knowledge incorporation strategy, are devised to achieve adaptive knowledge transfer. In these strategies, the knowledge source pool, the success memory and the failure memory of every task help the SaMTPSO learn the probability of positively transferring knowledge from one task to another, which is then used to transfer knowledge between tasks adaptively. Therefore, this paper introduces these strategies into a differential evolution algorithm to develop an efficient solver for MTO.
Projectile weapons have become a powerful threat to hostile targets because they can inflict damage on the protected assets of participants from great distances. To counter this threat, air defense systems have emerged in many countries and are constantly searching for methods to reduce the missile threat, which in turn has increased the number and quality of available missiles. Therefore, the weapon-target assignment (WTA) problem, which concerns the effective allocation of air defense resources (i.e., weapons and targets), was proposed and has attracted the interest of many researchers. Over the years, many works have been performed, such as research using evolutionary algorithms [14,15] or research using exact algorithms [16,17]; however, the WTA problem, known to be NP-complete, is difficult to solve, especially as the number of targets or weapons grows. To this end, this paper explores the possibility of further improving the optimization efficiency on WTA problems via EMTO techniques.
In this paper, the knowledge transfer adaptation strategy and the focus search strategy of the SaMTPSO are brought into a DE algorithm, which enables the proposed algorithm to dynamically adapt to the complex inter-task relatedness of MTO problems. To better fit the evolving structure of differential evolution (DE), a novel knowledge incorporation strategy is devised for the mutation of the population, which helps the population generate better offspring by incorporating more useful information or knowledge. Befittingly, the proposed algorithm is named the self-adaptive multi-task differential evolution (SaMTDE) algorithm. In the experiments, after studying the knowledge transfer performance and the bi-task performance on a popular test benchmark, the SaMTDE was employed to solve WTA problems to explore the possibility of improving optimization performance on these problems. Experimental results demonstrate the promise of solving WTA problems via EMTO techniques.
The rest of the paper is organized as follows: In Section II, the mathematical model of MTO, some recent works on MTO and the WTA problem are introduced. In Section III, the details of the proposed algorithm are presented. In Section IV, the first two experiments study the performance of the proposed algorithm; in the third experiment, the proposed algorithm is applied to solve the WTA problem, and the performance of the SaMTDE is compared to that of several typical EMTO algorithms and traditional single-task algorithms. In the final section, conclusions are detailed along with some possible future works.

Background
Multi-task optimization (MTO) studies how to solve multiple optimization problems simultaneously. In particular, if there are K optimization problems to be optimized (each assumed to be a minimization problem and treated as a component task), the goal of MTO is to simultaneously optimize these K tasks and find the optimal solution of each, i.e., {x_1^o, x_2^o, ..., x_K^o} = argmin{f_1(x_1), f_2(x_2), ..., f_K(x_K)}, where x_t^o, x_t ∈ X_t, t = 1, 2, ..., K, and X_t is the d_t-dimensional search space of the t-th task T_t with objective function f_t : X_t → ℝ.
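As a minimal illustration of this definition (the task functions, dimensions and bounds below are our own toy choices, not from the paper), an MTO problem can be represented as a list of independent minimization tasks, each with its own search space:

```python
import numpy as np

# A two-task MTO problem: each task is an independent minimization problem
# f_t : X_t -> R with its own dimensionality d_t and box-constrained space X_t.
tasks = [
    {"f": lambda x: float(np.sum(x ** 2)),          # task T_1: sphere, d_1 = 2
     "dim": 2, "low": -5.0, "high": 5.0},
    {"f": lambda x: float(np.sum((x - 1.0) ** 2)),  # task T_2: shifted sphere, d_2 = 3
     "dim": 3, "low": -5.0, "high": 5.0},
]

def evaluate(t, x):
    """Evaluate a candidate solution x on task t only (each candidate costs
    exactly one function evaluation on one task)."""
    return tasks[t]["f"](np.asarray(x, dtype=float))
```

Each task keeps its own optimum; MTO only couples them through knowledge transfer during search, not through the objective functions themselves.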
Research on MTO has been performed for many years. In 2013, Swersky et al. [18] employed Bayesian optimization [19,20] to develop a multi-task Bayesian optimization framework for handling optimal hyperparameter setting problems more efficiently [21]. In 2016, Gupta et al. [1] developed a multi-factorial optimization framework (the MFEA) based on a genetic algorithm to solve numerical optimization problems, which is the first work to study evolutionary MTO (EMTO) techniques. Since then, many EMTO works have been published [7,[22][23][24][25][26][27][28][29]. In [2], Zheng et al. developed a flexible scheme that adapts knowledge transfer to task relatedness in the local region, implemented as the self-regulated evolutionary MTO (SREMTO) algorithm. In [30], Feng et al. introduced differential evolution (DE) into the framework of the above MFEA to develop a multi-factorial DE (MFDE) algorithm, where every individual can make use of knowledge transferred from other tasks with a probability rmp.

Self-Adaptive Multi-Task Particle Swarm Optimization
In [13], the self-adaptive multi-task particle swarm optimization (SaMTPSO) algorithm is proposed via a knowledge transfer adaptation strategy, a focus search strategy and a knowledge incorporation strategy. In the knowledge transfer adaptation strategy, every component task is optimized by one subpopulation, whose members form part of the SaMTPSO's population. A knowledge source pool is then devised for each task; each pool consists of the K component tasks of an MTO problem, as shown in Figure 1, and each of these tasks is viewed as a knowledge source. For every individual in the subpopulation of each task, a candidate knowledge source is chosen from the task's pool to provide knowledge that helps the individual generate better solutions, according to a probability learned from the task's (the subpopulation's) previous experience in generating promising solutions via these sources. In this strategy, the probability of choosing the k-th source in task T_t's pool is defined by Equation (1) below.

The probability p_{t,k} of choosing the k-th source in task T_t's pool is initialized to 1/K, so that all K knowledge sources of T_t have an equal probability of being chosen at the beginning of optimization. A knowledge source for an individual of subpop_t is chosen using the roulette wheel selection method [31]. After evaluating the offspring generated for task T_t using the knowledge transferred from the chosen sources at generation g, the numbers of offspring generated via the k-th source that do or do not enter the next generation, i.e., ns_{t,k}^{g} and nf_{t,k}^{g}, are respectively saved into the success memory and the failure memory of T_t, as shown in Figure 1. With the LP records in the success memory and the failure memory, the probabilities p_{t,k} shown in Figure 2 are updated (as reconstructed from the description accompanying Figure 2) by

p_{t,k} = SR_{t,k} / Σ_{k=1}^{K} SR_{t,k},    (1)

where

SR_{t,k} = (Σ_{g'=g-LP}^{g-1} ns_{t,k}^{g'}) / (Σ_{g'=g-LP}^{g-1} ns_{t,k}^{g'} + Σ_{g'=g-LP}^{g-1} nf_{t,k}^{g'}) + ε,

and where SR_{t,k} is replaced by bp, a base probability parameter assigning a small probability to those sources that have not been chosen by any individual in task T_t within the previous LP generations, and ε = 0.001 is a small constant used to avoid zero selection probabilities.

In the focus search strategy, the SaMTPSO monitors a task's success memory to detect failures of knowledge transfer on every knowledge source task. Specifically, if every record ns_{t,k}^{g} in the success memory of task T_t is zero, the SaMTPSO activates the focus search strategy for T_t, which then only allows each individual in subpop_t to choose the source task T_k with k = t. As a result, task T_t can only explore and exploit the knowledge from itself, i.e., the knowledge source is the task itself, so only intra-task knowledge transfer remains. For the other tasks, the knowledge transfer adaptation strategy still applies as long as the focus search strategy is not activated for them. For task T_t, the focus search strategy is deactivated once any progress is made on the task.
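A minimal sketch of this success/failure-memory probability update, under our assumptions about the memory layout (one list of per-source counts per generation in the LP window) and with `bp` and `eps` as the base probability and small constant described above:

```python
def update_probabilities(success_mem, failure_mem, bp=0.05, eps=0.001):
    """Compute the source-selection probabilities p_{t,k} for one task.

    success_mem[g][k] / failure_mem[g][k]: counts ns and nf for source k at
    the g-th generation of the LP window.
    bp: base probability for sources never chosen within the window.
    eps: small constant to avoid zero probabilities.
    """
    K = len(success_mem[0])
    sr = []
    for k in range(K):
        ns = sum(gen[k] for gen in success_mem)
        nf = sum(gen[k] for gen in failure_mem)
        if ns + nf == 0:            # source k was never chosen in the window
            sr.append(bp)
        else:                       # empirical success rate plus eps
            sr.append(ns / (ns + nf) + eps)
    total = sum(sr)
    return [s / total for s in sr]  # normalize so probabilities sum to 1
```

A source with many recent successes receives a proportionally larger selection probability, while unchosen sources keep a small but nonzero chance via `bp`.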
In the knowledge incorporation strategy, the authors develop two forms of knowledge incorporation to help every individual of each subpopulation better exploit the knowledge transferred from its chosen source.

Weapon-Target Assignment Problem
The weapon-target assignment (WTA) problem is fundamental to the defense applications of military operations. The problem is to find a proper assignment of interceptors (weapons) to incoming missiles (targets) that minimizes the expected damage to own-force assets. There are two distinct categories of WTA problems: static WTA (SWTA) problems and dynamic WTA (DWTA) problems. The SWTA describes a simple scenario wherein a known number of targets (with known destructive values) are observed and a finite number of weapons (with known probabilities of successfully destroying the targets, in other words, kill probabilities) engage the targets in a single stage. The DWTA is a multi-stage problem in which some weapons engage the targets at each stage; the DWTA then assesses the outcome of this engagement and proposes a new strategy for the next stage. In this paper, we only consider SWTA problems, and "WTA problem" in the remainder of the paper refers to the SWTA problem.
In the WTA problem, we assume that there are W weapons and M targets, and every weapon must be assigned to exactly one target. The WTA problem is then to find a proper assignment of weapons to targets through the minimization of the expected surviving value of the targets:

F = Σ_{j=1}^{M} v_j Π_{i=1}^{W} (1 − p_{i,j})^{A_{i,j}},    (2)

subject to Σ_{j=1}^{M} A_{i,j} = 1, i = 1, 2, ..., W, with A_{i,j} ∈ {0, 1} and p_{i,j} ∈ [0.0, 1.0], where v_j, j = 1, 2, ..., M, is the known destructive value of target j, p_{i,j} is the probability of weapon i successfully destroying target j, and A_{i,j} is a Boolean value indicating whether the i-th weapon is assigned to the j-th target.
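The objective above, the standard SWTA expected-surviving-value form, can be evaluated directly from an assignment matrix. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def wta_objective(A, p, v):
    """Expected surviving value of the targets for assignment A (Equation (2)).

    A: (W, M) 0/1 assignment matrix, A[i, j] = 1 if weapon i engages target j.
    p: (W, M) kill probabilities p[i, j].
    v: (M,) destructive values of the targets.
    """
    # Probability that target j survives every weapon assigned to it:
    # product over i of (1 - p[i, j]) ** A[i, j].
    survive = np.prod((1.0 - p) ** A, axis=0)
    return float(np.sum(v * survive))
```

For example, with two weapons each assigned to its own target and all kill probabilities 0.5, each target survives with probability 0.5.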
The WTA problem is known to be NP-complete; therefore, it is very hard for conventional evolutionary algorithms to find the global optimal solution of a WTA problem. In this paper, we study the possibility of solving WTA problems more efficiently through the newly emerging EMTO techniques.

Self-Adaptive Multi-Task Differential Evolution Optimization
In this section, an efficient self-adaptive multi-task differential evolution (SaMTDE) algorithm is developed by introducing the knowledge transfer adaptation strategy and the focus search strategy of the SaMTPSO. Moreover, a novel knowledge incorporation strategy is developed for the SaMTDE. The basic structure of the SaMTDE is shown in Algorithm 1.
In the SaMTDE, a population of N individuals is first split into K subpopulations ({subpop_t}_{t=1}^{K}), each optimizing one task. Following the knowledge transfer adaptation strategy of the SaMTPSO, for a task T_t, a success memory and a failure memory, as described in Figure 1, record the numbers of successes and failures of the individuals in subpop_t in using the knowledge from their chosen tasks over the last LP generations; these records are then used to compute the learned probabilities {p_{t,k}}_{k=1}^{K} for task T_t according to Equation (1). Because the learned probabilities {p_{t,k}}_{k=1}^{K} statistically describe how likely each task is to help task T_t, every individual in subpop_t in the next generation can make full use of these probabilities via the roulette wheel selection method, as shown in step 13 of Algorithm 1, so as to choose the most suitable knowledge source task whose knowledge can help it generate better offspring. For the focus search strategy in the proposed SaMTDE, an activation flag isFocus is set to 1 if there are no success records in a task T_t's success memory (step 24), which activates focus search on T_t by allowing the individuals in subpop_t to choose a knowledge source task from the only remaining option, the task itself, as performed in step 11.
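The roulette wheel selection used in step 13 can be sketched as follows (a standard implementation; the function name is ours):

```python
import random

def roulette_select(probs, rng=random):
    """Pick an index k with probability probs[k] (roulette wheel selection).

    probs: selection probabilities, assumed to sum to 1 (the learned p_{t,k}).
    """
    r = rng.random()          # uniform sample in [0, 1)
    cum = 0.0
    for k, pk in enumerate(probs):
        cum += pk             # walk the cumulative distribution
        if r < cum:
            return k
    return len(probs) - 1     # guard against floating-point round-off
```

An individual of subpop_t calls this with its task's learned probabilities; sources with larger p_{t,k} are chosen proportionally more often.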

Knowledge Incorporation Strategy and Offspring Generation
In a typical DE algorithm, a mutant vector v_i is first generated for every individual ind_i of the population using a mutation strategy such as DE/rand/1:

v_i = x_{i1} + F · (x_{i2} − x_{i3}),    (3)

where x_i denotes the genes of ind_i, i1, i2 and i3 are mutually exclusive indices of individuals in the population, and F is a scaling factor applied to the difference vector.
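The DE/rand/1 mutation in Equation (3) is a few lines of code (a standard sketch; names are ours):

```python
import numpy as np

def de_rand_1(pop, i, F=0.5, rng=np.random.default_rng()):
    """DE/rand/1 mutation (Equation (3)): v = x_{i1} + F * (x_{i2} - x_{i3}).

    pop: (N, D) array of the population's genes.
    i:   index of the target individual; i1, i2, i3 are drawn mutually
         distinct and different from i.
    """
    n = len(pop)
    i1, i2, i3 = rng.choice([j for j in range(n) if j != i],
                            size=3, replace=False)
    return pop[i1] + F * (pop[i2] - pop[i3])
```

If all individuals are identical, the difference vector vanishes and the mutant equals the base vector, which is why DE relies on population diversity to explore.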
To devise the knowledge incorporation strategy, recall that a knowledge source is chosen for every individual in steps 11 and 13 of Algorithm 1, which means the individual can make use of the knowledge transferred from this source. The knowledge incorporation strategy then generates a mutant vector for an individual by incorporating the knowledge from the individual's chosen source; specifically, Equation (3) is revised for every individual ind_{t,i} of subpopulation subpop_t as follows:

v_{t,i} = x_{ik,i1} + F · (x_{t,i2} − x_{t,i3}),    (4)

where x_{ik,i1} denotes the genes of a randomly selected individual ind_{ik,i1} from subpop_{ik}, ik is the index of the individual's chosen knowledge source task, and i2 and i3 are mutually exclusive indices of individuals in subpop_t.

In Equation (4), the genes of a randomly selected individual ind_{ik,i1} are transferred from the chosen source task T_ik. In the field of EMTO, a task's best genes found by the whole population are often considered the best choice to transfer between tasks. Although such genes may be representative of their tasks, the frequent transfer of such genes may result in negative impacts from the knowledge source tasks, especially when the source tasks become stuck in local optima. Therefore, the random selection process in Equation (4) should be better suited to handling MTO problems.
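A minimal sketch of the knowledge-incorporating mutation as we read Equation (4): the base vector is a randomly selected individual from the chosen source subpopulation, while the difference vector comes from the individual's own subpopulation (names and this difference-vector placement are our assumptions based on the description above):

```python
import numpy as np

def mutate_with_transfer(subpops, t, ik, i, F=0.5, rng=np.random.default_rng()):
    """Mutation with knowledge incorporation (Equation (4), as reconstructed):
    v_{t,i} = x_{ik,i1} + F * (x_{t,i2} - x_{t,i3}).

    subpops: list of (Ns, D) arrays, one per task.
    t:  index of the individual's own task; ik: its chosen source task.
    i:  index of the individual within subpop_t.
    """
    src, own = subpops[ik], subpops[t]
    i1 = rng.integers(len(src))      # random donor from the source subpopulation
    i2, i3 = rng.choice([j for j in range(len(own)) if j != i],
                        size=2, replace=False)
    return src[i1] + F * (own[i2] - own[i3])
```

When ik = t (focus search active), this degenerates to ordinary intra-task DE mutation.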
In Algorithm 2, a mutant vector v_{t,i} is first generated for individual ind_{t,i} according to Equation (4), as shown in step 1. Next, a crossover operation is performed on the genes x_{t,i} of ind_{t,i} and its mutant vector v_{t,i} to generate a trial vector u_{t,i} in step 2, which is formulated as follows:

u_{t,i,j} = v_{t,i,j} if rand_j ≤ CR or j = j_rand; otherwise u_{t,i,j} = x_{t,i,j},  j = 1, 2, ..., D,    (5)

where the crossover rate CR is a parameter in the range [0, 1] controlling the portion of u_{t,i} copied from the mutant vector v_{t,i}, rand_j is a uniform random number in [0, 1], j_rand is an integer randomly chosen from the range [1, D] to ensure that crossover is performed on at least one dimension of the trial vector, and D is the dimension of the search space.
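The binomial crossover in Equation (5) can be sketched as follows (a standard implementation; names are ours):

```python
import numpy as np

def binomial_crossover(x, v, CR=0.9, rng=np.random.default_rng()):
    """Binomial crossover (Equation (5)): each gene of the trial vector is
    taken from the mutant v with probability CR; one randomly chosen gene
    j_rand is always taken from v so the trial differs from the parent."""
    D = len(x)
    j_rand = rng.integers(D)
    mask = rng.random(D) <= CR   # genes inherited from the mutant
    mask[j_rand] = True          # force at least one mutant gene
    return np.where(mask, v, x)
```

With CR = 1.0 the trial equals the mutant; with CR near 0 only the forced j_rand gene (and rarely others) comes from the mutant.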

Algorithm 2: Offspring generation for individual ind_{t,i}.
Input: ik (index of the chosen knowledge source); F, CR (DE parameters).
Output: x_{t,i} (individual's position).
1: Generate a mutant vector v_{t,i} by applying the mutation operation (Equation (4)) to subpop_t.
2: Perform crossover (Equation (5)) on the mutant vector to obtain a trial vector u_{t,i}.

Evaluations and Selections
After crossover, trial vectors are generated for the population. In Algorithm 3, these trial vectors are evaluated on their corresponding tasks. Specifically, a trial vector u_{t,i} is evaluated on the fitness function f_t(·) of task T_t, as shown in step 7. Note that every trial vector is evaluated on only one task, which greatly reduces the evaluation cost of the proposed SaMTDE.
After evaluation, the selection operation updates each individual by keeping the better of the individual and its trial vector. By comparing the fitness of an individual ind_{t,i} and its trial vector u_{t,i}, the genes x_{t,i} of the individual are updated as follows:

x_{t,i} = u_{t,i} if f_t(u_{t,i}) < f_t(x_{t,i}); otherwise x_{t,i} is unchanged,    (6)

which is implemented in steps 6-15 of Algorithm 3. Meanwhile, the best solution found for each task and its associated fitness are updated if a better gene is found among the trial vectors (step 10). If a trial vector is better than the original individual, ns_{t,ik}^{g} is updated as shown in step 8, indicating that the knowledge from the chosen source T_ik helped to generate better genes; otherwise, only nf_{t,ik}^{g} is updated, as shown in step 14. After updating, ns_{t,ik}^{g} and nf_{t,ik}^{g} are stored in the success memory and the failure memory of the corresponding task T_t in step 17.
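The greedy selection plus success/failure bookkeeping can be sketched as follows (names and the counter representation are ours; `ns` and `nf` stand in for the per-source counters of the current generation):

```python
def select_and_record(f_x, f_u, ns, nf, ik):
    """Greedy DE selection (Equation (6)) with success/failure bookkeeping.

    f_x, f_u: fitness of the parent and of its trial vector (minimization).
    ns, nf:   per-source success/failure counters for the current generation,
              indexed by the chosen source task ik.
    Returns True if the trial vector replaces the individual.
    """
    if f_u < f_x:      # trial is strictly better: keep it, credit the source
        ns[ik] += 1
        return True
    nf[ik] += 1        # trial not better: keep the parent, record the failure
    return False
```

The returned flag tells the caller whether to overwrite x_{t,i} with u_{t,i}; the counters feed the Equation (1) update at the end of the generation.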

Algorithm 3: Evaluation and selection.
Input: K (number of tasks); iks_{t,i} (the index of the source chosen by individual ind_{t,i}).
Output: {x*_1, x*_2, ..., x*_K} (the best solutions found on the K tasks).

Experiments
A set of experiments was conducted to demonstrate the performance of the developed SaMTDE by comparison with the MFDE, the SaMTPSO, the SREMTO, the MFEA, JADE [32] and a single-task DE algorithm. Meanwhile, the SaMTDE was applied to solve multiple weapon-target assignment problems as previously introduced, which demonstrates the promise of the SaMTDE in handling real-world optimization problems. Before the experiments are discussed, the employed test problems and the experimental environment are presented.

Test Problems
Two test suites were considered in the following experiments. Test suite I, shown in Table 1, contains 9 popular bi-task MTO problems used in the CEC'2017 Evolutionary Multi-Task Optimization Competition. These 9 well-designed MTO problems differ from each other in the degree of overlap of their global optima and in their inter-task relatedness. In the table, the optima of the two component tasks in problems 1 to 3 are in complete intersection, which means the component tasks have the same optima in the unified search space. For problems 4 to 6, the component tasks have partial intersection in their global optima. For problems 7 to 9, the component tasks have no intersection in their global optima. In each of these problems, the inter-task relatedness is measured via the inter-task similarity R_s, which is computed via Spearman's rank correlation coefficient [33]. Details of these problems are given in Table 1.
In test suite II, 9 MTO problems were constructed from WTA problems. As shown in Table 2, each MTO problem consists of two component tasks, each of which is a WTA problem. For a WTA problem, the numbers of weapons and targets may differ. In problems 1 to 3, the numbers of weapons and targets are the same within each task, and are also the same between the two component tasks of each MTO problem. In problems 4 to 6, the numbers of weapons and targets are the same within each task, but differ between the two component tasks of each MTO problem. In problems 7 to 9, the numbers of weapons and targets differ within each task, but are the same between the two component tasks of each MTO problem. For the weapons and targets in each task, the probability that a weapon destroys a target and the destructive value of every target are detailed in Tables A1-A9, which are attached in Appendix A.

Experimental Setup
In the following experiments, we (1) analyze the effectiveness of knowledge transfer in the SaMTDE by comparing performance on test suite I; (2) compare the proposed SaMTDE with the SaMTPSO, the MFDE, the SREMTO, the MFEA and the DE in terms of performance on the 9 popular MTO problems from Table 1; and (3) apply the SaMTDE to solve the WTA problems from Table 2 while comparing the results of the SaMTDE with those of the SaMTPSO, the MFDE, the SREMTO, the MFEA, the DE and the JADE.
The parameter settings of the SaMTDE follow those of the SaMTPSO. The crossover rate and the scaling factor in the SaMTDE are set empirically according to Zhang et al. [32]. The settings of the SaMTPSO, the MFDE, the SREMTO, the MFEA, the DE and the JADE are as suggested in the original literature.

Parameter settings in JADE:
- crossover rate CR: randomly generated from a normal distribution with mean 0.5 and standard deviation 0.1.
- scaling factor F: randomly generated from a Cauchy distribution with location parameter 0.5 and scale parameter 0.1.
A total of 30 runs were conducted for all considered algorithms on every MTO test problem. The stopping criterion is the maximum number of function evaluations (maxFEs), which is set to 100,000, as in many research papers. After the 30 runs on each MTO problem, the mean and standard deviation of the achieved objective function values (FVs) are presented for every algorithm. To compare the performance of multiple algorithms in the multitasking scenario, a popular performance score is computed for the involved algorithms based on their results on each MTO problem. According to [33], the score of an algorithm q, q ∈ {1, 2, ..., Q}, is computed as the sum over tasks j and runs l of (I_{q,j,l} − μ_j)/σ_j, where I_{q,j,l} is algorithm q's achieved FV on the j-th task in the l-th run, and μ_j and σ_j are the mean and standard deviation of the FVs achieved by all algorithms in all runs on task j. Hence, an algorithm has a smaller score if it performs better.

To study the effectiveness of knowledge transfer in the proposed SaMTDE, the first experiment compared the results of the SaMTDE with those of the DE algorithm on the 9 MTO test problems in Table 1. As previously introduced, the SaMTDE employs the same mutation operation, crossover operation, selection operation and parameter settings as the DE algorithm; by comparing their performance, we can directly observe the effectiveness of knowledge transfer in the SaMTDE. Table 3 reports the means and bracketed standard deviations of the results achieved by both algorithms over 30 runs. As shown in Table 3, the proposed SaMTDE achieves better results on all 9 MTO test problems, and ultimately achieved a mean score of −3.07 × 10^1. On the problems with complete intersection of their global optima, such as problems 1, 2 and 3, the performance improvement over the DE (a traditional single-task algorithm) is very significant. On the other problems, such as problems 5, 6, 7 and 8, the SaMTDE still achieves very good performance. Only on problems 4 and 9 is the improvement not very significant. For problem 4, this may be because the DE has already optimized the two component tasks very well. Although the inter-task relatedness is very high across their search spaces (R_s = 0.86, as shown in Table 1), the lack of common features in their global optima prevents the SaMTDE from finding better results at the final stage of optimization. For problem 9, knowledge transfer cannot help the SaMTDE find better results due to the lack of inter-task relatedness (R_s = 0.0, as shown in Table 1). In summary, knowledge transfer in the proposed SaMTDE algorithm works efficiently.
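The performance score used in these comparisons can be sketched as follows (a minimal implementation; the array layout is our assumption):

```python
import numpy as np

def performance_scores(I):
    """Score each algorithm as in [33]: standardize every (task, run) result
    across all algorithms, then sum the standardized values per algorithm.

    I: array of shape (Q, J, L) -- Q algorithms, J tasks, L runs;
       I[q, j, l] is algorithm q's achieved FV on task j in run l.
    Returns an array of Q scores; smaller means better.
    """
    I = np.asarray(I, dtype=float)
    mu = I.mean(axis=(0, 2), keepdims=True)    # per-task mean over all algorithms and runs
    sigma = I.std(axis=(0, 2), keepdims=True)  # per-task std over all algorithms and runs
    return ((I - mu) / sigma).sum(axis=(1, 2))
```

Because the standardization is per task, tasks with very different objective scales contribute comparably, and the scores of all algorithms sum to zero.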
In Figure 3, the averaged convergence curves of the SaMTDE over 30 runs are presented along with the curves of the DE algorithm. From the figures, we can see that the effect of knowledge transfer in the SaMTDE appears around generations 200 to 500. In problems 1 to 3, the significant performance improvement is again visible from a different view. For problem 9, we can see from the corresponding figure that knowledge transfer in the SaMTDE can hardly find better solutions for the two component tasks throughout the optimization process, because the tasks bear almost no inter-task relatedness.

In the second experiment, the performance of the SaMTDE on the 9 popular MTO test problems from Table 1 is compared to that of several algorithms, including the SaMTPSO, the MFDE, the MFEA and the DE. Table 4 presents the means and bracketed standard deviations of the FVs achieved over 30 runs by these algorithms. Comparing the considered algorithms, we can observe that the SaMTDE achieves competitive results and attains the best mean score of −3.40 × 10^1. Compared to the mean score of −1.85 × 10^1 achieved by the SaMTPSO, the SaMTDE achieves better results. On problems 1 to 8, the SaMTDE does not achieve the best results, but its results are still promising, as presented in the table, especially considering the results of its single-task version (the DE algorithm). The DE has the same mutation, crossover and selection operations as well as the same parameter settings; however, the DE only achieves a mean score of 2.51 × 10^1 while the SaMTDE achieves −3.40 × 10^1; therefore, the SaMTDE is promising.

Solving the Weapon-Target Optimization problems
In this experiment, the proposed SaMTDE was applied to the MTO problems from Table 2, each constructed from two different weapon-target assignment problems. To decode from the continuous search space of the SaMTDE, a simple direct decoding strategy is employed: a decision variable x in the continuous space is decoded for a WTA problem as round(x), where round(·) is a rounding function. To demonstrate the performance of the SaMTDE, its results are compared with those of the SaMTPSO, the MFDE, the SREMTO, the MFEA, the DE and the JADE. By comparing with the DE and the JADE, we can observe the performance improvement of MTO in handling the WTA problems, which may show another way to solve WTA problems more efficiently. The means and bracketed standard deviations of the FVs achieved by every algorithm over 20 runs are presented in Table 5.
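The decoding step can be sketched as follows; the paper only states that round() is used, so the clipping to the valid target range is our assumption about how out-of-range values are handled:

```python
def decode(x, M):
    """Decode one continuous decision variable into a target index for a
    weapon: round to the nearest integer, then clip to the valid range
    [0, M-1]. Clipping is our assumption; the paper states only round()."""
    j = int(round(x))
    return max(0, min(M - 1, j))
```

Applying this per decision variable turns a continuous candidate vector of length W into a full weapon-to-target assignment.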
From Table 5, we can observe that the SaMTDE obtains the best mean score, −3.2 × 10^1, among all algorithms. On 6 of the 9 MTO problems, the SaMTDE achieved the best scores. On the remaining 3 MTO problems (problems 2, 6 and 9), the SaMTDE still achieved competitive results. Compared to the SaMTPSO, the performance of the SaMTDE is outstanding on all 9 MTO problems.

Conclusions and Future Work
In this paper, an efficient SaMTDE algorithm is proposed by introducing the knowledge transfer adaptation strategy and the focus search strategy of the SaMTPSO into a traditional DE algorithm. Meanwhile, a novel knowledge incorporation strategy was devised to better fit the evolutionary structure of the SaMTDE. Further, the SaMTDE was employed to handle WTA problems for the first time in the field of EMTO. Experiments were conducted on two test suites, including a popular MTO benchmark and a WTA-MTO benchmark. Experimental results show that knowledge transfer in the SaMTDE is effective and efficient, and that EMTO techniques have potential in solving WTA problems.
In the future, more knowledge incorporation strategies may be explored to further improve the performance of the SaMTDE. Further, we may apply the SaMTDE to many other real-world problems, such as large-scale optimization problems [34] and complicated non-convex optimization tasks [12,35,36].
Table A1. Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 1 in Table 2. Each task is a WTA problem.

Figure 1 .
Figure 1. Task T_t's success memory and failure memory for its K knowledge sources during the last LP generations.

Algorithm 1 (excerpt):
12: else
13: Perform roulette wheel selection on {p_{t,k}}_{k=1}^{K} to obtain the index ik of the chosen source task.

4.3. Experimental Results and Discussions
4.3.1. The Effectiveness of Knowledge Transfer in the SaMTDE

Figure 3. Averaged convergence curves of the SaMTDE versus the DE on the 9 test problems from the CEC'2017 Evolutionary Multi-Task Optimization Competition.

4.3.2. Comparison of the SaMTDE with the SaMTPSO, the MFDE, the SREMTO, the MFEA and the DE

Figure 2. The learned probabilities on the K knowledge sources (denoted as KSs) for every task at generation g. To ensure that the probabilities of choosing sources in each task's pool always sum to 1, the SaMTPSO divides SR_{t,k} by Σ_{k=1}^{K} SR_{t,k} to calculate p_{t,k}.
In Equation (4), the genes x_{ik,i1} of individual ind_{ik,i1} from subpop_{ik} are transferred to help generate a better mutant vector for individual ind_{t,i}; ik is the index of the individual's chosen knowledge source task.

Algorithm 1 (excerpt): Evaluate the individuals of subpop_t on task T_t to obtain their fitness {fitness_{t,i}}_{i=1}^{Ns}, the best solution x_{t,best} found so far and the corresponding fitness fitness_{t,best} for T_t.

Algorithm 1 (excerpt): Evaluate the offspring of subpop_t on task T_t, and update T_t's success memory and failure memory (see Algorithm 3). Set isFocus ← 1 if there are no success records in T_t's success memory; otherwise set isFocus ← 0. If g > LP, remove the oldest records ns_{t,k}^{g-LP} and nf_{t,k}^{g-LP}, k ∈ [1, 2, ..., K], from the memories.

Algorithm 3 (excerpt): For each task T_t, t = 1 to K, evaluate subpop_t on task T_t; if fitness_{t,i} < fitness_{t,best}, then set x_{t,best} ← x_{t,i} and fitness_{t,best} ← fitness_{t,i}.

Table 1 .
Details of the 9 test problems in test suite I.

Table 2 .
Details of the 9 MTO problems regarding the WTA in test suite II.

Table 4 .
Achieved FVs (mean, bracketed standard deviation) of the SaMTDE, the SaMTPSO, the MFDE, the SREMTO, the MFEA and the DE. The best results are shown in bold.

Table 5 .
Achieved FVs (mean, bracketed standard deviation) of the SaMTDE, the SaMTPSO, the MFDE, the SREMTO, the MFEA, the DE and the JADE. The best results are shown in bold.

Table A2 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 2 in Table 2. Each task is a WTA problem.

Table A3 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 4 in Table 2. Each task is a WTA problem.

Table A4 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 3 in Table 2. Each task is a WTA problem.

Table A5 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 5 in Table 2. Each task is a WTA problem.

Table A7 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 7 in Table 2. Each task is a WTA problem.

Table A8 .
Details of the destructive value of every target and the probability of kill for every weapon in the two component tasks of problem 8 in Table 2. Each task is a WTA problem.