Article

Adaptive Bi-Operator Evolution for Multitasking Optimization Problems

1 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
2 Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(10), 604; https://doi.org/10.3390/biomimetics9100604
Submission received: 26 July 2024 / Revised: 5 October 2024 / Accepted: 5 October 2024 / Published: 8 October 2024

Abstract

The field of evolutionary multitasking optimization (EMTO) has been a highly anticipated research topic in recent years. EMTO aims to utilize evolutionary algorithms to concurrently solve complex problems involving multiple tasks. Despite considerable advancements in this field, many evolutionary multitasking algorithms still use a single evolutionary search operator (ESO) throughout the evolution process. This strategy struggles to fully adapt to different tasks, consequently hindering the algorithm's performance. To overcome this challenge, this paper proposes a multitasking evolutionary algorithm with an adaptive bi-operator strategy (BOMTEA). BOMTEA adopts a bi-operator strategy and adaptively controls the selection probability of each ESO according to its performance, which allows it to determine the most suitable ESO for each task. In experiments, BOMTEA showed outstanding results on two well-known multitasking benchmarks, CEC17 and CEC22, and significantly outperformed the comparative algorithms.

1. Introduction

Evolutionary computation (EC) algorithms, including the genetic algorithm (GA) [1,2,3], particle swarm optimization (PSO) [4,5,6], and differential evolution (DE) [7,8,9], represent a type of heuristic optimization algorithm based on biological evolution process [10,11] and are suitable for various optimization problems, such as multimodal [12,13,14], large-scale [15,16], and multi-objective problems [17,18]. Because of its simplicity and ease of implementation, EC has found widespread applications across various fields, such as engineering [19,20], economics [21,22], biology [23], and computer science [24,25].
The traditional EC algorithm was originally designed to solve a single optimization problem or independent optimization problems. However, many optimization problems in the real world are interconnected or exhibit similarities to each other [26,27,28]. In this case, evolutionary multitasking optimization (EMTO) has emerged as a new research topic in EC, aiming to solve multiple problems simultaneously [29].
Generally, many multitasking evolutionary algorithms (MTEAs) use only one evolutionary search operator (ESO). For example, the multifactorial evolutionary algorithm (MFEA) [30], the multifactorial evolutionary algorithm with online transfer parameter estimation (MFEA-II) [31], and the multitasking genetic algorithm (MTGA) [32] only use the GA. Multifactorial DE (MFDE) [33], domain adaptation multitask optimization (DAMTO) [34], and block-level knowledge transfer DE (BLKT-DE) [35] only use DE/rand/1 to generate offspring. However, it is well known that no single ESO is suitable for all problems. Taking the widely used CEC17 MTO benchmarks [36] as an example, according to the originally reported results of MFDE [33] and MFEA [30], MFDE performs better than MFEA on the complete-intersection, high-similarity (CIHS) and complete-intersection, medium-similarity (CIMS) problems, which indicates that the DE/rand/1 operator is suitable for the CIHS and CIMS problems. However, MFEA outperforms MFDE on the complete-intersection, low-similarity (CILS) problem, which means that the GA is more appropriate for solving CILS problems.
Therefore, some scholars began to use multiple ESOs to solve multitask problems. Feng et al. [37] introduced an EMT algorithm that features explicit genetic transfer between tasks (EMEA), where two populations are evolved using the GA and DE, respectively. Evolutionary multitasking via reinforcement learning (RLMFEA) [38] randomly selects DE or GA for evolution in each generation. Both EMEA and RLMFEA achieve better results than the earlier algorithms, which indicates that integrating the GA and DE is effective. However, these combinations remain fixed or random, without any adaptive mechanism. If the suitable ESO can be adaptively chosen for each problem, performance can be significantly improved.
In this paper, an innovative MTEA called the MTEA via an adaptive bi-operator strategy (BOMTEA) is proposed, which combines the strengths of the GA and DE. Unlike other algorithms, the selection probability of each ESO is adaptively adjusted according to its performance, which helps the algorithm find the most suitable ESO for various tasks. In addition, BOMTEA also includes a novel knowledge transfer strategy to promote information sharing and communication among different tasks. The experimental results on the multitasking benchmarks CEC17 and CEC22 fully demonstrate the effectiveness of BOMTEA, which significantly outperforms the other comparative algorithms.
The rest of this article is structured as follows. Related knowledge of ESOs and the works related to EMTO are discussed in Section 2. BOMTEA is described in Section 3. Section 4 presents the experimental studies, while Section 5 concludes the article.

2. Preliminary

2.1. DE

The differential evolution (DE) algorithm seeks optimal solutions by simulating the evolutionary process observed in biological populations in nature. In DE, each potential solution is an individual, forming a population. These individuals improve and adapt to the environment through mutation, crossover, and selection operations.
  • Mutation
In DE, common mutation operations are based on differential mutation strategies. Taking the common DE/rand/1 as an example, for each individual (xi), a new mutated individual (vi) is generated according to Equation (1).
vi = xr1 + F × (xr2 − xr3)
Here, F represents the scaling factor, while xr1, xr2, and xr3 are the different individuals randomly chosen from the population.
  • Crossover
Once the mutated individual (vi) is generated, DE executes a crossover operation between vi and xi to produce the trial vector (ui), as shown in Equation (2).
ui,j = vi,j, if rand(0, 1) ≤ Cr or j == jrand; xi,j, otherwise
Here, Cr denotes the crossover rate. jrand is a random integer within [1, D], ensuring that ui differs from xi in at least one dimension.
  • Selection
This operation involves choosing the superior individual between xi and ui for the next generation. When tackling a minimization problem, it can be represented as Equation (3).
xi = ui, if f(ui) ≤ f(xi); xi, otherwise
where f(·) denotes the objective function value of the corresponding individual.
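The three DE operations above can be sketched in NumPy as follows (F = 0.5 and Cr = 0.6 follow the experimental setup in Section 4.1; the rest is an illustrative sketch, not the paper's code):

```python
import numpy as np

def de_rand_1_step(pop, fitness, f, F=0.5, Cr=0.6, rng=None):
    """One generation of DE/rand/1 over a population of shape (NP, D)."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # Mutation (Eq. 1): three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Crossover (Eq. 2): binomial crossover with one forced dimension j_rand
        j_rand = rng.integers(D)
        mask = rng.random(D) <= Cr
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Selection (Eq. 3): keep the better of trial and parent (minimization)
        fu = f(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Because selection only accepts a trial vector that is at least as good as its parent, the fitness of each slot in the population never worsens across generations.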

2.2. Simulated Binary Crossover (SBX)

SBX is a crossover operation commonly used in evolutionary algorithms, especially genetic algorithms. SBX generates the element at position i of each offspring from a probability distribution around the parent values, controlled by the distribution index ηc.
c1i = (1/2) × [(1 − β) p1i + (1 + β) p2i]
c2i = (1/2) × [(1 + β) p1i + (1 − β) p2i]
where p1 and p2 represent two different parent individuals and c1 and c2 represent the two offspring. The distribution of β is specified by
β(u) = (2u)^(1/(ηc+1)), if u ≤ 1/2; [2(1 − u)]^(−1/(ηc+1)), if u > 1/2
where u is a randomly generated number within the range of [0, 1].
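SBX can be sketched in NumPy as follows (ηc = 10 follows the experimental setup in Section 4.1; the negative exponent in the u > 1/2 branch is the reciprocal form of the standard SBX spread-factor distribution):

```python
import numpy as np

def sbx(p1, p2, eta_c=10, rng=None):
    """Simulated binary crossover on two real-coded parent vectors."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(np.shape(p1))
    # Spread factor beta, drawn per dimension from the eta_c-controlled distribution
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (2.0 * (1.0 - u)) ** (-1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    c2 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    return c1, c2
```

A useful property for sanity checks: the β terms cancel when the two offspring are summed, so SBX preserves the parents' mean.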

2.3. Evolutionary Multitasking Optimization

EMTO seeks to address multiple problems concurrently. EMTO can effectively utilize the correlation between tasks, so it can improve the results.
Suppose that EMTO comprises K single-objective tasks, all of which are minimization problems. The ith task, denoted as Ti, encompasses a search space (Xi) and an objective function (Fi: Xi→R) [33,39]. EMTO aims to discover a set of solutions that fulfill Equation (6).
{x1*, x2*, …, xK*} = arg min {F1(x1), F2(x2), …, FK(xK)}

2.4. MFEA

MFEA [30] is inspired by the biocultural models of multifactorial inheritance. Each individual in MFEA optimizes corresponding tasks through the skill factor. The framework of MFEA is shown in Algorithm 1, which includes assortative mating and vertical cultural transmission. When two individuals possessing different skill factors undergo a crossover operation with a random mating probability (rmp), information exchange between tasks occurs.
Algorithm 1: MFEA
Input: pa, pb: two parent candidates randomly selected from pop.
Output: ca, cb: the offspring generated.
Begin
1:  If τa == τb or rand < rmp:
2:    pa and pb crossover and mutate to get ca and cb.
3:       If τa == τb:
4:         ca imitates pa. cb imitates pb.
5:       Else
6:         If rand < 0.5:
7:           ca imitates pa. cb imitates pb.
8:         Else
9:           ca imitates pb. cb imitates pa.
10:         End If
11:       End If
12:  Else
13:    pa undergoes polynomial mutation to produce offspring ca.
14:    pb undergoes polynomial mutation to produce offspring cb.
15:  ca imitates pa. cb imitates pb.
16:  End If
End
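Algorithm 1 can be sketched in Python as follows (a hypothetical sketch: the `crossover` and `mutate` callbacks stand in for SBX and polynomial mutation, and the inherited skill factors are returned explicitly rather than stored on the individuals as in MFEA):

```python
import random

def mfea_assortative_mating(pa, pb, tau_a, tau_b, rmp, crossover, mutate):
    """Assortative mating + vertical cultural transmission (Algorithm 1).

    pa/pb: parent individuals; tau_a/tau_b: their skill factors;
    crossover/mutate: operator callbacks (e.g. SBX and polynomial mutation).
    Returns the two offspring and the skill factors they imitate.
    """
    if tau_a == tau_b or random.random() < rmp:
        # Same task, or inter-task mating allowed with probability rmp
        ca, cb = crossover(pa, pb)
        ca, cb = mutate(ca), mutate(cb)
        if tau_a == tau_b:
            skills = (tau_a, tau_b)          # ca imitates pa, cb imitates pb
        elif random.random() < 0.5:
            skills = (tau_a, tau_b)          # ca imitates pa, cb imitates pb
        else:
            skills = (tau_b, tau_a)          # ca imitates pb, cb imitates pa
    else:
        # No mating: each parent only undergoes polynomial mutation
        ca, cb = mutate(pa), mutate(pb)
        skills = (tau_a, tau_b)
    return ca, cb, skills
```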

2.5. Related Work

In the past, evolutionary algorithms typically focused on solving individual optimization problems, but population-based search has implicit parallelism. Therefore, Gupta et al. [30] proposed a new framework for evolutionary multitasking (EMT) called MFEA. In this algorithm, complex developmental features are transmitted to offspring through the interaction of genetic and cultural factors. Later, many scholars conducted research on the following three aspects: "how to transfer", "when to transfer", and "what to transfer".
For the first aspect, "how to transfer", researchers have focused on methods to transfer knowledge from the source task to the target task. For instance, Feng et al. [37] introduced an EMT algorithm that features explicit genetic transfer between tasks. In this algorithm, the source task is treated as the result of adding noise to the target task, so a denoising autoencoder is used to obtain the mapping from the source task to the target task. Wu and Tan [32] estimated the bias between two tasks and removed it during chromosomal transfer so that solutions from different tasks can be closer to each other. Zhou et al. [40] achieved adaptive selection of crossover operators, allowing the algorithm to select suitable crossover operators for different problems. Wang et al. [34] used transfer component analysis (TCA) to map two populations with different marginal probability distributions into the same space for knowledge transfer.
For the second aspect, "when to transfer", the rmp determines the frequency of knowledge transfer between different tasks. In some works [30,33], rmp is set to a fixed value. Other researchers have attempted to make algorithms adjust rmp adaptively during the run. Liaw and Ting [41] introduced a framework known as the evolution of biocoenosis through symbiosis (EBS). EBS controls the information exchange frequency based on how often the best solution of the current task is improved by other tasks or by the current task itself. In EMEA [37], knowledge transfer between tasks occurs at fixed intervals. Bali et al. [31] proposed a framework that leverages the similarities and discrepancies among different tasks to enhance the optimization process. Li et al. [38] introduced an MFEA based on reinforcement learning, known as RLMFEA, which can adaptively adjust the rmp of different tasks. In DAMTO [34], knowledge transfer occurs when the fitness of the current generation does not improve over the previous generation.
For the third aspect, "what to transfer", a simple method is to directly use the promising solutions of another task as knowledge [30,31,38,41]. In addition, there are some algorithms [32,34,37] that measure the differences between solutions from different tasks and take corresponding compensation measures to generate the transferred knowledge.

3. BOMTEA

In this section, the details of BOMTEA are presented. First, the motivation of BOMTEA is given. Next, the process for adaptively adjusting eop to control the selection of the two ESOs is described. Then, the knowledge transfer method is developed. Finally, the complete BOMTEA algorithm is presented.

3.1. Motivation

In recent years, with the development of MTEAs, researchers have conducted in-depth explorations of these three aspects. Despite significant progress, many existing MTEAs still rely on a single ESO. For instance, algorithms such as MFEA [30], MFEA-II [31] and MTGA [32] exclusively use the GA as their search strategy. Similarly, algorithms like MFDE [33], DAMTO [34] and BLKT-DE [35] rely solely on the DE/rand/1 strategy from DE to generate offspring. Each ESO has its own characteristics and advantages. For example, DE is renowned for its rapid convergence speed, allowing it to quickly identify potentially excellent solutions. However, this fast convergence also makes DE more susceptible to getting trapped in local optima, which limits its performance on complex optimization problems. In contrast, the strength of the GA lies in its ability to effectively maintain population diversity, providing strong exploratory capability in global search. Therefore, a single ESO is not suitable for all types of problems, as the nature and structure of different problems vary, leading to differing demands on the search strategy.
Therefore, some scholars have begun to explore the use of multiple ESOs to solve multi-task optimization problems, aiming to improve the overall performance and adaptability of the algorithms. Feng et al. [37] proposed EMEA, which achieves intra-population evolution by assigning GA and DE to two different populations when handling multi-task problems. Another approach is RLMFEA [38], which randomly selects either DE or GA for each iteration. However, while this integration based on a random mechanism is effective, it still has certain limitations, as it cannot adaptively choose the most suitable ESO based on the specific characteristics of the problem. If the algorithm can adaptively select the appropriate ESO according to the specific problem, its performance will be significantly enhanced.
Therefore, this paper proposes an innovative MTEA called BOMTEA, which combines the advantages of GA and DE. The innovation of BOMTEA lies in its adaptive mechanism, which monitors the performance of each ESO and dynamically adjusts their selection probabilities. This approach allows for the effective utilization of the characteristics of GA and DE in different optimization environments, thereby achieving improved multi-task processing capabilities.

3.2. The Adaptive Bi-Operator Strategy

BOMTEA maintains, for each task, a probability eop of selecting the DE operator. Specifically, if more offspring generated using DE are retained to the next generation, the next eop should be larger. Conversely, if more offspring generated using GA are retained to the next generation, the next eop should be smaller. The update formula for eop is shown in Equation (7).
eopit+1 = nti,DE / (nti,DE + nti,GA)
where eopit+1 is the eop of task i in generation t + 1, nti,DE is the number of offspring of task i generated in generation t using the DE operator and retained for the next generation, and nti,GA is the corresponding number for the GA operator.
A high eop (eop > 0.9) would cause the algorithm to rely almost entirely on DE, which could lead to premature convergence to a local optimum, a loss of population diversity, and degraded performance. On the other hand, a low eop (eop < 0.1) means that the algorithm would hardly use DE, losing the advantages DE provides and significantly reducing search efficiency. Additionally, to exploit the faster convergence speed of DE compared to the GA, BOMTEA is inclined to use DE. Therefore, in this article, the range of eop is bounded to [0.3, 0.9].
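Equation (7) together with the [0.3, 0.9] bound can be sketched as follows (the fallback value of 0.5 when no offspring survive is an assumption for the sketch, not specified in the paper):

```python
def update_eop(n_de, n_ga, low=0.3, high=0.9):
    """Eq. (7) with the clamping described in Section 3.2.

    n_de / n_ga: counts of surviving offspring produced by the DE / GA
    operator in the current generation for one task.
    """
    if n_de + n_ga == 0:
        eop = 0.5  # assumption: neutral value when no offspring survived
    else:
        eop = n_de / (n_de + n_ga)
    # Bound eop to [0.3, 0.9] so neither operator is abandoned entirely
    return min(max(eop, low), high)
```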
The ESO used in the population evolution process is randomly assigned. As shown in Algorithm 2, the individual generates offspring via DE/rand/1 with the probability of eop. Otherwise, SBX and polynomial mutation are employed to generate offspring.
Algorithm 2: Adaptive Bi-operator Strategy
Input: p: a parent from target task.
   eopi: Random selection probability of ESOs.
Output: c: the offspring generated.
Begin
1:  If rand < eopi:
2:    Generate offspring c using DE/rand/1.
3:  Else
4:    Generate offspring c using GA.
5:  End If
End

3.3. Knowledge Transfer

In EMTO, knowledge transfer plays a crucial role. It enables algorithms to share information across different tasks, thereby enhancing overall optimization efficiency. When knowledge gained from one task can be effectively applied to another task, the algorithm can accelerate the convergence process.
To perform knowledge transfer when the ESO is GA, for each individual in the target task, an individual in the source task is randomly selected for crossover and mutation, which are SBX and polynomial mutation, respectively. Note that crossover and mutation produce two individuals, but only one participates in the evaluation. As shown in Algorithm 3, this selection is random.
To perform knowledge transfer when the ESO is DE, for each individual in the target task, one individual from the target task and two individuals from the source task are randomly selected for DE. The specific method is shown in Algorithm 4.
Algorithm 3: Knowledge Transfer of GA
Input: pa: a parent from target task.
   pb: a parent randomly selected from source task.
Output: c: the offspring generated.
Begin
1:  pa and pb crossover and mutate to give offspring ca and cb.
2:  If rand < 0.5:
3:    c = ca
4:  Else
5:    c = cb
6:  End If
End
Algorithm 4: Knowledge Transfer of DE
Input: pt: a parent from target task.
   popt: the population of target task.
   pops: the population of source task.
Output: c: the offspring generated.
Begin
1:  Select one individual xr1 from popt randomly, with xr1 ≠ pt.
2:  Select two individuals xr2 and xr3 from pops randomly, with xr2 ≠ xr3.
3:  Generate the mutated individual vi according to Equation (1).
4:  Generate offspring c according to Equation (2).
End
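Algorithm 4 can be sketched as follows (a NumPy sketch with illustrative parameter values; individuals are plain arrays, and the base vector comes from the target population while the difference vector comes from the source population):

```python
import numpy as np

def kt_de(pt, pop_t, pop_s, f=0.5, cr=0.6, rng=None):
    """DE-based knowledge transfer (Algorithm 4).

    pt: parent from the target task; pop_t / pop_s: target / source
    populations, arrays of shape (NP, D).
    """
    rng = np.random.default_rng() if rng is None else rng
    # x_r1 from the target population, different from pt
    # (assumes pop_t contains at least one individual distinct from pt)
    while True:
        xr1 = pop_t[rng.integers(len(pop_t))]
        if not np.array_equal(xr1, pt):
            break
    # x_r2 != x_r3 from the source population
    i2, i3 = rng.choice(len(pop_s), size=2, replace=False)
    v = xr1 + f * (pop_s[i2] - pop_s[i3])   # mutation, Eq. (1)
    D = len(pt)
    j_rand = rng.integers(D)
    mask = rng.random(D) <= cr
    mask[j_rand] = True
    return np.where(mask, v, pt)            # crossover, Eq. (2)
```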

3.4. Framework

The structure of BOMTEA for addressing two-task optimization problems is outlined in Algorithm 5. According to this algorithm, the primary steps of BOMTEA can be elucidated as follows:
Step 1: In a unified search space, the populations of the two tasks are randomly initialized. Each individual's dimension is set to the maximum dimension among all tasks. All dimensions of an individual are normalized to [0, 1] according to the lower bound L and upper bound U of the solution space of task Tj. This encoding standardizes the handling of the features of different tasks. When an individual needs to be evaluated, it is decoded into the original solution by Equation (8) (line 1).
xi = Li + (Ui − Li) × yi
where Li is the minimum value of the i-th dimension of task Tj, Ui is the maximum value of the i-th dimension of task Tj, yi is the normalized value of the i-th dimension of the individual and xi is the decoded value of the i-th dimension of the individual.
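Equation (8), together with truncation to a task's own dimensionality in the unified search space, can be sketched as follows (the `dim` argument is an assumption for illustration, not notation from the paper):

```python
import numpy as np

def decode(y, L, U, dim=None):
    """Eq. (8): map a normalized individual y in [0, 1]^Dmax to task bounds.

    L/U: per-dimension (or scalar) lower/upper bounds of the task.
    dim, if given, keeps only the first `dim` dimensions of the
    unified-space individual before decoding.
    """
    y = np.asarray(y, dtype=float)
    if dim is not None:
        y = y[:dim]
    return L + (U - L) * y
```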
Step 2: Compute the fitness value of each individual of pop1 and pop2 (line 2).
Step 3: Each individual can be assigned an ESO via Algorithm 2 (line 5).
Step 4: Each individual from taski can perform knowledge transfer with a probability of rmpi. The specific knowledge transfer methods are shown in Algorithms 3 and 4: if the ESO assigned to the individual is GA, knowledge transfer is performed through Algorithm 3; otherwise, it is carried out through Algorithm 4 (lines 6–7).
Step 5: Individuals that do not undergo knowledge transfer generate offspring via their assigned ESO (line 9).
Step 6: Based on the fitness value, the elite selection strategy is employed to choose the most suitable individuals for constructing the subsequent generation population (line 12).
Step 7: Finally, in order to achieve adaptive adjustment of eop during the iteration, Equation (7) is used to update the value of eop for each generation (line 13).
Algorithm 5: BOMTEA
Begin
1:  Randomly initialize pop1 and pop2 for two tasks respectively.
2:  Evaluate each individual on each optimization task.
3:  While FEs < maxFEs:
4:    For each individual from pop1 or pop2:
5:      Perform Algorithm 2 to allocate ESO.
6:      If rand < rmpi:
7:        Perform knowledge transfer via Algorithm 3 or 4.
8:      Else
9:        Generate offspring via ESO.
10:      End If
11:    End For
12:    Select the fittest individuals to form the next pop1 or pop2.
13:    Get new eop1 and eop2 via the Formula (7).
14:  End While
End

4. Experimental Studies

In this study, two widely used MTO benchmarks, CEC17 [36] and CEC22 [42], are chosen to evaluate the performance of BOMTEA. Meanwhile, five MTEAs are used for comparison: MFEA (2016) [30], EMEA (2019) [37], MFEA-AKT (2021) [40], MTGA (2020) [32] and RLMFEA (2023) [38]. These algorithms cover both single-population and multi-population approaches, spanning the period from 2016 to 2023, so the experimental comparison is credible.

4.1. Experimental Setup

The relevant parameter settings for the algorithms involved are as follows.
  • SBX and polynomial mutation in MFEA, EMEA, MFEA-AKT, MTGA, RLMFEA and BOMTEA: ηc = 10, ηm = 5.
  • DE in EMEA, RLMFEA, BOMTEA: F = 0.5, Cr = 0.6.
  • The random mating probability: rmp = 0.3.
  • The initial random selection probability of ESO: eop = 0.5.
  • Population size: NP = 100 for MFEA, EMEA, MFEA-AKT, MTGA, RLMFEA and BOMTEA.
  • Maximum number of fitness evaluations: MaxFEs = 100,000.
  • The parameters for which values are not provided are set to the optimal settings specified in the respective papers.
Each algorithm undergoes 30 independent runs to acquire the experimental outcomes. To assess the experimental results statistically, Wilcoxon’s rank sum test [43] with a significance level of α = 0.05 is employed.
Origin 2022 (version 9.9) is used to conduct Wilcoxon's rank sum test. In Origin, when performing Wilcoxon's rank sum test, three pairs of null hypothesis H0 and alternative hypothesis H1 can be chosen:
(1) H0: Median1 = Median2 (If this null hypothesis is accepted, it indicates that our proposed algorithm and the comparison algorithm do not have a significant difference on this task).
H1: Median1 ≠ Median2 (If this alternative hypothesis is accepted, it indicates that our proposed algorithm and the comparison algorithm have a significant difference on this task).
(2) H0: Median1 ≥ Median2.
H1: Median1 < Median2 (For minimization problems, if this alternative hypothesis is accepted, it indicates that our proposed algorithm performs significantly better than the comparison algorithm on this task).
(3) H0: Median1 ≤ Median2.
H1: Median1 > Median2 (For minimization problems, if this alternative hypothesis is accepted, it indicates that our proposed algorithm performs significantly worse than the comparison algorithm on this task).
By independently running each algorithm 30 times and testing with the different alternative hypotheses, we can ultimately conclude that the results of our algorithm are "better/worse/similar to" the results of the comparison algorithm. Accordingly, the symbols "+/−/≈" denote that the results achieved by BOMTEA are "better/worse/similar to" those of the compared algorithm.
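The three-hypothesis decision procedure can be sketched as follows (a simplified normal-approximation sketch on hypothetical data; the paper itself uses Origin's implementation, and ties are not averaged here):

```python
import math
import numpy as np

def ranksum_decision(ours, other, alpha=0.05):
    """Return '+', '-', or '≈' per the three hypotheses in the text.

    Minimization is assumed, so 'ours significantly better' means
    Median1 < Median2. Uses the Wilcoxon rank-sum normal approximation
    with ordinal ranks (no tie correction) for illustration.
    """
    n1, n2 = len(ours), len(other)
    combined = np.concatenate([ours, other])
    ranks = np.empty(n1 + n2)
    ranks[combined.argsort()] = np.arange(1, n1 + n2 + 1)
    W = ranks[:n1].sum()                                  # rank sum of our results
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    p_less = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # H1: Median1 < Median2
    p_greater = 1.0 - p_less                              # H1: Median1 > Median2
    if p_less < alpha:
        return '+'                                        # significantly better
    if p_greater < alpha:
        return '-'                                        # significantly worse
    return '≈'                                            # no significant difference
```

In practice a library routine (e.g. SciPy's `ranksums` with its `alternative` parameter) would be used instead of this hand-rolled approximation.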

4.2. Experimental Results Comparisons on MTO Benchmarks

The mean fitness experimental results attained by BOMTEA, MFEA, EMEA, MFEA-AKT, MTGA and RLMFEA on CEC17 and CEC22 benchmarks are presented in Table 1. The best experimental result for each task is highlighted in boldface.

4.2.1. CEC17 MTO Benchmarks

The CEC17 MTO benchmarks include nine benchmark problems, each containing two tasks. The problems can be divided into complete intersections (CI), partial intersections (PI), and no intersections (NI) based on their degree of intersection. They can also be divided into high similarity (HS), medium similarity (MS), and low similarity (LS) based on their similarity. Each task corresponds to an optimization function that needs to be optimized. For instance, in the CIHS problem, there are actually two tasks involved, making CIHS a two-task optimization problem. The functions that require optimization in this case are the Griewank function and the Rastrigin function. For more details, please refer to [36].
From Table 1, it can be seen that on the CEC17 benchmarks, BOMTEA consistently outperforms the compared algorithms. Across the 9 total problems, for task 1, BOMTEA outperforms MFEA, EMEA, MFEA-AKT, MTGA, and RLMFEA on 9, 9, 9, 7 and 8 problems, respectively, and is only worse than MTGA on 1 problem. For task 2, BOMTEA outperforms them on 7, 7, 7, 6 and 5 problems, respectively. This suggests that when addressing MTO problems, the adaptive bi-operator strategy proves superior to the single-operator strategy.
Furthermore, the convergence curves of BOMTEA and the other comparative algorithms on the CEC17 problems are presented to study their convergence behavior, as shown in Figure 1. Firstly, as shown in Figure 1a–c,e, BOMTEA has a fast convergence rate on these problems. Secondly, as shown in Figure 1g,h, although the convergence speed of BOMTEA is almost the same as that of MTGA and RLMFEA in the early stage, the final results obtained by BOMTEA are superior or similar to those of these two algorithms, respectively. Finally, as shown in Figure 1d, BOMTEA can easily get trapped in local optima when solving the CILS problem; in practice, the comparative algorithms, including MFEA, EMEA, MFEA-AKT, and MTGA, also fall into local optima. However, BOMTEA achieves significantly better results on the other problems, further demonstrating the superiority of the proposed algorithm. Similarly, as shown in Figure 1f, the convergence speed of BOMTEA is slightly lower than that of EMEA but clearly faster than that of the other comparative algorithms. This indicates that, despite some discrepancies in specific scenarios, BOMTEA exhibits strong overall performance and adaptability in solving various optimization problems. Note that the GA and DE ESOs are also used in RLMFEA, but they are selected randomly, without controlling their frequency. Therefore, although RLMFEA obtains results similar to BOMTEA on some problems, such as CILS, NIHS, and NIMS, its overall results are not as good as BOMTEA's.

4.2.2. CEC22 MTO Benchmarks

Similarly, in the Wilcoxon rank-sum test results on the CEC22 benchmarks, BOMTEA is generally superior to the compared algorithms. Across the 10 total problems, for task 1, BOMTEA is worse than MFEA, EMEA, MFEA-AKT, MTGA, and RLMFEA on only 0, 1, 0, 3 and 1 problems, respectively, and outperforms them on 9, 8, 5, 5 and 2 problems. Compared to RLMFEA, BOMTEA dominates on fewer problems, and the two are similar on most of them; this may be because both use a bi-operator strategy. For task 2, BOMTEA outperforms MFEA, EMEA, MFEA-AKT, MTGA, and RLMFEA on 9, 9, 7, 5 and 3 problems, respectively. This suggests that when tackling MTO problems, the adaptive bi-operator strategy proves superior to the single-operator strategy.
This article also presents the convergence curves of these algorithms on the CEC22 test set, as shown in Figure 2. Firstly, it can be seen from Figure 2 that RLMFEA and BOMTEA have similar convergence rates. Secondly, as shown in Figure 2d,e, although the convergence speed of BOMTEA is almost the same as that of RLMFEA in the early stage, the final results obtained by BOMTEA are superior. Finally, as shown in Figure 2a,b, although BOMTEA initially converges at a rate similar to the other algorithms, it easily falls into local optima on these problems.

5. Conclusions

In this study, an innovative MTEA with an adaptive bi-operator strategy (BOMTEA) is proposed. Compared to single-operator algorithms, BOMTEA combines the strengths of the GA and DE. Compared to other bi-operator algorithms, the selection probability of each ESO is adaptively adjusted according to its performance, which helps the algorithm find the suitable ESO for various tasks. Additionally, BOMTEA introduces a novel knowledge transfer strategy to enhance information sharing and communication across different tasks. The experimental results on the multitasking benchmarks CEC17 and CEC22 fully demonstrate the effectiveness of BOMTEA, which shows a significant performance advantage over the comparative algorithms.
In the future, we wish to further explore the adaptive adjustment of rmp under the BOMTEA framework by introducing techniques such as fuzzy systems or reinforcement learning. This will enable us to enhance the performance of BOMTEA even further. In addition, we also wish to apply the adaptive bi-operator strategy to multimodal [44,45,46] or large-scale [47,48,49] problems.

Author Contributions

Conceptualization, C.W., Z.W. and Z.K.; methodology, C.W., Z.W. and Z.K.; software, C.W.; validation, C.W., Z.W. and Z.K.; formal analysis, C.W., Z.W. and Z.K.; investigation, C.W., Z.W. and Z.K.; resources, C.W., Z.W. and Z.K.; data curation, C.W., Z.W. and Z.K.; writing—original draft preparation, C.W.; writing—review and editing, Z.W. and Z.K.; visualization, C.W.; supervision, Z.W. and Z.K.; project administration, Z.W. and Z.K.; funding acquisition, Z.W. and Z.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundations of China (NSFC) under Grants 62106055 and 62176094, in part by the Guangdong Natural Science Foundation under Grants 2022A1515011825 and 2021B1515120078, and in part by the Guangzhou Science and Technology Planning Project under Grants 2023A04J0388, 2023A03J0662, and 2023A03J0113.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Convergence curves of the average fitness on (a) T1 of CEC17-CIHS; (b) T2 of CEC17-CIHS; (c) T1 of CEC17-CILS; (d) T2 of CEC17-CILS; (e) T1 of CEC17-PILS; (f) T2 of CEC17-PILS; (g) T1 of CEC17-NIHS; (h) T2 of CEC17-NIHS.
Figure 2. Convergence curves of the average fitness on (a) T1 of CEC22-P1; (b) T2 of CEC22-P1; (c) T1 of CEC22-P6; (d) T2 of CEC22-P6; (e) T1 of CEC22-P9; (f) T2 of CEC22-P9; (g) T1 of CEC22-P10; (h) T2 of CEC22-P10.
Table 1. The CEC17 and CEC22 experimental results of BOMTEA and other EMTO algorithms.
| Problem | BOMTEA T1 | BOMTEA T2 | MFEA T1 | MFEA T2 | EMEA T1 | EMEA T2 | MFEA-AKT T1 | MFEA-AKT T2 | MTGA T1 | MTGA T2 | RLMFEA T1 | RLMFEA T2 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CIHS | 4.97e−04 | 4.78e+00 | 3.80e−01 (+) | 2.04e+02 (+) | 5.66e−01 (+) | 4.13e+02 (+) | 3.42e−01 (+) | 1.86e+02 (+) | 2.72e−01 (+) | 2.05e+02 (+) | 1.89e−02 (+) | 5.42e+01 (+) |
| CIMS | 3.69e−01 | 1.72e+01 | 5.67e+00 (+) | 2.71e+02 (+) | 3.70e+00 (+) | 4.14e+02 (+) | 5.57e+00 (+) | 2.54e+02 (+) | 3.21e+00 (+) | 2.36e+02 (+) | 2.32e+00 (+) | 8.61e+01 (+) |
| CILS | 2.01e+01 | 4.37e+03 | 2.02e+01 (+) | 4.04e+03 (−) | 2.05e+01 (+) | 1.21e+04 (+) | 2.02e+01 (+) | 3.85e+03 (−) | 2.00e+01 (−) | 4.11e+03 (≈) | 2.01e+01 (≈) | 3.24e+03 (−) |
| PIHS | 2.01e+02 | 1.37e−03 | 6.50e+02 (+) | 1.18e+01 (+) | 9.92e+02 (+) | 3.43e−01 (+) | 5.17e+02 (+) | 9.07e+00 (+) | 2.24e+02 (≈) | 3.09e+00 (+) | 2.37e+02 (+) | 2.96e−02 (+) |
| PIMS | 3.48e−01 | 9.15e+01 | 3.85e+00 (+) | 8.16e+02 (+) | 3.63e+00 (+) | 3.19e+02 (+) | 3.02e+00 (+) | 3.74e+02 (+) | 3.28e+00 (+) | 5.14e+02 (+) | 1.54e+00 (+) | 1.35e+02 (+) |
| PILS | 1.42e+00 | 2.13e+00 | 2.00e+01 (+) | 2.16e+01 (+) | 1.78e+01 (+) | 1.71e−01 (−) | 5.41e+00 (+) | 5.81e+00 (+) | 3.06e+00 (+) | 5.72e+00 (+) | 2.68e+00 (+) | 3.21e+00 (+) |
| NIHS | 1.50e+02 | 1.21e+02 | 7.68e+02 (+) | 2.71e+02 (+) | 6.37e+02 (+) | 4.16e+02 (+) | 5.18e+02 (+) | 2.20e+02 (+) | 5.68e+02 (+) | 1.98e+02 (+) | 2.12e+02 (+) | 1.16e+02 (≈) |
| NIMS | 2.80e−03 | 1.61e+01 | 4.17e−01 (+) | 2.73e+01 (+) | 7.31e−01 (+) | 1.20e+01 (−) | 4.13e−01 (+) | 2.42e+01 (+) | 3.79e−01 (+) | 1.54e+01 (≈) | 4.27e−02 (+) | 1.99e+01 (+) |
| NILS | 2.04e+02 | 4.33e+03 | 6.27e+02 (+) | 3.77e+03 (−) | 1.32e+03 (+) | 1.21e+04 (+) | 7.03e+02 (+) | 3.90e+03 (−) | 3.44e+02 (+) | 4.36e+03 (≈) | 2.89e+02 (+) | 3.23e+03 (−) |
| P1 | 6.34e+02 | 6.34e+02 | 6.51e+02 (+) | 6.53e+02 (+) | 6.29e+02 (−) | 6.18e+02 (−) | 6.32e+02 (≈) | 6.32e+02 (−) | 6.18e+02 (−) | 6.19e+02 (−) | 6.30e+02 (−) | 6.32e+02 (≈) |
| P2 | 7.00e+02 | 7.00e+02 | 7.01e+02 (+) | 7.01e+02 (+) | 7.05e+02 (+) | 7.01e+02 (+) | 7.01e+02 (+) | 7.01e+02 (+) | 7.01e+02 (+) | 7.00e+02 (+) | 7.00e+02 (+) | 7.00e+02 (+) |
| P3 | 1.43e+06 | 1.59e+06 | 4.18e+06 (+) | 3.63e+06 (+) | 3.23e+06 (+) | 6.33e+07 (+) | 1.22e+06 (≈) | 1.25e+06 (≈) | 3.06e+06 (+) | 2.84e+06 (+) | 1.60e+06 (≈) | 1.54e+06 (≈) |
| P4 | 1.30e+03 | 1.30e+03 | 1.30e+03 (+) | 1.30e+03 (+) | 1.30e+03 (+) | 1.30e+03 (+) | 1.30e+03 (+) | 1.30e+03 (≈) | 1.30e+03 (−) | 1.30e+03 (−) | 1.30e+03 (≈) | 1.30e+03 (≈) |
| P5 | 1.52e+03 | 1.52e+03 | 1.56e+03 (+) | 1.55e+03 (+) | 1.79e+03 (+) | 1.54e+03 (+) | 1.56e+03 (+) | 1.55e+03 (+) | 1.53e+03 (+) | 1.53e+03 (+) | 1.53e+03 (+) | 1.53e+03 (+) |
| P6 | 1.12e+06 | 7.34e+05 | 1.90e+06 (+) | 1.60e+06 (+) | 1.82e+06 (+) | 2.66e+07 (+) | 1.76e+06 (+) | 1.57e+06 (+) | 1.23e+06 (≈) | 1.20e+06 (+) | 9.38e+05 (≈) | 9.66e+05 (≈) |
| P7 | 3.18e+03 | 3.19e+03 | 3.52e+03 (+) | 3.52e+03 (+) | 3.41e+03 (+) | 4.64e+03 (+) | 3.23e+03 (≈) | 3.41e+03 (+) | 3.08e+03 (≈) | 3.11e+03 (≈) | 3.19e+03 (≈) | 3.20e+03 (≈) |
| P8 | 5.20e+02 | 5.20e+02 | 5.20e+02 (+) | 5.20e+02 (+) | 5.21e+02 (+) | 5.21e+02 (+) | 5.20e+02 (+) | 5.20e+02 (+) | 5.21e+02 (+) | 5.21e+02 (+) | 5.20e+02 (≈) | 5.20e+02 (+) |
| P9 | 7.56e+03 | 1.62e+03 | 8.10e+03 (+) | 1.62e+03 (+) | 8.62e+03 (+) | 1.62e+03 (+) | 7.82e+03 (≈) | 1.62e+03 (+) | 7.96e+03 (+) | 1.62e+03 (−) | 7.87e+03 (≈) | 1.62e+03 (≈) |
| P10 | 3.26e+04 | 2.14e+06 | 2.95e+04 (≈) | 2.63e+06 (≈) | 3.65e+04 (≈) | 2.57e+07 (+) | 2.72e+04 (≈) | 3.11e+06 (+) | 2.05e+04 (−) | 2.14e+06 (≈) | 3.12e+04 (≈) | 2.06e+06 (≈) |
| Number of +/≈/− | — | — | 18/1/0 | 16/1/2 | 17/1/1 | 16/0/3 | 14/5/0 | 14/2/3 | 12/3/4 | 11/5/3 | 10/8/1 | 9/8/2 |

The marks (+), (−), and (≈) indicate that BOMTEA performs significantly better than, significantly worse than, or comparably to the compared algorithm on that task, respectively; the final row counts these outcomes per algorithm.
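Per-task marks like those tallied in the row above are conventionally produced by a two-sided Wilcoxon rank-sum (Mann–Whitney U) test at a 0.05 significance level over the final fitness values of independent runs. The sketch below is an illustrative pure-Python implementation under that assumption (normal approximation with average ranks for ties); the function name and sample data are hypothetical, not the paper's actual code, and the returned ASCII strings "+", "-", "~" stand in for the table's (+), (−), (≈).

```python
import math
from statistics import NormalDist  # Python >= 3.8

def rank_sum_mark(bomtea_runs, other_runs, alpha=0.05):
    """Return "+" if BOMTEA's final fitness is significantly lower (better),
    "-" if significantly higher (worse), "~" if no significant difference,
    using a two-sided rank-sum test with a normal approximation."""
    n1, n2 = len(bomtea_runs), len(other_runs)
    # Sort the pooled samples, remembering which group each value came from.
    pooled = sorted((v, i) for i, v in enumerate(list(bomtea_runs) + list(other_runs)))
    # Assign 1-based ranks, giving tied values their average rank.
    ranks = [0.0] * (n1 + n2)
    k = 0
    while k < len(pooled):
        j = k
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[k][0]:
            j += 1
        avg_rank = (k + j) / 2 + 1
        for m in range(k, j + 1):
            ranks[pooled[m][1]] = avg_rank
        k = j + 1
    r1 = sum(ranks[:n1])                    # rank sum of the BOMTEA runs
    mu = n1 * (n1 + n2 + 1) / 2             # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    if p >= alpha:
        return "~"
    return "+" if r1 < mu else "-"          # lower fitness ranks are better
```

For example, ten runs of one algorithm that all finish with lower fitness than ten runs of another yield "+", while interleaved, statistically indistinguishable samples yield "~".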
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, C.; Wang, Z.; Kou, Z. Adaptive Bi-Operator Evolution for Multitasking Optimization Problems. Biomimetics 2024, 9, 604. https://doi.org/10.3390/biomimetics9100604

