Article

An Adaptive Memetic Differential Evolution with Virtual Population and Multi-Mutation Strategies for Multimodal Optimization Problems

1 School of Artificial Intelligence, Chongqing Vocational Institute of Safety Technology, Chongqing 404020, China
2 College of Electrical Engineering and Automation, Hubei Normal University, Huangshi 435002, China
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(12), 784; https://doi.org/10.3390/a18120784
Submission received: 13 November 2025 / Revised: 3 December 2025 / Accepted: 6 December 2025 / Published: 11 December 2025

Abstract

Multimodal optimization imposes a dual imperative: maintaining global peak diversity while refining the precision of discovered solutions. To tackle these problems, this work proposes an adaptive memetic differential evolution based on a virtual population mechanism and a multi-mutation strategy. First, the virtual population mechanism (VPM) is designed to support the maintenance of population diversity, exploiting the distribution of the current population to derive a virtual population. The virtual population supplies the individuals required for population evolution but does not itself participate in the evolutionary operations. The multi-mutation strategy (MMS) is then executed on the joint virtual and current populations, with the explicit aim of assigning promising candidates to exploitation tasks and less promising ones to exploration tasks during offspring creation. Additionally, a probabilistic local search (PLS) scheme is introduced to enhance the precision of elite solutions. This scheme specifically targets the fittest-and-farthest individuals, effectively addressing solution inaccuracies on the identified peaks. Through comprehensive benchmarking on standard test problems, the proposed algorithm demonstrates performance that is superior or on par with existing methods, confirming its overall competitiveness.

1. Introduction

Evolutionary algorithms (EAs), representing a class of population-based metaheuristics, are commonly employed to address optimization problems characterized by a single global optimum [1,2,3]. However, many practical optimization scenarios feature multiple optima that must be identified concurrently. Such problems fall under the purview of multimodal optimization problems (MMOPs), a prevalent class in real-world applications [4,5].
The effective adaptation of evolutionary algorithms (EAs) for multimodal optimization problems (MMOPs) necessitates a core mechanism for preserving population diversity, which is paramount for successfully converging upon multiple peaks. Consequently, numerous methodologies, including niching techniques [6,7], multi-objective optimization-based EAs [8,9], and hybrid evolutionary algorithms [10,11], have emerged for diversity maintenance. The predominant approach for diversity preservation is niching [12,13,14], which maintains population diversity by dividing a population into multiple subpopulations. The population is partitioned into niches, with each niche dedicated to a single peak. Through independent evolution, these niches converge to their designated optima, thus collectively solving the multimodal problem. Niche-based EAs such as CDE [15], NCDE [16], LoICDE [17], and self-CCDE [6] belong to this family of techniques. While substantial progress has been made in addressing MMOPs through niching techniques, this approach is not without its limitations, which are discussed in [18]. Several persistent challenges remain, including the effective partitioning of populations, the non-trivial configuration of niching parameters for exploration–exploitation balance, and the retention of identified optima throughout the evolutionary process. This work presents a novel virtual population mechanism designed to generate an auxiliary set of solutions that work in concert with the main population. This synergy is engineered to support population diversity and thereby locate multiple peaks effectively.
A critical challenge in multimodal optimization extends beyond locating optimal regions to the precise refinement of these solutions. While virtual populations effectively facilitate broad exploration of the solution space, they often lack the requisite intensification capability for high-precision exploitation. This inherent limitation impedes final solution accuracy. To address this, researchers have proposed hybridizing EAs with local search operators, leading to the development of memetic algorithms [19,20,21] that synergistically combine global and local search. A newly introduced approach [22] in this domain leverages Gaussian sampling to augment the quality of seed solutions across the formed niches. While this method excels at locally refining solutions near optima, its efficacy diminishes markedly when applied to solutions distant from promising regions. Such an approach might impair its capability to precisely and efficiently identify the optima. In the present study, we optimize both individuals in the vicinity of the optima and those distant from them, thereby contributing to a significant enhancement of the precision of optima identification.
This paper presents an adaptive memetic DE variant enhanced with virtual population and multi-mutation strategies, designed to address these challenges in MMOPs. Within the proposed algorithm, a virtual population mechanism is developed to appropriately inject the required solutions into the current population to search the solution space competitively. Further, a fitness-based multi-mutation strategy is implemented to generate offspring using a combination of the current population and its virtual population. Such a strategy, integrated with the virtual population mechanism, is conducive to identifying multiple optima by directing high-capacity solutions toward intensive exploitation while steering low-capacity solutions toward broader exploration. Additionally, an adaptive Gaussian-based local operator is devised to effectively fine-tune the accuracy of the solutions (i.e., fittest-and-farthest solution) during evolution. A stringent assessment was performed for the presented algorithm based on the standard CEC’2013 multimodal benchmark suite. The results validate its capability to precisely identify a multitude of optima while securing superior overall performance against leading competitors.
Following this introduction, Section 2 delineates the problem model and the core principles underlying differential evolution. Section 3 surveys existing methodologies, while Section 4 delineates our proposed algorithm. Experimental analysis follows in Section 5 and culminates in Section 6 with conclusive insights.

2. Prerequisites

2.1. Problem Description

Multimodal optimization problems (MMOPs) refer to a class of problems focused on locating multiple global optimal solutions, which can be mathematically described as follows:
maximize f(x)  (1)

where f(x) is the objective function, x = (x_1, …, x_D) ∈ X is the decision vector, X = ∏_{i=1}^{D} [L_i, U_i] is the decision space, L_i and U_i are the lower and upper bounds of x_i, respectively, and D is the number of variables. Given an MMOP, as shown in (1), there exists a set of global optimal solutions X* that maximize the objective function f(x).

2.2. Differential Evolution

The simplicity and robustness of the differential evolution (DE) algorithm, introduced by Storn and Price [23], have led to its extensive adoption across various scientific fields. These characteristics make it a suitable choice as the base of our algorithm. Differential evolution operates on an evolving population of NP solutions, where mutation, crossover, and selection function as interconnected mechanisms steering the evolutionary progression. The process is schematically illustrated in Figure 1.
The population is randomly initialized, with each individual uniformly distributed across the feasible domain defined by the D-dimensional decision variable bounds. The underlying process is governed by the following equation:
x_i^j = x_min^j + rand(0,1) × (x_max^j − x_min^j),  j = 1, 2, …, D  (2)

where x_min^j and x_max^j specify the bounds of the jth dimension, and rand(0,1) represents a random value sampled from a uniform distribution over the unit interval [0, 1]. After initialization, the algorithm executes its core evolution loop, which sequentially performs mutation, crossover, and selection.
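As a quick illustration, the uniform initialization step can be sketched in a few lines of Python (the helper name `init_population` is ours, not from the paper):

```python
import random

def init_population(np_size, dim, lower, upper):
    """Uniformly sample NP individuals inside [lower, upper]^D (Eq. 2)."""
    return [[lower + random.random() * (upper - lower) for _ in range(dim)]
            for _ in range(np_size)]
```
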
For every target vector, the corresponding mutation vector is constructed by a scaled vector difference between distinct population individuals. The DE/rand/1 strategy utilized in this study is defined by the following equation:
v_i = x_r1 + F × (x_r2 − x_r3)  (3)

where v_i corresponds to the mutation vector for the target vector x_i; r1, r2, and r3 are three mutually distinct integers randomly chosen from the range [1, NP], with the additional constraint that none equals the current index i; and the scale factor F is dynamically assigned to x_i and updated every generation.
Subsequent to the mutation phase, a binomial crossover mechanism constructs the offspring vector u_i = (u_i^1, u_i^2, …, u_i^D) corresponding to a target individual x_i. This process is defined by:

u_i^j = v_i^j, if rand(0,1) ≤ CR_i or j = j_rand; x_i^j, otherwise  (4)

where j_rand is a dimension randomly selected from {1, 2, …, D}, which ensures the trial vector inherits at least one element from the mutant, while the crossover rate CR_i ∈ [0, 1] controls the fraction of components copied from the mutant vector.
A deterministic selection mechanism compares the target vector x_i with its corresponding trial vector u_i, retaining the superior candidate for the next generation. For the maximization objective in (1), the selection criterion is defined as:

x_i = u_i, if f(u_i) ≥ f(x_i); x_i, otherwise  (5)

where f(x) is the fitness evaluation function for a solution x.
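The three DE operators above can be condensed into a minimal, self-contained Python fragment (maximization is assumed, matching Equation (1); the function names are illustrative, not from the paper):

```python
import random

def de_rand_1(pop, i, F):
    """DE/rand/1 mutation (Eq. 3): v_i = x_r1 + F * (x_r2 - x_r3)."""
    idx = [r for r in range(len(pop)) if r != i]
    r1, r2, r3 = random.sample(idx, 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

def binomial_crossover(x, v, cr):
    """Binomial crossover (Eq. 4); j_rand guarantees one mutant component."""
    d = len(x)
    j_rand = random.randrange(d)
    return [v[j] if (random.random() <= cr or j == j_rand) else x[j]
            for j in range(d)]

def select(x, u, f):
    """Greedy selection (Eq. 5) for a maximization objective."""
    return u if f(u) >= f(x) else x
```
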

3. Related Works

Solving multimodal optimization problems (MMOPs) presents a dual imperative: maintaining population diversity to locate numerous peaks and achieving rapid convergence within each peak region under a limited budget of fitness evaluations (FEs). To structure our review of the related work addressing these challenges, we categorize the efforts along three primary dimensions.

3.1. Hybrid Evolutionary Algorithms

Significant efforts have been directed towards hybrid evolutionary algorithms that synergistically combine the strengths of different metaheuristics for MMOPs. These hybrids aim to mitigate the inherent limitations of individual algorithms, such as premature convergence or inadequate exploration–exploitation balance, particularly in complex multimodal optimization problems. Notable recent frameworks include the clustering-based hybrid PSO (CBHPSO), which integrates clustering techniques with particle swarm optimization to enhance population management and diversity preservation [11]. Similarly, hybrids like the real-coded genetic algorithm with PSO (RCGA-PSO) and its multi-objective variant (MORCGA-MOPSO-II) systematically incorporate genetic operators (e.g., crossover and mutation) within a PSO framework to improve solution quality and robustness [10]. These algorithms exemplify a trend where mechanisms from one paradigm (e.g., GA’s crossover) are embedded into another (e.g., PSO’s velocity update) to create more powerful search strategies. While the aforementioned hybrids often leverage GA operators, advanced DE variants like DADE [7] and POGAN-DE [24] have demonstrated remarkable performance by employing diversity enhancement strategies (such as niching, adaptive diversity control, co-evolution, etc.) to prevent premature convergence to a single peak and to promote the exploration of different optimal solutions. A key distinction lies in the fundamental search logic: algorithms like RCGA-PSO primarily build upon swarm or population-based updates infused with crossover, whereas DE and its variants rely on a unique vector differential-based mutation scheme. This scheme inherently promotes exploration and is often cited as a key strength of DE. Our proposed algorithm aligns with the philosophy of enhancing DE’s core competencies rather than grafting external operators.

3.2. Population Diversity

Given the objective of identifying numerous peaks, considerable research efforts are dedicated to maintaining diversity as a fundamental principle. The first category of approaches employs structured or multiple subpopulations to enhance population diversity [25,26]. For instance, Zaharie et al. [27] pioneered a distributed DE architecture where populations are partitioned into randomly connected subgroups, forming a decentralized topology. Within this framework, individual migration across subpopulations is probabilistically initiated after fixed generational intervals, ensuring controlled diversity infusion while mitigating premature homogeneity. In parallel, De Falco et al. [28] devised a bio-inspired EA featuring torus-structured subpopulations, where migration emulates biological invasion dynamics: high-fitness individuals exceeding local averages systematically propagate to adjacent grid units. This spatial strategy not only preserves population diversity but also accelerates convergence in dynamic environments through adaptive response to environmental shifts. Dorronsoro and Bouvry [29] developed a DE algorithm integrating two subpopulations, each adopting a unique mutation strategy. These subpopulations conduct periodic individual migration after a preset number of generations.
Additionally, niching techniques play a fundamental role in MMOPs by promoting and preserving population diversity [30,31,32,33]. For example, Varela [34] innovated crowding-integrated DE, where offspring systematically replace phenotypically proximate parents using Euclidean metrics, effectively counteracting energy landscape deception by sustaining multi-peak occupancy. Li et al. [35] advanced this paradigm by embedding clearing niching within mutation operators, prioritizing dominant niche members as base vectors rather than conventional random/target selections; this design adaptively regulates niche radii to resolve overlapping peak ambiguities. In [33], individuals with index-based neighborhood information are applied to mutation operations to induce niche behavior in evolution. Gong et al. [36] presented a strategy utilizing index-based neighborhood information to split the entire population into self-contained subpopulations of equal size, with evolution proceeding through localized interactions among neighbors. Thomsen [15] proposed a crowding DE that preserves diversity by restricting competition to members nearest in Euclidean distance. Li introduced the species-based DE (SDE) [37] and the species-based PSO (SPSO) [38], both of which form species within distance-based neighborhoods. Qu et al. [16] adopted a strategy that induces niches within the population. This approach generates offspring through neighborhood mutation confined to the same niche, causing individuals to converge towards the optimum of their respective subspaces. These approaches primarily focus on exploiting identified subspaces through structured populations or niching, thereby compromising global exploration and limiting the overall balance of the search. This work proposes an adaptive virtual population that leverages the distributional information of the current population to enhance diversity.
The designed mechanism strengthens the algorithm’s global search capacity, which supports effective multi-modal optimization through simultaneous peak identification.

3.3. Local Searches

Traditional EAs based on niches could fail to accurately identify the optimal solution in the search space, which significantly reduces the precision of the local optimal solution. To enhance the exploitative prowess of EAs, the strategic inclusion of local search operators has been widely adopted. This integration mitigates the innate limitation of EAs in fine-tuning solutions, significantly boosting their local convergence performance. For instance, Chen et al. [39] introduced an elite learning mechanism (ELM), designed to enhance the accuracy of archived elite solutions and thereby mitigate the precision issues of located peaks. In [40], a local search algorithm utilizing a hill-climbing gradient is implemented on every individual within the population to speed up their convergence to their respective optima. In [21], a crossover-guided local search technique refines high-quality solutions identified in niches while the evolutionary algorithm runs. A hybrid PSO framework, proposed in [41], leverages simulated annealing for convergence and chaotic operators for diversity, thereby strengthening its overall search performance. The method improves the elite particles around the promising region with the first operation and the stagnant particles with the second operation. Ono et al. [42] combined quasi-Newtonian local search with a sharing genetic algorithm and artificial immune algorithm, making the algorithm better than the niche method in terms of high-dimensional multimodal functions. In [22], Yang et al. reinforced the search, targeting niche seeds by sampling the regions adjacent to these seeds. Reference [43] presents a contour prediction method (CPA) coupled with a two-level local search (TLLS) strategy. The CPA expedites convergence by estimating the contour landscape and roughly locating peaks through niche-based individual distributions, while the TLLS subsequently refines these initial estimates to improve solution accuracy. 
A hybrid scheme combining niching methods with local refinement operators has been shown to be promising, especially for improving individuals close to the optima. However, such schemes do little to improve individuals far from the optima. To accelerate convergence and enhance solution precision on identified peaks, this paper integrates a probabilistic local search (PLS) scheme. The PLS performs localized refinement on elite solutions, thereby addressing critical accuracy limitations.

4. Proposed Algorithm

This section presents an adaptive virtual population-based memetic differential evolution algorithm (VMPMA) for MMOPs, which incorporates multi-mutation strategies and an adaptive local search operator. The algorithm starts from an initial population and a group of predefined DE parameters. Each generational cycle produces a virtual population whose structure mirrors the distribution of the current population. This virtual population provides the individuals needed for the evolution of the current population; virtual individuals act only as assistants and do not themselves undergo evolution. During this stage, a multi-mutation strategy acts on the joint virtual and current populations to generate offspring, thereby driving high-potential individuals toward exploitation and low-potential ones toward exploration. Subsequently, the tailored adaptive local search mechanism is activated to enhance the fittest-and-farthest solutions identified in the population. Finally, the DE parameters (i.e., crossover rate and scaling factor) are dynamically adjusted based on a feedback-driven adaptive scheme. The full process of the introduced approach is outlined in Algorithm 1.
The subsequent sections detail the implementation of the virtual population, multi-mutation, and adaptive local search strategies.
Algorithm 1 The procedure of VMPMA.
  • Step 1: Initialization. Initialize a population with random positions and all parameters, then evaluate the fitness of all individuals.
  • Step 2: Main Loop. Repeat the following iterative cycle until the termination criteria are met:
    1.
     Get a virtual population (vP) by virtual population mechanism (see Section 4.1).
    2.
     Generate offspring for x i in the population:
    (a)
     Perform the Multi-mutation strategy (see Section 4.2).
    i.
     If the fitness of x_i is not less than the average fitness of the population, then:
    A.
     Compute the Euclidean distance from individual x_i to all other individuals in the combined set P ∪ vP, excluding x_i itself.
    B.
     Get the individual x_n that is closest to x_i.
    C.
     Randomly sample two individuals (x̃_p1, x̃_p2) according to Equation (10).
    D.
     Generate a mutation vector v i using Equation (9).
    ii.
     Otherwise:
    A.
     Randomly select two individuals from P ∪ vP.
    B.
     Construct the mutation vector v i by applying Equation (11).
    (b)
     Generate the trial vector u i through the binomial crossover operation defined in Equation (4).
    (c)
     Apply the selection operation to retain the better of u_i and x_i.
    3.
    Apply the adaptive local search strategy (refer to Section 4.3) to improve solutions meeting the criterion of being far from the global best in parameter space but close in fitness space.
    4.
    Update the DE parameters by employing the adaptation mechanism from [44].
  • Step 3: Output the solutions in terminal population.

4.1. Virtual Population Mechanism

During evolution, the required balance between exploration and exploitation differs across stages, as does the number of individuals that should be devoted to each task. Guided by the exploration-first principle, the algorithm should prioritize broad exploration during early generations, dedicating most individuals to this task. As evolution progresses, the focus should progressively shift towards exploitation, concentrating resources on refining solutions in promising regions [45]. To locate more peaks under limited computational resources, we propose an adaptive virtual population mechanism (VPM). This mechanism leverages the spatial distribution of the current population to generate a complementary virtual ensemble, thereby enabling concurrent multi-peak localization. A general description of the mechanism is shown in Algorithm 2.
Specifically, the proposed virtual population mechanism works as follows. First, we calculate the Euclidean distance between all pairs of individuals to obtain each individual's nearest neighbor. A virtual individual is then generated between any two individuals whose separating Euclidean distance exceeds a threshold. Finally, all the virtual individuals are combined into a virtual population for the current population, providing assistance for its evolution. In this sense, we define the threshold value as:
δ = (U − L) / (2NP + inSize)  (6)
where U and L specify the upper and lower boundaries of the search space, respectively, NP designates the population size, and inSize represents the population expansion magnitude of the prior generation. Equation (6) governs the dynamic adjustment of the threshold in response to the population's spatial distribution. During initial generations, when individuals are typically dispersed uniformly, the introduction of virtual individuals between existing solutions facilitates a more thorough exploration of the intervening regions. Conversely, as the population evolves, individuals begin to cluster at the corresponding peaks, and the insertion of virtual individuals among them strengthens the exploitation of each peak. The insertion mechanism is described in detail as:
x_vir^d = (x_i^d + x_j^d) / 2  (7)

where x_vir represents an individual interpolated midway between x_i and x_j; d is a dimension index in {1, 2, …, D}, and D specifies the total number of dimensions of the optimization problem. As a result, the new individuals help maintain the diversity of the population so that the solution space can be searched more comprehensively. Whether the insertion mechanism is executed is determined automatically from the population's diversity. Here, we use a probability in the range (0, 1) to represent the variation in diversity, computed as:
p = 1 − (f_mean − f_min + θ) / (f_max − f_min + θ)  (8)

where f_max, f_mean, and f_min indicate the maximal, average, and minimal objective function values evaluated across all individuals. To handle the degenerate case f_max = f_min, a sufficiently small constant θ = 10^−6 is added to the numerator and denominator.
The basic principle of this virtual population mechanism is as follows. At the beginning of population evolution, inserting a virtual individual between two individuals that are far apart (beyond the threshold) helps to locate multiple peaks while alleviating the scarcity of computational resources. During the final phases, virtual individuals are generated between the remaining distantly spaced solutions (exceeding the threshold), thereby refining solution accuracy and accelerating convergence.
Algorithm 2 Virtual Population Mechanism.
  • Step 1: Calculate probability p based on population diversity according to Equation (8).
  • Step 2: Should the sampled random value be less than the probability p, proceed as follows:
    1.
     Calculate the threshold δ for inserting virtual individuals between individuals according to Equation (6).
    2.
     Iterate through all individuals x i in the population as follows:
    (a)
     Calculate the Euclidean distance from each individual x_i to every other individual in the population and obtain the distance minD to its nearest individual x_j.
    (b)
     If minD exceeds the threshold δ, generate a new virtual individual according to Equation (7).
  • Step 3: All virtual individuals are combined into a virtual population. Output the virtual population.
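Algorithm 2 can be condensed into a short Python sketch. This is a rough illustration under our reading of Equations (6)–(8); the grouping in the threshold formula and the helper name `virtual_population` are assumptions, not the authors' reference implementation:

```python
import math
import random

def virtual_population(pop, fits, lower, upper, in_size, theta=1e-6):
    """Sketch of the virtual population mechanism (Algorithm 2).

    Midpoint virtual individuals (Eq. 7) are inserted wherever an individual's
    nearest neighbour lies farther away than the threshold delta (Eq. 6); the
    whole step fires with the diversity-driven probability p (Eq. 8).
    """
    f_max, f_min = max(fits), min(fits)
    f_mean = sum(fits) / len(fits)
    p = 1.0 - (f_mean - f_min + theta) / (f_max - f_min + theta)  # Eq. (8)
    vp = []
    if random.random() < p:
        delta = (upper - lower) / (2 * len(pop) + in_size)  # Eq. (6), grouping assumed
        for i, xi in enumerate(pop):
            # nearest neighbour of x_i within the current population
            j, min_d = min(((k, math.dist(xi, xk))
                            for k, xk in enumerate(pop) if k != i),
                           key=lambda t: t[1])
            if min_d > delta:
                vp.append([(a + b) / 2 for a, b in zip(xi, pop[j])])  # Eq. (7)
    return vp
```
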

4.2. Multi-Mutation Strategy

The above VPM strategy can be used to support the maintenance of population diversity and to search for multiple peaks simultaneously. However, such a strategy may not necessarily lead to a valid and appropriate search in the population. The proposed multi-mutation strategy orchestrates population evolution through synergistic guidance mechanisms, enabling comprehensive exploration of the solution space. The methodology first segregates the population into two subgroups through fitness-driven division. This fitness-based criterion offers a straightforward method to partition the population into two distinct groups: “superior individuals” destined for exploitation, and “inferior individuals” designated for exploration. Second, we use different mutation strategies to evolve the corresponding individuals based on the association of the population and virtual population.
Specifically, for “superior individuals”, this strategy exploits the fact that similar individuals are likely to locate the same peak. Here, inter-individual distance quantifies similarity: the smaller the distance, the more similar the individuals. Superior individuals, characterized by their high fitness, guide the search toward promising regions, thereby accelerating convergence. We first calculate the Euclidean distance between each individual among the “superior individuals” and the combination of population P and virtual population vP, and identify each individual's nearest neighbor. Two points are then derived from each individual's nearest neighbor to help that individual evolve. For each individual among the “superior individuals”, the mutation strategy is implemented as:
v_i = x_i + F_i × (x̃_p1 − x̃_p2)  (9)

where v_i designates the mutation vector for x_i; the vectors x̃_p1 and x̃_p2 are sampled from x_n, defined as the closest individual to x_i in P ∪ vP; and F_i represents the mutation factor assigned to x_i, which is recomputed every generational cycle. Here, we sample individuals using the following operation:
x̃_p^d = x_n^d + rand(0,1) × (x_i^d − x_n^d)  (10)

where the index d traverses {1, 2, …, D}, and D specifies the total number of dimensions of the optimization problem. Inferior individuals are not subjected to local exploitation; instead, they are allocated to global exploration to probe undiscovered areas of the search space. The strategy for inferior individuals ensures good diversity and explores more peaks; it is expressed as:
v_i = x_i + F_i × (x̃_r1 − x̃_r2)  (11)

where x̃_r1 and x̃_r2 are two individuals randomly chosen from P ∪ vP that are distinct from x_i.
This strategy works well with VPM, and takes full advantage of individual fitness and distribution. Individuals with good fitness use their most similar individuals to exploit their corresponding peaks. At the same time, inferior individuals would explore more areas to ensure population diversity and find more peaks. The synergistic integration of the multi-mutation strategy and virtual population mechanism provides dual advantages: it effectively preserves population diversity for locating multiple global optima while simultaneously accelerating convergence rates.
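A minimal sketch of the multi-mutation strategy, assuming maximization and per-dimension uniform sampling in Equation (10) (the function and variable names are ours, for illustration only):

```python
import math
import random

def multi_mutation(pop, vp, fits, i, F):
    """Sketch of the MMS: individuals at or above average fitness exploit via
    Eqs. (9)-(10); the rest explore via Eq. (11). Maximization assumed."""
    union = pop + vp
    xi = pop[i]
    if fits[i] >= sum(fits) / len(fits):          # "superior" individual
        # nearest neighbour x_n of x_i in P ∪ vP (excluding x_i itself)
        xn = min((x for x in union if x is not xi),
                 key=lambda x: math.dist(xi, x))
        # Eq. (10): sample two points between x_n and x_i, per dimension
        p1 = [xn[d] + random.random() * (xi[d] - xn[d]) for d in range(len(xi))]
        p2 = [xn[d] + random.random() * (xi[d] - xn[d]) for d in range(len(xi))]
    else:                                         # "inferior" individual
        p1, p2 = random.sample([x for x in union if x is not xi], 2)
    return [xi[d] + F * (p1[d] - p2[d]) for d in range(len(xi))]  # Eq. (9)/(11)
```
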

4.3. Adaptive Local Search Strategy

While the VPM and multi-mutation strategy effectively enhance population diversity and identify the promising basins of multiple global optima, their solutions may lack precision. To bridge this gap, local search strategies [43,46] are incorporated to refine the solution accuracy and enable the precise localization of all global optima. In the present study, a probability-based local search (PLS) is employed to enhance the precision of the solutions (i.e., fittest-and-farthest). The general procedure is described in Algorithm 3.
Algorithm 3 Adaptive Local Search Operation.
  • Step 1: Calculate the maximum distance (maxD) between individuals in the population.
  • Step 2: For each individual x i in the population:
    1.
     Compute the Euclidean distance d i that separates each population member from the best-performing individual.
    2.
     Obtain the probability p_i of each individual performing a local search by Equation (14).
    3.
     If the generated random number is less than the probability p i :
    (a)
     Construct two trial solutions near x i by implementing the Gaussian sampling process detailed in (12) and (13);
    (b)
     Compare the two solutions and designate the prevailing one as x i r ; should x i r demonstrate superior fitness to x i , substitute x i with this refined candidate.
  • Step 3: Output the population.
The Gaussian local search, adopted based on its simplicity and robust performance [43,46], is implemented as follows:
x_new = Gaussian(x, σ)  (12)
where x n e w is generated by sampling from a Gaussian distribution centered at individual x with standard deviation σ .
In the implementation of the local search strategy, two key aspects must be addressed. The first is the configuration of the standard deviation σ in the Gaussian distribution. During the initial phases of evolution, emphasis should be placed on exploration; thus, a larger σ is adopted to sample a broader region and enhance population diversity. As the evolution progresses, the focus shifts toward accelerating convergence, and a smaller σ is used to narrow the sampling range, thereby refining solution accuracy. Furthermore, in higher-dimensional problems, greater diversity is generally required. To address this, an exponential decay model is introduced to gradually reduce σ over generations. Additionally, the initial value of σ is set to be positively correlated with the dimensionality of the problem. The standard deviation σ of the Gaussian distribution is given by:
σ = 10^(1 − (10/D + 3) · NFES/MNFE)
where D represents the problem dimensionality, NFES denotes the current count of consumed fitness evaluations, and MNFE is the predefined maximum budget of FEs.
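As an illustration of this decay schedule, the short sketch below (our own helper, named `gaussian_sigma`, assuming the reconstructed form σ = 10^(1 − (10/D + 3)·NFES/MNFE)) shows that σ starts at 10, shrinks sharply as the evaluation budget is consumed, and decays more slowly for higher-dimensional problems:

```python
def gaussian_sigma(nfes: int, mnfe: int, dim: int) -> float:
    """Standard deviation of the Gaussian sampling, decaying exponentially
    with the consumed evaluation budget (reconstructed form, our naming):
    sigma = 10 ** (1 - (10 / D + 3) * NFES / MNFE)."""
    return 10.0 ** (1.0 - (10.0 / dim + 3.0) * nfes / mnfe)

# Start of a run: a wide sampling radius favours exploration.
print(gaussian_sigma(0, 200_000, 2))        # 10.0
# End of the budget: the radius collapses to refine solutions (about 1e-07 here).
print(gaussian_sigma(200_000, 200_000, 2))
# At the same progress, a higher-dimensional problem keeps a larger sigma,
# reflecting the greater diversity required in high dimensions.
print(gaussian_sigma(100_000, 200_000, 10))
```

For D = 2 the exponent falls from 1 to −7 over a budget of 2.0 × 10^5 evaluations, so the sampling radius contracts by eight orders of magnitude between the exploratory and exploitative phases.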
The second consideration is the selection of individuals for local search. The aim is to balance diversity maintenance with solution refinement, focusing on high-quality individuals because they are more likely to reside near global optima and therefore merit priority to accelerate convergence. However, multiple high-fitness solutions may cluster around the same peak, and performing local search on all of them is computationally inefficient and redundant. We therefore use the similarity and fitness difference between each individual and the best individual in the population to determine whether that individual participates in the local search. Specifically, we first calculate the similarity between all solutions; similarity is quantified by Euclidean distance, and the maximum inter-individual separation (maxD) within the current population is determined at the same time. Then, analogously, we calculate the similarity between each individual and the best individual of the current population. Finally, the probability of each individual performing a local search is defined by:
p_i = (d_i / maxD) · ((f_i − f_min) / (f_max − f_min))
where d_i denotes the Euclidean distance separating x_i from the best individual; f_i indicates the ith individual’s fitness; and f_min and f_max specify the lower and upper bounds of the population’s fitness range, respectively.
Equation (14) shows that the farther an individual is from the best solution and the closer its fitness is to that of the best solution, the greater its probability of performing a local search. Optimizing these high-probability solutions serves the dual purpose of promoting convergence while safeguarding diversity. Conversely, solutions with poor fitness, or those located close to the best solution, are assigned smaller probabilities to save computational resources.
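Putting Equation (14) and Algorithm 3 together, the following sketch illustrates one pass of the probabilistic local search. It is a minimal illustration rather than the authors' exact implementation: maximization is assumed, the function and variable names are ours, and a fixed σ is passed in place of the decay schedule.

```python
import math
import random

def local_search_probability(pop, fitness, best_idx):
    """p_i = (d_i / maxD) * (f_i - f_min) / (f_max - f_min), per Equation (14)."""
    maxd = max(math.dist(a, b) for a in pop for b in pop) or 1.0  # maxD
    fmin, fmax = min(fitness), max(fitness)
    frange = (fmax - fmin) or 1.0
    best = pop[best_idx]
    return [math.dist(x, best) / maxd * (f - fmin) / frange
            for x, f in zip(pop, fitness)]

def probabilistic_local_search(pop, fitness, fobj, sigma, rng=random):
    """One pass of Algorithm 3 (maximization assumed): each individual performs
    Gaussian local search with probability p_i; the better of two trial
    solutions replaces x_i only if it improves the fitness."""
    best_idx = max(range(len(pop)), key=fitness.__getitem__)
    for i, p in enumerate(local_search_probability(pop, fitness, best_idx)):
        if rng.random() < p:
            # two trial solutions sampled around x_i, as in Equations (12)-(13)
            trials = [[xj + rng.gauss(0.0, sigma) for xj in pop[i]]
                      for _ in range(2)]
            cand = max(trials, key=fobj)
            if fobj(cand) > fitness[i]:
                pop[i], fitness[i] = cand, fobj(cand)
    return pop, fitness
```

Note how the selection rule behaves at the extremes: the best individual itself gets p_i = 0 (d_i = 0), as does the worst individual (f_i = f_min), so refinement concentrates on good-but-distant candidates, which matches the fittest-and-farthest intent.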

5. Experiments

The experimental study is designed with two primary objectives: to assess the significance of the introduced strategies and to compare the efficacy of our proposed algorithm with that of established methods in the literature. The experimental platform is a desktop computer (Manufacturer: ASUS; Location: Chongqing, China) equipped with an Intel Core i7-8700 processor (3.20 GHz) and 8 GB of RAM. We conducted 51 independent trials for each algorithm to ensure statistical reliability, and the performance metrics reported herein represent the average outcomes. For clarity, the best results are boldfaced. Statistical analysis of the PR metrics (from the 51 executions) is performed using the Wilcoxon rank-sum test at a significance level of 0.05. The indicators +, −, and ≈ denote significant outperformance, underperformance, and statistical parity of our method against each benchmark, respectively.

5.1. Experimental Data and Settings

The CEC’2013 benchmark suite [47] serves as the testbed for evaluating our algorithm’s capability in addressing multimodal optimization problems (MMOPs). This benchmark comprises 20 multimodal functions characterized by diverse landscape features and varying complexities. Two widely adopted metrics, the peak ratio (PR) and the success rate (SR), are employed for performance evaluation. The PR is calculated as the mean percentage of global optima discovered over all separate executions, under the constraints of a maximum fitness evaluation count (MNFE) and a precision threshold ε. It is formally expressed as:
PR = (∑_{i=1}^{NR} NPF_i) / (TNP × NR)
where NR signifies the count of independent trials, NPF_i enumerates the successfully identified global peaks in the ith execution, and TNP corresponds to the actual number of global optima existing in the problem landscape.
The success rate (SR) quantifies the percentage of trials where all global optima are successfully identified within individual runs. It is formulated as:
SR = NSR / NR
where NSR quantifies the number of successful executions observed within the complete set of NR independent trials.
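For concreteness, both metrics reduce to a few lines; the sketch below is our own illustration, where `peaks_found` is a hypothetical list giving the number of global optima located in each independent run:

```python
def peak_ratio(peaks_found, total_peaks):
    """PR: mean fraction of global optima found across all independent runs."""
    return sum(peaks_found) / (total_peaks * len(peaks_found))

def success_rate(peaks_found, total_peaks):
    """SR: fraction of runs in which every global optimum was located."""
    return sum(n == total_peaks for n in peaks_found) / len(peaks_found)

# Example: NR = 4 runs on a function with TNP = 5 global optima.
runs = [5, 5, 4, 3]          # peaks found per run
print(peak_ratio(runs, 5))   # (5 + 5 + 4 + 3) / (5 * 4) = 0.85
print(success_rate(runs, 5)) # 2 of the 4 runs found all peaks -> 0.5
```

The example shows why PR is the finer-grained metric: a run that finds four of five peaks still contributes to PR but counts as a failure for SR.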
In the experimental setup, we employ five accuracy levels (ε = 1.0 × 10^−1, 1.0 × 10^−2, 1.0 × 10^−3, 1.0 × 10^−4, 1.0 × 10^−5) for a comprehensive evaluation. To ensure a fair comparison, a uniform population size and fitness evaluation (FEs) budget are applied to all algorithms on each test function, as summarized in Table 1.

5.2. Exploring the Proposed Method

First, we evaluate the significance of VPM, MMS, and PLS in the proposed algorithm. To validate the contribution of each component, we conduct an ablation study comparing the proposed VMPMA with three variants: VMPMA_1 (without PLS), VMPMA_2 (without both PLS and MMS), and VMPMA_3 (without all three proposed strategies). All four algorithms are evaluated under identical parameter settings, and their peak ratio (PR) results are summarized in Table 2.
As evidenced in Table 2, the three strategies introduced in this work—VPM, MMS, and PLS—collectively contribute to a significant improvement in algorithmic performance. The incorporation of the VPM strategy alone, as seen in the comparison between VMPMA_2 and VMPMA_3, already leads to a noticeable increase in the number of located optima across most multimodal functions. This advantage is particularly pronounced on functions F6, F9, F11, F12, F15, F17, and F18, where VMPMA_2 achieves substantially higher average PR values than VMPMA_3. Consequently, by deriving a virtual population from the current population distribution to assist evolution, the VPM strategy effectively maintains population diversity and thereby locates more optimal solutions. Comparing VMPMA_1 with VMPMA_2 shows that MMS brings clear benefits, especially on functions F7, F9, F15, F17, and F20; at high accuracy levels, it also yields a significant advantage on F8 and F19. This demonstrates that MMS further enhances performance by assigning complementary roles to individuals, directing high-potential solutions toward exploitation and low-potential ones toward exploration, creating a synergistic effect. Finally, the results of VMPMA and VMPMA_1 show that PLS helps to identify the optimal solutions accurately: VMPMA outperforms VMPMA_1 on all functions except F9.
Additionally, Table 3 presents the average fitness and standard deviation of these four algorithms under the condition of ε = 1.0 × 10^−4. As evidenced by the experimental results, functions F16–F20 represent cases where the algorithms frequently do not converge fully to the required accuracy. For functions F1–F15, all compared algorithms consistently reached the optimal value (or the accuracy threshold) in all runs, making the final fitness values non-discriminative. The complete dataset is available upon request.

5.3. Comparing with Related Algorithms

In the present section, the algorithm proposed in this study is compared against 11 representative multimodal optimization algorithms, namely CDE [15], SDE [37], NCDE [16], NSDE [16], self-CCDE [6], self-CSDE [6], LMSEDA [22], AED-DDE [48], R2PSO [38], R3PSO [38], and LIPS [49]. In [15], the standard DE is extended as the crowding DE (CDE) for multimodal optimization, utilizing a crowding scheme to trace and preserve multiple peaks. In SDE [37], DE is extended with the notion of speciation for solving MMOPs. NCDE and NSDE [16] tackle MMOPs using species and crowding schemes with neighborhood mutation, respectively, while self-CCDE and self-CSDE [6] build upon a cluster-based self-adaptive DE, similarly incorporating the respective niching schemes. In LMSEDA [22], the algorithm is developed for MMOPs by integrating clustering strategies for crowding and speciation within a multimodal EDA framework. AED-DDE [48] is a parameter-free niching algorithm for MMOPs, which integrates adaptive estimation of distribution with a distributed differential evolution framework. In addition, R2PSO and R3PSO [38] are lbest PSO niching algorithms that use a ring topology for MMOPs. The LIPS technique [49] leverages neighborhood-driven particle interactions based on spatial proximity to navigate complex multi-peak optimization problems. To guarantee a fair comparison, all algorithms employ the same population size and maximum number of fitness evaluations (MNFE) for each function, as detailed in Table 1. The remaining parameters for the 11 compared methods are configured according to the optimal settings reported in their original publications.
Table 4, Table 5, Table 6, Table 7 and Table 8 list the comparison results of the methods at the five accuracy levels. Based on the experimental results, the method proposed in this study exhibits overall superior performance compared with the 11 competing methods across all five accuracy levels. For example, at the level of ε = 1.0 × 10^−1, our method obtains the best average PR results on 17 out of 20 functions, whereas the other methods except AED-DDE obtain 8, 3, 5, 5, 13, 9, 6, 5, 4, and 14, respectively. Statistical analysis confirms that our approach holds significant advantages, clearly outperforming the 11 competitors on 11, 17, 15, 15, 5, 9, 12, 15, 15, 5, and 2 functions, respectively. More importantly, the proposed method demonstrates superior efficacy at stricter accuracy levels, where its advantage over the 11 peer algorithms in locating global optima becomes most pronounced. For instance, the numbers of best average PR results obtained by CDE, SDE, R2PSO, R3PSO, SCCDE, SCSDE, NCDE, NSDE, LIPS, LMSEDA, and AED-DDE at the accuracy level of ε = 1.0 × 10^−4 turn out to be 5, 3, 3, 4, 7, 5, 7, 2, 4, 10, and 10, respectively, whereas our method obtains 11. In addition, according to the results of the Wilcoxon rank-sum tests, our method is significantly better than CDE, SDE, R2PSO, R3PSO, SCCDE, SCSDE, NCDE, NSDE, LIPS, LMSEDA, and AED-DDE on 15, 17, 17, 16, 11, 14, 12, 17, 16, 10, and 7 functions, respectively. A closer inspection of the results reveals that VMPMA locates all known optima on functions F1–F6 and F10 at all five accuracy levels. Moreover, on F12, all known optima are found except at the accuracy level of ε = 1.0 × 10^−5. Further, on F7 and F18, VMPMA is clearly superior to the other compared methods. Overall, VMPMA is competitive against the 11 compared methods.
Furthermore, the convergence curves in Figure 2 demonstrate that, compared with other algorithms, the proposed algorithm has a better ability to balance exploration and exploitation.

6. Conclusions

In this paper, an adaptive memetic differential evolution with virtual population and multi-mutation strategies is developed for MMOPs. First, a virtual population is derived from the current population distribution according to the VPM. The virtual population guides the evolution of the current population without itself participating in the evolutionary operations, thereby supporting the maintenance of population diversity needed to locate multiple optimal solutions. Then, the union of the virtual and current populations generates offspring according to the MMS, which directs high-potential individuals toward exploitation and low-potential individuals toward exploration. Finally, the fittest-and-farthest individuals are refined more precisely by the PLS. Experimental findings demonstrate the superior comprehensive performance of our method when evaluated against eleven state-of-the-art multimodal optimization techniques.
Although the proposed method is promising in addressing MMOPs, it may encounter limitations in locating all optima in complex problems with high dimensions. To further enhance the performance of the proposed algorithm, the size of the virtual population requires further investigation. Additionally, the systematic generation of virtual individuals to enable effective exploration of the problem space also warrants in-depth research. Furthermore, applying the proposed algorithm to real-world problems represents another valuable direction for future research.

Author Contributions

T.D. and Z.W.: methodology, investigation, software, writing—original draft. Q.L.: methodology, writing—original draft, validation, supervision. L.Y.: conceptualization, writing—review and editing, supervision. Z.W.: writing—original draft, writing—review and editing. Y.W.: software, validation, visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Research Project of Science and Technology Research Program from Chongqing Municipal Education Commission [Grant No. KJZD-K202404701], in part by the Natural Science Foundation of Hubei Province of China [Grant No. 2024AFB812], and in part by the Guiding Project of Scientific Research Plan of Hubei Provincial Department of Education of China [Grant No. B2023134].

Data Availability Statement

The data used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gong, M.; Tang, Z.; Li, H.; Zhang, J. Evolutionary multitasking with dynamic resource allocating strategy. IEEE Trans. Evol. Comput. 2019, 23, 858–869. [Google Scholar] [CrossRef]
  2. Yu, Y.; Gao, S.; Wang, Y.; Cheng, J.; Todo, Y. ASBSO: An improved brain storm optimization with flexible search length and memory-based selection. IEEE Access 2018, 6, 36977–36994. [Google Scholar] [CrossRef]
  3. Liu, X.F.; Zhan, Z.H.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. An energy efficient ant colony system for virtual machine placement in cloud computing. IEEE Trans. Evol. Comput. 2016, 22, 113–128. [Google Scholar] [CrossRef]
  4. Zaman, F.; Elsayed, S.M.; Ray, T.; Sarkerr, R.A. Evolutionary algorithms for finding nash equilibria in electricity markets. IEEE Trans. Evol. Comput. 2017, 22, 536–549. [Google Scholar] [CrossRef]
  5. Vidanalage, B.D.S.G.; Toulabi, M.S.; Filizadeh, S. Multimodal design optimization of V-shaped magnet IPM synchronous machines. IEEE Trans. Energy Convers. 2018, 33, 1547–1556. [Google Scholar] [CrossRef]
  6. Gao, W.; Yen, G.G.; Liu, S. A cluster-based differential evolution with self-adaptive strategy for multimodal optimization. IEEE Trans. Cybern. 2013, 44, 1314–1327. [Google Scholar] [CrossRef]
  7. Li, C.; Zhai, Y.; Palade, V.; Fang, W.; Lu, H.; Mao, L.; Sun, J. Diversity-based adaptive differential evolution algorithm for multimodal optimization problems. Swarm Evol. Comput. 2025, 93, 101869. [Google Scholar] [CrossRef]
  8. Cheng, R.; Li, M.; Li, K.; Yao, X. Evolutionary multiobjective optimization-based multimodal optimization: Fitness landscape approximation and peak detection. IEEE Trans. Evol. Comput. 2017, 22, 692–706. [Google Scholar] [CrossRef]
  9. Yan, L.; Guo, S.; Liang, J.; Qu, B.; Li, C.; Yu, K. A subspace strategy based coevolutionary framework for constrained multimodal multiobjective optimization problems. Swarm Evol. Comput. 2025, 95, 101941. [Google Scholar] [CrossRef]
  10. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Li, T. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169. [Google Scholar] [CrossRef]
  11. Akopov, A.S. A clustering-based hybrid particle swarm optimization algorithm for solving a multisectoral agent-based model. Stud. Inform. Control 2024, 33, 83–95. [Google Scholar] [CrossRef]
  12. Harik, G.R. Finding multimodal solutions using restricted tournament selection. In Proceedings of the ICGA, Pittsburgh, PA, USA, 15–19 July 1995; pp. 24–31. [Google Scholar]
  13. Della Cioppa, A.; De Stefano, C.; Marcelli, A. Where are the niches? Dynamic fitness sharing. IEEE Trans. Evol. Comput. 2007, 11, 453–465. [Google Scholar] [CrossRef]
  14. Miller, B.L.; Shaw, M.J. Genetic algorithms with dynamic niche sharing for multimodal function optimization. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 786–791. [Google Scholar]
  15. Thomsen, R. Multimodal optimization using crowding-based differential evolution. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), Portland, OR, USA, 19–23 June 2004; Volume 2. pp. 1382–1389. [Google Scholar]
  16. Qu, B.Y.; Suganthan, P.N.; Liang, J.J. Differential evolution with neighborhood mutation for multimodal optimization. IEEE Trans. Evol. Comput. 2012, 16, 601–614. [Google Scholar] [CrossRef]
  17. Biswas, S.; Kundu, S.; Das, S. Inducing niching behavior in differential evolution through local information sharing. IEEE Trans. Evol. Comput. 2014, 19, 246–263. [Google Scholar] [CrossRef]
  18. Li, X.; Epitropakis, M.G.; Deb, K.; Engelbrecht, A. Seeking multiple solutions: An updated survey on niching methods and their applications. IEEE Trans. Evol. Comput. 2016, 21, 518–538. [Google Scholar] [CrossRef]
  19. Huang, T.; Gong, Y.J.; Kwong, S.; Wang, H.; Zhang, J. A niching memetic algorithm for multi-solution traveling salesman problem. IEEE Trans. Evol. Comput. 2019, 24, 508–522. [Google Scholar] [CrossRef]
  20. Shi, L.; Hu, Z.; Su, Q.; Miao, Y. A modified multifactorial differential evolution algorithm with optima-based transformation. Appl. Intell. 2023, 53, 2989–3001. [Google Scholar] [CrossRef]
  21. Wang, X.; Sheng, M.; Ye, K.; Lin, J.; Mao, J.; Chen, S.; Sheng, W. A multilevel sampling strategy based memetic differential evolution for multimodal optimization. Neurocomputing 2019, 334, 79–88. [Google Scholar] [CrossRef]
  22. Yang, Q.; Chen, W.N.; Li, Y.; Chen, C.P.; Xu, X.M.; Zhang, J. Multimodal estimation of distribution algorithms. IEEE Trans. Cybern. 2016, 47, 636–650. [Google Scholar] [CrossRef]
  23. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  24. Shuai, Y.; Shen, D.; Shen, M. Potential-optima guided adaptive neighborhood differential evolution algorithm for multimodal optimization problems. J. Supercomput. 2025, 81, 1–39. [Google Scholar] [CrossRef]
  25. Teo, J. Exploring dynamic self-adaptive populations in differential evolution. Soft Comput. 2006, 10, 673–686. [Google Scholar]
  26. Zamuda, A.; Brest, J. Population reduction differential evolution with multiple mutation strategies in real world industry challenges. In Proceedings of the International Symposium on Evolutionary Computation, Zakopane, Poland, 29 April–3 May 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 154–161. [Google Scholar]
  27. Zaharie, D.; Petcu, D. Parallel Implementation of Multi-population. Concurr. Inf. Process. Comput. 2005, 195, 223. [Google Scholar]
  28. De Falco, I.; Della Cioppa, A.; Maisto, D.; Scafuri, U.; Tarantino, E. Biological invasion–inspired migration in distributed evolutionary algorithms. Inf. Sci. 2012, 207, 50–65. [Google Scholar] [CrossRef]
  29. Dorronsoro, B.; Bouvry, P. Improving classical and decentralized differential evolution with new mutation operator and population topologies. IEEE Trans. Evol. Comput. 2011, 15, 67–98. [Google Scholar] [CrossRef]
  30. Jin, P.; Cen, J.; Feng, Q.; Ai, W.; Chen, H.; Qiao, H. Differential evolution with the mutation strategy transformation based on a quartile for numerical optimization. Appl. Intell. 2024, 54, 334–356. [Google Scholar] [CrossRef]
  31. Wang, L.; Li, J.; Yan, X. A variable population size opposition-based learning for differential evolution algorithm and its applications on feature selection. Appl. Intell. 2024, 54, 959–984. [Google Scholar] [CrossRef]
  32. Yuan, G.; Sun, G.; Deng, L.; Li, C.; Yang, G. A novel differential evolution algorithm based on periodic intervention and systematic regulation mechanisms. Appl. Intell. 2024, 54, 11779–11803. [Google Scholar] [CrossRef]
  33. Epitropakis, M.G.; Plagianakos, V.P.; Vrahatis, M.N. Multimodal optimization using niching differential evolution with index-based neighborhoods. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  34. Varela, D.; Santos, J. Protein structure prediction in an atomic model with differential evolution integrated with the crowding niching method. Nat. Comput. 2022, 21, 537–551. [Google Scholar] [CrossRef]
  35. Li, Y.; Guo, H.; Liu, X.; Li, Y.; Pan, W.; Gong, B.; Pang, S. New mutation strategies of differential evolution based on clearing niche mechanism. Soft Comput. 2017, 21, 5939–5974. [Google Scholar]
  36. Gong, Y.J.; Zhang, J.; Zhou, Y. Learning multimodal parameters: A bare-bones niching differential evolution approach. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 2944–2959. [Google Scholar] [CrossRef] [PubMed]
  37. Li, X. Efficient differential evolution using speciation for multimodal function optimization. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington DC, USA, 25–29 June 2005; pp. 873–880. [Google Scholar]
  38. Li, X. Niching without niching parameters: Particle swarm optimization using a ring topology. IEEE Trans. Evol. Comput. 2009, 14, 150–169. [Google Scholar] [CrossRef]
  39. Chen, Z.G.; Zhan, Z.H.; Wang, H.; Zhang, J. Distributed individuals for multiple peaks: A novel differential evolution for multimodal optimization problems. IEEE Trans. Evol. Comput. 2019, 24, 708–719. [Google Scholar] [CrossRef]
  40. Vitela, J.E.; Castaños, O. A real-coded niching memetic algorithm for continuous multimodal function optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 2170–2177. [Google Scholar]
  41. Ni, J.; Li, L.; Qiao, F.; Wu, Q. A novel memetic algorithm based on the comprehensive learning PSO. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  42. Ono, S.; Hirotani, Y.; Nakayama, S. Multiple solution search based on hybridization of real-coded evolutionary algorithm and quasi-newton method. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 1133–1140. [Google Scholar]
  43. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Wang, H.; Kwong, S.; Zhang, J. Automatic niching differential evolution with contour prediction approach for multimodal optimization problems. IEEE Trans. Evol. Comput. 2019, 24, 114–128. [Google Scholar] [CrossRef]
  44. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  45. Karafotias, G.; Hoogendoorn, M.; Eiben, Á.E. Parameter control in evolutionary algorithms: Trends and challenges. IEEE Trans. Evol. Comput. 2014, 19, 167–187. [Google Scholar] [CrossRef]
  46. Zhou, A.; Sun, J.; Zhang, Q. An estimation of distribution algorithm with cheap and expensive local search methods. IEEE Trans. Evol. Comput. 2015, 19, 807–822. [Google Scholar] [CrossRef]
  47. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; Technical Report; RMIT University, Evolutionary Computation and Machine Learning Group: Melbourne, VIC, Australia, 2013. [Google Scholar]
  48. Wang, Z.J.; Zhou, Y.R.; Zhang, J. Adaptive estimation distribution distributed differential evolution for multimodal optimization problems. IEEE Trans. Cybern. 2020, 52, 6059–6070. [Google Scholar] [CrossRef]
  49. Qu, B.Y.; Suganthan, P.N.; Das, S. A distance-based locally informed particle swarm model for multimodal optimization. IEEE Trans. Evol. Comput. 2012, 17, 387–402. [Google Scholar] [CrossRef]
Figure 1. Differential evolution flowchart.
Figure 2. Convergence curves of relevant algorithms at ε = 1.0 × 10^−4.
Table 1. Parameter setting.
Functions | Population Size | MNFE
F1–F5 | 80 | 5.0 × 10^4
F6 | 100 | 2.0 × 10^5
F7 | 300 | 2.0 × 10^5
F8–F9 | 300 | 4.0 × 10^5
F10 | 100 | 2.0 × 10^5
F11–F13 | 200 | 2.0 × 10^5
F14–F20 | 200 | 4.0 × 10^5
Table 2. Performance comparison of VMPMA and its variants based on PR (Note: the best values are highlighted in bold).
(Each cell lists PR as VMPMA / VMPMA_1 / VMPMA_2 / VMPMA_3.)
ε | F1 | F2 | F3 | F4
1.0E−1 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
1.0E−2 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
1.0E−3 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
1.0E−4 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
1.0E−5 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
ε | F5 | F6 | F7 | F8
1.0E−1 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 0.944/0.911/0.996/0.997
1.0E−2 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 0.923/0.846/0.809/0.808 | 0.936/0.911/0.987/0.989
1.0E−3 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 0.921/0.832/0.782/0.778 | 0.979/0.939/0.949/0.947
1.0E−4 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/0.996 | 0.920/0.832/0.758/0.750 | 0.926/0.876/0.663/0.696
1.0E−5 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/0.990 | 0.889/0.814/0.729/0.730 | 0.919/0.876/0.183/0.211
ε | F9 | F10 | F11 | F12
1.0E−1 | 0.671/0.691/0.828/0.821 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/0.961/0.944/0.941
1.0E−2 | 0.478/0.467/0.389/0.380 | 1.000/1.000/1.000/1.000 | 0.971/0.699/0.716/0.667 | 1.000/0.890/0.895/0.784
1.0E−3 | 0.473/0.427/0.361/0.347 | 1.000/1.000/1.000/1.000 | 0.967/0.690/0.686/0.667 | 1.000/0.833/0.883/0.696
1.0E−4 | 0.470/0.413/0.334/0.330 | 1.000/1.000/1.000/1.000 | 0.958/0.670/0.673/0.667 | 1.000/0.784/0.737/0.696
1.0E−5 | 0.444/0.413/0.315/0.312 | 1.000/1.000/1.000/1.000 | 0.947/0.667/0.667/0.667 | 0.998/0.748/0.765/0.647
ε | F13 | F14 | F15 | F16
1.0E−1 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000 | 1.000/1.000/1.000/1.000
1.0E−2 | 0.791/0.667/0.677/0.667 | 0.667/0.667/0.667/0.667 | 0.697/0.598/0.593/0.527 | 0.680/0.670/0.680/0.686
1.0E−3 | 0.712/0.667/0.667/0.667 | 0.667/0.667/0.667/0.667 | 0.647/0.598/0.593/0.488 | 0.667/0.667/0.667/0.657
1.0E−4 | 0.708/0.667/0.667/0.667 | 0.667/0.667/0.667/0.667 | 0.647/0.596/0.591/0.486 | 0.667/0.667/0.667/0.657
1.0E−5 | 0.706/0.667/0.667/0.667 | 0.667/0.667/0.667/0.667 | 0.647/0.596/0.588/0.461 | 0.667/0.667/0.667/0.631
ε | F17 | F18 | F19 | F20
1.0E−1 | 1.000/0.995/1.000/1.000 | 1.000/1.000/1.000/1.000 | 0.309/0.255/0.828/0.924 | 0.998/0.963/0.007/0.003
1.0E−2 | 0.493/0.419/0.409/0.331 | 0.667/0.663/0.680/0.522 | 0.262/0.228/0.257/0.253 | 0.125/0.125/0.000/0.000
1.0E−3 | 0.493/0.419/0.409/0.330 | 0.667/0.644/0.598/0.421 | 0.262/0.211/0.125/0.125 | 0.125/0.123/0.000/0.000
1.0E−4 | 0.469/0.419/0.409/0.294 | 0.667/0.444/0.526/0.420 | 0.262/0.179/0.015/0.020 | 0.125/0.066/0.000/0.000
1.0E−5 | 0.426/0.419/0.409/0.255 | 0.663/0.330/0.415/0.255 | 0.262/0.179/0.000/0.003 | 0.003/0.005/0.000/0.000
Table 3. Comparing the fitness results delivered by VMPMA and its three variants on the CEC’2013 test suite with ε = 1.0 × 10^−4.
Functions | VMPMA_3 | VMPMA_2 | VMPMA_1 | VMPMA
F1 | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0)
F2 | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0)
F3 | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0) | 1.000×10^0 (0.000×10^0)
F4 | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0) | 2.000×10^2 (0.000×10^0)
F5 | 1.032×10^0 (0.000×10^0) | 1.032×10^0 (0.000×10^0) | 1.032×10^0 (0.000×10^0) | 1.032×10^0 (0.000×10^0)
F6 | 1.867×10^2 (0.000×10^0) | 1.867×10^2 (0.000×10^0) | 1.867×10^2 (0.000×10^0) | 1.867×10^2 (0.000×10^0)
F7 | 1.000×10^0 (3.933×10^−1) | 1.000×10^0 (7.863×10^−2) | 1.000×10^0 (7.860×10^−2) | 1.000×10^0 (2.320×10^−2)
F8 | 2.709×10^3 (8.732×10^−3) | 2.709×10^3 (0.000×10^0) | 2.709×10^3 (6.113×10^−2) | 2.709×10^3 (3.542×10^−2)
F9 | 1.000×10^0 (6.532×10^−3) | 1.000×10^0 (3.333×10^−3) | 1.000×10^0 (2.294×10^−2) | 1.000×10^0 (2.020×10^−2)
F10 | 2.000×10^0 (0.000×10^0) | 2.000×10^0 (0.000×10^0) | 2.000×10^0 (0.000×10^0) | 2.000×10^0 (0.000×10^0)
F11 | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0)
F12 | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (8.842×10^−2) | 6.141×10^−18 (0.000×10^0) | 0.000×10^0 (0.000×10^0)
F13 | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (1.394×10^−1) | 0.000×10^0 (2.229×10^0) | 0.000×10^0 (1.394×10^−1)
F14 | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (0.000×10^0)
F15 | 0.000×10^0 (8.842×10^−2) | 0.000×10^0 (8.842×10^−2) | 0.000×10^0 (1.768×10^−1) | 0.000×10^0 (5.594×10^−2)
F16 | 1.595×10^−10 (8.523×10^−3) | 3.708×10^−13 (0.000×10^0) | 0.000×10^0 (0.000×10^0) | 0.000×10^0 (7.452×10^−2)
F17 | 2.627×10^−2 (6.231×10^−1) | 9.925×10^−7 (5.423×10^−2) | 1.023×10^−9 (1.118×10^−3) | 2.439×10^−13 (6.851×10^−2)
F18 | 1.311×10^−3 (4.332×10^−2) | 1.336×10^−6 (1.964×10^−2) | 4.519×10^−7 (0.000×10^0) | 6.154×10^−11 (7.453×10^−2)
F19 | 4.224×10^−2 (2.532×10^−5) | 1.017×10^−2 (7.362×10^−4) | 5.281×10^−4 (1.926×10^−2) | 1.843×10^−7 (0.000×10^0)
F20 | 7.892×10^−1 (0.000×10^0) | 1.697×10^−1 (0.000×10^0) | 8.518×10^−3 (0.000×10^0) | 2.628×10^−5 (0.000×10^0)
Table 4. Peak ratio and success rate performance on the CEC’2013 benchmark suite under an accuracy requirement of ε = 1.0 × 10^−1 (Note: the best values are highlighted in bold).
FuncVMPMACDESDER2PSOR3PSOSCCDE
PR SR PR SR PR SR PR SR PR SR PR SR
F11.0001.0001.0001.0000.9710.9411.0001.0001.0001.0001.0001.000
F21.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.000
F31.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.000
F41.0001.0001.0001.0000.7890.3921.0001.0001.0001.0001.0001.000
F51.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.0001.000
F61.0001.0001.0001.0000.2570.0000.5790.0000.6360.0001.0001.000
F71.0001.0000.9990.9800.4820.0000.9340.2550.9940.9410.9870.902
F80.9440.0590.1050.0000.0250.0000.0360.0000.4510.0001.0001.000
F90.6710.0000.5030.0000.0950.0000.2380.0000.1980.0000.4320.000
F101.0001.0001.0001.0000.8420.0390.9850.8240.9850.8241.0001.000
F111.0001.0000.9830.9410.6570.0000.9510.7450.9740.8631.0001.000
F121.0001.0000.3600.0000.6100.0000.5490.0000.5560.0000.9500.686
F131.0001.0000.8950.2550.6370.0000.7610.1180.9970.9801.0001.000
F141.0001.0001.0001.0000.4280.0000.9880.9020.8760.0001.0001.000
F151.0001.0000.9930.9610.2450.0000.7560.0000.7920.0000.9950.961
F161.0001.0000.8990.7060.1730.0000.6630.0000.7610.0001.0001.000
F171.0001.0000.3650.0200.5540.0000.0060.0000.3690.0000.9930.941
F181.0001.0000.9930.9800.1980.0000.2580.0000.4680.0001.0001.000
F190.3090.0000.0000.0000.0250.0000.0000.0000.0030.00004360.000
F200.9980.9800.0000.0000.0000.0000.0000.0000.0000.0000.4940.000
+ 11 17 15 15 5
0 0 0 0 2
9 3 5 5 13
Func  SCSDE        NCDE         NSDE         LIPS         LMSEDA       AED-DDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    0.980/0.961  1.000/1.000  1.000/1.000  0.726/0.549  1.000/1.000  1.000/1.000
F2    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F6    0.926/0.490  0.910/0.490  0.056/0.000  0.889/0.039  0.953/0.471  1.000/1.000
F7    1.000/1.000  0.885/0.020  0.048/0.000  0.489/0.000  1.000/1.000  1.000/1.000
F8    0.999/0.980  0.999/0.980  0.013/0.000  0.616/0.000  0.619/0.000  0.836/0.078
F9    0.272/0.000  0.454/0.000  0.005/0.000  0.194/0.000  0.318/0.000  0.458/0.000
F10   1.000/1.000  1.000/1.000  0.079/0.000  0.989/0.882  1.000/1.000  1.000/1.000
F11   1.000/1.000  0.768/0.118  0.846/0.255  0.941/0.647  1.000/1.000  1.000/1.000
F12   0.797/0.275  0.775/0.059  0.175/0.000  0.917/0.373  0.968/0.745  1.000/1.000
F13   0.994/0.961  0.673/0.000  0.188/0.000  0.768/0.020  0.994/0.961  1.000/1.000
F14   1.000/1.000  0.686/0.000  0.712/0.020  0.667/0.000  1.000/1.000  1.000/1.000
F15   0.628/0.000  0.419/0.000  0.583/0.000  0.609/0.000  0.998/0.980  1.000/1.000
F16   1.000/1.000  0.755/0.157  0.791/0.177  0.445/0.000  1.000/1.000  1.000/1.000
F17   0.375/0.000  0.292/0.000  0.314/0.000  0.228/0.000  1.000/1.000  1.000/1.000
F18   0.486/0.000  0.909/0.647  0.516/0.020  0.617/0.000  1.000/1.000  1.000/1.000
F19   0.269/0.000  0.306/0.000  0.299/0.000  0.008/0.000  0.838/0.608  0.627/0.000
F20   0.128/0.000  0.451/0.000  0.392/0.000  0.000/0.000  1.000/1.000  1.000/1.000
+     9            12           15           15           5            2
=     2            1            0            0            1            1
−     9            7            5            5            14           17
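The PR and SR figures in these tables follow the usual CEC'2013 protocol: over a set of independent runs, PR is the total number of global peaks found divided by (number of peaks × number of runs), and SR is the fraction of runs in which every peak is found at the given accuracy ε. A minimal sketch of that computation (the function name is illustrative; the 51-run example is an assumption suggested by the SR granularity in the tables, e.g. 0.980 ≈ 50/51):

```python
def peak_ratio_and_success_rate(peaks_found_per_run, total_peaks):
    """peaks_found_per_run: number of global peaks located in each
    independent run at a given accuracy level epsilon."""
    runs = len(peaks_found_per_run)
    # PR: total peaks found across all runs / maximum possible
    pr = sum(peaks_found_per_run) / (total_peaks * runs)
    # SR: fraction of runs that located every global peak
    sr = sum(1 for n in peaks_found_per_run if n == total_peaks) / runs
    return pr, sr

# e.g. 51 runs on a 2-peak function, one run missing a peak
pr, sr = peak_ratio_and_success_rate([2] * 50 + [1], total_peaks=2)
# pr = 101/102 ≈ 0.990, sr = 50/51 ≈ 0.980
```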
Table 5. Peak ratio and success rate performance on the CEC'2013 benchmark suite under an accuracy requirement of ε = 1.0 × 10^−2.
Func  VMPM         ACDE         SDE          R2PSO        R3PSO        SCCDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    1.000/1.000  1.000/1.000  0.971/0.941  1.000/1.000  1.000/1.000  0.990/0.980
F2    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.789/0.392  1.000/1.000  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F6    1.000/1.000  0.990/0.941  0.257/0.000  0.579/0.000  0.633/0.000  0.999/0.980
F7    0.923/0.471  0.884/0.000  0.478/0.000  0.567/0.000  0.537/0.000  0.883/0.020
F8    0.936/0.706  0.000/0.000  0.025/0.000  0.036/0.000  0.406/0.000  0.997/0.980
F9    0.478/0.000  0.474/0.000  0.095/0.000  0.127/0.000  0.198/0.000  0.456/0.000
F10   1.000/1.000  1.000/1.000  0.842/0.039  0.942/0.490  0.962/0.647  1.000/1.000
F11   0.971/0.843  0.605/0.000  0.657/0.000  0.660/0.000  0.670/0.000  0.953/0.902
F12   1.000/1.000  0.054/0.000  0.605/0.000  0.449/0.000  0.551/0.000  0.782/0.000
F13   0.791/0.098  0.627/0.000  0.637/0.000  0.660/0.000  0.663/0.000  0.667/0.000
F14   0.667/0.000  0.337/0.000  0.428/0.000  0.437/0.000  0.644/0.000  0.667/0.000
F15   0.697/0.000  0.147/0.000  0.245/0.000  0.174/0.000  0.179/0.000  0.404/0.000
F16   0.680/0.000  0.062/0.000  0.173/0.000  0.183/0.000  0.461/0.000  0.667/0.000
F17   0.493/0.000  0.007/0.000  0.091/0.000  0.006/0.000  0.123/0.000  0.311/0.000
F18   0.667/0.000  0.134/0.000  0.009/0.000  0.058/0.000  0.065/0.000  0.647/0.000
F19   0.262/0.000  0.000/0.000  0.025/0.000  0.000/0.000  0.003/0.000  0.320/0.000
F20   0.125/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.253/0.000
+     –            13           17           15           15           10
=     –            0            0            0            0            3
−     –            7            3            5            5            7
Func  SCSDE        NCDE         NSDE         LIPS         LMSEDA       AED-DDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    0.980/0.961  1.000/1.000  1.000/1.000  0.726/0.549  1.000/1.000  1.000/1.000
F2    1.000/1.000  1.000/1.000  0.786/0.255  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.887/0.039  1.000/1.000  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  0.928/0.353  1.000/1.000  1.000/1.000  1.000/1.000
F6    0.769/0.020  0.659/0.000  0.056/0.000  0.798/0.039  0.939/0.353  1.000/1.000
F7    0.882/0.000  0.884/0.020  0.036/0.000  0.489/0.000  0.746/0.000  0.808/0.039
F8    0.990/0.922  0.999/0.980  0.013/0.000  0.613/0.000  0.553/0.000  0.706/0.000
F9    0.260/0.000  0.454/0.000  0.005/0.000  0.167/0.000  0.318/0.000  0.402/0.000
F10   1.000/1.000  1.000/1.000  0.079/0.000  0.989/0.882  0.998/0.980  1.000/1.000
F11   0.706/0.000  0.703/0.000  0.814/0.157  0.928/0.569  0.931/0.588  1.000/1.000
F12   0.488/0.000  0.522/0.000  0.135/0.000  0.769/0.000  0.958/0.686  1.000/1.000
F13   0.667/0.000  0.667/0.000  0.188/0.000  0.768/0.020  0.667/0.000  0.808/0.078
F14   0.667/0.000  0.667/0.000  0.667/0.000  0.663/0.000  0.667/0.000  0.667/0.000
F15   0.385/0.000  0.370/0.000  0.483/0.000  0.435/0.000  0.726/0.000  0.647/0.000
F16   0.657/0.000  0.644/0.000  0.660/0.000  0.440/0.000  0.667/0.000  0.667/0.000
F17   0.267/0.000  0.243/0.000  0.257/0.000  0.228/0.000  0.493/0.000  0.375/0.000
F18   0.480/0.000  0.360/0.000  0.363/0.000  0.217/0.000  0.628/0.000  0.654/0.000
F19   0.248/0.000  0.230/0.000  0.098/0.000  0.008/0.000  0.407/0.000  0.375/0.000
F20   0.129/0.000  0.253/0.000  0.003/0.000  0.000/0.000  0.250/0.000  0.250/0.000
+     12           11           17           15           10           7
=     2            2            0            0            3            4
−     6            7            3            5            7            9
Table 6. Peak ratio and success rate performance on the CEC'2013 benchmark suite under an accuracy requirement of ε = 1.0 × 10^−3.
Func  VMPM         ACDE         SDE          R2PSO        R3PSO        SCCDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    1.000/1.000  1.000/1.000  0.971/0.941  0.971/0.941  1.000/1.000  0.990/0.980
F2    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.789/0.392  0.897/0.647  0.946/0.804  1.000/1.000
F5    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F6    1.000/1.000  0.987/0.922  0.257/0.000  0.406/0.000  0.633/0.000  0.998/0.980
F7    0.921/0.078  0.884/0.000  0.478/0.000  0.514/0.000  0.488/0.000  0.883/0.020
F8    0.929/0.039  0.000/0.000  0.025/0.000  0.036/0.000  0.406/0.000  0.997/0.980
F9    0.473/0.000  0.467/0.000  0.094/0.000  0.092/0.000  0.105/0.000  0.451/0.000
F10   1.000/1.000  1.000/1.000  0.842/0.039  0.933/0.216  0.930/0.431  1.000/1.000
F11   0.967/0.824  0.605/0.000  0.657/0.000  0.641/0.000  0.654/0.000  0.938/0.726
F12   1.000/1.000  0.005/0.000  0.596/0.000  0.382/0.000  0.547/0.000  0.628/0.000
F13   0.712/0.000  0.300/0.000  0.637/0.000  0.621/0.000  0.650/0.000  0.667/0.000
F14   0.667/0.000  0.225/0.000  0.428/0.000  0.437/0.000  0.644/0.000  0.667/0.000
F15   0.647/0.000  0.059/0.000  0.213/0.000  0.167/0.000  0.179/0.000  0.368/0.000
F16   0.667/0.000  0.003/0.000  0.173/0.000  0.139/0.000  0.437/0.000  0.667/0.000
F17   0.493/0.000  0.000/0.000  0.091/0.000  0.006/0.000  0.123/0.000  0.260/0.000
F18   0.667/0.000  0.046/0.000  0.009/0.000  0.058/0.000  0.065/0.000  0.637/0.000
F19   0.262/0.000  0.000/0.000  0.025/0.000  0.000/0.000  0.003/0.000  0.240/0.000
F20   0.125/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.233/0.000
+     –            14           17           17           16           10
=     –            0            0            0            0            2
−     –            6            3            3            4            8
Func  SCSDE        NCDE         NSDE         LIPS         LMSEDA       AED-DDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    0.980/0.961  1.000/1.000  1.000/1.000  0.726/0.549  1.000/1.000  1.000/1.000
F2    1.000/1.000  1.000/1.000  0.756/0.255  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.387/0.039  0.917/0.784  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  0.728/0.353  1.000/1.000  1.000/1.000  1.000/1.000
F6    0.769/0.020  0.523/0.000  0.056/0.000  0.658/0.000  0.924/0.216  1.000/1.000
F7    0.881/0.000  0.884/0.020  0.036/0.000  0.431/0.000  0.686/0.000  0.808/0.000
F8    0.990/0.922  0.999/0.980  0.013/0.000  0.509/0.000  0.386/0.000  0.706/0.000
F9    0.258/0.000  0.451/0.000  0.005/0.000  0.164/0.000  0.283/0.000  0.386/0.000
F10   0.998/0.980  0.998/0.980  0.076/0.000  0.989/0.882  0.997/0.961  1.000/1.000
F11   0.690/0.000  0.693/0.000  0.791/0.137  0.918/0.510  0.918/0.510  1.000/1.000
F12   0.284/0.000  0.328/0.000  0.132/0.000  0.709/0.000  0.946/0.608  1.000/1.000
F13   0.667/0.000  0.667/0.000  0.187/0.000  0.755/0.020  0.667/0.000  0.755/0.000
F14   0.667/0.000  0.667/0.000  0.153/0.000  0.660/0.000  0.667/0.000  0.667/0.000
F15   0.360/0.000  0.358/0.000  0.183/0.000  0.435/0.000  0.711/0.000  0.632/0.000
F16   0.657/0.000  0.641/0.000  0.127/0.000  0.308/0.000  0.667/0.000  0.667/0.000
F17   0.267/0.000  0.240/0.000  0.058/0.000  0.213/0.000  0.480/0.000  0.375/0.000
F18   0.477/0.000  0.294/0.000  0.056/0.000  0.209/0.000  0.618/0.000  0.647/0.000
F19   0.248/0.000  0.189/0.000  0.005/0.000  0.000/0.000  0.404/0.000  0.364/0.000
F20   0.128/0.000  0.250/0.000  0.003/0.000  0.000/0.000  0.250/0.000  0.250/0.000
+     13           12           18           16           10           6
=     2            2            0            1            3            4
−     5            6            2            3            7            10
Table 7. Peak ratio and success rate performance on the CEC'2013 benchmark suite under an accuracy requirement of ε = 1.0 × 10^−4.
Func  VMPM         ACDE         SDE          R2PSO        R3PSO        SCCDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    1.000/1.000  1.000/1.000  0.971/0.941  0.971/0.941  1.000/1.000  0.990/0.980
F2    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  0.887/1.000  0.789/0.392  0.876/0.590  0.887/0.628  1.000/1.000
F5    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F6    1.000/1.000  0.970/0.824  0.257/0.000  0.406/0.000  0.633/0.000  0.987/0.824
F7    0.920/0.020  0.884/0.000  0.478/0.000  0.448/0.000  0.441/0.000  0.882/0.000
F8    0.926/0.039  0.000/0.000  0.025/0.000  0.022/0.000  0.084/0.000  0.997/0.980
F9    0.470/0.000  0.404/0.000  0.094/0.000  0.035/0.000  0.043/0.000  0.451/0.000
F10   1.000/1.000  1.000/1.000  0.835/0.000  0.894/0.078  0.861/0.157  1.000/1.000
F11   0.958/0.745  0.209/0.000  0.657/0.000  0.598/0.000  0.621/0.000  0.920/0.667
F12   1.000/1.000  0.003/0.000  0.596/0.000  0.326/0.000  0.547/0.000  0.820/0.000
F13   0.708/0.000  0.168/0.000  0.637/0.000  0.575/0.000  0.624/0.000  0.667/0.000
F14   0.667/0.000  0.078/0.000  0.428/0.000  0.437/0.000  0.644/0.000  0.667/0.000
F15   0.647/0.000  0.005/0.000  0.198/0.000  0.133/0.000  0.179/0.000  0.368/0.000
F16   0.667/0.000  0.000/0.000  0.173/0.000  0.058/0.000  0.437/0.000  0.667/0.000
F17   0.469/0.000  0.000/0.000  0.091/0.000  0.006/0.000  0.123/0.000  0.260/0.000
F18   0.667/0.000  0.003/0.000  0.009/0.000  0.000/0.000  0.065/0.000  0.327/0.000
F19   0.262/0.000  0.000/0.000  0.025/0.000  0.000/0.000  0.000/0.000  0.167/0.000
F20   0.125/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.068/0.000
+     –            15           17           17           16           11
=     –            0            0            0            0            1
−     –            5            3            3            4            8
Func  SCSDE        NCDE         NSDE         LIPS         LMSEDA       AED-DDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    0.980/0.961  1.000/1.000  1.000/1.000  0.726/0.549  1.000/1.000  1.000/1.000
F2    1.000/1.000  1.000/1.000  0.756/0.255  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.387/0.039  0.917/0.784  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  0.728/0.353  1.000/1.000  1.000/1.000  1.000/1.000
F6    0.769/0.020  0.382/0.000  0.056/0.000  0.655/0.000  0.887/0.098  1.000/1.000
F7    0.881/0.000  0.882/0.020  0.036/0.000  0.431/0.000  0.623/0.000  0.808/0.000
F8    0.990/0.922  0.999/0.980  0.013/0.000  0.505/0.000  0.386/0.000  0.706/0.000
F9    0.258/0.000  0.446/0.000  0.005/0.000  0.104/0.000  0.242/0.000  0.384/0.000
F10   0.994/0.941  0.998/0.980  0.076/0.000  0.989/0.882  0.992/0.902  1.000/1.000
F11   0.686/0.000  0.693/0.000  0.774/0.098  0.918/0.510  0.853/0.471  1.000/1.000
F12   0.240/0.000  0.233/0.000  0.132/0.000  0.704/0.000  0.936/0.549  1.000/1.000
F13   0.667/0.000  0.638/0.000  0.187/0.000  0.743/0.020  0.667/0.000  0.679/0.000
F14   0.667/0.000  0.667/0.000  0.153/0.000  0.660/0.000  0.667/0.000  0.667/0.000
F15   0.360/0.000  0.348/0.000  0.183/0.000  0.373/0.000  0.699/0.000  0.630/0.000
F16   0.657/0.000  0.621/0.000  0.127/0.000  0.308/0.000  0.667/0.000  0.667/0.000
F17   0.260/0.000  0.230/0.000  0.058/0.000  0.213/0.000  0.465/0.000  0.375/0.000
F18   0.359/0.000  0.245/0.000  0.056/0.000  0.203/0.000  0.611/0.000  0.647/0.000
F19   0.126/0.000  0.157/0.000  0.005/0.000  0.000/0.000  0.399/0.000  0.304/0.000
F20   0.087/0.000  0.233/0.000  0.003/0.000  0.000/0.000  0.250/0.000  0.250/0.000
+     14           12           17           16           10           7
=     1            2            0            1            3            3
−     5            6            3            3            7            10
Table 8. Peak ratio and success rate performance on the CEC'2013 benchmark suite under an accuracy requirement of ε = 1.0 × 10^−5.
Func  VMPM         ACDE         SDE          R2PSO        R3PSO        SCCDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    1.000/1.000  1.000/1.000  0.971/0.941  0.971/0.941  1.000/1.000  0.990/0.980
F2    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  0.881/1.000  0.789/0.392  0.876/0.590  0.887/0.628  1.000/1.000
F5    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F6    1.000/1.000  0.944/0.706  0.257/0.000  0.406/0.000  0.633/0.000  0.979/0.726
F7    0.889/0.000  0.686/0.000  0.473/0.000  0.396/0.000  0.397/0.000  0.872/0.000
F8    0.919/0.020  0.000/0.000  0.025/0.000  0.022/0.000  0.084/0.000  0.995/0.941
F9    0.444/0.000  0.375/0.000  0.094/0.000  0.010/0.000  0.043/0.000  0.451/0.000
F10   1.000/1.000  1.000/1.000  0.827/0.000  0.835/0.000  0.861/0.157  1.000/1.000
F11   0.947/0.451  0.033/0.000  0.657/0.000  0.510/0.000  0.621/0.000  0.918/0.647
F12   0.998/0.980  0.000/0.000  0.596/0.000  0.238/0.000  0.547/0.000  0.446/0.000
F13   0.706/0.000  0.026/0.000  0.637/0.000  0.471/0.000  0.624/0.000  0.667/0.000
F14   0.667/0.000  0.003/0.000  0.428/0.000  0.405/0.000  0.644/0.000  0.667/0.000
F15   0.647/0.000  0.000/0.000  0.193/0.000  0.133/0.000  0.179/0.000  0.368/0.000
F16   0.667/0.000  0.000/0.000  0.173/0.000  0.058/0.000  0.437/0.000  0.667/0.000
F17   0.426/0.000  0.000/0.000  0.091/0.000  0.006/0.000  0.123/0.000  0.247/0.000
F18   0.663/0.000  0.003/0.000  0.009/0.000  0.000/0.000  0.065/0.000  0.327/0.000
F19   0.262/0.000  0.000/0.000  0.025/0.000  0.000/0.000  0.000/0.000  0.068/0.000
F20   0.003/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.000/0.000  0.015/0.000
+     –            15           17           17           16           9
=     –            0            0            0            0            2
−     –            5            3            3            4            9
Func  SCSDE        NCDE         NSDE         LIPS         LMSEDA       AED-DDE
      PR/SR        PR/SR        PR/SR        PR/SR        PR/SR        PR/SR
F1    0.980/0.961  1.000/1.000  1.000/1.000  0.720/0.549  0.912/0.824  1.000/1.000
F2    1.000/1.000  1.000/1.000  0.756/0.255  1.000/1.000  1.000/1.000  1.000/1.000
F3    1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000  1.000/1.000
F4    1.000/1.000  1.000/1.000  0.387/0.039  0.917/0.784  1.000/1.000  1.000/1.000
F5    1.000/1.000  1.000/1.000  0.728/0.353  1.000/1.000  1.000/1.000  1.000/1.000
F6    0.769/0.020  0.219/0.000  0.056/0.000  0.653/0.000  0.809/0.020  1.000/1.000
F7    0.881/0.000  0.872/0.020  0.036/0.000  0.421/0.000  0.554/0.000  0.808/0.000
F8    0.990/0.922  0.996/0.922  0.013/0.000  0.503/0.000  0.382/0.000  0.706/0.000
F9    0.258/0.000  0.442/0.000  0.005/0.000  0.104/0.000  0.203/0.000  0.384/0.000
F10   0.987/0.863  0.995/0.941  0.076/0.000  0.989/0.882  0.987/0.843  1.000/1.000
F11   0.680/0.000  0.693/0.000  0.771/0.078  0.918/0.510  0.837/0.451  1.000/1.000
F12   0.189/0.000  0.201/0.000  0.132/0.000  0.704/0.000  0.931/0.510  1.000/1.000
F13   0.667/0.000  0.638/0.000  0.187/0.000  0.743/0.020  0.667/0.000  0.676/0.000
F14   0.667/0.000  0.667/0.000  0.153/0.000  0.660/0.000  0.667/0.000  0.667/0.000
F15   0.343/0.000  0.343/0.000  0.133/0.000  0.373/0.000  0.691/0.000  0.630/0.000
F16   0.653/0.000  0.604/0.000  0.127/0.000  0.300/0.000  0.667/0.000  0.667/0.000
F17   0.254/0.000  0.223/0.000  0.056/0.000  0.203/0.000  0.456/0.000  0.375/0.000
F18   0.088/0.000  0.199/0.000  0.055/0.000  0.123/0.000  0.608/0.000  0.647/0.000
F19   0.000/0.000  0.105/0.000  0.005/0.000  0.000/0.000  0.390/0.000  0.304/0.000
F20   0.000/0.000  0.233/0.000  0.002/0.000  0.000/0.000  0.250/0.000  0.250/0.000
+     14           12           17           16           9            7
=     1            2            0            1            4            3
−     5            6            3            3            7            10
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Ding, T.; Wang, Z.; Liu, Q.; Wang, Y.; Yan, L. An Adaptive Memetic Differential Evolution with Virtual Population and Multi-Mutation Strategies for Multimodal Optimization Problems. Algorithms 2025, 18, 784. https://doi.org/10.3390/a18120784
