Article

A Novel Detection-and-Replacement-Based Order-Operator for Differential Evolution in Solving Complex Bound Constrained Optimization Problems

1
Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
2
Faculty of Engineering, Yantai Vocational College, Yantai 264670, China
3
Faculty of Science and Technology, Hirosaki University, Hirosaki 036-8560, Japan
4
School of Engineering and Design, Technical University Munich, 85748 Garching, Germany
5
School of Mechanical Engineering, Tongji University, Shanghai 200082, China
6
Institute of AI for Industries, Chinese Academy of Sciences, 168 Tianquan Road, Nanjing 211135, China
*
Authors to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1389; https://doi.org/10.3390/math13091389
Submission received: 24 March 2025 / Revised: 16 April 2025 / Accepted: 23 April 2025 / Published: 24 April 2025

Abstract

The design of differential evolution (DE) operators has long been a key topic in the research of metaheuristic algorithms. This paper systematically reviews the functional differences between mechanism improvements and operator improvements in terms of exploration and exploitation capabilities, based on the general patterns of algorithm enhancements. It proposes a theoretical hypothesis: operator improvement is more directly associated with the enhancement of an algorithm’s exploitation capability. Accordingly, this paper designs a new differential operator, DE/current-to-pbest/order, based on the classic DE/current-to-pbest/1 operator. This new operator introduces a directional judgment mechanism and a replacement strategy based on individual fitness, ensuring that the differential vector consistently points toward better individuals. This enhancement improves the effectiveness of the search direction and significantly strengthens the algorithm’s ability to delve into high-quality solution regions. To verify the effectiveness and generality of the proposed operator, it is embedded into two mainstream evolutionary algorithm frameworks, JADE and LSHADE, to construct OJADE and OLSHADE. A systematic evaluation is conducted using two authoritative benchmark sets: CEC2017 and CEC2011. The CEC2017 set focuses on assessing the optimization capability of theoretical complex functions, covering problems of various dimensions and types; the CEC2011 set, on the other hand, targets multimodal and hybrid optimization challenges in real engineering contexts, featuring higher structural complexity and generalization requirements. On both benchmark sets, OLSHADE demonstrates outstanding solution quality, convergence efficiency, and result stability, showing particular advantages in high-dimensional complex problems, thus fully validating the effectiveness of the proposed operator in enhancing exploitation capability. In addition, the operator has a lightweight structure and is easy to integrate, with good portability and scalability. It can be embedded as a general-purpose module into more DE variants and EAs in the future, providing flexible support for further performance optimization in solving complex problems.

1. Introduction

Metaheuristic algorithms (MHAs) are a research direction developed to find global optimal solutions in complex black-box problems [1]. MHAs can be categorized into three main types: swarm intelligence algorithms (SIs), genetic algorithms (GAs), and evolutionary algorithms (EAs) [2]. SIs focus on the collective behavior of populations, often incorporating certain metaphors and designing individuals to form multiple groups [3]. GAs simulate the genetic inheritance process, generating offspring individuals based on parent individuals [4]. EAs build upon GAs by strengthening their evolutionary features and introducing improvements that are not limited to metaphors [5]. Due to the performance gap that is evident when comparing traditional SIs and GAs to EAs, most modern MHAs now adopt some of the commonly used, non-metaphor-based improvement methods from EAs [6,7].
Since EAs generally do not rely on metaphors, their interpretability is of great importance [8]. The prevailing theoretical foundation of EAs lies in the explanation of their search behavior [9]. Some studies classify this behavior based on the way individuals move within the search space, while others use more abstract concepts such as exploration and exploitation to explain it [10]. According to theories derived from these studies, behaviors such as roughly evaluating large areas of the search space, jumping from one small region to another, and simultaneously searching multiple small regions are categorized as exploration behaviors [11,12]. In contrast, the behavior of continuously searching for the optimal solution within a single small region is defined as exploitation behavior. Some studies analyze the behavior of algorithms by calculating the diversity of individuals within the algorithm, concluding that higher diversity indicates a tendency toward exploration, while lower diversity indicates a preference for exploitation [13]. In any case, the search process for EAs can be summarized as identifying more valuable regions within the search space in a limited time and maximizing the quality of the solution within those regions [14].
From the perspective of algorithm design, the algorithm should achieve a balance between convergence speed (the speed of finding solutions) and solution quality [15]. However, according to the "no free lunch" theorem, these two objectives cannot be fully optimized simultaneously in NP-hard problems [16]. As a result, advanced research has focused on finding a balance between the two, aiming to maximize convergence speed while maintaining the highest probability of finding a global optimum [17,18]. Regarding an algorithm’s exploration capability, most previous studies have indicated a general principle: exploration is more effective during the early stages of convergence. Designs intended to escape local optima or forcefully increase diversity at later stages of convergence often fail to deliver desirable results in practical tests. This is primarily because such methods attempt to find better solutions in unexplored regions, but within limited time constraints, the solutions found are unlikely to surpass the current known optimum [19]. Consequently, most advanced algorithm research tends to focus on enhancing exploitation capabilities to achieve better performance [20,21]. Another line of research, based on the study of algorithmic behavior through complex networks, has revealed that the exploration capability of advanced algorithms is often determined at the very early stages of their execution [22]. The distinction between these advanced algorithms often lies in their exploitation capabilities, specifically their ability to obtain the current local optimum.
Without considering algorithm hybridization, improvements to algorithms can be categorized into operator improvements and mechanism improvements. For example, in differential evolution (DE) [23], replacing the first random individual with the best individual or increasing the number of differential individuals from two to four would fall under operator improvements [24]. On the other hand, improvements that do not alter the operator formula or the selection of individuals can be classified as mechanism improvements. However, approaches that incorporate multiple operator improvements inherently fall under hybridization, as any single operator within a multi-operator algorithm can typically be separated and function as an independent algorithm [25]. In the development of previous research, improvements to an algorithm’s operators have often corresponded to enhancements in exploitation capabilities [26]. For instance, JADE, which employs a four-individual differential operator, showed significantly faster convergence compared to the DE it is based on, but in some multi-peaked hybrid problems, the quality of the solution of JADE was inferior to that of DE [27]. On the other hand, improvements to an algorithm’s mechanisms generally correspond to enhancements in exploration capabilities [28,29]. For example, the improvement of SHADE over JADE lies in the introduction of a new mechanism for the adaptive adjustment of two parameters [30]. This resulted in higher solution quality, but at the cost of slower convergence. The improvement of LSHADE over SHADE involves the linear population size reduction mechanism, which makes it less prone to becoming stuck in local optima [31]. Since neither of these improvements altered the operators, the convergence performance on single-peaked problems remains largely unchanged. In summary, it can be concluded that the general pattern of algorithm improvement is that mechanism improvements effectively enhance an algorithm’s exploration capability, while operator improvements effectively strengthen its exploitation capability.
In current algorithm research, the most direct way to improve solution quality is to enhance the exploitation capability of the algorithm. Based on this understanding, improving operators is one of the most effective means of strengthening exploitation. Among evolutionary algorithms (EAs), one of the most widely used operators is the classical four-individual differential operator DE/current-to-pbest/1 [32]. This operator is present in nearly all state-of-the-art EAs, with its main advantage being its high flexibility in individual selection. However, it still suffers from a critical issue: regardless of how the individuals are selected, the movement of the current individual may not always be directed toward a better solution. Although introducing a certain degree of randomness helps in searching for the global optimum, in the context of continuous optimization problems, the solution space is infinitely differentiable and contains an infinite number of solutions, which cannot be fully explored under limited computational resources. Therefore, under resource constraints, prioritizing exploitation based on the topological information of the solution space indicated by the known better solutions often yields higher-quality results [33]. The improved method proposed in this study introduces a detection and replacement mechanism to ensure that the differential vector generated by the four individuals always points toward a better solution, thus avoiding setbacks caused by incorrect directions and reducing invalid fitness evaluations. This improved operator is named DE/current-to-pbest/order. Compared to the original operator, this strategy demonstrates more significant advantages when dealing with high-dimensional and more complex problems. This is because within the same local optimum region, the search space grows exponentially with the increase in problem dimensionality. In situations where obtaining the global optimum is difficult, algorithms with stronger local search capabilities are more likely to achieve better solutions.
The new contributions of this work include the following:
(1)
A novel differential evolution operator, DE/current-to-pbest/order, is proposed. By introducing a fitness-based directional judgment mechanism and detection–replacement strategy, the operator ensures that the differential vector always points toward better individuals. This fundamentally enhances the algorithm’s exploitation capability, especially in high-dimensional, multimodal, and complex constrained optimization scenarios.
(2)
We conduct a theoretical and visual analysis to reveal the intrinsic relationship between operator structure and search behavior. Comparative studies across unimodal and multimodal landscapes demonstrate that the proposed operator leads to more stable convergence, better directionality, and finer local search, confirming the hypothesis that operator modifications primarily enhance exploitation, while mechanism modifications tend to improve exploration.
(3)
The proposed operator is embedded into two mainstream differential evolution frameworks, JADE and LSHADE, forming OJADE and OLSHADE, respectively. Controlled experiments show that performance improvements are achieved solely by replacing the mutation operator, validating the modularity, generality, and transferability of the proposed strategy.
(4)
A comprehensive experimental system is established to evaluate both theoretical and practical effectiveness. Benchmark tests are conducted on CEC2017 and CEC2011 problem suites, covering multiple dimensions (30, 50, 100) and levels of complexity. All results are derived from 51 independent runs and evaluated using Wilcoxon rank-sum tests to ensure statistical significance and robustness. The proposed algorithms consistently outperform their baselines in precision, convergence efficiency, and stability.
(5)
The proposed strategy introduces a lightweight, low-overhead operator enhancement paradigm. Its structure allows easy integration into various EAs as a local search module. The idea of fitness-driven directional mutation provides a novel and practical design principle for future metaheuristic algorithm development, particularly under constrained computational resources.
In Section 2, we introduce the DE algorithm and its commonly used operators. In Section 3, we present the newly proposed operator. In Section 4, we provide a theoretical analysis of the proposed operator. In Section 5, experiments and analysis using the new operator on standard test sets and real-world problems are presented to validate the improvements. Finally, in Section 6, we provide a comprehensive summary of the conclusions drawn from this research.

2. Differential Evolution

This section primarily introduces DE and the commonly used operators within DE. DE consists of four main steps: initialization, mutation, crossover, and selection. The mutation and crossover processes together form the operator in DE, and the greedy selection mechanism determines whether the offspring generated by this operator can be retained.

2.1. Explanation of Important Variables

In this section, to enhance the readability of the formulas, we list and explain the meanings of the key variables. $V$ refers to the mutated solution; $i$ denotes the index of an individual in the population; $best$ is at the same hierarchical level as $i$, indicating the index of the individual with the best fitness in the entire population; $j$ refers to the dimension index of an individual; $X$ denotes the target of the mutation operation; $r_1$, $r_2$, and $r_3$ are random integers used to randomly select individuals from the population; $U$ refers to the solution after the crossover operation; $t$ is the evolution iteration counter; $F$ and $CR$ are the two core parameters of DE, directly related to the strength of the differential and crossover operations, respectively; $S$ represents the parameters that successfully led to an improvement in individual quality; $f$ denotes the fitness value of an individual; and $O$ is the order factor that determines the direction of the difference vector.

2.2. Initialization

The initialization step is based on the specified dimension size of the problem and involves randomly generating values for each dimension within the problem boundaries. These values form the individuals $X_{i,j}$ in the algorithm, where $i \in \{1, 2, \ldots, N_{init}\}$ and $j \in \{1, 2, \ldots, D\}$, with $N_{init}$ the initial population size and $D$ the problem dimension.
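For concreteness, the initialization step can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the authors' implementation (the experiments in this paper were run in MATLAB), and the function name `initialize_population` is our own:

```python
import numpy as np

def initialize_population(n_init, dim, lower, upper, rng=None):
    """Uniform random initialization within the box constraints.

    n_init: initial population size N_init; dim: problem dimension D;
    lower/upper: scalars or per-dimension bound arrays.
    """
    rng = np.random.default_rng() if rng is None else rng
    return lower + rng.random((n_init, dim)) * (upper - lower)
```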

2.3. Mutation

Mutation is the most diverse step in DE and is implemented by substituting the selected individuals into the mutation formula. Generally, the mutation formula in DE can be summarized as follows [24]:
  • DE/rand/1
    $$V_{i,j} = X_{r_1,j} + F \cdot (X_{r_2,j} - X_{r_3,j})$$
  • DE/rand/2
    $$V_{i,j} = X_{r_1,j} + F \cdot (X_{r_2,j} - X_{r_3,j} + X_{r_4,j} - X_{r_5,j})$$
  • DE/best/1
    $$V_{i,j} = X_{best,j} + F \cdot (X_{r_1,j} - X_{r_2,j})$$
  • DE/best/2
    $$V_{i,j} = X_{best,j} + F \cdot (X_{r_1,j} - X_{r_2,j} + X_{r_3,j} - X_{r_4,j})$$
  • DE/current-to-rand/1
    $$V_{i,j} = X_{i,j} + F \cdot (X_{r_1,j} - X_{i,j} + X_{r_2,j} - X_{r_3,j})$$
  • DE/current-to-pbest/1
    $$V_{i,j} = X_{i,j} + F \cdot (X_{best,j} - X_{i,j} + X_{r_1,j} - X_{r_2,j})$$
where $F$ is a scaling factor for the step size, typically a value between 0 and 1; $r_1$, $r_2$, and so on denote random indices within the current population; and $best$ refers to the index of the individual with the best fitness in the current population.
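The mutation strategies above are straightforward to implement. The following Python sketch (illustrative only; the NumPy-based structure and function names are our own) shows DE/rand/1 and DE/current-to-pbest/1 for a minimization problem:

```python
import numpy as np

def mutate_rand_1(pop, i, F, rng):
    """DE/rand/1: V_i = X_r1 + F * (X_r2 - X_r3), with r1, r2, r3 distinct from i."""
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i], size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def mutate_current_to_pbest_1(pop, fitness, i, F, p, rng):
    """DE/current-to-pbest/1: V_i = X_i + F * (X_pbest - X_i + X_r1 - X_r2).

    X_pbest is drawn from the top p fraction of the population (minimization).
    """
    n = len(pop)
    top = np.argsort(fitness)[:max(1, int(p * n))]
    pbest = rng.choice(top)
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    return pop[i] + F * (pop[pbest] - pop[i] + pop[r1] - pop[r2])
```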

2.4. Crossover

The purpose of crossover is to adjust the direction obtained from mutation, and its formula can be described as [34]:
$$U_{i,j} = \begin{cases} V_{i,j} & \text{if } rand < CR \\ X_{i,j} & \text{otherwise} \end{cases}$$
where $rand$ represents a random number between 0 and 1, and $CR$ is the predefined crossover rate.

2.5. Selection

The final step in DE, selection, can be described as [34]:
$$X^{t+1} = \begin{cases} U^{t} & \text{if } f(U^{t}) \le f(X^{t}) \\ X^{t} & \text{otherwise} \end{cases}$$
where $f$ represents the fitness obtained from the evaluation function, and $t$ represents the current iteration number. In minimization problems, smaller fitness values indicate better solutions; therefore, when $f(U^{t}) \le f(X^{t})$, the new individual is retained.
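A minimal sketch of the crossover and greedy selection steps, under the common convention that at least one dimension is always taken from the mutant vector (helper names are our own, not the authors'):

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Binomial crossover: take V_i,j with probability CR, else keep X_i,j.

    One randomly chosen dimension is forced from V so that U differs from X.
    """
    dim = len(x)
    mask = rng.random(dim) < CR
    mask[rng.integers(dim)] = True
    return np.where(mask, v, x)

def greedy_selection(x, u, f):
    """Keep the trial vector U if it is not worse than X (minimization)."""
    fx, fu = f(x), f(u)
    return (u, fu) if fu <= fx else (x, fx)
```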

2.6. Update of C R and F

For DE variants with adaptive updates of $F$ and $CR$, $F$ is typically drawn from a Cauchy distribution with location $F_m$ and $CR$ from a normal distribution with mean $CR_m$, both with a scale/variance of 0.1. The update method for $F_m$ and $CR_m$ in JADE can be expressed using the following formulas [30,31]:
$$F_m = 0.9 \cdot F_m + 0.1 \cdot mean_L(S_F)$$
$$mean_L(S_F) = \frac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}$$
$$CR_m = 0.9 \cdot CR_m + 0.1 \cdot mean_A(S_{CR})$$
where $S$ represents the parameters corresponding to individuals that were successfully updated during the selection process, $mean_L$ is the Lehmer mean, and $mean_A$ is the arithmetic mean.
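As an illustration, the JADE-style adaptation of $F_m$ and $CR_m$ can be sketched as follows (a simplified sketch with the learning rate fixed at 0.1, not the authors' code):

```python
import numpy as np

def jade_update(F_m, CR_m, S_F, S_CR, c=0.1):
    """JADE-style adaptation: F_m uses the Lehmer mean of successful F values,
    CR_m uses the arithmetic mean of successful CR values."""
    if len(S_F) > 0:
        S_F = np.asarray(S_F)
        F_m = (1 - c) * F_m + c * (np.sum(S_F ** 2) / np.sum(S_F))
    if len(S_CR) > 0:
        CR_m = (1 - c) * CR_m + c * np.mean(S_CR)
    return F_m, CR_m
```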
In contrast, SHADE introduces a parameter adaptation mechanism influenced by a historical archive, which has been carried forward to subsequent improved algorithms such as LSHADE and EBLSHADE. The update method for F and C R in SHADE can be expressed using the following formulas:
$$F_m = \begin{cases} mean_{WL}(S_F) & \text{if } S_F \ne \emptyset \\ F_m & \text{otherwise} \end{cases}$$
$$CR_m = \begin{cases} mean_{WA}(S_{CR}) & \text{if } S_{CR} \ne \emptyset \\ CR_m & \text{otherwise} \end{cases}$$
$$\Delta f_k = \left| f(U_k) - f(X_k) \right|$$
$$w_k = \frac{\Delta f_k}{\sum_{k=1}^{|S_{CR}|} \Delta f_k}$$
$$mean_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}^2}{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}}$$
$$mean_{WA}(S_{CR}) = \sum_{k=1}^{|S_{CR}|} w_k \cdot S_{CR,k}$$
where $k$ is an index between 1 and the memory size, and it determines the position in the memory to update. At the beginning of the search, $k$ is initialized to 1 and is incremented whenever a new element is inserted into the history; if $k$ exceeds the memory size, it is reset to 1. When all individuals fail to generate a trial vector that is better than the parent, i.e., $S_{CR} = S_F = \emptyset$, the memory is not updated.
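A corresponding sketch of the SHADE-style memory update described above (illustrative only; array shapes and names are assumptions on our part):

```python
import numpy as np

def shade_memory_update(M_F, M_CR, k, S_F, S_CR, delta_f):
    """SHADE-style history update with weighted Lehmer / arithmetic means.

    M_F, M_CR: history memories (1-D arrays); k: current memory index;
    S_F, S_CR: successful parameters of this generation;
    delta_f: |f(U) - f(X)| for each successful trial.
    Returns updated memories and the next memory index.
    """
    if len(S_F) == 0:                     # no successful trials: memory unchanged
        return M_F, M_CR, k
    S_F, S_CR = np.asarray(S_F), np.asarray(S_CR)
    w = np.asarray(delta_f) / np.sum(delta_f)
    M_F[k] = np.sum(w * S_F ** 2) / np.sum(w * S_F)   # weighted Lehmer mean
    M_CR[k] = np.sum(w * S_CR)                        # weighted arithmetic mean
    k = (k + 1) % len(M_F)                            # wrap the memory index
    return M_F, M_CR, k
```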

2.7. Boundary Detection

In this study, to maintain the simplicity of the algorithm structure and ensure the fairness of experimental comparisons, we adopt the most classic and widely used boundary constraint handling strategy in differential evolution, boundary correction via clipping, to ensure that individual variables remain within the permissible boundary range throughout each evolutionary iteration. The handling rule of this method is as follows: for any new individual $\tilde{X} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D)$ generated after mutation or crossover, if its $j$-th dimension variable $\tilde{x}_j$ exceeds the predefined lower or upper boundary $[L_j, U_j]$ of the corresponding dimension, it is directly corrected to the boundary value:
$$\tilde{x}_j = \begin{cases} L_j & \text{if } \tilde{x}_j < L_j \\ U_j & \text{if } \tilde{x}_j > U_j \\ \tilde{x}_j & \text{otherwise} \end{cases}$$
This method does not require the introduction of additional mechanisms, such as penalty terms, auxiliary objective functions, or Lagrange multipliers, and offers the following advantages: simplicity of operation—it does not require structural information of the optimization problem or additional computational resources; unified implementation—it can be directly embedded into the individual update step after mutation/crossover, decoupled from the algorithmic flow; high adaptability—it is applicable to any form of box-constrained variable ranges; and fair comparability—the vast majority of evolutionary algorithm studies based on the CEC benchmark set currently adopt this strategy, ensuring reproducibility and consistency of experimental results in horizontal comparisons. This strategy can be traced back to the early work of Storn and Price when proposing the differential evolution algorithm and has since been widely adopted in different variants such as JADE, SHADE, and LSHADE, becoming one of the default standard methods for handling boundary constraints in DE. In this study, we maintain consistency in boundary-handling mechanisms across all comparison algorithms, ensuring that the evaluation of operator performance primarily reflects their “search behavior within the feasible domain”, thereby avoiding performance bias introduced by different boundary-handling approaches and enhancing the validity and persuasiveness of the experimental conclusions.
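In code, the clipping rule amounts to a single call; the snippet below is a trivial NumPy sketch of the rule stated above, included only for completeness:

```python
import numpy as np

def clip_to_bounds(x, lower, upper):
    """Boundary correction via clipping: any out-of-range component is set to
    the violated bound, all other components are left unchanged."""
    return np.clip(x, lower, upper)
```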

3. DE/Current-to-Pbest/Order

According to Equation (6), in the first half of the differential process, the current best individual is always better than or equal to the current individual. Therefore, the first half can be considered to be consistent with the current optimal solution. In contrast, in the second half of the differential process, the difference is typically calculated between a randomly selected individual and a random individual from the external archive, without distinguishing the quality of their solutions. Hence, the second half can be understood as introducing randomness to the direction and distance of individual movement. However, this part primarily serves to slow down the convergence speed, as using discarded individuals from the iteration process as guidance makes it difficult to find solutions better than the current individual. The selection step in DE ensures that such inferior solutions are discarded. Therefore, its overall impact on optimization is largely due to slow convergence, which prevents the algorithm from quickly becoming caught in local optima. To enhance the operator’s impact on convergence while avoiding local optima caused by overly rapid convergence, this paper proposes a new operator based on DE/current-to-pbest/1, named DE/current-to-pbest/order:
$$V_{i,j} = X_{i,j} + F \cdot (X_{best,j} - X_{i,j} + O_i \cdot (X_{r_1,j} - X_{r_2,j}))$$
where $O_i$ is an order factor that determines the direction of the difference vector. $O_i$ is determined as:
$$O_i = \begin{cases} 1 & \text{if } f(X_{r_1}) \le f(X_{r_2}) \\ -1 & \text{otherwise} \end{cases}$$
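For clarity, the proposed operator can be sketched as a drop-in replacement for the DE/current-to-pbest/1 mutation. This is an illustrative Python sketch of the two formulas above (function and variable names are our own, not the authors' implementation; the external archive used in JADE/LSHADE is omitted for brevity):

```python
import numpy as np

def mutate_current_to_pbest_order(pop, fitness, i, F, p, rng):
    """DE/current-to-pbest/order: the difference X_r1 - X_r2 is flipped by O_i
    so that it always points from the worse toward the better of the two
    randomly chosen individuals (minimization)."""
    n = len(pop)
    top = np.argsort(fitness)[:max(1, int(p * n))]
    pbest = rng.choice(top)
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    O_i = 1.0 if fitness[r1] <= fitness[r2] else -1.0
    return pop[i] + F * (pop[pbest] - pop[i] + O_i * (pop[r1] - pop[r2]))
```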
Since both operators produce the same result when X r 1 is better than X r 2 , only the case where X r 2 is better than X r 1 is explained here. Figure 1 illustrates how individuals move within the search space when the problem is single peaked. In single-peaked problems, all solutions point toward the global optimum. By analyzing the relationship through trigonometric functions, a clear conclusion can be drawn: since the diagonal of the final vector obtained by the current individual is longer under the original operator compared to the new operator, the vector length produced by the original operator is more frequently greater than that of the new operator. Another mechanism comes into play here: the selection mechanism in DE ensures that the next generation of the current individual is always better than the current individual. Therefore, even if the randomness in the early search stage causes the vector of the original operator to be shorter than that of the new operator, as the algorithm converges and the distance between individuals becomes sufficiently small, the vector of the new operator will always be shorter than that of the original operator, leading to higher search precision.
For multi-peaked problems, it is necessary to discuss separately whether the individual is at a local optimum or a global optimum. When the best individual is at a local optimum and X r 1 and X r 2 individuals are located on the same peak, it can be represented as in Figure 2. In this case, the vector obtained by the original operator is longer than that of the new operator, resulting in a larger search range. However, during local search, this effect can limit the algorithm’s exploitation capability, preventing it from achieving higher-precision solutions.
When X r 1 and X r 2 are located at different local optima, as shown in Figure 3, the distance between these two individuals is usually large. In this case, the generated vector is relatively long for both operators, making it difficult for the resulting search to yield a solution better than the current one. Consequently, such solutions are unlikely to be retained during the selection process.
In the final scenario, as shown in Figure 4, when the current best individual and the current individual are located at different local optima, the vectors generated by X r 1 and X r 2 are unlikely to have a significant impact on the individual’s displacement. This is because the process of moving toward the current best necessarily involves leaving the current local optimum, making it highly likely that the resulting solution will be discarded during the selection process.
Based on the above scenarios, it can be concluded that the new operator, DE/current-to-pbest/order, performs better than the original operator in the exploitation process of the algorithm. This contributes to improving the precision of local searches in state-of-the-art DE algorithms.

Pseudocodes

The pseudocode Algorithm 1 shows the specific implementation of OJADE. Algorithm 2 shows the specific implementation of OLSHADE.
Algorithm 1: OJADE (pseudocode provided as a figure in the original article)
Algorithm 2: OLSHADE (pseudocode provided as a figure in the original article)

4. Theoretical Analysis

At present, many differential evolution-based methods still face challenges in theoretical analysis due to their inherently non-gradient, stochastic-driven, and strongly state-coupled algorithmic frameworks. Constructing a complete mathematical proof of convergence requires highly simplified conditions (such as fixed population, infinite time, continuous objective functions, etc.), and under such assumptions, it is difficult to encompass the actual operating environments of the algorithms. Therefore, in this study, we choose to construct a theoretically analytical framework that possesses scientific interpretability and empirical consistency, rather than a formally complete but narrowly applicable convergence proof, in order to more realistically characterize the behavioral mechanisms of the new operators and the root causes of performance improvement. This approach is consistent with the theoretical validation strategies adopted in many recent evolutionary algorithm design studies, such as those in [26,32,35], which have employed similar modeling and explanatory methods.

4.1. Control Perspective of the Evolution Path of Operator Structure

In differential evolution algorithms, the construction of the mutation vector directly determines the shape and direction of the subsequent search trajectory and is a core factor influencing both the global and local performance of the algorithm. The traditional DE/current-to-pbest/1 mutation strategy is constructed as follows:
$$V_i = X_i + F \cdot (X_{pbest} - X_i + X_{r_1} - X_{r_2})$$
where $X_{r_1}$ and $X_{r_2}$ are two randomly selected individuals, and the difference vector $X_{r_1} - X_{r_2}$ serves as an information perturbation term without quality assessment. Although this structure possesses a certain degree of exploratory capability, it presents the following issues. First, there is a risk of directional misguidance: when $X_{r_1}$ is a poor solution and $X_{r_2}$ is a good one, the difference vector points toward a worse region, reducing the efficiency of local search. Second, the path perturbation is large and the direction is uncontrollable: the randomness of the difference direction, especially in the later stages when the population is converging, can easily cause individuals to deviate from promising regions, resulting in oscillation or stagnation in convergence. To address these issues, this paper introduces a fitness-guided directional judgment mechanism by incorporating the following control into the difference term:
$$O_i = \begin{cases} +1 & \text{if } f(X_{r_1}) \le f(X_{r_2}) \\ -1 & \text{otherwise} \end{cases} \qquad V_i = X_i + F \cdot (X_{pbest} - X_i + O_i \cdot (X_{r_1} - X_{r_2}))$$
This structure possesses the following advantages: it is an asymmetric perturbation structure with local directional constraints, which can be regarded as a “heuristic quality-controlled offset term”, introducing objective consistency into the mutation process. Particularly in boundary contraction, late-stage convergence, and highly nonlinear problems, the order mechanism significantly reduces “local divergence” in the search trajectory, maintaining a stable progression toward promising regions. This structure can also be interpreted as a distributional direction control mechanism (direction-aware perturbation), which not only enhances the operator’s exploitation capability but also improves, to a certain extent, the smoothness of the search trajectory.
Furthermore, from the perspective of the topological structure of the solution space, the differential operator can be regarded as a local topological perturbation constructor based on individuals. In high-dimensional continuous optimization, the regions occupied by population individuals often form multiple attraction basins that are either independent or weakly connected. Whether the perturbation direction aligns with the principal gradient direction of the potential extremum region significantly affects the algorithm’s local exploitation accuracy and its ability to escape from local optima.
In DE/current-to-pbest/1, the perturbation term $X_{r_1} - X_{r_2}$, under unconstrained conditions, produces a full-space symmetric directional perturbation, that is, the direction $d \in \mathbb{R}^D$ exhibits directional isotropy. Its behavior is equivalent to adding a “massless reference” vector perturbation to the current individual in the search space, a mechanism well suited to the early stage of global exploration. However, as the algorithm enters the convergence phase, if the perturbation direction fails to reflect the population’s existing tendency toward identifying the optimal region, the following issues may arise: for individuals approaching a local optimum, deviation from optimality may repeatedly occur due to misaligned perturbation directions, resulting in convergence oscillation; and for individuals near the boundary, random perturbation directions are more likely to cause directional curl, leading to “ineffective mutation” or “degrading mutation” due to direction misalignment. In contrast, the $O_i$ control mechanism introduced in this paper eliminates the “reverse perturbation components” at the fitness level, thereby ensuring that the perturbation direction exhibits statistically significant co-directionality, i.e.,
$$\forall i, \quad \Delta V_i^{\,order} \ \text{satisfies} \ \left\langle \Delta V_i^{\,order},\, \nabla f^{-1}(X_{pbest}) \right\rangle > \left\langle \Delta V_i^{\,classic},\, \nabla f^{-1}(X_{pbest}) \right\rangle$$
That is, the perturbation directions generated by the proposed mechanism in this paper exhibit a larger inner product projection with the “potential optimal direction” compared to traditional operators, thus possessing higher directional consistency and local alignment. This enhancement of consistency between the mutation vector and the implicit optimal topological direction can be geometrically interpreted as a directional contraction effect, which compresses local search trajectories, suppresses error fluctuations, and forms more concentrated convergence paths. Therefore, from the perspectives of solution space topology and perturbation structure design principles, the proposed order mechanism is not only a heuristic-based directional filtering tool, but also a “low-dimensional efficient perturbation projector” that naturally embeds a convergence-favorable trend adjustment function at the structural level. This further explains the rational mechanism by which it improves local exploitation capability and convergence efficiency.

4.2. Modeling Perspective of Differential Vector Expectation Value

We further analyze the mathematical essence of this mechanism from the perspective of expectation modeling. In traditional DE, the perturbation term X r 1 X r 2 follows an approximately symmetric distribution centered at the origin, i.e.,
$$\mathbb{E}[X_{r_1} - X_{r_2}] \approx 0$$
That is, the perturbation vector fluctuates uniformly across the entire space, resulting in a perturbation expectation that does not produce systematic bias. Without guidance, it is more likely to interfere with the converged structures near high-quality solutions in the later stages of the algorithm. In the mechanism proposed in this paper, after introducing the $O_i$ term, we obtain:
$$\mathbb{E}[O_i \cdot (X_{r_1} - X_{r_2})] = P(O_i = 1) \cdot \mathbb{E}[X_{r_1} - X_{r_2} \mid O_i = 1] + P(O_i = -1) \cdot \mathbb{E}[-(X_{r_1} - X_{r_2}) \mid O_i = -1]$$
Since the judgment criterion for $O_i$ is $f(X_{r_1}) \le f(X_{r_2})$, this judgment exhibits a stable preference in practical optimization problems, meaning that better solutions are more likely to be selected within the population. Therefore, the expected direction approximately exhibits the following behavior:
$$\mathbb{E}[O_i \cdot (X_{r_1} - X_{r_2})] \approx \mu_{guided} \ne 0$$
That is, a systematic shift in the statistical center of the mutation perturbation occurs toward the direction of higher-quality individuals, thereby equivalently introducing a weak guiding gradient (pseudo-gradient) effect. This directional shift, on the one hand, improves the rate of local convergence (reduction of oscillation), and on the other hand, reduces the probability of deviation in erroneous directions during iteration. In our experiments, we observed that after incorporating this mechanism, the algorithm exhibits a higher probability of improvement per iteration and a faster descent rate, further validating the rationality of the expectation shift explanation from an empirical perspective.
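The expectation-shift argument can also be checked numerically with a short Monte Carlo experiment. The sketch below assumes a simple sphere objective and a Gaussian surrogate population purely for illustration; it is not part of the paper's experiments:

```python
import numpy as np

# Monte Carlo check: the raw difference X_r1 - X_r2 has (near-)zero mean,
# while O_i * (X_r1 - X_r2) acquires a systematic shift toward better regions.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)                  # sphere function (minimization)
pop = rng.normal(loc=1.0, scale=0.5, size=(10000, 2))  # surrogate "population"

x1 = pop[rng.integers(len(pop), size=5000)]
x2 = pop[rng.integers(len(pop), size=5000)]
diff = x1 - x2
O = np.where(f(x1) <= f(x2), 1.0, -1.0)[:, None]

print(np.mean(diff, axis=0))      # approximately zero: symmetric perturbation
print(np.mean(O * diff, axis=0))  # nonzero mean, shifted toward the optimum at the origin
```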
Furthermore, we can examine how the order mechanism systematically shapes local convergence behavior from the perspective of the evolutionary process of the perturbation term’s probability distribution. In the traditional DE/current-to-pbest/1 structure, $X_{r_1}$ and $X_{r_2}$ are selected with equal probability from two random individuals in the population, and their difference vector $(X_{r_1} - X_{r_2})$ statistically forms a perturbation distribution with zero mean and directional isotropy. Assuming the population approximately follows a certain high-dimensional distribution $P_{pop}(x)$, the directional distribution of the difference term, denoted $D_\Delta$, can be understood as:
$$D_\Delta = P_{X_{r_1} - X_{r_2}}(d) = P_{pop}(x) * P_{pop}(-x)$$
The core characteristic of this distribution is symmetry, meaning that it is unbiased in perturbation direction. However, in the later stages of the algorithm (population contraction phase), such symmetric perturbations lead to the following consequences: random perturbations frequently overshoot the potential extremum valley, disrupting the principal direction of the local gradient; the algorithm behavior resembles a random walk, making it difficult to effectively converge to high-quality regions; the population is prone to oscillation, misdirection, or stagnation in local optima. The O i mechanism proposed in this paper essentially introduces an adaptive asymmetric transformation factor to the difference term, functioning to convert the originally centrally symmetric perturbation distribution into a biased perturbation toward high-quality solutions. This perturbation distribution can be approximately represented as:
$$D_\Delta^{guided} = P(O_i = 1) \cdot P_\Delta^{+}(d) + P(O_i = -1) \cdot P_\Delta^{-}(d)$$
Here, $P_\Delta^{+}$ denotes the distribution of “difference directions oriented toward better individuals”, while $P_\Delta^{-}$ represents the distribution of perturbations in the opposite direction. Since the order mechanism preferentially selects the better solution as the reference, its essential effect is to increase the weight of $P_\Delta^{+}$, thereby shifting the centroid of the overall perturbation distribution from the origin to $\mu_{guided}$, resulting in a non-zero centering drift of the convergence density:
$$\left\| \mathbb{E}[\Delta_{guided}] \right\| \approx \left\| \mu_{guided} \right\| > 0 \quad \text{in the optimal basin}$$
This transformation in the perturbation distribution shifts the mutation operation from a purely stochastic process to a weakly guided one, forming a statistically approximate gradient-biasing trend. In convergence theory, such behavior offers the following advantages: enhanced convergence speed—perturbations are concentrated in directions approaching the target region, reducing the number of ineffective mutations; stabilized convergence trajectory—the probabilistic density shift lowers the variance of local perturbation magnitudes, mitigating oscillations; and prevention of premature convergence—fitness-driven perturbation directions enable the construction of navigation structures along the “fitness vector field” in multimodal functions. In our experiments, we observed that algorithms incorporating the order mechanism exhibited faster error reduction, smoother convergence trajectories, and more effective mutation updates across various CEC functions. These behaviors align closely with the statistical predictions of the proposed non-zero centering perturbation shift mechanism. Therefore, we conclude that the O i mechanism not only corrects the perturbation trend at the directional sign level, but also systematically reshapes the perturbation generation process of the DE algorithm at the distributional level. It transforms classical equally probable mutation into progressively concentrated sampling toward better solution regions, thus constructing a theoretically more search-efficient evolutionary driver.

4.3. Difference with Other DEs

Structural similarity does not equate to mechanistic consistency; the innovation of this method lies in the differences in control approach and guiding logic. The DE/current-to-pbest/order operator proposed in this paper structurally follows the basic template of differential evolution and adopts a fitness-based directional regulation mechanism. In fact, the order mechanism differs fundamentally from traditional adaptive/guided strategies in several key aspects, particularly in terms of control logic, information utilization, triggering mechanisms, and the immediacy and interpretability of mutation generation behavior. Many existing adaptive or guided mutation strategies typically exhibit the following characteristics: first, they construct statistical guiding structures through historical information (such as elite individuals, optimal trajectories, historical sampling probability distributions, etc.), including historical fitness ranking, parameter memory pools, and empirical disturbance weights; second, they “slowly guide” mutation directions through learning or weighting mechanisms, for example, by using covariance matrices, empirical distributions, or reinforcement learning methods to impose soft regulation on mutation strategies. However, such mechanisms often suffer from high structural complexity, strong parameter sensitivity, and significant response delays. In contrast, the order mechanism proposed in this paper relies solely on the fitness comparison between two current individuals, by constructing a symbolic discriminant factor O i
$$O_i = \begin{cases} +1 & \text{if } f(X_{r_1}) \le f(X_{r_2}) \\ -1 & \text{otherwise} \end{cases}$$
to perform real-time judgment and directional reversal control on the mutation direction. Its uniqueness lies in the fact that it does not use any historical or global statistical information but directly applies the local relative relationship of fitness to the differential term, thereby establishing a “discriminative directional regulation mechanism” based on the local structure of current individuals. This enables the algorithm to exclude potentially misleading directions and enhance the local rationality of perturbation directions in each iteration with minimal information and zero delay. Therefore, the order mechanism is not only entirely different from the soft regulation models in adaptive/guided strategies in terms of control strategy but also achieves a paradigm shift at the operator level from “perturbation construction” to “direction determination”.
The order mechanism is independent in terms of strategy type and should be classified as a “discriminative direction regulator for real-time control”. To more clearly position the mechanism proposed in this paper, we review and analyze the design characteristics of existing direction-guiding strategies in differential evolution, as summarized in the following classifications: experience-driven strategies—guide mutation direction using global or historical best individuals, such as LSHADE [31]; probabilistic-bias strategies—construct mutation perturbation distributions based on historical sampling experience, such as JADE [27]; learning-enhanced strategies—adopt dynamic learning strategies for selection and adaptive perturbation, such as FODE [36]; direction-elimination strategies (the method in this paper)—based on current fitness ranking results, instantly eliminate “potentially inferior directions” through sign-based judgment and control the mutation path. Among these, the first three strategies mainly guide the search of individuals by constructing an “expected optimization path”, but they lack explicit constraints on the direction of individual perturbations. In contrast, the order mechanism proposed in this paper does not attempt to guide but instead makes immediate decisions and adjustments to the direction based on quality information, ensuring that the direction of the perturbation term is always supported by fitness trend rationality. This mechanism has the following distinct characteristics: it does not rely on any historical samples or statistical data, requiring extremely low information; the control mode is hard decision making, without probabilistic modulation; the operator structure inherently supports correction of erroneous directions; and the operator’s output is interpretable, with behavioral trajectories that can be viewed as locally constructed deviations driven by fitness trends. This endows the order mechanism with strategic independence, and it should be regarded as a discriminative direction regulator, rather than an extension of the adaptive/guided strategy framework.
The mechanism design aligns with the construction philosophy of “structural minimization + control maximization”, enhancing interpretability and practical applicability. In real-world applications, many improved DE strategies introduce additional structures, complex parameters, or learning components when incorporating new mechanisms. This, to some extent, compromises the simplicity, robustness, and portability of the algorithm. In contrast, the design intent of the order mechanism is to build a structurally embedded mechanism that enhances mutation reliability without requiring additional parameters, historical dependencies, or training costs. Its design follows the principles of information constraint (relies solely on a single comparison between individuals, avoiding large-scale data usage), control interpretability (each directional choice is triggered by explicit decision rules), strategic cohesion (discrimination and differential operations are integrated, without the need for auxiliary components), and evolutionary robustness (demonstrates strong generalization performance across various problem dimensions, population sizes, and constraint conditions). From the perspective of evolutionary behavior, the order mechanism does not attempt to “optimize by force”, but rather constructs a mechanism for “automatically excluding low-value perturbations”, thereby increasing the algorithm’s average effective perturbation probability per iteration (i.e., the probability that the objective function decreases after mutation). As a result, it exhibits higher convergence efficiency and more stable path precision in numerical experiments.
In summary, although the proposed order mechanism in this paper exhibits superficial structural similarity to certain existing operators, its core mechanism is distinctive in terms of control methodology, information utilization logic, strategy response immediacy, and structural-behavioral embedding. It constitutes a novel, interpretable, lag-free, discrimination-driven directional regulation approach. We believe that the introduction of such a mechanism not only enriches the design perspective of differential evolution operators but also provides a paradigm template with enhanced structural constraints and behavioral control capabilities for future research.

5. Experimental Results and Discussion

In the experimental section, for all complex optimization problems in CEC2011 and CEC2017, the number of evaluations was set to 10,000 × D. In the corresponding data tables, “mean” and “std” represent the average value and standard deviation, respectively. W/T/L denotes the win/tie/loss comparison results under the Wilcoxon rank-sum test with a significance level of p = 0.05. After the standard deviation (std), +/=/− indicates the win/tie/loss relationship for that specific problem. Bold values indicate the best results. All experimental results are based on 51 independent runs performed in MATLAB R2024b.
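For readers who wish to reproduce the statistical protocol, the per-function comparison can be sketched as follows. The paper's experiments were run in MATLAB, so this SciPy-based snippet with placeholder data is only an illustrative equivalent; the array names are hypothetical:

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical final errors from 51 independent runs of two algorithms on one function.
errors_a = np.random.default_rng(1).lognormal(size=51)            # placeholder data
errors_b = np.random.default_rng(2).lognormal(mean=0.5, size=51)  # placeholder data

stat, p_value = ranksums(errors_a, errors_b)   # Wilcoxon rank-sum test
if p_value < 0.05:
    outcome = "+" if np.mean(errors_a) < np.mean(errors_b) else "-"
else:
    outcome = "="                              # no significant difference: tie
print(outcome, p_value)
```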

5.1. Experimental Results and Analysis on Internal Comparison

Table 1 presents the results of OJADE and JADE on the CEC2017 benchmark. When the proposed DE/current-to-pbest/order operator was incorporated into JADE, the enhanced algorithm achieved statistically significant improvements in 7, 13, and 11 problems for dimensions 30, 50, and 100, respectively, while showing inferior performance in only 2, 1, and 1 problems. According to the results shown in Table 2, OJADE achieved significant improvements in 10 out of 22 complex real-world engineering optimization problems in CEC2011 and exhibited performance degradation in only 4 problems. These results indicate that the DE/current-to-pbest/order operator enables OJADE to outperform JADE on the CEC2017 theoretical benchmark and achieve substantial progress in addressing real-world optimization challenges more generally.
In another set of operator replacement experiments, we employed LSHADE [37], the most widely used and studied classical advanced DE algorithm, to investigate whether the DE/current-to-pbest/order operator could enhance the performance of LSHADE’s classical operator. This serves as a preparatory step toward providing new technical support for future research on cutting-edge DE algorithms based on LSHADE. According to the benchmark test results shown in Table 3, OLSHADE outperformed LSHADE in 2, 6, and 12 problems for the 30-, 50-, and 100-dimensional settings, respectively, and showed performance degradation in only 1, 0, and 4 problems. In Table 2, the statistical test results indicate that OLSHADE achieved significant improvements in 7 out of 22 real-world optimization problems compared to LSHADE, without showing regression in any problem. These results demonstrate that OLSHADE not only outperforms LSHADE in theoretical benchmark testing but also shows significant superiority in solving complex real-world engineering optimization problems. Furthermore, the DE/current-to-pbest/order operator exhibits more pronounced advantages over DE/current-to-pbest in higher-dimensional problems.
Figure 5 shows the average convergence curves and box plots of OJADE and JADE on 50-dimensional test functions. From the overall convergence curves, OJADE and JADE exhibit similar convergence speeds during the initial iteration phase, indicating that under the dominance of population diversity, the two algorithms have relatively consistent early-stage exploitation capabilities. However, as the iterations progress, the curve of OJADE gradually falls below that of JADE and demonstrates a clear advantage in the later stages. This fully reflects the positive effect of the improved operator DE/current-to-pbest/order, which ensures that the generated difference vectors consistently point toward better solutions through detection and replacement operations. As a result, the algorithm can effectively exploit information from known superior solutions in the middle and late stages, significantly enhancing local search efficiency and exploitation ability. The box plots show that the solutions obtained by OJADE across multiple independent runs are more concentrated, with a narrower box, a lower median, and better extreme values, indicating that the algorithm not only achieves higher-quality solutions but also exhibits significant improvements in stability and robustness. Figure 6 presents the average convergence curves and box plots of OLSHADE and LSHADE on two test functions in the 50-dimensional problem. From the convergence curves, it can be observed that both OLSHADE and LSHADE are able to rapidly reduce the objective function value in the early stages, reflecting their similar initial exploitation capabilities. During the iteration process, however, OLSHADE’s curve gradually descends to lower objective function values, demonstrating that the detection and replacement mechanism effectively guides the search direction and enhances local exploitation capability in OLSHADE. The box plots further indicate that the solution distribution of OLSHADE is more concentrated, with a slightly narrower box and a lower median, showing that the algorithm achieved stable and superior solution quality across multiple independent runs. It is noteworthy that JADE and LSHADE represent milestone classic algorithms in different generations of DE algorithms. In recent years, the core mechanisms of these two algorithms have been widely referenced and utilized in both algorithmic and applied research. Applying the improved strategy to both classic algorithms not only inherits their advantageous characteristics but also demonstrates, to a certain extent, the generality and effectiveness of the newly proposed detection and replacement mechanism across different evolutionary algorithm frameworks. At the same time, it further enhances local search efficiency and solution stability. This provides a novel and meaningful optimization approach for solving high-dimensional complex problems under limited computational resources and has the potential to offer important insights for future related research and practical applications.

5.2. Experimental Results and Analysis on External Comparison

In the field of intelligent optimization, DE has been widely applied to various continuous and mixed optimization problems due to its simple structure, few parameters, and strong global search capability. However, traditional DE and some of its early variants often face limitations such as an imbalance between exploitation and exploration capabilities and a tendency to fall into local optima when dealing with high-dimensional complex problems. To address these issues, numerous improved algorithms have been proposed in recent years, introducing innovations across multiple dimensions including evolutionary mechanisms, operator design, population control, and information guidance, thereby forming a rich evolutionary algorithm spectrum. To comprehensively evaluate the optimization capability of the proposed OLSHADE algorithm across different problem types and dimensions, and to verify the effectiveness of its core mechanisms, this study constructs a systematic external comparative experiment. The experiments carefully select nine representative high-performance optimization algorithms as comparison objects, covering several mainstream metaheuristic optimization paradigms. These include the latest variants within the frameworks of DE and particle swarm optimization (PSO), as well as emerging bio-inspired intelligence algorithms proposed in recent years, ensuring that the comparisons are both representative and challenging.
Regarding the testing platform, the experiments adopt two internationally recognized standard optimization test suites: CEC2017 and CEC2011. Among them, CEC2017 is a new-generation test platform proposed following multiple IEEE Congress on Evolutionary Computation (IEEE CEC) optimization competitions, focusing on evaluating the performance of EAs in theoretical complex function optimization. Its core objective is to build a systematic and hierarchical testing system to cover key capability requirements in optimization algorithm design. It includes unimodal, multimodal, composition, and hybrid problems across multiple problem dimensions, comprehensively testing the convergence ability, search breadth, and stability of algorithms. The CEC2011 test suite, published by the IEEE CEC, is designed to address the challenges of real-parameter optimization problems. Its goal is to construct a set of test functions with engineering backgrounds and complex structures to reflect common difficulties in real-world optimization tasks. This test suite contains 22 functions with dimensions ranging from 1 to 216, featuring high structural complexity, coupling, and diversity, as well as challenging characteristics, such as multi-scale, multi-modality, and non-differentiability, often present in real-world optimization scenarios. Therefore, CEC2011 places more emphasis on evaluating an algorithm’s adaptability and robustness in complex optimization problems within practical engineering environments. By jointly using these two standard benchmark sets, the experiments achieve a comprehensive evaluation of algorithm performance from both theoretical and application perspectives: CEC2017 provides a hierarchical, dimension-explicit theoretical verification platform suitable for assessing algorithm behavior under various basic function structures, while CEC2011 constructs a near-real evaluation environment under high uncertainty and complex interactions, helping to reveal algorithm stability and generalization capability in multidimensional hybrid-feature scenarios. This combination design establishes a performance evaluation framework with both breadth and depth, providing a rich experimental foundation for subsequent analysis.
In terms of the selection of comparative algorithms, this study focuses on representative cutting-edge algorithms published within the past five years, reflecting the foresight and contemporaneity of the experimental design. These algorithms can be broadly categorized into three groups.
First, algorithms based on DE constitute the primary comparison objects in this study. IMODE (2022) enhances global search capability and adaptability while maintaining DE’s structural simplicity by introducing multiple DE operators and integrating an operator weighting mechanism [38]. DDEARA (2023) further expands into a distributed cooperative optimization framework, significantly improving multi-population parallel optimization efficiency by incorporating a performance-feedback-based resource allocation strategy and load-balancing scheduling [39]. FODE (2025) innovatively introduces a fractional-order differential mechanism, strengthening DE’s modeling capability and global search performance in discrete and complex coupled problems [36]. These algorithms represent key recent advancements in the DE paradigm in terms of operator fusion, resource scheduling, and mathematical modeling.
Second, improved algorithms based on PSO also play an important role in the experiments. TAPSO (2020) proposes a triple-archive learning model, enhancing convergence efficiency through particle learning mechanisms and learning path memory [40]. AGPSO (2022) combines genetic learning with adaptive adjustment mechanisms to improve the algorithm’s search capability and result stability in real-world problems such as wind farm layout optimization [41]. These two methods extend the exploration ability of traditional PSO from the perspectives of structural design and strategic guidance.
Third, the experiments also include several emerging metaheuristic algorithms developed rapidly in recent years, broadening the theoretical scope of the study. AHA (2022) simulates the multidirectional flight and foraging behavior of hummingbirds, constructing a novel algorithmic framework with three flight strategies and a foraging memory mechanism, demonstrating strong convergence performance [42]. MCWFS (2025) builds a hierarchical population structure with elite, sub-elite, and memory collaboration systems, achieving a dynamic balance between exploitation and exploration through inter-level communication [43]. SSAP (2023) improves the population control mechanism of the spherical search algorithm, effectively avoiding premature convergence and local optima by adaptively adjusting subpopulation size based on accumulated indicators [44]. These algorithms demonstrate the diversity and evolutionary potential of metaheuristic algorithms from multiple perspectives, including bionic behavior modeling, cooperative structure design, and adaptive search control.
In summary, the experimental design of this study exhibits the following three prominent advantages: comprehensiveness, as reflected in the inclusion of comparison algorithms spanning DE, PSO, and various novel bio-inspired methods, representing both mainstream and innovative directions in current metaheuristic optimization; advancement, as all comparison algorithms were proposed after 2020, representing the current international frontier in optimization research; and rigor, as evidenced by the adoption of two standard benchmark test suites, CEC2017 and CEC2011, which consider both theoretical and engineering scenarios and encompass various dimensions and problem types, thereby establishing a complete and reproducible performance verification system. This experimental framework provides a solid and sufficient empirical basis for analyzing the advantages and applicability of the OLSHADE algorithm.
To thoroughly evaluate the overall performance of the proposed OLSHADE algorithm across various optimization complexities and dimensional scenarios, this study constructs a systematic external comparative experiment based on the CEC2017 test set, conducting direct performance comparisons with nine of the most representative current optimization algorithms. The selected comparative algorithms include OJADE, IMODE, DDEARA, FODE, SSAP, TAPSO, AGPSO, AHA, and MCWFS, covering recent variants of DE and PSO as well as several emerging metaheuristic algorithms. As shown in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13, the win/tie/loss (W/T/L) statistical indicator is employed for a comprehensive comparison across three dimensional settings (30, 50, and 100 dimensions), ensuring the conclusions possess broad applicability and credibility.
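The tables that follow summarize each pairwise comparison with a W/T/L count. The exact statistical procedure is not restated in this section; a common convention in evolutionary-computation studies, assumed here purely for illustration, is a pairwise Wilcoxon rank-sum test at a significance level of 0.05 over the independent runs of each function, with a tie recorded when the difference is not significant. The sketch below follows that assumed convention; the function and variable names are illustrative and are not taken from the paper’s code.

```python
# Minimal sketch of a win/tie/loss (W/T/L) tally between a reference algorithm
# and one rival. Assumption (not restated in the paper): significance is judged
# per function with a Wilcoxon rank-sum test at alpha = 0.05 over the
# independent runs; lower error is better (minimization).
import numpy as np
from scipy.stats import ranksums

def wtl(reference_runs, rival_runs, alpha=0.05):
    """Each argument is a list with one array of per-run errors per function."""
    wins = ties = losses = 0
    for ours, theirs in zip(reference_runs, rival_runs):
        _, p = ranksums(ours, theirs)
        if p >= alpha:                              # no significant difference
            ties += 1
        elif np.median(ours) < np.median(theirs):   # reference significantly better
            wins += 1
        else:                                       # rival significantly better
            losses += 1
    return wins, ties, losses

# Toy usage with synthetic data (2 functions, 30 runs each):
rng = np.random.default_rng(0)
ref = [rng.normal(1.0, 0.1, 30), rng.normal(5.0, 0.5, 30)]
riv = [rng.normal(1.5, 0.1, 30), rng.normal(5.0, 0.5, 30)]
print(wtl(ref, riv))   # typically (1, 1, 0)
```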
As shown in Table 4 and Table 5, in the 30-dimensional test scenario involving 30 problems, OLSHADE has demonstrated strong overall capabilities by achieving first place 18 times, second place 8 times, and third place 4 times. When facing high-performing DE variants such as OJADE (23 wins, 6 ties, 1 loss), IMODE (26 wins, 0 ties, 4 losses), and DDEARA (25 wins, 3 ties, 2 losses), OLSHADE achieved clear superiority on most functions, reflecting its innovative advantages in operator fusion and adaptive mechanism design. The comparison results with FODE (6 wins, 23 ties, 1 loss) and SSAP (13 wins, 13 ties, 4 losses) also indicate that OLSHADE possesses stable and dominant optimization capabilities under complex multimodal scenarios: although the tie ratio against FODE is high, the number of wins clearly exceeds the number of losses, suggesting overall superior performance without evident weaknesses. Notably, in comparisons with the recently prominent algorithms TAPSO, AGPSO, AHA, and MCWFS, OLSHADE achieved near-total victories: 28 wins, 2 ties, and 0 losses; 30 wins, 0 ties, and 0 losses; 29 wins, 0 ties, and 1 loss; and 30 wins, 0 ties, and 0 losses, respectively, consistently outperforming under diverse function distributions and demonstrating the synergistic effect of diversity maintenance and convergence acceleration.
As shown in Table 6, Table 7 and Table 8, among the 30 problems in 50 dimensions, with 18 first-place finishes, 7 second-place finishes, and 5 third-place finishes, OLSHADE further demonstrates its global search capability and adaptive robustness under increased dimensionality. Against OJADE (25 wins, 3 ties, 2 losses), IMODE (26 wins, 1 tie, 3 losses), and DDEARA (28 wins, 0 ties, 2 losses), it continues to achieve significant victories, indicating that its mechanism design maintains excellent generalization ability in high-dimensional spaces. In comparisons with FODE (10 wins, 18 ties, 2 losses) and SSAP (19 wins, 7 ties, 4 losses), OLSHADE likewise holds the advantage: the number of wins far exceeds the number of losses, and it is outperformed on only a handful of functions, further supporting its stable performance in search depth and function adaptability. Meanwhile, OLSHADE maintains a dominant position in comparisons with TAPSO (29 wins, 1 loss), AGPSO (30 wins, 0 losses), AHA (29 wins, 1 loss), and MCWFS (30 wins, 0 losses), achieving near-uniform superiority across the test functions and once again confirming the strength of its mechanisms and convergence performance in tackling complex high-dimensional optimization problems.
As shown in Table 9, Table 10 and Table 11, in the most challenging 100-dimensional test scenario, OLSHADE demonstrated exceptional scalability and dimensional robustness by achieving first place 14 times, second place 9 times, and third place 7 times. The results against OJADE (21 wins, 2 ties, 7 losses) are relatively close, yet it maintains a clear advantage over IMODE (26 wins, 2 ties, 2 losses) and DDEARA (28 wins, 0 ties, 2 losses), indicating that its operator strategies and information-guided mechanisms are more adaptable to extremely high-dimensional spaces. When compared with FODE (17 wins, 9 ties, 4 losses) and SSAP (19 wins, 6 ties, 5 losses), the number of wins significantly exceeds the losses, further confirming that OLSHADE retains a stable global search advantage under the dual challenges of increased function complexity and dimensionality. In this set of tests, OLSHADE continues to maintain a comprehensive lead in comparisons with TAPSO (29 wins, 1 loss), AGPSO (30 wins, 0 losses), AHA (29 wins, 1 tie), and MCWFS (29 wins, 1 tie), showcasing its broad adaptability and performance across combinatorial and hybrid optimization problems.
Overall, OLSHADE exhibits leading optimization performance across all dimensionalities and function structures, with particularly outstanding results in complex combinatorial, high-dimensional multimodal, and disturbed hybrid-function scenarios. The synergistic integration of mechanisms such as multi-scale adaptive control, inter-individual information guidance, and local enhancement strategies significantly enhances global search depth and convergence accuracy, fully validating the theoretical innovativeness and empirical superiority of the proposed method.
Furthermore, to validate the algorithm’s stability and global search performance throughout the iterative process from the perspectives of convergence efficiency and result distribution, this study also presents the convergence curves and box plots for representative functions. As shown in Figure 7, the figure illustrates the variation of average error with the number of iterations for OLSHADE and the comparative algorithms on different problems in the 50-dimensional CEC2017 test tasks, along with statistical distributions of the final optimization results. The three groups of subplots correspond to three representative problems. From the convergence curves, it can be observed that OLSHADE exhibits faster descent rates and lower steady-state error levels in the vast majority of cases, indicating significant advantages in both convergence efficiency and solution accuracy.
Compared to improved DE algorithms such as OJADE and IMODE, OLSHADE maintains a leading performance during both the early rapid convergence phase and the later fine-tuning phase, further validating the effectiveness of its multi-scale adaptive search mechanism and local learning strategy. In contrast to PSO variants such as TAPSO and AGPSO, as well as emerging algorithms like AHA and MCWFS, OLSHADE demonstrates more stable convergence trends, avoiding issues such as premature stagnation or oscillatory convergence, thereby reflecting stronger convergence robustness. In the box plot analysis, OLSHADE consistently yields more concentrated result distributions with smaller variances across multiple tests, indicating higher consistency and stability in its optimization outcomes over repeated independent runs. By comparison, algorithms such as TAPSO and AGPSO show more outliers and larger variances, suggesting potential result volatility on specific problems. Although methods like MCWFS and FODE achieve relatively good results on certain functions, their overall distribution remains clearly inferior to that of OLSHADE. In summary, from convergence behavior to performance stability, OLSHADE demonstrates consistently strong and efficient advantages in the tests shown in Figure 7, further supporting its comprehensive superiority in the aforementioned W/T/L statistical comparisons and validating the effectiveness and general applicability of the mechanisms proposed in this study at the performance level.
After demonstrating significant performance advantages on the CEC2017 test tasks across the three dimensional settings, OLSHADE’s performance on the CEC2011 test set further validates its stability and effectiveness in solving complex problems in real-world engineering contexts. CEC2011 encompasses 22 continuous optimization functions of varying dimensions (ranging from 1 to 216), with complex and diverse problem structures that combine engineering relevance with highly nonlinear characteristics. It is commonly used to evaluate the generalization capability and robustness of optimization algorithms in practical environments.
As shown in Table 12 and Table 13, according to the w/t/l statistical results, OLSHADE achieved first place 10 times, second place 5 times, and third place 4 times. It achieved an absolute dominance over IMODE with 22 wins, 0 ties, and 0 losses across 22 functions, fully surpassing this improved DE algorithm. Against OJADE and TAPSO, it achieved 18 wins, 2 ties, and 2 losses, respectively, also demonstrating a significant advantage and stable dominance over these representative traditional DE and PSO methods. Against AGPSO and MCWFS, the results were 20 wins, 1 tie, 1 loss and 20 wins, 0 ties, 2 losses, respectively, maintaining a leading position and further illustrating the adaptability of its mechanism across various types of problems. Furthermore, although the performance gap between OLSHADE and DDEARA and AHA—two algorithms with strong adaptability mechanisms and balanced search designs—narrowed slightly (with results of 15 wins, 3 ties, 4 losses and 17 wins, 3 ties, 2 losses, respectively), OLSHADE still maintained a clear win-rate advantage, fully reflecting the robustness and generality of the introduced mechanism in engineering problems.
In comparisons with SSAP and FODE, OLSHADE achieved results of 2 wins, 12 ties, and 8 losses, and 4 wins, 12 ties, and 6 losses, respectively. Although OLSHADE did not secure an overall win advantage in these two comparisons, the high proportion of ties indicates that it maintained comparable or even superior performance to these two methods on the majority of problems. The underperformance on certain functions may be attributed to the strong function-structure adaptability of SSAP’s spherical search strategy and FODE’s fractional-order differential modeling, which offer inherent advantages under specific function characteristics (such as high coupling and robust constraint structures). Additionally, SSAP’s dynamic subpopulation size adjustment may be more favorable for addressing the wide dimensional span of functions in CEC2011, while FODE’s non-local search characteristics might be better suited for certain high-dimensional complex models.
However, it is important to emphasize that the performance advantages demonstrated by SSAP and FODE do not stem from irreproducible structural designs, but rather from analyzable and transferable algorithmic modules. Their core mechanisms—such as dynamic subpopulation control, fractional-order differential update strategies, and non-local jump search—possess good modularity and compatibility with other algorithms. More critically, OLSHADE is an enhancement based on the structurally concise LSHADE framework, with its core operator optimizations focused on strengthening the search mechanism and incorporating adaptive strategies. This “lightweight kernel + hierarchical optimization” design concept not only facilitates the independent validation of improvement strategies at the mechanism level but also retains high flexibility and plasticity for future algorithmic extensions. As a result, the mechanisms through which SSAP and FODE gain advantages on certain problems can theoretically be quickly integrated into the structurally open and highly pluggable OLSHADE framework in a modular manner. This characteristic enables OLSHADE not only to exhibit competitive performance at present but also to hold strong potential for continuous evolution and integration of other advanced strategies, providing a solid foundation for future research on complex optimization problems.
It is worth mentioning that both FODE and IMODE are representative methods that introduce other operator improvement mechanisms based on the LSHADE framework. They share the same main structure as the proposed OLSHADE, with the only difference lying in the differential mutation module employed. For this reason, these two algorithms are not only used as advanced comparative methods in this study, but the performance differences between them and OLSHADE can also be regarded as a form of external ablation-based structural validation. The experimental results obtained through this “same-framework, different-operator” comparative approach further verify that the proposed DE/current-to-pbest/order mechanism demonstrates superior performance over existing similar improvement strategies in complex constrained optimization problems, thereby providing cross-evidence support for its operator-level contribution independently of internal structural factors.
In summary, based on the overall results from both the CEC2011 and CEC2017 benchmark test sets, OLSHADE has demonstrated not only outstanding performance in theoretically structured function tests but also strong adaptability and stable output in complex optimization environments with engineering backgrounds. This validates the effectiveness, generality, and evolutionary potential of the proposed mechanisms across diverse optimization contexts.

5.3. Computational Complexity

First, the order mechanism only adds a single constant-time fitness comparison operation and does not introduce any additional function evaluations. The DE/current-to-pbest/order operator proposed in this paper structurally differs from the traditional DE/current-to-pbest/1 only by the addition of a fitness comparison operation, which is used to construct the direction discrimination factor O_i:
O_i = \begin{cases} +1, & \text{if } f(X_{r_1}) \le f(X_{r_2}) \\ -1, & \text{otherwise} \end{cases}
This operation is a simple logical comparison with a computational complexity of O(1), where the notation O(·) denotes an asymptotic upper bound on running time. Here, O(1) indicates that the execution time of the operation is constant, i.e., it does not grow with the problem dimensionality D or the population size N. Its runtime cost within the overall algorithm is therefore fixed and negligible, significantly lower than the cost of a single objective function evaluation. In our implementation, the fitness values of individuals are all computed and cached during the selection phase; thus, the comparison does not trigger any additional objective function evaluations, nor does it increase the number of function evaluations (FEs). Consequently, the proposed operator introduces no additional complexity at the function evaluation level or in the population evolution process, and its basic resource requirements remain equivalent to those of the traditional DE algorithm.
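To make the constant-time nature of the judgment concrete, the following is a minimal sketch of how the direction factor could be folded into a DE/current-to-pbest/order mutation. It is an illustrative reading of the operator described above, not the authors’ implementation; pbest selection, the external archive, and the adaptation of F follow standard JADE/LSHADE practice and are simplified here.

```python
# Minimal sketch (assumed reading, not the authors' code) of the
# DE/current-to-pbest/order mutation. The direction factor O_i is the single
# extra fitness comparison: it flips the difference vector so that it points
# toward the fitter of the two randomly chosen individuals. Only cached
# fitness values are compared, so no extra function evaluations are spent.
import numpy as np

def mutate_order(pop, fit, i, F, p=0.1, rng=np.random.default_rng()):
    n = len(pop)
    # x_pbest: a random individual from the best p-fraction of the population
    top = np.argsort(fit)[:max(1, int(round(p * n)))]
    pbest = pop[rng.choice(top)]
    # two distinct random indices, both different from i
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    # constant-time direction judgment using cached fitness values
    o_i = 1.0 if fit[r1] <= fit[r2] else -1.0
    # standard current-to-pbest form, with the oriented difference vector
    return pop[i] + F * (pbest - pop[i]) + F * o_i * (pop[r1] - pop[r2])
```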
Second, the runtime efficiency tests indicate that the per-run time difference is minimal, the overall overhead is negligible, and the trend remains stable across all problem sets and dimensions. To further substantiate the “lightweight” nature of the order mechanism under general problem scales and dimensional settings, we performed unified runtime measurements on all 30 functions of the complete CEC2017 benchmark set under three typical dimensional settings (D = 30, D = 50, D = 100). The test settings are as follows: all algorithms were executed on the same platform (Intel Core i7, 3.6 GHz, 32 GB RAM) without parallel acceleration; each algorithm was independently repeated 30 times on every function and dimension; every run used a fixed maximum number of function evaluations, FEs = 10,000 × D, ensuring an equal computational budget across algorithms; and we compared the runtimes of OJADE vs. JADE and OLSHADE vs. LSHADE with all algorithm parameters kept identical, so that the order mechanism was the only variable. The measured averages are as follows (baseline listed first in each pair): at D = 30, OJADE runs 1.11% longer than JADE on average (5.243 s vs. 5.301 s), and OLSHADE runs 1.28% longer than LSHADE (6.328 s vs. 6.409 s); at D = 50, the average increases are 1.27% (8.212 s vs. 8.316 s) and 1.41% (8.964 s vs. 9.090 s), respectively; at D = 100, they are 1.38% (10.573 s vs. 10.719 s) and 1.56% (12.379 s vs. 12.572 s), respectively. The standard deviations likewise indicate stable runtime behavior without significant fluctuations across functions. It is noteworthy that these differences stem solely from the single scalar comparison introduced by the order mechanism, rather than from additional iteration procedures or function evaluations; their impact on total runtime is therefore strictly bounded by the number of iterations and the population size and does not vary with the properties of the problem functions. Moreover, since the order mechanism introduces no learning modules, matrix operations, or additional structural complexity, it does not incur nonlinear time growth as the problem dimensionality increases.
Therefore, we conclude that the mechanism demonstrates highly stable cost-control capability across three typical dimensional settings and the entire problem set, and its “lightweight” structural characteristics are validated under the complete testing scenarios. Compared with many adaptive mechanisms that introduce historical sample evaluation, archive search, or parameter learning modules, the computational overhead introduced by the order mechanism constitutes a “constant-level perturbation”, which is entirely negligible in optimization contexts where the main computational load lies in function evaluations. This conclusion further substantiates the appropriateness of our use of the term “lightweight”, and it is supported by both cross-method comparison and practical applicability in engineering deployment scenarios.
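As a complement to the reported figures, the sketch below illustrates how such a per-pair overhead percentage can be obtained under the equal-budget protocol described above. The solver names are hypothetical placeholders for a baseline (e.g., JADE or LSHADE) and its order variant; they do not reproduce the paper’s implementations.

```python
# Sketch of the runtime-overhead protocol: equal budget FEs = 10000 * D,
# repeated independent runs, wall-clock timing, and the relative increase of
# the order variant over its baseline. `run_baseline` / `run_order_variant`
# are hypothetical callables of the form solver(func, D, max_fes=...).
import time
import numpy as np

def mean_runtime(solver, func, D, runs=30):
    budget = 10_000 * D
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solver(func, D, max_fes=budget)
        times.append(time.perf_counter() - t0)
    return float(np.mean(times))

def overhead_percent(run_baseline, run_order_variant, func, D):
    base = mean_runtime(run_baseline, func, D)
    order = mean_runtime(run_order_variant, func, D)
    return 100.0 * (order - base) / base   # reported values fall around 1.1-1.6%
```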
Third, the design rationale of the “lightweight” definition is as follows: minimal structure, no additional parameters, and no external component dependencies. In addition to the quantitative explanation from the perspective of runtime efficiency, the proposed order mechanism also adheres to the following lightweight design principles, further reinforcing the structural basis for the “lightweight” characterization:
  • Structural minimalism: The operator is embedded directly within the basic mutation construction of DE, without introducing new modules.
  • Zero parameter addition: No control parameters, adjustment factors, or fitness weighting coefficients are introduced, thereby reducing the parameter tuning burden.
  • No data dependency: The mechanism does not rely on historical populations, distribution information, or memory archives, leaving the algorithm’s time and space complexity unchanged.
  • High integrability: It can be seamlessly embedded into any standard DE framework, offering good adaptability and low deployment cost.
This design philosophy is markedly distinct from many improved algorithms that adopt learning-based strategies (such as adaptive weights, covariance matrix estimation, or sample selectors). As a result, the order mechanism not only demonstrates strong theoretical simplicity but also offers greater suitability for deployment in tasks that are resource-sensitive or have complex operational environments.
Table 4. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 30).
AlgorithmF1F2F3
OLSHADE0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
OJADE0.000E+00 ± 0.0000E+000.000E+00 ± 0.0000E+009.313E+03 ± 1.703E+04 +
IMODE9.104E−03 ± 1.751E−03 +3.014E+01 ± 1.801E+02 +1.920E−07 ± 5.003E−09 +
DDEARA2.648E+02 ± 3.227E+02 +3.852E+09 ± 4.634E+09 +5.708E+01 ± 5.629E+01 +
FODE0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
SSAP0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
TAPSO2.216E+03 ± 3.142E+03 +7.202E− 05 ± 1.447E− 04 +0.000E+00 ± 0.000E+00
AGPSO9.855E+04 ± 4.741E+05 +3.432E+24 ± 1.436E+25 +2.191E+04 ± 5.145E+03 +
AHA3.804E+03 ± 4.697E+03 +5.047E+05 ± 2.651E+06 +2.677E+01 ± 4.543E+01 +
MCWFS4.276E+03 ± 4.200E+03 +1.016E+06 ± 4.190E+06 +2.336E−01 ± 2.783E−01 +
AlgorithmF4F5F6
OLSHADE5.878E+01 ± 1.089E+006.355E+00 ± 1.516E+004.035E-09 ± 2.015E-08
OJADE5.136E+01 ± 2.219E+01 +2.827E+01 ± 3.993E+00 +0.000E+00 ± 0.000E+00
IMODE1.146E+01 ± 2.175E+019.258E+01 ± 2.002E+01 +9.787E+00 ± 2.669E+00 +
DDEARA8.284E+01 ± 2.487E+01 +7.213E+01 ± 1.152E+01 +0.000E+00 ± 0.000E+00
FODE5.856E+01 ± 1.392E− 14 −6.573E+00 ± 1.465E+00 ≈6.711E− 09 ± 2.739E− 08 ≈
SSAP3.585E+01 ± 3.009E+01 ≈9.676E+00 ± 1.932E+00 +8.722E−09 ± 3.271E−08 ≈
TAPSO3.673E+01 ± 2.885E+01 ≈4.711E+01 ± 1.178E+01 +1.238E− 03 ± 1.671E−03 +
AGPSO2.914E+02 ± 9.243E+01 +1.761E+02 ± 1.916E+01 +5.087E+00 ± 2.062E+00 +
AHA8.081E+01 ± 3.084E+01 +1.071E+02 ± 2.353E+01 +2.464E−01 ± 5.292E−01 +
MCWFS8.043E+01 ± 2.464E+01 +6.645E+01 ± 1.702E+01 +3.159E+00 ± 2.555E+00 +
AlgorithmF7F8F9
OLSHADE3.757E+01 ± 1.283E+007.105E+00 ± 1.236E+000.000E+00 ± 0.000E+00
OJADE5.482E+01 ± 3.106E+00 +2.517E+01 ± 3.670E+00 +0.000E+00 ± 0.000E+00
IMODE1.580E+02 ± 2.741E+01 +9.049E+01 ± 1.327E+01 +1.078E+03 ± 3.404E+02 +
DDEARA1.140E+02 ± 1.108E+01 +7.545E+01 ± 1.113E+01 +4.522E− 03 ± 1.847E− 02 +
FODE3.743E+01 ± 1.306E+007.124E+00 ± 1.477E+00 ≈0.000E+00 ± 0.000E+00
SSAP3.889E+01 ± 1.735E+00 +9.399E+00 ± 1.437E+00 +0.000E+00 ± 0.000E+00
TAPSO7.405E+01 ± 1.544E+01 +4.313E+01 ± 1.087E+01 +1.415E+01 ± 2.048E+01 +
AGPSO1.620E+02 ± 5.406E+01 +1.535E+02 ± 3.818E+01 +1.409E+01 ± 9.247E+00 +
AHA1.437E+02 ± 3.558E+01 +1.051E+02 ± 2.424E+01 +1.361E+03 ± 9.709E+02 +
MCWFS9.825E+01 ± 1.457E+01 +6.307E+01 ± 1.348E+01 +9.448E+00 ± 9.583E+00 +
AlgorithmF10F11F12
OLSHADE1.387E+03 ± 2.354E+023.128E+01 ± 2.845E+011.069E+03 ± 4.011E+02
OJADE1.866E+03 ± 2.472E+02 +3.115E+01 ± 2.429E+01 +1.220E+03 ± 3.989E+02 +
IMODE2.589E+03 ± 4.650E+02 +1.320E+02 ± 4.943E+01 +1.224E+03 ± 3.897E+02 +
DDEARA3.669E+03 ± 3.792E+02 ≈1.379E+01 ± 1.356E+01 +2.349E+04 ± 1.221E+04 +
FODE1.393E+03 ± 2.169E+02 ≈3.913E+01 ± 2.851E+01 +1.124E+03 ± 3.646E+02 ≈
SSAP1.736E+03 ± 1.959E+02 +1.805E+01 ± 2.405E+011.083E+03 ± 3.783E+02 ≈
TAPSO2.523E+03 ± 5.942E+02 +7.674E+01 ± 3.816E+01 +2.206E+04 ± 1.347E+04 +
AGPSO6.542E+03 ± 3.351E+02 +1.322E+02 ± 6.007E+01 +7.840E+06 ± 1.328E+07 +
AHA3.245E+03 ± 6.450E+02 +6.608E+01 ± 2.801E+01 +1.152E+05 ± 9.365E+04 +
MCWFS2.371E+03 ± 4.400E+02 +1.009E+02 ± 3.197E+01 +1.786E+05 ± 2.096E+05 +
Table 5. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 30).
AlgorithmF13F14F15
OLSHADE1.673E+01 ± 4.448E+002.169E+01 ± 1.321E+002.810E+00 ± 1.734E+00
OJADE2.040E+02 ± 1.229E+03 +6.188E+03 ± 1.130E+04 +1.555E+03 ± 4.026E+03 +
IMODE2.449E+02 ± 1.043E+02 +1.373E+02 ± 4.446E+01 +1.098E+02 ± 4.257E+01 +
DDEARA3.319E+02 ± 6.968E+02 ≈2.295E+01 ± 1.752E+01 +1.018E+01 ± 5.235E+00 +
FODE1.722E+01 ± 4.803E+00 ≈2.191E+01 ± 1.355E+00 ≈3.854E+00 ± 1.885E+00 +
SSAP1.685E+01 ± 6.193E+00 ≈2.191E+01 ± 3.284E+00 +3.142E+00 ± 1.655E+00 ≈
TAPSO1.492E+04 ± 1.377E+04 +2.206E+03 ± 4.377E+03 +2.740E+03 ± 5.541E+03 +
AGPSO5.504E+04 ± 2.297E+05 +3.526E+04 ± 8.101E+04 +8.491E+03 ± 8.313E+03 +
AHA1.165E+04 ± 1.027E+04 +1.243E+03 ± 1.541E+03 +2.308E+03 ± 2.436E+03 +
MCWFS2.015E+04 ± 1.241E+04 +1.967E+02 ± 4.116E+01 +6.704E+03 ± 5.253E+03 +
AlgorithmF16F17F18
OLSHADE4.246E+01 ± 4.153E+013.242E+01 ± 5.957E+002.175E+01 ± 9.353E− 01
OJADE4.1698E+02 ± 1.3342E+02 +6.8643E+01 ± 1.7497E+01 +5.3373E+03 ± 2.8839E+04 +
IMODE6.631E+02 ± 2.031E+02 +1.901E+02 ± 8.919E+01 +8.212E+01 ± 2.599E+01 +
DDEARA5.032E+02 ± 1.715E+02 +5.966E+01 ± 5.315E+01 +2.931E+03 ± 3.146E+03 +
FODE5.832E+01 ± 5.995E+01 ≈3.326E+01 ± 5.435E+00 ≈2.272E+01 ± 1.948E+00 +
SSAP2.147E+02 ± 9.313E+01 +3.702E+01 ± 6.470E+00 +2.250E+01 ± 1.588E+00 +
TAPSO6.990E+02 ± 2.892E+02 +2.067E+02 ± 1.220E+02 +5.034E+04 ± 2.632E+04 +
AGPSO1.359E+03 ± 2.055E+02 +2.777E+02 ± 1.646E+02 +6.947E+05 ± 7.499E+05 +
AHA9.237E+02 ± 2.804E+02 +3.425E+02 ± 1.865E+02 +2.115E+04 ± 1.339E+04 +
MCWFS4.530E+02 ± 1.733E+02 +1.459E+02 ± 7.120E+01 +2.428E+04 ± 1.406E+04 +
AlgorithmF19F20F21
OLSHADE5.047E+00 ± 1.516E+003.040E+01 ± 7.304E+002.070E+02 ± 1.833E+00
OJADE1.407E+03 ± 3.727E+03 +1.100E+02 ± 5.601E+01 +2.260E+02 ± 4.524E+00 +
IMODE2.208E+02 ± 7.484E+01 +1.354E+02 ± 7.375E+01 +1.012E+02 ± 1.717E+00
DDEARA6.331E+00 ± 2.280E+00 +5.987E+01 ± 6.539E+01 ≈2.781E+02 ± 1.210E+01 +
FODE5.941E+00 ± 1.750E+00 +3.501E+01 ± 5.533E+00 +2.071E+02 ± 1.441E+00 ≈
SSAP5.271E+00 ± 1.893E+00 ≈5.777E+01 ± 1.513E+01 +2.087E+02 ± 1.701E+00 +
TAPSO5.169E+03 ± 5.028E+03 +2.782E+02 ± 1.547E+02 +2.498E+02 ± 1.477E+01 +
AGPSO9.548E+03 ± 1.393E+04 +2.793E+02 ± 1.377E+02 +3.743E+02 ± 2.345E+01 +
AHA3.747E+03 ± 3.045E+03 +3.408E+02 ± 1.380E+02 +2.720E+02 ± 1.529E+01 +
MCWFS7.107E+03 ± 1.163E+04 +2.436E+02 ± 9.637E+01 +2.597E+02 ± 1.314E+01 +
AlgorithmF22F23F24
OLSHADE1.000E+02 ± 1.005E−13 3.499E+02 ± 2.3255E+00 4.259E+02 ± 1.666E+00
OJADE1.000E+02 ± 1.005E− 13 ≈3.718E+02 ± 6.519E+00 +4.392E+02 ± 4.512E+00 +
IMODE4.556E+02 ± 2.966E+01 +5.735E+02 ± 9.761E+01 +7.249E+01 ± 2.508E+01
DDEARA1.000E+02 ± 0.000E+004.243E+02 ± 1.425E+01 +5.127E+02 ± 2.336E+01 +
FODE1.000E+02 ± 1.435E− 14 ≈3.498E+02 ± 2.656E+00 ≈4.256E+02 ± 1.904E+00 +
SSAP1.000E+02 ± 1.435E−14 ≈3.452E+02 ± 3.302E+004.213E+02 ± 2.086E+00 −
TAPSO4.780E+02 ± 9.702E+02 +4.157E+02 ± 2.003E+01 +4.977E+02 ± 2.264E+01 +
AGPSO1.021E+02 ± 2.319E+00 +5.930E+02 ± 2.099E+01 +6.565E+02 ± 2.180E+01 +
AHA1.006E+02 ± 1.274E+00 +4.383E+02 ± 2.768E+01 +5.076E+02 ± 2.818E+01 +
MCWFS1.000E+02 ± 1.525E−02 +4.157E+02 ± 1.555E+01 +4.735E+02 ± 1.452E+01 +
AlgorithmF25F26F27
OLSHADE3.867E+02 ± 2.475E−029.146E+02 ± 3.527E+015.022E+02 ± 5.631E+00
OJADE3.870E+02 ± 1.493E− 01 +1.174E+03 ± 7.331E+01 +5.028E+02 ± 6.544E+00 ≈
IMODE3.914E+02 ± 1.343E+01 +2.843E+02 ± 3.673E+01 −5.466E+02 ± 1.178E+01 +
DDEARA3.864E+02 ± 1.206E+00 +1.326E+03 ± 4.614E+02 +5.063E+02 ± 7.367E+00 +
FODE3.867E+02 ± 1.683E− 02 ≈9.162E+02 ± 3.717E+01 ≈5.036E+02 ± 5.544E+00 ≈
SSAP3.868E+02 ± 2.889E−02 +8.927E+02 ± 5.145E+01 −5.036E+02 ± 6.897E+00 ≈
TAPSO3.883E+02 ± 5.658E+00 +1.442E+03 ± 6.121E+02 +5.175E+02 ± 1.158E+01 +
AGPSO4.330E+02 ± 2.126E+01 +2.945E+03 ± 9.361E+02 +6.666E+02 ± 2.153E+01 +
AHA3.963E+02 ± 1.523E+01 +7.638E+02 ± 1.081E+035.425E+02 ± 1.504E+01 +
MCWFS3.873E+02 ± 2.454E+00 +1.476E+03 ± 4.442E+02 +5.180E+02 ± 1.354E+01 +
AlgorithmF28F29F30
OLSHADE3.310E+02 ± 5.136E+014.307E+02 ± 5.468E+001.988E+03 ± 5.345E+01
OJADE3.305E+02 ± 5.091E+01 ≈4.800E+02 ± 1.917E+01 +2.262E+03 ± 1.100E+03 +
IMODE3.299E+02 ± 5.577E+01 +6.863E+02 ± 9.191E+01 +3.213E+03 ± 7.184E+02 +
DDEARA4.024E+02 ± 1.621E+01 +5.229E+02 ± 6.971E+01 +3.361E+03 ± 1.271E+03 +
FODE3.279E+02 ± 4.875E+01 ≈4.320E+02 ± 6.806E+00 ≈1.992E+03 ± 6.255E+01 ≈
SSAP3.105E+02 ± 3.235E+014.413E+02 ± 7.007E+00 +2.069E+03 ± 6.809E+01 +
TAPSO3.296E+02 ± 5.472E+01 +6.323E+02 ± 1.713E+02 +4.914E+03 ± 2.782E+03 +
AGPSO5.465E+02 ± 7.722E+01 +8.682E+02 ± 1.783E+02 +9.170E+04 ± 1.539E+05 +
AHA4.043E+02 ± 1.871E+01 +7.038E+02 ± 1.582E+02 +4.779E+03 ± 1.701E+03 +
MCWFS3.614E+02 ± 4.767E+01 +5.965E+02 ± 7.869E+01 +1.346E+05 ± 1.151E+05 +
Algorithm w / t / l
OLSHADE
OJADE23/6/1
IMODE26/0/4
DDEARA25/3/2
FODE6/23/1
SSAP13/13/4
TAPSO28/2/0
AGPSO30/0/0
AHA29/0/1
MCWFS30/0/0
Table 6. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 50).
AlgorithmF1F2F3
OLSHADE0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
OJADE0.000E+00 ± 0.000E+003.800E−10 ± 2.714E−09 ≈1.713E+04 ± 3.537E+04 +
IMODE1.582E−02 ± 1.369E−03 +7.030E+31 ± 4.278E+32 +9.578E−07 ± 4.812E−08 +
DDEARA1.383E+03 ± 1.676E+03 +9.806E+09 ± 1.384E+09 +3.547E+01 ± 1.733E+02 +
FODE0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
SSAP0.000E+00 ± 0.000E+000.000E+00 ± 0.000E+000.000E+00 ± 0.000E+00
TAPSO3.330E+03 ± 4.628E+03 +2.574E-05 ± 4.529E-05 +7.109E+01 ± 1.838E+02 +
AGPSO6.138E+06 ± 4.045E+07 +7.731E+52 ± 5.483E+53 +7.827E+04 ± 1.101E+04 +
AHA2.870E+03 ± 3.387E+03 +1.233E+12 ± 3.396E+12 +1.215E+03 ± 7.699E+02 +
MCWFS7.205E+03 ± 6.452E+03 +1.042E+13 ± 5.386E+13 +6.751E+00 ± 2.415E+00 +
AlgorithmF4F5F6
OLSHADE7.348E+01 ± 5.120E+011.119E+01 ± 2.158E+009.831E−08 ± 1.827E−07
OJADE5.575E+01 ± 4.882E+01 ≈4.717E+01 ± 8.456E+00 +0.000E+00 ± 0.000E+00
IMODE3.217E+01 ± 4.497E+012.860E+02 ± 3.159E+01 +3.473E+01 ± 5.506E+00 +
DDEARA1.102E+02 ± 4.348E+01 +2.163E+02 ± 2.068E+01 +0.000E+00 ± 0.000E+00 −
FODE8.403E+01 ± 4.355E+01 ≈1.237E+01 ± 2.333E+00 +4.190E−05 ± 2.984E−04 ≈
SSAP3.375E+01 ± 4.343E+01 −2.342E+01 ± 2.085E+00 +7.759E-04 ± 2.181E-03 +
TAPSO4.704E+01 ± 4.271E+01 −1.135E+02 ± 2.844E+01 +8.111E-03 ± 6.440E-03 +
AGPSO8.643E+02 ± 2.489E+02 +3.566E+02 ± 4.186E+01 +1.472E+01 ± 2.961E+00 +
AHA1.067E+02 ± 4.738E+01 +2.921E+02 ± 3.396E+01 +1.909E+00 ± 2.652E+00 +
MCWFS1.223E+02 ± 4.548E+01 +1.359E+02 ± 2.768E+01 +1.002E+01 ± 4.584E+00 +
AlgorithmF7F8F9
OLSHADE6.292E+01 ± 1.703E+001.143E+01 ± 2.384E+001.755E−03 ± 1.254E−02
OJADE9.502E+01 ± 7.516E+00 +5.153E+01 ± 8.832E+00 +4.152E−01 ± 5.478E−01 +
IMODE5.130E+02 ± 8.630E+01 +2.975E+02 ± 3.953E+01 +8.924E+03 ± 1.547E+03 +
DDEARA2.869E+02 ± 2.166E+01 +2.134E+02 ± 2.157E+01 +3.640E-01 ± 6.848E-01 +
FODE6.357E+01 ± 2.046E+00 +1.241E+01 ± 2.079E+00 +0.000E+00 ± 0.000E+00
SSAP6.767E+01 ± 1.990E+00 +2.327E+01 ± 2.724E+00 +1.755E-03 ± 1.254E-02 ≈
TAPSO1.547E+02 ± 2.671E+01 +1.105E+02 ± 2.395E+01 +2.887E+02 ± 2.544E+02 +
AGPSO3.677E+02 ± 8.170E+01 +3.620E+02 ± 3.855E+01 +1.488E+03 ± 1.104E+03 +
AHA4.289E+02 ± 1.290E+02 +2.912E+02 ± 4.477E+01 +7.436E+03 ± 2.388E+03 +
MCWFS2.039E+02 ± 3.239E+01 +1.313E+02 ± 2.112E+01 +9.700E+02 ± 9.093E+02 +
AlgorithmF10F11F12
OLSHADE3.139E+03 ± 3.572E+024.683E+01 ± 7.595E+002.224E+03 ± 5.545E+02
OJADE3.719E+03 ± 3.310E+02 +1.303E+02 ± 4.004E+01 +5.565E+03 ± 2.488E+03 +
IMODE5.237E+03 ± 8.165E+02 +2.316E+02 ± 6.981E+01 +1.945E+03 ± 4.818E+02
DDEARA8.124E+03 ± 6.361E+02 +4.111E+01 ± 1.227E+011.033E+05 ± 7.431E+04 +
FODE3.201E+03 ± 3.018E+02 ≈5.248E+01 ± 9.443E+00 +2.242E+03 ± 5.669E+02 ≈
SSAP3.666E+03 ± 2.736E+02 +6.270E+01 ± 9.899E+00 +2.134E+03 ± 4.877E+02 ≈
TAPSO4.489E+03 ± 9.117E+02 +1.342E+02 ± 3.511E+01 +1.066E+05 ± 1.847E+05 +
AGPSO1.231E+04 ± 4.412E+02 +7.270E+02 ± 6.007E+02 +1.251E+08 ± 3.145E+08 +
AHA5.150E+03 ± 8.534E+02 +1.186E+02 ± 2.824E+01 +1.170E+06 ± 7.455E+05 +
MCWFS4.212E+03 ± 6.525E+02 +1.777E+02 ± 4.343E+01 +1.825E+06 ± 1.263E+06 +
Table 7. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 50).
AlgorithmF13F14F15
OLSHADE5.439E+01 ± 1.913E+01 2.817E+01 ± 2.825E+00 3.609E+01 ± 7.419E+00
OJADE1.693E+02 ± 1.077E+02 +1.690E+04 ± 5.169E+04 +3.232E+02 ± 6.369E+02 +
IMODE4.710E+02 ± 1.460E+02 +2.460E+02 ± 6.591E+01 +3.103E+02 ± 8.744E+01 +
DDEARA2.644E+03 ± 2.776E+03 +1.248E+03 ± 2.356E+03 +9.130E+02 ± 1.561E+03 +
FODE5.949E+01 ± 3.023E+01 ≈3.309E+01 ± 4.569E+00 +4.290E+01 ± 9.035E+00 +
SSAP5.643E+01 ± 2.717E+01 ≈3.226E+01 ± 3.432E+00 +4.672E+01 ± 1.240E+01 +
TAPSO3.245E+03 ± 3.115E+03 +8.518E+03 ± 9.674E+03 +5.431E+03 ± 5.372E+03 +
AGPSO2.880E+06 ± 1.276E+07 +2.809E+05 ± 4.382E+05 +6.184E+03 ± 6.436E+03 +
AHA2.220E+03 ± 3.146E+03 +2.382E+04 ± 1.704E+04 +7.486E+03 ± 5.472E+03 +
MCWFS3.598E+04 ± 1.595E+04 +6.352E+02 ± 3.988E+02 +1.116E+04 ± 6.935E+03 +
AlgorithmF16F17F18
OLSHADE3.441E+02 ± 1.210E+02 2.186E+02 ± 8.707E+01 3.941E+01 ± 1.319E+01
OJADE8.473E+02 ± 1.825E+02 +6.222E+02 ± 1.415E+02 +2.279E+04 ± 1.154E+05 +
IMODE1.574E+03 ± 4.701E+02 +1.473E+03 ± 2.667E+02 +1.847E+02 ± 6.384E+01 +
DDEARA1.059E+03 ± 2.301E+02 +7.896E+02 ± 1.831E+02 +1.676E+04 ± 1.076E+04 +
FODE3.973E+02 ± 1.394E+02 +2.353E+02 ± 7.937E+01 ≈4.973E+01 ± 1.801E+01 +
SSAP4.971E+02 ± 9.959E+01 +3.401E+02 ± 7.653E+01 +5.563E+01 ± 3.222E+01 +
TAPSO1.278E+03 ± 4.048E+02 +8.406E+02 ± 3.656E+02 +5.487E+04 ± 8.691E+04 +
AGPSO2.785E+03 ± 4.031E+02 +1.505E+03 ± 2.726E+02 +3.868E+06 ± 4.309E+06 +
AHA1.576E+03 ± 3.668E+02 +1.225E+03 ± 2.172E+02 +1.401E+05 ± 8.550E+04 +
MCWFS7.773E+02 ± 1.845E+02 +6.670E+02 ± 1.544E+02 +5.694E+04 ± 2.707E+04 +
AlgorithmF19F20F21
OLSHADE2.458E+01 ± 5.230E+00 1.650E+02 ± 5.287E+01 2.128E+02 ± 2.764E+00
OJADE9.725E+02 ± 2.489E+03 +4.275E+02 ± 1.340E+02 +2.490E+02 ± 8.331E+00 +
IMODE1.575E+02 ± 1.079E+02 +9.405E+02 ± 1.953E+02 +4.955E+02 ± 4.050E+01 +
DDEARA6.782E+02 ± 1.506E+03 +5.945E+02 ± 1.559E+02 +4.208E+02 ± 4.154E+01 +
FODE4.250E+01 ± 1.369E+01 +1.980E+02 ± 6.168E+01 +2.130E+02 ± 2.231E+00 ≈
SSAP4.146E+01 ± 1.548E+01 +2.975E+02 ± 8.713E+01 +2.222E+02 ± 2.702E+00 +
TAPSO1.521E+04 ± 9.324E+03 +5.999E+02 ± 2.717E+02 +3.090E+02 ± 2.612E+01 +
AGPSO6.479E+04 ± 3.813E+05 +1.313E+03 ± 3.192E+02 +5.756E+02 ± 2.577E+01 +
AHA1.532E+04 ± 6.586E+03 +1.002E+03 ± 2.971E+02 +3.747E+02 ± 3.508E+01 +
MCWFS1.878E+04 ± 1.938E+04 +5.289E+02 ± 1.657E+02 +3.270E+02 ± 2.243E+01 +
AlgorithmF22F23F24
OLSHADE2.930E+03 ± 1.494E+03 4.288E+02 ± 3.785E+00 5.060E+02 ± 2.548E+00
OJADE3.487E+03 ± 1.681E+03 +4.755E+02 ± 1.144E+01 +5.355E+02 ± 7.240E+00 −
IMODE3.181E+03 ± 2.542E+03 ≈8.985E+02 ± 8.012E+01 +1.109E+03 ± 9.811E+01 +
DDEARA4.994E+03 ± 4.000E+03 +6.319E+02 ± 5.304E+01 +7.447E+02 ± 4.142E+01 +
FODE2.261E+03 ± 1.784E+03 ≈4.281E+02 ± 4.877E+005.057E+02 ± 3.173E+00 ≈
SSAP6.814E+02 ± 1.328E+034.293E+02 ± 7.962E+00 ≈5.016E+02 ± 6.503E+00
TAPSO5.057E+03 ± 1.460E+03 +5.802E+02 ± 3.957E+01 +6.806E+02 ± 4.209E+01 +
AGPSO1.122E+04 ± 3.842E+03 +9.861E+02 ± 4.881E+01 +1.055E+03 ± 4.607E+01 +
AHA4.562E+03 ± 3.301E+03 +6.527E+02 ± 4.779E+01 +7.735E+02 ± 5.738E+01 +
MCWFS3.798E+03 ± 1.929E+03 +5.698E+02 ± 3.489E+01 +6.266E+02 ± 2.781E+01 +
Table 8. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 50).
AlgorithmF25F26F27
OLSHADE4.815E+02 ± 3.491E+00 1.132E+03 ± 4.375E+01 5.297E+02 ± 1.407E+01
OJADE5.261E+02 ± 3.309E+01 +1.548E+03 ± 1.339E+02 +5.533E+02 ± 2.766E+01 +
IMODE5.253E+02 ± 3.955E+01 +4.199E+03 ± 2.184E+03 +1.113E+03 ± 1.038E+02 +
DDEARA5.154E+02 ± 3.037E+01 +3.042E+03 ± 4.738E+02 +5.497E+02 ± 1.791E+01 +
FODE4.823E+02 ± 4.454E+00 ≈1.120E+03 ± 4.690E+01 ≈5.245E+02 ± 1.100E+01
SSAP5.298E+02 ± 2.664E+01 +1.180E+03 ± 6.905E+01 +5.334E+02 ± 8.194E+00 +
TAPSO5.434E+02 ± 4.041E+01 +2.427E+03 ± 9.454E+02 +6.319E+02 ± 5.718E+01 +
AGPSO9.207E+02 ± 1.121E+02 +5.958E+03 ± 6.938E+02 +1.419E+03 ± 1.126E+02 +
AHA5.707E+02 ± 2.905E+01 +1.061E+03 ± 2.049E+037.927E+02 ± 8.282E+01 +
MCWFS5.056E+02 ± 2.809E+01 +2.619E+03 ± 4.509E+02 +6.206E+02 ± 5.209E+01 +
AlgorithmF28F29F30
OLSHADE4.723E+02 ± 2.202E+01 3.523E+02 ± 9.991E+00 6.630E+05 ± 7.068E+04
OJADE4.949E+02 ± 1.749E+01 +4.734E+02 ± 7.202E+01 +6.256E+05 ± 5.243E+04 +
IMODE4.866E+02 ± 2.083E+01 +1.967E+03 ± 3.468E+02 +6.001E+05 ± 3.584E+04
DDEARA4.768E+02 ± 1.914E+01 +5.082E+02 ± 1.745E+02 +7.515E+05 ± 7.976E+04 +
FODE4.656E+02 ± 1.698E+013.543E+02 ± 9.467E+00 ≈6.542E+05 ± 7.194E+04 ≈
SSAP4.933E+02 ± 2.248E+01 +3.737E+02 ± 1.128E+01 +6.047E+05 ± 2.782E+04 −
TAPSO4.867E+02 ± 2.111E+01 +9.333E+02 ± 2.386E+02 +7.546E+05 ± 8.163E+04 +
AGPSO1.207E+03 ± 2.199E+02 +2.037E+03 ± 4.919E+02 +1.236E+07 ± 6.858E+06 +
AHA5.173E+02 ± 2.983E+01 +1.188E+03 ± 2.769E+02 +8.993E+05 ± 1.410E+05 +
MCWFS4.695E+02 ± 1.786E+01 +9.624E+02 ± 2.068E+02 +1.219E+07 ± 3.024E+06 +
Algorithm w / t / l
OLSHADE
OJADE25/3/2
IMODE26/1/3
DDEARA28/0/2
FODE10/18/2
SSAP19/7/4
TAPSO29/0/1
AGPSO30/0/0
AHA29/0/1
MCWFS30/0/0
Table 9. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 100).
AlgorithmF1F2F3
OLSHADE0.000E+00 ± 0.000E+002.723E+00 ± 1.944E+014.870E-08 ± 6.659E-08
OJADE0.000E+00 ± 0.000E+002.984E+02 ± 1.926E+031.324E+05 ± 1.608E+05 +
IMODE1.191E+03 ± 6.482E+03 +1.457E+149 ± 7.035E+149 +7.044E-05 ± 1.808E-06
DDEARA2.209E+03 ± 2.384E+03 +1.000E+10 ± 0.000E+00 +4.948E+03 ± 5.381E+03 +
FODE6.162E-06 ± 3.912E-05 ≈1.155E+02 ± 3.106E+02 +7.938E-05 ± 2.144E-04 +
SSAP0.000E+00 ± 0.000E+007.089E+00 ± 2.242E+01 +8.267E-07 ± 1.730E-06 +
TAPSO6.515E+03 ± 7.931E+03 +2.587E+15 ± 1.549E+16 +3.364E+04 ± 1.120E+04 +
AGPSO9.755E+04 ± 1.848E+05 +1.611E+77 ± 1.151E+78 +1.492E+05 ± 4.428E+04 +
AHA7.588E+03 ± 9.408E+03 +2.663E+51 ± 1.897E+52 +1.777E+04 ± 5.132E+03 +
MCWFS9.079E+04 ± 3.076E+04 +9.912E+29 ± 6.301E+28 +4.310E+02 ± 2.126E+02 +
AlgorithmF4F5F6
OLSHADE8.157E+01 ± 7.187E+014.526E+01 ± 6.407E+006.037E-03 ± 4.512E-03
OJADE4.609E+01 ± 5.080E+01 −1.252E+02 ± 1.440E+01 +9.880E-05 ± 4.786E-04 −
IMODE9.713E+01 ± 9.964E+01 ≈1.020E+03 ± 6.795E+01 +6.204E+01 ± 3.806E+00 +
DDEARA2.048E+02 ± 2.560E+01 +6.211E+02 ± 2.113E+02 +0.000E+00 ± 0.000E+00
FODE6.557E+01 ± 5.810E+01 ≈5.959E+01 ± 9.130E+00 +2.908E-02 ± 1.939E-02 +
SSAP1.131E+01 ± 2.451E+017.774E+01 ± 8.908E+00 +2.211E-02 ± 1.418E-02 +
TAPSO1.798E+02 ± 4.054E+01 +3.474E+02 ± 5.239E+01 +7.098E-02 ± 2.767E-02 +
AGPSO3.081E+02 ± 5.051E+01 +3.946E+02 ± 6.658E+01 +1.438E-01 ± 3.827E-02 +
AHA2.767E+02 ± 4.583E+01 +7.752E+02 ± 5.488E+01 +8.461E+00 ± 7.571E+00 +
MCWFS2.744E+02 ± 4.414E+01 +4.220E+02 ± 6.341E+01 +3.158E+01 ± 6.911E+00 +
AlgorithmF7F8F9
OLSHADE1.401E+02 ± 4.128E+004.531E+01 ± 5.423E+002.800E-01 ± 2.954E-01
OJADE2.389E+02 ± 1.636E+01 +1.272E+02 ± 1.832E+01 +3.805E+01 ± 2.557E+01 +
IMODE2.224E+03 ± 3.693E+02 +1.091E+03 ± 8.806E+01 +3.208E+04 ± 3.575E+03 +
DDEARA8.643E+02 ± 3.998E+01 +5.860E+02 ± 2.432E+02 +1.506E+02 ± 1.233E+02 +
FODE1.715E+02 ± 8.765E+00 +5.986E+01 ± 8.102E+00 +1.240E+00 ± 1.103E+00 +
SSAP1.724E+02 ± 4.940E+00 +7.483E+01 ± 6.304E+00 +9.857E-01 ± 8.573E-01 +
TAPSO4.985E+02 ± 6.641E+01 +3.467E+02 ± 5.072E+01 +5.262E+03 ± 2.629E+03 +
AGPSO7.417E+02 ± 8.550E+01 +3.766E+02 ± 5.894E+01 +8.940E+03 ± 3.517E+03 +
AHA1.615E+03 ± 2.849E+02 +8.390E+02 ± 1.114E+02 +2.161E+04 ± 1.126E+03 +
MCWFS6.018E+02 ± 7.489E+01 +4.317E+02 ± 4.378E+01 +1.439E+04 ± 4.346E+03 +
AlgorithmF10F11F12
OLSHADE1.045E+04 ± 4.534E+024.655E+02 ± 8.087E+012.059E+04 ± 7.638E+03
OJADE1.014E+04 ± 4.738E+023.527E+03 ± 3.527E+03 +1.737E+04 ± 6.395E+03
IMODE1.215E+04 ± 1.112E+03 +1.314E+03 ± 2.440E+02 +7.326E+05 ± 5.202E+06 −
DDEARA2.137E+04 ± 1.436E+03 +3.225E+02 ± 3.045E+025.594E+05 ± 2.053E+05+
FODE1.055E+04 ± 5.166E+02 ≈5.702E+02 ± 8.489E+01 +2.145E+04 ± 8.226E+03 ≈
SSAP1.157E+04 ± 4.155E+02 +5.318E+02 ± 6.474E+01 +2.217E+04 ± 8.327E+03 ≈
TAPSO1.211E+04 ± 1.111E+03 +3.627E+02 ± 8.315E+01 −5.057E+05 ± 2.774E+05 +
AGPSO1.119E+04 ± 1.167E+03 +3.231E+04 ± 1.353E+04 +3.595E+07 ± 1.746E+07 +
AHA1.323E+04 ± 1.367E+03 +5.128E+02 ± 8.684E+01 ≈4.567E+06 ± 1.942E+06 +
MCWFS1.085E+04 ± 1.209E+03 +1.097E+03 ± 1.496E+02 +5.396E+06 ± 2.719E+06 +
Table 10. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 100).
AlgorithmF13F14F15
OLSHADE6.881E+02 ± 6.821E+022.522E+02 ± 2.930E+012.410E+02 ± 4.372E+01
OJADE3.175E+03 ± 2.664E+03 +5.615E+02 ± 1.873E+02 +3.913E+02 ± 2.140E+02 +
IMODE6.909E+02 ± 1.708E+02 +5.123E+02 ± 1.019E+02 +2.775E+02 ± 6.783E+01 +
DDEARA1.903E+03 ± 1.938E+03 +3.001E+04 ± 1.969E+04 +9.569E+02 ± 1.028E+03 +
FODE6.232E+02 ± 5.011E+022.469E+02 ± 3.161E+012.395E+02 ± 4.196E+01 ≈
SSAP8.261E+02 ± 5.787E+02 +2.522E+02 ± 2.768E+01 ≈2.365E+02 ± 4.443E+01
TAPSO3.782E+03 ± 3.573E+03 +3.393E+04 ± 2.299E+04 +2.166E+03 ± 2.978E+03 +
AGPSO2.420E+04 ± 1.928E+04 +3.769E+06 ± 3.642E+06 +5.793E+03 ± 4.997E+03 +
AHA5.157E+03 ± 4.763E+03 +2.527E+05 ± 9.424E+04 +1.759E+03 ± 2.278E+03 +
MCWFS3.290E+04 ± 1.222E+04 +1.553E+04 ± 1.106E+04 +2.122E+04 ± 9.397E+03 +
AlgorithmF16F17F18
OLSHADE1.460E+03 ± 2.722E+021.160E+03 ± 2.273E+022.147E+02 ± 4.099E+01
OJADE2.604E+03 ± 2.963E+02 +1.915E+03 ± 2.447E+02 +1.965E+03 ± 1.773E+03 +
IMODE4.390E+03 ± 9.865E+02 +4.026E+03 ± 5.945E+02 +2.739E+02 ± 7.621E+01 +
DDEARA3.829E+03 ± 9.186E+02 +2.590E+03 ± 3.929E+02 +2.130E+05 ± 9.633E+04 +
FODE1.929E+03 ± 2.114E+02 +1.302E+03 ± 2.259E+02 +2.106E+02 ± 4.426E+01 ≈
SSAP1.994E+03 ± 2.213E+02 +1.384E+03 ± 1.593E+02 +2.012E+02 ± 4.096E+01
TAPSO3.203E+03 ± 6.700E+02 +2.433E+03 ± 5.445E+02 +1.097E+05 ± 4.725E+04 +
AGPSO4.085E+03 ± 7.461E+02 +3.391E+03 ± 4.840E+02 +4.552E+06 ± 4.224E+06 +
AHA4.013E+03 ± 6.561E+02 +2.994E+03 ± 6.639E+02 +3.973E+05 ± 1.520E+05 +
MCWFS2.215E+03 ± 5.415E+02 +1.678E+03 ± 2.978E+02 +9.682E+04 ± 2.948E+04 +
AlgorithmF19F20F21
OLSHADE1.750E+02 ± 2.320E+011.660E+03 ± 2.093E+022.620E+02 ± 5.917E+00
OJADE3.650E+02 ± 4.270E+02 +1.869E+03 ± 2.054E+02 +3.462E+02 ± 1.852E+01 +
IMODE7.536E+02 ± 9.661E+02 +2.990E+03 ± 4.537E+02 +1.412E+03 ± 1.034E+02 +
DDEARA9.170E+02 ± 1.001E+03 +2.559E+03 ± 3.045E+02 +8.322E+02 ± 2.326E+02 +
FODE1.727E+02 ± 2.084E+01 ≈1.778E+03 ± 2.362E+02 +2.756E+02 ± 7.513E+00 +
SSAP1.706E+02 ± 1.989E+012.035E+03 ± 1.827E+02 +2.869E+02 ± 9.774E+00 +
TAPSO2.527E+03 ± 3.258E+03 +2.485E+03 ± 6.903E+02 +5.876E+02 ± 6.048E+01 +
AGPSO5.520E+03 ± 5.474E+03 +2.877E+03 ± 5.376E+02 +6.539E+02 ± 5.677E+01 +
AHA1.761E+03 ± 2.209E+03 +2.959E+03 ± 4.990E+02 +7.525E+02 ± 8.039E+01 +
MCWFS3.636E+04 ± 2.885E+04 +1.672E+03 ± 3.059E+02 ≈6.718E+02 ± 5.199E+01 +
AlgorithmF22F23F24
OLSHADE1.127E+04 ± 4.169E+025.722E+02 ± 1.097E+018.903E+02 ± 1.048E+01
OJADE1.140E+04 ± 5.275E+026.456E+02 ± 1.426E+01 +9.935E+02 ± 2.154E+01 +
IMODE1.370E+04 ± 1.268E+03 +2.005E+03 ± 1.524E+02 +2.602E+03 ± 3.364E+02 +
DDEARA2.318E+04 ± 1.281E+03 +7.217E+02 ± 7.175E+01 +1.135E+03 ± 1.416E+02 +
FODE1.158E+04 ± 6.455E+02 −5.780E+02 ± 1.263E+019.142E+02 ± 1.444E+01 +
SSAP1.235E+04 ± 2.315E+03 +5.925E+02 ± 1.252E+01 +8.783E+02 ± 1.345E+01 −
TAPSO1.316E+04 ± 2.225E+03 +8.049E+02 ± 4.514E+01 +1.321E+03 ± 7.214E+01 +
AGPSO1.252E+04 ± 1.109E+03 +8.129E+02 ± 4.525E+01 +1.375E+03 ± 6.757E+01 +
AHA1.721E+04 ± 1.923E+03 +9.370E+02 ± 5.812E+01 +1.685E+03 ± 9.157E+01 +
MCWFS1.241E+04 ± 2.144E+03 +1.042E+03 ± 6.883E+01 +1.408E+03 ± 1.027E+02 +
Table 11. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2017 (D = 100).
AlgorithmF25F26F27
OLSHADE7.437E+02 ± 3.441E+013.132E+03 ± 9.232E+016.468E+02 ± 1.747E+01
OJADE7.523E+02 ± 4.993E+01 ≈4.292E+03 ± 2.533E+02 +7.409E+02 ± 3.652E+01 +
IMODE7.159E+02 ± 4.740E+01 −1.295E+04 ± 2.007E+03 +1.610E+03 ± 2.545E+02 +
DDEARA7.588E+02 ± 4.696E+01 +5.858E+03 ± 1.353E+03 +7.210E+02 ± 3.261E+01 +
FODE7.557E+02 ± 4.473E+01 +3.409E+03 ± 1.003E+02 +6.127E+02 ± 1.666E+01 −
SSAP7.592E+02 ± 4.554E+01 ≈3.235E+03 ± 1.213E+02 +6.390E+02 ± 1.687E+01
TAPSO7.566E+02 ± 7.008E+01 +7.736E+03 ± 1.794E+03 +7.685E+02 ± 4.997E+01 +
AGPSO8.454E+02 ± 5.749E+01 +8.555E+03 ± 5.981E+02 +8.547E+02 ± 5.908E+01 +
AHA8.088E+02 ± 6.315E+01 +1.354E+04 ± 8.428E+03 +1.106E+03 ± 1.257E+02 +
MCWFS7.644E+02 ± 6.390E+01 +7.710E+03 ± 1.332E+03 +7.840E+02 ± 7.186E+01 +
AlgorithmF28F29F30
OLSHADE5.260E+02 ± 2.849E+011.132E+03 ± 1.246E+022.520E+03 ± 1.839E+02
OJADE5.243E+02 ± 4.102E+01 ≈2.134E+03 ± 2.733E+02 +3.857E+03 ± 1.742E+03 +
IMODE4.962E+02 ± 8.375E+014.714E+03 ± 6.889E+02 +9.596E+03 ± 6.335E+03 +
DDEARA5.666E+02 ± 1.373E+01 +1.747E+03 ± 5.494E+02 +6.929E+03 ± 3.117E+03 +
FODE5.355E+02 ± 2.451E+01 +1.238E+03 ± 1.756E+02 +2.401E+03 ± 1.402E+02 −
SSAP5.362E+02 ± 2.764E+01 +1.220E+03 ± 1.630E+02 +2.545E+03 ± 1.728E+02 ≈
TAPSO5.449E+02 ± 3.205E+01 +2.919E+03 ± 5.505E+02 +6.059E+03 ± 3.527E+03 +
AGPSO6.658E+02 ± 4.056E+01 +3.396E+03 ± 5.700E+02 +1.318E+05 ± 9.161E+04 +
AHA6.167E+02 ± 3.445E+01 +3.983E+03 ± 6.510E+02 +8.333E+03 ± 3.495E+03 +
MCWFS5.889E+02 ± 3.752E+01 +3.126E+03 ± 3.699E+02 +1.385E+06 ± 6.549E+05 +
Algorithm w / t / l
OLSHADE
OJADE21/2/7
IMODE26/2/2
DDEARA28/0/2
FODE17/9/4
SSAP19/6/5
TAPSO29/0/1
AGPSO30/0/0
AHA29/1/0
MCWFS29/1/0
Table 12. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2011.
AlgorithmG1G2G3
OLSHADE−2.586E-01 ± 1.580E+00−2.786E+01 ± 3.979E−01−1.151E-05 ± 2.837E−19
OJADE3.689E+00 ± 3.003E+00 +−2.595E+01 ± 5.211E−01 +1.151E−05 ± 2.023E−19 −
IMODE2.479E+01 ± 3.729E+00 +−2.244E+00 ± 4.192E−01 +1.151E−05 ± 1.790E−11 +
DDEARA9.506E+00 ± 3.438E+00 +−2.679E+01 ± 8.445E−01 +1.151E−05 ± 3.973E−19 +
FODE2.023E−02 ± 1.311E −01 ≈−2.768E+01 ± 4.393E−01 +1.162E−05 ± 3.041E−19 ≈
SSAP1.046E−04 ± 6.235E−04 −−2.790E+01 ± 4.289E−01 ≈1.151E−05 ± 2.632E−19 ≈
TAPSO1.224E+01 ± 7.020E+00 +−2.692E+01 ± 6.991E−01 +1.151E−05 ± 5.844E−19 +
AGPSO1.234E+01 ± 5.582E+00 +−1.645E+01 ± 4.390E+00 +1.151E-05 ± 7.394E-18 +
AHA6.219E+00 ± 6.259E+00 ≈−2.716E+01 ± 6.316E−01 +1.151E−05 ± 5.530E−19 +
MCWFS1.750E+01 ± 4.619E+00 +−1.898E+01 ± 3.842E+00 +1.151E−05 ± 8.746E−14 +
AlgorithmG4G5G6
OLSHADE1.867E+01 ± 3.094E+00−3.684E+01 ± 2.488E−02−2.917E+01 ± 1.820E−04
OJADE1.566E+01 ± 2.730E+00 −−3.669E+01 ± 2.608E−01 +−2.916E+01 ± 2.187E−03 +
IMODE2.165E+01 ± 8.897E−01 +−1.197E+01 ± 1.428E+00 +1.175E+01 ± 2.834E+01 +
DDEARA1.468E+01 ± 5.765E−01 −−3.526E+01 ± 2.937E+00 +−2.897E+01 ± 9.498E−01 −
FODE1.751E+01 ± 3.269E+00 −−3.691E+01 ± 1.212E−01 +−2.924E+01 ± 1.633E−04 +
SSAP1.860E+01 ± 3.142E+00 ≈−3.683E+01 ± 1.228E−01 ≈−2.917E+01 ± 1.032E−04 −
TAPSO1.523E+01 ± 2.315E+00 −−3.434E+01 ± 1.445E+00 +−2.790E+01 ± 1.514E+00 ≈
AGPSO1.550E+01 ± 2.079E+00 −−2.517E+01 ± 2.629E+00 +−2.074E+01 ± 3.021E+00 +
AHA1.582E+01 ± 1.844E+00 −−3.467E+01 ± 1.551E+00 +−2.664E+01 ± 2.860E+00 +
MCWFS1.685E+01 ± 2.994E+00 −−3.385E+01 ± 1.343E+00 +−2.444E+01 ± 3.052E+00 +
AlgorithmG7G8G9
OLSHADE1.115E+00 ± 8.613E−022.200E+02 ± 0.000E+008.559E+02 ± 2.093E+02
OJADE1.154E+00 ± 8.018E−02 +2.200E+02 ± 0.000E+00 ≈1.458E+03 ± 4.299E+02 +
IMODE2.471E+00 ± 2.533E−01 +2.184E+03 ± 9.807E+02 +2.876E+06 ± 1.039E+05 +
DDEARA1.188E+00 ± 8.040E−02 +2.200E+02 ± 0.000E+00 ≈1.680E+03 ± 5.972E+02 +
FODE1.113E+00 ± 1.005E−01 ≈2.200E+02 ± 0.000E+00 ≈3.418E+02 ± 9.612E+01 −
SSAP1.047E+00 ± 1.130E−01 −2.200E+02 ± 0.000E+00 ≈6.199E+02 ± 1.275E+02 −
TAPSO9.308E−01 ± 1.727E-01 −2.200E+02 ± 0.000E+00 ≈4.889E+03 ± 2.373E+03 +
AGPSO1.705E+00 ± 1.194E-01 +2.200E+02 ± 0.000E+00 ≈1.331E+06 ± 7.222E+04 +
AHA8.018E-01 ± 1.568E-01 −2.200E+02 ± 0.000E+00 ≈1.981E+03 ± 6.059E+02 +
MCWFS6.508E−01 ± 1.107E−01 −2.269E+02 ± 9.112E+00 +1.608E+05 ± 6.974E+04 +
AlgorithmG10G11G12
OLSHADE−2.157E+01 ± 9.418E−024.798E+04 ± 3.895E+021.739E+07 ± 3.887E+04
OJADE−2.153E+01 ± 2.087E−01 ≈5.240E+04 ± 5.093E+02 +1.741E+07 ± 5.500E+04 +
IMODE−1.008E+01 ± 4.597E+00 +2.649E+08 ± 2.422E+07 +5.584E+07 ± 9.539E+05 +
DDEARA−2.159E+01 ± 9.834E−02 ≈5.240E+04 ± 4.672E+02 +1.733E+07 ± 2.460E+04 −
FODE−2.165E+01 ± 8.750E−02 ≈4.821E+04 ± 4.720E+02 ≈1.743E+07 ± 2.050E+03 −
SSAP−2.157E+01 ± 9.622E−02 ≈4.797E+04 ± 3.817E+02 ≈1.734E+07 ± 2.247E+04 −
TAPSO−2.079E+01 ± 1.338E+00 +5.179E+04 ± 5.418E+02 +1.743E+07 ± 6.667E+04 +
AGPSO−1.839E+01 ± 1.952E+00 +3.949E+05 ± 2.065E+05 +3.577E+07 ± 7.933E+05 +
AHA−2.117E+01 ± 5.762E−01 +5.205E+04 ± 5.620E+02 +1.742E+07 ± 5.171E+04 +
MCWFS−1.904E+01 ± 2.124E+00 +5.196E+04 ± 4.854E+02 +2.264E+07 ± 5.421E+05 +
Table 13. Experiment results of OLSHADE and comparative algorithms on IEEE CEC2011.
AlgorithmG13G14G15
OLSHADE1.544E+04 ± 7.581E−011.810E+04 ± 3.670E+013.274E+04 ± 4.071E−01
OJADE1.544E+04 ± 1.167E+00 +1.825E+04 ± 3.073E+02 +3.298E+04 ± 7.189E+01 +
IMODE3.071E+04 ± 2.486E+04 +1.828E+04 ± 1.156E+02 +2.139E+05 ± 2.661E+05 +
DDEARA1.544E+04 ± 7.115E−02 +1.810E+04 ± 2.720E+01 ≈3.282E+04 ± 5.079E+01 +
FODE1.547E+04 ± 4.407E+00 ≈1.812E+04 ± 3.121E+01 ≈3.275E+04 ± 1.768E−01 ≈
SSAP1.544E+04 ± 1.151E−01 −1.810E+04 ± 4.116E+01 ≈3.274E+04 ± 3.231E−01 ≈
TAPSO1.550E+04 ± 3.307E+01 +1.921E+04 ± 2.398E+02 +3.303E+04 ± 7.732E+01 +
AGPSO1.547E+04 ± 1.610E+01 +1.921E+04 ± 1.747E+02 +3.312E+04 ± 9.556E+01 +
AHA1.545E+04 ± 7.783E+00 +1.834E+04 ± 9.756E+01 +3.293E+04 ± 4.605E+01 +
MCWFS1.547E+04 ± 1.796E+01 +1.904E+04 ± 1.596E+02 +3.307E+04 ± 8.090E+01 +
AlgorithmG16G17G18
OLSHADE1.233E+05 ± 4.165E+021.728E+06 ± 8.206E+039.247E+05 ± 5.504E+02
OJADE1.332E+05 ± 4.580E+03 +1.902E+06 ± 2.552E+04 +9.375E+05 ± 2.170E+03 +
IMODE4.623E+07 ± 1.419E+07 +1.165E+10 ± 1.959E+09 +1.225E+08 ± 9.029E+06 +
DDEARA1.266E+05 ± 1.112E+03 +1.881E+06 ± 9.974E+03 +9.346E+05 ± 2.121E+03 +
FODE1.237E+05 ± 4.419E+02 ≈1.725E+06 ± 7.018E+03 −9.251E+05 ± 4.569E+02 ≈
SSAP1.235E+05 ± 4.184E+02 +1.729E+06 ± 7.764E+03 ≈9.248E+05 ± 5.047E+02 ≈
TAPSO1.395E+05 ± 4.020E+03 +1.972E+06 ± 7.381E+04 +9.474E+05 ± 3.942E+03 +
AGPSO1.383E+05 ± 2.295E+03 +2.028E+06 ± 8.295E+04 +1.395E+07 ± 3.240E+06 +
AHA1.332E+05 ± 2.407E+03 +2.087E+06 ± 2.930E+05 +9.426E+05 ± 2.547E+03 +
MCWFS1.345E+05 ± 2.505E+03 +1.924E+06 ± 1.146E+04 +9.443E+05 ± 2.976E+03 +
AlgorithmG19G20G21
OLSHADE9.329E+05 ± 6.651E+029.248E+05 ± 5.890E+021.493E+01 ± 9.436E−01
OJADE1.020E+06 ± 1.739E+05 +9.381E+05 ± 2.604E+03 +1.591E+01 ± 1.804E+00 +
IMODE1.236E+08 ± 8.986E+06 +1.225E+08 ± 9.029E+06 +7.682E+01 ± 1.018E+01 +
DDEARA9.427E+05 ± 6.672E+03 +9.344E+05 ± 1.900E+03 +1.429E+01 ± 1.053E+00 −
FODE9.328E+05 ± 6.140E+02 +9.244E+05 ± 5.256E+02 −1.537E+01 ± 8.285E−01 ≈
SSAP9.331E+05 ± 5.335E+02 +9.245E+05 ± 3.433E+02 −1.439E+01 ± 1.413E+00 −
TAPSO1.082E+06 ± 7.300E+04 +9.472E+05 ± 3.570E+03 +1.625E+01 ± 2.422E+00 +
AGPSO1.570E+07 ± 3.952E+06 +1.441E+07 ± 3.586E+06 +1.919E+01 ± 3.866E+00 +
AHA1.058E+06 ± 8.351E+04 +9.416E+05 ± 2.363E+03 +1.469E+01 ± 3.142E+00 ≈
MCWFS1.013E+06 ± 4.203E+04 +9.445E+05 ± 2.527E+03 +1.593E+01 ± 2.773E+00 +
AlgorithmG22 w / t / l
OLSHADE1.756E+01 ± 2.058E+00
OJADE1.858E+01 ± 1.974E+00 +18/2/2
IMODE6.927E+01 ± 9.479E+00 +22/0/0
DDEARA1.799E+01 ± 3.074E+00 +15/3/4
FODE1.487E+01 ± 2.313E+00 −4/12/6
SSAP1.782E+01 ± 2.049E+00 ≈2/12/8
TAPSO2.165E+01 ± 2.740E+00 +18/2/2
AGPSO2.380E+01 ± 2.991E+00 +20/1/1
AHA1.969E+01 ± 4.249E+00 +17/3/2
MCWFS2.016E+01 ± 2.719E+00 +20/0/2
Figure 7. Convergence and box plot with OJADE and JADE on 50-dimensional problems in CEC2017.

6. Conclusions

This paper addresses the issues of directional degeneration and insufficient local search capability in DE algorithms when applied to complex optimization problems. A novel differential operator, DE/current-to-pbest/order, based on a fitness-based judgment mechanism, is proposed. By introducing a “directional judgment–differential replacement” mechanism, this operator ensures that each individual search is oriented toward a better solution. As a result, it effectively enhances the algorithm’s exploitation ability and convergence efficiency in complex problems without increasing the number of parameters. Theoretical analysis and visualization experiments reveal the operator’s capability to perform fine-grained searches while maintaining diversity. Comparative studies with classical operators further verify the proposed operator’s broad applicability across unimodal, multimodal, and hybrid problems.
In terms of experimental design, the proposed operator is embedded into the current mainstream DE frameworks JADE and LSHADE, resulting in two new algorithm variants: OJADE and OLSHADE. Their performance is systematically evaluated using two major international benchmark test suites, CEC2017 and CEC2011. The results demonstrate that, across a wide range of test tasks involving multiple dimensions (30, 50, 100), various problem types (unimodal, multimodal, composition, and hybrid), and complex engineering structures, the proposed method consistently outperforms the baseline algorithms. In particular, it shows significant advantages in solution accuracy, convergence speed, and stability. Statistical tests and convergence analysis further confirm the generality and independent effectiveness of the proposed operator.
In the comparative experiments, OLSHADE is evaluated against nine representative optimization algorithms proposed in recent years, covering multiple optimization paradigms, including DE variants (IMODE, DDEARA, FODE), PSO variants (AGPSO, TAPSO), and emerging metaheuristic algorithms (AHA, MCWFS, SSAP), demonstrating the comprehensiveness and rigor of the comparison scope in this study. Most of these algorithms were proposed after 2020, representing the mainstream directions in current metaheuristic optimization research. The experimental results show that OLSHADE exhibits dominant performance across all dimensional tasks in CEC2017, with particularly notable advantages in high-dimensional problems (D = 50 and D = 100). In the context of real-parameter optimization in CEC2011, OLSHADE also demonstrates excellent generalization ability and robustness.
It is particularly noteworthy that, although OLSHADE does not exhibit a clear advantage over FODE and SSAP in certain functions, the performance gap mainly stems from the adaptive advantages of the structural enhancement mechanisms employed by these algorithms—such as fractional-order modeling and dynamic subpopulation control—toward specific problem structures. More importantly, these mechanisms inherently possess modular architectures that can, in theory, be readily integrated into OLSHADE, which features high plasticity and structural simplicity, to further improve performance in specific function categories. This is made possible by the structural design approach adopted in this study, which builds upon the classical LSHADE framework with a “lightweight core + core operator optimization” strategy. As a result, OLSHADE not only validates the independent effectiveness of the proposed operator but also retains ample room for future module integration and mechanism fusion. This characteristic renders OLSHADE a highly general and extensible optimization platform, suitable for various theoretical benchmarks and practical engineering tasks.
In summary, the proposed DE/current-to-pbest/order operator demonstrates significant advantages in both theoretical and experimental aspects. When embedded into mainstream frameworks, the resulting algorithms OJADE and OLSHADE exhibit outstanding performance across multiple dimensions and various types of optimization tasks, validating the practical value and research significance of the proposed method.
Future work will primarily focus on two key directions with high potential impact: (1) further development of self-adaptive parameter mechanisms tailored to specific problem structures and constraints, enhancing applicability in dynamic and constrained environments; and (2) embedding the proposed operator into broader DE frameworks in real-world engineering scenarios to systematically evaluate its transferability and integration capability. Additionally, investigating lightweight deployment strategies and interpretable search scheduling mechanisms will be considered as complementary directions.
Overall, this study offers a promising paradigm for algorithmic innovation and lays a solid foundation for building more general, pluggable, and practical metaheuristic optimization systems in the future.

Author Contributions

Conceptualization, Y.Y. and S.T.; methodology, S.T., Y.Y. and S.O.; software, Y.Y. and S.T.; validation, Y.Y. and S.L.; formal analysis, Y.Y. and S.T.; investigation, Y.Y. and R.Z.; resources, S.O.; data curation, S.O.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y. and S.T.; visualization, S.L., Y.Y. and S.T.; supervision, Y.Y. and Z.T.; project administration, Y.Y., Z.T. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research is partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 23K11261; Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145; the Tongji University Support for Outstanding Ph.D Student Short-Term Overseas Research Funding, Grant Number 2023020043; and the Hirosaki University Research Start Support Program, Hirosaki University, Japan.

Data Availability Statement

Data are available upon request from the corresponding author. The source code is publicly available at https://github.com/louiseklocky (accessed on 1 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. Metaheuristic algorithms in modeling and optimization. In Metaheuristic Applications in Structures and Infrastructures; Elsevier: Waltham, MA, USA, 2013; pp. 1–24. [Google Scholar]
  2. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  3. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. In Nature-Inspired Computing and Optimization: Theory and Applications; Springer: Cham, Switzerland, 2017; pp. 475–494. [Google Scholar]
  4. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  5. Eiben, A.E.; Smith, J.E.; Eiben, A.; Smith, J. What is an evolutionary algorithm? In Introduction to Evolutionary Computing; Springer: Berlin/Heidelberg, Germany, 2015; pp. 25–48. [Google Scholar]
  6. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Intelligent Data-Centric Systems, Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Application; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar] [CrossRef]
  7. Kachitvichyanukul, V. Comparison of three evolutionary algorithms: GA, PSO, and DE. Ind. Eng. Manag. Syst. 2012, 11, 215–223. [Google Scholar] [CrossRef]
  8. Vikhar, P.A. Evolutionary algorithms: A critical review and its future prospects. In Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), Jalgoan, India, 22–24 December 2016; pp. 261–265. [Google Scholar]
  9. Yang, H.; Yu, Y.; Cheng, J.; Lei, Z.; Cai, Z.; Zhang, Z.; Gao, S. An intelligent metaphor-free spatial information sampling algorithm for balancing exploitation and exploration. Knowl.-Based Syst. 2022, 250, 109081. [Google Scholar] [CrossRef]
  10. Cai, Z.; Yang, X.; Zhou, M.; Zhan, Z.H.; Gao, S. Toward explicit control between exploration and exploitation in evolutionary algorithms: A case study of differential evolution. Inf. Sci. 2023, 649, 119656. [Google Scholar] [CrossRef]
  11. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 1–33. [Google Scholar] [CrossRef]
  12. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
  13. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409. [Google Scholar] [CrossRef]
  14. Bäck, T.; Schwefel, H.P. An overview of evolutionary algorithms for parameter optimization. Evol. Comput. 1993, 1, 1–23. [Google Scholar] [CrossRef]
  15. Li, X.; Li, J.; Yang, H.; Wang, Y.; Gao, S. Population interaction network in representative differential evolution algorithms: Power-law outperforms Poisson distribution. Phys. A Stat. Mech. Appl. 2022, 603, 127764. [Google Scholar] [CrossRef]
  16. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Cham, Switzerland, 2019; pp. 57–82. [Google Scholar]
  17. Meng, Z.; Chen, Y. Differential Evolution with exponential crossover can be also competitive on numerical optimization. Appl. Soft Comput. 2023, 146, 110750. [Google Scholar] [CrossRef]
  18. Sui, Q.; Yu, Y.; Wang, K.; Zhong, L.; Lei, Z.; Gao, S. Best-worst individuals driven multiple-layered differential evolution. Inf. Sci. 2024, 655, 119889. [Google Scholar] [CrossRef]
  19. Yang, Y.; Tao, S.; Yang, H.; Yuan, Z.; Tang, Z. Dynamic Complex Network, Exploring Differential Evolution Algorithms from Another Perspective. Mathematics 2023, 11, 2979. [Google Scholar] [CrossRef]
  20. Molina, D.; LaTorre, A.; Herrera, F. SHADE with Iterative Local Search for Large-Scale Global Optimization. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  21. Mohamed, A.W.; Hadi, A.A.; Jambi, K.M. Novel mutation strategy for enhancing SHADE and LSHADE algorithms for global numerical optimization. Swarm Evol. Comput. 2019, 50, 100455. [Google Scholar] [CrossRef]
  22. Yang, H.; Gao, S.; Lei, Z.; Li, J.; Yu, Y.; Wang, Y. An improved spherical evolution with enhanced exploration capabilities to address wind farm layout optimization problem. Eng. Appl. Artif. Intell. 2023, 123, 106198. [Google Scholar] [CrossRef]
  23. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  24. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential evolution: A recent review based on state-of-the-art works. Alex. Eng. J. 2022, 61, 3831–3872. [Google Scholar] [CrossRef]
  25. Ting, T.; Yang, X.S.; Cheng, S.; Huang, K. Hybrid metaheuristic algorithms: Past, present, and future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer: Cham, Switzerland, 2015; pp. 71–83. [Google Scholar]
  26. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31. [Google Scholar] [CrossRef]
  27. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  28. Huang, F.Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  29. Yang, Y.; Tao, S.; Li, H.; Yang, H.; Tang, Z. A Multi-Local Search-Based SHADE for Wind Farm Layout Optimization. Electronics 2024, 13, 3196. [Google Scholar] [CrossRef]
  30. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 71–78. [Google Scholar]
  31. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  32. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558. [Google Scholar] [CrossRef]
  33. Habashneh, M.; Cucuzza, R.; Aela, P.; Rad, M.M. Reliability-based topology optimization of imperfect structures considering uncertainty of load position. Structures 2024, 69, 107533. [Google Scholar] [CrossRef]
  34. Parouha, R.P.; Verma, P. A systematic overview of developments in differential evolution and particle swarm optimization with their advanced suggestion. Appl. Intell. 2022, 52, 10448–10492. [Google Scholar] [CrossRef]
  35. Zaharie, D. Influence of crossover on the behavior of differential evolution algorithms. Appl. Soft Comput. 2009, 9, 1126–1138. [Google Scholar] [CrossRef]
  36. Tao, S.; Liu, S.; Zhao, R.; Yang, Y.; Todo, H.; Yang, H. A State-of-the-Art Fractional Order-Driven Differential Evolution for Wind Farm Layout Optimization. Mathematics 2025, 13, 282. [Google Scholar] [CrossRef]
  37. Reyes-Davila, E.; Haro, E.H.; Casas-Ordaz, A.; Oliva, D.; Avalos, O. Differential evolution: A survey on their operators and variants. Arch. Comput. Methods Eng. 2025, 32, 83–112. [Google Scholar] [CrossRef]
  38. Sallam, K.M.; Abdel-Basset, M.; El-Abd, M.; Wagdy, A. IMODEII: An Improved IMODE algorithm based on the Reinforcement Learning. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  39. Li, J.Y.; Du, K.J.; Zhan, Z.H.; Wang, H.; Zhang, J. Distributed differential evolution with adaptive resource allocation. IEEE Trans. Cybern. 2022, 53, 2791–2804. [Google Scholar] [CrossRef]
  40. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.L.; Zhan, Z.H. Triple archives particle swarm optimization. IEEE Trans. Cybern. 2019, 50, 4862–4875. [Google Scholar] [CrossRef]
  41. Lei, Z.; Gao, S.; Wang, Y.; Yu, Y.; Guo, L. An adaptive replacement strategy-incorporated particle swarm optimizer for wind farm layout optimization. Energy Convers. Manag. 2022, 269, 116174. [Google Scholar] [CrossRef]
  42. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  43. Liu, S.; Wang, K.; Yang, H.; Zheng, T.; Lei, Z.; Jia, M.; Gao, S. Hierarchical Chaotic Wingsuit Flying Search Algorithm with Balanced Exploitation and Exploration for Optimization. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2025, 108, 83–93. [Google Scholar] [CrossRef]
  44. Wang, K.; Wang, Y.; Tao, S.; Cai, Z.; Lei, Z.; Gao, S. Spherical search algorithm with adaptive population control for global continuous optimization problems. Appl. Soft Comput. 2023, 132, 109845. [Google Scholar] [CrossRef]
Figure 1. Two operators in a single-peaked problem.
Figure 2. Two operators in a multi-peaked problem, case 1.
Figure 3. Two operators in a multi-peaked problem, case 2.
Figure 4. Two operators in a multi-peaked problem, case 3.
Figure 5. Convergence curves and box plots of OJADE and JADE on 50-dimensional problems in CEC2017.
Figure 6. Convergence curves and box plots of OLSHADE and LSHADE on 50-dimensional problems in CEC2017.
Mathematics 13 01389 g006
Table 1. The comparison results between OJADE and JADE on the CEC2017 benchmark suite.
Columns per dimension block (D = 30 | D = 50 | D = 100): OJADE Mean, OJADE Std, JADE Mean, JADE Std, and the +/=/− outcome summarized in the W/T/L row.
F1 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 =
F2 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 3.800E-10 2.714E-09 7.985E-09 3.140E-08 + | 2.984E+02 1.926E+03 2.693E+17 1.923E+18 +
F3 | 9.313E+03 1.703E+04 5.276E+03 1.282E+04 = | 1.713E+04 3.537E+04 2.257E+04 4.140E+04 = | 1.324E+05 1.608E+05 1.347E+05 1.637E+05 −
F4 | 5.136E+01 2.219E+01 4.167E+01 2.788E+01 = | 5.575E+01 4.882E+01 4.269E+01 4.237E+01 = | 4.609E+01 5.080E+01 6.922E+01 6.521E+01 +
F5 | 2.827E+01 3.993E+00 2.650E+01 3.767E+00 − | 4.717E+01 8.456E+00 5.300E+01 7.993E+00 + | 1.252E+02 1.440E+01 1.498E+02 1.784E+01 +
F6 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 9.880E-05 4.790E-04 4.253E-04 1.594E-03 =
F7 | 5.482E+01 3.106E+00 5.317E+01 4.069E+00 − | 9.502E+01 7.516E+00 1.000E+02 7.460E+00 + | 2.389E+02 1.636E+01 2.820E+02 1.775E+01 +
F8 | 2.517E+01 3.669E+00 2.530E+01 3.909E+00 = | 5.153E+01 8.832E+00 5.521E+01 6.651E+00 + | 1.272E+02 1.832E+01 1.472E+02 1.905E+01 +
F9 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 4.152E-01 5.478E-01 1.191E+00 1.039E+00 + | 3.805E+01 2.557E+01 1.369E+02 9.956E+01 +
F10 | 1.866E+03 2.472E+02 1.976E+03 2.232E+02 + | 3.719E+03 3.310E+02 3.706E+03 2.879E+02 = | 1.014E+04 4.738E+02 1.011E+04 5.295E+02 =
F11 | 3.115E+01 2.429E+01 3.203E+01 2.709E+01 = | 1.303E+02 4.004E+01 1.288E+02 4.749E+01 = | 3.527E+03 3.527E+03 3.439E+03 3.171E+03 =
F12 | 1.220E+03 3.989E+02 1.259E+03 4.862E+02 = | 5.565E+03 2.488E+03 5.052E+03 3.204E+03 − | 1.737E+04 6.395E+03 2.101E+04 1.195E+04 =
F13 | 2.040E+02 1.229E+03 4.744E+02 3.056E+03 + | 1.693E+02 1.077E+02 2.673E+02 1.604E+02 + | 3.175E+03 2.664E+03 3.147E+03 3.181E+03 =
F14 | 6.188E+03 1.130E+04 2.105E+03 6.275E+03 = | 1.690E+04 5.169E+04 1.127E+04 3.034E+04 + | 5.615E+02 1.873E+02 5.402E+02 2.218E+02 =
F15 | 1.555E+03 4.026E+03 2.573E+02 1.467E+03 + | 3.232E+02 6.369E+02 3.115E+02 1.230E+02 + | 3.913E+02 2.140E+02 3.763E+02 1.014E+02 =
F16 | 4.170E+02 1.334E+02 3.931E+02 1.387E+02 = | 8.473E+02 1.825E+02 8.241E+02 2.067E+02 = | 2.604E+03 2.963E+02 2.541E+03 3.182E+02 =
F17 | 6.864E+01 1.750E+01 7.138E+01 2.224E+01 = | 6.222E+02 1.415E+02 6.059E+02 1.412E+02 = | 1.915E+03 2.447E+02 1.947E+03 2.383E+02 =
F18 | 5.337E+03 2.884E+04 7.616E+03 2.786E+04 + | 2.279E+04 1.154E+05 1.727E+02 9.918E+01 + | 1.965E+03 1.773E+03 2.029E+03 1.458E+03 =
F19 | 1.407E+03 3.727E+03 1.026E+02 5.940E+02 + | 9.725E+02 2.489E+03 1.517E+02 4.776E+01 = | 3.650E+02 4.270E+02 1.018E+03 1.505E+03 +
F20 | 1.100E+02 5.601E+01 1.060E+02 5.132E+01 = | 4.275E+02 1.340E+02 4.511E+02 1.273E+02 = | 1.869E+03 2.054E+02 1.933E+03 2.046E+02 =
F21 | 2.260E+02 4.524E+00 2.258E+02 4.752E+00 = | 2.490E+02 8.331E+00 2.522E+02 8.097E+00 + | 3.462E+02 1.852E+01 3.648E+02 2.056E+01 +
F22 | 1.000E+02 1.005E-13 1.002E+02 1.783E+00 = | 3.487E+03 1.681E+03 3.662E+03 1.678E+03 = | 1.140E+04 5.275E+02 1.146E+04 5.308E+02 =
F23 | 3.718E+02 6.519E+00 3.730E+02 6.346E+00 = | 4.755E+02 1.144E+01 4.787E+02 1.166E+01 = | 6.456E+02 1.426E+01 6.553E+02 1.639E+01 +
F24 | 4.392E+02 4.512E+00 4.401E+02 5.411E+00 = | 5.355E+02 7.240E+00 5.433E+02 1.056E+01 + | 9.935E+02 2.154E+01 1.035E+03 2.215E+01 +
F25 | 3.870E+02 1.493E-01 3.870E+02 2.127E-01 = | 5.261E+02 3.309E+01 5.234E+02 3.328E+01 = | 7.523E+02 4.993E+01 7.403E+02 6.274E+01 =
F26 | 1.174E+03 7.331E+01 1.192E+03 6.340E+01 = | 1.548E+03 1.339E+02 1.637E+03 1.119E+02 + | 4.292E+03 2.533E+02 4.685E+03 2.879E+02 +
F27 | 5.028E+02 6.544E+00 5.060E+02 6.607E+00 + | 5.533E+02 2.766E+01 5.568E+02 2.197E+01 = | 7.409E+02 3.652E+01 7.289E+02 4.207E+01 =
F28 | 3.305E+02 5.091E+01 3.288E+02 5.014E+01 = | 4.949E+02 1.749E+01 4.884E+02 2.201E+01 = | 5.243E+02 4.102E+01 5.224E+02 4.138E+01 =
F29 | 4.800E+02 1.917E+01 4.860E+02 3.975E+01 = | 4.734E+02 7.202E+01 4.820E+02 7.974E+01 = | 2.134E+03 2.733E+02 2.185E+03 3.065E+02 =
F30 | 2.262E+03 1.100E+03 2.174E+03 1.960E+02 + | 6.256E+05 5.243E+04 6.761E+05 8.873E+04 + | 3.857E+03 1.742E+03 3.557E+03 1.628E+03 =
W/T/L | 7/21/2 | 13/16/1 | 11/18/1
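The +/=/− marks and the W/T/L rows in Tables 1–3 are per-function pairwise comparison outcomes aggregated over independent runs. This section does not restate which statistical test and significance level produce those marks, so the snippet below is only an illustrative tally under the common assumption of a Wilcoxon rank-sum test at alpha = 0.05; the function and data names are hypothetical stand-ins.

import numpy as np
from scipy.stats import ranksums

def wtl_tally(results_a, results_b, alpha=0.05):
    """Count wins/ties/losses of algorithm A against algorithm B per function.

    results_a, results_b: dicts mapping a function name to an array of final
    errors over independent runs. A '+' for A means a significant difference
    with A having the lower (better) median under minimization.
    """
    w = t = l = 0
    marks = {}
    for fname in results_a:
        a, b = np.asarray(results_a[fname]), np.asarray(results_b[fname])
        stat, p = ranksums(a, b)
        if p >= alpha:
            marks[fname] = "="; t += 1
        elif np.median(a) < np.median(b):
            marks[fname] = "+"; w += 1
        else:
            marks[fname] = "-"; l += 1
    return marks, (w, t, l)

# Hypothetical usage with random data standing in for 51 runs on two functions.
rng = np.random.default_rng(1)
res_a = {"F1": rng.normal(10, 1, 51), "F2": rng.normal(12, 1, 51)}
res_b = {"F1": rng.normal(10, 1, 51), "F2": rng.normal(15, 1, 51)}
marks, wtl = wtl_tally(res_a, res_b)
print(marks, "W/T/L = %d/%d/%d" % wtl)

Swapping in a different test or significance level only changes the ranksums call and the alpha argument; the tally logic stays the same.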
Table 2. The results of internal comparisons on CEC2011 real-world complex engineering optimization problems.
Columns per block (OJADE vs. JADE | OLSHADE vs. LSHADE): Mean and Std of each algorithm, followed by the +/=/− outcome summarized in the W/T/L row.
F1 | 3.689E+00 3.003E+00 5.119E+00 6.122E+00 = | 2.586E-01 1.580E+00 6.814E-03 3.496E-02 =
F2 | −2.595E+01 5.211E-01 −2.782E+01 4.169E-01 − | −2.786E+01 3.979E-01 −2.775E+01 4.457E-01 =
F3 | 1.151E-05 2.023E-19 1.151E-05 2.640E-19 + | 1.151E-05 2.837E-19 1.151E-05 3.748E-19 =
F4 | 1.566E+01 2.730E+00 1.863E+01 3.288E+00 + | 1.867E+01 3.094E+00 1.892E+01 3.049E+00 =
F5 | −3.669E+01 2.608E-01 −3.646E+01 5.591E-01 − | −3.684E+01 2.488E-02 −3.681E+01 1.682E-01 +
F6 | −2.916E+01 2.187E-03 −2.896E+01 5.650E-01 − | −2.917E+01 1.820E-04 −2.917E+01 1.500E-04 =
F7 | 1.154E+00 8.018E-02 1.188E+00 1.169E-01 + | 1.115E+00 8.613E-02 1.125E+00 7.866E-02 =
F8 | 2.200E+02 0.000E+00 2.200E+02 0.000E+00 = | 2.200E+02 0.000E+00 2.200E+02 0.000E+00 =
F9 | 1.458E+03 4.299E+02 1.966E+03 2.498E+03 = | 8.559E+02 2.093E+02 9.858E+02 1.892E+02 +
F10 | −2.153E+01 2.087E-01 −2.145E+01 4.169E-01 = | −2.157E+01 9.418E-02 −2.159E+01 1.063E-01 =
F11 | 5.240E+04 5.093E+02 5.287E+04 2.635E+03 = | 4.798E+04 3.895E+02 4.804E+04 3.530E+02 =
F12 | 1.741E+07 5.500E+04 1.738E+07 5.875E+04 − | 1.739E+07 3.887E+04 1.743E+07 6.437E+04 +
F13 | 1.544E+04 1.167E+00 1.545E+04 9.086E+00 + | 1.544E+04 7.581E-01 1.544E+04 4.377E+00 +
F14 | 1.825E+04 3.073E+02 1.844E+04 1.886E+02 + | 1.810E+04 3.670E+01 1.809E+04 3.440E+01 =
F15 | 3.298E+04 7.189E+01 3.299E+04 8.870E+01 = | 3.274E+04 4.071E-01 3.274E+04 3.613E-01 +
F16 | 1.332E+05 4.580E+03 1.337E+05 4.642E+03 = | 1.233E+05 4.165E+02 1.233E+05 5.172E+02 =
F17 | 1.902E+06 2.552E+04 1.953E+06 1.629E+05 + | 1.728E+06 8.206E+03 1.728E+06 7.526E+03 =
F18 | 9.375E+05 2.170E+03 1.017E+06 1.815E+05 + | 9.247E+05 5.504E+02 9.249E+05 4.881E+02 +
F19 | 1.020E+06 1.739E+05 1.262E+06 3.987E+05 + | 9.329E+05 6.651E+02 9.330E+05 5.564E+02 =
F20 | 9.381E+05 2.604E+03 1.035E+06 2.479E+05 + | 9.248E+05 5.890E+02 9.250E+05 6.943E+02 +
F21 | 1.591E+01 1.804E+00 1.585E+01 1.376E+00 = | 1.493E+01 9.436E-01 1.482E+01 9.350E-01 =
F22 | 1.858E+01 1.974E+00 2.039E+01 2.541E+00 + | 1.756E+01 2.058E+00 1.788E+01 1.877E+00 =
W/T/L | 10/8/4 | 7/15/0
Table 3. The comparison results between OLSHADE and LSHADE on the CEC2017 benchmark suite.
Columns per dimension block (D = 30 | D = 50 | D = 100): OLSHADE Mean, OLSHADE Std, LSHADE Mean, LSHADE Std, and the +/=/− outcome summarized in the W/T/L row.
F1 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 =
F2 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 2.723E+00 1.944E+01 8.638E+02 3.938E+03 +
F3 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 4.870E-08 6.659E-08 1.154E-06 1.942E-06 +
F4 | 5.878E+01 1.089E+00 5.856E+01 3.411E-14 = | 7.348E+01 5.120E+01 8.595E+01 5.097E+01 = | 8.157E+01 7.187E+01 1.916E+02 2.374E+01 +
F5 | 6.355E+00 1.516E+00 6.704E+00 1.398E+00 = | 1.119E+01 2.158E+00 1.224E+01 2.101E+00 + | 4.526E+01 6.407E+00 3.912E+01 3.882E+00 −
F6 | 4.035E-09 2.015E-08 2.684E-09 1.917E-08 = | 9.831E-08 1.827E-07 2.479E-07 5.152E-07 + | 6.037E-03 4.512E-03 5.792E-03 4.050E-03 =
F7 | 3.757E+01 1.283E+00 3.727E+01 1.373E+00 = | 6.292E+01 1.703E+00 6.372E+01 1.901E+00 + | 1.401E+02 4.128E+00 1.402E+02 4.302E+00 =
F8 | 7.105E+00 1.236E+00 6.939E+00 1.746E+00 = | 1.143E+01 2.384E+00 1.298E+01 2.175E+00 + | 4.531E+01 5.423E+00 3.861E+01 4.875E+00 −
F9 | 0.000E+00 0.000E+00 0.000E+00 0.000E+00 = | 1.755E-03 1.254E-02 0.000E+00 0.000E+00 = | 2.800E-01 2.954E-01 4.628E-01 4.634E-01 +
F10 | 1.387E+03 2.354E+02 1.406E+03 2.346E+02 = | 3.139E+03 3.572E+02 3.137E+03 2.787E+02 = | 1.045E+04 4.534E+02 1.032E+04 4.644E+02 =
F11 | 3.128E+01 2.845E+01 3.200E+01 2.892E+01 = | 4.683E+01 7.595E+00 4.832E+01 7.515E+00 = | 4.655E+02 8.087E+01 4.457E+02 1.018E+02 +
F12 | 1.069E+03 4.011E+02 1.135E+03 3.169E+02 = | 2.224E+03 5.545E+02 2.151E+03 4.153E+02 = | 2.059E+04 7.638E+03 2.393E+04 9.517E+03 +
F13 | 1.673E+01 4.448E+00 1.508E+01 5.460E+00 − | 5.439E+01 1.913E+01 5.560E+01 2.935E+01 = | 6.881E+02 6.821E+02 5.355E+02 3.683E+02 =
F14 | 2.169E+01 1.321E+00 2.073E+01 4.890E+00 = | 2.817E+01 2.825E+00 2.946E+01 3.743E+00 + | 2.522E+02 2.930E+01 2.547E+02 2.820E+01 =
F15 | 2.810E+00 1.734E+00 3.054E+00 1.386E+00 = | 3.609E+01 7.419E+00 3.868E+01 1.058E+01 = | 2.410E+02 4.372E+01 2.417E+02 4.551E+01 =
F16 | 4.246E+01 4.153E+01 4.871E+01 4.346E+01 = | 3.441E+02 1.210E+02 3.506E+02 1.328E+02 = | 1.460E+03 2.722E+02 1.683E+03 2.454E+02 +
F17 | 3.242E+01 5.957E+00 3.324E+01 5.363E+00 = | 2.186E+02 8.707E+01 2.363E+02 6.536E+01 = | 1.160E+03 2.273E+02 1.122E+03 1.839E+02 =
F18 | 2.175E+01 9.353E-01 2.186E+01 1.149E+00 = | 3.941E+01 1.319E+01 3.959E+01 1.124E+01 = | 2.147E+02 4.099E+01 2.416E+02 4.820E+01 +
F19 | 5.047E+00 1.516E+00 4.923E+00 1.607E+00 = | 2.458E+01 5.230E+00 2.493E+01 7.511E+00 = | 1.750E+02 2.320E+01 1.712E+02 2.161E+01 =
F20 | 3.040E+01 7.304E+00 3.197E+01 5.973E+00 = | 1.650E+02 5.287E+01 1.655E+02 6.448E+01 = | 1.660E+03 2.093E+02 1.588E+03 2.104E+02 =
F21 | 2.069E+02 1.833E+00 2.071E+02 1.299E+00 = | 2.128E+02 2.764E+00 2.128E+02 2.622E+00 = | 2.620E+02 5.917E+00 2.602E+02 4.960E+00 =
F22 | 1.000E+02 1.005E-13 1.000E+02 1.005E-13 = | 2.930E+03 1.494E+03 2.575E+03 1.619E+03 = | 1.127E+04 4.169E+02 1.111E+04 5.027E+02 +
F23 | 3.499E+02 2.325E+00 3.498E+02 2.924E+00 = | 4.288E+02 3.785E+00 4.313E+02 3.798E+00 + | 5.722E+02 1.097E+01 5.679E+02 1.241E+01 =
F24 | 4.259E+02 1.666E+00 4.261E+02 1.702E+00 = | 5.060E+02 2.548E+00 5.068E+02 2.538E+00 = | 8.903E+02 1.048E+01 9.086E+02 7.160E+00 +
F25 | 3.867E+02 2.475E-02 3.868E+02 2.276E-02 + | 4.815E+02 3.491E+00 4.830E+02 6.694E+00 = | 7.437E+02 3.441E+01 7.487E+02 2.861E+01 =
F26 | 9.146E+02 3.527E+01 9.174E+02 3.303E+01 = | 1.132E+03 4.375E+01 1.143E+03 5.297E+01 = | 3.132E+03 9.232E+01 3.307E+03 7.994E+01 +
F27 | 5.022E+02 5.631E+00 5.028E+02 6.462E+00 = | 5.297E+02 1.407E+01 5.333E+02 1.496E+01 = | 6.468E+02 1.747E+01 6.333E+02 1.652E+01 −
F28 | 3.310E+02 5.136E+01 3.225E+02 4.631E+01 = | 4.723E+02 2.202E+01 4.732E+02 2.248E+01 = | 5.260E+02 2.849E+01 5.166E+02 2.191E+01 =
F29 | 4.307E+02 5.468E+00 4.323E+02 6.126E+00 + | 3.523E+02 9.991E+00 3.514E+02 1.055E+01 = | 1.132E+03 1.246E+02 1.230E+03 1.742E+02 +
F30 | 1.988E+03 5.345E+01 1.978E+03 3.786E+01 = | 6.630E+05 7.068E+04 6.587E+05 8.047E+04 = | 2.520E+03 1.839E+02 2.382E+03 1.441E+02 −
W/T/L | 2/27/1 | 6/24/0 | 12/14/4