Article

Improving the Dung Beetle Optimizer with Multiple Strategies: An Application to Complex Engineering Problems

1 School of Artificial Intelligence and Information Engineering, East China University of Technology, Nanchang 330013, China
2 School of Surveying and Geoinformation Engineering, East China University of Technology, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(11), 717; https://doi.org/10.3390/biomimetics10110717
Submission received: 21 August 2025 / Revised: 16 September 2025 / Accepted: 26 September 2025 / Published: 23 October 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

Although the Dung Beetle Optimizer (DBO) is a promising new metaheuristic for global optimization, it often struggles with premature convergence and lacks the necessary precision when applied to complex optimization challenges. Therefore, we developed the Multi-Strategy Improved Dung Beetle Optimizer (MIDBO), an algorithm that incorporates several new strategies to enhance the performance of the standard DBO. The algorithm enhances initial population diversity by improving the distribution uniformity of the Circle chaotic map and combining it with a dynamic opposition-based learning strategy for initialization. A nonlinear oscillating balance factor and an improved foraging strategy are introduced to achieve a dynamic equilibrium between the algorithm’s global search and local refinement, thereby accelerating convergence. A multi-population differential co-evolutionary mechanism is designed, wherein the population is partitioned into three categories according to fitness, with each category using a unique mutation operator to execute targeted searches and avoid local optima. A comparative study against multiple metaheuristics on the CEC2017 and CEC2022 benchmarks was performed to comprehensively evaluate MIDBO’s performance. The practical effectiveness of the MIDBO algorithm was further validated by applying it to three practical engineering challenges. The results demonstrate that MIDBO significantly outperformed the competing algorithms, confirming the effectiveness of the proposed improvement strategies.

1. Introduction

An optimization algorithm seeks to identify the set of parameters that produces the optimal value (maximum or minimum) for an objective function, subject to a series of defined constraints [1]. Conventional optimization algorithms are typically contingent upon specific mathematical models or problem structures, which enable them to efficiently find the optimal solution for low-dimensional, well-structured, or gradient-based problems like linear, convex, and continuous optimization [2,3]. Nevertheless, optimization problems in practical applications exhibit significant diversity and are classified into multiple types based on whether they are single-objective or multi-objective, static or dynamic, and discrete or continuous [4]. The hallmarks of these problems include non-convexity, nonlinear constraints, high dimensionality, and substantial computational expense, with the difficulty of finding a solution escalating as the problem’s dimensions grow [5]. Therefore, employing traditional algorithms for such challenges is often problematic due to high computational complexity, the risk of stagnation at suboptimal solutions, and an inability to achieve the level of precision demanded by real-world engineering problems [6].
Against this backdrop, metaheuristic algorithms have seen widespread adoption in various optimization problems, attributable to their ease of implementation, their non-reliance on gradient data, and their lower susceptibility to premature convergence [7]. Metaheuristic algorithms are distinguished from traditional methods by their use of a search mechanism based on predefined rules and stochastic operators that, by operating independently of the search space’s gradient information, enable them to solve most real-world non-convex, nonlinear, and high-dimensional optimization problems [3]. Metaheuristic algorithms can find a solution at or near the global optimum under more complex constraints. Therefore, this class of algorithms is better suited for solving optimization tasks with complex constraint conditions.
Based on the description above, ongoing progress in metaheuristic optimization has spurred the emergence of various pioneering algorithms, which can be grouped into four classes based on their characteristics. The first class of algorithms is derived from principles of biological evolution, including Biogeography-based optimization (BBO) [8] and differential evolution (DE) [9]. A second category of algorithms draws from physical laws or mathematical principles, featuring algorithms such as the sine cosine algorithm (SCA) [10], thermal exchange optimization (TEO) [11], atom search optimization (ASO) [12], and Keplerian optimization algorithm (KOA) [13]. Human social behaviors provide the conceptual basis for the third category, which includes social group optimization (SGO) [14], league championship algorithm (LCA) [15], and the student psychology optimization (SPO) algorithm [16]. Lastly, swarm intelligence-based methods constitute the fourth class, including beluga whale optimization (BWO) [17], the walrus optimizer (WO) [18], the hippopotamus optimization algorithm (HO) [19], and Greylag Goose Optimization (GGO) [20].
While the multitude of optimization algorithms documented in prior research each possess distinct merits, according to the no free lunch (NFL) theorem, there is no one-size-fits-all algorithm that is universally optimal for every kind of optimization problem [21]. This theorem justifies the creation of domain-specific solutions, since an algorithm’s effectiveness on a particular set of problems does not guarantee similar success in other areas [22]. Consequently, motivated by the NFL theorem, researchers have spent the last few decades developing numerous algorithmic variants designed to advance the efficacy of metaheuristic approaches.
As an illustration, Xie et al. [23] developed a sparrow search algorithm enhanced with a dynamic classification mechanism. The specific improvements include (1) employing an elite opposition-based Chebyshev strategy when generating the initial population to ensure the starting solutions are both diverse and well-distributed, thereby preventing premature convergence, and (2) proposing a dynamic multi-subpopulation classification mechanism that stratifies the sparrow population into elite, intermediate, and inferior subgroups according to individual fitness values. The elite subgroup employs an elite guidance strategy to accelerate convergence; the intermediate subgroup applies an adaptive dynamic inertia weight strategy to balance its exploratory and exploitative behaviors; and the inferior subgroup employs a golden sine strategy to enhance final convergence precision. Zhu et al. [24] modified the snake optimizer with several new strategies: (1) designing a multi-seed chaotic mapping mechanism to generate the initial population, yielding a more homogeneous dispersal of the starting individuals and thus higher-quality initial solutions, (2) incorporating an anti-predation strategy within the exploration process to widen the searchable area while improving the rate and precision of convergence, and (3) proposing a bidirectional population evolution dynamics strategy that bolsters the local exploitation process, thereby steering the algorithm away from premature convergence and improving the equilibrium between its global and local search capabilities. Similarly, an adaptive and diversified version of the hiking optimization algorithm was introduced by Abdel-salam et al. [25].
Their specific improvements include (1) proposing a stratified random initialization strategy to be employed during initial population construction, thereby promoting greater diversity within the set of starting solutions, (2) proposing an enhanced leader coordination strategy to mitigate the risk of premature convergence, (3) introducing an adaptive perturbation strategy as a solution to local optima entrapment, aiming to improve the hiking optimization algorithm’s escape capability, and (4) employing a dynamic exploration strategy to maintain a good trade-off between the algorithm’s exploratory and exploitative phases.
A recent addition to the swarm intelligence family, the Dung Beetle Optimizer (DBO) [26], is an algorithm which draws its core principles from the life habits of dung beetles. It establishes an innovative search framework by simulating their behaviors of ball-rolling, dancing, breeding, foraging, and stealing. Distinguishing itself from other prominent swarm intelligence techniques, the DBO features a unique search process that adeptly manages the trade-off between global search and local refinement, resulting in both rapid convergence and high-precision solutions [27]. As a result, researchers have applied the DBO to solve numerous practical problems.
For instance, the authors of [28] applied an elite-based variant of the DBO to fine-tune the hyperparameters of an encrypted traffic classification model based on a multi-scale convolutional neural network. For the purpose of boosting the quantity and rated power of fuel cell stacks, the work in [29] utilized the DBO for the optimization of a manifold’s structural parameters, thereby improving the consistency of the fuel cell’s performance. In [30], a composite model combining a self-attention temporal convolutional network and a bidirectional long short-term memory network was developed for ultra-short-term photovoltaic power forecasting; this model’s hyperparameters were subsequently fine-tuned using the DBO for improved predictive accuracy. Furthermore, [31] developed an improved variant of the DBO specifically for application in search and rescue operations that utilize multiple UAVs within disaster environments, and the authors succeeded in finding the most efficient route in minimal time. Lastly, [32] optimized a long short-term memory model using the DBO which, when used in conjunction with the variational mode decomposition technique, predicted methane production from deep coal seams and improved the precision of daily yield forecasts.
Despite the DBO’s superior performance relative to many algorithms in the swarm intelligence family in tackling specific optimization challenges, it also possesses certain shortcomings. The authors of [33] pointed out three drawbacks of the DBO. (1) The canonical DBO algorithm begins by creating its initial population through a random initialization method. The diversity of the initial population may be compromised because this stochastic approach can result in poor spatial coverage of individuals throughout the search space. (2) The foraging dung beetles’ lack of adaptability impairs their global exploration capability and increases their propensity for premature convergence. (3) Lastly, the strategy used to adjust the locations of thieving dung beetles is reliant on the current individual’s best solution, a factor that diminishes population diversity and reduces the convergence precision. The authors of [34] noted that the reliance of the position update rule for ball-rolling dung beetles on the global worst position impairs the algorithm’s global search performance and undermines the balance between its exploratory and exploitative capabilities. As pointed out in [35], the linear adjustment of parameter R as part of the dung beetle’s reproduction strategy, while straightforward, elevates the likelihood of the algorithm converging prematurely and may prevent the discovery of more optimal solutions. In summary, further enhancements can be made to the DBO to elevate its performance in optimization tasks.
This paper proposes an enhanced variant of the standard DBO with multiple strategies. The goal of this new algorithm is to address key weaknesses such as its population initialization, balancing exploration and exploitation, and its susceptibility to premature convergence. The algorithm enhances population diversity and augments the global search capabilities of the baseline algorithm by incorporating an efficient initialization mechanism and a dynamic balancing strategy. Ultimately, these enhancements enable the algorithm to surmount the challenge of premature convergence, thereby elevating its solution accuracy when tackling complex optimization problems. Through these improvements, this paper makes the following key contributions:
An enhanced version of the DBO algorithm, named MIDBO, is proposed, which improves overall performance by integrating four new strategies into the original framework.
We validate the numerical optimization effectiveness of the MIDBO on functions selected from the CEC2017 and CEC2022 benchmark collections and in comparison with six competing algorithms. Based on the experimental data, the MIDBO possesses a clear competitive edge.
To further test the MIDBO’s practical utility, it was applied to three practical engineering challenges, where its success in securing optimal solutions underscored its high performance when tackling complex challenges.
The subsequent sections of this paper are as follows. Section 2 introduces the standard DBO. Section 3 introduces the new improvement mechanisms in the proposed MIDBO. Section 4 presents comparative experiments involving the MIDBO against other optimization algorithms on two test suites. Section 5 applies the MIDBO to specific engineering application scenarios. Section 6 summarizes the paper and offers an outlook on potential future work.

2. DBO

The core inspiration for the dung beetle optimization algorithm stems from several natural activities of this species, namely rolling, dancing, foraging, stealing, and reproduction. From these habits, four distinct population update mechanisms are derived.

2.1. Rolling Dung Beetles

The act of rolling a dung ball to an appropriate spot is a natural behavior exhibited by dung beetles. The purpose of acquiring a dung ball is twofold: (1) for laying eggs and raising offspring and (2) as a source of food. To maintain a straight trajectory when rolling a dung ball, a beetle uses celestial cues—like the sun and moon—for orientation. To model this process, the algorithm randomly chooses a direction from the entire search space and maintains a straight path. Equations (1) and (2) describe the position updating in an obstacle-free environment:
X_i(t+1) = X_i(t) + α × k × X_i(t−1) + b × Δx        (1)
Δx = |X_i(t) − X^w|        (2)
In this formulation, the current iteration is denoted by t, and the position of the ith dung beetle is given by X_i(t). The equation uses a constant deflection coefficient k ∈ (0, 0.2] and a random number b from the interval (0, 1). The global worst position and a simulation of light intensity changes are represented by X^w and Δx, respectively. A deviation coefficient α is also used, whose value is assigned based on the probabilistic condition in Equation (3):
α = 1, if λ > η;  α = −1, if λ ≤ η        (3)
In the formula, λ is a random number in the range [0, 1], and η is a probability value set to 0.1. A value of α = 1 signifies that the beetle does not deviate from the forward direction, while α = −1 denotes a deviation of the beetle’s path from the target.
When a dung beetle’s rolling path is obstructed, it performs a characteristic reorientation behavior, often described as a “dance”, to establish a new direction. Equation (4) describes this dancing behavior:
X_i(t+1) = X_i(t) + tan(θ) × |X_i(t) − X_i(t−1)|        (4)
When the parameter θ equals 0, π/2, or π, the position is not altered; otherwise, θ takes a value within the range [0, π].
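As a minimal illustration of the rolling and dancing updates in Equations (1)–(4), the following Python sketch implements both rules. The function and variable names (roll_update, x_prev, x_worst) are our own, and the defaults follow the paper’s stated ranges (k ∈ (0, 0.2], η = 0.1); treat this as a sketch, not the authors’ reference implementation.

```python
import numpy as np

def roll_update(x, x_prev, x_worst, k=0.1, eta=0.1, rng=None):
    """Ball-rolling step in an obstacle-free environment, Equations (1)-(3)."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = 1.0 if rng.random() > eta else -1.0   # deviation coefficient, Eq. (3)
    b = rng.random()                              # random number in (0, 1)
    delta_x = np.abs(x - x_worst)                 # light-intensity term, Eq. (2)
    return x + alpha * k * x_prev + b * delta_x   # Eq. (1)

def dance_update(x, x_prev, rng=None):
    """Reorientation ("dance") when the path is obstructed, Equation (4)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, np.pi)
    if np.isclose(theta % (np.pi / 2), 0.0):      # theta = 0, pi/2 or pi: no move
        return x.copy()
    return x + np.tan(theta) * np.abs(x - x_prev)
```

Both functions operate on one beetle’s position vector; in a full optimizer they would be applied inside the rolling-beetle loop of Algorithm 1.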

2.2. Breeding Dung Beetles

Dung beetles in the wild will deliberately choose a protected area for laying their eggs. The source algorithm models this process using a boundary selection strategy, with its formula presented as follows in Equation (5):
Lb* = max(X_best^l × (1 − R), Lb),  Ub* = min(X_best^l × (1 + R), Ub)        (5)
wherein the egg-laying area is constrained by the lower and upper bounds Lb* and Ub*, while the current local best position is denoted by X_best^l. The equation also uses a dynamic parameter R = 1 − t/M, which decreases over time (where M is the maximum number of iterations). Finally, Lb and Ub represent the lower and upper bounds for the entire search space, respectively.
Equation (5) indicates that the entire oviposition region is dynamic and varies as a function of R. Equation (6) provides the definition of the spawning location:
B_i(t+1) = X_best^l + m_1 × (B_i(t) − Lb*) + m_2 × (B_i(t) − Ub*)        (6)
Two independent random vectors, m_1 and m_2, are used in the equation. Each vector has a size of 1 × D, corresponding to the dimensionality D of the search space for the optimization problem.
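The boundary selection and spawning rules of Equations (5) and (6) can be sketched as follows. All names are illustrative, and clipping the result back into [Lb, Ub] is a common implementation safeguard rather than part of the stated formula.

```python
import numpy as np

def breed_update(B, x_best_local, t, M, lb, ub, rng=None):
    """Breeding-beetle step: boundary selection (Eq. 5) and spawning (Eq. 6)."""
    rng = np.random.default_rng() if rng is None else rng
    R = 1.0 - t / M                                     # shrinks over the run
    lb_star = np.maximum(x_best_local * (1 - R), lb)    # Eq. (5), lower bound
    ub_star = np.minimum(x_best_local * (1 + R), ub)    # Eq. (5), upper bound
    m1 = rng.random(B.shape)                            # random 1xD vector
    m2 = rng.random(B.shape)
    new_pos = x_best_local + m1 * (B - lb_star) + m2 * (B - ub_star)  # Eq. (6)
    # Clipping is a common safeguard, not part of Eq. (6) itself.
    return np.clip(new_pos, lb, ub)
```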

2.3. Small Dung Beetles

Newly hatched dung beetles emerge from underground and begin to forage for food. To simulate this natural behavior, an optimal foraging region is delineated to guide their search. This region is mathematically defined by Equation (7):
Lb^b = max(X_best^g × (1 − R), Lb),  Ub^b = min(X_best^g × (1 + R), Ub)        (7)
wherein the optimal foraging region is bounded by the lower and upper limits Lb^b and Ub^b, respectively, and the global best position is represented by X_best^g. The other parameters are consistent with their definitions in Equation (5). Equation (8) then provides the rule for updating the locations of small dung beetles:
X_i(t+1) = X_i(t) + K_1 × (X_i(t) − Lb^b) + K_2 × (X_i(t) − Ub^b)        (8)
wherein K_1 is a random number following a normal distribution and K_2 is a random vector within (0, 1).

2.4. Thieving Dung Beetles

A subset of the population, designated as “thieving dung beetles” (or “thieves”), engages in stealing dung balls. Based on the optimal foraging region defined in Equation (7), this competitive behavior occurs mainly in the area surrounding the global best position (X_best^g). Consequently, the new position for these thieves is determined by Equation (9):
X_i(t+1) = X_best^g + S × g × (|X_i(t) − X_best^l| + |X_i(t) − X_best^g|)        (9)
wherein S is defined as a constant, while g denotes a random vector of size 1 × D sampled from a normal distribution.
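A compact sketch of the foraging and thieving updates, Equations (7)–(9). The default S = 0.5 follows a common setting for the original DBO but should be treated as an assumption here, as are all the names.

```python
import numpy as np

def forage_update(x, x_best_global, t, M, lb, ub, rng=None):
    """Small-beetle foraging step, Equations (7) and (8)."""
    rng = np.random.default_rng() if rng is None else rng
    R = 1.0 - t / M
    lb_b = np.maximum(x_best_global * (1 - R), lb)  # Eq. (7)
    ub_b = np.minimum(x_best_global * (1 + R), ub)
    K1 = rng.normal()                # random number from a normal distribution
    K2 = rng.random(x.shape)         # random vector within (0, 1)
    return x + K1 * (x - lb_b) + K2 * (x - ub_b)    # Eq. (8)

def thief_update(x, x_best_global, x_best_local, S=0.5, rng=None):
    """Thieving-beetle step around the global best, Equation (9)."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.normal(size=x.shape)     # 1xD normal random vector
    return x_best_global + S * g * (np.abs(x - x_best_local)
                                    + np.abs(x - x_best_global))
```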

2.5. Pseudo-Code of DBO

The pseudo-code of the DBO is shown as Algorithm 1.
Algorithm 1 The framework of the DBO algorithm
Require: The maximum iteration T_max, the size of the particle’s population N
Ensure: Optimal position X_best^g and its fitness value f_min
    1:   Initialize the particle’s population i = 1, 2, …, N and define its relevant parameters
    2:   while t ≤ T_max do
    3:     for i = 1 to Number of rolling dung beetles do
    4:         a = rand(1)
    5:        if a < 0.9 then
    6:           Update the rolling dung beetle’s position by using Equations (1) and (2)
    7:         else
    8:           Update the rolling dung beetle’s position by using Equation (4)
    9:        end if
  10:     end for
  11:     for  i = 1 to Number of breeding dung beetles do
  12:        Update breeding dung beetle’s position by using Equations (5) and (6)
  13:     end for
  14:     for  i = 1 to Number of small dung beetles do
  15:        Update small dung beetle’s position by using Equations (7) and (8)
  16:     end for
  17:     for  i = 1 to Number of stealing dung beetles do
  18:        Update stealing dung beetle’s location by using Equation (9)
  19:     end for
  20:      t = t + 1
  21:  end while
  22:  return X_best^g and its fitness value f_min

3. Proposed Algorithm

3.1. Population Initialization Based on Chaotic Opposition-Based Learning Strategy

The efficiency of swarm intelligence optimization algorithms is influenced by their population initialization, as a well-distributed initial population improves the coverage of the solution space, which in turn enhances both the convergence speed and precision [36]. The conventional DBO typically employs a uniform random initialization method to form its initial swarm, and while this stochastic nature is beneficial for exploring disparate zones within the search space and improves the prospects for discovering the global optimal solution, it presents a notable shortcoming. Specifically, there is no mechanism to secure a homogenous distribution, which often results in high-density pockets of candidates in some regions and low-density voids in others. Such an imbalanced spread poses a significant impediment to the algorithm’s initial convergence phase. To boost the quality of the initial candidate solutions, this study introduces a hybrid initialization strategy that fuses chaotic maps with dynamic opposition-based learning (OBL).
Compared with random initialization, chaotic maps exhibit superior performance in optimization searches, especially when locating the global optimum, an advantage that stems from their comprehensive traversal of the search domain and their effectiveness at evading local optima, thereby enabling a more thorough exploration of the solution domain [37]. The circle map, when compared with other prevalent chaotic maps like the Chebyshev, logistic, and tent maps, has gained attention for its strong ergodicity, high coverage of chaotic values, and strong stability [38,39]. However, experiments revealed that the values generated by the circle map tend to cluster heavily within the interval [0.2, 0.5], making its distribution non-uniform. For the purpose of improving the uniformity of its chaotic value distribution, a modified circle map formula was developed. Equation (10) presents the original formulation of the circle chaotic map:
x_{n+1} = mod(x_n + 0.2 − (0.5 / 2π) × sin(2π x_n), 1)        (10)
wherein x_n represents the nth number in the chaotic sequence. The scatter plot and frequency histogram corresponding to the standard circle chaotic map are presented in Figure 1a,c, respectively.
As shown in Figure 1a,c, the standard circle chaotic map generates values that are predominantly concentrated within the [0.2, 0.5] interval. Such a dense clustering of initial candidate solutions compromises the DBO’s population diversity. This study addresses this limitation by proposing a modified circle chaotic map that alters the original formula in three primary ways: (1) the linear term x n is scaled by a factor of four; (2) the influence of the sine term is amplified by augmenting its coefficient from 0.5 to 0.8; and (3) the constant terms are reconfigured to 0.3 and 0.25 π . The resulting modified map yields superior chaotic properties compared with the original, with its formulation presented in Equation (11):
x_{n+1} = mod(4x_n + 0.3 − (0.8 / 2π) × sin(2π x_n) + 0.25π, 1)        (11)
As shown in Figure 1b,d, the scatter plot and frequency histogram for the modified circle chaotic map demonstrate a distribution that exhibits considerably greater uniformity than the original map. Consequently, initializing the DBO with this improved map generates a more diverse initial population with superior spatial coverage.
Based on [40], the strategy of dynamic opposition-based learning (OBL) contributes to a more diverse population, improves the initial solution quality, and accelerates algorithmic convergence. Therefore, this study incorporates the dynamic OBL strategy within the DBO’s initial population generation stage, the mathematical formulation of which is presented in Equation (12):
X_dobl = X_init + r_1 × r_2 × ((Lb + Ub − X_init) − X_init)        (12)
In the equation, X_init represents the randomly generated initial population, with the variables r_1 and r_2 being random numbers sampled from a uniform distribution on the interval [0, 1].
The proposed initialization process, which integrates the improved circle chaotic map and the dynamic opposition-based learning method, proceeds as follows.
Step 1: First, form the starting population A using random initialization.
Step 2: Generate a chaotic population B from population A according to Equation (11), and simultaneously generate an opposite population C from population A according to Equation (12).
Step 3: Merge populations A, B, and C, and then compute each individual’s fitness score within the newly formed set.
Step 4: Sort the resulting fitness values, and then form the final initial population by selecting the top N individuals from the merged group.
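The four-step hybrid initialization above can be sketched as follows, combining the modified circle map of Equation (11) with the dynamic OBL of Equation (12). For simplicity, the chaotic population is seeded from a fixed x0 rather than from population A, and the sphere objective is a stand-in used purely for illustration; all names are our own.

```python
import numpy as np

def improved_circle_sequence(n, x0=0.7):
    """Chaotic sequence from the modified circle map, Equation (11)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = np.mod(4.0 * x + 0.3
                   - (0.8 / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x)
                   + 0.25 * np.pi, 1.0)
        seq[i] = x
    return seq

def hybrid_init(N, D, lb, ub, fitness, rng=None):
    """Steps 1-4 of the chaotic opposition-based initialization (minimization)."""
    rng = np.random.default_rng() if rng is None else rng
    A = lb + rng.random((N, D)) * (ub - lb)                             # Step 1
    B = lb + improved_circle_sequence(N * D).reshape(N, D) * (ub - lb)  # Step 2a
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    C = np.clip(A + r1 * r2 * ((lb + ub - A) - A), lb, ub)  # Step 2b, Eq. (12)
    merged = np.vstack([A, B, C])                                       # Step 3
    scores = np.apply_along_axis(fitness, 1, merged)
    return merged[np.argsort(scores)[:N]]                   # Step 4: keep best N

# Stand-in objective for illustration only.
sphere = lambda x: float(np.sum(x ** 2))
pop = hybrid_init(N=30, D=5, lb=-10.0, ub=10.0, fitness=sphere)
```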

3.2. Oscillating Balance Factor

Swarm intelligence is fundamentally characterized by a balance between two distinct phases, namely exploration, where the algorithm scans the whole search area for the purpose of discovering potentially optimal regions, and exploitation, where the focus shifts to a more concentrated search within those regions to pinpoint the optimal solution.
Global exploration within the DBO framework is primarily the responsibility of the ball-rolling dung beetles. To bolster this capability and establish a more effective trade-off between global search and local refinement, we develop a balance factor, denoted by w, which decreases nonlinearly over the iterations at a rate adaptively modulated by a sinusoidal function, and apply it to the ball-rolling position update rule, as formulated in Equation (13):
w = ((T_max − t) / T_max) × |sin θ|^α        (13)
wherein the parameter θ is defined on the interval [π/6, π/2], while the parameter α serves to control the amplitude of oscillation. Figure 2 illustrates the variation curves of the oscillating balance factor w as a function of different values of α over the course of 1000 iterations.
As illustrated by Equation (13) and Figure 2, the oscillating balance factor w gradually declines over the course of the iterations, with a faster rate of decrease in the beginning. Consequently, the algorithm’s step size transitions from large to small over the course of the search, which allows for a more reasonable allocation of effort between its global exploration and local refinement capabilities. Additionally, the sine function, parameterized by θ, injects a degree of randomness into the search during the exploration phase. To keep the oscillating balance factor well balanced in both the initial and final stages, avoiding excessively large step sizes during exploration while using the smallest possible step size during exploitation, this study sets the parameter α to 0.2.
The ball-rolling dung beetles’ position updates are modified by incorporating the oscillating balance factor w, as formulated in Equations (14) and (15):
X_i(t+1) = w × X_i(t) + α × k × X_i(t−1) + b × Δx        (14)
X_i(t+1) = w × X_i(t) + tan(θ) × |X_i(t) − X_i(t−1)|        (15)
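Under our reading of Equation (13), where w is the linearly decaying fraction scaled by the oscillation term |sin θ|^α, the factor can be computed as follows; this is a reconstruction, and the names are illustrative.

```python
import numpy as np

def balance_factor(t, T_max, alpha=0.2, rng=None):
    """Oscillating balance factor w of Equation (13) (our reconstruction)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(np.pi / 6, np.pi / 2)   # theta resampled each iteration
    return (T_max - t) / T_max * np.abs(np.sin(theta)) ** alpha
```

With α = 0.2, the oscillation term stays within roughly [0.87, 1], so w decays from near 1 at the start of the run toward 0 at the end while jittering slightly from iteration to iteration.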

3.3. Improved Foraging Strategy

The generation of candidate solutions as part of the DBO’s foraging process is governed by two random numbers: K_1 and K_2. A key limitation of this approach is its failure to utilize information from the current best solutions, thereby weakening the algorithm’s local refinement capabilities and slowing its rate of convergence. To address this deficiency, this paper enhances the foraging phase by integrating an optimal value guidance strategy and an adaptive t-distribution perturbation strategy, both of which are incorporated into the position-updating mechanism.
Drawing inspiration from the social learning mechanism in particle swarm optimization (PSO), this study employs an optimal value guidance strategy to accelerate convergence. The core principle of this strategy is to leverage the best-so-far solution to steer the search of subsequent candidates toward the global optimum [41]. By incorporating this strategy within the foraging stage, the position-updating mechanism is reformulated according to Equation (16):
X_i(t+1) = X_i(t) + K_1 × (X_i(t) − Lb^b) + K_2 × (X_i(t) − Ub^b) + λ × (X_best^g − X_i(t)),  λ = e^(t/M) − 1        (16)
In the equation, λ is the optimal value guidance factor.
The t-distribution [42] (Student’s distribution) is governed by a degree of freedom parameter n. Its two limiting cases are the Cauchy distribution (n = 1), which provides strong global search capabilities, and the Gaussian distribution (n → ∞), which excels at local exploitation to improve convergence speed [43]. To leverage the complementary strengths of these two behaviors, this study introduces an adaptive t-distribution perturbation strategy. This strategy enables the algorithm to transition from wide-ranging exploration in the initial phases to focused exploitation in the final phases, which serves to enhance the overall convergence speed. Equation (17) provides the specific formula for this position update:
X_new^j = X_best^j + t(iter) × X_best^j,  iter = [cosh(3 × (t/M))]^2        (17)
wherein the positions of the optimal solution in the jth dimension before and after the perturbation are denoted by X_best^j and X_new^j, respectively. The term t(iter) is a random value sampled from a t-distribution whose degrees of freedom iter grow with the current iteration number t.
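A sketch of the adaptive t-distribution perturbation of Equation (17). The cosh-based degrees-of-freedom schedule follows our reconstruction of the formula and should be treated as an assumption, as should the names.

```python
import numpy as np

def t_perturb(x_best, t, M, rng=None):
    """Adaptive t-distribution perturbation of the best solution, Equation (17)."""
    rng = np.random.default_rng() if rng is None else rng
    # Degrees of freedom grow from ~1 (Cauchy-like, global search) toward
    # cosh(3)^2 ~ 101 (near-Gaussian, local exploitation) as t approaches M.
    df = np.cosh(3.0 * t / M) ** 2
    return x_best + rng.standard_t(df, size=x_best.shape) * x_best
```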

3.4. Multi-Population Differential Co-Evolutionary Mechanism

Despite boosting search efficiency, the original DBO’s position update strategy is susceptible to converging prematurely to local optima. Once trapped in such a state, the algorithm’s search is restricted to a less-than-optimal region of the solution domain, thereby impeding the discovery of the global optimum. The authors of [44] pointed out that this constraint can be addressed by increasing population diversity, a common strategy for which is to introduce mutation operations, among which the differential evolution algorithm has gained attention for its superior search mechanism. Therefore, to strengthen the algorithm’s potential for steering clear of local optima and bolster population diversity, we developed a multi-population differential co-evolutionary mechanism that applies mutation operations to the dung beetle population.
However, a uniform mutation strategy is suboptimal, as it fails to address the diverse evolutionary requirements of all individuals in the population. For example, individuals with high fitness values are usually clustered near the current best individual and thus require more emphasis on the local exploitation capability. In contrast, individuals with inferior fitness are typically distant from the optimal solution, thus requiring a stronger capacity for global exploration. This principle motivates our approach, which involves partitioning the population into three distinct groups, using fitness as the partitioning criterion—the top 20% of the population as the elite group, the middle 50% as the intermediate group, and the remaining 30% as the inferior group—and then applying a unique differential evolution operator to each group. For example, with a total population of 30, the members are allocated accordingly: 6 are assigned to the elite group, 15 are assigned to the intermediate group, and the final 9 form the inferior group. This method enhances the DBO’s overall search effectiveness and facilitates its escape from local optima. The specific strategies are detailed below.
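The 20/50/30 fitness-based partition described above can be sketched as follows, assuming a minimization problem so that smaller fitness values rank higher; the function name and return convention are our own.

```python
import numpy as np

def partition(fitness):
    """Split population indices into elite / intermediate / inferior groups."""
    order = np.argsort(fitness)            # ascending: best individuals first
    n = len(fitness)
    n_elite = int(0.2 * n)                 # top 20%
    n_mid = int(0.5 * n)                   # middle 50%
    elite = order[:n_elite]
    intermediate = order[n_elite:n_elite + n_mid]
    inferior = order[n_elite + n_mid:]     # remaining 30%
    return elite, intermediate, inferior
```

For a population of 30 this yields groups of 6, 15, and 9 individuals, matching the worked example in the text.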

3.4.1. Elite Group

Individuals in the population’s elite group, characterized by high fitness values, are presumed to be in proximity to the global optimum. Consequently, their search should focus on local exploitation—performing fine-grained searches within their immediate region—to pinpoint the optimal solution. To facilitate this, the DE/current-to-best/1 mutation operator is employed for this group. This operator was selected for its exceptional local exploitation capabilities, which align perfectly with the requirements of the elite group, despite its limited global exploration potential. The corresponding mathematical formulation is presented in Equation (18):
v_i(t) = x_i(t) + F \times [x_{best}(t) - x_i(t)] + F \times [x_{r1}(t) - x_{r2}(t)]
where v_i(t) denotes the resulting mutant vector and F is the scaling factor. The best individual in the current generation t is represented by x_{best}(t). The terms x_{r1}(t) and x_{r2}(t) are two distinct individuals selected randomly from the population, excluding the target vector x_i.
Subsequently, a crossover operation derived from differential evolution is applied, with its formulation presented in Equation (19):
u_{i,j}(t) = \begin{cases} v_{i,j}(t), & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,j}(t), & \text{otherwise} \end{cases}
where j_{rand} is an integer selected randomly from the interval [1, D] and CR is the crossover rate. Following this crossover operation, a selection operation is performed using a greedy criterion, as shown in Equation (20):
x_i(t+1) = \begin{cases} u_i(t), & \text{if } f(u_i(t)) \le f(x_i(t)) \\ x_i(t), & \text{otherwise} \end{cases}
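The elite-group update of Equations (18)–(20) can be sketched in Python as follows; the settings F = 0.5 and CR = 0.9 are illustrative defaults of ours, not values reported in the paper:

```python
import random

def mutate_current_to_best_1(X, fit, i, F=0.5):
    """DE/current-to-best/1 mutant of Eq. (18); X is a list of vectors and
    fit the matching fitness list (minimization). F = 0.5 is illustrative."""
    best = X[min(range(len(X)), key=lambda k: fit[k])]
    r1, r2 = random.sample([j for j in range(len(X)) if j != i], 2)
    return [X[i][d] + F * (best[d] - X[i][d]) + F * (X[r1][d] - X[r2][d])
            for d in range(len(X[i]))]

def binomial_crossover(x, v, CR=0.9):
    """Binomial crossover of Eq. (19); j_rand forces at least one mutant gene."""
    j_rand = random.randrange(len(x))
    return [v[j] if random.random() <= CR or j == j_rand else x[j]
            for j in range(len(x))]

def greedy_select(x, u, f):
    """Greedy selection of Eq. (20): keep the trial vector if it is no worse."""
    return u if f(u) <= f(x) else x
```

The greedy criterion guarantees that an elite individual's fitness never deteriorates across generations.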

3.4.2. Intermediate Group

The intermediate group, which possesses fitness values situated between the superior and inferior extremes, has a dual function, being capable of both local learning from the elite population and global exploration, which assists the algorithm in achieving a more effective trade-off between global search and exploitation. To fulfill the intermediate group’s role of balancing the search, we utilize the DE/mean-current/2 mutation operator, which was selected for its inherent capacity to provide an effective trade-off between global search and local refinement. Equation (21) gives the mathematical definition for this operator:
v_i(t) = x_{c1} + F \times [x_{c1} - x_i(t)] + F \times [x_{c2} - x_i(t)]
where x_{c1} = \frac{x_{r1}(t) + x_{r2}(t)}{2} and x_{c2} = \frac{x_{r1}(t) + x_{best}(t)}{2}. After mutation, the crossover operation is performed using Equation (19), and the selection operation is performed using Equation (20).
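As a minimal sketch, the DE/mean-current/2 mutation of Equation (21) can be written as follows; the helper name and F = 0.5 are our own illustrative choices:

```python
import random

def mutate_mean_current_2(X, fit, i, F=0.5):
    """DE/mean-current/2 mutant of Eq. (21): x_c1 averages two random
    individuals, x_c2 averages one of them with the current best.
    F = 0.5 is an illustrative setting, not taken from the paper."""
    best = X[min(range(len(X)), key=lambda k: fit[k])]
    r1, r2 = random.sample([j for j in range(len(X)) if j != i], 2)
    D = len(X[i])
    xc1 = [(X[r1][d] + X[r2][d]) / 2 for d in range(D)]
    xc2 = [(X[r1][d] + best[d]) / 2 for d in range(D)]
    return [xc1[d] + F * (xc1[d] - X[i][d]) + F * (xc2[d] - X[i][d])
            for d in range(D)]
```

Because both centroids blend random and best-so-far information, the operator pulls intermediate individuals toward promising regions without committing fully to the current best.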

3.4.3. Inferior Group

Individuals in the inferior group, characterized by poor fitness, are tasked with performing wide-ranging global exploration, which serves to maintain a diverse population and help the algorithm steer clear of local optima. The DE/rand/1 mutation operator is ideally suited for this purpose. By generating perturbations from two random difference vectors, this operator facilitates a broad search of the solution space without relying on guidance from other individuals, thus exhibiting excellent global exploration properties. Its mathematical definition is given by Equation (22):
v_i(t) = x_{r1}(t) + F \times [x_{r2}(t) - x_{r3}(t)]
The terms x_{r1}(t), x_{r2}(t), and x_{r3}(t) denote three mutually distinct individuals chosen randomly from the current population. Following the mutation phase, the crossover and selection steps are executed as defined in Equations (19) and (20).
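The DE/rand/1 mutation of Equation (22) is correspondingly simple; this sketch assumes a list-of-vectors population representation and an illustrative F = 0.5:

```python
import random

def mutate_rand_1(X, i, F=0.5):
    """DE/rand/1 mutant of Eq. (22): base vector and difference pair are all
    drawn at random, favouring broad exploration over elitist guidance."""
    r1, r2, r3 = random.sample([j for j in range(len(X)) if j != i], 3)
    return [X[r1][d] + F * (X[r2][d] - X[r3][d]) for d in range(len(X[i]))]
```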
The complete operational flow of the MIDBO is detailed in two places: a flowchart presented in Figure 3 and its corresponding pseudocode, given in Algorithm 2.

3.5. Time Complexity Analysis of MIDBO

Given a population with a size N, a solution space with a dimension D, and a maximum iteration count of M, the time complexity of the baseline DBO is O ( N × D × M ) . We will now analyze the computational overhead introduced by the strategies integrated into our proposed algorithm:
  • Population initialization with the chaotic opposition-based learning strategy: O ( N × D + N × log N ) .
  • Oscillating balance factor: O ( N × D ) .
  • Improved foraging strategy: O ( N × D ) .
  • Multi-population differential co-evolutionary mechanism: O ( N × D + N × log N ) .
Thus, the time complexity of the MIDBO can be expressed as O ( M × N × ( D + log N ) ) , indicating a rise in computational demands relative to the DBO. However, substantial performance gains for the MIDBO are confirmed through the experiments and real-world engineering cases presented in Section 4 and Section 5. Accordingly, this escalation in complexity is deemed an acceptable compromise in exchange for the algorithm’s augmented efficacy.
Algorithm 2 The framework of the MIDBO algorithm
Require: The maximum iteration T_max, the population size N; obtain the initial population X of dung beetles by Equations (11) and (12)
Ensure: Optimal position X_best^g and its fitness value f_min
    1:    while t ≤ T_max do
    2:      for  i = 1 to Number of rolling dung beetles do
    3:          α = rand ( 1 )
    4:         if  α < 0.9  then
    5:            Update rolling dung beetle’s position by using Equation (14)
    6:         else
    7:            Update rolling dung beetle’s position by using Equation (15)
    8:         end if
    9:      end for
  10:      for  i = 1 to Number of breeding dung beetles do
  11:         Update breeding dung beetle’s position by using Equations (5) and (6)
  12:      end for
  13:      for  i = 1 to Number of small dung beetles do
  14:         if  rand > 0.5  then
  15:            Update small dung beetle’s position by using Equations (7) and (16)
  16:         else
  17:            Update small dung beetle’s position by using Equations (7) and (17)
  18:         end if
  19:      end for
  20:      for  i = 1 to Number of stealing dung beetles do
  21:         Update stealing dung beetle’s location by using Equation (9)
  22:      end for
  23:      Perform the population mutation operation by using Equations (18)–(22)
  24:       t = t + 1
  25:    end while
  26:    return X_best^g and its fitness value f_min

4. Experiments

All experiments in this section were conducted on a conventional personal computer running MATLAB R2024a, configured with a Windows 11 (64-bit) OS, an Intel(R) Core(TM) i9-13900H CPU @ 2.60 GHz, and 24 GB of RAM. To evaluate the performance of the proposed MIDBO on the CEC2017 [45] and CEC2022 [46] test suites, it was compared with the hiking optimization algorithm (HOA) [47], whale optimization algorithm (WOA) [48], differential evolution (DE) [9], particle swarm optimization (PSO) [49], a dung beetle optimization algorithm based on quantum computing and a multi-strategy hybrid (QHDBO) [50], and the original DBO. Both test suites comprise unimodal, multimodal, hybrid, and composition functions, thereby facilitating a more scientific and comprehensive evaluation of the respective merits and demerits of each algorithm. Table 1 lists the specific parameter configurations for all algorithms used in the comparative analysis.

4.1. Performance Evaluation of Improved Strategies

Ablation experiments were performed to evaluate how the four integrated strategies, both individually and in combination, affect the DBO's overall performance. The variant abbreviations used in the experiments are defined as follows: IDBO is the DBO + chaotic opposition-based learning; ODBO is the DBO + oscillating balance factor; FDBO is the DBO + improved foraging strategy; DDBO is the DBO + multi-population differential co-evolutionary mechanism; IOFDBO is the DBO + chaotic opposition-based learning + oscillating balance factor + improved foraging strategy; IODDBO is the DBO + chaotic opposition-based learning + oscillating balance factor + multi-population differential co-evolutionary mechanism; IFDDBO is the DBO + chaotic opposition-based learning + improved foraging strategy + multi-population differential co-evolutionary mechanism; OFDDBO is the DBO + oscillating balance factor + improved foraging strategy + multi-population differential co-evolutionary mechanism. The performance of these DBO variants was analyzed on 12 benchmark functions selected from the CEC2017 set, covering unimodal, simple multimodal, hybrid, and composition functions. The primary statistical analysis, which comprised the mean value and ranking, treated the canonical DBO as the baseline. Table 2 summarizes the findings of the ablation experiment.
The analysis of our experiments demonstrates that the four new mechanisms, especially the improved foraging strategy and the multi-population differential co-evolutionary mechanism, collectively improved the DBO's performance. Moreover, the rankings in the last row of the experimental results show that IFDDBO, IODDBO, and OFDDBO, ranked second, third, and fourth, achieved better overall optimization performance than the other DBO variants. This result underscores the significant contributions of three main strategies: chaotic opposition-based learning, the improved foraging strategy, and the multi-population differential co-evolutionary mechanism. The MIDBO outperformed the DBO, exhibiting enhanced local optima avoidance and an improved exploration–exploitation balance. This advancement results from the synergy among the four proposed strategies, which together guide the algorithm's search toward the global optimum.

4.2. CEC2017 Benchmark Function Results and Analysis

To thoroughly assess the MIDBO’s performance, we benchmarked it against six leading swarm intelligence algorithms using the CEC2017 benchmark suite (29 test functions). This assessment validated MIDBO’s effectiveness and generalizability across diverse optimization problem types. For experimental consistency, we applied identical parameter settings to all algorithms for the population size (N = 30), maximum iterations (T = 1000), and dimensionality (D = 50). To mitigate stochastic variability, for each test function, every algorithm was run independently 30 times, with performance quantified through the mean, standard deviation, and ranking.
With an average rank of 1.1379 across the 29 CEC2017 test functions, the MIDBO achieved the highest overall rank, a finding clearly supported by the results in Table 3. Notably, MIDBO significantly outperformed the original DBO across all test functions, a result that validates the effectiveness of our proposed modifications. The MIDBO ranked first and recorded the best mean value when its performance was analyzed on unimodal functions F1 and F3. The MIDBO also obtained the best mean and rank on multimodal functions F4, F5, F8, and F10. However, it did not secure the best mean on F6 and F7, ranking third. On F9, the MIDBO’s mean and rank were inferior to PSO’s, placing it second. In the hybrid and composition functions, the MIDBO’s mean on F27 was inferior to that of the DE algorithm; however, despite not achieving the optimal mean, it still ranked first. For all functions other than these, it obtained the best mean and the first rank.
Table 3 presents an analysis of the 50-dimensional problem results on the CEC2017 benchmark, revealing a distinct performance hierarchy among the compared algorithms. The MIDBO secured the premier rank, surpassing the QHDBO, DBO, and HOA, which ranked second, third, and last, respectively. These results provide compelling evidence that the novel strategies implemented in the MIDBO are highly effective at enhancing the convergence precision of the baseline DBO. The seven algorithms were ranked by performance in the following order: MIDBO > QHDBO > DBO > DE > PSO > WOA > HOA.
The results obtained from the Wilcoxon rank-sum test are displayed in Table 4. A p value exceeding 0.05, highlighted in bold, indicates the absence of a statistically significant difference and is denoted by the symbol “=”. Conversely, a p value below 0.05 signals a significant performance disparity. In such instances, superiority was assigned to the algorithm with the lower mean value, where “+” signifies that the MIDBO is superior while “-” indicates the superiority of the comparison algorithm. Adhering to this convention, the summary tallies of the MIDBO’s performance against each competitor were 29/0/0, 21/8/0, 25/4/0, 29/0/0, 25/3/1, and 22/6/1. In the comparison between the MIDBO and DBO, p > 0.05 on functions F7, F10, F18, and F22, and the MIDBO’s performance was superior to the DBO’s on the remaining 25 functions. Compared with DE, p > 0.05 on functions F6, F14, F15, F23, F24, F26, F27, and F29, and the MIDBO’s performance was superior to DE’s on the remaining 21 functions. Compared with PSO, p > 0.05 on functions F9, F26, and F27. On function F6, PSO was superior to the MIDBO, but the MIDBO’s performance was superior to PSO’s on the remaining 25 functions. Compared with the QHDBO, p > 0.05 on functions F6, F10, F14, F18, F25, and F26. As the QHDBO achieved the optimal mean value on function F7, it was considered superior on this function. However, the MIDBO’s performance was superior to the QHDBO’s on the remaining 22 functions. A comparative analysis with the two other algorithms demonstrates that the MIDBO outperformed all of them. Consequently, the MIDBO exhibited superior capability in solving 50-dimensional CEC2017 benchmark functions.
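For reference, the two-sided Wilcoxon rank-sum decision rule applied above can be approximated with a short, dependency-free helper. This normal-approximation sketch (which uses average ranks for ties but omits the tie correction to the variance) is our own illustration, not the authors' code:

```python
import math
from statistics import NormalDist

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation,
    suitable for two samples of 30 independent run results."""
    data = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n = len(data)
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tied values
        j = i
        while j + 1 < n and data[j + 1][0] == data[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    R1 = sum(r for r, (_, lab) in zip(ranks, data) if lab == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2       # mean of R1 under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (R1 - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

Following the convention in the text, a p value below 0.05 combined with a lower mean would be recorded as "+" for the MIDBO.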
Figure 4 presents the convergence behavior of all algorithms on the CEC2017 test suite. For unimodal functions, the MIDBO's convergence performance was substantially superior to that of the competing algorithms. For multimodal and hybrid functions, the MIDBO exhibited slower early-stage convergence than its competitors. However, the improved foraging strategy and the multi-population differential co-evolutionary mechanism provide a more effective means of circumventing local optima. Therefore, as the iterations progressed, after the other algorithms had already stagnated at suboptimal solutions, the MIDBO's convergence curve still trended downward, and in terms of convergence accuracy, the MIDBO clearly outperformed its competitors. The MIDBO demonstrated a particularly strong performance on functions F4, F5, F10–F16, F18, and F19. Its optimization capabilities were also more prominent on the composition functions, specifically F20–F23, F26, F28, and F30, when compared with other algorithms.
Figure 5 presents the box plot analysis of algorithm performance on the CEC2017 benchmark suite. In most function cases, the MIDBO exhibited significantly smaller and lower quartile ranges, demonstrating both superior solution quality and enhanced stability relative to the competing methods. When set against the other algorithms, the MIDBO algorithm’s boxes were consistently smaller and positioned lower, especially on functions F1, F3–F4, F11–F16, F18–F19, F23–F25, and F28–F30. Overall, the box plots indicate that the MIDBO showed a significant improvement compared with the DBO.
Figure 6 displays the radar chart comparing the ranking performance of the seven algorithms across all 29 functions. The radar chart analysis revealed that the MIDBO achieved the minimal enclosed area among all algorithms, indicating superior overall performance. Conversely, the HOA demonstrated the maximal area, while the DBO ranked third in terms of the enclosed space magnitude. These results demonstrate that the implemented enhancement strategies substantially improved the DBO’s performance while showing the MIDBO’s overall superiority over all comparative algorithms.

4.3. CEC2022 Benchmark Function Results and Analysis

To conduct a more comprehensive evaluation of the MIDBO’s effectiveness in solving complex optimization problems, we performed a systematic comparative analysis between the MIDBO and other selected algorithms using the CEC2022 benchmark suite. The experimental parameters were set as follows: a population size of 30, termination criterion of 1000 iterations, and 30 replicates to ensure result robustness. The comprehensive performance evaluation, supported by statistical analysis of the experimental data, highlights the marked advantage of the MIDBO (Table 5). It achieved a first-place finish among all compared algorithms with an average rank of 1.5, a result that corroborates its potent capabilities in global optimization. The performance advantage of the MIDBO became particularly pronounced when evaluated on specific test functions. Specifically, on eight of the benchmark functions (F1, F2, F4, F6–F8, F11, and F12), the MIDBO delivered the best average convergence accuracy while consistently ranking first. This consistency indicates the algorithm’s proficiency in maintaining an efficient optimization process across problems with diverse complexities. An exception occurred on function F10, where the algorithm secured the third rank. Furthermore, on functions F3 and F9, the MIDBO failed to obtain the optimal mean and rank, placing it third behind the DBO and QHDBO. On function F5, although the MIDBO’s mean was marginally inferior to PSO’s, it still achieved the first rank. While there were minor fluctuations in performance on these few functions, the overall findings decisively confirm the effectiveness of the MIDBO in improving convergence accuracy.
According to Table 5, the seven algorithms’ performance on the CEC2022 benchmark can be summarized with the following ranking: MIDBO > DBO > QHDBO > DE > PSO > WOA > HOA.
A Wilcoxon rank sum test was conducted, with the findings detailed in Table 6. The specific win/tie/loss results were 12/0/0, 9/3/0, 6/5/1, 11/1/0, 11/1/0, and 8/3/1. In the comparison between the MIDBO and DBO, p > 0.05 was observed on functions F3–F5, F7, and F10. On function F9, the DBO’s performance was superior to the MIDBO’s. Nevertheless, the MIDBO outperformed the DBO on the remaining six functions. Compared with DE, p > 0.05 was observed on functions F2, F3, and F4, while the MIDBO was superior on the other nine functions. Similarly, in the comparisons with the WOA and PSO, p > 0.05 was observed on functions F6 and F5, respectively, with the MIDBO proving superior results on the remaining 11 functions in each case. Against the QHDBO, p > 0.05 was observed on functions F3, F8, and F11. The QHDBO was superior only on function F9, whereas the MIDBO held an advantage on the other eight functions. Finally, the MIDBO exhibited a comprehensive superiority over the HOA. Consequently, it achieved more competitive results on the CEC2022 benchmark.
Figure 7 presents the convergence behavior of all algorithms on the CEC2022 test suite. On function F1, the MIDBO's convergence was initially slower than PSO's. However, as the iterations progressed, PSO plateaued while the MIDBO maintained its downward trend, ultimately reaching the optimal value. On function F5, the MIDBO's initial convergence was surpassed by both DE and PSO. In the later stages of the run, DE's descent stagnated while the MIDBO maintained a consistent downward trajectory; nevertheless, its convergence rate remained slower than PSO's, and it ultimately failed to match the latter's final precision. On function F9, the MIDBO initially outpaced all competing algorithms, but its progress plateaued in the later stages, eventually reaching a final accuracy comparable to that of the DBO and QHDBO. Overall, the MIDBO demonstrated a superior rate of convergence and, in most instances, higher solution precision, apart from a few cases in which it was either slower or failed to reach the optimal value. This robust performance affirms the efficacy of the multiple strategies, whose synergy empowers the algorithm to converge on the global optimum.
Using box plots, Figure 8 presents a comparative analysis of the algorithms’ effectiveness on the CEC2022 benchmark. Figure 8 illustrates that the MIDBO’s boxes were smaller and positioned lower on most functions, indicating its superior performance. For functions F3, F4, and F7, while the MIDBO exhibited a larger box size than PSO, its box was positioned lower. On F5, the MIDBO’s overall performance was inferior to PSO’s. On function F4, the MIDBO’s box was also larger than that of the HOA, but it maintained a lower position. In summary, the collective results establish the MIDBO’s clear superiority over its counterparts, and this outcome, supported by the box plot analysis, powerfully underscores the value of the enhancements presented in this paper.
Figure 9 displays the radar chart, which compares the ranks achieved by the seven algorithms for the 12 test functions from the CEC2022 suite. The figure reveals that the MIDBO’s enclosed shape had the smallest area. In contrast, the HOA’s area was the largest, while the DBO ranked second and the QHDBO ranked third. The radar chart’s enclosed areas confirm that the various proposed strategies notably improved the DBO’s optimization performance.

5. Engineering Optimization Issues

The effectiveness of the MIDBO in real-world scenarios is examined in this section. To accomplish this, its performance was validated on three standard engineering design case studies: the tension–compression spring design problem [51], the pressure vessel design problem [52], and the speed reducer design problem [53]. Because these engineering problems are multi-constrained, this study used the penalty function method to handle the constraints, owing to its simplicity and straightforward application. The function is formulated as shown below:
F(x) = f(x) + r \cdot \left[ \sum_{i=1}^{m} \big( \max(0, g_i(x)) \big)^{\gamma} + \sum_{j=1}^{n} \left| h_j(x) \right|^{\eta} \right]
where F(x) denotes the modified objective function. The experiments were conducted with the penalty coefficient r set to 10e100. The terms g_i(x) and h_j(x) are the inequality and equality constraint functions, respectively, and γ and η are constants, both set to two in this paper. The total numbers of inequality and equality constraints are denoted by m and n, respectively. A penalty function augments the objective value with a penalty term whenever a candidate solution violates a constraint, thereby steering the search toward feasible, promising regions.
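A minimal sketch of this penalty scheme follows; the closure-based interface is our own design, while r = 10e100 and γ = η = 2 follow the settings reported above:

```python
def penalized(f, ineq=(), eq=(), r=10e100, gamma=2, eta=2):
    """Static penalty transformation described in the text. Inequality
    constraints g(x) <= 0 are penalized via max(0, g(x))**gamma, equality
    constraints h(x) = 0 via abs(h(x))**eta, both scaled by r."""
    def F(x):
        p = sum(max(0.0, g(x)) ** gamma for g in ineq)
        p += sum(abs(h(x)) ** eta for h in eq)
        return f(x) + r * p
    return F
```

With so large a coefficient, any constraint violation dwarfs the raw objective, so infeasible candidates lose every greedy comparison against feasible ones.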
For the engineering design problem experiments, the same set of comparative algorithms as that in the preceding section was used. The experimental parameters were configured as follows: a population size of 30, termination criterion of 500 iterations, and 20 replicates to ensure result robustness. Performance was evaluated using four statistical metrics: the best, mean, standard deviation, and worst values.

5.1. Tension–Compression Spring Design Issues

The classic tension–compression spring design challenge involves optimizing three key variables: the wire diameter (d), mean coil diameter (D), and number of active coils (N). The primary objective is to find the design with the lowest possible weight while adhering to constraints on the minimum deflection, surge frequency, and shear stress. The governing equations for this task are presented below in Equations (24)–(27).
Consider
x = [x_1, x_2, x_3] = [d, D, N]
Minimize
f(x) = (x_3 + 2) x_2 x_1^2
subject to
g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0,
g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,
g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,
g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0
The variable range is
0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15.
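Under this formulation, the spring objective (Equation (25)) and constraints (Equation (26)) translate directly into code; the sketch below is our own illustration, with feasibility meaning every constraint value is at most zero:

```python
def spring_weight(x):
    """Spring objective of Eq. (25): minimize (N + 2) * D * d**2."""
    d, D, N = x
    return (N + 2) * D * d * d

def spring_constraints(x):
    """Inequality constraints g1-g4 of Eq. (26); feasible when all <= 0."""
    d, D, N = x
    return [
        1 - D ** 3 * N / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ]
```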
The findings, summarized in Table 7 and Table 8, reveal that the MIDBO outperformed all other algorithms on this problem by achieving superior optimization accuracy and significantly better stability. The algorithm’s high ranking underscores its strong competitive edge when tackling the tension–compression spring design challenge.

5.2. Pressure Vessel Design Issues

The pressure vessel design challenge required optimizing four key parameters: the shell thickness ( x 1 ), head thickness ( x 2 ), inner radius ( x 3 ), and the vessel’s length (excluding heads) ( x 4 ). Minimizing the total cost of the vessel while satisfying four specific constraints was the main goal of this design problem. The governing mathematical model for this task is provided in Equations (28)–(31).
Consider
x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]
Minimize
f(x) = 1.7781 x_2 x_3^2 + 0.6224 x_1 x_3 x_4 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3
subject to
g_1(x) = -x_1 + 0.0193 x_3 \le 0,
g_2(x) = -x_2 + 0.00954 x_3 \le 0,
g_3(x) = x_4 - 240 \le 0,
g_4(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0
With a variable range
1 \le x_1, x_2 \le 99, \quad 10 \le x_3, x_4 \le 200.
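The pressure vessel model likewise maps directly to code; this sketch follows the objective of Equation (29) and the constraints of Equation (30) and is our own illustration:

```python
import math

def vessel_cost(x):
    """Vessel objective of Eq. (29): cost of material, forming, and welding."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(x):
    """Inequality constraints g1-g4 of Eq. (30); feasible when all <= 0."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,
        -Th + 0.00954 * R,
        L - 240,
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000,
    ]
```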
Table 9 and Table 10 provide a performance assessment of seven algorithms applied to the pressure vessel design challenge. The experimental outcomes unequivocally establish the significant advantages of the proposed MIDBO concerning its optimization accuracy and average performance. To be specific, the MIDBO outperformed all competitors in the two key metrics of the best and mean values, securing an optimal value of 5743.021200 and an average value of 6033.448733. Notably, the PSO algorithm exhibited superior performance in the worst value and Std metrics. Nonetheless, the MIDBO’s premier optimization capability and exceptional average performance confirm its enhanced overall efficacy and practical value for tackling engineering challenges characterized by complex constraints and stringent precision demands, setting it apart from the other compared algorithms.

5.3. Speed Reducer Design Issues

The speed reducer design issues involve optimizing a set of seven parameters: the face width ( b = x 1 ), module of teeth ( m = x 2 ), number of teeth on the pinion ( p = x 3 ), lengths of the first and second shafts between bearings ( l 1 = x 4 and l 2 = x 5 , respectively), and the diameters of the first and second shafts ( d 1 = x 6 and d 2 = x 7 , respectively). Finding the variable combination that yielded the lowest possible weight for the speed reducer was the primary objective, all while adhering to 11 engineering constraints. The governing equations for this task are provided in Equations (32)–(35).
Consider
x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, p, l_1, l_2, d_1, d_2]
Minimize
f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)
subject to
g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0,
g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,
g_3(x) = \frac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0,
g_4(x) = \frac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0,
g_5(x) = \frac{\sqrt{(745 x_4 / (x_2 x_3))^2 + 16.9 \times 10^6}}{110 x_6^3} - 1 \le 0,
g_6(x) = \frac{\sqrt{(745 x_5 / (x_2 x_3))^2 + 157.5 \times 10^6}}{85 x_7^3} - 1 \le 0,
g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0,
g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0,
g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0,
g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0,
g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0
With a variable range of
2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.3 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5 \le x_7 \le 5.5
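Finally, the speed reducer model of Equations (33) and (34) can be coded as follows; this sketch uses the standard formulation of the problem from the engineering optimization literature (including the 7.4777(x_6^3 + x_7^3) term of the weight function) and is our own illustration:

```python
import math

def reducer_weight(x):
    """Reducer objective of Eq. (33): weight of the speed reducer."""
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

def reducer_constraints(x):
    """Inequality constraints g1-g11 of Eq. (34); feasible when all <= 0."""
    b, m, p, l1, l2, d1, d2 = x
    return [
        27 / (b * m ** 2 * p) - 1,
        397.5 / (b * m ** 2 * p ** 2) - 1,
        1.93 * l1 ** 3 / (m * d1 ** 4 * p) - 1,
        1.93 * l2 ** 3 / (m * d2 ** 4 * p) - 1,
        math.sqrt((745 * l1 / (m * p)) ** 2 + 16.9e6) / (110 * d1 ** 3) - 1,
        math.sqrt((745 * l2 / (m * p)) ** 2 + 157.5e6) / (85 * d2 ** 3) - 1,
        m * p / 40 - 1,
        5 * m / b - 1,
        b / (12 * m) - 1,
        (1.5 * d1 + 1.9) / l1 - 1,
        (1.1 * d2 + 1.9) / l2 - 1,
    ]
```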
As demonstrated by the data in Table 11 and Table 12, the different algorithms exhibited varied performance in terms of optimization accuracy and stability. Although the DE algorithm achieved the best value (2513.700952), its standard deviation (75.53042354) was fourth among all compared algorithms, indicating that its results were highly unstable. In contrast, our proposed MIDBO demonstrated an exceptional balance in its performance. It not only secured the second-best optimization accuracy (2994.234252), outperforming most of the comparative algorithms, including the baseline DBO, but more importantly, it achieved the best stability with a standard deviation of 0.000319158, far surpassing all other algorithms. Compared with the baseline DBO, the MIDBO showed significant improvements across all four metrics. In conclusion, the MIDBO, while preserving a high-accuracy solution capability, greatly enhanced algorithmic stability, rendering it an efficient and highly reliable option for solving such engineering optimization problems.

6. Conclusions

In this paper, we proposed an improved Dung Beetle Optimizer (MIDBO), which adds a chaotic opposition-based learning strategy, an oscillating balance factor strategy, an improved foraging strategy, and a multi-population differential co-evolutionary mechanism to the DBO. This improved algorithm was benchmarked against the original DBO and other swarm intelligence algorithms using the CEC2017 and CEC2022 benchmark suites to assess its performance. The MIDBO, as shown by the experimental results, excelled in two key areas: its speed of convergence and the precision of its solutions. Furthermore, the results from three real-world engineering challenges confirmed MIDBO’s strong performance when handling complex optimization tasks.
Despite the MIDBO’s overall strong performance, the numerical experiments revealed relatively poor convergence accuracy on certain functions across the two test suites, suggesting room for further enhancement. The addition of multiple strategies has also increased the MIDBO’s complexity. Future work could therefore combine the MIDBO with other swarm intelligence algorithms to form more effective hybrid algorithms and further improve its performance. To broaden the international scope of this research and drive the algorithm’s continued evolution, our future efforts will also include comparative analyses against advanced algorithms from the global literature. This approach is designed to remedy the limited range of our current references, ensure that the MIDBO is evaluated against internationally recognized benchmarks, and furnish a more robust and diversified theoretical underpinning for its subsequent refinement.
Additionally, while the MIDBO has demonstrated commendable performance on engineering design problems, according to the NFL theorem, there is no universally optimal algorithm suited for all optimization challenges. Consequently, the extent of the MIDBO’s competitive edge in more arduous engineering domains warrants further investigation. This recognition impels our future work toward probing the applicability of the MIDBO in sophisticated engineering contexts—such as unmanned aerial vehicle path planning and task allocation, regression prediction, feature selection, and the optimization of model parameters—with the ultimate goal of broadening its range of practical applications. We hope that these studies can contribute to the development of the swarm intelligence field by providing more efficient methods and tools.

Author Contributions

Conceptualization, W.L.; methodology, W.L.; software, W.L.; validation, W.L.; formal analysis, Y.Y. and X.M.; investigation, Y.Z.; resources, Y.H.; data curation, J.C., Y.Y. and X.M.; writing—original draft preparation, W.L.; writing—review and editing, W.L. and Y.H.; visualization, W.L. and Y.Z.; supervision, Y.H.; project administration, J.C.; funding acquisition, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Graduate Innovation Special Fund of East China University of Technology (grant no. DHYC-2025072).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No dataset was used in this research.

Acknowledgments

The authors would like to thank the School of Artificial Intelligence and Information Engineering at East China University of Technology for providing the laboratory facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HOA: Hiking optimization algorithm
WOA: Whale optimization algorithm
QHDBO: Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion
DBO: Dung Beetle Optimizer
Std: Standard deviation
MIDBO: Multi-Strategy Improved Dung Beetle Optimizer
TEO: Thermal exchange optimization
BBO: Biogeography-based optimization
ASO: Atom search optimization
DE: Differential evolution
KOA: Keplerian optimization algorithm
SGO: Social group optimization
LCA: League championship algorithm
SCA: Sine cosine algorithm
SPO: Student psychology optimization
BWO: Beluga whale optimization
PSO: Particle swarm optimization
WO: Walrus optimizer
OBL: Opposition-based learning
HO: Hippopotamus optimization algorithm
GGO: Greylag Goose Optimization
NFL: No free lunch
Mean: Mean value
Best: Best value
Worst: Worst value

References

  1. Mostafa, R.R.; Gaheen, M.A.; Abd ElAziz, M.; Al-Betar, M.A.; Ewees, A.A. An improved gorilla troops optimizer for global optimization problems and feature selection. Knowl.-Based Syst. 2023, 269, 110462.
  2. Huang, H.; Wu, R.; Huang, H.; Wei, J.; Han, Z.; Wen, L.; Yuan, Y. Multi-strategy improved artificial rabbit optimization algorithm based on fusion centroid and elite guidance mechanisms. Comput. Methods Appl. Mech. Eng. 2024, 425, 116915.
  3. Mao, Z.; Yang, Z.; Luo, D.; Lin, D.; Jiang, Q.; Huang, G.; Liao, Z. A multi-strategy enhanced dung beetle algorithm for solving real-world engineering problems. Artif. Intell. Rev. 2025, 58, 253.
  4. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl.-Based Syst. 2018, 159, 20–50.
  5. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2022, 188, 116026.
  6. Yan, J.; Hu, G.; Jia, H.; Hussien, A.G.; Abualigah, L. GPSOM: Group-based particle swarm optimization with multiple strategies for engineering applications. J. Big Data 2025, 12, 114.
  7. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as lightning attachment procedure optimization. Appl. Soft Comput. 2017, 59, 596–621.
  8. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
  9. Storn, R.; Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  10. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  11. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
  12. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304.
  13. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454.
  14. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex Intell. Syst. 2016, 2, 173–203.
  15. Kashan, A.H. League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 2014, 16, 171–200.
  16. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804.
  17. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215.
  18. Han, M.; Du, Z.; Yuen, K.F.; Zhu, H.; Li, Y.; Yuan, Q. Walrus optimizer: A novel nature-inspired metaheuristic algorithm. Expert Syst. Appl. 2024, 239, 122413.
  19. Amiri, M.H.; Mehrabi Hashjin, N.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032.
  20. El-Kenawy, E.-S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag goose optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147.
  21. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  22. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  23. Xie, G.; Zhang, M.; Wang, D.; Yang, M. Economic-environmental dispatch of isolated microgrids based on dynamic classification sparrow search algorithm. Expert Syst. Appl. 2025, 271, 126677.
  24. Zhu, Y.; Huang, H.; Wei, J.; Yi, J.; Liu, J.; Li, M. ISO: An improved snake optimizer with multi-strategy enhancement for engineering optimization. Expert Syst. Appl. 2025, 281, 127660.
  25. Abdel-salam, M.; Alomari, S.A.; Almomani, M.H.; Hu, G.; Lee, S.; Saleem, K.; Smerat, A.; Abualigah, L. Quadruple strategy-driven hiking optimization algorithm for low and high-dimensional feature selection and real-world skin cancer classification. Knowl.-Based Syst. 2025, 315, 113286.
  26. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
  27. Zhang, D.; Wang, Z.; Zhao, Y.; Sun, F. Multi-Strategy Fusion Improved Dung Beetle Optimization Algorithm and Engineering Design Application. IEEE Access 2024, 12, 97771–97786.
  28. Peng, Q.; Fu, X.; Lin, F.; Zhu, X.; Ning, J.; Li, F. Multi-Scale Convolutional Neural Networks optimized by elite strategy dung beetle optimization algorithm for encrypted traffic classification. Expert Syst. Appl. 2025, 264, 125729.
  29. Yu, X.; Chen, X.; Huang, R.; Chen, J.; Bao, M.; He, H.; Wang, L.; Lu, G. Optimization of manifold structural parameters for high-power proton exchange membrane fuel cell stack. Int. J. Hydrogen Energy 2025, 100, 921–935.
  30. Quan, R.; Qiu, Z.; Wan, H.; Yang, Z.; Li, X. Dung beetle optimization algorithm-based hybrid deep learning model for ultra-short-term PV power prediction. iScience 2024, 27, 111126.
  31. Yang, L.; Zhang, X.; Li, Z.; Li, L.; Shi, Y. A LODBO algorithm for multi-UAV search and rescue path planning in disaster areas. Chin. J. Aeronaut. 2025, 38, 103301.
  32. Wang, D.; Li, Z.; Guo, J.; Lai, J. Forecasting deep coalbed methane production using variational mode decomposition and dung beetle optimized long and short-term memory model. Gas Sci. Eng. 2025, 135, 205549.
  33. Li, Y.; Sun, K.; Yao, Q.; Wang, L. A dual-optimization wind speed forecasting model based on deep learning and improved dung beetle optimization algorithm. Energy 2024, 286, 129604.
  34. Wu, Q.; Xu, H.; Liu, M. Applying an Improved Dung Beetle Optimizer Algorithm to Network Traffic Identification. CMC-Comput. Mater. Contin. 2024, 78, 4091.
  35. Yu, M.; Du, J.; Xu, X.; Xu, J.; Jiang, F.; Fu, S.; Zhang, J.; Liang, A. A multi-strategy enhanced Dung Beetle Optimization for real-world engineering problems and UAV path planning. Alex. Eng. J. 2025, 118, 406–434.
  36. Yan, K.; Cao, W. Improved Cooperative Search Algorithm with Multi-Strategy Fusion. In Proceedings of the 2024 5th International Conference on Mechatronics Technology and Intelligent Manufacturing (ICMTIM), Nanjing, China, 26–28 April 2024; pp. 725–728.
  37. Liu, F.; Xu, W.; Feng, Z.; Yu, C.; Liang, X.; Su, Q.; Gao, J. Task Allocation and Path Planning Method for Unmanned Underwater Vehicles. Drones 2025, 9, 411.
  38. Wang, W.; Tian, J. An Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle Chaos Map and Levy Flight Operator. Electronics 2022, 11, 3678.
  39. Zhou, Y.; Shao, Z.; Li, H.; Chen, J.; Sun, H.; Wang, Y.; Wang, N.; Pei, L.; Wang, Z.; Zhang, H.; et al. A Novel Back Propagation Neural Network Based on the Harris Hawks Optimization Algorithm for the Remaining Useful Life Prediction of Lithium-Ion Batteries. Energies 2025, 18, 3842.
  40. Xu, Y.; Yang, Z.; Li, X.; Kang, H.; Yang, X. Dynamic opposite learning enhanced teaching–learning-based optimization. Knowl.-Based Syst. 2020, 188, 104966.
  41. Zhou, Y.; Lu, F.; Xu, J.; Wu, L. Multi-Agent Cross-Domain Collaborative Task Allocation Problem Based on Multi-Strategy Improved Dung Beetle Optimization Algorithm. Appl. Sci. 2024, 14, 7175.
  42. Ke, J.; Chen, T. Data Decomposition Modeling Based on Improved Dung Beetle Optimization Algorithm for Wind Power Prediction. Data 2024, 9, 146.
  43. Yang, X.; Liu, J.; Liu, Y.; Xu, P.; Yu, L.; Zhu, L.; Chen, H.; Deng, W. A Novel Adaptive Sparrow Search Algorithm Based on Chaotic Mapping and T-Distribution Mutation. Appl. Sci. 2021, 11, 11192.
  44. Xiong, W.; Zhu, D.; Li, R.; Yao, Y.; Zhou, C.; Cheng, S. An effective method for global optimization–Improved slime mould algorithm combine multiple strategies. Egypt. Inform. J. 2024, 11, 100442.
  45. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report; Nanyang Technological University: Singapore, 2016; pp. 1–18.
  46. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523.
  47. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024, 296, 111880.
  48. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  49. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  50. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219.
  51. Dehghani, M.; Aly, M.; Rodriguez, J.; Sheybani, E.; Javidi, G. A Novel Nature-Inspired Optimization Algorithm: Grizzly Bear Fat Increase Optimizer. Biomimetics 2025, 10, 379.
  52. Liu, J.; Wang, R.; Deng, Y.; Huang, X.; Li, Z. Sharpbelly Fish Optimization Algorithm: A Bio-Inspired Metaheuristic for Complex Engineering. Biomimetics 2025, 10, 445.
  53. Wei, J.; Gu, Y.; Yan, Y.; Li, Z.; Lu, B.; Pan, S.; Cheong, N. LSEWOA: An Enhanced Whale Optimization Algorithm with Multi-Strategy for Numerical and Engineering Design Optimization Problems. Sensors 2025, 25, 2054.
Figure 1. Scatter plot and frequency distribution histogram for the improved circle chaotic map.
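Figure 1 characterizes the improved Circle chaotic map by its scatter plot and frequency histogram. For readers who want to reproduce a comparable initialization sequence, a minimal sketch of the classic Circle map, on which the improvement builds, follows; the control parameters a = 0.5 and b = 0.2 are common defaults, not necessarily the tuned values used in this paper.

```python
import math

def circle_map_sequence(n, x0=0.3, a=0.5, b=0.2):
    """Generate n iterates of the classic Circle chaotic map:
    x_{k+1} = mod(x_k + b - (a / (2*pi)) * sin(2*pi*x_k), 1).
    a and b are common defaults, not the paper's tuned values."""
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append((x + b - (a / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0)
    return xs

seq = circle_map_sequence(1000)
# Every iterate stays inside [0, 1), which is what makes the map usable
# for population initialization after scaling to the search bounds.
assert all(0.0 <= x < 1.0 for x in seq)
```

Scaling each iterate as lb + x * (ub − lb) then yields an initial population spread over the search space.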
Figure 2. Impact of the parameter α on the variation in w.
Figure 3. Flowchart of the MIDBO.
Figure 4. CEC 2017 convergence curves.
Figure 5. CEC2017 box plots, where “+” denotes outliers.
Figure 6. Radar chart for the CEC2017 test suite.
Figure 7. CEC2022 convergence curves.
Figure 8. CEC2022 box plots, where “+” denotes outliers.
Figure 9. Radar chart for the CEC2022 test suite.
Table 1. Algorithm parameters.
Algorithm | Parameter | Value
HOA | Angle of inclination θ_i,t | [0°, 50°]
HOA | Sweep factor | [1, 3]
WOA | a | [2, 0]
WOA | b | 1
DE | F | 0.7
DE | CR | 0.85
DBO | k | 0.1
DBO | b | 0.3
DBO | S | 0.5
PSO | c1, c2 | 2
QHDBO | RDB | 6
QHDBO | EFDB | 13
QHDBO | SDB | 11
MIDBO | The same as the DBO parameters |
Table 2. Ablation experiment results.
Functions | DBO | IDBO | ODBO | FDBO | DDBO | IOFDBO | IODDBO | IFDDBO | OFDDBO | MIDBO
F3 | 9.1480 × 10^4 | 8.8074 × 10^4 | 9.0500 × 10^4 | 7.2175 × 10^4 | 5.2573 × 10^4 | 7.1975 × 10^4 | 4.9882 × 10^4 | 5.1352 × 10^4 | 5.2631 × 10^4 | 4.9674 × 10^4
F9 | 5.8306 × 10^3 | 6.1156 × 10^3 | 5.3464 × 10^3 | 4.6291 × 10^3 | 3.8974 × 10^3 | 4.7167 × 10^3 | 3.8323 × 10^3 | 4.1896 × 10^3 | 3.7567 × 10^3 | 3.5808 × 10^3
F10 | 6.5226 × 10^3 | 6.3232 × 10^3 | 6.5760 × 10^3 | 6.0762 × 10^3 | 5.9728 × 10^3 | 6.1474 × 10^3 | 6.3496 × 10^3 | 5.6195 × 10^3 | 5.8266 × 10^3 | 5.7701 × 10^3
F11 | 1.8481 × 10^3 | 1.5606 × 10^3 | 1.9749 × 10^3 | 1.3736 × 10^3 | 1.3377 × 10^3 | 1.3722 × 10^3 | 1.3187 × 10^3 | 1.3266 × 10^3 | 1.3145 × 10^3 | 1.2949 × 10^3
F13 | 1.3558 × 10^7 | 1.0774 × 10^7 | 2.1910 × 10^7 | 2.1798 × 10^6 | 1.2350 × 10^5 | 1.3423 × 10^5 | 1.5383 × 10^5 | 3.4804 × 10^5 | 1.7070 × 10^5 | 1.1127 × 10^5
F15 | 2.6815 × 10^5 | 5.0277 × 10^4 | 8.1435 × 10^5 | 1.8088 × 10^7 | 2.7319 × 10^4 | 2.7360 × 10^4 | 1.3875 × 10^4 | 1.6778 × 10^4 | 1.3703 × 10^4 | 1.2670 × 10^4
F16 | 3.3652 × 10^3 | 3.3148 × 10^3 | 3.3964 × 10^3 | 3.0473 × 10^3 | 2.9245 × 10^3 | 2.9599 × 10^3 | 2.8980 × 10^3 | 2.8956 × 10^3 | 2.9186 × 10^3 | 2.8847 × 10^3
F17 | 2.6582 × 10^3 | 2.4675 × 10^3 | 2.6686 × 10^3 | 2.5544 × 10^3 | 2.4240 × 10^3 | 2.5031 × 10^3 | 2.3827 × 10^3 | 2.3913 × 10^3 | 2.4043 × 10^3 | 2.3716 × 10^3
F19 | 2.3895 × 10^6 | 6.9655 × 10^6 | 4.0966 × 10^6 | 5.1020 × 10^4 | 2.9648 × 10^4 | 4.9796 × 10^4 | 1.8855 × 10^4 | 2.8460 × 10^4 | 2.3722 × 10^4 | 1.1785 × 10^4
F22 | 5.4224 × 10^3 | 2.6580 × 10^3 | 5.0387 × 10^3 | 3.2489 × 10^3 | 4.0564 × 10^3 | 2.3804 × 10^3 | 3.6102 × 10^3 | 2.4382 × 10^3 | 3.3497 × 10^3 | 2.3443 × 10^3
F24 | 3.1860 × 10^3 | 3.1593 × 10^3 | 3.1862 × 10^3 | 3.1353 × 10^3 | 3.0027 × 10^3 | 3.1231 × 10^3 | 3.0056 × 10^3 | 2.9823 × 10^3 | 2.9992 × 10^3 | 2.9843 × 10^3
F27 | 3.3383 × 10^3 | 3.3188 × 10^3 | 3.3405 × 10^3 | 3.3370 × 10^3 | 3.2671 × 10^3 | 3.3342 × 10^3 | 3.2695 × 10^3 | 3.2749 × 10^3 | 3.2706 × 10^3 | 3.2667 × 10^3
Average rank | 9.58 | 8.08 | 9.17 | 6.08 | 4.42 | 5.67 | 3.25 | 3.17 | 3.75 | 1.83
Final rank | 10 | 8 | 9 | 7 | 5 | 6 | 3 | 2 | 4 | 1
Note: The best results are highlighted in bold.
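The “Average rank” row in Table 2 is the mean of each variant’s per-function rank, with lower objective values ranking better and ties sharing an averaged rank. A small sketch of that bookkeeping, using toy data rather than the table’s actual values:

```python
def average_ranks(table):
    """table: list of rows, each row holding the objective values of the
    same set of algorithms on one function (lower is better). Returns the
    mean rank of each algorithm across rows; ties share an averaged rank."""
    n_alg = len(table[0])
    totals = [0.0] * n_alg
    for row in table:
        order = sorted(range(n_alg), key=lambda j: row[j])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            k = i
            # extend k over any run of tied values
            while k + 1 < n_alg and row[order[k + 1]] == row[order[i]]:
                k += 1
            avg = (i + k) / 2 + 1  # average of the tied positions, 1-based
            for t in range(i, k + 1):
                ranks[order[t]] = avg
            i = k + 1
        for j in range(n_alg):
            totals[j] += ranks[j]
    return [t / len(table) for t in totals]

# Toy data: algorithm B wins both functions, A and C split the rest.
print(average_ranks([[3.0, 1.0, 2.0], [2.0, 1.0, 3.0]]))  # [2.5, 1.0, 2.5]
```

The “Final rank” row then simply orders the algorithms by these averages.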
Table 3. CEC2017 test results (Dim = 50).
Function | Metric | HOA | DE | DBO | WOA | PSO | QHDBO | MIDBO
F1mean8.5501 × 10 10 1.0703 × 10 10 8.5495 × 10 08 7.9417 × 10 09 1.6788 × 10 10 2.8328 × 10 08 7.5694 × 10 07
std1.1503 × 10 10 1.2858 × 10 10 5.4221 × 10 08 2.5000 × 10 09 3.1106 × 10 09 3.8752 × 10 08 1.6702 × 10 08
rank7534621
F3mean1.5060 × 10 05 5.4257 × 10 05 2.0660 × 10 05 2.4727 × 10 05 1.6887 × 10 05 2.7622 × 10 05 1.3188 × 10 05
std1.6112 × 10 04 1.6836 × 10 05 3.7777 × 10 04 7.9227 × 10 04 3.7158 × 10 04 6.2939 × 10 04 2.2485 × 10 04
rank2745361
F4mean2.1229 × 10 04 2.2476 × 10 03 9.3666 × 10 02 2.4055 × 10 03 1.8142 × 10 03 7.4855 × 10 02 6.6259 × 10 02
std4.3562 × 10 03 2.0552 × 10 03 2.3795 × 10 02 5.5134 × 10 02 6.6578 × 10 02 1.0029 × 10 02 6.8947 × 10 01
rank7436521
F5mean1.0831 × 10 03 8.8866 × 10 02 9.5324 × 10 02 1.0997 × 10 03 9.7246 × 10 02 8.6202 × 10 02 8.2331 × 10 02
std3.6603 × 10 01 8.6824 × 10 01 1.0332 × 10 02 7.6843 × 10 01 2.6504 × 10 01 5.5533 × 10 01 5.0245 × 10 01
rank6347521
F6mean6.8011 × 10 02 6.4660 × 10 02 6.6350 × 10 02 6.9150 × 10 02 6.4470 × 10 02 6.5259 × 10 02 6.4980 × 10 02
std7.0286 × 10 00 1.3926 × 10 01 1.0441 × 10 01 1.0893 × 10 01 9.4866 × 10 00 8.8893 × 10 00 7.0185 × 10 00
rank6257143
F7mean1.8284 × 10 03 2.0185 × 10 03 1.4199 × 10 03 1.8787 × 10 03 1.5635 × 10 03 1.3630 × 10 03 1.4381 × 10 03
std6.9015 × 10 01 3.5056 × 10 02 2.0400 × 10 02 1.1413 × 10 02 5.0765 × 10 01 1.0252 × 10 02 1.3456 × 10 02
rank5637412
F8mean1.4127 × 10 03 1.2143 × 10 03 1.2947 × 10 03 1.3955 × 10 03 1.2794 × 10 03 1.1445 × 10 03 1.1120 × 10 03
std4.4990 × 10 01 1.0119 × 10 02 9.0161 × 10 01 8.5121 × 10 01 2.7752 × 10 01 4.1225 × 10 01 4.2551 × 10 01
rank7356421
F9mean2.5961 × 10 04 2.5891 × 10 04 2.2432 × 10 04 3.4267 × 10 04 1.2187 × 10 04 2.1279 × 10 04 1.2867 × 10 04
std4.5413 × 10 03 9.5442 × 10 03 6.9474 × 10 03 7.7342 × 10 03 5.4466 × 10 03 1.1476 × 10 04 5.4010 × 10 03
rank5647132
F10mean1.3227 × 10 04 1.3139 × 10 04 1.0490 × 10 04 1.2749 × 10 04 1.4697 × 10 04 1.0735 × 10 04 9.4563 × 10 03
std8.7954 × 10 02 1.5325 × 10 03 2.3260 × 10 03 1.2102 × 10 03 4.8484 × 10 02 2.9806 × 10 03 1.5827 × 10 03
rank6524731
F11mean1.8849 × 10 04 3.2673 × 10 04 2.8221 × 10 03 5.3163 × 10 03 3.9096 × 10 03 1.8923 × 10 03 1.5796 × 10 03
std2.4992 × 10 03 2.5130 × 10 04 1.8224 × 10 03 1.4180 × 10 03 7.4525 × 10 02 3.0039 × 10 02 3.5460 × 10 02
rank6735421
F12mean4.7782 × 10 10 1.4146 × 10 09 3.8830 × 10 08 1.8606 × 10 09 5.8107 × 10 09 1.9902 × 10 08 4.0991 × 10 07
std9.1222 × 10 09 2.3082 × 10 09 4.8494 × 10 08 6.3968 × 10 08 2.7816 × 10 09 1.6713 × 10 08 3.8590 × 10 07
rank7435621
F13mean2.4779 × 10 10 6.4374 × 10 08 3.1567 × 10 07 1.1138 × 10 08 1.4752 × 10 09 5.5005 × 10 06 1.2803 × 10 05
std8.9960 × 10 09 1.4725 × 10 09 4.5117 × 10 07 7.7859 × 10 07 1.2828 × 10 09 7.7124 × 10 06 1.1488 × 10 05
rank7345621
F14mean5.6274 × 10 07 6.2251 × 10 06 3.3208 × 10 06 4.5881 × 10 06 2.2088 × 10 06 1.8594 × 10 06 1.1050 × 10 06
std3.8611 × 10 07 1.2230 × 10 07 3.3273 × 10 06 4.3184 × 10 06 4.0268 × 10 06 1.6278 × 10 06 6.9628 × 10 05
rank7256431
F15mean4.1982 × 10 09 4.0295 × 10 07 4.4859 × 10 07 1.7444 × 10 07 3.1713 × 10 08 1.2247 × 10 05 4.1742 × 10 04
std2.7023 × 10 09 1.9422 × 10 08 1.4271 × 10 08 2.2923 × 10 07 7.2589 × 10 07 6.9388 × 10 04 2.6410 × 10 04
rank7245631
F16mean7.1270 × 10 03 4.5082 × 10 03 4.6632 × 10 03 5.9137 × 10 03 5.0266 × 10 03 4.7454 × 10 03 3.8350 × 10 03
std8.9921 × 10 02 7.2752 × 10 02 6.8366 × 10 02 9.7806 × 10 02 3.8689 × 10 02 7.4447 × 10 02 4.7648 × 10 02
rank7236541
F17mean4.9343 × 10 03 4.2452 × 10 03 4.2215 × 10 03 4.3360 × 10 03 4.4604 × 10 03 3.8339 × 10 03 3.4381 × 10 03
std7.5421 × 10 02 7.1008 × 10 02 5.3508 × 10 02 5.6636 × 10 02 2.5988 × 10 02 4.4537 × 10 02 2.3454 × 10 02
rank7345621
F18mean1.0312 × 10 08 4.5927 × 10 07 7.9394 × 10 06 3.8041 × 10 07 1.2055 × 10 07 6.7218 × 10 06 4.3365 × 10 06
std5.1922 × 10 07 7.2484 × 10 07 1.0713 × 10 07 3.2022 × 10 07 4.5469 × 10 06 1.2256 × 10 07 3.2555 × 10 06
rank7536421
F19mean1.1333 × 10 09 1.3032 × 10 07 6.8058 × 10 06 7.1569 × 10 06 1.6059 × 10 08 1.3002 × 10 06 7.7778 × 10 04
std8.9663 × 10 08 3.8916 × 10 07 9.6921 × 10 06 7.2814 × 10 06 4.5921 × 10 07 1.6547 × 10 06 1.1138 × 10 05
rank7245631
F20mean3.6182 × 10 03 4.4082 × 10 03 3.8125 × 10 03 3.9643 × 10 03 3.9237 × 10 03 3.7196 × 10 03 3.2433 × 10 03
std2.7673 × 10 02 5.6270 × 10 02 3.1105 × 10 02 2.5650 × 10 02 2.2477 × 10 02 4.2420 × 10 02 3.8337 × 10 02
rank2745631
F21mean2.9785 × 10 03 2.6972 × 10 03 2.8510 × 10 03 2.9934 × 10 03 2.7927 × 10 03 2.8806 × 10 03 2.6001 × 10 03
std7.3108 × 10 01 9.3352 × 10 01 1.0139 × 10 02 8.4173 × 10 01 3.9788 × 10 01 1.1817 × 10 02 6.3260 × 10 01
rank7246351
F22mean1.4980 × 10 04 1.5366 × 10 04 1.1557 × 10 04 1.4498 × 10 04 1.6500 × 10 04 1.2125 × 10 04 1.0351 × 10 04
std1.0407 × 10 03 1.8469 × 10 03 2.6051 × 10 03 1.1381 × 10 03 4.9895 × 10 02 2.0855 × 10 03 3.2066 × 10 03
rank5624731
F23mean4.5101 × 10 03 3.2081 × 10 03 3.4950 × 10 03 3.8239 × 10 03 3.4571 × 10 03 3.4869 × 10 03 3.1820 × 10 03
std2.1190 × 10 02 9.7704 × 10 01 1.4288 × 10 02 1.7085 × 10 02 1.1261 × 10 02 2.2048 × 10 02 9.1383 × 10 01
rank7256431
F24mean5.0335 × 10 03 3.3022 × 10 03 3.6881 × 10 03 3.8560 × 10 03 3.6574 × 10 03 3.5858 × 10 03 3.2918 × 10 03
std2.6841 × 10 02 1.0355 × 10 02 1.7572 × 10 02 1.4696 × 10 02 1.6632 × 10 02 1.7405 × 10 02 8.1293 × 10 01
rank7256431
F25mean1.1711 × 10 04 4.3564 × 10 03 3.2447 × 10 03 4.1799 × 10 03 3.8277 × 10 03 3.1547 × 10 03 3.1516 × 10 03
std1.0519 × 10 03 1.5789 × 10 03 1.5819 × 10 02 3.6394 × 10 02 2.2231 × 10 02 4.9728 × 10 01 3.8338 × 10 01
rank7536421
F26mean1.5821 × 10 04 8.9820 × 10 03 9.1528 × 10 03 1.4149 × 10 04 8.3066 × 10 03 7.9520 × 10 03 7.5113 × 10 03
std6.4252 × 10 02 1.2961 × 10 03 1.9386 × 10 03 1.4740 × 10 03 1.5008 × 10 03 2.6612 × 10 03 2.5289 × 10 03
rank7456321
F27mean7.0429 × 10 03 3.6814 × 10 03 4.1240 × 10 03 4.6575 × 10 03 3.7278 × 10 03 3.9018 × 10 03 3.6976 × 10 03
std5.8660 × 10 02 1.4029 × 10 02 3.1507 × 10 02 6.3587 × 10 02 2.2514 × 10 02 2.4758 × 10 02 1.3074 × 10 02
rank7256341
F28mean1.0616 × 10 04 6.3288 × 10 03 5.8985 × 10 03 5.1477 × 10 03 4.5125 × 10 03 3.8613 × 10 03 3.4346 × 10 03
std7.9570 × 10 02 1.7491 × 10 03 2.0751 × 10 03 3.3643 × 10 02 8.9637 × 10 02 1.1724 × 10 03 5.9516 × 10 01
rank7645321
F29mean1.8839 × 10 04 5.5454 × 10 03 6.2820 × 10 03 8.3695 × 10 03 6.2301 × 10 03 5.7750 × 10 03 5.2988 × 10 03
std1.1022 × 10 04 6.8123 × 10 02 9.0411 × 10 02 1.2495 × 10 03 3.2188 × 10 02 6.5789 × 10 02 4.1837 × 10 02
rank7256431
F30mean3.0313 × 10 09 2.9087 × 10 08 4.0330 × 10 07 2.6642 × 10 08 4.4150 × 10 08 1.6624 × 10 07 3.3476 × 10 06
std1.5907 × 10 09 1.0033 × 10 09 3.3621 × 10 07 9.3361 × 10 07 8.8401 × 10 07 1.5359 × 10 07 2.5463 × 10 06
rank7435621
Average rank | 6.3103 | 3.8966 | 3.8276 | 5.8621 | 4.4828 | 2.7586 | 1.1379
Final rank | 7 | 4 | 3 | 6 | 5 | 2 | 1
Note: The best results for each metric are shown in bold.
Table 4. Wilcoxon rank sum test.
Function | MIDBO vs. HOA | MIDBO vs. DE | MIDBO vs. DBO | MIDBO vs. WOA | MIDBO vs. PSO | MIDBO vs. QHDBO
F1 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.4110 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.6784 × 10^−6
F3 | 5.8737 × 10^−4 | 3.0199 × 10^−11 | 4.1997 × 10^−10 | 6.1210 × 10^−10 | 1.2493 × 10^−5 | 3.0199 × 10^−11
F4 | 3.0199 × 10^−11 | 1.6132 × 10^−10 | 2.3897 × 10^−8 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.1821 × 10^−4
F5 | 3.0199 × 10^−11 | 1.1143 × 10^−3 | 1.8608 × 10^−6 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 5.3221 × 10^−3
F6 | 3.0199 × 10^−11 | 5.5546 × 10^−2 | 2.1540 × 10^−6 | 3.0199 × 10^−11 | 1.5969 × 10^−3 | 3.4783 × 10^−1
F7 | 3.0199 × 10^−11 | 2.2273 × 10^−9 | 5.9969 × 10^−1 | 3.6897 × 10^−11 | 1.0907 × 10^−5 | 1.1711 × 10^−2
F8 | 3.0199 × 10^−11 | 2.5974 × 10^−5 | 1.0702 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.3386 × 10^−3
F9 | 2.9215 × 10^−9 | 1.0666 × 10^−7 | 1.4918 × 10^−6 | 1.0937 × 10^−10 | 4.4642 × 10^−1 | 3.8481 × 10^−3
F10 | 1.7769 × 10^−10 | 2.9215 × 10^−9 | 1.0869 × 10^−1 | 2.6695 × 10^−9 | 3.0199 × 10^−11 | 2.2823 × 10^−1
F11 | 3.0199 × 10^−11 | 3.3384 × 10^−11 | 1.5581 × 10^−8 | 3.0199 × 10^−11 | 4.0772 × 10^−11 | 7.7387 × 10^−6
F12 | 3.0199 × 10^−11 | 1.2870 × 10^−9 | 8.4948 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 4.8011 × 10^−7
F13 | 3.0199 × 10^−11 | 4.3531 × 10^−5 | 8.8910 × 10^−10 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 8.8910 × 10^−10
F14 | 3.0199 × 10^−11 | 3.3285 × 10^−1 | 1.6798 × 10^−3 | 7.0881 × 10^−8 | 2.8129 × 10^−2 | 6.1452 × 10^−2
F15 | 3.0199 × 10^−11 | 2.1702 × 10^−1 | 2.1947 × 10^−8 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.1959 × 10^−7
F16 | 3.0199 × 10^−11 | 5.2640 × 10^−4 | 3.3242 × 10^−6 | 1.7769 × 10^−10 | 3.1589 × 10^−10 | 1.8608 × 10^−6
F17 | 2.6099 × 10^−10 | 8.8411 × 10^−7 | 1.7294 × 10^−7 | 2.0152 × 10^−8 | 1.9568 × 10^−10 | 4.7138 × 10^−4
F18 | 3.0199 × 10^−11 | 1.6062 × 10^−6 | 5.5923 × 10^−1 | 1.1737 × 10^−9 | 2.3897 × 10^−8 | 2.5188 × 10^−1
F19 | 3.0199 × 10^−11 | 8.3146 × 10^−3 | 1.0105 × 10^−8 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.2273 × 10^−9
F20 | 9.2113 × 10^−5 | 1.2870 × 10^−9 | 4.8011 × 10^−7 | 3.0199 × 10^−11 | 1.5581 × 10^−8 | 9.2113 × 10^−5
F21 | 3.0199 × 10^−11 | 2.9590 × 10^−5 | 8.1527 × 10^−11 | 3.0199 × 10^−11 | 8.9934 × 10^−11 | 1.0937 × 10^−10
F22 | 5.9673 × 10^−9 | 4.5726 × 10^−9 | 1.1199 × 10^−1 | 2.8314 × 10^−8 | 9.9186 × 10^−11 | 1.6285 × 10^−2
F23 | 3.0199 × 10^−11 | 2.6433 × 10^−1 | 6.5183 × 10^−9 | 3.3384 × 10^−11 | 1.2023 × 10^−8 | 1.3594 × 10^−7
F24 | 3.0199 × 10^−11 | 9.5873 × 10^−1 | 1.2057 × 10^−10 | 3.0199 × 10^−11 | 3.6897 × 10^−11 | 1.0702 × 10^−9
F25 | 3.0199 × 10^−11 | 3.6897 × 10^−11 | 1.7666 × 10^−3 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.0000 × 10^0
F26 | 3.0199 × 10^−11 | 6.1452 × 10^−2 | 1.5638 × 10^−2 | 3.6897 × 10^−11 | 3.2553 × 10^−1 | 4.4642 × 10^−1
F27 | 3.0199 × 10^−11 | 6.9522 × 10^−1 | 9.0632 × 10^−8 | 8.1527 × 10^−11 | 7.2827 × 10^−1 | 2.8389 × 10^−4
F28 | 3.0199 × 10^−11 | 1.2057 × 10^−10 | 9.9186 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.2493 × 10^−5
F29 | 3.0199 × 10^−11 | 1.7613 × 10^−1 | 1.0188 × 10^−5 | 3.3384 × 10^−11 | 6.7220 × 10^−10 | 1.0315 × 10^−2
F30 | 3.0199 × 10^−11 | 2.1947 × 10^−8 | 8.1527 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.4294 × 10^−8
+/=/− | 29/0/0 | 21/8/0 | 25/4/0 | 29/0/0 | 25/3/1 | 22/6/1
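The p-values in Table 4 come from pairwise Wilcoxon rank-sum tests over the independent runs of each algorithm pair, with significance judged at the 0.05 level. A compact sketch of the test using the normal approximation (tie correction omitted for brevity; library implementations such as SciPy’s `ranksums` add exact small-sample handling):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(x + y))
    # assign average ranks to tied values
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        k = i
        while k + 1 < n1 + n2 and combined[k + 1][0] == combined[i][0]:
            k += 1
        avg = (i + k) / 2 + 1
        for t in range(i, k + 1):
            ranks[t] = avg
        i = k + 1
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Two clearly separated samples of 30 runs give a vanishing p-value,
# i.e. the null hypothesis of equal distributions is rejected at 0.05.
p = rank_sum_p(list(range(30)), [v + 100 for v in range(30)])
assert p < 1e-9
```

A “+” in the summary row corresponds to p < 0.05 with MIDBO better, “=” to no significant difference, and “−” to the competitor being significantly better.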
Table 5. CEC2022 test results (Dim = 10).
Function | Metric | HOA | DE | DBO | WOA | PSO | QHDBO | MIDBO
F1mean1.7064 × 10 04 3.2804 × 10 04 1.5248 × 10 03 1.6810 × 10 04 1.0413 × 10 03 1.9943 × 10 03 3.0123 × 10 02
std5.6621 × 10 03 2.1361 × 10 04 1.0832 × 10 03 1.4218 × 10 04 4.5302 × 10 02 1.8545 × 10 03 1.4402 × 10 00
rank6745231
F2mean9.2853 × 10 02 4.2406 × 10 02 4.2942 × 10 02 4.5524 × 10 02 4.2920 × 10 02 4.1376 × 10 02 4.0688 × 10 02
std3.1368 × 10 02 6.0127 × 10 01 4.6766 × 10 01 5.5145 × 10 01 3.9029 × 10 01 2.6473 × 10 01 1.4363 × 10 01
rank7345621
F3mean6.2104 × 10 02 6.0802 × 10 02 6.0512 × 10 02 6.2729 × 10 02 6.1053 × 10 02 6.0563 × 10 02 6.0570 × 10 02
std5.1745 × 10 00 1.0092 × 10 01 5.7223 × 10 00 1.2314 × 10 01 2.3142 × 10 00 5.0129 × 10 00 3.9824 × 10 00
rank6417523
F4mean8.4058 × 10 02 8.3120 × 10 02 8.3168 × 10 02 8.4939 × 10 02 8.3382 × 10 02 8.3740 × 10 02 8.2601 × 10 02
std7.1790 × 10 00 1.1006 × 10 01 1.2597 × 10 01 1.7514 × 10 01 5.6920 × 10 00 1.4188 × 10 01 8.5509 × 10 00
rank6327451
F5mean2.0645 × 10 03 1.1928 × 10 03 1.1292 × 10 03 2.2127 × 10 03 9.7276 × 10 02 1.2190 × 10 03 1.0115 × 10 03
std6.3730 × 10 02 2.8676 × 10 02 2.6854 × 10 02 8.6516 × 10 02 3.1363 × 10 01 3.9367 × 10 02 1.4081 × 10 02
rank6537241
F6mean4.2760 × 10 05 6.4884 × 10 04 8.6806 × 10 03 5.4586 × 10 03 6.3540 × 10 05 8.7718 × 10 03 3.8056 × 10 03
std7.9737 × 10 05 2.0806 × 10 05 5.0788 × 10 03 4.1045 × 10 03 4.8372 × 10 05 5.4267 × 10 03 3.2408 × 10 03
rank6542731
F7mean2.0745 × 10 03 2.0705 × 10 03 2.0441 × 10 03 2.0603 × 10 03 2.0534 × 10 03 2.0511 × 10 03 2.0372 × 10 03
std3.8530 × 10 01 3.8261 × 10 01 1.5445 × 10 01 2.0246 × 10 01 3.0430 × 10 01 1.9888 × 10 01 1.1743 × 10 01
rank7625341
F8mean2.2357 × 10 03 2.2441 × 10 03 2.2309 × 10 03 2.2446 × 10 03 2.2298 × 10 03 2.2295 × 10 03 2.2271 × 10 03
std1.5262 × 10 01 3.5131 × 10 01 1.0339 × 10 01 2.7851 × 10 01 2.9796 × 10 00 8.4267 × 10 00 7.6904 × 10 00
rank4637521
F9mean2.4420 × 10 03 2.4485 × 10 03 2.4000 × 10 03 2.4091 × 10 03 2.5517 × 10 03 2.4000 × 10 03 2.4000 × 10 03
std3.3310 × 10 01 6.1344 × 10 01 1.0706 × 10 09 3.6253 × 10 00 5.6752 × 10 01 8.6941 × 10 13 3.9602 × 10 04
rank6524713
F10mean2.5136 × 10 03 3.0926 × 10 03 2.5015 × 10 03 2.7174 × 10 03 2.5625 × 10 03 2.5003 × 10 03 2.5000 × 10 03
std6.6841 × 10 00 3.3752 × 10 02 3.6086 × 10 00 3.8502 × 10 02 1.0694 × 10 00 1.0524 × 10 00 1.3236 × 10 03
rank4725613
F11mean2.7084 × 10 03 2.6152 × 10 03 2.6001 × 10 03 2.6030 × 10 03 2.6472 × 10 03 2.6001 × 10 03 2.6000 × 10 03
std4.5881 × 10 01 2.1023 × 10 01 3.3402 × 10 01 1.2130 × 10 00 4.5366 × 10 00 7.7601 × 10 02 1.3510 × 10 02
rank7435621
F12mean2.9588 × 10 03 2.9546 × 10 03 2.9545 × 10 03 2.9547 × 10 03 2.9550 × 10 03 2.9547 × 10 03 2.9544 × 10 03
std1.3617 × 10 00 2.1264 × 10 01 1.2114 × 10 01 8.4741 × 10 02 2.6819 × 10 01 4.6252 × 10 13 2.9845 × 10 02
rank7324651
Average rank | 6 | 4.8333 | 2.6667 | 5.25 | 4.9167 | 2.8333 | 1.5
Final rank | 7 | 4 | 2 | 6 | 5 | 3 | 1
Note: The best results for each metric are shown in bold.
Table 6. Wilcoxon rank sum test results on the CEC2022 test set.
Function | MIDBO vs. HOA | MIDBO vs. DE | MIDBO vs. DBO | MIDBO vs. WOA | MIDBO vs. PSO | MIDBO vs. QHDBO
F1 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 6.5261 × 10^−7 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 5.5999 × 10^−7
F2 | 3.0199 × 10^−11 | 5.3685 × 10^−2 | 1.5288 × 10^−5 | 6.5277 × 10^−8 | 5.0723 × 10^−10 | 3.3386 × 10^−3
F3 | 7.3891 × 10^−11 | 9.9410 × 10^−1 | 1.8090 × 10^−1 | 1.7769 × 10^−10 | 3.3242 × 10^−6 | 4.9178 × 10^−1
F4 | 9.8329 × 10^−8 | 7.7272 × 10^−2 | 8.2357 × 10^−2 | 3.2555 × 10^−7 | 2.0058 × 10^−4 | 1.8575 × 10^−3
F5 | 8.9934 × 10^−11 | 1.7649 × 10^−2 | 9.6263 × 10^−2 | 6.6955 × 10^−11 | 8.0727 × 10^−1 | 1.3272 × 10^−2
F6 | 7.0430 × 10^−7 | 2.1959 × 10^−7 | 5.0912 × 10^−6 | 6.5671 × 10^−2 | 3.0199 × 10^−11 | 5.9706 × 10^−5
F7 | 3.5201 × 10^−7 | 2.2780 × 10^−5 | 1.2235 × 10^−1 | 7.0430 × 10^−7 | 2.5306 × 10^−4 | 7.9590 × 10^−3
F8 | 1.6813 × 10^−4 | 1.4298 × 10^−5 | 3.3874 × 10^−2 | 1.1077 × 10^−6 | 6.7650 × 10^−5 | 1.0869 × 10^−1
F9 | 3.0161 × 10^−11 | 5.1763 × 10^−7 | 1.4495 × 10^−9 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 2.9703 × 10^−11
F10 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 7.3444 × 10^−2 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 2.9422 × 10^−7
F11 | 3.0199 × 10^−11 | 2.3866 × 10^−4 | 1.4932 × 10^−4 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.2235 × 10^−1
F12 | 2.6859 × 10^−11 | 3.0953 × 10^−2 | 2.1483 × 10^−2 | 5.5571 × 10^−10 | 2.6859 × 10^−11 | 1.0410 × 10^−12
+/=/− | 12/0/0 | 9/3/0 | 6/5/1 | 11/1/0 | 11/1/0 | 8/3/1
Table 7. Best results for tension–compression spring design issues.
Algorithm | d | D | P | Best Cost | Ranking
HOA | 0.05149041 | 0.35019476 | 11.75205863 | 0.01276821 | 7
DE | 0.05185571 | 0.36074007 | 11.05699006 | 0.01266574 | 3
DBO | 0.05 | 0.31742547 | 14.02776195 | 0.01271905 | 4
WOA | 0.05153875 | 0.35311241 | 11.50352908 | 0.01266567 | 2
PSO | 0.05335566 | 0.39812189 | 9.229910776 | 0.01272780 | 6
QHDBO | 0.05 | 0.31741769 | 14.03239520 | 0.01272242 | 5
MIDBO | 0.05162258 | 0.35512065 | 11.38321349 | 0.01266531 | 1
Note: The best results are highlighted in bold.
Table 8. Statistical results of tension–compression spring design issues.
Algorithm | Best | Mean | Std | Worst
HOA | 0.01276821 | 0.01422220 | 0.00109043 | 0.01705629
DE | 0.01266574 | 0.01418542 | 0.00364543 | 0.02883747
DBO | 0.01271905 | 0.01519093 | 0.00243828 | 0.01800172
WOA | 0.01266567 | 0.01381182 | 0.00125510 | 0.01777675
PSO | 0.01272780 | 0.01325253 | 0.00115248 | 0.01802708
QHDBO | 0.01272242 | 3972.16977 | 12226.0318 | 39721.5549
MIDBO | 0.01266531 | 0.01287473 | 0.00031892 | 0.01385540
Note: The best results for each metric are shown in bold.
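For reference, the tension–compression spring problem minimizes the spring weight f(d, D, N) = (N + 2)·D·d² over wire diameter d, mean coil diameter D, and number of active coils N, subject to four standard inequality constraints. The sketch below uses the widely published formulation (not code from the paper) and checks it against MIDBO’s reported optimum in Table 7; at the optimum the first two constraints are active, so a small numerical tolerance is allowed:

```python
def spring_cost(d, D, N):
    """Tension-compression spring weight: f = (N + 2) * D * d^2."""
    return (N + 2) * D * d * d

def spring_constraints(d, D, N):
    """Standard g_i(x) <= 0 constraint set for the problem."""
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)
    g3 = 1 - 140.45 * d / (D ** 2 * N)
    g4 = (D + d) / 1.5 - 1
    return (g1, g2, g3, g4)

# MIDBO's reported optimum from Table 7.
d, D, N = 0.05162258, 0.35512065, 11.38321349
assert abs(spring_cost(d, D, N) - 0.01266531) < 1e-5
# g1 and g2 are active (approximately zero) at the optimum.
assert all(g <= 1e-3 for g in spring_constraints(d, D, N))
```

This confirms that the best cost column in Tables 7 and 8 is the raw objective value of the standard formulation.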
Table 9. Best results for pressure vessel design issues.
Algorithm | Ts | Th | R | L | Best Cost | Ranking
HOA | 14.46908487 | 9.41544744 | 46.15582410 | 132.1020384 | 6497.639768 | 7
DE | 12.32937401 | 5.95960812 | 40.31961891 | 200 | 5743.028259 | 4
DBO | 11.78387737 | 6.36965630 | 40.31961872 | 200 | 5743.028207 | 2
WOA | 12.08028516 | 7.45382003 | 40.31964720 | 199.9996529 | 5914.381203 | 5
PSO | 12.06999923 | 6.96648212 | 40.68269309 | 196.3779765 | 5944.359944 | 6
QHDBO | 12.20767452 | 6.45691444 | 40.31961872 | 200 | 5743.028207 | 2
MIDBO | 11.58768820 | 5.75856973 | 40.33293551 | 199.8148146 | 5743.021200 | 1
Table 10. Statistical results of pressure vessel design issues.
Algorithm | Best | Mean | Std | Worst
HOA | 6497.639768 | 7151.331985 | 377.0241357 | 7710.421479
DE | 5743.028259 | 6245.583864 | 556.9250329 | 7911.454283
DBO | 5743.028207 | 6126.716891 | 441.4547694 | 7198.824766
WOA | 5914.381203 | 7846.276499 | 2547.835593 | 15978.5745
PSO | 5944.359944 | 6383.284761 | 355.8878759 | 7184.192856
QHDBO | 5743.028207 | 6452.409249 | 490.5638215 | 7198.824766
MIDBO | 5743.021200 | 6033.448733 | 376.1353799 | 7198.824766
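The pressure vessel problem minimizes fabrication cost over shell thickness Ts, head thickness Th, inner radius R, and cylindrical length L. A sketch of the standard cost function follows; the thickness values in Table 9 appear to be expressed in multiples of the 0.0625 in. rolling increment (an inference, not stated above), so the sketch is instead checked against the classical discrete-thickness optimum quoted throughout the literature:

```python
def vessel_cost(Ts, Th, R, L):
    """Standard pressure-vessel cost:
    f = 0.6224*Ts*R*L + 1.7781*Th*R^2 + 3.1661*Ts^2*L + 19.84*Ts^2*R."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R * R
            + 3.1661 * Ts * Ts * L + 19.84 * Ts * Ts * R)

# Classical literature optimum with thicknesses rounded to 0.0625 in.
cost = vessel_cost(0.8125, 0.4375, 42.0984456, 176.6365958)
assert abs(cost - 6059.714) < 0.5
```

The lower costs near 5743 in Table 10 are consistent with the relaxed continuous-thickness variant of the same objective.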
Table 11. Best results for speed reducer design issues.
Algorithm | b | m | p | l1 | l2 | d1 | d2 | Best Cost | Ranking
HOA | 3.5283902 | 0.7057965 | 17.0815174 | 8.0237040 | 7.9954109 | 3.3916207 | 5.3467375 | 3108.605405 | 7
DE | 8.1048468 | 0.7 | 10.0045627 | 6.9420956 | 7.7179078 | 3.3614295 | 5.2891139 | 2513.700952 | 1
DBO | 3.4990378 | 0.7 | 17 | 8.3 | 7.7152089 | 3.3525325 | 5.2866544 | 3003.569625 | 6
WOA | 3.4989428 | 0.7 | 17.0109922 | 7.3 | 7.7118498 | 3.3554858 | 5.2866520 | 2998.523131 | 4
PSO | 3.5007469 | 0.7 | 17 | 7.6511181 | 7.7863316 | 3.3636023 | 5.2877010 | 3003.403769 | 5
QHDBO | 3.4989630 | 0.7 | 17 | 7.3016166 | 7.7166102 | 3.3505713 | 5.2866612 | 2994.291185 | 3
MIDBO | 3.4990377 | 0.7 | 17 | 7.3 | 7.7152089 | 3.3505410 | 5.2866544 | 2994.234252 | 2
Table 12. Statistical results for speed reducer design issues.

| Algorithm | Best | Mean | Std | Worst |
| --- | --- | --- | --- | --- |
| HOA | 3108.605405 | 7117.868848 | 7620.555658 | 28686.3642 |
| DE | 2513.700952 | 2679.437592 | 75.53042354 | 2898.544731 |
| DBO | 3003.569625 | 3031.828967 | 16.15150037 | 3056.000079 |
| WOA | 2998.523131 | 3227.816987 | 412.0499696 | 4863.87868 |
| PSO | 3003.403769 | 3036.148094 | 16.35192864 | 3057.403942 |
| QHDBO | 2994.291185 | 6659.698848 | 8941.285751 | 31812.79325 |
| MIDBO | 2994.234252 | 2994.234331 | 0.000319158 | 2994.235682 |
Lv, W.; He, Y.; Yang, Y.; Ma, X.; Chen, J.; Zhang, Y. Improving the Dung Beetle Optimizer with Multiple Strategies: An Application to Complex Engineering Problems. Biomimetics 2025, 10, 717. https://doi.org/10.3390/biomimetics10110717
