Article

Multi-Strategy Improved Dung Beetle Optimization Algorithm and Its Applications

1 School of Information Science and Technology, Yunnan Normal University, Kunming 650500, China
2 Department of Internet of Things and Artificial Intelligence, Wuxi Vocational College of Science and Technology, Wuxi 214028, China
3 College of Engineering, Informatics, and Applied Sciences, Northern Arizona University, Flagstaff, AZ 86011, USA
4 Department of Computer Science and Technology, Kean University, Union, NJ 07083, USA
5 School of Information Science and Engineering, Yunnan University, Kunming 650500, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(5), 291; https://doi.org/10.3390/biomimetics9050291
Submission received: 1 April 2024 / Revised: 3 May 2024 / Accepted: 5 May 2024 / Published: 13 May 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract

The dung beetle optimization (DBO) algorithm, a swarm intelligence-based metaheuristic, is renowned for its robust optimization capability and fast convergence speed. However, it also suffers from low population diversity, susceptibility to local optima, and unsatisfactory convergence speed when facing complex optimization problems. In response, this paper proposes the multi-strategy improved dung beetle optimization algorithm (MDBO). The core improvements include using Latin hypercube sampling for better population initialization and the introduction of a novel differential variation strategy, termed “Mean Differential Variation”, to enhance the algorithm’s ability to evade local optima. Moreover, a strategy combining lens imaging reverse learning and dimension-by-dimension optimization was proposed and applied to the current optimal solution. Through comprehensive performance testing on standard benchmark functions from CEC2017 and CEC2020, MDBO demonstrates superior performance in terms of optimization accuracy, stability, and convergence speed compared with other classical metaheuristic optimization algorithms. Additionally, the efficacy of MDBO in addressing complex real-world engineering problems is validated through three representative engineering application scenarios, namely the extension/compression spring design, reducer design, and welded beam design problems.

1. Introduction

Optimization is everywhere, be it engineering design, industrial design, business planning, or holiday planning. We use optimization techniques to solve problems intelligently by choosing the best from many available options [1]. At its core, optimization involves the quest for an optimal set of parameter values within specified constraints, aimed at either maximizing or minimizing system performance indicators [2]. Because these problems involve many decision variables, complex nonlinear constraints, and objective functions, efficient methods are required for solving them. Traditional algorithms typically start from a single initial point and rely on gradient information [3]. However, many real-world optimization problems are often characterized as black-box problems, where specific expressions, gradient information, and derivatives are unknown [4]. Metaheuristic algorithms (MAs) are computational intelligence paradigms especially suited to solving sophisticated optimization problems [5]. MAs present a promising avenue for tackling most real-world nonlinear and multimodal optimization challenges by offering acceptable solutions through iterative trial and error [6].
These algorithms are classified into evolutionary-based [7], physics-based [8], and swarm intelligence-based [9] categories. Evolutionary-based algorithms, rooted in natural selection and genetics, include genetic algorithm (GA)  [10] and differential evolution (DE)  [11]. GA evolves potential solutions by simulating natural selection and genetic mechanisms like replication, crossover, and mutation operations, gradually converging toward the optimal solution. DE mimics biological evolution to seek the optimal solution by leveraging differences among individuals in the population to guide the search direction and iteratively evolve towards the optimum. Physics-based algorithms allow each search agent to interact and move in the search space according to certain physical rules, with common algorithms, including simulated annealing (SA) [12], the gravitational search algorithm (GSA) [13], and the sine cosine algorithm (SCA) [14]. The SA algorithm simulates the physical annealing process, randomly exploring the solution space to find the global optimum solution, and utilizes a probability jump mechanism to avoid local optima, thus achieving global optimization. GSA is inspired by natural gravitational forces and object movements, aiming to find the global optimum by adjusting the positions of objects in the solution space for optimization search. Meanwhile, SCA utilizes the fluctuating properties of sine and cosine functions to generate random candidate solutions and, through an adaptive balance of exploration and exploitation stages, achieves global optimization search. Swarm intelligence (SI) algorithms are inspired by collective behaviors of social insects and animals [15]. Some classic swarm intelligence algorithms include particle swarm optimization (PSO) [16], ant colony optimization (ACO) [17], artificial bee colony (ABC) [18], grey wolf optimizer (GWO) [19], whale optimization algorithm (WOA) [20], Harris hawks optimization (HHO) [21], sparrow search algorithm (SSA) [22], and the slime mold algorithm (SMA) [23]. These algorithms exhibit characteristics of self-organization, adaptation, and self-learning and are widely applied across various domains [24].
The dung beetle optimization (DBO) algorithm [25] is a swarm intelligence algorithm, proposed in 2022, and has attracted considerable attention due to its well-optimized performance and unique design inspiration among a plethora of metaheuristic algorithms. DBO emulates various life behaviors of dung beetle populations, such as rolling balls, dancing, foraging, stealing, and reproduction, thereby constructing a novel optimization strategy. Experimental results demonstrate that DBO exhibits good performance in solving some classical optimization problems. Nevertheless, achieving desirable results when using the DBO algorithm to solve complex optimization problems remains a challenge. Specifically, the drawbacks of DBO are primarily evident in the following aspects: Firstly, during the initialization phase, the utilization of randomly generated populations may lead to an uneven distribution within the solution space, consequently restricting exploration and potentially trapping the algorithm in local optima. Secondly, the inclination toward greediness of the algorithm throughout the search process may precipitate premature convergence on local optima, disregarding the global optimum and resulting in suboptimal outcomes. Furthermore, akin to other swarm intelligence algorithms, when solving multi-dimensional objective functions, neglecting the evolution of specific dimensions due to inter-dimensional interference deteriorates convergence speed and compromises solution quality. As asserted by the “No Free Lunch” (NFL) theorem [26], every algorithm has its inherent limitations, and there is no one algorithm that can solve all optimization problems. Therefore, many scholars are dedicated to proposing new algorithms or improving existing ones to address various real-world optimization problems. This paper addresses the deficiencies and limitations of the original DBO algorithm by proposing a multi-strategy improved Dung Beetle Optimization algorithm (MDBO). The MDBO aims to enhance the global optimization capability of the original DBO by introducing multiple strategies, improving the convergence accuracy and speed of the algorithm. Then, the overall performance of the MDBO algorithm is validated through experiments across various aspects. Overall, the main contributions of this paper are as follows:
  • The Latin hypercube sampling (LHS) initialization strategy replaces the original random initialization method of DBO to generate higher-quality initial candidate solutions.
  • Introducing a mean difference mutation strategy enhances the capability of the algorithm to escape local optimal solutions by mutating the population.
  • A strategy that combines lens imaging inverse learning with dimension-by-dimension optimization is proposed and applied to the current optimal solution to enhance its quality.
  • The proposed MDBO algorithm is verified to outperform other classical metaheuristic algorithms by comparing solution accuracy, convergence speed, and stability on the CEC2017 and CEC2020 benchmark functions.
  • Further, MDBO was successfully applied to three real-world engineering optimization problems, validating its superior capability in solving complex engineering problems.
This paper is organized as follows. The basic dung beetle optimization algorithm is introduced in Section 2. The multi-strategy improved dung beetle optimization algorithm (MDBO) is proposed in Section 3 to address the shortcomings of the dung beetle optimization algorithm. In Section 4, the improved multi-strategy dung beetle optimization algorithm is experimentally compared with other algorithms in various aspects to verify the effectiveness of the improvement measures. Section 5 uses the improved algorithm in real-world engineering applications to further explore the practical applicability of the improved algorithm. Section 6 summarizes the full work.

2. The Basic Dung Beetle Optimization Algorithm (DBO)

The dung beetle optimization algorithm is inspired by the behaviors of dung beetles such as rolling, dancing, foraging, stealing, and reproduction. Four population renewal strategies are designed based on these behaviors.

2.1. Ball-Rolling Dung Beetles

Dung beetles continuously update their position by sensing environmental factors such as sunlight or wind direction, a behavior that can be accurately described by the mathematical model in Equation (1).
$$x_i(t+1) = x_i(t) + \alpha \times k \times x_i(t-1) + b \times \Delta x, \quad \Delta x = \left| x_i(t) - X^{w} \right| \tag{1}$$
Here, $t$ denotes the current iteration number and $x_i(t)$ denotes the position of the $i$-th dung beetle at the $t$-th iteration. $\alpha$ indicates whether the dung beetle deviates from its direction and is set to 1 (no deviation) or $-1$ (deviation) according to a probability. $k \in (0, 0.2]$ is a deflection coefficient, $b$ is a constant in $(0, 1)$, $X^{w}$ denotes the global worst position, and $\Delta x$ is used to simulate the change in light intensity. A rolling dung beetle has a certain probability of encountering an obstacle; when it cannot proceed further, it acquires a new direction by dancing, a behavior defined in Equation (2).
$$x_i(t+1) = x_i(t) + \tan\theta \left| x_i(t) - x_i(t-1) \right| \tag{2}$$
where $\theta \in [0, \pi]$; the position of the dung beetle is not updated when $\theta$ equals $\pi/2$ or $\pi$.
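As a minimal Python/NumPy sketch of these two updates (not the authors' code; the helper name, the 0.9 obstacle probability, and the defaults $k = 0.1$ and $b = 0.3$ follow common DBO implementations and are assumptions rather than values fixed by this paper):

```python
import numpy as np

def roll_or_dance(x_t, x_prev, X_worst, rng, k=0.1, b=0.3):
    """One ball-rolling beetle update: Equation (1), or Equation (2) on an obstacle."""
    if rng.random() < 0.9:                       # assumed probability of no obstacle
        alpha = 1 if rng.random() < 0.5 else -1  # 1: no deviation, -1: deviation
        delta_x = np.abs(x_t - X_worst)          # simulated light-intensity change
        return x_t + alpha * k * x_prev + b * delta_x            # Equation (1)
    theta = rng.uniform(0, np.pi)                # dancing yields a new direction
    if np.isclose(theta, np.pi / 2) or np.isclose(theta, np.pi):
        return x_t                               # position not updated
    return x_t + np.tan(theta) * np.abs(x_t - x_prev)            # Equation (2)
```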

2.2. Breeding Dung Beetles

To provide a safe environment for their offspring, dung beetles spawn in a safe area, which is defined by the boundary selection strategy shown in Equation (3).
$$R = 1 - \frac{t}{T_{max}}, \quad Lb^{*} = \max\left(X^{*} \times (1 - R),\, Lb\right), \quad Ub^{*} = \min\left(X^{*} \times (1 + R),\, Ub\right) \tag{3}$$
where $R$ denotes the convergence factor, $T_{max}$ denotes the maximum number of iterations, $X^{*}$ denotes the local optimal position, $Lb^{*}$ and $Ub^{*}$ denote the lower and upper boundaries of the spawning area, respectively, and $Lb$ and $Ub$ denote the lower and upper bounds of the objective function. As shown in Figure 1, the outermost large circle represents the upper and lower boundaries of the optimization problem, and the inner circle represents the area where the dung beetles breed. $X^{*}$ is denoted by the black ball, the red dots denote the positions of the breeding balls, the blue dots denote the positions of the rolling dung beetles, and the yellow dots denote the boundaries. Once the spawning area is determined, each female dung beetle lays one egg within it in each iteration.
Equation (3) shows that the spawning area changes dynamically with the value of $R$; therefore, the location of the laid egg also changes dynamically, as defined in Equation (4).
$$B_i(t+1) = X^{*} + b_1 \times \left(B_i(t) - Lb^{*}\right) + b_2 \times \left(B_i(t) - Ub^{*}\right) \tag{4}$$
In this context, $B_i(t)$ denotes the location of the $i$-th breeding ball at the $t$-th iteration, $b_1$ and $b_2$ denote independent random vectors of size $1 \times D$, $D$ denotes the dimension of the optimization problem, and the symbol ‘×’ denotes element-wise multiplication of two vectors. The position of the breeding ball is strictly limited to the spawning area.
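The spawning-area computation and egg-laying update of Equations (3) and (4) can be sketched as follows (illustrative names, not the authors' code):

```python
import numpy as np

def breed(B_t, X_star, t, T_max, Lb, Ub, rng):
    """Spawning-area bounds (Equation (3)) and egg-laying update (Equation (4))."""
    R = 1 - t / T_max
    Lb_star = np.maximum(X_star * (1 - R), Lb)
    Ub_star = np.minimum(X_star * (1 + R), Ub)
    b1, b2 = rng.random(B_t.size), rng.random(B_t.size)  # independent 1xD vectors
    B_next = X_star + b1 * (B_t - Lb_star) + b2 * (B_t - Ub_star)
    return np.clip(B_next, Lb_star, Ub_star)  # strictly inside the spawning area
```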

2.3. Foraging Dung Beetles

When young dung beetles forage, an optimal foraging area must likewise be established to guide their search for food; the foraging area is defined in Equation (5).
$$Lb^{b} = \max\left(X^{b} \times (1 - R),\, Lb\right), \quad Ub^{b} = \min\left(X^{b} \times (1 + R),\, Ub\right) \tag{5}$$
where $Lb^{b}$ and $Ub^{b}$ denote the lower and upper bounds of the best foraging area, respectively, and $X^{b}$ denotes the global best position. The position update of the small dung beetle is defined in Equation (6).
$$x_i(t+1) = x_i(t) + C_1 \times \left(x_i(t) - Lb^{b}\right) + C_2 \times \left(x_i(t) - Ub^{b}\right) \tag{6}$$
Here, $C_1$ is a random number drawn from a normal distribution, and $C_2$ denotes a random vector with entries in $(0, 1)$.
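Under the same conventions, Equations (5) and (6) can be sketched as:

```python
import numpy as np

def forage(x_t, X_best, t, T_max, Lb, Ub, rng):
    """Foraging-area bounds (Equation (5)) and young beetle update (Equation (6))."""
    R = 1 - t / T_max
    Lb_b = np.maximum(X_best * (1 - R), Lb)
    Ub_b = np.minimum(X_best * (1 + R), Ub)
    C1 = rng.normal()              # normally distributed random number
    C2 = rng.random(x_t.size)      # random vector with entries in (0, 1)
    return x_t + C1 * (x_t - Lb_b) + C2 * (x_t - Ub_b)
```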

2.4. Stealing Dung Beetles

The stealing behavior refers to stealing dung balls from other dung beetles. During the iterative process, the position update strategy of the thief dung beetle is defined in Equation (7).
$$x_i(t+1) = X^{b} + S \times g \times \left( \left| x_i(t) - X^{*} \right| + \left| x_i(t) - X^{b} \right| \right) \tag{7}$$
where $S$ denotes a constant and $g$ is a random vector of size $1 \times D$ whose entries follow a normal distribution.
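Equation (7) translates directly; $S = 0.5$ is the value used in the original DBO paper, and the remaining names are illustrative:

```python
import numpy as np

def steal(x_t, X_best, X_star, rng, S=0.5):
    """Thief beetle update of Equation (7)."""
    g = rng.normal(size=x_t.size)  # 1xD vector with normally distributed entries
    return X_best + S * g * (np.abs(x_t - X_star) + np.abs(x_t - X_best))
```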

2.5. The DBO Algorithm Implementation Steps

The distribution of the population in DBO is shown in Figure 2, where the number of matrices indicates the number of dung beetles, and the blue, yellow, green, and red matrices represent ball-rolling dung beetles, breeding dung beetles, foraging dung beetles, and stealing dung beetles, respectively. The overall pseudo-code of the DBO algorithm is shown in Algorithm 1.
Algorithm 1: The framework of the DBO algorithm.

3. Multi-Strategy Improved Dung Beetle Optimization Algorithm (MDBO)

The basic characteristics of the dung beetle optimization algorithm can be derived from its principle. The ball-rolling behavior enhances the global search ability of the algorithm across all phases, while reproduction and foraging behaviors facilitate exploration around the optimal position of the individual. With each iteration, the dynamic boundary and range of search decrease gradually. The stealing behavior entails a dynamic localized search near the optimal individual. Despite the simplicity of the DBO algorithm and its successful application in certain engineering design problems, it exhibits several drawbacks. Striking a balance between global exploration and local exploitation poses challenges, and algorithms are prone to falling into local optima [27]. To rectify these issues, this study proposes enhancements in the ensuing sections.

3.1. Latin Hypercube Sampling to Initialize Populations

The DBO algorithm usually relies on a stochastic initialization strategy to generate the initial population when solving complex optimization problems. This randomization helps to explore different regions of the solution space, thus increasing the chance of finding a globally optimal solution. However, random initialization also has an obvious drawback: it cannot ensure the uniform distribution of the population in the solution space. Especially in high-dimensional search spaces, it requires a large number of points to obtain a good distribution, and these points may be close to each other or even overlap [28]. This may result in the population being too concentrated in some regions and too sparse in others. This uneven distribution is very detrimental to the early convergence of the algorithm.
To address this issue, this study introduces an initialization method called Latin hypercube sampling (LHS) [29,30]. The fundamental concept of LHS involves partitioning the sample space into multiple uniform and non-overlapping subspaces and selecting a single data point from each subspace as a sampling point. This approach guarantees a uniform distribution of sample points across the defined domain, thereby mitigating the risk of over-concentration or sparse distribution of agents. Mathematically, the generated sample is represented using Equation (8).
$$x_i = \frac{1}{n} r + \frac{i - 1}{n} \tag{8}$$
where $r$ is a uniform random number in $(0, 1)$, $x_i$ is the sample in the $i$-th interval, and $n$ is the total number of samples. For example, when the total number of samples is 10, the sample in the first interval, $x_1 = \frac{1}{10} r + \frac{0}{10}$, lies in $[0, 0.1]$; similarly, the sample in the second interval lies in $[0.1, 0.2]$, and so on, yielding all the LHS sampling points.
Compared to random or stratified sampling methods, Latin hypercube sampling (LHS) exhibits stronger spatial filling capability and convergence characteristics [31]. This attribute has led to its widespread application in the initialization of populations in intelligent algorithms. Figure 3 illustrates a two-dimensional comparison between the distributions of 10 randomly generated populations and populations generated using LHS. It is evident from the figure that the population distribution generated by LHS is more uniform, with no overlapping individuals. Therefore, this method can generate higher-quality initial populations, laying a better foundation for subsequent algorithm optimization.
As a metaheuristic algorithm based on swarm intelligence, the dung beetle optimization algorithm models its population at initialization in the same way as other such algorithms, as shown in Equation (9). The set of points acquired through LHS can then be mapped to the solution space of the objective function using an equation of the form depicted in Equation (10).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,D} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,D} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,D} \end{bmatrix}_{N \times D} \tag{9}$$
$$X_i = lb + (ub - lb) \times LHS_i \tag{10}$$
Here, $X$ is the population matrix, $X_i$ is the $i$-th DBO member (candidate solution), $N$ is the number of dung beetles, $D$ is the number of decision variables, $lb$ and $ub$ represent the lower and upper bounds of the problem to be optimized, and $LHS_i$ denotes the $i$-th vector obtained using Latin hypercube sampling.
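A minimal NumPy sketch of this initialization following Equations (8)–(10); the function name and the independent per-dimension shuffling of strata are our choices:

```python
import numpy as np

def lhs_init(N, D, lb, ub, rng=None):
    """Latin hypercube initialization per Equations (8)-(10)."""
    if rng is None:
        rng = np.random.default_rng()
    pop = np.empty((N, D))
    for d in range(D):
        strata = (np.arange(N) + rng.random(N)) / N  # Equation (8): one point per stratum
        pop[:, d] = rng.permutation(strata)          # decouple strata across dimensions
    return lb + (ub - lb) * pop                      # Equation (10): map to [lb, ub]

population = lhs_init(N=30, D=10, lb=-100.0, ub=100.0)
```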

3.2. Mean Differential Variation

Throughout the iterative process, as the population gradually converges towards optimal individuals, population diversity tends to decrease. To prevent the premature convergence this causes, this paper introduces mean differential variation [32]. Depending on the stage of the iteration, this method can be categorized into two variants, denoted DE/mean-current/1 and DE/mean-current-best/1, respectively. Both variants first select two individuals, $X_{r1}$ and $X_{r2}$, randomly from the current population and calculate two new vectors, $X_{c1}$ and $X_{c2}$, according to Equation (11).
$$X_{c1} = \frac{X_{r1} + X_{r2}}{2}, \quad X_{c2} = \frac{X_{r1} + X_b}{2} \tag{11}$$
The first variation strategy, which proceeds according to Equation (12), is unique in that it employs two fundamental vectors external to the current population. This strategy not only helps to escape the problem of population stagnation but also effectively maintains the diversity of the population, thus promoting the exploration capability of the algorithm. Consequently, the algorithm is able to search in a wider solution space, thereby augmenting the likelihood of discovering a globally optimal solution.
$$X_i = X_{c1} + F \left( X_{c1} - X_i \right) + F \left( X_{c2} - X_i \right) \tag{12}$$
The second variation strategy is executed based on Equation (13), where the generation of new vectors incorporates information about the global optimal solution. This improvement allows the algorithm to perform a more intensive search in the vicinity of the optimal solution, thus finely exploring small variations in the solution space. In this way, the algorithm is able to approximate the global optimal solution more accurately, improving the accuracy and efficiency of the solution.
$$X_i = X_b + F \left( X_{c1} - X_i \right) + F \left( X_{c2} - X_i \right) \tag{13}$$
In Equations (12) and (13), $X_b$ represents the current best individual, $X_i$ denotes the individual currently undergoing mutation, and $F$ is the scaling factor. In the first type of mutation, $F = 0.25$, while in the second type, $F = (1 - 2 \times \mathrm{rand}(1)) \times 0.5$. Both types of mutations are executed in a cooperative manner. In the first two-thirds of the iterations, the first type of mutation is exclusively performed, as it provides good search and exploitation capabilities. In the last one-third of the iterations, the second type of mutation is executed to conduct a more intensive search. Overall, as in Equation (14), we have the following:
$$\begin{cases} F = 0.25, & X_i = X_{c1} + F(X_{c1} - X_i) + F(X_{c2} - X_i), & t < \frac{2}{3} T_{max}, \\ F = (1 - 2 \times \mathrm{rand}(1)) \times 0.5, & X_i = X_b + F(X_{c1} - X_i) + F(X_{c2} - X_i), & \text{otherwise}. \end{cases} \tag{14}$$
This strategy of searching near individuals in the early stages and exploring near the global optimum in the later stages effectively helps the algorithm escape local optima, thereby enhancing the algorithm’s global search capability and convergence speed.
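The two mutation variants and the switching rule of Equation (14) can be sketched as follows (illustrative, assuming the population is an $(N, D)$ array):

```python
import numpy as np

def mean_diff_mutation(population, X_best, t, T_max, rng):
    """Mean differential variation of Equations (11)-(14)."""
    N = population.shape[0]
    mutated = population.copy()
    for i in range(N):
        r1, r2 = rng.choice(N, size=2, replace=False)
        X_c1 = (population[r1] + population[r2]) / 2          # Equation (11)
        X_c2 = (population[r1] + X_best) / 2
        if t < 2 * T_max / 3:                                 # DE/mean-current/1
            F, base = 0.25, X_c1                              # Equation (12)
        else:                                                 # DE/mean-current-best/1
            F, base = (1 - 2 * rng.random()) * 0.5, X_best    # Equation (13)
        X_i = population[i]
        mutated[i] = base + F * (X_c1 - X_i) + F * (X_c2 - X_i)
    return mutated
```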

3.3. Fusion Lens Imaging Backward Learning and Dimension-by-Dimension Optimization

The position of the current best individual is particularly important, but in the basic dung beetle optimization (DBO) algorithm, the information contained in the current best individual is not fully utilized, leading to a lack of exploitation of the best individual. Therefore, this paper introduces the lens imaging reverse learning strategy [33,34] to perturb the best individual to help the algorithm escape local optima. The idea is to generate a reverse position based on the current coordinates to expand the search range, which can effectively avoid local optima and broaden the search scope of the algorithm. The principle of the lens imaging reverse learning strategy is depicted in Figure 4.
Suppose within a certain space, the global optimal position X b is obtained by projecting an individual P with a height of h onto the x-axis. Here, l b and u b represent the lower and upper limits of the coordinate axis. Placing a convex lens with a focal length f at the origin O, a point P * with a height h * can be obtained through the convex lens. At this point, the projection X n b of P * on the x-axis is the reverse solution. According to the principle of lens imaging, Equation (15) can be derived.
$$\frac{\frac{lb + ub}{2} - X_b}{X_{nb} - \frac{lb + ub}{2}} = \frac{h}{h^{*}} \tag{15}$$
Let $h / h^{*} = k$; by transformation, we obtain Equation (16).
$$X_{nb} = \frac{ub + lb}{2} + \frac{ub + lb}{2k} - \frac{X_b}{k} \tag{16}$$
By adjusting the value of $k$ in lens imaging reverse learning, a dynamic reverse solution can be obtained: a smaller $k$ produces a reverse solution over a larger range, while a larger $k$ produces one over a smaller range. This paper introduces the adaptive $k$ of Equation (17). As the number of iterations increases, the value of $k$ grows from small to large, matching the need for a large-scale search in the early stage and a fine search in the late stage.
$$k = \left( 1 + \left( \frac{t}{T_{max}} \right)^{0.5} \right)^{10} \tag{17}$$
In dung beetle optimization (DBO), each agent represents a potential solution. When an agent is updated, all of its dimensions are updated at once, overlooking changes in individual dimensions. Suppose one dimension of an agent moves towards a better solution, but degradation in other dimensions lowers the overall solution quality, causing that solution to be abandoned; this wastes evaluation effort and slows convergence [35]. With a greedy per-dimension update strategy, the evolution of a single dimension is not discarded because of degradation in other dimensions, and any update that improves the solution is accepted. This ensures that the algorithm can exploit evolutionary information from individual dimensions for better local search, thereby obtaining higher-quality solutions and improving convergence speed [36].
This paper therefore proposes a strategy combining lens imaging reverse learning and dimension-by-dimension optimization. The core idea is to update the best value obtained through lens imaging reverse learning in a per-dimension manner, combined with greedy rules to optimize the solution. Specifically, a mutation operation is first applied to the best individual $X_b$ as shown in Equation (16), yielding a mutated individual $X_{nb}$. The fitness values of $X_b$ and $X_{nb}$ are then compared, and the individual with better fitness is chosen as the benchmark position. Next, each dimension of the other position replaces the corresponding dimension of the benchmark position one by one, under a greedy rule: if the overall fitness improves after replacing a dimension, the replaced value of that dimension is retained; otherwise, the benchmark position remains unchanged. Through such per-dimension optimization, the structure of the solution can be finely adjusted, further enhancing its quality. Finally, the benchmark position after dimension replacement becomes the new $X_b$ for the next generation. This process integrates lens imaging reverse learning with dimensional optimization, gradually approaching the global optimal solution through continuous iteration; a compact sketch is given below, and the complete flow is shown in Algorithm 2.
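A minimal sketch of this refinement step (names are ours; $f$ is the objective to be minimized, and $lb$, $ub$ are scalars or per-dimension bound arrays):

```python
import numpy as np

def lens_dimension_refine(X_best, f, lb, ub, t, T_max):
    """Lens imaging reverse learning (Eqs. (16)-(17)) + per-dimension greedy rule."""
    k = (1 + (t / T_max) ** 0.5) ** 10                       # Equation (17)
    X_nb = (ub + lb) / 2 + (ub + lb) / (2 * k) - X_best / k  # Equation (16)
    X_nb = np.clip(X_nb, lb, ub)
    # take the fitter of the two as the benchmark position
    if f(X_best) <= f(X_nb):
        base, other = X_best.copy(), X_nb
    else:
        base, other = X_nb.copy(), X_best
    best_fit = f(base)
    for d in range(base.size):          # dimension-by-dimension replacement
        old = base[d]
        base[d] = other[d]
        fit = f(base)
        if fit < best_fit:
            best_fit = fit              # keep the improving dimension
        else:
            base[d] = old               # otherwise restore the benchmark
    return base, best_fit
```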
Algorithm 2: Fusion lens imaging backward learning and dimension-by-dimension optimization strategies.

3.4. Complexity Analysis of MDBO

Assume that $N$ represents the population size, $D$ the dimension of the optimization problem, and $T$ the maximum number of iterations. DBO exhibits an initialization-phase complexity of $t_1 = O(N \times D)$ and an iterative-process complexity of $t_2 = O(T \times N \times D)$, giving a total complexity of $t_1 + t_2 = O(T \times N \times D)$. For MDBO, the complexity of initializing the population using Latin hypercube sampling is $t_3 = O(N \times D)$, the mean differential variation adds $t_4 = O(N \times T)$, the fusion of lens imaging reverse learning and dimension-by-dimension optimization adds $t_5 = O(T \times D)$, and the complexity of the iterative process is the same $t_2$ as in DBO. Hence, the complexity of MDBO is $t_2 + t_3 + t_4 + t_5 = O(T \times N \times D)$, equivalent to that of DBO; the performance gains do not come at the cost of higher complexity.

3.5. The MDBO Algorithm Implementation Steps

The basic framework of the MDBO algorithm is outlined in Algorithm 3. To provide a clear visualization of the process, Figure 5 illustrates the flowchart of MDBO. The algorithm aims to enhance search efficiency and convergence speed by combining multiple strategies. Specifically, MDBO utilizes Latin hypercube sampling for improved population initialization and introduces a novel differential variation strategy called “Mean Differential Variation” to enhance its ability to evade local optima. Moreover, lens imaging reverse learning is applied to the current optimal solution to expand the algorithm’s search space, combined with a dimension-by-dimension optimization strategy to improve the quality of the solution.
Algorithm 3: The framework of the MDBO algorithm.

4. Experimental Results and Discussions

In order to evaluate the performance of the improved algorithm comprehensively, this paper selects two sets of benchmark functions: CEC2017 [37] and CEC2020 [38]. The details of the benchmark functions are shown in Table 1. In CEC2017, the original F2 function has been excluded due to loss of testing capability, thus leaving 29 single-objective benchmark functions for testing. Among these, F1 and F2 are single-peaked functions with only one global minimum, F3–F9 are simple multi-modal functions with local minima, F10–F19 are mixed functions containing three or more CEC2017 benchmark functions after rotation or displacement, and F20–F29 are composite functions formed by at least three mixed functions or CEC2017 benchmark functions after rotation and displacement. CEC2020 consists of one composite single-peaked function F1, three multi-peaked functions F2–F4 after rotation and displacement, three mixed functions F5–F7, and three composite functions F8–F10.
The comparison algorithms encompass DBO [25], WOA [20], GWO [19], SCA [14], SSA [22], and HHO [21]. To ensure the fairness of the experiments, the initial population size for all algorithms is set to 30, and the maximum number of iterations is set to 500. To eliminate the influence of randomness, each algorithm is independently executed 30 times and its results are analyzed statistically. MATLAB R2020b is utilized for the software implementation.

4.1. CEC2017 Test Function Results and Analysis

4.1.1. Analysis of CEC2017 Statistical Results

The statistical outcomes for the CEC2017 test functions in 30 and 100 dimensions were documented. These include the minimum (min), mean, and standard deviation (std) over 30 independent executions of each algorithm. The best average result for each test function is highlighted in bold. The last row, “Total”, indicates the number of times each algorithm achieved the best result among all test functions. The statistical results for 30 and 100 dimensions are presented in Table 2 and Table 3, respectively.
From Table 2 and Table 3, the comprehensive analysis reveals that, overall, for the 30-dimensional case, MDBO obtained the optimal solution in 21 out of 29 test functions. For the remaining 8 test functions, MDBO achieved the second-best result, while GWO obtained 2 optimal values and SSA obtained 6 optimal values. However, in the case of 100 dimensions, MDBO only attained the best solution in 15 out of 29 test functions, yielding suboptimal outcomes in the remaining 14 test functions. A detailed examination reveals the following:
  • MDBO did not achieve the best performance among all algorithms on the unimodal function F1, whether in the 30-dimensional or 100-dimensional case. It demonstrated superior performance compared to other algorithms but fell short of SSA. Notably, MDBO excelled in the unimodal function F2, outperforming all algorithms in both 30 and 100 dimensions.
  • In the simple multimodal problems F3–F9, MDBO achieved the best average fitness value in 6 out of 7 test functions in the 30-dimensional scenario, except for F5, where it trailed slightly behind GWO. However, in the 100-dimensional scenario, MDBO exhibited weaker performance compared to GWO on functions F4, F5, F6, and F7, and weaker than SSA on functions F3, F8, and F9. Nevertheless, an improvement was observed in all benchmark functions compared to the basic DBO.
  • For the hybrid functions F10–F19, in the 30-dimensional scenario, MDBO obtained the minimum values on 7 of the 10 test functions compared to the other algorithms, ranking second after SSA on F11, F12, and F18. In the 100-dimensional scenario, MDBO secured minimum values in 7 out of 10 test functions, excluding F11, F13, and F17.
  • In the case of composite functions F20–F29, when the dimension is 30, MDBO failed to achieve the best results only on F24, F27, and F28, securing the top position on the remaining 7 functions. When the dimension is 100, MDBO exhibited weaker performance than SSA on functions F21, F24, and F27 but obtained the best results on the remaining 7 functions.
To comprehensively evaluate the performance of all algorithms, this study conducted Friedman tests on the average results of 30 independent optimization runs on the test functions for each algorithm. The average rankings of all algorithms were calculated, where a lower average ranking indicates better algorithm performance. The Friedman test results for dimensions 30 and 100 are shown in Figure 6. From the results, it is evident that the average rankings for dimensions 30 and 100 follow similar trends, with MDBO achieving the lowest average ranking, followed by SSA, GWO, DBO, HHO, WOA, and SCA, respectively. This suggests that, compared to other algorithms, MDBO generally exhibits superior performance.
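For reference, such a ranking can be reproduced with SciPy's Friedman test on a functions-by-algorithms matrix of mean results; the data below are placeholders, not the paper's values:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Placeholder matrix: 29 test functions (rows) x 7 algorithms (columns).
results = np.random.default_rng(0).random((29, 7))
stat, p = friedmanchisquare(*results.T)            # one sample per algorithm
avg_rank = rankdata(results, axis=1).mean(axis=0)  # lower average rank = better
print(f"Friedman p = {p:.3e}, average ranks = {np.round(avg_rank, 2)}")
```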

4.1.2. CEC2017 Convergence Curve Analysis

In order to assess both the accuracy and convergence speed of the algorithms, convergence curves were plotted for MDBO and other algorithms at dimension 30, as illustrated in Figure 7. It is worth noting that in each subplot, the horizontal axis represents the number of iterations, while the vertical axis represents the average convergence curve over 30 runs. From the figure, the following can be observed:
  • For the unimodal problem F1, initially, the convergence speed of MDBO was slower than SSA. However, after approximately two-thirds of the iterations, its convergence speed accelerated and gradually caught up with SSA, achieving results close to SSA. As for unimodal problem F2, the convergence speed of MDBO was comparable to other comparative algorithms. However, owing to its superior exploration capability, MDBO converged to a better solution.
  • For the simple multimodal function F3, MDBO and SSA exhibited comparable convergence speed and accuracy, outperforming all other comparative algorithms. Concerning F4, F7, F8, and F9, initially only the convergence speeds of GWO and SSA were similar to MDBO. However, after around two-thirds of the iterations, the convergence of SSA and GWO slowed down, while MDBO accelerated, rapidly converging to better positions. Regarding F5 and F6, the convergence speed of MDBO was on par with GWO and superior to the other algorithms.
  • In the case of hybrid functions F10–F19, MDBO demonstrated decent convergence speed, particularly excelling in F15 and F19, maintaining a leading position consistently. Concerning F10, F11, F13, F14, F17, and F18, MDBO exhibited similar convergence speed and accuracy to SSA. Regarding F12, MDBO’s performance was inferior to SSA but significantly outperformed other comparative algorithms, showing a substantial improvement over DBO. As for F16, the results obtained by all algorithms were similar, with minor differences.
  • In the case of composite functions F20, F21, F22, F23, and F25, MDBO consistently demonstrated the fastest convergence speed and best accuracy, outperforming all other comparative algorithms, especially on F21, where it significantly surpassed the others. Regarding F24, F26, F27, and F28, MDBO’s performance was comparable to the other algorithms, and slightly superior on certain benchmark functions. Concerning F29, the results in Figure 8 show that the convergence speeds of SSA and MDBO were similar, with MDBO holding a slight edge.

4.2. CEC2020 Test Function Results and Analysis

4.2.1. Analysis of CEC2020 Statistical Results

The experimental statistical findings for the CEC2020 test function with a dimension of 20 are depicted in Table 4. This table meticulously records the minimum (min), mean, and standard deviation (std) values resulting from 30 independent runs for each algorithm. Notably, the best average result among all algorithms is marked in bold. Furthermore, the concluding row of the table provides a tally of occurrences wherein each algorithm attained the optimal value across all test functions. From Table 4, it becomes apparent that MDBO outperformed its counterparts in nine test functions, with only a marginal deviation observed in comparison to GWO in the F3 test function.
In employing Friedman’s test, an assessment of the average rank for each algorithm across all test function outcomes was undertaken. As delineated in Figure 9, a discernible tendency is found: MDBO achieves the lowest rank, followed by SSA, GWO, DBO, HHO, WOA, and SCA. This unequivocally underscores the pronounced superiority of MDBO over its algorithmic counterparts.

4.2.2. CEC2020 Convergence Curve Analysis

Similarly, the average convergence curves for CEC2020 in 20 dimensions were plotted as shown in Figure 10. It can be observed that in the unimodal function F1, SSA initially exhibited faster convergence compared to MDBO. However, as iterations progressed, SSA became trapped in local optima, while MDBO demonstrated superior exploration capability, eventually discovering better solutions. In F2, MDBO initially exhibits slower convergence compared to DBO and SSA. Nevertheless, as DBO and SSA fall into local optima, MDBO maintains a decent convergence rate. In F6, F7, and F8, MDBO significantly outperforms other comparative algorithms in terms of convergence speed, precision, and stability. Moreover, MDBO demonstrates varying degrees of superiority in the remaining test functions. This robustly validates the effectiveness of MDBO in addressing complex optimization problems.

4.3. Wilcoxon Rank Sum Test

The Wilcoxon rank-sum test [39,40] is a non-parametric statistical test used to further determine whether the differences between the improved algorithm and the comparative algorithms are significant. In this study, the results of running the six comparative algorithms and MDBO 30 times were used as samples. The Wilcoxon rank-sum test was applied at a significance level of 0.05. When the test result’s p-value is less than 0.05, it indicates a significant difference between the two compared algorithms; otherwise, it suggests that the results of the two algorithms are comparable.
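For reference, this significance check can be reproduced as follows; the samples here are synthetic placeholders, not the paper's results:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
mdbo_runs = rng.normal(100.0, 1.0, 30)  # 30 final fitness values (synthetic)
dbo_runs = rng.normal(105.0, 2.0, 30)

stat, p_value = ranksums(mdbo_runs, dbo_runs)
print(f"p = {p_value:.3e}, significant: {p_value < 0.05}")
```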
p-values of the Wilcoxon rank-sum test for CEC2017 at dimensions 30 and 100 are displayed in Table 5 and Table 6, respectively. The p-values of the Wilcoxon rank-sum test for CEC2020 at dimension 20 are shown in Table 7. Values with p-values greater than 0.05 are highlighted in bold. The last row of each table summarizes the number of times all comparative algorithms had p-values less than 0.05 across all test functions.
Based on the results in Table 5, it is evident that at dimension 30 in CEC2017, MDBO exhibits significant disparities compared with both WOA and SCA across all test functions. When compared with DBO and HHO, MDBO shows significant differences in 28 of the test functions. Moreover, in comparison with GWO and SSA, MDBO demonstrates significant disparities in 18 and 21 test functions, respectively. Further scrutiny of Table 6 reveals that as the dimension increases to 100, MDBO exhibits significant differences compared to DBO, WOA, and SCA across all functions. When compared to GWO, SSA, and HHO, only a minority of functions show similar results, with significant differences apparent in the majority of cases.
Furthermore, according to Table 7, among the ten benchmark functions of CEC2020, it is evident that MDBO exhibits comparable performance with GWO in functions F7 and F9, whereas, in the remaining test functions, MDBO demonstrates a significant advantage. Conversely, when compared to SSA, significant differences are observed in only 5 test functions. However, when compared to DBO, WOA, SCA, and HHO, MDBO consistently demonstrates absolute superiority across all test functions.

4.4. Summary of Experiments

Upon scrutinizing the statistical results and convergence curves derived from CEC2017 at 30 dimensions, it becomes evident that MDBO exhibits superior optimization capabilities characterized by enhanced seeking ability, greater stability, accelerated convergence speed, and heightened convergence accuracy compared to its algorithmic counterparts. This trend persists even when the dimensionality is increased to 100, as MDBO continues to demonstrate commendable performance in tackling high-dimensional optimization challenges. To further validate the efficacy of MDBO in addressing complex problem landscapes, additional experimentation was conducted using CEC2020, which reaffirmed MDBO’s consistent and robust performance in handling intricate optimization scenarios, underscoring its adaptability and reliability in real-world applications.
In order to verify the difference between MDBO and other algorithms, the p-values of CEC2017 at 30 dimensions, 100 dimensions, and CEC2020 at 20 dimensions were calculated using the Wilcoxon rank sum test. The results show that MDBO is significantly different from other algorithms and has obvious advantages.
Through performance testing of the MDBO algorithm from multiple aspects, it is evident that MDBO exhibits noteworthy competitiveness in terms of convergence speed, accuracy, stability, and robustness when juxtaposed against contemporary algorithms. Moreover, its performance remains steadfast even amidst the complexities of high-dimensional optimization challenges, affirming the efficacy and relevance of MDBO in modern optimization contexts.

5. Engineering Application Design Issues

To further validate the reliability of MDBO in practical engineering applications, three typical engineering design problems are employed to assess its optimization performance across various practical scenarios. These problems include extension/compression spring design problems [41], reducer design problems [42], and welded beam design problems [43].
Engineering design optimization problems are constrained optimization problems involving design variables, necessitating constraint handling [44]. Three primary methods are commonly employed for constraint processing: the penalty function method, the feasibility rule, and the multi-objective method. In this study, the exterior penalty function method is adopted, whereby constraints are transformed into penalty terms and integrated with the objective function. This integration results in a new objective function, defined in Equation (18).
$$F(x) = f(x) + w \cdot \left( \sum_{i=1}^{m} \max\left(0, g_i(x)\right) \right)^2 \tag{18}$$
Here, $F(x)$ represents the fitness function value, while $f(x)$ and $g_i(x)$ represent the objective function value and the constraint functions, respectively. $w$ is the penalty parameter of the penalty function, set to $10^{100}$ in this article; it ensures that constraint violations are penalized during optimization, so that the search finds optimal solutions satisfying the constraints.
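A minimal sketch of this exterior penalty wrapper (the helper name `penalized` is ours; any objective and list of inequality constraints $g_i(x) \le 0$ can be plugged in):

```python
def penalized(f, constraints, w=1e100):
    """Equation (18): F(x) = f(x) + w * (sum of inequality violations)^2."""
    def F(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + w * violation ** 2
    return F
```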
In the experimental comparison on the engineering design problems, the comparison algorithms are DBO [25], WOA [20], GWO [19], SCA [14], SSA [22], and HHO [21]; for all algorithms, the population size is set to 30 and the maximum number of iterations to 500. In practical engineering scenarios, the reliability of optimization algorithms is crucial. While high average trial-run values may initially indicate promising performance, large standard deviations can signal instability and unreliability, particularly in computationally expensive real-world problems where multiple trial runs may not be feasible due to limited computational resources. Therefore, to ensure robustness and reliability, this study conducts 30 independent runs of each algorithm and computes both the mean and standard deviation of their performance metrics. This approach provides a comprehensive evaluation, accounting for both average performance and stability, essential for assessing algorithm suitability in real-world engineering applications.

5.1. Extension/Compression Spring Design Issues

The extension/compression spring design problem, illustrated in Figure 11, seeks to minimize spring weight by optimizing parameters such as wire diameter (d), average coil diameter (D), and the number of active coils (N). The optimization variables are defined by Equation (19), while the objective function is abstracted as in Equation (20). Constraints are formulated in Equation (21), and upper and lower boundaries are set by Equation (22). This problem endeavors to identify the optimal parameter combination to achieve desired performance while simultaneously minimizing spring weight, thereby facilitating efficient and lightweight spring design for diverse applications.
Consider:
$$x = [x_1, x_2, x_3] = [d, D, N] \tag{19}$$
Minimize:
$$f(x) = (x_3 + 2)\, x_2 x_1^2 \tag{20}$$
Subject to:
$$\begin{aligned} g_1(x) &= 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, \\ g_2(x) &= \frac{4 x_2^2 - x_1 x_2}{12566 \left( x_2 x_1^3 - x_1^4 \right)} + \frac{1}{5108 x_1^2} - 1 \le 0, \\ g_3(x) &= 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \\ g_4(x) &= \frac{x_1 + x_2}{1.5} - 1 \le 0. \end{aligned} \tag{21}$$
Parameters range:
$$0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15. \tag{22}$$
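To illustrate how this formulation plugs into Equation (18), the following self-contained sketch evaluates the penalized spring objective; the random search at the end is only a hypothetical stand-in for running MDBO, and all helper names are ours:

```python
import numpy as np

def spring_weight(x):            # Equation (20): f(x) = (x3 + 2) x2 x1^2
    return (x[2] + 2) * x[1] * x[0] ** 2

def constraint_values(x):        # Equation (21), each g_i(x) <= 0
    g1 = 1 - x[1] ** 3 * x[2] / (71785 * x[0] ** 4)
    g2 = ((4 * x[1] ** 2 - x[0] * x[1])
          / (12566 * (x[1] * x[0] ** 3 - x[0] ** 4))
          + 1 / (5108 * x[0] ** 2) - 1)
    g3 = 1 - 140.45 * x[0] / (x[1] ** 2 * x[2])
    g4 = (x[0] + x[1]) / 1.5 - 1
    return [g1, g2, g3, g4]

def fitness(x, w=1e100):         # Equation (18), exterior penalty
    violation = sum(max(0.0, g) for g in constraint_values(x))
    return spring_weight(x) + w * violation ** 2

lb, ub = np.array([0.05, 0.25, 2.0]), np.array([2.0, 1.3, 15.0])  # Equation (22)
rng = np.random.default_rng(1)
xs = lb + (ub - lb) * rng.random((20000, 3))
best = min(xs, key=fitness)      # random search stands in for MDBO here
print(best, spring_weight(best))
```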
For each algorithm, the experiment recorded the mean and standard deviation of the results over 30 runs on this problem, and the best result and its parameters from one randomly selected run are also reported; the results are shown in Table 8. It is evident that MDBO achieves the lowest manufacturing cost in solving this problem. Furthermore, the consistency of this result is supported by the mean and standard deviation of the outcomes, indicating its stability and reliability.

5.2. Reducer Design Issues

The schematic diagram of the speed reducer design problem is depicted in Figure 12. The problem involves seven design variables, which are end face width ( x 1 ), number of tooth modules ( x 2 ), number of teeth in the pinion ( x 3 ), length of the first shaft between the bearings ( x 4 ), length of the second shaft between the bearings ( x 5 ), diameter of the first shaft ( x 6 ), and diameter of the second shaft ( x 7 ). The objective of the problem is to minimize the total weight of the gearbox by optimizing seven variables. The objective function is represented by Equation (23), while the constraints are described by Equation (24). The upper and lower bounds for each variable are defined by Equation (25).
Minimize:
$$\begin{aligned} f(x) = {} & 0.7854 x_1 x_2^2 \left( 3.3333 x_3^2 + 14.9334 x_3 - 43.0934 \right) - 1.508 x_1 \left( x_6^2 + x_7^2 \right) \\ & + 7.4777 \left( x_6^3 + x_7^3 \right) + 0.7854 \left( x_4 x_6^2 + x_5 x_7^2 \right) \end{aligned} \tag{23}$$
Subject to:
$$\begin{aligned} g_1(x) &= \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \quad g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, \\ g_3(x) &= \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \quad g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0, \\ g_5(x) &= \frac{\sqrt{\left( 745 x_4 / (x_2 x_3) \right)^2 + 16.9 \times 10^6}}{110.0\, x_6^3} - 1 \le 0, \\ g_6(x) &= \frac{\sqrt{\left( 745 x_5 / (x_2 x_3) \right)^2 + 157.5 \times 10^6}}{85.0\, x_7^3} - 1 \le 0, \\ g_7(x) &= \frac{x_2 x_3}{40} - 1 \le 0, \quad g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0, \quad g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0, \\ g_{10}(x) &= \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0, \quad g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0. \end{aligned} \tag{24}$$
Parameters range:
$$2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.8 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5.0 \le x_7 \le 5.5. \tag{25}$$
The experimental results of the reducer design problem are presented in Table 9. From the average value, MDBO exhibits slightly superior performance compared to SSA and significantly outperforms other algorithms, underscoring its efficacy in achieving high solution accuracy for this problem. Additionally, considering the standard deviation, MDBO showcases the lowest value, indicating its exceptional stability and robustness in producing consistent results across multiple runs.

5.3. Welded Beam Design Issues

The objective of the welded beam design problem is to minimize the cost of the welded beam. As shown in Figure 13, the problem has four parametric variables: weld thickness ($h$), length of the connected portion of the bar ($l$), height of the bar ($t$), and thickness of the reinforcement bar ($b$), as in Equation (26). The objective function is defined in Equation (27), and its minimization is bounded by constraints on the shear stress ($\tau$), the bending stress in the beam ($\sigma$), the buckling load on the bar ($P_c$), and the end deflection of the beam ($\delta$), as in Equation (28). The four variable parameters are bounded as in Equation (29), and the values of the remaining parameters and their expressions are given in Equation (30).
Consider:
$$\tilde{x} = [x_1, x_2, x_3, x_4] = [h, l, t, b] \tag{26}$$
Minimize:
$$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 \left( 14.0 + x_2 \right) \tag{27}$$
Subject to:
$$\begin{aligned} g_1(x) &= \tau(x) - \tau_{\max} \le 0, \quad g_2(x) = \sigma(x) - \sigma_{\max} \le 0, \quad g_3(x) = \delta(x) - \delta_{\max} \le 0, \\ g_4(x) &= x_1 - x_4 \le 0, \quad g_5(x) = P - P_c(x) \le 0, \quad g_6(x) = 0.125 - x_1 \le 0, \\ g_7(x) &= 1.10471 x_1^2 + 0.04811 x_3 x_4 \left( 14.0 + x_2 \right) - 5.0 \le 0. \end{aligned} \tag{28}$$
Parameters range:
$$0.1 \le x_1, x_4 \le 2, \quad 0.1 \le x_2, x_3 \le 10, \tag{29}$$
where
$$\begin{aligned} \tau(x) &= \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{M R}{J}, \\ M &= P \left( L + \frac{x_2}{2} \right), \quad R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2}, \\ J &= 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\}, \quad \sigma(x) = \frac{6 P L}{x_4 x_3^2}, \quad \delta(x) = \frac{4 P L^3}{E x_3^3 x_4}, \\ P_c(x) &= \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right), \\ P &= 6000\ \mathrm{lb}, \quad L = 14\ \mathrm{in}, \quad \delta_{\max} = 0.25\ \mathrm{in}, \quad E = 30 \times 10^6\ \mathrm{psi}, \\ G &= 12 \times 10^6\ \mathrm{psi}, \quad \tau_{\max} = 13600\ \mathrm{psi}, \quad \sigma_{\max} = 30000\ \mathrm{psi}. \end{aligned} \tag{30}$$
The optimization results for the welded beam design problem are displayed in Table 10. It is evident that MDBO achieves the lowest average manufacturing cost; in one representative run, MDBO attains a cost of 1.692769435 at x = [0.205729953, 3.234915914, 9.036617034, 0.205729953]. Compared with the other algorithms, MDBO demonstrates competitive performance, highlighting its effectiveness in this optimization task.

6. Conclusions

In this paper, based on the deficiencies of the DBO algorithm, the multi-strategy improved DBO algorithm (MDBO) is proposed. First, Latin hypercube sampling is used to initialize the population, improving population diversity and reducing the possibility of the algorithm falling into local optimal solutions. Second, mean differential variation is applied to the population individuals to balance the local and global exploration of the algorithm and improve its ability to escape local optima. Finally, lens imaging reverse learning fused with dimension-by-dimension optimization is applied to the global optimal solution, making full use of the optimal solution’s information while improving its quality and promoting the convergence of the algorithm. To verify the performance of MDBO, this paper evaluates the algorithm from several aspects using the CEC2017 and CEC2020 test functions. Finally, the proposed MDBO algorithm is successfully applied to three real-world engineering application problems.
Extensive experimental results across several aspects show that the proposed MDBO algorithm exhibits stronger optimization ability, faster convergence speed, higher convergence accuracy, and better robustness than other classical metaheuristic algorithms, and it also demonstrates better performance in several practical engineering applications. However, MDBO still faces challenges in reaching the theoretical optimum within a short time when solving some complex problems. In future work, on the one hand, other novel algorithms can be combined with MDBO to improve its efficiency and optimization ability; on the other hand, the improved algorithm can be applied to more complex real-world optimization problems, such as UAV path planning, polling systems [45,46], and NP-hard problems.

Author Contributions

Conceptualization, B.H. and X.W.; methodology, M.Y.; software, H.Z.; validation, M.Y., X.W. and H.Z.; formal analysis, H.Y.; investigation, H.Z.; resources, H.Y.; data curation, M.Y.; writing—original draft preparation, M.Y.; writing—review and editing, X.W.; visualization, M.Y.; supervision, B.H.; project administration, X.W.; funding acquisition, B.H. All authors have read and agreed to the published version of this manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  2. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219. [Google Scholar] [CrossRef]
  3. Kallioras, N.A.; Lagaros, N.D.; Avtzis, D.N. Pity beetle algorithm–A new metaheuristic inspired by the behavior of bark beetles. Adv. Eng. Softw. 2018, 121, 147–166. [Google Scholar] [CrossRef]
  4. Wang, X.; Wei, Y.; Guo, Z.; Wang, J.; Yu, H.; Hu, B. A Sinh–Cosh-Enhanced DBO Algorithm Applied to Global Optimization Problems. Biomimetics 2024, 9, 271. [Google Scholar] [CrossRef]
  5. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. Comput. Intell. Multimed. Big Data Cloud Eng. Appl. 2018, 185–231. [Google Scholar]
  6. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef]
  7. Gabis, A.B.; Meraihi, Y.; Mirjalili, S.; Ramdane-Cherif, A. A comprehensive survey of sine cosine algorithm: Variants and applications. Artif. Intell. Rev. 2021, 54, 5469–5540. [Google Scholar] [CrossRef]
  8. Zhong, C.; Li, G.; Meng, Z.; Li, H.; He, W. A self-adaptive quantum equilibrium optimizer with artificial bee colony for feature selection. Comput. Biol. Med. 2023, 153, 106520. [Google Scholar] [CrossRef]
  9. Gharehchopogh, F.S.; Gholizadeh, H. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24. [Google Scholar] [CrossRef]
  10. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  11. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  12. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973. [Google Scholar] [CrossRef]
  13. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  14. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  15. Chakraborty, A.; Kar, A.K. Swarm intelligence: A review of algorithms. Nat.-Inspired Comput. Optim. Theory Appl. 2017, 10, 475–494. [Google Scholar]
  16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-international Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York City, NY, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  17. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  18. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  21. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  22. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  23. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  24. Varshney, M.; Kumar, P.; Ali, M.; Gulzar, Y. Using the Grey Wolf Aquila Synergistic Algorithm for Design Problems in Structural Engineering. Biomimetics 2024, 9, 54. [Google Scholar] [CrossRef]
  25. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  26. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  27. Zhang, R.; Zhu, Y. Predicting the Mechanical Properties of Heat-Treated Woods Using Optimization-Algorithm-Based BPNN. Forests 2023, 14, 935. [Google Scholar] [CrossRef]
  28. Tharwat, A.; Schenck, W. Population initialization techniques for evolutionary algorithms for single-objective constrained optimization problems: Deterministic vs. stochastic techniques. Swarm Evol. Comput. 2021, 67, 100952. [Google Scholar] [CrossRef]
29. Yu, H.; Chung, C.; Wong, K.; Lee, H.; Zhang, J. Probabilistic load flow evaluation with hybrid Latin hypercube sampling and Cholesky decomposition. IEEE Trans. Power Syst. 2009, 24, 661–667. [Google Scholar] [CrossRef]
  30. Tu, B.; Wang, F.; Huo, Y.; Wang, X. A hybrid algorithm of grey wolf optimizer and harris hawks optimization for solving global optimization problems with improved convergence performance. Sci. Rep. 2023, 13, 22909. [Google Scholar] [CrossRef]
  31. Rosli, S.J.; Rahim, H.A.; Abdul Rani, K.N.; Ngadiran, R.; Ahmad, R.B.; Yahaya, N.Z.; Abdulmalek, M.; Jusoh, M.; Yasin, M.N.M.; Sabapathy, T.; et al. A hybrid modified method of the sine cosine algorithm using Latin hypercube sampling with the cuckoo search algorithm for optimization problems. Electronics 2020, 9, 1786. [Google Scholar] [CrossRef]
  32. Layeb, A. Differential evolution algorithms with novel mutations, adaptive parameters, and Weibull flight operator. Soft Comput. 2024, 1–53. [Google Scholar] [CrossRef]
  33. Xiao, Y.; Cui, H.; Hussien, A.G.; Hashim, F.A. MSAO: A multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications. Adv. Eng. Inform. 2024, 61, 102464. [Google Scholar] [CrossRef]
  34. He, Y.; Wang, M. An improved chaos sparrow search algorithm for UAV path planning. Sci. Rep. 2024, 14, 366. [Google Scholar] [CrossRef]
  35. Qu, D.; Zhang, R.; Peng, S.; Wen, Z.; Yu, C.; Wang, R.; Yang, T.; Zhou, Y. A three-phase sheep optimization algorithm for numerical and engineering optimization problems. Expert Syst. Appl. 2024, 248, 123338. [Google Scholar] [CrossRef]
  36. Li, Y.; Zhao, Y.; Liu, J. Dimension by dimension dynamic sine cosine algorithm for global optimization problems. Appl. Soft Comput. 2021, 98, 106933. [Google Scholar] [CrossRef]
  37. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Nanyang Technological University: Singapore, 2016; pp. 1–18. [Google Scholar]
  38. Liang, J.J.; Qu, B.; Gong, D.; Yue, C. Problem Definitions and Evaluation Criteria for the CEC 2019 Special Session on Multimodal Multiobjective Optimization; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China, 2019. [Google Scholar]
  39. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  40. Zhao, S.; Zhang, T.; Cai, L.; Yang, R. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  41. Li, K.; Huang, H.; Fu, S.; Ma, C.; Fan, Q.; Zhu, Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116199. [Google Scholar] [CrossRef]
  42. He, K.; Zhang, Y.; Wang, Y.K.; Zhou, R.H.; Zhang, H.Z. EABOA: Enhanced adaptive butterfly optimization algorithm for numerical optimization and engineering design problems. Alex. Eng. J. 2024, 87, 543–573. [Google Scholar] [CrossRef]
  43. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  44. Zhong, R.; Yu, J.; Zhang, C.; Munetomo, M. SRIME: A strengthened RIME with Latin hypercube sampling and embedded distance-based selection for engineering optimization problems. Neural Comput. Appl. 2024, 36, 1–20. [Google Scholar] [CrossRef]
  45. Wang, X.; Yang, Z.; Ding, H. Application of Polling Scheduling in Mobile Edge Computing. Axioms 2023, 12, 709. [Google Scholar] [CrossRef]
  46. Wang, X.; Yang, Z.; Ding, H.; Guan, Z. Analysis and prediction of UAV-assisted mobile edge computing systems. Math. Biosci. Eng. 2023, 20, 21267–21291. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Boundary selection strategy.
Figure 2. Species distribution.
Figure 3. Comparison of 10 points generated by LHS with 10 randomly generated points: (a) the 10 points generated by LHS; (b) the 10 randomly generated points.
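A comparison like Figure 3 can be reproduced with a short NumPy sketch of Latin hypercube sampling; the 2-D domain, sample count, and seed below are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def latin_hypercube(n_points, dim, lower, upper, rng=None):
    """Latin hypercube sampling: split each dimension into n_points equal
    strata, draw one value inside each stratum, then shuffle the strata
    independently per dimension so the rows form the sample points."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(n_points, dim))                  # position within each stratum
    strata = (np.arange(n_points)[:, None] + u) / n_points
    for d in range(dim):                                   # decouple the dimensions
        strata[:, d] = rng.permutation(strata[:, d])
    return lower + strata * (upper - lower)

# Example: 10 points in 2-D on [-100, 100]^2, matching the setting of Figure 3.
print(latin_hypercube(10, 2, -100.0, 100.0, rng=42))
```

Because every stratum of every coordinate is hit exactly once, the 10 LHS points spread evenly across the square, whereas purely random points tend to cluster and leave gaps, which is the contrast Figure 3 illustrates.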
Figure 4. Lens imaging reverse learning.
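As a companion to Figure 4, the sketch below shows the form of lens imaging reverse learning commonly used in improved metaheuristics, x' = (a + b)/2 + (a + b)/(2k) − x/k over bounds [a, b]; the scaling factor k is an illustrative assumption, since the paper's exact parameterization is given in the main text.

```python
import numpy as np

def lens_imaging_reverse(x, lower, upper, k=1000.0):
    """Common form of lens-imaging reverse learning:
    x' = (a + b)/2 + (a + b)/(2k) - x/k on bounds [a, b].
    With k = 1 this reduces to ordinary opposition-based learning."""
    mid = (lower + upper) / 2.0
    x_rev = mid + mid / k - x / k
    return np.clip(x_rev, lower, upper)  # keep the reverse point inside the bounds

# Example: reverse point of a current best solution (illustrative values).
print(lens_imaging_reverse(np.array([30.0, -75.0]), -100.0, 100.0))
```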
Figure 5. The flowchart of MDBO.
Figure 6. CEC2017 average rank.
Figure 7. CEC2017 convergence curves (dimension = 30).
Figure 8. F29 convergence curve.
Figure 9. CEC2020 average rank.
Figure 10. CEC2020 convergence curves (dimension = 20).
Figure 11. Extension/compression spring design problem.
Figure 12. Reducer design problem.
Figure 13. Welded beam design problem.
Table 1. CEC2017 and CEC2020 benchmark functions.
Type | Function | Dimension | Minimum | CEC Type
Unimodal functions | Shifted and Rotated Bent Cigar Function | 30D, 100D | 100 | CEC 2017 F1
| Shifted and Rotated Zakharov Function | 30D, 100D | 200 | CEC 2017 F2
| Shifted and Rotated Bent Cigar Function | 20D | 100 | CEC 2020 F1
Simple multimodal | Shifted and Rotated Rosenbrock's Function | 30D, 100D | 300 | CEC 2017 F3
| Shifted and Rotated Rastrigin's Function | 30D, 100D | 400 | CEC 2017 F4
| Shifted and Rotated Expanded Scaffer's F6 Function | 30D, 100D | 500 | CEC 2017 F5
| Shifted and Rotated Lunacek bi-Rastrigin Function | 30D, 100D | 600 | CEC 2017 F6
| Shifted and Rotated Non-Continuous Rastrigin's Function | 30D, 100D | 700 | CEC 2017 F7
| Shifted and Rotated Levy Function | 30D, 100D | 800 | CEC 2017 F8
| Shifted and Rotated Schwefel's Function | 30D, 100D | 900 | CEC 2017 F9
Basic functions | Shifted and Rotated Schwefel's Function | 20D | 1100 | CEC 2020 F2
| Shifted and Rotated Lunacek bi-Rastrigin Function | 20D | 700 | CEC 2020 F3
| Expanded Rosenbrock's plus Griewangk's Function | 20D | 1900 | CEC 2020 F4
Hybrid functions | Hybrid Function 1 (N = 3) | 30D, 100D | 1000 | CEC 2017 F10
| Hybrid Function 2 (N = 3) | 30D, 100D | 1100 | CEC 2017 F11
| Hybrid Function 3 (N = 3) | 30D, 100D | 1200 | CEC 2017 F12
| Hybrid Function 4 (N = 4) | 30D, 100D | 1300 | CEC 2017 F13
| Hybrid Function 5 (N = 4) | 30D, 100D | 1400 | CEC 2017 F14
| Hybrid Function 6 (N = 4) | 30D, 100D | 1500 | CEC 2017 F15
| Hybrid Function 7 (N = 5) | 30D, 100D | 1600 | CEC 2017 F16
| Hybrid Function 8 (N = 5) | 30D, 100D | 1700 | CEC 2017 F17
| Hybrid Function 9 (N = 5) | 30D, 100D | 1800 | CEC 2017 F18
| Hybrid Function 10 (N = 6) | 30D, 100D | 1900 | CEC 2017 F19
| Hybrid Function 1 (N = 3) | 20D | 1700 | CEC 2020 F5
| Hybrid Function 2 (N = 4) | 20D | 1600 | CEC 2020 F6
| Hybrid Function 3 (N = 5) | 20D | 2100 | CEC 2020 F7
Composition functions | Composition Function 1 (N = 3) | 30D, 100D | 2000 | CEC 2017 F20
| Composition Function 2 (N = 3) | 30D, 100D | 2100 | CEC 2017 F21
| Composition Function 3 (N = 4) | 30D, 100D | 2200 | CEC 2017 F22
| Composition Function 4 (N = 4) | 30D, 100D | 2300 | CEC 2017 F23
| Composition Function 5 (N = 5) | 30D, 100D | 2400 | CEC 2017 F24
| Composition Function 6 (N = 5) | 30D, 100D | 2500 | CEC 2017 F25
| Composition Function 7 (N = 6) | 30D, 100D | 2600 | CEC 2017 F26
| Composition Function 8 (N = 6) | 30D, 100D | 2700 | CEC 2017 F27
| Composition Function 9 (N = 3) | 30D, 100D | 2800 | CEC 2017 F28
| Composition Function 10 (N = 3) | 30D, 100D | 2900 | CEC 2017 F29
| Composition Function 1 (N = 3) | 20D | 2200 | CEC 2020 F8
| Composition Function 2 (N = 4) | 20D | 2400 | CEC 2020 F9
| Composition Function 3 (N = 5) | 20D | 2500 | CEC 2020 F10
Search range: [−100, 100]^D.
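As an illustration of how the shifted and rotated functions in Table 1 are evaluated, the sketch below implements the construction F(x) = f(M(x − o)) + F* for the Bent Cigar base function f(z) = z1² + 10⁶ Σ zi²; the shift vector o and rotation matrix M are generated randomly here as stand-ins for the official CEC data files.

```python
import numpy as np

def bent_cigar(z):
    """Base Bent Cigar function: f(z) = z_1^2 + 10^6 * sum of z_2..z_D squared."""
    return z[0] ** 2 + 1e6 * np.sum(z[1:] ** 2)

def shifted_rotated(x, shift, rotation, base=bent_cigar, f_star=100.0):
    """Evaluate F(x) = base(M (x - o)) + F*, the construction used by the
    CEC suites; o and M normally come from the official data files."""
    return base(rotation @ (x - shift)) + f_star

dim = 30
rng = np.random.default_rng(0)
o = rng.uniform(-80.0, 80.0, dim)                     # illustrative shift vector
M, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random orthogonal matrix
x = rng.uniform(-100.0, 100.0, dim)                   # a point in [-100, 100]^D
print(shifted_rotated(x, o, M))   # large value for a random point
print(shifted_rotated(o, o, M))   # exactly F* = 100 at the optimum x = o
```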
Table 2. CEC2017 test results (dimension = 30).
Function | Metric | MDBO | DBO | WOA | GWO | SCA | SSA | HHO
F1 | min | 2.83E+02 | 8.37E+07 | 2.64E+09 | 4.48E+08 | 1.22E+10 | 1.70E+02 | 1.43E+08
F1 | mean | 1.57E+04 | 3.42E+08 | 5.36E+09 | 2.46E+09 | 2.04E+10 | 6.43E+03 | 4.77E+08
F1 | std | 3.61E+04 | 2.67E+08 | 2.11E+09 | 1.53E+09 | 3.95E+09 | 6.19E+03 | 2.79E+08
F2 | min | 1.89E+04 | 6.92E+04 | 1.21E+05 | 4.69E+04 | 5.35E+04 | 3.33E+04 | 4.11E+04
F2 | mean | 3.22E+04 | 9.73E+04 | 2.84E+05 | 6.55E+04 | 8.74E+04 | 5.06E+04 | 5.64E+04
F2 | std | 7.14E+03 | 4.07E+04 | 7.69E+04 | 1.22E+04 | 1.73E+04 | 7.85E+03 | 7.67E+03
F3 | min | 4.70E+02 | 5.31E+02 | 6.74E+02 | 5.10E+02 | 1.61E+03 | 4.30E+02 | 5.52E+02
F3 | mean | 5.01E+02 | 6.71E+02 | 1.53E+03 | 6.40E+02 | 2.93E+03 | 5.06E+02 | 7.58E+02
F3 | std | 1.58E+01 | 1.52E+02 | 6.60E+02 | 1.39E+02 | 7.80E+02 | 2.49E+01 | 1.43E+02
F4 | min | 5.57E+02 | 6.48E+02 | 7.35E+02 | 5.77E+02 | 7.80E+02 | 6.46E+02 | 6.93E+02
F4 | mean | 6.00E+02 | 7.52E+02 | 8.56E+02 | 6.24E+02 | 8.20E+02 | 7.46E+02 | 7.76E+02
F4 | std | 2.73E+01 | 5.55E+01 | 4.77E+01 | 3.58E+01 | 2.24E+01 | 5.58E+01 | 3.35E+01
F5 | min | 6.10E+02 | 6.31E+02 | 6.59E+02 | 6.04E+02 | 6.47E+02 | 6.22E+02 | 6.54E+02
F5 | mean | 6.17E+02 | 6.50E+02 | 6.76E+02 | 6.12E+02 | 6.65E+02 | 6.48E+02 | 6.67E+02
F5 | std | 4.68E+00 | 9.32E+00 | 1.01E+01 | 4.11E+00 | 8.43E+00 | 1.08E+01 | 6.26E+00
F6 | min | 8.01E+02 | 8.86E+02 | 1.12E+03 | 8.14E+02 | 1.18E+03 | 1.04E+03 | 1.14E+03
F6 | mean | 8.85E+02 | 1.00E+03 | 1.30E+03 | 9.01E+02 | 1.26E+03 | 1.23E+03 | 1.32E+03
F6 | std | 4.30E+01 | 6.51E+01 | 9.88E+01 | 5.93E+01 | 6.29E+01 | 8.59E+01 | 6.01E+01
F7 | min | 8.48E+02 | 9.25E+02 | 9.66E+02 | 8.60E+02 | 1.05E+03 | 9.06E+02 | 9.48E+02
F7 | mean | 8.90E+02 | 1.03E+03 | 1.09E+03 | 9.00E+02 | 1.10E+03 | 9.77E+02 | 9.93E+02
F7 | std | 2.66E+01 | 4.78E+01 | 6.51E+01 | 2.47E+01 | 2.62E+01 | 2.59E+01 | 2.38E+01
F8 | min | 1.14E+03 | 3.10E+03 | 5.54E+03 | 1.24E+03 | 6.24E+03 | 3.92E+03 | 6.34E+03
F8 | mean | 1.79E+03 | 6.85E+03 | 1.20E+04 | 2.86E+03 | 8.65E+03 | 5.31E+03 | 8.56E+03
F8 | std | 3.76E+02 | 2.49E+03 | 3.31E+03 | 1.10E+03 | 1.75E+03 | 3.93E+02 | 1.16E+03
F9 | min | 3.54E+03 | 4.64E+03 | 6.40E+03 | 3.51E+03 | 7.83E+03 | 4.29E+03 | 4.79E+03
F9 | mean | 5.07E+03 | 6.53E+03 | 7.34E+03 | 5.14E+03 | 8.97E+03 | 5.30E+03 | 5.99E+03
F9 | std | 6.92E+02 | 1.13E+03 | 6.15E+02 | 1.30E+03 | 3.27E+02 | 5.14E+02 | 7.47E+02
F10 | min | 1.16E+03 | 1.35E+03 | 4.94E+03 | 1.36E+03 | 2.84E+03 | 1.20E+03 | 1.28E+03
F10 | mean | 1.21E+03 | 1.97E+03 | 1.06E+04 | 2.57E+03 | 4.46E+03 | 1.33E+03 | 1.60E+03
F10 | std | 3.34E+01 | 7.72E+02 | 4.43E+03 | 1.13E+03 | 1.22E+03 | 7.98E+01 | 1.59E+02
F11 | min | 1.85E+05 | 5.49E+06 | 6.25E+07 | 1.13E+07 | 1.31E+09 | 1.73E+05 | 9.26E+06
F11 | mean | 2.47E+06 | 6.00E+07 | 5.05E+08 | 1.14E+08 | 2.59E+09 | 1.40E+06 | 8.78E+07
F11 | std | 2.89E+06 | 7.06E+07 | 3.63E+08 | 9.79E+07 | 7.54E+08 | 1.15E+06 | 8.97E+07
F12 | min | 1.65E+03 | 2.28E+04 | 1.42E+06 | 4.75E+04 | 4.67E+08 | 5.48E+03 | 4.53E+05
F12 | mean | 6.09E+05 | 9.81E+06 | 1.48E+07 | 2.13E+07 | 1.22E+09 | 1.76E+05 | 2.07E+06
F12 | std | 1.53E+06 | 1.97E+07 | 1.71E+07 | 6.03E+07 | 7.62E+08 | 8.08E+05 | 4.18E+06
F13 | min | 2.92E+03 | 8.75E+03 | 5.49E+04 | 2.36E+04 | 7.79E+04 | 3.65E+03 | 1.66E+04
F13 | mean | 4.47E+04 | 4.14E+05 | 2.56E+06 | 7.38E+05 | 7.67E+05 | 5.23E+04 | 1.29E+06
F13 | std | 4.56E+04 | 6.55E+05 | 2.51E+06 | 8.95E+05 | 5.65E+05 | 4.35E+04 | 1.22E+06
F14 | min | 1.71E+03 | 3.38E+03 | 2.19E+05 | 2.22E+04 | 5.46E+06 | 2.12E+03 | 2.53E+04
F14 | mean | 1.36E+04 | 1.17E+05 | 5.22E+06 | 3.88E+06 | 5.59E+07 | 1.46E+04 | 1.54E+05
F14 | std | 1.00E+04 | 1.96E+05 | 8.26E+06 | 1.37E+07 | 4.20E+07 | 1.39E+04 | 6.03E+04
F15 | min | 2.09E+03 | 2.37E+03 | 2.99E+03 | 2.32E+03 | 3.57E+03 | 2.16E+03 | 3.01E+03
F15 | mean | 2.68E+03 | 3.25E+03 | 4.46E+03 | 2.71E+03 | 4.16E+03 | 2.89E+03 | 3.71E+03
F15 | std | 3.14E+02 | 4.60E+02 | 6.61E+02 | 3.62E+02 | 2.72E+02 | 4.21E+02 | 4.90E+02
F16 | min | 1.80E+03 | 2.19E+03 | 2.09E+03 | 1.80E+03 | 2.43E+03 | 1.99E+03 | 2.12E+03
F16 | mean | 2.10E+03 | 2.72E+03 | 2.74E+03 | 2.11E+03 | 2.84E+03 | 2.55E+03 | 2.66E+03
F16 | std | 1.90E+02 | 2.58E+02 | 3.18E+02 | 1.81E+02 | 2.13E+02 | 2.79E+02 | 3.61E+02
F17 | min | 1.06E+05 | 1.19E+05 | 2.64E+05 | 7.38E+04 | 3.43E+06 | 1.02E+05 | 1.56E+05
F17 | mean | 4.92E+05 | 3.91E+06 | 1.49E+07 | 2.49E+06 | 1.74E+07 | 6.74E+05 | 2.96E+06
F17 | std | 3.49E+05 | 5.61E+06 | 1.30E+07 | 2.79E+06 | 1.24E+07 | 7.25E+05 | 2.78E+06
F18 | min | 1.98E+03 | 2.57E+03 | 3.63E+05 | 1.80E+04 | 2.12E+07 | 2.05E+03 | 8.69E+04
F18 | mean | 1.41E+04 | 7.24E+06 | 2.60E+07 | 1.45E+06 | 1.10E+08 | 1.13E+04 | 1.79E+06
F18 | std | 1.50E+04 | 1.34E+07 | 2.53E+07 | 2.65E+06 | 9.34E+07 | 1.36E+04 | 1.68E+06
F19 | min | 2.14E+03 | 2.37E+03 | 2.40E+03 | 2.19E+03 | 2.61E+03 | 2.37E+03 | 2.41E+03
F19 | mean | 2.43E+03 | 2.76E+03 | 2.91E+03 | 2.51E+03 | 2.94E+03 | 2.74E+03 | 2.83E+03
F19 | std | 2.20E+02 | 2.14E+02 | 2.38E+02 | 1.79E+02 | 1.51E+02 | 2.39E+02 | 2.13E+02
F20 | min | 2.34E+03 | 2.46E+03 | 2.55E+03 | 2.36E+03 | 2.54E+03 | 2.44E+03 | 2.51E+03
F20 | mean | 2.38E+03 | 2.57E+03 | 2.64E+03 | 2.41E+03 | 2.61E+03 | 2.50E+03 | 2.59E+03
F20 | std | 2.14E+01 | 6.16E+01 | 5.41E+01 | 4.02E+01 | 3.01E+01 | 4.79E+01 | 5.17E+01
F21 | min | 2.30E+03 | 2.35E+03 | 3.20E+03 | 2.52E+03 | 4.55E+03 | 2.30E+03 | 5.14E+03
F21 | mean | 2.30E+03 | 4.62E+03 | 7.68E+03 | 4.94E+03 | 9.71E+03 | 5.46E+03 | 7.59E+03
F21 | std | 6.67E+00 | 2.50E+03 | 2.12E+03 | 1.98E+03 | 1.77E+03 | 2.49E+03 | 8.72E+02
F22 | min | 2.70E+03 | 2.87E+03 | 2.98E+03 | 2.73E+03 | 3.01E+03 | 2.77E+03 | 3.07E+03
F22 | mean | 2.76E+03 | 3.03E+03 | 3.15E+03 | 2.79E+03 | 3.08E+03 | 2.93E+03 | 3.29E+03
F22 | std | 3.29E+01 | 9.46E+01 | 9.99E+01 | 5.10E+01 | 4.57E+01 | 7.21E+01 | 1.30E+02
F23 | min | 2.87E+03 | 3.00E+03 | 3.07E+03 | 2.86E+03 | 3.17E+03 | 2.94E+03 | 3.25E+03
F23 | mean | 2.92E+03 | 3.19E+03 | 3.30E+03 | 2.99E+03 | 3.25E+03 | 3.08E+03 | 3.54E+03
F23 | std | 3.76E+01 | 1.00E+02 | 1.08E+02 | 7.86E+01 | 3.51E+01 | 8.70E+01 | 1.38E+02
F24 | min | 2.88E+03 | 2.91E+03 | 3.12E+03 | 2.93E+03 | 3.28E+03 | 2.88E+03 | 2.95E+03
F24 | mean | 2.91E+03 | 2.99E+03 | 3.26E+03 | 3.02E+03 | 3.58E+03 | 2.89E+03 | 3.01E+03
F24 | std | 2.10E+01 | 6.62E+01 | 9.69E+01 | 8.49E+01 | 2.44E+02 | 1.46E+01 | 4.24E+01
F25 | min | 2.90E+03 | 5.44E+03 | 5.40E+03 | 4.41E+03 | 7.08E+03 | 2.80E+03 | 6.65E+03
F25 | mean | 4.71E+03 | 7.05E+03 | 8.38E+03 | 5.07E+03 | 7.92E+03 | 5.59E+03 | 8.44E+03
F25 | std | 6.29E+02 | 8.53E+02 | 1.22E+03 | 5.34E+02 | 4.42E+02 | 1.41E+03 | 1.15E+03
F26 | min | 3.21E+03 | 3.25E+03 | 3.28E+03 | 3.23E+03 | 3.38E+03 | 3.22E+03 | 3.32E+03
F26 | mean | 3.26E+03 | 3.33E+03 | 3.45E+03 | 3.27E+03 | 3.58E+03 | 3.26E+03 | 3.65E+03
F26 | std | 2.69E+01 | 6.40E+01 | 1.02E+02 | 2.69E+01 | 8.63E+01 | 3.42E+01 | 2.13E+02
F27 | min | 3.21E+03 | 3.30E+03 | 3.45E+03 | 3.30E+03 | 4.17E+03 | 3.20E+03 | 3.34E+03
F27 | mean | 3.24E+03 | 3.62E+03 | 3.89E+03 | 3.51E+03 | 4.55E+03 | 3.23E+03 | 3.47E+03
F27 | std | 1.99E+01 | 6.90E+02 | 2.32E+02 | 1.42E+02 | 3.38E+02 | 2.00E+01 | 9.65E+01
F28 | min | 3.54E+03 | 3.75E+03 | 4.35E+03 | 3.51E+03 | 4.71E+03 | 3.79E+03 | 4.24E+03
F28 | mean | 3.87E+03 | 4.46E+03 | 5.49E+03 | 3.83E+03 | 5.24E+03 | 4.23E+03 | 5.09E+03
F28 | std | 2.01E+02 | 4.13E+02 | 7.93E+02 | 1.86E+02 | 3.20E+02 | 2.82E+02 | 4.76E+02
F29 | min | 7.10E+03 | 2.52E+04 | 9.58E+06 | 8.65E+05 | 9.55E+07 | 6.78E+03 | 7.14E+05
F29 | mean | 2.53E+04 | 2.45E+06 | 5.94E+07 | 1.27E+07 | 1.99E+08 | 3.53E+04 | 1.54E+07
F29 | std | 3.50E+04 | 3.97E+06 | 4.60E+07 | 9.80E+06 | 8.05E+07 | 8.69E+04 | 1.65E+07
Total | | 21 | 0 | 0 | 2 | 0 | 6 | 0
Table 3. CEC2017 test results (dimension = 100).
Function | Metric | MDBO | DBO | WOA | GWO | SCA | SSA | HHO
F1 | min | 4.26E+09 | 2.03E+10 | 8.72E+10 | 2.78E+10 | 1.84E+11 | 2.33E+08 | 3.78E+10
F1 | mean | 1.49E+10 | 8.51E+10 | 1.12E+11 | 5.39E+10 | 2.12E+11 | 4.03E+08 | 5.09E+10
F1 | std | 7.78E+09 | 7.00E+10 | 1.18E+10 | 9.82E+09 | 1.36E+10 | 1.21E+08 | 6.18E+09
F2 | min | 3.01E+05 | 3.40E+05 | 4.29E+05 | 4.11E+05 | 4.70E+05 | 3.25E+05 | 3.23E+05
F2 | mean | 3.55E+05 | 6.33E+05 | 8.93E+05 | 5.24E+05 | 5.96E+05 | 7.68E+05 | 3.60E+05
F2 | std | 3.56E+04 | 2.54E+05 | 1.50E+05 | 7.56E+04 | 7.93E+04 | 1.94E+05 | 8.43E+04
F3 | min | 1.12E+03 | 3.56E+03 | 1.38E+04 | 2.83E+03 | 3.66E+04 | 9.27E+02 | 6.35E+03
F3 | mean | 1.87E+03 | 1.97E+04 | 2.09E+04 | 5.98E+03 | 5.36E+04 | 1.03E+03 | 9.23E+03
F3 | std | 4.41E+02 | 2.01E+04 | 4.82E+03 | 1.82E+03 | 7.81E+03 | 5.63E+01 | 1.62E+03
F4 | min | 1.12E+03 | 1.32E+03 | 1.72E+03 | 1.08E+03 | 1.93E+03 | 1.30E+03 | 1.56E+03
F4 | mean | 1.31E+03 | 1.70E+03 | 1.98E+03 | 1.26E+03 | 2.07E+03 | 1.37E+03 | 1.68E+03
F4 | std | 8.34E+01 | 2.27E+02 | 1.17E+02 | 1.40E+02 | 5.63E+01 | 4.06E+01 | 5.20E+01
F5 | min | 6.38E+02 | 6.61E+02 | 6.88E+02 | 6.41E+02 | 6.95E+02 | 6.61E+02 | 6.85E+02
F5 | mean | 6.52E+02 | 6.77E+02 | 7.07E+02 | 6.46E+02 | 7.05E+02 | 6.65E+02 | 6.92E+02
F5 | std | 6.09E+00 | 1.07E+01 | 9.51E+00 | 3.61E+00 | 5.19E+00 | 2.25E+00 | 3.94E+00
F6 | min | 2.03E+03 | 2.31E+03 | 3.58E+03 | 1.98E+03 | 3.73E+03 | 2.67E+03 | 3.48E+03
F6 | mean | 2.40E+03 | 2.91E+03 | 3.82E+03 | 2.23E+03 | 4.18E+03 | 3.21E+03 | 3.77E+03
F6 | std | 1.60E+02 | 3.43E+02 | 1.42E+02 | 1.50E+02 | 2.78E+02 | 1.42E+02 | 1.15E+02
F7 | min | 1.43E+03 | 1.76E+03 | 2.21E+03 | 1.42E+03 | 2.26E+03 | 1.71E+03 | 2.01E+03
F7 | mean | 1.58E+03 | 2.21E+03 | 2.41E+03 | 1.56E+03 | 2.43E+03 | 1.84E+03 | 2.14E+03
F7 | std | 7.67E+01 | 2.24E+02 | 1.11E+02 | 6.95E+01 | 6.49E+01 | 4.97E+01 | 6.22E+01
F8 | min | 1.84E+04 | 6.06E+04 | 5.53E+04 | 2.24E+04 | 7.34E+04 | 2.45E+04 | 6.23E+04
F8 | mean | 2.65E+04 | 7.70E+04 | 7.87E+04 | 4.52E+04 | 9.21E+04 | 2.55E+04 | 6.97E+04
F8 | std | 4.32E+03 | 6.40E+03 | 1.58E+04 | 1.24E+04 | 1.21E+04 | 6.55E+02 | 4.42E+03
F9 | min | 1.71E+04 | 1.82E+04 | 2.65E+04 | 1.63E+04 | 3.19E+04 | 1.45E+04 | 2.24E+04
F9 | mean | 2.00E+04 | 2.81E+04 | 2.93E+04 | 2.06E+04 | 3.32E+04 | 1.72E+04 | 2.47E+04
F9 | std | 1.33E+03 | 4.97E+03 | 1.30E+03 | 5.22E+03 | 4.93E+02 | 1.46E+03 | 1.68E+03
F10 | min | 2.10E+04 | 1.46E+05 | 1.81E+05 | 6.84E+04 | 1.15E+05 | 5.45E+04 | 9.18E+04
F10 | mean | 4.63E+04 | 2.35E+05 | 3.20E+05 | 9.27E+04 | 1.74E+05 | 8.95E+04 | 1.47E+05
F10 | std | 1.17E+04 | 4.46E+04 | 1.35E+05 | 1.41E+04 | 3.58E+04 | 1.95E+04 | 3.37E+04
F11 | min | 1.02E+08 | 2.69E+09 | 1.60E+10 | 5.03E+09 | 7.31E+10 | 7.31E+07 | 6.63E+09
F11 | mean | 4.76E+08 | 7.29E+09 | 3.12E+10 | 1.23E+10 | 9.98E+10 | 1.66E+08 | 1.12E+10
F11 | std | 5.16E+08 | 2.71E+09 | 8.34E+09 | 5.65E+09 | 1.09E+10 | 4.93E+07 | 2.63E+09
F12 | min | 9.06E+03 | 4.18E+05 | 1.09E+09 | 8.02E+07 | 9.89E+09 | 2.71E+04 | 4.97E+07
F12 | mean | 2.59E+04 | 3.84E+08 | 3.22E+09 | 1.79E+09 | 1.77E+10 | 1.83E+05 | 3.18E+08
F12 | std | 2.11E+04 | 3.07E+08 | 1.71E+09 | 1.45E+09 | 3.29E+09 | 6.86E+05 | 2.19E+08
F13 | min | 1.11E+06 | 2.97E+06 | 9.74E+06 | 1.82E+06 | 1.57E+07 | 6.54E+05 | 2.80E+06
F13 | mean | 3.20E+06 | 2.07E+07 | 2.44E+07 | 1.07E+07 | 6.54E+07 | 2.30E+06 | 9.48E+06
F13 | std | 1.71E+06 | 1.15E+07 | 1.09E+07 | 6.29E+06 | 3.18E+07 | 1.06E+06 | 3.17E+06
F14 | min | 2.98E+03 | 1.65E+05 | 1.89E+08 | 3.47E+07 | 3.05E+09 | 7.07E+03 | 5.57E+06
F14 | mean | 6.39E+03 | 9.12E+07 | 5.10E+08 | 2.62E+08 | 6.06E+09 | 2.03E+04 | 2.05E+07
F14 | std | 4.23E+03 | 1.57E+08 | 2.40E+08 | 4.00E+08 | 1.64E+09 | 1.19E+04 | 2.43E+07
F15 | min | 4.16E+03 | 6.91E+03 | 1.12E+04 | 4.97E+03 | 1.29E+04 | 5.39E+03 | 8.92E+03
F15 | mean | 6.32E+03 | 9.50E+03 | 1.72E+04 | 6.66E+03 | 1.50E+04 | 6.56E+03 | 1.08E+04
F15 | std | 8.81E+02 | 1.67E+03 | 3.13E+03 | 7.15E+02 | 1.13E+03 | 6.33E+02 | 1.24E+03
F16 | min | 3.88E+03 | 6.38E+03 | 8.49E+03 | 4.26E+03 | 1.63E+04 | 5.06E+03 | 6.77E+03
F16 | mean | 5.13E+03 | 9.53E+03 | 2.00E+04 | 5.49E+03 | 8.59E+04 | 5.99E+03 | 8.26E+03
F16 | std | 6.58E+02 | 1.90E+03 | 1.73E+04 | 6.94E+02 | 1.15E+05 | 5.89E+02 | 1.27E+03
F17 | min | 1.43E+06 | 6.33E+06 | 5.21E+06 | 2.90E+06 | 5.36E+07 | 1.07E+06 | 2.30E+06
F17 | mean | 4.26E+06 | 2.75E+07 | 2.04E+07 | 1.10E+07 | 1.29E+08 | 3.11E+06 | 9.50E+06
F17 | std | 1.77E+06 | 1.59E+07 | 1.10E+07 | 8.00E+06 | 5.14E+07 | 1.45E+06 | 4.69E+06
F18 | min | 2.20E+03 | 1.04E+07 | 1.78E+08 | 1.48E+07 | 2.61E+09 | 3.03E+03 | 9.49E+06
F18 | mean | 7.28E+03 | 8.54E+07 | 6.26E+08 | 2.17E+08 | 5.23E+09 | 1.76E+04 | 5.00E+07
F18 | std | 6.35E+03 | 6.48E+07 | 3.80E+08 | 2.33E+08 | 1.26E+09 | 2.29E+04 | 3.15E+07
F19 | min | 3.97E+03 | 5.91E+03 | 6.23E+03 | 3.87E+03 | 7.25E+03 | 4.04E+03 | 5.09E+03
F19 | mean | 5.08E+03 | 7.15E+03 | 7.16E+03 | 5.47E+03 | 8.04E+03 | 5.90E+03 | 6.16E+03
F19 | std | 4.13E+02 | 7.67E+02 | 5.49E+02 | 1.04E+03 | 3.44E+02 | 6.93E+02 | 4.77E+02
F20 | min | 2.90E+03 | 3.57E+03 | 4.02E+03 | 2.97E+03 | 3.97E+03 | 3.37E+03 | 3.97E+03
F20 | mean | 3.00E+03 | 4.05E+03 | 4.50E+03 | 3.11E+03 | 4.20E+03 | 3.65E+03 | 4.40E+03
F20 | std | 7.73E+01 | 1.97E+02 | 1.97E+02 | 1.19E+02 | 8.33E+01 | 1.86E+02 | 2.37E+02
F21 | min | 2.13E+04 | 2.12E+04 | 2.89E+04 | 1.94E+04 | 3.40E+04 | 1.71E+04 | 2.42E+04
F21 | mean | 2.42E+04 | 2.90E+04 | 3.17E+04 | 2.42E+04 | 3.53E+04 | 2.01E+04 | 2.76E+04
F21 | std | 1.18E+03 | 4.78E+03 | 1.35E+03 | 5.30E+03 | 6.19E+02 | 1.60E+03 | 1.79E+03
F22 | min | 3.32E+03 | 4.28E+03 | 4.75E+03 | 3.51E+03 | 4.92E+03 | 3.93E+03 | 5.45E+03
F22 | mean | 3.49E+03 | 4.91E+03 | 5.34E+03 | 3.72E+03 | 5.21E+03 | 4.21E+03 | 5.86E+03
F22 | std | 8.56E+01 | 2.45E+02 | 2.28E+02 | 9.79E+01 | 1.40E+02 | 2.00E+02 | 3.09E+02
F23 | min | 3.74E+03 | 5.39E+03 | 6.10E+03 | 4.20E+03 | 6.47E+03 | 4.55E+03 | 7.00E+03
F23 | mean | 3.94E+03 | 6.09E+03 | 6.73E+03 | 4.49E+03 | 7.36E+03 | 5.20E+03 | 8.53E+03
F23 | std | 9.16E+01 | 4.65E+02 | 3.57E+02 | 1.84E+02 | 3.96E+02 | 3.69E+02 | 6.93E+02
F24 | min | 4.10E+03 | 5.19E+03 | 9.05E+03 | 5.66E+03 | 1.76E+04 | 3.45E+03 | 5.89E+03
F24 | mean | 4.72E+03 | 8.45E+03 | 1.10E+04 | 7.15E+03 | 2.25E+04 | 3.68E+03 | 6.79E+03
F24 | std | 4.59E+02 | 4.95E+03 | 1.05E+03 | 8.97E+02 | 2.87E+03 | 7.96E+01 | 5.15E+02
F25 | min | 1.17E+04 | 2.04E+04 | 3.21E+04 | 1.43E+04 | 3.60E+04 | 5.01E+03 | 2.90E+04
F25 | mean | 1.33E+04 | 2.63E+04 | 3.80E+04 | 1.77E+04 | 4.17E+04 | 2.14E+04 | 3.12E+04
F25 | std | 1.88E+03 | 3.57E+03 | 3.13E+03 | 1.42E+03 | 2.79E+03 | 6.56E+03 | 1.34E+03
F26 | min | 3.54E+03 | 4.03E+03 | 4.74E+03 | 3.80E+03 | 7.68E+03 | 3.60E+03 | 5.33E+03
F26 | mean | 3.77E+03 | 4.56E+03 | 6.01E+03 | 4.34E+03 | 8.53E+03 | 3.84E+03 | 7.16E+03
F26 | std | 1.63E+02 | 2.99E+02 | 8.97E+02 | 2.67E+02 | 4.51E+02 | 1.86E+02 | 1.24E+03
F27 | min | 3.82E+03 | 7.73E+03 | 1.18E+04 | 6.84E+03 | 2.36E+04 | 3.71E+03 | 7.49E+03
F27 | mean | 4.70E+03 | 1.88E+04 | 1.47E+04 | 9.14E+03 | 2.71E+04 | 3.83E+03 | 9.48E+03
F27 | std | 5.94E+02 | 5.81E+03 | 1.38E+03 | 1.31E+03 | 2.29E+03 | 6.75E+01 | 8.93E+02
F28 | min | 6.61E+03 | 8.16E+03 | 1.56E+04 | 7.53E+03 | 2.23E+04 | 6.58E+03 | 1.07E+04
F28 | mean | 7.73E+03 | 1.21E+04 | 2.21E+04 | 9.37E+03 | 3.54E+04 | 7.78E+03 | 1.33E+04
F28 | std | 5.39E+02 | 5.76E+03 | 5.72E+03 | 1.02E+03 | 1.00E+04 | 5.67E+02 | 1.40E+03
F29 | min | 5.89E+04 | 3.60E+07 | 1.01E+09 | 7.84E+07 | 9.50E+09 | 2.36E+05 | 3.05E+08
F29 | mean | 4.79E+05 | 2.73E+08 | 2.72E+09 | 1.38E+09 | 1.34E+10 | 7.52E+05 | 7.27E+08
F29 | std | 2.68E+05 | 1.66E+08 | 1.24E+09 | 1.10E+09 | 2.88E+09 | 3.87E+05 | 3.59E+08
Total | | 15 | 0 | 0 | 4 | 0 | 10 | 0
Table 4. CEC2020 test results (dimension = 20).
Function | Metric | MDBO | DBO | WOA | GWO | SCA | SSA | HHO
F1 | min | 1.28E+02 | 9.95E+03 | 5.11E+08 | 4.62E+05 | 5.64E+09 | 1.17E+02 | 8.36E+06
F1 | mean | 2.84E+03 | 3.00E+07 | 1.37E+09 | 1.10E+09 | 8.84E+09 | 3.46E+03 | 3.44E+07
F1 | std | 3.21E+03 | 2.51E+07 | 6.48E+08 | 1.12E+09 | 1.95E+09 | 3.92E+03 | 2.88E+07
F2 | min | 1.58E+03 | 2.48E+03 | 3.37E+03 | 2.03E+03 | 4.85E+03 | 1.84E+03 | 2.68E+03
F2 | mean | 2.65E+03 | 3.62E+03 | 4.29E+03 | 2.90E+03 | 5.43E+03 | 2.97E+03 | 3.55E+03
F2 | std | 4.94E+02 | 6.04E+02 | 4.96E+02 | 4.55E+02 | 2.64E+02 | 4.56E+02 | 4.31E+02
F3 | min | 7.58E+02 | 7.81E+02 | 9.03E+02 | 7.53E+02 | 8.92E+02 | 7.94E+02 | 8.76E+02
F3 | mean | 8.08E+02 | 8.40E+02 | 9.72E+02 | 7.80E+02 | 9.49E+02 | 8.93E+02 | 9.39E+02
F3 | std | 2.96E+01 | 4.22E+01 | 4.13E+01 | 2.00E+01 | 2.50E+01 | 4.92E+01 | 3.37E+01
F4 | min | 1.90E+03 | 1.91E+03 | 1.95E+03 | 1.90E+03 | 2.57E+03 | 1.90E+03 | 1.92E+03
F4 | mean | 1.91E+03 | 1.94E+03 | 2.91E+03 | 2.05E+03 | 4.91E+03 | 1.91E+03 | 1.93E+03
F4 | std | 4.44E+00 | 5.39E+01 | 1.64E+03 | 4.75E+02 | 1.92E+03 | 4.26E+00 | 1.05E+01
F5 | min | 1.91E+04 | 1.60E+05 | 5.76E+05 | 5.33E+04 | 7.43E+05 | 6.47E+04 | 3.02E+04
F5 | mean | 1.58E+05 | 1.06E+06 | 2.93E+06 | 1.13E+06 | 2.54E+06 | 2.06E+05 | 8.70E+05
F5 | std | 1.18E+05 | 1.16E+06 | 1.66E+06 | 1.03E+06 | 1.33E+06 | 1.00E+05 | 6.93E+05
F6 | min | 1.61E+03 | 1.67E+03 | 2.06E+03 | 1.68E+03 | 2.17E+03 | 1.60E+03 | 1.95E+03
F6 | mean | 1.75E+03 | 2.26E+03 | 2.63E+03 | 2.07E+03 | 2.51E+03 | 1.98E+03 | 2.29E+03
F6 | std | 1.25E+02 | 2.99E+02 | 3.13E+02 | 2.25E+02 | 1.90E+02 | 2.30E+02 | 1.95E+02
F7 | min | 1.21E+04 | 1.38E+04 | 1.79E+05 | 1.05E+04 | 1.98E+05 | 9.53E+03 | 5.51E+04
F7 | mean | 8.84E+04 | 4.44E+05 | 2.33E+06 | 1.29E+05 | 9.11E+05 | 1.88E+05 | 5.01E+05
F7 | std | 8.12E+04 | 6.09E+05 | 2.76E+06 | 9.69E+04 | 6.53E+05 | 2.04E+05 | 4.42E+05
F8 | min | 2.30E+03 | 2.31E+03 | 2.39E+03 | 2.31E+03 | 2.95E+03 | 2.30E+03 | 2.32E+03
F8 | mean | 2.30E+03 | 2.67E+03 | 4.60E+03 | 3.22E+03 | 5.45E+03 | 3.64E+03 | 3.50E+03
F8 | std | 1.00E+00 | 8.82E+02 | 1.82E+03 | 1.04E+03 | 1.87E+03 | 1.61E+03 | 1.54E+03
F9 | min | 2.82E+03 | 2.90E+03 | 2.89E+03 | 2.83E+03 | 2.99E+03 | 2.84E+03 | 3.01E+03
F9 | mean | 2.86E+03 | 2.99E+03 | 3.04E+03 | 2.87E+03 | 3.03E+03 | 2.94E+03 | 3.21E+03
F9 | std | 2.20E+01 | 4.89E+01 | 7.28E+01 | 4.47E+01 | 2.10E+01 | 7.26E+01 | 1.31E+02
F10 | min | 2.91E+03 | 2.91E+03 | 3.01E+03 | 2.92E+03 | 3.11E+03 | 2.90E+03 | 2.96E+03
F10 | mean | 2.95E+03 | 2.98E+03 | 3.13E+03 | 3.00E+03 | 3.29E+03 | 2.97E+03 | 3.02E+03
F10 | std | 3.52E+01 | 5.58E+01 | 7.10E+01 | 6.65E+01 | 1.54E+02 | 3.23E+01 | 2.50E+01
Total | | 9 | 0 | 0 | 1 | 0 | 0 | 0
Table 5. Wilcoxon rank-sum test p-values on CEC2017 (dimension = 30).
Function | DBO | WOA | GWO | SCA | SSA | HHO
F1 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 2.28E-01 | 3.02E-11
F2 | 1.01E-08 | 3.02E-11 | 1.19E-01 | 2.03E-09 | 1.64E-05 | 1.91E-01
F3 | 8.99E-11 | 3.02E-11 | 2.87E-10 | 3.02E-11 | 7.73E-01 | 3.69E-11
F4 | 3.69E-11 | 3.02E-11 | 4.23E-03 | 3.02E-11 | 4.08E-11 | 3.02E-11
F5 | 7.39E-11 | 3.02E-11 | 4.06E-02 | 3.02E-11 | 6.70E-11 | 3.02E-11
F6 | 3.83E-06 | 3.02E-11 | 2.28E-01 | 3.02E-11 | 1.96E-10 | 3.02E-11
F7 | 4.50E-11 | 3.02E-11 | 1.26E-01 | 3.02E-11 | 1.33E-10 | 3.02E-11
F8 | 3.69E-11 | 3.02E-11 | 2.75E-03 | 3.02E-11 | 3.02E-11 | 3.02E-11
F9 | 9.51E-06 | 6.07E-11 | 1.67E-01 | 3.02E-11 | 8.24E-02 | 8.29E-06
F10 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 1.60E-07 | 3.02E-11
F11 | 4.98E-11 | 3.02E-11 | 3.34E-11 | 3.02E-11 | 3.79E-01 | 4.08E-11
F12 | 5.07E-10 | 4.62E-10 | 3.47E-10 | 3.02E-11 | 3.34E-03 | 5.57E-10
F13 | 4.64E-05 | 6.12E-10 | 6.05E-07 | 2.87E-10 | 9.00E-01 | 1.29E-09
F14 | 2.20E-07 | 3.02E-11 | 6.07E-11 | 3.02E-11 | 8.30E-01 | 5.49E-11
F15 | 1.31E-08 | 3.02E-11 | 9.59E-01 | 3.02E-11 | 1.63E-02 | 3.02E-11
F16 | 4.57E-09 | 3.82E-10 | 8.53E-01 | 4.08E-11 | 5.09E-06 | 6.53E-08
F17 | 5.37E-02 | 6.52E-09 | 3.11E-01 | 6.12E-10 | 2.42E-02 | 9.21E-05
F18 | 1.70E-08 | 3.02E-11 | 5.49E-11 | 3.02E-11 | 6.10E-01 | 3.02E-11
F19 | 3.99E-04 | 1.85E-08 | 6.10E-01 | 1.55E-09 | 9.51E-06 | 8.48E-09
F20 | 3.69E-11 | 3.02E-11 | 4.21E-02 | 3.02E-11 | 2.61E-10 | 3.69E-11
F21 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 2.32E-02 | 3.02E-11
F22 | 5.49E-11 | 3.02E-11 | 4.51E-02 | 3.02E-11 | 3.16E-10 | 3.02E-11
F23 | 9.92E-11 | 3.02E-11 | 2.42E-02 | 3.02E-11 | 5.97E-09 | 3.02E-11
F24 | 7.38E-10 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.03E-02 | 3.02E-11
F25 | 3.82E-10 | 3.02E-11 | 5.37E-02 | 3.02E-11 | 1.25E-05 | 3.02E-11
F26 | 3.09E-06 | 1.21E-10 | 6.10E-01 | 3.02E-11 | 1.27E-02 | 3.34E-11
F27 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 4.08E-05 | 3.02E-11
F28 | 6.12E-10 | 3.02E-11 | 1.12E-01 | 3.02E-11 | 3.83E-06 | 4.98E-11
F29 | 1.09E-10 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 6.73E-01 | 3.02E-11
Total | 28 | 29 | 18 | 29 | 21 | 28
Table 6. Wilcoxon rank-sum test p-values on CEC2017 (dimension = 100).
Function | DBO | WOA | GWO | SCA | SSA | HHO
F1 | 2.15E-10 | 3.02E-11 | 4.50E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11
F2 | 1.56E-08 | 3.34E-11 | 8.99E-11 | 3.02E-11 | 3.47E-10 | 7.51E-01
F3 | 3.02E-11 | 3.02E-11 | 3.34E-11 | 3.02E-11 | 3.34E-11 | 3.02E-11
F4 | 3.82E-10 | 3.02E-11 | 1.44E-03 | 3.02E-11 | 1.32E-04 | 3.02E-11
F5 | 6.70E-11 | 3.02E-11 | 2.60E-05 | 3.02E-11 | 1.69E-09 | 3.02E-11
F6 | 6.52E-09 | 3.02E-11 | 8.15E-05 | 3.02E-11 | 3.69E-11 | 3.02E-11
F7 | 3.02E-11 | 3.02E-11 | 2.71E-01 | 3.02E-11 | 3.69E-11 | 3.02E-11
F8 | 3.02E-11 | 3.02E-11 | 4.69E-08 | 3.02E-11 | 2.84E-01 | 3.02E-11
F9 | 1.73E-07 | 3.02E-11 | 3.03E-02 | 3.02E-11 | 1.85E-08 | 3.69E-11
F10 | 3.02E-11 | 3.02E-11 | 3.34E-11 | 3.02E-11 | 8.99E-11 | 3.02E-11
F11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 1.17E-05 | 3.02E-11
F12 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 6.01E-08 | 3.02E-11
F13 | 2.87E-10 | 3.02E-11 | 1.60E-07 | 3.02E-11 | 4.84E-02 | 8.89E-10
F14 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 1.31E-08 | 3.02E-11
F15 | 1.61E-10 | 3.02E-11 | 1.41E-01 | 3.02E-11 | 4.38E-01 | 3.02E-11
F16 | 3.34E-11 | 3.02E-11 | 7.48E-02 | 3.02E-11 | 5.46E-06 | 3.02E-11
F17 | 4.50E-11 | 8.99E-11 | 3.65E-08 | 3.02E-11 | 1.03E-02 | 7.60E-07
F18 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.18E-03 | 3.02E-11
F19 | 3.02E-11 | 3.02E-11 | 2.34E-01 | 3.02E-11 | 1.86E-06 | 9.76E-10
F20 | 3.02E-11 | 3.02E-11 | 2.28E-05 | 3.02E-11 | 3.02E-11 | 3.02E-11
F21 | 1.34E-05 | 3.02E-11 | 1.44E-03 | 3.02E-11 | 9.92E-11 | 1.07E-09
F22 | 3.02E-11 | 3.02E-11 | 1.41E-09 | 3.02E-11 | 3.02E-11 | 3.02E-11
F23 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11
F24 | 1.46E-10 | 3.02E-11 | 3.69E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11
F25 | 5.49E-11 | 3.02E-11 | 1.07E-09 | 3.02E-11 | 1.73E-06 | 3.02E-11
F26 | 6.07E-11 | 3.02E-11 | 5.07E-10 | 3.02E-11 | 5.19E-02 | 3.02E-11
F27 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 2.37E-10 | 3.02E-11
F28 | 5.49E-11 | 3.02E-11 | 7.12E-09 | 3.02E-11 | 9.47E-01 | 3.02E-11
F29 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 4.03E-03 | 3.02E-11
Total | 29 | 29 | 25 | 29 | 26 | 28
Table 7. Wilcoxon rank-sum test p-values on CEC2020 (dimension = 20).
Function | DBO | WOA | GWO | SCA | SSA | HHO
F1 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 7.96E-01 | 3.02E-11
F2 | 1.87E-07 | 1.33E-10 | 1.91E-02 | 3.02E-11 | 5.32E-03 | 4.31E-08
F3 | 3.38E-04 | 9.17E-08 | 6.92E-07 | 6.80E-08 | 6.92E-07 | 6.80E-08
F4 | 6.67E-06 | 6.80E-08 | 7.90E-08 | 6.80E-08 | 8.60E-01 | 2.56E-07
F5 | 1.56E-08 | 3.02E-11 | 2.15E-06 | 3.02E-11 | 3.39E-02 | 2.38E-07
F6 | 2.06E-06 | 7.90E-08 | 9.17E-08 | 6.80E-08 | 1.61E-04 | 1.66E-07
F7 | 2.25E-04 | 8.15E-11 | 8.77E-02 | 6.70E-11 | 5.55E-02 | 1.07E-07
F8 | 6.01E-07 | 6.80E-08 | 6.80E-08 | 6.80E-08 | 3.65E-01 | 6.80E-08
F9 | 4.50E-11 | 4.08E-11 | 9.47E-01 | 3.02E-11 | 1.11E-06 | 3.02E-11
F10 | 6.10E-03 | 3.02E-11 | 5.97E-05 | 3.02E-11 | 7.98E-02 | 1.17E-09
Total | 10 | 10 | 8 | 10 | 5 | 10
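The p-values in Tables 5–7 come from two-sided Wilcoxon rank-sum tests over independent runs; a minimal SciPy sketch is shown below, with placeholder arrays standing in for each algorithm's per-run final fitness values.

```python
import numpy as np
from scipy.stats import ranksums  # two-sided Wilcoxon rank-sum test

def compare_runs(runs_a, runs_b, alpha=0.05):
    """p-value of the rank-sum test between two sets of independent run
    results, plus whether the difference is significant at level alpha
    (what the 'Total' rows of Tables 5-7 count)."""
    _, p = ranksums(runs_a, runs_b)
    return p, p < alpha

# Placeholder data standing in for per-run final fitness values.
rng = np.random.default_rng(1)
mdbo = rng.normal(2.83e2, 3.6e1, 30)
dbo = rng.normal(8.37e7, 2.7e7, 30)
p, significant = compare_runs(mdbo, dbo)
print(f"p = {p:.2e}, significant at 0.05: {significant}")
```

With 30 runs per algorithm and fully separated samples, the rank-sum p-value bottoms out near 3.02E-11, which is why that value recurs throughout the tables.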
Table 8. Results for the extension/compression spring design problem.
Algorithm | d | D | N | Cost | Mean | Std
MDBO | 0.05205627 | 0.365616261 | 10.78572664 | 0.012667678 | 0.012912098 | 0.000699969
DBO | 0.05 | 0.317155606 | 14.07383987 | 0.012744771 | 0.013790323 | 0.001855465
WOA | 0.059038565 | 0.560664184 | 4.948552659 | 0.01357903 | 0.013545488 | 0.000955076
GWO | 0.050283697 | 0.323556018 | 13.57137641 | 0.012738869 | 0.012806882 | 0.00016799
SCA | 0.05 | 0.314732431 | 14.56303889 | 0.013032314 | 0.013108942 | 0.000402296
SSA | 0.05 | 0.317425416 | 14.02776975 | 0.012719054 | 0.013607165 | 0.001516753
HHO | 0.061301593 | 0.635100129 | 3.957255992 | 0.014217786 | 0.013792769 | 0.001029114
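The Cost column of Table 8 is consistent with the standard tension/compression spring objective f = (N + 2)Dd²; the sketch below evaluates that objective together with the textbook constraint set, which is offered as a hedged reconstruction rather than the paper's exact statement, and reproduces the MDBO cost from the table.

```python
def spring_cost(d, D, N):
    """Spring weight: f = (N + 2) * D * d^2."""
    return (N + 2.0) * D * d ** 2

def spring_constraints(d, D, N):
    """Textbook inequality constraints g_i <= 0 for the spring problem."""
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return [g1, g2, g3, g4]

# The MDBO solution from Table 8 reproduces its tabulated cost (~0.012668),
# and all four constraints evaluate as non-positive (feasible).
print(spring_cost(0.05205627, 0.365616261, 10.78572664))
print(spring_constraints(0.05205627, 0.365616261, 10.78572664))
```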
Table 9. Results for the reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Cost | Mean | Std
MDBO | 3.5000 | 0.7 | 17.0000 | 7.3000 | 7.8000 | 3.3502 | 5.2867 | 2996.3482 | 2996.3482 | 0.0000
DBO | 3.5000 | 0.7 | 17.0000 | 8.3000 | 8.3000 | 3.3522 | 5.2869 | 3016.7704 | 3031.6754 | 59.1161
WOA | 3.6000 | 0.7 | 18.4408 | 7.5166 | 8.2579 | 3.3495 | 5.4936 | 3450.3018 | 3427.3548 | 612.9289
GWO | 3.5067 | 0.7 | 17.0000 | 8.2896 | 8.0026 | 3.3603 | 5.2898 | 3016.8089 | 3010.8418 | 4.3596
SCA | 3.6000 | 0.7 | 17.0000 | 7.7987 | 8.3000 | 3.5246 | 5.2967 | 3104.4867 | 3127.8248 | 43.4361
SSA | 3.5000 | 0.7 | 17.0000 | 7.3000 | 7.8000 | 3.3502 | 5.2867 | 2996.3482 | 2996.6593 | 1.7041
HHO | 3.5121 | 0.7 | 20.6747 | 7.3000 | 8.0287 | 3.3480 | 5.2905 | 3705.9316 | 3536.5022 | 454.2663
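Likewise, the costs in Table 9 match the widely used speed-reducer weight function; the sketch below evaluates that standard objective (the problem's inequality constraints are omitted for brevity) and reproduces the MDBO cost.

```python
def reducer_cost(x1, x2, x3, x4, x5, x6, x7):
    """Standard speed-reducer weight function (inequality constraints omitted)."""
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# The MDBO solution from Table 9 reproduces its tabulated cost (~2996.3482).
print(reducer_cost(3.5, 0.7, 17.0, 7.3, 7.8, 3.3502, 5.2867))
```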
Table 10. Results for the welded beam design problem.
Algorithm | h | l | t | d | Cost | Mean | Std
MDBO | 0.205729953 | 3.234915914 | 9.036617034 | 0.205729953 | 1.692769435 | 1.6961213 | 0.0127085
DBO | 0.141565272 | 5.499670887 | 9.045400709 | 0.206123404 | 1.870870809 | 1.7496565 | 0.0424308
WOA | 0.361675817 | 1.717131037 | 8.511619427 | 0.36198865 | 2.577921166 | 2.4658900 | 0.6156943
GWO | 0.204259434 | 3.270735273 | 9.043281341 | 0.205752865 | 1.696780922 | 1.6983641 | 0.0030933
SCA | 0.181314963 | 3.549003446 | 10 | 0.202491425 | 1.838490606 | 1.8835081 | 0.0628893
SSA | 0.205729523 | 3.234915257 | 9.03663848 | 0.205729567 | 1.692769481 | 1.7833399 | 0.2636062
HHO | 0.194218543 | 3.601102558 | 8.836192097 | 0.215168645 | 1.760035931 | 1.9306548 | 0.1725821
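Finally, the welded-beam costs in Table 10 follow the standard fabrication-cost objective f = 1.10471h²l + 0.04811td(14 + l); the sketch below evaluates it, with the shear, bending, deflection, and buckling constraints of the full problem omitted, and reproduces the MDBO cost.

```python
def welded_beam_cost(h, l, t, d):
    """Standard welded-beam fabrication cost:
    f = 1.10471 * h^2 * l + 0.04811 * t * d * (14 + l).
    Stress, deflection, and buckling constraints are omitted in this sketch."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * d * (14.0 + l)

# The MDBO solution from Table 10 reproduces its tabulated cost (~1.69277).
print(welded_beam_cost(0.205729953, 3.234915914, 9.036617034, 0.205729953))
```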
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
