Article

MHO: A Modified Hippopotamus Optimization Algorithm for Global Optimization and Engineering Design Problems

by Tao Han, Haiyan Wang, Tingting Li *, Quanzeng Liu and Yourui Huang
School of Electrical & Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(2), 90; https://doi.org/10.3390/biomimetics10020090
Submission received: 7 January 2025 / Revised: 3 February 2025 / Accepted: 3 February 2025 / Published: 5 February 2025

Abstract:
The hippopotamus optimization algorithm (HO) is a novel metaheuristic algorithm that solves optimization problems by simulating the behavior of hippopotamuses. However, the traditional HO algorithm may suffer performance degradation and fall into local optima when dealing with complex global optimization and engineering design problems. To solve these problems, this paper proposes a modified hippopotamus optimization algorithm (MHO) that enhances the convergence speed and solution accuracy of HO by introducing a sine chaotic map to initialize the population, changing the convergence factor in the growth mechanism, and incorporating a small-hole imaging reverse learning strategy. The MHO algorithm is tested on 23 benchmark functions and successfully solves three engineering design problems. According to the experimental data, the MHO algorithm obtains optimal performance on 13 of these functions and on all three design problems, escapes local optima faster, and shows better ranking and stability than the other nine metaheuristics. The MHO algorithm thus offers fresh insights into practical engineering problems and parameter optimization.

Graphical Abstract

1. Introduction

Finding the optimal value of a given objective function under specified constraints is the goal of optimization problems, which are found in a variety of disciplines, including computer science, mathematics, engineering, and economics. All optimization problems consist of three components: the objective function, constraints, and decision variables [1].
Traditional optimization algorithms, such as linear programming, quadratic programming, and dynamic programming, provide a solid mathematical foundation and efficient solutions for deterministic, convex, and well-structured optimization problems. However, they usually require the problem to have a specific mathematical structure, are prone to falling into locally optimal solutions (especially on multi-peak problems, where the globally optimal solution is difficult to find), and their results depend strongly on the initial values [2]. The rise of metaheuristic algorithms compensates for these limitations: metaheuristics are flexible and adaptable and offer new tools and techniques for resolving challenging real-world optimization problems. These algorithms are independent of the problem's form and do not require knowledge of the objective function's derivatives [3].
Metaheuristics are high-level algorithms that model social behaviors or natural phenomena to discover an approximate optimal solution to complex optimization problems. There is a wide variety of metaheuristic algorithms, which can be categorized into three groups based on their inspiration and working principles: evolution-based algorithms, group intelligence-based algorithms, and algorithms based on physical principles [4]. Evolution-based algorithms simulate the evolutionary law of survival of the fittest in nature (Darwin's law) to realize the overall progress of the population and ultimately reach the optimal solution [5]. The most prominent examples are genetic algorithms (GA) [6] and differential evolution (DE) [7]. Genetic algorithms simulate the process of biological evolution and optimize the solution through selection, crossover, and mutation operations; they have strong global search abilities and are suitable for discrete optimization problems. Differential evolution generates new solutions through difference operations between individuals in a population and excels at nonlinear and multimodal optimization problems. By simulating a group's intelligence, group intelligence-based algorithms [8,9] aim to produce a globally optimal solution. Each group in this class of algorithms is a biological population; the most representative examples are the particle swarm optimization (PSO) algorithm [10] and the ant colony optimization (ACO) algorithm [11], which use the cooperative behavior of a population to accomplish tasks that individuals cannot complete alone. PSO simulates the social behavior of bird flocks or fish schools and achieves global optimization through collaboration among individuals; it is simple, efficient, and suitable for continuous optimization problems. ACO simulates the foraging behavior of ants and optimizes paths through a pheromone mechanism, excelling in path optimization problems.
There are also many other popular algorithms, such as the artificial bee colony algorithm [12], which simulates the foraging behavior of bees to optimize solutions through information sharing and collaboration, the bat optimization algorithm [13], which simulates the echolocation behavior of bats to optimize solutions through frequency and amplitude adjustment, and the gray wolf optimization algorithm [14], which simulates the collaboration and competition between leaders and followers in gray wolf packs. All of these algorithms have strong global search capabilities. The firefly algorithm (FA), which simulates the behavior of fireflies glowing to attract mates, optimizes solutions through light intensity and movement rules for multi-peak optimization problems. The fundamental concept of physical principle-based algorithms, of which simulated annealing (SA) [15] is the best example, is to use natural processes or physics principles as the basis for search techniques used to solve complex optimization problems. It mimics the annealing process of solids and performs well in combinatorial optimization problems by controlling the “temperature” parameter to balance global exploration and local exploitation in the search process. In addition to the above algorithms, others include the gravitational search algorithm (GSA) [16] and the water cycle algorithm (WCA) [17]. The GSA optimizes the solution by simulating gravitational interactions between celestial bodies and using mutual attraction between masses, demonstrating a strong global search capability. The WCA, on the other hand, simulates water cycle processes in nature and uses the convergence and dispersion mechanism of water flow to optimize the solution, which also has excellent global search performance. 
In addition, there are special types of hybrid optimization algorithms, which combine the features of two or more metaheuristics and enhance performance by incorporating different search mechanisms. For example, the hybrid particle swarm optimization algorithm with differential evolution (DEPSO [18]) combines the population intelligence of particle swarm optimization with the mutation capability of differential evolution, enabling DEPSO to efficiently balance global and local searches and to improve the efficiency and effectiveness of the optimization process, especially for global optimization problems in continuous space. Based on a three-phase model that includes hippopotamus positioning in rivers and ponds, defense strategies against predators, and escape strategies, the hippopotamus optimization (HO) algorithm is a new algorithm inspired by hippopotamus population behaviors, proposed by Amiri et al. [19] in 2024. In the optimization sector, HO stands out for its excellent performance: it is able to quickly identify and converge to the optimal solution and effectively avoids falling into local minima. Its efficient local search strategy and fast optimality-finding speed allow it to excel at complex problems, and by effectively balancing global exploration and local exploitation it quickly finds high-quality solutions, making it an effective tool for solving complex optimization problems.
Currently, metaheuristic algorithms have a wide range of application prospects in the field of engineering optimization. Hu [20] et al. used four metaheuristic algorithms, namely, the African vulture optimization algorithm (AVOA), the teaching–learning-based optimization algorithm (TLBO), the sparrow search algorithm (SSA), and the gray wolf optimization algorithm (GWO), to optimize a hybrid model and proposed integrated prediction of steady-state thermal performance data for an energy pile-driven model. Sun [21] et al., addressing a broad class of industrial design problems, proposed a fuzzy logic particle swarm optimization algorithm based on an associative constraint-handling method; a particle swarm optimization algorithm was used as the searcher, and a set of fuzzy logic rules integrating the feasibility of each individual was designed to enhance its search ability. Wu [22] et al., responding to the ant colony optimization algorithm's limitations, such as early blind searching, slow convergence, and low path smoothness, proposed an ant colony optimization algorithm based on farthest-point optimization and a multi-objective strategy. Palanisamy and Krishnaswamy [23] used hybrid HHO-PSO (hybrid particle swarm optimization) for failure testing of wire ropes, covering hardness, wear analysis, tensile strength, and fatigue life, and adopted a hybrid artificial neural network-based HHO (Hybrid ANN-HHO) to predict the performance of the experimental wire ropes. Liu [24] et al. proposed an improved adaptive hierarchical optimization algorithm (HSMAOA) in response to problems such as premature convergence and falling into local optima when arithmetic optimization algorithms deal with complex optimization problems. Cui [25] et al. combined the whale optimization algorithm (WOA) with an attention mechanism (ATT) and convolutional neural networks (CNNs) to optimize the hyperparameters of the LSTM model and proposed a new load prediction model to address the over-reliance of most methods on default hyperparameter settings. Che [26] et al. used a circular chaotic map as well as a nonlinear function for multi-strategy improvement of the whale optimization algorithm (WOA) and used the improved WOA to optimize the key parameters of the LSTM to improve its performance and modeling time. Elsisi [27] used a different learning process based on the improved gray wolf optimizer (IGWO) and the fitness–distance balancing (FDB) methodology to balance the original gray wolf optimizer's exploration and exploitation and designed a new automated adaptive model predictive control (AMPC) for self-driving cars to solve the rectification problem of self-driving car parameters and the uncertainty of the vision system. Karaman [28] et al. used the artificial bee colony (ABC) optimization algorithm to search for the optimal hyperparameters and activation function of the YOLOv5 algorithm and enhance the accuracy of colonoscopy. Yu and Zhang [29], in order to minimize the wake flow effect, proposed an adaptive moth flame optimization algorithm with enhanced detection exploitation capability (MFOEE) to optimize the turbine layout of wind farms. Dong [30] et al. optimized the genetic algorithm (GA) based on the characteristics of flood avoidance path planning and proposed an improved ant colony genetic optimization hybrid algorithm (ACO-GA) to achieve dynamic planning of evacuation paths for dam-breaking floods. Shanmugapriya [31] et al. proposed an IoT-based HESS energy management strategy for electric vehicles, optimizing the weight parameters of a neural network using the COA technique to improve the SAGAN algorithm and thereby the battery life of electric vehicles.
Beşkirli and Dağ [32] proposed an improved CPA algorithm (I-CPA) based on the instructional factor strategy and applied it to the problem of solar photovoltaic (PV) module parameter identification in order to improve the accuracy and efficiency of PV model parameter estimation. Beşkirli and Dağ [33] proposed a multi-strategy-based tree seed algorithm (MS-TSA) which effectively improves the global search capability and convergence performance of the algorithm by introducing an adaptive weighting mechanism, a chaotic elite learning method, and an experience-based learning strategy. It performs well in both CEC2017 and CEC2020 benchmark tests and achieves significant optimization results in solar PV model parameter estimation. Liu [34] et al. proposed an improved DBO algorithm and applied it to the optimal design of off-grid hybrid renewable energy systems to evaluate the energy cost with life cycle cost as the objective function. However, the above algorithms face the challenges of data size and complexity in practical applications and still suffer from the problem of easily falling into local optima, low efficiency, and insufficient robustness, which limit the performance and applicability of the algorithms.
When solving real-world problems, the HO algorithm excels due to its adaptability and robustness and is able to maintain stable performance in a wide range of optimization problems, making it an ideal choice for fast and efficient optimization. Maurya [35] et al. used the hippopotamus optimization algorithm (HO) to optimize distributed generation planning and network reconfiguration, considering different loading models, in order to improve the performance of a power grid. Chen [36] et al. addressed the limitations of the VMD algorithm and improved it by using the excellent optimization capability of the HO algorithm to achieve preliminary denoising, proposing a sub-synchronous oscillation (SSO) modal identification method based on hippopotamus optimization-variational modal decomposition (HO-VMD) and singular value decomposition-regularized total least squares-Prony (SVD-RTLS-Prony) algorithms. Ribeiro and Muñoz [37] used particle swarm optimization, hippopotamus optimization, and differential evolution algorithms to tune a controller with the aim of minimizing the root mean square (RMS) current of the batteries in an integrated vehicle simulation, thus mitigating battery stress events and prolonging battery lifetime. Wang [38] et al. used an improved hippopotamus optimization algorithm (IHO) to improve solar photovoltaic (PV) output prediction accuracy; the IHO algorithm addresses the limitations of traditional algorithms in terms of search efficiency, convergence speed, and global searching. Mashru [39] et al. proposed the multi-objective hippopotamus optimizer (MOHO), a unique approach that excels in solving complex structural optimization problems. Abdelaziz [40] et al.
used the hippopotamus optimization algorithm (HO) to optimize two key metrics and proposed a new optimization framework to cope with the volatility of renewable energy generation and unpredictable electric vehicle charging demand, enhancing the performance of the grid. Baihan [41] et al. proposed an optimizer-optimized CNN-LSTM approach that hybridizes the hippopotamus optimization algorithm (HOA) and the pathfinder algorithm (PFA) with the aim of improving the accuracy of sign language recognition. Amiri [42] et al. designed and trained two new neuro-fuzzy networks using the hippopotamus optimization algorithm with the aim of creating an anti-noise network with high accuracy and low parameter counts for detecting and isolating faults in gas turbines in power plants. In addition to the above applications, there are many other global optimization and engineering design problems. However, the "no free lunch" (NFL) theorem states that no optimization algorithm can solve all problems [43]; each existing optimization algorithm can only achieve the expected results on certain types of problems, so improvement of the HO algorithm is still necessary. Although the HO algorithm has many advantages, its performance decreases when dealing with complex global optimization and engineering design problems, and it cannot always avoid falling into local optima. It is still necessary to adjust the algorithm's parameters and strategies according to the specific problem in practical applications in order to fully utilize its potential. Therefore, we propose the MHO algorithm to enhance the ability of HO to solve these problems. The main contributions of this paper are as follows:
  • Use the method of the sine chaotic map to replace the original population initialization method in order to prevent the HO algorithm from settling into local optimal solutions and to produce high-quality starting solutions.
  • Introduce a new convergence factor to alter the growth mechanism of hippopotamus populations during the exploration phase, improving the global search capability of HO.
  • Incorporate a small-hole imaging reverse learning strategy into the hippopotamus escaping predator stage to avoid interference between dimensions, expand the search range of the algorithm to avoid falling into a local optimum, and thus improve the performance of the algorithm.
  • The MHO model is tested on 23 benchmark functions, the optimization ability of the model is tested by comparing it with other algorithms, and three engineering design problems are successfully solved.
The structure of this paper is as follows: Section 2 presents the hippopotamus algorithm and three methods for enhancing the hippopotamus optimization algorithm; Section 3 presents experiments and analysis, including evaluating the experimental results and comparing the MHO algorithm with other algorithms; Section 4 applies MHO to three engineering design problems; and Section 5 provides a summary of the entire work.

2. Improved Algorithm

2.1. Sine Chaotic Map

A sine chaotic map [44] is a chaotic system that generates chaotic sequences through the nonlinear transformation of a sinusoidal function. Owing to its simple structure and high efficiency, it has become a typical representative of chaotic maps; its mathematical expression is
$x_{k+1} = \alpha \sin(\pi x_k)$ (1)
where $k$ is a non-negative integer; $x_k \in (0, 1)$ denotes the value at the current iteration step; and $\alpha \in (0, 1]$ is the chaos control parameter.
The sine map begins to exhibit chaotic behavior when the parameter $\alpha$ approaches 0.87, and superior chaotic properties can be observed when $\alpha$ is close to 1. Therefore, introducing the sine chaotic map into the random initialization of the hippopotamus optimization (HO) algorithm distributes the hippopotamus population uniformly throughout the search space, which improves the diversity of the initial population, enhances the global search capability of the HO algorithm, and effectively avoids falling into locally optimal solutions. Figure 1 shows the population distribution initialized by the algorithm:
In the HO algorithm, a hippopotamus is a candidate solution to the optimization problem, which means that each hippopotamus’ position in the search space is updated to represent the values of the decision variables. Thus, each hippopotamus is represented as a vector and the population of hippopotamuses is mathematically characterized by a matrix. Similar to traditional optimization algorithms, the initialization phase of HO involves the generation of a random initial solution, and the vector of decision variables is generated as follows:
$X_i : x_{i,j} = lb_j + r \times (ub_j - lb_j), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m$ (2)
where $X_i$ denotes the position of the $i$th candidate solution, $r$ is a random number in the range $[0, 1]$, and $lb_j$ and $ub_j$ represent the lower and upper bounds of the $j$th decision variable, respectively. Let $N$ denote the population size of the hippopotamus herd, while $m$ denotes the number of decision variables in the problem; the population matrix is composed as in Equation (3).
$x = \begin{bmatrix} x_1 \\ \vdots \\ x_i \\ \vdots \\ x_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}$ (3)
Consequently, the improved expression for the initialization phase is
$X_i : x_{i,j} = lb_j + Sine\_chaos \times (ub_j - lb_j)$ (4)
and furthermore,
$Sine\_chaos = \alpha \sin(k \pi x)$ (5)
where k is a parameter that controls the chaotic behavior and x is an initial value.
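As a concrete illustration of Equations (4) and (5), the chaotic initialization can be sketched as follows. This is a minimal sketch: the function name and the parameter values (`alpha = 0.99`, `k = 1.0`, seed `x0 = 0.7`) are our illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def sine_chaos_init(N, m, lb, ub, alpha=0.99, k=1.0, x0=0.7):
    """Initialize an N x m population by iterating a sine chaotic sequence."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    pop = np.empty((N, m))
    x = x0
    for i in range(N):
        for j in range(m):
            x = alpha * np.sin(k * np.pi * x)            # Equation (5)
            pop[i, j] = lb[j] + abs(x) * (ub[j] - lb[j])  # Equation (4)
    return pop
```

Each chaotic value is mapped into $[lb_j, ub_j]$, so every individual stays inside the search space while the sequence spreads the initial positions across it.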

2.2. Change Growth Mechanism

The growth mechanism is a key component of the hippopotamus optimization algorithm that determines how the search strategy is updated to find better solutions based on current information.
In the original growth mechanism, the exploration phase of the HO algorithm models the activity of the hippopotamus within the herd. The authors subdivided the whole population into four segments, i.e., adult female hippopotamuses, young hippopotamuses, adult male hippopotamuses, and the dominant male hippopotamus (the leader of the herd). The dominant hippopotamus is determined iteratively based on the value of the objective function (the minimum for minimization problems and the maximum for maximization problems).
In a typical hippopotamus herd, several females are positioned around the males, and the herd leader defends the herd and territory from possible attacks. When hippopotamus calves reach adulthood, the dominant male ejects them from the herd. Subsequently, these expelled males attempt to attract females or compete for dominance with other established male members. The position of the herd's male hippopotamus in a lake or pond is represented mathematically by Equation (6).
$X_i^{Mhippo} : x_{i,j}^{Mhippo} = x_{i,j} + y_1 \cdot (D_{hippo} - I_1 \cdot x_{i,j})$ (6)
In Equation (6), $X_i^{Mhippo}$ denotes the position of the male hippopotamus and $D_{hippo}$ indicates the location of the dominant hippopotamus. As shown in Equation (7), $\vec{r}_1, \ldots, \vec{r}_4$ are random vectors between 0 and 1, $r_5$ is a random number between 0 and 1, and $I_1$ and $I_2$ are integers equal to either 1 or 2. $MG_i$ is the mean of a number of randomly selected hippopotamuses, which includes the currently considered hippopotamus with equal probability, $y_1$ is a random number between 0 and 1, and $e_1$ and $e_2$ are random integers that can be either 1 or 0.
$h = \begin{cases} I_2 \times \vec{r}_1 + (\sim e_1) \\ 2 \times \vec{r}_2 - 1 \\ \vec{r}_3 \\ I_1 \times \vec{r}_4 + (\sim e_2) \\ r_5 \end{cases}$ (7)
$T = \exp\left(-\dfrac{t}{Max\_iterations}\right)$ (8)
$X_i^{FBhippo} : x_{i,j}^{FBhippo} = \begin{cases} x_{i,j} + h_1 \cdot (D_{hippo} - I_2 \cdot MG_i), & T > 0.6 \\ \Xi, & \text{else} \end{cases}$ (9)
$\Xi = \begin{cases} x_{i,j} + h_2 \cdot (MG_i - D_{hippo}), & r_6 > 0.5 \\ lb_j + r_7 \cdot (ub_j - lb_j), & \text{else} \end{cases}$ for $i = 1, 2, \ldots, \lfloor N/2 \rfloor$ and $j = 1, 2, \ldots, m$ (10)
Equations (9) and (10) describe the position of the female or immature hippopotamus in the herd ($X_i^{FBhippo}$). The majority of immature hippos are with their mothers, but due to curiosity, immature hippos sometimes become separated from the herd or stay away from their mothers.
If the convergence factor $T$ is greater than 0.6, the immature hippo has distanced itself from its mother (Equation (9)). If $r_6$ is greater than 0.5, the immature hippopotamus has distanced itself from its mother but is still in or near the herd; otherwise, it has left the herd. Equations (9) and (10) model this behavior for immature and female hippos. The randomly chosen numbers or vectors $h_1$ and $h_2$ are extracted from the set of five scenarios outlined in Equation (7). In Equation (10), $r_7$ is a random number between 0 and 1. Equations (11) and (12) describe the position update of female or immature hippos. The objective function value is denoted by $F_i$:
$X_i = \begin{cases} X_i^{Mhippo}, & F_i^{Mhippo} < F_i \\ X_i, & \text{else} \end{cases}$ (11)
$X_i = \begin{cases} X_i^{FBhippo}, & F_i^{FBhippo} < F_i \\ X_i, & \text{else} \end{cases}$ (12)
The use of the $h$ scenarios together with $I_1$ and $I_2$ enhances the algorithm's global search and improves its exploration capabilities.
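The greedy replacement rules of Equations (11) and (12) can be written as one small helper; this is an illustrative sketch (the helper name and the sphere objective in the example are ours, not the paper's), which keeps a candidate position only when it improves the objective value.

```python
import numpy as np

def greedy_select(X, X_cand, f):
    # Equations (11)-(12): accept the candidate position only if its
    # objective value is strictly better than the current one.
    return X_cand if f(X_cand) < f(X) else X

# Example with a sphere objective (an illustrative test function)
sphere = lambda v: float(np.sum(v ** 2))
kept = greedy_select(np.array([1.0, 1.0]), np.array([0.5, 0.5]), sphere)
```

Because the comparison is elitist, the population's best-so-far fitness is monotonically non-increasing across iterations.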
The growth mechanism is improved by introducing a new convergence factor $T$, specifically designed to dynamically adjust the behavioral patterns of immature hippos; the improved formulation of $T$ is as follows:
$T = 1 - \left(\dfrac{t}{Max\_iterations}\right)^6$ (13)
$X_i^{FBhippo} : x_{i,j}^{FBhippo} = \begin{cases} x_{i,j} + h_1 \cdot (D_{hippo} - I_2 \cdot MG_i), & T > 0.95 \\ \Xi, & \text{else} \end{cases}$ (14)
$\Xi = \begin{cases} x_{i,j} + h_2 \cdot (MG_i - D_{hippo}), & r_6 > 0.5 \\ lb_j + r_7 \cdot (ub_j - lb_j), & \text{else} \end{cases}$ for $i = 1, 2, \ldots, \lfloor N/2 \rfloor$ and $j = 1, 2, \ldots, m$ (15)
where $t$ is the current iteration number and $Max\_iterations$ is the maximum number of iterations.
Plots of the functions of Equations (8) and (13) before and after the improvement are shown in Figure 2. The simulated immature hippopotamus individuals show a higher propensity to explore within the hippopotamus population or the surrounding area when $T > 0.95$ (Equation (14)). This behavior encourages the algorithm to refine its search in a local region close to the current optimal solution, enhancing search accuracy and efficiency in that region. The immature hippo attempts to move away from the present optimal solution when $T \leq 0.95$ and $r_6 > 0.5$; this prolongs the search, lowering the possibility that the algorithm falls into a local optimum and enabling a more thorough exploration of the global solution space (Equation (15)). In this way, the algorithm can identify and escape potential local optimality traps more efficiently, increasing the probability of finding the globally optimal solution. When $r_6 \leq 0.5$, immature hippos perform random exploration, allowing the algorithm to maintain diversity and avoid premature convergence. This improvement enhances the HO algorithm's search capability and adaptability by better simulating the natural behavior of hippos.
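The two convergence factors can also be compared numerically. The sketch below (function names are ours) shows that the modified factor of Equation (13) keeps $T > 0.95$ for roughly the first 60% of iterations ($0.05^{1/6} \approx 0.607$) before dropping off sharply, whereas the exponential factor of Equation (8) falls below its 0.6 threshold after roughly half of the run.

```python
import math

def T_original(t, max_iterations):
    # Original convergence factor, Equation (8): exp(-t / Max_iterations)
    return math.exp(-t / max_iterations)

def T_modified(t, max_iterations):
    # Modified convergence factor, Equation (13): 1 - (t / Max_iterations)^6
    return 1.0 - (t / max_iterations) ** 6

# Fraction of the run during which each factor keeps its exploration branch
max_it = 1000
frac_mod = sum(T_modified(t, max_it) > 0.95 for t in range(max_it)) / max_it
frac_orig = sum(T_original(t, max_it) > 0.6 for t in range(max_it)) / max_it
```

The slow initial decay of the modified factor keeps the algorithm refining near the current optimum for longer before switching to the escape-oriented branch.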

2.3. Small-Hole Imaging Reverse Learning Strategy

Many academics have proposed reverse (opposition-based) learning strategies to address the issue that most intelligent optimization algorithms are prone to local extremes [45]. The core idea behind this strategy is to create a corresponding reverse solution for the current solution during population optimization, compare the objective function values of the two solutions, and carry the better one into the next iteration. Based on this approach, this study adopts the small-hole imaging reverse learning [46] technique to enhance population diversity, which improves the algorithm's global search capability and more accurately approximates the global optimal solution.
The principle of small-hole imaging is shown in Figure 3. It is a method that combines pinhole imaging with dimension-by-dimension inverse learning, derived from lens opposition-based learning (LensOBL) [47]. The aim is to find an inverse solution for each dimension of the feasible solution, thus reducing the risk of the algorithm falling into a local optimum.
Assume that in a certain space there is a flame $p$ with height $h$ whose projection on the X-axis is $X_{best}$ (the $j$th-dimensional optimal solution), the upper and lower bounds of the coordinate axis are $a_j$ and $b_j$ (the upper and lower bounds of the $j$th-dimensional solution), and a screen with a small hole is placed on the base $O$. The flame passing through the small hole produces an inverted image $p^{*}$ of height $h^{*}$ on the receiving screen, and a reversed point $X_{best}^{*}$ (the reversed solution of the $j$th-dimensional solution) is thus obtained on the X-axis through small-hole imaging. Therefore, from the principle of small-hole imaging, Equation (16) can be derived.
$\dfrac{\frac{a_j + b_j}{2} - X_{best}}{X_{best}^{*} - \frac{a_j + b_j}{2}} = \dfrac{h}{h^{*}}$ (16)
Let $h / h^{*} = n$; solving Equation (16) for $X_{best}^{*}$ yields Equation (17), and Equation (18) is obtained when $n = 1$.
$X_{best}^{*} = \dfrac{a_j + b_j}{2} + \dfrac{a_j + b_j}{2n} - \dfrac{X_{best}}{n}$ (17)
$X_{best}^{*} = a_j + b_j - X_{best}$ (18)
As can be seen from Equation (18), small-hole imaging reverse learning reduces to the general reverse learning strategy when $n = 1$. In that case, however, it can only map the current optimal position to a single fixed reverse point, and this fixed position is frequently far from the global optimum. Therefore, by adjusting the distance between the receiving screen and the small-hole screen, i.e., by changing the adjustment factor $n$, the algorithm can obtain a solution closer to the optimal position, allowing it to jump out of the local optimal region and approach the global optimal region.
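Under the definitions above, Equation (17) is a one-line computation per dimension; the helper below is our illustrative sketch (the function name is not from the paper).

```python
def small_hole_reverse(x_best, a, b, n=1.0):
    # Equation (17): reverse point via small-hole imaging, with
    # adjustment factor n = h / h*. For n = 1 this reduces to the
    # standard opposition point of Equation (18): a + b - x_best.
    return (a + b) / 2.0 + (a + b) / (2.0 * n) - x_best / n
```

For example, with bounds $[0, 10]$ and $X_{best} = 2$, $n = 1$ gives the fixed opposition point 8, while $n = 2$ pulls the reverse point to 6.5, closer to the midpoint of the interval.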
The exploitation phase of the original hippopotamus algorithm describes a hippopotamus fleeing from a predator. This behavior occurs when a hippopotamus is unable to repel a predator with its defensive behaviors; the hippopotamus then tries to leave the area in order to avoid the predator. This strategy causes the hippo to find a safe location close to its current position. In the third phase, the authors simulate this behavior, which improves the algorithm's local search capabilities; random positions are generated close to the hippo's present location.
$X_i^{HippoE} : x_{i,j}^{HippoE} = x_{i,j} + r_{10} \cdot \left(lb_j^{local} + s_1 \cdot (ub_j^{local} - lb_j^{local})\right), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, m$ (19)
$lb_j^{local} = \dfrac{lb_j}{t}, \quad ub_j^{local} = \dfrac{ub_j}{t}, \quad t = 1, 2, \ldots, \tau$ (20)
$s = \begin{cases} 2 \times \vec{r}_{11} - 1 \\ r_{12} \\ r_{13} \end{cases}$ (21)
where $X_i^{HippoE}$ is the position of the hippo when escaping from the predator, searched so as to find the closest safe position. $s_1$ is a vector or number randomly selected from the three $s$ scenarios (Equation (21)); the possibilities that the $s$ equations take into account encourage better localized search. $\vec{r}_{11}$ is a random vector between 0 and 1, $r_{10}$ and $r_{13}$ are random numbers in the range 0 to 1, and $r_{12}$ is a normally distributed random number. $t$ denotes the current iteration number, while $\tau$ denotes the maximum iteration number.
$X_i = \begin{cases} X_i^{HippoE}, & F_i^{HippoE} < F_i \\ X_i, & F_i^{HippoE} \geq F_i \end{cases}$ (22)
An improved fitness value at the new position indicates that the hippopotamus has relocated to a safer area close to its original location.
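A single escape-phase move (Equations (19)–(21)) can be sketched as below; `escape_update`, the NumPy `Generator` argument, and the final clipping back to the global bounds are our illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def escape_update(x, lb, ub, t, rng):
    """One escape-phase move for a position vector x at iteration t >= 1."""
    lb_local = lb / t                       # Equation (20)
    ub_local = ub / t
    r10 = rng.random()                      # r10 in [0, 1)
    # s1: one of the three scenarios of Equation (21)
    scenarios = (2.0 * rng.random(x.shape) - 1.0,  # 2 * r11 - 1 (vector)
                 rng.standard_normal(),            # r12 ~ N(0, 1)
                 rng.random())                     # r13 in [0, 1)
    s1 = scenarios[rng.integers(3)]
    x_new = x + r10 * (lb_local + s1 * (ub_local - lb_local))  # Equation (19)
    return np.clip(x_new, lb, ub)           # keep within the global bounds
```

Because $lb^{local}$ and $ub^{local}$ shrink as $1/t$, the random escape steps become progressively more local as the iterations advance.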
Incorporating the small-hole imaging reverse learning strategy into the HO algorithm can effectively improve the diversity and optimization efficiency of the algorithm. This strategy enhances population diversity and expands the search range while mapping the optimal solution dimension by dimension, which reduces interdimensional interference and improves global search capability. Additionally, it enhances stability, lowers the possibility of stagnating in a local optimum, dynamically adjusts the search range, and balances global search with local exploitation, all of which help the algorithm find a better solution with each iteration.

2.4. Algorithmic Process

The program details of MHO are shown in the flowchart in Figure 4. First, an initial population is created using the sine chaotic map, and the iteration counter is set to $i = 1$ and the time counter to $t = 1$. The algorithm then proceeds in three phases. When $i \leq N/2$, the first phase begins (Phase 1): the position update of the hippopotamus in the river or pond (exploration phase). Equations (9) and (14) are used to calculate the positions of male and female hippos, respectively, and the positions are updated using Equations (11) and (12). When $i > N/2$, the second phase begins (Phase 2), i.e., hippopotamus defense against predators, which is consistent with the original hippopotamus algorithm. The third phase begins when $i > N$: the hippopotamus escapes from the predator, its final position is calculated, and the hippo's nearest safe position is updated using Equation (17). Finally, if the time counter satisfies $t < T$, $t$ is incremented and the iteration counter is reset to $i = 1$ to continue the iteration. Once the maximum number of iterations $T$ has been reached, the optimal objective function solution discovered by the MHO algorithm is output.

2.5. Computational Complexity

Time complexity is a basic index for evaluating the efficiency of algorithms and is analyzed in this paper using Big-O notation [48]. Assuming that the population size is P , the dimension is D , and the number of iterations is T , the time complexities of the HO algorithm and the MHO algorithm are analyzed as follows:
The standard HO algorithm consists of two phases: a random population initialization phase and a subsequent hippopotamus position update. In the initialization phase, the time complexity of HO can be expressed as T 1 = O P × D . In the position updating phase, the hippopotamus applies the position updates for movement in rivers or ponds, defense against predators, and escape from predators. The computational complexity of each iteration is O P × D , so after T iterations the accumulated cost is T 2 = O T × P × D . Therefore, the total time complexity of HO can be expressed as T H O = T 1 + T 2 = O T × P × D .
The proposed MHO algorithm consists of three phases: population initialization based on the sine chaotic map, hippopotamus position updating, and the small-hole imaging reverse learning phase. In the sine chaotic map-based initialization phase, the time complexity of MHO, denoted T 1 , is T 1 = O P × D . The hippopotamus position update phase is essentially the same as in HO, with a matching time complexity T 2 = O T × P × D . The small-hole imaging reverse learning phase costs T 3 = O P × D per iteration, accumulating to O T × P × D over all iterations. Thus, the overall time complexity of the MHO algorithm is T M H O = T 1 + T 2 + T 3 = O T × P × D . It is worth noting that the time complexity of MHO is of the same order as that of HO, which indicates that the enhancement strategies proposed in this study do not reduce the solution efficiency of the algorithm.

3. Experiment

In this section, a series of experiments is designed to validate the effectiveness of the improved HO algorithm. We chose 23 benchmark test functions to evaluate the MHO algorithm and performed comparison experiments with nine other metaheuristic algorithms. In addition, ablation experiments were conducted to explore the contribution and impact of the different components of the MHO algorithm.

3.1. Experimental Setup and Evaluation Criteria

To ensure the fairness and validity of the experiments, the proposed improved algorithm MHO and the other nature-inspired algorithms were all programmed and run in the same experimental environment: Windows 10, a computer configured with a 12th Gen Intel(R) Core(TM) i5-12600KF 3.70 GHz processor and 16 GB of RAM, using MATLAB 2019b. The performance of the algorithms is evaluated using the following criteria:
Mean: the average value computed by the algorithm after executing it several times for the benchmark function tested. The mean value indicates the general effectiveness of the algorithm in finding the optimal solution, i.e., the desired performance of the algorithm. A lower mean value indicates that the algorithm is able to find a better solution on average over multiple runs. The formula is calculated as in Equation (23):
$$\mathrm{Mean} = \frac{1}{S}\sum_{i=1}^{S} F_i$$
where $S$ is the number of executions and $F_i$ denotes the result of the $i$-th execution.
Standard deviation: the standard deviation calculated by the algorithm after executing the test functions many times. The smaller the standard deviation, the more stable the performance of the algorithm, which usually means that the algorithm has better robustness. The formula is shown in Equation (24):
$$\mathrm{Std} = \sqrt{\frac{1}{S}\sum_{i=1}^{S}\left(F_i - \frac{1}{S}\sum_{j=1}^{S} F_j\right)^2}$$
Rank: ranks the results of the Friedman test for all algorithms; the lower the mean and Std, the higher the rank. Algorithms with the same result are given tied ranks. “Rank-Count” represents the cumulative sum of the ranks, “Ave-Rank” represents the average of the ranks, and “Overall-Rank” is the final ranking of the algorithms over all comparisons.
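The two statistics in Equations (23) and (24) can be computed directly; the sketch below uses the population form of the standard deviation, matching the formula above:

```python
import math

def mean_and_std(results):
    """Mean and (population) standard deviation over S independent runs,
    matching Equations (23) and (24)."""
    S = len(results)
    mean = sum(results) / S
    std = math.sqrt(sum((f - mean) ** 2 for f in results) / S)
    return mean, std
```

For example, over the eight run results [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5.0 and the standard deviation is 2.0.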

3.2. Test Functions

In order to test the improved performance of the MHO algorithm, 23 benchmark functions with different characteristics are used; the specific function information, including the dimensionality (Dim), the domain, and the known theoretical optimum of each function, is given in Table 1. These test functions are grouped into three categories: the single-peak functions f 1 f 7 , the multimodal functions f 8 f 13 , and the fixed-dimension functions f 14 f 23 . A single-peak benchmark function has only one global optimum and is monotonic and deterministic, making it suitable for evaluating the convergence speed and exploitation capability of an optimization algorithm. A multimodal function has multiple local optima but only one global optimum, so it is commonly used to test an algorithm's global search capability and its ability to avoid falling into local optima. Fixed-dimension multimodal functions, on the other hand, are defined in a specific dimension, so their complexity and difficulty are fixed and do not change with the dimension, ensuring the consistency and comparability of the tests.
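In the widely used 23-function suite, f1 is typically the sphere function (single-peak) and f9 the Rastrigin function (multimodal); assuming that convention, the two sketches below illustrate the contrast between the categories (the exact functions used here should be read from Table 1):

```python
import math

def sphere(x):
    """Single-peak: one global minimum, f = 0 at x = 0; typical domain [-100, 100]^D."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: many regularly spaced local minima, global minimum f = 0 at x = 0;
    typical domain [-5.12, 5.12]^D."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

The sphere function rewards pure exploitation, while Rastrigin's grid of local minima (e.g., f ≈ 1 at x = (1, 0, …, 0)) penalizes algorithms that converge prematurely.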

3.3. Sensitivity Analysis

MHO is a population-based optimizer that performs the optimization process through iterative computation. The experimental results are therefore influenced by the number of fitness evaluations ( F E s = P t ), where P is the population size and t is the number of iterations. Most studies in the literature fix F E s at 15,000, i.e., P = 30 and t = 500 . However, different P and t settings can affect the algorithm's performance. We therefore chose three different P / t combinations (20/750, 30/500, and 60/250) to analyze their effects on the MHO algorithm. Seventeen test functions were randomly selected for the sensitivity analysis, and the experimental results are shown in Table 2.
As can be seen in Table 2, for f 7 ,   f 12 ,   f 20 the best results are achieved under all three P / t settings. For f 16 ,   f 19 ,   f 20 ,   f 21 ,   f 23 , the P / t setting of 20/750 yields the smallest standard deviation, while the setting of 30/500 exhibits smaller Std values for f 14 and f 17 . Rank-Count is the sum of the rank values over all functions for the same P / t setting, and the Rank-Count of 32.5 for P / t of 30/500 is the smallest. After the Friedman test, P / t of 30/500 takes first place in the final ranking (Overall-Rank), so this setting gives the best experimental result and is fixed as the parameter setting for the experiments in this paper.

3.4. Experimental Results

Comparative experiments were conducted on the above twenty-three test functions for HO and three variants of HO (HO1, HO2, and HO3), comparing them with the Harris hawk optimization algorithm (HHO) [49], honey badger algorithm (HBA) [50], dung beetle optimization algorithm (DBO) [51], particle swarm optimization algorithm (PSO), and whale optimization algorithm (WOA) [52]. HO1 introduces the sine chaotic map into HO, replacing the population initialization method with Equation (4); HO2 incorporates the small-hole imaging reverse learning strategy, adding the reverse learning process of Equation (17); and HO3 improves the growth mechanism of HO (Equation (13)). Uniform parameters are set for the evaluation to ensure fairness, and each algorithm is run independently 50 times. The experimental results are shown in Table 3.
The data in Table 3 show the performance of the MHO algorithm and its competitors on the benchmark functions. MHO outperforms the other algorithms in terms of mean and standard deviation on the functions f 1 f 4 and is slightly inferior to the HHO algorithm on f 5 f 6 , although on f 6 it ranks second only to HHO in both mean and standard deviation. On f 7 , MHO is superior to all algorithms except the variant HO2. For the multimodal benchmark functions f 8 f 13 , MHO performs best on f 9 ,   f 10 ,   f 11 ; it is inferior to HHO on f 12 and f 13 , but its mean on f 13 is second only to HHO and better than that of the other algorithms. For the fixed-dimension test functions f 14 f 23 , MHO outperforms the other algorithms in both mean and standard deviation on the six functions f 14 , f 15 and f 20 f 23 , while on the four functions f 16 f 19 its standard deviation is slightly worse than that of some competitors but its mean values remain optimal.
Summarizing the above results, the MHO algorithm shows a clear advantage on the benchmark functions. Whether on single-peak, multimodal, or fixed-dimension functions, it exhibits excellent optimization performance and stability. These results demonstrate the effectiveness and superiority of the MHO algorithm in solving complex optimization problems.

3.5. Friedman Test

The Friedman test [53] provides an effective tool for performance comparison of optimization algorithms, statistical significance analysis, robustness assessment, and multi-objective optimization, which allows us to make scientific and reasonable algorithm selections and applications. Through the Friedman test, we can fairly compare the performance of different algorithms and reduce the bias caused by the selection of specific problems, so as to make objective evaluations and scientific decisions.
Therefore, in order to further compare the overall performance of these 10 algorithms, they are ranked using the Friedman test. Table 4 shows the performance rankings of the 10 algorithms, including MHO, on 17 functions randomly selected from the 23 benchmark functions above. From the table, MHO has a Rank-Count of 50.5, an average rank of 2.1597, and a final ranking of 1, which indicates that MHO has the best overall performance. The results of the Friedman test again confirm that MHO performs better than the other algorithms.
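The mean-rank computation underlying Table 4 can be sketched as follows; this is a minimal implementation of Friedman-style average ranks with tie handling (the significance test itself is omitted), not the authors' exact code:

```python
import numpy as np

def friedman_average_ranks(scores):
    """scores[f][a]: result of algorithm a on function f (lower is better).
    Returns the average rank of each algorithm across functions, with ties
    assigned the mean of the tied ranks, as in the Friedman test."""
    scores = np.asarray(scores, dtype=float)
    n_funcs, n_algs = scores.shape
    ranks = np.empty_like(scores)
    for f in range(n_funcs):
        order = scores[f].argsort(kind="stable")  # indices from best to worst
        r = np.empty(n_algs)
        i = 0
        while i < n_algs:
            j = i
            # Extend the tie group while successive scores are equal.
            while j + 1 < n_algs and scores[f, order[j + 1]] == scores[f, order[i]]:
                j += 1
            r[order[i:j + 1]] = (i + j) / 2.0 + 1.0  # mean rank for ties, 1-based
            i = j + 1
        ranks[f] = r
    return ranks.mean(axis=0)
```

For two functions and three algorithms with scores [[1, 2, 3], [3, 1, 2]], the per-function ranks are [1, 2, 3] and [3, 1, 2], giving average ranks [2.0, 1.5, 2.5]; summing instead of averaging yields the Rank-Count.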

3.6. Convergence Analysis

The convergence curve usually indicates the process of the algorithm gradually approaching the optimal solution during the iteration process, which can simply and intuitively show the advantages and disadvantages of one or more algorithms, so the convergence analysis is a key step in verifying whether the MHO algorithm can stably find the optimal solution or near-optimal solution to the optimization problem.
In this study, the average fitness value of the objective function is used as the criterion for evaluating the convergence of the algorithms, and each algorithm is iterated up to 500 times. We visualize the experimental results of ten algorithms, including MHO, on 23 benchmark functions, and the obtained convergence curves are subjected to convergence analysis. As shown in Figure 5, Figure 6 and Figure 7, the convergence plots are shown for the single-peak function, the multimodal function, and the fixed-dimension function, respectively.
All convergence curves for the single-peak functions are shown in Figure 5. The initial solution of MHO is always the lowest among the convergence curves on these seven functions, indicating that it finds good-quality solutions at the initial stage. Except on f 7 , the variant HO2 has curves similar to those of MHO, with optimal convergence speed and accuracy, which reflects the effectiveness of the small-hole imaging reverse learning strategy. With the exception of the PSO curve on f 2 and the WOA curve on f 3 , all curves tend to converge. On the f 7 function, the convergence speed of MHO is not similar to that of the other algorithms, but the value of its optimal solution is the smallest, so its overall performance is better.
All convergence curves for the multimodal functions are shown in Figure 6. Again, the curves of MHO and HO2 are similar on the six functions shown. On the f 8 function, the fitness values of MHO and most of the other algorithms are significantly lower than those of HHO, but on all other functions the dominance of MHO resembles that observed on the single-peak functions. On f 9 , MHO and the green curve of HO2 converge first, followed closely by the similar curves of HO, HO1, HO3, and HHO; PSO shows the worst convergence rate and fitness values on f 8 f 11 .
Figure 7 shows the fixed-dimension multimodal functions. The overall differences between the algorithms on f 14 f 19 are small, but noticeable differences appear in the detailed views. MHO exhibits the same characteristic on all these functions, namely a rapid decline in the initial period reflecting a fast convergence rate; the other algorithms also converge quickly on specific functions, but MHO attains lower fitness values. Among the functions f 20 f 23 , HHO has the worst overall performance, while MHO shows good optimization performance with fast convergence and optimal solutions on all functions. The other algorithms also perform well on specific functions but, overall, the MHO algorithm is the most competitive in these tests.

3.7. Stability Analysis

In this section, box-and-line plots are used to analyze the stability of all the algorithms, each run independently 50 times, again using the experimental results on the 23 benchmark functions. Figure 8, Figure 9 and Figure 10 show the box-and-line plots for the single-peak, multimodal, and fixed-dimension multimodal functions, respectively. The box plots of representative functions are used below as examples to explain how the plots are evaluated.
In the box-and-line plot, the red horizontal line represents the median; lower values indicate better performance on the test function, and the median is the primary metric for evaluating the algorithms. MHO has a low median, and all the algorithms except HHO show similar performance in this respect. The blue box shows the interquartile range (IQR); a smaller IQR indicates more stable algorithmic performance, and by this measure MHO, HO, HO1, HO2, HO3, HHO, HBA, and WOA all show good stability. The red crosses represent outliers; the fewer there are, the better the stability. Here, only HHO, HBA, and PSO have outliers, implying that the other algorithms are more stable. The gray dotted line represents the whisker; the longer the whisker, the more dispersed the data, and the long whisker of DBO shows that it performs poorly. The stability of each algorithm can be analyzed by combining these criteria. To keep the presentation concise, only representative examples are shown.
Among the single-peak functions shown in Figure 8, MHO has the lowest median, the smallest outliers, the smallest interquartile spacing, and shows better stability, while PSO is the least stable; the other four single-peak functions are consistent in their general trend with the representative cases shown.
Among the multi-peak functions presented in Figure 9, MHO has the lowest median and performs better in terms of stability, with only slightly more outliers on f 11 ,   f 13 than HHO; among the functions not presented, the overall trend is consistent with the representative cases presented, with MHO being slightly weaker in terms of stability than HHO as well as PSO performing the worst.
Among the multimodal functions shown in Figure 10, MHO has the lowest median and outliers and the best stability, while HHO performs the worst. The general trend in the performance of the algorithms in the non-shown functions is consistent with the shown functions. The combined boxplot analysis of the above algorithms leads to the conclusion that MHO has the best stability.

4. Application to Engineering Design Problems

Three typical constrained engineering problems, namely speed reducer design [54], gear train design [55], and step-cone pulley design [56], are examined in this section to further confirm the efficacy of MHO in solving global optimization problems. Because of their intricate constraints and multi-objective characteristics, these problems are not only significant in engineering practice but also excellent examples for evaluating the effectiveness of optimization methods. The experiments are set up as 50 independent runs with a maximum of 50 iterations per run, and MHO's performance is compared with that of the other algorithms to confirm its effectiveness.

4.1. Speed Reducer Design Problem

Reducers are key components in mechanical drive systems. As shown in Figure 11, the design of a speed reducer is challenging. This is because seven design variables are involved: face width ( x 1 ), module of teeth ( x 2 ), number of teeth on the pinion ( x 3 ), length of the first shaft between the bearings ( x 4 ), length of the second shaft between the bearings ( x 5 ), diameter of the first shaft ( x 6 ), and diameter of the second shaft ( x 7 ). The objective is to minimize the total weight of the gearbox while satisfying 11 constraints. The constraints include bending stresses in the gear teeth, surface stresses, lateral deflections of shaft 1 and shaft 2 due to transmitted forces, and stresses in shaft 1 and shaft 2. The mathematical model is shown in Equation (25):
$$
\begin{aligned}
&\text{Consider } \bar{x} = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, p, l_1, l_2, d_1, d_2]\\
&\text{Minimize } f(\bar{x}) = 0.7854\, x_1 x_2^2 \left(3.3333\, x_3^2 + 14.9334\, x_3 - 43.0934\right) - 1.508\, x_1 \left(x_6^2 + x_7^2\right)\\
&\qquad\qquad + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right)\\
&\text{Subject to}\\
&g_1(\bar{x}) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \quad
 g_2(\bar{x}) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,\\
&g_3(\bar{x}) = \frac{1.93\, x_4^3}{x_2 x_6^4 x_3} - 1 \le 0, \quad
 g_4(\bar{x}) = \frac{1.93\, x_5^3}{x_2 x_7^4 x_3} - 1 \le 0,\\
&g_5(\bar{x}) = \frac{\left[\left(745\, x_4 /(x_2 x_3)\right)^2 + 16.9 \times 10^6\right]^{1/2}}{110\, x_6^3} - 1 \le 0,\\
&g_6(\bar{x}) = \frac{\left[\left(745\, x_5 /(x_2 x_3)\right)^2 + 157.5 \times 10^6\right]^{1/2}}{85\, x_7^3} - 1 \le 0,\\
&g_7(\bar{x}) = \frac{x_2 x_3}{40} - 1 \le 0, \quad
 g_8(\bar{x}) = \frac{5\, x_2}{x_1} - 1 \le 0, \quad
 g_9(\bar{x}) = \frac{x_1}{12\, x_2} - 1 \le 0,\\
&g_{10}(\bar{x}) = \frac{1.5\, x_6 + 1.9}{x_4} - 1 \le 0, \quad
 g_{11}(\bar{x}) = \frac{1.1\, x_7 + 1.9}{x_5} - 1 \le 0,\\
&\text{where } 2.6 \le x_1 \le 3.6,\ 0.7 \le x_2 \le 0.8,\ 17 \le x_3 \le 28,\ 7.3 \le x_4 \le 8.3,\\
&\qquad 7.3 \le x_5 \le 8.3,\ 2.9 \le x_6 \le 3.9,\ 5.0 \le x_7 \le 5.5.
\end{aligned}
$$
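As a sanity check, the objective can be evaluated at the best design widely reported in the literature, x ≈ (3.5, 0.7, 17, 7.3, 7.7153, 3.3502, 5.2867), which gives f ≈ 2994.47. The sketch below uses the canonical formulation of the problem (some constants, such as 397.5 and 157.5 × 10⁶, vary slightly across sources), so it is illustrative rather than the authors' exact code:

```python
import math

def reducer_objective(x):
    """Total weight of the speed reducer (objective of Equation (25), canonical form)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def reducer_constraints(x):
    """The 11 inequality constraints g_i(x) <= 0 (canonical constants)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1,                                        # tooth bending stress
        397.5 / (x1 * x2**2 * x3**2) - 1,                                    # surface stress
        1.93 * x4**3 / (x2 * x6**4 * x3) - 1,                                # deflection of shaft 1
        1.93 * x5**3 / (x2 * x7**4 * x3) - 1,                                # deflection of shaft 2
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1,   # stress in shaft 1
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1,   # stress in shaft 2
        x2 * x3 / 40.0 - 1,
        5.0 * x2 / x1 - 1,
        x1 / (12.0 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ]
```

At the reported best design, the objective is about 2994.47 and the constraints are satisfied to within numerical tolerance (several are active, i.e., near zero).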
MHO is compared with the nine other algorithms on the speed reducer design problem. Table 5 displays the experimental results, and it is evident that MHO obtains the lowest cost.

4.2. Gear Train Design Problem

The gear train design problem aims to minimize the gear-ratio error of the gear train shown in Figure 12. This problem has four integer decision variables, where N 1 , N 2 , N 3 , and N 4 represent the number of teeth of four different gears. The mathematical model is shown in Equation (26):
$$
\begin{aligned}
&\text{Consider } \bar{x} = [x_1, x_2, x_3, x_4] = [N_1, N_2, N_3, N_4]\\
&\text{Minimize } f(\bar{x}) = \left(\frac{1}{6.931} - \frac{x_2 x_3}{x_1 x_4}\right)^2,\\
&\text{Variable range: } 12 \le x_i \le 60,\ i = 1, 2, 3, 4.
\end{aligned}
$$
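The objective is cheap to evaluate, so it can be checked directly. The sketch below evaluates Equation (26) at the integer solution (x1, x2, x3, x4) = (43, 16, 19, 49) commonly reported in the literature, which achieves an error on the order of 10⁻¹²:

```python
def gear_train_objective(x1, x2, x3, x4):
    """Squared error between the achieved gear ratio x2*x3/(x1*x4)
    and the required ratio 1/6.931 (Equation (26))."""
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2
```

Because all four variables are integers in [12, 60], the problem can also be solved exactly by enumeration, which makes it a convenient check on a metaheuristic's reported optimum.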
The MHO algorithm is employed to optimize the design of gear systems, and its results are compared with those of nine other algorithms. The experimental results are shown in Table 6. The optimal value obtained by MHO is lower than that of the other nine algorithms, indicating that MHO achieved a better value and superior performance in this problem.

4.3. Step-Cone Pulley Problem

A step-cone pulley consists of two or more conical pulley steps connected as shown in Figure 13, with five design variables: the diameter d i of the pulley at step i ( i = 1 , , 4 ) and the width ω of the belt and of the pulleys at each step. The goal is to minimize the weight of the step-cone pulley, and the problem contains 11 nonlinear constraints, which ensure, among other things, that the transmitted power is at least 0.75 hp. Equation (27) is the mathematical model of the step-cone pulley problem:
$$
\begin{aligned}
&\text{Consider } \bar{x} = [x_1, x_2, x_3, x_4, x_5] = [d_1, d_2, d_3, d_4, \omega]\\
&\text{Minimize } f(\bar{x}) = \rho\, x_5 \left[ x_1^2 \left(1 + \left(\tfrac{N_1}{N}\right)^2\right) + x_2^2 \left(1 + \left(\tfrac{N_2}{N}\right)^2\right) + x_3^2 \left(1 + \left(\tfrac{N_3}{N}\right)^2\right) + x_4^2 \left(1 + \left(\tfrac{N_4}{N}\right)^2\right) \right]\\
&\text{Subject to } h_1(\bar{x}) = C_1 - C_2 = 0,\quad h_2(\bar{x}) = C_1 - C_3 = 0,\quad h_3(\bar{x}) = C_1 - C_4 = 0,\\
&g_i(\bar{x}) = -R_i + 2 \le 0,\quad i = 1, 2, 3, 4,\\
&g_{i+4}(\bar{x}) = (0.75 \times 745.6998) - P_i \le 0,\quad i = 1, 2, 3, 4,\\
&\text{where } C_i = \frac{\pi x_i}{2}\left(1 + \frac{N_i}{N}\right) + \frac{\left(\frac{N_i}{N} - 1\right)^2 x_i^2}{4a} + 2a,\quad i = 1, 2, 3, 4,\\
&R_i = \exp\left(\mu \left[\pi - 2 \sin^{-1}\left(\left(\frac{N_i}{N} - 1\right)\frac{x_i}{2a}\right)\right]\right),\quad i = 1, 2, 3, 4,\\
&P_i = s\, t\, x_5 \left(1 - \frac{1}{R_i}\right)\frac{\pi x_i N_i}{60},\quad i = 1, 2, 3, 4,\\
&t = 8\ \text{mm},\quad s = 1.75\ \text{MPa},\quad \mu = 0.35,\quad \rho = 7200\ \text{kg/m}^3,\quad a = 3\ \text{mm}.
\end{aligned}
$$
The step-cone pulley problem is solved using MHO and compared with the nine alternative algorithms, again using 50 independent runs with a maximum of 50 iterations each. The experimental results, displayed in Table 7, show that MHO performs best.

5. Conclusions and Outlook

In this paper, we propose a modified hippopotamus optimization algorithm that aims to further improve the algorithm’s performance and address the issue of the algorithm’s easy descent into local optima.
The introduction of the sine chaotic map to initialize the population improves the diversity and randomness of the hippopotamus population, which enables the hippopotamus optimization algorithm to achieve a better balance between global and local searching, thus improving the initial solution quality as well as the convergence speed.
Premature convergence can be avoided by optimizing the hippo’s position update technique with a new convergence factor. In addition, a small-hole imaging reverse learning strategy is incorporated to improve the performance of the algorithm by mapping the current optimal solution of the algorithm dimension by dimension, avoiding interference between the dimensions, and at the same time expanding the search range of the algorithm.
The proposed algorithm was also evaluated on 23 test functions, comparing the performance of MHO with HO and its variants as well as other metaheuristics and calculating the mean and standard deviation of the optimization results. The experimental results show that MHO is optimal in terms of mean and standard deviation on thirteen test functions and fails to reach the best mean and standard deviation on only five. After analyzing the experimental results through the sensitivity analysis and the Friedman test, together with the stability and convergence analyses, it is concluded that MHO achieves a higher ranking and better stability and can escape local optima faster. To further verify the ability of MHO to solve global optimization problems, it was applied to three engineering design problems and compared with the other algorithms, and the results show that MHO obtains very impressive outcomes. These experiments demonstrate that, compared with other existing algorithms, MHO possesses a stronger global search capability and explores the solution space more efficiently, searching for potential optimal solutions more comprehensively. In addition, MHO improves its adaptability to complex optimization problems by dynamically adjusting the search direction and step size, thus achieving faster convergence. In terms of local search, MHO locates the optimal solution more accurately, especially for high-dimensional complex optimization problems, and its mechanisms enable it to avoid falling into the local-optimum trap. MHO also demonstrates high robustness and outperforms the other nine compared algorithms in both parameter optimization and real engineering problems.
Nevertheless, MHO still tends to converge to locally optimal solutions on certain functions when dealing with global optimization problems, and its solution performance on the complicated speed reducer design problem is not very stable. Therefore, we will continue to improve the exploration and exploitation capability of MHO in future research. Meanwhile, we will apply MHO to a wider range of problems, such as multi-objective optimization and currently popular neural networks.

Author Contributions

T.H.: writing—review and editing, software, formal analysis, and conceptualization. H.W.: writing—review and editing, writing—original draft, software, and methodology. T.L.: visualization, supervision, resources, and data curation. Q.L.: writing—review and editing, visualization, funding acquisition, methodology, and conceptualization. Y.H.: supervision, resources, validation, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Anhui Provincial Colleges and Universities Collaborative Innovation Project (GXXT-2023-068), and the Anhui University of Science and Technology Graduate Innovation Fund Project (2023CX2086).

Data Availability Statement

The data generated from the analysis in this study can be found in this article. This study does not report the original code, which is available for academic purposes from the lead contact. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

Acknowledgments

We would like to thank the School of Electrical and Information Engineering at Anhui University of Science and Technology for providing the laboratory.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer. J. Ambient Intell. Humaniz. Comput. 2021, 12, 8457–8482.
  2. Gharaei, A.; Shekarabi, S.; Karimi, M. Modelling and optimal lot-sizing of the replenishments in constrained, multi-product and bi-objective EPQ models with defective products: Generalised cross decomposition. Int. J. Syst. Sci. 2020, 7, 262–274.
  3. Sun, Y.; Chen, Y. Multi-population improved whale optimization algorithm for high dimensional optimization. Appl. Soft Comput. 2024, 112, 107854.
  4. Shen, Y.; Zhang, C.; Gharehchopogh, F.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269.
  5. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379.
  6. Baluja, S.; Caruana, R. Removing the Genetics from the Standard Genetic Algorithm. In Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995.
  7. Coelho, L.; Mariani, V. Improved differential evolution algorithms for handling economic dispatch optimization with generator constraints. Energy Convers. Manag. 2006, 48, 1631–1639.
  8. Ma, H.; Ye, S.; Simon, D.; Fei, M. Conceptual and numerical comparisons of swarm intelligence optimization algorithms. Soft Comput. 2017, 21, 3081–3100.
  9. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995.
  11. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173.
  12. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Erciyes University: Kayseri, Türkiye, 2005. Available online: https://abc.erciyes.edu.tr/pub/tr06_2005.pdf (accessed on 15 July 2024).
  13. Yang, X.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483.
  14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  15. Yu, V.F.; Jewpanya, P.; Redi, A.; Tsao, Y.C. Adaptive neighborhood simulated annealing for the heterogeneous fleet vehicle routing problem with multiple crossdocks. Comput. Oper. Res. 2021, 129, 105205.
  16. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
  17. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166.
  18. Wang, S.H.; Li, Y.Z.; Yang, H.Y. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Appl. Soft Comput. 2019, 81, 105496.
  19. Shanmugapriya, P.; Kumar, T.S.; Kirubadevi, S.; Prasad, P.V. IoT based energy management strategy for hybrid electric storage system in EV using SAGAN-COA approach. J. Energy Storage 2024, 104, 114315.
  20. Hu, S.J.; Kong, G.Q.; Zhang, C.S.; Fu, J.H.; Li, S.Y.; Yang, Q. Data-driven models for the steady thermal performance prediction of energy piles optimized by metaheuristic algorithms. Energy 2024, 313, 134000.
  21. Sun, B.; Peng, P.; Tan, G.; Pan, M.; Li, L.; Tian, Y. A fuzzy logic constrained particle swarm optimization algorithm for industrial design problems. Appl. Soft Comput. 2024, 167, 112456.
  22. Wu, S.; Dong, A.; Li, Q.; Wei, W.; Zhang, Y.; Ye, Z. Application of ant colony optimization algorithm based on farthest point optimization and multi-objective strategy in robot path planning. Appl. Soft Comput. 2024, 167, 112433.
  23. Palanisamy, S.K.; Krishnaswamy, M. Optimization and forecasting of reinforced wire ropes for tower crane by using hybrid HHO-PSO and ANN-HHO algorithms. Int. J. Fatigue 2024, 190, 108663.
  24. Liu, J.; Zhao, J.; Li, Y.; Zhou, H. HSMAOA: An enhanced arithmetic optimization algorithm with an adaptive hierarchical structure for its solution analysis and application in optimization problems. Thin-Walled Struct. 2025, 206, 112631.
  25. Cui, X.; Zhu, J.; Jia, L.; Wang, J.; Wu, Y. A novel heat load prediction model of district heating system based on hybrid whale optimization algorithm (WOA) and CNN-LSTM with attention mechanism. Energy 2024, 312, 133536.
  26. Che, Z.; Peng, C.; Yue, C. Optimizing LSTM with multi-strategy improved WOA for robust prediction of high-speed machine tests data. Chaos Solitons Fractals 2024, 178, 114394.
  27. Elsisi, M. Optimal design of adaptive model predictive control based on improved GWO for autonomous vehicle considering system vision uncertainty. Appl. Soft Comput. 2024, 158, 111581.
  28. Karaman, A.; Pacal, I.; Basturk, A.; Akay, B.; Nalbantoglu, U.; Coskun, S.; Sahin, O.; Karaboga, D. Robust real-time polyp detection system design based on YOLO algorithms by optimizing activation functions and hyper-parameters with artificial bee colony (ABC). Expert Syst. Appl. 2023, 221, 119741.
  29. Yu, X.; Zhang, W. Address wind farm layout problems by an adaptive Moth-flame Optimization Algorithm. Appl. Soft Comput. 2024, 167, 112462.
  30. Dong, K.; Yang, D.W.; Sheng, J.B.; Zhang, W.D.; Jing, P.R. Dynamic planning method of evacuation route in dam-break flood scenario based on the ACO-GA hybrid algorithm. Int. J. Disaster Risk Reduct. 2024, 100, 104219.
  31. Liu, X.; Wang, J.S.; Zhang, S.B.; Guan, X.Y.; Gao, Y.Z. Optimization scheduling of off-grid hybrid renewable energy systems based on dung beetle optimizer with convergence factor and mathematical spiral. Renew. Energy 2024, 237, 121874.
  32. Beşkirli, A.; Dağ, İ. I-CPA: An improved carnivorous plant algorithm for solar photovoltaic parameter identification problem. Biomimetics 2023, 8, 569.
  33. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220.
  34. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032.
  35. Maurya, P.; Tiwari, P.; Pratap, A. Application of the hippopotamus optimization algorithm for distribution network reconfiguration with distributed generation considering different load models for enhancement of power system performance. Electr. Eng. 2024.
  36. Chen, Y.; Wu, F.; Shi, L.; Li, Y.; Qi, P.; Guo, X. Identification of Sub-Synchronous Oscillation Mode Based on HO-VMD and SVD-Regularized TLS-Prony Methods. Energies 2024, 17, 5067. [Google Scholar] [CrossRef]
  37. Ribeiro, A.N.; Muñoz, D.M. Neural network controller for hybrid energy management system applied to electric vehicles. J. Energy Storage 2024, 104, 114502. [Google Scholar] [CrossRef]
  38. Wang, H.; Binti Mansor, N.N.; Mokhlis, H.B. Novel Hybrid Optimization Technique for Solar Photovoltaic Output Prediction Using Improved Hippopotamus Algorithm. Appl. Sci. 2024, 14, 7803. [Google Scholar] [CrossRef]
  39. Mashru, N.; Tejani, G.G.; Patel, P.; Khishe, M. Optimal truss design with MOHO: A multi-objective optimization perspective. PLoS ONE. 2024, 19, e0308474. Available online: https://api.semanticscholar.org/CorpusID:271905232 (accessed on 15 July 2024). [CrossRef]
  40. Abdelaziz, M.A.; Ali, A.A.; Swief, R.A.; Elazab, R. Optimizing energy-efficient grid performance: Integrating electric vehicles, DSTATCOM, and renewable sources using the Hippopotamus Optimization Algorithm. Sci. Rep. 2024, 14, 28974. [Google Scholar] [CrossRef]
  41. Baihan, A.; Alutaibi, A.I.; Alshehri, M.; Sharma, S.K. Sign language recognition using modified deep learning network and hybrid optimization: A hybrid optimizer (HO) based optimized CNNSa-LSTM approach. Sci. Rep. 2024, 14, 26111. [Google Scholar] [CrossRef]
  42. Amiri, M.H.; Hashjin, N.M.; Najafabadi, M.K.; Beheshti, A.; Khodadadi, N. An innovative data-driven AI approach for detecting and isolating faults in gas turbines at power plants. Expert Syst. Appl. 2025, 263, 125497. [Google Scholar] [CrossRef]
  43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1997, 1, 67–82. Available online: https://dl.acm.org/doi/10.1109/4235.585893 (accessed on 15 July 2024). [CrossRef]
  44. Wang, M.M.; Song, X.G.; Liu, S.H.; Zhao, X.Q.; Zhou, N.R. A novel 2D Log-Logistic–Sine chaotic map for image encryption. Nonlinear Dyn. 2025, 113, 2867–2896. [Google Scholar] [CrossRef]
  45. Sedigheh Mahdavi, Shahryar Rahnamayan, Kalyanmoy Deb, Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23. [CrossRef]
  46. Jiao, J.; Li, J. Enhanced fireworks algorithm based on particle swarm optimization and reverse learning of small-hole imaging experiment. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022. [Google Scholar]
  47. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy perturbation. Appl. Soft Comput. 2024, 152, 111211. [Google Scholar] [CrossRef]
  48. Phalke, S.; Vaidya, Y.; Metkar, S. Big-O Time Complexity Analysis Of Algorithm. In Proceedings of the International Conference on Signal and Information Processing (IConSIP), Pune, India, 26–27 August 2022. [Google Scholar]
  49. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  50. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  51. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
  52. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  53. Röhmel, J. The permutation distribution of the Friedman test. Comput. Stat. Data. Anal. 1997, 26, 83–99. [Google Scholar] [CrossRef]
  54. Golinski, J. Optimal synthesis problems solved by meansof nonlinear programming and random methods. J. Mech. 1970, 5, 287–309. [Google Scholar] [CrossRef]
  55. Huang, Y.; Liu, Q.; Song, H.; Han, T.; Li, T. CMGWO: Grey wolf optimizer for fusion cell-like P systems. Heliyon 2024, 10, e34496. [Google Scholar] [CrossRef]
  56. Han, T.; Li, T.; Liu, Q.; Huang, Y.; Song, H. A Multi-Strategy Improved Honey Badger Algorithm for Engineering Design Problems. Algorithms 2024, 17, 573. [Google Scholar] [CrossRef]
Figure 1. Comparison of the distribution of algorithmic initialization: (a) histogram of frequency distribution of conventional random initialization; (b) scatter plot of the distribution of conventional random initialization in two-dimensional space; (c) histogram of frequency distribution of sinusoidal chaotic map initialization; and (d) scatter plot of the distribution of sinusoidal chaotic map initialization in two-dimensional space.
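The chaotic initialization compared in Figure 1 can be sketched as follows. This is a minimal illustration, assuming the standard sine chaotic map x_{k+1} = (a/4)·sin(πx_k) with a = 4; the exact map parameters used by MHO are not given in this excerpt, and the function name is ours.

```python
import numpy as np

def sine_chaotic_init(pop_size, dim, lb, ub, a=4.0, seed=0):
    """Initialize a population via a sine chaotic map (illustrative sketch).

    Chaotic sequence: x_{k+1} = (a / 4) * sin(pi * x_k), then scaled to [lb, ub].
    The map parameters used by the paper's MHO are an assumption here.
    """
    rng = np.random.default_rng(seed)
    # Seed each coordinate with a value in (0, 1), avoiding the fixed points 0 and 1.
    x = rng.uniform(0.05, 0.95, size=(pop_size, dim))
    for _ in range(10):  # a few warm-up iterations spread the sequence chaotically
        x = (a / 4.0) * np.sin(np.pi * x)
    return lb + (ub - lb) * x  # map chaotic values into the search domain

pop = sine_chaotic_init(30, 10, lb=-100.0, ub=100.0)
```

Compared with uniform random sampling, the chaotic sequence tends to cover the domain more evenly, which is what panels (c) and (d) of Figure 1 illustrate.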
Figure 2. Plots of convergence factor T before and after improvement.
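The contrast in Figure 2 can be reproduced schematically. The paper's exact improved convergence factor is not given in this excerpt, so the nonlinear schedule below is only an illustrative assumption: a cosine decay that stays high early (favoring exploration) and drops late (favoring exploitation), shown against a plain linear decay.

```python
import numpy as np

T_max = 500                     # assumed maximum iteration count
t = np.arange(T_max + 1)

# Baseline: linear decay of the convergence factor from 1 to 0.
T_linear = 1.0 - t / T_max

# Illustrative nonlinear alternative (NOT the paper's exact formula):
# cosine schedule, slow decline early and fast decline late.
T_cosine = 0.5 * (1.0 + np.cos(np.pi * t / T_max))
```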
Figure 3. Schematic diagram of small-hole imaging reverse learning.
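The reverse-learning step sketched in Figure 3 is commonly written as x′ = (lb + ub)/2 + (lb + ub)/(2k) − x/k, where k is the imaging scale factor; with k = 1 it reduces to classic opposition-based learning x′ = lb + ub − x. A minimal sketch follows; the value of k used by MHO is an assumption, and the function name is ours.

```python
import numpy as np

def small_hole_imaging_opposite(x, lb, ub, k=1.0):
    """Small-hole (pinhole) imaging reverse learning (sketch).

    Reverse point: x' = (lb + ub) / 2 + (lb + ub) / (2k) - x / k.
    k = 1 recovers classic opposition-based learning: x' = lb + ub - x.
    """
    mid = (lb + ub) / 2.0
    x_rev = mid + (lb + ub) / (2.0 * k) - x / k
    return np.clip(x_rev, lb, ub)  # keep the reverse point inside the domain
```

In practice the reverse candidate is evaluated alongside the original, and the better of the two survives, which helps the population escape local optima.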
Figure 4. Flowchart of MHO algorithm.
Figure 5. Convergence plots of single-peak function.
Figure 6. Convergence plots of multi-peak function.
Figure 7. Convergence plots of fixed-dimensional multimodal function.
Figure 8. Boxplots of single-peak function.
Figure 9. Boxplots of multi-peak function.
Figure 10. Boxplots of multimodal functions with fixed dimensions.
Figure 11. Speed reducer design problem diagram.
Figure 12. Gear system design problem diagram.
Figure 13. Step-cone pulley problem diagram.
Table 1. Benchmark functions.
Function | Dimension | Domain | Theoretical Optimum
$f_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i}x_j\bigr)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1}\bigl[100\bigl(x_{i+1}-x_i^2\bigr)^2+\bigl(x_i-1\bigr)^2\bigr]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n}\bigl(\lfloor x_i+0.5\rfloor\bigr)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\bigl(\sqrt{\lvert x_i\rvert}\bigr)$ | 30 | [−500, 500] | −12,569.4
$f_9(x)=\sum_{i=1}^{n}\bigl[x_i^2-10\cos(2\pi x_i)+10\bigr]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\Bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\Bigr)-\exp\Bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Bigr)+20+e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\tfrac{\pi}{n}\bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\bigl[1+10\sin^2(\pi y_{i+1})\bigr]+(y_n-1)^2\bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m & x_i>a\\ 0 & -a\le x_i\le a\\ k(-x_i-a)^m & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x)=0.1\bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\bigl[1+\sin^2(3\pi x_i+1)\bigr]+(x_n-1)^2\bigl[1+\sin^2(2\pi x_n)\bigr]\bigr\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
$f_{14}(x)=\Bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\Bigr)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\Bigl[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\Bigr]^2$ | 4 | [−5, 5] | 0.0003075
$f_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{x_1^6}{3}+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285
$f_{17}(x)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=\bigl[1+(x_1+x_2+1)^2\bigl(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\bigr)\bigr]\times\bigl[30+(2x_1-3x_2)^2\bigl(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\bigr)\bigr]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$ | 3 | [0, 1] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5}\bigl[(x-a_i)(x-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10
$f_{22}(x)=-\sum_{i=1}^{7}\bigl[(x-a_i)(x-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10
$f_{23}(x)=-\sum_{i=1}^{10}\bigl[(x-a_i)(x-a_i)^T+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10
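The table's formulas can be checked directly in code. The sketch below implements three representative benchmarks from Table 1 (the sphere f1, Rastrigin f9, and Ackley f10), each of which attains its theoretical optimum of 0 at the origin.

```python
import numpy as np

def sphere(x):
    """f1: unimodal sphere, optimum 0 at the origin."""
    return np.sum(x ** 2)

def rastrigin(x):
    """f9: highly multimodal Rastrigin, optimum 0 at the origin."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    """f10: multimodal Ackley, optimum 0 at the origin."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

z = np.zeros(30)  # the 30-dimensional origin used in the experiments
```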
Table 2. Sensitivity analysis of p/t.
Function | Criterion | p/t = 20/750 | p/t = 30/500 | p/t = 60/250
f1 | Mean | 0.0000 | 0.0000 | 0.0000
f1 | Std | 0.0000 | 0.0000 | 0.0000
f1 | Rank | 2 | 2 | 2
f2 | Mean | 0.0000 | 0.0000 | 0.0000
f2 | Std | 0.0000 | 0.0000 | 0.0000
f2 | Rank | 2 | 2 | 2
f3 | Mean | 0.0000 | 0.0000 | 0.0000
f3 | Std | 0.0000 | 0.0000 | 0.0000
f3 | Rank | 2 | 2 | 2
f4 | Mean | 0.0000 | 0.0000 | 0.0000
f4 | Std | 0.0000 | 0.0000 | 0.0000
f4 | Rank | 2 | 2 | 2
f7 | Mean | 3.2851 × 10^−5 | 2.8911 × 10^−5 | 2.7563 × 10^−5
f7 | Std | 3.4492 × 10^−5 | 3.1692 × 10^−5 | 2.6425 × 10^−5
f7 | Rank | 3 | 2 | 1
f9 | Mean | 0.0000 | 0.0000 | 0.0000
f9 | Std | 0.0000 | 0.0000 | 0.0000
f9 | Rank | 2 | 2 | 2
f10 | Mean | 9.5923 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16
f10 | Std | 5.0243 × 10^−16 | 0.0000 | 0.0000
f10 | Rank | 3 | 1.5 | 1.5
f11 | Mean | 0.0000 | 0.0000 | 0.0000
f11 | Std | 0.0000 | 0.0000 | 0.0000
f11 | Rank | 2 | 2 | 2
f12 | Mean | 3.6758 × 10^−4 | 1.5255 × 10^−4 | 2.9019 × 10^−4
f12 | Std | 5.1790 × 10^−4 | 3.5965 × 10^−4 | 4.3157 × 10^−4
f12 | Rank | 3 | 1 | 2
f14 | Mean | 9.9800 × 10^−1 | 9.9800 × 10^−1 | 9.9800 × 10^−1
f14 | Std | 2.7532 × 10^−13 | 1.5468 × 10^−13 | 5.5052 × 10^−13
f14 | Rank | 2 | 2 | 2
f16 | Mean | −1.0316 | −1.0316 | −1.0316
f16 | Std | 3.2394 × 10^−11 | 3.6381 × 10^−10 | 1.5708 × 10^−9
f16 | Rank | 2 | 2 | 2
f17 | Mean | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1
f17 | Std | 4.4950 × 10^−10 | 3.9781 × 10^−10 | 2.7622 × 10^−9
f17 | Rank | 2 | 2 | 2
f19 | Mean | −3.8628 | −3.8628 | −3.8628
f19 | Std | 1.1802 × 10^−8 | 5.0575 × 10^−8 | 6.2302 × 10^−8
f19 | Rank | 2 | 2 | 2
f20 | Mean | −3.2904 | −3.2849 | −3.2794
f20 | Std | 5.4536 × 10^−2 | 5.8998 × 10^−2 | 6.7029 × 10^−2
f20 | Rank | 1 | 2 | 3
f21 | Mean | −1.0153 × 10^1 | −1.0153 × 10^1 | −1.0153 × 10^1
f21 | Std | 8.8671 × 10^−7 | 1.7912 × 10^−6 | 3.8080 × 10^−6
f21 | Rank | 2 | 2 | 2
f22 | Mean | −1.0403 × 10^1 | −1.0403 × 10^1 | −1.0403 × 10^1
f22 | Std | 6.7735 × 10^−7 | 1.5216 × 10^−6 | 6.3626 × 10^−6
f22 | Rank | 2 | 2 | 2
f23 | Mean | −1.053 × 10^1 | −1.056 × 10^1 | −1.056 × 10^1
f23 | Std | 7.0815 × 10^−7 | 1.5856 × 10^−6 | 9.3393 × 10^−6
f23 | Rank | 2 | 2 | 2
Rank-Count | | 36 | 32.5 | 33.5
Ave-Rank | | 2.11 | 1.91 | 1.97
Overall-Rank | | 3 | 1 | 2
Table 3. Experimental results of MHO and its comparison algorithms.
Function | Algorithm | Mean | Std | Function | Algorithm | Mean | Std
f1 | MHO | 0.0000 | 0.0000 | f13 | MHO | 5.2459 × 10^−3 | 1.2318 × 10^−2
f1 | HO | 0.0000 | 0.0000 | f13 | HO | 4.0138 × 10^−3 | 9.4581 × 10^−3
f1 | HO1 | 0.0000 | 0.0000 | f13 | HO1 | 1.7004 × 10^−3 | 4.5484 × 10^−3
f1 | HO2 | 0.0000 | 0.0000 | f13 | HO2 | 4.3263 × 10^−3 | 9.7711 × 10^−3
f1 | HO3 | 0.0000 | 0.0000 | f13 | HO3 | 1.0953 × 10^−2 | 1.4091 × 10^−2
f1 | HHO | 3.3722 × 10^−93 | 2.3825 × 10^−92 | f13 | HHO | 9.9801 × 10^−5 | 1.3388 × 10^−4
f1 | HBA | 6.9026 × 10^−72 | 3.3528 × 10^−71 | f13 | HBA | 4.6997 × 10^−1 | 3.3318 × 10^−1
f1 | DBO | 5.1200 × 10^−113 | 2.6559 × 10^−112 | f13 | DBO | 5.9622 × 10^−1 | 4.7776 × 10^−1
f1 | PSO | 2.5274 | 1.0479 | f13 | PSO | 6.1253 × 10^−1 | 2.3179 × 10^−1
f1 | WOA | 1.0234 × 10^−72 | 6.5245 × 10^−72 | f13 | WOA | 5.7769 × 10^−1 | 3.1337 × 10^−1
f2 | MHO | 0.0000 | 0.0000 | f14 | MHO | 9.9800 × 10^−1 | 1.5468 × 10^−13
f2 | HO | 4.0358 × 10^−182 | 0.0000 | f14 | HO | 9.9800 × 10^−1 | 3.0556 × 10^−13
f2 | HO1 | 2.8707 × 10^−182 | 0.0000 | f14 | HO1 | 9.9800 × 10^−1 | 2.4026 × 10^−13
f2 | HO2 | 0.0000 | 0.0000 | f14 | HO2 | 9.9800 × 10^−1 | 1.2184 × 10^−12
f2 | HO3 | 1.1752 × 10^−180 | 0.0000 | f14 | HO3 | 9.9800 × 10^−1 | 8.5786 × 10^−12
f2 | HHO | 9.2603 × 10^−51 | 4.8939 × 10^−50 | f14 | HHO | 1.6880 | 1.7750
f2 | HBA | 2.9432 × 10^−72 | 1.2928 × 10^−71 | f14 | HBA | 1.3517 | 1.5159
f2 | DBO | 1.5566 × 10^−54 | 1.1007 × 10^−53 | f14 | DBO | 1.4741 | 8.7881 × 10^−1
f2 | PSO | 4.5005 | 1.4881 | f14 | PSO | 3.3460 | 2.3845
f2 | WOA | 8.9092 × 10^−50 | 5.0084 × 10^−49 | f14 | WOA | 2.1392 | 2.3978
f3 | MHO | 0.0000 | 0.0000 | f15 | MHO | 3.0751 × 10^−4 | 4.0690 × 10^−8
f3 | HO | 0.0000 | 0.0000 | f15 | HO | 3.0752 × 10^−4 | 1.0325 × 10^−7
f3 | HO1 | 0.0000 | 0.0000 | f15 | HO1 | 3.0755 × 10^−4 | 3.2657 × 10^−7
f3 | HO2 | 0.0000 | 0.0000 | f15 | HO2 | 3.0752 × 10^−4 | 4.7547 × 10^−8
f3 | HO3 | 0.0000 | 0.0000 | f15 | HO3 | 3.0753 × 10^−4 | 1.1442 × 10^−7
f3 | HHO | 3.1810 × 10^−72 | 2.2493 × 10^−71 | f15 | HHO | 3.7963 × 10^−4 | 1.8487 × 10^−4
f3 | HBA | 3.2286 × 10^−93 | 2.2092 × 10^−92 | f15 | HBA | 4.9321 × 10^−3 | 8.4966 × 10^−3
f3 | DBO | 6.6703 × 10^−40 | 4.7166 × 10^−39 | f15 | DBO | 7.2540 × 10^−4 | 3.1412 × 10^−4
f3 | PSO | 1.7653 × 10^2 | 4.5978 × 10^1 | f15 | PSO | 8.9744 × 10^−4 | 2.2012 × 10^−4
f3 | WOA | 4.1941 × 10^4 | 1.5808 × 10^4 | f15 | WOA | 6.9236 × 10^−4 | 4.7036 × 10^−4
f4 | MHO | 0.0000 | 0.0000 | f16 | MHO | −1.0316 | 3.6381 × 10^−10
f4 | HO | 2.8900 × 10^−184 | 0.0000 | f16 | HO | −1.0316 | 2.6830 × 10^−10
f4 | HO1 | 6.8888 × 10^−181 | 0.0000 | f16 | HO1 | −1.0316 | 1.5659 × 10^−10
f4 | HO2 | 0.0000 | 0.0000 | f16 | HO2 | −1.0316 | 6.7215 × 10^−11
f4 | HO3 | 5.9113 × 10^−179 | 0.0000 | f16 | HO3 | −1.0316 | 6.3616 × 10^−11
f4 | HHO | 8.0996 × 10^−49 | 5.6060 × 10^−48 | f16 | HHO | −1.0316 | 4.6598 × 10^−9
f4 | HBA | 1.9110 × 10^−56 | 7.6981 × 10^−56 | f16 | HBA | −1.0316 | 3.2812 × 10^−16
f4 | DBO | 1.2250 × 10^−49 | 8.6389 × 10^−49 | f16 | DBO | −1.0316 | 3.3269 × 10^−16
f4 | PSO | 1.9645 | 2.5026 × 10^−1 | f16 | PSO | −1.0316 | 4.2439 × 10^−16
f4 | WOA | 5.2418 × 10^1 | 2.5506 × 10^1 | f16 | WOA | −1.0316 | 7.1329 × 10^−10
f5 | MHO | 1.1509 × 10^−1 | 1.7645 × 10^−1 | f17 | MHO | 3.9789 × 10^−1 | 3.9781 × 10^−10
f5 | HO | 5.4281 × 10^−2 | 8.3571 × 10^−2 | f17 | HO | 3.9789 × 10^−1 | 2.3634 × 10^−9
f5 | HO1 | 3.4397 × 10^−2 | 6.5186 × 10^−2 | f17 | HO1 | 3.9789 × 10^−1 | 2.4502 × 10^−9
f5 | HO2 | 6.2677 × 10^−2 | 1.0042 × 10^−1 | f17 | HO2 | 3.9789 × 10^−1 | 1.6152 × 10^−9
f5 | HO3 | 9.1590 × 10^−2 | 1.1923 × 10^−1 | f17 | HO3 | 3.9789 × 10^−1 | 1.8410 × 10^−9
f5 | HHO | 1.2253 × 10^−2 | 1.8448 × 10^−2 | f17 | HHO | 3.9790 × 10^−1 | 2.4452 × 10^−5
f5 | HBA | 2.4033 × 10^1 | 8.3381 × 10^−1 | f17 | HBA | 3.9789 × 10^−1 | 3.3645 × 10^−16
f5 | DBO | 2.5734 × 10^1 | 2.7141 × 10^−1 | f17 | DBO | 3.9789 × 10^−1 | 3.3645 × 10^−16
f5 | PSO | 9.2719 × 10^2 | 4.7898 × 10^2 | f17 | PSO | 3.9789 × 10^−1 | 3.3645 × 10^−16
f5 | WOA | 2.7988 × 10^1 | 4.9259 × 10^−1 | f17 | WOA | 3.9790 × 10^−1 | 1.2215 × 10^−5
f6 | MHO | 9.4915 × 10^−3 | 9.7607 × 10^−3 | f18 | MHO | 3.0000 | 8.0461 × 10^−9
f6 | HO | 8.1739 × 10^−3 | 1.1091 × 10^−2 | f18 | HO | 3.0000 | 6.2997 × 10^−9
f6 | HO1 | 1.1798 × 10^−2 | 1.3268 × 10^−2 | f18 | HO1 | 3.0000 | 1.4408 × 10^−9
f6 | HO2 | 8.6772 × 10^−3 | 9.5263 × 10^−3 | f18 | HO2 | 3.0000 | 7.5526 × 10^−8
f6 | HO3 | 2.2698 × 10^−2 | 1.0268 × 10^−2 | f18 | HO3 | 3.0000 | 2.8452 × 10^−9
f6 | HHO | 1.5709 × 10^−4 | 2.4626 × 10^−4 | f18 | HHO | 3.0000 | 6.7092 × 10^−7
f6 | HBA | 4.7001 × 10^−2 | 9.6949 × 10^−2 | f18 | HBA | 4.0800 | 5.3446 × 10^0
f6 | DBO | 6.2521 × 10^−3 | 2.9742 × 10^−2 | f18 | DBO | 3.0000 | 2.4628 × 10^−15
f6 | PSO | 2.3410 × 10^0 | 1.1406 × 10^0 | f18 | PSO | 3.0000 | 5.3245 × 10^−15
f6 | WOA | 4.1539 × 10^−1 | 2.5095 × 10^−1 | f18 | WOA | 3.0001 | 3.0702 × 10^−4
f7 | MHO | 2.8911 × 10^−5 | 3.1692 × 10^−5 | f19 | MHO | −3.8628 | 5.0575 × 10^−8
f7 | HO | 3.2820 × 10^−5 | 3.3002 × 10^−5 | f19 | HO | −3.8628 | 4.5360 × 10^−8
f7 | HO1 | 3.5287 × 10^−5 | 3.9834 × 10^−5 | f19 | HO1 | −3.8628 | 1.2794 × 10^−8
f7 | HO2 | 2.5891 × 10^−5 | 2.8914 × 10^−5 | f19 | HO2 | −3.8628 | 3.8218 × 10^−8
f7 | HO3 | 3.6700 × 10^−5 | 3.4275 × 10^−5 | f19 | HO3 | −3.8628 | 2.8635 × 10^−8
f7 | HHO | 6.9272 × 10^−5 | 6.3855 × 10^−5 | f19 | HHO | −3.8607 | 2.8916 × 10^−3
f7 | HBA | 8.5039 × 10^−5 | 9.4110 × 10^−5 | f19 | HBA | −3.8615 | 2.9188 × 10^−3
f7 | DBO | 8.2668 × 10^−5 | 8.4546 × 10^−5 | f19 | DBO | −3.8615 | 2.9188 × 10^−3
f7 | PSO | 1.2883 × 10^−4 | 1.7180 × 10^−4 | f19 | PSO | −3.8628 | 2.4066 × 10^−15
f7 | WOA | 6.6807 × 10^−5 | 7.6012 × 10^−5 | f19 | WOA | −3.8564 | 1.1435 × 10^−2
f8 | MHO | −8.9787 × 10^3 | 2.1339 × 10^3 | f20 | MHO | −3.2849 | 5.8998 × 10^−2
f8 | HO | −7.9594 × 10^3 | 1.5232 × 10^3 | f20 | HO | −3.2687 | 6.6724 × 10^−2
f8 | HO1 | −1.9759 × 10^4 | 2.9040 × 10^3 | f20 | HO1 | −3.2230 | 6.0775 × 10^−2
f8 | HO2 | −8.9256 × 10^3 | 2.0041 × 10^3 | f20 | HO2 | −3.2752 | 6.3694 × 10^−2
f8 | HO3 | −7.9414 × 10^3 | 1.5311 × 10^3 | f20 | HO3 | −3.2722 | 6.5502 × 10^−2
f8 | HHO | −1.2553 × 10^4 | 8.7389 × 10^1 | f20 | HHO | −3.1380 | 9.6909 × 10^−2
f8 | HBA | −8.6957 × 10^3 | 8.9116 × 10^2 | f20 | HBA | −3.2458 | 1.4477 × 10^−1
f8 | DBO | −8.8398 × 10^3 | 1.7724 × 10^3 | f20 | DBO | −3.2273 | 1.2686 × 10^−1
f8 | PSO | −6.0159 × 10^3 | 1.3308 × 10^3 | f20 | PSO | −3.2673 | 5.9858 × 10^−2
f8 | WOA | −1.0110 × 10^4 | 1.7827 × 10^3 | f20 | WOA | −3.2425 | 9.8594 × 10^−2
f9 | MHO | 0.0000 | 0.0000 | f21 | MHO | −1.0153 × 10^1 | 1.7912 × 10^−6
f9 | HO | 0.0000 | 0.0000 | f21 | HO | −1.0153 × 10^1 | 3.3627 × 10^−6
f9 | HO1 | 0.0000 | 0.0000 | f21 | HO1 | −1.0153 × 10^1 | 2.4596 × 10^−6
f9 | HO2 | 0.0000 | 0.0000 | f21 | HO2 | −1.0153 × 10^1 | 2.7432 × 10^−6
f9 | HO3 | 0.0000 | 0.0000 | f21 | HO3 | −1.0153 × 10^1 | 1.1416 × 10^−5
f9 | HHO | 0.0000 | 0.0000 | f21 | HHO | −5.4448 | 1.3442
f9 | HBA | 0.0000 | 0.0000 | f21 | HBA | −9.6319 | 2.0943
f9 | DBO | 3.1725 × 10^−1 | 1.9833 | f21 | DBO | −8.0235 | 2.6724
f9 | PSO | 1.6470 × 10^2 | 3.3948 × 10^1 | f21 | PSO | −6.5915 | 3.2019
f9 | WOA | 2.2737 × 10^−15 | 1.1252 × 10^−14 | f21 | WOA | −8.3749 | 2.6669
f10 | MHO | 8.8818 × 10^−16 | 0.0000 | f22 | MHO | −1.0403 × 10^1 | 1.5216 × 10^−6
f10 | HO | 8.8818 × 10^−16 | 0.0000 | f22 | HO | −1.0403 × 10^1 | 1.6082 × 10^−6
f10 | HO1 | 8.8818 × 10^−16 | 0.0000 | f22 | HO1 | −1.0403 × 10^1 | 2.2791 × 10^−6
f10 | HO2 | 8.8818 × 10^−16 | 0.0000 | f22 | HO2 | −1.0403 × 10^1 | 1.7314 × 10^−6
f10 | HO3 | 8.8818 × 10^−16 | 0.0000 | f22 | HO3 | −1.0403 × 10^1 | 2.3335 × 10^−5
f10 | HHO | 8.8818 × 10^−16 | 0.0000 | f22 | HHO | −5.1867 | 7.2997 × 10^−1
f10 | HBA | 3.9818 × 10^−1 | 2.8156 | f22 | HBA | −9.5440 | 2.3556
f10 | DBO | 8.8818 × 10^−16 | 0.0000 | f22 | DBO | −8.0311 | 2.7098
f10 | PSO | 2.6298 | 4.0400 × 10^−1 | f22 | PSO | −8.9425 | 2.5016
f10 | WOA | 4.9383 × 10^−15 | 2.3816 × 10^−15 | f22 | WOA | −8.3157 | 2.8065
f11 | MHO | 0.0000 | 0.0000 | f23 | MHO | −1.0536 × 10^1 | 1.5856 × 10^−6
f11 | HO | 0.0000 | 0.0000 | f23 | HO | −1.0536 × 10^1 | 1.9358 × 10^−6
f11 | HO1 | 0.0000 | 0.0000 | f23 | HO1 | −1.0536 × 10^1 | 1.6338 × 10^−6
f11 | HO2 | 0.0000 | 0.0000 | f23 | HO2 | −1.0536 × 10^1 | 2.1182 × 10^−6
f11 | HO3 | 0.0000 | 0.0000 | f23 | HO3 | −1.0536 × 10^1 | 2.4098 × 10^−6
f11 | HHO | 0.0000 | 0.0000 | f23 | HHO | −5.2693 | 1.1627
f11 | HBA | 0.0000 | 0.0000 | f23 | HBA | −9.2044 | 2.8850
f11 | DBO | 0.0000 | 0.0000 | f23 | DBO | −8.8744 | 2.7522
f11 | PSO | 1.2258 × 10^−1 | 4.9294 × 10^−2 | f23 | PSO | −9.6646 | 2.2171
f11 | WOA | 5.7736 × 10^−3 | 2.8737 × 10^−2 | f23 | WOA | −7.5317 | 3.3854
f12 | MHO | 1.5255 × 10^−4 | 3.5965 × 10^−4
f12 | HO | 3.1657 × 10^−4 | 6.1589 × 10^−4
f12 | HO1 | 3.9595 × 10^−4 | 6.2673 × 10^−4
f12 | HO2 | 2.5974 × 10^−4 | 4.5742 × 10^−4
f12 | HO3 | 7.4742 × 10^−4 | 6.2937 × 10^−4
f12 | HHO | 6.7478 × 10^−6 | 1.0577 × 10^−5
f12 | HBA | 2.3525 × 10^−3 | 1.4694 × 10^−2
f12 | DBO | 4.4967 × 10^−5 | 1.3964 × 10^−4
f12 | PSO | 4.0556 × 10^−2 | 3.2624 × 10^−2
f12 | WOA | 2.6887 × 10^−2 | 2.8830 × 10^−2
Table 4. Performance ratings of MHO and its comparative algorithms.
Function | MHO | HO | HO1 | HO2 | HO3 | HHO | HBA | DBO | PSO | WOA
f1 | 3 | 3 | 3 | 3 | 3 | 7 | 9 | 6 | 10 | 8
f2 | 1.5 | 4 | 3 | 1.5 | 5 | 8 | 6 | 7 | 10 | 9
f3 | 3 | 3 | 3 | 3 | 3 | 7 | 6 | 8 | 9 | 10
f4 | 1.5 | 3 | 4 | 1.5 | 5 | 8 | 6 | 7 | 9 | 10
f6 | 3 | 4 | 5 | 2 | 6 | 1 | 8 | 7 | 10 | 9
f9 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 9 | 10 | 8
f10 | 4 | 4 | 4 | 4 | 4 | 4 | 10 | 4 | 9 | 8
f11 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 10 | 9
f12 | 3 | 5 | 6 | 4 | 7 | 1 | 8 | 2 | 10 | 9
f14 | 3 | 3 | 3 | 3 | 3 | 8 | 7 | 6 | 10 | 9
f15 | 1 | 3 | 5 | 2 | 4 | 6 | 10 | 7 | 8 | 9
f16 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5
f19 | 3.5 | 3.5 | 3.5 | 3.5 | 3.5 | 9 | 7.5 | 7.5 | 3.5 | 10
f20 | 1 | 5 | 6 | 2 | 4 | 10 | 8 | 9 | 3 | 7
f21 | 3 | 3 | 3 | 3 | 3 | 9 | 6 | 8 | 10 | 7
f22 | 3 | 3 | 3 | 3 | 3 | 10 | 6 | 9 | 7 | 8
f23 | 3 | 3 | 3 | 3 | 3 | 10 | 7 | 8 | 6 | 9
Rank-Count | 50.5 | 63.5 | 68.5 | 52.5 | 70.5 | 112 | 118.5 | 114.5 | 140 | 144.5
Ave-Rank | 2.1957 | 2.7609 | 2.9783 | 2.2826 | 3.0652 | 4.8696 | 5.1522 | 4.9783 | 6.0870 | 6.2826
Overall-Rank | 1 | 3 | 4 | 2 | 5 | 6 | 8 | 7 | 9 | 10
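The last row of Table 4 follows mechanically from the Friedman-style average ranks: each algorithm's overall rank is its position when the average ranks are sorted ascending. A short sketch reproducing that row:

```python
import numpy as np

algorithms = ["MHO", "HO", "HO1", "HO2", "HO3", "HHO", "HBA", "DBO", "PSO", "WOA"]
# Ave-Rank row of Table 4
ave_rank = np.array([2.1957, 2.7609, 2.9783, 2.2826, 3.0652,
                     4.8696, 5.1522, 4.9783, 6.0870, 6.2826])

# Overall rank = position of each algorithm when average ranks are sorted ascending.
order = np.argsort(ave_rank)
overall = np.empty_like(order)
overall[order] = np.arange(1, len(algorithms) + 1)
```

This reproduces the Overall-Rank row (MHO first, HO2 second, and so on).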
Table 5. Comparison of the results for the speed reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal Cost
MHO | 3.5999 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.3000 | 7.7978 | 3.3985 | 5.2935 | 3.0614 × 10^3
HO | 3.5145 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.4885 | 7.9394 | 3.3538 | 5.4191 | 3.0942 × 10^3
HO1 | 3.5473 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.6056 | 7.9826 | 3.7487 | 5.2885 | 3.1373 × 10^3
HO2 | 3.5747 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.3000 | 7.8843 | 3.3548 | 5.4199 | 3.1155 × 10^3
HO3 | 3.5147 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.8114 | 8.1951 | 3.3515 | 5.4857 | 3.1476 × 10^3
HHO | 3.5034 | 7.0000 × 10^−1 | 1.8508 × 10^1 | 8.0993 | 7.8316 | 3.7984 | 5.3924 | 3.4771 × 10^3
HBA | 3.5000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.2194 | 7.9955 | 3.5891 | 5.2869 | 3.0754 × 10^3
DBO | 3.6000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 8.3000 | 7.7154 | 3.9000 | 5.2867 | 3.2093 × 10^3
PSO | 3.6000 | 7.0000 × 10^−1 | 1.7000 × 10^1 | 7.8063 | 8.3000 | 3.9000 | 5.2869 | 3.2164 × 10^3
WOA | 3.6000 | 7.1931 × 10^−1 | 1.7000 × 10^1 | 8.2999 | 7.8366 | 3.3518 | 5.2903 | 3.1389 × 10^3
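The cost column can be verified against the standard weight function of the speed reducer benchmark (Golinski's formulation); plugging in MHO's solution from Table 5 recovers the reported cost of about 3.0614 × 10^3.

```python
def speed_reducer_cost(x1, x2, x3, x4, x5, x6, x7):
    """Weight of the speed reducer (standard benchmark objective)."""
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# MHO's solution from Table 5
cost = speed_reducer_cost(3.5999, 0.7, 17, 8.3, 7.7978, 3.3985, 5.2935)
```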
Table 6. Comparison of the results of the gear train design problem.
Algorithm | N1 | N2 | N3 | N4 | Optimal Cost
MHO | 44 | 13 | 21 | 43 | 1.5450 × 10^−10
HO | 57 | 12 | 37 | 54 | 8.8876 × 10^−10
HO1 | 59 | 15 | 21 | 37 | 3.0676 × 10^−10
HO2 | 56 | 23 | 13 | 37 | 6.6021 × 10^−10
HO3 | 55 | 12 | 37 | 56 | 1.5247 × 10^−8
HHO | 47 | 12 | 26 | 46 | 9.9216 × 10^−10
HBA | 34 | 15 | 17 | 52 | 2.3576 × 10^−9
DBO | 60 | 15 | 15 | 26 | 2.3576 × 10^−9
PSO | 57 | 37 | 12 | 54 | 8.8876 × 10^−10
WOA | 52 | 35 | 12 | 56 | 2.3576 × 10^−9
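The gear train objective is the squared error between the achieved transmission ratio and the target 1/6.931; evaluating MHO's tooth counts from Table 6 reproduces the reported cost of about 1.545 × 10^−10.

```python
def gear_train_cost(n1, n2, n3, n4):
    """Squared error between the achieved gear ratio and the target 1/6.931."""
    return (1.0 / 6.931 - (n2 * n3) / (n1 * n4)) ** 2

# MHO's solution from Table 6
cost = gear_train_cost(44, 13, 21, 43)
```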
Table 7. Comparison of the results of the step-cone pulley problem.
Algorithm | d1 | d2 | d3 | d4 | ω | Optimal Cost
MHO | 3.9835 × 10^1 | 5.4824 × 10^1 | 7.3067 × 10^1 | 8.7626 × 10^1 | 8.8851 × 10^1 | 2.7377 × 10^92
HO | 4.0922 × 10^1 | 5.6309 × 10^1 | 7.5110 × 10^1 | 8.9975 × 10^1 | 8.6176 × 10^1 | 1.4460 × 10^93
HO1 | 4.0863 × 10^1 | 5.6226 × 10^1 | 7.4988 × 10^1 | 8.9873 × 10^1 | 8.8590 × 10^1 | 3.3731 × 10^92
HO2 | 4.0683 × 10^1 | 5.5973 × 10^1 | 7.4724 × 10^1 | 8.9455 × 10^1 | 8.9458 × 10^1 | 5.9786 × 10^93
HO3 | 4.0427 × 10^1 | 5.5560 × 10^1 | 7.4205 × 10^1 | 8.8981 × 10^1 | 8.9309 × 10^1 | 9.2168 × 10^93
HHO | 4.1957 × 10^1 | 5.5823 × 10^1 | 8.3645 × 10^1 | 8.6757 × 10^1 | 8.8616 × 10^1 | 5.2464 × 10^97
HBA | 4.0818 × 10^1 | 5.6155 × 10^1 | 7.4881 × 10^1 | 8.9761 × 10^1 | 8.6001 × 10^1 | 4.8097 × 10^92
DBO | 4.0928 × 10^1 | 5.6330 × 10^1 | 7.5113 × 10^1 | 9.0000 × 10^1 | 9.0000 × 10^1 | 8.5877 × 10^92
PSO | 4.0147 × 10^1 | 5.5184 × 10^1 | 7.3648 × 10^1 | 8.8431 × 10^1 | 9.0000 × 10^1 | 1.2714 × 10^94
WOA | 4.0969 × 10^1 | 5.8129 × 10^1 | 7.5737 × 10^1 | 8.7173 × 10^1 | 8.7229 × 10^1 | 8.5726 × 10^96
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
