Article

Improved Honey Badger Algorithm Based on Elite Tangent Search and Differential Mutation with Applications in Fault Diagnosis

1 Gansu Natural Energy Research Institute, Lanzhou 730046, China
2 School of Electronics and Information Engineering, Lanzhou Petrochemical Vocational and Technical University, Lanzhou 730060, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(1), 256; https://doi.org/10.3390/pr13010256
Submission received: 3 December 2024 / Revised: 13 January 2025 / Accepted: 14 January 2025 / Published: 17 January 2025
(This article belongs to the Section Sustainable Processes)

Abstract: This paper addresses the shortcomings of the Honey Badger Algorithm (HBA), namely its limited exploitation capability, susceptibility to local optima, and inadequate pre-exploration mechanism. To address these issues, we propose the Improved Honey Badger Algorithm (IHBA), which integrates the Elite Tangent Search Algorithm (ETSA) and differential mutation strategies. Our approach employs cubic chaotic mapping in the initialization phase and a random value perturbation strategy in the pre-iterative stage to enhance exploration and prevent premature convergence. If the best population value remains unchanged across three consecutive iterations, an elite tangent search with differential variation is employed to accelerate convergence and enhance precision. Comparative experiments on partial CEC2017 test functions demonstrate that the IHBA achieves faster convergence, greater accuracy, and improved robustness. Moreover, the IHBA is applied to the fault diagnosis of rolling bearings in electric motors to construct the IHBA-VMD-CNN-BiLSTM fault diagnosis model, which quickly and accurately identifies fault types. Experimental verification confirms that this method improves the speed and accuracy of rolling bearing fault identification compared to traditional approaches.

1. Introduction

Electric motors are essential components in various industries, converting electrical energy into mechanical energy. Rolling bearings are critical for the smooth operation of many motor components, and their failure is one of the leading causes of motor malfunction. Bearing failures account for over 40% of motor faults [1,2], and if not detected early, can lead to severe consequences, such as prolonged downtime, financial losses, and even risks to human health and safety. Therefore, it is crucial to develop effective fault diagnosis techniques to ensure the reliable operation of electric motors, especially in environments with fluctuating loads and harsh conditions.
Recent advancements in swarm intelligence algorithms have provided promising solutions for fault diagnosis. These algorithms, which mimic the collective behavior of natural organisms, have been successfully applied to various optimization problems, including motor fault diagnosis. Popular algorithms include the Sparrow Search Algorithm (SSA) [3], Whale Optimization Algorithm (WOA) [4], Pelican Optimization Algorithm (POA) [5], Gray Wolf Optimization (GWO) [6], Harris Hawk Algorithm (HHO) [7], etc. These algorithms have demonstrated the capacity for effective global search and adaptability. However, despite their advantages, these algorithms face significant challenges, including premature convergence, limited exploitation capabilities, and susceptibility to local optima, which hinder their performance in complex fault diagnosis tasks.
The Honey Badger Algorithm (HBA), introduced by Hashim et al. in 2021 [8], is inspired by the foraging behavior of honey badgers. While it offers several advantages, such as a simple structure and few adjustable parameters, the HBA struggles with inadequate exploration and a slow convergence rate. In response to these shortcomings, numerous modifications have been proposed, such as incorporating reverse learning strategies [9], chaotic mapping [10], and hybrid approaches [11]. However, these improvements often increase complexity and do not fully address the underlying imbalance between the algorithm's exploration and exploitation.
In this paper, we introduce the Improved Honey Badger Algorithm (IHBA), which integrates the Elite Tangent Search Algorithm (ETSA) and differential mutation strategies to overcome the limitations of traditional HBA. Our proposed method improves exploration by employing cubic chaotic mapping during the initialization phase and utilizes a random value perturbation strategy in the pre-iterative stage to prevent premature convergence. Moreover, when the optimal population value remains unchanged for three consecutive iterations, an elite tangent search with differential variation is triggered to enhance convergence speed and precision. We validate the performance of the IHBA through comparative experiments on several CEC2017 test functions, demonstrating that the IHBA significantly outperforms the traditional HBA in terms of convergence speed, accuracy, and robustness. To further assess the advantages of the IHBA, we conduct ablation experiments comparing it with the original HBA, Dung Beetle Optimizer (DBO) [12], Coati Optimization Algorithm (COA) [13], Osprey Optimization Algorithm (OOA) [14], Beluga Whale Optimization (BWO) [15], Osprey–Cauchy–Sparrow Search Algorithm (OCSSA) [16], and Harris Hawk Algorithm (HHO) [7]. The experimental results clearly show that the IHBA achieves a notable improvement in both the speed of convergence and the accuracy of the optimization process, with enhanced robustness. Furthermore, the IHBA is successfully applied to the fault diagnosis of rolling bearings in electric motors, where we develop an IHBA-VMD-CNN-BiLSTM fault diagnosis model. The experimental results validate that this method significantly improves the speed and accuracy of fault detection, providing a more reliable solution for motor condition monitoring than traditional approaches.
The contributions of this paper are twofold:
(1) The development of the IHBA, which addresses the key shortcomings of the original HBA by enhancing exploration and accelerating convergence.
(2) The successful application of the IHBA to motor bearing fault diagnosis, providing a novel approach to improving fault detection accuracy in industrial settings.

2. Honey Badger Algorithm (HBA)

The Honey Badger Algorithm (HBA) is a metaheuristic algorithm designed to tackle continuous optimization problems. It mimics the intelligent foraging behavior exhibited by honey badgers in nature and boasts advantages such as high efficiency, a straightforward structure, and ease of implementation [8]. The HBA is structured into two distinct phases, the “Digging Phase” and the “Honey Phase”, which are elaborated upon as follows:
Step 1: Initialization phase. We initialize the number of honey badgers (population size N_s) and their respective positions using Equation (1):

x_i = lb_i + r_1 × (ub_i − lb_i),  (1)

where x_i is the i-th honey badger position, referring to a candidate solution in a population of N_s; lb_i and ub_i are, respectively, the lower and upper bounds of the search domain; i ∈ {1, 2, …, N_s}; and r_1 is a random number between 0 and 1.
Step 2: Defining intensity (I_i). Intensity is related to the concentration strength of the prey and the distance between the prey and the i-th honey badger. I_i is the smell intensity of the prey; if the smell intensity is high, the motion will be fast, and vice versa, as defined by Equation (2):

I_i = r_2 × S / (4π d_i²),  (2)
S = (x_i − x_{i+1})²,  (3)
d_i = x_best − x_i,  (4)

where r_2 is a random number between 0 and 1, S is the source (concentration) strength, d_i denotes the distance between the prey and the i-th badger, and x_best is the current global optimal food source location, i.e., the optimal solution.
Step 3: Update density factor. The density factor (α) controls time-varying randomization to ensure a smooth transition from exploration to exploitation. It decreases with iterations to reduce randomization over time, following Equation (5):

α = C × exp(−t / t_max),  (5)

where t is the current iteration number, t_max is the maximum number of iterations, and C ≥ 1 (default = 2). For example, with C = 2, α decays from 2 at the first iteration to 2e^{−1} ≈ 0.74 at t = t_max.
Step 4: Digging phase. In the digging phase, a honey badger moves along a cardioid-shaped path, which can be simulated by Equation (6):

x_new = x_best + F × β × I_i × x_best + F × r_3 × α × d_i × |cos(2π r_4) × [1 − cos(2π r_5)]|,  (6)

where x_new is the updated position of the honey badger; β ≥ 1 (default = 6) is the badger's food-gathering ability; r_3, r_4, and r_5 are three different random numbers between 0 and 1; and F is a flag that alters the search direction, determined by Equation (7):

F = { 1, if r_6 ≤ 0.5;  −1, otherwise },  (7)

where r_6 is a random number between 0 and 1. In the digging phase, a honey badger relies heavily on the smell intensity I_i of the prey x_best, the distance d_i between the badger and the prey, and the time-varying search influence factor α. Moreover, the direction flag F acts as a disturbance that allows the badger to find an even better prey location.
Step 5: Honey phase. The case of a honey badger following a honey guide bird to reach a beehive can be simulated using Equation (8):

x_new = x_best + F × r_7 × α × d_i,  (8)

where r_7 is a random number between 0 and 1.
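To make these update rules concrete, the following is a minimal NumPy sketch of one HBA iteration implementing Equations (2)–(8), with Equation (1) used for initialization in the usage example; the sphere objective and all parameter values are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def hba_step(x, f, t, t_max, C=2.0, beta=6.0, rng=None):
    """One Honey Badger Algorithm iteration over population x (Eqs. (2)-(8))."""
    rng = rng or np.random.default_rng()
    n, _ = x.shape
    fitness = np.apply_along_axis(f, 1, x)
    x_best = x[np.argmin(fitness)].copy()
    alpha = C * np.exp(-t / t_max)                    # Eq. (5): density factor
    for i in range(n):
        S = np.sum((x[i] - x[(i + 1) % n]) ** 2)      # Eq. (3): source strength
        d = x_best - x[i]                             # Eq. (4): distance to prey
        I_i = rng.random() * S / (4 * np.pi * np.sum(d ** 2) + 1e-12)  # Eq. (2)
        F = 1.0 if rng.random() < 0.5 else -1.0       # Eq. (7): direction flag
        r3, r4, r5, r7 = rng.random(4)
        if rng.random() < 0.5:                        # digging phase, Eq. (6)
            x_new = (x_best + F * beta * I_i * x_best + F * r3 * alpha * d
                     * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
        else:                                         # honey phase, Eq. (8)
            x_new = x_best + F * r7 * alpha * d
        if f(x_new) <= f(x[i]):                       # greedy replacement
            x[i] = x_new
    return x

# Usage: Eq. (1) initialization on an assumed sphere objective.
rng = np.random.default_rng(0)
lb, ub, Ns, dim = -100.0, 100.0, 30, 10
sphere = lambda v: float(np.sum(v ** 2))
pop = lb + rng.random((Ns, dim)) * (ub - lb)          # Eq. (1)
for t in range(200):
    pop = hba_step(pop, sphere, t, 200, rng=rng)
print("best fitness:", min(sphere(v) for v in pop))
```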

3. Improved Honey Badger Algorithm (IHBA)

Like other heuristic algorithms, the HBA is prone to premature convergence and can become trapped in local optima on some complex problems. This constrains the algorithm's precision and leaves significant room for enhancement. The improved algorithm therefore introduces several new mechanisms: a cubic chaotic mapping mechanism, a random value perturbation strategy, an elite tangent search, and a differential mutation strategy.

3.1. Cubic Chaotic Mapping

As the initial population of the basic honey badger search algorithm is randomly generated, it is impossible to guarantee that the initial positions of the individuals are uniformly distributed in the search space. This has an impact on the search speed and optimization performance of the algorithm. The IHBA initialization process incorporates cubic mapping to enhance the traversal of the initial population.
z_{i+1} = ρ z_i (1 − z_i²),
x_{i+1} = lb_{i+1} + z_{i+1} × (ub_{i+1} − lb_{i+1}),  (9)

where z_i is the cubic chaotic sequence and ρ = 3.
In the initial population generation process utilizing chaotic mapping, solutions with inferior or intermediate positions may be produced. These solutions may prove unfavorable for the subsequent search for the optimal solution or may impede the convergence of the algorithm. Accordingly, a population filtering mechanism is employed, whereby solutions with inferior initial positions are excluded. As illustrated in Figure 1, twice the required number of solutions is generated during the initialization phase. Subsequently, the optimal half of the solutions is selected as the initial population based on the calculated fitness values.
The combination of cubic chaotic mapping with elite tangent search and differential variation enables the HBA to generate populations that are more closely aligned with the optimal solution location during the initialization phase. The pseudocode of the population filtering mechanism using cubic chaotic mapping is illustrated in Algorithm 1.
Algorithm 1: Population filtering mechanism
Input: honey badger population size N_s;
1. Use Equation (9) to generate a population x_o of 2N_s solutions with random positions;
2. Calculate the fitness function value for each solution;
3. Sort all individuals according to their fitness function values in ascending order (best first);
4. Select the first N_s individuals as the initial population x_i;
5. Return x_i.
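A minimal NumPy sketch of this initialization and filtering step (Equation (9) plus Algorithm 1) might look as follows; the sphere objective and the chaotic seed range are illustrative assumptions, and the clipping step is a safeguard not spelled out in the text.

```python
import numpy as np

def cubic_chaotic_init(Ns, dim, lb, ub, f, rho=3.0, seed=0):
    """Eq. (9) + Algorithm 1: generate 2*Ns chaotic candidates, keep the best Ns."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.1, 0.9, size=dim)        # chaotic seed (illustrative range)
    cand = np.empty((2 * Ns, dim))
    for i in range(2 * Ns):
        z = rho * z * (1.0 - z ** 2)           # cubic chaotic map, Eq. (9)
        cand[i] = np.clip(lb + z * (ub - lb), lb, ub)  # safeguard clip to bounds
    fitness = np.apply_along_axis(f, 1, cand)
    keep = np.argsort(fitness)[:Ns]            # ascending sort: best (lowest) first
    return cand[keep]

pop = cubic_chaotic_init(30, 10, -100.0, 100.0, lambda v: float(np.sum(v ** 2)))
```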

3.2. Random Value Perturbation Strategy

In the IHBA, a key coefficient, A , is introduced to determine the strategy of honey badger position updating, aimed at balancing the algorithm’s global search capability and local convergence speed. The design of this coefficient A incorporates the ideas of linear decreasing and stochastic perturbation in order to dynamically adjust the search strategy so as to avoid premature convergence and enhance the algorithm’s global optimization seeking ability. The expression for A is as follows:
A = 2mr − m,  (10)

where m = 2 − 2t/t_max decreases linearly from 2 to 0 over the iterations, and r is a random number within (0, 1).
At the beginning of the iteration ( t is small), the value of m is close to 2, which makes the absolute value of A greater than or equal to 1. Especially when r is close to 1, the algorithm tends to perform a search strategy with random individual perturbations, which increases the traversal of the search space and helps discover the globally optimal solution. As the number of iterations t increases, m gradually decreases, resulting in the absolute value of A tending to 0. In the later stages of the iteration, the algorithm relies more on the population optimum x best for position updating, which facilitates the local convergence of the algorithm and accelerates the convergence to the optimal solution.
When |A| ≥ 1, the algorithm performs a search strategy with randomized individual perturbations. This means that the honey badger's position update does not depend only on the current best individual, x_best, but is also influenced by a random individual, which helps the algorithm move beyond the local optimum and increases the diversity of the search.
Digging phase:

x_new = x_rand + F × β × I_i × x_rand + F × r_3 × α × (x_rand − x_i) × |cos(2π r_4) × [1 − cos(2π r_5)]|,  (11)

Honey phase:

x_new = x_rand + F × r_7 × α × (x_rand − x_i),  (12)
When |A| < 1, the algorithm returns to the x_best-based position update strategy, which helps the algorithm perform a fine-grained search in the vicinity of the discovered high-quality solutions, thus speeding up convergence.
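A hedged sketch of this switching rule, reusing the conventions of the earlier HBA snippet (all names and parameter values are illustrative):

```python
import numpy as np

def perturbed_update(x_i, x_best, x_rand, t, t_max, alpha, I_i, beta=6.0, rng=None):
    """Position update switched by the perturbation coefficient A (Eq. (10))."""
    rng = rng or np.random.default_rng()
    m = 2.0 - 2.0 * t / t_max                 # decreases linearly from 2 to 0
    A = 2.0 * m * rng.random() - m            # Eq. (10)
    F = 1.0 if rng.random() < 0.5 else -1.0
    r3, r4, r5, r7 = rng.random(4)
    # |A| >= 1: explore around a random individual (Eqs. (11)/(12));
    # |A| <  1: exploit around the population optimum (Eqs. (6)/(8)).
    anchor, d = ((x_rand, x_rand - x_i) if abs(A) >= 1 else (x_best, x_best - x_i))
    if rng.random() < 0.5:                    # digging phase
        return (anchor + F * beta * I_i * anchor + F * r3 * alpha * d
                * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
    return anchor + F * r7 * alpha * d        # honey phase

x_new = perturbed_update(np.zeros(5), np.ones(5), np.full(5, 2.0),
                         t=10, t_max=500, alpha=1.9, I_i=0.3)
```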

3.3. Elite Tangent Search and Differential Mutation Strategy

The following elite tangent search with differential variation strategy is performed when the population's best value remains unchanged for three consecutive iterations.
Elite subpopulation: In each iteration, the top half of individuals with lower fitness (i.e., better performance, since usually lower fitness values indicate better solutions) are classified into the elite subpopulation based on their fitness value. The elite subpopulation contains the best individuals in the current population, retaining the best solutions. This aims to protect these excellent solutions from being corrupted by subsequent genetic manipulations while allowing them to serve as the basis for future iterations.
Migration strategy for the elite subpopulation: When an individual is close to the current optimal solution, it searches the area around it. This behavior, called local search, improves convergence speed and solution accuracy. Because the fitness of the elite subpopulation is close to that of the current optimal solution, these individuals are allowed to develop locally.
The migration strategy of the elite subpopulation incorporates the position update of the Tangent Search Algorithm (TSA) [17]:

x_new = x_best + step × tan(θ) × (rand × x_best − optS), if x_best = optS,  (13a)
x_new = x_best + step × tan(θ) × (x_best − optS), if x_best ≠ optS,  (13b)

step = 10 × sign(rand − 0.5) × norm(optS) × log(1 + 10 × dim / t),  (14)

where optS is the best current solution, used to guide the search process towards the optimum, and θ = rand × π/2.1.
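A hedged sketch of the elite migration step of Equations (13) and (14), with opt_s denoting the best current solution; the names and the handling of t = 0 are illustrative choices:

```python
import numpy as np

def elite_tangent_move(x_best, opt_s, t, rng=None):
    """Elite migration via the tangent flight of Eqs. (13) and (14)."""
    rng = rng or np.random.default_rng()
    dim = x_best.size
    theta = rng.random() * np.pi / 2.1                 # random angle in [0, pi/2.1)
    step = (10.0 * np.sign(rng.random() - 0.5)         # Eq. (14)
            * np.linalg.norm(opt_s) * np.log(1.0 + 10.0 * dim / max(t, 1)))
    if np.array_equal(x_best, opt_s):                  # Eq. (13a)
        return x_best + step * np.tan(theta) * (rng.random(dim) * x_best - opt_s)
    return x_best + step * np.tan(theta) * (x_best - opt_s)  # Eq. (13b)

x_new = elite_tangent_move(np.ones(10), np.ones(10), t=5)
```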
Exploration subpopulation: The second half of the individuals, as opposed to the elite subpopulation, are classified into the exploration subpopulation. These individuals have higher fitness values, i.e., they are the poorer performers in a minimization setting. The main purpose of the exploration subpopulation is to explore new regions of the solution space through genetic manipulation to find potentially superior solutions. Since these individuals are not the current best, they are more likely to generate new, possibly superior solutions through operations such as mutation.
Evolutionary strategy of the exploration subpopulation: From Equations (6) and (8), it can be seen that in the HBA, the position of each honey badger is updated by generating new individuals in the vicinity of the current individual and the current optimal individual x_best, which means that the other individuals in the population move towards x_best. If x_best is a local optimal solution, then as the iteration continues the honey badgers gather around x_best, which impoverishes population diversity and makes the algorithm converge prematurely. To solve this problem, this paper adopts a differential mutation strategy. Inspired by the mutation strategy of the differential evolution algorithm, a stochastic difference is formed from randomly selected honey badger individuals in the population to generate a new individual, as implemented in Equation (15).
x_new = x_rand1 + F_0 × (x_rand2 − x_rand3),  (15)

where F_0 is the differential evolution contraction factor, set to 0.4, and x_rand1, x_rand2, and x_rand3 are three distinct, randomly selected honey badger individuals. A minimal code sketch of this mutation is given below; the full pseudocode of the IHBA is shown in Algorithm 2.
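A minimal sketch of this mutation operator (Equation (15)), with the contraction factor F_0 = 0.4 taken from the text; the index handling is an illustrative choice:

```python
import numpy as np

def differential_mutation(pop, i, F0=0.4, rng=None):
    """Trial individual from three distinct random members of pop (Eq. (15))."""
    rng = rng or np.random.default_rng()
    candidates = [j for j in range(len(pop)) if j != i]   # exclude the current badger
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F0 * (pop[r2] - pop[r3])

pop = np.vstack([np.zeros(5), np.ones(5), 2 * np.ones(5), 3 * np.ones(5)])
trial = differential_mutation(pop, 0)
```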
Algorithm 2: Pseudocode of the IHBA.
Input:
Set honey badger population size N_s;
Maximum number of iterations t_max;
Dimension D;
Lower bound lb, upper bound ub;
Set parameters: constant C, attraction factor β;
Output:
f_best: global best fitness;
x_best: position of the global best individual;
1. Initialize the population x_i using Algorithm 1;
2. Evaluate the fitness of each honey badger position x_i using the objective function and assign it to f_i, i ∈ [1, 2, …, N_s];
3. Save the best position x_best and assign its fitness to f_best;
4. while t < t_max do
5.   Update the decreasing factor α using Equation (5);
6.   for i = 1 to N_s do
7.     Calculate the intensity I_i using Equation (2);
8.     Compute the adaptation parameter A using Equation (10);
9.     if |A| ≥ 1 then
10.      if r < 0.5 then
11.        Update the position x_new using Equation (11), selecting a random individual x_rand instead of the global optimum x_best;
12.      else
13.        Update the position x_new using Equation (12);
14.      end if
15.    else
16.      if r < 0.5 then
17.        Update the position x_new using Equation (6);
18.      else
19.        Update the position x_new using Equation (8);
20.      end if
21.    end if
22.    Evaluate the new individual x_new and assign its fitness to f_new;
23.    if f_new ≤ f_i then
24.      Set x_i = x_new and f_i = f_new;
25.    end if
26.    if f_new ≤ f_best then
27.      Set x_best = x_new and f_best = f_new;
28.    end if
29.    if x_best has remained unchanged for three consecutive iterations then (execute the elite tangent and differential mutation strategies)
30.      Sort the population fitness in ascending order;
31.      Divide the population into two groups (the top N_s/2 individuals form the elite subpopulation; the remaining N_s/2 form the exploration subpopulation);
32.      for i = 1 to N_s/2 do
33.        Generate a random angle θ in the range [0, π/2.1];
34.        Calculate the step size using Equation (14);
35.        if optS == x_best then
36.          Update the position x_new using Equation (13a);
37.        else
38.          Update the position x_new using Equation (13b);
39.        end if
40.      end for
41.      for i = N_s/2 + 1 to N_s do
42.        Randomly select three distinct individuals x_rand1, x_rand2, x_rand3 from the population;
43.        Update the position x_new using Equation (15);
44.      end for
45.    end if
46.    if f_new ≤ f_i then
47.      Set x_i = x_new and f_i = f_new;
48.    end if
49.    if f_new ≤ f_best then
50.      Set x_best = x_new and f_best = f_new;
51.    end if
52.  end for
53. end while
54. Return f_best, x_best.

4. Experiment and Results

To assess the optimization performance of the Improved Honey Badger Algorithm (IHBA), we selected 12 benchmark test functions from the CEC2017 suite, each possessing distinct characteristics. For the comparative experiments, we evaluated the IHBA against the following seven other algorithms: the original HBA, COA, DBO, OCSSA, BWO, OOA, and HHO.
To guarantee the fairness of our experimental evaluations, we established a consistent simulation environment using MATLAB R2023a on a Windows 11 operating system powered by an Intel Core i7-11800H @ 2.3 GHz CPU (Intel Corporation, Santa Clara, CA, USA) with 16.0 GB of RAM. Before presenting the results, we outline the experimental setup and crucial parameter settings, detailed in Table 1. These settings ensure a level playing field for all algorithms being compared. Specifically, we set the maximum number of iterations to 500 for all algorithms, maintained a population size of 30, and conducted each experiment independently 30 times to ensure statistical reliability.

4.1. Test Function

The 12 test functions selected in this paper are shown in Table 2. f1 and f3 are unimodal functions that have no local minima, only a global minimum; these functions test the convergence of the algorithm. f4, f7, f8, and f9 are simple multimodal functions with local extreme points that are used to test the ability of the algorithm to jump out of local optima. f13, f14, f15, and f16 are hybrid functions, with each subfunction assigned a certain weight. f21 and f23 are composition functions composed of at least three hybrid functions or rotated and shifted CEC2017 benchmark functions; each subfunction carries an additional bias value and a weight, further increasing the optimization difficulty. Together, these test functions can comprehensively verify the algorithm's solution performance.

4.2. Results and Discussions

4.2.1. CEC2017 Test Function Set Optimization Experiment

This section presents and discusses the experimental results obtained by evaluating the eight algorithms across the 12 test functions in 10-, 30-, and 50-dimensional spaces. Each experiment was independently replicated 30 times to ensure statistical reliability, and the comprehensive results are summarized in Table 3. We undertake a detailed analysis of the performance of each algorithm, focusing on the mean and standard deviation of the achieved solutions. This analysis highlights consistent observations and patterns across different dimensions and test functions.
Table 3 demonstrates that the optimized IHBA outperforms the seven comparison algorithms across most functions and dimensions, showcasing superior optimization capabilities and achieving outstanding results.
In the 10-dimensional case, the IHBA achieves the optimal average value across all tested functions, demonstrating superior optimization performance compared to the other seven comparison algorithms. Specifically, for functions f 3 and f 9 , the IHBA attains the best optimal values, average values, and standard deviations, indicating its strong consistency and solving ability in both unimodal and simple multimodal problems. For function f 1 , the standard deviation of the IHBA is slightly inferior to that of OCSSA; however, the IHBA still outperforms the other six comparison algorithms, showcasing high competitiveness. Regarding function f 4 , the IHBA matches the optimal values of the OCSSA, HBA, and HHO and surpasses the other four comparison algorithms. For functions f 7 , f 13 , f 14 , f 15 , f 16 , and f 23 , the IHBA exhibits superior optimal values, average values, and standard deviations compared to the other seven comparison algorithms, indicating strong robustness on multimodal and hybrid functions. In the case of function f 21 , the average value of the IHBA is slightly inferior to that of the DBO algorithm but exceeds the other six comparison algorithms, and its standard deviation is slightly lower than BWO, OOA, DBO, and COA but better than the other three comparison algorithms. For function f 8 , the standard deviation of the IHBA is slightly inferior to that of the DBO algorithm, but its optimal and average values are superior to the other six comparison algorithms.
In the 30-dimensional case, the IHBA demonstrates superior optimization capabilities, particularly in simple multimodal and hybrid functions, while maintaining strong stability and global exploration. Specifically, for unimodal function f 1 , the IHBA achieves superior optimal and average values compared to the other seven comparison algorithms, with its standard deviation being slightly inferior to OCSSA but better than the other six comparison algorithms. For unimodal function f 3 , the IHBA outperforms the other seven comparison algorithms in terms of optimal value, standard deviation, and average value. Regarding the simple multimodal function f 4 , the IHBA achieves better optimal and average values than the other seven comparison algorithms, although its standard deviation is slightly inferior to the HBA. For the simple multimodal function f 8 , the IHBA attains superior optimal and average values compared to the other seven comparison algorithms, with its standard deviation being slightly inferior to the BWO and COA. In the case of simple multimodal functions f 7 and f 9 , the IHBA achieves superior optimal values, standard deviations, and average values compared to the other seven comparison algorithms. For hybrid functions f 13 and f 14 , the IHBA demonstrates better optimal values, average values, and standard deviations than the other seven comparison algorithms. Regarding the hybrid function f 15 , the IHBA achieves a superior optimal value and standard deviation compared to the other seven comparison algorithms, although its average value is slightly inferior to the OCSSA and better than the other six comparison algorithms. For hybrid function f 16 , the IHBA’s optimal and average values surpass those of the other seven comparison algorithms, while its standard deviation is slightly inferior to that of the DBO. In the case of the composition function f 21 , the IHBA exhibits a better standard deviation than the other seven comparison algorithms, though its optimal and average values are slightly inferior to the OCSSA. Lastly, for the composition function f 23 , the IHBA achieves the best optimal values, average values, and standard deviations.
In the 50-dimensional case, the IHBA continues to demonstrate strong optimization capabilities, particularly in unimodal, hybrid, and composition functions. However, the stability performance of certain simple multimodal functions shows variability. Specifically, for unimodal functions f 1 and f 3 , the IHBA achieves superior optimal values, standard deviations, and average values compared to the other seven comparison algorithms. Regarding the simple multimodal function f 4 , the optimal value of the IHBA is slightly inferior to that of the HBA, yet it surpasses the other six comparison algorithms in terms of optimal value, standard deviation, and average value. For simple multimodal functions f 7 , f 8 , and f 9 , the IHBA attains the best optimal and average values, although its standard deviation is less favorable. In the case of hybrid functions f 13 , f 14 , and f 15 , the IHBA records the highest optimal values, average values, and standard deviations. For the hybrid function f 16 , the optimal and average values of the IHBA exceed those of the other seven comparison algorithms, while its standard deviation is marginally inferior to that of the OCSSA. Lastly, for composition functions f 21 and f 23 , the IHBA achieves the best optimal values, average values, and standard deviations.
The analysis of the optimization results indicates that the IHBA outperforms the other algorithms on the CEC 2017 benchmark, demonstrating superiority across multiple dimensions. It is also exceptionally effective in solving high-dimensional and complex problems, particularly compared to the HBA. Experimental findings show that the enhanced strategy proposed in this paper substantially enhances both the accuracy and stability of the HBA.
Evaluating the complexity of an algorithm by directly comparing its running time is an intuitive method. Table 4 provides a detailed list of the execution times of the IHBA and HBA on 12 test functions. It is evident from the data in the table that the IHBA generally outperforms the HBA in terms of runtime across all test functions. In addition, with the increase in problem dimensions, both the IHBA and HBA show an upward trend in computation time.
Figure 2 presents a radar chart that analyzes the performance of the IHBA compared to seven other benchmark algorithms using the CEC2017 test functions. The IHBA demonstrates superior performance across multiple dimensions, highlighting its effectiveness in tackling complex optimization problems. Specifically, the IHBA excels in convergence speed, solution accuracy, and algorithm stability, outperforming its counterparts, especially in handling multimodal functions and high-dimensional optimization scenarios. Additionally, the IHBA achieves a balanced trade-off between exploration and exploitation, effectively preventing premature convergence while maintaining robust global search capabilities. In contrast, although some competing algorithms may exhibit strengths in specific test functions, their performance lacks the consistency and comprehensiveness demonstrated by the IHBA. The analysis derived from the radar chart underscores the IHBA’s enhanced adaptability and robustness, positioning it as a highly competitive method within the evolutionary algorithm landscape. These findings suggest that the IHBA is well-suited for diverse optimization challenges, offering reliable and efficient solutions across various complex environments.
To further compare and analyze the optimization-seeking ability of the improved algorithm, the average fitness results of each algorithm are ranked; ties are assigned only when both the mean and the standard deviation are equal. A lower average ranking indicates a superior optimization ability and better search performance. Figure 3 shows the average performance ranking of the algorithms. The IHBA is ranked first, with an average ranking of 1.33, while the OCSSA, HBA, HHO, BWO, OOA, DBO, and COA achieve average rankings of 3.00, 2.17, 4.50, 7.83, 6.42, 5.25, and 5.50, respectively. These results show that the IHBA's ability to find the best solutions exceeds that of the seven compared algorithms.
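Rankings of this kind can be reproduced as sketched below; the small fitness matrix is a placeholder, not the paper's data:

```python
import numpy as np
from scipy.stats import rankdata

# Rows: test-function/dimension combinations; columns: algorithms.
# Placeholder mean-fitness values, NOT the paper's results.
mean_fitness = np.array([
    [2.06e3, 3.04e3, 3.44e3, 6.90e5],
    [3.00e2, 3.00e2, 3.00e2, 3.40e2],
])
algorithms = ["IHBA", "OCSSA", "HBA", "HHO"]

ranks = np.apply_along_axis(rankdata, 1, mean_fitness)  # rank 1 = lowest mean
avg_rank = ranks.mean(axis=0)
for name, r in sorted(zip(algorithms, avg_rank), key=lambda p: p[1]):
    print(f"{name}: average rank {r:.2f}")
```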
To analyze the distribution characteristics of the IHBA optimization results, box plots were drawn from the 30 independent runs of each algorithm on 12 representative test functions, as shown in Figure 4. The upper and lower boundaries of each box represent the upper and lower quartiles, the center line of each box is the median of the 30 optimization results, and the symbol “o” marks outliers. Except for Figure 4e,j, the 30 optimization results of the IHBA are relatively concentrated, and its convergence values are more tightly distributed than those of the other algorithms, indicating that the improved algorithm solves the problems in a more stable manner and has better robustness.

4.2.2. Wilcoxon Rank Sum Test

The Wilcoxon rank sum test is a commonly used hypothesis test for analyzing significant algorithm differences. It is widely employed to verify the effectiveness of optimization algorithm improvement strategies and ascertain whether there are significant differences between various algorithms.
The test results are presented in Table 5, where the p-values determine whether a significant difference exists. The significance level is set at the commonly used value of 0.05. If p < 0.05, the null hypothesis can be rejected, indicating a significant difference between the two compared algorithms. Based on the statistical test results, the p-values for the IHBA compared with the other seven algorithms are predominantly less than 0.05. This indicates a significant difference in search capability between the improved and comparative algorithms, confirming the IHBA's statistical superiority.
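Such a test can be reproduced with SciPy, as sketched below; the two samples are synthetic placeholders standing in for 30 independent runs of two algorithms:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Synthetic final-fitness samples from 30 runs of two algorithms (assumed data).
runs_a = rng.normal(loc=2.0e3, scale=2.5e2, size=30)
runs_b = rng.normal(loc=3.4e3, scale=3.5e2, size=30)

stat, p_value = ranksums(runs_a, runs_b)   # Wilcoxon rank sum test
print(f"statistic = {stat:.3f}, p = {p_value:.3e}")
if p_value < 0.05:
    print("Reject the null hypothesis: significant difference at the 0.05 level.")
```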

4.2.3. Convergence Curves on the CEC2017 Test Function Set

To verify the convergence performance of the improved algorithm and to visually compare the convergence speeds of different algorithms, we compared the convergence curves of swarm intelligence algorithms when solving function extremum problems.
Observing the convergence curves, it can be concluded that the IHBA exhibits the highest optimization accuracy across all graphs. In Figure 5a,e,h, the IHBA did not converge as quickly as other algorithms during the initial stages of iteration; however, after about 50 iterations it escaped from the local optimum and quickly converged to a better result, demonstrating that the improved algorithm can escape local optima. In Figure 5b–d,f,i–k, the IHBA converged to the optimal value fastest among all algorithms and consistently maintained its lead throughout the iterative process, ultimately converging to the best optimization result; its convergence curve is also relatively smooth, and it is less likely to stagnate. In Figure 5g,l, many algorithms quickly converge to local optima, but the HBA and IHBA converge near the optimum value, with the IHBA outperforming the HBA.

5. Engineering Optimization

To further demonstrate the effectiveness and engineering applicability of the IHBA, it is applied to the bearing fault diagnosis problem. The Variational Mode Decomposition (VMD) algorithm, an adaptive signal decomposition method, depends on key parameters, namely the penalty factor α and the number of modes K, for its performance. The IHBA was applied to optimize the VMD parameters [K, α] for bearing fault diagnosis, quickly achieving notable results compared to other optimization techniques. Finally, the optimized VMD was integrated with a convolutional neural network and a bidirectional long short-term memory network (CNN-BiLSTM) to diagnose bearing faults.

5.1. Optimizing the Parameters of VMD

Different optimization algorithms are used to optimize the VMD parameters, and the resulting fitness curves are shown in Figure 6. By using the IHBA to optimize the key parameters of VMD, the signal analysis performance can be significantly enhanced. Especially when dealing with complex fault vibration signals, the proposed algorithm improves the decomposition quality, reduces the computational complexity, and achieves better accuracy and adaptability.
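The fitness evaluation at the core of this parameter search can be sketched as follows, assuming the third-party vmdpy package for VMD and a basic permutation entropy implementation; the parameter ranges and all names are illustrative assumptions, not the authors' code:

```python
import math
import numpy as np
from itertools import permutations
from vmdpy import VMD  # third-party VMD implementation (an assumption)

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1
    probs = np.array([c for c in counts.values() if c > 0], dtype=float)
    probs /= probs.sum()
    return float(-np.sum(probs * np.log(probs)) / math.log(math.factorial(order)))

def vmd_fitness(params, signal):
    """Fitness of [K, alpha]: the minimum permutation entropy over all modes."""
    K, alpha = int(round(params[0])), float(params[1])
    modes, _, _ = VMD(signal, alpha=alpha, tau=0.0, K=K, DC=0, init=1, tol=1e-7)
    return min(permutation_entropy(m) for m in modes)

# The IHBA would minimize vmd_fitness over, e.g., K in [3, 15] and alpha in [100, 6000].
```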
Figure 7 illustrates the convergence curve and the parameter variation curve obtained while the IHBA optimizes the VMD parameters. As depicted in Figure 7, the algorithm stabilizes after four iterations, indicating convergence. At this point, the fitness function (minimum permutation entropy) value is 0.2364, with the optimal parameter combination for the number of decomposition modes (K) and the penalty factor (α) being (15, 6000).
Figure 8a–d shows the time-domain comparison diagrams before and after signal decomposition and reconstruction for a normal bearing and inner ring, outer ring, and rolling body bearing faults, respectively. These figures demonstrate that after optimized VMD reconstruction, the time-domain signals exhibit significant noise reduction, and the impact characteristics of the fault signals become more pronounced.

5.2. Rolling Bearing Fault Diagnosis

The experimental data used in this study were sourced from the Case Western Reserve University (CWRU) database on motor bearing failures [18]. As shown in Figure 9, a test bench was employed to simulate bearing failures, with a vibration signal sampling frequency of 1024 Hz. The experiment utilized a deep groove ball bearing model SKF6205, with failures simulated via electrical discharge machining (EDM). For diagnostic purposes, 10 fault categories were established, encompassing four types of states as follows: normal bearing, rolling element fault, outer ring fault, and inner ring fault. Each fault state was further categorized based on fault diameters of 0.007 inches, 0.014 inches, and 0.021 inches, resulting in nine distinct states representing varying degrees of failure. The dataset for each fault type consisted of 120 samples, with each sample containing 2048 data points. In the experimental setup, 90 samples were allocated for model training, while 30 samples were reserved for model testing. The division of the fault dataset is detailed in Table 6.
Determining the relevant parameters is crucial in the VMD of the original vibration signals of rolling bearings. Specifically, the number of modal components K and the penalty parameter α significantly influence the decomposition results. The value of K dictates the number of decomposed modal components. If K is set too low, under-decomposition occurs, leading to a loss of key information and an inability to extract essential features. Conversely, if K is set too high, it results in an excessive number of modal components, causing overlapping center frequencies and making signal feature differentiation difficult. The penalty parameter α primarily affects the bandwidth of each modal component, with an appropriate value enhancing the accuracy of the reconstructed signal. In this paper, we propose an approach that optimizes VMD using an IHBA, combining it with a CNN-BiLSTM architecture for the fault diagnosis of motor bearings. This method effectively extracts fault features from the original vibration signals. After the feature extraction process, each of the 10 states is represented by 120 samples, resulting in a 1200 × 9 matrix. Each row of this matrix is labeled with numbers 1–10 to indicate different fault types. The fault diagnosis process, as implemented in this study, is depicted in Figure 10.
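A hedged Keras sketch of a CNN-BiLSTM classifier of the kind described here follows; the input length of 2048 points and the 10 output classes follow the text, while the layer sizes, filter counts, and optimizer are illustrative assumptions:

```python
from tensorflow.keras import layers, models

def build_cnn_bilstm(input_len=2048, n_classes=10):
    """CNN front-end for local feature extraction + BiLSTM for temporal context."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(32, kernel_size=16, strides=2, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(64, kernel_size=8, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Bidirectional(layers.LSTM(64)),   # forward + backward features
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_bilstm()
model.summary()
```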
Figure 11 illustrates the accuracy curve of a fault diagnosis model based on IHBA-VMD-CNN-BiLSTM. The model performs excellently during both the training and testing phases, stabilizing after approximately 200 iterations and achieving a testing accuracy of up to 99.67% at 400 iterations.
Figure 12 shows the loss function curve of a fault diagnosis model based on IHBA-VMD-CNN-BiLSTM. The model demonstrates excellent performance, with both training and testing losses rapidly decreasing and stabilizing at low values after around 50 iterations.
To evaluate the performance of IHBA-VMD-CNN-BiLSTM, the proposed model is compared with other state-of-the-art (SOTA) models [19]. The confusion matrix and classification results for fault diagnosis are presented in Figure 13. After repeated validation, the IHBA-VMD-CNN-BiLSTM model demonstrates superior performance, achieving a recognition accuracy of 96.7% for the normal state and 100% for all fault types. The overall recognition accuracy across the 10 classes reaches an impressive 99.34%. Furthermore, the recognition speed of the IHBA-VMD-CNN-BiLSTM model is significantly improved, demonstrating its effectiveness in fault diagnosis.
Figure 14 shows the diagnostic accuracy of different experimental methods using the CWRU dataset. The figure presents the accuracy and average processing time of eight different fault diagnosis models applied to 10 types of faults. The IHBA-VMD-CNN-BiLSTM model stands out as the most effective method for fault diagnosis, achieving 100% accuracy for all faults except one, with an overall accuracy of 99.67%. It maintains a very short processing time of 3.34 s, second only to CNN-SVM, which has significantly lower accuracy. Its consistently high accuracy across different fault types demonstrates its robustness. The combination of optimization (IHBA) and advanced feature extraction (VMD) significantly enhances the performance of the CNN-BiLSTM model, making it the best choice for practical applications requiring high accuracy and efficiency in fault diagnosis.

6. Conclusions

To address the limitations of the HBA, including poor exploitation, susceptibility to local optima, and insufficient pre-exploration, this study introduces the following key enhancements: cubic chaotic mapping for population diversity, random value perturbation, and repeated random searches to prevent premature convergence and strengthen global optimization. Additionally, an elite tangent search with differential variation accelerates convergence and improves accuracy by leveraging optimal solution information. Simulation experiments validate the proposed IHBA's robustness and stability, while the bearing fault diagnosis experiments confirm its practical applicability in engineering, demonstrating its effectiveness in enhancing decomposition quality, reducing computational complexity, and achieving superior precision and adaptability in complex signal analysis tasks. In summary, the IHBA performs excellently in enhancing initial population traversal and improving global optimization capability. However, its limitations, such as sensitivity to parameter settings, high computational complexity, room for improvement in convergence speed, and the lack of broader validation, also require attention. Future research can explore these limitations in depth to further enhance the performance and application scope of the IHBA.

Author Contributions

Conceptualization, H.T.; methodology, C.Y.; software, C.Y.; validation, C.P.; formal analysis, C.Y.; investigation, H.T.; resources, H.T.; data curation, C.P.; writing—original draft preparation, C.Y.; writing—review and editing, H.T.; supervision, project administration, funding acquisition, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (51967012), the Gansu Natural Science Foundation (23JRRA1664), the Innovative Ability Enhancement Project of Gansu Provincial Higher Education (2023A-199), and the Gansu Province Longyuan Youth Innovation Talent Team Project (310100296012).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alonso-González, M.; Díaz, V.G.; Pérez, B.L.; G-Bustelo, B.C.P.; Anzola, J.P. Bearing Fault Diagnosis With Envelope Analysis and Machine Learning Approaches Using CWRU Dataset. IEEE Access 2023, 11, 57796–57805. [Google Scholar] [CrossRef]
  2. Ke, Z.; Di, C.; Bao, X. Adaptive Suppression of Mode Mixing in CEEMD Based on Genetic Algorithm for Motor Bearing Fault Diagnosis. IEEE Trans. Magn. 2022, 58, 1–6. [Google Scholar] [CrossRef]
  3. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  4. Huang, K.W.; Wu, Z.X.; Jiang, C.L.; Huang, Z.H.; Lee, S.H. WPO: A Whale Particle Optimization Algorithm. Int. J. Comput. Intell. Syst. 2023, 16, 1–16. [Google Scholar] [CrossRef]
  5. Trojovsky, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensor 2022, 22, 855. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  7. Heidari, A.; Mirjalili, S.; Faris, H.; Mafarja, M. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  8. Hashim, F.A.; Houssein, E.H.; Talpur, K.; Mabrouk, M.; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  9. Abasi, A.K.; Aloqaily, M. Optimization of CNN using modified honey badger algorithm for sleep apnea detection. Expert Syst. Appl. 2023, 229, 120484. [Google Scholar] [CrossRef]
  10. Zhou, C.; Gao, B.; Yang, H.; Zhang, X.; Liu, J.; Li, L. Junction temperature prediction of insulated-gate bipolar transistors in wind power systems based on an improved honey badger algorithm. Energies 2022, 15, 7366. [Google Scholar] [CrossRef]
  11. Düzenli, T.; Onay, F.K.; Aydemir, S.B. Improved honey badger algorithms for parameter extraction in photovoltaic models. Optik 2022, 268, 169731. [Google Scholar] [CrossRef]
  12. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  13. Mohammad, D.; Zeinab, M.; Eva, T.; Pavel, T. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar]
  14. Dehghani, M.; Trojovský, P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  15. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  16. Yong, C.; Ting, H.; Peng, C. Enhancing sparrow search algorithm with OCSSA: Integrating osprey optimization and Cauchy mutation for improved convergence and precision. Electron. Lett. 2024, 60, e13127. [Google Scholar] [CrossRef]
  17. Layeb, A. Tangent search algorithm for solving optimization problems. Neural Comput. Appl. 2022, 34, 8853–8884. [Google Scholar] [CrossRef]
  18. Case Western Reserve University Bearing Data Center. Available online: https://engineering.case.edu/bearingdatacenter (accessed on 1 October 2024).
  19. Yong, C.; Guangqing, B. Enhancing Rolling Bearing Fault Diagnosis in Motors using the OCSSA-VMD-CNN-BiLSTM Model: A Novel Approach for Fast and Accurate Identification. IEEE Access 2024, 12, 78463–78479. [Google Scholar]
Figure 1. Schematic diagram of the population filtering process.
Figure 2. Radar chart of the algorithm rankings.
Figure 3. Results of the average performance ranking.
Figure 4. Convergence boxplots of the test functions in 30-dimensional space.
Figure 5. Twelve classic test functions and their convergence curves.
Figure 6. Fitness curves for VMD parameter optimization.
Figure 7. Optimization process curves for the outer ring faulty bearing (0.007 in fault diameter).
Figure 8. Time-domain comparison between the original and reconstructed signals of different faults.
Figure 9. Bearing fault simulation test bench.
Figure 10. Flowchart of rolling bearing failure prediction based on IHBA-VMD-CNN-BiLSTM.
Figure 11. Accuracy curve of the fault diagnosis model based on IHBA-VMD-CNN-BiLSTM.
Figure 12. Loss function curve of the fault diagnosis model based on IHBA-VMD-CNN-BiLSTM.
Figure 13. The accuracy of the test results for the eight methods.
Figure 14. Diagnostic accuracy of different experimental methods.
Table 1. Main parameter settings.

Algorithm | Parameter Setting
COA | a decreases linearly from 2 to 0, N_c = 5
DBO | γ = 0.1, k = 0.1, u = 0.3, s = 0.5, α ∈ {−1, 1}
OCSSA | PD = 0.2, SD = 0.1, R_2 = 0.8, ST = 0.8
BWO | W_f, the probability of whale fall, decreases over the interval [0.1, 0.05]
OOA | r_{i,j} are random numbers in [0, 1]; I_{i,j} are random numbers from the set {1, 2}
HHO | E_0 varies from −1 to 1, J ∈ [0, 2]
HBA | β = 6, C = 2
IHBA | β = 6, C = 2, F_0 = 0.4
Table 2. Test functions.

Type | No. | Function | D | Initial Range | F_i* = F_i(x*)
Unimodal function | f1 | Shifted and Rotated Bent Cigar Function | 30 | [−100, 100]^D | 100
Unimodal function | f3 | Shifted and Rotated Zakharov Function | 30 | [−100, 100]^D | 300
Simple multimodal function | f4 | Shifted and Rotated Rosenbrock Function | 30 | [−100, 100]^D | 400
Simple multimodal function | f7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 30 | [−100, 100]^D | 700
Simple multimodal function | f8 | Shifted and Rotated Non-Continuous Rastrigin Function | 30 | [−100, 100]^D | 800
Simple multimodal function | f9 | Shifted and Rotated Levy Function | 30 | [−100, 100]^D | 900
Hybrid function | f13 | Hybrid Function 3 (N = 3) | 30 | [−100, 100]^D | 1300
Hybrid function | f14 | Hybrid Function 4 (N = 4) | 30 | [−100, 100]^D | 1400
Hybrid function | f15 | Hybrid Function 5 (N = 4) | 30 | [−100, 100]^D | 1500
Hybrid function | f16 | Hybrid Function 6 (N = 4) | 30 | [−100, 100]^D | 1600
Composition function | f21 | Composition Function 1 (N = 3) | 30 | [−100, 100]^D | 2100
Composition function | f23 | Composition Function 3 (N = 4) | 30 | [−100, 100]^D | 2300
Table 3. Optimization results and comparison for CEC2017 test functions in three different dimensions.

Fun. | D | Meas. | IHBA | OCSSA | HBA | HHO | BWO | OOA | DBO | COA
f1 | 10 | Best | 1.00e+02 | 1.04e+02 | 1.33e+02 | 1.90e+05 | 4.39e+09 | 1.55e+09 | 1.84e+08 | 4.24e+09
f1 | 10 | Std | 2.47e+03 | 2.22e+03 | 3.50e+03 | 3.01e+05 | 3.83e+09 | 2.74e+09 | 5.45e+08 | 2.86e+09
f1 | 10 | Ave | 2.06e+03 | 3.04e+03 | 3.44e+03 | 6.90e+05 | 1.42e+10 | 8.21e+09 | 1.18e+09 | 8.15e+09
f1 | 30 | Best | 6.43e+02 | 1.37e+04 | 3.95e+03 | 4.24e+07 | 5.48e+10 | 3.27e+10 | 2.04e+10 | 3.51e+10
f1 | 30 | Std | 4.83e+05 | 3.60e+05 | 8.40e+05 | 3.74e+07 | 4.97e+09 | 7.04e+09 | 3.73e+09 | 7.54e+09
f1 | 30 | Ave | 1.31e+05 | 2.35e+05 | 2.06e+05 | 8.84e+07 | 6.77e+10 | 5.12e+10 | 2.74e+10 | 5.29e+10
f1 | 50 | Best | 1.57e+06 | 6.12e+07 | 6.91e+06 | 7.22e+08 | 1.10e+11 | 8.76e+10 | 5.97e+10 | 8.83e+10
f1 | 50 | Std | 1.57e+07 | 9.29e+07 | 3.67e+08 | 7.45e+08 | 5.55e+09 | 8.24e+09 | 5.36e+09 | 8.32e+09
f1 | 50 | Ave | 2.06e+07 | 2.07e+08 | 2.54e+08 | 1.55e+09 | 1.22e+11 | 1.09e+11 | 6.90e+10 | 1.09e+11
f3 | 10 | Best | 3.00e+02 | 3.00e+02 | 3.00e+02 | 3.02e+02 | 7.61e+03 | 4.45e+03 | 5.01e+02 | 3.65e+03
f3 | 10 | Std | 2.68e−07 | 3.99e−03 | 3.05e−05 | 5.33e+01 | 1.93e+03 | 3.05e+03 | 1.49e+03 | 2.94e+03
f3 | 10 | Ave | 3.00e+02 | 3.00e+02 | 3.00e+02 | 3.40e+02 | 1.19e+04 | 1.22e+04 | 3.28e+03 | 9.47e+03
f3 | 30 | Best | 2.00e+03 | 2.76e+04 | 1.33e+04 | 3.18e+04 | 6.91e+04 | 6.90e+04 | 6.50e+04 | 6.52e+04
f3 | 30 | Std | 3.10e+03 | 8.38e+03 | 6.84e+03 | 7.26e+03 | 1.08e+04 | 7.71e+03 | 9.02e+03 | 7.46e+03
f3 | 30 | Ave | 5.86e+03 | 4.19e+04 | 2.76e+04 | 4.45e+04 | 9.32e+04 | 8.49e+04 | 8.17e+04 | 8.50e+04
f3 | 50 | Best | 4.52e+04 | 1.33e+05 | 7.60e+04 | 1.17e+05 | 1.73e+05 | 1.34e+05 | 1.68e+05 | 1.63e+05
f3 | 50 | Std | 1.29e+04 | 2.85e+04 | 2.29e+04 | 1.71e+04 | 2.32e+04 | 3.18e+04 | 4.48e+04 | 1.58e+04
f3 | 50 | Ave | 6.48e+04 | 1.85e+05 | 1.24e+05 | 1.51e+05 | 2.13e+05 | 2.12e+05 | 2.26e+05 | 1.92e+05
f4 | 10 | Best | 4.00e+02 | 4.00e+02 | 4.00e+02 | 4.00e+02 | 8.44e+02 | 4.80e+02 | 4.40e+02 | 6.21e+02
f4 | 10 | Std | 5.54e+01 | 2.23e+01 | 1.43e+00 | 3.93e+01 | 4.47e+02 | 2.71e+02 | 2.68e+01 | 3.24e+02
f4 | 10 | Ave | 4.01e+02 | 4.12e+02 | 4.02e+02 | 4.31e+02 | 1.62e+03 | 8.58e+02 | 4.82e+02 | 9.58e+02
f4 | 30 | Best | 4.10e+02 | 4.74e+02 | 4.46e+02 | 5.08e+02 | 1.11e+04 | 8.30e+03 | 3.16e+03 | 7.46e+03
f4 | 30 | Std | 2.79e+01 | 2.44e+01 | 2.33e+01 | 5.66e+01 | 2.44e+03 | 2.78e+03 | 1.93e+03 | 2.89e+03
f4 | 30 | Ave | 4.99e+02 | 5.10e+02 | 5.01e+02 | 6.06e+02 | 1.70e+04 | 1.33e+04 | 6.37e+03 | 1.37e+04
f4 | 50 | Best | 5.27e+02 | 5.13e+02 | 4.81e+02 | 7.24e+02 | 4.08e+04 | 1.79e+04 | 1.34e+04 | 2.48e+04
f4 | 50 | Std | 5.48e+01 | 6.64e+01 | 8.01e+01 | 2.19e+02 | 3.10e+03 | 7.04e+03 | 2.08e+03 | 5.95e+03
f4 | 50 | Ave | 6.09e+02 | 6.22e+02 | 6.47e+02 | 1.14e+03 | 4.61e+04 | 3.50e+04 | 1.70e+04 | 3.62e+04
f7 | 10 | Best | 7.15e+02 | 7.24e+02 | 7.19e+02 | 7.54e+02 | 8.09e+02 | 7.56e+02 | 7.61e+02 | 7.62e+02
f7 | 10 | Std | 4.71e+00 | 2.59e+01 | 9.86e+00 | 1.38e+01 | 8.72e+00 | 1.82e+01 | 1.06e+01 | 1.84e+01
f7 | 10 | Ave | 7.23e+02 | 7.62e+02 | 7.32e+02 | 7.84e+02 | 8.30e+02 | 7.90e+02 | 7.80e+02 | 7.95e+02
f7 | 30 | Best | 7.58e+02 | 8.58e+02 | 8.32e+02 | 1.12e+03 | 1.33e+03 | 1.26e+03 | 1.11e+03 | 1.27e+03
f7 | 30 | Std | 2.89e+01 | 1.51e+02 | 5.24e+01 | 6.52e+01 | 3.57e+01 | 6.42e+01 | 3.43e+01 | 5.49e+01
f7 | 30 | Ave | 8.00e+02 | 1.09e+03 | 9.10e+02 | 1.29e+03 | 1.47e+03 | 1.39e+03 | 1.20e+03 | 1.42e+03
f7 | 50 | Best | 8.69e+02 | 1.08e+03 | 1.02e+03 | 1.66e+03 | 1.99e+03 | 1.81e+03 | 1.58e+03 | 1.88e+03
f7 | 50 | Std | 4.44e+01 | 1.92e+02 | 1.12e+02 | 8.83e+01 | 3.14e+01 | 9.47e+01 | 6.95e+01 | 7.10e+01
f7 | 50 | Ave | 9.40e+02 | 1.56e+03 | 1.20e+03 | 1.86e+03 | 2.09e+03 | 2.01e+03 | 1.76e+03 | 2.03e+03
f8 | 10 | Best | 8.01e+02 | 8.19e+02 | 8.09e+02 | 8.11e+02 | 8.41e+02 | 8.21e+02 | 8.26e+02 | 8.32e+02
f8 | 10 | Std | 5.92e+00 | 9.05e+00 | 6.26e+00 | 8.85e+00 | 5.95e+00 | 9.22e+00 | 5.87e+00 | 9.20e+00
f8 | 10 | Ave | 8.12e+02 | 8.35e+02 | 8.18e+02 | 8.29e+02 | 8.54e+02 | 8.49e+02 | 8.35e+02 | 8.54e+02
f8 | 30 | Best | 8.28e+02 | 9.13e+02 | 8.42e+02 | 9.32e+02 | 1.13e+03 | 1.08e+03 | 1.01e+03 | 1.05e+03
f8 | 30 | Std | 1.63e+01 | 3.15e+01 | 2.45e+01 | 2.35e+01 | 1.58e+01 | 2.49e+01 | 2.44e+01 | 2.62e+01
f8 | 30 | Ave | 8.51e+02 | 9.70e+02 | 8.96e+02 | 9.78e+02 | 1.16e+03 | 1.12e+03 | 1.06e+03 | 1.13e+03
f8 | 50 | Best | 8.72e+02 | 1.08e+03 | 9.88e+02 | 1.14e+03 | 1.48e+03 | 1.42e+03 | 1.31e+03 | 1.40e+03
f8 | 50 | Std | 3.79e+01 | 4.42e+01 | 3.56e+01 | 3.69e+01 | 2.25e+01 | 3.28e+01 | 4.06e+01 | 3.48e+01
f8 | 50 | Ave | 9.40e+02 | 1.15e+03 | 1.05e+03 | 1.21e+03 | 1.52e+03 | 1.47e+03 | 1.39e+03 | 1.48e+03
f9 | 10 | Best | 9.00e+02 | 9.00e+02 | 9.00e+02 | 1.12e+03 | 1.45e+03 | 9.77e+02 | 9.53e+02 | 1.04e+03
f9 | 10 | Std | 2.27e−02 | 1.60e+02 | 2.51e+00 | 2.00e+02 | 1.28e+02 | 1.89e+02 | 9.49e+01 | 2.18e+02
f9 | 10 | Ave | 9.00e+02 | 1.01e+03 | 9.01e+02 | 1.50e+03 | 1.80e+03 | 1.34e+03 | 1.06e+03 | 1.41e+03
f9 | 30 | Best | 9.06e+02 | 2.81e+03 | 1.06e+03 | 5.72e+03 | 9.35e+03 | 5.84e+03 | 4.57e+03 | 4.95e+03
f9 | 30 | Std | 5.99e+01 | 7.57e+02 | 9.18e+02 | 9.25e+02 | 7.96e+02 | 1.21e+03 | 1.45e+03 | 1.80e+03
f9 | 30 | Ave | 9.76e+02 | 5.07e+03 | 2.60e+03 | 7.93e+03 | 1.12e+04 | 9.17e+03 | 7.86e+03 | 9.92e+03
f9 | 50 | Best | 1.14e+03 | 8.28e+03 | 4.60e+03 | 1.89e+04 | 3.37e+04 | 2.94e+04 | 2.14e+04 | 2.76e+04
f9 | 50 | Std | 4.02e+02 | 1.99e+03 | 4.17e+03 | 3.87e+03 | 2.16e+03 | 3.04e+03 | 3.95e+03 | 3.69e+03
f9 | 50 | Ave | 1.80e+03 | 1.32e+04 | 1.01e+04 | 2.88e+04 | 3.82e+04 | 3.40e+04 | 3.13e+04 | 3.45e+04
f13 | 10 | Best | 1.31e+03 | 2.23e+03 | 1.48e+03 | 2.11e+03 | 3.57e+05 | 2.37e+03 | 2.81e+03 | 2.44e+03
f13 | 10 | Std | 1.07e+02 | 3.09e+03 | 1.27e+04 | 1.34e+04 | 2.02e+07 | 1.23e+04 | 1.63e+04 | 6.92e+03
f13 | 10 | Ave | 1.38e+03 | 1.46e+04 | 9.24e+03 | 1.49e+04 | 1.59e+07 | 1.36e+04 | 2.39e+04 | 1.05e+04
f13 | 30 | Best | 3.01e+03 | 5.12e+03 | 7.38e+03 | 2.78e+05 | 7.33e+09 | 9.86e+08 | 2.11e+08 | 7.82e+08
f13 | 30 | Std | 2.58e+04 | 4.25e+04 | 3.56e+04 | 7.35e+05 | 4.41e+09 | 3.69e+09 | 1.13e+09 | 3.70e+09
f13 | 30 | Ave | 3.16e+04 | 3.43e+04 | 4.35e+04 | 9.50e+05 | 1.50e+10 | 7.00e+09 | 1.83e+09 | 6.95e+09
f13 | 50 | Best | 4.63e+03 | 1.35e+04 | 1.25e+04 | 3.23e+06 | 4.73e+10 | 2.48e+10 | 3.42e+09 | 1.80e+10
f13 | 50 | Std | 1.51e+04 | 4.75e+04 | 6.52e+04 | 9.01e+06 | 1.21e+10 | 1.19e+10 | 5.57e+09 | 1.07e+10
f13 | 50 | Ave | 2.10e+04 | 7.86e+04 | 6.94e+04 | 9.69e+06 | 6.67e+10 | 4.58e+10 | 1.23e+10 | 3.84e+10
f14 | 10 | Best | 1.40e+03 | 1.43e+03 | 1.44e+03 | 1.48e+03 | 1.57e+03 | 1.45e+03 | 1.52e+03 | 1.46e+03
f14 | 10 | Std | 7.90e+00 | 6.85e+01 | 5.64e+01 | 3.82e+02 | 2.01e+02 | 2.11e+03 | 7.10e+02 | 3.35e+01
f14 | 10 | Ave | 1.42e+03 | 1.50e+03 | 1.52e+03 | 1.68e+03 | 1.80e+03 | 2.55e+03 | 2.15e+03 | 1.51e+03
f14 | 30 | Best | 1.56e+03 | 5.38e+03 | 2.72e+03 | 6.68e+03 | 3.15e+06 | 1.28e+05 | 5.56e+04 | 1.41e+05
f14 | 30 | Std | 2.15e+03 | 3.40e+04 | 1.95e+04 | 7.07e+05 | 7.65e+06 | 2.50e+06 | 4.64e+05 | 1.95e+06
f14 | 30 | Ave | 2.65e+03 | 4.87e+04 | 2.67e+04 | 7.96e+05 | 1.33e+07 | 2.56e+06 | 5.94e+05 | 2.08e+06
f14 | 50 | Best | 6.85e+03 | 1.20e+05 | 3.17e+04 | 1.81e+05 | 5.03e+07 | 1.29e+07 | 4.76e+05 | 5.02e+06
f14 | 50 | Std | 5.38e+04 | 3.67e+05 | 5.14e+05 | 1.97e+06 | 1.28e+08 | 6.41e+07 | 7.42e+06 | 2.87e+07
f14 | 50 | Ave | 6.27e+04 | 4.96e+05 | 2.50e+05 | 3.06e+06 | 2.32e+08 | 9.59e+07 | 8.46e+06 | 4.63e+07
f15 | 10 | Best | 1.50e+03 | 1.63e+03 | 1.56e+03 | 2.05e+03 | 3.15e+03 | 1.82e+03 | 1.94e+03 | 1.69e+03
f15 | 10 | Std | 4.82e+01 | 7.73e+02 | 9.54e+02 | 3.02e+03 | 2.18e+03 | 4.54e+03 | 1.64e+03 | 2.89e+03
f15 | 10 | Ave | 1.53e+03 | 2.22e+03 | 1.84e+03 | 6.58e+03 | 8.69e+03 | 1.12e+04 | 4.08e+03 | 4.61e+03
f15 | 30 | Best | 1.68e+03 | 2.04e+03 | 1.96e+03 | 3.37e+04 | 8.99e+07 | 5.54e+06 | 3.61e+05 | 4.78e+06
f15 | 30 | Std | 1.36e+04 | 1.42e+04 | 1.64e+04 | 4.89e+04 | 4.60e+08 | 4.38e+08 | 4.89e+06 | 4.02e+08
f15 | 30 | Ave | 1.21e+04 | 9.61e+03 | 1.91e+04 | 9.07e+04 | 1.04e+09 | 3.83e+08 | 4.59e+06 | 4.11e+08
f15 | 50 | Best | 2.12e+03 | 5.89e+03 | 6.06e+03 | 2.73e+05 | 8.59e+09 | 2.38e+09 | 1.38e+08 | 1.44e+09
f15 | 50 | Std | 9.93e+03 | 2.09e+04 | 1.18e+04 | 4.41e+05 | 2.76e+09 | 2.96e+09 | 9.10e+08 | 3.29e+09
f15 | 50 | Ave | 1.44e+04 | 3.71e+04 | 2.51e+04 | 9.50e+05 | 1.36e+10 | 7.64e+09 | 1.62e+09 | 7.90e+09
f16 | 10 | Best | 1.60e+03 | 1.60e+03 | 1.60e+03 | 1.61e+03 | 1.92e+03 | 1.71e+03 | 1.63e+03 | 1.70e+03
f16 | 10 | Std | 5.54e+01 | 1.89e+02 | 1.00e+02 | 1.44e+02 | 1.05e+02 | 1.38e+02 | 9.35e+01 | 1.34e+02
f16 | 10 | Ave | 1.64e+03 | 1.98e+03 | 1.73e+03 | 1.87e+03 | 2.19e+03 | 1.97e+03 | 1.78e+03 | 1.97e+03
f16 | 30 | Best | 1.75e+03 | 1.99e+03 | 1.75e+03 | 2.67e+03 | 6.00e+03 | 3.90e+03 | 3.53e+03 | 3.76e+03
f16 | 30 | Std | 3.30e+02 | 3.83e+02 | 5.68e+02 | 4.91e+02 | 1.78e+03 | 6.78e+02 | 2.71e+02 | 8.40e+02
f16 | 30 | Ave | 2.35e+03 | 2.80e+03 | 2.72e+03 | 3.56e+03 | 8.39e+03 | 5.11e+03 | 3.98e+03 | 5.35e+03
f16 | 50 | Best | 2.15e+03 | 2.58e+03 | 2.41e+03 | 3.52e+03 | 7.75e+03 | 6.51e+03 | 4.62e+03 | 6.26e+03
f16 | 50 | Std | 5.67e+02 | 4.11e+02 | 4.60e+02 | 7.80e+02 | 1.74e+03 | 1.20e+03 | 5.28e+02 | 1.53e+03
f16 | 50 | Ave | 3.25e+03 | 3.64e+03 | 3.46e+03 | 4.67e+03 | 1.19e+04 | 8.41e+03 | 5.64e+03 | 8.81e+03
f21 | 10 | Best | 2.20e+03 | 2.20e+03 | 2.20e+03 | 2.20e+03 | 2.23e+03 | 2.23e+03 | 2.21e+03 | 2.23e+03
f21 | 10 | Std | 5.51e+01 | 6.09e+01 | 5.52e+01 | 6.24e+01 | 5.45e+01 | 4.93e+01 | 8.33e+00 | 4.44e+01
f21 | 10 | Ave | 2.27e+03 | 2.31e+03 | 2.28e+03 | 2.33e+03 | 2.31e+03 | 2.33e+03 | 2.22e+03 | 2.35e+03
f21 | 30 | Best | 2.33e+03 | 2.20e+03 | 2.36e+03 | 2.48e+03 | 2.58e+03 | 2.58e+03 | 2.33e+03 | 2.64e+03
f21 | 30 | Std | 1.39e+01 | 1.51e+02 | 3.11e+01 | 4.27e+01 | 6.86e+01 | 4.73e+01 | 9.83e+01 | 4.01e+01
f21 | 30 | Ave | 2.35e+03 | 2.33e+03 | 2.40e+03 | 2.57e+03 | 2.70e+03 | 2.69e+03 | 2.53e+03 | 2.71e+03
f21 | 50 | Best | 2.37e+03 | 2.49e+03 | 2.41e+03 | 2.72e+03 | 3.16e+03 | 3.02e+03 | 2.89e+03 | 3.08e+03
f21 | 50 | Std | 2.41e+01 | 6.94e+01 | 4.49e+01 | 9.14e+01 | 8.19e+01 | 5.92e+01 | 4.68e+01 | 8.02e+01
f21 | 50 | Ave | 2.41e+03 | 2.63e+03 | 2.52e+03 | 2.91e+03 | 3.30e+03 | 3.13e+03 | 2.97e+03 | 3.20e+03
f23 | 10 | Best | 2.60e+03 | 2.61e+03 | 2.61e+03 | 2.61e+03 | 2.66e+03 | 2.66e+03 | 2.63e+03 | 2.66e+03
f23 | 10 | Std | 9.08e+00 | 1.00e+01 | 1.47e+01 | 2.91e+01 | 2.94e+01 | 2.21e+01 | 1.73e+01 | 2.55e+01
f23 | 10 | Ave | 2.62e+03 | 2.62e+03 | 2.62e+03 | 2.66e+03 | 2.73e+03 | 2.69e+03 | 2.66e+03 | 2.69e+03
f23 | 30 | Best | 2.66e+03 | 2.75e+03 | 2.73e+03 | 3.04e+03 | 3.54e+03 | 3.27e+03 | 3.03e+03 | 3.25e+03
f23 | 30 | Std | 2.27e+01 | 7.59e+01 | 3.53e+01 | 1.22e+02 | 1.97e+02 | 1.53e+02 | 9.57e+01 | 1.16e+02
f23 | 30 | Ave | 2.71e+03 | 2.85e+03 | 2.79e+03 | 3.20e+03 | 3.84e+03 | 3.59e+03 | 3.16e+03 | 3.49e+03
f23 | 50 | Best | 2.80e+03 | 2.97e+03 | 2.88e+03 | 3.51e+03 | 4.58e+03 | 3.97e+03 | 3.51e+03 | 4.12e+03
Std3.64 × 1011.11 × 1027.79 × 1011.82 × 1021.44 × 1022.02 × 1021.65 × 1021.72 × 102
Ave2.87 × 1033.14 × 1033.02 × 1033.85 × 1034.83 × 1034.50 × 1033.84 × 1034.40 × 103
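For reference, the Best, Std, and Ave rows above are the usual summary statistics of the final fitness values reached over repeated independent runs of each algorithm on each function and dimension. A minimal sketch of such a summary follows; the run count of 30 and the use of the sample standard deviation are assumptions, not details stated in the table.

```python
import numpy as np

def summarize_runs(final_fitness):
    """Best/Std/Ave summary of final fitness values over repeated runs."""
    f = np.asarray(final_fitness, dtype=float)
    return {
        "Best": f.min(),       # best (smallest) final fitness found
        "Std": f.std(ddof=1),  # sample standard deviation across runs
        "Ave": f.mean(),       # mean final fitness across runs
    }

# Hypothetical usage with placeholder values standing in for 30 runs.
rng = np.random.default_rng(0)
final_fitness = 7.2e2 + 3.0e1 * rng.random(30)
print(summarize_runs(final_fitness))
```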
Table 4. Average running time of the IHBA and HBA on the CEC2017 test functions in three different dimensions.

| Fun. | IHBA (Dim. 10) | IHBA (Dim. 30) | IHBA (Dim. 50) | HBA (Dim. 10) | HBA (Dim. 30) | HBA (Dim. 50) |
|---|---|---|---|---|---|---|
| f1 | 7.86 × 10⁻² | 1.02 × 10⁻¹ | 1.40 × 10⁻¹ | 6.05 × 10⁻² | 8.30 × 10⁻² | 1.18 × 10⁻¹ |
| f3 | 7.37 × 10⁻² | 1.16 × 10⁻¹ | 1.61 × 10⁻¹ | 5.81 × 10⁻² | 8.28 × 10⁻² | 1.19 × 10⁻¹ |
| f4 | 6.99 × 10⁻² | 9.70 × 10⁻² | 1.37 × 10⁻¹ | 5.87 × 10⁻² | 8.29 × 10⁻² | 1.18 × 10⁻¹ |
| f7 | 9.24 × 10⁻² | 1.33 × 10⁻¹ | 1.85 × 10⁻¹ | 6.79 × 10⁻² | 1.02 × 10⁻¹ | 1.46 × 10⁻¹ |
| f8 | 9.25 × 10⁻² | 1.36 × 10⁻¹ | 1.81 × 10⁻¹ | 6.47 × 10⁻² | 1.02 × 10⁻¹ | 1.45 × 10⁻¹ |
| f9 | 8.80 × 10⁻² | 1.40 × 10⁻¹ | 2.06 × 10⁻¹ | 6.78 × 10⁻² | 1.05 × 10⁻¹ | 1.46 × 10⁻¹ |
| f13 | 8.65 × 10⁻² | 1.11 × 10⁻¹ | 1.51 × 10⁻¹ | 6.37 × 10⁻² | 9.54 × 10⁻² | 1.37 × 10⁻¹ |
| f14 | 9.46 × 10⁻² | 1.45 × 10⁻¹ | 1.98 × 10⁻¹ | 6.71 × 10⁻² | 1.11 × 10⁻¹ | 1.62 × 10⁻¹ |
| f15 | 8.57 × 10⁻² | 1.08 × 10⁻¹ | 1.45 × 10⁻¹ | 6.07 × 10⁻² | 9.08 × 10⁻² | 1.27 × 10⁻¹ |
| f16 | 9.42 × 10⁻² | 1.31 × 10⁻¹ | 1.81 × 10⁻¹ | 6.36 × 10⁻² | 9.92 × 10⁻² | 1.39 × 10⁻¹ |
| f21 | 1.04 × 10⁻¹ | 1.89 × 10⁻¹ | 3.01 × 10⁻¹ | 7.96 × 10⁻² | 1.47 × 10⁻¹ | 2.38 × 10⁻¹ |
| f23 | 1.26 × 10⁻¹ | 2.16 × 10⁻¹ | 3.66 × 10⁻¹ | 9.06 × 10⁻² | 1.72 × 10⁻¹ | 2.91 × 10⁻¹ |
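Table 4 shows the IHBA paying a moderate time overhead over the HBA, consistent with its extra per-iteration machinery (chaotic initialization, random-value perturbation, and the elite tangent search with differential mutation). A minimal sketch of how such averages can be measured follows; `optimizer` is a placeholder callable, and reading the figures as seconds of wall-clock time per run is an assumption.

```python
import time

def average_runtime(optimizer, n_runs=30):
    """Average wall-clock time of one optimizer run over n_runs repetitions."""
    start = time.perf_counter()
    for _ in range(n_runs):
        optimizer()
    return (time.perf_counter() - start) / n_runs

# Hypothetical usage with a stand-in workload for one benchmark function.
print(f"{average_runtime(lambda: sum(i * i for i in range(10_000))):.2e} s")
```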
Table 5. The p-values of the Wilcoxon rank-sum test (dimension 30) on the CEC2017 test functions.

| Function | OCSSA | HBA | HHO | BWO | OOA | DBO | COA |
|---|---|---|---|---|---|---|---|
| f1 | 7.48 × 10⁻² | 8.50 × 10⁻² | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f3 | 3.69 × 10⁻¹¹ | 2.03 × 10⁻⁷ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f4 | 1.17 × 10⁻⁴ | 5.09 × 10⁻⁶ | 6.12 × 10⁻¹⁰ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f7 | 1.46 × 10⁻¹⁰ | 5.97 × 10⁻⁵ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f8 | 6.67 × 10⁻¹¹ | 5.40 × 10⁻⁴ | 1.69 × 10⁻⁹ | 3.02 × 10⁻¹¹ | 3.33 × 10⁻¹¹ | 3.33 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f9 | 1.05 × 10⁻⁹ | 7.16 × 10⁻⁹ | 2.74 × 10⁻¹¹ | 2.74 × 10⁻¹¹ | 2.74 × 10⁻¹¹ | 2.74 × 10⁻¹¹ | 2.74 × 10⁻¹¹ |
| f13 | 3.02 × 10⁻¹¹ | 4.08 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f14 | 3.34 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ |
| f15 | 6.70 × 10⁻¹¹ | 2.23 × 10⁻⁹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 3.34 × 10⁻¹¹ |
| f16 | 4.69 × 10⁻⁸ | 1.87 × 10⁻⁵ | 9.26 × 10⁻⁹ | 3.02 × 10⁻¹¹ | 8.99 × 10⁻¹¹ | 4.69 × 10⁻⁸ | 6.07 × 10⁻¹¹ |
| f21 | 1.34 × 10⁻⁵ | 6.79 × 10⁻² | 3.09 × 10⁻⁶ | 1.63 × 10⁻² | 2.00 × 10⁻⁵ | 1.86 × 10⁻¹ | 2.20 × 10⁻⁷ |
| f23 | 4.98 × 10⁻⁴ | 1.68 × 10⁻³ | 2.19 × 10⁻⁸ | 3.02 × 10⁻¹¹ | 3.02 × 10⁻¹¹ | 1.09 × 10⁻¹⁰ | 3.02 × 10⁻¹¹ |
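Each entry in Table 5 tests whether the IHBA's results differ significantly from one competitor's on one function; values below 0.05 indicate a statistically significant difference. The recurring 3.02 × 10⁻¹¹ is roughly the smallest p-value the asymptotic rank-sum test can return for two samples of 30 runs, i.e., the two result sets do not overlap at all. A minimal sketch with placeholder samples follows; note that SciPy's `ranksums` omits the continuity correction some implementations apply, so its floor differs slightly.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder final-fitness samples: 30 IHBA runs vs. 30 competitor runs.
ihba_runs = 8.0e2 + 2.0e1 * rng.random(30)
competitor_runs = 9.7e2 + 3.0e1 * rng.random(30)

stat, p = ranksums(ihba_runs, competitor_runs)
print(f"p = {p:.2e}")  # p < 0.05: the difference is statistically significant
```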
Table 6. Division results of the fault dataset.

| Fault label | Fault type | Fault diameter (inch) | Training samples | Test samples | Sample length (points) |
|---|---|---|---|---|---|
| 1 | Normal | - | 90 | 30 | 2048 |
| 2 | Inner ring fault | 0.007 | 90 | 30 | 2048 |
| 3 | Inner ring fault | 0.014 | 90 | 30 | 2048 |
| 4 | Inner ring fault | 0.021 | 90 | 30 | 2048 |
| 5 | Rolling ball fault | 0.007 | 90 | 30 | 2048 |
| 6 | Rolling ball fault | 0.014 | 90 | 30 | 2048 |
| 7 | Rolling ball fault | 0.021 | 90 | 30 | 2048 |
| 8 | Outer ring fault | 0.007 | 90 | 30 | 2048 |
| 9 | Outer ring fault | 0.014 | 90 | 30 | 2048 |
| 10 | Outer ring fault | 0.021 | 90 | 30 | 2048 |
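Table 6 reflects the common segmentation practice for bearing vibration data: each of the ten classes contributes 90 training and 30 test samples of 2048 points apiece. A minimal sketch of such a split for one class follows; the placeholder record, the random (rather than sequential) window starts, and the seeds are assumptions.

```python
import numpy as np

def segment_signal(record, n_samples, length=2048, seed=0):
    """Cut a 1-D vibration record into n_samples windows of `length` points."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(record) - length, size=n_samples)
    return np.stack([record[s:s + length] for s in starts])

# Hypothetical usage for one fault class: 90 training and 30 test samples.
record = np.random.default_rng(1).standard_normal(120_000)  # placeholder signal
x_train = segment_signal(record, 90, seed=2)
x_test = segment_signal(record, 30, seed=3)
print(x_train.shape, x_test.shape)  # (90, 2048) (30, 2048)
```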