Article

Hierarchical Learning-Enhanced Chaotic Crayfish Optimization Algorithm: Improving Extreme Learning Machine Diagnostics in Breast Cancer

School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2641; https://doi.org/10.3390/math12172641
Submission received: 8 August 2024 / Revised: 22 August 2024 / Accepted: 24 August 2024 / Published: 26 August 2024

Abstract

Extreme learning machines (ELMs), single hidden-layer feedforward neural networks, are renowned for their speed and efficiency in classification and regression tasks. However, their generalization ability is often undermined by the random generation of hidden layer weights and biases. To address this issue, this paper introduces a Hierarchical Learning-based Chaotic Crayfish Optimization Algorithm (HLCCOA) aimed at enhancing the generalization ability of ELMs. Initially, to resolve the problems of slow search speed and premature convergence typical of traditional crayfish optimization algorithms (COAs), the HLCCOA utilizes chaotic sequences for population position initialization. The ergodicity of chaos is leveraged to boost population diversity, laying the groundwork for effective global search efforts. Additionally, a hierarchical learning mechanism encourages under-performing individuals to engage in extensive cross-layer learning for enhanced global exploration, while top performers directly learn from elite individuals at the highest layer to improve their local exploitation abilities. Rigorous testing with CEC2019 and CEC2022 suites shows the HLCCOA’s superiority over both the original COA and nine renowned heuristic algorithms. Ultimately, the HLCCOA-optimized extreme learning machine model, the HLCCOA-ELM, exhibits superior performance over reported benchmark models in terms of accuracy, sensitivity, and specificity for UCI breast cancer diagnosis, underscoring the HLCCOA’s practicality and robustness, as well as the HLCCOA-ELM’s commendable generalization performance.

1. Introduction

In recent years, owing to the escalating complexity in problem-solving and the increasing uncertainties in final outcomes, there has been a marked rise in the demand for efficient optimization algorithms. These algorithms can be broadly categorized into two groups, i.e., deterministic algorithms and stochastic intelligent algorithms [1]. Deterministic algorithms are suitable for problems with a clear and unchanging pathway to the solution, whereas stochastic intelligent algorithms are better equipped for highly complex and ambiguous problems, iteratively approaching or attaining optimal solutions [2]. Among them, one type of stochastic intelligent algorithm, known as the meta-heuristic algorithm, has seen widespread application across numerous scientific fields, such as engineering [3,4], economics [5], computer science [6], and so on [7].
The nature- and society-inspired mechanisms of meta-heuristic algorithms help them avoid local optima and give them broad application potential when combined with neural networks. For instance, ref. [8] utilizes artificial neural networks to predict the heat transfer performance and pressure drop in a horizontal tube with twisted-tape inserts and employs a multi-objective genetic algorithm to find the Pareto optimal front for heat transfer efficiency and friction factors, thereby optimizing design parameters. Ref. [9] describes the adjustment of hyperparameters in long short-term memory (LSTM) networks using an improved reptile search algorithm (RSA) to enhance the efficiency of wind power generation forecasting. Ref. [10] discusses the application of the seagull optimization and particle swarm algorithm in optimizing the structural framework of convolutional neural networks (CNNs) for accurate cancer region detection in oral images. Ref. [11] uses an evolutionary crow search algorithm with adaptive flight length and interactive memory mechanisms to effectively overcome the local minima problem in multilayer perceptrons, successfully applying it to feature detection in diseases such as COVID-19, diabetes, breast cancer, and cardiovascular diseases.
Furthermore, based on the objects and methods they emulate [7,12,13], meta-heuristic algorithms can be roughly divided into four categories, as illustrated in Figure 1, i.e., evolution-based, physics-based, human-based, and swarm-based algorithms.
Evolution-based algorithms draw inspiration from Darwin’s and Mendel’s theories of natural evolution by simulating the selection and reproduction processes of biological entities within their environment. Some of the most popular algorithms include the genetic algorithm (GA) [14], differential evolution (DE) [15,16], evolutionary mating algorithm (EMA) [17], arithmetic optimization algorithm (AOA) [18], and genetic programming (GP) [19]. Physics-based optimization algorithms mimic rules and phenomena observed within physical systems. Some of the most widely accepted algorithms include the electrostatic discharge algorithm (ESDA) [20], multi-verse optimizer (MVO) [21], Henry gas solubility optimization (HGSO) [22], and nuclear reaction optimization (NRO) [23]. Human-based algorithms attempt to construct global optimization mechanisms by utilizing characteristics of human social and cognitive activities. Some of the most typical algorithms include teaching–learning-based optimization (TLBO) [24], student psychology-based optimization (SPBO) [25], league championship algorithm (LCA) [26], and imperialist competitive algorithm (ICA) [27]. Additionally, swarm-based algorithms aim to emulate the foraging behaviors and movement characteristics of biological swarms. The most popular algorithms are particle swarm optimization (PSO) [28], whale optimization algorithm (WOA) [12], gray wolf optimizer (GWO) [29], cheetah optimization algorithm (CO) [30], grasshopper optimization algorithm (GOA) [31], salp swarm algorithm (SSA) [32], gorilla troops optimizer (GTO) [33], artificial hummingbird algorithm (AHA) [34], prairie dog optimization (PDO) [35], sparrow search algorithm combining sine–cosine and Cauchy mutation (SCSSA) [36], and the recently developed crayfish optimization algorithm (COA) [37].
The COA introduces a creative approach by simulating the survival strategies of crayfish in nature, including foraging, summer resort, and competition [37]. This algorithm modulates crayfish behavior patterns by adjusting environmental temperature parameters and quantifies foraging effectiveness through the amount of food consumed. In high-temperature environments, crayfish tend to engage in competitive or summer resort behaviors, corresponding to the exploration phase. Conversely, in suitable temperate conditions, crayfish become active and enter a foraging mode, denoting the exploitation phase. Therefore, due to these biomimetic characteristics, the COA is effective in solving constrained engineering design problems. However, the COA does exhibit limitations, notably its lack of learning mechanisms for individuals other than the optimal exemplar, leading to reduced population diversity. This limitation leads to poor convergence and inadequate exploratory capabilities when dealing with high-dimensional and complex non-convex problems, resulting in premature convergence issues.
In designing meta-heuristic algorithms, balancing exploration and exploitation is crucial [38,39]. Effective exploration is essential to cover the search space and identify potential optimal regions. Thorough exploitation of these regions is necessary to facilitate rapid convergence to the global optimum. However, insufficient exploration can reduce search diversity, leading to entrapment in local optima, while inadequate exploitation may prevent accurate localization of the optimal solution. Additionally, the integration of special chaotic sequences has been shown to enhance global search performance beyond what is achievable with purely random methods, as reported in refs. [11,14]. Chaotic motion facilitates the traversal of every state around the attractors, allowing meta-heuristic algorithms to explore more extensively and avoid local optima. For example, the niching chaotic optimization algorithm (NCOA) [15] and concepts of chaotic evolution (CE) [16,17] utilize the rapid mixing properties of chaos to spread more effectively across the search spaces, facilitating faster convergence to optimal solutions. Relying solely on pseudo-random sequences from chaos maps for initializing population positions can enhance optimization performance, as demonstrated by the particle swarm optimization with chaos-based initialization [40] and the multi-strategy boosted hybrid artificial hummingbird algorithm (LCAHA), which combines Lévy flight and a sinusoidal chaotic map [41].
Moreover, although the COA exhibits innovative biomimetic features, its over-reliance on elite individuals during foraging, combined with its lack of inter-group learning and competitive behavior, reduces its population diversity. Furthermore, the “No Free Lunch” theorem [42,43] posits that no single algorithm can effectively solve all types of optimization problems, which underscores the necessity for continual innovation. Indeed, the complexity and difficulty of modern engineering optimization are increasing, with problems often being non-separable, non-convex, highly parameter-dependent, and having extensive search domains [41]. When faced with such engineering optimization challenges, most meta-heuristic algorithms may encounter performance degradation or insufficient convergence, underscoring the need for ongoing development of tailored improvement strategies for different engineering requirements [44]. Motivated by the four factors mentioned above, this paper seeks to employ a novel population update strategy aimed at enhancing the effectiveness of the COA in solving complex and practical optimization problems.
This paper proposes a novel Hierarchical Learning-based Chaotic Crayfish Optimization Algorithm (HLCCOA). Initially, to maintain population diversity and avoid premature convergence, the Tent and Chebyshev mapping initialization strategy is presented. It utilizes the ergodicity and pseudo-random characteristics of chaos to expand the initial solutions across the entire solution space, thereby enhancing the early global search ability through a higher-quality initial population. Additionally, to balance the exploration and exploitation abilities of the algorithm, a hierarchical learning mechanism is integrated into COA, which not only enhances population diversity but also assists in escaping local optima. The novelties and primary contributions of the HLCCOA are as follows:
  • Tent and Chebyshev maps are integrated into the HLCCOA for initializing the population, laying the foundation for subsequent global exploration by obtaining a high-quality initial population.
  • A hierarchical learning mechanism is incorporated into the HLCCOA, which accelerates the convergence rate and improves the precision of optimization by strengthening the learning from exemplary individuals in each layer of the population.
  • Comprehensive numerical experiments are designed. The HLCCOA is compared and analyzed against nine popular existing meta-heuristic optimization algorithms using the CEC2019 and CEC2022 suites, yielding favorable experimental results.
  • An HLCCOA-ELM model for breast cancer diagnosis improves training by optimizing parameters to overcome local optima and raises accuracy, demonstrating the HLCCOA-ELM’s strong generalization performance.
The structure of this paper is outlined as follows. Section 2 briefly reviews COA’s theoretical aspects. Section 3 elaborates on the algorithmic design and structure of the HLCCOA. Section 4 presents and analyzes experimental results. Section 5 focuses on the HLCCOA-ELM model for breast cancer diagnosis, including experimental analysis and discussion. Finally, Section 6 points out conclusions and potential directions for future research.

2. Crayfish Optimization Algorithm

The crayfish optimization algorithm is inspired by the foraging, summer resort, and competitive behaviors of crayfish [37,45]. The growth rate of crayfish is influenced by changes in environmental temperature, indicating that temperature plays a significant role in their survival and behavioral patterns [46].
The definition of temperature is as shown in Equation (1), where $temp$ represents the temperature of the crayfish's environment. When $temp$ exceeds 30 °C, crayfish seek cooler places for shelter. When $temp < 30$ °C, crayfish engage in foraging behavior. Additionally, the food intake of crayfish approximates a normal distribution in response to changes in environmental temperature, as modeled in Equation (2). Notably, crayfish exhibit strong foraging behavior in temperatures ranging from 20 °C to 30 °C. A schematic representation of crayfish food intake is illustrated in Figure 2.
$$temp = rand \times 15 + 20, \tag{1}$$

$$p = C_1 \cdot \frac{1}{\sigma \sqrt{2\pi}} \cdot \exp\left(-\frac{(temp - \mu)^2}{2\sigma^2}\right), \tag{2}$$
where $\mu$ denotes the optimal temperature for crayfish growth, and $\sigma$ and $C_1$ are used to regulate the crayfish's food intake at different temperatures.
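As an illustration, the temperature and food-intake models of Equations (1) and (2) can be coded directly. The sketch below is in Python; the parameter values $\mu = 25$, $\sigma = 3$, and $C_1 = 0.2$ are assumptions made for illustration ($\mu = 25$ being the midpoint of the 20–30 °C strong-foraging range mentioned above), not values prescribed by this paper.

```python
import numpy as np

def environment_temperature(rng):
    """Eq. (1): random ambient temperature, temp = rand * 15 + 20, in [20, 35] degrees C."""
    return rng.random() * 15 + 20

def food_intake(temp, mu=25.0, sigma=3.0, c1=0.2):
    """Eq. (2): Gaussian-shaped food intake p around the optimal temperature mu.

    mu, sigma, and c1 are illustrative assumptions, not values fixed by the text.
    """
    return c1 * (1.0 / (sigma * np.sqrt(2.0 * np.pi))) \
              * np.exp(-((temp - mu) ** 2) / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
temp = environment_temperature(rng)
p = food_intake(temp)  # later used to scale the foraging step
```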

2.1. Summer Resort Pattern of Crayfish

When $temp > 30$ °C, it indicates that the temperature is too high, and crayfish enter the summer resort pattern. The update formula for the target shelter cave $X_{shade}$ is defined as in Equation (3):
$$X_{shade} = \frac{X_{G_1} + X_{G_2}}{2}, \tag{3}$$
where $X_{G_1}$ represents the best position achieved so far, and $X_{G_2}$ signifies the optimal position generated under the present motion state of the population. If $rand < 0.5$, it suggests an absence of shelter competition, enabling direct shelter entry for crayfish to evade summer, utilizing Equation (4) for this process:
$$X_i^{It+1} = X_i^{It} + C_2 \cdot rand \cdot \left(X_{shade} - X_i^{It}\right), \tag{4}$$
where $X_i^{It}$ represents the position of the $i$-th crayfish at the current iteration $It$, and $X_i^{It+1}$ denotes the position of the $i$-th crayfish at the next iteration. Additionally, $C_2$ is a decreasing coefficient, as shown in Equation (5):
$$C_2 = 2 - \left(It / Max\_Iteration\right), \tag{5}$$
where $Max\_Iteration$ represents the maximum number of iterations.

2.2. Competition Pattern of Crayfish

When $temp > 30$ °C and $rand \ge 0.5$, it implies that multiple crayfish are interested in the same shelter. At this point, they compete for the shelter according to Equation (6):
$$X_i^{It+1} = X_i^{It} - X_z^{It} + X_{shade}, \tag{6}$$
where $z$ represents a random individual crayfish, as shown in Equation (7):
$$z = round\left(rand \cdot (N - 1)\right) + 1. \tag{7}$$
During the competition pattern, crayfish vie with each other, where crayfish $X_i$ adjusts its position according to the position of another crayfish $X_z$. Through the competition process among crayfish, the search range of the COA is expanded.

2.3. Foraging Pattern of Crayfish

When $temp < 30$ °C, it is suitable for crayfish to forage. Upon foraging, crayfish assess the size of the food. If the food is too large, crayfish use their claws to tear it apart, alternating feeding with their second and third legs. In this case, the position of the target food is defined as:
$$X_{food} = X_{G_1}. \tag{8}$$
The size of food is defined as in Equation (9):
$$Q = C_3 \cdot rand \cdot \frac{fitness_i}{fitness_{food}}, \tag{9}$$
where $C_3$ is the food factor representing the maximum size of the food, typically set at 3; $fitness_i$ denotes the fitness value of the $i$-th crayfish; and $fitness_{food}$ represents the fitness value of the target food position. When $Q > (C_3 + 1)/2$, it signifies that the food is too large, and the crayfish use their claws to tear the food. The corresponding mathematical equation is defined as in Equation (10):
$$X_{food} = \exp\left(-\frac{1}{Q}\right) \times X_{food}. \tag{10}$$
After shredding the food into smaller pieces, crayfish alternately use their second and third legs to eat it. This process is simulated using a combination of sine and cosine functions. The quantity of captured food also depends on the crayfish's food intake $p$, as described by Equation (11):
$$X_i^{It+1} = X_i^{It} + X_{food} \cdot p \cdot \left(\cos(2\pi \cdot rand) - \sin(2\pi \cdot rand)\right). \tag{11}$$
When $Q \le (C_3 + 1)/2$, it indicates that the food is not too big, allowing the crayfish to move directly towards the food. This process is expressed as in Equation (12):
$$X_i^{It+1} = \left(X_i^{It} - X_{food}\right) \cdot p + p \cdot rand \cdot X_i^{It}. \tag{12}$$
During the foraging pattern, crayfish adopt appropriate foraging strategies based on the size $Q$ of the encountered food.
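To make the three behavior patterns concrete, the following Python sketch combines Equations (3)–(12) into a single per-individual update. It is a simplified rendering under stated assumptions: the argument names are ours, `food_intake` is the sketch from above, boundary handling is omitted, and a small constant guards the division in Equation (9).

```python
import numpy as np

def coa_update(x_i, x_z, x_g1, x_g2, fit_i, fit_food, it, max_it,
               temp, p, rng, c3=3.0):
    """One COA position update for crayfish i (Eqs. (3)-(12), simplified)."""
    if temp > 30.0:                           # too hot: summer resort or compete
        x_shade = (x_g1 + x_g2) / 2.0         # Eq. (3): target shelter cave
        if rng.random() < 0.5:                # Eq. (4): enter the shelter directly
            c2 = 2.0 - it / max_it            # Eq. (5): decreasing coefficient
            return x_i + c2 * rng.random() * (x_shade - x_i)
        return x_i - x_z + x_shade            # Eq. (6): compete for the shelter
    # suitable temperature: foraging pattern
    x_food = x_g1.copy()                      # Eq. (8): target food position
    q = c3 * rng.random() * fit_i / (fit_food + 1e-12)   # Eq. (9): food size
    if q > (c3 + 1.0) / 2.0:                  # food too large: shred it first
        x_food *= np.exp(-1.0 / q)            # Eq. (10)
        return x_i + x_food * p * (np.cos(2 * np.pi * rng.random())
                                   - np.sin(2 * np.pi * rng.random()))  # Eq. (11)
    return (x_i - x_food) * p + p * rng.random() * x_i   # Eq. (12)
```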

3. Hierarchical Learning-Based Chaotic Crayfish Optimization Algorithm

3.1. Chaotic Map

Chaos is a kind of nonlinear deterministic behavior characterized by extreme sensitivity to initial conditions, ergodicity, and pseudo-randomness, which renders long-term prediction impractical [47,48]. Recent studies have shown that certain chaotic systems with strong spatial ergodicity can exhibit relatively rapid search performance in exploring global optima [49,50]. Search strategies based on chaotic motion, as opposed to primarily probability-based random searches, can execute a more effective comprehensive search of the solution space [41,51]. This study introduces two types of chaotic maps with excellent ergodic properties, i.e., the Tent and Chebyshev maps, defined as follows.
The equation for the Tent map is given as follows in Equation (13):
$$y(t+1) = \begin{cases} y(t)/\gamma, & 0 < y(t) < \gamma \\ \left(1 - y(t)\right)/\left(1 - \gamma\right), & \gamma < y(t) < 1 \end{cases} \tag{13}$$
The equation for the Chebyshev map is as shown in Equation (14):
$$y(t+1) = \cos\left(k \cos^{-1} y(t)\right), \tag{14}$$
where $t$ denotes the iteration number of the chaotic map; $y(t)$ represents the chaotic value at the $t$-th iteration; $\gamma$ is the control parameter for the Tent map, ranging from 0 to 1; and $k$ serves as the control parameter for the Chebyshev map, with its value range being $(0, +\infty)$. When $\gamma = 0.4$ and $k = 5$, the Lyapunov exponents of these two dynamical systems are 3.6247 and 1.6101, respectively; both are positive, indicating entry into a chaotic state. Figure 3a presents a three-dimensional scatter diagram of chaotic sequences generated after 200 iterations of the Tent map (indicated by yellow points) and the Chebyshev map (indicated by cyan points). Figure 3b and Figure 3c respectively display the distribution histograms for the sequence points of the Tent and Chebyshev maps. Observations of the scatter diagram reveal that both chaotic maps exhibit outstanding space-filling characteristics, with their generated points nearly covering the entire spatial region. Moreover, the distributions in the histograms also illustrate the ability of the Tent and Chebyshev maps to effectively sample the space with varying degrees of probabilistic randomness during their exploration.
In the HLCCOA, the crayfish population is bifurcated into two distinct groups. The first leverages the Tent map for a uniform distribution across the search space, bolstering global exploration. The second utilizes the Chebyshev map for strategic boundary positioning, enhancing extreme value detection. This dual-strategy initialization ensures efficient boundary exploration of the solution space at the beginning and subsequent concentration on deep potential optima within the core area.
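A minimal Python sketch of this dual-map initialization follows. The half-and-half split between the two maps and the linear scaling into $[Low, Up]$ are our assumptions about implementation details not fully specified above.

```python
import numpy as np

def tent(y, gamma=0.4):
    """Eq. (13): Tent map with control parameter gamma in (0, 1)."""
    return y / gamma if y < gamma else (1.0 - y) / (1.0 - gamma)

def chebyshev(y, k=5):
    """Eq. (14): Chebyshev map; iterates stay within [-1, 1]."""
    return np.cos(k * np.arccos(y))

def chaotic_init(n, dim, low, up, rng):
    """Initialize n crayfish: first half via the Tent map, second half via Chebyshev."""
    pop = np.empty((n, dim))
    y = rng.uniform(0.01, 0.99)            # avoid fixed points of the Tent map
    for i in range(n // 2):
        for d in range(dim):
            y = tent(y)
            pop[i, d] = low + y * (up - low)             # Tent values lie in (0, 1)
    y = rng.uniform(-0.99, 0.99)
    for i in range(n // 2, n):
        for d in range(dim):
            y = chebyshev(y)
            pop[i, d] = low + (y + 1) / 2 * (up - low)   # rescale [-1, 1] -> [0, 1]
    return pop
```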

3.2. Hierarchical Learning Mechanism

To enhance the learning behavior among populations and diversity within the COA, this paper proposes a hierarchical learning mechanism. Figure 4 illustrates the division of the entire crayfish population ($CP$) into multiple hierarchies. Firstly, the population is sorted by fitness from best to worst; the rank of the best individual is 1, while that of the worst is $CP$. The population is divided into $NL$ (an integer) hierarchies. If $CP$ is divisible by $NL$, each hierarchy has an equal number of individuals; if not, the remainder $CP \bmod NL$ is added to the last hierarchy, i.e., the $NL$-th layer. Concurrently, the number of crayfish in each hierarchy is defined as $LP$. The hierarchy with the best individuals is designated as the first hierarchy, i.e., $L_1$; conversely, the layer containing the worst individuals is marked as the last hierarchy, i.e., $L_{NL}$. Let $X_{L,M}$ (where $L = 1, 2, \ldots, NL$; $M = 1, 2, \ldots, LP$) denote the $M$-th crayfish individual in the $L$-th hierarchy. The aim of the HLCCOA is to allocate suitable learning exemplars to each hierarchy. Further, $p = (p_1, p_2, \ldots, p_{NL})$ governs the number of learning exemplars for crayfish populations in different hierarchies, with the number of exemplars for the $L$-th layer being defined as in Equation (15):
$$p_L = \frac{(L - 1) \cdot LP}{CP} \times 100\%. \tag{15}$$
Building on Equation (15), individuals can choose to learn from better individuals in any hierarchy. Lower-ranking individuals enhance group diversity by learning more extensively from the majority of the whole group. Specifically, for $X_{1,M}$, learning is restricted to a few top individuals in the $L_1$ layer, where $p_1 = 0.2$, which means $X_{1,M}$ may learn only from the top $20\%$ of individuals in the $L_1$ layer. For $X_{L,M}$, exemplars are selected from the top $\sum_{l=1}^{L} LP \cdot p_l$ elite of the entire population; individuals thus continuously learn from elite individuals and effectively explore and exploit the surrounding space during behaviors like summer resort, competition, and foraging. As a result, the updating methods for individuals $X_{G_1}$ and $X_{G_2}$ in the population are as follows:
$$X_{G_1} = X_{best}^{p_1}; \quad X_{G_2} = X_{best}^{r}, \tag{16}$$
where $X_{best}^{p_1}$ represents the learning exemplars in the first layer, and $X_{best}^{r}$ denotes that the $M$-th individual $X_{L,M}$ in the $L$-th layer randomly selects a learning exemplar from the elite group. The expression for $r$ is shown as follows:
$$r = round\left(rand \cdot \left(\sum_{h=1}^{L} LP \cdot p_h - 1\right)\right) + 1. \tag{17}$$
This mechanism enables the crayfish elites to exhibit diversity and a certain degree of randomness in cave and food selection, aiding in escaping local optima and increasing the probability of discovering global optima. Additionally, the mechanism effectively retains information of potential solutions from the previous layer and integrates it thoroughly with the elite individuals of the current layer, thereby achieving a balance between search diversity and the convergence rate of the algorithm to some extent. The pseudo-code for the HLCCOA is presented in Algorithm 1, and its algorithmic workflow diagram is depicted in Figure 5.
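For concreteness, the exemplar selection of Equations (16) and (17) can be sketched as below (Python). The names are ours; `sorted_pop` is assumed to be sorted by fitness from best to worst, and drawing $X_{G_1}$ uniformly from the top $p_1$ fraction of layer $L_1$ is our reading of the mechanism.

```python
import numpy as np

def select_exemplars(sorted_pop, layer, lp, p, rng):
    """Pick exemplars X_G1 and X_G2 for an individual in the given layer (1-based).

    sorted_pop: population sorted best-to-worst; lp: individuals per layer;
    p: per-layer exemplar fractions, with p[0] = p_1 = 0.2.
    """
    # X_G1: an exemplar among the top p_1 fraction of the first layer L1
    top1 = max(1, int(lp * p[0]))
    x_g1 = sorted_pop[rng.integers(top1)]
    # Eq. (17): elite pool accumulated over layers 1..layer, then a random index r
    pool = max(1, int(sum(lp * p[h] for h in range(layer))))
    r = int(round(rng.random() * (pool - 1))) + 1
    x_g2 = sorted_pop[r - 1]                  # Eq. (16): X_G2 = X_best^r
    return x_g1, x_g2
```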
Algorithm 1 Pseudo-Code of the HLCCOA Algorithm
Require: $N$, $Dim$, $f(x)$, $X_i$, $Low$, $Up$, $NL$, $LP$, $Max\_Iteration$ ($Max\_Iteration$ is set to 500 in this paper).
Ensure: Output the optimal solution and its fitness $f(X)$.
1: Initialize the positions $X$ of the $N$ crayfish with the Tent and Chebyshev maps; calculate the fitness of the population to obtain $X_G$, $X_L$.
2: Set parameters $C_1$, $C_2$, $C_3$.
3: while $It \le Max\_Iteration$ do
4:  Define the temperature $temp$. Sort $X$ by fitness in ascending order and partition it into $NL$ layers equally.
5:  for the $i$-th layer from 1 to $NL$ do
6:   for the $j$-th individual from 1 to $LP$ do
7:    if $temp > 30$ °C then
8:     Define the cave $X_{shade}$ according to Equation (3).
9:     if $rand < 0.5$ then
10:      Crayfish conduct the summer resort stage according to Equation (4).
11:     else
12:      Crayfish compete for caves through Equation (6).
13:     end if
14:    else
15:     Obtain the food intake $p$ and food size $Q$ from Equations (2) and (9).
16:     if $Q > 2$ then
17:      Crayfish shred the food by Equation (10) and forage according to Equation (11).
18:     else
19:      Crayfish forage according to Equation (12).
20:     end if
21:    end if
22:    Update $X_G$ and $X_L$ according to Equations (16) and (17).
23:    if $f(V_{i,j}) \le f(X_{i,j})$ then
24:     $X_{i,j} \leftarrow V_{i,j}$
25:    end if
26:   end for
27:  end for
28: end while
Here, $V_{i,j}$ denotes the candidate position generated for $X_{i,j}$ in steps 7–21, which is retained only if it improves the fitness (greedy selection).
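Putting the pieces together, the main loop of Algorithm 1 might be organized as in the condensed Python skeleton below. It reuses the earlier sketches (`chaotic_init`, `food_intake`, `select_exemplars`, `coa_update`); the greedy replacement and boundary clipping are our simplifications of details the pseudo-code leaves open.

```python
import numpy as np

def hlccoa(f, n, dim, low, up, nl, max_it, rng):
    """Condensed HLCCOA skeleton following Algorithm 1 (a sketch, not reference code)."""
    lp = n // nl                                        # individuals per layer
    p = [0.2] + [l * lp / n for l in range(1, nl)]      # Eq. (15), with p_1 = 0.2
    pop = chaotic_init(n, dim, low, up, rng)            # Tent + Chebyshev initialization
    fit = np.array([f(x) for x in pop])
    for it in range(max_it):
        order = np.argsort(fit)                         # sort best-to-worst
        pop, fit = pop[order], fit[order]
        temp = rng.random() * 15 + 20                   # Eq. (1)
        pi = food_intake(temp)                          # Eq. (2)
        for i in range(n):
            layer = min(i // lp + 1, nl)                # remainder goes to last layer
            x_g1, x_g2 = select_exemplars(pop, layer, lp, p, rng)
            z = int(round(rng.random() * (n - 1)))      # Eq. (7), 0-based here
            v = np.clip(coa_update(pop[i], pop[z], x_g1, x_g2, fit[i], fit[0],
                                   it, max_it, temp, pi, rng), low, up)
            fv = f(v)
            if fv <= fit[i]:                            # greedy replacement (step 23)
                pop[i], fit[i] = v, fv
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```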

4. Experimental Results and Discussion of the HLCCOA

To rigorously assess the performance of the HLCCOA, this paper configures two sets of benchmark test suites, i.e., CEC2019 [52] and CEC2022 [53]. The CEC2019 test suite comprises 10 single-objective functions, each benchmarked against a theoretical optimum of 1. The CEC2022 suite features 12 test functions with boundary constraints, which are divided into four categories: unimodal (F1), multimodal (F2–F5), hybrid (F6–F8), and composition functions (F9–F12). The evaluations mainly consider two dimensions: 10D and 20D.
Additionally, nine current popular meta-heuristic optimization algorithms are selected for comparative analysis. Experimental results demonstrate that the HLCCOA proposed in this research shows significant promise in solving complex non-convex problems. For detailed information on the control parameters for the original COA and these nine competitors, refer to Table 1. All experiments in this section set the population size of the algorithms at 50, with 500 generations of evolution, and they are independently repeated for 30 runs. This setup is designed to facilitate comparative analysis of the optimization performance and stability differences among different algorithms.
The hardware configuration of the computer used in this study is an AMD Ryzen 7-5800H CPU with Radeon Graphics @ 3.20 GHz, and the simulation platform used for implementing all the algorithms and comparative experiments mentioned in this paper is MATLAB R2021b. Detailed experimental procedures and analysis are elaborated in the following sections.

4.1. Parameter Analysis of the HLCCOA

The HLCCOA, compared to the COA, incorporates two critical control parameters: the number of population division layers ($NL$) and the sample selection probability ($p_1$) for layer $L_1$, both of which significantly influence the evolutionary trends of the population. Therefore, conducting a sensitivity analysis on these parameters and determining their optimal configuration are essential.
Firstly, the impact of $NL$ is examined. A smaller $NL$ leads individuals to learn from a few optimal exemplars, while a larger $NL$ decreases the likelihood of selecting top performers. Comparative experiments are conducted with five $NL$ values (i.e., 2, 4, 6, 8, and 10). As shown in Figure 6, the box plots of the HLCCOA at various $NL$ values for CEC2022-20D demonstrate performance improvements over the COA. Notably, $NL = 4$ produces the most compact box plot distribution and the lowest median across most test functions, indicating higher stability and optimization efficiency. Rankings from the Friedman test, presented in Table 2, reveal that the HLCCOA with $NL = 4$ achieves the highest ranking, making it the optimal choice.
Secondly, the influence of $p_1$ is explored. A very small $p_1$ restricts learning to too narrow a set of top individuals, whereas a very large $p_1$ makes the exemplar pool overly broad. Comparative experiments are conducted with six $p_1$ values (i.e., 0.1, 0.2, 0.4, 0.6, 0.8, and 1). Figure 7 displays the box plots of the HLCCOA for various $p_1$ values in CEC2022-20D. For unimodal problems, $p_1 = 0.1$ and $p_1 = 0.2$ demonstrate superior performance, whereas for multimodal problems, $p_1 = 0.2$, $p_1 = 0.4$, and $p_1 = 0.6$ perform even better. Rankings from the Friedman test, presented in Table 2, demonstrate that the HLCCOA with $p_1 = 0.2$ achieves the highest ranking, confirming it as the optimal choice.

4.2. Wilcoxon Signed-Rank Sum Test Results and Analysis

For an effective comparative analysis of algorithm performance differences on identical problems, the Wilcoxon signed-rank test [55] is applied, with the significance level set at $5\%$. In the statistical results, “≈” indicates no significant difference, “+” denotes superiority of the HLCCOA, and “−” signifies the opposite.
In this experiment, all considered algorithms are compared using the mean ($Mean$) and standard deviation ($Std$) of the best solutions obtained so far as two evaluation criteria. The formulas are as follows:
$$Mean = \frac{1}{T} \sum_{i=1}^{T} P_i^{*}, \tag{18}$$

$$Std = \sqrt{\frac{1}{T} \sum_{i=1}^{T} \left(P_i^{*} - Mean\right)^2}, \tag{19}$$
where $P_i^{*}$ represents the best solution obtained so far in the $i$-th independent experimental run, and $T$ denotes the number of independent runs. The smaller the mean, the closer the feasible solution obtained by the algorithm is to the standard solution (true value), indicating that the algorithm possesses superior average performance. Moreover, the smaller the standard deviation, the more stable and reliable the feasible solution provided by the algorithm.
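For reference, the two criteria and the signed-rank comparison can be computed as in the following Python sketch (using numpy and scipy.stats.wilcoxon); the per-run arrays are placeholder data, not results from this paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder best-so-far values from T = 30 independent runs of two algorithms
hlccoa_runs = np.random.default_rng(1).normal(1.00, 0.01, 30)
rival_runs = np.random.default_rng(2).normal(1.05, 0.03, 30)

mean = hlccoa_runs.mean()          # Eq. (18)
std = hlccoa_runs.std()            # Eq. (19): population form, 1/T inside the root

# Paired Wilcoxon signed-rank test at the 5% significance level
stat, p_value = wilcoxon(hlccoa_runs, rival_runs)
significant = p_value < 0.05       # "+" or "-" if True, "≈" otherwise
```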
Table 3 presents the Wilcoxon signed-rank test results for various algorithms on the CEC2019 benchmark functions. It can be seen that, in dealing with most problems, the HLCCOA exhibits superior performance compared to other competitors, particularly AOA, GOA, SSA, WOA, PDO, and PSOGSA. Notably, the HLCCOA surpasses the original COA in 7 out of 10 test functions. Additionally, Table 4 and Table 5 display the Wilcoxon signed-rank test results for algorithms on the 10-dimensional and 20-dimensional test problems of the CEC2022 suite, respectively. These statistical data indicate that the HLCCOA outperforms other competitors in solving the majority of these problems across both dimensions, particularly against AOA, WOA, PDO, and PSOGSA. It notably exceeds the performance of the original COA in more than 50 % of the test functions, demonstrating its strong competitiveness.

4.3. Convergence Curve Analysis

To comprehensively assess the convergence of the HLCCOA and other algorithms, the best fitness function curves based on the data from 30 independent runs are plotted and shown in Figure 8 and Figure 9. For CEC19-F1 and CEC19-F2, which are highly conditioned, non-separable, and fully parameter-dependent multimodal problems, the HLCCOA, SCSSA, and COA converged rapidly and performed admirably. For CEC19-F3, which is multimodal but not highly conditioned, SCSSA took the lead, with the HLCCOA also surpassing algorithms such as AOA, SPBO, and WOA. In functions CEC19-F4 to CEC19-F8, which have a large number of local optima, the HLCCOA stands out, especially in CEC19-F7, where it surpasses other algorithms in the later stages of iterations. In function CEC19-F8, the HLCCOA achieves better solutions during the middle and later stages. For multimodal, non-separable problems CEC19-F9 and CEC19-F10, the HLCCOA closely follows WOA and SPBO, showing its strong competitiveness. In CEC19-F10, the HLCCOA converges quickly and is significantly superior to the other competitors. For the unimodal function CEC22-F1, the HLCCOA approaches local optimality with the highest precision compared with 10 other competitive algorithms. In the case of multimodal functions CEC22-F2 to CEC22-F5, the HLCCOA demonstrates an effective ability to avoid the pitfalls of local optima. With regards to the composite functions CEC22-F6 to CEC22-F8, the HLCCOA exhibits rapid convergence in dealing with non-smooth optimization problems. As for the composite functions CEC22-F9 to CEC22-F12, the HLCCOA shows significant improvements in its ability to handle multiple interacting optimization issues.
The time complexity of optimization algorithms is primarily influenced by the complexity of the objective function, the size of the population, and the number of evolutionary iterations. Since the population size and target functions are consistent across different algorithms in this experiment, the number of evolutionary generations is used to assess the computational burden of the algorithms. Figure 8 and Figure 9 demonstrate that, in the tests for CEC2019, CEC2022-10D, and CEC2022-20D, the HLCCOA often finds better solutions within fewer iterations. Specifically, for CEC2019, functions F4, F6, F7, F8, and F10 reached optimal solutions in 200, 100, 410, 350, and 150 generations, respectively; for CEC2022-10D, functions F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, and F11 achieved optimal outcomes in 400, 60, 100, 200, 100, 100, 120, 220, 100, 20, and 80 generations, respectively; while for CEC2022-20D, functions F1, F2, F3, F4, F5, F6, F7, F8, F10, F11, and F12 achieved their best results in 260, 60, 100, 100, 40, 100, 200, 20, 20, 100, and 400 generations, respectively. Overall, the HLCCOA demonstrates a remarkable optimization advantage for these problems.

4.4. Analysis of Box Plot Results

To visually present the performance distribution of the algorithms, Figure 10 and Figure 11 display the box plots of the HLCCOA and its competing algorithms. The box plots illustrate the final outcome of 30 independent runs, including the median, quartiles, and extremes (marked with a red “+”). In the face of test functions CEC19-F1 and CEC19-F2, the HLCCOA, COA, PDO, and SCSSA all record obtaining the optimal solution in 30 runs. When dealing with the CEC19-F3 problem, although SSA has the chance to achieve the best result, its variability is higher than that of the HLCCOA. For the CEC19-F7 problem, excluding extremes, the lowest end of the EMA box plot is the lowest, indicating its potential to find the optimal solution; however, the box plot of the HLCCOA has a relatively narrow border, which indicates its good stability. For CEC22-F6, CEC22-F8, CEC22-F9, and CEC22-F12, the HLCCOA achieves similar statistical distribution results as COA, but the median of its statistics is better than the rest of the competitors. On other problems, the HLCCOA’s box plots are typically narrower, and the median is lower, indicating both good fitness and robust performance. Especially when dealing with the CEC19-F10 problem, the bottom of the HLCCOA’s box plot is the lowest, suggesting a greater likelihood of outperforming other competitors in searching for the optimal solution.

4.5. Friedman Test for the HLCCOA

In order to fairly and comprehensively compare the performance of different algorithms, the Friedman test [56] is utilized to calculate the average ranks of each algorithm on the CEC2019 and CEC2022 test sets. Figure 12 and Figure 13 respectively present bar charts of the Friedman average ranks of the HLCCOA and other competing algorithms on the two test sets. From Figure 12, it can be observed that the HLCCOA holds the top position with an average rank of 1.95, followed by EMA, COA, SCSSA, and SSA. Figure 13 demonstrates that, in 10-dimensional and 20-dimensional problems, the HLCCOA achieves average ranks of 1.42 and 1.33, respectively, surpassing EMA, SSA, COA, and GOA, and significantly outperforming the AOA, WOA, PDO, and PSOGSA algorithms. These results highlight the outstanding competitiveness and superiority of the HLCCOA algorithm in solving non-convex complex problems.

5. Breast Cancer Detection Based on the HLCCOA-ELM

Breast cancer (BC) is a common hereditary disease characterized by the uncontrolled proliferation and expansion of breast cells [57]. If breast cancer cells are not identified and differentiated early on, the diseased cells may invade adjacent vital organs, potentially posing a threat to life. Despite significant breakthroughs in research and treatment of breast cancer, it remains a major global public health issue, presenting ongoing challenges [58]. The extreme learning machine (ELM) [59], a type of single hidden-layer feedforward neural network, is widely used in classification and regression tasks due to its rapid and effective characteristics. Its theoretical framework is shown in Figure 14. The learning process of ELM primarily consists of two steps:
  • Assume that the input of the ELM consists of $N$ distinct samples $(a_i, t_i)$, with a hidden layer composed of $K$ neurons. Further, $a_i = [a_{i1}, a_{i2}, \ldots, a_{in}]^T$ is the $n$-dimensional input vector for the $i$-th sample, and $t_i = [t_{i1}, t_{i2}, \ldots, t_{il}]^T$ is the $l$-dimensional output vector for the $i$-th sample. Random values are assigned to the connection weights $\omega_{i,j}$ ($i = 1, 2, \ldots, N$; $j = 1, 2, \ldots, K$) and to the biases $b_j$ ($j = 1, 2, \ldots, K$) between the input and hidden layers. Subsequently, the hidden layer output matrix $H$ is generated. The expression for $H$ is as follows:
    $$H = \begin{bmatrix} h_1(\omega_{1,1} \cdot a_1 + b_1) & \cdots & h_K(\omega_{1,K} \cdot a_1 + b_K) \\ \vdots & \ddots & \vdots \\ h_1(\omega_{N,1} \cdot a_N + b_1) & \cdots & h_K(\omega_{N,K} \cdot a_N + b_K) \end{bmatrix}. \tag{20}$$
  • The output function of the ELM can be expressed as follows:
    $$f_K = \sum_{j=1}^{K} \beta_j h_j(w_j, a, b_j) = H\beta, \tag{21}$$
    where $\beta_j$ represents the weight vector connecting the $j$-th hidden neuron to the output neurons, and $\beta = [\beta_1, \beta_2, \ldots, \beta_K]$ denotes the collective set of weight vectors between the hidden layer and the output layer. To achieve the minimum training error for the network, it is necessary to find the least-squares solution $\hat{\beta}$ of the linear system $H\beta = T$, where $T$ is the real output of the ELM. This training process can be represented through Equations (22) and (23):
    $$\arg\min_{\beta} \left\| T - H\beta \right\|^2, \tag{22}$$
    $$\hat{\beta} = H^{\dagger} T, \tag{23}$$
    where $H^{\dagger}$ represents the Moore–Penrose generalized inverse matrix of $H$. The final output of the network can be obtained using Equation (24):
    $$T_{out} = (H\hat{\beta})^T. \tag{24}$$
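Computationally, steps (20)–(24) reduce to one least-squares solve via the pseudoinverse. The following is a minimal numpy sketch with a sine activation (the activation used in the experiments of this section); the function names and array shapes are our own conventions.

```python
import numpy as np

def elm_train(A, T, k, rng):
    """Train an ELM: A is the (N, n) input matrix, T the (N, l) targets, k hidden neurons."""
    n = A.shape[1]
    W = rng.standard_normal((n, k))      # random input-to-hidden weights (fixed)
    b = rng.standard_normal(k)           # random hidden biases (fixed)
    H = np.sin(A @ W + b)                # Eq. (20): hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T         # Eq. (23): Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(A, W, b, beta):
    return np.sin(A @ W + b) @ beta      # Eq. (21): f_K = H * beta
```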
In the field of BC research and diagnosis, the ELM is widely employed as a computer-aided diagnosis technique (CAD), primarily aimed at assisting physicians in making more accurate and prompt diagnostic decisions [60]. Despite the significant advantages demonstrated by the ELM, it does carry certain limitations [61]: the performance of the ELM is highly contingent upon the initial settings of the network structure. Due to the random generation of hidden-layer weights and biases, this may lead to instability in its generalization ability. In some instances, the ELM is capable of providing good generalization performance, but in other cases, particularly when dealing with extremely complex or noisy datasets, its performance might diminish.
To enhance the generalization ability, stability, and reliability of the ELM in processing nonlinear and complex data, this paper proposes an HLCCOA-optimized extreme learning machine model, i.e., the HLCCOA-ELM, applied primarily to the classification diagnosis of breast cancer. The algorithmic flowchart of the model is shown in Figure 15. In this approach, the hidden layer weights and biases of the ELM are conceptualized as a solution set $P = [\omega_{11}, \ldots, \omega_{NK}, b_1, \ldots, b_K]$, which the optimization algorithm aims to refine. Initially, the positions of the individuals in the population are randomized in accordance with the dimensions of the solution space $P$. These positions are dynamically updated as the HLCCOA progresses. Upon meeting the termination criteria, the algorithm yields the optimal initial parameters for the ELM's network structure, as determined by the HLCCOA. To verify the efficacy of the HLCCOA-ELM model, this research selected the Wisconsin Diagnostic Breast Cancer (WDBC) data from the publicly available UCI dataset for classification experiments. In these experiments, the model uses the sine function as its activation function, with 30 input features, 8 neurons in the hidden layer, and a binary output indicating whether or not breast cancer is present. The population size of the HLCCOA is set to 30, and the number of iterations is set to 100. Further, 70 % of the dataset is used for training and the remaining 30 % for testing, with classification accuracy as the main metric for assessing the iterative effects of the HLCCOA, conducted over 10 independent runs. For a fair assessment of the performance of the HLCCOA-ELM, the study also presents test results of the ELM model optimized by the COA under the same experimental conditions. Figure 16 displays the confusion matrix diagrams corresponding to the highest accuracy rates of the standard ELM, COA-ELM, and HLCCOA-ELM, respectively, in 10 independent runs. The accuracy rate of the HLCCOA-ELM reached as high as 99.4152 %, followed by the COA-ELM at 98.8304 %, with the standard ELM at only 95.3216 %. Table 6 and Table 7 summarize the test results of other ELM optimization models across 10 independent runs, including detailed assessments of accuracy, sensitivity, and specificity. These tables specifically present the lowest, highest, average, and standard deviation values for each of these evaluation criteria. The data for these models are derived from test results obtained under the same experimental conditions as those documented in other literature [60,62]. The comparative results clearly demonstrate the high accuracy, sensitivity, and specificity of the HLCCOA-ELM model, as well as its commendable generalization performance.
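Concretely, each HLCCOA solution vector $P$ can be decoded into ELM parameters and scored by classification accuracy, as in the sketch below (Python). The packing order follows the description of $P$ above, while the function name, the 0.5 decision threshold, and the use of held-out accuracy as the fitness are our illustrative assumptions; a minimizing formulation would optimize $1 - \text{accuracy}$ instead.

```python
import numpy as np

def hlccoa_elm_fitness(p_vec, A_train, y_train, A_test, y_test, n_in=30, k=8):
    """Score one candidate P = [w_11, ..., w_{n_in,k}, b_1, ..., b_k]."""
    W = p_vec[: n_in * k].reshape(n_in, k)          # hidden-layer weights
    b = p_vec[n_in * k:]                            # hidden-layer biases
    H = np.sin(A_train @ W + b)                     # hidden outputs on training data
    beta = np.linalg.pinv(H) @ y_train              # analytic output weights
    pred = np.sin(A_test @ W + b) @ beta > 0.5      # binary decision (assumed threshold)
    return (pred.ravel() == y_test.ravel()).mean()  # classification accuracy
```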
Furthermore, to investigate the impact of the number of hidden neurons and the HLCCOA population size on the ELM model, box plots from 10 independent experiments are drawn, as shown in Figure 17 and Figure 18. Figure 17 displays the diagnostic accuracy, sensitivity, and specificity of the model when the number of hidden layer neurons is 4, 8, 12, 16, 20, and 24. Although specificity reaches its highest values across several settings, the HLCCOA-ELM model with 12 hidden neurons exhibits the highest median values for accuracy and sensitivity and the narrowest box plot, indicating the best model stability. Additionally, Figure 17 also indicates that increasing the number of hidden layer neurons does not necessarily enhance model performance. Figure 18 shows the diagnostic accuracy, sensitivity, and specificity for HLCCOA population sizes of 10, 30, 50, 70, 80, and 100. While all population sizes achieve the highest specificity, the HLCCOA-ELM model with a population size of 50 shows the highest median values for accuracy and sensitivity and the narrowest box plot, demonstrating optimal stability for this population size.
Figure 19 presents box plots for the HLCCOA-ELM, COA-ELM, and standard ELM models based on 10 independent trials conducted on the WDBC test set, illustrating the central tendency and dispersion for metrics such as accuracy, sensitivity, and specificity. It can be observed that the median value for the HLCCOA-ELM is the closest to 1 and exhibits the least dispersion among the data. This indicates that the HLCCOA-ELM not only maintains a high level of accuracy but also demonstrates strong robustness. Figure 20 depicts the fitness evaluation curves of the HLCCOA and COA during the ELM optimization process. The results indicate that the HLCCOA rapidly achieves an accuracy of 99.4152 % by the 11th generation, while COA only reaches an accuracy of 98.8304 % by the 66th generation. Additionally, the accuracy rates of both models are significantly higher than the optimum performance of the standard ELM model. Based on these results, it is apparent that the HLCCOA demonstrates higher applicability and effectiveness in such supervised learning optimization tasks.

6. Conclusions

This study presents the Hierarchical Learning-enhanced Chaotic Crayfish Optimization Algorithm (HLCCOA). Initially, the HLCCOA employs chaotic sequences from Tent and Chebyshev mappings for population initialization, enhancing diversity and improving global search effectiveness. The algorithm’s use of chaos’s excellent ergodicity increases the chances of identifying optimal solutions. To mitigate local optima challenges in complex non-convex problems, the HLCCOA incorporates a hierarchical learning mechanism. This approach prompts extensive cross-level learning for less efficient individuals to strengthen global exploration, while allowing high-performing individuals to learn directly from top-tier elites, refining their local exploitation skills. This strategy not only accelerates convergence but also balances exploration and exploitation. In comparative tests against 10 meta-heuristic algorithms using the CEC2019 and CEC2022 benchmarks, the HLCCOA shows superior effectiveness and robustness. Additionally, applying the HLCCOA to optimize the hidden layer weights and biases of extreme learning machines (ELMs) significantly enhances their generalization performance. The HLCCOA-ELM model, tested on the UCI breast cancer diagnostic dataset, validates the HLCCOA’s practicality and reliability, along with the strong generalization ability of the HLCCOA-ELM. Future work aims to apply HLCCOA to more complex real-world engineering challenges, including image segmentation, energy scheduling, and feature selection.

Author Contributions

Conceptualization, J.Z. and Y.D.; methodology, J.Z. and Y.D.; validation, J.Z. and Y.D.; formal analysis, J.Z.; investigation, J.Z. and Y.D.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z. and Y.D.; visualization, J.Z. and Y.D.; supervision, J.Z. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors would like to express their thanks to the reviewers, who supplied feedback to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El-kenawy, E.-S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag Goose Optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147.
  2. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409.
  3. Sang, Y.; Tan, J.; Liu, W. A Modified Sand Cat Swarm Optimization Algorithm Based on Multi-Strategy Fusion and Its Application in Engineering Problems. Mathematics 2024, 12, 2153.
  4. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219.
  5. Ozkaya, B.; Duman, S.; Kahraman, H.T.; Guvenc, U. Optimal solution of the combined heat and power economic dispatch problem by adaptive fitness-distance balance based artificial rabbits optimization algorithm. Expert Syst. Appl. 2024, 238, 122272.
  6. Awwal, A.M.; Yahaya, M.M.; Pakkaranang, N.; Pholasa, N. A New Variant of the Conjugate Descent Method for Solving Unconstrained Optimization Problems and Applications. Mathematics 2024, 12, 2430.
  7. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110.
  8. Miandoab, A.R.; Bagherzadeh, S.A.; Isfahani, A.H.M. Numerical study of the effects of twisted-tape inserts on heat transfer parameters and pressure drop across a tube carrying Graphene Oxide nanofluid: An optimization by implementation of Artificial Neural Network and Genetic Algorithm. Eng. Anal. Bound. Elem. 2022, 140, 1–11.
  9. Pavlov-Kagadejev, M.; Jovanovic, L.; Bacanin, N.; Deveci, M.; Zivkovic, M.; Tuba, M.; Strumberger, I.; Pedrycz, W. Optimizing long-short-term memory models via metaheuristics for decomposition aided wind energy generation forecasting. Artif. Intell. Rev. 2024, 57, 45.
  10. Huang, Q.; Ding, H.; Razmjooy, N. Oral cancer detection using convolutional neural network optimized by combined seagull optimization algorithm. Biomed. Signal Process. Control 2024, 87, 105546.
  11. Zamani, H.; Nadimi-Shahraki, M.H. An evolutionary crow search algorithm equipped with interactive memory mechanism to optimize artificial neural network for disease diagnosis. Biomed. Signal Process. Control 2024, 90, 105879.
  12. Deng, L.; Liu, S. Deficiencies of the whale optimization algorithm and its validation method. Expert Syst. Appl. 2024, 237, 121544.
  13. MunishKhanna; Singh, L.K.; Garg, H. A novel approach for human diseases prediction using nature inspired computing & machine learning approach. Multimed. Tools Appl. 2024, 83, 17773–17809.
  14. Cavallaro, C.; Cutello, V.; Pavone, M.; Zito, F. Machine Learning and Genetic Algorithms: A case study on image reconstruction. Knowl.-Based Syst. 2024, 284, 111194.
  15. Formica, G.; Milicchio, F. Kinship-based differential evolution algorithm for unconstrained numerical optimization. Nonlinear Dyn. 2020, 99, 1341–1361.
  16. Qiao, K.; Liang, J.; Qu, B.; Yu, K.; Yue, C.; Song, H. Differential Evolution with Level-Based Learning Mechanism. Complex Syst. Model. Simul. 2022, 2, 487–516.
  17. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H.; Mirjalili, S. Evolutionary mating algorithm. Neural Comput. Appl. 2023, 35, 35–58.
  18. Pamuk, N.; Uzun, U.E. Optimal allocation of distributed generations and capacitor banks in distribution systems using arithmetic optimization algorithm. Appl. Sci. 2024, 14, 831.
  19. Xu, M.; Mei, Y.; Zhang, F.; Zhang, M. Genetic Programming and Reinforcement Learning on Learning Heuristics for Dynamic Scheduling: A Preliminary Comparison. IEEE Comput. Intell. Mag. 2024, 19, 18–33.
  20. Fallah, A.M.; Ghafourian, E.; Shahzamani Sichani, L.; Ghafourian, H.; Arandian, B.; Nehdi, M.L. Novel Neural Network Optimized by Electrostatic Discharge Algorithm for Modification of Buildings Energy Performance. Sustainability 2023, 15, 2884.
  21. Han, Y.; Chen, W.; Heidari, A.A.; Chen, H.; Zhang, X. Balancing Exploration–Exploitation of Multi-verse Optimizer for Parameter Extraction on Photovoltaic Models. J. Bionic Eng. 2024, 21, 1022–1054.
  22. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Advances in Henry Gas Solubility Optimization: A Physics-Inspired Metaheuristic Algorithm With Its Variants and Applications. IEEE Access 2024, 12, 26062–26095.
  23. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109.
  24. Yu, X.; Zhang, W. A teaching-learning-based optimization algorithm with reinforcement learning to address wind farm layout optimization problem. Appl. Soft Comput. 2024, 151, 111135.
  25. Das, B.; Mukherjee, V.; Das, D. Applying student psychology-based optimization algorithm to optimize the performance of a thermoelectric generator. Int. J. Green Energy 2024, 21, 1–12.
  26. Hosseinzadeh, M.; Mohammed, A.H.; Rahmani, A.M.; Alenizi, F.A.; Zandavi, S.M.; Yousefpoor, E.; Ahmed, O.H.; Hussain Malik, M.; Tightiz, L. A secure routing approach based on league championship algorithm for wireless body sensor networks in healthcare. PLoS ONE 2023, 18, e0290119.
  27. Elyasi, M.; Selcuk, Y.S.; Özener, O.; Coban, E. Imperialist competitive algorithm for unrelated parallel machine scheduling with sequence-and-machine-dependent setups and compatibility and workload constraints. Comput. Ind. Eng. 2024, 190, 110086.
  28. Hao, Y.; Li, H. Target Damage Calculation Method of Nash Equilibrium Solution Based on Particle Swarm between Projectile and Target Confrontation Game. Mathematics 2024, 12, 2166.
  29. Liu, Y.; As'arry, A.; Hassan, M.K.; Hairuddin, A.A.; Mohamad, H. Review of the grey wolf optimization algorithm: Variants and applications. Neural Comput. Appl. 2024, 36, 2713–2735.
  30. Sait, S.M.; Mehta, P.; Gürses, D.; Yildiz, A.R. Cheetah optimization algorithm for optimum design of heat exchangers. Mater. Test. 2023, 65, 1230–1236.
  31. Alirezapour, H.; Mansouri, N.; Mohammad Hasani Zade, B. A Comprehensive Survey on Feature Selection with Grasshopper Optimization Algorithm. Neural Process. Lett. 2024, 56, 28.
  32. Lee, C.-Y.; Le, T.-A.; Chen, Y.-C.; Hsu, S.-C. Application of Salp Swarm Algorithm and Extended Repository Feature Selection Method in Bearing Fault Diagnosis. Mathematics 2024, 12, 1718.
  33. Abdelaal, A.K.; Alhamahmy, A.I.A.; Attia, H.E.D.; El-Fergany, A.A. Maximizing solar radiations of PV panels using artificial gorilla troops reinforced by experimental investigations. Sci. Rep. 2024, 14, 3562.
  34. Hosseinzadeh, M.; Rahmani, A.M.; Husari, F.M.; Alsalami, O.M.; Marzougui, M.; Nguyen, G.N.; Lee, S.-W. A Survey of Artificial Hummingbird Algorithm and Its Variants: Statistical Analysis, Performance Evaluation, and Structural Reviewing. Arch. Comput. Methods Eng. 2024.
  35. Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie dog optimization algorithm. Neural Comput. Appl. 2022, 34, 20017–20065.
  36. Li, A.; Quan, L.; Cui, G.; Xie, S. Sparrow search algorithm combining sine-cosine and Cauchy mutation. Neural Comput. Appl. 2022, 58, 91–99.
  37. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979.
  38. Zelinka, I.; Celikovskỳ, S.; Richter, H.; Chen, G. Evolutionary Algorithms and Chaotic Systems; Springer: Berlin/Heidelberg, Germany, 2010; Volume 267.
  39. Aditya, N.; Mahapatra, S.S. Switching from exploration to exploitation in gravitational search algorithm based on diversity with Chaos. Inf. Sci. 2023, 635, 298–327.
  40. Tian, D. Particle swarm optimization with chaos-based initialization for numerical optimization. Intell. Autom. Soft Comput. 2017, 1–12.
  41. Hu, G.; Zhong, J.; Zhao, C.; Wei, G.; Chang, C.-T. LCAHA: A hybrid artificial hummingbird algorithm with multi-strategy for engineering applications. Comput. Methods Appl. Mech. Eng. 2023, 415, 116238.
  42. Adam, S.P.; Alexandropoulos, S.-A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. Approx. Optim. Algorithms Complex. Appl. 2019, 145, 57–82.
  43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  44. Wang, J.; Li, Y.; Hu, G.; Yang, M. An enhanced artificial hummingbird algorithm and its application in truss topology engineering optimization. Adv. Eng. Inform. 2022, 54, 101761.
  45. Jia, H.; Zhou, X.; Zhang, J.; Abualigah, L.; Yildiz, A.R.; Hussien, A.G. Modified crayfish optimization algorithm for solving multiple engineering application problems. Artif. Intell. Rev. 2024, 57, 127.
  46. Allan, E.L.; Froneman, P.W.; Hodgson, A.N. Effects of temperature and salinity on the standard metabolic rate (SMR) of the caridean shrimp Palaemon peringueyi. J. Exp. Mar. Biol. Ecol. 2006, 337, 103–108.
  47. Xu, Y.-P.; Tan, J.-W.; Zhu, D.-J.; Ouyang, P.; Taheri, B. Model identification of the proton exchange membrane fuel cells by extreme learning machine and a developed version of arithmetic optimization algorithm. Energy Rep. 2021, 7, 2332–2342.
  48. Lathrop, D. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Phys. Today 2015, 68, 54–55.
  49. Huang, H.; Liang, Q.; Hu, S.; Yang, C. Chaotic heuristic assisted method for the search path planning of the multi-BWBUG cooperative system. Expert Syst. Appl. 2024, 237, 121596.
  50. Wang, B.; Zhang, Z.; Siarry, P.; Liu, X.; Królczyk, G.; Hua, D.; Brumercik, F.; Li, Z. A nonlinear African vulture optimization algorithm combining Henon chaotic mapping theory and reverse learning competition strategy. Expert Syst. Appl. 2024, 236, 121413.
  51. Zelinka, I.; Diep, Q.B.; Snášel, V.; Das, S.; Innocenti, G.; Tesi, A.; Schoen, F.; Kuznetsov, N.V. Impact of chaotic dynamics on the performance of metaheuristic optimization algorithms: An experimental analysis. Inf. Sci. 2022, 587, 692–719.
  52. Epstein, A.; Ergezer, M.; Marshall, I.; Shue, W. GADE with fitness-based opposition and tidal mutation for solving IEEE CEC2019 100-digit challenge. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 395–402.
  53. Sun, B.; Sun, Y.; Li, W. Multiple Topology SHADE with Tolerance-based Composite Framework for CEC2022 Single Objective Bound Constrained Numerical Optimization. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–8.
  54. Zishan, F.; Akbari, E.; Montoya, O.D.; Giral-Ramírez, D.A.; Molina-Cabrera, A. Efficient PID Control Design for Frequency Regulation in an Independent Microgrid Based on the Hybrid PSO-GSA Algorithm. Electronics 2022, 11, 3886.
  55. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
  56. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701.
  57. Abhisheka, B.; Biswas, S.K.; Purkayastha, B. A Comprehensive Review on Breast Cancer Detection, Classification and Segmentation Using Deep Learning. Arch. Comput. Methods Eng. 2023, 30, 5023–5052.
  58. Rakha, E.A.; Tse, G.M.; Quinn, C.M. An update on the pathological classification of breast cancer. Histopathology 2023, 82, 5–16.
  59. Udmale, S.S.; Nath, A.G.; Singh, D.; Singh, A.; Cheng, X.; Anand, D.; Singh, S.K. An optimized extreme learning machine-based novel model for bearing fault classification. Expert Syst. 2024, 41, e13432.
  60. Jiang, F.; Zhu, Q.; Tian, T. Breast Cancer Detection Based on Modified Harris Hawks Optimization and Extreme Learning Machine Embedded with Feature Weighting. Neural Process. Lett. 2023, 55, 3631–3654.
  61. Eshtay, M.; Faris, H.; Obeid, N. Metaheuristic-based extreme learning machines: A review of design formulations and applications. Int. J. Mach. Learn. Cybern. 2019, 10, 1543–1561.
  62. Kumar, P.; Nair, G.G. An efficient classification framework for breast cancer using hyper parameter tuned Random Decision Forest Classifier and Bayesian Optimization. Biomed. Signal Process. Control 2021, 68, 102682.
  63. Dalwinder, S.; Birmohan, S.; Manpreet, K. Simultaneous feature weighting and parameter determination of neural networks using ant lion optimization for the classification of breast cancer. Biocybern. Biomed. Eng. 2020, 40, 337–351.
  64. Wang, S.; Wang, Y.; Wang, D.; Yin, Y.; Wang, Y.; Jin, Y. An improved random forest-based rule extraction method for breast cancer diagnosis. Appl. Soft Comput. 2020, 86, 105941. [Google Scholar] [CrossRef]
  65. Abdel-Basset, M.; El-Shahat, D.; El-Henawy, I.; De Albuquerque, V.H.C.; Mirjalili, S. A new fusion of grey wolf optimizer algorithm with a two-phase mutation for feature selection. Expert Syst. Appl. 2020, 139, 112824. [Google Scholar] [CrossRef]
  66. Naik, A.K.; Kuppili, V.; Edla, D.R. Efficient feature selection using one-pass generalized classifier neural network and binary bat algorithm with a novel fitness function. Soft Comput. 2020, 24, 4575–4587. [Google Scholar] [CrossRef]
  67. Rao, H.; Shi, X.; Rodrigue, A.K.; Feng, J.; Xia, Y.; Elhoseny, M.; Yuan, X.; Gu, L. Feature selection based on artificial bee colony and gradient boosting decision tree. Appl. Soft Comput. 2019, 74, 634–642. [Google Scholar] [CrossRef]
  68. Wang, H.; Zheng, B.; Yoon, S.W.; Ko, H.S. A support vector machine-based ensemble algorithm for breast cancer diagnosis. Eur. J. Oper. Res. 2018, 267, 687–699. [Google Scholar] [CrossRef]
Figure 1. Classification of meta-heuristic optimization algorithms.
Figure 2. Schematic illustration of crayfish food intake influenced by temperature.
Figure 3. Scatter diagram and distribution histograms of the Tent map and the Chebyshev map.
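To make the sequences behind Figure 3 concrete, the following Python sketch generates Tent and Chebyshev chaotic sequences of the kind commonly used for population initialization. The initial value x0 = 0.37, the Tent parameter a = 0.7, and the Chebyshev order n = 4 are illustrative assumptions, not necessarily the exact settings used by the HLCCOA.

```python
import numpy as np

def tent_sequence(n_points, x0=0.37, a=0.7):
    """Tent map: x_{k+1} = x_k / a if x_k < a, else (1 - x_k) / (1 - a).
    Values stay in (0, 1), which suits position initialization after scaling."""
    x = np.empty(n_points)
    x[0] = x0
    for k in range(n_points - 1):
        x[k + 1] = x[k] / a if x[k] < a else (1.0 - x[k]) / (1.0 - a)
    return x

def chebyshev_sequence(n_points, x0=0.37, n=4):
    """Chebyshev map: x_{k+1} = cos(n * arccos(x_k)); values stay in [-1, 1]."""
    x = np.empty(n_points)
    x[0] = x0
    for k in range(n_points - 1):
        x[k + 1] = np.cos(n * np.arccos(x[k]))
    return x

# Map chaotic sequences onto a search interval [lb, ub] for one individual.
lb, ub = -100.0, 100.0
tent = tent_sequence(1000)
cheb = chebyshev_sequence(1000)
pos_tent = lb + tent * (ub - lb)                 # Tent values already lie in (0, 1)
pos_cheb = lb + (cheb + 1.0) / 2.0 * (ub - lb)   # rescale [-1, 1] onto (lb, ub)
print(pos_tent[:5], pos_cheb[:5])
```

Plotting a scatter diagram and histogram of either sequence reproduces the ergodic, well-spread coverage that motivates chaotic initialization.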
Figure 4. The entire population is divided into NL learning layers.
Figure 5. Algorithmic workflow diagram of the proposed HLCCOA.
Figure 6. Box plots of the HLCCOA with different NL values for CEC2022-20D, where red ‘+’ denotes extreme values and blue boxes show quartiles from 30 experiments.
Figure 7. Box plots of the HLCCOA with different p1 values for CEC2022-20D, where red ‘+’ denotes extreme values and blue boxes show quartiles from 30 experiments.
Figure 8. The fitness evaluation curves of the HLCCOA and other competitors for CEC2019, CEC2022-10D and CEC2022-20D (F1 and F2).
Figure 9. The fitness evaluation curves of the HLCCOA and other competitors for CEC2022-20D (from F3 to F12).
Figure 10. Box plots of the HLCCOA and other competitors for CEC2019, CEC2022-10D and CEC2022-20D (F1 and F2), where red ‘+’ denotes extreme values and blue boxes show quartiles from 30 experiments.
Figure 11. Box plots of the HLCCOA and other competitors for CEC2022-20D (from F3 to F12), where red ‘+’ denotes extreme values and blue boxes show quartiles from 30 experiments.
Figure 12. Column chart of Friedman’s mean rank among the HLCCOA and other algorithms for CEC2019.
Figure 13. Column chart of Friedman’s mean rank among the HLCCOA and other algorithms for CEC2022.
Figure 14. The network architecture of the ELM.
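As a companion to Figure 14, the sketch below implements a minimal single-hidden-layer ELM for binary classification: input weights and biases are drawn at random, and only the output weights are solved in closed form via the Moore–Penrose pseudoinverse. The sigmoid activation and the 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ELM:
    """Minimal extreme learning machine for binary classification."""

    def __init__(self, n_hidden, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Random input weights and biases -- the quantities the HLCCOA would tune.
        self.W = self.rng.uniform(-1.0, 1.0, (n_features, self.n_hidden))
        self.b = self.rng.uniform(-1.0, 1.0, self.n_hidden)
        H = sigmoid(X @ self.W + self.b)      # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ y     # closed-form output weights
        return self

    def predict(self, X):
        H = sigmoid(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)
```

In the HLCCOA-ELM, the random draw of W and b above is replaced by candidate solutions supplied by the optimizer, which is precisely what restores the generalization ability lost to random initialization.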
Figure 15. Flowchart of the ELM algorithm combined with the HLCCOA.
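The key step in the Figure 15 pipeline is evaluating a candidate solution: each crayfish position is decoded into the ELM’s input weights and biases, and its fitness reflects the resulting classification performance. One plausible formulation is sketched below; the hold-out split and the use of the error rate as the fitness value are assumptions, not a verbatim description of the paper’s objective function.

```python
import numpy as np

def elm_fitness(position, X_train, y_train, X_val, y_val, n_hidden):
    """Decode a candidate vector of length n_features*n_hidden + n_hidden into
    ELM input weights/biases and return the validation error rate (minimized)."""
    n_features = X_train.shape[1]
    W = position[: n_features * n_hidden].reshape(n_features, n_hidden)
    b = position[n_features * n_hidden :]
    H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))
    beta = np.linalg.pinv(H) @ y_train            # closed-form output weights
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    y_pred = (H_val @ beta > 0.5).astype(int)
    return np.mean(y_pred != y_val)               # error rate; lower is better
```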
Figure 16. Confusion graph analysis of the HLCCOA-ELM model.
Figure 17. Box diagram analysis of the HLCCOA-ELM model with different numbers of hidden neurons.
Figure 18. Box diagram analysis of the HLCCOA-ELM model with different population sizes.
Figure 19. Box diagram analysis of the HLCCOA-ELM model, where red ‘+’ denotes extreme values and blue boxes show quartiles from 10 experiments.
Figure 20. Fitness evaluation curves of the HLCCOA and COA during the ELM optimization process.
Table 1. The currently popular meta-heuristic optimization algorithms.

Algorithms | Parameters | Values
HLCCOA | Number of population division layers NL | NL = 4
HLCCOA | Sample selection probability (p1) for the L1 layer | p1 = 0.2
COA [37] | Decreasing coefficient C1 | C1 = 2
COA [37] | Ambient temperature temp | temp ∈ (15, 35)
COA [37] | Coefficient of food size C2 | C2 = 3
COA [37] | Coefficient of food intake C3 | C3 = 0.2
AOA [18] | Minimum acceleration | 0.2
AOA [18] | Maximum acceleration | 0.2
AOA [18] | Control parameter | 0.499
AOA [18] | Sensitive parameter | 5
GOA [31] | Adaptive parameter c | c ∈ (c_min, c_max)
GOA [31] | Intensity of attraction f | f = 0.5
GOA [31] | Attractive length scale l | l = 1.5
EMA [17] | Crossover probability Cr | Cr = 0.8
EMA [17] | Probability of encountering the predator r | r = 0.2
SSA [32] | Control parameter r1 | r1 = 2e^(-(4l/L)^2)
SSA [32] | Control parameter r2 | r2 ∈ [0, 1]
SSA [32] | Control parameter r3 | r3 ∈ [0, 1]
WOA [12] | A = 2a·r - a | a ∈ [0, 2]
WOA [12] | C = 2·r | r ∈ [0, 1]
SPBO [25] | Number of student subjects M | M = 5
PDO [35] | Account for individual PD difference ρ | ρ = 0.005
PDO [35] | Food source alarm ε | ε = 0.1
PDO [35] | Old fitness values GBest | GBest = Φ
PDO [35] | New fitness values CBest | CBest = Φ
PSOGSA [54] | Gravitational constant G0 | G0 = 1
PSOGSA [54] | Velocity inertia weight C1 | C1 = 0.5
PSOGSA [54] | Velocity inertia weight C2 | C2 = 1.5
SCSSA [36] | Proportion of finders PD | PD = 0.3
SCSSA [36] | Proportion of investigators SD | SD = 0.1
SCSSA [36] | Alert threshold R2 | R2 = 0.8
Table 2. Friedman’s mean rank among the HLCCOA with different NL and p1 values.

NL changes | Rank | p1 changes | Rank
COA | 5.6667 | HLCCOA (p1 = 0.1) | 4.3000
HLCCOA (NL = 2) | 1.6000 | HLCCOA (p1 = 0.2) | 2.0667
HLCCOA (NL = 4) | 1.4000 | HLCCOA (p1 = 0.4) | 4.9667
HLCCOA (NL = 6) | 3.7667 | HLCCOA (p1 = 0.6) | 2.3667
HLCCOA (NL = 8) | 3.7000 | HLCCOA (p1 = 0.8) | 3.6667
HLCCOA (NL = 10) | 4.8667 | HLCCOA (p1 = 1) | 3.6333
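Friedman’s mean rank, as used in Table 2 and in Figures 12 and 13, ranks the configurations separately on each test function (rank 1 is the best mean error) and then averages those ranks across functions. A small sketch with placeholder scores, assuming SciPy is available, follows:

```python
import numpy as np
from scipy.stats import rankdata

# Rows: test functions; columns: algorithm configurations.
# Placeholder mean-error values -- lower is better; not the paper's data.
scores = np.array([
    [5.2, 1.1, 1.0, 3.4, 3.3, 4.0],
    [4.8, 1.3, 1.2, 3.9, 3.6, 4.9],
])
# Rank within each row (ties get averaged ranks), then average down the columns.
ranks = np.apply_along_axis(rankdata, 1, scores)
print(ranks.mean(axis=0))   # Friedman mean rank per configuration
```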
Table 3. Statistical results of the HLCCOA compared with other optimization algorithms on CEC2019.

Fun | Index | HLCCOA | COA | AOA | GOA | EMA | SSA | WOA | SPBO | PDO | PSOGSA | SCSSA
F1 | Mean | 1.00×10^0 | 1.00×10^0 | 2.94×10^4 + | 2.05×10^6 + | 1.79×10^5 + | 1.58×10^6 + | 1.10×10^7 + | 8.49×10^1 | 1.00×10^0 | 2.29×10^7 + | 1.00×10^0
F1 | Std | 0 | 0 | 9.74×10^4 | 2.45×10^6 | 3.41×10^5 | 1.53×10^6 | 1.23×10^7 | 4.60×10^2 | 0 | 5.30×10^7 | 0
F2 | Mean | 4.88×10^0 | 4.80×10^0 | 1.02×10^4 + | 9.98×10^2 + | 1.23×10^3 + | 8.46×10^2 + | 7.51×10^3 + | 6.94×10^1 + | 5.00×10^0 | 3.07×10^3 + | 5.00×10^0
F2 | Std | 2.29×10^-1 | 3.24×10^-1 | 3.25×10^3 | 5.70×10^2 | 9.58×10^2 | 7.06×10^2 | 2.97×10^3 | 1.94×10^2 | 2.12×10^-6 | 3.10×10^3 | 0
F3 | Mean | 3.64×10^0 | 6.45×10^0 + | 9.65×10^0 + | 8.44×10^0 + | 3.63×10^0 | 3.39×10^0 | 5.60×10^0 + | 5.32×10^0 + | 7.47×10^0 + | 6.61×10^0 + | 3.16×10^0
F3 | Std | 1.69×10^0 | 2.60×10^0 | 1.10×10^0 | 2.16×10^0 | 2.16×10^0 | 1.92×10^0 | 2.10×10^0 | 1.43×10^0 | 1.83×10^0 | 2.87×10^0 | 2.05×10^0
F4 | Mean | 3.13×10^1 | 3.64×10^1 + | 4.87×10^1 + | 2.17×10^1 | 1.80×10^1 | 2.52×10^1 | 4.89×10^1 + | 3.82×10^1 + | 7.47×10^1 + | 3.73×10^1 + | 4.49×10^1 +
F4 | Std | 2.83×10^1 | 2.06×10^1 | 1.38×10^1 | 1.02×10^1 | 7.47×10^0 | 1.09×10^1 | 1.89×10^1 | 9.44×10^0 | 1.39×10^1 | 1.46×10^1 | 2.34×10^1
F5 | Mean | 1.10×10^0 | 1.20×10^0 + | 6.52×10^1 + | 1.27×10^0 + | 1.09×10^0 | 1.24×10^0 + | 2.07×10^0 + | 2.49×10^0 + | 5.52×10^1 + | 7.48×10^0 + | 1.19×10^0 +
F5 | Std | 1.12×10^-1 | 1.03×10^-1 | 2.11×10^1 | 1.68×10^-1 | 4.90×10^-2 | 1.50×10^-1 | 3.91×10^-1 | 4.24×10^-1 | 2.07×10^1 | 1.67×10^-1 | 1.13×10^-1
F6 | Mean | 3.03×10^0 | 3.90×10^0 + | 1.04×10^1 + | 4.63×10^0 + | 2.87×10^0 | 4.23×10^0 + | 8.59×10^0 + | 7.04×10^0 + | 1.06×10^1 + | 5.28×10^0 + | 4.08×10^0 +
F6 | Std | 1.65×10^0 | 1.74×10^0 | 1.54×10^0 | 1.98×10^0 | 1.46×10^0 | 2.34×10^0 | 1.68×10^0 | 1.18×10^0 | 1.17×10^0 | 1.88×10^0 | 1.65×10^0
F7 | Mean | 8.36×10^2 | 1.19×10^3 + | 1.29×10^3 + | 1.00×10^3 + | 5.85×10^2 | 1.04×10^3 + | 1.25×10^3 + | 1.20×10^3 + | 1.70×10^3 + | 1.26×10^3 + | 1.25×10^3 +
F7 | Std | 1.99×10^2 | 3.19×10^2 | 2.74×10^2 | 2.21×10^2 | 2.42×10^2 | 3.11×10^2 | 3.07×10^2 | 2.17×10^2 | 3.08×10^2 | 2.93×10^2 | 3.61×10^2
F8 | Mean | 3.59×10^0 | 3.90×10^0 + | 4.69×10^0 + | 4.11×10^0 + | 3.68×10^0 | 4.18×10^0 + | 4.63×10^0 + | 4.32×10^0 + | 4.87×10^0 + | 4.63×10^0 + | 4.10×10^0 +
F8 | Std | 4.95×10^-1 | 3.16×10^-1 | 3.91×10^-1 | 4.37×10^-1 | 4.42×10^-1 | 3.03×10^-1 | 3.61×10^-1 | 2.80×10^-1 | 2.23×10^-1 | 4.10×10^-1 | 3.95×10^-1
F9 | Mean | 1.20×10^0 | 1.34×10^0 + | 2.32×10^0 + | 1.28×10^0 + | 1.25×10^0 + | 1.37×10^0 + | 1.34×10^0 + | 1.40×10^0 + | 3.00×10^0 + | 1.46×10^0 + | 1.32×10^0 +
F9 | Std | 6.23×10^-2 | 9.58×10^-2 | 8.02×10^-1 | 1.13×10^-1 | 8.16×10^-2 | 1.68×10^-1 | 1.65×10^-1 | 1.49×10^-1 | 8.70×10^-1 | 2.24×10^-1 | 1.33×10^-1
F10 | Mean | 1.93×10^1 | 1.91×10^1 | 2.11×10^1 | 2.11×10^1 + | 2.12×10^1 | 2.03×10^1 + | 2.13×10^1 + | 2.12×10^1 + | 2.13×10^1 + | 2.11×10^1 + | 2.07×10^1 +
F10 | Std | 5.66×10^0 | 5.41×10^0 | 3.85×10^-2 | 7.50×10^-2 | 1.00×10^-1 | 3.65×10^0 | 1.36×10^-1 | 8.70×10^-2 | 1.49×10^-1 | 9.57×10^-2 | 3.29×10^0
Chaos wins (+) |  |  | 7 | 9 | 9 | 3 | 8 | 10 | 9 | 8 | 10 | 7
Similar (=) |  |  | 2 | 1 | 1 | 6 | 2 | 0 | 1 | 2 | 0 | 3
Competitor wins (-) |  |  | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Table 4. Statistical results of the HLCCOA compared with other optimization algorithms on CEC2022-10D.

Fun | Index | HLCCOA | COA | AOA | GOA | EMA | SSA | WOA | SPBO | PDO | PSOGSA | SCSSA
F1 | Mean | 3.46×10^2 | 1.84×10^3 + | 1.16×10^4 + | 3.00×10^2 | 1.65×10^3 + | 3.00×10^2 | 2.50×10^4 + | 5.51×10^2 + | 1.20×10^4 + | 5.17×10^3 | 5.74×10^2 +
F1 | Std | 5.98×10^1 | 1.08×10^3 | 4.59×10^3 | 3.96×10^1 | 1.08×10^3 | 7.75×10^1 | 1.17×10^4 | 2.55×10^2 | 4.93×10^3 | 7.91×10^3 | 2.61×10^2
F2 | Mean | 4.06×10^2 | 4.10×10^2 + | 1.19×10^3 + | 4.06×10^2 + | 4.09×10^2 + | 4.07×10^2 + | 4.56×10^2 + | 4.32×10^2 + | 9.92×10^2 + | 4.47×10^2 + | 4.11×10^2 +
F2 | Std | 1.62×10^1 | 1.67×10^1 | 4.99×10^2 | 3.33×10^0 | 4.73×10^0 | 7.80×10^0 | 7.41×10^1 | 3.11×10^1 | 3.47×10^2 | 7.23×10^1 | 1.75×10^1
F3 | Mean | 6.02×10^2 | 6.06×10^2 + | 6.37×10^2 + | 6.04×10^2 + | 6.00×10^2 | 6.12×10^2 + | 6.35×10^2 + | 6.21×10^2 + | 6.50×10^2 + | 6.20×10^2 + | 6.06×10^2 +
F3 | Std | 8.58×10^0 | 9.58×10^0 | 7.61×10^0 | 5.22×10^0 | 1.80×10^-1 | 9.41×10^0 | 1.18×10^1 | 7.06×10^0 | 8.72×10^0 | 1.61×10^1 | 1.17×10^1
F4 | Mean | 8.23×10^2 | 8.31×10^2 + | 8.31×10^2 + | 8.21×10^2 | 8.20×10^2 | 8.21×10^2 | 8.40×10^2 + | 8.28×10^2 | 8.48×10^2 + | 8.40×10^2 + | 8.30×10^2
F4 | Std | 9.93×10^0 | 5.47×10^0 | 8.17×10^0 | 9.17×10^0 | 1.21×10^1 | 8.37×10^0 | 1.48×10^1 | 8.81×10^0 | 9.46×10^0 | 1.44×10^1 | 7.46×10^0
F5 | Mean | 9.15×10^2 | 1.01×10^3 + | 1.39×10^3 + | 9.02×10^2 | 9.01×10^2 | 9.64×10^2 + | 1.44×10^3 + | 1.07×10^3 + | 1.48×10^3 + | 1.51×10^3 + | 1.35×10^3 +
F5 | Std | 7.06×10^1 | 1.96×10^2 | 1.87×10^2 | 3.29×10^0 | 1.98×10^0 | 1.47×10^2 | 3.30×10^2 | 1.13×10^2 | 1.82×10^2 | 4.46×10^2 | 2.30×10^2
F6 | Mean | 2.36×10^3 | 4.13×10^3 + | 8.58×10^3 + | 3.76×10^3 + | 4.66×10^3 + | 4.27×10^3 + | 5.13×10^3 + | 3.54×10^3 + | 8.61×10^7 + | 5.65×10^3 + | 3.28×10^3 +
F6 | Std | 5.65×10^2 | 2.00×10^3 | 2.41×10^4 | 1.82×10^3 | 2.16×10^3 | 2.15×10^3 | 2.57×10^3 | 1.92×10^3 | 1.84×10^8 | 1.99×10^3 | 1.56×10^3
F7 | Mean | 2.02×10^3 | 2.02×10^3 | 2.11×10^3 + | 2.06×10^3 + | 2.03×10^3 + | 2.05×10^3 + | 2.07×10^3 + | 2.05×10^3 + | 2.11×10^3 + | 2.05×10^3 + | 2.04×10^3 +
F7 | Std | 1.02×10^1 | 7.95×10^0 | 4.09×10^1 | 3.85×10^1 | 1.19×10^1 | 2.21×10^1 | 2.89×10^1 | 1.07×10^1 | 3.84×10^1 | 3.31×10^1 | 3.22×10^1
F8 | Mean | 2.22×10^3 | 2.22×10^3 + | 2.26×10^3 + | 2.26×10^3 + | 2.22×10^3 | 2.23×10^3 + | 2.24×10^3 + | 2.23×10^3 + | 2.25×10^3 + | 2.25×10^3 + | 2.22×10^3
F8 | Std | 4.43×10^0 | 6.78×10^0 | 6.39×10^1 | 5.66×10^1 | 4.94×10^0 | 3.03×10^0 | 1.24×10^1 | 2.62×10^0 | 2.47×10^1 | 4.74×10^1 | 9.13×10^0
F9 | Mean | 2.53×10^3 | 2.53×10^3 + | 2.73×10^3 + | 2.55×10^3 + | 2.53×10^3 | 2.55×10^3 + | 2.59×10^3 + | 2.54×10^3 + | 2.74×10^3 + | 2.56×10^3 | 2.53×10^3 +
F9 | Std | 1.79×10^-5 | 2.05×10^-4 | 5.59×10^1 | 4.78×10^1 | 6.40×10^-12 | 3.67×10^1 | 4.94×10^1 | 1.11×10^1 | 6.68×10^1 | 5.10×10^1 | 2.68×10^1
F10 | Mean | 2.53×10^3 | 2.54×10^3 | 2.65×10^3 + | 2.62×10^3 + | 2.55×10^3 + | 2.51×10^3 | 2.60×10^3 + | 2.50×10^3 | 2.59×10^3 + | 2.69×10^3 + | 2.56×10^3 +
F10 | Std | 5.00×10^1 | 6.03×10^1 | 1.79×10^2 | 2.01×10^2 | 5.71×10^1 | 3.80×10^1 | 1.90×10^2 | 3.96×10^1 | 8.12×10^1 | 3.03×10^2 | 6.28×10^1
F11 | Mean | 2.68×10^3 | 2.72×10^3 + | 3.39×10^3 + | 2.73×10^3 + | 2.66×10^3 | 2.65×10^3 | 2.82×10^3 + | 2.71×10^3 + | 3.22×10^3 + | 2.91×10^3 + | 2.74×10^3 +
F11 | Std | 1.12×10^2 | 1.34×10^2 | 4.01×10^2 | 1.73×10^2 | 7.32×10^1 | 1.13×10^2 | 1.51×10^2 | 4.24×10^1 | 3.42×10^2 | 2.15×10^2 | 2.18×10^2
F12 | Mean | 2.87×10^3 | 2.87×10^3 | 3.02×10^3 + | 2.86×10^3 | 2.86×10^3 | 2.86×10^3 | 2.89×10^3 + | 2.88×10^3 + | 2.89×10^3 + | 2.88×10^3 | 2.86×10^3
F12 | Std | 3.83×10^0 | 1.79×10^0 | 6.20×10^1 | 1.93×10^0 | 1.99×10^0 | 1.68×10^0 | 3.62×10^1 | 1.46×10^1 | 1.14×10^1 | 2.44×10^1 | 4.38×10^0
Chaos wins (+) |  |  | 9 | 12 | 8 | 5 | 7 | 12 | 10 | 12 | 9 | 9
Similar (=) |  |  | 2 | 0 | 2 | 3 | 3 | 0 | 1 | 0 | 3 | 1
Competitor wins (-) |  |  | 1 | 0 | 2 | 4 | 2 | 0 | 1 | 0 | 0 | 2
Table 5. Statistical results of the HLCCOA compared with other optimization algorithms on CEC2022-20D.

Fun | Index | HLCCOA | COA | AOA | GOA | EMA | SSA | WOA | SPBO | PDO | PSOGSA | SCSSA
F1 | Mean | 1.01×10^4 | 4.13×10^4 + | 3.50×10^4 + | 1.05×10^3 | 4.73×10^4 + | 4.19×10^3 | 2.64×10^4 + | 1.48×10^3 | 7.63×10^4 + | 2.12×10^4 + | 4.52×10^4 +
F1 | Std | 3.69×10^3 | 1.43×10^4 | 1.23×10^4 | 1.15×10^3 | 1.08×10^4 | 2.45×10^3 | 7.00×10^3 | 6.37×10^2 | 1.85×10^4 | 1.29×10^4 | 1.52×10^4
F2 | Mean | 4.52×10^2 | 4.70×10^2 + | 2.39×10^3 + | 4.52×10^2 | 4.54×10^2 | 4.60×10^2 | 5.82×10^2 + | 5.48×10^2 + | 2.18×10^3 + | 4.96×10^2 + | 4.66×10^2 +
F2 | Std | 1.19×10^1 | 2.44×10^1 | 7.13×10^2 | 2.10×10^1 | 1.25×10^1 | 1.42×10^1 | 7.10×10^1 | 4.30×10^1 | 5.69×10^2 | 6.71×10^1 | 3.64×10^1
F3 | Mean | 6.12×10^2 | 6.27×10^2 + | 6.62×10^2 + | 6.24×10^2 + | 6.00×10^2 | 6.40×10^2 + | 6.67×10^2 + | 6.52×10^2 + | 6.76×10^2 + | 6.50×10^2 + | 6.24×10^2 +
F3 | Std | 1.33×10^1 | 1.73×10^1 | 7.56×10^0 | 1.10×10^1 | 6.32×10^-1 | 1.49×10^1 | 1.25×10^1 | 9.32×10^0 | 8.06×10^0 | 1.27×10^1 | 1.47×10^1
F4 | Mean | 8.81×10^2 | 8.84×10^2 | 9.49×10^2 + | 8.71×10^2 | 9.25×10^2 + | 8.78×10^2 | 9.28×10^2 + | 9.17×10^2 + | 9.55×10^2 + | 9.19×10^2 + | 8.95×10^2 +
F4 | Std | 1.46×10^1 | 1.77×10^1 | 1.54×10^1 | 2.25×10^1 | 2.72×10^1 | 2.36×10^1 | 3.25×10^1 | 1.60×10^1 | 2.65×10^1 | 3.71×10^1 | 1.80×10^1
F5 | Mean | 1.84×10^3 | 2.61×10^3 + | 2.94×10^3 + | 1.77×10^3 | 9.54×10^2 | 2.10×10^3 | 4.15×10^3 + | 3.05×10^3 + | 3.10×10^3 + | 3.71×10^3 + | 2.39×10^3 +
F5 | Std | 6.77×10^2 | 4.68×10^2 | 4.43×10^2 | 8.96×10^2 | 9.31×10^1 | 5.91×10^2 | 1.34×10^3 | 4.82×10^2 | 6.29×10^2 | 1.24×10^3 | 2.61×10^2
F6 | Mean | 4.38×10^3 | 4.59×10^3 | 8.61×10^8 + | 1.15×10^4 + | 5.03×10^3 | 9.43×10^3 | 1.28×10^6 + | 8.28×10^5 + | 9.57×10^8 + | 1.91×10^6 + | 7.44×10^3
F6 | Std | 2.39×10^3 | 4.14×10^3 | 7.09×10^8 | 1.37×10^4 | 3.86×10^3 | 8.52×10^3 | 1.97×10^6 | 5.44×10^5 | 9.17×10^8 | 7.01×10^6 | 7.51×10^3
F7 | Mean | 2.11×10^3 | 2.12×10^3 | 2.19×10^3 + | 2.19×10^3 + | 2.07×10^3 | 2.12×10^3 + | 2.24×10^3 + | 2.12×10^3 + | 2.22×10^3 + | 2.23×10^3 + | 2.12×10^3 +
F7 | Std | 9.96×10^1 | 8.44×10^1 | 4.67×10^1 | 7.76×10^1 | 3.19×10^1 | 3.62×10^1 | 5.65×10^1 | 2.67×10^1 | 5.89×10^1 | 7.59×10^1 | 4.23×10^1
F8 | Mean | 2.23×10^3 | 2.28×10^3 + | 2.47×10^3 + | 2.30×10^3 + | 2.23×10^3 + | 2.29×10^3 + | 2.30×10^3 + | 2.25×10^3 + | 2.78×10^3 + | 2.36×10^3 + | 2.25×10^3
F8 | Std | 2.11×10^0 | 6.35×10^1 | 2.03×10^2 | 6.53×10^1 | 2.25×10^1 | 7.47×10^1 | 6.86×10^1 | 2.25×10^1 | 5.97×10^2 | 1.22×10^2 | 4.22×10^1
F9 | Mean | 2.48×10^3 | 2.48×10^3 | 3.07×10^3 + | 2.52×10^3 + | 2.48×10^3 + | 2.51×10^3 + | 2.58×10^3 + | 2.55×10^3 + | 2.92×10^3 + | 2.52×10^3 + | 2.48×10^3
F9 | Std | 1.16×10^1 | 1.33×10^1 | 2.04×10^2 | 4.84×10^1 | 6.94×10^0 | 3.25×10^1 | 5.17×10^1 | 3.96×10^1 | 1.37×10^2 | 4.62×10^1 | 6.55×10^-2
F10 | Mean | 2.60×10^3 | 3.82×10^3 + | 5.47×10^3 + | 4.47×10^3 + | 2.96×10^3 + | 3.36×10^3 + | 4.40×10^3 + | 2.51×10^3 | 5.77×10^3 + | 3.88×10^3 + | 3.85×10^3 +
F10 | Std | 4.16×10^2 | 1.26×10^3 | 1.19×10^3 | 7.88×10^2 | 4.19×10^2 | 1.07×10^3 | 1.46×10^3 | 5.28×10^1 | 1.32×10^3 | 1.08×10^3 | 1.03×10^3
F11 | Mean | 2.93×10^3 | 3.00×10^3 + | 8.30×10^3 + | 3.12×10^3 + | 3.01×10^3 + | 2.95×10^3 | 3.40×10^3 + | 4.23×10^3 + | 7.89×10^3 + | 4.11×10^3 + | 2.94×10^3 +
F11 | Std | 7.85×10^1 | 1.20×10^2 | 9.36×10^2 | 4.06×10^2 | 2.62×10^2 | 1.45×10^2 | 1.04×10^2 | 3.67×10^2 | 9.03×10^2 | 1.27×10^3 | 9.54×10^1
F12 | Mean | 2.99×10^3 | 2.99×10^3 | 3.77×10^3 + | 2.98×10^3 | 2.95×10^3 | 2.97×10^3 | 3.10×10^3 + | 3.15×10^3 + | 3.18×10^3 + | 3.01×10^3 | 3.01×10^3
F12 | Std | 3.15×10^1 | 2.98×10^1 | 2.04×10^2 | 3.48×10^1 | 1.11×10^1 | 1.98×10^1 | 1.21×10^2 | 1.16×10^2 | 8.04×10^1 | 4.96×10^1 | 5.03×10^1
Chaos wins (+) |  |  | 7 | 12 | 7 | 6 | 5 | 12 | 10 | 12 | 11 | 8
Similar (=) |  |  | 4 | 0 | 3 | 2 | 5 | 0 | 0 | 0 | 1 | 3
Competitor wins (-) |  |  | 1 | 0 | 2 | 4 | 2 | 0 | 2 | 0 | 0 | 1
Table 6. Comparisons of the proposed model with benchmark models.

Metric | Model | Lowest (%) | Highest (%) | Average (%) | Std (%)
Accuracy | BPNN | 84.0237 | 94.0828 | 91.0651 | 3.0108
Accuracy | GRNN | 89.9408 | 89.9408 | 89.9408 | 0.0000
Accuracy | ELM | 84.2110 | 95.3220 | 90.3510 | 4.0539
Accuracy | FW-ELM | 94.0828 | 96.4497 | 95.0888 | 0.7026
Accuracy | FW-HHO-ELM | 97.0414 | 98.8166 | 97.7515 | 0.4428
Accuracy | FW-WOA-ELM | 96.4497 | 98.8166 | 97.8107 | 0.6509
Accuracy | FW-AFSA-ELM | 95.8580 | 98.2249 | 97.0414 | 0.7001
Accuracy | FW-PHHO-ELM | 98.2249 | 99.4083 | 98.7574 | 0.3187
Accuracy | COA-ELM | 97.0760 | 98.8300 | 98.1290 | 0.6639
Accuracy | HLCCOA-ELM | 98.8300 | 99.4150 | 99.1230 | 0.3082
Sensitivity | BPNN | 70.4918 | 95.0820 | 82.9508 | 7.5196
Sensitivity | GRNN | 86.8852 | 86.8852 | 86.8852 | 0.0000
Sensitivity | ELM | 70.4230 | 94.4444 | 83.1400 | 6.4359
Sensitivity | FW-ELM | 85.9649 | 91.2281 | 88.7719 | 1.4035
Sensitivity | FW-HHO-ELM | 94.7368 | 98.2456 | 96.3158 | 1.4573
Sensitivity | FW-WOA-ELM | 94.7368 | 96.4912 | 96.1404 | 0.7018
Sensitivity | FW-AFSA-ELM | 92.9825 | 98.2456 | 95.9649 | 1.3702
Sensitivity | FW-PHHO-ELM | 96.4912 | 98.2456 | 97.3684 | 0.8772
Sensitivity | COA-ELM | 92.1880 | 97.1010 | 95.4630 | 1.5433
Sensitivity | HLCCOA-ELM | 96.5520 | 100.0000 | 97.9970 | 1.3076
Specificity | BPNN | 89.8148 | 99.0741 | 95.6481 | 3.0160
Specificity | GRNN | 91.6667 | 91.6667 | 91.6667 | 0.0000
Specificity | ELM | 86.1390 | 100.0000 | 94.7000 | 4.2570
Specificity | FW-ELM | 96.4286 | 100.0000 | 98.3036 | 1.1607
Specificity | FW-HHO-ELM | 97.3214 | 100.0000 | 98.4821 | 0.8036
Specificity | FW-WOA-ELM | 96.4286 | 100.0000 | 98.6607 | 1.1469
Specificity | FW-AFSA-ELM | 95.5357 | 99.1071 | 97.5893 | 1.2012
Specificity | FW-PHHO-ELM | 99.1071 | 100.0000 | 99.4643 | 0.4374
Specificity | COA-ELM | 99.0100 | 100.0000 | 99.7180 | 0.4545
Specificity | HLCCOA-ELM | 100.0000 | 100.0000 | 100.0000 | 0.0000
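The metrics in Tables 6 and 7 follow the standard confusion-matrix definitions: accuracy = (TP + TN)/(TP + TN + FP + FN), sensitivity = TP/(TP + FN), and specificity = TN/(TN + FP). The sketch below aggregates them over repeated runs in the spirit of Table 6's Lowest/Highest/Average/Std columns; the confusion counts are placeholders, not the paper's results.

```python
import numpy as np

def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall on malignant), specificity (on benign), in %."""
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Placeholder confusion counts (TP, TN, FP, FN) from several independent runs.
runs = [(58, 110, 0, 2), (59, 109, 1, 1), (57, 110, 0, 3)]
acc = np.array([diagnostic_metrics(*r)[0] for r in runs])
print(f"Lowest {acc.min():.4f}%  Highest {acc.max():.4f}%  "
      f"Average {acc.mean():.4f}%  Std {acc.std(ddof=1):.4f}%")
```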
Table 7. Comparisons of the proposed model with models reported in the literature.

Reference | Model | Acc (%) | Sens (%) | Spec (%)
Proposed model | HLCCOA-ELM | 99.123 | 97.997 | 100
Proposed model | COA-ELM | 98.129 | 95.463 | 99.718
Feng et al. [60] | FW-PHHO-ELM | 98.76 | 97.37 | 99.46
Pratheep et al. [62] | FW-BOA-RDF | 84 | 100 |
Singh et al. [63] | FW-ALO-BPNN | 98.37 | 96.43 | 99.52
Wang et al. [64] | IRFRE | 95.09 | 93.4 | 96.09
Abdel-Basset et al. [65] | FS-TMGWO-KNN | 94.82 |  |
Naik et al. [66] | FS-BBA-OGCNN | 93.54 | 90.76 | 95.24
Rao et al. [67] | FS-ABC-GBDT | 92.8 |  |
Wang et al. [68] | SVM-Ensemble | 97.68 | 94.75 | 99.49