Article

Dynamic Chaotic Opposition-Based Learning-Driven Hybrid Aquila Optimizer and Artificial Rabbits Optimization Algorithm: Framework and Applications

College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Processes 2022, 10(12), 2703; https://doi.org/10.3390/pr10122703
Submission received: 21 November 2022 / Revised: 9 December 2022 / Accepted: 12 December 2022 / Published: 14 December 2022
(This article belongs to the Special Issue Evolutionary Process for Engineering Optimization (II))

Abstract
Aquila Optimizer (AO) and Artificial Rabbits Optimization (ARO) are two recently developed meta-heuristic optimization algorithms. Although AO has powerful exploration capability, it still suffers from poor solution accuracy and premature convergence on some complex cases due to its insufficient exploitation phase. In contrast, ARO possesses very competitive exploitation potential, but its exploration ability is less satisfactory. To ameliorate these limitations within a single algorithm and achieve better overall optimization performance, this paper proposes a novel chaotic opposition-based learning-driven hybrid AO and ARO algorithm called CHAOARO. Firstly, the global exploration phase of AO is combined with the local exploitation phase of ARO to maintain their respective valuable search capabilities. Then, an adaptive switching mechanism (ASM) is designed to better balance the exploration and exploitation procedures. Finally, we introduce the chaotic opposition-based learning (COBL) strategy to prevent the algorithm from falling into local optima. To comprehensively verify the effectiveness and superiority of the proposed work, CHAOARO is compared with the original AO, ARO, and several state-of-the-art algorithms on 23 classical benchmark functions and the IEEE CEC2019 test suite. Systematic comparisons demonstrate that CHAOARO significantly outperforms the competitor methods in terms of solution accuracy, convergence speed, and robustness. Furthermore, the promising prospects of CHAOARO in real-world applications are highlighted by resolving five industrial engineering design problems and the photovoltaic (PV) model parameter identification problem.

1. Introduction

Optimization can be considered the process of finding the best solution among all candidates for a particular problem so as to maximize profit, efficiency, and performance with limited resource consumption [1]. Optimization problems exist widely across disciplines and engineering fields, such as feature selection [2], industrial design [3], Internet of Things task scheduling [4], data clustering [5], and aerospace control [6,7], which has made the study of optimization techniques a hot topic that has drawn the attention of many scholars. Deterministic mathematical programming methods commonly require the target function of an optimization problem to be convex and differentiable. Nevertheless, over the past few decades, the complexity of real-life optimization problems has increased dramatically, and these conventional methods struggle to find optimal or near-optimal solutions effectively when facing the challenges of large-scale, multimodal, and non-convex search domains [8]. Therefore, meta-heuristic algorithms (MAs) have gradually become very popular as powerful tools for solving such intractable global optimization problems [9]. A meta-heuristic algorithm is a stochastic algorithm that treats the optimization problem as a black box and iteratively applies different random operators to sample the search domain for better decision variables. Compared with traditional techniques, MAs have the unique advantages of conceptual simplicity, high flexibility, and no requirement for gradient information [10]. Benefiting from these advantages, MAs show excellent performance in various scientific and industrial application scenarios, which drives the continued interest of scholars worldwide.
MAs generally build mathematical optimization models by simulating various stochastic occurrences in nature. Based on different design philosophies, MAs can be classified into four dominant categories [11]: evolution-based algorithms, physics-based algorithms, swarm-based algorithms, and human-based algorithms. Evolutionary algorithms mimic the laws of natural selection in biology and use operators like selection, crossover, and mutation to evolve the initial population toward the global optimum. Physics-based algorithms are derived from physical phenomena in the universe and update the search agent with formulas borrowed from physical theories. Swarm-based algorithms are inspired by the social behavior within a group of animals, plants, or other organisms. The common feature of these algorithms is the sharing of biological information of all individuals in the optimization process. The last category of methods, human-based algorithms, originates from human cooperative behavior and activities in the community. Table 1 shows the details of some well-known optimization paradigms belonging to these four classes of algorithms. The core components of MAs are global exploration and local exploitation. In the early iterations, a well-organized optimizer would explore the entire search space as much as possible to locate promising regions where the global optimal solution may exist. Then, in the later stage, more local exploitation is performed to improve the final quality of the solution based on the previously obtained space information. Generally speaking, it is critical for MAs to maintain a good balance between exploration and exploitation, which is related to the effectiveness of algorithms in solving complex optimization problems [12].
Though MAs play an increasingly prominent role in computational science, some skepticism also emerges: since there are already many famous MAs like those mentioned above, why is it necessary to present new algorithms as well as further innovations? That is because, according to the No-Free-Lunch (NFL) [41] theorem, no one algorithm can be guaranteed to work for all optimization problems. In fact, the average performance of the original optimizers is almost the same, and most of them still have some drawbacks to be eliminated, such as poor solution accuracy, slow convergence speed, and ease of falling into local optima. Hence, motivated by the NFL theorem, apart from developing new MAs, lots of scholars try to improve existing optimization algorithms by employing some helpful measures. Currently, there are three popular trends in the improvement of existing MAs [42]: (1) Embed one or more search mechanisms into the algorithm, (2) Hybridize two or more algorithms, and (3) Hybridize two or more algorithms with further enhancement by one or more search mechanisms. Among them, the third approach is highly favored because it can effectively promote valuable information exchange and diversity between search agents in the optimization process of the hybrid algorithm; meanwhile, the improvement strategies would also assist in boosting the overall performance [3]. For example, Zhang et al. [43] presented a chaotic hybridization optimizer of SCA and HHO, namely CSCAHHO. In CSCAHHO, the stable convergence ability of SCA and the fast convergence speed of HHO were fully retained. Besides, chaotic mapping was used to improve the randomness of the algorithm. Compared with basic SCA, HHO, and five advanced algorithms, it was proven that CSCAHHO could provide better convergence accuracy and stability. Cheng et al. [44] integrated PSO and GWO into an efficient optimization approach known as IPSO-GWO. 
To further increase population diversity and avoid the local optimum, Logistic mapping and adaptive inertial weight were embedded in this hybrid algorithm. In a case study of global path planning for mobile robots, IPSO-GWO was able to find the optimal path with a faster convergence speed than traditional methods. In [45], the authors proposed a hybrid LSMA-TLBO algorithm. First, SMA was combined with TLBO to balance the exploration and exploitation, and then Lévy flight was introduced into the hybrid algorithm to improve its global search capability further. Experimental results showed that LSMA-TLBO has superior performance over other competitors on 33 well-known benchmark functions and six engineering design problems. Moreover, Liu et al. [3] constructed a novel improved hybrid technique (HAGSA) based on the AOA and Golden Sine Algorithm (GSA). The Brownian mutation was also employed to enhance the local exploitation competence of HAGSA.
In this study, we center on two state-of-the-art swarm intelligence optimization algorithms, namely Aquila Optimizer (AO) and Artificial Rabbits Optimization (ARO). The AO algorithm simulates the Aquila’s behavior during the process of hunting the prey, which was first proposed by Abualigah et al. [46] in 2021. Preliminary studies have demonstrated that AO has quite a few advantages, such as easy implementation, stable global exploration capability, and unique search ways. In view of this, AO is widely applied to scientific research and real-world optimization problems [47,48]. However, as with other MAs, the canonical AO inevitably suffers from poor convergence accuracy and proneness to fall into local optima in certain cases, mainly due to its inadequate exploitation phase [49,50]. Consequently, many variants of AO were suggested to enhance its searchability for global optimization. Yu et al. [51] proposed a modified AO-based method called mAO, which assimilated opposition-based learning and restart strategy to strengthen the exploration capability of the algorithm and employed chaotic local search to boost the exploitation trend. Compared with the original AO and nine other algorithms, mAO can obtain better results on 29 benchmark functions and five constrained engineering design issues. Zhao et al. [49] presented a simplified AO by removing the two exploitation strategies and keeping only the position update formula of the exploration phase for the whole iteration. The effectiveness of the proposed method was validated by a series of comprehensive experiments. In [52], an improved version of AO, namely IAO, was developed for solving continuous numerical optimization problems. First, a novel search control factor was used to refine the hunting strategies. Furthermore, opposition-based learning and Gaussian mutation were integrated to enhance the general search performance of IAO. Wang et al. 
[50] combined the exploration phase of AO and the exploitation phase of HHO into a new hybrid optimizer termed as IHAOHHO. Additionally, random opposition-based learning and nonlinear escaping energy parameter were introduced to help the algorithm avoid local optima and accelerate convergence. Experimental findings revealed that IHAOHHO shows excellent performance on benchmark function optimization tests. In [53], a new hybrid meta-heuristic optimization technique named AGO was raised based on the AO and Grasshopper Optimization Algorithm (GOA). The proposed AGO was applied to optimize the motor geometry of Surface Inset Permanent Magnet Synchronous Motor (SI-PMSM) to minimize the total core loss. The results indicated that AGO could effectively reduce core losses, thereby increasing the power density of the motor. Also, Zhang et al. [54] merged the merits of AO and AOA to design the AOAOA algorithm. Simulation experiments on 27 benchmark test functions and three practical engineering applications fully verified the superior robustness and convergence accuracy of AOAOA.
In 2022, Wang et al. [55] first proposed the ARO algorithm by modeling the survival strategies of rabbits in nature. The strategy of rabbits searching for food away from their nests represents the exploration capability of the algorithm, whereas the strategy of rabbits randomly choosing one burrow to shelter from predators reflects its exploitation capability. Despite its strong local exploitation potential and good search efficiency, ARO is plagued by an unstable exploration phase, so it tends to fall into local optima when processing some complex or high-dimensional problems [56]. Since ARO was proposed only recently, there has been hardly any study on its improvement.
Given the above discussion and the encouragement of the NFL theorem, this paper attempts to hybridize AO and ARO to make full use of their respective advantages, and proposes a novel chaotic opposition-based learning-driven hybrid AO and ARO optimizer for solving complex global optimization problems, namely CHAOARO. To our knowledge, such hybridization has never been used before. First, the exploration strategy of AO is integrated into ARO to achieve better overall search performance. Then, an adaptive switching mechanism (ASM) is designed to establish a robust exploration-exploitation balance in the hybrid algorithm. Finally, chaotic opposition-based learning (COBL) is used to update the current best solution to increase the population diversity and avoid local-minimum stagnation. On the basis of various metrics, the performance of the proposed CHAOARO is compared with those of the original AO, ARO, and other existing MAs using a total of thirty-three benchmark functions, including seven unimodal, six multimodal, ten fix-dimension multimodal, and ten modern CEC2019 benchmark functions. Moreover, five industrial engineering design problems and the parameter identification problem of the photovoltaic (PV) model are employed to test the applicability of CHAOARO to real-life optimization problems. Experimental results indicate that the proposed work performs better than the comparison algorithms in terms of solution accuracy, convergence speed, and stability. The main contributions of this paper can be summarized as follows:
  • A new hybrid meta-heuristic algorithm, CHAOARO, is proposed based on AO and ARO for global optimization;
  • The ASM and COBL strategies are adopted to synergistically improve the global exploration and local exploitation capabilities of the hybrid algorithm;
  • The performance of CHAOARO is thoroughly compared with that of AO, ARO, and several state-of-the-art optimizers on 23 classical benchmark functions and 10 IEEE CEC2019 test functions;
  • Five constrained engineering optimization problems and PV cell/module parameter identification problem are considered to highlight the applicability of CHAOARO in addressing real-life optimization tasks;
  • Experimental results demonstrate that CHAOARO can significantly outperform other competitor algorithms in most cases.
The structure of this paper is organized as follows: A brief review of the original AO and ARO is presented in Section 2. Section 3 details the two novel search operators employed, namely ASM and COBL, as well as the framework of the proposed CHAOARO algorithm in this paper. In Section 4, a series of comparison experiments based on the 23 classical benchmark functions and IEEE CEC2019 test suite are carried out to fully validate the superiority of the proposed technique. In Section 5 and Section 6, the proposed CHAOARO is applied to solve five common industrial engineering design problems and identify the optimal parameters for PV system, respectively. The conclusion and potential future research are given in Section 7.

2. Preliminary Knowledge

2.1. Aquila Optimizer (AO)

AO is a novel swarm-based meta-heuristic algorithm proposed by Abualigah et al. [46] in 2021, which mimics the intelligent hunting behaviors of the Aquila, a well-known genus of birds of prey found in the Northern Hemisphere. The Aquila uses its breakneck flight speed and powerful claws to attack the intended prey, and it is able to switch between different predation methods depending on the prey species, including (1) high soar with vertical stoop, (2) contour flight with short glide attack, (3) low flight with slow descent attack, and (4) walking and grabbing prey. Accordingly, in the mathematical model of the AO algorithm, the first two hunting strategies of the Aquila are defined as the exploration phase, whereas the last two belong to the exploitation phase. To achieve a smooth transition between global exploration and local exploitation, each phase of the algorithm is executed or not based on a simple condition: if $t \le \frac{2}{3}T$, the exploration steps are executed; otherwise, the exploitation steps are executed, where $t$ is the current iteration and $T$ is the maximum number of iterations. In the following, the four strategies involved in the mathematical model of AO are described.

2.1.1. Expanded Exploration: High Soar with Vertical Stoop

In the first strategy, Aquila conducts a preliminary search at high altitude to detect the target. Once the best hunting area is determined, Aquila will dive vertically toward the prey. This behavior is modeled as follows:
$$X_i(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_m(t) - X_{best}(t)\right) \times r_1 \tag{1}$$

where $X_i(t+1)$ denotes the candidate position vector of the $i$-th Aquila in the next iteration $t+1$, $X_{best}(t)$ denotes the best solution obtained so far, and $X_m(t)$ represents the mean position of all individuals in the population, which can be calculated by Equation (2). $r_1$ is a random value between 0 and 1.

$$X_m(t) = \frac{1}{N}\sum_{i=1}^{N} X_i(t) \tag{2}$$

where $N$ refers to the population size, and $X_i(t)$ denotes the position vector of the $i$-th Aquila in the current iteration.
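For illustration, Equations (1) and (2) can be sketched in a few lines of NumPy; the function name and the array convention (an $N \times D$ population matrix `X` and the incumbent best `x_best`) are ours, not from the paper:

```python
import numpy as np

def ao_expanded_exploration(X, x_best, t, T, rng=None):
    """High soar with vertical stoop: candidate position from Eq. (1),
    using the population mean of Eq. (2)."""
    if rng is None:
        rng = np.random.default_rng()
    x_mean = X.mean(axis=0)          # X_m(t), Eq. (2)
    r1 = rng.random()                # r_1 in [0, 1)
    return x_best * (1 - t / T) + (x_mean - x_best) * r1
```

In a full implementation a fresh $r_1$ would typically be drawn for every search agent; a single scalar is drawn here for brevity.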

2.1.2. Narrowed Exploration: Contour Flight with Short Glide Attack

This is the most common hunting strategy utilized by Aquila. When the prey area is located, Aquila will shift from soaring high to hovering above the target prey and look for a suitable opportunity to attack. At this moment, the position update formula is shown as:
$$X_i(t+1) = X_{best}(t) \times \mathrm{LF}(D) + X_r(t) + (y - x) \times r_2 \tag{3}$$

where $X_r(t)$ denotes the position of a random Aquila individual, $D$ denotes the dimension size of the given problem, and $r_2$ is a random number between 0 and 1. $\mathrm{LF}(\cdot)$ stands for the Lévy flight distribution function, which is expressed as follows:

$$\mathrm{LF}(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \quad \sigma = \left(\frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}}\right)^{1/\beta} \tag{4}$$

where $u$ and $v$ are random numbers in the range of 0 and 1, $\Gamma(\cdot)$ is the gamma function, and $\beta$ is a constant equal to 1.5. In Equation (3), $y$ and $x$ are employed to depict the contour spiral shape, which can be calculated using Equation (5).

$$\begin{cases} x = (R + U \times D_1) \times \sin\left(-\omega \times D_1 + \frac{3\pi}{2}\right) \\ y = (R + U \times D_1) \times \cos\left(-\omega \times D_1 + \frac{3\pi}{2}\right) \end{cases} \tag{5}$$

where $R$ is a fixed number of search cycles between 1 and 20, $U$ denotes a small value fixed to 0.00565, $D_1$ contains the integer numbers from 1 to the dimension size ($D$), and $\omega$ equals 0.005.
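A minimal sketch of the narrowed exploration step, Equations (3)–(5), might look as follows; `levy_flight` implements Equation (4) with the standard Mantegna sampling ($u \sim N(0, \sigma^2)$, $v \sim N(0, 1)$), and all names are illustrative:

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(D, beta=1.5, rng=None):
    """Levy step of Eq. (4) via the standard Mantegna scheme."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def ao_narrowed_exploration(X, x_best, rng=None):
    """Contour flight with short glide attack: Eq. (3) with the spiral of Eq. (5)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    x_rand = X[rng.integers(N)]        # X_r(t): a random individual
    D1 = np.arange(1, D + 1)           # 1, ..., D
    U, omega = 0.00565, 0.005
    R = rng.integers(1, 21)            # search cycles in [1, 20]
    r = R + U * D1
    theta = -omega * D1 + 3 * pi / 2
    x_sp, y_sp = r * np.sin(theta), r * np.cos(theta)
    return x_best * levy_flight(D, rng=rng) + x_rand + (y_sp - x_sp) * rng.random()
```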

2.1.3. Expanded Exploitation: Low Flight with Slow Descent Attack

In the third strategy, when the location of the prey has been roughly specified, Aquila descends vertically and then makes an initial attack on the prey to observe its reaction. This predation behavior can be simulated as in Equation (6).
$$X_i(t+1) = \left(X_{best}(t) - X_m(t)\right) \times \alpha - r_3 + \left((ub - lb) \times r_4 + lb\right) \times \delta \tag{6}$$

where $\alpha$ and $\delta$ are the exploitation adjustment coefficients fixed to 0.1, $r_3$ and $r_4$ are random numbers within the interval $[0, 1]$, and $ub$ and $lb$ are the upper and lower bounds of the search domain, respectively.
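Under the same conventions as above (illustrative names, scalar bounds `lb` and `ub`), Equation (6) reduces to a one-line update:

```python
import numpy as np

def ao_expanded_exploitation(X, x_best, lb, ub, alpha=0.1, delta=0.1, rng=None):
    """Low flight with slow descent attack: Eq. (6)."""
    if rng is None:
        rng = np.random.default_rng()
    x_mean = X.mean(axis=0)            # X_m(t), Eq. (2)
    r3, r4 = rng.random(2)
    return (x_best - x_mean) * alpha - r3 + ((ub - lb) * r4 + lb) * delta
```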

2.1.4. Narrowed Exploitation: Walking and Grabbing Prey

In this step, Aquila comes to the land, follows the random escape trajectory of the target prey in pursuit, and finally launches a precise attack. The mathematical representation of this behavior is given by:
$$X_i(t+1) = QF \times X_{best}(t) - \left(G_1 \times X_i(t) \times r_5\right) - G_2 \times \mathrm{LF}(D) + r_6 \times G_1 \tag{7}$$

$$\begin{cases} QF(t) = t^{\frac{2 \times r_7 - 1}{(1 - T)^2}} \\ G_1 = 2 \times r_8 - 1 \\ G_2 = 2 \times \left(1 - \frac{t}{T}\right) \end{cases} \tag{8}$$

where $QF$ denotes the quality function used to balance the search strategy; $G_1$ denotes the motion parameter of the Aquila in tracking the absconding prey, which is a random number in the range of $-1$ and 1; $G_2$ denotes the flight slope of the Aquila in tracking the absconding prey, which decreases linearly from 2 to 0; and $r_5$, $r_6$, $r_7$, $r_8$ are all random numbers between 0 and 1.
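A hedged sketch of Equations (7) and (8) follows; $t$ is assumed to start at 1 so that $QF$ is well defined, and `_levy` re-implements the Lévy step of Equation (4) via Mantegna sampling (all names are ours):

```python
import numpy as np
from math import gamma, sin, pi

def _levy(D, beta=1.5, rng=None):
    """Mantegna-style Levy step (Eq. (4))."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return 0.01 * rng.normal(0, sigma, D) / np.abs(rng.normal(0, 1, D)) ** (1 / beta)

def ao_narrowed_exploitation(x_i, x_best, t, T, rng=None):
    """Walking and grabbing prey: Eq. (7)-(8); t is assumed to start at 1."""
    if rng is None:
        rng = np.random.default_rng()
    D = x_i.size
    r5, r6, r7, r8 = rng.random(4)
    QF = t ** ((2 * r7 - 1) / (1 - T) ** 2)   # quality function, Eq. (8)
    G1 = 2 * r8 - 1                           # motion parameter in [-1, 1)
    G2 = 2 * (1 - t / T)                      # flight slope: 2 -> 0
    return QF * x_best - (G1 * x_i * r5) - G2 * _levy(D, rng=rng) + r6 * G1
```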
The flow chart of the original AO algorithm is presented in Figure 1.

2.2. Artificial Rabbits Optimization (ARO)

As a new gradient-free meta-heuristic algorithm developed by Wang et al. [55] in 2022, ARO simulates the survival skills of rabbits in nature. Rabbits are herbivores, which primarily eat grass and leafy weeds. To avoid predators detecting their own nests, rabbits would not consume the grass surrounding the holes; instead, they often look for food away from the nest. This detour foraging strategy is defined as exploration in ARO. Moreover, to further reduce the likelihood of being captured by predators or hunters, rabbits are adept at digging many holes for their nests and then randomly select one as a shelter. This random hiding strategy is considered as exploitation in ARO. Due to their lower level in the food chain, rabbits need to run fast to avoid the danger from numerous predators, which will lead to a decrease in their energy, so rabbits have to adaptively shift between detour foraging and random hiding based on the energy status. With the above knowledge about the biological habits of rabbits, the mathematical model of ARO is constructed, including exploration, transition from exploration to exploitation, and exploitation. Subsequently, we will briefly outline each phase in ARO.

2.2.1. Detour Foraging (Exploration)

In ARO, it is assumed that each rabbit in the population has its own region with some grass and burrows. During foraging activities, the rabbit tends to move randomly toward the distant areas of other individuals in search of food and overlooks what lies close at hand, just as an old Chinese proverb says: "A rabbit doesn't eat the grass near its own nest". This behavior is called detour foraging, and its mathematical model is represented as follows:
$$X_i(t+1) = X_j(t) + A \times \left(X_i(t) - X_j(t)\right) + \mathrm{round}\left(0.5 \times (0.05 + R_1)\right) \times n_1, \quad i, j = 1, \dots, N \ \text{and} \ i \neq j \tag{9}$$

$$A = L \times c \tag{10}$$

$$L = \left(e - e^{\left(\frac{t-1}{T}\right)^2}\right) \times \sin(2\pi R_2) \tag{11}$$

$$c(k) = \begin{cases} 1, & \text{if } k == g(l) \\ 0, & \text{otherwise} \end{cases} \quad k = 1, \dots, D \ \text{and} \ l = 1, \dots, \lceil R_3 \times D \rceil \tag{12}$$

$$g = \mathrm{randperm}(D) \tag{13}$$

$$n_1 \sim N(0, 1) \tag{14}$$

where $X_i(t+1)$ is the candidate position of the $i$-th rabbit in the next iteration $t+1$; $X_i(t)$ and $X_j(t)$ denote the positions of the $i$-th and the $j$-th rabbit in the current iteration $t$, respectively; $N$ denotes the population size; $t$ is the current iteration; $T$ is the maximum number of iterations; $D$ represents the dimension size of the specific problem; $\lceil \cdot \rceil$ stands for the ceiling function; $\mathrm{round}(\cdot)$ signifies rounding to the nearest integer; $\mathrm{randperm}(D)$ returns a random permutation of the integers from 1 to $D$; $R_1$, $R_2$, and $R_3$ are all random numbers in the interval $[0, 1]$; $n_1$ follows the standard normal distribution; and $L$ represents the length of the movement step while conducting the detour foraging.
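The detour-foraging update of Equations (9)–(14) can be sketched as follows (function and variable names are ours):

```python
import numpy as np
from math import e, pi, ceil

def aro_detour_foraging(X, i, t, T, rng=None):
    """Detour foraging update of Eq. (9), with A built from Eq. (10)-(14)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    j = rng.choice([k for k in range(N) if k != i])   # partner rabbit, j != i
    R1, R2, R3 = rng.random(3)
    L = (e - e ** (((t - 1) / T) ** 2)) * np.sin(2 * pi * R2)  # Eq. (11)
    c = np.zeros(D)                                   # Eq. (12)
    g = rng.permutation(D)                            # randperm(D), Eq. (13)
    c[g[:max(ceil(R3 * D), 1)]] = 1                   # mutate ceil(R3*D) dimensions
    A = L * c                                         # Eq. (10)
    n1 = rng.standard_normal()                        # Eq. (14)
    return X[j] + A * (X[i] - X[j]) + round(0.5 * (0.05 + R1)) * n1
```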

2.2.2. Transition from Exploration to Exploitation

In ARO, rabbits are inclined to implement continual detour foraging in the early stage of the iteration, whereas in the later stages of the search, they frequently execute random hiding. To maintain a good balance between exploration and exploitation, a concept of rabbit energy E is introduced, which will gradually decrease over time. The formula for the energy factor E is as follows:
$$E(t) = 4\left(1 - \frac{t}{T}\right) \ln\frac{1}{R_4} \tag{15}$$

where $R_4$ is a random number between 0 and 1. The value of the energy coefficient $E$ varies in the interval $[0, 2]$. When $E > 1$, the rabbit has ample energy to randomly explore the foraging areas of other individuals, so detour foraging occurs, and this phase is defined as exploration. When $E \le 1$, the rabbit has less energy for physical activity, so it performs random hiding to escape from predation, and the ARO algorithm enters the exploitation phase.
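The energy factor is a one-liner; the form $-4(1 - t/T)\ln R_4$ used below is algebraically identical to Equation (15):

```python
import numpy as np

def aro_energy(t, T, rng=None):
    """Energy factor E(t) of Eq. (15): shrinks toward 0 as t -> T, steering
    ARO from detour foraging (E > 1) to random hiding (E <= 1)."""
    if rng is None:
        rng = np.random.default_rng()
    R4 = rng.random()
    return -4 * (1 - t / T) * np.log(R4)   # = 4(1 - t/T) ln(1/R4)
```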

2.2.3. Random Hiding (Exploitation)

Rabbits are usually confronted with chase and attack from predators. To survive, they would dig a number of different holes around the nest for shelter. In ARO, at each iteration, a rabbit always generates D burrows along the dimension of the search space and then randomly selects one among them for hiding to decrease the probability of being captured. The mathematical model of this behavior is simulated as follows:
$$X_i(t+1) = X_i(t) + A \times \left(R_5 \times b_{i,r}(t) - X_i(t)\right) \tag{16}$$

$$b_{i,r}(t) = X_i(t) + H \times g_r(k) \times X_i(t) \tag{17}$$

$$g_r(k) = \begin{cases} 1, & \text{if } k == \lceil R_6 \times D \rceil \\ 0, & \text{otherwise} \end{cases} \tag{18}$$

$$H = \frac{T - t + 1}{T} \times n_2 \tag{19}$$

$$n_2 \sim N(0, 1) \tag{20}$$

where the parameter $A$ can be calculated using Equations (10)–(13), $b_{i,r}(t)$ represents a randomly selected burrow of the $i$-th rabbit from its $D$ burrows used for hiding in the current iteration $t$, $R_5$ and $R_6$ are two random numbers between 0 and 1, and $n_2$ follows the standard normal distribution.
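Equations (16)–(20) can be sketched as below; the running parameter $A$ is rebuilt exactly as in Equations (10)–(13), and all names are illustrative:

```python
import numpy as np
from math import e, pi, ceil

def aro_random_hiding(X, i, t, T, rng=None):
    """Random hiding update of Eq. (16)-(20); A is reused from Eq. (10)-(13)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    R2, R3, R5, R6 = rng.random(4)
    L = (e - e ** (((t - 1) / T) ** 2)) * np.sin(2 * pi * R2)  # Eq. (11)
    c = np.zeros(D)
    c[rng.permutation(D)[:max(ceil(R3 * D), 1)]] = 1
    A = L * c                                       # Eq. (10)
    H = (T - t + 1) / T * rng.standard_normal()     # Eq. (19)-(20)
    g_r = np.zeros(D)                               # Eq. (18): one chosen burrow
    g_r[max(ceil(R6 * D), 1) - 1] = 1
    b = X[i] + H * g_r * X[i]                       # Eq. (17)
    return X[i] + A * (R5 * b - X[i])               # Eq. (16)
```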
The flow chart of the original ARO algorithm is presented in Figure 2.

3. The Proposed CHAOARO Algorithm

3.1. Hybridization of AO with ARO Algorithms

In the exploration phase of AO, the algorithm simulates the predatory behavior of the Aquila rapidly pursuing target prey within a wide flight area. When updating the positions of search agents, the current global optimal position is incorporated directly to improve search capability and accelerate convergence (see Equations (1) and (3)). Nevertheless, at the later stage, the selected search domain cannot be exploited thoroughly, and the weak escape mechanism of the Lévy flight makes the algorithm fall easily into local optima (see Equation (7)). As demonstrated in earlier studies, the convergence curve of AO remains flat during later iterations, especially on the multi-modal benchmark functions [49]. Most of the quality results achieved throughout the optimization process are therefore likely to come from the contributions of exploration. Thus, the AO algorithm has excellent global exploration capability, but its exploitation phase is still inadequate. On the contrary, the experimental findings of the ARO algorithm indicate that the defects of poor population diversity and slow convergence speed exist in the early exploration phase, and the detour foraging mechanism cannot provide sufficient volatility for search agents to explore the whole search space as fully as possible (see Equation (9)). As the number of iterations increases, the energy coefficient $E$ of the rabbit gradually decreases, and ARO enters the exploitation stage. The random hiding behavior makes the individuals in the swarm move steadily closer to the neighborhood of the global optimal point, which significantly improves the solution accuracy (see Equation (16)). Thus, ARO possesses good local exploitation ability.
Based on the above characteristic analysis, we consider the framework of ARO as the main body and preliminarily hybridize the exploration phase of AO with it to give full play to the strengths of the two basic algorithms and preserve more robust global and local search capabilities as well as faster convergence speed.

3.2. Adaptive Switching Mechanism (ASM)

For most bio-inspired optimizers, balancing exploration and exploitation effectively is key to improving the algorithm's performance. Exploration is the process of leaving a local region and subsequently probing unknown spaces, while exploitation refers to probing a local region to find a promising solution. Generally, exploration should be executed in the early stages of the algorithm, whereas exploitation is implemented in the later stages [57]. A successful hybrid algorithm needs an effective switching mechanism to reasonably balance exploration and exploitation in the search for the global optimum [58]. To better balance the exploration phase of AO and the exploitation phase of ARO, an additional parameter is needed in the combined optimizer to guide the search direction of individuals. A plain random number would be one option [54]. However, the starvation ratio $F$ of vultures in the African Vultures Optimization Algorithm (AVOA) [59] allows the algorithm to transit smoothly from exploration to exploitation, providing competitive performance even on challenging optimization problems; therefore, an adaptive switching mechanism (ASM) is proposed based on it. The mathematical representation of ASM is as follows:
$$F = (2 \times rand + 1) \times z \times \left(1 - \frac{t}{T}\right) + g \tag{21}$$

$$g = h \times \left(\sin^w\left(\frac{\pi}{2} \times \frac{t}{T}\right) + \cos\left(\frac{\pi}{2} \times \frac{t}{T}\right) - 1\right) \tag{22}$$

where $rand$ denotes a random value in the interval $[0, 1]$, $z$ is a random number between $-1$ and 1, $t$ and $T$ are the current iteration and the maximum number of iterations, respectively, $h$ is a random number between $-2$ and 2, and $w$ is a constant equal to 2.5. Figure 3 illustrates the dynamic behavior of $F$ over 1000 iterations during the optimization operation. As per the AVOA algorithm, when $|F| \ge 1$, vultures look for food in different regions (exploration), and if $|F| < 1$, vultures search for food near the optimal solution (exploitation). The ASM based on the $F$-value ensures that the algorithm focuses on global exploration in the early iterations whilst retaining the possibility of local search. As the value of $F$ gradually decreases, the algorithm performs more local exploitation in the later stage.
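Equations (21) and (22) transcribe directly into code (the function name is ours):

```python
import numpy as np
from math import sin, cos, pi

def asm_factor(t, T, w=2.5, rng=None):
    """ASM switching factor F of Eq. (21)-(22)."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.uniform(-1, 1)           # z in [-1, 1]
    h = rng.uniform(-2, 2)           # h in [-2, 2]
    g = h * (sin(pi / 2 * t / T) ** w + cos(pi / 2 * t / T) - 1)
    return (2 * rng.random() + 1) * z * (1 - t / T) + g
```

In the hybrid algorithm, `abs(F) >= 1` routes a search agent to the AO exploration operators, while `abs(F) < 1` routes it to the ARO exploitation operators.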

3.3. Chaotic Opposition-Based Learning (COBL)

Opposition-based learning (OBL) is a powerful optimization technique in the field of intelligent computation, first proposed by Tizhoosh [60]. In general, MAs start with some initial random solutions and try to approach the global optimum continuously through iterative calculation; the search procedure terminates once certain pre-defined conditions are met. If no prior information about the solution is available, convergence may take quite a long time. As illustrated in Figure 4, the main principle of OBL is to simultaneously evaluate the fitness values of the current solution and its corresponding opposite solution, then retain the dominant individual for the next iteration, thus effectively strengthening the population diversity. It has been shown that the generated opposite candidate has roughly a 50% chance of being closer to the global optimum than the current solution. Therefore, OBL has been widely implemented to enhance the optimization performance of many basic MAs [61,62]. The mathematical model of OBL is presented as follows:
$$\hat{X} = lb + ub - X \tag{23}$$

where $\hat{X}$ denotes the generated opposite solution, $X$ is the current solution, and $ub$ and $lb$ represent the upper and lower bounds of the search domain, respectively.
As can be seen from Equation (23), OBL can only yield the opposite solution at a fixed location. This works well in the early stage of optimization, but as the search process continues, the opposite solution may fall near a local optimum, and other individuals will quickly move toward this region, resulting in premature convergence and poor solution accuracy. To this end, reference [63] proposed a random opposition-based learning (ROBL) strategy by introducing a random perturbation to modify Equation (23) as follows:
X̂ = lb + ub − rand × X
where r a n d is a random number between 0 and 1. Although ROBL can enhance the population diversity of the algorithm and help avoid local optima to some extent, the algorithm’s convergence speed is still not satisfactory.
Chaos is a dynamic behavior found in nonlinear systems, with three essential characteristics: ergodicity, regularity, and randomness [64]. Compared with random search, which mainly relies on probability distributions, a chaotic map can thoroughly investigate the search space at a much higher speed thanks to its dynamic properties. To further improve population diversity and global convergence speed, this paper combines traditional OBL with chaotic maps and proposes a chaotic opposition-based learning (COBL) strategy. The mathematical formula is given as follows:
X̂_co = lb + ub − φ × X
where X̂_co denotes the generated opposite solution of X, and φ is the chaotic map value.
In our paper, ten common chaotic maps are used to combine with OBL, including Chebyshev map, circle map, gauss map, iterative map, logistic map, piecewise map, sine map, singer map, sinusoidal map, and tent map. The images of these chaotic maps are shown in Figure 5, and the specific equations are listed in Table 2. In the next section, we will test in detail which map is more suitable to be employed for boosting the optimization performance of the proposed algorithm.
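The three opposition operators above differ only in how X is scaled before reflection. A minimal Python sketch follows; the gauss/mouse map recurrence shown is its common textbook form (Table 2 is not reproduced in this excerpt, so treat that detail as an assumption):

```python
def gauss_mouse(x):
    """Gauss/mouse chaotic map: x_{k+1} = 0 if x_k = 0, else (1/x_k) mod 1."""
    return 0.0 if x == 0.0 else (1.0 / x) % 1.0

def obl(x, lb, ub):
    """Classical opposition-based learning, Equation (23)."""
    return lb + ub - x

def robl(x, lb, ub, r):
    """Random OBL, Equation (24); r is a random number in [0, 1]."""
    return lb + ub - r * x

def cobl(x, lb, ub, phi):
    """Chaotic OBL, Equation (25); phi is the current chaotic map value."""
    return lb + ub - phi * x
```

Note that classical OBL is an involution (applying it twice returns the original point), whereas COBL reflects through a point that moves with the chaotic state phi, which is what keeps the opposite solutions from collapsing onto a fixed location.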

3.4. Detailed Design of CHAOARO

On the basis of the above Section 3.1, Section 3.2 and Section 3.3, the proposed methodology is summarized as follows. First, the exploration phase of AO is hybridized with the exploitation phase of ARO to achieve a more stable overall optimization performance. Then, ASM is designed to control the smooth switch from exploration to exploitation, which improves the efficiency of the algorithm in finding the most promising domain. In addition, AO and ARO share a common drawback of local optima stagnation. For this reason, the COBL strategy is utilized to update the current optimal solution before the next iteration calculation to further enhance the population diversity and local optima avoidance. All these operations significantly boost the convergence speed, solution accuracy, and robustness of both single algorithms. Finally, this new hybrid version of Aquila Optimizer and Artificial Rabbits Optimization driven by chaotic opposition-based learning can be abbreviated as CHAOARO. Figure 6 presents the flow chart of CHAOARO, and its pseudo-code is described in Algorithm 1.
Computational complexity is an important metric to evaluate the time consumption of an algorithm when solving optimization problems. From the pseudo-code shown in Algorithm 1, it can be concluded that the computational complexity of the proposed CHAOARO is related to the population size ( N ) , the dimension of the problem ( D ) , and the maximum number of iterations ( T ) . In the initialization process, the positions of all search agents are randomly generated in the search space, which requires a computational complexity of O ( N ) . Afterward, throughout the iteration procedure, it takes O ( N × T + N × D × T ) to carry out the fitness evaluation and position update. Accordingly, the total computational complexity of CHAOARO is O ( N + N T + N D T ) , i.e., O ( N D T ) asymptotically. Compared with the canonical AO and ARO algorithms, the computational complexity of the proposed method does not increase.
Algorithm 1. Pseudo-code of the proposed CHAOARO
  • Initialize the population size N and the maximum iterations T
  • Initialize the position of each search agent X i ( i = 1 , 2 , , N )
  • While  t T
  •   Check if the position goes beyond the search limits and adjust it
  •   Evaluate the fitness values of all search agents
  •   Set X b e s t as the best solution obtained so far
  •   For each X i
  •     Calculate the starvation ratio F using Equation (21) //ASM
  •     If  | F | 1  then //Exploration of AO
  •     If  r a n d ≤ 0.5  then // r a n d is a random number between 0 and 1
  •       Update the search agent’s position using Equation (1)
  •     Else
  •       Update the search agent’s position using Equation (3)
  •     End If
  •   Else
  •     Calculate the energy factor E using Equation (15)
  •     If  E > 1  then
  •      Update the search agent’s position using Equation (9)  //Detour foraging of ARO
  •     Else
  •      Update the search agent’s position using Equation (16) //Random hiding of ARO
  •     End If
  •   End If
  •   End For
  •   Generate the opposite solution of X b e s t using Equation (25),
      Select the one with better fitness into the next generation //COBL
  •   t = t + 1
  • End While
  • Return  X b e s t
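To make the control flow of Algorithm 1 concrete, the skeleton below sketches it in Python on a sphere test function. The actual position-update rules (Equations (1), (3), (9), (16)) and the exact F of Equation (21) are not reproduced in this excerpt, so simple stand-in moves and a simplified F are used; only the structure (ASM switch, greedy selection, COBL elitism on the best solution) mirrors the pseudo-code:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def cobl_opposite(x, lb, ub, phi):
    # Equation (25): chaotic opposite of a solution vector
    return [lb + ub - phi * v for v in x]

def chaoaro_sketch(fobj, lb, ub, dim, N=20, T=100, seed=0):
    """Structural sketch of Algorithm 1 with hypothetical stand-in moves."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(N)]
    fit = [fobj(p) for p in pop]
    ibest = min(range(N), key=fit.__getitem__)
    xbest, fbest = pop[ibest][:], fit[ibest]
    phi = 0.7                                  # gauss/mouse map state
    for t in range(T):
        # Simplified ASM factor: large early (exploration), small late
        F = (2 * rng.random() + 1) * rng.uniform(-1, 1) * (1 - t / T)
        for i in range(N):
            if abs(F) >= 1:                    # exploration (AO-style stand-in)
                cand = [b + rng.uniform(-1, 1) * (b - v)
                        for b, v in zip(xbest, pop[i])]
            else:                              # exploitation (ARO-style stand-in)
                cand = [v + rng.gauss(0.0, 0.1) * (ub - lb) * (1 - t / T)
                        for v in pop[i]]
            cand = [min(max(v, lb), ub) for v in cand]   # boundary handling
            fc = fobj(cand)
            if fc < fit[i]:                    # greedy selection
                pop[i], fit[i] = cand, fc
                if fc < fbest:
                    xbest, fbest = cand[:], fc
        phi = 0.0 if phi == 0.0 else (1.0 / phi) % 1.0   # gauss/mouse map
        opp = [min(max(v, lb), ub) for v in cobl_opposite(xbest, lb, ub, phi)]
        if fobj(opp) < fbest:                  # COBL elitism on the best
            xbest, fbest = opp[:], fobj(opp)
    return xbest, fbest
```

Because selection is greedy, the best fitness is non-increasing across iterations, which is the property the convergence-curve experiments in Section 4 rely on.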

4. Experimental Results and Discussion

In this section, a series of systematic experimental studies is conducted on 23 classical benchmark functions (IEEE CEC2005 test suite) and the IEEE CEC2019 test set to comprehensively investigate the performance of the proposed CHAOARO method. To illustrate the superiority of CHAOARO, seven state-of-the-art MAs and two improved algorithms are employed for comparison, namely AO [46], GWO [27], WOA [30], SCA [21], TSA [33], GJO [36], ARO [55], Weighted Chimp Optimization Algorithm (WChOA) [65], and Dynamic Arithmetic Optimization Algorithm (DAOA) [66]. Table 3 lists the main parameters used in each algorithm, which are the same as those recommended in the original research papers. Note that the parameter settings appearing in the position update model (Equations (1) and (3)) for the exploration phase of AO are also inherited by CHAOARO. In the experiment, for all involved algorithms, the population size is fixed to 30 and the maximum number of iterations is set to 500 for a fair comparison.
Through 30 independent runs, the obtained average fitness (Avg) and standard deviation (Std) results are recorded as evaluation criteria, where the average fitness value reflects the optimization accuracy of an algorithm, which can be calculated as follows:
Avg = (1/times) × Σ_{i=1}^{times} O_i
where t i m e s stands for the total number of runs, and O i denotes the outcome of the i -th operation. The closer the average fitness is to the theoretical optimal solution, the better the searchability of the algorithm. On the other hand, the standard deviation reveals the departure degree of the experimental data, and the smaller the standard deviation, the higher the stability of the algorithm. The mathematical formula for the standard deviation is as follows:
Std = √( (1/(times − 1)) × Σ_{i=1}^{times} (O_i − Avg)² )
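As a quick sketch, the two statistics above can be computed directly; Python's `statistics.stdev` already uses the sample form with the 1/(times − 1) denominator:

```python
import statistics

def avg_std(outcomes):
    """Average fitness and sample standard deviation over independent runs
    (times = len(outcomes)), matching the two formulas above."""
    avg = sum(outcomes) / len(outcomes)
    std = statistics.stdev(outcomes)  # sample std: divides by (times - 1)
    return avg, std
```

For example, `avg_std([1.0, 2.0, 3.0])` yields an average of 2.0 and a standard deviation of 1.0.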
Besides, two non-parametric statistical techniques, including the Friedman ranking test [67] and the Wilcoxon rank-sum test [68], are further performed to check whether CHAOARO is significantly different from other comparison methods. All the experiments are implemented in MATLAB R2017a with Microsoft Windows 10 system, and the hardware platform configuration of the computer is Intel (R) Core (TM) i5-10300H CPU @ 2.50 GHz and 16 GB RAM.

4.1. Experiment 1: Classical Benchmark Functions

To verify the effectiveness of the proposed CHAOARO in solving simple numerical optimization problems, we select 23 classical benchmark functions with different characteristics from [10] for testing. These benchmark functions can be classified into three categories: unimodal, multimodal, and fix-dimension multimodal functions. Unimodal functions (F1~F7) have only one global optimum and can thus be utilized to estimate the exploitation propensity of the algorithm. Multimodal and fix-dimension multimodal functions (F8~F23) contain a large number of local optima and are therefore usually adopted to examine the exploration and local optima avoidance abilities of the algorithm. Table 4 provides the details of the 23 classical benchmark functions.
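Table 4 is not reproduced in this excerpt, but two representative members of the first two categories in CEC2005-style suites are the unimodal sphere function (typically F1) and the multimodal Rastrigin function (typically F9); treating these as the suite's members is an assumption for illustration:

```python
import math

def sphere(x):
    """Unimodal: single global optimum f(0) = 0; probes exploitation."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: many local minima, global optimum f(0) = 0; probes
    exploration and local-optima avoidance."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Both attain their theoretical optimum of 0 at the origin, which is why the tables in this section report how close each algorithm's average fitness gets to 0.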
First, the impact of ten different combinations of the chaotic map and opposition-based learning on the performance of the proposed CHAOARO algorithm is studied. Afterward, we compare CHAOARO with other selected advanced algorithms in turn with respect to numerical results, convergence behavior, boxplot, computational consumption, Wilcoxon rank-sum test, and scalability in the dimensional space.

4.1.1. Chaotic Map Selection Analysis

The COBL strategy designed in Section 3.3 integrates chaotic maps and traditional opposition-based learning to prevent the algorithm from getting trapped in the local optimum during iterations. To confirm which chaotic map in Table 2 should be used, this part tests the optimization performance of CHAOARO with ten different chaotic maps on 23 classical benchmark functions. After 30 independent runs, the average fitness and standard deviation results obtained are listed in Table 5.
As can be clearly seen from Table 5, in most test cases, CHAOARO based on CM3 (gauss/mouse map) performs better than that using other chaotic maps. When solving unimodal functions F1~F7, CM3 (gauss/mouse map) ranks first among all its peers, especially on F1~F4, where it consistently finds the theoretical optimal value (0). For multimodal and fix-dimension multimodal functions F8~F23, the gauss/mouse map also provides satisfactory solutions. Eventually, CHAOARO with the gauss/mouse map obtains the minimum Friedman mean ranking value of 1.4348. This indicates that the gauss/mouse map has the best effect in improving the comprehensive performance of the algorithm; therefore, it is selected to generate the chaotic map values φ for the COBL strategy in this paper.

4.1.2. Evaluation of Exploitation and Exploration

Based on the properties of the unimodal, multimodal, and fix-dimension multimodal benchmark functions described earlier, in this subsection, we perform a systematic evaluation of the exploitation and exploration propensities of the proposed optimizer. The specific parameter settings have been shown in Table 3. After 30 runs on the 30-dimensional test functions F1~F23, the average fitness and standard deviation values obtained by CHAOARO and other competitor algorithms are recorded in Table 6.
As seen from Table 6, for unimodal functions F1~F7, as far as the average fitness is concerned, CHAOARO is able to precisely pinpoint the theoretical optimal solution (0) on F1~F4, whereas the obtained results of other comparison algorithms are not satisfactory. Meanwhile, it is noteworthy that the solution accuracy of the hybrid algorithm has a significant improvement over the basic AO and ARO. On functions F5 and F6, although CHAOARO fails to search for the global optimum, it still achieves the best average fitness value in the competition with the remaining nine algorithms. Regarding function F7, the proposed CHAOARO also provides very competitive average fitness, which is second only to the best TSA. Additionally, CHAOARO achieves the smallest standard deviation among all algorithms on functions F1~F6, but marginally inferior to TSA on function F7. Since the unimodal function has only one global optimal value, these experimental data indicate that CHAOARO has strong local exploitation potential. This is because the COBL strategy can effectively expand the unknown search domain and the hybrid operation facilitates the exchange of useful information among individuals in the population.
With regard to solving multimodal and fix-dimension multimodal functions F8~F23, CHAOARO maintains good convergence accuracy and robustness, which can reveal the best results on 15 out of 16 benchmark functions both in terms of average fitness and standard deviation. Especially on functions F12, F13, F15, F20, F21, F22, and F23, CHAOARO has an overwhelming advantage compared with the basic AO and ARO, as well as other optimization methods. On functions F16~F19, some algorithms achieve the same average fitness as CHAOARO. However, the calculated standard deviation value of the proposed algorithm is the smallest among them. This highlights the superior stability of CHAOARO. Considering that the multimodal and fix-dimension multimodal functions are characterized by numerous local minima, these results prove the excellent exploration and local optima avoidance capabilities of CHAOARO. It can be explained that CHAOARO takes full advantage of the powerful exploration trend of AO, and the ASM mechanism can better balance the algorithm exploration and exploitation.
Moreover, the final Friedman mean ranking of CHAOARO obtained on 23 classical benchmark functions is 1.0870, which ranks first among these algorithms. Hence, it can be believed that the searchability of the hybrid algorithm proposed in this paper has been significantly enhanced.

4.1.3. Analysis of Convergence Behavior

To study the convergence behavior of the algorithms throughout the process of finding the global optimal solution, Figure 7 illustrates the convergence curves of AO, GWO, WOA, SCA, TSA, GJO, ARO, WChOA, DAOA, and CHAOARO on the 23 classical benchmark functions. From Figure 7, it can be observed that the proposed CHAOARO effectively reaches the global optimal solution in the initial stage on the unimodal benchmark functions F1~F4 and shows the fastest convergence speed, while the original AO, ARO, and the other comparison algorithms converge slowly with unsatisfactory convergence accuracy. On functions F5 and F6, CHAOARO has a similar convergence trend to AO in the early search process; however, AO gradually falls into local optima, while the proposed algorithm continues to exploit the information in the search space to improve the quality of the solution. Finally, CHAOARO provides the best convergence accuracy among all these optimization methods. On function F7, CHAOARO also achieves a clear improvement in convergence accuracy and speed compared with AO and ARO. For multimodal functions F8~F13, the proposed CHAOARO continues its outstanding search performance. Specifically, on functions F9 and F11, CHAOARO is able to obtain the theoretical optimum with the minimum number of iterations among all algorithms. On functions F8, F10, F12, and F13, the convergence accuracy and speed of CHAOARO once again outperform the other competitors to varying degrees. For fix-dimension multimodal functions F14~F23, CHAOARO can quickly transfer from exploration to exploitation, converge to the global optimal value in the early searching stage, and avoid falling into local optima. Compared with its peers, CHAOARO possesses a significant superiority in terms of convergence accuracy and speed.
To sum up, the convergence pattern of the proposed CHAOARO is obviously improved, and it can rapidly and precisely locate an excellent solution for both unimodal and multimodal functions.

4.1.4. Boxplot Analysis

To better describe the consistency of the data obtained from 30 independent runs, in this subsection, the boxplot diagram is utilized to reflect the distribution characteristics of each algorithm. Figure 8 depicts the boxplots of AO, GWO, WOA, SCA, TSA, GJO, ARO, WChOA, DAOA, and CHAOARO on 12 representative benchmark functions. In this diagram, the center mark of each box signifies the median, the lowest and highest points on the whiskers are the minimum and maximum values, respectively, and the symbol "+" denotes an outlier. From Figure 8, we can notice that the objective distribution of CHAOARO is narrower than that of the comparison algorithms in most cases, which suggests that the proposed algorithm has excellent robustness in solving these test problems. In particular, on functions F1, F2, F3, F4, F9, F10, F11, F14, F16, and F17, CHAOARO does not produce any outliers. On the remaining functions, although individual outliers exist, the general distribution of CHAOARO regarding the median, minimum, and maximum values is likewise more concentrated than that of the others. These experimental findings demonstrate that the stability of CHAOARO is considerably improved compared to its predecessors AO and ARO, which benefits largely from the key ASM and COBL strategies.

4.1.5. Wilcoxon Rank-Sum Test

The Wilcoxon rank-sum test is a non-parametric statistical method used to assess the performance difference between two samples at a significance level of 0.05 [68]. To be specific, if the p-value is less than 0.05, there is a significant difference between CHAOARO and the comparison algorithm: the symbol "+" indicates that CHAOARO performs significantly better than its opponent, and "−" indicates that it performs significantly worse. When the p-value is greater than 0.05, the difference between CHAOARO and the comparison algorithm is not statistically significant; NaN likewise represents that CHAOARO and the comparison algorithm have statistically consistent performance ("="). Table 7 outlines the p-values between CHAOARO and each comparison optimizer obtained via the Wilcoxon rank-sum test on the 23 benchmark functions. As can be seen from this table, the proposed method is significantly superior to AO on 20 functions, GWO on 23 functions, WOA on 22 functions, SCA on 23 functions, TSA on 23 functions, GJO on 21 functions, ARO on 16 functions, WChOA on 21 functions, and DAOA on 23 functions. These statistical results provide evidence that CHAOARO shows significantly better optimization performance on almost all test functions compared with the other algorithms.
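For reference, the two-sided rank-sum p-value can be sketched with the large-sample normal approximation (the same idea scipy.stats.ranksums implements); the tie-variance correction is omitted here for brevity, which is a simplification:

```python
import math

def _ranks(values):
    """1-based ranks with ties replaced by their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(a), len(b)
    ranks = _ranks(list(a) + list(b))
    w = sum(ranks[:n1])                  # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12   # no tie correction (simplified)
    if var == 0:
        return 1.0
    z = (w - mean) / math.sqrt(var)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Identical samples give p = 1 (no evidence of a difference), while clearly separated samples such as [1, 2, 3] vs. [4, 5, 6] give p just under the 0.05 threshold.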

4.1.6. Computation Time Analysis

To quantitatively analyze the computational cost of the proposed algorithm, the average runtime of the ten algorithms on the 23 benchmark functions is reported in Table 8. We calculate the total operation time for each algorithm and give the corresponding rankings as follows: WChOA (27.9750 s) > AO (6.2460 s) > GJO (5.4630 s) > CHAOARO (4.3330 s) > ARO (4.2020 s) > GWO (4.0796 s) > TSA (3.5097 s) > SCA (3.3658 s) > DAOA (2.7242 s) > WOA (2.3238 s). From the results, it can be seen that the time consumption of CHAOARO is slightly higher than that of ARO but significantly lower than that of AO. Although the algorithm designed in this paper has a more complex framework than the basic AO and ARO algorithms, mainly due to the hybrid operation, ASM, and COBL mechanisms, its optimization performance is greatly improved. On the other hand, the added steps used to obtain more reliable solutions do not impose an excessive time cost on CHAOARO. On the whole, given that CHAOARO has better exploration and exploitation capabilities than the other comparison algorithms, the slight extra computation time is acceptable, and the proposed method can be expected to be successfully adopted in some real-time applications.

4.1.7. Scalability Analysis

The scalability test can be used to investigate the impact of problems in different dimensions on the optimization performance of the algorithm. To check whether the proposed algorithm suffers from dimension disaster while tackling high-dimensional optimization problems, this subsection will apply CHAOARO to optimize the 13 variable-dimension benchmark functions F1~F13 in Table 4. We increase the test dimensions ( D ) from 30 to 50, 100, and 500, and the average fitness and standard deviation results obtained from 30 independent runs of each algorithm are filled in Table 9.
As can be found in Table 9, CHAOARO also exhibits search capabilities superior to those of the comparison algorithms on high-dimensional problems. With the expansion of dimensionality, more elements need to be optimized, so the convergence accuracy of most algorithms decreases to some extent, but CHAOARO is able to steadily find the theoretical optimum (0) on functions F1, F2, F3, F4, F9, and F11 in all dimensions. For functions F5, F6, F12, and F13, the Avg and Std of the original AO and ARO gradually deteriorate with the increase of dimension; nevertheless, CHAOARO maintains high solution accuracy. For functions F7 and F8, the performance of CHAOARO is slightly inferior to that of TSA and WOA, respectively, ranking second among all algorithms. Figure 9 illustrates the Friedman mean rankings of CHAOARO and the other nine comparison algorithms on these scalable functions. From this figure, it can be seen that CHAOARO has the best overall performance among all algorithms regardless of dimensionality.
Based on the above, it can be concluded that when resolving high-dimensional problems, CHAOARO can maintain a good balance between exploration and exploitation.

4.2. Experiment 2: IEEE CEC2019 Test Suite

The above series of comparison experiments based on classical benchmark functions has successfully demonstrated the superiority of CHAOARO in solving simple optimization problems. To further validate the effectiveness of our improved algorithm in addressing complex optimization problems, in this section, we evaluate the performance of CHAOARO on the IEEE CEC2019 test suite. Table 10 provides the details of the ten benchmark functions in the IEEE CEC2019 test suite. Likewise, CHAOARO and the other nine algorithms run independently 30 times on each function, with the maximum number of iterations and the population size set to 500 and 30, respectively. The experimental results are shown in Table 11.
From Table 11, it is evident that CHAOARO can show better optimization performance than its peers on almost all test functions. On functions CEC01 and CEC04~CEC10, the solution obtained by CHAOARO is much closer to the theoretical optimum than other comparison algorithms. On functions CEC02 and CEC03, though certain optimizers can achieve the same average fitness value as CHAOARO, the latter has a smaller standard deviation, which again proves the excellent robustness of the proposed work. Furthermore, the Wilcoxon rank-sum test and Friedman ranking test used for statistical analysis are also conducted to check whether the performance of CHAOARO has been significantly improved on the IEEE CEC2019 test set. According to the statistical results, CHAOARO outperforms AO, GWO, WOA, SCA, TSA, GJO, WChOA, and DAOA on 10 test functions and outperforms ARO on 9 functions. Besides, CHAOARO gains a Friedman mean ranking value of 1.0, which ranks first in the competition. These findings imply that the proposed algorithm not only can provide higher-quality solutions for simple optimization problems but also is very competitive in solving complex numerical optimization problems.
Figure 10 plots the convergence curves of CHAOARO and other comparison algorithms on 10 CEC2019 test functions. It can be visually seen from this figure that CHAOARO scores a promising convergence pattern, which shows great improvements over the basic AO and ARO. On functions CEC01, CEC02, CEC07, and CEC08, CHAOARO has a large decay rate in the early search phase, which enables it to obtain the most satisfactory outcomes with the least number of iterations among all algorithms. This is primarily attributed to the increased population diversity by COBL strategy. On functions CEC04, CEC05, CEC06, and CEC10, the superior exploitation capability of CHAOARO is well demonstrated. CHAOARO follows the same trend as some of its competitors in the early iterations, but in the later iterations, as most algorithms fall into the local optima, the proposed method is still approaching the global optimum and thus achieves better final convergence accuracy. On functions CEC03 and CEC09, CHAOARO also performs very competitively in terms of convergence accuracy and speed.
In conclusion, whether handling simple or complex numerical problems, CHAOARO can be trusted to offer reliable optimization performance in most scenarios. CHAOARO inherits the strengths of the original AO and ARO and employs the ASM and COBL strategies to compensate for defects such as the tendency to fall into local optima and the imbalance between exploration and exploitation, thus achieving better solution accuracy, convergence speed, and robustness. Next, we validate the practicality of the proposed hybrid technique on real-world optimization tasks.

5. CHAOARO for Solving Engineering Design Problems

In this section, five complex engineering design problems, namely the pressure vessel design problem, cantilever beam design problem, tubular column design problem, speed reducer design problem, and rolling element bearing design problem, are used to verify the effectiveness of the proposed CHAOARO at the real-world application level. In contrast to the benchmark functions, these engineering test cases contain several equality and inequality constraints, which present a major challenge for MAs. To deal with the inequality constraints in these problems, we introduce the penalty function [69] to modify the original objective function. Similarly, CHAOARO runs independently 30 times on each problem with the population size ( N ) and the maximum number of iterations ( T ) fixed to 30 and 500, respectively, and the optimal results obtained are compared with those of well-known optimization methods reported in previous studies.

5.1. Design of Pressure Vessel

The design of pressure vessels is a common engineering test case that has appeared in previous studies for evaluating the performance of optimization techniques. In this design, the objective is to minimize the overall fabrication cost of a cylindrical vessel capped at both ends by hemispherical heads. As illustrated in Figure 11, the thickness of the shell ( T s = x 1 ), the thickness of the head ( T h = x 2 ), the vessel's inner radius ( R = x 3 ), and the length of the vessel without heads ( L = x 4 ) are the four decision variables for optimization. This problem is mathematically stated as follows:
Consider x = [ x 1 , x 2 , x 3 , x 4 ] = [ T s , T h , R , L ]
Minimize f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3
Subject to
g1(x) = −x1 + 0.0193 x3 ≤ 0,
g2(x) = −x2 + 0.00954 x3 ≤ 0,
g3(x) = −π x3² x4 − (4/3) π x3³ + 1,296,000 ≤ 0,
g4(x) = x4 − 240 ≤ 0.
Variable range
0 ≤ x1, x2 ≤ 99, 10 ≤ x3, x4 ≤ 200.
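The penalty-function treatment mentioned at the start of this section can be sketched for this problem as follows. The static quadratic penalty below is one common choice, not necessarily the exact scheme of reference [69], and the optimum used in the usage note is the one reported in Table 12:

```python
import math

def pv_cost(x):
    """Fabrication cost of the pressure vessel (objective above)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_constraints(x):
    """Constraints g1..g4 above, each written in the form g(x) <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]

def penalized_cost(x, mu=1e6):
    """Static quadratic penalty: raw cost plus mu * sum(max(0, g)^2)."""
    return pv_cost(x) + mu * sum(max(0.0, g) ** 2 for g in pv_constraints(x))
```

Evaluating the reported optimum x = (0.7783, 0.3847, 40.3254, 199.9213) reproduces a cost of about 5885.6; note that rounded optima can sit on a constraint boundary, so tiny positive constraint values are a rounding artifact rather than genuine infeasibility.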
Table 12 reports the results of CHAOARO and other optimization techniques for the pressure vessel design problem. As can be seen from this table, the minimum cost of 5885.5834 is attained by CHAOARO when the four variables T s , T h , R , and L are set to 0.7783, 0.3847, 40.3254, and 199.9213, respectively. Compared with AO, MVO, WOA, GWO, MFO, HHO, SMA, GJO, ARO, and AOASC, the proposed method provides the best design outcome. Therefore, it is reasonable to believe that CHAOARO has remarkable advantages in resolving such problems.

5.2. Design of Cantilever Beam

The cantilever beam design problem, first introduced in [71], is one of the most representative issues in the area of mechanics and civil engineering. This problem aims to minimize the total weight of a cantilever beam with a square section while satisfying the load-carrying conditions. As illustrated in Figure 12, the heights (or widths) of the five square hollow elements are the decision variables to be taken into account in the minimization process, while the thickness of each element is held constant. The mathematical formulation of this problem can be expressed as follows:
Consider x = [ x 1 , x 2 , x 3 , x 4 , x 5 ]
Minimize f ( x ) = 0.6224 ( x 1 + x 2 + x 3 + x 4 + x 5 )
Subject to
g(x) = 61/x1³ + 27/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0
Variable range
0.01 ≤ x1, x2, x3, x4, x5 ≤ 100
Table 13 lists the optimum variables and solutions of the different algorithms for the cantilever beam design problem. As it shows, CHAOARO attains a smaller weight, 1.339956, than the majority of the other competitors. In addition, the performance of RUN on this application is equally competitive. These experimental data demonstrate the promising potential of CHAOARO for reducing the total weight of cantilever beams.

5.3. Design of Tubular Column

In this optimization, the task is to design a uniform column of tubular section with length L = 250 cm at minimum cost so as to withstand a compressive load P = 2500 kgf. As illustrated in Figure 13, this problem involves two design variables, namely the column's mean diameter ( d = x 1 ) and the thickness of the tube ( t = x 2 ). Besides, the yield stress ( σ y ), modulus of elasticity ( E ), and density ( ρ ) of the material used to construct the column are 500 kgf/cm², 0.85 × 10⁶ kgf/cm², and 0.0025 kgf/cm³, respectively. The mathematical model is as follows:
Consider x = [ x 1 , x 2 ] = [ d , t ]
Minimize f ( x ) = 9.8 x 1 x 2 + 2 x 1
Subject to
g1(x) = P/(π x1 x2 σy) − 1 ≤ 0,
g2(x) = 8 P L²/(π³ E x1 x2 (x1² + x2²)) − 1 ≤ 0,
g3(x) = 2.0/x1 − 1 ≤ 0,
g4(x) = x1/14 − 1 ≤ 0,
g5(x) = 0.2/x2 − 1 ≤ 0,
g6(x) = x2/8 − 1 ≤ 0.
Variable range
2 ≤ x1 ≤ 14, 0.2 ≤ x2 ≤ 0.8.
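A quick sanity check of the objective above against the optimum reported in Table 14 (the values d = 5.45218 and t = 0.29163 are taken from that table):

```python
def column_cost(d, t):
    """Tubular column objective f(x) = 9.8 x1 x2 + 2 x1, with x1 = d, x2 = t."""
    return 9.8 * d * t + 2.0 * d
```

Substituting the reported optimum reproduces the quoted cost of about 26.486 (the last digits differ slightly because the published variables are rounded).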
Table 14 records the comparison results between the proposed CHAOARO and the remaining algorithms on the tubular column design problem. It is evident that CHAOARO obtains the lowest optimum cost of 26.48636 among these algorithms when the two variables d and t are set to 5.45218 and 0.29163, respectively, whereas the basic ARO and AO rank 3rd and 10th. This proves that CHAOARO performs better on this design.

5.4. Design of Speed Reducer

The speed reducer is one of the most critical components in the gearbox system [77]. The goal of this optimization problem is to reduce the weight of a speed reducer as much as possible under constraints on the surface stress, bending stress, stress in the shafts, and transverse deflection of the shafts. As depicted in Figure 14, there are seven decision variables to be considered in this optimal design: the face width ( x 1 ), the module of teeth ( x 2 ), the number of teeth in the pinion ( x 3 ), the length of the first shaft between bearings ( x 4 ), the length of the second shaft between bearings ( x 5 ), and the diameters of the first and second shafts ( x 6 and x 7 ). The mathematical formulation of this problem is given as follows:
Consider x = [ x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 ]
Minimize
f(x) = 0.7854 x1 x2² (3.3333 x3² + 14.9334 x3 − 43.0934) − 1.508 x1 (x6² + x7²) + 7.4777 (x6³ + x7³) + 0.7854 (x4 x6² + x5 x7²)
Subject to
g1(x) = 27/(x1 x2² x3) − 1 ≤ 0,
g2(x) = 397.5/(x1 x2² x3²) − 1 ≤ 0,
g3(x) = 1.93 x4³/(x2 x6⁴ x3) − 1 ≤ 0,
g4(x) = 1.93 x5³/(x2 x7⁴ x3) − 1 ≤ 0,
g5(x) = √((745 x4/(x2 x3))² + 16.9 × 10⁶)/(110 x6³) − 1 ≤ 0,
g6(x) = √((745 x5/(x2 x3))² + 157.5 × 10⁶)/(85 x7³) − 1 ≤ 0,
g7(x) = x2 x3/40 − 1 ≤ 0,
g8(x) = 5 x2/x1 − 1 ≤ 0,
g9(x) = x1/(12 x2) − 1 ≤ 0,
g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0.
Variable range
2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.
CHAOARO is employed to optimize this problem and compared with seven other algorithms. The obtained results are summarized in Table 15. From this table, it is not difficult to observe that the proposed CHAOARO outperforms all the other comparison methods published in the literature and achieves the minimum weight of 2994.4488. This effectively indicates that CHAOARO possesses an excellent global optimization capability in the design of the speed reducer.

5.5. Design of Rolling Element Bearing

The last test case, the rolling element bearing design problem, contains ten decision variables and nine constraints for modeling and geometry-based limitations. Its main purpose is to maximize the dynamic load-carrying capacity of a rolling element bearing, as illustrated in Figure 15. The geometric design parameters are the pitch diameter ( D m ), ball diameter ( D b ), number of balls ( Z ), inner ( f i ) and outer ( f o ) raceway curvature radius coefficients, K d m i n , K d m a x , δ , e , and ζ . Mathematically, this problem is described below:
Consider x = [ D m , D b , Z , f i , f o , K d min , K d max , δ , e , ζ ]
Maximize
f(x) = fc Z^(2/3) Db^1.8, if Db ≤ 25.4 mm; f(x) = 3.647 fc Z^(2/3) Db^1.4, otherwise
Subject to
g1(x) = φ0/(2 sin⁻¹(Db/Dm)) − Z + 1 ≥ 0,
g2(x) = 2 Db − Kdmin (D − d) > 0,
g3(x) = Kdmax (D − d) − 2 Db ≥ 0,
g4(x) = ζ Bw − Db ≤ 0,
g5(x) = Dm − 0.5 (D + d) ≥ 0,
g6(x) = (0.5 + e)(D + d) − Dm ≥ 0,
g7(x) = 0.5 (D − Dm − Db) − δ Db ≥ 0,
g8(x) = fi ≥ 0.515,
g9(x) = fo ≥ 0.515.
where
$$f_c = 37.91\left[1+\left\{1.04\left(\frac{1-\gamma}{1+\gamma}\right)^{1.72}\left(\frac{f_i(2f_o-1)}{f_o(2f_i-1)}\right)^{0.41}\right\}^{10/3}\right]^{-0.3}\times\left[\frac{\gamma^{0.3}(1-\gamma)^{1.39}}{(1+\gamma)^{1/3}}\right]\left[\frac{2f_i}{2f_i-1}\right]^{0.41},$$
$$\phi_0 = 2\pi - 2\cos^{-1}\left(\frac{\{(D-d)/2-3(T/4)\}^2+\{D/2-T/4-D_b\}^2-\{d/2+T/4\}^2}{2\{(D-d)/2-3(T/4)\}\{D/2-T/4-D_b\}}\right),$$
$$\gamma = \frac{D_b}{D_m}, \quad f_i = \frac{r_i}{D_b}, \quad f_o = \frac{r_o}{D_b}, \quad T = D - d - 2D_b,$$
$$D = 160, \quad d = 90, \quad B_w = 30, \quad r_i = r_o = 11.033,$$
$$0.5(D+d) \le D_m \le 0.6(D+d), \quad 0.15(D-d) \le D_b \le 0.45(D-d), \quad 0.515 \le f_i, f_o \le 0.6,$$
$$4 \le Z \le 50, \quad 0.4 \le K_{d\min} \le 0.5, \quad 0.6 \le K_{d\max} \le 0.7, \quad 0.3 \le \delta \le 0.4, \quad 0.02 \le e \le 0.1, \quad 0.6 \le \zeta \le 0.85.$$
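The capacity formulas above translate directly into code. The sketch below implements $f_c$ and the piecewise objective; the design values in the usage line are rounded near-optimal values from the literature and are included only for illustration:

```python
import math

def dynamic_load_capacity(Dm, Db, Z, fi, fo):
    # Basic dynamic load capacity of a rolling element bearing,
    # following the f_c and f(x) formulation given above.
    gamma = Db / Dm
    fc = (37.91
          * (1 + (1.04 * ((1 - gamma) / (1 + gamma))**1.72
                  * (fi * (2 * fo - 1) / (fo * (2 * fi - 1)))**0.41)**(10 / 3))**-0.3
          * (gamma**0.3 * (1 - gamma)**1.39 / (1 + gamma)**(1 / 3))
          * (2 * fi / (2 * fi - 1))**0.41)
    if Db <= 25.4:           # first branch of the piecewise objective (Db in mm)
        return fc * Z**(2 / 3) * Db**1.8
    return 3.647 * fc * Z**(2 / 3) * Db**1.4

# Rounded near-optimal design reported in the literature (illustrative only).
capacity = dynamic_load_capacity(Dm=125.72, Db=21.42, Z=11, fi=0.515, fo=0.515)
```

Note that the capacity grows monotonically with the ball count $Z$, which is why constraint $g_1$ (the assembly angle limit) is what ultimately caps the number of balls.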
The detailed results of CHAOARO for this problem are compared with those of other meta-heuristics in Table 16. It can be seen that the developed technique finds a more reliable solution than its peers: the load-carrying capacity of the CHAOARO-optimized design is 85,548.8272, a significant improvement. This instance once again validates the merits of CHAOARO from the practical application perspective.
In summary, this section showcases the effectiveness of the proposed CHAOARO in dealing with real-world engineering test problems subject to different constraints. CHAOARO performs better than the basic AO and ARO, as well as other existing optimizers, delivering high-quality solutions. This is largely attributable to the hybrid operation, the adaptive switching coefficient F , and COBL, which balance and boost the algorithm's exploration and exploitation to varying degrees. In the next section, the superiority of CHAOARO is further illustrated in another practical case study: parameter identification of the PV model.

6. CHAOARO for Parameter Identification of Photovoltaic Model

To cope with the crisis of climate change, environmental pollution, and the depletion of conventional fossil fuels, increased emphasis has been placed on the search for high-quality renewable energy sources in recent years [83]. Among these, solar energy is regarded as one of the most promising because it is clean, abundant, and pollution-free. In most parts of the world, PV systems are widely used to convert solar energy into electrical energy for power generation. The performance of a PV system relies on the chosen PV model and the unknown parameters in that model [84]. Several PV models have been designed, such as the single diode model (SDM), double diode model (DDM), and triple diode model (TDM); among them, the SDM is still the most extensively utilized in practice owing to its simplicity and accuracy. Because PV systems usually operate in harsh outdoor environments, a variety of uncertainties may directly affect the model parameters, thus reducing the utilization efficiency of solar energy. Hence, it is of great practical significance to develop an accurate and robust method for identifying the unknown parameters of the PV model.
In this section, CHAOARO is applied to solve the parameter identification problem of the SDM to further verify the superiority of the proposed method. As the most prevalent model for characterizing the properties of PV power generation, the SDM consists of a photo-generated current source I ph , a parallel diode D , a parallel resistance R sh , and an equivalent series resistance R s , as shown in Figure 16. In this model, according to Kirchhoff's current law, the output current I o can be expressed as follows:
$$I_o = I_{ph} - I_d - I_{sh} = I_{ph} - I_{sd}\left[\exp\left(\frac{q(V_o + R_s I_o)}{nkT}\right) - 1\right] - \frac{V_o + I_o R_s}{R_{sh}}$$
where $I_{ph}$ denotes the photo-generated current, $I_d$ denotes the diode current, $I_{sh}$ indicates the shunt resistance current, $I_{sd}$ represents the reverse saturation current of the diode $D$, $q$ is the electron charge, equal to $1.60217646 \times 10^{-19}$ C, $V_o$ is the output voltage, $R_s$ and $R_{sh}$ are the series and parallel resistances, respectively, $n$ is the diode ideality coefficient, $k$ is the Boltzmann constant, equal to $1.3806503 \times 10^{-23}$ J/K, and $T$ stands for the absolute temperature in Kelvin.
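Because Io appears on both sides of Equation (28), evaluating the model current at a given voltage requires a numerical root-finder. A minimal sketch using Newton's method is given below; any parameter set within the bounds of Equation (30) can be passed, and 306.15 K corresponds to the 33 °C operating temperature of the measured cell:

```python
import math

Q = 1.60217646e-19   # electron charge (C)
K = 1.3806503e-23    # Boltzmann constant (J/K)

def sdm_current(Vo, Iph, Isd, Rs, Rsh, n, T=306.15, iters=50):
    # Solve Equation (28) for Io with Newton's method; the equation is
    # implicit because Io appears inside the exponential and the shunt term.
    Vt = n * K * T / Q          # thermal voltage scaled by the ideality factor
    Io = Iph                    # reasonable initial guess (short-circuit level)
    for _ in range(iters):
        e = math.exp((Vo + Rs * Io) / Vt)
        g = Iph - Isd * (e - 1) - (Vo + Rs * Io) / Rsh - Io   # residual of Eq. (28)
        dg = -Isd * e * Rs / Vt - Rs / Rsh - 1                # d(residual)/dIo
        step = g / dg
        Io -= step
        if abs(step) < 1e-12:
            break
    return Io
```

For example, at short circuit (Vo = 0) the computed current is close to Iph, and it falls off sharply as Vo approaches the open-circuit voltage, which is exactly the knee behavior seen in the I-V curve of Figure 17.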
From Equation (28), it can be observed that there are a total of five unknown parameters that need to be estimated for SDM, namely I ph , I sd , R s , R sh , and n .
To identify the PV parameters using the meta-heuristic algorithm, it is necessary to define an objective function for this optimization problem first. Here, the root mean square error (RMSE) [85], which can reflect the degree of error between the actual measured data and the data estimated by CHAOARO, is introduced as the objective function:
$$\min F(X) = \mathrm{RMSE}(X) = \sqrt{\frac{1}{N}\sum_{k=1}^{N} f_k(V_o, I_o, X)^2}$$
where N denotes the number of experimental data points. The smaller the achieved RMSE value, the more accurate the identified parameters.
For SDM, f k ( V o , I o , X ) and X in Equation (29) are as follows:
$$\begin{cases} f_k(V_o, I_o, X) = I_{ph} - I_{sd}\left[\exp\left(\dfrac{q(V_o + R_s I_o)}{nkT}\right) - 1\right] - \dfrac{V_o + I_o R_s}{R_{sh}} - I_o \\ X = \{I_{ph}, I_{sd}, R_s, R_{sh}, n\} \end{cases}$$
where $0 \le I_{ph} \le 1$, $0 \le I_{sd} \le 1$, $0 \le R_s \le 0.5$, $0 \le R_{sh} \le 100$, $1 \le n \le 2$.
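A direct implementation of this objective is straightforward: as Equation (30) shows, f_k is evaluated at the *measured* pair (Vo, Io), so no implicit solve is needed inside the objective. The handful of data points below are low- to mid-voltage samples reproduced from the commonly used R.T.C France measurements and should be treated as illustrative, not as the full 26-point dataset:

```python
import math

Q, K, T = 1.60217646e-19, 1.3806503e-23, 306.15  # charge, Boltzmann, 33 degC in K

# Illustrative subset of measured (Vo, Io) pairs of the R.T.C France cell.
DATA = [(-0.2057, 0.7640), (0.0057, 0.7605), (0.1185, 0.7590),
        (0.2545, 0.7555), (0.4137, 0.7280)]

def rmse(params, data=DATA):
    # Equations (29)-(30): residual f_k uses the measured Vo and Io directly.
    Iph, Isd, Rs, Rsh, n = params
    Vt = n * K * T / Q
    err2 = 0.0
    for Vo, Io in data:
        fk = (Iph - Isd * (math.exp((Vo + Rs * Io) / Vt) - 1)
              - (Vo + Rs * Io) / Rsh - Io)
        err2 += fk * fk
    return math.sqrt(err2 / len(data))
```

A meta-heuristic such as CHAOARO then simply minimizes `rmse` over the five-dimensional box given in Equation (30); plugging in parameter values close to the commonly reported SDM estimates yields an RMSE on the order of 10⁻⁴ for these points.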
Based on the actual measured current-voltage data in reference [86], where a commercial silicon R.T.C France solar cell with a diameter of 57 mm was operated under $1000\ \mathrm{W/m^2}$ at 33 °C, we utilize the proposed method to identify the five unknown parameters of the SDM. CHAOARO is run independently 30 times on this test problem with the population size ( N ) and the maximum number of iterations ( T ) set to 30 and 500, respectively, and the obtained optimal parameters and RMSE value are reported in Table 17.
As can be seen from Table 17, CHAOARO obtains the smallest RMSE value of 7.7330 × 10⁻⁴ compared to other state-of-the-art peer competitors, namely ABC [85], SMA [87], IHBA [88], IBES [89], CPSO [90], OBDSSA [91], and GOTLBO [92], indicating that CHAOARO achieves the highest parameter identification accuracy. Furthermore, the best parameters extracted by CHAOARO are used to generate the current-voltage (I-V) and power-voltage (P-V) characteristic curves shown in Figure 17. From this figure, it is clear that the values estimated by CHAOARO fit the actual measured data well, which again demonstrates that the proposed method has excellent prospects and robustness in solving the parameter identification problem of the SDM for PV systems.

7. Conclusions and Future Research

In this study, drawing on the complementary characteristics of AO and ARO, we combine these two algorithms and propose a new hybrid meta-heuristic optimization paradigm, called CHAOARO, to provide more reliable solutions for complex global optimization problems. The proposed method aims to overcome the limitations of the original algorithms' search strategies, enrich population diversity, and avoid stagnation in local optima. In CHAOARO, the global exploration stage of AO and the local exploitation stage of ARO are first integrated to obtain superior overall performance and convergence speed. Secondly, based on the starvation ratio F in the African Vultures Optimization Algorithm, an adaptive switching mechanism is designed to better balance the exploration and exploitation patterns of the algorithm. Moreover, the chaotic opposition-based learning tactic is utilized to help individuals explore more unknown search domains and increase the possibility of escaping local optima. To thoroughly evaluate the performance of CHAOARO, we use 23 classical benchmark functions, including thirteen unimodal and multimodal benchmark functions under different dimensions and ten fixed-dimension multimodal benchmark functions, as well as the well-known IEEE CEC2019 test suite. The statistical significance of the differences between competitor algorithms is verified using the Friedman ranking test and the Wilcoxon rank-sum test. Compared with AO, ARO, and seven other state-of-the-art meta-heuristics, the experimental results credibly demonstrate that CHAOARO has superior competitiveness in terms of convergence speed, solution accuracy, local optima avoidance, and stability, whether solving simple or challenging numerical problems. To prove the effectiveness of the proposed method in practical applications, CHAOARO is further applied to tackle five engineering design problems and the parameter extraction problem of the PV model.
Our findings indicate that CHAOARO is a promising auxiliary tool for addressing real-world optimization tasks.
Although CHAOARO effectively outperforms the original AO and ARO, its optimization performance still has room for further improvement. As can be seen from Table 6, the results of CHAOARO on functions F7 and F8 are not the best. In future work, we will try to further enhance the exploration and exploitation capabilities of CHAOARO for better solution accuracy by introducing other modification techniques, such as adaptive β-hill climbing, Lévy flight, and evolutionary population dynamics. The more challenging IEEE CEC2022 test suite will also be employed to evaluate the performance differences between CHAOARO and improved variants of AO. In addition, CHAOARO can be applied to real-world optimization problems in more fields, for instance, multi-level threshold image segmentation, node localization in wireless sensor networks, path planning for drones in three-dimensional environments, parameter self-tuning of speed proportional-integral-derivative (PID) controllers for brushless direct current motors, and fault diagnosis of rolling bearings. It would also make sense to develop a multi-objective version of CHAOARO for complex multi-objective projects.

Author Contributions

Conceptualization, Y.W. and Y.X.; methodology, Y.W.; software, Y.X.; validation, Y.W. and Y.X.; formal analysis, J.L.; writing—original draft preparation, Y.W. and J.L.; writing—review and editing, Y.X.; visualization, Y.W.; supervision, J.L. and Y.G.; funding acquisition, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China under Grant 52075090, Key Research and Development Program Projects of Heilongjiang Province under Grant GA21A403, Fundamental Research Funds for the Central Universities under Grant 2572021BF01, and Natural Science Foundation of Heilongjiang Province under Grant YQ2021E002.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors are grateful to the editor and reviewers for their constructive comments and suggestions, which have improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xiao, Y.; Sun, X.; Guo, Y.; Cui, H.; Wang, Y.; Li, J.; Li, S. An enhanced honey badger algorithm based on Lévy flight and refraction opposition-based learning for engineering design problems. J. Intell. Fuzzy Syst. 2022, 43, 4517–4540. [Google Scholar] [CrossRef]
  2. Jia, H.; Zhang, W.; Zheng, R.; Wang, S.; Leng, X.; Cao, N. Ensemble mutation slime mould algorithm with restart mechanism for feature selection. Int. J. Intell. Syst. 2021, 37, 2335–2370. [Google Scholar] [CrossRef]
  3. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L.; Liu, Y. A hybrid arithmetic optimization and golden sine algorithm for solving industrial engineering design problems. Mathematics 2022, 10, 1567. [Google Scholar] [CrossRef]
  4. Abd Elaziz, M.; Abualigah, L.; Attiya, I. Advanced optimization technique for scheduling IoT tasks in cloud-fog computing environments. Future Gener. Comput. Syst. 2021, 124, 142–154. [Google Scholar] [CrossRef]
  5. Guo, W.; Xu, P.; Dai, F.; Hou, Z. Harris hawks optimization algorithm based on elite fractional mutation for data clustering. Appl. Intell. 2022, 52, 11407–11433. [Google Scholar] [CrossRef]
  6. Shi, K.; Liu, C.; Sun, Z.; Yue, X. Coupled orbit-attitude dynamics and trajectory tracking control for spacecraft electromagnetic docking. Appl. Math. Model. 2022, 101, 553–572. [Google Scholar] [CrossRef]
  7. Liu, C.; Yue, X.; Zhang, J.; Shi, K. Active disturbance rejection control for delayed electromagnetic docking of spacecraft in elliptical orbits. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 2257–2268. [Google Scholar] [CrossRef]
  8. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Meth. Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  9. Yang, J.; Liu, Z.; Zhang, X.; Hu, G. Elite chaotic manta ray algorithm integrated with chaotic initialization and opposition-based learning. Mathematics 2022, 10, 2960. [Google Scholar] [CrossRef]
  10. Xiao, Y.; Guo, Y.; Cui, H.; Wang, Y.; Li, J.; Zhang, Y. IHAOAVOA: An improved hybrid aquila optimizer and African vultures optimization algorithm for global optimization problems. Math. Biosci. Eng. 2022, 19, 10963–11017. [Google Scholar] [CrossRef]
  11. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified remora optimization algorithm with multistrategies for global optimization problem. Mathematics 2022, 10, 3604. [Google Scholar] [CrossRef]
  12. Jia, H.; Sun, K.; Zhang, W.; Leng, X. An enhanced chimp optimization algorithm for continuous optimization domains. Complex Intell. Syst. 2021, 8, 65–82. [Google Scholar] [CrossRef]
  13. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  14. Storn, R.; Price, K. Differential evolution-A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  15. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  16. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  17. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  18. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  19. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  21. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  22. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  23. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Meth. Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar] [CrossRef]
  25. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  26. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2011, 29, 17–35. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  28. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  29. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  30. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  31. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  32. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  33. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  34. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  35. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  36. Chopra, N.; Mohsin Ansari, M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  37. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  38. Manjarres, D.; Landa-Torres, I.; Gil-Lopez, S.; Del Ser, J.; Bilbao, M.N.; Salcedo-Sanz, S.; Geem, Z.W. A survey on applications of the harmony search algorithm. Eng. Appl. Artif. Intell. 2013, 26, 1818–1831. [Google Scholar] [CrossRef]
  39. Zhang, Q.; Wang, R.; Yang, J.; Ding, K.; Li, Y.; Hu, J. Collective decision optimization algorithm: A new heuristic optimization method. Neurocomputing 2017, 221, 123–137. [Google Scholar] [CrossRef]
  40. Askari, Q.; Younas, I.; Saeed, M. Political optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105703. [Google Scholar] [CrossRef]
  41. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  42. Zheng, R.; Jia, H.; Abualigah, L.; Liu, Q.; Wang, S. Deep ensemble of slime mold algorithm and arithmetic optimization algorithm for global optimization. Processes 2021, 9, 1774. [Google Scholar] [CrossRef]
  43. Zhang, Y.J.; Yan, Y.X.; Zhao, J.; Gao, Z.M. CSCAHHO: Chaotic hybridization algorithm of the Sine Cosine with Harris Hawk optimization algorithms for solving global optimization problems. PLoS ONE 2022, 17, e0263387. [Google Scholar] [CrossRef] [PubMed]
  44. Cheng, X.; Li, J.; Zheng, C.; Zhang, J.; Zhao, M. An improved PSO-GWO algorithm with chaos and adaptive inertial weight for robot path planning. Front. Neurorobot. 2021, 15, 770361. [Google Scholar] [CrossRef] [PubMed]
  45. Kundu, T.; Garg, H. LSMA-TLBO: A hybrid SMA-TLBO algorithm with lévy flight based mutation for numerical optimization and engineering design problems. Adv. Eng. Softw. 2022, 172, 103185. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  47. Guo, Z.; Yang, B.; Han, Y.; He, T.; He, P.; Meng, X.; He, X. Optimal PID tuning of PLL for PV inverter based on aquila optimizer. Front. Energy Res. 2022, 9, 812467. [Google Scholar] [CrossRef]
  48. Fatani, A.; Dahou, A.; Al-Qaness, M.A.A.; Lu, S.; Abd Elaziz, M. Advanced feature extraction and selection approach using deep learning and Aquila optimizer for IoT intrusion detection system. Sensors 2021, 22, 140. [Google Scholar] [CrossRef]
  49. Zhao, J.; Gao, Z.-M.; Chen, H.-F. The simplified aquila optimization algorithm. IEEE Access 2022, 10, 22487–22515. [Google Scholar] [CrossRef]
  50. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  51. Yu, H.; Jia, H.; Zhou, J.; Hussien, A.G. Enhanced Aquila optimizer algorithm for global optimization and constrained engineering problems. Math. Biosci. Eng. 2022, 19, 14173–14211. [Google Scholar] [CrossRef]
  52. Gao, B.; Shi, Y.; Xu, F.; Xu, X. An improved Aquila optimizer based on search control factor and mutations. Processes 2022, 10, 1451. [Google Scholar] [CrossRef]
  53. Verma, M.; Sreejeth, M.; Singh, M. Application of hybrid metaheuristic technique to study influence of core material and core trench on performance of surface inset PMSM. Arab. J. Sci. Eng. 2021, 47, 3037–3053. [Google Scholar] [CrossRef]
  54. Zhang, Y.-J.; Yan, Y.-X.; Zhao, J.; Gao, Z.-M. AOAAO: The hybrid algorithm of arithmetic optimization algorithm with aquila optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
  55. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  56. Wang, Y.; Huang, L.; Zhong, J.; Hu, G. LARO: Opposition-based learning boosted artificial rabbits-inspired optimization algorithm with Lévy flight. Symmetry 2022, 14, 2282. [Google Scholar] [CrossRef]
  57. Zhuoran, Z.; Changqiang, H.; Hanqiao, H.; Shangqin, T.; Kangsheng, D. An optimization method: Hummingbirds optimization algorithm. J. Syst. Eng. Electron. 2018, 29, 386–404. [Google Scholar] [CrossRef]
  58. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Meth. Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  59. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  60. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar] [CrossRef]
  61. Nguyen, T.-T.; Wang, H.-J.; Dao, T.-K.; Pan, J.-S.; Liu, J.-H.; Weng, S. An improved slime mold algorithm and its application for optimal operation of cascade hydropower stations. IEEE Access 2020, 8, 226754–226772. [Google Scholar] [CrossRef]
  62. Wang, S.; Jia, H.; Liu, Q.; Zheng, R. An improved hybrid Aquila Optimizer and Harris Hawks Optimization for global optimization. Math. Biosci. Eng. 2021, 18, 7076–7109. [Google Scholar] [CrossRef]
  63. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825. [Google Scholar] [CrossRef]
  64. Xiao, Y.; Sun, X.; Zhang, Y.; Guo, Y.; Wang, Y.; Li, J. An improved slime mould algorithm based on tent chaotic mapping and nonlinear inertia weight. Int. J. Innov. Comput Inf. Control 2021, 17, 2151–2176. [Google Scholar] [CrossRef]
  65. Khishe, M.; Nezhadshahbodaghi, M.; Mosavi, M.R.; Martin, D. A weighted chimp optimization algorithm. IEEE Access 2021, 9, 158508–158539. [Google Scholar] [CrossRef]
  66. Khodadadi, N.; Snasel, V.; Mirjalili, S. Dynamic arithmetic optimization algorithm for truss optimization under natural frequency constraints. IEEE Access 2022, 10, 16188–16208. [Google Scholar] [CrossRef]
  67. Theodorsson-Norheim, E. Friedman and Quade tests: Basic computer program to perform nonparametric two-way analysis of variance and multiple comparisons on ranks of several related samples. Comput. Biol. Med. 1987, 17, 85–99. [Google Scholar] [CrossRef]
  68. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  69. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  70. Abualigah, L.; Ewees, A.A.; Al-qaness, M.A.A.; Elaziz, M.A.; Yousri, D.; Ibrahim, R.A.; Altalhi, M. Boosting arithmetic optimization algorithm by sine cosine algorithm and levy flight distribution for solving engineering optimization problems. Neural Comput. Appl. 2022, 34, 8823–8852. [Google Scholar] [CrossRef]
  71. Chickermane, H.; Gea, H.C. Structural optimization using a new local approximation method. Int. J. Numer. Methods Eng. 1996, 39, 829–846. [Google Scholar] [CrossRef]
  72. Cheng, M.-Y.; Prayogo, D. Symbiotic organisms search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  73. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  74. Song, M.; Jia, H.; Abualigah, L.; Liu, Q.; Lin, Z.; Wu, D.; Altalhi, M. Modified harris hawks optimization algorithm with exploration factor and random walk strategy. Comput. Intell. Neurosci. 2022, 2022, 4673665. [Google Scholar] [CrossRef] [PubMed]
  75. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639. [Google Scholar] [CrossRef]
  76. Garg, H. A hybrid GSA-GA algorithm for constrained optimization problems. Inf. Sci. 2019, 478, 499–523. [Google Scholar] [CrossRef]
  77. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf mongoose optimization algorithm. Comput. Meth. Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  78. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  79. Baykasoğlu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  80. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  81. Gupta, S.; Deep, K.; Moayedi, H.; Foong, L.K.; Assad, A. Sine cosine grey wolf optimizer to solve engineering design problems. Eng. Comput. 2021, 37, 3123–3149. [Google Scholar] [CrossRef]
  82. Xiao, Y.; Sun, X.; Guo, Y.; Li, S.; Zhang, Y.; Wang, Y. An improved gorilla troops optimizer based on lens opposition-based learning and adaptive β-Hill climbing for global optimization. CMES-Comput. Model. Eng. Sci. 2022, 131, 815–850. [Google Scholar] [CrossRef]
  83. Chen, X.; Yu, K. Hybridizing cuckoo search algorithm with biogeography-based optimization for estimating photovoltaic model parameters. Sol. Energy 2019, 180, 192–206. [Google Scholar] [CrossRef]
  84. Zhao, J.; Zhang, Y.; Li, S.; Wang, Y.; Yan, Y.; Gao, Z. A chaotic self-adaptive JAYA algorithm for parameter extraction of photovoltaic models. Math. Biosci. Eng. 2022, 19, 5638–5670. [Google Scholar] [CrossRef] [PubMed]
  85. Oliva, D.; Cuevas, E.; Pajares, G. Parameter identification of solar cells using artificial bee colony optimization. Energy 2014, 72, 93–102. [Google Scholar] [CrossRef]
  86. Easwarakhanthan, T.; Bottin, J.; Bouhouch, I.; Boutrit, C. Nonlinear minimization algorithm for determining the solar cell parameters with microcomputers. Int. J. Sol. Energy 1986, 4, 1–12. [Google Scholar] [CrossRef]
  87. Kumar, C.; Raj, T.D.; Premkumar, M.; Raj, T.D. A new stochastic slime mould optimization algorithm for the estimation of solar photovoltaic cell parameters. Optik 2020, 223, 165277. [Google Scholar] [CrossRef]
  88. Lei, W.; He, Q.; Yang, L.; Jiao, H. Solar photovoltaic cell parameter identification based on improved honey badger algorithm. Sustainability 2022, 14, 8897. [Google Scholar] [CrossRef]
  89. Ramadan, A.; Kamel, S.; Hassan, M.H.; Khurshaid, T.; Rahmann, C. An improved bald eagle search algorithm for parameter estimation of different photovoltaic models. Processes 2021, 9, 1127. [Google Scholar] [CrossRef]
  90. Huang, W.; Jiang, C.; Xue, L.; Song, D. Extracting solar cell model parameters based on chaos particle swarm algorithm. In Proceedings of the 2011 International Conference on Electric Information and Control Engineering, Wuhan, China, 15–17 April 2011; pp. 398–402. [Google Scholar] [CrossRef]
  91. Wang, Z.; Ding, H.; Yang, J.; Wang, J.; Li, B.; Yang, Z.; Hou, P. Advanced orthogonal opposition-based learning-driven dynamic salp swarm algorithm: Framework and case studies. IET Control Theory Appl. 2022, 16, 945–971. [Google Scholar] [CrossRef]
  92. Chen, X.; Yu, K.; Du, W.; Zhao, W.; Liu, G. Parameters identification of solar cell models using generalized oppositional teaching learning based optimization. Energy 2016, 99, 170–180. [Google Scholar] [CrossRef]
Figure 1. Flow chart of AO.
Figure 1. Flow chart of AO.
Processes 10 02703 g001
Figure 2. Flow chart of ARO.
Figure 2. Flow chart of ARO.
Processes 10 02703 g002
Figure 3. Dynamic behavior of F during 1000 iterations.
Figure 3. Dynamic behavior of F during 1000 iterations.
Processes 10 02703 g003
Figure 4. Principle of the traditional opposition-based learning.
Figure 4. Principle of the traditional opposition-based learning.
Processes 10 02703 g004
Figure 5. Visualization of ten commonly used chaotic maps.
Figure 5. Visualization of ten commonly used chaotic maps.
Processes 10 02703 g005
Figure 6. Flow chart of the proposed CHAOARO.
Figure 6. Flow chart of the proposed CHAOARO.
Processes 10 02703 g006
Figure 7. Convergence curves of CHAOARO and other comparison algorithms on 23 benchmark functions.
Figure 7. Convergence curves of CHAOARO and other comparison algorithms on 23 benchmark functions.
Processes 10 02703 g007aProcesses 10 02703 g007b
Figure 8. Boxplots of CHAOARO and other comparison algorithms on some benchmark functions.
Figure 8. Boxplots of CHAOARO and other comparison algorithms on some benchmark functions.
Processes 10 02703 g008
Figure 9. Friedman mean ranking of CHAOARO and its peers in different dimensions.
Figure 9. Friedman mean ranking of CHAOARO and its peers in different dimensions.
Processes 10 02703 g009
Figure 10. Convergence curves of CHAOARO and other comparison algorithms on 10 CEC2019 test functions.
Figure 10. Convergence curves of CHAOARO and other comparison algorithms on 10 CEC2019 test functions.
Processes 10 02703 g010
Figure 11. Schematic illustration of pressure vessel design problem.
Figure 11. Schematic illustration of pressure vessel design problem.
Processes 10 02703 g011
Figure 12. Schematic illustration of cantilever beam design problem.
Figure 12. Schematic illustration of cantilever beam design problem.
Processes 10 02703 g012
Figure 13. Schematic illustration of tubular column design problem.
Figure 13. Schematic illustration of tubular column design problem.
Processes 10 02703 g013
Figure 14. Schematic illustration of speed reducer design problem.
Figure 15. Schematic illustration of rolling element bearing design problem.
Figure 16. Structure of single diode model.
Figure 17. Fitting curves between the measured data and estimated data obtained by CHAOARO on the SDM. (a) I-V characteristics; (b) P-V characteristics.
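The single diode model of Figure 16 defines the PV output current implicitly through $I = I_{ph} - I_0[\exp((V + I R_s)/(n V_t)) - 1] - (V + I R_s)/R_{sh}$, where $V_t = kT/q$ is the thermal voltage. A fixed-point sketch of solving this equation is shown below; the parameter values are placeholders of the typical order of magnitude for a silicon cell, not the paper's identified values:

```python
import math

def sdm_current(V, Iph, I0, n, Rs, Rsh, T=306.15, n_iter=200):
    """Solve the implicit single diode model equation for the output current I
    by fixed-point iteration. Vt = k*T/q is the thermal voltage."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q
    I = Iph  # start from the photocurrent
    for _ in range(n_iter):
        I = Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh
    return I

# Placeholder parameters (photocurrent, saturation current, ideality factor,
# series and shunt resistance) of a plausible order for a single cell.
I = sdm_current(V=0.4, Iph=0.76, I0=3.2e-7, n=1.48, Rs=0.036, Rsh=53.7)
print(I)
```

In parameter identification, the optimizer searches over (Iph, I0, n, Rs, Rsh) to minimize the error between such model currents and the measured I-V data.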
Table 1. Classification of meta-heuristic algorithms.
| Classification | Algorithm | Inspiration | Year | Reference |
| Evolutionary | Genetic Algorithm (GA) | Evolutionary concepts | 1992 | [13] |
| Evolutionary | Differential Evolution (DE) | Darwin's theory of evolution | 1997 | [14] |
| Evolutionary | Biogeography-Based Optimization (BBO) | Biogeography regarding the migration of species | 2008 | [15] |
| Evolutionary | Tree Growth Algorithm (TGA) | Competition of trees for food and light | 2018 | [16] |
| Physics-based | Simulated Annealing (SA) | Annealing process in metallurgy | 1983 | [17] |
| Physics-based | Gravity Search Algorithm (GSA) | Law of gravity and mass interactions | 2009 | [18] |
| Physics-based | Black Hole Algorithm (BHA) | Black hole phenomenon | 2013 | [19] |
| Physics-based | Multi-Verse Optimizer (MVO) | Multi-verse theory | 2015 | [20] |
| Physics-based | Sine Cosine Algorithm (SCA) | Sine/cosine functions | 2016 | [21] |
| Physics-based | Henry Gas Solubility Optimization (HGSO) | Huddling behavior of gas | 2019 | [22] |
| Physics-based | Arithmetic Optimization Algorithm (AOA) | Distribution behavior of arithmetic operators in mathematics | 2021 | [23] |
| Swarm-based | Particle Swarm Optimization (PSO) | Foraging behavior of bird flocks | 1995 | [24] |
| Swarm-based | Ant Colony Optimization (ACO) | Foraging behavior of some ant species | 2006 | [25] |
| Swarm-based | Cuckoo Search (CS) | Breeding behavior of certain cuckoo species | 2011 | [26] |
| Swarm-based | Grey Wolf Optimizer (GWO) | Leadership hierarchy and hunting mechanism of grey wolves | 2014 | [27] |
| Swarm-based | Ant Lion Optimizer (ALO) | Hunting mechanism of antlions | 2015 | [28] |
| Swarm-based | Moth-Flame Optimization (MFO) | Navigation of moths in nature | 2015 | [29] |
| Swarm-based | Whale Optimization Algorithm (WOA) | Social behavior of humpback whales | 2016 | [30] |
| Swarm-based | Salp Swarm Algorithm (SSA) | Swarming behavior of salps | 2017 | [31] |
| Swarm-based | Harris Hawks Optimization (HHO) | Cooperative behavior and chasing style of Harris' hawks | 2019 | [32] |
| Swarm-based | Tunicate Swarm Algorithm (TSA) | Jet propulsion behavior of tunicates | 2020 | [33] |
| Swarm-based | Slime Mould Algorithm (SMA) | Oscillation mode of slime mould | 2020 | [34] |
| Swarm-based | Reptile Search Algorithm (RSA) | Hunting behavior of crocodiles | 2022 | [35] |
| Swarm-based | Golden Jackal Optimization (GJO) | Hunting behavior of golden jackals | 2022 | [36] |
| Human-based | Teaching Learning-Based Optimization (TLBO) | Teaching and learning in a classroom | 2011 | [37] |
| Human-based | Harmony Search (HS) | Behavior of a music orchestra | 2013 | [38] |
| Human-based | Collective Decision Optimization (CDO) | Decision-making characteristics of humans | 2017 | [39] |
| Human-based | Political Optimizer (PO) | Multi-phased process of politics | 2020 | [40] |
Table 2. Ten chaotic maps.
| No. | Map Name | Equation |
| CM1 | Chebyshev | $\varphi_{i+1} = \cos(i\cos^{-1}(\varphi_i))$ |
| CM2 | Circle | $\varphi_{i+1} = \mathrm{mod}(\varphi_i + b - (a/2\pi)\sin(2\pi\varphi_i),\ 1)$, $a = 0.5$, $b = 0.2$ |
| CM3 | Gauss/mouse | $\varphi_{i+1} = 1$ if $\varphi_i = 0$; otherwise $\varphi_{i+1} = 1/\mathrm{mod}(\varphi_i,\ 1)$ |
| CM4 | Iterative | $\varphi_{i+1} = \sin(a\pi/\varphi_i)$, $a = 0.7$ |
| CM5 | Logistic | $\varphi_{i+1} = a\varphi_i(1 - \varphi_i)$, $a = 4$ |
| CM6 | Piecewise | $\varphi_{i+1} = \varphi_i/P$ for $0 \le \varphi_i < P$; $(\varphi_i - P)/(0.5 - P)$ for $P \le \varphi_i < 0.5$; $(1 - P - \varphi_i)/(0.5 - P)$ for $0.5 \le \varphi_i < 1 - P$; $(1 - \varphi_i)/P$ for $1 - P \le \varphi_i < 1$, with $P = 0.4$ |
| CM7 | Sine | $\varphi_{i+1} = (a/4)\sin(\pi\varphi_i)$, $a = 4$ |
| CM8 | Singer | $\varphi_{i+1} = \mu(7.86\varphi_i - 23.31\varphi_i^2 + 28.75\varphi_i^3 - 13.301875\varphi_i^4)$, $\mu = 1.07$ |
| CM9 | Sinusoidal | $\varphi_{i+1} = a\varphi_i^2\sin(\pi\varphi_i)$, $a = 2.3$ |
| CM10 | Tent | $\varphi_{i+1} = \varphi_i/0.7$ for $\varphi_i < 0.7$; $(10/3)(1 - \varphi_i)$ for $\varphi_i \ge 0.7$ |
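The maps in Table 2 are simple one-dimensional iterations; a short sketch of two of them in their standard textbook forms, seeded with an arbitrary value in (0, 1):

```python
# Two of the chaotic maps from Table 2, iterated from a seed in (0, 1).
def logistic_map(phi, a=4.0):          # CM5: a*phi*(1 - phi)
    return a * phi * (1.0 - phi)

def tent_map(phi):                     # CM10: phi/0.7 below 0.7, else (10/3)*(1 - phi)
    return phi / 0.7 if phi < 0.7 else (10.0 / 3.0) * (1.0 - phi)

def chaotic_sequence(step, seed=0.31, n=10):
    """Iterate a map n times and collect the trajectory."""
    seq, phi = [], seed
    for _ in range(n):
        phi = step(phi)
        seq.append(phi)
    return seq

print(chaotic_sequence(logistic_map, seed=0.31))
```

In CHAOARO, such sequences replace uniform random numbers so that the opposition-based candidates cover the search space more erratically than pseudo-random sampling.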
Table 3. Parameter settings for CHAOARO and other selected competitor algorithms.
| Algorithm | Parameter Setting |
| AO [46] | $U = 0.00565$; $R = 10$; $\omega = 0.005$; $\alpha = 0.1$; $\delta = 0.1$; $G_1 \in [-1,\ 1]$; $G_2$ decreases from 2 to 0 |
| GWO [27] | $a$ decreases from 2 to 0 |
| WOA [30] | $b = 1$; $a_1$ decreases from 2 to 0; $a_2$ decreases from −2 to −1 |
| SCA [21] | $a = 2$ |
| TSA [33] | $P_{min} = 1$; $P_{max} = 4$ |
| GJO [36] | $c_1 = 1.5$ |
| ARO [55] | — |
| WChOA [65] | $f$ decreases from 2.5 to 0; $M$ = Gauss chaotic value |
| DAOA [66] | $\alpha = 25$; $\mu = 0.001$ |
| CHAOARO | $U = 0.00565$; $R = 10$; $\omega = 0.005$ |
Table 4. Characteristics of the 23 classical benchmark functions (UM: unimodal, MM: multimodal, FM: fix-dimension multimodal, Dim: dimension, Range: search boundaries, Fmin: theoretical optimal value).
| Function | Type | Dim (D) | Range | Fmin |
| $F_1(x) = \sum_{i=1}^{D} x_i^2$ | UM | 30 | [−100, 100] | 0 |
| $F_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | UM | 30 | [−10, 10] | 0 |
| $F_3(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2$ | UM | 30 | [−100, 100] | 0 |
| $F_4(x) = \max_i \{|x_i|,\ 1 \le i \le D\}$ | UM | 30 | [−100, 100] | 0 |
| $F_5(x) = \sum_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | UM | 30 | [−30, 30] | 0 |
| $F_6(x) = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$ | UM | 30 | [−100, 100] | 0 |
| $F_7(x) = \sum_{i=1}^{D} i x_i^4 + \mathrm{random}[0,\ 1)$ | UM | 30 | [−1.28, 1.28] | 0 |
| $F_8(x) = \sum_{i=1}^{D} -x_i \sin(\sqrt{|x_i|})$ | MM | 30 | [−500, 500] | −418.9829 × D |
| $F_9(x) = \sum_{i=1}^{D} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | MM | 30 | [−5.12, 5.12] | 0 |
| $F_{10}(x) = -20\exp(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}) - \exp(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)) + 20 + e$ | MM | 30 | [−32, 32] | 0 |
| $F_{11}(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(\frac{x_i}{\sqrt{i}}) + 1$ | MM | 30 | [−600, 600] | 0 |
| $F_{12}(x) = \frac{\pi}{D}\{10\sin^2(\pi y_1) + \sum_{i=1}^{D-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_{i+1})] + (y_D - 1)^2\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k(-x_i - a)^m$ if $x_i < -a$ | MM | 30 | [−50, 50] | 0 |
| $F_{13}(x) = 0.1\{\sin^2(3\pi x_1) + \sum_{i=1}^{D}(x_i - 1)^2[1 + \sin^2(3\pi x_{i+1})] + (x_D - 1)^2[1 + \sin^2(2\pi x_D)]\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$ | MM | 30 | [−50, 50] | 0 |
| $F_{14}(x) = (\frac{1}{500} + \sum_{j=1}^{25}(j + \sum_{i=1}^{2}(x_i - a_{ij})^6)^{-1})^{-1}$ | FM | 2 | [−65, 65] | 0.998 |
| $F_{15}(x) = \sum_{i=1}^{11} [a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}]^2$ | FM | 4 | [−5, 5] | 0.00030 |
| $F_{16}(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | FM | 2 | [−5, 5] | −1.0316 |
| $F_{17}(x) = (x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6)^2 + 10(1 - \frac{1}{8\pi})\cos x_1 + 10$ | FM | 2 | [−5, 5] | 0.398 |
| $F_{18}(x) = [1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)]$ | FM | 2 | [−2, 2] | 3 |
| $F_{19}(x) = -\sum_{i=1}^{4} c_i \exp(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2)$ | FM | 3 | [−1, 2] | −3.8628 |
| $F_{20}(x) = -\sum_{i=1}^{4} c_i \exp(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2)$ | FM | 6 | [0, 1] | −3.32 |
| $F_{21}(x) = -\sum_{i=1}^{5} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | FM | 4 | [0, 10] | −10.1532 |
| $F_{22}(x) = -\sum_{i=1}^{7} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | FM | 4 | [0, 10] | −10.4028 |
| $F_{23}(x) = -\sum_{i=1}^{10} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | FM | 4 | [0, 10] | −10.5363 |
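For reference, two entries of Table 4 written out as plain functions (our own minimal implementations, not taken from the paper's code):

```python
import math

# Two benchmark functions from Table 4, evaluated on a point x of any dimension.
def sphere(x):
    """F1 (Sphere): unimodal, global minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    """F9 (Rastrigin): highly multimodal, global minimum 0 at the origin."""
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

origin = [0.0] * 30
print(sphere(origin), rastrigin(origin))  # both evaluate to 0 at the optimum
```

Unimodal functions such as F1 probe pure exploitation, while the many local minima of F9 stress an algorithm's ability to escape local optima.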
Table 5. Comparison results of different chaotic maps on 23 benchmark functions.
FnCriteriaCM1CM2CM3CM4CM5CM6CM7CM8CM9CM10
F1Avg0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1007.26 × 10−2881.84 × 10−1650.00 × 100
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank11111119101
F2Avg3.68 × 10−2862.81 × 10−2300.00 × 1005.71 × 10−2481.21 × 10−3103.49 × 10−2031.07 × 10−3051.03 × 10−1443.75 × 10−848.25 × 10−204
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1001.21 × 10−1446.69 × 10−840.00 × 100
Rank46152839107
F3Avg0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1002.97 × 10−2865.18 × 10−1650.00 × 100
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank11111119101
F4Avg2.59 × 10−2867.95 × 10−2310.00 × 1003.07 × 10−2485.02 × 10−3111.12 × 10−2033.84 × 10−3065.29 × 10−1452.75 × 10−812.05 × 10−204
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1008.98 × 10−1454.23 × 10−810.00 × 100
Rank46152839107
F5Avg1.19 × 10−23.11 × 10−23.21 × 10−33.04 × 10−28.96 × 10−38.00 × 10−35.91 × 10−33.71 × 10−31.65 × 10−21.37 × 10−2
Std3.74 × 10−21.35 × 10−13.70 × 10−37.81 × 10−23.68 × 10−22.69 × 10−28.18 × 10−35.05 × 10−36.09 × 10−23.63 × 10−2
Rank61019543287
F6Avg4.45 × 10−63.24 × 10−62.13 × 10−66.26 × 10−63.12 × 10−63.52 × 10−63.51 × 10−62.32 × 10−63.74 × 10−64.28 × 10−6
Std8.39 × 10−64.45 × 10−62.86 × 10−61.76 × 10−54.16 × 10−63.94 × 10−65.40 × 10−62.70 × 10−69.22 × 10−64.85 × 10−6
Rank94110365278
F7Avg1.81 × 10−42.44 × 10−41.76 × 10−42.10 × 10−42.65 × 10−42.03 × 10−42.03 × 10−41.80 × 10−42.28 × 10−42.00 × 10−4
Std1.77 × 10−41.90 × 10−41.74 × 10−41.63 × 10−42.40 × 10−42.41 × 10−42.03 × 10−41.38 × 10−41.81 × 10−41.55 × 10−4
Rank39171065284
F8Avg−9051.0832−9152.4981−9400.3304−9281.9785−9290.1890−8903.7512−9024.0377−9007.2763−9003.1787−9282.5212
Std8.87 × 1029.51 × 1027.22 × 1027.99 × 1028.87 × 1025.49 × 1026.66 × 1021.17 × 1036.82 × 1029.29 × 102
Rank65142107893
F9Avg0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank1111111111
F10Avg8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−16
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank1111111111
F11Avg0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank1111111111
F12Avg2.87 × 10−75.11 × 10−71.75 × 10−72.22 × 10−73.24 × 10−72.55 × 10−72.38 × 10−71.62 × 10−72.53 × 10−73.50 × 10−7
Std5.34 × 10−71.37 × 10−62.26 × 10−72.36 × 10−77.49 × 10−72.77 × 10−73.87 × 10−72.23 × 10−73.81 × 10−74.45 × 10−7
Rank71023864159
F13Avg2.73 × 10−61.79 × 10−61.75 × 10−63.86 × 10−63.09 × 10−61.40 × 10−61.99 × 10−61.92 × 10−61.73 × 10−62.92 × 10−6
Std5.59 × 10−62.13 × 10−62.31 × 10−66.32 × 10−65.80 × 10−61.63 × 10−62.19 × 10−63.43 × 10−61.80 × 10−65.76 × 10−6
Rank74310916528
F14Avg1.97 × 1009.98 × 10−19.98 × 10−11.13 × 1001.13 × 1001.84 × 1001.59 × 1001.20 × 1001.20 × 1001.06 × 100
Std2.97 × 1001.75 × 10−147.14 × 10−175.03 × 10−15.24 × 10−12.74 × 1002.18 × 1006.05 × 10−12.35 × 10−13.62 × 10−1
Rank10214598763
F15Avg3.38 × 10−43.08 × 10−43.08 × 10−43.07 × 10−43.52 × 10−43.42 × 10−43.42 × 10−43.69 × 10−43.38 × 10−43.07 × 10−4
Std1.68 × 10−48.34 × 10−81.55 × 10−84.31 × 10−82.35 × 10−41.68 × 10−41.86 × 10−42.35 × 10−41.67 × 10−42.41 × 10−9
Rank64329781051
F16Avg−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std5.76 × 10−166.12 × 10−165.21 × 10−165.83 × 10−165.53 × 10−166.12 × 10−165.61 × 10−165.98 × 10−165.90 × 10−165.83 × 10−16
Rank4915293875
F17Avg3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
Std0.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 1000.00 × 100
Rank1111111111
F18Avg3.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 100
Std1.53 × 10−151.32 × 10−151.26 × 10−152.18 × 10−156.39 × 10−161.31 × 10−155.71 × 10−161.42 × 10−151.40 × 10−151.37 × 10−15
Rank95310241876
F19Avg−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628
Std2.64 × 10−152.60 × 10−152.60 × 10−152.60 × 10−152.67 × 10−152.64 × 10−152.67 × 10−152.55 × 10−152.60 × 10−152.64 × 10−15
Rank6222969126
F20Avg−3.2784−3.2744−3.2982−3.2824−3.2902−3.2586−3.2744−3.2823−3.2902−3.2863
Std5.83 × 10−25.92 × 10−24.84 × 10−25.83 × 10−25.35 × 10−26.03 × 10−25.92 × 10−25.70 × 10−25.35 × 10−25.54 × 10−2
Rank79152108624
F21Avg−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532
Std5.56 × 10−155.63 × 10−155.56 × 10−157.05 × 10−105.63 × 10−155.58 × 10−151.75 × 10−145.56 × 10−155.69 × 10−155.58 × 10−15
Rank16110649184
F22Avg−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029
Std4.66 × 10−165.71 × 10−160.00 × 1002.35 × 10−103.30 × 10−160.00 × 1003.30 × 10−163.32 × 10−151.81 × 10−154.66 × 10−16
Rank57110313985
F23Avg−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364
Std1.68 × 10−151.85 × 10−91.64 × 10−151.71 × 10−151.68 × 10−151.65 × 10−151.55 × 10−151.68 × 10−151.62 × 10−151.65 × 10−15
Rank61039641624
Friedman Mean Rank | 4.6087 | 4.9565 | 1.4348 | 5.0435 | 3.9565 | 4.7391 | 4.0000 | 5.0435 | 5.6522 | 4.2174
Final Ranking | 5 | 7 | 1 | 8 | 2 | 6 | 3 | 8 | 10 | 4
The best values obtained have been highlighted in boldface.
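The Friedman mean ranks reported at the bottom of Tables 5 and 6 are the per-function ranks of each algorithm averaged over all test functions. A minimal sketch of that computation is below; it uses midranks for ties, which is one common convention (the paper's tables instead assign tied algorithms the same integer rank):

```python
def friedman_mean_ranks(scores):
    """scores[f][a]: result of algorithm a on function f (lower is better).
    Returns the mean rank of each algorithm across all functions."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            midrank = (i + j) / 2.0 + 1.0   # average rank within the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = midrank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]

# Two functions, three algorithms; algorithms 0 and 1 tie on the second one.
print(friedman_mean_ranks([[0.1, 0.3, 0.2], [0.0, 0.0, 5.0]]))  # [1.25, 2.25, 2.5]
```

The algorithm with the smallest mean rank wins overall, which is how the "Final Ranking" rows are derived.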
Table 6. Comparison results of CHAOARO and other algorithms on 23 benchmark functions.
FnCriteriaAOGWOWOASCATSAGJOAROWChOADAOACHAOARO
F1Avg2.48 × 10−1031.58 × 10−273.55 × 10−724.81 × 1012.04 × 10−1943.60 × 10−544.10 × 10−586.78 × 10−2818.21 × 1000.00 × 100
Std1.36 × 10−1023.76 × 10−271.94 × 10−711.54 × 1020.00 × 1009.28 × 10−542.08 × 10−570.00 × 1004.32 × 1000.00 × 100
Rank48510376291
F2Avg2.33 × 10−549.47 × 10−172.50 × 10−511.71 × 10−21.20 × 10−1001.97 × 10−322.43 × 10−325.44 × 10−1451.18 × 1060.00 × 100
Std1.27 × 10−536.62 × 10−178.78 × 10−512.71 × 10−23.13 × 10−1002.56 × 10−327.41 × 10−327.88 × 10−1455.23 × 1060.00 × 100
Rank48593672101
F3Avg3.34 × 10−1029.76 × 10−63.82 × 1048.88 × 1032.12 × 10−1822.19 × 10−178.18 × 10−403.76 × 10−1901.88 × 1030.00 × 100
Std1.83 × 10−1011.46 × 10−51.46 × 1045.24 × 1030.00 × 1006.63 × 10−174.39 × 10−390.00 × 1006.16 × 1020.00 × 100
Rank47109365281
F4Avg1.08 × 10−517.69 × 10−74.74 × 1013.94 × 1018.50 × 10−922.27 × 10−161.32 × 10−233.50 × 10−1371.25 × 1010.00 × 100
Std4.12 × 10−516.64 × 10−73.12 × 1011.27 × 1013.06 × 10−913.94 × 10−166.84 × 10−231.68 × 10−1365.92 × 1000.00 × 100
Rank47109365281
F5Avg9.19 × 10−32.71 × 1012.81 × 1016.27 × 1042.86 × 1012.77 × 1013.79 × 1002.90 × 1011.22 × 1032.66 × 10−3
Std3.46 × 10−38.65 × 10−14.52 × 10−11.63 × 1054.04 × 10−18.07 × 10−18.90 × 1002.05 × 10−31.39 × 1034.89 × 10−4
Rank24610753891
F6Avg1.83 × 10−48.21 × 10−14.18 × 10−11.92 × 1016.30 × 1002.58 × 1001.63 × 10−32.39 × 1008.07 × 1001.53 × 10−6
Std2.56 × 10−44.40 × 10−12.75 × 10−13.49 × 1018.02 × 10−14.71 × 10−17.53 × 10−42.93 × 10−12.86 × 1002.28 × 10−6
Rank25410873691
F7Avg9.23 × 10−52.05 × 10−32.17 × 10−31.03 × 10−17.55 × 10−55.59 × 10−48.47 × 10−41.52 × 10−49.77 × 10−28.52 × 10−5
Std9.80 × 10−51.12 × 10−32.10 × 10−31.09 × 10−17.24 × 10−53.97 × 10−47.51 × 10−41.39 × 10−43.64 × 10−28.29 × 10−5
Rank37810156492
F8Avg−6.45 × 103−5.98 × 103−1.05 × 104−3.70 × 103−3.35 × 103−4.28 × 103−9.05 × 103−2.49 × 103−7.33 × 103−1.03 × 104
Std2.07 × 1038.50 × 1023.02 × 1021.67 × 1034.30 × 1021.26 × 1038.17 × 1024.40 × 1027.00 × 1024.89 × 102
Rank56189731042
F9Avg0.00 × 1002.63 × 1007.52 × 1004.41 × 1013.50 × 1010.00 × 1000.00 × 1000.00 × 1005.34 × 1010.00 × 100
Std0.00 × 1002.70 × 1004.12 × 1014.34 × 1015.76 × 1010.00 × 1000.00 × 1000.00 × 1001.33 × 1010.00 × 100
Rank16798111101
F10Avg8.88 × 10−161.03 × 10−134.80 × 10−151.22 × 1014.56 × 10−157.28 × 10−158.88 × 10−164.20 × 10−153.13 × 1008.88 × 10−16
Std0.00 × 1001.73 × 10−143.00 × 10−159.12 × 1006.49 × 10−161.45 × 10−150.00 × 1009.01 × 10−167.69 × 10−10.00 × 100
Rank18610571491
F11Avg0.00 × 1001.95 × 10−36.10 × 10−39.62 × 10−12.60 × 10−30.00 × 1000.00 × 1000.00 × 1001.06 × 1000.00 × 100
Std0.00 × 1005.13 × 10−33.34 × 10−25.54 × 10−18.20 × 10−30.00 × 1000.00 × 1000.00 × 1002.53 × 10−20.00 × 100
Rank16897111101
F12Avg1.09 × 10−64.20 × 10−22.12 × 10−22.91 × 1049.82 × 10−12.42 × 10−17.40 × 10−51.68 × 10−15.65 × 1002.25 × 10−7
Std1.01 × 10−62.06 × 10−21.71 × 10−29.95 × 1043.10 × 10−11.04 × 10−15.54 × 10−53.32 × 10−22.78 × 1003.24 × 10−7
Rank25410873691
F13Avg8.95 × 10−66.09 × 10−15.18 × 10−18.05 × 1052.53 × 1001.71 × 1005.38 × 10−33.00 × 1004.35 × 1002.11 × 10−6
Std1.43 × 10−52.20 × 10−12.41 × 10−12.96 × 1062.93 × 10−12.18 × 10−19.63 × 10−32.30 × 10−55.74 × 1003.41 × 10−6
Rank25410763891
F14Avg5.13 × 1005.33 × 1003.29 × 1002.05 × 1001.17 × 1016.08 × 1009.98 × 10−11.71 × 1002.72 × 1009.98 × 10−1
Std5.01 × 1004.56 × 1003.51 × 1001.90 × 1004.96 × 1004.64 × 1004.12 × 10−171.79 × 1001.27 × 1000.00 × 100
Rank78641092351
F15Avg3.29 × 10−42.44 × 10−36.59 × 10−41.03 × 10−38.27 × 10−33.12 × 10−33.22 × 10−44.11 × 10−21.09 × 10−23.12 × 10−4
Std2.60 × 10−56.08 × 10−34.16 × 10−43.67 × 10−41.76 × 10−26.88 × 10−36.45 × 10−53.91 × 10−28.31 × 10−32.26 × 10−5
Rank36458721091
F16Avg−1.0314−1.0316−1.0316−1.0316−1.0242−1.0316−1.0316−1.0031−0.9228−1.0316
Std1.07 × 10−32.12 × 10−82.70 × 10−103.71 × 10−51.36 × 10−22.61 × 10−75.68 × 10−167.48 × 10−32.82 × 10−15.61 × 10−16
Rank74368529101
F17Avg3.98 × 10−13.98 × 10−13.98 × 10−14.01 × 10−14.00 × 10−13.98 × 10−13.98 × 10−11.19 × 1001.00 × 1003.98 × 10−1
Std2.96 × 10−57.64 × 10−71.50 × 10−53.25 × 10−32.57 × 10−38.90 × 10−50.00 × 1008.70 × 10−17.97 × 10−10.00 × 100
Rank53487611091
F18Avg1.66 × 1013.00 × 1003.00 × 1003.00 × 1001.05 × 1013.00 × 1003.00 × 1003.00 × 1003.00 × 1003.00 × 100
Std1.38 × 1015.16 × 10−51.59 × 10−41.12 × 10−42.22 × 1016.59 × 10−61.47 × 10−151.21 × 10−43.18 × 10−41.28 × 10−15
Rank10475932681
F19Avg−3.8384−3.8611−3.8563−3.8546−3.8595−3.8606−3.8628−3.4620−3.8112−3.8628
Std3.90 × 10−22.64 × 10−31.08 × 10−23.41 × 10−32.60 × 10−33.49 × 10−33.65 × 10−153.35 × 10−11.96 × 10−12.61 × 10−15
Rank83675421091
F20Avg−3.2582−3.2697−3.2483−2.9609−3.1590−3.1552−3.2744−1.7121−3.2744−3.2824
Std7.46 × 10−27.35 × 10−29.16 × 10−23.12 × 10−11.66 × 10−11.12 × 10−15.92 × 10−24.60 × 10−15.92 × 10−25.70 × 10−2
Rank54697821021
F21Avg−10.1521−8.8856−8.2071−2.8999−7.1024−8.7078−9.6483−0.9418−5.8722−10.1532
Std1.11 × 10−32.37 × 1002.71 × 1001.85 × 1001.65 × 1002.45 × 1001.91 × 1003.53 × 10−13.02 × 1009.43 × 10−5
Rank24697531081
F22Avg−10.4012−10.4012−7.2731−2.9022−6.5091−9.2546−9.8260−1.3450−5.8760−10.4029
Std2.14 × 10−31.15 × 10−33.28 × 1001.75 × 1002.06 × 1002.34 × 1001.77 × 1007.23 × 10−13.38 × 1004.66 × 10−16
Rank32697541081
F23Avg−10.5348−10.5346−6.5171−3.6233−6.1982−9.4430−10.3130−1.2424−5.4826−10.5364
Std2.21 × 10−31.02 × 10−32.99 × 1001.44 × 1002.68 × 1002.51 × 1001.22 × 1004.17 × 10−13.73 × 1001.84 × 10−15
Rank23697541081
Friedman Mean Rank | 3.7826 | 5.3478 | 5.7391 | 8.4348 | 6.3913 | 5.5652 | 3.0870 | 6.2609 | 8.2174 | 1.0870
Final Ranking | 3 | 4 | 6 | 9 | 8 | 5 | 2 | 7 | 10 | 1
The best values obtained have been highlighted in boldface.
Table 7. Statistical results of Wilcoxon rank-sum test for different algorithms on 23 benchmark functions.
CHAOARO vs.:
| Fn | AO | GWO | WOA | SCA | TSA | GJO | ARO | WChOA | DAOA |
| F1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F2 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F3 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F4 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F5 | 2.12 × 10^−4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.41 × 10^−9 | 2.95 × 10^−11 | 3.02 × 10^−11 |
| F6 | 2.15 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F7 | 2.92 × 10^−2 | 3.02 × 10^−11 | 2.92 × 10^−9 | 3.02 × 10^−11 | 5.83 × 10^−3 | 7.04 × 10^−7 | 1.47 × 10^−7 | 8.77 × 10^−3 | 3.02 × 10^−11 |
| F8 | 1.07 × 10^−7 | 3.02 × 10^−11 | 3.18 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.74 × 10^−6 | 3.02 × 10^−11 | 4.18 × 10^−9 |
| F9 | NaN | 1.17 × 10^−12 | 1.61 × 10^−1 | 1.21 × 10^−12 | 6.25 × 10^−10 | NaN | NaN | NaN | 1.21 × 10^−12 |
| F10 | NaN | 1.04 × 10^−12 | 3.76 × 10^−8 | 1.21 × 10^−12 | 2.71 × 10^−14 | 1.55 × 10^−13 | NaN | 7.15 × 10^−13 | 1.21 × 10^−12 |
| F11 | NaN | 2.16 × 10^−2 | 3.34 × 10^−3 | 1.21 × 10^−12 | 4.19 × 10^−2 | NaN | NaN | NaN | 1.21 × 10^−12 |
| F12 | 2.32 × 10^−6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F13 | 8.68 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.86 × 10^−11 | 3.02 × 10^−11 |
| F14 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 1.72 × 10^−12 | 3.34 × 10^−5 | 1.72 × 10^−12 | 1.72 × 10^−12 |
| F15 | 3.50 × 10^−9 | 9.76 × 10^−10 | 9.92 × 10^−11 | 3.34 × 10^−11 | 9.92 × 10^−11 | 3.82 × 10^−10 | 3.67 × 10^−3 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F16 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 8.04 × 10^−2 | 1.45 × 10^−11 | 1.45 × 10^−11 |
| F17 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | 1.21 × 10^−12 | 1.21 × 10^−12 |
| F18 | 6.47 × 10^−12 | 6.47 × 10^−12 | 6.47 × 10^−12 | 6.47 × 10^−12 | 6.47 × 10^−12 | 6.47 × 10^−12 | 2.23 × 10^−1 | 6.47 × 10^−12 | 6.47 × 10^−12 |
| F19 | 7.57 × 10^−12 | 7.57 × 10^−12 | 7.57 × 10^−12 | 7.57 × 10^−12 | 7.57 × 10^−12 | 7.57 × 10^−12 | 3.26 × 10^−1 | 7.57 × 10^−12 | 7.57 × 10^−12 |
| F20 | 3.99 × 10^−4 | 1.25 × 10^−4 | 1.25 × 10^−4 | 3.02 × 10^−11 | 1.11 × 10^−6 | 4.68 × 10^−8 | 1.78 × 10^−5 | 3.02 × 10^−11 | 6.76 × 10^−5 |
| F21 | 1.17 × 10^−11 | 3.16 × 10^−12 | 3.16 × 10^−12 | 3.16 × 10^−12 | 3.16 × 10^−12 | 3.16 × 10^−12 | 4.22 × 10^−10 | 3.16 × 10^−12 | 8.44 × 10^−12 |
| F22 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 1.89 × 10^−8 | 2.36 × 10^−12 | 2.36 × 10^−12 |
| F23 | 9.04 × 10^−12 | 9.04 × 10^−12 | 9.04 × 10^−12 | 9.04 × 10^−12 | 9.04 × 10^−12 | 9.04 × 10^−12 | 1.04 × 10^−5 | 9.04 × 10^−12 | 9.04 × 10^−12 |
| +/=/− | 20/3/0 | 23/0/0 | 22/0/1 | 23/0/0 | 23/0/0 | 21/2/0 | 16/4/3 | 21/2/0 | 23/0/0 |
The p-values greater than 0.05 are highlighted in boldface.
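The p-values in Table 7 come from the Wilcoxon rank-sum test comparing paired samples of independent runs; scipy.stats.ranksums is the usual tool for this. A pure-Python sketch of the test's large-sample normal approximation (midranks for ties; the tie correction to the variance is omitted for brevity) is:

```python
import math

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    adequate for samples of roughly 20+ runs each."""
    pooled = sorted((v, 0 if i < len(x) else 1)
                    for i, v in enumerate(list(x) + list(y)))
    n1, n2 = len(x), len(y)
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                      # extend the tie group
        mid = (i + j) / 2.0 + 1.0       # midrank shared by tied values
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability

print(ranksum_p([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))
```

When the two samples are completely separated, the p-value reaches its floor for the given sample sizes; a value near 3.02 × 10^−11, ubiquitous in Table 7, is characteristic of full separation of two 30-run samples.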
Table 8. Average computation time of CHAOARO and other algorithms on all benchmark functions (unit: s).
FnAOGWOWOASCATSAGJOAROWChOADAOACHAOARO
F12.08 × 10−11.91 × 10−15.68 × 10−21.30 × 10−11.51 × 10−12.72 × 10−11.40 × 10−11.86 × 1009.08 × 10−21.72 × 10−1
F21.82 × 10−11.74 × 10−15.39 × 10−21.28 × 10−11.46 × 10−12.31 × 10−11.53 × 10−11.88 × 1008.88 × 10−21.43 × 10−1
F34.79 × 10−13.11 × 10−11.94 × 10−12.60 × 10−12.76 × 10−13.60 × 10−12.79 × 10−11.96 × 1002.20 × 10−12.82 × 10−1
F41.82 × 10−11.80 × 10−15.10 × 10−21.40 × 10−11.48 × 10−12.35 × 10−11.38 × 10−11.93 × 1008.28 × 10−21.44 × 10−1
F52.15 × 10−12.12 × 10−17.87 × 10−21.54 × 10−11.70 × 10−12.54 × 10−11.47 × 10−11.92 × 1001.00 × 10−11.56 × 10−1
F61.83 × 10−11.98 × 10−15.66 × 10−21.37 × 10−11.57 × 10−12.50 × 10−11.39 × 10−11.82 × 1008.77 × 10−21.35 × 10−1
F73.07 × 10−12.55 × 10−11.30 × 10−12.03 × 10−11.94 × 10−13.11 × 10−12.15 × 10−11.87 × 1001.47 × 10−12.16 × 10−1
F82.13 × 10−11.87 × 10−16.55 × 10−21.48 × 10−11.65 × 10−12.46 × 10−11.57 × 10−11.85 × 1001.06 × 10−11.62 × 10−1
F91.93 × 10−12.00 × 10−16.20 × 10−21.34 × 10−11.40 × 10−12.27 × 10−11.24 × 10−11.87 × 1009.87 × 10−21.43 × 10−1
F101.90 × 10−11.89 × 10−16.13 × 10−21.39 × 10−11.38 × 10−12.28 × 10−11.39 × 10−11.81 × 1009.58 × 10−21.36 × 10−1
F112.26 × 10−12.02 × 10−17.32 × 10−21.60 × 10−11.65 × 10−12.45 × 10−11.51 × 10−11.90 × 1001.15 × 10−11.59 × 10−1
F125.45 × 10−13.44 × 10−12.26 × 10−12.99 × 10−13.18 × 10−14.01 × 10−13.28 × 10−11.96 × 1002.83 × 10−13.41 × 10−1
F135.35 × 10−13.42 × 10−12.35 × 10−13.05 × 10−12.98 × 10−13.75 × 10−13.14 × 10−11.93 × 1002.67 × 10−13.33 × 10−1
F149.38 × 10−14.27 × 10−14.27 × 10−14.22 × 10−14.37 × 10−15.09 × 10−15.08 × 10−15.35 × 10−14.28 × 10−15.35 × 10−1
F151.63 × 10−16.73 × 10−25.17 × 10−26.26 × 10−26.15 × 10−21.43 × 10−11.34 × 10−13.42 × 10−14.93 × 10−21.29 × 10−1
F161.60 × 10−15.64 × 10−25.17 × 10−25.36 × 10−25.51 × 10−21.43 × 10−11.30 × 10−12.39 × 10−14.60 × 10−21.33 × 10−1
F171.53 × 10−15.09 × 10−25.12 × 10−24.69 × 10−24.88 × 10−21.23 × 10−11.27 × 10−12.26 × 10−14.61 × 10−21.34 × 10−1
F181.54 × 10−15.07 × 10−24.02 × 10−24.47 × 10−24.44 × 10−21.27 × 10−11.30 × 10−12.23 × 10−14.18 × 10−21.33 × 10−1
F191.81 × 10−17.72 × 10−26.50 × 10−26.95 × 10−26.60 × 10−21.45 × 10−11.39 × 10−12.84 × 10−15.41 × 10−21.39 × 10−1
F201.69 × 10−18.13 × 10−25.37 × 10−27.06 × 10−27.32 × 10−21.55 × 10−11.47 × 10−14.74 × 10−15.24 × 10−21.37 × 10−1
F212.08 × 10−17.92 × 10−26.33 × 10−27.74 × 10−27.97 × 10−21.55 × 10−11.52 × 10−13.79 × 10−16.93 × 10−21.52 × 10−1
F222.19 × 10−19.86 × 10−28.30 × 10−28.25 × 10−28.29 × 10−21.56 × 10−11.46 × 10−13.42 × 10−17.09 × 10−21.54 × 10−1
F232.43 × 10−11.06 × 10−19.30 × 10−29.90 × 10−29.51 × 10−21.72 × 10−11.65 × 10−13.71 × 10−18.37 × 10−21.65 × 10−1
Total runtime | 6.2460 | 4.0796 | 2.3238 | 3.3658 | 3.5097 | 5.4630 | 4.2020 | 27.9750 | 2.7242 | 4.3330
The best values obtained have been highlighted in boldface.
Table 9. Comparison results of CHAOARO and other algorithms on 13 unimodal and multimodal benchmark functions in different dimensions.
FnDimensionCriteriaAOGWOWOASCATSAGJOAROWChOADAOACHAOARO
F150Avg4.99 × 10−1104.10 × 10−202.25 × 10−738.37 × 1026.82 × 10−1863.19 × 10−404.35 × 10−557.27 × 10−2725.52 × 1010.00 × 100
Std2.19 × 10−1093.91 × 10−201.22 × 10−729.04 × 1020.00 × 1006.97 × 10−401.60 × 10−540.00 × 1001.55 × 1010.00 × 100
100Avg9.85 × 10−1021.31 × 10−125.71 × 10−731.06 × 1041.39 × 10−1761.51 × 10−272.53 × 10−521.97 × 10−2629.03 × 1020.00 × 100
Std4.63 × 10−1019.19 × 10−131.87 × 10−727.41 × 1030.00 × 1003.13 × 10−279.66 × 10−520.00 × 1002.17 × 1020.00 × 100
500Avg9.79 × 10−991.54 × 10−34.52 × 10−681.93 × 1051.02 × 10−1628.15 × 10−133.13 × 10−493.35 × 10−2492.86 × 1050.00 × 100
Std4.02 × 10−985.27 × 10−42.45 × 10−677.20 × 1040.00 × 1001.25 × 10−121.69 × 10−480.00 × 1002.11 × 1040.00 × 100
F250Avg9.78 × 10−572.25 × 10−121.95 × 10−496.00 × 10−11.19 × 10−958.35 × 10−257.24 × 10−322.17 × 10−1395.00 × 10140.00 × 100
Std4.50 × 10−561.31 × 10−121.07 × 10−487.89 × 10−13.80 × 10−951.58 × 10−242.45 × 10−313.26 × 10−1392.66 × 10150.00 × 100
100Avg1.06 × 10−614.01 × 10−82.83 × 10−496.44 × 1003.34 × 10−911.33 × 10−179.08 × 10−315.58 × 10−1344.99 × 10400.00 × 100
Std5.83 × 10−611.31 × 10−89.54 × 10−495.09 × 1004.03 × 10−918.41 × 10−181.89 × 10−308.48 × 10−1342.73 × 10410.00 × 100
500Avg1.58 × 10−561.06 × 10−21.68 × 10−491.01 × 1025.50 × 10−835.72 × 10−92.22 × 10−295.75 × 10−1276.86 × 102460.00 × 100
Std8.64 × 10−561.67 × 10−36.06 × 10−496.08 × 1012.13 × 10−832.66 × 10−95.21 × 10−294.62 × 10−127Inf0.00 × 100
F350Avg1.93 × 10−1003.79 × 10−11.96 × 1055.34 × 1046.24 × 10−1753.04 × 10−92.46 × 10−372.57 × 10−1781.73 × 1040.00 × 100
Std1.02 × 10−999.20 × 10−13.78 × 1041.72 × 1040.00 × 1001.18 × 10−81.35 × 10−360.00 × 1004.30 × 1030.00 × 100
100Avg4.93 × 10−987.18 × 1021.19 × 1062.36 × 1051.42 × 10−1662.16 × 1011.45 × 10−341.12 × 10−341.07 × 1050.00 × 100
Std1.75 × 10−975.34 × 1023.34 × 1056.07 × 1040.00 × 1001.18 × 1027.94 × 10−346.16 × 10−349.99 × 1030.00 × 100
500Avg4.70 × 10−1003.29 × 1052.84 × 1076.83 × 1061.59 × 10−1546.14 × 1042.29 × 10−313.03 × 1032.87 × 1060.00 × 100
Std2.24 × 10−997.04 × 1049.61 × 1061.56 × 1063.07 × 10−1546.75 × 1048.60 × 10−311.42 × 1043.62 × 1050.00 × 100
F450Avg2.24 × 10−516.27 × 10−46.94 × 1016.72 × 1012.52 × 10−886.93 × 10−74.71 × 10−235.16 × 10−1264.56 × 1010.00 × 100
Std1.22 × 10−507.18 × 10−42.75 × 1016.48 × 1005.03 × 10−882.77 × 10−61.33 × 10−222.83 × 10−1258.64 × 1000.00 × 100
100Avg1.61 × 10−518.37 × 10−17.93 × 1018.95 × 1012.36 × 10−844.16 × 1001.51 × 10−215.43 × 10−967.35 × 1010.00 × 100
Std7.16 × 10−519.52 × 10−12.07 × 1012.96 × 1003.38 × 10−847.70 × 1006.59 × 10−211.90 × 10−956.21 × 1000.00 × 100
500Avg5.05 × 10−556.36 × 1018.08 × 1019.90 × 1012.01 × 10−758.26 × 1012.60 × 10−195.29 × 10−749.71 × 1010.00 × 100
Std2.03 × 10−546.24 × 1001.81 × 1014.10 × 10−19.25 × 10−754.30 × 1006.57 × 10−192.38 × 10−731.08 × 1000.00 × 100
F550Avg2.36 × 10−24.73 × 1014.82 × 1016.13 × 1064.86 × 1014.78 × 1015.21 × 10−14.90 × 1015.69 × 1031.06 × 10−2
Std4.98 × 10−27.14 × 10−13.67 × 10−16.39 × 1063.00 × 10−17.98 × 10−15.50 × 10−12.95 × 10−46.80 × 1032.23 × 10−2
100Avg2.55 × 10−19.79 × 1019.82 × 1011.20 × 1089.85 × 1019.83 × 1011.70 × 1009.90 × 1018.33 × 1041.36 × 10−2
Std8.35 × 10−17.02 × 10−11.96 × 10−16.73 × 1072.60 × 10−14.54 × 10−12.10 × 1001.09 × 10−13.48 × 1042.41 × 10−2
500Avg1.16 × 1004.98 × 1024.96 × 1022.03 × 1094.98 × 1024.98 × 1027.40 × 1004.99 × 1026.68 × 1081.03 × 10−1
Std1.22 × 1002.63 × 10−13.49 × 10−15.69 × 1082.66 × 10−12.02 × 10−19.29 × 1001.21 × 10−19.84 × 1071.11 × 10−1
F650Avg3.88 × 10−42.77 × 1001.24 × 1009.46 × 1021.12 × 1016.14 × 1003.21 × 10−25.60 × 1005.49 × 1011.50 × 10−5
Std6.94 × 10−44.90 × 10−14.53 × 10−11.13 × 1037.38 × 10−17.29 × 10−12.86 × 10−24.41 × 10−11.31 × 1012.09 × 10−5
100Avg3.18 × 10−49.79 × 1004.52 × 1001.17 × 1042.41 × 1011.65 × 1014.09 × 10−11.56 × 1018.71 × 1021.71 × 10−4
Std6.57 × 10−49.06 × 10−11.24 × 1007.86 × 1038.07 × 10−18.91 × 10−12.00 × 10−17.13 × 10−11.71 × 1023.90 × 10−4
500Avg1.45 × 10−39.11 × 1013.08 × 1012.01 × 1051.25 × 1021.10 × 1024.25 × 1001.12 × 1022.82 × 1057.90 × 10−4
Std3.38 × 10−31.34 × 1008.25 × 1007.46 × 1041.32 × 10−71.36 × 1001.68 × 1001.28 × 1001.94 × 1041.33 × 10−3
F750Avg2.37 × 10−43.52 × 10−33.81 × 10−32.68 × 1009.04 × 10−57.04 × 10−48.22 × 10−41.19 × 10−43.88 × 10−19.69 × 10−5
Std1.96 × 10−41.72 × 10−35.06 × 10−32.83 × 1006.21 × 10−54.95 × 10−45.22 × 10−49.93 × 10−57.76 × 10−21.71 × 10−4
100Avg2.59 × 10−47.68 × 10−33.50 × 10−31.78 × 1028.68 × 10−51.50 × 10−38.56 × 10−41.20 × 10−42.47 × 1001.00 × 10−4
Std2.00 × 10−42.99 × 10−35.11 × 10−38.68 × 1018.17 × 10−57.88 × 10−47.74 × 10−49.80 × 10−54.96 × 10−19.68 × 10−5
500Avg3.18 × 10−44.68 × 10−24.54 × 10−31.47 × 1048.81 × 10−56.37 × 10−31.18 × 10−32.08 × 10−44.20 × 1031.34 × 10−4
Std2.49 × 10−49.83 × 1034.80 × 10−33.11 × 1038.59 × 10−52.99 × 10−36.76 × 10−42.40 × 10−47.61 × 1021.69 × 10−4
F850Avg−8.40 × 103−9.15 × 103−1.68 × 104−4.79 × 103−4.44 × 103−5.49 × 103−1.37 × 104−3.25 × 103−1.13 × 104−1.50 × 104
Std1.71 × 1031.19 × 1033.10 × 1033.80 × 1027.01 × 1021.86 × 1037.95 × 1024.45 × 1021.15 × 1037.76 × 102
100Avg−1.11 × 104−1.63 × 104−3.51 × 104−6.75 × 103−6.32 × 103−9.77 × 103−2.23 × 104−4.55 × 103−1.97 × 104−2.42 × 104
Std3.38 × 1031.56 × 1034.87 × 1036.01 × 1029.38 × 1024.01 × 1032.55 × 1034.37 × 1021.57 × 1031.21 × 103
500Avg−2.86 × 104−5.45 × 104−1.84 × 105−1.54 × 104−1.37 × 104−2.72 × 104−6.19 × 104−1.05 × 104−5.63 × 104−6.46 × 104
Std8.37 × 1031.15 × 1042.77 × 1041.36 × 1032.14 × 1031.40 × 1042.40 × 1031.41 × 1035.34 × 1032.27 × 103
| Fn | D | Criteria | AO | GWO | WOA | SCA | TSA | GJO | ARO | WChOA | DAOA | CHAOARO |
|----|-----|------|----|----|----|----|----|----|----|----|----|----|
| F9 | 50 | Avg | 0.00 × 10^0 | 5.99 × 10^0 | 0.00 × 10^0 | 1.35 × 10^2 | 5.65 × 10^-1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.55 × 10^2 | 0.00 × 10^0 |
| | | Std | 0.00 × 10^0 | 6.77 × 10^0 | 0.00 × 10^0 | 5.68 × 10^1 | 6.26 × 10^-1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.95 × 10^1 | 0.00 × 10^0 |
| | 100 | Avg | 0.00 × 10^0 | 1.01 × 10^1 | 0.00 × 10^0 | 2.86 × 10^2 | 8.35 × 10^-1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 6.03 × 10^2 | 0.00 × 10^0 |
| | | Std | 0.00 × 10^0 | 7.30 × 10^0 | 0.00 × 10^0 | 1.05 × 10^2 | 6.56 × 10^-1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 3.83 × 10^1 | 0.00 × 10^0 |
| | 500 | Avg | 3.03 × 10^-14 | 6.65 × 10^1 | 0.00 × 10^0 | 1.20 × 10^3 | 4.34 × 10^-1 | 3.40 × 10^-12 | 0.00 × 10^0 | 0.00 × 10^0 | 5.03 × 10^3 | 0.00 × 10^0 |
| | | Std | 1.66 × 10^-13 | 1.81 × 10^1 | 0.00 × 10^0 | 4.02 × 10^2 | 5.74 × 10^-1 | 1.72 × 10^-12 | 0.00 × 10^0 | 0.00 × 10^0 | 2.61 × 10^2 | 0.00 × 10^0 |
| F10 | 50 | Avg | 8.88 × 10^-16 | 4.50 × 10^-11 | 4.09 × 10^-15 | 1.63 × 10^1 | 4.80 × 10^-15 | 1.34 × 10^-14 | 8.88 × 10^-16 | 4.44 × 10^-15 | 5.14 × 10^0 | 8.88 × 10^-16 |
| | | Std | 0.00 × 10^0 | 2.51 × 10^-11 | 2.16 × 10^-15 | 6.91 × 10^0 | 1.08 × 10^-15 | 3.33 × 10^-15 | 0.00 × 10^0 | 0.00 × 10^0 | 8.40 × 10^-1 | 0.00 × 10^0 |
| | 100 | Avg | 8.88 × 10^-16 | 1.32 × 10^-7 | 4.80 × 10^-15 | 1.95 × 10^1 | 5.98 × 10^-15 | 5.30 × 10^-14 | 8.88 × 10^-16 | 4.09 × 10^-15 | 1.42 × 10^1 | 8.88 × 10^-16 |
| | | Std | 0.00 × 10^0 | 5.46 × 10^-8 | 2.85 × 10^-15 | 2.95 × 10^0 | 1.79 × 10^-15 | 1.14 × 10^-14 | 0.00 × 10^0 | 1.08 × 10^-15 | 2.81 × 10^0 | 0.00 × 10^0 |
| | 500 | Avg | 8.88 × 10^-16 | 1.73 × 10^-3 | 4.68 × 10^-15 | 1.94 × 10^1 | 7.16 × 10^-15 | 3.13 × 10^-8 | 8.88 × 10^-16 | 4.09 × 10^-15 | 1.94 × 10^1 | 8.88 × 10^-16 |
| | | Std | 0.00 × 10^0 | 2.61 × 10^-4 | 2.27 × 10^-15 | 3.23 × 10^0 | 1.53 × 10^-15 | 1.45 × 10^-8 | 0.00 × 10^0 | 1.08 × 10^-15 | 4.60 × 10^-2 | 0.00 × 10^0 |
| F11 | 50 | Avg | 0.00 × 10^0 | 3.93 × 10^-3 | 1.41 × 10^-2 | 9.60 × 10^0 | 1.60 × 10^-3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.55 × 10^0 | 0.00 × 10^0 |
| | | Std | 0.00 × 10^0 | 7.52 × 10^-3 | 5.43 × 10^-2 | 8.83 × 10^0 | 3.67 × 10^-3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.68 × 10^-1 | 0.00 × 10^0 |
| | 100 | Avg | 0.00 × 10^0 | 5.44 × 10^-3 | 0.00 × 10^0 | 8.96 × 10^1 | 9.84 × 10^-4 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 9.14 × 10^0 | 0.00 × 10^0 |
| | | Std | 0.00 × 10^0 | 1.12 × 10^-2 | 0.00 × 10^0 | 5.30 × 10^1 | 3.01 × 10^-3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 1.64 × 10^0 | 0.00 × 10^0 |
| | 500 | Avg | 0.00 × 10^0 | 2.16 × 10^-2 | 0.00 × 10^0 | 1.68 × 10^3 | 4.69 × 10^-4 | 1.12 × 10^-3 | 0.00 × 10^0 | 0.00 × 10^0 | 2.58 × 10^3 | 0.00 × 10^0 |
| | | Std | 0.00 × 10^0 | 4.03 × 10^-2 | 0.00 × 10^0 | 7.29 × 10^2 | 2.57 × 10^-3 | 6.14 × 10^-3 | 0.00 × 10^0 | 0.00 × 10^0 | 1.96 × 10^2 | 0.00 × 10^0 |
| F12 | 50 | Avg | 2.33 × 10^-6 | 1.16 × 10^-1 | 2.18 × 10^-2 | 1.53 × 10^7 | 1.09 × 10^0 | 3.81 × 10^-1 | 3.12 × 10^-3 | 2.63 × 10^-1 | 1.54 × 10^1 | 7.69 × 10^-7 |
| | | Std | 4.17 × 10^-6 | 7.66 × 10^-2 | 9.32 × 10^-3 | 1.60 × 10^7 | 2.18 × 10^-1 | 9.88 × 10^-2 | 1.23 × 10^-2 | 3.75 × 10^-2 | 9.64 × 10^0 | 1.51 × 10^-6 |
| | 100 | Avg | 1.74 × 10^-6 | 2.95 × 10^-1 | 5.04 × 10^-2 | 3.26 × 10^8 | 1.18 × 10^0 | 5.85 × 10^-1 | 4.94 × 10^-3 | 4.86 × 10^-1 | 2.76 × 10^2 | 8.11 × 10^-7 |
| | | Std | 4.41 × 10^-6 | 7.17 × 10^-2 | 3.13 × 10^-2 | 1.68 × 10^8 | 9.35 × 10^-2 | 8.16 × 10^-2 | 2.75 × 10^-3 | 6.09 × 10^-2 | 5.62 × 10^2 | 1.17 × 10^-6 |
| | 500 | Avg | 1.62 × 10^-6 | 7.47 × 10^-1 | 9.08 × 10^-2 | 5.93 × 10^9 | 1.20 × 10^0 | 9.35 × 10^-1 | 1.37 × 10^-2 | 9.91 × 10^-1 | 1.14 × 10^9 | 5.41 × 10^-7 |
| | | Std | 3.38 × 10^-6 | 4.58 × 10^-2 | 4.73 × 10^-2 | 1.31 × 10^9 | 9.37 × 10^-3 | 2.39 × 10^-2 | 4.41 × 10^-3 | 2.74 × 10^-2 | 2.51 × 10^8 | 6.73 × 10^-7 |
| F13 | 50 | Avg | 1.36 × 10^-5 | 2.18 × 10^0 | 1.13 × 10^0 | 3.16 × 10^7 | 4.73 × 10^0 | 3.52 × 10^0 | 2.83 × 10^-2 | 5.00 × 10^0 | 7.85 × 10^1 | 6.92 × 10^-6 |
| | | Std | 1.29 × 10^-5 | 3.65 × 10^-1 | 4.54 × 10^-1 | 3.59 × 10^7 | 2.08 × 10^-1 | 2.45 × 10^-1 | 2.74 × 10^-2 | 4.93 × 10^-8 | 2.85 × 10^1 | 1.11 × 10^-5 |
| | 100 | Avg | 2.10 × 10^-5 | 6.90 × 10^0 | 2.84 × 10^0 | 4.80 × 10^8 | 9.69 × 10^0 | 8.46 × 10^0 | 2.76 × 10^-1 | 1.00 × 10^1 | 8.03 × 10^3 | 1.40 × 10^-5 |
| | | Std | 2.94 × 10^-5 | 4.47 × 10^-1 | 1.01 × 10^0 | 2.79 × 10^8 | 3.18 × 10^-1 | 3.39 × 10^-1 | 2.19 × 10^-1 | 1.23 × 10^-7 | 9.28 × 10^3 | 2.10 × 10^-5 |
| | 500 | Avg | 7.84 × 10^-4 | 5.11 × 10^1 | 1.90 × 10^1 | 9.97 × 10^9 | 4.98 × 10^1 | 4.79 × 10^1 | 2.79 × 10^0 | 5.00 × 10^1 | 2.42 × 10^9 | 1.83 × 10^-4 |
| | | Std | 1.33 × 10^-3 | 1.49 × 10^0 | 4.90 × 10^0 | 1.80 × 10^9 | 6.15 × 10^-2 | 3.66 × 10^-1 | 2.11 × 10^0 | 6.00 × 10^-7 | 4.90 × 10^8 | 3.21 × 10^-4 |
The best values obtained have been highlighted in boldface.
Table 10. IEEE CEC2019 test suite.
| Function | Name | Dimension (D) | Range | Fmin |
|----------|------|---------------|-------|------|
| CEC01 | Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1 |
| CEC02 | Inverse Hilbert Matrix Problem | 16 | [−16384, 16384] | 1 |
| CEC03 | Lennard-Jones Minimum Energy Cluster | 18 | [−4, 4] | 1 |
| CEC04 | Rastrigin's Function | 10 | [−100, 100] | 1 |
| CEC05 | Griewangk's Function | 10 | [−100, 100] | 1 |
| CEC06 | Weierstrass Function | 10 | [−100, 100] | 1 |
| CEC07 | Modified Schwefel's Function | 10 | [−100, 100] | 1 |
| CEC08 | Expanded Schaffer's F6 Function | 10 | [−100, 100] | 1 |
| CEC09 | Happy Cat Function | 10 | [−100, 100] | 1 |
| CEC10 | Ackley Function | 10 | [−100, 100] | 1 |
Table 11. Comparison results of CHAOARO and other algorithms on 10 CEC2019 test functions.
| Fn | Criteria | AO | GWO | WOA | SCA | TSA | GJO | ARO | WChOA | DAOA | CHAOARO |
|----|----------|----|----|----|----|----|----|----|----|----|----|
| CEC01 | Avg | 1.31 × 10^5 | 1.80 × 10^8 | 4.28 × 10^10 | 9.17 × 10^9 | 6.38 × 10^4 | 2.56 × 10^8 | 4.16 × 10^4 | 9.65 × 10^4 | 7.51 × 10^10 | 4.05 × 10^4 |
| | Std | 3.59 × 10^4 | 2.43 × 10^8 | 6.20 × 10^10 | 1.11 × 10^10 | 1.38 × 10^4 | 6.33 × 10^8 | 3.33 × 10^3 | 9.87 × 10^3 | 5.82 × 10^10 | 2.74 × 10^3 |
| | p-value | 3.02 × 10^-11 | 6.70 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 6.70 × 10^-11 | 3.65 × 10^-8 | 1.62 × 10^-5 | 3.02 × 10^-11 | 3.02 × 10^-11 | – |
| | Rank | 5 | 6 | 9 | 8 | 3 | 7 | 2 | 4 | 10 | 1 |
| CEC02 | Avg | 1.79 × 10^1 | 1.74 × 10^1 | 1.74 × 10^1 | 1.75 × 10^1 | 1.84 × 10^1 | 1.74 × 10^1 | 1.73 × 10^1 | 1.75 × 10^1 | 6.08 × 10^1 | 1.73 × 10^1 |
| | Std | 2.88 × 10^-1 | 2.91 × 10^-2 | 6.07 × 10^-2 | 7.72 × 10^-2 | 6.66 × 10^-1 | 7.74 × 10^-2 | 2.82 × 10^-5 | 1.87 × 10^-1 | 2.32 × 10^1 | 6.16 × 10^-6 |
| | p-value | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.43 × 10^-3 | 3.02 × 10^-11 | 3.02 × 10^-11 | – |
| | Rank | 8 | 3 | 4 | 6 | 9 | 5 | 2 | 7 | 10 | 1 |
| CEC03 | Avg | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 |
| | Std | 6.18 × 10^-4 | 4.52 × 10^-4 | 8.09 × 10^-7 | 5.85 × 10^-4 | 1.67 × 10^-3 | 3.66 × 10^-4 | 1.45 × 10^-8 | 6.33 × 10^-4 | 1.14 × 10^-7 | 1.38 × 10^-10 |
| | p-value | 3.02 × 10^-11 | 4.08 × 10^-11 | 1.61 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 7.51 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | – |
| | Rank | 8 | 6 | 4 | 7 | 10 | 5 | 2 | 9 | 3 | 1 |
| CEC04 | Avg | 7.95 × 10^3 | 5.54 × 10^1 | 3.51 × 10^2 | 1.53 × 10^3 | 7.01 × 10^3 | 1.16 × 10^3 | 3.98 × 10^1 | 2.17 × 10^4 | 7.09 × 10^1 | 3.92 × 10^1 |
| | Std | 2.46 × 10^3 | 2.18 × 10^1 | 2.57 × 10^2 | 5.31 × 10^2 | 3.32 × 10^3 | 1.32 × 10^3 | 2.11 × 10^1 | 7.97 × 10^3 | 2.00 × 10^1 | 1.60 × 10^1 |
| | p-value | 3.02 × 10^-11 | 2.16 × 10^-3 | 3.69 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 1.33 × 10^-10 | 1.23 × 10^-3 | 3.02 × 10^-11 | 1.87 × 10^-7 | – |
| | Rank | 9 | 3 | 5 | 7 | 8 | 6 | 2 | 10 | 4 | 1 |
| CEC05 | Avg | 4.18 × 10^0 | 1.39 × 10^0 | 1.93 × 10^0 | 2.24 × 10^0 | 3.23 × 10^0 | 1.69 × 10^0 | 1.14 × 10^0 | 5.72 × 10^0 | 1.31 × 10^0 | 1.10 × 10^0 |
| | Std | 8.36 × 10^-1 | 2.35 × 10^-1 | 5.18 × 10^-1 | 1.31 × 10^-1 | 9.84 × 10^-1 | 4.38 × 10^-1 | 9.58 × 10^-2 | 7.81 × 10^-1 | 1.31 × 10^-1 | 8.78 × 10^-2 |
| | p-value | 3.02 × 10^-11 | 1.25 × 10^-7 | 3.34 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 2.37 × 10^-10 | 7.51 × 10^-1 | 3.02 × 10^-11 | 1.16 × 10^-7 | – |
| | Rank | 9 | 4 | 6 | 7 | 8 | 5 | 2 | 10 | 3 | 1 |
| CEC06 | Avg | 9.89 × 10^0 | 1.10 × 10^1 | 9.34 × 10^0 | 1.11 × 10^1 | 1.11 × 10^1 | 1.11 × 10^1 | 5.47 × 10^0 | 1.09 × 10^1 | 1.09 × 10^1 | 5.35 × 10^0 |
| | Std | 1.17 × 10^0 | 8.05 × 10^-1 | 1.22 × 10^0 | 7.01 × 10^-1 | 8.57 × 10^-1 | 6.59 × 10^-1 | 1.17 × 10^0 | 6.23 × 10^-1 | 6.46 × 10^-1 | 1.09 × 10^0 |
| | p-value | 3.69 × 10^-11 | 3.02 × 10^-11 | 6.70 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 8.07 × 10^-4 | 3.02 × 10^-11 | 3.02 × 10^-11 | – |
| | Rank | 4 | 7 | 3 | 9 | 10 | 8 | 2 | 5 | 6 | 1 |
| CEC07 | Avg | 8.05 × 10^2 | 4.79 × 10^2 | 6.14 × 10^2 | 7.97 × 10^2 | 9.52 × 10^2 | 6.76 × 10^2 | 1.29 × 10^2 | 1.00 × 10^3 | 3.48 × 10^2 | 1.02 × 10^2 |
| | Std | 2.38 × 10^2 | 2.85 × 10^2 | 2.33 × 10^2 | 1.71 × 10^2 | 3.03 × 10^2 | 2.86 × 10^2 | 1.10 × 10^2 | 1.90 × 10^2 | 2.33 × 10^2 | 1.05 × 10^2 |
| | p-value | 4.08 × 10^-11 | 2.78 × 10^-7 | 3.82 × 10^-10 | 4.08 × 10^-11 | 3.69 × 10^-11 | 5.00 × 10^-9 | 2.82 × 10^-8 | 3.02 × 10^-11 | 2.68 × 10^-4 | – |
| | Rank | 8 | 4 | 5 | 7 | 9 | 6 | 2 | 10 | 3 | 1 |
| CEC08 | Avg | 5.88 × 10^0 | 5.27 × 10^0 | 6.02 × 10^0 | 6.03 × 10^0 | 6.40 × 10^0 | 5.59 × 10^0 | 4.46 × 10^0 | 6.61 × 10^0 | 5.63 × 10^0 | 4.17 × 10^0 |
| | Std | 5.82 × 10^-1 | 7.25 × 10^-1 | 6.31 × 10^-1 | 4.87 × 10^-1 | 6.33 × 10^-1 | 8.57 × 10^-1 | 6.99 × 10^-1 | 6.90 × 10^-1 | 7.29 × 10^-1 | 3.61 × 10^-1 |
| | p-value | 4.20 × 10^-10 | 1.39 × 10^-6 | 1.33 × 10^-10 | 6.70 × 10^-11 | 3.02 × 10^-11 | 1.60 × 10^-7 | 1.19 × 10^-5 | 3.02 × 10^-11 | 1.85 × 10^-8 | – |
| | Rank | 6 | 3 | 7 | 8 | 9 | 4 | 2 | 10 | 5 | 1 |
| CEC09 | Avg | 1.32 × 10^3 | 4.37 × 10^0 | 4.74 × 10^0 | 1.23 × 10^2 | 6.02 × 10^2 | 1.04 × 10^2 | 2.75 × 10^0 | 6.80 × 10^2 | 2.96 × 10^0 | 2.72 × 10^0 |
| | Std | 4.25 × 10^2 | 9.39 × 10^-1 | 7.64 × 10^-1 | 9.64 × 10^1 | 6.08 × 10^2 | 3.04 × 10^2 | 2.76 × 10^-1 | 1.26 × 10^1 | 4.30 × 10^-1 | 2.46 × 10^-1 |
| | p-value | 3.02 × 10^-11 | 8.89 × 10^-10 | 4.08 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 8.53 × 10^-3 | 3.02 × 10^-11 | 3.67 × 10^-3 | – |
| | Rank | 10 | 4 | 5 | 7 | 8 | 6 | 2 | 9 | 3 | 1 |
| CEC10 | Avg | 2.03 × 10^1 | 2.05 × 10^1 | 2.03 × 10^1 | 2.05 × 10^1 | 2.05 × 10^1 | 2.03 × 10^1 | 2.00 × 10^1 | 2.05 × 10^1 | 2.04 × 10^1 | 1.82 × 10^1 |
| | Std | 1.25 × 10^-1 | 7.57 × 10^-2 | 1.37 × 10^-1 | 9.19 × 10^-2 | 9.50 × 10^-2 | 1.10 × 10^-1 | 6.03 × 10^-2 | 9.61 × 10^-2 | 7.20 × 10^-2 | 5.75 × 10^-2 |
| | p-value | 2.15 × 10^-10 | 3.02 × 10^-11 | 6.07 × 10^-11 | 3.02 × 10^-11 | 3.02 × 10^-11 | 4.20 × 10^-10 | 1.51 × 10^-10 | 3.02 × 10^-11 | 3.02 × 10^-11 | – |
| | Rank | 4 | 7 | 5 | 8 | 9 | 3 | 2 | 10 | 6 | 1 |
| +/=/− | | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 9/0/1 | 10/0/0 | 10/0/0 | – |
| Friedman Mean Rank | | 7.1 | 4.7 | 5.3 | 7.4 | 8.3 | 5.5 | 2.0 | 8.4 | 5.3 | 1.0 |
| Final Ranking | | 7 | 3 | 4 | 8 | 9 | 6 | 2 | 10 | 4 | 1 |
The best values obtained have been highlighted in boldface.
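As a quick consistency check, the Friedman mean ranks in Table 11 are simply the per-function ranks averaged over the ten CEC2019 functions. The sketch below reproduces them from the rank rows of the table (values transcribed from Table 11):

```python
# Reproduce the Friedman mean ranks of Table 11 by averaging the
# per-function ranks (CEC01..CEC10) of each algorithm.
ranks = {
    "AO":      [5, 8, 8, 9, 9, 4, 8, 6, 10, 4],
    "GWO":     [6, 3, 6, 3, 4, 7, 4, 3, 4, 7],
    "WOA":     [9, 4, 4, 5, 6, 3, 5, 7, 5, 5],
    "SCA":     [8, 6, 7, 7, 7, 9, 7, 8, 7, 8],
    "TSA":     [3, 9, 10, 8, 8, 10, 9, 9, 8, 9],
    "GJO":     [7, 5, 5, 6, 5, 8, 6, 4, 6, 3],
    "ARO":     [2] * 10,   # ARO ranks second on every function
    "WChOA":   [4, 7, 9, 10, 10, 5, 10, 10, 9, 10],
    "DAOA":    [10, 10, 3, 4, 3, 6, 3, 5, 3, 6],
    "CHAOARO": [1] * 10,   # CHAOARO ranks first on every function
}
mean_rank = {alg: sum(r) / len(r) for alg, r in ranks.items()}

# Sort ascending: lower mean rank = better overall performance.
for alg, mr in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{alg:8s} {mr:.1f}")
```

Running this recovers the Friedman row of Table 11 (CHAOARO 1.0, ARO 2.0, GWO 4.7, ..., WChOA 8.4), confirming the final ranking.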
Table 12. Comparison results for pressure vessel design problem.
| Algorithms | Ts (x1) | Th (x2) | R (x3) | L (x4) | Minimum Cost |
|------------|---------|---------|--------|--------|--------------|
| AO [46] | 1.0540 | 0.1828 | 59.6219 | 38.8050 | 5949.2258 |
| MVO [20] | 0.8125 | 0.4375 | 42.0907 | 176.7387 | 6060.8066 |
| WOA [30] | 0.8125 | 0.4375 | 42.0987 | 176.6390 | 6059.7410 |
| GWO [27] | 0.8125 | 0.4345 | 42.0892 | 176.7587 | 6051.5639 |
| MFO [29] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143 |
| HHO [32] | 0.8176 | 0.4072 | 42.0917 | 176.7196 | 6000.4626 |
| SMA [34] | 0.7931 | 0.3932 | 40.6711 | 196.2178 | 5994.1857 |
| GJO [36] | 0.7783 | 0.3848 | 40.3219 | 200.0000 | 5887.0711 |
| ARO [55] | 0.7782 | 0.3848 | 40.3234 | 199.9479 | 5885.6679 |
| AOASC [70] | 0.8254 | 0.4262 | 42.7605 | 169.3396 | 6048.6812 |
| CHAOARO | 0.7783 | 0.3847 | 40.3254 | 199.9213 | 5885.5834 |
The best values obtained have been highlighted in boldface.
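The cost column of Table 12 can be checked against the pressure vessel objective function as it is commonly formulated in the literature (the objective itself is not restated in the table, so the coefficients below are an assumption based on the standard benchmark definition):

```python
# Pressure vessel design cost, standard benchmark formulation:
# Ts, Th: shell and head thickness; R: inner radius; L: cylinder length.
def vessel_cost(Ts, Th, R, L):
    return (0.6224 * Ts * R * L        # cylindrical shell material
            + 1.7781 * Th * R**2       # hemispherical heads
            + 3.1661 * Ts**2 * L       # forming the shell
            + 19.84 * Ts**2 * R)       # forming the heads

# CHAOARO's optimum from Table 12; the small deviation from the
# reported 5885.5834 comes from the rounded variable values.
cost = vessel_cost(0.7783, 0.3847, 40.3254, 199.9213)
print(round(cost, 2))
```

Evaluating the rounded CHAOARO variables gives a cost within about 0.1 of the reported 5885.5834, which supports the table's bolded result.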
Table 13. Comparison results for cantilever beam design problem.
| Algorithms | x1 | x2 | x3 | x4 | x5 | Minimum Weight |
|------------|----|----|----|----|----|----------------|
| CS [26] | 6.0089 | 5.3049 | 4.5023 | 3.5077 | 2.1504 | 1.339990 |
| MVO [20] | 6.0239 | 5.3060 | 4.4950 | 3.4960 | 2.1527 | 1.339960 |
| MFO [29] | 5.9849 | 5.3167 | 4.4973 | 3.5136 | 2.1616 | 1.339988 |
| SMA [34] | 6.0178 | 5.3109 | 4.4938 | 3.5011 | 2.1502 | 1.339957 |
| ARO [55] | 6.0068 | 5.3114 | 4.4935 | 3.5029 | 2.1590 | 1.339960 |
| SOS [72] | 6.0188 | 5.3034 | 4.4959 | 3.4990 | 2.1556 | 1.339960 |
| AHA [58] | 6.0138 | 5.3024 | 4.4963 | 3.5084 | 2.1527 | 1.339957 |
| RUN [73] | 6.0049 | 5.3190 | 4.4868 | 3.5033 | 2.1595 | 1.339956 |
| HAGSA [3] | 5.9271 | 5.3962 | 4.5081 | 3.4760 | 2.1726 | 1.340400 |
| ERHHO [74] | 6.0509 | 5.2639 | 4.5140 | 3.4605 | 2.1878 | 1.340200 |
| CHAOARO | 6.0163 | 5.3099 | 4.4951 | 3.5007 | 2.1517 | 1.339956 |
The best values obtained have been highlighted in boldface.
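The weight column of Table 13 is easy to verify, because the cantilever beam objective in its standard benchmark form is simply a scaled sum of the five hollow-block side lengths (this formulation is assumed here, as the table does not restate it):

```python
# Cantilever beam weight, standard benchmark formulation:
# f(x) = 0.0624 * (x1 + x2 + x3 + x4 + x5)
def beam_weight(x):
    return 0.0624 * sum(x)

# CHAOARO's optimum from Table 13.
weight = beam_weight([6.0163, 5.3099, 4.4951, 3.5007, 2.1517])
print(round(weight, 6))
```

The rounded variables reproduce the reported minimum weight of 1.339956 to within a few parts in 10^6.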
Table 14. Comparison results for tubular column design problem.
| Algorithms | d (x1) | t (x2) | Minimum Cost |
|------------|--------|--------|--------------|
| AO | 5.41639 | 0.29826 | 26.66455 |
| CS [26] | 5.45139 | 0.29196 | 26.53217 |
| SNS [75] | 5.45116 | 0.29197 | 26.49950 |
| GWO | 5.45643 | 0.29142 | 26.49618 |
| WOA | 5.45658 | 0.29139 | 26.49516 |
| SCA | 5.37810 | 0.30510 | 26.83667 |
| GJO | 5.45086 | 0.29194 | 26.49682 |
| ARO | 5.45665 | 0.29139 | 26.49559 |
| ChOA | 5.46878 | 0.29327 | 26.65508 |
| GSA-GA [76] | 5.45116 | 0.29197 | 26.53133 |
| CHAOARO | 5.45218 | 0.29163 | 26.48636 |
The best values obtained have been highlighted in boldface.
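Table 14's cost column follows the tubular column objective as commonly stated in the benchmark literature, a two-term function of the mean diameter d and wall thickness t (assumed here, since the table does not restate it):

```python
# Tubular column design cost, standard benchmark formulation:
# f(d, t) = 9.8 * d * t + 2 * d  (material cost + construction cost)
def column_cost(d, t):
    return 9.8 * d * t + 2.0 * d

# CHAOARO's optimum from Table 14.
cost = column_cost(5.45218, 0.29163)
print(round(cost, 5))
```

The rounded CHAOARO variables reproduce the reported 26.48636 to within about 2 × 10^-4.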
Table 15. Comparison results for speed reducer design problem.
| Algorithms | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Minimum Weight |
|------------|----|----|----|----|----|----|----|----------------|
| AO [46] | 3.5021 | 0.7000 | 17.0000 | 7.3099 | 7.7476 | 3.3641 | 5.2994 | 3007.7328 |
| AOA [23] | 3.50384 | 0.7 | 17 | 7.3 | 7.72933 | 3.35649 | 5.2867 | 2997.9157 |
| SSA [69] | 3.500059 | 0.7 | 17 | 7.3 | 7.8 | 3.351209 | 5.286813 | 2996.7077 |
| SHO [78] | 3.50159 | 0.7 | 17 | 7.3 | 7.8 | 3.35127 | 5.28874 | 2998.5507 |
| AFA [79] | 3.500000 | 0.7000 | 17 | 7.302489 | 7.800067 | 3.350219 | 5.286683 | 2996.3727 |
| STOA [80] | 3.50124 | 0.7 | 17 | 7.3 | 7.8 | 3.33425 | 5.26538 | 2995.9578 |
| SC-GWO [81] | 3.50064 | 0.7 | 17 | 7.30643 | 7.80617 | 3.35034 | 5.28694 | 2996.9859 |
| CHAOARO | 3.50001 | 0.7000 | 17 | 7.30002 | 7.71535 | 3.35057 | 5.28666 | 2994.4488 |
The best values obtained have been highlighted in boldface.
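The weight column of Table 15 can be cross-checked against the classic Golinski speed reducer objective; the formulation below is the one commonly used for this benchmark and is an assumption, since the table only lists the variable values:

```python
# Speed reducer weight, classic Golinski benchmark formulation.
# x1: face width, x2: module of teeth, x3: number of teeth,
# x4/x5: shaft lengths, x6/x7: shaft diameters.
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# CHAOARO's optimum from Table 15; the small deviation from the
# reported 2994.4488 comes from the rounded variable values.
w = reducer_weight(3.50001, 0.7000, 17, 7.30002, 7.71535, 3.35057, 5.28666)
print(round(w, 2))
```

Evaluating the rounded variables lands within about 0.2 of the reported 2994.4488, consistent with the bolded best result.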
Table 16. Comparison results for rolling element bearing design problem.
| Parameter | HHO [32] | RSA [35] | TLBO [37] | RUN [73] | IGTO [82] | CHAOARO |
|-----------|----------|----------|-----------|----------|-----------|---------|
| Dm | 125 | 125.1722 | 125.7191 | 125.2142 | 125 | 125.719 |
| Db | 21.00000 | 21.29734 | 21.42559 | 21.59796 | 21.41885 | 21.42554 |
| Z | 11.09207 | 10.88521 | 11.00000 | 11.40240 | 10.94110 | 10.65574 |
| fi | 0.51500 | 0.515253 | 0.51500 | 0.51500 | 0.51500 | 0.51500 |
| fo | 0.51500 | 0.517764 | 0.51500 | 0.51500 | 0.51500 | 0.5151428 |
| Kdmin | 0.40000 | 0.41245 | 0.424266 | 0.40059 | 0.40000 | 0.4574078 |
| Kdmax | 0.60000 | 0.632338 | 0.633948 | 0.61467 | 0.70000 | 0.6544766 |
| δ | 0.30000 | 0.301911 | 0.30000 | 0.30530 | 0.30000 | 0.3000026 |
| e | 0.05047 | 0.024395 | 0.068858 | 0.02000 | 0.02000 | 0.05889122 |
| ζ | 0.60000 | 0.6024 | 0.799498 | 0.63665 | 0.60000 | 0.6698756 |
| Optimal load-carrying capacity | 83,011.883 | 83,486.64 | 81,859.74 | 83,680.47 | 85,067.962 | 85,548.8272 |
The best values obtained have been highlighted in boldface.
Table 17. Comparison results for the parameter identification of SDM.
| Algorithms | Iph (A) | Isd (μA) | Rs (Ω) | Rsh (Ω) | n | RMSE |
|------------|---------|----------|--------|---------|---|------|
| ABC [85] | 0.760784 | 0.321523 | 0.036398 | 53.639 | 1.480601 | 9.8169 × 10^-4 |
| SMA [87] | 0.76076 | 0.32314 | 0.03637 | 53.71489 | 1.48114 | 9.8482 × 10^-4 |
| IHBA [88] | 0.76101 | 0.39445 | 0.034789 | 55.538 | 1.4951 | 1.0272 × 10^-3 |
| IBES [89] | 0.760776 | 0.323 | 0.036377 | 53.71853 | 1.4768 | 9.8600 × 10^-4 |
| CPSO [90] | 0.7607 | 0.4 | 0.0354 | 59.012 | 1.5033 | 1.3900 × 10^-3 |
| OBDSSA [91] | 0.7608 | 0.36596 | 0.0359 | 56.1662 | 1.4939 | 1.0161 × 10^-3 |
| GOTLBO [92] | 0.760780 | 0.331552 | 0.036265 | 54.115426 | 1.483820 | 9.8744 × 10^-4 |
| CHAOARO | 0.760776 | 0.314950 | 0.036485 | 53.19598 | 1.478640 | 7.7330 × 10^-4 |
The best values obtained have been highlighted in boldface.
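The RMSE column of Table 17 measures how well the single diode model (SDM) current, evaluated with each parameter set, matches the measured I–V data (RMSE = sqrt(mean((I_measured − I_model)^2)); the measurement data are not reproduced here). Because the SDM current appears on both sides of the diode equation, it must be solved implicitly. The sketch below evaluates it with Newton's method at CHAOARO's parameters from Table 17; the 33 °C cell temperature used for the thermal voltage is an assumption commonly made for this benchmark, not stated in the table:

```python
import math

# Single diode model:
#   I = Iph - Isd*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# solved for I at a given terminal voltage V via Newton's method.
K, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann const., electron charge
T = 306.15                              # 33 degC in kelvin (assumption)
VT = K * T / Q                          # thermal voltage, ~0.0264 V

def sdm_current(V, Iph=0.760776, Isd=0.314950e-6, Rs=0.036485,
                Rsh=53.19598, n=1.478640, iters=50):
    """Model current for the CHAOARO parameter set of Table 17."""
    I = Iph  # initial guess: photogenerated current
    for _ in range(iters):
        e = math.exp((V + I * Rs) / (n * VT))
        f = Iph - Isd * (e - 1.0) - (V + I * Rs) / Rsh - I   # residual
        df = -Isd * e * Rs / (n * VT) - Rs / Rsh - 1.0       # d(f)/dI
        I -= f / df
    return I

# Short-circuit point: model current at V = 0 is close to Iph.
print(sdm_current(0.0))
```

With this routine, the RMSE is obtained by evaluating `sdm_current` at each measured voltage and averaging the squared deviations from the measured currents.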
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Wang, Y.; Xiao, Y.; Guo, Y.; Li, J. Dynamic Chaotic Opposition-Based Learning-Driven Hybrid Aquila Optimizer and Artificial Rabbits Optimization Algorithm: Framework and Applications. Processes 2022, 10, 2703. https://doi.org/10.3390/pr10122703
