Article

A Reinforced Whale Optimization Algorithm for Solving Mathematical Optimization Problems

1 School of Information Engineering, Tianjin University of Commerce, Beichen, Tianjin 300134, China
2 College of Science, Tianjin University of Commerce, Beichen, Tianjin 300134, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(9), 576; https://doi.org/10.3390/biomimetics9090576
Submission received: 20 August 2024 / Revised: 20 September 2024 / Accepted: 20 September 2024 / Published: 22 September 2024

Abstract

The whale optimization algorithm (WOA) has several advantages, such as simple operation, few control parameters, and a strong ability to jump out of the local optimum, and it has been used to solve various practical optimization problems. In order to improve its convergence speed and solution quality, a reinforced whale optimization algorithm (RWOA) was designed. Firstly, an opposition-based learning strategy is used to generate other optima based on the best solution found during the algorithm’s iteration, which increases the diversity of the optimal solution and accelerates the convergence speed. Secondly, a dynamic adaptive coefficient is introduced in the encircling prey and bubble-net attack stages, which balances exploration and exploitation. Finally, an individual information-reinforced mechanism is utilized during the encircling prey stage to improve the solution quality. The performance of the RWOA is validated using 23 benchmark test functions, 29 CEC-2017 test functions, and 12 CEC-2022 test functions. Experimental results demonstrate that the RWOA exhibits better convergence accuracy and algorithm stability than the WOA on 20 benchmark test functions, 21 CEC-2017 test functions, and 8 CEC-2022 test functions, respectively. Wilcoxon’s rank sum test shows that there are significant statistical differences between the RWOA and the other algorithms.

Graphical Abstract

1. Introduction

An optimization problem entails finding the most favorable solution among various decision-making options while adhering to specific constraints. Such problems are usually nonlinear and discrete, making an accurate model difficult to establish, which is why they are often called “difficult” problems. As a branch of applied mathematics, optimization has found application in diverse fields such as the military, engineering, and management. In this context, swarm intelligence optimization algorithms are employed to address such problems. Compared with traditional optimization algorithms, swarm intelligence algorithms do not impose specific conditions on the objective function or constraints of the optimization problem. These algorithms can optimize without relying on exact formulas or mathematical models, and they adapt to uncertain factors encountered during the optimization process. Extensive research has proven their effectiveness in solving optimization problems.
A swarm intelligence optimization algorithm is a global optimization technique that imitates the behavior of insects, animals, and birds in nature. With the continuous exploration of scholars, a series of swarm intelligence optimization algorithms have been proposed, and some representative partial algorithms are presented in Table 1.
This paper examines the whale optimization algorithm (WOA), a novel meta-heuristic algorithm inspired by the hunting behavior of humpback whales. The WOA has many advantages, including simple operation, few control parameters, and a strong ability to escape the local optimum. These characteristics have motivated researchers to apply it to various practical problems, such as feature selection [11,12], workshop scheduling [13,14,15], and image segmentation [16,17,18]. However, when faced with high-dimensional and complex problems, the WOA exhibits certain limitations, such as slow optimization speed, low optimization accuracy, and inadequate exploration and exploitation capabilities [19,20]. In order to enhance the performance of the WOA, many scholars have proposed improvements. Lee et al. [21] proposed a hybrid whale optimization algorithm based on the genetic algorithm and heat exchange optimization algorithm (GWOA-TEO), which uses memory-based crossover operators and a memory-based position update mechanism for leading solutions to improve the search ability. In this study, an opposition-based learning strategy was used to generate the opposing solution of the global optimal solution, which not only accelerates the algorithm but also improves its search ability. Prasad et al. [22] used logistic chaos mapping to adjust the control parameters of the WOA, which effectively alleviated the problem of premature convergence. In this study, an adaptive inertia coefficient was used to control the speed of the algorithm, which better balances the global and local search abilities of the algorithm. Sun et al. [23] developed a multi-population improved whale optimization algorithm (MIWOA), which leverages the current optimal individual and a weighting center to refine the search process, boosting both search capability and convergence speed.
The algorithm divides the population into superior and inferior groups, which maintains population diversity and enhances the exploration ability. In this study, by incorporating individual historical optimal information, the algorithm expands the potential search space, increases population diversity, and speeds up convergence to the global optimum. Li et al. [24] proposed a multi-objective whale optimization algorithm based on multi-leader guidance, which employs an opposition-based learning strategy to improve the distribution of the initial population. In this study, the opposition-based learning strategy was instead applied to the global optimal solution, and it was observed that this strategy is more effective in enhancing the global optimum than in improving the initial population distribution. Islam et al. [25] combined the whale optimization algorithm with the multi-objective non-dominated sorting technique to develop a hybrid algorithm called the Non-dominated Sorting Whale Optimization Algorithm (NSWOA), which was applied to optimize the parameters of the dynamic and static load controllers of islanded microgrids. Saha et al. [26] introduced the Cosine-Modified Whale Optimization Algorithm, which uses the cosine function to select the control parameter a and utilizes a correction factor to reduce the step size during the position update process, appropriately balancing the exploitation and exploration abilities of the algorithm.
This study utilized an adaptive inertia coefficient to balance the algorithm’s exploration and exploitation abilities, and individual historical optimal information and the global optimal solution are introduced, which gives the RWOA stronger, more balanced exploitation and exploration capabilities. Liu et al. [27] employed an elite search strategy to enhance global optimization and introduced an adaptive variable speed strategy to balance the algorithm’s search and exploitation capabilities. This study achieved a better balance between the exploitation and search capabilities by incorporating individual historical optimal solutions, adaptive inertia coefficients, and an enhanced global optimal solution. Jin et al. [28] integrated the Eagle strategy [29] and uniform mutation into the whale optimization algorithm, which better balanced the global and local search capabilities. In [30], an opposition-based learning strategy was introduced during initial population generation to improve the diversity of the algorithm. This study instead used the opposition-based learning strategy to refine the global optimal solution, and the experiments revealed that this approach is more effective in improving the global optimal solution. Furthermore, Lin et al. [31] proposed a niche hybrid heuristic whale optimization algorithm. Zong et al. [32] proposed a memory strategy based on fractional-order expansion to enhance the search ability of the whale optimization algorithm. Zhou et al. [33] developed a hybrid chameleon whale optimization algorithm (HWOA-CHM) inspired by the chameleon’s hunting mechanism, which improves the convergence accuracy of the algorithm. Shen et al. [34] divided the population into three sub-populations of equal size according to individual fitness values and gave each sub-population a different update mechanism, which improved the global optimization ability of the algorithm.
Although these variants of the whale optimization algorithm show improvements over the traditional WOA, it is important to note that no single optimization algorithm can perfectly solve all optimization problems, as stated by the no-free-lunch theorem [35]. Therefore, the pursuit of optimizing optimization algorithms remains a continuous and meaningful endeavor.
To address the limitations of the whale optimization algorithm, this paper proposes a reinforced whale optimization algorithm (RWOA), in which three corrections are made to the traditional WOA: Initially, an opposition-based learning strategy is incorporated to enhance the search for the global optimal solution. This strategy generates an opposing solution of the current optimal solution, increasing the range of candidates and helping the algorithm find the target position faster. Secondly, an adaptive inertia coefficient is introduced in both the encircling prey and bubble-net stages. This dynamic control of the convergence speed enables individuals to focus more on utilizing their own position information in the early stages and to approach the global optimum faster in the later stages. Additionally, the utilization of individual information is enhanced during the encircling prey stage, which accelerates the algorithm’s convergence speed and improves the overall solution quality.
The main contributions of this paper are briefly summarized as follows:
(1) A new optimization algorithm called the RWOA is proposed. The algorithm applies an opposition-based learning strategy to the traditional WOA’s optimal solution, introduces adaptive inertia coefficients in the encircling prey and bubble-net attack stages, and enhances the utilization of individual information during the encircling prey stage.
(2) The performance of the RWOA is verified using 23 standard test functions, 29 CEC-2017 test functions, and 12 CEC-2022 test functions.
(3) The effectiveness and superiority of the proposed algorithm are verified through the analysis of various statistical indicators, such as the mean and standard deviation. Additionally, the results of Wilcoxon’s rank sum test indicate a significant statistical difference between the RWOA and the other comparison algorithms at a significance test level of 0.05.
The remainder of the paper is organized as follows: Section 2 reviews the basic whale optimization algorithms. The detailed procedure of the proposed RWOA is described in Section 3. Section 4 examines the performance of both the WOA and RWOA on various test functions, followed by a comparison and analysis of RWOA’s performance against other advanced optimization algorithms in Section 5. All experimental results are discussed in detail in Section 6. Finally, Section 7 summarizes this study’s findings and outlines future research directions.

2. Whale Optimization Algorithm (WOA)

In 2016, the whale optimization algorithm (WOA) was proposed by Mirjalili et al. [8], which was inspired by the predatory behavior of humpback whales. The algorithm can be divided into three main stages: encircling prey, bubble net attack, and searching for prey.

2.1. Exploitation Phase

During this stage of the WOA, the whales have found their target prey and approach the prey in a siege manner. The formulas in Table 2 are employed to simulate this predatory behavior.
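Since the Table 2 formulas are not reproduced in this extract, the following Python sketch (function and parameter names are ours) follows the standard WOA encircling-prey equations from Mirjalili’s original formulation, A = 2a·r1 − a, C = 2·r2, D = |C · X* − X_i|, X_i′ = X* − A · D:

```python
import numpy as np

def woa_encircle_step(X_i, X_star, a, rng):
    """Standard WOA encircling-prey update toward the best whale X*.

    a is the linearly decreasing control parameter (2 -> 0 over the
    run); r1, r2 are fresh uniform random vectors per update.
    """
    r1 = rng.random(X_i.size)
    r2 = rng.random(X_i.size)
    A = 2.0 * a * r1 - a              # coefficient vector A
    C = 2.0 * r2                      # coefficient vector C
    D = np.abs(C * X_star - X_i)      # distance to the best whale
    return X_star - A * D             # new position
```

When a = 0 (end of the run), A vanishes and the whale lands exactly on the best position found, which is the shrinking behavior the siege metaphor describes.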

2.2. Exploration Phase

The WOA stipulates that when |A| ≥ 1, each whale will randomly select another whale from the population and learn from it, using Equations (1) and (2) to simulate this process.
D_rand^t = |C · X_rand^t − X_i^t|  (1)
X_i^{t+1} = X_rand^t − A · D_rand^t  (2)
Here, X_rand^t is the position vector of the randomly selected whale in the t-th iteration, and D_rand^t is the distance between the i-th whale and the randomly selected whale in the t-th iteration.
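The exploration step of Equations (1) and (2) can be sketched as follows (a minimal NumPy illustration; function names are ours, not the paper’s):

```python
import numpy as np

def woa_exploration_step(X, i, A, C, rng):
    """One exploration-phase update (|A| >= 1) for whale i.

    X is the (N, dim) population matrix; A and C are the coefficient
    vectors for this iteration. Implements:
        D_rand = |C * X_rand - X_i|     (Equation (1))
        X_i'   = X_rand - A * D_rand    (Equation (2))
    """
    N = X.shape[0]
    j = rng.choice([k for k in range(N) if k != i])  # random other whale
    X_rand = X[j]
    D_rand = np.abs(C * X_rand - X[i])
    return X_rand - A * D_rand
```

Because the reference point is a random whale rather than the best one, this move scatters individuals across the search space, which is what gives the WOA its global exploration behavior.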

3. A Reinforced Whale Optimization Algorithm

Although the WOA is simple to operate, has few parameters to adjust, is highly stable, and has a strong ability to escape the local optimum due to its unique update mechanism, it still has several drawbacks, such as low convergence accuracy and slow convergence speed. In order to overcome these shortcomings, this paper proposes an enhanced version called the reinforced whale optimization algorithm (RWOA). The improvement strategy of the RWOA focuses on three main areas. Firstly, the opposition-based learning strategy is employed to generate an opposing solution for the global optimal solution after each iteration; a greedy strategy then selects the solution with the better fitness value as the global optimal solution. This approach expands the range of potential optimal solutions, which helps to accelerate the algorithm’s convergence. Secondly, based on the acceleration formula in the red-tailed hawk optimization algorithm, this paper improves the shrinking encircling mechanism and the spiral position update mechanism: whale individuals focus more on using their own information in the early stage and move closer to the prey faster in the later stages. This better balances the algorithm’s exploration and exploitation capabilities. Thirdly, the original whale optimization algorithm does not make full use of each individual’s valuable information. To address this limitation, this study added the utilization of an individual’s historical optimal information to the shrinking encircling mechanism of the WOA, so that the algorithm exhibits accelerated convergence and improved solution accuracy. The detailed improvement strategy is summarized as follows.

3.1. Opposition-Based Learning

Opposition-based learning (OBL) explores less examined areas of the solution space by creating solutions that are directly opposed to the current ones. These opposing solutions introduce new candidates in diverse regions, thereby enhancing population diversity. They offer alternative search directions, which help address the limitations of the current solutions, broaden the search area, and enable the algorithm to swiftly converge to regions with high-quality solutions, thereby accelerating the process of finding the global optimum. OBL has been extensively utilized in various intelligent optimization algorithms [16,36,37,38,39,40,41] to enhance algorithm performance and has proven its effectiveness. Building on this idea, this study employed an opposition-based learning strategy to produce a new opposing solution for the global optimal solution after each iteration. This approach enlarges the algorithm’s search space and enhances solution diversity, aiding the algorithm in escaping local optima and thereby enabling it to identify the global optimal solution more quickly and accurately. The specific process is as follows:
(1) A complete update is performed on all individuals in the population, and each individual undergoes either a transformation to a better position or remains in the same position.
(2) The fitness value of each individual is calculated to find the global optimal solution, including the one before this iteration.
(3) The stochastic opposition-based learning strategy is employed to generate the antithetical solution (gbest′) of the global optimal solution (gbest), and the specific formula is as shown in Equation (3).
gbest′ = rand(1, dim) × (lb + ub) − gbest  (3)
rand(1, dim) creates a matrix with 1 row and dim columns, where the elements are randomly generated from a uniform distribution in the range [0, 1]. Here, lb and ub represent the lower and upper limits of the feasible solution, respectively.
(4) It is verified whether the generated opposing solution (gbest′) falls within the specified upper and lower bounds; if it exceeds them, it is adjusted to the nearest bound. The fitness value of gbest′ is then calculated, and the solution with the better fitness value is selected as the global optimal solution (gbest) using a greedy strategy.
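Steps (3) and (4) can be sketched in a few lines of Python (a minimal illustration with our own names, assuming minimization and scalar bounds broadcast over the dimensions):

```python
import numpy as np

def opposition_refine(gbest, fitness, lb, ub, rng):
    """Stochastic opposition-based refinement of the global best.

    Generates gbest' = rand(1, dim) * (lb + ub) - gbest (Equation (3)),
    clips it to the bounds, and greedily keeps whichever of
    gbest / gbest' has the smaller fitness (step (4)).
    """
    dim = gbest.size
    opposite = rng.random(dim) * (lb + ub) - gbest   # Equation (3)
    opposite = np.clip(opposite, lb, ub)             # bound adjustment
    if fitness(opposite) < fitness(gbest):           # greedy selection
        return opposite
    return gbest
```

The greedy comparison guarantees the refined gbest is never worse than before, so the step can only help convergence.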

3.2. Adaptive Weight Strategy

Many intelligent optimization algorithms have adopted the idea of improving the inertia weight in the update formula and achieved better results [40,42,43,44,45]. Accordingly, this study adopted the acceleration formula from the red-tailed hawk algorithm [46] to improve the WOA’s encircling prey and bubble-net attack stages. Introducing the dynamic adaptive inertia coefficient into the individual position update formulas allows individuals in the population to be less influenced by the global optimal solution in the early stages and to be increasingly influenced by it as iterations progress. Initially, individuals focus more on utilizing their own information, which reduces the tendency of the population to converge prematurely on inferior solutions and helps avoid local optima. In the later stages, the influence of the global optimal solution on each individual increases, which accelerates the convergence of inferior solutions toward the optimal solution. This adjustment also effectively balances the algorithm’s global and local search capabilities. The expression for this improvement is as follows:
w = sin²(2.5 · t/T_max)  (4)
where t represents the current iteration number, and T_max represents the maximum number of iterations of the algorithm. The trajectory of w over time is shown in Figure 1.
The improved formula is
D_i^t = |C · gbest − X_i^t|  (5)
D_p = |gbest − X_i^t|  (6)
X_i^{t+1} = w · gbest − A · D_i^t,  p < 0.5  (i)
X_i^{t+1} = w · gbest + D_p · e^{bl} · cos(2πl),  p ≥ 0.5  (ii)  (7)
Here, w increases with the iteration number t, which makes individuals pay more attention to self-exploration in the early stages of the algorithm and become more inclined to learn from society in the later stages. Thus, the algorithm approaches the global optimal solution more effectively with each iteration: its convergence speed is accelerated, and its global and local search capabilities are better balanced.
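Equation (4) can be transcribed directly (a one-line sketch, assuming t counts from 0 up to T_max):

```python
import math

def adaptive_weight(t, t_max):
    """Adaptive inertia coefficient of Equation (4): w = sin^2(2.5 * t / T_max).

    Starts at 0 when t = 0, so early updates give little weight to
    gbest, and grows as the iterations progress.
    """
    return math.sin(2.5 * t / t_max) ** 2
```

Plotting this function over t in [0, T_max] reproduces the trajectory shown in Figure 1.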

3.3. Improved Encircling Prey Mechanics

Inspired by [47,48], by the comprehensive utilization of individual information in a population, and by the “survival of the fittest” principle observed in the biological world, it is evident that individuals must continuously learn and accumulate knowledge to thrive. Individual learning can come not only from self-learning but also from learning from others: self-learning is conducive to individual innovation, while learning from others accelerates knowledge accumulation. In this study, this learning mechanism was introduced into the encircling prey stage of the whale optimization algorithm, which originally uses only the global optimal information of the population while overlooking the potential of individual information. By increasing the utilization of each individual’s historical optimal information in this stage, individuals can learn not only from the best individual in the population but also from their own past best positions. Incorporating individual information in this way expands the potential search space within the population, improves diversity among individuals, and allows a more comprehensive exploration of the solution space. As a result, the algorithm can converge to the global optimal solution more quickly and with greater accuracy. The improved encircling prey mechanism is presented as follows:
X_i^{t+1} = w · gbest − A · D_i^t + A · |pbest_i^t − X_i^t|  (8)
Here, pbest_i^t is the historical optimal position of the i-th individual after the t-th iteration.
This function is employed during the prey encirclement stage of the algorithm, which corresponds to the phase of identifying the global optimal solution. By incorporating individual information from within the population, this function effectively mitigates the risk of population convergence to local optimal solutions. Additionally, it enhances the diversity of the population, contributing to a more comprehensive exploration of the solution space.
The pseudo-code for the RWOA is given in Algorithm 1, and an overall flowchart of the algorithm is given in Figure 2.
Algorithm 1. Pseudo-code of RWOA
1: Randomly generate the initial population individuals X_i (i = 1, 2, …, N) within the range of the problem space
2: Assign each individual’s position to its corresponding pbest. Calculate the fitness values of all individuals, find the best fitness value, and assign it to gbest
3: Initialize the parameters a, A, C, l, p, w
4: t = 0
5: While t < Tmax do
6:    if p < 0.5
7:         if |A| < 1
8:           Update each individual position using Equations (5) and (8)
9:         else if |A| ≥ 1
10:           Select a random search agent Xrand
11:           Update each individual position using Equations (1) and (2)
12:         end if
13:    else if p ≥ 0.5
14:         Update each individual position using Equation (6) and (ii) in Equation (7)
15:    end if
16:    Update the individual historical optimal position pbest and its fitness value
17:    Update the best optimal position gbest using the opposition-based learning strategy, Equation (3); calculate the fitness value, and select the one with the best fitness value to reassign it to gbest
18:    Boundary checks and adjustments
19:    t = t + 1
20:    Update the parameters a, A, C, l, p, w
21: end while
22: Output the global best solution (gbest)
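Algorithm 1 can be condensed into a compact Python sketch (our own transcription, not the authors’ MATLAB code; scalar A and C are used for simplicity, and the spiral constant b is fixed to 1 as in the standard WOA):

```python
import numpy as np

def rwoa(obj, lb, ub, dim, n=40, t_max=500, seed=0):
    """Minimize obj over the box [lb, ub]^dim with the RWOA main loop."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    pbest = X.copy()
    pbest_f = np.array([obj(x) for x in X])
    g = int(pbest_f.argmin())
    gbest, gbest_f = pbest[g].copy(), float(pbest_f[g])

    for t in range(t_max):
        a = 2.0 * (1.0 - t / t_max)            # a decreases linearly 2 -> 0
        w = np.sin(2.5 * t / t_max) ** 2       # adaptive weight, Eq. (4)
        for i in range(n):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            p, l = rng.random(), rng.uniform(-1.0, 1.0)
            if p < 0.5:
                if abs(A) < 1:                 # exploitation: Eqs. (5) and (8)
                    D = np.abs(C * gbest - X[i])
                    X[i] = w * gbest - A * D + A * np.abs(pbest[i] - X[i])
                else:                          # exploration: Eqs. (1) and (2)
                    j = int(rng.integers(n))
                    X[i] = X[j] - A * np.abs(C * X[j] - X[i])
            else:                              # spiral update: Eqs. (6), (7)(ii)
                Dp = np.abs(gbest - X[i])
                X[i] = w * gbest + Dp * np.exp(l) * np.cos(2.0 * np.pi * l)
            X[i] = np.clip(X[i], lb, ub)       # boundary check
            f = obj(X[i])
            if f < pbest_f[i]:                 # update personal/global bests
                pbest[i], pbest_f[i] = X[i].copy(), f
                if f < gbest_f:
                    gbest, gbest_f = X[i].copy(), float(f)
        # Opposition-based refinement of gbest, Eq. (3), greedy selection
        opp = np.clip(rng.random(dim) * (lb + ub) - gbest, lb, ub)
        f_opp = obj(opp)
        if f_opp < gbest_f:
            gbest, gbest_f = opp, float(f_opp)
    return gbest, gbest_f
```

On a simple sphere function this sketch converges rapidly toward the origin, matching the qualitative behavior reported for the uni-modal benchmarks.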

3.4. Space Complexity Analysis

In this subsection, the space complexity of the algorithm is determined using Big O notation. The space complexity of the WOA is O(N × D), where N represents the population size and D represents the dimension. From the flowchart and pseudo-code of the RWOA, storing the global optimal solution after each iteration requires O(D) space. The adaptive inertia coefficient introduced in the RWOA does not increase the space complexity of the algorithm. Storing the historical optimal solution of each individual requires O(N × D) space. The overall space complexity of the RWOA is obtained by summing these three terms and is therefore O(N × D). This indicates that the space complexity of the RWOA is the same as that of the WOA; that is, the proposed algorithm does not increase the algorithm’s complexity.

4. Performance Testing of RWOA and WOA

In order to verify the effectiveness of the proposed RWOA, 23 benchmark functions were used to evaluate the performance of the RWOA in exploration, development, and minimization. Among the 23 benchmark functions, F1–F7 are uni-modal functions with only one extreme value used to evaluate the exploitation capability of the RWOA; F8–F13 are multi-modal functions with multiple local extreme values used to test the exploration performance of the RWOA; and F14–F23 are fixed-dimensional functions used to test the performance of the algorithm in low dimensions. In addition, 29 CEC-2017 test functions and 12 CEC-2022 test functions were used to further verify the performance of the RWOA.
All tests were performed on one machine with an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz, the Windows 11 operating system, and MATLAB R2021a. The population size was set to 40, the maximum number of iterations was set to 500, and the significance test level was 0.05. In order to reduce statistical errors, each algorithm was run independently 30 times.

4.1. Performance Testing on 23 Benchmark Functions

In this subsection, the performances of the RWOA and WOA are compared on 23 benchmark functions with 50 dimensions. The comparison is based on their exploitation ability, exploration ability, and algorithm convergence ability. For each test function, the RWOA and WOA were independently executed thirty times to find the global optimal solution. Then, the mean and standard deviation of the results from the thirty trials were calculated, with the mean denoted as Mean and the standard deviation denoted as SD. The mean represents the convergence accuracy of the algorithm, while the standard deviation represents its stability. It is important to note that smaller values for both the mean and standard deviation indicate better algorithm performance.
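The Mean and SD statistics used throughout the tables are the ordinary sample mean and sample standard deviation of the 30 best-fitness values, which can be computed directly with the standard library (an illustrative helper; the name is ours):

```python
import statistics

def summarize_runs(results):
    """Mean and sample SD of the best fitness from repeated runs.

    `results` is the list of best objective values from the 30
    independent trials; smaller Mean/SD indicate better convergence
    accuracy and stability, as reported in Tables 3-5.
    """
    return statistics.mean(results), statistics.stdev(results)
```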

4.1.1. Exploitation Capability Evaluation through Uni-Modal Functions

Uni-modal functions are those that have only one global optimal value in the interval, and these functions are suitable for evaluating the exploitation ability of optimization algorithms. Therefore, this study used the F1–F7 functions to assess the exploitation ability of the WOA and RWOA. The results of the experiment are recorded in Table 3, with the best performances highlighted in bold.
It can be clearly seen from Table 3 that, for these functions, the proposed RWOA outperforms the original WOA in terms of the maximum, mean, and standard deviation values. With the exception of the F7 function, the RWOA invariably achieves smaller minimum values, and the minimum values obtained by the two algorithms on F7 are almost the same. These experimental results clearly demonstrate that the RWOA has a stronger exploitation ability and better stability than the WOA.

4.1.2. Exploration Ability Evaluation through Multi-Modal Functions

The exploration abilities of both algorithms were evaluated by testing them on multi-modal functions, which typically possess numerous local optima whose number increases exponentially with the problem size. Specifically, functions F8–F13 are high-dimensional multi-modal functions, and their results are recorded in Table 4. Functions F14–F23 are fixed-dimensional (low-dimensional) multi-modal functions, and their results are recorded in Table 5.
As shown in Table 4, the proposed RWOA outperforms the WOA on all high-dimensional multi-modal functions, suggesting that it has stronger exploration capabilities than the original algorithm. Furthermore, all the reported statistical indicators are lower, indicating that the proposed algorithm is less prone to becoming trapped in local optima.
As can be seen from Table 5, the RWOA successfully identifies the global optimal value on F14–F20 for the fixed-dimension multi-modal function category, while also demonstrating greater stability than the WOA. The minimum values of the RWOA on the three test functions (F21–F23) are equal to those of the original WOA, and the mean values are similar as well. However, the maximum values and standard deviations are lower than the conventional WOA, indicating that the RWOA is generally better than the WOA. In summary, the RWOA has a better exploration capacity than the WOA on most test functions.

4.1.3. Analysis of Statistical Test and Convergence Performance

Wilcoxon’s rank sum test was performed on the benchmark function results of the RWOA and WOA, and the outcomes are presented in Table 3, Table 4 and Table 5. A p-value below the significance test level indicates a statistically significant difference between the two algorithms; conversely, a p-value above that level suggests no significant difference. As indicated in these tables, at a significance test level of 0.05, the p-values for the twenty functions other than F9, F18, and F20 are smaller than the significance level. This suggests a significant difference between the RWOA and WOA under the statistical test. Combined with the previous analysis of the mean, standard deviation, and other indicators, it is clear that the proposed variant shows superior performance compared to the WOA on the 23 benchmark functions.
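The rank sum test itself is available as `scipy.stats.ranksums`; for illustration, a stdlib-only version using the normal approximation (adequate for 30 runs per algorithm; the function name is ours) looks like this:

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation.

    Returns (z, p). Combines the samples, assigns average ranks to
    ties, and compares the rank sum of x against its null mean.
    """
    combined = sorted((v, 0 if k < len(x) else 1)
                      for k, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                      # average ranks for ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2                 # null mean of the rank sum
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # null standard deviation
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

A p-value below 0.05 from this test is what the paper reports as a significant difference between two algorithms’ 30-run result samples.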
In this section, simulation plots of some test functions are provided to compare the convergence accuracy and speed of the two algorithms. From these graphs, the convergence performances of the two algorithms can be compared more intuitively. The blue line is the convergence curve of the WOA, and the red line is the convergence curve of the RWOA. In examining Figure 3, it is evident that the RWOA exhibits superior convergence accuracy and speed compared to the WOA.

4.2. Performance Testing on CEC-2017

In this study, the performance of the proposed RWOA was tested on the CEC2017 benchmark functions with 10 dimensions. Of the 30 functions, 29 were selected for testing, because F2 exhibits strong instability in high dimensions. The CEC2017 benchmark functions fall into four categories: The uni-modal functions (F1–F3) have a narrow, inseparable, smooth ridge and challenge the convergence ability of the algorithm. The simple multi-modal functions (F4–F10) offer numerous local optima to test the algorithm’s ability to jump out of local optima. The hybrid functions (F11–F20) exhibit minimal deviation between local and global optimal values, evaluating the algorithm’s performance in navigating complex landscapes. The composition functions (F21–F30) encompass all the characteristics of the previous categories, providing a comprehensive assessment of the algorithm’s overall performance. Compared to other test functions, the CEC2017 benchmark functions cover a broader range of behavior and better reflect an algorithm’s performance.
A careful analysis of Table 6 reveals that the Mean value of the RWOA is smaller than that of the WOA on 21 functions, and the SD value of the RWOA is smaller than that of the WOA on almost all functions. In addition, on almost all the functions where the RWOA outperforms the WOA, the p-value of the statistical test is below the 0.05 significance level, indicating significant statistical differences. Based on these results, it is evident that the proposed variant exhibits superior performance over the WOA on the CEC2017 test functions.

4.3. Performance Testing on CEC-2022

CEC-2022 encompasses a collection of 12 benchmark functions that serve as a standard for evaluating optimization problems; it is among the newest and most widely used test sets. The set is divided into four categories: uni-modal functions (F1), basic functions (F2–F5), hybrid functions (F6–F8), and composition functions (F9–F12). These functions are known for their inherent complexity, making them highly challenging for evaluating the performance of optimization algorithms. In this study, the original and improved whale optimization algorithms were compared on this test set, with the results for 10 dimensions presented here.
From Table 7, it can be seen that the RWOA achieves smaller Mean and SD values than the WOA on 8 of the 12 CEC2022 functions. Furthermore, on the functions where the RWOA outperforms the WOA, the p-values of the statistical test show significant differences at the 0.05 significance level. Based on these findings, the proposed variant shows superior performance over the WOA on the CEC2022 test functions.

5. Comparison of Performance with Other Algorithms

5.1. Comparison with TSA and ABC

In this study, the TSA and ABC were first chosen for comparison with the proposed RWOA on ten benchmark test functions. Both have been shown in the literature [6] to be effective optimization algorithms for solving mathematical problems, so the results for the ten test functions are taken from [6] and reproduced in Table 8. For a fair comparison, the experimental conditions match those in [6]: for the RWOA and WOA, the population size was set to 40, the maximum number of iterations was 500, and each algorithm was run independently 30 times. The detailed results are presented in Table 8.
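The protocol above (30 independent runs, then the Mean/SD statistics reported in the tables) can be sketched generically. The optimizer below is a toy random search used only as a stand-in, and all names are illustrative, not the authors' implementation:

```python
import numpy as np

def sphere(x):
    """Sphere function, a typical uni-modal benchmark."""
    return float(np.sum(x ** 2))

def random_search(f, dim, iters, rng):
    """Toy stand-in for an optimizer: best of `iters` uniform samples."""
    samples = rng.uniform(-100.0, 100.0, size=(iters, dim))
    return min(f(s) for s in samples)

def evaluate(optimizer, f, dim=10, iters=500, runs=30, seed=0):
    """Repeat the optimizer `runs` times with independent seeds and
    return the Max/Mean/Min/SD statistics tabulated in this paper."""
    best = np.array([optimizer(f, dim, iters, np.random.default_rng(seed + r))
                     for r in range(runs)])
    return best.max(), best.mean(), best.min(), best.std()

mx, mean, mn, sd = evaluate(random_search, sphere)
```

Swapping `random_search` for any metaheuristic with the same call signature reproduces the table layout used throughout Section 4.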
The experimental results indicate that all algorithms find the optimal value on function TF7. The TSA obtains the lowest Mean and SD on functions TF1 and TF2, as well as the lowest Mean on TF9. The ABC achieves the best performance on functions TF4 and TF6. Compared to the other algorithms, the RWOA attains the smallest Mean and SD on functions TF3, TF5, and TF8, along with the lowest SD on function TF9. This analysis indicates that the RWOA demonstrates strong stability and notable convergence ability compared to the other algorithms.

5.2. Comparison of the Performance of RWOA with Multiple Algorithms

To conduct a comprehensive assessment of the performance of the RWOA, the 23 benchmark test functions were employed in 10, 50, and 100 dimensions. In addition, several state-of-the-art optimization algorithms were used as comparison algorithms, including the KH, DBO, CS, SCA, PSO, GWO, MPA, and WOA. To ensure fairness, the population size for each algorithm was set to 40, the maximum number of iterations was fixed at 500, and each algorithm was run independently 30 times. The parameter values of each algorithm were kept consistent with their respective original papers; details are shown in Table 9.
As depicted in Table 10, the RWOA still converges to precise values in the low-dimensional case. Except on functions F5 and F6, the RWOA consistently outperforms the WOA in convergence accuracy, and the standard deviation of the RWOA is lower than that of the WOA except on function F6. The p-values of Wilcoxon's rank sum test between the RWOA and WOA are below 0.05 for nearly all functions, indicating a statistically significant difference between them and confirming the effectiveness of the proposed RWOA. The Mean and standard deviation of the RWOA on functions F1–F4 are better than those of all algorithms except the KH, with significant statistical differences at the 0.05 level. The RWOA also stands out in convergence accuracy and stability on functions F7, F8, F9, F10, and F11, surpassing the other comparison algorithms with statistical-test p-values below 0.05. Additionally, Figure 4 shows that the RWOA reaches optimal solutions within its allowable range and exhibits a fast convergence speed, which indicates the stability and superiority of the improved algorithm.
As can be observed from Table 11, the Mean values of the RWOA on functions F1–F4 are second only to those of the KH, indicating good performance on uni-modal functions. The RWOA achieves the best Mean values among the eight comparison algorithms on functions F7–F11, indicating excellent performance on multi-modal functions. Although its accuracy on the remaining functions is not always optimal, it is not far from the best. Likewise, a comparison of the standard deviations of all algorithms shows that the RWOA also performs well in terms of stability. The non-parametric statistical test reveals significant differences between the RWOA and the eight algorithms on nearly all functions at a significance level of 0.05. Overall, the RWOA has better stability and convergence accuracy than the other algorithms.
Furthermore, to illustrate the convergence behavior of the various algorithms and compare their convergence speeds more intuitively, the convergence curves of each algorithm were recorded in this study, as shown in Figure 5. The figures show that the proposed algorithm achieves better convergence speed and accuracy than the other algorithms on most functions.
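Convergence curves such as those in Figures 4–6 plot the best fitness found so far at each iteration; given a per-iteration history, this is simply a running minimum. A minimal generic sketch (not the plotting code used for the paper):

```python
import numpy as np

def convergence_curve(history):
    """Best-so-far fitness per iteration: the monotone non-increasing
    curve that is plotted (often on a log scale) in metaheuristic
    comparisons like Figures 4-6."""
    return np.minimum.accumulate(np.asarray(history, dtype=float))
```

For example, a per-iteration history of [5, 3, 4, 1, 2] yields the curve [5, 3, 3, 1, 1].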
A careful analysis of Table 12 shows that the Mean obtained by the RWOA on functions F1–F4 is second only to that of the KH, while the RWOA achieves the best Mean values on functions F6–F13. This indicates that the RWOA finds the global optimal solution more effectively than the other algorithms when dealing with high-dimensional problems. The RWOA also exhibits the smallest standard deviation (SD) on nearly all functions, demonstrating superior stability in handling high-dimensional problems. The p-values of the non-parametric statistical test show statistical differences between the eight algorithms and the RWOA on almost all functions at the 0.05 significance level.
The simulation curves of the algorithm’s convergence speed in 100 dimensions are depicted in Figure 6. These graphs clearly demonstrate that the RWOA has obvious advantages, as it exhibits a faster convergence speed and higher convergence accuracy compared to those of the other algorithms.

6. Discussions

In this paper, a new algorithm called the reinforced whale optimization algorithm (RWOA) is proposed. This algorithm incorporates adaptive coefficients and opposition-based learning strategies, and additionally, population information is integrated into the encircling prey update formula. To evaluate the performance of the RWOA, extensive experiments were conducted on 23 benchmark functions, 29 CEC-2017 test functions, and 12 CEC-2022 test functions, and the following conclusions are obtained:
(1) Among the 23 benchmark functions, the RWOA has smaller Mean and SD values than the WOA on all uni-modal test functions except function F7. These results indicate that the RWOA exhibits stronger exploitation ability and better stability than the WOA. On all multi-modal test functions, the RWOA also shows smaller Mean and SD values than the WOA, highlighting its enhanced exploration ability and reduced likelihood of becoming trapped in local optima. On the fixed-dimension functions, the RWOA finds the global optimal value and is more stable than the WOA. Wilcoxon's rank sum test reveals that more than 85% of the p-values for the 23 benchmark functions fall below the 0.05 significance threshold, indicating a significant difference between the RWOA and WOA.
(2) Among the 29 CEC2017 benchmark functions, the Mean value of the RWOA is lower than that of the WOA on 21 functions, and the SD value of the RWOA is lower than that of the WOA on nearly all functions. In addition, the p-values of the statistical test show significant differences across almost all functions at a significance level of 0.05.
(3) Among the 12 CEC2022 benchmark functions, the RWOA has smaller Mean and SD values than the WOA on 8 functions. Furthermore, on the functions where the RWOA outperforms the WOA, the p-values of the statistical test show significant differences at the 0.05 significance level.
(4) The RWOA was compared to eight state-of-the-art optimization algorithms on 23 benchmark functions at dimensions of 10, 50, and 100, obtaining the following results:
(4.1) Dimensions 10 and 50: The Mean and SD values of the RWOA are smaller than those of all algorithms except KH for functions F1–F4. The RWOA achieves the best mean values among the eight comparison algorithms for functions F7–F11. Although the accuracy for other functions may not be optimal, it is not significantly lower than the best. The RWOA also performs exceptionally well in terms of the standard deviation. Non-parametric statistical tests revealed significant differences between the RWOA and the eight algorithms for nearly all functions at a significance level of 0.05. According to the convergence curve, the RWOA outperforms the other eight algorithms, demonstrating superior convergence speed and accuracy.
(4.2) Dimension 100: The RWOA achieves a mean on functions F1–F4 second only to that of the KH and provides the best mean values for functions F6–F13. It consistently shows the smallest standard deviation across nearly all functions. Non-parametric statistical tests revealed significant differences between the RWOA and the eight algorithms for almost all functions at a 0.05 significance level. The convergence curves indicate that the RWOA has superior convergence speed and accuracy compared to the other algorithms.
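The opposition-based learning strategy summarized in this discussion can be sketched for a bounded search space. This is the generic OBL reflection formula applied elementwise; it is illustrative code, not the authors' implementation:

```python
import numpy as np

def opposite_point(x, lb, ub):
    """Opposition-based learning: reflect x through the centre of [lb, ub].
    The better of x and its opposite is then kept, which is how OBL
    diversifies candidate solutions around the current best."""
    x = np.asarray(x, dtype=float)
    return lb + ub - x
```

For example, with bounds [0, 10], the point [1, 4] has the opposite point [9, 6]; evaluating both and keeping the fitter one accelerates convergence toward promising regions.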

7. Conclusions

To address the shortcomings of the whale optimization algorithm, such as slow convergence speed and low accuracy, a reinforced whale optimization algorithm is proposed. The new algorithm introduces dynamic adaptive coefficients and an opposition-based learning strategy, which improve the convergence speed and better balance the algorithm's exploration and exploitation capabilities. Additionally, population information is incorporated into the encircling-prey update formula, accelerating the algorithm's convergence and enhancing solution quality. To evaluate the performance of the RWOA, it was tested on 23 benchmark functions, 29 CEC-2017 test functions, and 12 CEC-2022 test functions. Statistical indicators such as the Mean and SD show that the RWOA delivers higher convergence accuracy and greater stability than eight comparison algorithms on most test functions. The convergence curves of the RWOA and the eight algorithms reveal that the RWOA converges faster and with higher accuracy. The non-parametric Wilcoxon rank sum test indicates significant statistical differences between the RWOA and the other eight algorithms at the 0.05 significance level. Overall, the RWOA balances exploration and exploitation without increasing the algorithm's complexity, achieving a faster convergence speed and higher accuracy.
In the future, based on the RWOA, a new multi-objective RWOA will be designed to solve multi-objective optimization problems and applied to address practical optimization problems.

Author Contributions

Conceptualization, Y.M.; methodology, Y.M. and X.W.; software, Y.M. and X.W.; data curation, X.W.; writing—original draft preparation, Y.M.; writing—review and editing, X.W. and W.M.; visualization, X.W.; supervision, Y.M.; project administration, Y.M.; funding acquisition, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62203332), the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC00430), and Tianjin Research Innovation Project for Postgraduate Students (2022SKYZ310).

Data Availability Statement

This manuscript does not report data generation or analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
2. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Erciyes University: Kayseri, Turkey, 2005.
3. Yang, X.S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
4. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845.
5. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
6. Kiran, M.S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 2015, 42, 6686–6698.
7. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
8. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
9. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
10. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
11. Tawhid, M.A.; Ibrahim, A.M. Feature selection based on rough set approach, wrapper approach, and binary whale optimization algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 573–602.
12. Mafarja, M.; Thaher, T.; Al-Betar, M.A.; Too, J.; Awadallah, M.A.; Abu Doush, I.; Turabieh, H. Classification framework for faulty-software using enhanced exploratory whale optimizer-based feature selection scheme and random forest ensemble learning. Appl. Intell. 2023, 53, 18715–18757.
13. Jiang, T.; Zhang, C.; Sun, Q.M. Green job shop scheduling problem with discrete whale optimization algorithm. IEEE Access 2019, 7, 43153–43166.
14. Liu, M.; Yao, X.; Li, Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl. Soft Comput. 2020, 87, 105954.
15. Zhao, F.; Xu, Z.; Bao, H.; Xu, T.; Zhu, N. A cooperative whale optimization algorithm for energy-efficient scheduling of the distributed blocking flow-shop with sequence-dependent setup time. Comput. Ind. Eng. 2023, 178, 109082.
16. Ewees, A.A.; Abd Elaziz, M.; Oliva, D. A new multi-objective optimization algorithm combined with opposition-based learning. Expert Syst. Appl. 2021, 165, 113844.
17. Ma, G.; Yue, X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Eng. Appl. Artif. Intell. 2022, 113, 104960.
18. Agrawal, S.; Panda, R.; Choudhury, P.; Abraham, A. Dominant color component and adaptive whale optimization algorithm for multilevel thresholding of color images. Knowl. Based Syst. 2022, 240, 108172.
19. Sun, Y.; Wang, X.; Chen, Y.; Liu, Z. A modified whale optimization algorithm for large-scale global optimization problems. Expert Syst. Appl. 2018, 114, 563–577.
20. Chen, H.; Yang, C.; Heidari, A.A.; Zhao, X. An efficient double adaptive random spare reinforced whale optimization algorithm. Expert Syst. Appl. 2020, 154, 113018.
21. Lee, C.Y.; Zhuo, G.L. A hybrid whale optimization algorithm for global optimization. Mathematics 2021, 9, 1477.
22. Prasad, D.; Mukherjee, A.; Mukherjee, V. Temperature dependent optimal power flow using chaotic whale optimization algorithm. Expert Syst. 2021, 38, e12685.
23. Sun, Y.; Chen, Y. Multi-population improved whale optimization algorithm for high dimensional optimization. Appl. Soft Comput. 2021, 112, 107854.
24. Li, Y.; Li, W.G.; Zhao, Y.T.; Liu, A. Opposition-based multi-objective whale optimization algorithm with multi-leader guiding. Soft Comput. 2021, 25, 15131–15161.
25. Islam, Q.N.U.; Ahmed, A.; Abdullah, S.M. Optimized controller design for islanded microgrid using non-dominated sorting whale optimization algorithm (NSWOA). Ain Shams Eng. J. 2021, 12, 3677–3689.
26. Saha, N.; Panda, S. Cosine adapted modified whale optimization algorithm for control of switched reluctance motor. Comput. Intell. 2022, 38, 978–1017.
27. Liu, J.; Shi, J.; Hao, F.; Dai, M. A novel enhanced global exploration whale optimization algorithm based on Lévy flights and judgment mechanism for global continuous optimization problems. Eng. Comput. 2023, 39, 2433–2461.
28. Jin, H.; Lv, S.; Yang, Z.; Liu, Y. Eagle strategy using uniform mutation and modified whale optimization algorithm for QoS-aware cloud service composition. Appl. Soft Comput. 2022, 114, 108053.
29. Yang, X.S.; Deb, S. Eagle strategy using Lévy walk and firefly algorithms for stochastic optimization. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 101–111.
30. Guo, Q.; Gao, L.; Chu, X.; Sun, H. Parameter identification for static var compensator model using sensitivity analysis and improved whale optimization algorithm. CSEE J. Power Energy Syst. 2022, 8, 535–547.
31. Lin, X.; Yu, X.; Li, W. A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput. Ind. Eng. 2022, 171, 108361.
32. Zong, X.; Liu, J.; Ye, Z.; Liu, Y. Whale optimization algorithm based on Levy flight and memory for static smooth path planning. Int. J. Mod. Phys. C 2022, 33, 2250138.
33. Zhou, R.; Zhang, Y.; He, K. A novel hybrid binary whale optimization algorithm with chameleon hunting mechanism for wrapper feature selection in QSAR classification model: A drug-induced liver injury case study. Expert Syst. Appl. 2023, 234, 121015.
34. Shen, Y.; Zhang, C.; Gharehchopogh, F.S.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269.
35. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
36. Shekhawat, S.; Saxena, A. Development and applications of an intelligent crow search algorithm based on opposition based learning. ISA Trans. 2020, 99, 210–230.
37. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.K.; Ryan, M.J. MOEO-EED: A multi-objective equilibrium optimizer with exploration–exploitation dominance strategy. Knowl. Based Syst. 2021, 214, 106717.
38. Yuan, J.; Zhao, Z.; Liu, Y.; He, B.; Wang, L.; Xie, B.; Gao, Y. DMPPT control of photovoltaic microgrid based on improved sparrow search algorithm. IEEE Access 2021, 9, 16623–16629.
39. Ouyang, C.; Zhu, D.; Qiu, Y. Lens Learning Sparrow Search Algorithm. Math. Probl. Eng. 2021, 2021, 9935090.
40. Jiao, J.; Cheng, J.; Liu, Y.; Yang, H.; Tan, D.; Cheng, P.; Zhang, Y.; Jiang, C.; Chen, Z. Inversion of TEM measurement data via a quantum particle swarm optimization algorithm with the elite opposition-based learning strategy. Comput. Geosci. 2023, 174, 105334.
41. Han, B.; Li, B.; Qin, C. A novel hybrid particle swarm optimization with marine predators. Swarm Evol. Comput. 2023, 83, 101375.
42. Jensi, R.; Jiji, G.W. An enhanced particle swarm optimization with levy flight for global optimization. Appl. Soft Comput. 2016, 43, 248–261.
43. Zhang, C.; Ding, S. A stochastic configuration network based on chaotic sparrow search algorithm. Knowl. Based Syst. 2021, 220, 106924.
44. Ma, Y.; Zhang, X.; Song, J.; Chen, L. A modified teaching–learning-based optimization algorithm for solving optimization problem. Knowl. Based Syst. 2021, 212, 106599.
45. Ma, Y.; Chang, C.; Lin, Z.; Zhang, X.; Song, J.; Chen, L. Modified Marine Predators Algorithm hybridized with teaching-learning mechanism for solving optimization problems. Math. Biosci. Eng. 2022, 20, 93–127.
46. Ferahtia, S.; Houari, A.; Rezk, H.; Djerioui, A.; Machmoum, M.; Motahhir, S.; Ait-Ahmed, M. Red-tailed hawk algorithm for numerical optimization and real-world problems. Sci. Rep. 2023, 13, 12950.
47. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
48. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60.
Figure 1. Trajectory diagram of w.
Figure 2. Flowchart of RWOA.
Figure 3. Comparison of convergence curves of WOA and RWOA.
Figure 4. Convergence curves at dimension = 10.
Figure 5. Convergence curves at dimension = 50.
Figure 6. Convergence curves at dimension = 100.
Table 1. Introduction to algorithms.
Algorithm | Pros and Cons | Year | Refs.
PSO | Strong global search ability and fast convergence speed, but it easily falls into local optima. | 1995 | [1]
ABC | Good balance of exploration and exploitation capabilities, but it converges slowly and is sensitive to parameter configurations. | 2005 | [2]
CS | Simple model, few parameters, and high versatility, but it easily falls into local optima. | 2009 | [3]
KHA | Strong global search ability, fast convergence speed, and good robustness. | 2012 | [4]
GWO | Few parameters and fast convergence speed, but the algorithm matures prematurely and its convergence accuracy is low. | 2014 | [5]
TSA | Easy to operate, with a good balance between exploration and exploitation, but very sensitive to parameter selection and computationally expensive on high-dimensional problems. | 2015 | [6]
SCA | Simple structure and high computational efficiency, but low convergence accuracy. | 2016 | [7]
WOA | Simple operation, few control parameters, and a strong ability to jump out of local optima, but slow optimization speed, low accuracy, and a poor balance of exploration and exploitation. | 2016 | [8]
MPA | Few parameters, simple structure, easy implementation, and high calculation accuracy, but it easily falls into local optima, balances exploitation and exploration poorly, and has a slow convergence speed and limited solution quality. | 2020 | [9]
DBO | Strong stability, fast search speed, and high accuracy, but it easily falls into local optima. | 2022 | [10]
Table 2. Location updates in the development phase.
encircling prey (applied when p < 0.5 and |A| < 1):
  D_i^t = |C · gbest − X_i^t|
  X_i^(t+1) = gbest − A · D_i^t
  A = 2αr_1 − α
  α = 2 − 2t/T_max
  C = 2r_2
bubble net attack (applied when p ≥ 0.5):
  X_i^(t+1) = gbest + D_p · e^(bl) · cos(2πl)
  D_p = |gbest − X_i^t|
Note: X_i^t represents the position vector of the ith individual in the tth iteration (here, i = 1, 2, …, N; N is the total number of whales); gbest represents the position vector of the target prey; A and C are coefficients; r_1, r_2, l, and p are random numbers in [0, 1]; α decreases linearly from 2 to 0; D_p indicates the distance of the ith whale to the prey; b is a constant; t represents the current iteration number; and T_max represents the maximum number of iterations for the population.
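The two update tactics in Table 2 can be sketched for a single whale as follows. This is a minimal illustration of the standard WOA operators with our own variable names, not the authors' exact code; note that the full WOA also searches around a random whale when |A| ≥ 1, an exploration branch omitted here because Table 2 covers only these two tactics:

```python
import numpy as np

def woa_update(x, gbest, t, t_max, b=1.0, rng=None):
    """One WOA position update implementing the formulas in Table 2."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = 2.0 - 2.0 * t / t_max               # decreases linearly from 2 to 0
    p, l = rng.random(), rng.random()
    if p < 0.5:                                  # encircling prey
        A = 2.0 * alpha * rng.random() - alpha   # A = 2*alpha*r1 - alpha
        C = 2.0 * rng.random()                   # C = 2*r2
        D = np.abs(C * gbest - x)
        return gbest - A * D
    D_p = np.abs(gbest - x)                      # bubble-net (spiral) attack
    return gbest + D_p * np.exp(b * l) * np.cos(2.0 * np.pi * l)
```

Calling `woa_update` once per whale per iteration, with `gbest` refreshed after each sweep, reproduces the two-tactic exploitation loop the table describes.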
Table 3. Experimental results of RWOA and WOA on seven uni-modal functions.
Function | RWOA Max | RWOA Mean | RWOA Min | RWOA SD | WOA Max | WOA Mean | WOA Min | WOA SD | p-Values
F1 | 2.36 × 10−180 | 1.12 × 10−181 | 4.68 × 10−196 | 0.00 | 4.38 × 10−80 | 1.52 × 10−81 | 4.63 × 10−90 | 7.99 × 10−81 | 1.73 × 10−6
F2 | 7.59 × 10−101 | 3.10 × 10−102 | 6.03 × 10−107 | 1.40 × 10−101 | 9.61 × 10−52 | 5.07 × 10−53 | 3.71 × 10−59 | 1.84 × 10−52 | 1.73 × 10−6
F3 | 1.25 × 10−121 | 4.43 × 10−123 | 4.01 × 10−143 | 2.29 × 10−122 | 2.38 × 105 | 1.74 × 105 | 1.10 × 105 | 3.58 × 104 | 1.73 × 10−6
F4 | 2.95 × 10−72 | 9.90 × 10−74 | 1.24 × 10−83 | 5.39 × 10−73 | 92.30 | 61.30 | 0.28 | 30.8 | 1.73 × 10−6
F5 | 48.20 | 47.60 | 46.90 | 0.31 | 48.60 | 47.90 | 47.00 | 0.43 | 5.32 × 10−3
F6 | 0.81 | 0.45 | 6.76 × 10−2 | 0.18 | 1.62 | 0.72 | 0.14 | 0.29 | 1.89 × 10−4
F7 | 1.81 × 10−4 | 4.36 × 10−5 | 9.05 × 10−6 | 4.17 × 10−5 | 1.82 × 10−2 | 4.38 × 10−3 | 7.87 × 10−6 | 4.90 × 10−3 | 1.92 × 10−6
Table 4. Experimental results of RWOA and WOA on six high-dimensional multi-modal functions.
Function | RWOA Max | RWOA Mean | RWOA Min | RWOA SD | WOA Max | WOA Mean | WOA Min | WOA SD | p-Values
F8 | −2.00 × 104 | −2.09 × 104 | −2.09 × 104 | 2.26 × 102 | −1.26 × 104 | −1.72 × 104 | −2.09 × 104 | 2.80 × 103 | 1.73 × 10−6
F9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00
F10 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 0.00 | 7.99 × 10−15 | 4.91 × 10−15 | 8.88 × 10−16 | 2.23 × 10−15 | 3.17 × 10−6
F11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.21 | 2.15 × 10−2 | 0.00 | 5.74 × 10−2 | 0.13
F12 | 2.40 × 10−2 | 1.27 × 10−2 | 1.49 × 10−4 | 5.57 × 10−3 | 3.51 × 10−2 | 1.51 × 10−2 | 6.04 × 10−3 | 7.27 × 10−3 | 0.22
F13 | 0.62 | 0.31 | 6.74 × 10−2 | 0.14 | 1.76 | 0.78 | 0.28 | 0.32 | 2.35 × 10−6
Table 5. Experimental results of RWOA and WOA on ten fixed-dimensional multi-modal functions.
Function | RWOA Max | RWOA Mean | RWOA Min | RWOA SD | WOA Max | WOA Mean | WOA Min | WOA SD | p-Values
F14 | 10.80 | 1.82 | 1.00 | 1.87 | 10.80 | 2.96 | 1.00 | 3.25 | 0.15
F15 | 6.35 × 10−4 | 3.32 × 10−4 | 3.08 × 10−4 | 6.45 × 10−5 | 7.64 × 10−3 | 8.64 × 10−4 | 3.10 × 10−4 | 1.34 × 10−3 | 4.73 × 10−6
F16 | −1.03 | −1.03 | −1.03 | 4.35 × 10−6 | −1.03 | −1.03 | −1.03 | 8.74 × 10−10 | 1.73 × 10−6
F17 | 0.40 | 0.40 | 0.40 | 2.15 × 10−5 | 0.40 | 0.40 | 0.40 | 1.32 × 10−5 | 1.75 × 10−2
F18 | 3.00 | 3.00 | 3.00 | 1.16 × 10−4 | 3.00 | 3.00 | 3.00 | 5.42 × 10−5 | 0.67
F19 | −3.85 | −3.86 | −3.86 | 2.69 × 10−3 | −3.85 | −3.86 | −3.86 | 4.69 × 10−3 | 0.13
F20 | −3.02 | −3.21 | −3.32 | 9.51 × 10−2 | −3.05 | −3.21 | −3.32 | 9.57 × 10−2 | 0.88
F21 | −5.05 | −7.11 | −10.2 | 2.45 | −2.63 | −8.46 | −10.20 | 2.68 | 1.11 × 10−2
F22 | −5.08 | −6.97 | −10.4 | 2.52 | −2.77 | −7.79 | −10.40 | 3.08 | 0.26
F23 | −5.12 | −7.25 | −10.5 | 2.57 | −1.68 | −7.83 | −10.50 | 3.25 | 5.45 × 10−2
Table 6. Experimental results of RWOA and WOA on CEC-2017.
Function | RWOA Max | RWOA Mean | RWOA Min | RWOA SD | WOA Max | WOA Mean | WOA Min | WOA SD | p-Values
CECF1 | 6.02 × 109 | 2.08 × 109 | 4.24 × 108 | 1.07 × 109 | 4.99 × 108 | 4.15 × 107 | 3.81 × 106 | 9.18 × 107 | 1.73 × 10−6
CECF3 | 6.91 × 103 | 3.06 × 103 | 1.57 × 103 | 1.22 × 103 | 3.71 × 104 | 6.54 × 103 | 7.37 × 102 | 7.83 × 103 | 9.78 × 10−2
CECF4 | 6.51 × 102 | 4.99 × 102 | 4.16 × 102 | 58.00 | 6.15 × 102 | 4.49 × 102 | 4.01 × 102 | 59.70 | 2.26 × 10−3
CECF5 | 5.92 × 102 | 5.70 × 102 | 5.48 × 102 | 11.90 | 6.23 × 102 | 5.60 × 102 | 5.18 × 102 | 22.70 | 5.98 × 10−2
CECF6 | 6.62 × 102 | 6.37 × 102 | 6.08 × 102 | 10.90 | 6.95 × 102 | 6.42 × 102 | 6.17 × 102 | 16.90 | 0.13
CECF7 | 8.30 × 102 | 7.96 × 102 | 7.66 × 102 | 14.40 | 8.72 × 102 | 7.85 × 102 | 7.46 × 102 | 30.00 | 5.98 × 10−2
CECF8 | 8.62 × 102 | 8.37 × 102 | 8.26 × 102 | 7.71 | 8.76 × 102 | 8.46 × 102 | 8.19 × 102 | 16.90 | 1.75 × 10−2
CECF9 | 1.52 × 103 | 1.29 × 103 | 1.03 × 103 | 1.30 × 102 | 2.94 × 103 | 1.64 × 103 | 1.09 × 103 | 5.01 × 102 | 5.71 × 10−4
CECF10 | 2.82 × 103 | 2.17 × 103 | 1.42 × 103 | 3.41 × 102 | 2.91 × 103 | 2.21 × 103 | 1.51 × 103 | 3.72 × 102 | 0.59
CECF11 | 1.31 × 103 | 1.22 × 103 | 1.14 × 103 | 43.70 | 1.57 × 103 | 1.26 × 103 | 1.13 × 103 | 1.12 × 102 | 0.12
CECF12 | 1.62 × 107 | 6.33 × 106 | 1.44 × 105 | 3.94 × 106 | 2.23 × 107 | 6.63 × 106 | 9.59 × 104 | 7.00 × 106 | 0.89
CECF13 | 3.59 × 104 | 1.15 × 104 | 3.04 × 103 | 7.62 × 103 | 3.46 × 104 | 1.33 × 104 | 2.07 × 103 | 1.11 × 104 | 0.37
CECF14 | 5.17 × 103 | 2.22 × 103 | 1.50 × 103 | 9.61 × 102 | 5.95 × 103 | 2.85 × 103 | 1.49 × 103 | 1.67 × 103 | 0.36
CECF15 | 1.93 × 104 | 6.15 × 103 | 1.65 × 103 | 4.26 × 103 | 4.85 × 104 | 1.26 × 104 | 2.59 × 103 | 9.97 × 103 | 1.04 × 10−3
CECF16 | 2.24 × 103 | 1.98 × 103 | 1.73 × 103 | 99.60 | 2.17 × 103 | 1.90 × 103 | 1.64 × 103 | 1.49 × 102 | 2.30 × 10−2
CECF17 | 1.83 × 103 | 1.80 × 103 | 1.76 × 103 | 15.80 | 1.94 × 103 | 1.83 × 103 | 1.75 × 103 | 57.20 | 6.04 × 10−3
CECF18 | 3.72 × 104 | 1.35 × 104 | 3.30 × 103 | 9.31 × 103 | 4.25 × 104 | 2.04 × 104 | 3.30 × 103 | 1.19 × 104 | 2.18 × 10−2
CECF19 | 6.24 × 105 | 1.03 × 105 | 6.38 × 103 | 1.44 × 105 | 1.37 × 106 | 9.76 × 104 | 2.00 × 103 | 2.58 × 105 | 0.17
CECF20 | 2.30 × 103 | 2.19 × 103 | 2.11 × 103 | 48.30 | 2.35 × 103 | 2.20 × 103 | 2.05 × 103 | 76.80 | 0.69
CECF21 | 2.37 × 103 | 2.25 × 103 | 2.21 × 103 | 40.60 | 2.38 × 103 | 2.34 × 103 | 2.22 × 103 | 47.90 | 8.19 × 10−5
CECF22 | 2.62 × 103 | 2.42 × 103 | 2.31 × 103 | 94.40 | 4.00 × 103 | 2.43 × 103 | 2.24 × 103 | 4.16 × 102 | 1.11 × 10−3
CECF23 | 2.70 × 103 | 2.67 × 103 | 2.64 × 103 | 16.40 | 2.70 × 103 | 2.65 × 103 | 2.62 × 103 | 19.50 | 7.71 × 10−4
CECF24 | 2.81 × 103 | 2.75 × 103 | 2.55 × 103 | 78.20 | 2.85 × 103 | 2.79 × 103 | 2.76 × 103 | 24.70 | 5.32 × 10−3
CECF25 | 3.28 × 103 | 3.06 × 103 | 2.95 × 103 | 93.70 | 2.97 × 103 | 2.94 × 103 | 2.68 × 103 | 52.80 | 4.73 × 10−6
CECF26 | 4.12 × 103 | 3.33 × 103 | 2.82 × 103 | 2.79 × 102 | 4.68 × 103 | 3.62 × 103 | 2.88 × 103 | 5.74 × 102 | 1.48 × 10−2
CECF27 | 3.24 × 103 | 3.14 × 103 | 3.11 × 103 | 33.60 | 3.24 × 103 | 3.15 × 103 | 3.09 × 103 | 43.40 | 0.72
CECF28 | 3.74 × 103 | 3.49 × 103 | 3.18 × 103 | 1.13 × 102 | 3.74 × 103 | 3.48 × 103 | 3.22 × 103 | 1.66 × 102 | 0.86
CECF29 | 3.60 × 103 | 3.36 × 103 | 3.25 × 103 | 76.00 | 3.73 × 103 | 3.41 × 103 | 3.17 × 103 | 1.18 × 102 | 0.13
CECF30 | 3.88 × 106 | 6.53 × 105 | 5.35 × 104 | 9.29 × 105 | 6.04 × 106 | 1.47 × 106 | 5.82 × 103 | 1.49 × 106 | 1.96 × 10−2
Table 7. Experimental results of RWOA and WOA on CEC-2022.
Function | WOA Mean | WOA SD | RWOA Mean | RWOA SD | p-Values
CF1 | 2.19 × 104 | 9.96 × 103 | 2.40 × 103 | 9.83 × 102 | 1.73 × 10−6
CF2 | 4.53 × 102 | 58.50 | 5.52 × 102 | 1.14 × 102 | 1.15 × 10−4
CF3 | 6.40 × 102 | 13.30 | 6.35 × 102 | 10.80 | 1.75 × 10−2
CF4 | 8.41 × 102 | 13.30 | 8.34 × 102 | 5.40 | 4.70 × 10−3
CF5 | 1.47 × 103 | 3.70 × 102 | 1.35 × 103 | 1.80 × 102 | 0.27
CF6 | 4.67 × 103 | 1.68 × 103 | 2.14 × 104 | 3.13 × 104 | 1.60 × 10−4
CF7 | 2.07 × 103 | 22.90 | 2.10 × 103 | 26.90 | 7.16 × 10−4
CF8 | 2.23 × 103 | 7.69 | 2.23 × 103 | 5.82 | 3.60 × 10−3
CF9 | 2.61 × 103 | 50.40 | 2.63 × 103 | 40.40 | 0.15
CF10 | 2.63 × 103 | 2.55 × 102 | 2.57 × 103 | 79.60 | 0.47
CF11 | 2.98 × 103 | 91.30 | 2.96 × 103 | 2.58 × 102 | 0.57
CF12 | 2.90 × 103 | 45.30 | 2.90 × 103 | 25.80 | 0.85
Table 8. Performance comparisons among four algorithms.
Function | TSA [6] Mean | TSA [6] SD | ABC [6] Mean | ABC [6] SD | WOA Mean | WOA SD | RWOA Mean | RWOA SD
TF1 | 7.64 × 10−243 | 0.00 | 2.25 × 10−17 | 8.00 × 10−18 | 7.33 × 10−89 | 2.86 × 10−88 | 2.14 × 10−190 | 0.00
TF2 | 3.49 × 10−147 | 1.75 × 10−146 | 7.76 × 10−17 | 2.06 × 10−17 | 3.65 × 10−59 | 1.30 × 10−58 | 6.86 × 10−106 | 3.53 × 10−105
TF3 | 9.47 × 10−63 | 5.15 × 10−62 | 2.08 × 10−14 | 4.56 × 10−14 | 9.70 × 10−5 | 3.28 × 10−4 | 1.05 × 10−78 | 3.98 × 10−78
TF4 | 3.33 × 10−2 | 0.18 | 0.00 | 0.00 | 7.64 × 10−6 | 1.00 × 10−5 | 1.34 × 10−4 | 2.02 × 10−4
TF5 | 5.16 × 10−4 | 3.05 × 10−4 | 2.39 × 10−3 | 1.22 × 10−3 | 1.20 × 10−3 | 1.30 × 10−3 | 5.75 × 10−5 | 8.73 × 10−5
TF6 | 0.32 | 1.05 | 3.42 × 10−2 | 3.57 × 10−2 | 1.88 | 3.63 | 1.21 | 0.50
TF7 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
TF8 | 2.02 × 10−2 | 1.57 × 10−2 | 1.23 × 10−3 | 2.80 × 10−3 | 4.65 × 10−2 | 9.47 × 10−2 | 0.00 | 0.00
TF9 | 7.11 × 10−16 | 1.45 × 10−15 | 3.20 × 10−15 | 1.08 × 10−15 | 3.49 × 10−15 | 1.85 × 10−15 | 8.88 × 10−16 | 0.00
TF10 | 9.42 × 10−32 | 3.34 × 10−47 | 0.21 | 8.82 × 10−18 | 3.11 × 10−4 | 0.00 | 1.85 × 10−4 | 1.92 × 10−4
Table 9. Algorithm parameter settings.

| Algorithm | Parameters | Year | Refs. |
| PSO | ω = 0.2~0.9, c1 = c2 = 2 | 1995 | [1] |
| CS | pa = 0.25, α = 1, β = 1.5 | 2009 | [3] |
| KHA | Vf = 0.02, Dmax = 0.005, Nmax = 0.01 | 2012 | [4] |
| GWO | α = 2 → 0, linearly decreasing | 2014 | [5] |
| SCA | α = 2 | 2016 | [7] |
| WOA | b = 1, α = 2 → 0, linearly decreasing | 2016 | [8] |
| MPA | P = 0.5, FADs = 0.2 | 2020 | [9] |
| DBO | α = 1 or −1, b = 0.3, k = 0.1, S = 0.5, P_percent = 0.2 | 2022 | [10] |
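Several entries in Table 9 are schedules rather than constants: in GWO and WOA, the coefficient decays linearly from 2 to 0 over the run, and WOA derives its step coefficients A and C from it at every update. The sketch below illustrates that schedule under the standard WOA formulation (function and variable names are illustrative; the spiral parameter b = 1 enters the spiral update, which is not shown here):

```python
import random

def woa_coefficients(t, t_max, rng=None):
    """WOA control coefficients at iteration t: alpha decays linearly
    from 2 to 0 over the run (Table 9), and A, C are resampled
    from fresh uniform draws at every position update."""
    rng = rng or random.Random(0)
    alpha = 2.0 * (1.0 - t / t_max)          # linearly decreasing, 2 -> 0
    A = 2.0 * alpha * rng.random() - alpha   # |A| >= 1 biases exploration
    C = 2.0 * rng.random()                   # weight on the prey position
    return alpha, A, C

start, _, _ = woa_coefficients(0, 500)
end, _, _ = woa_coefficients(500, 500)
print(start, end)  # 2.0 0.0
```

Early iterations (large alpha, |A| often ≥ 1) favour exploration; late iterations (alpha near 0) shrink the step toward the best-so-far solution, which is the exploration–exploitation balance the dynamic adaptive coefficient in the RWOA also targets.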
Table 10. Comparison of RWOA with other algorithms on F1–F13 with 10 dimensions.

| Function | Parameters | CS | DBO | GWO | KH | PSO | SCA | WOA | MPA | RWOA |
| F1 | Mean | 2.71 × 10^−3 | 1.26 × 10^−104 | 2.49 × 10^−64 | 0.00 | 7.01 × 10^−23 | 1.33 × 10^−12 | 9.30 × 10^−85 | 4.85 × 10^−30 | 2.36 × 10^−181 |
| | SD | 1.71 × 10^−3 | 6.91 × 10^−104 | 8.63 × 10^−64 | 0.00 | 1.66 × 10^−22 | 3.94 × 10^−12 | 2.70 × 10^−84 | 6.15 × 10^−30 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F2 | Mean | 2.28 × 10^−5 | 1.11 × 10^−51 | 9.00 × 10^−37 | 2.81 × 10^−170 | 6.34 × 10^−13 | 1.17 × 10^−9 | 1.88 × 10^−54 | 3.29 × 10^−17 | 6.86 × 10^−105 |
| | SD | 3.44 × 10^−5 | 6.05 × 10^−51 | 2.11 × 10^−36 | 0.00 | 8.23 × 10^−13 | 3.29 × 10^−9 | 6.81 × 10^−54 | 3.85 × 10^−17 | 1.94 × 10^−104 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F3 | Mean | 3.29 × 10^−3 | 7.59 × 10^−81 | 1.19 × 10^−27 | 0.00 | 3.26 × 10^−7 | 2.10 × 10^−3 | 1.85 × 10^2 | 3.03 × 10^−14 | 1.25 × 10^−138 |
| | SD | 3.15 × 10^−3 | 4.16 × 10^−80 | 6.32 × 10^−27 | 0.00 | 6.30 × 10^−7 | 8.88 × 10^−3 | 4.17 × 10^2 | 6.51 × 10^−14 | 6.86 × 10^−138 |
| | p-values | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F4 | Mean | 2.09 × 10^−3 | 9.85 × 10^−48 | 9.58 × 10^−21 | 6.79 × 10^−165 | 2.52 × 10^−6 | 4.59 × 10^−4 | 0.68 | 1.16 × 10^−12 | 2.07 × 10^−77 |
| | SD | 1.73 × 10^−3 | 5.39 × 10^−47 | 1.38 × 10^−20 | 0.00 | 3.19 × 10^−6 | 8.99 × 10^−4 | 1.62 | 1.02 × 10^−12 | 7.49 × 10^−77 |
| | p-values | 1.73 × 10^−6 | 2.60 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F5 | Mean | 1.89 × 10^−4 | 4.58 | 6.50 | 8.59 | 8.71 | 7.37 | 6.63 | 1.56 | 6.75 |
| | SD | 2.92 × 10^−4 | 0.82 | 0.56 | 2.96 × 10^−2 | 1.5.60 | 0.29 | 0.62 | 0.38 | 0.47 |
| | p-values | 1.73 × 10^−6 | 1.92 × 10^−6 | 7.19 × 10^−2 | 1.73 × 10^−6 | 0.15 | 4.07 × 10^−5 | 0.60 | 1.73 × 10^−6 | NA |
| F6 | Mean | 2.68 × 10^−3 | 3.36 × 10^−32 | 3.05 × 10^−6 | 0.96 | 5.70 × 10^−23 | 0.41 | 3.64 × 10^−4 | 1.13 × 10^−12 | 6.05 × 10^−3 |
| | SD | 2.15 × 10^−3 | 6.84 × 10^−32 | 1.12 × 10^−6 | 0.29 | 1.29 × 10^−22 | 0.11 | 2.96 × 10^−4 | 8.15 × 10^−13 | 4.23 × 10^−3 |
| | p-values | 1.59 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F7 | Mean | 1.65 × 10^−7 | 1.05 × 10^−3 | 5.65 × 10^−4 | 9.90 × 10^−5 | 6.11 × 10^−3 | 2.02 × 10^−3 | 1.78 × 10^−3 | 7.35 × 10^−4 | 7.35 × 10^−5 |
| | SD | 1.42 × 10^−7 | 5.64 × 10^−4 | 4.47 × 10^−4 | 7.15 × 10^−5 | 2.95 × 10^−3 | 1.54 × 10^−3 | 2.12 × 10^−3 | 4.73 × 10^−4 | 9.56 × 10^−5 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.52 × 10^−6 | 8.59 × 10^−2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.13 × 10^−6 | NA |
| F8 | Mean | 0.61 | −3.63 × 10^3 | −2.74 × 10^3 | −1.59 × 10^3 | −2.42 × 10^3 | −2.20 × 10^3 | −3.45 × 10^3 | −3.57 × 10^3 | −4.07 × 10^3 |
| | SD | 0.52 | 4.47 × 10^2 | 2.71 × 10^2 | 3.19 × 10^2 | 3.31 × 10^2 | 1.62 × 10^2 | 5.89 × 10^2 | 2.16 × 10^2 | 2.74 × 10^2 |
| | p-values | 1.73 × 10^−6 | 6.32 × 10^−5 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.32 × 10^−5 | 6.34 × 10^−6 | NA |
| F9 | Mean | 2.77 × 10^−6 | 1.38 | 0.10 | 0.12 | 0.00 | 5.15 | 0.24 | 0.71 | 0.00 |
| | SD | 1.86 × 10^−6 | 4.47 | 0.56 | 0.56 | 0.00 | 3.27 | 1.32 | 3.87 | 0.00 |
| | p-values | 1.73 × 10^−6 | 0.13 | 0.25 | 1.00 | 1.73 × 10^−6 | 2.69 × 10^−5 | 1.00 | 0.50 | NA |
| F10 | Mean | 1.50 × 10^−4 | 8.88 × 10^−16 | 6.81 × 10^−15 | 8.88 × 10^−16 | 6.02 × 10^−12 | 2.43 × 10^−7 | 5.03 × 10^−15 | 5.27 × 10^−15 | 8.88 × 10^−16 |
| | SD | 9.47 × 10^−5 | 0.00 | 1.70 × 10^−15 | 0.00 | 7.74 × 10^−12 | 5.67 × 10^−7 | 2.30 × 10^−15 | 1.53 × 10^−15 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 6.25 × 10^−7 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.56 × 10^−6 | 4.00 × 10^−7 | NA |
| F11 | Mean | 1.05 | 2.34 × 10^−2 | 3.51 × 10^−2 | 0.00 | 0.23 | 7.88 × 10^−2 | 4.16 × 10^−2 | 0.00 | 0.00 |
| | SD | 0.90 | 5.33 × 10^−2 | 6.51 × 10^−2 | 0.00 | 0.16 | 0.14 | 8.49 × 10^−2 | 0.00 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.56 × 10^−2 | 8.86 × 10^−5 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.81 × 10^−3 | 1.00 | NA |
| F12 | Mean | 4.51 × 10^−4 | 5.19 × 10^−12 | 3.92 × 10^−3 | 0.36 | 1.33 × 10^−24 | 8.97 × 10^−2 | 5.03 × 10^−3 | 5.06 × 10^−13 | 2.55 × 10^−3 |
| | SD | 3.15 × 10^−4 | 2.83 × 10^−11 | 7.98 × 10^−3 | 0.22 | 3.77 × 10^−24 | 3.05 × 10^−2 | 8.99 × 10^−3 | 3.86 × 10^−13 | 2.11 × 10^−3 |
| | p-values | 4.29 × 10^−6 | 1.73 × 10^−6 | 0.16 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 0.54 | 1.73 × 10^−6 | NA |
| F13 | Mean | 5.27 × 10^−4 | 3.75 × 10^−4 | 3.39 × 10^−3 | 0.76 | 1.62 × 10^−22 | 0.28 | 1.55 × 10^−2 | 3.22 × 10^−12 | 1.67 × 10^−2 |
| | SD | 3.60 × 10^−4 | 2.00 × 10^−3 | 1.85 × 10^−2 | 0.18 | 6.35 × 10^−22 | 8.24 × 10^−2 | 2.92 × 10^−2 | 3.16 × 10^−12 | 1.16 × 10^−2 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.11 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.71 × 10^−2 | 1.73 × 10^−6 | NA |
Note: NA indicates that no p-value is applicable, since RWOA is the reference algorithm in the comparison.
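Each Mean/SD pair in Tables 10–12 summarizes the best objective values obtained over repeated independent runs of one algorithm on one function. The aggregation can be sketched as follows (the run values below are made up for illustration, and the sample standard deviation is one common convention; the paper may use the population form):

```python
import statistics

def summarize_runs(best_values):
    """Mean and sample standard deviation over independent runs,
    the quantities reported in the Mean and SD rows."""
    return statistics.mean(best_values), statistics.stdev(best_values)

runs = [1.2e-5, 3.4e-5, 0.8e-5, 2.1e-5, 1.5e-5]  # illustrative best-so-far values
mean, sd = summarize_runs(runs)
print(f"{mean:.2e} {sd:.2e}")  # 1.80e-05 1.01e-05
```

A small SD alongside a small mean (as in most RWOA columns) indicates both accuracy and run-to-run stability, which is the basis for the stability claims in the text.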
Table 11. Comparison of RWOA with other algorithms on F1–F13 with 50 dimensions.

| Function | Parameters | CS | DBO | GWO | KH | PSO | SCA | WOA | MPA | RWOA |
| F1 | Mean | 1.86 × 10^2 | 1.72 × 10^−108 | 3.46 × 10^−22 | 0.00 | 9.18 × 10^−2 | 5.71 × 10^2 | 2.89 × 10^−79 | 1.69 × 10^−20 | 2.04 × 10^−180 |
| | SD | 9.51 × 10^2 | 9.44 × 10^−108 | 2.68 × 10^−22 | 0.00 | 8.33 × 10^−2 | 7.49 × 10^2 | 1.13 × 10^−78 | 1.51 × 10^−20 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F2 | Mean | 0.10 | 2.49 × 10^−57 | 1.21 × 10^−13 | 4.63 × 10^−171 | 0.89 | 0.56 | 2.31 × 10^−52 | 4.69 × 10^−12 | 4.65 × 10^−102 |
| | SD | 0.11 | 1.25 × 10^−56 | 6.37 × 10^−14 | 0.00 | 0.49 | 0.58 | 6.12 × 10^−52 | 4.11 × 10^−12 | 1.24 × 10^−101 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F3 | Mean | 15.60 | 8.44 × 10^−13 | 3.84 × 10^−2 | 0.00 | 1.21 × 10^3 | 4.38 × 10^4 | 1.77 × 10^5 | 0.13 | 1.55 × 10^−121 |
| | SD | 17.80 | 4.62 × 10^−12 | 6.26 × 10^−2 | 0.00 | 3.25 × 10^2 | 1.36 × 10^4 | 3.56 × 10^4 | 0.28 | 8.46 × 10^−121 |
| | p-values | 1.73 × 10^−6 | 4.29 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F4 | Mean | 7.86 × 10^2 | 1.11 × 10^−50 | 9.06 × 10^−5 | 3.02 × 10^−167 | 3.27 | 68.10 | 59.90 | 3.53 × 10^−8 | 9.18 × 10^−75 |
| | SD | 4.24 × 10^3 | 6.08 × 10^−50 | 6.86 × 10^−5 | 0.00 | 0.52 | 7.22 | 28.50 | 1.37 × 10^−8 | 3.46 × 10^−74 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F5 | Mean | 0.98 | 45.70 | 47.30 | 48.50 | 3.27 × 10^2 | 3.50 × 10^6 | 48.00 | 46.20 | 47.70 |
| | SD | 1.42 | 0.25 | 0.82 | 1.51 × 10^−2 | 2.10 × 10^2 | 4.08 × 10^6 | 0.39 | 0.51 | 0.30 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.87 × 10^−2 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.48 × 10^−4 | 1.92 × 10^−6 | NA |
| F6 | Mean | 9.43 | 1.58 × 10^−2 | 2.25 | 10.60 | 7.24 × 10^−2 | 4.34 × 10^2 | 0.69 | 0.27 | 0.47 |
| | SD | 16.10 | 3.69 × 10^−2 | 0.56 | 0.68 | 3.85 × 10^−2 | 4.25 × 10^2 | 0.21 | 0.16 | 0.25 |
| | p-values | 5.79 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 2.96 × 10^−3 | 6.64 × 10^−4 | NA |
| F7 | Mean | 1.46 × 10^−3 | 2.09 × 10^−3 | 2.49 × 10^−3 | 1.11 × 10^−4 | 1.85 | 3.47 | 2.66 × 10^−3 | 9.50 × 10^−4 | 4.27 × 10^−5 |
| | SD | 1.29 × 10^−3 | 1.25 × 10^−3 | 9.91 × 10^−4 | 9.99 × 10^−5 | 0.78 | 4.23 | 2.84 × 10^−3 | 5.65 × 10^−4 | 3.63 × 10^−5 |
| | p-values | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.31 × 10^−4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F8 | Mean | 7.19 × 10^6 | −1.41 × 10^4 | −9.14 × 10^3 | −3.34 × 10^3 | −7.53 × 10^3 | −4.87 × 10^3 | −1.86 × 10^4 | −1.33 × 10^4 | −2.06 × 10^4 |
| | SD | 1.89 × 10^7 | 2.64 × 10^3 | 1.26 × 10^3 | 6.65 × 10^2 | 2.31 × 10^3 | 3.71 × 10^2 | 2.72 × 10^3 | 7.24 × 10^2 | 8.46 × 10^2 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.20 × 10^−4 | 1.73 × 10^−6 | NA |
| F9 | Mean | 2.68 × 10^−2 | 1.26 | 2.49 | 0.00 | 1.40 × 10^2 | 1.08 × 10^2 | 0.00 | 0.00 | 0.00 |
| | SD | 2.37 × 10^−2 | 4.82 | 2.95 | 0.00 | 33.50 | 63.00 | 0.00 | 0.00 | 0.00 |
| | p-values | 1.73 × 10^−6 | 0.50 | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.00 | 1.00 | NA |
| F10 | Mean | 1.17 | 8.88 × 10^−16 | 3.19 × 10^−12 | 8.88 × 10^−16 | 1.44 | 1.89 × 10^1 | 4.20 × 10^−15 | 1.89 × 10^−11 | 8.88 × 10^−16 |
| | SD | 1.39 | 0.00 | 1.73 × 10^−12 | 0.00 | 0.57 | 4.54 | 2.27 × 10^−15 | 1.05 × 10^−11 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.19 × 10^−6 | 1.73 × 10^−6 | NA |
| F11 | Mean | 5.10 × 10^7 | 0.00 | 3.31 × 10^−3 | 0.00 | 6.86 × 10^−3 | 4.93 | 0.00 | 0.00 | 0.00 |
| | SD | 1.27 × 10^8 | 0.00 | 7.15 × 10^−3 | 0.00 | 7.87 × 10^−3 | 2.68 | 0.00 | 0.00 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 7.81 × 10^−3 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.00 | 1.00 | NA |
| F12 | Mean | 2.02 | 2.30 × 10^−3 | 0.11 | 1.01 | 2.26 × 10^−2 | 1.44 × 10^7 | 1.74 × 10^−2 | 6.73 × 10^−3 | 1.07 × 10^−2 |
| | SD | 2.02 | 1.14 × 10^−2 | 9.51 × 10^−2 | 0.13 | 5.52 × 10^−2 | 2.43 × 10^7 | 1.03 × 10^−2 | 5.39 × 10^−3 | 5.96 × 10^−3 |
| | p-values | 1.73 × 10^−6 | 3.11 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.19 × 10^−2 | 1.73 × 10^−6 | 9.27 × 10^−3 | 1.29 × 10^−3 | NA |
| F13 | Mean | 2.21 | 1.34 | 1.84 | 4.94 | 8.34 × 10^−2 | 1.34 × 10^7 | 0.83 | 0.38 | 0.25 |
| | SD | 1.93 | 0.73 | 0.31 | 1.57 × 10^−4 | 6.15 × 10^−2 | 1.92 × 10^7 | 0.31 | 0.20 | 0.10 |
| | p-values | 3.88 × 10^−6 | 2.13 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.69 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.94 × 10^−4 | NA |
Table 12. Comparison of RWOA with other algorithms on F1–F13 with 100 dimensions.

| Function | Parameters | CS | DBO | GWO | KH | PSO | SCA | WOA | MPA | RWOA |
| F1 | Mean | 7.25 × 10^3 | 1.56 × 10^−122 | 4.36 × 10^−14 | 0.00 | 16.60 | 9.84 × 10^3 | 1.90 × 10^−78 | 7.29 × 10^−19 | 1.67 × 10^−179 |
| | SD | 3.77 × 10^4 | 7.15 × 10^−122 | 2.72 × 10^−14 | 0.00 | 6.22 | 5.70 × 10^3 | 1.01 × 10^−77 | 5.87 × 10^−19 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F2 | Mean | 0.33 | 8.00 × 10^−54 | 6.36 × 10^−9 | 2.21 × 10^−169 | 28.00 | 8.27 | 3.86 × 10^−52 | 4.41 × 10^−11 | 1.16 × 10^−99 |
| | SD | 0.38 | 4.38 × 10^−53 | 1.82 × 10^−9 | 0.00 | 6.53 | 5.84 | 8.77 × 10^−52 | 4.39 × 10^−11 | 5.14 × 10^−99 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F3 | Mean | 19.70 | 3.64 × 10^−32 | 2.81 × 10^2 | 0.00 | 1.44 × 10^4 | 2.35 × 10^5 | 9.50 × 10^5 | 10.80 | 7.60 × 10^−107 |
| | SD | 26.00 | 1.99 × 10^−31 | 2.57 × 10^2 | 0.00 | 2.95 × 10^3 | 6.43 × 10^4 | 2.05 × 10^5 | 10.70 | 4.15 × 10^−106 |
| | p-values | 1.73 × 10^−6 | 8.47 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F4 | Mean | 37.00 | 9.71 × 10^−55 | 0.36 | 8.12 × 10^−167 | 10.80 | 88.70 | 73.60 | 3.22 × 10^−7 | 2.33 × 10^−74 |
| | SD | 1.34 × 10^2 | 5.18 × 10^−54 | 0.36 | 0.00 | 1.67 | 2.73 | 24.30 | 1.17 × 10^−7 | 1.18 × 10^−73 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | NA |
| F5 | Mean | 2.90 | 96.90 | 97.60 | 98.20 | 1.04 × 10^4 | 9.68 × 10^7 | 97.90 | 97.20 | 97.80 |
| | SD | 3.12 | 0.82 | 0.79 | 1.36 × 10^−2 | 3.49 × 10^3 | 4.82 × 10^7 | 0.31 | 0.75 | 0.32 |
| | p-values | 1.73 × 10^−6 | 1.06 × 10^−4 | 0.49 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.45 × 10^−2 | 1.48 × 10^−3 | NA |
| F6 | Mean | 1.79 × 10^3 | 2.60 | 9.30 | 23.10 | 13.30 | 9.67 × 10^3 | 2.37 | 4.80 | 1.25 |
| | SD | 9.72 × 10^3 | 0.52 | 0.76 | 0.63 | 4.36 | 5.75 × 10^3 | 0.85 | 0.74 | 0.54 |
| | p-values | 4.07 × 10^−5 | 3.18 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.32 × 10^−5 | 1.73 × 10^−6 | NA |
| F7 | Mean | 3.83 × 10^−3 | 1.68 × 10^−3 | 5.60 × 10^−3 | 1.06 × 10^−4 | 1.48 × 10^3 | 1.33 × 10^2 | 1.99 × 10^−3 | 1.67 × 10^−3 | 4.68 × 10^−5 |
| | SD | 4.59 × 10^−3 | 1.65 × 10^−3 | 2.22 × 10^−3 | 9.41 × 10^−5 | 2.27 × 10^2 | 84.10 | 2.34 × 10^−3 | 7.41 × 10^−4 | 6.72 × 10^−5 |
| | p-values | 3.52 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.11 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | NA |
| F8 | Mean | 1.10 × 10^7 | −2.77 × 10^4 | −1.71 × 10^4 | −4.53 × 10^3 | −1.15 × 10^4 | −7.06 × 10^3 | −3.69 × 10^4 | −2.38 × 10^4 | −4.16 × 10^4 |
| | SD | 3.82 × 10^7 | 6.83 × 10^3 | 1.54 × 10^3 | 6.40 × 10^2 | 4.25 × 10^3 | 5.02 × 10^2 | 4.75 × 10^3 | 1.27 × 10^3 | 4.26 × 10^2 |
| | p-values | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.36 × 10^−5 | 1.73 × 10^−6 | NA |
| F9 | Mean | 9.36 × 10^−2 | 0.00 | 7.22 | 0.00 | 5.38 × 10^2 | 2.62 × 10^2 | 0.00 | 0.00 | 0.00 |
| | SD | 0.10 | 0.00 | 5.53 | 0.00 | 67.00 | 1.23 × 10^2 | 0.00 | 0.00 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.00 | 1.00 | NA |
| F10 | Mean | 4.26 | 8.88 × 10^−16 | 2.44 × 10^−8 | 8.88 × 10^−16 | 3.44 | 18.10 | 4.44 × 10^−15 | 9.23 × 10^−11 | 8.88 × 10^−16 |
| | SD | 4.87 | 0.00 | 8.64 × 10^−9 | 0.00 | 0.23 | 4.81 | 2.47 × 10^−15 | 4.32 × 10^−11 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.14 × 10^−5 | 1.73 × 10^−6 | NA |
| F11 | Mean | 4.57 × 10^8 | 0.00 | 3.63 × 10^−3 | 0.00 | 0.29 | 73.30 | 0.00 | 0.00 | 0.00 |
| | SD | 1.82 × 10^9 | 0.00 | 8.27 × 10^−3 | 0.00 | 6.33 × 10^−2 | 52.90 | 0.00 | 0.00 | 0.00 |
| | p-values | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.00 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.00 | 1.00 | NA |
| F12 | Mean | 7.58 | 2.50 × 10^−2 | 0.25 | 1.11 | 3.39 | 2.98 × 10^8 | 2.82 × 10^−2 | 6.08 × 10^−2 | 1.50 × 10^−2 |
| | SD | 9.82 | 9.14 × 10^−3 | 7.61 × 10^−2 | 9.29 × 10^−2 | 1.64 | 1.23 × 10^8 | 1.58 × 10^−2 | 1.39 × 10^−2 | 7.37 × 10^−3 |
| | p-values | 2.13 × 10^−6 | 3.88 × 10^−4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.92 × 10^−5 | 1.73 × 10^−6 | NA |
| F13 | Mean | 5.01 | 7.50 | 6.39 | 9.91 | 49.10 | 4.52 × 10^8 | 1.94 | 6.78 | 0.72 |
| | SD | 4.99 | 1.42 | 0.48 | 1.18 × 10^−3 | 23.40 | 2.24 × 10^8 | 0.80 | 2.49 | 0.27 |
| | p-values | 4.07 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | NA |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ma, Y.; Wang, X.; Meng, W. A Reinforced Whale Optimization Algorithm for Solving Mathematical Optimization Problems. Biomimetics 2024, 9, 576. https://doi.org/10.3390/biomimetics9090576