Abstract
As an efficient meta-heuristic algorithm, the whale optimization algorithm (WOA) has been extensively applied to practical problems. However, WOA still suffers from slow convergence and difficulty escaping from local extreme points, especially on large scale optimization problems. To overcome these defects, a modified whale optimization algorithm integrated with a crisscross optimization algorithm (MWOA-CS) is proposed. In MWOA-CS, each dimension of the optimization problem updates its position by randomly performing either the improved WOA or the crisscross optimization algorithm throughout the iterative process. The improved WOA adopts a new nonlinear convergence factor and a nonlinear inertia weight to tune the balance between exploitation and exploration. To analyze the performance of MWOA-CS, a series of numerical experiments were performed on 30 benchmark test functions with dimensions ranging from 300 to 1000. The experimental results revealed that the presented MWOA-CS provides better convergence speed and accuracy, and displays significantly more effective and robust performance than the original WOA and other state-of-the-art meta-heuristic algorithms for solving large scale global optimization problems.
1. Introduction
In the information era with big data, large scale global optimization problems with high dimension are widespread in diverse practical areas, such as finance and economics [1], parameter estimation [2], machine learning [3] and image processing [4]. These problems cannot be effectively addressed by traditional optimization algorithms [5,6,7,8,9] because of their high dimensionality, nonlinearity, non-differentiability and, in some cases, lack of an explicit expression. In this context, meta-heuristic algorithms, as an intelligent computing method, have attracted strong attention from scholars and have been favorably received by engineers.
Different from traditional optimization algorithms based on gradient information, meta-heuristic algorithms adopt a "trial and error" mechanism [10] to find the best possible solution in an iterative procedure. This mechanism, which is convenient to implement, can discover a high-quality solution at low computational cost for optimization problems. Therefore, meta-heuristic algorithms can be regarded as a better way to solve large scale global optimization problems. To date, driven by numerous and diverse practical optimization problems, many meta-heuristic algorithms have been investigated, such as particle swarm optimization (PSO) [11], the crisscross optimization algorithm (CSO) [12], the honey badger algorithm (HBA) [13], ant colony optimization (ACO) [14], artificial bee colony (ABC) [15], fish swarm optimization (FSO) [16], the grey wolf optimizer (GWO) [17] and the tunicate swarm algorithm (TSA) [18].
The complexity of large scale optimization problems increases exponentially as the number of dimensions increases, leading to the "curse of dimensionality" [19]. Although meta-heuristic algorithms exhibit good performance on some practical problems, they still suffer from slow convergence and low precision, especially on large scale optimization problems. How meta-heuristic algorithms can maintain a fast convergence rate while avoiding premature convergence therefore remains a significant research topic. To overcome these drawbacks, much research has been devoted to improving the performance of standard algorithms by modifying parameters or introducing various mechanisms, and remarkable results have been achieved.
The whale optimization algorithm [20] has been shown to exhibit good optimization capability in the engineering and industrial fields [21], but it still has disadvantages such as an imbalance between exploration and exploitation, premature convergence and low accuracy. Hitherto, numerous improved versions of WOA have been presented. In [22,23,24], the strategy of constructing nonlinear parameters was introduced into WOA to improve its performance by better tuning the exploitation and exploration capabilities. To enhance the optimization ability of WOA for large scale optimization problems, Sun et al. [25] employed a nonlinear convergence factor based on the cosine function to balance the global and local search processes, and improved the ability to avoid local minima through a Lévy flight strategy. Similarly, Abdel-Basset et al. [26] adopted Lévy flight and logistic chaos mapping to construct an improved WOA. Jin et al. [27] constructed a modified whale optimization algorithm containing three aspects: firstly, different evaluation and random operators were utilized to strengthen the capability of exploration; secondly, a Lévy flight-based step factor was introduced to enhance the capability of exploitation; and thirdly, an adaptive weight was constructed to tune the progress of exploration and exploitation. Saafan et al. [28] presented a hybrid algorithm integrating an improved whale optimization algorithm with the salp swarm algorithm: an improved WOA based on an exponential convergence factor was designed first, and then, according to a condition given in advance, the hybrid algorithm selected the improved WOA or the salp swarm algorithm to perform the optimization task. Elaziz et al. [29] constructed a hyper-heuristic DEWOA combining WOA and DE, where the DE algorithm was adopted to automatically select one chaotic mapping and one opposition-based learning scheme, in order to obtain a high-quality initial population and enhance the abilities of exploration and of escaping from local solutions. Chakraborty et al. [30] constructed an improved whale optimization algorithm by modifying parameters to solve high dimension problems. Nadimi-Shahraki et al. [31] combined the whale optimization algorithm and the moth flame optimization algorithm into an effective hybrid algorithm, WMFO, to solve optimal power flow problems of complex structure. Nadimi-Shahraki et al. [32] adopted Lévy motion and Brownian motion to achieve a better balance between the exploration and exploitation of the whale optimization algorithm. Liu et al. [33] proposed an enhanced global exploration whale optimization algorithm (EGE-WOA) for complex multi-dimensional problems, which employs Lévy flights to improve its global exploration and adopts new convergent dual adaptive weights to refine the convergence procedure. On the whole, these improved variants of WOA have displayed better performance in dealing with optimization problems. However, in line with the no-free-lunch theorem for optimization [34], they still suffer to some extent from entrapment in local optima and premature convergence when dealing with large scale optimization problems of higher dimension.
In order to enhance the optimization ability of WOA for large scale optimization problems, this paper presents a new modified WOA, namely, MWOA-CS. MWOA-CS is composed of a modified WOA and the crisscross optimization algorithm, and contains the following improvements: (1) a new nonlinear convergence factor is designed to improve the balance between the exploration and exploitation of WOA; (2) a cosine-based inertia weight is introduced into WOA to further refine the optimization procedure; (3) in each iteration, each dimension randomly performs the modified whale optimization algorithm or the crisscross optimization algorithm. This update mechanism randomly divides the population dimensions into two parts, exploiting the exploration capability of WOA and the exploitation ability of CSO to reduce search blind spots and avoid premature convergence in some dimensions; (4) a large number of numerical experiments are carried out to verify that the proposed MWOA-CS displays better performance on large scale global optimization problems.
The rest of this paper is organized as follows. Section 2 presents a brief conception of WOA, followed by an introduction of the detailed improvements and contributions of the proposed MWOA-CS in Section 3. In Section 4, a discussion and analysis of numerical experiments are presented. Finally, Section 5 presents the conclusions of this paper.
2. Standard Whale Optimization Algorithm
As an intelligent computation algorithm for solving optimization problems, WOA is inspired by the predation behavior of humpback whales. Its optimization process consists of three strategies: encircling prey, the bubble-net attacking method and search for prey.
2.1. Encircling Prey
The positions of the whale population, regarded as a set of candidate solutions, can be represented as:

$$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^D \\ x_2^1 & x_2^2 & \cdots & x_2^D \\ \vdots & \vdots & \ddots & \vdots \\ x_N^1 & x_N^2 & \cdots & x_N^D \end{bmatrix} \quad (1)$$

where $N$ indicates the size of the whale population and $D$ is the dimension of the optimization problem. $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$ represents the position vector of the $i$th individual whale and encodes a feasible solution of the given optimization problem. Since the optimal information is unknown beforehand, the WOA algorithm regards the current best solution as the prey, or the approximate optimum, in the iterative process. Once the optimal whale individual (the current best solution) is obtained, the others in the population update their positions towards the prey along with the iterations. The corresponding mathematical formulas are defined as follows:

$$\vec{D} = \left| \vec{C} \odot X^*(t) - X(t) \right| \quad (2)$$

$$X(t+1) = X^*(t) - \vec{A} \odot \vec{D} \quad (3)$$

where $t$ is the current iteration number, $X(t)$ represents the whale population at iteration $t$, $X^*(t)$ indicates the optimal whale (the best solution) achieved so far, which is updated whenever a better position is found in an iteration, $\odot$ is element-wise multiplication, and $\vec{D}$ stands for a step size used to renew the location of the whale population. $\vec{A}$ and $\vec{C}$, as coefficient vectors, are calculated by the following expressions:

$$\vec{A} = 2a \cdot \vec{r}_1 - a \quad (4)$$

$$\vec{C} = 2\vec{r}_2 \quad (5)$$

$$a = 2 - \frac{2t}{T_{\max}} \quad (6)$$

where $\vec{r}_1$ and $\vec{r}_2$ are random vectors uniformly distributed in the interval $[0, 1]$, and $a$ is a convergence factor, linearly decreasing from 2 to 0, that tunes the relationship between exploitation and exploration, where $T_{\max}$ indicates the maximum number of iterations.
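To make these operators concrete, the following minimal NumPy sketch (function and variable names are ours, chosen for illustration rather than taken from the paper) applies Equations (2)–(6) to an $N \times D$ population matrix:

```python
import numpy as np

def encircling_prey(X, X_best, t, T_max, rng):
    """One encircling-prey move (Eqs. (2)-(6)) applied to the whole population."""
    N, D = X.shape
    a = 2.0 - 2.0 * t / T_max                # linear convergence factor, Eq. (6)
    A = 2.0 * a * rng.random((N, D)) - a     # coefficient vector A, Eq. (4)
    C = 2.0 * rng.random((N, D))             # coefficient vector C, Eq. (5)
    step = np.abs(C * X_best - X)            # step size D, Eq. (2)
    return X_best - A * step                 # new positions, Eq. (3)

# usage sketch:
# rng = np.random.default_rng(0)
# X = rng.uniform(-100.0, 100.0, (30, 300))
# X_next = encircling_prey(X, X[0], t=10, T_max=500, rng=rng)
```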
2.2. Bubble-Net Attacking Method
The bubble-net attacking method of WOA performs a fine search near the current best solution in the feasible domain, similar to the bubble-net feeding behavior of humpback whales. Its mathematical model is defined as:

$$X(t+1) = \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t) \quad (7)$$

where $\vec{D}' = \left| X^*(t) - X(t) \right|$ represents the distance between the current whale individual and the best solution, $l$ is a random value in $[-1, 1]$ and $b$ is a constant defining the shape of the logarithmic spiral.
When $|\vec{A}| < 1$, the current whale population randomly executes the bubble-net attacking method or encircling prey to perform a local search near the prey (the current best solution), which constitutes the exploitation phase of the WOA algorithm. Accordingly, the location renewal mechanism of the current whale population can be described as:

$$X(t+1) = \begin{cases} X^*(t) - \vec{A} \odot \vec{D}, & p < 0.5 \\ \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t), & p \geq 0.5 \end{cases} \quad (8)$$

where $p$ is a random value uniformly distributed in $[0, 1]$.
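A per-whale sketch of this exploitation phase, under the same naming assumptions as above, might look as follows:

```python
import numpy as np

def exploitation_step(x, x_best, a, b, rng):
    """Exploitation move of Eq. (8): shrinking encircling for p < 0.5,
    logarithmic spiral (Eq. (7)) otherwise, for a single whale x."""
    p = rng.random()
    if p < 0.5:
        A = 2.0 * a * rng.random(x.size) - a
        C = 2.0 * rng.random(x.size)
        return x_best - A * np.abs(C * x_best - x)
    l = rng.uniform(-1.0, 1.0)               # random l in [-1, 1]
    d = np.abs(x_best - x)                   # distance to the prey, D'
    return d * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best
```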
2.3. Search for Prey
Search for prey is an important part of the WOA algorithm for achieving the global search. When $|\vec{A}| \geq 1$, this random search strategy is performed: the current whale population randomly chooses an individual as a reference (the prey) to update its position. This strategy attempts to find a better solution in the whole search space, representing the exploration (global search) ability of WOA. Its mathematical model is similar to that of encircling prey, but the reference prey differs. The formulas are written as:

$$\vec{D} = \left| \vec{C} \odot X_{rand}(t) - X(t) \right|, \qquad X(t+1) = X_{rand}(t) - \vec{A} \odot \vec{D} \quad (9)$$

where $X_{rand}(t)$ is a randomly chosen whale from the current population. The meanings of the other symbols are the same as in Section 2.1. Therefore, the optimization process of WOA is divided by $|\vec{A}|$ into an exploitation (local search) phase and an exploration (global search) phase: if $|\vec{A}| \geq 1$, WOA conducts the exploration phase; otherwise, it performs the exploitation stage. The flowchart of WOA is described in Figure 1.
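The exploration move can be sketched in the same style; the reference whale `X_rand` is drawn uniformly from the current population:

```python
import numpy as np

def search_for_prey(X, i, a, rng):
    """Exploration move of Eq. (9): whale i follows a randomly chosen
    whale X_rand instead of the current best."""
    N, D = X.shape
    X_rand = X[rng.integers(N)]              # random reference whale
    A = 2.0 * a * rng.random(D) - a
    C = 2.0 * rng.random(D)
    return X_rand - A * np.abs(C * X_rand - X[i])
```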
Figure 1.
The flow diagram of WOA.
3. Some Improvements of WOA
Although WOA outperforms some meta-heuristic algorithms, such as PSO, ABC and DE [21], it still suffers from low precision and premature convergence, especially when solving large scale optimization problems. In order to overcome these shortcomings, a new improved WOA is proposed in this section by constructing nonlinear parameters and introducing another search method. The following details the improvements that constitute MWOA-CS.
3.1. Nonlinear Convergence Factor
In the basic WOA, the convergence factor $a$ tunes the global and local search phases by controlling the value of $\vec{A}$. According to Equations (3) and (4), the larger the value of $|\vec{A}|$ is, the stronger the global search capability is; conversely, the smaller the value of $|\vec{A}|$ is, the stronger the local search ability is. When dealing with optimization problems, the ideal optimization process has a strong global search ability with a fast convergence speed early on, and meanwhile maintains fast convergence with high accuracy later. Obviously, the linear decrement strategy of $a$ does not meet this expectation. Based on the characteristics of the power function, a nonlinear convergence factor is proposed, and its update formula is as follows:

$$a(t) = 2 - 2\left(\frac{t}{T_{\max}}\right)^k \quad (10)$$

where $k > 0$ is a nonlinear regulation coefficient depicting the curve smoothness of parameter $a$. Figure 2 illustrates the change curves of $a$ under different values of $k$. A large number of numerical results on test functions show that, compared with the linearly decreasing strategy, the nonlinear strategy is more beneficial in enhancing the optimization ability of the algorithm.
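Using the power-function form reconstructed in Equation (10), a small helper makes the role of $k$ visible; the $k$ values in the comment are illustrative, not the tuned settings of the paper:

```python
def nonlinear_a(t, T_max, k):
    """Power-function convergence factor of Eq. (10); k > 1 keeps a large
    (exploration-favoring) for longer, k < 1 shrinks it sooner."""
    return 2.0 - 2.0 * (t / T_max) ** k

# e.g. to reproduce curves in the spirit of Figure 2:
# for k in (0.5, 1.0, 2.0, 5.0):          # illustrative k values
#     print([round(nonlinear_a(t, 500, k), 3) for t in (0, 250, 499)])
```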
Figure 2.
Change curves of the nonlinear parameter $a$ with different values of $k$.
3.2. Nonlinear Inertia Weight
As a significant parameter in PSO, the inertia weight represents the influence of the current optimal solution on the population location update. A larger inertia weight implies a larger renewal step and enhances the ability to escape from local solutions, which is conducive to global exploration; meanwhile, a smaller inertia weight enables a more fine-grained local search that improves convergence precision. Hence, an intuitive expectation is that a weight value decreasing along the iterative process better balances the exploration and exploitation abilities. However, in the standard WOA, the inertia weight is fixed at 1 during the whole optimization procedure. Although this setting preserves the baseline performance of the algorithm, it may not be optimal. Constructing an appropriate dynamic inertia weight to improve the performance of WOA is therefore of great significance for solving optimization problems effectively. In this paper, a nonlinear inertia weight function is proposed:

$$\omega(t) = \cos\left( \frac{2\pi t}{\mu T_{\max}} \right) \quad (11)$$
Obviously, $\omega(t)$ is a cosine function with period $\mu T_{\max}$. The parameter $\mu$ controls the change period of $\omega$. Figure 3 illustrates the variation curves of $\omega$ under different $\mu$ during the whole optimization procedure. The value of $\mu$ was determined through a large number of numerical experiments, taking different values for unimodal and multimodal functions. According to Figure 3, the value of $\omega$ first decreases and then increases over the optimization process, which is not in line with the intuitive expectation above and offers guidance for the construction of inertia weights.
Figure 3.
Nonlinear inertia weight $\omega$ with diverse values of $\mu$.
The new update mechanism of the modified WOA can be expressed as follows:

$$X(t+1) = \begin{cases} \omega(t) \cdot X^*(t) - \vec{A} \odot \vec{D}, & p < 0.5 \\ \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \omega(t) \cdot X^*(t), & p \geq 0.5 \end{cases} \quad (12)$$

$$X(t+1) = \omega(t) \cdot X_{rand}(t) - \vec{A} \odot \vec{D} \quad (13)$$
The presented cosine-based inertia weight strategy enables WOA to dynamically adjust the inertia weight with respect to the iteration number (see Figure 3). This strategy lets the optimal whale position exert a varying influence on the other individuals as they renew their locations during the iterative procedure, and further tunes the exploration and exploitation abilities of the standard WOA algorithm.
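A sketch of the weighted exploitation move, assuming the cosine weight of Equation (11) and the form of Equation (12) as reconstructed above:

```python
import numpy as np

def weighted_update(x, x_best, t, T_max, mu, a, b, rng):
    """Modified exploitation move of Eq. (12): the cosine inertia weight
    omega(t) (Eq. (11), form assumed) scales the pull of x_best."""
    w = np.cos(2.0 * np.pi * t / (mu * T_max))   # Eq. (11)
    p = rng.random()
    if p < 0.5:
        A = 2.0 * a * rng.random(x.size) - a
        C = 2.0 * rng.random(x.size)
        return w * x_best - A * np.abs(C * x_best - x)
    l = rng.uniform(-1.0, 1.0)
    return np.abs(x_best - x) * np.exp(b * l) * np.cos(2.0 * np.pi * l) + w * x_best
```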
3.3. Proposed MWOA-CS
WOA exhibits a poor convergence rate and accuracy when dealing with large scale global optimization problems. This may be because, given the complexity of such problems, the update mechanism of WOA cannot reach certain search blind spots, and some dimensions drop into local optima. Meanwhile, we observed that the whale optimization algorithm has good exploration ability. The crisscross optimization algorithm (CSO) [12] combines a horizontal crossover operator with a vertical crossover operator, and achieves good accuracy when dealing with low dimensional problems. Inspired by this, to enhance the optimization ability of WOA, CSO is embedded into the improved WOA to construct a hybrid algorithm (MWOA-CS) in this paper. In MWOA-CS, each dimension randomly performs MWOA or CSO within the feasible region of the optimization problem, which enhances the exploration ability of the algorithm. On the whole, according to the diversity of the current whale population, the problem dimensions are randomly divided into two parts: one part executes MWOA, and the other performs CSO.
The variation of the population during the optimization process can be defined as [35]:

$$div_j = \frac{1}{N} \sum_{i=1}^{N} \left| x_i^j - \bar{x}^j \right| \quad (14)$$

where $\bar{x}^j$ is the $j$th value of the average solution $\bar{X}$, and it is computed as follows:

$$\bar{x}^j = \frac{1}{N} \sum_{i=1}^{N} x_i^j \quad (15)$$

and then, $div_j$ is normalized using the logistic function ($\delta^j = 1/(1 + e^{-div_j})$).
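This diversity measure is a one-liner per iteration; the following sketch returns the length-$D$ vector $\delta$ used for the dimension split:

```python
import numpy as np

def dimension_diversity(X):
    """Per-dimension population variation (Eqs. (14)-(15)) squashed into
    (0, 1) by the logistic function; returns a length-D vector delta."""
    x_bar = X.mean(axis=0)                       # average solution, Eq. (15)
    div = np.abs(X - x_bar).mean(axis=0)         # mean absolute deviation, Eq. (14)
    return 1.0 / (1.0 + np.exp(-div))            # logistic normalization
```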
According to the value of $\delta^j$, the dimensions of the current whale population are randomly divided into two sub populations $X_{D_1}$ and $X_{D_2}$, namely, $D_1 \cup D_2 = \{1, 2, \ldots, D\}$. Then $X_{D_1}$, with the dimensions in $D_1$, performs the modified WOA, and $X_{D_2}$, with the dimensions in $D_2$, is updated by the CSO. The specific update formulas of CSO are as follows:
Horizontal crossover:

$$MS_{hc}^{i,j} = r_1 \cdot x_i^j + (1 - r_1) \cdot x_k^j + c_1 \cdot \left( x_i^j - x_k^j \right) \quad (16)$$

$$MS_{hc}^{k,j} = r_2 \cdot x_k^j + (1 - r_2) \cdot x_i^j + c_2 \cdot \left( x_k^j - x_i^j \right) \quad (17)$$

where $r_1$ and $r_2$ are random values uniformly distributed in $[0, 1]$, and $c_1, c_2 \in [-1, 1]$. $MS_{hc}^{i,j}$ and $MS_{hc}^{k,j}$ are candidate solutions that are the offspring of $x_i^j$ and $x_k^j$, respectively.
Vertical crossover:

$$MS_{vc}^{i,j_1} = r \cdot x_i^{j_1} + (1 - r) \cdot x_i^{j_2} \quad (18)$$

where $r \in [0, 1]$ represents a random number. That is, the $j_1$th and $j_2$th dimensions of the individual $X_i$ are chosen to conduct the vertical crossover operation and yield an offspring $MS_{vc}^{i,j_1}$. It is noted that $MS_{hc}^{i,j}$, $MS_{hc}^{k,j}$ and $MS_{vc}^{i,j_1}$ are compared with their parents, and the individual with better fitness is retained to enter the next iteration.
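Both crossover operators are cheap, element-wise recombinations; a minimal sketch follows (the greedy parent-offspring selection is omitted for brevity):

```python
import numpy as np

def horizontal_crossover(x_i, x_k, rng):
    """Eqs. (16)-(17): blend two parent whales and add an expansion term
    that can probe slightly outside the segment between them."""
    D = x_i.size
    r1, r2 = rng.random(D), rng.random(D)
    c1, c2 = rng.uniform(-1.0, 1.0, D), rng.uniform(-1.0, 1.0, D)
    ms_i = r1 * x_i + (1 - r1) * x_k + c1 * (x_i - x_k)
    ms_k = r2 * x_k + (1 - r2) * x_i + c2 * (x_k - x_i)
    return ms_i, ms_k

def vertical_crossover(x, j1, j2, rng):
    """Eq. (18): cross two dimensions of one individual, which can rescue
    a dimension stuck in a local optimum without disturbing the others."""
    r = rng.random()
    return r * x[j1] + (1 - r) * x[j2]
```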
3.4. The Pseudo-Code of MWOA-CS
The preceding subsections detail the strategies used to improve the basic WOA and enhance its performance. The corresponding pseudo-code of the modified algorithm (MWOA-CS) is described in Algorithm 1.
| Algorithm 1: The description of MWOA-CS |
| Input: Population size $N$, maximum iteration $T_{\max}$, problem dimension $D$ |
| 1: Generate the whale population $X$ |
| 2: Evaluate the fitness of each individual and obtain the best individual $X^*$ |
| 3: While $t < T_{\max}$ do |
| 4: Calculate $a$ and $\omega$ by Equations (10) and (11), respectively |
| 5: For $j = 1$ to $D$ do |
| 6: Compute $\delta^j$ of the current population based on Equation (14) |
| 7: Assign dimension $j$ to $D_1$ or $D_2$ according to $\delta^j$; update $\vec{A}$, $\vec{C}$, $l$ and generate a random number $p$ |
| 8: For $i = 1$ to $N$ do |
| 9: If $p < 0.5$ then |
| 10: If $|\vec{A}| < 1$ then |
| 11: Update the position of $X_i$ by Equation (12) |
| 12: Else |
| 13: Select one random individual $X_{rand}$ |
| 14: Update the position of $X_i$ by Equation (13) |
| 15: EndIf |
| 16: Else |
| 17: Update the position of $X_i$ by Equation (12) |
| 18: EndIf |
| 19: EndFor |
| 20: EndFor |
| 21: If $D_2 \neq \emptyset$ then |
| 22: Update the positions of the sub population $X_{D_2}$ by Equations (16)–(18) |
| 23: EndIf |
| 24: Evaluate the fitness of each individual and update $X^*$ |
| 25: $t = t + 1$ |
| 26: End While |
| Output: The optimal solution $X^*$ |
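For readers who prefer code to pseudo-code, the following condensed Python sketch follows the structure of Algorithm 1. The parameter defaults and the exact forms of $a(t)$ and $\omega(t)$ are illustrative assumptions, the scalar test on $|\vec{A}|$ is a simplification, and the horizontal crossover stands in for the full CSO step on the randomly selected dimensions:

```python
import numpy as np

def mwoa_cs(f, lb, ub, N=30, D=300, T_max=500, k=2.0, mu=1.5, b=1.0, seed=0):
    """Condensed, runnable sketch of Algorithm 1 (MWOA-CS), not the paper's
    reference implementation; all names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, D))
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    for t in range(T_max):
        a = 2.0 - 2.0 * (t / T_max) ** k                 # Eq. (10), assumed form
        w = np.cos(2.0 * np.pi * t / (mu * T_max))       # Eq. (11), assumed form
        delta = 1.0 / (1.0 + np.exp(-np.abs(X - X.mean(0)).mean(0)))  # Eqs. (14)-(15)
        cs_dims = rng.random(D) < delta                  # dimensions handed to CSO
        X_new = np.empty_like(X)
        for i in range(N):
            p, l = rng.random(), rng.uniform(-1.0, 1.0)
            A = 2.0 * a * rng.random(D) - a
            C = 2.0 * rng.random(D)
            # exploration (Eq. (13)) tracks a random whale; exploitation (Eq. (12)) the best
            ref = X[rng.integers(N)] if (p < 0.5 and np.abs(A).mean() >= 1.0) else best
            enc = w * ref - A * np.abs(C * ref - X[i])
            spi = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2.0 * np.pi * l) + w * best
            move = enc if p < 0.5 else spi
            mate = X[rng.integers(N)]                    # horizontal crossover, Eq. (16)
            r = rng.random(D)
            c = rng.uniform(-1.0, 1.0, D)
            hc = r * X[i] + (1 - r) * mate + c * (X[i] - mate)
            X_new[i] = np.where(cs_dims, hc, move)       # per-dimension split
        X_new = np.clip(X_new, lb, ub)
        new_fit = np.array([f(x) for x in X_new])
        improved = new_fit < fit                         # greedy retention, as in CSO
        X[improved], fit[improved] = X_new[improved], new_fit[improved]
        best = X[fit.argmin()].copy()
    return best, fit.min()

# best, value = mwoa_cs(lambda x: float(np.sum(x**2)), -100.0, 100.0, D=300)
```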
4. Experimental Results and Analysis
This section presents numerical experiments to verify and discuss the optimization ability of the proposed MWOA-CS on 30 large scale global optimization problems, as listed in Table 1. MWOA-CS is compared with five other meta-heuristic algorithms in terms of convergence speed, precision and stability. The comparison algorithms selected in this paper are ESPSO [11], HBA [13], CSO [12], GWO [36] and WOA [20]. All experiments were programmed in Matlab and executed on a computer with an Intel Core processor.
Table 1.
Description of benchmark test functions.
4.1. Benchmark Functions and Experimental Settings
According to the number of local optimal solutions, the benchmark functions used to verify the performance of MWOA-CS are divided into two categories: unimodal functions (UM) and multimodal functions (MM). Functions with only one global minimum are the unimodal functions, which are employed to test the exploitation ability of the studied meta-heuristic algorithms. Functions with multiple local optima are the multimodal functions, which are utilized to measure the exploration ability of a meta-heuristic algorithm. The dimensions of all test functions are taken as 300, 500 and 1000.
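For reference, two classic members of these families are the unimodal Sphere function and the multimodal Rastrigin function (shown only as illustrations; Table 1 lists the actual suite):

```python
import numpy as np

def sphere(x):
    """Unimodal (UM): a single global minimum f(0) = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal (MM): a grid of local minima around the global one at 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```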
For convenience, the parameter settings of the selected comparison algorithms are shown in Table 2. Meanwhile, since the algorithms contain random factors, each algorithm solves each test function 30 times independently to ensure the reliability of the experimental results.
Table 2.
Parameter settings.
4.2. Comparison and Analysis of Statistical Results
In the numerical experiments, the dimension of the benchmark functions is set to 300, 500 and 1000, respectively. The experimental results for the different dimensions are presented in Table 3, Table 4 and Table 5, where the better results are highlighted in bold. The mean and standard deviation (std) are significant indicators for measuring the performance of a meta-heuristic algorithm. Based on Table 3, Table 4 and Table 5, the following conclusions are drawn:
Table 3.
Comparison results of algorithms for test problems with dim = 300.
Table 4.
Comparison results of algorithms for test problems with dim = 500.
Table 5.
Comparison results of algorithms for test problems with dim = 1000.
(1) For the unimodal functions, the proposed MWOA-CS exhibits significantly better exploitation performance than the five other algorithms. Moreover, as the dimension of the benchmark functions increases, the performance of MWOA-CS is not affected, while that of the other comparison algorithms deteriorates in some cases. For two of the unimodal functions, although HBA obtains the same std as MWOA-CS, its solution accuracy is still inferior to that of MWOA-CS.
(2) For the multimodal functions, MWOA-CS displays good exploration ability on all but two of them. On some functions, MWOA-CS is the most efficient method compared with the others. On others, both MWOA-CS and HBA reach the global solution, and WOA is the third most effective algorithm; on a further group, MWOA-CS, WOA and HBA show the best optimization performance, followed by CSO. On one function, MWOA-CS shows the best performance with respect to accuracy, but its standard deviation is inferior to those of ESPSO, HBA and WOA, while stronger than those of CSO and GWO. On the remaining two functions, CSO is the most competitive method, with MWOA-CS second best, followed by WOA.
Moreover, the Wilcoxon rank sum test [37] was adopted to statistically evaluate the performance of MWOA-CS. The results for all test functions with dimensions ranging from 300 to 1000 are shown in Table 6. The $p$-value obtained from the Wilcoxon rank sum test indicates the difference between MWOA-CS and each comparison method, where the significance level is 0.05: a $p$-value below 0.05 with a better mean indicates that MWOA-CS has a statistically significant advantage over the comparison algorithm; a $p$-value above 0.05 indicates that the optimal solutions of the two algorithms are similar; and a $p$-value below 0.05 with a worse mean indicates that the statistical performance of MWOA-CS is weaker than that of the comparison algorithm. The results illustrate that the $p$-value for most test functions is less than 0.05, indicating that MWOA-CS solves the test optimization problems more robustly and effectively. There still exist some cases where the $p$-value is larger than 0.05: between MWOA-CS and HBA on one function with a dimension of 300, and between MWOA-CS and CSO on two functions with dimensions of 300, 500 and 1000.
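Such a comparison can be reproduced with SciPy's rank-sum implementation; the arrays below are placeholder per-run best fitness values, not the paper's data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
mwoa_cs_runs = rng.normal(1.0e-8, 1.0e-9, 30)   # 30 independent runs (placeholder)
rival_runs = rng.normal(1.0e-2, 1.0e-3, 30)     # comparison algorithm (placeholder)

stat, p = ranksums(mwoa_cs_runs, rival_runs)    # two-sided Wilcoxon rank sum test
print(f"p = {p:.3e}:", "significant at 0.05" if p < 0.05 else "not significant")
```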
In brief, the results illustrate that MWOA-CS shows excellent exploitation accuracy together with the ability to jump out of local solutions, and good exploration ability to search the whole feasible region. Hence, MWOA-CS can effectively solve large scale global optimization problems.
4.3. Convergence Analysis of Comparison Algorithms
In order to further observe the differences in the optimization processes of the comparison algorithms, Figure 4 provides the convergence curves averaged over 30 runs, where the x-axis is the iteration number and the y-axis is the best fitness value obtained so far. The dimension of the test functions is set at 1000. It can be seen that MWOA-CS has outstanding advantages in terms of convergence speed and accuracy. Although the performance of MWOA-CS is weaker than that of CSO on two functions, it is still superior to the other comparison algorithms. These results mean that MWOA-CS can obtain a satisfactory solution of large scale optimization problems in less time.
Figure 4.
Convergence curves for some selected unimodal/multimodal problems with dim = 1000.
4.4. Boxplot Analysis of Comparison Algorithms
The boxplot graphs of the comparison algorithms on the benchmark functions with 1000 dimensions are portrayed in Figure 5, which further illustrate the stability of the meta-heuristic algorithms. Based on Table 5 and Table 6 and the graphs shown in Figure 5, it is easy to determine that MWOA-CS outperforms the five other algorithms in solving most test optimization problems with 1000 dimensions. On two functions, although MWOA-CS exhibits the ability to jump out of local solutions, it still shows premature convergence, and this avoidance ability remains weaker than that of CSO. Consequently, it can be concluded that MWOA-CS exhibits better efficiency and robustness than the other comparison algorithms in solving classical benchmark functions with large scale dimensions.
Figure 5.
Box-plots of some selected unimodal/multimodal test problems with dim = 1000.
Table 6.
Comparison results of the Wilcoxon rank sum test for all test problems.
5. Conclusions
This study proposed an improved WOA with good efficiency and robustness for solving large scale optimization problems. In order to enhance the optimization ability of WOA, a nonlinear convergence factor and a cosine-based inertia weight were designed to balance the exploration and exploitation abilities of WOA. Meanwhile, the update mechanism of CSO was introduced into the improved WOA to improve the avoidance of local optima. According to the current population diversity, the optimization problem dimensions were divided into two parts, one of which performed the improved WOA, with better robustness for large scale problems, while the other executed CSO, providing higher accuracy for low dimension problems. To test the performance of MWOA-CS, extensive numerical experiments were carried out on 30 benchmark functions of different dimensions, and the numerical results were compared with five other meta-heuristic algorithms. The comparison results indicated that MWOA-CS achieved higher quality solutions in fewer iterations, and exhibited a faster convergence rate and stronger stability for the majority of test functions. In addition, as the problem dimension increased, the proposed algorithm showed better robustness. MWOA-CS can be applied to large scale constrained optimization problems, multi-objective optimization and practical engineering problems in the near future.
Author Contributions
Conceptualization, G.S. and Y.S.; methodology, G.S. and Y.S.; software, G.S.; validation, G.S., Y.S. and R.Z.; formal analysis, Y.S. and R.Z.; investigation, Y.S. and R.Z.; resources, G.S., Y.S. and R.Z.; data curation, G.S., Y.S. and R.Z.; writing—original draft preparation, G.S.; writing—review and editing, G.S., Y.S. and R.Z.; visualization, G.S. and Y.S.; supervision, R.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Nature Science Foundation (NNSF) of China (Grant Nos. 12071112, 11471102, 12101195, 202300410146) and the Basic Research Projects for Key Scientific Research Projects in Henan Province (Grant No. 20ZX001).
Acknowledgments
The authors would like to thank the editors and reviewers for handling and reviewing our paper.
Conflicts of Interest
The authors declare that they have no conflict of interest.
Abbreviations
| WOA | Whale optimization algorithm |
| CSO | Crisscross optimization algorithm |
| MWOA-CS | Modified whale optimization algorithm integrated with crisscross optimization algorithm |
| ESPSO | Particle swarm optimization algorithm using eagle strategy |
| HBA | Honey badger algorithm |
| GWO | Grey wolf optimizer |
| std | Standard deviation |
| Dim | Dimension |
| UM | Unimodal function |
| MM | Multimodal function |
References
- Wang, Y.; Wang, H. Neural Network Model for Energy Low Carbon Economy and Financial Risk Based on PSO Intelligent Algorithms. J. Intell. Fuzzy Syst. 2019, 37, 6151–6163.
- Rezk, H.; Arfaoui, J.; Gomaa, M.R. Optimal Parameter Estimation of Solar PV Panel Based on Hybrid Particle Swarm and Grey Wolf Optimization Algorithms. Int. J. Interact. Multimed. Artif. Intell. 2021, 6, 145.
- Chen, H.; Zhang, Q.; Luo, J.; Xu, Y.; Zhang, X. An Enhanced Bacterial Foraging Optimization and Its Application for Training Kernel Extreme Learning Machine. Appl. Soft Comput. 2020, 86, 105884.
- Du, Y.; Yang, N. Analysis of Image Processing Algorithm Based on Bionic Intelligent Optimization. Clust. Comput. 2019, 22, 3505–3512.
- Shang, Y.; Zhang, L. A Filled Function Method for Finding a Global Minimizer on Global Integer Optimization. J. Comput. Appl. Math. 2005, 181, 200–210.
- Shang, Y.; Zhang, L. Finding Discrete Global Minima with a Filled Function for Integer Programming. Eur. J. Oper. Res. 2008, 189, 31–40.
- Shang, Y.; Pu, D.; Jiang, A. Finding Global Minimizer with One-Parameter Filled Function on Unconstrained Global Optimization. Appl. Math. Comput. 2007, 191, 176–182.
- Shang, Y.-L.; Sun, Z.-Y.; Jiang, X.-Y. Modified Filled Function Method for Global Discrete Optimization. In Proceedings of the 3rd World Congress on Global Optimization in Engineering and Science, Anhui, China, 8–12 July 2013; Gao, D., Ruan, N., Xing, W., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 57–68.
- Shang, Y.; Wang, W.; Zhang, L. Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization. Math. Probl. Eng. 2010, 2010, 602831.
- Boussaïd, I.; Lepagnot, J.; Siarry, P. A Survey on Optimization Metaheuristics. Inf. Sci. 2013, 237, 82–117.
- Yapıcı, H.; Çetinkaya, N. An Improved Particle Swarm Optimization Algorithm Using Eagle Strategy for Power Loss Minimization. Math. Probl. Eng. 2017, 2017, 1063045.
- Meng, A.; Chen, Y.; Yin, H.; Chen, S. Crisscross Optimization Algorithm and Its Application. Knowl.-Based Syst. 2014, 67, 218–229.
- Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New Metaheuristic Algorithm for Solving Optimization Problems. Math. Comput. Simul. 2022, 192, 84–110.
- Polnik, W.; Stobiecki, J.; Byrski, A.; Kisiel-Dorohinicki, M. Ant Colony Optimization–Evolutionary Hybrid Optimization with Translation of Problem Representation. Comput. Intell. 2021, 37, 891–923.
- Ghanem, W.A.H.M.; Jantan, A.; Ghaleb, S.A.A.; Nasser, A.B. An Efficient Intrusion Detection Model Based on Hybridization of Artificial Bee Colony and Dragonfly Algorithms for Training Multilayer Perceptrons. IEEE Access 2020, 8, 130452–130475.
- Albert, P.; Nanjappan, M. An Efficient Kernel FCM and Artificial Fish Swarm Optimization-Based Optimal Resource Allocation in Cloud. J. Circuits Syst. Comput. 2020, 29, 2050253.
- Mallika, C.; Selvamuthukumaran, S. A Hybrid Crow Search and Grey Wolf Optimization Technique for Enhanced Medical Data Classification in Diabetes Diagnosis System. Int. J. Comput. Intell. Syst. 2021, 14, 157.
- Rizk-Allah, R.M.; Saleh, O.; Hagag, E.A.; Mousa, A.A.A. Enhanced Tunicate Swarm Algorithm for Solving Large-Scale Nonlinear Optimization Problems. Int. J. Comput. Intell. Syst. 2021, 14, 189.
- Rahnamayan, S.; Wang, G.G. Solving Large Scale Optimization Problems by Opposition-Based Differential Evolution (ODE). WSEAS Trans. Comput. 2008, 7, 13.
- Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
- Gharehchopogh, F.S.; Gholizadeh, H. A Comprehensive Survey: Whale Optimization Algorithm and Its Applications. Swarm Evol. Comput. 2019, 48, 1–24.
- Kaur, G.; Arora, S. Chaotic Whale Optimization Algorithm. J. Comput. Des. Eng. 2018, 5, 275–284.
- Sayed, G.I.; Darwish, A.; Hassanien, A.E. A New Chaotic Whale Optimization Algorithm for Features Selection. J. Classif. 2018, 35, 300–344.
- Ding, H.; Wu, Z.; Zhao, L. Whale Optimization Algorithm Based on Nonlinear Convergence Factor and Chaotic Inertial Weight. Concurr. Comput. Pract. Exp. 2020, 32, e5949.
- Sun, Y.; Wang, X.; Chen, Y.; Liu, Z. A Modified Whale Optimization Algorithm for Large-Scale Global Optimization Problems. Expert Syst. Appl. 2018, 114, 563–577.
- Abdel-Basset, M.; Abdle-Fatah, L.; Sangaiah, A.K. An Improved Lévy Based Whale Optimization Algorithm for Bandwidth-Efficient Virtual Machine Placement in Cloud Computing Environment. Clust. Comput. 2019, 22, 8319–8334.
- Jin, Q.; Xu, Z.; Cai, W. An Improved Whale Optimization Algorithm with Random Evolution and Special Reinforcement Dual-Operation Strategy Collaboration. Symmetry 2021, 13, 238.
- Saafan, M.M.; El-Gendy, E.M. IWOSSA: An Improved Whale Optimization Salp Swarm Algorithm for Solving Optimization Problems. Expert Syst. Appl. 2021, 176, 114901.
- Elaziz, M.A.; Mirjalili, S. A Hyper-Heuristic for Improving the Initial Population of Whale Optimization Algorithm. Knowl.-Based Syst. 2019, 172, 42–63.
- Chakraborty, S.; Saha, A.K.; Chakraborty, R.; Saha, M. An Enhanced Whale Optimization Algorithm for Large Scale Optimization Problems. Knowl.-Based Syst. 2021, 233, 107543.
- Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Oliva, D. Hybridizing of Whale and Moth-Flame Optimization Algorithms to Solve Diverse Scales of Optimal Power Flow Problem. Electronics 2022, 11, 831.
- Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L.; Abd Elaziz, M.; Oliva, D. EWOA-OPF: Effective Whale Optimization Algorithm to Solve Optimal Power Flow Problem. Electronics 2021, 10, 2975.
- Liu, J.; Shi, J.; Hao, F.; Dai, M. A Novel Enhanced Global Exploration Whale Optimization Algorithm Based on Lévy Flights and Judgment Mechanism for Global Continuous Optimization Problems. Eng. Comput. 2022, 1–29.
- Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
- Bansal, J.C.; Farswan, P. A Novel Disruption in Biogeography-Based Optimization with Application to Optimal Power Flow Problem. Appl. Intell. 2017, 46, 590–615.
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
- Družeta, S.; Ivić, S. Examination of Benefits of Personal Fitness Improvement Dependent Inertia for Particle Swarm Optimization. Soft Comput. 2017, 21, 3387–3400.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).