Abstract
Intelligent optimization algorithms are crucial for solving complex engineering problems. The Parrot Optimization (PO) algorithm shows potential but has issues like local-optimum trapping and slow convergence. This study presents the Chaotic–Gaussian–Barycenter Parrot Optimization (CGBPO), a modified PO algorithm. CGBPO addresses these problems in three ways: using chaotic logistic mapping for random initialization to boost population diversity, applying Gaussian mutation to updated individual positions to avoid premature local-optimum convergence, and integrating a barycenter opposition-based learning strategy during iterations to expand the search space. Evaluated on the CEC2017 and CEC2022 benchmark suites against seven other algorithms, CGBPO outperforms them in convergence speed, solution accuracy, and stability. When applied to two practical engineering problems, CGBPO demonstrates superior adaptability and robustness. In an indoor visible light positioning simulation, CGBPO’s estimated positions are closer to the actual ones compared to PO, with the best coverage and smallest average error.
1. Introduction
With the increasing complexity of real-world problems, traditional methods have encountered numerous limitations. Against this backdrop, intelligent optimization algorithms have progressively emerged. They stem from the observation and emulation of natural phenomena and the intelligent behaviors of organisms. Early algorithms of this kind were comparatively rudimentary and mainly imitated simple natural and biological processes; examples include the Simulated Annealing (SA) algorithm [1], the Genetic Algorithm (GA) [2], the Particle Swarm Optimization (PSO) algorithm [3], and the Ant Colony Optimization (ACO) algorithm [4].
With the rapid development of science and technology, intelligent optimization algorithms have evolved and diversified into numerous types. These include Bacterial Foraging Optimization (BFO) [5], the Artificial Bee Colony (ABC) [6], Cuckoo Search (CS) [7], the Bat Algorithm (BA) [8], Moth Flame Optimization (MFO) [9], Pigeon-Inspired Optimization (PIO) [10], Spider Monkey Optimization (SMO) [11], the Seagull Optimization Algorithm (SOA) [12], the Remora Optimization Algorithm (ROA) [13], the Sooty Tern Optimization Algorithm (STOA) [14], and Grey Wolf Optimization (GWO) [15]. In recent years, new ones such as Harris Hawks Optimization (HHO) [16], the Catch Fish Optimization Algorithm (CFOA) [17], the Pelican Optimization Algorithm (POA) [18], and the Crayfish Optimization Algorithm (COA) [19] have also emerged.
Most intelligent optimization algorithms simulate the behaviors and habits of natural organisms to efficiently solve complex problems. For instance, authors in [20] proposed a Hybrid Whale Particle Swarm Optimization (HWPSO) algorithm, leveraging the “forced” whale and “capping” phenomenon. Evaluated in tasks like determining operational amplifier circuit size, minimizing micro-electro-mechanical system switch pull-in voltage, and reducing differential amplifier random offset, the HWPSO operated efficiently and improved optimal values significantly. In [21], an Improved Whale Optimization Algorithm (IWOA) was proposed, combining crossover and mutation from the Differential Evolution (DE) algorithm and introducing a cloud-adaptive inertia weight. This algorithm was applied to truss structure optimization for 52-bar planar and 25-bar spatial trusses. Authors of Ref. [22] enhanced the Seagull Optimization Algorithm with Levy flight (LSOA), achieving good results in path-planning problems. Moreover, Ref. [23] introduced a Multi-strategy Golden Jackal Optimization (MGJO) algorithm. It initialized the population via chaotic mapping, adopted a non-linear dynamic inertia weight and used Cauchy mutation to boost population diversity, effectively estimating parameters of the non-linear Muskingum model.
The Parrot Optimization (PO) [24], proposed by J. Lian et al. in 2024, is inspired by parrots’ learning behaviors. It emulates parrots’ foraging, lingering, communication, and wariness of strangers to extensively search for the optimal solution in the search space. Yet, when handling complex problems with high-dimensional, multi-modal, and non-linear objective functions, the PO algorithm’s search efficiency and solution accuracy may suffer. It often converges prematurely to local optima or fails to fully explore the search space, hindering the discovery of the global optimum. Furthermore, lacking an adaptive parameter adjustment mechanism, the PO algorithm’s parameters cannot be automatically tuned. Manual parameter adjustment for various optimization problems is thus required, decreasing the algorithm’s convergence rate.
To address PO’s drawbacks, like becoming trapped in local optima and slow convergence, we developed a multi-strategy PO algorithm named CGBPO. Firstly, chaotic logistic mapping replaces traditional random initialization. This distributes the initial population more evenly and widely, preventing individuals from clustering in specific search space areas [25]. It boosts population diversity and enhances global search ability. Secondly, Gaussian mutation is applied to updated individual positions [26]. Everyone has a chance to generate new, relatively random positions nearby, breaking local convergence and increasing diversity. Thirdly, after each iteration, we calculate the centroid of the current population and generate the corresponding opposite solution [27]. Then, we explore the search space from the direction opposite to the current individuals based on the centroid, guiding the population to more promising areas. This makes the search more directional, comprehensive, and better at handling complex high-dimensional and multimodal optimization problems.
To validate the CGBPO algorithm, 30 independent experiments were carried out in CEC2017 [28], CEC2022 [29], and two engineering problems, followed by a comparison with seven swarm intelligence algorithms. The results show that the three strategies in CGBPO effectively enhance its performance, notably addressing premature local optimization and slow convergence issues.
The paper is organized as follows: Section 2 overviews the PO algorithm. Section 3 elaborates on the proposed CGBPO with multiple strategies. Section 4 and Section 5 detail the experimental tests of CGBPO on CEC2017 and CEC2022 benchmarks and analyze the results. Section 6 presents the comparative tests of CGBPO for engineering problems. Section 7 describes the application of CGBPO in the indoor visible-light-positioning system. Finally, Section 8 summarizes the study and outlines future research directions.
2. Parrot Optimization Algorithm
The PO algorithm primarily encompasses the four behaviors detailed below.
2.1. Foraging Behavior
Through observing the location of food or their owner’s position, parrots estimate the approximate location of food and then fly toward it. Thus, the movement of parrots is modeled using the following formula:
where X_i^t stands for the current position and X_i^(t+1) for the updated position; Max_iter is the maximum number of iterations; X_mean^t represents the average position of the current population, as defined in Equation (2); Levy(dim) represents the Levy distribution defined in Equation (3), with γ set to 1.5, which serves to depict the flight of the parrots; X_best represents the current optimal position; and t represents the current iteration. The first component of the update represents movement based on the position relative to the owner, while the second represents locating the food more precisely by observing the positions of the whole population.
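The Levy-distributed flight term is only described verbally above. As an illustration, one common way to draw Levy-distributed steps with exponent γ = 1.5 is Mantegna's algorithm; the sketch below is a generic Python implementation and is not taken from the PO reference [24], whose exact formulation may differ.

```python
import numpy as np
from math import gamma

def levy_steps(dim, beta=1.5, rng=None):
    """Sample a Levy-distributed step vector via Mantegna's algorithm (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    # Scale of the numerator Gaussian, per Mantegna's formula
    sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)    # numerator sample
    v = rng.normal(0.0, 1.0, dim)        # denominator sample
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed step vector

# Example: one 10-dimensional flight step with gamma (beta) = 1.5
print(levy_steps(10))
```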
2.2. Staying Behavior
Modeling the behavior of parrots staying randomly on different parts of their owner’s body allows for the incorporation of randomness into the search process:
where the first term indicates the process of flying toward the owner, and the second symbolizes randomly stopping on a certain part of the owner's body.
2.3. Communicating Behavior
Parrots are inherently gregarious and often communicate within their groups, either by flying towards the flock or communicating while staying away from it. In the PO algorithm, it is assumed that these two behaviors have an equal probability of occurrence, and the average position of the current population is taken as the representation of the flock’s center. This behavior is modeled as follows:
where the first term represents an individual joining the parrot flock to communicate, while the second represents an individual flying away immediately after communicating. Which behavior occurs is decided by a random number generated within the interval [0, 1].
2.4. Fear of Strangers Behavior
Parrots have a natural fear of unfamiliar individuals and so will stay away from strangers and seek protection from their owners. This behavior is modeled as follows:
where the first term indicates reorienting to fly toward the owner, and the second symbolizes distancing oneself from strangers.
3. Multi-Strategy Parrot Optimization Algorithm (CGBPO)
3.1. Chaotic Logistic Map Strategy
The chaotic logistic map is defined using the simple recurrence relation depicted in Equation (7): x_(n+1) = μ·x_n·(1 − x_n).
In this context, x_n stands for the state value at the n-th iteration, while x_(n+1) represents the state value at the subsequent iteration. The parameter μ serves as a control factor, and its value generally falls within the range of 0 to 4. When the state value is confined to the interval between 0 and 1, sequences started from different points display entirely distinct changing tendencies as the number of iterations increases. Furthermore, when μ lies within the approximate interval from 3.57 to 4, the system displays chaotic characteristics; in this case, μ = 4 is selected. For different initial values, the system exhibits chaotic behavior after multiple iterations: the sequence of state values appears disorderly and fluctuates in a seemingly random manner, and even a minute difference in the initial value results in completely different trajectories for the subsequent state values as the iterations progress.
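A minimal sketch of the chaotic initialization described above, assuming the logistic sequence (with μ = 4) is iterated for a short warm-up per individual and dimension and then mapped onto the search bounds; the warm-up length and the uniform seeding of the initial state are illustrative choices, not taken from the original implementation.

```python
import numpy as np

def logistic_chaotic_init(pop_size, dim, lb, ub, mu=4.0, warmup=50, rng=None):
    """Initialize a population with logistic-map chaos: x_(n+1) = mu * x_n * (1 - x_n)."""
    rng = np.random.default_rng() if rng is None else rng
    # Seed in (0, 1), avoiding values such as 0, 0.25, 0.5, 0.75 and 1 whose orbits collapse
    x = rng.uniform(0.05, 0.95, size=(pop_size, dim))
    for _ in range(warmup):            # discard the transient part of the chaotic sequence
        x = mu * x * (1.0 - x)
    return lb + x * (ub - lb)          # map the chaotic values onto [lb, ub]

# Example: 30 parrots in 10 dimensions over [-100, 100]
pop = logistic_chaotic_init(30, 10, -100.0, 100.0)
print(pop.shape, pop.min(), pop.max())
```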
There are other common chaotic strategies like the Tent map [30] and the Sine map [31]. The Tent map shows chaotic features in a narrow range around the control parameter value of 2, so its chaotic domain is relatively narrow. The Sine map turns chaotic when the control parameter approaches 1, but the boundaries of its chaotic regime are ambiguous. Compared with them, the logistic map has outstanding randomness and ergodicity. It can evenly cover the defined interval without obvious data aggregation. It is highly sensitive to the initial value; a tiny change can lead to greatly different results after multiple iterations, a key chaotic feature. The Tent and Sine maps are also sensitive to initial conditions, but less so than the logistic map. In the early stages, differences from different initial values are not obvious.
For this reason, the logistic map is used in the optimization algorithm's initialization and perturbation. It can boost the algorithm's global search ability, avoid local optima, and enhance overall optimization performance. In the initialization of the proposed algorithm, the logistic map sets the initial positions of the individual parrots, introducing chaotic perturbations. As the iterations proceed, individual search trajectories become more varied, helping the search break away from local optima in which the traditional update mode may become stuck. This improves the algorithm's global search ability, balances exploration and exploitation, and enhances its optimization ability.
To assess the performance of the selected map strategies, the CEC2022 standard dataset is used with MATLAB(R2023a) simulation software. Three algorithms—CPO, TPO, and SPO—improving the PO algorithm with the chaotic logistic map, Tent map, and Sine map, respectively, are tested. Then, the test results are compared and analyzed. The population size is set at 30, the maximum number of iterations is 300, and each of the three algorithms runs independently 30 times for every test function.
As shown in Figure 1a, the CPO’s radar chart has the least area fluctuation. CPO ranked first for six functions and second for three functions, indicating highly stable results across multiple runs, unaffected by random factors. In Figure 1b, the ranking chart, sorted by average fitness values, shows that CPO has the lowest average fitness value. In the comprehensive test, the CPO algorithm obtained better solutions and outperformed the other two algorithms in convergence performance.
Figure 1.
Radar chart (a) and ranking chart (b) of three algorithms using map strategies.
3.2. Gaussian Mutation Strategy
Gaussian mutation randomly modifies the genetic information of individuals with random perturbation values that adhere to a Gaussian distribution (normal distribution), thereby giving rise to new individuals after the mutation process. The Gaussian mutation function is presented in Equation (8):
The Gaussian(μ, σ) function is employed to generate random numbers that follow the normal distribution. In this context, μ and σ serve as the mean and the standard deviation of the Gaussian function, respectively, as defined in Equations (9) and (10):
Typically, lb symbolizes the lower bound of the variable, while ub represents the upper bound. The average of the upper and lower bounds, μ = (lb + ub)/2, is taken as the central position of the normal distribution. This configuration ensures that the generated random mutation values are theoretically distributed in a relatively uniform manner around the value range. Given that the mutation is grounded in a normal distribution, approximately 99.7% of the mutation values fall within three standard deviations of the mean, that is, within [μ − 3σ, μ + 3σ]. With σ = (ub − lb)/6, this interval centered on the mean has a width equal to ub − lb (as 3σ = (ub − lb)/2, extending by this amount on both sides precisely covers the width of the variable range), so the mutation values lie within the bounds with high probability. This implies that the degree of dispersion of the mutation is relatively moderate: it neither causes the new solutions generated by mutation to be overly scattered and distant from the original value range, nor makes the mutation so concentrated that the ability to explore new solution spaces is lost. Instead, it represents a form of exploration that has a certain breadth yet remains controllable within the given upper- and lower-bound interval. Consequently, when the range of the upper and lower bounds (i.e., the value of ub − lb) is large, σ is also large, signifying an increase in the degree of dispersion of the mutation; the mutation operation can then explore a larger value range more comprehensively, and individuals are more likely to break free from local optimal solutions and search new areas of the solution space far from the current solution. Conversely, if the range of the upper and lower bounds is small, σ becomes smaller, the degree of dispersion of the mutation decreases, and the new solutions generated by the mutation operation are closer to the current mean position, focusing more on fine-tuning and optimizing the local area within a relatively small interval.
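A minimal sketch of this Gaussian mutation, assuming μ = (lb + ub)/2 and σ = (ub − lb)/6 as described above; the per-dimension mutation probability and the clipping to the bounds are illustrative choices rather than part of the original formulation.

```python
import numpy as np

def gaussian_mutation(x, lb, ub, p_mut=0.5, rng=None):
    """Gaussian mutation whose mean and std are derived from the variable bounds.

    Assumed from the text: mu = (lb + ub) / 2 and sigma = (ub - lb) / 6, so roughly
    99.7% of samples fall in [mu - 3*sigma, mu + 3*sigma] = [lb, ub].  The per-dimension
    mutation probability p_mut and the clipping step are illustrative choices.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = (lb + ub) / 2.0
    sigma = (ub - lb) / 6.0
    mask = rng.random(x.shape) < p_mut              # dimensions selected for mutation
    sample = rng.normal(mu, sigma, size=x.shape)    # Gaussian values centered on the domain
    x_new = np.where(mask, sample, x)
    return np.clip(x_new, lb, ub)                   # keep the mutant inside the bounds

# Example: mutate a 5-dimensional solution lying at the origin of [-10, 10]^5
print(gaussian_mutation(np.zeros(5), -10.0, 10.0))
```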
Cauchy mutation [32] and non-uniform mutation [33] are two common mutation strategies in optimization algorithms. Cauchy mutation modifies individual genes using the Cauchy distribution: it randomly samples a value from this distribution and adds it to the original gene value to obtain the mutated one. The Cauchy distribution's heavy tail means there is a high chance of obtaining a value far from the central value. Cauchy mutation is therefore mainly suited to global search, but it has low local search accuracy; its large mutation values also lead to poor stability and cause fluctuations.
The probability of non-uniform mutation declines over the iterations, aiding global exploration initially and focusing on local development later. The mutation amplitude varies dynamically, large at first and then small, and its direction is random and parameter-adjustable. However, its parameters are difficult to set and demand extensive tuning; its adaptability to different problems is limited, its effects vary, it has a relatively high computational complexity, and it may still become trapped in local optima, hindering global optimization.
In contrast, Gaussian mutation is highly stable. Based on the normal distribution, it gives the mutation process direction and concentration: most mutation values lie within a certain range, ensuring the algorithm's stable convergence. For local search, Gaussian mutation converges quickly and accurately, allowing precise adjustments to the current solution, which makes it ideal for scenarios requiring high local optimization precision.
To check the performance of the chosen mutation strategies, three algorithms, GPO, CAPO, and NPO, which enhance the PO algorithm with Gaussian mutation, Cauchy mutation, and non-uniform mutation, respectively, are used for testing. As shown in Figure 2a, the GPO's radar chart has the least area fluctuation. GPO ranked first for seven functions and second for three functions, proving that its results across multiple runs were highly stable and unaffected by random factors. Notably, GPO had the lowest average fitness value of 1.58 in Figure 2b. In the comprehensive test, GPO performed the best.
Figure 2.
Radar chart (a) and ranking chart (b) of three algorithms using mutation strategies.
3.3. Barycenter Opposition-Based Learning Strategy
In this study, barycenter opposition-based learning is adopted. In the initial iterations, as the differences between individuals in the population are relatively large, the generation of mutant parrots allows more areas to be explored. In later iterations, although the differences between individuals in the population decrease, the mutant parrots can still maintain diversity. The barycenter is defined as follows:
Suppose that x_i,j denotes the value of the i-th parrot in the j-th dimension and that the population consists of N individuals. Then, the barycenter of the parrot population in the j-th dimension is given by Equation (11), M_j = (1/N)·Σ_{i=1..N} x_i,j, and the population barycenter is M = (M_1, M_2, …, M_D).
Barycenter opposition-based mutation: suppose that X_i = (x_i,1, x_i,2, …, x_i,D) is the i-th parrot with D dimensions. If the selected mutation dimension is the j-th dimension, then the barycenter opposition-based solution corresponding to the i-th parrot is X_i', which is determined using Equation (12):
where k is a contraction factor whose value is a random number drawn from a prescribed range. During the iteration process, for each parrot in the population, one dimension is selected for mutation; the mutated solution is then compared with the position of the previous generation, and the better of the two is retained.
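A minimal sketch of the barycenter opposition-based learning step, assuming the common opposite form 2·k·M_j − x_i,j for the selected dimension and a contraction factor k drawn from (0, 1); both assumptions should be checked against Equation (12) in the original formulation.

```python
import numpy as np

def barycenter_opposition(pop, fitness, objective, lb, ub, rng=None):
    """Barycenter opposition-based learning step (illustrative sketch).

    For each individual, one dimension j is chosen and replaced by the opposite value
    2 * k * M_j - x_{i,j}, where M_j is the population barycenter in dimension j and
    k is a contraction factor (assumed here to be drawn from (0, 1)).  The better of
    the original and the opposite solution is kept, as described in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    barycenter = pop.mean(axis=0)            # M_j, Equation (11) in the text
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        j = rng.integers(dim)                # dimension selected for mutation
        k = rng.random()                     # contraction factor (assumed range (0, 1))
        candidate = pop[i].copy()
        candidate[j] = np.clip(2.0 * k * barycenter[j] - pop[i, j], lb, ub)
        f = objective(candidate)
        if f < new_fit[i]:                   # greedy selection: keep the better one
            new_pop[i], new_fit[i] = candidate, f
    return new_pop, new_fit

# Example with the sphere function
sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.uniform(-5, 5, size=(6, 4))
fit = np.array([sphere(x) for x in pop])
print(barycenter_opposition(pop, fit, sphere, -5.0, 5.0)[1])
```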
Opposition-based learning [34] and elite opposition-based learning [35] are common strategies in optimization algorithms. Opposition-based learning is simple and effective at the start for boosting population diversity, providing more search trajectories and enhancing exploration. But it has a limited way of generating opposite individuals, relying only on individual features and search space boundaries, ignoring other individuals’ distribution and relationships, which may limit its performance in complex scenarios.
Elite opposition-based learning selects elite individuals with high fitness as the core to generate opposite ones, leveraging their high-quality information to find better solutions and speed up convergence. However, focusing on elites reduces population diversity, increasing the risk of premature local optima and missing the global optimum.
In contrast, the barycenter opposition-based learning strategy uses the population's overall barycenter information and a random contraction adjustment to explore a new solution space opposite to the original individuals. It aims to find better solutions different from the current ones, enhancing population diversity and helping the algorithm escape local optima and search the entire solution space more efficiently.
To verify the performance of the chosen opposition-based learning strategies, three algorithms—BPO, OPO, and EPO, which enhance the PO algorithm with barycenter opposition-based learning, traditional opposition-based learning, and elite opposition-based learning, respectively—are employed for testing. As shown in Figure 3a, BPO’s radar chart has the least area fluctuation. BPO ranked first for nine functions and second for two functions, demonstrating that its results across multiple runs were highly stable and unaffected by random factors. Notably, BPO had the lowest average fitness value of 1.33 in Figure 3b. In the comprehensive test, BPO performed the best.
Figure 3.
Radar chart (a) and ranking chart (b) of three algorithms using opposition-based-learning strategies.
3.4. Ablation Study of CGBPO
An ablation study was carried out to clarify the contribution of each newly added strategy. Three reduced variants, namely CGBPO without chaotic mapping (CGBPO1), CGBPO without Gaussian mutation (CGBPO2), and CGBPO without barycenter opposition-based learning (CGBPO3), together with the full CGBPO algorithm, were tested using the CEC2022 standard dataset with MATLAB simulation software. The population size was 30, the maximum number of iterations was 300, and each algorithm ran independently 30 times for every test function.
In Figure 4a, CGBPO’s radar chart had the least area fluctuation, ranking first for six functions and second for four. The radar chart areas of the other three algorithms were much larger, showing the positive effects of the three new strategies on algorithm performance. Notably, in Figure 4b, CGBPO had the lowest average fitness value of 1.33 and the best performance in the comprehensive test. In the ranking chart, CGBPO1 ranked second, CGBPO2 fourth, and CGBPO3 third, indicating that Gaussian mutation contributed most to algorithm improvement.
Figure 4.
Radar chart (a) and ranking chart (b) of ablation study.
3.5. Pseudo-Code of CGBPO
The comprehensive structure of the CGBPO is presented in Figure 5 and Algorithm 1, which offer a meticulous roadmap for the whole improvement process, encompassing its iterative steps, as well as the utilized search strategies.
| Algorithm 1: Pseudo-Code of CGBPO |
| 1: Initialize the CGBPO parameters |
| 2: Initialize the solutions’ positions using the chaos strategy by Equation (7) |
| 3: For i = 1: N do |
| 4: Calculate the fitness value of all search agents |
| 5: End |
| 6: For i = 1: Max_iter do |
| 7: Find the best position and worst position: |
| 8: For j = 1: N do |
| 9: St = randi([1,4]) |
| 10: Behavior 1: Foraging behavior |
| 11: If St == 1 Then |
| 12: Update position by Equation (1) |
| 13: Behavior 2: Staying behavior |
| 14: Elseif St == 2 Then |
| 15: Update position by Equation (4) |
| 16: Behavior 3: Communicating behavior |
| 17: Elseif St == 3 Then |
| 18: Update position by Equation (5) |
| 19: Behavior 4: The fear of strangers’ behavior |
| 20: Elseif St == 4 Then |
| 21: Update position by Equation (6) |
| 22: End |
| 23: Update position using gaussian mutation by Equation (8) |
| 24: End |
| 25: Generate new solutions using the barycenter opposition-based learning: |
| 26: For i = 1: N do |
| 27: Calculate the values of the original function |
| 28: Update position using the barycenter opposition-based learning strategy by Equation (12) |
| 29: End |
| 30: Return the best solution |
| 31: End |
Figure 5.
Flowchart of CGBPO.
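To complement the pseudo-code and flowchart, the skeleton below sketches the CGBPO control flow in Python, assuming the helper functions from the earlier sketches (logistic_chaotic_init, gaussian_mutation, barycenter_opposition, and levy_steps) are in scope. The four PO behavior updates (Equations (1), (4), (5), and (6)) are replaced by simple placeholder moves because their formulas are given only in the original PO paper, and the greedy acceptance of candidates is likewise an illustrative choice.

```python
import numpy as np

def cgbpo(objective, dim, lb, ub, pop_size=30, max_iter=300, rng=None):
    """Skeleton of CGBPO following Algorithm 1 (behavior updates are placeholders)."""
    rng = np.random.default_rng() if rng is None else rng

    def behavior_update(x, best, mean_pos):
        # Placeholder for the four PO behaviors (Equations (1) and (4)-(6) in the text);
        # one of four simple illustrative moves is applied at random, as in line 9 of Algorithm 1.
        st = rng.integers(1, 5)
        step = levy_steps(dim, rng=rng)
        if st == 1:                                   # "foraging": move toward the best position
            return x + (best - x) * np.abs(step) * rng.random()
        if st == 2:                                   # "staying": small random stop near the owner
            return x + rng.normal(0.0, 1.0, dim) * 0.01 * (ub - lb)
        if st == 3:                                   # "communicating": move relative to the flock center
            return x + (mean_pos - x) * rng.random()
        return x + (best - x) * rng.random() - 0.01 * step   # "fear of strangers"

    pop = logistic_chaotic_init(pop_size, dim, lb, ub, rng=rng)   # chaotic initialization (Equation (7))
    fit = np.array([objective(x) for x in pop])
    best_idx = int(fit.argmin())
    best, best_fit = pop[best_idx].copy(), fit[best_idx]

    for _ in range(max_iter):
        mean_pos = pop.mean(axis=0)
        for i in range(pop_size):
            cand = np.clip(behavior_update(pop[i], best, mean_pos), lb, ub)
            cand = gaussian_mutation(cand, lb, ub, rng=rng)        # Gaussian mutation (Equation (8))
            f = objective(cand)
            if f < fit[i]:                                         # greedy acceptance (assumed)
                pop[i], fit[i] = cand, f
        # Barycenter opposition-based learning (Equation (12)) with greedy selection
        pop, fit = barycenter_opposition(pop, fit, objective, lb, ub, rng=rng)
        if fit.min() < best_fit:
            best_idx = int(fit.argmin())
            best, best_fit = pop[best_idx].copy(), fit[best_idx]
    return best, best_fit
```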
3.6. Comparative Analysis of the Time Complexity Between CGBPO and PO
3.6.1. Time Complexity Analysis of the PO Algorithm
The population initialization operation of PO adopts a simple random initialization method. It generates N individuals with a dimension of dim, so the time complexity is O(N × dim). When calculating the initial fitness values, the objective function is evaluated once for each of the N individuals. Assuming that the time complexity of a single objective-function evaluation is O(f) (where f depends on the complexity of the objective function), the time complexity of calculating the initial fitness values is O(N × f). The sorting operation uses the sort function, and the time complexity of common sorting algorithms is O(N log N). Therefore, the total time complexity of the initialization stage is O(N × dim + N × f + N log N).
The PO algorithm has two nested loops in each iteration. The outer loop runs Max_iter times, and the inner loop operates on the N individuals in the population. When updating individuals, calculations such as the Levy flight are involved, and their time complexity mainly depends on the dimension dim. Assuming that the time complexity of each individual update operation is O(m × dim) (where m is a constant related to the specific calculation), the time complexity of the individual updates in each iteration is O(N × m × dim). The boundary-control operation also checks and adjusts the dim dimensions of the N individuals, and its time complexity is O(N × dim). The time complexities of updating the global optimal solution and sorting the population are O(N) and O(N log N), respectively. Therefore, the time complexity of each iteration is O(N × m × dim + N × dim + N + N log N), and the time complexity of the entire iteration stage is O(Max_iter × (N × m × dim + N log N)).
Combining the initialization and iteration stages, the time complexity of PO is O(N × dim + N × f + N log N + Max_iter × (N × m × dim + N log N)). When N, dim, and Max_iter are large, by ignoring the lower-order terms, the main time complexity can be approximated as O(Max_iter × N × dim).
3.6.2. Time Complexity Analysis of the CGBPO Algorithm
The CGBPO algorithm uses chaotic initialization. This function generates chaotic values and maps them to the search space to produce N individuals with a dimension of dim, so the time complexity is O(N × dim). The subsequent operations, such as calculating fitness values and sorting, are the same as those of PO. Therefore, the total time complexity of the initialization stage is likewise O(N × dim + N × f + N log N).
During the iteration process of CGBPO, Gaussian mutation and barycenter opposition-based learning operations are added. Each mutation operation involves calculations and boundary checks for the dim dimensions of an individual. Assuming that the time complexity of each mutation operation is O(p × dim) (where p is a constant related to the mutation strategy), the time complexity of mutating N individuals is O(N × p × dim). The opposition-based learning operation also performs calculations and boundary control for the N individuals, and its time complexity is O(N × dim). Adding the original individual-update, boundary-control, global-optimal-solution-update, and population-sorting operations of PO, the time complexity of each iteration is O(N × m × dim + N × p × dim + N × dim + N + N log N), and the time complexity of the entire iteration stage is O(Max_iter × (N × (m + p) × dim + N log N)).
Combining the initialization and iteration stages, the time complexity of CGBPO is O(N × dim + N × f + N log N + Max_iter × (N × (m + p) × dim + N log N)). When N, dim, and Max_iter are large, by ignoring the lower-order terms, the main time complexity can also be approximated as O(Max_iter × N × dim).
3.6.3. Comparison of the Time Complexities of the Two Algorithms
As can be seen from the above analysis, after ignoring the lower-order terms, the main time complexities of PO and CGBPO are both approximately O(Max_iter × N × dim). This means that in large-scale problems, as N, Max_iter, and dim increase, the growth trends of the computation times of the two algorithms are basically the same.
The CGBPO algorithm adds mutation and opposition-based learning operations during the iteration process, so its time-complexity expression contains the additional terms O(Max_iter × N × p × dim) and O(Max_iter × N × dim), related to the mutation and opposition-based learning operations, respectively. In an actual run, if the mutation and opposition-based learning operations are computationally demanding, each iteration of CGBPO takes longer than that of PO. However, this extra cost may also bring a better optimization effect, helping the algorithm converge to a better solution more quickly.
3.6.4. Experimental and Comparative Analyses of the Time
Table 1 presents the running times of the CGBPO and PO algorithms, along with the ratio of CGBPO’s running time to that of PO. Using the CEC2022 standard dataset and MATLAB simulation software, the test sets the population size at 30 and the maximum number of iterations at 300. Each of the algorithms runs independently 30 times for each test function. Overall, at a low dimension of 30, CGBPO generally takes longer to run than PO, showing lower computational efficiency. But as the dimension rises to 50 and 100, the running-time gap between the two narrows. Different test functions affect the running-time ratio differently. For complex functions, the ratio drops more notably with increasing dimension, indicating CGBPO’s potential in handling high-dimensional complex functions.
Table 1.
Test results of the Avg_Time.
4. Experimental and Comparative Analyses of the CEC2017 Benchmark Suite
To evaluate CGBPO’s performance, the CEC2017 and CEC2022 benchmarks were selected. Using MATLAB simulation software, nine algorithms were tested and compared: Parrot Optimization (PO), Harris Hawks Optimization (HHO) [16], Antlion Optimizer (AO) [36], FOX optimization [37], Beluga Whale Optimization (BWO) [38], GOOSE optimization [39], Whale Optimization (WOA) [40], CMA-ES [41], and CGBPO. The population size was set to 30, the maximum iterations to 300, and the dimension to 10, with other parameters following the original literature. Each algorithm ran independently 30 times.
The CEC2017 benchmark has 29 functions. F1, F3, and F4 are unimodal functions, used to assess global convergence; F5–F11 are simple multi-modal functions for testing the ability to escape local optima; F12–F21 are hybrid functions, F22–F27 are composition functions, and F28–F30 are extended unimodal functions, all for testing algorithms’ handling of complex issues. Some CEC2017 test functions’ 3D graphs are shown in Figure 6.
Figure 6.
Three-dimensional graphs of some test functions in the CEC2017 benchmark suite.
F1 has an obvious global minimum near the bottom center of its landscape and is useful for evaluating whether an algorithm can find the global optimum. F7 is multimodal, with multiple local extrema and one global extremum, and is often used to evaluate global search ability and the ability to escape local optima. F17 has a complex stepped structure with multiple local extreme values, used to evaluate an algorithm's ability to find the global optimum of multi-extremum functions. F27 is an extremely complex multimodal function with many local extreme points, used to evaluate an algorithm's ability to find the global optimum and avoid local optima in complex multimodal scenarios. F30 has a complex surface with multiple local extreme regions, used to evaluate an algorithm's performance in complex function environments, including global search, convergence speed, and the avoidance of local optima.
4.1. Statistical Results of Comparative Tests
Comparing CGBPO with the eight other algorithms on CEC2017, Table 2, Table 3 and Table 4 reveal its significant performance edge. On the unimodal functions, CGBPO has notable advantages on F1, F3, and F4, although CMA-ES performs best on them. For the simple multimodal functions, CGBPO performs optimally on F5, F6, and F8–F10. Among the hybrid functions, CGBPO outperforms the others on F16, F17, F20, and F21; CMA-ES is best on F12–F15, F18, and F19, with CGBPO ranking second. For the composition functions, CGBPO leads on F22, F27, and F28 and is competitive on the others, while CMA-ES performs best overall.
Table 2.
Test results on CEC2017 (unimodal and simple multi-modal functions).
Table 3.
Test results on CEC2017 (hybrid functions).
Table 4.
Test results on CEC2017 (composition functions and extended unimodal functions).
The data show that CGBPO performs well on various test functions, whether simple or complex. Across multiple functions, CGBPO generally surpasses PO and several other algorithms (such as HHO, AO, and BWO) in convergence speed, solution accuracy, and stability. This demonstrates CGBPO's comprehensive performance advantage and its ability to offer better solutions for practical engineering problems.
4.2. Algorithm Convergence Curve on CEC2017
Figure 7 displays the comparative convergence curves of CGBPO and eight other algorithms for functions F1–F30. In most functions’ convergence profiles, the CGBPO curve trends sharply downward. For unimodal functions, CGBPO’s downward trend, convergence speed, and final fitness value rank second only to CMA-ES.

Figure 7.
Convergence curves of the proposed and compared functions on CEC2017.
On simple multimodal functions like F6, F8, F9, and F10, CGBPO’s curve has a clear downward trend with a fast decline rate, reaching a low fitness value early in iteration and having a lower final convergence fitness than most other algorithms. For F7, while other algorithms’ curves fluctuate greatly, CGBPO remains stable and achieves better convergence.
On hybrid functions, such as F16, F17, F20, and F21, CGBPO’s curve drops rapidly and reaches a low fitness value early, outperforming others. On F12–F15, F18, and F19, CMA-ES performs best and CGBPO ranks second, still ahead of other algorithms.
For composition functions such as F22, F24, F25, F27, and F28, CGBPO's curve drops quickly and reaches a low final fitness value on F22, F27, and F28, showing the best performance. For the other functions, CGBPO is also competitive. Although CMA-ES performs better on some functions, overall CGBPO shows excellent or strong performance across the various functions and has a comprehensive performance advantage.
4.3. Algorithm Box Plot on CEC2017
Figure 8 shows box plots of CGBPO and the other algorithms. In the box plots of the various functions, the box and whisker lengths reflect the dispersion of the data. The constituent elements of a box plot are as follows: the rectangular box in the middle shows the distribution range of the middle 50% of the data; the horizontal line inside the box marks the median, which divides the data into two equal halves and reflects the central tendency; the line segments (whiskers) extending from the upper and lower ends of the box show the overall spread of the data; and the points lying outside the whiskers are outliers that deviate significantly from the rest of the data.

Figure 8.
Box plots of functions on CEC2017.
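Box plots of this kind can be generated directly from the 30-run fitness records; the snippet below is a generic matplotlib sketch in which the algorithm names and the fitness samples are synthetic placeholders, not results from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 30-run final fitness samples for three algorithms (placeholders, not real results)
rng = np.random.default_rng(0)
results = {
    "CGBPO": rng.normal(100, 5, 30),
    "PO": rng.normal(130, 20, 30),
    "HHO": rng.normal(120, 15, 30),
}

fig, ax = plt.subplots(figsize=(5, 3))
ax.boxplot(list(results.values()))       # box = middle 50%, center line = median, whiskers = spread
ax.set_xticklabels(list(results.keys()))
ax.set_ylabel("Final fitness over 30 runs")
ax.set_title("Illustrative box plot (synthetic data)")
plt.tight_layout()
plt.show()
```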
For unimodal function F1, CGBPO’s box plot is at the lowest, with the smallest median fitness value, converging to a better solution accurately. For F3, CMA-ES’s box plot is lowest, with CGBPO second. For F4, CGBPO performs well with a low median fitness and high accuracy. CGBPO’s box plots for F1 and F4 are short, showing strong stability; for F3, CMA-ES is relatively stable.
For simple multimodal functions like F6, F8, F9, and F10, CGBPO’s box plots are low, with small median fitness values, converging to better solutions accurately. It also excels in F7, more accurate than most algorithms. These box plots are mostly short, indicating good stability.
For hybrid functions such as F16, F17, F20, and F21, CGBPO’s box plots are low, with high accuracy in converged solutions. For F12–F15, F18, and F19, CMA-ES performs best, CGBPO second, still having an accuracy advantage. For good-performing functions of CGBPO, box plots are short with good stability; for CMA-ES-dominated functions, CGBPO also has good stability.
For composition functions like F22, F27, and F28, CGBPO’s box plots are at the lowest, with the highest convergence accuracy. For other functions, CGBPO is competitive, converging to good solutions. For advantageous functions, its box plots are short, indicating good stability; for others, its overall stability is acceptable and comparable to other algorithms.
In general, CGBPO shows excellent comprehensive performance for different functions, having advantages in both convergence accuracy and stability for many functions.
4.4. Wilcoxon’s Rank-Sum Test on CEC2017
Table 5 compares the p-values of CGBPO and swarm intelligence algorithms via Wilcoxon’s rank-sum test [42] on the CEC2017 benchmark. For functions like F3, F4, F9, F16, F21, and F24, CGBPO’s p-values compared to the other eight algorithms are all much less than 0.05, showing its performance significantly surpasses theirs.
Table 5.
p-values obtained from Wilcoxon’s rank-sum test on CEC2017.
In some functions, the p-values between CGBPO and specific algorithms exceed 0.05, meaning no significant performance difference. For example, on function F13, CGBPO’s p-values relative to PO, HHO, AO, and WOA are 8.53 × 10−1, 4.20 × 10−1, 1.41 × 10−1, and 2.84 × 10−1, respectively, indicating similar performance.
Overall, CGBPO performs excellently in most function tests, showing significant differences from, and often outperforming, the other algorithms. Even when its performance is similar to that of some algorithms on certain functions, its overall advantages remain evident, proving its strong competitiveness in solving the CEC2017 benchmark problems.
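The rank-sum comparison itself can be reproduced with SciPy's ranksums function; the sketch below compares two synthetic 30-run samples and flags significance at the 0.05 level used in the tables.

```python
import numpy as np
from scipy.stats import ranksums

# Synthetic 30-run final fitness samples for two algorithms (placeholders)
rng = np.random.default_rng(1)
cgbpo_runs = rng.normal(100.0, 5.0, 30)
po_runs = rng.normal(115.0, 12.0, 30)

stat, p_value = ranksums(cgbpo_runs, po_runs)   # Wilcoxon rank-sum test
verdict = "significant difference" if p_value < 0.05 else "no significant difference"
print(f"p = {p_value:.3e} -> {verdict}")
```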
4.5. Radar Chart and Average Ranking Chart
Figure 9 shows the radar chart and the average ranking chart of CGBPO and the eight other intelligent algorithms on CEC2017. Notably, CGBPO's radar chart has the least area fluctuation. It ranked first for two functions and second for twenty functions, indicating highly stable results across multiple runs that are largely unaffected by random factors.
Figure 9.
Radar chart (a) and ranking chart (b) for functions in CEC2017.
Significantly, CGBPO had the lowest average fitness value and topped the ranking. This shows that in the comprehensive test, CGBPO obtained better solutions and outperformed the other eight algorithms in convergence performance.
4.6. Analysis of High-Dimensional Function Tests
To confirm CGBPO’s superiority in handling high-dimensional complex problems, the dimension was set to 100 with other parameters unchanged. High-dimensional function test results in Table 6, Table 7 and Table 8 show that on all high-dimensional unimodal functions, CGBPO has notable advantages and stability in minimum and mean values. On multimodal functions, it significantly outperforms other algorithms. On hybrid and composition functions, it reaches the theoretical optimal value. In some functions’ standard deviation data, it ranks second only to CMA-ES. Overall, CGBPO still excels in high-dimensional complex problems and offers better solutions.
Table 6.
Test results on CEC2017 for high-dimensional functions with dimension 100 (unimodal and simple multi-modal functions).
Table 7.
Test results on CEC2017 for high-dimensional functions with dimension 100 (hybrid functions).
Table 8.
Test results on CEC2017 for high-dimensional functions with dimension 100 (composition functions and extended unimodal functions).
5. Experimental and Comparative Analyses on the CEC2022 Benchmark Suite
The CEC2022 benchmark suite has 12 single-objective test functions with boundary constraints, categorized as follows: F1 is a unimodal function for evaluating convergence speed and accuracy; F2–F5 are multimodal functions with multiple local optima, testing global search capabilities; F6-F8 are hybrid functions for comprehensively assessing algorithm performance under complex conditions; F9–F12 are composite functions for testing the ability to handle complex tasks. Some CEC2022 test functions’ 3D graphs are shown in Figure 10.
Figure 10.
Three-dimensional graphs of some test functions in CEC2022.
F4 has a complex multimodal shape with many local extreme points and is used to evaluate optimization algorithms' performance in multimodal environments, such as global search and the avoidance of local optima. F7 has a stepped distribution with multiple levels and potentially multiple local extreme values, and is used to evaluate an algorithm's ability to find the global optimum of complex multi-extremum functions. F10 has a highly complex multimodal shape and numerous local extreme points, making its optimization difficult; it can evaluate an algorithm's performance in complex multimodal scenarios, such as finding the global optimum and escaping local optima.
5.1. Statistical Results of Algorithm Tests on CEC2022
Table 9 shows the min, std, and avg data of CGBPO and eight other algorithms for 12 test functions. On unimodal function F1, CMA-ES performs best. CGBPO’s minimum and average values rank second only to CMA-ES and are much better than others, indicating high convergence accuracy on unimodal functions.
Table 9.
Test results on CEC2022.
On multimodal function F3, CGBPO has the best minimum value and small standard deviation, showing strong stability. On F4, algorithms’ minimum values are close, and CGBPO’s has an edge with a reasonable standard deviation. On F5, CGBPO’s minimum value is better than most, except CMA-ES, proving good comprehensive performance on multimodal functions.
On hybrid function F6, CMA-ES has the best minimum value, and CGBPO ranks second but has a large standard deviation, resulting in poor stability. On F7, CGBPO has the best minimum value and small standard deviation, showing good stability. On F8, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation, indicating strong stability. CGBPO is competitive overall despite performance fluctuations on hybrid functions.
On composition function F9, CMA-ES has the best minimum value, CGBPO ranks second, close to CMA-ES, and has a small standard deviation. On F10, CGBPO has the best minimum value and extremely small standard deviation, showing excellent stability. On F11, CMA-ES has the best minimum value, CGBPO ranks third with a small standard deviation. On F12, CMA-ES has the best minimum value, and CGBPO ranks second with a small standard deviation, indicating good comprehensive performance on composition functions.
In general, CGBPO shows high convergence accuracy, strong stability, and good global convergence on various functions. Its comprehensive performance is remarkable among many algorithms. Despite some flaws in individual functions, its overall performance is good, with obvious advantages on unimodal, multimodal, and composition functions.
5.2. Algorithm Convergence Curve on CEC2022
Figure 11 displays the convergence graphs of CGBPO and other algorithms for functions F1–F12. For unimodal function F1, CMA-ES’s curve drops fastest with the lowest final average fitness, performing best. CGBPO’s curve also drops quickly, with its final average fitness ranking second only to CMA-ES and better than others, showing good convergence speed and accuracy.
Figure 11.
Convergence curves of functions on CEC2022.
For multimodal functions, in F3, CGBPO’s curve starts low with a clear downward trend and the lowest final average fitness, performing best. In F4, algorithms’ curves are close initially, but CGBPO’s has a better downward trend later with the optimal average fitness. In F5, CGBPO’s curve drops rapidly and has the lowest final average fitness, indicating good convergence speed and accuracy.
For hybrid functions, in F6, CMA-ES's curve has an obvious downward trend later in the run and the lowest final average fitness, while CGBPO performs well early but is overtaken later. In F7, CGBPO's curve has the lowest final average fitness. In F8, CMA-ES's curve has the lowest final average fitness, and CGBPO's drops smoothly. CGBPO is competitive on the hybrid functions, though less so than on the unimodal and multimodal ones.
For composition functions, in F9, CMA-ES's and CGBPO's curves have similar downward trends with close, low final average fitness values, CGBPO being slightly worse. In F10, CGBPO's curve drops rapidly and has the lowest final average fitness, performing best. In F11, the algorithms' curves vary greatly, and CGBPO's drops quickly in the early iterations. In F12, CMA-ES's curve has the lowest final average fitness, and CGBPO's ranks second, showing good overall performance on the composition functions.
5.3. Algorithm Box Plot on CEC2022
Figure 12 shows box plots of test data distribution. For unimodal function F1, CMA-ES’s box plot is at the lowest, with the smallest fitness median and the best converged solution. CGBPO’s box plot is low with a short box, meaning it converges with a good solution and has small data dispersion, showing good stability.
Figure 12.
Box plot of functions on CEC2022.
For multimodal function F3, CGBPO’s box plot is at the lowest, with the smallest fitness median, indicating the best performance and converging to a better solution. Its box is short, showing good stability. For F4, CGBPO’s box plot is at the lowest. Despite outliers, the overall fitness median is small, with a good convergence effect. For F5, CGBPO’s box plot is at the lowest, with a fitness value much lower than most algorithms, showing high convergence accuracy.
For hybrid function F7, CGBPO’s box plot is low, with a small fitness median, good convergence, and small data dispersion, indicating good stability. For F8, CMA-ES’s box plot is at the lowest, performing best, while CGBPO’s is at an intermediate level, competitive but slightly worse.
For composition function F10, CGBPO’s box plot is at the lowest, with the smallest fitness median, the best performance, and a short box for good stability. For F12, CMA-ES’s box plot is at the lowest, and CGBPO’s is the second lowest, with good performance.
In general, CGBPO has obvious advantages on multimodal and composition functions, strong competitiveness on unimodal and hybrid functions, and outstanding comprehensive performance across different functions.
5.4. Wilcoxon’s Rank-Sum Test on CEC2022
Table 10 shows the rank-sum test results on the CEC2022 benchmark. When comparing CGBPO with BWO, FOX, GOOSE, WOA, and CMA-ES, the p-values for all 12 functions are below 0.05, indicating that CGBPO's performance differs significantly from, and generally surpasses, that of these algorithms.
Table 10.
p-values from Wilcoxon’s rank-sum test on CEC2022.
For four of the functions, the p-values of CGBPO relative to AO exceed 0.05, meaning that CGBPO and AO have similar performance on those functions.
Overall, CGBPO performs remarkably in most function tests, showing significant differences from and often outperforming other algorithms. Clearly, CGBPO maintains strong competitiveness in solving the CEC2022 benchmark problems.
5.5. Radar Chart and Average Ranking Chart
Figure 13 shows a radar chart and an average ranking chart comparing CGBPO with the eight other algorithms on CEC2022. Notably, CGBPO's radar chart has the least area fluctuation. As shown in Figure 13a, it ranked first for three functions (including F10) and second for four (such as F5), demonstrating remarkable performance and distinct advantages on these functions.
Figure 13.
Radar chart (a) and ranking chart (b) for functions in CEC2022.
The CGBPO algorithm had an average ranking of 2.25, tying for first place with CMA-ES in Figure 13b. It demonstrated outstanding comprehensive performance across multiple test functions, outperforming the other seven algorithms overall.
6. Application in Engineering Problems
To verify CGBPO’s performance in handling complex engineering problems, tests were conducted on the design optimization of industrial refrigeration systems [43] and the optimization of Himmel Blau’s function [44]. Then, the optimization results were compared and analyzed with those of the other seven algorithms mentioned above.
6.1. Optimization Results for Industrial Refrigeration Systems
With the continuous depletion of basic energy resources, energy conservation and emissions reduction have become a key concern across all industries recently. Industrial refrigeration systems consume a large amount of energy in enterprises. The design optimization of such systems seeks to strike a fine balance among performance, cost, and efficiency. This design issue involves 14 design variables and 15 constraint conditions in total. The mathematical model thereof is presented as follows:
Design variables:
Objective function:
Constraint conditions:
Range of values:
Table 11 compares the optimal results of the CGBPO algorithm with those of other methods. Clearly, CGBPO achieved the minimum optimized value of 5.32 × 10−1, showing its ability to meet stricter precision requirements. Also, CGBPO’s average value and standard deviation were lower than those of the other eight algorithms.
Table 11.
Test results regarding the design optimization problem for industrial refrigeration systems.
By comparing the convergence curves in Figure 14 and the box plots in Figure 15, it is evident that CGBPO was the most stable and converged fastest in the later iteration stage. This means CGBPO can optimize the refrigeration system design more quickly, reducing the system adjustment time while ensuring system performance.
Figure 14.
Convergence curves regarding the design optimization problem for industrial refrigeration systems.
Figure 15.
Box plots regarding the design optimization problem for industrial refrigeration systems.
In conclusion, CGBPO showed superiority and suitability in solving the design optimization problem of industrial refrigeration systems.
6.2. Optimization of Himmelblau's Function
Himmel Blau’s function is a commonly used multimodal function for evaluating optimization algorithm performance. It is mainly applied to analyze non-linear constrained optimization problems. Notably, this function has six non-linear constraints, involves five variables, and its mathematical expression is as follows:
Minimize
subject to
where
with the bounds
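Because CGBPO itself natively handles only the variable bounds, a simple way to impose the six non-linear constraints is to fold them into the objective with a static penalty. The sketch below shows only the wrapper structure with a toy problem; the actual objective, constraint functions, and limits of Himmelblau's problem should be taken from the benchmark definition in [44].

```python
import numpy as np

def penalize(objective, constraints, penalty=1e6):
    """Turn a constrained problem into an unconstrained one via a static quadratic penalty.

    `constraints` is a list of (g, low, high) tuples meaning low <= g(x) <= high, which
    matches the double-sided form of Himmelblau's six inequality constraints.
    """
    def wrapped(x):
        total = objective(x)
        for g, low, high in constraints:
            v = g(x)
            total += penalty * (max(0.0, low - v) ** 2 + max(0.0, v - high) ** 2)
        return total
    return wrapped

# Toy example (NOT Himmelblau's coefficients): minimize sum(x^2) subject to 1 <= x0 + x1 <= 2
toy = penalize(lambda x: float(np.sum(np.asarray(x) ** 2)),
               [(lambda x: x[0] + x[1], 1.0, 2.0)])
print(toy(np.array([0.5, 0.5])), toy(np.array([0.0, 0.0])))   # feasible vs. infeasible point
```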
As Table 12 shows, CGBPO achieves a minimum value of −30,665.4, the lowest among compared algorithms. Regarding std and avg values, CGBPO outperforms its counterparts. From Figure 16 and Figure 17, CGBPO has the fastest convergence speed when optimizing functions, obtaining the optimal solution earliest, showing the best stability, and having the highest solution accuracy.
Table 12.
Test results for Himmelblau's function optimization problem.
Figure 16.
Convergence curves for Himmelblau's function optimization problem.
Figure 17.
Box plots for Himmelblau's function optimization problem.
These results clearly show that CGBPO excelled in solving Himmelblau's function optimization problem. It not only had an edge in finding the optimal solution but also surpassed the other algorithms in stability and overall performance, making it highly competitive.
7. Application of the CGBPO Algorithm in Indoor Visible Light Positioning
The CGBPO algorithm is applied to indoor visible light positioning [45]. In the indoor wireless visible-light transmission model, Light-Emitting Diodes (LEDs) are signal sources for data transmission, and Photo Diodes (PDs) are receivers for data reception to achieve high-precision positioning. A 5 m × 5 m × 6 m positioning model is established. Four LEDs on the ceiling, with coordinates (5, 0, 6), (0, 0, 6), (0, 5, 6), and (5, 5, 6), respectively, are set as signal sources.
To test the positioning error of CGBPO, at a height of 2 m, signal receivers are placed every 0.5 m in the 5 m length and 5 m width directions, creating 121 test points. A positioning-simulation experiment is conducted using MATLAB, and the experiment’s relevant parameters are set as shown in Table 13.
Table 13.
Related-parameter settings.
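For reference, a positioning fitness of the kind minimized by PO/CGBPO in such RSS-based simulations can be sketched as follows: a candidate receiver position is scored by the squared difference between measured and modelled received power under a Lambertian line-of-sight channel. The LED coordinates come from the text, but the channel parameters (Lambertian order, detector area, field of view, transmit power) are illustrative placeholders rather than the values in Table 13.

```python
import numpy as np

LEDS = np.array([[5, 0, 6], [0, 0, 6], [0, 5, 6], [5, 5, 6]], dtype=float)  # LED coordinates (from the text)

def los_gain(receiver, led, m=1.0, area=1e-4, fov_deg=70.0):
    """Lambertian line-of-sight channel gain (illustrative parameter values)."""
    d = np.linalg.norm(led - receiver)
    cos_angle = (led[2] - receiver[2]) / d           # LEDs point down, PD faces up (assumed)
    if np.degrees(np.arccos(cos_angle)) > fov_deg:   # outside the receiver field of view
        return 0.0
    return (m + 1) * area * cos_angle ** (m + 1) / (2 * np.pi * d ** 2)

def positioning_fitness(candidate, measured_power, tx_power=1.0):
    """Sum of squared differences between measured and modelled received power."""
    modelled = np.array([tx_power * los_gain(candidate, led) for led in LEDS])
    return float(np.sum((modelled - measured_power) ** 2))

# Example: synthesize measurements at a true position, then score two candidate positions
true_pos = np.array([2.0, 3.0, 2.0])
measured = np.array([los_gain(true_pos, led) for led in LEDS])
print(positioning_fitness(true_pos, measured), positioning_fitness(np.array([1.0, 1.0, 2.0]), measured))
```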
Figure 18 displays the distribution of the actual PD positions and the positions estimated by PO and CGBPO. The CGBPO algorithm's estimated positions are nearer to the actual PD positions. Moreover, CGBPO has the optimal coverage, which reaches 100%.
Figure 18.
Distribution diagram of actual location.
Figure 19 presents the error curves of the estimated positions for the two algorithms. Clearly, among the 121 test points, CGBPO has the smallest error in estimated positions, while PO shows relatively larger errors at each test point.
Figure 19.
Curve of estimated position error.
Figure 20 is a bar chart of the average errors of the estimated positions for the two algorithms. By comparison, the average error of PO's estimated positions is about 0.0070 cm, while that of CGBPO is about 0.00034807 cm, so CGBPO delivers the more stable and accurate positioning performance.
Figure 20.
Comparison of average errors of estimated positions.
8. Conclusions
To overcome the drawbacks of the PO algorithm, such as becoming stuck in local optima and converging slowly on complex problems, this study proposed the CGBPO algorithm, which offers improved performance and applicability. CGBPO uses chaotic logistic mapping for initialization to increase population diversity, applies Gaussian mutation to updated individual positions to avoid premature local convergence, and incorporates barycenter opposition-based learning to generate opposite solutions and boost global search ability. Simulation experiments on the CEC2017 and CEC2022 benchmarks, comparisons with eight intelligent optimization algorithms, and two complex engineering problems showed that CGBPO improves solution accuracy and convergence speed while balancing global and local search. In the industrial refrigeration system design optimization and Himmelblau's function optimization, CGBPO achieved the highest accuracy, the shortest optimization time, and the best stability. In indoor visible light positioning, CGBPO's estimated positions were closer to the actual PD positions, with the best coverage and the smallest average error (about 0.00034807 cm), compared with PO.
Future research will explore integrating CGBPO with other advanced methods, such as the MAMGD optimization method, aiming to further improve convergence speed and training accuracy [46]. Combining CGBPO with the Differential Evolution (DE) algorithm [47] and implementing multi-objective optimization (e.g., extending it in the spirit of NSGA-II [48]) will also be considered to enhance its performance, expand its applicability, and meet the increasing needs of complex optimization.
Author Contributions
Conceptualization, Y.Y., M.F., C.J., P.W. and X.Z.; methodology, Y.Y.; software, Y.Y.; validation, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This study was supported by the Natural Science Key Project of West Anhui University (WXZR202307), the University Key Research Project of Department of Education Anhui Province (2022AH051683 and 2024AH051994), and the University Innovation Team Project of Department of Education Anhui Province (2023AH010078).
Institutional Review Board Statement
Not applicable.
Data Availability Statement
All the data presented in this study are available within the main text.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| PO | Parrot Optimization |
| CGBPO | Chaotic–Gaussian–Barycenter Parrot Optimization |
| LED | Light-Emitting Diode |
| PD | Photo Diode |
References
- Metropolis, N. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 2004, 21, 1087–1092. [Google Scholar] [CrossRef]
- Holland, J.H. Adaptation in Natural and Artificial Systems; The MIT Press: Cambridge, MA, USA, 1975. [Google Scholar]
- Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar] [CrossRef]
- Blum, C. Ant Colony Optimization; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2009. [Google Scholar]
- Passino, K.M. Bacterial Foraging Optimization. Int. J. Swarm Intell. Res. 2010, 1, 16. [Google Scholar] [CrossRef]
- Basturk, B.; Karaboga, D. An artificial bee colony (ABC) algorithm for numeric function optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 12–14 May 2006. [Google Scholar]
- Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
- Yang, X.S.; Gandomi, A.H. Bat Algorithm: A Novel Approach for Global Engineering Optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
- Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
- Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
- Bansal, J.C.; Sharma, H.; Jadon, S.S.; Clerc, M. Spider Monkey Optimization algorithm for numerical optimization. Memetic Comput. 2014, 6, 31–47. [Google Scholar] [CrossRef]
- Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
- Jia, H.M.; Peng, X.X.; Lang, C.B. Remora Optimization Algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
- Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch fish optimization algorithm: A new human behavior algorithm for solving clustering problems. Clust. Comput. 2024, 27, 13295–13332. [Google Scholar] [CrossRef]
- Trojovský, P.; Dehghani, M. Pelican Optimization Algorithm: A Novel Nature-Inspired Algorithm for Engineering Applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
- Jia, H.M.; Rao, H.H.; Wen, C.S.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
- Laskar, N.M.; Guha, K.; Chanda, S.; Baishnab, K.L.; Paul, P.K. HWPSO: A new Hybrid Whale-Particle Swarm Optimization Algorithm and its application in Electronic Design Optimization Problems. Appl. Intell. 2019, 49, 265–291. [Google Scholar] [CrossRef]
- Jiang, F.; Wang, L.; Bai, L. An Improved Whale Algorithm and Its Application in Truss Optimization. J. Bionic Eng. (Engl. Ed.) 2021, 18, 12. [Google Scholar] [CrossRef]
- Chen, J.; Chen, X.; Fu, Z. Improvement of the Seagull Optimization Algorithm and Its Application in Path Planning. J. Phys. Conf. Ser. 2022, 2216, 012076. [Google Scholar] [CrossRef]
- Jun, W.; Wen-Chuan, W.; Lin, Q.; Hu, X.X. Multi-strategy Fusion Improved Golden Jackal Optimization Algorithm and its Application in Parameter Estimation of the Muskingum Model. China Rural Water Hydropower 2024, 2, 1–7. [Google Scholar] [CrossRef]
- Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
- Zhang, M.; Wang, D.; Yang, J. Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems. Entropy 2022, 24, 525. [Google Scholar] [CrossRef]
- Shaik AL, H.P.; Manoharan, M.K.; Pani, A.K.; Avala, R.R.; Chen, C.-M. Gaussian mutation–spider monkey optimization (GM-SMO) model for remote sensing scene classification. Remote Sens. 2022, 14, 6279. [Google Scholar] [CrossRef]
- Song, T.; Zhang, L. A Moth-Flame Optimization Algorithm Combining Centroid-Based Opposite Mutation. Intell. Comput. Appl. 2020, 12, 104–115. [Google Scholar] [CrossRef]
- Liang, X.M.; Shi, L.Y.; Long, W. An improved snake optimization algorithm based on hybrid strategies and its application. Comput. Eng. Sci. 2024, 46, 693–706. [Google Scholar]
- Ahrari, A.; Elsayed, S.; Sarker, R.; Essam, D.; Coello, C.A.C. Problem definition and evaluation criteria for the cec’2022 competition on dynamic multimodal optimization. In Proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI 2022), Padua, Italy, 18–23 July 2022; pp. 18–23. [Google Scholar]
- Bae, J.; Hwang, C.; Jun, D. The uniform laws of large numbers for the tent map. Stat. Probab. Lett. 2010, 80, 1437–1441. [Google Scholar] [CrossRef]
- Liu, Z.Y.; Liang, S.B.; Yuan, H.; Sun, H.K.; Liang, J. An Adaptive Equalization Optimization Algorithm Based on the Simplex Method. Chin. J. Sens. Actuators 2022, 35, 8. [Google Scholar]
- Gao, W.X.; Liu, S.; Xiao, Z.Y.; Yu, J. Butterfly Algorithm Optimized by Cauchy Mutation and Adaptive Weight. Comput. Eng. Appl. 2020, 56, 8. [Google Scholar] [CrossRef]
- Wang, C.; Ou, Y.; Shan, Z. Improved Particle Swarm Optimization Algorithm Based on Dynamic Change Speed Attenuation Factor and Inertia Weight Factor. J. Phys. Conf. Ser. 2021, 1732, 012072. [Google Scholar] [CrossRef]
- Zhan, H.X.; Wang, T.H.; Zhang, X. A Snake Optimization Algorithm Combining Opposite-Learning Mechanism and Differential Evolution Strategy. J. Zhengzhou Univ. (Nat. Sci. Ed.) 2024, 56, 25–31. [Google Scholar]
- Ban, Y.F.; Zhang, D.M.; Zuo, F.Q.; Shen, Q.W. Gazelle Optimization Algorithm Guided by Elite Opposite-Learning and Cauchy Perturbation. Foreign Electron. Meas. Technol. 2024, 43, 1–13. [Google Scholar]
- Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Matlab Code of Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
- Mohammed, H.; Rashid, T. FOX: A FOX-inspired optimization algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
- Zhong, C.T.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
- Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
- Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
- Uchida, K.; Hamano, R.; Nomura, M.; Saito, S.; Shirakawa, S. CMA-ES for Safe Optimization. arXiv 2024, arXiv:2405.10534. [Google Scholar]
- Lin, Y.D. Data Analysis of Energy Use Right Verification Based on Wilcoxon Signed–Rank Test. Chem. Eng. Equip. 2021, 187–189. [Google Scholar] [CrossRef]
- Li, Y.; Liang, X.; Liu, J.S.; Zhou, H. Solving Engineering Optimization Problems Based on Improved Balance Optimizer Algorithm. Comput. Integr. Manuf. Syst. 2023, 1–34. [Google Scholar] [CrossRef]
- Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
- Jia, C.C.; Yang, T.; Wang, C.J.; Mengli, S. High accuracy 3D indoor visible light positioning method based on the improved adaptive cuckoo search algorithm. Arab. J. Sci. Eng. 2022, 47, 2479–2498. [Google Scholar]
- Sakovich, N.; Aksenov, D.; Pleshakova, E.; Gataullin, S. MAMGD: Gradient-Based Optimization Method Using Exponential Decay. Technologies 2024, 12, 154. [Google Scholar] [CrossRef]
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]