Abstract
This paper proposes a new meta-heuristic optimization algorithm, the crown growth optimizer (CGO), inspired by the tree crown growth process. CGO innovatively combines global search and local optimization strategies by simulating the growing, sprouting, and pruning mechanisms of tree crown growth. The pruning mechanism, inspired by Ludvig's law and the Fibonacci series, balances the exploration and exploitation of the growing and sprouting stages. We performed a comprehensive performance evaluation of CGO on the standard testbed CEC2017 and the real-world problem set CEC2020-RW and compared it to a variety of mainstream algorithms, including SMA, BKA, DBO, GWO, MVO, HHO, WOA, EWOA, and AVOA. CGO achieved a best Friedman-test ranking of 1.6333 among the ten algorithms, and all pairwise Wilcoxon comparisons were significant at the 0.05 level. The experimental results show that the mean and standard deviation of repeated CGO runs are better than those of the comparison algorithms. In addition, CGO also achieved excellent results in two specific applications, robot path planning and photovoltaic parameter extraction, further verifying its effectiveness and broad application potential in practical engineering problems.
MSC:
65K10
1. Introduction
As an essential tool in modern computational intelligence, meta-heuristic optimization algorithms have achieved remarkable results in solving complex optimization problems. By simulating biological, physical, and social phenomena in nature, these algorithms provide a flexible and efficient way to solve challenges in various engineering and scientific fields.
Meta-heuristic optimization algorithms have been widely used in various optimization problems. However, despite the success of existing meta-heuristic algorithms in many applications, there is still room for improvement. First, many algorithms tend to fall into locally optimal solutions when dealing with complex and challenging problems, making it difficult to find globally optimal solutions [1]. Second, some algorithms can still be improved in terms of convergence speed and computational efficiency.
To solve these problems, we propose CGO. CGO combines the principles of tree branch growth, biological evolution, and natural selection to propose an innovative search mechanism. The design of CGO is inspired by the natural process of tree crown growth, specifically, the growth strategies of trees as they compete for light and living space. Trees grow and branch constantly to maximize their ability to capture sunlight and absorb nutrients. For example, maple trees (Acer spp.) and oak trees (Quercus palustris Münchh.) demonstrate a high degree of flexibility and adaptability during their growth.
As shown in Figure 1, maple trees exhibit an interesting growth pattern. They have a complex crown structure and, by constantly sprouting and growing new branches, they can quickly adapt to changes in the surrounding environment. Oak trees usually grow rapidly during the seedling stage to compete for more sunlight in the forest. Once an oak tree's crown breaks through the forest canopy, its growth rate may slow, but branching continues to increase to extend its coverage. In addition, trees naturally "prune" as they grow, removing branches that consume resources but contribute little to overall growth. This mechanism ensures that trees focus their resources on the most promising branches, optimizing their growth efficiency.
Figure 1.
Schematic illustration of the source of inspiration for CGO.
CGO has the global search ability of traditional meta-heuristic algorithms and improves search efficiency in complex multidimensional spaces through a unique local search strategy. CGO seeks the optimal solution in the solution space by constructing a "virtual tree" to simulate the expansion, germination, and competition that occur during crown growth. In this work, we refer to these three processes as growing, sprouting, and pruning.
The major contributions are summarized as follows:
- A novel biology-based algorithm, the crown growth optimizer (CGO), is proposed, simulating the process of branches growing and sprouting of the tree crown.
- The ability of the CGO algorithm was tested on thirty 10-dimensional, 30-dimensional, and 50-dimensional CEC2017 benchmark functions, and compared with nine other advanced optimizers; CGO achieved outstanding performance.
- CGO was tested on 20 CEC2020 real-world constrained problems.
- The CGO algorithm was tested on two typical engineering problems: robot path planning and photovoltaic parameter extraction.
- Statistical analyses such as the Wilcoxon and Friedman tests were used to investigate the strength of the CGO algorithm.
2. Related Works
Typical meta-heuristic algorithms can be divided into the following categories. The first category is evolution-based algorithms, typically including the Genetic Algorithm (GA) [2] and Differential Evolution (DE) [3], which make use of the crossover and mutation mechanisms of chromosomes to update agent search locations. The second category is algorithms based on physical rules, such as the Gravitational Search Algorithm (GSA) [4], the Simulated Annealing algorithm (SA) [5], and Multi-Verse Optimization (MVO) [6]; these algorithms make use of the physical laws of nature. The third category is based on mathematics and is derived from mathematical functions, formulas, and theories, such as the Sine Cosine Algorithm (SCA) [7] and the Arithmetic Optimization Algorithm (AOA) [8]. The fourth category comprises population-based algorithms, which are derived from foraging, breeding, and hunting behaviors in organisms, such as Particle Swarm Optimization (PSO) [9], the Artificial Bee Colony algorithm (ABC) [10], Grey Wolf Optimization (GWO) [11], the Whale Optimization Algorithm (WOA) [12], and Harris Hawks Optimization (HHO) [13]. This classification is not absolute, and the same algorithm may contain multiple mechanisms. These algorithms have been used to solve various industrial problems with great success, including in the field of UAV path planning. However, when faced with complex environments, the performance of most algorithms can still be improved. In the further development of meta-heuristic algorithms, many researchers have focused on introducing more parameters, mechanisms, and multi-level search.
In recent years, many advanced optimization algorithms have been proposed. Nadimi-Shahraki et al. [14] proposed an improved Grey Wolf Optimizer (I-GWO) to solve global optimization and engineering design problems; it uses a Dimension Learning-based Hunting (DLH) search strategy inspired by the individual hunting behavior of wolves in nature and achieved excellent results on the CEC 2018 benchmark functions. Luo et al. [15] proposed a 3D path-planning algorithm based on Improved Holonic Particle Swarm Optimization (IHPSO), which uses a system clustering method and an information-entropy grouping strategy instead of random grouping to structure the particle swarm. Allouani et al. [16] introduced PMST-CHIO, a novel variant of the Coronavirus Herd Immunity Optimizer (CHIO) algorithm, which integrates a parallel multi-swarm treatment mechanism and significantly enhances the standard CHIO's exploration and exploitation capabilities. Wang et al. [17] proposed an Improved Tuna Swarm Optimization algorithm, called SGGTSO, based on a sigmoid nonlinear weighting strategy, a multi-subgroup Gaussian mutation operator, and an elite individual genetic strategy.
The above algorithms perform well in their respective fields. However, they still have limitations, such as a tendency to fall into locally optimal solutions and slow convergence. This paper proposes the crown growth optimizer (CGO), which improves search efficiency and stability by simulating the tree crown growth process and combining global and local search strategies.
3. Crown Growth Optimizer
3.1. Inspiration
The crown growth optimizer (CGO) was inspired by the way trees optimize their natural growth by competing for light, nutrients, and living space. In this algorithm, the crown of a tree is regarded as a population of branches, and each branch is a search agent. As a tree grows, it adjusts the shape of its crown, which corresponds to adjusting the locations of the population. The growth of branches is affected by various resources such as light, temperature, water, and soil nutrients. These resources are abstracted as the objective function in the optimization process, so the crown growth optimizer can be applied to a wide range of optimization problems.
The physiological mechanisms of the tree regulate the crown as a whole. We used three mechanisms of crown development: growing, sprouting, and pruning. The growing process explores the parameter space around a branch and may find resources that are more beneficial to tree growth. The sprouting process grows new shoots near the current branches, which corresponds to exploiting the parameter space; the more abundant a branch's resources, the more likely it is to sprout new shoots. The tree crown adaptively adjusts the development of its branches according to factors such as season, life span, and climate, for example, by paying more attention to the growth of existing branches, paying more attention to the development of new branches, or actively shedding leaves and aging branches in seasons that are unfavorable for survival. These mechanisms are abstracted as pruning, which balances the growing and sprouting processes and periodically eliminates disadvantaged individuals.
Figure 2 shows a schematic diagram of these processes. Using these physiological processes, we simulated the developmental behavior of several branches of a tree crown and developed the crown growth optimizer (CGO).
Figure 2.
Schematic of three mechanisms of CGO.
3.2. Crown Branch Initialization
The iterative process of CGO starts with an initial population of randomly generated crown branches. Each branch represents a search agent. D is the dimension of the problem, and the upper and lower bounds of the d-th search dimension are $ub_d$ and $lb_d$, respectively. The number of search branches is N, and the initial locations of all branches can be represented by a matrix of N rows and D columns:

$$x_{i,d} = lb_d + r \cdot (ub_d - lb_d), \quad i = 1, \dots, N, \; d = 1, \dots, D \tag{1}$$

where $r$ is a random number uniformly drawn from $[0, 1]$. The position of the i-th search branch can be described as $X_i = (x_{i,1}, x_{i,2}, \dots, x_{i,D})$.
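A minimal NumPy sketch of this initialization step is given below; the function name and array layout are illustrative and are not taken from the authors' MATLAB implementation.

```python
import numpy as np

def initialize_branches(n_agents, dim, lb, ub, rng=None):
    """Uniformly place N crown branches (search agents) in [lb, ub]^D, as in Equation (1)."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # x_{i,d} = lb_d + r * (ub_d - lb_d), with r ~ U(0, 1) drawn per entry
    return lb + rng.random((n_agents, dim)) * (ub - lb)

# Example: 100 branches in a 10-dimensional space bounded by [-100, 100]
X = initialize_branches(100, 10, -100.0, 100.0)
```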
3.3. Growing Stage
The growing stage simulates the growth of a branch towards a more resourceful location, which allows the search agent to find a better solution near its current position. Branches are encouraged to focus on obtaining better solutions rather than spreading out in regions with high dispersion. In the life of a tree, branches do not grow at a constant rate; as time progresses, growth is fast at first and then slows down. The branch growth velocity can be modeled as in Equation (2),
where t is the iteration step, T is the maximum number of iterations, $V_{max}$ and $V_{min}$ are the maximum and minimum growth velocities, respectively, and b is a scaling factor used to adjust the shape of the curve. The variation trend of Equation (2) is shown in Figure 3.
Figure 3.
The growth velocity curve for representative values of $V_{max}$, $V_{min}$, and b.
In the growing stage, Gaussian-distributed random numbers are used to simulate the uncertainty of the growth direction. The Gaussian numbers provide a dynamic and uniform step size, which ensures that the agent can explore the search space as widely as possible and spread into more feasible areas. The direction of growth depends on two reference directions. First, a branch generally grows towards a position with richer resources; since $X_{best}$ represents the best-known location, the vector $X_{best} - X_i$ gives the current branch's reference direction towards better resources. Second, branches are distributed radially relative to the tree trunk, so the vector $X_i - X_{centroid}$ gives the current branch's reference direction deviating from the crown's centroid.
The growth direction of branches is highly random, so a Gaussian random vector is used to describe this feature: a sequence of standard Gaussian random numbers, one per dimension, with mean $\mu = 0$ and variance $\sigma^2 = 1$, generates the growth randomness for each dimension.
The position renewal equation of the growing stage (Equation (5)) combines the growth velocity V, the Gaussian random vector, a random number between 0 and 1, and the two reference directions, where $*$ represents the element-wise (dot) multiplication operation; Figure 4 shows the schematic diagram of the growing stage. $X_{centroid}$ represents the centroid position of the entire population, calculated by

$$X_{centroid,d} = \frac{1}{N}\sum_{i=1}^{N} x_{i,d} \tag{6}$$

where $x_{i,d}$ represents the d-th dimensional coordinate of the i-th search branch. In general, a branch grows in the direction of the global optimum and is diversified by combining the two random quantities, while the decaying velocity V makes the growth step gradually slow down with each iteration.
Figure 4.
Schematic diagram of the growing stage.
In the growing stage, branches should avoid shading each other as much as possible; that is, search agents should not be positioned too close together. We therefore designed a repulsion mechanism that adjusts the direction of branch growth when the target branch is too close to other branches; the formula for the repulsion mechanism is shown in Equation (7).
In Equation (7), a high-dimensional spherical region is constructed with the location of the current branch $X_i$ as its center and a prescribed radius. The other search agents contained in this region form a neighborhood set, whose cardinality is the number of such neighbors. Each search agent in the neighborhood contributes a vector pointing from itself towards the current agent $X_i$, and the sum of these vectors represents the repulsive force of the population, scaled by a repulsion modulation coefficient. The agent-updating scheme after the repulsive-force modification can be summarized as Equation (8).
Equation (8) covers both cases: branches that require the repulsive-force modification and branches that do not. When $X_i$ lies in the shade of other branches, the repulsion term is non-zero because the neighborhood set is non-empty; the remaining quantities have the same meaning as in Equation (5).
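Since Equations (2)–(8) are not reproduced above, the following NumPy sketch only mirrors the growing-stage behaviour as described in the text (a decaying velocity, Gaussian randomness, the two reference directions, and repulsion from crowded neighbours); the coefficients, the exact combination of terms, and the names used here are illustrative rather than the published update rule.

```python
import numpy as np

def growing_step(X, i, x_best, v_t, radius, k_rep=0.5, rng=None):
    """Schematic growing-stage update for branch i (not the exact Equations (2)-(8))."""
    rng = np.random.default_rng() if rng is None else rng
    centroid = X.mean(axis=0)                     # crown centroid
    d1 = x_best - X[i]                            # reference direction: toward best-known resources
    d2 = X[i] - centroid                          # reference direction: radially away from the trunk
    g = rng.standard_normal(X.shape[1])           # Gaussian growth randomness (mean 0, variance 1)
    r1 = rng.random()                             # random weight between 0 and 1
    step = v_t * g * (r1 * d1 + (1.0 - r1) * d2)  # element-wise (dot) multiplication

    # Repulsion from branches inside the sphere of the given radius (branch shading).
    diff = X[i] - X                               # vectors from every neighbour toward branch i
    dist = np.linalg.norm(diff, axis=1)
    close = (dist > 0.0) & (dist < radius)
    repulsion = k_rep * diff[close].sum(axis=0) if close.any() else 0.0

    return X[i] + step + repulsion
```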
3.4. Sprouting Stage
The sprouting stage simulates the growth process of new shoots on the crown branches. Well-grown branches have more nutrients and are more likely to sprout, so an elite pool was constructed to select the baseline location for sprouting:
In Equation (9), the first three entries are the three individuals with the best fitness values, the fourth is the centroid position of the individuals whose fitness values rank in the top 50% (Equation (10)), and the last is the centroid position of the whole population (Equation (6)). The elite pool contains the best-developed individuals in the tree crown, i.e., the most resource-rich locations in the known space, which have the highest probability of sprouting. It is also where the optimizer's primary exploitation takes place. After the locations of all search branches have been updated, the algorithm updates the elite pool once, so that it represents the baseline locations for the next iteration.
Unlike the growing stage, which is a continuous historical process, sprouting is a discrete, mutation-like process for updating the search agents, and the resulting branch locations are highly dispersed. A Gaussian random number is used to simulate the uncertainty of the sprouting position. In order to build diversity into the sprouting process, we use two reference directions (Equation (11)).
Each search agent can then be updated using Equation (12), and Figure 5 shows the schematic diagram of the sprouting stage.
In Equation (12), a standard Gaussian random sequence provides the sprouting randomness, the baseline position is a search individual chosen at random from the elite pool, and a random number between 0 and 1 introduces further variability. The sprouting stage provides stronger exploitation ability under the condition of accepting some inferior solutions.
Figure 5.
Schematic diagram of the sprouting stage.
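As a complement to the description above, the sketch below gives one possible NumPy rendering of the elite pool and an elite-guided sprouting jump; it is a schematic interpretation of Section 3.4 (assuming minimization), not the published Equations (9)–(12), and all names are illustrative.

```python
import numpy as np

def build_elite_pool(X, fitness):
    """Elite pool as described in Section 3.4: the three fittest branches, the centroid
    of the top-50% branches, and the centroid of the whole population."""
    order = np.argsort(fitness)                        # ascending: best individuals first
    best3 = [X[j] for j in order[:3]]
    top_half = X[order[: max(1, len(X) // 2)]]
    return best3 + [top_half.mean(axis=0), X.mean(axis=0)]

def sprouting_step(x_i, elite_pool, rng=None):
    """Schematic sprouting update: a Gaussian jump anchored on elite-pool members."""
    rng = np.random.default_rng() if rng is None else rng
    anchor = elite_pool[rng.integers(len(elite_pool))]  # random elite individual as baseline
    g = rng.standard_normal(x_i.shape)                  # sprouting randomness
    r2 = rng.random()
    d1 = anchor - x_i                                   # toward the random elite anchor
    d2 = elite_pool[0] - x_i                            # toward the best branch
    return x_i + g * (r2 * d1 + (1.0 - r2) * d2)
```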
3.5. Pruning Mechanism
We designed the pruning process to balance the sprouting and growing stages. Researchers have observed that new branches of trees often need to lie dormant for some time before they can sprout new branches. Ludvig's law states that a sapling grows a new branch after an interval of, say, a year. The new branches are dormant in the second year, while the old ones still sprout. After that, the old shoots and those that have been dormant for a year sprout simultaneously, and the new shoots that sprouted in the same year become dormant the following year. In this way, the number of branches and sprouts of a tree in each year constitutes a Fibonacci sequence:

$$F_1 = F_2 = 1, \qquad F_n = F_{n-1} + F_{n-2} \;\; (n \ge 3)$$

If the number of terms n in the sequence is large enough, the ratio of the numbers of shoots of two adjacent generations approaches the golden ratio of 0.618; this means that, when the current generation has N shoots, the next generation has approximately N/0.618 ≈ 1.618N shoots.
Within each iteration, the population is randomly divided into a growing sub-population and a sprouting sub-population, whose sizes at the beginning of the iterative process are obtained by splitting the population and rounding to integers. Branches within each sub-population enter the growing or sprouting stage, respectively. Since the growth of a tree is limited by its carrying capacity, the number of branches cannot increase forever, so the iterative process gradually reduces the number of growing branches and gradually increases the number of newly sprouted branches. A pruning time interval is defined, and, when the iteration enters a new pruning time interval, the algorithm performs two additional actions. First, the sizes of the two sub-populations are adjusted: the growing sub-population is shrunk and the sprouting sub-population is enlarged accordingly. Second, some of the branches that rank low in fitness are pruned off and re-initialized in the search space by Equation (1). According to Ludvig's law and the Fibonacci series, the number of branches removed per pruning is chosen according to this golden-ratio relationship. These two additional actions are performed only in the iterations that start a new pruning interval; in the remaining iterations, only the assignment of branches to the two sub-populations is updated (Algorithm 1).
| Algorithm 1: Pruning mechanism |
In this process, branches with poor fitness are eliminated and replaced with new shoots, further improving the exploitation effect in the vicinity of elite branch individuals.
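A schematic NumPy sketch of the pruning actions described above is given below; the golden-ratio constant follows the Fibonacci argument in Section 3.5, but the exact re-allocation rule and the pruning fraction used here are illustrative choices rather than the published Algorithm 1.

```python
import numpy as np

GOLDEN = 0.618  # limit of the ratio of consecutive Fibonacci numbers

def split_population(n_grow, n_sprout, rng=None):
    """Randomly assign branch indices to the growing and sprouting sub-populations."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.permutation(n_grow + n_sprout)
    return idx[:n_grow], idx[n_grow:]

def prune(X, fitness, n_grow, lb, ub, rng=None):
    """Schematic pruning action executed once per pruning time interval:
    shrink the growing sub-population by the golden ratio and re-initialize
    the worst-fitness branches inside the search bounds (Equation (1))."""
    rng = np.random.default_rng() if rng is None else rng
    n_grow_new = int(np.floor(GOLDEN * n_grow))                    # fewer growing branches
    n_sprout_new = X.shape[0] - n_grow_new                         # more sprouting branches
    n_prune = max(1, int(np.floor((1.0 - GOLDEN) * X.shape[0])))   # illustrative pruning amount
    worst = np.argsort(fitness)[-n_prune:]                         # worst branches under minimization
    X[worst] = lb + rng.random((n_prune, X.shape[1])) * (ub - lb)
    return X, n_grow_new, n_sprout_new
```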
3.6. CGO Process and Computational Complexity
The process of the CGO algorithm is shown in Figure 6. In the main iteration loop, all branches are divided into two sub-populations to balance the effects of exploration and exploitation. Within each iteration, every branch is assigned to one of the sub-populations and performs the corresponding update. After the branch positions for the next iteration have been updated, the branch fitness values are recalculated and the elite pool is updated.
Figure 6.
The main flow chart of CGO.
The algorithm flow of CGO is shown in the pseudo-code of Algorithm 2.
| Algorithm 2: Crown growth optimizer (CGO) |
The computational complexity of CGO mainly depends on four elements: population initialization, position updating, fitness calculation, and fitness ranking. The computational complexity of generating the initial positions is O(N·D), where N is the number of search agents and D is the dimension of the solution space. The fitness values of all individuals must be calculated and sorted once in each iteration, which requires O(T·N) fitness evaluations plus the cost of sorting, where T is the maximum number of iterations. Updating the positions of all agents in the growing and sprouting stages costs O(T·N·D). In addition, an extra loop is required to compute the repulsive force, which adds a pairwise-distance cost. Each pruning step replaces only a fraction of the branches, so its cost is comparatively small. Overall, the time complexity of CGO is dominated by the fitness evaluations, the position updates, and the repulsion computation.
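Collecting the components listed above, and assuming that the repulsive-force loop requires pairwise distances between all agents (an O(N²·D) per-iteration term, which is an assumption since the exact bound is not reproduced here), the overall cost can be summarized as

$$\mathcal{O}(ND) + \mathcal{O}\big(T\,(N\log N + ND + N^{2}D)\big) = \mathcal{O}\big(TN^{2}D\big).$$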
4. Experiment and Simulation
This work used the benchmark function to test the proposed CGO algorithm’s performance. CGO was further used to solve typical engineering problems, including parameter extraction for photovoltaic systems and robot path planning.
The experiment and simulation studies in this section used MATLAB. The code ran on a computer equipped with a 12th Gen Intel(R) Core(TM) i7–12700H @ 2.30 GHz CPU, 16.0 GB RAM, and the Windows 11 operating system. The version of MATLAB used was R2024a.
4.1. Testing Results on CEC2017 Benchmark Functions
4.1.1. Experiment Settings
Experimental studies were performed on the thirty benchmark functions of the CEC2017 test suite. More details on these typical testing problems can be found in [18]. These benchmark functions include four types: unimodal problems (F1–F3), simple multimodal problems (F4–F10), hybrid problems (F11–F20), and composition problems (F21–F30). These benchmark problems can reflect an algorithm's performance in real-world optimization problems. We compared the proposed algorithm with nine other advanced optimization algorithms: SMA [19], BKA [20], DBO [21], GWO [11], WOA [12], EWOA [22], HHO [13], MVO [6], and AVOA [23]. The main parameters of the algorithms involved are shown in Table 1.
Table 1.
The main parameters of algorithms involved.
In addition, the fundamental parameters were kept consistent across all algorithms, namely the number of search agents, the search dimension, and the upper and lower search bounds. Each algorithm was independently repeated 20 times, and two evaluation metrics were used to compare and analyze the optimization performance of each method: the average value (mean) and the standard deviation (std),

$$\text{mean} = \frac{1}{n}\sum_{i=1}^{n} F_i, \qquad \text{std} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(F_i - \text{mean}\right)^{2}}$$

where the mean reflects the convergence accuracy of the algorithm, std quantifies the dispersion of the optimization results, i indexes the repeated runs, n is the total number of runs, and $F_i$ represents the best solution found in the i-th run.
Statistics such as the standard deviation or variance can be used to measure the diversity of the swarm in the search space. In this study, Positional Diversity was used to describe changes in the diversity of the branch swarm, defined as Equation (16):

$$Div_j = \frac{1}{N}\sum_{i=1}^{N}\left|\bar{x}_j - x_{i,j}\right|, \qquad Div = \frac{1}{D}\sum_{j=1}^{D} Div_j \tag{16}$$

where $x_{i,j}$ is the coordinate of the i-th particle in the j-th dimension and $\bar{x}_j$ is the average coordinate of all individuals in the j-th dimension.
At the same time, the Friedman test was used to rank the average fitness of CGO and the other algorithms [24]:

$$\chi_F^{2} = \frac{12\,n}{k(k+1)}\left[\sum_{j=1}^{k} R_j^{2} - \frac{k(k+1)^{2}}{4}\right] \tag{17}$$

In Equation (17), k is the number of algorithms, $R_j$ is the average ranking of the j-th algorithm, and n is the number of test cases. The statistic follows a $\chi^{2}$ distribution with $k-1$ degrees of freedom. The test first ranks the algorithms on each problem individually and then calculates the average rank to obtain the final rank of each algorithm for the considered problems.
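Both nonparametric tests are available in SciPy; the short snippet below shows how a Friedman statistic, average ranks, and pairwise Wilcoxon signed-rank p-values could be computed from a functions-by-algorithms error matrix. The matrix here is random and purely illustrative, not the paper's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, rankdata

rng = np.random.default_rng(0)
results = rng.random((30, 10))     # results[f, a]: mean error of algorithm a on function f

# Friedman test across the 10 algorithms, treating the 30 functions as blocks.
stat, p_value = friedmanchisquare(*[results[:, a] for a in range(results.shape[1])])

# Average Friedman rank of each algorithm (rank 1 = best on a given function).
avg_rank = rankdata(results, axis=1).mean(axis=0)

# Pairwise Wilcoxon signed-rank tests: column 0 (e.g., CGO) against every other algorithm.
p_pairwise = [wilcoxon(results[:, 0], results[:, a]).pvalue
              for a in range(1, results.shape[1])]
```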
4.1.2. Convergence Behavior of CGO
In this part, the convergence behavior of CGO was studied on several CEC2017 benchmarks in the 2-dimensional parametric space. Specifically, the convergence behavior of CGO was reflected by the search history, the convergence graph, the swarm's diversity, and the trajectory in the first dimension. The benchmark functions No. 1, 3, 5, 6, 10, 22, 24, and 26 of CEC2017 were selected for testing. In this part, the population size, the initial sub-population sizes, and the maximum number of iterations were chosen so as to show the search history of the group exploration process more clearly.
As depicted in Figure 7, the first column describes the search space of the selected problems: CEC2017-1 and CEC2017-3 are unimodal problems with a smooth structure; CEC2017-5, CEC2017-6, and CEC2017-10 are simple multimodal problems; and CEC2017-22, CEC2017-24, and CEC2017-26 are composition problems that contain a large number of locally optimal solutions. The selected functions simulate real solution spaces well.
Figure 7.
The performed CGO population search process on several 2D CEC2017 benchmarks.
The second column shows the historical positions of the swarm at different iteration times in different colors. It can be clearly seen that, at the beginning of the iterations (represented by black, cyan, and green), the swarm is highly dispersed and tends to discover potential and promising areas. In the middle and late iterations (denoted by yellow, orange, and red), the swarm tends to cluster around the globally optimal solution. This shows that CGO achieves a significant trade-off between exploration and exploitation.
The convergence graph in the third column is the most widely used metric for validating the performance of the meta-heuristic optimizer. As shown in Figure 7, the convergence graph obtained by CGO shows that the algorithm has a fast convergence rate on all eight benchmarks. It can be seen from CEC2017–24 that, when dealing with hybrid problems with multiple local optima, the CGO algorithm sometimes falls into a local optimal state temporarily. However, under the sprouting mechanism based on elite pool guidance, the algorithm achieves good fitness. In practice, this suggests that CGO has good exploratory capabilities to preserve the diversity of the population while avoiding local optimality.
The fourth and fifth columns are the position diversity changes of the population and the movement trajectory of the average position of the swarms in the first dimension, respectively, which reflect the role of CGO in balanced exploration and exploitation. It can be seen that, at 130 iterations, due to the reduction of the growth population to 0, all swarms rapidly gather at the elite pool and deeply exploit the vicinity of the elite pool. Because the diversity of the population is well preserved in the initial stages of the iteration, the trajectories of the individuals show mutations and significant changes, suggesting that CGO is more likely to explore the potential, high-quality solutions.
4.1.3. The Swarm Behavior of the Growing Stage and Sprouting Stage in CGO
This part shows in detail how the growing stage and sprouting stage evolve over several iterations to better reveal how CGO works. Figure 8a shows the growth process of a clustered population (green) over three time steps (yellow, orange, red). It can be seen that individuals grow in the direction opposite to the centroid, which gives the group the possibility of exploring potentially optimal solutions. The swarm diversity (calculated using Equation (16)) increases across these four time steps, indicating that the dispersion of the population is increasing. Figure 8b shows the change in population diversity when only growth occurs over 100 iterative steps. In the early iterations, branches rapidly spread over the entire search space; as the iterations proceed, the Gaussian distribution of the random numbers in Equation (8) also gives the population the possibility of inward growth. Hence, the diversity of the population reaches a balance.
Figure 8.
The swarm behavior of growing stage.
Figure 9a shows the position change of a randomly initialized population (green) during the sprouting stage. Tracking the same particle shows that it follows an elite-pool individual, which gives the group the ability to converge on the best individuals and exploit their neighborhoods deeply. Figure 9b shows the change in group diversity during this process: individuals can quickly converge to an optimal location within dozens of iterations. This illustrates the exploitative power of CGO; the exploitation process can be very rapid due to the guiding role of the elite pool.
Figure 9.
The swarm behavior of sprouting stage.
4.1.4. Optimized Performance of CGO
In this part, CGO and the nine other algorithms were evaluated on the thirty CEC2017 problems in 10, 30, and 50 dimensions. The solution results are shown in Table 2, Table 3 and Table 4, where the bold entries mark the best result for the benchmark function in that row. We discuss the experimental results according to the class of benchmark function.
Table 2.
Results of CGO and each algorithm on CEC2017 10-dimension benchmark function.
Table 3.
Results of CGO and each algorithm on CEC2017 30-dimension benchmark function.
Table 4.
Results of CGO and each algorithm on CEC2017 50-dimension benchmark function.
1. Unimodal functions (F1–F3): These three functions have only one globally optimal solution. The gradient near the optimum of these functions is very small relative to the rest of the space, which makes them suitable for testing an algorithm's exploitation ability. CGO performs better than the other algorithms on the 10-dimensional F1–F3 problems, showing that CGO has sufficient exploitation ability near the optimum. Because the sprouting stage of CGO is guided by the elite pool, CGO can concentrate on exploitation in such unimodal problems. In other swarm algorithms, such as BKA, DBO, and EWOA, part of the swarm remains allocated to exploration in the middle and later iterations, so the results of the CGO algorithm are better.
2. Simple multimodal functions (F4–F10): These functions have many locally optimal solutions suitable for testing the algorithm’s exploration ability. CGO exhibits the best global exploration capabilities and gets the best results on all benchmark functions except the 30-dimensional F4 function. The highly dispersed and repulsive mechanism of the growth stage of CGO can improve the diversity of the population, so it is conducive to finding more potential optimal solutions.
3. Hybrid functions (F11–F20): This kind of function contains many unimodal and multimodal functions, which are more challenging to optimize. CGO achieved the best results in 20 out of 30 benchmark functions in 10, 30, and 50 dimensions. In addition, EWOA and BKA are also prominent in this kind of function. The pooling mechanism and priority selection strategy of EWOA improve the local and global search capability of WOA, and BKA integrates the Cauchy mutation strategy and the leader strategy to enhance the global search capability and the convergence speed of the algorithm. This also leaves room for improvements in CGO algorithms.
4. Composition functions (F21–F30): The composition benchmark functions combine all of the above function types. CGO achieved the best results in 17 of the 30 benchmark cases across 10, 30, and 50 dimensions. This shows that CGO's optimization performance is better than that of the existing advanced algorithms.
The main reason why CGO outperforms the other algorithms on the unimodal, multimodal, and composition functions is that the algorithm introduces an elite pool. This is different from other algorithms, such as DBO, WOA, and BKA, which only record the position of a single optimal solution. During exploration, CGO records several of the best locations, the median location, and the centroid location of the whole population without losing the statistical characteristics of the population, so CGO has a greater chance of discovering potential local optima. The sprouting stage is entirely guided by the elite pool, and the synthesis of the two reference vectors is both directional and random. These mechanisms make CGO's ability to exploit the optimal regions of the solution space significantly better than that of the other algorithms.
CGO and nine other algorithms’ rank on all CEC2017 benchmark functions are shown in Figure 10. The green line represents the CGO algorithm, which is distributed in the center area of the radar map, indicating that, in most problems, CGO is significantly better than other algorithms.
Figure 10.
Ranking of CGO and other algorithms on CEC2017 benchmark functions.
Table 5 shows the Friedman test rankings for all the algorithms above. The proposed CGO algorithm performs very well, with a comprehensive ranking of 2.5000 on the 10-dimension functions, 2.4333 on the 30-dimension function, and 2.4000 on the 50-dimension functions. In particular, as the dimension of the solution space increases, the CGO algorithm shows better optimization performance.
Table 5.
The Friedman test ranking of CGO and other algorithms on CEC2017.
Additionally, the Wilcoxon test [24] was conducted on CGO and nine other algorithms based on 10-, 30-, and 50-dimensional CEC2017 functions. As the test outcomes show in Table 6, in most cases, the attained p-values are less than 5%. Only in the case of a 10-dimensional function does the p-value of CGO vs. BKA and CGO vs. EWOA exceed 5%. This shows that CGO’s performance is close to that of BKA and EWOA in such cases. In most cases, the optimization performance of CGO is significantly better than that of other algorithms.
Table 6.
The Wilcoxon test result of CGO and other algorithms on CEC2017.
The evolution curve (Figure 11 and Figure 12) shows that CGO’s (green line) convergence speed is much faster than other algorithms, and it can quickly converge to the optimal value. CGO showed rapid evolution in the early stage of iteration, showing an excellent ability to search the global space. In contrast, in the middle and late stages of iteration, CGO maintains a persistent local exploitation ability on many functions, and its evolution curve consistently and slowly declines to avoid premature convergence.
Figure 11.
Evolution curve of CGO and other algorithms on CEC2017 30-dimension reference function (F1–F15).
Figure 12.
Evolution curve of CGO and other algorithms on CEC2017 30-dimension reference function (F16–F30).
4.1.5. Analysis of Pruning Mechanism in CGO
In this part, we analyzed the influence of the pruning time interval of the pruning mechanism on the exploration and exploitation of CGO. In [25], Hussain et al. put forward an approach to measure and analyze the exploitation and exploration capability of meta-heuristic algorithms. We used this method to measure the extent of population exploration and exploitation:

$$XPL\% = \frac{Div}{Div_{max}} \times 100, \qquad XPT\% = \frac{\left|Div - Div_{max}\right|}{Div_{max}} \times 100$$

where $Div_{max}$ represents the maximum diversity observed during the run, and $XPL\%$ and $XPT\%$ refer to the exploration percentage and exploitation percentage, respectively. Figure 13 shows the proportions of exploration (blue) and exploitation (green) capability in the crown population when adjusting the pruning time interval on the same benchmark function, CEC2017-10.
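The percentages above can be computed directly from the recorded diversity history; a minimal sketch (function and variable names are illustrative) is:

```python
import numpy as np

def exploration_exploitation(diversity_history):
    """Exploration/exploitation percentages per iteration from the diversity measure
    of Hussain et al. [25]: XPL% = Div/Div_max * 100, XPT% = |Div - Div_max|/Div_max * 100."""
    div = np.asarray(diversity_history, dtype=float)
    div_max = div.max()
    xpl = div / div_max * 100.0
    xpt = np.abs(div - div_max) / div_max * 100.0
    return xpl, xpt
```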
Figure 13.
The influence of the pruning time interval on optimization.
Figure 13 shows the exploitation and exploration capability of the individuals after adjusting the pruning time interval. In Figure 13, two distinct stages of particle behavior can be seen. In the first stage, the primary behavior of the group is exploration. As the pruning time interval increases, the time between pruning actions also increases, so the duration of the exploration phase grows; in this phase, the particle swarm is more focused on spreading out into the search space and finding more potential locations. In each pruning period, the relative sizes of the growing and sprouting sub-populations are renewed once, and the sprouting effect gradually exceeds the growing effect. Therefore, the exploration capability declines over the iterations while the exploitation capability gradually increases. Once the growing sub-population is reduced to 0, all individuals enter the sprouting stage, the exploration effect decreases rapidly, and the exploitation effect increases rapidly. Due to the elite guidance and randomness of the sprouting stage (Equation (12)), as well as the final selection step of the pruning mechanism (Section 3.5), the optimization does not converge immediately but continues to deepen the exploitation within the known optimal region to find a better solution.
We examined the diversity change of CGO under different pruning time intervals on CEC2017 and compared the resulting solutions. Table 7 and Figure 14 illustrate well how the pruning time interval regulates exploration and exploitation. CEC2017-1 and CEC2017-3 are unimodal problems in which branches do not need to explore other, better solutions, so the search should focus more on exploitation and a shorter pruning time interval is more appropriate. For multimodal and hybrid problems, such as CEC2017-5, CEC2017-6, CEC2017-10, CEC2017-22, CEC2017-24, and CEC2017-26, a large number of locally optimal solutions exist, so a thorough exploration process is essential; therefore, a larger pruning time interval gives better search performance. The more complex the problem, the more thorough the exploration phase needs to be. At the same time, when the pruning time interval is too large, the CGO solution results become poor because the necessary exploitation process is lacking.
Table 7.
Effects of different pruning time intervals on optimization results.
Figure 14.
Exploitation percentage and exploration percentage on CEC2017 benchmarks.
This example illustrates how the pruning mechanism balances the two phases of exploration and exploitation and provides an adaptive approach for different problem types. Usually, we set the pruning time interval to a moderate value because this gives the most balanced result.
4.1.6. Influence of Population Size N and Max Iteration T on Optimization
Figure 15 and Figure 16, respectively, show the optimization results for different population sizes N and different maximum iteration numbers T. In each group, the vertical coordinate of the blue bars is the optimization solution error and the horizontal coordinate is the value of N or T; the vertical coordinate of the orange bars is the improvement efficiency and the horizontal coordinate is the index of the successive increases of N or T. The solution error and improvement efficiency are defined as in Equation (19),
where the solution error is the gap between the theoretical optimal solution and the optimal solution obtained by CGO, and k is the index of the tested settings of N and T. When N and T are increased, CGO obtains better solutions. However, as N and T grow, the efficiency of the improvement diminishes, which also wastes computation time. By comparison, in the selected 10-dimensional problem, with N set to 100 and T set to 200, CGO has almost reached its maximum optimization ability; further increases in N and T do not significantly improve CGO's ability to exploit.
Figure 15.
The influence of population size N on optimization.
Figure 16.
The influence of max iteration T on optimization.
Table 8 shows the solving time of CGO and the nine other algorithms on the thirty 10-dimensional CEC2017 functions. The same population size and maximum number of iterations were used for all optimizers. Each optimizer solved each benchmark function 20 times independently, and the runtimes were then averaged to obtain the mean time to solve a single problem. It can be seen that CGO is ahead of most advanced optimization algorithms in average processing time, including MVO, WOA, DBO, GWO, AVOA, BKA, and SMA. Among the compared algorithms, only HHO has a shorter computation time than CGO. This shows the high efficiency of the proposed algorithm.
Table 8.
The average solving time of CGO and nine other algorithms on thirty 10-D CEC2017 functions.
4.2. Testing Results on CEC2020 Real-World Constrained Problems
Unlike CEC2017, CEC2020 contains a set of engineering problems with real environmental constraints called CEC2020–RW. CEC2020–RW is highly non-convex and complex and contains several equality and inequality constraints. The penalty function method is used as the constraint processing method.
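The constraint handling is only named here (a penalty function method); one common static-penalty formulation is sketched below, in which the penalty weight, the equality tolerance, and the toy constraint are illustrative choices rather than the exact settings used in the experiments.

```python
import numpy as np

def penalized_fitness(x, objective, ineq_constraints=(), eq_constraints=(),
                      rho=1e6, eps=1e-4):
    """Static penalty method: add a large cost for every violated constraint.

    ineq_constraints: callables g(x) that must satisfy g(x) <= 0
    eq_constraints:   callables h(x) that must satisfy |h(x)| <= eps
    rho, eps:         illustrative penalty weight and equality tolerance
    """
    violation = 0.0
    for g in ineq_constraints:
        violation += max(0.0, g(x)) ** 2
    for h in eq_constraints:
        violation += max(0.0, abs(h(x)) - eps) ** 2
    return objective(x) + rho * violation

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 (i.e., 1 - x0 - x1 <= 0)
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
print(penalized_fitness(np.array([0.5, 0.5]), f, ineq_constraints=[g]))
```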
CEC2020-RW covers several types of problems, including industrial chemical processes, process synthesis and design problems, mechanical engineering problems, power system problems, and livestock feed ration optimization. Mathematical expressions for these problems can be found in [26]. This section used CGO and the nine other algorithms to solve 20 typical problems of CEC2020-RW. The names of the selected problems, their main parameters, and their theoretical optimal values are shown in Table 9. Each algorithm was independently executed 20 times, with 1000 iterations as the termination criterion, and the number of search agents was 100. In Table 9, D is the problem dimension, g is the number of equality constraints, h is the number of inequality constraints, and the last column gives the theoretical optimal value.
Table 9.
Names and parameters of 20 typical problems in CEC2020-RW.
After solving the problems, the results of CGO and the nine other algorithms on CEC2020-RW are shown in Table 10, where the best results are shown in bold. In addition, the Friedman ranking of each algorithm and the Wilcoxon test of each algorithm against CGO are shown in Table 10 and Table 11; CGO achieves an average Friedman rank of 1.4000 among the ten algorithms and meets the 5% significance level in the Wilcoxon tests against the other nine algorithms. CGO achieved the best results on all problems except RW07, RW08, and RW18 (where it ranked 2nd, 3rd, and 6th, respectively). Furthermore, Figure 17 shows the ranking of the algorithms' performances on CEC2020-RW. This shows that CGO can meet the needs of real engineering problems and delivers promising results.
Table 10.
Results of CGO and each algorithm on CEC2020-RW problems.
Table 11.
The Wilcoxon test result of CGO and other algorithms on CEC2020-RW.
Figure 17.
Ranking of CGO and other algorithms on CEC2020-RW problems.
4.3. Solving Robot Path Planning by CGO
4.3.1. Problem Definition
This section used the CGO algorithm to plan a robot's path from a starting point to a goal point while avoiding obstacles. We constructed a bounded 2D configuration space with fixed starting and goal points, and nine obstacles were distributed in the space, as shown in Figure 18. Table 12 lists the center position and radius of each obstacle. The number of path control points for path planning is 5 (not including the starting and goal points), so the dimension of the decision variable is 10. The maximum number of iterations and the number of search agents (100) were fixed, and the algorithm was run 10 times independently. The other parameters are defined as above.
Figure 18.
Configuration space for robot path planning.
Table 12.
Center coordinates and classes of obstacles.
The 2D path-planning problem in this study can be expressed as the following optimization model:
In the optimization, the obstacle threat is added to the objective function as a penalty term. The calculation of this penalty is illustrated in Figure 19: when a path point lies inside an obstacle, its distance d to the obstacle center is less than the radius r, and the point is considered threatened; if d ≥ r, the point is not threatened. The robot path is obtained from the control points solved by the optimizer through cubic spline interpolation, with a fixed total number of interpolation points.
Figure 19.
Obstacle threat penalty function.
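A compact NumPy/SciPy sketch of such a path objective (spline length plus obstacle penalties) is given below; the interpolation layout, penalty weight, and example values are illustrative and do not reproduce the exact Equation (20) formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def path_cost(control_xy, start, goal, obstacles, n_interp=100, penalty=1e3):
    """Path length of a cubic-spline path through the control points, plus a penalty
    for interpolated points that fall inside circular obstacles (d < r)."""
    pts = np.vstack([start, control_xy.reshape(2, -1).T, goal])  # waypoints incl. endpoints
    s = np.linspace(0.0, 1.0, len(pts))                          # spline parameter
    dense = CubicSpline(s, pts, axis=0)(np.linspace(0.0, 1.0, n_interp))

    length = np.sum(np.linalg.norm(np.diff(dense, axis=0), axis=1))
    threat = 0.0
    for cx, cy, r in obstacles:                                  # (center_x, center_y, radius)
        d = np.linalg.norm(dense - np.array([cx, cy]), axis=1)
        threat += np.sum(np.maximum(0.0, r - d))                 # depth of intrusion into obstacle
    return length + penalty * threat

# Illustrative call: a straight diagonal guess through two made-up obstacles
obstacles = [(3.0, 3.0, 1.0), (6.0, 7.0, 1.5)]
guess = np.linspace(1.0, 9.0, 5)                                 # x1..x5 = y1..y5 here
cost = path_cost(np.concatenate([guess, guess]),
                 np.array([0.0, 0.0]), np.array([10.0, 10.0]), obstacles)
```

In an actual run, `path_cost` would serve as the fitness function handed to the optimizer, with the 10-dimensional control-point vector as the decision variable.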
4.3.2. Optimized Performance of CGO
Figure 20b shows the spatial distribution of the paths planned in 10 repeated CGO runs, in which the red line represents the longest path, the green line represents the shortest path, and the path control points are marked with circular markers. All trajectories satisfy the obstacle-avoidance constraints.
Figure 20.
Results on robot path planning.
Among these ten planned paths, the shortest path length is 14.6166 and the longest is 16.2720. As can be seen from the figure, the difference between the two paths is small, which also shows that path planning in such an environment is a very challenging problem. The standard deviation of the 10 path lengths is 0.6868, indicating that CGO has good stability in solving this problem.
Figure 20a shows the optimal solution paths of the various algorithms, Table 13 lists the lengths of the solutions of the different algorithms, and Figure 21 shows the average evolution curves of the different optimization algorithms. It can be seen from Figure 20a that most algorithms tend to pass between the obstacles through the middle of the map to reach the target point, while the paths planned by GWO and DBO pass along the sides of the obstacle field, and MVO finds a different path from most algorithms. From the best and mean results in Table 13, the CGO algorithm is better than the others, and the DBO algorithm performs the worst. In terms of the standard deviation of the 10 generated path lengths, CGO has the smallest value. This further proves that the CGO algorithm is effective and guarantees both the optimality and the stability of the generated path. According to the average evolution curves (Figure 21), the evolution of the CGO algorithm is relatively rapid in the initial iterations (Iteration < 40) compared with DBO, MVO, HHO, and the other algorithms. The results show that the local search ability of CGO, especially its exploitation ability, is better than that of the existing advanced algorithms.
Table 13.
The results of CGO and other algorithms in robot path planning.
Figure 21.
Evolution curve of CGO and various algorithms in robot path planning.
In summary, the CGO proposed in this paper performs well in robot path planning. This shows that this algorithm has advantages in solving complex real problems and can be further researched and applied.
4.4. Extracting the Parameter of Photovoltaic Systems by CGO
4.4.1. Problem Definition
In the new energy field, photovoltaic (PV) systems are powerful tools for harnessing solar energy and converting it directly into electricity. Therefore, an efficient and accurate PV system model must be designed based on the parameters extracted from the measured current-voltage data.
This section used CGO and the nine other algorithms to extract the core parameters of photovoltaic systems. Three classical PV models, the single-diode model (SDM), the double-diode model (DDM), and the PV module-diode model (PVMM), were adopted; their equivalent circuit diagrams are shown in Figure 22. Five core parameters need to be extracted in the SDM: the photogenerated current ($I_{ph}$), the reverse saturation current ($I_{sd}$), the series resistance ($R_s$), the shunt resistance ($R_{sh}$), and the diode ideality factor (n), as depicted in Figure 22a. Based on Kirchhoff's current law, the output current ($I_L$) in the SDM can be calculated as in Equation (21):

$$I_L = I_{ph} - I_{sd}\left[\exp\!\left(\frac{q\,(V_L + I_L R_s)}{n k T}\right) - 1\right] - \frac{V_L + I_L R_s}{R_{sh}} \tag{21}$$

where the last term represents the current through the shunt (parallel) resistance, the bracketed term denotes the diode current, $V_L$ denotes the output voltage, q is the electron charge ($1.602 \times 10^{-19}$ C), k indicates Boltzmann's constant ($1.381 \times 10^{-23}$ J/K), and T is the temperature measured in Kelvin. The main purpose of this problem is to minimize the difference between the data estimated by the algorithm and the real measured data, so the root mean square error (RMSE) is adopted as the objective function, as in Equation (22):

$$\text{RMSE}(x) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(I_{L,m}^{\text{measured}} - I_{L,m}^{\text{calculated}}(x)\right)^{2}} \tag{22}$$

where M is the number of specified experimental data points and x represents the solution vector of the five parameters ($I_{ph}$, $I_{sd}$, $R_s$, $R_{sh}$, n). Similarly, the output current $I_L$ in the DDM (Figure 22b) and the PVMM (Figure 22c) can be calculated as in Equations (23) and (24), respectively.
Figure 22.
Equivalent circuit diagrams for photovoltaic cells.
In the DDM, $I_{sd1}$ denotes the diffusion current and $I_{sd2}$ indicates the saturation current; $n_1$ and $n_2$ refer to the ideality factors of the two diodes, and the solution vector contains seven parameters ($I_{ph}$, $I_{sd1}$, $I_{sd2}$, $R_s$, $R_{sh}$, $n_1$, $n_2$). In the PVMM, $N_p$ denotes the number of cells connected in parallel and $N_s$ denotes the number of cells connected in series; the solution vector again contains five parameters ($I_{ph}$, $I_{sd}$, $R_s$, $R_{sh}$, n). The bounds of the different parameters in the three PV models are reported in Table 14.
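The implicit appearance of $I_L$ on both sides of Equation (21) is often handled by substituting the measured current into the exponential term; the sketch below follows that common approximation for the SDM objective and is not the authors' exact implementation (the 33 °C cell temperature is taken from Section 4.4.2, while q and k are the standard physical constants).

```python
import numpy as np

Q = 1.602176634e-19   # electron charge [C]
K = 1.380649e-23      # Boltzmann constant [J/K]

def sdm_current(params, v_meas, i_meas, temp_k=306.15):
    """Single-diode model output current (Equation (21)), with the measured current
    substituted into the implicit term (a common explicit approximation)."""
    i_ph, i_sd, r_s, r_sh, n = params
    vt = n * K * temp_k / Q            # ideality factor times thermal voltage
    v_d = v_meas + i_meas * r_s        # voltage across the diode / shunt branch
    return i_ph - i_sd * (np.exp(v_d / vt) - 1.0) - v_d / r_sh

def sdm_rmse(params, v_meas, i_meas, temp_k=306.15):
    """Root mean square error between modeled and measured currents (Equation (22))."""
    i_model = sdm_current(params, v_meas, i_meas, temp_k)
    return np.sqrt(np.mean((i_model - i_meas) ** 2))
```

`sdm_rmse` would serve as the fitness function passed to the optimizer, with the five-parameter vector as the decision variable.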
Table 14.
Bounds of different parameters in three PV models.
4.4.2. Optimized Performance of CGO
In this experiment, the benchmark measured current-voltage data were obtained from Easwarakhanthan et al. [27], who used a commercial RTC France silicon solar cell with a 57 mm diameter (under 1000 W/m² at 33 °C). We fixed the population size and maximum number of iterations and independently ran the experiment 20 times on SDM, DDM, and PVMM with each optimizer. Table 15 shows the parameter extraction results of each algorithm on the three models. CGO achieved the best result on SDM, with RMSE = 0.001875, and the second-best results on DDM and PVMM, behind EWOA. This shows that CGO can solve PV parameter extraction and is ahead of most optimization algorithms. Figure 23 shows the comparison between the test results of CGO on SDM, DDM, and PVMM and the actual measured values. We can observe that the experimental data obtained by the CGO algorithm fit the measured data very well.
Table 15.
The results of CGO and each algorithm on PV parameter extraction.
Figure 23.
Comparisons between measured data and experimental data attained by CGO for SDM, DDM, and PVMM.
5. Conclusions and Discussion
This paper proposed a novel meta-heuristic optimization algorithm, the crown growth optimizer (CGO), which provides an innovative optimization method by simulating the growing, sprouting, and pruning mechanisms of tree crown growth. CGO introduces the two core stages of growing and sprouting and combines them with a pruning mechanism that balances the two stages, thus improving the search efficiency and the quality of the solutions.
Experimental results on CEC2017 and CEC2020-RW show that CGO achieves remarkable results on multidimensional optimization problems, showing strong global search capability and stability. Notably, when solving complex engineering problems, CGO showed high convergence speed and accuracy. The compared algorithms included SMA, BKA, DBO, GWO, MVO, HHO, WOA, EWOA, and AVOA. The experimental results show that CGO performs excellently on most test functions and in the practical applications.
In addition, this paper verified the practical application value of CGO through experiments in two specific applications: robot path planning and photovoltaic parameter extraction. In the robot path-planning problem, CGO could effectively plan the optimal path, avoid obstacles and minimize the path length. In the photovoltaic parameter extraction, CGO successfully optimized the model parameters and improved the efficiency of the photovoltaic system. These application examples further prove the vast application potential of CGO in practical engineering problems.
Although CGO has performed well in several benchmarks and real-world applications, there are still some shortcomings and room for improvement. Future research can consider the following directions:
- Adaptive parameter adjustment: introducing an adaptive parameter adjustment mechanism enables the algorithm to adjust parameters in different types of problems automatically.
- Hybrid algorithm: combined with the advantages of other optimization algorithms, further enhancing the global search and local search capabilities of CGO to improve the algorithm’s overall performance.
- Parallelization and distributed computing: through parallelization and distributed computing technology, improving the computational efficiency of CGO, especially when dealing with large-scale and high-dimensional problems.
In conclusion, the crown growth optimizer, as a new meta-heuristic optimization algorithm, not only puts forward a new optimization mechanism in theory but also shows its effectiveness and broad application prospects in practice. Future research and improvements will further enhance the performance of CGO, allowing it to play a more significant role in more complex optimization problems.
Author Contributions
Conceptualization, C.L.; formal analysis, W.L.; investigation, D.Z.; methodology, C.L., D.Z. and W.L.; software, C.L. and W.L.; validation, C.L. and D.Z.; writing—original draft, C.L.; writing—review and editing, D.Z. and W.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are available in a publicly accessible repository. The CGO code and CEC2017 and CEC2020-RW’s parameters are included at https://github.com/RivenSartre/Crown-Growth-Optimizer, accessed on 5 June 2024.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
- The main parameters of CGO.
| D | Problem dimensions |
| N | The number of agents |
|  | The number of growing agents |
|  | The number of sprouting agents |
|  | Growing sub-population |
|  | Sprouting sub-population |
| t | Iterations |
| T | Total iterations |
|  | Search lower bound |
|  | Search upper bound |
|  | Searching agent |
|  | The centroid agent |
|  | The median agent |
|  | The elite pool |
| V | Growth velocity |
|  | Gaussian number |
|  | Random numbers from 0 to 1 |
|  | Growth reference directions |
|  | Sprouting reference directions |
References
- Sayouti, A.B.Y. Hybrid Meta-Heuristic Algorithms for Optimal Sizing of Hybrid Renewable Energy System: A Review of the State-of-the-Art. Arch. Comput. Methods Eng. 2022, 29, 4049–4083. [Google Scholar]
- Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA; London, UK, 1992; pp. 89–109. [Google Scholar]
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
- Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
- Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
- Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
- Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
- Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
- Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
- Luo, J.; Liang, Q.; Li, H. UAV penetration mission path planning based on improved holonic particle swarm optimization. J. Syst. Eng. Electron. 2023, 34, 197–213. [Google Scholar] [CrossRef]
- Allouani, F.; Abdelaziz, A.; Chris, H.; Xiao-Zhi, G.; Sofiane, B.; Ilyes, B.; Nadhira, K.; Hanen, S. Enhancing Individual UAV Path Planning with Parallel Multi-Swarm Treatment Coronavirus Herd Immunity Optimizer (PMST-CHIO) Algorithm. IEEE Access 2024, 12, 28395–28416. [Google Scholar]
- Wentao, W.; Chen, Y.; Jun, T. SGGTSO: A Spherical Vector-Based Optimization Algorithm for 3D UAV Path Planning. Drones 2023, 7, 452. [Google Scholar] [CrossRef]
- Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation 2017, San Sebastian, Spain, 5–8 June 2017; p. 1. [Google Scholar]
- Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
- Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 7, 98. [Google Scholar] [CrossRef]
- Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2022, 79, 7305–7336. [Google Scholar] [CrossRef]
- Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
- Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
- Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
- Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
- Ali, A.K.W.Z. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar]
- Easwarakhanthan, T.; Bottin, J.; Bouhouch, I.; Boutrit, C. Nonlinear Minimization Algorithm for Determining the Solar Cell Parameters with Microcomputers. Int. J. Sol. Energy 1986, 4, 1–12. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
