Article

PECSO: An Improved Chicken Swarm Optimization Algorithm with Performance-Enhanced Strategy and Its Application

1 School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
2 School of Information Science, Guangdong University of Finance and Economics, Guangzhou 510320, China
* Authors to whom correspondence should be addressed.
Biomimetics 2023, 8(4), 355; https://doi.org/10.3390/biomimetics8040355
Submission received: 14 June 2023 / Revised: 24 July 2023 / Accepted: 4 August 2023 / Published: 10 August 2023
(This article belongs to the Special Issue Bioinspired Algorithms)

Abstract

To address the low convergence accuracy, slow convergence speed, and frequent entrapment in local optima of the Chicken Swarm Optimization (CSO) algorithm, a CSO algorithm with a performance-enhanced strategy (PECSO) is proposed. Firstly, the hierarchy is established by a free grouping mechanism, which enhances the diversity of individuals within the hierarchy and expands the exploration range of the search space. Secondly, niches are formed with the hens as their centers. By introducing synchronous-update and spiral-learning strategies among the individuals in each niche, the balance between exploration and exploitation can be maintained more effectively. Finally, the performance of the PECSO algorithm is verified on the CEC2017 benchmark functions. Experiments show that, compared with other algorithms, the proposed algorithm offers faster convergence, higher precision and stronger stability. Meanwhile, in order to investigate the potential of the PECSO algorithm in dealing with practical problems, three engineering optimization cases and the inverse kinematic solution of a robot are considered. The simulation results indicate that the PECSO algorithm obtains good solutions to the engineering optimization problems and is highly competitive in solving the inverse kinematics of robots.

1. Introduction

Swarm intelligence algorithms have been widely recognized since they were proposed in the 1990s [1]. They have a simple structure, good scalability, wide adaptability and strong robustness. Based on different biological habits and social behaviors, scholars have proposed numerous swarm intelligence algorithms, such as the particle swarm optimization algorithm (PSO) [2], genetic algorithm (GA) [3], bat algorithm (BA) [4], social spider optimizer [5], CSO algorithm [6], moth–flame optimization algorithm (MFO) [7], whale optimization algorithm (WOA) [8], marine predators algorithm [9], battle royale optimization algorithm (BRO) [10], groundwater flow algorithm [11], egret swarm optimization algorithm [12], coati optimization algorithm [13], wild geese migration optimization algorithm [14], drawer algorithm [15], snake optimizer [16], fire hawk optimizer [17], etc. These algorithms are studied in terms of parameter settings, convergence, topology and applications. Among them, the CSO algorithm is a bio-inspired swarm intelligence optimization technique proposed by Meng et al. and named after the foraging behavior of chickens. Its main idea is to construct a random search method by simulating the behavior of roosters, hens and chicks in a flock; on this basis, the optimization problem is solved through three position-update equations within the hierarchy.
The algorithm has the advantages of simple principles, easy implementation and simple parameter setting. It has been widely used in trajectory optimization, economic dispatching, power systems, image processing, wind speed prediction [18], and other fields. For example, Mu et al. [19] used the CSO algorithm to optimize a robot trajectory for polishing a metal surface, with the objective of minimizing the running time under kinematic constraints such as velocity and acceleration. Li et al. [20] applied the CSO algorithm to hypersonic vehicle trajectory optimization. Yu et al. [21] proposed a hybrid localization scheme for mine monitoring using the CSO algorithm and a wheel graph, which minimized the inter-cluster complexity and improved the localization accuracy. Lin et al. [22] designed a CSO algorithm (GCSO) based on high-efficiency graphics processing units, which increased population diversity and accelerated convergence through parallel operations.
With the continuous expansion of application fields and the increase in problem complexity [23], the CSO algorithm has shown some deficiencies in solving complex, high-dimensional problems, including low convergence accuracy and poor global exploration ability [24]. To solve these problems, many scholars have improved it, and many variants of the CSO algorithm now exist. In terms of population initialization, the diversity of the population is enhanced by introducing a variety of strategies, such as chaos theory, mutation mechanisms, elimination–dispersion operations and deduplication factors [25,26], which are more conducive to finding the optimal solution to the problem. However, most improvements have been designed for the individual update methods. Meng and Li [27] proposed to improve the CSO algorithm by using quantum theory to modify the update method of the chicks; the algorithm was applied to the parameter optimization of an improved Dempster–Shafer structural probability fuzzy logic system, achieving good results for wind speed forecasting. Wang et al. [28] introduced an exploration–exploitation balance strategy into the CSO algorithm; 102 benchmark functions and two practical problems verified its excellent performance. Liang et al. [29] designed an ICSO algorithm by using Lévy flight and nonlinear weight reduction and verified its outstanding performance in robot path planning. The other type is hybrid meta-heuristic algorithms, which combine CSO with other algorithms. Wang et al. [30] provided an effective method to solve multi-objective optimization problems based on an optimized CSO algorithm; the improved scheme includes dual external archives, a boundary learning strategy and fast non-dominated sorting, and its superior performance has been verified on 14 benchmark functions. Li et al. [31] introduced an information-sharing strategy, a spiral motion strategy and chaotic perturbation mechanisms into the CSO algorithm, which improved the identification of photovoltaic model parameters. In addition, the combination of multiple algorithms is also a hot research topic. Deore et al. [32] integrated a chimp–CSO algorithm into the training of a network intrusion detection process. Torabi and Safi-Esfahani [33] combined the improved raven roosting optimization algorithm with the CSO algorithm to solve the task scheduling problem. Pushpa et al. [34] integrated a fractional artificial bee–CSO algorithm for virtual machine placement in the cloud. These hybrid algorithms have been shown to have superior computational performance.
In summary, the CSO algorithm outperforms many nature-inspired algorithms on most benchmark functions and when solving practical problems. However, the "no free lunch" theorem indicates that no single algorithm performs best on all problems, so further improvement of the CSO algorithm remains of great significance [35]. Therefore, an improved CSO algorithm with a performance-enhanced strategy, named the PECSO algorithm, is proposed. It introduces a free grouping mechanism together with synchronous-update and spiral-learning strategies, and the position-update methods of the roosters, hens and chicks are redesigned.
The main contributions of this paper are summarized as follows:
  • A hierarchy using a free grouping mechanism is proposed, which not only bolsters the diversity of individuals within this hierarchy but also enhances the overall search capability of the population;
  • Synchronous updating and spiral learning strategies are implemented that fortify the algorithm’s ability to sidestep local optima. This approach also fosters a more efficient balance between exploitation and exploration;
  • The PECSO algorithm exhibits superior global search capability, faster convergence and higher accuracy, as confirmed on the CEC2017 benchmark functions;
  • The exceptional performance of the PECSO algorithm is further substantiated by its successful application to two practical problems.
The rest of this paper is organized as follows: Section 2 explains the foundational principles of the CSO algorithm. Section 3 introduces our proposed PECSO algorithm and elaborates on its various facets. In Section 4, we conduct benchmark function experiments using the PECSO algorithm. Section 5 demonstrates the resolution of two practical problems employing the PECSO algorithm. Finally, in Section 6, we provide a comprehensive summary of the paper, discuss the study’s limitations, and suggest directions for future research.

2. Chicken Swarm Optimization Algorithm

The classical CSO algorithm regards the solution of the problem as a source of food for chickens, and the fitness value in the algorithm represents the quality of the food. According to the fitness value, individuals in the chicken flock are sorted, and the flock is divided into several subgroups. Within each subgroup, the individuals are divided into three levels: roosters, hens and chicks, whose numbers are Nr, Nh and Nc, respectively.
In the algorithm, $x_{i,j}^{t}$ represents the position of the i-th chicken in the j-th dimension of the search space at the t-th iteration. The individuals with the lowest (best) fitness values are selected as the roosters. The roosters walk randomly in the search space, and their positions are updated as shown in Equation (1).
$x_{i,j}^{t+1} = x_{i,j}^{t} \cdot \left( 1 + \mathrm{randn}\left(0, \sigma^{2}\right) \right)$ (1)
where $\mathrm{randn}(0, \sigma^{2})$ is a Gaussian random number with mean 0 and variance $\sigma^{2}$. The calculation of $\sigma^{2}$ is shown in Equation (2).
$\sigma^{2} = \begin{cases} 1, & f_i \le f_k \\ \exp\left( \dfrac{f_k - f_i}{|f_i| + \varepsilon} \right), & \text{otherwise} \end{cases}$ (2)

where $k$ is the index of a randomly selected rooster ($k \ne i$), $f$ denotes the fitness value and $\varepsilon$ is a small constant used to avoid division by zero.
Individuals with better fitness are selected as hens, which move following the rooster. The hen’s position is updated as shown in Equation (3).
$x_{i,j}^{t+1} = x_{i,j}^{t} + S_1 \cdot \mathrm{rand} \cdot \left( x_{r1,j}^{t} - x_{i,j}^{t} \right) + S_2 \cdot \mathrm{rand} \cdot \left( x_{r2,j}^{t} - x_{i,j}^{t} \right)$ (3)
where r1 is the rooster followed by the i-th hen, r2 is a randomly selected rooster or hen (r2 ≠ r1), and rand is a uniform random number in [0, 1]. The weights are calculated as $S_1 = \exp\left( (f_i - f_{r1}) / (|f_i| + \varepsilon) \right)$ and $S_2 = \exp\left( f_{r2} - f_i \right)$, where $f_{r1}$ and $f_{r2}$ are the fitness values corresponding to r1 and r2, respectively.
Except for the roosters and hens, other individuals are defined as chicks. The chicks follow their mother’s movement, and the chick’s position is updated, as shown in Equation (4).
$x_{i,j}^{t+1} = x_{i,j}^{t} + FL \cdot \left( x_{m,j}^{t} - x_{i,j}^{t} \right)$ (4)
where $FL \in [0, 2]$ is a following coefficient and $x_{m,j}^{t}$ is the position of the mother of the i-th chick.
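For illustration, a minimal NumPy sketch of the three classical updates in Equations (1)–(4) is given below; the function names and the way the indices k, r1, r2 and the mother hen are supplied are assumptions made for the sketch rather than details specified by the original algorithm description.

```python
import numpy as np

eps = np.finfo(float).eps  # the small constant epsilon in Equation (2)

def rooster_update(x_i, f_i, f_k):
    """Equations (1)-(2): random walk scaled by a fitness-dependent variance."""
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + eps))
    return x_i * (1.0 + np.random.randn(*x_i.shape) * np.sqrt(sigma2))

def hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2):
    """Equation (3): follow the leading rooster r1 and a random neighbor r2."""
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + eps))
    s2 = np.exp(f_r2 - f_i)
    return (x_i
            + s1 * np.random.rand() * (x_r1 - x_i)
            + s2 * np.random.rand() * (x_r2 - x_i))

def chick_update(x_i, x_m, fl=None):
    """Equation (4): follow the mother hen with step FL drawn from [0, 2]."""
    fl = np.random.uniform(0, 2) if fl is None else fl
    return x_i + fl * (x_m - x_i)
```

All positions are assumed to be NumPy vectors; boundary handling and the selection of roosters, hens and chicks are omitted here and are covered by the hierarchy description above.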

3. Improved CSO Algorithm

Many variants of the CSO algorithm have been proposed. However, slow convergence and entrapment in local optima are still the main shortcomings of the CSO algorithm when solving practical optimization problems. Therefore, to improve the convergence accuracy and speed of the CSO algorithm and to achieve a better balance between exploitation and exploration, this paper proposes the PECSO algorithm, which is based on a free grouping mechanism and synchronous-update and spiral-learning strategies. We present the PECSO algorithm in detail, give its mathematical model and pseudocode, and perform a time complexity analysis.

3.1. New Population Distribution

The hierarchy structure of the CSO algorithm is established by the fitness value, which is simple but suffers from low diversity of individuals within the hierarchy. Therefore, we introduce a free grouping mechanism to redesign the swarm hierarchy, which improves the diversity of individuals across the different hierarchy levels. Firstly, the method freely divides the randomly initialized population into 0.5Nr groups. Within each group, roosters (nr), hens (nh) and chicks (nc) are selected according to their fitness values. Secondly, multiple niches are established within each group, with the hens as centers and L as the radius, as shown in Equation (5). Finally, each hen summons her chicks within the niche, as shown in Equation (6). The resulting population distribution is shown in Figure 1.
$L = \alpha \cdot \left( ub_d - lb_d \right) / N_h$ (5)
$x_c = x_h + \left( 2 \cdot \mathrm{rand} - 1 \right) \cdot L$ (6)
where $ub_d$/$lb_d$ is the upper/lower boundary of the D-dimensional solution space, and $\alpha$ is the radius factor of the niche.
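A minimal sketch of this niche construction, following Equations (5) and (6), is shown below; the interpretation of the random term as (2·rand − 1)·L per dimension and the boundary clipping are assumptions of the sketch.

```python
import numpy as np

def build_niche(x_hen, ub, lb, n_hens, n_chicks, alpha=1.0):
    """Place n_chicks chicks uniformly inside a hyper-box niche of radius L
    around one hen; alpha is the radius factor of Equation (5)."""
    ub, lb = np.asarray(ub, float), np.asarray(lb, float)
    L = alpha * (ub - lb) / n_hens                       # Equation (5), per dimension
    # Equation (6): x_c = x_h + (2*rand - 1) * L, with rand uniform in [0, 1]
    chicks = x_hen + (2.0 * np.random.rand(n_chicks, x_hen.size) - 1.0) * L
    return np.clip(chicks, lb, ub)                       # keep the chicks inside the search bounds
```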

3.2. Individual Updating Methods

This subsection introduces several updating methods that we propose, including a best-guided search for roosters, a bi-objective search for hens and a simultaneous and spiral search for chicks.

3.2.1. Best-Guided Search for Roosters

Roosters are the leaders carrying excellent information, so their selection and updating are important. Therefore, this paper proposes a best-guided search method for roosters. Specifically, we discard the practice of selecting a single rooster in the traditional CSO algorithm and instead select multiple individuals with better fitness values within each group as roosters. Meanwhile, exploration is carried out with the global optimal individual ($x_{best}$) as the target. The improved roosters can effectively utilize the historical experience of the population and have a stronger exploitation ability, overcoming the low convergence accuracy of the CSO algorithm. The updating step of the roosters is shown in Equation (7).
$S_R = x_{best} - x_{i,r}^{t}$ (7)
The updated position of the roosters is shown in Equation (8).
$x_{i,r}^{t+1} = x_{i,r}^{t} + \mathrm{randn}\left(0, \sigma^{2}\right) \cdot S_R$ (8)
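A minimal sketch of Equations (7) and (8) is given below; the helper signature and the assumption that σ² is still computed from Equation (2) with a comparison rooster k are illustrative choices, not details fixed by the text above.

```python
import numpy as np

def rooster_step(x_rooster, x_best, f_i, f_k, eps=np.finfo(float).eps):
    """Move the rooster along the direction towards the global best,
    scaled by the Gaussian term of Equation (2)."""
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + eps))
    s_r = x_best - x_rooster                                    # Equation (7)
    return x_rooster + np.random.randn(*x_rooster.shape) * np.sqrt(sigma2) * s_r   # Equation (8)
```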

3.2.2. Bi-Objective Search for Hens

Hens are the middle level of the CSO population; they should have both exploration and exploitation capabilities and coordinate the two roles. On the one hand, they can inherit the excellent information of the roosters. On the other hand, they repel other hens and protect the chicks from being disturbed while performing their exploratory functions. On this basis, this paper reconsiders the search goal of the hens and proposes a bi-objective search strategy. Specifically, (1) the position of the current optimal solution is combined with the position of the best rooster r1 in the group, which makes full use of the optimal information and improves the exploitation ability of the hen; (2) the position of rooster r2 within the group (r2 ≠ r1) is combined with the positions of hens in other niches, which enhances diversity and enables large-scale exploration. The updating step ($S_L$) of the hens is shown in Equation (9).
$S_L = \begin{cases} c_1 \left( x_{i,r1}^{t} - x_{i,h}^{t} \right) + c_2 \left( x_{best} - x_{i,h}^{t} \right), & \text{if } p < 0.9 \\ c_3 \left( x_{i,r2}^{t} - x_{i,h}^{t} \right) + c_4 \left( x_{i,k}^{t} - x_{i,h}^{t} \right), & \text{otherwise} \end{cases}$ (9)
The updated position of the hens is shown in Equation (10).
$x_{i,h}^{t+1} = x_{i,h}^{t} + \eta \cdot S_L$ (10)
where $p \in [0, 1]$ is a random number; $c_1$, $c_2$, $c_3$ and $c_4$ are random numbers between 0 and 1; $x_{i,r1}^{t}$ is the rooster followed by the hen; $x_{i,r2}^{t}$ is a randomly selected rooster (r2 ≠ r1); and $x_{i,k}^{t}$ is a competing hen from another niche (k ≠ h). $x_{i,r1}^{t}$ and $x_{i,h}^{t}$ are the positions of the i-th rooster and hen at the t-th iteration, respectively. $\eta \in (0, 1)$ is the moving-step factor of the niche.
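The following sketch of Equations (9) and (10) returns both the new hen position and the step S_L, since the same step is reused by the chicks of the niche in the next subsection; the default eta = 0.5 follows the parameter setting reported later in Table 1.

```python
import numpy as np

def hen_step(x_hen, x_r1, x_r2, x_best, x_other_hen, eta=0.5):
    """With probability 0.9 exploit the leading rooster r1 and the global best,
    otherwise explore towards a random rooster r2 and a hen from another niche."""
    c1, c2, c3, c4 = np.random.rand(4)
    if np.random.rand() < 0.9:                       # exploitation branch of Equation (9)
        s_l = c1 * (x_r1 - x_hen) + c2 * (x_best - x_hen)
    else:                                            # exploration branch of Equation (9)
        s_l = c3 * (x_r2 - x_hen) + c4 * (x_other_hen - x_hen)
    return x_hen + eta * s_l, s_l                    # Equation (10); s_l is reused by the chicks
```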

3.2.3. Simultaneous and Spiral Search for Chicks

The chicks are followers of the hen and develop excellent exploration abilities by observing and learning the exploration behavior of the hen. Based on this, we propose synchronous updating and spiral learning strategies for chicks. Synchronized updating means that all chicks follow the same direction and step size as the hen for updating movement within a niche. This method can ensure the consistency of the hen and the chicks in the niche, which enhances the local exploration ability and jointly explores the potential solution space. The process of synchronous update is shown in Figure 2. Spiral learning means that individual chicks can move towards the hen (central point) in the niche, and the step size is gradually reduced during the movement, thus searching for the optimum more accurately. Specifically, the distance between the current individual chick and the hen is calculated, and the spiral radius is determined according to certain rules. Afterwards, the chick’s position is updated through the spiral radius and angle increments, and the movement trajectory is shown in Figure 3. Compared with the traditional linear updating method, the spiral update can expand the updating dimension of the chick’s position, make it more diverse and enhance the exploration ability of the algorithm. Through the synergy of synchronous updating and spiral learning, we can obtain the spiral updating step of the chicks, as shown in Equation (11).
$S_O = S \cdot e^{\beta \vartheta} \cdot \cos\left( 2 \pi \vartheta \right) + x_{i,h}^{t}$ (11)
The updated position of the chicks is shown in Equation (12).
$x_{i,c}^{t+1} = x_{i,c}^{t} + \eta \cdot S_L + \varphi \cdot S_O$ (12)
where $S = \left| x_{i,c}^{t} - x_{i,h}^{t} \right|$ is the distance between the chick and its hen, $\beta = 1$ is the logarithmic spiral coefficient, $\vartheta \in [-1, 1]$ is the spiral angle parameter, and $\varphi \in (0, 1)$ is a random number.
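A sketch of Equations (11) and (12) combining the synchronous step s_l of the hen with the spiral term is given below; taking S as the absolute difference between chick and hen, and the defaults eta = 0.5 and beta = 1, follow the definitions above, while everything else in the signature is an assumption of the sketch.

```python
import numpy as np

def chick_step(x_chick, x_hen, s_l, eta=0.5, beta=1.0):
    """Synchronous update (shared hen step s_l) plus a logarithmic-spiral move
    around the hen, as in Equations (11)-(12)."""
    theta = np.random.uniform(-1, 1)                 # spiral angle parameter in [-1, 1]
    phi = np.random.rand()                           # random weight in (0, 1)
    s = np.abs(x_chick - x_hen)                      # distance term S in Equation (11)
    s_o = s * np.exp(beta * theta) * np.cos(2 * np.pi * theta) + x_hen   # Equation (11)
    return x_chick + eta * s_l + phi * s_o           # Equation (12)
```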

3.3. The Implementation and Computational Complexity of PECSO Algorithm

3.3.1. The Implementation of PECSO Algorithm

The PECSO algorithm is used to optimize the diversity and update methods of individuals in different levels of the CSO algorithm. The specific pseudocode is given in Algorithm 1, and Figure 4 shows the flowchart of the PECSO algorithm.
Algorithm 1: Pseudocode of PECSO algorithm
Initialize a population of N chickens and define the related parameters;
While t < Gmax
 If (t % G == 0)
  Free grouping of populations and selection of roosters and hens within each group based on fitness values;
  Many niches are established with the hens as the center and L as the radius, according to Equations (5) and (6);
  Chicks are summoned by the hens within each niche to re-establish the hierarchy, and their niche membership is marked.
 End if
 For i = 1: Nr
  Update the position of the roosters by Equation (8);
 End for
 For i = 1: Nh
  Synchronous update step of the niche is calculated by Equation (9);
  Update the position of the hens by Equation (10);
 End for
 For i = 1: Nc
  Spiral learning of chicks by Equation (11);
  Update the position of the chicks by Equation (12);
 End for
Evaluate the new solutions, and accept them if they are superior to the previous ones;
End while
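To make the structure of Algorithm 1 concrete, a compact, self-contained Python sketch is given below under simplifying assumptions: the 0.5Nr free groups are collapsed into a single group (so role assignment is by global fitness ranking), niches are tracked only through each chick's mother index, and the role fractions of 0.2 follow the common-parameter settings of Section 4.1. Parameter names such as nr_frac and nh_frac are placeholders rather than the paper's notation.

```python
import numpy as np

def pecso(fobj, dim, lb, ub, n=100, t_max=500, g=10, eta=0.5, alpha=1.0,
          nr_frac=0.2, nh_frac=0.2):
    """Compact sketch of Algorithm 1 for minimizing fobj with scalar bounds lb, ub."""
    eps = np.finfo(float).eps
    lb, ub = np.full(dim, lb, float), np.full(dim, ub, float)
    x = lb + np.random.rand(n, dim) * (ub - lb)
    f = np.apply_along_axis(fobj, 1, x)
    x_best, f_best = x[f.argmin()].copy(), f.min()

    n_r, n_h = int(nr_frac * n), int(nh_frac * n)
    L = alpha * (ub - lb) / max(n_h, 1)                       # Equation (5)

    for t in range(t_max):
        if t % g == 0:                                        # rebuild the hierarchy every G iterations
            order = np.argsort(f)                             # single-group simplification of free grouping
            roosters, hens, chicks = order[:n_r], order[n_r:n_r + n_h], order[n_r + n_h:]
            mother = np.random.choice(hens, size=chicks.size) # each chick joins one hen's niche
            x[chicks] = np.clip(x[mother] + (2 * np.random.rand(chicks.size, dim) - 1) * L,
                                lb, ub)                       # Equation (6)
            f[chicks] = np.apply_along_axis(fobj, 1, x[chicks])

        new_x = x.copy()
        for i in roosters:                                    # best-guided search, Equations (7)-(8)
            k = np.random.choice(roosters)
            sigma2 = 1.0 if f[i] <= f[k] else np.exp((f[k] - f[i]) / (abs(f[i]) + eps))
            new_x[i] = x[i] + np.random.randn(dim) * np.sqrt(sigma2) * (x_best - x[i])

        s_l = {}                                              # hens' steps, reused by their chicks
        for i in hens:                                        # bi-objective search, Equations (9)-(10)
            r1, r2 = (np.random.choice(roosters, 2, replace=False)
                      if n_r > 1 else (roosters[0], roosters[0]))
            other = np.random.choice(hens)                    # ideally a hen from another niche
            c = np.random.rand(4)
            if np.random.rand() < 0.9:
                step = c[0] * (x[r1] - x[i]) + c[1] * (x_best - x[i])
            else:
                step = c[2] * (x[r2] - x[i]) + c[3] * (x[other] - x[i])
            s_l[i] = step
            new_x[i] = x[i] + eta * step

        for idx, i in enumerate(chicks):                      # synchronous + spiral search, Equations (11)-(12)
            m = mother[idx]
            theta, phi = np.random.uniform(-1, 1), np.random.rand()
            s_o = np.abs(x[i] - x[m]) * np.exp(theta) * np.cos(2 * np.pi * theta) + x[m]
            new_x[i] = x[i] + eta * s_l[m] + phi * s_o

        new_x = np.clip(new_x, lb, ub)
        new_f = np.apply_along_axis(fobj, 1, new_x)
        better = new_f < f                                    # greedy acceptance of improved solutions
        x[better], f[better] = new_x[better], new_f[better]
        if f.min() < f_best:
            x_best, f_best = x[f.argmin()].copy(), f.min()
    return x_best, f_best
```

A call such as pecso(lambda v: float(np.sum(v**2)), dim=10, lb=-100, ub=100) illustrates the intended usage on a simple sphere function.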

3.3.2. The Computational Complexity of PECSO Algorithm

The computational complexity refers to the amount of computational work required during the algorithm's execution and mainly depends on the number of operations executed repeatedly. The computational complexity of the PECSO algorithm is described using Big-O notation. According to Algorithm 1, the population size, maximum number of iterations and dimension are denoted by N, T and D, respectively.
The computational complexity of the CSO algorithm mainly comprises population initialization $O(2N \times D)$, population update $O(2N \times T \times D)$ and hierarchy update $O(N \times T \times D / G)$. Therefore, $O_{\mathrm{CSO}}$ is shown in Equation (13).
$O_{\mathrm{CSO}} = O(2N \times D) + O(2N \times T \times D) + O(N \times T \times D / G)$ (13)
The computational complexity of the PECSO algorithm mainly comprises population initialization $O(2N \times D)$, the position update of individuals within the population $O(2N \times T \times D)$, and the establishment of the hierarchy $O((N + N + N_h) \times T \times D / G)$ based on the free grouping strategy (shuffling the order, free grouping and summoning the chicks). Therefore, $O_{\mathrm{PECSO}}$ is shown in Equation (14).
$O_{\mathrm{PECSO}} = O(2N \times D) + O(2N \times T \times D) + O((N + N + N_h) \times T \times D / G)$ (14)
It can be seen from Equations (13) and (14) that the computational complexities of the PECSO and CSO algorithms are of the same order of magnitude. However, the PECSO algorithm adds two steps when updating the hierarchical relationship every G iterations, namely shuffling the population order and the summoning of the chicks by the hens. Therefore, the computational complexity of the PECSO algorithm is slightly higher than that of the CSO algorithm.
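As a rough worked example, using the settings of Section 4.1 (N = 100, T = 500, D = 30, G = 10) and assuming $N_h = 0.2N = 20$, the hierarchy-update terms amount to about $N \times T \times D / G = 150{,}000$ elementary operations for CSO and $(N + N + N_h) \times T \times D / G = 330{,}000$ for PECSO, both of which are small compared with the shared position-update term $2N \times T \times D = 3{,}000{,}000$; the overall complexity of both algorithms therefore remains $O(N \times T \times D)$.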

4. Simulation Experiment and Result Analysis

In this section, first, we perform the experimental settings, including the selection of parameters and benchmark functions. Secondly, the qualitative analysis of the PECSO algorithm is carried out in terms of four indexes (2D search history, 1D trajectory, average fitness values and convergence curves). Finally, the computational performance of the PECSO algorithm is quantitatively analyzed and compared with the other seven algorithms, in which three measurement criteria, including mean, standard deviation (std) and time, are considered; the unit of time is seconds (s).

4.1. Experimental Settings

Parameters: the common parameters of all algorithms are set to the same values, with N = 100, T = 500 and D = 10, 30, 50. The common parameters of the CSO algorithm are $N_r = 0.2N$, $N_h = 0.2N$, $N_c = N - N_r - N_h$ and $G = 10$. The other main parameters of each algorithm are shown in Table 1. In addition, the experiment for each benchmark function is repeated 50 times to reduce the influence of random factors.
Benchmark Functions: this paper selects the CEC2017 benchmark functions (excluding F2) for the experiments [36]. The unimodal functions (F1, F3) have only one extreme point in the search space, and it is difficult to converge to the global optimum; they are therefore used to test search accuracy. The multimodal functions (F4–F10) have multiple local extreme points and are used to test global search performance. The hybrid functions (F11–F20) and composition functions (F21–F30) combine unimodal and multimodal functions; these more complex functions further test the algorithm's ability to balance exploration and exploitation.

4.2. Qualitative Analysis

The qualitative results of the PECSO algorithm are given in Figure 5, including the visualization of the benchmark function, the search history of the PECSO algorithm on the 2D benchmark test problem, the first-dimensional trajectory, the average fitness and the convergence curve. The discussion is as follows.
The second column (search history) shows the position history of each individual in the search space. The individuals cover a large area of the search space while clustering mostly around the global optimal solution, which indicates that the PECSO algorithm possesses both exploration and exploitation abilities. However, the search history cannot show the order in which individuals explore during the iterative process; therefore, the third column gives the first-dimension trajectory curves of representative individuals over the iterations. They show abrupt changes in the individuals during the initial iterations, which gradually weaken as the iterations proceed. According to references [37,38], this behavior ensures that the PECSO algorithm eventually converges to a point in the search space. The average fitness values and convergence curves are given in the fourth and fifth columns, from which it can be seen that the PECSO algorithm gradually approaches the optimal solution during the iterative process. The multiple convergence stages indicate that the PECSO algorithm can jump out of local optima and search again, demonstrating good local-optimum avoidance and strong convergence ability. In brief, the PECSO algorithm effectively maintains a balance between exploitation and exploration, exhibiting advantages such as rapid convergence and robust global optimization capability.

4.3. Quantitative Analysis

This section compares the PECSO algorithm with seven other algorithms (PSO, CSO, MFO, WOA and BRO, as well as the ICSO algorithm (ICSO1) proposed by Liang [29] and the improved CSO algorithm (ICSO2) proposed by Li [39]) through fitness-value evaluation. The results are shown in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, from which we can draw the following conclusions.
  • On the unimodal and multimodal functions, the PECSO algorithm achieves the smallest mean and standard deviation. On the hybrid and composition functions, the PECSO algorithm obtains the best values on 80% of the functions. This shows that the PECSO algorithm has high convergence accuracy and strong global exploration ability, and its computing performance is highly competitive;
  • The experimental results show that the algorithm's ability to solve the unimodal and multimodal functions is not affected by dimensional changes, while on the hybrid and composition functions it obtains even better results in higher dimensions. This indicates that the PECSO algorithm balances exploitation and exploration well and has a strong ability to jump out of local optima. A possible reason is that the free grouping mechanism improves the establishment of the hierarchy and increases the diversity of roosters in the population; meanwhile, the synchronous updating of individuals within the niche and the spiral learning of the chicks effectively improve the exploitation breadth and exploration depth of the PECSO algorithm;
  • The running time of the PECSO algorithm is slightly higher than that of the CSO algorithm, but of the same order of magnitude, which shows that the performance improvement of the PECSO algorithm is achieved without a significant increase in computational cost;
  • We rank the test results of all algorithms on the benchmark functions, using the average rank as the indicator. Figure 6 shows that the convergence results of the PECSO algorithm are outstanding in all tested dimensions.
The box plots of the eight algorithms on some benchmark functions (D = 30, 50 independent runs) are given in Figure 7. The solid line in the middle of each box represents the median fitness value; the shorter the box and whiskers, the more concentrated the convergence results. It can be seen that the PECSO algorithm has strong stability.
Figure 8 shows the convergence curves on some benchmark functions for D = 30. Figure 8a–d shows the convergence curves of the PECSO algorithm on some unimodal and multimodal functions, on which it achieves the best convergence accuracy and speed. The convergence curves of the PECSO algorithm on some hybrid and composition functions are given in Figure 8e–h. In fact, there are 20 functions of this type, and the PECSO algorithm achieves the best convergence on 17 of them, ranking second on three functions (F20, F23, F26). This further demonstrates the excellent convergence capability of the PECSO algorithm.

5. Case Analysis of Practical Application Problems

In this section, we further investigate the performance of the PECSO algorithm by solving three classical engineering optimization problems and the inverse kinematics of a robot, and we compare the results with those of other algorithms reported in the literature.

5.1. Engineering Optimization Problems

This section selects three classical engineering application problems to test the computational potential of the PECSO algorithm in dealing with practical problems: the three-bar truss design [40], pressure vessel design [41] and tension/compression spring design [42]. These constrained designs are treated as the minimization of functions F31, F32 and F33; for details, refer to Table 8. Their structures are shown in Figure 9.
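To illustrate how such constrained designs can be handed to the optimizer, the sketch below encodes a widely used three-bar truss formulation with a static penalty; the constants l = 100, P = 2 and sigma = 2, the penalty weight and the variable bounds are taken from the commonly cited form of the problem and are assumptions here, so they may not match Table 8 exactly.

```python
import numpy as np

def three_bar_truss(x, penalty=1e6):
    """Penalized weight of the three-bar truss; x[0], x[1] are the bar cross-sections.
    Constants below follow the commonly cited formulation and are assumptions here."""
    a1, a2 = x
    l, P, sigma = 100.0, 2.0, 2.0
    weight = (2.0 * np.sqrt(2.0) * a1 + a2) * l
    g = [  # stress constraints g_i <= 0
        (np.sqrt(2.0) * a1 + a2) / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - sigma,
        a2 / (np.sqrt(2.0) * a1**2 + 2.0 * a1 * a2) * P - sigma,
        1.0 / (np.sqrt(2.0) * a2 + a1) * P - sigma,
    ]
    return weight + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# e.g., using the pecso sketch from Section 3.3:
# best_x, best_f = pecso(three_bar_truss, dim=2, lb=1e-6, ub=1.0)
```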
Table 9 shows the statistical results for the three engineering optimization problems. The PECSO algorithm performs better than the CSO algorithm on all three problems, and the results are also compared with those of the FCSO algorithm in reference [43]. The solutions obtained by the PECSO algorithm are within the scope of practical application and meet the constraint requirements. Meanwhile, the PECSO algorithm shows excellent applicability and stability on engineering optimization problems.

5.2. Solve Inverse Kinematics of PUMA 560 Robot

In this section, the PECSO algorithm is used to solve the inverse kinematics of the PUMA 560 robot, which includes the kinematics modeling, the establishment of the objective function and the simulation experiment.

5.2.1. Kinematic Modeling and Objective Function Establishment

The kinematic modeling of the robot involves establishing a coordinate system on each link of the kinematic chain, as shown in Figure 10. The pose of the robot end effector is described in Cartesian space by a homogeneous transformation. The kinematic equation is shown in Equation (15).
${}^{0}T_{6} = {}^{0}T_{1}(\theta_1) \cdot {}^{1}T_{2}(\theta_2) \cdot {}^{2}T_{3}(\theta_3) \cdot {}^{3}T_{4}(\theta_4) \cdot {}^{4}T_{5}(\theta_5) \cdot {}^{5}T_{6}(\theta_6)$ (15)
The coordinate transformation relationship between adjacent links in the robot kinematic chain is obtained by the Denavit–Hartenberg (D-H) parameter method [10], as shown in Equation (16).
$${}^{i-1}T_{i} = \begin{bmatrix} \cos\theta_i & -\cos\alpha_i \sin\theta_i & \sin\alpha_i \sin\theta_i & a_i \cos\theta_i \\ \sin\theta_i & \cos\alpha_i \cos\theta_i & -\sin\alpha_i \cos\theta_i & a_i \sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (16)$$
The forward kinematics equation of the PUMA 560 robot can be obtained by combining Equations (15) and (16), as shown in Equation (17) [44].
$$\begin{bmatrix} n_x & o_x & \alpha_x & p_x \\ n_y & o_y & \alpha_y & p_y \\ n_z & o_z & \alpha_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = {}^{0}T_{6} \quad (17)$$
where $n$, $o$ and $\alpha$ represent the rotational elements of the pose matrix, and $p$ represents the elements of the position vector.
The objective function of the solution is the Euclidean distance between the desired and actual end positions, as shown in Equation (18).
$\mathrm{Error} = \left\| P - P' \right\|$ (18)
where $P = (p_x, p_y, p_z)$ represents the desired position, and $P' = (p_x', p_y', p_z')$ represents the actual position.
Each joint variable needs to meet a different boundary constraint range, which is restricted by the mechanical principle of the robot, as shown in Equation (19).
$\theta_{i,\min} \le \theta_i \le \theta_{i,\max}, \quad i = 1, 2, \ldots, 6$ (19)
where $\theta_{i,\min}$/$\theta_{i,\max}$ represents the lower/upper limit of the i-th joint variable, respectively. The specific values are shown in Table 10.
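To make the objective of Equations (15)–(18) concrete, the sketch below builds the position error from standard D-H link transforms; the DH_TABLE entries are placeholders (not the PUMA 560 values of Table 10) and would have to be replaced before use, and the joint limits of Equation (19) would be enforced through the optimizer's lower and upper bounds.

```python
import numpy as np

def dh_transform(theta, alpha, a, d):
    """Single-link homogeneous transform of Equation (16) (standard D-H convention)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca * st,  sa * st, a * ct],
                     [st,  ca * ct, -sa * ct, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Placeholder D-H table (alpha_i, a_i, d_i) for six links; replace with the
# PUMA 560 parameters used in the paper (not reproduced here).
DH_TABLE = [(np.pi / 2, 0.0, 0.0)] * 6

def end_position(joints):
    """Forward kinematics: chain the six link transforms as in Equation (15)."""
    T = np.eye(4)
    for theta, (alpha, a, d) in zip(joints, DH_TABLE):
        T = T @ dh_transform(theta, alpha, a, d)
    return T[:3, 3]                                  # position vector p = (px, py, pz)

def ik_error(joints, p_desired):
    """Equation (18): Euclidean distance between desired and actual end positions."""
    return np.linalg.norm(np.asarray(p_desired) - end_position(joints))

# e.g., minimize lambda q: ik_error(q, p_target) with the pecso sketch, using the
# joint limits of Table 10 as the search bounds.
```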

5.2.2. Simulation Experiment and Analysis

The simulation experiments for solving the inverse kinematics are carried out with the PECSO and CSO algorithms. The test point is a randomly selected end position within the workspace of the PUMA 560 robot. The relevant parameter settings are the same as in Section 4.1, and the experimental results are shown in Table 11; the results of the BRO algorithm are taken from reference [10]. The results show that the PECSO algorithm achieves higher solution accuracy than the CSO and BRO algorithms, which indicates that the PECSO algorithm is feasible for solving the robot's inverse kinematics. Moreover, as N and T increase, the computational performance of all algorithms gradually improves, and we find that changes in T have a greater impact on the results.

6. Conclusions

This paper proposes a CSO algorithm with a performance-enhanced strategy. The algorithm utilizes a free grouping mechanism to establish the hierarchy and select the roosters and hens, establishes niches centered on the hens, and gathers the chicks within them. The roosters are updated towards the global optimum, and the hens and chicks are updated synchronously within the niche; to increase exploration capability, the chicks also perform spiral learning. These measures address the singularity of rooster selection and the simplicity of individual position updating and effectively enhance the overall performance of the CSO algorithm. In the simulation, 29 benchmark functions are used to verify that the PECSO algorithm outperforms the seven comparison algorithms. In addition, three engineering optimization problems and the inverse kinematics of the PUMA 560 robot are solved with the PECSO algorithm, showing that it generalizes well to complex practical problems and has practical value and development prospects for solving optimization problems.
The high-performance PECSO algorithm is of great significance for solving complex problems, improving search efficiency, enhancing robustness and adapting to dynamic environments. However, some limitations remain. As the quantitative analysis shows, the running time of the PECSO algorithm is slightly higher than that of the CSO algorithm. When dealing with large-scale data and complex problems, the computational burden grows and the running time of the PECSO algorithm increases accordingly, which may widen the running-time gap relative to the CSO algorithm. Future work will therefore focus on reducing the computational complexity of the algorithm to obtain better results.

Author Contributions

Conceptualization, Y.Z.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z.; data curation, Y.Z.; resources, Y.Z.; writing—original draft preparation, Y.Z., L.W. and J.Z.; writing—review and editing, Y.Z., L.W. and J.Z.; supervision, J.Z.; project administration, L.W.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Social Science Fund of China under Grant No. 22BTJ057.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article and are also available from the corresponding authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kumar, N.; Shaikh, A.A.; Mahato, S.K.; Bhunia, A.K. Applications of new hybrid algorithm based on advanced cuckoo search and adaptive Gaussian quantum behaved particle swarm optimization in solving ordinary differential equations. Expert Syst. Appl. 2021, 172, 114646. [Google Scholar] [CrossRef]
  2. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  3. Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks: Theory and Applications; Springer: Cham, Switzerland, 2019; Volume 780, pp. 43–55. [Google Scholar]
  4. Yang, X.S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Studies in Computational Intelligence; González, J.R., Pelta, D.A., Cruz, C., Terrazas, G., Krasnogor, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 284, pp. 65–74. [Google Scholar]
  5. Cuevas, E.; Cienfuegos, M.; Zaldívar, D.; Pérez-Cisneros, M. A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 2013, 40, 6374–6384. [Google Scholar] [CrossRef] [Green Version]
  6. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014; pp. 86–94. [Google Scholar]
  7. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  9. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  10. Rahkar Farshi, T. Battle royale optimization algorithm. Neural Comput. Appl. 2021, 33, 1139–1157. [Google Scholar] [CrossRef]
  11. Guha, R.; Ghosh, S.; Ghosh, K.K.; Cuevas, E.; Perez-Cisneros, M.; Sarkar, R. Groundwater flow algorithm: A novel hydro-geology based optimization algorithm. IEEE Access 2022, 10, 132193–132211. [Google Scholar] [CrossRef]
  12. Chen, Z.; Francis, A.; Li, S.; Liao, B.; Xiao, D.; Ha, T.T.; Li, J.; Ding, L.; Cao, X. Egret Swarm Optimization Algorithm: An Evolutionary Computation Approach for Model Free Optimization. Biomimetics 2022, 7, 144. [Google Scholar] [CrossRef]
  13. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  14. Wu, H.; Zhang, X.; Song, L.; Zhang, Y.; Gu, L.; Zhao, X. Wild Geese Migration Optimization Algorithm: A New Meta-Heuristic Algorithm for Solving Inverse Kinematics of Robot. Comput. Intel. Neurosc. 2022, 2022, 5191758. [Google Scholar] [CrossRef]
  15. Trojovská, E.; Dehghani, M.; Leiva, V. Drawer Algorithm: A New Metaheuristic Approach for Solving Optimization Problems in Engineering. Biomimetics 2023, 8, 239. [Google Scholar] [CrossRef]
  16. Hashim, F.A.; Hussien, A.G. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  17. Azizi, M.; Talatahari, S.; Gandomi, A.H. Fire Hawk Optimizer: A novel metaheuristic algorithm. Artif. Intell. Rev. 2023, 56, 287–363. [Google Scholar] [CrossRef]
  18. Wang, J.; Zhang, F.; Liu, H.; Ding, J.; Gao, C. Interruptible load scheduling model based on an improved chicken swarm optimization algorithm. CSEE J. Power Energy 2020, 7, 232–240. [Google Scholar]
  19. Mu, Y.; Zhang, L.; Chen, X.; Gao, X. Optimal trajectory planning for robotic manipulators using chicken swarm optimization. In Proceedings of the 2016 8th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 27–28 August 2016; Volume 2, pp. 369–373. [Google Scholar]
  20. Li, Y.; Wu, Y.; Qu, X. Chicken swarm–based method for ascent trajectory optimization of hypersonic vehicles. J. Aerosp. Eng. 2017, 30, 04017043. [Google Scholar] [CrossRef]
  21. Yu, X.; Zhou, L.; Li, X. A novel hybrid localization scheme for deep mine based on wheel graph and chicken swarm optimization. Comput. Netw. 2019, 154, 73–78. [Google Scholar] [CrossRef]
  22. Lin, M.; Zhong, Y.; Chen, R. Graphic process units-based chicken swarm optimization algorithm for function optimization problems. Concurr. Comput. Pract. Exp. 2021, 33, e5953. [Google Scholar] [CrossRef]
  23. Li, M.; Bi, X.; Wang, L.; Han, X.; Wang, L.; Zhou, W. Text Similarity Measurement Method and Application of Online Medical Community Based on Density Peak Clustering. J. Organ. End User Comput. 2022, 34, 1–25. [Google Scholar] [CrossRef]
  24. Zouache, D.; Arby, Y.O.; Nouioua, F.; Abdelaziz, F.B. Multi-objective chicken swarm optimization: A novel algorithm for solving multi-objective optimization problems. Comput. Ind. Eng. 2019, 129, 377–391. [Google Scholar] [CrossRef]
  25. Ahmed, K.; Hassanien, A.E.; Bhattacharyya, S. A novel chaotic chicken swarm optimization algorithm for feature selection. In Proceedings of the 2017 Third International Conference on Research in Computational Intelligence and Communication Networks, Kolkata, India, 3–5 November 2017; pp. 259–264. [Google Scholar]
  26. Deb, S.; Gao, X.Z.; Tammi, K.; Kalita, K.; Mahanta, P. Recent studies on chicken swarm optimization algorithm: A review (2014–2018). Artif. Intell. Rev. 2020, 53, 1737–1765. [Google Scholar] [CrossRef]
  27. Meng, X.B.; Li, H.X. Dempster-Shafer based probabilistic fuzzy logic system for wind speed prediction. In Proceedings of the 2017 International Conference on Fuzzy Theory and Its Applications, Pingtung, Taiwan, 12–15 November 2017; pp. 1–5. [Google Scholar]
  28. Wang, Y.; Sui, C.; Liu, C.; Sun, J.; Wang, Y. Chicken swarm optimization with an enhanced exploration–exploitation tradeoff and its application. Soft Comput. 2023, 27, 8013–8028. [Google Scholar] [CrossRef]
  29. Liang, X.; Kou, D.; Wen, L. An improved chicken swarm optimization algorithm and its application in robot path planning. IEEE Access 2020, 8, 49543–49550. [Google Scholar] [CrossRef]
  30. Wang, Z.; Zhang, W.; Guo, Y.; Han, M.; Wan, B.; Liang, S. A multi-objective chicken swarm optimization algorithm based on dual external archive with various elites. Appl. Soft Comput. 2023, 133, 109920. [Google Scholar] [CrossRef]
  31. Li, M.; Li, C.; Huang, Z.; Huang, J.; Wang, G.; Liu, P.X. Spiral-based chaotic chicken swarm optimization algorithm for parameters identification of photovoltaic models. Soft Comput. 2021, 25, 12875–12898. [Google Scholar] [CrossRef]
  32. Deore, B.; Bhosale, S. Hybrid Optimization Enabled Robust CNN-LSTM Technique for Network Intrusion Detection. IEEE Access 2022, 10, 65611–65622. [Google Scholar] [CrossRef]
  33. Torabi, S.; Safi-Esfahani, F. A dynamic task scheduling framework based on chicken swarm and improved raven roosting optimization methods in cloud computing. J. Supercomput. 2018, 74, 2581–2626. [Google Scholar] [CrossRef]
  34. Pushpa, R.; Siddappa, M. Fractional Artificial Bee Chicken Swarm Optimization technique for QoS aware virtual machine placement in cloud. Concurr. Comput. Pract. Exp. 2023, 35, e7532. [Google Scholar] [CrossRef]
  35. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 57–82. [Google Scholar]
  36. Chhabra, A.; Hussien, A.G.; Hashim, F.A. Improved bald eagle search algorithm for global optimization and feature selection. Alex. Eng. J. 2023, 68, 141–180. [Google Scholar] [CrossRef]
  37. Van den Bergh, F.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Inform. Sci. 2006, 176, 937–971. [Google Scholar] [CrossRef]
  38. Osuna-Enciso, V.; Cuevas, E.; Castañeda, B.M. A diversity metric for population-based metaheuristic algorithms. Inform. Sci. 2022, 586, 192–208. [Google Scholar] [CrossRef]
  39. Li, Y.C.; Wang, S.W.; Han, M.X. Truss structure optimization based on improved chicken swarm optimization algorithm. Adv. Civ. Eng. 2019, 2019, 6902428. [Google Scholar] [CrossRef]
  40. Xu, X.; Hu, Z.; Su, Q.; Li, Y.; Dai, J. Multivariable grey prediction evolution algorithm: A new metaheuristic. Appl. Soft Comput. 2020, 89, 106086. [Google Scholar] [CrossRef]
  41. Dehghani, M.; Trojovský, P. Serval Optimization Algorithm: A New Bio-Inspired Approach for Solving Optimization Problems. Biomimetics 2022, 7, 204. [Google Scholar] [CrossRef] [PubMed]
  42. Trojovský, P.; Dehghani, M. Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 149. [Google Scholar] [CrossRef]
  43. Wang, Z.; Qin, C.; Wan, B. An Adaptive Fuzzy Chicken Swarm Optimization Algorithm. Math. Probl. Eng. 2021, 2021, 8896794. [Google Scholar] [CrossRef]
  44. Lopez-Franco, C.; Hernandez-Barragan, J.; Alanis, A.Y.; Arana-Daniel, N. A soft computing approach for inverse kinematics of robot manipulators. Eng. Appl. Artif. Intell. 2018, 74, 104–120. [Google Scholar] [CrossRef]
Figure 1. Hierarchy distribution strategy based on the free grouping mechanism.
Figure 2. Synchronous update process of individuals in the niche. (a) Synchronous update trajectory. (b) Synchronous update of the navigation map.
Figure 3. Chick’s spiral update position.
Figure 4. Flowchart of the PECSO algorithm.
Figure 5. Qualitative results of PECSO algorithm: (a) visual diagram; (b) search history; (c) trajectory; (d) average fitness; (e) convergence curve.
Figure 6. Average ranking of 8 algorithms in different dimensions.
Figure 7. Box plots of the 8 algorithms on some benchmark functions; the functions are (a) F1; (b) F3; (c) F4; (d) F5; (e) F11; (f) F12; (g) F21; (h) F22.
Figure 8. Average convergence results of the 8 algorithms on some benchmark functions; the functions are (a) F1; (b) F3; (c) F4; (d) F5; (e) F11; (f) F12; (g) F21; (h) F22.
Figure 9. The schematic of three engineering optimization problems: (a) three-bar truss; (b) pressure vessel; (c) tension/compression spring.
Figure 10. Kinematic chains solid shape of PUMA 560 robot joints.
Table 1. The main parameters of the 8 algorithms.
Algorithm: Parameters
PSO: the inertia weight is w = 0.8, the two learning factors are c1 = c2 = 2, Vmax = 1.5, Vmin = −1.5
CSO: FL ∈ [0, 2]
MFO: b = 1, t ∈ [−1, 1], a ∈ [−1, −2]
WOA: a decreases linearly from 2 to 0, b = 1
BRO: maximum damage is 3
ICSO1: FL ∈ [0.4, 1], c = 10, λ = 1.5
ICSO2: FL ∈ [0.4, 0.9], ωmax = 0.9, ωmin = 0.4, K = 200
PECSO: η = 0.5, α = 1
Table 2. Results and comparison of all algorithms on unimodal and multimodal functions (D = 10).
Func., Index, PSO, CSO, MFO, WOA, BRO, ICSO1, ICSO2, PECSO
F1Mean1.19 × 1092.08 × 1097.77 × 1038.84 × 1061.7 × 1093.44 × 1066.76 × 1071205.033
Std8.40 × 1081.81 × 1095.18 × 1037.28 × 1063.5 × 1081.03 × 1072.74 × 1081358.712
Time0.130.270.200.172.240.870.280.43
F3Mean7607.6531.65 × 1044505.3911505.6611214.7642539.5299645.9020.00211
Std4050.9918819.2335588.509962.3674374.36171392.1538211.6380.00220
Time0.130.260.200.172.480.850.270.41
F4Mean64.25915105.170412.0331041.85066117.915216.2339635.222563.47833
Std33.79702118.597516.9466529.6370133.3336424.3540536.099822.21821
Time0.130.250.200.162.610.850.270.41
F5Mean51.2397536.9360728.4467740.7368749.0056625.4649831.5191514.01220
Std7.51036015.4257810.774559.5797309.968579.69902012.838646.215580
Time0.150.270.220.182.540.850.290.43
F6Mean21.271003.5317600.52263438.1451428.204012.4642166.3801080.011514
Std5.7645624.6808411.62506912.808286.997292.2946136.7767810.077628
Time0.210.340.280.252.660.930.350.50
F7Mean94.937840.8299734.4199468.2711050.6935237.7138535.0613127.01692
Std13.7043311.9484713.1167315.4373012.0882110.290979.6794547.839129
Time0.160.270.230.202.560.860.300.44
F8Mean54.8840025.6685626.5371555.425533.1550224.2120028.8901415.50151
Std7.90787211.3182510.5020016.7337610.293959.12845412.486594.807887
Time0.150.280.220.192.530.860.290.43
F9Mean248.4728325.44007.10053761.833201.995254.94315169.34080.009137
Std133.3334323.397228.2465336.279117.324297.29246263.70570.064244
Time0.160.270.230.192.470.860.300.43
F10Mean1145.5071080.774868.3683832.66631137.677947.96201008.287580.5303
Std235.2473368.7106278.4062297.1957263.6751306.0519314.4698227.9308
Time0.170.280.240.202.570.870.310.44
Table 3. Results and comparison of all algorithms on hybrid and composition functions (D = 10).
Func., Index, PSO, CSO, MFO, WOA, BRO, ICSO1, ICSO2, PECSO
F11Mean311.2918278.967528.77234159.737390.9578965.08224184.279822.71487
Std142.0046417.483343.1677275.3243526.0628951.13908187.414313.87131
Time0.140.260.220.182.490.860.280.42
F12Mean7.32 × 1065.06 × 1071.16 × 1062.99 × 1061.06 × 1063.73 × 1065.73 × 1062.25 × 104
Std5.24 × 1061.47 × 1083.51 × 1063.08 × 1066.33 × 1055.43 × 1066.98 × 1061.76 × 104
Time0.140.270.220.182.610.860.280.42
F13Mean1.30 × 1052.08 × 1049173.8123.78 × 1042.39 × 1041.69 × 1042.87 × 1041.22 × 104
Std8.75 × 1041.41 × 1041.04 × 1044420.4816179.1731.14 × 1042.55 × 1047825.147
Time0.150.270.220.192.740.860.280.43
F14Mean237.00721915.462886.1556360.5561748.7817394.2500830.6854208.7593
Std114.15931134.785992.0691534.7172841.1497532.2103829.9934225.2404
Time0.160.280.240.202.720.880.300.44
F15Mean1.08 × 1041.51 × 1043901.5602445.8355238.8642894.8931.79 × 104635.7582
Std1.26 × 1042.37 × 1044555.7201718.9923064.8494904.7542.58 × 104898.4517
Time0.140.260.210.182.770.850.270.42
F16Mean95.86476246.365196.7712176.65239221.5054127.2405189.5364128.5848
Std39.74258193.3885101.908260.6962294.5633196.35577162.7510133.6224
Time0.150.280.220.202.810.870.290.43
F17Mean115.804778.9895439.90501100.645279.9175451.4078075.7679934.72186
Std22.3931359.7908417.9042123.6276414.1892420.6377747.8147018.89641
Time0.200.320.280.242.790.920.350.49
F18Mean9.44 × 1041.38 × 1041.98 × 1043677.2712341.6112.07 × 1041.63 × 1044234.677
Std8.17 × 1041.41 × 1041.37 × 1045591.3773629.0371.80 × 1041.40 × 1044720.534
Time0.150.270.230.192.660.870.290.43
F19Mean3185.1751.57 × 1043953.2653.59 × 1045630.7873064.9982.55 × 1042063.317
Std4483.4883.58 × 1046371.6321.72 × 1043265.7844810.7074.74 × 1041732.792
Time0.470.550.550.513.061.190.610.75
F20Mean114.137181.4000540.45046163.3919115.792769.7357971.5745531.83129
Std26.5230868.7999026.4557768.6789248.1542655.6415347.1677223.36321
Time0.210.330.280.252.680.930.350.50
F21Mean108.5544175.3791179.7778113.0315121.3235116.6314132.7406124.6529
Std3.91645569.5998261.9016818.754887.03827325.5972935.7067249.31258
Time0.210.330.280.252.600.930.340.49
F22Mean160.7779189.0434103.9039124.7609179.4140103.5302146.9429103.5816
Std25.27810181.754011.4995219.3255927.0370327.9458480.2328913.24524
Time0.240.330.320.282.670.940.390.52
F23Mean354.0431335.8440323.8059347.7348378.6894327.7684327.9389319.8174
Std14.3181313.699087.75738113.5492037.6539810.7887611.141537.389369
Time0.260.350.340.302.690.950.390.54
F24Mean390.1786380.6473342.3239390.7160278.5041321.0273357.7536342.1947
Std12.0290816.3317659.2059319.66264135.544593.8943846.8436450.68501
Time0.260.370.350.312.740.970.410.55
F25Mean474.3985488.2035435.5922450.2481477.5108438.5894457.1487429.4796
Std24.3943458.4886720.1893312.6605415.2180723.6786532.5897122.76244
Time0.230.330.310.282.820.940.370.51
F26Mean480.2783546.6617391.4012669.7582709.4012410.5817440.7229343.9639
Std66.33779208.969329.35429186.9457145.417069.87310102.7568178.5598
Time0.290.370.360.332.821.000.430.57
F27Mean421.9835404.2941392.2372400.5075459.2579394.4270398.4692378.7198
Std22.3617910.364451.7688436.30089321.390023.46686014.866032.289450
Time0.290.400.370.342.721.000.440.55
F28Mean629.9686593.1006488.41981549.12104542.0223567.1627563.77963477.4404
Std48.01225119.160794.850651109.08613114.3551145.1582131.3917550.74241
Time0.270.370.350.312.780.970.410.55
F29Mean440.9476420.3877307.4976599.8618385.9158346.8368378.78393332.0303
Std90.6671378.3633942.56096100.588455.3200868.5698391.61292951.85453
Time0.270.370.340.312.880.990.410.55
F30Mean8.99 × 1052.36 × 1066.59 × 1058.97 × 1051.06 × 1067.26 × 1051.13 × 1067980.850
Std3.76 × 1052.91 × 1064.25 × 1058.83 × 1057.47 × 1052.37 × 1057.85 × 1051.07 × 104
Time0.530.620.620.582.981.260.670.82
Table 4. Results and comparison of all algorithms on unimodal and multimodal functions (D = 30).
Func., Index, PSO, CSO, MFO, WOA, BRO, ICSO1, ICSO2, PECSO
F1Mean1.34 × 10102.73 × 10104.483 × 1095.09 × 1082.17 × 10102.81 × 1096.261 × 1092739.501
Std3.48 × 1091.52 × 10103.413 × 1092.78 × 1081.80 × 1091.23 × 1094.036 × 1093625.805
Time0.950.440.343.620.652.420.700.58
F3Mean1.17 × 1053.28 × 1051.25 × 1051.74 × 1053.85 × 1048.77 × 1042.07 × 1051.58 × 104
Std2.71 × 1041.09 × 1053.50 × 1042.16 × 1044410.0731.44 × 1046.49 × 1057661.569
Time0.350.660.590.453.562.430.700.92
F4Mean1469.5245544.343390.5003252.07105043.692436.8082609.4365108.1792
Std914.99743144.406275.121853.93749607.4736147.1840483.454724.44122
Time0.340.650.570.443.542.420.700.90
F5Mean329.0702241.3721184.9542338.5593327.8808173.2287206.7531134.8208
Std34.1180251.7579941.8408183.8381735.8894129.5973344.6113530.73982
Time0.410.690.640.513.672.430.760.99
F6Mean58.2629724.7757128.8989875.0207971.2693423.0354729.4176915.95344
Std8.9372128.31897112.991857.3751036.2491666.2595387.6481615.371144
Time0.620.930.860.713.822.690.981.20
F7Mean605.8144566.8085270.0943496.9897462.7574305.4317411.9734236.5616
Std92.37522233.948494.5589974.6291169.8844156.01690146.072161.23752
Time0.440.720.660.533.692.490.791.00
F8Mean321.0630204.4143188.0338217.3065268.7952159.4270189.6993112.6484
Std29.9602844.2380945.1530451.7525730.9785127.4313243.2542821.42594
Time0.420.700.650.523.652.470.771.00
F9Mean8833.9407870.2995318.8188419.4947220.8433734.7416789.7442230.670
Std2717.4271822.2991927.1052392.5761745.1621155.8322678.077672.1063
Time0.430.710.650.523.702.500.771.00
F10Mean7268.2255052.1114223.8465851.9817032.7664593.4544732.0353558.053
Std579.40931015.415601.8196624.8404552.85821112.3501007.524512.6436
Time0.470.730.700.563.672.510.831.04
Table 5. Results and comparison of all algorithms on hybrid and composition functions (D = 30).
Func., Index, PSO, CSO, MFO, WOA, BRO, ICSO1, ICSO2, PECSO
F11Mean3203.0628204.801415.75482976.8491235.2971163.5852201.071130.4050
Std1116.1185263.914173.7117771.5592183.0142424.47231927.31146.74434
Time0.380.700.620.483.702.460.740.96
F12Mean1.56 × 1091.84 × 1091.12 × 1082.26 × 1084.27 × 1091.27 × 1084.71 × 1084.63 × 106
Std7.45 × 1081.71 × 1091.76 × 1088.09 × 1077.23 × 1081.15 × 1087.89 × 1086.48 × 106
Time0.420.730.660.523.652.510.781.00
F13Mean4.82 × 1086.96 × 1087.60 × 1064.94 × 1051.49 × 1097.07 × 1068.29 × 1078997.900
Std6.87 × 1081.32 × 1092.16 × 1074.03 × 1056.69 × 1081.71 × 1072.79 × 1087355.072
Time0.410.710.640.503.692.490.760.98
F14Mean1.21 × 1062.81 × 1062.22 × 1051.56 × 1064.15 × 1054.19 × 1052.06 × 1068.96 × 104
Std8.21 × 1054.52 × 1063.02 × 1058.01 × 1051.80 × 1053.84 × 1053.40 × 1066.69 × 104
Time0.460.760.700.573.682.550.821.05
F15Mean4.73 × 1076.83 × 1074.57 × 1042.94 × 1051.76 × 1041.19 × 1051.82 × 1072457.060
Std3.88 × 1072.76 × 1085.23 × 1042.75 × 1058239.1389.38 × 1041.28 × 1082232.645
Time0.370.680.620.473.632.460.730.97
F16Mean2463.6872046.7901424.6362526.6032844.3881573.6441784.1441024.343
Std321.4705444.9287354.0005457.4043534.7903335.2769476.9434296.9336
Time0.420.720.650.523.662.500.770.99
F17Mean1244.2031160.452741.88611117.3621097.900888.3213938.2913594.9978
Std166.6690310.1898230.2786217.6323245.0365216.0549234.7641178.2540
Time0.600.880.840.703.842.700.971.19
F18Mean1.10× 1072.78 × 1074.59 × 1067.79 × 1061.89 × 1064.26 × 1061.78 × 1071.24 × 106
Std2.16 × 1074.76 × 1076.17 × 1066.71 × 1061.29 × 1065.93 × 1062.10 × 1071.22 × 106
Time0.410.730.650.513.642.490.770.99
F19Mean9.85 × 1077.53 × 1071.16 × 1077.90 × 1063.20 × 1066.46 × 1063.63 × 1071.31 × 104
Std7.74 × 1072.78 × 1083.77 × 1075.45 × 1061.55 × 1061.09 × 1076.26 × 1078282.171
Time1.491.661.731.594.723.581.852.08
F20Mean875.0725460.3876600.2112730.9683698.5802651.5624682.8055589.3753
Std107.2989157.7933167.0108135.9671132.8971187.7451165.6334154.3269
Time0.640.910.880.743.862.701.011.22
F21Mean521.9030435.4149388.0635504.4011523.8071361.9451399.0245331.2058
Std37.3906641.7566242.0032355.5378638.0151437.6970246.9692432.57443
Time0.731.030.960.833.952.801.101.31
F22Mean1613.9414313.309535.56311411.6733937.591771.14681065.725100.899
Std382.66471549.842339.00541276.843470.3364483.0173716.97571.40774
Time0.811.011.040.904.032.831.181.38
F23Mean852.2221592.1274516.2654733.83871047.188567.1112564.7586517.8929
Std69.8512462.1034833.8301788.5506987.5687658.0506657.3673434.17164
Time0.871.101.110.984.122.901.261.46
F24Mean975.0453707.0991577.1475801.78431161.468638.8340640.0261577.9206
Std63.8072463.1292532.17908590.5733574.8369158.1270153.0082941.37280
Time0.941.181.191.054.192.981.331.52
F25Mean1506.0031705.441490.2608604.7758982.2687606.5717765.2691410.8481
Std324.0564946.073681.8820828.9565254.1900779.75298221.328717.81139
Time0.861.131.090.964.092.961.241.43
F26Mean4852.7684253.4762722.6075958.7516565.9723327.3923697.8813090.922
Std533.5813573.0618274.62801007.016473.8096653.7841651.5328794.3931
Time1.051.251.291.144.333.131.441.63
F27Mean937.8946623.9552535.5071678.17791395.561589.2438608.5513500.007
Std106.191159.6451218.5994287.56073151.752744.3843755.083690.00022
Time1.171.421.421.284.483.221.561.74
F28Mean1692.8892461.631849.0510690.72082101.717728.96051165.746499.0489
Std524.06671250.615341.086466.65202153.0218113.1344914.907425.70241
Time1.031.281.271.144.283.131.421.60
F29Mean2052.8641743.1861112.5012680.0392848.1971492.5071647.332957.8104
Std317.6705459.9219239.7581405.3932436.8206384.2833373.5187260.3517
Time0.891.151.130.994.143.011.271.45
F30Mean1.41 × 1084.20 × 1072.24 × 1057.23 × 1077.52 × 1077,37 × 1061.61 × 1077395.534
Std4.34 × 1071.76 × 1083.11 × 1053.01 × 1073.70 × 1071.13 × 1072.21 × 1078448.894
Time1.781.942.011.885.023.892.152.34
Table 6. Results and comparison of all algorithms on unimodal and multimodal functions (D = 50).
Func. | Index | PSO | CSO | MFO | WOA | BRO | ICSO1 | ICSO2 | PECSO
F1 | Mean | 5.63 × 10⁵ | 3.49 × 10⁹ | 4.41 × 10¹⁰ | 6.74 × 10¹⁰ | 7.75 × 10¹⁰ | 1.94 × 10¹⁰ | 2.66 × 10¹⁰ | 2.84 × 10¹⁰
F1 | Std | 1.81 × 10⁵ | 1.22 × 10⁹ | 1.2 × 10¹⁰ | 3.75 × 10⁹ | 2.05 × 10¹⁰ | 4.69 × 10⁹ | 9.15 × 10⁹ | 1.27 × 10¹⁰
F1 | Time | 0.74 | 0.40 | 0.36 | 2.74 | 0.48 | 1.28 | 0.52 | 0.57
F3 | Mean | 3.44 × 10⁵ | 6.02 × 10⁵ | 3.29 × 10⁵ | 1.68 × 10⁵ | 1.24 × 10⁵ | 2.06 × 10⁵ | 4.26 × 10⁵ | 1.09 × 10⁵
F3 | Std | 6.26 × 10⁴ | 2.17 × 10⁵ | 6.39 × 10⁴ | 1.85 × 10⁴ | 8174.464 | 3.29 × 10⁴ | 8.94 × 10⁴ | 2.13 × 10⁴
F3 | Time | 0.36 | 0.48 | 0.57 | 0.40 | 2.84 | 1.28 | 0.51 | 0.75
F4 | Mean | 5690.752 | 1.69 × 10⁴ | 2609.911 | 1043.799 | 1.83 × 10⁴ | 2672.436 | 3252.521 | 231.4210
F4 | Std | 2289.673 | 1.09 × 10⁴ | 1572.541 | 319.6070 | 1838.551 | 855.6603 | 2307.473 | 64.89824
F4 | Time | 0.38 | 0.50 | 0.61 | 0.43 | 2.94 | 1.36 | 0.55 | 0.80
F5 | Mean | 650.2707 | 491.5995 | 430.7602 | 452.2232 | 541.0584 | 385.6534 | 443.5153 | 244.8575
F5 | Std | 38.15368 | 60.02821 | 59.24524 | 72.94412 | 37.07843 | 42.89941 | 56.04680 | 36.71267
F5 | Time | 0.46 | 0.52 | 0.67 | 0.50 | 2.96 | 1.31 | 0.62 | 0.90
F6 | Mean | 78.94875 | 42.80420 | 49.25934 | 93.77500 | 88.51010 | 41.33403 | 48.07438 | 34.50339
F6 | Std | 11.71388 | 7.176038 | 9.031867 | 10.25064 | 7.601178 | 7.413033 | 8.657931 | 8.025067
F6 | Time | 0.76 | 0.90 | 0.98 | 0.80 | 3.17 | 1.68 | 0.91 | 1.19
F7 | Mean | 1463.551 | 1514.436 | 1162.377 | 1049.873 | 1114.199 | 774.5831 | 1093.967 | 594.6951
F7 | Std | 167.0322 | 410.5448 | 392.3836 | 125.5476 | 93.95031 | 92.84511 | 219.4935 | 98.13384
F7 | Time | 0.47 | 0.54 | 0.69 | 0.52 | 2.95 | 1.37 | 0.63 | 0.91
F8 | Mean | 680.3976 | 532.7670 | 426.4445 | 550.0613 | 582.9400 | 412.1534 | 454.2666 | 282.2703
F8 | Std | 59.53979 | 81.32572 | 62.91386 | 99.01249 | 37.21877 | 49.50979 | 55.77509 | 47.00482
F8 | Time | 0.48 | 0.54 | 0.70 | 0.53 | 2.85 | 1.34 | 0.64 | 0.94
F9 | Mean | 3.54 × 10⁴ | 3.53 × 10⁴ | 1.53 × 10⁴ | 2.79 × 10⁴ | 2.98 × 10⁴ | 1.68 × 10⁴ | 2.55 × 10⁴ | 8737.336
F9 | Std | 7605.117 | 7445.706 | 4118.053 | 7288.775 | 4430.718 | 3228.336 | 8366.852 | 2363.323
F9 | Time | 0.48 | 0.54 | 0.68 | 0.52 | 3.00 | 1.40 | 0.64 | 0.89
F10 | Mean | 1.29 × 10⁴ | 8245.27 | 7648.661 | 1.14 × 10⁴ | 1.31 × 10⁴ | 9208.548 | 9198.573 | 6289.793
F10 | Std | 939.4008 | 770.339 | 1052.331 | 1215.152 | 763.5311 | 1548.342 | 1202.687 | 761.1483
F10 | Time | 0.54 | 0.57 | 0.75 | 0.58 | 3.10 | 1.38 | 0.69 | 0.95
Table 7. Results and comparison of all algorithms on hybrid and composition functions (D = 50).
Func. | Index | PSO | CSO | MFO | WOA | BRO | ICSO1 | ICSO2 | PECSO
F11 | Mean | 1.49 × 10⁴ | 1.39 × 10⁴ | 3035.501 | 2262.674 | 9726.280 | 6716.607 | 1.08 × 10⁴ | 297.8482
F11 | Std | 4415.406 | 5694.763 | 2031.633 | 611.7686 | 1325.523 | 2346.000 | 3821.738 | 65.36438
F11 | Time | 0.41 | 0.51 | 0.63 | 0.45 | 2.91 | 1.34 | 0.56 | 0.84
F12 | Mean | 1.10 × 10¹⁰ | 2.17 × 10¹⁰ | 2.82 × 10⁹ | 5.86 × 10⁸ | 3.84 × 10¹⁰ | 1.71 × 10⁹ | 3.27 × 10⁹ | 7.60 × 10⁶
F12 | Std | 4.34 × 10⁹ | 1.24 × 10¹⁰ | 2.34 × 10⁹ | 2.45 × 10⁸ | 4.93 × 10⁹ | 7.31 × 10⁸ | 3.53 × 10⁹ | 6.38 × 10⁶
F12 | Time | 0.49 | 0.58 | 0.70 | 0.53 | 3.06 | 1.44 | 0.64 | 0.90
F13 | Mean | 1.34 × 10¹⁰ | 4.92 × 10⁹ | 3.03 × 10⁸ | 2.14 × 10⁷ | 1.35 × 10¹⁰ | 2.19 × 10⁸ | 9.31 × 10⁸ | 1.28 × 10⁴
F13 | Std | 7.95 × 10⁹ | 5.59 × 10⁹ | 6.37 × 10⁸ | 1.93 × 10⁷ | 3.35 × 10⁹ | 2.14 × 10⁸ | 2.25 × 10⁹ | 5481.711
F13 | Time | 0.43 | 0.53 | 0.65 | 0.48 | 2.89 | 1.36 | 0.58 | 0.86
F14 | Mean | 3.47 × 10⁶ | 1.16 × 10⁷ | 2.04 × 10⁶ | 2.09 × 10⁶ | 8.68 × 10⁶ | 4.24 × 10⁶ | 7.63 × 10⁶ | 5.87 × 10⁵
F14 | Std | 1.33 × 10⁶ | 2.06 × 10⁷ | 2.71 × 10⁶ | 1.19 × 10⁶ | 4.27 × 10⁶ | 3.71 × 10⁶ | 1.14 × 10⁷ | 4.96 × 10⁵
F14 | Time | 0.54 | 0.60 | 0.76 | 0.59 | 2.99 | 1.46 | 0.68 | 0.97
F15 | Mean | 9.36 × 10⁸ | 6.9 × 10⁸ | 4.82 × 10⁷ | 3.85 × 10⁶ | 1.55 × 10⁹ | 2.09 × 10⁷ | 3.59 × 10⁸ | 6099.017
F15 | Std | 6.20 × 10⁸ | 1.36 × 10⁹ | 1.53 × 10⁸ | 3.87 × 10⁶ | 6.37 × 10⁸ | 4.34 × 10⁷ | 7.18 × 10⁸ | 8801.897
F15 | Time | 0.40 | 0.51 | 0.63 | 0.45 | 2.74 | 1.33 | 0.55 | 0.83
F16 | Mean | 4411.178 | 3637.562 | 2596.259 | 4285.890 | 5069.837 | 2921.396 | 3255.263 | 1616.872
F16 | Std | 498.3067 | 650.2152 | 474.9358 | 1016.954 | 641.6171 | 542.7487 | 659.8195 | 413.2548
F16 | Time | 0.46 | 0.54 | 0.67 | 0.50 | 2.91 | 1.37 | 0.61 | 0.84
F17 | Mean | 3593.128 | 4266.580 | 2120.972 | 2585.764 | 2456.910 | 2309.732 | 2834.776 | 1531.586
F17 | Std | 421.8792 | 3966.818 | 526.7664 | 458.2317 | 379.7385 | 374.7005 | 699.3271 | 323.0368
F17 | Time | 0.72 | 0.76 | 0.93 | 0.76 | 3.21 | 1.66 | 0.87 | 1.13
F18 | Mean | 2.39 × 10⁷ | 3.67 × 10⁷ | 7.77 × 10⁷ | 1.54 × 10⁷ | 1.97 × 10⁷ | 1.35 × 10⁷ | 2.78 × 10⁷ | 3.63 × 10⁶
F18 | Std | 8.35 × 10⁶ | 3.27 × 10⁷ | 6.23 × 10⁶ | 3.79 × 10⁶ | 5.16 × 10⁶ | 1.01 × 10⁷ | 2.96 × 10⁷ | 2.03 × 10⁶
F18 | Time | 0.45 | 0.55 | 0.65 | 0.49 | 3.02 | 1.37 | 0.59 | 0.86
F19 | Mean | 2.23 × 10⁸ | 4.61 × 10⁸ | 9.93 × 10⁶ | 3.88 × 10⁶ | 5.32 × 10⁸ | 9.01 × 10⁶ | 9.22 × 10⁷ | 1.15 × 10⁴
F19 | Std | 7.95 × 10⁷ | 6.02 × 10⁸ | 3.53 × 10⁷ | 2.95 × 10⁶ | 2.42 × 10⁸ | 9.42 × 10⁶ | 2.69 × 10⁸ | 6970.648
F19 | Time | 2.06 | 1.95 | 2.28 | 2.10 | 4.62 | 3.00 | 2.22 | 2.48
F20 | Mean | 2101.191 | 1766.057 | 1504.138 | 1631.001 | 1658.922 | 1308.284 | 1588.465 | 1085.147
F20 | Std | 206.0341 | 448.9127 | 328.6857 | 317.7398 | 260.7149 | 293.6980 | 349.2722 | 292.5697
F20 | Time | 0.77 | 0.79 | 0.98 | 0.81 | 3.40 | 1.61 | 0.92 | 1.16
F21 | Mean | 883.4526 | 793.8037 | 603.0783 | 887.1336 | 914.5522 | 608.4892 | 638.0682 | 459.1692
F21 | Std | 63.57980 | 85.17755 | 64.19765 | 93.94013 | 64.08551 | 60.57776 | 62.44263 | 55.04673
F21 | Time | 1.07 | 1.14 | 1.29 | 1.12 | 3.66 | 1.95 | 1.24 | 1.51
F22 | Mean | 1.39 × 10⁴ | 9626.454 | 8046.344 | 1.09 × 10⁴ | 1.39 × 10⁴ | 9342.175 | 9661.580 | 6939.285
F22 | Std | 623.7961 | 1480.078 | 918.6019 | 1135.879 | 704.2688 | 1732.410 | 1083.247 | 795.9884
F22 | Time | 1.19 | 1.11 | 1.40 | 1.23 | 3.77 | 1.96 | 1.35 | 1.59
F23 | Mean | 1611.490 | 1133.761 | 820.9687 | 1413.794 | 1916.904 | 971.2375 | 951.0280 | 782.8817
F23 | Std | 156.8338 | 96.97311 | 52.54330 | 134.8206 | 122.3561 | 99.87764 | 101.2402 | 60.84830
F23 | Time | 1.36 | 1.31 | 1.59 | 1.42 | 3.84 | 2.14 | 1.55 | 1.81
F24 | Mean | 1498.221 | 1128.335 | 804.2203 | 1338.020 | 2125.471 | 984.8015 | 1003.564 | 857.9812
F24 | Std | 80.27079 | 103.4241 | 43.84820 | 133.5501 | 110.1762 | 90.13087 | 104.8743 | 60.08306
F24 | Time | 1.47 | 1.45 | 1.71 | 1.54 | 4.04 | 2.26 | 1.66 | 1.91
F25 | Mean | 6779.704 | 7451.468 | 2327.757 | 1245.903 | 7023.516 | 2449.337 | 4074.011 | 682.3776
F25 | Std | 1627.463 | 3851.606 | 1359.057 | 192.4039 | 423.6441 | 594.1324 | 2346.966 | 33.78171
F25 | Time | 1.41 | 1.42 | 1.62 | 1.46 | 3.96 | 2.34 | 1.58 | 1.82
F26 | Mean | 9547.852 | 8824.932 | 5175.204 | 1.13 × 10⁴ | 1.11 × 10⁴ | 5531.918 | 6907.535 | 5999.610
F26 | Std | 1455.845 | 1760.854 | 597.2284 | 1432.749 | 543.4698 | 1480.805 | 1277.411 | 1957.419
F26 | Time | 1.71 | 1.61 | 1.92 | 1.75 | 4.30 | 2.63 | 1.87 | 2.10
F27 | Mean | 1671.716 | 1165.097 | 848.4837 | 1656.802 | 3582.809 | 1124.553 | 1184.41 | 500.0116
F27 | Std | 298.3491 | 178.4079 | 96.35885 | 345.4130 | 270.3357 | 208.2852 | 231.698 | 0.000195
F27 | Time | 1.95 | 1.95 | 2.20 | 2.02 | 4.60 | 2.79 | 2.13 | 2.36
F28 | Mean | 5902.595 | 7109.457 | 5508.645 | 2403.685 | 6397.092 | 2353.170 | 5400.214 | 508.2917
F28 | Std | 1279.437 | 768.2124 | 593.6870 | 341.5047 | 335.9758 | 653.2835 | 2199.043 | 39.60585
F28 | Time | 1.75 | 1.74 | 1.97 | 1.81 | 4.45 | 2.70 | 1.92 | 2.15
F29 | Mean | 5421.977 | 4928.239 | 2062.470 | 5966.120 | 9369.429 | 3161.540 | 3918.473 | 1545.224
F29 | Std | 1141.993 | 3064.262 | 494.6879 | 1165.999 | 2320.738 | 795.2987 | 1796.600 | 369.3750
F29 | Time | 1.25 | 1.26 | 1.47 | 1.30 | 3.81 | 2.20 | 1.41 | 1.65
F30 | Mean | 8.11 × 10⁸ | 1.33 × 10⁹ | 3.13 × 10⁷ | 1.30 × 10⁸ | 1.42 × 10⁹ | 7.13 × 10⁷ | 2.34 × 10⁸ | 2.49 × 10⁵
F30 | Std | 4.13 × 10⁸ | 1.24 × 10⁹ | 7.12 × 10⁷ | 3.44 × 10⁷ | 5.09 × 10⁸ | 5.83 × 10⁷ | 2.73 × 10⁸ | 6.59 × 10⁵
F30 | Time | 2.61 | 2.47 | 2.84 | 2.66 | 5.32 | 3.57 | 2.78 | 3.00
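The Mean, Std, and Time entries in Tables 4–7 are statistics gathered over repeated independent runs of each algorithm on the CEC2017 functions. For reference, a minimal sketch of this kind of bookkeeping is given below; the sphere objective, the random-search stand-in optimizer, and the run and evaluation budgets are illustrative assumptions, not the authors' experimental setup.

```python
import time
import numpy as np

def sphere(x):
    """Placeholder objective; the paper evaluates the CEC2017 suite instead."""
    return float(np.sum(x ** 2))

def random_search(func, dim, bounds, evals, rng):
    """Stand-in optimizer; PECSO, CSO, etc. would be plugged in here."""
    low, high = bounds
    best = np.inf
    for _ in range(evals):
        best = min(best, func(rng.uniform(low, high, dim)))
    return best

def benchmark(func, dim=30, bounds=(-100.0, 100.0), runs=30, evals=10_000, seed=0):
    """Collect the Mean/Std of the best values found and the average runtime per run."""
    rng = np.random.default_rng(seed)
    start = time.perf_counter()
    results = [random_search(func, dim, bounds, evals, rng) for _ in range(runs)]
    elapsed = (time.perf_counter() - start) / runs
    return np.mean(results), np.std(results), elapsed

if __name__ == "__main__":
    mean, std, t = benchmark(sphere)
    print(f"Mean = {mean:.4e}, Std = {std:.4e}, Time = {t:.2f} s")
```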
Table 8. The basic information of the engineering optimization problem.
F31: Three-bar truss design
Expression: F21(x) = (2√2·x1 + x2) × l
Constraints: (√2·x1 + x2)·P / (√2·x1² + 2·x1·x2) − σ ≤ 0; P / (√2·x2 + x1) − σ ≤ 0
Variable scope: 0 ≤ xi ≤ 1, with l = 100 cm, P = 2 kN/cm², σ = 2 kN/cm²

F32: Pressure vessel design
Expression: F22(x) = 0.6224·x1·x3·x4 + 1.7781·x2·x3² + 3.1661·x1²·x4 + 19.84·x1²·x3
Constraints: −x1 + 0.0193·x3 ≤ 0; −x2 + 0.00954·x3 ≤ 0; −π·x3²·x4 − (4/3)·π·x3³ + 1,960,000 ≤ 0; x4 − 240 ≤ 0
Variable scope: 0 ≤ x1, x2 ≤ 100; 10 ≤ x3, x4 ≤ 200; x1 and x2 are multiples of 0.0625

F33: Tension/compression spring design
Expression: F23(x) = (x3 + 2)·x2·x1²
Constraints: 1 − x2³·x3 / (71.785·x1⁴) ≤ 0; (4·x2² − x1·x2) / (12.566·(x2·x1³ − x1⁴)) + 1 / (5.108·x1²) − 1 ≤ 0; 1 − 140.45·x1 / (x2²·x3) ≤ 0; (x1 + x2) / 1.5 − 1 ≤ 0
Variable scope: 0 ≤ xi ≤ 1, with l = 100 cm, P = 2 kN/cm², σ = 2 kN/cm²
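Swarm optimizers usually attack the constrained problems in Table 8 by adding a penalty for constraint violations to the objective. The sketch below encodes the three-bar truss problem (F31) in that style; the penalty form and its coefficient are assumptions made for illustration, not taken from the authors' implementation.

```python
import numpy as np

# Constants for F31 from Table 8: l = 100 cm, P = 2 kN/cm^2, sigma = 2 kN/cm^2.
L, P, SIGMA = 100.0, 2.0, 2.0

def three_bar_truss(x, penalty=1e6):
    """Penalized objective for the three-bar truss design problem (F31)."""
    x1, x2 = x
    weight = (2.0 * np.sqrt(2.0) * x1 + x2) * L
    g = [
        (np.sqrt(2.0) * x1 + x2) * P / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) - SIGMA,
        P / (np.sqrt(2.0) * x2 + x1) - SIGMA,
    ]
    # Static penalty: any positive (violated) constraint inflates the objective.
    return weight + penalty * sum(max(0.0, gi) for gi in g)

# Evaluating at the PECSO solution from Table 9 (x1 = 0.78848, x2 = 0.40879)
# returns roughly 263.9, close to the Best value reported there.
print(three_bar_truss(np.array([0.78848, 0.40879])))
```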
Table 9. Statistical results of three engineering optimization problems.
Func. | Algorithm | Best | Worst | Std | Mean | x1 | x2 | x3 | x4
F31 | PECSO | 263.8959 | 265.7756 | 0.4604 | 264.1986 | 0.78848 | 0.40879 | - | -
F31 | CSO | 264.1046 | 267.4508 | 0.7792 | 265.1001 | 0.78186 | 0.42962 | - | -
F31 | FCSO | 264.3374 | 267.2195 | 1.0166 | 265.2885 | - | - | - | -
F32 | PECSO | 6059.7143 | 7337.4904 | 376.0045 | 6355.1738 | 0.81250 | 0.43750 | 42.09845 | 176.6366
F32 | CSO | 6112.6739 | 7512.0098 | 386.8835 | 6631.4332 | 0.87500 | 0.43750 | 45.19547 | 141.9197
F32 | FCSO | 12272.28 | 1864.725 | 2945.724 | 4803.109 | - | - | - | -
F33 | PECSO | 0.0127 | 0.0163 | 0.0008 | 0.0132 | 0.05179 | 0.35902 | 11.15565 | -
F33 | CSO | 0.0127 | 0.0176 | 0.0011 | 0.0135 | 0.05180 | 0.35935 | 11.14283 | -
F33 | FCSO | 0.0128 | 0.0132 | 0.0001 | 0.0130 | - | - | - | -
Table 10. D-H parameters of PUMA 560 robot.
No. | Link Length (aᵢ) | Link Torsion Angle (αᵢ) | Link Offset (dᵢ) | θᵢ,min | θᵢ,max
1 | 0 | 90° | 0 | −160° | 160°
2 | 0 | 0° | 0 | −245° | 45°
3 | 0.4318 m | −90° | 0.1491 m | −45° | 225°
4 | 0.0203 m | 90° | 0.1331 m | −110° | 170°
5 | 0 | −90° | 0 | −100° | 100°
6 | 0 | 0° | 0 | −266° | 266°
Table 11. Comparison results for inverse kinematics of PUMA 560 robot.
N | T | PECSO Mean | PECSO Std | CSO Mean | CSO Std | BRO Mean | BRO Std
100 | 100 | 0.00177 | 0.00241 | 0.01086 | 0.00373 | 1.9146 × 10⁻⁵ | 3.7751 × 10⁻⁵
100 | 300 | 7.61 × 10⁻⁷ | 1.29 × 10⁻⁶ | 0.00880 | 0.00411 | 1.0689 × 10⁻⁴ | 2.1312 × 10⁻⁵
200 | 100 | 3.46 × 10⁻⁵ | 9.43 × 10⁻⁵ | 0.00840 | 0.00351 | 6.9530 × 10⁻⁵ | 2.3740 × 10⁻⁵
200 | 300 | 1.16 × 10⁻⁷ | 4.84 × 10⁻⁷ | 0.00701 | 0.00352 | 6.2040 × 10⁻⁶ | 1.5333 × 10⁻⁵
300 | 100 | 1.77 × 10⁻⁷ | 3.45 × 10⁻⁷ | 0.00778 | 0.00267 | 1.8821 × 10⁻⁶ | 3.8133 × 10⁻⁶
300 | 300 | 1.32 × 10⁻⁸ | 9.58 × 10⁻⁹ | 0.00633 | 0.00326 | 7.1473 × 10⁻⁷ | 2.7865 × 10⁻⁶
300 | 500 | 5.49 × 10⁻⁹ | 4.41 × 10⁻⁹ | 0.00507 | 0.00283 | 1.8914 × 10⁻⁷ | 6.4582 × 10⁻⁷
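Table 11 treats the inverse kinematics of the PUMA 560 as minimization of the end-effector error over the six joint angles, with the forward model defined by the D-H parameters of Table 10. One way such a fitness function could be set up is sketched below; the standard D-H convention and a position-only error are assumptions here, since this excerpt does not restate the exact conventions of the authors' implementation.

```python
import numpy as np

# (a, alpha, d) for each link, taken from Table 10 (standard D-H convention assumed).
DH = [
    (0.0,    np.deg2rad(90.0),  0.0),
    (0.0,    0.0,               0.0),
    (0.4318, np.deg2rad(-90.0), 0.1491),
    (0.0203, np.deg2rad(90.0),  0.1331),
    (0.0,    np.deg2rad(-90.0), 0.0),
    (0.0,    0.0,               0.0),
]

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform of a single link under the standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(theta):
    """End-effector pose for a vector of six joint angles (radians)."""
    T = np.eye(4)
    for (a, alpha, d), th in zip(DH, theta):
        T = T @ dh_transform(a, alpha, d, th)
    return T

def ik_fitness(theta, target_position):
    """Position error that a swarm optimizer would minimize over theta."""
    return float(np.linalg.norm(forward_kinematics(theta)[:3, 3] - target_position))

# Example: score a random joint configuration against a reachable target position.
rng = np.random.default_rng(1)
target = forward_kinematics(rng.uniform(-np.pi, np.pi, 6))[:3, 3]
print(ik_fitness(rng.uniform(-np.pi, np.pi, 6), target))
```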