Article

A Compact Cat Swarm Optimization Algorithm Based on Small Sample Probability Model

School of Computer Science, Yangtze University, Jingzhou 434025, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8209; https://doi.org/10.3390/app12168209
Submission received: 27 May 2022 / Revised: 1 July 2022 / Accepted: 5 August 2022 / Published: 17 August 2022

Abstract

In this paper, a compact cat swarm optimization algorithm based on a Small Sample Probability Model (SSPCCSO) is proposed. As in previous cat swarm algorithms, there is a tracing mode and a seeking mode in the search for optimal solutions; in addition, a novel differential operator is introduced in the seeking mode, and it is shown that this greatly enhances the search ability for the potential global best solution. Another highlight of this algorithm is that a gradient descent method is adopted to increase the convergence velocity and reduce the computational cost. More importantly, a small sample probability model is designed to represent the population of samples instead of the usual normal probability distribution. This representation can run on equipment with low computing power, and the whole algorithm uses only one cat, with no stored history of positions and velocities; it is therefore suitable for solving optimization problems on limited hardware. In the experiments, SSPCCSO is superior to other compact evolutionary algorithms on most benchmark functions and also performs well against some population-based evolutionary algorithms. It provides a new means of solving small sample optimization problems.

1. Introduction

Compact evolutionary algorithms (cEAs) have developed rapidly in recent years. In 1999, Georges R. Harik et al. proposed the compact genetic algorithm (cGA) [1]. It mimics the behavior of a simple GA with standard binary coding for order-one problems and achieves almost the same performance as a standard GA under this simple representation. More importantly, it introduced the idea that a probability distribution can represent a population, thereby reducing memory usage. Inspired by cGA, Ernesto Mininno et al. proposed the real-valued compact genetic algorithm (rcGA) [2], which was the first to describe the statistical features of all of the samples with a normal probability distribution model from which individuals are generated directly. The key contribution of rcGA is its effective rules for updating the parameters of the normal probability density function (PDF). A compact differential evolution algorithm (cDE) [3] was presented by Ernesto Mininno et al. in 2011; although based on the same normal probabilistic model, it inherited the essential features of differential evolution (DE), and its efficient performance together with modest requirements made it suitable for environments with small computational power. After cDE, cPSO [4] was proposed in 2013. Unlike other PSO versions, it stores neither positions nor velocities: only one particle is employed in the whole algorithm, and a normal probabilistic model simulates the swarm's behavior. This modest representation enables cPSO to run on devices with limited computational power or limited memory. In 2018, Ming Zhao [5] proposed a novel compact cat swarm optimization algorithm based on a differential method, with better performance than some similar algorithms.
These compact evolutionary algorithms employ a probability distribution to explicitly represent the population of solutions, normally a normal distribution function. Instead of a large population and many variables, only the expectation and variance of the representative probability model, one probability model, and one individual are adopted, so only a few variables and a limited run space are required; that is, the algorithm is designed around a compact idea. It is well known that a good distribution is equivalent to linkage learning [6,7], and a normal distribution model suits the simulation of samples of large size [8,9]. Obviously, not all samples can be described by a normal distribution; some follow non-normal distributions, and if we force a normal distribution onto them, the performance will be barely satisfactory. Therefore, problems with a small sample size could probably be better described by a suitable non-normal distribution.
Inspired by the literature [1,2,3,4,5], we sought another, non-normal probabilistic distribution model to represent the samples, one whose mean and variance vary with its parameters. Updating rules for the mean and the variance help such a probabilistic model generate highly effective solutions. Giving overall consideration to the features of several non-normal probability models, a gamma probability distribution function is employed in this study.
Meanwhile, a corresponding evolutionary algorithm must be chosen to cooperate with the gamma probabilistic model in searching for the best solution to the optimization problem. Chu et al. [10] proposed the cat swarm optimization (CSO) algorithm in 2006. CSO employs a novel combination searching strategy and showed higher performance than corresponding evolutionary algorithms on standard test functions. Tsai and his group then developed it further and proposed improved versions such as parallel cat swarm optimization (PCSO) [11] and enhanced parallel cat swarm optimization (EPCSO) [12]. CSO has also been widely used in application domains with quite good performance [13,14,15]. However, it is population-based, and there is still no population-less version of CSO. So, in this paper, we also select CSO to combine with the gamma probabilistic distribution, which serves as a Small Sample Probability Model. In order to reduce the computing cost and speed up the convergence rate, a gradient descent method is introduced into the seeking mode of CSO. Thus, a novel compact cat swarm optimization scheme with a Small Sample Probability Model (SSPCCSO) is proposed. We employ this novel algorithm to test whether it can solve some problems of a small size.
The remainder of this research is organized as follows: Section 2 presents the sampling mechanism and cat swarm optimization. In Section 3, the proposed compact cat swarm optimization with gamma distribution and gradient descent method is discussed in detail. Section 4 displays the experimental results, and Section 5 summarizes this study.

2. Related Work

In this section, the sampling mechanism based on real-valued coding is presented in detail, and the cat swarm optimization algorithm is also introduced.

2.1. The Sampling Mechanism Based on Real-Valued Coding

As mentioned above, the main feature of compact algorithms is that they are population-less; see [1,2,3,4]. A virtual population based on a probabilistic model is introduced to represent the population. In real-valued compact algorithms [2,3,4], solutions are generated through this probabilistic model, whose probability density function (PDF) is parameterized by a mean and a variance. A small modification of these two parameters affects the solutions of the next generation, so this probabilistic model is named the Perturbation Vector (PV). It is encoded as Formula (1):
$$PV^{t} = [\mu^{t}, \sigma^{t}] \quad (1)$$
where $\mu$ and $\sigma$ are, respectively, the mean and standard deviation values of the corresponding PDF, and the superscript t is the generation index.
Without loss of generality, every design variable x is normalized to the interval [−1,1]. For a PDF restricted to this interval, the corresponding Cumulative Distribution Function (CDF) may not reach 1, because part of the probability mass lies outside [−1,1]. An error function [15] is introduced to solve this problem; thus, the truncated PDF is presented as Formula (2):
$$PDF_{truncated}(x_i) = \frac{\sqrt{\frac{2}{\pi}}\; e^{-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}}}{\sigma_i\left(\operatorname{erf}\!\left(\frac{\mu_i+1}{\sqrt{2}\,\sigma_i}\right)-\operatorname{erf}\!\left(\frac{\mu_i-1}{\sqrt{2}\,\sigma_i}\right)\right)} \quad (2)$$
where $x_i$ is the i-th dimension of the design variable x, and $\mu_i$ and $\sigma_i$ are the mean and standard deviation associated with $x_i$. The corresponding CDF value can be obtained through Formula (3). When a CDF value is generated, the corresponding x can be calculated by computing the inverse function of Formula (3). The sampling mechanism of the PV can be seen in Figure 1.
$$CDF(x_i) = \int_{-1}^{x_i} PDF_{truncated}(x)\,dx \quad (3)$$
The sampling procedure can be described as follows. Firstly, a random number in [0,1] is generated from a uniform distribution and taken as a CDF value; then, by computing the inverse function of the CDF, the value $x_i$ is calculated. $x_i$ is the needed solution; note that the solution is not generated directly but obtained by inverting the CDF of the probabilistic model. The sampling mechanism is interpreted in Figure 1.
In the real sampling process, in order to reduce the computational cost, the inverse CDF of the design variable x[i] is computed approximately by means of Chebyshev polynomials [16].
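To make the mechanism concrete, the following sketch draws one solution vector from the PV by inverse-CDF sampling of the truncated normal distribution. It is illustrative only: the function name sample_pv is ours, and scipy's truncnorm quantile function stands in for the Chebyshev-polynomial approximation used in the actual algorithms.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_pv(mu, sigma):
    """Draw one candidate solution in [-1, 1] from the PV = [mu, sigma].

    Implements the inverse-CDF sampling described above: a uniform number
    in [0, 1] is treated as a CDF value and mapped back through the
    truncated-normal quantile function.
    """
    a = (-1.0 - mu) / sigma            # lower truncation bound in standard units
    b = (1.0 - mu) / sigma             # upper truncation bound in standard units
    p = np.random.uniform(0.0, 1.0, size=np.shape(mu))
    return truncnorm.ppf(p, a, b, loc=mu, scale=sigma)
```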
Another highlight of the sampling mechanism of real-valued compact evolutionary algorithms is the updating rules for the mean and variance. When two individuals are compared, the winner is the one with better fitness and the other is the loser. A more effective solution can then be expected from the PV by updating the mean and variance. The updating rules are shown as Formulas (4) and (5):
$$\mu^{t+1}(i) = \mu^{t}(i) + \frac{1}{N_p}\left(winner(i) - loser(i)\right) \quad (4)$$
$$\left(\sigma^{t+1}(i)\right)^{2} = \left(\sigma^{t}(i)\right)^{2} + \left(\mu^{t}(i)\right)^{2} - \left(\mu^{t+1}(i)\right)^{2} + \frac{1}{N_p}\left(winner(i)^{2} - loser(i)^{2}\right) \quad (5)$$
where $N_p$ is the size of the virtual population, the superscript t is the generation index, and i indexes the dimensions of the design variable x. The details can be found in the literature [2,3,4].
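A minimal sketch of the updating rules (4) and (5), assuming minimization and elementwise numpy arrays; the variance guard at the end is our addition to keep the standard deviation real-valued:

```python
import numpy as np

def update_pv(mu, sigma, winner, loser, n_p=300):
    """Shift the PV toward the winner (Formulas (4) and (5))."""
    mu_next = mu + (winner - loser) / n_p                        # Formula (4)
    var_next = sigma**2 + mu**2 - mu_next**2 \
               + (winner**2 - loser**2) / n_p                    # Formula (5)
    sigma_next = np.sqrt(np.maximum(var_next, 1e-12))            # keep sigma real
    return mu_next, sigma_next
```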

2.2. Cat Swarm Optimization (CSO)

Inspired by the behavior of cats, Chu and Tsai [10] proposed the cat swarm optimization algorithm in 2006. A combination of two search logics is employed in this algorithm: the seeking mode and the tracing mode. All of the cats are divided into two groups before the iterations. The update rules are very similar to those of the traditional particle swarm optimization (PSO) algorithm: each designed cat represents a solution to the problem to be solved and has its own position and velocity; solutions are updated through the cats' positions and velocities and evaluated by their fitness for the problem; and the global best is chosen and guides all of the cats in seeking their next position and velocity. From the perspective of biological group behavior, however, CSO is clearly different from the particle swarm optimization algorithm. The details of the CSO algorithm are introduced in the following sections.

2.2.1. Seeking Mode

The number of cats in seeking mode is decided by a parameter GR (group rate), which is normally set to 0.98 [8]. Cats in seeking mode make only minor adjustments to their positions and do not change their velocities. The following steps are implemented.
Firstly, every cat copies its own position several times, according to the size of the parameter SMP, and the copies are stored in the corresponding seeking mode pool (SMP) units. Then, each position in the SMP is recalculated by a mutation operator: a dimension of the expected variable, $x_i$, is chosen to mutate, and the range of variation is decided by a random number of up to 20% of $x_i$. The mutation operation is described as Formula (6):
$$x_i = x_i + \Delta x_i \quad (6)$$
The position with the best fitness in the SMP will be chosen to update x i .
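A sketch of the seeking step described above, assuming minimization; smp and the 20% mutation range (often called SRD in the CSO literature) are taken as parameters:

```python
import numpy as np

def seeking_mode(cat_x, fitness, smp=5, srd=0.2):
    """Copy the cat smp times, mutate each copy by up to 20% (Formula (6)),
    and return the copy with the best (lowest) fitness."""
    copies = np.tile(cat_x, (smp, 1))
    delta = (2.0 * np.random.rand(*copies.shape) - 1.0) * srd * copies
    candidates = copies + delta                  # x_i = x_i + delta_x_i
    best = min(candidates, key=fitness)          # minimisation assumed
    return np.array(best)
```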

2.2.2. Tracing Mode

The evolutionary process for the cats in tracing mode is similar to that of the particles in the PSO algorithm, but they differ in one respect: each cat in tracing mode traces only the cat with the global best fitness to update its own velocity and position, whereas the particles in PSO [17] trace both the global best individual and the local best individual. The updating rule for the tracing mode is expressed as Formulas (7) and (8):
$$v_k^{t+1} = \omega\, v_k^{t} + Const \cdot random \cdot \left(x_{gb}^{t} - x_k^{t}\right) \quad (7)$$
$$x_k^{t+1} = x_k^{t} + v_k^{t+1} \quad (8)$$
where $x_{gb}$ is the position of the cat with the best fitness, $x_k$ is the position of $cat_k$, t is the iteration index, $Const$ is a constant, and $random$ is a random number in [0,1].
The cat in seeking mode is compared with the best-fitness cat in tracing mode, and the winner is chosen to update the variable. The final $x_{gb}$ is the required solution.
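A sketch of one tracing-mode step under Formulas (7) and (8); the default values of w and const follow the CSO parameters listed in Table 1, and minimization is assumed:

```python
import numpy as np

def tracing_mode(x_k, v_k, x_gb, w=0.9, const=2.0):
    """Chase the global best cat: Formula (7) updates the velocity,
    Formula (8) moves the position."""
    r = np.random.rand(*np.shape(x_k))
    v_next = w * v_k + const * r * (x_gb - x_k)   # Formula (7)
    x_next = x_k + v_next                         # Formula (8)
    return x_next, v_next
```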

3. The Proposed Compact Cat Swarm Optimization Scheme Based on Small Sample Probability Model (SSPCCSO)

In this section, a novel compact swarm optimization scheme based on the Small Sample Probability Model is proposed. First, a sampling mechanism with a new gamma distribution model is introduced in Section 3.1; then, a new differential operation implemented in seeking mode, together with another highlight, a gradient descent method, is presented in Section 3.2. Section 3.3 states the tracing mode, and Section 3.4 summarizes the whole SSPCCSO procedure.

3.1. Virtual Population and Sampling Mechanism with Real-Valued Coding

The main feature of compact evolutionary optimization algorithms is that they are population-less: a probabilistic model is employed to represent the distribution of the solution set instead of processing an actual population. Here, a gamma distribution model acts as the Perturbation Vector (PV). The PV is again an n × 2 matrix expressed as Formula (1), and, as mentioned above, it is used to generate new individuals. The sampling mechanism is the same as for the PV with a normal distribution (see Figure 1).
Normally, a gamma probability density function (PDF) and its CDF [17] are presented as Formulas (9) and (10):
$$f(x;k,\theta) = \frac{1}{\Gamma(k)\,\theta^{k}}\, x^{k-1} e^{-x/\theta} \quad (9)$$
$$F(x;k,\theta) = \int_{0}^{x} f(u;k,\theta)\,du = \frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{x}{\theta}\right) \quad (10)$$
where k and $\theta$ are the two parameters of the gamma PDF; the PDF curve is shown in Figure 2 [13].
The lower incomplete gamma function is defined by $\gamma(s,x) = \int_{0}^{x} t^{s-1} e^{-t}\,dt$ (note that the upper incomplete gamma function is $\Gamma(s,x) = \int_{x}^{\infty} t^{s-1} e^{-t}\,dt$, and the ordinary gamma function is $\Gamma(s) = \Gamma(s,0) = \gamma(s,\infty)$). Thus, the mean and variance for the gamma PDF are calculated by $E[X] = k\theta$ and $Var[X] = k\theta^{2}$.
Most of the probability mass lies in the interval [0,20], and we define [0,20] as the whole solution domain. However, there may still be some potential solutions outside [0,20], so a mapping must be employed to bring them back into [0,20].
Without loss of generality, all of the variables should be normalized to [−1,1]. As discussed above, we define the variable x on [0,20]; the new variable is then mapped as $y = 10(x + 1)$. Thus, we consider the gamma distribution truncated to [0,20], and then:
$$f(10(x+1);k,\theta) = \frac{10^{k-1}}{\Gamma(k)\,\theta^{k}}\,(x+1)^{k-1} e^{-10(x+1)/\theta} = \frac{1}{10\,\Gamma(k)\left(\frac{\theta}{10}\right)^{k}}\,(x+1)^{k-1} e^{-(x+1)/(\theta/10)} = \frac{1}{10\,\Gamma(k)\,t^{k}}\,(x+1)^{k-1} e^{-(x+1)/t} = \frac{1}{10}\, f(x+1;k,t) \quad (11)$$

where $t = \theta/10$, and
$$F(10(x+1);k,\theta) = \frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{10(x+1)}{\theta}\right) = \frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{x+1}{t}\right) \quad (12)$$
Thus, the new distribution truncated on [−1,1] is represented by Formula (13):
$$PDF_{truncated} = \frac{\frac{1}{10}\, f(x+1;k,t)}{\frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{2}{t}\right)} = \frac{1}{10\,\gamma\!\left(k,\frac{2}{t}\right) t^{k}}\,(x+1)^{k-1} e^{-(x+1)/t} \quad (13)$$
Then, it is ensured that any solution outside [−1,1] is mapped into [−1,1]; see Figure 3. The CDF is presented by Formula (14):
$$CDF_{truncated} = \frac{\frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{x+1}{t}\right)}{\frac{1}{\Gamma(k)}\,\gamma\!\left(k,\frac{2}{t}\right)} = \frac{\gamma\!\left(k,\frac{x+1}{t}\right)}{\gamma\!\left(k,\frac{2}{t}\right)} \quad (14)$$
According to Formulas (13) and (14), the mean and variance can be represented by $\mu = \frac{k\theta}{10}$ and $\sigma^{2} = \frac{k\theta^{2}}{100}$.
The sampling mechanism of a designed variable could be described as follows:
First, the scheme generates a random number p between 0 and 1 according to the uniform distribution model, and the parameters $\mu_i$ and $\sigma_i$ of the Perturbation Vector are initialized ($\mu_i = 0$ and $\sigma_i = \lambda$). This p is taken as the Cumulative Distribution Function (CDF) value of the expected variable x; then, the inverse function of the CDF on (0,1) is applied according to Formula (14). Finally, a new x[i] is obtained.
Apparently, according to the definition of the normal distribution, its solution domain is $(-\infty, +\infty)$, but real problems are limited to a specific domain. A solution obtained from the updating rules may therefore lie outside the definition domain, and mapping an infinite space onto a finite one introduces an error, which the error function is introduced to correct. Because of the particularity of the Cumulative Distribution Function of the gamma distribution, almost all of the solutions lie in [0,20]; the mapping problem turns into one from a finite domain into another finite domain, the error disappears automatically, and thus an error function is not required.
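The gamma sampling step can be sketched as follows, using scipy's gamma distribution for the incomplete-gamma ratio of Formula (14); the function name sample_gamma_pv is our illustrative choice:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def sample_gamma_pv(k, theta):
    """Draw x in [-1, 1] by inverting the truncated gamma CDF (Formula (14)).

    With t = theta/10, CDF(x) = gamma(k, (x+1)/t) / gamma(k, 2/t), so a
    uniform p gives x + 1 = F^{-1}(p * F(2)) for the gamma(k, scale=t) CDF F.
    """
    t = theta / 10.0
    p = np.random.uniform(0.0, 1.0)              # uniform CDF value
    p_raw = p * gamma_dist.cdf(2.0, k, scale=t)  # rescale into the truncated mass
    y = gamma_dist.ppf(p_raw, k, scale=t)        # y = x + 1 lies in [0, 2]
    return y - 1.0
```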

The Perturbation Vector Updating Rule

The PV of the virtual population is designed to create new solutions, and its parameters can be updated to create more significant individuals. The updating rules for the PV are the same as in the literature [2,3,4], expressed as Formulas (4) and (5). Two very important vectors, $winner$ and $loser$, appear in Formulas (4) and (5), where winner indicates the individual with the better fitness when two solutions are compared. From Formulas (4) and (5), $\mu$ and $\sigma$ are updated by the vector $\frac{1}{N_p}(winner - loser)$, which adjusts their values. This vector steers $\mu$ toward the $winner$ so that the next solution is obtained more effectively, while $\sigma$ controls the step size: when the current solution is far from the best solution, a large $\sigma$ is used, and when it is close, a small $\sigma$ is used. Thus, a more effective solution is generated in the next iteration by this updating rule. This can be seen in Figure 4.

3.2. Seeking Mode

Compared with CSO, the SSPCCSO has two highlights: one is the differential operator, and the other is the gradient descent method. The details about these will be presented in this section.

3.2.1. Differential Operator

The cat in seeking mode updates its position in a new way. In the seeking mode of CSO, the position of a cat is updated through Formula (6); in SSPCCSO, a differential operator is introduced instead to enhance the cat's ability to search for the local best solution. We call this differential operator the Exploration Vector (EV), presented as Formula (15):
$$x = x + F\,(winner - loser) \quad (15)$$
where $F \in [1/N_p,\, 0.2]$ is a scale factor that controls the length of the exploration vector $(winner - loser)$.
The vector $(winner - loser)$ deserves further comment. The updating rule for seeking in CSO is described by Formula (6), in which x changes only along its own direction; that is, the update $(x + \Delta x)$ is merely a scalar amplification or reduction along that direction, so a better solution can be sought in only two directions (left of Figure 5). According to the proposed Formula (15), the search can instead proceed toward the best solution in all directions (right of Figure 5).
$(winner - loser)$ is an exploration vector: it can point in any potential direction, which gives it more chances of finding a better solution (right of Figure 5).
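A sketch of the EV step of Formula (15); the scale factor F is drawn uniformly from [1/Np, 0.2] as stated above:

```python
import numpy as np

def differential_seek(x, winner, loser, n_p=300):
    """Move x along the exploration vector (winner - loser), Formula (15)."""
    f_scale = np.random.uniform(1.0 / n_p, 0.2)
    return x + f_scale * (winner - loser)
```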
With reference to the literature [10,11,12], the proportion of cats in seeking mode to cats in tracing mode is 98:2, and the size of the memory pool is five times the number of current cats. In the proposed SSPCCSO, however, there is only one cat, in either seeking mode or tracing mode. In order to mimic the search logic of the CSO algorithm, the updating rule would have to be implemented 245 times in each iteration, so the computing cost might be too high to accept. A gradient descent method [18,19] is therefore introduced to reduce the computing cost and obtain the real local best solution for the designed x. The details are discussed in the following section.

3.2.2. Gradient Descent Method

As mentioned in the previous section, the updating rule in seeking mode is implemented many times, so the process runs at a high computing cost, which may be unacceptable for some engineering problems. With full consideration of these factors, a gradient descent (GD) method [19,20] is introduced. Firstly, a convergence rate (CR) is defined as Formula (16):
$$CR = \frac{fitness(t+2) - fitness(t+3)}{fitness(t) - fitness(t+1)} \quad (16)$$
where $fitness(t)$ is the fitness of the cat in generation t. When $CR < 1$, the acquired solution begins to converge [20].
When the selection of the local best solution is underway, the termination condition has not been met, and the local best cat has not been updated for many generations, the remaining loop iterations become unnecessary. Moreover, even though a local best solution is frequently obtained by the differential operator, it is sometimes not the true local best. Based on these factors, much unnecessary computing cost can be cut, and a more effective solution can be found in less time. Thus, a gradient is introduced and defined as Formula (17):
$$\nabla f = \left[\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n}\right] \quad (17)$$
where n is the dimension of the design variable x. It is well known that the real local best of the variable x can be obtained quickly by following the gradient vector of the function, calculated by Formula (18):
$$x(t+1) = x(t) + \nabla f \cdot d \quad (18)$$
where d is the step length; it is first set to the vector $(winner - loser)$ and can then be optimized by solving $\max_d f(x_i + \nabla f \cdot d)$. The flow chart of the steepest gradient descent method is shown in Figure 6.
When the evolutionary procedure of the designed x enters the gradient descent phase, the computing cost is greatly reduced, and the real local best solution is obtained quickly.
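A sketch of this polishing phase under stated assumptions: the gradient of Formula (17) is estimated by central finite differences, the step length is shrunk geometrically instead of being optimized by the line search of Formula (18), and a CR-style test based on Formula (16) serves as the stopping rule:

```python
import numpy as np

def gradient_descent_polish(f, x, d0=0.1, max_iter=100, eps=1e-3):
    """Refine the local best by steepest descent with a CR-based stop."""
    def grad(x, h=1e-6):                  # finite-difference gradient, Formula (17)
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    hist = [f(x)]
    d = d0
    for _ in range(max_iter):
        x = x - d * grad(x)               # descend for minimisation
        hist.append(f(x))
        d *= 0.9                          # crude stand-in for the line search
        if len(hist) >= 4:
            num = hist[-2] - hist[-1]     # fitness(t+2) - fitness(t+3)
            den = hist[-4] - hist[-3]     # fitness(t)   - fitness(t+1)
            if den != 0 and abs(num / den) < eps:   # CR near 0: converged
                break
    return x
```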

3.3. Tracing Mode

The cat’s behavior in tracing mode is simulated to the particle in the PSO algorithm, but each cat only traces the cat with the global best position to update its own velocity and position. The updating rule of tracing mode could be presented as Formulas (7) and (8). All of the parameters are the same to the original CSO [9].
The combination of seeking mode and tracing mode ensures that the cat swarm optimization algorithm converges quickly and prevents the solution from being trapped in a local optimum.
The cat with the best fitness is chosen in the comparison that determines the final best solution: when the iterations meet the termination condition, the final $x_{gb}$ is the solution to the problem.

3.4. The Procedure and Pseudo Code for SSPCCSO

For the proposed SSPCCSO, all of the variables are first mapped into the interval [−1,1], $\mu$ and $\sigma$ are initialized as $\mu_i = 0$ and $\sigma_i = \lambda$, and a random number in [−1,1] is chosen as the global best $x_{gb}$. Then, the cat is randomly assigned to one of the two modes.
In the iteration phase, when the cat is in tracing mode, a local best solution $x_{lb}$ is generated from the PV, the cat's position and velocity are updated by Formulas (7) and (8), and the comparison between $x_{lb}$ and $cat.x^{t+1}$ determines the winner and the loser, which are then applied to update the PV. Another comparison, between $cat.x^{t+1}$ and $x_{gb}$, decides the new $x_{gb}$. When the cat is in seeking mode, a new candidate solution $x_{lb}$ from the PV is first compared with $cat.x$ to determine the winner and the loser; in this case, the winner and loser are applied to update $cat.x^{t+1}$ by Formula (15) and to update the PV. These steps are repeated many times in a loop; when the loop stagnates, the gradient descent method is invoked to reduce the computing cost and find the real local best. Whichever mode the cat is in, an $x_{gb}$ is chosen in each run. For the sake of clarity, the pseudo code for SSPCCSO is shown in Figure 7.
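The whole procedure can be condensed into the following sketch, which reuses the helper functions sketched in the previous sections (their names are ours, not the paper's). For brevity, it reuses the truncated-normal sampler for the PV draw and omits the stagnation test and the gradient-descent polishing step:

```python
import numpy as np

def sspccso(f, dim, max_iter=2000, n_p=300, lam=0.5):
    """Condensed sketch of the SSPCCSO loop in Figure 7 (minimisation)."""
    mu = np.zeros(dim)                         # PV initialisation: mu_i = 0
    sigma = np.full(dim, lam)                  # sigma_i = lambda
    x_gb = np.random.uniform(-1.0, 1.0, dim)   # random initial global best
    cat_x = np.random.uniform(-1.0, 1.0, dim)
    cat_v = np.zeros(dim)
    tracing = np.random.rand() < 0.02          # 98:2 seeking/tracing split
    for _ in range(max_iter):
        x_lb = sample_pv(mu, sigma)            # candidate drawn from the PV
        w, l = (x_lb, cat_x) if f(x_lb) < f(cat_x) else (cat_x, x_lb)
        if tracing:
            cat_x, cat_v = tracing_mode(cat_x, cat_v, x_gb)
        else:
            cat_x = differential_seek(cat_x, w, l, n_p)   # Formula (15)
        mu, sigma = update_pv(mu, sigma, w, l, n_p)       # Formulas (4)-(5)
        if f(cat_x) < f(x_gb):
            x_gb = cat_x.copy()                           # elitist global best
    return x_gb
```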

4. Experimental Results and Analysis

With reference to the literature [4,20], SSPCCSO is tested on 47 benchmark functions, which include functions whose coordinates have been transformed and functions that have been shifted [21]. Composition test functions [22] are also introduced; all of the benchmark functions are listed in Appendix A.
SSPCCSO is a compact optimization algorithm based on small-size sampling. First, we select rcGA, cDE and cPSO as comparison algorithms. As SSPCCSO is also an evolutionary algorithm, traditional DE [23], PSO [24] and CSO [10] should be considered. From the perspective of saving memory, ISPO [25] is indispensable and is likewise included in the comparison with SSPCCSO.
All of the experiments were carried out in MATLAB on a personal computer equipped with a Pentium (R) dual-core E6600 CPU at 3.06 GHz and 2.96 GB of RAM, running Windows XP. Each comparison algorithm uses the parameter set through which it obtains its best results. With reference to the literature [4,26], all of the parameters of the compared algorithms are listed in Table 1. For all of the real population-based algorithms, the population size is 60; for all of the virtual population-based algorithms, the population size is 300. To achieve a truly fair comparison, every algorithm was evaluated by averaging over 30 runs. In all of the tables, each value represents the corresponding mean value and standard deviation calculated over the 30 runs of each comparison algorithm, and "+", "−" and "=" have the same meaning as in the literature [2].
The remainder of this section is organized as follows: first, a comparison of memory usage is given in Table 2; then, comparisons among the memory-saving algorithms are presented; after that, comparisons between the population-based algorithms and SSPCCSO are shown; finally, the results for SSPCCSO are analyzed.

4.1. Comparison for Memory Usage

The proposed SSPCCSO scheme adopts a gamma probability distribution model to represent the population, and only one cat is used. The cat has the same data structure as the particle of cPSO, so SSPCCSO also has the same memory footprint as cPSO: only five persistent variables are required for the whole algorithm. The memory usage of all of the compared algorithms can be seen in Table 2.
From the data in Table 2, it can be seen that SSPCCSO, cPSO, cDE, rcGA and ISPO all have modest memory requirements; they belong to the compact optimization algorithms. In terms of memory usage, SSPCCSO is better than the population-based CSO.

4.2. Comparisons for Compact Optimization Algorithms

The experimental data in Table 3 display the results for SSPCCSO and the other compared compact bio-inspired algorithms. Across the 47 test functions, SSPCCSO exhibits quite good performance: it outperforms cPSO on 30 test functions, while cPSO exceeds SSPCCSO on only 17 benchmark functions. Among all of the memory-saving algorithms, SSPCCSO outperforms the other compared algorithms on over 24 functions. From a mathematical point of view, this performance rests on two factors. First, a differential operator is introduced in seeking mode as a substitute for the original mutation operator. The difference between a solution cat.x[i] and another variable generates a moving direction that can rotate through a full 360 degrees around the existing solution, and the magnitude of the variation of cat.x[i] is decided by the size of the vector $(winner - loser)$; potentially more efficient solutions around cat.x[i] are found by this searching method, which is similar to that of the cDE algorithm. Second, SSPCCSO also keeps the search logic of PSO; that is, the proposed SSPCCSO has the search ability of both cPSO and cDE. Since it combines these two search abilities, it is not surprising that its search performance exceeds both of these algorithms.

4.3. Comparison between the Corresponding Population-Based Algorithms and SSPCCSO

In addition to the comparison with the memory-saving algorithms, another group of comparisons between memory-saving and non-memory-saving algorithms is used to test performance; this group comprises SSPCCSO, CSO, PSO and DE. Table 4 presents the comparison results on the 47 test functions. SSPCCSO did better on 10 benchmarks, even though only one cat was employed; the same situation occurs in the comparisons among DE [23], cDE [3], PSO [24] and cPSO [4]. Obviously, compared with the population-based bio-inspired algorithms, the search ability of the single individual in SSPCCSO is limited, and there is no collective cooperative search by a large population. However, the performance of SSPCCSO still exceeds that of the other algorithms.

4.4. Comparison against Swarm-Based Version Algorithms Based on Iterations and Solution

Another indicator of an algorithm's performance is the convergence rate: after the same number of iterations, different algorithms converge to different results. A comparison of the convergence results on the same test function with the same number of iterations is shown in Table 5; test function 1 [22] is selected.
Table 5 shows that SSPCCSO exceeds the other compared algorithms with a faster convergence rate, except for CSO, and it ensures gradual convergence in the early iterations. Table 6 shows that SSPCCSO is also much better than PSO and cPSO in terms of the convergence results.
Due to space constraints, no more comparison results are displayed; similar behavior can be observed on the other test functions.
Because of the many local cycles in seeking mode, SSPCCSO shows no advantage in computing cost by itself. However, the introduced gradient descent method makes up for this shortcoming: it ends unnecessary calculations in advance, so the algorithm achieves better performance in less running time.

5. Conclusions

A novel compact cat swarm optimization scheme based on gradient descent is proposed in this study. It keeps the search logic of CSO but introduces a gradient descent method into the scheme to seek the optimal solution. According to the experimental results, this scheme greatly reduces the computing cost, and it outperforms the related compact optimization algorithms on most of the benchmark functions. More significantly, its design is based on the gamma probability distribution for solving small-size sampling problems, so it may suggest a new means of solving optimization problems with small samples.

Author Contributions

Conceptualization, M.Z.; software, Z.H. Resources, T.L.; data curation, Y.Y.; writing—original draft preparation, M.Z.; writing—review and editing, Z.H.; funding acquisition, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Hubei Provincial Department of Education: 21D031.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

(1) Test function 1:
$$f_1(y) = \sum_{i=1}^{D} z_i^2,\quad z_i = y_i - o_i;\quad D = [-100, 100]^{30}$$
(2) Test function 2:
$$f_2(y) = \sum_{i=1}^{D}\left(\sum_{j=1}^{i} x_j\right)^2,\quad z_i = x_i - o_i;\quad D = [-100, 100]^{30}$$
(3) Test function 3:
$$f_3(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right],\quad D = [-100, 100]^{30}$$
(4) Test function 4:
$$f_4(x) = -20\, e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi z_i)} + 20 + e,\quad z_i = x_i - o_i;\quad D = [-32, 32]^{30}$$
(5) Test function 5:
$$f_5(x) = \sum_{i=1}^{n}\frac{z_i^2}{4000} - \prod_{i=1}^{n}\cos\!\left(\frac{z_i}{\sqrt{i}}\right) + 1,\quad z_i = x_i - o_i;\quad D = [-600, 600]^{30}$$
(6) Test function 6:
$$f_6(x) = 10n + \sum_{i=1}^{n}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z_i = x_i - o_i,\quad o = [o_1, o_2, o_3, \ldots, o_n],\quad D = [-5, 5]^{30}$$
(7) Test function 7:
$$f_7(x) = \sum_{i=1}^{M}\left[y_i^2 - 10\cos(2\pi y_i)\right] + 10n,\quad y_i = \begin{cases} z_i & \text{if } |z_i| < 1/2 \\ \operatorname{round}(2 z_i)/2 & \text{if } |z_i| \ge 1/2 \end{cases},\quad z_i = x_i - o_i;\quad D = [-500, 500]^{30}$$
(8) Test function 8:
$$f_8(x) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{|x_i|}\right),\quad D = [-500, 500]^{30}$$
(9) Test function 9:
$$f_9(x) = \max_i\left(\left|A_i x_i - B_i\right|\right),\quad B_i = A_i \times o_i,\quad D = [-100, 100]^{30}$$
(10) Test function 10:
$$f_{10}(x) = \sum_{i=1}^{n}\left(A_i x_i - B_i(x)\right)^2$$
(11) Test function 11:
$$f_{11}(x) = -20\, e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi z_i)} + 20 + e,\quad z = M(x - o),\quad Cond(M) = 1,\quad D = [-32, 32]^{30}$$
(12) Test function 12:
$$f_{12}(x) = \sum_{i=1}^{n}\frac{z_i^2}{4000} - \prod_{i=1}^{n}\cos\!\left(\frac{z_i}{\sqrt{i}}\right) + 1,\quad z = M(x - o),\quad Cond(M) = 3,\quad o = [o_1, \ldots, o_n],\quad D = [-600, 600]^{30}$$
(13) Test function 13:
$$f_{13}(x) = 10n + \sum_{i=1}^{M}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z = M(x - o),\quad Cond(M) = 3,\quad o = [o_1, \ldots, o_n],\quad D = [-5, 5]^{30}$$
(14) Test function 14:
$$f_{14}(x) = \sum_{i=1}^{n}\sum_{k=0}^{k_{max}} a^{k}\cos\!\left(2\pi b^{k}(z_i + 0.5)\right) - n\sum_{k=0}^{k_{max}} a^{k}\cos\!\left(2\pi b^{k} \cdot 0.5\right),\quad a = 0.5,\ b = 3,\ k_{max} = 20,\ z = M(x - o),\ M = 5,\ o = [o_1, \ldots, o_n],\ D = [-0.5, 0.5]^{30}$$
(15) Test function 15:
$$f_{15}(x) = \sum_{i=1}^{n}|x_i| + \prod_{i=1}^{n}|x_i|,\quad D = [-10, 10]^{10}$$
(16) Test function 16:
$$f_{16}(x) = \max_{i=1,\ldots,n}|x_i|,\quad D = [-100, 100]^{10}$$
(17) Test function 17:
$$f_{17}(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$$
where $y_i = 1 + \frac{1}{4}(x_i + 1)$ and
$$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m & \text{if } x_i > a \\ 0 & \text{if } |x_i| \le a \\ k(-x_i - a)^m & \text{if } x_i < -a \end{cases},\quad D = [-50, 50]^{10}$$
(18) Test function 18:
$$f_{18}(x) = \frac{1}{10}\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right]\right\} + \frac{1}{10}(x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right] + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$$
where $D = [-50, 50]^{10}$.
(19) Test function 19:
$$f_{19}(x) = 10n + \sum_{i=1}^{n}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z_i = x_i - o_i;\quad D = [-5, 5]^{50}$$
(20) Test function 20:
$$f_{20}(x) = -\sum_{i=1}^{n}\sin(x_i)\left[\sin\!\left(\frac{i\, x_i^2}{\pi}\right)\right]^{2m},\quad m = 10,\quad D = [0, \pi]^{50}$$
(21) Test function 21:
$$f_{21}(x) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{|x_i|}\right),\quad D = [-500, 500]^{30}$$
(22) Test function 22:
$$f_{22}(x) = -20\, e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi z_i)} + 20 + e,\quad z_i = x_i - o_i;\quad D = [-32, 32]^{100}$$
(23) Test function 23:
$$f_{23}(x) = \sum_{i=1}^{n}\sin(x_i) + \prod_{i=1}^{n} x_i,\quad D = [-10, 10]^{100}$$
(24) Test function 24:
$$f_{24}(x) = \sum_{i=1}^{n} i\, x_i^2,\quad D = [-10, 10]^{100}$$
(25) Test function 25:
$$f_{25}(x) = -\frac{1 + \cos\!\left(12\sqrt{\sum_{i=1}^{n} x_i^2}\right)}{\frac{1}{2}\sum_{i=1}^{n} x_i^2 + 2},\quad D = [-5.12, 5.12]^{100}$$
(26) Test function 26:
$$f_{26}(x) = -\sum_{i=1}^{n}\sin(x_i)\left[\sin\!\left(\frac{i\, x_i^2}{\pi}\right)\right]^{2m},\quad m = 10,\quad D = [0, \pi]^{100}$$
(27) Test function 27:
$$f_{27}(x) = \sum_{i=1}^{n} 5 i\, x_i^2,\quad D = [-5.12, 5.12]^{100}$$
(28) Test function 28:
$$f_{28}(x) = 10n + \sum_{i=1}^{n}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z_i = x_i - o_i;\quad D = [-5.12, 5.12]^{100}$$
(29) Test function 29:
$$f_{29}(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right],\quad D = [-100, 100]^{100}$$
(30) Test function 30:
$$f_{30}(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2,\quad D = [-65536, 65536]^{100}$$
(31) Test function 31:
$$f_{31}(x) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{|x_i|}\right),\quad D = [-500, 500]^{100}$$
(32) Test function 32:
$$f_{32}(x) = \sum_{i=1}^{D} z_i^2,\quad z_i = x_i - o_i;\quad D = [-5, 5]^{100}$$
(33) Test function 33:
$$f_{33}(x) = \max_i |z_i|,\quad z_i = x_i - o_i;\quad D = [-100, 100]^{100}$$
(34) Test function 34:
$$f_{34}(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right],\quad D = [-100, 100]^{100}$$
(35) Test function 35:
$$f_{35}(x) = 10n + \sum_{i=1}^{n}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z_i = x_i - o_i,\quad o = [o_1, o_2, o_3, \ldots, o_n],\quad D = [-5, 5]^{30}$$
(36) Test function 36:
$$f_{36}(x) = \frac{1}{4000}\sum_{i=1}^{n} z_i^2 - \prod_{i=1}^{n}\cos\!\left(\frac{z_i}{\sqrt{i}}\right) + 1,\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-600, 600]^{100}$$
(37) Test function 37:
$$f_{37}(x) = -20\, e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi z_i)} + 20 + e,\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-5, 5]^{100}$$
(38) Test function 38:
$$f_{38}(x) = \sum_{i=1}^{n} fractal1D\!\left(x_i + twist\!\left(x_{(i \bmod n)+1}\right)\right)$$
$$twist(x) = 4\left(x^4 - 2x^3 + x^2\right)$$
$$fractal1D(x) \approx \sum_{k=1}^{3}\sum^{2^{k-1}}\frac{1}{ran2(o)}\, doubledip\!\left(x,\ ran1(o),\ \frac{1}{2^{k-1}\left(2 - ran1(o)\right)}\right)$$
$$doubledip(x, c, s) = \begin{cases} \left(-6144(x-c)^6 + 3088(x-c)^4 - 392(x-c)^2 + 1\right) s & -0.5 < x < 0.5 \\ 0 & \text{otherwise} \end{cases},\quad D = [-1, 1]^{100}$$
(39) Test function 39:
$$f_{39}(x) = \sum_{i=1}^{D} z_i^2,\quad z_i = x_i - o_i,\quad o = [o_1, o_2, o_3, \ldots, o_n],\quad D = [-100, 100]^{50}$$
(40) Test function 40:
$$f_{40}(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} z_j\right)^2,\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-100, 100]^{50}$$
(41) Test function 41:
$$f_{41}(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right],\quad D = [-100, 100]^{50}$$
(42) Test function 42:
$$f_{42}(x) = -20\, e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi z_i)} + 20 + e,\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-32, 32]^{50}$$
(43) Test function 43:
$$f_{43}(x) = \sum_{i=1}^{n}\frac{z_i^2}{4000} - \prod_{i=1}^{n}\cos\!\left(\frac{z_i}{\sqrt{i}}\right) + 1,\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-600, 600]^{50}$$
(44) Test function 44:
$$f_{44}(x) = 10n + \sum_{i=1}^{n}\left[z_i^2 - 10\cos(2\pi z_i)\right],\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-5, 5]^{50}$$
(45) Test function 45:
$$f_{45}(x) = \sum_{i=1}^{M}\left[y_i^2 - 10\cos(2\pi y_i)\right] + 10n,\quad y_i = \begin{cases} z_i & \text{if } |z_i| < 1/2 \\ \operatorname{round}(2 z_i)/2 & \text{if } |z_i| \ge 1/2 \end{cases},\quad z_i = x_i - o_i,\quad o = [o_1, \ldots, o_n],\quad D = [-500, 500]^{50}$$
(46) Test function 46:
$$f_{46}(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2,\quad D = [-100, 100]^{50}$$
(47) Test function 47:
$$f_{47}(x) = \sum_{i=1}^{n}\left(A_i x_i - B_i(x)\right)^2,\quad D = [-\pi, \pi]^{50}$$
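All of the shifted benchmarks above follow the same pattern: subtract a shift vector o from the candidate and evaluate a classical function on the result. As a concrete illustration, the following sketch builds test function 1 (the shifted sphere); the factory name make_shifted_sphere is ours, and the shift vector is drawn randomly here, whereas benchmark suites such as CEC 2008 [21] ship fixed shift vectors.

```python
import numpy as np

def make_shifted_sphere(dim=30, low=-100.0, high=100.0, seed=0):
    """Build f1(y) = sum(z_i^2) with z = y - o on D = [low, high]^dim."""
    rng = np.random.default_rng(seed)
    o = rng.uniform(low, high, dim)      # shift vector: location of the optimum
    def f1(y):
        z = np.asarray(y) - o
        return float(np.sum(z * z))
    return f1
```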

References

  1. Harik, G.R.; Lobo, F.G.; Goldberg, D.E. The compact genetic algorithm. IEEE Trans. Evol. Comput. 1999, 3, 287–297. [Google Scholar] [CrossRef]
  2. Mininno, E.; Cupertino, F.; Naso, D. Real-valued compact genetic algorithms for embedded microcontroller optimization. IEEE Trans. Evol. Comput. 2008, 12, 203–219. [Google Scholar] [CrossRef]
  3. Mininno, E.; Neri, F.; Cupertino, F.; Naso, D. Compact differential evolution. IEEE Trans. Evol. Comput. 2011, 15, 32–54. [Google Scholar] [CrossRef]
  4. Neri, F.; Mininno, E.; Iacca, G. Compact Particle Swarm Optimization. Inf. Sci. 2013, 239, 96–121. [Google Scholar] [CrossRef]
  5. Zhao, M. A novel compact cat swarm optimization based on differential method. Enterp. Inf. Syst. 2020, 14, 196–220. [Google Scholar] [CrossRef]
  6. Muzaffar, E.; Kevin, L.; Fayzul, P. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2001, 38, 129–154. [Google Scholar] [CrossRef]
  7. Li, L.X.; Shao, Z.J.; Qian, J.X. An optimizing method based on autonomous animals: Fish-swarm algorithm. Syst. Eng. Theory Pract. 2002, 22, 32–38. [Google Scholar]
  8. Luo, X.; Yang, Y.; Li, X. Modified shuffled frog-leaping algorithm to solve traveling salesman problem. J. Commun. 2009, 30, 130–135. [Google Scholar]
  9. Luo, J.; Li, X. Improved shuffled frog leaping algorithm for solving TSP. J. Shenzhen Univ. Sci. Eng. 2010, 27, 173–179. [Google Scholar]
  10. Chu, S.C.; Tsai, P.W.; Pan, J.S. Cat swarm optimization. In Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; pp. 854–858. [Google Scholar]
  11. Tsai, P.W.; Pan, J.S.; Chen, S.M.; Liao, B.Y.; Hao, S.P. Parallel cat swarm optimization. In Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, China, 12–15 July 2008; pp. 3328–3333. [Google Scholar]
  12. Wang, Z.H.; Chang, C.C.; Li, M.C. Optimizing least-significant-bit substitution using cat swarm. Inf. Sci. 2012, 192, 98–108. [Google Scholar] [CrossRef]
  13. Available online: https://en.wikipedia.org/wiki/Gamma_distribution#/media/File:Gamma_distribution_pdf.svg (accessed on 1 March 2022).
  14. Panda, G.; Pradhan, P.M.; Majhi, B. IIR system identification using cat swarm optimization. Expert Syst. Appl. 2011, 38, 12671–12683. [Google Scholar] [CrossRef]
  15. Gautschi, W. Error Function and Fresnel Integrals. In Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Abramowitz, M., Stegun, I.A., Eds.; Dover Publications: Mineola, NY, USA, 1972; Chapter 7; pp. 297–309. [Google Scholar]
  16. Cody, W.J. Rational Chebyshev approximations for the error function. Math. Comput. 1969, 23, 631–637. [Google Scholar] [CrossRef]
  17. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  18. Luc, D. Non-Uniform Random Variate Generation; Springer: New York, NY, USA, 1986; Chapter 9, Section 3; pp. 401–428. [Google Scholar]
  19. Snyman, J.A. Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms; Springer: New York, NY, USA, 2005. [Google Scholar]
  20. Wang, H.; Xu, X. Stop Criterion Based on Convergence Properties of GA. J. Wuhan Univ. Technol. Transp. Sci. Eng. 2012, 36, 1091–1094. [Google Scholar]
  21. Tang, K.; Yao, X.; Suganthan, P.N.; MacNish, C.; Chen, Y.P.; Chen, C.M.; Yang, Z. Benchmark Functions for the CEC’ 2008 Special Session and Competition on Large Scale Global Optimization; Nature Inspired Computation and Applications Laboratory, USTC: Hefei, China, 2007. [Google Scholar]
  22. Liang, J.J.; Suganthan, P.N.; Deb, K. Novel Composition Test Functions for Numerical Global Optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 68–75. [Google Scholar]
  23. Neri, F.; Tirronen, V. Recent advances in differential evolution: A review and experimental analysis. Artif. Intell. Rev. 2010, 33, 61–106. [Google Scholar] [CrossRef]
  24. Clerc, M. Particle Swarm Optimization Webpage. Available online: http://clerc.maurice.free.fr/pso/ (accessed on 1 March 2022).
  25. Zhou, J.; Ji, Z.; Shen, L. Simplified intelligence single particle optimization based neural network for digit recognition. In Proceedings of the Chinese Conference on Pattern Recognition, Beijing, China, 22–24 October 2008; pp. 1031–1847. [Google Scholar]
  26. Pedersen, M.E.H. Good Parameters for Particle Swarm Optimization; Technical Report no. HL1001; Hvass Lab.: Copenhagen, Denmark, 2010. [Google Scholar]
Figure 1. Sampling Mechanism for Real-Valued Compact.
Figure 2. Gamma Probability Density Function.
Figure 3. Error mapping for some solutions out of the decision domain.
Figure 4. The interpretation for updating rule with winner and loser.
Figure 5. The search direction for different updating rules.
Figure 6. The flow chart for gradient descent method in seeking mode.
Figure 7. The pseudo-code for SSPCCSO.
Table 1. Selected parameters for all compared algorithms in this project.

| Algorithm | Parameters | Literature |
|---|---|---|
| rcGA | Np = 300 | [1] |
| cDE | Np = 300, F = 0.5, Cr = 0.3 | [3] |
| cPSO | φ1 = 0.2, φ2 = 0.07, φ3 = 3.74, γ1 = γ2 = 1, Np = 300 | [4] |
| CSO | Np = 60, c1 = c2 = 2, W = 0.9 | [10] |
| DE | Np = 60, F = 0.5, Cr = 0.9 | [23] |
| PSO | φ1 = 0.2, φ2 = 0.07, φ3 = 3.74, γ1 = γ2 = 1, Np = 60 | [24] |
| ISPO | A = 1, P = 10, B = 2, Sf = 4, H = 30, ε = 1.0 × 10^−5 | [25] |
| SSPCCSO | w = 0.4, c1 = 2, c2 = 0.07, Np = 300 | — |
Table 2. Running memory space for all compared algorithms.

| Algorithm | Components | Memory Slots |
|---|---|---|
| ISPO | Single individual, 1 global best | 2 |
| rcGA | One individual, persistent elitism, 1 sampling | 4 |
| cDE | 3 samplings, 1 global best | 4 |
| cPSO | 1 sampling, 5 persistent variables | 5 |
| SSPCCSO | The same as cPSO | 5 |
| PSO | Population-based, history and current individuals | 2Np |
| DE | Population-based, current individuals only | Np |
| CSO | Population-based, history and current individuals | 2Np |
Table 3. Comparison for memory-saving algorithms.

| Function | rcGA | cDE | ISPO | cPSO | W | SSPCCSO |
|---|---|---|---|---|---|---|
| fu1 | 1.427 × 10^4 ± 9.27 × 10^3 | 8.73 × 10^−28 ± 1.86 × 10^−28 | 8.437 × 10^−31 ± 3.31 × 10^−31 | 6.471 × 10^1 ± 2.28 × 10^1 | + | 6.170 × 10^−3 ± 1.05 × 10^−3 |
| fu2 | 2.851 × 10^4 ± 6.58 × 10^3 | 3.778 × 10^3 ± 1.85 × 10^3 | 1.184 × 10^1 ± 5.92 × 10^0 | 2.560 × 10^3 ± 2.37 × 10^3 | + | 3.625 × 10^2 ± 7.02 × 10^2 |
| fu3 | 1.282 × 10^9 ± 1.58 × 10^9 | 1.291 × 10^2 ± 1.84 × 10^2 | 2.026 × 10^2 ± 3.28 × 10^2 | 1.320 × 10^5 ± 7.46 × 10^4 | + | 5.776 × 10^−1 ± 7.01 × 10^0 |
| fu4 | 1.874 × 10^1 ± 3.59 × 10^−1 | 8.694 × 10^−2 ± 2.97 × 10^−1 | 1.942 × 10^1 ± 1.57 × 10^−1 | 3.728 × 10^0 ± 3.71 × 10^−1 | + | 5.574 × 10^−1 ± 3.07 × 10^−2 |
| fu5 | 6.434 × 10^−3 ± 1.31 × 10^−2 | 4.289 × 10^−3 ± 1.38 × 10^−2 | 1.124 × 10^1 ± 1.77 × 10^1 | 9.63 × 10^−8 ± 3.07 × 10^−8 |  | 1.613 × 10^−2 ± 2.37 × 10^−3 |
| fu6 | 1.963 × 10^2 ± 2.85 × 10^1 | 7.944 × 10^1 ± 1.48 × 10^1 | 2.548 × 10^2 ± 4.23 × 10^1 | 2.94 × 10^1 ± 7.94 × 10^0 | + | 2.399 × 10^−2 ± 4.09 × 10^−3 |
| fu7 | 2.312 × 10^3 ± 2.47 × 10^3 | 4.983 × 10^3 ± 3.78 × 10^3 | 2.254 × 10^3 ± 8.62 × 10^2 | 4.614 × 10^2 ± 2.40 × 10^2 | + | 2.991 × 10^2 ± 1.75 × 10^−1 |
| fu8 | 3.194 × 10^3 ± 8.01 × 10^2 | 1.673 × 10^3 ± 4.48 × 10^2 | 5.768 × 10^3 ± 5.38 × 10^2 | 3.160 × 10^3 ± 9.75 × 10^2 | + | 1.248 × 10^1 ± 1.90 × 10^0 |
| fu9 | 1.008 × 10^4 ± 2.35 × 10^3 | 8.548 × 10^3 ± 2.14 × 10^3 | 2.755 × 10^4 ± 6.08 × 10^3 | 1.344 × 10^4 ± 1.74 × 10^3 | − | 1.111 × 10^5 ± 5.29 × 10^4 |
| fu10 | 3.697 × 10^5 ± 1.78 × 10^5 | 4.265 × 10^4 ± 2.35 × 10^4 | 4.326 × 10^3 ± 4.54 × 10^3 | 1.040 × 10^6 ± 1.16 × 10^5 | + | 9.390 × 10^5 ± 1.13 × 10^4 |
| fu11 | 1.851 × 10^1 ± 4.37 × 10^−1 | 1.708 × 10^0 ± 1.11 × 10^0 | 1.948 × 10^1 ± 1.89 × 10^−1 | 3.699 × 10^0 ± 3.53 × 10^−1 | + | 8.328 × 10^−2 ± 8.23 × 10^−2 |
| fu12 | 5.769 × 10^−2 ± 1.05 × 10^−1 | 2.395 × 10^−1 ± 2.03 × 10^−1 | 0.001 × 10^−1 ± 0.01 × 10^0 | 9.567 × 10^−8 ± 2.69 × 10^−8 | + | 1.018 × 10^−8 ± 2.61 × 10^−9 |
| fu13 | 2.154 × 10^2 ± 3.96 × 10^1 | 1.314 × 10^2 ± 1.87 × 10^1 | 2.566 × 10^2 ± 4.15 × 10^1 | 3.924 × 10^1 ± 2.31 × 10^1 | − | 2.70 × 10^2 ± 1.81 × 10^−5 |
| fu14 | 3.246 × 10^1 ± 4.53 × 10^0 | 2.988 × 10^1 ± 3.47 × 10^0 | 4.777 × 10^1 ± 4.34 × 10^0 | 3.943 × 10^1 ± 1.15 × 10^0 | − | 7.142 × 10^2 ± 2.37 × 10^−1 |
| fu15 | 5.251 × 10^0 ± 5.19 × 10^0 | 2.315 × 10^−16 ± 5.65 × 10^−16 | 1.184 × 10^−6 ± 2.89 × 10^−17 | 1.778 × 10^0 ± 4.27 × 10^−1 | + | 9.427 × 10^−3 ± 6.15 × 10^−3 |
| fu16 | −1.001 × 10^2 ± 4.43 × 10^−9 | −1.001 × 10^2 ± 1.63 × 10^−9 | −1.001 × 10^2 ± 8.38 × 10^−15 | −1.001 × 10^2 ± 8.45 × 10^−5 | = | −1.001 × 10^2 ± 0.00 × 10^0 |
| fu17 | 1.452 × 10^0 ± 1.88 × 10^0 | 2.817 × 10^−23 ± 3.16 × 10^−23 | 9.994 × 10^−1 ± 1.56 × 10^0 | 1.702 × 10^0 ± 7.08 × 10^−1 | + | 9.518 × 10^−5 ± 1.57 × 10^−6 |
| fu18 | −5.485 × 10^−1 ± 1.11 × 10^0 | −1.150 × 10^0 ± 4.98 × 10^−16 | −2.258 × 10^−1 ± 1.28 × 10^0 | −1.030 × 10^0 ± 7.56 × 10^−1 | − | −4.104 × 10^−1 ± 8.97 × 10^−4 |
| fu19 | 4.338 × 10^2 ± 4.75 × 10^1 | 2.603 × 10^2 ± 3.04 × 10^1 | 4.044 × 10^2 ± 4.15 × 10^1 | 4.403 × 10^1 ± 3.44 × 10^1 | − | 4.500 × 10^2 ± 2.60 × 10^−3 |
| fu20 | −1.517 × 10^1 ± 2.76 × 10^0 | −3.347 × 10^1 ± 1.87 × 10^0 | −3.348 × 10^1 ± 1.64 × 10^0 | −2.063 × 10^1 ± 2.33 × 10^0 | − | −1.988 × 10^1 ± 2.33 × 10^−1 |
| fu21 | 8.372 × 10^3 ± 1.62 × 10^3 | 5.343 × 10^3 ± 8.47 × 10^2 | 9.679 × 10^3 ± 1.09 × 10^3 | 4.784 × 10^3 ± 1.09 × 10^3 | + | 1.42 × 10^0 ± 2.26 × 10^0 |
| fu22 | 2.014 × 10^1 ± 1.48 × 10^−1 | 1.787 × 10^1 ± 2.89 × 10^−1 | 1.951 × 10^1 ± 7.51 × 10^−2 | 3.899 × 10^−1 ± 5.19 × 10^−1 | + | 7.139 × 10^−2 ± 4.17 × 10^−2 |
| fu23 | 1.645 × 10^2 ± 2.36 × 10^1 | 4.042 × 10^1 ± 1.41 × 10^1 | 1.247 × 10^−13 ± 1.01 × 10^−14 | 4.657 × 10^−2 ± 2.39 × 10^−2 | = | 7.319 × 10^−2 ± 5.17 × 10^−3 |
| fu24 | 8.488 × 10^4 ± 8.14 × 10^3 | 2.941 × 10^3 ± 1.59 × 10^3 | 1.252 × 10^−30 ± 3.09 × 10^−31 | 6.918 × 10^−2 ± 2.54 × 10^−2 | − | 8.961 × 10^−3 ± 1.46 × 10^−2 |
| fu25 | −6.349 × 10^−3 ± 3.24 × 10^−4 | −9.161 × 10^−3 ± 6.27 × 10^−4 | −4.551 × 10^−3 ± 3.79 × 10^−4 | −7.85 × 10^−1 ± 1.60 × 10^−14 | − | 0.000 × 10^0 ± 0.00 × 10^0 |
| fu26 | −2.178 × 10^1 ± 3.09 × 10^0 | −4.938 × 10^1 ± 3.54 × 10^0 | −6.556 × 10^1 ± 3.18 × 10^0 | −2.920 × 10^1 ± 2.53 × 10^0 | = | −3.970 × 10^1 ± 1.52 × 10^−1 |
| fu27 | 2.524 × 10^5 ± 2.58 × 10^4 | 1.051 × 10^4 ± 6.31 × 10^3 | 3.498 × 10^−30 ± 8.75 × 10^−31 | 2.217 × 10^−2 ± 4.04 × 10^−3 | − | 1.033 × 10^−2 ± 1.39 × 10^−2 |
| fu28 | 1.166 × 10^3 ± 7.35 × 10^1 | 4.218 × 10^2 ± 3.72 × 10^1 | 7.942 × 10^2 ± 7.69 × 10^1 | 8.776 × 10^−3 ± 2.88 × 10^−3 | = | 6.631 × 10^−3 ± 1.24 × 10^−4 |
| fu29 | 6.906 × 10^10 ± 1.38 × 10^10 | 5.643 × 10^8 ± 4.98 × 10^8 | 3.503 × 10^2 ± 3.91 × 10^2 | 1.220 × 10^2 ± 2.81 × 10^1 | + | 1.286 × 10^0 ± 2.50 × 10^0 |
| fu30 | 1.297 × 10^11 ± 2.63 × 10^10 | 7.066 × 10^10 ± 1.19 × 10^10 | 9.702 × 10^9 ± 3.26 × 10^9 | 4.928 × 10^6 ± 6.56 × 10^5 | − | 1.109 × 10^8 ± 3.44 × 10^8 |
| fu31 | 2.148 × 10^4 ± 2.51 × 10^3 | 1.842 × 10^4 ± 1.29 × 10^3 | 1.971 × 10^4 ± 1.28 × 10^3 | 1.045 × 10^4 ± 2.94 × 10^3 | − | 4.989 × 10^4 ± 8.38 × 10^4 |
| fu32 | 1.591 × 10^3 ± 1.27 × 10^3 | 1.062 × 10^−5 ± 9.78 × 10^−6 | 2.684 × 10^−30 ± 4.75 × 10^−31 | 1.531 × 10^−2 ± 3.80 × 10^−3 | = | 1.774 × 10^−2 ± 2.89 × 10^−2 |
| fu33 | 1.258 × 10^2 ± 6.44 × 10^0 | 8.948 × 10^1 ± 6.18 × 10^0 | 1.773 × 10^2 ± 5.91 × 10^0 | 7.370 × 10^1 ± 3.32 × 10^0 | + | −9.9 × 10^1 ± 0.00 × 10^0 |
| fu34 | 5.331 × 10^10 ± 3.51 × 10^10 | 8.041 × 10^9 ± 4.88 × 10^9 | 2.476 × 10^2 ± 2.13 × 10^3 | 4.896 × 10^5 ± 2.21 × 10^5 | + | 1.790 × 10^0 ± 2.67 × 10^0 |
| fu35 | 9.384 × 10^2 ± 1.78 × 10^2 | 5.578 × 10^2 ± 8.53 × 10^1 | 1.612 × 10^3 ± 2.32 × 10^2 | 6.701 × 10^2 ± 6.36 × 10^1 | + | 1.046 × 10^−2 ± 1.66 × 10^−2 |
| fu36 | 7.462 × 10^2 ± 2.32 × 10^2 | 2.422 × 10^2 ± 8.72 × 10^1 | −1.273 × 10^2 ± 3.77 × 10^0 | −1.082 × 10^2 ± 4.21 × 10^0 | − | 3.137 × 10^−2 ± 6.55 × 10^−2 |
| fu37 | 5.5078 × 10^2 ± 1.83 × 10^−1 | 5.478 × 10^2 ± 9.65 × 10^−1 | 5.498 × 10^2 ± 4.64 × 10^−2 | 5.492 × 10^2 ± 2.51 × 10^−1 | + | 6.525 × 10^−2 ± 4.01 × 10^−2 |
| fu38 | −1.201 × 10^3 ± 4.77 × 10^1 | −1.407 × 10^3 ± 3.24 × 10^1 | −1.267 × 10^3 ± 5.18 × 10^1 | −1.284 × 10^3 ± 3.90 × 10^1 | − | 0.000 × 10^0 ± 0.00 × 10^0 |
| fu39 | 6.157 × 10^4 ± 1.54 × 10^4 | 4.98 × 10^−27 ± 4.22 × 10^−27 | 1.445 × 10^−30 ± 5.58 × 10^−31 | 4.314 × 10^−3 ± 1.24 × 10^−3 | − | 8.704 × 10^−3 ± 2.16 × 10^−4 |
| fu40 | 7.518 × 10^4 ± 1.08 × 10^4 | 3.316 × 10^4 ± 8.12 × 10^3 | 5.665 × 10^2 ± 2.19 × 10^2 | 4.375 × 10^0 ± 9.83 × 10^−1 | + | 1.463 × 10^−1 ± 2.02 × 10^−1 |
| fu41 | 1.044 × 10^10 ± 4.34 × 10^9 | 1.098 × 10^3 ± 1.86 × 10^3 | 2.575 × 10^2 ± 3.11 × 10^2 | 8.941 × 10^1 ± 5.26 × 10^1 | − | 1.397 × 10^0 ± 5.03 × 10^0 |
| fu42 | 1.949 × 10^1 ± 2.59 × 10^−1 | 8.003 × 10^0 ± 4.31 × 10^0 | 1.949 × 10^1 ± 1.48 × 10^−1 | 1.277 × 10^0 ± 3.68 × 10^−1 | + | 7.812 × 10^−2 ± 3.72 × 10^−2 |
| fu43 | 2.978 × 10^−1 ± 3.723 × 10^−1 | 1.354 × 10^−1 ± 2.31 × 10^−1 | 6.857 × 10^0 ± 1.06 × 10^1 | 1.084 × 10^0 ± 3.16 × 10^−1 | + | 2.039 × 10^−2 ± 3.72 × 10^−2 |
| fu44 | 4.707 × 10^−3 ± 7.38 × 10^−3 | 0.001 × 10^0 ± 0.01 × 10^0 | 0.001 × 10^0 ± 0.01 × 10^0 | 0.001 × 10^0 ± 0.01 × 10^0 | = | 0.001 × 10^0 ± 0.01 × 10^0 |
| fu45 | 4.258 × 10^4 ± 4.15 × 10^4 | 2.534 × 10^4 ± 6.28 × 10^3 | 4.066 × 10^3 ± 9.66 × 10^2 | 5.051 × 10^1 ± 4.28 × 10^1 | + | 1.433 × 10^1 ± 1.92 × 10^1 |
| fu46 | 2.368 × 10^4 ± 3.45 × 10^3 | 2.01 × 10^4 ± 3.04 × 10^3 | 3.776 × 10^4 ± 6.47 × 10^3 | 2.320 × 10^4 ± 3.38 × 10^3 | + | 3.577 × 10^3 ± 8.68 × 10^3 |
| fu47 | 2.087 × 10^6 ± 7.97 × 10^5 | 4.588 × 10^5 ± 1.69 × 10^5 | 1.589 × 10^4 ± 1.74 × 10^4 | 1.395 × 10^6 ± 1.14 × 10^6 | − | 3.749 × 10^6 ± 2.47 × 10^6 |
Table 4. Comparison among SSPCCSO, CSO, PSO and DE.

| Benchmark | DE | PSO | W | CSO | W | SSPCCSO |
|---|---|---|---|---|---|---|
| fu1 | 8.269 × 10^1 ± 1.90 × 10^1 | 1.095 × 10^4 ± 2.30 × 10^3 |  | 0.000 × 10^0 ± 0.00 × 10^0 |  | 6.170 × 10^−3 ± 1.05 × 10^−3 |
| fu2 | 3.063 × 10^4 ± 3.70 × 10^3 | 4.232 × 10^4 ± 1.84 × 10^3 | + | 0.000 × 10^0 ± 0.00 × 10^0 | − | 3.625 × 10^2 ± 7.02 × 10^2 |
| fu3 | 2.715 × 10^0 ± 1.11 × 10^6 | 1.103 × 10^9 ± 5.07 × 10^8 | + | 2.890 × 10^1 ± 1.394 × 10^−2 |  | 5.776 × 10^−1 ± 7.01 × 10^0 |
| fu4 | 4.072 × 10^1 ± 1.98 × 10^−1 | 1.639 × 10^1 ± 1.21 × 10^0 | + | 0.001 × 10^0 ± 0.00 × 10^0 | − | 5.574 × 10^−1 ± 3.07 × 10^−2 |
| fu5 | 7.195 × 10^1 ± 9.73 × 10^0 | 0.001 × 10^0 ± 0.001 × 10^0 | − | 0.001 × 10^0 ± 0.01 × 10^0 | − | 1.613 × 10^−2 ± 2.37 × 10^−3 |
| fu6 | 2.151 × 10^2 ± 9.08 × 10^0 | 2.887 × 10^2 ± 3.28 × 10^1 | + | 0.001 × 10^0 ± 0.00 × 10^0 |  | 2.399 × 10^−2 ± 4.09 × 10^−3 |
| fu7 | 2.408 × 10^5 ± 4.96 × 10^4 | 1.321 × 10^5 ± 1.03 × 10^4 | + | 2.990 × 10^2 ± 0.00 × 10^0 | = | 2.991 × 10^2 ± 1.75 × 10^−1 |
| fu8 | 6.328 × 10^3 ± 2.36 × 10^2 | 6.677 × 10^3 ± 6.44 × 10^2 | − | 3.161 × 10^3 ± 9.76 × 10^2 | − | 1.248 × 10^0 ± 1.90 × 10^0 |
| fu9 | 1.633 × 10^4 ± 1.13 × 10^3 | 1.305 × 10^4 ± 3.17 × 10^3 | + | 1.344 × 10^4 ± 1.74 × 10^3 | + | 1.111 × 10^5 ± 5.29 × 10^4 |
| fu10 | 8.508 × 10^5 ± 9.24 × 10^4 | 9.716 × 10^5 ± 1.57 × 10^5 | − | 3.222 × 10^6 ± 1.68 × 10^5 | + | 9.390 × 10^5 ± 1.13 × 10^4 |
| fu11 | 4.217 × 10^0 ± 1.58 × 10^−1 | 1.707 × 10^1 ± 1.73 × 10^0 | + | −1.84 × 10^−6 ± 0.01 × 10^0 | − | 8.328 × 10^−2 ± 8.23 × 10^−2 |
| fu12 | 6.536 × 10^1 ± 1.02 × 10^1 | 1.139 × 10^1 ± 3.08 × 10^1 | + | 9.568 × 10^−8 ± 2.68 × 10^−8 | = | 1.018 × 10^−8 ± 2.61 × 10^−9 |
| fu13 | 2.586 × 10^2 ± 1.12 × 10^1 | 3.155 × 10^2 ± 2.19 × 10^1 | + | 2.701 × 10^2 ± 0.01 × 10^0 | = | 2.70 × 10^2 ± 1.81 × 10^−5 |
| fu14 | 4.003 × 10^1 ± 1.09 × 10^0 | 3.966 × 10^1 ± 1.18 × 10^0 | − | 7.049 × 10^2 ± 2.22 × 10^0 | = | 7.142 × 10^2 ± 2.37 × 10^−1 |
| fu15 | 7.443 × 10^−2 ± 1.89 × 10^−5 | 4.083 × 10^0 ± 2.23 × 10^0 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 9.427 × 10^−3 ± 6.15 × 10^−3 |
| fu16 | −9.942 × 10^−8 ± 1.08 × 10^−1 | −1.001 × 10^2 ± 0.01 × 10^0 | = | −1.001 × 10^2 ± 8.46 × 10^−5 | = | −1.000 × 10^2 ± 0.00 × 10^0 |
| fu17 | 9.424 × 10^−8 ± 5.16 × 10^−8 | 1.046 × 10^1 ± 5.08 × 10^0 |  | 1.631 × 10^0 ± 5.84 × 10^−1 |  | 9.518 × 10^−5 ± 1.57 × 10^−6 |
| fu18 | −1.151 × 10^0 ± 3.37 × 10^−7 | 3.502 × 10^3 ± 9.85 × 10^3 | + | 5.454 × 10^−1 ± 3.27 × 10^−1 | + | −4.104 × 10^−1 ± 8.97 × 10^−4 |
| fu19 | 4.701 × 10^2 ± 1.44 × 10^1 | 6.107 × 10^2 ± 3.43 × 10^1 | + | 4.501 × 10^2 ± 0.01 × 10^0 | = | 4.500 × 10^2 ± 2.60 × 10^−3 |
| fu20 | −1.278 × 10^1 ± 4.28 × 10^−1 | −1.936 × 10^1 ± 1.72 × 10^0 | + | −1.103 × 10^1 ± 1.08 × 10^0 | + | −1.988 × 10^1 ± 2.33 × 10^−1 |
| fu21 | 1.268 × 10^4 ± 3.62 × 10^2 | 9.691 × 10^3 ± 1.14 × 10^3 | − | 4.785 × 10^3 ± 1.48 × 10^2 | − | 1.42 × 10^0 ± 2.26 × 10^0 |
| fu22 | 1.828 × 10^1 ± 4.24 × 10^1 | 2.004 × 10^1 ± 3.75 × 10^−1 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 7.139 × 10^−2 ± 4.17 × 10^−2 |
| fu23 | 1.611 × 10^2 ± 6.39 × 10^0 | 1.815 × 10^1 ± 1.01 × 10^1 | + | 4.658 × 10^−2 ± 2.38 × 10^−2 | − | 7.319 × 10^−2 ± 5.17 × 10^−3 |
| fu24 | 2.386 × 10^4 ± 3.48 × 10^3 | 6.501 × 10^4 ± 9.68 × 10^3 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 8.961 × 10^−3 ± 1.46 × 10^−2 |
| fu25 | −1.119 × 10^−2 ± 1.29 × 10^−3 | −7.493 × 10^−3 ± 1.06 × 10^−3 | − | 0.001 × 10^0 ± 0.01 × 10^0 | = | 0.000 × 10^0 ± 0.00 × 10^0 |
| fu26 | −1.589 × 10^1 ± 5.26 × 10^−1 | −2.677 × 10^1 ± 2.01 × 10^0 | − | −1.978 × 10^1 ± 1.43 × 10^0 | = | −3.970 × 10^1 ± 1.52 × 10^−1 |
| fu27 | 8.899 × 10^4 ± 8.79 × 10^3 | 1.925 × 10^4 ± 1.93 × 10^4 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 1.033 × 10^−2 ± 1.39 × 10^−2 |
| fu28 | 1.177 × 10^3 ± 2.54 × 10^1 | 1.279 × 10^3 ± 4.45 × 10^1 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 6.631 × 10^−3 ± 1.24 × 10^−4 |
| fu29 | 2.636 × 10^10 ± 5.09 × 10^9 | 3.854 × 10^10 ± 1.42 × 10^10 | + | 9.899 × 10^1 ± 1.85 × 10^0 | − | 1.286 × 10^0 ± 2.50 × 10^0 |
| fu30 | 1.477 × 10^11 ± 1.27 × 10^10 | 1.017 × 10^11 ± 1.96 × 10^10 | + | 0.001 × 10^0 ± 8.36 × 10^0 |  | 1.109 × 10^8 ± 3.44 × 10^8 |
| fu31 | 3.026 × 10^4 ± 4.79 × 10^2 | 2.368 × 10^4 ± 1.89 × 10^3 | − | 1.046 × 10^4 ± 2.95 × 10^3 | − | 4.989 × 10^4 ± 8.38 × 10^4 |
| fu32 | 2.094 × 10^5 ± 1.64 × 10^4 | 1.138 × 10^4 ± 1.68 × 10^3 | + | 1.532 × 10^−2 ± 3.81 × 10^−3 | − | 1.774 × 10^−2 ± 2.89 × 10^−2 |
| fu33 | 1.186 × 10^2 ± 2.84 × 10^0 | 1.407 × 10^2 ± 1.28 × 10^1 | + | 7.371 × 10^3 ± 3.33 × 10^0 | + | −9.9 × 10^1 ± 0.00 × 10^0 |
| fu34 | 8.197 × 10^10 ± 1.12 × 10^10 | 7.765 × 10^10 ± 2.17 × 10^10 | + | 9.898 × 10^2 ± 2.25 × 10^−2 | + | 1.790 × 10^0 ± 2.67 × 10^0 |
| fu35 | 1.398 × 10^3 ± 4.25 × 10^1 | 1.055 × 10^3 ± 1.48 × 10^2 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 1.046 × 10^−2 ± 1.66 × 10^−2 |
| fu36 | 1.567 × 10^3 ± 1.34 × 10^2 | 1.242 × 10^3 ± 2.45 × 10^2 | + | 1.083 × 10^2 ± 4.22 × 10^0 | + | 3.137 × 10^−2 ± 6.55 × 10^−2 |
| fu37 | 5.507 × 10^2 ± 1.25 × 10^−1 | 5.508 × 10^2 ± 1.71 × 10^−1 | + | 5.493 × 10^2 ± 2.51 × 10^−1 | + | 6.525 × 10^−2 ± 4.01 × 10^−2 |
| fu38 | −1.056 × 10^3 ± 1.09 × 10^1 | −1.283 × 10^3 ± 2.18 × 10^2 | − | −1.285 × 10^3 ± 3.91 × 10^1 | − | 0.000 × 10^0 ± 0.00 × 10^0 |
| fu39 | 8.338 × 10^3 ± 1.12 × 10^3 | 1.201 × 10^3 ± 2.18 × 10^2 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 8.704 × 10^−3 ± 2.16 × 10^−4 |
| fu40 | 8.969 × 10^4 ± 7.75 × 10^3 | 1.701 × 10^4 ± 3.08 × 10^3 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 1.463 × 10^−1 ± 2.02 × 10^−1 |
| fu41 | 2.142 × 10^9 ± 5.71 × 10^8 | 1.784 × 10^7 ± 5.52 × 10^6 | + | 4.898 × 10^1 ± 1.38 × 10^−2 | − | 1.397 × 10^0 ± 5.03 × 10^0 |
| fu42 | 1.364 × 10^1 ± 4.46 × 10^−1 | 6.878 × 10^0 ± 4.73 × 10^−1 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 7.812 × 10^−2 ± 3.72 × 10^−2 |
| fu43 | 3.711 × 10^−2 ± 3.27 × 10^1 | 2.555 × 10^−2 ± 4.98 × 10^1 | − | 0.001 × 10^0 ± 0.01 × 10^0 | − | 2.039 × 10^−2 ± 3.72 × 10^−2 |
| fu44 | 4.684 × 10^2 ± 1.35 × 10^1 | 0.001 × 10^0 ± 0.01 × 10^0 | = | 0.001 × 10^0 ± 0.01 × 10^0 | = | 0.001 × 10^0 ± 0.01 × 10^0 |
| fu45 | 2.539 × 10^6 ± 2.09 × 10^5 | 1.134 × 10^6 ± 2.17 × 10^3 | + | 0.001 × 10^0 ± 0.01 × 10^0 | − | 1.433 × 10^1 ± 1.92 × 10^1 |
| fu46 | 3.152 × 10^4 ± 1.18 × 10^3 | 1.888 × 10^4 ± 2.17 × 10^3 | + | 0.0010 × 10^0 ± 0.01 × 10^0 | − | 3.577 × 10^3 ± 8.68 × 10^3 |
| fu47 | 4.675 × 10^6 ± 2.19 × 10^5 | 2.183 × 10^6 ± 4.01 × 10^5 | − | 1.694 × 10^7 ± 4.31 × 10^6 | + | 3.749 × 10^6 ± 2.47 × 10^6 |
Table 5. The convergence results on test function 1 based on the same iterations.

| Iterations | PSO | CSO | cPSO | SSPCCSO |
|---|---|---|---|---|
| 100 | 8202.6317 | 0.000000 | 55.695 | 5.809 |
| 200 | 3754.9483 | 0.000000 | 19.852 | 4.277 |
| 1000 | 1417.4688 | 0.000000 | 0.61648 | 0.03692 |
| 2000 | 1414.2868 | 0.000000 | 0.36894 | 0.000032 |
Table 6. The convergence results on test function 4 based on the same iterations.

| Iterations | PSO | CSO | cPSO | SSPCCSO |
|---|---|---|---|---|
| 100 | 14.38578 | 8.88 × 10^−16 | 6.3458 | 10.602 |
| 200 | 12.03479 | 8.88 × 10^−16 | 4.6725 | 4.942 |
| 500 | 10.31688 | 8.88 × 10^−16 | 3.3370 | 3.824 |
| 1000 | 9.05237 | 8.88 × 10^−16 | 2.3393 | 1.427 |
