Article

Elite Opposition-Based Social Spider Optimization Algorithm for Global Function Optimization

Ruxin Zhao, Qifang Luo and Yongquan Zhou
1 School of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006, China
2 Key Laboratories of Guangxi High Schools Complex System and Computational Intelligence, Nanning 530006, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(1), 9; https://doi.org/10.3390/a10010009
Submission received: 27 November 2016 / Revised: 27 December 2016 / Accepted: 4 January 2017 / Published: 8 January 2017
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)

Abstract

The Social Spider Optimization algorithm (SSO) is a recently proposed metaheuristic optimization algorithm. To enhance its convergence speed and computational accuracy, this paper proposes an elite opposition-based Social Spider Optimization algorithm (EOSSO), in which an elite opposition-based learning strategy is incorporated into SSO. The algorithm is tested on 23 benchmark functions, and the results show that the proposed EOSSO is able to obtain accurate solutions while maintaining a fast convergence speed and a high degree of stability.

1. Introduction

Swarm intelligence optimization algorithms originate from the simulation of various types of biological behavior in nature and are characterized by simple operation, good optimization performance, and strong robustness. Inspired by this idea, many bio-inspired swarm intelligence optimization algorithms have been proposed, such as ant colony optimization (ACO) [1], differential evolution (DE) [2], particle swarm optimization (PSO) [3], the firefly algorithm (FA) [4], glowworm swarm optimization (GSO) [5], monkey search (MS) [6], harmony search (HS) [7], cuckoo search (CS) [8,9], the bat algorithm (BA) [10], and the krill herd algorithm (KH) [11,12,13]. Swarm intelligence optimization algorithms can solve problems that traditional methods cannot handle effectively, have shown excellent performance in many respects, and their scope of application has been greatly expanded.
The Social Spider Optimization algorithm (SSO), proposed by Erik Cuevas in 2013 [14], is a metaheuristic optimization algorithm that simulates social-spider behavior. Although SSO performs well on many optimization problems, it still easily falls into local optima. In order to enhance its performance, this paper presents a novel SSO algorithm, called EOSSO, that uses opposition-based learning (OBL) and an elite selection mechanism. OBL is a concept in computational intelligence that has been adopted by many algorithms [15,16] and has been proven to be an effective strategy for improving the performance of various optimization algorithms. The main idea behind OBL is to transform solutions in the current search space into a new search space. By simultaneously considering the solutions in the current search space and the transformed search space, OBL provides a higher chance of finding solutions that are closer to the global optimum. However, OBL is not suitable for all kinds of optimization problems; for instance, the transformed candidate may jump away from the global optimum when solving multimodal problems. To avoid this case, a new elite selection mechanism based on the population is applied after the transformation. The proposed elite opposition-based Social Spider Optimization algorithm is validated on 23 benchmark functions. The results show that the proposed algorithm is able to obtain accurate solutions, and it also has a fast convergence speed and a high degree of stability.
The rest of the paper is organized as follows. Section 2 introduces the original Social Spider Optimization algorithm (SSO). Section 3 proposes the elite opposition-based Social Spider Optimization algorithm (EOSSO). A series of comparison experiments on various benchmark functions is presented in Section 4. The results are analyzed in Section 5. Finally, conclusions and future work are discussed in Section 6.

2. The Social Spider Optimization Algorithm

The SSO algorithm works with two different types of search agents (spiders): males and females. Depending on gender, each individual is driven by a set of different evolutionary operators that mimic the cooperative behaviors commonly observed within the colony. The SSO algorithm starts by defining the numbers of female and male spiders that will be characterized as individuals in the search space. The number of females ($N_f$) is randomly selected within the range of 65%–90% of the entire population ($N$). Therefore, $N_f$ is calculated by the following equation:
$N_f = \mathrm{floor}[(0.9 - \mathrm{rand} \cdot 0.25) \cdot N]$
where $\mathrm{rand}$ is a random number in the range [0, 1], whereas $\mathrm{floor}(\cdot)$ maps a real number to an integer. The number of male spiders ($N_m$) is computed as the complement of $N_f$ with respect to $N$, as follows:
$N_m = N - N_f$
The population ($S$) is composed of $N$ elements and is divided into the sub-groups $F$ and $M$. The group $F$ contains the set of female individuals ($F = \{f_1, f_2, \ldots, f_{N_f}\}$) and the group $M$ contains the set of male individuals ($M = \{m_1, m_2, \ldots, m_{N_m}\}$), where $S = F \cup M$ ($S = \{s_1, s_2, \ldots, s_N\}$), such that $S = \{s_1 = f_1, s_2 = f_2, \ldots, s_{N_f} = f_{N_f}, s_{N_f+1} = m_1, s_{N_f+2} = m_2, \ldots, s_N = m_{N_m}\}$.
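As an illustration of this split, the following Python/NumPy sketch draws $N_f$ and $N_m$ for a colony of size $N$. It is only a sketch under the assumptions stated in the comments (the function name and the example value of $N$ are ours; the authors implemented their experiments in MATLAB, see Section 4.1).

```python
import numpy as np

def split_population(N, rng=np.random.default_rng()):
    """Randomly split a colony of N spiders into N_f females and N_m males.

    Implements N_f = floor[(0.9 - rand * 0.25) * N], so the female fraction
    falls in the 65%-90% range described above.
    """
    N_f = int(np.floor((0.9 - rng.random() * 0.25) * N))
    N_m = N - N_f
    return N_f, N_m

N_f, N_m = split_population(50)
print(N_f, N_m)  # e.g. 41 females and 9 males
```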

2.1. Fitness Assignation

In the natural metaphor, spider size is the characteristic that reflects individual capacity; every spider receives a weight ($w_i$) that represents the quality of the solution corresponding to spider $i$ of the population $S$. The weight of every spider is defined as:
$w_i = \dfrac{J(s_i) - \mathrm{worst}_s}{\mathrm{best}_s - \mathrm{worst}_s}$
where $J(s_i)$ is the fitness value obtained by evaluating the spider position $s_i$ with regard to the objective function $J(\cdot)$. The values $\mathrm{worst}_s$ and $\mathrm{best}_s$ are defined as follows (considering a maximization problem):
$\mathrm{best}_s = \max_{k \in \{1, 2, \ldots, N\}} J(s_k) \quad \text{and} \quad \mathrm{worst}_s = \min_{k \in \{1, 2, \ldots, N\}} J(s_k)$
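A minimal sketch of this weight assignment, assuming a maximization convention and a vector of fitness values $J(s_1), \ldots, J(s_N)$ (illustrative Python; the degenerate-case handling is our own choice, not specified by the paper):

```python
import numpy as np

def assign_weights(fitness):
    """Map fitness values to weights in [0, 1] (maximization convention).

    w_i = (J(s_i) - worst_s) / (best_s - worst_s),
    with best_s = max J and worst_s = min J.
    """
    fitness = np.asarray(fitness, dtype=float)
    best_s, worst_s = fitness.max(), fitness.min()
    if best_s == worst_s:            # degenerate case: all spiders equally fit
        return np.ones_like(fitness)
    return (fitness - worst_s) / (best_s - worst_s)

print(assign_weights([3.0, 7.0, 5.0]))  # [0.  1.  0.5]
```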

2.2. Modeling of the Vibrations through the Communal Web

The communal web is used as a mechanism to transmit information among the colony members. The vibrations depend on the weight and distance of the spider that generated them. The vibration perceived by individual $i$ as a result of the information transmitted by member $j$ is modeled by the following equation:
$Vib_{i,j} = w_j \cdot e^{-d_{i,j}^2}$
where $d_{i,j}$ is the Euclidean distance between spiders $i$ and $j$, such that $d_{i,j} = \lVert s_i - s_j \rVert$.
Although it is possible in principle to compute the perceived vibration for any pair of individuals, only three special relationships are considered within the SSO approach (a code sketch of these terms follows the definitions below):
(1)
Vibrations ($Vibc_i$) are perceived by individual $i$ as a result of the information transmitted by member $c$, an individual with two characteristics: it is the member nearest to individual $i$, and it possesses a higher weight than individual $i$ ($w_c > w_i$).
$Vibc_i = w_c \cdot e^{-d_{i,c}^2}$
(2)
The vibrations ($Vibb_i$) perceived by individual $i$ as a result of the information transmitted by member $b$, where $b$ is the individual holding the best weight (best fitness value) of the entire population $S$, such that $w_b = \max_{k \in \{1, 2, \ldots, N\}} w_k$.
$Vibb_i = w_b \cdot e^{-d_{i,b}^2}$
(3)
The vibrations ($Vibf_i$) perceived by individual $i$ as a result of the information transmitted by member $f$, where $f$ is the female individual nearest to individual $i$.
$Vibf_i = w_f \cdot e^{-d_{i,f}^2}$
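The three vibration terms can be computed directly from these definitions. The sketch below is illustrative Python (the helper name, the female-first storage layout, and the fallback when no heavier member exists are our assumptions, not details from the paper):

```python
import numpy as np

def vibrations(i, S, w, N_f):
    """Return (Vibc_i, Vibb_i, Vibf_i) for spider i.

    S : (N, n) matrix of spider positions, with the N_f females stored first.
    w : (N,) vector of weights. Assumes at least two females; if no member is
    heavier than spider i, the c-term falls back to spider i itself.
    """
    d = np.linalg.norm(S - S[i], axis=1)              # Euclidean distances d_{i,j}

    heavier = np.where(w > w[i])[0]                   # c: nearest heavier member
    c = heavier[np.argmin(d[heavier])] if heavier.size else i
    vib_c = w[c] * np.exp(-d[c] ** 2)

    b = int(np.argmax(w))                             # b: heaviest spider overall
    vib_b = w[b] * np.exp(-d[b] ** 2)

    females = np.arange(N_f)                          # f: nearest female to i
    females = females[females != i]
    f = females[np.argmin(d[females])]
    vib_f = w[f] * np.exp(-d[f] ** 2)

    return vib_c, vib_b, vib_f
```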

2.3. Initializing the Population

The SSO algorithm begins by initializing the set $S$ of $N$ social-spider positions. Each social-spider position, $f_i$ or $m_i$, is an $n$-dimensional vector containing the parameter values to be optimized. These values are randomly and uniformly distributed between the pre-specified lower initial parameter bound ($p_j^{low}$) and the upper initial parameter bound ($p_j^{high}$), as described by the following expressions:
$f_{k,j}^0 = p_j^{low} + \mathrm{rand}(0,1) \cdot (p_j^{high} - p_j^{low}), \quad k = 1, 2, \ldots, N_f; \; j = 1, 2, \ldots, n$
$m_{k,j}^0 = p_j^{low} + \mathrm{rand}(0,1) \cdot (p_j^{high} - p_j^{low}), \quad k = 1, 2, \ldots, N_m; \; j = 1, 2, \ldots, n$
where $k$ and $j$ are the individual and parameter indexes, respectively, and the superscript zero indicates the initial population. The function $\mathrm{rand}(0,1)$ generates a random number between 0 and 1.
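A possible NumPy initialization of the two groups within these bounds (a sketch assuming a common lower/upper bound vector per dimension; the numbers in the usage line are arbitrary):

```python
import numpy as np

def init_positions(N_f, N_m, lower, upper, rng=np.random.default_rng()):
    """Uniformly initialize female and male positions inside [lower, upper]^n."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    F = lower + rng.random((N_f, lower.size)) * (upper - lower)   # females f^0
    M = lower + rng.random((N_m, lower.size)) * (upper - lower)   # males m^0
    return F, M

F, M = init_positions(41, 9, lower=[-100] * 30, upper=[100] * 30)
```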

2.4. Cooperative Operators

2.4.1. Female Cooperative Operator

To emulate the cooperative behavior of the female spiders, the operator considers the position change of female spider $i$ at each iteration. The position change is computed as a combination of three elements: the first involves the change with regard to the nearest member to individual $i$ that holds a higher weight and produces the vibration $Vibc_i$; the second considers the change with regard to the best individual of the entire population $S$, which produces the vibration $Vibb_i$; and the third incorporates a random movement.
For this operation, a uniform random number ($r_m$) is generated within the range [0, 1]. If $r_m$ is smaller than a threshold $PF$, an attraction movement is generated; otherwise, a repulsion movement is produced. This operator can therefore be modeled as follows:
$f_i^{k+1} = \begin{cases} f_i^k + \alpha \cdot Vibc_i \cdot (s_c - f_i^k) + \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2}), & r_m < PF \\ f_i^k - \alpha \cdot Vibc_i \cdot (s_c - f_i^k) - \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2}), & r_m \ge PF \end{cases}$
where $\alpha$, $\beta$, $\delta$, and $\mathrm{rand}$ are random numbers between 0 and 1, and $k$ represents the iteration number. The individual $s_c$ is the nearest member to individual $i$ that holds a higher weight, and $s_b$ is the social spider with the highest weight in the entire population $S$.
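An illustrative implementation of the female update (Python sketch; $s_c$, $s_b$ and the vibration terms are assumed to be computed as in Section 2.2, and the default value $PF = 0.7$ is only an illustrative choice, not a parameter value reported in this paper):

```python
import numpy as np

def female_move(f_i, s_c, s_b, vib_c, vib_b, PF=0.7, rng=np.random.default_rng()):
    """One female update: attraction if r_m < PF, repulsion otherwise.

    alpha, beta, delta and rand are fresh uniform numbers in [0, 1].
    """
    alpha, beta, delta, r = rng.random(4)
    r_m = rng.random()
    step = alpha * vib_c * (s_c - f_i) + beta * vib_b * (s_b - f_i)
    noise = delta * (r - 0.5)
    return f_i + step + noise if r_m < PF else f_i - step + noise
```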

2.4.2. Male Cooperative Operator

In nature, the male social-spider population is divided into two classes: dominant ($D$) and non-dominant ($ND$) males. Male members with a weight above the median weight of the male population are considered dominant individuals ($D$); those below the median are labeled non-dominant ($ND$) males.
In order to implement this computation, the male population ($M = \{m_1, m_2, \ldots, m_{N_m}\}$) is sorted by weight in decreasing order, and the individual whose weight ($w_{N_f+m}$) is located in the middle is considered the median male member. Since the indexes of the male population $M$ within the entire population $S$ are shifted by the number of female members ($N_f$), the median weight is indexed by $N_f + m$. Accordingly, the male spider positions change as follows:
$m_i^{k+1} = \begin{cases} m_i^k + \alpha \cdot Vibf_i \cdot (s_f - m_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2}), & w_{N_f+i} > w_{N_f+m} \\ m_i^k + \alpha \cdot \left( \dfrac{\sum_{h=1}^{N_m} m_h^k \cdot w_{N_f+h}}{\sum_{h=1}^{N_m} w_{N_f+h}} - m_i^k \right), & w_{N_f+i} \le w_{N_f+m} \end{cases}$
where the individual $s_f$ represents the nearest female individual to male member $i$, and $\sum_{h=1}^{N_m} m_h^k \cdot w_{N_f+h} / \sum_{h=1}^{N_m} w_{N_f+h}$ corresponds to the weighted mean of the male population $M$.
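A corresponding sketch of the male update (illustrative Python; using the median of the male weights in place of $w_{N_f+m}$ and precomputed nearest-female positions and $Vibf_i$ values are assumptions of this sketch):

```python
import numpy as np

def male_move(M, w_m, nearest_female, vib_f, rng=np.random.default_rng()):
    """Update all male positions M (N_m x n) given their weights w_m (N_m,).

    nearest_female[i] is the position of the female closest to male i and
    vib_f[i] the corresponding vibration Vibf_i; the median of w_m plays the
    role of the median weight w_{N_f+m}.
    """
    w_med = np.median(w_m)
    weighted_mean = (M * w_m[:, None]).sum(axis=0) / w_m.sum()
    new_M = M.copy()
    for i in range(M.shape[0]):
        alpha, delta, r = rng.random(3)
        if w_m[i] > w_med:     # dominant male: move toward its nearest female
            new_M[i] = M[i] + alpha * vib_f[i] * (nearest_female[i] - M[i]) \
                       + delta * (r - 0.5)
        else:                  # non-dominant male: move toward the weighted mean
            new_M[i] = M[i] + alpha * (weighted_mean - M[i])
    return new_M
```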

2.5. Mating Operator

Mating in a social-spider colony is performed by the dominant males and the female members. When a dominant male spider $m_g$ ($g \in D$) locates a set of female members $E^g$ within a mating range $r$, it mates and forms a new brood $s_{new}$, which is generated from all the elements of the set $T^g = E^g \cup \{m_g\}$. If the set $E^g$ is empty, the mating operation is canceled. The range $r$ is defined as a radius that depends on the size of the search space and is computed according to the following model:
$r = \dfrac{\sum_{j=1}^{n} (p_j^{high} - p_j^{low})}{2n}$
In the mating process, the weight of each social spider involved (the elements of $T^g$) defines the probability with which it influences the new brood: spiders with heavier weights are more likely to influence the new individual, while lighter spiders have a lower probability. The influence probability ($Ps_i$) of each member is assigned by the roulette method, which is defined as follows:
$Ps_i = \dfrac{w_i}{\sum_{j \in T^g} w_j}$
where $i \in T^g$.
Once the new spider is formed, it is compared with the worst spider $s_{wo}$ of the colony according to their weight values (where $w_{wo} = \min_{l \in \{1, 2, \ldots, N\}} w_l$). If the new spider is better than the worst spider, the worst spider is replaced by the new one; otherwise, the new spider is discarded and the population remains unchanged.
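The mating step can be sketched as follows (illustrative Python; applying the roulette selection dimension-wise over $T^g$ is one common reading of the operator, since the paper only specifies the influence probabilities):

```python
import numpy as np

def mate(T_positions, T_weights, rng=np.random.default_rng()):
    """Form a new brood s_new from the members of T_g by roulette selection.

    Each dimension of the offspring is inherited from one member of T_g,
    chosen with probability Ps_i = w_i / sum_j w_j.
    """
    T_positions = np.asarray(T_positions, float)
    probs = np.asarray(T_weights, float)
    probs = probs / probs.sum()
    n = T_positions.shape[1]
    donors = rng.choice(len(probs), size=n, p=probs)   # roulette draw per dimension
    return T_positions[donors, np.arange(n)]

def replace_worst(S, weights, s_new, w_new):
    """Replace the worst spider by the new brood only if the brood is heavier."""
    wo = int(np.argmin(weights))
    if w_new > weights[wo]:
        S[wo], weights[wo] = s_new, w_new
    return S, weights
```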

2.6. Computational Procedure

The computational procedure of Algorithm 1 can be summarized as follows:
Algorithm 1: The Social Spider Optimization algorithm
Step 1: Take $N$ as the total number of $n$-dimensional colony members and define the number of male ($N_m$) and female ($N_f$) spiders in the entire population ($S$):
$N_f = \mathrm{floor}[(0.9 - \mathrm{rand} \cdot 0.25) \cdot N], \quad N_m = N - N_f$
where $\mathrm{rand}$ is a random number in the range [0, 1] and $\mathrm{floor}(\cdot)$ maps a real number to an integer.
Step 2: Randomly initialize the female members ($F = \{f_1, f_2, \ldots, f_{N_f}\}$) and male members ($M = \{m_1, m_2, \ldots, m_{N_m}\}$), and calculate the mating radius:
$r = \dfrac{\sum_{j=1}^{n} (p_j^{high} - p_j^{low})}{2n}$
Step 3: Calculate the weight of every spider in $S$.
  For ($i = 1$; $i < N + 1$; $i$++)
     $w_i = \frac{J(s_i) - \mathrm{worst}_s}{\mathrm{best}_s - \mathrm{worst}_s}$, where $\mathrm{best}_s = \max_{k \in \{1, \ldots, N\}} J(s_k)$ and $\mathrm{worst}_s = \min_{k \in \{1, \ldots, N\}} J(s_k)$
  End For
Step 4: Move the female spiders according to the female cooperative operator.
 For ($i = 1$; $i < N_f + 1$; $i$++)
    Calculate $Vibc_i$ and $Vibb_i$
    If ($r_m < PF$), where $r_m = \mathrm{rand}(0, 1)$
     $f_i^{k+1} = f_i^k + \alpha \cdot Vibc_i \cdot (s_c - f_i^k) + \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2})$
    Else
     $f_i^{k+1} = f_i^k - \alpha \cdot Vibc_i \cdot (s_c - f_i^k) - \beta \cdot Vibb_i \cdot (s_b - f_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2})$
    End If
 End For
Step 5: Move the male spiders according to the male cooperative operator.
 Find the median male individual (with weight $w_{N_f+m}$) from $M$.
  For ($i = 1$; $i < N_m + 1$; $i$++)
    Calculate $Vibf_i$
    If ($w_{N_f+i} > w_{N_f+m}$)
     $m_i^{k+1} = m_i^k + \alpha \cdot Vibf_i \cdot (s_f - m_i^k) + \delta \cdot (\mathrm{rand} - \frac{1}{2})$
    Else
     $m_i^{k+1} = m_i^k + \alpha \cdot \left( \frac{\sum_{h=1}^{N_m} m_h^k \cdot w_{N_f+h}}{\sum_{h=1}^{N_m} w_{N_f+h}} - m_i^k \right)$
    End If
  End For
Step 6: Perform the mating operation.
 For ($i = 1$; $i < N_m + 1$; $i$++)
    If ($m_i \in D$)
    Find $E^i$
      If ($E^i$ is not empty)
      Form $s_{new}$ using the roulette method
        If ($w_{new} > w_{wo}$)
         $s_{wo} = s_{new}$
        End If
      End If
    End If
  End For
Step 7: If the stop criterion is met, the process is finished; otherwise, go back to Step 3.
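Putting the steps together, the overall loop has roughly the following shape. This is a compact structural sketch in Python, not the authors' MATLAB implementation: the mating step is simplified to a dimension-wise roulette between the best male and the females inside the mating radius, weights are built from $-J$ so that heavier means better for minimization, and all parameter defaults are illustrative.

```python
import numpy as np

def sphere(x):                      # f1 from Table 1, used here as a toy objective
    return float(np.sum(x ** 2))

def sso_sketch(obj, n=10, N=30, iters=200, lo=-100.0, hi=100.0, PF=0.7, seed=0):
    """A compact, simplified reading of Steps 1-7 for a minimization problem."""
    rng = np.random.default_rng(seed)
    N_f = int(np.floor((0.9 - rng.random() * 0.25) * N))      # Step 1
    S = lo + rng.random((N, n)) * (hi - lo)                   # Step 2 (females first)
    r = (hi - lo) / 2.0                                       # mating radius (equal bounds)

    for _ in range(iters):
        J = np.array([obj(s) for s in S])
        fit = -J                                              # Step 3: weights
        w = (fit - fit.min()) / (fit.max() - fit.min() + 1e-30)
        best = int(np.argmax(w))
        dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)

        new_S = S.copy()
        for i in range(N):
            a, b, d, rr = rng.random(4)
            vib_b = w[best] * np.exp(-dist[i, best] ** 2)
            if i < N_f:                                       # Step 4: female operator
                heavier = np.where(w > w[i])[0]
                c = heavier[np.argmin(dist[i, heavier])] if heavier.size else best
                vib_c = w[c] * np.exp(-dist[i, c] ** 2)
                step = a * vib_c * (S[c] - S[i]) + b * vib_b * (S[best] - S[i])
                sign = 1.0 if rng.random() < PF else -1.0
                new_S[i] = S[i] + sign * step + d * (rr - 0.5)
            else:                                             # Step 5: male operator
                males = np.arange(N_f, N)
                females = np.arange(N_f)
                f = females[np.argmin(dist[i, females])]
                vib_f = w[f] * np.exp(-dist[i, f] ** 2)
                if w[i] > np.median(w[males]):
                    new_S[i] = S[i] + a * vib_f * (S[f] - S[i]) + d * (rr - 0.5)
                else:
                    wm = (S[males] * w[males, None]).sum(0) / (w[males].sum() + 1e-30)
                    new_S[i] = S[i] + a * (wm - S[i])
        S = np.clip(new_S, lo, hi)

        # Step 6: simplified mating between the best male and nearby females
        J = np.array([obj(s) for s in S])
        males = np.arange(N_f, N)
        dom = males[int(np.argmin(J[males]))]
        near = [j for j in range(N_f) if np.linalg.norm(S[dom] - S[j]) < r]
        if near:
            group = np.array(near + [dom])
            gw = 1.0 / (1.0 + J[group] - J[group].min())      # heavier = lower J
            child = S[group[rng.choice(len(group), size=n, p=gw / gw.sum())],
                      np.arange(n)]
            worst = int(np.argmax(J))
            if obj(child) < J[worst]:
                S[worst] = child                              # replace the worst spider

    J = np.array([obj(s) for s in S])                         # Step 7: budget exhausted
    return S[int(np.argmin(J))], float(J.min())

best_x, best_val = sso_sketch(sphere)
print(best_val)
```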

3. Elite Opposition-Based Social Spider Optimization Algorithm (EOSSO)

When SSO is applied to unimodal and multimodal functions, the solutions it obtains are often not good enough. In order to enhance the accuracy of the algorithm, we add one optimization strategy to SSO: a global elite opposition-based learning strategy (GEOLS). In this section, we introduce opposition-based learning (OBL) and the global elite opposition-based learning strategy (GEOLS).

3.1. Opposition Based Learning (OBL)

OBL is a machine intelligence strategy proposed by Tizhoosh in [17]. It considers the current individual and its opposite individual simultaneously in order to obtain a better approximation of the current candidate solution. It has also been proved that an opposite candidate solution has a greater chance of being closer to the global optimal solution than a random candidate solution [17]. The concept of OBL has therefore been used to enhance population-based algorithms [18,19,20,21]. More generally, OBL has been successfully applied in several areas of research, such as reinforcement learning [22], window memorization for morphological algorithms [23], image processing using opposite fuzzy sets [24,25], and popular optimization techniques such as ant colony optimization [26,27,28], GA [29], artificial neural networks with opposite transfer functions and back propagation [30,31], DE, PSO with Cauchy mutation [32], the gravitational search algorithm [33], the harmony search algorithm [34], and BBO [35,36]. The following definitions are used (a code sketch is given at the end of this subsection):
A. Opposite number:
Let $p \in [x, y]$ be a real number. The opposite number of $p$, denoted $p^*$, is defined by:
$p^* = x + y - p$
B. Opposite point:
Let $p = (p_1, p_2, \ldots, p_n)$ be a point in an $n$-dimensional search space, where $p_r \in [x_r, y_r]$ and $r = 1, 2, \ldots, n$. The opposite point $p^* = (p_1^*, p_2^*, \ldots, p_n^*)$ is defined by:
$p_r^* = x_r + y_r - p_r$
C. Opposition-based population initialization:
By utilizing opposite points, a better starting candidate solution may be obtained even when there is no a priori knowledge about the solution. The main steps of this approach are as follows:
Step 1
Initialize the population in a random manner.
Step 2
Calculate the opposite population by:
$op_{a,b} = x_b + y_b - p_{a,b}$
where $a = 1, 2, \ldots, N$ and $b = 1, 2, \ldots, n$; $p_{a,b}$ and $op_{a,b}$ denote the $b$th variable of the $a$th vector of the population and of the opposite population, respectively.
Step 3
Select the fittest $N$ individuals from $\{p \cup op\}$ as the initial population.
D. Opposition-based generation jumping:
By applying a similar approach to the current population, the whole evolutionary process can be forced to jump to new candidate solutions that are fitter than the current ones; from this comparison, the fittest $N$ individuals are selected. In each generation, the search space is reduced when calculating the opposite points, i.e.,
$op_{a,b} = \mathrm{Min}_b^{gn} + \mathrm{Max}_b^{gn} - p_{a,b}$
where $a = 1, 2, \ldots, N$ and $b = 1, 2, \ldots, n$; $[\mathrm{Min}_b^{gn}, \mathrm{Max}_b^{gn}]$ is the current interval of variable $b$ in the population, which becomes increasingly smaller than the corresponding initial range $[x_b, y_b]$.
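The opposite point and the opposition-based initialization described in items A–C can be written compactly as below (illustrative Python; the function names and the sphere-function example are ours, and minimization is assumed). For the generation jumping of item D, the same `opposite` helper can be reused with the population's current per-variable minimum and maximum in place of the initial bounds.

```python
import numpy as np

def opposite(P, lower, upper):
    """Element-wise opposite points: op = lower + upper - p."""
    return np.asarray(lower, float) + np.asarray(upper, float) - P

def obl_initialization(obj, N, lower, upper, rng=np.random.default_rng()):
    """Opposition-based population initialization (minimization assumed):
    sample a random population, build its opposite population over the initial
    bounds, and keep the N fittest individuals of the union {p ∪ op}."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    P = lower + rng.random((N, lower.size)) * (upper - lower)
    union = np.vstack([P, opposite(P, lower, upper)])
    fitness = np.array([obj(x) for x in union])
    return union[np.argsort(fitness)[:N]]

pop = obl_initialization(lambda x: float(np.sum(x ** 2)), N=20,
                         lower=[-100] * 5, upper=[100] * 5)
```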

3.2. Global Elite Opposition-Based Learning Strategy (GEOLS)

The Social Spider Optimization algorithm uses cooperation among the female group in the global search process, which is simulated by changes in the positions of the female spiders. Since this is a stochastic process, the probability of obtaining a good solution is relatively low. To increase the probability of obtaining a better solution in the global search process and to expand the search space, the following strategy is applied in the proposed EOSSO.
Elite opposition-based learning is a recent technique in the field of computational intelligence. Its main idea is: for a feasible solution, calculate and evaluate the opposite solution at the same time, and choose the better one as the individual of the next generation. In this paper, the individual with the best fitness value in the population is viewed as the elite individual. To explain the definition of the elite opposition-based solution, consider the following example. Suppose that the elite individual of the population is $X_e = (x_{e,1}, x_{e,2}, \ldots, x_{e,n})$. For an individual $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,n})$, the elite opposition-based solution of $X_i$ is defined as $X_i^* = (x_{i,1}^*, x_{i,2}^*, \ldots, x_{i,n}^*)$ and can be obtained by the following equation:
$x_{i,j}^* = k \cdot (da_j + db_j) - x_{e,j}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, n$
where $N$ is the population size, $n$ is the dimension of $X$, $k \sim U(0, 1)$, and $(da_j, db_j)$ is the dynamic bound of the $j$th decision variable. $da_j$ and $db_j$ are obtained by the following equation:
$da_j = \min_i(x_{i,j}), \quad db_j = \max_i(x_{i,j})$
Shrinking the search space may cause the algorithm to become stuck in a local minimum. Therefore, in the proposed algorithm, $da_j$ and $db_j$ are updated every 50 generations. The dynamic bound is good at preserving search experience; however, it can make $x_{i,j}^*$ jump out of $(da_j, db_j)$. If that happens, the equation below is used to reset $x_{i,j}^*$:
$x_{i,j}^* = \mathrm{rand}(da_j, db_j), \quad \text{if } x_{i,j}^* < da_j \text{ or } x_{i,j}^* > db_j$
In the global search process, this strategy expands the search space of the algorithm and strengthens the diversity of the population; thus, the global search ability is enhanced by this optimization strategy.
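A sketch of one elite opposition-based step, assuming a minimization objective and the elite taken as the current best individual (illustrative Python only: drawing one $k$ per opposite solution and omitting the 50-generation bound refresh and the integration with the SSO operators are choices of this sketch, not details fixed by the paper):

```python
import numpy as np

def elite_opposition_step(pop, obj, rng=np.random.default_rng()):
    """One elite opposition-based learning step (minimization assumed).

    x*_{i,j} = k * (da_j + db_j) - x_{e,j}, with k ~ U(0, 1), where x_e is the
    elite (best) individual and [da_j, db_j] the dynamic bound of variable j.
    Out-of-bound components are reset uniformly inside (da_j, db_j), and the
    fittest N individuals of {population ∪ opposites} are kept.
    """
    N, n = pop.shape
    fitness = np.array([obj(x) for x in pop])
    da, db = pop.min(axis=0), pop.max(axis=0)          # dynamic bounds per variable
    elite = pop[int(np.argmin(fitness))]               # elite individual
    k = rng.random((N, 1))                             # one k per opposite solution
    opp = k * (da + db) - elite                        # elite opposition-based solutions
    out = (opp < da) | (opp > db)
    opp = np.where(out, da + rng.random((N, n)) * (db - da), opp)

    merged = np.vstack([pop, opp])
    merged_fit = np.concatenate([fitness, [obj(x) for x in opp]])
    keep = np.argsort(merged_fit)[:N]                  # elite selection over the union
    return merged[keep]

pop = np.random.default_rng(1).uniform(-100, 100, (20, 5))
pop = elite_opposition_step(pop, lambda x: float(np.sum(x ** 2)))
```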

4. Simulation Experiments

In this section, 23 benchmark test functions [37] are applied to evaluate the performance of EOSSO. The dimension, search range, and optimal value of the 23 functions are shown in Table 1, Table 2 and Table 3. The rest of this section is organized as follows: the experimental setup is given in Section 4.1, and the comparison of the performance of each algorithm is shown in Section 4.2.
The selected benchmark functions can be divided into three categories: unimodal benchmark functions ($f_1$–$f_7$), multimodal benchmark functions ($f_8$–$f_{13}$), and fixed-dimension multimodal benchmark functions ($f_{14}$–$f_{23}$).

4.1. Experimental Setup

The proposed elite opposition-based Social Spider Optimization algorithm (EOSSO) is compared with ABC, BA, DA, GGSA, and SSO, using the mean and standard deviation to compare their performance. The parameter settings are as follows: for all the optimization algorithms, the population size is N = 50, the number of iterations is 1000, and 30 independent runs are executed. All algorithms were programmed in MATLAB R2012a and simulated on an Intel (R) Core (TM) i5-4590 CPU with 4 GB of memory.
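The reporting protocol (30 independent runs, then best, worst, mean, and standard deviation of the final values) can be reproduced with a few lines, assuming a generic `optimizer(seed)` callable that returns the best objective value found in one run (illustrative Python, not the authors' MATLAB scripts):

```python
import numpy as np

def summarize(optimizer, runs=30):
    """Run `optimizer` independently `runs` times and report the statistics
    used in Tables 4-6 (best, worst, mean and std of the final values)."""
    finals = np.array([optimizer(seed) for seed in range(runs)])
    return {"best": finals.min(), "worst": finals.max(),
            "mean": finals.mean(), "std": finals.std()}

# Dummy optimizer used only to make the snippet runnable: it returns the f1
# value of a single random point; a real run would call EOSSO instead.
dummy = lambda seed: float(np.sum(np.random.default_rng(seed).uniform(-100, 100, 30) ** 2))
print(summarize(dummy))
```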

4.2. Comparison of Each Algorithm Performance

Thirty independent runs were performed for each of the six algorithms; the results are presented in Tables 4–6. The evolution curves and the variance diagrams of $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$, and $f_{23}$ obtained by the six algorithms are presented in Figures 1–10 and Figures 11–20, respectively.
In Tables 4–6, “Function” denotes the test function, “Dim” the dimension, “Best” the best global optimal value over the 30 independent runs, “Worst” the worst value, “Mean” the average value, and “std.” the standard deviation of the 30 independent runs.

5. Result Analysis

As seen from Table 4, for the unimodal benchmark functions, EOSSO obtains a better optimal solution for $f_1$, $f_2$, $f_3$, $f_4$, $f_5$, and $f_7$ and shows very strong robustness. For $f_6$, the precision of the best and mean fitness values of GGSA is higher than that of the other algorithms. For all seven unimodal benchmark functions, the standard deviation of EOSSO is smaller than that of the other algorithms, which means that, for unimodal function optimization, EOSSO has better stability.
Similarly, as seen from Table 5, EOSSO finds the optimal solution for $f_9$ and $f_{11}$, and the corresponding standard deviations are zero. For $f_9$, $f_{10}$, and $f_{11}$, the mean, best, and worst fitness values and the standard deviation of EOSSO are better than those of the other algorithms. These results indicate that EOSSO has a strong search ability and great stability for multimodal function optimization.
For $f_{14}$–$f_{23}$, we can see from Table 6 that the best, worst, and mean fitness values and the standard deviation produced by EOSSO are better than those of the other algorithms. In addition, for some of these functions, EOSSO obtains better best, mean, and worst fitness values, but its standard deviation is worse than that of SSO. After analyzing Tables 4–6, it can be concluded from the experimental results that EOSSO has a great ability for solving function optimization problems.
Figures 1–10 show the evolution curves of the fitness value for $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$, and $f_{23}$. From these figures, we can easily see that EOSSO converges faster than the other population-based algorithms mentioned above, and the values obtained by EOSSO are closer to the optimal values of the benchmark functions. This shows that EOSSO has a faster convergence speed and better precision than SSO and the other population-based algorithms. Figures 11–20 show the ANOVA tests of the global minimum for the same functions; from these figures, we can see that the standard deviation of EOSSO is much smaller and that the number of outliers is smaller than for the other algorithms. This implies that EOSSO performs with a high degree of stability. In sum, the proposed EOSSO is an algorithm with a fast convergence speed, a high level of precision, and great stability.
In Section 4.2, 23 standard benchmark functions are selected to evaluate the performance of EOSSO: $f_1$–$f_7$ are unimodal functions, $f_8$–$f_{13}$ are multimodal functions, and $f_{14}$–$f_{23}$ are fixed-dimension multimodal functions. The experimental results are listed in Tables 4–6, and Figures 1–20 show the evolution curves of the fitness values and the ANOVA tests of the global minimum for $f_1$, $f_4$, $f_7$, $f_9$, $f_{10}$, $f_{11}$, $f_{16}$, $f_{18}$, $f_{19}$, and $f_{23}$. The results in the tables show that more precise solutions can be found by EOSSO, and the figures reflect the fact that EOSSO has a faster convergence speed and higher stability.

6. Conclusions and Future Works

To overcome the disadvantage of the Social Spider Optimization algorithm that it easily falls into a local optimal solution, this paper presents a novel SSO algorithm, called EOSSO, that uses OBL and an elite selection mechanism. Opposition-based learning (OBL) is a concept in computational intelligence that has been proven to be an effective strategy for improving the performance of various optimization algorithms. The main idea behind OBL is to transform solutions in the current search space into a new search space; by simultaneously considering the solutions in the current search space and the transformed search space, OBL provides a higher chance of finding solutions that are closer to the global optimum. This mechanism enhances the diversity of the population, which helps to improve its exploration ability. The results on the 23 benchmark functions show that the performance of EOSSO is better than, or at least comparable with, that of the other population-based algorithms mentioned in this paper; EOSSO has a fast convergence speed, a relatively high degree of stability, and higher precision.
For EOSSO, various issues still deserve further study. Firstly, multi-objective optimization problems can be seen everywhere in the real world; compared with single-objective optimization problems, it is often very challenging to obtain high-quality solutions, and the proposed elite opposition-based Social Spider Optimization algorithm should be applied to such problems in the future to validate its performance. Secondly, many NP-hard problems exist in the literature, such as the traveling salesman problem, the graph coloring problem, radial basis probabilistic neural networks, and the finding of polynomials based on root moments; in order to further test the performance of EOSSO, it should be used to solve these NP-hard problems in the future. Thirdly, the proposed EOSSO should be applied to practical engineering problems, for example, the welded beam and spring design problems, vehicle scheduling optimization, and scheduling optimization of hydropower stations. Finally, although the proposed algorithm is tested on 23 benchmark functions, a more comprehensive computational study should be carried out in the future to test the efficiency of the proposed solution technique.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants No. 61463007 and 61563008, and by the Guangxi Natural Science Foundation under Grant No. 2016GXNSFAA380264.

Author Contributions

Ruxin Zhao designed the algorithm; Qifang Luo performed the experiments and the results analysis; Yongquan Zhou wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef]
  2. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995; Volume IV, pp. 1942–1948.
  4. Yang, X.S. Multi-objective firefly algorithm for continuous optimization. Eng. Comput. 2013, 29, 175–184. [Google Scholar] [CrossRef]
  5. Liu, J.; Zhou, Y.; Zhao, G. Leader glowworm swarm optimization algorithm for solving nonlinear equations systems. Electr. Rev. 2012, 88, 101–106. [Google Scholar]
  6. Mucherino, A.; Seref, O. Monkey search: A novel metaheuristic search for global optimization. In Proceedings of the American Institute of Physics Conference, Gainesville, FL, USA, 28–30 March 2007; pp. 162–173.
  7. Alatas, B. Chaotic harmony search algorithms. Appl. Math. Comput. 2010, 216, 2687–2699. [Google Scholar] [CrossRef]
  8. Yang, X.S.; Deb, S. Cuckoo search via Levy flights. In Proceedings of the World Congress on Nature and Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; pp. 210–214.
  9. Wang, G.-G.; Gandomi, A.H.; Zhao, X.; Chu, H.E. Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput. 2016, 20, 273–285. [Google Scholar] [CrossRef]
  10. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization; Gonzalez, J.R., Pelta, D.A., Cruz, C., Eds.; Springer: Berlin, Germany, 2010; pp. 65–74. [Google Scholar]
  11. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
  12. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  13. Wang, G.; Guo, L.; Wang, H.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  14. Cuevas, E.; Cienfuegos, M.; Zaldivar, D.; Perez-Cisneros, M. A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 2013, 40, 6374–6384. [Google Scholar] [CrossRef]
  15. Wang, H.; Rahnamay, S.; Wu, Z. Parallel differential evolution with self-adapting control parameters and generalized opposition-based learning for solving high-dimensional optimization problems. J. Parallel Distrib. Comput. 2013, 73, 62–73. [Google Scholar] [CrossRef]
  16. Zhou, Y.; Wang, R.; Luo, Q. Elite opposition-based flower pollination algorithm. Neurocomputing 2016, 188, 294–310. [Google Scholar] [CrossRef]
  17. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computation Intelligence on Modeling Control Automation and International Conference on Intelligent Agents, Web Technologies Internet Commerce, Vienna, Austria, 28–30 November 2005; pp. 695–701.
  18. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition versus randomness in soft computing techniques. Appl. Soft Comput. 2008, 8, 906–918. [Google Scholar] [CrossRef]
  19. Wang, H.; Li, H.; Liu, Y.; Li, C.; Zeng, S. Opposition based particle swarm algorithm with Cauchy mutation. In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4750–4756.
  20. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution for optimization of noisy problems. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 1865–1872.
  21. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 2010–2017.
  22. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 2, 64–79. [Google Scholar] [CrossRef]
  23. Tizhoosh, H.R. Opposition-based reinforcement learning. J. Adv. Comput. Intell. Inf. 2006, 10, 578–585. [Google Scholar]
  24. Khalvati, F.; Tizhoosh, H.R.; Aagaard, M.D. Opposition-based window memorization for morphological algorithms. In Proceedings of the IEEE Symposium on Computational Intelligence in Image and Signal Processing, Honolulu, HI, USA, 1–5 April 2007; pp. 425–430.
  25. Al-Qunaieer, F.S.; Tizhoosh, H.R.; Rahnamayan, S. Oppositional fuzzy image thresholding. In Proceedings of the IEEE International Conference on Fuzzy Systems, Barcelona, Spain, 18–23 July 2010; pp. 1–7.
  26. Tizhoosh, H.R. Opposite fuzzy sets with applications in image processing. In Proceedings of the Joint 2009 International Fuzzy Systems Association World Congress and 2009 European Society of Fuzzy Logic and Technology Conference, Lisbon, Portugal, 20–24 July 2009; pp. 36–41.
  27. Malisia, A.R.; Tizhoosh, H.R. Applying opposition-based ideas to the ant colony system. In Proceedings of the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 182–191.
  28. Haiping, M.; Xieyong, R.; Baogen, J. Oppositional ant colony optimization algorithm and its application to fault monitoring. In Proceedings of the 29th Chinese Control Conference (CCC), Beijing, China, 29–31 July 2010; pp. 3895–3903.
  29. Lin, Z.Y.; Wang, L.L. A new opposition-based compact genetic algorithm with fluctuation. J. Comput. Inf. Syst. 2010, 6, 897–904. [Google Scholar]
  30. Ventresca, M.; Tizhoosh, H.R. Improving the convergence of back propagation by opposite transfer functions. In Proceedings of the International Joint Conference on Neural Networks (IJCNN’06), Vancouver, BC, Canada, 16–21 July 2006; pp. 4777–4784.
  31. Ventresca, M.; Tizhoosh, H.R. Opposite transfer functions and back propagation through time. In Proceedings of the IEEE Symposium on Foundations of Computational Intelligence, Honolulu, HI, USA, 1–5 April 2007; pp. 570–577.
  32. Han, L.; He, X.S. A novel opposition-based particle swarm optimization for noisy problems. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August 2007; pp. 624–629.
  33. Shaw, B.; Mukherjee, V.; Ghoshal, S.P. A novel opposition-based gravitational search algorithm for combined economic and emission dispatch problems of power systems. Int. J. Electr. Power Energy Syst. 2012, 35, 21–33. [Google Scholar] [CrossRef]
  34. Chatterjee, A.; Ghoshal, S.P.; Mukherjee, V. Solution of combined economic and emission dispatch problems of power systems by an opposition based harmony search algorithm. Int. J. Electr. Power Energy Syst. 2012, 39, 9–20. [Google Scholar] [CrossRef]
  35. Bhattacharya, A.; Chattopadhyay, P.K. Solution of economic power dispatch problems using oppositional biogeography-based optimization. Electr. Power Compon. Syst. 2010, 38, 1139–1160. [Google Scholar] [CrossRef]
  36. Ergezer, M.; Simon, D.; Du, D.W. Oppositional biogeography-based optimization. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2009), San Antonio, TX, USA, 11–14 October 2009; pp. 1009–1014.
  37. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
Figure 1. Dim = 30, evolution curves of fitness value for $f_1$.
Figure 2. Dim = 30, evolution curves of fitness value for $f_4$.
Figure 3. Dim = 30, evolution curves of fitness value for $f_7$.
Figure 4. Dim = 30, evolution curves of fitness value for $f_9$.
Figure 5. Dim = 30, evolution curves of fitness value for $f_{10}$.
Figure 6. Dim = 30, evolution curves of fitness value for $f_{11}$.
Figure 7. Dim = 2, evolution curves of fitness value for $f_{16}$.
Figure 8. Dim = 2, evolution curves of fitness value for $f_{18}$.
Figure 9. Dim = 3, evolution curves of fitness value for $f_{19}$.
Figure 10. Dim = 4, evolution curves of fitness value for $f_{23}$.
Figure 11. Dim = 30, variance diagram of fitness value for $f_1$.
Figure 12. Dim = 30, variance diagram of fitness value for $f_4$.
Figure 13. Dim = 30, variance diagram of fitness value for $f_7$.
Figure 14. Dim = 30, variance diagram of fitness value for $f_9$.
Figure 15. Dim = 30, variance diagram of fitness value for $f_{10}$.
Figure 16. Dim = 30, variance diagram of fitness value for $f_{11}$.
Figure 17. Dim = 2, variance diagram of fitness value for $f_{16}$.
Figure 18. Dim = 2, variance diagram of fitness value for $f_{18}$.
Figure 19. Dim = 3, variance diagram of fitness value for $f_{19}$.
Figure 20. Dim = 4, variance diagram of fitness value for $f_{23}$.
Table 1. Unimodal benchmark functions.
Function | Dim | Range | $f_{min}$
$f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0
$f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ | 30 | [−100, 100] | 0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0
$f_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 30 | [−100, 100] | 0
$f_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0
Table 2. Multimodal benchmark functions.
Function | Dim | Range | $f_{min}$
$f_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −418.9829 × 5
$f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 30 | [−32, 32] | 0
$f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 30 | [−600, 600] | 0
$f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + 10 \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
Table 3. Fixed-dimension multimodal benchmark functions.
Function | Dim | Range | $f_{min}$
$f_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.00030
$f_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2, 2] | 3
$f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1, 3] | −3.86
$f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32
$f_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363
Table 4. Simulation results for the unimodal benchmark functions.
Function | Dim | Algorithm | Best | Worst | Mean | std.
f 1 30ABC3.69 × 10−82.32 × 10−65.34 × 10−75.40 × 10−7
BA4.58 × 1025.79 × 1033.13 × 1031.39 × 103
GGSA4.45 × 10−204.17 × 10−191.72 × 10−199.49 × 10−20
DA73.21.54 × 1035.50 × 1023.37 × 102
SSO8.63 × 10−28.63 × 10−28.63 × 10−20
EOSSO3.53 × 10−673.53 × 10−673.53 × 10−672.01 × 10−82
f 2 30ABC9.47 × 10−66.56 × 10−52.75 × 10−51.27 × 10−5
BA1.14 × 1022.93 × 1091.03 × 1085.35 × 108
GGSA8.97 × 10−103.42 × 10−91.88 × 10−95.72 × 10−10
DA5.81 × 10−320.58.4452.2
SSO1.201.201.202.26 × 10−16
EOSSO1.92 × 10−381.92 × 10−381.92 × 10−381.33 × 10−53
f 3 30ABC1.14 × 1042.45 × 1041.97 × 1053.08 × 103
BA4.33 × 1031.71 × 1049.12 × 1033.17 × 103
GGSA1.32 × 1025.37 × 1022.68 × 10290.5
DA3.60 × 1021.49 × 1045.39 × 1033.87 × 103
SSO1.981.9819.81.13 × 10−15
EOSSO7.61 × 10−767.61× 10−767.61 × 10−766.24 × 10−91
f 4 30ABC59.673.967.836.2
BA42.968.956.065.4
GGSA5.25 × 10−101.96 × 10−91.24 × 10−94.04 × 10−10
DA7.3824.114.843.9
SSO1.47 × 10−11.47 × 10−11.47 × 10−15.56 × 10−17
EOSSO1.20 × 10−371.20 × 10−371.20 × 10−372.12 × 10−53
f 5 30ABC76789.337.920.7
BA2435.12 × 10259.492.4
GGSA2551.18 × 10233.022.0
DA30.92.36 × 1053.41 × 1045.38 × 104
SSO35.435.435.40
EOSSO26.826.826.81.08 × 10−14
f 6 30ABC3.43 × 10−86.92 × 10−61.01 × 10−61.67 × 10−6
BA6.99 × 1025.71 × 1032.82 × 1031.30 × 103
GGSA0000
DA1.20 × 1021.56 × 1034.22 × 1023.16 × 102
SSO1.08 × 10−11.08 × 10−11.08 × 10−11.41 × 10−17
EOSSO5.01 × 10−15.01 × 10−15.01 × 10−12.26 × 10−16
f 7 30ABC4.19 × 10−110.27.09 × 10−11.58 × 10−1
BA6.93 × 10−32.09 × 10−21.23 × 10−23.65 × 10−3
GGSA3.90 × 10−31.39 × 10−11.73 × 10−22.37 × 10−2
DA1.75 × 10−23.25 × 10−11.27 × 10−18.00 × 10−2
SSO1.68 × 10−21.68 × 10−21.68 × 10−20
EOSSO1.22 × 10−41.22 × 10−41.22 × 10−45.51 × 10−20
Table 5. Simulation results for the multimodal benchmark functions.
Function | Dim | Algorithm | Best | Worst | Mean | std.
f 8 30ABC−1.21 × 104−1.10 × 104−1.16 × 1042.81 × 102
BA−8.45 × 103−5.97 × 103−7.06 × 1037.40 × 102
GGSA−3.93 × 103−2.17 × 103−3.08 × 1033.64 × 102
DA7.3824.114.84.39
SSO−7.61 × 103−7.61 × 103−7.61 × 1032.78 × 1012
EOSSO−7.69 × 103−7.69 × 103−7.69 × 1033.70 × 1012
f 9 30ABC1.0211.65.472.9
BA1.48 × 1022.51 × 1021.99 × 10226.3
GGSA7.9523.8164.57
DA34.31.99 × 1021.14 × 10249.9
SSO63.363.363.30
EOSSO0000
f 10 30ABC9.16 × 1051.58 × 1034.79 × 1043.79 × 104
BA18.519.9192.50 × 101
GGSA1.48 × 10105.26 × 10103.09 × 10107.42 × 1011
DA4.44 × 101510.86.462.01
SSO3.12 × 1013.12 × 1013.12 × 1015.65 × 1017
EOSSO4.44 × 10154.44 × 10154.44 × 10150
f 11 30ABC2.31 × 1071.34 × 1026.06 × 1042.47× 103
BA2.98 × 1025.34 × 1024.31 × 10260
GGSA1.637.873.591.3
DA1.5617.55.053.4
SSO1.26 × 1021.26 × 1021.26 × 1020
EOSSO0000
f 12 30ABC1.02 × 10101.67 × 1083.32 × 1093.87 × 109
BA22.149.738.37.94
GGSA4.78 × 10222.07 × 1013.83 × 1026.14 × 102
DA1.7152.612.212.9
SSO1.37 × 1031.37 × 1031.37 × 1032.21 × 1019
EOSSO9.29 × 1039.29 × 1039.29 × 1033.53 × 1018
f 13 30ABC1.27 × 1083.55 × 1062.93 × 1076.53 × 107
BA80.81.19 × 1021.04 × 10210.1
GGSA5.41 × 10211.09 × 1023.66 × 1032.00 × 103
DA4.953.09 × 1041.52 × 1035.69 × 103
SSO2.04 × 1022.04 × 1022.04 × 1023.53 × 1018
EOSSO6.17 × 1016.17 × 1016.17 × 1013.39 × 1016
Table 6. Simulation results for the fixed-dimension multimodal benchmark functions.
Function | Dim | Algorithm | Best | Worst | Mean | std.
f 14 2ABC9.98 × 1019.98 × 1019.98 × 1011.20 × 105
BA9.98 × 10122.9106.97
GGSA9.98 × 10110.732.36
DA9.98 × 1019.98 × 1019.98 × 1018.09 × 1011
SSO1.991.991.991.36 × 1015
EOSSO9.98 × 1019.98 × 1019.98 × 1014.52 × 1016
f 15 4ABC4.16 × 1041.48 × 1039.93 × 1042.52 × 104
BA3.07 × 1045.19 × 1031.10 × 1031.17 × 103
GGSA4.80 × 1043.43 × 1031.78 × 1036.36 × 104
DA4.91 × 1043.44 × 1031.28 × 1036.43 × 103
SSO4.18 × 1044.18 × 1044.18 × 1041.65 × 1019
EOSSO3.70 × 1043.70 × 1043.70 × 1042.21 × 1019
f 16 2ABC−1.03−1.03−1.031.25 × 106
BA−1.032.1−8.18 × 1016.19 × 101
GGSA−1.03−1.03−1.036.18 × 1016
DA−1.03−1.03−1.036.27 × 108
SSO−1.03−1.03−1.034.52 × 1016
EOSSO−1.03−1.03−1.036.78 × 1016
f 17 2ABC3.97 × 1013.98 × 1013.97 × 1013.64 × 105
BA3.97 × 1013.97 × 1013.97 × 1018.40 × 109
GGSA3.97 × 1013.97 × 1013.97 × 1010
DA3.97 × 1013.97 × 1013.97 × 1017.62 × 107
SSO3.97 × 1013.97 × 1013.97 × 1010
EOSSO3.97 × 1013.97 × 1013.97 × 1010
f 18 2ABC33.0338.64 × 103
BA384.1221.6
GGSA3331.98 × 1015
DA3333.28 × 107
SSO3334.52 × 1016
EOSSO3331.81 × 1015
f 19 3ABC−3.86−3.86−3.861.77 × 106
BA−3.86−3.08−3.831.41 × 101
GGSA−3.86−3.86−3.862.52 × 1015
DA−3.86−3.85−3.861.05 × 103
SSO−3.86−3.86−3.861.36 × 1015
EOSSO−3.86−3.86−3.864.48 × 1014
f 20 6ABC−3.32−3.31−3.321.03 × 103
BA−3.32−3.2−3.256.02 × 102
GGSA−3.32−2.81−3.299.71 × 102
DA−3.32−3.07−3.267.34 × 102
SSO−3.2−3.2−3.21.36 × 1015
EOSSO−3.32−3.2−3.22.17 × 102
f 21 4ABC−10.1−9.64−9.891.58 × 101
BA−10.1−2.63−4.962.85
GGSA−10.1−5.05−5.229.30 × 101
DA−10.1−5.09−9.641.54
SSO−10.1−10.1−10.11.81 × 1015
EOSSO−10.1−10.1−10.19.70 × 1011
f 22 4ABC−10.3−9.78−10.11.90× 101
BA−10.42.755.333.19
GGSA−10.4−5.08−7.562.69
DA−10.4−5.08−9.122.25
SSO−10.3−10.3−10.35.42 × 1015
EOSSO−10.4−10.4−10.42.02 × 1011
f 23 4ABC−10.4−9.67−1.01 × 1012.06 × 101
BA−10.5−2.42−5.363.52
GGSA−10.5−10.5−10.51.32 × 1015
DA−10.5−5.17−9.981.63
SSO−10.5−10.5−10.53.61 × 1015
EOSSO−10.5−10.5−10.52.45 × 1011
