Article

Double-Group Particle Swarm Optimization and Its Application in Remote Sensing Image Segmentation

College of Electronic Science, National University of Defense Technology, Changsha 410000, China
*
Author to whom correspondence should be addressed.
Sensors 2018, 18(5), 1393; https://doi.org/10.3390/s18051393
Submission received: 16 March 2018 / Revised: 26 April 2018 / Accepted: 27 April 2018 / Published: 1 May 2018
(This article belongs to the Section Remote Sensors)

Abstract:
Particle Swarm Optimization (PSO) is a well-known meta-heuristic that has been widely used in both research and engineering. However, the original PSO generally suffers from premature convergence, especially on multimodal problems. In this paper, we propose a double-group PSO (DG-PSO) algorithm to improve the performance. DG-PSO uses a double-group based evolution framework: the individuals are divided into an advantaged group and a disadvantaged group. The advantaged group works according to the original PSO, while two new strategies are developed for the disadvantaged group. The proposed algorithm is first evaluated by comparing it with five other popular PSO variants and two state-of-the-art meta-heuristics on various benchmark functions. The results demonstrate that DG-PSO achieves remarkable accuracy and stability. We then apply DG-PSO to multilevel thresholding for remote sensing image segmentation, where it outperforms five other popular algorithms used for meta-heuristic based multilevel thresholding, which verifies its effectiveness.

1. Introduction

Particle Swarm Optimization (PSO) is an evolutionary optimization algorithm based on swarm intelligence. It was originally proposed by Kennedy and Eberhart in 1995 [1] and is known for its effectiveness and simplicity. It has proved outstanding in solving many complex optimization problems, such as power systems [2], neural network training [3], global path planning [4], and feature selection [5].
However, PSO also suffers from two limitations. The first is that the original PSO tends to converge to local optima when applied to complex problems. The second is that the convergence speed of the original PSO and most of its variants is slow, especially on high-dimensional problems [6]. Therefore, accelerating convergence and avoiding convergence to local optima have become the two most important and appealing goals in particle swarm optimization studies [7,8]. Specifically, the studies can be classified into three strategies: the parameter selection strategy, the topology strategy and the learning strategy.
Parameter selection refers to the optimization of the inertial weight factor, the convergence factor, and the acceleration constants. The inertial weight factor was introduced by Shi and Eberhart to improve the velocity update [9]. Further studies show that applying linearly decreasing [10], nonlinear [11], exponential [12] and Gaussian [13] strategies to optimize the inertia weight can enhance the overall performance. The convergence factor was proposed by Clerc and Kennedy to enhance the final convergence [14]. In addition, detailed studies [15,16,17] show that the acceleration constants play an important role in convergence performance.
The topology strategy is generally employed to improve exploration and avoid premature convergence. Under a topology strategy, individuals learn from a neighborhood rather than from the whole swarm, so more information is shared during the search process, which is useful for improving optimization performance. A number of topologies, including the ring or circle topology, wheel topology, star topology, pyramid topology, Von Neumann topology and random topology, are suggested by Kennedy in [18]. Generally, a large neighborhood is good for simple problems, whereas a small neighborhood is helpful for avoiding premature convergence on complex problems [19]. Reference [20] studied topologies extensively, providing a useful guide to topology selection. It points out that an optimal topology is both problem-specific and computational-budget-dependent, and it introduces two formulas to estimate optimal topology parameters based on numerical experiments.
In the original PSO, all individuals keep learning from the global best solution and their individual best experience in the whole search process. This may lead to premature convergence [21]. To overcome the problem, some novel learning strategies have been developed in recent years. A comprehensive learning strategy is developed to improve the performance on complex multimodal functions in [22]. Reference [23] introduces a cooperative approach to solve high-dimensional optimization problems with multiple swarms. A cooperatively coevolving strategy is proposed in [24] to further improve the performance. Sun et al. introduce a global guaranteed convergence optimizer called quantum behaved particle swarm optimization, which improves the performance by increasing the population diversity [25]. A variant with double learning patterns is developed in [26], which employs the master swarm and the slave swarm with different learning patterns to achieve a trade-off between the convergence speed and the swarm diversity.
However, the three strategies above still face the following shortcomings. In parameter selection, some strategies do improve the overall performance in many cases, but the effect is limited [19], and it is hard to obtain an optimal parameter for all cases. In topology strategy and learning strategy, although the exploration is improved to avoid premature convergence, the convergence speed is reduced at the same time.
In this paper, we design a double-group particle swarm optimization (DG-PSO) algorithm to improve the performance. The whole population is divided into two groups: an advantaged group and a disadvantaged group. The modification focuses on the disadvantaged group. A novel learning strategy is developed based on the comprehensive learning strategy and the self-pollination strategy of another popular meta-heuristic, the Flower Pollination Algorithm (FPA). In addition, a diversity enhancing strategy is designed to avoid premature convergence. Compared with published works, the main contribution of this paper is a novel variant, DG-PSO, which shows remarkable performance compared with five other popular variants and two meta-heuristics. Two new ideas are developed in DG-PSO: a learning strategy, which combines the comprehensive learning strategy [22] and the self-pollination strategy [27], and a diversity enhancing strategy, which adds disturbance to the individuals in the disadvantaged group to avoid premature convergence on multimodal problems. In addition, we apply the algorithm to multilevel thresholding for image segmentation, which verifies the effectiveness of DG-PSO and provides a good choice of meta-heuristic for implementing multilevel thresholding. The rest of the paper is organized as follows: Section 2 reviews the original PSO and some related works. The strategies and framework of the proposed algorithm are presented in detail in Section 3, followed by the experiments in Section 4. Then, the further application to multilevel thresholding for image segmentation is shown in Section 5.

2. Background

In this section, we first outline the original PSO. Then, the two building blocks of our algorithm, the comprehensive learning strategy and the self-pollination strategy in FPA, are introduced in turn.

2.1. Particle Swarm Optimization

Similar to other meta-heuristics, PSO is based on swarm intelligence. The swarm is composed of a set of particles i ∈ {1, 2, …, n}. A particle moves in the search space with a velocity. The position and velocity of each particle are dynamically adjusted according to its own and its companions' historical experience. Each particle's position corresponds to a candidate solution to the problem, and better solutions are obtained via evolution. The quality of a solution is judged by a given fitness function (e.g., smaller fitness values indicate better solutions for a minimization problem). For a D-dimensional problem, there are four main vectors:
  • The velocity v_i = [v_i1, v_i2, …, v_iD]: v_i denotes the moving speed and direction of particle i.
  • The position x_i = [x_i1, x_i2, …, x_iD]: x_i is the current position of particle i, updated using its velocity v_i. It can be regarded as a candidate solution.
  • The previous best position pbest_i = [pbest_i1, pbest_i2, …, pbest_iD]: pbest_i is the historical best position of particle i, updated with the best position that particle i has ever found.
  • The global best position gbest = [gbest_1, gbest_2, …, gbest_D]: gbest is the best position the swarm has ever found, updated with the best pbest in each generation. The final gbest corresponds to the final solution of the whole algorithm.
Note that each solution or candidate solution represents a set of to-be-optimized parameters, where D is the number of parameters. In the original PSO, the position and velocity are updated by learning from the current global best solution and the particle's previous best solution according to Equations (1) and (2):
v_id(t+1) = w·v_id(t) + c1·r1·(pbest_id − x_id(t)) + c2·r2·(gbest_d − x_id(t)), (1)
x_id(t+1) = x_id(t) + v_id(t+1), (2)
where i ∈ {1, 2, …, n} indexes the particles; c1 and c2 are the acceleration constants; r1 and r2 are two uniformly distributed random numbers within [0, 1]; and w is the inertial weight factor.
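As a sketch, the update in Equations (1) and (2) can be vectorized over the whole swarm as follows; the coefficient values shown are common defaults from the PSO literature, not necessarily the settings used in this paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One velocity/position update for the whole swarm, Equations (1)-(2).

    x, v, pbest: arrays of shape (n, D); gbest: array of shape (D,).
    r1, r2 are drawn per particle and per dimension, uniform in [0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    n, D = x.shape
    r1 = rng.random((n, D))
    r2 = rng.random((n, D))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```
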

2.2. Comprehensive Learning Strategy

In the original PSO, each particle learns only from the global best solution and its own previous best solution, which may lead to premature convergence. As a consequence, Comprehensive Learning PSO (CLPSO) was developed to improve the learning strategy [22]. It employs a comprehensive learning strategy that allows each particle to learn from many particles. Specifically, in CLPSO, each dimension of a particle learns from a randomly selected particle in the swarm, as Equation (3) shows:
v_id(t+1) = w·v_id(t) + c·r_id·(pbest_f(i,d),d − x_id(t)), (3)
where f(i, d) defines which particle's pbest the dth dimension of particle i should follow and learn from. Specifically, for each dimension d of particle i, a random number is generated. If the number is larger than a certain threshold, the corresponding dimension learns from the particle's own pbest. Otherwise, f(i, d) works as follows:
  • Randomly choose two particles out of the whole population excluding the particle whose velocity is updated;
  • Compare the fitness of the two particles’ p b e s t and choose the better one;
  • Use the winner’s p b e s t as the exemplar for the d th dimension of the particle to learn from using Equation (3).
In particular, if all the winners are the particle's own pbest (pbest_i), one dimension is randomly chosen to learn from the pbest of another particle. The framework of CLPSO is very similar to that of the original PSO, and it has been well established that CLPSO is effective in optimizing benchmark functions and real-world problems [22,28,29,30,31].
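The exemplar selection f(i, d) described above can be sketched as follows; Pc is the learning-probability threshold mentioned in the text, and its value in any call is an assumption for illustration:

```python
import numpy as np

def clpso_exemplar(i, D, f_pbest, Pc, rng):
    """Choose, per dimension, whose pbest particle i learns from (CLPSO).

    f_pbest: fitness of each particle's pbest (minimization).
    With probability Pc a dimension takes the winner of a two-particle
    tournament among the other particles; otherwise it keeps pbest_i.
    Returns an index array of length D.
    """
    n = len(f_pbest)
    exemplar = np.full(D, i)
    others = [j for j in range(n) if j != i]
    for d in range(D):
        if rng.random() < Pc:
            a, b = rng.choice(others, size=2, replace=False)
            exemplar[d] = a if f_pbest[a] < f_pbest[b] else b
    # If every dimension ended up with particle i's own pbest, force one
    # random dimension to learn from another particle, as the text states.
    if np.all(exemplar == i):
        exemplar[rng.integers(D)] = rng.choice(others)
    return exemplar
```
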

2.3. Self-Pollination Strategy in the Flower Pollination Algorithm

The flower pollination algorithm (FPA) is a popular nature-inspired meta-heuristic proposed in [27]. Since being published in 2012, it has been widely used in many fields, such as sizing optimization of truss structures [27], the economic load dispatch problem in power systems [32], Sudoku puzzles [33] and feature selection [34].
In this swarm-based meta-heuristic, each individual i in the swarm is called a pollen individual and is associated with a candidate solution sol_i = [sol_i1, sol_i2, …, sol_iD] in the search space. FPA searches using both global and local search techniques, where the local search simulates the self-pollination process. The self-pollination strategy is one of the two basic ideas in FPA (the other being cross-pollination). Self-pollination occurs when there are no pollen vectors such as wind or insects, or when pollen is transferred within the same plant. (Pollen vectors, also called pollinators, can be very diverse; it is estimated that there are at least 200,000 varieties, such as insects, bats and birds [27].) Such self-pollination behavior is summarized in the two rules below:
  • Self-pollination corresponds to the local pollination.
  • Pollinators can develop flower constancy, which is regarded as a reproduction probability that is proportional to the similarity of two flowers involved.
Based on the two rules above, the self-pollination strategy is given in Equation (4). Different from PSO, sol_i is the only vector associated with each pollen individual: sol_i not only represents the position of pollen individual i, but also plays the role of the best solution this individual has ever found (to understand "sol", compare it with the solutions in PSO, namely the position x and the previous best solution pbest). A new solution is generated from the previous one and two other solutions chosen randomly from the population:
sol_id = sol_id + ε·(sol_r1,d − sol_r2,d), (4)
where sol_r1 and sol_r2 are two random solutions in the current generation, which mimics flower constancy in a limited neighborhood, and ε is a uniformly distributed random number within [0, 1] used to implement a local random walk. As rule 1 indicates, self-pollination is regarded as local pollination, which often occurs in a limited neighborhood of the individual itself. It can be seen as a local search around the current position of the pollen individual.
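Equation (4) can be sketched as a short local-search step; the text does not specify whether r1 or r2 may equal i, so this sketch simply draws any two distinct individuals:

```python
import numpy as np

def self_pollinate(sol, i, rng):
    """Local (self-)pollination step of FPA, Equation (4).

    sol: (n, D) array of current solutions. Two distinct individuals
    r1, r2 are drawn at random and individual i takes a uniform random
    walk along their difference vector.
    """
    n, _ = sol.shape
    r1, r2 = rng.choice(n, size=2, replace=False)
    eps = rng.random()  # uniform in [0, 1), shared by all dimensions
    return sol[i] + eps * (sol[r1] - sol[r2])
```
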

3. The Proposed Algorithm

In this section, we describe the proposed algorithm. Figure 1 shows the overall flowchart, where the process colored yellow is the core idea of our algorithm. Different from the original PSO, we separate all particles into two groups in DG-PSO: an advantaged group (containing particles x_1, x_2, …, x_m) and a disadvantaged group (containing particles x_{m+1}, x_{m+2}, …, x_n, where n > m). The advantaged group evolves according to the same rules as the original PSO (Equations (1) and (2)), while the disadvantaged group is updated with two novel strategies: a learning strategy and a diversity enhancing strategy. We focus on explaining how the disadvantaged group works. As shown in Figure 2, the two new strategies work as two sequential processing stages in the update of the disadvantaged group, and they are discussed in the following two subsections. In addition, the detailed steps and the whole framework of the proposed method are given in Section 3.3. Finally, we discuss and compare the proposed algorithm with other related works in Section 3.4.

3.1. The Learning Strategy

The learning strategy is based on the self-pollination strategy introduced in Section 2. We first take the previous best solution pbest as the solution "sol" in Equation (4) (rather than the position x, because pbest represents the best historical experience of each particle and is therefore more worth learning from). For particle i, Equation (4) then becomes Equation (5):
x_id = pbest_id + ε·(pbest_r1,d − pbest_r2,d), (5)
where i = m+1, …, n denotes the particles in the disadvantaged group; pbest_r1 and pbest_r2 are two solutions chosen randomly from the pbest of the whole population (specifically, r1 and r2 are two random integers chosen from {1, 2, …, n} with r1 ≠ r2; they are kept the same for all dimensions when updating particle i and are regenerated for different particles); and ε is a scaling factor, uniformly distributed within [0, 1], that performs a random walk. Similar to the original self-pollination strategy, Equation (5) can be considered a local search around the position pbest_i.
On the other hand, as the comprehensive learning strategy generally identifies a more suitable solution for the particles to learn from, we additionally replace pbest_id in Equation (5) with pbest_f(i,d),d, which gives Equation (6), where f(i, d) ∈ {1, 2, …, m} identifies a particle's pbest for the dth dimension of particle i to learn from:
x_id = pbest_f(i,d),d + ε·(pbest_r1,d − pbest_r2,d), (6)
f(i, d) works according to the comprehensive learning strategy. For the dth dimension of particle i, the procedure to identify pbest_f(i,d),d is as follows:
  • Randomly choose two particles out of the advantaged group;
  • Compare the fitness of the two particles’ p b e s t and choose the better one;
  • Use the d th dimension of the winner’s p b e s t as the p b e s t f ( i , d ) d for the corresponding dimension of the i th particle to learn from.
Then, a new position is generated using Equation (6) for each particle i in the disadvantaged group. Using Equation (6), the particles in the disadvantaged group can learn from information derived from different particles' historical best positions. This strategy differs from the original self-pollination because we perform the local search around the newly generated position pbest_f(i,d),d rather than around the particle itself. The reason is that always searching the area around the particle's own position may reduce search efficiency, because some particles may be located in low-promising areas. In contrast, making more use of the good information from the advantaged group (via the comprehensive learning strategy) is conducive to search efficiency.
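A minimal sketch of this learning strategy follows, assuming (per Section 3.1) that ε and the pair (r1, r2) are shared across dimensions of a particle, while the tournament over the advantaged group is redrawn per dimension:

```python
import numpy as np

def learn_step(i, pbest, f_pbest, m, rng):
    """Generate a new position for disadvantaged particle i, Equation (6).

    pbest: (n, D) historical best positions; f_pbest: their fitness values
    (minimization); the first m particles form the advantaged group.
    """
    n, D = pbest.shape
    r1, r2 = rng.choice(n, size=2, replace=False)  # shared by all dimensions
    eps = rng.random()
    x_new = np.empty(D)
    for d in range(D):
        # Per-dimension tournament over the advantaged group picks f(i, d).
        a, b = rng.choice(m, size=2, replace=False)
        win = a if f_pbest[a] < f_pbest[b] else b
        x_new[d] = pbest[win, d] + eps * (pbest[r1, d] - pbest[r2, d])
    return x_new
```
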

3.2. The Diversity Enhancing Strategy

PSO often suffers from premature convergence, especially when optimizing multimodal problems. This is because the original PSO employs only an attraction phase, Equation (1), in which all particles in the swarm quickly move to the same area and the diversity decreases rapidly [35]. This generally leads to convergence to local optima due to the loss of diversity [22]. In such cases, improving diversity becomes an important issue in PSO research [22,36]. As diversity is lost when particles cluster together [37], adding disturbance to the particles helps them escape from local optima and enhances diversity. Therefore, we developed a strategy that pushes particles away from their current positions by adding the disturbance given in Equation (7):
x_i,d = x_i,d + rand1 × s, (7)
where s is a scaling factor that controls the intensity of the disturbance. As shown in Equation (8), it is set either to the whole search range of the corresponding dimension (a strong disturbance) or to the Euclidean distance between the two pbest chosen in the learning strategy (a relatively weak disturbance). The strong disturbance is designed for the case in which the particle falls into a wide local optimum, so a big jump is needed to escape. The weak disturbance is designed for the case in which the particle is close to the global optimum; there, a small random walk is more helpful for approaching the optimum.
s = |Ub_d − Lb_d| if rand2 < 0.5; otherwise s = ‖pbest_r1 − pbest_r2‖, (8)
where rand1 and rand2 are two random numbers uniformly generated within [0, 1], and Ub and Lb represent the upper and lower bounds of the search space.
Specifically, the strategy works as follows. For each dimension of particle i, we generate a random number within [0, 1]. If the number is smaller than a given threshold P, the corresponding dimension is disturbed using Equations (7) and (8). With the disturbance, the particles are better able to escape from local optima and avoid premature convergence.
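The diversity enhancing stage can be sketched as follows; p here plays the role of the per-dimension threshold P from the text:

```python
import numpy as np

def enhance_diversity(x_i, pbest_r1, pbest_r2, lb, ub, p, rng):
    """Per-dimension disturbance of one particle, Equations (7)-(8).

    With probability p per dimension, add rand1 * s, where s is either the
    full search range of that dimension (strong jump) or the Euclidean
    distance between the two pbest drawn in the learning stage (weak walk),
    each case chosen with probability 0.5.
    """
    x_new = x_i.copy()
    weak = np.linalg.norm(pbest_r1 - pbest_r2)
    for d in range(x_i.size):
        if rng.random() < p:
            s = abs(ub[d] - lb[d]) if rng.random() < 0.5 else weak
            x_new[d] += rng.random() * s
    return x_new
```
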

3.3. The Framework

Algorithm 1 shows the detailed steps of updating the disadvantaged group, which is the core of our modification. Apart from Algorithm 1, another minor modification in the proposed algorithm is that all particles in the two groups are redistributed according to their fitness at the end of each generation: the m particles with better fitness (for minimization problems, "better" means "smaller") are assigned to the advantaged group, and the others to the disadvantaged group. The overall framework and the detailed steps are shown in Figure 1 and Algorithm 2, respectively, where MaxFEs is the maximum number of function evaluations, which represents the maximum computation cost.
Algorithm 1. The Steps for Updating the Disadvantaged Group
1    For i = m+1 : n
2        Randomly choose two pbest, pbest_r1 and pbest_r2, out of the whole population;
3        /* Learning stage */
4        For d = 1 : D
5            Generate two different integers a and b within {1, 2, …, m}; ¹
6            If fpbest_a < fpbest_b ²
7                pbest_f(i,d),d = pbest_a,d;
8            Else
9                pbest_f(i,d),d = pbest_b,d;
10           End
11           x_id = pbest_f(i,d),d + ε·(pbest_r1,d − pbest_r2,d);
12       End
13       /* Diversity enhancing stage */
14       For d = 1 : D
15           If rand < P
16               Draw a scaling factor s using Equation (8);
17               Add disturbance to the current dimension using Equation (7);
18           End
19       End
20   End
¹ This step chooses two pbest from the advantaged group; ² fpbest stands for the fitness value of the corresponding pbest, recorded beforehand.
Algorithm 2. The Steps of the Proposed Algorithm
1    Randomly initialize n particles;
2    Assign the m particles with better fitness values to the advantaged group and the others to the disadvantaged group;
3    While fes < MaxFEs
4        For i = 1 : m
5            Update particle i in the advantaged group using Equations (1) and (2);
6        End
7        Evaluate the fitness of the advantaged group;
8        Update pbest and record the corresponding fitness as fpbest;
9        Update the disadvantaged group using Algorithm 1;
10       Evaluate the fitness of the disadvantaged group;
11       Update pbest and record the corresponding fitness as fpbest;
12       fes = fes + n;
13       Redistribute the whole population;
14   End
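A condensed, self-contained sketch of the whole DG-PSO loop (Algorithms 1 and 2) might look as follows. The inertia weight and acceleration constants are common PSO defaults rather than the paper's settings, scalar bounds and simple clipping are assumed for brevity, and the disturbance probability is P = 1/D as in Section 4.2:

```python
import numpy as np

def dg_pso(f, D, lb, ub, m=30, k=25, max_fes=20000,
           w=0.729, c1=1.494, c2=1.494, seed=0):
    """Condensed sketch of DG-PSO for minimizing f over [lb, ub]^D."""
    rng = np.random.default_rng(seed)
    n, P = m + k, 1.0 / D
    x = rng.uniform(lb, ub, (n, D))
    v = np.zeros((n, D))
    f_pbest = np.apply_along_axis(f, 1, x)
    pbest = x.copy()
    fes = n
    while fes < max_fes:
        g = int(np.argmin(f_pbest))
        # Advantaged group: original PSO update, Equations (1) and (2).
        r1, r2 = rng.random((m, D)), rng.random((m, D))
        v[:m] = (w * v[:m] + c1 * r1 * (pbest[:m] - x[:m])
                 + c2 * r2 * (pbest[g] - x[:m]))
        x[:m] = np.clip(x[:m] + v[:m], lb, ub)
        # Disadvantaged group: learning stage + diversity enhancing stage.
        for i in range(m, n):
            a1, a2 = rng.choice(n, 2, replace=False)
            eps = rng.random()
            for d in range(D):
                j1, j2 = rng.choice(m, 2, replace=False)  # advantaged tournament
                win = j1 if f_pbest[j1] < f_pbest[j2] else j2
                x[i, d] = pbest[win, d] + eps * (pbest[a1, d] - pbest[a2, d])
            weak = np.linalg.norm(pbest[a1] - pbest[a2])
            for d in range(D):
                if rng.random() < P:
                    s = abs(ub - lb) if rng.random() < 0.5 else weak
                    x[i, d] += rng.random() * s
            x[i] = np.clip(x[i], lb, ub)
        fit = np.apply_along_axis(f, 1, x)
        fes += n
        improved = fit < f_pbest
        pbest[improved], f_pbest[improved] = x[improved], fit[improved]
        # Redistribute: better particles go to the advantaged group.
        order = np.argsort(f_pbest)
        x, v, pbest, f_pbest = x[order], v[order], pbest[order], f_pbest[order]
    return pbest[0], f_pbest[0]
```
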

3.4. Discussion and Comparison of the Proposed Algorithm with Other Related Works

As mentioned above, we combine the existing comprehensive learning strategy with the self-pollination strategy of FPA. Specifically, we first apply the self-pollination strategy to PSO. Then, the comprehensive learning strategy is used to identify an exemplar for the particles in the disadvantaged group to learn from. Note that we choose the exemplar from the advantaged group rather than from the whole swarm; this aims to improve the learning efficiency of the disadvantaged group. Such a strategy clearly differs from CLPSO, because CLPSO uses comprehensive learning to modify the learning strategy of the original PSO (as introduced in Section 2), whereas we propose a new learning strategy.
Based on the analysis above, CLPSO and FPA are used for comparison with the proposed algorithm. In addition, since we also developed a diversity enhancing strategy to further improve the performance, it is necessary to evaluate its effectiveness. We first define:
  • dg-PSO: the proposed algorithm that only employs the learning strategy;
  • DG-PSO: the proposed algorithm that employs both the learning strategy and the diversity enhancing strategy.
Then, the effectiveness of the diversity enhancing strategy can be evaluated by comparing the performance of dg-PSO with DG-PSO.

4. Experiments on Benchmark Functions

In this section, we first describe the 20 benchmark functions used for performance evaluation. Then, the algorithms and the necessary parameters for comparison are introduced. Finally, the results are shown and discussed in detail.

4.1. The Benchmark Functions

The 20 benchmark functions employed in the experiments are presented in Table 1. All of them are minimization problems, defined according to [38,39] in the search space [−100, 100]. The functions can be categorized into four classes, namely (1) basic problems; (2) rotated problems; (3) shifted problems; and (4) complex problems. The basic problems include not only the basic unimodal and multimodal problems, but also a noisy problem (F4), an expanded problem (F8) and an expanded hybrid problem (F9). The rotated problems are designed to overcome the drawback of the basic functions that the variables are separable and the local optima are regularly distributed; in these problems, the original variable x is rotated by left-multiplying by the orthogonal matrix M, i.e., y = M × x. The shifted problems are designed to overcome two other drawbacks of the basic functions: each dimension of the global optimum always has the same value, and the global optimum is usually located at the center of the search space. Finally, the complex problems include both rotation and shift.

4.2. Algorithms and Parameters

Table 2 shows the five PSO variants and the two other popular meta-heuristics used in the comparison. These include not only the algorithms mentioned before (CLPSO and FPA), but also several other state-of-the-art algorithms, chosen according to the three strategies introduced in Section 1. We give a brief description of them here. First, Modified PSO (MPSO) [36] uses a parameter-selection-based strategy, in which the population size and inertial weight are adaptively adjusted during the search. Second, Unified PSO (UPSO) [40] and Fully Informed PSO (FIPS) [41] are two neighbourhood topology based variants: UPSO is a combination of the original PSO and a topology-based PSO, while FIPS employs a fully informed neighbourhood topology. Third, Fitness-Distance-Ratio PSO (FDR-PSO) [42] and CLPSO [22] are chosen from the learning strategy based variants, where FDR-PSO employs a fitness-distance-ratio strategy to identify a "fittest-and-closest" particle to modify the learning strategy. Another recent meta-heuristic, Social Spider Optimization (SSO) [43], is also included to make the comparison as comprehensive as possible. Finally, DG-PSO and dg-PSO are the proposed algorithms, of which only DG-PSO has the diversity enhancing strategy.
The parameters of the involved algorithms are set as follows. For dg-PSO and DG-PSO, the population sizes of the advantaged group and the disadvantaged group are set to 30 and 25, respectively, and the probability P of diversity enhancing is set to 1/D. The population size for the other PSO variants is set to 40 [44], except for MPSO, which employs an adaptive population strategy (initial, minimum and maximum values of 5, 5 and 40, respectively) [36]. Other parameters are listed in Table 2. We performed the evaluation in both 30 dimensions with MaxFEs = 4 × 10^5 [45] and 50 dimensions with MaxFEs = 7 × 10^5. Thirty runs are conducted for each function, and the mean fitness error and the corresponding standard deviation are calculated (the error is defined as the difference between the fitness function value and the known minimum, i.e., Error = Fitness − F_min). All experiments are carried out using MATLAB 2016 on the same machine with an Intel i5-4590 CPU @ 3.3 GHz (Intel, Santa Clara, CA, USA), 4.00 GB memory, and the Windows 7 Professional operating system (Microsoft, Redmond, WA, USA).

4.3. Results and Discussion

The mean fitness error values and the corresponding standard deviations are shown in Table 3 and Table 4, respectively, where "Mean" denotes the mean fitness error (the best value in each case is shown in bold) and "Std" denotes the standard deviation. We perform the Wilcoxon signed-rank test for a rigorous comparison, with the significance level set to 0.05. The results are represented by "C" in the tables, where three symbols indicate the performance of DG-PSO: "+" means DG-PSO is significantly better, "=" means the difference is insignificant and "−" means DG-PSO is significantly worse. We sum the comparison results and show them in the form "W/T/L" at the bottom of each table, where W/T/L is the number of problems on which DG-PSO wins, ties and loses, respectively, against the corresponding algorithm.
We first compare DG-PSO with the other published algorithms. According to the statistical results in Table 3 (30D), DG-PSO shows better or comparable performance on all functions compared with CLPSO (W/T/L = 18/2/0) and FPA (W/T/L = 19/1/0). Among the other PSO variants, FDR-PSO (W/T/L = 18/1/1) is the most competitive with the proposed algorithm (excluding dg-PSO), yet it wins in only one case. DG-PSO even wins all 20 comparisons against the recently published meta-heuristic SSO. Similar results are obtained in 50D, where DG-PSO still shows clear advantages over all other algorithms; the most competitive is again FDR-PSO (W/T/L = 17/2/1, excluding dg-PSO), but the results still show the superiority of our algorithm. Comparing DG-PSO with dg-PSO, the results are W/T/L = 13/1/6 in both 30D and 50D. Specifically, DG-PSO performs much better on multimodal problems such as F3, F5, F6, F8, F9, F14, F15, F16, F19 and F20. However, the same comparison shows that diversity enhancing also brings some inefficiency to DG-PSO on unimodal problems (F1, F2, F4, F10, F17 and F18), mainly because the diversity enhancing strategy weakens exploitation.
To rank the algorithms clearly, the Friedman test is used to compare them using all the mean fitness error values on the 20 problems. The Friedman test is the best-known procedure for testing differences between more than two related samples [48] and can detect significant differences between the behavior of two or more algorithms. We conduct two tests that rank the algorithms in 30D and 50D, respectively, with the significance level set to 0.05. Table 5 presents the numerical rankings obtained by the test, and the corresponding graphical results are shown in Figure 2 and Figure 3, where the center square indicates the average rank of the corresponding algorithm and the line denotes the confidence interval. Smaller ranks mean better performance, and when the intervals of two algorithms do not overlap, their performance is significantly different. The results in these two figures clearly demonstrate that the proposed algorithm outperforms all other algorithms, including dg-PSO, CLPSO and FPA, in both 30D and 50D.
For further evaluation, the convergence performance and average time consumption are compared in Figure 4 and Figure 5, respectively, with the results of F8, F10 and F14 in 50D given as examples. From Figure 4, we observe that DG-PSO has outstanding performance on the multimodal problems (F8 and F14), while dg-PSO obtains the best result on the unimodal function F10. From Figure 5, DG-PSO consumes slightly more time than the two related algorithms, CLPSO and FPA; however, its time consumption is still acceptable compared with the other algorithms, such as FDR-PSO, UPSO, MPSO and SSO.

5. DG-PSO Based Remote Sensing Image Segmentation

Image segmentation is a fundamental task in remote sensing applications [49] such as change detection and object-based classification. Its goal is to divide an image into semantically meaningful regions, or objects, to be recognized by subsequent processing steps [50]. The problem has attracted many researchers over the past decade but remains difficult [51]. Among existing segmentation methods, thresholding is one of the most popular techniques owing to its simplicity, robustness and accuracy [52].
Thresholding methods can be divided into two categories: bi-level and multilevel. If the object in an image is separated from the background using a single threshold value, the method is called bi-level thresholding. In contrast, multilevel thresholding partitions the given image into several regions according to multiple thresholds. In remote sensing image segmentation, bi-level thresholding rarely gives adequate results, so multilevel thresholding is strongly required [53]. Consequently, numerous studies on multilevel thresholding have been reported [47,53,54,55,56,57,58].
The most popular way [53,54,55,56,57,58,59,60,61] to search for the optimal thresholds is to maximize some discriminating criterion (fitness function). Traditional methods find the optimal thresholds by exhaustive search, which incurs high computational cost. In recent years, meta-heuristic-based methods have gained attention because of their high computational efficiency. Numerous algorithms have been introduced to this area, such as PSO [36], Differential Evolution (DE) [62], Artificial Bee Colony (ABC) [59,63,64], Wind Driven Optimization (WDO) [56], Cuckoo Search (CS) [65] and SSO [47]. However, remote sensing images are difficult to segment accurately because of the multimodality of their histograms [53]. Therefore, improving the performance of meta-heuristic algorithms is necessary for remote sensing image segmentation.
In this section, we apply the proposed algorithm to multilevel thresholding for optical remote sensing image segmentation. We first describe the problem, then introduce the experimental setup in Section 5.2, and finally present the results and analysis in detail.

5.1. Problem Definition

This subsection defines the multilevel thresholding problem. As mentioned above, multilevel thresholding methods generally search for the optimal thresholds by maximizing some criterion. In the literature, Otsu's criterion [66] has been widely employed [36,67,68]: it generally provides satisfactory segmentation results [69], is known for its simplicity and its effectiveness with respect to uniformity and shape measures, and usually yields the globally optimal threshold values [58].
Let $l \in [0, L-1]$ be the gray level of a given image $I$, where $L$ is the total number of gray levels. The problem is then defined as follows. First, the image histogram is computed and normalized, denoted by $P_l$, $l = 0, 1, \ldots, L-1$. For the $(D+1)$-class thresholding problem, there are $D$ thresholds $k_d$ $(d = 1, 2, \ldots, D)$ that segment the image into $D+1$ classes. Let $k_0 = 0$ and $k_{D+1} = L$ denote the lower and upper bounds, respectively. The thresholds are sorted so that $k_0 < k_1 < \cdots < k_D < k_{D+1}$, and the problem is defined using (9):
$$(k_1^*, k_2^*, \ldots, k_D^*) = \mathop{\arg\max}_{k_0 < k_1 < \cdots < k_D < k_{D+1}} F(k_1, k_2, \ldots, k_D), \tag{9}$$
where $F = \sum_{d=0}^{D} \omega_d (\mu_d - \mu_T)^2$ and $\mu_d = \frac{1}{\omega_d} \sum_{l=k_d}^{k_{d+1}-1} l P_l$. Here, $\omega_d = \sum_{l=k_d}^{k_{d+1}-1} P_l$ is the probability of occurrence of the $d$th class, and $\mu_T = \sum_{l=0}^{L-1} l P_l$ is the total mean intensity of the original image.
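The criterion $F$ (Otsu's between-class variance) can be sketched directly from these definitions. Below, `hist` is the normalized histogram $P_l$ and `thresholds` holds $k_1 < \cdots < k_D$; each class $d$ covers the gray levels $[k_d, k_{d+1})$.

```python
def otsu_fitness(hist, thresholds):
    """Between-class variance F for a set of thresholds."""
    L = len(hist)
    bounds = [0] + list(thresholds) + [L]          # k_0 = 0, k_{D+1} = L
    mu_t = sum(l * p for l, p in enumerate(hist))  # total mean intensity
    f = 0.0
    for d in range(len(bounds) - 1):
        lo, hi = bounds[d], bounds[d + 1]
        omega = sum(hist[lo:hi])                   # class probability
        if omega == 0:
            continue                               # empty class contributes nothing
        mu = sum(l * hist[l] for l in range(lo, hi)) / omega  # class mean
        f += omega * (mu - mu_t) ** 2
    return f

# Two well-separated intensity peaks: a threshold between the peaks
# scores higher than one cutting through the first peak.
hist = [0.25, 0.25, 0.0, 0.0, 0.25, 0.25]
print(otsu_fitness(hist, [2]) > otsu_fitness(hist, [1]))  # True
```

Maximizing this function over all feasible threshold vectors is exactly the search task handed to the meta-heuristics.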

5.2. Experimental Setup

To demonstrate the superiority of the proposed method, five meta-heuristic algorithms that are popular in multilevel thresholding, namely DE, ABC, CS, MPSO and SSO, are chosen for comparison with the proposed algorithm. All of these algorithms have been shown to perform well in multilevel thresholding in the corresponding references in Table 6. Specifically, ABC performs better than PSO when the number of thresholds exceeds two [59]. Reference [53] demonstrates that CS shows remarkable performance on multilevel thresholding problems and can outperform other well-known algorithms such as DE, PSO, WDO and ABC. MPSO performs better than the Genetic Algorithm (GA) and the original PSO [36]. SSO is applied to multilevel thresholding in [47], where it clearly outperforms PSO, the BAT algorithm and FPA. The parameters of these algorithms are set according to the corresponding works listed in Table 6, and the parameters of the proposed algorithm are the same as in Section 4.
All populations are initialized uniformly at random. Thirty independent runs are carried out for each algorithm on each image for 2, 3, 4, 5, 7, 9, 15 and 20 thresholds [68,69], respectively. All algorithms use the same maximum number of function evaluations, $MaxFEs = 3000 \times D$, in the identical search space $[0, 256)$. All methods are adapted to this integer optimization problem by rounding: the search space is $[0, 256)$ for 8-bit gray-scale images, and integers are obtained by rounding down (e.g., 255.6 is rounded to 255). Figure 6 shows the test images, which are taken from a very-high-resolution remote sensing image dataset constructed by Gong Cheng et al. from Northwestern Polytechnical University [70].
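A minimal sketch of this integer adaptation: a particle's continuous position in $[0, 256)$ is rounded down and sorted so the thresholds are strictly increasing. The de-duplication step for collisions after rounding is our assumption, not a detail stated in the paper.

```python
import math

def to_thresholds(position, lo=0, hi=256):
    """Map a continuous position vector to valid sorted integer thresholds."""
    ints = sorted(int(math.floor(x)) for x in position)
    out = []
    for k in ints:
        k = min(max(k, lo), hi - 1)        # clamp into [lo, hi)
        if out and k <= out[-1]:           # assumed collision handling:
            k = out[-1] + 1                # enforce strictly increasing values
        if k < hi:
            out.append(k)
    return out

# 255.6 rounds down to 255, matching the example in the text.
print(to_thresholds([255.6, 13.2, 13.9, 200.01]))  # [13, 14, 200, 255]
```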

5.3. Results and Discussion

In detail, the mean fitness and the corresponding standard deviation are given in Table 7, where the best mean fitness in each case is shown in bold. Our algorithm obtains the best mean fitness in all cases except 7-level thresholding (D = 7) of image C. To evaluate the improvement of our algorithm over the others, the algorithms are again ranked with the Friedman test. We conduct two tests that rank the algorithms at the normal levels (D = 2, 3, 4 and 5) and the high levels (D = 7, 9, 15 and 20), respectively; high-level thresholding is commonly employed in multilevel thresholding [68,69]. Therefore, $i \times t \times m = 40$ variables are used in each comparison in each test, where $i = 5$ is the number of images, $t = 4$ is the number of levels, and $m = 2$ is the number of measures (the mean fitness and the corresponding standard deviation). The significance level is set to 0.05. Table 8 and the two figures (Figure 7 and Figure 8) present the numerical rankings and graphical results obtained by the test, where smaller ranks denote better performance.
From the results for normal-level thresholding shown in Figure 7, the proposed algorithm significantly outperforms DE, ABC and MPSO, and also shows an advantage over the other two algorithms. Figure 8 shows that the proposed algorithm ranks even better in high-level thresholding, where it is significantly different from all algorithms except CS (and it also ranks better than CS). Figure 9 and Figure 10 show the segmentation results. The pseudo-color images show the complete thresholding results, where each level of the image is represented by regions of the same color. The binary images show some of the objects separated from the original image, which demonstrates the effectiveness of the segmentation.
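As an illustrative sketch of how the final thresholds produce the level map used for the pseudo-color rendering, each pixel is assigned the index of the class its intensity falls into:

```python
import bisect

def segment(image, thresholds):
    """image: 2-D list of gray intensities; returns a 2-D list of class labels.

    A pixel with intensity v gets label d such that k_d <= v < k_{d+1},
    which bisect_right computes directly on the sorted threshold list.
    """
    return [[bisect.bisect_right(thresholds, v) for v in row]
            for row in image]

img = [[0, 50, 120],
       [130, 200, 255]]
print(segment(img, [100, 180]))  # [[0, 0, 1], [1, 2, 2]]
```

Coloring each label with a distinct color gives the pseudo-color image, and selecting a single label gives a binary mask like those shown for the separated objects.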
In conclusion, the results demonstrate that the proposed algorithm performs remarkably well in multilevel thresholding compared with other popular meta-heuristics in this research area.

6. Conclusions

This paper proposes a variant of particle swarm optimization called DG-PSO, which uses a double-group based evolution framework. The individuals in DG-PSO are divided into two groups according to their fitness values, and two main ideas are introduced in the evolution of the disadvantaged group: a hybrid learning strategy and a diversity-enhancing strategy for avoiding premature convergence. The experimental results on various benchmark functions demonstrate that, although DG-PSO consumes slightly more time than the two related algorithms CLPSO and FPA, it achieves a significant improvement over all contrast algorithms in terms of mean fitness error, the corresponding standard deviation and convergence performance. We further apply the proposed algorithm to multilevel thresholding for remote sensing image segmentation, and the results again show the effectiveness of DG-PSO.

Author Contributions

L.S. designed the experiments and wrote the paper; X.H. analyzed the data and results; C.F. performed the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 1944, pp. 1942–1948. [Google Scholar]
  2. Alrashidi, M.R.; El-Hawary, M.E. A Survey of Particle Swarm Optimization Applications in Electric Power Systems. IEEE Trans. Evol. Comput. 2009, 13, 913–918. [Google Scholar] [CrossRef]
  3. Gisbert, S.; Michael, S.; Michael, M. Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinf. 2006, 7, 125. [Google Scholar]
  4. Tang, B.; Zhu, Z.; Luo, J. A convergence-guaranteed particle swarm optimization method for mobile robot global path planning. Assem. Autom. 2017, 37, 114–129. [Google Scholar] [CrossRef]
  5. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimisation for feature selection in classification: Novel initialisation and updating mechanisms. Appl. Soft Comput. 2014, 18, 261–276. [Google Scholar] [CrossRef]
  6. Eslami, M.; Shareef, H.; Khajehzadeh, M.; Mohamed, A. A Survey of the State of the Art in Particle Swarm Optimization. Res. J. Appl. Sci. Eng. Technol. 2012, 4, 1181–1197. [Google Scholar]
  7. Zhang, Q.; Liu, W.; Meng, X.; Yang, B.; Vasilakos, A.V. Vector coevolving particle swarm optimization algorithm. Inf. Sci. 2017, 394–395, 273–298. [Google Scholar] [CrossRef]
  8. Cui, Q.; Li, Q.; Li, G.; Li, Z.; Han, X.; Lee, H.P.; Liang, Y.; Wang, B.; Jiang, J.; Wu, C. Globally-Optimal Prediction-Based Adaptive Mutation Particle Swarm Optimization. Inf. Sci. 2017, 418–419, 186–217. [Google Scholar] [CrossRef]
  9. Shi, Y.; Eberhart, R. A Modified particle swarm optimizer. In Proceedings of the International Conference on Evolutionary Computation Proceedings 1998 (IEEE ICEC Conference), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  10. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), Washington, DC, USA, 6–9 July 1999; Volume 321, pp. 320–324. [Google Scholar]
  11. Li, L.; Xue, B.; Niu, B.; Chai, Y.; Wu, J. The novel nonlinear strategy of inertia weight in particle swarm optimization. In Proceedings of the International Conference on Bio-Inspired Computing (Bic-Ta 2009), Beijing, China, 16–19 October 2009; pp. 1–5. [Google Scholar]
  12. Wu, J.; He, X.X.; Zhao, W.G.; Rui, W. Exponential inertia weight particle swarm algorithm for dynamics optimization of electromechanical coupling system. In Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; pp. 479–483. [Google Scholar]
  13. Pant, M.; Radha, T.; Singh, V.P. Particle Swarm Optimization Using Gaussian Inertia Weight. In Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, Sivakasi, India, 13–15 December 2007; pp. 97–102. [Google Scholar]
  14. Clerc, M.; Kennedy, J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  15. Bajpai, P.; Singh, S.N. Fuzzy Adaptive Particle Swarm Optimization for Bidding Strategy in Uniform Price Spot Market. IEEE Trans. Power Syst. 2007, 22, 2152–2160. [Google Scholar] [CrossRef]
  16. Chang, X.Y.; Li, R.J. Experimental analysis of acceleration coefficient in particle swarm optimization algorithm. Comput. Eng. 2010, 36, 183–186. [Google Scholar]
  17. Zhan, Z.H.; Zhang, J. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. 2009, 39, 1362–1381. [Google Scholar] [CrossRef] [PubMed]
  18. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation (2002 CEC), Honolulu, HI, USA, 12–17 May 2002; pp. 1671–1676. [Google Scholar]
  19. Gou, J.; Lei, Y.X.; Guo, W.P.; Wang, C.; Cai, Y.Q.; Luo, W. A novel improved particle swarm optimization algorithm based on individual difference evolution. Appl. Soft Comput. 2017, 57, 468–481. [Google Scholar] [CrossRef]
  20. Liu, Q.; Wei, W.; Yuan, H.; Zhan, Z.H.; Li, Y. Topology selection for particle swarm optimization. Inf. Sci. 2016, 363, 154–173. [Google Scholar] [CrossRef]
  21. Gong, Y.J.; Li, J.J.; Zhou, Y.; Li, Y.; Chung, H.S.; Shi, Y.H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2016, 46, 2277–2290. [Google Scholar] [CrossRef] [PubMed]
  22. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  23. Frans, V.D.B.; Engelbrecht, A.P. A Cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 225–239. [Google Scholar]
  24. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput. 2012, 16, 210–224. [Google Scholar]
  25. Sun, J.; Xu, W.; Fang, W. A Diversity-Guided Quantum-Behaved Particle Swarm Optimization Algorithm. In Asia-Pacific Conference on Simulated Evolution and Learning; Springer: Berlin/Heidelberg, Germany, 2006; pp. 497–504. [Google Scholar]
  26. Shen, Y.; Wei, L.; Zeng, C.; Chen, J. Particle Swarm Optimization with double learning patterns. Comput. Intell. Neurosci. 2016, 2016, 32. [Google Scholar] [CrossRef] [PubMed]
  27. Yang, X.S. Flower pollination algorithm for global optimization. In International Conference on Unconventional Computation and Natural Computation 2012; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7445, pp. 240–249. [Google Scholar]
  28. Gao, L.; Hailu, A. Comprehensive Learning Particle Swarm Optimizer for Constrained Mixed-Variable Optimization Problems. Int. J. Comput. Intell. Syst. 2010, 3, 832–842. [Google Scholar] [CrossRef]
  29. Hu, Z.; Bao, Y.; Xiong, T. Comprehensive learning particle swarm optimization based memetic algorithm for model selection in short-term load forecasting using support vector regression. Appl. Soft Comput. 2014, 25, 15–25. [Google Scholar] [CrossRef]
  30. Mahadevan, K.; Kannan, P.S. Comprehensive learning particle swarm optimization for reactive power dispatch. Appl. Soft Comput. 2010, 10, 641–652. [Google Scholar] [CrossRef]
  31. Zhang, X.; Yu, X.; Qin, H. Optimal operation of multi-reservoir hydropower systems using enhanced comprehensive learning particle swarm optimization. J. Hydro-Environ. Res. 2016, 10, 50–63. [Google Scholar] [CrossRef]
  32. Abdelaziz, A.Y.; Ali, E.S.; Elazim, S.M.A. Combined economic and emission dispatch solution using Flower Pollination Algorithm. Int. J. Electr. Power Energy Syst. 2016, 80, 264–274. [Google Scholar] [CrossRef]
  33. Abdel-Raouf, O.; Abdel-Baset, M.; El-Henawy, I. A Novel Hybrid Flower Pollination Algorithm with Chaotic Harmony Search for Solving Sudoku Puzzles. Int. J. Mod. Educ. Comput. Sci. 2014, 7, 126–132. [Google Scholar]
  34. Sayed, S.A.F.; Nabil, E.; Badr, A. A binary clonal flower pollination algorithm for feature selection. Pattern Recognit. Lett. 2016, 77, 21–27. [Google Scholar] [CrossRef]
  35. Wang, H.; Sun, H.; Li, C.; Rahnamayan, S.; Pan, J.S. Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 2013, 223, 119–135. [Google Scholar] [CrossRef]
  36. Liu, Y.; Mu, C.; Kou, W.; Liu, J. Modified particle swarm optimization-based multilevel thresholding for image segmentation. Soft Comput. 2015, 19, 1311–1327. [Google Scholar] [CrossRef]
  37. Cheng, S.; Shi, Y.; Qin, Q. Population diversity based study on search information propagation in particle swarm optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1272–1279. [Google Scholar]
  38. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report 201311; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  39. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 Special Session on Real Parameter Optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
  40. Parsopoulos, K.E.; Vrahatis, M.N. UPSO: A unified particle swarm optimization scheme. Lect. Ser. Comput. Comput. Sci. 2004, 1, 868–873. [Google Scholar]
  41. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  42. Peram, T.; Veeramachaneni, K.; Mohan, C.K. Fitness-distance-ratio based particle swarm optimization. In Proceedings of the Swarm Intelligence Symposium (SIS 2003), Indianapolis, IN, USA, 24–26 April 2003; pp. 174–181. [Google Scholar]
  43. Cuevas, E.; Cortés, M.A.D.; Navarro, D.A.O. A Swarm Global Optimization Algorithm Inspired in the Behavior of the Social-Spider. Exp. Syst. Appl. 2014, 40, 6374–6384. [Google Scholar] [CrossRef]
  44. Zhang, Q.; Liu, W.; Meng, X.; Yang, B.; Vasilakos, A.V. Vector coevolving particle swarm optimization algorithm. Inf. Sci. 2017, 394, 273–298. [Google Scholar] [CrossRef]
  45. Qin, Q.; Cheng, S.; Zhang, Q.; Li, L.; Shi, Y. Particle Swarm Optimization With Interswarm Interactive Learning Strategy. IEEE Trans. Cybern. 2016, 46, 2238–2251. [Google Scholar] [CrossRef] [PubMed]
  46. Parsopoulos, K.E.; Vrahatis, M.N. A unified particle swarm optimization scheme. In Proceedings of International Conference on Computational Methods in Sciences and Engineering; Ser Lecture Series on Computer & Computational Sciences Attica; VSP International Science: Tripolis, Greece, 2003. [Google Scholar]
  47. Ouadfel, S.; Taleb-Ahmed, A. Social spiders optimization and flower pollination algorithm for multilevel image thresholding: A performance study. Exp. Syst. Appl. 2016, 55, 566–584. [Google Scholar] [CrossRef]
  48. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  49. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  50. Trias-Sanz, R.; Stamon, G.; Louchet, J. Using colour, texture, and hierarchial segmentation for high-resolution remote sensing. ISPRS J. Photogramm. Remote Sens. 2008, 63, 156–168. [Google Scholar] [CrossRef]
  51. Li, N.; Huo, H.; Fang, T. A Novel Texture-Preceded Segmentation Algorithm for High-Resolution Imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2818–2828. [Google Scholar]
  52. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–168. [Google Scholar]
  53. Bhandari, A.K.; Kumar, A.; Singh, G.K. Tsallis entropy based multilevel thresholding for colored satellite image segmentation using evolutionary algorithms. Exp. Syst. Appl. 2015, 42, 8707–8730. [Google Scholar] [CrossRef]
  54. Bhandari, A.K.; Kumar, A.; Chaudhary, S.; Singh, G.K. A novel color image multilevel thresholding based segmentation using nature inspired optimization algorithms. Exp. Syst. Appl. 2016, 63, 112–133. [Google Scholar] [CrossRef]
  55. Bhandari, A.K.; Kumar, A.; Singh, G.K. Modified artificial bee colony based computationally efficient multilevel thresholding for satellite image segmentation using Kapur’s, Otsu and Tsallis functions. Exp. Syst. Appl. 2015, 42, 1573–1601. [Google Scholar] [CrossRef]
  56. Bhandari, A.K.; Singh, V.K.; Kumar, A.; Singh, G.K. Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur’s entropy. Exp. Syst. Appl. Int. J. 2014, 41, 3538–3560. [Google Scholar] [CrossRef]
  57. Khaled, A.; Abdel-Kader, R.F.; Yasein, M.S. A Hybrid Color Image Quantization Algorithm Based on k-Means and Harmony Search Algorithms. Appl. Artif. Intell. 2016, 30, 331–351. [Google Scholar] [CrossRef]
  58. Mala, C.; Sridevi, M. Multilevel threshold selection for image segmentation using soft computing techniques. Soft Comput. 2016, 20, 1793–1810. [Google Scholar] [CrossRef]
  59. Akay, B. A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding. Appl. Soft Comput. 2013, 13, 3066–3091. [Google Scholar] [CrossRef]
  60. Dey, S.; Saha, I.; Bhattacharyya, S.; Maulik, U. Multi-level thresholding using quantum inspired meta-heuristics. Knowl. Based Syst. 2014, 67, 373–400. [Google Scholar] [CrossRef]
  61. He, L.; Huang, S. Modified firefly algorithm based multilevel thresholding for color image segmentation. Neurocomputing 2017, 240, 152–174. [Google Scholar] [CrossRef]
  62. Cuevas, E.; Zaldivar, D.; Pérez-Cisneros, M. A novel multi-threshold segmentation approach based on differential evolution optimization. Exp. Syst. Appl. 2010, 37, 5265–5271. [Google Scholar] [CrossRef]
  63. Horng, M.H. Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation. Exp. Syst. Appl. 2011, 38, 13785–13791. [Google Scholar] [CrossRef]
  64. Osuna-Enciso, V.N.; Cuevas, E.; Sossa, H. A comparison of nature inspired algorithms for multi-threshold image segmentation. Exp. Syst. Appl. 2013, 40, 1213–1219. [Google Scholar] [CrossRef]
  65. Pare, S.; Kumar, A.; Bajaj, V.; Singh, G.K. A multilevel color image segmentation technique based on cuckoo search algorithm and energy curve. Appl. Soft Comput. 2016, 47, 76–102. [Google Scholar] [CrossRef]
  66. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  67. Ayala, H.V.H.; Santos, F.M.D.; Mariani, V.C.; Coelho, L.D.S. Image thresholding segmentation based on a novel beta differential evolution approach. Exp. Syst. Appl. 2015, 42, 2136–2142. [Google Scholar] [CrossRef]
  68. Mlakar, U.; Potočnik, B.; Brest, J. A hybrid differential evolution for optimal multilevel image thresholding. Exp. Syst. Appl. 2016, 65, 221–232. [Google Scholar] [CrossRef]
  69. Gao, H.; Kwong, S.; Yang, J.; Cao, J. Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation. Inf. Sci. 2013, 250, 82–112. [Google Scholar] [CrossRef]
  70. Cheng, G.; Han, J.; Zhou, P.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS J. Photogramm. Remote Sens. 2014, 98, 119–132. [Google Scholar] [CrossRef]
Figure 1. The overall framework of the proposed algorithm.
Figure 2. Friedman test of 30D problems.
Figure 3. Friedman test of 50D problems.
Figure 4. Convergence performance. (a) F8, multimodal; (b) F10, rotated unimodal; (c) F14, shifted multimodal.
Figure 5. Average time consumption. (a) F8, multimodal; (b) F10, rotated unimodal; (c) F14, shifted multimodal.
Figure 6. Images used in the experiments. (a) Image a; (b) Image b; (c) Image c; (d) Image d; (e) Image e.
Figure 7. Friedman test of the normal level.
Figure 8. Friedman test of the high level.
Figure 9. The segmentation results of Image a. (a) 3-level thresholding; (b) 9-level thresholding.
Figure 10. The segmentation results of Image d. (a) 3-level thresholding; (b) 9-level thresholding, which separates the target better.
Table 1. Description of the benchmark functions.
No. | Name | Definition | $F_{\min}$ | Modality
F1 | Schwefel 1.2 | $F_1(x) = \sum_{d=1}^{D} \big( \sum_{j=1}^{d} x_j \big)^2$ | 0 | Unimodal
F2 | Bent Cigar | $F_2(x) = x_1^2 + 10^6 \sum_{d=2}^{D} x_d^2$ | 0 | Unimodal
F3 | Modified Schwefel | $F_3(x) = 418.9829\,D - \sum_{d=1}^{D} g(z_d)$, $z_d = x_d + 4.209687462275036 \times 10^2$, where $g(z_d) = z_d \sin(\lvert z_d \rvert^{1/2})$ if $\lvert z_d \rvert \le 500$; $g(z_d) = (500 - \operatorname{mod}(z_d, 500)) \sin\sqrt{\lvert 500 - \operatorname{mod}(z_d, 500) \rvert} - \frac{(z_d - 500)^2}{10000\,D}$ if $z_d > 500$; $g(z_d) = (\operatorname{mod}(\lvert z_d \rvert, 500) - 500) \sin\sqrt{\lvert \operatorname{mod}(\lvert z_d \rvert, 500) - 500 \rvert} - \frac{(z_d + 500)^2}{10000\,D}$ if $z_d < -500$ | 0 | Multimodal
F4 | Schwefel 1.2 with Noise | $F_4(x) = F_1(x) \left( 1 + 0.4 \lvert N(0,1) \rvert \right)$ | 0 | Unimodal
F5 | Rosenbrock | $F_5(x) = \sum_{d=1}^{D-1} \left( 100 (x_d^2 - x_{d+1})^2 + (x_d - 1)^2 \right)$ | 0 | Multimodal
F6 | Rastrigin | $F_6(x) = \sum_{d=1}^{D} \left( x_d^2 - 10 \cos(2\pi x_d) + 10 \right)$ | 0 | Multimodal
F7 | Katsuura | $F_7(x) = \frac{10}{D^2} \prod_{d=1}^{D} \Big( 1 + d \sum_{j=1}^{32} \frac{\lvert 2^j x_d - \operatorname{round}(2^j x_d) \rvert}{2^j} \Big)^{10/D^{1.2}} - \frac{10}{D^2}$ | 0 | Multimodal
F8 | Expanded Schaffer F6 | $F_8(x) = g(x_1, x_2) + g(x_2, x_3) + \cdots + g(x_D, x_1)$, where $g(x, y) = 0.5 + \frac{\sin^2(\sqrt{x^2 + y^2}) - 0.5}{(1 + 0.001 (x^2 + y^2))^2}$ | 0 | Multimodal
F9 | Expanded Griewank plus Rosenbrock | $F_9(x) = g(F_5(x_1, x_2)) + g(F_5(x_2, x_3)) + \cdots + g(F_5(x_D, x_1))$, where $g$ is Griewank's function $g(y) = \frac{y^2}{4000} - \cos(y) + 1$ | 0 | Multimodal
F10 | Rotated Bent Cigar | $F_{10}(x) = F_2(z)$, $z = M x$ | 0 | Unimodal
F11 | Rotated Rosenbrock | $F_{11}(x) = F_5(z)$, $z = M x$ | 0 | Multimodal
F12 | Rotated Expanded Schaffer F6 | $F_{12}(x) = F_8(z)$, $z = M x$ | 0 | Multimodal
F13 | Rotated Expanded Griewank plus Rosenbrock | $F_{13}(x) = F_9(z)$, $z = M x$ | 0 | Multimodal
F14 | Shifted Rastrigin | $F_{14}(x) = F_6(z) + f_{bias1}$, $z = x - o$, $f_{bias1} = 800$ | 800 | Multimodal
F15 | Shifted Expanded Schaffer F6 | $F_{15}(x) = F_8(z) + f_{bias2}$, $z = x - o$, $f_{bias2} = 1600$ | 1600 | Multimodal
F16 | Shifted Expanded Griewank plus Rosenbrock | $F_{16}(x) = F_9(z) + f_{bias3}$, $z = x - o$, $f_{bias3} = 1500$ | 1500 | Multimodal
F17 | Shifted Rotated Bent Cigar | $F_{17}(x) = F_2(z) + f_{bias5}$, $z = M(x - o)$, $f_{bias5} = 200$ | 200 | Unimodal
F18 | Shifted Rotated Discus | $F_{18}(x) = g(z) + f_{bias4}$, $z = M(x - o)$, $g(y) = 10^6 y_1^2 + \sum_{d=2}^{D} y_d^2$, $f_{bias4} = 300$ | 300 | Unimodal
F19 | Shifted Rotated Expanded Schaffer F6 | $F_{19}(x) = F_8(z) + f_{bias6}$, $z = M(x - o)$, $f_{bias6} = 1600$ | 1600 | Multimodal
F20 | Shifted Rotated Expanded Griewank plus Rosenbrock | $F_{20}(x) = F_9(z) + f_{bias7}$, $z = M(x - o)$, $f_{bias7} = 1500$ | 1500 | Multimodal
Table 2. Parameters and references of the involved algorithms.
PSO Variants | Parameters | Reference
FDR-PSO | $w = 0.9 - 0.5 \cdot g / Max\_iter$; $c_1 = c_2 = 2.0$ | [42]
UPSO | $w = 0.7298$; $c_1 = c_2 = 1.49445$ | [46]
FIPS | $w = 0.7298$; $c_1 = c_2 = 1.49445$ | [41]
CLPSO | $w = 0.9 - 0.5 \cdot g / Max\_iter$; $c_1 = c_2 = 2.0$ | [22]
MPSO | $w_{\min} = 0.3$, $w_{\max} = 0.9$; $c_1 = c_2 = 2.0$ | [36]
Other State-of-the-Art Meta-Heuristics | Parameters | Reference
FPA | Population size $n = 25$; switch possibility $p = 0.8$ | [27]
SSO | Population size $n = 50$; threshold $PF = 0.7$ | [47]
The Proposed Variants | Parameters | Reference
dg-PSO | $w = 0.9 - 0.5 \cdot g / Max\_iter$; $c_1 = c_2 = 2.0$ | Ours (without diversity enhancing)
DG-PSO | $w = 0.9 - 0.5 \cdot g / Max\_iter$; $c_1 = c_2 = 2.0$ | Ours (with diversity enhancing)
Table 3. Statistical results on 30 dimensions.
No. | Item | FDR | UPSO | FIPS | CLPSO | MPSO | FPA | SSO | dg-PSO | DG-PSO
F1 | Mean | 3.07 × 10−19 | 3.72 × 10−12 | 7.05 × 10+00 | 7.10 × 10−03 | 1.12 × 10−02 | 3.60 × 10−03 | 2.71 × 10−02 | 1.23 × 10−39 | 3.45 × 10−29
F1 | Std | 1.25 × 10−19 | 1.94 × 10−12 | 2.47 × 10+00 | 4.96 × 10−03 | 5.30 × 10−03 | 2.57 × 10−03 | 1.41 × 10−02 | 2.72 × 10−39 | 1.79 × 10−29
F1 | C | + | + | + | + | + | + | + | - | /
F2 | Mean | 5.90 × 10−206 | 9.59 × 10−176 | 4.32 × 10−27 | 6.75 × 10−78 | 9.84 × 10−15 | 3.81 × 10−05 | 8.18 × 10+02 | 1.28 × 10−226 | 1.68 × 10−126
F2 | Std | 0.00 × 10+00 | 0.00 × 10+00 | 1.14 × 10−27 | 3.68 × 10−78 | 6.75 × 10−15 | 3.59 × 10−05 | 1.67 × 10+02 | 0.00 × 10+00 | 1.88 × 10−126
F2 | C | - | - | + | + | + | + | + | - | /
F3 | Mean | 3.00 × 10+02 | 2.48 × 10+03 | 2.58 × 10+03 | 3.79 × 10+02 | 2.43 × 10+03 | 2.49 × 10+03 | 1.50 × 10+03 | 1.47 × 10+03 | 1.66 × 10+02
F3 | Std | 1.71 × 10+02 | 2.75 × 10+02 | 2.28 × 10+02 | 1.35 × 10+02 | 1.73 × 10+02 | 7.24 × 10+01 | 2.60 × 10+02 | 7.50 × 10+02 | 7.96 × 10+01
F3 | C | + | + | + | + | + | + | + | + | /
F4 | Mean | 2.14 × 10+03 | 1.12 × 10+03 | 1.80 × 10+02 | 3.24 × 10+02 | 5.00 × 10+03 | 4.39 × 10+01 | 4.88 × 10+00 | 7.73 × 10−03 | 1.37 × 10−01
F4 | Std | 1.56 × 10+03 | 4.92 × 10+02 | 5.49 × 10+01 | 1.23 × 10+02 | 2.49 × 10+03 | 8.42 × 10+00 | 4.00 × 10+00 | 1.20 × 10−02 | 6.15 × 10−02
F4 | C | + | + | + | + | + | + | + | - | /
F5 | Mean | 4.01 × 10−04 | 1.49 × 10+00 | 2.20 × 10+01 | 2.04 × 10+01 | 3.19 × 10+01 | 6.60 × 10+00 | 1.00 × 10+01 | 1.05 × 10+00 | 4.22 × 10−16
F5 | Std | 1.51 × 10−04 | 1.02 × 10+00 | 2.55 × 10−01 | 6.99 × 10−01 | 1.39 × 10+01 | 4.55 × 10+00 | 8.67 × 10−01 | 1.95 × 10+00 | 2.16 × 10−16
F5 | C | + | + | + | + | + | + | + | + | /
F6 | Mean | 3.04 × 10+01 | 6.93 × 10+01 | 6.11 × 10+01 | 7.76 × 10+00 | 3.42 × 10+01 | 3.67 × 10+01 | 4.31 × 10+01 | 3.08 × 10+01 | 0.00 × 10+00
F6 | Std | 2.65 × 10+00 | 8.09 × 10+00 | 4.66 × 10+00 | 1.66 × 10+00 | 4.86 × 10+00 | 3.33 × 10+00 | 7.74 × 10+00 | 1.41 × 10+01 | 0.00 × 10+00
F6 | C | + | + | + | + | + | + | + | + | /
F7 | Mean | 0.00 × 10+00 | 6.70 × 10−02 | 1.89 × 10+00 | 0.00 × 10+00 | 1.96 × 10−01 | 0.00 × 10+00 | 8.92 × 10−01 | 0.00 × 10+00 | 0.00 × 10+00
F7 | Std | 0.00 × 10+00 | 1.39 × 10−02 | 1.40 × 10−01 | 0.00 × 10+00 | 1.10 × 10−01 | 0.00 × 10+00 | 1.14 × 10−01 | 0.00 × 10+00 | 0.00 × 10+00
F7 | C | = | + | + | = | + | = | + | = | /
F8 | Mean | 5.34 × 10+00 | 9.98 × 10+00 | 9.98 × 10+00 | 3.36 × 10+00 | 8.27 × 10+00 | 1.13 × 10+01 | 8.64 × 10+00 | 4.27 × 10+00 | 3.43 × 10−01
F8 | Std | 1.26 × 10+00 | 9.82 × 10−01 | 3.99 × 10−01 | 1.11 × 10+00 | 1.93 × 10+00 | 2.69 × 10−01 | 2.43 × 10−01 | 2.84 × 10+00 | 2.39 × 10−01
F8 | C | + | + | + | + | + | + | + | + | /
F9 | Mean | 2.99 × 10+00 | 6.43 × 10+00 | 1.15 × 10+01 | 2.18 × 10+00 | 3.35 × 10+00 | 8.53 × 10+00 | 1.95 × 10+01 | 5.77 × 10+00 | 9.93 × 10−01
F9 | Std | 7.01 × 10−01 | 1.41 × 10+00 | 9.73 × 10−01 | 6.75 × 10−01 | 8.39 × 10−01 | 2.92 × 10+00 | 2.29 × 10+00 | 1.23 × 10+00 | 1.53 × 10−01
F9 | C | + | + | + | + | + | + | + | + | /
F10 | Mean | 2.62 × 10+00 | 4.77 × 10+02 | 1.15 × 10+03 | 6.10 × 10+00 | 1.98 × 10+01 | 2.22 × 10+00 | 1.78 × 10+03 | 4.30 × 10−26 | 7.30 × 10−09
F10 | Std | 1.74 × 10+00 | 3.15 × 10+02 | 2.59 × 10+02 | 2.75 × 10+00 | 5.26 × 10+00 | 5.07 × 10−01 | 9.43 × 10+01 | 4.13 × 10−26 | 4.83 × 10−09
F10 | C | + | + | + | + | + | + | + | - | /
F11 | Mean | 1.29 × 10+01 | 1.98 × 10+01 | 2.51 × 10+01 | 5.05 × 10+01 | 3.22 × 10+01 | 2.06 × 10+00 | 2.37 × 10+01 | 2.15 × 10+01 | 1.19 × 10−01
F11 | Std | 3.64 × 10+00 | 1.24 × 10+00 | 3.49 × 10−01 | 1.61 × 10+01 | 1.84 × 10+01 | 1.06 × 10+00 | 7.37 × 10+00 | 1.82 × 10+01 | 6.20 × 10−02
F11 | C | + | + | + | + | + | + | + | + | /
F12 | Mean | 5.42 × 10+00 | 6.47 × 10+00 | 1.46 × 10+01 | 5.28 × 10+00 | 1.20 × 10+01 | 9.28 × 10+00 | 2.00 × 10+01 | 1.10 × 10+01 | 2.84 × 10+00
F12 | Std | 9.36 × 10−01 | 4.13 × 10−01 | 5.10 × 10−01 | 6.47 × 10−01 | 2.18 × 10+00 | 7.55 × 10−01 | 1.22 × 10+00 | 9.25 × 10−01 | 3.91 × 10−01
F12 | C | + | + | + | + | + | + | + | + | /
F13 | Mean | 9.38 × 10+00 | 1.14 × 10+01 | 1.13 × 10+01 | 1.01 × 10+01 | 1.02 × 10+01 | 1.21 × 10+01 | 8.76 × 10+00 | 1.68 × 10+01 | 7.71 × 10+00
F13 | Std | 5.11 × 10−01 | 1.20 × 10−01 | 2.99 × 10−01 | 3.96 × 10−01 | 7.00 × 10−01 | 1.54 × 10−01 | 3.22 × 10−01 | 2.22 × 10+00 | 5.43 × 10−01
F13 | C | + | + | + | + | + | + | + | + | /
F14 | Mean | 2.13 × 10+01 | 8.88 × 10+01 | 6.23 × 10+01 | 3.05 × 10+01 | 1.29 × 10+02 | 7.82 × 10+01 | 9.59 × 10+01 | 7.76 × 10+01 | 9.09 × 10−14
F14 | Std | 2.10 × 10+00 | 4.66 × 10+00 | 4.09 × 10+00 | 4.22 × 10+00 | 2.74 × 10+01 | 6.16 × 10+00 | 1.27 × 10+01 | 2.36 × 10+01 | 2.54 × 10−14
F14 | C | + | + | + | + | + | + | + | + | /
F15 | Mean | 3.07 × 10+00 | 7.44 × 10+00 | 1.23 × 10+01 | 4.61 × 10+00 | 1.23 × 10+01 | 1.37 × 10+01 | 2.10 × 10+01 | 7.55 × 10+00 | 1.09 × 10+00
F15 | Std | 2.63 × 10−01 | 6.44 × 10−01 | 5.58 × 10−01 | 1.95 × 10+00 | 8.38 × 10+00 | 2.09 × 10+00 | 1.69 × 10+00 | 6.22 × 10−01 | 1.21 × 10−01
F15 | C | + | + | + | + | + | + | + | + | /
F16 | Mean | 7.15 × 10+00 | 1.08 × 10+01 | 1.04 × 10+01 | 3.69 × 10+00 | 1.08 × 10+01 | 1.20 × 10+01 | 1.22 × 10+01 | 3.70 × 10+00 | 2.91 × 10−01
F16 | Std | 5.17 × 10−01 | 2.38 × 10−01 | 1.36 × 10−01 | 6.79 × 10−01 | 7.00 × 10−01 | 9.60 × 10−02 | 1.15 × 10−01 | 7.70 × 10−01 | 2.65 × 10−12
F16 | C | + | + | + | + | + | + | + | + | /
F17 | Mean | 2.89 × 10+08 | 1.39 × 10+04 | 1.60 × 10+03 | 4.79 × 10+08 | 1.11 × 10+09 | 6.83 × 10+03 | 6.98 × 10+07 | 6.11 × 10−12 | 1.88 × 10−02
F17 | Std | 1.62 × 10+08 | 6.35 × 10+03 | 5.97 × 10+02 | 2.14 × 10+08 | 5.35 × 10+08 | 3.55 × 10+03 | 1.19 × 10+07 | 6.16 × 10−12 | 7.65 × 10−03
F17 | C | + | + | + | + | + | + | + | - | /
F18 | Mean | 6.14 × 10+00 | 5.65 × 10+02 | 1.63 × 10+03 | 1.78 × 10+03 | 2.08 × 10+04 | 1.50 × 10+01 | 2.13 × 10+04 | 6.82 × 10−13 | 3.55 × 10−07
F18 | Std | 3.43 × 10+00 | 4.07 × 10+02 | 1.50 × 10+02 | 1.12 × 10+03 | 1.15 × 10+04 | 2.81 × 10+00 | 1.87 × 10+03 | 3.38 × 10−13 | 1.75 × 10−07
F18 | C | + | + | + | + | + | + | + | - | /
F19 | Mean | 1.05 × 10+01 | 1.18 × 10+01 | 1.18 × 10+01 | 1.02 × 10+01 | 1.22 × 10+01 | 1.21 × 10+01 | 1.24 × 10+01 | 1.15 × 10+01 | 1.01 × 10+01
F19 | Std | 2.05 × 10−01 | 2.31 × 10−01 | 8.86 × 10−02 | 3.08 × 10−01 | 5.40 × 10−01 | 1.25 × 10−01 | 1.04 × 10−01 | 3.52 × 10−01 | 3.28 × 10−01
F19 | C | + | + | + | = | + | + | + | + | /
F20 | Mean | 7.67 × 10+00 | 7.22 × 10+00 | 1.33 × 10+01 | 5.70 × 10+00 | 1.56 × 10+01 | 1.65 × 10+01 | 2.38 × 10+01 | 4.77 × 10+00 | 4.18 × 10+00
F20 | Std | 1.93 × 10+00 | 1.10 × 10+00 | 7.55 × 10−01 | 1.01 × 10+00 | 5.62 × 10+00 | 2.66 × 10+00 | 2.29 × 10+00 | 3.01 × 10−01 | 4.65 × 10−01
C++++++++/
W/T/L18/1/119/0/1B18/2/020/0/019/1/020/0/013/1/6/
Note: the gray bacground highlights the best result on each function.
Table 4. Statistical results on 50 dimensions.
| No. | Item | FDR | UPSO | FIPS | CLPSO | MPSO | FPA | SSO | dg-PSO | DG-PSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 6.98e-08 | 2.52e-05 | 3.22e+03 | 1.27e+02 | 7.10e+00 | 4.24e-01 | 1.13e-02 | 7.35e-22 | 6.46e-18 |
| | Std | 3.53e-08 | 1.22e-05 | 8.23e+02 | 5.25e+01 | 5.24e+00 | 2.32e-01 | 6.12e-03 | 7.50e-22 | 2.54e-18 |
| | C | + | + | + | + | + | + | + | - | / |
| F2 | Mean | 4.28e-164 | 5.86e-200 | 2.00e-22 | 8.77e-74 | 6.04e-13 | 6.44e-05 | 1.50e+03 | 6.55e-216 | 1.44e-121 |
| | Std | 0.00e+00 | 0.00e+00 | 5.27e-23 | 6.05e-74 | 2.97e-13 | 3.84e-05 | 1.85e+02 | 3.63e-216 | 9.20e-122 |
| | C | - | - | + | + | + | + | + | + | / |
| F3 | Mean | 1.31e+03 | 5.27e+03 | 7.77e+03 | 7.57e+02 | 4.11e+03 | 4.81e+03 | 3.38e+03 | 3.68e+03 | 4.50e+02 |
| | Std | 4.86e+02 | 4.04e+02 | 2.29e+02 | 2.90e+02 | 6.46e+02 | 1.55e+02 | 5.58e+02 | 1.88e+03 | 1.21e+02 |
| | C | + | + | + | + | + | + | + | + | / |
| F4 | Mean | 5.10e+03 | 1.66e+04 | 1.05e+04 | 1.25e+04 | 4.37e+02 | 3.76e+02 | 5.04e+03 | 8.21e+01 | 1.45e+02 |
| | Std | 3.72e+03 | 4.01e+03 | 1.34e+03 | 5.04e+03 | 1.74e+02 | 1.09e+02 | 2.49e+03 | 3.57e+01 | 7.05e+01 |
| | C | + | + | + | + | + | + | + | - | / |
| F5 | Mean | 2.49e+01 | 1.92e-01 | 4.22e+01 | 9.21e+01 | 7.25e-02 | 9.03e+00 | 3.79e+01 | 1.85e+00 | 7.21e-02 |
| | Std | 1.27e+01 | 9.25e-02 | 1.96e-01 | 2.78e+01 | 4.75e-02 | 5.00e+00 | 3.11e+00 | 1.72e+00 | 4.46e-01 |
| | C | + | + | + | + | = | + | + | + | / |
| F6 | Mean | 6.29e+01 | 1.31e+02 | 1.67e+02 | 1.43e+01 | 7.02e+01 | 6.94e+01 | 6.95e+01 | 5.94e+01 | 6.04e-15 |
| | Std | 6.33e+00 | 1.72e+01 | 9.78e+00 | 5.67e-01 | 1.65e+01 | 8.82e+00 | 7.57e+00 | 1.48e+01 | 7.94e-16 |
| | C | + | + | + | + | + | + | + | + | / |
| F7 | Mean | 0.00e+00 | 1.45e-01 | 2.60e+00 | 0.00e+00 | 3.05e-01 | 0.00e+00 | 1.46e+00 | 0.00e+00 | 0.00e+00 |
| | Std | 0.00e+00 | 1.28e-02 | 9.52e-02 | 0.00e+00 | 1.75e-01 | 0.00e+00 | 1.25e-01 | 0.00e+00 | 0.00e+00 |
| | C | = | + | + | = | + | = | + | + | / |
| F8 | Mean | 1.16e+01 | 2.00e+01 | 1.96e+01 | 4.93e+00 | 1.98e+01 | 1.91e+01 | 1.54e+01 | 1.05e+01 | 8.74e-01 |
| | Std | 2.37e+00 | 4.87e-01 | 5.52e-01 | 1.73e+00 | 7.46e-01 | 9.88e-01 | 5.30e-01 | 4.85e-01 | 1.76e-01 |
| | C | + | + | + | + | + | + | + | + | / |
| F9 | Mean | 7.66e+00 | 1.58e+01 | 2.83e+01 | 3.85e+00 | 1.08e+01 | 2.21e+01 | 3.53e+01 | 6.36e+00 | 1.88e+00 |
| | Std | 1.16e+00 | 3.02e+00 | 1.05e+00 | 9.01e-01 | 2.79e+00 | 3.53e+00 | 2.68e+00 | 9.90e-01 | 3.46e-01 |
| | C | + | + | + | + | + | + | + | + | / |
| F10 | Mean | 1.02e+02 | 1.46e+03 | 8.12e+03 | 4.46e+02 | 2.81e+02 | 1.14e+02 | 3.08e+03 | 2.69e-07 | 9.16e-05 |
| | Std | 1.68e+01 | 4.09e+02 | 7.63e+02 | 7.17e+01 | 1.01e+02 | 5.14e+01 | 1.87e+02 | 2.24e-07 | 3.04e-05 |
| | C | + | + | + | + | + | + | + | + | / |
| F11 | Mean | 6.08e+01 | 5.25e+01 | 4.49e+01 | 1.16e+02 | 5.98e+01 | 4.61e-02 | 5.53e+01 | 1.70e+01 | 2.02e-02 |
| | Std | 1.35e+01 | 1.27e+01 | 3.52e-01 | 4.43e+01 | 1.63e+01 | 1.62e-02 | 7.07e+00 | 1.32e+01 | 1.05e-02 |
| | C | + | + | + | + | + | + | + | + | / |
| F12 | Mean | 1.15e+01 | 2.42e+01 | 3.13e+01 | 1.19e+01 | 2.38e+01 | 2.19e+01 | 4.23e+01 | 1.78e+00 | 8.18e+00 |
| | Std | 1.33e+00 | 3.59e+00 | 2.74e-01 | 1.09e+00 | 2.57e+00 | 8.40e-01 | 3.23e+00 | 6.58e-01 | 6.69e-01 |
| | C | + | + | + | + | + | + | + | - | / |
| F13 | Mean | 1.73e+01 | 2.03e+01 | 2.13e+01 | 1.89e+01 | 1.87e+01 | 2.11e+01 | 1.60e+01 | 2.62e+01 | 1.58e+01 |
| | Std | 5.78e-01 | 4.40e-01 | 1.64e-01 | 3.40e-01 | 7.67e-01 | 1.84e-01 | 5.61e-01 | 6.18e+00 | 5.60e-01 |
| | C | + | + | + | + | + | + | = | + | / |
| F14 | Mean | 7.73e+01 | 1.93e+02 | 1.73e+02 | 6.77e+01 | 2.82e+02 | 1.55e+02 | 2.83e+02 | 2.48e+02 | 1.59e-13 |
| | Std | 7.70e+00 | 1.76e+01 | 1.20e+01 | 4.42e+00 | 3.04e+01 | 1.07e+01 | 2.23e+01 | 2.04e+01 | 3.11e-14 |
| | C | + | + | + | + | + | + | + | + | / |
| F15 | Mean | 2.50e+01 | 2.16e+01 | 2.71e+01 | 5.84e+03 | 1.07e+01 | 5.38e+01 | 4.95e+01 | 2.14e+01 | 1.62e+00 |
| | Std | 1.03e+01 | 3.16e+00 | 9.78e-01 | 4.23e+03 | 1.73e+00 | 9.85e+00 | 1.06e+00 | 3.81e-01 | 7.70e-02 |
| | C | + | + | + | + | + | + | + | + | / |
| F16 | Mean | 1.26e+01 | 2.02e+01 | 2.03e+01 | 5.43e+00 | 2.09e+01 | 2.10e+01 | 2.17e+01 | 1.40e+01 | 8.70e-01 |
| | Std | 1.01e+00 | 2.30e-01 | 6.67e-02 | 5.09e-01 | 9.02e-01 | 2.92e-01 | 8.77e-02 | 5.39e+00 | 2.86e-01 |
| | C | + | + | + | + | + | + | + | + | / |
| F17 | Mean | 1.41e+09 | 1.96e+03 | 4.83e+04 | 3.83e+09 | 2.03e+10 | 1.51e+04 | 2.89e+08 | 7.71e+08 | 1.58e+04 |
| | Std | 7.06e+08 | 9.96e+02 | 3.28e+04 | 1.15e+09 | 1.38e+10 | 5.62e+03 | 6.63e+07 | 8.60e+08 | 8.68e+03 |
| | C | + | - | + | + | + | = | + | + | / |
| F18 | Mean | 1.88e+03 | 4.90e+03 | 1.09e+04 | 3.05e+03 | 3.73e+03 | 1.48e+03 | 6.35e+04 | 6.23e-09 | 3.37e-03 |
| | Std | 1.28e+03 | 5.95e+02 | 9.56e+02 | 1.00e+03 | 1.57e+03 | 3.26e+02 | 3.80e+03 | 2.11e-09 | 1.52e-03 |
| | C | + | + | + | + | + | + | + | - | / |
| F19 | Mean | 1.95e+01 | 2.12e+01 | 2.17e+01 | 1.92e+01 | 2.07e+01 | 2.17e+01 | 2.22e+01 | 2.07e+01 | 1.93e+01 |
| | Std | 5.51e-01 | 1.08e-01 | 1.44e-01 | 3.62e-01 | 1.55e-01 | 2.64e-01 | 6.31e-02 | 8.82e-01 | 2.37e-01 |
| | C | = | + | + | = | + | + | + | + | / |
| F20 | Mean | 7.71e+01 | 2.70e+01 | 3.19e+01 | 1.15e+02 | 3.94e+02 | 4.45e+01 | 5.55e+01 | 1.19e+01 | 8.64e+00 |
| | Std | 4.93e+01 | 4.94e+00 | 6.65e-01 | 5.86e+01 | 1.99e+02 | 7.92e+00 | 4.90e+00 | 7.90e+00 | 9.42e-01 |
| | C | + | + | + | + | + | + | + | + | / |
| W/T/L | | 17/2/1 | 19/0/1 | 20/0/0 | 19/1/0 | 19/1/0 | 19/2/0 | 19/1/0 | 13/1/6 | / |
Note: the gray background highlights the best result on each function.
Table 5. Numerical rankings of the Friedman test.
| Dimensions | FDR | UPSO | FIPS | CLPSO | MPSO | FPA | SSO | dg-PSO | DG-PSO |
|---|---|---|---|---|---|---|---|---|---|
| 30-D | 3.45 | 5.625 | 6.725 | 4.2 | 6.95 | 5.9 | 7.3 | 3.35 | 1.5 |
| 50-D | 4.25 | 5.6 | 7.075 | 5 | 6.025 | 5.275 | 6.85 | 3.275 | 1.65 |
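The Friedman average ranks reported in Tables 5 and 8 can be reproduced from per-function results with a short tie-aware ranking routine. A minimal NumPy sketch follows; the toy error matrix is invented for illustration and is not data from the paper:

```python
import numpy as np

def average_rank(values):
    """Tie-aware ranks (1 = smallest value); tied entries share the mean rank."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values, kind="stable")
    ranks = np.empty(len(values))
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of entries equal to values[order[i]].
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        # Positions i..j (0-based) correspond to ranks i+1..j+1; assign their mean.
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def friedman_average_ranks(errors):
    """errors: (n_functions, n_algorithms) array of mean errors, lower is better.
    Returns each algorithm's rank averaged over the functions."""
    return np.mean([average_rank(row) for row in errors], axis=0)

# Toy example: 3 benchmark functions, 3 algorithms.
errors = np.array([
    [0.5, 1.2, 0.1],
    [3.0, 2.0, 1.0],
    [0.2, 0.9, 0.4],
])
print(friedman_average_ranks(errors))  # [2.0, 2.6667, 1.3333]
```

As a sanity check, the average ranks of all algorithms always sum to n(n+1)/2; the rows of Tables 5 and 8 satisfy this (45 for nine algorithms, 21 for six).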
Table 6. Parameters and references of the algorithms.
| Algorithm | Parameter | Value | Reference |
|---|---|---|---|
| DE | Population size | 40 | [62] |
| | Scaling factor | 0.8 | |
| | Crossover probability | 0.25 | |
| ABC | Swarm size | 20 | [59] |
| | Max trial limit | 50 | |
| CS | Number of nests | 25 | [53] |
| | Step size | 1 | |
| | Mutation probability value | 0.25 | |
| | Scale factor | 1.5 | |
| MPSO | Maximum, minimum swarm size | 40, 5 | [36] |
| | Acceleration constants c1, c2 | 2, 2 | |
| | Maximum, minimum inertia weight | 0.9, 0.3 | |
| SSO | Population size | 50 | [47] |
| | Threshold PF | 0.7 | |
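For the MPSO entries in Table 6, the maximum and minimum inertia weights (0.9 and 0.3) are typically applied through a decreasing schedule over the run. The sketch below assumes a linear schedule for illustration; the exact rule MPSO uses is defined in reference [36] and may differ:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.3):
    """Linearly decreasing inertia weight from w_max at iteration 0
    to w_min at iteration t_max. The linear form is an assumption of
    this sketch, not necessarily the schedule used by MPSO."""
    return w_max - (w_max - w_min) * t / t_max
```

A large early weight favors exploration of the search space, while the small final weight favors exploitation around the best-known positions.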
Table 7. Statistical results.
| Image | D | Item | DE | ABC | CS | MPSO | SSO | Ours |
|---|---|---|---|---|---|---|---|---|
| a | 2 | Mean | 1.79165e+03 | 1.79165e+03 | 1.79165e+03 | 1.79165e+03 | 1.79165e+03 | 1.79165e+03 |
| | | Std | 4.67e-13 | 4.67e-13 | 4.67e-13 | 4.67e-13 | 4.67e-13 | 4.67e-13 |
| | 3 | Mean | 1.94917e+03 | 1.94918e+03 | 1.94919e+03 | 1.94919e+03 | 1.94919e+03 | 1.94919e+03 |
| | | Std | 6.90e-02 | 1.59e-02 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 |
| | 4 | Mean | 2.03403e+03 | 2.03416e+03 | 2.03427e+03 | 2.03427e+03 | 2.03427e+03 | 2.03427e+03 |
| | | Std | 6.01e-01 | 5.45e-02 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 |
| | 5 | Mean | 2.06635e+03 | 2.06618e+03 | 2.06648e+03 | 2.06648e+03 | 2.06648e+03 | 2.06648e+03 |
| | | Std | 2.74e-01 | 1.79e-01 | 4.50e-03 | 5.84e-03 | 0.00e+00 | 0.00e+00 |
| | 7 | Mean | 2.09715e+03 | 2.09653e+03 | 2.09734e+03 | 2.09739e+03 | 2.09739e+03 | 2.09739e+03 |
| | | Std | 2.25e-01 | 4.38e-01 | 4.63e-02 | 4.20e-04 | 1.42e-03 | 4.67e-13 |
| | 9 | Mean | 2.11153e+03 | 2.11097e+03 | 2.11188e+03 | 2.11158e+03 | 2.11192e+03 | 2.11192e+03 |
| | | Std | 4.20e-01 | 3.52e-01 | 6.55e-02 | 2.43e-01 | 8.36e-02 | 9.06e-02 |
| | 15 | Mean | 2.12912e+03 | 2.12857e+03 | 2.12965e+03 | 2.12976e+03 | 2.12922e+03 | 2.12995e+03 |
| | | Std | 8.17e-01 | 2.30e-01 | 1.69e-01 | 3.25e-01 | 6.58e-01 | 1.40e-01 |
| | 20 | Mean | 2.13280e+03 | 2.13370e+03 | 2.13444e+03 | 2.13464e+03 | 2.13333e+03 | 2.13472e+03 |
| | | Std | 7.99e-01 | 2.01e-01 | 1.18e-01 | 5.27e-01 | 6.86e-01 | 1.44e-01 |
| b | 2 | Mean | 2.27112e+03 | 2.27112e+03 | 2.27112e+03 | 2.27112e+03 | 2.27112e+03 | 2.27112e+03 |
| | | Std | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 |
| | 3 | Mean | 2.50343e+03 | 2.50346e+03 | 2.50347e+03 | 2.50347e+03 | 2.50347e+03 | 2.50347e+03 |
| | | Std | 1.37e-01 | 1.40e-02 | 4.67e-13 | 4.67e-13 | 4.67e-13 | 4.67e-13 |
| | 4 | Mean | 2.58706e+03 | 2.58695e+03 | 2.58710e+03 | 2.58709e+03 | 2.58710e+03 | 2.58710e+03 |
| | | Std | 1.12e-01 | 1.30e-01 | 4.67e-13 | 4.67e-13 | 4.67e-13 | 4.67e-13 |
| | 5 | Mean | 2.62673e+03 | 2.62643e+03 | 2.62685e+03 | 2.62686e+03 | 2.62686e+03 | 2.62686e+03 |
| | | Std | 1.49e-01 | 2.85e-01 | 1.39e-02 | 6.24e-02 | 9.33e-13 | 9.33e-13 |
| | 7 | Mean | 2.66456e+03 | 2.66401e+03 | 2.66481e+03 | 2.66488e+03 | 2.66490e+03 | 2.66490e+03 |
| | | Std | 2.78e-01 | 2.76e-01 | 5.67e-02 | 2.08e-01 | 4.67e-13 | 4.67e-13 |
| | 9 | Mean | 2.68400e+03 | 2.68338e+03 | 2.68448e+03 | 2.68458e+03 | 2.68457e+03 | 2.68459e+03 |
| | | Std | 4.71e-01 | 4.59e-01 | 5.68e-02 | 3.55e-01 | 2.69e-02 | 8.48e-03 |
| | 15 | Mean | 2.70493e+03 | 2.70461e+03 | 2.70564e+03 | 2.70564e+03 | 2.70535e+03 | 2.70588e+03 |
| | | Std | 7.97e-01 | 2.72e-01 | 1.34e-01 | 3.15e-01 | 1.10e+00 | 7.63e-02 |
| | 20 | Mean | 2.70907e+03 | 2.71046e+03 | 2.71127e+03 | 2.71168e+03 | 2.71019e+03 | 2.71153e+03 |
| | | Std | 1.30e+00 | 2.09e-01 | 1.54e-01 | 5.25e-01 | 7.90e-01 | 2.79e-01 |
| c | 2 | Mean | 2.67669e+03 | 2.67669e+03 | 2.67669e+03 | 2.67669e+03 | 2.67669e+03 | 2.67669e+03 |
| | | Std | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 |
| | 3 | Mean | 2.87034e+03 | 2.87040e+03 | 2.87042e+03 | 2.87042e+03 | 2.87042e+03 | 2.87042e+03 |
| | | Std | 2.80e-01 | 2.94e-02 | 9.33e-13 | 9.33e-13 | 9.33e-13 | 9.33e-13 |
| | 4 | Mean | 2.93107e+03 | 2.93110e+03 | 2.93128e+03 | 2.93129e+03 | 2.93129e+03 | 2.93129e+03 |
| | | Std | 5.24e-01 | 1.69e-01 | 2.87e-03 | 2.36e-01 | 1.40e-12 | 1.40e-12 |
| | 5 | Mean | 2.96630e+03 | 2.96622e+03 | 2.96645e+03 | 2.96645e+03 | 2.96645e+03 | 2.96645e+03 |
| | | Std | 3.08e-01 | 1.72e-01 | 1.90e-03 | 1.65e-01 | 1.40e-12 | 1.57e-03 |
| | 7 | Mean | 2.99969e+03 | 2.99923e+03 | 2.99982e+03 | 2.99979e+03 | 2.99980e+03 | 2.99981e+03 |
| | | Std | 2.09e-01 | 2.68e-01 | 4.74e-02 | 9.48e-03 | 8.60e-02 | 8.25e-02 |
| | 9 | Mean | 3.01430e+03 | 3.01377e+03 | 3.01450e+03 | 3.01462e+03 | 3.01467e+03 | 3.01474e+03 |
| | | Std | 4.31e-01 | 3.59e-01 | 2.06e-01 | 3.49e-01 | 3.39e-01 | 2.45e-01 |
| | 15 | Mean | 3.02911e+03 | 3.02886e+03 | 3.02963e+03 | 3.02973e+03 | 3.02936e+03 | 3.02989e+03 |
| | | Std | 8.23e-01 | 1.91e-01 | 1.53e-01 | 2.02e-01 | 8.09e-01 | 2.10e-01 |
| | 20 | Mean | 3.03250e+03 | 3.03354e+03 | 3.03409e+03 | 3.03443e+03 | 3.03310e+03 | 3.03447e+03 |
| | | Std | 7.16e-01 | 1.50e-01 | 1.07e-01 | 5.52e-01 | 6.31e-01 | 1.05e-01 |
| d | 2 | Mean | 3.50826e+03 | 3.50826e+03 | 3.50826e+03 | 3.50826e+03 | 3.50826e+03 | 3.50826e+03 |
| | | Std | 1.40e-12 | 1.40e-12 | 1.40e-12 | 1.40e-12 | 1.40e-12 | 1.40e-12 |
| | 3 | Mean | 3.65657e+03 | 3.65661e+03 | 3.65665e+03 | 3.65665e+03 | 3.65665e+03 | 3.65665e+03 |
| | | Std | 2.31e-01 | 5.15e-02 | 2.33e-12 | 2.05e-02 | 2.33e-12 | 2.33e-12 |
| | 4 | Mean | 3.73546e+03 | 3.73537e+03 | 3.73562e+03 | 3.73562e+03 | 3.73562e+03 | 3.73562e+03 |
| | | Std | 2.28e-01 | 1.56e-01 | 4.67e-13 | 2.78e-02 | 4.67e-13 | 4.67e-13 |
| | 5 | Mean | 3.78074e+03 | 3.78052e+03 | 3.78100e+03 | 3.78099e+03 | 3.78100e+03 | 3.78100e+03 |
| | | Std | 4.10e-01 | 2.56e-01 | 1.23e-02 | 3.15e-01 | 9.33e-13 | 9.33e-13 |
| | 7 | Mean | 3.81857e+03 | 3.81818e+03 | 3.81897e+03 | 3.81864e+03 | 3.81865e+03 | 3.81883e+03 |
| | | Std | 6.02e-01 | 4.10e-01 | 3.38e-02 | 4.35e-01 | 1.11e+00 | 8.17e-01 |
| | 9 | Mean | 3.83628e+03 | 3.83555e+03 | 3.83665e+03 | 3.83681e+03 | 3.83680e+03 | 3.83682e+03 |
| | | Std | 5.38e-01 | 4.48e-01 | 8.98e-02 | 5.02e-01 | 1.62e-02 | 3.18e-03 |
| | 15 | Mean | 3.85501e+03 | 3.85465e+03 | 3.85558e+03 | 3.85598e+03 | 3.85543e+03 | 3.85593e+03 |
| | | Std | 9.03e-01 | 2.48e-01 | 1.44e-01 | 8.12e-01 | 6.33e-01 | 1.62e-01 |
| | 20 | Mean | 3.85891e+03 | 3.86036e+03 | 3.86097e+03 | 3.86131e+03 | 3.86011e+03 | 3.86135e+03 |
| | | Std | 9.91e-01 | 2.52e-01 | 1.64e-01 | 6.12e-01 | 7.65e-01 | 1.97e-01 |
| e | 2 | Mean | 1.07218e+03 | 1.07218e+03 | 1.07218e+03 | 1.07218e+03 | 1.07218e+03 | 1.07218e+03 |
| | | Std | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 |
| | 3 | Mean | 1.17024e+03 | 1.17021e+03 | 1.17025e+03 | 1.17024e+03 | 1.17025e+03 | 1.17025e+03 |
| | | Std | 1.49e-02 | 4.97e-02 | 2.33e-13 | 2.33e-13 | 2.33e-13 | 2.33e-13 |
| | 4 | Mean | 1.21391e+03 | 1.21383e+03 | 1.21398e+03 | 1.21399e+03 | 1.21399e+03 | 1.21399e+03 |
| | | Std | 1.29e-01 | 1.23e-01 | 6.60e-03 | 3.89e-02 | 2.33e-13 | 2.33e-13 |
| | 5 | Mean | 1.24246e+03 | 1.24221e+03 | 1.24261e+03 | 1.24265e+03 | 1.24265e+03 | 1.24265e+03 |
| | | Std | 3.18e-01 | 1.81e-01 | 6.27e-02 | 1.95e-01 | 4.67e-13 | 4.67e-13 |
| | 7 | Mean | 1.27623e+03 | 1.27577e+03 | 1.27656e+03 | 1.27660e+03 | 1.27661e+03 | 1.27661e+03 |
| | | Std | 5.52e-01 | 4.12e-01 | 5.43e-02 | 2.55e-01 | 2.33e-13 | 2.33e-13 |
| | 9 | Mean | 1.29227e+03 | 1.29152e+03 | 1.29254e+03 | 1.29267e+03 | 1.29267e+03 | 1.29268e+03 |
| | | Std | 4.55e-01 | 4.68e-01 | 7.29e-02 | 3.04e-01 | 1.42e-02 | 3.65e-03 |
| | 15 | Mean | 1.30947e+03 | 1.30914e+03 | 1.30986e+03 | 1.31018e+03 | 1.30972e+03 | 1.31035e+03 |
| | | Std | 7.66e-01 | 3.43e-01 | 2.49e-01 | 3.64e-01 | 6.56e-01 | 1.45e-01 |
| | 20 | Mean | 1.31345e+03 | 1.31412e+03 | 1.31467e+03 | 1.31501e+03 | 1.31379e+03 | 1.31522e+03 |
| | | Std | 8.91e-01 | 2.70e-01 | 1.94e-01 | 4.25e-01 | 7.16e-01 | 1.09e-01 |
Note: the gray background highlights the best result for each image and threshold number D.
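The mean values in Table 7 are fitness scores of the multilevel thresholding objective that each algorithm maximizes. As an illustration of how such a fitness is evaluated for a candidate threshold vector, the sketch below implements Otsu's between-class variance, a common objective in meta-heuristic thresholding; the exact criterion used in the paper may differ, and the histogram here is invented for the example:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style fitness for multilevel thresholding: sum over classes of
    w_k * (mu_k - mu_total)^2, where `thresholds` cuts the gray-level range
    into classes. Higher is better (stronger class separation)."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                      # normalized histogram
    levels = np.arange(len(p))
    mu_total = np.sum(levels * p)        # global mean gray level
    cuts = [0, *sorted(thresholds), len(p)]
    fitness = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()               # class probability
        if w > 0:
            mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
            fitness += w * (mu - mu_total) ** 2
    return fitness

# A threshold between the two clusters scores higher than one inside a cluster.
hist = [10, 10, 0, 0, 0, 0, 10, 10]
print(between_class_variance(hist, [4]))  # 9.0
print(between_class_variance(hist, [1]))  # ~4.08
```

In a PSO-based segmenter, each particle encodes one candidate threshold vector, and this function (or an entropy-based alternative) is used as the fitness that the swarm maximizes.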
Table 8. Numerical rankings of the Friedman tests.
| Level | DE | ABC | CS | MPSO | SSO | Ours |
|---|---|---|---|---|---|---|
| Normal level | 5.25 | 4.775 | 2.8625 | 3.3125 | 2.775 | 2.025 |
| High level | 5.3125 | 4.7625 | 2.6625 | 3.1125 | 3.2 | 1.95 |

Shen, L.; Huang, X.; Fan, C. Double-Group Particle Swarm Optimization and Its Application in Remote Sensing Image Segmentation. Sensors 2018, 18, 1393. https://doi.org/10.3390/s18051393