Article

A Shuffle-Based Artificial Bee Colony Algorithm for Solving Integer Programming and Minimax Problems

Faculty of Applied Management, Economics and Finance, University Business Academy in Novi Sad, Jevrejska 24, 11000 Belgrade, Serbia
Mathematics 2021, 9(11), 1211; https://doi.org/10.3390/math9111211
Submission received: 25 March 2021 / Revised: 23 May 2021 / Accepted: 25 May 2021 / Published: 27 May 2021
(This article belongs to the Special Issue Advances of Metaheuristic Computation)

Abstract

The artificial bee colony (ABC) algorithm is a prominent swarm intelligence technique due to its simple structure and effective performance. However, the ABC algorithm has a slow convergence rate when it is used to solve complex optimization problems, since its solution search equation favors exploration over exploitation. This paper presents an improved ABC algorithm for solving integer programming and minimax problems. The proposed approach employs a modified ABC search operator, which exploits the useful information of the current best solution in the onlooker phase with the intention of improving its exploitation tendency. Furthermore, the shuffle mutation operator is applied to the created solutions in both bee phases to help the search achieve a better balance between the global exploration and local exploitation abilities and to provide a valuable convergence speed. The experimental results, obtained by testing on seven integer programming problems and ten minimax problems, show that the overall performance of the proposed approach is superior to that of the ABC. Additionally, it obtains competitive results compared with other state-of-the-art algorithms.

1. Introduction

A wide variety of problems from different areas can be formulated as integer programming and minimax problems. Some applications in which integer programming problems appear are system-reliability design, scheduling, capital budgeting, warehouse location, portfolio analysis, automated production systems, mechanical design, transportation and cartography [1,2,3,4,5]. Furthermore, minimax optimization problems are found in many applications, such as optimal control, engineering design, game theory, signal and data processing [6,7,8,9,10].
Since integer programming is known to be NP-hard, solving these problems is considered a challenging task. Dynamic programming and branch-and-bound (BB) are well-known exact integer programming methods [11,12]. These methods divide the feasible region into smaller sub-regions or decompose problems into sub-problems. The main drawback of dynamic programming is that the amount of computation necessary to find an optimal solution grows exponentially as the number of variables rises. Branch-and-bound techniques have a high computational cost when solving large-scale problems that require the exploration of a search tree containing hundreds of nodes [11].
Metaheuristic optimization algorithms provide high-quality solutions in an acceptable amount of time. These techniques do not make any presumptions about the problem and can be used to solve a broad class of challenging optimization problems [13,14,15,16,17,18]. One of the most notable classes of metaheuristics, swarm intelligence (SI) algorithms, has foundations in imitating the collective behavior of biological agents. Particle swarm optimization (PSO) [19], artificial bee colony (ABC) [20], harmony search (HS) [21], firefly algorithm (FA) [22], gravitational search algorithm (GSA) [23], cuckoo search (CS) [24], whale optimization algorithm (WOA) [25] and bat algorithm (BA) [26] are some of the notable SI algorithms. In the last two decades, many SI algorithms were applied to solve integer programming problems. For instance, the PSO was employed to solve integer programming problems in [2]. On standard test problems, the PSO outperformed the branch-and-bound method in most cases.
Sequential quadratic programming (SQP) and smoothing techniques are common strategies for solving minimax problems [6]. These methods perform local minimization and require derivative information for the objective function, which in most applications is not analytically available. Furthermore, SQP and smoothing techniques struggle to achieve satisfactory solutions when the objective function is discontinuous. On the other hand, metaheuristics are problem-independent optimization methods. Their search operators use some randomness, which enables the algorithm to move away from a local optimum and search on a global scale [27]. Hence, metaheuristic optimization algorithms are considered an adequate alternative for minimax problems.
Since their invention, the original variants of metaheuristic algorithms have been modified to improve their performance. In [28], a memetic PSO algorithm that integrates local search methods into the basic PSO was developed. The local and global variants of the memetic PSO scheme were tested on minimax and integer programming problems, and the experimental results showed that the memetic PSO outperformed the corresponding variants of the PSO algorithm in the majority of benchmarks. A hybrid cuckoo search algorithm with the Nelder Mead method, named HCSNM, for solving integer programming and minimax problems was proposed in [29], where it was concluded that the use of the Nelder Mead method enhances the convergence speed of the basic CS technique. A hybrid bat algorithm (HBDS) for solving integer programming problems was proposed in [30]; the HBDS incorporates direct search methods into the BA to enhance its intensification ability. Recently, a new hybrid harmony search algorithm with the multidirectional search method, called MDHSA, was developed to enhance the performance of the standard HS algorithm for solving integer programming and minimax problems [31].
The efficiency of the basic ABC algorithm for integer programming problems was investigated in [11]. To our knowledge, the ABC has not been tested on minimax problems in any previous study. Therefore, investigating the performance of the standard ABC algorithm for solving minimax problems and proposing suitable modifications to further improve its performance on integer programming and minimax problems remains an open research problem.
Motivated by these reasons, this paper presents a shuffle-based artificial bee colony algorithm (SB-ABC) for solving integer programming and minimax problems. Although ABC has achieved success in different research fields, it was noticed that the exploitation ability of the ABC is deficient because of a randomly picked neighborhood food source in its solution search equation [32]. Therefore, the ABC algorithm has a slow convergence rate when it is applied to solve complex optimization problems. In order to enhance the exploitation ability of the ABC algorithm, the proposed approach employs a modified ABC search operator, which exploits the useful information of the current best solution in the onlooker phase. Furthermore, in certain iterations, the shuffle mutation operator is applied to the newly created solutions in both bee phases. In that way, the proposed algorithm provides useful diversity in the population, which is crucial in finding a good balance between exploitation and exploration. The SB-ABC algorithm is tested on seven integer programming problems and ten minimax problems. The obtained results for integer programming problems are compared to those of the ABC, BB method and 12 other metaheuristics. For minimax problems, the achieved results are compared to those of the ABC, SQP method and 11 other algorithms. Experimental results indicated that the SB-ABC algorithm obtained highly competitive results in comparison with the other algorithms presented in the literature.
The paper is organized as follows. In Section 2, definitions of minimax and integer programming problems are given. The standard ABC is presented in Section 3. The proposed shuffle-based artificial bee colony approach is explained in Section 4. In Section 5, the optimization results are presented and analyzed. In Section 6, the influence of the proposed modifications on the performance of the SB-ABC algorithm is discussed. Section 7 provides concluding remarks.

2. Problem Statements

An integer programming problem is a discrete optimization problem where all of the variables are limited to integer values. A general integer programming problem can be stated as [11]:
$\min f(x), \quad x \in S \subseteq \mathbb{Z}^{n}$ (1)
where S is the feasible region and Z denotes the set of integers. A problem where some variables are constrained to integers while some variables are not is a mixed integer programming problem. A special instance of the integer programming problem is that in which the variables are restricted to be either 0 or 1. This case is called the 0–1 programming problem or the binary integer programming problem.
Minimax optimization deals with a composition of an inner maximization problem and an outer minimization problem. A general form of the minimax problem can be stated as [31]:
$\min F(x)$ (2)
where
$F(x) = \max\, f_i(x), \quad i = 1, \ldots, m$ (3)
with $f_i(x) \colon S \subseteq \mathbb{R}^{n} \to \mathbb{R}$, $i = 1, \ldots, m$.
Furthermore, a nonlinear programming problem with inequality constraints, of the form
$\min F(x), \quad g_i(x) \geq 0, \quad i = 1, \ldots, m,$ (4)
can be transformed into a minimax problem as follows:
$\min \max\, f_i(x), \quad i = 1, \ldots, m$ (5)
where
$f_1(x) = F(x), \qquad f_i(x) = F(x) - \alpha_i \cdot g_i(x), \quad \alpha_i > 0$ (6)
for $i = 2, \ldots, m$. It has been shown that when $\alpha_i$ is large enough, the optimum point of the minimax problem coincides with the optimum point of the nonlinear programming problem [6].
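As a minimal illustration (a toy example, not one of the benchmark problems used later), consider minimizing $F(x) = x^2$ subject to the single constraint $g_1(x) = x - 1 \geq 0$. With one penalty weight $\alpha > 0$, the equivalent minimax problem reads
$\min_{x} \max \{ f_1(x), f_2(x) \} = \min_{x} \max \{\, x^2,\; x^2 - \alpha (x - 1) \,\}.$
For sufficiently large $\alpha$ (here any $\alpha > 2$), the minimax optimum is attained at $x^{*} = 1$ with value 1, which coincides with the optimum of the constrained problem.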

3. Artificial Bee Colony Algorithm

Foraging behavior of a honey bee swarm motivated the development of the ABC algorithm [20]. The population of artificial bees is made of employed bees, onlooker bees and scout bees. One-half of the population consists of employed bees. Onlookers and scouts make up the other half of the population. In the basic ABC, each food source represents a possible solution for the problem, and the number of the employed bees is equal to the number of food sources. All bees that are presently exploiting a food source are employed bees. The onlooker bees aim to choose promising food sources from those discovered by the employed bees according to a probability proportional to the quality of the food source. After the selection of the food source, the onlookers further seek food in the vicinity of the selected food source. The scout bees are transformed from several employed bees that abandon their unpromising food sources to seek new ones.
The control parameters of the basic ABC algorithm are the size of the population (SP), which is equal to the sum of employed and onlooker bees, the maximum cycle number (MCN), and the parameter limit, which represents the number of trials allowed before a food source is abandoned. In the initialization phase, the ABC creates a randomly distributed initial population, which includes SP solutions. Following this step, three phases—employed, onlooker and scout—are repeated for a certain number of iterations. After each iteration, the best-discovered solution is saved.
Each employed bee seeks a better food source in the employed phase. The search operator used to create a novel food source v i from the old one x i is given by:
$v_{ij} = x_{ij} + \varphi \cdot (x_{ij} - x_{lj})$ (7)
where $j$ is a randomly picked parameter index, $x_l$ is a randomly selected food source that is different from $x_i$, and $\varphi$ is a uniform random number in $(-1, 1)$. Greedy selection between the old and new food sources decides whether the old food source will be replaced by the new one.
In the onlooker phase, each onlooker bee chooses a food source with a probability proportional to its fitness value. The same search strategy, given by Equation (7), is used to generate a candidate food source from the chosen one. Greedy selection between the old and new food sources decides whether the old food source (solution) will be updated. In the scout phase, a solution that cannot be improved within a predetermined number of trials is replaced with a randomly created solution.
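For illustration, a minimal Java sketch of the search operator of Equation (7) is given below. The array layout and names (foods, candidate) are illustrative assumptions and are not taken from the original ABC implementation; greedy selection between the returned candidate and its parent is left to the caller.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the basic ABC search operator, Equation (7): one randomly chosen
// parameter of solution i is perturbed along the direction to a random neighbour l.
final class BasicAbcOperator {
    static double[] candidate(double[][] foods, int i) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int d = foods[i].length;
        int j = rnd.nextInt(d);                                  // random parameter index
        int l;
        do { l = rnd.nextInt(foods.length); } while (l == i);    // random neighbour, l != i
        double phi = rnd.nextDouble(-1.0, 1.0);                  // phi in (-1, 1)
        double[] v = foods[i].clone();
        v[j] = foods[i][j] + phi * (foods[i][j] - foods[l][j]);
        return v;
    }
}
```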
Many variants of the ABC for solving continuous optimization problems have been proposed [33,34,35,36,37,38,39,40]. For instance, an enhanced version of the ABC, which introduces modifications related to an elite strategy and dimension learning, was introduced in [33]. An ABC variant that uses novel search strategies in the employed and onlooker bee phases was developed in [34]. In [35], a hybrid method that combines the firefly algorithm and a multi-strategy ABC was developed for solving numerical optimization problems. An enhanced ABC based on multi-strategy fusion was proposed to improve the search ability of the ABC with a small population [40].
Although the standard ABC was initially invented for continuous optimization problems, modified variants have also been proposed for combinatorial and discrete problems [41,42,43,44,45,46]. Akay and Karaboga modified the ABC algorithm in order to solve integer programming problems. In this version of the ABC, a new control parameter called the modification rate (MR) is employed in its solution search strategy [11]. The modification rate parameter controls how many optimization parameters may be modified. In [41], an ABC algorithm with a modified choice function for the traveling salesman problem was developed. Two novel ABC algorithms that adopt a multiple colonies strategy were proposed to solve the vehicle routing problem [43]. A newly developed ABC variant that integrates initial solutions, an elitism strategy, and recovery and local search schemes was proposed for solving the operating room scheduling problem [45]. An improved ABC algorithm for solving the strength–redundancy allocation problem is presented in [46]. In general, application fields of the ABC method include data mining, neural networks, image processing, cryptanalysis, data clustering and engineering [47,48,49,50,51,52,53].

4. The Proposed Approach: SB-ABC

Important characteristics of each metaheuristic algorithm are exploitation and exploration [54]. Exploitation refers to the process of visiting areas of a search space in the neighborhood of previously found satisfactory solutions. Exploration is the process of generating solutions with ample diversity and far from the current solutions. A balanced combination of these conflicting processes is essential for successful optimization performance. According to Equation (7), the new individual is generated by moving the old solution to a randomly picked solution, and the direction of the search is random. Consequently, the solution search equation given by Equation (7) has good exploration tendency, but it is not promising at exploitation. Since too much exploration tends to decrease the convergence speed of the algorithm [35], the proposed approach uses modified ABC search equations in employed and onlooker bee phases. To obtain useful diversity in the population, in each bee phase, the shuffle mutation operator is applied to new candidate solutions.
To create a new solution v i from the solution x i in the employed bee phase, the SB-ABC algorithm uses a search strategy that is described by [11]:
$v_{ij} = \begin{cases} x_{ij} + \varphi_i \cdot (x_{ij} - x_{kj}), & \text{if } R_{ij} < MR \\ x_{ij}, & \text{otherwise} \end{cases}$ (8)
where $j \in \{1, 2, \ldots, D\}$ and $D$ is the number of optimization parameters, i.e., the dimension of the problem. In Equation (8), $x_k$ is a randomly selected food source that is different from $x_i$, $\varphi_i$ is a uniform random number in $(-1, 1)$, $R_{ij}$ is a randomly chosen real number in the range $(0, 1)$ and $MR$ is the modification rate control parameter, whose value is in the range $(0, 1)$. A higher value of the $MR$ parameter allows more parameters of the parent solution to be changed, with the aim of increasing the convergence speed of the basic ABC algorithm.
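A corresponding sketch of the MR-controlled operator of Equation (8) is shown below. Following the notation above, a single $\varphi_i$ is drawn per candidate, and each parameter is modified only when the random number $R_{ij}$ falls below MR; the names used are illustrative.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the employed-bee operator of Equation (8): parameter j of solution i is
// moved towards/away from neighbour k only when R_ij < MR, otherwise it is kept.
final class MrControlledOperator {
    static double[] candidate(double[][] foods, int i, double mr) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int k;
        do { k = rnd.nextInt(foods.length); } while (k == i);    // random neighbour, k != i
        double phi = rnd.nextDouble(-1.0, 1.0);                  // phi_i in (-1, 1)
        double[] v = foods[i].clone();
        for (int j = 0; j < v.length; j++) {
            if (rnd.nextDouble() < mr) {                         // R_ij < MR
                v[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j]);
            }
        }
        return v;
    }
}
```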
In the onlooker bee phase of the SB-ABC algorithm, the solutions are chosen according to the probability, which is given by [51]:
$p_i = 0.9 \cdot \left( fit_i / maxfit \right) + 0.1$ (9)
where the best fitness value in the population is denoted by maxfit, while f i t i marks the fitness value of the ith solution.
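A short sketch of the selection probability of Equation (9) is given below (the array name fit is illustrative); note that every solution keeps a selection probability of at least 0.1.

```java
// Sketch of Equation (9): onlooker selection probability of the i-th solution.
final class OnlookerProbability {
    static double probability(double[] fit, int i) {
        double maxFit = fit[0];
        for (double f : fit) {
            maxFit = Math.max(maxFit, f);   // best (largest) fitness in the population
        }
        return 0.9 * (fit[i] / maxFit) + 0.1;
    }
}
```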
Inspired by the gbest-guided artificial bee colony (GABC) algorithm [55], an ABC variant proposed for numerical optimization, we modify the search equation described by Equation (8) as follows:
$v_{ij} = \begin{cases} x_{ij} + \varphi_{ij} \cdot (x_{ij} - x_{kj}) + \phi_{ij} \cdot (y_j - x_{ij}), & \text{if } R_{ij} < MR \\ x_{ij}, & \text{otherwise} \end{cases}$ (10)
where j { 1 , 2 , , D } and D is the number of optimization parameters, i.e., dimension of the problem. In Equation (10), v i is a new candidate solution, x i is parent solution, φ i j is a uniform random number in range (−1, 1), ϕ i j is a uniform random number in the segment [0, 1.5], x k is a randomly selected food source that is different from x i , y j is the jth parameter of the best solution found so far, and R i j is a randomly chosen real number within (0,1). According to Equation (10), the third term can move the new potential solution towards the global best solution. Hence, the modified search strategy given by Equation (10) can enhance the exploitation tendency of the basic ABC algorithm.
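The modified operator of Equation (10) can be sketched as follows; the array best holds the best-so-far solution $y$, and the remaining names are illustrative assumptions.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the onlooker-phase operator of Equation (10): in addition to the random
// neighbour term, each modified parameter is pulled towards the best-so-far solution.
final class GbestGuidedOperator {
    static double[] candidate(double[][] foods, int i, double[] best, double mr) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        int k;
        do { k = rnd.nextInt(foods.length); } while (k == i);    // random neighbour, k != i
        double[] v = foods[i].clone();
        for (int j = 0; j < v.length; j++) {
            if (rnd.nextDouble() < mr) {                         // R_ij < MR
                double phi = rnd.nextDouble(-1.0, 1.0);          // phi_ij in (-1, 1)
                double psi = rnd.nextDouble(0.0, 1.5);           // second coefficient in [0, 1.5)
                v[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
                                   + psi * (best[j] - foods[i][j]);
            }
        }
        return v;
    }
}
```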
The right amount of population diversity is of great significance in achieving a proper balance between exploitation and exploration. In the SB-ABC algorithm, the exploitation is increased by using the modified search equation in the onlooker bee phase. Thus, the differences among individuals of a population are decreased since the search process is quite focused on a local region of good solutions. To promote diversity at certain stages of the search process, a new parameter called random permutation production interval ( R P P I ) is introduced in the SB-ABC. This parameter is used as follows: after each R P P I th cycle, the shuffle mutation operator is applied to new candidate solutions at employed and onlooker bee phases. The shuffle mutation is a mutation operator where the mutated solution takes the components of the original solution, applying a permutation to them [56]. Usage of the shuffle mutation operator enables a better exploration of solutions but only every R P P I iterations.
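One straightforward way to realize the shuffle mutation is an in-place Fisher–Yates shuffle of the candidate's components, as sketched below; it is applied only in cycles where cycle mod RPPI = 0, and the class and method names are illustrative.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the shuffle mutation: the components of a candidate solution are
// reordered according to a uniformly random permutation (Fisher-Yates shuffle).
final class ShuffleMutation {
    static void apply(double[] v) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (int j = v.length - 1; j > 0; j--) {
            int r = rnd.nextInt(j + 1);   // pick a position in 0..j
            double tmp = v[j];
            v[j] = v[r];
            v[r] = tmp;
        }
    }
}
```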
The proposed approach maintains a counter $trial_i$ for each solution $x_i$ during the search process. The value $trial_i$ records the number of consecutive trials in which the solution $x_i$ has not been improved and is used for the abandonment decision. In the scout phase of the SB-ABC algorithm, the solution with the highest $trial$ value is exchanged with a randomly generated solution, provided that this value is greater than the $limit$ control parameter.
The pseudo-code of the employed bee phase is presented in Algorithm 1, while the procedure of the onlooker bee phase is described in Algorithm 2. The input of Algorithm 1 involves the current solutions x i with corresponding values t r i a l i , i = 1 , 2 , , S P / 2 , current c y c l e value, values of M R and R P P I parameters, and the objective function f. The output of Algorithm 1 is the updated population of solutions x i and t r i a l i values, i = 1 , 2 , , S P / 2 , which will be employed in the onlooker bee phase. The input of Algorithm 2 includes the current population of solutions x i with corresponding values t r i a l i , i = 1 , 2 , , S P / 2 , current c y c l e value, values of M R and R P P I parameters, and the objective function f. The output of Algorithm 2 is the updated population of solutions x i and t r i a l i values, i = 1 , 2 , , S P / 2 , which will be used in the next iteration. The pseudo-code of the SB-ABC algorithm is presented in Algorithm 3. The input of Algorithm 3 includes the values of S P , M C N , M R , l i m i t and R P P I control parameters and the objective function f. The output of Algorithm 3 is the best solution found.
It is important to mention that the proposed approach SB-ABC introduces two modifications in comparison with the ABC algorithm adjusted for integer programming problems: use of the modified ABC search operator described by Equation (10) and the application of the shuffle mutation operator. The crucial difference between these two approaches consists in the different balance of exploitation and exploration. Exploitation is enhanced in the onlooker phase by applying the global best solution to guide the search process. Useful diversity of the population and better exploration of solutions is achieved on the global level by applying the shuffle mutation operator every R P P I th iteration.
The SB-ABC algorithm employs three specific control parameters to manage the search process: modification rate M R , l i m i t and R P P I , which determines the cycles in which the shuffle mutation operator is applied to candidate solutions. It also uses standard control parameters for all population-based metaheuristics, the population size and maximum number of cycles. In order to solve the integer programming problems, the SB-ABC rounds the parameter values to the closest integer after evolution according to Equations (8) and (10). Solutions were also rounded after the initialization phase and scout phase of the algorithm. Therefore, they were considered as integer numbers for all operations.
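For the integer programming mode described above, the rounding step can be sketched as follows (the class and method names are illustrative):

```java
// Sketch of the integer handling: after a candidate is produced by Equation (8) or (10),
// or after random initialization, every parameter is rounded to the nearest integer.
final class IntegerRounding {
    static void roundToIntegers(double[] v) {
        for (int j = 0; j < v.length; j++) {
            v[j] = Math.round(v[j]);   // long result widens back to double
        }
    }
}
```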
Algorithm 1 Employed bee phase of the SB-ABC algorithm
  • for i = 1 to S P / 2 do
  •  Create candidate solution v i for x i by Equation (8);
  • if ( c y c l e mod R P P I = 0 ) then
  •   Create a random permutation r p of {1, 2, …, D};
  •   for j = 1 to D do
  •     v i , j = v i , r p j ;
  •   end for
  • end if
  •  Evaluate the solution v i ;
  • if f( v i ) < f( x i ) then
  •    x i = v i ;
  •    t r i a l i = 0;
  • else
  •    t r i a l i = t r i a l i + 1;
  • end if
  • end for
Algorithm 2 Onlooker bee phase of the SB-ABC algorithm
  • for i = 1 to S P / 2 do
  •  Calculate the probability p i by Equation (9);
  • end for
  • t = 1;
  • i = 1;
  • while ( t S P / 2 ) do
  • if ( r a n d ( 0 , 1 ) < p i ) then
  •    t = t + 1 ;
  •   Create candidate solution v i for x i by Equation (10);
  •   if ( c y c l e mod R P P I = 0 ) then
  •    Create a random permutation r p of {1, 2, …, D};
  •    for j = 1 to D do
  •      v i , j = v i , r p j ;
  •    end for
  •   end if
  •   Evaluate the solution v i ;
  •   if f( v i ) < f( x i ) then
  •     x i = v i ;
  •     t r i a l i = 0;
  •   else
  •     t r i a l i = t r i a l i + 1;
  •   end if
  • end if
  • i = i + 1 ;
  • if ( i > S P / 2 ) then
  •    i = 1 ;
  • end if
  • end while
Algorithm 3 Pseudo-code of the SB-ABC algorithm
  • Initialize the control parameters of the SB-ABC;
  • Generate initial population of solutions x i , i = 1 , 2 , , S P / 2 randomly in the search space;
  • Calculate objective function value of each solution x i , i = 1 , 2 , , S P / 2 ;
  • Set t r i a l i = 0, i = 1 , 2 , , S P / 2 ;
  • c y c l e = 0;
  • repeat
  •  Execute Algorithm 1;
  •  Execute Algorithm 2;
  •  One solution with the highest t r i a l value that is greater than the abandonment threshold, if such a solution occurs, is exchanged with a randomly generated solution;
  •  Save the current best solution;
  • c y c l e = c y c l e + 1;
  • until ( c y c l e = M C N )

5. Experimental Study

The performance of the SB-ABC algorithm is evaluated through seven integer programming problems and ten minimax problems widely used in the literature. The proposed algorithm is implemented in Java, and it was run on a PC with an Intel(R) Core(TM) i5-4460 3.2 GHz processor. In order to show the efficiency of the SB-ABC algorithm, it is compared with several algorithms that were previously applied to solve these problems. In the next subsections, brief descriptions of the used benchmark problems and results of a comparison between the SB-ABC and other state-of-the-art approaches are presented.

5.1. Benchmark Problems

In this section, the integer programming and minimax optimization test problems are described. To test the performance of the SB-ABC algorithm on integer programming problems, seven problems widely used in the literature are employed. The mathematical models of these problems can be found in [11,28,31]. These problems are presented below:
Test problem F I 1 is defined in [28]:
$F_{I1}(x) = |x_1| + |x_2| + \cdots + |x_D|$
where $D$ is the dimension of the problem, i.e., the number of optimization parameters. The global minimum is $F_{I1}(x^*) = 0$.
Test problem F I 2 is defined in [28]:
$F_{I2}(x) = x^{T} x = \begin{pmatrix} x_1 & x_2 & \cdots & x_D \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_D \end{pmatrix}$
where $D$ is the dimension of the problem. The global minimum is $F_{I2}(x^*) = 0$.
Test problem F I 3 is defined in [28]:
$F_{I3}(x) = -\begin{pmatrix} 15 & 27 & 36 & 18 & 12 \end{pmatrix} x + x^{T} \begin{pmatrix} 35 & -20 & -10 & 32 & -10 \\ -20 & 40 & -6 & -31 & 32 \\ -10 & -6 & 11 & -6 & -10 \\ 32 & -31 & -6 & 38 & -20 \\ -10 & 32 & -10 & -20 & 31 \end{pmatrix} x$
The global minimum is $F_{I3}(x^*) = -737$.
Test problem F I 4 is defined in [28]:
$F_{I4}(x) = (9x_1^2 + 2x_2^2 - 11)^2 + (3x_1 + 4x_2^2 - 7)^2$
The global minimum is $F_{I4}(x^*) = 0$.
Test problem F I 5 is defined in [28]:
$F_{I5}(x) = (x_1 + 10x_2)^2 + 5(x_3 - x_4)^2 + (x_2 - 2x_3)^4 + 10(x_1 - x_4)^4$
The global minimum is $F_{I5}(x^*) = 0$.
Test problem F I 6 is defined in [28]:
$F_{I6}(x) = 2x_1^2 + 3x_2^2 + 4x_1 x_2 - 6x_1 - 3x_2$
The global minimum is $F_{I6}(x^*) = -6$.
Test problem F I 7 is defined in [28]:
$F_{I7}(x) = -3803.84 - 138.08x_1 - 232.92x_2 + 123.08x_1^2 + 203.64x_2^2 + 182.25x_1 x_2$
The global minimum is $F_{I7}(x^*) = -3833.12$.
To investigate the efficiency of the SB-ABC algorithm on minimax problems, ten benchmark functions are considered [6,28,31]. These benchmarks are presented as follows:
Test problem F M 1 is defined in [31,57]:
$F_{M1}(x) = \max\, f_i(x), \quad i = 1, 2, 3$
$f_1(x) = x_1^2 + x_2^4, \qquad f_2(x) = (2 - x_1)^2 + (2 - x_2)^2, \qquad f_3(x) = 2\exp(-x_1 + x_2)$
The desired error goal for this problem is $F_{M1}(x^*) = 1.9522245$.
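For illustration, the value of $F_{M1}$ that is fed to the optimizer at a point $x$ is simply the maximum of the three component functions defined above, as sketched below (class and method names are illustrative):

```java
// Sketch of evaluating the minimax objective F_M1 (CB2): the objective value at x
// is the maximum of the component functions f_1, f_2, f_3 given above.
final class Fm1Objective {
    static double value(double[] x) {
        double f1 = x[0] * x[0] + Math.pow(x[1], 4);
        double f2 = Math.pow(2.0 - x[0], 2) + Math.pow(2.0 - x[1], 2);
        double f3 = 2.0 * Math.exp(-x[0] + x[1]);
        return Math.max(f1, Math.max(f2, f3));
    }
}
```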
Test problem F M 2 is defined in [31]:
$F_{M2}(x) = \max\, f_i(x), \quad i = 1, 2, 3$
$f_1(x) = x_1^4 + x_2^2, \qquad f_2(x) = (2 - x_1)^2 + (2 - x_2)^2, \qquad f_3(x) = 2\exp(-x_1 + x_2)$
The desired error goal for this problem is $F_{M2}(x^*) = 2$.
Test problem F M 3 is defined in [6,57]:
$F_{M3}(x) = x_1^2 + x_2^2 + 2x_3^2 + x_4^2 - 5x_1 - 5x_2 - 21x_3 + 7x_4$
$g_2(x) = -x_1^2 - x_2^2 - x_3^2 - x_4^2 - x_1 + x_2 - x_3 + x_4 + 8$
$g_3(x) = -x_1^2 - 2x_2^2 - x_3^2 - 2x_4^2 + x_1 + x_4 + 10$
$g_4(x) = -x_1^2 - x_2^2 - x_3^2 - 2x_1 + x_2 + x_4 + 5$
The desired error goal for this problem is $F_{M3}(x^*) = -40.1$.
Test problem F M 4 is defined in [31]:
$F_{M4}(x) = \max\, f_i(x), \quad i = 1, 2$
$f_1(x) = |x_1 + 2x_2 - 7|, \qquad f_2(x) = |2x_1 + x_2 - 5|$
The desired error goal for this problem is $F_{M4}(x^*) = 10^{-4}$.
Test problem F M 5 is defined in [31]:
$F_{M5}(x) = \max\, f_i(x)$
$f_i(x) = |x_i|, \quad i = 1, \ldots, 10$
The desired error goal for this problem is $F_{M5}(x^*) = 10^{-4}$.
Five other test problems were selected from [57]. The name of the minimax benchmark problems, the dimension of the problem, the number of f i ( x ) functions and desired error goal are reported in Table 1.

5.2. The General Performance of the SB-ABC for Integer Programming Problems

Because the SB-ABC is an improved variant of the ABC, this section presents a comparison between the SB-ABC and the ABC algorithm adjusted to solve integer programming problems on the seven benchmark problems described above. The branch-and-bound (BB) method, a common traditional technique, is also included in the comparison with the proposed approach.
Preliminary testing of the SB-ABC was done with the aim of obtaining suitable combinations of parameter values. The SP parameter was set to 20; this value proved to be a proper selection for all executed tests, and increasing it would raise the computational cost without any improvement in the obtained results. Our tests confirmed the earlier finding that a value of 0.8 for the MR parameter is a good choice for these optimization problems [11]. Additionally, it was experimentally determined that a value of 50 for the parameter limit and a value of 3 for the parameter RPPI are suitable for the SB-ABC algorithm. It was observed that significantly lower or higher values of the limit parameter can deteriorate the obtained results, while higher values of the RPPI parameter would lead to less frequent use of the shuffle mutation operator and, consequently, weaker performance of the SB-ABC algorithm. In the SB-ABC, SP/2 solutions are evaluated during the initialization step, and there are SP/2 employed bees, SP/2 onlookers and at most one scout bee per iteration. Therefore, the maximum number of function evaluations for the SB-ABC is SP/2 + (SP + 1) · MCN. The maximum number of function evaluations executed by the SB-ABC for all benchmarks was set to 20,000, and the SB-ABC was terminated when the global minimum was reached.
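For instance, with $SP = 20$ and the evaluation budget above, the number of cycles is bounded by
$\frac{SP}{2} + (SP + 1) \cdot MCN = 10 + 21 \cdot MCN \leq 20{,}000 \;\Rightarrow\; MCN \leq 951,$
so, under these settings, at most 951 cycles are executed when the budget is enforced exactly (a run may stop earlier once the global minimum is reached).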
The results of the BB method and ABC algorithm were taken from their original papers [2,11]. For these comparisons, in the BB method and ABC algorithm, the maximum number of function evaluations was set to 25,000. When an accuracy of 10 6 was achieved, these methods were terminated. The BB method, ABC and SB-ABC algorithms are conducted for 30 independent runs for each benchmark problem.
The following metrics are used to estimate the performances of the BB, ABC and SB-ABC. The convergence speed of each algorithm is compared by recording the mean number of function evaluations (mean) required to reach the acceptable value. If the mean value is smaller, the convergence speed is faster. Since SI algorithms are stochastic, the obtained mean results are not the same in each run. To examine the stability of each method, standard deviation (SD) values are measured. The performance of an algorithm is more stable if the standard deviation value is lower. The success rate (SR) is used as a metric for robustness or reliability of methods. This rate is defined as the ratio of successful runs in the total number of executed runs. A run is considered successful if an algorithm obtains a solution for which the value of the objective function is less than the corresponding acceptable value. If the value of SR is greater, the reliability of the algorithm is better. In Table 2, the mean, corresponding standard deviation (SD) values and SR values of the BB method, ABC and SB-ABC for the benchmark problems F I 1 with 5, 10, 15, 20, 25 and 30 variables and test problems F I 2 F I 7 over 30 runs are given. The best mean results are indicated in bold.
As shown in Table 2, with respect to the SR results reached by these methods, the SB-ABC performs the most reliably since, for each test case, the obtained SR result of the SB-ABC is 100%. The BB method performance is less robust than the SB-ABC for problem F I 1 with 30 variables, since it achieved only 14 successful runs out of 30, while in the remaining test cases, both approaches obtained the same SR results. The SB-ABC and ABC algorithms obtained the same SR results on all test problems, with the exception of problem F I 3 , where the ABC performance was less robust. With respect to the mean results, from Table 2, it can be observed that the SB-ABC performs better than its rivals in the majority of cases. To be exact, the SB-ABC is better than the BB method and ABC in 12 and 11 test cases, respectively. On the other hand, the BB method has better mean results for problem F I 2 , while the ABC outperformed the SB-ABC for test functions F I 5 and F I 7 . With respect to the standard deviation results, from Table 2, it can be seen that the SB-ABC performance is more stable than the BB and ABC methods in most cases.

5.3. Comparison against Other State-of-the-Art Algorithms for Integer Programming Problems

To further demonstrate the efficiency of the SB-ABC, it is benchmarked against 12 other metaheuristic algorithms that were previously successfully used to solve integer programming problems. These algorithms are the basic PSO and its four variants RWMPSOg, RWMPSOl, PSOg, PSOl [28], standard cuckoo search (CS), firefly algorithm (FA), gravitational search algorithm (GSA), whale optimization algorithm (WOA), hybrid cuckoo search algorithm with Nelder Mead method (HCSNM) [29], hybrid bat algorithm (HBDS) [30] and the recently proposed hybrid harmony search algorithm with multidirectional search method (MDHSA) [31].
The results obtained by the RWMPSOg, RWMPSOl, PSOg, PSOl are taken from [28], the results reached by the HCSNM are taken from [29], the results achieved by HBDS are taken from [30], while the results of the MDHSA and basic PSO, CS, FA, GSA and WOA are taken from [31]. In Table 3, the mean, corresponding standard deviation values and SR values of the RWMPSOg, RWMPSOl, PSOg, PSOl, HCSNM, MDHSA and SB-ABC for the benchmark problems F I 1 with 5 variables and test problems F I 2 F I 7 over 50 runs are given. Table 4 presents the mean and standard deviation values obtained by the PSO, FA, CS, GSA, WOA, HBDS and SB-ABC for problem F I 1 with 5 variables and test problems F I 2 F I 7 over 50 runs. The best mean results are in bold. The metaheuristics used for comparison with the SB-ABC also performed the maximum number of function evaluations of 20,000. Since the results of these 12 algorithms are achieved over 50 runs, the statistical results of the SB-ABC over 50 runs are presented in Table 3 and Table 4.
The results from Table 3 and Table 4 show that the proposed algorithm obtained better mean results on the majority of benchmark problems in comparison with its competitors. Precisely, the SB-ABC is better than RWMPSOg, RWMPSOl, PSOg, PSOl, MDHSA, HCSNM, PSO, FA, CS, GSA, WOA and HBDS in six, six, seven, seven, five, four, seven, seven, seven, seven, seven, and six test problems, respectively. In contrast, the SB-ABC is outperformed by the RWMPSOg, RWMPSOl, PSOg, PSOl, MDHSA, HCSNM, PSO, FA, CS, GSA, WOA and HBDS in one, one, zero, zero, two, three, zero, zero, zero, zero, zero and one test problems, respectively. From the standard deviation values presented in Table 3 and Table 4, it can be observed that the proposed SB-ABC has lower standard deviation values on the majority of benchmark problems in comparison with RWMPSOg, RWMPSOl, PSOg PSOl, PSO, CS, GSA and WOA. On the other hand, the HCSNM, MDHSA, FA and HBDS have lower standard deviations compared to the SB-ABC for most of the cases. In addition, the SR results demonstrate that the SB-ABC achieved a 100% success rate on all benchmark problems.

5.4. The General Performance of the SB-ABC for Minimax Problems

In this section, the performance of the SB-ABC for solving minimax problems is investigated. The performance of the SB-ABC is compared to that of the SQP method and the standard ABC algorithm. A fair comparison is ensured since the SQP, ABC and SB-ABC algorithms all employed a maximum of 20,000 function evaluations. A run is counted as successful when the desired error goal is reached within the maximum number of function evaluations.
The specific parameter settings of the SB-ABC are kept the same as mentioned in Section 5.2. Since the standard ABC has not previously been tested on minimax problems, we tested its performance on problems $F_{M1}$–$F_{M10}$. The ABC employed the following parameter settings: SP = 20, MR = 0.8 and limit = 5 · SP · D. These control parameter values were used in the standard ABC adjusted to solve integer programming problems [11]. The results of the SQP method were taken from the respective paper [2].
In Table 5, the mean, corresponding standard deviation (SD) values and SR values of the SQP method, ABC and SB-ABC for the benchmark problems F M 1 F M 10 over 30 runs are presented. The best mean results are in bold. The mark (-) for F M 10 in the SQP method means that the results are not reported in its original paper. From Table 5, it can be noticed that the SB-ABC converges faster to the global minimum in comparison with the SQP method and ABC for the majority of test problems.
From the obtained mean values, it can be observed that the SB-ABC has better performance than the SQP and ABC methods in 6 and 10 test problems, respectively. On the other hand, the SB-ABC is outperformed by the SQP and ABC on three and zero benchmarks, respectively. Furthermore, the SR results indicate that the SB-ABC performance is more robust in comparison with the SQP on six test problems ( F M 1 , F M 2 , F M 6 , F M 7 , F M 8 and F M 9 ), while both methods reached the same SR results for the rest of the benchmarks. With respect to the standard deviation results, from Table 5, it can be noticed that the SB-ABC performance is more stable than the SQP and ABC methods in most cases. Furthermore, from Table 5, it can be seen that the SB-ABC performs more reliably than the ABC on four benchmark problems ( F M 1 , F M 5 , F M 8 and F M 10 ), while both algorithms achieved the same SR results for the remaining problems.

5.5. Comparison against Other State-of-the-Art Algorithms for Minimax Problems

In order to further examine the efficiency of the SB-ABC for minimax problems, its performance is compared to the performance of 10 other algorithms that were previously successfully used to solve these problems. These methods are the heuristic pattern search algorithm HPS2, the basic PSO and its two variants RWMPSOg and UPSOm [58], HCSNM [29], MDHSA [31], CS, FA, GSA and WOA.
The results obtained by the RWMPSOg are taken from [28], the results reached by the HPS2 are taken from [59], the results achieved by UPSOm are taken from [58], the results obtained by the HCSNM are taken from [29], while the results of the MDHSA, PSO, FA, CS, GSA and WOA are taken from [31]. In Table 6, the mean, standard deviation values and SR values of the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA and SB-ABC for the benchmark problems F M 1 F M 10 over 50 runs are given. Table 7 presents the mean and standard deviation values obtained by the PSO, FA, CS, GSA, WOA and SB-ABC for benchmark problems F M 1 F M 10 over 50 runs. The best mean results are in bold. The metaheuristics used for comparison with the SB-ABC also performed the maximum number of function evaluations of 20,000. The SB-ABC was configured with the specific parameter values, as described in Section 5.2. Since the results of these 10 algorithms are achieved over 50 runs, the statistical results of the SB-ABC over 50 runs are presented in Table 6 and Table 7. In Table 6, the mark (-) indicates that the results are not presented in the corresponding paper.
The results from Table 6 and Table 7 show that the proposed algorithm obtained better mean results on the majority of benchmark problems in comparison with its competitors. Concretely, the SB-ABC outperformed the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA, PSO, FA, CS, GSA and WOA in 5, 8, 6, 6, 7, 9, 8, 10, 9 and 10 test problems, respectively. In contrast, the SB-ABC is outperformed by the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA, PSO, FA, CS, GSA and WOA in three, one, zero, three, three, one, two, zero, one and zero benchmark problems, respectively. From standard deviation values presented in Table 6 and Table 7, it can be seen that the SB-ABC has lower standard deviation values for most of the cases in comparison with the UPSOm, RWMPSOg, CS and WOA. On the other hand, the HPS2, HCSNM, MDHSA, PSO, FA and GSA have lower standard deviations compared to the SB-ABC in most cases. With respect to the SR results reached by these methods, the SB-ABC performs the same or more reliably in comparison with the HPS2, UPSOm, RWMPSOg, HCSNM and MDHSA in these minimax problems.

6. Discussion

The impact of the introduced modifications on the SB-ABC is examined in this section. Seven integer programming problems and ten minimax problems were solved by two variants of the SB-ABC. The results obtained by each variant are compared with those of the full SB-ABC approach. These variants are presented below:
  • Variant 1: To examine the effectiveness of employing the modified ABC operator in the onlooker bee phase given by Equation (10), an SB-ABC version that uses the standard ABC search equation is tested. The label SB-ABC1 is used for this variant.
  • Variant 2: To investigate the effectiveness of employing the shuffle mutation operator in employed and onlooker bee phases, an SB-ABC version that does not include this operator is tested. The label SB-ABC2 is used for this variant.
Each SB-ABC version is run 50 times for each test problem. The maximum number of function evaluations was 20,000 for each method. The tested methods were configured with specific parameter values, as described in Section 5.2. The mean, standard deviation values and SR values obtained by the SB-ABC1, SB-ABC2 and SB-ABC for seven integer programming problems and ten minimax problems are presented in Table 8 and Table 9. A result in boldface denotes the best mean result. The convergence graphs achieved by the SB-ABC1, SB-ABC2 and SB-ABC on the four picked integer programming functions and the four selected minimax optimization problems are given in Figure 1 and Figure 2, respectively.
From the mean results presented in Table 8, it can be seen that the SB-ABC results outperform SB-ABC1 and SB-ABC2 versions on all integer programming benchmark problems. With respect to the SR values, it can be noticed that each algorithm achieved a 100% success rate on all F I 1 F I 7 integer programming problems. From the standard deviation values presented in Table 8, it can be observed that the SB-ABC has lower standard deviation values for most of the cases in comparison with the SB-ABC1 and SB-ABC2 versions.
Compared with the SB-ABC1, it can be seen from Table 9 that the proposed algorithm reached better mean results and the same or better SR results for all F M 1 F M 10 minimax problems. When comparing the SB-ABC with respect to the SB-ABC2, it can be noticed from Table 9 that the SB-ABC obtained better mean values for eight minimax problems and slightly worse mean results for the remaining two benchmarks ( F M 4 and F M 10 ). From the standard deviation values presented in Table 9, it can be noticed that the SB-ABC has lower standard deviation values for most of the cases in comparison with the SB-ABC1 and SB-ABC2 variants. According to the SR results presented in Table 9, the SB-ABC outperformed the SB-ABC2 in five test problems ( F M 5 , F M 6 , F M 8 , F M 9 and F M 10 ), while the SB-ABC and SB-ABC2 achieved the same results in the remaining benchmarks. As shown in Figure 1 and Figure 2, SB-ABC converges faster to the optimum on the selected problems in comparison with its two variants.
These observations indicate that each introduced modification contributes to the satisfactory performance of the SB-ABC. Use of the shuffle mutation operator provides useful diversity and consequently helps the SB-ABC to locate the favorable areas within the search space. Adding the modified ABC operator enables an enhanced exploitation ability of the algorithm. Combining these modifications significantly improves the convergence speed and robustness of the SB-ABC algorithm.

7. Conclusions

In this paper, a novel shuffle-based artificial bee colony algorithm (SB-ABC) is proposed with the intention to solve integer programming and minimax problems. The proposed algorithm employs the shuffle mutation operator and modified ABC search strategy with an aim to improve the exploitation tendency of the algorithm to provide a valuable convergence speed. The proposed approach was applied to solve seven integer programming and ten minimax problems taken from the literature. The SB-ABC algorithm obtained highly competitive results in comparison with the standard ABC, BB method and 12 other metaheuristic algorithms in solving integer programming problems. Compared with the standard ABC, SQP method and 10 other state-of-the-art algorithms, the SB-ABC showed better performance for the majority of the minimax test problems.
The effects of introduced modifications related to the shuffle mutation operator and modified ABC search operator have been investigated. It is experimentally validated that the use of each introduced modification is of great importance in achieving a satisfactory performance of the SB-ABC with respect to the convergence speed and robustness. In the proposed SB-ABC method, the balance between the global exploration and local exploitation abilities is addressed by suitable configuration of control parameters. As in many other swarm intelligence techniques, the problem of discovering appropriate values for the control parameters also exists in the SB-ABC algorithm. Extending the SB-ABC method with a self-adaptive control mechanism to reach better exploration and exploitation balance during distinct search phases is a promising direction for future study. Developing a hybrid algorithm that would incorporate different operators of some other well-established metaheuristic methods in the SB-ABC algorithm for solving large scale integer programming and minimax problems will also be examined in future work. Another possible way for creating a hybrid approach is to employ certain metaheuristic methods to assume a role of a local optimizer, while the SB-ABC algorithm would perform a global search. In addition, the application of the proposed SB-ABC approach for solving some combinatorial optimization problems will be investigated.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Coit, D.W.; Zio, E. The evolution of system reliability optimization. Reliab. Eng. Syst. Saf. 2019, 192, 106259. [Google Scholar] [CrossRef]
  2. Laskari, E.C.; Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization for integer programming. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; pp. 1582–1587. [Google Scholar]
  3. Kim, T.-H.; Cho, M.; Shin, S. Constrained Mixed-Variable Design Optimization Based on Particle Swarm Optimizer with a Diversity Classifier for Cyclically Neighboring Subpopulations. Mathematics 2020, 8, 2016. [Google Scholar] [CrossRef]
  4. Agarana, M.C.; Ajayi, O.O.; Akinwumi, I.I. Integer programming algorithm for public transport system in sub-saharan african cities. Wit. Trans. Built. Environ. 2019, 182, 339–350. [Google Scholar]
  5. Haunert, J.-H.; Wolff, A. Beyond Maximum Independent Set: An Extended Integer Programming Formulation for Point Labeling. ISPRS Int. J. Geo-Inf. 2017, 6, 342. [Google Scholar] [CrossRef] [Green Version]
  6. Laskari, E.C.; Parsopoulos, K.E.; Vrahatis, M.N. Particle swarm optimization for minimax problems. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No.02TH8600), Honolulu, HI, USA, 12–17 May 2002; pp. 1576–1581. [Google Scholar]
  7. Khakifirooz, M.; Chien, C.; Fathi, M.; Pardalos, P.M. Minimax Optimization for Recipe Management in High-Mixed Semiconductor Lithography Process. IEEE Trans. Industr. Inform. 2020, 16, 4975–4985. [Google Scholar] [CrossRef]
  8. Razaviyayn, M.; Huang, T.; Lu, S.; Nouiehed, M.; Sanjabi, M.; Hong, M. Nonconvex Min-Max Optimization: Applications, Challenges, and Recent Theoretical Advances. IEEE Signal Process. Mag. 2020, 37, 55–66. [Google Scholar] [CrossRef]
  9. Zhou, Z.; Yang, Q. An Active Set Smoothing Method for Solving Unconstrained Minimax Problems. Math. Probl. Eng. 2020, 2020, 1–25. [Google Scholar] [CrossRef]
  10. Ma, G.; Zhang, Y.; Liu, M. A generalized gradient projection method based on a new working set for minimax optimization problems with inequality constraints. J. Inequal Appl. 2017, 51, 1–14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Akay, B.; Karaboga, D. Solving Integer Programming Problems by Using Artificial Bee Colony Algorithm. In AI*IA 2009: Emergent Perspectives in Artificial Intelligence; Serra, R., Cucchiara, R., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5883, pp. 355–364. [Google Scholar]
  12. Tarray, T.A.; Bhat, M. A nonlinear programming problem using branch and bound method. Investig. Oper. 2017, 38, 291–298. [Google Scholar]
  13. Brajević, I.; Ignjatović, J. An upgraded firefly algorithm with feasibility-based rules for constrained engineering optimization problems. J. Intell. Manuf. 2019, 30, 2545–2574. [Google Scholar] [CrossRef]
  14. Stojanović, I.; Brajević, I.; Stanimirović, P.S.; Kazakovtsev, L.A.; Zdravev, Z. Application of Heuristic and Metaheuristic Algorithms in Solving Constrained Weber Problem with Feasible Region Bounded by Arcs. Math. Probl. Eng. 2017, 2017, 1–13. [Google Scholar] [CrossRef] [Green Version]
  15. Liu, Q.; Li, X.; Liu, H.; Guo, Z. Multi-objective metaheuristics for discrete optimization problems: A review of the state-of-the-art. Appl. Soft Comput. 2020, 93, 106382. [Google Scholar] [CrossRef]
  16. Ng, K.K.H.; Lee, C.K.M.; Chan, F.T.S.; Lv, Y. Review on meta-heuristics approaches for airside operation research. Appl. Soft Comput. 2018, 66, 104–133. [Google Scholar] [CrossRef]
  17. Iliopoulou, C.; Kepaptsoglou, K.; Vlahogianni, E. Metaheuristics for the transit route network design problem: A review and comparative analysis. Public Transp. 2019, 11, 487–521. [Google Scholar] [CrossRef]
  18. Bala, A.; Ismail, I.; Ibrahim, R.; Sait, S.M. Applications of Metaheuristics in Reservoir Computing Techniques: A Review. IEEE Access 2018, 6, 58012–58029. [Google Scholar] [CrossRef]
  19. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  20. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  21. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  22. Yang, X.S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications, SAGA 2009; Watanabe, O., Zeugmann, T., Eds.; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2009; Volume 5792, pp. 169–178. [Google Scholar]
  23. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  24. Yang, X.S.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  25. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Yang, X.S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); González, J.R., Pelta, D.A., Cruz, C., Terrazas, G., Krasnogor, N., Eds.; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2010; Volume 284, pp. 65–74. [Google Scholar]
  27. Brajević, I.; Stanimirović, P. An improved chaotic firefly algorithm for global numerical optimization. Int. J. Comput. Intell. Syst. 2018, 12, 131–148. [Google Scholar] [CrossRef] [Green Version]
  28. Petalas, Y.G.; Parsopoulos, K.E.; Vrahatis, M.N. Memetic particle swarm optimization. Ann. Oper. Res. 2007, 156, 99–127. [Google Scholar] [CrossRef]
  29. Ali, A.F.; Tawhid, M.A. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems. Springerplus 2016, 5, 1–22. [Google Scholar] [CrossRef] [Green Version]
  30. Ali, A.F.; Tawhid, M.A. Solving Integer Programming Problems by Hybrid Bat Algorithm and Direct Search Method. Trends Artif. Intell. 2018, 2, 46–59. [Google Scholar]
  31. Tawhid, M.A.; Ali, A.F.; Tawhid, M.A. Multidirectional harmony search algorithm for solving integer programming and minimax problems. Int. J. Bio-Inspir. Com. 2019, 13, 141–158. [Google Scholar] [CrossRef]
  32. Xiang, W.; Meng, X.; Li, Y.; He, R.; An, M. An improved artificial bee colony algorithm based on the gravity model. Inf. Sci. 2018, 429, 49–71. [Google Scholar] [CrossRef]
  33. Xiao, S.; Wang, W.; Wang, H.; Tan, D.; Wang, Y.; Yu, X.; Wu, R. An Improved Artificial Bee Colony Algorithm Based on Elite Strategy and Dimension Learning. Mathematics 2019, 7, 289. [Google Scholar] [CrossRef] [Green Version]
34. Lin, Q.; Zhu, M.; Li, G.; Wang, W.; Cui, L.; Chen, J.; Lu, J. A novel artificial bee colony algorithm with local and global information interaction. Appl. Soft Comput. 2018, 62, 702–735.
35. Brajević, I.; Stanimirović, P.S.; Li, S.; Cao, X. A Hybrid Firefly and Multi-Strategy Artificial Bee Colony Algorithm. Int. J. Comput. Intell. Syst. 2020, 13, 810–821.
36. Karaboga, D.; Akay, B.; Karaboga, N. A survey on the studies employing machine learning (ML) for enhancing artificial bee colony (ABC) optimization algorithm. Cogent Eng. 2020, 7, 1855741.
37. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y.; Naseem, R. Artificial bee colony algorithm: A component-wise analysis using diversity measurement. J. King Saud Univ. Comp. Inf. Sci. 2020, 32, 794–808.
38. Gorkemli, B.; Karaboga, D. A quick semantic artificial bee colony programming (qsABCP) for symbolic regression. Inf. Sci. 2019, 502, 346–362.
39. Aslan, S.; Karaboga, D.; Badem, H. A new artificial bee colony algorithm employing intelligent forager forwarding strategies. Appl. Soft Comput. 2020, 96, 106656.
40. Song, X.; Zhao, M.; Xing, S. A multi-strategy fusion artificial bee colony algorithm with small population. Expert Syst. Appl. 2020, 142, 112921.
41. Choong, S.S.; Wong, L.-P.; Lim, C.P. An artificial bee colony algorithm with a Modified Choice Function for the traveling salesman problem. Swarm Evol. Comput. 2019, 44, 622–635.
42. Karaboga, D.; Gorkemli, B. Solving Traveling Salesman Problem by Using Combinatorial Artificial Bee Colony Algorithms. Int. J. Artif. Intell. Tools 2019, 28, 1950004.
43. Ng, K.K.H.; Lee, K.M.; Zhang, S.Z.; Wu, K.; Ho, W. A multiple colonies artificial bee colony algorithm for a capacitated vehicle routing problem and re-routing strategies under time-dependent traffic congestion. Comput. Ind. Eng. 2017, 109, 151–168.
44. Sedighizadeh, D.; Mazaheripour, H. Optimization of multi objective vehicle routing problem using a new hybrid algorithm based on particle swarm optimization and artificial bee colony algorithm considering Precedence constraints. Alex. Eng. J. 2018, 57, 2225–2239.
45. Lin, Y.-K.; Li, M.-Y. Solving Operating Room Scheduling Problem Using Artificial Bee Colony Algorithm. Healthcare 2021, 9, 152.
46. Zhang, J.; Li, L.; Chen, Z. Strength–redundancy allocation problem using artificial bee colony algorithm for multi-state systems. Reliab. Eng. Syst. Saf. 2021, 209, 107494.
47. Hancer, E.; Karaboga, D. A comprehensive survey of traditional, merge-split and evolutionary approaches proposed for determination of cluster number. Swarm Evol. Comput. 2017, 32, 49–67.
48. Caliskan, A.; Çil, Z.A.; Badem, H.; Karaboga, D. Regression-Based Neuro-Fuzzy Network Trained by ABC Algorithm for High-Density Impulse Noise Elimination. IEEE Trans. Fuzzy Syst. 2020, 28, 1084–1095.
49. Akay, B. A Binomial Crossover Based Artificial Bee Colony Algorithm for Cryptanalysis of Polyalphabetic Cipher. Teh. Vjesn. 2020, 27, 1825–1835.
50. Kumar, A.; Kumar, D.; Jarial, S.K. A Review on Artificial Bee Colony Algorithms and Their Applications to Data Clustering. Cybern. Inf. Technol. 2017, 17, 3–28.
51. Brajevic, I. Crossover-based artificial bee colony algorithm for constrained optimization problems. Neural. Comput. Appl. 2015, 26, 1587–1601.
52. Aslan, S.; Karaboga, D. A genetic Artificial Bee Colony algorithm for signal reconstruction based big data optimization. Appl. Soft Comput. 2020, 88, 106053.
53. Pooja, S.G. Innovative Review on Artificial Bee Colony Algorithm and Its Variants. In Advances in Computing and Intelligent Systems; Sharma, H., Govindan, K., Poonia, R., Kumar, S., El-Medany, W., Eds.; Algorithms for Intelligent Systems; Springer: Singapore, 2020; pp. 165–176.
54. Črepinšek, M.; Liu, S.-H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. 2013, 45, 1–33.
55. Zhu, G.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 2010, 217, 3166–3173.
56. Canali, C.; Lancellotti, R. GASP: Genetic Algorithms for Service Placement in Fog Computing Systems. Algorithms 2019, 12, 201.
57. Lukšan, L.; Vlček, J. Test Problems for Non-Smooth Unconstrained and Linearly Constrained Optimization; Technical Report 798; Institute of Computer Science, Academy of Sciences of the Czech Republic: Prague, Czech Republic, 2000.
58. Parsopoulos, K.E.; Vrahatis, M.N. Unified particle swarm optimization for tackling operations research problems. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, SIS 2005, Pasadena, CA, USA, 8–10 June 2005; pp. 53–59.
59. Santo, I.A.C.P.E.; Fernandes, E.M.G.P. Heuristics pattern search for bound constrained minimax problems. In Computational Science and Its Applications—ICCSA 2011; Murgante, B., Gervasi, O., Iglesias, A., Taniar, D., Apduhan, B.O., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6784, pp. 174–184.
Figure 1. Convergence graphs of SB-ABC1, SB-ABC2 and SB-ABC for the selected integer programming problems.
Figure 2. Convergence graphs of SB-ABC1, SB-ABC2 and SB-ABC for the selected minimax problems.
Table 1. Properties of the minimax test problems FM1–FM10.

Function | Dimension (D) | # f_i(x) | Desired Error Goal
FM1 (CB2) | 2 | 3 | 1.9522245
FM2 | 2 | 3 | 2
FM3 (Rosen–Suzuki) | 4 | 4 | −40.1
FM4 | 2 | 2 | 10^−4
FM5 | 10 | 10 | 10^−4
FM6 (SPIRAL) | 2 | 2 | 10^−4
FM7 (Polak 6) | 4 | 4 | −40.1
FM8 (Wong 1) | 7 | 5 | 680.9
FM9 (OET6) | 4 | 21 | 0.1
FM10 (Filter) | 9 | 41 | 0.61852848 · 10^−2
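For reference, each entry in Table 1 is an unconstrained minimax problem: the task is to minimize, over x in R^D, the pointwise maximum of the listed component functions,

min_{x ∈ R^D} F(x),   F(x) = max_{1 ≤ i ≤ m} f_i(x),

where m is the number of component functions reported in the "# f_i(x)" column. As an illustration, restated here from the standard minimax test-function literature rather than from this paper's own listing, FM1 (CB2) uses D = 2 and the three components

f_1(x) = x_1^2 + x_2^4,   f_2(x) = (2 − x_1)^2 + (2 − x_2)^2,   f_3(x) = 2 e^{x_2 − x_1},

whose minimax optimum is approximately 1.9522245, which matches the desired error goal listed for FM1 above.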
Table 2. Comparison results of the BB method, ABC and SB-ABC for the FI1–FI7 integer programming problems.

Prob | D | BB Mean | BB SD | BB SR | ABC Mean | ABC SD | ABC SR | SB-ABC Mean | SB-ABC SD | SB-ABC SR
FI1 | 5 | 1167.8 | 3659.8 | 30/30 | 376 | 64.6 | 30/30 | 216.0 | 62.05 | 30/30
FI1 | 10 | 5495.8 | 1676.3 | 30/30 | 727.3 | 64.4 | 30/30 | 381.33 | 51.62 | 30/30
FI1 | 15 | 10,177.1 | 2393.4 | 30/30 | 974 | 60.5 | 30/30 | 508.67 | 65.05 | 30/30
FI1 | 20 | 16,291.3 | 3797.9 | 30/30 | 1275.3 | 97.7 | 30/30 | 624.0 | 86.01 | 30/30
FI1 | 25 | 23,689.7 | 2574.2 | 20/30 | 1554.7 | 108.6 | 30/30 | 725.33 | 93.94 | 30/30
FI1 | 30 | 25,908.6 | 755.5 | 14/30 | 1906 | 129.9 | 30/30 | 796.67 | 77.13 | 30/30
FI2 | 5 | 139.7 | 102.6 | 30/30 | 449.3 | 56.7 | 30/30 | 239.33 | 52.53 | 30/30
FI3 | 2 | 4185.5 | 32.8 | 30/30 | 13,850 | 6711.3 | 24/30 | 3916.67 | 1773.67 | 30/30
FI4 | 4 | 316.9 | 125.4 | 30/30 | 240.7 | 79.4 | 30/30 | 90.0 | 62.34 | 30/30
FI5 | 2 | 2754 | 1030.1 | 30/30 | 193.3 | 53.5 | 30/30 | 421.33 | 163.62 | 30/30
FI6 | 2 | 211 | 15 | 30/30 | 258.7 | 113.6 | 30/30 | 140.67 | 57.38 | 30/30
FI7 | 2 | 358.6 | 14.7 | 30/30 | 106.7 | 44.8 | 30/30 | 177.33 | 130.20 | 30/30
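Throughout Tables 2–9, Mean and SD can be read as the mean and standard deviation of the number of function evaluations required over the independent runs, and SR as the success rate, i.e., the proportion of runs that reach the desired accuracy before the evaluation budget (20,000 evaluations, as the capped entries in the tables suggest) is exhausted. The following minimal Python sketch shows one common way such statistics are computed; it assumes that failed runs are counted at the full budget, and the function and variable names are illustrative rather than taken from the paper's code.

import statistics

def summarize_runs(evals_to_goal, budget=20_000):
    # evals_to_goal: one entry per run, holding the number of function
    # evaluations needed to reach the desired error goal, or None if the
    # run exhausted the budget without reaching it (assumed convention).
    counts = [budget if e is None else e for e in evals_to_goal]
    successes = sum(e is not None for e in evals_to_goal)
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts) if len(counts) > 1 else 0.0
    return mean, sd, successes, len(counts)

# Example: 30 hypothetical runs, two of which never reached the goal.
runs = [7500, 6900, None, 8100, None] + [7000] * 25
mean, sd, succ, total = summarize_runs(runs)
print(f"Mean = {mean:.1f}, SD = {sd:.2f}, SR = {succ}/{total}")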
Table 3. Comparison results of the RWMPSOg, RWMPSOl, PSOg, PSOl, MDHSA, HCSNM and SB-ABC for the FI1–FI7 integer programming problems.

Prob | Metric | RWMPSOg | RWMPSOl | PSOg | PSOl | HCSNM | MDHSA | SB-ABC
FI1 | Mean | 27,176.3 | 30,923.9 | 29,435.3 | 31,252 | 638.3 | 176.01 | 218.0
FI1 | SD | 8657 | 2405 | 42,039 | 1818 | 4.34 | 4.265 | 60.95
FI1 | SR | 50/50 | 50/50 | 34/50 | 50/50 | 50/50 | 50/50 | 50/50
FI2 | Mean | 578.5 | 773.9 | 606.4 | 830.2 | 232.64 | 152.48 | 240.8
FI2 | SD | 136.5 | 285.5 | 119 | 206 | 4.28 | 2.565 | 64.74
FI2 | SR | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50
FI3 | Mean | 6490.6 | 9292.6 | 12,681 | 11,320 | 1668.1 | 531.4 | 4034.8
FI3 | SD | 6913 | 2444 | 35,067 | 3803 | 43.2 | 30.74 | 2852.2
FI3 | SR | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50
FI4 | Mean | 215 | 218.7 | 369.6 | 390 | 174.04 | 182.74 | 107.6
FI4 | SD | 97.9 | 115.3 | 113.2 | 134.6 | 6.21 | 41.60 | 63.48
FI4 | SR | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50
FI5 | Mean | 1521.8 | 2102.9 | 1499 | 2472.4 | 884.48 | 449.12 | 425.2
FI5 | SD | 360.7 | 689.5 | 513.1 | 637.5 | 56.24 | 2.413 | 174.11
FI5 | SR | 50/50 | 50/50 | 43/50 | 50/50 | 50/50 | 50/50 | 50/50
FI6 | Mean | 110.9 | 112 | 204.8 | 256 | 155.89 | 188.3 | 144.0
FI6 | SD | 48.6 | 48.7 | 62 | 107.5 | 5.16 | 41.63 | 68.35
FI6 | SR | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50
FI7 | Mean | 242.7 | 248.9 | 421.2 | 466 | 210.3 | 192.6 | 185.2
FI7 | SD | 132.2 | 134.4 | 130.4 | 165 | 6.39 | 37.33 | 123.21
FI7 | SR | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50 | 50/50
Table 4. Comparison results of the PSO, FA, CS, GSA, WOA, HBDS and SB-ABC for the FI1–FI7 integer programming problems.

Prob | Metric | PSO | FA | CS | GSA | WOA | HBDS | SB-ABC
FI1 | Mean | 20,000 | 1617.13 | 11,880.15 | 2020 | 18,436.36 | 656.56 | 218.0
FI1 | SD | 0.00 | 114.77 | 623.41 | 112.45 | 568.47 | 88.65 | 60.95
FI2 | Mean | 17,540.17 | 834.15 | 7176.23 | 1060 | 10,134.53 | 344.22 | 240.8
FI2 | SD | 1054.56 | 146.85 | 637.75 | 78.69 | 483.25 | 43.32 | 64.74
FI3 | Mean | 20,000 | 1225.17 | 6400.25 | 5160 | 2946.63 | 1137.56 | 4034.8
FI3 | SD | 0.00 | 128.39 | 819.94 | 214.2 | 524.25 | 85.61 | 2852.2
FI4 | Mean | 16,240.36 | 476.16 | 4920.35 | 1680 | 9255.42 | 260.8 | 107.6
FI4 | SD | 1484.96 | 31.29 | 247.19 | 89.41 | 857.36 | 10.39 | 63.48
FI5 | Mean | 13,120.45 | 1315.53 | 7540.38 | 7250 | 6272.47 | 1177.12 | 425.2
FI5 | SD | 1711.83 | 113.01 | 440.82 | 425.36 | 925.35 | 111.6 | 174.11
FI6 | Mean | 1340.14 | 345.71 | 4875.35 | 1520.23 | 18,420.18 | 149.08 | 144.0
FI6 | SD | 265.21 | 35.52 | 865.11 | 231.56 | 869.25 | 8.21 | 68.35
FI7 | Mean | 1220.46 | 675.48 | 3660.45 | 1100.24 | 9248.12 | 222.91 | 185.2
FI7 | SD | 177.19 | 36.36 | 383.23 | 85.23 | 962.35 | 11.19 | 123.21
Table 5. Comparison results of the SQP, ABC and SB-ABC for the FM1–FM10 minimax problems.

Problem | SQP Mean | SQP SD | SQP SR | ABC Mean | ABC SD | ABC SR | SB-ABC Mean | SB-ABC SD | SB-ABC SR
FM1 | 4044.5 | 8116.6 | 24/30 | 7522.0 | 5486.06 | 29/30 | 964.67 | 319.07 | 30/30
FM2 | 8035.7 | 9939.9 | 18/30 | 1997.2 | 741.59 | 30/30 | 586.67 | 110.55 | 30/30
FM3 | 135.5 | 21.1 | 30/30 | 600.8 | 130.71 | 30/30 | 314.67 | 88.38 | 30/30
FM4 | 140.6 | 38.5 | 30/30 | 1854.0 | 400.80 | 30/30 | 736.67 | 114.20 | 30/30
FM5 | 611.6 | 200.6 | 30/30 | 19,022.8 | 2028.13 | 24/30 | 1614.67 | 176.86 | 30/30
FM6 | 15,684.0 | 7302.0 | 10/30 | 2215.2 | 2197.82 | 30/30 | 348.66 | 214.45 | 30/30
FM7 | 20,000 | 0.0 | 0/30 | 2986.8 | 2327.84 | 30/30 | 422.0 | 148.80 | 30/30
FM8 | 20,000 | 0.0 | 0/30 | 18,442.8 | 3707.93 | 18/30 | 7288.0 | 4827.87 | 29/30
FM9 | 4886.5 | 8488.4 | 22/30 | 3244.4 | 2562.20 | 30/30 | 852.0 | 740.80 | 30/30
FM10 | – | – | – | 20,000.0 | 0.0 | 0/30 | 6584.0 | 5648.24 | 27/30
Table 6. Comparison results of the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA and SB-ABC for the FM1–FM10 minimax problems.

Problem | Metric | HPS2 | UPSOm | RWMPSOg | HCSNM | MDHSA | SB-ABC
FM1 | Mean | 1848.7 | 1993.8 | 2415.3 | 705.62 | 1564.86 | 986.4
FM1 | SD | 2619.4 | 853.7 | 1244.2 | 14.72 | 156.89 | 470.92
FM1 | SR | 99% | 100% | 100% | 100% | 100% | 100%
FM2 | Mean | 635.8 | 1775.6 | – | 624.24 | 555.64 | 599.6
FM2 | SD | 114.3 | 241.9 | – | 20.83 | 55.33 | 124.55
FM2 | SR | 94% | 100% | – | 100% | 85% | 100%
FM3 | Mean | 141.2 | 1670.4 | 3991.3 | 906.28 | 1839.6 | 329.6
FM3 | SD | 28.4 | 530.6 | 2545.2 | 98.24 | 83.65 | 119.01
FM3 | SR | 37% | 100% | 100% | 100% | 100% | 100%
FM4 | Mean | 772.0 | 1701.6 | 2947.8 | 670.22 | 633.9 | 743.6
FM4 | SD | 60.8 | 184.9 | 257.0 | 11.07 | 63.04 | 125.77
FM4 | SR | 100% | 100% | 100% | 100% | 81% | 100%
FM5 | Mean | 1809.1 | 18,294.5 | 18,520.1 | 4442.76 | 8382.58 | 1722.0
FM5 | SD | 2750.3 | 2389.4 | 776.9 | 87.15 | 9198.26 | 332.82
FM5 | SR | 94% | 100% | 100% | 95% | 75% | 100%
FM6 | Mean | 4114.7 | 3435.5 | 1308.8 | 1103.86 | 2064.44 | 392.4
FM6 | SD | 1150.2 | 1487.6 | 505.5 | 125.36 | 73.10 | 359.93
FM6 | SR | 100% | 100% | 100% | 95% | 95% | 100%
FM7 | Mean | – | 6618.50 | – | 2629.336 | 4706.32 | 476.0
FM7 | SD | – | 2597.54 | – | 84.80 | 174.03 | 113.91
FM7 | SR | – | 100% | – | 75% | 80% | 100%
FM8 | Mean | 283.0 | 2128.5 | – | 2724.78 | 4175 | 8736.0
FM8 | SD | 123.9 | 597.4 | – | 227.24 | 96.90 | 5178.29
FM8 | SR | 64% | 100% | – | 95% | 75% | 98%
FM9 | Mean | 324.1 | 3332.5 | 4404.0 | 977.56 | 2253.5 | 934.8
FM9 | SD | 173.1 | 1775.4 | 3308.9 | 176.82 | 130.58 | 765.36
FM9 | SR | 100% | 100% | 100% | 100% | 95% | 100%
FM10 | Mean | – | – | – | – | 9432.24 | 6730.4
FM10 | SD | – | – | – | – | 156.39 | 5065.53
FM10 | SR | – | – | – | – | – | 94%
Table 7. Comparison results of the PSO, FA, CS, GSA, WOA and SB-ABC for the FM1–FM10 minimax problems.

Problem | Metric | PSO | FA | CS | GSA | WOA | SB-ABC
FM1 | Mean | 3535.46 | 1125.61 | 5375.52 | 1620.4 | 10,126.36 | 986.4
FM1 | SD | 491.66 | 189.56 | 613.35 | 126.25 | 1583.65 | 470.92
FM2 | Mean | 20,000 | 785.17 | 6150.34 | 1980.5 | 10,263.45 | 599.6
FM2 | SD | 0.00 | 31.94 | 519.65 | 253.69 | 758.58 | 124.55
FM3 | Mean | 2920.15 | 695.54 | 3745.19 | 1800.7 | 1523.36 | 329.6
FM3 | SD | 269.48 | 50.03 | 878.09 | 45.58 | 121.89 | 119.01
FM4 | Mean | 5680.17 | 782.52 | 5845.23 | 1680.4 | 10,253.58 | 743.6
FM4 | SD | 937.44 | 86.77 | 804.36 | 58.78 | 980.45 | 125.77
FM5 | Mean | 20,000 | 13,692.13 | 7895.14 | 11,800.6 | 11,458.36 | 1722.0
FM5 | SD | 0.00 | 900.12 | 1077.07 | 25.36 | 1785.36 | 332.82
FM6 | Mean | 5643.65 | 2685.25 | 11,915.24 | 1860.6 | 1235.69 | 392.4
FM6 | SD | – | 610.07 | 341.45 | 253.69 | 48.69 | 359.93
FM7 | Mean | 20,000 | 7659.45 | 20,000 | 7200.4 | 19,465.35 | 476.0
FM7 | SD | 0.00 | 583.21 | 1788.18 | 1986.25 | 2568.39 | 113.91
FM8 | Mean | 6220.25 | 8147.45 | 14,754.14 | 8500.6 | 9186.25 | 8736.0
FM8 | SD | 727.44 | 1026.22 | 1391.58 | 1453.67 | 485.79 | 5178.29
FM9 | Mean | 6680.19 | 748.17 | 6765.24 | 1440.7 | 3648.69 | 934.8
FM9 | SD | 509.34 | 98.59 | 843.49 | 245.36 | 896.47 | 765.36
FM10 | Mean | 18,125.360 | 11,124.55 | 10,436.22 | 11,254.6 | 13,242.24 | 6730.4
FM10 | SD | 2356.58 | 1254.58 | 23.15 | 2145.25 | 2536.36 | 5065.53
Table 8. Comparison results of the SB-ABC1, SB-ABC2 and SB-ABC for the FI1–FI7 integer programming problems.

Prob | D | SB-ABC1 Mean | SB-ABC1 SD | SB-ABC1 SR | SB-ABC2 Mean | SB-ABC2 SD | SB-ABC2 SR | SB-ABC Mean | SB-ABC SD | SB-ABC SR
FI1 | 5 | 284.8 | 68.30 | 100% | 427.6 | 163.60 | 100% | 218.0 | 60.95 | 100%
FI1 | 10 | 506.4 | 88.67 | 100% | 1690.8 | 653.42 | 100% | 387.6 | 49.13 | 100%
FI1 | 15 | 653.2 | 71.95 | 100% | 2896.0 | 940.17 | 100% | 511.2 | 53.54 | 100%
FI1 | 20 | 815.33 | 124.75 | 100% | 4967.33 | 1347.21 | 100% | 644.4 | 70.68 | 100%
FI1 | 25 | 893.2 | 137.78 | 100% | 6350.0 | 1369.01 | 100% | 732.8 | 82.43 | 100%
FI1 | 30 | 1014.8 | 140.96 | 100% | 8222.0 | 1491.48 | 100% | 816.8 | 76.03 | 100%
FI2 | 5 | 326.8 | 84.25 | 100% | 511.6 | 241.86 | 100% | 240.8 | 64.74 | 100%
FI3 | 2 | 7524.4 | 5462.02 | 100% | 4165.6 | 2647.41 | 100% | 4034.8 | 2852.2 | 100%
FI4 | 4 | 131.2 | 77.68 | 100% | 163.6 | 107.68 | 100% | 107.6 | 63.48 | 100%
FI5 | 2 | 472.8 | 190.06 | 100% | 1188.8 | 522.33 | 100% | 425.2 | 174.11 | 100%
FI6 | 2 | 157.2 | 64.37 | 100% | 147.6 | 70.64 | 100% | 144.0 | 68.35 | 100%
FI7 | 2 | 224.8 | 192.56 | 100% | 190.8 | 53.99 | 100% | 185.2 | 123.21 | 100%
Table 9. Comparison results of the SB-ABC1, SB-ABC2 and SB-ABC for the FM1–FM10 minimax problems.

Prob | SB-ABC1 Mean | SB-ABC1 SD | SB-ABC1 SR | SB-ABC2 Mean | SB-ABC2 SD | SB-ABC2 SR | SB-ABC Mean | SB-ABC SD | SB-ABC SR
FM1 | 1150.8 | 381.05 | 100% | 996.8 | 322.20 | 100% | 986.4 | 470.92 | 100%
FM2 | 896.8 | 217.01 | 100% | 770.8 | 169.52 | 100% | 599.6 | 124.55 | 100%
FM3 | 446.4 | 193.76 | 100% | 365.6 | 162.83 | 100% | 329.6 | 119.01 | 100%
FM4 | 1098.8 | 217.16 | 100% | 706.8 | 129.77 | 100% | 743.6 | 125.77 | 100%
FM5 | 2515.6 | 429.91 | 100% | 20,000.0 | 0.0 | 0% | 1722.0 | 332.82 | 100%
FM6 | 674.8 | 1027.67 | 100% | 4123.6 | 6755.41 | 88% | 392.4 | 359.93 | 100%
FM7 | 595.2 | 233.15 | 100% | 2196.0 | 3016.41 | 100% | 476.0 | 113.91 | 100%
FM8 | 10,949.2 | 4522.99 | 94% | 9770.4 | 6020.82 | 86% | 8736.0 | 5178.29 | 98%
FM9 | 956.8 | 689.11 | 100% | 3199.6 | 4290.61 | 98% | 934.8 | 765.36 | 100%
FM10 | 13,504.8 | 6895.94 | 54% | 6451.2 | 6825.51 | 86% | 6730.4 | 5065.53 | 94%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
