A Shuffle-Based Artificial Bee Colony Algorithm for Solving Integer Programming and Minimax Problems

Abstract: The artificial bee colony (ABC) algorithm is a prominent swarm intelligence technique due to its simple structure and effective performance. However, the ABC algorithm has a slow convergence rate when it is used to solve complex optimization problems, since its solution search equation is more of an exploration operator than an exploitation operator. This paper presents an improved ABC algorithm for solving integer programming and minimax problems. The proposed approach employs a modified ABC search operator, which exploits the useful information of the current best solution in the onlooker phase with the intention of improving its exploitation tendency. Furthermore, the shuffle mutation operator is applied to the created solutions in both bee phases to help the search achieve a better balance between the global exploration and local exploitation abilities and to provide a competitive convergence speed. The experimental results, obtained by testing on seven integer programming problems and ten minimax problems, show that the overall performance of the proposed approach is superior to the ABC. Additionally, it obtains competitive results compared with other state-of-the-art algorithms.


Introduction
A wide variety of problems from different areas can be formulated as integer programming and minimax problems. Some applications in which integer programming problems appear are system-reliability design, scheduling, capital budgeting, warehouse location, portfolio analysis, automated production systems, mechanical design, transportation and cartography [1][2][3][4][5]. Furthermore, minimax optimization problems are found in many applications, such as optimal control, engineering design, game theory, signal and data processing [6][7][8][9][10].
Since integer programming is known to be NP-hard, solving these problems is considered a challenging task. Dynamic programming and branch-and-bound (BB) are well-known exact integer programming methods [11,12]. These methods divide the feasible region into smaller sub-regions or problems into sub-problems. The main drawback of dynamic programming is that the amount of computation necessary for an optimal solution grows exponentially as the number of variables rises. Branch-and-bound techniques have a high computational cost when solving large-scale problems that require the exploration of a search tree containing hundreds of nodes [11].
Metaheuristic optimization algorithms provide high-quality solutions in an acceptable amount of time. These techniques do not make any presumptions about the problem and can be used to solve a broad class of challenging optimization problems [13][14][15][16][17][18]. One of the most notable classes of metaheuristics, swarm intelligence (SI) algorithms, is founded on imitating the collective behavior of biological agents. Particle swarm optimization (PSO) [19], artificial bee colony (ABC) [20], harmony search (HS) [21], firefly algorithm (FA) [22], gravitational search algorithm (GSA) [23], cuckoo search (CS) [24], whale optimization algorithm (WOA) [25] and bat algorithm (BA) [26] are some of the notable SI algorithms. In the last two decades, many SI algorithms have been applied to solve integer programming problems. For instance, the PSO was employed to solve integer programming problems in [2]. On standard test problems, the PSO outperformed the branch-and-bound method in most cases.
Sequential quadratic programming (SQP) and smoothing techniques are common strategies for solving minimax problems [6]. These methods perform local minimization and require derivative information for the objective function, which in most applications is not analytically available. Furthermore, SQP and smoothing techniques struggle to achieve satisfactory solutions when the objective function is discontinuous. On the other hand, metaheuristics are problem-independent optimization methods. Search operators of these methods use some randomness, which enables the algorithm to move away from a local optimum to search on a global scale [27]. Hence, metaheuristic optimization algorithms are considered an adequate alternative for minimax problems.
Since their invention, the original variants of metaheuristic algorithms have been modified to improve their performances. In [28], a memetic PSO algorithm that integrates local search methods into the basic PSO was developed. The local and global variants of the memetic PSO scheme were tested on minimax and integer programming problems. The experimental results showed that the memetic PSO outperformed the corresponding variants of the PSO algorithm in the majority of benchmarks. A hybrid cuckoo search algorithm with the Nelder-Mead method, named HCSNM, for solving integer programming and minimax problems was proposed in [29]. In [29], it was concluded that the use of the Nelder-Mead method enhances the convergence speed of the basic CS technique. A hybrid bat algorithm (HBDS) to solve integer programming was proposed in [30]. The HBDS incorporates direct search methods into the BA to enhance its intensification ability. Recently, a new hybrid harmony search algorithm with the multidirectional search method, called MDHSA, was developed to enhance the performance of the standard HS algorithm for solving integer programming and minimax problems [31].
The efficiency of the basic ABC algorithm for integer programming problems was investigated in [11]. To our knowledge, the ABC has not been tested on minimax problems in any previous study. Therefore, investigating the performance of the standard ABC algorithm for solving minimax problems, and proposing suitable modifications to further improve its performance for integer programming and minimax problems, remains an open research problem.
Motivated by these reasons, this paper presents a shuffle-based artificial bee colony algorithm (SB-ABC) for solving integer programming and minimax problems. Although ABC has achieved success in different research fields, it was noticed that the exploitation ability of the ABC is deficient because of a randomly picked neighborhood food source in its solution search equation [32]. Therefore, the ABC algorithm has a slow convergence rate when it is applied to solve complex optimization problems. In order to enhance the exploitation ability of the ABC algorithm, the proposed approach employs a modified ABC search operator, which exploits the useful information of the current best solution in the onlooker phase. Furthermore, in certain iterations, the shuffle mutation operator is applied to the newly created solutions in both bee phases. In that way, the proposed algorithm provides useful diversity in the population, which is crucial in finding a good balance between exploitation and exploration. The SB-ABC algorithm is tested on seven integer programming problems and ten minimax problems. The obtained results for integer programming problems are compared to those of the ABC, BB method and 12 other metaheuristics. For minimax problems, the achieved results are compared to those of the ABC, SQP method and 11 other algorithms. Experimental results indicated that the SB-ABC algorithm obtained highly competitive results in comparison with the other algorithms presented in the literature.
The paper is organized as follows. In Section 2, definitions of minimax and integer programming problems are given. The standard ABC is presented in Section 3. The proposed shuffle-based artificial bee colony approach is explained in Section 4. In Section 5, the optimization results are presented and analyzed. In Section 6, the influence of the proposed modifications on the performance of the SB-ABC algorithm is discussed. Section 7 provides concluding remarks.

Problem Statements
An integer programming problem is a discrete optimization problem in which all of the variables are limited to integer values. A general integer programming problem can be stated as [11]:

min F(x), x ∈ S ⊆ Z^D, (1)

where S is the feasible region and Z denotes the set of integers. A problem where some variables are constrained to integers while some variables are not is a mixed integer programming problem. A special instance of the integer programming problem is that in which the variables are restricted to be either 0 or 1. This case is called the 0-1 programming problem or the binary integer programming problem.

Minimax optimization deals with a composition of an inner maximization problem and an outer minimization problem. A general form of the minimax problem can be stated as [31]:

min F(x), (2)

where

F(x) = max f i (x), i = 1, . . . , m. (3)

Furthermore, a nonlinear programming problem, with inequality constraints, of the form

min F(x), subject to g i (x) ≥ 0, i = 2, . . . , m, (4)

can be transformed to a minimax problem as follows:

min max f i (x), i = 1, . . . , m, (5)

where

f 1 (x) = F(x), f i (x) = F(x) − α i g i (x), α i > 0, for i = 2, . . . , m. (6)

It has been shown that when α i is large enough, the optimum point of the minimax problem coincides with the optimum point of the nonlinear programming problem [6].
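To make the penalty-based transformation concrete, here is a small sketch in Python. The one-dimensional problem, the function names and the value alpha = 100 are illustrative assumptions, not taken from the paper:

```python
# Hypothetical example: converting the constrained problem
#   min F(x) = x^2   subject to  g(x) = x - 1 >= 0
# into the minimax form  min max{f_1(x), f_2(x)}  with
#   f_1(x) = F(x)  and  f_2(x) = F(x) - alpha * g(x)  for a large alpha.

def F(x):
    return x * x

def g(x):
    return x - 1.0          # feasible when g(x) >= 0

def minimax_objective(x, alpha=100.0):
    f1 = F(x)
    f2 = F(x) - alpha * g(x)   # penalizes constraint violation
    return max(f1, f2)
```

At the constrained optimum x* = 1 both pieces agree (F(1) = 1), while any infeasible point such as x = 0.5 is heavily penalized by the second piece, so the minimax optimum coincides with the constrained one.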

Artificial Bee Colony Algorithm
Foraging behavior of a honey bee swarm motivated the development of the ABC algorithm [20]. The population of artificial bees is made of employed bees, onlooker bees and scout bees. One-half of the population consists of employed bees; onlookers and scouts make up the other half. In the basic ABC, each food source represents a possible solution for the problem, and the number of employed bees is equal to the number of food sources. All bees that are presently exploiting a food source are employed bees. The onlooker bees aim to choose promising food sources from those discovered by the employed bees, according to a probability proportional to the quality of the food source. After the selection of the food source, the onlookers further seek food in the vicinity of the selected food source. The scout bees are transformed from employed bees that abandon their unpromising food sources to seek new ones.
The control parameters of the basic ABC algorithm are the size of the population (SP), which is equal to the sum of employed and onlooker bees, the maximum cycle number (MCN), and parameter limit, which represents the number of trials for abandoning the food source. In the initialization phase, the ABC creates randomly distributed initial population, which includes SP solutions. Following this step, three phases-employed, onlooker and scout-are repeated for a certain number of iterations. After each iteration, the best-discovered solution is saved.
Each employed bee seeks a better food source in the employed phase. The search operator used to create a new food source v i from the old one x i is given by:

v ij = x ij + ϕ ij (x ij − x lj ), (7)

where j is a randomly picked index of a parameter, x l is a randomly selected food source that is different from x i and ϕ ij is a uniform random number in (−1, 1). Greedy selection between the old and new food sources decides whether the old food source will be replaced by the new one.
In the onlooker phase, each onlooker bee chooses a food source according to the probability that is proportional to the fitness value. The same search strategy, which is given by Equation (7), is used to generate a candidate food source from the picked one. Greedy selection between old and new food sources decides whether the old food source (solution) will be updated. In the scout phase, a solution that cannot be updated through a predetermined number of trials is replaced with a randomly created solution.
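The initialization step, the three bee phases and the greedy selection described above can be sketched in one compact routine. This is a minimal illustrative implementation of the basic ABC for minimization, not the authors' Java code: the function names, the clipping of candidates to [lo, hi] and the seed are assumptions, while the fitness mapping (1/(1 + f) for f ≥ 0, else 1 + |f|) is the commonly used ABC fitness for minimization:

```python
import random

def abc_minimize(f, D, lo, hi, SP=20, MCN=200, limit=50, seed=1):
    """Minimal sketch of the basic ABC loop (illustrative, not the paper's code)."""
    rng = random.Random(seed)
    n = SP // 2                                   # one food source per employed bee
    foods = [[rng.uniform(lo, hi) for _ in range(D)] for _ in range(n)]
    vals = [f(x) for x in foods]
    trials = [0] * n
    b = min(range(n), key=lambda i: vals[i])
    best_x, best_v = list(foods[b]), vals[b]

    def fitness(v):                               # standard ABC fitness for minimization
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def search(i):                                # Eq. (7)-style neighbourhood move
        nonlocal best_x, best_v
        l = rng.choice([k for k in range(n) if k != i])
        j = rng.randrange(D)
        v = list(foods[i])
        v[j] = foods[i][j] + rng.uniform(-1, 1) * (foods[i][j] - foods[l][j])
        v[j] = min(max(v[j], lo), hi)
        fv = f(v)
        if fv < vals[i]:                          # greedy selection
            foods[i], vals[i], trials[i] = v, fv, 0
            if fv < best_v:
                best_x, best_v = list(v), fv
        else:
            trials[i] += 1

    for _ in range(MCN):
        for i in range(n):                        # employed bee phase
            search(i)
        fits = [fitness(v) for v in vals]
        s = sum(fits)
        for _ in range(n):                        # onlooker phase: roulette-wheel choice
            r, acc, i = rng.uniform(0, s), 0.0, 0
            for k in range(n):
                acc += fits[k]
                if acc >= r:
                    i = k
                    break
            search(i)
        i = max(range(n), key=lambda k: trials[k])
        if trials[i] > limit:                     # scout phase: abandon stale source
            foods[i] = [rng.uniform(lo, hi) for _ in range(D)]
            vals[i], trials[i] = f(foods[i]), 0
    return best_x, best_v
```

For example, `abc_minimize(lambda x: sum(t * t for t in x), D=2, lo=-5.0, hi=5.0)` should drive a 2-D sphere function close to its minimum at the origin.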
Many variants of ABC for solving continuous optimization problems have been proposed [33][34][35][36][37][38][39][40]. For instance, an enhanced version of ABC, which introduces modifications related to an elite strategy and dimension learning, was introduced in [33]. An ABC variant that uses novel search strategies in the employed and onlooker bee phases was developed in [34]. In [35], a hybrid method that combines the firefly algorithm and a multi-strategy ABC was developed for solving numerical optimization problems. An enhanced ABC based on multi-strategy fusion was proposed to improve the search ability of ABC with a small population [40].
Although the standard ABC was initially invented for continuous optimization problems, the modified variants have also been proposed for combinatorial and discrete problems [41][42][43][44][45][46]. Akay and Karaboga modified the ABC algorithm in order to solve integer programming problems. In this version of the ABC, a new control parameter called modification rate (MR) is employed in its solution search strategy [11]. The modification rate parameter controls the possible modifications of optimization parameters. In [41], an ABC algorithm with a modified choice function for the traveling salesman problem is developed. Two novel ABC algorithms in which a multiple colonies strategy is adopted are proposed to solve the vehicle routing problem [43]. The ABC technique that integrates the initial solutions, an elitism strategy, recovery and local search schemes is a newly developed variant of ABC for solving the operating room scheduling problem [45]. An improved ABC algorithm for solving the strength-redundancy allocation problem is presented in [46]. In general, application fields of the ABC method are data mining, neural networks, image processing, cryptanalysis, data clustering and engineering [47][48][49][50][51][52][53].

The Proposed Approach: SB-ABC
Important characteristics of each metaheuristic algorithm are exploitation and exploration [54]. Exploitation refers to the process of visiting areas of a search space in the neighborhood of previously found satisfactory solutions. Exploration is the process of generating solutions with ample diversity and far from the current solutions. A balanced combination of these conflicting processes is essential for successful optimization performance. According to Equation (7), the new individual is generated by moving the old solution to a randomly picked solution, and the direction of the search is random. Consequently, the solution search equation given by Equation (7) has good exploration tendency, but it is not promising at exploitation. Since too much exploration tends to decrease the convergence speed of the algorithm [35], the proposed approach uses modified ABC search equations in employed and onlooker bee phases. To obtain useful diversity in the population, in each bee phase, the shuffle mutation operator is applied to new candidate solutions.
To create a new solution v i from the solution x i in the employed bee phase, the SB-ABC algorithm uses a search strategy that is described by [11]:

v ij = x ij + ϕ ij (x ij − x kj ), if R ij < MR,
v ij = x ij , otherwise, (8)

where j ∈ {1, 2, . . . , D} and D is the number of optimization parameters, i.e., the dimension of the problem. In Equation (8), x k is a randomly selected food source that is different from x i , ϕ ij is a uniform random number in (−1, 1), R ij is a randomly chosen real number in the range (0, 1) and MR is the modification rate control parameter, whose value is in the range (0, 1). A higher value of the MR parameter enables more parameters of the parent solution to be changed, with the aim of increasing the convergence speed of the basic ABC algorithm.
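A sketch of the MR-controlled search strategy of Equation (8) follows; the function name is illustrative and the loop structure is an assumption based on the description above:

```python
import random

def sbabc_employed_search(x_i, x_k, MR=0.8):
    """Eq. (8) sketch: each parameter j is perturbed with probability MR,
    otherwise it is copied unchanged from the parent solution x_i."""
    v = []
    for j in range(len(x_i)):
        if random.random() < MR:          # R_ij < MR: modify this parameter
            phi = random.uniform(-1.0, 1.0)
            v.append(x_i[j] + phi * (x_i[j] - x_k[j]))
        else:                             # keep the parent's parameter
            v.append(x_i[j])
    return v
```

With MR = 0 the candidate is an exact copy of the parent, and with MR close to 1 almost every parameter is perturbed, which is what drives the faster convergence discussed above.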
In the onlooker bee phase of the SB-ABC algorithm, the solutions are chosen according to the probability given by [51]:

p i = 0.9 · fit i / maxfit + 0.1, (9)

where the best fitness value in the population is denoted by maxfit, while fit i marks the fitness value of the ith solution.

Inspired by the gbest-guided artificial bee colony (GABC) algorithm [55], a variant of the ABC proposed for numerical optimization, we modify the search equation described by Equation (8) as follows:

v ij = x ij + ϕ ij (x ij − x kj ) + φ ij (y j − x ij ), if R ij < MR,
v ij = x ij , otherwise, (10)

where j ∈ {1, 2, . . . , D} and D is the number of optimization parameters, i.e., the dimension of the problem. In Equation (10), v i is a new candidate solution, x i is the parent solution, ϕ ij is a uniform random number in (−1, 1), φ ij is a uniform random number in the segment [0, 1.5], x k is a randomly selected food source that is different from x i , y j is the jth parameter of the best solution found so far, and R ij is a randomly chosen real number within (0, 1).

According to Equation (10), the third term can move the new candidate solution towards the global best solution. Hence, the modified search strategy given by Equation (10) can enhance the exploitation tendency of the basic ABC algorithm. The right amount of population diversity is of great significance in achieving a proper balance between exploitation and exploration. In the SB-ABC algorithm, exploitation is increased by using the modified search equation in the onlooker bee phase. As a consequence, the differences among individuals of the population decrease, since the search process is focused on a local region of good solutions. To promote diversity at certain stages of the search process, a new parameter called the random permutation production interval (RPPI) is introduced in the SB-ABC. This parameter is used as follows: after every RPPIth cycle, the shuffle mutation operator is applied to the new candidate solutions in the employed and onlooker bee phases.
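The selection probability of Equation (9) and the gbest-guided operator of Equation (10) can be sketched as follows. The function names are illustrative, and the constant C = 1.5 bounding φ ij is taken from the description above:

```python
import random

def selection_probability(fit_i, maxfit):
    """Eq. (9): probability proportional to fitness, floored at 0.1."""
    return 0.9 * fit_i / maxfit + 0.1

def sbabc_onlooker_search(x_i, x_k, y_best, MR=0.8, C=1.5):
    """Eq. (10) sketch: the third (gbest-guided) term pulls the candidate
    towards y_best, the best solution found so far."""
    v = []
    for j in range(len(x_i)):
        if random.random() < MR:          # R_ij < MR: modify this parameter
            phi = random.uniform(-1.0, 1.0)
            psi = random.uniform(0.0, C)  # phi_ij in [0, 1.5]
            v.append(x_i[j] + phi * (x_i[j] - x_k[j])
                     + psi * (y_best[j] - x_i[j]))
        else:
            v.append(x_i[j])
    return v
```

Note that Equation (9) guarantees every solution a baseline probability of 0.1, so even poor solutions can occasionally be selected by onlookers.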
The shuffle mutation is a mutation operator in which the mutated solution takes the components of the original solution with a permutation applied to them [56]. Because it is applied only every RPPI iterations, the shuffle mutation operator enables a better exploration of solutions without dominating the search.
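A minimal sketch of the shuffle mutation (the function name is illustrative):

```python
import random

def shuffle_mutation(v):
    """Shuffle mutation sketch: return the solution with its components
    rearranged according to a random permutation of the indices."""
    rp = list(range(len(v)))      # the random permutation rp of {1, ..., D}
    random.shuffle(rp)
    return [v[rp[j]] for j in range(len(v))]
```

The mutated solution contains exactly the same component values as the original, only in a different order, which is what injects diversity without inventing new parameter values.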
The proposed approach computes a value trial i for each solution x i during the search process. The value trial i counts the number of consecutive cycles in which the solution x i has not been improved and is used to decide on abandonment. In the scout phase of the SB-ABC algorithm, the solution with the highest trial value, if that value is greater than the value of the limit control parameter, is exchanged with a randomly generated solution.
The pseudo-code of the employed bee phase is presented in Algorithm 1, while the procedure of the onlooker bee phase is described in Algorithm 2. The input of Algorithm 1 involves the current solutions x i with corresponding values trial i , i = 1, 2, . . . , SP/2, current cycle value, values of MR and RPPI parameters, and the objective function f . The output of Algorithm 1 is the updated population of solutions x i and trial i values, i = 1, 2, . . . , SP/2, which will be employed in the onlooker bee phase. The input of Algorithm 2 includes the current population of solutions x i with corresponding values trial i , i = 1, 2, . . . , SP/2, current cycle value, values of MR and RPPI parameters, and the objective function f . The output of Algorithm 2 is the updated population of solutions x i and trial i values, i = 1, 2, . . . , SP/2, which will be used in the next iteration. The pseudo-code of the SB-ABC algorithm is presented in Algorithm 3. The input of Algorithm 3 includes the values of SP, MCN, MR, limit and RPPI control parameters and the objective function f . The output of Algorithm 3 is the best solution found.
It is important to mention that the proposed SB-ABC introduces two modifications in comparison with the ABC algorithm adjusted for integer programming problems: the use of the modified ABC search operator described by Equation (10) and the application of the shuffle mutation operator. The crucial difference between the two approaches lies in their different balance of exploitation and exploration. Exploitation is enhanced in the onlooker phase by using the global best solution to guide the search process. Useful diversity of the population and better exploration of solutions are achieved on the global level by applying the shuffle mutation operator every RPPIth iteration.
The SB-ABC algorithm employs three specific control parameters to manage the search process: the modification rate MR, limit, and RPPI, which determines the cycles in which the shuffle mutation operator is applied to candidate solutions. It also uses the standard control parameters of all population-based metaheuristics, the population size and the maximum number of cycles. In order to solve integer programming problems, the SB-ABC rounds the parameter values to the closest integer after evolution according to Equations (8) and (10). Solutions are also rounded after the initialization and scout phases of the algorithm. Therefore, they are treated as integer numbers in all operations.
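The rounding step can be sketched as follows (the helper name is hypothetical; note that Python's built-in round applies banker's rounding to exact .5 ties, which is one reasonable interpretation of "closest integer"):

```python
def round_solution(v):
    """Round each parameter of a candidate solution to the closest integer,
    as done after Eq. (8)/(10), initialization and the scout phase.
    Caveat: Python's round() breaks exact .5 ties towards the even integer."""
    return [int(round(p)) for p in v]
```

For example, `round_solution([1.4, -2.6, 3.0])` yields `[1, -3, 3]`.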

Algorithm 1 Employed bee phase of the SB-ABC algorithm
Algorithm 2 Onlooker bee phase of the SB-ABC algorithm
for i = 1 to SP/2 do
    Calculate the probability p i by Equation (9);
end for
t = 1; i = 1;
while t ≤ SP/2 do
    if rand(0, 1) < p i then
        t = t + 1;
        Create candidate solution v i for x i by Equation (10);
        if (cycle mod RPPI = 0) then
            Create a random permutation rp of {1, 2, . . . , D};
            Apply the shuffle mutation to v i using rp;
        end if
        Apply greedy selection between x i and v i and update trial i ;
    end if
    i = (i mod (SP/2)) + 1;
end while

Algorithm 3 The SB-ABC algorithm
Create and evaluate the initial population; cycle = 1;
repeat
    Perform the employed bee phase (Algorithm 1);
    Perform the onlooker bee phase (Algorithm 2);
    Scout phase: one solution with the highest trial value that is greater than the abandonment threshold, if such solution occurs, is exchanged with a randomly generated solution;
    Save the current best solution;
    cycle = cycle + 1;
until (cycle = MCN)

Experimental Study
The performance of the SB-ABC algorithm is evaluated through seven integer programming problems and ten minimax problems widely used in the literature. The proposed algorithm is implemented in Java, and it was run on a PC with an Intel(R) Core(TM) i5-4460 3.2 GHz processor. In order to show the efficiency of the SB-ABC algorithm, it is compared with several algorithms that were previously applied to solve these problems. In the next subsections, brief descriptions of the used benchmark problems and results of a comparison between the SB-ABC and other state-of-the-art approaches are presented.

Benchmark Problems
In this section, the integer programming and minimax optimization test problems are described. To test the performance of the SB-ABC algorithm on integer programming problems, seven problems widely used in the literature are employed. The mathematical models of these problems can be found in [11,28,31]. These problems are presented below:

Test problem FI 1 is defined in [28]:

FI 1 (x) = ‖x‖ 1 = |x 1 | + |x 2 | + . . . + |x D |,

where D is the dimension of the problem or number of optimization parameters. The global minimum is FI 1 (x * ) = 0.

Test problem FI 2 is defined in [28]:

FI 2 (x) = x T x,

where D is the dimension of the problem. The global minimum is FI 2 (x * ) = 0.

Test problem FI 3 is defined in [28]:

FI 3 (x) = −(15 27 36 18 12) x + x T A x,

where

A = ( 35 −20 −10 32 −10
     −20 40 −6 −31 32
     −10 −6 11 −6 −10
      32 −31 −6 38 −20
     −10 32 −10 −20 31 ).

The global minimum is FI 3 (x * ) = −737.

Test problem FI 4 is defined in [28]:

FI 4 (x) = (9x 1 ² + 2x 2 ² − 11)² + (3x 1 + 4x 2 ² − 7)².

The global minimum is FI 4 (x * ) = 0.

Test problem FI 5 is defined in [28]:

FI 5 (x) = (x 1 + 10x 2 )² + 5(x 3 − x 4 )² + (x 2 − 2x 3 )⁴ + 10(x 1 − x 4 )⁴.

The global minimum is FI 5 (x * ) = 0.

Test problem FI 6 is defined in [28]:

FI 6 (x) = 2x 1 ² + 3x 2 ² + 4x 1 x 2 − 6x 1 − 3x 2 .

The global minimum is FI 6 (x * ) = −6.

Test problem FI 7 is defined in [28]:

FI 7 (x) = −3803.84 − 138.08x 1 − 232.92x 2 + 123.08x 1 ² + 203.64x 2 ² + 182.25x 1 x 2 .

The global minimum is FI 7 (x * ) = −3833.12.
To investigate the efficiency of the SB-ABC algorithm on minimax problems, ten benchmark functions are considered [6,28,31]. The first five are listed below together with their desired error goals; their mathematical models can be found in the cited references:

Test problem FM 1 is defined in [31,57]. The desired error goal for this problem is FM 1 (x * ) = 1.9522245.
Test problem FM 2 is defined in [31]. The desired error goal for this problem is FM 2 (x * ) = 2.
Test problem FM 3 is defined in [6,57]. The desired error goal for this problem is FM 3 (x * ) = −40.1.
Test problem FM 4 is defined in [31]. The desired error goal for this problem is FM 4 (x * ) = 10 −4 .
Test problem FM 5 is defined in [31]. The desired error goal for this problem is FM 5 (x * ) = 10 −4 .
Five other test problems were selected from [57]. The names of the minimax benchmark problems, the dimension of each problem, the number of f i (x) functions and the desired error goals are reported in Table 1.

The General Performance of the SB-ABC for Integer Programming Problems
Because the SB-ABC is an improved variant of the ABC, this section presents a comparison between the SB-ABC and the ABC algorithm adjusted to solve integer programming problems, conducted on seven integer programming problems. The branch-and-bound (BB) method, a common traditional technique, is also included in the comparison with the proposed approach.
The preliminary testing of the SB-ABC was done with the aim of obtaining suitable combinations of parameter values. The SP parameter was set to 20. This value was detected to be a proper selection for all executed tests; increasing it would raise the computational cost without any enhancement in the obtained results. Our tests verified the previous finding that a value of 0.8 for the MR parameter is a good choice for solving these optimization problems [11]. Additionally, it was experimentally determined that a value of 50 for the parameter limit and a value of 3 for the parameter RPPI are suitable for the SB-ABC algorithm. It was observed that significantly lower or higher values of the limit parameter can deteriorate the obtained results. Higher values of the RPPI parameter would lead to less frequent use of the shuffle mutation operator and consequently weaker performance of the SB-ABC algorithm. In the SB-ABC, during the initialization step, SP/2 solutions are evaluated, and there are SP/2 employed bees, SP/2 onlookers and a maximum of one scout bee per iteration. Therefore, the maximum number of function evaluations for the SB-ABC is SP/2 + (SP + 1) · MCN. The maximum number of function evaluations executed by the SB-ABC for all benchmarks was set to 20,000, and the SB-ABC was terminated when the global minimum was reached.
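The evaluation budget SP/2 + (SP + 1) · MCN can be checked with a few lines (the helper name is illustrative):

```python
def max_function_evaluations(SP, MCN):
    """Budget sketch: SP/2 evaluations at initialization, then SP employed
    and onlooker evaluations plus at most one scout evaluation per cycle."""
    return SP // 2 + (SP + 1) * MCN
```

With SP = 20, a cycle count of MCN = 951 yields 10 + 21 · 951 = 19,981 evaluations, the largest value of MCN that stays within the 20,000-evaluation budget.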
The results of the BB method and ABC algorithm were taken from their original papers [2,11]. For these comparisons, in the BB method and ABC algorithm, the maximum number of function evaluations was set to 25,000. When an accuracy of 10 −6 was achieved, these methods were terminated. The BB method, ABC and SB-ABC algorithms are conducted for 30 independent runs for each benchmark problem.
The following metrics are used to estimate the performances of the BB, ABC and SB-ABC. The convergence speed of each algorithm is compared by recording the mean number of function evaluations (mean) required to reach the acceptable value. If the mean value is smaller, the convergence speed is faster. Since SI algorithms are stochastic, the obtained mean results are not the same in each run. To examine the stability of each method, standard deviation (SD) values are measured. The performance of an algorithm is more stable if the standard deviation value is lower. The success rate (SR) is used as a metric for robustness or reliability of methods. This rate is defined as the ratio of successful runs in the total number of executed runs. A run is considered successful if an algorithm obtains a solution for which the value of the objective function is less than the corresponding acceptable value. If the value of SR is greater, the reliability of the algorithm is better. In Table 2, the mean, corresponding standard deviation (SD) values and SR values of the BB method, ABC and SB-ABC for the benchmark problems FI 1 with 5, 10, 15, 20, 25 and 30 variables and test problems FI 2 -FI 7 over 30 runs are given. The best mean results are indicated in bold.
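The three metrics can be computed as in the following sketch (an illustrative helper, not the authors' measurement code; here the mean and SD are taken over all runs):

```python
import statistics

def summarize_runs(evals_per_run, budget):
    """Compute the mean, standard deviation (SD) and success rate (SR, in %)
    over a list of per-run function-evaluation counts. A run is counted as
    successful if it reached the target within the evaluation budget."""
    successes = [e for e in evals_per_run if e <= budget]
    mean = statistics.mean(evals_per_run)
    sd = statistics.pstdev(evals_per_run)               # population SD
    sr = 100.0 * len(successes) / len(evals_per_run)
    return mean, sd, sr
```

For instance, three runs finishing after 100, 200 and 300 evaluations under a budget of 250 give a mean of 200, an SD of about 81.65 and an SR of about 66.7%.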
As shown in Table 2, with respect to the SR results reached by these methods, the SB-ABC performs the most reliably since, for each test case, the obtained SR result of the SB-ABC is 100%. The BB method performance is less robust than the SB-ABC for problem FI 1 with 30 variables, since it achieved only 14 successful runs out of 30, while in the remaining test cases, both approaches obtained the same SR results. The SB-ABC and ABC algorithms obtained the same SR results on all test problems, with the exception of problem FI 3 , where the ABC performance was less robust. With respect to the mean results, from Table 2, it can be observed that the SB-ABC performs better than its rivals in the majority of cases. To be exact, the SB-ABC is better than the BB method and ABC in 12 and 11 test cases, respectively. On the other hand, the BB method has better mean results for problem FI 2 , while the ABC outperformed the SB-ABC for test functions FI 5 and FI 7 . With respect to the standard deviation results, from Table 2, it can be seen that the SB-ABC performance is more stable than the BB and ABC methods in most cases.

Comparison against Other State-of-the-Art Algorithms for Integer Programming Problems
To further demonstrate the efficiency of the SB-ABC, it is benchmarked against 12 other metaheuristic algorithms that were previously successfully used to solve integer programming problems. These algorithms are the basic PSO and its four variants RWMPSOg, RWMPSOl, PSOg, PSOl [28], standard cuckoo search (CS), firefly algorithm (FA), gravitational search algorithm (GSA), whale optimization algorithm (WOA), the hybrid cuckoo search algorithm with the Nelder-Mead method (HCSNM) [29], the hybrid bat algorithm (HBDS) [30] and the recently proposed hybrid harmony search algorithm with the multidirectional search method (MDHSA) [31].
The results obtained by the RWMPSOg, RWMPSOl, PSOg, PSOl are taken from [28], the results reached by the HCSNM are taken from [29], the results achieved by HBDS are taken from [30], while the results of the MDHSA and basic PSO, CS, FA, GSA and WOA are taken from [31]. In Table 3, the mean, corresponding standard deviation values and SR values of the RWMPSOg, RWMPSOl, PSOg, PSOl, HCSNM, MDHSA and SB-ABC for the benchmark problem FI 1 with 5 variables and test problems FI 2 -FI 7 over 50 runs are given. Table 4 presents the mean and standard deviation values obtained by the PSO, FA, CS, GSA, WOA, HBDS and SB-ABC for problem FI 1 with 5 variables and test problems FI 2 -FI 7 over 50 runs. The best mean results are in bold. The metaheuristics used for comparison with the SB-ABC also performed the maximum number of function evaluations of 20,000. Since the results of these 12 algorithms are achieved over 50 runs, the statistical results of the SB-ABC over 50 runs are presented in Tables 3 and 4. The results from Tables 3 and 4 show that the proposed algorithm obtained better mean results on the majority of benchmark problems in comparison with its competitors. Precisely, the SB-ABC is better than RWMPSOg, RWMPSOl, PSOg, PSOl, MDHSA, HCSNM, PSO, FA, CS, GSA, WOA and HBDS in six, six, seven, seven, five, four, seven, seven, seven, seven, seven, and six test problems, respectively. In contrast, the SB-ABC is outperformed by the RWMPSOg, RWMPSOl, PSOg, PSOl, MDHSA, HCSNM, PSO, FA, CS, GSA, WOA and HBDS in one, one, zero, zero, two, three, zero, zero, zero, zero, zero and one test problems, respectively. From the standard deviation values presented in Tables 3 and 4, it can be observed that the proposed SB-ABC has lower standard deviation values on the majority of benchmark problems in comparison with RWMPSOg, RWMPSOl, PSOg, PSOl, PSO, CS, GSA and WOA.
On the other hand, the HCSNM, MDHSA, FA and HBDS have lower standard deviations compared to the SB-ABC for most of the cases. In addition, the SR results demonstrate that the SB-ABC achieved a 100% success rate on all benchmark problems.

The General Performance of the SB-ABC for Minimax Problems
In this section, the performance of the SB-ABC for solving minimax problems is investigated. The performance of the SB-ABC is compared to that of the SQP method and the standard ABC algorithm. A fair comparison is ensured since the SQP, ABC and SB-ABC algorithms all employed a maximum of 20,000 function evaluations. A run is counted as successful when the desired error goal is reached within the maximum number of function evaluations.
The specific parameter settings of the SB-ABC are kept the same, as mentioned in Section 5.2. Since the standard ABC had not previously been tested on minimax problems, we evaluated its performance on problems FM 1 -FM 10 . The ABC employed the following parameter settings: SP = 20, MR = 0.8 and limit = 5 · SP · D. These values of control parameters were used in the standard ABC adjusted to solve integer programming problems [11]. The results of the SQP method were taken from the respective paper [2].
In Table 5, the mean, corresponding standard deviation (SD) values and SR values of the SQP method, ABC and SB-ABC for the benchmark problems FM 1 -FM 10 over 30 runs are presented. The best mean results are in bold. The mark (-) for FM 10 in the SQP method means that the results are not reported in its original paper. From Table 5, it can be noticed that the SB-ABC converges faster to the global minimum in comparison with the SQP method and ABC for the majority of test problems. From the obtained mean values, it can be observed that the SB-ABC has better performance than the SQP and ABC methods in six and ten test problems, respectively. On the other hand, the SB-ABC is outperformed by the SQP and ABC on three and zero benchmarks, respectively. Furthermore, the SR results indicate that the SB-ABC performance is more robust in comparison with the SQP on six test problems (FM 1 , FM 2 , FM 6 , FM 7 , FM 8 and FM 9 ), while both methods reached the same SR results for the rest of the benchmarks. With respect to the standard deviation results, from Table 5, it can be noticed that the SB-ABC performance is more stable than the SQP and ABC methods in most cases. Furthermore, from Table 5, it can be seen that the SB-ABC performs more reliably than the ABC on four benchmark problems (FM 1 , FM 5 , FM 8 and FM 10 ), while both algorithms achieved the same SR results for the remaining problems.

Comparison against Other State-of-the-Art Algorithms for Minimax Problems
In order to further examine the efficiency of the SB-ABC for minimax problems, its performance is compared to that of 10 other algorithms that were previously successfully applied to these problems. These methods are the heuristic pattern search algorithm HPS2, the basic PSO and its two variants RWMPSOg and UPSOm [58], the HCSNM [29], the MDHSA [31], CS, FA, GSA and WOA.
The results of the RWMPSOg are taken from [28], the results of the HPS2 from [59], the results of the UPSOm from [58] and the results of the HCSNM from [29], while the results of the MDHSA, PSO, FA, CS, GSA and WOA are taken from [31]. In Table 6, the mean, standard deviation and SR values of the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA and SB-ABC for the benchmark problems FM1-FM10 over 50 runs are given. Table 7 presents the mean and standard deviation values obtained by the PSO, FA, CS, GSA, WOA and SB-ABC for benchmark problems FM1-FM10 over 50 runs. The best mean results are in bold. The metaheuristics used for comparison with the SB-ABC also performed a maximum of 20,000 function evaluations. The SB-ABC was configured with the specific parameter values described in Section 5.2. Since the results of these 10 algorithms were obtained over 50 runs, the statistical results of the SB-ABC over 50 runs are presented in Tables 6 and 7. In Table 6, the mark (-) indicates that the results are not presented in the corresponding paper.
The results from Tables 6 and 7 show that the proposed algorithm obtained better mean results on the majority of benchmark problems in comparison with its competitors. Concretely, the SB-ABC outperformed the HPS2, UPSOm, RWMPSOg, HCSNM, MDHSA, PSO, FA, CS, GSA and WOA on 5, 8, 6, 6, 7, 9, 8, 10, 9 and 10 test problems, respectively. In contrast, the SB-ABC is outperformed by the same algorithms on 3, 1, 0, 3, 3, 1, 2, 0, 1 and 0 benchmark problems, respectively. From the standard deviation values presented in Tables 6 and 7, it can be seen that the SB-ABC has lower standard deviation values in most cases in comparison with the UPSOm, RWMPSOg, CS and WOA. On the other hand, the HPS2, HCSNM, MDHSA, PSO, FA and GSA have lower standard deviations than the SB-ABC in most cases. With respect to the SR results reached by these methods, the SB-ABC performs the same as or more reliably than the HPS2, UPSOm, RWMPSOg, HCSNM and MDHSA on these minimax problems.

Discussion
The impact of the introduced modifications on the SB-ABC is examined in this section. Seven integer programming problems and ten minimax problems were solved by two variants of the SB-ABC. The results obtained by each variant are compared with those of the full SB-ABC approach. These variants are:

1. Variant 1: To examine the effectiveness of employing the modified ABC operator in the onlooker bee phase, given by Equation (10), an SB-ABC version that uses the standard ABC search equation is tested. The label SB-ABC1 is used for this variant.

2. Variant 2: To investigate the effectiveness of employing the shuffle mutation operator in the employed and onlooker bee phases, an SB-ABC version that does not include this operator is tested. The label SB-ABC2 is used for this variant.
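The two components isolated by these variants can be sketched as follows. The best-guided search below is a common gbest-style formulation and only approximates the role of Equation (10), whose exact coefficients are not reproduced here; the shuffle mutation is shown as a random permutation of a solution's components. All names are illustrative assumptions:

```python
import random

def best_guided_candidate(x_i, x_k, x_best):
    """Onlooker-phase candidate guided by the current best solution
    (a common gbest-style formulation; the paper's Equation (10)
    may differ in its exact coefficients). One randomly chosen
    dimension of x_i is perturbed toward x_best and a neighbor x_k."""
    j = random.randrange(len(x_i))       # single dimension to update
    v = list(x_i)
    phi = random.uniform(-1.0, 1.0)      # neighbor-based perturbation
    psi = random.uniform(0.0, 1.5)       # attraction toward the best
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (x_best[j] - x_i[j])
    return v

def shuffle_mutation(x):
    """Shuffle mutation: randomly permute the order of the solution's
    components, injecting diversity without changing their values."""
    v = list(x)
    random.shuffle(v)
    return v
```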
Each SB-ABC version was run 50 times on each test problem, with a maximum of 20,000 function evaluations per run. The tested methods were configured with the specific parameter values described in Section 5.2. The mean, standard deviation and SR values obtained by the SB-ABC1, SB-ABC2 and SB-ABC for the seven integer programming problems and ten minimax problems are presented in Tables 8 and 9. A result in boldface denotes the best mean result. The convergence graphs of the SB-ABC1, SB-ABC2 and SB-ABC on four selected integer programming problems and four selected minimax problems are given in Figures 1 and 2, respectively.
From the mean results presented in Table 8, it can be seen that the SB-ABC outperforms the SB-ABC1 and SB-ABC2 versions on all integer programming benchmark problems. With respect to the SR values, each algorithm achieved a 100% success rate on all of the FI1-FI7 integer programming problems. From the standard deviation values presented in Table 8, it can be observed that the SB-ABC has lower standard deviations in most cases in comparison with the SB-ABC1 and SB-ABC2 versions.

Compared with the SB-ABC1, it can be seen from Table 9 that the proposed algorithm reached better mean results and the same or better SR results for all FM1-FM10 minimax problems. When comparing the SB-ABC with the SB-ABC2, it can be noticed from Table 9 that the SB-ABC obtained better mean values for eight minimax problems and slightly worse mean results for the remaining two benchmarks (FM4 and FM10). From the standard deviation values presented in Table 9, the SB-ABC has lower standard deviations in most cases than the SB-ABC1 and SB-ABC2 variants. According to the SR results presented in Table 9, the SB-ABC outperformed the SB-ABC2 on five test problems (FM5, FM6, FM8, FM9 and FM10), while the two algorithms achieved the same results on the remaining benchmarks. As shown in Figures 1 and 2, the SB-ABC converges faster to the optimum on the selected problems in comparison with its two variants.
These observations indicate that each introduced modification contributes to the satisfactory performance of the SB-ABC. The shuffle mutation operator provides useful diversity and consequently helps the SB-ABC locate favorable areas of the search space, while the modified ABC operator enhances the exploitation ability of the algorithm. Combining these modifications significantly improves the convergence speed and robustness of the SB-ABC algorithm.

Conclusions
In this paper, a novel shuffle-based artificial bee colony algorithm (SB-ABC) was proposed for solving integer programming and minimax problems. The proposed algorithm employs the shuffle mutation operator and a modified ABC search strategy with the aim of improving the exploitation tendency of the algorithm and providing a competitive convergence speed. The proposed approach was applied to seven integer programming and ten minimax problems taken from the literature. The SB-ABC algorithm obtained highly competitive results in comparison with the standard ABC, the BB method and 12 other metaheuristic algorithms in solving integer programming problems. Compared with the standard ABC, the SQP method and 10 other state-of-the-art algorithms, the SB-ABC showed better performance on the majority of the minimax test problems.
The effects of the introduced modifications, namely the shuffle mutation operator and the modified ABC search operator, have been investigated. It was experimentally validated that each introduced modification is of great importance in achieving the satisfactory performance of the SB-ABC with respect to convergence speed and robustness. In the proposed SB-ABC method, the balance between the global exploration and local exploitation abilities is addressed through a suitable configuration of the control parameters. As in many other swarm intelligence techniques, the problem of discovering appropriate values for the control parameters also exists in the SB-ABC algorithm. Extending the SB-ABC method with a self-adaptive control mechanism to reach a better exploration-exploitation balance during distinct search phases is a promising direction for future study. Developing a hybrid algorithm that incorporates operators of other well-established metaheuristic methods into the SB-ABC for solving large-scale integer programming and minimax problems will also be examined in future work. Another possible way of creating a hybrid approach is to employ certain metaheuristic methods as local optimizers, while the SB-ABC algorithm performs the global search. In addition, the application of the proposed SB-ABC approach to combinatorial optimization problems will be investigated.