Article

The Importance of Transfer Function in Solving Set-Union Knapsack Problem Based on Discrete Moth Search Algorithm

1
School of Economics and Management, China University of Geosciences, Beijing 100083, China
2
Key Laboratory of Carrying Capacity Assessment for Resource and Environment, Ministry of Natural Resources, Beijing 100083, China
3
School of Information Engineering, Hebei GEO University, Shijiazhuang 050031, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 17; https://doi.org/10.3390/math7010017
Submission received: 1 September 2018 / Revised: 10 December 2018 / Accepted: 10 December 2018 / Published: 24 December 2018
(This article belongs to the Special Issue Evolutionary Computation)

Abstract

The moth search (MS) algorithm, originally proposed to solve continuous optimization problems, is a novel bio-inspired metaheuristic algorithm. At present, little attention has been paid to using MS to solve discrete optimization problems. One of the most common and efficient ways to discretize MS is to use a transfer function, which is in charge of mapping a continuous search space to a discrete one. In this paper, twelve transfer functions divided into three families, S-shaped (named S1, S2, S3, and S4), V-shaped (named V1, V2, V3, and V4), and other shapes (named O1, O2, O3, and O4), are combined with MS, and twelve discrete versions of the MS algorithm are thereby proposed for solving the set-union knapsack problem (SUKP). Three groups of fifteen SUKP instances are employed to evaluate the importance of these transfer functions. The results show that O4 is the best transfer function when combined with MS to solve the SUKP. The importance of the transfer function in terms of improving the quality of solutions and the convergence rate is demonstrated as well.

1. Introduction

The knapsack problem (KP) [1] is still considered one of the most challenging and interesting classical combinatorial optimization problems, because it is NP-hard and has many important applications in practice. As an extension of the standard 0–1 knapsack problem (0–1 KP) [2], the set-union knapsack problem (SUKP) [3] is a novel KP model recently introduced in [4,5]. The SUKP finds many practical applications such as financial decision making [4], data stream compression [6], flexible manufacturing machines [3], and public key prototypes [7].
The classical 0–1 KP is one of the simplest KP models, in which each item has a unique value and weight. In contrast, the SUKP is built on a set of items S = {U1, U2, U3, …, Um} and a set of elements U = {u1, u2, u3, …, un}, where each item is associated with a subset of elements. In the SUKP, each item has a nonnegative profit and each element has a nonnegative weight. The goal is to maximize the total profit of a subset of items S* ⊆ S such that the total weight of the corresponding elements does not exceed the knapsack capacity C. Hence, the SUKP is more complicated and more difficult to handle than the standard 0–1 KP. Thus far, only a few researchers have studied this problem despite its practical importance and NP-hard character. For example, Goldschmidt et al. applied a dynamic programming (DP) algorithm to the SUKP [3]. However, with an exact algorithm, a satisfactory solution usually cannot be obtained in polynomial time. Afterwards, Ashwin [4] proposed an approximation algorithm, A-SUKP, for the SUKP. Obviously, A-SUKP also faces the inevitable trade-off between achieving a high-quality solution and exponential runtime. Recently, He et al. [5] presented a binary artificial bee colony algorithm (BABC) to solve the SUKP, and comparative studies were conducted among BABC, A-SUKP, and binary differential evolution (DE) [8]. The results verified that BABC outperformed the A-SUKP method. Ozsoydan et al. [9] proposed a swarm intelligence-based algorithm for the SUKP and designed an effective mutation procedure. Although this method does not require transfer functions, it lacks generality. Therefore, it is urgent to find an efficient metaheuristic algorithm to address the SUKP, whether from the perspective of academic research or practical application.
As a relatively novel nature-inspired metaheuristic algorithm, the moth search (MS) algorithm was recently developed for continuous optimization by Wang [10]. Computational experiments have shown that MS is both effective and efficient on unconstrained continuous optimization problems compared with five state-of-the-art metaheuristic algorithms. Because of its relative novelty, extensive research on MS is still scarce, especially on discrete versions of the algorithm. Feng et al. presented a binary moth search algorithm (BMS) for the discounted {0–1} knapsack problem (DKP) [11].
As is well known, a metaheuristic algorithm is usually discretized in one of two ways: direct discretization or indirect discretization. Direct discretization is usually achieved by modifying the evolutionary operators of the original algorithm to solve a particular discrete problem. This method depends on the algorithm used and the problem solved; obviously, its disadvantages are a lack of versatility and complicated operation. Indirect discretization, in contrast, establishes a mapping relationship between continuous space and discrete space. Concretely speaking, it is usually achieved by an appropriate transfer function that converts real-valued variables into discrete variables. Many discrete versions of swarm intelligence algorithms using transfer functions have been proposed to solve various optimization problems. Discrete binary particle swarm optimization [12], the discrete firefly algorithm [13], and the binary harmony search algorithm [14] are among the most typical. The literature offers many kinds of transfer functions, such as the sigmoid function [12], the tanh function [15], etc. However, most existing metaheuristics consider only one transfer function, and little research concentrates on the importance of transfer functions in solving discrete problems. Only a few studies [16,17] have investigated the efficiency of multiple transfer functions.
In this paper, twelve principal transfer functions are used, and twelve new discrete MS algorithms are thereby proposed to solve the SUKP. These functions include four S-shaped transfer functions [16,17], named S1, S2, S3, and S4; four V-shaped transfer functions [16,17], named V1, V2, V3, and V4; and four other-shaped transfer functions (the angle modulation method [18,19], nearest integer method [20,21], normalization method [22], and rectified linear unit method [23]), named O1, O2, O3, and O4, respectively. Combining these twelve transfer functions with the MS algorithm yields twelve discrete MS algorithms, named MSS1, MSS2, MSS3, MSS4, MSV1, MSV2, MSV3, MSV4, MSO1, MSO2, MSO3, and MSO4, respectively.
The remainder of the paper is organized as follows. In Section 2, we briefly introduce the SUKP problem and MS algorithm. The families of transfer functions and repair optimization mechanism are presented in Section 3. In Section 4, the twelve discrete MS algorithms are compared to shed light on how the transfer functions affect the performance of the algorithm. After that, the best algorithm (MSO4) is compared with five state-of-the-art methods on fifteen SUKP instances. Finally, we draw conclusions and suggest some directions for future research.

2. Background

To describe discrete MS algorithm for the SUKP, we first explain the mathematical model of SUKP and then introduce the MS algorithm.

2.1. Set-Union Knapsack Problem

The set-union knapsack problem (SUKP) [3,4] is a variant of the classical 0–1 knapsack problem (0–1 KP). More formally, the SUKP can be defined as follows: given a set of elements U = {u1, u2, u3, …, un} and a set of items S = {U1, U2, U3, …, Um} such that S covers U, i.e., U_i ⊆ U (i = 1, 2, 3, …, m) and ∪_{i=1}^{m} U_i = U. Each item U_i has a value p_i > 0, and each element u_j (j = 1, 2, 3, …, n) has a weight w_j > 0. Suppose that set A ⊆ S consists of the items packed into a knapsack with capacity C. Then, the profit of A is defined as P(A) = Σ_{U_i ∈ A} p_i, and the weight of A is defined as W(A) = Σ_{u_j ∈ ∪_{U_i ∈ A} U_i} w_j. The objective of the SUKP is to find a subset A that maximizes the total value P(A) subject to the total weight W(A) ≤ C. The mathematical model of the SUKP can then be formulated as follows:
$$\max\ P(A) = \sum_{U_i \in A} p_i \quad (1)$$
$$\text{subject to}\quad W(A) = \sum_{u_j \in \bigcup_{U_i \in A} U_i} w_j \le C,\quad A \subseteq S \quad (2)$$
where pi (i = 1, 2, 3, …, m), wj (j = 1, 2, 3, …, n), and C are all positive integers.
Recently, an integer programming model was proposed by He et al. [5] so that the SUKP can be solved easily by metaheuristic algorithms; the new mathematical model of the SUKP can be defined as follows:
$$\max\ f(Y) = \sum_{i=1}^{m} y_i p_i \quad (3)$$
$$\text{subject to}\quad W(A_Y) = \sum_{u_j \in \bigcup_{U_i \in A_Y} U_i} w_j \le C \quad (4)$$
Obviously, all 0–1 vectors Y = [y1, y2, y3, …, ym] ∈ {0, 1}^m are potential solutions of the SUKP. A solution satisfying the constraint of Equation (4) is a feasible solution; otherwise, it is an infeasible solution. Here, A_Y = {U_i | y_i ∈ Y, y_i = 1, 1 ≤ i ≤ m} ⊆ S; that is, y_i = 1 if and only if U_i ∈ A_Y.
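To make the model concrete, the objective (3) and the weight in constraint (4) can be computed directly from the item–element incidence matrix. Below is a minimal sketch in Python; the toy instance and the function names are illustrative, not taken from the paper's benchmark data:

```python
import numpy as np

def sukp_objective(y, profits):
    """f(Y) = sum of profits of the selected items (Equation (3))."""
    return int(np.dot(y, profits))

def sukp_weight(y, relation, weights):
    """W(A_Y): weight of the union of elements covered by the selected items.

    relation is the m x n 0-1 matrix with entry (i, j) = 1 iff element j
    belongs to item i, so the union is an OR over the selected rows.
    """
    covered = relation[np.asarray(y, dtype=bool)].sum(axis=0) > 0
    return int(weights[covered].sum())

# Toy instance: 3 items over 4 elements (illustrative numbers only).
M = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
p = np.array([10, 7, 8])
w = np.array([3, 4, 2, 5])
y = [1, 0, 1]                       # pack items 1 and 3
print(sukp_objective(y, p))         # 18
print(sukp_weight(y, M, w))         # union {u1, u2, u3, u4} -> 3+4+2+5 = 14
```

Note that, unlike the 0–1 KP, deselecting an item does not necessarily reduce W(A_Y), since shared elements may still be covered by other selected items.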

2.2. Moth Search Algorithm

The MS algorithm [10] is a novel metaheuristic algorithm inspired by the phototaxis and Lévy flights of moths in nature, which are the two most representative characteristics of moths. The MS is akin to other population-based swarm intelligence algorithms. However, unlike most population-based metaheuristics, such as the genetic algorithm (GA) [24,25] and particle swarm optimization (PSO) [26,27], which maintain a single population, MS divides the whole population into two subpopulations according to fitness, namely subpopulation1 and subpopulation2.
The MS starts its evolutionary process by randomly generating n moth individuals. Each moth individual represents a candidate solution to the corresponding problem with a specific fitness function. In MS, two operators are employed: the Lévy flights operator and the straight flight operator. Correspondingly, individuals in subpopulation1 and subpopulation2 are updated by performing the Lévy flights operator and the straight flight operator, respectively.
  • Lévy flights: each individual i in subpopulation1 flies around the best individual in the form of Lévy flights. The resulting new solution is calculated based on Equations (5)–(7):
$$x_i^{t+1} = x_i^t + \alpha L(s) \quad (5)$$
$$\alpha = S_{\max} / t^2 \quad (6)$$
$$L(s) = \frac{(\beta - 1)\,\Gamma(\beta - 1)\,\sin\!\left(\frac{\pi(\beta - 1)}{2}\right)}{\pi s^{\beta}} \quad (7)$$
    where x_i^t and x_i^{t+1} denote the positions of moth i at generations t and t + 1, respectively; α denotes the scale factor related to the specific problem; S_max is the maximum walk step, which takes the value 1.0 in this paper; L(s) represents the step drawn from Lévy flights; and Γ(x) is the gamma function. In this paper, β = 1.5; s can be regarded as the position of the moth individual in the solution space, and s^β is the β-th power of s.
  • Straight flight: each individual i in subpopulation2 flies towards the light source in a straight line. The resulting new solution is formulated as Equation (8):
$$x_i^{t+1} = \begin{cases} \lambda \times \left( x_i^t + \varphi \times (x_{best}^t - x_i^t) \right) & \text{if } rand > 0.5 \\ \lambda \times \left( x_i^t + \frac{1}{\varphi} \times (x_{best}^t - x_i^t) \right) & \text{otherwise} \end{cases} \quad (8)$$
    where λ and φ represent the scale factor and acceleration factor, respectively; x_best^t is the best individual at generation t; and rand is a random number uniformly distributed in (0, 1).
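The two operators can be sketched as follows. The paper specifies only the density of L(s), so the Mantegna-style sampling below is a common stand-in for drawing Lévy steps rather than the paper's exact procedure; the parameter values follow Section 4 (Smax = 1.0, β = 1.5, φ = 0.618), and λ = 1.0 is assumed:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
beta, S_max, phi, lam = 1.5, 1.0, 0.618, 1.0   # lam (lambda) assumed = 1.0

def levy_flight(x, t):
    """Equations (5)-(6): move by a Levy step scaled by alpha = S_max / t^2."""
    # Mantegna's algorithm, a standard way to sample heavy-tailed Levy steps.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, x.size)
    v = rng.normal(0.0, 1.0, x.size)
    step = u / np.abs(v) ** (1 / beta)
    alpha = S_max / t ** 2
    return x + alpha * step

def straight_flight(x, x_best):
    """Equation (8): fly towards the best moth, undershooting or overshooting."""
    if rng.random() > 0.5:
        return lam * (x + phi * (x_best - x))
    return lam * (x + (1.0 / phi) * (x_best - x))

x = rng.uniform(-5.0, 5.0, 10)
x_best = rng.uniform(-5.0, 5.0, 10)
x_levy = levy_flight(x, t=3)         # subpopulation1 update
x_line = straight_flight(x, x_best)  # subpopulation2 update
```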

3. Discrete MS Optimization Method for SUKP

In this section, we describe the newly proposed discrete MS algorithms for the SUKP. The main purpose of extending the MS algorithm to the SUKP is to investigate the significant role of transfer functions in improving solution quality and convergence rate. The basic MS algorithm was initially proposed for continuous optimization problems, while the SUKP is a discrete optimization problem with constraints. Therefore, extending MS to the SUKP involves three key elements, namely, the discretization method, solution representation, and constraint handling. These three elements are described in detail below.

3.1. Transfer Functions

The transfer function is a major component of a discrete MS algorithm; therefore, it deserves special attention and research. In this section, twelve transfer functions are introduced. According to the shape of the transfer function curve, we divide them into three groups: S-shaped transfer functions [12], V-shaped transfer functions [15], and other-shaped (O-shaped) transfer functions [19,21]. As described above, each group consists of four functions, named Si, Vi, and Oi (i = 1, 2, 3, 4), respectively. These transfer functions are presented in Table 1 and Figure 1.
As stated in the literature [16,17], a transfer function defines the probability that an element of the position vector of a moth individual changes from 0 to 1, and vice versa. Therefore, an appropriate transfer function should ensure that a real-valued vector in a continuous search space is mapped to the value 1 in a binary search space with suitable probability. Suppose that applying the transfer function T(x) returns a value y (y = 1 or y = 0) through a mapping method. The probability of a transfer function producing the value 1 (PR) is displayed in Figure 2. Three groups of items, namely, 100 items, 300 items, and 500 items, were selected to compute the PR value:
$$PR = \frac{\sum_{i=1}^{N} y_i}{N} \times 100\% \quad (9)$$
where N represents the number of items. The values on the vertical axis in Figure 2 are averages of PR over 100 independent runs.
As shown in Figure 2, the four S-shaped transfer functions have similar PR values, which are close to 0.5. However, the PR values of the four V-shaped transfer functions differ considerably. V2 has the best PR value, while the PR value of V3 is less than 0.2. It seems that V3 combined with MS should show poor performance. Similarly, V4 also demonstrates unsatisfactory performance, with a PR value of less than 0.25. Of the four other-shaped transfer functions, O1, O3, and O4 obtain similar PR values, close to 0.5. The PR value of O2 is slightly smaller than that of O1, O3, and O4. In sum, according to this preliminary analysis of PR values, it seems that V3, V4, and O2 are not suitable for combining with MS to solve binary optimization problems.
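The PR statistic is easy to reproduce. The sketch below estimates PR for one representative function from each family; the formulas for S1 and V1 are assumed from the transfer-function literature cited above ([16,17]), with the paper's Table 1 being authoritative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed forms of two representative transfer functions.
S1 = lambda x: 1.0 / (1.0 + np.exp(-2.0 * x))                        # S-shaped (sigmoid)
V1 = lambda x: np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * x))  # V-shaped

def pr_value(transfer, n_items=500, runs=100, a=5.0):
    """Average fraction of bits mapped to 1 over independent runs."""
    total = 0.0
    for _ in range(runs):
        x = rng.uniform(-a, a, n_items)            # random real-valued moth
        y = rng.random(n_items) <= transfer(x)     # stochastic binarization
        total += y.mean()
    return total / runs

print(round(pr_value(S1), 2))   # close to 0.5, as reported for the S-family
```

Because the sigmoid is symmetric about 0 and the search range [−a, a] is symmetric, the expected PR of S1 is exactly 0.5, matching the figure's reading for the S-shaped family.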

3.2. Solution Representation

The basic MS is a real-valued algorithm, and each moth individual is represented as a real-valued vector; its two main operators are defined in continuous space. However, the SUKP is a discrete optimization problem with constraints, and a solution is a binary vector. In this paper, the most general and simplest method, mapping real-valued vectors into binary ones by transfer functions, is adopted. Concretely speaking, a real-valued vector X = [x1, x2, …, xm] ∈ [−a, a]^m still evolves in continuous space. Here, m is the number of items and a is a positive real value, with a = 5.0 in this paper. Then, a transfer function T(x) is used to map X into a binary vector Y = [y1, y2, …, ym] ∈ {0, 1}^m. According to the features of these transfer functions, the three mapping methods are as follows.
The first mapping method: Choose a transfer function from S1–S4, V1–V4, and O3.
$$y_i = \begin{cases} 1 & \text{if } rand() \le T(x_i) \\ 0 & \text{otherwise} \end{cases} \quad (10)$$
where rand( ) is a random number in (0, 1). In Figure 1, it can be observed that the S-shaped transfer functions, the V-shaped transfer functions, and O3 return a real number between 0 and 1. Therefore, comparing rand( ) with T(xi) yields either 1 or 0. The mapping procedure is shown in Table 2.
The second mapping method: Choose the transfer function O2.
$$y_i = T(x_i) \quad (11)$$
The third mapping method: Choose either O1 or O4 as the transfer function.
$$y_i = \begin{cases} 1 & \text{if } T(x_i) > 0 \\ 0 & \text{otherwise} \end{cases} \quad (12)$$
Then, the quality of any feasible solution Y is evaluated by the objective function f of the SUKP. Given a potential solution Y = [y1, y2, …, ym], the objective function value f(Y) is defined by
$$f(Y) = \sum_{i=1}^{m} y_i p_i \quad (13)$$
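The three mapping methods above can be summarized in a few lines. In this sketch, a sigmoid stands in for S1 and a ReLU for O4; both are assumed forms for illustration, and the paper's Table 1 defines the actual functions:

```python
import numpy as np

rng = np.random.default_rng(1)

def map_stochastic(x, T):
    """First mapping method: y_i = 1 iff rand() <= T(x_i)."""
    return (rng.random(x.shape) <= T(x)).astype(int)

def map_direct(x, T):
    """Second mapping method (O2): the transfer function itself returns a bit."""
    return np.asarray(T(x), dtype=int)

def map_threshold(x, T):
    """Third mapping method (O1, O4): threshold the transfer value at zero."""
    return (T(x) > 0).astype(int)

S1 = lambda v: 1.0 / (1.0 + np.exp(-2.0 * v))   # an S-shaped choice
O4 = lambda v: np.maximum(0.0, v)               # ReLU, assumed form of O4

x = rng.uniform(-5.0, 5.0, 8)
y_sto = map_stochastic(x, S1)    # bits decided probabilistically
y_thr = map_threshold(x, O4)     # with ReLU this reduces to the sign of x
```

With the ReLU form, the third method selects exactly the positive components of X, which keeps the fraction of ones near 0.5 for a symmetric search range.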

3.3. Repair Mechanism and Greedy Optimization

Clearly, the SUKP is an important kind of combinatorial optimization problem with constraints. Due to the constraints, the feasible search space of the decision variables becomes irregular, which increases the difficulty of finding the optimal solution. Among the many constraint-handling techniques, repairing infeasible solutions is a common method for combinatorial optimization problems. Michalewicz [28] introduced an evolutionary system based on repair technology. Obviously, a repairing technique depends on the specific problem, and different repairing methods must be designed for different problems. Consequently, He et al. [5] designed a repairing and optimization algorithm (named S-GROA) for the SUKP, which can not only repair infeasible solutions but also further optimize feasible solutions. On the basis of S-GROA [5], a quadratic greedy repair and optimization strategy (QGROS) was proposed by Liu et al. [29]. In this paper, QGROS is adopted. The preprocessing phase of QGROS can be summarized as follows:
(1)
Compute the frequency dj of the element j (j = 1, 2, 3, …, n) in the subsets U1, U2, U3, …, Um.
(2)
Calculate the unit weight Ri of the item i (i = 1, 2, 3, …, m).
$$R_i = \sum_{j \in U_i} (w_j / d_j) \quad (14)$$
(3)
Record the profit density of each item in S according to PDi.
$$PD_i = p_i / R_i \quad (i = 1, 2, 3, \ldots, m) \quad (15)$$
(4)
Sort all the items in non-ascending order of PDi (i = 1, 2, 3, …, m) and record the index values in an array H[1…m].
(5)
Define A_Y = {U_i | y_i ∈ Y ∧ y_i = 1, 1 ≤ i ≤ m} for any binary vector Y = [y1, y2, …, ym] ∈ {0, 1}^m.
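The preprocessing steps (1)–(4) above can be sketched compactly. The matrix representation and variable names below are illustrative, and ties in profit density are broken arbitrarily by the sort:

```python
import numpy as np

def qgros_preprocess(relation, profits, weights):
    """Steps (1)-(4): element frequencies, unit weights, profit densities, ordering."""
    d = relation.sum(axis=0)                     # (1) frequency d_j of element j
    R = (relation * (weights / d)).sum(axis=1)   # (2) R_i = sum_{j in U_i} w_j / d_j
    PD = profits / R                             # (3) profit density PD_i = p_i / R_i
    H = np.argsort(-PD)                          # (4) item indexes, non-ascending PD
    return d, R, PD, H

# Toy data: 2 items over 3 elements; element u2 is shared by both items.
M = np.array([[1, 1, 0],
              [0, 1, 1]])
p = np.array([6.0, 4.0])
w = np.array([2.0, 2.0, 2.0])
d, R, PD, H = qgros_preprocess(M, p, w)
# d = [1, 2, 1]; R = [3, 3]; PD = [2.0, 1.33...]; H = [0, 1]
```

Dividing each w_j by its frequency d_j spreads the weight of a shared element across the items containing it, which is what makes PD_i a fairer greedy criterion than raw profit per weight.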
The pseudocode of QGROS [29] is outlined in Algorithm 1.
Algorithm 1. QGROS algorithm for SUKP.
  Begin
    Step 1: Input: the candidate solution Y = [y1, y2, …, ym] ∈ {0, 1}^m and the array H[1…m].
    Step 2: Initialization: the m-dimensional binary vector Z = [0, 0, …, 0].
    Step 3: Greedy repair stage
      For i = 1 to m do
        If (yH[i] = 1 and W(AZ ∪ {UH[i]}) ≤ C)
          zH[i] = 1 and AZ = AZ ∪ {UH[i]}.
        End if
      End for
      Y ← Z.
    Step 4: Quadratic greedy stage
      Excluding the elements that have already been packed into the knapsack,
      recalculate dj (j = 1, 2, 3, …, n), Ri (i = 1, 2, 3, …, m), and H[1…m].
    Step 5: Optimization stage
      For i = 1 to m do
        If (yH[i] = 0 and W(AY ∪ {UH[i]}) ≤ C)
          yH[i] = 1 and AY = AY ∪ {UH[i]}.
        End if
      End for
    Step 6: Output: Y = [y1, y2, …, ym] and f(Y).
  End.
From Algorithm 1, we can observe that QGROS consists of three stages. The first stage determines, for the items marked for packing in the potential solution, whether the capacity constraint is met; items that would violate the constraint are removed, so all solutions are feasible after this stage. The second stage recalculates the frequency of each element, the unit weight of each item, and the array H[1…m]. The third stage optimizes the remaining items by loading appropriate items into the knapsack so as to maximize the use of the remaining capacity; items not selected in the feasible solution but satisfying the constraints are loaded. Hence, after this stage, all solutions remain feasible and the quality of the solutions is improved.
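The repair and optimization stages can be sketched as below. For brevity, this sketch omits the quadratic re-ranking of Step 4 and simply reuses the initial profit-density order H; the toy data are illustrative:

```python
import numpy as np

def union_weight(select, relation, weights):
    """W(A): weight of the union of elements covered by the selected items."""
    if not select.any():
        return 0.0
    return float(weights[relation[select].sum(axis=0) > 0].sum())

def repair_and_optimize(y, H, relation, weights, C):
    """Greedy repair (Step 3) then greedy optimization (Step 5) of Algorithm 1."""
    z = np.zeros(len(y), dtype=bool)
    for i in H:                                   # repair: keep items while feasible
        if y[i] == 1:
            z[i] = True
            if union_weight(z, relation, weights) > C:
                z[i] = False                      # violates capacity, drop it
    for i in H:                                   # optimize: load further items
        if not z[i]:
            z[i] = True
            if union_weight(z, relation, weights) > C:
                z[i] = False
    return z.astype(int)

M = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
w = np.array([3.0, 4.0, 2.0, 5.0])
H = np.array([0, 2, 1])                           # some profit-density order
print(repair_and_optimize(np.array([1, 1, 1]), H, M, w, C=9))   # [1 1 0]
```

Both stages preserve feasibility by construction: an item stays selected only if W(A) remains within C after the tentative inclusion.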

3.4. The Main Scheme of Discrete MS for SUKP

Having discussed all the components of the discrete MS algorithms in detail, the complete procedure is outlined in Algorithm 2.

3.5. Computational Complexity of the Discrete MS Algorithm

Computational complexity is the main criterion for evaluating the running time of an algorithm, and it can be calculated from the algorithm's structure and implementation. From Algorithm 2, it can be seen that the computing time in each iteration depends mainly on the number of moths, the problem dimension, and the sorting of items and moth individuals. The computational complexity is mainly determined by Steps 1–4. In Step 1, since the Quicksort algorithm is used, the average and worst-case computational costs are O(m log m) and O(m^2), respectively. In Step 2, the initialization of N moth individuals costs O(N × m) = O(m^2). In Step 3, the fitness calculation of N moth individuals costs O(N). In Step 4, the Lévy flight operator has time complexity O(N/2 × m) = O(m^2), the straight flight operator has time complexity O(N/2 × m) = O(m^2), QGROS has time complexity O(m × n), and sorting the population with Quicksort has average and worst-case time complexities of O(N log N) and O(N^2), respectively. Consequently, the overall computational complexity is O(m log m) + O(m^2) + O(N) + O(m^2) + O(m^2) + O(m × n) + O(N log N) = O(m^2), where m is the number of items and N is the number of moths.
Algorithm 2. The main procedure of discrete MS algorithm for SUKP.
  Begin
    Step 1: Sorting.
Sort all items in S in non-increasing order of PDi (1 ≤ i ≤ m), and
record the indexes of the items in array H[1…m].
    Step 2: Initialization.
      Set the maximum iteration number MaxGen and iteration counter G = 1; β
      = 1.5; the acceleration factor φ = 0.618.
Generate N moth individuals randomly {X1, X2, …, XN}, Xi ∈ [−a, a]^m.
      Divide the whole population into two subpopulations with equal size:
      subpopulation1 and subpopulation2, according to their fitness.
       Calculate the corresponding binary vector Yi = T(Xi) by using transfer
       functions (i = 1, 2, …, N).
      Perform repair and optimization with QGROS.
    Step 3: Fitness calculation.
Calculate the initial fitness of each individual, f(Yi), 1 ≤ i ≤ N.
    Step 4: While G < MaxGen do
       Update subpopulation 1 by using Lévy flight operator.
Update subpopulation 2 by using the straight flight operator.
        Calculate the corresponding binary vector Yi = T(Xi) by using transfer
        functions (i = 1, 2, …, N).
        Perform repair and optimization with QGROS.
        Evaluate the fitness of the population and record the <Xgbest, Ygbest>.
        G = G + 1.
        Recombine the two newly-generated subpopulations.
        Sort the population by fitness.
        Divide the whole population into subpopulation 1 and subpopulation 2.
    Step 5: End while
    Step 6: Output: the best results.
  End.
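Putting the pieces together, the main loop of Algorithm 2 can be sketched as below. The operators are deliberately simplified (a Cauchy draw stands in for the Lévy step, and the repair hook is left to the caller), so this is an illustrative skeleton rather than the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def discrete_ms(fitness, transfer, repair, m, N=20, max_gen=100, a=5.0, phi=0.618):
    """Skeleton of the discrete MS loop: binarize, repair, evaluate, fly, repeat."""
    def binarize(x):                              # first mapping method + repair
        return repair((rng.random(m) <= transfer(x)).astype(int))

    X = rng.uniform(-a, a, (N, m))                # Step 2: real-valued moths
    Y = np.array([binarize(x) for x in X])
    fit = np.array([fitness(y) for y in Y])       # Step 3: initial fitness
    for t in range(1, max_gen):                   # Step 4: main loop
        order = np.argsort(-fit)                  # sort by fitness, best first
        X, Y, fit = X[order], Y[order], fit[order]
        best = X[0].copy()
        for i in range(N // 2):                   # subpopulation1: Levy-like step
            X[i] = X[i] + (1.0 / t ** 2) * rng.standard_cauchy(m)
        for i in range(N // 2, N):                # subpopulation2: straight flight
            step = phi if rng.random() > 0.5 else 1.0 / phi
            X[i] = X[i] + step * (best - X[i])
        X = np.clip(X, -a, a)
        Y = np.array([binarize(x) for x in X])
        fit = np.array([fitness(y) for y in Y])
    i_best = int(np.argmax(fit))
    return Y[i_best], fit[i_best]

# Toy run: maximize the number of ones with no real constraint (repair = identity).
S1 = lambda x: 1.0 / (1.0 + np.exp(-2.0 * x))
y_best, f_best = discrete_ms(lambda y: int(y.sum()), S1, lambda y: y,
                             m=10, N=10, max_gen=30)
```

In a real SUKP run, `fitness` would be the objective f(Y) and `repair` would be QGROS, so every evaluated individual is feasible.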

4. Results and Discussion

In this section, we present experimental studies on the proposed discrete MS algorithms for solving SUKP.
Test instances: Thirty SUKP instances in three groups were recently presented by He et al. [5]. Note that the set of items S = {U1, U2, U3, …, Um} is represented as a 0–1 matrix M = (rij) with m rows and n columns, where rij = 1 if and only if uj ∈ Ui (i = 1, 2, …, m; j = 1, 2, …, n). Each instance is therefore characterized by four factors: (1) m, the number of items; (2) n, the number of elements; (3) the density of 1s in the matrix M, α ∈ {0.10, 0.15}; and (4) the ratio of C to the total weight of all elements, β ∈ {0.75, 0.85}. According to the relationship between m and n, three types of instances are generated. The first group: 10 SUKP instances with m > n, m ∈ {100, 200, 300, 400, 500} and n ∈ {85, 185, 285, 385, 485}, named F01–F10, respectively. The second group: 10 SUKP instances with m = n, m ∈ {100, 200, 300, 400, 500} and n ∈ {100, 200, 300, 400, 500}, named S01–S10, respectively. The third group: 10 SUKP instances with m < n, m ∈ {85, 185, 285, 385, 485} and n ∈ {100, 200, 300, 400, 500}, named T01–T10, respectively. We selected five instances in each group with α = 0.1 and β = 0.75. The instances can be downloaded at http://sncet.com/ThreekindsofSUKPinstances(EAs).rar. In total, fifteen SUKP instances covering the three relationships between m and n (m > n, m = n, and m < n) were selected for testing. The parameters and the best known solution values (Best*) [5] are shown in Table 3.
Experimental environment: For fair comparisons, all proposed algorithms in this paper were coded in C++ and in the Microsoft Visual Studio 2015 environment. All the experiments were run on a PC with Intel (R) Core (TM) i7-7500 CPU (2.90 GHz and 8.00 GB RAM).
On the stopping condition, we followed the original paper [5] and set the iteration number MaxGen equal to max {m, n} for all SUKP instances. Here, m denotes the number of items and n is the number of elements in each SUKP instance. In addition, the population size of all the algorithms was set to N = 20. For each SUKP instance, we carried out 100 independent replications.
The parameters for the proposed discrete MS algorithms were set as follows: the max step Smax = 1.0, acceleration factor φ = 0.618, and the index β = 1.5.

4.1. The Performance of Discrete MS Algorithm with Different Transfer Functions

Computational results are summarized in Table 4, which records the results for the SUKP instances with m > n, m = n, and m < n, respectively. For each instance, we give several criteria to evaluate the comprehensive performance of the twelve discrete MS algorithms. “Best” and “Mean” refer to the best value and the average value for each instance obtained by each algorithm over 100 independent runs. The best solutions provided in [5] are given in parentheses in the first column.
In Table 4, it can be easily observed that MSO4 outperforms the eleven other discrete MS algorithms and demonstrates the best comprehensive performance when solving all fifteen SUKP instances. In addition, MSS2 and MSO3 show comparable performance.
To evaluate the performance of each algorithm, the relative percentage deviation (RPD) is defined to measure how close the best value obtained by each algorithm is to the best solution in [5]. The RPD of each SUKP instance is calculated as follows:
$$RPD = \frac{Best^* - Best}{Best^*} \times 100 \quad (16)$$
where Best* is the best solution provided in [5]. Clearly, if the value of RPD is less than 0, the algorithm has improved on the best solution of the corresponding SUKP test instance reported in [5]. The statistical results are shown in Table 5.
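The RPD computation itself is straightforward; the sketch below uses made-up values purely for illustration (they are not entries of Table 5):

```python
def rpd(best_ref, best):
    """Relative percentage deviation from a reference best value; a negative
    result means the reference has been improved upon."""
    return (best_ref - best) / best_ref * 100.0

print(round(rpd(13283, 13521), 2))   # -1.79: reference solution improved
print(round(rpd(13283, 13044), 2))   # 1.8: worse than the reference
```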
In Table 5, it can be seen that, among all twelve discrete MS algorithms, MSS2, MSS3, MSV4, MSO1, MSO3, and MSO4 all update the best solutions of [5]. However, MSS3 and MSV4 update only one SUKP instance, T05. MSO1 updates the instances F07, F09, S07, and T05. Moreover, MSO4 remains the best performer because its total average RPD is only −1.28. The total average RPD of MSO3 is −0.14, which implies that MSO3 is slightly worse than MSO4 but outperforms the ten other discrete MS algorithms. Obviously, MSS2 is the third best of the twelve discrete MS algorithms. Indeed, it can also be seen that MSO4 updates and matches the best solutions of [5] ten and two times (out of 15), i.e., 66.67% and 13.33% of the whole instance set, respectively. MSO3 updates and fails to find the best solutions of [5] nine and six times (out of 15), i.e., 60.00% and 40.00%, respectively. MSS2 updates and matches the best solutions of [5] eight times (53.33%) and once (6.67%), respectively.
To further evaluate the comprehensive performance of the twelve discrete MS algorithms in solving the fifteen SUKP instances, the average rankings based on the best values are displayed in Table 6 and Figure 3, respectively. In Table 6 and Figure 3, the average ranking value of MSO4 is 1.60, and it still ranks first. In addition, MSO3 and MSS2 are the second and third best algorithms, respectively, which is consistent with the previous analysis. The ranking of the twelve discrete MS algorithms based on the best values is as follows:
MSO4 > MSO3 > MSS2 > MSO1 > MSS1 > MSV1 > MSS4 > MSS3 = MSV2 > MSV4 > MSV3 > MSO2
By looking closely at Figure 2 and Figure 3, it is not difficult to see that V3, V4, and O2 exhibit the worst performance, which is consistent in the two figures. Similar to the previous analysis in Figure 2, O1, O3, and O4 show satisfactory performance among 12 transfer functions. Thus, it can be inferred that PR value can be used as a criterion for selecting transfer functions.
To analyze the experimental results statistically, we selected three representative instances (F09, S09, and T09) and provide boxplots in Figure 4, Figure 5 and Figure 6. In Figure 4, the boxplot of MSS2 shows a higher value and a smaller spread than those of the other eleven algorithms. In Figure 5 and Figure 6, MSO3 exhibits a similar pattern to that of MSS2 in Figure 4. Additionally, the performance of MSO2 is the worst. From Figure 4, Figure 5 and Figure 6, we can also observe that MSO3 performs slightly better than MSO4 in solving large-scale instances.
Moreover, the optimization process of each algorithm in solving the F09, S09, and T09 instances is given in Figure 7, Figure 8 and Figure 9, respectively. In these three figures, all function values are the average best values over 100 runs. In Figure 7, the initial value of MSS2 is greater than that of the other algorithms, and it then quickly converges to the global optimum. The same pattern appears for MSO3 in Figure 8 and Figure 9. Overall, MSS2 and MSO3 have stronger optimization ability and faster convergence than the other discrete MS algorithms.
Through the above experimental analysis, the following conclusions can be drawn: (1) Among the S-shaped transfer functions, the combination of S2 and MS (MSS2) is the most effective. (2) As far as the V-shaped transfer functions are concerned, the combination of V1 and MS (MSV1) shows the best performance. (3) In the case of other-shaped transfer functions, the more effective algorithms are MSO4, MSO3, and MSO1. (4) Comparing the S-shaped and V-shaped families, the S-shaped transfer functions are more suitable for combining with MS to solve the SUKP. (5) MSO4 has advantages over the other algorithms in terms of solution quality. (6) As far as stability and convergence rate are concerned, MSO3 and MSS2 perform better than the other algorithms.
Overall, it is evident that MSO4 achieves the best results (considering RPD values and average ranking values) on the fifteen SUKP instances. Therefore, it appears that the proposed other-shaped family of transfer functions, particularly the O4 function, offers many advantages when combined with algorithms to solve binary optimization problems. Additionally, the O3 and S2 functions are also suitable candidates. In brief, these results demonstrate that the transfer function plays a very important role in solving the SUKP with a discrete MS algorithm. Thus, by carefully selecting an appropriate transfer function, the performance of the discrete MS algorithm can be improved considerably.

4.2. Estimation of the Solution Space

The SUKP is a binary-coded problem, and its solution space can be represented as a graph G = (V, E), in which the vertex set V = S, where S = {0, 1}^n is the set of solutions of a SUKP instance, and the edge set E = {(s, s′) ∈ S × S | d(s, s′) = d_min}, where d_min is the minimum distance between two points in the search space. In particular, the Hamming distance is used to describe the similarity between individuals. Obviously, the minimum distance is 0 when all bits have the same values, and the maximum distance is n, where n is the dimension of the SUKP instance.
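For completeness, the Hamming distance used here is simply the number of differing bits; a minimal sketch with a made-up population:

```python
import numpy as np

def hamming(s1, s2):
    """Number of bit positions in which two solutions differ."""
    return int(np.sum(np.asarray(s1) != np.asarray(s2)))

# Distances from each individual in a small population to the best one.
pop = np.array([[1, 0, 1, 1, 0],
                [1, 1, 1, 0, 0],
                [1, 0, 1, 1, 0]])
best = pop[0]
print([hamming(ind, best) for ind in pop])   # [0, 2, 0]
```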
Here, MSO4 is selected to analyze the solution space for the F01, S01, and T01 SUKP instances. The distribution of fitness at generation 0 and generation 100 is presented in Figure 10, Figure 11 and Figure 12. The distance between each individual and the best individual is given in Figure 13, Figure 14 and Figure 15. In Figure 10, Figure 11 and Figure 12, we can see that, at generation 0, the fitness values are more dispersed and worse than those at generation 100. In Figure 13, it can be observed that the Hamming distance varies from 0 to 35 at generation 0, while the range is 0 to 12 at generation 100. Moreover, the Hamming distances fall into only eight distinct levels at generation 100, which demonstrates that all individuals tend to cluster around a few superior individuals. However, this phenomenon is not evident for S01 and T01.
To intuitively understand the similarity of the solutions, the spatial structure of the solutions at generation 100 is illustrated in Figure 16, Figure 17 and Figure 18. In Figure 16, the first node (denoting the first individual) has the maximum degree, which also shows that more individuals have approached the better individual. However, the node degrees differ little in Figure 17 and Figure 18. This result is consistent with the previous analysis.

4.3. Discrete MS Algorithm vs. Other Optimization Algorithms

To further verify the performance of the discrete MS algorithm, we chose the MSO4 algorithm for comparison with five other optimization algorithms: PSO [12], DE [8], global harmony search (GHS) [30], the firefly algorithm (FA) [31], and monarch butterfly optimization (MBO) [32,33]. In DE, the DE/rand/1/bin scheme was adopted. PSO, FA, and MBO are classical or novel swarm intelligence algorithms that simulate the social behavior of birds, fireflies, and monarch butterflies, respectively. DE is derived from evolutionary theory in nature and has proved to be one of the most promising stochastic real-valued optimization algorithms. GHS is an efficient variant of HS, which imitates the music improvisation process. It is also noteworthy that all five comparison algorithms adopt the discretization method introduced in this paper and are combined with O4, respectively. The parameter settings for each algorithm are shown in Table 7.
The best results and average results obtained by the six methods over 100 independent runs, as well as the average time cost of each computation (unit: second, denoted “time”), are summarized in Table 8. The frequency (TBest and TMean) and average ranking (RBest and RMean) with which each algorithm achieves the best performance, based on the best values and average values, are also recorded in Table 8. The average time cost of each computation for solving the fifteen SUKP instances is illustrated in Figure 19. In Table 8, in terms of best values, MSO4 outperforms the other methods on eight of the fifteen instances (F01, F03, F05, F07, S01, S03, S05, and T03). MBO is the second most effective. In terms of average ranking, there is little difference between the performance of MSO4 and MBO. In terms of average time cost, it can be observed in Figure 19 that DE has the slowest computing speed, whereas GHS is surprisingly fast. MSO4 ranks second among the six algorithms. Overall, the computing speeds of PSO, FA, MBO, and MSO4 show little difference.
To investigate, from a statistical perspective, the difference between the results obtained by MSO4 and those of the comparison algorithms, Wilcoxon’s rank-sum tests at the 5% significance level were performed; the results are recorded in Table 9. In Table 9, “1” and “−1” indicate that MSO4 is superior or inferior to the corresponding comparison algorithm, respectively, while “0” indicates that there is no statistically significant difference at the 5% level between the two algorithms.
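The 1/0/−1 labels in Table 9 can be produced with a normal-approximation rank-sum test. The sketch below is illustrative only (it omits the tie correction for the variance) and is not the authors' implementation:

```python
import math

def ranksum_flag(a, b, z_crit=1.96):
    """Wilcoxon rank-sum test via the normal approximation (no tie correction).
    Returns 1 if sample a is significantly larger, -1 if smaller, 0 otherwise."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):  # assign average ranks to runs of tied values
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of positions i..j
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    if abs(z) <= z_crit:
        return 0
    return 1 if z > 0 else -1
```

Here `a` would hold the 100 best values found by MSO4 on one instance and `b` those of a comparison algorithm; a result of 1 corresponds to a “1” entry in Table 9.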
As Table 9 shows, MSO4 outperforms PSO and DE on all fifteen instances. In addition, MSO4 performs better than GHS and FA on all instances except S05 and F01, respectively. Meanwhile, MSO4 is superior to MBO on eleven instances, the exceptions being F07, F09, S01, and S05; statistically, there is no difference between the performance of MSO4 and that of MBO on these four instances.
Considering the results shown in Table 8 and Table 9, it can be concluded that the performance of MSO4 is superior to, or at least quite competitive with, that of the five other methods.

5. Conclusions

In this paper, twelve discrete MS algorithms based on different transfer functions are proposed for solving SUKP. The transfer functions fall into three families: S-shaped, V-shaped, and other-shapes. To investigate the performance of the twelve discrete MS algorithms, three groups totaling fifteen SUKP instances were employed, and the experimental results were compared and analyzed comprehensively. From the experimental results, we found that MSO4 has the best performance. Furthermore, the relative percentage deviation (RPD) was calculated to evaluate the gap between the best value obtained by each algorithm and the best solution provided in [5]. The results show that six algorithms improve on the best solutions of [5] for 11 SUKP instances. The results also indicate that the four other-shapes transfer functions, especially the O4 function combined with MS, have merits for solving discrete optimization problems.
The comparison results on the fifteen SUKP instances among MSO4 and five state-of-the-art algorithms show that MSO4 performs competitively.
There are several possible directions for further study. First, we will investigate some new transfer functions on other algorithms, such as the krill herd algorithm (KH) [34,35,36,37,38], fruit fly optimization algorithm (FOA) [39], earthworm optimization algorithm (EWA) [40], and cuckoo search (CS) [41,42]. Second, we will study other techniques for discretizing continuous optimization algorithms, such as the k-means framework [43]. Third, we will apply these twelve transfer-function-based discrete MS algorithms to other related and more complicated binary optimization problems, including the multidimensional knapsack problem (MKP) [39] and the flow shop scheduling problem (FSSP) [44]. Finally, we will incorporate other strategies, namely information feedback [45] and chaos theory [46], into MS to improve the performance of the algorithm.

Author Contributions

Writing and methodology, Y.F.; supervision, H.A.; review and editing, X.G.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61806069, and the Key Research and Development Projects of Hebei Province, grant number 17210905.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  2. Du, D.Z.; Ko, K.I. Theory of Computational Complexity; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  3. Goldschmidt, O.; Nehme, D.; Yu, G. Note: On the set-union knapsack problem. Naval Res. Logist. (NRL) 1994, 41, 833–842. [Google Scholar] [CrossRef]
  4. Arulselvan, A. A note on the set union knapsack problem. Discret. Appl. Math. 2014, 169, 214–218. [Google Scholar] [CrossRef]
  5. He, Y.; Xie, H.; Wong, T.L.; Wang, X. A novel binary artificial bee colony algorithm for the set-union knapsack problem. Future Gener. Comput. Syst. 2017, 78, 77–86. [Google Scholar] [CrossRef]
  6. Yang, X.; Vernitski, A.; Carrea, L. An approximate dynamic programming approach for improving accuracy of lossy data compression by Bloom filters. Eur. J. Oper. Res. 2016, 252, 985–994. [Google Scholar] [CrossRef] [Green Version]
  7. Schneier, B. Applied Cryptography: Protocols, Algorithms, and Source Code in C; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  8. Engelbrecht, A.P.; Pampara, G. Binary differential evolution strategies. In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 1942–1947. [Google Scholar]
  9. Ozsoydan, F.B.; Baykasoglu, A. A swarm intelligence-based algorithm for the set-union knapsack problem. Future Gener. Comput. Syst. 2018, 93, 560–569. [Google Scholar] [CrossRef]
  10. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2016. [Google Scholar] [CrossRef]
  11. Feng, Y.; Wang, G.G. Binary moth search algorithm for discounted 0-1 knapsack problem. IEEE Access 2018, 6, 10708–10719. [Google Scholar] [CrossRef]
  12. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics—Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108. [Google Scholar]
  13. Karthikeyan, S.; Asokan, P.; Nickolas, S.; Page, T. A hybrid discrete firefly algorithm for solving multi-objective flexible job shop scheduling problems. Int. J. Bio-Inspired Comput. 2015, 7, 386–401. [Google Scholar] [CrossRef]
  14. Kong, X.; Gao, L.; Ouyang, H.; Li, S. A simplified binary harmony search algorithm for large scale 0-1 knapsack problems. Expert Syst. Appl. 2015, 42, 5337–5355. [Google Scholar] [CrossRef]
  15. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2010, 9, 727–745. [Google Scholar] [CrossRef]
  16. Saremi, S.; Mirjalili, S.; Lewis, A. How important is a transfer function in discrete heuristic algorithms. Neural Comput. Appl. 2015, 26, 625–640. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  18. Pampara, G.; Franken, N.; Engelbrecht, A.P. Combining particle swarm optimisation with angle modulation to solve binary problems. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 1, pp. 89–96. [Google Scholar]
  19. Leonard, B.J.; Engelbrecht, A.P.; Cleghorn, C.W. Critical considerations on angle modulated particle swarm optimisers. Swarm Intell. 2015, 9, 291–314. [Google Scholar] [CrossRef] [Green Version]
  20. Costa, M.F.P.; Rocha, A.M.A.C.; Francisco, R.B.; Fernandes, E.M.G.P. Heuristic-based firefly algorithm for bound constrained nonlinear binary optimization. Adv. Oper. Res. 2014, 2014, 215182. [Google Scholar] [CrossRef]
  21. Burnwal, S.; Deb, S. Scheduling optimization of flexible manufacturing system using cuckoo search-based approach. Int. J. Adv. Manuf. Technol. 2013, 64, 951–959. [Google Scholar] [CrossRef]
  22. Pampará, G.; Engelbrecht, A.P. Binary artificial bee colony optimization. In Proceedings of the 2011 IEEE Symposium on Swarm Intelligence (SIS), Paris, France, 11–15 April 2011; pp. 1–8. [Google Scholar]
  23. Zhu, H.; He, Y.; Wang, X.; Tsang, E.C.C. Discrete differential evolutions for the discounted {0-1} knapsack problem. Int. J. Bio-Inspired Comput. 2017, 10, 219–238. [Google Scholar] [CrossRef]
  24. Changdar, C.; Mahapatra, G.S.; Pal, R.K. An improved genetic algorithm based approach to solve constrained knapsack problem in fuzzy environment. Expert Syst. Appl. 2015, 42, 2276–2286. [Google Scholar] [CrossRef]
  25. Lim, T.Y.; Al-Betar, M.A.; Khader, A.T. Taming the 0/1 knapsack problem with monogamous pairs genetic algorithm. Expert Syst. Appl. 2016, 54, 241–250. [Google Scholar] [CrossRef]
  26. Cao, L.; Xu, L.; Goodman, E.D. A neighbor-based learning particle swarm optimizer with short-term and long-term memory for dynamic optimization problems. Inf. Sci. 2018, 453, 463–485. [Google Scholar] [CrossRef]
  27. Chih, M. Three pseudo-utility ratio-inspired particle swarm optimization with local search for multidimensional knapsack problem. Swarm Evol. Comput. 2017, 39. [Google Scholar] [CrossRef]
  28. Michalewicz, Z.; Nazhiyath, G. Genocop III: A co-evolutionary algorithm for numerical optimization problems with nonlinear constraints. In Proceedings of the 1995 IEEE International Conference on Evolutionary Computation, Perth, Western Australia, 29 November–1 December 1995; Volume 2, pp. 647–651. [Google Scholar]
  29. Liu, X.J.; He, Y.C.; Wu, C.C. Quadratic greedy mutated crow search algorithm for solving set-union knapsack problem. Microelectron. Comput. 2018, 35, 13–19. [Google Scholar]
  30. Omran, M.G.H.; Mahdavi, M. Global-best harmony search. Appl. Math. Comput. 2008, 198, 643–656. [Google Scholar] [CrossRef]
  31. Yang, X.S. Firefly Algorithm, Lévy Flights and Global Optimization. In Research and Development in Intelligent Systems XXVI; Springer: London, UK, 2010; pp. 209–218. [Google Scholar]
  32. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2015, 1–20. [Google Scholar] [CrossRef]
  33. Feng, Y.; Wang, G.G.; Li, W.; Li, N. Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem. Neural Comput. Appl. 2017, 1–18. [Google Scholar] [CrossRef]
  34. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  35. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  36. Wang, G.; Guo, L.; Wang, H.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  37. Wang, H.; Yi, J.-H. An improved optimization method based on krill herd and artificial bee colony with information exchange. Memetic Comput. 2017. [Google Scholar] [CrossRef]
  38. Wang, G.-G.; Deb, S.; Gandomi, A.H.; Alavi, A.H. Opposition-based krill herd algorithm with Cauchy mutation and position clamping. Neurocomputing 2016, 177, 147–157. [Google Scholar] [CrossRef]
  39. Wang, L.; Zheng, X.L.; Wang, S.Y. A novel binary fruit fly optimization algorithm for solving the multidimensional knapsack problem. Knowl.-Based Syst. 2013, 48, 17–23. [Google Scholar] [CrossRef]
  40. Wang, G.-G.; Deb, S.; Coelho, L.D.S. Earthworm optimization algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Int. J. Bio-Inspired Comput. 2015. [Google Scholar] [CrossRef]
  41. Cui, Z.; Sun, B.; Wang, G.-G.; Xue, Y.; Chen, J. A novel oriented cuckoo search algorithm to improve DV-Hop performance for cyber-physical systems. J. Parallel. Distr. Comput. 2017, 103, 42–52. [Google Scholar] [CrossRef]
  42. Wang, G.-G.; Gandomi, A.H.; Zhao, X.; Chu, H.E. Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput. 2016, 20, 273–285. [Google Scholar] [CrossRef]
  43. García, J.; Crawford, B.; Soto, R.; Castro, C.; Paredes, F. A k-means binarization framework applied to multidimensional knapsack problem. Appl. Intell. 2018, 48, 357–380. [Google Scholar] [CrossRef]
  44. Deng, J.; Wang, L. A competitive memetic algorithm for multi-objective distributed permutation flow shop scheduling problem. Swarm Evolut. Comput. 2016, 32, 107–112. [Google Scholar] [CrossRef]
  45. Wang, G.-G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2017. [Google Scholar] [CrossRef] [PubMed]
  46. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
Figure 1. Twelve transfer functions.
Figure 2. Probability of transfer function with a value of 1.
Figure 3. Comparison of the average rank of 12 discrete MS algorithms for 15 SUKP instances.
Figure 4. Boxplot of the best values on F09 in 100 runs.
Figure 5. Boxplot of the best values on S09 in 100 runs.
Figure 6. Boxplot of the best values on T09 in 100 runs.
Figure 7. The convergence graph of twelve discrete MS algorithms on F09.
Figure 8. The convergence graph of twelve discrete MS algorithms on S09.
Figure 9. The convergence graph of twelve discrete MS algorithms on T09.
Figure 10. The distribution graph of fitness on MSO4 for F01.
Figure 11. The distribution graph of fitness on MSO4 for S01.
Figure 12. The distribution graph of fitness on MSO4 for T01.
Figure 13. The distance to the best individual for F01.
Figure 14. The distance to the best individual for S01.
Figure 15. The distance to the best individual for T01.
Figure 16. The spatial structure graph for F01 at generation 100.
Figure 17. The spatial structure graph for S01 at generation 100.
Figure 18. The spatial structure graph for T01 at generation 100.
Figure 19. The average time cost of each computation for solving fifteen SUKP instances.
Table 1. Twelve transfer functions.

S1 [17]: T(x) = 1/(1 + e^(−2x))
S2 [12]: T(x) = 1/(1 + e^(−x))
S3 [17]: T(x) = 1/(1 + e^(−x/2))
S4 [17]: T(x) = 1/(1 + e^(−x/3))
V1 [20]: T(x) = |erf((√π/2)·x)| = |(2/√π) ∫₀^((√π/2)x) e^(−t²) dt|
V2 [12]: T(x) = |tanh(x)|
V3 [17]: T(x) = |x/√(1 + x²)|
V4 [17]: T(x) = |(2/π)·arctan((π/2)·x)|
O1 [18]: T(x) = sin(2π(x − a)·b·cos(2π(x − a)·c)) + d, with (a = 0, b = 1, c = 1, d = 0)
O2 [20]: T(x) = |x mod 2|
O3 [22]: T(x) = (x + x_min)/(|x_min| + x_max), for x_min ≤ x ≤ x_max
O4 [23]: T(x) = x
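A few of the transfer functions in Table 1 are trivial to implement; the sketch below follows the formulas in the table (these are illustrative one-liners, not the authors' source code):

```python
import math

# Representative transfer functions from Table 1 (names follow the table)
S2 = lambda x: 1.0 / (1.0 + math.exp(-x))        # S-shaped: logistic sigmoid
V2 = lambda x: abs(math.tanh(x))                 # V-shaped: |tanh(x)|
V3 = lambda x: abs(x / math.sqrt(1.0 + x * x))   # V-shaped: |x / sqrt(1 + x^2)|
O2 = lambda x: abs(math.fmod(x, 2))              # other shape: |x mod 2|

print(round(S2(0.0), 2))  # 0.5: S-shaped functions pass through 0.5 at x = 0
print(round(V2(0.0), 2))  # 0.0: V-shaped functions are 0 at x = 0
```

All four map an unbounded real input into [0, 1], which is what allows them to be interpreted as bit-flip probabilities.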
Table 2. The first mapping procedure according to the S2 transfer function.

| Element | x0 | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 |
| X | 2.96 | 3.32 | −3.25 | 2.65 | 2.61 | −1.57 | −0.07 | 0.91 | 1.04 | 1.68 |
| T(X) | 0.95 | 0.97 | 0.04 | 0.93 | 0.93 | 0.17 | 0.48 | 0.71 | 0.74 | 0.84 |
| Rand() | 0.61 | 0.17 | 0.07 | 0.15 | 0.08 | 0.86 | 0.72 | 0.39 | 0.80 | 0.62 |
| Y | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 |
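The mapping in Table 2 can be reproduced in a few lines of Python. Note that, to match the Y row of the table, a bit is set to 1 when the random draw is greater than or equal to T(x); the reverse convention (rand() < T(x) → 1) also appears in the literature, so treat this rule as read off the table rather than as a universal definition:

```python
import math

def s2(x):
    """S2 transfer function: maps a real value to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(xs, rands):
    """Map a continuous position vector to a binary vector.
    Matching Table 2, bit i is 1 when rands[i] >= s2(xs[i])."""
    return [1 if r >= s2(x) else 0 for x, r in zip(xs, rands)]

# X and Rand() rows from Table 2
X = [2.96, 3.32, -3.25, 2.65, 2.61, -1.57, -0.07, 0.91, 1.04, 1.68]
R = [0.61, 0.17, 0.07, 0.15, 0.08, 0.86, 0.72, 0.39, 0.80, 0.62]
print(binarize(X, R))  # [0, 0, 1, 0, 0, 1, 1, 0, 1, 0], the Y row of Table 2
```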
Table 3. The parameters and the best solution value provided in [5] for 15 SUKP instances.

| Number | Instance | m | n | Capacity | Best* |
| 1 | F01 | 100 | 85 | 12,015 | 13,251 |
| 2 | F03 | 200 | 185 | 22,809 | 13,241 |
| 3 | F05 | 300 | 285 | 36,126 | 10,553 |
| 4 | F07 | 400 | 385 | 50,856 | 10,766 |
| 5 | F09 | 500 | 485 | 60,351 | 11,031 |
| 6 | S01 | 100 | 100 | 11,223 | 14,044 |
| 7 | S03 | 200 | 200 | 25,630 | 11,846 |
| 8 | S05 | 300 | 300 | 38,289 | 12,304 |
| 9 | S07 | 400 | 400 | 49,822 | 10,626 |
| 10 | S09 | 500 | 500 | 63,902 | 10,755 |
| 11 | T01 | 85 | 100 | 12,180 | 11,664 |
| 12 | T03 | 185 | 200 | 25,405 | 13,047 |
| 13 | T05 | 285 | 300 | 38,922 | 11,158 |
| 14 | T07 | 385 | 400 | 49,815 | 10,085 |
| 15 | T09 | 485 | 500 | 62,516 | 10,823 |
Table 4. The best values and average values of twelve discrete MS algorithms on 15 SUKP instances.

| Number | Criterion | MSS1 | MSS2 | MSS3 | MSS4 | MSV1 | MSV2 | MSV3 | MSV4 | MSO1 | MSO2 | MSO3 | MSO4 |
| F01 (13251) | Best | 12,698 | 13,283 | 12,861 | 13,057 | 13,003 | 13,003 | 13,044 | 13,044 | 12,973 | 12,678 | 13,283 | 13,283 |
|  | Mean | 12,250 | 13,102 | 12,168 | 12,227 | 12,564 | 12,740 | 12,858 | 12,697 | 12,320 | 12,066 | 13,052 | 13,062 |
| F03 (13241) | Best | 12,762 | 13,286 | 12,216 | 12,189 | 12,875 | 12,639 | 11,953 | 12,267 | 13,175 | 12,255 | 13,322 | 13,521 |
|  | Mean | 11,777 | 12,860 | 11,465 | 11,261 | 12,176 | 12,100 | 11,321 | 11,193 | 12,371 | 11,007 | 13,101 | 13,193 |
| F05 (10553) | Best | 10,142 | 10,668 | 9974 | 10,047 | 9966 | 9562 | 9752 | 9460 | 10,539 | 9656 | 10,643 | 11,127 |
|  | Mean | 9588 | 10,196 | 9465 | 9393 | 9352 | 9120 | 9250 | 9131 | 9822 | 8987 | 10,381 | 10,302 |
| F07 (10766) | Best | 10,456 | 11,321 | 9793 | 10,005 | 10,625 | 9539 | 9917 | 9814 | 10,906 | 9801 | 11,321 | 11,435 |
|  | Mean | 9750 | 10,644 | 9349 | 9467 | 10,042 | 9150 | 9317 | 9265 | 10,141 | 9028 | 10,833 | 10,411 |
| F09 (11031) | Best | 10,669 | 11,410 | 10,642 | 10,461 | 10,718 | 10,725 | 10,598 | 10,288 | 11,279 | 9808 | 11,172 | 11,031 |
|  | Mean | 10,293 | 10,913 | 10,082 | 9965 | 10,420 | 9969 | 10,134 | 9997 | 10,648 | 9429 | 10,750 | 10,716 |
| S01 (14044) | Best | 13,405 | 14,044 | 13,080 | 13,611 | 13,396 | 13,814 | 13,721 | 13,721 | 13,659 | 13,202 | 14,003 | 14,044 |
|  | Mean | 12,725 | 13,478 | 12,418 | 12,607 | 13,211 | 13,569 | 13,540 | 13,503 | 12,899 | 12,339 | 13,583 | 13,649 |
| S03 (11846) | Best | 11,249 | 11,104 | 10,904 | 11,295 | 11,329 | 10,802 | 10,481 | 10,808 | 11,757 | 11,147 | 11,873 | 12,350 |
|  | Mean | 10,469 | 10,576 | 10,282 | 10,285 | 10,622 | 9879 | 10,112 | 10,212 | 10,789 | 9975 | 11,419 | 11,508 |
| S05 (12304) | Best | 11,649 | 12,071 | 11,472 | 11,459 | 11,799 | 11,686 | 11,421 | 11,380 | 11,862 | 11,048 | 12,240 | 12,598 |
|  | Mean | 10,979 | 11,650 | 10,753 | 10,787 | 11,199 | 11,206 | 10,898 | 11,165 | 11,272 | 10,153 | 11,721 | 11,541 |
| S07 (10626) | Best | 10,330 | 10,990 | 10,218 | 10,073 | 10,177 | 9669 | 9957 | 9977 | 10,650 | 10,006 | 10,722 | 10,727 |
|  | Mean | 9831 | 10,379 | 9766 | 9681 | 9968 | 9286 | 9372 | 9460 | 10,019 | 9426 | 10,327 | 10,343 |
| S09 (10755) | Best | 10,074 | 10,495 | 9995 | 10,037 | 9938 | 10,025 | 10,043 | 10,052 | 10,199 | 9553 | 10,355 | 10,355 |
|  | Mean | 9719 | 9968 | 9583 | 9675 | 9654 | 9707 | 9804 | 9713 | 9807 | 9127 | 10,056 | 9919 |
| T01 (11664) | Best | 11,034 | 11,573 | 11,158 | 11,332 | 11,427 | 11,027 | 11,151 | 11,076 | 11,195 | 11,159 | 11,519 | 11,735 |
|  | Mean | 10,577 | 11,259 | 10,491 | 10,501 | 10,812 | 10,572 | 10,496 | 10,781 | 10,568 | 10,495 | 11,276 | 11,287 |
| T03 (13047) | Best | 12,234 | 13,306 | 12,357 | 12,136 | 12,415 | 12,633 | 12,039 | 11,829 | 12,798 | 12,085 | 13,378 | 13,647 |
|  | Mean | 11,549 | 12,621 | 11,624 | 11,522 | 11,745 | 11,743 | 11,535 | 11,400 | 11,980 | 11,267 | 12,948 | 13,000 |
| T05 (11158) | Best | 11,025 | 11,173 | 11,167 | 10,765 | 10,814 | 10,725 | 10,485 | 11,240 | 11,183 | 10,299 | 11,226 | 11,391 |
|  | Mean | 10,385 | 10,871 | 10,247 | 10,118 | 10,635 | 10,229 | 10,059 | 10,735 | 10,651 | 9584 | 10,957 | 10,816 |
| T07 (10085) | Best | 9676 | 9609 | 9140 | 9169 | 9594 | 9303 | 9049 | 8965 | 9675 | 9198 | 9783 | 9739 |
|  | Mean | 8987 | 9264 | 8873 | 8875 | 9161 | 9131 | 8642 | 8733 | 9154 | 8694 | 9261 | 9240 |
| T09 (10823) | Best | 10,208 | 10,549 | 10,131 | 10,094 | 10,115 | 10,201 | 9866 | 10,005 | 10,450 | 9989 | 10,660 | 10,539 |
|  | Mean | 9856 | 10,205 | 9753 | 9711 | 9949 | 9771 | 9506 | 9714 | 9985 | 9332 | 10,350 | 10,190 |
Table 5. The effect of twelve transfer functions on the performance of discrete MS algorithm (RPD values).

| Number | MSS1 | MSS2 | MSS3 | MSS4 | MSV1 | MSV2 | MSV3 | MSV4 | MSO1 | MSO2 | MSO3 | MSO4 |
| F01 | 4.17 | −0.24 | 2.94 | 1.46 | 1.87 | 1.87 | 1.56 | 1.56 | 2.10 | 4.32 | −0.24 | −0.24 |
| F03 | 3.62 | −0.34 | 7.74 | 7.95 | 2.76 | 4.55 | 9.73 | 7.36 | 0.50 | 7.45 | −0.61 | −2.11 |
| F05 | 3.89 | −1.09 | 5.49 | 4.79 | 5.56 | 9.39 | 7.59 | 10.36 | 0.13 | 8.50 | −0.85 | −5.44 |
| F07 | 2.88 | −5.16 | 9.04 | 7.07 | 1.31 | 11.40 | 7.89 | 8.84 | −1.30 | 8.96 | −5.16 | −6.21 |
| F09 | 3.28 | −3.44 | 3.53 | 5.17 | 2.84 | 2.77 | 3.93 | 6.74 | −2.25 | 11.09 | −1.28 | 0.00 |
| S01 | 4.55 | 0.00 | 6.86 | 3.08 | 4.61 | 1.64 | 2.30 | 2.30 | 2.74 | 6.00 | 0.29 | 0.00 |
| S03 | 5.04 | 6.26 | 7.95 | 4.65 | 4.36 | 8.81 | 11.52 | 8.76 | 0.75 | 5.90 | −0.23 | −4.25 |
| S05 | 5.32 | 1.89 | 6.76 | 6.87 | 4.10 | 5.02 | 7.18 | 7.51 | 3.59 | 10.21 | 0.52 | −2.39 |
| S07 | 2.79 | −3.43 | 3.84 | 5.20 | 4.23 | 9.01 | 6.30 | 6.11 | −0.23 | 5.83 | −0.90 | −0.95 |
| S09 | 6.33 | 2.42 | 7.07 | 6.68 | 7.60 | 6.79 | 6.62 | 6.54 | 5.17 | 11.18 | 3.72 | 3.72 |
| T01 | 5.40 | 0.78 | 4.34 | 2.85 | 2.03 | 5.46 | 4.40 | 5.04 | 4.02 | 4.33 | 1.24 | −0.61 |
| T03 | 6.23 | −1.99 | 5.29 | 6.98 | 4.84 | 3.17 | 7.73 | 9.34 | 1.91 | 7.37 | −2.54 | −4.60 |
| T05 | 1.19 | −0.13 | −0.08 | 3.52 | 3.08 | 3.88 | 6.03 | −0.73 | −0.22 | 7.70 | −0.61 | −2.09 |
| T07 | 4.06 | 4.72 | 9.37 | 9.08 | 4.87 | 7.75 | 10.27 | 11.11 | 4.07 | 8.80 | 2.99 | 3.43 |
| T09 | 5.68 | 2.53 | 6.39 | 6.74 | 6.54 | 5.75 | 8.84 | 7.56 | 3.45 | 7.71 | 1.51 | 2.62 |
| Mean | 4.30 | 0.19 | 5.77 | 5.47 | 4.04 | 5.82 | 6.79 | 6.56 | 1.63 | 7.69 | −0.14 | −1.28 |
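The RPD values in Table 5 are consistent with the usual definition RPD = 100 × (Best* − best)/Best*, where Best* is the reference best value from [5]. A quick sketch, assuming this definition (it reproduces the F01 entries):

```python
def rpd(best_star, best):
    """Relative percentage deviation from the reference best value Best*.
    Negative values mean the algorithm improved on the reference."""
    return round(100.0 * (best_star - best) / best_star, 2)

# F01: Best* = 13,251; MSS1 found 12,698 and MSO4 found 13,283 (Table 4)
print(rpd(13251, 12698))  # 4.17, matching the MSS1 entry in Table 5
print(rpd(13251, 13283))  # -0.24, matching the MSO4 entry in Table 5
```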
Table 6. Ranks of twelve discrete MS algorithms based on the best values.

| Number | MSS1 | MSS2 | MSS3 | MSS4 | MSV1 | MSV2 | MSV3 | MSV4 | MSO1 | MSO2 | MSO3 | MSO4 |
| F01 | 11 | 1 | 10 | 4 | 7 | 7 | 5 | 5 | 9 | 12 | 1 | 1 |
| F03 | 6 | 3 | 10 | 11 | 5 | 7 | 12 | 8 | 4 | 9 | 2 | 1 |
| F05 | 5 | 2 | 7 | 6 | 8 | 11 | 9 | 12 | 4 | 10 | 3 | 1 |
| F07 | 5 | 1 | 6 | 8 | 7 | 12 | 11 | 10 | 4 | 9 | 3 | 2 |
| F09 | 7 | 1 | 8 | 10 | 6 | 5 | 9 | 11 | 2 | 12 | 3 | 4 |
| S01 | 9 | 1 | 12 | 8 | 10 | 4 | 5 | 5 | 7 | 11 | 3 | 1 |
| S03 | 6 | 8 | 9 | 5 | 4 | 11 | 12 | 10 | 3 | 7 | 2 | 1 |
| S05 | 7 | 3 | 8 | 9 | 5 | 6 | 10 | 11 | 4 | 12 | 2 | 1 |
| S07 | 5 | 1 | 6 | 8 | 7 | 12 | 11 | 10 | 4 | 9 | 3 | 2 |
| S09 | 6 | 1 | 10 | 8 | 11 | 9 | 7 | 6 | 4 | 12 | 2 | 2 |
| T01 | 11 | 2 | 8 | 5 | 4 | 12 | 9 | 10 | 6 | 7 | 3 | 1 |
| T03 | 8 | 3 | 7 | 9 | 6 | 5 | 11 | 12 | 4 | 10 | 2 | 1 |
| T05 | 7 | 5 | 6 | 9 | 8 | 10 | 11 | 2 | 4 | 12 | 3 | 1 |
| T07 | 3 | 5 | 10 | 9 | 6 | 7 | 11 | 12 | 4 | 8 | 1 | 2 |
| T09 | 5 | 2 | 7 | 9 | 8 | 6 | 12 | 10 | 4 | 11 | 1 | 3 |
| Mean | 6.67 | 2.60 | 8.27 | 7.87 | 6.80 | 8.27 | 9.67 | 8.93 | 4.47 | 10.07 | 2.27 | 1.60 |
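The ranks in Table 6 are consistent with standard competition ranking, where tied values share the best rank. A minimal sketch that reproduces the F01 row from the best values in Table 4 (assuming this ranking rule):

```python
def competition_ranks(values):
    """Rank values in descending order; ties share the smallest rank
    (rank = 1 + number of strictly better values)."""
    return [1 + sum(w > v for w in values) for v in values]

# Best values of the twelve discrete MS algorithms on F01 (Table 4)
best_f01 = [12698, 13283, 12861, 13057, 13003, 13003,
            13044, 13044, 12973, 12678, 13283, 13283]
print(competition_ranks(best_f01))  # [11, 1, 10, 4, 7, 7, 5, 5, 9, 12, 1, 1]
```

The three algorithms tied at 13,283 (MSS2, MSO3, and MSO4) all receive rank 1, matching the F01 row of Table 6.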
Table 7. The parameter settings of six algorithms on SUKP.

| Algorithm | Parameters | Value |
| PSO | Cognitive constant C1 | 1.0 |
|  | Social constant C2 | 1.0 |
|  | Inertial constant W | 0.3 |
| DE | Weighting factor F | 0.9 |
|  | Crossover constant CR | 0.3 |
| GHS | Harmony memory considering rate HMCR | 0.9 |
|  | Pitch adjusting rate PAR | 0.3 |
| FA | Alpha | 0.2 |
|  | Beta | 1.0 |
|  | Gamma | 1.0 |
| MBO | Migration ratio | 3/12 |
|  | Migration period | 1.4 |
|  | Butterfly adjusting rate | 1/12 |
|  | Max step | 1.0 |
| MSO4 | Max step Smax | 1.0 |
|  | Acceleration factor φ | 0.618 |
|  | Lévy distribution parameter β | 1.5 |
Table 8. Computational results and comparisons on the Best and Mean on 15 SUKP instances.

| Number | Criterion | PSO | DE | GHS | FA | MBO | MSO4 |
| F01 (13251) | Best | 13,283 | 13,125 | 13,251 | 13,283 | 13,283 | 13,283 |
|  | Mean | 12,981 | 12,923 | 12,492 | 13,041 | 12,941 | 13,062 |
|  | Time | 1.297 | 1.923 | 0.330 | 1.627 | 3.684 | 1.398 |
| F03 (13241) | Best | 13,319 | 13,172 | 12,323 | 13,282 | 13,381 | 13,521 |
|  | Mean | 12,697 | 12,443 | 11,231 | 12,544 | 12,886 | 13,193 |
|  | Time | 8.643 | 13.381 | 0.603 | 11.552 | 11.676 | 7.901 |
| F05 (10553) | Best | 10,408 | 10,214 | 10,512 | 10,191 | 10,786 | 11,127 |
|  | Mean | 9825 | 9420 | 10,179 | 9092 | 10,210 | 10,302 |
|  | Time | 28.628 | 41.633 | 0.928 | 45.647 | 40.938 | 24.912 |
| F07 (10766) | Best | 11,091 | 10,135 | 11,255 | 9740 | 11,142 | 11,435 |
|  | Mean | 10,613 | 9573 | 10,642 | 9226 | 10,463 | 10,411 |
|  | Time | 63.290 | 124.504 | 1.637 | 97.102 | 61.588 | 56.838 |
| F09 (11031) | Best | 11,046 | 11,016 | 11,536 | 11,099 | 11,546 | 11,031 |
|  | Mean | 10,473 | 10,443 | 11,199 | 10,473 | 10,736 | 10,716 |
|  | Time | 138.551 | 225.749 | 3.179 | 170.035 | 157.478 | 124.378 |
| S01 (14044) | Best | 13,814 | 13,519 | 13,522 | 13,814 | 14,044 | 14,044 |
|  | Mean | 13,575 | 12,964 | 12,656 | 13,472 | 13,612 | 13,649 |
|  | Time | 1.608 | 2.800 | 0.358 | 1.805 | 2.617 | 1.646 |
| S03 (11846) | Best | 11,914 | 11,085 | 11,531 | 11,406 | 11,955 | 12,350 |
|  | Mean | 10,978 | 10,408 | 10,925 | 10,833 | 11,056 | 11,508 |
|  | Time | 8.437 | 14.543 | 0.476 | 10.753 | 9.371 | 8.112 |
| S05 (12304) | Best | 12,574 | 12,071 | 12,104 | 11,398 | 12,369 | 12,598 |
|  | Mean | 11,709 | 11,251 | 11,492 | 10,993 | 11,604 | 11,541 |
|  | Time | 35.259 | 46.302 | 1.014 | 37.130 | 26.551 | 28.612 |
| S07 (10626) | Best | 10,669 | 10,267 | 10,952 | 10,241 | 10,906 | 10,727 |
|  | Mean | 10,217 | 9753 | 10,497 | 9827 | 10,237 | 10,343 |
|  | Time | 79.622 | 101.118 | 1.718 | 77.458 | 76.049 | 58.433 |
| S09 (10755) | Best | 10,352 | 10,100 | 10,434 | 10,057 | 10,633 | 10,355 |
|  | Mean | 10,104 | 9708 | 10,239 | 9766 | 10,139 | 9919 |
|  | Time | 144.377 | 242.428 | 3.013 | 167.492 | 153.835 | 121.622 |
| T01 (11664) | Best | 11,752 | 11,469 | 11,434 | 11,755 | 11,748 | 11,735 |
|  | Mean | 11,152 | 10,930 | 10,370 | 11,226 | 11,207 | 11,287 |
|  | Time | 1.517 | 1.892 | 0.407 | 1.722 | 1.668 | 1.354 |
| T03 (13047) | Best | 13,100 | 9624 | 12,618 | 11,487 | 13,008 | 13,647 |
|  | Mean | 12,091 | 9122 | 11,855 | 10,880 | 12,189 | 13,000 |
|  | Time | 7.964 | 12.708 | 0.507 | 11.958 | 10.702 | 7.642 |
| T05 (11158) | Best | 11,032 | 10,669 | 11,071 | 11,557 | 11,090 | 11,391 |
|  | Mean | 10,656 | 10,490 | 10,722 | 10,983 | 10,686 | 10,816 |
|  | Time | 31.753 | 41.176 | 0.822 | 32.175 | 25.044 | 24.539 |
| T07 (10085) | Best | 9790 | 9250 | 9857 | 9392 | 9770 | 9739 |
|  | Mean | 9636 | 8897 | 9447 | 8895 | 9322 | 9240 |
|  | Time | 62.079 | 113.779 | 1.639 | 79.984 | 67.675 | 57.000 |
| T09 (10823) | Best | 10,482 | 10,260 | 10,643 | 10,207 | 10,661 | 10,539 |
|  | Mean | 10,111 | 9717 | 10,306 | 9783 | 10,249 | 10,190 |
|  | Time | 121.926 | 195.754 | 2.896 | 144.926 | 136.766 | 114.066 |
| TBest |  | 1 | 0 | 2 | 3 | 5 | 8 |
| TMean |  | 2 | 0 | 5 | 1 | 0 | 6 |
| RBest |  | 3.27 | 5.47 | 3.40 | 4.40 | 2.20 | 2.27 |
| RMean |  | 3.17 | 5.47 | 3.27 | 4.43 | 2.53 | 2.13 |
Table 9. Results of rank sum tests for MSO4 with the comparison algorithms.

| MSO4 vs. | PSO | DE | GHS | FA | MBO |
| F01 | 1 | 1 | 1 | 0 | 1 |
| F03 | 1 | 1 | 1 | 1 | 1 |
| F05 | 1 | 1 | 1 | 1 | 1 |
| F07 | 1 | 1 | 1 | 1 | 0 |
| F09 | 1 | 1 | 1 | 1 | 0 |
| S01 | 1 | 1 | 1 | 1 | 0 |
| S03 | 1 | 1 | 1 | 1 | 1 |
| S05 | 1 | 1 | 0 | 1 | 0 |
| S07 | 1 | 1 | 1 | 1 | 1 |
| S09 | 1 | 1 | 1 | 1 | 1 |
| T01 | 1 | 1 | 1 | 1 | 1 |
| T03 | 1 | 1 | 1 | 1 | 1 |
| T05 | 1 | 1 | 1 | 1 | 1 |
| T07 | 1 | 1 | 1 | 1 | 1 |
| T09 | 1 | 1 | 1 | 1 | 1 |
| “1” | 15 | 15 | 14 | 14 | 11 |
| “0” | 0 | 0 | 1 | 1 | 4 |
| “−1” | 0 | 0 | 0 | 0 | 0 |

Feng, Y.; An, H.; Gao, X. The Importance of Transfer Function in Solving Set-Union Knapsack Problem Based on Discrete Moth Search Algorithm. Mathematics 2019, 7, 17. https://doi.org/10.3390/math7010017