Article

Constructing the Neighborhood Structure of VNS Based on Binomial Distribution for Solving QUBO Problems

1 Graduate School of Science and Technology for Innovation, Yamaguchi University, 1677-1, Yoshida, Yamaguchi 753-8512, Japan
2 Department of Mathematics Education, Faculty of Teacher Training and Education, Sebelas Maret University, Surakarta 57126, Indonesia
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(6), 192; https://doi.org/10.3390/a15060192
Submission received: 10 May 2022 / Revised: 27 May 2022 / Accepted: 30 May 2022 / Published: 2 June 2022

Abstract:
The quadratic unconstrained binary optimization (QUBO) problem is categorized as an NP-hard combinatorial optimization problem. The variable neighborhood search (VNS) algorithm is one of the leading algorithms used to solve QUBO problems. As neighborhood structure change is the central concept in the VNS algorithm, the design of the neighborhood structure is crucial. This paper presents a modified VNS algorithm called "B-VNS", which can be used to solve QUBO problems. A binomial trial was used to construct the neighborhood structure, with the aim of reducing computation time. The B-VNS and VNS algorithms were tested on standard QUBO problems from Glover and Beasley, on standard max-cut problems from Helmberg–Rendl, and on those proposed by Burer, Monteiro, and Zhang. Finally, Mann–Whitney tests were conducted, using α = 0.05, to statistically compare the performance of the two algorithms. It was shown that both algorithms are able to provide good solutions, but the B-VNS algorithm runs substantially faster. Furthermore, the B-VNS algorithm performed the best in all of the max-cut problems, regardless of problem size, and in QUBO problems with sizes less than 500. The results suggest that using the binomial distribution to construct the neighborhood structure has potential for further development.

1. Introduction

Combinatorial optimization has attracted a good deal of attention, as it has many applications in various fields [1]. However, combinatorial optimization problems are not always easy to solve, especially those classified as NP-hard. In such situations, the use of an approximation method is a reasonable option [2]. One approximation approach that has attracted a great deal of attention in modern optimization is the metaheuristic method [3], which is designed to obtain good solutions within a reasonable time frame [4]. Even though it is not easy to prove that a solution obtained using a metaheuristic method is a global optimum [5], the results are often very close to it.
Depending on the case, a combinatorial problem can be formulated in various ways so that it can be easily solved. One method is the use of a Boolean or binary vector, which, despite its simplicity, is a compelling way to model many combinatorial problems. A specific binary formulation forms a major optimization problem category, called "quadratic unconstrained binary optimization (QUBO)", or "Ising model optimization" in some of the literature [6]. In the QUBO problem, given a symmetric n-square matrix of coefficients Q = (q_ij), the objective is to maximize the function:
f(X) = X^T Q X = Σ_{i=1}^{n} Σ_{j=1}^{n} q_ij x_i x_j,
where X = (x_i) is an n-dimensional vector of binary variables, i.e., x_i ∈ {0, 1} for i = 1, 2, ..., n. QUBO problems are categorized as NP-hard problems [7], and their decision form is NP-complete [6]. The implementation of the QUBO formulation can be found in many combinatorial problems, e.g., graph coloring, partition, and maximum cut (max-cut) [6], which were included in Karp's original 21 NP-complete problems [8]. More applications of QUBO were described by Glover and Kochenberger [9].
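To make the objective concrete, the following minimal sketch (a hypothetical instance and helper names, not from the paper) evaluates f(X) = X^T Q X for a small symmetric Q and brute-forces the maximum over all binary assignments:

```python
import numpy as np

def qubo_value(Q, x):
    """Evaluate f(X) = X^T Q X for a 0/1 vector x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Tiny 3-variable instance with illustrative coefficients.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -2.0],
              [ 0.0, -2.0,  1.0]])

# Brute force over all 2^3 assignments (only viable for small n;
# QUBO is NP-hard in general).
best = max(qubo_value(Q, [(b >> i) & 1 for i in range(3)])
           for b in range(2 ** 3))
```

For this instance, several assignments (e.g., x = (0, 1, 0)) attain the maximum value of 3.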
It is as difficult to solve QUBO problems as it is to solve other NP-hard problems. Many metaheuristic algorithms have been devised to solve them, based on the requirement that they are solved within a reasonable amount of time. Examples of these are simulated annealing [10], tabu search [11], genetic algorithm [12], and local search [13] algorithms. One particular metaheuristic algorithm that uses local search is the variable neighborhood search (VNS) algorithm [14]. The VNS algorithm can solve various problems, including QUBO problems.
As a single-trajectory-based algorithm, the VNS algorithm has the advantage that it is resource-efficient. Furthermore, its memory requirements are relatively low, compared to population-based metaheuristic algorithms. Therefore, the VNS algorithm is suitable for use in solving large-scale problems and does not need large memory allocations. It is a simple concept, and the VNS algorithm is, often, used in its original form, a modified form, or hybridized with another algorithm [15,16].
The VNS algorithm has been shown to perform well on various problems. Its applications include the max-cut combinatorial problem [17,18], the scheduling problem [19,20], the layout problem [21], the vehicle routing problem [22], and the multiprocessor scheduling problem with communication delays [23]. It has also been used to tune proportional–integral–derivative controllers in cyber–physical systems [24]. Not only has the VNS algorithm been applied in numerous ways, but it has also often been combined with other algorithms to obtain combined performance. It is possible to hybridize the VNS algorithm with other metaheuristic algorithms, such as the genetic algorithm [25], the particle swarm optimization algorithm [26], the migrating birds optimization algorithm [27], and the simulated annealing algorithm [28]. The VNS algorithm can also be implemented in parallel, using a graphical processing unit [29] to leverage its performance.
Many factors determine the performance of the VNS algorithm, i.e., the initialization method, neighborhood structure construction, local search procedures, and update mechanisms. Although there are many determining factors, neighborhood structure is the central concept of the VNS algorithm that significantly influences its performance. One version of the VNS algorithm is based on the dynamic neighborhood model, proposed by Mladenović and Hansen [14]. Changing the neighborhood structure during a search enables the algorithm to escape the local optimum trap [30], as it allows the algorithm to move from one basin of search to another. The construction of a neighborhood is, thus, crucial to the performance of the VNS algorithm. However, in terms of solving the QUBO problem, previous research reports regarding neighborhood construction are difficult to find. The Hamming distance is commonly used in the basic VNS algorithm to construct the neighborhood, when solving a QUBO problem [5,17,31].
The VNS algorithm uses a strictly monotonically increasing neighborhood structure when the local search does not yield a better solution. As a result, the basic VNS algorithm takes quite a long time to yield a good solution. A new version called "Jump VNS" was introduced to speed up the neighborhood construction process. It enables the neighborhood structure to leap ahead, in accordance with a parameter k_step ∈ ℕ [32]. The basic VNS algorithm, which does not have the jump ability, is obtained by setting k_step = 1. The performance of the Jump VNS algorithm does not appear to have been previously reported.
The VNS algorithm is simple, and the ability to slowly change the neighborhood structure is an advantage of its use. However, this is also a weakness: the gradual change may lengthen computation time. Although the search may be sped up with Jump VNS, its thorough search behavior may be lost. For example, setting the maximum distance in the VNS algorithm to 100 results in at most 100 neighborhood-structure changes, but setting k_step = 2 in Jump VNS results in only 50 changes, and even fewer if a larger k_step is used. Thus, the VNS and Jump VNS algorithms are not flexible.
This paper elaborates on the basic VNS algorithm and introduces a new method of improving it by focusing on the neighborhood structure by implementing binomial distribution. Instead of a strictly monotonic increase in neighborhood structure, the neighborhood distance follows a binomial distribution. Although the binomial distribution will cause a non-monotonic increase, the trend of a widening structure will remain the same. We investigated the potential of our proposed algorithm to be used in some QUBO and max-cut problems.

2. VNS Algorithm

The VNS algorithm is a well-known metaheuristic algorithm that utilizes dynamic neighborhood structure changes. A simple implementation of the VNS algorithm starts from a non-deterministic initial guess and then attempts refinements using a local search. The algorithm shifts from its initial point to a neighboring point before a local search is carried out. The algorithm moves to another neighborhood structure when the local search does not yield a better solution. It systematically exploits the following observations [33]:
  • Observation 1: A local minimum for one neighborhood structure is not necessarily so for another;
  • Observation 2: A global minimum is a local minimum for all of the possible neighborhood structures;
  • Observation 3: For many problems, local minima for one or several neighborhoods are relatively similar to each other.
In other words, according to observation 1, a local minimum for a specific neighborhood structure is not necessarily a local minimum for other neighborhood structures. Other neighborhoods may have other local optima. The second observation means that the global minimum will only be found after examining all of the possible local optima, which requires the examination of all of the possible neighborhood structures. Other neighborhoods should be examined if a local minimum is not the global minimum. The last observation is an empirical one, suggesting that a local optimum usually provides information that helps determine the global optimum [33]. For example, in the case of a multi-variable function, several variables often have the same value in the local optimum as in the global optimum [33].
There are several versions of the VNS algorithm; the most prominent is the dynamic neighborhood model proposed by Mladenović and Hansen [14]. They proposed a random shift to a neighboring point X′, used instead of the initial point X as the base point of a local search. They called this shifting "shaking". In this model, if the local search does not yield an improvement, the neighborhood structure is expanded. This change enables the algorithm to move to another basin to be exploited. To retain efficiency, the change must be limited by a parameter that defines the maximum number of neighborhood structures examined. If the local search yields an improvement, the structure is immediately shrunk back to the first structure. The use of this expand-and-shrink neighborhood structure during a search enables the VNS algorithm to avoid the local optimum trap [30], because the algorithm moves from one basin to another. Figure 1a shows the searching process [32] in the context of minimization. The pseudocode of the VNS algorithm is shown in Figure 1b.
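The expand-and-shrink loop described above can be sketched as follows (a minimal maximization skeleton with hypothetical shake and local_search callbacks, not the authors' Fortran implementation):

```python
def vns(x0, f, shake, local_search, k_max, iters):
    """Basic VNS skeleton for maximization: shake in N_k, refine with a
    local search, then reset k to 1 on improvement or expand k otherwise."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            x1 = local_search(shake(x, k))  # point in N_k(x), then refinement
            fx1 = f(x1)
            if fx1 > fx:        # improvement: recenter and shrink back to N_1
                x, fx, k = x1, fx1, 1
            else:               # no improvement: expand the neighborhood
                k += 1
    return x, fx
```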
A change in neighborhood structure can be illustrated as an expanding disc with point X at the center. The neighborhood structure expands following a variable k ∈ ℕ. As proposed elsewhere [17,31,34], the neighborhood structure N is associated with the variable k, which is limited by a parameter k_max ∈ ℕ. Thus, the Hamming distance d increases following k when solving QUBO problems. Accordingly, a point X′ ∈ N_k(X) means that the Hamming distance d(X, X′) = |X − X′| is exactly k. This neighborhood structure is the standard used in solving QUBO problems. The neighborhood structure of the basic VNS algorithm for solving QUBO problems is formulated as
N_k(X) = { Y : |Y − X| = k },
for k = 1, 2, ..., k_max.
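A shaking step that draws a point from N_k(X) can be written as follows (a sketch; the function name is ours):

```python
import random

def shake_hamming(x, k):
    """Return a point at Hamming distance exactly k from the 0/1 vector x,
    obtained by flipping k distinct randomly chosen elements."""
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y
```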

3. Proposed Neighborhood Structure

The strictly monotonic expanding neighborhood structure used in the VNS algorithm is tenable. However, there is no guarantee that this expansion type will always result in a better solution. To exploit this technique to solve any QUBO problem, we propose the use of binomial distribution to create a neighborhood structure. As the proposed algorithm is based on the VNS algorithm, it takes advantage of the characteristics of the VNS algorithm, while aiming to reduce the computation time required.
Our proposed structure enables the algorithm to expand the neighborhood by following a random schema, while maintaining a gradual expansion trend. In the basic VNS algorithm, the neighboring point X′ ∈ N_k(X) is obtained by flipping k randomly chosen elements of X. As a result, some elements are changed, from 1 to 0 or vice versa, while the rest remain. Instead of applying this basic neighborhood structure, we propose a mechanism that determines whether each element should be flipped, based on a flip probability. The proposed neighborhood structure is as follows: we define a trial T that flips each element of a vector X = (x_1, x_2, ..., x_n), to obtain X′ = (x′_1, x′_2, ..., x′_n) ∈ N(X), in accordance with the following rule:
x′_i = flip(x_i), with probability p ∈ [0, 1],
x′_i = x_i, with probability (1 − p),
where
flip(x_i) = 1 − x_i, for the {0, 1} encoding,
flip(x_i) = −x_i, for the ±1 encoding.
This trial satisfies the binomial requirements, as it is repeated n times, where n corresponds to the vector length, with a fixed probability p. Suppose a random variable A represents the number of flipped elements, and let a random variable D represent the obtained Hamming distance d; then, there is a clear correspondence between A and D. Take any vector X and generate a vector Y by following Equation (3), and suppose that m random elements of X were flipped in this process; then d(X, Y) = |X − Y| = m. As the flip is controlled by probability p, following the binomial distribution we have
μ_D = np,
σ_D = √(np(1 − p)).
Based on the binomial distribution, distances near np have a high probability of occurring.
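The trial T can be sketched as follows for the {0, 1} encoding (illustrative code; n = 1000 and p = 0.3 are chosen only to show that the observed distance concentrates near np):

```python
import random

def flip_trial(x, p):
    """Independently flip each 0/1 element of x with probability p.
    The Hamming distance D between x and the result follows Binom(n, p)."""
    return [1 - b if random.random() < p else b for b in x]

random.seed(0)  # reproducibility for the demonstration
n, p = 1000, 0.3
d = sum(flip_trial([0] * n, p))  # distance from the all-zeros vector
# E[D] = n*p = 300, sigma = sqrt(n*p*(1-p)) ~ 14.5, so d lands near 300.
```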
In the VNS algorithm, the distance is equal to the variable k and is limited by the parameter k_max. Therefore, k_max is the maximum distance that can be reached when constructing a neighborhood structure. To replace this concept, our proposed method uses a parameter p_max ∈ [0, 1] as a limitation. To apply the gradual expansion mechanism, we divide p_max into several equal chunks and ensure that the flip probability p corresponds to these chunks:
p_c = c · p_max / C,
for c = 1, 2, ..., C, where C ∈ ℕ is a parameter for the chunk size. This construction is applied every time a neighboring point X′ needs to be generated. Thus, this binomial neighborhood structure does not depend on the variable k but, instead, on the flip probability p_c, which corresponds to the chunk c:
N_c(X) = { Y : D(Y, X) ∼ Binom(p_c, n) },
for c = 1, 2, ..., C, where, considering (7), D(Y, X) is the Hamming distance between Y and X, and Y is generated by applying Equation (3).
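Generating a neighbor in N_c(X) then amounts to flipping with the chunked probability p_c (a sketch; the function name and the demonstration values are ours):

```python
import random

def binomial_neighbor(x, c, C, p_max):
    """Generate X' in N_c(X): flip each 0/1 element of x independently
    with probability p_c = c * p_max / C."""
    p_c = c * p_max / C
    return [1 - b if random.random() < p_c else b for b in x]

random.seed(0)
n, C, p_max = 500, 10, 0.2
d_small = sum(binomial_neighbor([0] * n, 1, C, p_max))  # E[D] = n*p_1 = 10
d_large = sum(binomial_neighbor([0] * n, C, C, p_max))  # E[D] = n*p_max = 100
```

As c grows, the expected distance n·p_c grows in the same gradual way that k does in the basic VNS algorithm, but the realized distance stays random.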
The distribution of the obtained neighboring point X′ corresponds to the probability mass function of the binomial distribution, as shown in Figure 2. The concept is advantageous because the distance of a neighboring point X′ can be estimated, even though it is random. The proposed neighborhood structure is illustrated in Figure 3a. Note that the illustration is a simplified version, as X′ can be obtained in any direction. The neighborhood expansion corresponds to the flip probability p_c, following the chunk variable c. The distance between the neighboring point X′ and X will be random, even though it follows the characteristics of the binomial distribution. In contrast, in the basic VNS algorithm, the neighborhood structure can be illustrated as an expanding disc, as shown in Figure 3b. There, the expansion corresponds to the variable k and results in d(X, X′) being exactly k.
The complete implementation of our proposed method for solving QUBO problems is similar to that of the VNS algorithm, except for the neighborhood construction. Unlike the basic VNS algorithm, we use the parameter p_max to control the distance. However, just like in VNS, neighborhood structure change should be limited; therefore, we use the chunk-size parameter C. Thus, we only change the construction of the neighborhood structure. For simplicity, we call our proposed algorithm "B-VNS", while "VNS" refers to the basic VNS algorithm [14,17,31,34].
Changing the construction of the neighborhood structure changes the algorithm's behavior, and the B-VNS algorithm is more flexible than the VNS algorithm. In terms of k_max in the VNS algorithm, the B-VNS algorithm can reach almost the same neighborhood structure by setting the parameter p_max such that
μ_D = n · p_max ≈ k_max.
The chunk size C can be set equal to k_max, to give an almost equivalent condition, but it can also be set larger or smaller than k_max. Setting the chunk size C to less than k_max has the potential to speed up computation.
In addition to the potential advantages of B-VNS, there are also potential weaknesses. Like the Jump VNS algorithm, the B-VNS algorithm may lose the thorough search behavior. The VNS algorithm, as described by Hansen and Mladenović [5,31] and Festa et al. [17], performs a thorough search by starting with the nearby neighborhood structure and slowly expanding it when the local search fails to improve the solution. In contrast, in the proposed algorithm, the distances of the neighborhood structures follow a random pattern. This random characteristic can be advantageous, as it can increase search exploration. However, a drawback is that it may cause B-VNS to miss basins that may contain the global optimum. Consequently, this random characteristic can possibly incur a longer searching time than the basic VNS algorithm.

4. Benchmarking

Considering that the B-VNS algorithm is a modification of the basic VNS algorithm, it is fair and appropriate to investigate the performance of our proposed B-VNS algorithm against the VNS algorithm [17,30] alone. The investigation did not involve any other algorithms; therefore, the impact of our modification can be evaluated using the VNS algorithm's performance as the baseline.
The investigation was conducted by running simulations on some standard QUBO problems, taken from the OR-Library [35] and available at [36]. The best-known objective values for those problems were compiled from [10,11,12,37,38]. We also tested the B-VNS algorithm on some standard max-cut problems. We used problems from Helmberg–Rendl [39], which can be downloaded from [40]. Those problems were generated using a machine-independent graph generator called rudy, developed by Giovanni Rinaldi. We also tested the B-VNS algorithm on problems proposed by Burer, Monteiro, and Zhang [41], as they have different problem constructions. The Helmberg–Rendl problems consist of random, planar, and toroidal graphs, while those from Burer et al. are cubic lattice graphs that represent Ising spin-glass models [34]. The problems by Burer et al. can be downloaded from [42]. The best-known max-cut values were summarized from [34,43,44,45,46]. It is worth noting that all of these best-known values for QUBO and max-cut problems are open for improvement and may change in the future.
The simulations used practically the same program code, written in Fortran, for both the VNS and B-VNS algorithms. The only difference was in the neighborhood construction section; the remaining sections were identical. Therefore, we could accurately measure the performance difference between our proposed neighborhood structure and the standard VNS structure.
The simulation was compiled and run on a CentOS 8 system, powered by an Intel Core i7-8700 processor with 16 GB RAM. The simulation was run independently 30 times for each problem (N_Sim = 30). Therefore, the sample size for both the VNS and B-VNS algorithms on each test item was 30, which was sufficient for conducting statistical tests. The best objective value and computation time obtained for each iteration and simulation were recorded. We used three criteria to evaluate the performance. After obtaining the best-known values, we calculated the difference (BestDif) between the best-known value and the best value obtained from the 30 simulations. If BestDif = 0, then the algorithm obtained the best-known value. A value of BestDif > 0 means the algorithm failed to obtain the best-known value; on the other hand, BestDif < 0 means the algorithm exceeded the best-known value.
BestSim = max({ f_Sim_i : i = 1, 2, ..., N_Sim }),
BestDif = BestKnown − BestSim.
We calculated the average difference (AvgDif) between the best-known and obtained objective values over all 30 simulations:
AvgDif = Σ_{i=1}^{N_Sim} (f_Sim_i − BestKnown) / N_Sim.
Lastly, we calculated the average computation time (AvgT). For all of the criteria, the algorithm that gives the lowest results is the best one. However, as a direct comparison of samples by their average values could lead to a biased conclusion, we conducted statistical analysis to compare the results precisely, using JASP [47].
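The first two criteria can be computed per problem as follows (a sketch with hypothetical function and variable names; the AvgDif sign convention follows the formula above, so for maximization it is at most 0 and closer to 0 is better):

```python
def evaluate_runs(best_known, f_sims):
    """Summarize N_Sim recorded objective values for a maximization problem.

    BestDif = BestKnown - max(f_sims): 0 means the best-known value was reached.
    AvgDif  = mean of (f_Sim_i - BestKnown) over all runs.
    """
    best_dif = best_known - max(f_sims)
    avg_dif = sum(f - best_known for f in f_sims) / len(f_sims)
    return best_dif, avg_dif

# Three illustrative runs against a best-known value of 100.
bd, ad = evaluate_runs(100, [100, 98, 96])
```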

4.1. Test on QUBO Problems

The local search, as described in [37], was used in the tested algorithms. Figure 4 shows the pseudocode of the local search, used for QUBO problems. We applied equal conditions for the VNS and B-VNS algorithms. We aimed to reduce the values of all of the parameters, to reduce the computation time, while obtaining high-quality solutions.
In the preliminary step, we used four problems for parameter tuning: two problems each from Glover and Beasley, with sizes of 100. We started from larger parameter values and gradually reduced them. We found that k_max = 0.02n was adequate for the VNS algorithm to obtain good solutions within short computation times. Hence, we set p_max = 0.02 in the B-VNS algorithm, to make the conditions equal. The number of iterations was set at 0.2n for both the VNS and B-VNS algorithms, where the variable n is the problem size, equal to the length of the solution vector. For the B-VNS algorithm, the chunk size C was set at 0.02n, so that it was equal to k_max in the VNS algorithm. All of these settings created equal conditions for the VNS and B-VNS algorithms. Table 1 shows the test results for the Glover problems [11], while Table 2 shows the test results for the Beasley problems [37].
The tests on the Glover problems show that the B-VNS algorithm was able to give the same good results as the VNS algorithm in terms of objective function values. Both algorithms obtained the best-known value in almost all of the problems and only failed on the 6b problem. The statistical analysis shows that the differences between the B-VNS and VNS algorithms were insignificant in all of the Glover test cases, as seen from the p-values greater than 0.05. The Mann–Whitney test (α = 0.05) was used because most samples did not satisfy the normality and homoscedasticity assumptions. In terms of computation time on the Glover problems, statistical analyses show that the B-VNS algorithm was faster than the VNS algorithm, specifically in problems with sizes up to 200. In problems with sizes of 500, the B-VNS algorithm was comparable to the VNS algorithm.
The test results for the Beasley problems also show a trend similar to the tests on the Glover problems. However, the B-VNS algorithm obtained all of the best-known values, while the VNS algorithm failed on problems bqp50_1, bqp100_1, and bqp250_8 (the notation bqpn_m refers to the Beasley problem with size n and number m). As in the tests on the Glover problems, the differences were insignificant, so both the B-VNS and VNS algorithms were shown to be good algorithms for solving Beasley problems.
Statistical analyses of computation time for the Beasley problems with n < 500 show that the B-VNS algorithm was significantly faster than the VNS algorithm. The computation times of the two algorithms were statistically the same only on problems bqp50_1, bqp50_5, bqp50_6, bqp100_8, and bqp250_7. For problems with n > 500, the computation speeds of the B-VNS and VNS algorithms were comparable.
All of the tests on QUBO problems under equal conditions show that the B-VNS and VNS algorithms are good, as they reached most of the best-known values, with a few exceptions where the VNS algorithm failed. Moreover, the B-VNS algorithm ran substantially faster than the VNS algorithm, particularly on problems with sizes less than 500. Therefore, the B-VNS algorithm is best suited to problems with sizes less than 500 and is comparable to the VNS algorithm for larger problems.

4.2. Test on Max-Cut Problems

The representation of the max-cut problem as a QUBO problem can be found in [9]. Instead of using the QUBO form, we used the common formulation of max-cut. Given an undirected graph G = (V, E), with node set V = {v_1, v_2, ..., v_n} and non-negative weights w_ij = w_ji on the edges (i, j) ∈ E, the goal is to find a partition of G into two disjoint node subsets S and S^c that maximizes the cut value
cut(S, S^c) = Σ_{α∈S, β∈S^c} w_αβ.
This definition corresponds to the following forms:
max cut(S, S^c) = (1/2) Σ_{i<j} w_ij (1 − x_i x_j),
s.t. x_i, x_j ∈ {−1, 1},
or
max cut(S, S^c) = Σ_{i<j} w_ij (x_i − x_j)^2,
s.t. x_i, x_j ∈ {0, 1}.
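For the ±1 encoding, the cut value can be evaluated directly from the weight matrix (an illustrative sketch with a hypothetical unit-weight triangle instance):

```python
import itertools

def cut_value(w, x):
    """Cut value for ±1 labels x: (1/2) * sum over i < j of w_ij * (1 - x_i x_j)."""
    n = len(x)
    return 0.5 * sum(w[i][j] * (1 - x[i] * x[j])
                     for i in range(n) for j in range(i + 1, n))

# Unit-weight triangle: the best partition separates one node from the
# other two, cutting two of the three edges.
w = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
best = max(cut_value(w, x) for x in itertools.product((-1, 1), repeat=3))
```

Here the brute-force maximum is 2, corresponding to the two edges crossing the best partition.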
The ±1 encoding was used in this test. We applied the local search process reported elsewhere [15,30] to both the VNS and B-VNS algorithms. The local search proceeds as follows: with X as the current solution, corresponding to the partition (S, S^c), a new partition (S′, S′^c) is defined:
(S′, S′^c) = (S \ {i}, S^c ∪ {i}), if node i ∈ S,
(S′, S′^c) = (S ∪ {i}, S^c \ {i}), if node i ∈ S^c.
For each node i ∈ V, a function δ associated with solution X was defined as
δ(i) = Σ_{j∈S} w_ij − Σ_{j∈S^c} w_ij.
To improve the objective value, a node i is moved from one subset of V to the other in the following situations:
1. if i ∈ S and δ(i) > 0, then S = S \ {i}, S^c = S^c ∪ {i};
2. if i ∈ S^c and δ(i) < 0, then S^c = S^c \ {i}, S = S ∪ {i}.
This local search examined all of the possible movements starting from the first node.
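In the ±1 encoding, the move rules above reduce to flipping x_i whenever that increases the cut: the gain of flipping node i equals x_i · Σ_j w_ij x_j, which matches the sign conditions on δ(i). A minimal sketch (the function name is ours):

```python
def local_search_maxcut(w, x):
    """One-flip local search for max-cut with ±1 labels: repeatedly move a
    node across the partition while some move improves the cut value."""
    n = len(x)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Gain of flipping node i; positive gain means delta(i) favors a move.
            gain = x[i] * sum(w[i][j] * x[j] for j in range(n) if j != i)
            if gain > 0:
                x[i] = -x[i]
                improved = True
    return x
```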
For the Burer et al. problems, the parameter k_max was set at 0.1n, while p_max was set at 0.1. The number of iterations was set at 0.5n for both the VNS and B-VNS algorithms. We designed different test conditions for the Helmberg–Rendl problems. The parameter k_max in the VNS algorithm was set at 100 for all of the Helmberg–Rendl problems used in the test, regardless of problem size, as suggested in [34]. Therefore, the parameter p_max in the B-VNS algorithm was set at 100/n, which enabled the B-VNS algorithm to reach the same maximum neighborhood structure as the VNS algorithm. We found that setting the number of iterations to 0.2n was adequate to obtain good solutions while reducing computation times.
Unlike the tests on QUBO problems, we applied a non-equal condition by setting the chunk size C = 0.9·k_max for the Helmberg–Rendl problems as well as for the Burer et al. problems. This non-equal condition was used to investigate the impact of setting the chunk size to less than k_max. This setting carries the risk that the B-VNS algorithm will be much less thorough than the VNS algorithm, but it was used with the aim of being faster. Table 3 shows the test results for the Helmberg–Rendl problems, while Table 4 shows the test results for the Burer et al. problems. The Mann–Whitney test with α = 0.05 was conducted, as most samples did not satisfy the normality and homoscedasticity assumptions.
Although the B-VNS algorithm is much less thorough, the results show that it was still able to provide solutions as satisfactory as those of the VNS algorithm. Setting the chunk size C smaller than k_max had an insignificant effect on the ability of the B-VNS algorithm to achieve the objective values. Moreover, the B-VNS algorithm was shown to be substantially faster than the VNS algorithm on all of the tested problems, regardless of size. Therefore, the B-VNS algorithm clearly performed better than the VNS algorithm on max-cut problems.

4.3. Discussion

We tested the basic VNS and B-VNS algorithms, using simulations on several QUBO and max-cut problems. QUBO problems from Glover and Beasley as well as max-cut problems from Helmberg–Rendl and Burer et al. were chosen for the test, since they are benchmarking standards. Thus, our simulations gave a good overview of the performance of the two algorithms.
When solving the QUBO problems, the parameters of the two algorithms were set to be equivalent. Compared to the problem size n, the parameters of both algorithms were set to very small values. We used k_max = 0.02n and p_max = 0.02. As a result, k_max in the VNS algorithm and the maximum μ_D in the B-VNS algorithm only ranged from 1 to 10, while n ranged from 30 to 500. As the number of iterations was set at 0.2n, the number of iterations was in the range of 6 to 100. Nevertheless, both the VNS and B-VNS algorithms were able to obtain the best-known values. The observed failure on just a few problems is understandable, given the small parameter values used. The results could be improved by enlarging the parameter values, i.e., by increasing k_max in the VNS algorithm, p_max in the B-VNS algorithm, and the number of iterations. However, increasing the parameter values may result in a longer computation time.
For the QUBO problems, which applied equal conditions, the B-VNS algorithm was shown to be substantially faster than the VNS algorithm for problems with sizes of less than 500. Moreover, the B-VNS algorithm was able to provide good solutions to all of the tested problems. Solving the standard QUBO problems under equal conditions showed that the B-VNS algorithm can match the solution quality of the VNS algorithm while running faster. This trend also occurred in the tests on max-cut problems, even though the parameter settings were under non-equal conditions that risked the B-VNS algorithm being less thorough. The experiments show that reducing the thorough search behavior, by setting the chunk size C = 0.9·k_max in the B-VNS algorithm, did not lessen the solution quality, and the algorithm even ran substantially faster on all of the max-cut problems tested.
Setting the chunk-size parameter C to less than k_max carries the risk of lowering the accuracy of the B-VNS algorithm. The characteristic of the binomial distribution, which results in a pattern of randomly increasing distances, may also reduce accuracy: a small chunk size and the binomial distribution can cause the algorithm to jump too far and miss some basins. However, our experiments and statistical analysis show that the B-VNS algorithm was still able to provide solutions as satisfactory as those of the VNS algorithm. Therefore, setting the chunk-size parameter to slightly less than k_max is advantageous for both solution quality and efficiency.
The flexible design implemented by the chunk-size parameter C gives the B-VNS algorithm the potential to be faster than the VNS algorithm. Although there is a risk that the B-VNS algorithm loses its thorough-search characteristic, which could in turn lengthen computation times, our experiments showed that this was not the case. The results indicate that setting the B-VNS algorithm's parameters equal to those of the VNS algorithm avoids this risk.
All of the tests on standard QUBO and standard max-cut problems show that both the B-VNS and VNS algorithms perform well: they achieved most of the best-known values or came very close to them. It should be noted that the referenced best-known values were obtained using specially modified algorithms or larger parameter settings. The tests show that the B-VNS algorithm performed best on all of the max-cut problems, regardless of problem size, and that it is best suited to QUBO problems with sizes of less than 500.
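The pairwise comparisons reported in the tables rest on the Mann–Whitney U statistic. A pure-Python rank-sum computation of U (average ranks for ties, no tie or continuity correction) is sketched below; for the actual analysis the paper used the JASP package [47], which also supplies the p-value.

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for sample a versus sample b,
    computed via the rank-sum formula U_a = R_a - n_a(n_a + 1)/2."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0        # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg             # tied values share their average rank
        i = j
    r_a = sum(r for r, (_, src) in zip(ranks, combined) if src == 0)
    return r_a - len(a) * (len(a) + 1) / 2.0
```

U ranges from 0 (every value in a below every value in b) to n_a * n_b; the significance decision at α = 0.05 then comes from the corresponding p-value.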
We tested the B-VNS and VNS algorithms to investigate their performance. However, this study has limitations. First, we used the most basic form of the VNS algorithm; we did not use an advanced or hybrid form, because we focused on the construction mechanism of the neighborhood structure. Many studies of the VNS algorithm have been carried out previously, but reports on the use of alternative neighborhood structures to solve QUBO problems are very rare, so our study is useful for developing the VNS algorithm at a fundamental level. Even though we used small parameter settings, this was sufficient to observe the potential of both algorithms: Festa et al. reported that the VNS algorithm is sensitive to problem characteristics, so early iterations can be used to predict the potential of the algorithm [34]. The results in this paper are also inseparable from the simulation setup we used. One of the crucial factors in metaheuristic simulation is the random-number generator. We used the Fortran language and applied a dynamic random seed: the seed changes over time, so each call to the random-number function uses a different seed, avoiding specific patterns in the generated random numbers. The programming approach applied in the simulation code may also impact the results. Our tests involved only small- to medium-sized problems; nevertheless, they show that using the binomial distribution can potentially enhance the VNS algorithm, so experimenting on much larger cases becomes a challenge for future work. Considering that hybrid models tend to be more powerful, a hybrid form of the B-VNS algorithm is worth studying, and, given the successful parallel implementations of the VNS algorithm [48], the potential of the B-VNS algorithm can be further developed by applying a suitable hybrid model.
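The dynamic-seed idea described above amounts to reseeding the generator from the clock so that repeated runs do not share a random-number stream. The snippet below is an illustrative Python analogue of that approach, not the Fortran simulation code itself.

```python
import random
import time

def fresh_rng():
    """Return a random-number generator seeded from the current clock so that
    successive calls use different random streams. Illustrative analogue of
    the dynamic random seed applied in the Fortran simulation."""
    return random.Random(time.time_ns())
```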

5. Conclusions

The B-VNS algorithm is a VNS algorithm modified by applying the binomial distribution to construct the neighborhood. As a result, the expansion of the neighborhood structure is no longer strictly monotonic but random, following the characteristics of the binomial distribution. Our experiments used QUBO problems from Glover [11] and Beasley [37], as well as max-cut problems from Helmberg–Rendl [39] and Burer et al. [41]. We confirmed that the B-VNS and VNS algorithms are suitable for solving QUBO and max-cut problems. The experimental results show that both algorithms can provide good solutions, but the B-VNS algorithm runs faster. Furthermore, the B-VNS algorithm performed best on all of the max-cut problems, regardless of problem size, and it performed best on QUBO problems with sizes of less than 500. Although we did not test large-sized problems, our results suggest that using the binomial distribution to construct neighborhood structures can improve performance by reducing computation time. We are currently designing a hybrid algorithm that combines the B-VNS algorithm with a population-based metaheuristic while implementing parallel programming.
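The overall scheme can be summarized as a shake–improve–move loop in which the shaking distance is binomially distributed. The skeleton below is a simplified sketch under stated assumptions: the probability schedule and the chunk-size parameter C are condensed into a single random draw, and a best-improvement 1-flip local search stands in for the QUBO local search of Beasley [37]; it is not the paper's exact algorithm.

```python
import random

def local_search(f, x):
    """Best-improvement 1-flip local search for a binary objective f
    (assumed form of the local-search step; maximization)."""
    fx = f(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                 # tentatively flip bit i
            fy = f(x)
            if fy > fx:
                fx, improved = fy, True
            else:
                x[i] ^= 1             # undo a non-improving flip
    return x

def b_vns(f, x0, p_max, iterations, seed=0):
    """Skeleton of the B-VNS idea: shake with a binomially distributed flip
    distance, improve with local search, and keep the better point."""
    rng = random.Random(seed)
    best = local_search(f, x0[:])
    for _ in range(iterations):
        p_c = rng.uniform(1.0 / len(best), p_max)          # flip probability
        trial = [b ^ (rng.random() < p_c) for b in best]   # binomial shake
        trial = local_search(f, trial)
        if f(trial) > f(best):                             # move or stay
            best = trial
    return best
```

For example, with the toy objective f(x) = sum(x), the loop drives any starting point to the all-ones vector; a real run would instead evaluate the QUBO objective x^T Q x.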

Author Contributions

Conceptualization, D.P. and M.K.; Formal analysis, D.P.; Funding acquisition, M.K.; Investigation, D.P.; Methodology, D.P.; Project administration, M.K.; Resources, M.K.; Software, D.P.; Supervision, M.K.; Validation, M.K.; Visualization, D.P.; Writing—original draft, D.P.; Writing—review and editing, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Japan Society for the Promotion of Science, grant number 20K11973.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are presented in the main text.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Paschos, V.T. Applications of Combinatorial Optimizations; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2014.
  2. Talbi, E.G. Metaheuristics; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2009.
  3. Yang, X.S. Metaheuristic Optimization: Algorithm Analysis and Open Problems. In Proceedings of the Experimental Algorithms; Pardalos, P.M., Rebennack, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 21–32.
  4. Sörensen, K.; Glover, F.W. Metaheuristics. In Encyclopedia of Operations Research and Management Science; Gass, S.I., Fu, M.C., Eds.; Springer: Boston, MA, USA, 2013; pp. 960–970.
  5. Glover, F.; Kochenberger, G.A. (Eds.) Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2003.
  6. Papalitsas, C.; Andronikos, T.; Giannakis, K.; Theocharopoulou, G.; Fanarioti, S. A QUBO Model for the Traveling Salesman Problem with Time Windows. Algorithms 2019, 12, 224.
  7. Date, P.; Arthur, D.; Pusey-Nazzaro, L. QUBO formulations for training machine learning models. Sci. Rep. 2021, 11, 10029.
  8. Karp, R.M. Reducibility Among Combinatorial Problems. In 50 Years of Integer Programming 1958–2008: From the Early Years to the State-of-the-Art; Springer: Berlin/Heidelberg, Germany, 2010; pp. 219–241.
  9. Glover, F.; Kochenberger, G.; Du, Y. A Tutorial on Formulating and Using QUBO Models. CoRR 2018. Available online: http://xxx.lanl.gov/abs/1811.11538 (accessed on 6 May 2022).
  10. Katayama, K.; Narihisa, H. Performance of simulated annealing-based heuristic for the unconstrained binary quadratic programming problem. Eur. J. Oper. Res. 2001, 134, 103–119.
  11. Glover, F.; Kochenberger, G.A.; Alidaee, B. Adaptive Memory Tabu Search for Binary Quadratic Programs. Manag. Sci. 1998, 44, 336–345.
  12. Merz, P.; Freisleben, B. Genetic Algorithms for Binary Quadratic Programming. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation (GECCO'99); Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1999; pp. 417–424.
  13. Boros, E.; Hammer, P.L.; Tavares, G. Local search heuristics for Quadratic Unconstrained Binary Optimization (QUBO). J. Heuristics 2007, 13, 99–132.
  14. Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. 1997, 24, 1097–1100.
  15. Duarte, A.; Sánchez, A.; Fernández, F.; Cabido, R. A Low-Level Hybridization between Memetic Algorithm and VNS for the Max-Cut Problem. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation (GECCO'05), Washington, DC, USA, 25–29 June 2005; Association for Computing Machinery: New York, NY, USA, 2005; pp. 999–1006.
  16. Kim, S.H.; Kim, Y.H.; Moon, B.R. A Hybrid Genetic Algorithm for the MAX CUT Problem. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (GECCO'01), San Francisco, CA, USA, 7–11 July 2001; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001; pp. 416–423.
  17. Festa, P.; Pardalos, P.; Resende, M.; Ribeiro, C. GRASP and VNS for Max-Cut. In Proceedings of the Extended Abstracts of the Fourth Metaheuristics International Conference, Porto, Portugal, 16–20 July 2001; pp. 371–376.
  18. Resende, M. GRASP With Path Re-linking and VNS for MAXCUT. In Proceedings of the 4th MIC, Porto, Portugal, 16–20 July 2001.
  19. Ramli, M.A.M.; Bouchekara, H.R.E.H. Solving the Problem of Large-Scale Optimal Scheduling of Distributed Energy Resources in Smart Grids Using an Improved Variable Neighborhood Search. IEEE Access 2020, 8, 77321–77335.
  20. Wang, F.; Deng, G.; Jiang, T.; Zhang, S. Multi-Objective Parallel Variable Neighborhood Search for Energy Consumption Scheduling in Blocking Flow Shops. IEEE Access 2018, 6, 68686–68700.
  21. Garcia-Hernandez, L.; Salas-Morera, L.; Carmona-Muñoz, C.; Abraham, A.; Salcedo-Sanz, S. A Hybrid Coral Reefs Optimization—Variable Neighborhood Search Approach for the Unequal Area Facility Layout Problem. IEEE Access 2020, 8, 134042–134050.
  22. He, M.; Wei, Z.; Wu, X.; Peng, Y. An Adaptive Variable Neighborhood Search Ant Colony Algorithm for Vehicle Routing Problem With Soft Time Windows. IEEE Access 2021, 9, 21258–21266.
  23. El Cadi, A.A.; Atitallah, R.B.; Mladenović, N.; Artiba, A. A Variable Neighborhood Search (VNS) metaheuristic for Multiprocessor Scheduling Problem with Communication Delays. In Proceedings of the 2015 International Conference on Industrial Engineering and Systems Management (IESM), Seville, Spain, 21–23 October 2015; pp. 1091–1095.
  24. Silva, G.; Silva, P.; Santos, V.; Segundo, A.; Luz, E.; Moreira, G. A VNS Algorithm for PID Controller: Hardware-In-The-Loop Approach. IEEE Latin Am. Trans. 2021, 19, 1502–1510.
  25. Phanden, R.K.; Demir, H.I.; Gupta, R.D. Application of genetic algorithm and variable neighborhood search to solve the facility layout planning problem in job shop production system. In Proceedings of the 2018 7th International Conference on Industrial Technology and Management (ICITM), Oxford, UK, 7–9 March 2018; pp. 270–274.
  26. Dabhi, D.; Pandya, K. Uncertain Scenario Based MicroGrid Optimization via Hybrid Levy Particle Swarm Variable Neighborhood Search Optimization (HL_PS_VNSO). IEEE Access 2020, 8, 108782–108797.
  27. Zhang, S.; Gu, X.; Zhou, F. An Improved Discrete Migrating Birds Optimization Algorithm for the No-Wait Flow Shop Scheduling Problem. IEEE Access 2020, 8, 99380–99392.
  28. Zhang, C.; Xie, Z.; Shao, X.; Tian, G. An effective VNSSA algorithm for the blocking flowshop scheduling problem with makespan minimization. In Proceedings of the 2015 International Conference on Advanced Mechatronic Systems (ICAMechS), Beijing, China, 22–24 August 2015; pp. 86–89.
  29. Montemayor, A.S.; Duarte, A.; Pantrigo, J.J.; Cabido, R.; Carlos, J. High-performance VNS for the Max-cut problem using commodity graphics hardware. In Proceedings of the Mini-Euro Conference on VNS (MECVNS 05), Tenerife, Spain, 23–25 November 2005; pp. 1–11.
  30. Ling, A.; Xu, C.; Tang, L. A modified VNS metaheuristic for max-bisection problems. J. Comput. Appl. Math. 2008, 220, 413–421.
  31. Hansen, P.; Mladenović, N.; Brimberg, J.; Pérez, J.A.M. Variable Neighborhood Search. In Handbook of Metaheuristics; Gendreau, M., Potvin, J.Y., Eds.; International Series in Operations Research & Management Science; Springer: Berlin/Heidelberg, Germany, 2010; Chapter 3; pp. 61–184.
  32. Hansen, P.; Mladenović, N.; Moreno Pérez, J.A. Variable neighbourhood search: Methods and applications. 4OR 2008, 6, 319–360.
  33. Hansen, P.; Mladenović, N. A Tutorial on Variable Neighborhood Search; Technical Report; Les Cahiers du GERAD, HEC Montréal and GERAD: Montreal, QC, Canada, 2003.
  34. Festa, P.; Pardalos, P.; Resende, M.; Ribeiro, C. Randomized heuristics for the Max-Cut problem. Optim. Methods Softw. 2002, 17, 1033–1058.
  35. Beasley, J.E. OR-Library: Distributing Test Problems by Electronic Mail. J. Oper. Res. Soc. 1990, 41, 1069–1072.
  36. Beasley, J.E. OR-Library. 2004. Available online: http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files (accessed on 22 September 2021).
  37. Beasley, J.E. Heuristic Algorithms for the Unconstrained Binary Quadratic Programming Problem; Technical Report; The Management School, Imperial College: London, UK, 1998.
  38. Wiegele, A. Biq Mac Library—A Collection of Max-Cut and Quadratic 0-1 Programming Instances of Medium Size; Technical Report; Alpen-Adria-Universität Klagenfurt, Institut für Mathematik: Klagenfurt, Austria, 2007.
  39. Helmberg, C.; Rendl, F. A Spectral Bundle Method for Semidefinite Programming. SIAM J. Optim. 2000, 10, 673–696.
  40. Ye, Y. Gset. 2003. Available online: https://web.stanford.edu/~yyye/yyye/Gset (accessed on 22 September 2021).
  41. Burer, S.; Monteiro, R.D.C.; Zhang, Y. Rank-Two Relaxation Heuristics for MAX-CUT and Other Binary Quadratic Programs. SIAM J. Optim. 2002, 12, 503–521.
  42. Martí; Duarte; Laguna. Maxcut Problem. 2009. Available online: http://grafo.etsii.urjc.es/optsicom/maxcut/set2.zip (accessed on 22 September 2021).
  43. Kochenberger, G.A.; Hao, J.K.; Lü, Z.; Wang, H.; Glover, F.W. Solving large scale Max Cut problems via tabu search. J. Heuristics 2013, 19, 565–571.
  44. Wang, Y.; Lü, Z.; Glover, F.; Hao, J.K. Probabilistic GRASP-Tabu Search algorithms for the UBQP problem. Comput. Oper. Res. 2013, 40, 3100–3107.
  45. Palubeckis, G.; Krivickienė, V. Application of Multistart Tabu Search to the Max-Cut Problem. Inf. Technol. Control 2004, 31, 29–35.
  46. Boros, E.; Hammer, P.L.; Sun, R.; Tavares, G. A max-flow approach to improved lower bounds for quadratic unconstrained binary optimization (QUBO). Discret. Optim. 2008, 5, 501–529.
  47. JASP Team. JASP, Version 0.16; Computer Software; JASP Team, 2021. Available online: https://jasp-stats.org/faq/ (accessed on 6 May 2022).
  48. Kalatzantonakis, P.; Sifaleras, A.; Samaras, N. Cooperative versus non-cooperative parallel variable neighborhood search strategies: A case study on the capacitated vehicle routing problem. J. Glob. Optim. 2020, 78, 327–348.
Figure 1. VNS algorithm. (a) The algorithm gradually expands its neighborhood structure according to k in order to escape a local optimum trap [32]. (b) The distance k is increased by one at a time to build a broader neighborhood structure.
Figure 2. Distribution of X related to p. A neighboring point X at distance n p_c has a high probability of appearing.
Figure 3. Neighborhood structure. (a) The neighborhood structure of B-VNS becomes wider according to probability p_c, while (b) the neighborhood structure of basic VNS becomes wider according to the variable k = 1, 2, ..., k_max [32].
Figure 4. Local search for QUBO [37].
Table 1. Results for Glover [11] problems.
Problem Number | n | Best Known | VNS: Best Dif / Avg Dif / Time ** | B-VNS: Best Dif / Avg Dif / Time ** | Test (p-Value) *: Dif / Time
1a50341401.6670.003010.0030.459***
2a606063000.012000.011-0.006
3a70603708.90.017011.4670.0160.773***
4a808598000.035000.030-0.009
5a505737000.00403.8670.003-0.041
6a30398000***00***-***
7a30454100***00***-***
8a10011,10901.4670.128000.121-***
1b40133018.033***021***0.512-
2b5012100.733***02.667***0.096***
3b6011804.6670.00108.5330.0010.0650.401
4b70129013.8670.004018.7330.0030.1120.005
5b80150000.012000.011-***
6b901461335.9330.0211337.8330.0160.277***
7b80160000.02702.5330.025-***
8b9014506.6330.06205.4330.0530.307***
9b100137020.15602.4330.1411***
10b12515400.2330.46700.2330.4141***
1c405058000.001000.001-0.507
2c506213000.003000.003-0.107
3c606665000.015000.013-0.003
4c707398000.020000.017-***
5c80736200.8670.035000.028-***
6c905824027.4670.048021.1670.0470.1860.043
7c1007225000.134000.123-***
1d1006333016.90.135013.7330.1290.5830.006
2d1006579031.9670.152019.6330.1450.2140.022
3d1009261014.5670.157016.0670.1440.658***
4d10010,72705.3670.16609.0670.1510.056***
5d10011,626011.6330.179014.70.1660.4710.001
6d10014,207050.17101.6670.1550.313***
7d10014,47608.90.19407.7330.1730.763***
8d10016,352000.176000.162-***
9d10015,65601.130.18000.30.1640.305***
10d10019,102000.184000.170-***
1e20016,464012.8334.689011.7674.2370.576***
2e20023,395085.63507.0675.2320.5790.001
3e20025,243006.172005.731-***
4e20035,59400.5335.07100.5334.6971***
5e20035,154020.335.995031.2335.9240.1270.264
1f50061,19402578.19401.2559.6440.6790.004
2f500100,16100.1545.88400.2521.0280.570***
3f500138,035038.967521.415037.9521.5020.5940.971
4f500172,771033.6440.721018450.5500.3540.050
5f500190,50702.833499.02103.3511.8530.5130.04
*: Mann–Whitney test; the difference is significant if p-value < α, (α = 0.05). **: average computation time (second). ***: <0.001.
Table 2. Results for Beasley [37] problems.
n | Problem Number | Best Known | VNS: Best Dif / Avg Dif / Time ** | B-VNS: Best Dif / Avg Dif / Time ** | Test (p-Value) *: Dif / Time
501209868127.30.003093.8670.0030.0620.305
237020150.003022.9670.0030.4970.006
34626011.3670.0030190.0030.2480.025
43544021.5330.003019.7330.0030.8630.006
54012010.6670.00302.9330.0030.1700.677
6369301.9330.00302.90.0030.6540.190
7452004.60.00304.8670.0030.2880.031
842160180.00307.3330.0030.117-
93780019.3670.005020.2670.0030.887***
103507027.7330.005032.8670.0030.602***
1001797042150.8670.0800173.1330.0760.1630.034
211,036015.3330.083019.3330.0780.7320.001
312,723000.071000.068-0.039
410,36808.3330.078011.5330.0720.984***
59083044.4670.085049.1670.0790.6820.006
610,21002.0670.08804.7670.0800.9100.006
710,125024.4670.082026.5330.0720.770***
811,43508.8670.079090.0770.8200.139
911,45500.60.08100.60.07510.004
1012,565018.6670.078012.9330.0690.247***
250145,6070811.873010.13310.8900.458***
244,810059.26712.123045.03311.4290.1470.005
349,037008.996008.662-0.045
441,274020.610.539033.1339.7430.067***
547,961015.9339.672010.9338.8080.611***
641,01408.611.766011.510.9730.876***
746,7570010.6240010.191-0.050
835,72652214.20013.311017712.1960.297***
948,916023.10011.330027.23310.3410.433***
1040,44203.53312.52602.211.2110.688***
5001116,58605.333586.40606.267592.7980.6770.398
2128,33902.5459.92604.4455.0130.4020.374
3130,81200501.77300496.806-0.432
4130,097028.933518.133026.733523.3510.4860.321
5125,487010.4521.38102.6516.2520.3800.300
*: Mann–Whitney test; the difference is significant if p-value < α, (α = 0.05). **: average computation time (second). ***: <0.001.
Table 3. Results for Helmberg–Rendl [39] problems.
Graph | Problem | n | Best Known | VNS: Best Dif / Avg Dif / Time ** | B-VNS: Best Dif / Avg Dif / Time ** | Test (p-Value) *: Dif / Time
RandomG18001162400.033180.1302.533158.702******
G28001162007.133185.80406.8162.8540.830***
G38001162201.733195.16902.833172.0030.222***
G48001164600.567193.59500.633175.6640.507***
G58001163105.133191.3204.433169.1040.654***
Random ( ± 1 )G6800217801.867203.06502.2136.1560.299***
G7800200604.533195.77005.2169.9330.286***
G8800200503.4154.65002.733164.1330.845***
G9800205404.3195.34114.6169.7400.622***
G10800200003.2160.17302.333141.5890.191***
ToroidalG118005641425.26766.2742027.73344.7260.001***
G128005561824.466.7161624.13357.9200.916***
G138005821622.868.0791824.06762.0230.127***
PlanarG1480030642937.73378.6563239.669.9830.087***
G1580030503138.63377.1953239.86767.3500.179***
RandomG431000666018.967431.31419.733381.0730.323***
G441000665038.467428.17829.533375.8220.114***
G4510006654012.067425.304111.033374.1240.445***
G4610006654916.1410.047516.633373.8640.494***
G4710006654920.433415.6071320.267370.3320.472***
PlanarG51100038463947.433194.8913949.533192.6330.0700.007
G52100038494149.1126.2084250.033110.0350.265***
*: Mann–Whitney test; the difference is significant if p-value < α, (α = 0.05). **: average computation time (second). ***: <0.001.
Table 4. Results for Burer et al. [41] problems.
Problem | n | Best Known | VNS: Best Dif / Avg Dif / Time ** | B-VNS: Best Dif / Avg Dif / Time ** | Test (p-Value) *: Dif / Time
sg3dl05200012511201.20.04201.80.0390.058***
sg3dl05400012511401.3330.04302.1330.0390.158***
sg3dl05600012511001.3330.04101.4670.0380.803***
sg3dl05800012510801.3330.04301.60.0390.312***
sg3dl051000012511203.2670.04202.40.0390.030***
sg3dl10200010009002028.867402.4211830.200350.9430.237***
sg3dl10400010008961828.667431.3932028.733351.4430.928***
sg3dl10600010008862431.267359.9992834.467322.7430.004***
sg3dl10800010008801626.867380.6132028.600311.6380.058***
sg3dl101000010008901828.800390.7972029.733354.6910.397***
*: Mann–Whitney test; the difference is significant if p-value < α, (α = 0.05). **: average computation time (second). ***: <0.001.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Pambudi, D.; Kawamura, M. Constructing the Neighborhood Structure of VNS Based on Binomial Distribution for Solving QUBO Problems. Algorithms 2022, 15, 192. https://doi.org/10.3390/a15060192

