1. Introduction
Exact algorithms for the maximum clique problem are now remarkably efficient but, for larger problems, heuristic algorithms are necessary. Tabu search is an effective metaheuristic for the maximum clique problem. This paper evaluates the option of including an exact algorithm within the tabu search and considers the best exact algorithm to use. The candidate exact algorithms all use a graph coloring upper bound. This approach was used in the specific context of permutation code constructions in [1]. The final tabu search algorithm described, modified from a published algorithm [2], finds best known solutions to standard instances but is normally slower than a state-of-the-art tabu search algorithm [3] in finding these solutions. However, it is more competitive for harder instances and finds significantly larger cliques for two open instances derived from the construction of permutation codes.
Let $G=(V,E)$ be an undirected graph with vertex set $V$ and edge set $E$. A clique $C$ of $G$ is a subset of $V$ with every pair of vertices of $C$ adjacent (a complete subgraph). A maximum clique is a clique with the maximum number of vertices. Problems involving cliques arise in many applications surveyed in [4], including bioinformatics, examination planning, location problems, signal transmission analysis, and social network analysis. Applications of the maximum clique problem in radio frequency assignment are described in [5]. The problem also arises in the construction of various types of error-correcting code with the maximum number of codewords [6,7]. Note that the problem of finding a maximum clique in the graph $G$ is equivalent to the problem of finding a maximum independent set in the complement graph $\overline{G}$.
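The clique/independent-set equivalence can be made concrete with a small sketch (plain Python with adjacency sets; all identifiers are ours, not from any cited implementation): a set is a clique in $G$ exactly when it is an independent set in $\overline{G}$.

```python
from itertools import combinations

def is_clique(adj, C):
    """True if every pair of vertices in C is adjacent."""
    return all(v in adj[u] for u, v in combinations(C, 2))

def complement(adj):
    """Complement graph on the same vertex set: (u, v) is an edge
    exactly when it is not an edge of adj (self-loops excluded)."""
    V = set(adj)
    return {u: V - adj[u] - {u} for u in V}

def is_independent_set(adj, C):
    """True if no pair of vertices in C is adjacent."""
    return all(v not in adj[u] for u, v in combinations(C, 2))

# A 4-vertex example: {1, 2, 3} is a triangle, vertex 4 hangs off 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```

Here `is_clique(adj, {1, 2, 3})` holds precisely because `is_independent_set(complement(adj), {1, 2, 3})` holds.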
The decision form of the clique problem (does there exist a clique of size $k$?) was proved to be one of the original 21 NP-complete problems in a classic paper [8]. It follows that the maximum clique problem is NP-hard. When exact methods are impractical, heuristic methods are used.
Algorithms for the maximum clique problem, both exact and heuristic, are surveyed in [4]. The algorithm Swap-Based Tabu Search (SBTS) presented in [3] is the only heuristic algorithm compared in [4] that is able to find the optimal or best known solution over a wide range of instances. A more recent algorithm HTS (Hybrid Tabu Search) [2] also finds the optimal or best known solution over these instances and adds some further instances based on the construction of permutation codes [7]. HTS makes use of both an exact and a pseudo-exact inner solver. In the tabu search in HTS, a number of carefully selected vertices are removed from the current clique before an exact or pseudo-exact algorithm is applied to the subgraph induced by the set of vertices of $G$ adjacent to all vertices in the reduced clique.
As well as tabu search and other local search methods, there are also mathematical programming approaches, together with other methods such as genetic algorithms, ant colony optimization, chemical reaction optimization, and particle swarm optimization. Hybrid and parallel versions of these algorithms also exist. The reader is referred to the survey [4] for an extensive list of references. Some later references to the maximum clique and related problems can be found in [9,10,11,12,13]. When comparable, these later approaches do not always match SBTS or HTS. For this reason, the current paper concentrates on tabu search. Although there are many effective algorithms, there is always scope for improvement on large and hard instances beyond the standard benchmarks.
There are four objectives of this paper. The first is to determine the best exact algorithm to replace the exact and pseudo-exact algorithms used in HTS. This allows a comparison of the resulting tabu search algorithm, incorporating the best exact algorithm found, with a more standard tabu search approach. Initially, a comparison of various exact algorithms applied to benchmarks used in the comparison in [4] is carried out. The best candidates are then incorporated into the HTS algorithm and further compared, leading to a somewhat different rank ordering of methods. The second objective is to consider and implement a small number of other improvements to HTS. The third is to dramatically reduce the number of parameters in HTS by finding methods to determine most of them automatically. Finally, the resulting new algorithm HTS2 (code for HTS2 is available in the Supplementary Material) can be compared with the original HTS and SBTS on the same machine. The overall aim of the work is to determine whether a tabu search algorithm including exact search can match SBTS on standard instances and find larger cliques than SBTS for some hard instances. This requires consideration of some new hard instances beyond those used in [3]. The new instances here are motivated by the authors' interest in the construction of permutation codes.
2. Exact Algorithms with a Coloring Bound
Exact algorithms for the maximum clique problem are surveyed in [4]. The basis for many improved algorithms is the algorithm of Carraghan and Pardalos [14], and this is also the basis for the exact and pseudo-exact algorithms used in HTS [2]. Within HTS, this algorithm works with a current clique $F$ and a potential expansion set $S=N(F)$ of vertices adjacent to all vertices of $F$. Vertices of $S$ are selected in turn and removed from $S$. The new potential expansion set $S'$ is computed and the selected vertex is added to $F$ to create a set $F'$. If the potential expansion set is empty and a new best clique is obtained, the clique is assigned to a global variable $Best$. Otherwise, subject to a pruning condition, the algorithm is applied recursively to the new potential expansion set $S'$. The usual condition is that pruning of the search tree takes place unless $|F'|+|S'|$ is greater than the size of the best clique found so far in the exact algorithm. Within HTS, extra pruning is used involving an external lower bound arising from the best clique found so far in the metaheuristic. Pseudo-exact search uses a probable upper bound for pruning derived from an evaluation of typical subproblems solved for the instance within HTS.
A major improvement arises from the use of a vertex coloring bound.
Definition 1. A vertex coloring of a graph G is an assignment of colors to the vertices of G such that adjacent vertices are assigned different colors.
Definition 2. The chromatic number of a graph G is the minimum number of colors in a vertex coloring.
The following proposition is well known:
Proposition 1. The chromatic number of a graph G is an upper bound for the number of vertices in a maximum clique.
Proof. Every vertex of the clique must be assigned a different color. □
The colors can be represented as positive integers $1,2,\dots$. Thus, any upper bound $k$ for the chromatic number of the subgraph induced by $S'$ (obtained from a coloring of $S'$ with $k$ colors) can replace $|S'|$ in the pruning condition to reduce the size of the search, sometimes dramatically. The options to be considered here are (i) the best coloring algorithm to use, which must be very fast as the exact algorithm is used many times in hybrid tabu search; (ii) whether to locally color each subgraph induced by $S'$, which may be expensive, or to color the initial subproblem and use the colors inherited by the subgraph induced by $S'$, or a combination of the two; and (iii) whether to reorder the vertices of $S'$ in nonincreasing order of color within the exact solver.
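Proposition 1 and the resulting pruning bound can be illustrated with a minimal sketch (plain Python, our own identifiers; the greedy rule is the simple smallest-available-color rule described in Section 2.1):

```python
def greedy_color(order, adj):
    """Color vertices in the given order, assigning each the smallest
    positive color not already used by a colored neighbor."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return color

def coloring_bound(order, adj):
    """Number of colors used by the greedy coloring: by Proposition 1,
    an upper bound on the size of a maximum clique."""
    color = greedy_color(order, adj)
    return max(color.values(), default=0)
```

For a triangle with a pendant vertex, `coloring_bound` returns 3, which matches the maximum clique size; in general the bound is only an upper bound.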
2.1. Initial Evaluation of Exact Algorithms with a Coloring Bound
Improvements to the Carraghan and Pardalos algorithm (denoted as “C&P”) using a coloring bound are of three basic types. The first, denoted as “local coloring”, applies a coloring algorithm to the original subproblem and to each new set $S'$ created as the depth of the search tree is increased. Clearly, the coloring algorithm has to be very fast. The second, denoted as “start coloring”, applies the coloring algorithm to the original subproblem, and the color assigned to each vertex is inherited by that vertex in the set $S'$. The number of colors actually used in $S'$ can then be computed to give a (usually weaker) upper bound. The third, denoted as “start and local coloring”, is a hybrid. Start coloring and inheriting takes place as in “start coloring”, but, if the upper bound obtained does not lead to pruning, then local coloring also takes place. This may give a smaller upper bound that does lead to pruning.
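The “start coloring” bound is essentially free to evaluate: colors computed once on the original subproblem are inherited, and the bound for a later set $S'$ is simply the number of distinct inherited colors. A toy sketch (our own identifiers; `start_color` is a dictionary produced by any coloring of the initial subproblem):

```python
def inherited_bound(start_color, S_prime):
    """'Start coloring' upper bound for the subgraph induced by S_prime:
    the number of distinct colors inherited from the initial coloring.
    Usually weaker than recoloring S_prime, but much cheaper."""
    return len({start_color[v] for v in S_prime})
```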
The coloring algorithm used is a simple greedy algorithm. The vertices of the set are colored in turn, and each vertex is colored with the smallest available color. Two variations of this are applied to start coloring only. The saturation degree of a vertex is obtained as follows: once a new vertex has been colored, the saturation degree of an uncolored vertex is the number of colors in its neighborhood. Vertices are then colored in nonincreasing order of saturation degree. This is denoted as “saturation start”. The second, very similar variation applied to start coloring uses Degree of Saturation (DSatur) coloring [15], itself based on saturation degree, but using a degree ordering to select the first vertex and to break ties. This is denoted as “DSatur start”.
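The DSatur selection rule can be sketched as follows (our own minimal Python rendering of the rule in [15]: always color next the uncolored vertex seeing the most distinct neighbor colors, with ties broken by degree):

```python
def dsatur_color(adj):
    """DSatur-style coloring: repeatedly pick the uncolored vertex with
    the highest saturation degree (number of distinct colors in its
    neighborhood), breaking ties by vertex degree, and assign it the
    smallest available color."""
    color = {}
    uncolored = set(adj)
    while uncolored:
        v = max(uncolored,
                key=lambda u: (len({color[w] for w in adj[u] if w in color}),
                               len(adj[u])))
        used = {color[w] for w in adj[v] if w in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
        uncolored.remove(v)
    return color
```

On a triangle this uses exactly 3 colors, and on a 5-cycle it uses 3 colors, matching the chromatic number in both cases.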
Another option that can be used is to order the vertices of each new set ${S}^{\prime}$ (created as the depth is increased) in nonincreasing order of color. This means that, as subsequent vertices of ${S}^{\prime}$ are considered, the coloring upper bound tends to decrease quickly and proves particularly effective. This option is denoted by “ordered” in the following tables.
There is an important difference between the evaluation here and the general evaluation in [4]. Within HTS, many of the calls to the exact or pseudo-exact algorithm supply a good lower bound. Thus, the evaluation here is carried out with the best available lower bound supplied. The times in seconds to complete the algorithms for a number of DIMACS instances [16] used in [4] are shown in Table 1 and Table 2. No single algorithm is the best for all instances, but clearly vertex ordering is particularly helpful.
2.2. Evaluation of Exact Algorithms with a Coloring Bound for Use in HTS2
Although the results in Table 1 and Table 2 give a useful comparison, it should be noted that the subproblems to which exact and pseudo-exact search is typically applied in HTS may be much smaller than the instances used in those tables. Thus, four of the algorithms were evaluated further by using them in place of the exact and pseudo-exact search in HTS. The critical improvement in performance arises from maximizing the mean number of tabu search iterations per second, and this was calculated for an extended run on each of seven instances. These were DIMACS, BHOSLIB, and permutation code instances used in [2], and, in general, the same parameters were used for HTS as in the experiments in [2]. The results are shown in Table 3.
There is much less difference in the results than in the previous tables, in part because, if the time for exact search is small, the time to calculate neighborhoods becomes more significant. However, further experiments indicate that the third and fourth algorithms (using ordering) allow the parameter Tssetmin in HTS (determining the smallest current clique from which vertices may be removed) to be smaller. These smaller values allow exact search to be applied to larger subproblems without excessively slowing tabu search and can improve performance as a result. From inspection of Table 3, the third algorithm, “C&P with ordered local coloring”, was selected for use in an improved version HTS2 of HTS. This algorithm also has the merit of being simpler than the fourth algorithm. Pseudocode for “C&P with ordered local coloring” is shown in Algorithm 1.
Algorithm 1: The exact algorithm “C&P with ordered local coloring” applied to a graph or subgraph with vertex set $V$ and edge set $E$.

Require: A set of selected vertices $F\subseteq V$ and a set of potential expansion vertices $S\subseteq V$. The best clique retrieved so far is contained in $Best$, and $External\_lower\_bound$ is a supplied external value. The recursive algorithm is invoked with $F:=\varnothing$, $S:=V$, and the global variable $Best:=\varnothing$.

Exact($F$, $S$)
    while $S\ne\varnothing$ and $|F|+|S|>\max\{|Best|,External\_lower\_bound-1\}$ do
        color $S$ greedily, using the smallest available color;
        sort $S$ in nonincreasing order of color;
        select $s\in S$ in the above order;
        $S:=S\setminus\{s\}$;
        $S':=S$;
        for $z\in S'$ do
            if $(z,s)\notin E$ then   { if $(z,s)$ is not an edge }
                $S':=S'\setminus\{z\}$;
            end if
        end for
        $F':=F\cup\{s\}$;
        if $S'=\varnothing$ and $|F'|>|Best|$ then
            $Best:=F'$;
        else
            determine the number of colors $numcolors(S')$ used in $S'$;
            if $|F'|+numcolors(S')>\max\{|Best|,External\_lower\_bound-1\}$ then
                Exact($F'$, $S'$);
            end if
        end if
    end while
    return($Best$)
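A compact Python rendering of Algorithm 1 may make the recursion easier to follow. This is a sketch under our own conventions (`adj` maps each vertex to its adjacency set, `ext_lb` plays the role of $External\_lower\_bound$, and the best clique is threaded through return values rather than held in a global variable):

```python
def exact(F, S, best, adj, ext_lb):
    """Sketch of "C&P with ordered local coloring" (Algorithm 1).
    F: current clique (set); S: candidate vertices adjacent to all of F."""
    S = list(S)
    while S and len(F) + len(S) > max(len(best), ext_lb - 1):
        color = {}                                 # greedy local coloring of S
        for v in S:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        S.sort(key=lambda v: color[v], reverse=True)   # nonincreasing color
        s = S.pop(0)                               # select the next vertex
        S_new = [z for z in S if z in adj[s]]      # S' = S restricted to N(s)
        F_new = F | {s}
        if not S_new:
            if len(F_new) > len(best):
                best = F_new                       # new best clique
        elif len(F_new) + len({color[z] for z in S_new}) > max(len(best), ext_lb - 1):
            best = exact(F_new, S_new, best, adj, ext_lb)
    return best
```

For example, on a $K_4$ with a pendant vertex, `adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}, 5: {1}}`, calling `exact(set(), [1, 2, 3, 4, 5], set(), adj, 0)` returns the unique maximum clique `{1, 2, 3, 4}`, while an external lower bound of 5 prunes the whole search.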

3. Minor Improvements in HTS2
Apart from the change from the exact and pseudoexact search in HTS to the exact “ordered local coloring” algorithm detailed in Algorithm 1, there are some other relatively minor changes from HTS in HTS2. As well as the tabu search algorithm, HTS has a main heuristic that generates good starting solutions (sometimes finds the best solution itself) and
ioptimizations for
$i=1,2,3$. These
ioptimizations remove
i vertices from the current clique in all possible ways and apply the exact algorithm to the neighbor set of vertices adjacent to all remaining vertices in an attempt to find a new larger clique. This continues until there is no improvement. In HTS, there are thresholds
${\theta}_{1}$,
${\theta}_{2}$,
${\theta}_{3}$,
${\theta}_{\mathrm{ts}}$ for the four optimizations. In HTS, these thresholds are allowed to be different and are selected so that each optimization is only applied to very promising cases, and 3optimization never makes HTS unacceptably slow. In HTS2, these four thresholds are replaced by a single threshold
$\theta $. The justification for this is that the improved performance of the tabu search makes a different threshold for tabu search unnecessary, and the relative slowness of 3optimization in some instances can be dealt with by considering the size of the current clique. Thus, the overall structure of HTS2 is as shown in Algorithm 2.
Algorithm 2: Outline of the overall structure of the algorithm HTS2.

    $BestC:=\varnothing$;
    $k:=0$;
    while $runtime\le max\_runtime$ do
        $k:=k+1$;
        $C:=Main\_heuristic(k)$;
        if $|C|\ge\theta$ then
            $1\_optimize(C)$;
            $C:=ts\_optimize(C)$;
            if $|C|\le 200$ then
                $C:=3\_optimize(C)$;
            else
                $C:=2\_optimize(C)$;
            end if
        end if
        if $|C|>|BestC|$ then
            $BestC:=C$;
        end if
    end while

For any current clique $C$, $N(C)$ denotes the set of vertices of $V$ adjacent to all vertices in $C$. In the main heuristic of the original HTS algorithm, a sequence $S$ is selected. A variable $adjchoices$ cycles through the values in $S$ and controls the choice of the next vertex in $N(C)$ to add to the clique $C$. A set $U$ consists of all vertices of $N(C)$ if $|N(C)|\le adjchoices$, or $adjchoices$ randomly selected vertices of $N(C)$ otherwise. A vertex $u$ of $U$ that gives the largest value of $|N(C\cup\{u\})|$ is chosen to add to $C$. The most common $S$ in the experiments in [2] is $S=[1,50,80,100,120,160,200,300,600,800]$, and $S$ is fixed to this choice in HTS2 unless the graph has fewer than 1000 vertices, in which case $S=[1,50,80,100,120,160,200,300]$. Thus, no decision on an appropriate $S$ is necessary in HTS2. Pseudocode for the main heuristic is presented in Algorithm 3, and pseudocode for the $i$-optimizations is presented in Algorithm 4.
In the original HTS algorithm, a parameter Nontabu max is used in tabu search. If the neighborhood to which exact search is applied has at least Nontabu max vertices, a protection mechanism is invoked (selecting a random vertex in the neighborhood) to avoid application of the exact search to an excessively large neighborhood. Typical values of 30 or 60 were used. In HTS2, a parameter $\lambda_2$ from the main heuristic, to be outlined in the next section, is used instead of Nontabu max.
A final change is to replace the condition that the tabu search optimization runs for Tstime seconds unless the size of the clique increases by the requirement that it runs for 10,000 iterations unless the size of the clique increases. This avoids the need for the parameter Tstime and proved satisfactory for all instances. Pseudocode for the revised tabu search algorithm is presented in Algorithm 5.
Algorithm 3: Main heuristic at iteration $k$.

The algorithm is invoked for a graph $G=(V,E)$ with $C:=$ {random vertex in $V$}. Parameters supplied are the threshold $\theta$ from Algorithm 2 and two further thresholds $\lambda_1$, $\lambda_2$ for exact search. The sequence of values $S=[s_i]$ is also supplied. Following the call to the main heuristic, $C$ contains the best clique found by the main heuristic at iteration $k$.

Main_heuristic($k$)
    $C:=$ {random vertex in $V$};
    $Exactstored:=False$; $adjchoices:=s_{1+k \bmod |S|}$;
    while $|N(C)|>0$ and $|N(C)|\ge\theta-|C|$ do
        if $|N(C)|<\lambda_2$ then
            $Best:=\varnothing$; $External\_lower\_bound:=\theta-|C|$;
            Exact($\varnothing$, $N(C)$);
            $C:=C\cup Best$;
        else
            if $|N(C)|<\lambda_1$ and $Exactstored=False$ then
                $C_{\mathrm{stored}}:=C$; $N(C)_{\mathrm{stored}}:=N(C)$;
                $Exactstored:=True$;
            end if
            if $|N(C)|\le adjchoices$ then
                $U:=N(C)$;
            else
                $U:=\{v_{j_1},v_{j_2},\dots,v_{j_{adjchoices}} \mid v_{j_i}\in N(C)\ \mathrm{selected\ randomly}\}$;
            end if
            choose $u\in U$ such that $|N(C\cup\{u\})|$ is maximal;
            $C:=C\cup\{u\}$;
        end if
    end while
    if $Exactstored=True$ and $|C|\ge\theta$ then
        $Best:=\varnothing$; $External\_lower\_bound:=\theta-|C_{\mathrm{stored}}|$;
        Exact($\varnothing$, $N(C)_{\mathrm{stored}}$);
        $C_{\mathrm{temp}}:=C_{\mathrm{stored}}\cup Best$;
        if $|C_{\mathrm{temp}}|>|C|$ then
            $C:=C_{\mathrm{temp}}$;
        end if
    end if
    return($C$);

Algorithm 4: $i$_Optimize.

i_Optimize($C$)
    $Current\_best:=C$;
    Recursive_i_opt($C$)
        for all $S\subset C$ with $|S|=i$ do
            if $|N(C\setminus S)|>i$ then
                $Best:=\varnothing$; $External\_lower\_bound:=i+1$;
                Exact($\varnothing$, $N(C\setminus S)$);
                if $|Best|>i$ then
                    $Current\_opt:=Best\cup(C\setminus S)$;
                    if $|Current\_opt|>|Current\_best|$ then
                        $Current\_best:=Current\_opt$;
                        Recursive_i_opt($Current\_opt$);
                    end if
                end if
            end if
        end for
    return($Current\_best$);

Algorithm 5: Tabu search.

ts_Optimize($C$)
    $Ts\_count:=0$;
    $Tabulist:=\varnothing$; $Tscurrent:=C$; $Tsbest:=C$;
    while $Ts\_count<10000$ do
        $Ts\_count:=Ts\_count+1$; $Aspiration:=False$;
        for all $v\in Tscurrent$ do
            $Best:=\varnothing$; $External\_lower\_bound:=|Tsbest|-|Tscurrent|+1$;
            Exact($\varnothing$, $N(Tscurrent\setminus\{v\})$);
            $Tstemp1:=Best\cup(Tscurrent\setminus\{v\})$;
            update a list $L_1$ of all $v\in Tscurrent$ with $|Tstemp1|$ maximal (and, for this value of $|Tstemp1|$, the value $|N(Tscurrent\setminus\{v\})|$ maximal);
            if $|Tstemp1|>|Tsbest|$ then
                $Aspiration:=True$;
            else
                $Nontabu:=\{w\in N(Tscurrent\setminus\{v\}) \mid w\notin Tabulist\}$;
                $Best:=\varnothing$; $External\_lower\_bound:=0$;
                Exact($\varnothing$, $Nontabu$);
                $Tstemp2:=Best\cup(Tscurrent\setminus\{v\})$;
                update a list $L_2$ of all $v\in Tscurrent$ with $|Tstemp2|$ maximal (and, for this value of $|Tstemp2|$, the value $|Nontabu|$ maximal);
            end if
        end for
        if $Aspiration=True$ then
            select $v'\in L_1$ randomly;
            $Best:=\varnothing$; $External\_lower\_bound:=|Tsbest|-|Tscurrent|+1$;
            Exact($\varnothing$, $N(Tscurrent\setminus\{v'\})$);
            $Tscurrent:=Best\cup(Tscurrent\setminus\{v'\})$; $Tsbest:=Tscurrent$;
            $Ts\_count:=0$;
        else
            select $v''\in L_2$ randomly;
            $Tscurrent:=Tscurrent\setminus\{v''\}$;
            add $v''$ to $Tabulist$ (removing the oldest entry if the list length exceeds $Tstenure$);
            if $|Tscurrent|\le Tssetmin+1$ then
                $Nontabu:=\{w\in N(Tscurrent) \mid w\notin Tabulist\}$;
                if $|Nontabu|<\lambda_2$ then
                    $Best:=\varnothing$; $External\_lower\_bound:=0$;
                    Exact($\varnothing$, $Nontabu$);
                    $Tscurrent:=Best\cup Tscurrent$;
                else
                    select $v'''\in Nontabu$ randomly;
                    $Tscurrent:=\{v'''\}\cup Tscurrent$;
                end if
            end if
        end if
    end while
    if $|Tsbest|>|C|$ then
        $C:=Tsbest$;
    end if
    return($C$);

4. Simplification of Parameters
There are several parameters used in HTS. These are not difficult to determine by a short exploratory run, but it is more convenient if most of them are determined by the algorithm itself. The parameters in question are ${\theta}_{1}$, ${\theta}_{2}$, ${\theta}_{3}$, ${\theta}_{\mathrm{ts}}$, ${\lambda}_{1}$, ${\lambda}_{2}$, $Tstime$, $Tstenure$, and $Tssetmin$. Pseudoexact search in HTS also has a number of coefficients, but these are no longer relevant as pseudoexact search is not used in HTS2. The use of $Tstime$ was replaced by a condition “10,000 iterations without improvement” in the previous section.
The four $\theta$ parameters in HTS are replaced in HTS2 by a single parameter $\theta$ (see Algorithm 2) determined as follows. The main heuristic is run 100 times. If the largest clique $C_{\mathrm{max}}$ found is not unique, then $\theta=|C_{\mathrm{max}}|$. If this value is unique, then $\theta$ is the number of vertices in the second largest clique. This ensures that the other optimizations are only applied to the most promising cliques generated but avoids single outliers that might only be generated very rarely in later iterations.
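The rule for $\theta$ can be sketched as follows (our own Python paraphrase; `main_heuristic` stands for Algorithm 3 and is passed in as a callable, and we read "not unique" as referring to the largest clique size):

```python
def select_theta(main_heuristic, runs=100):
    """HTS2's automatic threshold: run the main heuristic `runs` times.
    If the largest clique size occurs more than once, theta is that size;
    otherwise theta is the second largest size, so a single rare outlier
    does not set the threshold."""
    sizes = sorted(len(main_heuristic(k)) for k in range(runs))
    largest = sizes[-1]
    if sizes.count(largest) > 1:
        return largest
    return sizes[-2]
```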
The two parameters $\lambda_1$ and $\lambda_2$ are used in the main heuristic (see Algorithm 3). If the size of the neighborhood $N(C)$ of the current clique satisfies $|N(C)|<\lambda_2$, then the generation of the clique by the main heuristic is completed by applying the exact algorithm to $N(C)$. If the clique generated has at least $\theta$ vertices, the algorithm reverts to a somewhat larger stored neighborhood with fewer than $\lambda_1$ vertices and applies the exact algorithm. In HTS2, the two $\lambda$ parameters are determined by the edge density $2|E|/(|V|(|V|-1))$, as shown in Table 4. The parameter $\lambda_2$ is also used in the protection mechanism within tabu search to avoid applying the exact algorithm to excessively large subproblems.
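The density used to index Table 4 is the standard edge density of a simple undirected graph (a trivial sketch; the $\lambda$ cut-offs themselves are given in Table 4 and are not reproduced here):

```python
def edge_density(num_vertices, num_edges):
    """Edge density 2|E| / (|V| (|V| - 1)) of a simple undirected graph,
    used in HTS2 to select lambda1 and lambda2 from Table 4."""
    return 2.0 * num_edges / (num_vertices * (num_vertices - 1))
```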
The value of the tabu tenure Tstenure is not at all critical for HTS2. Values of 6, 12, and 20 were used, and all proved successful.
The tabu search parameter $Tssetmin$ is the critical parameter to be set by the user. It must be less than $\theta $, but, if it is too small, attempts may be made to solve exactly subproblems that are too large, requiring the protection mechanism. Otherwise, larger values make tabu search faster, but smaller values may make tabu search more powerful. A balance should be achieved.
It should be noted that the current implementation allows the original $\theta $ and $\lambda $ parameters from HTS to override these choices if the user requires it. Otherwise, Tstenure and $Tssetmin$ are the only relevant parameters in HTS2.
5. Comparison of HTS, HTS2, and SBTS
This section presents a comparison of the original HTS algorithm, the new HTS2 algorithm, and SBTS. The software for the SBTS algorithm is available online as [
17]. The benchmarks selected are BHOSLIB instances [
18], some of the harder DIMACS instances [
16], Sloane code construction instances [
6], and permutation code construction instances from [
7]. A few instances shown in [
2] to be most easily solved using a preprocessing method are excluded, so the description of preprocessing need not be repeated here. These benchmarks are not ideal in the sense that the optimal solution is known for many of them, but they are at least easily available to allow comparison by others. Using an exact algorithm cannot be expected to be the fastest algorithm, but the more thorough local search might lead to improved solutions if improved solutions are possible. Of particular interest, then, is the permutation code instance
$7\_5$. The vertices correspond to the permutations on 7 symbols, excluding those permutations at Hamming distance 1, 2, 3, or 4 from the identity permutation. Two vertices are adjacent if they are at Hamming distance ≥5. A theoretical construction using group theory shows that there exists a clique with 78 vertices [
19], but the largest clique found with a maximum clique algorithm previously has 72 vertices (or 73 using the preprocessing method in [
2]). For permutation code instances other than
$7\_4$ and
$7\_5$, the vertices of the graph correspond to orbits of permutations under a group, as explained in [
7].
The processor used for the experiments was an Intel(R) Core(TM) i3-9100 3.60 GHz processor with 4 GB of RAM. In most cases, the parameters used for HTS were the same as given in [2], the parameters Tstenure and Tssetmin in HTS2 were the same as used for these parameters in HTS in [2], and the default setting for tabu tenure was used in SBTS. Results for BHOSLIB instances are given in Table 5, and results for all other instances are given in Table 6. In these two tables, the first column gives the instance, and the second column gives the number of vertices in the largest known clique (marked with an asterisk if the clique is known to be optimal). The third, fourth, and fifth columns give the time in seconds to find the solution in column 2, with a note of the number of vertices found if the result in column 2 is not achieved.
For most instances, HTS2 is faster than HTS; it finds a larger clique for C2000_9 than HTS without preprocessing and a larger clique for $7\_5$ than HTS with or without preprocessing. For BHOSLIB, most DIMACS, and Sloane code construction instances, SBTS is much faster than HTS2, as might be expected. However, HTS2 is more competitive on harder DIMACS instances, such as C2000_9, and performs much better than SBTS on $7\_5$ and $8\_5$ cyclic. For $7\_5$, HTS2 found a clique with 75 vertices once and cliques with 74 vertices four times in 364,585 s, whereas SBTS found cliques with 72 vertices twice but could not improve on this in 1,399,916.1 s. Similarly, for $8\_5$ cyclic, HTS2 found a clique with 49 vertices in 6799 s, whereas SBTS found cliques with 46 vertices four times but could not improve on this in 61,097.2 s.
6. Conclusions
The replacement of the pseudo-exact algorithm in HTS with an improved exact algorithm in HTS2 has allowed a pure comparison of tabu search using exact search with a more standard type of tabu search. The interest of the authors is in code construction problems, where certain instances only have to be solved once. The critical issue is that the largest possible clique is found; run time is relatively unimportant. It is clear that SBTS is much faster than HTS2 for most instances and should be used in applications where clique problems have to be solved quickly. HTS2 has found the same size clique as SBTS in all instances except $7\_5$ and $8\_5$ cyclic, where it has found larger cliques with three more vertices. The instance $7\_5$ is an important addition to the set of standard benchmarks for maximum clique problems, as it is of a similar size to the largest benchmarks, but no algorithm has been able to find a clique with 78 vertices, although such a clique is known to exist. In summary, HTS2 finds the same size clique as SBTS for 46 instances, more slowly for 40 instances but faster for 4 instances. It finds a larger clique than SBTS for two instances, while SBTS never finds a larger clique than HTS2.