Finding the Best 3-OPT Move in Subcubic Time

Abstract: Given a Traveling Salesman Problem solution, the best 3-OPT move requires us to remove three edges and replace them with three new ones so as to shorten the tour as much as possible. No worst-case algorithm better than the Θ(n^3) enumeration of all triples is likely to exist for this problem, but algorithms with average case O(n^{3−ε}) are not ruled out. In this paper we describe a strategy for 3-OPT optimization which can find the best move by looking only at a fraction of all possible moves. We extend our approach also to some other types of cubic moves, such as some special 6-OPT and 5-OPT moves. Empirical evidence shows that our algorithm runs in average subcubic time (upper bounded by O(n^2.5)) on a wide class of random graphs as well as Traveling Salesman Problem Library (TSPLIB) instances.


Introduction
The Traveling Salesman Problem (TSP), in all likelihood the most famous combinatorial optimization problem, calls for finding the shortest Hamiltonian cycle in a complete graph G = (V, E) of n nodes, weighted on the arcs. In this paper we consider the symmetric TSP, i.e., the graph is undirected and the distance between two nodes is the same irrespective of the direction in which we traverse an edge. Let us denote by c(i, j) = c(j, i) the distance between any two nodes i and j. We call each solution of the problem a tour. A tour is identified by a permutation of vertices (v1, . . . , vn). We call {vi, vi+1}, for i = 1, . . . , n − 1, and {vn, v1} the edges of the tour. The length of a tour T, denoted by c(T), is the sum of the lengths of the edges of the tour. More generally, for any set F of edges, we denote by c(F) the value ∑_{e∈F} c(e).
A large number of applications over the years have shown that local search is often a very effective way to tackle hard combinatorial optimization problems, including the TSP. The local search paradigm applies to a generic optimization problem in which we seek to minimize an objective function f (x) over a set X. Given a map N : X → 2 X which associates to every solution x ∈ X a set N(x) called its neighborhood, the basic idea is the following: start at any solution x 0 , set s := x 0 , and look for a solution x 1 ∈ N(s) better than s. If we find one, we replace s with x 1 and iterate the same search. We continue this way until we get to a solution s such that f (s) = min{ f (x)|x ∈ N(s)}. In this case, we say that s is a local optimum. Replacing x i with x i+1 is called performing a move of the search, and N(s) is the set of all solutions reachable with a move from s. The total number of moves performed to get from x 0 to the final local optimum is called the length of the convergence. If x is a solution reachable with a move from s and f (x) < f (s) we say that the move is an improving move and x is an improving solution. When searching in the neighborhood of x i we can adopt two main strategies, namely first-improvement and best-improvement (also called steepest-descent). In the first-improvement strategy, we set x i+1 to be the first solution that we find in N(x i ) such that f (x i+1 ) < f (x i ). In best-improvement, we set x i+1 to be such that f (x i+1 ) = min{ f (x)|x ∈ N(x i ) ∧ f (x) < f (x i )}. For small-size neighborhoods such as the one considered in this paper, most local search procedures choose to adopt the best-improvement strategy, since it causes the largest decrease in the objective function value at any given iteration of the search. In this paper we will focus precisely on the objective of finding the best possible move for a popular TSP neighborhood, i.e., the 3-OPT.
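The best-improvement strategy described above can be sketched in C. This is a generic skeleton of ours, not the paper's code: the solution type, the toy objective f(x) = (x − 7)^2 over the integers, and the neighborhood {x − 1, x + 1} are placeholders chosen only to make the loop structure concrete.

```c
/* Toy objective to minimize: (x - 7)^2 over the integers. */
static long f(long x) { return (x - 7) * (x - 7); }

/* Best-improvement (steepest-descent) local search: starting from x0,
 * repeatedly scan the whole neighborhood and move to the best improving
 * neighbor; stop when no neighbor improves (a local optimum). */
long best_improvement(long x0) {
    long s = x0;
    for (;;) {
        long best = s, bestval = f(s);
        long nb[2] = { s - 1, s + 1 };       /* the neighborhood N(s) */
        for (int i = 0; i < 2; i++)          /* scan all of N(s) */
            if (f(nb[i]) < bestval) { best = nb[i]; bestval = f(nb[i]); }
        if (best == s) return s;             /* no improving move */
        s = best;                            /* perform the best move */
    }
}
```

Since the toy objective is convex, the local optimum found here is also the global one; with a nonconvex objective the same loop would stop at whatever local optimum it first reaches.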
The idea of basic local search has been around for a very long time, and it is difficult to attribute it to any particular author. Examples of its use are reported in standard textbooks on combinatorial optimization, such as [1], or in dedicated surveys, such as [2]. Over time, many variants have been proposed to make local search more effective by preventing it from getting stuck in local optima; namely, sometimes a non-improving move must be performed to keep the search going. Examples of these techniques are tabu search [3] and simulated annealing [4]. Our results apply to basic local search but can be readily adapted for use in more sophisticated procedures such as tabu search. This, however, is beyond the scope of this paper.

The K-OPT Neighborhood
Let K ∈ N be a constant. A K-OPT move on a tour T consists of first removing a set R of K edges and then inserting a set I of K edges so that (T \ R) ∪ I is still a tour (we allow, possibly, R ∩ I ≠ ∅; this implies that the (K − 1)-OPT moves are a subset of the K-OPT moves). A K-OPT move is improving if c((T \ R) ∪ I) < c(T), i.e., c(I) < c(R). An improving move is best improving if c(R) − c(I) is maximum over all possible choices of R, I.
The standard local search approach for the TSP based on the K-OPT neighborhood starts from any tour T 0 (usually a random permutation of the vertices) and then proceeds along a sequence of tours T 1 , T 2 , . . . , T N where each tour T j is obtained by applying an improving K-OPT move to T j−1 . The final tour T N is such that there are no improving K-OPT moves for it. The hope is that T N is a good tour (optimistically, a global optimum) but its quality depends on many factors. One of them is the size of the neighborhood, the rationale being that with a larger-size neighborhood we sample a larger number of potential solutions, and hence increase the probability of ending up at a really good one. Clearly, there is a trade-off between the size of a neighborhood and the time required to explore it, so that most times people resort to the use of small neighborhoods since they are very fast to explore (for a discussion on how to deal with some very large size, i.e., exponential, neighborhoods for various combinatorial optimization problems, see [5]).
The exploration of the K-OPT neighborhood, for a fixed K, might be considered "fast" from a theoretical point of view, since there is an obvious polynomial algorithm (complete enumeration). However, in practice, complete enumeration makes the use of K-OPT impossible already for K = 3 (if n is large enough, say 3000 or more). There are Θ(n^K) choices of K edges to remove, which implies that 3-OPT can be explored in time O(n^3) by listing all possibilities. For a tour over n = 6000 nodes, the time required to try all 3-OPT moves on a reasonably fast desktop computer is more than one hour, let alone converging to a local optimum. For this reason, 3-OPT has never been really adopted for the heuristic solution of the TSP.
The first use of K-OPT dates back to 1958 with the introduction of 2-OPT for the solution of the TSP in [6]. In 1965 Lin [7] described the 3-OPT neighborhood, and experimented with the Θ(n 3 ) algorithm, on which he also introduced a heuristic step fixing some edges of the solution (at risk of being wrong) with the goal of decreasing the size of the instance. Still, the instances which could be tackled at the time were fairly small (n ≤ 150). Later in 1968, Steiglitz and Weiner [8] described an improvement over Lin's method which made it 2 or 3 times faster, but still cubic in nature.
It was soon realized that the case K ≥ 3 was impractical, and later local search heuristics deviated from the paradigm of complete exploration of the neighborhood, replacing it with some clever ways to heuristically move from tour to tour. The best such heuristic is probably Lin and Kernighan's procedure [9], which applies a sequence of K-OPT moves for different values of K. Notice that our objective, and our approach, is fundamentally different from Lin and Kernighan's in that we always look for the best K-OPT move, and we keep K constantly equal to 3. On the other hand, Lin and Kernighan, besides varying K, apply a heuristic search for the best K-OPT move, which is fast but cannot guarantee to find the best move and, indeed, is not even guaranteed to find an improving move when some exist. For a very good chapter comparing various heuristics for the TSP, including the 3-OPT neighborhood, see Johnson and McGeoch [10].
An important recent result in [11] proves that, under a widely believed hypothesis similar to the P ≠ NP conjecture, it is impossible to find the best 3-OPT move with a worst-case algorithm of time O(n^{3−ε}) for any ε > 0, so that complete enumeration is, in a sense, optimal. However, this gives us little consolation when we are faced with the problem of applying 3-OPT to a large TSP instance. In fact, for complete enumeration the average case and the worst case coincide, and one might wonder if there exists a better practical algorithm, much faster than complete enumeration on the majority of instances but still O(n^3) in the worst case. The algorithm described in this paper (of which a preliminary version appeared in [12]) is such an example. A similar goal, but for problems other than the TSP, is pursued in [13], which focuses on the alternatives to brute force exploration of polynomial-size neighborhoods from the point of view of parameterized complexity.

The Goal of This Paper
The TSP is today very effectively solved, even to optimality, by using sophisticated mathematical programming-based approaches, such as Concorde [14]. No matter how ingenious, heuristics can hardly be competitive with these approaches when the latter are given enough running time. It is clear that simple heuristics, such as local search, are even less effective.
However, simple heuristics have a great quality which lies in their very name: since they are simple (in all respects: to understand, to implement, and to maintain), they are appealing, especially to the world of practical, real-life application solvers, i.e., the industrial world, where the vast majority of in-house built procedures are of a heuristic nature. Besides its simplicity, local search usually has another great appeal: it is in general very fast, so that it can overcome its simplemindedness by sampling a huge number of good solutions (the local optima) in a relatively short time. Of course, if it is too slow, one loses all interest in using a local search approach despite its simplicity. This is exactly the situation for 3-OPT local search: it is simple, and might be effective, but we cannot use it in practice for mid-to-large sized instances because it is too slow. The goal of our work has been to show that, with a clever implementation of the search for improving moves, it can actually be used, since it becomes much faster (even 1000× on the instances we tried) than the standard implementation.
We remark that our intention in this paper is not to prove that steepest-descent 3-OPT is indeed an effective heuristic for the TSP (this would require a separate work with a lot of experiments and can be the subject of future research). Our goal is to show anyone who is interested in using such a neighborhood how to implement its exploration so as to achieve a huge reduction in the running times. While a full discussion of the computational results can be found in Section 5, here is an example of the type of results that we will achieve with our approach: on a graph of 1000 nodes, we can sample about 100 local optima in the same time that the enumerative approach would take to reach only one local optimum.

Selections, Schemes and Moves
Let G = (V, E) be a complete graph on n nodes, and c : E → R+ be a cost function for the edges. Without loss of generality, we assume V = {0, 1, . . . , n̄}, where n̄ = n − 1. In this paper, we will describe an effective strategy for finding either the best improving or any improving move for a given current tour (v1, . . . , vn). Without loss of generality, we will always assume that the current tour T is the tour (0, 1, . . . , n̄).
We will be using modular arithmetic frequently. For convenience, for each x ∈ V and t ∈ N we define x ⊕ t := (x + t) mod n and x ⊖ t := (x − t) mod n. When moving from x to x ⊕ 1, x ⊕ 2, etc., we say that we are moving clockwise, or forward. In going from x to x ⊖ 1, x ⊖ 2, . . . we say that we are moving counter-clockwise, or backward.
We define the forward distance d+(x, y) from node x to node y as the t ∈ {0, . . . , n − 1} such that x ⊕ t = y. Similarly, we define the backward distance d−(x, y) from x to y as the t ∈ {0, . . . , n − 1} such that x ⊖ t = y. Finally, the distance between any two nodes x and y is defined by d(x, y) := min{d+(x, y), d−(x, y)}. A 3-OPT move is fully specified by two sets, i.e., the set of removed and the set of inserted edges. We call a removal set any set of three tour edges, i.e., three edges of type {i, i ⊕ 1}. A removal set is identified by a triple of indices S = (i1, i2, i3), corresponding to the removed edges R(S) = {{i1, i1 ⊕ 1}, {i2, i2 ⊕ 1}, {i3, i3 ⊕ 1}}. We call any such triple S a selection. A selection is complete if d(i_j, i_h) ≥ 2 for each j ≠ h; otherwise, we say that S is a partial selection. We denote the set of all complete selections by S.
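A minimal sketch of these primitives (the helper names succ, pred, dfwd, dbwd and dist are ours):

```c
/* Circular successor/predecessor arithmetic on V = {0, ..., n-1}. */
static int succ(int x, int t, int n) { return (x + t) % n; }            /* x (+) t */
static int pred(int x, int t, int n) { return ((x - t) % n + n) % n; }  /* x (-) t */

/* Forward distance: the t in {0,...,n-1} with succ(x,t,n) == y. */
static int dfwd(int x, int y, int n) { return ((y - x) % n + n) % n; }
/* Backward distance: the t in {0,...,n-1} with pred(x,t,n) == y. */
static int dbwd(int x, int y, int n) { return ((x - y) % n + n) % n; }
/* Distance: the minimum of forward and backward distance. */
static int dist(int x, int y, int n) {
    int a = dfwd(x, y, n), b = dbwd(x, y, n);
    return a < b ? a : b;
}
```

Note the extra `+ n` before the final modulus in pred, dfwd and dbwd: in C the `%` operator may return a negative value for a negative operand, so the result must be renormalized into {0, ..., n−1}.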
Complete selections should be treated differently from partial selections, since the number of independent choices that determine a partial selection is lower than 3. For instance, the number of selections in which i2 = i1 ⊕ 1 is not cubic but quadratic, since it is enough to pick i1 and i3 in all possible ways such that d(i1, i3) ≥ 2 and then set i2 = i1 ⊕ 1. We will address the computation of the number of complete selections for a given K in Section 2.2. Clearly, if we do not impose any special requirements on the selection then there are ( n 3 ) = Θ(n^3) selections. Let S be a selection and I ⊂ E with |I| = 3. If (T \ R(S)) ∪ I is still a tour, then I is called a reinsertion set. Given a selection S, a reinsertion set I is pure if I ∩ R(S) = ∅, and degenerate otherwise. Finding the best 3-OPT move when the reinsertions are constrained to be degenerate takes time O(n^2) (in fact, 3-OPT degenerates into 2-OPT in this case). Therefore, the most computationally expensive task is to determine the best move when the selection is complete and the reinsertion is pure. We refer to this kind of move as a true 3-OPT move. Thus, in the remainder of the paper, we will focus on true 3-OPT moves.

Reinsertion Schemes and Moves
Let S = (i1, i2, i3) be a complete selection. When the edges R(S) are removed from a tour, the tour gets broken into three consecutive segments, which we can label by {1, 2, 3} (segment j ends at node i_j). Since the selection is complete, each segment is indeed a path of at least one edge. A reinsertion set patches back the segments into a new tour. If we adopt the convention of always starting a tour with segment 1 traversed clockwise, the reinsertion set: (i) determines a new ordering in which the segments are visited along the tour and (ii) may cause some segments to be traversed counterclockwise. In order to represent this fact, we use a notation called a reinsertion scheme. A reinsertion scheme is a signed permutation of {2, 3}. The permutation specifies the order in which the segments 2, 3 are visited after the move. The signing −s tells that segment s is traversed counterclockwise, while +s tells that it is traversed clockwise. For example, the third reinsertion set depicted in Figure 1 is represented by the reinsertion scheme < +3, −2 > since, from the end of segment 1, we jump to the beginning of segment 3 and traverse it forward. We then move to the last element of segment 2 and proceed backward to its first element. Finally, we close the tour by going back to the first element of segment 1.
There are potentially 2^2 × 2! = 8 reinsertion schemes, but for some of these the corresponding reinsertion sets are degenerate. A scheme for a pure reinsertion must not start with "+2", nor end with "+3", nor be < −3, −2 >. This leaves only 4 possible schemes, call them r1, . . . , r4, depicted in Figure 1. Given a selection S, the application of a reinsertion scheme r univocally determines a reinsertion set I(r, S), and clearly for every reinsertion set I there is an r such that I = I(r, S). Therefore, a 3-OPT move is fully specified by a pair (S, r), where S is a complete selection and r is a reinsertion scheme. Let us denote the set of all true 3-OPT moves by M. The enumeration of all moves can be done as follows: (i) we consider, in turn, each reinsertion scheme r = r1, . . . , r4; (ii) given r, we consider all complete selections S = (i1, i2, i3), obtaining the moves (S, r). Since step (ii) is done by complete enumeration, the cost of this procedure is Θ(n^3). In the remainder of the paper we will focus on a method for lowering significantly its complexity.
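The filtering of the 8 signed permutations of {2, 3} down to the 4 valid schemes can be checked mechanically. In this small sketch of ours, a scheme < s1, s2 > is encoded as two signed integers:

```c
#include <stdlib.h>

/* A reinsertion scheme is a signed permutation <s1, s2> of {2, 3}:
 * |s1|, |s2| give the visiting order of segments 2 and 3, and a negative
 * sign means the segment is traversed counterclockwise.
 * Degenerate cases: schemes starting with "+2", ending with "+3",
 * or equal to <-3, -2>. */
static int is_true_scheme(int s1, int s2) {
    if (s1 == 2) return 0;               /* starts with +2 */
    if (s2 == 3) return 0;               /* ends with +3 */
    if (s1 == -3 && s2 == -2) return 0;  /* <-3, -2> */
    return 1;
}

/* Count the schemes corresponding to pure (true 3-OPT) reinsertions. */
static int count_true_schemes(void) {
    int vals[4] = { 2, -2, 3, -3 }, count = 0;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            if (abs(vals[i]) != abs(vals[j]))  /* signed *permutations* of {2,3} */
                count += is_true_scheme(vals[i], vals[j]);
    return count;
}
```

Enumerating all eight signed permutations and applying the three exclusion rules leaves exactly the four schemes < −2, −3 >, < +3, +2 >, < +3, −2 > and < −3, +2 >.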

The Number of True 3-OPT Moves
For generality, we state here a result which applies to K-OPT for any K ≥ 2.

Theorem 1. For each K = 2, . . . , n/2, the number of complete K-OPT selections is ( n−K+1 K ) − ( n−K−1 K−2 ).

Proof. Assume the indices of a selection are 0 ≤ i1 < i2 < . . . < iK ≤ n̄. Consider the tour and let: x1 be the number of nodes between node 0 (included) and node i1 (excluded); xt, for t = 2, . . . , K, be the number of nodes between node i_{t−1} (excluded) and node i_t (excluded); x_{K+1} be the number of nodes between node i_K (excluded) and node n̄ (included). Then, there are as many complete selections in a K-OPT move as the nonnegative integer solutions of the equation x1 + x2 + · · · + x_{K+1} = n − K, subject to the constraints that x1 + x_{K+1} ≥ 1, and xt ≥ 1 for t = 2, . . . , K. If we ignore the first constraint and replace xt by yt + 1 for t = 2, . . . , K, we get the equation x1 + y2 + · · · + yK + x_{K+1} = n − 2K + 1 in nonnegative integer variables, which, by basic combinatorics, has ( (n−2K+1)+K K ) solutions. We then have to remove all solutions in which x1 + x_{K+1} = 0, i.e., the solutions of the equation y2 + · · · + yK = n − 2K + 1, of which there are ( (n−2K+1)+K−2 K−2 ).

Corollary 1. The number of true 3-OPT moves is 4 (( n−2 3 ) − (n − 4)), i.e., it is, asymptotically, (2/3) n^3.

Proof.
We know from Theorem 1 that there are ( n−2 3 ) − (n − 4) = (n^3 − 9n^2 + 20n)/6 complete selections, and from Section 2.1 that there are 4 reinsertion schemes. By multiplying the two values we get the claim.
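The closed form above can be cross-checked against a direct enumeration of complete selections (a verification sketch of ours, not part of the paper's procedures):

```c
/* Closed form from Theorem 1 with K = 3: binom(n-2,3) - (n-4),
 * i.e., (n^3 - 9n^2 + 20n)/6 complete selections. */
static long selections_closed_form(long n) {
    return (n * n * n - 9 * n * n + 20 * n) / 6;
}

/* Direct enumeration: triples 0 <= i1 < i2 < i3 <= n-1 with pairwise
 * cyclic distance at least 2 (including the wrap-around pair i3, i1). */
static long selections_by_enumeration(long n) {
    long count = 0;
    for (long i1 = 0; i1 < n; i1++)
        for (long i2 = i1 + 2; i2 < n; i2++)
            for (long i3 = i2 + 2; i3 < n; i3++)
                if (i1 + n - i3 >= 2)  /* cyclic gap between i3 and i1 */
                    count++;
    return count;
}
```

For example, for n = 6 both functions give 2 (the selections (0, 2, 4) and (1, 3, 5)); multiplying by the 4 schemes gives 8 true 3-OPT moves.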
In Table 1 we report the number of true moves for various values of n, giving a striking example of why the exploration of the 3-OPT neighborhood would be impractical unless some effective strategies were adopted.

Speeding-Up the Search: The Basic Idea
The goal of our work is to provide an alternative, much faster way of finding the best move than the classical "nested-for" approach, which, for each reinsertion scheme, tries all possible triples of indices and takes time Θ(n^3). (Here and in what follows, the expression P(A), given a predicate A, returns 1 if A is true and 0 otherwise.) As opposed to the brute-force approach, we will call our algorithm the smart-force procedure. Our idea for speeding up the search is based on the following consideration. Suppose there is a magic box that knows all the best moves. We can query the box, which answers in time O(1) by giving us a partial move. A partial move is the best move, but with one of the selection indices deleted (and the partial move specifies which one). For example, we might get the answer ((4, −, 9), r2), and then we would know that there is some x such that the selection (4, x, 9), with reinsertion scheme r2, is an optimal move. How many times should we query the box to quickly retrieve an optimal move? Suppose there are many optimal moves (e.g., the arc {0, 1} costs 1000 and all other arcs cost 1, so that the move ((0, x, y), r) is optimal for every (x, y) and r). Then we could call the box O(n^2) times and get the reply ((−, x, y), r) for all possible x, y, r without ever getting the value of the first index of an optimal selection. However, it is clear that with just one call to the box it is possible to compute an optimal 3-OPT move in time O(n). In fact, after the box has told us the reinsertion scheme to use and the values of two indices out of three, we can enumerate the values for the missing index (i.e., expand the two indices into a triple) to determine the best completion possible.
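As a baseline, the brute-force enumeration looks as follows. For brevity this sketch of ours is restricted to the single scheme < +3, +2 > (segments visited in the order 1, 3, 2, all traversed forward); in the full procedure the same triple loop is repeated for each of the four schemes. The cost matrix C and the helper names are our own illustration, not the paper's code.

```c
#include <math.h>

#define MAXN 64
static double C[MAXN][MAXN];  /* symmetric cost matrix, filled by the caller */

/* Value of the move ((i1,i2,i3), <+3,+2>) on the tour (0,...,n-1):
 * removed {i1,i1+1},{i2,i2+1},{i3,i3+1} (indices mod n),
 * inserted {i1,i2+1},{i3,i1+1},{i2,i3+1}. */
static double move_value(int i1, int i2, int i3, int n) {
    int s1 = (i1 + 1) % n, s2 = (i2 + 1) % n, s3 = (i3 + 1) % n;
    double removed  = C[i1][s1] + C[i2][s2] + C[i3][s3];
    double inserted = C[i1][s2] + C[i3][s1] + C[i2][s3];
    return removed - inserted;
}

/* Theta(n^3) brute force: scan all complete selections, keep the best
 * improving value (0 means no improving move was found). */
static double brute_force_best(int n) {
    double best = 0.0;
    for (int i1 = 0; i1 < n; i1++)
        for (int i2 = i1 + 2; i2 < n; i2++)
            for (int i3 = i2 + 2; i3 < n; i3++)
                if (i1 + n - i3 >= 2) {  /* cyclic gap i3 -> i1 */
                    double v = move_value(i1, i2, i3, n);
                    if (v > best) best = v;
                }
    return best;
}
```

On a toy instance with unit costs except for one expensive tour edge {0, 1} of cost 10, the best such move removes that edge and three unit insertions replace three removed edges of total cost 12, for a gain of 9.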
The bulk of our work has then been to simulate, heuristically, such a magic box, i.e., a data structure that can be queried and returns a partial move much in the same way as described above. In our heuristic version, the box, rather than returning a partial move which could certainly be completed into a best move, returns a partial move which is likely to be completable into a best move. As we will see, this can already greatly reduce the number of candidates for best moves. In order to assess the likelihood that a partial move can be completed into a best solution, we will use suitable functions described in the next sections.
Since there is no guarantee that a partial move suggested by our box can indeed be completed into an optimal move, we will have to iterate over many suggestions, looking for the best one. If there are O(n) iterations, given that each iteration costs O(n), we end up with an O(n^2) algorithm. Some expansions will be winning, in that they produce a selection better than any seen so far, while others will be losing, i.e., we enumerate O(n) completions of the partial move but none of them beats the best move seen so far. Clearly, to have an effective algorithm we must keep the number of losing iterations small. To achieve this goal, we provide a strong O(1) necessary condition that a partial move must satisfy to be a candidate for a winning iteration. When no partial move satisfies this condition, the search can be stopped, since the best move found is in fact the optimal move.
The Fundamental Quantities τ+ and τ−

In this section, we define two functions from V × V to R that are fundamental for our work. Loosely speaking, these functions will be used to determine, for each pair of indices of a selection, the contribution of that pair to the value of a move. The rationale is that the higher the contribution, the higher the probability that a particular pair is in a best selection.
The two functions are called τ+() and τ−(). For each a, b ∈ {0, . . . , n̄}, we define τ+(a, b) to be the difference between the cost from a to its successor and to the successor of b, and τ−(a, b) to be the difference between the cost from a to its predecessor and to the predecessor of b: τ+(a, b) := c(a, a ⊕ 1) − c(a, b ⊕ 1) and τ−(a, b) := c(a, a ⊖ 1) − c(a, b ⊖ 1). Clearly, each of these quantities can be computed in time O(1), and computing their values for a subset of possible pairs can never exceed time O(n^2).
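In code, the two quantities are one-liners (the cost-matrix convention and helper names are ours, as in the earlier sketches):

```c
#include <math.h>

#define MAXN 64
static double C[MAXN][MAXN];  /* symmetric edge costs, filled by the caller */

/* tau+(a,b) = c(a, a(+)1) - c(a, b(+)1): how much cheaper it is to connect
 * a to b's successor than to keep the tour edge leaving a forward. */
static double tau_plus(int a, int b, int n) {
    return C[a][(a + 1) % n] - C[a][(b + 1) % n];
}

/* tau-(a,b) = c(a, a(-)1) - c(a, b(-)1), the backward analogue. */
static double tau_minus(int a, int b, int n) {
    return C[a][(a + n - 1) % n] - C[a][(b + n - 1) % n];
}
```

Each call is O(1); precomputing all Θ(n^2) pairs is therefore an O(n^2) preprocessing step, as noted above.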

Searching the 3-OPT Neighborhood
As discussed in Section 2.1, there are four pure 3-OPT reinsertion schemes r1, . . . , r4 (see Figure 1). Notice that r3 and r4 are symmetric to r2. Therefore, we can just consider r1 and r2, since all we say about r2 can be applied, mutatis mutandis (i.e., with a suitable renaming of the indices), to r3 and r4 as well.
Given a move µ = (S, r) ∈ M, where S = (i1, i2, i3), its value is the difference between the cost of the set R(S) = {{i1, i1 ⊕ 1}, {i2, i2 ⊕ 1}, {i3, i3 ⊕ 1}} of removed edges and the cost of the reinsertion set I(r, S). We will denote the value of the move µ by ∆(µ) := c(R(S)) − c(I(r, S)). A key observation is that we can break up the function ∆(), which has Θ(n^3) possible arguments, into a sum of functions of two parameters each (each with Θ(n^2) arguments). That is, we will have

∆((i1, i2, i3), r) = f12_r(i1, i2) + f23_r(i2, i3) + f13_r(i1, i3)    (2)

for suitable functions f12_r(), f23_r(), f13_r(), each representing the contribution of a particular pair of indices to the value of the move. The domains of these functions are subsets of {0, . . . , n̄} × {0, . . . , n̄} which limit the valid input pairs to values obtained from two specific elements of a selection. For a, b ∈ {1, 2, 3}, with a < b, let us define S_ab := {(x, y) : i_a = x and i_b = y in some complete selection (i1, i2, i3)}. Then the domain of f12_r is S12, the domain of f23_r is S23 and the domain of f13_r is S13. The functions f12_r, f23_r, f13_r can be obtained through the functions τ+() and τ−(). For instance, for the scheme < +3, +2 > (segments visited in the order 1, 3, 2, all traversed forward), the inserted edges are {i1, i2 ⊕ 1}, {i3, i1 ⊕ 1} and {i2, i3 ⊕ 1}, and one obtains f12_r(i1, i2) = τ+(i1, i2), f23_r(i2, i3) = τ+(i2, i3) and f13_r(i1, i3) = τ+(i3, i1), since these three terms sum up exactly to the removed cost minus the inserted cost. (For convenience, these functions, as well as the functions of some other moves described later on, are also reported in Table 2 of Section 6.) The functions f12_r, f23_r, f13_r are used in our procedure in two important ways. First, we use them to decide which are the most likely pairs of indices to belong in an optimal selection for r (the rationale being that the higher a value f_r(x, y), the more likely it is that (x, y) belongs in a good selection).
Secondly, we use them to discard from consideration some moves which cannot be optimal. These are the moves in which no pair of indices gives a sufficiently large contribution to the total. Better said, we keep in consideration only moves for which the contribution of at least one pair of indices is large enough. With respect to the strategy outlined in Section 3, this corresponds to a criterion for deciding whether a partial move suggested by our heuristic box is worth expanding into all its completions.
Assume we want to find the best move overall, and that the best move we have seen so far (the current "champion") is µ*. We make the key observation that, for a move ((i1, i2, i3), r) to beat µ*, since its value is the sum of the three pairwise contributions, it must be that at least one of f12_r(i1, i2) > ∆(µ*)/3, f23_r(i2, i3) > ∆(µ*)/3, or f13_r(i1, i3) > ∆(µ*)/3 holds. These three conditions are not exclusive but possibly overlapping, and in our algorithm we will enumerate only moves that satisfy at least one of them. Furthermore, we will not enumerate these moves by complete enumeration, but rather from the most promising to the least promising, stopping as soon as we realize that no remaining move has the possibility of being the best overall.
The most appropriate data structure for performing this kind of search (which will hence be used for our heuristic implementation of the magic box described in Section 3) is the max-heap. A heap is perfect for retrieving the highest-valued elements of a set in decreasing order. It can be built in linear time with respect to the number of its elements and has the property that the largest element can be extracted in logarithmic time, while still leaving a heap.
We then build a max-heap H whose elements are records of type [x, y, f, α, r], where α ∈ {12, 13, 23}, r ∈ {r1, r2, r3, r4}, (x, y) ∈ Sα, and f := fα_r(x, y). The heap elements correspond to partial moves. The heap is organized according to the values of f, and the field α is a label identifying which selection indices are associated with a given heap entry. We initialize H by putting in it an element for each (x, y), r and α such that fα_r(x, y) > ∆(µ*)/3. We then start to extract the elements from the heap. Let us denote by [xj, yj, fj, αj, rj] the j-th element extracted. Heuristically, we might say that x1 and y1 are the most likely values that a given pair of indices (specified by α1) can take in the selection of a best-improving move, since these values give the largest possible contribution to the move value (2). We will keep extracting the maximum [xj, yj, fj, αj, rj] from the heap as long as fj > ∆(µ*)/3. This does not mean that we will extract all the elements of H, since ∆(µ*) could change (namely, increase) during the search, and hence the extractions might terminate before the heap is empty.
Each time we extract the heap maximum, we have that xj and yj are two possible indices out of three for a candidate move to beat µ*. With a linear-time scan, we then enumerate the third missing index (identified by αj) and see if we indeed get a better move than µ*. For example, if αj = 13 then the missing index is i2, and we run a for-loop over i2, with xj + 2 ≤ i2 ≤ yj − 2, checking each time if the move ((xj, i2, yj), rj) is a better move than µ*. Whenever this is the case, we update µ*. Note that, since ∆(µ*) increases, the number of elements still in the heap for which f > ∆(µ*)/3 after updating the champion may be considerably smaller than it was before the update. Lemma 1. When the procedure outlined above terminates, µ* is an optimal move. Proof. Suppose, by contradiction, that there exists a move µ = ((w1, w2, w3), r) with ∆(µ) > ∆(µ*). Then at least one of the three pairwise contributions to ∆(µ) exceeds ∆(µ)/3 > ∆(µ*)/3; let (a, b) be the corresponding pair of indices, of type α. The element [a, b, fα_r(a, b), α, r] would have eventually been popped from the heap and expanded, producing a move better than µ*. Therefore the procedure could not have ended with the move µ*.
Let L denote the initial size of the heap and M the number of elements extracted from the heap overall. Then the cost of the procedure is O(n^2 + L + M(log L + n)), since: (i) Θ(n^2) is the cost of computing the τ values; (ii) Θ(L) is the cost of building the heap; (iii) for each of the M elements extracted from the heap we must pay O(log L) for re-adjusting the heap, and then we complete the move in O(n) ways. In the worst case, the procedure has complexity O(n^3), like complete enumeration, but, as we will show in our computational experiments, it is much faster in practice. In fact, in our tests the running time grew only slightly faster than a quadratic function of n (see Section 5). This is because the Mn selections which are actually evaluated as candidates for becoming the best move have a much higher probability of being good than a generic selection, since two of the three indices are guaranteed to contribute considerably to the value of the move.

Worst-Case Analysis
A theoretical result stated in [11] shows that no algorithm with a less-than-cubic worst-case complexity is possible for 3-OPT (under the widely believed ALL PAIRS SHORTEST PATHS conjecture). In light of this result, we expected that there should exist some instances which force our algorithm to require cubic time. In particular, we have found the following example. Fix any ε > 0 and, for each 0 ≤ i < j ≤ n̄, define c(i, j) := 1 + ε if {i, j} is a tour edge, c(i, j) := 1 if i and j have the same parity, and c(i, j) := 1 + 4ε otherwise. For these costs, the current tour (0, . . . , n̄) is a local optimum. In fact, for each selection (i1, i2, i3), at least two of the nodes have the same parity, and hence at least one edge of value 1 + 4ε would be introduced by a move. Therefore the inserted edges would have cost at least 3 + 4ε, while the removed edges have cost 3 + 3ε.
We have τ+(i, j) = c(i, i ⊕ 1) − c(i, j ⊕ 1) = (1 + ε) − 1 = ε > 0 for each of the Θ(n^2) pairs (i, j) such that i and j ⊕ 1 have the same parity. Each such pair would be put in the heap and expanded, fruitlessly, at a cost of Θ(n) per pair, for a total running time of Ω(n^3).
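The construction can be checked programmatically. Note that our reading of the cost definition here is an assumption on our part (the original definition is garbled in our source): tour edges cost 1 + ε, chords between nodes of the same parity cost 1, and the remaining chords cost 1 + 4ε. The sketch below verifies, for the scheme < +3, +2 >, that under this reading no improving move exists while Θ(n^2) pairs still show a positive τ+ of exactly ε.

```c
#include <math.h>

#define N 12                   /* any even n works; small for checking */
static const double EPS = 1e-3;

/* Assumed reconstruction of the worst-case instance (see lead-in). */
static double cost(int i, int j) {
    int d = ((i - j) % N + N) % N;
    if (d == 1 || d == N - 1) return 1.0 + EPS;   /* tour edge */
    if ((i + j) % 2 == 0)     return 1.0;         /* same parity */
    return 1.0 + 4.0 * EPS;                       /* different parity */
}

static double tau_plus(int a, int b) {  /* tau+(a,b) on this instance */
    return cost(a, (a + 1) % N) - cost(a, (b + 1) % N);
}

/* Value of the move ((i1,i2,i3), <+3,+2>), to probe local optimality. */
static double delta(int i1, int i2, int i3) {
    int s1 = (i1 + 1) % N, s2 = (i2 + 1) % N, s3 = (i3 + 1) % N;
    return cost(i1, s1) + cost(i2, s2) + cost(i3, s3)
         - cost(i1, s2) - cost(i3, s1) - cost(i2, s3);
}

/* Returns 1 iff no complete selection yields an improving <+3,+2> move. */
static int no_improving_move(void) {
    for (int i1 = 0; i1 < N; i1++)
        for (int i2 = i1 + 2; i2 < N; i2++)
            for (int i3 = i2 + 2; i3 < N; i3++)
                if (i1 + N - i3 >= 2 && delta(i1, i2, i3) > 1e-12)
                    return 0;
    return 1;
}
```

Under this reading, a pair such as (0, 3) has τ+(0, 3) = (1 + ε) − c(0, 4) = ε, whereas a same-parity pair such as (0, 2) gives τ+(0, 2) = −3ε.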

Implementing the Search Procedure
We now describe more formally the procedure that finds the best improving 3-OPT move. To simplify the description of the code, we will make use of a global variable µ * representing the current champion move. Initially µ * is NULL, and we extend the function ∆() by defining ∆(NULL) := 0.
The main procedure is FIND3-OPTMOVE (Procedures 1 and 2). The procedure starts by setting µ * to a solution for which, possibly, ∆(µ * ) > 0. This is not strictly necessary, and setting µ * to NULL would work too, but a little effort in finding a "good" starting champion can yield a smaller heap and thus a faster, overall, procedure. In our case, we have chosen to just sample 4n random solutions and keep the best.
µ ← ((i, j, k), r) 7. return µ The procedure then builds a max-heap (see Procedure 3 BUILDHEAP) containing an entry for each r ∈ r 1 , . . . , r 4 , α ∈ {12, 23, 13} and (x, y) ∈ S α such that f α r (x, y) > ∆(µ * )/3. The specific functions f α r , for each α and r, are detailed in Table 2 of Section 6. The heap is implemented by an array H of records with five fields: x and y, which represent two indices of a selection; alpha which is a label identifying the two type of indices; val which is the numerical value used as the key to sort the heap; and scheme which is a label identifying a reinsertion scheme. By using the standard implementation of a heap (see, e.g., [15]), the array corresponds to a complete binary tree whose nodes are stored in consecutive entries from 1 to H. SIZE In the procedure BUILDHEAP we use a function range1st2nd(α) which returns the set of all values that a pair (x, y) of indices of type α can assume. More specifically, -range1st2nd(12) := {(x, y) : 0 ≤ x < x + 2 ≤ y ≤n − 2 − P (x = 0)} /* x is i 1 and y is i 2 */ -range1st2nd(23) := {(x, y) : 2 + P (y =n) ≤ x < x + 2 ≤ y ≤n} /* x is i 2 and y is i 3 */ -range1st2nd(13) := {(x, y) : 0 ≤ x < x + 4 ≤ y ≤n − P (x = 0)} /* x is i 1 and y is i 3 */ for α = 12, 23, 13 5.
for (x, y) ∈ range1st2nd(α) H Coming back to FIND3-OPTMOVE, once the heap has been built, a loop (lines 3-8) extracts the partial moves from the heap, from the most to the least promising, according to their value (field val). For each partial move popped from the heap we use a function range3rd(x, y, α) to obtain the set of all values for the missing index with respect to a pair (x, y) of type α. More specifically.
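The three ranges above can be enumerated directly; the following is a sketch (Python, for illustration; our implementation is in C), where P(·) is the 0/1 indicator used in the definitions.

```python
def P(cond):
    """The 0/1 indicator used in the range definitions."""
    return 1 if cond else 0

def range1st2nd(alpha, n):
    """All pairs (x, y) of indices of type alpha, transcribed from the text."""
    pairs = []
    if alpha == 12:        # x is i1 and y is i2
        for x in range(0, n - 1):
            pairs += [(x, y) for y in range(x + 2, n - 2 - P(x == 0) + 1)]
    elif alpha == 23:      # x is i2 and y is i3
        for y in range(4, n + 1):
            pairs += [(x, y) for x in range(2 + P(y == n), y - 2 + 1)]
    elif alpha == 13:      # x is i1 and y is i3
        for x in range(0, n - 3):
            pairs += [(x, y) for y in range(x + 4, n - P(x == 0) + 1)]
    return pairs
```

Writing the sets in this explicit form makes it easy to check that every generated pair respects the required gap between consecutive indices.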
We then complete the partial move in all possible ways, and each way is compared to the champion to see if there has been an improvement (line 7). Whenever the champion changes, the condition for loop termination becomes easier to satisfy than before.
The procedure EXTRACTMAX (Procedure 5) returns the maximum-value element of the heap (which must be in the root node, i.e., H[1]). It then replaces the root with the leaf H[H.SIZE] and, by calling heapify(H, 1), moves it down along the tree until the heap property is again fulfilled. The cost of this procedure is O(log(H.SIZE)).
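The heap operations just described can be sketched as follows (Python, for illustration; our C implementation follows the standard array layout of [15]). Entries are tuples (val, x, y, alpha, scheme), H[0] stores H.SIZE, and the array is 1-based as in the text.

```python
def heapify(H, i):
    """Sift H[i] down until the max-heap property holds (H is 1-based)."""
    size = H[0]
    while True:
        left, right, largest = 2 * i, 2 * i + 1, i
        if left <= size and H[left][0] > H[largest][0]:
            largest = left
        if right <= size and H[right][0] > H[largest][0]:
            largest = right
        if largest == i:
            return
        H[i], H[largest] = H[largest], H[i]
        i = largest

def extract_max(H):
    """Return the maximum-value entry (the root) and restore the heap property."""
    size = H[0]
    root = H[1]
    H[1] = H[size]      # replace the root with the last leaf
    H[0] = size - 1     # shrink the heap
    H.pop()
    if H[0] >= 1:
        heapify(H, 1)   # sift the new root down: O(log size)
    return root

def build_heap(entries):
    """Bottom-up BUILDHEAP over a list of (val, x, y, alpha, scheme) records."""
    H = [len(entries)] + list(entries)   # H[0] stores the size
    for i in range(len(entries) // 2, 0, -1):
        heapify(H, i)
    return H
```

Building the heap bottom-up costs O(H.SIZE), while each extraction costs O(log(H.SIZE)), matching the analysis in the text.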

Computational Results
In this section, we report on our extensive computational experiments showing the effectiveness of the approach we propose. All tests were run on an Intel® Core™ i7-7700 (8 CPUs, 3.6 GHz clock) under Linux Ubuntu, with 16 GB of RAM. The programs were implemented in C, compiled under gcc 5.4.0, and are available upon request.

Instance Types
The experiments were run on two types of graphs: (i) random graphs generated by us and (ii) instances from the standard benchmark repository TSPLIB [16]. The random graphs are divided into three categories (referred to as UNI, GAU and GEO in what follows). Our goal in using more than one type of random distribution was to assess whether our method is sensitive, and if so to what extent, to variations in the type of edge costs involved. The results will show that the GEO instances are slightly harder to tackle, while UNI and GAU are more or less equivalent.

Random Instances
In a first set of experiments we compared our method to the brute-force (BF) approach on random instances of the types described above. In particular, for each type of random graph we generated instances of sizes n_1, . . . , n_9, where n_1 = 400 and n_{i+1} = √2 × n_i for i = 1, . . . , 8. This geometric increase is chosen so that we should see a doubling of the running times in going from n_i to n_{i+1} with a quadratic method, while the ratio should be about 2√2 ≈ 2.83 with the cubic BF approach.
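The size schedule and the expected per-step ratios can be illustrated with a short sketch (Python, for illustration; rounding the non-integral sizes to the nearest integer is our assumption, as the text does not specify it).

```python
import math

def size_schedule(n1=400, steps=9):
    """Sizes n_1, ..., n_9 with n_{i+1} = sqrt(2) * n_i, rounded to integers."""
    return [round(n1 * math.sqrt(2) ** i) for i in range(steps)]

# Per-step running-time ratios expected for a quadratic and a cubic method:
quadratic_ratio = math.sqrt(2) ** 2   # about 2
cubic_ratio = math.sqrt(2) ** 3       # 2*sqrt(2), about 2.83
```

With these assumptions the schedule runs from n_1 = 400 up to n_9 = 6400, with every second size an exact power-of-two multiple of 400 (800, 1600, 3200, 6400).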
For each given size n i we generated 1000 random instances. Finally, for each random instance, we generated a random initial tour and found the best 3-OPT move with both smart force (SF) and BF.
In Table 3 we report the results of the experiment. The table is divided vertically into three parts. The first part reports the average running time of the two approaches (columns BF avg and SF avg) and the speed-up achieved by smart force over brute force. We can see that on our instances smart force is at least 70 times and up to 500 times faster than brute force. Similar results are reported in the second vertical part, in which we focus on the number of triples evaluated by the two methods. Clearly, the average for the BF method is a constant, since all triples must be evaluated. SF, on the other hand, evaluates a much smaller number of triples, from a minimum of 200 times fewer up to 7000 times fewer triples than BF. Notice how the saving in running time is actually smaller than the saving in the number of triples evaluated; this is due to the overhead needed for building and updating the heap. The standard deviations for the SF times (not reported in the table for space reasons) were in the range of 10-25% of the mean value, while for the number of evaluated triples the standard deviations were between 30% and 40%.
In the final vertical section, we focus on an empirical estimate of a complexity function for the running time and the number of evaluated triples of the smart force approach. In particular, the column SF time reports the ratio between the average time of SF on instances of size n_k and of size n_{k−1}. Given the way we defined the n_k's, a value of 2 in this column would be indicative of a quadratic algorithm, while a value of 2√2 ≈ 2.83 would be indicative of a cubic algorithm. The average ratios for the running time of the SF algorithm were 2.37, 2.31 and 2.47 for the UNI, GAU and GEO instances, respectively. These values are indicative of an algorithm whose empirical average complexity is definitely subcubic, but not quite quadratic. A reasonably good fit with an experimental upper bound to the time complexity is O(n^2.5) (see Figure 2).
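The correspondence between these per-step ratios and an empirical exponent can be made explicit: if the running time grows as n^p and n grows by a factor √2 per step, the per-step ratio is (√2)^p, so p = log(ratio)/log(√2). A small sketch (Python, for illustration):

```python
import math

def empirical_exponent(ratio):
    """Exponent p such that ratio = sqrt(2)**p, i.e., time(n) ~ n**p."""
    return math.log(ratio) / math.log(math.sqrt(2))
```

The measured ratios 2.37, 2.31 and 2.47 correspond to exponents of roughly 2.49, 2.42 and 2.61, broadly consistent with the experimental O(n^2.5) upper bound mentioned above.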
The column labeled SF eval reports the same type of ratios, this time relative to the number of triples evaluated by SF. We can see that for UNI and GAU graphs the number of triples grows almost exactly as a quadratic function of n. Notice that evaluating fewer than a quadratic number of triples would be impossible, since each edge must be looked at and there is a quadratic number of edges. Again, the class of GEO graphs seems to be slightly harder than UNI and GAU.
For completeness, in columns BF time and BF eval we report the same type of values for the BF method.
The main message from the table is that while finding the best 3-OPT move for a tour of about 6000 nodes with the classical BF approach can take more than half an hour, with our approach it can be done in 5 s or so.

TSPLIB Instances
We have then run the same type of experiment on instances from the TSPLIB. For each instance, we generated 1000 random starting tours and found the best 3-OPT move. The results are reported in Table 4. The instances are all the Euclidean, Geographical and Explicit instances (according to the TSPLIB categories) with sizes up to 6000 nodes. Smaller-size instances were ignored since the running times are too small, for both SF and BF, to be measured precisely. Indeed, for n ≤ 200 finding the best move by SF yields only a small advantage in running time over the BF approach. The effect increases with increasing n, and we get about a factor-50 speed-up for instances with 3000 ≤ n ≤ 6000. The largest improvement is achieved on one instance of size n = 2319, where SF is 77 times faster than BF and finds the best move in about two seconds versus two and a half minutes.
Table 3. Comparing smart force (SF) and brute force (BF) in finding one best improving move starting from a random tour and using pure 3-OPT moves only (averaged over 1000 attempts).

Random Instances
A second set of experiments concerned the behavior of our algorithm over a sequence of iterations, i.e., in a full local search convergence starting from a random tour until a local optimum is reached. Since the time required for this experiment is considerable, we ran it only on a subset of the previous instances, namely the instances with n ≤ 1600.
The goal of the experiment was to assess how sensitive the method is to the quality of the current tour. Note that for BF the quality of the tour is irrelevant: the time needed to find the best move is in practice constant at each step, since all moves must be looked at (the number of times the incumbent gets updated varies, which accounts for tiny differences in the time needed for a move). With our method, however, the quality of the current tour matters: when the tour is "poor", we expect a heap with a large number of candidates, but we also expect most extractions from the heap to be winning (i.e., they determine an improvement of the current champion), so that the stopping criterion is reached relatively soon. On the other hand, if the tour is "very good", we expect a small heap, since few candidate moves can be improving, but we also expect most extractions from the heap to be losing (i.e., they do not improve the current champion), so that the stopping criterion will be hard to reach.
We can summarize this idea with the following trade-off: (i) When there are many improving moves (i.e., at the beginning of local search) we have many candidates on the heap, but the pruning is effective and we only expand a few of them. (ii) When there aren't many improving moves (i.e., near the local optimum) we have very few candidates on the heap, but the pruning is not effective and we need to expand most of them.
The time for a move is the product between the number M of expansions and the cost Θ(n) of an expansion and so we are interested in determining how M changes along the convergence.
The experiment was conducted as follows. We used random instances of the families described before. For each value of n and instance type, we generated 10 random instances, and for each of them we started 10 convergences from random tours (a total of 100 runs for each value of n). For each run we took 10 "snapshots", at 10%, 20%, . . . , 100% of the search. In particular, we divided the path followed to the local optimum into ten parts and averaged the values over each part. Notice that the paths can have different lengths, but this way we normalize them: the first 10% corresponds to "very close to the random starting tour and very far from the local optimum", each subsequent tenth is one step farther from the random tour and closer to the optimum, and the last 10% corresponds to "very far from the random starting tour and very close to the local optimum".
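The normalization step can be sketched as follows (Python, for illustration; the helper and its slicing are our own formulation of the averaging just described).

```python
def snapshots(values, parts=10):
    """Average the per-move values over `parts` equal slices of a convergence."""
    k = len(values)
    out = []
    for p in range(parts):
        lo, hi = p * k // parts, (p + 1) * k // parts
        chunk = values[lo:hi] or values[lo:lo + 1]  # guard for very short runs
        out.append(sum(chunk) / len(chunk))
    return out
```

Applying this to each run makes convergences of different lengths directly comparable, slice by slice.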
The results are summarized in Table 5. Rows labeled BF time and SF time report, respectively, the average time of BF and SF (in milliseconds, rounded to an integer) for each tenth of the convergence. The column labeled "Avg time" reports the average time of each convergence and of each move within the convergence over all 100 tests for each instance size, for both BF and SF. The final column "Speed-up" reports the speed-up factor of SF over BF.
Rows "SF heap" report the average size of the heap. Rows "SF exps" report the average number of expansions (i.e., elements popped from the heap). Rows "SF wins" report the average number of winning expansions (i.e., improvements of the champion).
The behavior of SF appears to be sensitive to the type of edge costs involved. While the statistics for UNI and GAU graphs are very similar, the running times for GEO instances are slightly larger. Looking at rows "SF heap", it can be seen that the numbers decrease very quickly during the convergence for all types of instances. One interesting phenomenon, however, is that on GEO instances we start with a larger heap than on UNI and GAU but end with a smaller one, so that the rate of decrease of the heap size is higher for this type of instance.
The analysis of rows "SF exps" shows how for UNI and GAU instances, the numbers are relatively stable in the beginning and then start to increase around halfway through the convergence. For GEO instances, on the other hand, these values start by increasing until they reach a peak at around 20%, and then they start to drastically decrease.
Finally the rows "SF wins" for all instance types contain numbers which are quite small, stable in the beginning and then decreasing until the end of the convergence.
From Table 5 we can see that the net effect of having fewer elements on the heap but more expansions is that SF takes more or less the same time along the convergence. This is particularly true for UNI and GAU instances, with a little slow down while approaching the local optimum, while for GEO instances the effectiveness of the method increases when nearing the local optimum.
In Figure 3 we can see a graphical visualization of the aforementioned trade-off between the heap size and the number of expansions. The figure describes how the heap size and the number of expansions change during the convergence for the 100 UNI instances of size n = 1600. Since the lengths of the convergences differ, they have been normalized to 100 steps. That is, for each convergence i = 1, . . . , 100 and step k = 1, . . . , 100 we determined h^i_k (the average heap size in the interval from k − 1 to k percent of the i-th convergence), e^i_k (the average number of expansions in the same interval) and t^i_k (the average time for a move in the interval). Once all the convergences have been normalized to the same length, we take the average of the values h, e and t over all convergences in each of the 100 intervals: for each interval k, let h̄_k be the average of h^i_k over all i, and define the averages ē_k and t̄_k similarly. The plots of h̄, ē and t̄ are shown in Figure 3. The x-axis is labeled by the 100 intervals; the y-axis is labeled by the number of elements (left) and by the time in seconds (right). It can be seen that the heap size stays more or less constant for about 20% of the convergence and then starts to decrease rapidly (with the exception of a peak at around 40%). The number of expansions stays more or less constant for about one-third of the convergence and then starts to increase in a linear fashion. The time for each move follows a similar curve, since t̄ is proportional (with a factor n) to ē. The curves for h̄ and ē approach each other until they almost touch at the end when, basically, all of the heap elements must be expanded and pruning is ineffective.
Overall, as shown by Table 4, the use of SF over BF allows for speed-up factors of about two orders of magnitude on instances of 1600 nodes.

TSPLIB Instances
We then performed the same type of experiment on the TSPLIB instances previously described. For each instance, we ran a local search all the way to a local optimum, starting from 100 random tours.
The results are reported in Table 6. This type of experiment is extremely time-consuming when BF is run on large graphs. For this reason, not all the times reported for BF are actual measurements: the results for graphs with n ≥ 3000 are estimates of the running times. In particular, when n ≥ 3000, BF would have to evaluate more than 18 billion triples at each move. Say the exact number is T_n. Then we only evaluate the first T_0 := 100,000,000 triples. Assume this takes time t_0 (usually less than 2 s on our computer). Then we estimate the actual time of a move as t_n := t_0 (T_n / T_0). This is indeed a lower bound, since in computing t_0 we do not account for updates to the current best triple (i.e., we removed the if statement and its body from the nested for loops of BF).
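This extrapolation is a simple linear scaling; as a one-line sketch (Python, for illustration):

```python
def estimate_bf_time(t0, Tn, T0=100_000_000):
    """Lower-bound estimate t_n := t0 * (Tn / T0) for one brute-force move,
    where t0 is the measured time for the first T0 triples."""
    return t0 * (Tn / T0)
```

For example, if the first 10^8 triples take 2 s, a move requiring 18 × 10^9 triples is estimated at 360 s.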
From Table 6 we can see that SF outperforms BF with improvements that range from 500% faster to more than 20,000% faster (instance pcb3038). These improvements are smaller than those for random graphs, but still, the experiment shows how with our method it is now possible to run a local search by using the 3-OPT neighborhood on TSPLIB instances that were previously impossible to tackle. For instance, reaching a 3-OPT local optimum for pcb3038 with the BF nested-for procedure would have taken at least two weeks (!) while with SF it can be done in less than 2 h.

Other Types of Cubic Moves
The ideas outlined for the 3-OPT neighborhood, and the corresponding successful computational results, have prompted us to investigate the possibility of similarly speeding up some other types of cubic moves. In this section we give a few examples to show that indeed this can be done.
Consider, for instance, some special types of K-OPT moves (where K > 3 edges are taken out of the tour and replaced by K new edges) in which the removed edges are identified by three indices i_1, i_2, i_3. For instance, let K = 6 and let the removed edges be the six tour edges incident to the nodes i_1, i_2 and i_3. The tour is then reconnected in any way that excludes the removed edges (clearly there are many possibilities; we will just look at a few). To describe the way in which the tour is reconnected we can still use the concept of a reinsertion scheme. Let us describe the tour before the move (by default, clockwise) as A, i_1, B, i_2, C, i_3, where A = (i_3 ⊕ 1, . . . , i_1 ⊖ 1), B = (i_1 ⊕ 1, . . . , i_2 ⊖ 1) and C = (i_2 ⊕ 1, . . . , i_3 ⊖ 1). In the new tour, each segment X ∈ {A, B, C} can be traversed clockwise (denoted by X+) or counter-clockwise (denoted by X−). A reinsertion scheme is then a permutation of {A+, B, C, i_1, i_2, i_3}, starting with A+ (we adopt the convention that A is always the first segment and is always traversed clockwise) and with B and C signed either '+' or '−'. Let us first consider two simple examples of reinsertion schemes, i.e., those in which the move maintains the pattern "segment, node, segment, node, segment, node" and the segments keep the clockwise orientation and the original order (i.e., A+, B+, C+). This leaves only two possible reinsertion schemes for the true moves (as many as the derangements of {1, 2, 3}), one of which is (see Figure 4) r_5 := <A+, i_2, B+, i_3, C+, i_1>.
The variable L_ij is therefore the difference of two uniform random variables, and this definitely complicates a probabilistic analysis of the algorithm. If one makes the simplifying assumption that the lengths L_ij are independent uniform random variables drawn in [0, 1], then it can be shown that finding the largest triangle in a graph takes quadratic time on average, and this already requires a quite complex proof [18].
However, since in our case the variables L_ij are neither independent nor uniformly distributed, we were not able to prove that the expected running time is subcubic, and such a proof appears very difficult to obtain.