
Algorithms 2020, 13(1), 5; https://doi.org/10.3390/a13010005

Article
Simple Constructive, Insertion, and Improvement Heuristics Based on the Girding Polygon for the Euclidean Traveling Salesman Problem
1 Centro de Investigación en Ciencias UAEMor, Universidad Autónoma del Estado de Morelos, 62209 Cuernavaca, Mexico
2 Facultad de Contaduría, Administración e Informática UAEMor, 62209 Cuernavaca, Mexico
3 Facultad de Matemáticas UAGro, Universidad Autónoma de Guerrero, 39650 Acapulco, Mexico
* Author to whom correspondence should be addressed.
Received: 15 November 2019 / Accepted: 17 December 2019 / Published: 21 December 2019

Abstract:
The Traveling Salesman Problem (TSP) aims at finding the shortest trip for a salesman who has to visit each of the locations from a given set exactly once, starting and ending at the same location. Here, we consider the Euclidean version of the problem, in which the locations are points in the two-dimensional Euclidean space and the distances are, correspondingly, Euclidean distances. We propose simple, fast, and easily implementable heuristics that work well, in practice, for large real-life problem instances. The algorithm works in three phases: the constructive, the insertion, and the improvement phases. The first two phases run in time O(n²), and the number of repetitions in the improvement phase, in practice, is bounded by a small constant. We have tested the practical behavior of our heuristics on the available benchmark problem instances. The approximation provided by our algorithm for the tested benchmark problem instances did not beat the best known results. At the same time, comparing the CPU time used by our algorithm with that of the earlier known ones, in about 92% of the cases our algorithm required less computational time. Our algorithm is also memory efficient: for the largest tested problem instance with 744,710 cities, it used about 50 MiB, whereas the average memory usage for the remaining 217 instances was 1.6 MiB.
Keywords:
heuristic algorithm; traveling salesman problem; computational experiment; time complexity

1. Introduction

The Traveling Salesman Problem (TSP) is one of the most studied strongly NP-hard combinatorial optimization problems. Given an n × n matrix of distances between n objects, call them cities, one looks for a shortest possible feasible tour, which can be seen as a permutation of the given n objects: a feasible tour visits each of the n cities exactly once, except the first visited city, with which the tour also ends. The cost of a tour is the sum of the distances between each pair of neighboring cities in that tour. This problem can also be described in graph terms. We have an undirected weighted complete graph G = (V, E), where V is the set of n = |V| vertices (cities) and E is the set of the n(n − 1)/2 edges (i, j) = (j, i), i ≠ j. The non-negative weight w(i, j) of an edge (i, j) is the distance between vertices i and j. Two basic sets of restrictions define a feasible solution: a tour has to start and complete at the same vertex and has to contain all the vertices from set V exactly once. A feasible tour T can be represented as
T = (i_1, i_2, …, i_{n−1}, i_n, i_1); i_k ∈ V,
and its cost is
C(T) = Σ_{k=1}^{n−1} w(i_k, i_{k+1}) + w(i_n, i_1).
The objective is to find an optimal tour, i.e., a feasible tour with the minimum cost min_T C(T).
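For concreteness, the tour representation and its cost can be sketched in code as follows (an illustrative Python fragment; the paper's implementation is in C++, and the function name `tour_cost` is ours):

```python
import math

def tour_cost(points, tour):
    """Cost C(T) of a feasible tour given as a list of vertex indices;
    the closing edge back to the first vertex is added implicitly."""
    return sum(math.dist(points[tour[k]], points[tour[(k + 1) % len(tour)]])
               for k in range(len(tour)))
```

For the four corners of a 4 × 4 square visited in order, the cost is the perimeter, 16.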
Some special cases of the problem are commonly considered. For instance, in the symmetric version, the distance matrix is symmetric (i.e., for each edge (i, j), w(i, j) = w(j, i)); in another setting, the distances between the cities are Euclidean distances (i.e., set V can be represented as points in the two-dimensional Euclidean space). Clearly, the Euclidean TSP is also a symmetric TSP, but not vice versa. The Euclidean TSP has an immediate application in the real-life scenario in which a salesman wishes to visit the cities using the shortest possible tour. Because in the Euclidean version the cities are points in the plane, the triangle inequality holds for each pair of points, which makes the problem a bit more accessible in the sense that, unlike in the general setting, simple geometric rules can be used for calculating the cost of a tour or the cost of the inclusion of a new point in a partial tour. Nevertheless, the Euclidean TSP remains strongly NP-hard; see Papadimitriou [1] and Garey et al. [2].
The exact solution methods for TSP can only solve problem instances with a moderate number of cities; hence, approximation algorithms are of a primary interest. There exist a vast amount of approximation heuristic algorithms for TSP. The literature on TSP is very wide-ranging, and it is not our goal to overview all the important relevant work here (we refer the reader, e.g., to a book by Lawler et al. [3] and an overview chapter by Jünger [4]).
The literature distinguishes two basic types of approximation algorithms for TSP: tour construction and loop improvement algorithms. The construction heuristics create a feasible tour in one pass, so that the decisions taken are not reconsidered later. A feasible solution delivered by a construction heuristic can be used in a loop improvement heuristic as an initial feasible solution (though such an initial solution can also be constructed randomly). Given the current feasible tour, an improvement algorithm iteratively, based on some local optimality criteria, makes some changes in that tour, resulting in a new feasible solution with a lower cost. Well-known examples of tour improvement algorithms are 2-Opt by Croes [13], its generalizations 3-Opt and k-Opt, and the algorithm by Lin and Kernighan [5], to mention a few.
The most successful algorithms we have found in the literature for large-scale TSP instances are Ant Colony Optimization (ACO) metaheuristics, with which we compare our results. On the one hand, these algorithms give a good approximation. On the other hand, the traditional ACO-based algorithms tend to require considerable computer memory, which is necessary to keep an n × n pheromone matrix. Typically, the selection of each next move using ACO is also costly in time. These drawbacks are addressed in some recent ACO-based algorithms in which, at each iteration of the calculation of the pheromone levels, the intermediate data are reduced by storing only a limited number of the most promising tours in computer memory. With Partial ACO (PACO), only some part of a known good tour is altered. A PACO-based heuristic was proposed in Chitty [6], and experimental results for four problem instances from the Art Gallery library were reported. Effective Strategies + ACO (ESACO) uses pheromone values directly in the 2-Opt local search for the solution improvement and reduces the pheromone matrix, yielding linear space complexity (see, for example, Ismkman [7]). Parallel Cooperative Hybrid Algorithm ACO (PACO-3Opt) uses a multi-colony of ants to prevent a possible stagnation (see, for example, Gülcü et al. [8]). In a very recent Restricted Pheromone Matrix Method (RPMM) [9], the pheromone matrix is reduced with a linear memory complexity, resulting in an essentially lower memory consumption. Another recent successful ACO-based heuristic, Dynamic Flying ACO (DFACO), was proposed by Dahan et al. [10]. Besides these ACO-based heuristics, we have compared our heuristics with two other metaheuristics. One of them is a parallel algorithm based on the nearest neighborhood search suggested by Al-Adwan et al. [11], and the other one, proposed by Zhong et al. [12], is a Discrete Pigeon-Inspired Optimization (DPIO) metaheuristic.
We have also directly implemented the Nearest Neighbor (NN) algorithm for comparison purposes (see Section 4 and Appendix A).
In Table A1 in Appendix A, we give a summary of the above heuristics, including information on the type and number of instances for which these algorithms were tested and the number of runs of each of these algorithms. Unlike these heuristics, the heuristic that we propose here is deterministic, in the sense that, for any input, it delivers the same solution each time it is invoked; hence, there is no need for repeated runs of our algorithm. We have tested the performance of our algorithm on 218 benchmark problem instances (the number of reported instances for the algorithms from Table A1 varies from 6 to 36). The relative error of our algorithm for the tested instances did not beat the earlier known best results; however, for some instances, our error was better than that of the above-mentioned algorithms (see Table 9 at the end of Section 3). The error percentage provided by our algorithm varied from 0% to 17%, with an average relative error of 7.16%. The standard error deviation over all the tested instances was 0.03.
In terms of the CPU time, our algorithm was faster than the ones from Table A1, except for six instances from the Art Gallery library, on which RPMM [9] and Partial-ACO [6] were faster, and two instances from TSPLIB, on which DPIO [12] was faster (see Table 10). Among all the comparisons we made, in about 92% of the cases our algorithm required less computational time. We halted the execution of our algorithm for the two above-mentioned largest problem instances after 15 days, and for the next largest instance, ara238025 with 238,025 cities, our algorithm halted in about 36 h. The average CPU time for the remaining instances was 19.2 min. The standard CPU time deviation for these instances was 89.3 min (for all the instances, including the above-mentioned three largest ones, it was 2068.4 min).
Our algorithm consumes very little computer memory. For the largest problem instance with 744,710 cities, it used only about 50 MiB (mebibytes). The average memory usage for the remaining 217 instances was 1.6 MiB (the average for all the instances, including the above largest one, was 1.88 MiB). The standard deviation of the memory usage is 4.6 MiB. Equation (3) below (see also Figure 15 in Section 3) shows the dependence of the memory required by our algorithm on the total number of cities n. As we can observe, this dependence is linear:
RAM = 0.0000685·n + 0.563 MiB.
Our algorithm consists of the constructive, the insertion, and the improvement phases; we call it the Constructive, Insertion, and Improvement algorithm, the CII-algorithm for short. The constructive heuristic of Phase 1 delivers a partial tour that includes solely the points of the girding polygon. The insertion heuristic of Phase 2 completes the partial tour of Phase 1 to a complete feasible tour using the cheapest insertion strategy: iteratively, the current partial tour is augmented with a new point, one yielding the minimal increase in the cost in an auxiliary, specially formed tour. We use simple geometry in the decision-making process at Phases 2 and 3. The tour improvement heuristic of Phase 3 iteratively improves the tour of Phase 2 based on local optimality conditions: it uses two heuristic algorithms which carry out some local rearrangement of the current tour. At Phase 1, the girding polygon for the points of set V and an initial, yet infeasible (partial) tour including the vertices of that polygon are constructed in time O(n²). The initial tour of Phase 1 is iteratively extended with new points from the internal area of the polygon at Phase 2. Phase 2 also runs in time O(n²) and basically uses the triangle inequality for the selection of each newly added point. Phase 3 uses two heuristic algorithms. The first one, called 2-Opt, is a local search algorithm proposed by Croes [13]. The second one is based on the procedure of Phase 2. The two heuristics are repeatedly applied in the iterative improvement cycle until a special approximation condition is satisfied. The number of repetitions in the improvement cycle, in practice, is bounded by a small constant. In particular, the average number of repetitions for all the tested instances was about 9 (the maximum of 49 repetitions was attained for one of the moderately sized instances, lra498378, and for the largest instance, lrb744710 with 744,710 points, Phase 3 was repeated 18 times).
The rest of the paper is organized as follows. In Section 2, we describe the CII-algorithm and show its time complexity. In Section 3, we give the implementation details and the results of our computational experiments, and, in Section 4, we give some concluding remarks and possible directions for the future work. The tables presented in Appendix A contain the complete data of our computational results.

2. Methods

We start this section with a brief aggregated description of our algorithm; in the following subsections, we describe its three phases (Figure 1).

2.1. Phase 1

2.1.1. Procedure to Locate the Extreme Points

At Phase 1, we construct the girding polygon for the points of set V and construct an initial, yet infeasible (partial) tour that includes the points of that polygon. The construction of this polygon employs four extreme points v_1, v_2, v_3, and v_4: the uppermost, leftmost, lowermost, and rightmost, respectively [14], which are points from set V defined as follows. First, we define the sets of points T, L, B, and R with T = { i | y_i is maximum, i ∈ V }, L = { i | x_i is minimum, i ∈ V }, B = { i | y_i is minimum, i ∈ V }, and R = { i | x_i is maximum, i ∈ V }. Then,
v_1 = j | x_j is maximum; j ∈ T,
v_2 = j | y_j is maximum; j ∈ L,
v_3 = j | x_j is minimum; j ∈ B, and
v_4 = j | y_j is minimum; j ∈ R.
The formal description of Procedure extreme_points is given in Table 1.
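To make the tie-breaking concrete, a minimal Python sketch of this selection (our own illustrative reimplementation, not the authors' C++ code; the function and variable names are ours) could read:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def extreme_points(points: List[Point]) -> Tuple[int, int, int, int]:
    """Return indices (v1, v2, v3, v4) of the uppermost, leftmost,
    lowermost, and rightmost points, breaking ties as in the text:
    v1 has maximum x among the topmost points, v2 maximum y among the
    leftmost, v3 minimum x among the bottommost, and v4 minimum y
    among the rightmost."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    T = [i for i, y in enumerate(ys) if y == max(ys)]  # topmost points
    L = [i for i, x in enumerate(xs) if x == min(xs)]  # leftmost points
    B = [i for i, y in enumerate(ys) if y == min(ys)]  # bottommost points
    R = [i for i, x in enumerate(xs) if x == max(xs)]  # rightmost points
    v1 = max(T, key=lambda i: xs[i])
    v2 = max(L, key=lambda i: ys[i])
    v3 = min(B, key=lambda i: xs[i])
    v4 = min(R, key=lambda i: ys[i])
    return v1, v2, v3, v4
```

Each of the four scans is a single pass over the points, matching the O(n) bound of Lemma 1.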
Lemma 1.
The time complexity of Procedure extreme_points is O(n).
Proof of Lemma 1.
In this and the following proofs, we only consider those lines of the formal descriptions in which the number of elementary operations, denote it by f(n), depends on n (ignoring the lines yielding a constant number of operations). In lines 5–9, there is a loop with n − 1 cycles; hence, f(n) = n − 1. In lines 11–15, there is a loop with n cycles; hence, f(n) = n. In lines 20–21, 22–23, 24–25, and 26–27, there are four loops, each with at most n cycles, so f(n) = 4n. Hence, the total cost is O(n). □

2.1.2. Procedure for the Construction of the Girding Polygon

Before we describe the procedure, let us define function θ ( i , j ) , returning the angle formed between the edge ( i , j ) and the positive direction of the x-axis (Equation (8) and Figure 2):
θ(i, j) = arccos((x_j − x_i)/w(i, j)) if arcsin((y_j − y_i)/w(i, j)) ≥ 0, and θ(i, j) = 2π − arccos((x_j − x_i)/w(i, j)) if arcsin((y_j − y_i)/w(i, j)) < 0.
The girding polygon P = P(V) is a convex geometric figure in the two-dimensional plane such that any point in V either lies on that polygon or belongs to its interior area; see Vakhania et al. [14].
The input of our procedure for the construction of polygon P (see Table 2), consists of (i) the set of vertices V and (ii) the distinguished extreme points v 1 , v 2 , v 3 and v 4 . Abusing slightly the notation, in the description below, we use: (i) P, for the array of the points that form the girding polygon, and (ii) k for the last vertex included so far into the array P. Initially, P : = ( v 1 ) and k : = v 1 .
Lemma 2.
The time complexity of Procedure polygon is O(n²).
Proof of Lemma 2.
There are four independent while statements with a similar structure, each of which can be repeated at most n times. In the first line of each of these while statements, in lines 4, 11, 18, and 25, the set of points V* is formed, which yields f(n) = 2n operations. In lines 5, 12, 19, and 26, the set of n − 1 edges E* is formed in time f(n) = n − 1. In lines 6, 13, 20, and 27, the set of angles Θ* consisting of at most n − 1 elements is formed in time f(n) = n − 1. In lines 7, 14, 21, and 28, to find the minimum angle in set Θ*, at most n − 1 comparisons are needed, and the lemma follows. □
In Figure 3, we illustrate an example with V = {1, 2, …, 6}, with coordinates X = {x_1, x_2, …, x_6} and Y = {y_1, y_2, …, y_6}. The extreme points are v_1 = 4, v_2 = 2, v_3 = 5, and v_4 = 5, and P = (4, 2, 5, 4). Initially, P = (4). Then, vertex 2 is added to the polygon in Step 1 and vertex 5 is added in Step 2; Step 3 is not carried out because v_3 = v_4; vertex 4 is added at Step 4.
Using polygon P(V) constructed by Procedure polygon, we obtain our initial, yet infeasible (partial) tour T_0 = (t_1, t_2, …, t_m, t_1) that is merely formed by all the points t_1, t_2, …, t_m of that polygon, where t_1 = v_1 and m is the number of these points.
In the example of Figure 3, P gives the initial infeasible tour T_0 = (4, 2, 5, 4), and V \ T_0 = {1, 3, 6} is the set of points that will be inserted into the final tour.
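The polygon construction can be sketched as follows. This is an illustrative Python version that replaces the angle test θ(i, j) with the standard equivalent cross-product test of gift wrapping (Jarvis march); the function name `girding_polygon` and the starting-point choice (the leftmost point) are ours:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def girding_polygon(points: List[Point]) -> List[int]:
    """Gift-wrapping construction of the girding (convex) polygon,
    returned as a list of vertex indices starting from the leftmost
    point. Where the paper selects each next vertex by the minimum
    angle theta(i, j), the cross product gives the same choice."""
    n = len(points)
    start = min(range(n), key=lambda i: (points[i][0], points[i][1]))
    hull = [start]
    current = start
    while True:
        cand = (current + 1) % n            # provisional next vertex
        for j in range(n):
            if j == current:
                continue
            ox, oy = points[current]
            ax, ay = points[cand]
            bx, by = points[j]
            cross = (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)
            # take j if it turns further outward than the candidate,
            # or is collinear with it but farther away
            if cross > 0 or (cross == 0 and
                             (bx - ox) ** 2 + (by - oy) ** 2 >
                             (ax - ox) ** 2 + (ay - oy) ** 2):
                cand = j
        if cand == start:                   # wrapped around: polygon closed
            break
        hull.append(cand)
        current = cand
    return hull
```

For a square with one interior point, the interior point is excluded, exactly as point sets inside P(V) are left for Phase 2.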

2.2. Phase 2

The initial tour of Phase 1 is iteratively extended with new points from the internal area of polygon P ( V ) using the cheapest insertion strategy at Phase 2 [15].
Let l ∉ T_{h−1} be a candidate point to be included in tour T_{h−1}, resulting in an extended tour T_h of iteration h > 0, and let t_i ∈ T_{h−1}. Due to the triangle inequality, w(t_i, l) + w(l, t_{i+1}) ≥ w(t_i, t_{i+1}); i.e., the insertion of point l between points t_i and t_{i+1} will increase the current total cost C(T_{h−1}) by w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}) ≥ 0 (see Figure 4). Once point l is included between points t_i and t_{i+1}, for the convenience of the presentation, we let t_{m+1} := t_m, t_m := t_{m−1}, …, t_{i+3} := t_{i+2}, t_{i+2} := t_{i+1}, and t_{i+1} := l (due to the way in which we represent our tours, this re-indexing yields no extra cost in our algorithm).
In Table 3, we give a formal description of our procedure that inserts point l between points t i and t i + 1 in tour T.

Procedure construct_tour

At each iteration h, the current tour T_{h−1} is extended by the point l_h ∈ V \ T_{h−1} yielding the minimum cost c_l^h (defined below), which represents the increase in the current total cost C(T_{h−1}) if that point is included into the current tour T_{h−1}. The cost for point l ∈ V \ T_{h−1} is defined as follows:
c_l^h = min_{t_i ∈ T_{h−1}} { w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}) }.
For further references, we denote by i(l) the index of point t_i for which the above minimum for point l is reached, i.e., w(t_{i(l)}, l) + w(l, t_{i(l)+1}) − w(t_{i(l)}, t_{i(l)+1}) = min_{t_i ∈ T_{h−1}} { w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}) }.
Thus, l h is a point that attains the minimum
min { c_l^h | l ∈ V \ T_{h−1} },
whereas the ties can be broken arbitrarily.
To speed up the procedure, we initially calculate the minimum cost for each point l ∈ V \ T_{h−1}. After the insertion of point l_h, the minimum cost c_l^h is updated as follows:
c_l^h := min { c_l^{h−1}, w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}), w(t_{i+1}, l) + w(l, t_{i+2}) − w(t_{i+1}, t_{i+2}) }.
We can now describe Procedure construct_tour as shown in Table 4.
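A compact sketch of this cheapest-insertion step is given below. This illustrative Python version recomputes all insertion costs at each iteration for brevity, whereas the paper's Procedure construct_tour caches and updates the costs c_l^h to keep the total work within O(n²); all names are ours:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def dist(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def construct_tour(points: List[Point], tour: List[int]) -> List[int]:
    """Cheapest-insertion completion of a partial tour.  `tour` lists
    vertex indices without repeating the first one; each remaining
    point is inserted where it increases the tour cost the least."""
    tour = list(tour)
    remaining = set(range(len(points))) - set(tour)
    while remaining:
        best = None                      # (cost increase, point l, position i)
        for l in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = (dist(points[a], points[l]) +
                         dist(points[l], points[b]) -
                         dist(points[a], points[b]))  # triangle-inequality gap
                if best is None or delta < best[0]:
                    best = (delta, l, i)
        _, l, i = best
        tour.insert(i + 1, l)            # place l between t_i and t_{i+1}
        remaining.remove(l)
    return tour
```

For a square tour with one point lying close to an edge, the point is inserted on that nearest edge.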
Lemma 3.
The time complexity of Procedure construct_tour is O(n²).
Proof of Lemma 3.
In lines 2–3, there is a for statement with n − (m + h − 1) repetitions. To calculate c_l^h in line 3, the same number of repetitions is needed, and the total cost of the for statement is [n − (m + h − 1)][n − (m + h − 1)] = n² − 2(m + h − 1)n + (m + h − 1)². The while statement in lines 4–9 is repeated at most n − (m + h − 1) times. In line 5, to calculate c_{l_h}^h (Equation (10)), n − (m + h − 1) comparisons are required. In lines 7–8, there is a for statement nested in the above while statement with n − (m + h) repetitions. Hence, the total cost is [n² − 2(m + h − 1)n + (m + h − 1)²] + [n − (m + h − 1)]{[n − (m + h − 1)] + [n − (m + h)]} = [n² − 2(m + h − 1)n + (m + h − 1)²] + [n − (m + h − 1)][2n − (2m + 2h − 1)] = 3n² − (6m + 6h − 5)n + (3m² + 4mh − 5m − 5h + 3h² + 2) = O(n²). □
In the example of Figure 5, T_0 = (4, 2, 5, 4). The costs c_l^1, l ∈ V \ T_0, are calculated as follows: c_1^1 = min { w(4, 1) + w(1, 2) − w(4, 2), w(2, 1) + w(1, 5) − w(2, 5), w(5, 1) + w(1, 4) − w(5, 4) } = w(5, 1) + w(1, 4) − w(5, 4), c_3^1 = min { w(4, 3) + w(3, 2) − w(4, 2), w(2, 3) + w(3, 5) − w(2, 5), w(5, 3) + w(3, 4) − w(5, 4) } = w(4, 3) + w(3, 2) − w(4, 2), and c_6^1 = min { w(4, 6) + w(6, 2) − w(4, 2), w(2, 6) + w(6, 5) − w(2, 5), w(5, 6) + w(6, 4) − w(5, 4) } = w(4, 6) + w(6, 2) − w(4, 2).
Hence, min { c_1^1, c_3^1, c_6^1 } = c_6^1 = w(4, 6) + w(6, 2) − w(4, 2); l_1 = 6 and i(6) = 4. Therefore, point 6 is included in tour T_1 between points 4 and 2 (Figure 6).
Now, T_1 = (4, 6, 2, 5, 4), and the minimum costs c_l^2 for each point l ∈ V \ T_1 are: c_1^2 = min { c_1^1, w(4, 1) + w(1, 6) − w(4, 6), w(6, 1) + w(1, 2) − w(6, 2) } = w(4, 1) + w(1, 6) − w(4, 6) and c_3^2 = min { c_3^1, w(4, 3) + w(3, 6) − w(4, 6), w(6, 3) + w(3, 2) − w(6, 2) } = w(6, 3) + w(3, 2) − w(6, 2).
Hence, min { c_1^2, c_3^2 } = c_3^2 = w(6, 3) + w(3, 2) − w(6, 2); l_2 = 3 and i(3) = 6. Therefore, point 3 is included in tour T_2 between points 6 and 2 (Figure 7).
Now, T_2 = (4, 6, 3, 2, 5, 4), and the minimum cost c_l^3, l ∈ V \ T_2, is
c_1^3 = min { c_1^2, w(6, 1) + w(1, 3) − w(6, 3), w(3, 1) + w(1, 2) − w(3, 2) } = c_1^2.
Hence, min { c_1^3 } = c_1^2 = w(4, 1) + w(1, 6) − w(4, 6); l_3 = 1 and i(1) = 4. Therefore, point 1 is included in tour T_3 between points 4 and 6 (Figure 8).
The resultant tour T = T_3 = (4, 1, 6, 3, 2, 5, 4) includes all the points from set V, and Procedure construct_tour halts.

2.3. Phase 3

At Phase 3, we iteratively improve the feasible tour T delivered by Phase 2. We use two heuristic algorithms. The first one, called 2-Opt, is a local search algorithm proposed by Croes [13]. The second one, named improve_tour, is based on our construct_tour procedure. The current solution (initially, the tour delivered by Phase 2) is repeatedly improved, first by the 2-Opt heuristic and then by Procedure improve_tour, as long as there is an improvement. Phase 3 halts if either the output of one of the heuristics has the same objective value as its input (by construction, the output cannot be worse than the input) or the following condition is satisfied:
C(T_in) − C(T_out) ≤ dif_min,
where dif_min is a constant (for instance, we let dif_min = 0.0001). Thus, initially, the 2-Opt heuristic runs with input T. Repeatedly, Condition (12) is verified for the output of every call of each of the heuristics. If it is satisfied, Phase 3 halts; otherwise, for the output of the last called heuristic, the other one is invoked and the whole procedure is repeated; see Figure 9.

2.3.1. Procedure 2-Opt

Procedure 2-Opt is a local search algorithm improving a feasible solution T = (t_1, t_2, …, t_n, t_1) (n = |V|). It is well known that the time complexity of this procedure is O(n²). For the completeness of our presentation, we give a formal description of this procedure in Table 5.
The result of a local replacement carried out by the procedure is represented schematically in Figure 10.
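A minimal sketch of such a 2-Opt pass is given below (an illustrative Python version; the paper's formal description is in Table 5, and all names are ours). Whenever replacing the two edges (t_i, t_{i+1}) and (t_j, t_{j+1}) by (t_i, t_j) and (t_{i+1}, t_{j+1}) shortens the tour, the segment between the two positions is reversed:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def dist(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def two_opt(points: List[Point], tour: List[int]) -> List[int]:
    """Repeat 2-Opt exchanges until no pair of edges can be swapped
    to shorten the tour (local optimum)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                 # same pair of tour edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if (dist(points[a], points[c]) + dist(points[b], points[d]) <
                        dist(points[a], points[b]) + dist(points[c], points[d])):
                    # uncross the two edges by reversing the middle segment
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On a square visited in a crossing order, one exchange removes the crossing and restores the perimeter tour.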

2.3.2. Procedure improve_tour

We also use our algorithm construct_tour to improve a feasible solution T = (t_1, t_2, …, t_n, t_1), n = |V|. Iteratively, point t_{i+1}, 1 ≤ i < n, is removed from the tour T and is reinserted by a call of Procedure construct_tour(V, T \ {t_{i+1}}). If a removed point gets reinserted in the same position, then i := i + 1, and the procedure continues until i = n (see Table 6).
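The reinsertion step can be sketched as follows (an illustrative Python version; instead of calling construct_tour as in the paper, it inlines the same single-point cheapest-insertion rule, and the names are ours):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def dist(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def improve_tour(points: List[Point], tour: List[int]) -> List[int]:
    """Remove each point in turn (except the fixed first point t_1)
    and reinsert it at its cheapest position in the remaining tour."""
    for l in list(tour[1:]):
        tour.remove(l)
        best_pos, best_delta = 0, None
        for k in range(len(tour)):           # cheapest place for l
            a, b = tour[k], tour[(k + 1) % len(tour)]
            delta = (dist(points[a], points[l]) + dist(points[l], points[b])
                     - dist(points[a], points[b]))
            if best_delta is None or delta < best_delta:
                best_pos, best_delta = k, delta
        tour.insert(best_pos + 1, l)
    return tour
```

A point initially placed on the wrong edge of the tour migrates to its cheapest edge in one sweep.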
Figure 11 illustrates the iterative improvement in the cost of the solutions obtained at Phase 3 for a sample problem instance, usa115475. The initial solution T_0 of Phase 2 is iteratively improved as shown in the diagram.
Lemma 4.
The time complexity of Procedure improve_tour is O(n²).
Proof of Lemma 4.
In lines 2–7, there is a while statement with n − 1 repetitions. The call of Procedure construct_tour in line 5 yields the cost O(n), since m = n − 1 and h = 1; see the proof of Lemma 3 (m is the number of points in the current partial tour). The lemma follows. □

3. Implementation and Results

The CII-algorithm was coded in C++ and compiled with g++ on a server with a 2x Intel Xeon E5-2650 0 @ 2.8 GHz processor (Cuernavaca, Mor., Mexico), 32 GB of RAM, and the Ubuntu 18.04 (bionic) operating system (we have used only one CPU in our experiments). We did not keep the cost matrix in computer memory but rather calculated the costs from the coordinates of the points. This does not increase the computation time too much and considerably reduces the required computer memory.
We have tested the performance of the CII-algorithm on 85 benchmark instances from the TSPLIB [16] library and 135 benchmark instances from the TSP Test Data [17] library. The detailed results are presented in Appendix A. In our tables, parameter “Error” specifies the approximation factor of algorithm H compared to the cost of the best known solution, C(BKS):
Error_H = (C(T_H) − C(BKS)) / C(BKS) × 100%.
In Table 7 below, we give the data on the average performance of our heuristics. The average error percentage of our heuristics is calculated using Formula (13). The table shows, for each group of instances, the average error of the solutions delivered by Phase 2 and by Phase 3, the number of cycles at Phase 3, and the average decrease at Phase 3 in the cost of the solution compared to that of Phase 2.
In the diagrams below (on the left-hand side), we illustrate the dependence of the approximation given by our algorithm on the size of the tested instances, and the dependence of the execution time of our algorithm on the size of the instances (right-hand side diagrams). We classify the tested instances into three groups: small (from 1 to 199 points, Figure 12), middle-sized (from 200 to 9999 points, Figure 13), and large instances (from 10,000 to 250,000 points, Figure 14). We do not include the data for the largest two problem instances, lra498378 and lrb744710, because their inclusion would complicate the visualization. The error for these instances is 12.5% and 15.9%, respectively, and the CPU time was limited to two weeks for both instances. As we can see, at Phase 3, there is an improvement in the quality of the solutions delivered by Phase 2.
Table 8 shows the summary of the comparison statistics of the solutions delivered by our algorithm CII with the solutions obtained by the heuristics mentioned in the Introduction (namely, DFACO [10], ACO-3Opt [10], ESACO [7], PACO-3Opt [8], DPIO [12], ACO-RPMM [9], Partial ACO [6], and PRNN [11]). We may observe in Table 9 that algorithm CII attained an improved approximation for 17 instances. At the same time, in terms of the execution time, our heuristic dominates the other heuristics.
In Table 9, we specify the problem instances for which our algorithm provided a better relative error than some of the earlier cited algorithms.
Table 10 presents the CPU time comparison.
In the diagram below (Figure 15), we illustrate the dependence of the memory used by our algorithm on the size of the tested instances.

4. Conclusions and Future Work

We have presented a simple, easily implementable, and fast heuristic algorithm for the Euclidean traveling salesman problem that solves both small and large scale instances with an acceptable approximation and consumes little computer memory. Since the algorithm uses simple geometric calculations, it is easily implementable. The algorithm is fast: the first two phases run in time O(n²), whereas the number of improvement repetitions in the third phase, in practice, is not large. The first two phases might be used independently of the third phase, for instance, for the generation of an initial tour in more complex loop improvement heuristics. The quality of the solution delivered already by Phase 2 is acceptable and is expected to greatly outperform that of a random solution normally used to initiate meta-heuristic algorithms. We have implemented the NN (Nearest Neighbor) heuristic and run the code for the benchmark instances (the initial vertex for the NN heuristic was selected randomly). Phase 2 gave essentially better results. On average, for the 135 tested instances (6 large, 32 medium, and 97 small ones), the difference between the approximation factor obtained by the procedure of Phase 2 and that of the Nearest Neighbor heuristic was 9.65% (the average error of Phase 2 was 16.89% and that of NN was 26.55%, whereas the standard deviations were similar, 0.05% and 0.04%, respectively). As for the overall algorithm, it uses a negligible amount of computer memory. Although for most of the tested benchmark instances it did not improve the best known results, the execution time of our heuristic, on average, was better than the earlier reported best known times. For future work, we intend to create a more powerful, yet more complex, CII-algorithm by augmenting each of the three phases of our algorithm with alternative ways for the creation of the initial tour and alternative insertion and improvement procedures.

Author Contributions

Conceptualization, N.V. and J.M.S.; Methodology, V.P.-V.; Validation, N.V.; Formal Analysis, N.V. and J.M.S.; Investigation, V.P.-V.; Resources, UAEMor administrated by J.A.H.; Writing—original draft preparation, V.P.-V.; Writing—review and editing, N.V.; Visualization, V.P.-V. and N.V.; Supervision, N.V.; Project administration, N.V.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In the table below (Table A1), we give some details on the earlier mentioned heuristics with which we compare our results (the entries in the column “Runs” specify the number of the reported runs of the corresponding heuristic).
Table A1. Heuristics used to compare the CII-algorithm.
Heuristic Id | Heuristic Name | Number of Reported Instances | Runs
ACO-RPMM [9] | ACO - Restricted Pheromone Matrix Method | 6 Large | 10
Partial ACO [6] | Partial ACO | 4 Large and 5 Small | 100
DFACO [10] | Dynamic Flying ACO | 30 Small | 100
ACO-3Opt [10] | ACO-3Opt | 30 Small | 100
DPIO [12] | Discrete Pigeon-inspired optimization with Metropolis acceptance | 1 Large, 6 Medium and 28 Small | 25
PACO-3Opt [8] | Parallel Cooperative Hybrid Algorithm ACO | 21 Small | 20
ESACO [7] | Effective Strategies + ACO | 5 Medium and 17 Small | 20
PRNN [11] | Parallel Repetitive Nearest Neighbor | 3 Medium and 9 Small | n = |V|
NN | Nearest Neighbor Algorithm | 4 Large, 25 Medium and 61 Small | 1
The next table (Table A2) describes the headings of our tables.
Table A2. Description of the headings of Table A3–Table A6.
Header | Header Description
|V| | the number of vertices in the instance
Opt? | “yes” if the Best Known Solution (BKS) is optimal, “no” otherwise
C(BKS) | the cost of the BKS
C(T) | the cost of the solution constructed by the CII heuristic
RAM | the RAM used by the CII heuristic
# | the number of cycles at Phase 3 of the CII heuristic
Error | as defined in Formula (13)
C_avg(T_H) | the average cost of the solutions obtained by heuristic H
Heuristic Id | nomenclature used in Table A1
Time | the processing time of a heuristic
ms, s, m, h, d | time units for milliseconds, seconds, minutes, hours, and days, respectively
In the tables below, each line corresponds to a particular benchmark instance. For each instance, we indicate the performance of Phase 2 and Phase 3 separately, together with that of the other heuristics reporting results for that instance. 85 benchmark instances were taken from the TSPLIB [16] library, and 135 instances from the TSP Test Data [17] library. Table A3, Table A4, and Table A6 include the earlier known results.
In some lines of our tables (e.g., line 1 of Table A5), a slight difference can be seen between the approximation errors of our algorithm and those of the algorithms from the “Results for National TSP Benchmarks” table. This is due to the way the distances in the obtained solutions are represented in our algorithm: we do not round the distances represented as decimal numbers, whereas the distances in the best known solutions are rounded.
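The rounding effect described above can be reproduced with a small helper. This is a sketch: the function name `tour_length` is ours, and we assume TSPLIB's EUC_2D convention of rounding each edge length to the nearest integer (Python's `round()` uses banker's rounding, which differs from TSPLIB's nint only at exact .5 ties).

```python
import math

def tour_length(points, rounded=False):
    """Length of the closed tour visiting `points` in the given order.
    With rounded=True, each edge is rounded to the nearest integer,
    approximating TSPLIB's EUC_2D distance convention."""
    n = len(points)
    total = 0.0
    for i in range(n):
        d = math.dist(points[i], points[(i + 1) % n])
        total += round(d) if rounded else d
    return total
```

For a tour over (0, 0), (1, 1), (2, 0), the exact length is 2 + 2√2 ≈ 4.83, while the rounded length is 1 + 1 + 2 = 4; the two conventions can thus yield slightly different reported errors for the same tour.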
Table A3. Results for TSPLIB benchmarks.
Instance | CII Heuristic (Phase 2) | CII Heuristic (Phase 3) | Other Heuristics
|V| | Opt? | C(BKS) | C(T) | Error_CII | Time | C(T) | Error_CII | Time | RAM | # | C_min(T_H) | Error_H | Time | Heuristic Id
eil5151yes4264546.6%0.4 ms4546.6%1.0 ms0.5 MiB14260.0%1.0 sDFACO
4260.0%1.0 sACO-3Opt
4260.0%1.1 sESACO
berlin5252yes754280586.8%1.1 ms80586.8%4.5 ms0.6 MiB375420.0%1.0 sDFACO
75420.0%1.0 sACO-3Opt
st7070yes6757105.2%0.6 ms7013.8%11.1 ms0.6 MiB382622.3%0.4 msNN
eil7676yes5385767.0%0.7 ms5563.4%2.2 ms0.6 MiB35380.0%3.0 sDFACO
5380.0%3.0 sACO-3Opt
5380.0%1.4 sESACO
pr7676yes108,159114,8086.1%0.7 ms112,9114.4%3.0 ms0.6 MiB4148,34837.2%0.5 msNN
rat9999yes121112946.9%1.0 ms12301.5%9.4 ms0.6 MiB3144219.1%0.8 msNN
kroA100100yes21,28223,0508.3%1.1 ms21,4430.8%3.5 ms0.6 MiB321,2820.0%2.0 sDFACO
21,2820.0%2.0 sACO-3Opt
21,2820.0%2.6 sESACO
kroB100100yes22,14123,2475.0%1.1 ms22,7162.6%3.3 ms0.6 MiB322,1410.0%2.0 sDFACO
22,1410.0%2.0 sACO-3Opt
kroC100100yes20,74921,6324.3%1.1 ms20,9220.8%3.8 ms0.6 MiB320,7490.0%2.0 sDFACO
20,7490.0%2.0 sACO-3Opt
kroD100100yes21,29421,7122.0%1.1 ms21,5821.4%3.4 ms0.6 MiB321,2940.0%3.0 sDFACO
21,2940.0%3.0 sACO-3Opt
kroE100100yes22,06822,8703.6%1.0 ms22,5282.1%8.3 ms0.6 MiB322,0680.0%2.0 sDFACO
22,0680.0%2.0 sACO-3Opt
rd100100yes791084657.0%1.2 ms82454.2%3.8 ms0.6 MiB379100.0%2.0 sDFACO
79100.0%2.0 sACO-3Opt
eil101101yes6296797.9%1.1 ms6665.9%19.5 ms0.6 MiB36290.0%12.0 sDFACO
6290.0%10.0 sACO-3Opt
lin105105yes14,37914,9133.7%1.2 ms14,4400.4%3.8 ms0.6 MiB314,3790.0%2.0 sDFACO
14,3790.0%2.0 sACO-3Opt
14,3790.0%2.0 sESACO
pr107107yes44,30345,7303.2%1.1 ms45,2622.2%18.1 ms0.6 MiB554,12122.2%0.9 msNN
pr124124yes59,03062,1935.4%1.4 ms60,0551.7%5.3 ms0.6 MiB373,00823.7%1.3 msNN
bier127127yes118,282121,5442.8%5.4 ms121,5442.8%5.6 ms0.6 MiB3118,2820.0%47.0 sDFACO
118,2820.0%56.0 sACO-3Opt
ch130130yes611066769.3%1.7 ms61901.3%27.9 ms0.6 MiB961100.0%13.0 sDFACO
61100.0%16.0 sACO-3Opt
pr136136yes96,772102,9346.4%1.7 ms98,7112.0%9.9 ms0.6 MiB5125,45829.6%1.2 msNN
pr144144yes58,53760,6253.6%2.1 ms59,9022.3%6.8 ms0.6 MiB364,88610.8%1.4 msNN
ch150150yes652870387.8%2.1 ms67463.3%11.5 ms0.6 MiB36,5280.0%24.0 sDFACO
65280.0%17.0 sACO-3Opt
kroA150150yes26,52428,8148.6%2.2 ms27,2302.7%10.2 ms0.6 MiB526,5240.0%57.0 sDFACO
26,5240.0%1.4 mACO-3Opt
kroB150150yes26,13027,4765.2%2.2 ms26,3991.0%26.4 ms0.6 MiB526,1300.0%7.0 sDFACO
26,1300.0%9.0 sACO-3Opt
pr152152yes73,68276,9524.4%2.3 ms74,6051.3%19.0 ms0.6 MiB586,90617.9%1.4 ms
u159159yes42,08047,59113.1%2.6 ms46,87511.4%15.7 ms0.6 MiB353,91828.1%1.6 msNN
rat195195yes2323256910.6%3.7 ms24857.0%16.2 ms0.6 MiB4282621.7%2.0 msNN
d198198yes15,78016,8626.9%3.9 ms16,1192.1%32.6 ms0.6 MiB415,7800.0%6.5 sESACO
kroA200200yes29,36831,7928.3%3.9 ms30,7674.8%17.5 ms0.6 MiB529,3680.0%2.8 mDFACO
29,3790.04%3.5 mACO-3Opt
29,3680.0%4.7 sESACO
kroB200200yes29,43732,1239.1%3.7 ms30,6314.1%11.8 ms0.6 MiB329,4420.02%3.1 mDFACO
29,4430.02%2.3 mACO-3Opt
ts225225yes126,643157,16324.1%4.5 ms132,8034.9%30.6 ms0.6 MiB7151,68519.8%2.5 msNN
tsp225225yes3916444213.4%4.9 ms41836.8%22.9 ms0.6 MiB5473320.9%2.7 msNN
pr226226yes80,36983,6374.1%4.8 ms82,1512.2%18.2 ms0.6 MiB394,25817.3%2.5 ms
gil262262yes2378268112.8%6.5 ms25396.8%45.4 ms0.6 MiB6310230.5%3.4 msNN
pr264264yes49,13553,4168.7%6.4 ms50,4022.6%41.4 ms0.6 MiB558,61519.3%3.6 msNN
a280280yes257926864.1%33.6ms 26864.1%52.9 ms0.6 MiB525790.0%4.5 sESACO
pr299299yes48,19152,9129.8%8.1 ms50,2254.2%43.6 ms0.6 MiB563,25431.3%4.3 msNN
lin318318yes42,02946,90411.6%9.4 ms45,0637.2%38.8 ms0.6 MiB442,2280.5%6.4 mDFACO
42,2440.5%5.8 mACO-3Opt
42,0540.06%10.2 sESACO
linhp318318yes41,34546,90413.4%9.4 ms45,0639.0%37.3 ms0.6 MiB450,29921.7%5.1 msNN
rd400400yes15,28117,14612.2%14.7 ms16,1585.7%92.8 ms0.6 MiB615,3840.7%2.2 mPACO-3Opt
15,6142.2%24.9 mDFACO
fl417417yes11,86112,6806.9%14.6 ms12,2953.7%119 ms0.6 MiB811,8800.2%1.6 mPACO-3Opt
11,9871.1%34.1 mDFACO
pr439439yes107,217120,67912.6%17.8 ms112,5315.0%66.7 ms0.6 MiB3107,5160.3%2.4 mPACO-3Opt
108,7021.4%35.5 mDFACO
pcb442442yes50,77858,74615.7%17.7 ms53,2754.9%126 ms0.7 MiB751,0470.5%2.2 mPACO-3Opt
52,2022.8%34.8 mDFACO
50,8040.05%11.5 sESACO
d493493yes35,00239,05011.6%21.8 ms37,0455.8%129 ms0.6 MiB535,2660.8%2.3 mPACO-3Opt
35,8412.4%52.9 mDFACO
u574574yes36,90542,43515.0%29.7 ms39,3556.6%247 ms0.6 MiB937,3671.3%1.9 mPACO-3Opt
38,0313.0%1.5 hDFACO
rat575575yes6773769213.6%29.4 ms72156.5%231 ms0.7 MiB870123.5%1.4 hPACO-3Opt
p654654yes34,64337,5428.4%37.6 ms36,4415.2%179 ms0.6 MiB534,7410.3%1.7 mDFACO
35,0751.2%2.5 hPACO-3Opt
d657657yes48,91256,26815.0%36.7 ms51,5535.4%265 ms0.6 MiB749,4631.1%2.3 mDFACO
50,2772.8%2.4 hPACO-3Opt
u724724yes41,91048,19815.0%60.9 ms44,7486.8%264 ms0.7 MiB642,4381.3%2.3 mDFACO
43,1222.9%3.2 hPACO-3Opt
rat783783yes880610,21816.0%54.1 ms94547.4%332 ms0.7 MiB610,49219.1%2.5 mDFACO
10,52519.5%15.4 mACO-3Opt
91273.6%4.0 hPACO-3Opt
88100.04%22.6 sESACO
dsj10001000yes18,659,68821,836,51417.0%83.6 ms20,225,5848.4%460 ms0.7 MiB518,732,0880.4%16.6 sDPIO
dsj1000ceil1000yes18,660,18821,836,51417.0%83.5 ms20,225,5848.4%452 ms0.6 MiB523,813,05027.6%39 msNN
pr10021002yes259,045295,87914.2%87.7 ms276,1226.6%744 ms0.7 MiB5260,4260.5%14.3 sDPIO
259,5090.2%35.8 sESACO
260,3660.5%14.1 sDPIO
u10601060yes224,094261,09316.5%99.5 ms239,7057.0%1.0 s0.7 MiB11224,9320.4%15.3 sDPIO
vm10841084yes239,297275,98915.3%104 ms257,3997.6%901 ms0.6 MiB9240,0790.3%17.4 sDPIO
pcb11731173yes56,89267,49718.6%124 ms60,7926.9%775 ms0.7 MiB757,2430.6%17.8 sDPIO
d12911291yes50,80158,23014.6%136 ms54,2856.9%927 ms0.7 MiB751,4591.3%19.4 sDPIO
rl13041304yes252,948302,66119.7%148 ms277,1939.6%1.2 s0.7 MiB9253,7400.3%21.5 sDPIO
rl13231323yes270,199322,96419.5%157 ms288,5016.8%1.3 s0.7 MiB9273,3681.2%38.1 mDFACO
273,9701.4%37.8 mACO-3Opt
271,2450.4%22.2 sACO-3Opt
271,3010.4%22.0 sDPIO
nrw13791379yes56,63864,92514.6%168 ms59,9055.8%1.2 s0.7 MiB856,9320.5%23.2 sDPIO
fl14001400yes20,12721,8008.3%162 ms21,0714.7%1.8 s0.7 MiB1020,3010.9%40.9 mDFACO
20,2920.8%41.2 mACO-3Opt
20,3421.1%24.6 sACO-3Opt
20,2110.4%24.5 sDPIO
u14321432yes152,970171,17911.9%181 ms160,2604.8%1.1 s0.7 MiB7153,5640.4%23.9 sDPIO
fl15771577yes22,24925,51314.7%210 ms24,51810.2%1.4 s0.7 MiB722,2890.2%25.3 sDPIO
22,2930.2%46.4 sESACO
d16551655yes62,12870,77913.9%225 ms65,5205.5%1.5 s0.7 MiB763,7082.5%25.4 mDFACO
63,7222.6%29.2 mACO-3Opt
62,7691.0%27.5 sACO-3Opt
62,3570.4%27.2 sDPIO
vm17481748yes336,556394,38917.2%267 ms365,6088.6%2.0 s0.7 MiB7338,1180.5%34.3 sDPIO
u18171817yes57,20165,78315.0%395 ms61,4537.4%1.8 s0.7 MiB757,5220.6%30.3 sDPIO
rl18891889yes316,536376,71519.0%319 ms344,5148.8%2.1 s0.8 MiB7318,7140.7%36.6 sDPIO
d21032103yes80,45086,2867.3%373 ms82,8563.0%2.5 s0.7 MiB780,5670.1%23.8 sDPIO
u21522152yes64,25375,21617.1%516 ms68,7667.0%2.7 s0.7 MiB764,7910.8%25.9 sDPIO
u23192319yes234,256254,4208.6%501 ms238,7851.9%3.1 s0.7 MiB7236,1580.8%34.2 sDPIO
pr23922392yes378,032443,37217.3%495 ms408,2378.0%3.0 s0.7 MiB6380,3460.6%29.7 sDPIO
pcb30383038yes137,694160,90916.9%807 ms146,3786.3%6.2 s0.8 MiB9138,6840.7%43.5 sDPIO
fl37953795yes28,77233,00214.7%1.2 s29,8823.9%35.6 s0.9 MiB3429,2091.5%1.1 mDPIO
28,8830.4%2.0 mESACO
fnl44614461yes182,566211,06415.6%1.9 s195,7867.2%11.1 s0.9 MiB7184,5601.1%44.2 sDPIO
183,4460.5%3.2 mESACO
rl59155915yes565,530664,78817.6%3.1 s605,6877.1%31.2 s1.0 MiB11571,2141.0%1.1 mDPIO
568,9350.6%3.6 mESACO
rl59345934yes556,045666,29519.8%3.2 s599,0667.7%25.8 s1.0 MiB9561,8781.0%48.7 sDPIO
pla73977397yes23,260,72827,709,17519.1%4.4 s25,075,6787.8%45.3 s1.1 MiB1123,605,2191.5%1.8 mDPIO
23,389,3410.6%3.6 mESACO
rl1184911,849yes923,2881,103,85419.6%12.4 s994,6067.7%2.3 m1.4 MiB11933,0931.1%5.0 mDPIO
930,3380.8%9.6 mESACO
usa1350913,509yes19,982,85924,125,44320.7%16.2 s21,907,1909.6%2.8 m1.5 MiB1020,217,4581.2%4.5 mDPIO
20,195,0891.1%15.2 mESACO
brd1405114,051yes469,385552,65817.7%15.9 s506,6687.9%3.1 m1.5 MiB11474,7881.1%5.1 mDPIO
474,0871.0%11.4 mESACO
d1511215,112yes1,573,0841,847,37717.4%19.2 s1,705,6648.4%3.6 m1.6 MiB111,588,5631.0%8.7 mDPIO
1,589,2881.0%12.9 mESACO
d1851218,512yes645,238756,66817.3%28.1 s696,5428.0%5.8 m1.9 MiB12652,6131.1%8.3 mDPIO
653,1541.2%11.4 mESACO
pla3381033,810yes66,048,94576,625,75216.0%1.6 m69,626,3805.4%25.7 m2.9 MiB1767,185,6471.7%21.0 mDPIO
pla8590085,900yes142,382,641167,355,04917.5%10.5 m149,546,7765.0%4.1 h6.5 MiB27144,334,7071.4%1.4 hDPIO
Table A4. Results for Art TSP benchmarks.
Instance | CII Heuristic (Phase 2) | CII Heuristic (Phase 3) | Other Heuristics
|V| | Opt? | C(BKS) | C(T) | Error_CII | Time | C(T) | Error_CII | Time | RAM | # | C_min(T_H) | Error_H | Time | Heuristic Id
mona-lisa100,000no5,757,1916,123,2626.4%14.4 m5,951,4623.4%2.3 h7.5 MiB95,855,0631.7%1.4 hACO-RPMM
100K 6,070,9585.5%1.1 hPartial ACO
vangogh120,000no6,543,6106,971,4706.5%20.8 m6,773,4213.5%4.6 h8.8 MiB126,661,3951.8%1.9 hACO-RPMM
120K 6,924,4485.8%1.5 hPartial ACO
venus140,000no6,810,6657,245,0126.4%28.0 m7,043,7023.4%4.8 h10.2 MiB96,933,2571.8%2.6 hACO-RPMM
140K 7,206,3655.8%2.1 hPartial ACO
pareja 160K160,000no7,619,9538,113,5016.5%37.3 m7,888,6413.5%7.7 h11.6 MiB117,760,9221.9%3.5 hACO-RPMM
courbet 180K180,000no7,888,7338,439,7017.0%48.2 m8,179,4403.7%10.1 h13.0 MiB118,038,6191.9%4.5 hACO-RPMM
earring200,000no8,171,6778,781,7667.5%58.7 m8,493,7243.9%15.1 h14.3 MiB128,335,1112.0%6.0 hACO-RPMM
200K 8,760,0387.2%5.1 hPartial ACO
Table A5. Results for National TSP benchmarks.
Instance | CII Heuristic (Phase 2) | CII Heuristic (Phase 3) | Other Heuristics
|V| | Opt? | C(BKS) | C(T) | Error_CII | Time | C(T) | Error_CII | Time | RAM | # | C_min(T_H) | Error_H | Time | Heuristic Id
wi2929yes27,60327,7390.5%0.3 ms27,6010.0%28.0 ms0.6 MiB335,47428.5%0.2 msNN
dj3838yes665668633.1%0.3 ms66590.1%11.2 ms0.6 MiB5816522.7%0.3 msNN
qa194194yes935210,50512.3%3.8 ms98865.7%37.1 ms0.6 MiB712,48133.5%2.6 msNN
zi929929yes95,345110,18715.6%73.5 ms100,8425.8%630 ms0.7 MiB8119,68525.5%36.7 msNN
lu980980yes11,34012,83413.2%86.4 ms12,0776.5%404 ms0.6 MiB514,28426.0%29.4 msNN
rw16211621yes26,05130,31516.4%233 ms28,77110.4%1.6 s0.7 MiB833,49328.6%71.5 msNN
mu19791979yes86,89199,35614.3%350 ms91,6845.5%3.8 s0.8 MiB10113,36230.5%112 msNN
nu34963496yes96,132111,98116.5%1.1 s103,7177.9%9.2 s0.8 MiB10121,71326.6%327 msNN
ca46634663yes12903191,557,92320.7%1.9 s1,407,8919.1%18.2 s0.9 MiB101,637,46826.9%564 msNN
tz61176117no394,718477,86921.1%3.5 s433,7849.9%40.0 s1 MiB14494,62425.3%843 msNN
eg71467146no172,386198,56615.2%4.5 s182,9796.1%57.9 s1.1 MiB14219,36527.3%1.1 sNN
ym76637663yes238,314285,88120.0%5.0 s259,7809.0%1.4 m1.1 MiB18308,21929.3%1.1 sNN
pm80798079no114,855137,18219.4%5.7 s126,74610.4%55.6 s1.2 MiB10148,93629.7%1.2 sNN
ei82468246yes206,171248,69520.6%6.0 s225,1789.2%1.0 m1.1 MiB11254,55323.5%1.2 sNN
ar91529152no837,4791,014,04121.1%8.4 s927,34810.7%1.2 m1.2 MiB101,063,37627.0%1.5 sNN
ja98479847yes491,924611,95924.4%8.7 s544,41110.7%2.0 m1.2 MiB16630,16928.1%1.9 sNN
gr98829882yes300,899356,75318.6%8.5 s325,5998.2%1.8 m1.3 MiB14395,26731.4%2.3 sNN
kz99769976no1,061,8811,298,40522.3%8.9 s1,168,84310.1%1.6 m1.3 MiB121,344,84526.6%1.8 sNN
fi1063910639yes520,527633,62321.7%9.8 s574,00110.3%2.1 m1.3 MiB14659,80026.8%2.0 sNN
mo1418514185no427,377516,02820.7%17.3 s465,2028.9%3.8 m1.6 MiB14529,39623.9%4.6 sNN
ho1447314473no177,092207,32217.1%18.5 s193,6729.4%3.0 m1.6 MiB10216,77622.4%4.0 sNN
it1686216862yes557315670,70620.3%24.8 s613,13210.0%4.8 m1.7 MiB12706,42026.8%6.2 sNN
vm2277522775yes569,288688,98121.0%44.3 s617,7038.5%11.0 m2.1 MiB16720,28826.5%9.9 sNN
sw2497824978yes855,5971,042,49921.8%53.5 s944,53610.4%10.2 m2.3 MiB121,073,99325.5%12.2 sNN
bm3370833708no959,2891,151,42020.0%1.6 m1,046,7769.1%22.1 m2.9 MiB141,209,68226.1%21.5 sNN
ch7100971009no4,566,5065,475,57519.9%7.4 m4,986,9739.2%1.7 h5.5 MiB145,629,33123.3%1.6 mNN
usa115475115475no6,204,9997,492,27220.7%19.0 m6,779,4179.3%4.3 h8.5 MiB137,691,40224.0%4.1 mNN
Table A6. Results for the VLSI TSP benchmark.
Instance | CII Heuristic (Phase 2) | CII Heuristic (Phase 3) | Other Heuristics
|V| | Opt? | C(BKS) | C(T) | Error_CII | Time | C(T) | Error_CII | Time | RAM | # | C_min(T_H) | Error_H | Time | Heuristic Id
xqf131131yes56462410.6%1.8 ms6006.3%16.4 ms0.6 MiB371226.3%1.0 msNN
xqg237237yes1019116614.4%5.0 ms10644.4%31.9 ms0.6 MiB7132530.0%3.0 msNN
pma343343yes136814908.9%10.0 ms14254.2%58.9 ms0.6 MiB5184635.5%7.4 msNN
pka379379yes133214226.8%12.1 ms13914.4%66.4 ms0.6 MiB4160620.6%7.5 msNN
bcl380380yes1621189416.9%12.4 ms17819.9%97.0 ms0.6 MiB6205526.8%6.6 msNN
pbl395395yes1281143211.8%13.6 ms13495.3%95.9 ms0.6 MiB7158123.5%7.9 msNN
pbk411411yes1343150512.1%14.4 ms14316.6%111 ms0.6 MiB7178933.2%7.7 msNN
pbn423423yes1365157315.2%15.1 ms14607.0%77.1 ms0.6 MiB5181132.6%9.2 msNN
pbm436436yes1443163813.5%16.4 ms15658.4%93.5 ms0.6 MiB5178323.6%9.0 msNN
xql662662yes2513299519.2%36.4 ms27429.1%269 ms0.6 MiB8314725.2%19 msNN
rbx711711yes3115361216.0%42.8 ms33487.5%312 ms0.6 MiB8374820.3%22 msNN
rbu737737yes3314389917.6%45.0 ms35577.3%230 ms0.6 MiB5409023.4%24 msNN
dkg813813yes3199376317.6%53.7 ms34708.5%369 ms0.6 MiB5412629.0%26 msNN
lim963963yes2789319914.7%78.8 ms29746.6%929 ms0.6 MiB10358328.5%37 msNN
pbd984984yes2797318914.0%80.8 ms29505.5%641 ms0.6 MiB9352125.9%36 msNN
xit10831083yes3558408214.7%98.8 ms38006.8%763 ms0.7 MiB8478134.4%42 msNN
dka13761376yes4666554618.8%167 ms50828.9%1.0 s0.7 MiB7592427.0%65 msNN
dca13891389yes5085604518.9%156 ms54717.6%1.0 s0.7 MiB7608019.6%‘NR’PRNN
dja14361436yes5257623618.6%168 ms56287.1%1.3 s0.7 MiB8665626.6%72 msNN
icw14831483yes4416512416.0%180 ms47617.8%1.1 s0.7 MiB5557226.2%75 msNN
fra14881488yes4264472810.9%179 ms44795.1%1.6 s0.6 MiB8557830.8%76 msNN
rbv15831583yes5387620715.2%205 ms57777.2%2.2 s0.7 MiB11687627.6%80 msNN
rby15991599yes5533634514.7%215 ms59998.4%1.9 s0.7 MiB10680923.1%83 msNN
fnb16151615yes4956567514.5%213 ms52596.1%1.6 s0.7 MiB8637728.7%83 msNN
djc17851785yes6115722518.2%261 ms66568.9%2.1 s0.7 MiB9771926.2%103 msNN
dcc19111911yes6396748417.0%296 ms68727.4%2.0 s0.7 MiB7804525.8%116 msNN
dkd19731973yes6421728013.4%302 ms68927.3%2.1 s0.7 MiB7850232.4%119 msNN
djb20362036yes6197749520.9%337 ms681910.0%2.2 s0.7 MiB7764523.4%‘NR’PRNN
dcb20862086yes6600806622.2%354 ms730710.7%2.9 s0.7 MiB9833526.3%124 msNN
bva21442144yes6304749418.9%362 ms68709.0%2.6 s0.7 MiB7826431.1%129 msNN
xqc21752175yes6830816719.6%386 ms74539.1%5.2 s0.7 MiB13829121.4%‘NR’PRNN
bck22172217yes6764815320.5%398 ms74089.5%3.3 s0.7 MiB9851525.9%141 msNN
xpr23082308yes7219866320.0%434 ms78378.6%3.3 s0.7 MiB8913026.5%155 msNN
ley23232323yes835210,14621.5%439 ms90147.9%4.9 s0.7 MiB1110,33023.7%148 msNN
dea23822382yes8017978222.0%455 ms87268.8%4.4 s0.7 MiB9996224.3%157 msNN
rbw24812481yes7724954823.6%495 ms851110.2%4.1 s0.7 MiB9986727.7%169 msNN
pds25662566yes7643910019.1%523 ms83108.7%4.2 s0.8 MiB8986729.1%190 msNN
mlt25972597yes8071985022.0%547 ms888910.1%5.0 s0.8 MiB1010,29527.6%183 msNN
bch27622762yes823410,02021.7%614 ms89348.5%5.0 s0.7 MiB910,39426.2%205 msNN
irw28022802yes842310,04419.2%625 ms91318.4%5.9 s0.7 MiB911,08731.6%210 msNN
lsm28542854yes8014944517.9%658 ms87539.2%5.6 s0.7 MiB910,10526.1%218 msNN
dbj29242924yes10,12812,06919.2%676 ms10,9227.8%4.6 s0.7 MiB712,93527.7%229 msNN
xva29932993yes8492993617.0%719 ms92268.6%5.9 s0.8 MiB910,82127.4%237 msNN
pia30563056yes8258974918.1%757 ms89188.0%8.2 s0.8 MiB1110,58528.2%245 msNN
dke30973097yes10,53912,76721.1%766 ms11,4818.9%5.1 s0.8 MiB7324925.7%247 msNN
lsn31193119yes911410,78418.3%803 ms98958.6%8.0 s0.8 MiB1111,46725.8%260 msNN
lta31403140yes951711,16017.3%805 ms10,3308.5%7.5 s0.8 MiB1012,45530.9%260 msNN
fdp32563256yes10,00811,66116.5%908 ms10,7497.4%7.1 s0.8 MiB812,67726.7%276 msNN
beg32933293yes977211,69319.7%877 ms10,5988.5%10.2 s0.7 MiB1312,63629.3%283 msNN
dhb33863386yes11,13713,34919.9%932 ms12,0828.5%8.0 s0.7 MiB913,89424.8%302 msNN
fjs36493649yes927210,34511.6%1.1 s98125.8%7.3 s0.7 MiB712,78637.9%326 msNN
fjr36723672yes960110,85413.1%1.1 s10,1816.0%8.7 s0.7 MiB812,84033.7%331 msNN
dlb36943694yes10,95912,81817.0%1.2 s11,7637.3%10.4 s0.7 MiB1013,98627.6%344 msNN
ltb37293729yes11,82113,87417.4%1.1 s12,9489.5%10.3 s0.7 MiB915,25929.1%361 msNN
xqe38913891yes11,99514,67222.3%1.3 s13,1539.7%10.0 s0.8 MiB914,59221.7%‘NR’PRNN
xua39373937yes11,23913,41219.3%1.2 s12,2859.3%13.3 s0.8 MiB1114,52029.2%373 msNN
dkc39383938yes12,50314,81718.5%1.3 s13,6198.9%10.5 s0.7 MiB915,93227.4%396 msNN
dkf39543954yes12,53814,93919.1%1.3 s13,7289.5%11.6 s0.8 MiB1015,67925.1%412 msNN
bgb43554355yes12,72314,94817.5%1.5 s13,7898.4%14.0 s0.9 MiB1015,62322.8%‘NR’PRNN
bgd43964396yes13,00916,23924.8%1.6 s14,38510.6%15.7 s0.8 MiB1116,72628.6%472 msNN
frv44104410yes10,71112,44016.1%1.5 s11,5878.2%10.3 s0.8 MiB713,75628.4%518 msNN
bgf44754475yes13,22115,98920.9%1.6 s14,56210.1%22.6 s0.8 MiB1516,43924.3%487 msNN
xqd49664966yes15,31617,63015.1%2.0 s16,5458.0%19.8 s0.8 MiB1019,80729.3%571 msNN
fqm50875087yes13,02914,87714.2%2.1 s14,0417.8%18.1 s0.8 MiB917,55434.7%586 msNN
fea55575557yes15,44518,17117.6%2.4 s16,6297.7%30.4 s0.9 MiB1319,73827.8%688 msNN
xsc68806880yes21,53526,40422.6%3.9 s23,70410.1%36.0 s1.1 MiB1026,24321.9%‘NR’PRNN
bnd71687168yes21,83425,96318.9%4.1 s23,8489.2%50.3 s1.1 MiB1326,57421.7%‘NR’PRNN
lap74547454yes19,53523,10718.3%4.5 s21,3459.3%50.7 s1 MiB1224,18423.8%1.1 sNN
ida81978197yes22,33826,15217.1%5.4 s23,9547.2%1.1 m1.2 MiB1327,51323.2%‘NR’PRNN
dga96989698yes27,72433,53321.0%7.9 s30,3749.6%1.4 m1.3 MiB1233,56421.1%‘NR’PRNN
xmc1015010,150yes28,38734,07120.0%8.8 s31,1249.6%1.1 m1.3 MiB834,14720.3%‘NR’PRNN
xvb1358413,584yes37,08344,12919.0%15.8 s40,5919.5%2.6 m1.5 MiB1145,83523.6%‘NR’PRNN
xrb1423314,233no45,46254,78620.5%17.1 s49,5939.1%3.2 m1.4 MiB1257,03425.5%3.6 sNN
xia1692816,928no52,85062,19517.7%24.0 s57,2208.3%3.4 m1.6 MiB966,39825.6%5.3 sNN
pjh1784517,845no48,09256,89218.3%27.5 s51,9348.0%5.3 m1.7 MiB1360,79726.4%5.4 sNN
frh1928919,289no55,79867,24320.5%32.3 s61,0079.3%5.3 m1.9 MiB1168,36022.5%‘NR’PRNN
fnc1940219,402no59,28769,91217.9%32.0 s64,1708.2%5.3 m1.8 MiB1174,44725.6%6.5 sNN
ido2121521,215no63,51775,87919.5%38.4 s69,2059.0%8.0 m1.9 MiB1479,46925.1%7.6 sNN
fma2155321,553no66,52777,95117.2%41.0 s71,9298.1%6.6 m2.0 MiB1183,44925.4%8.3 sNN
lsb2277722,777no60,97771,99718.1%44.6 s66,2988.7%7.3 m2.0 MiB1176,55125.5%8.8 sNN
xrh2410424,104no69,29483,30020.2%49.1 s75,7669.3%6.8 m2.1 MiB987,74725.2%10.2 sNN
bbz2523425,234no69,33582,21418.6%55.6 s75,4928.9%10.5 m2.2 MiB1387,34526.0%11.1 sNN
irx2826828,268no72,60785,13017.2%1.2 m78,2507.8%15.2 m2.4 MiB1590,93625.2%13.3 sNN
fyg2853428,534no78,56295,52521.6%1.2 m85,8439.3%13.4 m2.4 MiB1397,26023.8%14.0 sNN
icx2869828,698no78,08793,82820.2%1.2 m85,5629.6%11.8 m2.4 MiB1196,98724.2%13.6 sNN
boa2892428,924no79,62295,72920.2%1.2 m86,8349.1%13.9 m2.5 MiB1399,88125.4%14.4 sNN
ird2951429,514no80,35396,20619.7%1.4 m87,5659.0%14.6 m2.5 MiB13100,61725.2%15.4 sNN
pbh3044030,440no88,313104,98518.9%1.3 m95,9498.6%13.5 m2.6 MiB11110,33524.9%16.6 sNN
xib3289232,892no96,757113,36117.2%1.6 m104,5238.0%15.4 m2.7 MiB11120,73624.8%19.2 sNN
fry3320333,203no97,240116,01419.3%1.6 m105,7458.7%20.8 m2.8 MiB15120,66424.1%19.4 sNN
bby3465634,656no99,159118,79219.8%1.7 m108,4239.3%17.0 m2.9 MiB11124,83425.9%22.3 sNN
pba3847838,478no108,318128,31518.5%2.1 m117,7128.7%24.4 m3.1 MiB13134,77024.4%25.4 sNN
ics3960339,603no106,819130,04921.7%2.2 m117,80410.3%26.2 m3.2 MiB13133,66025.1%26.9 sNN
rbz4374843,748no125,183152,81722.1%2.6 m138,23510.4%29.4 m3.5 MiB11157,17325.6%33.2 sNN
fht4760847,608no125,104148,05118.3%3.2 m135,2168.1%39.4 m3.7 MiB13155,97224.7%39.2 sNN
fna5205752,057no147,789174,31718.0%3.8 m160,2318.4%46.9 m4.1 MiB13187,33626.8%51.6 sNN
bna5676956,769no158,078189,52119.9%4.6 m173,0749.5%1.0 h4.4 MiB14200,19826.6%56.8 sNN
dan5929659,296no165,371199,17520.4%5.0 m180,8509.4%1.2 h4.5 MiB15206,77525.0%1.0 mNN
sra104815104,815no251,761326,56129.7%15.6 m295,09217.2%3.7 h7.7 MiB14329,12030.7%3.2 mNN
ara238025238,025no578,761747,61929.2%1.4 h674,55916.6%1.5 d16.8 MiB22759,88231.3%16.5 mNN
lra498378498,378no2,168,0392,710,11625.0%5.8 h2,438,41012.5%15.0 d34.7 MiB492,688,80424.0%1.2 hNN
lrb744710744,710no1,611,2322,076,96628.9%13.7 h1,867,27315.9%15.0 d51.6 MiB182,104,58530.6%2.7 hNN

References

  1. Papadimitriou, C.H. The Euclidean travelling salesman problem is NP-complete. Theor. Comput. Sci. 1977, 4, 237–244. [Google Scholar] [CrossRef]
  2. Garey, M.R.; Graham, R.L.; Johnson, D.S. Some NP-Complete geometric problems. In Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, Hershey, PA, USA, 3–5 May 1976; ACM: New York, NY, USA, 1976; pp. 10–22. [Google Scholar]
  3. Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.; Shmoys, D.B. (Eds.) The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization; Wiley: Chichester, UK, 1985. [Google Scholar]
  4. Jünger, M.; Reinelt, G.; Rinaldi, G. The traveling salesman problem. In Handbooks in Operations Research and Management Science; Elsevier Science B.V.: Amsterdam, The Netherlands, 1995; Volume 7, pp. 225–330. [Google Scholar]
  5. Lin, S.; Kernighan, B.W. An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 1973, 21, 498–516. [Google Scholar] [CrossRef]
  6. Chitty, D.M. Applying ACO to large scale TSP instances. In UK Workshop on Computational Intelligence; Springer: Cham, Switzerland, 2017; pp. 104–118. [Google Scholar]
  7. Ismkhan, H. Effective heuristics for ant colony optimization to handle large-scale problems. Swarm Evol. Comput. 2017, 32, 140–149. [Google Scholar] [CrossRef]
  8. Gülcü, Ş.; Mahi, M.; Baykan, Ö.K.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. 2018, 22, 1669–1685. [Google Scholar]
  9. Peake, J.; Amos, M.; Yiapanis, P.; Lloyd, H. Scaling Techniques for Parallel Ant Colony Optimization on Large Problem Instances. In Proceedings of the Gecco’19—The Genetic and Evolutionary Computation Conference 2019, Prague, Czech Republic, 13–17 July 2019. [Google Scholar]
  10. Dahan, F.; El Hindi, K.; Mathkour, H.; AlSalman, H. Dynamic Flying Ant Colony Optimization (DFACO) for Solving the Traveling Salesman Problem. Sensors 2019, 19, 1837. [Google Scholar] [CrossRef] [PubMed]
  11. Al-Adwan, A.; Mahafzah, B.A.; Sharieh, A. Solving traveling salesman problem using parallel repetitive nearest neighbor algorithm on OTIS-Hypercube and OTIS-Mesh optoelectronic architectures. J. Supercomput. 2018, 74, 1–36. [Google Scholar] [CrossRef]
  12. Zhong, Y.; Wang, L.; Lin, M.; Zhang, H. Discrete pigeon-inspired optimization algorithm with Metropolis acceptance criterion for large-scale traveling salesman problem. Swarm Evol. Comput. 2019, 48, 134–144. [Google Scholar] [CrossRef]
  13. Croes, G.A. A method for solving traveling-salesman problems. Oper. Res. 1958, 6, 791–812. [Google Scholar] [CrossRef]
  14. Vakhania, N.; Hernandez, J.A.; Alonso-Pecina, F.; Zavala, C. A Simple Heuristic for Basic Vehicle Routing Problem. J. Comput. Sci. 2016, 3, 39. [Google Scholar] [CrossRef]
  15. Sahni, S.; Horowitz, E. Fundamentals of Computer Algorithms; Computer Science Press, Inc.: Rockville, MD, USA, 1978; pp. 174–179. [Google Scholar]
  16. Universität Heidelberg, Institut für Informatik; Reinelt, G. Symmetric Traveling Salesman Problem (TSP). Available online: https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/ (accessed on 8 June 2019).
  17. Natural Sciences and Engineering Research Council of Canada (NSERC) and Department of Combinatorics and Optimization at the University of Waterloo. TSP Test Data. Available online: http://www.math.uwaterloo.ca/tsp/data/index.html (accessed on 8 June 2019).
Figure 1. Block diagram of the CII-algorithm: Phase 1 delivers a partial (yet infeasible) solution, Phase 2 extends the partial solution of Phase 1 to a complete feasible solution, and at Phase 3, the latter solution is further improved.
Algorithms 13 00005 g001
Figure 2. Angle θ ( i , j ) .
Algorithms 13 00005 g002
Figure 3. Example that shows the extreme vertices and girding polygon.
Algorithms 13 00005 g003
Figure 4. The triangle inequality.
Algorithms 13 00005 g004
Figure 5. Points 1, 3, and 6, which can be inserted between points 4 and 2, points 2 and 5, or points 5 and 4 of the partial tour T 0, are depicted in (a), (b), and (c), respectively.
Algorithms 13 00005 g005
Figure 6. Point 6 was inserted into the tour T 0 between points 4 and 2.
Algorithms 13 00005 g006
Figure 7. Point 3 was inserted into the tour T 1 between points 6 and 2.
Algorithms 13 00005 g007
Figure 8. Point 1 was inserted into the tour T 2 between points 4 and 6.
Algorithms 13 00005 g008
Figure 9. Block diagram of Phase 3.
Algorithms 13 00005 g009
Figure 10. (a) A fragment of a solution before applying the 2-Opt algorithm; (b) the corresponding fragment after applying the 2-Opt algorithm.
Algorithms 13 00005 g010
Figure 11. The improvement rate at Phase 3 for instance usa115475.
Algorithms 13 00005 g011
Figure 12. (a) error vs. number of points, and (b) processing time vs. number of points, where 1 | V | < 200 .
Algorithms 13 00005 g012
Figure 13. (a) error vs. number of points, and (b) processing time vs. number of points, where 200 | V | < 10 , 000 .
Algorithms 13 00005 g013
Figure 14. (a) error vs. number of points, and (b) processing time vs. number of points, where 10,000 | V | < 250,000 .
Algorithms 13 00005 g014
Figure 15. RAM vs. number of points for all the tested instances.
Algorithms 13 00005 g015
Table 1. Procedure extreme_points.
PROCEDURE extreme_points(V = { i_1, i_2, …, i_n })
1 y_max := y_i1                                                              //Initializing variables
2 x_min := x_i1
3 y_min := y_i1
4 x_max := x_i1
5 FOR j := 2 TO n DO
6    IF y_ij > y_max THEN y_max := y_ij
7    IF x_ij < x_min THEN x_min := x_ij
8    IF y_ij < y_min THEN y_min := y_ij
9    IF x_ij > x_max THEN x_max := x_ij
10 T := L := B := R := ∅
11 FOR j := 1 TO n DO
12    IF y_ij = y_max THEN T := T ∪ { i_j }
13    IF x_ij = x_min THEN L := L ∪ { i_j }
14    IF y_ij = y_min THEN B := B ∪ { i_j }
15    IF x_ij = x_max THEN R := R ∪ { i_j }
16 v_1 := t_1        // T = { t_1, t_2, …, t_|T| }, |T| ≤ n
17 v_2 := l_1        // L = { l_1, l_2, …, l_|L| }, |L| ≤ n
18 v_3 := b_1        // B = { b_1, b_2, …, b_|B| }, |B| ≤ n
19 v_4 := r_1        // R = { r_1, r_2, …, r_|R| }, |R| ≤ n
20 FOR j := 2 TO |T| DO
21    IF x_tj > x_v1 THEN v_1 := t_j        //rightmost of the topmost points
22 FOR j := 2 TO |L| DO
23    IF y_lj > y_v2 THEN v_2 := l_j        //topmost of the leftmost points
24 FOR j := 2 TO |B| DO
25    IF x_bj < x_v3 THEN v_3 := b_j        //leftmost of the bottommost points
26 FOR j := 2 TO |R| DO
27    IF y_rj < y_v4 THEN v_4 := r_j        //bottommost of the rightmost points
28 RETURN v_1, v_2, v_3, v_4
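The procedure above can be sketched compactly in Python. This is our reading of it, not the authors' implementation; in particular, the tie-breaking among equally extreme points (rightmost of the topmost, topmost of the leftmost, leftmost of the bottommost, bottommost of the rightmost) is an assumption chosen to match the candidate conditions used in Procedure polygon.

```python
def extreme_points(points):
    """Return the four extreme vertices (v1: top, v2: left, v3: bottom,
    v4: right) of a point set given as (x, y) tuples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    y_max, x_min, y_min, x_max = max(ys), min(xs), min(ys), max(xs)
    top = [p for p in points if p[1] == y_max]     # the set T
    left = [p for p in points if p[0] == x_min]    # the set L
    bottom = [p for p in points if p[1] == y_min]  # the set B
    right = [p for p in points if p[0] == x_max]   # the set R
    v1 = max(top, key=lambda p: p[0])     # rightmost of the topmost points
    v2 = max(left, key=lambda p: p[1])    # topmost of the leftmost points
    v3 = min(bottom, key=lambda p: p[0])  # leftmost of the bottommost points
    v4 = min(right, key=lambda p: p[1])   # bottommost of the rightmost points
    return v1, v2, v3, v4
```

On the square with an interior point, `extreme_points([(0,0), (2,0), (2,2), (0,2), (1,1)])` returns `((2,2), (0,2), (0,0), (2,0))`.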
Table 2. Procedure polygon.
PROCEDURE polygon(V, v_1, v_2, v_3, v_4)
1 P := (v_1)                                  //Initializing variables
2 k := v_1
3 WHILE k ≠ v_2 DO                                  //Step 1
4    form a subset of vertices V* := { i | x_i < x_k ∧ y_i ≥ y_v2; i ∈ V }         // V* ⊆ V
5    form a subset of edges E* := { (k, j); j ∈ V* }                 // E* ⊆ E
6    form a set of angles Θ* := { θ(k, j); (k, j) ∈ E* }
7    get the minimum angle θ(k, l) from Θ*
8    append the vertex l to P and set k := l
9
10 WHILE k ≠ v_3 DO                                  //Step 2
11    form a subset of vertices V* := { i | x_i ≤ x_v3 ∧ y_i < y_k; i ∈ V }
12    form a subset of edges E* := { (k, j); j ∈ V* }
13    form a set of angles Θ* := { θ(k, j); (k, j) ∈ E* }
14    get the minimum angle θ(k, l) from Θ*
15    append the vertex l to P and set k := l
16
17 WHILE k ≠ v_4 DO                                  //Step 3
18    form a subset of vertices V* := { i | x_i > x_k ∧ y_i ≤ y_v4; i ∈ V }
19    form a subset of edges E* := { (k, j); j ∈ V* }
20    form a set of angles Θ* := { θ(k, j); (k, j) ∈ E* }
21    get the minimum angle θ(k, l) from Θ*
22    append the vertex l to P and set k := l
23
24 WHILE k ≠ v_1 DO                                  //Step 4
25    form a subset of vertices V* := { i | x_i ≤ x_v1 ∧ y_i > y_k; i ∈ V }
26    form a subset of edges E* := { (k, j); j ∈ V* }
27    form a set of angles Θ* := { θ(k, j); (k, j) ∈ E* }
28    get the minimum angle θ(k, l) from Θ*
29    append the vertex l to P and set k := l
Table 3. Procedure insert_point_in_tour.
PROCEDURE insert_point_in_tour(T, l, i)
1 p := |T|
2 IF i < p THEN
3    j := p + 1
4    WHILE j > i + 1 DO
5       t_j := t_{j−1}
6       j := j − 1
7 t_{i+1} := l
8 RETURN T
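In Python, the shift performed by this procedure is exactly what `list.insert` does; a minimal 0-based sketch (the pseudocode uses 1-based indices):

```python
def insert_point_in_tour(tour, l, i):
    """Insert vertex l immediately after the vertex at index i: shifts
    the tail of the list one slot right, as in the pseudocode's WHILE
    loop, and writes l into the freed position."""
    tour.insert(i + 1, l)
    return tour
```

For example, `insert_point_in_tour(['a', 'b', 'c'], 'x', 0)` yields `['a', 'x', 'b', 'c']`.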
Table 4. Procedure construct_tour.
PROCEDURE construct_tour(V, T_0)
1 h := 1
2 FOR each point l ∈ V \ T_{h−1} DO
3    c_l^h := min_{t_i ∈ T_{h−1}} { w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}) }
4 WHILE there exists a vertex l ∈ V \ T_{h−1} DO
5    get l_h
6    insert_point_in_tour(T_{h−1}, l_h, i(l_h))
7    FOR each point l ∈ V \ T_h DO
8       c_l^{h+1} := min { c_l^h, w(t_i, l) + w(l, t_{i+1}) − w(t_i, t_{i+1}), w(t_{i+1}, l) + w(l, t_{i+2}) − w(t_{i+1}, t_{i+2}) }
9    h := h + 1
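The cheapest-insertion step above can be sketched as follows. This naive version recomputes all insertion costs in every round; the procedure in the paper instead caches the costs c_l^h and, after each insertion, updates only the costs along the two newly created edges, which yields the O(n^2) bound. The helper name `dist` is ours.

```python
import math

def dist(a, b):
    """Euclidean distance w(a, b) between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def construct_tour(points, tour):
    """Extend a partial tour to a full tour by repeatedly inserting the
    unrouted point l whose best insertion cost
    w(t_i, l) + w(l, t_{i+1}) - w(t_i, t_{i+1}) over all tour edges
    is minimal (a sketch of Procedure construct_tour)."""
    remaining = [p for p in points if p not in tour]
    while remaining:
        best = None  # (cost, point, edge index)
        for l in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                c = dist(a, l) + dist(l, b) - dist(a, b)
                if best is None or c < best[0]:
                    best = (c, l, i)
        _, l, i = best
        tour.insert(i + 1, l)  # same shift as insert_point_in_tour
        remaining.remove(l)
    return tour
```

Starting from the girding square (0,0), (4,0), (4,4), (0,4), the point (2, 0.1) is inserted into the cheapest edge, between (0,0) and (4,0).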
Table 5. Procedure 2-Opt.
PROCEDURE 2-Opt(V, T)
   i := 1
   n := |V|
   WHILE i < n − 2 DO
      j := i + 1
      WHILE j < n − 1 DO
         IF w(t_i, t_j) + w(t_{i+1}, t_{j+1}) < w(t_i, t_{i+1}) + w(t_j, t_{j+1}) THEN
            x := i + 1
            y := j
            WHILE x < y DO
               t_aux := t_x
               t_x := t_y
               t_y := t_aux
               x := x + 1
               y := y − 1
         j := j + 1
      i := i + 1
   RETURN T
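The segment-reversal logic of the 2-Opt procedure can be sketched in Python as follows (a sketch with index ranges adapted to 0-based lists; as in the pseudocode, the closing edge of the tour is not examined):

```python
def two_opt(tour, w):
    """First-improvement 2-opt sweep: whenever replacing edges
    (t_i, t_{i+1}) and (t_j, t_{j+1}) by (t_i, t_j) and
    (t_{i+1}, t_{j+1}) shortens the tour, reverse the segment
    t_{i+1}..t_j in place. w(a, b) is the edge-weight function."""
    n = len(tour)
    for i in range(n - 2):
        for j in range(i + 1, n - 1):
            if w(tour[i], tour[j]) + w(tour[i + 1], tour[j + 1]) < \
                    w(tour[i], tour[i + 1]) + w(tour[j], tour[j + 1]):
                # reverse the segment between the two removed edges
                tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
    return tour
```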
Table 6. Procedure improve_tour.
PROCEDURE improve_tour(V, T)
   i := 1
   WHILE i < n DO
      t_j := t_{i+1}
      remove t_{i+1} from the tour T         //now T is infeasible
      construct_tour(V, T \ {t_{i+1}})       //T is feasible again
      IF t_{i+1} = t_j THEN
         i := i + 1
   RETURN T
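The improvement loop can be sketched in Python as follows. Here `reinsert_cheapest` is a hypothetical stand-in for construct_tour restricted to a single point; as in the procedure above, i advances only when the removed city settles back into its old position, otherwise the loop retries from the same i on the modified tour.

```python
def improve_tour(tour, reinsert_cheapest):
    """Remove the city after position i and let reinsert_cheapest
    put it back at its cheapest position; advance i only when the
    city returns to where it was (the tour is then locally stable
    at that position)."""
    i = 0
    while i < len(tour) - 1:
        city = tour[i + 1]
        del tour[i + 1]                 # now the tour is infeasible
        reinsert_cheapest(tour, city)   # the tour is feasible again
        if tour[i + 1] == city:         # the city came back: advance
            i += 1
    return tour
```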
Table 7. Statistics about the solutions delivered by CII.
Description | TSPLIB | NATIONAL | ART GALLERY | VLSI | All
Number of instances | 83 | 27 | 6 | 102 | 218
Average error percentage of the solutions at Phase 2 | 11.8% | 17.7% | 6.7% | 18.4% | 15.4%
Average number of cycles performed at Phase 3 | 7 | 11 | 11 | 10 | 9
Average decrease in error at Phase 3 | 6.5% | 9.6% | 3.1% | 9.8% | 8.3%
Final average error percentage | 5.3% | 8.2% | 3.6% | 8.6% | 7.2%
Average memory usage | 0.8 MiB | 1.6 MiB | 10.9 MiB | 2.3 MiB | 1.88 MiB
Table 8. Statistics between CII and other heuristics.
Description | TSPLIB | NATIONAL | ART GALLERY | VLSI | All
Number of instances | 83 | 27 | 6 | 102 | 218
Number of known results from other heuristics | 142 | 0 | 10 | 12 | 164
Number of times CII gave a better error than other heuristics | 2 | 0 | 4 | 12 | 18
Number of times CII has improved the earlier known best execution time | 140 | 0 | 0 | 0 | 140
Table 9. Comparative relative errors for some problem instances.
Description | Error CII | Error H
TSPLIB/rat783 | 7.4% | 19.1% and 19.5% (DFACO [10] and ACO-3Opt [10])
ART/Mona-lisa100K | 3.4% | 5.5% (Partial ACO [6])
ART/Vangogh120K | 3.5% | 5.8% (Partial ACO [6])
ART/Venus140K | 3.4% | 5.8% (Partial ACO [6])
ART/Earring200K | 3.9% | 7.2% (Partial ACO [6])
VLSI/dca1376 | 7.6% | 19.6% (PRNN [11])
VLSI/djb2036 | 10.0% | 23.4% (PRNN [11])
VLSI/xqc2175 | 9.1% | 21.4% (PRNN [11])
VLSI/xqe3891 | 9.7% | 21.7% (PRNN [11])
VLSI/bgb4355 | 8.4% | 22.8% (PRNN [11])
VLSI/xsc6880 | 10.1% | 21.9% (PRNN [11])
VLSI/bnd7168 | 9.2% | 21.7% (PRNN [11])
VLSI/ida8197 | 7.2% | 23.2% (PRNN [11])
VLSI/dga9698 | 9.6% | 21.1% (PRNN [11])
VLSI/xmc10150 | 9.6% | 20.3% (PRNN [11])
VLSI/xvb13584 | 9.5% | 23.6% (PRNN [11])
VLSI/frh19289 | 9.3% | 22.5% (PRNN [11])
Table 10. Comparative CPU time for the problem instances for which the other heuristics were faster.
Description | Time CII | Time H
TSPLIB/pla33810 | 25.7 m | 21.0 m (DPIO [12])
TSPLIB/pla85900 | 4.1 h | 1.4 h (DPIO [12])
Art Gallery/mona-lisa100K | 2.3 h | 1.4 h and 1.1 h (ACO-RPMM [9] and Partial ACO [6])
Art Gallery/vangogh120K | 4.6 h | 1.9 h and 1.5 h (ACO-RPMM [9] and Partial ACO [6])
Art Gallery/venus140K | 4.8 h | 2.6 h and 2.1 h (ACO-RPMM [9] and Partial ACO [6])
Art Gallery/pareja160K | 7.7 h | 3.5 h (ACO-RPMM [9])
Art Gallery/coubert180K | 10.1 h | 4.5 h (ACO-RPMM [9])
Art Gallery/earring200K | 15.1 h | 6.0 h and 5.1 h (ACO-RPMM [9] and Partial ACO [6])