Article

A New Constructive Heuristic Driven by Machine Learning for the Traveling Salesman Problem

by Umberto Junior Mele 1, Luca Maria Gambardella 1 and Roberto Montemanni 2,*

1 Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, 6962 Lugano, Switzerland
2 Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42122 Reggio Emilia, Italy
* Author to whom correspondence should be addressed.
Algorithms 2021, 14(9), 267; https://doi.org/10.3390/a14090267
Submission received: 17 August 2021 / Revised: 6 September 2021 / Accepted: 9 September 2021 / Published: 14 September 2021
(This article belongs to the Special Issue 2021 Selected Papers from Algorithms Editorial Board Members)

Abstract:
Recent systems that apply Machine Learning (ML) to solve the Traveling Salesman Problem (TSP) struggle when scaling up to realistic scenarios with several hundred vertices. The use of Candidate Lists (CLs) has been proposed to cope with this issue. A CL is a subset of the edges incident to a given vertex, built so that it mainly contains edges that are likely to appear in the optimal tour. The initialization procedure that identifies a CL for each vertex of the TSP aids the solver by restricting the search space during solution creation, and it also reduces the computational burden, which is highly desirable when solving large TSPs. So far, ML has been used to create CLs and to assign values to their elements, expressing ML preferences at insertion time. Although promising, these systems do not restrict what the ML learns and does while creating solutions, and this brings generalization issues. Therefore, motivated by exploratory and statistical studies of CL behavior across multiple TSP solutions, in this work we rethink the use of ML by deliberately employing it only on a task that avoids well-known ML weaknesses, such as training in the presence of frequent outliers and the detection of under-represented events. The task is to confirm the inclusion in a solution only of edges that are most likely optimal. The CLs of the edge considered for inclusion are used as input to the neural network, and the ML is in charge of distinguishing when such an edge is in the optimal solution and when it is not. The proposed approach enables a reasonable generalization and unveils an efficient balance between ML and optimization techniques. Our ML-Constructive heuristic is trained on small instances; it is then able to produce solutions for large problems as well, without losing quality. We compare our method against classic constructive heuristics, showing that the new approach performs well on TSPLIB instances with up to 1748 cities. Although ML-Constructive incurs an expensive constant computation time due to training, we prove that the worst-case computational complexity of the solution construction after training is O(n^2 log n^2), with n being the number of vertices in the TSP instance.

1. Introduction

The TSP is one of the most intensively studied and relevant problems in the Combinatorial Optimization (CO) field [1]. Its simple definition—despite its membership in the NP-complete class—and its huge impact on real applications [2] make it an appealing problem to many researchers. As evidence of this, the last seventy years have seen the development of an extensive literature, which brought valuable enhancements to the CO field. Concepts such as the Held-Karp algorithm [3], powerful meta-heuristics such as Ant Colony Optimization [4], and effective implementations of local search heuristics such as the Lin-Kernighan-Helsgaun [5] have been suggested to solve the TSP. These contributions, along with others, have supported the development of various applied research domains such as logistics [6], genetics [7], telecommunications [8] and neuroscience [9].
In particular, during the last five years, an increasing number of ML-driven heuristics have appeared to make their contribution to the field [10,11]. The surge of interest was probably driven by the rich literature and by the interesting opportunities provided by CO applications. Among the many works recently proposed, it is worth mentioning those that have introduced empowering concepts such as the opportunity to leverage knowledge from past solutions [12,13], the ability to imitate computationally expensive operations [14], and the faculty of devising innovative strategies via reinforcement learning paradigms [15].
In light of the new features brought by ML approaches, we wish to couple these ML qualities with well-known heuristic concepts, aiming to introduce a new kind of hybrid algorithm. The aim is to engineer an efficient interlocking of ML and optimization algorithms that seeks robust enhancements over classic approaches. Many attempts have been proposed so far, but none of them has yet succeeded in preserving the improvements while scaling up to larger problems. A promising idea to counter the scaling issue is to use CLs [16]. A CL identifies a subset of edges that are considered promising for the solution. Using CLs helps the solver restrict the solution search space, since most edges are marked as unpromising and are not considered during optimization. Moreover, employing CLs allows for a divide-and-conquer abstraction, which favors generalization. It can be argued that the generalization issues that emerged in previous ML-driven approaches are caused mostly by a lack of proper consideration of ML weaknesses and limitations [17,18]. In fact, ML is known to have trouble with imbalanced datasets, outliers and extrapolation tasks; such cases can be significant obstacles to achieving good performance with most ML systems. Statistical studies of these aspects were very useful to gain the fundamental insights that allow our ML-Constructive to bypass these weaknesses. More details on these typical ML weaknesses, together with our proposed solutions, are provided in Section 2.2.
Our main contribution is the introduction of the first ML-driven heuristic that actively uses ML to construct partial TSP solutions without losing quality when scaling. The ML-Constructive heuristic is composed of two phases: the first uses ML to identify edges that are very likely to be optimal, while the second completes the solution with a classic heuristic. The resulting heuristic shows good performance when tested on representative instances selected from the TSPLIB library [19]. The instances considered have up to 1748 vertices, and, surprisingly, ML-Constructive exhibits slightly better solutions on larger instances than on smaller ones, as shown in the experiments. Although good results are obtained in terms of quality, the current implementation of our heuristic has an unappealingly large constant computation time for training. However, we prove that the creation of a solution requires a number of operations bounded by O(n^2 log n^2) after training (which is executed only once). ML-Constructive learns exclusively from local information, and it employs a simple ResNet architecture [20] to recognize patterns in the CLs through images. The use of images, even if not optimal in terms of computation, allowed us to plainly see the input of the network and to get a better understanding of its internal processes. Finally, we introduce a novel loss function to help the network understand how to make a substantial contribution when building tours.
The TSP is formally stated in Section 1.1, and a literature review is presented in Section 1.2. The concept of constructive heuristic is described in detail in Section 2.1 while statistical and exploratory studies on the CLs are spotlighted in Section 2.2. The general idea of the new method is discussed in Section 2.3, the ML-Constructive heuristic is explained in Section 2.4 and the ML model with the training procedure is discussed in Section 2.5. To conclude, experiments are presented in Section 3, and conclusions are stated in Section 4.

1.1. The Traveling Salesman Problem

Given a complete graph G(V, E) with n vertices belonging to the set V = {0, ..., n-1}, and edges e_ij ∈ E for each pair of vertices i, j ∈ V with i ≠ j, let c_ij be the cost of the directed edge e_ij starting from vertex i and reaching vertex j. The objective of the Traveling Salesman Problem is to find the shortest possible route that visits each vertex in V exactly once and returns to the starting vertex, closing a loop [1].
The formulation of [21] states the TSP as an integer linear program describing the requirements that must be met by an optimal solution to the problem. The variable x_ij is equal to 1 if the optimal route picks the edge that goes from vertex i to vertex j, and x_ij = 0 if the route does not pick such an edge. A solution is defined as an n × n matrix X with entries x_ij. The objective is to minimize the route cost, as shown in Equation (1).
$$\min \sum_{i=0}^{n-1} \sum_{j=0,\, j \neq i}^{n-1} c_{ij}\, x_{ij} \tag{1}$$
$$\sum_{i=0,\, i \neq j}^{n-1} x_{ij} = 1, \qquad j = 0, \ldots, n-1 \tag{2}$$
$$\sum_{j=0,\, j \neq i}^{n-1} x_{ij} = 1, \qquad i = 0, \ldots, n-1 \tag{3}$$
$$\sum_{i \in Q} \sum_{j \in Q,\, j \neq i} x_{ij} \leq |Q| - 1, \qquad \forall\, Q \subset \{0, \ldots, n-1\},\ n > |Q| \geq 2 \tag{4}$$
$$x_{ij} \in \{0, 1\}, \qquad i, j = 0, \ldots, n-1,\ i \neq j \tag{5}$$
The constraints are the following: each vertex is reached from exactly one other vertex (Equation (2)), each vertex is the departure point towards exactly one other vertex (Equation (3)), and no inner-loop over any proper subset Q of vertices is created (Equation (4)). The constraints in Equation (4) prevent the solution X from being the union of smaller tours. Finally, each variable x_ij in the solution must be integral (Equation (5)). We point out that the graphs used in this work are symmetric and embedded in a two-dimensional space.
A CL with cardinality k for vertex i is defined as the set of edges e_ij, with j ∈ CL[i], such that the vertices j are the k closest vertices to vertex i. Note that there is a CL for each vertex, and each CL contains at most two optimal edges.
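As an illustration of the definition above, the following Python sketch builds a CL of cardinality k for each vertex of a Euclidean instance. The function name and the brute-force distance matrix are our own choices and do not reflect the implementation used in the paper.

```python
import numpy as np

def build_candidate_lists(points, k=5):
    """Return, for each vertex i, the indices of its k closest vertices.

    points: array of shape (n, 2) with the vertex coordinates.
    The result is an (n, k) array: row i is CL[i], sorted by increasing cost.
    """
    # Pairwise Euclidean distance matrix (O(n^2) space, fine for moderate n).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)          # a vertex is not its own neighbor
    # Sort each row and keep the k nearest vertices.
    return np.argsort(dist, axis=1)[:, :k]

# Tiny usage example on random points in the unit square.
rng = np.random.default_rng(0)
pts = rng.random((10, 2))
cls = build_candidate_lists(pts, k=3)
print(cls[0])   # the three closest vertices to vertex 0
```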

1.2. Literature Review

The first constructive heuristic documented for the TSP is the Nearest Neighbor (NN), a greedy strategy that repeatedly applies the rule "take the closest vertex from the set of unvisited vertices". This procedure is very simple, but not very efficient. While the NN chooses the best vertex to join the solution, the Multi-Fragment (MF) [22,23] and the Clarke-Wright (CW) [24] heuristics instead add the most promising edge to the solution. Note that the NN grows a single fragment of the tour by adding the closest vertex to the fragment extreme considered during the construction. On the other hand, MF and CW grow, join and give birth to many fragments of the final tour [25]. The approaches using many fragments show superior solution quality and also come with very low computational costs.
A different way of constructing TSP tours is given by insertion heuristics, such as the Furthest-Insertion (FI) [23]. These approaches iteratively expand the tour generated at the previous iteration. Suppose that, at the m-th iteration, the insertion heuristic has created a feasible tour with m + 1 vertices (iteration one starts with two vertices), belonging to the inserted vertex subset V̂_m ⊆ V. At iteration m + 1, the expansion is carried out and a new vertex, say vertex j, is inserted into the subset V̂_{m+1} = V̂_m ∪ {j}. To preserve the feasibility of the expanded tour, one edge is removed from the previous tour and two edges are added to connect the released vertices to the new vertex j. The removal is done in such a way that the lowest cost for the new tour is achieved.
When the insertion policy is Furthest, the new vertex to insert is always the one farthest from the vertices belonging to the current tour. Once the last iteration is reached, a complete feasible tour passing through each vertex in V has been constructed. For further details about the computational complexity and the functioning of these broadly used constructive heuristics (MF, CW and FI), we refer to chapter six of Gerhard Reinelt's book [26].
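The following Python sketch illustrates the insertion mechanic just described for the Furthest policy (pick the free vertex farthest from the current tour, then insert it where the detour cost is smallest). It is a simplified illustration, not the reference implementation of [23,26].

```python
import numpy as np

def farthest_insertion(dist):
    """Furthest-Insertion sketch: dist is a symmetric (n, n) cost matrix.
    Returns a tour as a list of vertex indices (the closing edge is implicit)."""
    n = len(dist)
    # Start from the two mutually farthest vertices.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    tour = [int(i), int(j)]
    in_tour = set(tour)
    while len(tour) < n:
        # Furthest policy: pick the free vertex whose minimum distance
        # to the current tour is largest.
        free = [v for v in range(n) if v not in in_tour]
        v = max(free, key=lambda u: min(dist[u][t] for t in tour))
        # Insert v between consecutive tour vertices (a, b) minimizing
        # the detour cost c[a][v] + c[v][b] - c[a][b].
        best_pos, best_delta = None, float("inf")
        for pos in range(len(tour)):
            a, b = tour[pos], tour[(pos + 1) % len(tour)]
            delta = dist[a][v] + dist[v][b] - dist[a][b]
            if delta < best_delta:
                best_pos, best_delta = pos + 1, delta
        tour.insert(best_pos, v)
        in_tour.add(v)
    return tour
```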
Initially, the literature introducing ML concepts explored setups similar to the NN. Systems such as pointer networks [27] or graph-based transformers [12,13,28] were used to select the next vertex in the NN iteration. These architectures predict a value for each valid departure edge from the extreme considered in the single-fragment approach; the best choice (next vertex) according to these values is then added to the fragment. These systems also applied stochastic sampling, so they could produce thousands of solutions in parallel using GPU processing. Unfortunately, these ML approaches failed to scale, since all vertices were given as input to the networks, and TSP instances of different dimensions exhibit inconsistent intrinsic features.
To attempt to overcome the generalization problem, several works proposed systems claimed to be able to generalize [16,29,30]. Results however showed that the proposed structure2vec [29] and Graph Pointer Network [30] keep their performance close to the FI [23] up to instances with 250 vertices, and then lose their ability to generalize. The model called Att-GCN [16] is used to pre-process the TSP by initializing CLs. Even if the solutions are promising in this case, the ML was not actively used to construct TSP tours. Furthermore, no comparisons with other CL constructors, such as POPMUSIC [31], Delaunay triangulation [32], the minimum spanning 1-tree [1], or a recent CL constructor driven by ML [33], were provided in the paper.
The use of Reinforcement Learning (RL) to tackle the TSP has been proposed as well [34]. The ability to learn heuristics without the use of optimal solutions is very appealing: these architectures are trained only via the supervision of the objective function shown in Equation (1). Several RL algorithms have been applied so far to solve the TSP (and CO problems in general [35]). The actor-critic algorithm was employed in [36]. Later, ref. [13] used the actor-critic paradigm with an additional reward defined by the easy-to-compute minimum spanning tree cost (which resembles a TSP solution, see [37]). Moreover, the Q-learning framework was used in [29], Monte Carlo Tree Search was employed in [16], and Hierarchical RL was applied in [30]. It is worth mentioning that [38] explicitly advised rethinking the generalization issues of solving the TSP with ML, suggesting that a more hybrid approach is necessary. Miki [39,40] proposed for the first time the use of image processing techniques to solve the TSP; this choice, even if not obvious in terms of efficiency, has the advantage of giving a better understanding of the internal network processes.

2. Materials and Methods

For the sake of clarity, materials and methods employed in this work have been divided into subsections. In Section 2.1, constructive heuristics, based on fragments growth, are reviewed with some examples. The statistical study and the main intuitions behind our heuristic are presented in Section 2.2. The general idea of the ML-Constructive heuristic is described in Section 2.3, the overall algorithm with its complexity is demonstrated in Section 2.4. The ML system, the image creation process and the training procedure are explained in detail in Section 2.5.

2.1. Constructive Heuristics

Constructive heuristics are employed to create TSP solutions from scratch, when just the initial formulation described in Section 1.1 is available. They exploit intrinsic features during the solution creation, generally provide quick solutions with modest quality, and exhibit low polynomial computational complexity [1,24,41].
In this paper, we further develop constructive heuristics, particularly those that take their decisions on edges. These approaches grow many fragments of the tour, as opposed to the NN, which grows just a single fragment (Figure 1). Since at each addition the procedure must avoid inner-loops, and no vertex can be connected with more than two other vertices, they require an extra effort compared to the NN to preserve the TSP constraints. However, the computational time needed to construct a tour remains limited [26].
As introduced by [42,43], two main choices are required to design a constructive heuristic driven by edge choices: the order of examination of the edges, and the constraints that ensure a correct addition. Note that to construct an optimal TSP solution, the examination order of the edges must be optimal as well (more than one optimal order may exist). So, theoretically, the objective of an efficient constructive heuristic is to examine the edges in the best possible order.
The examination order is arranged according to the relevance that each edge of the instance exhibits. The relevance of an edge is related to the probability with which we expect that edge to be in the optimal solution: the higher the relevance, the earlier that edge should be examined. Different strategies are possible to measure the relevance of the edges; the most famous are the MF and the CW policies. The MF relevance is expressed by the cost values: the smaller the cost, the higher the probability of being added. The CW relevance instead depends on the saving value, which was designed on purpose to rethink the examination order. The saving is the gain obtained when the salesman uses the straight connection between vertex i and vertex j rather than passing through a hub node h. Note that the hub vertex is chosen as the one exhibiting the shortest Total Distance (TD) from the other vertices. Equation (6) describes the formula used to find the hub, while Equation (7) shows the function employed to compute the saving values.
$$h = \arg\min_{i \in V} TD[i], \qquad TD[i] = \sum_{j=0}^{n-1} c_{ij}, \quad \forall\, i \in V \tag{6}$$
$$s_{ij} = c_{ih} + c_{hj} - c_{ij}, \qquad \forall\, i, j \in V,\ i \neq j \tag{7}$$
The second important choice is to define simple constraints that ensure correct additions. The edge-addition checker is the algorithm that allows feasible edges to be added at each iteration. For both approaches (MF and CW), it is checked that the examined edge does not have extreme vertices with two connections already (Equations (2) and (3)), and that it does not create inner-loops (Equation (4)) with the current partial solution. The tracker is the subroutine that checks whether the examined edge would create an inner-loop; in our implementation it uses a quadratic number of operations in the worst-case scenario (Appendix A). Note that more efficient data structures and algorithms exist for the tracker subroutine, some of which even run in O(n log n) [43].
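As a concrete illustration of Equations (6) and (7), the sketch below computes the hub and the savings and returns the CW examination order. The feasibility checker and the tracker discussed above are deliberately left out, and the exclusion of hub-incident edges from the savings list is a simplification of ours.

```python
import numpy as np

def clarke_wright_order(dist):
    """Compute the hub (Equation (6)) and the savings (Equation (7)),
    and return the edges sorted by descending saving, i.e., the CW
    examination order. A sketch only; the addition checker / tracker
    of the constructive heuristic is not included here."""
    n = len(dist)
    total = dist.sum(axis=1)             # TD[i] = sum_j c_ij
    h = int(np.argmin(total))            # hub = vertex with the smallest TD
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if i == h or j == h:
                continue                 # simplification: skip hub edges
            s = dist[i][h] + dist[h][j] - dist[i][j]   # saving s_ij
            edges.append((s, i, j))
    edges.sort(key=lambda e: -e[0])      # most promising edges first
    return h, edges
```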

2.2. Statistical Study

As mentioned earlier, the generalization issues of ML approaches are likely caused by a poor consideration of well-known deep learning weaknesses [17,18,38]. It is known that dealing with imbalanced datasets, outliers and extrapolation tasks can critically affect the overall performance of an ML system. Hence, one of the main reasons for the way we use ML for the TSP is to reduce the number of wrong forecasts during the construction of the solution.
An outlier arises when a data point differs significantly from the other observations; outliers can cause problems during back-propagation [44] and in pattern recognition [45]. Finally, extrapolation is the phenomenon that occurs when a learned system is required to operate beyond the space of known training examples, extending the intrinsic features of the problem to similar but different tasks [46].
To overcome the aforementioned problems, we suggest supporting the ML with an optimization heuristic. We instruct the model to act as a decision-taker, and we place it in a context that allows it to act confidently. We emphasize the importance of designing a good heuristic that avoids gross errors, e.g., by omitting imbalanced class skews and outlier points. Choosing a context for the ML that does not change too frequently during the algorithm iterations also helps it deal with extrapolation tasks.
We addressed the challenge by introducing two operational components: the use of a subroutine and the employment of ML as a decision-taker for the solution construction. The subroutine consists of the detection of optimal edges among the elements of a CL. As stated in Section 1.1, the CL identifies the most promising edges to be part of the optimal solution. For instance, [47] proved that only around 30·n edges need to be taken into account by an optimal solver for large instances with more than a thousand points. The use of CLs with ML was first suggested by [16]; but, as mentioned, they employed ML to initialize the CLs rather than to construct tours.
To understand the decision-taker task, an exploratory study was carried out to check the distribution of the optimal edges within the CLs. It was observed that, after sorting the edges in a CL from the shortest to the longest, the occurrence of an optimal edge was not uniform across the positions in the sorted CL, but followed a logarithmic distribution, as shown in Figure 2. Such a pattern unfortunately reveals a severe class distribution skew for some positions: some positions exhibit optimal edges much more frequently than others.
Figure 2 also shows the rate of optimal edges found for each position. Note that an optimal edge occurrence arises when the optimal tour passes through the edge in the CL position under consideration. This study emphasizes the relevance of detecting when the shortest edge of a CL is optimal, since these edges belong to the optimal tour about 88.6% of the time for the evaluation data-set and 86.7% of the time for the test data-set. However, it also reveals that detecting with ML when the shortest edge is not optimal is a hard task, due to the over-representation of the positive case.
For the second shortest edge, instead, a balanced scenario is observed: about half of the occurrences are positive and the other half negative for both data sets. On the other hand, a rapid growth of under-representation can be observed from the third position onwards. Up to the fifth position the under-representation is not too severe, and imbalanced learning techniques could still help infer some useful patterns [48]. From the sixth position on, however, the optimal occurrences are too rare to recognize any useful pattern, even though these positions could be very useful in terms of construction. Considering the rate of optimal edges shown in Figure 2, the optimal occurrences in the first five positions of the CL represent about 95% of the total optimal edges available. Therefore, the selection of such a subset of edges is promising with regard to ML pattern recognition, since it avoids all the under-represented scenarios.
The empirical probability density functions (PDFs) shown in Figure 2 were computed using 1000 uniform random Euclidean TSP instances—sampled in the unit square, with a total number of vertices varying uniformly between 500 and 1000—for the evaluation data-set, and a representative selection of TSPLIB instances as the test data-set. The latter data-set was selected so that all the instances available in the TSPLIB library with a total number of vertices between 100 and 2000 were included; furthermore, these instances were required to be embedded in a two-dimensional space. The optimal solutions were computed with the Concorde solver [49] in both cases.
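A sketch of the statistic behind Figure 2 is given below: given the coordinates of an instance and its optimal tour (e.g., obtained from Concorde), it estimates, for each CL position, how often that position holds an optimal edge. The function name and the choice of k are illustrative, not taken from the paper's code.

```python
import numpy as np

def optimal_position_rates(points, opt_tour, k=10):
    """For each position p = 1..k of the sorted CLs, estimate how often the
    p-th closest neighbor of a vertex is connected to it in the optimal tour.

    points: (n, 2) array of coordinates.
    opt_tour: list of vertex indices in visiting order (a closed tour)."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    neighbors = np.argsort(dist, axis=1)[:, :k]        # CL[i], sorted by cost
    # Set of undirected optimal edges taken from the tour.
    opt_edges = set()
    for a, b in zip(opt_tour, opt_tour[1:] + opt_tour[:1]):
        opt_edges.add(frozenset((a, b)))
    hits = np.zeros(k)
    for i in range(n):
        for p, j in enumerate(neighbors[i]):
            if frozenset((i, int(j))) in opt_edges:
                hits[p] += 1
    return hits / n      # rate of optimal edges per CL position
```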
Once the most promising edges have been selected, a relevant choice in our heuristic is the examination order of these edges. As mentioned, edges selected in earlier stages of the construction have a higher probability of being accepted than later ones. To explore the effectiveness of different strategies, we tested the behavior of classic constructive solvers such as MF, FI and CW (Figure 3). Using the representative TSPLIB instances [19], these solvers were evaluated on some classic metrics—the TPR, the FPR, the accuracy and the precision—considering each constructive solver as a predictor, each position in the CL as a sample point, and the optimal edge positions as the actual targets to be predicted. For each CL, there are two target positions and two predicted positions.
A true-positive occurs when the predicted edge is also optimal, while a false-positive occurs when the predicted edge is not optimal. Note that avoiding false-positive cases is crucial, since they block other optimal edges in the process. To take care of this aspect, the precision of the predicting heuristic is considered in Table 1 as well. Let D_p be the dataset of all the edges available in position p of each CL. If the predictor truly finds an optimal edge in observation i, the variable TP_i is equal to one, otherwise it is zero; similarly for the false-positive variable FP_i. Note that a positive (P_i) or negative (N_i) observation occurs when the observed edge is optimal or not, respectively, and the frequency of these events varies according to the position p. Hence, studying the predictor performance by position is important, since each position has a different importance during the solution construction and a different optimal-edge frequency.
$$TPR = \frac{\sum_{i \in D_p} \mathrm{TP}_i}{\sum_{i \in D_p} P_i}, \qquad FPR = \frac{\sum_{i \in D_p} \mathrm{FP}_i}{\sum_{i \in D_p} N_i} \tag{8}$$
To get a better look at the results shown in Figure 3, Table 1 reports the values obtained during the experiment. MF exhibits a higher TPR for shorter edges, as expected, while CW performs better with longer edges. Less obvious is MF's higher FPR in the first position. The latter fact brings attention to the importance of decreasing the FPR for the most frequent position. Note that CW's precision in the first position is higher than that of the other heuristics, which can be read as the main reason why CW produces better TSP solutions than MF. Hence, one of the main reasons for using ML for the TSP is to reduce the FPR during the construction of the solution.
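For reference, the snippet below computes the TPR and FPR of Equation (8) from boolean arrays of predictions and ground-truth optimality, one entry per observation of a given position p. It is a generic sketch, not the evaluation code used for Table 1.

```python
import numpy as np

def tpr_fpr(predicted, optimal):
    """TPR and FPR over the observations of one CL position (Equation (8)).
    predicted, optimal: boolean arrays of equal length; entry i is True if
    the heuristic predicts / the optimal tour contains the edge of observation i."""
    predicted = np.asarray(predicted, dtype=bool)
    optimal = np.asarray(optimal, dtype=bool)
    tp = np.sum(predicted & optimal)
    fp = np.sum(predicted & ~optimal)
    tpr = tp / max(optimal.sum(), 1)         # sum TP_i / sum P_i
    fpr = fp / max((~optimal).sum(), 1)      # sum FP_i / sum N_i
    return tpr, fpr
```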

2.3. The General Idea

In light of the statistical study presented in Section 2.2, we propose a constructive heuristic called ML-Constructive. The heuristic follows the edge-addition process (see MF and CW in Section 2.1), extended by an auxiliary operation that, during a first phase, asks the ML to approve every attaching edge. The goal is to avoid as much as possible adding bad edges to the solution, while allowing the addition of edges that the ML model considers auspicious. We emphasize that our focus is not on the development of highly efficient ML architectures, but rather on the successful interaction between ML and optimization techniques. Therefore, the ML is conceived to act as a decision-taker, and the optimization heuristics as the texture of the solution-building story. The result is a new hybrid constructive heuristic that succeeds in scaling up to large problems while keeping the improvements achieved thanks to ML.
As mentioned above, the ML is exploited only in situations where the data do not suggest under-represented cases; since about 95% of the optimal edges are connections to one of the five closest vertices of a CL, only this subset of edges is initially considered to test the ML performance. Recall that it is common practice to avoid employing ML for the prediction of rare events; instead, it is commonly suggested to apply ML in cases where a certain level of pattern recognition can be confidently detected. For this reason, our solution is designed to construct TSP tours in two phases: the first employs ML to construct partial solutions (Figure 4), while the second uses the CW heuristic to connect the remaining free vertices and complete the TSP tour.
During the first phase, the edges considered most likely to be found in the optimal tour are collected in the list of promising edges L_P. To choose these edges appropriately, several experiments were carried out, whose results are shown in Table 2. The strategy used to build the promising list was to include the edges towards the first m vertices of each CL, with the value of m ranging from 1 to 5. The ML was in charge of predicting whether the edges under consideration in L_P were in the optimal solution or not. The experiments adopted the same ResNet [20] architecture and procedure explained in Section 2.5, but the ML was trained on different data, consistent with the tested value of m.
Several classic metrics were compared for the different values of m: the True Positive Rate (TPR), the False Positive Rate (FPR), and their difference [50]. Note that the ML objectives are to keep the FPR small while obtaining good results in terms of TPR. Keeping a small FPR ensures that during the second phase ML-Constructive has a higher probability of detecting good edges, while a high TPR hopefully reduces the search space of the second phase (Appendix B).
In terms of TPR, including just the shortest edge in L_P (m = 1) seems to be the best choice, but checking the FPR in Table 2 makes it obvious that the ML has simply learned to agree almost always for m = 1. Such behavior is undesirable: it leads to a high FPR, and hence to worse solutions in the second phase. However, if the difference between TPR and FPR is taken into account, the best arrangement is obtained when the first two shortest edges are put into the list (m = 2). Although other arrangements might prove effective as well, the selection by position in the CLs and the choice of the first two shortest edges of each CL are shown to be efficient by the results. Recall that too many edges in L_P can be confusing, since outliers and classes with a severe distribution skew can appear. For example, detecting optimal edges from the fourth position onwards is very difficult since they are heavily under-represented (Figure 2), and the creation of images for edges that sit in the fourth position of their CL is very uncommon, causing outliers in the third channel (Section 2.5).
After engineering the structure of the promising list, L_P is sorted according to a heuristic that seeks to anticipate the processing of good edges. Finding an effective sorting heuristic for the promising list is crucial, since its order affects both the learning process and the ML-Constructive algorithm. An edge belonging to the optimal tour and straightforwardly detected by the ML model is regarded as good. For simplicity, in this work the list is sorted by the edge's position in the CL and by cost, but other approaches could be propitious, perhaps using ML. Note that, as repeatedly mentioned, examining the most promising edges earlier increases the probability of finding good tours with the multiple-fragment paradigm (Appendix B).
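The selection and sorting of L_P described above can be sketched as follows; the data structure (a dictionary keyed by undirected edges) is our own illustrative choice.

```python
def build_promising_list(dist, candidate_lists, m=2):
    """Build the promising list L_P: for each vertex take the edges towards
    its first m CL positions, drop duplicates keeping the lowest position,
    then sort by (position in the CL, ascending cost).  m = 2 is the choice
    adopted by ML-Constructive."""
    best = {}                                   # edge -> (position, cost)
    for i, cl in enumerate(candidate_lists):
        for pos, j in enumerate(cl[:m]):
            edge = frozenset((i, int(j)))
            key = (pos, dist[i][j])
            if edge not in best or key < best[edge]:
                best[edge] = key                # keep the lowest position found
    ordered = sorted(best.items(), key=lambda kv: kv[1])
    return [tuple(edge) for edge, _ in ordered]
```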
At this point, the edges belonging to the sorted promising list are drawn in images and fed to the ML one at a time. If the represented edge meets the TSP constraints given the partial solution found at the current iteration, the ML system is challenged to detect whether the edge is in the optimal solution. If the ML agrees with a given level of confidence, the heuristic adds the edge to the solution. Assuming that the CL information provides enough detail to detect common patterns from previous solutions, each image represents just the small subset of vertices given by the CLs of the two extremes of the edge being processed. The partial solution visible in this local context, as available up to the insertions made while moving through the promising list, is represented as well.
Once all the edges of the promising list have been processed, the second phase of the algorithm completes the tour. It first detects the remaining free vertices to connect (Figure 4), and then connects them employing the CW heuristic. Note that CW usually captures the optimal long edges better than MF, as emphasized in Figure 3 and Table 1; therefore, CW is a promising candidate solver to connect the remaining free fragment extremes of the partial solution into the final tour. However, other arrangements employing local search or meta-heuristics may be promising as well, even if more time-consuming [51].

2.4. The ML-Constructive Algorithm

ML-Constructive starts as a modified version of MF, and then concludes the tour exploiting the CW heuristic. The ML model consequently behaves as glue, since it determines the partial solution available at the switch between the two solvers.
The list of promising edges L_P and the confidence level of the ML decision-taker are critical parameters to set before the heuristic runs. The reasons behind our choices for building the promising list were widely discussed in Section 2.3. The confidence level, instead, is used to handle the exploitation vs. exploration trade-off. It consists of a simple threshold applied to the predictions made by the ML system: if the predicted probability that validates the insertion is greater than the threshold, then the insertion is applied. A value of 0.99 has been verified to provide good results on the tested instances, since lower values increase the occurrence of false-positive cases, thus leading to the inclusion of edges that are not optimal, while higher values decrease the occurrence of true-positive cases, hence increasing the challenge of the second phase.
The overall pseudo-code of the heuristic is shown in Algorithm 1. First, the CLs of all the vertices in the instance are computed (line 2). We noticed that considering just the closest thirty vertices for each CL was a good option. As mentioned, just the first two connections are considered in L_P, while the other vertices are used to create the local context in the image. The CL construction takes a linear number of operations for each vertex, so the overall time complexity for constructing all the CLs is O(n^2): finding the nearest vertex of a given vertex takes linear time, the search for the second nearest takes the same time (after removing the previous one from the neighborhood), and so on until the thirtieth nearest vertex is found; since only the first thirty edges are searched, the operation is completed in linear time per vertex. Then, the promising edges are inserted into L_P (line 3), and repeated edges are deleted to avoid unnecessary operations (line 4). The list is sorted according to the position in the CL and the cost values (line 5): all the edges in first position are placed first and sorted according to c_ij, then those in second position, and so on. Since only the first two edges of each CL can be in the list, the sorting task is completed in O(n log n). Several orders for L_P were preliminarily tested—e.g., employing descending cost values or even savings to sort the edges in L_P—but the experiments suggested that sorting the list according to ascending cost values is the best arrangement.
Then the first phase of ML-Constructive takes place. An empty solution X = 0̄ is initialized (line 6), and, following the order in L_P, a variable l is updated with the edge considered for addition (line 7). First, l is checked to ensure that the edge complies with the TSP constraints (Equations (2)–(4)); then the ML decision-taker is queried to confirm the addition of the edge l. If the predicted probability is higher than the confidence level, the edge is added to the partial solution (lines 11 and 14). To evaluate the number of operations that this phase consumes, we must split the task according to the various sub-routines executed at each new addition to the solution. The "if" statements (lines 9, 13, 23 and 26) check that constraints (2) and (3) are complied with.
They verify that both extremes of the attaching edge l exhibit at most one connection in the current solution. This operation is computed with the help of hash maps, and it takes constant time for each considered edge l. The tracker verification (lines 10 and 24) ensures that l will not create an inner-loop (Equation (4)). This sub-routine is applied only after it has been checked that both extremes of l have exactly one connection each in the current partial solution. It takes O(n^2) operations overall, up to the final tour (proof in Appendix A).
Once all the constraints of the TSP have been met, the edge l is processed by the ML decision-taker (lines 11 and 14). Even if time-consuming, this sub-routine is completed in constant time for each l. First, the image depicting the edge l and its local information is created, then it is given as input to the neural network. To create the image, the vertices of the CLs and the existing connections in the current partial solution must be retrieved; hash maps are used for both tasks, and since the image can include up to sixty vertices, this operation takes a constant amount of time for each edge l in L_P. The size of the neural network does not vary with the number of vertices in the problem either, but remains constant for each edge in L_P.
To complete the tour, the second phase starts by identifying the hub vertex (line 15). It considers all the vertices in the problem (free and not), following the rule in Equation (6). The free vertices are then selected from the partial solution, and the edges connecting such vertices are inserted into the difficult-edges list L_D (line 16). The saving of each edge in L_D is computed (line 17), and the list is sorted according to these values (line 18) in O(n^2 log n^2). At this point (lines 20 to 27), the solution is completed employing the classical multiple-fragment steps, which are known to be O(n^2) [41].
Therefore, the complexity of the worst-case scenario for the ML-Constructive is:
$$O(n + n \log n + n^2 + n^2 \log n^2) = O(n^2 \log n^2) \tag{9}$$
Note that to complete the tour we propose the use of CW, but hybrid approaches that also use some form of exhaustive search could be very promising as well, although more time-consuming.
Algorithm 1 ML-Constructive.
Require: TSP graph G(V, E)
Ensure: a feasible tour X
 1: procedure ML-Constructive(G(V, E))
 2:     create the CL for each vertex
 3:     insert the two shortest edges of each CL into L_P
 4:     remove from L_P all duplicate edges in their higher positions
 5:     sort L_P according to the position in the CL and the ascending costs c_{i,j}
 6:     X = 0̄
 7:     for l in L_P do
 8:         select the extreme vertices i, j of l
 9:         if vertex i and vertex j have exactly one connection each in X then:
10:             if l does not create an inner-loop then:
11:                 if the ML agrees to the addition of l then: x_{i,j} = 1
12:         else
13:             if vertex i and vertex j have less than two connections each in X then:
14:                 if the ML agrees to the addition of l then: x_{i,j} = 1
15:     find the hub vertex h
16:     select all the edges that connect free vertices and insert them into L_D
17:     compute the saving values with respect to h for each edge in L_D
18:     sort L_D according to the descending savings s_{i,j}
19:     t = 0
20:     while the solution X is not complete do
21:         l = L_D[t], t = t + 1
22:         select the extreme vertices i, j of l
23:         if vertex i and vertex j have less than two connections each in X then:
24:             if l does not create an inner-loop then: x_{i,j} = 1
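To make the first phase of Algorithm 1 concrete, the sketch below implements lines 6–14 in Python. The inner-loop check is realized here with a disjoint-set (union-find) structure instead of the paper's fragment-spanning tracker, and ml_agrees is a placeholder for the ResNet query; both are assumptions of this illustration.

```python
def first_phase(n, promising_list, ml_agrees, threshold=0.99):
    """Phase one of ML-Constructive (sketch): walk L_P, keep TSP feasibility,
    and ask the ML decision-taker to confirm each feasible edge.
    ml_agrees(edge, partial) stands in for the ResNet query and must return
    the predicted probability that `edge` is in the optimal tour."""
    degree = [0] * n              # connections per vertex in the partial tour
    fragment_of = list(range(n))  # disjoint-set parent of each vertex

    def find(v):
        while fragment_of[v] != v:
            fragment_of[v] = fragment_of[fragment_of[v]]   # path halving
            v = fragment_of[v]
        return v

    partial = []                  # accepted edges
    for i, j in promising_list:
        if degree[i] >= 2 or degree[j] >= 2:
            continue                              # degree constraints (2)-(3)
        if degree[i] == 1 and degree[j] == 1 and find(i) == find(j):
            continue                              # would close an inner-loop (4)
        if ml_agrees((i, j), partial) > threshold:
            partial.append((i, j))
            degree[i] += 1
            degree[j] += 1
            fragment_of[find(i)] = find(j)        # merge the two fragments
    return partial
```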

2.5. The ML Decision-Taker

The ML decision-taker validates the insertions made by ML-Constructive during the first phase. Its scope is to exploit ML pattern recognition to increase the occurrences of good edges being found while reducing those of bad edges.
Two data sets were specifically created to fit the ML decision-taker: the first was used to train the ML system, while the second was used to evaluate it and choose the best model. The training data-set is composed of 38,400 instances uniformly sampled in the unit square, with the total number of vertices per instance, n, ranging uniformly from 100 to 300. The evaluation data-set is composed of 1000 instances uniformly sampled from the unit square, with the total number of vertices varying from 500 to 1000; this data-set was used to create the results in Table 2. The optimal solutions were found in both cases using the Concorde solver [49]. The creation of the training instances and their optimal tours took about 12 hours on a single CPU thread, while a total of 24 CPU hours was needed for the creation of the evaluation data-set (since it includes instances of greater size). Note that, in comparison with other approaches that use RL, good results were achieved here even though we used far fewer training instances [13].
To prepare the ML input, the promising list L_P was created for each instance in the data sets. In the case m = 2 (the best scenario), the two shortest edges of each CL were inserted into the list. To avoid repetitions, edges that occur several times in the list were inserted just once, at the shortest available position. For example, if edge e_ij is the shortest edge in CL[i] and the second shortest edge in CL[j], it occurs in L_P only once, as the first-position edge of vertex i. After all promising edges had been inserted into L_P, the list was sorted. Note that the list can contain at most m × n items. An image of dimension 96 × 96 × 3 pixels was created for each edge belonging to L_P. Three channels (red, green, and blue) were set up to provide the information fed to the neural network. Each channel depicts some information inside a square with sides of 96 pixels (Figure 5). The first channel (red) shows each vertex in the local view, the second (green) shows the edge l considered for insertion with its extremes, and the third (blue) shows all the existing fragments currently in the partial solution that are visible from the local view drawn in the first channel.
As mentioned, the local view was formed by merging the vertices belonging to the CLs of the two extremes of the inserting edge. These vertices were collected and their positions normalized to fill the image. The normalization places the middle of the inserting edge l at the image center, while all the vertices visible in the local view lie inside a virtual sphere inscribed in the squared image, such that the maximum distance between the image center and the vertices in the local view is less than the radius of the sphere. The scope of the normalization is to keep consistency among the images created for the various instances.
The third channel is concerned with giving a temporal indication to aid the ML system in its decision. By representing the edges inserted during the previous steps of ML-Constructive, this information gives a helpful hint about which edges the final solution needs most. Two different policies were employed in its construction: the optimal policy (offline) and the ML adaptive policy (online). The first uses the optimal tour and the L_P order to create this channel (only during training), while the second uses the previous ML validations (training and test).
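A minimal sketch of the image construction is given below, assuming Euclidean coordinates. It marks only vertex pixels (the paper draws edges and fragments as segments) and omits rasterization details, so it should be read as an approximation of the input described above.

```python
import numpy as np

def edge_image(points, edge, cl_i, cl_j, partial_edges, size=96):
    """Render a (size, size, 3) input for one candidate edge (sketch).
    Channel 0: vertices of the local view (the union of the two CLs),
    channel 1: the extremes of the edge considered for insertion,
    channel 2: partial-solution edges whose endpoints are in the local view."""
    i, j = edge
    local = sorted(set(cl_i) | set(cl_j) | {i, j})
    pts = points[local]
    center = (points[i] + points[j]) / 2.0            # middle of edge l
    radius = np.max(np.linalg.norm(pts - center, axis=1)) + 1e-9

    def to_pixel(p):
        q = (p - center) / (2 * radius) + 0.5          # map local view into [0, 1]^2
        return np.clip((q * (size - 1)).astype(int), 0, size - 1)

    img = np.zeros((size, size, 3), dtype=np.float32)
    for v in local:                                    # channel 0: local-view vertices
        x, y = to_pixel(points[v])
        img[y, x, 0] = 1.0
    for v in (i, j):                                   # channel 1: extremes of edge l
        x, y = to_pixel(points[v])
        img[y, x, 1] = 1.0
    visible = set(local)
    for a, b in partial_edges:                         # channel 2: visible fragments
        if a in visible and b in visible:
            for v in (a, b):
                x, y = to_pixel(points[v])
                img[y, x, 2] = 1.0
    return img
```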
A simple ResNet [20] with 10 layers was adopted to agree on the inclusion of the edges into the solution. The choice of the model is motivated by the ease with which image-processing ML models allow the learning process to be understood. The architecture is shown in Figure 6. There are four residual connections, each containing two convolutional layers; the first layer in each residual connection has a stride equal to two (/2). As usual for the ResNet, the kernels are set to 3 × 3, and the number of features is doubled at each residual connection to balance the downscaling of the images. The output is preceded by a fully connected layer with 9 neurons (fc) and by an average pooling operation (avg pool) that shrinks the information into a single vector with 1024 features. For additional details we refer to [20]. The model is very compact, with the aim of avoiding computational burden and other complexities.
The output of the network is represented by two neurons: one predicts the probability that the considered edge l is in the optimal solution, while the other predicts the probability that it is not; the two probabilities sum to one. The choice of using two output neurons instead of one is due to the exploitation of the Cross-Entropy loss function, which is recommended for training classification problems; in fact, this loss especially penalizes predictions that are confident and wrong. The network knows whether the inserting edge l is optimal during training, while at test time the ResNet predicts the probability that the edge is optimal.
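A PyTorch sketch of such a 10-layer ResNet is shown below. The stem width (64 channels), the use of batch normalization, the 1x1 projection shortcuts and the exact ordering of pooling and fully connected layers are assumptions made for illustration; only the overall shape (four stride-2 residual blocks doubling the channels up to 1024, a 9-neuron fc layer and a 2-way softmax output) follows the description above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 conv layers; the first has stride 2 and doubles the channels."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(c_out)
        # 1x1 projection so the shortcut matches the main branch's shape.
        self.skip = nn.Conv2d(c_in, c_out, 1, stride=2, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

class SmallResNet(nn.Module):
    """Sketch of the compact ResNet: stem conv, four residual blocks that
    double the channels up to 1024, global average pooling, a small fully
    connected layer and a two-neuron softmax output."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1, bias=False),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(64, 128),
                                    ResidualBlock(128, 256),
                                    ResidualBlock(256, 512),
                                    ResidualBlock(512, 1024))
        self.pool = nn.AdaptiveAvgPool2d(1)     # -> vector with 1024 features
        self.fc = nn.Linear(1024, 9)            # fc layer with 9 neurons
        self.out = nn.Linear(9, 2)              # P(optimal), P(not optimal)

    def forward(self, x):
        x = self.pool(self.blocks(self.stem(x))).flatten(1)
        return torch.softmax(self.out(torch.relu(self.fc(x))), dim=1)

# Usage: probs = SmallResNet()(torch.rand(8, 3, 96, 96))   # shape (8, 2)
```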
To train the network, two loss functions were jointly utilized: the Cross-Entropy loss (Equation (10)) [52] and a reinforcement loss (Equation (11)) developed specifically for the task at hand. Initially, the first loss is employed alone until convergence (about 1000 back-propagation iterations); then the second loss also takes part in the training. At each back-propagation iteration the first loss updates the network first, then (after 1000 iterations) the second loss updates the network as well. The gradient of the second loss function is approximated by the REINFORCE algorithm [53], using a simple moving average over 100 periods as baseline.
$$\mathrm{loss}_1 = -\mathbb{E}_{p(x_l)}\left[\log q_{\theta}(x_l)\right] \tag{10}$$
$$\mathrm{loss}_2 = \mathbb{E}_{q_{\theta}(x_l)}\left[T(x_l) - F(x_l)\right] \tag{11}$$
In Equations (10) and (11), x_l is the image of the inserting edge l, p is the function identifying whether l is optimal, and q_θ is the ResNet approximation to it. Moreover, the function T returns one if the prediction made by q_θ is true (TP or TN), and zero otherwise, while the function F returns one if the prediction is false. Note that the second loss takes an expected value with respect to the q_θ measure, since the third channel is updated using the ML adaptive policy (online), while the first loss uses the optimal policy (offline). The introduction of a second loss has the purpose of increasing the occurrences of true-positives while decreasing the false-positive cases. Note that it employs the same policy that will be used during the ML-Constructive test run.
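The snippet below sketches how the two losses could be combined in PyTorch. The +1/−1 reward encoding, the epsilon for numerical stability, and the way the baseline is passed in are our own assumptions on top of Equations (10) and (11).

```python
import torch
import torch.nn.functional as F

def training_losses(probs, target, baseline):
    """Sketch of the two losses.
    probs: (B, 2) softmax output of the network, column 1 = P(edge is optimal).
    target: (B,) long tensor, 1 if the edge is optimal, 0 otherwise.
    baseline: float, moving average of past rewards (variance reduction)."""
    # Loss 1: cross-entropy between the target class and the prediction.
    loss1 = F.nll_loss(torch.log(probs + 1e-9), target)

    # Loss 2: reward +1 for a correct decision (TP or TN), -1 for a wrong one,
    # gradient estimated with REINFORCE on the log-probability of the decision.
    pred = probs.argmax(dim=1)
    reward = (pred == target).float() * 2 - 1
    log_prob_taken = torch.log(probs.gather(1, pred.unsqueeze(1)).squeeze(1) + 1e-9)
    loss2 = -((reward - baseline) * log_prob_taken).mean()
    return loss1, loss2
```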

3. Experiments & Results

To test the efficiency of the proposed heuristic, experiments were carried out on 54 standard instances. The instances were taken from the TSPLIB collection [19], and their vertex set cardinality varies from 100 to 1748. Non-Euclidean instances, such as those involving geographical distances (GEO) or special distance functions (ATT), were addressed as well. We recall that the ResNet model was trained on small (100 to 300 vertices) uniform random Euclidean instances, evaluated on medium-large (500 to 1000 vertices) uniform random Euclidean instances, and tested on TSPLIB instances. We emphasize that TSPLIB instances are generally not uniformly distributed and, as mentioned, were selected among all the available instances so as to have no fewer than 100 vertices, no more than 2000 vertices, and a description in a two-dimensional space.
All the experiments were run using Python 3.8.5 [54] for the algorithmic components and PyTorch 1.7.1 [55] to manage the neural networks. The following hardware was utilized:
  • a single GPU NVIDIA GeForce GTX 1050 Max-Q;
  • a single CPU Intel(R) Core(TM) i7-8750H @ 2.20GHz.
During training, all hardware was exploited, while just the CPU was used to test.
The experiments presented in Table 3 compare the ML-Constructive (ML-C) results—achieved employing the ResNet—with other famous fragment-based strategies, such as MF and CW. The first is equivalent to including all the existing edges in the list of promising edges L_P, sorting the list according to ascending cost values—without considering the CL positions—and then substituting the ML decision-taker with a rule that always inserts the considered edge. The second strategy is equivalent to keeping the list L_P empty: the first phase of ML-Constructive does not create any fragment, and the construction is carried out completely in the second phase. For the sake of completeness, FI was tested as well and added to Table 3. Even if FI is not a growing-fragment approach, and therefore is not an alternative strategy within our ML-Constructive heuristic, it is nevertheless a good benchmark solver for comparison.
To explore, evaluate and interpret the behavior of our two-phase algorithm, other strategies were investigated as well. The ML decision-taker can act in very different ways, and a comparison with expert-made heuristic rules can be significant. Deterministic and stochastic heuristic rules were created to explore the variation of the optimality gap, with the aim of showing that the learned ML model produces a higher gain than the heuristic rules, as corroborated by Table 3. The heuristic rules substitute the ML decision-taker component within ML-Constructive (lines 11 and 14 of Algorithm 1); no changes to the selection and sorting strategies used to create the lists L_P and L_D were applied. The First (F) rule deterministically adds the edge l if one of its extremes is the closest vertex in the CL of the other extreme. The Second (S) rule is similar, but it adds l only if one extreme is the second closest in the CL of the other. The policy that always validates (Y) the insertion of the edges in L_P was examined as well; together with CW, it represents the extreme cases in which the decision-taker always accepts or always rejects, respectively, the edges in L_P. A stochastic strategy called empirical (E) was also tested, which adds the edges in L_P according to the distribution observed for the evaluation data-set in Figure 2: it inserts the edge with probability 0.886 if one of the extremes is the first in the CL of the other, and with probability 0.512 if it is the second. Twenty runs of the empirical strategy were made; (AE) shows the average results, while (BE) shows the best over all runs. Finally, to check the behavior of ML-Constructive in case the ML system validates with 100% accuracy (the ML decision-taker is a perfect oracle), the Super Confident (ML-SC) policy was examined. This policy always answers correctly for all the edges in the promising list L_P, and it is obtained by exploiting the known optimal tours. To capture the potential of a super-accurate network in the first phase, the partial solution created by the ML-SC policy in the first phase and the solution constructed in the second phase are shown in Figure 7. Note that some crossing edges are created in the second phase: even though the solution created is very close to the optimal one, the second phase sometimes adds bad edges to the solution. This last artificial policy has been added to demonstrate how much leverage can still be gained from the ML point of view.
The results obtained by the optimal policy (ML-SC) highlight two interesting aspects. The first, as mentioned before, is the possible leverage from the ML perspective (first phase); the second gives an idea of how much improvement is possible from the heuristic point of view (second phase). To emphasize these aspects, the ML-SC (gap) column in Table 3 shows the difference, in terms of percentage error, between the ML-SC solution and the best solution found by the other heuristics (in bold). In fact, the first nine columns were compared with each other to find the best solution, while the tenth column (ML-SC) was later compared with all the others. The average, the standard deviation (std), and the number of times each heuristic is best are reported as well for each strategy. The gap column is of interest since it reveals that occasionally the solution is strongly affected by the behavior of the second phase of the heuristic: even if the heuristic behaves very well on the promising edges, the second phase can still ruin the solution.
Among the many policies shown in Table 3, the First (F) and the ML-based (ML-C) policies exhibit comparable average gaps. To show that the enhancement introduced by the ML system is statistically significant on average, a statistical test was conducted. A t-test on the percentage errors obtained for the 54 instances in Table 3 gives a p-value of 0.03 against the hypothesis that the two policies are equivalent in terms of average optimality gap. The result shows that the enhancement is relevant, and that these systems have a promising role in improving the quality of TSP solvers.
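Such a test can be reproduced with SciPy as sketched below; the gap arrays here are placeholders rather than the values of Table 3, and the paired form of the test is an assumption of ours (the two policies are run on the same 54 instances).

```python
from scipy import stats

# Paired comparison of per-instance optimality gaps of two policies.
# Placeholder values: in the paper these would be the 54 TSPLIB gaps of F and ML-C.
gaps_first = [5.1, 4.8, 6.0]
gaps_ml_c  = [4.7, 4.9, 5.2]
t_stat, p_value = stats.ttest_rel(gaps_first, gaps_ml_c)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # p < 0.05 -> significant difference
```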
To check the behavior of the heuristics with respect to time, Table 4 reports the CPU time for each policy and heuristic shown in Table 3. Note that for each query to the ResNet the input procedure produces an image, which increases the computational burden. Therefore, future work could aim at speeding up the ML component, even though the computation times remain short and acceptable for many online optimization scenarios.
Finally, to allow a comparison with the metrics presented for MF, FI, and CW in Section 2.2, the results in Table 5 show the final-tour achievements of the F, ML-C, and ML-SC policies across the various positions in the CL. Note that although the TPR of ML-SC is 100%, its FPR is not equal to zero, since during the second phase some edges in the first and second positions can still be inserted. Also note that the accuracy of ML-C is consistently better than that of F, while its FPR for the first position is lower, resulting in a higher TPR for the second position.

4. Discussion

A new strategy to design constructive heuristics has been presented. It gives a central role to the integration of statistical, mathematical, and heuristic exploration. We introduced a new way of thinking about the generalization of ML approaches for the TSP, leading to an efficient integration between learning useful information and exploiting it through classic approaches. The objective is to learn useful skills from experience to enhance the heuristic search. Our ML-Constructive is the first ML approach able to scale and, at the same time, show improvements with respect to a classic efficient constructive heuristic. Furthermore, the introduced approach gives good guidelines on how the ML can behave in the event of extreme negative or positive cases. The results are very promising and suggest that putting more emphasis on the generalization of hybrid designs pays off.
The relevance of an exploratory stage with statistical studies of the problem at hand has been emphasized. The target of these studies is to select an effective sub-problem that allows the avoidance of many known ML flaws.
More work needs to be done to improve the accuracy and the extrapolation ability of the ML classifier. Further improvements in future work could go in the direction of reducing the (constant) time required to prepare the input for the ML classifier, and of integrating the approach with meta-heuristics as well.

Author Contributions

Conceptualization, U.J.M., L.M.G. and R.M.; methodology, U.J.M.; software, U.J.M.; validation, U.J.M., L.M.G. and R.M.; formal analysis, U.J.M.; investigation, U.J.M.; resources, U.J.M.; data curation, U.J.M.; writing—original draft preparation, U.J.M.; writing—review and editing, U.J.M. and R.M.; visualization, U.J.M.; supervision, R.M.; project administration, R.M.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

Umberto Junior Mele was supported by the Swiss National Science Foundation through grant 200020-182360: "Machine learning and sampling-based metaheuristics for stochastic vehicle routing problems".

Data Availability Statement

All the code for replicating the experiments and creating the data-sets can be found in the GitHub repository: https://github.com/UmbertoJr/ML-Constructive (accessed on 8 September 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Complexity of the Inner-Loop Constraint Tracker

The purpose of this appendix is to compute the complexity of the inner-loop constraint tracker used in the ML-Constructive heuristic, as stated in Section 2.4 and in Algorithm 1. For ease of exposition, we restrict our analysis to the symmetric TSP, but similar results can be obtained for the asymmetric case as well.
Firstly, we observe that the constraint tracker procedure is applied only to edges whose two extremes both have exactly one connection in the partial solution, since the tracker routine follows the constraints expressed by lines 8 and 22 of Algorithm 1. Therefore, edges connecting internal points of the fragments cannot occur at this stage, as shown in Figure A1, while edges creating an inner loop or joining two fragments can occur (Figure A2). The goal of the tracker is to distinguish the inner-loop connections from the others. The growing connections and new-fragment connections shown in Figure A2 can be detected in constant time, since it is enough to check that one extreme of the candidate edge has zero connections in the partial solution (lines 12 and 25). Note also that these two events cannot occur as input to the tracker procedure, since they do not satisfy the constraints expressed by lines 8 and 22.
Figure A1. Events that cannot occur as an input to the tracking procedure.
Figure A2. Events that can occur and are prevented by the tracking procedure (first two), and events that can be detected in constant time (last two).
Secondly, note that what is computed in this appendix is the worst-case complexity of the whole procedure, from the empty solution to the complete tour; we therefore consider not a single call of the tracker, but the global computation carried out during the entire construction process. The maximum number of positive additions for a constructive heuristic that grows fragments is equal to n (the length of the tour), where an addition is positive if the edge attached to the partial solution complies with the TSP constraints in (1b–e) and the ML decision-taker agrees to add the considered edge to the solution. We refer to the epoch between two positive additions as t; e.g., no edge is in the solution at t = 0, while exactly eight edges are in the solution at t = 8. For the symmetric TSP, the epochs to be checked by the tracking routine range from t = 2 to t = n − 2.
As mentioned, the computationally expensive events that the tracker needs to check are the inner-loop connections and the joining-fragments connections. An inner-loop connection, drawn in yellow in Figure A2, occurs when the two extremes of a fragment are connected together by the attaching edge l. If we assume that at epoch t there are at most s ≤ t fragments, then at most s attaching edges can create an inner loop at this epoch (Figure A3), and the total number of operations needed to check these s possible inner loops is at most t. In fact, the tracker checks a candidate edge by completely spanning one of the fragments connected to it: if the other extreme of the fragment coincides with the other extreme of the attaching edge, there is an inner-loop connection; otherwise, there is a joining-fragments connection. Note that once an edge has been rejected, the fragment associated with it is set free for the current epoch, and the tracker does not need to check its extremes again. Since there are at most t operations per epoch and at most n epochs, the global computation is O(n²).
Figure A3. Single and double fragments possible inner-loops at the eighth epoch.
Having established the upper bound on the complexity of detecting the inner-loop connections, it remains to estimate the number of operations required by the joining-fragments connections. Usually, after a joining-fragments connection event is encountered, the insertion of the considered edge l takes place. However, since in ML-Constructive the ML decision-taker may reject the attaching edge (line 10), the tracker can be called several times during the same epoch. This could be a problem if the promising list L_P were not limited to at most m × n edges (Section 2.3). Assuming a worst-case cost of O(n) for each edge processed in the first phase, the global tracking computation therefore remains within O(n²) operations.
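A minimal sketch of the fragment-spanning check described above is reported here, assuming the partial solution is stored as an adjacency list in which every vertex has at most two neighbours; the function name and data layout are illustrative and do not reproduce the authors' implementation.

```python
# Sketch of the inner-loop check: when both extremes of the candidate edge
# (u, v) already have exactly one connection, walk along the fragment starting
# from u; inserting (u, v) would close an inner loop iff the walk ends in v.
def creates_inner_loop(adj, u, v):
    """adj[i] lists the neighbours of vertex i in the partial solution."""
    assert len(adj[u]) == 1 and len(adj[v]) == 1   # pre-condition of lines 8/22
    prev, curr = u, adj[u][0]
    while True:
        if curr == v:
            return True        # u and v are the two extremes of the same fragment
        nxt = [w for w in adj[curr] if w != prev]
        if not nxt:            # reached the other fragment end without meeting v
            return False       # edge (u, v) would join two distinct fragments
        prev, curr = curr, nxt[0]
```

Each call walks along a single fragment, so its cost is proportional to the fragment length, which is consistent with the per-epoch bound used above.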

Appendix B. The Earlier Insertion of the Most Promising Edges Could Increase the Probability of Finding the Optimal Tour

The purpose of this appendix is to present some advantages that a procedure inserting the most promising edges into the solution first has with respect to other approaches. If the growing-fragments heuristic is considered as a stochastic process, then we can estimate the probability that the optimal tour has of emerging from the procedure. For each edge l considered for inclusion in the partial solution there are two possible events: the edge is included or it is not. If the random variable E_l denotes the event that edge l is included in the solution (¬E_l otherwise), then the probability of such an event can be expressed as:
$$P(E_l) = 1 - P(A_l) - P(B_l) \qquad \text{and} \qquad P(\lnot E_l) = P(A_l) + P(B_l) \tag{A1}$$
where A_l refers to the internal-point connection events (Figure A1), while B_l stands for the inner-loop connection events (Figure A2). Recall that an internal-point connection occurs when the constraint ensuring that no vertex is connected to more than two other vertices is violated, while an inner loop occurs when sub-tours are generated instead of a single global loop.
Suppose that a list L is used to store all the existing edges of the TSP instance to be solved, and that this list is randomly shuffled to create a random examination order. The probabilities of the events A_l and B_l then depend on the position p in the list and on the number of edges already inserted in the partial solution, so combinatorics can help us calculate or approximate them. Recalling the epoch concept introduced in Appendix A, we can state that at t = 0 the probability of E_l is one, while at t = n it is zero:
$$P(E_l \mid t = 0) = 1$$
$$P(E_l \mid t = n) = 0$$
Indeed, at t = 0 no edge has been placed in the solution, so no A_l or B_l event can occur, while at t = n the solution is complete. We then want to show that the probability of E_l decreases monotonically as more edges are fed into the solution and as we progress through the list L. If this conjecture holds, we can conclude that edges inspected earlier in the list are more likely to be included than those seen later, which emphasizes the need to place first in the list L the edges that we consider most promising for the optimal solution. As a first step, we determine the probability of occurrence of A_l. To obtain this probability, we simply estimate the number of cases in which A_l occurs and divide it by the total number of possible cases. These cases vary depending on the position p in the list, the number of edges e, and the number of vertices n in the instance. Recall that A_l occurs when edge l is preceded in the list by at least two other edges that share an extreme with l. If there are d of these overlapping edges, A_l can occur for d = 2 to d = n − 1:
$$P(A_l \mid p) \;=\; \sum_{d=2}^{n-1} \frac{\binom{n-1}{d}\,\binom{e-n+1}{p-1-d}}{\binom{e}{p-1}} \;\propto\; \sum_{d=2}^{n-1} \frac{(p-1)!}{(p-1-d)!}$$
which is an increasing function with respect to the position p and converges to 1 as p approaches e.
Meanwhile, to compute the probability of B_l, the epoch in which the event occurs must be taken into account. Bear in mind that the epoch is non-decreasing as we proceed along the positions p of the list, since no operation removes an edge from the solution and edges can only be added to the partial solution.
Considering that the maximum total number of inner-loops for a given epoch is fixed and equal to t, we have that:
$$P(B_l \mid p, t) \;<\; \frac{t}{e - p + 1}, \qquad t \le n$$
which has an upper bound that is an increasing function with respect to the position p and the epoch t.
To conclude, since the probabilities of A_l and B_l show an increasing trend (although not strictly, due to the upper bound on B_l), the probability of E_l shows a decreasing trend by Equation (A1). Therefore, inserting the most promising edges early is a good strategy for the heuristic. These results do not prove that P(E_l | t) is a strictly decreasing function for any solving algorithm, but they suggest a general decreasing trend, which the ML-Constructive heuristic should exploit during the construction of the solution.
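Since the argument is purely combinatorial, the trend can also be checked empirically: shuffle the complete edge list of a small instance, feed the edges to a fragment-growing constructor, and record how often the edge examined at each position is actually inserted. The sketch below is ours (instance size, number of trials, and all names are illustrative choices) and reuses the fragment-spanning idea of Appendix A; no coordinates are needed.

```python
# Empirical estimate of P(E_l | position p) under a random examination order
# and a fragment-growing insertion rule. Illustrative sketch.
import random
from itertools import combinations

def closes_inner_loop(adj, u, v):
    # Walk the fragment starting from endpoint u; edge (u, v) would create
    # an inner loop iff the walk ends exactly in v.
    prev, curr = u, adj[u][0]
    while True:
        if curr == v:
            return True
        nxt = [w for w in adj[curr] if w != prev]
        if not nxt:
            return False
        prev, curr = curr, nxt[0]

def acceptance_by_position(n=30, trials=200, seed=0):
    rng = random.Random(seed)
    edges = list(combinations(range(n), 2))   # complete graph on n vertices
    accept = [0] * len(edges)
    for _ in range(trials):
        rng.shuffle(edges)                    # random examination order
        degree = [0] * n
        adj = [[] for _ in range(n)]
        inserted = 0
        for p, (u, v) in enumerate(edges):
            if degree[u] == 2 or degree[v] == 2:            # event A_l
                continue
            if (inserted < n - 1 and degree[u] == 1 and degree[v] == 1
                    and closes_inner_loop(adj, u, v)):      # event B_l
                continue
            adj[u].append(v); adj[v].append(u)
            degree[u] += 1; degree[v] += 1
            inserted += 1
            accept[p] += 1                                  # E_l occurred at position p
    return [a / trials for a in accept]       # estimate of P(E_l | p)

if __name__ == "__main__":
    probs = acceptance_by_position()
    print(probs[::50])   # the estimates show the decreasing trend discussed above
```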

Figure 1. (A) Single fragment constructor that operates similarly to NN. (B) Constructor that grows multiple fragments, similarly to MF and CW.
Figure 2. Empirical Probability Density Function (PDF) showing the optimal edge behavior in relation to the position in the CL. Over each bar is shown the rate of optimal edge occurrence for each considered position. (A) Evaluation data-set. (B) TSPLIB data-set.
Figure 3. TPR (A) and FPR (B) comparison for the MF, FI and CW heuristics. The first five positions in the CL are considered separately, while all the others are shown in the >5 bars.
Figure 4. First-phase partial solution constructed with the ML predictions. Vertices in light blue are free for the second phase of ML-Constructive. The instance is KroA100 from the TSPLIB collection.
Figure 5. Example of input image. The vertices in the local view are in the red channel, the edge l is drawn in the green channel, while the edges in the partial solution are in the blue channel.
Figure 6. ResNet10.
Figure 7. (A) The partial solution available at the end of the first phase for ML-SC: in light blue the remaining free vertices, in dark blue the inserted edges. (B) The complete tour found at the end of the ML-SC run: in red the edges added during the second phase. The considered instance is KroA100 from the TSPLIB collection.
Table 1. TPR, FPR, accuracy and precision comparison across several positions and methods.

| Position | Method | TPR | FPR | Accuracy | Precision |
|---|---|---|---|---|---|
| 1 | MF | 92.57% | 61.65% | 85.20% | 90.52% |
| 1 | FI | 77.39% | 46.26% | 74.18% | 91.41% |
| 1 | CW | 82.79% | 46.56% | 78.80% | 91.88% |
| 2 | MF | 83.21% | 29.57% | 78.19% | 81.30% |
| 2 | FI | 66.00% | 26.20% | 69.06% | 79.56% |
| 2 | CW | 72.29% | 27.01% | 72.57% | 80.53% |
| 3 | MF | 52.41% | 9.23% | 81.59% | 64.15% |
| 3 | FI | 44.80% | 15.99% | 74.62% | 46.87% |
| 3 | CW | 55.03% | 15.04% | 77.79% | 53.54% |
| 4 | MF | 38.47% | 4.79% | 88.15% | 53.28% |
| 4 | FI | 38.96% | 11.09% | 82.70% | 33.30% |
| 4 | CW | 45.99% | 10.40% | 84.18% | 38.59% |
| 5 | MF | 27.12% | 2.27% | 93.75% | 41.65% |
| 5 | FI | 30.59% | 7.88% | 88.65% | 18.82% |
| 5 | CW | 36.20% | 5.75% | 90.98% | 27.35% |
| >5 | MF | 22.72% | 0.01% | 99.98% | 13.94% |
| >5 | FI | 27.55% | 0.03% | 99.97% | 9.60% |
| >5 | CW | 29.01% | 0.02% | 99.98% | 14.71% |
Table 2. Comparison on TPR, FPR and their difference for several choices of the L_P list.

| Closest m | TPR | FPR | TPR-FPR |
|---|---|---|---|
| 1 | 100.00% | 100.00% | 0.00% |
| 2 | 53.91% | 13.70% | 40.21% |
| 3 | 30.97% | 1.64% | 29.33% |
| 4 | 31.00% | 1.50% | 29.50% |
| 5 | 30.66% | 1.30% | 29.36% |
Table 3. Percentage error comparison of various decision-takers policies for testing TSPLIB instances.

| Instances | MF | FI | CW | F | S | Y | AE | BE | ML-C | ML-SC | ML-SC (Gap) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| kroA100 | 14.120 | 16.596 | 6.043 | 9.618 | 22.437 | 8.636 | 11.986 | 7.429 | 6.480 | 3.792 | 2.251 |
| kroC100 | 12.270 | 4.979 | 11.480 | 8.362 | 27.558 | 5.263 | 13.391 | 6.950 | 10.343 | 4.776 | 0.203 |
| rd100 | 16.928 | 11.214 | 8.736 | 11.580 | 24.121 | 11.214 | 14.212 | 8.938 | 8.559 | 6.738 | 1.821 |
| eil101 | 27.504 | 12.719 | 5.087 | 8.426 | 22.099 | 18.760 | 14.348 | 8.426 | 4.293 | 0.000 | 4.293 |
| lin105 | 16.065 | 9.006 | 8.638 | 7.177 | 37.332 | 21.406 | 12.580 | 7.080 | 8.485 | 10.780 | −3.700 |
| pr107 | 5.799 | 2.742 | 10.166 | 9.245 | 10.640 | 8.936 | 9.454 | 6.717 | 11.153 | 0.445 | 2.297 |
| pr124 | 10.110 | 8.225 | 2.502 | 4.911 | 24.344 | 7.942 | 9.079 | 4.191 | 6.991 | 2.997 | −0.495 |
| bier127 | 14.186 | 16.151 | 5.659 | 4.753 | 25.162 | 12.370 | 10.091 | 4.571 | 6.604 | 0.000 | 4.571 |
| ch130 | 28.462 | 13.912 | 7.480 | 7.414 | 22.733 | 8.003 | 12.305 | 8.429 | 4.975 | 4.206 | 0.769 |
| pr136 | 23.160 | 10.089 | 7.186 | 11.709 | 15.693 | 16.046 | 14.878 | 11.701 | 11.151 | 3.713 | 3.473 |
| gr137 | 27.234 | 13.502 | 8.243 | 9.742 | 30.207 | 15.548 | 14.170 | 10.124 | 7.329 | 3.188 | 4.141 |
| pr144 | 12.483 | 3.965 | 6.444 | 6.628 | 12.016 | 4.161 | 7.625 | 3.796 | 4.474 | 3.962 | −0.166 |
| kroA150 | 20.238 | 15.876 | 8.468 | 10.507 | 26.934 | 14.134 | 12.799 | 9.467 | 6.877 | 1.139 | 5.738 |
| pr152 | 15.196 | 7.022 | 9.455 | 7.204 | 17.060 | 6.117 | 8.239 | 5.647 | 6.919 | 3.793 | 1.854 |
| u159 | 17.952 | 25.604 | 8.408 | 9.009 | 19.881 | 10.542 | 11.788 | 5.589 | 7.952 | 6.024 | −0.435 |
| rat195 | 13.043 | 15.497 | 5.854 | 7.576 | 13.431 | 13.345 | 10.286 | 5.854 | 7.533 | 0.000 | 5.854 |
| d198 | 20.507 | 8.251 | 5.444 | 6.711 | 17.535 | 7.744 | 9.262 | 6.267 | 6.255 | 4.011 | 1.433 |
| kroA200 | 17.819 | 12.732 | 8.622 | 11.097 | 25.541 | 11.298 | 13.008 | 10.328 | 6.681 | 2.094 | 4.587 |
| gr202 | 15.935 | 10.323 | 5.683 | 7.367 | 19.916 | 7.716 | 9.673 | 6.331 | 4.436 | 1.825 | 2.611 |
| ts225 | 12.842 | 24.528 | 6.804 | 9.493 | 6.975 | 14.929 | 11.309 | 8.075 | 6.520 | 11.330 | −4.810 |
| tsp225 | 26.237 | 15.644 | 10.438 | 9.790 | 24.165 | 9.609 | 13.906 | 9.169 | 11.292 | 4.403 | 4.766 |
| pr226 | 21.052 | 1.788 | 9.948 | 11.918 | 16.784 | 9.031 | 12.347 | 8.778 | 8.599 | 5.370 | −3.582 |
| gr229 | 19.624 | 24.001 | 8.849 | 7.593 | 22.242 | 10.932 | 10.807 | 6.573 | 7.495 | 2.807 | 3.766 |
| gil262 | 12.279 | 20.732 | 9.714 | 6.602 | 24.769 | 10.892 | 10.976 | 8.368 | 6.224 | 8.999 | −2.775 |
| pr264 | 14.987 | 13.259 | 8.839 | 4.919 | 16.239 | 9.816 | 10.091 | 6.195 | 6.036 | 3.739 | 1.180 |
| a280 | 20.822 | 14.153 | 13.998 | 13.959 | 15.394 | 12.447 | 15.944 | 12.330 | 11.439 | 0.388 | 11.051 |
| pr299 | 21.639 | 15.903 | 8.103 | 10.467 | 22.371 | 15.244 | 14.385 | 10.965 | 8.636 | 0.905 | 7.198 |
| lin318 | 18.356 | 20.750 | 7.849 | 6.377 | 30.848 | 14.114 | 12.652 | 9.398 | 6.679 | 5.501 | 0.876 |
| rd400 | 15.032 | 18.265 | 9.548 | 8.278 | 21.923 | 10.844 | 11.840 | 7.506 | 8.108 | 3.423 | 4.083 |
| fl417 | 12.469 | 15.589 | 12.335 | 10.606 | 28.488 | 8.962 | 12.149 | 8.473 | 8.102 | 7.394 | 0.708 |
| gr431 | 19.672 | 16.603 | 11.953 | 6.942 | 21.114 | 11.550 | 12.270 | 7.867 | 11.554 | 4.196 | 2.746 |
| pr439 | 15.983 | 15.487 | 14.609 | 9.024 | 23.183 | 13.395 | 12.157 | 9.439 | 7.661 | 7.104 | 0.557 |
| pcb442 | 21.423 | 17.988 | 9.935 | 12.460 | 16.403 | 13.908 | 13.233 | 11.596 | 10.172 | 2.787 | 7.148 |
| d493 | 16.550 | 17.172 | 8.652 | 9.618 | 17.898 | 9.872 | 9.749 | 8.443 | 6.818 | 4.352 | 2.466 |
| att532 | 22.223 | 18.256 | 11.013 | 7.596 | 25.022 | 11.150 | 11.629 | 9.455 | 8.441 | 3.590 | 4.006 |
| u574 | 22.276 | 23.818 | 10.779 | 9.023 | 24.349 | 9.638 | 13.113 | 9.776 | 9.790 | 4.495 | 4.528 |
| rat575 | 18.529 | 24.376 | 8.460 | 10.350 | 19.592 | 8.962 | 11.734 | 9.671 | 6.201 | 2.761 | 3.440 |
| d657 | 13.997 | 19.781 | 7.949 | 7.583 | 23.008 | 8.878 | 12.009 | 10.043 | 7.587 | 3.829 | 3.754 |
| gr666 | 13.473 | 19.277 | 13.241 | 9.314 | 24.339 | 12.670 | 13.736 | 11.534 | 9.635 | 6.741 | 2.573 |
| u724 | 17.836 | 23.298 | 9.881 | 7.590 | 23.028 | 9.952 | 11.276 | 9.308 | 6.543 | 3.054 | 3.489 |
| rat783 | 22.062 | 22.333 | 9.642 | 7.047 | 21.549 | 11.617 | 10.995 | 8.877 | 5.934 | 4.956 | 0.978 |
| pr1002 | 18.857 | 24.818 | 10.763 | 9.751 | 20.484 | 13.648 | 13.238 | 11.600 | 8.529 | 5.364 | 3.165 |
| u1060 | 17.322 | 23.213 | 10.732 | 9.620 | 21.754 | 10.537 | 13.128 | 11.580 | 8.954 | 6.537 | 2.417 |
| vm1084 | 23.083 | 22.962 | 10.298 | 9.615 | 28.746 | 12.485 | 13.390 | 11.253 | 9.123 | 6.951 | 2.172 |
| pcb1173 | 17.792 | 26.408 | 10.917 | 9.567 | 21.033 | 14.631 | 13.380 | 11.821 | 9.986 | 5.790 | 3.777 |
| d1291 | 21.917 | 22.502 | 10.155 | 5.711 | 14.783 | 9.653 | 9.643 | 7.573 | 8.535 | 3.081 | 2.630 |
| rl1304 | 12.142 | 28.188 | 10.610 | 7.512 | 25.060 | 11.100 | 11.821 | 9.580 | 9.506 | 5.228 | 2.284 |
| rl1323 | 14.876 | 26.432 | 11.804 | 7.467 | 25.401 | 8.385 | 11.310 | 9.789 | 6.538 | 3.695 | 2.843 |
| nrw1379 | 22.314 | 25.211 | 9.914 | 8.842 | 19.794 | 11.351 | 11.443 | 9.856 | 7.996 | 3.655 | 4.341 |
| fl1400 | 20.520 | 13.733 | 11.432 | 10.975 | 27.267 | 13.718 | 14.541 | 9.997 | 11.273 | 8.471 | 1.526 |
| u1432 | 23.329 | 21.065 | 10.407 | 12.676 | 14.967 | 14.928 | 14.669 | 12.491 | 8.440 | 5.725 | 2.715 |
| fl1577 | 16.976 | 23.053 | 12.167 | 12.634 | 14.082 | 12.351 | 12.502 | 9.084 | 10.463 | 4.373 | 4.711 |
| d1655 | 15.282 | 20.785 | 11.187 | 9.503 | 18.427 | 10.368 | 12.130 | 9.478 | 8.269 | 5.637 | 2.632 |
| vm1748 | 14.134 | 24.335 | 11.912 | 9.992 | 24.520 | 11.868 | 13.757 | 12.225 | 9.302 | 6.094 | 3.208 |
| average | 17.906 | 17.113 | 9.341 | 8.879 | 21.493 | 11.345 | 12.082 | 8.815 | 8.035 | 4.374 | 2.549 |
| std | 4.659 | 6.652 | 2.368 | 2.077 | 5.493 | 3.142 | 1.789 | 2.155 | 1.875 | 2.473 | 2.715 |
| best | 0/54 | 3/54 | 6/54 | 10/54 | 0/54 | 0/54 | 0/54 | 11/54 | 24/54 | 50/54 |  |
Table 4. CPU time comparison related to different greedy policies for testing TSPLIB instances.

| Instances | MF | FI | CW | F | S | Y | AE | BE | ML-C | ML-SC |
|---|---|---|---|---|---|---|---|---|---|---|
| kroA100 | 0.004 | 0.006 | 0.006 | 0.009 | 0.008 | 0.007 | 0.014 | 0.276 | 1.423 | 0.011 |
| kroC100 | 0.004 | 0.006 | 0.006 | 0.011 | 0.010 | 0.008 | 0.015 | 0.309 | 1.706 | 0.010 |
| rd100 | 0.004 | 0.006 | 0.008 | 0.011 | 0.010 | 0.008 | 0.016 | 0.323 | 1.608 | 0.011 |
| eil101 | 0.006 | 0.006 | 0.007 | 0.012 | 0.011 | 0.011 | 0.019 | 0.371 | 2.178 | 0.013 |
| lin105 | 0.005 | 0.006 | 0.007 | 0.013 | 0.012 | 0.011 | 0.017 | 0.335 | 1.445 | 0.012 |
| pr107 | 0.005 | 0.007 | 0.009 | 0.013 | 0.011 | 0.011 | 0.015 | 0.302 | 1.298 | 0.012 |
| pr124 | 0.005 | 0.009 | 0.009 | 0.017 | 0.017 | 0.016 | 0.020 | 0.395 | 1.565 | 0.015 |
| bier127 | 0.010 | 0.009 | 0.012 | 0.100 | 0.097 | 0.047 | 0.055 | 1.106 | 2.554 | 0.059 |
| ch130 | 0.011 | 0.009 | 0.011 | 0.017 | 0.016 | 0.015 | 0.024 | 0.473 | 2.507 | 0.019 |
| pr136 | 0.009 | 0.020 | 0.011 | 0.015 | 0.014 | 0.014 | 0.022 | 0.438 | 2.308 | 0.017 |
| gr137 | 0.006 | 0.011 | 0.012 | 0.023 | 0.021 | 0.020 | 0.027 | 0.547 | 2.557 | 0.024 |
| pr144 | 0.006 | 0.012 | 0.015 | 0.023 | 0.022 | 0.018 | 0.024 | 0.487 | 0.640 | 0.021 |
| kroA150 | 0.011 | 0.014 | 0.012 | 0.021 | 0.017 | 0.017 | 0.025 | 0.492 | 2.444 | 0.019 |
| pr152 | 0.010 | 0.013 | 0.015 | 0.023 | 0.021 | 0.019 | 0.026 | 0.528 | 1.047 | 0.022 |
| u159 | 0.007 | 0.014 | 0.015 | 0.026 | 0.025 | 0.024 | 0.029 | 0.580 | 2.926 | 0.026 |
| rat195 | 0.009 | 0.021 | 0.024 | 0.027 | 0.027 | 0.026 | 0.038 | 0.765 | 3.553 | 0.025 |
| d198 | 0.022 | 0.020 | 0.027 | 0.111 | 0.109 | 0.106 | 0.115 | 2.305 | 3.451 | 0.106 |
| kroA200 | 0.011 | 0.021 | 0.028 | 0.036 | 0.032 | 0.024 | 0.044 | 0.879 | 3.975 | 0.027 |
| gr202 | 0.024 | 0.022 | 0.028 | 0.137 | 0.136 | 0.129 | 0.143 | 2.866 | 4.018 | 0.125 |
| ts225 | 0.012 | 0.027 | 0.036 | 0.043 | 0.044 | 0.034 | 0.044 | 0.884 | 3.955 | 0.036 |
| tsp225 | 0.019 | 0.028 | 0.034 | 0.041 | 0.038 | 0.033 | 0.052 | 1.036 | 4.239 | 0.031 |
| pr226 | 0.018 | 0.031 | 0.031 | 0.055 | 0.052 | 0.051 | 0.056 | 1.111 | 1.102 | 0.054 |
| gr229 | 0.019 | 0.031 | 0.032 | 0.091 | 0.083 | 0.088 | 0.100 | 1.996 | 4.220 | 0.080 |
| gil262 | 0.017 | 0.035 | 0.043 | 0.058 | 0.061 | 0.042 | 0.068 | 1.354 | 4.640 | 0.060 |
| pr264 | 0.030 | 0.037 | 0.037 | 0.066 | 0.067 | 0.053 | 0.070 | 1.401 | 3.815 | 0.055 |
| a280 | 0.032 | 0.043 | 0.044 | 0.524 | 0.502 | 0.473 | 0.503 | 10.064 | 5.368 | 0.471 |
| pr299 | 0.037 | 0.050 | 0.062 | 0.073 | 0.060 | 0.059 | 0.088 | 1.770 | 5.429 | 0.064 |
| lin318 | 0.026 | 0.057 | 0.069 | 0.091 | 0.084 | 0.079 | 0.099 | 1.977 | 4.871 | 0.090 |
| rd400 | 0.042 | 0.095 | 0.096 | 0.127 | 0.112 | 0.119 | 0.151 | 3.013 | 7.594 | 0.129 |
| fl417 | 0.048 | 0.114 | 0.108 | 0.211 | 0.190 | 0.192 | 0.221 | 4.425 | 7.020 | 0.193 |
| gr431 | 0.076 | 0.110 | 0.133 | 0.359 | 0.353 | 0.352 | 0.396 | 7.927 | 8.016 | 0.342 |
| pr439 | 0.094 | 0.123 | 0.130 | 0.231 | 0.199 | 0.194 | 0.241 | 4.817 | 7.093 | 0.197 |
| pcb442 | 0.088 | 0.133 | 0.147 | 0.193 | 0.179 | 0.151 | 0.203 | 4.055 | 8.526 | 0.157 |
| d493 | 0.144 | 0.231 | 0.210 | 0.884 | 0.898 | 0.886 | 0.907 | 18.141 | 10.235 | 0.889 |
| att532 | 0.111 | 0.201 | 0.219 | 0.309 | 0.265 | 0.290 | 0.329 | 6.571 | 10.010 | 0.230 |
| u574 | 0.154 | 0.219 | 0.218 | 0.297 | 0.272 | 0.281 | 0.339 | 6.789 | 10.652 | 0.291 |
| rat575 | 0.105 | 0.233 | 0.225 | 0.236 | 0.223 | 0.186 | 0.282 | 5.647 | 11.079 | 0.204 |
| d657 | 0.159 | 0.332 | 0.294 | 1.025 | 1.034 | 1.003 | 1.081 | 21.617 | 12.054 | 1.028 |
| gr666 | 0.170 | 0.651 | 0.381 | 0.768 | 0.651 | 0.706 | 0.808 | 16.150 | 12.513 | 0.680 |
| u724 | 0.149 | 0.455 | 0.394 | 0.338 | 0.359 | 0.310 | 0.465 | 9.298 | 12.734 | 0.290 |
| rat783 | 0.303 | 0.504 | 0.448 | 0.383 | 0.389 | 0.314 | 0.532 | 10.647 | 14.551 | 0.299 |
| pr1002 | 0.498 | 1.042 | 0.989 | 0.879 | 0.758 | 0.766 | 1.099 | 21.974 | 20.139 | 0.747 |
| u1060 | 0.294 | 1.183 | 0.802 | 1.401 | 1.198 | 1.057 | 1.444 | 28.873 | 20.074 | 1.149 |
| vm1084 | 0.412 | 1.504 | 0.887 | 1.061 | 0.867 | 0.712 | 1.152 | 23.045 | 17.057 | 0.722 |
| pcb1173 | 0.402 | 1.657 | 0.956 | 1.109 | 1.050 | 0.997 | 1.458 | 29.156 | 22.952 | 0.947 |
| d1291 | 0.918 | 2.192 | 1.247 | 4.518 | 4.452 | 4.138 | 4.289 | 85.775 | 20.517 | 3.881 |
| rl1304 | 0.538 | 2.482 | 1.303 | 1.284 | 1.145 | 1.069 | 1.574 | 31.470 | 19.792 | 1.075 |
| rl1323 | 0.696 | 1.856 | 1.332 | 1.739 | 1.655 | 1.268 | 1.861 | 37.213 | 20.215 | 1.324 |
| nrw1379 | 0.994 | 1.942 | 1.624 | 1.270 | 1.203 | 1.107 | 1.762 | 35.249 | 29.439 | 1.113 |
| fl1400 | 0.745 | 2.020 | 1.228 | 2.655 | 2.674 | 2.559 | 2.816 | 56.325 | 30.401 | 2.583 |
| u1432 | 1.050 | 2.116 | 1.917 | 1.954 | 1.760 | 1.422 | 2.104 | 42.074 | 32.086 | 1.072 |
| fl1577 | 1.044 | 2.733 | 2.117 | 2.650 | 2.616 | 1.840 | 2.713 | 54.267 | 25.673 | 1.438 |
| d1655 | 1.551 | 3.229 | 2.689 | 8.131 | 7.945 | 7.751 | 8.623 | 172.450 | 30.257 | 7.624 |
| vm1748 | 0.660 | 3.914 | 2.360 | 2.525 | 1.836 | 1.914 | 2.894 | 57.884 | 28.875 | 1.967 |
| average | 0.219 | 0.590 | 0.428 | 0.708 | 0.665 | 0.612 | 0.769 | 15.374 | 9.822 | 0.594 |
| std | 0.351 | 0.971 | 0.679 | 1.364 | 1.319 | 1.251 | 1.433 | 28.660 | 9.280 | 1.217 |
Table 5. TPR, FPR, accuracy and precision comparison across several positions and policies.

| Position | Method | TPR | FPR | Accuracy | Precision |
|---|---|---|---|---|---|
| 1 | F | 98.07% | 85.80% | 86.68% | 87.91% |
| 1 | ML-C | 93.64% | 53.08% | 87.29% | 91.82% |
| 1 | ML-SC | 100.00% | 3.41% | 99.54% | 99.47% |
| 2 | F | 68.28% | 18.12% | 73.62% | 85.35% |
| 2 | ML-C | 84.74% | 29.07% | 79.32% | 81.84% |
| 2 | ML-SC | 100.00% | 3.01% | 98.82% | 98.09% |
| 3 | F | 44.40% | 8.28% | 80.39% | 62.82% |
| 3 | ML-C | 42.74% | 6.02% | 81.71% | 69.11% |
| 3 | ML-SC | 86.27% | 0.91% | 96.02% | 96.74% |
| 4 | F | 38.09% | 5.75% | 87.26% | 48.47% |
| 4 | ML-C | 33.08% | 3.60% | 88.52% | 56.62% |
| 4 | ML-SC | 80.27% | 0.94% | 96.73% | 92.41% |
| 5 | F | 31.18% | 4.42% | 91.95% | 29.66% |
| 5 | ML-C | 26.52% | 1.94% | 94.02% | 44.92% |
| 5 | ML-SC | 73.95% | 0.88% | 97.70% | 83.42% |
| >5 | F | 28.94% | 0.02% | 99.97% | 13.62% |
| >5 | ML-C | 25.76% | 0.02% | 99.97% | 11.83% |
| >5 | ML-SC | 64.24% | 0.01% | 99.99% | 48.84% |