A New Constructive Heuristic driven by Machine Learning for the Traveling Salesman Problem

Recent systems applying Machine Learning (ML) to solve the Traveling Salesman Problem (TSP) exhibit issues when they try to scale up to real-case scenarios with several hundred vertices. The use of Candidate Lists (CLs) has been proposed to cope with these issues. A CL restricts the search space during solution creation, consequently reducing the solver's computational burden. So far, ML has been engaged to create CLs and to assign values to the edges of these CLs expressing the ML preference at insertion time. Although promising, these systems do not clearly restrict what the ML learns and does to create solutions, bringing with them generalization issues. Therefore, motivated by exploratory and statistical studies, in this work we instead use a machine learning model to confirm the insertion in the solution only of highly probable edges. The CLs of the highly probable edge are employed as input, and the ML is in charge of distinguishing cases where such edges are in the optimal solution from those where they are not. This strategy enables better generalization and creates an efficient balance between machine learning and searching techniques. Our ML-Constructive heuristic is trained on small instances; it is then able to produce solutions, without losing quality, for large problems as well. We compare our results with classic constructive heuristics, showing good performance on TSPLIB instances with up to 1748 cities. Although our heuristic exhibits an expensive constant-time operation, we prove that the worst-case computational complexity of solution construction after training is $O(n^2 \log n^2)$, where $n$ is the number of vertices in the TSP instance.


Introduction
The Traveling Salesman Problem (TSP) is one of the most intensively studied and relevant problems in the Combinatorial Optimization (CO) field [1], partially for its simple definition despite its membership in the NP-complete class, and partially due to its huge impact on real applications [36]. The last seventy years have seen the development of an extensive literature, which brought valuable enhancements to the CO branch of study. Concepts such as the Held-Karp algorithm [22], powerful meta-heuristics such as Ant Colony Optimization [17], and effective implementations of local search heuristics such as the Lin-Kernighan-Helsgaun [23] have been suggested to solve the TSP. These contributions, along with others, have supported the development of various applied research domains such as logistics [15], genetics [7], telecommunications [42] and neuroscience [34].
In particular, during the last five years, an increasing number of Machine Learning-driven heuristics have appeared to make their contribution to the field [38; 4]. This surge of interest was probably driven by the rich literature and by the interesting opportunities that the CO field introduces with its applications. Some empowering features that ML models could bring to the field are: the opportunity to leverage knowledge from past solutions [29; 37], the ability to imitate computationally expensive operations [11], and the faculty of devising innovative strategies via reinforcement learning paradigms [51].
In light of the new features brought by ML approaches, we wish to couple some powerful ML qualities with well-known heuristic concepts within a hybrid algorithm that seeks robust enhancements with respect to classic approaches. The scope is to create an efficient interlocking between ML and optimization heuristics, in which each approach compensates for the weaknesses of the other. Many attempts have been proposed so far, but none of them has yet succeeded in preserving its improvements when scaling up to larger problems. A promising idea to counter this issue consists in using Candidate Lists (CLs) to identify subsets of edges [19]. Such subsets help the solver restrict its search space during solution creation, leading to an incentive for better generalization.
It can be argued that the generalization issue which emerged in previous ML-driven approaches is caused mostly by the lack of proper consideration of the ML weaknesses and limitations [52; 35]. In fact, ML is known to have trouble with imbalanced datasets, outliers and extrapolation tasks. Such events could lead to significant obstacles in achieving good performance with any ML system. More details on these typical ML weaknesses, with our proposed solutions, will be provided in Section 2.2.
Our main contribution is the introduction of the first hybrid system that actively uses ML models to construct partial TSP solutions without losing quality when scaling. Our ML-Constructive heuristic is composed of two phases. The first uses ML to identify edges that are very likely to be optimal; the second completes the solution with a classic heuristic. The resulting heuristic shows good performance when tested on 54 representative instances selected from the TSPLIB library [44]. The instances considered present up to 1748 vertices, and surprisingly our ML-Constructive exhibits slightly better solutions on larger instances than on smaller ones, as shown in the experiments. Although good results are shown in terms of quality, our heuristic presents an unappealingly large constant-time operation in the current state of the implementation. However, we prove that the creation of a solution after training requires a number of operations bounded by $O(n^2 \log n^2)$. Our ML model learns exclusively from local information, and it employs a simple ResNet architecture [21] to recognize patterns in the candidate lists through images. The use of images, even if not optimal in terms of computation, allowed us to plainly see the input of the network and to get a better understanding of its internal processes. We finally introduce a novel loss function to help the network understand how to make a substantial contribution when building tours.
The TSP is formally stated in Section 1.1, and the literature review is presented in Section 1.2. The concept of constructive heuristic is described in detail in Section 2.1, while statistical and exploratory studies on the candidate lists are spotlighted in Section 2.2. The statistical studies were useful to reach the fundamental insights that allow our ML-Constructive to bypass the ML weaknesses. The general idea of the new method is discussed in Section 2.3, the ML-Constructive heuristic is explained in Section 2.4, and the ML model with the training procedure is discussed in Section 2.5. To conclude, experiments are presented in Section 3, and conclusions are stated in Section 4.

The Traveling Salesman Problem
Given a graph $G(V, E)$ with $n$ vertices belonging to the set $V = \{0, \dots, n-1\}$, and pairwise edges $e_{ij} \in E$ for each pair of vertices $i, j \in V$ with $i \neq j$, let $c_{ij}$ be the cost of the directed edge $e_{ij}$ starting from vertex $i$ and reaching vertex $j$. The objective of the Travelling Salesman Problem is to find the shortest possible route that visits each vertex in $V$ exactly once and creates a loop returning to the starting vertex [1].
The Dantzig et al. [13] formulation of the TSP is an integer linear program describing the requirements that must be met to find the optimal solution to the problem. The variable $x_{ij}$ defines whether the optimal route picks the edge that goes from vertex $i$ to vertex $j$: $x_{ij} = 1$ if it does, and $x_{ij} = 0$ otherwise. A solution is defined as a matrix $X$ with entries $x_{ij}$ and dimension $n \times n$. The objective function minimizes the route cost, as shown in Equation 1a.
$\min \sum_{i \in V} \sum_{j \in V, j \neq i} c_{ij} x_{ij}$  (1a)

Subject to the following constraints: each vertex is arrived at from exactly one other vertex (Equation 1b), each vertex is a departure to exactly one other vertex (Equation 1c), and no inner loop between the vertices of any proper subset $Q$ is created (Equation 1d). The constraints in Equation 1d prevent the solution $X$ from being the union of smaller tours, which leads to an exponential number of constraints. Finally, each edge variable $x_{ij}$ in the solution is not fractional (Equation 1e).

$\sum_{i \in V, i \neq j} x_{ij} = 1 \quad \forall j \in V$  (1b)

$\sum_{j \in V, j \neq i} x_{ij} = 1 \quad \forall i \in V$  (1c)

$\sum_{i \in Q} \sum_{j \in Q, j \neq i} x_{ij} \leq |Q| - 1 \quad \forall Q \subsetneq V, |Q| \geq 2$  (1d)

$x_{ij} \in \{0, 1\} \quad \forall i, j \in V, i \neq j$  (1e)
The TSP is called euclidean if the vertices are described by coordinates in euclidean space and the cost $c_{ij}$ is the euclidean distance between vertices. The euclidean TSP is also symmetric, and the euclidean distance satisfies the triangle inequality for all triplets of vertices:

$c_{ij} \leq c_{ih} + c_{hj} \quad \forall i, h, j \in V$  (2)
A Candidate List (CL) with cardinality $k$ for vertex $i$ is defined as the set of edges $e_{ij}$, with $j \in CL[i]$, such that the vertices $j$ are the $k$ closest vertices to vertex $i$. Such edges are therefore considered likely to belong to an optimal solution.
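As an illustration, candidate lists can be built by sorting pairwise euclidean distances. The following is a minimal Python sketch; the function name `candidate_lists` and the brute-force $O(n^2)$ distance matrix are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def candidate_lists(points, k):
    """Build, for every vertex, the candidate list of its k nearest vertices.

    points: (n, 2) array-like of euclidean coordinates.
    Returns a dict mapping each vertex i to its k closest vertices,
    sorted from nearest to farthest.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise euclidean distance matrix (O(n^2) space, fine for small n).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    cl = {}
    for i in range(n):
        order = np.argsort(dist[i])          # nearest-first ordering
        cl[i] = [int(j) for j in order if j != i][:k]
    return cl
```

For large instances a k-d tree (e.g. `scipy.spatial.cKDTree`) would avoid the quadratic distance matrix.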

Literature Review
The first constructive heuristic historically introduced for the TSP is the Nearest Neighbor (NN), a greedy strategy that repeats the rule "take the closest available node". This procedure is very simple but not very efficient. While NN chooses the best available vertex at each step, the Multi-Fragment (MF) [45; 6] and the Clarke-Wright (CW) [9] are alternatives that instead add the most promising edge to the solution. NN grows a single fragment of tour by adding the closest vertex to the fragment extreme considered during the construction. On the other hand, MF and CW grow, join and give birth to many fragments of the final tour [26]. The approaches using many fragments show superior quality performance, while also coming at very low computational cost.
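The "take the closest available node" rule can be sketched in a few lines of Python (a didactic sketch; the name `nearest_neighbor_tour` is ours, and no attempt is made at efficiency):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy Nearest Neighbor: repeatedly move to the closest unvisited vertex."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        # the closest available node to the current fragment extreme
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # the closing edge back to `start` is implicit
```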
A different way of constructing TSP tours is known as the insertion heuristic, such as the Farthest Insertion (FI) [6]. This approach expands a tour generated at the previous iteration $it$ on a subset of vertices $\hat{V}_{it} \subset V$. The expansion is carried out by inserting a new vertex $j$ into the subset at each new iteration: $\hat{V}_{it+1} = \hat{V}_{it} \cup \{j\}$. To preserve the feasibility of the expanded tour, one edge is removed from the past tour and two edges are added to connect the released vertices to the new vertex $j$. The removal is done in such a way that the lowest cost for the new tour is achieved.
When the insertion policy is Farthest, the newly inserted vertex is always the farthest from the vertices already in the current tour. Once the last iteration has been reached, a complete feasible tour passing through each vertex in $V$ has been constructed.
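The insertion scheme above can be sketched as follows (an illustrative Python sketch, assuming euclidean costs; the function name `farthest_insertion_tour` and the single-vertex starting tour are our own simplifications):

```python
import math

def farthest_insertion_tour(points):
    """Farthest Insertion: at each step, insert the vertex farthest from the
    current partial tour at the position of cheapest insertion."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    tour = [0]
    out = set(range(1, n))
    while out:
        # Farthest policy: pick the vertex maximising its distance to the tour.
        j = max(out, key=lambda v: min(d(v, t) for t in tour))
        # Cheapest insertion: minimise d(a, j) + d(j, b) - d(a, b).
        best, pos = None, 0
        for idx in range(len(tour)):
            a, b = tour[idx], tour[(idx + 1) % len(tour)]
            delta = d(a, j) + d(j, b) - d(a, b)
            if best is None or delta < best:
                best, pos = delta, idx + 1
        tour.insert(pos, j)
        out.remove(j)
    return tour
```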
Initially, the literature introducing ML concepts explored a setup similar to NN. Systems such as pointer networks [48] or graph-based transformers [29; 16; 37] were engaged to select the next vertex in the NN iteration. These architectures were engaged to predict a value for each available departure edge from the extreme considered in the single-fragment approach. Then, the best choice (next vertex) according to these values was added to the fragment. These systems applied stochastic sampling as well, so they could produce thousands of solutions in parallel using GPU processing. Unfortunately, these ML approaches failed to scale, since all vertices were given as input to the networks, confusing them.
To attempt to overcome the scalability problem, several works proposed systems arguably claimed to be able to generalize [12; 33; 19]. Results however showed that the proposed structure2vec [12] and Graph Pointer Network [33] keep their performance close to the Farthest Insertion (FI) [6] up to 250-vertex instances, after which they lose their ability to generalize. The model called Att-GCN [19] is used to pre-process the TSP by constructing candidate lists. Even if the solutions are promising in this case, the ML was not actively used to construct TSP tours. Furthermore, no comparisons with other CL constructors, such as POPMUSIC [46], Delaunay triangulation [31], the minimum spanning 1-tree [1] and a recent CL constructor driven by ML [18], were provided in the paper.
The use of Reinforcement Learning (RL) to tackle the TSP was proposed as well [3], introducing the ability to learn heuristics without the use of optimal solutions. These architectures were trained just via the supervision of the objective function shown in Equation (1a). The actor-critic algorithm [28] was employed as well. Later, Mele et al. [37] used the actor-critic paradigm with an additional reward defined by the easy-to-compute minimum spanning tree cost (which resembles a TSP solution, see [8]). Differently, the Q-learning RL framework was used by Dai et al. [12], Monte Carlo RL was used by Fu et al. [19], and Hierarchical RL was introduced by Ma et al. [33]. Finally, Joshi et al. [27] explicitly advised rethinking the generalization issues, and the need for a more hybrid approach was highlighted. Miki et al. [39; 40] proposed for the first time the use of image processing techniques to solve the TSP. This choice, even if not obvious in terms of efficiency, has the advantage of providing a better understanding of the internal network processes.

Materials and Methods
For the sake of clarity, the materials and methods employed in this work have been divided into subsections. In Section 2.1, constructive heuristics based on fragment growth are reviewed with some examples. The statistical facts and the leading intuitions behind our heuristic are presented in Section 2.2. The general idea of ML-Constructive is described in Section 2.3, and the overall algorithm with its complexity is presented in Section 2.4. The model embodying the ML decision-taker and the training procedure are explained in detail in Section 2.5.

Constructive Heuristics
Constructive heuristics were introduced for the purpose of creating TSP solutions from scratch, when just the initial formulation described in Section 1.1 is available. They exploit intrinsic features during solution creation, and are employed when quick solutions of decent quality are needed [1; 5; 9].
In this paper, we further develop constructive heuristics, particularly those that take their decisions on edges. These approaches grow many fragments of the tour, in contrast to NN, which grows just a single fragment (Figure 1). They take extra effort with respect to the latter to preserve the TSP constraints (Equations 1b to 1e), since at each addition the procedure must avoid inner loops and no vertex can be connected with more than two other vertices. However, the computational time needed to construct a tour remains limited.
As introduced in Wang et al. [49] and Jackovich et al. [25], we want to emphasize that two main choices are required to design an original constructive heuristic driven by edge choices: the order of examination of the edges, and the type of constraints that ensure a correct addition. The total number of edges is $O(n^2)$ for the TSP; therefore, sorting all the edges into the examination order takes $O(n^2 \log n^2)$. The relevance of an edge is related to the probability with which we expect such edge to be in the optimal solution: the higher the relevance, the earlier that edge should be examined. Different strategies are possible; the most famous ones are the MF and the CW policies. For MF the relevance of an edge is related to its cost: the smaller the cost, the higher the probability of addition. For CW instead the relevance depends on a saving value, designed on purpose to rethink the addition order. Hence MF sorts the examination order by the cost values $c_{ij}$, while CW orders the examination with a function that computes savings. A saving is the gain obtained when, rather than passing through a hub vertex $h$ at each step, the salesman uses the straight edge between vertex $i$ and vertex $j$. The hub vertex $h$ is chosen in such a way that it has the shortest total distance (TD) from the other vertices. The formula used to find the hub vertex is in Equation 3, while the function that computes the saving values is shown in Equation 4:

$h = \arg\min_{v \in V} \sum_{i \in V, i \neq v} c_{vi}$  (3)

$s_{ij} = c_{ih} + c_{hj} - c_{ij}$  (4)
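The hub selection and the saving-based ordering can be sketched together (an illustrative Python sketch of the standard Clarke-Wright ordering; the function name `clarke_wright_order` is ours, and edges incident to the hub are simply excluded here):

```python
import math

def clarke_wright_order(points):
    """Return the hub vertex and the candidate edges sorted by saving,
    largest first, mirroring Equations 3 and 4."""
    n = len(points)
    c = lambda i, j: math.dist(points[i], points[j])
    # Equation 3: the hub minimises its total distance to all other vertices.
    h = min(range(n), key=lambda v: sum(c(v, i) for i in range(n)))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if i != h and j != h]
    # Equation 4: s_ij = c_ih + c_hj - c_ij; larger saving = examined earlier.
    saving = lambda e: c(e[0], h) + c(h, e[1]) - c(e[0], e[1])
    return h, sorted(edges, key=saving, reverse=True)
```

The MF ordering would instead be `sorted(edges, key=lambda e: c(e[0], e[1]))`, i.e. cheapest edge first.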
The edge-addition checker is the second choice in the design of a constructive heuristic. For both approaches (MF and CW), it checks that the examined edge does not have extreme vertices with two connections already in the partial solution (Equations 1b and 1c), and that such edge does not create inner loops (Equation 1d). The subroutine that checks whether an edge would create an inner loop is called the tracker, and in our implementation it uses a quadratic number of operations in the worst-case scenario (Appendix A). Note that there exist more efficient data structures and algorithms for the tracker task that run in $O(n \log n)$ [25].
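One of the more efficient alternatives alluded to above is a disjoint-set (union-find) structure over the tour fragments: an edge whose two extremes already belong to the same fragment would close an inner loop. This is a sketch under that assumption, not the paper's tracker; the class name `FragmentTracker` is ours:

```python
class FragmentTracker:
    """Union-find over vertices: rejecting an edge whose endpoints already
    share a fragment prevents inner loops (Equation 1d) in near-constant
    amortized time per query."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def would_close_loop(self, i, j):
        return self.find(i) == self.find(j)

    def add_edge(self, i, j):
        self.parent[self.find(i)] = self.find(j)
```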

Statistical Study
As mentioned earlier, the generalization issues of ML approaches are likely caused by weak contemplation of the deep learning weaknesses [27; 35; 52]. In fact, it is known that dealing with imbalanced datasets, outliers and extrapolation tasks can critically affect the overall performance. An imbalanced dataset occurs when the output classes that the learning model wants to predict present a severe class distribution skew with underrepresented classes [20; 30]. An outlier arises when a data point differs significantly from the other observations; outliers can hinder pattern recognition [41]. Finally, extrapolation issues arise when the learning system is required to operate beyond the space of known training samples, since the objective is to extend the intrinsic features of the problem to similar but different tasks [2].
To overcome the aforementioned generalization problems, we suggest supporting the ML model with a designed strategy. We instruct the model to act as a decision-taker, and we decide to place it in a context that allows the ML to act confidently. We highlight the significance of designing a good environment that avoids gross errors, e.g. by omitting imbalanced class skews and outlier points. In fact, wisely choosing a context that does not change too frequently during the algorithm iterations helps the ML deal with extrapolation as well.
As suggested by Fu et al. [19], we address the global problem with subroutines, but rather than treating the ML as an initializing procedure, we value it as a decision-taker for the solution construction. The subroutine task consists of the detection of optimal solution edges from a given CL. As stated in Section 1.1, a CL identifies the most promising edges to be part of the optimal solution. For instance, Hougardy and Schroeder [24] proved that just around $30n$ of the edges need to be taken into account by an optimal solver for large instances with more than a thousand points. Consequently, we want to use an ML decision-taker to recognize the optimal edges within the CL selection. Note that for each vertex there is a CL, and for each CL there exist at most two optimal edges.
For the purpose of designing good strategies, an exploratory study was carried out to check the distribution of the optimal edges within the CLs. It was observed that, after sorting the edges in the CL from the shortest to the longest, the occurrence of an optimal edge is not uniform with respect to the positions in the sorted CL, but follows a logarithmic distribution, as shown in Figure 2. Clearly, such a pattern reveals a severe class distribution skew for some positions. In fact, some positions witness the presence of an optimal edge much more infrequently than others.
The empirical probability density function (PDF) shown in Figure 2 is computed using 1000 uniform random euclidean TSP instances, with the number of vertices varying uniformly between 100 and 1000. These instances are sampled in the unit square, and the optimal solutions are computed with the Concorde solver [1]. Figure 2 also shows, in conjunction with the position distribution, the rate of optimal edges found for each position. Note that an optimal edge occurrence arises when the optimal tour passes through the edge in the CL position taken into consideration. This study emphasises the relevance of detecting when the CL's shortest edge is optimal, since about 88.6% of the time such an edge is in the optimal tour. However, it reveals as well that detecting with ML when this edge is not optimal is a hard task, due to the over-represented situation. Instead, considering the second position (second shortest edge), a more balanced scenario can be observed, since about half of the occurrences are positive and the other half negative. A rapid growth of under-represented cases can be observed from the third position onwards, with only a few optimal edge occurrences this time. Note that up to the fifth position the under-representation is not too severe, and imbalanced learning techniques could make their contribution [32]. From the sixth position on, the optimal occurrences are too rare to recognize useful patterns, even if these cases could be interpreted as the most useful ones in terms of construction. Considering CLs built so as to contain all the vertices of the instance, the number of optimal edges in each CL is exactly two, and the sum of optimal occurrences for the first five positions in the CL amounts to about 95% of the total optimal edges available.
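The position-wise rates discussed above can be reproduced with a few lines of code. This is a minimal sketch of the counting step only (the name `position_frequencies` is ours; CLs and optimal tours would come from the instance generator and Concorde, which are not reproduced here):

```python
from collections import Counter

def position_frequencies(cl, optimal_edges):
    """For each CL position p (0-based), estimate how often the edge
    (i, cl[i][p]) belongs to the optimal tour, as in the study of Figure 2.

    cl: dict vertex -> neighbors sorted nearest-first.
    optimal_edges: iterable of undirected optimal-tour edges (i, j).
    """
    opt = {frozenset(e) for e in optimal_edges}
    hits, totals = Counter(), Counter()
    for i, neighbors in cl.items():
        for p, j in enumerate(neighbors):
            totals[p] += 1
            if frozenset((i, j)) in opt:
                hits[p] += 1
    return {p: hits[p] / totals[p] for p in totals}
```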
After the selection of the most promising edges to be processed with ML, a relevant choice to be made is the order of examination of these edges. In fact, as mentioned, edges selected in the earlier stages of the construction exhibit higher probabilities of being inserted than later ones, since fewer constraints need to be validated.
To explore the effectiveness of different strategies, we tested the behaviour of classic constructive solvers such as MF and CW (Figure 3). Using 54 TSPLIB instances [44] with dimension varying from 100 to 1748 vertices, we investigate the true positive rate (TPR) and the false positive rate (FPR) of these solvers. We consider each constructive solver as a predictor, each position in the CL as a sample point, and the optimal edge positions as the actual targets to be predicted; for each CL, there are two target positions and two predicted positions. Considering the distribution shown in Figure 2, the positive (P) and negative (N) cases occur with a frequency that varies depending on the position. Hence, studying the predictor performance by position is important, since each position has a different relevance in the solution construction and a different optimal frequency. A true positive occurs when the predicted edge is also an optimal edge; a false positive instead occurs when the predicted edge is not optimal. Note that avoiding false positive cases is crucial, since they block other optimal edges in the process.
To take care of this aspect as well, the positive likelihood rate (PLR) is considered in Table 1. Let $D_p$ be the dataset of all the edges available at position $p$ of each CL. If the predictor truly finds an optimal edge in observation $i$, the variable $TP_i$ equals one, otherwise it is zero; similarly for the false positive variable $FP_i$. The PLR at position $p$ is then the ratio between the true positive rate and the false positive rate computed over $D_p$.
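Under the standard definition of the positive likelihood ratio (PLR = TPR / FPR), the per-position metric can be computed as follows (a sketch with our own function name; `p` and `n` are the counts of positive and negative observations in $D_p$):

```python
def positive_likelihood_rate(tp, fp, p, n):
    """PLR = TPR / FPR, with TPR = sum(TP_i) / P and FPR = sum(FP_i) / N.

    tp, fp: per-observation 0/1 indicator lists.
    p, n: total number of positive and negative observations.
    """
    tpr = sum(tp) / p
    fpr = sum(fp) / n
    return tpr / fpr if fpr > 0 else float("inf")
```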
In Table 1, MF exhibits a higher TPR for shorter edges, as expected, while CW performs better with longer edges. Less obvious, however, is MF's higher FPR in the first position, which brings attention to the relevance of decreasing the FPR for the most frequent first position. Note that CW's PLR is higher than MF's for the first position, which can be read as the main reason why CW produces better TSP solutions than MF. Hence one of the main motivations for using ML for the TSP is the reduction of the FPR of constructive, or more general, heuristics.

The general idea
In light of the statistical study presented in Section 2.2, we propose a constructive heuristic called ML-Constructive (ML-C). The heuristic follows the edge addition process (see MF and CW in Section 2.1) extended by an auxiliary step that asks the ML model to approve any attaching edge during a first phase. The goal is to avoid as much as possible the addition of bad edges to the solution, while allowing the addition of promising edges which are considered auspicious by the ML model.

Table 2: Comparison of several quality metrics for different choices of the $L_P$ list.

Our focus is not on the development of highly efficient ML architectures, but rather on the successful interaction between machine learning and optimization techniques. We conceive the machine learning model as a decision-taker, and the optimization heuristics as the texture of the solution-building story. The result is a new hybrid constructive heuristic which succeeds in scaling up to big problems while keeping the improvements achieved through machine learning.
To avoid the well-known ML flaws discussed in the previous section, the ML decision-taker is exploited only in situations where the data do not suggest underrepresented cases. Since about 95% of the optimal edges are connections with one of the five closest vertices of a candidate list, only such a subset of edges is initially considered to test the ML system. It is common practice to avoid employing ML models for the prediction of rare events; instead, it is suggested to apply them preferably in cases where a certain degree of pattern recognition can be confidently detected. Taking this into account, our solution is designed to construct TSP tours in two phases. The first employs the ML to construct partial solutions where it can predict with confidence (Figure 4). The second uses the CW heuristic to connect the remaining free vertices and complete the TSP tour.
During the first phase, the edges most likely to be found in the optimal tour are initially collected in the list of promising edges $L_P$. As mentioned, the edges connecting the five closest vertices of each CL are assumed to be promising for testing. Note that, as highlighted in Figure 2, from the third shortest edge (position) onwards it is rare to find optimal edges, since they are already underrepresented. Therefore, to appropriately choose the edges to be placed in the promising list, several experiments were carried out; results are shown in Table 2. To build the promising list $L_P$ for the experiments, the strategy was to include the edges of the first $m$ vertices of each CL, with $m$ ranging from 1 to 5. The ML decision-taker was in charge of predicting whether the edges under consideration in $L_P$ (given $m$) were in the optimal solution or not. It adopted the same ResNet [21] architecture and procedure employed for the ML-Constructive, but each ML model was trained on different data in order to be consistent with the tested value of $m$. More details on the training data are given in Section 2.5.
Several metrics were compared in the results obtained for the different values of $m$: the True Positive Rate (TPR), the False Positive Rate (FPR), the Accuracy (Acc) and the Positive Likelihood Rate (PLR) [10]. Please note that the decision-taker's objectives are to keep the FPR small while obtaining good results in terms of TPR. In fact, a small FPR ensures that during the second phase the ML-Constructive has a higher probability of detecting optimal edges, while a high TPR reduces the search space for the second phase (Appendix B). In terms of accuracy, the best choice seems to be to include only the shortest edge of each CL ($m = 1$), but checking the TPR and FPR in Table 2 makes it obvious that the ML model almost always predicts an insertion in this case. This behaviour is undesirable, as it leads to a high FPR and hence to worse solutions during the second phase. If the difference between TPR and FPR is taken into account, however, the best arrangement is to put the first two shortest edges into the list ($m = 2$). Although other arrangements might prove effective as well, the selection by means of positions in the CLs, and in particular of the first two shortest edges in each CL, is shown to be efficient by the results. Recall that too many edges in $L_P$ can confuse the decision-taker, since outlier cases and classes with severe distribution skew can appear.
After the most promising edges have been identified and included in $L_P$, the list is sorted according to a heuristic that seeks to anticipate the processing of good edges. An edge belonging to the optimal tour and straightforwardly detected by the ML model is regarded as good. It is crucial to find an effective sorting heuristic for the promising list, since its order affects the learning process and the ML-Constructive algorithm as well. For simplicity, in this work the list is sorted by the edge's position in the CL and by cost, but other approaches could be propitious, perhaps using ML. Note that the earlier examination of the most promising edges increases the probability of finding good tours employing the multiple-fragment paradigm (Appendix B).
At this point, the edges of the sorted promising list are drawn in images and fed to the ML decision-taker one at a time. If the represented edge meets the TSP constraints, the ML system is challenged to detect whether the edge is in the optimal solution. If the detection is made with a given level of confidence, the heuristic validates the inclusion of the edge in the solution. Assuming that local information provides enough details to detect common patterns from previous solutions, the images represent just the small subset of vertices given by the candidate lists of the two extremes of the edge being processed. The partial solution visible in such local context, available up to the insertions made while moving through the promising list, is represented as well.
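The rasterisation of the local context can be sketched as follows. This is only an assumption-laden illustration, not the paper's image encoding: the function name `local_context_image`, the single channel, the grid size and the two intensity levels are all our own choices, and the partial-solution connections are omitted for brevity:

```python
import numpy as np

def local_context_image(points, cl, edge, size=48):
    """Rasterise the local context of a candidate edge into a small grid.

    Marks the vertices of the CLs of both edge extremes (value 128) and the
    two extremes themselves (value 255), normalised to the bounding box of
    the local vertex subset.
    """
    i, j = edge
    local = {i, j} | set(cl[i]) | set(cl[j])
    pts = np.array([points[v] for v in local], dtype=float)
    lo, hi = pts.min(0), pts.max(0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    img = np.zeros((size, size), dtype=np.uint8)
    for v in local:
        x, y = (np.array(points[v]) - lo) / span
        r, c = int(y * (size - 1)), int(x * (size - 1))
        img[r, c] = 255 if v in (i, j) else 128
    return img
```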
Once all the edges of the promising list have been processed, the second phase of the algorithm is in charge of completing the tour. Initially, it detects the remaining free vertices to connect (Figure 4), then it connects such vertices employing a constructive heuristic based on fragments. Several strategies are possible, but to keep the construction straightforward we choose to conclude the tour with the CW heuristic. Note that CW usually captures the optimal long edges better than MF, as highlighted in Figure 3 and Table 1. Therefore CW represents a promising candidate solver for connecting the remaining free fragment extremes of the partial solution into the final tour.

The ML-Constructive Algorithm
The ML-Constructive starts as a modified version of MF, then concludes the tour exploiting the CW heuristic. The ML model consequently behaves as a glue, since it determines the partial solution available at the switch between the two solvers.
The list of promising edges $L_P$ and the confidence level of the ML decision-taker are critical specifications to set before the heuristic runs. The reasons behind our choices for building the promising list were widely discussed in Section 2.3. The confidence level, instead, is used to handle the exploitation-versus-exploration trade-off. It consists of a simple threshold applied to the predictions made by the ML system: if the predicted probability that validates the insertion is greater than the threshold, then the insertion is applied. A value of 0.99 has been verified to provide good results on the tested instances, since lower values increase the occurrence of false positive cases, thus leading to the inclusion of edges that are not optimal, while higher values decrease the occurrence of true positive cases, hence increasing the difficulty of the second phase.
The overall pseudo-code for the heuristic is shown in Algorithm 1. Firstly, the candidate lists for each vertex in the instance are computed (line 2). We noticed that considering just the thirty closest vertices for each CL was a good option. As mentioned, just the first two connections are considered for $L_P$, while the other vertices are used to create the local context in the image. The CL construction takes a linear number of operations for each vertex, so the overall time complexity for constructing the lists is $O(n^2)$: finding the nearest vertex of a given vertex takes linear time, the search for the second nearest takes the same time (after removing the previous one from the neighborhood), and so on until the thirtieth nearest vertex is found. As only the first thirty edges are searched, the operation is completed in linear time per vertex. Then, promising edges are inserted into $L_P$, while repeated edges are deleted to avoid unnecessary operations (line 3). The list is sorted according to the position in the candidate list and the cost values (line 4): all the edges that are first nearest are placed first and sorted by $c_{ij}$, then the second nearest, and so on. Since only the first two edges of each CL can be in the list, the sorting task is completed in $O(n \log n)$.
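The construction and sorting of $L_P$ described above can be sketched as follows (an illustrative Python sketch; the function name `promising_list` and the dictionary-based deduplication are our own choices, not the paper's code):

```python
import math

def promising_list(points, cl):
    """Collect the first two CL edges of every vertex, drop duplicates, and
    sort by (CL position, edge cost), mirroring lines 3-4 of Algorithm 1.

    cl: dict vertex -> neighbors sorted nearest-first.
    Returns a list of undirected edges as sorted (i, j) tuples.
    """
    c = lambda i, j: math.dist(points[i], points[j])
    entries = {}
    for i, neighbors in cl.items():
        for pos, j in enumerate(neighbors[:2]):   # only the two closest
            key = frozenset((i, j))
            # keep the smallest position if the same edge repeats
            if key not in entries or pos < entries[key][0]:
                entries[key] = (pos, c(i, j), tuple(sorted(key)))
    return [e for _, _, e in sorted(entries.values())]
```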
The first phase of ML-Constructive then begins. An empty solution X = ∅ is initialized (line 5), and following the order in L_P a variable l is updated with the edge considered for addition (line 6). First, l is checked to ensure that the edge complies with the TSP constraints (Equations 1b to 1d). Then the ML decision-taker is queried to confirm the addition of the edge l: if the predicted probability is higher than the confidence level, the edge is added to the partial solution (lines 10 and 13). To evaluate the number of operations this phase consumes, we split the task according to the various sub-routines executed at each new addition to the solution. The "if" statements (lines 8, 12, 22 and 25) check that constraints 1b and 1c are complied with: they verify that both extremes of the attaching edge l have at most one connection in the current solution. This operation is computed with the help of hash maps, and it takes constant time for each considered edge l. The tracker verification (lines 9 and 23) ensures that l will not create an inner-loop (Equation 1d). This sub-routine is applied only after checking that both extremes of l have exactly one connection each in the current partial solution. It takes O(n^2) operations overall up to the final tour (proof in Appendix A).
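The constant-time degree check of lines 8/12/22/25 is straightforward with a hash map. A minimal sketch (names are ours, not from Algorithm 1):

```python
def degree_ok(degree, i, j):
    """Constant-time check: both extremes of the candidate edge must have
    at most one connection in the partial solution (constraints 1b-1c)."""
    return degree.get(i, 0) < 2 and degree.get(j, 0) < 2

def add_edge(degree, i, j):
    """Record the insertion of edge (i, j) by updating vertex degrees."""
    degree[i] = degree.get(i, 0) + 1
    degree[j] = degree.get(j, 0) + 1
```

Each check and update touches only two dictionary entries, hence O(1) per considered edge.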
Once all the TSP constraints have been met, the edge l is processed by the ML decision-taker (lines 10 and 13). Although time-consuming, this sub-routine is completed in constant time for each l. First, the image depicting the edge l and its local information is created, then it is given as input to the neural network. To create the image, the vertices of the candidate lists and the existing connections in the current partial solution must be retrieved. Hash maps are used for both tasks, and since the image can include at most sixty vertices, this operation takes a constant amount of time for each edge l in L_P. The size of the neural network does not vary with the number of vertices in the problem either, but remains constant for each edge in L_P.
To complete the tour, the second phase starts by identifying the hub vertex (line 14). It considers all the vertices in the problem (free and not), following the rule explained earlier. Therefore, the worst-case complexity of ML-Constructive is: O(n + n log n + n^2 + n^2 log n^2) = O(n^2 log n^2). Note that to complete the tour we proposed the use of CW, but hybrid approaches that also employ some sort of exhaustive search could be very promising as well, although more time-consuming.

The ML decision-taker
The ML decision-taker validates the insertions made by ML-Constructive during the first phase. Its purpose is to exploit ML pattern recognition to increase the occurrence of good edge insertions while reducing that of bad ones.
Two data-sets were created specifically to fit the ML decision-taker: the first was used to train the ML system, while the second was used to evaluate and choose the best model. The training data-set is composed of 38400 instances uniformly sampled in the unit-sided square, with the number of vertices n per instance ranging uniformly from 100 to 300. The evaluation data-set is composed of 1000 instances uniformly sampled from the unit-sided square, with the number of vertices varying from 500 to 1000. This data-set was used to create the results in Table 2. The optimal solutions were found (in both cases) using the Concorde solver [1]. The creation of the training instances and their optimal tours took about 12 hours on a single CPU thread, while a total of 24 CPU hours was needed for the evaluation data-set (since it includes instances of greater size). Note that, in comparison with other approaches that use Reinforcement Learning, good results were achieved here even though we used far fewer training instances [37].
To prepare the ML input, the promising list L_P was created for each instance in the data-sets. With m = 2 (the best scenario), the two shortest edges of each candidate list were inserted into the list. To avoid repetitions, edges that occur several times were inserted just once, at the shortest available position. For example, if edge e_ij is the first shortest edge in CL[i] and the second shortest edge in CL[j], it occurs in L_P only once, in first position for vertex i. After all promising edges have been inserted into L_P, the list is sorted. Note that the list can contain at most m × n items.
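The de-duplication and sorting of L_P can be sketched as follows. This is an illustrative version (names are ours): `cls_` maps each vertex to its candidate list and `cost` maps a normalized edge tuple to its length.

```python
def build_promising_list(cls_, cost, m=2):
    """Collect the first m edges of every candidate list, keeping a
    repeated edge only once at its shortest available CL position, then
    sort by (position in CL, cost)."""
    seen = {}  # normalized edge -> best (smallest) CL position
    for i, cl in enumerate(cls_):
        for pos, j in enumerate(cl[:m]):
            e = (min(i, j), max(i, j))
            if e not in seen or pos < seen[e]:
                seen[e] = pos
    return sorted(seen, key=lambda e: (seen[e], cost[e]))
```

Since each vertex contributes at most m edges, the resulting list has at most m × n entries, matching the bound in the text.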
An image with dimensions of 96 × 96 × 3 pixels was created for each edge belonging to L_P. Three channels (red, green and blue) were set up to provide the information fed into the neural network; each channel depicts some information inside a square with sides of 96 pixels (Figure 5). The first channel (red) shows each vertex in the local view, the second (green) shows the edge l considered for insertion along with its extremes, and the third (blue) shows all the fragments currently in the partial solution that are visible from the local view drawn in the first channel.
As mentioned, the local view is formed by merging the vertices belonging to the candidate lists of each extreme of the inserting edge. These vertices are collected and their positions normalized to fill the image. The normalization places the midpoint of the inserting edge l at the image center, and all the vertices visible in the local view lie inside a virtual circle inscribed in the square image, so that the maximum distance between the image center and the vertices in the local view is less than the radius of that circle. The purpose of the normalization is to keep consistency among the images created for the various instances.
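The normalization can be sketched as a coordinate transform. This is an illustrative implementation under our own naming; the exact margin used by the authors is not specified, so one pixel is assumed here.

```python
import math

def normalize_local_view(vertices, extreme_a, extreme_b, size=96):
    """Map local-view coordinates to pixel space: the midpoint of the
    inserting edge becomes the image centre, and all vertices are scaled
    to fit inside the circle inscribed in the size x size square."""
    cx = (extreme_a[0] + extreme_b[0]) / 2
    cy = (extreme_a[1] + extreme_b[1]) / 2
    radius = max(math.hypot(x - cx, y - cy) for x, y in vertices) or 1.0
    half = size / 2
    scale = (half - 1) / radius  # 1-pixel margin keeps vertices inside
    return [(half + (x - cx) * scale, half + (y - cy) * scale)
            for x, y in vertices]
```

Because the transform is relative to the edge midpoint and the local spread, images from instances of very different scales become directly comparable.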
The third channel provides a temporal indication to aid the ML system in its decision. By representing the edges inserted during the previous stages of ML-Constructive, this information gives a helpful hint about which edges the final solution needs most. Two different policies were employed in its construction: the optimal policy (offline) and the ML adaptive policy (online). The first uses the optimal tour and the L_P order to create this channel (training only), while the second uses the previous ML validations (training and test).
A simple ResNet [21] with 10 layers was adopted to decide on the inclusion of the edges in the solution. The choice of the model is motivated by how readily image-processing ML models lend themselves to interpreting the learning process. The architecture is shown in Figure 6. There are four residual connections, each containing two convolutional layers. The first layer in each residual connection has a stride equal to two (/2). As usual for ResNet, the kernels are set to 3×3, and the number of features is multiplied by 2 at each residual connection, to balance the downscaling of the images. The output is preceded by a fully connected layer with 9 neurons (fc) and by an average pool operation (avg pool), which shrinks the information into a single vector with 1024 features. For additional details we refer to He et al. [21]. The model is very compact, with the aim of avoiding computational burden and other complexities.
The output of the network is represented by two neurons: one predicts the probability that the considered edge l is in the optimal solution, while the other predicts the probability that it is not. The two probabilities sum to one. The choice of two output neurons instead of one allows the use of the Cross Entropy loss function, which is recommended for classification problems; in fact, this loss especially penalizes predictions that are confident and wrong. During training the network is told whether the inserting edge l is optimal or not, while at test time the ResNet predicts the probability that the edge is optimal.
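The two-neuron head and its loss can be written out explicitly. This is a plain-Python sketch of the softmax/cross-entropy mechanics, not the PyTorch code used by the authors.

```python
import math

def softmax2(z_opt, z_not):
    """Two-neuron output: probabilities that edge l is / is not optimal,
    summing to one (numerically stabilized softmax)."""
    m = max(z_opt, z_not)
    e1, e2 = math.exp(z_opt - m), math.exp(z_not - m)
    return e1 / (e1 + e2), e2 / (e1 + e2)

def cross_entropy(z_opt, z_not, is_optimal):
    """Cross-entropy on the softmax output; confident wrong predictions
    are penalized most heavily."""
    p_opt, p_not = softmax2(z_opt, z_not)
    return -math.log(p_opt if is_optimal else p_not)
```

A confidently wrong logit pair (e.g. strongly favouring "not optimal" for an optimal edge) yields a much larger loss than a confidently right one, which is exactly the behaviour exploited during training.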
To train the network, two loss functions were used jointly: the Cross Entropy loss (Equation 7) [14] and a reinforcement loss (Equation 8) developed specifically for the task at hand. Initially, the first loss is employed alone until convergence (about 1000 back-propagation iterations); then the second loss is also engaged. At each back-propagation iteration the first loss updates the network first, then (after 1000 iterations) the second loss updates the network as well. The gradient of the second loss function is approximated by the REINFORCE algorithm [50], with a 100-period simple moving average of past rewards used as baseline.
In Equations 7 and 8, x_l is the image of the inserting edge l, p is the function identifying whether l is optimal, and q_θ is the ResNet approximation to it. Moreover, the T function returns one if the prediction made by q_θ is true (TP or TN) and zero otherwise, while the F function returns one if the prediction is false. Note that the second loss takes an expected value with respect to the q_θ measure, since its third channel is updated using the ML adaptive policy (online), while the first loss uses the optimal policy (offline). The introduction of a second loss has the purpose of increasing the occurrence of true positives while decreasing the false positive cases. Note that it employs the same policy that will be used during the ML-Constructive test run.
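The REINFORCE-style surrogate with a moving-average baseline can be sketched as follows. This is our own minimal interpretation of the mechanism (reward +1 for a correct validation, 0 otherwise), not the authors' Equation 8 verbatim.

```python
import math
from collections import deque

class ReinforceLoss:
    """Sketch of the reinforcement loss estimator: reward is 1 for a
    correct validation (TP/TN) and 0 otherwise, baselined by a
    100-period simple moving average of past rewards."""
    def __init__(self, periods=100):
        self.rewards = deque(maxlen=periods)

    def step(self, prob_taken, correct):
        reward = 1.0 if correct else 0.0
        baseline = (sum(self.rewards) / len(self.rewards)
                    if self.rewards else 0.0)
        self.rewards.append(reward)
        # Minimizing this surrogate raises the log-probability of the
        # taken decision when the advantage (reward - baseline) > 0.
        return -(reward - baseline) * math.log(prob_taken)
```

When recent decisions have mostly been correct, the baseline is high, so a wrong confident decision produces a negative advantage that pushes its probability down.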

Experiments & Results
To test the efficiency of the proposed heuristic, experiments were carried out on 54 standard instances taken from the TSPLIB collection [44], whose vertex set cardinality varies from 100 to 1748. Non-Euclidean instances, such as those involving geographical distances (GEO) or special distance functions (ATT), were addressed as well. We recall that the ResNet model was trained on small (100 to 300 vertices) uniform random Euclidean instances, evaluated on medium-large (500 to 1000 vertices) uniform random Euclidean instances, and tested on TSPLIB instances. We emphasize that TSPLIB instances are generally not uniformly distributed.
All the experiments 1 were run using Python 3.8.5 [47] for the algorithmic component and PyTorch 1.7.1 [43] to manage the neural networks. The following hardware was utilized:
• a single GPU NVIDIA GeForce GTX 1050 Max-Q;
• a single CPU Intel(R) Core(TM) i7-8750H @ 2.20GHz.
During training all the hardware was exploited, while only the CPU was used for testing.
The experiments presented in Table 3 compare ML-Constructive (ML-C) results with other well-known fragment-based strategies, such as the Multi-Fragment (MF) and the Clarke-Wright (CW). The first is equivalent to including all the existing edges in the list of promising edges L_P and substituting the ML decision-taker with a rule that always inserts the considered edge. The second is equivalent to keeping the list L_P empty; this means that the first phase of ML-Constructive does not create any fragment, and the construction is done completely in the second phase.
Figure 7: a) The partial solution available at the end of the first phase for ML-SC. In light blue the remaining free vertices, and in dark blue the inserted edges. b) The complete tour found at the end of the ML-SC run. In red the edges added during the second phase. The considered instance is KroA100 from the TSPLIB collection.
To explore, evaluate and interpret the behaviour of our two-phase algorithm, other strategies were investigated as well.
In fact, the ML decision-taker can act in very different ways, and a comparison with expert-made heuristic rules is significant. Deterministic and stochastic heuristic rules were created to explore the variation of the optimality gap. The aim was to show that the learnt ML model produces a higher gain than the heuristic rules, as corroborated by Table 3. The heuristic rules substitute the ML decision-taker component within ML-Constructive (lines 10 and 13 of Algorithm 1); no changes to the selection and sorting strategies were applied to create the lists L_P and L_D. The First (F) rule deterministically adds the edge l if one of its extremes is the first closest vertex in the CL of the other extreme. The Second (S) rule is similar, but it adds l only if one extreme is the second closest in the CL of the other. The policy that always validates (Y) the insertion of the edges in L_P was examined as well: together with CW, it represents the extreme cases where the ML decision-taker always validates, or never validates, the edges in L_P. A stochastic strategy called empirical (E) was also tested, which adds the edges in L_P according to the distribution shown in Figure 2: it inserts the edge with probability 0.886 if one of the extremes is the first in the other's CL, or with probability 0.512 if it is the second. Twenty runs of the empirical strategy were made; (AE) shows the average results, while (BE) shows the best over all runs. Finally, to check the behaviour of ML-Constructive in case the ML system validates with 100% accuracy (the ML decision-taker is a perfect oracle), the Super Confident (ML-SC) policy was examined. This policy always answers correctly for all the edges in the promising list L_P, and is achieved by exploiting the known optimal tours.
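The empirical (E) rule reduces to a one-line stochastic decision; a minimal sketch follows, with names of our own choosing.

```python
import random

def empirical_rule(position, rng=random):
    """Empirical (E) policy: accept the edge with probability 0.886 when
    one extreme is the first in the other's CL, and with probability
    0.512 when it is the second (Figure 2 frequencies)."""
    p = 0.886 if position == 1 else 0.512
    return rng.random() < p
```

Averaging over many draws recovers the Figure 2 frequencies, which is why twenty runs (AE/BE) suffice to characterize the policy.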
To capture the potential of a highly accurate network for the first phase, the partial solution created by the ML-SC policy in the first phase and the solution constructed in the second phase are shown in Figure 7. Note that some crossing edges are created in the second phase: even though the constructed solution is very close to the optimal, the second phase sometimes adds bad edges. This artificial policy was added to demonstrate how much leverage can still be gained from the machine learning point of view.
The results obtained by the optimal policy (ML-SC) highlight two interesting aspects. The first, as mentioned before, shows the possible leverage from the ML perspective (first phase). The second gives an idea of how much improvement is possible from the heuristic point of view (second phase). To emphasize these aspects, the gap column in Table 3 highlights the difference, in terms of percentage error, between the ML-SC solution and the best solution found by the other heuristics (in bold). The average, the standard deviation (std) and the number of times each heuristic is best are shown as well for each strategy.
Among the policies shown in Table 3, the First (F) and the ML-based (ML-C) policies exhibit comparable average gaps. To verify that the enhancement introduced by the ML system is on average statistically significant, a statistical test was conducted. A T-test on the percentage errors obtained for the 54 instances in Table 3 yields a p-value of 0.03 against the null hypothesis that both policies have the same average optimality gap. The result shows that the enhancement is relevant, and that these systems have a promising role in improving the quality of TSP solvers.
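A paired t-statistic over per-instance gaps can be computed with the standard library; the gap values below are made up for illustration (the paper's per-instance errors are in Table 3), and the p-value would follow from the t-distribution with n-1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`).

```python
import math
import statistics

def paired_t_statistic(gaps_a, gaps_b):
    """Paired t-statistic over per-instance optimality gaps (%) of two
    policies evaluated on the same instances."""
    diffs = [a - b for a, b in zip(gaps_a, gaps_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
```

A pairing over the same 54 instances is what makes the comparison sensitive to a consistent per-instance improvement rather than to differences in instance difficulty.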
To check the behaviour of the heuristics with respect to time, Table 4 shows the CPU time for each policy and heuristic in Table 3. Note that each query to the ResNet requires producing an input image, which increases the computational burden. Future work could therefore aim to speed up the ML component, even though the computation times remain short and acceptable for many online optimization scenarios.
Finally, to make a comparison with the metrics presented for MF and CW in Section 2.2, Table 5 shows the final tour achievements for the F, ML-C and ML-SC policies across the various positions in the CL. Note that although the TPR of ML-SC is 100%, its FPR is not zero, since during the second phase some edges in first and second position can be inserted. Also note that the accuracy of ML-C is consistently better than that of F, while its FPR for the first position is lower, resulting in a higher TPR for the second position.

Discussion
A new strategy to design constructive heuristics has been presented. It gives a central role to the integration of statistical, mathematical and heuristic exploration. We introduced a new way of thinking about the generalization of machine learning approaches for the TSP, leading to an efficient integration between learning useful information and exploiting through classic approaches. The objective is to learn useful skills from past experience to enhance the heuristic search.
Our ML-Constructive is the first machine learning approach able to scale while showing improvements over a classic efficient constructive heuristic. Furthermore, the introduced approach gives good guidelines on how machine learning can behave in the event of extreme negative or positive cases. Results are very promising, and suggest that placing more emphasis on the generalization of hybrid designs pays off.
The relevance of an exploratory stage with statistical studies of the problem at hand has been emphasised. The target of these studies is to select an effective sub-problem which allows the avoidance of many known machine learning flaws.
More work is needed to improve the accuracy and the extrapolation capability of the machine learning classifier. Future improvements could go in the direction of reducing the (constant) time required to prepare the input for the classifier, and of integrating the approach with meta-heuristics as well.

A Complexity of the inner-loop constraint tracker
The purpose of this Appendix is to compute the complexity of the inner-loop constraint tracker used in the ML-Constructive heuristic, as stated in Section 2.4 and in Algorithm 1. For ease of exposition we restrict our analysis to the symmetric TSP, but similar results can be obtained for the asymmetric case as well.
First, we observe that the constraint tracker procedure is applied only to edges whose extremes both have exactly one connection already in the partial solution, since the tracker routine follows the constraints expressed by lines 8 and 22 of Algorithm 1. Therefore, edges connecting internal points of the fragments cannot occur at this point, as shown in Figure 8, while edges creating an inner-loop or joining two fragments can occur (Figure 9). Note that the goal of the tracker is to distinguish the inner-loop connections from the others. The growing connections and new fragment connections shown in Figure 9 can be detected in constant time, since it is enough to check that one extreme of the inserting edge has zero connections in the partial solution (lines 12 and 25). Also note that these two events cannot occur as input to the tracker procedure, since they do not satisfy the constraint expressed by lines 8 and 22.
Second, note that in this Appendix we compute the worst-case complexity of the whole procedure (from empty to complete solution). We therefore consider not a single call of the tracker, but the global computation over the complete tracking process, bearing in mind that the maximum number of positive additions for a constructive heuristic that grows fragments is equal to n (the length of the tour).
An addition is positive if the edge being attached to the partial solution complies with the TSP constraints in (1b-e) and the ML decision-taker agrees to add the considered edge to the solution. We refer to the epoch between two positive additions as t, i.e. no edge is in solution at t = 0, while exactly eight edges are in solution at t = 8. Note that the epochs to be checked by the tracking routine for the symmetric TSP range from t = 2 to t = n − 2.
As mentioned, the computationally expensive events that the tracker needs to check are the "inner-loop connection" and the "joining fragments connection". The inner-loop connection drawn in yellow (Figure 9) occurs when the extremes of a fragment are connected together by the attaching edge l. If we assume that at epoch t there are at most s ≤ t fragments, then there exist at most s attaching edges at this epoch that can create an inner loop (Figure 10), and the total number of operations needed to check these s inner-loops is at most t. In fact, the tracker checks by completely spanning one of the fragments connected to the attaching edge: if the other extreme of the fragment coincides with the other extreme of the attaching edge, there is an "inner-loop connection", otherwise it is a "joining fragments connection". Note that once an edge has been rejected, the fragment associated with it is set free for the current epoch, and the tracker no longer needs to check its extremes. Since we have at most t operations per epoch, and at most n epochs, the global computation is O(n^2).
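The paper's tracker spans a fragment at each call; a common alternative (shown here as a sketch of our own, not the authors' implementation) keeps a map from each fragment endpoint to its opposite endpoint, so each inner-loop check becomes O(1). Stale entries for interior vertices are harmless because the degree check filters such vertices out beforehand.

```python
def creates_inner_loop(other_end, i, j):
    """True when edge (i, j) would close a sub-tour, i.e. i and j are the
    two endpoints of the same fragment. Isolated vertices map to
    themselves by default."""
    return other_end.get(i, i) == j and other_end.get(j, j) == i

def join(other_end, i, j):
    """Merge the fragments ending at i and j by relinking their far ends.
    Entries for now-interior vertices become stale, but the degree check
    prevents them from ever being queried as endpoints again."""
    a, b = other_end.get(i, i), other_end.get(j, j)
    other_end[a], other_end[b] = b, a
```

Each insertion updates two entries, so the global tracking cost under this scheme is O(n), comfortably within the O(n^2) bound proved here.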
Having found the upper bound on the complexity of detecting the "inner-loop connections", it remains to estimate the number of operations required for the "joining fragments connection" occurrences. Usually, after encountering a "joining fragments connection" event, the insertion of the considered edge l takes place. But since in ML-Constructive the ML decision-taker may reject the attaching edge (line 10), the tracker may be called many times during the same epoch. This could be a problem if the promising list L_P were not limited to at most m × n edges (Section 2.3). Assuming a worst case of O(n) for each edge processed in the first phase, we remain within O(n^2) operations for the global tracking computation.
Figure 8: Events that cannot occur as an input to the tracking procedure. Figure 9: Events that can occur and are prevented by the tracking procedure (first two), and events that can be detected in constant time (last two).
B The earlier insertion of the most promising edges could increase the probability of finding the optimal tour.
The purpose of this Appendix is to present some advantages that a procedure which inserts promising edges into the solution first has over other approaches. If the growing-fragments heuristic is considered as a stochastic process, we can estimate the probability that the optimal tour occurs when following the procedure. For each edge l considered for inclusion in the partial solution there are two possible events: included or not. If the random variable E_l refers to the event that the edge l is included in the solution (¬E_l otherwise), then the probability that such an event occurs is:

P(E_l) = 1 − P(A_l) − P(B_l) and P(¬E_l) = P(A_l) + P(B_l)   (9)

where A_l refers to the internal point connection events (Figure 8), while B_l stands for the inner-loop connection events (Figure 9). Recall that an internal point connection occurs when the constraint ensuring that no vertex is connected to more than two other vertices is not satisfied, while an inner-loop occurs when sub-solutions are created instead of a single global loop.
Suppose a list L is used to store all the existing edges of the TSP instance we wish to solve, and we randomly shuffle this list to create a random examination order. The probabilities of the events A_l and B_l then depend on the position p in the list and on the number of edges already inserted in the partial solution, so combinatorics can help us calculate or approximate these probabilities. Recalling the epoch concept described in Appendix A, we can state that at t = 0 the probability of E_l is one, while at t = n it is null:

P(E_l | t = 0) = 1,  P(E_l | t = n) = 0   (10)

since at t = 0 no edge has been placed in the solution, so no A_l or B_l event can occur, while at t = n the solution is complete. We then want to prove that the probability of E_l decreases monotonically as more edges are fed into the solution and as we progress through the list L. If this conjecture is true, we can conclude that edges inspected earlier in the list are more likely to be included than those seen later, emphasising the need to place first in the list L the edges we consider most likely to belong to the optimal solution.
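The conjecture can be probed empirically with a small Monte Carlo simulation. This is an illustrative experiment of our own design (not from the paper): shuffle all edges of a small complete graph, grow fragments greedily, and record how often the edge at each list position ends up in the solution.

```python
import itertools
import random

def inclusion_frequencies(n=8, trials=2000, seed=42):
    """Estimate P(E_l) as a function of the position in a shuffled edge
    list, using greedy fragment growth with degree and inner-loop checks.
    The greedy pass may end with a Hamiltonian path rather than a tour;
    the positional trend is what we measure."""
    rng = random.Random(seed)
    edges = list(itertools.combinations(range(n), 2))
    counts = [0] * len(edges)
    for _ in range(trials):
        rng.shuffle(edges)
        degree, other_end, added = {}, {}, 0
        for pos, (i, j) in enumerate(edges):
            if degree.get(i, 0) > 1 or degree.get(j, 0) > 1:
                continue  # internal point connection (event A_l)
            if other_end.get(i, i) == j and added < n - 1:
                continue  # inner loop before the final edge (event B_l)
            degree[i] = degree.get(i, 0) + 1
            degree[j] = degree.get(j, 0) + 1
            a, b = other_end.get(i, i), other_end.get(j, j)
            other_end[a], other_end[b] = b, a
            counts[pos] += 1
            added += 1
    return [c / trials for c in counts]
```

The first position is always included (no constraint can fail at t = 0), while later positions are included progressively less often, in line with Equations 9 and 10.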
As a first step, we determine the probability of occurrence of A_l. To figure out this probability, we estimate the number of cases in which A_l occurs and divide by the total number of possible cases. These cases vary depending on the position p in the list, the number of edges e and the number of vertices n in the instance. Recall that A_l occurs when the edge l is preceded in the list by at least 2 other edges sharing an extreme with l. If there are d such overlapping edges, A_l occurs for d = 2 to d = n − 1; the resulting probability is an increasing function with respect to the position p, converging to 1 as p goes to e.
Meanwhile, to compute the probability of B_l, the epoch in which the event occurs must be taken into account. Bear in mind that as we proceed along the positions p in the list, this epoch is non-decreasing, since no operation removes an edge from the solution and it is only possible to add new edges to the partial solution.
Considering that the maximum total number of inner-loops for a given epoch is fixed and equal to t, the resulting probability has an upper bound that is an increasing function with respect to the position p and the epoch t.
To conclude, since the probabilities of A_l and B_l show an increasing trend (although not strictly, due to the upper bound of B_l), the probability of E_l has a decreasing trend by Equation 9. Therefore, inserting promising edges early is a good strategy for the heuristic. These results do not prove that the probability P(E_l | t) is strictly decreasing for any solving algorithm, but they suggest a general decreasing trend which the ML-Constructive heuristic should exploit during the construction of the solution.