Dual-Neighborhood Search for Solving the Minimum Dominating Tree Problem

Abstract: The minimum dominating tree (MDT) problem consists of finding a minimum-weight subgraph of an undirected graph such that every vertex not in the subgraph is adjacent to at least one vertex in it, and the subgraph is a tree, i.e., connected and without cycles. This paper presents a dual-neighborhood search (DNS) algorithm for the MDT problem that integrates several distinguishing features: two neighborhoods that collaborate to optimize the objective function, a fast neighborhood evaluation method to boost search effectiveness, and several diversification techniques that help the search escape local optimum traps and thus reach better solutions. DNS improves the previous best-known results on four public benchmark instances while providing competitive results on the remaining ones. Several ingredients of DNS are investigated to demonstrate the importance of the proposed ideas and techniques.


Introduction
The minimum dominating tree problem on a weighted undirected graph is to find a tree in the graph such that every vertex is either in the tree or adjacent to it, i.e., joined by an edge to at least one tree vertex, and the sum of the tree's edge weights is minimized [1]. The minimum dominating tree is a concept in graph theory and one of its important classes of tree structures.
A highly related problem, the minimum connected dominating set (MCDS) problem, has been extensively studied for building routing backbones in wireless sensor networks (WSNs) [2,3]. One of the goals of introducing the MCDS in WSNs is to minimize energy consumption; if two devices are too far apart, they may consume too much power to communicate [4,5]. Using a routing backbone to transmit messages greatly reduces energy consumption, which increases dramatically as the transmission distance grows [6]. However, some directly connected vertices in an MCDS may still be far away from each other, because the MCDS does not account for distance [7]. Therefore, considering each edge of the routing backbone is more in line with the goal of reducing energy consumption [8]. The minimum dominating tree (MDT) problem was first proposed by Zhang et al. [9] for generating a routing backbone that is well adapted to broadcast protocols.
Shin et al. [1] proved that the MDT problem is NP-hard and introduced an approximation framework for solving it. They also provided heuristic algorithms and mixed-integer programming (MIP) formulations for the MDT problem. Adasme et al. [10] introduced two other MIP formulations, one based on a tree formulation in the bidirectional counterpart of the input graph and the other obtained from a generalized spanning tree polyhedron. Adasme et al. [11] proposed a primal dyadic model for the minimum-cost dominating tree problem and an effective inequality to improve the linear relaxation. Álvarez-Miranda et al. [12] proposed an exact solution framework that combined a primal-dual heuristic with a branch-and-cut approach, transforming the problem into a Steiner tree problem with additional constraints. Their framework solved most instances in the literature within three hours and proved their optimality.
In recent years, efficient heuristic algorithms for the MDT problem have flourished. Sundar and Singh [13] proposed two metaheuristics, an artificial bee colony algorithm (ABC-DT) and an ant colony optimization algorithm (ACO-DT), for the MDT problem. These were the first metaheuristics for the MDT problem and performed better than previous algorithms. They also provided 54 randomly generated instances in their work, which are considered challenging instances of the MDT problem and are widely used to evaluate the performance of algorithms for it. Building on that work, Chaurasia and Singh [14] proposed an evolutionary algorithm with guided mutation (EA/G-MP) for the MDT problem. Dražic et al. [15] proposed a variable neighborhood search (VNS) algorithm for the MDT problem. Singh and Sundar [16] proposed another artificial bee colony algorithm (ABC-DTP) for the MDT problem. ABC-DTP differed from ABC-DT in the way it generated initial solutions and in the strategy for determining neighboring solutions. Their experiments showed that, for the MDT problem, ABC-DTP outperformed all problem-specific heuristics and metaheuristics available in the literature. Hu et al. [17] proposed a hybrid algorithm (GAITLS) combining a genetic algorithm and an iterated local search to solve the dominating tree problem. Experimental results on classical instances showed that the method outperformed existing algorithms. Xiong et al. [18] presented a two-level metaheuristic (TLMH) for the MDT problem, with a solution sampling phase and two local search-based procedures nested in a hierarchical structure. The results demonstrated the efficiency of the proposed algorithm in terms of solution quality compared with the existing metaheuristics.
Metaheuristics have been shown to be very effective in solving many challenging real-world problems [19]. However, for some problems, due to the complexity of the problem structure and the large search space, classical metaheuristic frameworks fail to produce the desired results [20]. Many researchers have therefore relied on composite neighborhood structures, and most such structures, when properly designed, have proven successful [21]. These methods include variable depth search (VDS), which explores a large search space through a series of successive simple neighborhood search operations. Although the basic concepts of VDS date back to the 1970s [22], researchers have maintained a sustained enthusiasm for the idea [23,24]. For a more detailed survey of VDS, we refer to Ahuja et al. [25-27]. Another idea for dealing with structurally complex problems is to use a hierarchical metaheuristic approach, where several search procedures are combined in a nested structure. Wu et al. [28] successfully implemented a two-level iterated local search for a network design problem with traffic grooming. According to their analysis, hierarchical metaheuristics must be carefully designed to balance the complexity of the algorithm and its performance; in particular, keeping the outer framework as simple as possible makes the algorithm converge faster. Pop et al. [29] proposed a two-level solution approach for the generalized minimum spanning tree problem. Carrabs et al. [30] introduced a metaheuristic with a two-level structure to solve the all-colors shortest path problem. Contreras Bolton and Parada [31] proposed an iterated local search method with a two-level solution representation for the generalized minimum spanning tree problem.
In recent years, improved tabu search algorithms and multineighborhood metaheuristics have been proposed and applied to NP-hard problems. Li et al. [32] proposed an improved tabu search algorithm with an adaptive tabu length and neighborhood structure to solve the vehicle routing problem. Tong et al. [33] established a mixed-integer nonlinear programming model and solved an unmanned aerial vehicle transportation route planning problem with a variable neighborhood tabu search algorithm. Seydanlou et al. [34] proposed a metaheuristic with a multineighborhood procedure, and experimental results proved the effectiveness of the method. Song et al. [35] proposed a new competition-guided multineighborhood local search (CMLS) algorithm for the curriculum-based course timetabling problem, and computational results showed that the proposed algorithm was highly competitive.
In this paper, we design a dual-neighborhood search metaheuristic for the MDT problem that performs the search with two neighborhood move operators and incorporates tabu search to escape local optima. The DNS algorithm is described in detail in Section 2, the experimental results of the DNS algorithm and comparisons with other algorithms are given in Section 3, and some comparative experiments on the components of the DNS algorithm are conducted in Section 4.

Tabu Search Algorithm
The DNS algorithm in this paper is based on the tabu search algorithm, so we first introduce the tabu search algorithm.

Introduction
Tabu search (TS) is a metaheuristic used for mathematical optimization. It was proposed by Fred W. Glover in 1986 [36]. Tabu search improves the performance of local search by relaxing its basic rules. It starts from an initial feasible solution, probes a series of specific search directions (moves), and chooses the move that improves the objective function value the most. To avoid getting stuck in local optima, TS uses a flexible "memory" technique that records the optimization steps already carried out and uses them to guide the next search direction. Tabu search builds on neighborhood search by keeping a tabu list that forbids recently performed operations, while aspiration criteria allow exceptionally good moves to override their tabu status.

Basic Elements
The basic components of tabu search include the following:
• Stopping criterion: a condition that, when met, stops the algorithm.
The main attributes of the tabu list include:
• Tabu objects: the changing elements that are forbidden by the tabu list.
• Tabu length: the number of steps for which an object remains forbidden.

Basic Steps
The basic steps of tabu search are as follows:
1. Start from an initial solution.
2. Generate all neighbors of the current solution and find the best admissible neighboring solution.
3. Take this neighboring solution as the new current solution; if it improves on the best solution found so far, record it.
4. Add the move leading to the new current solution to the tabu list. If the tabu list exceeds its maximum length, delete the oldest entry.
5. Repeat steps 2-4 until the stopping criterion is met.
Perturbation is often used together with tabu search. During the optimization process, if the algorithm stagnates around some local optimum, a perturbation strategy is activated. A perturbation usually makes larger-scale changes to the current solution so that the search can jump out of the current local optimum and enter a new search area.
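The steps above, together with the tabu list and aspiration criterion, can be sketched in a few lines. This is an illustrative, generic sketch; the function and move-key names are our own, not the paper's:

```python
def tabu_search(initial, neighbors, cost, max_iters=100, tenure=7):
    """Generic tabu search loop following the steps above.
    `neighbors(sol)` returns (move_key, new_solution) pairs."""
    current = best = initial
    tabu = {}  # move_key -> first iteration at which the move is allowed again
    for it in range(max_iters):
        admissible = [(m, s) for m, s in neighbors(current)
                      # aspiration: a tabu move is allowed if it beats the best
                      if tabu.get(m, 0) <= it or cost(s) < cost(best)]
        if not admissible:
            break
        move, current = min(admissible, key=lambda ms: cost(ms[1]))
        tabu[move] = it + tenure + 1   # forbid this move for `tenure` iterations
        if cost(current) < cost(best):
            best = current             # step 3: record the best solution found
    return best

# Toy usage: minimize |x - 10| over the integers, moves change x by +/- 1.
result = tabu_search(0, lambda x: [(x + 1, x + 1), (x - 1, x - 1)],
                     lambda x: abs(x - 10))   # result == 10
```

The tabu list here prevents the search from immediately revisiting a value it just left, which is what carries it monotonically toward the optimum in this toy example.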

Design Challenges
The design of the various components and of the overall flow of a tabu search has a significant impact on its efficiency. When designing a tabu search, the following challenges need to be considered:

• How to define the neighborhood: the neighborhood function determines the space of solutions the algorithm can explore. If the neighborhood is too small, the algorithm may fall into a local optimum; if it is too large, the computational cost may become too high.
• How to select tabu objects: the number of tabu objects needs to be sufficient to prevent the algorithm from falling into a local optimum. However, too many tabu objects may take up too much memory and slow down lookup operations.
• How to determine the tabu length: too large a tabu length slows down the search and makes it difficult to converge; too small a length makes the search fall into local optima easily.
• How to set the stopping criterion: stopping too early may prevent finding an optimal solution; running too long lowers efficiency.
• How to set the perturbation intensity: although a perturbation strategy helps the global optimization ability of the search, excessive perturbation may make the search process chaotic and unable to converge to a global optimum.

Main Framework
The basic idea of the proposed DNS algorithm is to tackle the MDT problem by optimizing the weight of a candidate dominating tree with a neighborhood search-based metaheuristic using two neighborhood move operators. The search space of DNS consists of the minimum spanning trees of all possible dominating sets of the instance graph. The DNS algorithm optimizes the following objective function:

f(T) = α · f_1(X) + f_2(E'),

where T = (X, E') stands for the current configuration, i.e., the candidate dominating tree, and X and E' denote the vertex and edge sets of T, respectively. Function f_1(X) counts the vertices not dominated by T. Function f_2(E') is the weight of the minimum spanning tree of T. The constant parameter α balances the importance of f_1 and f_2. T is a feasible solution of the minimum dominating tree problem if and only if f_1(X) = 0. The algorithm comprises several key steps. First, an initial solution is generated, followed by a neighborhood evaluation. Then, the best neighborhood move is selected and executed iteratively. During the iterations, the best overall configuration is recorded. The framework of the algorithm is given in pseudocode as Algorithm 1.
In Algorithm 1, T_i represents the initial configuration, T_b the best overall solution found, and T_c the current configuration. In each iteration, the subprocedure DO_NEIGHBOREVALUATE evaluates all the neighborhood moves of the current configuration. The following two subprocedures select and execute the best move. The termination condition can be a time or iteration limit. The time complexity of the DNS algorithm is O(V^2 + VE + E log E), and its space complexity is O(V^2).
Algorithm 1 Algorithm for the MDT problem.
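The flow described above can be summarized in the paper's pseudocode style as follows (a reconstruction from the prose; the subprocedures are detailed in the following sections):

```
Require: The instance graph G(V, E)
Ensure: The best configuration found T_b
 1: procedure DNS(G)
 2:     T_i ← GENERATE_INITIALSOLUTION(G)
 3:     T_c ← T_i; T_b ← T_i
 4:     while the termination condition is not met do
 5:         DO_NEIGHBOREVALUATE(T_c)            ▷ fill A_1 and A_2
 6:         M ← SELECT_BESTMOVE(A_1, A_2)
 7:         T_c ← EXECUTE_BESTMOVE(X, M)
 8:         if T_c is feasible and w(T_c) < w(T_b) then
 9:             T_b ← T_c
10:         end if
11:     end while
12:     return T_b
13: end procedure
```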

Initial Solution Generation
The proposed DNS algorithm uses a feasible dominating tree as the initial configuration. The subprocedure GENERATE_INITIALSOLUTION generates this initial dominating tree. It first finds the minimum spanning tree of the whole graph and then trims the tree by removing leaves iteratively until removing one more leaf would break the domination property of the tree. The pseudocode of this procedure is given in Algorithm 2.
Algorithm 2 Algorithm for generating the initial solution.

Require: The instance graph G(V, E)
Ensure: A DTP configuration T_i
 1: procedure GENERATE_INITIALSOLUTION(G)
 2:     T_i ← KRUSKAL(G)
 3:     repeat
 4:         L ← the leaves of T_i whose removal keeps T_i dominating
 5:         if L ≠ ∅ then
 6:             v ← the leaf in L with the largest edge weight
 7:             remove v and its tree edge from T_i
 8:         end if
 9:     until L = ∅
10:     return T_i
11: end procedure

The procedure starts from the minimum spanning tree T_i generated by Kruskal's algorithm. It then repeatedly deletes the removable leaf with the largest edge weight and terminates when no more leaves can be deleted. The algorithm returns a feasible dominating tree as the initial configuration. In the following sections, we focus on the metaheuristic part of the proposed DNS algorithm, i.e., the neighborhood structure and its evaluation.
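As an illustration of this procedure, the following self-contained sketch builds the minimum spanning tree with Kruskal's algorithm and then trims removable leaves, heaviest first. The graph encoding and helper names are our own, not the paper's:

```python
def kruskal(vertices, edges):
    """Minimum spanning tree via Kruskal's algorithm (union-find)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree = []
    for (u, v), w in sorted(edges.items(), key=lambda e: e[1]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

def dominates(X, vertices, adj):
    """True if every vertex is in X or adjacent to some vertex of X."""
    return all(v in X or adj[v] & X for v in vertices)

def initial_solution(vertices, edges):
    """Start from the MST of the whole graph, then repeatedly delete the
    removable leaf whose tree edge is heaviest (cf. Algorithm 2)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    tree = kruskal(vertices, edges)
    X = set(vertices)
    while True:
        degree = {v: 0 for v in X}
        for u, v, _ in tree:
            degree[u] += 1; degree[v] += 1
        # leaf vertices sorted by the weight of their tree edge, heaviest first
        leaves = sorted(((w, x) for u, v, w in tree for x in (u, v)
                         if degree[x] == 1), reverse=True)
        for w, leaf in leaves:
            if dominates(X - {leaf}, vertices, adj):
                X.remove(leaf)
                tree = [e for e in tree if leaf not in e[:2]]
                break
        else:
            return X, tree   # no leaf can be removed any more

# Toy graph: a path A-B-C-D-E plus a heavy chord A-C.
V = {'A', 'B', 'C', 'D', 'E'}
E = {('A','B'): 1, ('B','C'): 2, ('C','D'): 3, ('D','E'): 4, ('A','C'): 10}
X, tree = initial_solution(V, E)   # X == {'C', 'D'}, remaining tree weight 3
```

On this toy graph the trimming stops at {C, D}, since removing either remaining vertex would leave A or E undominated.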

Definition
For a better description, we first define some important concepts and notations used in the proposed DNS algorithm.

• X: the set of vertices in the current dominating tree.
• X_plus: the set of vertices dominated by X but not in X.
• A_1: an array of numbers of undominated vertices, whose length is the number of graph vertices. A_1[i] denotes the number of vertices not dominated by the new X when moving vertex i from X to X_plus (or from X_plus to X).
• A_2: an array of minimum spanning tree weights for X, whose length is the number of graph vertices. A_2[i] denotes the weight of the new minimum spanning tree of X when moving vertex i from X to X_plus (or from X_plus to X).
The following example illustrates how A_1 and A_2 are calculated.
As shown in Figure 1, the current dominating tree is T_<B,D>, containing the two vertices B and D. Therefore, X = {B, D}. The vertices dominated by X are A, C, and E; thus, X_plus = {A, C, E}. The vertices A, B, C, D, and E correspond to the array subscripts 0, 1, 2, 3, and 4, respectively. To evaluate the neighborhood moves, the algorithm takes vertex A out and puts it into the set on the other side. The number of vertices not dominated by the new X after this move is 0; thus, A_1[0] is set to 0. The weight of the new minimum spanning tree of X is 13; thus, A_2[0] is set to 13. After evaluating all the neighborhood moves, the resulting arrays are A_1 = [0, 1, 0, 1, 0] and A_2 = [13, 0, 17, 0, 3]. A_1 and A_2 are used to evaluate the neighborhood moves.

Neighborhood Move and Evaluation
There are two kinds of neighborhood moves in the DNS algorithm: one takes a vertex out of X and puts it into X_plus, and the other takes a vertex out of X_plus and puts it into X. At each iteration, the best neighborhood move among both kinds is selected and performed. Two criteria evaluate the quality of a move: one is domination, and the other is the weight of the dominating tree. The pseudocode for the neighborhood evaluation is given in Algorithm 3.
Algorithm 3 Algorithm for performing a neighborhood evaluation.

Require: X, X_plus
Ensure: the evaluation matrices A_1 and A_2
 1: procedure DO_NEIGHBOREVALUATE(X, X_plus)
 2:     for v ∈ X ∪ X_plus do
 3:         move v to the set on the other side
 4:         A_1[v] ← the number of vertices not dominated by the new X
 5:         A_2[v] ← the weight of the minimum spanning tree of the new X
 6:         move v back
 7:     end for
 8: end procedure

The evaluation is conducted by trying to move each vertex to the other set and then calculating the A_1 and A_2 values. The time complexity of this module is O(V^2 + VE), and its space complexity is O(V^2). Based on these two arrays, the best move is selected as described in Algorithm 4.

Algorithm 4 Algorithm for selecting the best move.

Require: the evaluation matrices A_1 and A_2
Ensure: the best move M_best
 1: procedure SELECT_BESTMOVE(A_1, A_2)
 2:     M_best ← 0
 3:     for each non-tabu vertex v do
 4:         if (A_1[v], A_2[v]) is lexicographically smaller than (A_1[M_best], A_2[M_best]) then
 5:             M_best ← v
 6:         end if
 7:     end for
 8:     return M_best
 9: end procedure

Procedure SELECT_BESTMOVE picks the move with the smallest A_1 and A_2, giving higher priority to A_1. The time complexity of this module is O(V), and its space complexity is O(1). The selected best move is then performed by Algorithm 5.
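The selection rule can be illustrated with a few lines of Python, using the A_1 and A_2 arrays from the example of Figure 2 (the function name mirrors the pseudocode; tabu and aspiration checks are omitted here):

```python
def select_best_move(A1, A2):
    """Pick the vertex index with the smallest A_1 value, breaking ties
    by the smallest A_2 value (cf. Algorithm 4)."""
    return min(range(len(A1)), key=lambda i: (A1[i], A2[i]))

# Arrays from the worked example: minimum A_1 at indices 0, 2, 4;
# among those, the minimum A_2 is 3 at index 4 (vertex E).
best = select_best_move([0, 1, 0, 1, 0], [13, 0, 17, 0, 3])   # -> 4
```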
Algorithm 5 Algorithm for executing the best neighborhood move.

Require: X, BestMove
Ensure: the new configuration T_c
 1: procedure EXECUTE_BESTMOVE(X, BestMove)
 2:     if BestMove ∈ X then
 3:         move BestMove from X to X_plus
 4:     else
 5:         move BestMove from X_plus to X
 6:     end if
 7:     T_c ← KRUSKAL(G(X))
 8:     return T_c
 9: end procedure

Procedure EXECUTE_BESTMOVE moves the selected vertex to X_plus if it is in X, and vice versa. After the move, the minimum spanning tree of G(X) is calculated using Kruskal's algorithm and assigned to T_c. The time complexity of this module is O(E log E), and its space complexity is O(V). The following example illustrates how the best move is evaluated and performed.
As shown in Figure 2, the current dominating tree is T_<B,D>, X = {B, D}, and X_plus = {A, C, E}. To evaluate vertex A, we first move it from X_plus to X; then, X becomes {A, B, D}. The number of vertices not dominated by the new X at this point is 0; thus, A_1[A] = 0. The minimum spanning tree weight of X = {A, B, D} is 13; thus, A_2[A] = 13. We then move A back to its original set, which concludes the evaluation of A. B, C, D, and E are evaluated sequentially by the same process. After the evaluation of each vertex, A_1 = [0, 1, 0, 1, 0] and A_2 = [13, 0, 17, 0, 3]. Then, we pick the best neighborhood move by finding the minimum values in A_1 and A_2, with priority given to A_1. There are three minimum values in A_1, corresponding to A, C, and E. Comparing the values of these three vertices in A_2, the minimum is 3, corresponding to vertex E. Therefore, the best neighborhood move is to move E. After the move, the new X = {B, D, E}, and we calculate its minimum spanning tree, which is T_<B,D,E> with a weight of 3.

Fast Neighborhood Evaluation
To improve the efficiency of the algorithm, this paper proposes a method to dynamically update the neighborhood evaluation arrays A_1 and A_2.

Fast Evaluation for A_1
The number of undominated vertices may increase or remain unchanged when a vertex is moved from X to X_plus. Any newly undominated vertex must originally be in X_plus and connected to the moved vertex. Since the number of undominated vertices is zero throughout the algorithm (X always remains a dominating set), we can count the newly introduced undominated vertices by counting the vertices in X_plus for which the moved vertex is their only connection to X.
When a vertex is moved from X_plus to X, the number of undominated vertices may decrease or remain the same. Because X is dominating throughout the algorithm, the number of undominated vertices after this kind of move is still 0. The above observations can be used to compute A_1 dynamically without traversing the entire graph:

A_1[v] = |{u ∈ X_plus : N(u) ∩ X = {v}}| if v ∈ X, and A_1[v] = 0 if v ∈ X_plus,

where N(u) denotes the set of neighbors of u.
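A sketch of this incremental rule (the adjacency encoding and the test graph are our own):

```python
def fast_A1(v, X, X_plus, adj):
    """Incremental A_1 rule: `adj` maps each vertex to its neighbor set."""
    if v in X:
        # X-to-X_plus move: count X_plus vertices dominated only through v
        return sum(1 for u in X_plus if adj[u] & X == {v})
    return 0  # X_plus-to-X move: X remains dominating

# Toy graph: vertex 3 is dominated only by 1, vertex 5 only by 2,
# vertex 4 by both 1 and 2.
adj = {1: {3, 4}, 2: {4, 5}, 3: {1}, 4: {1, 2}, 5: {2}}
X, X_plus = {1, 2}, {3, 4, 5}
```

Here moving vertex 1 out of X would leave vertex 3 undominated (fast_A1 returns 1), while moving vertex 3 into X cannot break domination (fast_A1 returns 0).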

Fast Evaluation for A_2
For A_2, we use a dynamic version of Kruskal's algorithm. The algorithm maintains a set Roads of the edges contained in the subgraph G(X), i.e., the edges whose two endpoints are both in X. The Roads set is sorted by edge weight from smallest to largest. When an X-to-X_plus move is performed, the edges connecting the moved vertex to X are deleted from Roads. Similarly, when an X_plus-to-X move is performed, the edges connecting the moved vertex to X are inserted into Roads. Note that edges must be inserted at the appropriate positions so that Roads remains sorted. The dynamic Kruskal's algorithm then assumes that the edges before the deletion or insertion position are in the new minimum spanning tree and runs the normal procedure from that position onward. The pseudocode is given in Algorithms 6 and 7.
Algorithm 6 Algorithm for calculating a new minimum spanning tree.

Require: MovedVertex, G, Roads, T_c
Ensure: the weight of the minimum spanning tree T_s
 1: procedure CALCULATE_NEWMINSPANTREE(X, X_plus, MovedVertex)
 2:     if MovedVertex ∈ X then
 3:         delete every edge E(v, MovedVertex) with v ∈ X from Roads
 4:     else
 5:         insert every edge E(v, MovedVertex) with v ∈ X into Roads, keeping it sorted
 6:     end if
 7:     index ← the position in Roads of the lightest modified edge
 8:     T_s ← DYNAMIC_KRUSKAL(Roads, index, T_c)
 9:     return w(T_s)
10: end procedure

In Algorithm 6, the notation E(a, b) represents the edge connecting vertices a and b, and T_c is the original minimum spanning tree, i.e., the tree of the current solution. The time complexity of this module is O(E), and its space complexity is O(V). The main job of this procedure is to update the Roads set; Algorithm 7 then calculates the spanning tree dynamically according to Roads.

Algorithm 7 Dynamic Kruskal's algorithm.

Require: Roads, index, T_c
Ensure: the new minimum spanning tree T_s
 1: procedure DYNAMIC_KRUSKAL(Roads, index, T_c)
 2:     T_s ← the edges of T_c located before position index in Roads
 3:     for i from index to |Roads| − 1 do
 4:         if Roads[i] connects two different components of T_s then
 5:             add Roads[i] to T_s
 6:         end if
 7:     end for
 8:     return T_s
 9: end procedure

The time complexity of this module is O(E), and its space complexity is O(V). The following example illustrates the above procedures. As shown in Figure 3, the original tree is T_<A,B,D,F>; currently, X = {A, B, D, F}, X_plus = {C, E, G}, Roads = {<D, F>, <A, B>, <B, D>}, and the weights of these edges are w(Roads) = {1, 4, 8}. Let us evaluate the move of vertex E from X_plus to X. After the move, X = {A, B, D, E, F} and X_plus = {C, G}. Since E was originally in X_plus, A_1[E] = 0. The edges added by the move are {<B, E>, <E, F>} with weights {2, 5}. We insert these two edges into the appropriate positions in Roads according to their weights, giving w(Roads) = {1, 2, 4, 5, 8} and Roads = {<D, F>, <B, E>, <A, B>, <E, F>, <B, D>}. We only need to start from the position of <B, E> to determine the new minimum spanning tree; the edges before <B, E> must be in it. The evaluated minimum spanning tree is T_<A,B,D,E,F> = {<D, F>, <B, E>, <A, B>, <E, F>} with weight 12; thus, A_2[E] = 12.
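The Roads bookkeeping can be illustrated with the following simplified sketch. For brevity it re-runs a plain Kruskal scan over the sorted list rather than resuming from the modified position, which is the paper's actual optimization; the class and method names are our own. The numbers reproduce the Figure 3 example:

```python
import bisect

class DynamicMST:
    """Keep the induced edges of G(X) sorted by weight so that Kruskal's
    algorithm can scan them in order (simplified sketch of Roads)."""

    def __init__(self, roads):
        self.roads = sorted(roads)            # list of (weight, u, v)

    def add_vertex(self, v, X, weights):
        """Insert the edges between v and X when v moves into X."""
        for u in X:
            w = weights.get((u, v)) or weights.get((v, u))
            if w is not None:
                bisect.insort(self.roads, (w, u, v))

    def remove_vertex(self, v):
        """Delete the edges incident to v when v leaves X."""
        self.roads = [e for e in self.roads if v not in e[1:]]

    def mst_weight(self):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                x = parent[x]
            return x
        total = 0
        for w, u, v in self.roads:            # already sorted: plain Kruskal
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                total += w
        return total

# Figure 3 example: X = {A, B, D, F}, then vertex E moves into X.
weights = {('A','B'): 4, ('B','D'): 8, ('D','F'): 1, ('B','E'): 2, ('E','F'): 5}
dm = DynamicMST([(1, 'D', 'F'), (4, 'A', 'B'), (8, 'B', 'D')])
w0 = dm.mst_weight()                          # 13: edges <D,F>, <A,B>, <B,D>
dm.add_vertex('E', {'A', 'B', 'D', 'F'}, weights)
w1 = dm.mst_weight()                          # 12: <B,D> is dropped from the MST
```

The second scan matches the example's A_2[E] = 12: after inserting <B, E> and <E, F>, the heavy edge <B, D> is no longer needed.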

Tabu Strategy and Aspiration Mechanism
The proposed DNS algorithm implements a tabu strategy. Once a vertex is moved, it is prohibited from being moved again within a tenure. The tabu strategy is applied to both kinds of moves. Since X and X_plus do not intersect, only one tabu list is needed. We denote the tabu tenure of moves from X_plus to X as TabuLength_1 and that of moves from X to X_plus as TabuLength_2. These two tenures are set to the numbers of vertices in X and X_plus, respectively, thus implementing dynamic tabu tenures. This tabu strategy improves accuracy and efficiency and makes it easier for the algorithm to jump out of local optima.
In order to avoid missing some good solutions, an aspiration strategy is introduced.If one tabu move may improve the best overall solution, the searching process breaks its tabu status and selects it as a candidate best move.

Perturbation Strategy
To further improve solution quality, the proposed DNS algorithm implements a perturbation strategy. The perturbation moves some randomly chosen vertices from X_plus to X. The algorithm uses a parameter for the perturbation period: a perturbation is triggered when the number of iterations reaches the perturbation period, and the iteration counter is reset to zero whenever the best overall solution is updated within the period. Two further parameters are the perturbation amplitude, i.e., the number of vertices taken out of X_plus during a perturbation, and the perturbation tabu tenure, i.e., the tabu tenure used during the perturbation period. In addition, after a certain number of small perturbations, a larger perturbation is triggered to give the search process a larger spatial span; it moves one-third of the vertices of X_plus into X at random.
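A minimal sketch of the basic perturbation step (names and set encoding are our own):

```python
import random

def perturb(X, X_plus, amplitude, rng=random):
    """Move `amplitude` randomly chosen vertices from X_plus into X.
    The larger perturbation would use amplitude = len(X_plus) // 3."""
    moved = set(rng.sample(sorted(X_plus), min(amplitude, len(X_plus))))
    return X | moved, X_plus - moved

# Toy usage with a seeded generator for reproducibility.
X, X_plus = perturb({1}, {2, 3, 4, 5}, amplitude=2, rng=random.Random(0))
```

After the call, two vertices have moved from X_plus into X; the union of the two sets is unchanged and they stay disjoint.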

Datasets and Experimental Protocols
The experiments were carried out on the following two datasets:

• The DTP dataset, proposed by Dražic et al. [15], with the number of vertices ranging from 150 to 1000.
• The Range dataset, proposed by Sundar and Singh [13], with the number of vertices ranging from 50 to 500 and transmission ranges from 100 to 150 m.
Both datasets are randomly generated and can be downloaded online or obtained from the authors. The DNS algorithm was implemented in Java (JDK 17) and tested on a desktop computer equipped with an Intel® Xeon® W-2235 CPU @ 3.80 GHz and 16.0 GB of RAM.

Calibration
In this section, we present the experiments used to fix the values of the key parameters of the DNS algorithm:
• Parameter DisturbPeriod: the first perturbation period. Values from 14 to 17 were tested.
• Parameter DisturbLevel: the perturbation amplitude. Values from 7 to 12 were tested.
• Parameter DisturbTL_1: the tabu length, during the perturbation, of the neighborhood move taking a vertex from X_plus and putting it into X. Values from one to two were tested.
• Parameter DisturbTL_2: the tabu length, during the perturbation, of the neighborhood move taking a vertex from X and putting it into X_plus. Values from three to eight were tested.
We selected 13 representative instances for the calibration experiments: instances 200-400-1, 200-600-1, 300-600-1, and 300-1000-1 in DTP, and instances 300-1, 400-1, and 500-1 in each of Range100, Range125, and Range150. The experiment was conducted as follows. First, we performed a rough screening of parameter combinations to select the more promising ones. Then, for each set of parameters, we ran the 13 instances in sequence; each instance was run five times with different random seeds for 300 s each time. We compared the gap rates of the parameter settings, where the gap was calculated as

gap = (n_1 − n_2) / n_2,

where n_1 is the average result obtained, and n_2 is the best-known objective value. Table 1 shows the results of the calibration experiment. According to the experimental data, the minimum total gap rate was 0.075, corresponding to a DisturbPeriod of 15, a DisturbLevel of 8, a DisturbTL_1 of 1, and a DisturbTL_2 of 4. In the following experiments, we set the parameters of the algorithm to this setting. Note that this experiment does not guarantee optimal parameter values, and the optimal scheme may vary from one benchmark to another. It can also be seen that the gap rate is small for all parameter combinations, indicating the robustness of the algorithm.

Algorithm Comparison
In this section, a comparison of algorithms for the minimum dominating tree problem is conducted; the specific comparison is shown in Table 2.

Comparison on DTP Dataset
In this section, we compare the proposed DNS algorithm with other methods from the literature on the DTP dataset. There are two DTP datasets: dtp_large and dtp_small. Since all algorithms obtain the best results for dtp_small with little difference in speed, only the experimental results for dtp_large are shown here. The compared algorithms are TLMH, VNS, and GAITLS. For each instance, 10 runs with different random seeds were performed, each lasting 1000 s. The best and average objective values and the average time were recorded for each instance. The experimental results and comparisons are shown in Table 3. Bolded numbers indicate that the current best value was obtained and the result is not worse than those of the other algorithms. Stars indicate that the DNS algorithm improved on the best objective value in the literature.
From Table 3, it can be seen that our algorithm obtained the best value on most instances, and on those where it did not, it came very close. The overall best values were slightly worse than those of the TLMH algorithm but better than those of the VNS and GAITLS algorithms. The overall averages of our algorithm outperformed the other algorithms, demonstrating its stability and speed. It also improved on the best-known solution for two instances.

Range Dataset Experiments
In this section, the widely used Range dataset with 54 instances was tested and compared against the TLMH, ACO-DT, EA/G-MP, and ABC-DTP algorithms. The results quoted for these algorithms are the best results obtained with the best parameters reported in the original literature. Our algorithm was run 10 times on each instance with the previously calibrated best parameters and different random seeds. Each run lasted 1000 s, and the best and average objective values and the average time were recorded. The results and comparisons are shown in Tables 4-6. Bolded numbers indicate that the current best value was obtained and the result is not worse than those of the other algorithms. Stars indicate that the DNS algorithm improved on the best objective value in the literature. Our algorithm obtained the best solution for most instances in the Range dataset, and the non-optimal results were close to the optimal solutions. It improved on the best-known solution for two instances and outperformed the TLMH algorithm in speed.

The Importance of the Initial Solution
A procedure for generating the initial solution was proposed in the previous section. To assess its effect, experiments were conducted on 18 instances of Range150. The objective value obtained by our procedure was compared with both the minimum spanning tree weight and the best-known objective value, to see how much our procedure improved the initial solution and how close that initial solution was to the minimum dominating tree. The experimental results are shown in Table 7, where T_m represents the minimum spanning tree, T_i the objective value obtained from the initialization procedure, and T_b the best-known objective value.
From the results, it can be seen that using the initialization procedure to obtain a dominating tree as the initial solution improves the results significantly compared to using the minimum spanning tree as the initial solution. The weight of this initial dominating tree is relatively close to that of the minimum dominating tree, allowing the algorithm to converge quickly to a near-optimal solution at the very beginning. To verify this improvement, we also used the minimum spanning tree of the graph G(V, E) as the initial solution and conducted experiments on Range150 for comparison; this variant is denoted DNS-MS. The experimental results are presented in Table 8. They show little difference between the best and average values obtained by DNS and DNS-MS, demonstrating the robustness of the local search procedure of the DNS algorithm. However, with comparable solution quality, the solution time required by DNS is lower than that of DNS-MS, indicating that the initial solution procedure proposed in this paper improves the efficiency of the algorithm.

The Importance of the Fast Neighborhood Evaluation
The proposed DNS algorithm uses a fast neighborhood evaluation technique. To verify its effectiveness, an experiment measured the time needed to reach the same result on 18 instances of Range150 with and without the fast neighborhood evaluation. In this experiment, the perturbation was disabled and only the tabu mechanism was enabled. The best objective value reachable at complete convergence was determined in advance for each instance and used as the target result. The random seed was fixed for each instance so that any difference in speed was due solely to the fast neighborhood evaluation. The program ran until it reached the target result, and the time taken on each instance under the two methods was recorded separately. The results are shown in Table 9, where Method 1 denotes the version without fast neighborhood evaluation and Method 2 the version with it. From Table 9, it can be seen that the version using the fast neighborhood evaluation is significantly faster than the version without it, verifying its effectiveness. To observe the convergence of the two methods, scatter plots were generated by recording the objective value and the elapsed time after each update. The convergence curves of several instances are shown in Figure 4, where NDNU denotes the method without fast neighborhood evaluation and DNU the method with it. From Figure 4, it can be seen that the version using the fast neighborhood evaluation converges to the target value more quickly, whereas the version without it takes considerably longer to reach the same objective value.
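The fast evaluation relies on dynamically updating the minimum spanning tree of the subgraph induced by the dominating set (Algorithm 7, the dynamic Kruskal algorithm) rather than recomputing it from scratch. A minimal sketch of this idea for a vertex-addition move is shown below; the function names and argument layout are illustrative, not the paper's implementation. It uses the standard fact that, after adding a vertex, an MST of the enlarged subgraph is contained in the old tree's edges plus the new vertex's incident edges, so Kruskal only needs to scan |T| + deg(v) edges instead of the whole induced edge set.

```python
def _find(parent, x):
    """Union-find `find` with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def mst_weight(vertices, edges):
    """Plain Kruskal over an edge list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:           # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total

def evaluate_add_move(tree_edges, vertices, new_vertex, incident_edges):
    """Evaluate adding `new_vertex` to the dominating set by running
    Kruskal on the old MST's edges plus the new incident edges only,
    instead of on every edge of the enlarged induced subgraph."""
    return mst_weight(list(vertices) + [new_vertex],
                      list(tree_edges) + list(incident_edges))
```

The same restricted re-run idea underlies a fast evaluation because the candidate edge set is linear in the tree size plus the moved vertex's degree; vertex removal requires reconnecting the components left by the deleted tree edges and is more involved.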

Importance of the Perturbation
The proposed DNS algorithm also implements a perturbation strategy. In this section, its effectiveness was verified experimentally by testing the versions with and without the strategy on 18 instances of Range150. Each instance was run five times with different random seeds and a time limit of 1000 s; the results are shown in Table 10. From Table 10, it can be seen that better solutions are obtained by the version using the perturbation strategy, especially on some larger instances, verifying the effectiveness of the strategy.

Statistical Significance Testing among Versions of DNS
We conducted t-tests between DNS and its variants on selected instances to check whether the differences in results were merely caused by randomness. The variants were:
• DNS-MS, the version of the algorithm with the minimum spanning tree as the initial solution;
• DNS-NF, the version of the algorithm without fast neighborhood evaluation;
• DNS-NP, the version of the algorithm without perturbation.
Among them, DNS-MS and DNS-NF were tested for differences in running time, while DNS-NP was tested for differences in solution quality. The results and comparisons are as follows.
From Table 11, we can see that the results or running times of DNS are significantly different (p ≤ 0.05) from those of the other versions on many instances. Additionally, for DNS vs. DNS-NP, the p-value is 1.00 on some small-scale instances: both DNS and DNS-NP always obtain the optimal solution there, so no difference can be observed between them on these instances. On the large-scale instances, however, significant differences can be observed.
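Table 11 reports p-values from these tests. As an illustration of the underlying computation (the paper does not state which t-test variant was used; a Welch unequal-variance two-sample test is assumed here, and the function name is illustrative), the t statistic and its degrees of freedom can be computed from the two result samples as follows; the p-value then comes from the t-distribution with that many degrees of freedom.

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for samples a and b (unequal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    se2 = va / na + vb / nb                      # squared standard error
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1)
                     + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

When the two samples are identical, as for DNS vs. DNS-NP on the small instances where both always find the optimum, the mean difference and hence the t statistic is zero, which is consistent with the p-value of 1.00 observed in Table 11.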

Conclusions
In this paper, a dual-neighborhood search algorithm was proposed to solve the minimum dominating tree problem. To improve the efficiency of the algorithm, a fast neighborhood evaluation method was proposed, in which the minimum spanning tree of the subgraph induced by the dominating set is generated dynamically. The tabu and perturbation mechanisms help the algorithm escape local optima and thus obtain better solutions. The DNS algorithm was demonstrated to be highly effective in tests on a collection of widely used benchmark instances, where it was compared with algorithms from the literature: out of 72 public instances, DNS improved the best known result on four while remaining competitive on the rest with less computational time. Although the techniques proposed in this paper are specific to the minimum dominating tree problem, most of the underlying ideas can be applied to other combinatorial optimization problems. For example, the dynamic spanning tree computation used in the fast neighborhood evaluation can be applied to problems with spanning tree structures, and the collaboration of two neighborhood structures can be introduced to other relevant optimization problems. Finally, it would be interesting to test the proposed ideas in other metaheuristic frameworks and on other optimization problems.

Algorithm 7
Algorithm for the dynamic Kruskal algorithm. Require: Roads, index, G, T_c. Ensure: a minimum spanning tree T_s.

Table 1 .
Experimental results of parameter testing for the DNS.

Table 2 .
Comparison of algorithms for the minimum dominating tree.

Table 3 .
Computational results of the DNS and comparisons on dtp_large.

Table 4 .
Computational results of the DNS and comparisons on Range100.

Table 5 .
Computational results of the DNS and comparisons on Range125.

Table 6 .
Computational results of the DNS and comparisons on Range150.

Table 7 .
Experimental results of the initial dominating tree algorithm.

Table 8 .
Computational results of the initial solution experiment on Range150.

Table 9 .
Comparison of fast neighborhood evaluation experiments.

Table 10 .
Comparison of disturbance strategy experiments.

Table 11 .
p-values of the t-tests on each instance between versions of DNS.