Article

A Novel Discrete Group Teaching Optimization Algorithm for TSP Path Planning with Unmanned Surface Vehicles

School of Naval Architecture and Ocean Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Mar. Sci. Eng. 2022, 10(9), 1305; https://doi.org/10.3390/jmse10091305
Submission received: 11 August 2022 / Revised: 7 September 2022 / Accepted: 12 September 2022 / Published: 15 September 2022
(This article belongs to the Section Marine Environmental Science)

Abstract: A growing number of researchers are interested in deploying unmanned surface vehicles (USVs) in support of ocean environmental monitoring. To accomplish these missions efficiently, multiple-waypoint path planning strategies for survey USVs are still a key challenge. The multiple-waypoint path planning problem, mathematically equivalent to the traveling salesman problem (TSP), is addressed in this paper using a discrete group teaching optimization algorithm (DGTOA). Generally, the algorithm consists of three phases. In the initialization phase, the DGTOA generates the initial sequence for students through greedy initialization. In the crossover phase, a new greedy crossover algorithm is introduced to increase diversity. In the mutation phase, to balance the exploration and exploitation, this paper proposes a dynamic adaptive neighborhood radius based on triangular probability selection to apply in the shift mutation algorithm, the inversion mutation algorithm, and the 3-opt mutation algorithm. To verify the performance of the DGTOA, fifteen benchmark cases from TSPLIB are implemented to compare the DGTOA with the discrete tree seed algorithm, discrete Jaya algorithm, artificial bee colony optimization, particle swarm optimization-ant colony optimization, and discrete shuffled frog-leaping algorithm. The results demonstrate that the DGTOA is a robust and competitive algorithm, especially for large-scale TSP problems. Meanwhile, the USV simulation results indicate that the DGTOA performs well in terms of exploration and exploitation.

1. Introduction

Unmanned surface vehicles (USVs), which have attractive operating and maintenance costs, the capability to perform at high intensity, and good maneuverability [1], have gained wide attention in scientific research recently. For monitoring pollutant concentrations in lakes or oceans, USVs can be equipped with multiple monitoring sensors to effectively collect environmental data, as well as avoid direct long-term human exposure to hazardous environments [2,3] (as shown in Figure 1). In particular, prior environmental information and sampling path planning are important components for the guidance systems since they facilitate the design of an optimal path based on navigation information and mission objectives [4]. In this context, effective path planning for USVs is crucial for saving operation time and mission costs. Consequently, it has become a research hot spot to design a fast convergence algorithm that facilitates optimal path planning [5].
In view of the traversing order of multiple waypoints for USV path planning, the problem is mathematically equivalent to the traveling salesman problem (TSP), a famous NP-hard combinatorial optimization problem that still lacks a polynomial-time algorithm for obtaining an optimal solution [6]. This problem can be described as follows: a salesman who plans to visit several cities wants to find the shortest Hamiltonian cycle that permits him to visit each city only once and eventually return to his starting city [7,8]. As a typical optimization problem, the TSP is widely seen in a range of practical missions, including robot navigation, computer wiring, sensor placement, and logistics management [9]. For these reasons, many approaches have been proposed to solve the TSP in past decades, including exact and approximate algorithms. With an exact algorithm, the optimal solution can be obtained through rigorous mathematical analysis [10,11]. However, the expensive computational cost makes such algorithms inadequate for solving medium-scale to large-scale NP-complete problems. Hence, many researchers turn to solving the TSP using approximation algorithms. These algorithms can be divided into two categories. In the first, local search algorithms, such as 2-opt [12] and 3-opt [13], are used to solve the small-scale TSP. Generally, the efficiency of these algorithms decreases as the problem dimension increases, and they tend to fall into locally optimal solutions. Therefore, many metaheuristic algorithms for solving the symmetric TSP have been presented in the literature over the past decades, including the genetic algorithm (GA) [14], simulated annealing (SA) [15], artificial bee colony (ABC) [16], ant colony optimization (ACO) [17], the Jaya algorithm (JAYA) [18], etc.
These metaheuristic algorithms solve the TSP in three main phases: initialization, crossover, and mutation [19]. In general, these metaheuristic algorithms can obtain good results when attempting to solve small-scale optimization problems. However, the larger the scale of the optimization problem is, the slower the convergence speed will be, and the easier it is to fall into a local optimum [20]. Motivated by the aforementioned discussions, a novel discrete group teaching optimization algorithm (DGTOA), inspired by the group teaching optimization algorithm (GTOA) [21], is proposed to solve the TSP. The DGTOA is presented in three phases in this paper. Firstly, in the initialization phase, the DGTOA generates the initial sequence for students through greedy initialization. Then, in the crossover phase, a new greedy crossover algorithm is employed to increase diversity. Finally, in the mutation phase, to balance the exploration and exploitation, this paper develops a dynamic adaptive neighborhood radius based on triangular probability selection to apply in the shift mutation algorithm, the inversion mutation algorithm, and the 3-opt mutation algorithm. In addition, to verify the performance of the DGTOA, fifteen benchmark problems in TSPLIB [22] are used to test the algorithm as well as compare it with the discrete tree seed algorithm (DTSA) [6], discrete Jaya algorithm (DJAYA) [18], ABC [7], particle swarm optimization-ant colony optimization (PSO-ACO) [23], and discrete shuffled frog-leaping algorithm (DSFLA) [24]. From the comparison results, we can conclude that the DGTOA has a comparative advantage. The main contributions of this work relative to other algorithms in the literature are as follows:
  • This study presents the first application of the GTOA in the permutation-coded discrete form.
  • The DGTOA is a novel and effective discrete optimization algorithm for solving the TSP, and the comparison shows that the solutions obtained by the DGTOA have comparable performance.
  • The dynamic adaptive neighborhood radius can balance the exploration and exploitation for solving the TSP.
  • The DGTOA has been successfully applied to USV path planning, and the simulation results indicate that the DGTOA can provide a competitive advantage in path planning for USVs.
The remainder of the paper is arranged as follows. A literature survey on techniques to avoid falling into a local optimum and improve the optimization speed is given in Section 2. In Section 3, the original GTOA, the TSP, and the dynamic adaptive neighborhood radius model are described. After that, the DGTOA model is introduced in Section 4. Results and discussions are provided in Section 5. Finally, Section 6 concludes the study.

2. Literature Survey

This section focuses on researchers' current efforts to develop techniques that avoid falling into a local optimum and improve the optimization speed in the initialization, crossover, and mutation phases. The related improvement techniques for solving the TSP can be roughly introduced as follows.
The first one is improving the algorithm initialization rules to accelerate convergence. In order to speed up convergence, W. Li et al. discussed K-means clustering as a method to group individuals with similar positions into the same class to obtain the initial solution [25]. In another study, to ensure that the algorithm would execute within the given time, M. Bellmore et al. developed the nearest neighbor heuristic initialization algorithm [26]. On the basis of the nearest neighbor initialization, L. Wang et al. introduced the k-nearest neighbors’ initialization method, where the algorithm adopted a greedy approach to select the k-nearest neighbors [27]. A.C. Cinar et al. presented the initial solution of the tree during the initialization phase using the nearest neighbor and a random approach for balancing the speed and quality of its solution [6]. C. Wu et al. [28] and P. Guo et al. [29] adopted a greedy strategy for generating initialized populations to improve the optimization speed and avoid falling into a local optimum.
The second way to avoid getting trapped in the local optimum is to use crossover rules to increase diversity. For example, İlhan et al. used genetic edge recombination crossover and order crossover to avoid falling into the local optimum and improve their performances [30]. Z.H. Ahmed presented a sequential constructive crossover operator (SCX) to solve the TSP to avoid getting trapped in the local optimum [31]. In another study, Hussain et al. proposed a method based on the GA to solve TSP with a modified cycle crossover operator (CX2) [13]. In their approach, path representations have effectively balanced optimization speed and solution quality. Y. Nagata et al. devised a robust genetic algorithm based on the edge assembly crossover (EAX) operator [32]. To avoid being limited in the local optimum, the algorithm used both local and global versions of EAX.
The third one is adopting a mutation strategy to balance exploration and exploitation for solving the TSP. For instance, M. Albayrak et al. compared greedy sub-tour mutation (GSTM) with other mutation techniques. GSTM demonstrated significant advantages in polynomial-time [33]. To further avoid falling into a local optimum, A.C. Cinar et al. presented the use of swap, shift, and symmetric transformation operators for the DTSA to solve the problem of coding optimization in the path improvement phase [6]. To balance exploration and exploitation, M. Anantathanavit et al. employed K-means to cluster the sub-cities and merge them by the radius particle swarm optimization embedded into adaptive mutation, which could balance between time and accuracy [34].
Besides the main strategies for improvement mentioned above, some researchers combined a local search algorithm and a metaheuristic algorithm to avoid premature convergence and further improve the solution quality. For example, Mahi et al. developed a hybrid algorithm that combined PSO, ACO, and 3-opt algorithms, allowing it to avoid premature convergence and increase accuracy and robustness [23]. Moreover, a support vector regression approach is employed by R. Gupta et al. to solve the search space problem of the TSP [35].
Specifically, in practical ocean survey scenarios, D.V. Lyridis proposed an improved ACO with a fuzzy logic optimization algorithm for local path planning of USVs [36]. This algorithm offers considerable advantages in terms of optimal solution and convergence speed. Y.C. Liu et al. proposed a novel self-organizing map (SOM) algorithm for USV to generate sequences performing multiple tasks quickly and efficiently [37]. J.F. Xin et al. introduced a greedy mechanism and a 2-opt operator to improve the particle swarm algorithm for high-quality path planning of USV [4]. The improved algorithm was validated in a USV model in a realistic marine environment. J.H. Park et al. used a genetic algorithm to improve the mission planning capability of USVs and tested it in a simulation environment [38].
Following the literature survey, improving the algorithm initialization rules, using crossover rules, adopting a mutation strategy, combining with a local search algorithm, or all of these strategies in combination can be applied to the TSP to avoid falling into a local optimum. However, direct random crossover and global mutation are challenging in terms of convergence speed. Moreover, a higher convergence speed is required for the DGTOA to achieve rapid and optimal path planning for USVs. Thus, in contrast to the studies mentioned above, the DGTOA innovatively incorporates a dynamic adaptive neighborhood radius model, which is applied to the neighborhood mutation mode. Meanwhile, a new greedy crossover method is used to further improve TSP path exploration.

3. Background Work

In this section, we will introduce the group teaching optimization algorithm (GTOA) to solve continuous problems [21]. Meanwhile, the traveling salesman problem will be briefly described. In addition, a dynamic adaptive neighborhood radius model will be introduced.

3.1. Group Teaching Optimization Algorithm

The GTOA is inspired by the group teaching mechanism, which divides students into an excellent group and a normal group [39]. Depending on the grouping results, teachers create different teaching plans. In brief, the plan for the excellent group tends to raise the overall average knowledge, while the plan for the normal group tends to improve each student's individual knowledge, as shown in Formulas (1) and (2) for the teachers of the excellent and normal groups, respectively:
Excellent group: $x_{teacher,i}^{t+1} = x_i^t + a \times \left(T^t - 2 \times \left(b \times M^t + (1 - b) \times x_i^t\right)\right)$ (1)
Normal group: $x_{teacher,i}^{t+1} = x_i^t + 2 \times c \times \left(T^t - x_i^t\right)$ (2)
where t is the number of iteration generations, $x_{teacher,i}^{t+1}$ is the knowledge acquired by student i from the teacher at time t, $x_i^t$ is the knowledge level of student i at time t, $T^t$ is the knowledge level of the teacher at time t, $M^t$ is the average knowledge level of the group, and a, b, and c (0 < a, b, c < 1) are random numbers.
Combining the effects of lesson plans developed by the teacher with interaction with classmates and self-study, the knowledge learned by student i at time t can be calculated as follows:
$x_{student,i}^{t+1} = \begin{cases} x_{teacher,i}^{t+1} + rand \times \left(x_{teacher,i}^{t+1} - x_{teacher,j}^{t+1}\right) + rand \times \left(x_{teacher,i}^{t+1} - x_i^t\right), & f\left(x_{teacher,i}^{t+1}\right) < f\left(x_{teacher,j}^{t+1}\right) \\ x_{teacher,i}^{t+1} - rand \times \left(x_{teacher,i}^{t+1} - x_{teacher,j}^{t+1}\right) + rand \times \left(x_{teacher,i}^{t+1} - x_i^t\right), & f\left(x_{teacher,i}^{t+1}\right) \geq f\left(x_{teacher,j}^{t+1}\right) \end{cases}$ (3)
$x_i^{t+1} = \begin{cases} x_{teacher,i}^{t+1}, & f\left(x_{teacher,i}^{t+1}\right) < f\left(x_{student,i}^{t+1}\right) \\ x_{student,i}^{t+1}, & f\left(x_{teacher,i}^{t+1}\right) \geq f\left(x_{student,i}^{t+1}\right) \end{cases}$ (4)
where $x_{student,i}^{t+1}$ is the knowledge acquired by student i through interaction at time t, $x_{teacher,j}^{t+1}$ is the knowledge acquired by student j (j ≠ i) from the teacher at time t, and f(x) is the fitness function [39]. Following the completion of the learning process at time t, the two groups are combined and regrouped by their acquisition level until the termination condition is met, and $x_i^{t+1}$ represents the knowledge of student i at time $t+1$. The process of the GTOA can be seen in Figure 2 and the pseudo-code of the GTOA is given in Algorithm 1.
Algorithm 1: Pseudo-code of GTOA
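To make the update rules concrete, the following is a minimal NumPy sketch of one GTOA generation on a continuous objective, following our reading of Formulas (1)–(4). The current best student is taken as the teacher, and the function and variable names are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

def gtoa_generation(pop, f, rng):
    """One GTOA generation on a continuous objective f (Formulas (1)-(4)).

    pop: (M, D) array of student knowledge vectors; the best student is
    taken as the teacher T, and the better half forms the excellent group.
    """
    M = len(pop)
    pop = pop[np.argsort([f(x) for x in pop])]   # rank students by fitness
    T = pop[0].copy()                            # teacher: best student
    half = M // 2
    mean_exc = pop[:half].mean(axis=0)           # group average M^t
    x_teach = np.empty_like(pop)
    for i in range(M):                           # teacher phase
        if i < half:                             # excellent group, Formula (1)
            a, b = rng.random(), rng.random()
            x_teach[i] = pop[i] + a * (T - 2 * (b * mean_exc + (1 - b) * pop[i]))
        else:                                    # normal group, Formula (2)
            x_teach[i] = pop[i] + 2 * rng.random() * (T - pop[i])
    new_pop = np.empty_like(pop)
    for i in range(M):                           # student phase, Formulas (3)-(4)
        j = (i + 1 + rng.integers(M - 1)) % M    # random classmate j != i
        sign = 1.0 if f(x_teach[i]) < f(x_teach[j]) else -1.0
        x_stud = (x_teach[i]
                  + sign * rng.random() * (x_teach[i] - x_teach[j])
                  + rng.random() * (x_teach[i] - pop[i]))
        new_pop[i] = x_teach[i] if f(x_teach[i]) < f(x_stud) else x_stud
    return new_pop
```

Repeated calls to this function, with regrouping at each call, reproduce the grouping-learning-regrouping cycle described above.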

3.2. Traveling Salesman Problem (TSP)

The TSP can be represented by the complete graph $G = (V, E)$, where V is the set of cities and E is the set of edges connecting them. In the Hamiltonian loop, once a city is taken as a starting point, all cities are visited only once and the loop eventually returns to the starting city. The target is to make the Hamiltonian loop as short as possible [40].
$c_{ij} = \begin{cases} 1, & \text{the salesman travels through the edge } (i, j) \\ 0, & \text{otherwise} \end{cases}$ (5)
$\min \sum_{i \in V} \sum_{j \in V} c_{ij} d_{ij}$ (6)
$\sum_{i=1, i \in V}^{D} c_{ij} = \sum_{j=1, j \in V}^{D} c_{ij} = 1$ (7)
$\sum_{i \in S} \sum_{j \in S} c_{ij} \leq |S| - 1, \quad \forall S \subset V, \; |S| \geq 2$ (8)
where d i j is the distance of the edge (i, j) and D indicates the total number of cities to be visited; Formula (7) guarantees that each city is visited only once; Formula (8) guarantees that the result is a Hamiltonian loop.
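The objective of Formula (6) is simply the total length of the closed tour. A small helper, with illustrative names, makes the objective and the visit-once constraint of Formula (7) explicit:

```python
import math

def tour_length(cities, tour):
    """Length of the Hamiltonian loop that visits `cities` in the order
    given by `tour` (Formula (6)); the modulo closes the loop back to
    the starting city."""
    n = len(tour)
    return sum(math.dist(cities[tour[k]], cities[tour[(k + 1) % n]])
               for k in range(n))

def is_hamiltonian(tour, n):
    """Formula (7): every one of the n cities appears exactly once."""
    return sorted(tour) == list(range(n))
```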

3.3. Dynamic Adaptive Neighborhood Radius

To improve accuracy and efficiency, this paper proposes a method of a dynamic adaptive neighborhood radius, which incorporates the exponential function into the iterative generations and tanh functions to balance linear and nonlinear relationships. A description of the model design can be decided through:
$r_z = \tanh\left(e^{-\left(\frac{iter}{Max_{iter}} + 0.1\right)} \times \frac{r_{avg}^z - d_{min}}{d_{avg} - d_{min}}\right) \times \left(r_{max}^z - r_{min}^z\right) + r_{min}^z, \quad z = 1, 2, \ldots, N$ (9)
where z is the current city and r m a x z , r m i n z , and r a v g z are the maximum, minimum, and average distance between the unvisited city and current city, respectively. d m i n and d a v g represent the minimum and average distance between all cities, respectively, M a x i t e r is the maximum number of iterations, and iter is the current number of iterations.
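As a sketch, Formula (9) can be implemented directly; note that the grouping of the exponential and tanh terms is our reading of the reconstructed formula.

```python
import math

def dynamic_radius(r_min, r_max, r_avg, d_min, d_avg, it, max_iter):
    """Dynamic adaptive neighborhood radius of Formula (9).

    r_min/r_max/r_avg: minimum/maximum/average distance from the current
    city to the unvisited cities; d_min/d_avg: minimum/average distance
    over all city pairs.  The exponential factor shrinks as the iteration
    counter grows, and tanh keeps the scaling factor within [0, 1), so
    the radius contracts from near r_max toward r_min over the run.
    """
    scale = math.tanh(math.exp(-(it / max_iter + 0.1))
                      * (r_avg - d_min) / (d_avg - d_min))
    return scale * (r_max - r_min) + r_min
```

A large radius early in the run favors exploration, while the shrinking radius later in the run confines the mutations to nearby cities, favoring exploitation.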
The triangular probability selection model [41] increases the probability of selecting relatively close individuals, as well as ensures that individuals from further away have a chance of being selected in the neighborhood. The details are as follows: firstly, the neighboring cities are sorted by distance from the current city z in descending order, and the probability of a city i being chosen is shown in Formula (10), where n is the number of neighboring cities. Then a random number k is generated according to Formula (11). Finally, a city c m is selected as the next target city of the current city z according to Formula (12):
$P_i = 2(n + 1 - i) / (n(n + 1)), \quad 1 \leq i \leq n$ (10)
$k \in \left[0, \; n P_1 + (n - 1) P_2 + \cdots + P_n\right]$ (11)
$c = \begin{cases} c_1, & k \leq P_1 \\ c_m, & (m-1) P_1 + (m-2) P_2 + \cdots + P_{m-1} < k \leq m P_1 + (m-1) P_2 + \cdots + P_m, & 1 < m \leq n \end{cases}$ (12)
where c m is the m-th city of the data sorted in descending order by distance from the current city z.
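Under our reading of the thresholds in Formulas (11) and (12), city $c_m$ is drawn with probability proportional to $P_1 + \cdots + P_m$, which is what makes nearby cities (late in the descending-sorted list) more likely while distant ones keep a non-zero chance. A sketch with illustrative names:

```python
import itertools
import random

def triangular_select(sorted_cities, rng):
    """Triangular probability selection (Formulas (10)-(12)).

    sorted_cities: neighbor cities sorted by distance from the current
    city in descending order.  With P_i = 2(n+1-i)/(n(n+1)) and the
    cumulative thresholds of Formulas (11)-(12), city c_m is selected
    with probability proportional to P_1 + ... + P_m.
    """
    n = len(sorted_cities)
    P = [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]
    weights = list(itertools.accumulate(P))      # P_1 + ... + P_m
    k = rng.uniform(0.0, sum(weights))
    upper = 0.0
    for m, w in enumerate(weights):
        upper += w
        if k <= upper:
            return sorted_cities[m]
    return sorted_cities[-1]                     # guard against rounding
```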

4. Discrete Group Teaching Optimization Algorithm Detail Design

In this section, the discrete group teaching optimization algorithm will be introduced to solve TSP. Meanwhile, a new greedy crossover algorithm, a middle student algorithm, a dynamic neighborhood shift mutation algorithm, a dynamic neighborhood inversion mutation algorithm, and a dynamic neighborhood 3-opt mutation algorithm will be described. Related details will be introduced in the following subsections.

4.1. Discrete Group Teaching Optimization Algorithm

For discrete optimization problems, the DGTOA is modified in two stages. In the first stage, the greedy principle [6] is used to generate the initial student sequences. In the later stage, students are divided into two groups by total path length, with the top 50 percent of students assigned to the excellent group and the rest to the normal group. The following process is shown in Figure 3.
In the excellent group, based on Section 3.1, the group focuses on improving the overall performance. The related design process for the DGTOA can be described as follows. (1) The middle student sequence is generated from the whole group of students’ sequences by the middle student algorithm. (2) Each student in the group is processed with the middle sequence using the new greedy crossover. (3) The dynamic neighborhood shift mutation algorithm, the dynamic neighborhood inversion mutation algorithm, and the dynamic neighborhood 3-opt mutation algorithm are used to improve the optimization results after greedy crossover. (4) Finally, the new sequences of the excellent group are output, as shown in Figure 3 in blue.
The normal group is designed as follows. (1) The shortest student sequence is selected from all students in the normal group. (2) Using the new greedy crossover, each student in the group is processed with this shortest path sequence. (3) The dynamic neighborhood shift mutation algorithm, the dynamic neighborhood inversion mutation algorithm, and the dynamic neighborhood 3-opt mutation algorithm are used to improve the optimization results after greedy crossover. (4) Finally, the new sequences of the normal group are output, as shown in Figure 3 in green.
Then, combine the excellent group and the normal group and determine whether the termination condition is met. If not, the next step is to return to the excellent group and continue the later stage; otherwise, the final result will be output from the combining group as the global DGTOA optimum value, as shown in Figure 3 in gray. The corresponding pseudo-code is shown in Algorithm 2.
Algorithm 2: Pseudo-code of DGTOA
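At a high level, the generation loop described above can be sketched as follows; `evolve_excellent` and `evolve_normal` are placeholder callbacks standing in for the greedy crossover plus the three neighborhood mutations of Sections 4.2 and 4.4–4.6.

```python
def dgtoa_skeleton(students, tour_len, evolve_excellent, evolve_normal, max_iter):
    """High-level DGTOA loop: rank students by tour length, split the best
    half into the excellent group and the rest into the normal group,
    evolve each group, recombine, and repeat until the iteration budget
    is exhausted."""
    for _ in range(max_iter):
        ranked = sorted(students, key=tour_len)
        half = len(ranked) // 2
        excellent, normal = ranked[:half], ranked[half:]
        students = evolve_excellent(excellent) + evolve_normal(normal)
    return min(students, key=tour_len)   # global DGTOA optimum
```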

4.2. New Greedy Crossover Algorithm

To begin with, two positions m and n are randomly selected from the students $X_i$ and $X_j$, and the distance $\lambda_{mn}$ between the two positions is calculated for each using Formula (14). The segment between positions m and n with the smaller length is then selected from $X_i$ and $X_j$ and replaces the segment of $X_i$ between positions m and n. Additionally, any repeated city between positions m and n is removed. Finally, the remaining cities are inserted into the new sequence according to the greedy rule, as follows:
$x_i^k = \begin{cases} x_i^{k+N}, & -N < k < 1 \\ x_i^k, & 1 \leq k \leq N \\ x_i^{k-N}, & N < k \leq 2N \end{cases}$ (13)
$\lambda_{mn} = \begin{cases} \sum_{k=m}^{n-1} d\left(x_i^k, x_i^{k+1}\right), & m < n \\ \sum_{k=n}^{m-1} d\left(x_i^k, x_i^{k+1}\right), & n < m \end{cases}$ (14)
$\min \left[\sum_{k=1}^{re-1} d\left(x_i^k, x_i^{k+1}\right) + \sum_{k=re}^{N_i^{re}-1} d\left(x_i^k, x_i^{k+1}\right)\right]$ (15)
where d ( x i m , x i n ) is the distance between cities x i m and x i n , the subscript denotes the student number, ranging from 1 to M, in which M is the total number of students. The superscript denotes the student’s corresponding sequence position number, ranging from 1 to N, in which N is the total number of cities. In addition, r e is the unvisited city position and N i r e is the current sequence length of student X i . For instance, positions 3 and 5 are randomly selected from the students X i and X j (as seen from Figure 4). Then the smaller distance (positions 3->7->1) is selected and the repeated city 1 is removed. Finally, the unvisited city 6 is added to the output sequence in a greedy rule.
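The steps above can be sketched as follows. This is a simplified reading of the crossover (the cyclic index extension of Formula (13) is omitted, and the greedy insertion simply appends the nearest unvisited city to the tour end), with illustrative names, not the authors' exact implementation:

```python
import math
import random

def greedy_crossover(Xi, Xj, cities, rng):
    """Sketch of the new greedy crossover (Section 4.2): pick random cut
    positions m < n, keep whichever parent's segment between them is
    shorter (Formula (14)), drop any repeated city, then greedily append
    each remaining city nearest to the current tour end."""
    N = len(Xi)
    d = lambda p, q: math.dist(cities[p], cities[q])
    m, n = sorted(rng.sample(range(N), 2))
    seg_len = lambda X: sum(d(X[k], X[k + 1]) for k in range(m, n))
    seg = Xi[m:n + 1] if seg_len(Xi) <= seg_len(Xj) else Xj[m:n + 1]
    child, seen = [], set()
    for c in seg:                    # keep the chosen segment, no repeats
        if c not in seen:
            child.append(c)
            seen.add(c)
    remaining = [c for c in Xi if c not in seen]
    while remaining:                 # greedy rule: nearest unvisited city
        nxt = min(remaining, key=lambda c: d(child[-1], c))
        child.append(nxt)
        seen.add(nxt)
        remaining.remove(nxt)
    return child
```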

4.3. Middle Student Algorithm

The sequence of the middle student is obtained from the most common cities across all student sequences; for N cities and M students, it can be established as in Formula (16).
$Center(X_1, X_2, \ldots, X_M) = \mathop{Most}_{i=1,2,\ldots,M}\left(x_i^1, x_i^2, \ldots, x_i^N\right)$ (16)
where x i N denotes the city corresponding to position N from the i-th student sequence. The sequence of a middle student is processed in position order. First, delete cities that have been visited, and then, based on the occurrence frequency of the remaining cities, select the city with the highest frequency to fill the corresponding position. If more than one city with the highest frequency is presented, randomly select one to fill this position. Additionally, if all the cities in this position are deleted, select cities at random from the rest of the sequence until the whole sequence is done.
As shown in Figure 5, for instance, in the first case, the cities with the highest frequency in positions 2, 3, 4, 5, and 7 are selected based on the statistics of the remaining cities' frequencies. In the second case, more than one city with the highest frequency is present in position 1, and thus city 2 is randomly chosen. In the third case, all cities in position 6 are deleted, so city 6 is randomly selected from the remaining cities to fill the sequence.
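The position-by-position procedure can be sketched as follows; the tie-breaking and the fallback for exhausted positions follow the description above, and the names are illustrative:

```python
import random
from collections import Counter

def middle_student(students, rng):
    """Consensus sequence of Formula (16): position by position, take the
    most frequent not-yet-used city; ties are broken at random, and if
    every candidate at a position is already used, a random unused city
    is chosen instead."""
    all_cities = set(students[0])
    used, middle = set(), []
    for pos in range(len(students[0])):
        counts = Counter(s[pos] for s in students if s[pos] not in used)
        if counts:
            top = max(counts.values())
            city = rng.choice(sorted(c for c, v in counts.items() if v == top))
        else:
            city = rng.choice(sorted(all_cities - used))
        middle.append(city)
        used.add(city)
    return middle
```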

4.4. Dynamic Neighborhood Shift Mutation Algorithm

A position m is randomly selected from the sequence of student $X_i$. A city is then selected by the triangular probability selection model from within the dynamic adaptive neighborhood radius $r_z$ of the city corresponding to position m; its position in the sequence is denoted n, with $x_i^n$ the corresponding city. The city at position n is then shifted next to position m. Finally, the changes in path distance from inserting it to the left or to the right of position m, $\Delta\lambda_1$ and $\Delta\lambda_2$, are calculated, respectively.
$\Delta\lambda_1 = \left(d\left(x_i^{m-1}, x_i^n\right) + d\left(x_i^m, x_i^n\right) + d\left(x_i^{n-1}, x_i^{n+1}\right)\right) - \left(d\left(x_i^{n-1}, x_i^n\right) + d\left(x_i^n, x_i^{n+1}\right) + d\left(x_i^{m-1}, x_i^m\right)\right)$
$\Delta\lambda_2 = \left(d\left(x_i^m, x_i^n\right) + d\left(x_i^{m+1}, x_i^n\right) + d\left(x_i^{n-1}, x_i^{n+1}\right)\right) - \left(d\left(x_i^{n-1}, x_i^n\right) + d\left(x_i^n, x_i^{n+1}\right) + d\left(x_i^m, x_i^{m+1}\right)\right)$ (17)
where $x_i^m$ and $x_i^n$ are the cities corresponding to positions m and n in the i-th student sequence, respectively. Accordingly, if $\Delta\lambda_1 \leq \Delta\lambda_2$, $x_i^n$ is moved between $x_i^{m-1}$ and $x_i^m$; otherwise, $x_i^n$ is moved between $x_i^m$ and $x_i^{m+1}$, as shown in Figure 6.
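Formula (17) only re-prices the six edges affected by the move, so it can be checked against a full recomputation of the tour length. The sketch below assumes m and n are interior, non-adjacent positions ($1 \leq m-1$, $m+1 < n-1$, $n+1 \leq N-1$), and the helper names are illustrative:

```python
import math

def shift_deltas(tour, cities, m, n):
    """Left/right shift gains of Formula (17): the change in tour length
    when the city at position n is moved just before (dl1) or just after
    (dl2) the city at position m."""
    d = lambda p, q: math.dist(cities[tour[p]], cities[tour[q]])
    dl1 = ((d(m - 1, n) + d(m, n) + d(n - 1, n + 1))
           - (d(n - 1, n) + d(n, n + 1) + d(m - 1, m)))
    dl2 = ((d(m, n) + d(m + 1, n) + d(n - 1, n + 1))
           - (d(n - 1, n) + d(n, n + 1) + d(m, m + 1)))
    return dl1, dl2

def apply_shift(tour, m, n, before=True):
    """Move the city at position n to just before (or after) position m."""
    t = list(tour)
    city = t.pop(n)                  # n > m, so position m is unaffected
    t.insert(m if before else m + 1, city)
    return t
```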

4.5. Dynamic Neighborhood Inversion Mutation Algorithm

In the sequence of student i, a position n is randomly selected. From the dynamic adaptive neighborhood radius of the city corresponding to position n, one city is selected and its position m is recorded. In addition, the positions m and n in the sequence are compared, and the smaller value is assigned to m i n , while the larger value is assigned to m a x , as determined by Formula (18), and the change in distance before and after the inversion mutation is calculated as Δ λ 3 , Δ λ 4 , Δ λ 5 , and Δ λ 6 :
$\begin{cases} x_i^{min} = x_i^m, \; x_i^{max} = x_i^n, & m < n \\ x_i^{min} = x_i^n, \; x_i^{max} = x_i^m, & n < m \end{cases}$ (18)
$\Delta\lambda_3 = \left(d\left(x_i^{max-1}, x_i^{min-1}\right) + d\left(x_i^{max}, x_i^{min}\right)\right) - \left(d\left(x_i^{max-1}, x_i^{max}\right) + d\left(x_i^{min-1}, x_i^{min}\right)\right)$
$\Delta\lambda_4 = \left(d\left(x_i^{max-1}, x_i^{min}\right) + d\left(x_i^{max}, x_i^{min+1}\right)\right) - \left(d\left(x_i^{max-1}, x_i^{max}\right) + d\left(x_i^{min}, x_i^{min+1}\right)\right)$
$\Delta\lambda_5 = \left(d\left(x_i^{max}, x_i^{min-1}\right) + d\left(x_i^{max+1}, x_i^{min}\right)\right) - \left(d\left(x_i^{max}, x_i^{max+1}\right) + d\left(x_i^{min-1}, x_i^{min}\right)\right)$
$\Delta\lambda_6 = \left(d\left(x_i^{max}, x_i^{min}\right) + d\left(x_i^{max+1}, x_i^{min+1}\right)\right) - \left(d\left(x_i^{max}, x_i^{max+1}\right) + d\left(x_i^{min}, x_i^{min+1}\right)\right)$ (19)
For instance, if $\Delta\lambda_3 < 0$, the order of $x_i^{min} \ldots x_i^{max-1}$ is reversed; if $\Delta\lambda_4 < 0$, the order of $x_i^{min+1} \ldots x_i^{max-1}$ is reversed; if $\Delta\lambda_5 < 0$, the order of $x_i^{min} \ldots x_i^{max}$ is reversed; and if $\Delta\lambda_6 < 0$, the order of $x_i^{min+1} \ldots x_i^{max}$ is reversed, as depicted in Figure 7a–d.
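Each $\Delta\lambda$ in Formula (19) re-prices only the two edges changed by the corresponding reversal, so it can be verified against a full recomputation. The sketch below implements the $\Delta\lambda_3$ move (reversing positions min … max − 1) under the assumption that the positions are interior; names are illustrative:

```python
import math

def inversion_delta(tour, cities, lo, hi):
    """Length change of the Delta-lambda-3 move of Formula (19): reversing
    the segment of `tour` between positions lo and hi - 1 replaces the
    edges (lo-1, lo) and (hi-1, hi) with (lo-1, hi-1) and (lo, hi)."""
    d = lambda p, q: math.dist(cities[tour[p]], cities[tour[q]])
    return (d(hi - 1, lo - 1) + d(hi, lo)) - (d(hi - 1, hi) + d(lo - 1, lo))

def apply_inversion(tour, lo, hi):
    """Reverse the segment tour[lo:hi]."""
    return tour[:lo] + tour[lo:hi][::-1] + tour[hi:]
```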

4.6. Dynamic Neighborhood 3-opt Mutation Algorithm

The 3-opt algorithm has a strong local search capability. However, directly processing all cities would take a long computational time, and that time grows as the number of cities increases. Therefore, we present a method combining the dynamic adaptive neighborhood radius with 3-opt to maximize its ability to find locally optimal solutions: a position n is selected randomly in the sequence of student $X_i$, and the 3-opt algorithm then determines the next city within the dynamic adaptive neighborhood radius of the cities involved.

5. Results and Discussions

The DGTOA with dynamic adaptive neighborhood optimization is tested using 15 benchmark TSP cases taken from TSPLIB [22]. Most of the instances in TSPLIB have been solved, and the optimum values are displayed. The numbers in the problem names indicate the city numbers (e.g., the eil51 benchmark problem has 51 cities). Testing is performed using fifteen benchmark problems, which are divided into three categories based on city numbers: small-scale, medium-scale, and large-scale. A case with fewer than 100 cities is considered a small-scale benchmark problem; a case with more than 100 but fewer than 200 cities is a medium-scale benchmark problem; and a case with more than 200 but fewer than 300 cities is a large-scale benchmark problem. Each of the experiments in this section is carried out 25 times independently, with the best results, mean results, and standard deviation (Std Dev) values produced by the algorithm recorded; the best optimum results are written in bold font in the result tables. The relative error (RE) is calculated as follows:
$RE = \frac{R - O}{O} \times 100\%$ (20)
where R is the obtained length (mean of 25 repeats) by the DGTOA, and O is the optimum value of the problem. The optimum problems and their values are given in Table 1 [18,42]. All experiments are carried out using a Windows 11 Professional Insider Preview laptop with an Intel (R) Core (TM) i7-7700HQ 2.8 GHz processor and 16 GB of RAM, with the scripts being written in MATLAB 2021a. The following is a series of experiments in which the maximum number of iterations is 1000 and the number of students is 100.

5.1. Experiment 1: Comparisons with Random Initialization, Neighborhood Initialization, and Greedy Initialization

The experiment uses fifteen benchmark problems to evaluate the efficacy of random initialization, neighborhood initialization, and greedy initialization to solve TSP. The obtained results are shown in Table 2.
According to the bold part in Table 2, in terms of mean and RE, the optimization solutions produced by neighborhood initialization and greedy initialization have considerable advantages over random initialization. On the other hand, comparing neighborhood initialization with greedy initialization, it is evident from Table 2 that greedy initialization has a slight advantage in 11 instances, whereas neighborhood initialization performs slightly better on eil101 and pr152. For further analysis of the three initialization methods' iterative processes, the convergence RE plots of the medium-scale eil101 and large-scale tsp225 benchmark problems are given in Figure 8 and Figure 9, which compare the convergence processes of random initialization, neighborhood initialization, and greedy initialization.
According to Figure 8, the neighborhood initialization and greedy initialization show a considerable advantage in the initial solution over random initialization. For random initialization, it takes 500 generations to reach an RE of less than 3%, but for neighborhood initialization and greedy initialization, they take only 100 generations to satisfy convergence. Compared to neighborhood initialization, greedy initialization can achieve an RE of less than 1% within 200 generations, whereas neighborhood initialization takes 600 generations to achieve an RE of less than 1%.
As seen in Figure 9, compared to neighborhood initialization and greedy initialization, the random initialization method shows a clear gap in both the initial and the final optimization results, and converges to an RE of less than 5% considerably later. Moreover, the RE of greedy initialization falls below 3% faster than that of neighborhood initialization. Hence, the DGTOA uses the greedy rule in the initialization phase.

5.2. Experiment 2: Comparisons with Adaptive Neighborhood Mutation and Dynamic Adaptive Neighborhood Mutation

To compare the adaptive neighborhood radius with the dynamic adaptive neighborhood radius during the mutation phase, the adaptive neighborhood radius fixes $iter$ at a value of 500 in Formula (9). Fifteen benchmark problems are used to evaluate the effectiveness of the two neighborhood radius methods for solving the TSP, with all parameters except $iter$ set the same each time. The results are shown in Table 3.
As seen from Table 3, the dynamic adaptive neighborhood mutation holds a clear advantage over the adaptive neighborhood mutation in terms of mean and RE values, winning on twelve of the fifteen benchmark problems and losing only slightly on eil76 and ch150. Moreover, box plots of 25 runs on the eil76 and ch150 benchmark problems are shown for both mutation schemes so that the results can be analyzed in full.
From Figure 10, the results of the dynamic adaptive neighborhood mutation are more concentrated, whereas the adaptive neighborhood mutation is less stable; on ch150 in Figure 10 its average optimal value is smaller than that of the dynamic adaptive neighborhood mutation, yet its median, upper quartile, and upper edge are all larger. Therefore, the dynamic adaptive neighborhood mutation is preferred for the mutation phase due to its stability and efficiency.
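The two radius schemes compared here can be sketched as follows. This is a hedged illustration: the linear decay stands in for the paper's Formula (9), and the mode of the triangular draw is an assumption; only the qualitative behavior, a shrinking neighborhood sampled with a bias toward near offsets, is taken from the text:

```python
import random

def dynamic_radius(t, t_max, r_max):
    # Assumed form: the neighborhood shrinks linearly from r_max to 1
    # as iterations progress, trading exploration for exploitation.
    # The paper's Formula (9) gives the actual definition.
    return max(1, round(r_max * (1 - t / t_max)))

def triangular_offset(radius):
    # Triangular probability selection: neighbor offsets close to the
    # current position are drawn with higher probability than offsets
    # near the edge of the neighborhood (mode placed at 1, an assumption).
    return max(1, round(random.triangular(1, radius, 1)))
```

The fixed adaptive scheme corresponds to evaluating `dynamic_radius` at a constant iteration count (iter = 500), whereas the dynamic scheme re-evaluates it every generation.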

5.3. Experiment 3: Comparisons with the DJAYA, DTSA, ABC, PSO-ACO, and DSFLA

The DJAYA [18], DTSA [6], ABC [7], PSO-ACO [23], and DSFLA [24] are compared with the DGTOA, and the RE values are reported in Table 4. The results for the DJAYA, DTSA, ABC, PSO-ACO, and DSFLA are taken directly from the corresponding papers.
As shown in Table 4, the DGTOA is highly competitive, obtaining optimal solutions for 10 of the 15 benchmark problems, i.e., 66.67% of all test cases. In terms of solution quality, the DGTOA is clearly superior to the DJAYA, DTSA, and ABC, and it outperforms PSO-ACO on 6 of 8 (75%) and the DSFLA on 7 of 10 (70%) of the shared benchmark problems (the DSFLA ties the DGTOA's RE value on berlin52 and st70). The test results show that the ABC, PSO-ACO, DSFLA, and DGTOA all perform well at small scale, but the performance of the ABC degrades significantly as the scale grows; on the large-scale tsp225 problem, its RE exceeds 5%. PSO-ACO, the DSFLA, and the DGTOA perform similarly at small and medium scales, whereas on the large-scale kroa200 problem the DGTOA holds a significant advantage over PSO-ACO and the DSFLA. Therefore, the DGTOA delivers competitive performance across the fifteen benchmark problems.
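For reference, the RE values in Tables 2–4 follow the standard relative-error definition with respect to the known TSPLIB optimum:

```python
def relative_error(found, optimum):
    """RE (%): percentage gap between the tour length obtained by an
    algorithm and the known optimum tour length."""
    return 100.0 * (found - optimum) / optimum
```

For example, the DGTOA's mean tour length of 29,606.35 on kroa200 against the optimum of 29,460 gives an RE of roughly 0.5%, matching Table 4.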

5.4. Experiment 4: Case Study with USV Path Planning

To evaluate the effectiveness of the designed DGTOA in the context of USV path planning, the USV model published in [43] is used to verify the algorithm's performance in MATLAB. The control algorithm is derived from the line-of-sight guidance laws described in [44,45]. For path planning, 25 and 50 target waypoints are randomly generated and entered into the DGTOA. Finally, the optimized paths are provided to the simulated USV for tracking-control experiments. The results are shown in Figure 11a,b, where the blue stars represent the waypoints and the solid red line indicates the USV tracking trajectory at a speed of 1 m/s. The generated paths are of satisfactory length and contain no crossings, significantly reducing the time and energy required by the USV. Furthermore, the convergence RE plots for the 25 and 50 target waypoints are shown in Figure 12. With 25 target waypoints, the DGTOA converges to the optimum after only three generations; with 50 target waypoints, it converges after only eight generations. In the two cases, the DGTOA reaches the optimal solution in 0.47 s and 0.58 s, respectively. In general, the DGTOA performs well in terms of both solution quality and convergence speed.
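The tracking stage can be sketched with a simplified line-of-sight waypoint-following loop. This kinematic sketch is illustrative only: the actual controller follows the guidance laws of [44,45] with full vehicle dynamics, and the acceptance radius and time step used here are assumptions:

```python
import math

def los_step(pos, waypoints, idx, accept_radius=2.0):
    """Steer toward the active waypoint; advance to the next waypoint
    once inside the acceptance circle (simplified LOS guidance)."""
    x, y = pos
    wx, wy = waypoints[idx]
    if math.hypot(wx - x, wy - y) < accept_radius and idx < len(waypoints) - 1:
        idx += 1
        wx, wy = waypoints[idx]
    psi_d = math.atan2(wy - y, wx - x)  # desired course angle
    return psi_d, idx

def simulate(waypoints, steps=400, speed=1.0, dt=0.1):
    """Kinematic USV moving at a constant 1 m/s, as in the simulation."""
    pos = list(waypoints[0])
    idx = 1
    for _ in range(steps):
        psi, idx = los_step(pos, waypoints, idx)
        pos[0] += speed * dt * math.cos(psi)
        pos[1] += speed * dt * math.sin(psi)
    return pos
```

In the experiments, the waypoint sequence fed to such a loop is the tour produced by the DGTOA, which is what keeps the tracked path free of crossings.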

6. Conclusions

To efficiently solve large-scale waypoint route planning problems for USVs, a novel DGTOA is proposed. The DGTOA introduces a dynamic adaptive neighborhood radius strategy to balance exploration and exploitation. In the initialization phase, the DGTOA generates the initial student sequences using greedy initialization to accelerate convergence. In the crossover phase, a new greedy crossover method is applied: students in the normal group cross with the shortest student sequence, while students in the excellent group cross with the middle student sequence. In the mutation phase, the dynamic neighborhood shift mutation, the dynamic neighborhood inversion mutation, and the dynamic neighborhood 3-opt mutation all use the dynamic adaptive neighborhood radius based on triangular probability selection to increase diversity.
To verify the effectiveness of the DGTOA, fifteen benchmark problems from TSPLIB are used for testing. The effects of random initialization, neighborhood initialization, and greedy initialization on the DGTOA are also examined: in terms of both solution quality and convergence speed, greedy initialization holds an advantage over random and neighborhood initialization. Moreover, the dynamic adaptive neighborhood mutation shows promising performance relative to the adaptive neighborhood mutation in terms of mean and RE values. In comparison with the DJAYA, DTSA, ABC, PSO-ACO, and DSFLA on the fifteen TSPLIB benchmark problems, the DGTOA is clearly superior to the DJAYA, DTSA, and ABC, and outperforms PSO-ACO and the DSFLA on 75% and 70% of the shared cases, respectively. Furthermore, the DGTOA has been successfully applied to path planning for a USV, and the results indicate that it performs well in terms of solution quality and convergence speed. Therefore, the proposed DGTOA offers a competitive option for path planning for USVs.
Nevertheless, this study also has some limitations. Firstly, the computation time and solution quality of the algorithm are not yet optimal, especially as the problem scale increases. Secondly, the current DGTOA plans a route for a single vehicle; in future work, it will be extended to route planning for multiple unmanned surface vehicles.

Author Contributions

Conceptualization, S.Y. and X.X.; methodology, S.Y.; software, J.H.; formal analysis, S.Y. and J.H.; resources, S.Y.; writing—original draft preparation, J.H. and W.L.; writing—review and editing, S.Y., J.H. and X.X.; supervision, S.Y. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology on Ship Integrated Power System Technology Laboratory (Grant 614221720200203); National Natural Science Foundation of China (Grant 52071153); Fundamental Research Funds for the Central Universities, China (Grant 2018KFYYXJJ015).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request due to restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
USV    Unmanned surface vehicle
DGTOA    Discrete group teaching optimization algorithm
GTOA    Group teaching optimization algorithm
DTSA    Discrete tree seed algorithm
TSA    Tree seed algorithm
DJAYA    Discrete Jaya algorithm
JAYA    Jaya algorithm
ABC    Artificial bee colony
PSO-ACO    Particle swarm optimization-ant colony optimization
DSFLA    Discrete shuffled frog-leaping algorithm
GA    Genetic algorithm
SA    Simulated annealing
SCX    Sequential constructive crossover operator
CX2    Cycle crossover operator
EAX    Edge assembly crossover
GSTM    Greedy sub-tour mutation

References

  1. Nantogma, S.; Pan, K.; Song, W.; Luo, W.; Xu, Y. Towards Realizing Intelligent Coordinated Controllers for Multi-USV Systems Using Abstract Training Environments. J. Mar. Sci. Eng. 2021, 9, 560. [Google Scholar] [CrossRef]
  2. Xin, J.F.; Zhong, J.B.; Li, S.X.; Sheng, J.L.; Cui, Y. Greedy mechanism based particle swarm optimization for path planning problem of an unmanned surface vehicle. Sensors 2019, 19, 4620. [Google Scholar] [CrossRef]
  3. Wang, Z.; Yang, S.Y.; Xiang, X.B.; Antonio, V.; Nikola, M.; Ðula, N. Cloud-based mission control of USV fleet: Architecture, implementation and experiments. Control. Eng. Pract. 2021, 106, 104657. [Google Scholar] [CrossRef]
  4. Fan, J.; Li, Y.; Liao, Y.; Jiang, W.; Wang, L.F.; Jia, Q.; Wu, H.W. Second path planning for unmanned surface vehicle considering the constraint of motion performance. J. Mar. Sci. Eng. 2019, 7, 104. [Google Scholar] [CrossRef]
  5. Ege, E.; Ankarali, M.M. Feedback Motion Planning of Unmanned Surface Vehicles via Random Sequential Composition. Trans. Inst. Meas. Control. 2019, 41, 3321–3330. [Google Scholar] [CrossRef]
  6. Cinar, A.C.; Korkmaz, S.; Kiran, M.S. A discrete tree-seed algorithm for solving symmetric traveling salesman problem. Eng. Sci. Technol. Int. 2020, 23, 879–890. [Google Scholar] [CrossRef]
  7. Kıran, M.S.; İşcan, H.; Gündüz, M. The analysis of discrete artificial bee colony algorithm with neighborhood operator on traveling salesman problem. Neural Comput. 2013, 23, 9–21. [Google Scholar] [CrossRef]
  8. Ma, J.; Yang, T.; Hou, Z.-G.; Tan, M.; Liu, D. Neurodynamic programming: A case study of the traveling salesman problem. Neural Comput. 2008, 17, 347–355. [Google Scholar] [CrossRef]
  9. Matai, R.; Singh, S.P.; Mittal, M.L. Traveling Salesman Problem: An Overview of Applications, Formulations, and Solution Approaches. In Traveling Salesman Problem, Theory and Applications; InTech: Rijeka, Croatia, 2010. [Google Scholar]
  10. Pasandideh, S.H.R.; Niaki, S.T.A.; Gharaei, A. Optimization of a multiproduct economic production quantity problem with stochastic constraints using sequential quadratic programming. Knowl.-Based Syst. 2015, 84, 98–107. [Google Scholar] [CrossRef]
  11. Klerk, E.D.; Dobre, C. A comparison of lower bounds for the symmetric circulant traveling salesman problem. Discrete Appl. Math. 2011, 159, 1815–1826. [Google Scholar] [CrossRef]
  12. Chiang, C.-W.; Lee, W.-P.; Heh, J.-S. A 2-Opt based differential evolution for global optimization. Appl. Soft Comput. 2010, 10, 1200–1207. [Google Scholar] [CrossRef]
  13. Gulcu, S.; Mahi, M.; Baykan, O.; Kodaz, H. A parallel cooperative hybrid method based on ant colony optimization and 3-Opt algorithm for solving traveling salesman problem. Soft Comput. Fusion Found. Methodol. Appl. 2018, 22, 1669–1685. [Google Scholar]
  14. Yang, Z.; Li, J.; Li, L. Time-Dependent Theme Park Routing Problem by Partheno-Genetic Algorithm. Mathematics 2020, 8, 2193. [Google Scholar] [CrossRef]
  15. Chao, Z.X. Simulated annealing algorithm with adaptive neighborhood. Appl. Soft Comput. 2011, 11, 1827–1836. [Google Scholar]
  16. Khan, I.; Maiti, M.K. A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem. Swarm Evol. Comput. 2019, 44, 428–438. [Google Scholar] [CrossRef]
  17. Li, S.; Wei, Y.; Liu, X.; Zhu, H.; Yu, Z. A New Fast Ant Colony Optimization Algorithm: The Saltatory Evolution Ant Colony Optimization Algorithm. Mathematics 2022, 10, 925. [Google Scholar] [CrossRef]
  18. Gunduz, M.; Aslan, M. DJAYA: A discrete Jaya algorithm for solving traveling salesman problem. Appl. Soft Comput. 2021, 105, 107275. [Google Scholar] [CrossRef]
  19. Thanh, P.D.; Binh, H.T.T.; Trung, T.B. An efficient strategy for using multifactorial optimization to solve the clustered shortest path tree problem. Appl. Intell. 2020, 50, 1233–1258. [Google Scholar] [CrossRef]
  20. Zhang, H.; Cai, Z.; Ye, X.; Wang, M.; Kuang, F.; Chen, H.; Li, C.; Li, Y. A multi-strategy enhanced salp swarm algorithm for global optimization. Eng. Comput. 2022, 38, 1177–1203. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
  22. Reinelt, G. TSPLIB—A Traveling Salesman Problem Library. Inf. J. Comput. 1991, 3, 376–384. [Google Scholar] [CrossRef]
  23. Mahi, M.; Baykan, Ö.K.; Kodaz, H. A new hybrid method based on Particle Swarm Optimization, Ant Colony Optimization and 3-Opt algorithms for Traveling Salesman Problem. Appl. Soft Comput. 2015, 30, 484–490. [Google Scholar] [CrossRef]
  24. Huang, Y.; Shen, X.-N.; You, X. A discrete shuffled frog-leaping algorithm based on heuristic information for traveling salesman problem. Appl. Soft Comput. 2021, 102, 107085. [Google Scholar] [CrossRef]
  25. Li, W.; Wang, G.-G. Improved elephant herding optimization using opposition-based learning and K-means clustering to solve numerical optimization problems. J. Ambient Intell. Humaniz. Comput. 2021, 1–32. [Google Scholar] [CrossRef]
  26. Bellmore, M.; Nemhauser, G.L. The Traveling Salesman Problem: A Survey. Oper. Res. 1968, 16, 538–558. [Google Scholar] [CrossRef]
  27. Wang, L.; Lu, J. A memetic algorithm with competition for the capacitated green vehicle routing problem. IEEECAA J. Autom. Sin. 2019, 6, 516–526. [Google Scholar] [CrossRef]
  28. Wu, C.; Fu, X. An agglomerative greedy brain storm optimization algorithm for solving the tsp. IEEE Access 2020, 8, 201606–201621. [Google Scholar] [CrossRef]
  29. Guo, P.; Hou, M.; Ye, L. MEATSP: A membrane evolutionary algorithm for solving TSP. IEEE Access 2020, 8, 199081–199096. [Google Scholar] [CrossRef]
  30. İlhan, İ.; Gökmen, G. A list-based simulated annealing algorithm with crossover operator for the traveling salesman problem. Neural Comput. Appl. 2022, 34, 7627–7652. [Google Scholar] [CrossRef]
  31. Ahmed, Z. Genetic Algorithm for the Traveling Salesman Problem using Sequential Constructive Crossover Operator. Int. J. Biom. Bioinform. 2010, 3, 96. [Google Scholar]
  32. Nagata, Y.; Kobayashi, S. A Powerful Genetic Algorithm Using Edge Assembly Crossover for the Traveling Salesman Problem. Inf. J. Comput. 2013, 25, 346–363. [Google Scholar] [CrossRef]
  33. Albayrak, M.; Allahverdi, N. Development a new mutation operator to solve the Traveling Salesman Problem by aid of Genetic Algorithms. Expert Syst. Appl. 2011, 38, 1313–1320. [Google Scholar] [CrossRef]
  34. Anantathanavit, M.; Munlin, M. Using K-means Radius Particle Swarm Optimization for the Travelling Salesman Problem. IETE Tech. Rev. 2016, 33, 172–180. [Google Scholar] [CrossRef]
  35. Gupta, R.; Nanda, S.J. Solving time varying many-objective TSP with dynamic θ-NSGA-III algorithm. Appl. Soft Comput. 2022, 118, 108493. [Google Scholar] [CrossRef]
  36. Lyridis, D.V. An improved ant colony optimization algorithm for unmanned surface vehicle local path planning with multi-modality constraints. Ocean Eng. 2021, 241, 109890. [Google Scholar] [CrossRef]
  37. Liu, Y.C.; Bucknall, R. Efficient multi-task allocation and path planning for unmanned surface vehicle in support of ocean operations. Neurocomputing 2018, 275, 1550–1566. [Google Scholar] [CrossRef]
  38. Park, J.; Kim, S.; Noh, G.; Kim, H.; Lee, D.; Lee, I. Mission planning and performance verification of an unmanned surface vehicle using a genetic algorithm. Int. J. Nav. Archit. Ocean Eng. 2021, 13, 575–584. [Google Scholar] [CrossRef]
  39. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 1, 1–15. [Google Scholar] [CrossRef]
  40. Rokbani, N.; Kumar, R.; Abraham, A.; Alimi, A.M.; Long, H.V.; Priyadarshini, I.; Son, L.H. Bi-heuristic ant colony optimization-based approaches for traveling salesman problem. Soft Comput. 2021, 25, 3775–3794. [Google Scholar] [CrossRef]
  41. Khanouche, M.E.; Mouloudj, S.; Hammoum, M. Two-steps qos-aware services composition algorithm for internet of things. In Proceedings of the 3rd International Conference on Future Networks and Distributed Systems, Paris, France, 1–2 July 2019; pp. 1–6. [Google Scholar]
  42. Du, P.; Liu, N.; Zhang, H.; Lu, J. An Improved Ant Colony Optimization Based on an Adaptive Heuristic Factor for the Traveling Salesman Problem. J. Adv. Transp. 2021, 2021, 6642009. [Google Scholar] [CrossRef]
  43. Do, K.D.; Pan, J. Robust path-following of underactuated ships: Theory and experiments on a model ship. Ocean Eng. 2006, 33, 1354–1372. [Google Scholar] [CrossRef]
  44. Yu, C.Y.; Xiang, X.B.; Philip, A.W.; Zhang, Q. Guidance-error-based Robust Fuzzy Adaptive Control for Bottom Following of a Flight-style AUV with Saturated Actuator Dynamics. IEEE Trans. Cybern. 2020, 50, 1887–1899. [Google Scholar] [CrossRef] [PubMed]
  45. Yu, C.Y.; Liu, C.H.; Lian, L.; Xiang, X.B.; Zeng, Z. ELOS-based path following control for underactuated surface vehicles with actuator dynamics. Ocean Eng. 2019, 187, 106139. [Google Scholar] [CrossRef]
Figure 1. A USV conducting the environmental monitoring mission.
Figure 2. Flowchart of group teaching optimization algorithm.
Figure 3. Flowchart of discrete group teaching optimization algorithm.
Figure 4. New greedy crossover algorithm.
Figure 5. Middle student algorithm.
Figure 6. Dynamic neighborhood shift mutation.
Figure 7. Dynamic neighborhood inversion mutation. (a) Reverse the order from x_{imin} to x_{imax-1}. (b) Reverse the order from x_{imin+1} to x_{imax-1}. (c) Reverse the order from x_{imin} to x_{imax}. (d) Reverse the order from x_{imin+1} to x_{imax}.
Figure 8. RE curves with random initialization, neighborhood initialization, and greedy initialization for different iteration periods based on eil101.
Figure 9. RE curves with random initialization, neighborhood initialization, and greedy initialization for different iteration periods based on tsp225.
Figure 10. Box plot for the eil76 and ch150 benchmark problems with dynamic adaptive neighborhood mutation and adaptive neighborhood mutation.
Figure 11. Simulated USV path tracking results at 25 and 50 waypoints. (a) Simulated USV path tracking results at 25 waypoints. (b) Simulated USV path tracking results at 50 waypoints.
Figure 12. RE curves using the DGTOA for 25 and 50 waypoints.
Table 1. Number of cities and optimum tour lengths of the problems.
Problem | Number of Cities | Optimum Tour Length
eil51 | 51 | 428.87
berlin52 | 52 | 7544.37
st70 | 70 | 677.11
pr76 | 76 | 108,159.44
eil76 | 76 | 545.38
kroa100 | 100 | 21,285.44
krob100 | 100 | 22,141
kroc100 | 100 | 20,749
krod100 | 100 | 21,294
kroe100 | 100 | 22,068
eil101 | 101 | 642.31
ch150 | 150 | 6532.1
pr152 | 152 | 73,683.6
kroa200 | 200 | 29,460
tsp225 | 225 | 3859
Table 2. Comparisons with random initialization, neighborhood initialization, and greedy initialization on fifteen benchmark problems.
Problems | Random Initialization (Mean / Std Dev / RE %) | Neighborhood Initialization (Mean / Std Dev / RE %) | Greedy Initialization (Mean / Std Dev / RE %)
eil51 | 430.35 / 1.29 / 0.34 | 429.92 / 0.92 / 0.24 | 429.78 / 0.99 / 0.21
berlin52 | 7544.37 / 0 / 0 | 7544.37 / 0 / 0 | 7544.37 / 0 / 0
st70 | 681.3 / 3.38 / 0.62 | 677.11 / 0.02 / 0 | 677.11 / 0.02 / 0
pr76 | 110,499 / 1214.54 / 2.16 | 108,344 / 331.36 / 0.17 | 108,298.6 / 287.72 / 0.13
eil76 | 554.09 / 2.61 / 1.6 | 549.84 / 1.81 / 0.82 | 549.44 / 1.81 / 0.74
kroa100 | 21,466.9 / 97.44 / 0.85 | 21,285.8 / 1.75 / 0 | 21,286.16 / 2.43 / 0
krob100 | 22,410 / 130.91 / 1.22 | 22,235.4 / 32.65 / 0.43 | 22,216.7 / 48.05 / 0.34
kroc100 | 20,974.1 / 55.93 / 1.08 | 20,809 / 44.61 / 0.29 | 20,778.18 / 37.53 / 0.14
krod100 | 21,621.9 / 143.5 / 1.54 | 21,485.2 / 64.79 / 0.9 | 21,456.83 / 55.58 / 0.76
kroe100 | 22,330.1 / 97.78 / 1.19 | 22,169.9 / 48.77 / 0.46 | 22,152.85 / 33.32 / 0.38
eil101 | 651.89 / 2.88 / 1.49 | 647.08 / 2.92 / 0.74 | 646.75 / 2.62 / 0.69
ch150 | 6763.4 / 60.96 / 3.54 | 6554.5 / 55.28 / 0.34 | 6554.4 / 3.93 / 0.34
pr152 | 74,968.6 / 386.56 / 1.74 | 74,356.1 / 175.35 / 0.91 | 74,408.66 / 225.48 / 0.98
kroa200 | 30,995 / 213.99 / 5.21 | 29,631.1 / 54.81 / 0.58 | 29,606.35 / 70.2 / 0.5
tsp225 | 4018.67 / 25.94 / 4.14 | 3940.41 / 19.73 / 2.11 | 3938.95 / 16.76 / 2.07
Table 3. Comparisons with adaptive neighborhood mutation and dynamic adaptive neighborhood mutation.
Problems | Adaptive Neighborhood Mutation (Mean / Std Dev / RE % / Best) | Dynamic Adaptive Neighborhood Mutation (Mean / Std Dev / RE % / Best)
eil51 | 429.49 / 0.75 / 0.15 / 428.98 | 429.78 / 0.99 / 0.21 / 428.87
berlin52 | 7544.37 / 0 / 0 / 7544.37 | 7544.37 / 0 / 0 / 7544.37
st70 | 677.12 / 0.02 / 0 / 677.11 | 677.11 / 0.02 / 0 / 677.11
pr76 | 108,542 / 447.33 / 0.35 / 108,159 | 108,298.6 / 287.72 / 0.13 / 108,159.4
eil76 | 549.33 / 2.01 / 0.72 / 545.39 | 549.44 / 1.81 / 0.74 / 545.97
kroa100 | 21,286.9 / 7.12 / 0.01 / 21,285.4 | 21,286.16 / 2.43 / 0 / 21,285.4
krob100 | 22,251.9 / 37.42 / 0.5 / 22,178.6 | 22,216.7 / 48.05 / 0.34 / 22,139.07
kroc100 | 20,824.7 / 55.15 / 0.36 / 20,750.8 | 20,778.18 / 37.53 / 0.14 / 20,750.76
krod100 | 21,463 / 52.66 / 0.79 / 21,345.5 | 21,456.83 / 55.58 / 0.76 / 21,323.48
kroe100 | 22,157.1 / 34.25 / 0.4 / 22,119.9 | 22,152.85 / 33.32 / 0.38 / 22,115.61
eil101 | 647.4 / 3.11 / 0.79 / 641.88 | 646.75 / 2.62 / 0.69 / 641.23
ch150 | 6552.5 / 8.15 / 0.31 / 6530.9 | 6554.4 / 3.93 / 0.34 / 6545.16
pr152 | 74,432.9 / 250.23 / 1.02 / 74,128.6 | 74,408.66 / 225.48 / 0.98 / 73,936.17
kroa200 | 29,667.3 / 61.81 / 0.7 / 29,560 | 29,606.35 / 70.2 / 0.5 / 29,405.72
tsp225 | 3950.47 / 18.52 / 2.37 / 3911.3 | 3938.95 / 16.76 / 2.07 / 3912.3
Table 4. Comparisons with the DJAYA, DTSA, ABC, PSO-ACO, and DSFLA.
SL | Problem | RE (%): DJAYA | DTSA | ABC | PSO-ACO | DSFLA | DGTOA
1 | eil51 | 2.64 | 3.51 | 0.28 | 0.11 | 0.1 | 0.21
2 | berlin52 | 0.48 | 0.02 | 0 | 0.02 | 0 | 0
3 | st70 | 3.72 | 4.66 | 0.66 | 0.47 | 0 | 0
4 | pr76 | 4.71 | 6.26 | 0.43 | - | - | 0.13
5 | eil76 | 5.1 | 6.09 | - | 0.06 | 0.2 | 0.74
6 | kroa100 | 1.99 | 1.06 | 0.98 | 0.77 | 0.14 | 0
7 | krob100 | 3.76 | 4.51 | - | - | - | 0.34
8 | kroc100 | 4.59 | 5.15 | - | - | 0.11 | 0.14
9 | krod100 | 6.28 | 7.88 | - | - | - | 0.76
10 | kroe100 | 2.33 | 1.83 | - | - | - | 0.38
11 | eil101 | 5.46 | 7.41 | 2.9 | 0.59 | 0.62 | 0.69
12 | ch150 | 1.63 | 3.32 | - | 0.55 | 0.53 | 0.34
13 | pr152 | - | - | - | - | 0.39 | 0.98
14 | kroa200 | - | - | - | 0.95 | 1.03 | 0.5
15 | tsp225 | 6.12 | 9.93 | 6.83 | - | - | 2.07
Optimal number/ratio (%) | | 0/0 | 0/0 | 1/14.29 | 2/25 | 5/66.67 | 10/66.67
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Yang, S.; Huang, J.; Li, W.; Xiang, X. A Novel Discrete Group Teaching Optimization Algorithm for TSP Path Planning with Unmanned Surface Vehicles. J. Mar. Sci. Eng. 2022, 10, 1305. https://doi.org/10.3390/jmse10091305

