Article

A Multigoal Path-Planning Approach for Explosive Ordnance Disposal Robots Based on Bidirectional Dynamic Weighted-A* and Learn Memory-Swap Sequence PSO Algorithm

1
School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
2
School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
3
School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454003, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(5), 1052; https://doi.org/10.3390/sym15051052
Submission received: 26 March 2023 / Revised: 4 May 2023 / Accepted: 7 May 2023 / Published: 9 May 2023

Abstract

In order to protect people’s lives and property, increasing numbers of explosive disposal robots have been developed. It is necessary for an explosive ordnance disposal (EOD) robot to quickly detect all explosives, especially when the location of the explosives is unknown. To achieve this goal, we propose a bidirectional dynamic weighted-A star (BD-A*) algorithm and a learn memory-swap sequence particle swarm optimization (LM-SSPSO) algorithm. Firstly, in the BD-A* algorithm, our aim is to obtain the shortest-distance path between any two goal positions while optimizing the computation time. We reduce the computation time by introducing a bidirectional search and a dynamic OpenList cost weight strategy. Secondly, the search-adjacent nodes are extended to obtain a shorter path. Thirdly, using the LM-SSPSO algorithm, we plan the shortest-distance path that traverses all goal positions; this problem is similar to the symmetric traveling salesman problem (TSP). We introduce the swap sequence strategy into the traditional PSO and optimize the whole PSO process by imitating human learning and memory behaviors. Fourthly, to verify the performance of the proposed algorithm, we compare the improved A* with the traditional A* over different resolutions, weight coefficients, and numbers of nodes. The hybrid PSO algorithm is also compared with other intelligent algorithms. Finally, different environment maps are discussed to further verify the performance of the algorithm. The simulation results demonstrate that our improved A* algorithm finds the shortest distance with less computational time. In the simulation results for LM-SSPSO, the convergence rate improves significantly, and the improved algorithm is more likely to obtain the optimal path.

1. Introduction

Global terrorist activities are on the rise, and it is easy for extremist actions to proliferate. Terrorism is a serious threat to the civilized world. To quickly and efficiently deal with explosives, various EOD equipment has been developed, such as the Andros series EOD robot [1], RMI-9WT [2], Packbot-EOD [3], etc. EOD robots can replace people in carrying out extremely hazardous tasks such as detecting, inspecting, grabbing, and transporting suspicious explosives in a dangerous environment. Many countries have developed different kinds of EOD robots. For example, the British Morfax corporation has invented an EOD robot called Wheelbarrow [4]. It is one of the most famous EOD robots in the world, possessing good vehicle mobility and explosive disposal performance. Meanwhile, American Remotec has developed an EOD robot called Andros F6A [5], which is used to help the police complete security tasks. It consists of a mobile carrier and a manipulator. The carrier is an articulated walking chassis that helps the robot adapt to complex terrain such as sloping, sandy, or rugged surfaces. The Chinese Shenyang Institute of Automation has invented the Lingxi-B EOD robot [6], which has three cameras; EOD personnel can accurately control the robot over long distances with cabled or cableless operations.
The path planning problem has attracted many researchers’ attention, and it is currently one of the biggest hotspots in the EOD robot control area. The main goal of path planning is to construct a collision-free path that allows the robot to move from the start position to the goal position in a given environment. Over the past few decades, a considerable number of path planning algorithms have been proposed, such as artificial potential fields (APF) [7], the genetic algorithm (GA) [8,9], the harmony search algorithm (HSA) [10], the A* algorithm [10,11,12], particle swarm optimization (PSO) [13,14], ant colony optimization (ACO) [15], rapidly exploring random trees (RRT) [16,17], neural networks [18], etc. Pak et al. [19] proposed a path planning algorithm for smart farms using simultaneous localization and mapping (SLAM) technology. In their study, A*, D*, RRT, and RRT* were compared on a short-distance path planning problem with static obstacles, a longest-distance path planning problem with static obstacles, and a path planning problem with dynamic obstacles. The results showed that the A* algorithm is well suited to such path planning problems. Zeyu Tian et al. [20] introduced a SLAM construction based on the RRT algorithm to solve the path planning problem, which is considered a partially observable Markov decision process (POMDP). Boundary points are extracted, and a global RRT tree with an adaptive step is introduced. The results show that, for local RRT boundary detection, the tree is reset to accelerate the detection of the boundary points around the robot; for global RRT boundary detection, the generated RRT tree can continue to grow, so that small corners and the robot’s boundary points are eventually detected as well. An RRT-based path planning method for two-dimensional building environments was proposed by Yiqun Dong et al. [21].
To minimize the planning time, they biased the RRT tree growth in more focused ways; the results show that the proposed RRT algorithm is faster than baseline planners such as RRT, RRT*, A*-RRT, Theta*-RRT, and MARRT. Symmetry is widely used in path planning. A* is a popular algorithm based on graph search; it expands from the current node to the surrounding nodes, and this search process is symmetrical. The A* algorithm [22,23] is one of the best-known heuristic path planning algorithms. It was proposed by Peter Hart et al. in 1968 and is widely used due to its simple principle, high efficiency, and modifiability. It evaluates the path by estimating the cost value of the current node’s extensible nodes in the search area and guides the search toward the goal by selecting the lowest-cost node among the current node’s adjacent nodes. Although the traditional A* algorithm shows good performance, it may fall into a failed search state, and its speed is slow. Consequently, many scholars have proposed improved algorithms. Ruijun Yang and Liang Cheng [24] proposed an improved A* algorithm with updated weights, which considers the degree of channel congestion in a restaurant using a gray-level model. The results show that the service robot can effectively avoid crowded channels and complete various tasks in a restaurant environment. Anh Vu Le et al. [25] invented a modified A* algorithm for efficient coverage path planning. They introduced a zigzag pattern into the traditional A* algorithm. The results show that the modified A* algorithm can generate waypoints to cover narrow spaces as well as efficiently maximize the coverage area. Erke Shang et al. [26] proposed a novel A* algorithm that considers evaluation standards, guidelines, and key points. The improved A* algorithm uses a new evaluation standard to measure the performance of many algorithms and selects appropriate parameters for the proposed A* algorithm.
The experimental results demonstrated that the algorithm is robust and stable. Gang Tang et al. [27] proposed an improved geometric A* algorithm by introducing the filter function and the cubic B-spline curve. The improved strategy can reduce redundant nodes and effectively counter the problems associated with long distances.
Inspired by nature, PSO [28] was first introduced by Kennedy and Eberhart in 1995. This biologically inspired heuristic algorithm is widely used in path planning [29], complex optimization problems [30], inverse kinematics solutions [31], etc. In particular, the application of PSO to path planning has attracted the attention of many scholars. An improved PSO algorithm was proposed by Yong Zhang et al. [32] to solve the shortest and safest path problem. In their paper, the path planning problem is described as a constrained multiobjective problem, and a membership function that considers both the degree of risk and the distance of the path is proposed to resolve it. The results showed that the improved algorithm can generate high-quality Pareto-optimal paths for robot path planning in an uncertain, dangerous environment. Mihir Kanta Rath and B.B.V.L. Deepak [33] established a PSO-based system architecture to solve the robot path planning problem in a dynamic environment. The main goal of the proposed algorithm is to find the shortest possible path with obstacle avoidance. The simulation showed that the objective function successfully improves the efficiency of path planning in various environments.
Before the EOD robot removes explosives, searching for and finding the explosives among all the suspicious positions where terrorists may have hidden them is vital. Most studies only consider planning a path from one initial position to one goal position. A further complication, however, is that terrorists may hide multiple explosives in different positions. Furthermore, as the number of goal positions increases, the time needed to compute the path increases. It is particularly important to remove explosives quickly, as they may detonate at any time. Existing algorithms are often time-consuming and easily fall into local optima. Thus, the EOD robot needs to plan the shortest path to detect multiple suspicious positions as quickly as possible. In this study, we focus on the shortest-distance path for traversing multiple goal positions while also optimizing the computation time. The contributions of this study are as follows:
(1) A bidirectional dynamic weighted A* algorithm is proposed to reduce the calculation time. By using the grid method, the environment map is established to introduce the obstacle, initial position, and goal position information. The path planning problem is divided into two subproblems: one is the shortest path between any two positions, and the other is the shortest path for traversing all goal positions. Firstly, to ensure the problem is clearly described, the mathematical model is also presented. Then, we optimize the computation time by improving the A* algorithm with a novel dynamic OpenList cost weight strategy and a bidirectional search strategy. Finally, the adjacent nodes are expanded to 16 to obtain the shortest distance between any two positions. The simulation results of the improved strategies are analyzed and discussed.
(2) A learn-memory strategy is introduced to the PSO algorithm to enhance the exploration ability. The LM strategy imitates the behaviors of human learning and memory, such as comparison, analysis, association, retention, forgetting–reinforcement, and divergent thinking, to optimize the initial solutions, maintain the diversity of the swarm, improve the quality of solutions, etc. In the proposed algorithm, we obtain new particles by optimizing the nodes in one particle or between particles, learning some features or figures that exist to generate new excellent particles, etc.
(3) A swap-sequence strategy is introduced to the PSO algorithm to maintain the diversity of the swarm. The swap sequence strategy is incorporated into the forgetting–reinforcement process. By using this strategy, the traditional PSO can avoid falling into a local optimum. The performance of the LM-SSPSO algorithm is also compared with that of ACO, Tabu search (TB), swap sequence particle swarm optimization (SSPSO), and several other intelligent algorithms.
(4) The performance of the proposed algorithm is verified. To confirm the effectiveness of the proposed algorithm, different environment maps are also designed and analyzed. Finally, the conclusions of this paper are given.
The remainder of this paper is organized as follows: In Section 2, the environment map construction for path planning is established, and the mathematical model is presented. Then, in Section 3, an improved A* algorithm for the shortest distance path between any two goal positions and the computation time optimization are described and detailed. In Section 4, an LM-SSPSO algorithm is proposed for optimizing the path of traversing all goal positions. In Section 5, the experiments are designed and compared to further verify the effectiveness of the proposed approach. Finally, the conclusion and perspectives are given in Section 6.

2. Environment Representation and Mathematical Model

In this study, our goal was to improve the efficiency of path planning. The EOD robot path planning problem can be stated as follows: considering a cluttered environment that consists of an initial position and multiple suspicious positions, it is necessary to plan a collision-free path for the EOD robot moving from the initial position to multiple suspicious positions so that the robot can find all the hidden explosives. We plan the optimal path by considering the following two aspects. One is to obtain the shortest distance path between any two positions (including the initial position and the goal positions), and the other is to plan the shortest path where the EOD robot traverses all goal positions and returns to the initial position.

2.1. Grid-Based Environment Modeling and Definitions

For the path planning investigated in this paper, the environmental model is crucial. To describe environmental information, many methods have been developed. These include the grid method [34], the visibility graph [35], the Voronoi diagram [36], etc. The grid method was first proposed by W.E. Howden in 1968. It is one of the most widely used environment modeling methods for recording obstacle and path information. The main idea of the method is to divide the environment map into many small grids and to describe the free space and obstacle space efficiently. In this paper, we use a grid-based map to record the EOD robot’s environment information.
When the EOD robot moves around suspicious positions, the planned path should enable the robot to quickly and efficiently locate explosives. However, it is not easy to satisfy both of the following goals: one is that the final path should enable the robot to search around the suspicious position, so that the explosives can be detected effectively; the other is that the path should be as short as possible and that the robot needs to know where to stop before detecting explosives and where to go after detecting one in a suspicious position. To simplify the search process and to facilitate the detection of explosives, we define the EOD robot as a mass point and manually select three goal positions around each suspicious position, instead of moving all around it to detect explosives. As shown in Figure 1, the grid map is 180 × 180. The contour area that the EOD robot cannot pass is marked with black squares; the area where the EOD robot can pass is marked with white squares, which represent free grids. The grid marked with a red square represents the initial position. The obstacle areas named {A, B, C, D, E} in the environment map are the suspicious positions. Each suspicious position is surrounded by three goal grids, which represent goal positions and are marked in fuchsia. There are 15 goal positions in total. All of the positions are named {2, 3, 4…15, 16}. The goal of the path planning is to plan an obstacle-avoiding path for the EOD robot moving from the initial position, going through all 15 goal positions, and returning to the initial position at the end. To plan the path effectively, we establish a planar rectangular coordinate system XOY and use the serial number method to encode the grids. In the XOY coordinates, the bottom left of the map is the origin; we encode the grids from bottom to top and left to right. The relationship between the grid number and grid coordinate is as follows:
N_cor_i = (x_i − 1) n + y_i,  x_i = (N_cor_i − y_i)/n + 1,  y_i = N_cor_i − (x_i − 1) n  (1)
where N_cor_i represents the grid number, (x_i, y_i) represents the grid coordinate, and n represents the number of grid map matrix rows; in Figure 1, n is 180.
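The conversion between grid number and grid coordinate can be sketched as follows (a minimal Python illustration; the function names are ours, and the default n = 180 matches Figure 1):

```python
def coord_to_grid(x, y, n=180):
    """Equation (1): N_cor_i = (x_i - 1) * n + y_i."""
    return (x - 1) * n + y

def grid_to_coord(N, n=180):
    """Invert Equation (1): recover (x_i, y_i) from the grid number N_cor_i."""
    x = (N - 1) // n + 1   # column index, counted left to right
    y = (N - 1) % n + 1    # row index, counted bottom to top
    return x, y
```

For example, with n = 180, the cell (2, 1) receives grid number 181, and converting the number back recovers the coordinate.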

2.2. Description and Definition of the EOD Robot Path Planning Problem

For the path planning of the EOD robot, our aim is to investigate an obstacle-free path for the EOD robot to detect and find the explosives. Generally, the shorter the computational time and the shorter the path, the better the efficiency of the EOD robot. We focus our study on optimizing the shortest distance between any two positions and the computational time, aiming at the shortest distance required to traverse all the goal positions (including the start position). Two distance criteria should be considered: one is the shortest distance between any two positions, and the other is the shortest distance for traversing all the goal positions just once and returning to the initial position at the end.
The first distance criterion is as follows:
d_ij = Σ_{k=1}^{l−1} √[ (x_{ij,k+1} − x_{ij,k})² + (y_{ij,k+1} − y_{ij,k})² ]  (2)
where d_ij represents the shortest distance between position i and position j; it is composed of several sequential line segments from the start position (x_{ij,1}, y_{ij,1}) to the goal position (x_{ij,l}, y_{ij,l}) in Cartesian coordinates; (x_{ij,k}, y_{ij,k}) represents the kth grid that the EOD robot has passed when moving from position i to position j; and l represents the total number of nodes between the two positions.
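The first distance criterion, the sum of Euclidean segment lengths along a planned path, can be illustrated with a short sketch (Python; the function name and the point-list representation are our illustrative choices):

```python
import math

def path_length(points):
    """First distance criterion: sum the Euclidean lengths of the
    sequential segments along a path [(x_1, y_1), ..., (x_l, y_l)]."""
    return sum(math.dist(points[k], points[k + 1])
               for k in range(len(points) - 1))
```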
The second distance criterion is as follows:
T_d = Σ_{u=1}^{v−1} d(m_u, m_{u+1}) + d(m_v, m_1)  (3)
where T_d represents the sum of distances over which the robot traverses all goal positions, d(m_u, m_{u+1}) represents the distance from position m_u to position m_{u+1}, and v represents the total number of positions (including the start position, waypoints, and goal positions).
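Given a precomputed distance matrix d between all positions, the closed-tour length T_d can be sketched as follows (an illustrative Python fragment; the names are ours):

```python
def tour_distance(order, d):
    """Second distance criterion: total length of the closed tour that
    visits the positions in `order` = [m_1, ..., m_v] and returns to m_1.
    d[i][j] is the shortest-path distance between positions i and j."""
    v = len(order)
    total = sum(d[order[u]][order[u + 1]] for u in range(v - 1))
    return total + d[order[-1]][order[0]]   # close the loop back to m_1
```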

3. Improved A* Algorithm for the Path Planning of Any Two Points

In this section, we analyze the path planning between any two goal positions. Although the A* algorithm is widely used in the path planning area, it has problems such as large numbers of redundant nodes, many turn angles, slow search speed, etc. These issues waste time and lengthen the path distance. To find a collision-free, shortest-distance path and reduce the computational time, we improve the algorithm by introducing two strategies: a computational time optimization strategy and a path distance optimization strategy. First, we introduce a bidirectional search and a dynamic OpenList cost weight strategy into the traditional A* algorithm to reduce the computational time. Then, based on the time-optimal strategies, we extend the adjacent nodes from 8 to 16 and separately calculate and compare the distance matrices of any two positions planned with 8 adjacent nodes and with 16 adjacent nodes. Finally, we select the shorter path from the two distance matrices as the new distance matrix for the subsequent path planning.

3.1. The Traditional A* Algorithm

In the traditional A* algorithm, there are two important lists: the OpenList and the CloseList. After the extension nodes in the OpenList are evaluated, the node with the minimum cost is removed from the OpenList and added to the CloseList, until the end node is found. The cost function of the traditional A* algorithm [37] is F(n) = G(n) + H(n), where F(n) represents the total cost of a path that starts at the start node, passes through the current node n, and arrives at the goal node; G(n) represents the actual cost already incurred from the start node to the current node; and H(n) is a heuristic function (Manhattan, Euclidean, or diagonal) that represents the estimated cost from the current node to the goal node.
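The OpenList/CloseList loop described above can be sketched as follows (a minimal, illustrative Python implementation on a 2D grid; the 8-connected moves, the obstacle encoding `grid[x][y] == 1`, and the function name are our assumptions, not the paper’s code):

```python
import heapq, itertools

def a_star(grid, start, goal):
    """Minimal grid A* with F(n) = G(n) + H(n); H is the Manhattan
    distance and moves are 8-connected. grid[x][y] == 1 marks an
    obstacle. Returns the path as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                     # tie-breaker for the heap
    open_list = [(h(start), next(tie), 0.0, start, None)]
    close_list = {}                             # node -> parent
    g_best = {start: 0.0}
    while open_list:
        _, _, g, node, parent = heapq.heappop(open_list)
        if node in close_list:
            continue                            # already expanded
        close_list[node] = parent
        if node == goal:                        # walk parents back to START
            path = [node]
            while close_list[path[-1]] is not None:
                path.append(close_list[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):                   # expand the 8 adjacent nodes
            for dy in (-1, 0, 1):
                nb = (node[0] + dx, node[1] + dy)
                if (dx, dy) == (0, 0):
                    continue
                if not (0 <= nb[0] < rows and 0 <= nb[1] < cols):
                    continue
                if grid[nb[0]][nb[1]] == 1:
                    continue                    # obstacle grid
                ng = g + (dx * dx + dy * dy) ** 0.5
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    heapq.heappush(open_list, (ng + h(nb), next(tie), ng, nb, node))
    return None
```

The priority queue plays the role of the OpenList, and the `close_list` dictionary plays the role of the CloseList while also recording each node’s parent for path reconstruction.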

3.2. Strategy for Computational Time Optimization between Any Two Points

To solve the long computational time problem mentioned above, in this subsection, we improve the traditional A* algorithm using the following strategies: bidirectional search and dynamic OpenList cost weight strategy.

3.2.1. Bidirectional Strategy for Computational Time Optimization

The bidirectional search algorithm is an efficient path planning algorithm. It shortens the planning time by simultaneously searching from the start node and the goal node. The search continues until the two search frontiers meet in the middle. The formulas of the bidirectional A* algorithm are:
F(Current_s) = G(Current_s) + H(Current_s),  F(Current_g) = G(Current_g) + H(Current_g)  (4)
where Current_s and Current_g represent the current traversal nodes searched from the start position and the goal position, respectively; F(Current_s) and F(Current_g) represent the total cost of node Current_s and node Current_g, respectively; G(Current_s) and G(Current_g) represent the cost from the start node to current node Current_s and from the goal node to current node Current_g, respectively; and H(Current_s) and H(Current_g) represent the estimated cost from current node Current_s to the node Current_g calculated in the last loop and from current node Current_g to the node Current_s calculated in the last loop, respectively.
The main purpose of bidirectional A* is to take advantage of both the start and goal positions during the path planning process. There are two OpenLists (OpenList_s, OpenList_g) and two CloseLists (CloseList_s, CloseList_g) in the bidirectional A* algorithm. As shown in Figure 2, when initializing the parameters, in the forward search direction, the START node is in OpenList_s, and the end node is the current node Current_g of the backward search direction; similarly, the GOAL node is in OpenList_g, and the end node is the current node Current_s of the forward search direction.

3.2.2. Dynamic OpenList Cost Weight Strategy for Computational Time Optimization

In the A* algorithm formula, the heuristic function can be used to choose a preferred direction for searching. We therefore adjust the H(n) part by introducing a weight coefficient to increase its influence and improve the calculation speed of the A* algorithm. The costs of the nodes in the OpenList change gradually during the iterative process: as the iterations increase, more and more nodes with lower cost values are obtained. By considering the maximum, minimum, and mean costs of the OpenList nodes, we design a new weight coefficient, the dynamic OpenList cost weight, for the path planning calculation in this study. The proposed dynamic OpenList cost weight is represented by the following formulas:
F_s(Current_s) = G_s(Current_s) + w_s H(Current_s),  F_g(Current_g) = G_g(Current_g) + w_g H(Current_g)  (5)
w_s = w_min + (w_max − w_min) · mean(OpenListcost_s) / [max(OpenListcost_s) − min(OpenListcost_s)]  (6)
w_g = w_min + (w_max − w_min) · mean(OpenListcost_g) / [max(OpenListcost_g) − min(OpenListcost_g)]  (7)
where F_s(Current_s), G_s(Current_s), and H(Current_s) are the total cost, actual cost, and estimated cost of the Current_s node, respectively; similarly, F_g(Current_g), G_g(Current_g), and H(Current_g) are the total cost, actual cost, and estimated cost of the Current_g node, respectively; w_min and w_max are constants; w_s represents the node’s heuristic function weight (extended from the START node); w_g represents the node’s heuristic function weight (extended from the GOAL node); OpenListcost_s is a cost list that consists of the cost values of all the nodes (extended from the start node direction) in OpenList_s; mean(OpenListcost_s), max(OpenListcost_s), and min(OpenListcost_s) represent the mean, maximum, and minimum of all the costs in OpenListcost_s, respectively; similarly, OpenListcost_g is a cost list that consists of the cost values of all the nodes (extended from the goal node direction) in OpenList_g; and mean(OpenListcost_g), max(OpenListcost_g), and min(OpenListcost_g) represent the mean, maximum, and minimum of all the costs in OpenListcost_g, respectively.
The weight coefficients w_s and w_g are dynamically changing parameters that describe the effect of the OpenList_s node costs and OpenList_g node costs on the heuristic function, respectively. The coefficients incorporate the maximum, minimum, and mean costs of the OpenList nodes. Because the mean and minimum costs change over the iterations, the heuristic part is always dynamically changing. This optimizes the computational time by efficiently adjusting the weight coefficient and guides the search toward the goal node.
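Under our reading of Formulas (6) and (7), the dynamic weight could be computed as follows (an illustrative Python sketch; the guard for a degenerate OpenList whose costs are all equal is our addition):

```python
def dynamic_weight(open_list_costs, w_min=1.0, w_max=2.0):
    """Dynamic OpenList cost weight: the heuristic weight is driven by
    the mean, maximum, and minimum F-costs currently in the OpenList."""
    lo, hi = min(open_list_costs), max(open_list_costs)
    if hi == lo:
        return w_min          # all costs equal: fall back to w_min
    mean = sum(open_list_costs) / len(open_list_costs)
    return w_min + (w_max - w_min) * mean / (hi - lo)
```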
When calculating H(n), the two most commonly used metrics are the Manhattan distance and the Euclidean distance, as shown in Formulas (8) and (9). The Euclidean distance is one of the most common distance metrics, measuring the absolute straight-line distance between two positions in a multidimensional space; it is the real distance between the two positions. The Manhattan distance is the sum of the absolute axis distances between two positions on a standard coordinate system: the distance in the north–south direction plus the distance in the east–west direction. It requires only addition and subtraction, which demands less computation over a large number of calculations and eliminates the approximation involved in taking the square root. To reduce the computational time, in this study, we use the Manhattan distance to calculate H(n).
H_E(n) = √[ (x_n − x_g)² + (y_n − y_g)² ]  (8)
H_M(n) = |x_n − x_g| + |y_n − y_g|  (9)
where ( x n , y n ) is the position of the robot at the current node n; ( x g , y g ) is the position of the end node.
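The two heuristics can be written directly (illustrative Python; the function names are ours):

```python
import math

def h_euclidean(n, g):
    """H_E(n): straight-line distance from current node n to end node g."""
    return math.hypot(n[0] - g[0], n[1] - g[1])

def h_manhattan(n, g):
    """H_M(n): |x_n - x_g| + |y_n - y_g|; addition and subtraction only."""
    return abs(n[0] - g[0]) + abs(n[1] - g[1])
```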
The process of the bidirectional dynamic OpenList cost A* algorithm can be described as follows:
Step 1: Establish the OpenLists and CloseLists. First, we establish two OpenLists and two CloseLists for the improved algorithm: OpenList_s and CloseList_s for the forward search direction, and OpenList_g and CloseList_g for the backward search direction. Second, as shown in Figure 2, we add the START node into OpenList_s and select the Current_g node (the current node of the backward search direction) as its end node. Similarly, we add the GOAL node into OpenList_g and select the Current_s node (the current node of the forward search direction) as its end node. We carry out the following operations on the OpenLists and CloseLists, keeping CloseList_s and CloseList_g empty at first.
Step 2: Parameter initialization. We define the weight coefficients of the forward search direction and backward search direction as w s and w g , respectively, and assign them a value of 1 at first. We repeat steps 3–6 twice, and obtain the mean, maximum, and minimum cost values of OpenList_s and OpenList_g nodes; then, we compute w s , w g by considering Formulas (6) and (7) and use them as the weight coefficients of the following calculation.
Step 3: Calculate the value of new extended nodes. We add the adjacent nodes in the forward direction into OpenList_s and calculate the cost function F(n) value of the newly added nodes. If a node is an obstacle grid, we remove it from OpenList_s. We find the node with the minimum evaluation value in OpenList_s and define it as the current node Current_s. Next, we remove Current_s from OpenList_s and transfer it into CloseList_s. We carry out the same operations on the adjacent nodes in the backward direction and then update OpenList_g and CloseList_g.
Step 4: Define the parent node. We consider the adjacent nodes of the current node in the forward direction that have been placed in neither CloseList_s nor OpenList_s. We use the current node in the forward direction as the parent node of these newly added nodes. We carry out the same operations on the adjacent nodes in the backward direction.
Step 5: Judge and update the newly added nodes F ( n ) and G ( n ) values. First, we check if the newly added nodes in the forward direction obtained in Step 3 already exist in the OpenList_s . If the nodes do not exist in the OpenList_s , then we use the current node as the parent node of the newly added nodes. If the nodes are in the OpenList_s , then we compare their G ( n ) values. If the newly added node’s value is smaller, we use the current node as the parent node of the newly added nodes and update the newly added node’s F ( n ) value. We carry out the same operations on the adjacent nodes in the backward direction and update the parent node of the newly added nodes in the backward direction.
Step 6: Judge the loop exit condition. We judge whether OpenList_s is empty or not. If OpenList_s is empty, all of the operations stop; otherwise, we judge whether OpenList_s and OpenList_g have the same nodes. If so, we calculate the F(n) value of all the shared nodes and select the node with the smallest value as the intersection node of OpenList_s (forward search) and OpenList_g (backward search). We obtain the path from the GOAL node, move forward to the intersection node, and finally reach the START node. Otherwise, we continue to calculate the cost of the extended nodes searched from the forward search direction. We carry out the same operations on OpenList_g.
As described above, in Step 1, the Current_g node and Current_s node were obtained from the iterative process. We needed to consider the initialization of the end node in both the forward direction and the backward direction. During initialization, we selected the GOAL node as the end node of the Current_s node and the START node as the end node of the Current_g node. Furthermore, to facilitate the calculation of the weight coefficient, as described in step 2, we first assigned the two weight coefficients of the heuristic function of the bidirectional search a value of 1. Thus, we obtained the initial cost of nodes in the OpenList_s and OpenList_g . Finally, the dynamic OpenList cost weight coefficients w s , w g were obtained; we could use these to calculate the cost value of F ( n ) later.
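The intersection test of Step 6 and the final path stitching can be sketched as follows (an illustrative Python fragment; representing the OpenLists as node → F(n) dictionaries and scoring a shared node by the sum of its forward and backward F values are our assumptions):

```python
def find_intersection(openlist_s, openlist_g):
    """Return the node shared by both OpenLists with the smallest
    combined F value, or None if the frontiers have not met yet.
    Both arguments map node -> F(n)."""
    shared = openlist_s.keys() & openlist_g.keys()
    if not shared:
        return None
    return min(shared, key=lambda n: openlist_s[n] + openlist_g[n])

def stitch_path(parent_s, parent_g, meet):
    """Join the forward half (START -> meet) with the reversed backward
    half (meet -> GOAL) into one full path. parent_s/parent_g map each
    node to its parent in the respective search direction."""
    path = []
    node = meet
    while node is not None:          # walk back to START
        path.append(node)
        node = parent_s[node]
    path.reverse()
    node = parent_g[meet]
    while node is not None:          # walk forward to GOAL
        path.append(node)
        node = parent_g[node]
    return path
```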

3.3. Simulation Results and Analysis of the Computational Time Optimization Strategy

In this study, an Intel(R) Core (TM) i5-5200U CPU @ 2.20 GHz with 16 GB of RAM, running the Windows 10 Education 64-bit operating system, was used for the experimental simulations. The coordinates of the start position were (12, 12), and the coordinates of goal positions 1 to 15 in Figure 1 are (27, 30), (51, 24), (45, 60), (96, 24), (120, 63), (72, 87), (144, 33), (165, 60), (147, 87), (33, 87), (51, 138), (18, 138), (105, 120), (135, 159), and (75, 156). To verify the efficiency of the bidirectional dynamic OpenList cost weight A* algorithm, comparative simulations were designed. We compared the weighted bidirectional improved A* algorithm with the traditional A* and discussed the effects of different weight coefficients and resolutions on the computation time. To ensure the accuracy of the simulation results, we ran each algorithm 30 times.
As shown in Table 1, the computation time of the bidirectional dynamic OpenList cost weight A* algorithm is significantly shorter than that of the traditional A* algorithm. As the resolution of the environment map increases from 120 × 120 to 180 × 180 and from 120 × 120 to 240 × 240, the computation time of the traditional A* algorithm grows by factors of 3.16 (250.91:79.50) and 7.91 (628.53:79.50), respectively, whereas that of the bidirectional dynamic OpenList cost weight A* algorithm grows by factors of only 1.90 (50.27:26.39) and 3.16 (83.36:26.39). The growth in the calculation time of the improved algorithm is therefore significantly smaller than that of the traditional A* algorithm.
To further verify the effectiveness of the bidirectional dynamic OpenList cost weight A* algorithm, we introduced constant weights w = 1, 1.5, and 2, as well as five other weight coefficients, for comparison. The five weight coefficient functions ( w 1 , w 2 , w 3 , w 4 , w 5 ) are described in Formulas (10)–(14):
$$w_1:\quad w_k = w_{\min} + (w_{\max} - w_{\min})\left(1 - \frac{k}{\mathrm{Est}_{iter}}\right)$$
$$w_2:\quad w_k = w_{\min} + (w_{\max} - w_{\min})\left(\frac{k}{\mathrm{Est}_{iter}}\right)^2$$
$$w_3:\quad w_k = w_{\min} + (w_{\max} - w_{\min})\left(\frac{2k}{\mathrm{Est}_{iter}} - \left(\frac{k}{\mathrm{Est}_{iter}}\right)^2\right)$$
$$w_4:\quad w_k = w_{\min} - \frac{w_{\min} - w_{\max}}{1 + c\,k/\mathrm{Est}_{iter}}$$
$$w_5:\quad w_k = w_{\min} + \frac{w_{\max} - w_{\min}}{\sqrt{2\pi}}\, e^{-\frac{(k/\mathrm{Est}_{iter})^2}{2}}$$
where k represents the kth iteration; w_k represents the weight coefficient of the kth iteration; w_min, w_max, and c are constants; Est_iter represents the pre-estimated maximum number of iterations; and e is the Euler number.
The parameter Est_iter in the five functions is obtained by calculating the H(n) value evaluated from the start node to the end node. The constant c in the coefficient w_4 is 1, the constant w_min is 1, and w_max is 2.
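For reference, the five weight schedules can be sketched in Python as follows. This is our reconstruction of Formulas (10)–(14) from the surrounding text (the exact forms of w4 and w5 are our best reading of the originals), using the stated defaults w_min = 1, w_max = 2, and c = 1.

```python
import math

def weight_schedule(kind, k, est_iter, w_min=1.0, w_max=2.0, c=1.0):
    """Candidate dynamic weight coefficients w1..w5 (Formulas (10)-(14)).

    Reconstructed sketch; parameter defaults follow the paper's text.
    """
    t = k / est_iter  # normalized iteration progress in [0, 1]
    if kind == "w1":   # linear decrease from w_max to w_min
        return w_min + (w_max - w_min) * (1 - t)
    if kind == "w2":   # quadratic increase toward w_max
        return w_min + (w_max - w_min) * t ** 2
    if kind == "w3":   # concave schedule 2t - t^2
        return w_min + (w_max - w_min) * (2 * t - t ** 2)
    if kind == "w4":   # rational form controlled by constant c
        return w_min - (w_min - w_max) / (1 + c * t)
    if kind == "w5":   # Gaussian-shaped schedule
        return w_min + (w_max - w_min) / math.sqrt(2 * math.pi) * math.exp(-t ** 2 / 2)
    raise ValueError(kind)
```

At k = 0, w1 starts at w_max and decays linearly, while w2 and w3 start at w_min and rise toward w_max as k approaches Est_iter.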
The results for w_1–w_5 and w = 1, 1.5, and 2 are all based on a bidirectional search. As can be seen from Table 2, the dynamic OpenList cost weight A* algorithm outperforms every other weight coefficient in terms of calculation time.

3.4. Strategies of Improved A* Algorithm for the Shortest Distance between Any Two Points

Although a collision-free path can be obtained using the traditional eight-adjacent-node A* algorithm, the path contains many inflection nodes and repeated calculations, and the path distance is sometimes long. It is therefore necessary to reduce the inflection nodes and shorten the path distance in path planning. To solve this problem, we increased the number of expansion nodes from 8 to 16.
As shown in Figure 3, the 16-adjacent-node search method can lead the search directly toward the goal node (the red node in Figure 3), which significantly reduces the path length and the number of nodes. However, the 16-adjacent-node strategy also introduces more nodes that must be checked for obstacles. For example, in Figure 4, node P ( x p , y p ) and node Q ( x q , y q ) are the waypoints, and node E ( x e , y e ) is the expansion node; before adding the expanded nodes to OpenList_s or OpenList_g, it is necessary to judge whether the three nodes (P, Q, and E) are obstacles or not. The relationships between the waypoints, the current node C ( x c , y c ) , and the next node E ( x e , y e ) in the coordinate system are given below.
The obstacle judgment formula is:
$$x_p = x_c,\quad y_p = \frac{y_c + y_e}{2};\qquad x_q = x_e,\quad y_q = \frac{y_c + y_e}{2}$$
When node P and node Q are above or below node C, the relationships between the extended node and the waypoints are as shown in Formula (15), where ( x e , y e ), ( x p , y p ), ( x q , y q ), and ( x c , y c ) are the coordinates of nodes E, P, Q, and C in the Cartesian coordinate system, respectively. Using the proposed obstacle judgment method, we can judge whether the new expansion point is an obstacle more accurately than with many other methods, such as sampling points along the path and testing whether each sample point lies inside an obstacle. When node P and node Q are to the left or the right of node C, the relationship between the extended node and the waypoints is as shown in Formula (16):
$$x_p = \frac{x_c + x_e}{2},\quad y_p = y_c;\qquad x_q = \frac{x_c + x_e}{2},\quad y_q = y_e$$
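A hypothetical Python sketch of the waypoint construction and obstacle test for a 16-neighborhood expansion node, following Formulas (15) and (16); the predicate `is_obstacle` and all names are illustrative, not the paper's implementation.

```python
def expansion_waypoints(c, e):
    """Waypoints P and Q between current node C and a 16-neighborhood
    expansion node E (Formulas (15) and (16)). c and e are (x, y) tuples."""
    xc, yc = c
    xe, ye = e
    if abs(ye - yc) == 2:          # E lies two rows above/below C: Formula (15)
        p = (xc, (yc + ye) / 2)
        q = (xe, (yc + ye) / 2)
    else:                          # E lies two columns left/right of C: Formula (16)
        p = ((xc + xe) / 2, yc)
        q = ((xc + xe) / 2, ye)
    return p, q

def expansion_is_free(c, e, is_obstacle):
    """E may enter the open list only if P, Q, and E are all obstacle-free."""
    p, q = expansion_waypoints(c, e)
    return not (is_obstacle(p) or is_obstacle(q) or is_obstacle(e))
```

For the knight-like offset E = C + (1, 2), both waypoints sit on the intermediate row; for E = C + (2, 1), they sit on the intermediate column.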

3.5. Simulation Results and Analysis of the Shortest Distance of the Path Planning Improved Strategy

To compare the performance of the shortest-distance path planning strategies, we compared the 8-adjacent-node and 16-adjacent-node searches using the improved bidirectional weighted A* algorithm described above. Table 3 and Table 4 are the path length matrices of the 8 neighborhood and 16 neighborhood, respectively. In the tables, the number in row i, column j represents the shortest distance from point i to point j. For example, the shortest distance from point 3 to point 1 in Table 3 is the number in row 3, column 1, that is, 42.968. Table 5 is the path length matrix composed of the elementwise minimum of the 8 neighborhood (Table 3) and the 16 neighborhood (Table 4). For example, the shortest distance from point 3 to point 1 is 42.968 in Table 3 and 38.14 in Table 4, so the shortest distance from point 3 to point 1 in Table 5 is 38.14. Because the distance matrix is symmetric, this construction allows us to effectively solve the proposed problem, and we obtain a complete distance matrix between any two points, as shown in Table 5.
The total number of positions for path planning is 16, so we obtain a 16 × 16 distance matrix. The distance matrix is symmetric; that is, the distance from one position to another is equal to the distance in the reverse direction, so there are 120 distinct distances between the 16 nodes. In Table 3 and Table 4, we show only part of the matrices for a better comparison of the distance values between positions. As shown in Table 6, compared with the path lengths generated using 8 adjacent nodes, the 16-adjacent-node A* algorithm shows better performance: all of the generated path lengths are less than or equal to those of the 8-adjacent-node A* algorithm because the redundant points are removed.
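The construction of Table 5 — taking the elementwise minimum of the two path-length matrices and then enforcing symmetry — can be sketched as follows (a minimal illustration with plain nested lists; names are ours):

```python
def merge_min_symmetric(d8, d16):
    """Combine the 8- and 16-neighborhood path-length matrices into
    their elementwise minimum, then symmetrize the result so that
    dist(i, j) == dist(j, i), as in the construction of Table 5."""
    n = len(d8)
    merged = [[min(d8[i][j], d16[i][j]) for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = min(merged[i][j], merged[j][i])
            merged[i][j] = merged[j][i] = m
    return merged
```

With n = 16 positions, the symmetric matrix carries n(n − 1)/2 = 120 distinct pairwise distances, matching the count stated above.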
To further verify the performance of the bidirectional dynamic weighted-A* algorithm, we compared the improved A* algorithm with the goal-oriented rapidly exploring random tree (GORRT) algorithm [38], where the step size of GORRT is eight and the threshold is five; nodes closer than this threshold were treated as the same node. The shortest distances between any two nodes obtained using GORRT are shown in Table 7.
By analyzing the results of the 16 adjacent nodes (Table 4) and GORRT's path lengths (Table 7), we obtained the comparison results shown in Table 8. The paths generated using 16 adjacent nodes again performed better, with 199 path lengths being shorter and 1 path length being longer.

4. LM-SSPSO Algorithm for Traversing Multiple Goal Positions

As mentioned above, we obtained the shortest distance between any two of the 16 positions (including the start position). The next step was to find the shortest path that traverses all suspicious positions exactly once. The process is similar to the traditional traveling salesman problem (TSP) [39]: the distance from position P to position Q is equal to that from position Q to position P. Thus, in this section, we solve the multiple-goal-position path planning problem by considering methods of solving the TSP. Many algorithms have been studied for the TSP, such as the greedy algorithm, genetic algorithm (GA), particle swarm optimization (PSO), harmony search (HS), etc. Refael Hassin and Ariel Keinan [40] proposed greedy heuristics with regret, applied to the cheapest insertion method, for solving the TSP. They improved the greedy algorithm by allowing partial regret. The computational results showed that the relative error is reduced and the computational time is quite short. Urszula Boryczka and Krzysztof Szwarc [41] studied an improved HS algorithm to solve the TSP. The results showed that the proposed algorithm has a significant effect and that the summary average error can be effectively reduced. Yong Deng et al. [42] applied an initial population strategy (KIP, k-means initial population) to improve the GA, helping to find the optimal solution for the TSP. Fourteen TSP examples were used to test the algorithm, and the results showed that the KIP strategy can effectively decrease the best error value. Matthias Hoffmann et al. [43] presented a discrete PSO algorithm for the TSP, which provides insight into the convergence behavior of the swarm. The method is based on edge exchanges and combines particles by computing their centroid. The method was compared with other approaches; the evaluation results showed that the proposed edge-exchange-based PSO algorithm performs better than the traditional approaches.
Although many traditional algorithms have been used to solve the TSP, each of the above approaches has its limitations. For example, the diversity of PSO is easily lost during iteration. The greedy algorithm and HS algorithm may fall into a local optimum, and the result of GA strongly depends on the initial value. So, we introduced the LM-SSPSO algorithm to improve the efficiency of EOD robot path planning.

4.1. Mathematical Model of Multiple Goal Positions Path Planning

In this subsection, we present the mathematical model of multiple-goal-position path planning for the EOD robot. The aim of our research was to find the shortest path that traverses all goal positions exactly once, followed by a return to the start position. So, we developed the mathematical solution with reference to methods of solving the TSP. We define the path planning problem as an instance L = ( n , d i s t ) of multiple goal positions { 1 , , n } , from which we obtain an n × n integer distance matrix of all the goal positions. The path to be optimized can be described as a permutation D of the multiple goal positions, where D ( i ) denotes the ith visited goal position. Hence, the formula for the minimum total distance is:
$$L = \arg\min_{D \in R_n} \left[ \mathrm{dist}\big(D(n), D(1)\big) + \sum_{i=1}^{n-1} \mathrm{dist}\big(D(i), D(i+1)\big) \right]$$
where R_n is the symmetric group on { 1 , , n } ; dist(D(i), D(i+1)) denotes the distance between the ith and (i+1)th visited goal positions; and dist(D(n), D(1)) denotes the distance between the nth visited goal position and the start position.
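The objective of Formula (17) can be written as a short Python function; indices are 0-based here, and the closing leg back to the first position is folded in with a modulo:

```python
def tour_length(dist, perm):
    """Total closed-tour distance of Formula (17): visit perm[0..n-1]
    in order, then return from perm[-1] to perm[0].
    dist is the symmetric n x n matrix; perm is a permutation of 0..n-1."""
    n = len(perm)
    return sum(dist[perm[i]][perm[(i + 1) % n]] for i in range(n))
```

Every candidate permutation in the following PSO variants is scored with exactly this kind of closed-tour fitness.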

4.2. Traditional PSO Algorithm

The PSO algorithm is a population-based metaheuristic that is widely used because it has few control parameters and is easy to apply. It is inspired by the social behavior of bird flocks. When birds move in a group, each bird can be regarded as one particle, and all the particles form a swarm S = ( s 1 , s 2 , , s i , , s N ) , where N is the population size of the swarm. Each bird has its own velocity and position. The velocity of particle i is represented as V i = ( v i 1 , v i 2 , , v i d , , v i D m ) , where d is the dimension index. The position of particle i is represented as X i = ( x i 1 , x i 2 , , x i d , , x i D m ) . As time advances, the particles update their velocities and positions through mutual coordination and information-sharing mechanisms. The particles approach the best position by constantly updating their individual optimal positions and the swarm optimal position. The best position of an individual particle i is represented as p b e s t i = ( p i 1 , p i 2 , , p i d , , p i D m ) . The best position of the swarm is represented as g b e s t = ( g 1 , g 2 , , g d , , g D m ) . The positions and velocities of each particle are updated as follows:
$$v_{id}^{k+1} = w v_{id}^{k} + c_1 r_1 \left(pbest_{id} - x_{id}^{k}\right) + c_2 r_2 \left(gbest_{d} - x_{id}^{k}\right)$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$$
where Dm represents the maximum dimension of the particle; v i d k and v i d k + 1 represent the d-dimension element of the current velocity and new velocity of the particle, respectively; x i d k and x i d k + 1 represent the d-dimension element of the current position and new position of the particle, respectively; w represents the weight coefficient; i represents the particle number; k and k + 1 represent the kth and ( k + 1 ) th iteration, respectively; c 1 and c 2 are positive acceleration constants that scale the contributions of the cognitive and social components, respectively; r 1 and r 2 represent uniform random variables in (0, 1); p b e s t i d represents the dth element of particle i's best individual position; and g b e s t d represents the dth element of the swarm's global best position. In the PSO algorithm, each particle is attracted by two positions: the global best position and its previous best position. During iterations, the particles adjust their positions and velocities according to Formula (18), and the flowchart of the traditional PSO algorithm is shown in Figure 5; when the algorithm achieves the goal or reaches the maximum iteration, the search stops. The application of PSO in path planning has been explored in various studies. In the following section, we provide a hybrid algorithm that shows fast convergence and addresses diversity in solving optimization problems.
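A minimal sketch of one update of Formula (18) for a single particle; the coefficient values (w = 0.7, c1 = c2 = 1.5) are illustrative choices of ours, not the paper's tuned settings.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One continuous PSO update (Formula (18)) for a single particle.
    x, v, pbest, gbest are equal-length lists of floats."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = rng.random(), rng.random()  # fresh uniforms per dimension
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is one reason plain PSO can stagnate; the hybrid strategies below address this.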

4.3. Improved Strategies for Planning the Shortest Distance of Traversing All Goal Positions

The PSO algorithm is widely used due to its simple operation, easy implementation, and fast speed. However, it still has some shortcomings: the results depend on the initial solution, and particle position changes lack randomness, so the diversity of the particles may decline and the swarm may fall into a local optimum during iteration. To overcome these drawbacks, we improve the traditional PSO with the following strategies. First, a greedy algorithm and an association strategy are introduced to optimize the initial solutions. In addition, forgetting–reinforcement, comparison, and analysis strategies, as well as divergent thinking, are proposed to maintain the diversity of the swarm, avoid local optima, and obtain better solutions. The forgetting–reinforcement strategy is used to update a particle in the swarm or to randomly generate a new particle under certain conditions, the comparison strategy is used to optimize the individual particle, and the analysis strategy is used to generate a new particle from two particles. Furthermore, a divergent thinking strategy is introduced to increase the randomness of the particles. Finally, the retention strategy is used to reserve better solutions in the swarm for the next iteration. The flowchart of the LM-SSPSO algorithm is presented in Figure 6.
Compared with the traditional PSO algorithm, the LM-SSPSO algorithm provides optimization in four aspects. Firstly, the initial solutions are optimized; using the greedy algorithm and the analysis strategy, the quality of the initial solutions can be effectively improved. Then, two different particles are optimized by considering the distance from a shared node to its adjacent nodes with the comparison strategy. Meanwhile, the personal best particle and the global best particle are also used to provide path points with which to optimize a particle. Additionally, each particle is optimized with the analysis strategy, which swaps two randomly chosen positions in one particle. The swap sequence strategy is also introduced to update the particle using new swap operators, which include new position and velocity update formulas. Whether the swap sequence strategy is used or not depends on the forgetting–reinforcement strategy. Finally, to avoid local optima, the divergent thinking strategy is used to optimize the particle together with the worst particle, and the retention strategy selects the better particles in the end. A detailed description of each strategy is given in the following section.

4.3.1. The LM-SSPSO-Based Method to Traverse All Goal Positions

Inspired by the behaviors of learning and memory, we propose a novel intelligent algorithm called LM-SSPSO. In the improved algorithm, we introduce several strategies to solve the multiple goal positions path planning problem. When people learn and memorize information, they often forget or confuse similar knowledge. To solve problems, people can compare and analyze specific features, associate the information with a similar object, and combine divergent thinking to enhance memory, etc. For example, if someone confuses the words “dessert”, “desert”, and “dissert”, they can compare the differences: “d-es-sert”, “d-e-sert”, and “d-is-sert”. The middle letters are different in the three words. The flowchart of LM-SSPSO is shown in Figure 6. It includes the process of association, comparison, analysis, forgetting–reinforcement, divergent thinking, and retention.
Association means that one concept leads to other related concepts; the origin of an association can be a geometric figure, the behavior of animals, social behavior, etc. The result is a set of operations with similar behaviors in the swarm. In the LM-SSPSO algorithm, the operation of association can be described as selecting a group of particles from the swarm and then referring to a feature or figure, for example, a triangle. The sequences of the corresponding nodes are swapped for each particle. Figure 7 shows the process of association; the node on the hypotenuse is swapped with that on the right-angled side. The operations are detailed in the following section.
Comparison is the act of distinguishing or finding the differences between two or more objects and obtaining a better final result. In LM-SSPSO, a comparison strategy is usually applied to two objects, and the strategy obtains a desirable solution according to a certain rule. There are two methods of comparison. One involves comparing the distance from the same node of two particles to their adjacent nodes, and the other involves comparing the distance of a partial optimization particle with that of an unprocessed particle. Partial optimization includes selecting a segment from one particle, deleting the nodes that are the same as the segment in another particle, and adding the selected segment to the end of another particle.
Forgetting [44] is a special function of the human brain: experienced information no longer remains in memory. This behavior may involve the random loss or removal of information. Reinforcement is the opposite of forgetting; it is a way to consolidate information firmly. The proposed forgetting–reinforcement strategy imitates the behaviors of people forgetting information and strengthening their memories. It can discard some or all of the information in one particle to avoid local optima and generate new particles to maintain the diversity of the swarm. In LM-SSPSO, a forgetting probability function and a reinforcement probability function are introduced to increase the diversity of the swarm. The flowchart of the forgetting–reinforcement strategy is shown in Figure 8, and the forgetting probability formula and reinforcement probability formula are described in Formulas (19) and (20), respectively.
The forgetting–reinforcement strategy is applied after the analysis strategy of LM-SSPSO, as shown in Figure 6. It generates a new particle in a novel way based on the swap sequence idea. In the forgetting–reinforcement strategy, two probability parameters, the forgetting probability and the reinforcement probability, are used to determine the proper strategy. If a random number is less than the forgetting probability, a random particle is generated; otherwise, another random number is generated to determine whether the swap sequence strategy is used to generate a new particle.
The forgetting probability formula is:
$$P_{forg} = \cos\left(\frac{k\pi}{2(k+1)}\right)$$
The reinforcement probability formula is:
$$P_{rein} = \sin\left(\frac{k\pi}{2(k+1)}\right)$$
The relationship between the forgetting probability and the reinforcement probability is:
$$P_{forg}^{2} + P_{rein}^{2} = 1$$
where k is the number of iterations, and R a n d f and R a n d r in Figure 8 are random variables between zero and one. When using the reinforcement strategy to generate a new particle, the swap sequence method is used; the exact operations are detailed in the subsections “Swap Sequence-Based PSO” and “Comparison Strategy and Forgetting–Reinforcement Strategy for Swarm Optimization”.
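Formulas (19)–(21) translate directly into code. Note the schedule they imply: at k = 0 the forgetting probability is 1 (the strategy favors generating fresh random particles early on), and as k grows the angle approaches π/2, so forgetting decays toward 0 while reinforcement approaches 1.

```python
import math

def forgetting_probability(k):
    """P_forg = cos(k*pi / (2*(k+1))) -- Formula (19)."""
    return math.cos(k * math.pi / (2 * (k + 1)))

def reinforcement_probability(k):
    """P_rein = sin(k*pi / (2*(k+1))) -- Formula (20)."""
    return math.sin(k * math.pi / (2 * (k + 1)))
```

The identity of Formula (21), P_forg² + P_rein² = 1, holds automatically because the two probabilities are the cosine and sine of the same angle.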
Analysis is the process of studying the essence and the inner connection of one object. In LM-SSPSO, this means obtaining a desirable particle by swapping the nodes sequence of a particle.
Divergent thinking is a diffuse state of thinking. It is manifested as a broad vision of thinking, which presents multidimensional divergence. Divergent thinking obtains a new particle by randomly generating a new solution under certain conditions.
Retention is the process of maintaining optimal results. In the algorithm, retention is used to keep the better particles for the next iteration. It works on the swarm and accelerates the convergence of the algorithm.

4.3.2. Swap Sequence-Based PSO

The swap-sequence-based PSO is an effective method for solving the TSP [45]. It shares the framework of traditional PSO but adds new features based on the swap sequence idea. In SSPSO, a swap sequence (SS) is defined as an ordered arrangement of one or more swap operators: SS = ( S O 1 , S O 2 ,…, S O n ), where S O 1 , S O 2 ,…, S O n are swap operators. In the velocity and position update formulas, the position represents a complete TSP tour, and the velocity represents the swap sequence that transforms one tour into another. The position and velocity update formulas are as follows:
$$v_{id}^{k+1} = w_{ss} v_{id}^{k} \oplus r_{ind}\left(pbest_{id} \ominus x_{id}^{k}\right) \oplus r_{gol}\left(gbest_{d} \ominus x_{id}^{k}\right)$$
$$x_{id}^{k+1} = x_{id}^{k} \oplus v_{id}^{k+1}$$
where the operator ⊕ means that the two swap sequences are merged to form a new swap sequence. For example, for the swap operators S O 1 = (1,2) and S O 2 = (2,3), SS = S O 1 S O 2 = ((1,2),(2,3)). The ⊖ operator indicates the subtraction of two TSP paths. For example, Path1 = {1,2,3,4} and Path2 = {1,2,4,3}, Path1⊖Path2 = SO(3,4). In contrast with Formula (18), r i n d in Formula (22) denotes the probability that all of the swap operators in the swap sequence ( p b e s t i d x i d k ) are used, and r g o l denotes the probability that all the swap operators in the swap sequence ( g b e s t d x i d k ) are used. An SS acts on a particle by applying all of the operators and finally producing a new particle.
Each particle updates its position and velocity by considering Formulas (22) and (23). The formula of weight coefficient w is:
$$w_{ss} = w_{1} - (w_{1} - w_{2}) \times \frac{Iter_{ss}}{Iteration_{ss}}$$
where w s s represents the weight coefficient of the velocity update in the formula. w 1 = 1, w 2 = 0.5; I t e r s s represents the current iteration count, and I t e r a t i o n s s represents the maximum number of iterations.
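The ⊖ and ⊕ operators can be sketched as follows: `subtract` builds the swap sequence that turns one tour into another, and `apply_sequence` plays a sequence back onto a tour, optionally keeping each operator only with the probability r_ind or r_gol from Formula (22). This is a generic swap-sequence implementation, not the authors' exact code.

```python
import random

def subtract(path_a, path_b):
    """path_a ⊖ path_b: a swap sequence (list of index-pair operators)
    that transforms path_b into path_a."""
    b = list(path_b)
    seq = []
    for i, city in enumerate(path_a):
        j = b.index(city)
        if i != j:
            b[i], b[j] = b[j], b[i]  # put the right city into slot i
            seq.append((i, j))
    return seq

def apply_sequence(path, seq, prob=1.0, rng=None):
    """path ⊕ seq: apply each swap operator in order; each operator is
    kept with probability prob (the r_ind / r_gol factor)."""
    rng = rng or random
    p = list(path)
    for i, j in seq:
        if prob >= 1.0 or rng.random() < prob:
            p[i], p[j] = p[j], p[i]
    return p
```

With the paper's example, Path1 = {1,2,3,4} and Path2 = {1,2,4,3} differ by a single swap of the third and fourth slots, i.e., SO(3,4) in the paper's 1-based notation, which appears here as the 0-based operator (2, 3).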

4.3.3. Association and Analysis Strategy for Initial Solution Optimization

Generally, the initial solutions play an important role in the convergence of the swarm. Before the proposed association and analysis strategies are used, we optimize the initial solutions using the greedy idea. The greedy algorithm is based on local-optimum thinking. Its solutions do not always coincide with the globally optimal path, but they are always better than a stochastic path. Considering the greedy algorithm's characteristics, we use the greedy idea to obtain locally optimal initial particles. Then, based on the greedy solutions, the association strategy imitates the right-angled-triangle swap method and swaps the corresponding nodes of the right-angled side and the hypotenuse, as shown in Figure 7. The operations are described as follows:
Step 1: Randomly generate N pop (the population size of the swarm) sets of paths.
Step 2: Generate a set path using the greedy algorithm. We define the START node as the initial node and plan the path using the greedy algorithm. For example, the first set in Figure 9a is a path that is planned by the greedy algorithm.
Step 3: Generate more sets of paths using the association strategy. First, we generate n g r o u p (a constant, n g r o u p < N p o p ) sets of paths that are the same as the path generated in Step 2. Then, we number the paths from 1 to n g r o u p . Finally, we use the association strategy and imitate the right-angled triangle to swap the ith node of the ith set path with the first node, as shown in Figure 9b, where n g r o u p = 16.
Step 4: Generate new paths using the analysis strategy. We generate n a n a (for example, 10) sets of paths that are the same as the path in Step 2. We then use the analysis strategy to randomly select two nodes in each set path and swap the selected nodes to generate new paths.
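Steps 2 and 3 above can be sketched as follows (0-based indices throughout; the standard nearest-neighbor rule stands in for the paper's greedy planner, and the function names are ours):

```python
def greedy_tour(dist, start=0):
    """Step 2 sketch: nearest-neighbor tour from the START node."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])  # closest unvisited
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def association_paths(base, n_group):
    """Step 3 sketch (association strategy): copy the greedy path
    n_group times and swap the i-th node of copy i with the first node."""
    paths = []
    for i in range(n_group):
        p = list(base)
        p[0], p[i] = p[i], p[0]
        paths.append(p)
    return paths
```

Copy 0 is the unmodified greedy tour, and each subsequent copy differs from it by exactly one swap against the first node, giving the swarm a cluster of near-greedy starting points.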

4.3.4. Comparison Strategy and Forgetting–Reinforcement Strategy for Swarm Optimization

In this subsection, we introduce the comparison and forgetting–reinforcement strategies into the PSO algorithm to avoid local optima and to maintain the diversity of the swarm. First, we use the comparison strategy to obtain new particles by comparing the distances from the same node of two particles to their adjacent nodes. If the fitness of the new particle is smaller, we update it. Secondly, we optimize the partial nodes of each particle with the individual best solution and the global best solution, in turn. If the fitness of the new particle is smaller, we update the particle. Finally, the forgetting–reinforcement strategy is used to maintain the diversity of the swarm; the operations are as follows:
Process 1. Optimize the particles by considering the distance from the same node of two particles to their adjacent nodes.
Step 1: Generate a group of particles for comparison. First, we generate n random variables between 0 and 1, and then we compare the generated n random numbers with the comparison probability P c . If the ith number is less than P c , we select the ith particle for comparison, then a selected particle group P s e l e c t is obtained.
Step 2: Generate a partial optimization particle using the comparison strategy. We select two particles in the P s e l e c t and randomly generate an integer m G A R as the first position of the new particle. Next, we find the index of m G A R in the two selected particles, separately, and compare the distance between m G A R and its right node in the two selected particles. We insert the node with a shorter distance into the right node of the new particle.
Step 3: Generate one new particle. We use the newly added node as the position to be compared and perform the same operations with the right nodes in the selected particles as for the position m G A R . We use this method to handle all of the remaining positions until all the nodes have been added to the new particle.
Step 4: Generate more new particles. We select m G A L (equal to m G A R ) as the first position of another new particle, perform the operations described in Steps 2–3, and compare the distance from the corresponding node to its left node in the two selected particles. Next, we add the left node with a shorter distance into the left position of the new particle until another new particle is obtained.
Step 5: Deal with all the particles. We deal with all the particles in P s e l e c t as described in Steps 2–4 until all the particles have been compared or only one particle is yet to be compared.
Step 6: Update the individual best fitness and global best fitness.
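Steps 2–4 of Process 1 can be sketched as a comparison-based merge of two tours. This is our reading of the procedure (growing to the right only, for brevity): from the current node, the nearer of the two successors taken from the parent tours is appended, falling back to the nearest unused city when both successors are already used. Names and the tie-handling rule are illustrative.

```python
def compare_merge(p1, p2, dist, start_city):
    """Build a new tour that starts at start_city (the random m_GAR) and
    repeatedly appends whichever successor (in p1 or p2) is nearer."""
    n = len(p1)

    def succ(path, city):
        i = path.index(city)
        return path[(i + 1) % n]  # right-hand neighbor in that tour

    new, used = [start_city], {start_city}
    while len(new) < n:
        last = new[-1]
        cands = [c for c in (succ(p1, last), succ(p2, last)) if c not in used]
        if cands:
            nxt = min(cands, key=lambda c: dist[last][c])
        else:  # both successors already placed: take nearest unused city
            nxt = min((c for c in p1 if c not in used), key=lambda c: dist[last][c])
        new.append(nxt)
        used.add(nxt)
    return new
```

Step 4 of the paper repeats the same idea growing leftward from m_GAL; the symmetric variant is omitted here.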
Process 2. Optimize the partial nodes of each particle with a comparison strategy.
Step 1: Select a segment for comparison. We randomly generate two different integers, Position1 and Position2 (Position1 < Position2), which are no greater than the dimension of the particle (e.g., 16), and use the two randomly generated integers as indexes of the path positions. We select the positions from index Position1 to index Position2 as pathc. For example, if the historical best path is { 1, 4, 7, 6, 8, 10, 2, 9, 3, 5, 11, 13, 12, 14, 16, 15 } , Position1 = 6, and Position2 = 8, then the pathc between Position1 and Position2 is { 10, 2, 9 } .
Step 2: Delete the comparison segment. We delete the positions in the historical best path that are the same as the positions in the cross-segment (pathc) and form a path that is made up of the remaining positions.
Step 3: Add the comparison segment. We add the path (pathc) to the end of the path generated in Step 2 and judge whether the fitness of the new solution is smaller. If it is, we update the position and fitness of the new particle.
Step 4: Deal with the global best path. We carry out the same operations on the global best path as those conducted for the historical best path.
Step 5: Deal with the particles using the analysis strategy. First, we randomly generate two unequal indexes: Position3 and Position4 (Position3 < Position4). Then, we swap the node of index Position3 with the node of index Position4. Finally, we judge whether the fitness of the solution is reduced. If it is indeed reduced, we update the position and fitness of the new particle.
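Steps 1–3 and Step 5 of Process 2 reduce to two small operations, sketched here with 0-based positions (the paper's example uses 1-based positions 6–8, which correspond to indexes 5–7):

```python
def relocate_segment(path, pos1, pos2):
    """Process 2, Steps 1-3 sketch: cut path[pos1:pos2+1] out of the
    path and append it to the end of the remaining nodes."""
    seg = path[pos1:pos2 + 1]
    rest = [c for c in path if c not in seg]
    return rest + seg

def swap_two(path, pos3, pos4):
    """Analysis strategy (Step 5): swap the nodes at two indexes."""
    p = list(path)
    p[pos3], p[pos4] = p[pos4], p[pos3]
    return p
```

In the full algorithm, each candidate produced by these operations replaces the original particle only if its tour fitness is smaller.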
Process 3. Forgetting–reinforcement strategy for maintaining the diversity of the swarm.
As shown in Figure 8, the process of the forgetting–reinforcement strategy is as follows. First, we generate a random variable R a n d f and determine whether it is smaller than the forgetting probability P f o r g . If it is, we select one particle (the ith) from swarm S, randomly generate one particle, and calculate its fitness F i t n e s s r a n d . If F i t n e s s r a n d is less than the selected particle's fitness, we update the position and fitness of the ith particle with the randomly generated particle. If R a n d f is not less than P f o r g , we generate another random variable R a n d r and determine whether it is smaller than the reinforcement probability P r e i n . If it is, we change the sequence of the swarm using the swap sequence strategy and obtain a new swarm SS. We compare the ith particle's fitness in swarm SS with the ith particle's fitness in swarm S; if it is smaller, we update the particle.

4.3.5. Divergent Thinking Strategy to Avoid a Local Optimum

Divergent thinking manifests as a broad vision of thinking that presents multidimensional divergence. This kind of thinking, with its strong randomness, can exert more imagination under minimal constraints; the operations are as follows:
Step 1: Initialize parameters. We assign a constant heuristic probability H e u (between 0 and 1); then, we randomly generate a number between 0 and 1 and name it rand_Divergent .
Step 2: Generate a new solution vector for the LM-SSPSO algorithm. We judge whether rand_Divergent is less than H e u or not. If rand_Divergent is less, then we select a set of solutions from the swarm and define it as NEW_Dt ; otherwise, we randomly generate a set of solutions and define it as NEW_Dt .
Step 3: Optimize the worst solution in the swarm with the divergent thinking solution by considering the comparison and analysis strategies. We perform the same comparison and analysis operations as the process 2 strategies in the subsection “Comparison strategy and forgetting-reinforcement strategy for swarm optimization” on the NEW_Dt with the worst solution in the swarm.

4.3.6. Retention Strategy for Swarm Optimization

To speed up the convergence of the algorithm, the retention strategy is used to reserve excellent individuals for the next iteration; the operations are as follows:
Step 1: Reserve the top-fitness particles. We sort all the particles by fitness, from minimum to maximum, and reserve the top N r e s ( N r e s < N p o p ) particles for the following operation.
Step 2: Generate a new particle. We use roulette wheel selection to reserve particles from the initial swarm and form a new particle swarm.
Step 3: Compare and reserve better particles. We compare the fitness of the top N r e s set of particles generated in Step 2 with the top N r e s set of particles in step 1. We reserve the particle with minimum fitness as the corresponding new particle.
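The retention steps can be sketched as below; the inverse-fitness roulette weights are an assumption for this minimisation setting, not a detail from the paper:

```python
import random

def retention(swarm, fitness_fn, n_res):
    """Retention step (a sketch): keep the n_res fittest particles, draw
    n_res more by roulette-wheel selection, then keep the elementwise
    better of the two sets."""
    ranked = sorted(swarm, key=fitness_fn)
    elite = [p[:] for p in ranked[:n_res]]            # Step 1
    # Step 2: roulette wheel on inverse fitness so shorter tours are likelier
    weights = [1.0 / (1e-9 + fitness_fn(p)) for p in swarm]
    drawn = [random.choices(swarm, weights=weights)[0][:]
             for _ in range(n_res)]
    # Step 3: pairwise keep whichever particle has the smaller fitness
    kept = [e if fitness_fn(e) <= fitness_fn(d) else d
            for e, d in zip(elite, drawn)]
    return kept
```

Since the first elite particle is the global best of the swarm, the pairwise comparison in Step 3 guarantees the best fitness found so far survives into the next iteration.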

4.4. Performance Analysis of LM-SSPSO with Other Approaches for Multiple Goal Positions Path Planning

To evaluate the performance of the proposed LM-SSPSO algorithm, we compared it with several other algorithms. The resolution of the environment map used to compare the algorithms was 180 × 180, and the distances between any two positions are detailed in Table 5. To verify the performance, each algorithm was run 50 times. The parameter settings are shown in Table 9:
N_pop represents the population size of the swarm; T_max represents the maximum number of iterations; n_ana represents the number of particles selected from the swarm for the analysis strategy; P_c represents the comparison probability; Heu represents the heuristic probability; N_res represents the number of reserved particles; and HMCR represents the harmony memory considering rate. P_cross represents the crossover probability; P_mutate represents the mutation probability; alpha and beta represent the relative importance of the pheromone information and heuristic information, respectively; rho represents the pheromone evaporation coefficient; Q represents the pheromone increase intensity coefficient; and TabuLength represents the Tabu list length. We compared the minimum value, the number of optimal solutions, the maximum value, and the average value over the 50 calculations to verify the effectiveness of LM-SSPSO. The results are shown in Table 10:
The minimum value is the smallest result among the 50 loop computations. The number of minimum values counts how many of the 50 loops reached each algorithm's minimum value. Each loop comprises 100 iterations (Table 9) and yields one optimal solution; the maximum value is the largest of these 50 optimal solutions, and the average value is their mean. The optimal path is 1→2→4→7→11→13→12→16→15→14→10→9→8→6→5→3→1. As shown in Table 10, its total distance is 560.87; LM-SSPSO, GA, SSPSO, ACO, and TS can all find this shortest path through the multiple goal positions within the 50 calculations. Compared with the other algorithms, LM-SSPSO shows better performance over the 50 runs, and its maximum value is also the smallest. The path planning results of the proposed algorithm and the training process are shown in Figure 10.

4.5. Postprocess of the Optimized Path

When planning the path, the search area is limited to the local adjacent nodes, so the optimal path may not be obtained. At the same time, although the variable goal positions in the bidirectional search reduce the calculation time, the final path may contain new redundant positions that reduce the efficiency of the robot's movement. To solve this problem, we optimize the path with a postprocessing stage. In Figure 11, the yellow area represents the obstacle, the purple dot Node_i is the initial node, the red dot Node_t is the goal node, and the blue dots are waypoints. Because the number of waypoints is significantly higher than in Figure 4, the proposed judgment formulas are unsuitable for this more complicated postprocessing, and we instead use sample points to complete the obstacle avoidance test. The postprocessing is described as follows:
Step 1: Initialization. Initialize the obstacle, initial position, waypoint, and goal position.
Step 2: Delete the redundant points and optimize the path. As shown in Figure 11b, we assign the goal node as Node_A and the adjacent point as Node_B. We connect Node_A and Node_B with a dotted line, divide the dotted line into N_dot (for example, N_dot = 3) equal parts (Figure 12), and judge whether the division points (black points) lie in the obstacle area. If not, we select the next point as Node_B, connect Node_A and Node_B to form a new dotted line, and obtain new division points. We then judge whether the new division points lie in the obstacle; if so, as shown in Figure 11d, we connect Node_A with the previous point of Node_B using a solid line and delete the redundant points, as shown in Figure 11e.
Step 3: Optimize the remaining path and obtain the final optimized path. We redefine the previous point of Node_B from Step 2 as Node_A, repeatedly analyze the dotted line between Node_A and Node_B as in Step 2, and delete redundant points until Node_B is the initial point and Node_A is its previous point. The final optimized path is shown in Figure 11f.
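Steps 2 and 3 amount to a greedy line-of-sight pruning pass from the goal end. A minimal sketch follows; the in_obstacle collision test and the (x, y) waypoint representation are assumed helpers for illustration:

```python
def prune_path(path, in_obstacle, n_dot=3):
    """Path postprocessing (a sketch of Steps 2-3): starting from the goal
    node, connect each anchor node to the farthest later node whose straight
    connection stays obstacle-free, sampled at n_dot interior points.

    path: list of (x, y) waypoints ordered initial -> goal.
    in_obstacle: (x, y) -> bool collision test (an assumed helper)."""
    def segment_free(a, b):
        # sample n_dot equally spaced interior points on segment a-b
        for k in range(1, n_dot + 1):
            t = k / (n_dot + 1)
            x = a[0] + t * (b[0] - a[0])
            y = a[1] + t * (b[1] - a[1])
            if in_obstacle((x, y)):
                return False
        return True

    pts = path[::-1]                  # start from the goal node (Node_A)
    pruned = [pts[0]]
    i = 0
    while i < len(pts) - 1:
        j = i + 1
        # extend Node_B as far as the straight line stays collision-free
        while j + 1 < len(pts) and segment_free(pts[i], pts[j + 1]):
            j += 1
        pruned.append(pts[j])         # intermediate waypoints are deleted
        i = j
    return pruned[::-1]               # back to initial -> goal order
```

With larger n_dot the collision test is more conservative; n_dot = 3 mirrors the example in the text.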
The postprocess results are presented in Figure 13.

5. The Analysis of the Improved A* Algorithm and LM-SSPSO in Different Environment Maps

To further verify the effectiveness of the proposed path planning approach, in this section, we compare it with existing approaches in three different environment maps. The environment maps contain various obstacles and different distributions of the goal (suspicious) positions. The information is presented in Figure 14.
The coordinates of the initial position and all of the goal positions are shown in Table 11.

5.1. Comparison Analysis of the Computation Time between Any Two Positions at Different Resolutions

All the parameters were set as shown in Table 9. The computational times of the improved A* algorithms proposed in this paper are compared with those of the traditional A* algorithm below.
From Table 12, Table 13, Table 14 and Table 15, we obtained the following results.
(1) In the case of the different environment maps, the calculation time of the bidirectional dynamic OpenList cost weight A* algorithm is always less than that of the traditional A* algorithm. The improved A* algorithm has a shorter runtime and higher route planning efficiency.
(2) The calculation time of the bidirectional dynamic OpenList cost weight A* algorithm increases more slowly as the resolution increases. When the resolution of the environment map changes from 120 × 120 to 180 × 180 and from 120 × 120 to 240 × 240, the calculation-time multiples of the traditional A* algorithm in map1 are 2.27 and 5.91, while those of the proposed weighted A* algorithm are 2.01 and 4.93. For map2, the multiples of the traditional A* algorithm are 3.13 and 6.86, and those of the proposed weighted A* algorithm are 1.92 and 3.25. For map3, the multiples of the traditional A* algorithm are 2.51 and 6.56, and those of the bidirectional dynamic OpenList cost weight A* algorithm are 2.11 and 3.65. As shown in Table 15, the proposed A* algorithm performs best across the different environment maps and consumes the least time.
(3) The results in Table 12, Table 13 and Table 14 show that our path planning method takes less calculation time, especially as the resolution increases. It has significant advantages in different environment maps and performs better than the traditional A* algorithm.

5.2. The Comparison Analysis of the Shortest Distance between Any Two Positions in Different Environment Maps

To analyze the effect of the improved strategy on the shortest distance between any two positions, we set the resolution of the environment maps to 180 × 180; the distance matrix over all 16 positions was calculated using the 8 adjacent nodes and the 16 adjacent nodes strategies separately. The distances between the corresponding pairs of positions were also compared, and the results are shown in Table 16, Table 17 and Table 18, where bold font indicates the better solution with smaller fitness.
The simulation results in Table 19 were used to analyze the effect of the number of adjacent nodes on the shortest distance path. The results show that the 16 adjacent nodes strategy performs better than the 8 adjacent nodes method and is stable and effective at finding the shortest path between any two positions in the different maps. To obtain a shorter path, the shorter of the two distances generated with 16 adjacent nodes and 8 adjacent nodes is selected for each pair of positions to form a new distance matrix (Table 20, Table 21 and Table 22) for the following optimization process.
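Assembling the combined matrix is an elementwise minimum over the two distance matrices; a minimal sketch, assuming both are given as nested lists:

```python
def combine_distance_matrices(d8, d16):
    """Form the final distance matrix by taking, for each pair of goal
    positions, the shorter of the 8-neighborhood and 16-neighborhood path
    lengths (a sketch of how the combined tables are assembled)."""
    n = len(d8)
    return [[min(d8[i][j], d16[i][j]) for j in range(n)]
            for i in range(n)]
```

The result stays symmetric whenever both inputs are, so it can feed the TSP-style optimization directly.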

5.3. Performance Analysis of LM-SSPSO with Other Approaches for Traversing All Goal Positions for Different Environment Maps

In Table 23, Table 24, Table 25 and Table 26 and Figure 15, Figure 16 and Figure 17, the parameter settings are the same as in Table 9. The numbers of optimal solutions obtained using the different algorithms are shown below.
As shown in Table 23, SSPSO, GA, TS, and LM-SSPSO successfully found the optimal path for all three environment maps. Compared with the other intelligent algorithms, the LM-SSPSO showed better performance and a higher probability of obtaining the optimal path in the different maps.
The optimal paths of map1, map2, and map3 are 1→2→3→4→16→15→14→13→12→11→5→8→7→6→10→9→1, 1→14→16→12→15→13→11→9→8→10→6→7→5→3→2→4→1, and 1→3→2→4→5→7→6→8→9→13→14→11→10→12→15→16→1, with shortest distances of 478.1, 548.966, and 656.22, respectively. The algorithm converges quickly: the iterations at which the optimal solution was first obtained for map1, map2, and map3 are 5, 5, and 12, respectively.
According to Table 24, Table 25 and Table 26, LM-SSPSO is much more effective than the other algorithms in terms of the minimum, maximum, and average values across the different maps. The convergence speed of the particles is also significantly improved, as shown in Figure 15, Figure 16 and Figure 17. The solutions calculated using LM-SSPSO are consistently better than those of the other algorithms, owing to its optimized initial solutions, its mechanisms for avoiding local optima, and its optimal selection.
The path generated by the proposed algorithm with postprocessing is shown in Figure 18.

6. Conclusions

In this paper, we introduced a novel approach to multiple-goal-position path planning that solves two problems: finding the shortest distance between any two positions, and finding the shortest path that traverses all the goal positions. For the first problem, we introduced a bidirectional strategy and a dynamic OpenList cost weight strategy into the A* algorithm. As the resolution of the first environment map changed from 120 × 120 to 180 × 180 and from 120 × 120 to 240 × 240, the calculation-time multiples of the proposed algorithm were 1.90 and 3.16, respectively, significantly less than those of the traditional A* algorithm. When compared across different weight coefficients, the proposed algorithm required the shortest time, 50.27 s, at 180 × 180 resolution. Based on the proposed time optimization strategy, we also extended the search from 8 to 16 adjacent nodes and introduced a new obstacle judgment formula. Compared with the 8 adjacent nodes' distance matrix, 115 path lengths in the 16 adjacent nodes matrix were shorter, and the others were equal. Compared with the GORRT algorithm, 119 path lengths with 16 adjacent nodes were shorter, and 1 was longer. For the second problem, we developed the LM-SSPSO strategy to purposefully optimize the whole PSO process. First, the initial solutions are optimized using the greedy algorithm, the association strategy, and the analysis strategy to obtain locally optimal solutions. Then, the comparison, forgetting–reinforcement, and divergent thinking strategies are used to maintain diversity and accelerate particle convergence. The swap sequence strategy is used in the forgetting–reinforcement process, and a forgetting probability function and a reinforcement probability function are used to avoid local optima.
Finally, the retention idea was also introduced to obtain a better swarm. Compared with six other intelligent algorithms, LM-SSPSO showed the best performance, achieving the smallest minimum, maximum, and average path lengths in the 50 loop computations, with a minimum of 560.87. It also achieved a higher success rate in finding the optimal solution. After obtaining the multiple-goal-position path, redundant points were removed in the postprocessing stage. Additionally, comparative experiments were analyzed on three new environment maps; the simulation results further verified that the proposed algorithm performs efficiently across different environment maps.
In future work, we will improve our proposed BD-A* algorithm with novel criteria to reduce the number of searched nodes and will add a guide line to accelerate the search process and optimize the calculation time. We will optimize the swarm size of LM-SSPSO and analyze the influence of the different strategies on its performance so as to improve the effectiveness of the LM-SSPSO algorithm. Additionally, we will focus our research on multiobjective function optimization, including torque, global energy reduction, shortest distance, etc., and will also consider dynamic environments. We may apply our path planning method with a Pareto set strategy and create a real-time application of multiple-goal-position path planning in a dynamic environment.

Author Contributions

Conceptualization, M.L.; methodology, M.L.; software, M.L. and L.Q.; validation, J.J.; formal analysis, M.L. and L.Q.; investigation, M.L. and L.Q.; resources, L.Q. and J.J.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, L.Q.; visualization, M.L. and J.J.; supervision, M.L.; project administration, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: Particle Swarm Optimization
GA: Genetic Algorithm
HS: Harmony Search
TSP: Traveling Salesman Problem
EOD: Explosive Ordnance Disposal
RRT: Rapidly Exploring Random Tree
TS: Tabu Search
ACO: Ant Colony Optimization

References

  1. Zhao, J.; Han, T.; Ma, X.; Ma, W.; Liu, C.; Li, J.; Liu, Y. Research on Kinematics Analysis and Trajectory Planning of Novel EOD Manipulator. Appl. Sci. 2021, 11, 9438. [Google Scholar] [CrossRef]
  2. Tran, J.; Ufkes, A.; Fiala, M.; Ferworn, A. Low-Cost 3D Scene Reconstruction for Response Robots in Real-time. In Proceedings of the 2011 IEEE International Symposium on Safety, Security and Rescue Robotics, Kyoto, Japan, 1–5 November 2011. [Google Scholar]
  3. Rudakevych, P.; Clark, S.; Wallace, J. Integration of the Fido Explosives Detector onto the PackBot EOD UGV. In Unmanned Systems Technology IX, Proceedings of the Defense and Security Symposium, Orlando, FL, USA, 9–13 April 2007; SPIE: Bellingham, MA, USA, 2007. [Google Scholar]
  4. Odedra, S.; Prio, S.; Karamanoglu, M.; Shen, S.T. Increasing the trafficability of unmanned ground vehicles through intelligent morphing. In Proceedings of the 2009 ASME/IFToMM International Conference on Reconfigurable Mechanisms and Robots, London, UK, 22–24 June 2009. [Google Scholar]
  5. Guo, J. Research on the application of intelligent robots in explosive crime scenes. Int. J. Syst. Assur. Eng. Manag. 2023, 14, 626–634. [Google Scholar] [CrossRef]
  6. Li, X.; Meng, C.; Liang, J.; Wang, T. Research on Simulation and Training System for EOD Robots. In Proceedings of the IEEE 2006 4th IEEE International Conference on Industrial Informatics, Singapore, 16–18 August 2006. [Google Scholar]
  7. Chen, Y.; Luo, G.; Mei, Y.; Yu, J.; Su, X. UAV path planning using artificial potential fieldmethod updated by optimal control theory. Int. J. Syst. Sci. 2016, 47, 1407–1420. [Google Scholar] [CrossRef]
  8. Chen, C.-M.; Lv, S.; Ning, J.; Wu, J.M.-T. A Genetic Algorithm for the Waitable Time-Varying Multi-Depot Green Vehicle Routing Problem. Symmetry 2023, 15, 124. [Google Scholar] [CrossRef]
  9. Cheng, K.P.; Mohan, R.E.; Nhan, N.H.K.; Le, A.V. Multi-Objective Genetic Algorithm-Based Autonomous Path Planning for Hinged-Tetro Reconfigurable Tiling Robot. IEEE Access 2020, 8, 121267–121284. [Google Scholar] [CrossRef]
  10. Yi, J.; Chu, C.H.; Kuo, C.L.; Li, X.; Gao, L. Optimized tool path planning for five-axis flank milling of ruled surfaces using geometric decomposition strategy and multi-population harmony search algorithm. Appl. Soft Comput. 2018, 73, 547–561. [Google Scholar] [CrossRef]
  11. Wang, H.; Qi, X.; Lou, S.; Jing, J.; He, H.; Liu, W. An Efficient and Robust Improved A* Algorithm for Path Planning. Symmetry 2021, 13, 2213. [Google Scholar] [CrossRef]
  12. Dang, C.V.; Ahn, H.; Lee, D.S.; Lee, S.C. Improved Analytic Expansions in Hybrid A-Star Path Planning for Non-Holonomic Robots. Appl. Sci. 2022, 12, 5999. [Google Scholar] [CrossRef]
  13. Zhang, J.; Zhang, J.; Zhang, Q.; Wei, X. Obstacle Avoidance Path Planning of Space Robot Based on Improved Particle Swarm Optimization. Symmetry 2022, 14, 938. [Google Scholar] [CrossRef]
  14. Ajeila, F.H.; Ibraheema, I.K.; Sahibb, M.A.; Humaidic, A.J. Multi-objective path planning of an autonomous mobile robot using hybrid PSO-MFB optimization algorithm. Appl. Soft Comput. 2020, 89, 106076. [Google Scholar] [CrossRef]
  15. Zhou, H.; Jiang, Z.; Xue, Y.; Li, W.; Cai, F.; Li, Y. Research on Path Planning in 3D Complex Environments Based on Improved Ant Colony Algorithm. Symmetry 2022, 14, 1917. [Google Scholar] [CrossRef]
  16. Zhang, H.; Wang, Y.; Zheng, J.; Yu, J. Path planning of industrial robot based on improved RRT algorithm in complex environments. IEEE Access 2018, 6, 53296–53306. [Google Scholar] [CrossRef]
  17. Feng, J.; Zhang, W. An Efficient RRT Algorithm for Motion Planning of Live-Line Maintenance Robots. Appl. Sci. 2021, 11, 10773. [Google Scholar] [CrossRef]
  18. Li, H.; Yang, S.X.; Seto, M.L. Neural-network-based path planning for a multirobot system with moving obstacles. IEEE Trans. Syst. Man Cybern. Part C 2009, 39, 410–419. [Google Scholar] [CrossRef]
  19. Pak, J.; Kim, J.; Park, Y.; Son, H.I. Field Evaluation of Path-Planning Algorithms for Autonomous Mobile Robot in Smart Farms. IEEE Access 2022, 10, 60253–60266. [Google Scholar] [CrossRef]
  20. Tian, Z.; Guo, Z.; Liu, Y.; Chen, J. An improved RRT robot autonomous exploration and SLAM construction method. In Proceedings of the 2020 5th International Conference on Automation, Control and Robotics Engineering (CACRE), Dalian, China, 15–20 September 2020. [Google Scholar]
  21. Dong, Y.; Camci, F. Faster RRT-based Nonholonomic Path Planning in 2D Building Environments Using Skeleton-constrained Path Biasing. J. Intell. Robot. Syst. 2018, 89, 387–401. [Google Scholar] [CrossRef]
  22. Wang, Z.; Xiang, X.; Yang, J.; Yang, S. Composite Astar and B-spline algorithm for path planning of Autonomous Underwater Vehicle. In Proceedings of the 2017 IEEE 7th International Conference on Underwater System Technology: Theory and Applications (USYS), Kuala Lumpur, Malaysia, 18–20 December 2017. [Google Scholar]
  23. Zhang, L.; Li, Y. Mobile Robot Path Planning Algorithm Based on Improved A Star. In Proceedings of the 2021 4th International Conference on Advanced Algorithms and Control Engineering (ICAACE 2021), Sanya, China, 29–31 January 2021. [Google Scholar]
  24. Yang, R.; Cheng, L. Path planning of restaurant service robot based on a-star algorithms with updated weights. In Proceedings of the 2019 12th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 14–15 December 2019. [Google Scholar]
  25. Le, A.V.; Prabakaran, V.; Sivanantham, V.; Mohan, R.E. Modified A-Star Algorithm for Efficient Coverage Path Planning in Tetris Inspired Self-Reconfigurable Robot with Integrated Laser Sensor. Sensors 2018, 18, 2585. [Google Scholar] [CrossRef]
  26. Shang, E.; Dai, B.; Nie, Y.; Zhu, Q.; Xiao, L.; Zhao, D. A guide-line and key-point based a-star path planning algorithm for autonomous land vehicles. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020. [Google Scholar]
  27. Tang, G.; Tang, C.; Claramunt, C.; Hu, X.; Zhou, P. Geometric A-star algorithm: An improved A-star algorithm for AGV path planning in a port environment. IEEE Access 2021, 9, 59196–59210. [Google Scholar] [CrossRef]
  28. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  29. Guo, X.; Ji, M.; Zhao, Z.; Wen, D.; Zhang, W. Global path planning and multi-objective path control for unmanned surface vehicle based on modified particle swarm optimization (PSO) algorithm. Ocean Eng. 2020, 216, 107693. [Google Scholar] [CrossRef]
  30. Mistry, K.; Zhang, L.; Neoh, S.C.; Lim, C.P.; Fielding, B. A Micro-GA embedded PSO feature selection approach to intelligent facial emotion recognition. IEEE Trans. Cybern. 2017, 47, 1496–1509. [Google Scholar] [CrossRef]
  31. Jiang, G.; Luo, M.; Bai, K.; Chen, S. A Precise Positioning Method for a Puncture Robot Based on a PSO-Optimized BP Neural Network Algorithm. Appl. Sci. 2017, 7, 969. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Gong, D.; Zhang, J. Robot path planning in uncertain environment using multi-objective particle swarm optimization. Neurocomputing 2013, 103, 172–185. [Google Scholar] [CrossRef]
  33. Rath, M.K.; Deepak, B.B.V.L. PSO based system architecture for path planning of mobile robot in dynamic environment. In Proceedings of the 2015 Global Conference on Communication Technologies (GCCT), Thuckalay, India, 23–24 April 2015. [Google Scholar]
  34. Yang, B.; Yan, J.; Cai, Z.; Ding, Z.; Li, D.; Cao, Y.; Guo, L. A Novel Heuristic Emergency Path Planning Method Based on Vector Grid Map. ISPRS Int. J. Geo-Inf. 2021, 10, 370. [Google Scholar] [CrossRef]
  35. Majeed, A.; Lee, S. A Fast Global Flight Path Planning Algorithm Based on Space Circumscription and Sparse Visibility Graph for Unmanned Aerial Vehicle. Electronics 2018, 7, 375. [Google Scholar] [CrossRef]
  36. Ayawli, B.B.K.; Mei, X.; Shen, M.; Appiah, A.Y.; Kyeremeh, F. Mobile robot path planning in dynamic environment using Voronoi diagram and computation geometry technique. IEEE Access 2019, 7, 86026–86040. [Google Scholar] [CrossRef]
  37. Yang, D.; Xu, B.; Rao, K.; Sheng, W. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm. Sensors 2018, 18, 332. [Google Scholar] [CrossRef] [PubMed]
  38. Kang, G.; Kim, Y.; Lee, Y.; Oh, H.; You, W.; Choi, H. Sampling-Based Motion Planning of Manipulator with Goal-Oriented Sampling. Intell. Serv. Robot. 2019, 12, 265–273. [Google Scholar] [CrossRef]
  39. Kovács, L.; Iantovics, L.B.; Iakovidis, D.K. IntraClusTSP—An Incremental Intra-Cluster Refinement Heuristic Algorithm for Symmetric Travelling Salesman Problem. Symmetry 2018, 10, 663. [Google Scholar] [CrossRef]
  40. Hassin, R.; Keinan, A. Greedy heuristics with regret, with application to the cheapest insertion algorithm for the TSP. Oper. Res. Lett. 2008, 36, 243–246. [Google Scholar] [CrossRef]
  41. Boryczka, U.; Szwarc, K. The harmony search algorithm with additional improvement of harmony memory for asymmetric traveling salesman problem. Expert Syst. Appl. 2019, 122, 43–53. [Google Scholar] [CrossRef]
  42. Deng, Y.; Liu, Y.; Zhou, D. An improved genetic algorithm with initial population strategy for symmetric TSP. Math. Probl. Eng. 2015, 2015, 212794. [Google Scholar] [CrossRef]
  43. Hoffmann, M.; Mühlenthaler, M.; Helwig, S. Discrete particle swarm optimization for TSP: Theoretical results and experimental evaluations. In Proceedings of the Second International Conference, ICAIS 2011, Klagenfurt, Austria, 6–8 September 2011. [Google Scholar]
  44. Brand, M.; Masuda, M.; Wehner, N.; Yu, X. Ant Colony Optimization Algorithm for Robot Path Planning. In Proceedings of the 2010 International Conference on Computer Design and Applications, Qinhuangdao, China, 25–27 June 2010. [Google Scholar]
  45. Emambocus, B.A.S.; Jasser, M.B.; Hamzah, M.; Mustapha, A.; Amphawan, A. An Enhanced Swap Sequence-Based Particle Swam Optimization Algorithm to Solve TSP. IEEE Access 2021, 9, 164820–164836. [Google Scholar] [CrossRef]
Figure 1. The description of the EOD environment and the goal positions' information.
Figure 2. The forward and backward search composition and schematic of a bidirectional search strategy.
Figure 3. Traditional A* algorithm and improved A* algorithm search path planning schematic diagram. (a) Eight adjacent nodes search. (b) Sixteen adjacent nodes search. (c) Eight adjacent nodes search schematic diagram. (d) Sixteen adjacent nodes search schematic diagram.
Figure 4. The relationship of extended node and waypoints. (a) |x_e − x_c| = 1, node P and node Q are above node C. (b) |x_e − x_c| = 2; node P and node Q are to the right of node C.
Figure 5. The flowchart of the conventional PSO algorithm.
Figure 6. The flowchart of the LM-SSPSO algorithm.
Figure 7. A schematic of the association strategy. (a) The initial state of the particles. (b) The triangle association strategy. (c) The final state of the particles.
Figure 8. The flowchart of the forgetting–reinforcement strategy.
Figure 9. The new solution-generation process based on the greedy solution. (a) The initial state of greedy solutions. (b) The final state of greedy solutions.
Figure 10. The training process of different algorithms in solving the multiple goal positions path planning problem.
Figure 11. The postprocessing of the optimal path. (a) The initial optimal path, (b) the definition of search direction, (c) connect the obstacle-free path, (d) redefine the remaining nodes to be detected, (e) delete the redundant point, and (f) the final path.
Figure 12. Obstacle avoidance analysis of the path postprocess.
Figure 13. The postprocess results of improved A* and LM-SSPSO path planning.
Figure 14. The environment information of three different maps: (a) map1, (b) map2, and (c) map3.
Figure 15. The training process of different algorithms in solving the map1 path planning problem.
Figure 16. The training process of different algorithms in solving the map2 path planning problem.
Figure 17. The training process of different algorithms in solving the map3 path planning problem.
Figure 18. The postprocess results of improved A* and LM-SSPSO path planning in different environment maps. (a) Map1. (b) Map2. (c) Map3.
Table 1. Simulation comparison results of traditional A* algorithm and bidirectional dynamic OpenList cost weight A* algorithm regarding average computation time.

Resolution | Traditional A* Algorithm (s) | Bidirectional Dynamic OpenList Cost Weight A* Algorithm (s)
120 × 120 | 79.50 | 26.39
180 × 180 | 250.91 | 50.27
240 × 240 | 628.53 | 83.36
Table 2. Simulation comparison results of different weight coefficients on average computational time.

Resolution | w = 1 (s) | w = 1.5 (s) | w = 2 (s) | Bidirectional Dynamic OpenList Cost Weight A* Algorithm (s) | w_1 (s) | w_2 (s) | w_3 (s) | w_4 (s) | w_5 (s)
180 × 180 | 106.35 | 56.76 | 53.03 | 50.27 | 60.83 | 66.15 | 56.95 | 631.14 | 65.93
Table 3. The shortest distance between any two points of the 8-neighborhood method. Row i lists the distances from point i to points 1, 2, …, i − 1 (point 1 is the initial point).

2: 22.21
3: 42.968, 24.484
4: 66.146, 40.694, 38.968
5: 86.968, 70.484, 44, 64.904
6: 136.63, 161.98, 104.77, 117.216, 47.936
7: 103.496, 79.458, 70.694, 36.764, 81.592, 68.49
8: 138.694, 118.726, 95.726, 127.426, 49.726, 37.936, 160.566
9: 179.63, 159.42, 145.356, 156.872, 93.802, 73.834, 114.318, 37.28
10: 228.196, 190.674, 187.604, 131.254, 83.114, 35.936, 78.968, 60.694, 33.452
11: 82.694, 59.312, 86.834, 30.968, 92.77, 114.942, 38, 174.26, 162.426, 116.968
12: 142.63, 117.592, 112, 78.484, 139.458, 101.738, 58.694, 141.26, 148.98, 115.114, 57.452
13: 126.484, 109.726, 134.28, 87.178, 148.98, 162.184, 118.7, 201.464, 211.668, 186.082, 56.21, 32
14: 149.33, 124.292, 119.012, 82.012, 107.662, 62.21, 45.248, 102.146, 87.528, 53.662, 83.662, 59.452, 141.77
15: 230.744, 205.706, 201.598, 178.846, 158.426, 103.522, 161.05, 131.038, 110.42, 74.968, 146.636, 96.662, 128.662, 59.42
16: 170.566, 145.528, 139.936, 106.42, 153.318, 165.808, 69.242, 196.05, 168.808, 148.222, 85.388, 29.452, 63.452, 59.592, 59.242
Table 4. The shortest distance between any two points of the 16-neighborhood strategy. Row i lists the distances from point i to points 1, 2, …, i − 1 (point 1 is the initial point).

2: 22.21
3: 38.14, 24.484
4: 60.662, 34.038, 37.968
5: 80.968, 67.656, 42, 60.904
6: 124.7, 96.35, 73.388, 87.802, 33.936
7: 92.974, 66.974, 61.382, 27.866, 73.108, 66.076
8: 128.694, 115.312, 86.898, 96.458, 46.484, 33.038, 116.324
9: 151.942, 151.324, 103.974, 139.458, 71.006, 57.006, 93.248, 24.624
10: 124.776, 155.776, 85.598, 104.356, 68.146, 30.624, 71.968, 51.108, 25.038
11: 79.796, 55.312, 71.834, 24.14, 65.802, 99.044, 38, 133.222, 147.528, 106.14
12: 130.388, 99.936, 112, 75.656, 122.42, 79.528, 49.038, 113.082, 121.044, 100.076, 47.452
13: 125.656, 108.898, 121.038, 75.108, 119.942, 127.458, 84.458, 156.98, 159.012, 139.184, 46.554, 32
14: 117.496, 93.942, 88.458, 55.732, 91.764, 58.554, 30.694, 85.592, 66.076, 37.522, 70.694, 59.038, 114.356
15: 213.738, 144.432, 134.566, 92.254, 130.592, 91.968, 73.216, 124.14, 93.178, 69.312, 118.044, 89.662, 121.662, 46.108
16: 157.388, 114.974, 130.21, 90.28, 128.662, 130.808, 66.828, 157.91, 136.292, 123.636, 66.42, 22.554, 56.554, 48.522, 57.242
Table 5. The shortest distance between any two points.
| | 1 (Initial Point) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 22.21 | 38.14 | 60.662 | 80.968 | 124.7 | 92.974 | 128.694 | 151.942 | 124.776 | 79.796 | 130.388 | 125.656 | 117.496 | 213.738 | 157.388 |
| 2 | 22.21 | 0 | 24.484 | 34.038 | 67.656 | 96.35 | 66.974 | 115.312 | 151.324 | 155.776 | 55.312 | 99.936 | 108.898 | 93.942 | 144.432 | 114.974 |
| 3 | 38.14 | 24.484 | 0 | 37.968 | 42 | 73.388 | 61.382 | 86.898 | 103.974 | 85.598 | 71.834 | 112 | 121.038 | 88.458 | 134.566 | 130.21 |
| 4 | 60.662 | 34.038 | 37.968 | 0 | 60.904 | 87.802 | 27.866 | 96.458 | 139.458 | 104.356 | 24.14 | 75.656 | 75.108 | 55.732 | 92.254 | 90.28 |
| 5 | 80.968 | 67.656 | 42 | 60.904 | 0 | 33.936 | 73.108 | 46.484 | 71.006 | 68.146 | 65.802 | 122.42 | 119.942 | 91.764 | 130.592 | 128.662 |
| 6 | 124.7 | 96.35 | 73.388 | 87.802 | 33.936 | 0 | 66.076 | 33.038 | 57.006 | 30.624 | 99.044 | 79.528 | 127.458 | 58.554 | 91.968 | 130.808 |
| 7 | 92.974 | 66.974 | 61.382 | 27.866 | 73.108 | 66.076 | 0 | 116.324 | 93.248 | 71.968 | 38 | 49.038 | 84.458 | 30.694 | 73.216 | 66.828 |
| 8 | 128.694 | 115.312 | 86.898 | 96.458 | 46.484 | 33.038 | 116.324 | 0 | 24.624 | 51.108 | 133.222 | 113.082 | 156.98 | 85.592 | 124.14 | 157.91 |
| 9 | 151.942 | 151.324 | 103.974 | 139.458 | 71.006 | 57.006 | 93.248 | 24.624 | 0 | 25.038 | 147.528 | 121.044 | 159.012 | 66.076 | 93.178 | 136.292 |
| 10 | 124.776 | 155.776 | 85.598 | 104.356 | 68.146 | 30.624 | 71.968 | 51.108 | 25.038 | 0 | 106.14 | 100.076 | 139.184 | 37.522 | 69.312 | 123.636 |
| 11 | 79.796 | 55.312 | 71.834 | 24.14 | 65.802 | 99.044 | 38 | 133.222 | 147.528 | 106.14 | 0 | 47.452 | 46.554 | 70.694 | 118.044 | 66.42 |
| 12 | 130.388 | 99.936 | 112 | 75.656 | 122.42 | 79.528 | 49.038 | 113.082 | 121.044 | 100.076 | 47.452 | 0 | 32 | 59.038 | 89.662 | 22.554 |
| 13 | 125.656 | 108.898 | 121.038 | 75.108 | 119.942 | 127.458 | 84.458 | 156.98 | 159.012 | 139.184 | 46.554 | 32 | 0 | 114.356 | 121.662 | 56.554 |
| 14 | 117.496 | 93.942 | 88.458 | 55.732 | 91.764 | 58.554 | 30.694 | 85.592 | 66.076 | 37.522 | 70.694 | 59.038 | 114.356 | 0 | 46.108 | 48.522 |
| 15 | 213.738 | 144.432 | 134.566 | 92.254 | 130.592 | 91.968 | 73.216 | 124.14 | 93.178 | 69.312 | 118.044 | 89.662 | 121.662 | 46.108 | 0 | 57.242 |
| 16 | 157.388 | 114.974 | 130.21 | 90.28 | 128.662 | 130.808 | 66.828 | 157.91 | 136.292 | 123.636 | 66.42 | 22.554 | 56.554 | 48.522 | 57.242 | 0 |
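A symmetric distance matrix of this kind is the cost input to the multigoal (TSP) stage: any candidate visiting order is scored by summing the pairwise entries along the closed tour. The following minimal sketch illustrates this with the 4 × 4 submatrix of Table 5 for goals 1–4; `tour_length` is a hypothetical helper for illustration, not code from the paper.

```python
# Submatrix of Table 5 for goals 1-4 (symmetric, zero diagonal).
dist = [
    [0.0,    22.21,  38.14,  60.662],
    [22.21,  0.0,    24.484, 34.038],
    [38.14,  24.484, 0.0,    37.968],
    [60.662, 34.038, 37.968, 0.0],
]

def tour_length(order, dist):
    """Total length of a closed tour visiting the goals in `order`."""
    # Pair each goal with its successor, wrapping back to the start.
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

# Tour 1 -> 2 -> 3 -> 4 -> 1 (0-based indices):
print(tour_length([0, 1, 2, 3], dist))  # ~145.324 = 22.21 + 24.484 + 37.968 + 60.662
```

Because the matrix is symmetric, a tour and its reversal have the same length, which is exactly the symmetric-TSP property the paper exploits.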
Table 6. Comparing the results of the 8 adjacent nodes and 16 adjacent nodes' path lengths.
| | The 16 Adjacent Nodes Path Length Is Shorter | The Path Lengths Are Equal | The 8 Adjacent Nodes Path Length Is Shorter |
|---|---|---|---|
| Number | 115 | 5 | 0 |
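The 120 comparisons tallied in Tables 6 and 8 correspond to the unordered pairs among the 16 goal nodes, i.e., C(16, 2). A quick sanity check of that count:

```python
from math import comb

# One shortest path is planned per unordered pair of the 16 goal nodes.
pairs = comb(16, 2)
print(pairs)  # 120

# Table 6 tallies: 16-neighborhood shorter / equal / 8-neighborhood shorter.
assert 115 + 5 + 0 == pairs
```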
Table 7. The shortest distance between any two nodes obtained using GORRT.
| | 1 (Initial Point) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 26.342 |
| 3 | 68.397 | 28.399 |
| 4 | 95.557 | 58.038 | 41.884 |
| 5 | 109.377 | 99.535 | 59.763 | 76.168 |
| 6 | 153.777 | 130.836 | 106.117 | 130.889 | 57.971 |
| 7 | 120.788 | 116.53 | 75.654 | 60.48 | 99.202 | 75.407 |
| 8 | 175.884 | 171.403 | 123.506 | 133.552 | 68.795 | 50.61 | 57.141 |
| 9 | 222.912 | 188.225 | 174.266 | 181.991 | 106.044 | 98.66 | 131.962 | 67.721 |
| 10 | 221.069 | 229.089 | 188.874 | 155.367 | 107.134 | 42.562 | 107.568 | 83.22 | 36.372 |
| 11 | 100.864 | 67.435 | 101.057 | 57.949 | 140.626 | 129.746 | 49.095 | 211.729 | 253.191 | 149.308 |
| 12 | 204.597 | 187.866 | 153.494 | 97.746 | 179.677 | 150.481 | 83.997 | 201.065 | 213.031 | 166.08 | 80.975 |
| 13 | 156.989 | 123.806 | 169.62 | 116.288 | 164.781 | 196.576 | 105.162 | 223.524 | 247.17 | 171.414 | 59.646 | 59.319 |
| 14 | 174.623 | 175.014 | 170.507 | 107.231 | 155.634 | 76.445 | 51.626 | 117.391 | 95.725 | 77.426 | 133.163 | 82.739 | 114.089 |
| 15 | 257.273 | 261.045 | 234.944 | 179.128 | 181.176 | 123.227 | 147.694 | 157.193 | 115.667 | 92.181 | 171.403 | 106.334 | 147.733 | 65.716 |
| 16 | 234.244 | 165.681 | 208.532 | 123.691 | 179.6 | 149.97 | 183.888 | 182.017 | 195.789 | 155.005 | 104.488 | 51.061 | 93.717 | 66.988 | 69.66 |
Table 8. Comparing the results of the 16 adjacent nodes and GORRT path lengths.
| | 16 Adjacent Nodes Path Length Is Shorter | Path Lengths Are Equal | GORRT Path Length Is Shorter |
|---|---|---|---|
| Number | 119 | 0 | 1 |
Table 9. The parameter settings of LM-SSPSO and the comparison algorithms.
| Algorithm | Parameters |
|---|---|
| LM-SSPSO | N_pop = 120, T_max = 100, n_ana = 10, P_c = 0.85, H_eu = 0.95, N_res = 5, c_ind = 0.1, c_gol = 0.075 |
| HS | N_pop = 120, T_max = 100, HMCR = 0.95 |
| SSPSO | N_pop = 120, T_max = 100, r_ind = 0.1, r_gol = 0.075 |
| Greedy algorithm | N_pop = 120, T_max = 100 |
| GA | N_pop = 120, T_max = 100, P_cross = 0.8, P_mutate = 0.05 |
| AC | N_pop = 120, T_max = 100, Alpha = 2, Beta = 5, Rho = 0.2, Q = 100 |
| TS | N_pop = 120, T_max = 100, TabuL = 11 |
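The SSPSO-family parameters above (e.g., c_ind and c_gol) govern a discrete PSO in which a particle's "velocity" is a swap sequence: an ordered list of index swaps that transforms one visiting order into another. The following sketch shows the two basic swap-sequence operations; the helper names (`apply_swaps`, `swap_sequence`) are illustrative, not identifiers from the paper.

```python
def apply_swaps(perm, swaps):
    """Apply a swap sequence (list of index pairs) to a permutation."""
    p = list(perm)
    for i, j in swaps:
        p[i], p[j] = p[j], p[i]
    return p

def swap_sequence(src, dst):
    """A swap sequence that turns src into dst (basic left-to-right construction)."""
    p, swaps = list(src), []
    for i in range(len(p)):
        if p[i] != dst[i]:
            j = p.index(dst[i])      # locate the element that belongs at slot i
            swaps.append((i, j))
            p[i], p[j] = p[j], p[i]
    return swaps

# In SSPSO, a particle keeps part of its own velocity and appends swaps drawn
# from the sequences toward its personal and global best tours, with small
# probabilities such as c_ind = 0.1 and c_gol = 0.075 from Table 9.
src, dst = [0, 2, 1, 3], [3, 1, 2, 0]
ss = swap_sequence(src, dst)
print(ss)                             # [(0, 3), (1, 2)]
assert apply_swaps(src, ss) == dst
```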
Table 10. The comparison results of path lengths by using different algorithms.
| Algorithm | Minimum Value | Numbers of Optimal Solutions | Maximum Value | Average Value |
|---|---|---|---|---|
| LM-SSPSO | 560.87 | 50 | 560.87 | 560.87 |
| HS | 914.518 | 0 | 1183.632 | 1054.2758 |
| SSPSO | 560.87 | 1 | 785.818 | 662.253 |
| Greedy | 777.13 | 0 | 777.13 | 777.13 |
| GA | 560.87 | 42 | 579.946 | 563.2406 |
| ACO | 560.87 | 25 | 573.838 | 567.354 |
| TS | 560.87 | 2 | 759.62 | 615.2752 |
Table 11. The coordinate information of new environment maps.
| | 1 (Initial Position) | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Map1 | (165, 123) | (144, 108) | (117, 108) | (99, 123) | (78, 87) | (99, 60) | (72, 36) | (54, 63) |
| Map2 | (108, 24) | (27, 15) | (21, 51) | (54, 21) | (27, 78) | (18, 102) | (48, 96) | (36, 123) |
| Map3 | (162, 15) | (81, 39) | (126, 30) | (45, 12) | (63, 48) | (42, 60) | (72, 63) | (33, 105) |

| | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|
| Map1 | (156, 63) | (123, 30) | (54, 102) | (75, 123) | (90, 150) | (105, 159) | (123, 165) | (114, 150) |
| Map2 | (54, 144) | (18, 144) | (102, 129) | (126, 96) | (162, 120) | (150, 45) | (156, 75) | (120, 63) |
| Map3 | (51, 123) | (81, 162) | (108, 123) | (153, 129) | (81, 135) | (96, 90) | (141, 72) | (138, 54) |
Table 12. The average computation time comparing the results on map1 at different resolutions.
| Resolution | Traditional A* Algorithm (s) | Bidirectional Dynamic OpenList Cost Weight A* Algorithm (s) |
|---|---|---|
| 120 × 120 | 56.24 | 43.75 |
| 180 × 180 | 127.43 | 88.02 |
| 240 × 240 | 332.38 | 215.5 |
Table 13. The average computation time comparing the results on map2 at different resolutions.
| Resolution | Traditional A* Algorithm (s) | Bidirectional Dynamic OpenList Cost Weight A* Algorithm (s) |
|---|---|---|
| 120 × 120 | 63.08 | 14.04 |
| 180 × 180 | 197.68 | 26.98 |
| 240 × 240 | 432.47 | 45.59 |
Table 14. The average computation time comparing the results on map3 at different resolutions.
| Resolution | Traditional A* Algorithm (s) | Bidirectional Dynamic OpenList Cost Weight A* Algorithm (s) |
|---|---|---|
| 120 × 120 | 82.94 | 14.5 |
| 180 × 180 | 208.22 | 30.58 |
| 240 × 240 | 544.19 | 52.97 |
Table 15. Simulation comparison results of different weight coefficients with average computational time for different environment maps.
| Environment Map | w = 1 (s) | w = 1.5 (s) | w = 2 (s) | Bidirectional Dynamic OpenList Cost Weight A* (s) | w1 (s) | w2 (s) | w3 (s) | w4 (s) | w5 (s) |
|---|---|---|---|---|---|---|---|---|---|
| Map1 | 110.54 | 95.07 | 93.55 | 88.02 | 138.27 | 101.28 | 172.21 | 431.03 | 93.95 |
| Map2 | 92.37 | 37.28 | 30.74 | 26.98 | 27.39 | 157.89 | 51.93 | 509.31 | 29.49 |
| Map3 | 106.12 | 33.31 | 31.19 | 30.58 | 137.31 | 39.26 | 33.86 | 535.91 | 41.61 |
Table 16. The shortest distance between any two positions of the 8-neighborhood and 16-neighborhood strategy for map1.
| Node | Neighbor | 1 (Initial Point) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 16 | 29.312 |
| 2 | 8 | 29.312 |
| 3 | 16 | 103.942 | 26 |
| 3 | 8 | 52.21 | 26 |
| 4 | 16 | 64 | 45.796 | 27.656 |
| 4 | 8 | 64 | 50.21 | 28.07 |
| 5 | 16 | 114.968 | 101.56 | 93.318 | 48.484 |
| 5 | 8 | 124.764 | 115.178 | 102.726 | 51.312 |
| 6 | 16 | 109.662 | 57.592 | 51.312 | 79.452 | 36.14 |
| 6 | 8 | 101.802 | 70.49 | 59.312 | 90.484 | 40.554 |
| 7 | 16 | 135.458 | 72.63 | 74.35 | 99.108 | 49.656 | 27.866 |
| 7 | 8 | 139.738 | 108.012 | 96.834 | 106.28 | 52.484 | 35.936 |
| 8 | 16 | 165.082 | 107.114 | 100.56 | 71.866 | 22.624 | 42.828 | 40.484 |
| 8 | 8 | 175.254 | 143.942 | 130.834 | 85.248 | 31.108 | 45.242 | 40.898 |
| 9 | 16 | 59.726 | 46.554 | 45.936 | 80.834 | 82.834 | 54.828 | 83.936 | 98 |
| 9 | 8 | 63.554 | 48.968 | 60.146 | 90.216 | 104.592 | 57.242 | 93.178 | 100 |
| 10 | 16 | 98.764 | 72.28 | 76.07 | 109.522 | 60.49 | 31.038 | 48.07 | 70.42 | 58.898 |
| 10 | 8 | 116.006 | 84.694 | 78.484 | 112.866 | 76.974 | 37.936 | 52.484 | 81.662 | 58.898 |
| 11 | 16 | 152.668 | 137.044 | 70.694 | 42.968 | 27.796 | 49.592 | 59.21 | 38 | 114.63 | 85.184 |
| 11 | 8 | 204.318 | 166.968 | 136.28 | 53.866 | 28.21 | 64.49 | 72.28 | 38 | 143.114 | 100.91 |
| 12 | 16 | 86 | 65.28 | 48.382 | 20 | 32.828 | 65.382 | 82.828 | 59.452 | 120.77 | 87.248 | 31.312 |
| 12 | 8 | 88 | 77.108 | 55.968 | 22 | 35.242 | 77.796 | 87.242 | 66.694 | 168.044 | 114.216 | 34.898 |
| 13 | 16 | 83.178 | 48.076 | 37.866 | 24.898 | 62.554 | 92.694 | 107.382 | 84.35 | 91.388 | 132.458 | 49.108 | 22.21 |
| 13 | 8 | 85.178 | 69.388 | 51.178 | 29.726 | 66.968 | 108.522 | 119.452 | 100.904 | 113.324 | 174.974 | 69.522 | 32.21 |
| 14 | 16 | 66.35 | 51.522 | 47.726 | 36.07 | 59.936 | 108.834 | 119.904 | 86.904 | 92.076 | 159.802 | 59.248 | 33.522 | 16.484 |
| 14 | 8 | 77.592 | 66.146 | 54.968 | 36.484 | 81.178 | 126.178 | 142.388 | 115.114 | 115.114 | 136.28 | 83.732 | 46.42 | 17.312 |
| 15 | 16 | 66.108 | 71.764 | 60.522 | 45.452 | 78.42 | 103.866 | 148.012 | 102.974 | 110.7 | 136.312 | 76.076 | 47.592 | 28.554 | 18.07 |
| 15 | 8 | 82.038 | 88.522 | 77.522 | 51.522 | 96.216 | 123.694 | 170.082 | 130.152 | 124.77 | 138.312 | 98.77 | 63.388 | 38.21 | 18.484 |
| 16 | 16 | 50.624 | 37.866 | 38.828 | 22.14 | 67.866 | 105.49 | 122.458 | 94.834 | 81.592 | 116.484 | 64.35 | 37.038 | 22 | 8.484 | 16.484 |
| 16 | 8 | 65.866 | 52.42 | 41.242 | 32.21 | 76.49 | 107.522 | 145.872 | 110.426 | 103.388 | 121.726 | 78.63 | 49.178 | 22 | 11.312 | 17.312 |
Table 17. The shortest distance between any two positions of 8-neighborhood and 16-neighborhood strategies for map2.
| Node | Neighbor | 1 (Initial Point) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 16 | 84.554 |
| 2 | 8 | 84.554 |
| 3 | 16 | 121.286 | 33.968 |
| 3 | 8 | 125.63 | 39.796 |
| 4 | 16 | 52.828 | 25.656 | 55.522 |
| 4 | 8 | 53.242 | 28.484 | 64.35 |
| 5 | 16 | 93.528 | 65.796 | 24.07 | 68.318 |
| 5 | 8 | 102.356 | 67.796 | 28.484 | 70.318 |
| 6 | 16 | 113.738 | 84.484 | 48.828 | 99.216 | 23.726 |
| 6 | 8 | 128.566 | 90.554 | 51.242 | 123.458 | 26.554 |
| 7 | 16 | 82.216 | 98.146 | 45.936 | 72.624 | 22.554 | 36.866 |
| 7 | 8 | 97.184 | 111.802 | 55.178 | 80.624 | 29.382 | 37.28 |
| 8 | 16 | 113.942 | 111.42 | 75.006 | 96.108 | 43.35 | 19.796 | 26.554 |
| 8 | 8 | 130.152 | 118.834 | 82.834 | 112.592 | 55.764 | 27.452 | 30.968 |
| 9 | 16 | 135.872 | 147.146 | 92.904 | 119.656 | 62.694 | 39.592 | 44.07 | 21.796 |
| 9 | 8 | 155.05 | 158.802 | 110.63 | 123.656 | 82.764 | 54.904 | 48.484 | 27.038 |
| 10 | 16 | 133.324 | 124.312 | 90.828 | 140.7 | 62.312 | 40 | 43.936 | 19.796 | 44.28 |
| 10 | 8 | 157.604 | 133.382 | 93.242 | 167.114 | 69.382 | 40 | 58.42 | 26.624 | 48.28 |
| 11 | 16 | 130.668 | 135.458 | 83.942 | 110.458 | 73.974 | 79.866 | 58.764 | 63.656 | 51.796 | 86.592 |
| 11 | 8 | 151.668 | 154.426 | 111.464 | 137.496 | 95.114 | 93.178 | 65.662 | 66.484 | 52.21 | 98.006 |
| 12 | 16 | 77.038 | 103.324 | 98.834 | 77.528 | 98.662 | 107.592 | 86.936 | 82.592 | 69.388 | 98.904 | 38.624 |
| 12 | 8 | 77.452 | 132.362 | 123.458 | 103.808 | 114.146 | 117.592 | 91.936 | 100.006 | 89.872 | 128.216 | 41.936 |
| 13 | 16 | 127.496 | 143.12 | 144.426 | 109.324 | 150.496 | 142.222 | 118.286 | 121.452 | 102.624 | 139.866 | 55.038 | 62.178 |
| 13 | 8 | 138.98 | 186.916 | 189.534 | 157.604 | 184.152 | 195.464 | 176.566 | 131.038 | 115.936 | 157.592 | 65.038 | 59.522 |
| 14 | 16 | 41.866 | 120.35 | 133.694 | 90.866 | 122.764 | 136.49 | 100.146 | 113.184 | 117.98 | 124.566 | 96.006 | 69.904 | 79.382 |
| 14 | 8 | 48.694 | 135.248 | 136.108 | 107.248 | 136.49 | 162.7 | 121.942 | 154.362 | 156.19 | 183.642 | 114.974 | 83.146 | 79.796 |
| 15 | 16 | 60.076 | 129.63 | 137.216 | 103.802 | 125.108 | 134.044 | 96.662 | 120.012 | 90.012 | 125.98 | 56.49 | 22.624 | 42.484 | 26.14 |
| 15 | 8 | 80.974 | 175.324 | 171.044 | 137.044 | 136.694 | 169.528 | 128.458 | 150.222 | 128.566 | 168.566 | 75.114 | 36.694 | 47.312 | 32.968 |
| 16 | 16 | 38.898 | 100.006 | 95.312 | 64.56 | 90.554 | 104.764 | 64.764 | 82.974 | 87.286 | 97.598 | 62.968 | 28.07 | 56.834 | 29.554 | 39.694 |
| 16 | 8 | 42.968 | 111.7 | 102.968 | 82.388 | 98.21 | 131.044 | 91.286 | 140.878 | 115.324 | 159.464 | 71.452 | 34.484 | 74.974 | 39.038 | 47.522 |
Table 18. The shortest distance between any two positions of 8-neighborhood and 16-neighborhood strategies for map3.
| Node | Neighbor | 1 (Initial Point) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 16 | 94.076 |
| 2 | 8 | 94.076 |
| 3 | 16 | 35.382 | 45.866 |
| 3 | 8 | 40.21 | 51.866 |
| 4 | 16 | 122.624 | 42.35 | 87.318 |
| 4 | 8 | 123.866 | 46.764 | 102.662 |
| 5 | 16 | 101.7 | 16.898 | 63.108 | 47.312 |
| 5 | 8 | 119.114 | 19.726 | 71.108 | 47.312 |
| 6 | 16 | 114.974 | 41.866 | 77.936 | 84.248 | 24.484 |
| 6 | 8 | 136.63 | 46.694 | 94.42 | 91.726 | 27.312 |
| 7 | 16 | 84.388 | 20.484 | 51.108 | 56.624 | 12.898 | 28.828 |
| 7 | 8 | 108.872 | 25.726 | 65.662 | 67.038 | 17.726 | 29.242 |
| 8 | 16 | 157.706 | 85.452 | 105.77 | 112.592 | 73.108 | 61.834 | 63.452 |
| 8 | 8 | 184.948 | 93.662 | 142.738 | 127.038 | 78.694 | 68.936 | 69.452 |
| 9 | 16 | 146.604 | 82.35 | 97.566 | 118.038 | 66.554 | 95.146 | 55.796 | 30.382 |
| 9 | 8 | 194.158 | 100.56 | 151.706 | 126.624 | 78.968 | 100.522 | 72.866 | 32.726 |
| 10 | 16 | 170.286 | 149.566 | 156.426 | 176.254 | 129.91 | 134.496 | 114.012 | 146.738 | 117.082 |
| 10 | 8 | 184.292 | 223.904 | 153.732 | 209.7 | 213.146 | 155.668 | 201.006 | 162.006 | 143.694 |
| 11 | 16 | 126.458 | 92.318 | 90.968 | 121.152 | 74.114 | 80.7 | 63.318 | 72.968 | 54 | 50.452 |
| 11 | 8 | 147.566 | 105.802 | 101.108 | 156.846 | 118.254 | 96.082 | 99.318 | 81.452 | 56 | 52.28 |
| 12 | 16 | 114.484 | 106.668 | 99.834 | 123.808 | 87.184 | 117.668 | 80.56 | 116.936 | 102.49 | 81.592 | 56.834 |
| 12 | 8 | 115.726 | 117.808 | 113.904 | 166.572 | 126.98 | 155.464 | 113.184 | 138.35 | 125.318 | 85.248 | 71.42 |
| 13 | 16 | 126.77 | 103.248 | 101.216 | 131.49 | 84.662 | 68.006 | 65.248 | 43.866 | 27.312 | 132.948 | 27.312 | 76.42 |
| 13 | 8 | 158.502 | 122.936 | 122.63 | 144.248 | 107.662 | 90.146 | 94.178 | 58.42 | 32.968 | 145.484 | 30.968 | 84.834 |
| 14 | 16 | 83.802 | 45.592 | 55.108 | 85.458 | 45.248 | 56.732 | 43.866 | 63.038 | 46.28 | 75.452 | 32.14 | 61.42 | 42.554 |
| 14 | 8 | 102.98 | 64.49 | 70.42 | 111.254 | 74.076 | 85.872 | 57.866 | 71.522 | 57.662 | 85.452 | 36.968 | 79.732 | 50.21 |
| 15 | 16 | 61.796 | 69.63 | 42.694 | 100.082 | 68.216 | 96.426 | 64.592 | 104.076 | 88.7 | 99.248 | 49.522 | 53.726 | 63.388 | 46.624 |
| 15 | 8 | 65.866 | 84.942 | 54.592 | 130.534 | 97.63 | 140.496 | 83.42 | 130.044 | 117.426 | 117.942 | 63.662 | 60.968 | 86.426 | 54.694 |
| 16 | 16 | 42.624 | 53.382 | 21.312 | 110.77 | 71.484 | 91.522 | 58.108 | 124.49 | 116.044 | 115.91 | 59.974 | 72.14 | 86.012 | 45.006 | 20.968 |
| 16 | 8 | 47.936 | 62.21 | 26.968 | 115.184 | 77.14 | 106.178 | 75.28 | 149.942 | 157.324 | 144.464 | 92.942 | 80.21 | 115.464 | 65.802 | 28.796 |
Table 19. The comparison of the results of 8 adjacent nodes and 16 adjacent nodes path lengths.
| | 16 Adjacent Nodes Path Length Is Shorter | Path Lengths Are Equal | 8 Adjacent Nodes Path Length Is Shorter |
|---|---|---|---|
| Number (map1) | 111 | 6 | 3 |
| Number (map2) | 117 | 2 | 1 |
| Number (map3) | 117 | 2 | 1 |
Table 20. The shortest distance between any two positions of map1.
| | 1 (Initial Position) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 29.312 | 52.21 | 64 | 114.968 | 101.802 | 135.458 | 165.082 | 59.726 | 98.764 | 152.668 | 86 | 83.178 | 66.35 | 66.108 | 50.624 |
| 2 | 29.312 | 0 | 26 | 45.796 | 101.56 | 57.592 | 72.63 | 107.114 | 46.554 | 72.28 | 137.044 | 65.28 | 48.076 | 51.522 | 71.764 | 37.866 |
| 3 | 52.21 | 26 | 0 | 27.656 | 93.318 | 51.312 | 74.35 | 100.56 | 45.936 | 76.07 | 70.694 | 48.382 | 37.866 | 47.726 | 60.522 | 38.828 |
| 4 | 64 | 45.796 | 27.656 | 0 | 48.484 | 79.452 | 99.108 | 71.866 | 80.834 | 109.522 | 42.968 | 20 | 24.898 | 36.07 | 45.452 | 22.14 |
| 5 | 114.968 | 101.56 | 93.318 | 48.484 | 0 | 36.14 | 49.656 | 22.624 | 82.834 | 60.49 | 27.796 | 32.828 | 62.554 | 59.936 | 78.42 | 67.866 |
| 6 | 101.802 | 57.592 | 51.312 | 79.452 | 36.14 | 0 | 27.866 | 42.828 | 54.828 | 31.038 | 49.592 | 65.382 | 92.694 | 108.834 | 103.866 | 105.49 |
| 7 | 135.458 | 72.63 | 74.35 | 99.108 | 49.656 | 27.866 | 0 | 40.484 | 83.936 | 48.07 | 59.21 | 82.828 | 107.382 | 119.904 | 148.012 | 122.458 |
| 8 | 165.082 | 107.114 | 100.56 | 71.866 | 22.624 | 42.828 | 40.484 | 0 | 98 | 70.42 | 38 | 59.452 | 84.35 | 86.904 | 102.974 | 94.834 |
| 9 | 59.726 | 46.554 | 45.936 | 80.834 | 82.834 | 54.828 | 83.936 | 98 | 0 | 58.898 | 114.63 | 120.77 | 91.388 | 92.076 | 110.7 | 81.592 |
| 10 | 98.764 | 72.28 | 76.07 | 109.522 | 60.49 | 31.038 | 48.07 | 70.42 | 58.898 | 0 | 85.184 | 87.248 | 132.458 | 136.28 | 136.312 | 116.484 |
| 11 | 152.668 | 137.044 | 70.694 | 42.968 | 27.796 | 49.592 | 59.21 | 38 | 114.63 | 85.184 | 0 | 31.312 | 49.108 | 59.248 | 76.076 | 64.35 |
| 12 | 86 | 65.28 | 48.382 | 20 | 32.828 | 65.382 | 82.828 | 59.452 | 120.77 | 87.248 | 31.312 | 0 | 22.21 | 33.522 | 47.592 | 37.038 |
| 13 | 83.178 | 48.076 | 37.866 | 24.898 | 62.554 | 92.694 | 107.382 | 84.35 | 91.388 | 132.458 | 49.108 | 22.21 | 0 | 16.484 | 28.554 | 22 |
| 14 | 66.35 | 51.522 | 47.726 | 36.07 | 59.936 | 108.834 | 119.904 | 86.904 | 92.076 | 136.28 | 59.248 | 33.522 | 16.484 | 0 | 18.07 | 8.484 |
| 15 | 66.108 | 71.764 | 60.522 | 45.452 | 78.42 | 103.866 | 148.012 | 102.974 | 110.7 | 136.312 | 76.076 | 47.592 | 28.554 | 18.07 | 0 | 16.484 |
| 16 | 50.624 | 37.866 | 38.828 | 22.14 | 67.866 | 105.49 | 122.458 | 94.834 | 81.592 | 116.484 | 64.35 | 37.038 | 22 | 8.484 | 16.484 | 0 |
Table 21. The shortest distance between any two positions of map2.
| | 1 (Initial Position) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 84.554 | 121.286 | 52.828 | 93.528 | 113.738 | 82.216 | 113.942 | 135.872 | 133.324 | 130.668 | 77.038 | 127.496 | 41.866 | 60.076 | 38.898 |
| 2 | 84.554 | 0 | 33.968 | 25.656 | 65.796 | 84.484 | 98.146 | 111.42 | 147.146 | 124.312 | 135.458 | 103.324 | 143.12 | 120.35 | 129.63 | 100.006 |
| 3 | 121.286 | 33.968 | 0 | 55.522 | 24.07 | 48.828 | 45.936 | 75.006 | 92.904 | 90.828 | 83.942 | 98.834 | 144.426 | 133.694 | 137.216 | 95.312 |
| 4 | 52.828 | 25.656 | 55.522 | 0 | 68.318 | 99.216 | 72.624 | 96.108 | 119.656 | 140.7 | 110.458 | 77.528 | 109.324 | 90.866 | 103.802 | 64.56 |
| 5 | 93.528 | 65.796 | 24.07 | 68.318 | 0 | 23.726 | 22.554 | 43.35 | 62.694 | 62.312 | 73.974 | 98.662 | 150.496 | 122.764 | 125.108 | 90.554 |
| 6 | 113.738 | 84.484 | 48.828 | 99.216 | 23.726 | 0 | 36.866 | 19.796 | 39.592 | 40 | 79.866 | 107.592 | 142.222 | 136.49 | 134.044 | 104.764 |
| 7 | 82.216 | 98.146 | 45.936 | 72.624 | 22.554 | 36.866 | 0 | 26.554 | 44.07 | 43.936 | 58.764 | 86.936 | 118.286 | 100.146 | 96.662 | 64.764 |
| 8 | 113.942 | 111.42 | 75.006 | 96.108 | 43.35 | 19.796 | 26.554 | 0 | 21.796 | 19.796 | 63.656 | 82.592 | 121.452 | 113.184 | 120.012 | 82.974 |
| 9 | 135.872 | 147.146 | 92.904 | 119.656 | 62.694 | 39.592 | 44.07 | 21.796 | 0 | 44.28 | 51.796 | 69.388 | 102.624 | 117.98 | 90.012 | 87.286 |
| 10 | 133.324 | 124.312 | 90.828 | 140.7 | 62.312 | 40 | 43.936 | 19.796 | 44.28 | 0 | 86.592 | 98.904 | 139.866 | 124.566 | 125.98 | 97.598 |
| 11 | 130.668 | 135.458 | 83.942 | 110.458 | 73.974 | 79.866 | 58.764 | 63.656 | 51.796 | 86.592 | 0 | 38.624 | 55.038 | 96.006 | 56.49 | 62.968 |
| 12 | 77.038 | 103.324 | 98.834 | 77.528 | 98.662 | 107.592 | 86.936 | 82.592 | 69.388 | 98.904 | 38.624 | 0 | 59.522 | 69.904 | 22.624 | 28.07 |
| 13 | 127.496 | 143.12 | 144.426 | 109.324 | 150.496 | 142.222 | 118.286 | 121.452 | 102.624 | 139.866 | 55.038 | 59.522 | 0 | 79.382 | 42.484 | 56.834 |
| 14 | 41.866 | 120.35 | 133.694 | 90.866 | 122.764 | 136.49 | 100.146 | 113.184 | 117.98 | 124.566 | 96.006 | 69.904 | 79.382 | 0 | 26.14 | 29.554 |
| 15 | 60.076 | 129.63 | 137.216 | 103.802 | 125.108 | 134.044 | 96.662 | 120.012 | 90.012 | 125.98 | 56.49 | 22.624 | 42.484 | 26.14 | 0 | 39.694 |
| 16 | 38.898 | 100.006 | 95.312 | 64.56 | 90.554 | 104.764 | 64.764 | 82.974 | 87.286 | 97.598 | 62.968 | 28.07 | 56.834 | 29.554 | 39.694 | 0 |
Table 22. The shortest distance between any two positions of map3.
| | 1 (Initial Position) | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 94.076 | 35.382 | 122.624 | 101.7 | 114.974 | 84.388 | 157.706 | 146.604 | 170.286 | 126.458 | 114.484 | 126.77 | 83.802 | 61.796 | 42.624 |
| 2 | 94.076 | 0 | 45.866 | 42.35 | 16.898 | 41.866 | 20.484 | 85.452 | 82.35 | 149.566 | 92.318 | 106.668 | 103.248 | 45.592 | 69.63 | 53.382 |
| 3 | 35.382 | 45.866 | 0 | 87.318 | 63.108 | 77.936 | 51.108 | 105.77 | 97.566 | 153.732 | 90.968 | 99.834 | 101.216 | 55.108 | 42.694 | 21.312 |
| 4 | 122.624 | 42.35 | 87.318 | 0 | 47.312 | 84.248 | 56.624 | 112.592 | 118.038 | 176.254 | 121.152 | 123.808 | 131.49 | 85.458 | 100.082 | 110.77 |
| 5 | 101.7 | 16.898 | 63.108 | 47.312 | 0 | 24.484 | 12.898 | 73.108 | 66.554 | 129.91 | 74.114 | 87.184 | 84.662 | 45.248 | 68.216 | 71.484 |
| 6 | 114.974 | 41.866 | 77.936 | 84.248 | 24.484 | 0 | 28.828 | 61.834 | 95.146 | 134.496 | 80.7 | 117.668 | 68.006 | 56.732 | 96.426 | 91.522 |
| 7 | 84.388 | 20.484 | 51.108 | 56.624 | 12.898 | 28.828 | 0 | 63.452 | 55.796 | 114.012 | 63.318 | 80.56 | 65.248 | 43.866 | 64.592 | 58.108 |
| 8 | 157.706 | 85.452 | 105.77 | 112.592 | 73.108 | 61.834 | 63.452 | 0 | 30.382 | 146.738 | 72.968 | 116.936 | 43.866 | 63.038 | 104.076 | 124.49 |
| 9 | 146.604 | 82.35 | 97.566 | 118.038 | 66.554 | 95.146 | 55.796 | 30.382 | 0 | 117.082 | 54 | 102.49 | 27.312 | 46.28 | 88.7 | 116.044 |
| 10 | 170.286 | 149.566 | 153.732 | 176.254 | 129.91 | 134.496 | 114.012 | 146.738 | 117.082 | 0 | 50.452 | 81.592 | 132.948 | 75.452 | 99.248 | 115.91 |
| 11 | 126.458 | 92.318 | 90.968 | 121.152 | 74.114 | 80.7 | 63.318 | 72.968 | 54 | 50.452 | 0 | 56.834 | 27.312 | 32.14 | 49.522 | 59.974 |
| 12 | 114.484 | 106.668 | 99.834 | 123.808 | 87.184 | 117.668 | 80.56 | 116.936 | 102.49 | 81.592 | 56.834 | 0 | 76.42 | 61.42 | 53.726 | 72.14 |
| 13 | 126.77 | 103.248 | 101.216 | 131.49 | 84.662 | 68.006 | 65.248 | 43.866 | 27.312 | 132.948 | 27.312 | 76.42 | 0 | 42.554 | 63.388 | 86.012 |
| 14 | 83.802 | 45.592 | 55.108 | 85.458 | 45.248 | 56.732 | 43.866 | 63.038 | 46.28 | 75.452 | 32.14 | 61.42 | 42.554 | 0 | 46.624 | 45.006 |
| 15 | 61.796 | 69.63 | 42.694 | 100.082 | 68.216 | 96.426 | 64.592 | 104.076 | 88.7 | 99.248 | 49.522 | 53.726 | 63.388 | 46.624 | 0 | 20.968 |
| 16 | 42.624 | 53.382 | 21.312 | 110.77 | 71.484 | 91.522 | 58.108 | 124.49 | 116.044 | 115.91 | 59.974 | 72.14 | 86.012 | 45.006 | 20.968 | 0 |
Table 23. Numbers of obtained optimal solutions using different algorithms.
| Algorithm | map1 | map2 | map3 |
|---|---|---|---|
| LM-SSPSO | 50 | 50 | 50 |
| HS | 0 | 0 | 0 |
| SSPSO | 1 | 2 | 1 |
| Greedy | 0 | 0 | 0 |
| GA | 26 | 50 | 13 |
| ACO | 13 | 2 | 0 |
| TS | 2 | 10 | 2 |
Table 24. The maximum value of all 50 loops using different algorithms.
| Algorithm | map1 | map2 | map3 |
|---|---|---|---|
| LM-SSPSO | 478.1 | 548.966 | 656.22 |
| HS | 868.672 | 1131.88 | 1098.772 |
| SSPSO | 660.386 | 736.952 | 833.11 |
| Greedy | 511.138 | 588.208 | 779.168 |
| GA | 491.826 | 548.966 | 667.022 |
| ACO | 491.826 | 551.246 | 674.398 |
| TS | 612.316 | 750.812 | 788.27 |
Table 25. The average value of all 50 loops using different algorithms.
| Algorithm | map1 | map2 | map3 |
|---|---|---|---|
| LM-SSPSO | 478.1 | 548.966 | 656.22 |
| HS | 814.49 | 998.6336 | 1018.1468 |
| SSPSO | 550.2192 | 633.4848 | 723.9666 |
| Greedy | 511.138 | 588.208 | 779.168 |
| GA | 481.634 | 548.966 | 664.27 |
| ACO | 481.0868 | 549.3308 | 667.97 |
| TS | 523.8512 | 600.8568 | 692.771 |
Table 26. The minimum value of all 50 loops using different algorithms.
| Algorithm | map1 | map2 | map3 |
|---|---|---|---|
| LM-SSPSO | 478.1 | 548.966 | 656.22 |
| HS | 715.59 | 845.812 | 856.722 |
| SSPSO | 478.1 | 548.966 | 656.022 |
| Greedy | 511.138 | 548.966 | 779.168 |
| GA | 478.1 | 548.966 | 656.22 |
| ACO | 478.1 | 548.966 | 666.5 |
| TS | 478.1 | 548.966 | 656.22 |
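The per-algorithm statistics in Tables 23–26 can all be tallied from the same list of 50 run results: the minimum, maximum, and average path lengths, plus the count of runs that hit the optimum. A minimal sketch (the `lengths` values below are illustrative placeholders, not run data from the paper):

```python
# Tallying run statistics as reported in Tables 23-26 (illustrative data).
lengths = [478.1, 478.1, 491.826, 481.634, 478.1]

best = min(lengths)                    # Table 26: minimum value
worst = max(lengths)                   # Table 24: maximum value
avg = sum(lengths) / len(lengths)     # Table 25: average value
# Table 23: number of runs reaching the best (optimal) length,
# with a small tolerance for floating-point path sums.
n_opt = sum(1 for v in lengths if abs(v - best) < 1e-6)

print(best, worst, round(avg, 3), n_opt)
```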
Li, M.; Qiao, L.; Jiang, J. A Multigoal Path-Planning Approach for Explosive Ordnance Disposal Robots Based on Bidirectional Dynamic Weighted-A* and Learn Memory-Swap Sequence PSO Algorithm. Symmetry 2023, 15, 1052. https://doi.org/10.3390/sym15051052