FPS: Fast Path Planner Algorithm Based on Sparse Visibility Graph and Bidirectional Breadth-First Search

Abstract: The majority of planning algorithms in use are based on occupancy grid maps, but in complicated situations, occupancy grid maps incur a significant search overhead. This paper proposes a path planner for mobile robots based on the visibility graph (v-graph) that uses sparse methods to speed up and simplify the construction of the v-graph. Firstly, a complementary grid framework is designed to reduce the cost of graph-update iterations during data collection in each data frame. Secondly, a filter approach based on the edge length and the number of vertices of the obstacle contour is proposed to reduce redundant nodes and edges in the v-graph. Thirdly, a bidirectional breadth-first search is incorporated into the path-searching process of the proposed fast path planner in order to reduce wasted exploration space. Finally, the simulation results indicate that the proposed sparse v-graph planner can significantly improve the efficiency of building the v-graph and reduce the path-search time. In highly convoluted unknown or partially known environments, our method is 40% faster than the FAR Planner and produces paths 25% shorter. Moreover, the physical experiment shows that the proposed path planner is faster than the FAR Planner in both the v-graph update process and the laser process. The proposed method finds paths faster than conventional methods based on occupancy grids.


Introduction
With the popularity of the robotics industry, simultaneous localization and mapping (SLAM) technology has developed rapidly. SLAM technology can be divided into three categories, i.e., LiDAR-SLAM [1][2][3][4], visual-SLAM [5][6][7][8] and LiDAR fusion visual SLAM [9][10][11], and it is widely used for robot navigation tasks. In the application of robot navigation, a by-product of SLAM is the map, including metric and topological maps. The metric map emphasizes accurately representing the positional relationships of objects, while the topological map emphasizes the relationships between map elements.
In smaller spaces, such as corridors and houses, occupancy grid maps [12] are preferred over topological maps. Topological maps, on the other hand, are more appropriate for path planning in large areas, where occupancy grid maps are computationally expensive. In this paper, the visibility graph [13], a topology-based type of map, is constructed for route planning and navigation.
Path planning is an active research area driven by the needs of autonomous systems. The visibility graph (v-graph) is an efficient map representation for path planning, allowing the robot to move from one node to another, but it has some drawbacks. Firstly, it is hard not only to map the visibility graph completely in the 3D world [14], but also to extract the outlines of obstacles. Secondly, as the number of nodes in the graph increases, the number of associated edges grows even faster, which raises the computational cost in terrain with complex obstacles [15]. In this case, it is necessary to simplify complex obstacles to reduce the vertices and edges used for path planning.
Most existing v-graph generation methods [16][17][18][19][20][21] store the pointcloud explored by the LiDAR into a local grid and then perform plane mapping to extract the vertices of polygons. However, these algorithms, such as [17][18][19][20], face the following problems. Firstly, if the local grid is too sparse, the sampling accuracy decreases; on the other hand, if the local grid is dense, the shape of a polygon is determined more accurately, but the amount of calculation increases dramatically. Secondly, the number of vertices and edges determines how costly a v-graph-based path search is; maps with many intricate obstacles contain many redundant vertices and edges, which makes path search time-consuming. Finally, a large number of vertices and edges in the v-graph slows down its traversal, which in turn slows down v-graph maintenance. Although Lee et al. [21] proposed a v-graph-based obstacle avoidance strategy, they did not address the issue of dense v-graphs in complicated scenes. A sparse v-graph-based path planner is proposed as a solution to these issues; it lowers the cost of v-graph maintenance, increases the effectiveness of v-graph building, and decreases wasted space during path search. Overall, the main contributions of the paper are summarized as follows:
• Compared to existing methods for storing a laser pointcloud, this paper proposes a complementary holed structure for iteratively updating the local grid. Only half of the pointcloud data needs to be processed in each data frame to update the map. The pointcloud data are subjected to image blurring after planar mapping, and key vertices are then extracted from the blurred image.
• For obstacles with complex contours, this paper proposes a filtering method based on the edge length and the number of vertices of the obstacle contour. The method effectively reduces the number of vertices in the v-graph and its maintenance cost by filtering the vertices of large, complex obstacles. Since the v-graph and the path search algorithm are tightly coupled, the efficiency of the path search algorithm is also improved.
• A bidirectional breadth-first search algorithm is introduced, since exploring uncharted territory requires a large search space. In this paper, edges between the goal point and the existing vertices in the v-graph are established by geometric checking. The bidirectional breadth-first search therefore reduces wasted exploration space during navigation.

Related Work
The current mainstream of path planning research is divided into the following categories: search-based, sampling (probability)-based, genetic algorithm (GA)-based, and learning-based. According to the planning results, it is further divided into complete planning algorithms and probabilistic complete planning algorithms.
Search-based planning methods: These methods mainly include Dijkstra [22] and its variants, such as A* [23], D* [24], etc. The Dijkstra and A* algorithms are often used to search on discrete grids. Such algorithms are re-initialized for each search cycle and thus take a long time to plan routes. Incremental versions of Dijkstra-derived algorithms were proposed to reduce re-planning time by adjusting the previous cycle's planning result with local information. However, as with D* Lite [25], in complex environments the computational load of incrementally re-evaluating the current environment can exceed that of non-incremental A*. Many improved A* algorithms have been proposed to decrease memory usage and achieve better trajectories [26,27], but they are all based on occupancy grids and thus cannot avoid their shortcomings.
Random sampling planning methods: These methods mainly include rapidly expanding random tree (RRT) [28] and a series of its variants, such as RRT* [29], informed RRT* [30], and RRT-Connect [31]. These methods are designed for a known environment. Some algorithms derived from RRT are used for planning in an unknown or partially known environment. These methods must perform maintenance or regeneration of random trees frequently to account for newly observed environments.
Genetic algorithm (GA) methods: These algorithms perform path planning through crossover and mutation of chromosomes. Tu et al. [32] proposed a method for generating collision-free paths based on GA. Chen et al. [33] proposed an improved GA to minimize the total distance of a UAV. The method in [34] combined deep learning and GA to design a followable path for multi-robot systems. Moreover, there are many other improvements to GA [35][36][37] that achieve better results. In genetic algorithms, the choice of parameters such as the crossover rate and mutation rate has a significant impact on solution quality, but these values are typically chosen based on experience.
Planning by deep learning: These methods [38][39][40] require a large number of ground-truth labels for training. The essence of training based on deep learning is to encode and decode environmental information; during testing, these methods can handle scenarios similar to the training environment. The essence of training based on reinforcement learning is to encode the motion-state information at each step; during testing, results are output according to the encoded motion state. Lin et al. [41] proposed a collision-free model based on deep reinforcement learning to allow robots to avoid obstacles. Tutsoy et al. [42] provided a minimum-time path for a balancing task through reinforcement learning, and [43] considered energy consumption to design a general dynamic control model based on deep reinforcement learning. However, learning-based methods are inherently data-driven, and the magnitude of the data limits their ability to scale to different environments.
The majority of the aforementioned path-planning techniques, including common search-based algorithms, RRT algorithms, and evolutionary algorithms, are based on occupancy grids. When the searched scene is large, the drawbacks of occupancy grids constrain the speed of these algorithms; e.g., as shown in Figure 1, in a 10 × 10 grid map, the algorithm based on the occupancy grid map traversed 20 grid cells to reach the goal, while the algorithm based on the v-graph only traversed five vertices of the obstacle. This paper mainly studies visibility-graph-based path-planning methods. Although the literature [13,[44][45][46] studied the use of the visibility graph for robot navigation, and FAR Planner [19] applied the theory to exploration and navigation in the real 3D world, maintaining the graph in complex situations is still expensive, and the way-points generated in unknown environments often lead to detours. To address these issues, a sparse v-graph path planner is proposed. This path planner enhances the efficiency of v-graph building while decreasing wasted space and v-graph maintenance costs. Similar to FAR Planner [19], the pointcloud is extracted from obstacles and mapped into polygons, from which vertices and edges are extracted to construct the v-graph for navigation. The improvement of our approach is that each pointcloud data frame in the local area does not fully participate in the construction of the global layer. A complementary hole structure for iteratively updating the local grid is used to store the pointcloud information in the current local area, which means that only half of the points in each data frame need to be processed each time. The method continuously updates the pointcloud information on each data frame until a global map is formed. Compared with the original FAR Planner algorithm, the proposed algorithm can reach the target point within a shorter distance and in less time.
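To make the contrast concrete, the sketch below (Python; the grid, the wall, and the three-vertex visibility graph are made-up toy instances, not data from the paper) counts how many nodes each representation expands before reaching the goal:

```python
import heapq
from collections import deque

def grid_bfs(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    Returns the number of cells expanded before the goal is reached."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([start]), {start}
    expanded = 0
    while frontier:
        cell = frontier.popleft()
        expanded += 1
        if cell == goal:
            return expanded
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return expanded

def vgraph_dijkstra(edges, start, goal):
    """Dijkstra over a visibility graph given as {vertex: [(nbr, cost)]}.
    Returns the number of vertices expanded before the goal is reached."""
    queue, done, expanded = [(0.0, start)], set(), 0
    while queue:
        cost, v = heapq.heappop(queue)
        if v in done:
            continue
        done.add(v)
        expanded += 1
        if v == goal:
            return expanded
        for u, w in edges.get(v, []):
            if u not in done:
                heapq.heappush(queue, (cost + w, u))
    return expanded

# Toy scene: a 10 x 10 grid with a wall, versus a v-graph that keeps
# only the start, one wall corner, and the goal as vertices.
grid = [[0] * 10 for _ in range(10)]
for r in range(8):
    grid[r][5] = 1                      # vertical wall in column 5
grid_expanded = grid_bfs(grid, (0, 0), (0, 9))
vgraph = {'start': [('corner', 9.4)],
          'corner': [('start', 9.4), ('goal', 9.0)],
          'goal': [('corner', 9.0)]}
vgraph_expanded = vgraph_dijkstra(vgraph, 'start', 'goal')
```

The grid search must expand most of the free cells around the wall, while the v-graph search touches only its few obstacle vertices; this is the asymmetry the paper exploits.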
In the simulation experiments, the feasibility of the method is evaluated in simulated physical environments. These include the medium-scale, complex, and large-scale environments in the Autonomous Exploration Development Environment provided by CMU [47], as well as a medium-scale indoor environment and complex large-scale indoor environments provided by Matterport3D [48]. In the physical experiment, the LiDAR and an Inertial Measurement Unit (IMU) are coupled to generate the state estimation of the mobile robot [4], and the proposed navigation algorithm is tested in a real garage.

Sparse Visibility Graph-Based Path Planner
Define Q ⊂ ℝ³ as the robot navigation space and S ⊂ Q as the sensor data from obstacles. A down-sampling strategy is used to update and maintain the v-graph, denoted as G, and the grid used to store the pointcloud is denoted as L. Define the position of the robot as P_robot ∈ Q and the goal as P_goal ∈ Q.
The flow chart of the path planner proposed in this paper is shown in Figure 2. The process consists of three parts: (1) generating the geometric contours of obstacles by LiDAR-to-plane mapping; (2) aggregating and simplifying complex obstacle information to maintain the v-graph at a low cost; and (3) searching nodes and edges to generate the path from the start point to the goal through the v-graph.

Pointcloud Extraction Structure
We denote the process of extracting and mapping the pointcloud to geometric contours as extracting {P_cloud^k ⊂ Q | k ∈ ℤ⁺}, and the grid as Grid. In most laser-based SLAM, grids are used for accessibility analysis, which means that pointcloud information needs to be recorded in the global layer L_global and the local layer L_local. Although an incremental method of updating the pointcloud has been proposed, in the case of complex terrain and high-resolution grids, the computational resources needed to update the pointcloud are still very high. Therefore, a general sparsification module, denoted as F, is used to create the holed-structure local grid and incrementally update the pointcloud.
The dilated convolutional module [49] is usually used in neural networks to enlarge the receptive field in the picture, and its structure suggests another way to deal with the pointcloud in the local grid: as shown in Figure 3, the pointcloud is stored in a complementary hole grid. The holed-structure local grid is defined as Grid_d, and F contains the cells Sub ⊂ Grid_d; when obstacles are detected by the LiDAR, the sensor data S are transferred to the Sub cells, and all Sub cells together form Grid_d. We denote the voxel size as V_S; this value affects the density of the pointcloud, and in this paper V_S is set to 0.15 m. When Grid_d is formed, a PCL filter with a kernel size of (V_S, V_S, V_S) is applied to reduce the size of the pointcloud. S′ is the pointcloud remaining in Grid_d. After the local pointcloud is formed, S′ is classified into obstacle or free points, and the classified S′ is denoted as P_cloud^k. At this step, the fully classified Grid_d is denoted as L_local, which is then integrated into L_global. The complementary hole grids are generated in alternation across data frames, so that the final L_global still contains all the pointcloud information. The module F is shown in Algorithm 1.
Algorithm 1 Module F.
Input: sensor data S
Output: S′
1: input S
2: for every data frame do
3:   generate Grid_d
4:   for each Sub ⊂ Grid_d do
5:     store pointcloud ∈ S
6:   end for
7: end for
8: apply PCL filter to points ∈ Grid_d
9: classify points ∈ Grid_d
10: S′ = remaining pointcloud in Grid_d
11: update Grid_d to L_local

As shown in Figure 4, a Sub is a cell in the grid, and it stores part of the pointcloud information in the 3D space. The standard practice is to form a grid from all cells, but in this paper a grid with a holed structure, as shown in Figure 5a,b, is used so that only part of the cells participate in the calculation. For a grid of a certain size, the number of cells depends on its resolution. As shown in Figure 6, the higher the resolution, the more cells there are, and the denser the grid, the better its mapping effect; however, the amount of computation increases dramatically. The holed-structure grid therefore saves calculation cost, because only about half of the cells participate in the calculation in each data frame.

For the obstacle extraction and obstacle vertex reconstruction processes in the v-graph, the sensor information S is gridded and stored by the module F to obtain S′. After that, S′ is converted to a binary image I. To enhance robustness, the image I is blurred, and obstacle vertices are then extracted through image processing [50] to generate polygons {P_contour^k ⊂ Q | k ∈ ℤ⁺}. The polygon extraction algorithm is shown in Algorithm 2.
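A minimal 2D sketch of the complementary hole idea (Python; the checkerboard parity scheme and cell indexing are our illustrative assumptions, simplified from the 3D Grid_d): the cells are split into two interleaved subsets, and each data frame only fills the subset whose parity matches the frame index, so two consecutive frames together cover every cell.

```python
def hole_cells(rows, cols, frame_idx):
    """Cells active in this data frame: a checkerboard whose parity
    alternates with the frame index (the complementary hole structure)."""
    parity = frame_idx % 2
    return {(r, c) for r in range(rows) for c in range(cols)
            if (r + c) % 2 == parity}

def insert_points(local_grid, points, active_cells, voxel):
    """Store only the points falling into this frame's active cells;
    the other half of the grid is filled by the next, complementary frame."""
    for x, y in points:
        cell = (int(x // voxel), int(y // voxel))
        if cell in active_cells:
            local_grid.setdefault(cell, []).append((x, y))

# Two consecutive frames use complementary, disjoint subsets of cells.
frame0 = hole_cells(4, 4, 0)
frame1 = hole_cells(4, 4, 1)
local_grid = {}
insert_points(local_grid, [(0.1, 0.1), (0.2, 0.6)], frame0, voxel=0.5)
```

Since each frame touches only half the cells, per-frame insertion cost is roughly halved, while the union of the two frame patterns still covers the whole grid.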
The kernel size of the box filter in Algorithm 2 is determined by the width R_W and length R_L of the robot and the voxel size V_S; in this paper, V_S is set to 0.15 m.

Figure 7 demonstrates the blurred picture of the LiDAR-mapped obstacle geometry and the time consumption of the laser process. As can be seen from Figure 7a, the holed structure (using module F) does not affect the image after blurring. Compared with the original (without the holed structure), the generated contour approximates the original image. The projected outline of the obstacle is thicker because of the blurring, and the details inside the outline are lost.
The time required for the laser process, according to Figure 7b, includes gathering the raw pointcloud and downsampling. For the same area, the time consumed when using the holed structure is flatter, while the curve of the process without the holed structure is steeper and fluctuates greatly. The time it takes to process an image is depicted in Figure 7c. The total image-processing time includes mapping the pointcloud in L_local into the image I, blurring the image, and initially extracting the obstacle contour points; about 20% of it is spent on blurring the outlines of obstacles. Finally, the vertices in P_contour^k are downsampled based on [51] (Algorithm 2).
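The blurring step can be sketched as a naive box filter (Python; in practice an image library such as OpenCV would be used, and the 3 × 3 kernel below is an arbitrary illustrative value rather than the robot-size-derived one described above):

```python
def box_blur(img, k):
    """Naive box filter with a k x k kernel (k odd) and zero padding at
    the borders; a stand-in for the blurring applied to the binary
    obstacle image I before contour extraction."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
            out[y][x] = acc / (k * k)
    return out

# A single obstacle pixel is spread over its neighbourhood by the blur.
blurred = box_blur([[0, 0, 0], [0, 1, 0], [0, 0, 0]], 3)
```

The blur thickens and smooths the projected outline, which is why the extracted contour is slightly coarser than the raw projection, as noted for Figure 7a.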

Simplified Complex Contours Algorithm
In the previous section, L was constructed to obtain the pointcloud from the LiDAR. In this section, polygons are extracted and simplified based on L.
For the graph update method of the two-layer architecture shown in Figure 8, we define G_local and G_global. G_local is the local layer around the robot, and G_global is the layer covering the entire observed environment. For each data frame, the sensor information S generates G_local, which is then merged into G_global; note that since the module F is used to store the sensor information S, S′ ∈ L_local can now be used to merge into G_global at a lower cost. The computational complexity of constructing a v-graph is O(n² log n) [52], where n is the number of vertices in the graph. In the normal case, the cost of building a local graph is small enough that computational resources can be allocated to each data frame in an incremental update manner. However, if the environment has numerous complex obstacles, redundant nodes will be generated during each v-graph update, leading to a significant increase in the number of edges connecting the nodes. To ensure the effectiveness of the v-graph update, a method for further sparsifying complex contours is required.
Constructing local layers: The S′ ∈ L_local is converted into local polygons P_contour^k, which are used to construct a local visibility graph G_local. Note that complex polygons, as shown in Figure 9, contain many vertices joined by short edges. Redundant vertices create more useless edges; thus, many computing resources are wasted on unnecessary vertices and edges during path search in complex terrain. A threshold η is set to control the number of vertices for complex large local polygons.
When the number of vertices of a polygon P_contour^k is greater than η, vertices are removed, which optimizes the geometric outline of large and complex obstacles while retaining the geometric characteristics of small obstacles. As shown in Figure 10, the runs of consecutive vertices inside the red circles in Figure 10a should be eliminated, but the baseline method does not eliminate them well, resulting in more vertices and edges in the v-graph. Compared to Figure 10a, the optimized version in Figure 10b has fewer vertices. When the number of vertices of an obstacle exceeds η in the local layer, the algorithm first records the largest distance between any two vertices of the obstacle, denoted dist_max. The algorithm then traverses each run of three consecutive vertices vertex_{i−1}, vertex_i, vertex_{i+1} in the obstacle and calculates the lengths of the two adjacent edges, denoted dist_(i,i−1) and dist_(i,i+1). If both dist_(i,i−1) and dist_(i,i+1) are less than 0.1 × dist_max, vertex_i is an invalid (excess) vertex; in this case vertex_i is deleted and its connecting edges edge_(i−1,i) and edge_(i,i+1) are destroyed.
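The filtering rule above can be sketched as follows (Python; the 2D points and the repeat-until-stable loop are our illustrative assumptions, with dist_max taken as the largest vertex-to-vertex distance in the contour, as described):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def simplify_contour(vertices, eta, ratio=0.1):
    """Drop excess vertices from a closed contour with more than eta
    vertices: vertex_i is removed when both of its adjacent edges are
    shorter than ratio * dist_max (the largest pairwise vertex distance),
    repeating until no vertex qualifies."""
    if len(vertices) <= eta:
        return list(vertices)           # small obstacles keep their shape
    dist_max = max(dist(p, q) for i, p in enumerate(vertices)
                   for q in vertices[i + 1:])
    verts, changed = list(vertices), True
    while changed and len(verts) > 3:
        changed = False
        for i in range(len(verts)):
            prev_v = verts[i - 1]
            next_v = verts[(i + 1) % len(verts)]
            if dist(verts[i], prev_v) < ratio * dist_max \
                    and dist(verts[i], next_v) < ratio * dist_max:
                del verts[i]            # also removes its two short edges
                changed = True
                break
    return verts
```

On a contour with a cluster of near-duplicate vertices along one side, the cluster collapses while the far-apart corner vertices survive, which is exactly the behaviour intended for Figure 10b.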
Since the simplified complex contours algorithm only works on the G local , the v-graph update process will not be slowed down by the accumulation of the number of nodes in the G global .
Update the global layer: After G local is constructed, the G local and the G global are fused. The strategy is: take out the overlapping parts of G local in G global , and associate the vertex position in the G local to the G global . The Euclidean distance is used to associate vertices in two layers, and the associated vertices are recorded. The entire graph updating algorithm is as follows in Algorithm 3, and the final obstacle contours and edges are shown in Figure 11.
For two given points a(a_x, a_y, a_z) and b(b_x, b_y, b_z), the distance(a, b) in Algorithm 3 is defined as follows:

distance(a, b) = √((a_x − b_x)² + (a_y − b_y)² + (a_z − b_z)²)

Algorithm 3 Visibility Graph Update.
Input: S′ ∈ L_local, graph G
Output: updated graph G
1: P_contour^k ← PolygonExtraction(S′)  // from Algorithm 2
2: for each P_contour^k do
3:   if the number of vertices > η then
4:     for each vertex in the contour do
5:       update dist_max
6:     end for
7:     while true do
8:       for each vertex_i in the contour do
9:         if dist_(i,i−1) < 0.1 × dist_max and dist_(i,i+1) < 0.1 × dist_max then
10:          eliminate vertex_i and its edges edge_(i−1,i), edge_(i,i+1)
11:        end if
12:      end for
13:      if all dist_(i,i±1) ≥ 0.1 × dist_max then
14:        break
15:      end if
16:    end while
17:  else
18:    continue
19:  end if
20: end for
21: associate the vertices of P_contour^k with G_global
22: update the visibility graph G

Figure 11. An illustration of the sparse v-graph. The edges (orange) that head into at least one polygon from the shaded angle are eliminated, while the blue ones are kept. After eliminating the green vertices, the dotted blue edges are removed from G_local.
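The vertex-association step (matching G_local vertices to G_global by Euclidean distance) can be sketched as a nearest-neighbour match with a tolerance (Python; the tolerance value and the treat-unmatched-as-new policy are our assumptions):

```python
import math

def associate_vertices(local_verts, global_verts, tol=0.5):
    """Match each vertex of G_local to the nearest vertex of G_global
    within tol metres (Euclidean distance); local vertices with no match
    are treated as newly observed and appended to the global layer."""
    matches, new = {}, []
    for i, lv in enumerate(local_verts):
        best_j, best_d = None, tol
        for j, gv in enumerate(global_verts):
            d = math.dist(lv, gv)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is None:
            new.append(lv)
        else:
            matches[i] = best_j
    return matches, new
```

In practice a spatial index (e.g. a k-d tree) would replace the inner loop, but the pairing logic, match within a distance threshold or insert as new, is the same.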

Path-Planning Based on Bidirectional Breadth-First Search
In FAR Planner, the goal point P_goal is used as a vertex, and the Euclidean distance is used as the score to update the parent node of P_goal. Although a path can be found in an unknown environment, much of the search space is clearly wasted. In many cases the robot explores many unnecessary spaces before it finally reaches the goal. As shown in Figure 12, the robot travels from position 0 (start) to position 1, resulting in unnecessary exploration.
A bidirectional breadth-first search (bidirectional BFS) structure is combined with the v-graph to search for a path, selecting a vertex of a connecting edge of the robot in the forward search while simultaneously beginning a backward search from the P goal to find the path to the robot's current position. This minimizes the amount of unnecessary exploration space.
In planning, it is assumed that there are no obstacles in the unknown area where P_goal is located. P_goal uses geometric collision checking to establish edges with the existing vertices {P_node | P_node ∈ G_global} in the v-graph, and P_goal is then connected to the vertices of the discovered obstacles in the v-graph, as shown in Figure 13a. One-way BFS usually wastes some search space in unknown or partially known environments. This is because one-way BFS starts from the nodes connected to the robot, reaches the target point according to the cost, and then iterates back to the robot position through the target point's parent nodes.
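The geometric check that decides whether P_goal can be linked to an existing vertex can be sketched as a 2D line-of-sight test (Python; the cross-product segment-intersection routine is a standard technique, not the paper's exact implementation):

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc (positive if counter-clockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper-intersection test between segments p1p2 and q1q2."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible(a, b, polygons):
    """A goal-to-vertex edge is added only if the straight segment a-b
    crosses no polygon edge (a simplified line-of-sight check)."""
    for poly in polygons:
        n = len(poly)
        for i in range(n):
            if segments_cross(a, b, poly[i], poly[(i + 1) % n]):
                return False
    return True
```

A candidate edge that passes through an obstacle polygon is rejected, while an edge that skirts it is kept; only visible vertices get connected to P_goal.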
As shown in Figure 13b,c, one-way BFS enters a fork while planning the global path from the starting point to the ending point, which increases the search space. The result is shown in Figure 13d: from the red point to the green point, the one-way search wastes a huge amount of space. Therefore, this paper embeds the goal in the v-graph, associates it with the existing vertices in the graph, and adopts a bidirectional breadth-first algorithm for path search. The bidirectional BFS structure is shown in Algorithm 4, in which the two searches are labeled forward and backward according to their direction (forward searches from the robot position to P_goal, and backward searches from P_goal to the robot position).
In Algorithm 4, parent_F(·) and parent_B(·) are the functions returning the forward and backward parent of a node. Q_F and Q_B are the min-priority queues of the forward and backward searches, ordered by g_F and g_B, respectively. μ is the cost of the best path found so far (initially, μ is set to ∞). Whenever a node reached by one search is expanded in the other, μ is updated if a better path goes through that node. g_F is the current distance from the start and g_B is the current distance from P_goal. top_F and top_B are the distances of the top nodes in the forward and backward queues, respectively. The STEP function in Algorithm 5 is responsible for advancing the search through the v-graph and updating μ.
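A compact sketch of the bidirectional search described above (Python; it returns only the best cost μ and omits the parent bookkeeping of Algorithms 4 and 5, stopping with the standard top_F + top_B ≥ μ rule):

```python
import heapq
import math

def bidirectional_search(adj, start, goal):
    """Bidirectional uniform-cost search over a v-graph given as
    {node: [(neighbour, edge_cost), ...]}. mu is the best path cost found
    so far; the search stops when the smallest keys of the two priority
    queues (top_F, top_B) can no longer beat mu."""
    gF, gB = {start: 0.0}, {goal: 0.0}
    QF, QB = [(0.0, start)], [(0.0, goal)]
    doneF, doneB = set(), set()
    mu = math.inf

    def step(Q, g, done, g_other):
        """Advance one search frontier by one node (cf. STEP)."""
        nonlocal mu
        d, v = heapq.heappop(Q)
        if v in done:
            return
        done.add(v)
        for u, w in adj.get(v, []):
            nd = d + w
            if nd < g.get(u, math.inf):
                g[u] = nd
                heapq.heappush(Q, (nd, u))
            if u in g_other:            # the two searches meet at u
                mu = min(mu, g[u] + g_other[u])

    while QF and QB:
        if QF[0][0] + QB[0][0] >= mu:   # top_F + top_B >= mu: done
            break
        if QF[0][0] <= QB[0][0]:
            step(QF, gF, doneF, gB)
        else:
            step(QB, gB, doneB, gF)
    return mu
```

Because both frontiers grow until their combined radius reaches μ, each side explores roughly half the depth a one-way search would, which is the source of the reduced search space.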
One of the benefits of bidirectional search is that it can, to some extent, avoid the wasted search space caused by entering invalid forks. As shown in Figure 13e, the globally planned path no longer passes through the fork, thus avoiding excessive searching. As a result, as shown in Figure 13f, compared to Figure 13d, the distance traveled by the robot is greatly reduced.
A vertex just extracted from the v-graph G cannot be used directly as a navigation point; instead, a transform turns the vertex into a way-point. As Figure 14 shows, in the obstacle where the point is located, the vertices connected to the point on the polygon are extracted to calculate the direction vector of the point (in Algorithm 6, these are dir_front and dir_back, respectively, and the direction vector of the point is surf_dir).
A detailed description is given in Algorithm 6. In Algorithm 6, the parameter search_dist constrains the searching area, near_dist is a step parameter that extends the way-point along the surf_dir direction, and R_W and R_L are the width and length of the robot, respectively. When the way-point is extended, max_extend constrains the length of the extension. The NearOBS(·) function obtains a range of obstacle pointcloud from L_global, with P_way-point as the center and search_dist as the radius. Check_Collision detects whether the expanded P_way-point collides with surrounding obstacles; the details of the Check_Collision function are shown in Algorithm 7.
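A simplified 2D sketch of the way-point extension (Python; the radius-based collision test, the step sizes, and the 2D projection are our assumptions standing in for Check_Collision and NearOBS in Algorithms 6 and 7):

```python
import math

def check_collision(p, obs_cloud, radius):
    """True if any obstacle point lies within `radius` of p (a stand-in
    for Check_Collision against the nearby obstacle cloud)."""
    return any(math.dist(p, q) < radius for q in obs_cloud)

def extend_waypoint(vertex, surf_dir, obs_cloud,
                    near_dist=0.3, max_extend=2.0, radius=0.4):
    """Push a v-graph vertex outward along its surface direction vector
    surf_dir, in steps of near_dist up to max_extend, until the
    resulting way-point is collision-free."""
    norm = math.hypot(*surf_dir)
    ux, uy = surf_dir[0] / norm, surf_dir[1] / norm
    p, step = vertex, near_dist
    while check_collision(p, obs_cloud, radius) and step <= max_extend:
        p = (vertex[0] + ux * step, vertex[1] + uy * step)
        step += near_dist
    return p
```

Starting from a vertex that sits on an obstacle outline, the way-point is walked outward until the robot-sized clearance test passes, giving a point the robot can actually drive to.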

Experiments and Results
The paper uses the same experimental parameters as FAR Planner [19] (uniform sensor parameters; the robot speed is set to 2 m/s). The simulated experimental environments include a highly complex tunnel network, a parking garage with multiple floors, and a forest with many irregularly shaped trees. The indoor environment is moderately complex but prone to detours. Additionally, Matterport3D [48] provides a simple environment 17DRP5sb8fy (denoted as 17DR), a slightly complex environment PX4nDJXEHrG (denoted as PX4n), and a large complex environment 2azQ1b91cZZ (denoted as 2azQ).
In the simulation environment, all methods run on a 2.6 GHz i7 laptop, and the v-graph-based methods use images at 0.2 m/pixel resolution to extract points and form polygons. The local layer of the v-graph is a 40 m × 40 m area with the robot at the center. The threshold on the length of each visibility edge is set to 5 m. Finally, the simulated mobile robot is 0.6 m long, 0.5 m wide, and 0.6 m high.
In the physical environment, our method runs on an embedded device, and the robot speed is set to 0.7 m/s. To adapt to the real environment, the local layer of the v-graph is set to 20 m × 20 m. The length, width, and height of the mobile robot are 0.32 m, 0.25 m, and 0.31 m, respectively.

Laser Process Simulation Experiment
In the laser process simulation experiment, seven different types of environments are used to compare the holed structure with the original one. The experimental results are shown in Figure 15. The robot moves along a fixed route, and the experiment analyzes the time of the laser process, i.e., the whole process of extracting the pointcloud information of the local layer, storing it in the grid, and classifying it (obstacle pointcloud or free pointcloud).
As shown in Figure 15, for example, in the indoor environment, the robot starts from 0 (start) and passes through the target points 1, 2, 3, 4, 5, and 6 in sequence. The initial state of the robot is set in an unknown environment, and the known information about the environment is continuously accumulated through exploration. The program records the time for receiving and processing the pointcloud data from the LiDAR during the movement of the robot. Figure 16a summarizes the average laser-processing time in the different environments. It can be clearly seen from Figure 16a that our method spends less time processing the pointcloud information in every data frame. Figure 16b-h show that using the grid with the holed structure leads to smoother processing of the laser pointcloud. Compared with the original grid, the holed structure improves the processing speed of the pointcloud by 30.5∼44.5%.

Visibility Graph Update Simulation Experiment
Different values of η, ranging from 5 to 25, are tested in representative indoor and outdoor environments in order to select an appropriate value. The robot in the simulation experiment travels throughout the entire environment to count all the vertices in the v-graph and logs the average update speed. As Table 1 shows, the number of vertices in the v-graph and the update speed of the v-graph are positively correlated with the value of η, but smaller values of η are not always better. Theoretically, a smaller η should lead to fewer vertices in the v-graph. However, a value of η that is too small has drawbacks: collision detection may be affected when some minor obstacles are ignored, as shown in Figure 17a, and this phenomenon also occurs in forest environments. When η is equal to 5, two small trees are missing from the v-graph in Figure 17b. When η is between 15 and 20, as η increases, the outline of the obstacle is well preserved, and the update speed of the v-graph is also relatively fast. When η is greater than 25, the number of vertices in the v-graph gradually approaches the unfiltered (no-η) case. After several rounds of testing, a preliminary conclusion is that values of η between 15 and 20 are most suitable for generating obstacle vertices.

In the v-graph update simulation experiment, seven different environments are used to compare our v-graph update method with FAR Planner's, and η is set to 20. In each environment, a series of fixed target points is established, and both methods make the mobile robot pass through them in sequence. For example, in the indoor environment, the robot passes through the six target points 1, 2, 3, 4, 5, and 6 in sequence.
During the driving process of the robot, the update speed of the v-graph is recorded, and likewise all of the vertices in the v-graph are recorded. The experimental results are shown in Figure 18. In each environment in Figure 18a, the robot passes through a series of target points, and the known environment information is reset after reaching each target point. Figure 18b shows the optimized nodes and edges for complex irregular objects. For complex obstacles, our method simplifies the vertex information of the obstacle; when redundant vertices are removed, the redundant edges attached to them are removed as well. As shown in Figure 18b, the cyan and blue edges and the red nodes in the optimized v-graph are significantly reduced. Figure 18c shows the average speed of the v-graph update, and Figure 18d shows the total number of vertices in the v-graph. For simple terrain and obstacles with simple shapes and corners, such as the tunnel and 17DRP5sb8fy, our method obtains results similar to FAR Planner's, and the improvement is only 10∼20%; for large-scale maps containing complex obstacles, such as the forest, 2azQ1b91cZZ, PX4nDJXEHrG, and indoor environments, our method improves efficiency by 40∼60% compared to FAR Planner.
It is evident from Figure 19 that for environments with complex terrain, our method greatly reduces the number of vertices for obstacles in the v-graph. In Figure 19e,f, the original method has a total of 73 vertices while our method has only 49 vertices.

Path Planning Simulation Experiment
For path exploration in unknown environments, we compared a moderately complex indoor environment and a tunnel environment with a complex network structure. As in the graph update simulation experiment, a series of waypoints is set up in each environment for the robot to pass one by one. The robot is set to accumulate environmental information while exploring the unknown environment.
FAR Planner (v-graph-based) and A* and D* Lite (occupancy grid-based) are added for comparison with our algorithm. Figure 20 shows the trajectories generated by our algorithm and the other algorithms during navigation. When reaching the same navigation point, our algorithm avoids unnecessary exploration space to a greater extent than FAR Planner, A*, and D* Lite, thereby achieving a shorter distance and less time.

As can be seen from Figure 20a for the indoor environment, search space is wasted to varying degrees when FAR Planner, A*, and D* Lite navigate from point 1 to point 2. They are more inclined to explore the area where point 3 is located and only turn back after finding that there is no passage leading to point 2. A possible reason is that their cost functions only consider the cost of the current node itself and the Euclidean distance between the current node and the endpoint. This causes the robot to drive towards the node with the lowest total cost in a single direction, even though that node may not be able to reach the goal.
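The single-direction bias can be traced to the typical best-first cost these planners expand on; as a minimal sketch (not any planner's actual code):

```python
import math

def total_cost(g_cost, node, goal):
    """f(n) = g(n) + h(n): accumulated travel cost plus the Euclidean
    distance from the current node to the goal. Because the heuristic
    ignores obstacles, the node with the lowest f may sit in a dead
    end, producing the one-sided exploration discussed above."""
    return g_cost + math.dist(node, goal)
```

For example, a node 5 m from the goal with 2 m of accumulated cost scores f = 7 even if a wall blocks the straight line to the goal.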
For adjacent navigation points with relatively short distances, such as from navigation point 4 to navigation point 5 in an indoor environment, the time and distance consumed by all algorithms are not much different.
A* and D* Lite are known for their search completeness in finding the optimal path. However, these methods are difficult to scale because the computational cost increases significantly when environments are large and complex [19]. Although the tunnel environment contains a series of complex #- and T-shaped structures, there are almost no dead ends; that is, the goal can be reached in any direction, so the robot hardly needs to turn back while running. For traditional A* and D* Lite, the number of grid cells that must be computed rises sharply with the scale of the scene, and the cost of running the algorithms increases significantly in large-scale scenes. For our method, the use of bidirectional BFS avoids some bifurcations and reaches the destination point by a shorter route.

As shown in Figure 20a,b, our planner searches for shorter paths and generates effective trajectories. Table 2 shows that in the indoor case, our method reduces travel time by 23% compared to A*, 28% compared to D* Lite, and 21% compared to FAR Planner's original algorithm. In terms of extra time, FAR Planner wastes the most time on ineffective exploration, while A* and D* Lite are time-consuming when the robot repeatedly swings back and forth at a certain position; what all three have in common is wasted search space between some navigation points. Table 3 shows that in the tunnel case, our method produces the shortest distance, 8% shorter than A*, 32% shorter than D* Lite, and 25% shorter than FAR Planner. Tables 2 and 3 show that our planning algorithm runs faster because of the hole-structured grid used to update the graph and the vertex optimization for complex obstacles. Compared to FAR Planner, our search algorithm is 43% faster.
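The bidirectional search strategy described above can be illustrated with a minimal sketch on an unweighted adjacency-list graph (not the planner's implementation, which searches the v-graph): two frontiers expand alternately from start and goal, and a path is stitched together as soon as they meet, so far fewer nodes are touched than in a one-sided BFS.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Bidirectional breadth-first search; returns a path or None."""
    if start == goal:
        return [start]
    prev_s, prev_t = {start: None}, {goal: None}   # parent maps per side
    q_s, q_t = deque([start]), deque([goal])

    def expand(queue, prev, other):
        for _ in range(len(queue)):                # one BFS layer
            node = queue.popleft()
            for nb in graph.get(node, ()):
                if nb not in prev:
                    prev[nb] = node
                    queue.append(nb)
                    if nb in other:                # frontiers met
                        return nb
        return None

    meet = None
    while q_s and q_t and meet is None:
        # expand the smaller frontier to keep both search trees balanced
        if len(q_s) <= len(q_t):
            meet = expand(q_s, prev_s, prev_t)
        else:
            meet = expand(q_t, prev_t, prev_s)
    if meet is None:
        return None
    # stitch the two half-paths together at the meeting node
    path, n = [], meet
    while n is not None:
        path.append(n)
        n = prev_s[n]
    path.reverse()
    n = prev_t[meet]
    while n is not None:
        path.append(n)
        n = prev_t[n]
    return path
```

On a corridor graph `{'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}`, `bidirectional_bfs(graph, 'a', 'd')` returns `['a', 'b', 'c', 'd']`.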

Physical Experiment
The physical experiment uses the mobile robot platform in Figure 21 with the speed set to 0.7 m/s. The mobile robot is equipped with a Velodyne-16 LiDAR and a 9250 Inertial Measurement Unit (IMU). The entire autonomous system is built on the Robot Operating System [53], with a Raspberry Pi 4B as the master and a laptop as the slave. The master transmits the LiDAR data, runs the CMU autonomous exploration interface [47], and drives the STM32; the navigation algorithm runs on the slave. The camera on the mobile robot, at 640 × 480 resolution, is used only to obtain pictures of the environment. The LiDAR and IMU are coupled to generate the robot's state estimation through LeGO-LOAM [4]. The main system structure of the mobile robot is shown in Figure 22; the autonomy system incorporates navigation modules from the CMU autonomous development interface, e.g., terrain analysis and way-point following, as fundamental navigation modules.

As shown in Figure 23a,b, obstacles are mapped as polygons during exploration, and solid edges are formed from each relevant vertex. In Figure 23c, colored pointclouds of obstacles are displayed to better show the details; orange lines denote the obstacles' outlines, while cyan lines denote the connections between the edges of different obstacles.
In the navigation experiment, as shown on the left side of Figure 23d, the mobile robot starts from point 1 and arrives at points 2, 3, and 4 in sequence; the right side of Figure 23d shows pictures taken when the mobile robot navigates to the corresponding positions. As shown in Figure 24, our algorithm is compared with FAR Planner in the physical experiment. The processing speed of the laser data, the update speed of the v-graph, and the running speed of the search algorithm are recorded every 0.5 s.

In Figure 24a, the mean and standard deviation of FAR Planner's laser processing are 60.95 ms and 16.59 ms, respectively, while those of our method are 41.59 ms and 9.93 ms. Not only is ours 31.76% faster on average than FAR Planner, but its running time is also more stable. As shown in Figure 24b, both approaches take longer to update the v-graph when the number of obstacles increases unexpectedly, but ours is on average 37.12% faster than FAR Planner. In Figure 24c, ours is on average 18.06% faster than FAR Planner in the search algorithm.

Grid in Mapping
In the process of robot simultaneous localization and mapping, grids can be used to store pointcloud data [12,[54][55][56][57][58] and can also serve as occupancy grid maps for robot navigation [56,[59][60][61][62][63]. As Lau et al. [58] note, the processing speed is limited by the resolution of the grid: when the grid resolution is higher, the information represented by each cell is more accurate, but the computation takes longer. Homm et al. [61] use a Graphics Processing Unit (GPU) to speed up the construction of fine grids, and Birk et al. [56] use multiple robots to jointly maintain grid maps, an approach that indirectly speeds up the construction of the grid map.
The aim of the work in this paper is to use a discrete hollow grid to convert pointcloud information into a binary image and extract obstacle vertices; the focus is therefore on how to quickly process the grid to generate the vertices of obstacles. For a 30 m × 30 m local grid, a high-resolution grid, such as 0.5 m × 0.5 m per cell, is processed much more slowly than a low-resolution grid, such as 1 m × 1 m per cell. Therefore, a grid with spaced hollow structures is designed to speed up the processing of high-resolution grids. Since the hollow structure affects the generated images, for example, by making obstacle edges discontinuous, the images must be blurred; blurring covers the empty spots caused by the hollow structure. Additionally, a complementary hollow-structured grid is adopted: a pair of complementary hollow-structured grids is stored over adjacent data frames and integrated into the local map.
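The complementary hollow grid can be sketched as follows, assuming a checkerboard pattern as the hollow structure (the pattern, function names, and 3 × 3 box blur are illustrative assumptions, not the paper's exact implementation): even frames fill one half of the cells, odd frames the other half, the two frames are merged, and a blur closes any remaining holes before contour extraction.

```python
import numpy as np

def hollow_mask(shape, frame_idx):
    """Checkerboard mask of cells updated in this frame (assumed pattern):
    even frames touch the 'black' cells, odd frames the 'white' cells, so
    two consecutive frames together cover every cell."""
    ys, xs = np.indices(shape)
    return (ys + xs) % 2 == frame_idx % 2

def rasterize(points, shape, res, frame_idx):
    """Project 2D points into a binary grid, filling only hollow cells."""
    grid = np.zeros(shape, dtype=np.uint8)
    mask = hollow_mask(shape, frame_idx)
    for x, y in points:
        i, j = int(y / res), int(x / res)
        if 0 <= i < shape[0] and 0 <= j < shape[1] and mask[i, j]:
            grid[i, j] = 1
    return grid

def merge_and_blur(grid_even, grid_odd):
    """Union of two complementary frames, then a 3x3 box blur to cover
    empty spots left by the hollow structure, as described above."""
    merged = np.maximum(grid_even, grid_odd).astype(float)
    padded = np.pad(merged, 1)  # zero border so the blur window fits
    blurred = sum(padded[di:di + merged.shape[0], dj:dj + merged.shape[1]]
                  for di in range(3) for dj in range(3)) / 9.0
    return blurred > 0
```

Each single frame rasterizes only half of the cells, which is where the speedup over a dense high-resolution grid comes from; the union of two adjacent frames restores full coverage.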
It is foreseeable that applying a grid with this sparse structure in three-dimensional space can significantly reduce the amount of computation and shorten the computation time. In future work, the authors hope to apply this sparse structure to 3D grids.

The Reduced Visibility Graph
The v-graph is a topology map that is widely used for path planning since it is constructed using the vertices of obstacles. The difficulty of calculating and maintaining the v-graph mainly depends on the number of vertices in the graph; thus, many researchers focus on how to simplify the v-graph [19,[64][65][66][67][68][69][70].
Nguyet et al. [69] proposed a method for clustering small obstacles according to their volume, which reduces the number of vertices in the v-graph and thereby improves the efficiency of the path search algorithm. However, this method needs to recompute the total area of the global map in each iteration, which wastes considerable time when there are many small obstacles. Yang et al. [19] set an angle ζ to limit the visibility of each obstacle vertex, which effectively reduces the number of edges at each vertex. This paper combines Yang's method [19] with a method of simplifying obstacle vertices that acts only on the local map. Compared with [69], the speed of the algorithm proposed in this paper is not affected by the size of the global map, and it effectively reduces invalid vertices and redundant edges.

Uncertainties in the Path Planning
Path planning is based on the produced map (an occupancy, topological, or semantic map), and the localization of the robot is crucial for constructing that map. In the simulation experiment, we can easily obtain the relevant data of the robot, such as the simulated IMU and LiDAR information, and this information is accurate and unbiased in the simulated environment for estimating the state of the robot. How well the robot is localized determines whether the map used for navigation is usable. In the real world, however, we cannot obtain such unbiased data; therefore, we use LeGO-LOAM for the state estimation of the robot. If the robot's state estimation has a large error, the v-graph it builds will deviate from the real world.
As shown in Figure 25, in the real world, the mobile robot made an error in state estimation, and the white pointcloud newly scanned by the LiDAR is clearly offset from the colored obstacle pointcloud. This leads to an unreliable v-graph and thus degrades the path planning.

Conclusions
This paper proposed a sparse visibility graph-based path planner built on the FAR Planner framework. Our method substantially outperforms FAR Planner in the efficiency of v-graph maintenance and generation, and it can be used for navigation in known environments and exploration in unknown environments. A complementary hollow grid is designed for local layer updates, and the local layer is merged into the global layer. For complex obstacles in the environment, a method is proposed to reduce the cost of maintaining the v-graph by simplifying the vertices and edges of their polygons, while the information of small obstacles is still preserved in the graph. Finally, the paper proposed a path planning method that combines a bidirectional breadth-first search with the v-graph. Compared with FAR Planner's original algorithm and the traditional search algorithms A* and D* Lite, ours achieves better path planning in unknown environments, and its path search is faster than FAR Planner's.

Data Availability Statement:
The CMU simulation environment can be acquired through https://www.cmu-exploration.com. The Matterport3D simulation environment can be acquired through https://niessner.github.io/Matterport. The original framework used in the paper can be acquired through https://github.com/MichaelFYang/far_planner (all accessed on 14 June 2022).