This section introduces the steps necessary to generate the path plan for an autonomous robot (or team of robots) using the current location of the robots, satellite imagery, and cadastral information. For this purpose, we identified two main stages.
3.1. First Stage: Development of the Autonomous Mapping System
The proposed autonomous mapping module is composed of two main submodules:
Acquisition submodule: This submodule is in charge of acquiring the images from online databases to generate the final map of the parcel. Concretely, the system downloads the images from a public administrative database, SIGPAC, using the information provided by the GPS sensor included in the robots. In other related experiments, we installed a sub-meter precision GPS in the autonomous robots (GPS Pathfinder ProXT Receiver) [22]. The location accuracy obtained in those cases encourages us to integrate that GPS in the current system. However, GPS integration is out of the scope of this paper, since its main objective is to introduce the algorithms that combine satellite and cadastral images and to show a proof of concept. This submodule provides two images to the map creation submodule: the satellite image of the parcel and the image with the parcel borders.
Map creation submodule: This submodule is in charge of extracting all the information from the images: paths, orange trees, borders, and obstacles. With the extracted information, the map (path plan) that the robots should follow is generated. The initial strategy to efficiently cover the whole parcel with the robots is also defined in this submodule. The map and satellite imagery resolution are set to 0.125 m per pixel. This resolution is easily reached with both services (satellite imagery and cadastral imagery), is fine enough to represent the robot dimensions (around 50 by 80 cm), and keeps the computational burden moderate. With the new Galileo services, higher-resolution images can be obtained (fewer centimetres per pixel). Although more precise paths could then be obtained, the computational cost of Fast Marching to generate the paths would also increase considerably.
3.1.1. Acquisition Submodule
This submodule is in charge of obtaining the two images needed to create the map of the parcel from the current position of the autonomous system. Its description is detailed in Algorithm 1, where $CurrentPosition$ represents the current position of the robotic system, $Resolution$ is the resolution of the acquired imagery, and $Radius$ stands for the radius of the area.
Algorithm 1 ObtainImages{$CurrentPosition$,$Resolution$,$Radius$} 
$FullSatImage$ = ObtainSatImage{$CurrentPosition$, $Resolution$, $Radius$}
$Borders$ = ObtainBorders{$CurrentPosition$, $Resolution$, $Radius$}
Return: $FullSatImage$, $Borders$

The output generated in this submodule includes the full remote image of the area, $FullSatImage$, and the borders of the parcels, $Borders$, which will be processed in the map creation submodule.
3.1.2. Map Creation Submodule
This submodule is in charge of creating the map of the parcel from the two images obtained in the previous submodule. The map contains information about the parcel: tree rows, paths, and other elements through which the robot cannot navigate (i.e., obstacles). Its description can be found in Algorithm 2.
Algorithm 2 GenerateMap{$CurrentPosition$,$FullSatImage$,$Borders$} 
$FullParcelMask$ = GenerateParcelMask{$CurrentPosition$, $Borders$}
[$ParcelMask$, $SatImage$] = CropImagery{$FullParcelMask$, $FullSatImage$}
[$Path$, $Trees$, $Obstacles$] = GenerateMasks{$CurrentPosition$, $ParcelMask$, $SatImage$}

The first step consists of determining the full parcel mask from the borders previously acquired. With the robot coordinates and the image containing the borders, this procedure generates a binary mask that represents the current parcel where the robot is located. This procedure is described in Algorithm 3.
Algorithm 3 GenerateParcelMask{$CurrentPosition$,$Borders$} 
Determine position $CurrentPosition$ in $Borders$: $Pos_x$ and $Pos_y$
Convert $Borders$ to grayscale: $BordersG$
Binarize $BordersG$: $BordersMask$
Dilate $BordersMask$: $DilatedBorder$
Flood-fill $DilatedBorder$ at $Pos_x, Pos_y$: $DilatedBorderFloodfilled$
Erode $DilatedBorderFloodfilled$: $ErodedBorderFloodfilled$
Dilate $ErodedBorderFloodfilled$: $FullParcelMask$
Return: $FullParcelMask$

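The flood-fill step in Algorithm 3 can be sketched in a few lines. The following is a minimal illustration under simplified assumptions (4-connectivity, a mask stored as nested Python lists), not the system's actual implementation:

```python
from collections import deque

def flood_fill(mask, start):
    """Return the set of connected free pixels reachable from `start`.

    `mask` is a list of lists of 0/1 values (1 = free, 0 = border);
    4-connectivity is assumed.
    """
    rows, cols = len(mask), len(mask[0])
    sy, sx = start
    if mask[sy][sx] == 0:
        return set()
    filled = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols \
                    and mask[ny][nx] == 1 and (ny, nx) not in filled:
                filled.add((ny, nx))
                queue.append((ny, nx))
    return filled
```

In Algorithm 3, this fill is applied to the dilated border mask at the robot's position, so the filled region is exactly the parcel the robot currently occupies.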
Once the parcel mask is determined from the parcel image, the areas outside the parcel are removed from both the satellite image and the parcel mask itself. To avoid under-representation of the parcel, a small offset is added to the mask. This step reduces the imagery size for further processing with advanced techniques. The procedure is detailed in Algorithm 4.
Algorithm 4 CropImagery{$FullParcelMask$,$FullSatImage$} 
Determine real-world position of the upper-left corner of $FullParcelMask$
Determine real-world position of the upper-left corner of $FullSatImage$
Determine $FullSatImage$ offset in pixels with respect to $FullParcelMask$
Determine the parcel's bounding box:
  Determine first and last row of $FullParcelMask$ that is in the parcel
  Determine first and last column of $FullParcelMask$ that is in the parcel
Crop $FullParcelMask$ directly: $ParcelMask$
Crop $FullSatImage$ according to the offset values: $SatImage$
Return: $ParcelMask$ and $SatImage$

Since the location of the pixels in the real world is known in the WGS84 / UTM zone 30N coordinate system (EPSG:32630), the offset between the satellite image and the parcel mask can be computed directly. Then, it is only necessary to determine the bounding box surrounding the parcel in the parcel mask to crop both images.
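Because both images are georeferenced in metres (EPSG:32630), the pixel offset follows directly from the corner coordinates and the resolution (0.125 m per pixel here). A minimal sketch; the function name and corner convention are illustrative, not taken from the paper:

```python
def pixel_offset(corner_a, corner_b, resolution):
    """Offset (in pixels) of image B's upper-left corner relative to image A's.

    `corner_a` and `corner_b` are (easting, northing) pairs in metres
    (e.g., EPSG:32630); `resolution` is metres per pixel.
    Column offset grows eastwards; row offset grows southwards, because
    image rows advance in the direction of decreasing northing.
    """
    ea, na = corner_a
    eb, nb = corner_b
    col_offset = round((eb - ea) / resolution)
    row_offset = round((na - nb) / resolution)
    return row_offset, col_offset
```

With this offset and the bounding box of the parcel mask, both images can be cropped to the same real-world window.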
At this point, the reduced satellite map needs to be processed to distinguish between the path and the obstacles. We analyzed different satellite images from groves located in the province of Castellón in Spain; the images were clear, with little presence of shadows. Therefore, we set the binarization threshold to 112 in our work. Although we applied a manual threshold, a dynamic thresholding binarization could be applied to distinguish the different elements in the satellite image more robustly in case shadows were present. Algorithm 5 has been used to process the reduced satellite map and to obtain the following three masks or binary images:
$Path$—a binary mask that contains the path where the robot can navigate;
$Trees$—a binary mask that contains the area assigned to orange trees;
$Obstacles$—a binary mask that contains those pixels that are considered obstacles; this mask is the inverse of the path mask.
Algorithm 5 GenerateMasks{$CurrentPosition$, $ParcelMask$, $SatImage$} 
Convert $SatImage$ to grayscale: $SatImageG$
Extract $MinValue$ and $MaxValue$ of the parcel pixels in $SatImageG$
Normalize $SatImageG$ values: $SatImageGN(x,y)=255\cdot\frac{SatImageG(x,y)-MinValue}{MaxValue-MinValue}$
Generate B&W masks:
$Path(x,y)=\left\{\begin{array}{ll}1 & \mathrm{if}\ SatImageGN(x,y)>112\ \mathrm{and}\ ParcelMask(x,y)\\ 0 & \mathrm{otherwise}\end{array}\right.$
$Trees(x,y)=\left\{\begin{array}{ll}1 & \mathrm{if}\ SatImageGN(x,y)\le 112\ \mathrm{and}\ ParcelMask(x,y)\\ 0 & \mathrm{otherwise}\end{array}\right.$
Remove path pixels not connected to $CurrentPosition$ in $Path$
Generate $Obstacles$ mask: $Obstacles(x,y)=1-Path(x,y)$
Return: $Path$, $Trees$ and $Obstacles$

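The normalization and thresholding of Algorithm 5 can be illustrated with a small, dependency-free sketch. It omits the connectivity filter (removing path pixels not connected to the robot's position), and the function and variable names are ours:

```python
def generate_masks(sat_gray, parcel_mask, threshold=112):
    """Split a grayscale parcel image into Path/Trees/Obstacles masks.

    `sat_gray` is a list of lists of 0-255 intensities and `parcel_mask`
    a same-sized 0/1 mask; the fixed threshold of 112 follows the text.
    Normalization uses only the pixels inside the parcel.
    """
    inside = [sat_gray[y][x]
              for y in range(len(sat_gray))
              for x in range(len(sat_gray[0]))
              if parcel_mask[y][x]]
    lo, hi = min(inside), max(inside)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    path, trees, obstacles = [], [], []
    for y, row in enumerate(sat_gray):
        p_row, t_row, o_row = [], [], []
        for x, v in enumerate(row):
            norm = 255 * (v - lo) / span
            in_parcel = parcel_mask[y][x] == 1
            is_path = 1 if (in_parcel and norm > threshold) else 0
            p_row.append(is_path)
            t_row.append(1 if (in_parcel and norm <= threshold) else 0)
            o_row.append(1 - is_path)  # Obstacles = inverse of Path
        path.append(p_row); trees.append(t_row); obstacles.append(o_row)
    return path, trees, obstacles
```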
3.2. Second Stage: Path Planning
This module is used to determine the final path that the robotic system or team of robots should follow. The principal task of path planning in autonomous robotic systems is to search for a collision-free path that covers the whole grove. This section describes all the steps necessary to generate the final path (see Algorithm 6), using as input the three masks provided by Algorithm 5.
To perform the main navigation task, the navigation module has been divided into five different submodules:
Determining the tree lines
Determining the reference points
Determining the intermediate points where the robot should navigate
Path planning through the intermediate points
Real navigation through the desired path
First of all, the tree lines are determined. From these lines and the Path binary map, the reference points of the grove are determined. Later, the intermediate points are selected from the reference points. With the intermediate points, a fast, collision-free path is generated to cover the whole grove. Finally, the robotic systems perform real navigation using the generated path.
Algorithm 6 PathPlanning{$CurrentPosition$,$Trees$,$Path$} 
$SortedTreeLines$ = DetectLines{$Trees$, $Path$}
[$Vertexes$, $MiddlePaths$] = ReferencePoints{$Path$, $SortedTreeLines$}
$FinalPath$ = CalculatePath{$Position$, $Path$, $MiddlePaths$, $Vertexes$}
Return: $FinalPath$

3.2.1. Detecting Tree Lines
The organization of citric groves is well defined, as can be seen in the artificial composition depicted in Figure 2; the trees are organized in rows separated by a relatively wide path to optimize treatment and fruit collection. The first step of the navigation module consists of detecting these tree lines.

The $Trees$ binary map (from Algorithm 5) contains the pixels located inside the parcel that correspond to trees. The rows containing trees are therefore extracted using the Hough Transform (see Reference [24]). Moreover, some constraints have been added to this technique to avoid selecting the same tree line several times. In this procedure, the Hesse standard form (Equation (1)) is used to represent lines with two parameters: $\theta$ (angle) and $\rho$ (displacement). Each element of the resulting Hough Transform matrix contains a number of votes that depends on how many tree pixels lie on the corresponding line.
According to the structure of the parcels, the main tree rows can be represented with lines. We applied Algorithm 7 to detect the main tree rows (rows of citric trees). First, the most important lines, the ones with the highest number of votes, are extracted. The first extracted line (the one with the highest number of votes) is considered the "main" line, and its parameters (${\theta}_{1}$, ${\rho}_{1}$ and $vote{s}_{1}$) are stored. Then, those lines that have at least $0.20\cdot vote{s}_{1}$ votes and whose angle lies within ${\theta}_{1}\pm 1.5^{\circ}$ are also extracted. The first threshold (${\Gamma}_{1}=0.20$) filters out small lines that do not represent full orange rows, whereas the second threshold (${\Gamma}_{2}=1.5^{\circ}$) selects the orange rows parallel to the main line. We determined the two threshold values according to the characteristics of the analyzed groves: the target groves have parallel orange rows of similar length.
Algorithm 7 DetectLines{$Trees$,$Path$} 
Apply Hough to $Trees$: $H = Hough(Trees)$
Extract line with highest value from $H$: $value_{main} = H(\rho_{main}, \theta_{main})$
Set threshold value: $\Gamma_1 = 0.20\cdot value_{main}$
Limit $H$ to angles close to the main angle ($\theta_{main} \pm 1.5^{\circ}$): $HL$
lines = true
Store main line parameters in $StoredLines$: $\rho_{main}$, $\theta_{main}$, $value_{main}$
while lines do
  Extract candidate line with highest value from $HL$: $value_c = HL(\rho_c, \theta_c)$
  Remove value from the matrix: $HL(\rho_c, \theta_c) = 0$
  if ($value_c > \Gamma_1$) then
    Check that the distance to every previously extracted line exceeds the threshold: $\parallel\rho_c - \rho_i\parallel > (12\cdot(1+\parallel\theta_c - \theta_i\parallel))$
    if the distance between the candidate and the stored lines is never lower than the threshold then
      Store line parameters in $StoredLines$: $\rho_c$, $\theta_c$, $value_c$
    end if
  else
    lines = false
  end if
end while
Sort stored lines: $SortedTreeLines$
Return: $SortedTreeLines$

Furthermore, to avoid detecting the same tree row several times, each extracted line cannot be close to any previously extracted line. We used Equation (2) to compute the distance between a candidate line ($c$) and a previously extracted line ($i$), since their angles are similar ($\pm 1.5^{\circ}$). If this distance is higher than a threshold value (${\Gamma}_{3}$), the candidate line is extracted. We use Equation (3) to calculate the threshold value according to the difference between the angles of the two lines.
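This pairwise separation test can be expressed compactly. The sketch below is illustrative (the function name and signature are ours); it keeps a candidate line $(\rho_c, \theta_c)$ only if its $\rho$ distance to every stored line exceeds $12\cdot(1+|\theta_c-\theta_i|)$, the threshold form appearing in Algorithm 7:

```python
def keep_line(candidate, stored, base_threshold=12.0):
    """Decide whether a Hough line (rho, theta) is far enough from all
    previously stored lines to be kept as a new tree row.

    The candidate is accepted only if, for every stored line i,
    |rho_c - rho_i| > base_threshold * (1 + |theta_c - theta_i|),
    with theta in degrees. The base value of 12 pixels follows the
    algorithm text; the paper's Gamma_3 plays the same role.
    """
    rho_c, theta_c = candidate
    for rho_i, theta_i in stored:
        threshold = base_threshold * (1 + abs(theta_c - theta_i))
        if abs(rho_c - rho_i) <= threshold:
            return False  # too close to an already extracted row
    return True
```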
The Hough Transform has been commonly used in the literature related to detection tasks in agricultural environments. Recent research shows that it is useful to process local imagery [25,26] and remote imagery [27,28].
3.2.2. Determining the Reference Points
Reference or control points correspond to well-known points and parts of the grove, e.g., the vertexes of the orange tree lines or the middle of the paths where farmers walk. A subset of these reference points will be used as intermediate points of the entire path that a robotic system should follow.
Firstly, the vertexes of the orange tree rows are extracted. For each extracted line, the vertexes correspond to the first and last points that belong to the path according to the $Path$ binary map.
Then, the Hough transform is applied again to detect the two border paths perpendicular to the orange rows. The bisector of these two lines is used to obtain additional reference points, which correspond to the centre of each path row (the path between two orange rows). Concretely, the intersection between the bisector line and the $Path$ binary map is used to calculate the points. After obtaining the intersection, the skeleton algorithm is applied to obtain the precise coordinates of the new reference points.
However, under certain conditions, two central points may be detected for the same row; e.g., if obstacles are present at the centre of the path, the skeleton procedure can provide two (or more) middle points between two orange tree rows. In those cases, only the first middle point is selected for each path row.

Finally, all the reference points can be considered basic features of the orange grove, since a subset of them will be used as intermediate points of the collision-free path that a robotic system should follow to efficiently navigate through the orange grove. For that purpose, we applied Algorithm 8 to extract all the possible middle points in a parcel.
Algorithm 8 ReferencePoints{$Path$, $SortedTreeLines$} 
$Vertexes$ = Extract reference points from $SortedTreeLines$
$MiddlePaths$ = Extract reference points that belong to the middle of the path rows
Return: $Vertexes$ and $MiddlePaths$

3.2.3. Determining the Navigation Route from Reference Points
In this stage, the set of intermediate points is extracted from the reference points; these are the points through which the robot should navigate. The main idea is to first move the robot to the closest corner; then, the robot navigates through the grove (path row by path row), alternating the direction of navigation (i.e., a zig-zag movement).
Firstly, the corner of the parcel closest to the coordinates provided by the robotic system is determined using the Euclidean distance. The corners are the vertexes of the lines extracted from the first and last rows of trees previously detected. The idea is to move the robot from one side of the parcel to the opposite side through the row paths. The sequence of intermediate points is generated from the first point (the corner closest to the initial position of the robot), with the opposite corner as the destination. The intermediate points are calculated using Algorithm 9.
Finally, this stage does not perform the path planning itself; Algorithm 10 is used to determine the sequence of individual points. Those points will be the basis for the next stage, where the paths for the robotic systems that compose the team are determined.
Algorithm 9 IntermediatePoints{$Position$,$MiddlePaths$,$Vertexes$} 
Extract corners from $Vertexes$: $Corners$ = [$Vertexes(1)$, $Vertexes(2)$, $Vertexes(Penultimate)$, $Vertexes(Last)$]
Detect closest corner to $CurrentPosition$: $Corner$
Add $Corner$ to $ListIntermediatePoints$
$ListPoints = SequencePoints\{Corner, MiddlePaths, Vertexes\}$
Add $ListPoints$ to $ListIntermediatePoints$
Return: $ListIntermediatePoints$

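The closest-corner selection in Algorithm 9 is a plain Euclidean nearest-point query. A minimal sketch (names are illustrative):

```python
import math

def closest_corner(position, corners):
    """Return the corner nearest to `position` by Euclidean distance.

    `position` is an (x, y) pair in image coordinates and `corners`
    is the list of four vertexes used by Algorithm 9.
    """
    return min(corners, key=lambda c: math.dist(position, c))
```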
Algorithm 10 SequencePoints{$Corner$,$MiddlePaths$,$Vertexes$} 
Let $N_{points}$ be the number of points in $MiddlePaths$
if $Corner$ is $Vertexes(1)$ then
  for i = 1 to ($N_{points}-1$) do
    Add $MiddlePaths(i)$ to $ListPoints$
    Add $Vertexes(i\cdot 2 - module(i+1,2))$ to $ListPoints$
  end for
  Add $MiddlePaths(N_{points})$ to $ListPoints$
  Add $Vertexes(last)$ to $ListPoints$
end if
if $Corner$ is $Vertexes(2)$ then
  for i = 1 to ($N_{points}-1$) do
    Add $MiddlePaths(i)$ to $ListPoints$
    Add $Vertexes(i\cdot 2 - module(i,2))$ to $ListPoints$
  end for
  Add $MiddlePaths(N_{points})$ to $ListPoints$
  Add $Vertexes(penultimate)$ to $ListPoints$
end if
if $Corner$ is $Vertexes(penultimate)$ then
  for i = 1 to ($N_{points}-1$) do
    $j = N_{points} + 1 - i$
    Add $MiddlePaths(j)$ to $ListPoints$
    Add $Vertexes((j-1)\cdot 2 - module(i,2))$ to $ListPoints$
  end for
  Add $MiddlePaths(1)$ to $ListPoints$
  Add $Vertexes(2)$ to $ListPoints$
end if
if $Corner$ is $Vertexes(last)$ then
  for i = 1 to ($N_{points}-1$) do
    $j = N_{points} + 1 - i$
    Add $MiddlePaths(j)$ to $ListPoints$
    Add $Vertexes((j-1)\cdot 2 - module(i+1,2))$ to $ListPoints$
  end for
  Add $MiddlePaths(1)$ to $ListPoints$
  Add $Vertexes(1)$ to $ListPoints$
end if
Return: $ListPoints$

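The zig-zag ordering of Algorithm 10 can be sketched for one of the four cases, the one starting at $Vertexes(1)$; the other three branches only change the traversal direction and the vertex index arithmetic. This is an illustrative reading of the 1-based pseudocode, not the paper's implementation:

```python
def zigzag_sequence(middle_paths, vertexes):
    """Visiting order for the Vertexes(1) starting corner.

    `middle_paths[i]` is the middle point of path row i+1 and `vertexes`
    alternates the two endpoints of each tree line. The robot crosses
    each row via its middle point and exits at the vertex on the
    alternating side: Vertexes(i*2 - module(i+1, 2)) in 1-based terms.
    """
    points = []
    n = len(middle_paths)
    for i in range(1, n):  # 1-based i = 1 .. n-1, as in the pseudocode
        points.append(middle_paths[i - 1])
        # 1-based index i*2 - ((i+1) % 2), shifted to 0-based
        points.append(vertexes[i * 2 - ((i + 1) % 2) - 1])
    points.append(middle_paths[-1])
    points.append(vertexes[-1])  # Vertexes(last)
    return points
```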
3.2.4. Path Planning Using Processed Satellite Imagery
Once the initial and intermediate points and their sequence are known, the path planning map is determined. This means calculating the full path that the robotic system should follow.
Firstly, the $Speed$ map is calculated with Equation (4). This map is related to the distance to the obstacles (where the robot cannot navigate). The novel equation introduces a parameter, $\alpha$, to penalize those parts of the path close to obstacles and to give higher priority to areas far from them. In practice, the robotic system can navigate faster in areas far from obstacles because the probability of collision is lower; the equation was therefore chosen to penalize navigation close to obstacles. Here, $distance$ refers to the pixel distance from point $(x,y)$ in the image to the closest obstacle. The final $Speed$ value is set to 1 when the pixel corresponds to an obstacle, whereas it takes a value higher than 100 in those pixels that correspond to the class "path".
The final procedure carried out to calculate the $Speed$ image is detailed in Algorithm 11.
Algorithm 11 $Speed$ = CalculateSpeed{$Path$, $SortedTreeLines$} 
Add detected tree lines to the path binary mask $Path$
Select non-path points with at least one neighbour that belongs to the path: $ObstaclePoints$
Set initial $Speed$ values: $Speed(x,y) = 1$ ∀ $x\in[1,TotalWidth]$, $y\in[1,TotalHeight]$
for x = 1 to $TotalWidth$ do
  for y = 1 to $TotalHeight$ do
    if $Path(x,y)$ belongs to the path then
      Calculate the Euclidean distance to all $ObstaclePoints$
      Select the lowest distance value: $mindistance$
      $Speed(x,y) = 100 + mindistance^{3}$
    end if
  end for
end for
Return: $Speed$

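The core of Algorithm 11 can be sketched as a brute-force computation (fine for small masks; a distance transform would be used at scale). For simplicity this sketch treats every non-path pixel as an obstacle point, rather than only the boundary pixels the algorithm selects; the result on the path pixels is the same:

```python
import math

def calculate_speed(path_mask):
    """Compute a Speed map for a small binary path mask.

    Path pixels get 100 + d^3, where d is the Euclidean distance to the
    nearest non-path pixel (the alpha = 3 penalty of the Speed equation);
    non-path pixels get speed 1. Brute force, for illustration only.
    """
    h, w = len(path_mask), len(path_mask[0])
    obstacles = [(y, x) for y in range(h) for x in range(w)
                 if path_mask[y][x] == 0]
    speed = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if path_mask[y][x] == 1 and obstacles:
                d = min(math.hypot(y - oy, x - ox) for oy, ox in obstacles)
                speed[y][x] = 100 + d ** 3
    return speed
```

The cubic term makes the speed grow quickly away from obstacles, so Fast Marching strongly prefers the centre of each path row.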
Then, the Fast Marching algorithm is used to calculate the $Distances$ matrix, which contains the "distance" to a destination for every point of the satellite image. The values are calculated from the $Speed$ values and the target/destination coordinates in the satellite image. The result of this procedure is a $Distances$ matrix whose normalization can be directly represented as an image. This distance does not correspond to the Euclidean distance; it is a distance measure based on the expected time required to reach the destination.
The Fast Marching algorithm, introduced by Sethian (1996), is a numerical algorithm able to capture the viscosity solution of the Eikonal equation. According to the literature (see References [29,30,31,32,33]), it is a good indicator of obstacle danger. The result is a distance image and, if the speed is constant, it can be seen as the distance function to a set of starting points. For this reason, Fast Marching is applied to all the intermediate points using the previously calculated $Speed$ values. The aim is to obtain one distance image for each intermediate point; this image provides the information needed to reach the corresponding intermediate point from any other point of the image.
Fast Marching finds the fastest path to a destination, not the shortest one. Although it is a time-consuming procedure, an implementation based on Multi-Stencils Fast Marching [34] has been used to reduce the computational cost and to allow its use in real-time environments.
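Fast Marching solves the Eikonal equation on the grid; a plain Dijkstra search gives a rough but easy-to-follow approximation of the same idea, with step cost equal to step length divided by the speed of the entered cell. This sketch is for intuition only, not the Multi-Stencils implementation used here:

```python
import heapq

def distance_map(speed, destination):
    """Dijkstra-style approximation of the travel-time map that Fast
    Marching computes. Each 4-neighbour step costs 1 / speed of the
    entered cell; the destination has travel time 0. Not a true Eikonal
    solver (no stencil interpolation), but it shows how the Speed map
    shapes the Distances matrix.
    """
    h, w = len(speed), len(speed[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    sy, sx = destination
    dist[sy][sx] = 0.0
    heap = [(0.0, sy, sx)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue  # stale heap entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 / speed[ny][nx]
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```

High-speed cells (far from obstacles) contribute tiny step costs, so the resulting "distances" favour routes along the middle of the path rows.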
The final $Distances$ matrix is used to reach the destination point from any other position of the image with an iterative procedure. The procedure carried out to navigate from any point to another is given by Algorithm 12. First, the initial position (where the robotic system is currently placed) is assigned to the path; then, the algorithm iteratively searches for the neighbour with the lowest distance value until the destination is reached ($Distances$ value equal to 0). This algorithm provides the path as a sequence of consecutive points (coordinates of the distance image). Since the geometry of the satellite imagery is well known, the list of image coordinates can be directly translated to real-world coordinates. This information will be used by the low-level navigation system to move the system along the calculated path.
Algorithm 12 $PathPoints$ = NavigationPt2Pt{$InitialPt$,$DestinyPt$,$Speed$} 
$Distances = FastMarching(DestinyPt, Speed)$
$CurrentPoint = InitialPt$
while $Distances(CurrentPoint.x, CurrentPoint.y) > 0$ do
  Search the neighbour with the lowest $Distances$ value: $NewPoint$
  $CurrentPoint = NewPoint$
  Add $CurrentPoint$ to $PathPoints$
end while
Return: $PathPoints$

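The descent loop of Algorithm 12 can be sketched as follows; the helper name is ours, and a monotone distance map with a single zero at the destination (as Fast Marching produces) is assumed:

```python
def descend_distances(distances, start):
    """From `start`, repeatedly step to the 8-neighbour with the lowest
    Distances value until a cell with value 0 (the destination) is
    reached, returning the visited path.
    """
    h, w = len(distances), len(distances[0])
    path = [start]
    y, x = start
    while distances[y][x] > 0:
        neighbours = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    neighbours.append((ny, nx))
        # greedy step: neighbour with the lowest distance value
        y, x = min(neighbours, key=lambda p: distances[p[0]][p[1]])
        path.append((y, x))
    return path
```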
Finally, the full path that the robotic system should follow consists of reaching, sequentially, all the intermediate points previously determined. This strategy is fully described in Algorithm 13, but it is only valid when a single robotic system operates in the grove. When more than one robot is present, an advanced algorithm is needed; Algorithm 14 introduces the strategy for two robots when they are close to each other.
Algorithm 13 CalculatePath{$Position$,$Path$,$MiddlePaths$,$Vertexes$} 
$ListPoints = SequencePoints\{CurrentPosition, MiddlePaths, Vertexes\}$
$Speed = CalculateSpeed(Path)$
$StartingPoint = CurrentPosition$
for i = 1 to NumberOfIntermediatePoints do
  $IntermediatePoint = ListPoints(i)$
  $PathPoints = NavigationPt2Pt\{StartingPoint, IntermediatePoint, Speed\}$
  Add $PathPoints$ to $FinalPath$
  $StartingPoint = IntermediatePoint$
end for
Return: $FinalPath$

Algorithm 14 CalculatePath{$Positions$,$Path$,$MiddlePaths$,$Vertexes$} 
Extract current position of robot 1: $CurrentPosition_{robot1}$
Extract current position of robot 2: $CurrentPosition_{robot2}$
Extract corners from $Vertexes$: $Corners$ = [$Vertexes(1)$, $Vertexes(2)$, $Vertexes(Penultimate)$, $Vertexes(Last)$]
Detect the closest corner and its distance for the two robots:
$Corner_{robot1} = getClosestCorner(CurrentPosition_{robot1}, Corners)$
$distance_{robot1} = getPixelDistance(CurrentPosition_{robot1}, Corner_{robot1})$
$Corner_{robot2} = getClosestCorner(CurrentPosition_{robot2}, Corners)$
$distance_{robot2} = getPixelDistance(CurrentPosition_{robot2}, Corner_{robot2})$
if $distance_{robot1} < distance_{robot2}$ then
  Add $Corner_{robot1}$ to $ListIntermediatePoints_{robot1}$
  $ListPoints_{robot1} = SequencePoints\{Corner_{robot1}, MiddlePaths, Vertexes\}$
  Add $ListPoints_{robot1}$ to $ListIntermediatePoints_{robot1}$
  Detect opposite corner to $Corner_{robot1}$ closest to robot 2: $Opposite_{robot2}$
  Add $Opposite_{robot2}$ to $ListIntermediatePoints_{robot2}$
  $ListPoints_{robot2} = SequencePoints\{Opposite_{robot2}, MiddlePaths, Vertexes\}$
  Add $ListPoints_{robot2}$ to $ListIntermediatePoints_{robot2}$
else
  Add $Corner_{robot2}$ to $ListIntermediatePoints_{robot2}$
  $ListPoints_{robot2} = SequencePoints\{Corner_{robot2}, MiddlePaths, Vertexes\}$
  Add $ListPoints_{robot2}$ to $ListIntermediatePoints_{robot2}$
  Detect opposite corner to $Corner_{robot2}$ closest to robot 1: $Opposite_{robot1}$
  Add $Opposite_{robot1}$ to $ListIntermediatePoints_{robot1}$
  $ListPoints_{robot1} = SequencePoints\{Opposite_{robot1}, MiddlePaths, Vertexes\}$
  Add $ListPoints_{robot1}$ to $ListIntermediatePoints_{robot1}$
end if
$Speed = CalculateSpeed(Path)$
$StartingPoint = CurrentPosition_{robot1}$
for i = 1 to NumberOfIntermediatePoints do
  $IntermediatePoint = ListIntermediatePoints_{robot1}(i)$
  $PathPoints = NavigationPt2Pt\{StartingPoint, IntermediatePoint, Speed\}$
  Add $PathPoints$ to $FinalPath_{robot1}$
  $StartingPoint = IntermediatePoint$
end for
$StartingPoint = CurrentPosition_{robot2}$
for i = 1 to NumberOfIntermediatePoints do
  $IntermediatePoint = ListIntermediatePoints_{robot2}(i)$
  $PathPoints = NavigationPt2Pt\{StartingPoint, IntermediatePoint, Speed\}$
  Add $PathPoints$ to $FinalPath_{robot2}$
  $StartingPoint = IntermediatePoint$
end for
Return: $FinalPath_{robot1}$ and $FinalPath_{robot2}$
