Article

Combining Satellite Images and Cadastral Information for Outdoor Autonomous Mapping and Navigation: A Proof-of-Concept Study in Citric Groves

Institute of New Imaging Technologies, Universitat Jaume I, 12071 Castellón, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2019, 12(9), 193; https://doi.org/10.3390/a12090193
Submission received: 28 June 2019 / Revised: 7 September 2019 / Accepted: 8 September 2019 / Published: 11 September 2019

Abstract

The development of robotic applications for agricultural environments poses several problems which are not present in robotic systems for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve the navigation tasks of robots which operate in agricultural environments. Concretely, the paper focuses on the autonomous mapping of agricultural parcels (i.e., an orange grove). The map created by the system will be used to help the robots navigate within the parcel to perform maintenance tasks such as weed removal, harvest, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates where the robotic system is placed. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a format readable by the robot. Finally, path planning is performed by means of Fast Marching techniques for a single robot or a team of two robots. This paper introduces the proof-of-concept and describes all the necessary steps and algorithms to obtain the path planning from just the initial coordinates of the robot.

1. Introduction

Agriculture is an important industry for the national economies of Spain, Italy, France, Switzerland, and other countries around the world. In the case of Spain, the primary forms of property holding are large estates and tiny land plots. According to the 2009 National Agricultural Census (http://www.ine.es/jaxi/Tabla.htm?path=/t01/p042/ccaa_prov/l0/&file=010101.px&L=0), 64% of the agricultural terrain was held in properties of 5 or more hectares. Those farms are located in zones mainly devoted to permanent crops: orchards, olive groves, citric groves, and vineyards. However, in the area of Valencia (Spain) alone, there are more than 100,000 small farms with an exploitation area of less than 1 hectare. This represents 50% of the total number of farms located in Valencia and 20% of the small farms located in Spain.
The agricultural industry is suffering from diverse problems which are causing the progressive abandonment of fields, groves, and orchards in Spain. The main problems are the ageing of the agricultural active population, the low incorporation of young people, and the financial crisis suffered in early 2010. The bitter campaign for the Spanish citrus sector in 2019, where the average price has been around 35% lower than in previous campaigns, is accelerating the abandonment of small citrus groves in Valencia. This leads to an undesirable scenario where active and abandoned farms coexist. A small active grove next to abandoned ones is more prone to being affected by pests or, even, wildfires. Thus, more effective maintenance tasks are needed.
One possible solution to minimize costs is to automate maintenance and surveillance tasks. This can be done by teams of robots in large farms. In most large farms, the structure and the distribution of elements (e.g., citric trees) are homogeneous and well known. In contrast, in those areas where the small farms are located, the structure and distribution depend on the owners (see Figure 1), who usually are not technology-friendly and do not have digital information about their farms. Thus, to ease the operation of maintenance robots in such kinds of farms, a procedure to automatically retrieve information about where they have to operate is needed.
Remote sensing can be used to extract information about the farm. Blazquez et al. [1] introduced a manual methodology for citrus grove mapping using infrared aerial imagery. However, access to aerial imagery is often limited, and dedicated satellite imagery is not widely used for navigation tasks. As the authors of Reference [2] stated, multispectral remote imagery can provide accurate weed maps, but the operating costs might be prohibitive.
This paper introduces a proof-of-concept where we target the combination of public satellite imagery with parcel information provided by governmental institutions. In particular, the system uses the maps and orthophotos provided by SIGPAC (Sistema de Información Geográfica de parcelas agrícolas) to minimize operational costs. The approach proposed in the present paper is similar to the system introduced in Reference [3], with some differences. First, it uses public open data. Second, our system adopts only a binarization of the remote image instead of using classifiers because determining the kind of tree or grove is not critical. Third, the orange tree lines are also determined in the proposed mapping.
The remainder of this paper is organized as follows. Section 2 reviews related and state-of-the-art works in this field. Section 3 describes the algorithms to extract the satellite imagery and the mask of the parcel where the path planning is to be determined, as well as the steps required to autonomously determine the path planning from the available information (reduced satellite imagery, cadastral information, and the current position of the autonomous robots). Although this work lacks experimental validation, Section 4 shows an example of the intermediate and final outputs on a real case as a proof of concept. Finally, a few cases are introduced to show the potential of the proposed path planning generated for a single robot and a team composed of two robots.

2. State of the Art

As already mentioned, the agricultural sector is one of the least technologically developed. Nonetheless, some robotic systems have already been proposed for harvesting [4,5,6,7] and for weed control/elimination [8,9,10,11]. All these tasks have a common element: the robotic systems should navigate autonomously.
A major part of the related work on autonomous mapping is based on remote imagery. An interesting review can be found in Reference [2]. However, as the authors stated, hyperspectral images produce highly accurate maps, but this technology might not be portable because of the prohibitive operating costs. Similarly, other approaches (e.g., imagery provided by unmanned aerial vehicles (UAVs) [12] or airborne high spatial resolution hyperspectral images combined with Light Detection and Ranging (LiDAR) data [13]) also have high operating costs. Amorós López et al. [3] introduced a classification system to perform mapping tasks on different types of groves using airborne colour images and orthophotos.
A semi-autonomous navigation system controlled by a human expert operator was introduced by Stentz et al. [14]. This navigation was proposed to supervise the path planning and navigation of a fleet of tractors using the information provided by vision sensors. During its execution, the system determines the route segment where each tractor should drive through to perform crop spraying. Each segment is assigned by tracking positions and speeds previously stored. When an obstacle is detected, the tractor stops and the human operator decides the actions to be carried out.
A sensor fusion system using a fuzzy-logic-enhanced Kalman Filter was introduced by Subramanian et al. [15]. The paper aimed to fuse the information of different sensor systems for pathfinding. They used machine vision, LiDAR, an Inertial Measurement Unit (IMU), and a speedometer to guide an autonomous tractor through citrus grove alleyways. The system was successfully tested on curved paths at different speeds.
In the research introduced by Rankin et al. [16], a set of binary detectors were implemented to detect obstacles, tree trunks, and tree lines, among other entities for off-road autonomous navigation by an unmanned ground vehicle (UGV). A compact terrain map was built from each frame of stereo images. According to the authors, the detectors label terrain as either traversable or non-traversable. Then, a cost analysis assigns a cost to driving over a discrete patch of terrain.
Bakker et al. [17] introduced a Real Time Kinematic Differential Global Positioning System-based autonomous field navigation system including automated headland turns. It was developed for crop-row mapping using machine vision. Concretely, they used a colour camera mounted 0.9 m above the ground with an inclination of 37° with respect to the vertical. Results from the field experiments showed that the system can guide an autonomous robot along a defined path in a crop row with centimetre precision.
A navigation system for outdoor mobile robots using GPS for global positioning and GPRS for communication was introduced in Reference [18]. A server–client architecture was proposed and its hardware, communication software, and movement algorithms were described. The results on two typical scenarios showed the robot manoeuvring and depicted the behaviour of the novel implemented algorithms.
Rovira-Más [19] proposed a framework to build globally referenced navigation grids by combining global positioning and stereo vision. First, local grids are created in real time right after the stereo images have been captured by the vehicle in the field. In this way, perception data are effectively condensed in regular grids. Then, global referencing allows the discontinuous appendage of data to complete and update the navigation grids over multiple mapping sessions. With this, global references are extracted for every cell of the grid. This methodology takes advantage of both local and global information, so navigation benefits greatly. Finally, the whole system was successfully validated in a commercial vineyard.
In Reference [20], the authors proposed a variable field-of-view machine vision system to control a small agricultural robot that navigates between rows in cornfields. In the prototype, the robot was guided using a camera with pitch and yaw motion control, which moved to accommodate the near, far, and lateral field-of-view modes.
The approach we propose differs from others since satellite imagery and a camera placed on the autonomous robot (see References [21,22]) are combined for advanced navigation. The GPS sensor and satellite imagery are used (1) to acquire global knowledge of the groves where the robots should navigate and (2) to autonomously generate a first path planning. This first path planning contains the route where each robotic system should navigate. Moreover, the local vision system of the autonomous robot, which is composed of a colour Charge Coupled Device (CCD) camera with auto iris control, is used to add real-time information and to allow the robot to navigate centred in the path (see Reference [21]). Although the first path planning already introduces a route to navigate centred in the paths, local vision provides more precise information. We can benefit from combining global and local information: we can establish a first path planning from remote imagery, which can then be updated with local imagery captured in real time.

3. Autonomous Mapping and Navigation with Satellite Imagery and Cadastral Information

This section introduces the steps necessary to obtain the path planning for an autonomous robot (or team of robots) using the current location of the robots, satellite imagery, and cadastral information. For this purpose, we identified two main stages.
To allow the interaction among all the modules and sub-modules, a proper control architecture is needed. In particular, we used Agriture [23] to integrate all the modules into the system and to enhance interoperability.

3.1. First Stage: Development of the Autonomous Mapping System

The proposed autonomous mapping module is composed of two main sub-modules:
  • Acquisition sub-module: This sub-module is in charge of acquiring the images from online databases to generate the final map of the parcel. Concretely, the system will download the images from a public administrative database, SIGPAC, using the information provided by the GPS sensor included in the robots. In other related experiments, we installed a submeter precision GPS in the autonomous robots (GPS Pathfinder ProXT Receiver) [22]. The location accuracy obtained in those cases encourages us to integrate that GPS in the current system. However, the GPS integration is out of this paper’s scope, since the main objective of this paper is to introduce the algorithms to combine satellite and cadastral images and to show a proof-of-concept. This sub-module provides two images to the “map creation module”: the satellite image of the parcel and the image with parcel borders.
  • Map creation sub-module: This sub-module will be in charge of extracting all the information from the images: paths, orange trees, borders, and obstacles. With the extracted information, the map (path planning) which the robots should follow will be generated. The initial strategy to efficiently cover the whole parcel with the robots is also defined in this sub-module. The resolution of the maps and satellite imagery is set to 0.125 m per pixel. This resolution was easily reached with both services (satellite imagery and cadastral imagery), it was low enough to represent the robot dimensions (around 50 by 80 cm), and the computational burden was not too demanding. With the new Galileo services, higher resolution images can be obtained (with lower cm-per-pixel rates). Although more precise paths could be obtained, the computational cost of Fast Marching to generate the paths would also increase considerably.

3.1.1. Acquisition Sub-Module

This sub-module is in charge of obtaining the two images which are needed to create the map of the parcel from the current position of the autonomous system. Its description is detailed in Algorithm 1, where CurrentPosition represents the current position of the robotic system, Resolution is the resolution of the acquired imagery, and Radius stands for the radius of the area.
Algorithm 1 ObtainImages{CurrentPosition, Resolution, Radius}
  • FullSatImage = ObtainSatImage{CurrentPosition, Resolution, Radius}
  • Borders = ObtainBorders{CurrentPosition, Resolution, Radius}
  • Return: FullSatImage, Borders
The output generated in this sub-module includes the full remote image of the area, FullSatImage, and the borders of the parcels, Borders, which will be processed in the "Map Creation module".
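As an illustration of how this sub-module could be implemented, the following sketch builds a square bounding box around the robot position and requests the two rasters through a WMS-style GetMap call. It is only a minimal example: the endpoint URL, the layer names, and the exact request parameters are placeholders and must be taken from the documentation of the actual SIGPAC service.

import requests

def bbox_around(easting, northing, radius_m):
    """Square bounding box (EPSG:32630, metres) centred on the robot position."""
    return (easting - radius_m, northing - radius_m,
            easting + radius_m, northing + radius_m)

def obtain_image(endpoint, layer, bbox, resolution_m_per_px, out_file):
    """Download one georeferenced raster through a WMS-like GetMap request.

    The endpoint URL and layer names below are placeholders; the actual SIGPAC
    request parameters must be taken from the capabilities of the real service.
    """
    width = int((bbox[2] - bbox[0]) / resolution_m_per_px)
    height = int((bbox[3] - bbox[1]) / resolution_m_per_px)
    params = {
        "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.3.0",
        "LAYERS": layer, "CRS": "EPSG:32630",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    response = requests.get(endpoint, params=params, timeout=30)
    response.raise_for_status()
    with open(out_file, "wb") as f:
        f.write(response.content)

# Example usage with the 0.125 m/px resolution used in this work (placeholders):
# bbox = bbox_around(easting=753000.0, northing=4430000.0, radius_m=100.0)
# obtain_image("https://<sigpac-wms-endpoint>", "<orthophoto-layer>", bbox, 0.125, "FullSatImage.png")
# obtain_image("https://<sigpac-wms-endpoint>", "<parcel-border-layer>", bbox, 0.125, "Borders.png")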

3.1.2. Map Creation Sub-Module

This sub-module is in charge of creating the map of the parcel from the two images obtained in the previous sub-module. The map contains information about the parcel: tree rows, paths, and other elements through which the robot cannot navigate (i.e., obstacles). Its description can be found in Algorithm 2.
Algorithm 2 GenerateMap{CurrentPosition, FullSatImage, Borders}
  • FullParcelMask = GenerateParcelMask{CurrentPosition, Borders}
  • [ParcelMask, SatImage] = CropImagery{FullParcelMask, FullSatImage}
  • [Path, Trees, Obstacles] = GenerateMasks{CurrentPosition, ParcelMask, SatImage}
The first step consists of determining the full parcel mask from the borders previously acquired. With the robot coordinates and the image containing the borders, this procedure generates a binary mask that represents the current parcel where the robot is located. This procedure is described in Algorithm 3.
Algorithm 3 GenerateParcelMask{CurrentPosition, Borders}
  • Determine the position CurrentPosition in Borders: Posx and Posy
  • Convert Borders to grayscale: BordersG
  • Binarize BordersG: BordersMask
  • Dilate BordersMask: DilatedBorder
  • Flood-fill DilatedBorder at (Posx, Posy): DilatedBorderFloodfilled
  • Erode DilatedBorderFloodfilled: ErodedBorderFloodfilled
  • Dilate ErodedBorderFloodfilled: FullParcelMask
  • Return: FullParcelMask
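A minimal sketch of Algorithm 3 using OpenCV and NumPy is shown below. It assumes that the border pixels become white after binarization and uses the kernel size and iteration counts reported in Section 4.1 (three dilations, a flood fill, six erosions, and three final dilations with a 5 × 5 kernel); the binarization threshold is an assumption and may need tuning for other imagery.

import cv2
import numpy as np

def generate_parcel_mask(borders_bgr, pos_x, pos_y):
    """Sketch of Algorithm 3: binary mask of the parcel containing (pos_x, pos_y).

    pos_x / pos_y follow the OpenCV convention (column, row); the threshold and
    the kernel/iteration values are the ones reported in Section 4.1.
    """
    kernel = np.ones((5, 5), np.uint8)

    # Grayscale + binarization (removes watermarks, keeps the purple borders as white)
    gray = cv2.cvtColor(borders_bgr, cv2.COLOR_BGR2GRAY)
    _, borders_mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

    # Dilate to close small gaps in the parcel border lines
    dilated = cv2.dilate(borders_mask, kernel, iterations=3)

    # Flood-fill from the robot position to mark the current parcel in white
    flood = dilated.copy()
    ff_mask = np.zeros((flood.shape[0] + 2, flood.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, ff_mask, (pos_x, pos_y), 255)

    # Erode to revert the dilation and remove the border lines themselves,
    # then dilate again to recover the real extent of the parcel
    eroded = cv2.erode(flood, kernel, iterations=6)
    full_parcel_mask = cv2.dilate(eroded, kernel, iterations=3)
    return full_parcel_mask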
Once the parcel mask is determined from the parcel image, the areas outside the parcel are removed from the satellite image and from the parcel mask itself. To avoid underrepresentation of the parcel, a small offset was added to the mask. This step is performed to reduce the imagery size for further processing with advanced techniques. This procedure is detailed in Algorithm 4.
Algorithm 4 CropImagery{FullParcelMask, FullSatImage}
  • Determine the real position of the upper-left corner of FullParcelMask
  • Determine the real position of the upper-left corner of FullSatImage
  • Determine the FullSatImage offset in pixels with respect to FullParcelMask
  • Determine the parcel's bounding box:
  •     Determine the first and last rows of FullParcelMask that are in the parcel
  •     Determine the first and last columns of FullParcelMask that are in the parcel
  • Crop FullParcelMask directly: ParcelMask
  • Crop FullSatImage according to the offset values: SatImage
  • Return: ParcelMask and SatImage
Since the location of the pixels in the real world is known using the WGS84 / UTM zone 30N coordinate system (EPSG:32630), the offset between the satellite image and the parcel mask can be directly computed. Then, it is only necessary to determine the bounding box surrounding the parcel in the parcel mask to crop both images.
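The offset and cropping step of Algorithm 4 can be sketched as follows, assuming that both rasters are georeferenced in EPSG:32630, that they share the 0.125 m per pixel resolution, and that the coordinates of their upper-left corners are known; the function and variable names are illustrative.

import numpy as np

def crop_imagery(full_parcel_mask, full_sat_image,
                 mask_upper_left, sat_upper_left, resolution=0.125):
    """Sketch of Algorithm 4: crop both rasters to the parcel bounding box.

    mask_upper_left / sat_upper_left are (easting, northing) of the upper-left
    corners in EPSG:32630; both rasters are assumed to share the resolution.
    """
    # Offset of the satellite image with respect to the parcel mask, in pixels
    # (northing decreases as the row index increases)
    off_col = int(round((mask_upper_left[0] - sat_upper_left[0]) / resolution))
    off_row = int(round((sat_upper_left[1] - mask_upper_left[1]) / resolution))

    # Bounding box of the parcel: first/last row and column that are in the parcel
    rows = np.any(full_parcel_mask > 0, axis=1).nonzero()[0]
    cols = np.any(full_parcel_mask > 0, axis=0).nonzero()[0]
    r0, r1 = rows[0], rows[-1] + 1
    c0, c1 = cols[0], cols[-1] + 1

    parcel_mask = full_parcel_mask[r0:r1, c0:c1]
    sat_image = full_sat_image[off_row + r0:off_row + r1, off_col + c0:off_col + c1]
    return parcel_mask, sat_image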
At this point, the reduced satellite map needs to be processed to distinguish between the path and the obstacles. We analyzed different satellite images from groves located in the province of Castellón in Spain, and the images were clear with little presence of shadows. Therefore, we set the binarization threshold to 112 in our work. Although we applied a manual threshold, a dynamic thresholding binarization could be applied to obtain a more robust method to distinguish the different elements in the satellite image in case shadows were present. Algorithm 5 has been used to process the reduced satellite map and to obtain the following three masks or binary images:
  • Path—a binary mask that contains the path where the robot can navigate;
  • Trees—a binary mask that contains the area assigned to orange trees;
  • Obstacles—a binary mask that contains those pixels which are considered obstacles; this mask corresponds to the inverse of the path.
Algorithm 5 GenerateMasks{ParcelMask and SatImage}
  • Convert SatImage to grayscale: SatImageG
  • Extract the min and max values (MinValue, MaxValue) of the parcel pixels in SatImageG
  • Normalize the SatImageG values:
  •    SatImageGN(x, y) = 255 · (SatImageG(x, y) − MinValue) / (MaxValue − MinValue)
  • Generate B&W masks:
  •    Path(x, y) = 1 if SatImageGN(x, y) > 112 and ParcelMask(x, y); 0 otherwise
  •    Trees(x, y) = 1 if SatImageGN(x, y) ≤ 112 and ParcelMask(x, y); 0 otherwise
  • Remove the path pixels not connected to CurrentPosition in Path
  • Generate the Obstacles mask:
  •    Obstacles(x, y) = 1 − Path(x, y)
  • Return: Path, Trees, and Obstacles
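A compact NumPy/SciPy sketch of Algorithm 5 is given below. The fixed threshold of 112 is the one used in this work, and the connected-component filter keeps only the path region connected to the robot position; the helper names are illustrative.

import numpy as np
from scipy import ndimage

def generate_masks(parcel_mask, sat_gray, robot_row, robot_col, threshold=112):
    """Sketch of Algorithm 5: Path / Trees / Obstacles masks from the cropped imagery.

    parcel_mask is boolean, sat_gray is the grayscale satellite crop, and
    (robot_row, robot_col) is the robot position in pixel coordinates.
    """
    inside = parcel_mask > 0
    vmin = sat_gray[inside].min()
    vmax = sat_gray[inside].max()

    # Normalize the parcel pixels to the [0, 255] range
    norm = np.zeros(sat_gray.shape, dtype=np.float64)
    norm[inside] = 255.0 * (sat_gray[inside].astype(np.float64) - vmin) / (vmax - vmin)

    path = inside & (norm > threshold)    # bright soil -> navigable path
    trees = inside & (norm <= threshold)  # dark canopy -> orange tree area

    # Keep only the path pixels connected to the robot position
    labels, _ = ndimage.label(path)
    path = labels == labels[robot_row, robot_col]   # assumes the robot stands on a path pixel

    obstacles = ~path                     # inverse of the path
    return path, trees, obstacles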

3.2. Second Stage: Path Planning

This module is used to determine the final path that the robotic system or team of robots should follow. The principal task of path planning in autonomous robotic systems is to search for a collision-free path that covers the whole grove. This section is devoted to describing all the steps necessary to generate the final path (see Algorithm 6) using as input the three masks provided by Algorithm 5.
To perform the main navigation task, the navigation module has been divided into five different sub-modules:
  • Determining the tree lines
  • Determining the reference points
  • Determining the intermediate points where the robot should navigate
  • Path planning through the intermediate points
  • Real navigation through the desired path
First of all, the tree lines are determined. From these lines and the Path binary map, the reference points of the grove are determined. Later, the intermediate points are selected from the reference points. With the intermediate points, a fast, collision-free path is generated to cover the whole grove. Finally, the robotic systems perform the real navigation using the generated path.
Algorithm 6 PathPlanning{CurrentPosition, Trees, Path}
  • SortedTreeLines = DetectLines{Trees, Path}
  • [Vertexes, MiddlePaths] = ReferencePoints{Path, SortedTreeLines}
  • FinalPath = CalculatePath{CurrentPosition, Path, MiddlePaths, Vertexes}
  • Return: FinalPath

3.2.1. Detecting Tree Lines

The organization of citric groves is well defined, as can be seen in the artificial composition depicted in Figure 2; the trees are organized in rows separated by a relatively wide path to optimize treatment and fruit collection. The first step of the navigation module consists of detecting the tree lines.
The Obstacles binary map (Obstacles in Algorithm 5) contains the pixels located inside the parcel that correspond to trees. For this reason, the rows containing trees are extracted by using the Hough Transform method (see Reference [24]). Moreover, some constraints have been added to this technique to avoid selecting the same tree line several times. In this procedure, the Hesse standard form (Equation (1)) is used to represent lines with two parameters: θ (angle) and ρ (displacement). Each element of the resulting matrix of the Hough Transform contains a number of votes which depends on the number of tree pixels that lie on the corresponding line.
y · sin(θpath) + x · cos(θpath) − ρpath = 0
According to the structure of the parcels, the main tree rows can be represented with lines. We applied Algorithm 7 to detect the main tree rows (rows of citric trees). First, the most important lines, the ones with the highest number of votes, are extracted. The first extracted line (the line with the highest number of votes) is considered the "main" line, and its parameters (θ1, ρ1, and votes1) are stored. Then, those lines that have at least 0.20 · votes1 votes and whose angle is within θ1 ± 1.5° are also extracted. The first threshold (Γ1 = 0.20) has been used to filter out the small lines that do not represent full orange rows, whereas the second threshold (Γ2 = 1.5°) has been used to select those orange rows parallel to the main line. We have determined the two threshold values according to the characteristics of the analyzed groves: the target groves have parallel orange rows with similar lengths.
Algorithm 7 DetectLines{Trees, Path}
  • Apply Hough to Trees: H = Hough(Trees)
  • Extract the line with the highest value from H: valuemain = H(ρmain, θmain)
  • Set the threshold value: Γ1 = 0.20 · valuemain
  • Limit H to angles close to the main angle (θmain ± 1.5°): HL
  • lines = true
  • Store the main line parameters in StoredLines: ρmain, θmain, valuemain
  • while lines do
  •  Extract the candidate line with the highest value from HL: valuec = HL(ρc, θc)
  •  Remove the value from the matrix: HL(ρc, θc) = 0
  • if (valuec > Γ1) then
  •   Check, for each previously extracted line i, whether the distance exceeds the threshold:
  •       |ρc − ρi| > 12 · (1 + |θc − θi|)
  •   if the distance between the candidate and the stored lines is never lower than the threshold then
  •    Store the line parameters in StoredLines: ρc, θc, valuec
  •   end if
  • else
  •   lines = false
  • end if
  • end while
  • Sort the stored lines: SortedTreeLines
  • Return: SortedTreeLines
Furthermore, to avoid detecting the same tree row several times, each extracted line cannot be close to any previously extracted line. We used Equation (2) to denote the distance between a candidate line (c) and a previously extracted line (i), since their angles are similar (±1.5°). If this distance is higher than a threshold value (Γ3), the candidate line is extracted. We use Equation (3) to calculate the threshold value according to the difference between the angles of the two lines.
distancec,i = |ρc − ρi|
Γ3 = 12 · (1 + |θc − θi|)
Hough Transform has been commonly used in the literature related to detection tasks in agricultural environments. Recent research shows that it is useful to process local imagery [25,26] and remote imagery [27,28].
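The following sketch illustrates the selection strategy of Algorithm 7 using the Hough accumulator returned by scikit-image, which exposes the vote counts needed for the Γ1 test; the angular sampling step and the conversion to degrees are implementation choices not specified in the paper.

import numpy as np
from skimage.transform import hough_line

def detect_tree_lines(trees_mask, vote_ratio=0.20, angle_tol_deg=1.5):
    """Sketch of Algorithm 7: extract the parallel tree rows from the Trees mask."""
    angles = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
    h, thetas, rhos = hough_line(trees_mask.astype(np.uint8), theta=angles)

    # Main line: accumulator cell with the highest number of votes
    r_idx, t_idx = np.unravel_index(np.argmax(h), h.shape)
    votes_main = h[r_idx, t_idx]
    theta_main_deg = np.rad2deg(thetas[t_idx])
    stored = [(rhos[r_idx], theta_main_deg, votes_main)]

    # Only keep accumulator cells with a similar angle (parallel rows, Gamma2)
    hl = h.copy()
    angle_ok = np.abs(np.rad2deg(thetas) - theta_main_deg) <= angle_tol_deg
    hl[:, ~angle_ok] = 0
    hl[r_idx, t_idx] = 0

    while True:
        r_idx, t_idx = np.unravel_index(np.argmax(hl), hl.shape)
        votes_c = hl[r_idx, t_idx]
        hl[r_idx, t_idx] = 0
        if votes_c <= vote_ratio * votes_main:
            break                                    # remaining lines are too short (Gamma1)
        rho_c, theta_c = rhos[r_idx], np.rad2deg(thetas[t_idx])
        # Equation (3): reject candidates too close to an already stored row
        far_enough = all(abs(rho_c - rho_i) > 12.0 * (1.0 + abs(theta_c - theta_i))
                         for rho_i, theta_i, _ in stored)
        if far_enough:
            stored.append((rho_c, theta_c, votes_c))

    # Sort by displacement so the rows follow their real order in the grove
    return sorted(stored, key=lambda line: line[0])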

3.2.2. Determining the Reference Points

Reference or control points are well-known points and parts of the grove, e.g., the vertexes of the orange tree lines or the middle of the paths where farmers walk. A set of reference points will be used as intermediate points of the entire path that a robotic system should follow.
Firstly, the vertexes of the orange tree rows are extracted. For each extracted line, the vertexes correspond to the first and last points that belong to the path according to the Path binary map.
Then, the Hough transform is applied again to detect the two border paths perpendicular to the orange rows. The bisector of these two lines is used to obtain additional reference points. These last points correspond to the centre of each path row (the path between two orange rows). Concretely, the intersection between the bisector line and the Path binary map is used to calculate the points. After obtaining the intersection, the skeleton algorithm is applied to obtain the precise coordinates of the new reference points.
However, under certain conditions, two central points have been detected for the same row, e.g., if obstacles are present at the centre of the path, the skeleton procedure can provide two (or more) middle points between two orange tree rows. In those cases, only the first middle point is selected for each path row.
Finally, all the reference points can be considered basic features of the orange grove since a subset of them will be used as intermediate points of the collision-free path that a robotic system should follow to efficiently navigate through the orange grove. For that purpose, we applied Algorithm 8 to extract all the possible middle points in a parcel.
Algorithm 8 ReferencePoints{Path, SortedTreeLines}
  • Vertexes = Extract the reference points from SortedTreeLines
  • MiddlePaths = Extract the reference points that belong to the middle of the path rows
  • Return: Vertexes and MiddlePaths

3.2.3. Determining the Navigation Route from the Reference Points

In this stage, the set of intermediate points is extracted from the reference points. These correspond to the points through which the robot should navigate. The main idea is to first move the robot to the closest corner; then, the robot navigates through the grove (path row by path row), alternating the direction of navigation (i.e., a zig-zag movement).
Firstly, the corner of the parcel closest to the coordinates provided by the robotic system is determined using the Euclidean distance. The corners are the vertexes of the lines extracted from the first and last previously detected rows containing trees. The idea is to move the robot from one side of the parcel to the opposite side through the path rows. The sequence of intermediate points is generated starting from the first point (the corner closest to the initial position of the robot), with the opposite corner being the destination. The intermediate points are calculated using Algorithm 9.
Finally, this stage does not directly perform the path planning; instead, Algorithm 10 is used to determine the sequence of individual points. Those points will be the basis for the next stage, where the paths for each robotic system that composes the team are determined.
Algorithm 9 IntermediatePoints{CurrentPosition, MiddlePaths, Vertexes}
  • Extract the corners from Vertexes:
  •    Corners = [Vertexes(1), Vertexes(2), Vertexes(Penultimate), Vertexes(Last)]
  • Detect the corner closest to CurrentPosition: Corner
  • Add Corner to ListIntermediatePoints
  • ListPoints = SequencePoints{Corner, MiddlePaths, Vertexes}
  • Add ListPoints to ListIntermediatePoints
  • Return: ListIntermediatePoints
Algorithm 10 SequencePoints{Corner, MiddlePaths, Vertexes}
  • Let Npoints be the number of points in MiddlePaths
  • if Corner is Vertexes(1) then
  • for i = 1 to (Npoints − 1) do
  •   Add MiddlePaths(i) to ListPoints
  •   Add Vertexes(i · 2 − module(i + 1, 2)) to ListPoints
  • end for
  •  Add MiddlePaths(Npoints) to ListPoints
  •  Add Vertexes(last) to ListPoints
  • end if
  • if Corner is Vertexes(2) then
  • for i = 1 to (Npoints − 1) do
  •   Add MiddlePaths(i) to ListPoints
  •   Add Vertexes(i · 2 − module(i, 2)) to ListPoints
  • end for
  •  Add MiddlePaths(Npoints) to ListPoints
  •  Add Vertexes(penultimate) to ListPoints
  • end if
  • if Corner is Vertexes(penultimate) then
  • for i = 1 to (Npoints − 1) do
  •    j = Npoints + 1 − i
  •   Add MiddlePaths(j) to ListPoints
  •   Add Vertexes((j − 1) · 2 − module(i, 2)) to ListPoints
  • end for
  •  Add MiddlePaths(1) to ListPoints
  •  Add Vertexes(2) to ListPoints
  • end if
  • if Corner is Vertexes(last) then
  • for i = 1 to (Npoints − 1) do
  •    j = Npoints + 1 − i
  •   Add MiddlePaths(j) to ListPoints
  •   Add Vertexes((j − 1) · 2 − module(i + 1, 2)) to ListPoints
  • end for
  •  Add MiddlePaths(1) to ListPoints
  •  Add Vertexes(1) to ListPoints
  • end if
  • Return: ListPoints
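As an illustration, the following sketch transcribes the Vertexes(1) branch of Algorithm 10 in Python, assuming 1-indexed access as in the pseudocode and that the two vertexes of each tree row are stored consecutively, in the order produced by Algorithm 8; the remaining three branches follow the same pattern.

def sequence_points_from_first_corner(middle_paths, vertexes):
    """Sketch of the Vertexes(1) branch of Algorithm 10 (1-indexed as in the paper).

    middle_paths and vertexes are lists of (x, y) points; their sizes are assumed
    to be consistent with the reference points extracted by Algorithm 8.
    """
    def vertex(i):          # 1-indexed access, as in the pseudocode
        return vertexes[i - 1]

    def middle(i):
        return middle_paths[i - 1]

    n_points = len(middle_paths)
    list_points = []
    for i in range(1, n_points):                         # i = 1 .. Npoints - 1
        list_points.append(middle(i))                    # middle of path row i
        list_points.append(vertex(i * 2 - (i + 1) % 2))  # turning vertex for row i
    list_points.append(middle(n_points))                 # last path row
    list_points.append(vertex(len(vertexes)))            # Vertexes(last)
    return list_points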

3.2.4. Path Planning Using Processed Satellite Imagery

Once the initial and intermediate points and their sequence are known, the path planning map is determined. This means calculating the full path that the robotic system should follow.
Firstly, the Speed map is calculated with Equation (4). This map is related to the distance to the obstacles (where the robot cannot navigate). The novel equation introduces a parameter, α, to penalize those parts of the path close to obstacles and to give more priority to those areas which are far from obstacles. Theoretically, the robotic system will navigate faster in those areas far from obstacles due to the low probability of collision with the obstacles. Therefore, the equation has been chosen to penalize the robotic system for navigating close to obstacles.
Speed(x, y) = 100 + distance(x, y)^α, if distance(x, y) > 0
Speed(x, y) = 1, otherwise
where distance refers to the pixel-distance from the point (x, y) in the image to the closest obstacle. The final Speed value is set to 1 when the pixel corresponds to an obstacle, whereas it has a value higher than 100 in those pixels which correspond to the class "path".
The final procedure carried out to calculate the Speed image is detailed in Algorithm 11.
Algorithm 11 Speed = CalculateSpeed{Path, SortedTreeLines}
  • Add the detected tree lines to the path binary mask Path
  • Select the non-path points with at least one neighbour belonging to the path: ObstaclePoints
  • Set the initial Speed values: Speed(x, y) = 1, ∀ x ∈ [1, TotalWidth], ∀ y ∈ [1, TotalHeight]
  • for x = 1 to TotalWidth do
  • for y = 1 to TotalHeight do
  •   if Path(x, y) belongs to the path then
  •    Calculate the Euclidean distance to all ObstaclePoints
  •    Select the lowest distance value: mindistance
  •    Speed(x, y) = 100 + mindistance^3
  •   end if
  • end for
  • end for
  • Return: Speed
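Instead of the explicit double loop of Algorithm 11, the same Speed map can be obtained with a Euclidean distance transform, as in the following sketch; it assumes that the detected tree lines have already been rasterized into the Path mask as obstacles.

import numpy as np
from scipy.ndimage import distance_transform_edt

def calculate_speed(path_mask, alpha=3):
    """Sketch of Equation (4): Speed map derived from the Path binary mask.

    path_mask is True/1 on navigable pixels and False/0 on obstacles; the
    detected tree lines are assumed to be already rasterized as obstacles.
    """
    # Euclidean distance (in pixels) from each path pixel to the closest obstacle
    distance = distance_transform_edt(path_mask)
    speed = np.ones_like(distance)          # obstacles keep Speed = 1
    on_path = distance > 0
    speed[on_path] = 100.0 + distance[on_path] ** alpha
    return speed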
Then, the Fast Marching algorithm is used to calculate the Distances matrix. It contains the "distance" to a destination from any point of the satellite image. The values are calculated using the Speed values and the target/destination coordinates in the satellite image. The result of this procedure is a Distances matrix whose normalization can be directly represented as an image. This distance does not correspond to the Euclidean distance; it corresponds to a distance measurement based on the expected time required to reach the destination.
The Fast Marching algorithm, introduced by Sethian (1996), is a numerical algorithm that is able to capture the viscosity solution of the Eikonal equation. According to the literature (see References [29,30,31,32,33]), it is a good indicator of obstacle danger. The result is a distance image; if the speed is constant, it can be seen as the distance function to a set of starting points. For this reason, Fast Marching is applied to all the intermediate points using the previously calculated Speed values. The aim is to obtain one distance image for each intermediate point; this image provides the information needed to reach the corresponding intermediate point from any other point of the image.
Fast Marching is used to obtain the fastest path to a destination, not the shortest one. Although it is a time-consuming procedure, an implementation based on MultiStencils Fast Marching [34] has been used to reduce the computational cost and to allow its use in real-time environments.
The final Distances matrix obtained is used to reach the destination point from any other position of the image with an iterative procedure. The procedure carried out to navigate from any point to another given point is detailed in Algorithm 12. First, the initial position (the position where the robotic system is currently placed) is added to the path; then, the algorithm iteratively seeks the neighbour with the lowest distance value until the destination is reached (a Distances value equal to 0). This algorithm provides the path as a sequence of consecutive points (coordinates of the distance image). Since the geometry of the satellite imagery is well known, the list of image coordinates can be directly translated to real-world coordinates. This information will be used by the low-level navigation system to move the system along the calculated path.
Algorithm 12 PathPoints = NavigationPt2Pt{InitialPt, DestinyPt, Speed}
  • Distances = FastMarching(DestinyPt, Speed)
  • CurrentPoint = InitialPt
  • while Distances(CurrentPoint.x, CurrentPoint.y) > 0 do
  •  Search for the neighbour with the lowest Distances value: NewPoint
  • CurrentPoint = NewPoint
  •  Add CurrentPoint to PathPoints
  • end while
  • Return: PathPoints
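A minimal sketch of Algorithm 12 is shown below using scikit-fmm as the Fast Marching solver (the paper uses a MultiStencils implementation [34], so results may differ slightly); the zero level set is placed around the destination pixel and the path is recovered by greedy descent over the 8-neighbourhood.

import numpy as np
import skfmm  # scikit-fmm: a Fast Marching solver (not the MultiStencils variant of [34])

def navigation_pt2pt(initial_pt, destiny_pt, speed):
    """Sketch of Algorithm 12: point-to-point route over the Speed map.

    Points are (row, col) pixel tuples; speed is the map from Algorithm 11.
    """
    # Place the zero level set (arrival time ~0) around the destination pixel
    phi = np.ones_like(speed)
    phi[destiny_pt] = -1.0
    distances = np.asarray(skfmm.travel_time(phi, speed))

    # Greedy descent: repeatedly move to the 8-neighbour with the lowest arrival time
    rows, cols = speed.shape
    current = initial_pt
    path_points = [current]
    while current != destiny_pt:
        r, c = current
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
        best = min(neighbours, key=lambda p: distances[p])
        if distances[best] >= distances[current]:
            break  # no further progress: we are at (or right next to) the destination
        current = best
        path_points.append(current)
    return path_points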
Finally, the full path that the robotic system should follow consists of reaching sequentially all the intermediate points previously determined. This strategy is fully described in Algorithm 13, but it is only valid when a single robotic system is used in the grove. When more than one robot is present in the grove, a more advanced algorithm is needed. Algorithm 14 introduces the strategy for two robots when they are close to each other.
Algorithm 13 CalculatePath{CurrentPosition, Path, MiddlePaths, Vertexes}
  • ListPoints = SequencePoints{CurrentPosition, MiddlePaths, Vertexes}
  • Speed = CalculateSpeed(Path)
  • StartingPoint = CurrentPosition
  • for i = 1 to NumberOfIntermediatePoints do
  • IntermediatePoint = ListPoints(i)
  • PathPoints = NavigationPt2Pt{StartingPoint, IntermediatePoint, Speed}
  •  Add PathPoints to FinalPath
  • StartingPoint = IntermediatePoint
  • end for
  • Return: FinalPath
Algorithm 14 CalculatePath{Positions, Path, MiddlePaths, Vertexes}
  • Extract the current position of robot 1: CurrentPosition_robot1
  • Extract the current position of robot 2: CurrentPosition_robot2
  • Extract the corners from Vertexes:
  •    Corners = [Vertexes(1), Vertexes(2), Vertexes(Penultimate), Vertexes(Last)]
  • Detect the closest corner and its distance for the two robots:
  •    Corner_robot1 = getClosestCorner(CurrentPosition_robot1, Corners)
  •    distance_robot1 = getPixelDistance(CurrentPosition_robot1, Corner_robot1)
  •    Corner_robot2 = getClosestCorner(CurrentPosition_robot2, Corners)
  •    distance_robot2 = getPixelDistance(CurrentPosition_robot2, Corner_robot2)
  • if distance_robot1 < distance_robot2 then
  •  Add Corner_robot1 to ListIntermediatePoints_robot1
  • ListPoints_robot1 = SequencePoints{Corner_robot1, MiddlePaths, Vertexes}
  •  Add ListPoints_robot1 to ListIntermediatePoints_robot1
  •  Detect the corner opposite to Corner_robot1 that is closest to robot 2: Opposite_robot2
  •  Add Opposite_robot2 to ListIntermediatePoints_robot2
  • ListPoints_robot2 = SequencePoints{Opposite_robot2, MiddlePaths, Vertexes}
  •  Add ListPoints_robot2 to ListIntermediatePoints_robot2
  • else
  •  Add Corner_robot2 to ListIntermediatePoints_robot2
  • ListPoints_robot2 = SequencePoints{Corner_robot2, MiddlePaths, Vertexes}
  •  Add ListPoints_robot2 to ListIntermediatePoints_robot2
  •  Detect the corner opposite to Corner_robot2 that is closest to robot 1: Opposite_robot1
  •  Add Opposite_robot1 to ListIntermediatePoints_robot1
  • ListPoints_robot1 = SequencePoints{Opposite_robot1, MiddlePaths, Vertexes}
  •  Add ListPoints_robot1 to ListIntermediatePoints_robot1
  • end if
  • Speed = CalculateSpeed(Path)
  • StartingPoint = CurrentPosition_robot1
  • for i = 1 to NumberOfIntermediatePoints do
  • IntermediatePoint = ListIntermediatePoints_robot1(i)
  • PathPoints = NavigationPt2Pt{StartingPoint, IntermediatePoint, Speed}
  •  Add PathPoints to FinalPath_robot1
  • StartingPoint = IntermediatePoint
  • end for
  • StartingPoint = CurrentPosition_robot2
  • for i = 1 to NumberOfIntermediatePoints do
  • IntermediatePoint = ListIntermediatePoints_robot2(i)
  • PathPoints = NavigationPt2Pt{StartingPoint, IntermediatePoint, Speed}
  •  Add PathPoints to FinalPath_robot2
  • StartingPoint = IntermediatePoint
  • end for
  • Store FinalPath_robot1 and FinalPath_robot2
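The stopping rule described later in Section 4 (each robot stops when it reaches a point already covered by the other) is not part of Algorithm 14 itself; the following sketch simulates it under the simplifying assumption that both robots advance one path point per step at the same rate.

def simulate_two_robots(final_path_robot1, final_path_robot2):
    """Sketch of the stopping rule of Section 4: robots advance simultaneously and
    each one stops at the first point already covered by the other robot.

    Paths are lists of (row, col) pixels generated over the same grid.
    """
    visited1, visited2 = set(), set()
    executed1, executed2 = [], []
    i = j = 0
    done1 = done2 = False
    while not (done1 and done2):
        if not done1 and i < len(final_path_robot1):
            p = final_path_robot1[i]
            executed1.append(p)
            if p in visited2:
                done1 = True          # stop point for robot 1
            visited1.add(p)
            i += 1
        else:
            done1 = True
        if not done2 and j < len(final_path_robot2):
            q = final_path_robot2[j]
            executed2.append(q)
            if q in visited1:
                done2 = True          # stop point for robot 2
            visited2.add(q)
            j += 1
        else:
            done2 = True
    return executed1, executed2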

4. Results on a Real Case: Proof-of-Concept Validation

The previous section introduced the algorithms to obtain the path that a robot or a team of two robots has to follow to cover the whole citric grove for maintenance tasks, for instance. To show the feasibility of the proposed system, it has been evaluated in a real context as a proof-of-concept.

4.1. Results of the Mapping System

First, given the current position of the robot (or team of robots), the maps and parcel masks are obtained. For our test, a GPS receiver with submeter precision was used. It is important to remark that the system must be inside the parcel to get the data accurately. From the initial position of the system, inside the parcel, the satellite maps of the parcel are obtained using a satellite map service via the Internet. In our case, two images were downloaded from the SIGPAC service, the photograph of the parcel and the image with the boundaries of the parcel, as can be seen in Figure 3. Due to the characteristics of the latter image, some additional steps were included in Algorithm 3.
In Algorithm 3, the image with the borders of the parcel (see the parcel image in Figure 3) is first processed to obtain a binary mask separating the pixels which belong to the parcel from the pixels which are outside the parcel. For this reason, this image is first converted to black and white with a custom binarization algorithm (see Figure 4a). The purpose of the binarization is twofold: (1) to remove watermarks (red pixels in Figure 3) and (2) to highlight the borders (purple pixels in Figure 3).
This early version of the binary mask is then dilated to better connect the pixels which correspond to the border (see Figure 4b). The dilation is performed three times with a 5 × 5 mask. Then, the pixel corresponding to the position of the robots is flood-filled in white (see Figure 4c). Finally, the image is eroded to revert the dilation previously performed and to remove the borders between parcels. For this purpose, the erosion algorithm is run 6 times with the same 5 × 5 mask. After applying this last filter, the borders are removed and the parcel area is kept in the mask image. After obtaining this last binary mask, it is dilated again 3 times to better capture the real borders of the parcel (see Figure 4d).
Then, it is only necessary to determine the bounding box surrounding the parcel in the parcel mask (see Figure 4d) to crop both images. This procedure is visually depicted on the top side of Figure 5.
The first mask, Path, contains the path where the robot can navigate (see Figure 5b). The second mask, Trees, contains the area assigned to orange trees (see Figure 5c). The last mask, Obstacles, contains those pixels which are considered obstacles; this mask corresponds to the inverse of the path (see Figure 5d).

4.2. Results of the Path Planner System

4.2.1. Detecting Tree Lines

As a result of the Hough Transform application in Algorithm 7, all the orange tree rows are extracted as depicted in Figure 6. Although the order in which they are extracted does not correspond to the real order inside the grove, they are reordered and labelled according to their real order for the further steps.

4.2.2. Determining the Reference Points

Algorithm 8 introduced how to determine the reference points that could be used as intermediate points in the path planner. Figure 7 and Figure 8 show a graphical example of these reference points. The former set of images shows how the tree row vertexes are computed, whereas the latter includes the process to extract the middle point of each path row.

4.2.3. Determining the Navigation Route from the Reference Points

The results of Algorithm 9 are shown in Figure 9, where the numbered sequence of points is shown for four different initial configurations. In each one, the robot is placed near a different corner, simulating four possible initial places for the robots, so the four possible intermediate point distributions are considered.

4.2.4. Path Planning Using Processed Satellite Imagery

Algorithm 11 is used to calculate the Speed image required to determine the path that the robot should follow. Initially, the Path binary map (Figure 10a) was considered as a base to determine the distances, but a modified version was used instead (Figure 10b). The modification consisted of adding the detected lines for the orange rows. The lines have been added to ensure that the robot navigates entirely through each path row and that there is no way of trespassing the orange tree rows during navigation.
Three different Speed images are shown in Figure 11a–c depending on the value of the α parameter. After performing some experiments, the best results were obtained when the α value was set to 3. With lower values of α, the points close to obstacles were not penalized. With higher values, some narrow path rows were omitted from the navigation due to their low speed values. Finally, in our experiments, we used α = 3, as shown in Algorithm 11.
Figure 12a shows the parcel with the destination position drawn on it, along with two representations of the Distances matrix (Figure 12b,c). The first one corresponds to the normalized Distances matrix, whose values are scaled to the [0, 255] range. However, it is nearly impossible to visually see how to arrive at the destination from any other point inside the parcel. This is due to the high penalty value assigned to those points belonging to obstacles (non-path points). For this reason, a modified way to visualize the distance matrix is introduced by the following equation:
DistImg(x, y) = 100 + 155 · Distances(x, y) / max(Distances), if Obstacles(x, y) = false
DistImg(x, y) = 0, otherwise
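A direct NumPy transcription of this visualization is sketched below; the maximum is taken over the non-obstacle pixels only, which is an assumption made here so that the free-space values actually span the [100, 255] range.

import numpy as np

def distance_visualisation(distances, obstacles):
    """Sketch of the visualisation above: free pixels mapped into [100, 255], obstacles to 0."""
    dist_img = np.zeros(distances.shape, dtype=np.uint8)
    free = ~obstacles
    # Maximum over non-obstacle pixels (assumption) so free space spans the full range
    dist_img[free] = (100.0 + 155.0 * distances[free] / distances[free].max()).astype(np.uint8)
    return dist_img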
In Figure 12c, it can be seen that there are high distance values in the first and last path rows (top-left and bottom-right sides of the figure) which are not present in most of the other path rows. These two rows are narrow, and their speed values are, therefore, quite low. For example, a robot placed in the top-left corner should navigate through the second path row to reach the top-right corner instead of navigating through the shortest path (the first path row). The aim of Fast Marching is to calculate the fastest path to the destination point, not the shortest one. In the first row, the distance to obstacles is almost always very low.
Figure 13 shows the results of the path planner for a single robotic system. In the example image, the robotic system is initially placed at the location marked with the red x-cross. Then, all the intermediate points, including the closest corner, are determined (green x-crosses).
Figure 14 introduces an example for a team composed of two robots. Figure 14a,b shows the complete determined path for the first and second robots. The first robot moves to the closest corner and then navigates through the grove, and the second robot moves to the closest opposite corner (located at the last row) and then also navigates through the grove but in the opposite direction (with respect to the first robot). The path is coloured depending on the distance to the initial position (blue is used to denote low distances, and red is used to denote high distances).
However, the robotic systems will not complete their full paths because they will arrive at a point (the stop point, marked with a black x-cross) which has already been reached by the other robotic system, and they will stop there (see Figure 14c,d). The colour between the initial point (red x-cross) and the stop point (black x-cross) graphically denotes the relative distance. Finally, Figure 14d shows a composition of the paths performed by each robotic system in one image.
Similar examples are shown in Figure 15 and Figure 16, where the robots are not initially close to the corners. In the former example, the path between the initial position and the corners is not marked as "visited" because the robotic system navigation starts when the robot reaches the first intermediate point. In the latter case, we show an example where the robots stop when they detect that they have reached the same location.

5. Conclusions

This paper has shown a proof-of-concept of a novel path planning approach based on publicly available remote imagery. The partial results, shown as intermediate illustrative real examples of each performed step, encourage reliance on satellite imagery and cadastral information to automatically generate the path planning. Accordingly, useful knowledge can be obtained from the combination of satellite imagery and information about the borders that divide the land into parcels. Some limitations have also been identified: some parameters (such as thresholds) are image dependent and have been manually set for the current set of satellite images; the method can only be applied to the current configuration of citric trees in rows; and changes in the grove distribution are not immediately reflected in the satellite imagery. In our platform, this path planning is complemented by local imagery, which provides navigation centred in the path.
We can conclude by remarking that the steps necessary to autonomously determine the path planning in citric groves in the province of Castellón have been described in this paper and that the potential of this autonomous path planner has been shown with successful use cases. Firstly, a procedure to extract binary maps from a public remote imagery service has been described. Secondly, a novel methodology to automatically detect reference points of orange groves from remote imagery has been introduced. Some knowledge and global information about the grove can be extracted from those points. Thirdly, a procedure to determine a path has been designed for a stand-alone robotic vehicle. Moreover, it has been modified to be suitable for a team composed of two robotic systems. The first path planning provides useful information, and it may be considered a fast way to perform a maintenance task. This path is complemented by local navigation techniques (see Reference [21]) to obtain optimal results in the final implementation of the whole navigation module. Finally, although we introduced the system for orange groves, all the steps can be applied to other groves with a similar main structure (other citrus groves or some olive groves). Moreover, they could also be adapted to other crops or situations by changing the way the reference points are extracted, e.g., a hybrid team of multiple robots would require a specific planner and path strategy.
Finally, this paper has introduced a proof-of-concept showing the feasibility of the proposed algorithms to automatically determine the path planning inside a citric grove. The proposed algorithms can also be used to ease the manual labelling of satellite imagery to create maps for each parcel. We identify the full implementation in different contexts and its experimental validation as further work.

Author Contributions

J.T.-S. and P.N. equally contributed in the elaboration of this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blazquez, C.H.; Edwards, G.J.; Horn, F.W. Citrus grove mapping with colored infrared aerial photography. Proc. Fla. State Hortic. Soc. 1979, 91, 5–8. [Google Scholar]
  2. López-Granados, F. Weed detection for site-specific weed management: Mapping and real-time approaches. Weed Res. 2011, 51, 1–11. [Google Scholar] [CrossRef]
  3. Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J.; Camps Valls, G.; Calpe Maravilla, J. Land cover classification of VHR airborne images for citrus grove identification. ISPRS J. Photogramm. Remote Sens. 2011, 66, 115–123. [Google Scholar] [CrossRef]
Figure 1. Sample of small farms, including abandoned ones, in Valencia (Spain).
Figure 2. A farmer performing maintenance tasks in a young orange grove: the red lines mark the rows of citrus trees (tree rows), and the green line marks the path along which the farmer walks.
Figure 3. Obtaining the satellite maps from the current position.
Figure 4. Filtering the parcel images to detect the parcel or grove where the robots are located: (a) after the custom binarization, (b) after dilation, (c) after filling the current parcel, and (d) after erosion.
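For readers who wish to reproduce this filtering stage, the following minimal sketch implements the four steps of Figure 4 with standard binary morphology from SciPy. The threshold, the structuring-element size, and the function name isolate_parcel are illustrative assumptions, not the exact values or code used in this work.

```python
# Minimal sketch of the four filtering steps of Figure 4: binarization,
# dilation, filling of the current parcel, and erosion. The threshold and
# structuring element are illustrative assumptions, not the paper's values.
import numpy as np
from scipy import ndimage

def isolate_parcel(cadastral_gray, robot_px, threshold=128, struct_size=5):
    """Return a binary mask of the parcel that contains the robot position."""
    struct = np.ones((struct_size, struct_size), dtype=bool)

    # (a) custom binarization: candidate parcel pixels become foreground
    binary = cadastral_gray > threshold

    # (b) dilation: connect fragmented foreground pixels of the same parcel
    dilated = ndimage.binary_dilation(binary, structure=struct)

    # (c) fill the current parcel: keep only the connected component that
    # contains the robot position and close any holes inside it
    labels, _ = ndimage.label(dilated)
    parcel = labels == labels[robot_px]
    parcel = ndimage.binary_fill_holes(parcel)

    # (d) erosion: undo the thickening introduced by the dilation step
    return ndimage.binary_erosion(parcel, structure=struct)

# Example call (robot position given in pixel coordinates of the image):
# mask = isolate_parcel(image, robot_px=(420, 310))
```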
Figure 5. Generating binary maps from the satellite maps: (a) satellite view of the current parcel, (b) Path image of the current parcel, (c) Trees image of the current parcel, and (d) Obstacles image of the current parcel.
Figure 6. Detecting tree rows with the Hough transform: (a) current parcel view with the first detected row, (b) current parcel view with the first and second detected rows, and (c) current parcel view with all detected rows.
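The incremental row detection shown in Figure 6 can be approximated with an off-the-shelf Hough transform. The sketch below uses scikit-image's hough_line and hough_line_peaks on the Trees binary map of Figure 5c; the number of peaks and the peak-separation parameters are assumptions for illustration only.

```python
# Minimal sketch of tree-row detection with the standard Hough transform,
# applied to the Trees binary map of Figure 5c. The peak-selection
# parameters are illustrative assumptions, not the paper's values.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def detect_tree_rows(trees_mask, max_rows=10):
    """Return (angle, distance) pairs, one per detected tree row."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
    hspace, thetas, dists = hough_line(trees_mask, theta=angles)

    # Keep the strongest accumulator peaks; each peak corresponds to one
    # tree row, matching the incremental detection of panels (a)-(c)
    _, row_angles, row_dists = hough_line_peaks(
        hspace, thetas, dists, num_peaks=max_rows, min_distance=20)
    return list(zip(row_angles, row_dists))
```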
Figure 7. Detecting tree rows with the Hough transform: (a) the extended satellite image of the parcel with the detected lines and (b) the extended parcel with the detected vertices.
Figure 8. Extracting reference points: (a) determining path row lines perpendicular to the tree rows, (b) determining the bisection between path row lines, (c,d) intersection between the bisection and the Path binary map, and (e,f) new reference points and their representation on the parcel satellite image.
Figure 9. Intermediate points (numbered blue dots) for four different configurations depending on the robot's initial position (red circle). The robot must first move to the closest corner (green circle) and then navigate through the intermediate points (small numbered circles) to reach the final position in the opposite corner (big blue circle): (a) robot placed in the upper-left corner, (b) robot placed in the bottom-left corner, (c) robot placed in the bottom-right corner, and (d) robot placed in the upper-right corner.
Figure 10. Generating Speed images: (a) the original Path binary map and (b) the modified map.
Figure 11. Generating Speed images: (a–c) graphical representations of the Speed values calculated with Equation (4) for α equal to 1, 2, and 3, respectively.
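The exact Speed model is given by Equation (4) in the main text; the sketch below is only a generic illustration, under the assumption that the speed grows with the distance to non-traversable cells and is modulated by the exponent α, which reproduces the qualitative effect shown in Figure 11 but is not the authors' exact formula.

```python
# Illustrative construction of a Speed map from the modified Path map of
# Figure 10b. This is NOT the paper's Equation (4); it only sketches the
# general idea of a speed that increases with the distance to trees and
# obstacles, modulated by an exponent alpha (cf. Figure 11 for alpha = 1, 2, 3).
import numpy as np
from scipy import ndimage

def speed_map(path_mask, alpha=2):
    """path_mask: boolean map, True where the robot may drive."""
    # Euclidean distance (in pixels) from every traversable cell to the
    # closest tree/obstacle cell
    dist = ndimage.distance_transform_edt(path_mask)

    # Normalize to [0, 1] and raise to alpha: a larger alpha concentrates
    # the fast region towards the middle of the corridor between tree rows
    return (dist / dist.max()) ** alpha
```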
Figure 12. Distance images: (a) parcel satellite image with the final position (blue x-cross), (b) a representation of the normalized Distance matrix image, and (c) the modified representation.
Figure 13. Example of a path planned for a robotic system.
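As an illustration of the planning stage, the sketch below computes an arrival-time field (playing the role of the Distance image in Figure 12) with the independent scikit-fmm package and recovers a path such as the one in Figure 13 by descending that field from the robot position to the goal. It is a generic Fast Marching example under the assumptions stated in the comments, not the authors' implementation.

```python
# Sketch of the Fast Marching stage using scikit-fmm (an independent,
# freely available implementation; the paper's own Fast Marching code may
# differ). The path is recovered by greedy descent on the arrival-time field.
import numpy as np
import skfmm

def plan_path(speed, start, goal, max_steps=100000):
    """start and goal are (row, col) pixels; speed is the Speed map."""
    # Negative seed at the goal so the zero contour surrounds the goal cell;
    # travel_time then gives the arrival time from the goal to every cell
    phi = np.ones_like(speed)
    phi[goal] = -1
    time = skfmm.travel_time(phi, np.maximum(speed, 1e-3))

    # Greedy descent on the arrival-time field: always step to the
    # 8-neighbour with the lowest remaining travel time
    path, current = [start], start
    for _ in range(max_steps):
        if current == goal:
            break
        r, c = current
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < time.shape[0]
                      and 0 <= c + dc < time.shape[1]]
        current = min(neighbours, key=lambda p: time[p])
        path.append(current)
    return path
```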
Figure 14. Determining path planning for a team composed of two robots (I): (a) planned path for robot 1, (b) planned path for robot 2, (c) real path for robot 1, (d) real path for robot 2, and (e) superposition of (c) and (d).
Figure 15. Determining path planning for a team composed of two robots (II): (a) real path for robot 1, (b) real path for robot 2, and (c) superposition of (a) and (b).
Figure 16. Determining path planning for a team composed of two robots (III): (a) real path for robot 1, (b) real path for robot 2, and (c) superposition of (a) and (b).
