Article

Mapping for Autonomous Navigation of Agricultural Robots Through Crop Rows Using UAV

Biological and Agricultural Engineering (BAE), Kansas State University, Manhattan, KS 66506, USA
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(8), 882; https://doi.org/10.3390/agriculture15080882
Submission received: 27 February 2025 / Revised: 8 April 2025 / Accepted: 14 April 2025 / Published: 18 April 2025

Abstract

Mapping is fundamental to the autonomous navigation of agricultural robots, as it provides a comprehensive spatial understanding of the farming environment. Accurate maps enable robots to plan efficient routes, avoid obstacles, and precisely execute tasks such as planting, spraying, and harvesting. Row crop navigation presents unique challenges, and mapping plays a crucial role in optimizing routes and avoiding obstacles in coverage path planning (CPP), which is essential for efficient agricultural operations. This study proposes a simple method for creating maps with Unmanned Aerial Vehicles (UAVs) and applying them to row crop navigation. A case study is presented to demonstrate the method’s viability and illustrate how the resulting map can be applied in agricultural scenarios. The study focused on two major row crops, corn and soybean, and the results indicate that map creation is feasible when the inter-row spaces are not obscured by canopy cover from the adjacent rows. Although the study did not apply the map in a real-world scenario, it offers valuable insights for guiding future research.

1. Introduction

Robotic mapping is an active and vibrant research field in mobile robotics [1]. Mapping plays a crucial role in autonomous robot navigation, providing a fundamental understanding of the robot’s environment. It involves creating a representation of the surroundings, which enables the robot to perceive and navigate its environment effectively [2]. Mapping is an important task for the autonomous navigation of all types of robots, including those used for farming operations. The navigation of ground-based mobile robots in outdoor farm environments generally uses four main types of route planning approaches: sampling-based planners, graph search-based planners, numerical optimization planners, and interpolating curve planners [3]. Route planning can be deterministic, where the entire path is generated prior to navigation, or sensor-based, where the path is created using real-time sensor feedback. A global optimum can be found in deterministic planning, although execution still depends on local sensor feedback. However, processing extensive sensor data requires substantial computational power, which is limited in small mobile robots. Hence, creating a map before the intended operation helps small robots devise optimized routes rather than relying solely on local sensor feedback.
The natural outdoor agricultural environment is diverse, and the crop field environment is no exception. Moreover, the agricultural field environment is more complex than a conventional city or urban environment. Row planting is a widespread practice in farming systems. Precision agriculture has flourished in row plantations in recent years, maximizing crop production and cost-effectiveness by using different sensors, Geographic Information Systems (GISs), and Global Navigation Satellite Systems (GNSSs) [4,5,6]. While large over-canopy agricultural machinery can easily use an RTK-GPS (Real-Time Kinematic Global Positioning System) for autonomous navigation, under-canopy robot navigation presents significant challenges because it cannot rely on an RTK-GPS [7]. Leveraging RGB imagery for this task has proven complex due to substantial visual variability across different times of the day and growing seasons, which undermines the reliability of heuristic-based crop row detection [8]. Additionally, the visual homogeneity of the environment often leads to positional drift in SLAM-based approaches [9]. The field environment of row crops such as corn, soybeans, wheat, or sorghum is complex for autonomous robots traveling through rows. Finding the optimum route is challenging and often involves non-deterministic polynomial time (NP-hard) problems, among others [10]. Hence, creating an aerial map of a particular crop field before any robotic operations would help in assessing the field conditions and in optimizing operations in accordance with the capability of the ground robots before they begin work.
Drones, or Unmanned Aerial Vehicles (UAVs), have received special attention as a low-cost platform for carrying imaging sensors during the last two decades [11,12]. UAVs can gather images in the visible, infrared, and thermal regions of the electromagnetic spectrum with a very high spatio-temporal resolution [13]. It is imperative to implement autonomous methods for extracting features from UAV images efficiently and affordably to meet the increasing need for sustainable and Precision Agriculture (PA) [14]. Detecting crop rows in UAV images and subsequently extracting them automatically are crucial for crop management, including rectifying sowing errors, monitoring growth stages, counting seedlings, estimating yields, and facilitating crop harvesting [15]. Various approaches have previously been employed to identify and extract crop rows from UAV images. The majority of these approaches involved image processing, where RGB (Red Green Blue) or multispectral images were converted into binary representations of vegetation and non-vegetation using color indices and segmentation techniques such as supervised and unsupervised clustering [16,17] or image transformation techniques [11]. Many studies used the Hough transform and its different variations [15,18,19]. Some studies used the Fast Fourier Transform (FFT) [20,21]. Recently, Convolutional Neural Networks (CNNs) have also shown promising results [22,23,24].
Drones have one particular advantage: they can be operated high above the ground, giving them a large field of view, and that coverage can be varied by changing the flying height while taking pictures. They are more convenient for gathering data than commissioning satellite imagery of a particular crop field, and they can collect images as often as needed to keep pace with crop growth. Cropping operations like weeding, watering, or spraying are performed at different times or stages of crop growth, and different maps are needed for the intended operations. For example, weeding requires a map of the weeds growing in the inter-row or intra-row areas of the crop, and this map changes over time. When weeds reach a level that warrants site-specific weeding, creating a map beforehand would be helpful for the operation.
This paper shows a simple way of creating maps for the row crop navigation of Unmanned Ground Vehicles (UGVs) for agricultural operations using some standard tools. The created map could be used to optimize route plans, calculate robot energy budgets, and plan farming operations such as weeding, spraying, etc. A methodology is proposed for using the created map for autonomous navigation. Moreover, this method has the potential to be integrated with precision agricultural operations such as the site-specific application of fertilizer or pesticide by using small robots.

1.1. Mapping for Robots

According to the IEEE standard for robot map data representations [25], there are two main types of 2D maps for mobile robots, namely metric maps and topological maps. A metric map represents the environment in a way that allows the robot to measure distances and locations accurately. It provides a geometric representation of the environment, including the positions and shapes of obstacles, walls, landmarks, and other relevant features, and is typically used for path planning, localization, and obstacle avoidance in mobile robotics. This representation often takes the form of coordinates (e.g., x, y, and sometimes z) that allow the robot to measure distances and angles accurately. It is the most common type of map and the one we are familiar with using daily, with examples including city maps or maps of a specific area.
Topological map representations can be thought of as graph-based structures, where distinct locations act as nodes linked by edges that specify viable routes for traversal. Distinct places are positions in the environment characterized by sufficiently distinctive features, making it relatively simple to determine their precise location. The typical approaches for creating topological maps often stem from Voronoi diagrams. Topological maps are structurally correct but do not indicate whether a path is blocked or occupied.
An occupancy grid is one of the most crucial mapping methods for environmental modeling in mobile robotics [26]. Occupancy grids portray the environment using gridded cells in two dimensions or voxel-based cells in three dimensions. They are discrete representations in which each cell is assigned a binary or probabilistic value to indicate its occupancy status. A binary occupancy grid uses a simple binary representation (1 for occupied and 0 for unoccupied). In contrast, a probabilistic grid assigns a probability value to each cell, indicating the likelihood of it being occupied or unoccupied; the assigned value reflects the confidence level regarding the cell’s occupancy status. A probabilistic map operates under the usual assumption that the grid cells are independent of one another. Occupancy grid maps are beneficial for different kinds of mobile robotic applications as they facilitate localization, path planning, navigation, and obstacle avoidance [1,27,28]. Occupancy maps can be built using laser range finders [29], sonar sensors [30,31], and stereo vision [32]. Figure 1 shows a representation of these commonly used 2D maps.
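To make the distinction between binary and probabilistic grids concrete, the following is a minimal Python/NumPy sketch; the grid size, resolution, obstacle locations, and thresholds are illustrative assumptions, not values used in this study.

```python
import numpy as np

# Illustrative 10 m x 10 m field at 0.1 m resolution (assumed values).
resolution = 0.1                                 # meters per cell
grid = np.zeros((100, 100), dtype=np.uint8)      # binary grid: 0 = free, 1 = occupied

# Mark two hypothetical crop rows as occupied cells.
grid[:, 30:33] = 1
grid[:, 60:63] = 1

# Probabilistic counterpart: each cell holds P(occupied); 0.5 would mean unknown.
prob_grid = np.full((100, 100), 0.5)
prob_grid[grid == 1] = 0.9                       # high confidence that row cells are occupied
prob_grid[grid == 0] = 0.1                       # low probability elsewhere

# A planner would treat cells above some threshold (e.g., 0.65) as obstacles.
obstacle_mask = prob_grid > 0.65
print(obstacle_mask.sum(), "cells treated as obstacles")
```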
For mobile robotics, maps are usually either generated using SLAM (Simultaneous Localization and Mapping) or prebuilt and loaded into the robot beforehand for autonomous navigation through an environment. SLAM is very useful for any autonomous navigation task and is especially suitable for dynamic environments such as city traffic, supermarkets, or hospitals, where humans, pets, and cars move alongside robots, and for areas where GPS signals are obstructed. In addition, city areas contain a wide range of moving objects, like cars, bicycles, and trams with varying speeds, as well as stationary objects like road dividers, road barricades, etc. Therefore, a prebuilt map is not suitable in dynamic environments. In contrast, agricultural scenarios such as crop fields or greenhouses contain very few moving objects compared to city areas; therefore, a prebuilt map is more suitable than a dynamic mapping method like SLAM. Moreover, prebuilt maps are helpful for calculating the energy budget of small agricultural robots and for optimizing route planning for specific tasks like crop scouting.
SLAM is suitable for dynamic environments but has one important disadvantage: it is generally reliable in local regions but accumulates errors when mapping larger regions [33]. Although there are some methods to avoid this error, such as feature matching, sensor scan matching, etc., mapping over a large area is not error-proof. Hence, SLAM is not a good option for mapping vast expanses of crop fields, and a prebuilt map is more beneficial.
Figure 1. Types of maps. (a) The original map showing the real environment, (b) the metric map, (c) the topological map, and (d) the occupancy grid map. Adapted from [34].

1.2. Related Work

The consideration of UGV and UAV integration in various tasks is not new and has been described by several studies [35,36]. Some authors have proposed a framework for collaborative work between ground and aerial autonomous vehicles in Agriculture 4.0 scenarios and discussed its pros and cons [37]. UAVs have been used to create a map of obstacles and reconstruct the ground map for UGV navigation using the traditional A* algorithm [38]. Some authors have used UAVs to create maps using various image processing methods and developed a custom path-planning algorithm for UGV navigation [39]. Different procedures have been used to create maps from UAV images, depending on the target type and environment [11,39]. One important study related to mapping for agricultural purposes used a UAV for the route mapping of orchards [40]. Some studies have used UAVs to detect crop rows [11]. However, to the best of the authors’ knowledge, no studies have proposed using UAV-generated maps for the crop row navigation of UGVs. Hence, the main objectives of our study were (a) to examine how canopy cover hampers the creation of a map for row crop navigation, (b) to use a very simple and basic technique to extract crop rows from UAV imagery, and (c) to determine the feasibility of creating occupancy grids for crop rows without using any kind of machine learning technique.

2. Materials and Methods

We propose a methodology for using UAV imagery to create maps for the navigation of agricultural robots through row crops. The methodology consists of five steps. A block diagram of these steps is shown in Figure 2, and details of each step are given in the following sections:
Step 1: Image acquisition using UAV
UAVs or drones fly over the crop field in which the robot will perform its intended operation, such as scouting, spraying, or weeding. The flight plan should cover the whole area of the crop field. Mission planning for the UAV could be performed using an open-source mission planner like Mission Planner or QGroundControl. Flight parameters such as the altitude, image overlap, and interval between pictures are selected using the mission planner.
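As an illustration of how these flight parameters interact, the following sketch computes the ground sampling distance (GSD), image footprint, and camera trigger spacing for an assumed camera and altitude; the sensor width, focal length, and image dimensions are hypothetical and should be replaced with the values of the camera actually flown.

```python
# Hypothetical flight-planning calculation (assumed camera parameters).
sensor_width_mm = 23.5      # assumed APS-C sensor width
focal_length_mm = 16.0      # assumed lens focal length
image_width_px = 6000       # assumed image width in pixels
image_height_px = 4000      # assumed image height in pixels
altitude_m = 50.0           # flight altitude
forward_overlap = 0.85      # 85% forward overlap between consecutive images

# Ground sampling distance (meters per pixel).
gsd_m = (sensor_width_mm / 1000.0) * altitude_m / ((focal_length_mm / 1000.0) * image_width_px)

# Ground footprint of one image (meters), assuming square pixels.
footprint_w = gsd_m * image_width_px
footprint_h = gsd_m * image_height_px

# Distance between camera triggers along the flight line to achieve the target overlap.
trigger_spacing_m = footprint_h * (1.0 - forward_overlap)

print(f"GSD: {gsd_m * 100:.2f} cm/px, footprint: {footprint_w:.1f} x {footprint_h:.1f} m")
print(f"Trigger every {trigger_spacing_m:.1f} m along the flight line")
```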
Step 2: Generation of orthomosaic image
Orthomosaic images are typically created by stitching together multiple individual images, such as aerial or satellite images, to create a seamless, orthorectified representation of a larger area. Orthorectification involves correcting the image for distortions caused by terrain variation and sensor characteristics, ensuring that the resulting image has a consistent scale and orientation. Orthomosaic images are essential for accurately measuring areas because the images are corrected for the perspective views, camera angles, and lens distortion usually present in images taken using UAVs. Orthomosaic image generation software such as WebODM (free) or Agisoft (paid) could be used to create an orthomosaic image of the crop field from a series of overlapped images.
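For a scripted alternative to the desktop tools, the sketch below submits a folder of UAV images to a local NodeODM instance (the open-source engine behind WebODM) using the pyodm client; the host, port, image folder, and option values are assumptions and may need adjustment for a particular installation.

```python
# Minimal sketch: requesting an orthomosaic from a local NodeODM instance via pyodm.
# Assumes NodeODM is running on localhost:3000; option names follow typical ODM settings.
from glob import glob
from pyodm import Node

node = Node("localhost", 3000)
images = sorted(glob("flight_2022_07_15/*.JPG"))   # hypothetical image folder

task = node.create_task(images, {
    "orthophoto-resolution": 1.5,   # target cm/pixel for the orthomosaic (assumed option value)
    "dsm": False,                   # skip the surface model if only the orthomosaic is needed
})
task.wait_for_completion()
task.download_assets("./odm_results")  # the orthomosaic GeoTIFF is among the downloaded assets
```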
Step 3: Generation of occupancy grid map
An occupancy grid map, which divides the robot’s operating environment into a grid of cells, is created for autonomous robot navigation through crop rows. Each cell in the grid represents a specific area or region of the environment, and its occupancy status indicates whether the corresponding region is occupied by an obstacle (like a pivot rut or large stone) or is free space in which to move. Image processing techniques are used to convert RGB or multispectral images into binary images in which vegetative and non-vegetative areas are separated through established methods, yielding an occupancy grid map (binary map). The binary map is then converted to the .pgm format, which is used in the Robot Operating System (ROS) for storing occupancy grid maps.
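The conversion from a binary vegetation mask to a .pgm file is straightforward; the following is a minimal sketch assuming the mask is already available as a NumPy array (the file names are hypothetical). It follows the usual ROS map convention that occupied cells are dark and free cells are light.

```python
# Minimal sketch: writing a binary vegetation mask as a ROS-style .pgm map.
import numpy as np
from PIL import Image

mask = np.load("soybean_row_mask.npy")      # hypothetical binary mask from thresholding (True = vegetation)

# ROS map convention: occupied cells are dark (0), free cells are light (254).
pgm = np.where(mask, 0, 254).astype(np.uint8)

Image.fromarray(pgm, mode="L").save("field_map.pgm")
```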
Step 4: Conversion of coordinates
After map generation, it is important to convert the GPS coordinates to actual distances, because these are required to measure the traversable path for the robot. GPS coordinates, usually given in a latitude/longitude format, need to be converted to Cartesian coordinates using the UTM coordinate conversion method. UTM stands for Universal Transverse Mercator, a global coordinate system used to represent locations on the Earth’s surface. It is based on a two-dimensional Cartesian coordinate system, which makes it easier to perform accurate distance and angle calculations for relatively small areas on the ground. As the generated map is intended for ground robot navigation, it is very convenient to use UTM coordinates. UTM coordinates are widely used in cartography, surveying, and navigation. They divide the Earth’s surface into multiple zones, each spanning 6 degrees of longitude. The zones are numbered from 1 to 60, starting at the international date line in the Pacific Ocean and moving eastward. The UTM coordinate system utilizes eastings (measured along the east–west direction) and northings (measured along the north–south direction) to represent a location within a specific zone.
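The conversion itself can be performed with standard geodesy libraries; the sketch below uses the third-party `utm` package, and the corner coordinates are hypothetical placeholders rather than coordinates from this study.

```python
# Minimal sketch: converting geotagged coordinates to UTM so that map distances are in meters.
import utm

# Hypothetical field corners (latitude, longitude in decimal degrees).
corner_a = (39.123456, -96.612345)
corner_b = (39.123456, -96.608345)

ea, na, zone, band = utm.from_latlon(*corner_a)
eb, nb, _, _ = utm.from_latlon(*corner_b)

# With both points in the same UTM zone, the Euclidean distance is in meters.
distance_m = ((eb - ea) ** 2 + (nb - na) ** 2) ** 0.5
print(f"Zone {zone}{band}, east-west field width: {distance_m:.1f} m")
```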
Step 5: Use of Nav2 framework in ROS
Typically, an occupancy grid map is generated automatically using SLAM techniques with the help of a lidar sensor or depth camera. For this purpose, there are packages in Nav2 like “nav2_amcl”, “slam_toolbox”, and “nav2_costmap_2d” which take sensor readings as messages and localize robots on the map. Here, the map is created using UAVs, so ROS uses that map directly for path planning and for guiding robots to a given goal. The occupancy grid map in the .pgm (Portable Gray Map) format created earlier is fed to the Navigation 2, or Nav2, framework [41]. This is a powerful and flexible navigation stack designed to facilitate autonomous navigation for mobile robots. The Nav2 framework is an evolution of the original ROS navigation stack, designed to provide more flexibility, modularity, and robustness for autonomous robot navigation [42]. Nav2 is built on top of ROS2, the newer and more capable version of ROS, which addresses some of the limitations of ROS1 and adds support for real-time and distributed systems. Nav2 offers improved modularity, making it easier to customize navigation components based on specific robot platforms and environmental requirements. It also supports a broader range of sensor inputs, such as LiDAR, RADAR, sonar, and depth images, making it adaptable to a wider range of robotic systems. It uses modern techniques like adaptive Monte Carlo localization (AMCL), a particle-filter-based method for localization in a static map. Another important feature, behavior trees (BTs), can be conveniently used in complex decision-making processes for multi-mobile robot systems [43]. Behavior trees are better and more convenient than finite state machines (FSMs) for decision-making in complex scenarios [44]. To cover the navigable areas completely, the Open Navigation “Nav2 Complete Coverage” package in Nav2 could be used. This package is an extension of the “Fields2Cover” package [45], which addresses coverage path planning (CPP) problems. CPP is especially important for autonomous agricultural vehicles.
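Alongside the .pgm file, the Nav2 map server expects a small YAML metadata file. The sketch below writes one from Python; the field values are assumptions (in particular, the resolution must match the orthomosaic’s ground sampling distance in meters per cell, and the origin should come from the UTM coordinates of the field corner).

```python
# Minimal sketch: writing the map metadata file that Nav2's map_server expects
# alongside the .pgm created earlier (values shown are assumptions).
map_yaml = """image: field_map.pgm
resolution: 0.015          # meters per cell (assumed 1.5 cm GSD)
origin: [0.0, 0.0, 0.0]    # map origin [x, y, yaw]; set from the UTM corner of the field
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25
"""

with open("field_map.yaml", "w") as f:
    f.write(map_yaml)
```

This metadata file can then be supplied to the map server when bringing up Nav2, for example via the map argument of the nav2_bringup launch files.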

2.1. A Case Study

A small case study was conducted to test the validity of our method. Two popular row crops in two different locations were selected for the study: one was corn, and the other was soybean.
Location 1:
This study was conducted in four-acre corn fields at the Kansas River Valley Experiment Field, Silver Lake, KS, USA, during 2020 and 2021. The experiment field had a Eudora silt loam soil type. The growth stage was V12, and the row spacing was 30 inches. Corn was planted in rows running from west to east.
Location 2:
This study was conducted in seventeen-acre soybean fields at Clay Center, KS, USA, during summer 2022. The experiment field had a silt loam soil type. The growth stage was R4, and the row spacing was 30 inches. Soybeans were planted in rows running from south to north.

2.2. Map Creation

The navigational map creation process consisted of two parts: orthomosaic image creation from a series of drone images and occupancy grid creation from the orthomosaic image created. The description of the two processes is as follows:

2.2.1. Orthomosaic Image Creation

A quadcopter, the Matrice 100 (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China), was used to collect aerial imagery. It had a payload capacity of 1 kg with a maximum flight time of 40 min. RGB (red, green, and blue) or true color images were collected using a Sony Alpha (α) 5100, which had a CMOS sensor with an effective resolution of 24 megapixels. For taking photographs, the UAV was flown at a 50 m altitude with a speed of 3 m/s, which provided a ground sampling distance of 1.5 cm/pixel [46]. Multiple flights were conducted throughout the study, but only flights on days with a clear sky and good-quality images were considered for analysis.
The UAV was flown in autopilot mode, guided by a fixed route of waypoints, commonly known as a flight mission, generated using the mission planner. The forward and side overlaps between images were kept at 85 percent to meet image stitching requirements and enable the smooth development of orthomosaics. The camera was triggered at 1 s intervals, as recommended by the mission planner based on the flying altitude and image overlap. Since the UAV was not directly compatible with the mission planner, a web-based platform (Litchi, VC Technology Ltd., London, UK) was used to transfer missions from the mission planner to the UAV. Images were geotagged using the integrated module within the camera. Flights were conducted around solar noon, since shadows on the side of plants before and after solar noon can cause errors in imagery. The captured images were stitched into orthomosaics following the standard workflow in Metashape Professional (Agisoft LLC, St. Petersburg, Russia). The Real-Time Kinematic (RTK)-corrected GPS coordinates of 16 ground control points (GCPs), distributed evenly throughout the field, were measured using Topcon GR-5 (Topcon, Tokyo, Japan) GNSS receivers with centimeter-level positioning accuracy. These coordinates were then used in Agisoft to provide images and orthomosaics with very high positioning accuracy. Before generating orthomosaics, the images were calibrated to account for the lighting conditions during the acquisition period. For this calibration, images of a calibration panel (provided by the camera manufacturer) taken before and after the UAV flight were used in Agisoft. For each flight date, the final products generated in Agisoft were multispectral orthomosaics with centimeter-level positioning accuracy. Any raster-based mathematical operations were performed using the raster calculator tool in ArcMap 10.6.1 (ArcGIS, ESRI, Redlands, CA, USA).
The orthomosaic images of the corn and soybean fields and zoomed-in views of the associated rows are shown in Figure 3 and Figure 4, respectively. The figures show that in the case of corn, the canopy was very dense, and the leaves of plants in adjacent rows heavily covered the inter-row spaces with random patterns. In contrast, the soybean canopy was not as dense as that of the corn, and the inter-row spaces were discernible with much better clarity than those of the corn field.

2.2.2. Occupancy Grid Creation

Image processing (thresholding) was used to create a binary map [47] that could serve as an occupancy grid map for the autonomous navigation of robots. The Image Segmenter app in MATLAB R2023a was used to perform all the processing tasks; MATLAB uses the Otsu thresholding algorithm to create the binary map. The following image processing operations (with the default options) were used to extract the crop rows:
  • Two thresholding (global and adaptive) operations.
  • Morphological operations (dilation, erosion, opening, and closing) with different structuring elements (disc, diamond, and square).
We used the slider in the Image Segmenter app to create the binary map in which the inter-row spaces were most visible, rather than specifying particular threshold values. After thresholding, the RGB images were converted into binary images, which would then be used as occupancy grid maps and subsequently integrated into the ROS navigation stack to support autonomous robot navigation.
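For readers who prefer a scriptable route, the following is a minimal Python/OpenCV sketch of the same global (Otsu) and adaptive thresholding workflow, offered as a stand-in for the MATLAB Image Segmenter used in this study; the file name, block size, and morphological kernel are illustrative assumptions.

```python
# Minimal sketch: global (Otsu) and adaptive thresholding of an orthomosaic with OpenCV.
import cv2

rgb = cv2.imread("soybean_orthomosaic.tif")             # hypothetical orthomosaic file
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

# Global thresholding: Otsu picks one threshold from the whole-image histogram.
_, global_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive thresholding: a separate threshold per local neighborhood
# (block size and offset chosen arbitrarily here).
adaptive_mask = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 51, 2
)

# Optional light morphological clean-up, mirroring the operations tried in the study.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(global_mask, cv2.MORPH_OPEN, kernel)

cv2.imwrite("global_mask.png", cleaned)
cv2.imwrite("adaptive_mask.png", adaptive_mask)
```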

3. Results

The thresholding operations, which are a kind of image segmentation method, showed promising results for soybean, where the crop rows are visibly well defined with no significant overlaps, as shown in Figure 5. However, in the case of corn, where leaves overlapped most of the rows, segmentation failed to efficiently differentiate the rows (see Figure 6), unlike for soybeans. The two figures also show that global thresholding yielded a better result than adaptive thresholding for soybeans, whereas adaptive thresholding showed a slightly better result than global thresholding for corn. After segmentation, morphological operations (dilation, erosion, opening, and closing) with different structuring elements (disc, diamond, and square) were applied, but they did not significantly improve the segmentation results and are therefore not reported in detail.

4. Discussion

We propose a methodology to create maps from UAV imagery and use them for the navigation of agricultural robots through crop rows. Of the five steps in our proposed methodology, we implemented the first three; the remaining two steps will be addressed in a future study. Therefore, detailed descriptions of the last two steps are not included in our case study. The final step, integrating the maps into the Nav2 stack, depends on the intended application of the map, whether for path following or coverage path planning. Based on the specific objective, the Nav2 stack must be customized accordingly.
Our objective was to create maps for autonomous row crop navigation using a simple technique rather than a complex one. We selected the MATLAB Image Segmenter app for this purpose because it is intuitive and does not require coding or specialized skills. Moreover, the app can generate scripts for its operations, which can be easily integrated into automated image processing workflows. We did not use any machine learning techniques or custom algorithms to create occupancy grids from UAV imagery. No image format conversion was performed beyond the app’s built-in options. Additionally, no vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) or Excess Green Index (ExG), or clustering methods were used for image processing.
The difference in output between global and adaptive thresholding arises because global thresholding sets a single threshold value based on a histogram of the entire image’s pixel intensity distribution, whereas adaptive thresholding calculates a threshold value for each small image region, allowing each region to have a distinct threshold. Therefore, when the rows were distinct, global thresholding showed promising results; when the rows were not distinguishable, adaptive thresholding performed better because its threshold values were calculated from local intensity variations.
This proposed method shows the feasibility of extracting crop rows for mapping purposes, except when the crop canopy covers most of the inter-row spaces. Under complete canopy coverage, conventional segmentation algorithms are insufficient to extract the navigable spaces. However, when the canopy does not cover the inter-row spaces, this simple method works well, as shown in the case of soybeans. Although this method was not tested on early-stage crops, it is logical to assume that it will yield better results when crops are in the early stages of development. Its performance at the very early emergence stage might not be optimal, however, because the physical shape of the crops changes rapidly at that stage, and the canopy coverage, which ultimately dictates the inter-row distances, changes accordingly. The optimum picture-taking times depend mainly on the intended purpose (e.g., weeding or spraying) and on how long the map will be used by the robots. When there is a significant change in the physical size of the crops, it is always better to update the map just before a robot’s operation.

5. The Uses of the Generated Map

The generated map can be used for various robotic operations, such as the following:
  • Coverage path planning (CPP): CPP is a crucial aspect of autonomous robot navigation and refers to the process of generating a path or trajectory for a robot to traverse in order to cover an entire area or region of interest. The primary objective of coverage path planning is to ensure that the robot can systematically explore and survey the target environment efficiently and effectively, minimizing redundant or unnecessary movements. Several algorithms are used for coverage path planning, such as grid-based methods [48], cell decomposition methods [49], Voronoi-based methods [50], potential field methods [51], and sampling-based methods [52]. However, all these methods first require finding the navigable area for the robot; hence, this kind of map would be indispensable (a minimal grid-based sketch is given after this list).
  • Energy budget calculation: Autonomous mobile robots operating alone or in fleets need to know their energy consumption before starting operations to calculate when to go to the nearest station for manual refueling or recharge their batteries (in the case of electric vehicles). Moreover, estimating the energy budget is very important when selecting the installation location for charging stations.
  • Global path planning: Global path planning is a fundamental aspect of autonomous mobile robot navigation. It involves finding a high-level path from the robot’s initial position to the goal location, considering the overall environment and the robot’s capabilities. This path is typically represented as a series of waypoints, or key markers the robot must follow to reach its destination. The global path is planned before the robot starts moving, and it provides a general roadmap for the entire navigation task. The environment is usually represented as a map, either in a grid-based format such as an occupancy grid or using continuous representations like point clouds. The map contains information about obstacles, free spaces, and other relevant features. Various algorithms are used to compute the global path; these algorithms find the shortest or most optimal path from the starting point to the goal, considering the map’s obstacles and terrain. Global path planning may consider high-level constraints, such as avoiding specific areas (e.g., pivot ruts), accounting for different terrain types, or optimizing for specific criteria like energy consumption or time. Once the global path is generated, the robot follows it until it encounters local obstacles or deviations from the planned trajectory.
  • Obstacle avoidance: The robot could use the map to detect obstacles such as pivot ruts, large boulders/rocks, or big ditches in its path. By comparing the planned trajectory with the occupancy status of grid cells along the path, the robot could identify potential obstacles or collisions and adjust its route to avoid them. Global planning algorithms use map information to generate safe, smooth motion trajectories that avoid obstacles/collisions.
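As an illustration of how the generated occupancy grid could feed a coverage planner, the sketch below produces a simple boustrophedon (back-and-forth) path over a synthetic binary grid. It ignores headland turns and robot kinematics, uses hypothetical occupied columns as stand-ins for crop rows, and is not the Fields2Cover or Nav2 implementation.

```python
# Minimal sketch: a boustrophedon coverage path over a binary occupancy grid.
import numpy as np

grid = np.zeros((20, 20), dtype=np.uint8)
grid[:, 5] = 1                      # hypothetical crop rows as occupied columns
grid[:, 10] = 1
grid[:, 15] = 1

def boustrophedon(grid):
    """Visit every free cell column by column, alternating the sweep direction."""
    sweeps = []
    for col in range(grid.shape[1]):
        free_rows = np.flatnonzero(grid[:, col] == 0)
        if free_rows.size == 0:
            continue                # column fully occupied by a crop row
        if len(sweeps) % 2 == 1:    # alternate direction on successive passes
            free_rows = free_rows[::-1]
        sweeps.append([(r, col) for r in free_rows])
    # Flatten the passes into one waypoint list.
    return [cell for sweep in sweeps for cell in sweep]

waypoints = boustrophedon(grid)
print(f"{len(waypoints)} cells covered out of {(grid == 0).sum()} free cells")
```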

6. Limitations and Future Work

The flight altitude was 50 m above the ground, which is commonly used for precision agriculture purposes. However, a lower altitude might give better results, which was not tested here. Therefore, there is room for optimizing the altitude to extract the best-quality row information for map creation. We did not control for wind speed, cloud cover, temperature, etc., while taking pictures with the UAV, all of which are important for obtaining crisp images. We only used RGB images without any color model conversion. Other bands, like the red edge or infrared bands, might provide better segmentation results for map creation. We used a simple and straightforward way to create a binary map from orthomosaic images rather than lengthy, complex, or custom-developed algorithms. Custom algorithms or machine learning techniques might provide better results for occupancy grid creation from UAV imagery, but they were not tested here because we aimed to keep the process very simple and quick.

7. Conclusions

Mapping is an essential task for autonomous agricultural robots. A predefined map is helpful for global or coverage path planning. Acquiring current imagery of crop fields for map creation using satellites is either difficult or expensive, whereas gathering images using UAVs is convenient and cheap. This study has shown a simple methodology with which maps can be created using UAVs and utilized for row crop navigation by UGVs. It has been shown that a map can be created from orthomosaic images with simple image processing operations like thresholding. As this was a small case study intended to show the suitability of simple operations for map creation, there is scope for improving the procedures, such as optimizing the UAV’s altitude and map coverage, determining the proper timing of flight operations to extract the best-quality row information, and developing more robust algorithms. This work paves the way for map generation for the autonomous navigation of agricultural robots through crop rows.

Author Contributions

Conceptualization, H.M.; methodology, H.M.; software, H.M., M.G. and J.E.A.; formal analysis, H.M.; investigation, H.M.; resources, H.M., M.G. and J.E.A.; data curation, H.M., M.G. and J.E.A.; writing—original draft preparation, H.M. and M.G.; writing—review and editing, H.M.; visualization, H.M.; supervision, D.F.; project administration, D.F.; funding acquisition, D.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Institute of Food and Agriculture (NIFA-USDA).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AMCL: Adaptive Monte Carlo Localization
CNN: Convolutional Neural Network
CPP: Coverage Path Planning
ExG: Excess Green Index
FFT: Fast Fourier Transform
GIS: Geographic Information Systems
GNSS: Global Navigation Satellite Systems
IEEE: Institute of Electrical and Electronics Engineers
NDVI: Normalized Difference Vegetation Index
pgm: Portable Gray Map
RGB: Red, Green, and Blue
ROS: Robot Operating System
RTK-GPS: Real-Time Kinematic Global Positioning System
SLAM: Simultaneous Localization and Mapping
UAV: Unmanned Aerial Vehicle
UGV: Unmanned Ground Vehicle

References

  1. Thrun, S.; Burgard, W.; Fox, D. A probabilistic approach to concurrent mapping and localization for mobile robots. Auton. Robot. 1998, 5, 253–271. [Google Scholar] [CrossRef]
  2. Stachniss, C. Exploration and Mapping with Mobile Robots. Ph.D. Thesis, Verlag Nicht Ermittelbar, Zürich, Switzerland, 2006. [Google Scholar]
  3. González, D.; Pérez, J.; Milanés, V.; Nashashibi, F. A Review of Motion Planning Techniques for Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1135–1145. [Google Scholar] [CrossRef]
  4. Gebbers, R.; Adamchuk, V.I. Precision agriculture and food security. Science 2010, 327, 828–831. [Google Scholar] [CrossRef] [PubMed]
  5. Zhang, H.; Chen, B.; Zhang, L. Detection Algorithm for Crop Multi-Centerlines Based on Machine Vision. Trans. ASABE 2008, 51, 1089–1097. [Google Scholar] [CrossRef]
  6. Barnes, A.P.; Soto, I.; Eory, V.; Beck, B.; Balafoutis, A.; Sánchez, B.; Vangeyte, J.; Fountas, S.; van der Wal, T.; Gómez-Barbero, M. Exploring the adoption of precision agricultural technologies: A cross regional study of EU farmers. Land Use Policy 2019, 80, 163–174. [Google Scholar] [CrossRef]
  7. Farrell, J. Aided Navigation: GPS with High Rate Sensors; McGraw-Hill, Inc.: Chicago, IL, USA, 2008. [Google Scholar]
  8. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  9. Xue, J.; Zhang, L.; Grift, T.E. Variable field-of-view machine vision based row guidance of an agricultural robot. Comput. Electron. Agric. 2012, 84, 85–91. [Google Scholar] [CrossRef]
  10. Kiani, F.; Seyyedabbasi, A.; Nematzadeh, S.; Candan, F.; Çevik, T.; Anka, F.A.; Randazzo, G.; Lanza, S.; Muzirafuti, A. Adaptive metaheuristic-based methods for autonomous robot path planning: Sustainable agricultural applications. Appl. Sci. 2022, 12, 943. [Google Scholar] [CrossRef]
  11. Hassanein, M.; Khedr, M.; El-Sheimy, N. Crop row detection procedure using low-cost UAV imagery system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 349–356. [Google Scholar] [CrossRef]
  12. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  13. Ludovisi, R.; Tauro, F.; Salvati, R.; Khoury, S.; Mugnozza Scarascia, G.; Harfouche, A. UAV-based thermal imaging for high-throughput field phenotyping of black poplar response to drought. Front. Plant Sci. 2017, 8, 252873. [Google Scholar] [CrossRef] [PubMed]
  14. Pádua, L.; Adão, T.; Hruška, J.; Sousa, J.J.; Peres, E.; Morais, R.; Sousa, A. Very high resolution aerial data to support multi-temporal precision agriculture information management. Procedia Comput. Sci. 2017, 121, 407–414. [Google Scholar] [CrossRef]
  15. Soares, G.A.; Abdala, D.D.; Escarpinati, M.C. Plantation Rows Identification by Means of Image Tiling and Hough Transform. In Proceedings of the VISIGRAPP (4:VISAPP), Madeira, Portugal, 27–29 January 2018; pp. 453–459. [Google Scholar]
  16. Louargant, M.; Jones, G.; Faroux, R.; Paoli, J.N.; Maillot, T.; Gée, C.; Villette, S. Unsupervised classification algorithm for early weed detection in row-crops by combining spatial and spectral information. Remote Sens. 2018, 10, 761. [Google Scholar] [CrossRef]
  17. Rabatel, G.; Delenne, C.; Deshayes, M. A non-supervised approach using Gabor filters for vine-plot detection in aerial images. Comput. Electron. Agric. 2008, 62, 159–168. [Google Scholar] [CrossRef]
  18. Ji, R.; Qi, L. Crop-row detection algorithm based on Random Hough Transformation. Math. Comput. Model. 2011, 54, 1016–1020. [Google Scholar] [CrossRef]
  19. Vidović, I.; Cupec, R.; Hocenski, Z. Crop row detection by global energy minimization. Pattern Recognit. 2016, 55, 68–86. [Google Scholar] [CrossRef]
  20. Delenne, C.; Durrieu, S.; Rabatel, G.; Deshayes, M. A Local Fourier Transform approach for vine plot extraction from aerial images. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 443–456. [Google Scholar]
  21. Delenne, C.; Durrieu, S.; Rabatel, G.; Deshayes, M.; Bailly, J.S.; Lelong, C.; Couteron, P. Textural approaches for vineyard detection and characterization using very high spatial resolution remote sensing data. Int. J. Remote Sens. 2008, 29, 1153–1167. [Google Scholar] [CrossRef]
  22. Mortensen, A.K.; Dyrmann, M.; Karstoft, H.; Jørgensen, R.N.; Gislum, R. Semantic Segmentation of Mixed Crops Using Deep Convolutional Neural Network. 2016. Available online: https://conferences.au.dk/uploads/tx_powermail/biomap_-_cigr_2016_-_paper_-_final.pdf (accessed on 8 March 2025).
  23. Bah, M.D.; Hafiane, A.; Canals, R. Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef]
  24. Osco, L.P.; De Arruda, M.d.S.; Junior, J.M.; Da Silva, N.B.; Ramos, A.P.M.; Moryia, E.A.S.; Imai, N.N.; Pereira, D.R.; Creste, J.E.; Matsubara, E.T.; et al. A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2020, 160, 97–106. [Google Scholar] [CrossRef]
  25. IEEE Standard for Robot Map Data Representations for Navigation. 2015. Available online: https://ieeexplore.ieee.org/document/7112058 (accessed on 8 March 2025).
  26. Kortenkamp, D.; Bonasso, R.P.; Murphy, R. Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  27. Borenstein, J.; Koren, Y. The vector field histogram-fast obstacle avoidance for mobile robots. IEEE Trans. Robot. Autom. 1991, 7, 278–288. [Google Scholar] [CrossRef]
  28. Dissanayake, M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef]
  29. Thrun, S. Learning metric-topological maps for indoor mobile robot navigation. Artif. Intell. 1998, 99, 21–71. [Google Scholar] [CrossRef]
  30. Yamauchi, B. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97—‘Towards New Computational Principles for Robotics and Automation’, Monterey, CA, USA, 10–11 July 1997; IEEE: Piscataway, NJ, USA, 1997; pp. 146–151. [Google Scholar]
  31. Moravec, H.; Elfes, A. High resolution maps from wide angle sonar. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; IEEE: Piscataway, NJ, USA, 1985; Volume 2, pp. 116–121. [Google Scholar]
  32. Li, Y.; Ruichek, Y. Occupancy grid mapping in urban environments from a moving on-board stereo-vision system. Sensors 2014, 14, 10454–10478. [Google Scholar] [CrossRef]
  33. Alsadik, B.; Karam, S. The simultaneous localization and mapping (SLAM)-An overview. J. Appl. Sci. Technol. Trends 2021, 2, 147–158. [Google Scholar] [CrossRef]
  34. Filliat, D.; Meyer, J.A. Map-based navigation in mobile robots: I. A review of localization strategies. Cogn. Syst. Res. 2003, 4, 243–282. [Google Scholar] [CrossRef]
  35. Cantelli, L.; Mangiameli, M.; Melita, C.D.; Muscato, G. UAV/UGV cooperation for surveying operations in humanitarian demining. In Proceedings of the 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linköping, Sweden, 21–26 October 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–6. [Google Scholar]
  36. Minaeian, S.; Liu, J.; Son, Y.J. Vision-based target detection and localization via a team of cooperative UAV and UGVs. IEEE Trans. Syst. Man, Cybern. Syst. 2015, 46, 1005–1016. [Google Scholar] [CrossRef]
  37. Mammarella, M.; Comba, L.; Biglia, A.; Dabbene, F.; Gay, P. Cooperation of unmanned systems for agricultural applications: A theoretical framework. Biosyst. Eng. 2022, 223, 61–80. [Google Scholar] [CrossRef]
  38. Huo, J.; Zenkevich, S.L.; Nazarova, A.V.; Zhai, M. Path planning based on map matching in UAV/UGV collaboration system. Int. J. Intell. Unmanned Syst. 2021, 9, 81–95. [Google Scholar] [CrossRef]
  39. Li, J.; Deng, G.; Luo, C.; Lin, Q.; Yan, Q.; Ming, Z. A hybrid path planning method in unmanned air/ground vehicle (UAV/UGV) cooperative systems. IEEE Trans. Veh. Technol. 2016, 65, 9585–9596. [Google Scholar] [CrossRef]
  40. Katikaridis, D.; Moysiadis, V.; Tsolakis, N.; Busato, P.; Kateris, D.; Pearson, S.; Sørensen, C.G.; Bochtis, D. UAV-supported route planning for UGVs in semi-deterministic agricultural environments. Agronomy 2022, 12, 1937. [Google Scholar] [CrossRef]
  41. Macenski, S.; Martín, F.; White, R.; Ginés Clavero, J. The Marathon 2: A Navigation System. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
  42. Macenski, S.; Moore, T.; Lu, D.; Merzlyakov, A.; Ferguson, M. From the desks of ROS maintainers: A survey of modern & capable mobile robotics algorithms in the robot operating system 2. Robot. Auton. Syst. 2023, 168, 104493. [Google Scholar]
  43. Iovino, M.; Scukins, E.; Styrud, J.; Ögren, P.; Smith, C. A survey of behavior trees in robotics and AI. Robot. Auton. Syst. 2022, 154, 104096. [Google Scholar] [CrossRef]
  44. Iovino, M.; Förster, J.; Falco, P.; Chung, J.J.; Siegwart, R.; Smith, C. On the programming effort required to generate Behavior Trees and Finite State Machines for robotic applications. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 5807–5813. [Google Scholar] [CrossRef]
  45. Mier, G.; Valente, J.; de Bruin, S. Fields2Cover: An Open-Source Coverage Path Planning Library for Unmanned Agricultural Vehicles. IEEE Robot. Autom. Lett. 2023, 8, 2166–2172. [Google Scholar] [CrossRef]
  46. Gadhwal, M.; Sharda, A.; Sangha, H.S.; Van der Merwe, D. Spatial corn canopy temperature extraction: How focal length and sUAS flying altitude influence thermal infrared sensing accuracy. Comput. Electron. Agric. 2023, 209, 107812. [Google Scholar] [CrossRef]
  47. Yousefi, J. Image Binarization Using Otsu Thresholding Algorithm; University of Guelph: Guelph, ON, Canada, 2011; Volume 10, p. 9. [Google Scholar]
  48. Shao, X.; Zheng, R.; Wei, J.; Guo, D.; Yang, T.; Wang, B.; Zhao, Y. Path planning of mobile Robot based on improved ant colony algorithm based on Honeycomb grid. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 5, pp. 1358–1362. [Google Scholar]
  49. Šeda, M. Roadmap methods vs. cell decomposition in robot motion planning. In Proceedings of the 6th WSEAS International Conference on Signal Processing, Robotics and Automation, Corfu Island, Greece, 16–19 February 2007; World Scientific and Engineering Academy and Society (WSEAS): Athens, Greece, 2007; pp. 127–132. [Google Scholar]
  50. Gomez, C.; Fehr, M.; Millane, A.; Hernandez, A.C.; Nieto, J.; Barber, R.; Siegwart, R. Hybrid topological and 3d dense mapping through autonomous exploration for large indoor environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 9673–9679. [Google Scholar]
  51. Liu, H.; Wang, D.; Wang, Y.; Lu, X. Research of path planning for mobile robots based on fuzzy artificial potential field method. Control Eng. China 2022, 29, 33–38. [Google Scholar]
  52. Palmieri, L.; Koenig, S.; Arras, K.O. RRT-based nonholonomic motion planning using any-angle path biasing. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2775–2781. [Google Scholar]
Figure 2. Steps in the proposed methodology.
Figure 3. Orthomosaic image of corn field and zoomed-in view of crop rows.
Figure 4. Orthomosaic image of soybean field and zoomed-in view of crop rows.
Figure 5. Zoomed-in section of a soybean field (left) and its corresponding binary images after thresholding operations: global thresholding (top right) and adaptive thresholding (bottom right).
Figure 6. Zoomed-in section of a corn field (left) and its corresponding binary images after thresholding operations: adaptive thresholding (top right) and global thresholding (bottom right).