Agriculture Robotics

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (31 August 2017)

Special Issue Editor

Biological Systems Engineering Department, Center for Precision & Automated Agricultural Systems, Washington State University, Pullman, WA, USA
Interests: machine vision; field robotics; human–machine interaction; agriculture system modeling

Special Issue Information

Dear Colleagues,

Agriculture is one of the oldest and most important industries human civilization has ever established. Fundamentally, agriculture relies on the efficient utilization of natural resources, such as land, water, nutrients, and other chemicals, to produce the basic necessities of human life, including food, fiber, feed, and fuel. As predicted by the United Nations, the world’s population will increase to approximately 10 billion by 2050. The continuously increasing pressure of feeding a rapidly growing population presents a huge challenge to the agricultural industry: how to sustainably produce enough agricultural supplies to meet such a huge demand.

Agricultural mechanization, the use of machinery to perform laborious operations, has helped improve agricultural productivity through more efficient use of labor, increased timeliness of operations, and more efficient input management. Continuing advancement in agricultural mechanization and automation technologies in recent decades has led agriculture into an era of robotic farming. Agricultural robots, in general, can be defined as a line of intelligent machinery that exhibits behaviors similar to those of a human operator, such as the capabilities of perception, reasoning, and manipulation in farming settings, to perform predetermined operations and tasks with or without human supervision. Such robotic technologies have the potential to further reduce the use of labor and increase the precision and efficiency of production inputs, thus contributing to increased agricultural productivity and the long-term sustainability of the industry.

Agricultural products can be broadly grouped into food, feed, and raw materials for various other products, all cultivated differently in different geographic regions around the world. This results in a wide variation in the mechanisms, technologies, and machines required to complete different agricultural operations or handle special agricultural challenges. For example, after decades of research and development, thousands of milking robots have been installed on dairy farms, tractors are auto-guided and auto-steered in performing different field operations, and drones are offering unique and novel applications in agriculture to improve productivity and reduce labor and input use worldwide. This is just the beginning of what is expected to be a revolution in the way the agricultural industry operates. The objective of this Special Issue is, therefore, to promote a deeper understanding of major conceptual and technical challenges and to facilitate the spread of recent breakthroughs in agricultural robotics. By achieving this objective, this Special Issue is expected to enable safe, efficient, and economical agricultural production, and to advance the state of the art in sensing, mobility, manipulation, and management technologies applied to the production of grain, fruit, vegetables, meat, milk, and other agricultural products.

Topics of interest include (but are not limited to):

  • Sensing technologies for situation awareness in agricultural applications
  • Control strategies for robot manipulation in agricultural applications
  • Automatic guidance of robotic vehicles at agricultural sites
  • UASs or drones in agriculture
  • Robotics for row crop production
  • Robotics for specialty crop production (including fruits and vegetables)
  • Robotics for greenhouse and vertical farming systems
  • Robots for animal production
  • Machine learning and artificial intelligence in agriculture

Prof. Dr. Qin Zhang
Prof. Dr. Manoj Karkee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine vision and other sensing systems
  • Modeling, simulation, and controls
  • Navigation and guidance
  • End-effectors and manipulators
  • Artificial intelligence
  • Soft computing and machine learning
  • Autonomous operations

Published Papers (6 papers)


Research

Article
Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
by Ahmad Ostovar, Ola Ringdahl and Thomas Hellström
Robotics 2018, 7(1), 11; https://doi.org/10.3390/robotics7010011 - 05 Feb 2018
Cited by 22
Abstract
The presented work is part of the H2020 project SWEEPER, whose overall goal is to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires fewer computational resources than exhaustive search, which is used as a benchmark, and results in higher performance compared to a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images compared to other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
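
As a rough illustration of the decaying epsilon-greedy strategy the abstract describes, the Python sketch below treats each candidate threshold as a bandit arm and uses the similarity between the segmented and labeled images as the reward. The helper names, the IoU reward, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union similarity between two binary masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

def decaying_epsilon_greedy(image, labeled_mask, thresholds,
                            epsilon=0.7, decay=0.99, iterations=200, seed=0):
    """Pick the threshold that maximizes segmentation similarity.
    Each threshold is a bandit arm; the reward for pulling an arm is the
    IoU between the thresholded image and the labeled ground truth."""
    rng = np.random.default_rng(seed)
    values = np.zeros(len(thresholds))   # running mean reward per arm
    counts = np.zeros(len(thresholds))
    for _ in range(iterations):
        if rng.random() < epsilon:                 # explore a random arm
            arm = int(rng.integers(len(thresholds)))
        else:                                      # exploit the best arm so far
            arm = int(np.argmax(values))
        reward = iou(image >= thresholds[arm], labeled_mask)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        epsilon *= decay                           # decay the exploration rate
    return thresholds[int(np.argmax(values))]
```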

Article
Automated Detection of Branch Shaking Locations for Robotic Cherry Harvesting Using Machine Vision
by Suraj Amatya, Manoj Karkee, Qin Zhang and Matthew D. Whiting
Robotics 2017, 6(4), 31; https://doi.org/10.3390/robotics6040031 - 28 Oct 2017
Cited by 26
Abstract
Automation in cherry harvesting is essential to reduce the demand for seasonal labor for cherry picking and to reduce the cost of production. The mechanical shaking of tree branches is one of the most widely studied and used techniques for harvesting small tree fruit crops like cherries. To automate the branch shaking operation, different methods of detecting branches and cherries in full-foliage canopies of the cherry tree have been developed previously. The next step in this process is the localization of shaking positions on the detected tree branches for mechanical shaking. In this study, a method of locating shaking positions for automated cherry harvesting was developed based on branch and cherry pixel locations determined using RGB images and 3D camera images. First, branch and cherry regions were located in 2D RGB images. Depth information provided by a 3D camera was then mapped onto the RGB images using a standard stereo calibration method. The overall root mean square error in estimating the distance to desired shaking points was 0.064 m. Cherry trees trained in two different canopy architectures, Y-trellis and vertical trellis systems, were used in this study. Harvesting testing was carried out by shaking tree branches at the locations selected by the algorithm. For the Y-trellis system, a maximum fruit removal efficiency of 92.9% was achieved using up to five shaking events per branch, while the maximum fruit removal efficiency for the vertical trellis system was 86.6% with up to four shakings per branch. However, it was found that only three shakings per branch would achieve fruit removal percentages of 92.3% and 86.4% in the Y-trellis and vertical trellis systems, respectively.
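
The core geometric step, mapping a branch pixel chosen in the RGB image to a 3D shaking point via registered depth, can be sketched with a standard pinhole back-projection. The function names and intrinsic parameters below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def shaking_point_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project pixel (u, v) of the RGB image into camera coordinates,
    assuming depth_map (in meters) has been registered to the RGB frame
    via stereo calibration, and fx, fy, cx, cy are the RGB intrinsics."""
    z = float(depth_map[v, u])
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def distance_rmse(estimated, measured):
    """Root mean square error between estimated and ground-truth distances
    to the shaking points (the paper reports 0.064 m overall)."""
    e = np.asarray(estimated, float) - np.asarray(measured, float)
    return float(np.sqrt(np.mean(e ** 2)))
```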

Article
The Thorvald II Agricultural Robotic System
by Lars Grimstad and Pål Johan From
Robotics 2017, 6(4), 24; https://doi.org/10.3390/robotics6040024 - 30 Sep 2017
Cited by 92
Abstract
This paper presents a novel and modular approach to agricultural robots. Food production is highly diverse in several respects. Even farms that grow the same crops may differ in topology, infrastructure, production method, and so on. Modular robots help us adapt to this diversity, as they can quickly be configured for various farm environments. The robots presented in this paper are hardware-modular in the sense that they can be reconfigured to obtain the physical properties necessary to operate in different production systems, such as tunnels, greenhouses, and open fields, and their mechanical properties can be adapted to adjust track width, power requirements, ground clearance, load capacity, and so on. The robot’s software generalizes to work with the great variety of robot designs that can be realized by assembling hardware modules in different configurations. The paper presents several novel ideas for agricultural robotics, as well as extensive field trials of several different versions of the Thorvald II platform.
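
The paper does not publish a configuration format, but the idea of software that generalizes over hardware-module assemblies can be sketched as code that derives robot properties from a declarative module description. Everything below, including the field names and numbers, is a hypothetical illustration of that design principle, not the Thorvald II software.

```python
from dataclasses import dataclass
import math

@dataclass
class DriveModule:
    """One wheel module and where it mounts on the frame."""
    mount_xy: tuple      # (x, y) mounting position on the frame, meters
    steerable: bool
    wheel_radius: float  # meters

@dataclass
class RobotConfig:
    """A robot is a set of modules plus frame-level dimensions; the software
    derives its kinematics from this description instead of hard-coding
    one vehicle geometry."""
    modules: list
    ground_clearance: float  # meters

def track_width(config):
    """Derived property: lateral distance between the outermost wheels."""
    ys = [m.mount_xy[1] for m in config.modules]
    return max(ys) - min(ys)

def top_speed(config, wheel_rpm_limit=60.0):
    """Derived property: speed at the motor rpm limit for the smallest wheel."""
    r_min = min(m.wheel_radius for m in config.modules)
    return wheel_rpm_limit / 60.0 * 2.0 * math.pi * r_min

# A hypothetical four-module, open-field configuration:
field_robot = RobotConfig(
    modules=[DriveModule((x, y), steerable=True, wheel_radius=0.2)
             for x in (-0.6, 0.6) for y in (-0.4, 0.4)],
    ground_clearance=0.4,
)
print(track_width(field_robot), top_speed(field_robot))
```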

Article
Automated Remote Insect Surveillance at a Global Scale and the Internet of Things
by Ilyas Potamitis, Panagiotis Eliopoulos and Iraklis Rigakis
Robotics 2017, 6(3), 19; https://doi.org/10.3390/robotics6030019 - 22 Aug 2017
Cited by 56
Abstract
The concept of remote insect surveillance at large spatial scales for many serious insect pests of agricultural and medical importance has been introduced in a series of our papers. We augment typical, low-cost plastic traps for many insect pests with the necessary optoelectronic sensors to guard the entrance of the trap and to detect, time-stamp, GPS-tag, and, in relevant cases, identify the species of the incoming insect from their wingbeat. For every important crop pest, there are monitoring protocols to be followed to decide when to initiate a treatment procedure before a serious infestation occurs. Monitoring protocols are mainly based on specifically designed insect traps. Traditional insect monitoring suffers in that its scope is curtailed by its cost: it requires intensive labor, is time consuming, and often needs an expert for sufficient accuracy, which can sometimes raise safety issues for humans. These disadvantages reduce the extent to which manual insect monitoring is applied and therefore its accuracy, which finally results in significant crop loss due to damage caused by pests. With the term ‘surveillance’ we intend to push the monitoring idea to unprecedented levels of information extraction regarding the presence, time-stamped detection events, species identity, and population density of targeted insect pests. Insect counts, as well as environmental parameters that correlate with insect population development, are wirelessly transmitted to a central monitoring agency in real time, where they are visualized and fed into statistical methods that assist pest control decisions. In this work, we emphasize how the traps can be self-organized into networks that collectively report data at local, regional, country, continental, and global scales using the emerging technology of the Internet of Things (IoT). This research is necessarily interdisciplinary, falling at the intersection of entomology, optoelectronic engineering, data science, and crop science, and it encompasses the design and implementation of low-cost, low-power technology to help reduce the extent of quantitative and qualitative crop losses caused by many of the most significant agricultural pests. We argue that smart traps communicating through the IoT to report the pest population level in real time, from the field straight to a human-controlled agency, can in the very near future have a profound impact on the decision-making process in crop protection and will be disruptive of existing manual practices. In the present study, three cases are investigated: monitoring Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) using (a) a Picusan trap and (b) a Lindgren trap; and (c) monitoring various stored-grain beetle pests using the stored-grain pitfall trap. Our approach is very accurate, reaching 98–99% agreement between automatic counts and the real numbers of insects detected in each type of trap.
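
As a sketch of the kind of record such a trap network would push upstream, the snippet below assembles a time-stamped, GPS-tagged detection event and hands it to a pluggable transport callback. The field names, topic scheme, and stand-in transport are assumptions for illustration, not the authors' protocol.

```python
import json
import time

def detection_event(trap_id, lat, lon, species, confidence):
    """One detection record: what was caught, where, when, and how sure."""
    return {
        "trap_id": trap_id,
        "timestamp": time.time(),   # detection time, Unix epoch
        "lat": lat, "lon": lon,     # GPS tag of the trap
        "species": species,         # e.g., classified from the wingbeat
        "confidence": confidence,
    }

def report(event, publish):
    """JSON-encode the event and hand it to whatever transport the
    deployment provides (an MQTT publish, a LoRaWAN uplink, ...)."""
    publish("traps/{}/detections".format(event["trap_id"]), json.dumps(event))

# Example with a stand-in transport that just prints the payload:
report(detection_event("trap-0042", 35.33, 25.13, "R. ferrugineus", 0.98),
       publish=lambda topic, payload: print(topic, payload))
```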

Article
Bin-Dog: A Robotic Platform for Bin Management in Orchards
by Yunxiang Ye, Zhaodong Wang, Dylan Jones, Long He, Matthew E. Taylor, Geoffrey A. Hollinger and Qin Zhang
Robotics 2017, 6(2), 12; https://doi.org/10.3390/robotics6020012 - 22 May 2017
Cited by 23
Abstract
Bin management during the apple harvest season is an important activity for orchards. Typically, empty and full bins are handled by tractor-mounted forklifts or bin trailers in two separate trips. In order to simplify this work process and improve the work efficiency of bin management, the concept of a robotic bin-dog system is proposed in this study. This system is designed with a “go-over-the-bin” feature, which allows it to drive over bins between tree rows and complete the above process in one trip. To validate this system concept, a prototype and its control and navigation system were designed and built. Field tests were conducted in a commercial orchard to validate its key functionalities in three tasks: headland turning, straight-line tracking between tree rows, and going over the bin. Tests of headland turning showed that the bin-dog followed a predefined path to align with an alleyway with lateral and orientation errors of 0.02 m and 1.5°. Tests of straight-line tracking showed that the bin-dog could successfully track the alleyway centerline at speeds up to 1.00 m·s⁻¹ with an RMSE offset of 0.07 m. The navigation system also successfully guided the bin-dog through the go-over-the-bin task at a speed of 0.60 m·s⁻¹. These successful validation tests proved that the prototype can achieve all of the desired functionality.
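
The straight-line tracking metric reported above, an RMSE offset from the alleyway centerline, can be computed from logged 2D positions as in the sketch below. The function names and the point-plus-direction line parameterization are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def cross_track_errors(positions, line_point, line_dir):
    """Signed lateral offsets of logged (x, y) positions from a centerline
    given by a point on the line and its direction vector."""
    d = np.asarray(line_dir, float)
    d /= np.linalg.norm(d)
    rel = np.asarray(positions, float) - np.asarray(line_point, float)
    # The 2D cross product gives the signed perpendicular distance.
    return rel[:, 0] * d[1] - rel[:, 1] * d[0]

def tracking_rmse(positions, line_point, line_dir):
    """RMSE of the lateral offset; the paper reports 0.07 m at 1.00 m/s."""
    e = cross_track_errors(positions, line_point, line_dir)
    return float(np.sqrt(np.mean(e ** 2)))

# Example: positions drifting around the x-axis as the centerline
print(tracking_rmse([(0, 0.05), (1, -0.03), (2, 0.08)], (0, 0), (1, 0)))
```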

Article
Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
by Alireza Pourreza, Won Suk Lee, Eva Czarnecka, Lance Verner and William Gurley
Robotics 2017, 6(2), 11; https://doi.org/10.3390/robotics6020011 - 23 Apr 2017
Cited by 3
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in the chloroplasts of HLB-infected leaves, which causes the mottled, blotchy-green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was used to detect the polarization rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of samples. Two datasets were acquired; the first was created one month after the exposure to psyllids and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infection could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging increased the development of symptoms and, accordingly, improved the detection accuracy.
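
The physical cue, starch rotating the polarization plane so that infected tissue transmits more light through a crossed polarizer, suggests a simple per-leaf score like the one sketched below. The score definition and level cutoffs are hypothetical and would in practice be calibrated against qPCR-confirmed samples; this is not the paper's classifier.

```python
import numpy as np

def starch_score(img_parallel, img_crossed):
    """Normalized mean brightness gain in the crossed-polarizer image
    relative to the parallel one; more starch -> more rotation -> more
    light leaking through the crossed polarizer."""
    p = np.asarray(img_parallel, float)
    c = np.asarray(img_crossed, float)
    return float((c.mean() - p.mean()) / (p.mean() + 1e-9))

def infection_level(score, cutoffs=(0.05, 0.15, 0.30)):
    """Bin the score into one of four levels (0 = no sign of infection);
    these cutoffs are placeholders, not fitted values."""
    return int(np.searchsorted(cutoffs, score))
```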
