Review

AI-Assisted Vision for Agricultural Robots

Laboratory of Agricultural Machinery, Department of Natural Resources Management and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos Str., 11855 Athens, Greece
* Author to whom correspondence should be addressed.
AgriEngineering 2022, 4(3), 674-694; https://doi.org/10.3390/agriengineering4030043
Submission received: 6 June 2022 / Revised: 22 July 2022 / Accepted: 22 July 2022 / Published: 1 August 2022
(This article belongs to the Special Issue Selected Papers from The Ag Robotic Forum—World FIRA 2021)

Abstract

Robotics has become increasingly relevant over the years. The ever-increasing demand for productivity, the reduction of tedious labor, and safety for the operator and the environment have brought robotics to the forefront of technological innovation. The same principle applies to agricultural robots, where such solutions can make farming easier for farmers, safer, and more profitable, while at the same time offering higher-quality products with minimal environmental impact. This paper reviews the existing state of the art for vision-based perception in agricultural robots across a variety of field operations, specifically: weed detection, crop scouting, phenotyping, disease detection, vision-based navigation, harvesting, and spraying. The review revealed a large interest in the uptake of vision-based solutions in agricultural robotics, with RGB cameras being the most popular sensor of choice. It also showed that AI can achieve promising results and that no single algorithm outperforms all others; instead, different artificial intelligence techniques offer their own advantages in addressing specific agronomic problems.

1. Introduction

Agricultural robots have received significant attention over the last decade and are considered by many as one of the most viable ways toward a more sustainable and more productive agricultural sector. However, agricultural robots are complex systems that consist of several parts: manipulators, grippers, wheels, and navigation and perception devices, to name a few. Moreover, agricultural robots need to be intelligent enough to perform complex tasks such as moving between rows, recognizing objects of interest, and avoiding obstacles in the field. It therefore becomes clear that software and hardware must be developed in parallel if the industry is to achieve and surpass the current standards and benchmarks set by humans.
Some of the most critical issues agricultural robots must face are related to the ability of the robot to “perceive” its surroundings (vision system) and the intelligence not only to understand it but also to control itself and the connected implements [1]. As a result, breakthroughs in vision systems and artificial intelligence (AI) to improve robot perception have occurred in recent years. Various sensing devices have been tested and are currently used for this specific task, ranging from bump sensors to soil sensors and from sonar systems to RGB cameras, each of which comes with its own benefits and limitations, thus making them suitable for specific agricultural tasks. The same applies to the multitude of AI algorithms deployed, ranging from highly complex and computationally intensive to less complex and faster to execute.
Several reviews have been presented focusing on agricultural robots, dividing them based on specific tasks, such as seeding, pruning, weeding, and harvesting [2,3]. Other reviews focus on specific tasks such as the control of agricultural robot tractors [4], autonomous navigation [5], and the key vision techniques for harvesting [6]. At the same time, there are also crop-focused reviews, such as cotton harvesting [7] and strawberry production [8].
This paper aims to study and examine vision-based robotic perception and the AI algorithms coupled with it. More precisely, the main objectives that led to the concept of the presented review were: (i) to identify the current vision sensors and AI algorithms developed and deployed for each field operation, (ii) to support a well-documented development of vision-based agricultural robotic systems, and (iii) to identify key challenges and future solutions for vision-based developments in agricultural robotics.
This literature review paper covers scientific agricultural robotic systems over a wide range of agricultural operations, mainly weeding, crop monitoring, phenotyping, disease detection, spraying, navigation, and harvesting. Commercial systems are not covered because technical details about sensors and algorithms are usually kept confidential. The vision-based sensors covered by the review range from RGB, IR, and spectral cameras to stereo vision devices, while the AI algorithms used to exploit those sensors for executing various field operations range from Support Vector Machines (SVM) and Convolutional Neural Networks (CNN) to traditional machine learning techniques such as linear regression.

2. Review Framework

Before conducting the review, several theoretical considerations were taken into account, leading to the methodological steps presented in Figure 1: (i) establishment of eligibility criteria, (ii) definition of the classification framework, (iii) literature research based on the previously established criteria, (iv) selection of the most relevant studies for each identified class, (v) analysis of the selected studies, and (vi) comparison of the findings.
Firstly, a set of eligibility criteria was defined: (i) the references should be related to and developed specifically for agricultural robotic systems, and (ii) the proposed solutions should have been deployed and tested in the field. On the other hand, there is no limitation on the environment the solutions are developed for, e.g., greenhouse or orchard. Next, the classification framework for the literature findings was selected according to the major agricultural robotic operations, as described in Section 3.
During the literature research, the eligibility criteria were refined to allow for the selection of the most relevant studies. These are: (1) the included studies should be research articles published in scientific journals or conference proceedings; and (2) they should have been published within the last 15 years (excluding the current year). This restriction was set because any technology developed earlier is most likely obsolete today. The selected studies were collected from open online sources (such as open-access journals, websites, and conference proceedings) and through a literature search in several electronic repositories, namely Web of Science, ScienceDirect, Scopus, and SpringerLink. The primary literature search was conducted at the beginning of 2022 using the following search expression: (Agriculture OR farming) AND (robot OR robotics) AND ((weeding OR weed detection OR weed identification) | (crop monitoring OR crop scouting) | phenotyping | disease detection | navigation | harvesting | spraying).
As can be observed, the first two parts of the expression focus the search on agriculture and robotics. The third part changes according to the specific field being researched. For instance, only “weeding OR weed detection OR weed identification” is used when the weeding problem is analyzed.
Any publication that appeared after the primary literature search may have been missed and therefore not included. Upon completion of the literature research, all selected studies were analyzed, which led to several findings, as described in Section 4 and Section 5.

3. Operational Classification Analysis

The selected literature on agricultural robotics was classified according to the major agricultural operations of which vision is an essential part, as follows: (1) weed detection, (2) crop scouting, (3) crop phenotyping, (4) disease/insect detection, (5) spraying, and (6) harvesting. The whole agricultural operation lifecycle can be observed in Figure 2. Additionally, vision-based navigation (7) was included, as it is a core task of every agricultural mobile robot and an alternative for GPS-denied agricultural environments during field operations. A summary of the most critical literature for each category is presented below.
It is important to note that every agricultural operation and related research may use a different metric to report results, depending on the criteria for what constitutes a successful application. For example, for weed identification or disease detection, discrete outputs lead to specific metrics, such as accuracy (AC), precision (PR), recall (RE), mean average precision (mAP), Mean Intersection-over-Union (MIoU), and F1-score (F1). At the same time, these metrics also point to specific computer vision problems; for instance, mAP is used in object detection while MIoU is used in instance segmentation. In other cases, continuous numerical outputs are necessary for operations such as phenotyping or harvesting; there, the coefficient of determination (R2), the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE), and the Coefficient of Variation (CV) are used. Finally, other papers report very specific application-oriented performance metrics, which are tricky to extrapolate to other related works, for instance, average harvesting time or pesticide spraying reduction. All these metrics can be checked in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. Unless a specific statistic is mentioned (e.g., maximum, minimum), the average value is reported.
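For reference, the metrics listed above follow their standard definitions, reproduced here for convenience and not taken from any single reviewed study:

```latex
% TP, FP, FN: true/false positives and false negatives; A, B: predicted and ground-truth regions;
% y_i: measured values, \hat{y}_i: predictions, \bar{y}: mean of the measurements.
\mathrm{PR} = \frac{TP}{TP + FP}, \qquad
\mathrm{RE} = \frac{TP}{TP + FN}, \qquad
\mathrm{F1} = \frac{2\,\mathrm{PR}\cdot\mathrm{RE}}{\mathrm{PR} + \mathrm{RE}}, \qquad
\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|}

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad
R^2 = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2}
```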
For the rest of the manuscript, “color cameras” is used as a synonym for RGB cameras, while “spectral cameras” refers to both multi-spectral and hyper-spectral cameras, i.e., cameras that cover a wider spectrum than visible light.

3.1. Weed Detection

Weeding is one of the most labor-intensive and tedious tasks in agriculture. It is performed either by spraying chemicals on the weeds and plants or by mechanically destroying them using blades, fire, etc. Uniform chemical applications were, and still are, the most popular option to fight weeds, as selective herbicides damage only the weeds and not the crop, even if the crop is covered by them. However, in recent years, due to continuous pressure to shift towards more sustainable agriculture, spot spraying, where only selected parts of the field receive chemicals, and mechanical weeding, where no chemicals are applied, have gained traction. Besides reducing agriculture’s environmental footprint, those techniques also lower pesticide costs by reducing the amounts of chemicals used. Until recently, spot spraying was not possible due to hardware and software restrictions, while mechanical weeding, although available since the early days of agriculture, was largely avoided due to the high labor costs and the physical strain of the task. The solution to making both mechanical weeding and spot spraying mainstream came through the robotization of both tasks. Although spot spraying applications are straightforward, mechanical weeding is more complex, as it comes in two types: inter-row weeding, where weeds are removed only between crop rows, and intra-row weeding, where the space between crops within the row is targeted. The latter is the more challenging, as it requires recognizing the crop in order not to damage it. As a result, herbicide-spraying robotic solutions such as the one proposed by Raja et al. [9] for lettuce, as well as mechanical weeding robots, have been developed for a variety of crops such as rice [10], lettuce [9], peach [11], maize [12], and tomato [13], with some of them already commercially available, e.g., Dino from Naio Technologies and the Farming GT weeding robot from Farming Revolution.
A vision system is required for the robotic system to recognize and distinguish crops from weeds. Therefore, most weeding robots are equipped with a vision sensor. The most common one is the RGB camera, with the majority being relatively low resolution, ranging from 640 × 480 [14] up to 1624 × 1230 [15]. The low resolution can be explained by the fact that, in most cases, the camera sensor is placed near the object of interest. Alternative vision sensors used for mechanical weeding are infrared cameras [16] and stereo cameras [17].
Besides the hardware component, software, and specifically AI, plays a crucial role, as robotic weeding systems need to recognize objects of interest prior to executing any task. As shown in Table 1, modern computer vision offers a multitude of different solutions to choose from. For the weed detection and identification tasks, several techniques have been used, from traditional machine vision techniques, such as the Otsu method [14] and using the green component of the RGB color space [18] for image segmentation, to more advanced algorithms such as Faster-RCNN [19], YOLOv3 [12], Mask R-CNN [15], and semantic graphics [10], all of which achieve accuracies higher than 70% and up to 99.75%. A detailed table of the algorithms used and their performance metrics can be found below (Table 1). Finally, another proposed approach is crop signaling, where dye is used to mark the plants, and the algorithms then detect the color of the dye instead of the plants and weeds [9,13,20].
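To make the classical end of this spectrum concrete, the following is a minimal sketch of green-component segmentation with Otsu thresholding of the kind cited above, assuming an OpenCV/NumPy environment; it illustrates the general principle rather than the exact pipeline of any reviewed study (the file name in the usage comment is hypothetical).

```python
import cv2
import numpy as np

def segment_vegetation(bgr_image: np.ndarray) -> np.ndarray:
    """Segment green vegetation with the excess-green index and Otsu's threshold.

    A classical baseline: ExG = 2g - r - b on chromatic (normalised) channels,
    then a global Otsu threshold separates plant pixels from soil/background.
    """
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    total = b + g + r + 1e-6                      # avoid division by zero
    exg = 2.0 * (g / total) - (r / total) - (b / total)

    # Rescale ExG to 8-bit so cv2.threshold can apply Otsu's method.
    exg_8u = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening removes small noise blobs (e.g., sensor noise).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Example usage (hypothetical file name):
# mask = segment_vegetation(cv2.imread("field_row.jpg"))
```

The resulting binary mask is what downstream logic (crop/weed discrimination or actuator triggering) typically consumes.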
Table 1. Performance of machine vision weed detection algorithms (PR: precision; RE: recall; MIoU: Mean Intersection over Union; AC: accuracy; mAP: mean average precision).
| Sensor | Crop | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|
| RGB | Peach | Faster-RCNN | PR: 86.41%; RE: 92.15%; MIoU: 85.45% | [11] |
| RGB | Rice | ESNet | PR: 86.18%; RE: 86.53%; MIoU: 51.78%; F1: 86.08% | [10] |
| RGB | Red radish, garden cress, and dandelion | Faster-RCNN | mAP: 67–95% (plants); mAP: 84–99% (weeds) | [19] |
| RGB | Maize | YOLOv3 | AC: 93.43 (maize); AC: 90.9 (weeds) | [12] |
| RGB | Soybean | Area feature; template matching; saturation threshold; voting algorithm | AC: 73.3%; AC: 68.42%; AC: 65.22%; AC: 81.82% | [21] |
| RGB | 32 kinds [22] | kNN; SVM; decision tree; random forest; CNN | AC: 84.4%, PR: 85.2%; AC: 77%, PR: 71%; AC: 78.8%, PR: 79.5%; AC: 90%, PR: 79.5%; AC: 99.5% | [23] |
| RGB | Maize, common bean | Mask RCNN | mAP: 0.49 | [15] |
| RGB | Cabbage | Haar cascade classifier | AC: 96.3% | [24] |
Summing up, weed detection is a task where vision-based perception and AI systems, in combination, perform more than adequately. This can be attributed to three factors: on most occasions the distance between the sensor and the object of interest is small; in many approaches everything that is not a crop is identified as a weed; and research on this topic is comprehensive, as weeding was the first agricultural task to gain traction regarding robotization. However, several challenges still lie ahead for creating systems that perform almost flawlessly. The most important weeding challenges that visual perception will need to overcome are green-on-green detection, i.e., detecting weeds among the crops, multiclass weed identification, i.e., accurately identifying the different types of weed, and finally matching the system’s operational frame rate with current conventional weeding tractor driving speeds.

3.2. Crop Scouting

Crop scouting is one of the essential tasks that growers perform, yet it is sometimes overlooked because, on many occasions, it accompanies another task such as spraying or weeding. Whenever growers are in the field, they constantly try to classify plants based on their growth rate, blossoms, number of fruits, etc. All of this is done to predict future crop needs and optimize costs. However, the task requires high levels of concentration, which the grower cannot offer while multitasking. Moreover, it is based on the grower’s experience and thus cannot be objective, nor can it be performed by inexperienced personnel. As a result, several robotic solutions have been introduced to automate such measurements while also “standardizing” them and making them more objective.
Those solutions depend heavily on vision as they try to introduce a second pair of eyes in the field, increasing the field data collected. The vision devices used vary based on the trait being investigated. Common devices used are color cameras [25], spectral cameras [26,27], various sensors measuring plant light reflectance such as IR radiometers and OptRX sensors [28,29], and stereo cameras [30]. Measurements coming from these sensors could be used in real-time as long as the sensors are factory calibrated. An example is vegetation index measurements [26,27]. On other occasions, AI algorithms have to be used to translate sensor data into meaningful and actionable information. Examples include background segmentation, object detection, and predictive algorithms (Table 2).
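As an illustration of the vegetation-index case, the sketch below computes NDVI from reflectance-calibrated red and near-infrared bands; the variable names and the plot mask in the usage comment are assumptions for the example, not details of any cited system.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Expects reflectance-calibrated band images of equal shape; output lies in
    [-1, 1], with healthy, dense vegetation typically well above 0.5.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)   # epsilon avoids division by zero

# A per-plot summary a scouting robot might log (hypothetical plot mask):
# mean_ndvi = float(ndvi(nir_band, red_band)[plot_mask].mean())
```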
Table 2. Crop scouting AI algorithm and performance per task (R2: coefficient of determination; AC: accuracy; RMSE: Root Mean Squared Error).
| Sensor | Crop | Task | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|---|
| Spectral | Soybeans | Image segmentation | Simple linear regression; NDVI-based segmentation | R2: 0.71 (daytime), 0.85 (nighttime); AC: 72.5% (daytime), 73.9% (nighttime) | [25] |
| Stereo camera | Chinese cabbage, potato, sesame, radish, and soybean | Crop height measurement | Coordinate transformations of pixels | R2: 0.78–0.84 | [26] |
| Stereo, thermal, spectral camera | Grape | Harvesting zone sorting | - | - | [27] |
| RGB | Greenhouse tomato | Fruit counting | Faster R-CNN (detection); centroid-based (counting) | AC: 88.6% (occluded objects included); AC: 90.2% (without occluded objects) | [28] |
| OptRx | Orchards and vineyards | Canopy thickness | Proprietary | R2: 0.78–0.80 | [29] |
| Spectral, thermal | Grape | Water status | PLS | R2: 0.57 (morning), 0.42 (midday); RMSE: 0.191 | [30] |
To conclude, vision systems have considerable potential for crop scouting, as most attributes related to plant health and status can be visually detected in the visible, NIR, and IR regions. So far, several systems have been proposed with promising results; however, due to the specialized sensors required and the necessary algorithm calibration, they have yet to achieve their full potential. Novel, lower-cost, task-specific vision sensors will be needed, especially as AI algorithms have already reached a level of maturity where they can handle increasingly large amounts of data in real time. The next step regarding AI will be to fuse data streams coming from different vision sensors.

3.3. Phenotyping

Plant phenotyping is not only a laborious task, but it also requires high levels of precision and concentration, since accurate and consistent measurements are of the utmost importance to ensure high-quality results. Plant phenotyping is crucial because it allows the plant research community to accurately measure a plethora of plant traits (height, biomass, tolerance, resistance, architecture, and leaf shape) in order to select and adapt crops to diverse environments, new policy limitations, and trends such as low-input agriculture and crop cultivation in resource-limited environments [31,32]. Most plant traits can be measured using vision devices; as a result, robotic phenotyping platforms carry a number of them to accurately measure more than one trait. Almost all phenotyping robots included in this study were equipped with RGB cameras, as most traits can be detected without specialized sensors. However, on some occasions, additional vision sensors are needed to facilitate measurements of specific traits, such as stereo cameras for canopy-related characteristics, e.g., stem diameter [33] and plant height [34]. Finally, as most sensors are not calibrated for specific measurements, AI plays a crucial role in extracting plant trait information from the bulk of collected data. Traditional machine learning techniques such as logistic regression are used for plant classification [35], and popular deep learning techniques such as CNNs [36] and Faster RCNN [33] are used to detect objects of interest (stalks, stems, and leaves). An overview of the sensors and machine learning techniques can be found below (Table 3).
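As a simple illustration of how depth sensing supports one such trait, the sketch below estimates plant height from a top-down depth image, assuming a camera at a known mounting height and a plant mask from a prior segmentation step; the reviewed systems use more elaborate 3D reconstruction, so this is only a conceptual baseline.

```python
import numpy as np

def plant_height_from_depth(depth_m: np.ndarray,
                            plant_mask: np.ndarray,
                            camera_height_m: float) -> float:
    """Estimate plant height from a nadir (top-down) depth image.

    depth_m: per-pixel camera-to-surface distance in metres.
    plant_mask: boolean mask of pixels classified as plant (e.g., from ExG).
    camera_height_m: known mounting height of the camera above the ground.

    Canopy height = camera height minus distance to the canopy top. The 5th
    percentile of depth is used instead of the minimum to reject spurious
    near-range outliers (dust, leaves crossing the lens).
    """
    canopy_depth = np.percentile(depth_m[plant_mask], 5)
    return float(camera_height_m - canopy_depth)

# e.g. height = plant_height_from_depth(depth, mask, camera_height_m=1.8)
```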
Table 3. Phenotyping machine vision algorithms and performance per task (R2: Coefficient of determination; AC: accuracy; RMSE: Root Mean Squared Error; mAP: mean average precision; CV: coefficient of variation).
| Sensor | Crop | Task | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|---|
| Trinocular stereo camera | Maize, sorghum | Plant height measurements | 3D reconstruction | R2: 0.99 | [31] |
| Stereo camera, ToF depth sensor, IR camera | Energy sorghum | Plant height, stem width measurements | - | Absolute measurement error: 15% (plant height), 13% (stem width) | [32] |
| RGB-D | Sorghum | Leaf area, length, and width measurements | Structure from Motion (3D reconstruction), SVM (pixel classification), MLP (phenotype classification) | Relative RMSE: 26.15% (leaf area), 26.67% (leaf length), 25.15% (leaf width) | [33] |
| Binocular RGB cameras | Sorghum | Plant height, plant width, convex hull volume, and surface area measurements | 3DMST; OpenCV’s Semi-Global Block Matching (SGBM) | Height: STD 0.05 m, CV 4.7% (3DMST); STD 0.04 m, CV 3.8% (SGBM). Width: STD 0.03 m, CV 8.6% (3DMST); STD 0.04 m, CV 9.8% (SGBM). Convex hull volume: STD 0.03 m³, CV 17.8% (3DMST); STD 0.03 m³, CV 10.7% (SGBM). Plant surface area: STD 0.08 m², CV 8.7% (3DMST); STD 0.11 m², CV 9.1% (SGBM) | [34] |
| RGB and IR | Wheat | - | - | - | [35] |
| RGB | Corn | Corn stand counting | Transfer learning, CNN with Softmax layer replaced by SVM | R2: 0.96 | [36] |
| RGB-D | Maize | Stem detection and diameter measurement | Faster RCNN, convex hull and plane projection | mAP: 67%; R2: 0.72; RMSE: 2.95 | [37] |
| RGB-D | Maize | Maize stalk detection | CNN | AC: 90% | [38] |
To sum up, phenotyping has the potential to be fully automated through machine vision as most plant traits can be measured just by using visual information. Calibrated plant phenotyping vision systems could offer an objective, reliable alternative to manual measurements in the near future. Already developed systems achieve excellent performance in assessing essential plant characteristics. The next step toward making such systems the norm is the standardization of measurement procedures and the calibration of each vision sensor and algorithm.

3.4. Disease/Insect/Deficiency Detection

Anomaly detection in plant production is one of the most challenging tasks agricultural robotics faces [39,40,41,42,43,44,45,46,47,48]. Pathogens that require entirely different treatments can cause similar symptoms; for example, yellow discoloration of the leaves can be caused by nutrient deficiencies, fungi, and insects. Moreover, on many occasions, symptoms are not clearly visible as they can be located on the lower side of leaves or even under the bark, thus making detection even harder. Finally, such systems need to be highly accurate and offer high sensitivity to avoid false-negative detections, as such a mistake could cause substantial financial damage to the grower, which could be irreversible during the growing season. For the reasons mentioned above, such robotic systems have failed to gain popularity amongst the research community as the stakes are too high. However, because of the importance of detecting such anomalies in time, the increasing cost of Plant Protection Products (PPP), the resistance of plant enemies to existing chemicals, and the ever-stricter regulations regarding PPP application, automation and robotization of anomaly detection are going to be one of the few ways to move forward.
Despite the difficulties mentioned above, several disease detection robots have been developed. The majority of them use color cameras, although not all symptoms can be detected in the visible spectrum; for this reason, some solutions add additional vision sensors such as spectral and thermal cameras as well as RGB-D sensors [39] to increase detection rates and accuracy. Moreover, most of the developed systems focus on diseases caused by fungi, bacteria, and viruses and fewer on insects, as in most insect infestations identifying the insect is also part of the diagnosis, and the robot’s movement can cause them to move and hide, making it increasingly difficult to capture images of them. Despite the limited research, existing solutions already make full use of the available AI algorithms, such as neural networks (AlexNet, SqueezeNet) [40], K-means, and SVM [41], achieving excellent performance metrics such as disease detection accuracies higher than 90% in cotton [42], higher than 98% in greenhouse tomato [43], and F1-scores higher than 97% for apples [39]. An overview of the performance of vision-based anomaly detection robotic systems can be found in Table 4.
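For illustration, the sketch below shows a traditional pipeline in the spirit of the K-means/SVM family mentioned above: hand-crafted color-histogram features fed to an RBF SVM for healthy-versus-diseased leaf classification. It is a generic baseline under assumed data (the training images and labels are hypothetical), not the pipeline of any specific cited study.

```python
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hsv_histogram(bgr_image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Flatten a coarse HSV colour histogram into a feature vector.

    Colour statistics capture the yellow/brown discolouration typical of many
    foliar symptoms, which is why simple SVM pipelines can already perform well.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return (hist / (hist.sum() + 1e-6)).flatten()   # normalise to a distribution

# RBF SVM on the histogram features; C is an illustrative value.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# Hypothetical data: lists of leaf images with labels 0 = healthy, 1 = diseased.
# clf.fit(np.stack([hsv_histogram(im) for im in train_images]), train_labels)
# accuracy = clf.score(np.stack([hsv_histogram(im) for im in test_images]), test_labels)
```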
Table 4. Performance of vision-based anomaly detection robotic systems (PR: precision; RE: recall; AC: accuracy; R2: coefficient of determination; F1: F1-score).
| Sensor | Crop | Task | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|---|
| RGB | Greenhouse tomato | Leaf mold; yellow leaf curl virus | SVM; RF; CNN | AC: 98.61%; AC: 80%; AC: 80% | [43] |
| RGB | Greenhouse tomato | Plant Village dataset diseases [44] | AlexNet; SqueezeNet | AC: 96%; AC: 94% | [40] |
| RGB | Cotton, groundnut | Bacterial blight/magnesium deficiency (cotton); leaf spot/anthracnose (groundnut) | NN | AC: 83–96% | [42] |
| RGB | Cotton | Healthy leaf, healthy cotton, healthy repining ball, diseased leaf, diseased damages cotton, diseased repining ball, and insect | - | AC: 90% | [48] |
| RGB | Coffee | Alternaria, bacterial blight, Myrothecium | K-means | AC: 79% | [45] |
| RGB, spectral, thermal | Olive tree | Xylella fastidiosa | - | R2 < 0.45 | [46] |
| RGB-D, spectral, thermal | Apple | Rust; scab | Mask R-CNN (segmentation); VGG16 (classification) | Healthy: PR 99.2%, RE 97.5%, F1 98.3%; rust: PR 96%, RE 98%, F1 97%; scab: PR 97.2%, RE 97.2%, F1 97.2% | [39] |
| RGB | - | Pyralidae insects | Otsu segmentation, Hu moments | - | [47] |
| RGB | Basil | Bacterial blight | K-means; SVM | - | [41] |
In conclusion, anomaly detection robots equipped with vision devices and algorithms have started to prove their feasibility and the benefits they can offer. The performance achieved is promising and can only be expected to improve as novel algorithms are developed. Early detection of such problems will also be considered in the future; however, it is a highly complicated task that requires an interdisciplinary approach and will most likely rely on non-visible symptoms, as once symptoms are visible, detection is considered too late. Moreover, so far, early disease detection solutions have not been tested in combination with an agricultural robotic platform. Finally, growers’ acceptance of such systems is expected to increase over the next few years, especially as stricter environmental and safety regulations come into place, further narrowing the growers’ range of available solutions.

3.5. Robotic Spraying

Performing spray applications is the standard practice in agriculture to tackle harvest losses. Weed control, pest control, and disease prevention are the most common spraying actions that a farmer must perform across a crop season to retain as much of the cultivated harvest as possible. This practice increases profit and product quality; therefore, it is considered the most essential procedure apart from plant watering. Conventional spraying methods have proven to be quite harmful to the environment since the application is uniform. Currently, the spraying rate is independent of disease or weed presence and plant growth stage; in essence, farmers spray as much as possible to ensure maximum coverage. This practice leads to excessive contamination of the soil and groundwater reserves, which in turn causes permanent damage to the local ecosystem and poses risks to the consumers who eventually purchase the product. Furthermore, the spraying procedure is quite labor-intensive and hazardous for the operators, who must wear protective equipment to prevent contamination.
Recent developments in the robotic sector have started to creep into the spraying aspects of agriculture slowly but steadily. Considering that spraying products are harmful to the environment and humans alike, policymakers continuously regulate them, with bans occurring annually. As a result, robotics offers a very appealing alternative to standard practices in spraying applications to reduce human exposure while making sure that chemicals are used properly and in the correct amounts. Multiple researchers have studied the ways that a robot can be utilized for spraying, both in open field agriculture as well as greenhouses.
The problems that most researchers are focusing on are spraying based on disease detection and on plant and weed presence [49,50,51,52,53,54,55]. Additionally, some researchers are trying to optimize the robot’s operation by calculating the optimal air assistance for maximum penetration based on the plant volume [50]. The results are more than promising: robotic spray regulation can be so efficient that, in one of the studies, chemical use was reduced by up to 95% under extremely favorable scenarios [51].
Most of the studies use standard RGB cameras, while a minority experiment with more specialized sensors such as multispectral cameras [52] and Kinect sensors [53]. The most common strategy for spraying guidance appears to be conventional image processing (such as image segmentation and color transformation), with other researchers focusing on more advanced tools ranging from data fusion algorithms, spectral indices, Hough transformations, Least Mean Squares (LMS), and Bezier curves to genetic algorithms. The results vary in terms of metrics, but the overall performance is promising for the uptake of such solutions in the future, as shown in Table 5.
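To illustrate how a segmentation result can drive a spray boom, the following hypothetical sketch splits a binary target mask into one strip per nozzle section and switches a section on only when enough target pixels fall inside it; n_nozzles and min_coverage are illustrative parameters, not values from the reviewed systems.

```python
import numpy as np

def nozzle_commands(target_mask: np.ndarray,
                    n_nozzles: int,
                    min_coverage: float = 0.01) -> list[bool]:
    """Map a binary target mask (weeds/foliage) to on/off commands for a spray boom.

    The image is split into n_nozzles vertical strips, one per nozzle section
    (assuming the boom spans the camera footprint); a section is switched on
    only if the fraction of target pixels in its strip exceeds min_coverage,
    which is how spot spraying cuts chemical use relative to uniform application.
    """
    strips = np.array_split(target_mask.astype(bool), n_nozzles, axis=1)
    return [strip.mean() > min_coverage for strip in strips]

# commands = nozzle_commands(weed_mask, n_nozzles=8)   # e.g. [False, True, ...]
```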
Table 5. Machine vision for robotic spraying.
| Sensor | Crop | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|
| RGB | Vineyards | Data fusion algorithm | Standard deviation of error: x = 0.34, y = 0.81, θ = 0.11, φ = 0.17 | [49] |
| Multispectral camera | Grapes (powdery mildew) | Spectral indices | 85–100% detection accuracy; 65–85% reduction in pesticide use | [52] |
| Kinect (RGB, IR) | Any | Hough transformations | 19% detection error | [53] |
| Stereo camera | Greenhouse tomato, vineyards | Sensor fusion | 0.23 m trajectory error | [54] |
| RGB camera | Cereals | Image processing, dedicated developed algorithm | 18–97% savings in herbicide | [51] |
| High-speed RGB camera | Orchards, vineyards | Image analysis | Airflow up to 10 m/s, distance 300 mm, 150 mm diameter | [50] |
| RGB | Any | Image analysis | Pesticide reduction: 45% | [55] |

3.6. Robotic Harvesting

One of the largest problems the agriculture industry has been experiencing in the last few years is the steady decline in workforce availability and constantly increasing workforce costs. The combination of those two is resulting in a reduction in production capacity and an increase in production costs. Agricultural robotic harvesting has received increased attention to tackle this phenomenon [56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72]. Bulk harvesters are already commercially available (bulk apple harvesters for cider production); however, they are not suitable for high-quality products as they damage the product while harvesting it. The next step towards robotic harvesting is machines that can understand the product maturity, thus allowing for selective harvesting while making sure that the product remains undamaged. As a result of the above, the main tasks of these autonomous machines are focused on the real-time detection, localization, and sometimes maturity estimation of the fruits to be harvested. On top of that, agricultural robotic platforms must be able to operate in multiple outdoor environments while also achieving robust detection under different types of crops whose products differ in color, size, and shape.
The most important unit of a robotic harvester is the vision system, as it provides critical information about the detected fruit as required by the harvesting components. As can be seen from Table 6, the majority (94%) of the methodologies are based on the color information derived from an RGB camera. For fruit detection, image processing based on color indexes is the most applied method in research papers [56,57], since fruits differ from the canopy in terms of color characteristics. Moreover, besides directly detecting the fruit, some researchers focused on peduncle [58] and stem [56] detection by setting a predefined ROI above the fruit. However, outdoor image acquisition can have a great impact on the quality of the images since the light is constantly changing, thus producing image sensor noise. To deal with this type of noise, researchers often use non-linear operations like gamma correction [59], or blurring and morphological operations, among others [64,66,70], but also transformations to alternative color spaces [60,61] to have better control of the lightness dimension. A common step after image segmentation of the fruits in the image plane is to fit the points that comprise a target fruit based on its size and shape [57]. Moreover, upon successful detection of the target, a sensor capable of providing 3D information, such as a stereo camera [56,57,60,62], is usually able to output the 3D location of the fruit. The resulting position in real 3D space is then sent to the harvesting component (end-effector).
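For clarity, the sketch below shows the standard pinhole back-projection used for that last step: a detected fruit centre (u, v) plus its depth is converted into 3D camera coordinates, which can then be transformed into the robot frame. The intrinsic parameters in the usage comment are hypothetical.

```python
import numpy as np

def pixel_to_camera_xyz(u: float, v: float, depth_m: float,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a detected fruit centre (u, v) with depth Z into 3D camera coordinates.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    fx, fy, cx, cy come from the camera's intrinsic calibration; the resulting
    point is what gets handed to the manipulator (after a further camera-to-robot
    extrinsic transform, omitted here).
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with hypothetical intrinsics:
# grasp_target = pixel_to_camera_xyz(642.0, 388.0, 0.73,
#                                    fx=910.0, fy=910.0, cx=640.0, cy=360.0)
```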
Table 6. Machine vision for harvesting in agricultural robotics (PR: precision; RE: recall; AC: accuracy; R2: coefficient of determination; F1: F1-score; mAP: mean average precision; MIoU: Mean Intersection over Union; TPR: true positive rate).
| Sensor | Crop | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|
| RGB camera | Apples | CNN with improved YOLOv5 | PR: 83.83%; RE: 91.48%; mAP: 86.75%; F1: 87.49% | [63] |
| RGB camera | Zanthoxylum pepper | CNN with improved YOLOv5 | mAP: 94.5%; detection speed (s/pic): 0.012; detection speed on edge device (s/pic): 0.072; GPU load on edge device: 20.11; detection FPS on edge device: 33.23 | [64] |
| RGB camera | Strawberries | Mask R-CNN and custom vision strawberry localization method to find the location of the strawberries | PR: 95.78%; RE: 95.41%; MIoU: 89.85%; average error of visual strawberry localization: ±1.2 mm | [65] |
| RGB-D camera | Sweet peppers | Color filter for identifying the pepper, semantic segmentation using deep learning, Canny edge and Hough transformation for stem detection | Under a single-row system assumption, harvest performance averaged over both crop varieties was 61% for the modified crop as opposed to 29% for the unmodified crop; the harvest success rate for the modified condition was 81% for the first pepper variety and 43% for the second; for the unmodified condition, results were closer, at 36% and 23%, respectively | [56] |
| RGB camera | Apples | Support vector machine with radial basis function | AC: 77%; average harvesting time: 15 s per apple | [66] |
| RGB and stereo camera | Strawberries | Color filtering to detect strawberry and set ROI for searching for the peduncle in 3D image | AC: 60%; fruits with 80% maturity or more have a successful harvesting rate of 41.3% with suction device and 34.9% without suction | [58] |
| Stereo camera | Tomatoes | Image normalization with difference of red and green graying, Otsu segmentation, ellipse fitting; localization of fruits using feature extraction and matching with Harris corners and camera intrinsics | AC: 99.3%; tomato position error less than 10 mm when distance is less than 600 mm, except some singular points; success rate of picking tomatoes at 86% and 88% in two rounds of tests | [62] |
| RGB | Tomatoes | HSI color filtering, image binarization, circular fitting | Success rates: 83.9% in first-round tests and 79.4% in second-round tests | [61] |
| RGB camera | Apples and oranges | Image pre- and post-processing and YOLOv3 for detection | MIoU: 89.7%; PR: 93.7%; RE: 93.3%; F1: 92.8% | [67] |
| Stereo camera | Tomatoes | Color extraction, Euclidean distance clustering, Z-sorting, sphere fitting using RANSAC | Successful harvesting rate: 60% | [68] |
| RGB camera | Green pepper | Local contrast enhancement, super-pixels extracted via energy-driven sampling (SEEDS), saliency map construction, morphological operations | AC: 83.6%; RE: 81.2% | [69] |
| RGB camera | Apples | Adaptive gamma correction, color filtering, total pixel area calculation | Average time reduction ratio: 70% | [70] |
| RGB camera | Broccoli | Low-pass filtering, E7*E7 Laws’ texture energy filter, median filtering, morphological operation | PR: 99.5%; AC: 92.4% | [59] |
| RGB and ToF camera | Aubergine | For detection of aubergine: color transformation, cubic SVM, watershed transformation, point cloud extraction, ellipse fitting; for occlusions: centroid calculation, calculation of vector direction | RE: 88.10%; PR: 88.35%; F1: 87.8% | [57] |
| RGB and NIR camera | Apples, avocado, capsicum, mango, rock melon, strawberry, orange, and sweet pepper | Faster R-CNN with late or early fusion | F1: 0.828–0.948 | [71] |
| RGB and stereo cameras | Strawberries | Color segmentation using HSI color space, thresholds in this color space to get the maturity level, region of interest (ROI) for searching for the peduncle in a predefined threshold area | Successful harvesting rate: 54.9% | [60] |
| RGB camera | Strawberries and oranges | Multi-task cascaded convolutional networks using 3 CNNs, augmentation fusion dataset | TPR: 0.98 with 0.9 threshold value | [72] |
Most modern approaches use deep learning methodologies [56,63,67] to detect and localize fruits so that they can be harvested. Such methodologies will gain more ground in the years to come as Edge AI and state-of-the-art embedded devices become more powerful, allowing for real-time detection without compromising the results achieved. However, despite advancements in deep learning, robotic harvesting seems to also rely on traditional computer vision and combinations of the two, with promising results in both cases.

3.7. Robotic Vision Navigation

During the last decade, attention in agricultural operations has shifted to Unmanned Ground Vehicles (UGVs) to address human labor shortages and improve crop production [73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91]. The most common solution for autonomous navigation in agriculture is based on RTK-GNSS implementations that guide the platform along pre-programmed path plans. However, the high cost of these sensors and their vulnerability to signal loss have led many scientists to utilize low-cost cameras as an alternative for performing navigation in fields by identifying natural landmarks. Visual navigation has been tested in several crops and environments, including orchards [79], cornfields [62], maize fields [80], wheat [81], tea crops [82], wolfberry orchards [83], sugar beets [84], potato fields [85], carrots [86], and soya beans [87] (see Table 7).
From the sensor perspective, visual navigation algorithms utilize front-, back-, or side-mounted RGB, NIR, multispectral, or depth cameras to extract, in real time, the crop row that the robotic platform will follow. The image information is then processed with traditional computer vision algorithms such as grayscaling, excess-green thresholding, and image binarization to identify and segment crops in the image, as well as morphological operations to remove sensor noise. After extracting the crops in the image, the most common solutions for extracting the navigation center line or baselines are the Hough transformation and the least-squares error method.
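A minimal sketch of that classical chain, assuming OpenCV and a forward-looking RGB frame, is given below; the thresholds and the number of returned lines are illustrative choices rather than values from any cited work.

```python
import cv2
import numpy as np

def crop_row_lines(bgr_image: np.ndarray, max_lines: int = 2) -> list[tuple[float, float]]:
    """Extract candidate crop-row guidance lines from a forward-looking RGB frame.

    Steps mirroring the classical pipeline described above: excess-green
    segmentation, Otsu binarisation, morphological cleaning, and a Hough
    transform on the mask edges; returns up to max_lines (rho, theta) pairs.
    """
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    exg = 2 * g - r - b
    exg_8u = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return []
    return [(float(l[0][0]), float(l[0][1])) for l in lines[:max_lines]]

# The (rho, theta) of the dominant row is then converted into a lateral offset
# and heading error for the motion controller to track.
```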
Table 7. Machine vision for navigation in agricultural robotics, grouped by the reported metrics: RMSE, angular and linear deviation, SE, AC, and others (MAE: mean absolute error; RMSE: Root Mean Squared Error; F1: F1-score; AC: accuracy; SE: Standard Error; DBMR: Detection Based on Micro-ROI; TMGEM: Template matching with Global Energy Minimization; DAGP: Detection for accumulation of green pixels).
| Sensor | Crop | Proposed AI Algorithm | Metrics | Ref. |
|---|---|---|---|---|
| Multispectral camera | Orchards | Green plane extraction, thresholding | Maximum deviation: 3.5 cm; RMSE: 2.13 cm | [82] |
| RGB camera | Wheat and sorghum | Kalman filter | RMSE: 28–120 mm | [76] |
| RGB camera | Orchards | Gabor filtering, PCA, K-means, silhouette method, medial axis, fuzzy logic | Maximum trajectory tracking: 14.6 ± 6.8 mm; RMSE: 45.3 mm | [73] |
| RGB camera | Maize | Background segmentation, binary image, line extraction | RMSE: 78.1 ± 7.5 mm | [77] |
| RGB camera | Crop-row fields | Deep CNN | RMSE: 5.8 degrees | [89] |
| RGB camera | Sweet, green and snow pea, Chinese lettuce, cabbage, green pepper, tomato, and tea | Vegetation index, K-means, pixel spatial operations, RANSAC | Max. RMSE of positioning (pixels): 83.9; max. RMSE of angle (degrees): 16.1 | [88] |
| Stereo camera | Soya bean fields | Sobel, sum of squared differences | RMSE of speed estimation: 0.13 m/s; yaw angle estimation: 0.44 degrees | [91] |
| RGB camera | Arable fields | Otsu’s method, SIFT | Average deviation: 3.82 cm (navigation accuracy); F1: 62.1% (lane-switching technique) | [74] |
| RGB camera | Crop-row fields | Excess Green Index, least-square fitting method | Deviation: 4 cm | [79] |
| RGB camera | Maize | Otsu’s method, Canny edge, Hough transformation, linear fitting | Angle difference ±5 cm between extracted and artificial navigation line | [80] |
| RGB camera | Wolfberry orchards | Gray scaling, maximum entropy threshold, morphological opening operation, rectangle fitting, least-square method, fuzzy control | Lateral deviation ≤ 6.2 cm; average lateral deviation: 2.9 cm | [83] |
| RGB camera | Crop-row fields | Vegetation index-based image segmentation, Otsu’s method, Particle Swarm Optimization, morphological operation, Floyd algorithm, linear least-square method | Maximum deviation detection accuracy: Θ(left): 0.49, Θ(middle): 0.4303, Θ(right): 4.2302, Θ(average): 1.5283 | [87] |
| RGB camera | Greenhouse cucumber | Gray scaling, image binarization, morphological operations, Hough transformation, least-square method | First experiment, maximum angle deviation: predicted-point Hough transform 0.48°, traditional Hough transform 9.51°, least-square method 15.21°; second experiment, maximum deviation angle: predicted-point Hough transform 0.46°, traditional Hough transform 1.46°, least-square method 5.28° | [84] |
| RGB camera | Paddy fields | SUSAN corners, Hough transform, fuzzy controller | With initial position and angle error of 0: SE 4.29 degrees and 44.68 mm; with initial position −20 mm and angle error −12 degrees: SE 8.61 degrees and 45.42 mm; with initial position error 80 mm and angle error 5 degrees: SE 8.85 degrees and 53.56 mm; with initial position error 40 mm and angle error 17 degrees: SE 8.60 degrees and 47.32 mm | [78] |
| RGB camera | Maize | Image segmentation, image denoising, position point extraction, straight-line fitting, navigation line extraction | AC: 92% | [75] |
| Time-of-Flight camera | Maize and sorghum | Bilateral filtering, RANSAC, Euclidean clustering | MAE: 3.4–3.6 cm; lateral positioning MAE: 4.2–5.0 cm | [64] |
| RGB camera | Sugar beet | Gray scaling, Hough transform | Mean error: 5–198 (±6–108) mm; median error: 22 mm | [85] |
| RGB camera | Tea crops | Semantic segmentation, Hough-line transform | Angle bias: 6.2 degrees and 13.9 pixels distance | [81] |
| NIR and depth camera | Carrots, onions, and radish | Image segmentation, RANSAC, particle filter | RANSAC: 94.40%; particle filter: 97.71% | [90] |
| RGB camera | Potato | Excess green, morphological operation, least-square error method | Mean detection rate (CRDA) of DBMR against TMGEM and DAGP: TMGEM 0.627, DAGP 0.860, DBMR 0.871 | [86] |
Once a navigation path has been obtained from the vision system, this information is sent to the motion control system. The motion control system is responsible for regulating the robot’s motors based on sensor feedback, generating the control signal needed to reach the reference state. Rather than using traditional PID controllers, which are used in industry to compute a correction based on proportional, integral, and derivative terms, most researchers use fuzzy-logic controllers to control the steering angles. Compared with proportional-integral (PI) and proportional-integral-derivative (PID) controllers, fuzzy controllers [83,90] can offer more stable operation and better system performance. The term “fuzzy” refers to the logical variables used to express the different states, making the problem more intuitive to human operators. To conclude, machine vision-based agricultural guidance systems combining state-of-the-art algorithms with motion control offer promising results.
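As a baseline for that control step, the sketch below implements a plain PID steering law driven by the lateral deviation extracted by the vision system; the gains are hypothetical, and, as noted above, many of the reviewed systems replace this with a fuzzy-logic controller operating on the same error signals.

```python
class PIDSteering:
    """Minimal PID steering controller for line following (illustrative baseline).

    error: lateral deviation from the extracted navigation line (metres);
    return value: steering angle command (radians). A fuzzy-logic alternative
    would map the same deviation/heading errors to steering actions through
    linguistic rules instead of fixed gains.
    """

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self._integral = 0.0
        self._prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        # Accumulate the integral term and approximate the derivative numerically.
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt if dt > 0 else 0.0
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# controller = PIDSteering(kp=1.2, ki=0.05, kd=0.3)   # gains are hypothetical
# steering = controller.update(error=lateral_deviation_m, dt=0.05)
```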

4. Discussion

The reviewed literature on vision-based perception in agriculture was classified based on the major field operations: weed detection, crop scouting, phenotyping, disease detection, vision-based navigation, harvesting, and spraying. A total of 101 references were included, retrieved from various journals such as “Computers and Electronics in Agriculture”, “Biosystems Engineering”, “AgriEngineering”, and “Sensors”, as well as from publications in conference proceedings.
In most of the reviewed references and for most reviewed operations, the standard sensor of choice was the RGB camera (see Figure 3). This is the first and most intuitive selection, as it is a sensor that can be easily integrated into any existing solution, receives broad support from the research and computer vision community, and is low cost compared to other, more advanced sensors. However, depending on the type and level of detail of the information required by each operation, other vision-based sensors may be used as well, including spectral and thermal cameras. Tasks such as plant phenotyping and spraying rely on stereo and RGB-D sensors as well as on spectral and thermal sensors, because information not visible to the human eye is required and depth information can, on occasion, be more important than color information. Moreover, the trade-off between the level of detail and the cost of sensors is also reflected in the choice of sensors. Operations that will mainly be executed by farmers and growers, who will have to spend capital to acquire the proposed solution, need to keep costs relatively low to allow for acceptable depreciation times. As a result, weeding, harvesting, and crop scouting solutions make use of cheaper sensors (RGB cameras) to keep equipment costs low, while for tasks mainly executed by R&D departments, such as plant phenotyping, more expensive sensors such as thermal and spectral cameras are used. Moreover, recent developments in computer vision and AI have allowed for the optimization of those sensors to artificially enhance sensor performance. As a result, a large amount of effort is spent on the choice and optimization of the AI component.
However, drawing conclusions by comparing AI algorithm performance among papers that use different datasets is a difficult and potentially unfair task. A very promising technique on a specific dataset could have the opposite results on a different but related dataset. The illumination conditions, occluded objects, and type of sensors could lead to different conclusions about the same algorithm. This issue relates to the well-known no-free-lunch theorem in optimization and machine learning. For that reason, the great variety of techniques proposed across the different research reviewed in this paper can be seen as positive. Three categories can be observed: (i) those that use histogram/vegetation indexes and thresholding [30,43,52,61,64,71]; (ii) those that use traditional machine learning algorithms such as SVMs or Random Forests, where the image features are extracted manually [26,39,47,49]; and (iii) those that rely on deep learning and CNN-based methods (e.g., AlexNet, SqueezeNet, and VGG-16) [46,53], where the features are extracted automatically. Although this last category represents the latest trend among AI methods, it has the drawback of requiring larger amounts of data to obtain good performance and avoid overfitting. In addition, works such as [42] contain hybrid solutions that combine traditional and deep learning; specifically, the fully-connected part of the CNN is replaced by an SVM classifier. It is noted that, for the techniques based on CNNs, the use of self-supervised learning is not reported [92]. These techniques allow the use of much less data, and a seminal paper has applied them to weed identification [93]. However, that paper did not report deployment of the computational pipeline on a robotic platform; therefore, this line of research should be further explored and integrated into operational deployments.
When object detection solutions are proposed, the balance between inference latency and high performance (usually mAP or MIoU) is something to take note of. Therefore, some works used YOLO-based detectors [12,55,80,83] in search of real-time performance, while others [11,19,25,33,87] preferred RCNN-based architectures, where the performance is theoretically higher. As explained previously, however, comparing claims across papers that Faster-RCNN performs better than YOLO is challenging, since other algorithmic factors can play a major role; for instance, the feature extractor used in Faster-RCNN (VGG16, ResNet, Inception, MobileNet), the matching strategy and its IoU threshold, non-max suppression and its IoU threshold, the data augmentation methods selected, and training configurations including batch size, input image resizing, learning rate, and learning rate decay.
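Since several of these factors revolve around IoU, the short helper below gives its standard computation for axis-aligned boxes; it is generic reference code, not taken from any reviewed implementation.

```python
import numpy as np

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """Intersection over Union of two boxes given as [x1, y1, x2, y2].

    IoU underlies both the matching strategy used during training and the
    non-max suppression threshold at inference, two of the configuration
    choices that make cross-paper detector comparisons difficult.
    """
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# iou(np.array([0, 0, 10, 10]), np.array([5, 5, 15, 15]))  # -> ~0.143
```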
On the positive side, all the above-mentioned experimentation has allowed AI solutions to achieve promising performance across all studied agricultural operations. An overview of the most popular algorithms can be seen in Figure 4. However, these results should be interpreted critically because, as mentioned above, each operation has its unique characteristics; even within the same operation, large performance variations can be observed due to the large differences in the shape, size, and color of the crop or weed of interest and the difficulty or simplicity they introduce.

5. Conclusions

In this review, the most recent research on agricultural robots, their sensors, and the computer vision algorithms used has been presented. Different fields of agriculture have been analyzed, specifically: weed detection, crop scouting, phenotyping, disease detection, navigation, harvesting, and robotic spraying. From the analysis, it can be concluded that, regarding the vision sensors used, researchers are still experimenting with a great variety of sensors, especially as the prices of the most complex and novel sensors drop over time. However, due to their simplicity and their favorable cost-performance trade-off, color cameras are the prevailing sensors across all operations. Regarding the artificial intelligence component coupled to those perception sensors, a great variety of algorithms and techniques were used, with promising results in most cases. This also means that work confined to a specific subfield of artificial intelligence may miss the advantages offered by complementary subfields. Given the many possibilities, AutoML [94] should be further studied and used to avoid exploring options that are not relevant for either research or engineering purposes; specifically, thresholds and hyper-parameters that are highly dataset-dependent should be found automatically with an AutoML approach. Regarding the next steps of research in agricultural robotics, two paths are identified. The first is closely linked with the accelerated commercialization of agricultural robots in recent years. This will provide large amounts of data regarding the real-world feasibility and performance of AI algorithms and vision sensors in a variety of environments and socio-economic conditions. A thorough examination of those data will be required to avoid repeating the same mistakes in the future, as well as to obtain feedback from actual end-users regarding performance and the trade-offs between speed and performance and between cost and performance. The second line of research will need to focus on the constant and rapid development of AI techniques such as transformers [95] and self-supervised learning. With the former, image-based solutions such as robots using RGB images as input could improve their performance, since transformers have demonstrated superiority to CNNs in many vision tasks. With the latter, the training phase of robotics algorithms could become more efficient, since this technique allows models to be trained without hand-crafted labels. Finally, to improve the current review methodology, the PRISMA flow approach [96] will be used to obtain a more systematic review of the current literature.

Author Contributions

Conceptualization, methodology, writing—review and editing, supervision, funding acquisition, S.F.; conceptualization, methodology, investigation, writing—original draft preparation, I.M.; investigation, writing—original draft preparation, L.A.; investigation, writing—original draft preparation, I.A.; investigation, writing—original draft preparation, writing—review and editing, B.E.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Commission H2020 “Robs4Crops” project grant agreement No. 101016807, https://robs4crops.eu (accessed on 21 July 2022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Santos, C.H.; Pekkeriet, E. Agricultural robotics for field operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef] [PubMed]
  2. Bechar, A.; Vigneault, C. Agricultural robots for field operations. Part 2: Operations and systems. Biosyst. Eng. 2017, 153, 110–128. [Google Scholar] [CrossRef]
  3. Reddy, N.V.; Reddy, A.; Pranavadithya, S.; Kumar, J.J. A critical review on agricultural robots. Int. J. Mech. Eng. Technol. 2016, 7, 183–188. [Google Scholar]
  4. Alberto-Rodriguez, A.; Neri-Muñoz, M.; Fernández, J.C.R.; Márquez-Vera, M.A.; Velasco, L.E.R.; Díaz-Parra, O.; Hernández-Huerta, E. Review of control on agricultural robot tractors. Int. J. Comb. Optim. Probl. Inform. 2020, 11, 9–20. [Google Scholar]
  5. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A review of autonomous navigation systems in agricultural environments. In Proceedings of the SEAg 2013: Innovative Agricultural Technologies for a Sustainable Future, Barton, Australia, 22–25 September 2013. [Google Scholar]
  6. Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. A review of key techniques of vision-based control for harvesting robot. Comput. Electron. Agric. 2016, 127, 311–323. [Google Scholar] [CrossRef]
  7. Fue, K.G.; Porter, W.M.; Barnes, E.M.; Rains, G.C. An extensive review of mobile agricultural robotics for field operations: Focus on cotton harvesting. AgriEngineering 2020, 2, 150–174. [Google Scholar] [CrossRef] [Green Version]
  8. Defterli, S.G. Review of robotic technology for strawberry production. Appl. Eng. Agric. 2016, 32, 301–318. [Google Scholar]
  9. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosyst. Eng. 2020, 192, 257–274. [Google Scholar] [CrossRef]
  10. Adhikari, S.P.; Yang, H.; Kim, H. Learning semantic graphics using convolutional encoder–decoder network for autonomous weeding in paddy. Front. Plant Sci. 2019, 10, 1404. [Google Scholar] [CrossRef] [PubMed]
  11. Luo, J.; You, Y.; Wang, D.; Sun, X.; Lv, J.; Ma, W.; Zhang, X. Peach tree detection for weeding robot based on Faster-RCNN. In Proceedings of the 2020 ASABE Annual International Virtual Meeting, Virtual, 13–15 July 2020; p. 1. [Google Scholar]
  12. Quan, L.; Jiang, W.; Li, H.; Li, H.; Wang, Q.; Chen, L. Intelligent intra-row robotic weeding system combining deep learning technology with a targeted weeding mode. Biosyst. Eng. 2022, 216, 13–31. [Google Scholar] [CrossRef]
  13. Raja, R.; Nguyen, T.T.; Vuong, V.L.; Slaughter, D.C.; Fennimore, S.A. RTD-SEPs: Real-time detection of stem emerging points and classification of crop-weed for robotic weed control in producing tomato. Biosyst. Eng. 2020, 195, 152–171. [Google Scholar] [CrossRef]
  14. Choi, K.H.; Han, S.K.; Han, S.H.; Park, K.-H.; Kim, K.-S.; Kim, S. Morphology-based guidance line extraction for an autonomous weeding robot in paddy fields. Comput. Electron. Agric. 2015, 113, 266–274. [Google Scholar] [CrossRef]
  15. Champ, J.; Mora-Fallas, A.; Goëau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8, e11373. [Google Scholar] [CrossRef]
  16. Machleb, J.; Peteinatos, G.G.; Sökefeld, M.; Gerhards, R. Sensor-based intrarow mechanical weed control in sugar beets with motorized finger weeders. Agronomy 2021, 11, 1517. [Google Scholar] [CrossRef]
  17. Igawa, H.; Tanaka, T.; Kaneko, S.; Tada, T.; Suzuki, S.; Ohmura, I. Base position detection of grape stem considering its displacement for weeding robot in vineyards. In Proceedings of the IECON 2012-38th Annual Conference on IEEE Industrial Electronics Society, Montreal, QC, Canada, 25–28 October 2012; pp. 2567–2572. [Google Scholar]
  18. Zhang, C.; Huang, X.; Liu, W.; Zhang, Y.; Li, N.; Zhang, J.; Li, W. Information acquisition method for mechanical intra-row weeding robot. Trans. Chin. Soc. Agric. Eng. 2012, 28, 142–146. [Google Scholar]
  19. Shah, T.M.; Nasika, D.P.B.; Otterpohl, R. Plant and weed identifier robot as an agroecological tool using artificial neural networks for image identification. Agriculture 2021, 11, 222. [Google Scholar] [CrossRef]
  20. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time robotic weed knife control system for tomato and lettuce based on geometric appearance of plant labels. Biosyst. Eng. 2020, 194, 152–164. [Google Scholar] [CrossRef]
  21. Miao, Z.; Yu, X.; Li, N.; He, C.; Sun, T. Weed Detection Based on the Fusion of Multiple Image Processing Algorithms. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 4217–4222. [Google Scholar]
  22. Wu, S.G.; Bao, F.S.; Xu, E.Y.; Wang, Y.-X.; Chang, Y.-F.; Xiang, Q.-L. A leaf recognition algorithm for plant classification using probabilistic neural network. In Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, Giza, Egypt, 15–18 December 2007; pp. 11–16. [Google Scholar]
  23. Sethia, G.; Guragol, H.K.S.; Sandhya, S.; Shruthi, J.; Rashmi, N. Automated Computer Vision based Weed Removal Bot. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–6. [Google Scholar]
  24. Vedula, R.; Nanda, A.; Gochhayat, S.S.; Hota, A.; Agarwal, R.; Reddy, S.K.; Mahapatra, S.; Swain, K.K.; Das, S. Computer vision assisted autonomous intra-row weeder. In Proceedings of the 2018 International Conference on Information Technology (ICIT), Bhubaneswar, India, 20–22 December 2018; pp. 79–84. [Google Scholar]
  25. Yamasaki, Y.; Morie, M.; Noguchi, N. Development of a high-accuracy autonomous sensing system for a field scouting robot. Comput. Electron. Agric. 2022, 193, 106630. [Google Scholar] [CrossRef]
  26. Kim, W.-S.; Lee, D.-H.; Kim, Y.-J.; Kim, T.; Lee, W.-S.; Choi, C.-H. Stereo-vision-based crop height estimation for agricultural robots. Comput. Electron. Agric. 2021, 181, 105937. [Google Scholar] [CrossRef]
  27. Rovira-Más, F.; Saiz-Rubio, V.; Cuenca-Cuenca, A. Sensing architecture for terrestrial crop monitoring: Harvesting data as an asset. Sensors 2021, 21, 3114. [Google Scholar] [CrossRef] [PubMed]
  28. Seo, D.; Cho, B.-H.; Kim, K. Development of Monitoring Robot System for Tomato Fruits in Hydroponic Greenhouses. Agronomy 2021, 11, 2211. [Google Scholar] [CrossRef]
  29. Vidoni, R.; Gallo, R.; Ristorto, G.; Carabin, G.; Mazzetto, F.; Scalera, L.; Gasparetto, A. ByeLab: An agricultural mobile robot prototype for proximal sensing and precision farming. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition 2017, Tampa, FL, USA, 3–9 November 2017; Volume 58370, p. V04AT05A057. [Google Scholar]
  30. Fernández-Novales, J.; Saiz-Rubio, V.; Barrio, I.; Rovira-Más, F.; Cuenca-Cuenca, A.; Santos Alves, F.; Valente, J.; Tardaguila, J.; Diago, M.P. Monitoring and mapping vineyard water status using non-invasive technologies by a ground robot. Remote Sens. 2021, 13, 2830. [Google Scholar] [CrossRef]
  31. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; DeSouza, G.N. Vinobot and vinoculer: Two robotic platforms for high-throughput field phenotyping. Sensors 2017, 17, 214. [Google Scholar] [CrossRef] [PubMed]
  32. Young, S.N.; Kayacan, E.; Peschel, J.M. Design and field evaluation of a ground robot for high-throughput phenotyping of energy sorghum. Precis. Agric. 2019, 20, 697–722. [Google Scholar]
  33. Vijayarangan, S.; Sodhi, P.; Kini, P.; Bourne, J.; Du, S.; Sun, H.; Poczos, B.; Apostolopoulos, D.; Wettergreen, D. High-throughput robotic phenotyping of energy sorghum crops. In Field and Service Robotics; Springer: Cham, Switzerland, 2018; pp. 99–113. [Google Scholar]
  34. Bao, Y.; Tang, L.; Breitzman, M.W.; Fernandez, M.G.S.; Schnable, P.S. Field-based robotic phenotyping of sorghum plant architecture using stereo vision. J. Field Robot. 2019, 36, 397–415. [Google Scholar] [CrossRef]
  35. Grimstad, L.; Skattum, K.; Solberg, E.; Loureiro, G.; From, P.J. Thorvald II configuration for wheat phenotyping. In Proceedings of the IROS Workshop on Agri-Food Robotics: Learning from Industry, Vancouver, BC, Canada, 28 September 2017; p. 4. [Google Scholar]
  36. Kayacan, E.; Zhang, Z.-Z.; Chowdhary, G. Embedded High Precision Control and Corn Stand Counting Algorithms for an Ultra-Compact 3D Printed Field Robot. In Proceedings of the Robotics: Science and Systems, Pittsburgh, PA, USA, 26–30 June 2018; Volume 14, p. 9. [Google Scholar]
  37. Fan, Z.; Sun, N.; Qiu, Q.; Li, T.; Feng, Q.; Zhao, C. In Situ Measuring Stem Diameters of Maize Crops with a High-Throughput Phenotyping Robot. Remote Sens. 2022, 14, 1030. [Google Scholar] [CrossRef]
  38. Manish, R.; Lin, Y.-C.; Ravi, R.; Hasheminasab, S.M.; Zhou, T.; Habib, A. Development of a Miniaturized Mobile Mapping System for In-Row, Under-Canopy Phenotyping. Remote Sens. 2021, 13, 276. [Google Scholar] [CrossRef]
  39. Karpyshev, P.; Ilin, V.; Kalinov, I.; Petrovsky, A.; Tsetserukou, D. Autonomous mobile robot for apple plant disease detection based on cnn and multi-spectral vision system. In Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII), Iwaki, Japan, 11–14 January 2021; pp. 157–162. [Google Scholar]
  40. Durmuş, H.; Güneş, E.O.; Kırcı, M. Disease detection on the leaves of the tomato plants by using deep learning. In Proceedings of the 2017 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7–10 August 2017; pp. 1–5. [Google Scholar]
  41. Nooraiyeen, A. Robotic Vehicle for Automated Detection of Leaf Diseases. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–6. [Google Scholar]
  42. Pilli, S.K.; Nallathambi, B.; George, S.J.; Diwanji, V. eAGROBOT—A robot for early crop disease detection using image processing. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; pp. 1684–1689. [Google Scholar]
  43. Fernando, S.; Nethmi, R.; Silva, A.; Perera, A.; de Silva, R.; Abeygunawardhana, P.K.W. Intelligent disease detection system for greenhouse with a robotic monitoring system. In Proceedings of the 2020 2nd International Conference on Advancements in Computing (ICAC), Colombo, Sri Lanka, 10–11 December 2020; Volume 1, pp. 204–209. [Google Scholar]
  44. Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060. [Google Scholar]
  45. Rahul, M.S.P.; Rajesh, M. Image processing based Automatic Plant Disease Detection and Stem Cutting Robot. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 889–894. [Google Scholar]
  46. Rey, B.; Aleixos, N.; Cubero, S.; Blasco, J. XF-ROVIM. A field robot to detect olive trees infected by Xylella fastidiosa using proximal sensing. Remote Sens. 2019, 11, 221. [Google Scholar] [CrossRef]
  47. Hu, Z.; Liu, B.; Zhao, Y. Agricultural robot for intelligent detection of pyralidae insects. In Agricultural Robots-Fundamentals and Applications; IntechOpen: London, UK, 2018. [Google Scholar]
  48. Doddamani, S.T.; Karadgi, S.; Giriyapur, A.C. Multi-Label Classification of Cotton Plant with Agriculture Mobile Robot. In Data Intelligence and Cognitive Informatics; Springer: Singapore, 2022; pp. 759–772. [Google Scholar]
  49. Zaidner, G.; Shapiro, A. A novel data fusion algorithm for low-cost localisation and navigation of autonomous vineyard sprayer robots. Biosyst. Eng. 2016, 146, 133–148. [Google Scholar] [CrossRef]
  50. Malneršič, A.; Dular, M.; Širok, B.; Oberti, R.; Hočevar, M. Close-range air-assisted precision spot-spraying for robotic applications: Aerodynamics and spray coverage analysis. Biosyst. Eng. 2016, 146, 216–226. [Google Scholar] [CrossRef]
  51. Berge, T.W.; Goldberg, S.; Kaspersen, K.; Netland, J. Towards machine vision based site-specific weed management in cereals. Comput. Electron. Agric. 2012, 81, 79–86. [Google Scholar] [CrossRef]
  52. Oberti, R.; Marchi, M.; Tirelli, P.; Calcante, A.; Iriti, M.; Tona, E.; Hočevar, M.; Baur, J.; Pfaff, J.; Schütz, C. Selective spraying of grapevines for disease control using a modular agricultural robot. Biosyst. Eng. 2016, 146, 203–215. [Google Scholar] [CrossRef]
  53. Hejazipoor, H.; Massah, J.; Soryani, M.; Vakilian, K.A.; Chegini, G. An intelligent spraying robot based on plant bulk volume. Comput. Electron. Agric. 2021, 180, 105859. [Google Scholar] [CrossRef]
  54. Cantelli, L.; Bonaccorso, F.; Longo, D.; Melita, C.D.; Schillaci, G.; Muscato, G. A small versatile electrical robot for autonomous spraying in agriculture. AgriEngineering 2019, 1, 391–402. [Google Scholar] [CrossRef]
  55. Berenstein, R.; Edan, Y. Automatic adjustable spraying device for site-specific agricultural application. IEEE Trans. Autom. Sci. Eng. 2017, 15, 641–650. [Google Scholar] [CrossRef]
  56. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  57. Sepúlveda, D.; Fernández, R.; Navas, E.; Armada, M.; González-De-Santos, P. Robotic aubergine harvesting using dual-arm manipulation. IEEE Access 2020, 8, 121889–121904. [Google Scholar] [CrossRef]
  58. Hayashi, S.; Shigematsu, K.; Yamamoto, S.; Kobayashi, K.; Kohno, Y.; Kamata, J.; Kurita, M. Evaluation of a strawberry-harvesting robot in a field test. Biosyst. Eng. 2010, 105, 160–171. [Google Scholar] [CrossRef]
  59. Blok, P.M.; Barth, R.; van den Berg, W. Machine vision for a selective broccoli harvesting robot. IFAC-PapersOnLine 2016, 49, 66–71. [Google Scholar] [CrossRef]
  60. Hayashi, S.; Yamamoto, S.; Saito, S.; Ochiai, Y.; Kamata, J.; Kurita, M.; Yamamoto, K. Field operation of a movable strawberry-harvesting robot using a travel platform. Jpn. Agric. Res. Q. JARQ 2014, 48, 307–316. [Google Scholar] [CrossRef]
  61. Wang, X.; Wu, P.; Feng, Q.; Wang, G. Design and test of tomatoes harvesting robot. J. Agric. Mech. Res. 2016, 4, 94–98. [Google Scholar]
  62. Lili, W.; Bo, Z.; Jinwei, F.; Xiaoan, H.; Shu, W.; Yashuo, L.; Zhou, Q.; Chongfeng, W. Development of a tomato harvesting robot used in greenhouse. Int. J. Agric. Biol. Eng. 2017, 10, 140–149. [Google Scholar] [CrossRef]
  63. Yan, B.; Fan, P.; Lei, X.; Liu, Z.; Yang, F. A real-time apple targets detection method for picking robot based on improved YOLOv5. Remote Sens. 2021, 13, 1619. [Google Scholar] [CrossRef]
  64. Gai, J.; Xiang, L.; Tang, L. Using a depth camera for crop row detection and mapping for under-canopy navigation of agricultural robotic vehicle. Comput. Electron. Agric. 2021, 188, 106301. [Google Scholar] [CrossRef]
  65. Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846. [Google Scholar] [CrossRef]
  66. De-An, Z.; Jidong, L.; Wei, J.; Ying, Z.; Yu, C. Design and control of an apple harvesting robot. Biosyst. Eng. 2011, 110, 112–122. [Google Scholar] [CrossRef]
  67. Kuznetsova, A.; Maleva, T.; Soloviev, V. Using YOLOv3 algorithm with pre-and post-processing for apple detection in fruit-harvesting robot. Agronomy 2020, 10, 1016. [Google Scholar] [CrossRef]
  68. Yaguchi, H.; Nagahama, K.; Hasegawa, T.; Inaba, M. Development of an autonomous tomato harvesting robot with rotational plucking gripper. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 652–657. [Google Scholar]
  69. Ji, W.; Gao, X.; Xu, B.; Chen, G.; Zhao, D. Target recognition method of green pepper harvesting robot based on manifold ranking. Comput. Electron. Agric. 2020, 177, 105663. [Google Scholar] [CrossRef]
  70. Lv, J.; Wang, Y.; Xu, L.; Gu, Y.; Zou, L.; Yang, B.; Ma, Z. A method to obtain the near-large fruit from apple image in orchard for single-arm apple harvesting robot. Sci. Hortic. 2019, 257, 108758. [Google Scholar] [CrossRef]
  71. Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. Deepfruits: A fruit detection system using deep neural networks. Sensors 2016, 16, 1222. [Google Scholar] [CrossRef]
  72. Zhang, L.; Gui, G.; Khattak, A.M.; Wang, M.; Gao, W.; Jia, J. Multi-task cascaded convolutional networks based intelligent fruit detection for designing automated robot. IEEE Access 2019, 7, 56028–56038. [Google Scholar] [CrossRef]
  73. Opiyo, S.; Okinda, C.; Zhou, J.; Mwangi, E.; Makange, N. Medial axis-based machine-vision system for orchard robot navigation. Comput. Electron. Agric. 2021, 185, 106153. [Google Scholar] [CrossRef]
  74. Ahmadi, A.; Halstead, M.; McCool, C. Towards autonomous crop-agnostic visual navigation in arable fields. arXiv 2021, arXiv:2109.11936. [Google Scholar]
  75. Yang, S.; Mei, S.; Zhang, Y. Detection of maize navigation centerline based on machine vision. IFAC-PapersOnLine 2018, 51, 570–575. [Google Scholar] [CrossRef]
  76. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1693–1698. [Google Scholar]
  77. Xue, J.; Zhang, L.; Grift, T.E. Variable field-of-view machine vision based row guidance of an agricultural robot. Comput. Electron. Agric. 2012, 84, 85–91. [Google Scholar] [CrossRef]
  78. Zhang, Q.; Chen, M.E.S.; Li, B. A visual navigation algorithm for paddy field weeding robot based on image understanding. Comput. Electron. Agric. 2017, 143, 66–78. [Google Scholar] [CrossRef]
  79. Ahmadi, A.; Nardi, L.; Chebrolu, N.; Stachniss, C. Visual servoing-based navigation for monitoring row-crop fields. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4920–4926. [Google Scholar]
  80. Gong, J.; Wang, X.; Zhang, Y.; Lan, Y.; Mostafa, K. Navigation line extraction based on root and stalk composite locating points. Comput. Electr. Eng. 2021, 92, 107115. [Google Scholar] [CrossRef]
  81. Lin, Y.-K.; Chen, S.-F. Development of navigation system for tea field machine using semantic segmentation. IFAC-PapersOnLine 2019, 52, 108–113. [Google Scholar] [CrossRef]
  82. Radcliffe, J.; Cox, J.; Bulanon, D.M. Machine vision for orchard navigation. Comput. Ind. 2018, 98, 165–171. [Google Scholar] [CrossRef]
  83. Ma, Y.; Zhang, W.; Qureshi, W.S.; Gao, C.; Zhang, C.; Li, W. Autonomous navigation for a wolfberry picking robot using visual cues and fuzzy control. Inf. Process. Agric. 2021, 8, 15–26. [Google Scholar] [CrossRef]
  84. Chen, J.; Qiang, H.; Wu, J.; Xu, G.; Wang, Z. Navigation path extraction for greenhouse cucumber-picking robots using the prediction-point Hough transform. Comput. Electron. Agric. 2021, 180, 105911. [Google Scholar] [CrossRef]
  85. Bakker, T.; Wouters, H.; Van Asselt, K.; Bontsema, J.; Tang, L.; Müller, J.; van Straten, G. A vision based row detection system for sugar beet. Comput. Electron. Agric. 2008, 60, 87–95. [Google Scholar] [CrossRef]
  86. García-Santillán, I.; Peluffo-Ordoñez, D.; Caranqui, V.; Pusdá, M.; Garrido, F.; Granda, P. Computer vision-based method for automatic detection of crop rows in potato fields. In Proceedings of the International Conference on Information Technology & Systems, Libertad City, Ecuador, 10–12 January 2018; pp. 355–366. [Google Scholar]
  87. Zhang, X.; Li, X.; Zhang, B.; Zhou, J.; Tian, G.; Xiong, Y.; Gu, B. Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method. Comput. Electron. Agric. 2018, 154, 165–175. [Google Scholar] [CrossRef]
  88. Morio, Y.; Teramoto, K.; Murakami, K. Vision-based furrow line detection for navigating intelligent worker assistance robot. Eng. Agric. Environ. Food 2017, 10, 87–103. [Google Scholar] [CrossRef]
  89. Bakken, M.; Moore, R.J.D.; From, P. End-to-end learning for autonomous crop row-following. IFAC-PapersOnLine 2019, 52, 102–107. [Google Scholar] [CrossRef]
  90. Halmetschlager, G.; Prankl, J.; Vincze, M. Probabilistic near infrared and depth based crop line identification. In Workshop Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13), Padova, Italy, 18 July 2014; pp. 474–482. [Google Scholar]
  91. Kise, M.; Zhang, Q. Development of a stereovision sensing system for 3D crop row structure mapping and tractor guidance. Biosyst. Eng. 2008, 101, 191–198. [Google Scholar] [CrossRef]
  92. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G.E. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the 37th International Conference on Machine Learning, Virtual Event, 13–18 July 2020. [Google Scholar]
  93. Güldenring, R.; Nalpantidis, L. Self-supervised contrastive learning on agricultural images. Comput. Electron. Agric. 2021, 191, 106510. [Google Scholar] [CrossRef]
  94. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.T.; Blum, M.; Hutter, F. Efficient and Robust Automated Machine Learning. In Proceedings of the NIPS 2015, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
  95. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929. [Google Scholar]
  96. Danzon-Chambaud, S. PRISMA Checklist for ‘A Systematic Review of Automated Journalism Scholarship: Guidelines and Suggestions for Future Research’. Zenodo, 16 January 2021. [Google Scholar] [CrossRef]
Figure 1. Review framework methodological steps.
Figure 2. Agricultural operations lifecycle.
Figure 3. The most widely selected sensors per operation as a percentage of the total studies reviewed. The order in which they are presented, top to bottom, is based on the percentage of total studies reviewed (highest to lowest).
Figure 4. The most popular computer vision algorithms amongst the studies reviewed.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
