
Special Issue "Sensing Technologies for Agricultural Automation and Robotics"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editor

Prof. Dr. Manoj Karkee
Guest Editor
Department of Biosystems Engineering, Center for Precision and Automated Agricultural Systems, Washington State University, 24106 N, Bunn Road, Prosser, WA 99350, USA
Interests: machine vision and AI; field robotics; human–machine collaboration; sensing and control; agricultural system modeling and simulation

Special Issue Information

Dear Colleagues,

The world is facing increasingly critical challenges in producing sufficient, quality food, feed, fiber, and fuel with depleting farming resources such as water, chemicals, and labor. To address these challenges, scientists and engineers around the world have, in recent years, been exploring opportunities to utilize new innovations in artificial intelligence, the Internet of Things, Big Data analytics, and robotics in farming. Widespread research and development in this area are expected to lead the farming industry towards Ag 4.0, supported by smart, automated/autonomous machines and agricultural systems that will increase the consistency and reliability of farming decisions and operations, reduce inputs (including labor), and optimize crop yield and quality. Novel sensing and machine vision systems, sensor and data fusion techniques, and efficient data analytics are crucial to support all aspects of automated/robotic and smart farming operations including, but not limited to: i) perceiving and understanding the farming environment; ii) understanding the stresses/status and needs of crops; iii) assessing crop growth and maturity; iv) localizing various objects of interest and obstacles in the field environment for automated/autonomous operations; v) guiding robotic machines through fields and in performing specific tasks; vi) collaborating with humans and other robotic machines; and vii) providing operation, supervision, diagnostics, and maintenance capabilities to farmers/operators.

In this context, there has been rapid advancement in sensing technologies and data analytics techniques in recent years, leading to more cost-effective, powerful, lightweight, and reliable sensing systems for agricultural applications around the world, including small UAV-based sensing systems. Some of the areas of innovation and advancement include multi- and hyper-spectral imaging, thermal imaging, and color and 3D imaging (including RGB-D sensing). Novel techniques are being investigated to expand the sensing systems available for agricultural automation and robotics from vision to hearing, touch/feel, taste, and smell. Powerful computational infrastructure and associated data analytics techniques, including deep learning, have also played an instrumental role in improving the robustness and reliability of sensing technologies and widening their practical applications in all aspects of production agriculture. The objective of this Special Issue is, therefore, to promote a deeper understanding of major conceptual and technical challenges and facilitate the spread of recent breakthroughs in sensing technologies for smart farming, in general, and agricultural automation and robotics, in particular. This Special Issue is expected to help realize safe, efficient, and economical agricultural production, and to advance the state of the art in sensing, machine vision, sensor and data fusion, and data analytics techniques as applied to agricultural automation and robotics.

Topics of interest include (but are not limited to):

  • Novel sensing techniques for taste, smell (electronic noses), touch/feel, and hearing
  • Sensor fusion techniques
  • UAS-based sensing and crop monitoring
  • Crop scouting with ground-vehicle-based sensing
  • Sensing technologies for situation awareness in agricultural applications
  • Sensors and systems for crop phenotyping
  • Sensing and machine vision systems for crop monitoring
  • Sensing and machine vision systems for automation and robotics in agriculture
  • Sensing systems for guidance in agricultural fields
  • Sensor applications in swarm robotics
  • Sensing for management and maintenance of agricultural robots
  • Sensing and machine vision for remote supervision and operation of machines
  • Sensing and machine vision for automating greenhouses, plant factories, and vertical farms
  • Sensing and machine vision in animal production
  • Machine learning and artificial intelligence in sensing and data analytics
  • IoT, Big Data, and data analytics for smart agriculture
  • Sensing and data analytics for post-harvest monitoring
  • Sensing and data analytics for crop quality assessment

Prof. Dr. Manoj Karkee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-spectral and hyperspectral sensing
  • Machine vision
  • Image processing
  • Navigation and guidance
  • Artificial intelligence
  • Soft computing and machine learning
  • Deep learning
  • Autonomous operations
  • Automation and robotics
  • Situation awareness
  • Operation supervision
  • Internet of things
  • Big Data analytics
  • Virtual reality and augmented reality
  • 3D perception
  • Remote monitoring
  • Human–machine collaboration

Published Papers (20 papers)


Research


Article
Sensing Architecture for Terrestrial Crop Monitoring: Harvesting Data as an Asset
Sensors 2021, 21(9), 3114; https://doi.org/10.3390/s21093114 - 30 Apr 2021
Viewed by 334
Abstract
Very often, the root of the problems encountered in producing food sustainably, as well as the origin of many environmental issues, derives from making decisions with unreliable or nonexistent data. Data-driven agriculture has emerged as a way to palliate the lack of meaningful information when taking critical steps in the field. However, many decisive parameters still require manual measurements and proximity to the target, which results in the typical undersampling that impedes statistical significance and the application of AI techniques that rely on massive data. To invert this trend, and simultaneously combine crop proximity with massive sampling, a sensing architecture for automating crop scouting from ground vehicles is proposed. At present, there are no clear guidelines on how monitoring vehicles must be configured for optimally tracking crop parameters at high resolution. This paper structures the architecture for such vehicles in four subsystems, examines the most common components for each subsystem, and delves into their interactions for an efficient delivery of high-density field data from initial acquisition to final recommendation. Its main advantages rest on the real-time generation of crop maps that blend the global positioning of canopy locations, some of their agronomical traits, and the precise monitoring of the ambient conditions surrounding such canopies. As a use case, the envisioned architecture was embodied in an autonomous robot to automatically sort two harvesting zones of a commercial vineyard to produce two wines of dissimilar characteristics. The information contained in the maps delivered by the robot may help growers systematically apply differential harvesting, evidencing the suitability of the proposed architecture for massive monitoring and subsequent data-driven actuation.
While many crop parameters still cannot be measured non-invasively, the availability of novel sensors is continually growing; to benefit from them, an efficient and trustable sensing architecture becomes indispensable. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
An Evaluation of Different NIR-Spectral Pre-Treatments to Derive the Soil Parameters C and N of a Humus-Clay-Rich Soil
Sensors 2021, 21(4), 1423; https://doi.org/10.3390/s21041423 - 18 Feb 2021
Viewed by 467
Abstract
Near-infrared reflectance spectroscopy (NIRS) was successfully used in this study to measure soil properties, mainly C and N, which required spectral pre-treatments. Calculations in this evaluation were carried out using multivariate statistical procedures with preceding pre-treatment procedures of the spectral data. Such transformations could remove noise, highlight features, and extract essential wavelengths for quantitative predictions; this frequently improved the predictions significantly. Since selecting the appropriate transformation was not straightforward due to the large number of available methods, more comprehensive insight into choosing appropriate and optimized pre-treatments was required. Therefore, the objectives of this study were (i) to compare various pre-processing transformations of spectral data to determine their suitability for modeling soil C and N using NIR spectra (55 pre-treatment procedures were tested), and (ii) to determine which wavelengths were most important for the prediction of C and N. The investigations were carried out on an arable field in South Germany with a soil type of Calcaric Fluvic Relictigleyic Phaeozem (Epigeoabruptic and Pantoclayic), created in the flooding area of the Isar River. The best fit and highest model accuracy for the C (Ct, Corg, and Ccarb) and N models in the calibration and validation modes were achieved using derivatives with Savitzky–Golay (SG) smoothing. This enabled us to calculate Ct, Corg, and N with an R2 higher than 0.98/0.86 and a ratio of performance to interquartile range (RPIQ) higher than 10.9/4.1 (calibration/validation). Full article
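A Savitzky–Golay derivative pre-treatment of the kind the authors found best can be sketched with SciPy; the synthetic spectrum, window length, and polynomial order below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
wavelengths = np.linspace(1000, 2500, 500)            # nm, NIR range
spectrum = np.exp(-((wavelengths - 1900) / 150) ** 2)  # synthetic absorption band
spectrum = spectrum + 0.002 * rng.standard_normal(wavelengths.size)

# First-derivative Savitzky-Golay transform: smooths the spectrum and
# removes additive baseline offsets in a single step.
d1 = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)

print(d1.shape)  # transformed spectrum has the same length as the input
```

In a full pipeline, such transformed spectra would then feed a multivariate calibration (e.g., partial least squares) against laboratory-measured C and N values.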
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Optimization of 3D Point Clouds of Oilseed Rape Plants Based on Time-of-Flight Cameras
Sensors 2021, 21(2), 664; https://doi.org/10.3390/s21020664 - 19 Jan 2021
Viewed by 659
Abstract
Three-dimensional (3D) structure is an important morphological trait of plants for describing their growth and biotic/abiotic stress responses. Various methods have been developed for obtaining 3D plant data, but the data quality and equipment costs are the main factors limiting their development. Here, we propose a method to improve the quality of 3D plant data using the time-of-flight (TOF) camera Kinect V2. A K-dimension (k-d) tree was applied to spatial topological relationships for searching points. Background noise points were then removed with a minimum oriented bounding box (MOBB) with a pass-through filter, while outliers and flying pixel points were removed based on viewpoints and surface normals. After being smoothed with the bilateral filter, the 3D plant data were registered and meshed. We adjusted the mesh patches to eliminate layered points. The results showed that the patches were closer. The average distance between the patches was 1.88 × 10−3 m, and the average angle was 17.64°, which were 54.97% and 48.33% of those values before optimization. The proposed method performed better in reducing noise and the local layered-points phenomenon, and it could help to more accurately determine 3D structure parameters from point clouds and mesh models. Full article
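The paper's pipeline combines an MOBB pass-through filter, viewpoint- and normal-based filtering, and bilateral smoothing. As a simpler illustration of the k-d-tree neighbor search it builds on, here is a generic statistical outlier removal sketch on synthetic data; the neighbor count and threshold are illustrative assumptions, not the authors' parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
plant = rng.normal(0.0, 0.05, size=(2000, 3))     # dense canopy points
outliers = rng.uniform(-1.0, 1.0, size=(20, 3))   # sparse "flying pixel" noise
cloud = np.vstack([plant, outliers])

# Statistical outlier removal: a point whose mean distance to its k
# nearest neighbours is far above the global average is treated as noise.
tree = cKDTree(cloud)
dists, _ = tree.query(cloud, k=9)        # 8 neighbours + the point itself
mean_d = dists[:, 1:].mean(axis=1)
keep = mean_d < mean_d.mean() + 2.0 * mean_d.std()
filtered = cloud[keep]

print(len(cloud), len(filtered))  # isolated points are discarded
```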
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Weed and Corn Seedling Detection in Field Based on Multi Feature Fusion and Support Vector Machine
Sensors 2021, 21(1), 212; https://doi.org/10.3390/s21010212 - 31 Dec 2020
Cited by 5 | Viewed by 767
Abstract
Detection of weeds and crops is the key step for precision spraying using a spraying herbicide robot and for precise fertilization using an agricultural machine in the field. On the basis of k-means clustering image segmentation using color information and connected-region analysis, a method combining multi-feature fusion and a support vector machine (SVM) was proposed to identify and detect the positions of corn seedlings and weeds, to reduce the harm of weeds on corn growth, and to achieve accurate fertilization, thereby realizing precise weeding or fertilizing. First, the image dataset for weed and corn seedling classification in the corn seedling stage was established. Second, many different features of corn seedlings and weeds were extracted, and dimensionality was reduced by principal component analysis; the features included the histogram of oriented gradients feature, rotation-invariant local binary pattern (LBP) feature, Hu invariant moment feature, Gabor feature, gray-level co-occurrence matrix, and gray level-gradient co-occurrence matrix. Then, classifier training based on the SVM was conducted to obtain the recognition model for corn seedlings and weeds. The comprehensive recognition performance of single features or of different fusion strategies for the six features was compared and analyzed, and the optimal feature fusion strategy was obtained. Finally, the proposed weed and corn seedling detection method was tested on actual corn seedling field images. The LAB color space and k-means clustering were used to achieve image segmentation. Connected-component analysis was adopted to remove small objects. The previously trained recognition model was utilized to identify and label each connected region in order to identify and detect weeds and corn seedlings.
The experimental results showed that the fusion feature combination of rotation invariant LBP feature and gray level-gradient co-occurrence matrix based on SVM classifier obtained the highest classification accuracy and accurately detected all kinds of weeds and corn seedlings. It provided information on weed and crop positions to the spraying herbicide robot for accurate spraying or to the precise fertilization machine for accurate fertilizing. Full article
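The fuse–reduce–classify pipeline described above can be illustrated with a minimal scikit-learn sketch; the feature vectors here are synthetic stand-ins for concatenated LBP and co-occurrence-matrix descriptors, and the dimensions and kernel are assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Stand-in fused feature vectors; class 0 = corn seedling, class 1 = weed.
X_corn = rng.normal(0.0, 1.0, size=(100, 64))
X_weed = rng.normal(1.0, 1.0, size=(100, 64))
X = np.vstack([X_corn, X_weed])
y = np.array([0] * 100 + [1] * 100)

# Fuse features -> reduce dimensionality with PCA -> classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

In the field pipeline, each connected region produced by segmentation would be described by such a fused vector and passed through the trained classifier.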
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Robust Species Distribution Mapping of Crop Mixtures Using Color Images and Convolutional Neural Networks
Sensors 2021, 21(1), 175; https://doi.org/10.3390/s21010175 - 29 Dec 2020
Cited by 1 | Viewed by 844
Abstract
Crop mixtures are often beneficial in crop rotations to enhance resource utilization and yield stability. While targeted management, dependent on the local species composition, has the potential to increase the crop value, it comes at a higher expense in terms of field surveys. As fine-grained species distribution mapping of within-field variation is typically unfeasible, the potential of targeted management remains an open research area. In this work, we propose a new method for determining the biomass species composition from high resolution color images using a DeepLabv3+ based convolutional neural network. Data collection has been performed at four separate experimental plot trial sites over three growing seasons. The method is thoroughly evaluated by predicting the biomass composition of different grass clover mixtures using only an image of the canopy. With a relative biomass clover content prediction of R2 = 0.91, we present new state-of-the-art results across the largely varying sites. Combining the algorithm with an all terrain vehicle (ATV)-mounted image acquisition system, we demonstrate a feasible method for robust coverage and species distribution mapping of 225 ha of mixed crops at a median capacity of 17 ha per hour at 173 images per hectare. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Feasibility of Volatile Biomarker-Based Detection of Pythium Leak in Postharvest Stored Potato Tubers Using Field Asymmetric Ion Mobility Spectrometry
Sensors 2020, 20(24), 7350; https://doi.org/10.3390/s20247350 - 21 Dec 2020
Viewed by 908
Abstract
The study evaluates the suitability of a field asymmetric ion mobility spectrometry (FAIMS) system for early detection of the Pythium leak disease in potato tubers simulating bulk storage conditions. Tubers of Ranger Russet (RR) and Russet Burbank (RB) cultivars were inoculated with Pythium ultimum, the causal agent of Pythium leak (with negative control samples as well) and placed in glass jars. The headspace in sampling jars was scanned using the FAIMS system at regular intervals (in days up to 14 and 31 days for the tubers stored at 25 °C and 4 °C, respectively) to acquire ion mobility current profiles representing the volatile organic compounds (VOCs). Principal component analysis plots revealed that VOCs ion peak profiles specific to Pythium ultimum were detected for the cultivars as early as one day after inoculation (DAI) at room temperature storage condition, while delayed detection was observed for tubers stored at 4 °C (RR: 5th DAI and RB: 10th DAI), possibly due to a slower disease progression at a lower temperature. There was also some overlap between control and inoculated samples at a lower temperature, which could be because of the limited volatile release. Additionally, data suggested that the RB cultivar might be less susceptible to Pythium ultimum under reduced temperature storage conditions. Disease symptom-specific critical compensation voltage (CV) and dispersion field (DF) from FAIMS responses were in the ranges of −0.58 to −2.97 V and 30–84% for the tubers stored at room temperature, and −0.31 to −2.97 V and 28–90% for reduced temperature, respectively. The ion current intensities at −1.31 V CV and 74% DF showed distinctive temporal progression associated with healthy control and infected tuber samples. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Application-Specific Evaluation of a Weed-Detection Algorithm for Plant-Specific Spraying
Sensors 2020, 20(24), 7262; https://doi.org/10.3390/s20247262 - 18 Dec 2020
Cited by 1 | Viewed by 854
Abstract
Robotic plant-specific spraying can reduce herbicide usage in agriculture while minimizing labor costs and maximizing yield. Weed detection is a crucial step in automated weeding. Currently, weed detection algorithms are always evaluated at the image level, using conventional image metrics. However, these metrics do not consider the full pipeline connecting image acquisition to the site-specific operation of the spraying nozzles, which is vital for an accurate evaluation of the system. Therefore, we propose a novel application-specific image-evaluation method, which analyses the weed detections at the plant level and in light of the spraying decision made by the robot. In this paper, a spraying robot is evaluated on three levels: (1) on image level, using conventional image metrics; (2) on application level, using our novel application-specific image-evaluation method; and (3) on field level, in which the weed-detection algorithm is implemented on an autonomous spraying robot and tested in the field. On image level, our detection system achieved a recall of 57% and a precision of 84%, a lower performance than that of detection systems reported in the literature. However, integrated on an autonomous volunteer-potato sprayer system, we outperformed the state of the art, effectively controlling 96% of the weeds while terminating only 3% of the crops. Using the application-level evaluation, an accurate indication of the field performance of the weed-detection algorithm prior to the field test was given, and the type of errors produced by the spraying system was correctly predicted. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Development and Application of a Vehicle-Mounted Soil Texture Detector
Sensors 2020, 20(24), 7175; https://doi.org/10.3390/s20247175 - 15 Dec 2020
Viewed by 430
Abstract
It is of great significance to obtain soil texture information quickly for the realization of farmland management. Soil with good particle condition can well regulate the needs of plants for water, nutrients, air, and temperature during crop growth, thereby promoting high crop yields. The existing methods of measuring soil texture cannot meet the requirements of temporal and spatial resolution. For this reason, a vehicle-mounted soil texture detector was designed and developed based on machine vision and soil electrical conductivity (EC) devices. The detector does not require pretreatment such as air-drying and screening of the soil, and completely uses the original information of the farmland. The whole process can obtain the soil texture information in real time, omitting the complicated chemical process and saving manpower and material resources. The vehicle-mounted detector is divided into a mechanical part, a control part, and a display part. The mechanical part provides measurement support for the acquisition of soil texture information; the control part collects and processes signals and images; the measurement results can be observed and recorded intuitively on the display, and the instrument can be operated through a mobile phone. The detector obtains soil EC through four disc electrodes, while a vehicle-mounted industrial camera captures the soil surface image and extracts texture parameters through image processing; the instrument's embedded SVM model then takes the EC and texture parameters as input to predict soil texture. In order to verify the measurement accuracy of the detector, farmland verification experiments were carried out on farmland loam in the Tongzhou and Haidian Districts of Beijing. The R2 of the correlation between the measured value of soil EC and the actual value was 0.75, and the accuracy of soil texture prediction was 84.86%. 
These results show that the developed vehicle-mounted soil texture detector can meet the requirements for rapid acquisition of farmland soil texture information. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Comparison of Soil Total Nitrogen Content Prediction Models Based on Vis-NIR Spectroscopy
Sensors 2020, 20(24), 7078; https://doi.org/10.3390/s20247078 - 10 Dec 2020
Cited by 3 | Viewed by 662
Abstract
Visible-near-infrared spectrum (Vis-NIR) spectroscopy technology is one of the most important methods for non-destructive and rapid detection of soil total nitrogen (STN) content. In order to find a practical way to build an STN content prediction model, three conventional machine learning methods and one deep learning approach are investigated and their predictive performances are compared and analyzed by using a public dataset called LUCAS Soil (19,019 samples). The three conventional machine learning methods include ordinary least square estimation (OLSE), random forest (RF), and extreme learning machine (ELM), while for the deep learning method, three different structures of convolutional neural network (CNN) incorporating the Inception module are constructed and investigated. In order to clarify the effectiveness of different pre-treatments on predicting STN content, the three conventional machine learning methods combined with four pre-processing approaches (including baseline correction, smoothing, dimensional reduction, and feature selection) are investigated, compared, and analyzed. The results indicate that the baseline-corrected and smoothed ELM model reaches practical precision (coefficient of determination (R2) = 0.89, root mean square error of prediction (RMSEP) = 1.60 g/kg, and residual prediction deviation (RPD) = 2.34), while among the three different structured CNN models, the one with more 1 × 1 convolutions performs better (R2 = 0.93; RMSEP = 0.95 g/kg; and RPD = 3.85 in the optimal case). 
In addition, in order to evaluate the influence of data set characteristics on the model, the LUCAS data set was divided into different data subsets according to dataset size, organic carbon (OC) content and countries, and the results show that the deep learning method is more effective and practical than conventional machine learning methods and, on the premise of enough data samples, it can be used to build a robust STN content prediction model with high accuracy for the same type of soil with similar agricultural treatment. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Visual Guidance and Egg Collection Scheme for a Smart Poultry Robot for Free-Range Farms
Sensors 2020, 20(22), 6624; https://doi.org/10.3390/s20226624 - 19 Nov 2020
Cited by 1 | Viewed by 690
Abstract
Free-range chicken farming allows egg-laying hens to move freely through their environment and perform their natural behavior, including laying their eggs. However, it takes time to gather these eggs manually, giving rise to high labor costs. This study proposes a smart mobile robot for poultry farms that can recognize eggs of two different colors on free-range farms. The robot can also pick up and sort eggs without damaging them. An egg feature extraction method with automatic thresholding is employed to detect both white and brown eggs, and a behavior-based navigation method is applied to allow the robot to reach the eggs while avoiding obstacles. The robot can move towards the position of each egg via visual tracking. Once the egg is within the collection area of the robot, it is gathered, sorted, and stored in the tank inside the robot. Experiments were carried out in an outdoor field of size 5 m × 5 m under different climatic conditions, and the results showed that the average egg recognition rate is between 94.7% and 97.6%. The proposed mobile poultry robot is low in production cost and simple in operation. It can provide chicken farmers with automatic egg gathering on free-range farms. Full article
(This article belongs to the Special Issue Sensing Technologies for Agricultural Automation and Robotics)

Article
Plant Leaf Position Estimation with Computer Vision
Sensors 2020, 20(20), 5933; https://doi.org/10.3390/s20205933 - 20 Oct 2020
Viewed by 816
Abstract
Autonomous analysis of plants, such as for phenotyping and health monitoring, often requires the reliable identification and localization of single leaves, a task complicated by their complex and variable shape. Robotic sensor platforms commonly use depth sensors that rely on either infrared light or ultrasound, in addition to imaging. However, infrared methods have the disadvantage of being affected by the presence of ambient light, and ultrasound methods generally have too wide a field of view, making them ineffective for measuring complex and intricate structures. Alternatives may include stereoscopic or structured-light scanners, but these can be costly and overly complex to implement. This article presents a fully computer-vision-based solution capable of estimating the three-dimensional location of all leaves of a subject plant with the use of a single digital camera autonomously positioned by a three-axis linear robot. A custom-trained neural network was used to classify leaves captured in multiple images taken of a subject plant. Parallax calculations were applied to predict leaf depth and, from this, the three-dimensional position. This article demonstrates proof of concept of the method, and initial tests with positioned leaves suggest an expected error of 20 mm. Future modifications are identified to further improve accuracy and utility across different plant canopies. Full article
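With a single camera translated by a known amount, the parallax calculation reduces to the standard stereo relation depth = focal length (px) × baseline / disparity (px). A small sketch with illustrative numbers (not the article's camera parameters):

```python
# Depth from parallax with a single translated camera: a leaf's apparent
# horizontal shift (disparity, in pixels) between two camera positions a
# known baseline apart yields its depth along the optical axis.
def parallax_depth(focal_length_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 1000 px focal length, camera moved 0.10 m, leaf shifted 40 px.
print(parallax_depth(1000.0, 0.10, 40.0))  # -> 2.5 (metres)
```

Nearby leaves shift more than distant ones, so small disparity errors matter most for far leaves; this is one source of the reported ~20 mm error.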

Article
Environment Monitoring of Rose Crops Greenhouse Based on Autonomous Vehicles with a WSN and Data Analysis
Sensors 2020, 20(20), 5905; https://doi.org/10.3390/s20205905 - 19 Oct 2020
Abstract
This work presents a monitoring system for the environmental conditions of rose cultivation in greenhouses. Its main objective is to improve the quality of the crops while regulating the production time. To this end, a system consisting of autonomous quadruped vehicles connected through a wireless sensor network (WSN) is developed, which supports decision-making on the type of action to be carried out in a greenhouse to maintain the appropriate environmental conditions for rose cultivation. A data analysis process was carried out, aimed at designing an in-situ intelligent system able to make proper decisions regarding the cultivation process. This process involves stages for balancing data, prototype selection, and supervised classification. The proposed system produces a significant reduction of data in the training set obtained by the WSN while reaching a high classification performance in real conditions (90% and 97.5%, respectively). As a remarkable outcome, an approach is also provided to ensure correct planning and selection of routes for the autonomous vehicle through the global positioning system.
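The data-balancing and prototype-selection stages mentioned above can be illustrated with a minimal generic sketch: random undersampling to balance the classes, and a nearest-prototype rule for classifying a sensor reading. This is a sketch under assumed data shapes, not the paper's actual pipeline.

```python
import random
from collections import defaultdict

def undersample(samples, labels, seed=0):
    """Balance a labelled data set by randomly undersampling every class
    down to the size of the smallest one."""
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    n = min(len(v) for v in by_class.values())
    rng = random.Random(seed)
    bal_s, bal_y = [], []
    for y, group in sorted(by_class.items()):
        for s in rng.sample(group, n):
            bal_s.append(s)
            bal_y.append(y)
    return bal_s, bal_y

def nearest_prototype(x, prototypes):
    """Classify a reading x (e.g. (temperature, humidity)) by the closest
    class prototype, using squared Euclidean distance."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda y: d2(x, prototypes[y]))
```

Prototype selection then keeps only the retained prototypes rather than the full WSN training set, which is the data reduction the abstract refers to.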

Article
Research on a Dynamic Algorithm for Cow Weighing Based on an SVM and Empirical Wavelet Transform
Sensors 2020, 20(18), 5363; https://doi.org/10.3390/s20185363 - 18 Sep 2020
Abstract
Weight is an important indicator of the growth and development of dairy cows. Traditional static weighing methods require considerable human and financial resources, and existing dynamic weighing algorithms do not consider the influence of the cow's motion state on the weight curve. In this paper, a dynamic weighing algorithm for cows based on a support vector machine (SVM) and the empirical wavelet transform (EWT) is proposed for classification and analysis. First, the dynamic weight curve is obtained using a weighing device placed along a cow travel corridor. Next, the data are preprocessed through valid signal acquisition, feature extraction, and normalization, and the cows are classified into three activity grades during motion (low, medium, and high) using the SVM algorithm. Finally, a mean filtering algorithm, the EWT algorithm, and a combined periodic continuation-EWT algorithm are used to obtain the dynamic weight values. Weight data were collected for 910 cows, and the experimental results displayed a classification accuracy of 98.6928%. The three algorithms were used to calculate the dynamic weight values for comparison with real values, and the average error rates were 0.1838%, 0.6724%, and 0.9462%, respectively. This method can be widely used at farms and expands the current knowledge base regarding the dynamic weighing of cows.
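The mean-filtering baseline mentioned in the abstract, together with simple descriptive features of the kind that could feed an activity classifier such as an SVM, can be sketched as follows. This is an illustrative sketch with assumed signal shapes, not the paper's algorithm.

```python
def mean_filter(signal, window=5):
    """Smooth a dynamic weight trace with a centred moving average,
    clamping the window at the trace edges."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def activity_features(signal):
    """Features describing how much the cow moves on the scale:
    mean, standard deviation, and peak-to-peak range of the trace."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    return (mean, var ** 0.5, max(signal) - min(signal))
```

A standing-still cow yields a near-zero standard deviation and range, while an active cow produces large values, which is the separation an activity classifier would exploit.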

Article
Autonomous Crop Row Guidance Using Adaptive Multi-ROI in Strawberry Fields
Sensors 2020, 20(18), 5249; https://doi.org/10.3390/s20185249 - 14 Sep 2020
Abstract
Automated robotic platforms are an important part of precision agriculture solutions for sustainable food production. Agri-robots require robust and accurate guidance systems in order to navigate between crops and to and from their base station. Onboard sensors such as machine vision cameras offer a flexible guidance alternative to more expensive solutions for structured environments such as scanning lidar or RTK-GNSS. The main challenges for visual crop row guidance are the dramatic differences in the appearance of crops between farms and throughout the season, and the variations in crop spacing and contours of the crop rows. Here we present a visual guidance pipeline for an agri-robot operating in strawberry fields in Norway, based on semantic segmentation with a convolutional neural network (CNN) that segments input RGB images into crop and not-crop (i.e., drivable terrain) regions. To handle the uneven contours of crop rows in Norway's hilly agricultural regions, we develop a new adaptive multi-ROI method for fitting trajectories to the drivable regions. We test our approach in open-loop trials with a real agri-robot operating in the field and show that it compares favourably to traditional guidance approaches.
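A multi-ROI trajectory fit of the general kind described above can be sketched as: split the segmented drivable mask into horizontal bands, take the centroid of drivable pixels in each band, and fit a line through the centroids. This is a simplified fixed-band sketch of the idea, not the paper's adaptive method.

```python
def roi_centroids(drivable_mask, n_rois=4):
    """Split a binary drivable-terrain mask (list of rows of 0/1) into
    n_rois horizontal bands and return the centroid column of the
    drivable pixels in each band (None if a band is empty)."""
    band = len(drivable_mask) // n_rois
    cents = []
    for b in range(n_rois):
        cols, count = 0, 0
        for row in drivable_mask[b * band:(b + 1) * band]:
            for x, v in enumerate(row):
                if v:
                    cols += x
                    count += 1
        cents.append(cols / count if count else None)
    return cents

def fit_trajectory(centroids, row_height):
    """Least-squares line x = a*y + b through the band centroids, with y
    the image row of each band and x the centroid column."""
    pts = [(i * row_height, c) for i, c in enumerate(centroids) if c is not None]
    n = len(pts)
    sy = sum(y for y, _ in pts)
    sx = sum(x for _, x in pts)
    syy = sum(y * y for y, _ in pts)
    syx = sum(y * x for y, x in pts)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b
```

The fitted line's offset and slope relative to the image centre then serve as the steering error signals for row following.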

Article
Autonomous Navigation of a Center-Articulated and Hydrostatic Transmission Rover using a Modified Pure Pursuit Algorithm in a Cotton Field
Sensors 2020, 20(16), 4412; https://doi.org/10.3390/s20164412 - 07 Aug 2020
Abstract
This study proposes an algorithm that controls an autonomous, multi-purpose, center-articulated hydrostatic transmission rover to navigate along crop rows. This multi-purpose rover (MPR) is being developed to harvest undefoliated cotton and thereby expand the harvest window to up to 50 days. The rover would harvest cotton in teams by performing several passes as the bolls become ready to harvest. We propose that a small robot could make cotton production more profitable for farmers and more accessible to owners of smaller plots of land who cannot afford large tractors and harvesting equipment. The rover was localized with a low-cost Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS), encoders, and an Inertial Measurement Unit (IMU) for heading. Robot Operating System (ROS)-based software was developed to harness the sensor information, localize the rover, and execute path-following controls. To test the localization and the modified pure-pursuit path-following controls, GNSS waypoints were first obtained by manually steering the rover over the rows, followed by the rover autonomously driving over the rows. The results showed that the robot achieved a mean absolute error (MAE) of 0.04 m, 0.06 m, and 0.09 m for the first, second, and third passes of the experiment, respectively, for an overall MAE of 0.06 m. When turning at the end of the row, the MAE from the RTK-GNSS-generated path was 0.24 m. The turning errors were acceptable for the open field at the end of the row. Errors while driving down the row did damage some plants when the rover moved close to their stems, but such errors likely would not impede operations designed for the MPR. Therefore, the designed rover and control algorithms are suitable for cotton harvesting operations.
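The standard pure-pursuit law that the modified controller builds on computes a path curvature toward a lookahead point on the reference path; a center-articulated vehicle then maps that curvature to an articulation angle through its own kinematics. Below is a minimal sketch of the unmodified law only; the articulation mapping and the paper's modifications are not modelled.

```python
import math

def pure_pursuit_steering(robot_pose, goal_point, lookahead):
    """Pure-pursuit curvature toward a lookahead goal point.

    robot_pose = (x, y, heading_rad); goal_point = (gx, gy).
    Returns the required path curvature kappa = 2*sin(alpha)/Ld,
    where alpha is the bearing to the goal in the robot frame and
    Ld is the lookahead distance."""
    x, y, theta = robot_pose
    gx, gy = goal_point
    alpha = math.atan2(gy - y, gx - x) - theta
    return 2.0 * math.sin(alpha) / lookahead
```

A goal directly ahead yields zero curvature (drive straight); a goal off to one side yields a proportionally tighter arc, which is what keeps the rover centred over the row.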

Article
Developing an Online Measurement Device Based on Resistance Sensor for Measurement of Single Grain Moisture Content in Drying Process
Sensors 2020, 20(15), 4102; https://doi.org/10.3390/s20154102 - 23 Jul 2020
Abstract
The online measurement of grain moisture content is an essential technology for realizing real-time tracking and control, improving drying quality, and reducing energy consumption of the drying process. To improve the measurement accuracy and reliability of the dynamic measurement process and expand the application scope of the device, the present work constructed an experimental apparatus for determining the dynamic resistance characteristics of a single grain. The relations between moisture content and the real-time resistance waveform were revealed, and an analytical calculation method based on the peak value and peak area of the waveform was proposed, which correctly captured the electrical measurement properties of grain. The results demonstrated that the gap width between the electrodes had a large influence on the sensor's performance. Moreover, an online measuring device was developed based on the experimental analysis and calculation method, and test results in both lab and field for different grains showed that the online real-time absolute measurement errors are within ±0.5% over moisture contents of 10–35% w.b. and temperatures of −20 to 50 °C. The main results and the developed device might provide technical support for developing intelligent grain drying equipment.
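The peak-value and peak-area quantities described above can be computed from a sampled resistance waveform as a maximum and a trapezoidal integral; a minimal sketch in which the units and sampling interval are assumptions, not the paper's calibration.

```python
def peak_and_area(waveform, dt=1.0):
    """Peak value and trapezoidal area of a sampled resistance waveform,
    the two quantities a moisture-content calibration would be fitted
    against.  dt is the sampling interval."""
    peak = max(waveform)
    area = sum((waveform[i] + waveform[i + 1]) * dt / 2.0
               for i in range(len(waveform) - 1))
    return peak, area
```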

Article
Growth Stages Classification of Potato Crop Based on Analysis of Spectral Response and Variables Optimization
Sensors 2020, 20(14), 3995; https://doi.org/10.3390/s20143995 - 17 Jul 2020
Abstract
Potato is the world’s fourth-largest food crop, following rice, wheat, and maize. Unlike other crops, it is a typical root crop with a special growth cycle pattern and underground tubers, which makes it harder to track the progress of potatoes and to provide automated crop management. The classification of growth stages has great significance for timely management in the potato field. This paper aims to study how to classify the growth stage of potato crops accurately on the basis of spectroscopy technology. To develop a classification model that monitors the growth stage of potato crops, field experiments were conducted at the tillering stage (S1), tuber formation stage (S2), tuber bulking stage (S3), and tuber maturation stage (S4). After spectral data pre-processing, the dynamic changes in chlorophyll content and spectral response during growth were analyzed. A classification model was then established using the support vector machine (SVM) algorithm based on spectral bands and the wavelet coefficients obtained from the continuous wavelet transform (CWT) of reflectance spectra. The spectral variables, which include sensitive spectral bands and feature wavelet coefficients, were optimized using three selection algorithms to improve the classification performance of the model: correlation analysis (CA), the successive projection algorithm (SPA), and the random frog (RF) algorithm. The model results were used to compare the performance of the various methods. The CWT-SPA-SVM model exhibited excellent performance: the classification accuracies on the training set (Atrain) and the test set (Atest) were 100% and 97.37%, respectively, demonstrating the good classification capability of the model, and the difference between Atrain and the cross-validation accuracy (Acv) was 1%, showing that the model is stable. Therefore, the CWT-SPA-SVM model can be used to classify the growth stages of potato crops accurately. This study provides an important supporting method for the classification of growth stages in the potato field.
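The correlation-analysis (CA) band-selection step can be sketched as ranking spectral bands by the magnitude of their Pearson correlation with a numeric growth-stage label. This is an illustrative sketch of CA only, not the paper's full CA/SPA/RF pipeline.

```python
def band_label_correlation(reflectance, labels):
    """Pearson correlation of each spectral band with the numeric
    growth-stage label.  reflectance is a list of spectra (one per
    sample); bands with the largest |r| are kept as sensitive bands."""
    n = len(reflectance)
    n_bands = len(reflectance[0])
    y_mean = sum(labels) / n
    corrs = []
    for b in range(n_bands):
        col = [row[b] for row in reflectance]
        x_mean = sum(col) / n
        cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(col, labels))
        vx = sum((x - x_mean) ** 2 for x in col)
        vy = sum((y - y_mean) ** 2 for y in labels)
        corrs.append(cov / (vx * vy) ** 0.5 if vx and vy else 0.0)
    return corrs

def select_bands(reflectance, labels, k=2):
    """Indices of the k bands most correlated (in magnitude) with the label."""
    corrs = band_label_correlation(reflectance, labels)
    return sorted(range(len(corrs)), key=lambda i: -abs(corrs[i]))[:k]
```

The selected band indices would then form the reduced input to the SVM classifier.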

Article
Automated Measurement of Heart Girth for Pigs Using Two Kinect Depth Sensors
Sensors 2020, 20(14), 3848; https://doi.org/10.3390/s20143848 - 10 Jul 2020
Abstract
Heart girth is an important indicator reflecting the growth and development of pigs and provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to intercept, from the pig point cloud, the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest-path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute heart girth measurement errors were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
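The mirror-symmetry and circumference-measurement steps described above can be sketched on an ordered cross-section of 3-D points: mirror the half-section across the symmetry plane, close the ring, and sum the segment lengths. This is a geometric sketch under assumed point ordering, not the paper's shortest-path implementation.

```python
def path_length(points):
    """Length of the polyline through an ordered list of 3-D points."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))

def girth_by_mirror(half_section, sym_x=0.0):
    """Close a half cross-section by mirroring it across the plane x=sym_x
    and measure the full girth as the closed polyline length."""
    mirrored = [(2 * sym_x - x, y, z) for (x, y, z) in reversed(half_section)]
    ring = half_section + mirrored + [half_section[0]]
    return path_length(ring)
```

For a half-section tracing three sides of a 1x1 square against the plane x=0, the mirrored ring is a 2x1 rectangle and the measured girth is its perimeter, 6.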

Review

Review
Opportunities and Possibilities of Developing an Advanced Precision Spraying System for Tree Fruits
Sensors 2021, 21(9), 3262; https://doi.org/10.3390/s21093262 - 08 May 2021
Abstract
Reducing risk from pesticide applications has been gaining serious attention in the last few decades due to the significant damage they can cause to human health, the environment, and ecosystems. Pesticide applications are an essential part of current agriculture, enhancing cultivated crop productivity and quality and preventing losses of up to 45% of the world food supply. However, inappropriate and excessive use of pesticides is a major and rising concern. Precision spraying addresses these concerns by precisely and efficiently applying pesticides to the target area, substantially reducing pesticide usage while maintaining efficacy at preventing crop losses. This review provides a systematic summary of current technologies used for precision spraying in tree fruits and highlights their potential, briefly discusses factors affecting spraying parameters, and concludes with possible solutions for reducing excessive agrochemical use. We conclude there is a critical need for appropriate sensing techniques that can accurately detect the target. In addition, air jet velocity, travel speed, wind speed and direction, droplet size, and canopy characteristics need to be considered for successful droplet deposition by the spraying system. Assessment of terrain is important when field elevation has significant variability, and control of airflow during spraying is another important parameter. Incorporating these variables in precision spraying systems will optimize spray decisions and help reduce excessive agrochemical applications.

Other

Letter
Prediction of End-Of-Season Tuber Yield and Tuber Set in Potatoes Using In-Season UAV-Based Hyperspectral Imagery and Machine Learning
Sensors 2020, 20(18), 5293; https://doi.org/10.3390/s20185293 - 16 Sep 2020
Abstract
Potato is the largest non-cereal food crop in the world. Timely estimation of end-of-season tuber production using in-season information can inform sustainable agricultural management decisions that increase productivity while reducing impacts on the environment. Recently, unmanned aerial vehicles (UAVs) have become increasingly popular in precision agriculture due to their flexibility in data acquisition and improved spatial and spectral resolutions. In addition, compared with natural color and multispectral imagery, hyperspectral data provide higher spectral fidelity, which is important for modelling crop traits. In this study, we conducted end-of-season potato tuber yield and tuber set predictions using in-season UAV-based hyperspectral images and machine learning. Specifically, six mainstream machine learning models, i.e., ordinary least squares (OLS), ridge regression, partial least squares regression (PLSR), support vector regression (SVR), random forest (RF), and adaptive boosting (AdaBoost), were developed and compared across potato research plots with different irrigation rates at the University of Wisconsin Hancock Agricultural Research Station. Our results showed that tuber set could be predicted better than tuber yield, and that using multi-temporal hyperspectral data improved model performance. Ridge regression achieved the best performance for predicting tuber yield (R2 = 0.63), while ridge regression and PLSR performed similarly for predicting tuber set (R2 = 0.69). Our study demonstrates that hyperspectral imagery and machine learning have good potential to help potato growers manage their irrigation practices efficiently.
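Ridge regression, which gave the best tuber-yield predictions above, differs from OLS only by a penalty term that shrinks the coefficients; in the single-feature, centred-data case it reduces to a one-line closed form. A minimal sketch of that behaviour (real use would be multivariate over the hyperspectral bands):

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge regression for one centred feature:
    w = sum(x*y) / (sum(x*x) + lam).
    lam = 0 recovers ordinary least squares; larger lam shrinks the
    coefficient w toward zero, trading bias for lower variance."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)
```

With x = [1, 2, 3] and y = [2, 4, 6], lam = 0 recovers the OLS slope of 2, while a penalty equal to sum(x*x) halves it, illustrating the regularization that helps on correlated hyperspectral bands.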
