

Special Issue "3D Modelling and Mapping for Precision Agriculture"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: 15 March 2023 | Viewed by 11627

Special Issue Editors

Dr. Lorenzo Comba
Guest Editor
Department of Agricultural, Forestry and Food Sciences (DiSAFA), University of Turin, 10124 Torino, Italy
Interests: precision agriculture; robotics; remote sensing; image processing; machine learning
Dr. Jordi Llorens
Guest Editor
Universitat de Lleida, Lleida, Spain
Interests: agricultural machinery; sensors; precision agriculture
Dr. Alessandro Biglia
Guest Editor
Department of Agricultural, Forestry and Food Sciences (DiSAFA), University of Turin, Turin, Italy
Interests: precision agriculture; precision viticulture; UAV; renewable energy; machine learning

Special Issue Information

Dear Colleagues,

An effective precision agriculture (PA) management approach relies on accurate knowledge of the agricultural environment, with the aim of performing site-specific operations in a timely and proper manner. Recent solutions for PA are based on unmanned vehicles, both ground (UGVs) and aerial (UAVs), which can profitably perform crop scouting and monitoring tasks and even accomplish several management operations autonomously.

In this context, the contribution of 3D crop models to the improvement of PA practices is rapidly growing. Indeed, point clouds of agricultural environments can be profitably exploited to retrieve information on crop status, geometry, field yield, and other valuable agronomic indices. In addition, 3D models are proving to be an effective input to robust control and navigation algorithms for autonomous vehicles in complex scenarios, such as agricultural ones, allowing for enhanced obstacle and target detection. In order to mine valuable information for agricultural purposes from 3D point clouds, however, specific computing frameworks are usually required, many of which are based on artificial intelligence (AI) and machine learning (ML) methods.

The goal of this Special Issue is to present an up-to-date overview of the recent achievements in the field of 3D modelling and mapping in agriculture, as well as to identify the obstacles still ahead. Review and research papers on, but not limited to, the following topics are welcome:

  • Time of flight (ToF) and structured light (SL) technologies for PA
  • Structure from motion (SfM) methods for PA
  • 3D point cloud processing
  • Machine learning and artificial intelligence
  • Crop 3D modelling
  • Field 3D mapping
  • Navigation and control based on 3D point clouds
  • Agricultural UAVs
  • Agricultural UGVs
  • Agricultural robots

Dr. Lorenzo Comba
Dr. Jordi Llorens
Dr. Alessandro Biglia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Precision agriculture
  • 3D point clouds
  • Sensing for automation
  • Crop monitoring
  • Drones and robotics
  • UAVs and UGVs
  • Feature extraction
  • Machine learning
  • Semantic segmentation

Published Papers (10 papers)


Research


Article
Virtual Laser Scanning Approach to Assessing Impact of Geometric Inaccuracy on 3D Plant Traits
Remote Sens. 2022, 14(19), 4727; https://doi.org/10.3390/rs14194727 - 21 Sep 2022
Viewed by 310
Abstract
In recent years, 3D imaging has become an increasingly popular screening modality for high-throughput plant phenotyping. 3D scans provide a rich source of information about the architectural organization of plants which cannot always be derived from multi-view projection 2D images. On the other hand, 3D scanning is associated with an inherent inaccuracy in the assessment of geometrically complex plant structures, for example, due to the loss of geometrical information on reflective, shadowed, inclined and/or curved leaf surfaces. Here, we aim to quantitatively assess the impact of geometrical inaccuracies in 3D plant data on phenotypic descriptors of four different shoot architectures, including tomato, maize, cucumber, and arabidopsis. For this purpose, virtual laser scanning of synthetic models of these four plant species was used. This approach was applied to simulate different scenarios of 3D model perturbation, as well as the loss of geometrical information in shadowed plant regions. Our experimental results show that different plant traits exhibit different and, in general, plant-type-specific dependencies on the level of geometrical perturbation. However, some phenotypic traits tend to be more strongly correlated than others with the degree of geometrical inaccuracy in the assessment of 3D plant architecture. In particular, integrative traits, such as plant area, volume, and physiologically important light absorption, show a stronger correlation with the effectively visible plant area than linear shoot traits, such as total plant height and width, across the different scenarios of geometrical perturbation. Our study addresses the important question of the reliability and accuracy of 3D plant measurements and suggests solutions for the consistent quantitative analysis and interpretation of imperfect data by combining measurement results with computational simulation of synthetic plant models.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
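A minimal sketch of the kind of sensitivity analysis described above, using a synthetic point cloud and a crude one-sided shadowing model (the cloud, the dropout model and all parameters are illustrative assumptions, not the authors' virtual-scanning pipeline): an integrative trait such as projected area degrades with the visible fraction, while a linear trait such as height is comparatively robust.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
# synthetic "shoot" point cloud in metres (purely illustrative)
points = rng.normal(scale=[0.15, 0.15, 0.40], size=(5000, 3))

def projected_area(pts):
    # 2D convex-hull area of the XY projection (for 2D hulls, scipy's .volume is the area)
    return ConvexHull(pts[:, :2]).volume

def plant_height(pts):
    return pts[:, 2].max() - pts[:, 2].min()

for visible_fraction in (1.0, 0.8, 0.6, 0.4):
    # crude shadowing model: the far side of the canopy (largest y) is not scanned
    cutoff = np.quantile(points[:, 1], visible_fraction)
    visible = points[points[:, 1] <= cutoff]
    print(f"visible fraction {visible_fraction:.1f}: "
          f"projected area {projected_area(visible):.3f} m^2, height {plant_height(visible):.3f} m")
```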

Article
3D Distance Filter for the Autonomous Navigation of UAVs in Agricultural Scenarios
Remote Sens. 2022, 14(6), 1374; https://doi.org/10.3390/rs14061374 - 11 Mar 2022
Viewed by 1001
Abstract
In precision agriculture, remote sensing is an essential phase in assessing crop status and variability when considering both the spatial and the temporal dimensions. To this end, the use of unmanned aerial vehicles (UAVs) is growing in popularity, allowing for the autonomous performance of a variety of in-field tasks that are not limited to scouting or monitoring. To enable autonomous navigation, however, a crucial capability lies in accurately locating the vehicle within the surrounding environment. This task becomes challenging in agricultural scenarios where the crops and/or the adopted trellis systems can negatively affect GPS signal reception and localisation reliability. A viable solution to this problem can be the exploitation of high-accuracy 3D maps, which provide important data regarding crop morphology, as an additional input to the UAV's localisation system. However, the management of such big data may be difficult in real-time applications. In this paper, an innovative 3D sensor fusion approach is proposed, which combines the data provided by onboard proprioceptive (i.e., GPS and IMU) and exteroceptive (i.e., ultrasound) sensors with the information provided by a georeferenced, low-complexity 3D map. In particular, the parallel-cuts ellipsoid method is used to merge the data from the distance sensors and the 3D map. Then, the improved estimate of the UAV location is fused with the data provided by the GPS and IMU sensors, using a Kalman-based filtering scheme. The simulation results prove the efficacy of the proposed navigation approach when applied to a quadrotor that autonomously navigates between vine rows.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
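The final fusion step described above can be illustrated with a generic linear Kalman filter that merges a noisy GPS-like fix with a more precise map-aided position estimate. This is a sketch with an assumed constant-velocity model and invented noise levels; it does not reproduce the parallel-cuts ellipsoid method used in the paper.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # constant-velocity motion model
Q = 0.01 * np.eye(6)                               # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # both sources observe position only

def kf_update(x, P, z, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(6) - K @ H) @ P

rng = np.random.default_rng(0)
true_pos = np.array([1.0, 2.0, 3.0])               # hovering UAV, for simplicity
x, P = np.zeros(6), np.eye(6)                      # state: position (m) and velocity (m/s)

for _ in range(50):
    x, P = F @ x, F @ P @ F.T + Q                  # prediction
    z_gps = true_pos + rng.normal(scale=1.0, size=3)   # degraded GPS fix under the canopy
    z_map = true_pos + rng.normal(scale=0.2, size=3)   # map-aided position estimate
    x, P = kf_update(x, P, z_gps, 1.0**2 * np.eye(3))
    x, P = kf_update(x, P, z_map, 0.2**2 * np.eye(3))

print("fused position estimate:", np.round(x[:3], 3))
```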

Article
Comparison of Aerial and Ground 3D Point Clouds for Canopy Size Assessment in Precision Viticulture
Remote Sens. 2022, 14(5), 1145; https://doi.org/10.3390/rs14051145 - 25 Feb 2022
Cited by 2 | Viewed by 987
Abstract
In precision viticulture, characterizing intra-field spatial variability is a crucial step towards using natural resources efficiently while lowering the environmental impact. In recent years, technologies such as Unmanned Aerial Vehicles (UAVs), Mobile Laser Scanners (MLS), multispectral sensors, Mobile Apps (MA) and Structure from Motion (SfM) techniques have made it possible to characterize this variability with little effort. This study aims to evaluate, compare and cross-validate the potential and the limits of several tools (UAV, MA, MLS) for assessing vine canopy size parameters (thickness, height, volume) by processing 3D point clouds. Three trials were carried out to test the different tools in a vineyard located in the Chianti Classico area (Tuscany, Italy). Each trial consisted of a UAV flight, an MLS scan of the vineyard and an MA acquisition over 48 geo-referenced vines. The Leaf Area Index (LAI) was also assessed and taken as a reference value. The results showed that the analyzed tools were able to correctly discriminate between zones with different canopy size characteristics. In particular, the R2 between the canopy volumes acquired with the different tools was higher than 0.7, with the highest value of R2 = 0.78 and an RMSE of 0.057 m3 for the UAV vs. MLS comparison. The strongest correlations were found between the height data, with the highest value of R2 = 0.86 and an RMSE of 0.105 m for the MA vs. MLS comparison. For the thickness data, the correlations were weaker, with the lowest value of R2 = 0.48 and an RMSE of 0.052 m for the UAV vs. MLS comparison. The correlation between the LAI and the canopy volumes was moderately strong for all the tools, with the highest value of R2 = 0.74 for the LAI vs. V_MLS data and the lowest value of R2 = 0.69 for the LAI vs. V_UAV data.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
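The agreement metrics reported above can be reproduced for any paired set of per-vine measurements in a few lines of NumPy. The values below are made-up placeholders, not the study's data, and R2 is taken here as the squared Pearson correlation of the two series.

```python
import numpy as np

# hypothetical per-vine canopy volumes (m^3) from two tools
v_uav = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.35, 0.58])
v_mls = np.array([0.45, 0.52, 0.41, 0.66, 0.44, 0.57, 0.33, 0.62])

r2 = np.corrcoef(v_uav, v_mls)[0, 1] ** 2          # squared Pearson correlation
rmse = np.sqrt(np.mean((v_uav - v_mls) ** 2))      # root mean square difference
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.3f} m^3")
```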

Article
Determination of the Optimal Orientation of Chinese Solar Greenhouses Using 3D Light Environment Simulations
Remote Sens. 2022, 14(4), 912; https://doi.org/10.3390/rs14040912 - 14 Feb 2022
Viewed by 596
Abstract
As resource consumption continues to grow, solar energy is expected to become the most used sustainable energy source. To improve solar energy efficiency in Chinese Solar Greenhouses (CSG), the effect of CSG orientation on intercepted solar radiation was systematically studied. By using a 3D CSG model and a detailed crop canopy model, the light environment within the CSG was optimized. Taking the most widely used Liao-Shen type Chinese solar greenhouse (CSG-LS) as the prototype, the simulation was fully verified. The solar radiation intercepted by the maintenance structures and crops was used as the evaluation index. The results showed that the highest amount of solar radiation intercepted by the maintenance structures occurred at CSG orientations of 4–6° south to west (S-W) in the 36.8° N and 38° N areas, 8–10° S-W in the 41.8° N area, and 2–4° south to east (S-E) in the 43.6° N area. The solar radiation intercepted by the crop canopy was highest at an orientation of 2–4° S-W in the 36.8° N, 38° N and 43.6° N areas, and 4–6° S-W in the 41.8° N area. Furthermore, the proposed model could provide scientific guidance for greenhouse crop modelling.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
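A toy version of the evaluation index can be written by sweeping the orientation offset and scoring each candidate by the direct radiation intercepted on a single tilted, south-facing plane. The sun path, tilt, and irradiance below are invented placeholders, and the paper's detailed 3D greenhouse and crop canopy models are not reproduced.

```python
import numpy as np

def direction(elev_deg, azim_deg):
    """Unit vector (x = east, y = north, z = up); azimuth is measured clockwise from north."""
    e, a = np.radians(elev_deg), np.radians(azim_deg)
    return np.array([np.sin(a) * np.cos(e), np.cos(a) * np.cos(e), np.sin(e)])

# invented winter-day sun path sampled every 10 minutes, with a constant direct irradiance
azimuths = np.linspace(130, 230, 60)                   # degrees, roughly SE to SW
elevations = 30 * np.sin(np.linspace(0, np.pi, 60))    # peaks at 30 degrees
dni, step = 600.0, 10 * 60                             # W/m^2 and seconds (placeholders)

tilt = 30.0                                            # roof tilt from horizontal (placeholder)
for offset in (-8, -4, 0, 4, 8):                       # orientation offset from due south (deg, + toward west)
    normal = direction(90 - tilt, 180 + offset)        # outward normal of the transparent roof
    intercepted = sum(dni * max(0.0, float(normal @ direction(el, az))) * step
                      for az, el in zip(azimuths, elevations))
    print(f"offset {offset:+d} deg: {intercepted / 1e6:.2f} MJ/m^2 per day")
```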

Article
High-Throughput Legume Seed Phenotyping Using a Handheld 3D Laser Scanner
Remote Sens. 2022, 14(2), 431; https://doi.org/10.3390/rs14020431 - 17 Jan 2022
Viewed by 769
Abstract
High-throughput phenotyping involves many samples and diverse trait types. With the goal of automatic measurement and batch data processing, a novel method for high-throughput legume seed phenotyping is proposed. A pipeline of automatic data acquisition and processing is presented, including point cloud acquisition, single-seed extraction, pose normalization, three-dimensional (3D) reconstruction, and trait estimation. First, a handheld laser scanner is used to obtain legume seed point clouds in batches. Second, a combined segmentation method using the RANSAC method, Euclidean segmentation, and the dimensionality of the features is proposed for single-seed extraction. Third, a coordinate rotation method based on PCA and the table normal is proposed for pose normalization. Fourth, a fast symmetry-based 3D reconstruction method is built to reconstruct a 3D model of each single seed, and the Poisson surface reconstruction method is used for surface reconstruction. Finally, 34 traits, including 11 morphological traits, 11 scale factors, and 12 shape factors, are automatically calculated. A total of 2500 samples of five kinds of legume seeds were measured. Experimental results show that the average accuracies of scanning and segmentation are 99.52% and 100%, respectively. The overall average reconstruction error is 0.014 mm. The average morphological trait measurement accuracy is submillimeter, and the average relative percentage error is within 3%. The proposed method provides a feasible approach to batch data acquisition and processing, which will facilitate automation in high-throughput legume seed phenotyping.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
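The pose-normalization step, i.e. aligning each extracted seed with its principal axes before reconstruction, can be sketched with plain NumPy PCA. The synthetic ellipsoidal "seed" and the axis ordering below are illustrative assumptions rather than the authors' exact procedure, which also uses the table normal.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic ellipsoid-like "seed" cloud in millimetres, arbitrarily rotated and shifted
seed = rng.normal(size=(2000, 3)) * np.array([4.0, 2.5, 1.5])
angle = np.radians(35)
rot_z = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0, 0.0, 1.0]])
seed = seed @ rot_z.T + np.array([10.0, -5.0, 3.0])

def pose_normalize(pts):
    """Centre the cloud and rotate it so the principal axes align with x, y, z."""
    centred = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centred.T))   # eigenvectors in ascending-variance order
    return centred @ vecs[:, ::-1]                # project onto axes of decreasing variance

normalized = pose_normalize(seed)
extent = normalized.max(axis=0) - normalized.min(axis=0)
print("approximate length / width / thickness (mm):", np.round(extent, 2))
```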

Article
Four-Dimensional Plant Phenotyping Model Integrating Low-Density LiDAR Data and Multispectral Images
Remote Sens. 2022, 14(2), 356; https://doi.org/10.3390/rs14020356 - 13 Jan 2022
Viewed by 884
Abstract
High-throughput platforms for plant phenotyping usually demand expensive high-density LiDAR devices and computationally intensive methods for characterizing several morphological variables. In fact, most platforms require offline processing to achieve a comprehensive plant architecture model. In this paper, we propose a low-cost plant phenotyping system based on the sensory fusion of low-density LiDAR data with multispectral imagery. Our contribution is twofold: (i) an integrated phenotyping platform with embedded processing methods capable of providing real-time morphological data, and (ii) a multi-sensor fusion algorithm that precisely matches the 3D LiDAR point-cloud data with the corresponding multispectral information, aiming at the consolidation of four-dimensional plant models. We conducted extensive experimental tests on two plants with different morphological structures, demonstrating the potential of the proposed solution for enabling real-time plant architecture modeling in the field based on low-density LiDARs.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
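The core of such a fusion is projecting each LiDAR point into the multispectral image with a calibrated pinhole camera model and attaching the pixel value to the point. The intrinsics, pose, and data below are hypothetical placeholders, not the paper's calibration or embedded implementation.

```python
import numpy as np

# hypothetical camera intrinsics (pixels) and camera pose expressed in the LiDAR frame
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])        # rotation and translation LiDAR -> camera

rng = np.random.default_rng(0)
points = rng.uniform([-1, -1, 2], [1, 1, 4], size=(1000, 3))   # LiDAR points (m), in front of the camera
band = rng.uniform(0.2, 0.9, size=(480, 640))                  # fake multispectral band (e.g., an NDVI map)

cam = (R @ points.T).T + t                          # points in the camera frame
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                         # perspective division -> pixel coordinates
u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
inside = (u >= 0) & (u < 640) & (v >= 0) & (v < 480)

# "4D" record: x, y, z plus the sampled spectral value for every point that falls in the image
fused = np.column_stack([points[inside], band[v[inside], u[inside]]])
print(fused.shape[0], "points received a spectral value")
```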

Article
Canopy Volume Extraction of Citrus reticulate Blanco cv. Shatangju Trees Using UAV Image-Based Point Cloud Deep Learning
Remote Sens. 2021, 13(17), 3437; https://doi.org/10.3390/rs13173437 - 30 Aug 2021
Cited by 6 | Viewed by 1344
Abstract
Automatic acquisition of the canopy volume parameters of the Citrus reticulate Blanco cv. Shatangju tree is of great significance for the precision management of the orchard. This research combined a point cloud deep learning algorithm with a volume calculation algorithm to segment the canopy of Citrus reticulate Blanco cv. Shatangju trees. The 3D (three-dimensional) point cloud model of a Citrus reticulate Blanco cv. Shatangju orchard was generated from UAV tilt photogrammetry images. The segmentation performance of three deep learning models, PointNet++, MinkowskiNet and FPConv, on Shatangju trees and the ground was compared. Three volume algorithms, namely convex hull by slices, the voxel-based method and the 3D convex hull, were applied to calculate the volume of the Shatangju trees. Model accuracy was evaluated using the coefficient of determination (R2) and the Root Mean Square Error (RMSE). The results show that the overall accuracy of the MinkowskiNet model (94.57%) is higher than that of the other two models, indicating the best segmentation performance. The 3D convex hull algorithm achieved the highest R2 (0.8215) and the lowest RMSE (0.3186 m3) for the canopy volume calculation, and thus best reflects the real volume of Citrus reticulate Blanco cv. Shatangju trees. The proposed method is capable of rapid and automatic acquisition of the canopy volume of Citrus reticulate Blanco cv. Shatangju trees.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
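Of the three volume algorithms compared, the 3D convex hull and the voxel count are the easiest to reproduce: given the points segmented as a single tree's canopy, SciPy returns the hull volume directly. The random blob below merely stands in for a segmented Shatangju canopy.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
# placeholder for one segmented canopy: an ellipsoidal blob of points (m)
canopy = rng.normal(size=(3000, 3)) * np.array([1.2, 1.2, 0.8])

hull = ConvexHull(canopy)
print(f"3D convex-hull volume: {hull.volume:.2f} m^3")

# voxel-based estimate for comparison: count occupied 0.1 m voxels
voxel = 0.1
occupied = np.unique(np.floor(canopy / voxel).astype(int), axis=0)
print(f"voxel-based volume:    {len(occupied) * voxel**3:.2f} m^3")
```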

Article
How to Build a 2D and 3D Aerial Multispectral Map?—All Steps Deeply Explained
Remote Sens. 2021, 13(16), 3227; https://doi.org/10.3390/rs13163227 - 13 Aug 2021
Cited by 3 | Viewed by 1181
Abstract
Increases in camera resolution, processing power, and the availability of aerial platforms have helped to create more cost-efficient approaches to capturing and generating point clouds for scientific applications. The continuous development of methods that produce three-dimensional models from two-dimensional images, such as Structure from Motion (SfM) and Multi-View Stereopsis (MVS), has significantly improved the resolution of the produced models. Taking inspiration from the free and accessible workflow made available by OpenDroneMap, a detailed analysis of the processes is presented in this paper. At the time of writing, no literature was found that described in detail the steps and processes required to create two- or three-dimensional digital models from aerial images. With this in mind, and based on the OpenDroneMap workflow, a detailed study was performed. The digital model reconstruction process takes the initial aerial images obtained from the field survey and passes them through a series of stages. Each stage produces an output that is used by the following stage; for example, the initial stage produces a sparse reconstruction, obtained by extracting and matching image features, which the following step uses to increase its resolution. Additionally, based on the analysis of the workflow, adaptations were made to the standard pipeline in order to increase the compatibility of the developed system with different types of image sets. In particular, adaptations focused on thermal imagery were made. Due to the scarcity of strong features in thermal images, and the resulting difficulty of matching features across them, a modification was implemented so that thermal models could be produced alongside the already implemented processes for multispectral and RGB image sets.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
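The first stage of the pipeline discussed above, feature extraction and matching to seed the sparse reconstruction, can be approximated with OpenCV. The file names are placeholders, and real SfM toolchains such as OpenDroneMap use their own detectors and robust geometric verification on top of this.

```python
import cv2

# two overlapping aerial images from the survey (placeholder file names)
img1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# brute-force Hamming matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences to feed the sparse reconstruction")
```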

Article
EasyIDP: A Python Package for Intermediate Data Processing in UAV-Based Plant Phenotyping
Remote Sens. 2021, 13(13), 2622; https://doi.org/10.3390/rs13132622 - 03 Jul 2021
Cited by 5 | Viewed by 2146
Abstract
Unmanned aerial vehicle (UAV) and structure from motion (SfM) photogrammetry techniques are now widely used for field-based, high-throughput plant phenotyping, but some of the intermediate processes in the workflow remain manual. For example, geographic information system (GIS) software is used to manually assess the 2D/3D field reconstruction quality and to crop regions of interest (ROIs) from the whole field. In addition, extracting phenotypic traits from the raw UAV images can be more effective than extracting them directly from the digital orthomosaic (DOM). Currently, no easy-to-use tools are available to implement these tasks for commonly used commercial SfM software, such as Pix4D and Agisoft Metashape. Hence, an open source software package called the easy intermediate data processor (EasyIDP; MIT license) was developed to decrease the workload in the intermediate data processing mentioned above. The functions of the proposed package include (1) an ROI cropping module, assisting in reconstruction quality assessment and cropping ROIs from the whole field, and (2) an ROI reversing module, projecting ROIs back onto the corresponding raw images. The results showed that both the cropping and reversing modules work as expected. Moreover, the effects of the ROI height selection and of the reversed ROI position on the raw images on the reverse calculation are discussed. This tool shows great potential for decreasing the workload of data annotation for machine learning applications.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
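The "reversing" idea, i.e. finding which raw images see a given field ROI and where, amounts to projecting the ROI's world coordinates through each image's calibrated camera. The sketch below does this with a plain pinhole model and made-up poses; it is not EasyIDP's actual API.

```python
import numpy as np

def project(points_world, K, R, t):
    """Pinhole projection of Nx3 world points; returns pixel coordinates and depths."""
    cam = (R @ points_world.T).T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

# one rectangular plot ROI in a local world frame (m, placeholder)
roi = np.array([[0, 0, 0], [5, 0, 0], [5, 5, 0], [0, 5, 0]], dtype=float)

K = np.array([[3000.0, 0.0, 2000.0], [0.0, 3000.0, 1500.0], [0.0, 0.0, 1.0]])   # made-up intrinsics
width, height = 4000, 3000
R_nadir = np.diag([1.0, -1.0, -1.0])        # nadir view: world z-up becomes camera z-forward (down)

# made-up per-image camera positions at 30 m altitude
for name, cam_pos in [("IMG_0001", [2.5, 2.5, 30.0]), ("IMG_0002", [40.0, 2.5, 30.0])]:
    t = -R_nadir @ np.asarray(cam_pos)
    uv, depth = project(roi, K, R_nadir, t)
    covered = np.all((depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < width)
                     & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    print(name, "fully contains the ROI" if covered else "does not contain the ROI")
```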

Other


Technical Note
Technical Challenges for Multi-Temporal and Multi-Sensor Image Processing Surveyed by UAV for Mapping and Monitoring in Precision Agriculture
Remote Sens. 2022, 14(19), 4954; https://doi.org/10.3390/rs14194954 - 04 Oct 2022
Viewed by 161
Abstract
Precision Agriculture (PA) is an approach to maximizing crop productivity in a sustainable manner. PA requires up-to-date, accurate and georeferenced information on crops, which can be collected by different sensors from ground, aerial or satellite platforms. The use of optical and thermal sensors on Unmanned Aerial Vehicle (UAV) platforms is an emerging solution for mapping and monitoring in PA, yet many technological challenges are still open. This technical note discusses the choice of UAV type and of its scientific payload for surveying a sample area of 5 hectares, as well as the procedures for replicating the study on a larger scale. The case study is an ideal opportunity to test best practices for combining the requirements of PA surveys with the limitations imposed by local UAV regulations. To follow crop development at various stages, nine flights over the field area were planned and executed over a period of four months. The use of ground control points for optimal georeferencing and for the accurate alignment of maps created by multi-temporal processing is analyzed. Output maps are produced in both the visible and thermal bands, after appropriate strip alignment, mosaicking, sensor calibration, and processing with Structure from Motion techniques. The discussion of strategies, checklists, workflow, and processing is backed by data from more than 5000 optical and radiometric thermal images taken during five hours of flight time across the nine flights throughout the crop season. The geomatics challenges of a georeferenced survey for PA using UAVs are the key focus of this technical note. Accurate maps derived from these multi-temporal and multi-sensor surveys feed Geographic Information Systems (GIS) and Decision Support Systems (DSS) to benefit PA in a multidisciplinary approach.
(This article belongs to the Special Issue 3D Modelling and Mapping for Precision Agriculture)
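The georeferencing check mentioned above reduces to comparing the surveyed ground-control-point coordinates with their positions measured on each epoch's output map; the coordinates below are fabricated simply to show the arithmetic.

```python
import numpy as np

# surveyed GCP coordinates vs. coordinates measured on one epoch's orthomosaic (m, local frame)
gcp_surveyed = np.array([[100.000, 200.000], [150.000, 240.000],
                         [120.000, 280.000], [170.000, 205.000]])
gcp_on_map   = np.array([[100.031, 199.982], [149.972, 240.024],
                         [120.018, 280.041], [170.009, 204.967]])

residuals = gcp_on_map - gcp_surveyed
print("per-GCP residuals (m):", np.round(np.linalg.norm(residuals, axis=1), 3))
print(f"planimetric RMSE: {np.sqrt(np.mean(np.sum(residuals**2, axis=1))):.3f} m")
```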

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

1. Evaluating the total leaf area of an individual tree canopy using mobile laser scanning system

Qiujie Li, Yuxi Xue, Pengcheng Yuan, Yu Ru

College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China

*  Correspondence: [email protected]

Abstract: In recent years, the problems of pesticide waste and environmental pollution caused by continuous spraying methods have received more and more attention, and variable-rate spraying technology has therefore been proposed. Variable-rate spraying methods use sensors to measure the leaf amounts of the target canopies, which are then used to adjust the liquid amounts so that the droplets are sprayed evenly over the canopy. One such method uses mobile laser scanning (MLS) to capture the point cloud of the canopy with a two-dimensional (2D) light detection and ranging (LiDAR) sensor; the number of points in the cloud is then used to evaluate the leaf area of the canopy. Since the vehicle speed and ranging distance often vary during spraying, the number of points captured from a canopy is not consistent, which has a great influence on the accuracy of the leaf area measurement. To solve this problem, this paper proposes a new method to measure the leaf area, which sums up the grid areas measured by single points. The grid area is calculated from the two measurement resolutions of the MLS system, in the vehicle-moving and LiDAR-ranging directions. According to the number of echoes reflected by the leaf, the points in the canopy are classified into three categories, and the grid area is calculated accordingly: points on a single leaf, points at the edge of a single leaf, and points at the junction of multiple leaves. To evaluate the proposed method, an artificial tree was scanned by a 2D LiDAR UTM-30LX-EW. The branches were pruned to generate nine canopies with different densities. Both sides of the canopies were scanned at three distances: 1.0 m, 1.5 m, and 2.0 m. The point clouds of the canopies were then subsampled to simulate multiple vehicle speeds. Finally, a total of 594 samples were obtained. We adopted univariate linear regression to evaluate the total leaf area (TLA) of a canopy, using the total point number (TPN) and the total grid area (TGA) of a canopy as the predictor, respectively. The coefficient of determination of the TPN evaluation is 0.6317 with a mean squared error (MSE) of 4345.3458 cm2, while the coefficient of determination of the TGA evaluation is 0.8778 with an MSE of 2503.2932 cm2. The experimental results show that the proposed TGA evaluation is more accurate than the TPN evaluation under the diverse point densities caused by varying vehicle speeds and ranging distances.

Keywords: variable-rate spraying; leaf area measurement; light detection and ranging (LiDAR); mobile laser scanning (MLS)
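The comparison the abstract describes, regressing the total leaf area on either the total point number or the total grid area, is a univariate least-squares fit. The sketch below uses synthetic samples (the grid-area predictor is deliberately generated as the tighter proxy) to show how R2 and the MSE would be computed.

```python
import numpy as np

def fit_and_score(x, y):
    """Univariate linear regression of y on x; returns R^2 and the mean squared error."""
    slope, intercept = np.polyfit(x, y, deg=1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.mean((y - pred) ** 2)

rng = np.random.default_rng(3)
tla = rng.uniform(500, 3000, size=200)                 # "true" total leaf area (cm^2), synthetic
tpn = 2.0 * tla + rng.normal(scale=600, size=200)      # total point number: noisier proxy
tga = 1.0 * tla + rng.normal(scale=150, size=200)      # total grid area: tighter proxy

for name, x in [("TPN", tpn), ("TGA", tga)]:
    r2, mse = fit_and_score(x, tla)
    print(f"{name}: R^2 = {r2:.3f}, MSE = {mse:.1f}")
```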

2. Title: Comparison of aerial and ground 3D point clouds for canopy size assessment in viticulture

Authors: A. Pagliai, S.P. Kartsiotis*, D. Sarri, M. Ammoniaci, R. Lisci, R. Perria, M. Vieri, M.D. Arcangelo, P. Storchi

Abstract: In precision viticulture, the characterization of within-field spatial variability is a critical step towards the efficient use of natural resources and a lower environmental impact. In recent years, technologies such as Unmanned Aerial Vehicles (UAVs), Mobile Laser Scanners (MLS), multispectral sensors, mobile apps and Structure from Motion (SfM) photogrammetry techniques have made it possible to characterize this variability with relatively little effort. This paper aims to evaluate, compare and cross-validate the potential and the limits of several tools for assessing the geometrical features of vine canopies by processing 3D point clouds. Three trials were carried out in vineyards located in the Chianti Classico area (Tuscany, Italy) to test the different tools. Each trial consisted of a UAV flight, an MLS scan of the vineyard and mobile app acquisitions over 200 geo-referenced vines. The results showed that the 3D point clouds obtained with the tested technologies were characterized by different accuracies, while showing a good qualitative correlation with one another. UAVs proved to be a powerful and rapid tool for assessing canopy volumes, while the MLS and the mobile app can provide higher absolute accuracy but are more time-consuming.
