Peer-Review Record

Comparison of Unsupervised Algorithms for Vineyard Canopy Segmentation from UAV Multispectral Images

Remote Sens. 2019, 11(9), 1023; https://doi.org/10.3390/rs11091023
by Paolo Cinat 1, Salvatore Filippo Di Gennaro 1,*, Andrea Berton 2 and Alessandro Matese 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 9 April 2019 / Revised: 24 April 2019 / Accepted: 25 April 2019 / Published: 30 April 2019
(This article belongs to the Special Issue Remote Sensing for Agroforestry)

Round 1

Reviewer 1 Report

The authors updated the manuscript by considering some comments from the previous review. However, there are still some concerns about the study, especially the use of the entire generated orthophoto mosaic for validation of the results. As explained in the last review, non-vineyard vegetation, such as trees, is present in the imagery, along with poor photogrammetric processing at the borders of the orthophoto mosaics. For the sake of the study this vegetation should be removed. The authors could state that this interaction will not be considered in real-usage scenarios.


Some comments:


Lines 68 and 397: There are still some references to a "guide user interface". Should it be corrected to "graphical user interface"?


Line 422: Define the CPU model of the workstation and the number of cores.


Consider reducing the size of some figures, namely Figure 1 and Figure 3. Moreover, the overall figure quality should be improved; at least in the provided PDF version the figures generally seem to have low resolution, with some pixelated areas. Figures 8 and 11 depict important results from the methods applied in this study, but it is impracticable to analyse the delineated features; these figures should be reworked.


The authors replied that more studies were included in the discussion, but most of the suggested studies were not properly discussed in relation to the performed study. Most of them are cited in lines 668-670. If some of these studies were individually analysed, the authors could enrich the discussion content.

Author Response

The authors updated the manuscript by considering some comments from the previous review. However, there are still some concerns about the study, especially the use of the entire generated orthophoto mosaic for validation of the results. As explained in the last review, non-vineyard vegetation, such as trees, is present in the imagery, along with poor photogrammetric processing at the borders of the orthophoto mosaics. For the sake of the study this vegetation should be removed. The authors could state that this interaction will not be considered in real-usage scenarios.

Reply: Following the reviewer’s suggestion, the following sentence was added at lines 491-494: “Non-vineyard vegetation, such as trees, is present in the imagery, along with poor photogrammetric processing at the borders of the orthophoto mosaics. This vegetation has been maintained to stress the performance of the algorithms, but this interaction will not be considered in real-usage scenarios.”


Some comments:

Lines 68 and 397: There are still some references to a "guide user interface". Should it be corrected to "graphical user interface"?

Reply: We corrected it to "graphical user interface".


Line 422: Define the CPU model of the workstation and the number of cores.

Reply: We defined this specification at line 418: “two Genuine Intel(R) CPU 0000 @ 2.40 GHz processors, each with 14 cores and 28 threads, and 256 GB of RAM”.


Consider reducing the size of some figures, namely Figure 1 and Figure 3. Moreover, the overall figure quality should be improved; at least in the provided PDF version the figures generally seem to have low resolution, with some pixelated areas. Figures 8 and 11 depict important results from the methods applied in this study, but it is impracticable to analyse the delineated features; these figures should be reworked.

Reply: We agree with the reviewer's comments; we reduced the size of Figures 1-3. Regarding Figures 8 and 11, the reviewer states that it is impracticable to analyse the delineated features, but these figures are intended only as a graphical example of how the algorithms performed. We think we did our best to make the figures more readily comprehensible.


The authors replied that more studies were included in the discussion, but most of the suggested studies were not properly discussed in relation to the performed study. Most of them are cited in lines 668-670. If some of these studies were individually analysed, the authors could enrich the discussion content.

Reply: Following the reviewer’s comments, we individually analysed the added references, as reported at lines 636-647:

“Bobillet et al. [27] presented an image processing algorithm for automatic row detection using an active contour model consisting of a network of lines that adjusts to the vine rows.

Kalisperakis et al. [31] estimated canopy levels from hyperspectral data, 2D RGB orthomosaics and 3D crop surface models, and found good correlation (r2 > 73%) with the ground truth.

Burgos et al. [32] used a differential digital model (DDM) of a vineyard to obtain vine pixels by selecting all pixels with an elevation higher than 50 cm above ground level, showing good results. A manual delineation of polygons based on the RGB image was used to obtain those results.

Weiss and Baret [33] used the terrain altitude extracted from the dense point cloud to obtain the 2D height distribution of the vineyard. By applying a threshold on the height, the rows were separated from the row spacing. The comparison with ground measurements showed RMSE = 9.8 cm for row height, RMSE = 8.7 cm for row width and RMSE = 7 cm for row spacing.”
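To make the height-threshold idea in the two DEM-based studies quoted above concrete, a minimal Python sketch follows (it is not the authors' MATLAB implementation; the array names and the 0.5 m threshold are illustrative assumptions, the latter taken from the 50 cm value mentioned above). It separates vine pixels from the ground by thresholding a canopy height model computed as the difference between a surface model and a terrain model:

```python
# Minimal sketch (assumption, not the reviewed manuscript's code): extract canopy
# pixels by thresholding the height above ground, in the spirit of the DEM-based
# approaches of Burgos et al. and Weiss and Baret cited above.
import numpy as np

def canopy_mask(dsm: np.ndarray, dtm: np.ndarray, min_height: float = 0.5) -> np.ndarray:
    """Boolean mask of pixels whose height above ground exceeds min_height (metres)."""
    chm = dsm - dtm          # canopy height model: surface model minus terrain model
    return chm > min_height  # 0.5 m mirrors the 50 cm threshold mentioned above

# Toy 4x4 rasters (metres): columns 0 and 2 simulate vine rows, the rest is ground
dsm = np.array([[101.8, 100.1, 101.9, 100.0],
                [101.7, 100.2, 101.8, 100.1],
                [101.9, 100.0, 101.7, 100.2],
                [101.8, 100.1, 101.9, 100.1]])
dtm = np.full_like(dsm, 100.0)
print(canopy_mask(dsm, dtm))
```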

 

 


Reviewer 2 Report

Dear authors,


thank you for revising this manuscript.


In my first revision I wrote:

I have read your study on detecting vine canopy from UAV data. Although I see a need for scientific attention to develop approaches based on remote sensing in precision agriculture, I do not think your manuscript contains enough novelty in contributing to this development and therefore suggest rejecting it.
In my opinion, this is a study on applying unsupervised approaches to detect vine canopy that lacks a thorough justification for selecting these approaches. The introduction could be improved to make this clearer. The contribution is intended to lie in comparing these approaches. However, the comparison is not well described and designed, so that no clear conclusions can be drawn. Overall, the study does not contain a novel approach, nor is the comparative analysis designed or presented in a way that allows significant conclusions. In general, the language, as well as the structure and presentation of information, can be improved.


I would now like to accept your manuscript. You have addressed the points marked throughout the manuscript, which has improved the manuscript and made the study more comprehensive. I still have my doubts about the novelty and scientific contribution of this study, which is why my overall merit rating is still low. However, I can imagine that the paper is of interest to application-oriented readers.


Kind regards.

Author Response

I would now like to accept your manuscript. You have addressed the points marked throughout the manuscript, which has improved the manuscript and made the study more comprehensive. I still have my doubts about the novelty and scientific contribution of this study, which is why my overall merit rating is still low. However, I can imagine that the paper is of interest to application-oriented readers.

 

Reply: We are grateful to the reviewer for the professionalism demonstrated.


Reviewer 3 Report

The manuscript has been modified based on the review comments. However, the paper still fails to express the specific "objective/goal/purpose" of the study. This is needed to draw the conclusion of the study.

Author Response

The manuscript has been modified based on the review comments. However, the paper still fails to express the specific "objective/goal/purpose" of the study. This is needed to draw the conclusion of the study.


Reply: We are grateful to the reviewer for the strong effort in helping us to improve the paper during the review stage. At this point we think that the objective/goal/purpose of the work has been clarified.


This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The manuscript presents different unsupervised methods for vineyard vegetation detection using UAV-based multispectral imagery. Different methods were tested, namely K-means clustering, several HSV-based approaches and an approach based on digital elevation models. Method performance and computational time were evaluated. However, the manuscript needs several changes, which are pointed out in the following text.

The study is very similar to the one conducted by Poblete-Echeverría et al. (cited in the manuscript), where supervised and unsupervised methods were tested. However, in that study only RGB imagery was tested; this study includes multispectral imagery, but only unsupervised methods were tested.

Some questions and comments:

Were ground control points used for imagery alignment?

How was the point cloud interpolated to generate the DEM?

Which sensor was used to compute the DEM?

What was the point cloud density per m2 of the different studied sites obtained during photogrammetric processing? Did it differ much between sensors?

Regarding the methods tested for vine vegetation segmentation, why did the authors not test vegetation indices? They provide a good and quick way for this purpose; according to [19], they obtained accuracies similar to supervised machine learning methods (artificial neural networks). In the reviewer's opinion this could have been tested, since their application only involves simple arithmetic operations and an image thresholding step for binarization.
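To make the suggestion concrete, a minimal Python sketch follows (an illustrative assumption, not part of the reviewed manuscript; the band arrays and the 0.4 NDVI threshold are placeholders). It computes NDVI with simple arithmetic and binarizes it with a threshold, which is the kind of vegetation-index segmentation referred to above:

```python
# Minimal sketch (illustrative assumption): vegetation-index-based binarization.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, guarded against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def vegetation_mask(nir: np.ndarray, red: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Binarize NDVI: pixels above the threshold are treated as vegetation."""
    return ndvi(nir, red) > threshold

# Toy 2x3 reflectance patches: columns 0 and 2 vegetated, column 1 bare soil
nir = np.array([[0.55, 0.30, 0.60], [0.50, 0.28, 0.58]])
red = np.array([[0.08, 0.25, 0.07], [0.09, 0.24, 0.08]])
print(vegetation_mask(nir, red))
```

The fixed 0.4 threshold is only a placeholder; an automatic method such as Otsu's could be used for the binarization step instead.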

What is the usefulness of estimating shadows in a vineyard? The shadows can affect the algorithms' detection accuracy, of course, but they should not be treated specially. Moreover, not all the methods described in the manuscript are able to detect shadows (as in the case of the DEM). In a vineyard it is rare that shadows from vine canopies overlap each other. However, there are cases where shadows from adjacent vegetation, such as trees, fall upon the vines, and then only the detection of vine vegetation is influenced. If the authors were faced with this issue (shadowed canopy), what would be considered: vine vegetation discriminated among the shadows, or everything as shadow?

A point that, in the reviewer's opinion, raises some concerns is the comparison methodology. In the results, the exact detection was not quantitatively evaluated, only visually. This way, future studies do not have a chance to compare against the results obtained.
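One possible way to quantify such a comparison, given here only as an assumed formulation and not as the manuscript's metric definitions, is to express exact-, over- and under-detection as fractions of a manually delineated reference canopy area:

```python
# Minimal sketch (assumed formulation, not the manuscript's metric definitions):
# compare a predicted canopy mask against a manually delineated reference mask.
import numpy as np

def detection_rates(predicted: np.ndarray, reference: np.ndarray) -> dict:
    """predicted, reference: boolean masks of the same shape; rates relative to reference area."""
    ref_area = reference.sum()
    exact = np.logical_and(predicted, reference).sum() / ref_area   # correctly detected
    over = np.logical_and(predicted, ~reference).sum() / ref_area   # detected outside reference
    under = np.logical_and(~predicted, reference).sum() / ref_area  # reference pixels missed
    return {"exact": float(exact), "over": float(over), "under": float(under)}

# Toy example: one missed pixel and one false detection against a 4-pixel reference
reference = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
predicted = np.array([[1, 1, 1], [1, 0, 0]], dtype=bool)
print(detection_rates(predicted, reference))  # {'exact': 0.75, 'over': 0.25, 'under': 0.25}
```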

Some recommendations:

Abstract

Define NGR in its first occurrence.

Please include some results for computational performance and over- and underestimation.


Key words

Consider replacing “segmentation” with “vineyard segmentation” and “multispectral” with “multispectral imagery”.


Introduction

Some studies found in the literature are missing; please consider including them:

Chanussot, Jocelyn, Patrick Bas, and Lionel Bombrun. "Airborne remote sensing of vineyards for the detection of dead vine trees." Geoscience and Remote Sensing Symposium, 2005. IGARSS'05. Proceedings. 2005 IEEE International. Vol. 5. IEEE, 2005.

Comba, Lorenzo, et al. "Vineyard detection from unmanned aerial systems images." Computers and Electronics in Agriculture 114 (2015): 78-87.

Baofeng, Su, et al. "Digital surface model applied to unmanned aerial vehicle based photogrammetry to assess potential biotic or abiotic effects on grapevine canopies." International Journal of Agricultural and Biological Engineering 9.6 (2016): 119-130.

Primicerio, Jacopo, et al. "Individual plant definition and missing plant characterization in vineyards from high-resolution UAV imagery." European Journal of Remote Sensing 50.1 (2017): 179-186.

Weiss, Marie, and Frédéric Baret. "Using 3D point clouds derived from UAV RGB imagery to describe vineyard 3D macro-structure." Remote Sensing 9.2 (2017): 111.

Pádua, Luís, et al. "Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery." Remote Sensing 10.12 (2018): 1907.

Comba, Lorenzo, et al. "Unsupervised detection of vineyards by 3D point-cloud UAV photogrammetry for precision agriculture." Computers and Electronics in Agriculture 155 (2018): 84-95.

Lines 46-48: The authors point out that multispectral cameras are the most used sensors; a reference is missing at the end of this statement. Moreover, RGB sensors are commonly not considered multispectral sensors, since only information from the visible part of the electromagnetic (EM) spectrum is used; a sensor is considered multispectral when it has bands in other parts of the EM spectrum, as in the case of the near infrared.

Line 58: Please replace “similar colors” by “similar spectral response”.

Line 119: it is stated that MATLAB 2016 was used, but [30] refers to the 2012b release. Please consider removing the reference and presenting the correct version used in the study presented in this manuscript. Instead of providing a reference, the authors can present it this way: “… developed in MATLAB (version 2016, MathWorks Inc., Massachusetts, USA).”

Consider reducing the text of the penultimate paragraph and including it in the materials and methods section.


Materials and methods

Throughout the manuscript there are some strikethrough words; please correct them.

Line 206: please add a reference to the image mentioned in this sentence.

Table 1: add the wavelength (nm) for each band of the Tetracam ADC Snap.

Line 223: replace “algorithm” by “algorithms”.

Line 227: apart from vines, shadows and soil, some weeds are also identifiable in the imagery.

Line 235: This sentence is unnecessary.

Lines 239-240: According to https://www.mathworks.com/help/images/convert-from-hsv-to-rgb-color-space.html, the representation of the HSV color space is a cone; please modify here and in other parts of the manuscript if necessary.

Regarding the K-means implementation, it seems that it is based on the MATLAB example, available here. Moreover, the authors did not provide any reasonable explanation of why five clusters were used, since the goal was to detect three different classes: vines, soil and shadows. For this reason, and since no further testing is documented in the manuscript, the number of clusters should have been tested.
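To illustrate how the choice of the number of clusters could be tested, a minimal Python sketch follows (an illustrative assumption, not the manuscript's MATLAB code): it clusters pixel spectra with K-means for several values of k and reports the within-cluster sum of squares, which can be inspected to justify a particular k.

```python
# Minimal sketch (illustrative assumption): K-means on pixel spectra for several k.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic 40x40 three-band image drawn from three spectral groups plus noise
groups = np.array([[0.1, 0.2, 0.1],   # shadow-like
                   [0.4, 0.3, 0.2],   # soil-like
                   [0.2, 0.5, 0.6]])  # canopy-like
labels = rng.integers(0, 3, size=(40, 40))
image = groups[labels] + rng.normal(0.0, 0.02, size=(40, 40, 3))

pixels = image.reshape(-1, image.shape[-1])  # (n_pixels, n_bands) feature matrix
for k in (3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    print(f"k={k}: within-cluster sum of squares = {km.inertia_:.2f}")
# For the chosen k, the segmentation map is km.labels_.reshape(image.shape[:2]).
```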

Line 349: According to the MATLAB online documentation, there is no function called “imopath”. Correct accordingly.

Figure 5: can be moved to results.

Line 384: The authors refer to a “Guide User Interface (GUI)”; do they mean “Graphical User Interface (GUI)”? MATLAB has GUIDE (GUI Development Environment) to build GUIs. Please correct and present a screenshot of the developed GUI.

Moreover, the text explaining the HSV algorithms is confusing. The text could be simplified, and a table could be added explaining the values used for H, S and V.

In the first paragraph of the comparison methodology, a contingency table is referred to, but no such table is presented in the results.

Results

Line 421: Define the CPU model of the workstation.

Why use the entire generated orthophoto mosaic resulting from the photogrammetric processing? It clearly contains some areas that are not vineyards and are outliers, since they are located at the boundaries of the acquired imagery; those areas have low overlap in the photogrammetric processing and usually result in distorted parts in the orthorectification phase. This can influence the obtained results if areas outside the vineyards were considered. In Figure 6 it is visible that vegetation around the vineyard was also detected.

The HSV-G masks (Figure 7) seem to be detecting the inverse of the vineyard vegetation.

The boundaries of each tested algorithm in Figures 7 and 11 are confusing; please add zoomed images where these lines are clearly visible, especially in Figure 11, where this is mostly not visible. If the authors added some opacity to the background image (orthomosaic), the algorithm results could be seen more clearly.

Line 494: add a reference to the figure where you state “… see orthomosaic application on P18, …“.


Discussion

The discussion could have been more incisive in directly comparing the results obtained from this study, i.e. under-, over- and exact-detection rates.

There are some studies missing from the discussion, for instance:

It is stated that the method from de Castro et al. [22] is able to detect missing vine plants; however, the following studies also provide some insights into this topic:

Chanussot, Jocelyn, Patrick Bas, and Lionel Bombrun. "Airborne remote sensing of vineyards for the detection of dead vine trees." Geoscience and Remote Sensing Symposium, 2005. IGARSS'05. Proceedings. 2005 IEEE International. Vol. 5. IEEE, 2005.

Baofeng, Su, et al. "Digital surface model applied to unmanned aerial vehicle based photogrammetry to assess potential biotic or abiotic effects on grapevine canopies." International Journal of Agricultural and Biological Engineering 9.6 (2016): 119-130.

Primicerio, Jacopo, et al. "Individual plant definition and missing plant characterization in vineyards from high-resolution UAV imagery." European Journal of Remote Sensing 50.1 (2017): 179-186.

Weiss, Marie, and Frédéric Baret. "Using 3D point clouds derived from UAV RGB imagery to describe vineyard 3D macro-structure." Remote Sensing 9.2 (2017): 111.

Pádua, Luís, et al. "Vineyard properties extraction combining UAS-based RGB imagery with elevation data." International Journal of Remote Sensing (2018): 1-25.

It is stated that the method from de Castro et al. [22] is able to compute vine biomass; the following studies also point in some directions on this topic:

Weiss, Marie, and Frédéric Baret. "Using 3D point clouds derived from UAV RGB imagery to describe vineyard 3D macro-structure." Remote Sensing 9.2 (2017): 111.

Pádua, Luís, et al. "Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery." Remote Sensing 10.12 (2018): 1907.

Just a comment: the authors state that NRG imagery is not explored, which is correct; however, this band combination is not that common in most sensors. Modified RGB cameras, without the infrared filter, usually provide NIR, green and blue. Other multispectral cameras provide images in separate bands, resulting in multiple single-band images.

There are some more studies that also proposed DEM-based vineyard detection and should be included in the manuscript:

Baofeng, Su, et al. "Digital surface model applied to unmanned aerial vehicle based photogrammetry to assess potential biotic or abiotic effects on grapevine canopies." International Journal of Agricultural and Biological Engineering 9.6 (2016): 119-130.

Kalisperakis, I., et al. "Leaf area index estimation in vineyards from UAV hyperspectral data, 2D image mosaics and 3D canopy surface models." The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 40.1 (2015): 299.

Burgos, S., et al. "Use of very high-resolution airborne images to analyse 3D canopy architecture of a vineyard." The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 40.3 (2015): 399.


Figures and tables

Figure 1: there is no need to have repeated images of the sensors; instead, why not enumerate the places where the photos were taken, if different, as (a) and (b), and the sensors as (c) and (d)?

Figure 2: the text size is disproportionate with respect to the figure size; please reduce it. Please add a scale bar to, at least, (c), (d) and (e). In the legend of this figure, (c), (d) and (e) are referred to as DEMs. However, they appear to be 3D models instead; please correct if necessary.

Figure 5: this figure's text needs to be resized, especially in the chart of the k-means results.

Figure 9: there are cases where the overestimation in HSV-G is not visible at all; please correct.

Figure 12: add a horizontal line at the zero value, similarly to Figure 9.

According to the journal rules, please add scale bars and coordinates to the images representing maps.

Reviewer 2 Report

Dear authors,

I have read your study on detecting vine canopy from UAV data. Although I see a need for scientific attention to develop approaches based on remote sensing in precision agriculture, I do not think your manuscript contains enough novelty in contributing to this development and therefore suggest rejecting it.

In my opinion, this is a study on applying unsupervised approaches to detect vine canopy that lacks a thorough justification for selecting these approaches. The introduction could be improved to make this clearer. The contribution is intended to lie in comparing these approaches. However, the comparison is not well described and designed, so that no clear conclusions can be drawn. Overall, the study does not contain a novel approach, nor is the comparative analysis designed or presented in a way that allows significant conclusions.

In general, the language, as well as the structure and presentation of information can be improved.

Please find further explanations inserted as comments on specific examples throughout the manuscript.

Comments for author File: Comments.pdf

Reviewer 3 Report

Please see the attached file.

Comments for author File: Comments.docx
