Detection of Crop Damage in Maize Using Red–Green–Blue Imagery and LiDAR Data Acquired Using an Unmanned Aerial Vehicle
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript presents an interesting study on the use of UAVs equipped with RGB cameras and LiDAR sensors to detect crop damage in maize fields. The authors have addressed an important issue in agriculture, namely the timely and accurate assessment of crop damage caused by wild animals.
Specific comments:
1. In Figure 1, the authors selected a rectangular area of a whole maize field as the analysis target. How would the method handle fields with irregular shapes?
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This paper studies the use of RGB images obtained from UAVs and LiDAR data to assess corn crop damage caused by wildlife (especially wild boars). The overall structure of the paper is clear and provides important guidance for agricultural production practices. The article is well-written and based on sufficient research. However, the following points need to be addressed to enhance the paper’s validity:
Section 2.1: Please elaborate on the specific characteristics of the selected experimental area, such as climate conditions and other relevant factors.
Section 2.2: "Manual selection of crop damage area" — Please clarify the basis for the manual selection.
"Whereas the largest contiguous damaged area reached 3775 m²" — How is the damage defined? Please provide further explanation.
In Table 1, please explain the reason for selecting the value 1.5. Would other parameter choices influence the identification of the damaged area?
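The reviewer's question about the 1.5 parameter can be made concrete with a short sketch. Assuming (hypothetically) that 1.5 is a crop-height threshold in metres applied to a 1 m resolution height raster, the detected damage area depends directly on that choice; the synthetic raster below is illustrative only, not the manuscript's data.

```python
import numpy as np

# Synthetic 1 m crop-height raster (metres); values are illustrative only.
rng = np.random.default_rng(0)
crop_height = rng.uniform(0.0, 3.0, size=(100, 100))

# Cells below the threshold are flagged as damaged; each cell is 1 m^2,
# so the count of flagged cells is the damaged area in m^2.
for threshold in (1.0, 1.5, 2.0):
    damaged_area_m2 = int((crop_height < threshold).sum())
    print(threshold, damaged_area_m2)
```

The loop shows that the reported damage area grows monotonically with the threshold, which is exactly why the basis for choosing 1.5 matters.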
In the Discussion section, it is recommended to use tables or other visual methods to illustrate the findings. This would better highlight the advantages and disadvantages of each model.
In the Conclusions section, you mention the combination of DSM and deep learning models. Could you briefly explain how this combination improves the approach? The Conclusions section needs more detailed discussion and suggestions for improvement. Comparing results with existing literature is crucial for a comprehensive analysis. This comparison helps place the findings in context, identifies differences or similarities with past research, and strengthens the interpretation and impact of the results. Addressing the study’s limitations and suggesting future research directions would also improve this section's quality.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Detection of Crop Damage in Maize Using RGB Imagery and LiDAR Data Acquired Using UAV - agronomy-3416398
After reviewing your manuscript, I suggest that it could be significantly improved by expanding on certain areas to enhance its clarity and overall scientific impact. The methodology section, in particular, needs a more comprehensive explanation to help readers grasp your techniques and processes fully. Moreover, incorporating a wider selection of relevant references would deepen the foundation of your arguments and broaden the context of your analysis. Additionally, I noticed some ambiguity in the results section. Clearing up these unclear parts will undoubtedly strengthen the persuasiveness and effectiveness of your findings.
Lines 50-61: Considering your explanation of the use of RGB photos and multispectral cameras for assessing crop damage by wild animals, could you elaborate on the specific advantages of using RGB images over multispectral images, aside from cost? How do vegetation indices like NDVI perform in differentiating between damaged and undamaged crop areas during the growing season, and what are the limitations of this method as the season progresses? Furthermore, could you discuss the role of digital surface models (DSM) in evaluating crop damage and the comparative benefits of using data from RGB-equipped UAVs versus LiDAR sensors for creating these models? Also, considering the high cost of LiDAR sensors, are there emerging technologies or methods that could potentially lower the barriers to their use in agricultural assessments?
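The NDVI the reviewer mentions is computed per pixel from near-infrared and red reflectance; the sketch below uses illustrative synthetic band values (not data from the study) and underlines why the RGB-versus-multispectral comparison matters: plain RGB cameras provide no NIR band, so NDVI is unavailable to them.

```python
import numpy as np

# Illustrative NIR and red reflectance values; the low-NIR pixel at [1, 0]
# stands in for damaged or bare ground.
nir = np.array([[0.50, 0.48],
                [0.20, 0.45]])
red = np.array([[0.10, 0.12],
                [0.18, 0.11]])

# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation scores high,
# damaged or bare areas score near zero.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

The damaged-ground pixel yields an NDVI near 0.05 while healthy-canopy pixels sit around 0.6, which is the contrast vegetation-index methods rely on while the canopy is green; that contrast weakens as the crop senesces late in the season.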
Lines 103-108: In discussing the manual selection of crop damage areas using high-resolution RGB orthophoto image processing, it would be beneficial to consider these studies:
https://doi.org/10.1016/j.marenvres.2024.106780
https://doi.org/10.1016/j.jenvman.2023.118226
doi:10.3850/38WC092019-1215
These studies can provide additional context or validate the methodology employed in your study, especially if it includes similar methods or recent advancements in the field of remote sensing and damage assessment. Including these references will enrich your discussion by linking it to relevant research, potentially offering new insights or corroborative evidence that enhances the credibility of your findings. This will also help to showcase the relevance of your methods within the broader scientific community and could introduce more advanced or alternative techniques that might improve the precision of damage assessments.
Lines 117-128: Considering your use of LiDAR data to detect crop damage through DSM and DEM, could you elaborate on how the differentiation between these models enhances the detection process? Specifically, how does subtracting the DEM from the DSM improve the accuracy of identifying damaged areas? Additionally, how does the spatial resolution of 1 meter influence the precision and reliability of your damage assessments?
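The DSM-minus-DEM operation the reviewer asks about can be sketched directly: subtracting the bare-ground elevation (DEM) from the surface elevation (DSM) yields a crop-height model, and low cells indicate flattened vegetation. The raster values and the 1.5 m cutoff below are illustrative assumptions, not the study's actual data.

```python
import numpy as np

# Hypothetical 1 m rasters in metres: DSM (top of canopy) and DEM (bare ground).
dsm = np.array([[102.4, 102.6, 100.9],
                [102.5, 101.0, 100.8],
                [102.3, 102.5, 102.6]])
dem = np.array([[100.2, 100.3, 100.1],
                [100.2, 100.1, 100.0],
                [100.1, 100.2, 100.3]])

# Subtracting DEM from DSM removes terrain relief, leaving vegetation height,
# so low values reflect flattened crops rather than low-lying ground.
chm = dsm - dem

# Flag cells below an assumed 1.5 m height threshold as damaged;
# at 1 m resolution each flagged cell contributes 1 m^2 of damage.
damage_mask = chm < 1.5
damaged_area_m2 = damage_mask.sum() * 1.0 * 1.0
print(damaged_area_m2)
```

This also illustrates the reviewer's resolution point: at 1 m cell size the area estimate is quantised to whole square metres, so sub-metre damage patches and edges are smoothed over.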
Lines 133-145: Given the use of deep convolutional neural networks (CNNs) and transformers for automated crop damage detection, could you discuss the specific features or capabilities of these models that make them suitable for analyzing high-resolution RGB imagery of maize fields? Additionally, how does the integration of the Deepness plugin with QGIS 3.40 enhance the analysis process? Moreover, can you elaborate on the steps involved in training the neural networks and transformers? What type of data was used for training, and how were the models validated against the reference dataset? Considering the high spatial resolution of 2.1 cm, how does this level of detail affect the model's performance in detecting subtle variations in crop damage?
Lines 194-210: Given the higher accuracy of the DSM-based method for crop damage evaluation compared to the deep neural network method, yet considering its lower precision and significantly poorer sensitivity, how can the use of the deep neural network method be justified in practical scenarios despite these limitations?
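The seemingly paradoxical combination the reviewer highlights, higher overall accuracy alongside lower precision and sensitivity, is possible when damaged pixels are a small minority of the field. The confusion-matrix counts below are invented for illustration and are not the manuscript's results.

```python
# Standard per-pixel classification metrics from a confusion matrix.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # a.k.a. recall
    return accuracy, precision, sensitivity

# Hypothetical counts over 1000 pixels, 100 of which are truly damaged.
dsm_based = metrics(tp=30, fp=40, fn=70, tn=860)  # misses much of the damage
deep_net  = metrics(tp=80, fp=100, fn=20, tn=800) # finds it, over-predicts

print(dsm_based)
print(deep_net)
```

Because true negatives dominate an imbalanced scene, the DSM method's accuracy can edge out the network's even while the network recovers far more of the actual damage, which is one practical justification for preferring it despite the accuracy figure.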
Lines 266-276: Considering the shift from the successful application of DSM in a previous study, where a spatial filter with the "edge" option effectively identified small, scattered damaged areas, to its inadequacy in the current study dealing with larger damaged sections, how do you account for this variation in effectiveness? Specifically, what modifications or alternative approaches would you recommend to enhance the precision of DSM in scenarios involving extensive damage?
Lines 309-314: Given the assertion that DSM-based methods excel in quantifying large damage areas but may overlook smaller clusters of undamaged plants within these zones, how might the integration of DSM and deep learning models specifically enhance the accuracy and granularity of crop damage assessments? Can you elaborate on the potential synergies between these methodologies and how they might compensate for each other's weaknesses in various agricultural scenarios? Furthermore, what specific metrics or criteria would you propose to evaluate the effectiveness of this combined approach across different crop types and damage patterns?
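One possible fusion along the lines the reviewer asks about, purely illustrative and not the authors' method, is to let the DSM delineate large damage zones while the deep-learning mask carves out the small undamaged plant clusters inside them; intersection-over-union then serves as one concrete agreement metric for evaluating the combination.

```python
import numpy as np

# Hypothetical binary masks over the same grid (True = damaged).
dsm_mask = np.array([[1, 1, 1, 0],
                     [1, 1, 1, 0],
                     [1, 1, 1, 0]], dtype=bool)  # coarse: whole zone flagged
dl_mask  = np.array([[1, 0, 1, 0],
                     [1, 0, 1, 0],
                     [1, 1, 1, 0]], dtype=bool)  # fine: spares healthy plants

# Keep a cell only where both methods agree: the DSM bounds the damage zone,
# the deep-learning mask restores undamaged clusters within it.
combined = dsm_mask & dl_mask

# Intersection over union as one agreement criterion between the two masks.
iou = (dsm_mask & dl_mask).sum() / (dsm_mask | dl_mask).sum()

print(combined.sum(), round(float(iou), 3))
```

Under this sketch the combined mask drops the two cells the network judges undamaged inside the DSM zone; evaluating such a scheme across crop types would mean reporting IoU, precision, and sensitivity against a manually delineated reference per damage pattern.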
Comments on the Quality of English Language
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors