A Method for Predicting Canopy Light Distribution in Cherry Trees Based on Fused Point Cloud Data
Round 1
Reviewer 1 Report
The manuscript presents a method for predicting canopy light distribution using point cloud data.
The manuscript is well organized and every part of the research is clearly presented.
The chosen acquisition, preprocessing, and processing methods are appropriate, and their relevance to the research is well described.
My only question is whether the authors considered using a terrestrial laser scanner (TLS) instead of the Azure Kinect DK system for point cloud collection. TLS can provide millimeter registration accuracy rather than the 5-7 mm obtained by the DK sensor.
I found only a small number of typing errors.
Author Response
Point 1: My only question is whether the authors considered using a terrestrial laser scanner (TLS) instead of the Azure Kinect DK system for point cloud collection. TLS can provide millimeter registration accuracy rather than the 5-7 mm obtained by the DK sensor.
Response 1: During the 3D point cloud data collection process, we did consider a terrestrial laser scanner, which offers high data accuracy, large data volumes, and high registration accuracy, but at a high price. The Azure Kinect DK is Microsoft's latest depth camera, following the Kinect V1 and Kinect V2 products, and combines low cost, small size, and high performance. Therefore, we chose the DK as the point cloud acquisition device, which improves the method's applicability in agriculture.
Point 2: I found only a small number of typing errors.
Response 2: We are very sorry for the typing errors and have revised the manuscript at Line 28, Line 363, and Line 543.
Author Response File: Author Response.pdf
Reviewer 2 Report
A reasonable canopy structure of fruit trees is beneficial to the effective distribution of internal light. Based on cherry tree point cloud data, a prediction model of cherry canopy light distribution is established to reveal the relationship between the canopy structure of fruit trees and their internal light. The research results can provide technical support for pruning and scientific cultivation of cherry trees, and are of great significance for optimizing the canopy structure of fruit trees and improving fruit yield and quality.
This manuscript builds a vision system based on dual Azure Kinect DK cameras, which can quickly and completely obtain cherry tree color point cloud data by scanning cherry trees from both front and rear stations.
This manuscript proposes a global alignment method for cherry tree point clouds to obtain a complete 3D point cloud model of the cherry tree, including a coarse point cloud alignment algorithm based on ISS key points and an improved iterative closest point (ICP) algorithm based on a bidirectional KD-tree. Compared with other alignment methods, the proposed method effectively reduces alignment error and alignment time, and shows good robustness and effectiveness.
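For concreteness, the bidirectional KD-tree idea summarized above can be sketched in a few lines. The following is not the authors' implementation but a minimal NumPy/SciPy illustration, under the assumption that "bidirectional" means keeping only mutually nearest point pairs (checked with a KD-tree in each direction) before solving the rigid transform with the SVD-based Kabsch method; the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step_bidirectional(src, dst):
    """One ICP iteration that keeps only mutual nearest-neighbor pairs
    (a bidirectional KD-tree check), then solves the rigid transform
    with the SVD-based Kabsch method."""
    # forward: each source point's nearest neighbor in dst
    _, fwd = cKDTree(dst).query(src)
    # backward: each destination point's nearest neighbor in src
    _, bwd = cKDTree(src).query(dst)
    # keep pairs (i, fwd[i]) that agree in both directions
    mutual = np.array([i for i in range(len(src)) if bwd[fwd[i]] == i])
    p, q = src[mutual], dst[fwd[mutual]]
    # Kabsch: optimal rotation/translation between the matched sets
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return (src @ R.T) + t, R, t
```

In practice this step would be iterated until the alignment error stops decreasing; the mutual-pair filter discards one-sided correspondences that would otherwise bias the estimated transform.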
This manuscript proposes a method for quantifying the canopy structure of cherry trees, which uses an alpha-shape-based concave hull algorithm to calculate point cloud projected area features, and an OBB-based minimum bounding box extraction algorithm to calculate the surface area and volume features of the point cloud's minimum bounding box. On this basis, a GBRT-based light prediction model for cherry tree canopies is proposed, which takes the relative projected area features and the relative surface area and volume features of the minimum bounding box as inputs and the relative light intensity as output. Compared with other machine learning models, the prediction accuracy of the GBRT model is significantly improved, and it can predict the illumination distribution of the cherry canopy more accurately.
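The feature-to-prediction pipeline described above can be sketched as follows. This is not the authors' code: the OBB is approximated here with the common PCA-based method (rotate into the principal axes and take the axis-aligned extents there), and GBRT is stood in for by scikit-learn's GradientBoostingRegressor; the training data, target rule, and hyperparameters are entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def obb_features(points):
    """Approximate the oriented bounding box (OBB) of a point cloud via
    PCA, then return the box's surface area and volume."""
    centered = points - points.mean(0)
    # principal axes = eigenvectors of the 3x3 covariance matrix
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    extents = centered @ vecs                 # coordinates in the OBB frame
    l, w, h = extents.max(0) - extents.min(0)
    surface_area = 2.0 * (l * w + l * h + w * h)
    volume = l * w * h
    return surface_area, volume

# hypothetical training set: each row holds (relative projected area,
# relative OBB surface area, relative OBB volume); the target is a
# relative light intensity in [0, 1] generated by a synthetic rule
rng = np.random.default_rng(42)
X = rng.random((300, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]

model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.1, random_state=0)
model.fit(X, y)
```

A PCA-based OBB is only an approximation of the true minimum bounding box, but for elongated canopies with distinct principal directions it recovers the box axes well, which is why it is a common stand-in for exact OBB algorithms.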
This manuscript provides a valuable exploration of the construction and optimization of a cherry canopy light distribution prediction model. The authors put forward effective methods and ideas, especially in point cloud alignment and feature extraction. The topic falls within the scope of this journal. The basic idea of the manuscript is clear, and the paper is concisely written. A comprehensive experimental analysis is also provided.
However, some places in the manuscript can be further improved:
1. Lines 64-68: the author did not state the accuracy of the lighting prediction method proposed in reference 19.
2. Figure 2(a) should be improved by adding details of the cherry tree point cloud data collection experiments.
3. Section 2.2.3: the author should add an introduction to the use and settings of the HOBO temperature/luminosity data logger.
4. Figure 4 should be improved by adding details of the light data acquisition experiment.
5. Section 2.3.1: the author should add the calculated positional transformation matrix (T_1) of the two depth cameras.
6. The prediction accuracy of the GBRT model is missing in Table 8; the author should supplement it.
7. Please separate the results and discussion into two parts according to the requirements of Remote Sensing.
8. There are some grammar or writing mistakes. Please check the whole paper. Some examples are:
1)Line 556, “because the it only” should be “because it only”
2)Line 603, “enclosing box” should be “bounding box”
Author Response
Point 1: Lines 64-68: the author did not state the accuracy of the lighting prediction method proposed in reference 19.
Response 1: The accuracy of the lighting prediction method proposed in reference 19 is 86.4%. We have added it on Line 69.
Point 2: Figure 2(a) should be improved by adding some details according to cherry tree point cloud data collection experiments.
Response 2: We have modified Figure 2(a) to add details such as the distance between the vision system and the cherry tree trunk and the distance between the target ball and the trunk, which more intuitively show the cherry tree point cloud collection experiment strategy.
Point 3: Section 2.2.3: The author should add an introduction to the use and settings of HOBO temperature/luminosity data logger.
Response 3: The HOBO data logger has a light measurement range of 0-320,000 lux and a response time of 10 min. It is connected to the computer through the built-in coupler and launched with the accompanying HOBOware software, which is used to set the start time and sampling interval of the light intensity data acquisition. We have added this between Lines 215 and 218.
Point 4: Figure 4 should be improved by adding some details according to the light data acquisition experiment.
Response 4: We have modified Figure 4 to add details such as the distance between two adjacent HOBO data loggers and the distance between the light collection devices and the cherry tree trunk, which more intuitively show the light data collection experiment strategy.
Point 5: Section 2.3.1: The author should add the calculated positional transformation matrix of the two depth cameras.
Response 5: Based on the RANSAC-ORB calibration method, we have calculated the positional transformation matrix between the two depth cameras and added it between Lines 297 and 299.
Point 6: The prediction accuracy of the GBRT model is missing in Table 8, and the author should supplement it.
Response 6: We have added the prediction accuracy of the GBRT model in Table 7.
Point 7: Please separate the results and discussion into two parts according to the requirements of Remote Sensing.
Response 7: We are very sorry for the incorrect structure and have separated the results and discussion into two parts according to the requirements of Remote Sensing.
Point 8: There are some grammar or writing mistakes. Please check the whole paper. Some examples are:
1)Line 556, “because the it only” should be “because it only”
2)Line 603, “enclosing box” should be “bounding box”
Response 8: We are very sorry for the grammar and writing mistakes and have revised them on Line 561 and Line 630.
Reviewer 3 Report
see appendix
Comments for author File: Comments.pdf
Minor editing of English language required
Author Response
Please see the attachment
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
All comments have been addressed.