Article
Peer-Review Record

Optimization of the Canopy Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction

Agriculture 2025, 15(7), 729; https://doi.org/10.3390/agriculture15070729
by Xiuni Li 1,2,3, Menggen Chen 1, Shuyuan He 1, Xiangyao Xu 1, Panxia Shao 1, Yahan Su 1, Lingxiao He 1, Jia Qiao 4, Mei Xu 1, Yao Zhao 1,2,3, Wenyu Yang 1,2,3, Wouter H. Maes 5,* and Weiguo Liu 1,2,3,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 11 March 2025 / Revised: 27 March 2025 / Accepted: 27 March 2025 / Published: 28 March 2025
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  1. Lines 110-136: Can the method of collecting pictures during the growth period of soybeans planted in pots well reflect the actual intercropping mode?
  2. Lines 140-155: What limitations does the self-developed automatic shooting software have in dealing with complex lighting conditions or sudden changes in environmental factors during the image collection of soybean plants? How were these limitations avoided? Please provide additional explanations.
  3. Line 161: What is the basis for setting the number of images to 36 for each angle? Is this number the optimal choice that balances the number of images for a complete 3D reconstruction model and the computational load of the model? The content of Line 348 belongs to the experimental part. Should it be considered to be written at the position of Line 161?
  4. Can the images in Figure 7 be made clearer?
  5. It is recommended to write the discussion section and the conclusion section separately. The discussion section should focus on emphasizing the advantages and disadvantages of the methods adopted in this paper, and the conclusion section should mainly present the core research results of this paper.

Author Response

Dear Reviewers, Thank you for your valuable comments and suggestions on this manuscript. The authors have carefully addressed each point, and all revisions have been marked in red within the manuscript for your convenience.

Reviewer 1:

  1. Lines 110-136: Can the method of collecting pictures during the growth period of soybeans planted in pots well reflect the actual intercropping mode?

Response: Yes, this is feasible. The high-throughput phenotyping platform used in this study was specifically designed to allow soybeans to grow under real outdoor field conditions while enabling stable image acquisition indoors, thereby ensuring the accuracy of image-derived parameters. This approach yields phenotypic data that more closely reflect the true characteristics of soybean plants than data obtained from indoor cultivation. Furthermore, the authors have previously published two academic papers based on this phenotyping platform and the pot-based cultivation system.

  2. Lines 140-155: What limitations does the self-developed automatic shooting software have in dealing with complex lighting conditions or sudden changes in environmental factors during the image collection of soybean plants? How were these limitations avoided? Please provide additional explanations.

Response: Dear Reviewer, you are absolutely correct that environmental variations during image acquisition are highly important and deserve careful consideration. To ensure the stability of the imaging environment, we adopted a field-based pot cultivation approach, allowing soybeans to grow under real field conditions while capturing images indoors under stable lighting conditions. Additionally, we recognize that extreme weather events (such as strong winds and heavy rain) may impact soybean plant architecture. Therefore, in this revision, the authors have added a discussion regarding the influence of weather conditions on image acquisition in the discussion section.

The specific imaging environment is shown in the figure below:

  3. Line 161: What is the basis for setting the number of images to 36 for each angle? Is this number the optimal choice that balances the number of images for a complete 3D reconstruction model and the computational load of the model? The content of Line 348 belongs to the experimental part. Should it be considered to be written at the position of Line 161?

Response: Existing studies have shown that multi-view 3D reconstruction typically requires between 20 and 90 images [39]. The authors conducted extensive preliminary experiments and, to balance time cost with reconstruction quality, tested four image quantities (24, 36, 48, and 72 images) across different soybean growth stages.

Following your suggestion, the content previously located at line 348 has been moved to line 161.

  4. Can the images in Figure 7 be made clearer?

Response: The resolution of Figure 7 has been improved.

 

  5. It is recommended to write the discussion section and the conclusion section separately. The discussion section should focus on emphasizing the advantages and disadvantages of the methods adopted in this paper, and the conclusion section should mainly present the core research results of this paper.

Response: Following your suggestion, the discussion and conclusion sections have been separated.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The research paper "Optimization of the Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction" is well structured and addresses a relevant, fast-evolving area of research. I would, however, propose these major revisions to strengthen the manuscript further:

  1. The research aims must be clearly outlined at the outset. Defining the problem statement and hypotheses clearly would enhance the quality of readability as well as scientific objectivity of the research. An eloquent problem statement can also assist in substantiating the utility of the proposed method.
  2. The methodology presented shows the advantages over classical methods in the quality of the images and extraction of parameters but lacks a comparative analysis with available state-of-the-art methods. Adding a comparative table with quantified error values like RMSE, MAE, and R² with the available methods will enhance the contribution of the study.
  3. Theoretically, the work covers optimization of rotational velocity, sensor orientation, and imaging angles but has nothing to say on the computation aspects of how optimizations were done. Mentioning mathematical formulas, algorithmic processes, or experimental protocols used in parameter adjustments would increase reproducibility of the research.
  4. The research establishes the V6 and R4 growth phases as the best times for yield estimation, but justification is mostly corroborative. A statistical justification, e.g., ANOVA or regression analysis, to confirm this choice would add weight to these results without increasing the complexity of the test.
  5. The explanation of the "diminishing marginal return" effect of model accuracy versus the number of images is a good one, but no value is given that can be considered a threshold above which more images do not lead to much more accuracy. Having such a threshold would be helpful.
  6. The paper mentions that the accuracy of traditional methods to estimate stem diameter is 0.890, but it doesn't provide any measure of how much better the proposed method performs. Adding an extensive error analysis between traditional and proposed methods employing statistical error metrics (e.g., RMSE, MAPE) would increase the robustness of the results.
  7. The paper writes about optimizing sensor orientation to achieve canopy details but fails to articulate the methodology utilized to calculate the best angle. Was this done by means of experimental trials, quality of images, or algorithmic optimization? A systematic study of various sensor angles with accompanying improvement in accuracy should be provided.
  8. The article notes that strip intercropping in low light leads to weaker stems but does not state how various light intensities were treated when processing images.
  9. The article mentions rotational speed optimization for enhanced image clarity but does not indicate how the optimal speed was determined. Did an empirical process take place, or were individual speed metrics tested against levels of image blur and noise? Displaying a plot of rotational speed vs. image clarity (e.g., through edge detection sharpness) would make the process more transparent.
  10. The study clarifies that it optimizes the Concave Hull and Convex Hull algorithms for canopy shape analysis but fails to clarify under what conditions one algorithm performs better than the other. Including a comparative analysis of the algorithms in terms of accuracy measures would improve the explanation for their use.
  11. The paper cites just 24 research articles, which might not be adequate to establish a good background on 3D canopy reconstruction, intercropping systems, and image processing methodologies. A stronger foundation for the study can be achieved with a more thorough literature review of current developments in UAV-based phenotyping, the use of LiDAR, and deep learning for plant trait prediction.
  12. The background research fails to adequately investigate the scope of previous work in 3D canopy reconstruction and plant phenotyping. A more detailed literature review of recent developments in UAV-based phenotyping, LiDAR uses, and deep learning for plant trait prediction would make the study more solid.
  13. There are instances of poor phrasing, grammatical flaws, and complicated sentence structures that make the text hard to read. Refining the language for coherence, concision, and clarity would enhance the overall presentation of the study. A careful proofreading or formal language editing is suggested to improve the text's fluency and technical accuracy.
Comments on the Quality of English Language

Poor phrasing, grammatical flaws, and complicated sentence structures make the text hard to read. Refining the language for coherence, precision, and clarity would enhance the overall presentation of the study. A careful proofreading or formal language editing is suggested to improve the text's fluency and technical accuracy.

Author Response

Dear Reviewers, Thank you for your valuable comments and suggestions on this manuscript. The authors have carefully addressed each point, and all revisions have been marked in red within the manuscript for your convenience.

Reviewer 2:

  1. The research aims must be clearly outlined at the outset. Defining the problem statement and hypotheses clearly would enhance the quality of readability as well as scientific objectivity of the research. An eloquent problem statement can also assist in substantiating the utility of the proposed method.

Response: The authors have outlined the research objectives in the abstract as well as in the final paragraph of the introduction. The specific content is as follows:

Abstract: This study focuses on optimizing the 3D reconstruction process for intercropped soybeans to efficiently extract canopy structural parameters throughout the entire growth cycle, thereby enhancing the accuracy of early yield prediction.

Introduction: How can soybean 3D reconstruction be optimized to enable early and accurate yield prediction? This study systematically investigates image acquisition angles, plant rotation speeds, point cloud preprocessing, and multidimensional parameter extraction across the entire soybean growth cycle. The objective is to provide technical support for the efficient and accurate acquisition of soybean 3D structural data under strip intercropping conditions and to establish a scientific basis for the precise identification of soybean germplasm resources.

  2. The methodology presented shows the advantages over classical methods in the quality of the images and extraction of parameters but lacks a comparative analysis with available state-of-the-art methods. Adding a comparative table with quantified error values like RMSE, MAE, and R² with the available methods will enhance the contribution of the study.

Response: The authors have followed your suggestion and added additional references throughout the manuscript.

  3. Theoretically, the work covers optimization of rotational velocity, sensor orientation, and imaging angles but has nothing to say on the computation aspects of how optimizations were done. Mentioning mathematical formulas, algorithmic processes, or experimental protocols used in parameter adjustments would increase reproducibility of the research.

Response: The revision has been made. The specific changes are as follows:

2.1.1. Raw Image Acquisition

Soybean Plant Image Acquisition Angles

The angle θ was calculated using the tangent function, expressed as tan(θ) = a/b, where a represents the length of the side opposite to angle θ, and b denotes the length of the adjacent side. During camera adjustments, the center of the plant was consistently aligned with the center of the image frame to ensure uniform imaging conditions.
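For illustration, the relation can be inverted to obtain the angle directly: θ = arctan(a/b). With hypothetical side lengths of a = 0.50 m and b = 0.87 m (values chosen only to show the arithmetic, not taken from the manuscript), θ = arctan(0.575) ≈ 30°, which corresponds to the capture angle reported as optimal in the conclusion.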

Soybean Plant Rotation Speed

Preliminary trials indicated a marked deterioration in image quality when the rotation speed exceeded 1.5 rpm. Therefore, four rotation speeds (0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm) were selected for testing. To determine the optimal rotation speed, image clarity under each condition was evaluated. The rotation speed was programmed via the Programmable Logic Controller (PLC) integrated into the high-throughput phenotyping platform.

Image quality was assessed using standard metrics, including Intersection over Union (IOU), Precision (PA), and Recall. The calculation formulas are as follows:

IOU = TP / (TP + FP + FN)                                   (Eq. 1)

PA = TP / (TP + FP)                                         (Eq. 2)

Recall = TP / (TP + FN)                                     (Eq. 3)

TP represents the actual plant pixel points that are correctly identified as plant points by the network; FP represents the actual background pixel points that are incorrectly identified as plant points by the network; TN represents the actual background pixel points that are correctly identified as background points by the network; FN represents the actual plant pixel points that are incorrectly identified as background points by the network.
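As a minimal sketch of how these metrics can be computed from binary segmentation masks (assuming NumPy; this illustrates the standard definitions rather than the authors' actual evaluation code):

```python
import numpy as np

def segmentation_metrics(pred_mask: np.ndarray, true_mask: np.ndarray):
    """IOU, precision (PA), and recall for boolean plant/background masks (True = plant pixel)."""
    tp = np.logical_and(pred_mask, true_mask).sum()     # plant correctly identified as plant
    fp = np.logical_and(pred_mask, ~true_mask).sum()    # background wrongly identified as plant
    fn = np.logical_and(~pred_mask, true_mask).sum()    # plant wrongly identified as background
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(iou), float(precision), float(recall)
```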

  4. The research establishes the V6 and R4 growth phases as the best times for yield estimation, but justification is mostly corroborative. A statistical justification, e.g., ANOVA or regression analysis, to confirm this choice would add weight to these results without increasing the complexity of the test.

Response: To identify the optimal growth stage for yield prediction, the authors first applied stepwise regression analysis to predict soybean yield using image-derived parameters collected across the entire growth period (14 sampling points in total). The analysis revealed that the V6 and R4 stages provided higher prediction accuracy with relatively lower errors. Therefore, scatter plots were generated to present the detailed prediction results for these two stages. The specific results are as follows for your reference:

To develop an early-stage soybean yield prediction model, this study performed stepwise regression analysis between image-derived parameters collected throughout the entire growth period and single-plant yield. As shown in Figure 8a, prediction performance progressively improved as the plants advanced through growth stages, with increasing accuracy and decreasing prediction error. The highest prediction accuracy and lowest error were observed at the V6 (vegetative) and R4 (reproductive) stages.

The stepwise regression results showed that at the V6 stage, the model achieved a prediction performance of R² = 0.503, RMSE = 2.54 g, and MAE = 2.17 g. The regression equation at this stage was y = 6.903 + 4.615·x1 + 3124.496·x2, where x1 represents the α-shape volume and x2 represents the minimum bounding box surface area (Figure 8b).

At the R4 stage, prediction performance further improved to R² = 0.625, RMSE = 2.12 g, and MAE = 1.7 g, with the regression equation y = 5.557 + 717.88·x1, where x1 represents the voxel volume (Figure 8c).

 

Figure 8. Prediction of soybean yield. a. Dynamic prediction performance of soybean yield based on image-derived parameters throughout the entire growth period. The green background represents the vegetative growth stage, while the red background represents the reproductive growth stage. b. Prediction performance of soybean yield at the V6 stage based on image-derived parameters. c. Prediction performance of soybean yield at the R4 stage based on image-derived parameters.
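As a reproducibility aid, a minimal sketch of forward stepwise selection is given below (assuming NumPy and scikit-learn; the authors' exact stepwise criteria, predictor set, and data are not given in this response, so the function name, parameter names, and the min_gain threshold are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection of image-derived parameters for yield prediction.

    X: (n_plants, n_parameters) array of image-derived parameters at one growth stage.
    y: (n_plants,) single-plant yield in grams.
    Adds the parameter that most improves R-squared; stops when the gain falls below min_gain.
    """
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = -np.inf
    while remaining:
        candidates = []
        for j in remaining:
            model = LinearRegression().fit(X[:, selected + [j]], y)
            candidates.append((r2_score(y, model.predict(X[:, selected + [j]])), j))
        r2, j = max(candidates)
        if selected and r2 - best_r2 < min_gain:
            break
        best_r2 = r2
        selected.append(j)
        remaining.remove(j)

    model = LinearRegression().fit(X[:, selected], y)
    pred = model.predict(X[:, selected])
    rmse = float(np.sqrt(mean_squared_error(y, pred)))
    mae = float(mean_absolute_error(y, pred))
    print("selected parameters:", [names[j] for j in selected])
    print(f"R2 = {r2_score(y, pred):.3f}, RMSE = {rmse:.2f} g, MAE = {mae:.2f} g")
    return model, selected
```

Applied separately to the parameters of each sampling point, the stage yielding the highest R² and lowest RMSE/MAE would then be identified; the response reports V6 and R4 as the strongest stages.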

  5. The explanation of the "diminishing marginal return" effect of model accuracy versus the number of images is a good one, but no value is given that can be considered a threshold above which more images do not lead to much more accuracy. Having such a threshold would be helpful.

Response: Please refer to Section 3.1.3. In this study, the number of images was set at four levels: 24, 36, 48, and 72. However, the experimental results indicated that the optimal thresholds were 36 images during the vegetative growth stage and 48 images during the reproductive growth stage.

  6. The paper mentions that the accuracy of traditional methods to estimate stem diameter is 0.890, but it doesn't provide any measure of how much better the proposed method performs. Adding an extensive error analysis between traditional and proposed methods employing statistical error metrics (e.g., RMSE, MAPE) would increase the robustness of the results.

Response: You are absolutely correct. Evaluation metrics should include not only accuracy but also error measurements. The authors reviewed Reference 43 and found that it only reported accuracy, without providing any information on error metrics. The original figure from that reference is shown below:

To ensure the rigor of this experiment, the authors have added the RMSE values of this study to the manuscript. The specific details are as follows:

For parameter extraction, traditional methods that simplify canopies into regular geometric shapes often introduce measurement errors (e.g., an accuracy of only 0.890 for stem diameter) [45]. In contrast, this study retrieved extreme points from the 3D point cloud and accurately extracted basic structural parameters, such as plant height and width, via coordinate calculations, achieving R² values of 0.990 and 0.950, and RMSE values of 0.018 m and 0.016 m, respectively.
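As an illustration of this extreme-point approach, a minimal sketch is given below (assuming NumPy, a cleaned single-plant point cloud with the z-axis pointing up, and width taken as the larger horizontal extent; the paper's exact conventions may differ):

```python
import numpy as np

def plant_height_width(points: np.ndarray):
    """Plant height and width from the extreme coordinates of an (N, 3) point cloud in metres."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    height = maxs[2] - mins[2]                           # vertical extent along z
    width = max(maxs[0] - mins[0], maxs[1] - mins[1])    # larger horizontal extent
    return float(height), float(width)
```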

  7. The paper writes about optimizing sensor orientation to achieve canopy details but fails to articulate the methodology utilized to calculate the best angle. Was this done by means of experimental trials, quality of images, or algorithmic optimization? A systematic study of various sensor angles with accompanying improvement in accuracy should be provided.

Response: The authors have added the specific evaluation methods in Section 2.1.1: For each angle setting, 36 images were captured and used for 3D reconstruction. The optimal imaging angle was identified by evaluating the completeness of the resulting 3D reconstruction models, focusing on aspects such as realism and the retention of structural details. A visual assessment approach was employed to determine which angle yielded the most comprehensive and accurate models.

The authors have also added the method for angle measurement: The angle θ was calculated using the tangent function, expressed as tan(θ) = a/b, where a represents the length of the side opposite to angle θ, and b denotes the length of the adjacent side. During camera adjustments, the center of the plant was consistently aligned with the center of the image frame to ensure uniform imaging conditions.

In the discussion section, it has been clarified that the optimization of the sensor angle was achieved through image quality evaluation. In addition, a discussion on the impact of different sensor angles on accuracy has been added, as detailed below: To ensure the quality of the original images, this study focused on three key factors, with the shooting angle being the most critical, as it directly influences the model’s ability to capture structural details and extract features effectively [40]. Previous research has consistently highlighted the importance of imaging angles in 3D modeling. For instance, Jiang Y et al. [41] used a depth camera to acquire top-view images of plants for 3D model construction and parameter extraction. Their results indicated that fiber yield was associated with static traits after the canopy development stage (R² = 0.35–0.71) and with growth rate during the early canopy development stage (R² = 0.29–0.52). Similarly, Andújar D et al. [42] compared the effects of four different viewing angles—top view (0°), oblique view (45°), vertical side view (90°), and ground-up view (-45°)—on 3D reconstruction performance. They found that the top view performed poorly due to upper leaves obscuring lower canopy layers, whereas other angles yielded better results. These findings are consistent with the preliminary experimental observations of this study. Building on this prior research, the present study systematically explored four gradient imaging angles for soybean and optimized them based on image quality criteria. The selected optimal angle maximized the capture of canopy structural details, thereby enhancing both the accuracy and stability of the resulting 3D models.

  [41] Jiang, Y.; Li, C.; Paterson, A.H.; et al. Quantitative analysis of cotton canopy size in field conditions using a consumer-grade RGB-D camera. Frontiers in Plant Science 2018, 8, 2233.

 

  [42] Andújar, D.; Escolà, A.; Rosell-Polo, J.R.; et al. Using depth cameras for biomass estimation – a multi-angle approach. In Precision Agriculture '15; Wageningen Academic: Wageningen, 2015; pp. 97–102.

  8. The article notes that strip intercropping in low light leads to weaker stems but does not state how various light intensities were treated when processing images.

Response: Since the weak soybean stems require careful regulation of rotation speed to ensure image quality, the authors have added details on the rotation speed control method in Section 2.1.1. The specific content is as follows: The rotation speed was programmed via the Programmable Logic Controller (PLC) integrated into the high-throughput phenotyping platform.

In addition, the authors would like to explain a common scenario in intercropping systems: "In intercropping, only compact maize varieties are typically considered, and the commonly used maize varieties tend to have similar plant architectures. As a result, the light environments they create are also very similar, making the shading differences experienced by soybeans in such systems negligible. Moreover, in this study, the authors selected Zhongyu No. 3, which is not only widely used in field production but also the most representative variety." That being said, you have raised a valid point regarding the need for scientific rigor. Therefore, the authors have added a discussion on the potential impact of different light intensities on image acquisition, which can be found in the discussion section:

In addition, under intercropping conditions, soybean stems tend to be fragile and susceptible to lodging due to limited light availability [43], making rotation speed a key factor in image clarity and reconstruction quality. To address this, the rotation speed was optimized to ensure both image stability and sharpness. However, as stem weakness is primarily influenced by shading severity, the rotation speed could be further reduced in future experiments conducted under more intense shading conditions to preserve image quality.

  9. The article mentions rotational speed optimization for enhanced image clarity but does not indicate how the optimal speed was determined. Did an empirical process take place, or were individual speed metrics tested against levels of image blur and noise? Displaying a plot of rotational speed vs. image clarity (e.g., through edge detection sharpness) would make the process more transparent.

Response: The authors have added both the evaluation methods and the corresponding results. The specific content is as follows:

Soybean Plant Rotation Speed

Image quality was assessed using standard metrics, including Intersection over Union (IOU), Precision (PA), and Recall. The calculation formulas are as follows:

IOU = TP / (TP + FP + FN)                                   (Eq. 1)

PA = TP / (TP + FP)                                         (Eq. 2)

Recall = TP / (TP + FN)                                     (Eq. 3)

TP represents the actual plant pixel points that are correctly identified as plant points by the network; FP represents the actual background pixel points that are incorrectly identified as plant points by the network; TN represents the actual background pixel points that are correctly identified as background points by the network; FN represents the actual plant pixel points that are incorrectly identified as background points by the network.

3.1.2. Plant Rotation Speed

By calculating the IOU, PA, and Recall values for images captured at different rotation speeds, it was observed that increasing the speed from 0.8 rpm to 1.2 rpm had minimal impact on image quality—IOU and Recall remained stable, while PA decreased by only 0.01. However, when the rotation speed increased from 1.2 rpm to 1.4 rpm, all three metrics—IOU, PA, and Recall—showed significant declines (Table 1). Based on a comprehensive assessment of image quality and acquisition efficiency, a rotation speed of 1.2 rpm was selected as the optimal setting, as it ensures the accuracy of the 3D model while maintaining a balanced and efficient data collection process.

Table 1. Image Quality Evaluation

Rotation Speed (rpm) | IOU  | PA   | Recall
0.8                  | 0.97 | 0.98 | 0.97
1.0                  | 0.97 | 0.98 | 0.97
1.2                  | 0.97 | 0.97 | 0.97
1.4                  | 0.95 | 0.95 | 0.95

  10. The study clarifies that it optimizes the Concave Hull and Convex Hull algorithms for canopy shape analysis but fails to clarify under what conditions one algorithm performs better than the other. Including a comparative analysis of the algorithms in terms of accuracy measures would improve the explanation for their use.

Response: In Section 3.3.2, since canopy morphology cannot be measured manually, we compared the images and numerical results obtained from the two algorithms. The comparison showed that the projection area calculated using the convex hull algorithm was consistently larger than that of the concave hull algorithm, with differences ranging from 17% to 41%. However, as you rightly pointed out, both methods have their respective advantages and limitations. The convex hull algorithm has lower accuracy but is simple to implement, highly efficient, and well-established. In contrast, the concave hull algorithm offers higher accuracy but involves more complex computations and longer processing times, making it more suitable for capturing complex phenotypic traits. In practice, the choice between these algorithms largely depends on the level of accuracy required for specific application scenarios.
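To make the comparison concrete, a minimal sketch of computing both projection areas from the top-view (x, y) coordinates of a canopy point cloud is given below (assuming Shapely >= 2.0; the ratio value is illustrative, and the concave hull implementation used in the paper may differ):

```python
import numpy as np
from shapely import MultiPoint, concave_hull  # Shapely >= 2.0

def projected_areas(points_xy: np.ndarray, ratio: float = 0.3):
    """Canopy projection areas from top-view (x, y) coordinates.

    Returns (convex_area, concave_area); the concave hull follows canopy
    indentations, so its area is expected to be smaller than the convex hull's.
    """
    pts = MultiPoint([tuple(p) for p in points_xy])
    convex_area = pts.convex_hull.area
    concave_area = concave_hull(pts, ratio=ratio).area
    return float(convex_area), float(concave_area)
```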

  11. The paper cites just 24 research articles, which might not be adequate to establish a good background on 3D canopy reconstruction, intercropping systems, and image processing methodologies. A stronger foundation for the study can be achieved with a more thorough literature review of current developments in UAV-based phenotyping, the use of LiDAR, and deep learning for plant trait prediction.

Response: As you correctly pointed out, the references lacked recent studies on UAV-based phenotyping, LiDAR applications, and deep learning-based plant trait prediction. Therefore, the authors have supplemented the manuscript with relevant literature in these areas. The final number of references in this paper is now 52.

  12. The background research fails to adequately investigate the scope of previous work in 3D canopy reconstruction and plant phenotyping. A more detailed literature review of recent developments in UAV-based phenotyping, LiDAR uses, and deep learning for plant trait prediction would make the study more solid.

Response: Additional references have been included in the research background section. The specific content is as follows:

A large number of studies have been conducted on crop yield prediction using unmanned aerial vehicles (UAVs) combined with multi-sensor data fusion and machine learning methods [26,27]. Multi-sensor fusion, such as combining multispectral and thermal infrared data, significantly improves prediction accuracy. Support vector machine (SVM) and deep neural network (DNN) models achieve an R² value of 0.692 for wheat yield prediction [28]. By further integrating multimodal sensor data fusion, the yield prediction for winter wheat was optimized, with the R² value increasing to 0.78 and RMSE decreasing by about 22% [29]. Spatio-temporal deep learning models, such as 3D-CNN, use multi-temporal RGB image sequences to achieve high-accuracy predictions during the early growth stage, with a mean absolute error (MAE) of 292.8 kg/ha [30]. Combining LiDAR technology with machine learning methods enhances the accuracy of biomass prediction in farmland, with R² values of 0.71 and 0.93 at 1-meter and 2-meter resolution, respectively [31]. These research findings provide strong technical support for high-throughput plant phenotyping and precision agricultural management.

  13. There are instances of poor phrasing, grammatical flaws, and complicated sentence structures that make the text hard to read. Refining the language for coherence, concision, and clarity would enhance the overall presentation of the study. A careful proofreading or formal language editing is suggested to improve the text's fluency and technical accuracy.

Response: The entire manuscript has been revised.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The authors wrote an article entitled Optimization of the Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction, which deals with the optimization of the 3D reconstruction method for soybean yield prediction in strip cultivation conditions. Ideas for improvement:
- The optimized method is tested in a controlled experimental environment with carefully set lighting and plant rotation parameters. In real conditions, the results could be distorted due to, for example, weather changes, plant movement by wind, or lighting variability, which is not sufficiently addressed in the article. Discuss this, because there will never be ideal conditions.
- On line 56, [5, 6, 7] is written; it would be better to write [5-7].
- In the text from line 228 onwards, you have equations listed. Write them in the same way as on line 222. Do this throughout the text; it is clearer.
- On line 258, the equation numbering is missing; the same occurs further in the text. This needs to be supplemented and unified.
- Figure 4 is nice, but the values are not readable.
- Same for Figure 7. Here it would probably be appropriate to split the image to make the values more readable.
If all this is fixed, the article will be suitable for publication. However, the authors should elaborate more on the validation of the model in different environments, compare their method with alternative technologies and focus on applicability in real growing conditions.

Author Response

Dear Reviewers, Thank you for your valuable comments and suggestions on this manuscript. The authors have carefully addressed each point, and all revisions have been marked in red within the manuscript for your convenience.

Reviewer 3:

  1. The optimized method is tested in a controlled experimental environment with carefully set lighting and plant rotation parameters. In real conditions, the results could be distorted due to, for example, weather changes, plant movement by wind, or lighting variability, which is not sufficiently addressed in the article. Discuss this, because there will never be ideal conditions.

Response: Thank you for your suggestion. The discussion has been added to the discussion section:

It is important to note that in this study, the soybeans grow outdoors while stable imaging takes place indoors. Therefore, weather conditions must be considered during image acquisition. In extreme weather conditions, soybean plants may be affected, but once the weather clears up, the plants will resume growth through their own regulatory abilities. To ensure the accuracy of phenotypic data acquisition, the optimal weather for image collection is clear and windless.

  2. On line 56, [5, 6, 7] is written; it would be better to write [5-7].

Response: Revised accordingly.

  3. In the text from line 228 onwards, you have equations listed. Write them in the same way as on line 222. Do this throughout the text; it is clearer.

Response: Revised accordingly.

  4. On line 258, the equation numbering is missing; the same occurs further in the text. This needs to be supplemented and unified.

Response: Revised accordingly.

  5. Figure 4 is nice, but the values are not readable.

Response: The font size of the numerical values has been increased, and the revised figure is shown below:

 

  6. Same for Figure 7. Here it would probably be appropriate to split the image to make the values more readable.

Response: The font size in Figure 7 has been increased. We also attempted to split the figure, but after re-layout, the current version proved to be effective, as shown below. (If you still have any concerns about this figure, please feel free to contact me, and I will provide the split version.)

  7. If all this is fixed, the article will be suitable for publication. However, the authors should elaborate more on the validation of the model in different environments, compare their method with alternative technologies and focus on applicability in real growing conditions.

Response: The authors have added new references related to UAV remote sensing, LiDAR, multispectral, and thermal infrared technologies, as reflected throughout the manuscript.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The author has already improved the manuscript well according to the first round of revision suggestions. The suggested modifications for this time are as follows:

1. It is recommended to continue revising the conclusion by adding key data from the research findings to the conclusion section.

 

Author Response

Thank you very much for your suggestions. To facilitate your review of the revised sections, the modifications have been highlighted in the original text with a yellow background and red font.

Comments and Suggestions for Authors

The author has already improved the manuscript well according to the first round of revision suggestions. The suggested modifications for this time are as follows:

It is recommended to continue revising the conclusion by adding key data from the research findings to the conclusion section.

Response: As per your suggestion, key data have been added to the conclusion section. The specific revisions are as follows:

This study developed a three-dimensional reconstruction method applicable to the entire growth cycle of soybeans, encompassing image acquisition, 3D canopy reconstruction, and structural parameter extraction. This method enables continuous and non-destructive monitoring of canopy parameters in intercropped soybeans and establishes an early yield prediction model. The study optimized image acquisition parameters (capture angle of 30°, plant rotation speed of 1.2 rpm, and image numbers of 36 and 48 for the vegetative and reproductive stages, respectively) and point cloud preprocessing methods to ensure high-precision 3D canopy reconstruction. Additionally, the voxel volume-based yield prediction achieved an R² of up to 0.788. This research provides a scientific basis for phenotypic screening under stress conditions, high-yield soybean variety selection, and optimization of intercropping systems. Moreover, it offers a reliable approach for accurately identifying soybean germplasm resources and efficiently obtaining 3D structural information of intercropped soybeans, holding significant theoretical and practical value.

 

Reviewer 2 Report

Comments and Suggestions for Authors

All the comments are clarified properly. I am satisfied with the updated manuscript.

Author Response

Comments and Suggestions for Authors

All the comments are clarified properly. I am satisfied with the updated manuscript.

Thank you very much for your support of this article. Thank you again!
