Obtaining the Highest Quality from a Low-Cost Mobile Scanner: A Comparison of Several Pipelines with a New Scanning Device
Round 1
Reviewer 1 Report (New Reviewer)
Comments and Suggestions for Authors
The authors present a paper in which 4 scanning devices and 5 evaluation routines for forestry purposes (tree detection and DBH estimation) are applied and compared. A prototype of a new low-cost scanner (Mapry LA03) is also used. To my knowledge, this is the first application and publication about this scanner in connection with forestry. This is a unique selling point.
The authors did not develop any new methods of their own. Common sample designs, devices (excluding the LA03) and evaluation routines were used and compared. Nevertheless, this comprehensive comparison of 4 scanners and 5 evaluation routines is worth publishing and has a unique selling point.
There is some potential for revision. I suggest major revisions. If these are done carefully, this will be a great paper!
General comments:
A workflow diagram and/or a workflow table should be included in the Material and Methods section; this would make the paper more logical and structured. Alternatively, two workflow diagrams could be used: i) a general one and ii) a detailed one covering the evaluation routines (parameter settings, key steps of ground classification, normalization, tree detection, DBH estimation, etc.). The quality measures/evaluation metrics must be explained in the Methods with equations (RMSE and so on); it is not enough to simply write values in the Results. There is no comprehensive discussion of why the same trajectory was chosen for all 3 (4) scanners, even though they simply have different ranges, resolutions, etc. It is not the iPhone's fault in particular if you simply walked through the forest "the wrong way"; the device only has a range of 5 meters. Of course there are reasons to always follow the same path (e.g. users can be trained more easily and need less experience, etc.). Please discuss this better.
Detailed comments can be found in the attached pdf.
Comments for author File:
Comments.pdf
Author Response
Dear Reviewers,
We would like to extend our sincere thanks for your thoughtful, detailed, and constructive reviews of our manuscript titled “Getting the highest quality from a low-cost mobile scanner: comparison of several pipelines with a new scanning device.” We greatly appreciate the time and effort you invested in providing such valuable feedback. Your comments and suggestions have been instrumental in improving the clarity, methodological rigor, and overall quality of our work.
To briefly summarize, the manuscript evaluates the performance of a low-cost Mobile Laser Scanning (MLS) system by comparing it with several established scanning technologies. A standardized data acquisition protocol was implemented across a large-scale forest sample plot, and five distinct point cloud processing pipelines were assessed. The primary performance metric was DBH accuracy, verified against manually collected field measurements.
The most substantial revisions were made to the descriptions of the individual processing pipelines, the discussion on the potential and cost-efficiency of low-cost scanning devices, and the structure of the Materials and Methods section. Additionally, we have included a simplified workflow visualization and improved the description of the forest sample plot to enhance clarity.
Below, we provide detailed responses to each of your comments, outlining the corresponding revisions made in the manuscript.
Reviewer 1
Major Comments
- A workflow diagram/table should be included in the Materials and Methods section.
Thank you for this suggestion. A simplified workflow diagram has now been included in the Materials and Methods section as Figure 3. (Line 199)
- The quality measures and metrics must be explained in the Methods.
We appreciate this important point. A new subsection has been added to the Materials and Methods section, detailing all the evaluation metrics and the rationale behind their selection. (Lines 464–498; the standard definitions are also recalled after this list for reference.)
- Discussion of why the same trajectory was used for all scanners, despite differing ranges.
Thank you for your insight. Although this point was initially addressed in the Data Collection section, we have now expanded the explanation to provide a clearer justification of this methodological decision. (Lines 219–223)
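For reference, the conventional definitions of the error metrics named in these responses, with $d_i$ the field-measured DBH of tree $i$, $\hat{d}_i$ the scanner-derived estimate, $\bar{d}$ the mean of the reference values, and $n$ the number of matched trees, are:

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{d}_i-d_i\right|,\qquad \mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{d}_i-d_i\right)^2},\qquad \mathrm{rRMSE}=\frac{\mathrm{RMSE}}{\bar{d}}\cdot 100\,\%.$$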
Minor Comments
- Mentioning the devices used in the abstract.
Thank you; we agree this adds clarity. The device names have now been included in the abstract. (Lines 19–21)
- Other in-text suggestions and typographical corrections.
We are grateful for these detailed comments. All of them have been carefully addressed and incorporated into the revised manuscript. These improvements contribute to a more comprehensive understanding of the methodologies used.
Reviewer 2
Major Comments
- Improve the structure of Section 2.4.
Thank you, this comment was particularly helpful. While most of the changes involved formatting, we have also added explanatory text to each pipeline description to enhance clarity. Additionally, the “Manual-RANSAC” method has been renamed “Manual Approach” to avoid confusion.
- A cited study does not address occlusion as a complication.
Thank you for catching this. The reference was mistakenly included from another project and has now been replaced with appropriate citations. (Line 90)
- Elaborate on key findings: getting high-quality data from low-cost scanners, path selection, SLAM improvements.
We appreciate this suggestion. The Discussion section has been extended to include these aspects. (Lines 612–636)
Minor Comments
- Specify the range limitations of Apple device LiDAR.
This information has now been added to the description of the iPhone. (Line 250)
- State the prototype name in the introduction.
The names of all scanners are now clearly stated in the abstract.
- Number of trees and distribution of DBH and height.
Thank you. The total number of trees is now specified in both the Study Area and Data Collection sections. A new Figure 4 illustrates the DBH and tree height distributions. (Line 212)
- Tools used for DBH measurement.
Thank you for raising this. The Haglöf Mantax Digitech caliper used for DBH measurement is now mentioned, along with a description of the measurement protocol. (Lines 201–206)
- Orientation of scanner towards the plot center and SLAM accuracy.
This orientation was selected based on findings in prior studies (e.g., https://doi.org/10.1016/j.jag.2022.103104). Although SLAM may be affected by scanner orientation, observing a consistent area still allows reliable point cloud registration.
- Clarification on how the same trajectory was maintained in the terrain.
Thank you for highlighting this. The explanation has been added. (Lines 146–147, 212–213)
- Use of buffers to mitigate edge effects.
The trajectory was designed to encompass all trees within a closed polygon, effectively incorporating a buffer zone.
- Why the iPhone used the same trajectory and not an individualized approach.
We thank you for this suggestion. The rationale for this design choice is now explained: to assess performance relative to scanning time and cost. Circumnavigating each tree individually would not align with the time-saving goals of remote sensing technologies. (Lines 212–216)
- Improve Figure 6 to demonstrate RANSAC performance.
Thank you for your suggestion. While the figure itself remains unchanged, the accompanying text has been revised to better clarify its purpose. (Lines 290–291)
- Estimated computation time per tree for RANSAC.
Thank you for this question. The RANSAC process, optimized through voxelization, takes less than one minute per tree. This is now stated in the text. (Lines 328–331)
- Justification for using RANSAC over DBSCAN.
Thank you. RANSAC was used for circle fitting in all pipelines, including the Manual Approach. DBSCAN is employed within some pipelines, but not for DBH estimation. No changes were made to the text, as this is already addressed.
- Why spline methods were not used for stem profile fitting.
As the focus of the study was not on stem shape analysis, spline methods were not utilized. This is now clarified.
- Cite the specific RANSAC implementation.
Thank you for pointing this out. The implementation used is part of the FORTLS package, which is now clearly stated.
- Clarify use of TLS vs. manual caliper data as ground truth.
Thank you. Manual calipering was conducted for reference measurements. A new figure has been added to summarize the methods used throughout the study.
- Include balanced performance metrics such as F1 score.
Thank you for raising this concern. Our evaluation approach ensures that false positives and negatives are excluded from error metrics (MAE, RMSE, rRMSE) through manual matching. We believe this provides a reliable performance assessment.
- Clarify how extent values in Table 1 were computed.
Thank you. We have now specified that a 10×10 cm tile is considered filled if it contains at least one point. (Lines 481–483)
- Report total point count and average point density for each method.
Thank you for the suggestion. These metrics have been added to Table 1 and described in the text. (Lines 481–485)
- Maintain consistent scanner order in Figure 2.
Thank you. The order in the figure and descriptive text has been aligned for clarity.
- Clarify RANSAC vs. pipeline naming.
As noted, all pipelines include RANSAC. To prevent confusion, the “Manual-RANSAC” method has been renamed to “Manual Approach,” and the text has been updated accordingly.
- Explain detection rate below 100% for Manual Approach.
This is most likely due to scan incompleteness and has been clarified.
- Concerns about iPhone comparison due to range and trajectory.
Thank you. We have expanded the discussion on the rationale behind using a uniform trajectory across all devices, including the iPhone. (Lines 249–251, XXX–XXX)
- Comment on the suitability of the ZEB MLS scanner for complex or young forests.
Thank you. This point is now addressed in the Discussion. (Lines 616–621)
Reviewer 3
Minor Comments
- Restructure the section describing different LiDAR methods into a comparative framework.
Thank you for this valuable suggestion. The section has been restructured for clarity and coherence. Renaming and formatting changes have helped make this part more readable, while additional details were added to improve understanding of each pipeline.
- Add detail on specifications and roles of LiDAR systems used.
We have included further information about the specifications and roles of the compared systems. (Lines 133–135, 217–221)
- Clarify the performance metrics used in evaluation.
Thank you. A new subsection in the Materials and Methods section now outlines all selected performance metrics and their purpose. (Lines 437–470)
- Explain how these metrics validate the LA03 device.
Thank you for this comment. This is now explicitly discussed in the new “Statistical Metrics” subsection. (Lines 437–470)
- Separate the Discussion into performance and affordability/usability aspects.
We appreciate this helpful suggestion. The Discussion has been revised to better highlight the affordability and usability of the LA03 device, alongside its comparative performance.
Once again, we are truly grateful for your careful reading and insightful feedback, which have greatly contributed to enhancing the manuscript. We believe that the revised version now offers greater clarity, coherence, and scientific value thanks to your input.
Thank you for your support and consideration.
With kind regards,
The Authors
Reviewer 2 Report (New Reviewer)
Comments and Suggestions for Authors
Dear authors,
I read your study "Getting the highest quality from a low-cost mobile scanner: Comparison of several pipelines with a new scanning device", which presents a novel prototype scanner, the Mapry LA03, and a series of tests of its practical applicability with varying data analysis workflows (focused on stem detection).
The motivation and aim of the study are clearly presented, and the discussion is well written, with a firm recommendation for using LiDAR to measure the DBH of trees.
I have three major concerns:
A) The structure of section 2.4 can be improved. Firstly, heading 2.5.1 follows 2.4, so either 2.5 is missing or 2.5.1 should be 2.4.1. I understand the study to test various point cloud processing tools for the Mapry LA03; maybe the subsections in 2.4 could each explain one workflow in detail, and general terms could be explained under a separate heading. For example, RANSAC is elaborated in multiple places, which makes the method section confusing to read.
B) l.88: The cited study does not provide context that occlusion complicates data processing and interpretation. It does not even mention the term "occlusion" - please check your references in the whole introduction section and improve them if necessary.
C) In the discussion section, please elaborate on your findings - how can you get the highest quality from a low-cost mobile scanner? Is it worth reconsidering the walking path to create denser point clouds? Is a filter recommended to clear the noise, or a shorter walking path? Is the device only good for simple forest conditions, or should the SLAM algorithm be improved? Your overall findings show a clear picture, but you could provide some more interpretation based on your experience working with the device to make it more interesting to the readers.
Here are some minor issues that have to be addressed prior to publication:
l.110: Can you please specify a metre range for the limitation of the Apple devices' LiDAR sensor?
l.123: please state the name of the prototype device already in the introduction.
l.143: How many trees were considered in the study? Please give more information about DBH and possibly height distribution.
l.193: Did you take manual reference measurements of DBH with a caliper (and with which alignment) or a pi-tape?
l.216: You state that "the operator maintained a consistent orientation towards the centre of the plot to ensure optimal coverage" - please elaborate on this decision, as SLAM systems are usually trained on moving the sensor in the direction of viewing; this rotated position might therefore cause the SLAM algorithm to lack accuracy.
l.226: How did you ensure that you followed exactly the same scanning protocol/trajectory with the various devices? Did you mark the way and walk it several times with the same operator, or did you carry multiple devices that measured at the same time?
l.244: Did you include a buffer in your plot area to compensate for edge effects? Figure 3 shows the trajectory following the borders of the plot - are the considered trees sufficiently far inside the plot area?
l.232: Was iPhone scanning really conducted along the same walking trajectory? Given the short range, it is unlikely to yield satisfying results if the tree trunks are not captured in full detail. You might think about including an additional iPhone scan that circles the individual trees on a smaller plot, since walking the same trajectory tends to be an incorrect application of the short-range iPhone LiDAR sensor (as visible in Figure 5).
l.282: Figure 4 does not provide additional benefit to me for understanding the RANSAC algorithm. Either showcase multiple trees captured with different devices, or analyze various circle-fitting methods compared to RANSAC. For me it does not seem logical that the mean absolute error is smallest for an incomplete scan by MLS.
l.307: By using the exhaustive RANSAC method you tested multiple variations to fit the perfect circle - this is surely computationally expensive for large data sets. Can you please specify the computation time for a single tree, and maybe differentiate it by the number of points per DBH stem slice?
l.324: How did you justify using RANSAC over other circle-fitting methods (for example DBSCAN)? Were there any considerations regarding efficiency, outliers or varying point cloud density?
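For readers unfamiliar with the approach under discussion, below is a minimal, generic sketch of RANSAC circle fitting on a 2D stem slice. It is purely illustrative: the function names, the 500-iteration budget and the 1 cm inlier tolerance are assumptions of this sketch, not the FORTLS implementation the authors actually used.

```python
import numpy as np

def fit_circle_3pts(p1, p2, p3):
    """Circle (cx, cy, r) through three 2D points, or None if they are collinear."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # collinear points define no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, float(np.hypot(ax - ux, ay - uy))

def ransac_circle(xy, n_iter=500, tol=0.01, seed=0):
    """RANSAC circle fit on an (N, 2) stem slice; tol = inlier band in metres."""
    rng = np.random.default_rng(seed)
    best_model, best_count = None, 0
    for _ in range(n_iter):
        i, j, k = rng.choice(len(xy), size=3, replace=False)
        model = fit_circle_3pts(xy[i], xy[j], xy[k])
        if model is None:
            continue
        cx, cy, r = model
        # inliers: points whose radial distance from the centre deviates from r by < tol
        resid = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        count = int(np.sum(resid < tol))
        if count > best_count:
            best_model, best_count = model, count
    return best_model  # (cx, cy, r); the DBH estimate is 2 * r
```

Note that DBSCAN is a density-based clustering algorithm rather than a model-fitting one, which is consistent with the authors' reply below that it is used within some pipelines but not for DBH estimation itself.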
l.325: Why did you not use flexible splines to fit the stem profile?
l.328: "the RANSAC function used is natively implemented within the package" - I suppose you refer to the RANSAC package; please refer to it and cite it correctly.
l.409: Are you referring to TLS data as "ground truth data", or did you employ manual measurements? If you chose TLS - which method was used to derive the values?
l.416: Detection rate alone is not sufficient for comparing two sets of tree locations. False positives as well as false negatives should be considered; therefore accuracy, F1 score or a balanced metric that accounts for omission and commission error has to be used.
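For reference, the balanced detection metrics the reviewer refers to are conventionally defined from the counts of matched trees (true positives, TP), commission errors (false positives, FP) and omission errors (false negatives, FN):

$$\text{precision}=\frac{TP}{TP+FP},\qquad \text{recall}=\frac{TP}{TP+FN},\qquad F_1=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}.$$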
l.424: Regarding the extent in Table 1 - how did you compute these values in detail? You state that you used voxelized 2D projections at a spatial resolution of 10 cm - is one 100 cm² tile considered filled as soon as one 3D point is in the tile? Please add more explanation of how the extent values are derived!
l.425: If you gave the total number of points captured inside the plot area with the various methods, the extent calculation could be backed up significantly. Please report the average point density per m² for ground and vegetation points (a well-chosen cut-off value between 10 and 50 cm, depending on ground vegetation, should be sufficient).
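A minimal sketch of how such an extent and density computation could look, assuming height-normalized points and an illustrative 30 cm ground/vegetation cut-off (within the 10–50 cm band the reviewer suggests); this is not the authors' implementation:

```python
import numpy as np

def extent_and_density(points, cell=0.10, veg_cutoff=0.30):
    """
    points: (N, 3) height-normalized cloud (x, y, height above ground), in metres.
    Returns the covered area in m^2 (a 2D occupancy grid of cell-sized tiles,
    where a tile counts as filled as soon as it contains one point) and the
    average vegetation-point density per m^2 of covered area.
    """
    tiles = np.unique(np.floor(points[:, :2] / cell).astype(np.int64), axis=0)
    covered_area = len(tiles) * cell ** 2
    veg_points = np.count_nonzero(points[:, 2] > veg_cutoff)
    return covered_area, (veg_points / covered_area if covered_area > 0 else 0.0)
```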
l.426: In Figure 2, please keep the same order of the methods as in the description - the iPhone should be positioned after the LA03.
l.431: You are comparing RANSAC (a circle-fitting algorithm) to a full workflow, SAMICE - are you referring to manual RANSAC? Then you might consider renaming the method with another abbreviation to clear up any confusion.
l.466: Table 2: How can the detection rate for the manual RANSAC be less than 100%? Is it due to sensor limitations that not all trees were captured because of occlusion?
l.466-II: As written in the comment on l.232, the comparison of the iPhone scanner is not fair given the trajectory and sensor range limitations.
l.496: Even though your study did not consider forest structural complexity, can you maybe discuss the usability of the ZEB MLS scanner in more complex or younger forest stands compared to the Mapry scanner? You may have some experience with this, which would be very beneficial for potential readers.
Reviewer 3 Report (New Reviewer)
Comments and Suggestions for Authors
This study compared high-end laser scanners with a low-cost device for measuring tree diameter at breast height (DBH), aiming to evaluate the feasibility of the affordable alternative for forest inventory applications. While high-end systems achieved superior performance, the tested low-cost device achieved moderate results, varying based on the 3D processing algorithms used. The study suggests the low-cost device could be suitable for scanning small sample plots cost-effectively and potentially deployed at larger scales to support forest inventory initiatives where high precision is not critical. I recommend a minor revision for this paper.
Comments for author File:
Comments.pdf
The English expression in this paper contains no significant errors, but a double-check of certain details is recommended.
Round 2
Reviewer 1 Report (New Reviewer)
Comments and Suggestions for Authors
The authors have incorporated the comments of the 1st review round well. The paper has turned out great. Congratulations!
Reviewer 2 Report (New Reviewer)
Comments and Suggestions for Authors
Dear authors,
I have reviewed your revised manuscript and commend you on the clarifications. I believe your work offers a valuable contribution to the field and recommend it for publication.
Thank you for addressing the previous feedback effectively.
This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
DBH is a key parameter in forest inventory. Traditionally, DBH was measured manually with a DBH ruler, which was usually affected by subjective errors. In recent decades, terrestrial laser scanning has become an effective method for obtaining accurate DBH. However, TLS itself is difficult to operate in a dense forest. In recent years, mobile laser scanning has developed rapidly and could replace TLS for obtaining accurate DBH.
In general, it is necessary to compare the ability to obtain DBH using existing equipment.
Here are some concerns:
1. The title is hard to understand. Highest quality of what?
2. In the abstract, the accuracy of 2.14 can't be found in the results section.
3. The introduction is partly illogical. Please reorganize the review of existing relevant work and point out the open problems.
4. Line 140: The area, species, number of trees and range of diameter at breast height were not given for each sample plot.
5. Line 145: What is the total weight of the equipment? How does the operator perform the scan?
6. Line 150: How is the sensor connected to the USB key and Bluetooth module?
7. Line 151: What is the function of the camera? Is it used for data collection?
8. Table 1: The interface of the app is in Japanese; can you translate it into English and explain exactly how to use it?
9. Line 228: Figure 3d is not found in the article.
10. Line 247: The methodology of the results evaluation is not described.
11. Section 3: The Results section should be split into two subsections: tree detection capability and DBH estimation.
12. The ANOVA model is not described in the Methods section.
13. In terms of accuracy and cost, the recommended method has no advantage over existing methods using smartphones. What are the advantages of the recommended method?
Reviewer 2 Report
Comments and Suggestions for Authors
See review report
Comments for author File:
Comments.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Overview
Surový et al. present an analysis comparing the effectiveness of tree stem diameter estimation using a new, low-cost laser scanner. Using a more expensive static terrestrial laser scanner and mobile laser scanner as a comparative benchmark, they demonstrate that the low-cost scanner performs similarly. Although this study tackles a topic that has received a lot of recent attention, I think its impact and interest to the Remote Sensing readership will be quite low. The paper lacks detail in many sections, affecting its readability. Its results are based on a very small sample of trees in an urban setting, limiting its broad applicability. Furthermore, the scanner being tested is not widely used, further limiting the usefulness of the results to the remote sensing community. In my view, this paper, in its current form, does not warrant publication in Remote Sensing.
Major Comments
- The Abstract needs improvement. It needs to stand alone as a succinct explanation of the work presented. To fully understand what is put forth in the Abstract in its current form requires reading the rest of the paper, which defeats the purpose of an abstract.
- The small number of trees in this study calls its broad applicability into question. Though the number of trees is not reported (and should be…), it is clearly less than 100 – a number that does not instill confidence in the results presented. Furthermore, the trees appear to be clustered in an urban or suburban setting, likely with some degree of landscape manipulation, so the transferability of the findings here to natural or even managed forested environments is likely low.
- While I agree that lowering the cost of laser scanners is a desirable goal, and it is useful to understand tradeoffs between expensive and less expensive sensors, the sensor described here does not appear to be widely available. The authors describe lidar scans from common mobile devices (e.g., iPhones, iPads) in the Introduction, which is a good example of widely-used, widely-available devices whose performance in tree size estimation is of great interest. But presenting a comparison to a device such as the Mapry instrument, which as far as I can tell has only ever been used in one other published work, will garner little interest from the Remote Sensing readership.
- There is a comparison made between three different tree DBH estimation algorithms, but all three seem to rely at least in part on RANSAC. And then the authors basically disregard the differences in performance, saying in L252-253 that there are differences in their performance, but that this is not the focus of the paper. So, why compare methods then?
- Overall, I found the Results section to be quite lacking in substance. The focus is on significance tests, and very little significance was found. But, it’s not even clear how the significance tests were performed – at the individual tree level? At the plot level? In either case, your sample sizes are so small that the reliability of your statistical significance tests is likely to be low. You can still discuss prevailing differences in performance without leaning solely on significance tests as the sole basis of evaluating true differences.
Minor Comments
L15-17: This sentence needs to be rephrased to improve clarity. You have not yet introduced (besides in the title) the idea that there is a “new scanner”, yet this sentence seems to imply that you have already done so.
L19-20: What does “4.13 cm in accuracy” mean? Positional accuracy of point cloud data? Terrain model accuracy? Tree stem diameter accuracy? And, to be more precise, a measure like this is better described as “error” rather than “accuracy”.
L30: “…urgency of addressing climate *change*” would be a useful revision.
L32-33: Can you clarify what is meant by forests must “be more structured”?
L51: By “crown extension” I suspect you mean “crown diameter”?
L90: Somewhat atypical to have a subsection in the Introduction – especially since it is the only one. Can probably remove, or perhaps create a dedicated “background” section?
L106: Perhaps a translation problem, but data are generally not referred to as being “heavy” – perhaps “more massive” or “more voluminous”?
L128: GeoSLAM *ZEB* Horizon – the “ZEB” is needed so readers can understand that this is the same sensor you refer to later only as “ZEB”.
Section 2.1: Please tell us how many trees were included in this study.
Figure 1 would benefit from a few improvements: (1) subfigure labels (e.g., “a”, “b”, “c”, and “d”) and associated descriptions in the caption; (2) I happened to recognize that the locator map represents the Czech Republic, but many readers will not recognize this – labels or other geographic context are needed; (3) given the similarity in scale for the three plot maps, it would seem very easy to plot them at the same map scale, eliminating the need for three differently sized scale bars; (4) I assume that the points represent tree stems, but there is no written or graphical explanation of what they represent.
L147: I assume 240,000 is pulses per second? Please include units.
L163-164: You make no mention of what TLS instrument you are using for static scanning – the only mention so far was “Trimble” in the Introduction. What model? For that matter, considerably more detail is needed when describing the specifications of the Trimble and ZEB instruments so we can understand the tradeoffs.
L176: First mention of RANSAC, provided without context, until several sentences later. Furthermore, you say that “point clouds […] were normalized”. Precise language is important. I assume you mean you normalized the point cloud elevations relative to the ground surface elevation to yield aboveground heights for each lidar point? This is a non-trivial process that could have substantial effects on the results. Please describe with greater detail.
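Since the reviewer flags height normalization as non-trivial, here is a minimal sketch of one common approach: interpolating a ground surface from ground-classified points and subtracting it from all point elevations. The function name and the SciPy-based interpolation are illustrative assumptions, not the authors' described workflow.

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_heights(points, ground):
    """
    points: (N, 3) cloud to normalize; ground: (M, 3) ground-classified returns.
    Returns a copy of points with z replaced by height above the interpolated
    ground surface (linear TIN-style interpolation, with a nearest-neighbour
    fallback outside the convex hull of the ground points).
    """
    z_ground = griddata(ground[:, :2], ground[:, 2], points[:, :2], method="linear")
    missing = np.isnan(z_ground)
    if missing.any():
        z_ground[missing] = griddata(ground[:, :2], ground[:, 2],
                                     points[missing, :2], method="nearest")
    out = points.copy()
    out[:, 2] -= z_ground  # aboveground height for each lidar point
    return out
```

The choice of ground model (TIN, kriging, grid DTM) and its resolution can materially change downstream DBH slices, which is presumably why the reviewer asks for more detail here.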
Section 2.5.1: More detail is needed on what RANSAC is actually doing. The reader has to assume that it is iteratively attempting to fit circles to the slices of point cloud data in 2D.
L218-220: Need to be more specific about what is meant by “all of the aforementioned approaches”.
L224-227: This needs to be better described. I’ve read it a few times now and still do not understand what this algorithm is doing.
L252-253: You describe different results from the three algorithms, and then basically say “but we’re not going to interrogate that, because it’s not the focus of this study”. Then, why present three different algorithms?
Table 1: Shouldn’t all references to “Mappry” be “Mapry”?
Comments on the Quality of English Language
The quality of English language in this paper is, at times, low.