Article
Peer-Review Record

Developing an Algorithm for Buildings Extraction and Determining Changes from Airborne LiDAR, and Comparing with R-CNN Method from Drone Images

Remote Sens. 2019, 11(11), 1272; https://doi.org/10.3390/rs11111272
by Saied Pirasteh 1,2,*, Pejman Rashidi 3, Heidar Rastiveis 3, Shengzhi Huang 1, Qing Zhu 1, Guoxiang Liu 1, Yun Li 1, Jonathan Li 2 and Erfan Seydipour 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 April 2019 / Revised: 21 May 2019 / Accepted: 23 May 2019 / Published: 29 May 2019
(This article belongs to the Section Urban Remote Sensing)

Round 1

Reviewer 1 Report

While the authors have improved the paper significantly, I recommend that the authors address the following comments:

1. A thorough proofreading is required.

2. Provide a strong justification for using the R-CNN method in the Introduction.

3. Results of R-CNN should be provided in a figure.

4. I see a significant difference between your algorithm's results and R-CNN in Tables 4 and 5. Please clarify why this happens and whether or not it is a good achievement. I mean, how do you finally validate your results by comparing these two tables?

5. While there is an RMSE equation in the paper, I don't see any results for it. Please add RMSE results for both your method and the R-CNN method.


Author Response

Response to Reviewers’ Comments of 2nd Submission Round

Comments Round 1

 

Reviewer 1:

Comments and Suggestions for Authors

While the authors have improved the paper significantly, I recommend that the authors address the following comments:

1. A thorough proofreading is required.

ANS: It is done.

2. Provide a strong justification for using the R-CNN method in introduction.

ANS: It is added to the Introduction section.

Line 66-70: Also, Uijlings studied recent advances in object detection [38]. These advances are driven by the success of region proposal methods [38] and region-based convolutional neural networks (R-CNNs) [39]. Region-based CNNs, as originally developed by Girshick [39], are computationally expensive; the cost was later reduced drastically by sharing convolutions across proposals, as given by Girshick and He [40, 41].

 

Line 91-95: This is because R-CNN has been attracting increasing attention for efficient yet accurate visual recognition. In addition, R-CNN applies to both object detection and region proposal generation. With such a design, we may be able to detect objects much faster than with other methods [42] when we use drone images.

3. Results of R-CNN should be provided in a Figure. 

ANS: Please see Figure 13, line: 372-376.

4. I see a significant difference between your algorithm's results and R-CNN in Tables 4 and 5. Please clarify why this happens and whether or not it is a good achievement. I mean, how do you finally validate your results by comparing these two tables?

ANS: This is because of the nature of the image and the point cloud. For LiDAR, we have more points than the points/pixels extracted from the drone image. It is also because R-CNN can be applied for small-building edge detection.

In this study, we compared both LiDAR and imagery to determine how much more efficient LiDAR and the proposed method are than the R-CNN method. We found that when we use LiDAR and the proposed method, we can accomplish a better result. Also, please look at lines 442-444.

The validation was done with one image clip from the training dataset and two other image clips from the testing dataset. We also had ground-truth observations, and we are in the process of further research and comparison.

I have added the following:

Line: 418-423: As can be seen from Table 5, the minimum area difference for the selected buildings on the drone images, 0.84 m2, belongs to the third building. The most significant difference, 71.83 m2, belongs to the fourth building, as shown in Table 5. Meanwhile, the first building has the maximum building area, with a difference of 65.17 m2.

Line: 442-444: In this study, we compared both LiDAR and imagery to determine how much more efficient LiDAR and the proposed method are than the R-CNN method. We found that when we use LiDAR and the proposed method, we can accomplish a better result.

5. While there is an RMSE equation in the paper, I don't see any results for it. Please add RMSE results for both your method and the R-CNN method.

ANS: Please look at lines 405 and 423.

We have added the following:

Line: 421-423: The RMSE is 34.50, which is a very high value; this reveals that the proposed method, with its lower RMSE, is a better approach.
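For reference, a minimal sketch of the standard RMSE definition as it would apply to the n compared building areas; the notation here is an assumption for illustration and may differ from the paper's equation:

```latex
% Assumed standard form: A_i^{est} = extracted area, A_i^{ref} = reference area of building i.
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(A_i^{\mathrm{est}} - A_i^{\mathrm{ref}}\right)^{2}}
```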


Reviewer 2 Report

In my opinion, the present version is a considerable improvement as compared to the previous one. However, the manuscript can still be slightly improved. (1) Line 351-352: The authors argue "3D change detection of buildings (horizontal and vertical) can be identified." I don't understand the horizontal and vertical changes; maybe you can give some examples to clarify. (2) Line 375-376: typo error.

Author Response

Reviewer 2:

 

Comments and Suggestions for Authors

In my opinion, the present version is a considerable improvement as compared to the previous one. However, the manuscript can still be slightly improved.

(1) Line 351-352: The authors argue "3D change detection of buildings (horizontal and vertical) can be identified." I don't understand the horizontal and vertical changes; maybe you can give some examples to clarify.

ANS: I changed the word "corner" to "edge". An example, for the case of an image, is provided below.

In general, the changes are related to the positions of the points and lines in Figure 11 that we obtained. I mean we make linear measurements on the horizontal plane, which determine the distance between two points horizontally, while vertical distances are in height (elevation) and are measured along the vertical axis between points.

In addition, a vertical corner is a vertical edge. When we implement the proposed method and perform filtering, we in fact apply the detection of horizontal and vertical edges using the image and the point cloud of the selected buildings' details.

Example: When we use an image of a building at 256 gray levels, or a point cloud to create a raster image of a building, the image has strong horizontal and vertical edges, and it is useful for illustrating the method. Please see the following image.

[Image: 256 gray levels of a building]

Generally, we can say that vertical edges are detected by using a horizontal gradient operator followed by a threshold operation, so that the extreme values of the gradient can be detected. In this example, the horizontal gradient can be calculated by taking differences of image values between columns. Using an odd number of pixels in the gradient calculation prevents a shift in location; the gradient is calculated by the following equations.
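As an illustration only (the kernel width, threshold value, and function names below are assumptions, not the manuscript's exact equations), a minimal sketch of a three-pixel central-difference horizontal gradient followed by thresholding could look like this:

```python
import numpy as np

def vertical_edges(image: np.ndarray, threshold: float) -> np.ndarray:
    """Detect vertical edges with a 3-pixel central-difference horizontal gradient.

    Illustrative sketch only: the kernel and threshold are assumptions,
    not the manuscript's exact equations.
    """
    img = image.astype(float)
    grad = np.zeros_like(img)
    # Central difference between columns; an odd (3-pixel) window avoids a location shift.
    grad[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    # Keep only the extreme gradient values (strong vertical edges).
    return np.abs(grad) > threshold

# Example usage on a synthetic 256-gray-level building-like image:
img = np.zeros((100, 100), dtype=np.uint8)
img[20:80, 30:70] = 200          # bright rectangular "building"
edges = vertical_edges(img, threshold=40.0)
```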

(2) Line 375-376: typo error

ANS: The errors have been corrected.

 

 


Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

This paper proposes an algorithm for 3D change detection of buildings from airborne LiDAR data. The paper is straightforward and easy to follow, with a good view of the existing literature and a well-performed validation against two datasets. The result is good; the authors reported an RMSE of 2.44 m2. However, this paper offers little novelty. Besides, I think some of the steps are not necessary, and too many thresholds need to be determined in their method. I also think the dataset tested in their study is not large enough, as there are only 110 buildings in the study area, and the test scene is not complicated enough. So I doubt the feasibility of the proposed method, especially for large urban areas. Below are my specific comments.

 

(1)   Title: I think the main work of this paper is building extraction, not 3d change detection as the authors didn't extract the 3D changes. So I think the title is not appropriate.

(2) Section 2: The two datasets have different point densities, which will influence the accuracy of the building extraction. Besides, the study area is not large and typical enough, in my opinion.

(3)   Line 136:”candidate pixel is l labelled” typo error

(4) Line 199: Please give details for all the letters that appear in the equation.

(5)   Line 206: in my opinion, the function should be Q = the intersection of A and B; and the target Q should reach the largest possible value

(6) Section 3.3.2: The authors can directly merge the rectangles; why should they use such a complicated method?

(7)   Line 241-242: Please give the details of the time cost?

(8)   Line 320-321: For readers, I think the 3D volume is more important.

(9)   Line 232: see comment 4.


Author Response

Comments and Suggestions for Authors

This paper proposes an algorithm for 3D change detection of buildings from airborne LiDAR data. The paper is straightforward and easy to follow, with a good view of the existing literature and a well-performed validation against two datasets. The result is good; the authors reported an RMSE of 2.44 m2. However, this paper offers little novelty. Besides, I think some of the steps are not necessary, and too many thresholds need to be determined in their method. I also think the dataset tested in their study is not large enough, as there are only 110 buildings in the study area, and the test scene is not complicated enough. So I doubt the feasibility of the proposed method, especially for large urban areas. Below are my specific comments.

 (1)   Title: I think the main work of this paper is building extraction, not 3d change detection as the authors didn't extract the 3D changes. So I think the title is not appropriate.

ANS: Although building extraction is one of the main steps in this paper, the authors compared buildings in two different epochs to identify the type of changes in a building. Also, the changes were divided into newly built or demolished buildings. Therefore, building extraction is used for finding changes in buildings.

However, the authors changed the title of the paper to "Developing an Algorithm for Buildings Extraction and Determining Changes from Airborne LiDAR Point Clouds".

(2)   Section 2: the two datasets have different point densities; it will influence the accuracy of the building extraction. Besides, the study area is not large and typical enough in my opinion.

ANS: Indeed, one of the challenges in change detection of buildings from LiDAR data is handling the difference between point densities, which can affect the accuracy of building extraction. However, the filtering and building extraction method in this paper is highly functional and could extract buildings with decent accuracy, and none of the buildings were deleted. Moreover, because of the extreme topography and the different objects (such as roads, cars, vegetation, and buildings of different sizes), the experimental region is generally challenging, and it could be an appropriate study area for evaluating this purpose.

(3)   Line 136:”candidate pixel is l labelled” typo error

ANS: Thank you. It was corrected in line 147.

(4) Line 199: Please give details for all the letters that appear in the equation.

ANS: More details were added in lines 213-215.

(5)   Line 206: in my opinion, the function should be Q = the intersection of A and B; and the target Q should reach the largest possible value

ANS: Actually, we used equation (4) in this study, and we did not test the equation that you mentioned. So the authors believe it is correct as written, and the target function (Q) should reach the lowest possible value.

(6) Section 3.3.2: The authors can directly merge the rectangles; why should they use such a complicated method?

ANS: One of the purposes of this paper is to show the potential of different algorithms in a practical project. Indeed, as you mentioned, there are other methods, such as merging the rectangles, but the ant colony algorithm can serve as a new method for border extraction in this paper.

(7)   Line 241-242: Please give the details of the time cost?

ANS: The question is not clear. Could you please clarify it? Thank you.

(8)   Line 320-321: For readers, I think the 3D volume is more important.

ANS: Thank you. We will consider it for the next publication. 

(9)   Line 232: see comment 4

ANS: More details were added in lines 238, and 239.


Reviewer 2 Report

Abstract:

The authors claim that their proposed method outperforms the existing methods: "This study shows that the proposed algorithm identifies the changes of all buildings with a higher accuracy of extracting border of buildings than the existing methods, successfully." However, they have neither shown results for this claim nor proved it in the paper.

The authors present the performance of their method by the RMSE: "This study also determines that the amount of root mean square error (RMSE) is 2.44 m2." However, they did not mention what this value represents. I believe that the validation of the method needs to be done with other accuracy metrics, including geometric and positional ones, and not only statistical measures.


Introduction:

The authors need to critically analyze the existing literature about building change detection and not only presenting them in a table. They need to mention the strengths and weaknesses of the methods and highlight the research gap clearly. 

The authors need to highlight the motivations of their method. 


Study Area and Data Used:

The authors need to show the data characteristics: how they collected the data, and with what systems, sensors, conditions, etc.

The area contains low-complexity urban buildings; I think even simpler methods can detect the changes in this situation. For the validation of the method, they have to consider another set of data. There are a few free datasets online which they can use.


Methodology:

In 3.3.2, the authors explain the ant colony algorithm in terms of the Traveling Salesman Problem (TSP), whereas they should instead highlight its power for building detection: "the Traveling Salesman Problem (TSP). The objective of solving the traveling salesman". They need to rewrite this section to reflect the topic of the current research.

The authors need to explain how they have implemented the optimization algorithms (firefly and ant colony) in ArcGIS. This requires programming and a customized solution. Why have they used commercial software instead of the free and open-source solutions that are out there?


Results:

The authors need to show the reference buildings. It is important to see the detected polygons and the reference buildings side by side. 

They need to explain the need for two evolutionary optimization methods in this research. Why firefly or ant colony cannot do the job alone.

It would be useful for readers to explain where the proposed method often fails or is expected to fail. They should compare the results based on different building geometries and complexities.


Validation:

I think the authors need to consider another dataset. Also, they need to use additional accuracy metrics.

Author Response

Comments and Suggestions for Authors

Abstract:

The authors claim that their proposed method outperforms the existing methods: "This study shows that the proposed algorithm identifies the changes of all buildings with a higher accuracy of extracting border of buildings than the existing methods, successfully." However, they have neither shown results for this claim nor proved it in the paper.

The authors present the performance of their method by the RMSE: "This study also determines that the amount of root mean square error (RMSE) is 2.44 m2." However, they did not mention what this value represents. I believe that the validation of the method needs to be done with other accuracy metrics, including geometric and positional ones, and not only statistical measures.

Introduction:

(1)   The authors need to critically analyze the existing literature about building change detection and not only presenting them in a table. They need to mention the strengths and weaknesses of the methods and highlight the research gap clearly. 

ANS: More details were added in lines 59, 60, and 67-70. However, we will consider this further in future publications. Thank you.

(2)   The authors need to highlight the motivations of their method. 

ANS: The motivation was added clearly in lines 72 and 73.

(3)   Study Area and Data Used:

ANS: More information has been added.

(4) The authors need to show the data characteristics: how they collected the data, and with what systems, sensors, conditions, etc.

ANS: Although the authors wrote about collecting data and conditions, some details were added in lines 80-82. 

(5) The area contains low-complexity urban buildings; I think even simpler methods can detect the changes in this situation. For the validation of the method, they have to consider another set of data. There are a few free datasets online which they can use.

ANS: Indeed, because of the extreme topography and the different objects (such as roads, cars, vegetation, and buildings of different sizes), the experimental region is generally challenging, and it could be an appropriate study area for evaluating building change detection. However, the authors would like to consider more datasets in different situations for future studies. Thank you.

Methodology:

(6) In 3.3.2, the authors explain the ant colony algorithm in terms of the Traveling Salesman Problem (TSP), whereas they should instead highlight its power for building detection: "the Traveling Salesman Problem (TSP). The objective of solving the traveling salesman". They need to rewrite this section to reflect the topic of the current research.

ANS: The objective of the TSP here is to illustrate the solution for finding the shortest path when extracting a building border, and as you can see in the sentence following the TSP (lines 230 and 231), the authors refer to this subject. Therefore, the TSP was introduced in just three lines to give the reader a clear vision of the problem and the solution.

(7) The authors need to explain how they have implemented the optimization algorithms (firefly and ant colony) in ArcGIS. This requires programming and a customized solution. Why have they used commercial software instead of the free and open-source solutions that are out there?

ANS: As the authors mentioned in lines 254 and 255, the proposed method is implemented in MATLAB R2015 on the Windows 10 operating system, and the authors used ArcGIS only for validation of the proposed method.

Results:

(8)   The authors need to show the reference buildings. It is important to see the detected polygons and the reference buildings side by side. 

ANS: We do not have access to the ground truth and observations because of funding limitations; however, we will do the reference check accurately when we visit the US. I am hoping to visit the field soon, apply it in the next publication, and revise the validation. Thank you.

(9)   They need to explain the need for two evolutionary optimization methods in this research. Why firefly or ant colony cannot do the job alone.

ANS: Actually, one of the purposes of this paper is to show the potential of different algorithms in a practical project. Indeed, as you mentioned, firefly or ant colony could do the job alone, and there are other methods such as merging the rectangles, but the ant colony algorithm can serve as a new method for border extraction in this paper. Moreover, we want to acquaint the reader with different algorithms.

(10) It would be useful for readers to explain where the proposed method often fails or is expected to fail. They should compare the results based on different building geometries and complexities.

ANS: Some details were added in lines 353-357.

Validation:

(11) I think the authors need to consider another dataset. Also, they need to use additional accuracy metrics.

ANS: Actually, the authors claim that because of the different challenges in this study area (such as topography and various objects), this dataset can evaluate the performance of the method. Also, we mentioned this subject in lines 328-330. However, we will consider more datasets with great enthusiasm in future studies and publications. Thank you.


Reviewer 3 Report

Comments:

What do the algorithms do? Change detection or building boundary extraction? Seems mixed between the two.

Have you seen the following studies?

- Sun, Y., Zhang, X., Zhao, X. and Xin, Q., 2018. Extracting building boundaries from high resolution optical images and LiDAR data by integrating the convolutional neural network and the active contour model. Remote Sensing, 10(9), p.1459.

-Ji, M., Liu, L. and Buchroithner, M., 2018. Identifying Collapsed Buildings Using Post-Earthquake Satellite Imagery and Convolutional Neural Networks: A Case Study of the 2010 Haiti Earthquake. Remote Sensing, 10(11), p. 1689.

-Shirowzhan, S. and Trinder, J., 2017. Building classification from lidar data for spatio-temporal assessment of 3D urban developments. Procedia Eng, 180, pp.1453-1461.


Introduction:

The introduction should focus on the problem of building boundary extraction in change detection using LiDAR data, to show how much work has been done by others and what remains. Instead, the current introduction concentrates on the advantages of LiDAR compared to photogrammetry. I recommend that the authors improve this part to orient the reader to the problem investigated in this research.

Study area:

Line 68- What do the authors mean by "extreme topography"? A slanted area or a complex scene? Please explain. The visual representation of the study area doesn't show that this area is a challenging scene for change detection.

Methodology:

How do the authors overcome the problem of the difference in point density that affects the results of change detection? What pixel size was chosen for the DSMs?

Line 81. How are buildings extracted from each LiDAR epoch? Manually or automatically? If automatic, is it point-based or pixel-based? What is the level of error in this classification?

Line 82- "most studies": please be specific and refer to the studies that didn't address building boundaries in a change detection approach.

Line 88- "ndsm was generated to extract building points after applying the height threshold accurately". This seems incorrect. You already extracted building points, as stated in Line 81. In this step of nDSM generation you extract the height of buildings above ground. Please modify this.

Line 162-“ topography doesn’t have an adverse …” why? Seems not true.

Line 168- Usually there are three classes: new buildings, demolished buildings, and unchanged buildings. Why is the differentiation between the changed classes ignored here?

Line 192- What is the difference between using the proposed algorithms for buildings in general and for changed buildings? It seems that these algorithms are applied after the change detection results appear, which implies there is no difference if we have only one building and apply these algorithms for footprint extraction. The question is: what is the advantage of these algorithms in a change detection study? From the abstract, one may think that the algorithms are used for automatic extraction of building boundary changes, but as we read on we see that this study focuses on a type of footprint extraction for changed buildings (please see Figure 10).

Results:

Line 257- Repetition.

The result of the change detection is not presented. What is presented is the boundaries of the changed buildings, so what about the pixel-based result?

What is the source of equation 8, and why is it placed after the results section? One expects to see such equations in the methodology and the outcomes of the validation as part of the results. I recommend restructuring.

Line 342- Table caption "Comparison of buildings area between the proposed method and ArcGIS". What do you mean by a comparison of the proposed method and ArcGIS? If you have reference data in ArcGIS, you need to say so. I recommend modifying it as: "…between the results and the reference data".

Missing discussion- This paper requires a discussion section in which you state the limitations and strengths and compare your findings with the literature.

Conclusion

345-347. This method suffers from the problem of inconsistency of the classified buildings due to the difference in point density between epochs. How did you overcome this problem in this study?

347- "this algorithm": which one? The change detection or the building boundary extraction algorithm?

351-352- not clear. Why? Seems also repetition.

354- What do you mean by "effectively"? What is a more specific word here?

2.44 sqm is a high level of error for urban areas.

 


Author Response

Comments and Suggestions for Authors

Comments:

(1)   What do the algorithms do? Change detection or building boundary extraction? Seems mixed between the two.

ANS: Although building extraction is one of the main steps in this paper, the authors compared buildings in two different epochs to identify the type of changes in a building. Also, the changes were divided into newly built or demolished buildings. Moreover, building boundary extraction is used after finding the changes in buildings, to extract more information and simulate the shape of each building.

However, the authors changed the title of the paper to "Developing an Algorithm for Buildings Extraction and Determining Changes from Airborne LiDAR Point Clouds".

 

(2)   Have you seen the following studies?

- Sun, Y., Zhang, X., Zhao, X. and Xin, Q., 2018. Extracting building boundaries from high resolution optical images and LiDAR data by integrating the convolutional neural network and the active contour model. Remote Sensing, 10(9), p.1459.

-Ji, M., Liu, L. and Buchroithner, M., 2018. Identifying Collapsed Buildings Using Post-Earthquake Satellite Imagery and Convolutional Neural Networks: A Case Study of the 2010 Haiti Earthquake. Remote Sensing, 10(11), p. 1689.

-Shirowzhan, S. and Trinder, J., 2017. Building classification from lidar data for spatio-temporal assessment of 3D urban developments. Procedia Eng, 180, pp.1453-1461.

ANS: Thank you, they are useful articles and the authors will consider and evaluate these methods for future studies.  

Introduction:

(3) The introduction should focus on the problem of building boundary extraction in change detection using LiDAR data, to show how much work has been done by others and what remains. Instead, the current introduction concentrates on the advantages of LiDAR compared to photogrammetry. I recommend that the authors improve this part to orient the reader to the problem investigated in this research.

ANS: Actually, the main purpose of this study is building change detection from LiDAR point clouds. Moreover, the authors added some details comparing previous work and its weaknesses to improve this part, in lines 59, 60, and 67-70.

Study area:

(4) Line 68- What do the authors mean by "extreme topography"? A slanted area or a complex scene? Please explain. The visual representation of the study area doesn't show that this area is a challenging scene for change detection.

ANS: Indeed, this study area is in the vicinity of a mountain, and it is largely a slanted area. Also, there are different objects (such as roads, cars, vegetation, and buildings of different sizes), so the experimental region is generally challenging, and it could be an appropriate study area for evaluating this purpose.

Methodology:

(5) How do the authors overcome the problem of the difference in point density that affects the results of change detection? What pixel size was chosen for the DSMs?

ANS: One of the challenges in change detection of buildings from LiDAR data is the difference between point densities, which can affect the accuracy of building extraction. However, the filtering and building extraction method in this paper is highly functional and could extract buildings with decent accuracy, and none of the buildings were deleted. Also, as mentioned in the manuscript in line 264, the pixel size is 0.5 meters.

(6) Line 81. How are buildings extracted from each LiDAR epoch? Manually or automatically? If automatic, is it point-based or pixel-based? What is the level of error in this classification?

ANS: Building extraction is pixel-based; after LiDAR point cloud filtering and nDSM generation, buildings were extracted by using a suitable height threshold. None of the buildings were deleted, and the authors reported the RMSE as the accuracy of the border extraction.
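For illustration, a minimal sketch of such pixel-based extraction from a rasterized nDSM; the threshold value, minimum-region size, and function names are assumptions, not the paper's actual parameters:

```python
import numpy as np
from scipy import ndimage

def extract_buildings(ndsm: np.ndarray, height_thr: float = 2.5,
                      min_area_px: int = 50) -> np.ndarray:
    """Return a boolean mask of building pixels in an nDSM (heights above ground, m).

    Sketch only: height_thr and min_area_px are assumed values, not the paper's.
    """
    # 1. Height threshold: keep pixels clearly above ground level.
    mask = ndsm > height_thr
    # 2. Remove connected regions smaller than the smallest expected building.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_area_px) + 1)
    return keep

# Example: a 0.5 m resolution nDSM raster (pixel size mentioned in the response).
ndsm = np.random.rand(200, 200) * 1.0     # mostly ground-level noise
ndsm[50:120, 60:140] = 8.0                # one tall "building" block
building_mask = extract_buildings(ndsm)
```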

(7) Line 82- "most studies": please be specific and refer to the studies that didn't address building boundaries in a change detection approach.

ANS: I added some references in line 93.

(8) Line 88- "ndsm was generated to extract building points after applying the height threshold accurately". This seems incorrect. You already extracted building points, as stated in Line 81. In this step of nDSM generation you extract the height of buildings above ground. Please modify this.

ANS: As you can see in Figure 2, buildings were extracted after applying a height threshold to the nDSM and removing regions that are smaller than the smallest building. Also, in Section 3.1.2 the process of building extraction is explained clearly.

(9)   Line 162-“ topography doesn’t have an adverse …” why? Seems not true.

ANS: The authors modified this sentence in line 173.

(10) Line 168- Usually there are three classes: new buildings, demolished buildings, and unchanged buildings. Why is the differentiation between the changed classes ignored here?

ANS: Change detection in an urban area is mostly classified as "changed" or "unchanged." Also, changed buildings can be divided into two categories: (1) demolished buildings and (2) newly built buildings. The authors tried to explain this section clearly in Part 3.2.

(11) Line 192- What is the difference between using the proposed algorithms for buildings in general and for changed buildings? It seems that these algorithms are applied after the change detection results appear, which implies there is no difference if we have only one building and apply these algorithms for footprint extraction. The question is: what is the advantage of these algorithms in a change detection study? From the abstract, one may think that the algorithms are used for automatic extraction of building boundary changes, but as we read on we see that this study focuses on a type of footprint extraction for changed buildings (please see Figure 10).

ANS: In this study, the main goal is building change detection, and the secondary goal is building boundary extraction. Therefore, the first step is building change detection; for this purpose the authors extracted building patches and compared the buildings in two different epochs to find the type of changes (more details are in Sections 3.1 and 3.2). After that, the authors proposed a new method for building boundary extraction (more details are in Section 3.3). Indeed, after finding the building changes, we can extract information such as the shape and area of the buildings through boundary extraction; in Figure 10 there are 11 changed buildings whose boundaries are extracted by using the firefly and ant colony algorithms.
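A minimal sketch of how a two-epoch, mask-based comparison could label a building as newly built, demolished, or unchanged; the rule and names below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def classify_change(mask_t1: np.ndarray, mask_t2: np.ndarray) -> str:
    """Classify one building footprint by comparing binary masks of two epochs.

    Illustrative assumption: presence or absence of building pixels in each
    epoch decides the class; the paper's actual comparison rules may differ.
    """
    present_t1 = mask_t1.any()
    present_t2 = mask_t2.any()
    if present_t2 and not present_t1:
        return "newly built"
    if present_t1 and not present_t2:
        return "demolished"
    return "unchanged"

# Example with two tiny rasterized footprints:
t1 = np.zeros((10, 10), dtype=bool)
t2 = np.zeros((10, 10), dtype=bool)
t2[2:7, 3:8] = True                 # building appears only in the second epoch
print(classify_change(t1, t2))      # -> "newly built"
```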

Results:

(12) Line 257- Repetition.

ANS: This is just to emphasize the importance.

(13) The result of the change detection is not presented. What is presented is the boundaries of the changed buildings, so what about the pixel-based result?

ANS: All the change detection results in this study appear in Figures 9 and 10 and Table 3.

(14) What is the source of equation 8, and why is it placed after the results section? One expects to see such equations in the methodology and the outcomes of the validation as part of the results. I recommend restructuring.

ANS: The authors added the source of equation 8. Also, this equation is used for evaluating the results, so it is placed in the section on validation of the proposed method.

(15) Line 342- Table caption "Comparison of buildings area between the proposed method and ArcGIS". What do you mean by a comparison of the proposed method and ArcGIS? If you have reference data in ArcGIS, you need to say so. I recommend modifying it as: "…between the results and the reference data".

ANS: It has been modified. Thank you.

(16) Missing discussion- This paper requires a discussion section in which you state the limitations and strengths and compare your findings with the literature.

ANS: The authors added "Discussion" to the title of Section 4. Also, some discussion of limitations and strengths was added in lines 353-357.

Conclusion

(17) 345-347. This method suffers from the problem of inconsistency of the classified buildings due to the difference in point density between epochs. How did you overcome this problem in this study?

ANS: As addressed in question 5 about density differences, the authors extracted buildings from each LiDAR dataset separately, and then the buildings were compared to find the type of building changes.

(18) 347- "this algorithm": which one? The change detection or the building boundary extraction algorithm?

ANS: “this algorithm of border extraction of building is well suited to simulate the shape of the building”

(19)                       351-352- not clear. Why? Seems also repetition.

ANS: It is for more emphasis for readers.

(20) 354- What do you mean by "effectively"? What is a more specific word here? 2.44 sqm is a high level of error for urban areas.

ANS: As you know, the areas of the buildings are large (between 100.48 m2 and 435.49 m2), and 2.44 m2 is not a high level of error at this scale; the accuracy of the algorithm is good.


Round 2

Reviewer 1 Report

Thank you for revising the manuscript. However, the authors have only done some minor editing. In my opinion, direct edits of words or phrases and minor revisions won't sufficiently address my previous comments. Please perform more analysis and experiments as suggested. For comment 6, I mean the manuscript lacks a description of the computational cost of your method. Please add it.

Reviewer 2 Report

Thank you for revising the manuscript and making it more comprehensive. However, I asked for a major revision, but the authors have only done some minor editing (adding some new text). To accept this paper, more experiments are needed. Please perform the analysis and experiments suggested in the first round of revision.



Reviewer 3 Report

I am not convinced this paper has been improved enough to merit publication at this stage. Only 4 of the 20 comments (i.e., comments No. 7, 14, 15, and 18) are addressed, and these are minor changes. For example, the reasons for selecting the study extent as a challenging site are still not sufficient, because the slope of this area is not provided numerically or in a profile view. In addition, in such a vast area with large building sizes, as it appears in the answer to comment 20, where buildings are separated from each other and from trees, it seems that there is no challenge for change detection. Also, the method of change detection used in this paper seems appropriate only for simple sites in flat areas. As another example, the response to No. 13 is not appropriate, as the figures don't show change detection results.

I believe that the paper should focus only on the building boundary extraction algorithm, because the addition of change detection creates a lot of confusion for the readers.
