Article
Peer-Review Record

Generating Elevation Surface from a Single RGB Remotely Sensed Image Using Deep Learning

Remote Sens. 2020, 12(12), 2002; https://doi.org/10.3390/rs12122002
by Emmanouil Panagiotou 1,*,†, Georgios Chochlakis 1,†, Lazaros Grammatikopoulos 2 and Eleni Charou 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 25 May 2020 / Revised: 12 June 2020 / Accepted: 18 June 2020 / Published: 22 June 2020

Round 1

Reviewer 1 Report

The most critical issue of this manuscript can be considered to hide behind the point of view based on the deep learning approach. The authors, in fact, wrote:
"At its core, the problem of generating DEMs can be formulated as a function G whose input is a representation of the terrain patch of interest (e.g. RGB satellite images) and its output is the DEM itself G : X > Y. …. .On the other hand, Artificial Intelligence (AI) and Machine Learning (ML) can learn such rules in an automated, data-driven manner by building an internal model of the area of interest and its structure".


Unfortunately, I am not so convinced that the supposed rules exist. I would like to draw the authors' attention to some examples. In an acquired scene, two cars with exactly the same colors can be located one in a valley and the other on a mountain. Identical residential units can be placed at different levels in the same district. A city composed of similar buildings can extend from the plain to the hills. Therefore, the assumption that a GENERAL function G can exist is generally false. Anyway, considering specific cases, it is possible that G can be assumed and therefore that the corresponding rules can be learned automatically.

Regarding the use of RGB images, the authors underestimate the problem. When an RGB image is obtained from a remote sensing acquisition, its colors can be quite arbitrary. I would like to point out that a specific spectral channel can very rarely be considered a color channel. For this reason, the usual operation of assigning a spectral channel to a color channel should be used only to display the data (visual assessment). Furthermore, it is possible, even likely, that the same material can produce different colors in different acquisitions even if the same instrument is used [ref1]. Obviously, this aspect represents a relevant problem for the proposed methodology.

The authors should consider shortening the "Conclusion" section.


[ref1] Li, Z.; Zhu, H.; Zhou, C.; Cao, L.; Zhong, Y.; Zeng, T.; Liu, J. A Color Consistency Processing Method for HY-1C Images of Antarctica. Remote Sens. 2020, 12, 1143.

Author Response

Dear Reviewer 1,

We wish to thank you all for your feedback and constructive comments in this first round of review. We are grateful for your insightful indications, as they provided valuable details to refine our manuscript's contents and add important improvements to our paper. In this document we try to address the issues you and your colleagues raised as effectively as possible. All line numbers refer to the revised manuscript file, with the respective lines in the difference file with tracked changes given in parentheses (blue color).

 

Response to Reviewer 1 Comments

 

Point 1:

 

The most critical issue of this manuscript can be considered to hide behind the point of view based on the deep learning approach.

 

The authors, in fact, wrote:

"At its core, the problem of generating DEMs can be formulated as a function G whose input is a representation of the terrain patch of interest (e.g. RGB satellite images) and its output is the DEM itself G : X > Y. …. .On the other hand, Artificial Intelligence (AI) and Machine Learning (ML) can learn such rules in an automated, data-driven manner by building an internal model of the area of interest and its structure".

Unfortunately, I am not so convinced that the supposed rules exist. I would like to draw the authors' attention to some examples. In an acquired scene, two cars with exactly the same colors can be located one in a valley and the other on a mountain. Identical residential units can be placed at different levels in the same district. A city composed of similar buildings can extend from the plain to the hills. Therefore, the assumption that a GENERAL function G can exist is generally false. Anyway, considering specific cases, it is possible that G can be assumed and therefore that the corresponding rules can be learned automatically.

 

Response 1:

 

Thank you for bringing this up. We agree that we were too assertive in our statements. Notice, however, that just after these statements we express our concerns regarding this assumption with a general statement that covers your pertinent examples, i.e. that a constant added to the height of a scene may not alter its appearance. We will soften our assumption by mentioning that G can exist locally, which is our task anyway, given that we test the model in nearby areas. Moreover, for the drone imagery, the drone maintains an approximately constant elevation above the ground when capturing images (which we had not explicitly mentioned in the respective section and will now include), so the DSMs are normalized (ground height is effectively removed due to the capturing process) and the same house should appear to have the same height no matter its global altitude and position. However, were that not the case, two cars, one in a valley and another on a mountain, would actually appear different, as the car in the valley would appear much smaller in the image, conveying useful information for the model. Also notice that in Figure 10a, although certain pixels belonging to different levels of a building have the same color and the same local neighborhood (i.e. the rooftop and the parking spots are locally gray, row 2), they are assigned correct heights by the model. Finally, while we make that assumption, it has no influence on the learning process, meaning we do not modify the mathematical formalism, as you correctly mention.
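For intuition on the remark that, without the constant above-ground flying height, the car in the valley would appear smaller than the car on the mountain, here is a back-of-the-envelope pinhole-camera estimate (a minimal sketch; the focal length, pixel pitch and camera-to-ground distances are purely illustrative assumptions of ours, not values from the paper):

```python
# Pinhole-camera estimate: apparent size falls off with distance to the camera,
# so under a constant-absolute-altitude acquisition the car in the valley
# (farther from the camera) covers fewer pixels than the car on the mountain.
def apparent_size_px(object_size_m, distance_m, focal_mm=50.0, pixel_pitch_um=5.0):
    # size on sensor (mm) = object size (mm) * focal length (mm) / distance (mm)
    size_on_sensor_mm = object_size_m * 1000.0 * focal_mm / (distance_m * 1000.0)
    return size_on_sensor_mm * 1000.0 / pixel_pitch_um

car_length = 4.5  # meters
print(apparent_size_px(car_length, distance_m=300.0))   # car on the mountain (closer): ~150 px
print(apparent_size_px(car_length, distance_m=1000.0))  # car in the valley (farther): ~45 px
```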

Changes 1:

We placed more emphasis on explaining our assumptions, lines 45-50 (45-51).

We removed an error relating to the normalization of the Urban/Rural dataset, line 212 (231-232), and added clarification on that matter in lines 230-231 (250-251).

 

Point 2:

 

Regarding the use of RGB images, the authors underestimate the problem. When an RGB image is obtained from a remote sensing acquisition, its colors can be quite arbitrary. I would like to point out that a specific spectral channel can very rarely be considered a color channel. For this reason, the usual operation of assigning a spectral channel to a color channel should be used only to display the data (visual assessment). Furthermore, it is possible, even likely, that the same material can produce different colors in different acquisitions even if the same instrument is used [ref1]. Obviously, this aspect represents a relevant problem for the proposed methodology.

 

Response 2:

 

Your observation is correct. You raise an important issue, which is, however, a Deep Learning field of its own, namely adversarial robustness. The essence of studies in that field is that data (mostly images) that rightfully appear similar to humans are treated differently by Deep Learning models; i.e., slight perturbations in color that do not change an image in the eyes of a human can result in vastly different classification results. It is obviously a challenging topic, but we feel it is beyond the scope of our work.

Changes 2:

We mention this issue and explain our stance, lines 419-422 (464-467).

 

 

Point 3:

 

The authors should consider shortening the "Conclusion" section.



Response 3:

Thank you for this observation; we shortened the "Conclusion" section at several points. You can view all the cuts in the difference file, starting from line (494).

 

 

 

We appreciate the time and effort that you dedicated to providing feedback on our manuscript. Your input has been invaluable and we have tried to address it to the best of our ability.

 

 

Thank you, the Authors.

Reviewer 2 Report

1. For the abstract, can the authors please include some more quantitative details pertaining to the results attained (e.g., accuracies attained relative to ground truth, etc). Also, please make it clear to readers if the proposed approach estimates DEM on a relative scale.

2. What are the units for the y-axis distances in Figure 12 plots?

3. The Results section requires further work. Further to the L1 error metric, can the authors provide some more granular details on the accuracies achieved, especially a quantitative indication of the planimetric error as well as the height error of the predicted DEMs in comparison to the ground-truth DEMs? For example, from the planimetric aspect, it would be interesting to see the performance of the method based on well-known 2D metrics such as completeness, correctness, etc., so as to provide the reader with some in-depth detail on the overlap error/difference between the predicted and ground-truth DEMs.


4. Section 3.2 may be better suited to Section 2.

5. While the proposed approach is compared to GAN and U-Net, there is also dedicated and relevant work on depth estimation from single-view imagery in the vision community.
See: i) Zhou, T., Brown, M., Snavely, N. and Lowe, D.G., 2017. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1851-1858).
and: ii) Eigen, D., Puhrsch, C. and Fergus, R., 2014. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems (pp. 2366-2374).

Please expand on these in the background/introduction when discussing previous works.

In addition, the code implementations for these are openly accessible (e.g., DepthCNN by Zhou et al. 2017). It would add value to the manuscript if the authors could compare to at least one of these approaches and include it as part of the Results section.

Author Response

Dear Reviewer 2,

We wish to thank you all for your feedback and constructive comments in this first round of review. We are grateful for your insightful indications, as they provided valuable details to refine our manuscript's contents and add important improvements to our paper. In this document we try to address the issues you and your colleagues raised as effectively as possible. All line numbers refer to the revised manuscript file, with the respective lines in the difference file with tracked changes given in parentheses (blue color).

 

Response to Reviewer 2 Comments

 

Point 1:

For the abstract, can the authors please include some more quantitative details pertaining to the results attained (e.g., accuracies attained relative to ground truth, etc). Also, please make it clear to readers if the proposed approach estimates DEM on a relative scale.



Response 1:

You are correct in bringing up this issue. We should inform the readers from the start that the quantitative results obtained are suboptimal. Interested readers can see more details in Section 3.2. For the second issue you raise, we did not include any such statement as we perform predictions in both regimes (absolute for satellite imagery, relative for drone imagery given the capturing process). Since you raise this concern, we shall include an explicit statement to that effect in the abstract. Notice that we already address this for the satellite imagery in line 212 (global maximum-minimum), whereas we will make the fact that the DSMs are normalized explicit in the respective section (Section 2.10).

Changes 1:

We made clear that we construct both relative and absolute point clouds, line 11 (11).

Informed the readers of the suboptimal results compared to traditional methods (which use multiple images as input), lines 15-16 (15-16).



Point 2: What are the units for the y-axis distances in Figure 12 plots?



Response 2:

The units on the y-axis of Figure 12 do not correspond to any real units since, as we mention in line 211, we normalize to [-1, 1]. We will reiterate that in the caption, since it created confusion.
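To make the scaling concrete, here is a minimal sketch of the kind of min-max normalization we refer to (the function names and NumPy implementation are our own illustration, not code from the manuscript):

```python
import numpy as np

def normalize_dem(dem, d_min, d_max):
    """Scale a DEM patch (in meters) to [-1, 1] using a dataset-wide min/max."""
    return 2.0 * (dem - d_min) / (d_max - d_min) - 1.0

def denormalize_dem(dem_norm, d_min, d_max):
    """Map normalized values back to meters."""
    return (dem_norm + 1.0) / 2.0 * (d_max - d_min) + d_min

# Distances between normalized values are dimensionless, which is why the
# y-axis of the plots carries no physical unit.
patch = np.array([[120.0, 125.0], [130.0, 150.0]])   # heights in meters
norm = normalize_dem(patch, d_min=100.0, d_max=200.0)
print(norm)                                           # values in [-1, 1]
print(denormalize_dem(norm, 100.0, 200.0))            # back to meters
```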



Changes 2:

Changed the Figure 12 caption to clarify the distances.

Point 3:

The Results section requires further work. Further to the L1 error metric, can the authors provide some more granular details on the accuracies achieved, especially a quantitative indication of the planimetric error as well as the height error of the predicted DEMs in comparison to the ground-truth DEMs? For example, from the planimetric aspect, it would be interesting to see the performance of the method based on well-known 2D metrics such as completeness, correctness, etc., so as to provide the reader with some in-depth detail on the overlap error/difference between the predicted and ground-truth DEMs.





Response 3:

Thank you for bringing the planimetric error to our attention, but we feel that the most appropriate metric for vertical accuracy evaluation in our case is the L1 error (along with the regularization methods we provide for better generalization). We also believe that such a strategy of comparing points, planes or edges would yield less sound and less statistically significant results than the L1 error, which is applied to the entire 256x256 image across the whole dataset. The estimation of completeness and correctness 2D metrics based on classification results derived from our predicted DEMs would be an interesting direction for further investigation; however, it is beyond the scope of the current research.

As you correctly indicated, we added Table 2 to our manuscript to additionally report the absolute height error in meters.
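For readers who want to see how such a per-pixel figure translates into meters, here is a minimal sketch (our own illustrative code, assuming NumPy arrays normalized to [-1, 1] with a dataset-wide height range; it is not taken from the manuscript's implementation):

```python
import numpy as np

def l1_error_meters(pred_norm, gt_norm, d_min, d_max):
    """
    Mean absolute (L1) error between a predicted and a ground-truth DEM patch.
    Both inputs are 256x256 arrays normalized to [-1, 1]; the result is
    converted back to meters using the dataset-wide height range.
    """
    scale = (d_max - d_min) / 2.0          # meters per unit in [-1, 1] space
    return np.mean(np.abs(pred_norm - gt_norm)) * scale

# Toy usage with random patches standing in for model output and ground truth.
rng = np.random.default_rng(0)
pred = rng.uniform(-1, 1, size=(256, 256))
gt = rng.uniform(-1, 1, size=(256, 256))
print(l1_error_meters(pred, gt, d_min=0.0, d_max=500.0), "m")
```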



Changes 3:

Added Table 2 to Section 3.2, providing the height error in meters.

Point 4: Section 3.2 may be better suited to Section 2.

Response 4:

We contemplated making the change you are suggesting before submitting the manuscript. Given that you raise this issue, we agree and have relocated some of the information provided in this section to the "Materials and Methods" section. Notice the differences in the difference file, highlighted in blue in Section 2.5 and in red in Section 3.2.

Point 5:

While the proposed approach is compared to GAN and U-Net, there is also dedicated and relevant work on depth estimation from single-view imagery in the vision community.

See: i) Zhou, T., Brown, M., Snavely, N. and Lowe, D.G., 2017. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1851-1858).

and: ii) Eigen, D., Puhrsch, C. and Fergus, R., 2014. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems (pp. 2366-2374).

Please expand on these in the background/introduction when discussing previous works.

In addition, the code implementations for these are openly accessible (e.g., DepthCNN by Zhou et al. 2017). It would add value to the manuscript if the authors could compare to at least one of these approaches and include it as part of the Results section.



Response 5:

Thank you for suggesting these papers. First of all, to comment on the specific papers you have suggested: we do not feel that the approach in Eigen et al., 2014 would contribute to the paper, as, in terms of Deep Learning, 6 years almost render an approach obsolete. Furthermore, Zhou et al., 2017 examine depth map estimation from video, so their approach is not applicable to our task, while the architecture they use, DepthCNN, is a slight modification of the already studied U-net, and we feel it would not add any further value to our manuscript. However, given your objections to the baselines, we observe that we have not made the purpose of our baseline section clear enough. Our goals were i) to demonstrate that another recent well-known framework, CycleGAN, performs worse than pix2pix and thus justify our choice of CGAN architecture, and ii) to demonstrate that the plain U-net performs worse than the U-net augmented by the CGAN framework; marginal improvements, although welcome, are left for interested readers and further work. So, we will make this more explicit in that section. Moreover, we are well aware of the methods mainly deployed in depth estimation. As you can expect from the comments on DepthCNN, we have observed that the U-net, or slight modifications of it, is mainly used in such studies. For example, we have actually tested a variation of "Mou, L.; Zhu, X.X. IM2HEIGHT: Height estimation from single monocular imagery via fully residual convolutional-deconvolutional network. arXiv preprint arXiv:1802.10249, 2018", which is a more recent approach (2018). It, too, uses a modified U-net. Nonetheless, our attempts to make it work were unsuccessful, so we did not include it in our article. We thank you, however, for pointing out that depth estimation studies must be included in the previous work, as we had neglected them, and we will add comments on their approaches.

Changes 5:

Mentioned related work on depth estimation in the Introduction, lines 65-68 (66-69).

Better explained the goal of the baseline section, lines 352-354 (393-396).





We appreciate the time and effort that you dedicated to providing feedback on our manuscript. Your input has been invaluable and we have tried to address it to the best of our ability.

 

 

Thank you, the Authors.

Reviewer 3 Report

In my opinion, the subject of this work is relevant for the Remote Sensing journal readers, and sufficiently novel and interesting to warrant publication. The research questions are very important in various fields of scientific and applied investigation. The work is sound and the overall approach envisioned and implemented is generally correct. All the key elements (i.e., abstract, introduction, methodology, results, discussion, and conclusions) are present but not always clearly arranged; indeed, I am of the opinion that there are too many sub-sections within the Materials and Methods and in the Results. It is evident that the science underlying the discussion is solid. The analyses and discussion are the logical outcome of the presented data. Figures and tables, generally, are all necessary and of sufficient quality.

However, it is my opinion that the strategy and study design must be much better explained and structured. The paper places a good deal of emphasis on describing neural nets and too little on describing practically how the network was implemented. The authors need to give more specific detail on the practical implementation of the Convolutional Neural Network (CNN) for it to be reproducible by the reader. I think that the entire methodological description is written only for an expert ANN audience. These things are certainly clear and obvious in the minds of the authors but need to be explained in a practical way in the methodological section.

At some points, the text is confused and not very well organized, with mixed results and discussion. Indeed, in the results section and sub-sections no bibliographic references should be necessary. Also, the text between lines 71 and 77 needs to be included in the discussion or conclusion sections (not in the introduction).

In summary, I think that this manuscript could be accepted as an article in the Remote Sensing journal only after some revisions.

Author Response

Dear Reviewer 3,

We wish to thank you all for your feedback and constructive comments in this first round of review. We are grateful for your insightful indications, as they provided valuable details to refine our manuscript's contents and add important improvements to our paper. In this document we try to address the issues you and your colleagues raised as effectively as possible. All line numbers refer to the revised manuscript file, with the respective lines in the difference file with tracked changes given in parentheses (blue color).

 

Response to Reviewer 3 Comments

 

Point 1:

In my opinion, the subject of this work is relevant for the Remote Sensing journal readers, and sufficiently novel and interesting to warrant publication. The research questions are very important in various fields of scientific and applied investigation. The work is sound and the overall approach envisioned and implemented is generally correct. All the key elements (i.e., abstract, introduction, methodology, results, discussion, and conclusions) are present but not always clearly arranged; indeed, I am of the opinion that there are too many sub-sections within the Materials and Methods and in the Results. It is evident that the science underlying the discussion is solid. The analyses and discussion are the logical outcome of the presented data. Figures and tables, generally, are all necessary and of sufficient quality.

However, it is my opinion that the strategy and study design must be much better explained and structured. The paper places a good deal of emphasis on describing neural nets and too little on describing practically how the network was implemented. The authors need to give more specific detail on the practical implementation of the Convolutional Neural Network (CNN) for it to be reproducible by the reader. I think that the entire methodological description is written only for an expert ANN audience. These things are certainly clear and obvious in the minds of the authors but need to be explained in a practical way in the methodological section.

 



Response 1:

Thank you very much for your positive statements. We agree that too many subsections can be confusing to the reader, therefore we have merged some of them. To address your other concern, we have of course omitted much introductory information regarding the implementation of the network. However, we tried to expand on the topics by explaining the relevant concepts at a high level, and we provided a link to our published code for interested parties who want more information. Additionally, given the recent surge in libraries for Deep Learning, one is not required to be acquainted with technical details; for example, in Tensorflow (or Pytorch, Caffe, etc.) a convolutional block can be implemented in a single line of code. Explaining so many topics in detail would render our original research manuscript a Deep Learning tutorial. So, instead of adding more implementation details, we will refer readers to excellent introductory sources, like the freely available Deep Learning book by Goodfellow et al., and, where possible, we substitute high-level ideas with lower-level details, e.g. we add more details in the description of our building blocks, the convolutional layers. Finally, we will highlight the fact that most of the work is done by the aforementioned frameworks.
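To illustrate that point about modern frameworks, here is a minimal sketch of such a convolutional block using the Keras API shipped with TensorFlow (the filter count, kernel size and layer ordering are generic examples of our own, not the exact configuration used in the manuscript):

```python
import tensorflow as tf

# A single convolutional "downsampling" block of the kind used in
# pix2pix/U-net style encoders: convolution, normalization, activation.
def conv_block(filters, size=4, apply_batchnorm=True):
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, size, strides=2, padding="same",
                                     use_bias=not apply_batchnorm))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block

# Usage: one call turns a 256x256 RGB tensor into 128x128 feature maps.
x = tf.random.normal((1, 256, 256, 3))
print(conv_block(64)(x).shape)   # (1, 128, 128, 64)
```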

Changes 1:

We merged the subsections Discriminator Architecture, Generator Architecture, Evaluation Metrics and Overall Architecture of the "Materials and Methods" section into one subsection, "Architecture Analysis". Notice the crossed-out subsections in the Materials and Methods section, highlighted in red in the difference file.

Provided a citation to Goodfellow’s book for non-expert readers and a statement about contemporary Deep Learning frameworks, lines 80-83 (86-89).

Added a more detailed CNN architecture description, suitable for all readers regardless of background, in the subsection Typical Convolutional Architecture, lines (91-100). We also rephrased some of our original explanations of the U-net architecture of the Generator, lines (134-143).

Point 2:

At some points, the text is confused and not very well organized, with mixed results and discussion. Indeed, in the results section and sub-sections no bibliographic references should be necessary. Also, the text between lines 71 and 77 needs to be included in the discussion or conclusion sections (not in the introduction).

In summary, I think that this manuscript could be accepted as an article in the Remote Sensing journal only after some revisions.





Response 2:

We thank you for that observation. The conclusions of our studies are indeed better suited to the Discussion and Conclusion sections. However, we relocate the text beginning from line 73, as the previous lines briefly discuss the sections in which we demonstrate the quantitative performance of the model. We agree that some of the bibliographic references were unnecessary and therefore removed many of them, keeping only those which refer to similar results that further support our claims and some related to mathematical proofs. References contained in Figure captions are not removed, as our Figures should be self-contained.





Changes 2:

Relocated lines (79-83) of the Introduction, highlighted in red, to lines 395-397 (440-442) of the Discussion section.

 

Removed citations in lines (301, 390, 392, 393, 398, 401, 406, 410, 424), highlighted in red in the difference file.



We appreciate the time and effort that you dedicated to providing feedback on our manuscript. Your input has been invaluable and we have tried to address it to the best of our ability.

 

 

Thank you, the Authors.

Reviewer 4 Report

The paper proposes a GAN model for end-to-end DEM generation from a single RGB image. I recommend acceptance after major revision.

1) First of all, the proposed method must be compared against state-of-the-art methods in a quantitative way in a table format.

2) The following sentence, "Our main focus is on producing sharp, robust results rather than accuracy,...", should be extended and explained in detail. So is accuracy not important at all? Does the GAN just create a DEM model without any real depth values?

3) "given the right amount of data and resources, CGANs can become a reliable tool in extracting 3D geometry"

How much data is the "right amount"? A detailed evaluation is missing, e.g. a table showing the quality of the generated DEMs in relation to the amount of training data.

4) In Figure 10, both subfigures seem to me to be urban regions; however, subfigure (b) is denoted as a rural region?

5) It should be clarified more precisely what the model proposed by the Authors is, because the pix2pix, U-net, PatchGAN and CGAN methods were mentioned, but these are existing methods created by other authors. So a more detailed and clear section about the created model is needed.

Author Response

Dear Reviewer 4,

We wish to thank you all for your feedback and constructive comments in this first round of review. We are grateful for your insightful indications, as they provided valuable details to refine our manuscript's contents and add important improvements to our paper. In this document we try to address the issues you and your colleagues raised as effectively as possible. All line numbers refer to the revised manuscript file, with the respective lines in the difference file with tracked changes given in parentheses (blue color).

 

Response to Reviewer 4 Comments

 

Point 1: First of all, the proposed method must be compared against state-of-the-art methods in a quantitative way in a table format.







Response 1:

Thank you for bringing this up; we will include a table in the baselines section comparing the pix2pix, U-net and CycleGAN state-of-the-art Deep Learning architectures for image-to-image translation tasks. If, however, by "state-of-the-art" you mean photogrammetric techniques, we think that such a comparison is not properly defined, as all traditional photogrammetric methods utilize multiple images from different viewpoints.

Changes 1:

Added Table 3 to the baselines section, along with a relocation of some comments to its caption.

Point 2:

The following sentence, "Our main focus is on producing sharp, robust results rather than accuracy,...", should be extended and explained in detail. So is accuracy not important at all? Does the GAN just create a DEM model without any real depth values?

Response 2:

The sentence conveys that the quantitative results are suboptimal while the qualitative performance is surprisingly good, not that we have no interest in providing accurate results. We will indeed rephrase it to make this clearer. The DEMs do have real depth values, there is no way around that; they are just not accurate enough. Moreover, accuracy is important to us, as we state in the rest of the sentence regarding our efforts to indicate that overfitting occurs and that more work will definitely reduce the error. Besides, we have devoted a section in the Results (Section 3.2) to studying quantitative metrics and added more detailed results in Table 2. To make our case even stronger for the given approach, we briefly discuss some preliminary results on a satellite image segmentation task where our generated DEMs were provided as additional input to the model.

Changes 2:

Rephrasing in lines 73-75 (74-76).

Added brief discussion in lines 408-411 (453-456).

Point 3:

"given the right amount of data and resources, CGANs can become a reliable tool in extracting 3D geometry"

How much data is the "right amount"? A detailed evaluation is missing, e.g. a table showing the quality of the generated DEMs in relation to the amount of training data.

Response 3:

Thank you for bringing up this omission; it is almost self-evident to Deep Learning researchers, which is why we did not support it further. Since it is a well-known problem/property of Deep Learning models, and due to resource constraints, we decided to resolve this issue by citing well-known research results that clearly indicate that this statement is true, both for generative networks and for deep learning models in general. Concerning our own results, where the case could have been different, the difference in quality can be observed in the gap between train and test error, and by comparing the figures where data augmentation is used (as we always indicate) with those where it is not; we point the reader to the figures where data augmentation is used. We will comment on this after the results have been presented, i.e. in the Discussion section.

Changes 3:

Added references to significant studies and to our Figures for readers to compare the quality of the DEMs in relation to the training data provided (NOTE: this part was moved from the Introduction to the Discussion section to satisfy another reviewer's request), lines 397-400 (442-446).



Point 4:

In Figure 10, both subfigures seem to me to be urban regions; however, subfigure (b) is denoted as a rural region?

Response 4:

The reason these may seem urban to you is that we had to pick examples with complicated scenery to demonstrate indicative results, leading to structures like buildings and cars being present in the figure. We understand that this can cause confusion, especially given that "rural" is somewhat of an arbitrary concept and varies from region to region. We can assure you that these are indeed from a rural area in Greece.



Point 5:

It should be clarified more precisely what the model proposed by the Authors is, because the pix2pix, U-net, PatchGAN and CGAN methods were mentioned, but these are existing methods created by other authors. So a more detailed and clear section about the created model is needed.

Response 5:

You are correct. What we propose is the approach of generating DEMs, whereas the models are indeed from other authors. We used the words model, approach and architecture interchangeably when referring to our efforts. We will correct that and clearly state this fact in the manuscript.

Changes 5:

Adjusted statements about our efforts to avoid misconceptions, lines 224, 266, 287, 311, 383, 413, 416, 432, 442 (244, 286, 308, 332, 426, 458, 460, 477, 488), Figure 10, Figure 11.



We appreciate the time and effort that you dedicated to providing feedback on our manuscript. Your input has been invaluable and we have tried to address it to the best of our ability.

 

 

Thank you, the Authors.

Round 2

Reviewer 1 Report

The authors have taken into account my comments, inserting appropriate modifications and clarifications into the manuscript. For this reason, it can be considered ready to be published.

 

 

Reviewer 2 Report

1) Line 398 should be "...better results than a plain...."?

Reviewer 4 Report

Thank you for the revisions. I accept your changes.
