Article
Peer-Review Record

Increasing the Lateral Resolution of 3D-GPR Datasets through 2D-FFT Interpolation with Application to a Case Study of the Roman Villa of Horta da Torre (Fronteira, Portugal)

by Rui Jorge Oliveira 1,2,3,*, Bento Caldeira 1,2,3, Teresa Teixidó 4, José Fernando Borges 1,2,3 and André Carneiro 5,6
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Remote Sens. 2022, 14(16), 4069; https://doi.org/10.3390/rs14164069
Submission received: 8 June 2022 / Revised: 28 July 2022 / Accepted: 18 August 2022 / Published: 20 August 2022
(This article belongs to the Special Issue Radar Applications in Cultural Heritage)

Round 1

Reviewer 1 Report

General Comments

1.

It is certainly an interesting idea to evaluate the use of seismic trace interpolation for the densification of GPR data by calculating additional profiles. However, the manuscript does not go beyond the reporting of the implementation and does not evaluate the results fully, or in comparison with existing methods. It is hence a report of some processing tasks and not a scientific publication.

2.

The presented data cannot be evaluated visually as the image representation is too poor. In addition to the overview of depth-slices (Figs. 8-11), detailed comparisons are required so that the differences can be seen. Even Figures 12-16 are of such poor quality that no visual comparisons can be made. Considering the pdf format of this publication, the input images have to be prepared in such a way that the published figures make sense.

3.

It is not explored why the algorithm creates stripes in the data. This is strange and worrying. The cause of these stripes has to be found and rectified. Removing the stripes afterwards is a very poor solution and should not be necessary. If the authors cannot identify the source of these stripes, they need to explain what measures they have taken to find their cause and how they failed to remedy this. If this were a problem inherent to ‘suinterp’ it would have been reported before. It therefore appears to be a problem with the implementation of the processing chain.

4.

There is no comparison at all with existing interpolation schemes that may operate in the spatial domain for individual depth-slices. These have already been used very successfully in the past. The authors’ unsubstantiated claim that “sometimes numerical artifacts with no correspondence to reality are created” (L590) is unhelpful; good interpolation schemes rarely do that. It is therefore requested that several standard interpolation methods are applied to the decimated data sets and that these results are compared with results using the FFT algorithm. Only then can a possible superiority of the FFT algorithm be established.

5.

The section on technical aspects of the FFT processing (Section 2.1) is unsatisfactory. Large parts of this text relate to processing algorithms that are not used in this report. By contrast, the explanation and evaluation of ‘suinterp’ (Section 2.2) is cursory and superficial. In addition to a more detailed analysis of the algorithm’s operation, a discussion of case studies is required where it has been used successfully to densify seismic traces. Figures 5a and b look identical and provide no insight whatsoever. It remains unclear why the authors chose an FFT algorithm for densification. All of Section 2 has to be thoroughly rewritten.

6.

The use of ‘cover surfaces’ is uncommon and needs to be explained in more detail. The provided information is insufficient. Two or three sentences would be necessary.

7.

There are several peculiar expressions that seem wrong, but may be a result of poor English language wording:

It is claimed that the processing “recovers” missing data [e.g. L539]. This is incorrect - missing data cannot be recovered. At best they can be estimated. Later in the text the authors acknowledge this, but there must not be any such claim that data recovery is possible.

The authors label their algorithm as being “iterative” [L499]. This is not obvious from its description. It looks more like being applied to one profile after the other. This is not iterative but sequential.

Overall the text requires language editing. Some suggestions are provided below.

Specific Comments

The line numbers relate to the lines annotated in the provided manuscript.

8.

L42

turn -> make

9.

L40

study -> studies that are

10.

L18

INT-FFT -> The INT-FFT

11.

L54

pin -> [Delete]

12.

L80

Bath -> the Bath

13.

L83

Only traces discovered were at the bottom. [This makes no sense - change]

14.

L98

first survey [what equipment was used in these two surveys? probably some GSSI equipment?]

15.

L111

generating -> generating a

16.

L112

thickness -> thickness were created

17.

L119

0.04m/ns [why was this chosen; justification?]

18.

L136

shows -> show

19.

L139

with -> along

20.

L160

B-scan -> B-scan (i.e. profile) [the term “B-scan” is not familiar to all readers]

21.

L208

seismic signal [are you certain you mean _seismic_ here? maybe omit seismic?]

22.

L223

seismic -> [delete]

23.

L249

desired information can perform -> [what does that mean? rephrase]

24.

L341

show -> show (Figure 8)

25.

L349

Table 1 -> Table 3 [change everywhere]

26.

L351

Table 2 -> Table 4 [change everywhere]

27.

L389

does -> that is

28.

L424

; and (c) after variogram correction [variogram correction is missing. Delete this part of the text]

29.

L468

is demonstrated -> [unclear. Is this the same resolution as before?]

30.

L528

very -> widely

31.

L566

This step implies a very well-defined knowledge of the geometry of data acquisition. [This is a strange statement: any field practice where such knowledge is not available would be egregiously wrong.]


Author Response

REVISION 1

On behalf of all authors, we thank you for reading the manuscript and for the suggestions and opinions expressed in this review.

1. “It is certainly an interesting idea to evaluate the use of seismic trace interpolation for the densification of GPR data by calculating additional profiles. However, the manuscript does not go beyond the reporting of the implementation and does not evaluate the results fully, or in comparison with existing methods. It is hence a report of some processing tasks and not a scientific publication.”

Rui Oliveira: This work presents an approach, conceived by the authors, to densify GPR data using a 2D Fourier interpolation scheme. The proposed scheme is not directly comparable with existing interpolation schemes, and for this reason such a comparison was not carried out. Our team proposes an approach that interpolates new GPR profiles between every two existing ones, with the interpolation performed at the trace level according to the characterization presented in Section 2.3. The GPR data densification scheme is a complementary step to the standard GPR data processing operations. Considering your comments, we have reformulated the text so that the purpose of the approach and its description in the manuscript are more understandable.

2. “The presented data cannot be evaluated visually as the image representation is too poor. In addition to the overview of depth-slices (Figs. 8-11), detailed comparisons are required so that the differences can be seen. Even Figures 12-16 are of such poor quality that no visual comparisons can be made. Considering the pdf format of this publication, the input images have to be prepared in such a way that the published figures make sense.”

Rui Oliveira: Yes, the images are indeed small and of poor quality. The quality had to be reduced to allow the manuscript to be submitted on the platform. There was also some concern about the size of the images on each page; we tried to avoid disproportionate figures, which resulted in the images as presented. We have reworked the images so that they are perceptible to the reader. No interpretive lines have been added over the reflection alignments, as we fear this would disturb the analysis of the depth-slices.

3. “It is not explored why the algorithm creates stripes in the data. This is strange and worrying. The cause of these stripes has to be found and rectified. Removing the stripes afterwards is a very poor solution and should not be necessary. If the authors cannot identify the source of these stripes, they need to explain what measures they have taken to find their cause and how they failed to remedy this. If this were a problem inherent to ‘suinterp’ it would have been reported before. It therefore appears to be a problem with the implementation of the processing chain.”

Rui Oliveira: It is not the algorithm that creates the striped effect seen in the data. Through our own fault, the message conveyed was that the profile interpolation algorithm is to blame. What we meant is that the striped effect is caused by miscalibration of the equipment's odometer wheel, which affects the alignment of each profile at the beginning of the processing. Any lag in the dataset becomes more evident after the interpolation of profiles is performed. Topographic effects can also make this striped effect more pronounced. Our point is that, even when this effect occurs, it can be mitigated with complementary processing such as the one suggested: the calculation of the experimental semi-variogram and its use in the production of the grids of each depth-slice. The text of the manuscript has been corrected so that this explanation is better understood by the reader.
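[Editor's note] For readers unfamiliar with this mitigation step, the sketch below shows how an experimental semi-variogram can be computed along one transect of a depth-slice. It is a minimal illustration of the general technique under our own assumptions (the function name, lag binning, and tolerance are hypothetical), not the authors' actual code:

```python
import numpy as np

def experimental_semivariogram(z, x, lags, tol):
    """Experimental semi-variogram gamma(h) of samples z at positions x,
    evaluated at the given lag distances with a bin half-width of tol."""
    dx = np.abs(x[:, None] - x[None, :])     # pairwise separations
    dz2 = (z[:, None] - z[None, :]) ** 2     # squared amplitude differences
    gamma = np.full(len(lags), np.nan)
    for k, h in enumerate(lags):
        mask = (dx > h - tol) & (dx <= h + tol)
        if mask.any():
            gamma[k] = dz2[mask].mean() / 2.0
    return gamma
```

A gamma(h) that rises with lag and flattens at a sill is what the gridding step would then model with a fitted variogram before producing each depth-slice grid.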


4. “There is no comparison at all with existing interpolation schemes that may operate in the spatial domain for individual depth-slices. These have already been used very successfully in the past. The authors’ unsubstantiated claim that “sometimes numerical artifacts with no correspondence to reality are created” (L590) is unhelpful; good interpolation schemes rarely do that. It is therefore requested that several standard interpolation methods are applied to the decimated data sets and that these results are compared with results using the FFT algorithm. Only then can a possible superiority of the FFT algorithm be established.”

Rui Oliveira: We do not wish to be pretentious by saying that the FFT algorithm is better than the others. The message we want to convey is that the FFT algorithm is a type of interpolation that is more advantageous for our purpose. The interpolation is performed in the spectral domain, trace by trace, considering each pair of adjacent traces. The advantage lies in the fact that each acquired trace, given the ellipsoidal footprint of the data acquisition scheme, contains information about the adjacent traces; with this algorithm, that information can be used to estimate data at the intermediate position between each two traces. This estimation process has been successfully tested and reported in the literature for seismic data, and this article shows that it is possible to apply the same approach to GPR data. The mention of interpolation errors, when interpolation is applied exclusively in the space domain, refers to numerical errors that produce information with no correspondence to reality. This effect is greater when there is a severe lack of information in the space to be interpolated, that is, under-sampling. Since the FFT interpolation is completely different from interpolation performed in the space domain, and since we are not interpolating at the depth-slice level but between traces of the GPR profiles, we do not think it is relevant to compare the methods. The text has been corrected to improve the reader's understanding of this aspect.
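[Editor's note] To make the trace-level idea concrete, here is a minimal sketch of the simplest spectral-domain variant: band-limited Fourier resampling along the trace axis, which inserts one estimated trace between each adjacent pair. This illustrates the principle only; the ‘suinterp’ program applied in the manuscript uses a more elaborate scheme, and the function below is our own hypothetical example:

```python
import numpy as np
from scipy.signal import resample

def densify_section(section):
    """Double the lateral trace density of a 2D GPR section
    (shape: n_samples x n_traces) by Fourier-domain resampling
    along the trace axis. Returns 2*n_traces - 1 traces: the
    originals interleaved with one spectrally estimated trace
    between each adjacent pair."""
    n_traces = section.shape[1]
    dense = resample(section, 2 * n_traces, axis=1)  # FFT zero-padding
    return dense[:, : 2 * n_traces - 1]              # drop the wrap-around trace
```

Because the resampling is band-limited, no wavenumbers beyond those already present are introduced, which is consistent with the requirement that the algorithm must not add information that could corrupt the data.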

5. “The section on technical aspects of the FFT processing (Section 2.1) is unsatisfactory. Large parts of this text relate to processing algorithms that are not used in this report. By contrast, the explanation and evaluation of ‘suinterp’ (Section 2.2) is cursory and superficial. In addition to a more detailed analysis of the algorithm’s operation, a discussion of case studies is required where it has been used successfully to densify seismic traces. Figures 5a and b look identical and provide no insight whatsoever. It remains unclear why the authors chose an FFT algorithm for densification. All of Section 2 has to be thoroughly rewritten.”

Rui Oliveira: Section 2.2 (former Section 2.1) details the theoretical foundation of trace interpolation, with bibliographic examples from seismic data. Those methods were not applied in the approach reported in this manuscript, but we believe it is important to substantiate, with references, how the sub-sampling problem has been mitigated when estimating information from seismic data. We agree that the explanation of the algorithm may be considered superficial, but the information is summarized, with bibliographic references, so as not to deviate from the focus of the problem presented and the solution we propose. Figure 5 shows two identical-looking GPR profiles, the main difference being that the number of traces has doubled minus one. The images are intended to look the same in graphic terms, in order to show that the algorithm does not introduce information that could corrupt the data. The trace interpolation method was chosen to be spectral because the new information that is estimated is obtained from the spectrum of the data, as reported in the cited literature. Section 2 has been restructured so that the information is better compartmentalized and explained, in order to make our message clearer to the reader.

6. “The use of ‘cover surfaces’ is uncommon and needs to be explained in more detail. The provided information is insufficient. Two or three sentences would be necessary.”

Rui Oliveira: Cover surfaces may not be widely used in standard GPR data processing, but they are a very useful technique for increasing the understanding of the reflection alignments that can be observed in the 3D reflection model. The cover surface is created from several depth-slices at different depths. From each one, the reflections with the greatest expression are estimated by comparison with the background amplitude of that depth-slice. If this expressive range persists across the other depth-slices, the information is highlighted, so that its representation gives a three-dimensional perspective of an alignment of reflections, such as a buried wall. The manuscript text has been reformulated to explain this technique succinctly.
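[Editor's note] As one concrete reading of this description, the sketch below builds a cover surface as the shallowest depth at which each map position shows an above-background reflection. The thresholding rule (median plus k standard deviations) and the shallowest-hit convention are our own assumptions for illustration, not necessarily the rule used in the manuscript:

```python
import numpy as np

def cover_surface(slices, depths, k=2.0):
    """Illustrative cover surface from a stack of depth-slices.

    slices : array (n_depths, ny, nx) of absolute amplitudes,
             ordered from the shallowest to the deepest slice
    depths : array (n_depths,) of the corresponding depths
    k      : standard deviations above the slice background needed
             for an amplitude to count as an expressive reflection
    """
    surface = np.full(slices.shape[1:], np.nan)
    for depth, sl in zip(depths, slices):
        background = np.median(sl)
        strong = sl > background + k * sl.std()   # expressive reflections
        new = strong & np.isnan(surface)          # positions not yet covered
        surface[new] = depth                      # record the shallowest hit
    return surface
```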

7. “There are several peculiar expressions that seem wrong, but may be a result of poor English language wording: It is claimed that the processing “recovers” missing data [e.g. L539]. This is incorrect - missing data cannot be recovered. At best they can be estimated. Later in the text the authors acknowledge this, but there must not be any such claim that data recovery is possible. The authors label their algorithm as being “iterative” [L499]. This is not obvious from its description. It looks more like being applied to one profile after the other. This is not iterative but sequential. Overall the text requires language editing. Some suggestions are provided below.”

Rui Oliveira: We apologize for the problems with the English. The term data recovery is used in several bibliographic sources on this subject applied to seismic data; we are aware that the information is not recovered but estimated. In view of your opinion, and agreeing that the expressions were misused, we have corrected the text of the manuscript so that no such expressions remain. Regarding calling the algorithm iterative: the calculation of each interpolated trace is performed repeatedly until the entire interpolation of each profile is complete, and the process is also sequential, since each action is performed after the previous one is finished. When we say it is iterative, we mean that it is carried out by the algorithm without user action; the user only has to start it and wait for the process to complete. The text in the manuscript has been corrected to better explain this concept. We appreciate your suggestions for language correction. Please note that the manuscript has been reviewed by an accredited language editing service.

8. “0.04m/ns [why was this chosen; justification?]”

Rui Oliveira: The velocity value of 0.04 m/ns was measured experimentally in the processing program, by modeling the shape of the diffraction hyperbolas during the parameterization of the migration operation. This value is compatible with the values reported in the literature for an organic soil, which is the case of the soil at the site surveyed with the GPR method. This information has been added to the manuscript for the reader.
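[Editor's note] For illustration, this kind of velocity estimate can be reproduced by fitting the two-way travel-time hyperbola of a point diffractor, t(x) = 2·sqrt((x - x0)² + d²) / v, to picked arrivals. The picks below are synthetic, generated only so that the example runs; in practice the fit is made interactively in the processing software during migration parameterization, as described above:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, v, x0, d):
    """Two-way travel time (ns) of a point diffractor at position x0 (m)
    and depth d (m), for a constant medium velocity v (m/ns)."""
    return 2.0 * np.sqrt((x - x0) ** 2 + d ** 2) / v

# Hypothetical picks (position in m, two-way time in ns) from one B-scan:
x_picks = np.linspace(0.0, 1.0, 6)
t_picks = hyperbola(x_picks, 0.04, 0.5, 0.3)      # synthetic, for the demo

(v, x0, d), _ = curve_fit(hyperbola, x_picks, t_picks, p0=(0.1, 0.5, 0.5))
print(f"estimated velocity: {v:.3f} m/ns")        # ~0.04 m/ns here
```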

9. “B-scan -> B-scan (i.e. profile) [the term “B-scan” is not familiar to all readers]”

Rui Oliveira: The term B-scan is used to refer to the GPR profile because the bibliographic references on the GPR method and on processing GPR data use this term, and we do not intend to distance ourselves from them. The text has been corrected in order to standardize the term B-scan.

10. “; and (c) after variogram correction [variogram correction is missing. Delete this part of the text]”

Rui Oliveira: Our apologies. By mistake, a caption was included for a panel that does not exist in this figure. The text has been corrected.

11. “This step implies a very well-defined knowledge of the geometry of data acquisition. [This is a strange statement: any field practice where such knowledge is not available would be egregiously wrong.]”

Rui Oliveira: We recognize that this sentence may be superfluous; we only mean that any spatial lag in the data would make the application of this approach unfeasible.

Reviewer 2 Report

Summary

The authors propose a novel methodology to densify GPR data based on established techniques from exploration seismology data processing. The application area is to guide more accurate archeological exploration/excavation by better delineating structural features that are buried such as walls defining rooms or other spaces as well as objects that may be present within. This is a very useful goal and application of near-surface geophysical imaging and represents a novel attempt to improve GPR data interpretation.

Review

I have one primary question about the methodology regarding the use of the Sharpness Index (Equation 1) between lines 312 and 313.

In the text, the authors discuss that archeological features such as collapsed walls, etc., can make the collected data less sharp, and that the goal is to sharpen the data because, I believe, this better delineates structures (e.g. walls). However:

• Collapsed structure will produce small-scale scattering, and thus larger gradients and a larger Sharpness Index; it is not clear how this quantitative measure improves the ability to interpret the data over the raw image in this type of application.

• For example, in Figure 8, I see little difference in the C0 and C1 images aside from C1 sometimes appearing a little brighter in some of the depth sections. However, I don’t know that I would interpret the data any differently. Just using the change in the Sharpness Index does not appear to guide the user of the data to a different conclusion. A more specific discussion of the relation between the Sharpness Index and interpretation would be useful.

• Although Figure 5 shows results of a synthetic test, I would like to see a more specific discussion of how the interpolated images are guiding the user to an improved interpretation.

• Lines 538-542, on the other hand, illustrate that resolution for interpretation is not lost if the field data are de-densified (up to a point, of course). This point about the value of the methodology might be made more clearly earlier in the manuscript. However, for a practitioner planning field time, some comparison of how much time it takes to collect a denser data set might be evaluated. In other words, what is the argument against just collecting denser field data when the field site is not so large?


Specific Comments

1. I am mindful of the challenges inherent to writing a scientific article in a non-native language. However, extensive editing of the English grammar is necessary, as a typical reader will not spend the time with the article that a reviewer does. As an example, the sentence on lines 39-40 is not grammatically complete.

2. Especially since the journal is electronic only, some of the figures should be improved by making them larger – particularly Figures 8, 9, 10, 11. The reader is asked to make a lot of visual comparisons, and these figures are cramped on the printed page, making it difficult to study them.

3. I appreciate the discussion on resolution in both the raw and interpolated data in terms of what still cannot be resolved (for example, lines 520-522). This type of discussion is useful to the reader who is less familiar with resolution issues and why even “improved” data sets still lack resolution. In fact, this type of discussion might be accentuated in the manuscript.

4. Line 321: This footnote material should be incorporated into the discussion of Methods or Data Processing procedures.

5. Figure 12 refers to a panel (c), but no such figure panel is present.

6. The Abstract states that technical conditions on the data for the processing method to be valid are satisfied (lines 21-24), but it is not clear in the manuscript how this is known or how it was analyzed. Given that the method is being adapted from a seismic method, it would be useful to indicate more explicitly how the GPR data meet these requirements mentioned in the Abstract.

Author Response

REVISION 2

On behalf of all authors, we thank you for reading the manuscript and for the suggestions and opinions expressed in this review.

“I have one primary question about the methodology regarding the use of the Sharpness Index (Equation 1) between lines 312 and 313.

In the text, the authors discuss that archeological features such as collapsed walls, etc., can make the collected data less sharp, and that the goal is to sharpen the data because, I believe, this better delineates structures (e.g. walls). However:

• Collapsed structure will produce small-scale scattering, and thus larger gradients and a larger Sharpness Index; it is not clear how this quantitative measure improves the ability to interpret the data over the raw image in this type of application.”

Rui Oliveira: We are aware that landslides and collapses cause scattering, and the INT-FFT algorithm increases the effect of this scattering. However, its application increases the overall sharpness of the depth-slices, as has been verified experimentally. The sharpness index is a measure that we think can be useful to evaluate the application. Considering your opinion, we have modified the text so that this observation is mentioned and taken into account.
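[Editor's note] Since the index in Equation 1 is gradient-based, one plausible form is sketched below for readers who want a concrete picture. The normalization here is our own assumption; the manuscript's exact definition may differ:

```python
import numpy as np

def sharpness_index(depth_slice):
    """One plausible gradient-based sharpness measure: mean gradient
    magnitude normalised by mean absolute amplitude, so that brighter
    slices are not automatically scored as sharper."""
    gy, gx = np.gradient(depth_slice.astype(float))
    return np.hypot(gx, gy).mean() / (np.abs(depth_slice).mean() + 1e-12)
```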

“For example, in Figure 8, I see little difference in the C0 and C1 images aside from C1 sometimes appearing a little brighter in some of the depth sections. However, I don’t know that I would interpret the data any differently. Just using the change in the Sharpness Index does not appear to guide the user of the data to a different conclusion. A more specific discussion of the relation between the Sharpness Index and interpretation would be useful.”

Rui Oliveira: Figure 8 shows the result of the interpolation of GPR profiles in initially dense data (0.25 m spacing between profiles). Naturally, the interpolation results do not show much difference, which is reflected in the high structural similarity index values. This example shows that the algorithm does not introduce incorrect information into the data. The sharpness index is intended to contribute to the evaluation of the application of this algorithm to GPR data obtained in an archaeological environment; the parameter alone does not assist the interpretation of the results. We have amended the text to clarify the discussion about using the sharpness index to evaluate the data.
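[Editor's note] For reference, structural similarity values like those mentioned here can be computed with scikit-image, as in the sketch below. We assume co-registered depth-slices of equal shape; the data_range handling is our own choice, not a documented setting from the manuscript:

```python
import numpy as np
from skimage.metrics import structural_similarity

def slice_similarity(slice_a, slice_b):
    """Structural similarity between two co-registered depth-slices,
    e.g. an original (C0) and an interpolated (C1) version."""
    data_range = max(slice_a.max() - slice_a.min(),
                     slice_b.max() - slice_b.min())
    return structural_similarity(slice_a, slice_b, data_range=data_range)
```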

“Although Figure 5 shows results of a synthetic test, I would like to see a more specific discussion of how the interpolated images are guiding the user to an improved interpretation.”

Rui Oliveira: Figure 5 is not a synthetic test. A real GPR profile obtained in the laboratory was used as input and densified with the INT-FFT algorithm. The result shows the same profile, without modification, except that the number of traces has doubled minus one. We want to emphasize that this algorithm does not introduce numerical artifacts into the interpolated data.

“Lines 538-542, on the other hand, illustrate that resolution for interpretation is not lost if the field data are de-densified (up to a point, of course). This point about the value of the methodology might be made more clearly earlier in the manuscript. However, for a practitioner planning field time, some comparison of how much time it takes to collect a denser data set might be evaluated. In other words, what is the argument against just collecting denser field data when the field site is not so large?”

Rui Oliveira: We have reworded the manuscript so that the information you mention appears earlier. We also present some information about the acquisition times of the datasets, so that the reader has an idea of how long a GPR survey takes. We do not mean to suggest that dense GPR surveys should not be carried out in small areas; on the contrary, the denser the data, the better. We intend to draw attention to the fact that, in a situation where the area to be prospected is very extensive, it is possible to increase the distance between the profiles in order to optimize the survey execution time, for distances that do not cause under-sampling, since this densification approach allows profiles to be estimated between those that were acquired without compromising the result.
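[Editor's note] A back-of-envelope calculation, with purely hypothetical numbers rather than the survey times reported in the manuscript, illustrates why profile spacing dominates acquisition effort:

```python
# Hypothetical site and walking speed, for illustration only.
area_width, profile_length = 20.0, 30.0   # metres
walking_speed = 0.5                       # m/s, including turnarounds

for spacing in (0.25, 0.50, 1.00):        # profile spacing in metres
    n_profiles = int(area_width / spacing) + 1
    hours = n_profiles * profile_length / walking_speed / 3600.0
    print(f"{spacing:.2f} m spacing: {n_profiles:3d} profiles, ~{hours:.1f} h")
```

Halving the spacing roughly doubles the field time, which is the trade-off the densification approach is meant to relax.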

Specific comments

1. “I am mindful of the challenges inherent to writing a scientific article in a non-native language. However, extensive editing of the English grammar is necessary, as a typical reader will not spend the time with the article that a reviewer does. As an example, the sentence on lines 39-40 is not grammatically complete.”

Rui Oliveira: The manuscript has been sent to an accredited proofreading service so that the problems of the initial version do not recur. Our apologies.

2. “Especially since the journal is electronic only, some of the figures should be improved by making them larger – particularly Figures 8, 9, 10, 11. The reader is asked to make a lot of visual comparisons, and these figures are cramped on the printed page, making it difficult to study them.”

Rui Oliveira: Our apologies for the poor image quality. The figures have been corrected in order to increase their quality and reduce the reader's difficulty in analyzing them.

3. “I appreciate the discussion on resolution in both the raw and interpolated data in terms of what still cannot be resolved (for example, lines 520-522). This type of discussion is useful to the reader who is less familiar with resolution issues and why even “improved” data sets still lack resolution. In fact, this type of discussion might be accentuated in the manuscript.”

Rui Oliveira: We have changed the manuscript to accentuate the discussion of the aspect you refer to.

4. “Line 321: This footnote material should be incorporated into the discussion of Methods or Data Processing procedures.”

Rui Oliveira: We have amended the manuscript to eliminate the footnote. Its contents are now given where we describe the application of the GPR surveys.

5. “Figure 12 refers to a panel (c), but no such figure panel is present.”

Rui Oliveira: Our apologies. By mistake, a panel was mentioned that is not present in the figure. The text has been corrected.

6. “The Abstract states that technical conditions on the data for the processing method to be valid are satisfied (lines 21-24), but it is not clear in the manuscript how this is known or how it was analyzed. Given that the method is being adapted from a seismic method, it would be useful to indicate more explicitly how the GPR data meet these requirements mentioned in the Abstract.”

Rui Oliveira: We agree that the information you mention is very important. The analysis was carried out but, by mistake, it was not properly reported in the manuscript. The text has been corrected to include this information.

Reviewer 3 Report

The work is very interesting; more case studies should be tested in order to propose 2D-FFT interpolation as a standard operation to improve the quality of the processed results. Testing the same method on different application cases will give better indications of how the data react under different raw data acquisition conditions, and the increase in lateral resolution obtained with this method may also depend on the case study and on the size of the recognized buried structures. The approach used is useful for both types of acquisition taken into consideration, denser at 0.25 m and less dense at 0.50 m. Generally, when doing large georadar campaigns, one may work even at 1 m spacing. Could this method be applied to simulate a tighter mesh sampling? Just my curiosity.

Author Response

REVISION 3

On behalf of all authors, we thank you for reading the manuscript and for the opinions expressed in this review.

1. “The work is very interesting; more case studies should be tested in order to propose 2D-FFT interpolation as a standard operation to improve the quality of the processed results. Testing the same method on different application cases will give better indications of how the data react under different raw data acquisition conditions, and the increase in lateral resolution obtained with this method may also depend on the case study and on the size of the recognized buried structures. The approach used is useful for both types of acquisition taken into consideration, denser at 0.25 m and less dense at 0.50 m. Generally, when doing large georadar campaigns, one may work even at 1 m spacing. Could this method be applied to simulate a tighter mesh sampling? Just my curiosity.”

Rui Oliveira: The densification method we propose can be used on denser and on less dense meshes. The only limitation concerns meshes in which the distance between profiles is so large that the data can be considered to have sampling problems, that is, sub-sampling. If this happens, it is not possible to estimate information from the adjacent profiles, since we are facing an aliasing phenomenon. However, the choice of the distance between profiles is always a function of the size of the structures to be prospected, when that information is known a priori. For large structures, we can (carefully) increase the distance between profiles without compromising the results.
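[Editor's note] As a rough illustration of this structure-size reasoning, the check below keeps the profile spacing below half the width of the smallest target, so that at least two adjacent profiles cross it. Both the target width and the factor of two are our own hypothetical assumptions, not thresholds given in the manuscript:

```python
target_width = 1.0                      # m, e.g. a hypothetical buried wall

for spacing in (0.25, 0.50, 1.00):      # profile spacings discussed above
    ok = spacing <= target_width / 2.0  # at least two profiles cross the target
    print(f"{spacing:.2f} m spacing: {'acceptable' if ok else 'risk of sub-sampling'}")
```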

Round 2

Reviewer 1 Report

Revision 1 (MDPI Version 2)

I thank the authors for their detailed comments. This is very helpful.

Point 1

As you know, a report only needs to state the outcomes of an investigation, whereas a scientific publication also discusses the results, compares and contrasts them with other related investigations, and puts all of this into perspective. Therefore, scientific publications require an extensive ‘Discussion’ section.

It is therefore not sufficient to state the outcome of your investigations, but you need to discuss these results in relation to other approaches. And while your approach to interpolation may be different from others (i.e. using FFT trace-interpolation), your end-result is an interpolated dataset, mostly presented as depth slices. Hence, you must compare your results with other interpolation schemes. Your results may be better or worse, or just different. We, as readers, simply don’t know.

I hence maintain that a comparison with other interpolation schemes for depth slices, applied to the same data, is necessary.

Point 2

I understand that you cannot control the quality of the pdf files that the publisher produces. You therefore have to mitigate their reduced quality, for example by adding enlargements of certain parts of your images so that the important aspects are visible even after the publisher’s pdf creation. For example, in the new Figure 5 I still cannot see a difference between a) and b); they look identical in the pdf. I would hence suggest creating an enlarged extract of a relevant part of the data where the difference is visible and placing it as an inset into the figure, for a) and for b). Figures 8 and 9 are clearly improved.

Point 3

Any horizontal lag in the data has to be corrected before your interpolation scheme is applied; otherwise you are performing trace interpolations between unrelated traces. You must improve the pre-processing of your data, otherwise the required conditions for your processing algorithm are not met and you cannot safely apply it. With such deficiencies in the data collection, I would actually assume that conventional interpolation schemes create better results than the FFT algorithm used. Applying a zero-mean-transect post-processing step is a very poor solution, in some ways similar to applying a low-pass filter to hide such problems. Such post-processing solutions usually remove important parts of the data that would be required for a detailed archaeological analysis.

Point 4

See Point 1. You cannot claim that “the FFT algorithm presents itself as a type of interpolation that is more advantageous for our purpose” if you do not compare it with other results. The fact that “in theory” it should be better is insufficient. You have to demonstrate this.

Points 5 & 6

You don’t need to explain technical details to the reviewers (in your reply); they must be explained to the reader. Often references to other works are difficult to find, and hence some very short explanations are all that is required.

Point 7

Many thanks for clarifying some technical terms. It is of course acknowledged that there are different expressions used in different languages that may sound similar but are different. It is good to have the review process to smooth out such issues.

On the terminology of “iterative algorithm”, the information you provided indicates to me that your processing is not iterative. Your software may “iterate” over the index of the traces in the data block, but an “iterative algorithm” is one where the same data are processed over and over again to improve some fitting parameter until a “best fit” is eventually achieved (https://en.wikipedia.org/wiki/Iterative_method). The reason why I insist on the correct terminology here is that your algorithm does not require a “gradual improvement” but uses a set processing scheme (with fixed parameters) from the start. I would hence call your processing “sequential”.

Reviewer 2 Report

I believe the revisions have addressed my comments from the initial review.
